Concurrency: Practice and Experience 2000; 12:1515–1516

Authors’ Response

‘Parallelism and recursion in message passing libraries: An efficient methodology’, C. Rodríguez, F. de Sande, C. León and L. García, Concurrency: Practice and Experience 1999; 11(7):355–365

We have three remarks regarding Professor Brinch Hansen’s comments on our paper ‘Parallelism and recursion in message passing libraries’:

1. SuperPascal is a portable publication language that allows the expression of message passing and recursion. However, we are not aware of any current efficient implementation of SuperPascal on present parallel platforms. To achieve this goal, several problems remain to be solved. Only a few days before this letter was written, the OpenMP committee approved a couple of extensions to make nested parallelism in shared memory machines more feasible. Even in this easier environment, the implementation of nested parallelism presents important challenges. The paper intended to address some of these issues in the distributed memory context.

2. The problem approached in the paper was how to deal with nested parallelism when the number of threads is smaller than the number of processors. (The opposite situation has been studied exhaustively.) The solution presented is also an example of Professor Brinch Hansen’s ‘Search for Simplicity’ philosophy. In our proposal the computation is structured in groups. Each group represents a virtual thread whose computation is replicated by all the processors in the group. Even the I/O is instrumented so that the input is broadcast from the group master to the other processors in the group. When a parallel statement asking for k new processes is reached (PAR or forall or whatever you want to call it), the set is split into k groups. To balance the computation, the size of these groups is proportional to the complexity of the task each is committed to solving (the WEIGHTEDPAR macros). At the end of the parallel statement the groups have to exchange the results of their computations. This is done in a hypercubic-like pattern that extends the classic k-ary hypercube ‘divide and conquer’ algorithm to a kind of ‘Dynamic Polytope’: each new nested forall creates a new dimension in the polytope. Observe that the arity of each dimension varies and the exchange pattern is not necessarily one-to-one. This idea is the main contribution of the paper. A minimal sketch of this splitting-and-exchange scheme is given below.
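To make the group mechanism of remark 2 concrete, the following fragment is a minimal C/MPI sketch, not the WEIGHTEDPAR macros of the paper: a hypothetical group_of function assigns processors to k groups whose sizes are proportional to assumed integer task weights, MPI_Comm_split builds the group communicators, and a plain MPI_Allreduce stands in for the hypercubic (‘dynamic polytope’) result exchange. The weights, the task body and the helper names are illustrative assumptions only.

/* Minimal sketch (not the paper's WEIGHTEDPAR macros): split the
 * processor set into k weighted groups, run one task redundantly
 * inside each group, then exchange the k results among all groups.
 * Assumes at least k processors. */
#include <mpi.h>
#include <stdio.h>

/* Return the group index of `rank` when `nprocs` processors are
 * partitioned into k blocks of size proportional to `weight[]`. */
static int group_of(int rank, int nprocs, int k, const int *weight)
{
    int total = 0, cut = 0, g;
    for (g = 0; g < k; g++) total += weight[g];
    for (g = 0; g < k - 1; g++) {
        int share = (weight[g] * nprocs) / total;
        if (share == 0) share = 1;        /* every task gets one processor */
        if (rank < cut + share) return g;
        cut += share;
    }
    return k - 1;                         /* remaining ranks form the last group */
}

int main(int argc, char **argv)
{
    int k = 3;
    int weight[3] = {1, 2, 3};            /* assumed task complexities */
    double contrib[3] = {0.0, 0.0, 0.0}, all_results[3];
    int rank, nprocs, grank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* The parallel statement: the current set is split into k groups
     * whose sizes reflect the weights; the work of each "virtual
     * thread" is replicated by all processors of its group. */
    int g = group_of(rank, nprocs, k, weight);
    MPI_Comm group;
    MPI_Comm_split(MPI_COMM_WORLD, g, rank, &group);
    MPI_Comm_rank(group, &grank);

    double my_result = (g + 1) * 10.0;    /* stand-in for the task body */

    /* Inter-group exchange of results. A single allreduce (only the
     * group master fills its slot) stands in here for the hypercubic /
     * dynamic-polytope exchange pattern described in the paper. */
    if (grank == 0)
        contrib[g] = my_result;
    MPI_Allreduce(contrib, all_results, k, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("results: %g %g %g\n", all_results[0], all_results[1], all_results[2]);

    MPI_Comm_free(&group);
    MPI_Finalize();
    return 0;
}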

3. We certainly acknowledge that Professor Brinch Hansen’s work pioneered the research on nested parallelism. His work has always been a constant source of inspiration in our studies. All his books are on our shelves and often on our tables. The references quoted by Professor Brinch Hansen in his letter are obligatory reading for any researcher interested in this area. We encourage you to read them: they are masterpieces of the computer science literature.

Casiano Rodríguez León∗
Universidad de La Laguna

∗ Correspondence to: Casiano Rodríguez León, Dpto. de Estadística, I.O. y Computación, Edificio Física/Matemática, Universidad de La Laguna, Calle A. Fco. Sánchez s/n, 38271 La Laguna, Tenerife, Spain. E-mail: [email protected].

Received 2 December 2000

Copyright © 2000 John Wiley & Sons, Ltd.

