4th PPOPP 1993: San Diego, California
- Marina C. Chen, Robert Halstead:
Proceedings of the Fourth ACM SIGPLAN Symposium on Principles & Practice of Parallel Programming (PPOPP), San Diego, California, USA, May 19-22, 1993. ACM 1993, ISBN 0-89791-589-5
Session 1
- David E. Culler, Richard M. Karp, David A. Patterson, Abhijit Sahay, Klaus E. Schauser, Eunice E. Santos, Ramesh Subramonian, Thorsten von Eicken:
LogP: Towards a Realistic Model of Parallel Computation. 1-12
- Jaspal Subhlok, James M. Stichnoth, David R. O'Hallaron, Thomas R. Gross:
Programming Task and Data Parallelism on a Multicomputer. 13-22
- Gul Agha, Christian J. Callsen:
ActorSpaces: An Open Distributed Programming Paradigm. 23-32
Session 2
- Mary W. Hall, Timothy J. Harvey, Ken Kennedy, Nathaniel McIntosh, Kathryn S. McKinley, Jeffrey D. Oldham, Michael H. Paleczny, Gerald Roth:
Experiences Using the ParaScope Editor: an Interactive Parallel Programming Tool. 33-43
- Sekhar R. Sarukkai, Allen D. Malony:
Perturbation Analysis of High Level Instrumentation for SPMD Programs. 44-53
Session 3
- David A. Kranz, Kirk L. Johnson, Anant Agarwal, John Kubiatowicz, Beng-Hong Lim:
Integrating Message-Passing and Shared-Memory: Early Experience. 54-63
- Leonidas I. Kontothanassis, Robert W. Wisniewski:
Using Scheduler Information to Achieve Optimal Barrier Synchronization Performance. 64-72
- Lorenz Huelsbergen, James R. Larus:
A Concurrent Copying Garbage Collector for Languages that Distinguish (Im)mutable Data. 73-82
- Shun-Tak Leung, John Zahorjan:
Improving the Performance of Runtime Parallelization. 83-91
Session 4
- Barbara M. Chapman, Piyush Mehrotra, Hans P. Zima:
High Performance Fortran Without Templates: An Alternative Model for Distribution and Alignment. 92-101
- Guy E. Blelloch, Siddhartha Chatterjee, Jonathan C. Hardwick, Jay Sipelstein, Marco Zagha:
Implementation of a Portable Nested Data-Parallel Language. 102-111
- Pushpa Rao, Clifford Walinsky:
An Equational Language for Data-Parallelism. 112-118
Session 5
- Jan F. Prins, Daniel W. Palmer:
Transforming High-Level Data-Parallel Programs into Vector Operations. 119-128
- Stephen P. Masticola, Barbara G. Ryder:
Non-concurrency Analysis. 129-138
Session 6
- Vasanth Bala, Jeanne Ferrante, Larry Carter:
Explicit Data Placement (XDP): A Methodology for Explicit Compile-Time Representation and Optimization. 139-148
- Siddhartha Chatterjee, John R. Gilbert, Fred J. E. Long, Robert Schreiber, Shang-Hua Teng:
Generating Local Address and Communication Sets for Data-Parallel Programs. 149-158
- Dirk Grunwald, Harini Srinivasan:
Data Flow Equations for Explicitly Parallel Programs. 159-168
Session 7
- Soumen Chakrabarti, Katherine A. Yelick:
Implementing an Irregular Application on a Distributed Memory Multiprocessor. 169-178
- Kurt Siegl:
Parallelizing Algorithms for Symbolic Computation using ||MAPLE||. 179-186
- Donald Yeung, Anant Agarwal:
Experience with Fine-Grain Synchronization in MIMD Machines for Preconditioned Conjugate Gradient. 187-197
Session 8
- J. Gregory Morrisett, Andrew P. Tolmach:
Procs and Locks: A Portable Multiprocessing Platform for Standard ML of New Jersey. 198-207
- David B. Wagner, Brad Calder:
Leapfrogging: A Portable Technique for Implementing Efficient Futures. 208-217
- Kenjiro Taura, Satoshi Matsuoka, Akinori Yonezawa:
An Efficient Implementation Scheme of Concurrent Object-Oriented Languages on Stock Multicomputers. 218-228
Session 9
- Harjinder S. Sandhu, Benjamin Gamsa, Songnian Zhou:
The Shared Regions Approach to Software Cache Coherence on Multiprocessors. 229-238
- Wilson C. Hsieh, Paul Wang, William E. Weihl:
Computation Migration: Enhancing Locality for Distributed-Memory Parallel Systems. 239-248
- Rohit Chandra, Anoop Gupta, John L. Hennessy:
Data Locality and Load Balancing in COOL. 249-259