The Chapel Parallel Programming Language

 

Archived Publications and Papers (in reverse chronological order)

Optimizing PGAS Overhead in a Multi-Locale Chapel Implementation of CoMD [slides]. Riyaz Haque and David Richards. PGAS Applications Workshop (PAW) at SC16, November 14, 2016.
This is a study, conducted at LLNL, of a multi-locale Chapel implementation of the CoMD proxy application.
Chapel chapter, Bradford L. Chamberlain, Programming Models for Parallel Computing, edited by Pavan Balaji, published by MIT Press, November 2015.
This is currently the best introduction to Chapel's history, motivating themes, and features. It also provides a brief summary of current and future activities at the time of writing. An early pre-print of this chapter was made available under the name A Brief Overview of Chapel.
LLVM-based Communication Optimizations for PGAS Programs. Akihiro Hayashi, Jisheng Zhao, Michael Ferguson, Vivek Sarkar. 2nd Workshop on the LLVM Compiler Infrastructure in HPC (LLVM-HPC2), November 2015.
This paper describes how LLVM passes can optimize communication in PGAS languages like Chapel. In particular, by representing potentially remote addresses using a distinct address space, existing LLVM optimization passes can be used to reduce communication.
Caching Puts and Gets in a PGAS Language Runtime [slides]. Michael P. Ferguson, Daniel Buettner. 9th International Conference on Partitioned Global Address Space Programming Models (PGAS 2015), September 2015.
This paper describes an optimization implemented for Chapel in which the runtime library aggregates puts and gets in accordance with Chapel's memory consistency model in order to reduce the potential overhead of doing fine-grained communications.
Parameterized Diamond Tiling for Stencil Computations with Chapel Parallel Iterators [slides]. Ian J. Bertolacci, Catherine Olschanowsky, Ben Harshbarger, Bradford L. Chamberlain, David G. Wonnacott, Michelle Mills Strout. ICS 2015, June 2015.
This paper explores the expression of parameterized diamond-shaped time-space tilings in Chapel, demonstrating competitive performance with C+OpenMP along with significant software engineering benefits due to Chapel's support for parallel iterators.
Towards Resilient Chapel: Design and Implementation of a Transparent Resilience Mechanism for Chapel, Konstantina Panagiotopoulou and Hans-Wolfgang Loidl. EASC '15, April 21-23, 2015.
This paper describes the design and prototype implementation of resilience support for Chapel in a transparent manner.
A Study of Successive Over-relaxation (SOR) Method Parallelization Over Modern HPC Languages [code], Sparsh Mittal, International Journal of High Performance Computing and Networking (IJHPCN), vol. 7, no. 4, 2014.
This paper compares Chapel, D, and Go in the context of the Successive Over-relaxation (SOR) method.
Affine Loop Optimization Based on Modulo Unrolling in Chapel [slides], Aroon Sharma, Darren Smith, Joshua Koehler, Rajeev Barua, and Michael Ferguson, PGAS 2014, October 7-10, 2014.
This paper describes an optimization that coarsens communications via modifications to Chapel's leader/follower iterators.
Benchmarking Usability and Performance of Multicore Languages (awarded "Best Paper"), Sebastian Nanz, Scott West, Kaue Soares da Silveira, and Bertrand Meyer. ESEM 2013, October 2013.
This paper compares Chapel, Cilk, Go, and TBB across a suite of six benchmarks (with both beginner and expert versions of each), comparing code size, coding time, execution time, and speedup.
Examining the Expert Gap in Parallel Programming, Sebastian Nanz, Scott West, and Kaue Soares da Silveira. Euro-Par 2013, August 2013.
This paper studies the impact of expert opinions on benchmark codes written in Chapel, Cilk, Go, and TBB.
The State of the Chapel Union [slides]. Bradford L. Chamberlain, Sung-Eun Choi, Martha Dumler, Thomas Hildebrandt, David Iten, Vassily Litvinov, Greg Titus. CUG 2013, May 2013.
This paper provides a snapshot of the Chapel project at the juncture between the end of the HPCS project and the start of the next phase in Chapel's development. It covers past successes, current status, and future directions.
A Brief Overview of Chapel (revision 1.0). Bradford L. Chamberlain. (pre-print of the chapter that later appeared in Programming Models for Parallel Computing, listed above), January 2013.
This pre-print chapter serves as a good overview of Chapel's history, motivating themes, and features, along with a brief summary of planned activities at the time of writing. It has since been superseded by the published book chapter above.
Run, Stencil, Run! HPC Productivity Studies in the Classroom [slides], Helmar Burkhart, Madan Sathe, Matthias Christen, Olaf Schenk, and Max Rietmann. PGAS 2012, October 2012.
This paper describes classroom productivity studies conducted at the University of Basel, comparing Chapel with Java, OpenMP, MPI, UPC, and PATUS.
Global Data Re-allocation via Communication Aggregation in Chapel [slides], Alberto Sanz, Rafael Asenjo, Juan Lopez, Rafael Larrosa, Angeles Navarro, Vassily Litvinov, Sung-Eun Choi, and Bradford L. Chamberlain. UMA-DAC-12/02 (this is an extended version of the paper that appeared at the 24th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD'2012), New York City, NY), October 2012.
This paper describes a Chapel optimization that aggregates communication for array-to-array assignments (or slices thereof) to reduce communication overheads.
An Empirical Performance Study of Chapel Programming Language [slides], Nan Dun, Kenjiro Taura. HIPS 2012, May 2012.
This paper studies the performance of various Chapel features, with the goals of understanding the performance obtained at the time and identifying future optimization opportunities for the development team.
Performance Portability with the Chapel Language. Albert Sidelnik, Saeed Maleki, Bradford L. Chamberlain, María J. Garzarán, David Padua. IPDPS 2012, May 2012.
This paper describes the use of Chapel to target GPUs and multicore processors using a unified set of language concepts.
User-Defined Parallel Zippered Iterators in Chapel [slides]. Bradford L. Chamberlain, Sung-Eun Choi, Steven J. Deitz, Angeles Navarro. PGAS 2011: Fifth Conference on Partitioned Global Address Space Programming Models, October 2011.
This paper describes how users can create parallel iterators that support zippered iteration in Chapel, demonstrating them via several examples that partition iteration spaces statically and dynamically.
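As a rough illustration of the loops such iterators support (this is not code from the paper), a zippered forall in Chapel draws corresponding elements from several iterands at once; the leader/follower protocol the paper describes is what lets user-defined iterators drive loops like this in parallel:

    // Zippered parallel iteration over an array, its domain, and a range.
    config const n = 8;
    const D = {1..n};
    var A: [D] real;

    forall (a, i, j) in zip(A, D, 1..n) do
      a = i + j / 10.0;

    writeln(A);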
Interfacing Chapel with Traditional HPC Programming Languages [slides], Adrian Prantl, Thomas Epperly, Shams Imam, Vivek Sarkar. PGAS 2011: Fifth Conference on Partitioned Global Address Space Programming Models, October 2011.
This paper describes work being done by LLNL and Rice to extend Babel's interoperability capabilities to support calls between Chapel and other HPC-oriented languages.
Composite Parallelism: Creating Interoperability Between PGAS Languages, HPCS Languages, and Message Passing Libraries, Thomas Epperly, Adrian Prantl, Bradford Chamberlain, LLNL Progress Report, September 2011.
This is a progress report describing the work from the Prantl et al. PGAS 2011 paper in more detail.
A First Implementation of Parallel IO in Chapel for Block Data Distribution, Rafael Larrosa, Rafael Asenjo, Angeles Navarro, Bradford L. Chamberlain. ParCo 2011, September 2011.
This paper reports on initial work to parallelize file I/O for Block-distributed arrays in Chapel.
Authoring User-Defined Domain Maps in Chapel [slides]. Bradford L. Chamberlain, Sung-Eun Choi, Steven J. Deitz, David Iten, Vassily Litvinov. CUG 2011, June 2011.
This paper builds on our HotPAR 2010 paper by describing the programmer's role in implementing user-defined distributions and layouts in Chapel.
The Chapel Tasking Layer Over Qthreads [slides], Kyle B. Wheeler, Richard C. Murphy, Dylan Stark, Bradford L. Chamberlain. CUG 2011, May 2011.
This paper reports on our initial work mapping Chapel's parallel tasks down to the Qthreads user-level tasking library being developed at Sandia National Laboratories.
A Scalable Implementation of Language-Based Software Transactional Memory for Distributed Memory Systems. Srinivas Sridharan, Jeffrey Vetter, Bradford L. Chamberlain, Peter Kogge, Steve Deitz. Technical Report Series No. FTGTR-2011-02, Oak Ridge, TN: Future Technologies Group, Oak Ridge National Lab, May 2011.
This paper reports on an implementation of Chapel's atomic statements using distributed Software Transactional Memory (STM) techniques.
Translating Chapel to Use FREERIDE: A Case Study in Using an HPC Language for Data-Intensive Computing. Bin Ren, Gagan Agrawal, Brad Chamberlain, Steve Deitz. 16th International Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS 2011), May 2011.
This paper reports on a study investigating compiling Chapel features like reductions down to the FREERIDE library developed at OSU in support of data-intensive computing.
Using the High Productivity Language Chapel to Target GPGPU Architectures. Albert Sidelnik, Maria J. Garzaran, David Padua. UIUC Dept. of Computer Science Technical Report, April 2011.
This report presents initial work to target Chapel computation to GPUs using specialized domain maps.
User-Defined Distributions and Layouts in Chapel: Philosophy and Framework [slides]. Bradford L. Chamberlain, Steven J. Deitz, David Iten, Sung-Eun Choi. 2nd USENIX Workshop on Hot Topics in Parallelism (HotPar'10), June 2010.
This paper describes our approach and software framework for implementing user-defined distributions and memory layouts using Chapel's domain map concept.
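As a minimal sketch of how domain maps surface to users (not code from the paper, and written in the dmapped syntax of this era; newer Chapel releases spell the standard distributions differently), a domain can be declared with a distribution and arrays over it inherit that mapping; a user-defined domain map would slot into the same dmapped clause:

    use BlockDist;

    config const n = 1000;
    const Space = {1..n, 1..n};
    // Map the index space across the locales using the standard Block
    // distribution; swapping in a user-defined domain map only changes
    // this clause.
    const D = Space dmapped Block(boundingBox=Space);
    var A: [D] real;

    forall (i, j) in D do
      A[i, j] = i + j;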
Five Powerful Chapel Idioms [slides]. Steven J. Deitz, Bradford L. Chamberlain, Sung-Eun Choi, David Iten. CUG 2010, May 2010.
This paper highlights some powerful Chapel features through five short example codes.
Mechanisms that Separate Algorithms from Implementations for Parallel Patterns. Christopher D. Krieger, Andrew Stone, and Michelle Mills Strout. Workshop on Parallel Programming Patterns (ParaPLOP), March 2010.
This paper examines common parallel programming patterns in Chapel and other programming models to assess how entangled the different concerns become.
HPC Challenge Benchmarks in Chapel (2009 entry) [slides]
This paper reports on our 2009 entry in the Class 2 HPC Challenge competition, which was awarded "most elegant implementation." Our entries from previous years' competitions are also available for download.
HPCC STREAM and RA in Chapel: Performance and Potential [slides], Steven J. Deitz, Bradford L. Chamberlain, Samuel Figueroa, David Iten, CUG 2009, May 2009.
This is an update to our May 2007 CUG paper, presenting initial results on the HPC Challenge benchmarks using distributed domains and arrays, along with pointers to next steps.
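For reference, the global STREAM Triad pattern discussed here reduces to a single whole-array statement over Block-distributed arrays (a sketch in the distribution syntax of this era, not the paper's exact code):

    use BlockDist;

    config const m = 1_000_000,
                 alpha = 3.0;

    const ProblemSpace = {1..m} dmapped Block(boundingBox={1..m});
    var A, B, C: [ProblemSpace] real;

    B = 1.0;
    C = 2.0;
    A = B + alpha * C;   // the Triad, executed in parallel across locales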
Scalable Software Transactional Memory for Global Address Space Architectures. Srinivas Sridharan, Jeffrey Vetter, Peter Kogge. Technical Report Series No. FTGTR-2009-04. Oak Ridge, TN: Future Technologies Group, Oak Ridge National Lab, April 2009.
This report describes GTM, a library designed to support scalable asynchronous distributed software transactional memory (STM).
Chapel: Productive Parallel Programming at Scale [slides | video], Bradford L. Chamberlain, Google Seattle Conference on Scalability, Seattle, WA, June 2008.
This is an abridged overview of Chapel aimed at a more mainstream technical audience, possibly with datacenter leanings, rather than the HPC community.
Software Transactional Memory for Large-Scale Clusters, Robert L. Bocchino Jr., Vikram S. Adve, and Bradford L. Chamberlain, The 13th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP 2008), Salt Lake City, UT, February 2008.
This paper describes an initial effort to develop software support for distributed-memory software transactional memory (STM) for use in Chapel.
Multiresolution Languages for Portable yet Efficient Parallel Programming, Bradford L. Chamberlain, whitepaper, October 2007.
This is a position paper written in Q&A format that serves as the first written description of Chapel's multiresolution language design philosophy.
Parallel Programmability and the Chapel Language. Bradford L. Chamberlain, David Callahan, Hans P. Zima. International Journal of High Performance Computing Applications, August 2007, 21(3): 291-312.
This is an early overview of Chapel's themes and main language concepts.
An Approach to Data Distributions in Chapel. Roxana E. Diaconescu and Hans P. Zima. International Journal of High Performance Computing Applications, August 2007, 21(3): 313-335.
This paper presents early exploratory work in developing a philosophy and foundation for Chapel's user-defined distributions.
Global HPCC Benchmarks in Chapel: STREAM Triad, Random Access, and FFT [slides]. Bradford L. Chamberlain, Steven J. Deitz, Mary Beth Hribar, Wayne A. Wong, CUG 2007, Seattle, WA, May 2007.
This paper provided the CUG community with an early look at three of the HPC Challenge benchmarks in Chapel.
Chapel: Cascade High-Productivity Language; An Overview of the Chapel Parallel Programming Model [slides]. Steven J. Deitz, Bradford L. Chamberlain, Mary Beth Hribar, CUG 2006, Lugano, Switzerland, May 2006.
This was a language overview to introduce the CUG community to Chapel.
Iterators in Chapel. Mackale Joyner, Bradford L. Chamberlain, Steven J. Deitz. Eleventh International Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS 2006), Rhodes Island, Greece, April 25, 2006.
This paper presents some early work and approaches for implementing Chapel's iterators.
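As a flavor of the feature this work targets (not code from the paper), a simple serial Chapel iterator yields values to an enclosing loop on demand:

    // Yield the first n Fibonacci numbers.
    iter fib(n: int) {
      var current = 0, next = 1;
      for 1..n {
        yield current;
        current += next;
        current <=> next;   // swap
      }
    }

    for f in fib(10) do
      writeln(f);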
Global-view Abstractions for User-Defined Reductions and Scans. Steven J. Deitz, David Callahan, Bradford L. Chamberlain, Lawrence Snyder. In Proceedings of the Eleventh ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP 2006), March 2006.
This paper outlines our general strategy for supporting user-defined reductions and scans in Chapel.
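For context (this is not code from the paper), Chapel's built-in reductions and scans are written as global-view expressions over whole arrays; the user-defined operations the paper proposes plug into these same reduce and scan expression forms:

    const D = {1..8};
    var A: [D] int = [i in D] i;       // A = 1, 2, ..., 8

    const total    = + reduce A;       // 36
    const maxVal   = max reduce A;     // 8
    const partials = + scan A;         // 1 3 6 10 15 21 28 36

    writeln((total, maxVal));
    writeln(partials);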
Reusable and Extensible High Level Data Distributions, Roxana E. Diaconescu, Bradford Chamberlain, Mark L. James, Hans P. Zima. In Proceedings of the Workshop on Patterns in High Performance Computing (patHPC), May 2005.
This paper sought to express the early ideas we were pursuing for user-defined data distributions within a patterns framework.
The Cascade High Productivity Language. David Callahan, Bradford L. Chamberlain, Hans P. Zima. In 9th International Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS 2004), pages 52-60. IEEE Computer Society, April 2004.
This is the original Chapel paper which lays out some of our motivation and foundations for exploring the language. The language has evolved significantly since this paper was published, but it remains a good starting point for learning about Chapel.