High Performance Computing Group

Compiler optimizations in the scalar world have, many researchers believe, run their course. We are squeezing about as much performance as we can out of a single processor, at least through compiler techniques. As a result, the most active area of compiler research addresses parallel computer architectures, at both the fine-grained (instruction) level and the more coarse-grained (distributed) level. One observation is that many existing compiler and optimization techniques can be adapted to benefit parallel architectures, sometimes with unexpected benefits. The focus of my research follows this general trend.

Project: Automatic Parallelization
Purpose: Develop automatic parallelization compilation techniques for discovering and exploiting thread-level and other coarse-grained parallelism
Researchers: Thomas Way
Researcher Alumni: Rushikesh Katikar, Puroshothan Ch
Applications: Compiler optimizations for high-performance computing
Abstract of current research paper: Automatic parallelization in a compiler is becoming more important as computer … Past compiler research on automatic parallelization dealt with parallelizing … Recently, a shift in paradigm began and researchers started to look … In this paper, we review known parallelization techniques …

Project: Source Code Characterization
Purpose: Develop analysis techniques for characterizing the machine requirements of source code.
Researchers: Tom Way
Researcher Alumni: Rushikesh Katikar, Puroshothan Ch
Applications: Compiler optimizations for high-performance computing, machine description generation for reconfigurable computing.
Description: Execution of any program depends on the architecture of the processor, including its memory organization, registers, functional units and other components. As reconfigurable computing becomes more popular, research is needed to determine how to make the best use of this greater processing flexibility. This project will develop source code analyses, implemented within a research compiler framework, that characterize the machine requirements of the input source code, including specific hardware needs, suitability for parallelization, and so on, producing a corresponding machine configuration or description. The resulting machine description can be used to set up a reconfigurable computer to improve the performance of the original source program by providing best-fit hardware on which to execute it.
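As a rough illustration of the kind of analysis this project describes, the following sketch walks a program's syntax tree, counts hardware-relevant features, and emits a toy "machine description." All names and thresholds here are illustrative assumptions, not the project's actual implementation, which operates inside a research compiler framework rather than on Python source.

```python
# Toy sketch of source code characterization: count operation types and
# loops in a program, then map the counts to a simple machine description.
import ast

def characterize(source: str) -> dict:
    """Walk the AST and tally features relevant to hardware configuration."""
    tree = ast.parse(source)
    desc = {"int_ops": 0, "float_ops": 0, "loops": 0, "calls": 0}
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp):
            # Crude type guess: a literal float operand suggests FP hardware.
            if any(isinstance(c, ast.Constant) and isinstance(c.value, float)
                   for c in (node.left, node.right)):
                desc["float_ops"] += 1
            else:
                desc["int_ops"] += 1
        elif isinstance(node, (ast.For, ast.While)):
            desc["loops"] += 1
        elif isinstance(node, ast.Call):
            desc["calls"] += 1
    # Map the feature counts to a toy machine configuration.
    desc["suggest_fp_unit"] = desc["float_ops"] > 0
    desc["suggest_parallel"] = desc["loops"] > 0
    return desc

program = """
total = 0.0
for i in range(100):
    total = total + 0.5 * i
"""
print(characterize(program))
```

A real characterization pass would, of course, use compiler-internal representations and dependence information rather than literal-based guesses, but the shape is the same: features in, machine description out.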
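The thread-level parallelism targeted by the Automatic Parallelization project above can be illustrated with a minimal sketch: a loop whose iterations carry no dependence on one another is rewritten as a parallel map. The dependence "analysis" here is done by eye, as a stand-in for the compiler's dependence tests; the function names are illustrative.

```python
# Sketch of the transformation an automatic parallelizer performs:
# independent loop iterations become concurrently executed tasks.
from concurrent.futures import ThreadPoolExecutor

def serial_loop(data):
    out = [0] * len(data)
    for i in range(len(data)):      # each iteration writes only out[i],
        out[i] = data[i] * data[i]  # so there is no loop-carried dependence
    return out

def parallel_loop(data, workers=4):
    # The "compiler-transformed" version: iterations run as parallel tasks,
    # and map preserves the original iteration order in the result.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda x: x * x, data))

data = list(range(8))
assert serial_loop(data) == parallel_loop(data)  # same result either way
```

The safety argument is the whole point: the rewrite is legal only because no iteration reads a value another iteration writes, which is exactly what a parallelizing compiler must prove before applying it.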
Project: Partial Inlining
Purpose: Explore partial inlining
Researchers: Tom Way
Applications: Compiler optimizations for high-performance computing
Description: This research involves Explicitly Parallel Instruction Computing (EPIC), a form of Instruction-Level Parallel (ILP) computing in which parallelism is indicated more directly by the compiler. It builds on earlier research on Region-based Compilation, an interprocedural analysis and optimization technique that was integrated into the Trimaran research compiler platform. That earlier work led to discoveries in demand-driven inlining, improvements to cloning, useful metrics for measuring how well regions are formed, and a novel approach to partial inlining. The current research examines the efficiency of partial inlining techniques.

A standard compiler technique for many years has been procedure inlining, in which a call to a procedure (or function or method) is replaced with the body of that procedure when the replacement is judged likely to benefit the resulting program's performance. The well-known problem with this technique is "code growth": the program grows very large because multiple inlined copies of the same procedure are introduced throughout the program, one at each of its "call sites."

Partial inlining is a technique in which only the most "important" portions of a procedure are inlined at a call site. Since it is common that only a small portion of a typical procedure is frequently executed, it makes sense to inline only that portion. By inlining this "hot" code, the opportunities for performance-enhancing improvements, often called "optimizations," increase, because there is more useful code to work with in a given portion of the program. Partial inlining thus controls code growth while maintaining the performance benefits of traditional inlining.
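The hot/cold split behind partial inlining can be sketched concretely. In this toy example (functions and the cache scenario are illustrative assumptions, not taken from the project), a procedure has a frequently executed fast path and a rarely executed expensive path; the transformed call site copies only the hot path inline and leaves a residual call for the cold path.

```python
# Sketch of partial inlining: only the "hot" portion of a callee is copied
# to the call site; the rarely executed "cold" path becomes a residual call.

def expensive_compute(key):
    return key * key  # stand-in for costly work

def lookup_full(cache, key):
    # Original procedure: hot path (cache hit) plus cold path (recompute).
    if key in cache:
        return cache[key]
    value = expensive_compute(key)
    cache[key] = value
    return value

def lookup_cold(cache, key):
    # Residual procedure: the cold path split out by the transformation.
    value = expensive_compute(key)
    cache[key] = value
    return value

cache = {2: 4}

# Original call site:  result = lookup_full(cache, k)
# After partial inlining, the hot path runs inline at the call site and
# only the rare cold path still pays the cost of a procedure call:
def caller(k):
    if k in cache:                    # inlined hot path
        return cache[k]
    return lookup_cold(cache, k)      # residual call for the cold path

assert caller(2) == 4   # hit: handled entirely by the inlined hot code
assert caller(3) == 9   # miss: falls back to the residual procedure
```

Because only the few hot lines are duplicated per call site, code growth stays small, while the optimizer still sees the frequently executed code in the caller's context.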
updated: 10/01/09 | actlab.csc.villanova.edu