Dan's Research Page


Nov 21, 2007 Updated Literature Review Survey

Nov 7, 2007   Literature Review Survey

Topic: Parallel Computing: Implicit Parallelism


Description: Parallel computing is the partitioning of code to be executed on multiple processors.  The research will cover two main areas.  The first is how code is split into pieces to be executed by different processors.  The second is how an implicitly parallelizing compiler decides where to split the code.  The problem then becomes how to optimize these compilers so that they divide up code most efficiently.
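To make the idea concrete, here is a minimal sketch in C of the kind of loop an implicit compiler can parallelize, assuming GCC's auto-parallelizer (its -ftree-parallelize-loops flag); the file name, arrays, and sizes are illustrative only.

    /* A loop with independent iterations -- the kind of code an
     * auto-parallelizing (implicit) compiler can split across processors.
     * With GCC, for example:  gcc -O2 -ftree-parallelize-loops=4 saxpy.c
     * asks the compiler to distribute eligible loops over 4 threads;
     * the source itself contains no threading code. */
    #include <stdio.h>

    #define N 1000000

    static double x[N], y[N];

    int main(void)
    {
        /* Each iteration touches only its own elements, so the compiler
         * can prove the iterations independent and partition the index
         * range among processors. */
        for (int i = 0; i < N; i++)
            y[i] = 2.0 * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);
        return 0;
    }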


Motivation:  There have long been projects that let people put their own computers to work in conjunction with others to solve a problem.  The idea of a network that splits up a problem in order to solve it more quickly has always been fascinating.  If perfected in the coming years, a network of computers could solve problems that were previously unimaginable.


References:

[1] B. Barney, "Introduction to Parallel Computing," Livermore Computing, June 2007. [Online]. Available: http://www.llnl.gov/computing/tutorials/parallel_comp. [Accessed: Sept. 11, 2007]. [This website gives a broad overview of parallel computing, including terminology, concepts, and programming models.  Parallel computing is using multiple processors simultaneously to solve a problem.  It saves time, can prove more cost efficient, and can be used to solve larger problems.  The tutorial covers the difference between automatic and manual parallelism in compilers.  A fully automatic, or implicit, compiler is able to find areas in the code which can be split and sent to other processors; such compilers can even analyze the cost of executing code in parallel and decide whether splitting is worthwhile.  The tutorial also discusses different parallel programming models and their implementations.  Overall, this website is an excellent guide to parallel computing.]
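For contrast with the automatic case, here is a minimal sketch of manual parallelism using OpenMP, one of the programming models tutorials like this one survey; the array, its contents, and the thread count are made up for illustration.

    /* Manual (explicit) parallelism: the programmer marks the loop
     * with an OpenMP pragma rather than relying on the compiler to
     * find it.  Compile with:  gcc -fopenmp sum.c */
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N];
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            a[i] = 1.0;

        /* The reduction clause tells the runtime how to combine each
         * thread's partial sum safely. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
        return 0;
    }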

[2] I. Emmons, "Multiprocessor Optimizations: Fine-Tuning Concurrent Access to Large Data Collections," MSDN Magazine, 2007. [Online]. Available: http://msdn.microsoft.com/msdnmag/issues/01/08/concur/default.aspx. [Accessed: Sept. 11, 2007]. [This article shows the need for concurrent access to information in a program, and how a machine with high concurrency will outperform a high-end multiple-CPU machine with low concurrency.]
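The article's examples are .NET-specific; as a language-neutral sketch of the same principle (many readers should not be serialized behind one coarse lock), here is a small C program using POSIX read-write locks.  This is an assumption-laden illustration, not the article's code.

    /* Fine-grained concurrent access: a read-write lock lets many
     * readers proceed in parallel and serializes only the writers,
     * the kind of high-concurrency design the article favors over
     * simply adding CPUs behind a coarse lock.
     * Compile with:  gcc -pthread rwdemo.c */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
    static int shared_value = 0;

    void *reader(void *arg)
    {
        (void)arg;
        pthread_rwlock_rdlock(&lock);   /* many readers may hold this at once */
        printf("read %d\n", shared_value);
        pthread_rwlock_unlock(&lock);
        return NULL;
    }

    void *writer(void *arg)
    {
        (void)arg;
        pthread_rwlock_wrlock(&lock);   /* writers get exclusive access */
        shared_value++;
        pthread_rwlock_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        pthread_create(&t[0], NULL, writer, NULL);
        for (int i = 1; i < 4; i++)
            pthread_create(&t[i], NULL, reader, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        return 0;
    }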

[3] J. Plazek, J. Kitowski, and K. Banas, "Efficiency Comparison of Explicit and Implicit Parallel Programming for a FEM Problem on HP Exemplar Systems," Cracow University of Technology, 1997. [Online]. Available: http://www.icsr.agh.edu.pl/publications/html/hiper97_kb/hiper97_kb.html. [Accessed: Sept. 11, 2007]. [This paper compares implicit and explicit parallelism for solving finite element method (FEM) problems.]

[4] N. Drakos, "Implicit Parallelism," Computer Based Learning Unit, University of Leeds, 1996. [Online]. Available: http://www.cs.nmsu.edu/~epontell/adventure/node6.html. [Accessed: Sept. 17, 2007]. [Here, the author discusses the idea of implicit parallelism.  As noted, the parallelism is kept apart from the programmer.  Project FX and Paradigm are introduced as implicit languages, i.e., languages whose compilers determine the parallelism.  Declarative languages, such as functional and logic languages, are also discussed; these very high-level languages have opened hope for stronger implicit parallelism.  Problems to consider are also raised, such as the compiler's inability to estimate the size of components of code and the system's imperfect efficiency.]

[5] S. Chang, "Parallelization of Codes on the SGI Origins," NASA Ames Research Center. [Online]. Available: http://people.nas.nasa.gov/~schang/origin_parallel.html#fraction. [Accessed: Sept. 26, 2007]. [This website is another strong reference for parallel computing.  Again, the main goal of parallel computing is to cut down the time required to execute code.  The site offers insight into, and basic theory behind, parallel computing, and it stresses the other components involved in parallel execution of code and the efficiency of that execution.  It also provides tools and theory that let a programmer measure the performance of parallel code and analyze it.]
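The "#fraction" anchor in the URL suggests the page covers the parallel-fraction analysis usually stated as Amdahl's law: if a fraction P of a program can run in parallel on N processors, the best possible speedup is 1 / ((1 - P) + P / N).  A small sketch of the arithmetic:

    /* Amdahl's law: speedup is limited by the serial fraction of the
     * code no matter how many processors are added. */
    #include <stdio.h>

    /* Expected speedup for parallel fraction p on n processors. */
    static double amdahl(double p, int n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void)
    {
        /* Even with 95% of the code parallelized, 1024 processors
         * give less than a 20x speedup. */
        for (int n = 4; n <= 1024; n *= 4)
            printf("p = 0.95, n = %4d -> speedup %.2f\n", n, amdahl(0.95, n));
        return 0;
    }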

[6] D. Leijen and J. Hall, "Optimize Managed Code for Multi-Core Machines," MSDN Magazine, 2007. [Online]. Available: http://msdn.microsoft.com/msdnmag/issues/07/10/Futures/default.aspx. [Accessed: Sept. 26, 2007]. [As the authors state, single-processor speed gains are coming to a halt, and the need for multi-processor computing is clear.  The Task Parallel Library (TPL) is one tool programmers have for writing new code that will execute better on multiple processors.  Although the methods here are directed more toward explicit parallelism, a strong need for parallelism is apparent as technology heads toward multiple-processor systems.  Examples, including a 3D renderer, are available, as are comparisons of plain threading versus TPL for parallelism.]
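TPL itself is a C#/.NET library, so its API is not reproduced here; as a rough, language-neutral analogue of the explicit loop parallelism it automates, here is a sketch that hand-partitions an index range across POSIX threads.  Everything in it is illustrative, not the article's code.

    /* Explicit parallelism by hand: the programmer partitions the
     * index range across threads, which is the bookkeeping a parallel
     * loop library automates.  Compile with:  gcc -pthread part.c */
    #include <pthread.h>
    #include <stdio.h>

    #define N        1000000
    #define THREADS  4

    static double y[N];

    typedef struct { int lo, hi; } range_t;

    void *work(void *arg)
    {
        range_t *r = arg;
        for (int i = r->lo; i < r->hi; i++)
            y[i] = 2.0 * i;             /* each thread owns its own slice */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[THREADS];
        range_t r[THREADS];
        int chunk = N / THREADS;

        for (int k = 0; k < THREADS; k++) {
            r[k].lo = k * chunk;
            r[k].hi = (k == THREADS - 1) ? N : (k + 1) * chunk;
            pthread_create(&t[k], NULL, work, &r[k]);
        }
        for (int k = 0; k < THREADS; k++)
            pthread_join(t[k], NULL);

        printf("y[N-1] = %f\n", y[N - 1]);
        return 0;
    }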

[7] "Optimization Topics," Maui High Performance Computing Center, 2004. [Online]. Available: http://www.mhpcc.edu/training/workshop2/optimization/MAIN.html#OptimizationTypes. [Accessed: Oct. 1, 2007]. [This website gives a basic understanding of optimization.  The author presents the process of optimizing code as a seven-step overview:

1. Debug your source code and verify program correctness (usually with optimization switches off).

2. Profile your code; identify opportunities for performance improvement.

3. Perform hand-tuning operations, such as algorithm optimization and re-coding of bottlenecks (see the sketch below).

4. Apply preprocessor optimizations and/or compile with optimization switches on.

5. Profile your code; examine blocks of code that consume the most execution time.

6. Repeatedly apply various optimizations to such blocks.

7. Ensure mathematical correctness of the program.

Also included are descriptions of individual compiler optimizations and a basic overview of how compilers apply them.]
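As an illustration of the hand-tuning step above, here is a small C example of one classic re-coding of a bottleneck: hoisting a loop-invariant computation out of a loop.  The functions and names are hypothetical; modern compilers often perform this transformation themselves at higher optimization levels, which is why the steps recommend profiling before and after.

    /* Hand-tuning a bottleneck: hoist a computation that does not
     * change across iterations so it runs once instead of N times.
     * Compile with:  gcc hoist.c -lm */
    #include <math.h>
    #include <stdio.h>

    #define N 1000000

    static double out[N];

    /* Before: sqrt(scale) is recomputed on every iteration. */
    void slow(double scale)
    {
        for (int i = 0; i < N; i++)
            out[i] = i * sqrt(scale);
    }

    /* After: the loop-invariant value is computed once, outside the loop. */
    void fast(double scale)
    {
        double s = sqrt(scale);
        for (int i = 0; i < N; i++)
            out[i] = i * s;
    }

    int main(void)
    {
        slow(2.0);
        fast(2.0);
        printf("out[10] = %f\n", out[10]);
        return 0;
    }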