Browsing by Author "Jayaraj, Jagan"
Now showing 1 - 2 of 2
Item: A Code Transformation Framework for Scientific Applications on Structured Grids (2011-09-29)
Lin, Pei-Hung; Jayaraj, Jagan; Woodward, Paul; Yew, Pen-Chung
The combination of expert-tuned code expression and aggressive compiler optimizations is known to deliver the best achievable performance for modern multicore processors. The development and maintenance of these optimized code expressions is never trivial: the tedious and error-prone process greatly decreases a code developer's willingness to adopt manually-tuned optimizations. In this paper, we describe a pre-compilation framework that takes a code expression with much higher programmability and transforms it into an optimized expression that would be much more difficult to produce manually. The user-directed, source-to-source transformations we implement rely heavily on the knowledge of the domain expert. The transformed output, in its optimized form, together with the optimizations provided by an available compiler, is intended to deliver exceptionally high performance. Three computational fluid dynamics (CFD) applications are chosen to exemplify our strategy. The performance results show an average 7.3x speedup over straightforward compilation of the input code expression to our pre-compilation framework. Performance of 7.1 Gflop/s per core on the Intel Nehalem processor, 30% of peak, is achieved using our strategy. The framework has proven successful for the CFD domain, and we expect that this approach can be extended to cover more scientific computation domains.

Item: A strategy for high performance in computational fluid dynamics (2013-08)
Jayaraj, Jagan
Computational fluid dynamics is an important area in scientific computing. The weak scaling of codes is well understood, with about two decades of experience using MPI.
The recent proliferation of multi- and many-core processors has made modern nodes compute-rich, and per-node performance has become crucial for overall machine performance. However, even with thread programming, obtaining good performance at each core is extremely challenging. The challenges stem primarily from memory bandwidth limitations and the difficulty of using the short SIMD engines effectively. This thesis presents techniques, strategies, and a tool to improve in-core performance. Fundamental to the strategy is a hierarchical data layout made of small cubical structures of the problem state called briquettes. The difficulties of computing spatial derivatives (also called near-neighbor computations in the literature) in a hierarchical data layout are well known, and data blocking is extremely unusual in finite difference codes. This work details how to simplify programming for the new data layout, the inefficiencies of the programming strategy, and how to overcome them. The transformation that eliminates the overheads is called pipeline-for-reuse. It is followed by a storage optimization called maximal array contraction. Both pipeline-for-reuse and maximal array contraction are highly tedious and error-prone, so we built a source-to-source translator called CFD Builder to automate the transformations using directives. The directive-based approach we adopted eliminates the need for complex analysis, and this work provides linear-time algorithms to perform the transformations under the stated assumptions. The benefits of briquettes and CFD Builder are demonstrated individually with three different applications on two different architectures and two different compilers. We see up to a 6.92x performance improvement when both techniques are applied.
This strategy with briquettes and CFD Builder was evaluated against commonly known transformations for data locality and vectorization. Briquettes, together with the pipeline-for-reuse transformations that eliminate the overheads, outperform even the best combination of canonical transformations for data locality and vectorization, applied manually, by up to 2.15x.