Title: Integrating Scalar Analyses and Optimizations in a Parallelizing and Optimizing Compiler
Author: Zheng, Bixia
Type: Report
Language: en-US
Date Issued: 2000-02-11
Date Available: 2020-09-02
URI: https://hdl.handle.net/11299/215400

Abstract:

Scalar analyses and optimizations are indispensable, and challenging, in parallelizing and optimizing compilers. Scalar analyses statically collect information about how programs manipulate variables. Scalar optimizations, in turn, improve both the efficiency of the compiled code and the precision of the analyzed results. Performing scalar analyses and optimizations is difficult, especially when the programming language being considered supports general pointer usage: many existing pointer analysis algorithms are either imprecise or inefficient, and the interactions between analyses and optimizations raise the question of how to order them.

In this dissertation, we propose a new approach to performing scalar analyses and optimizations. We first analyze both pointer and non-pointer variables interprocedurally. The analyzed results are represented by an extended version of the traditional static single assignment (SSA) form. We then perform the optimizations in tandem using the SSA form. Each optimization incrementally updates the SSA form to reflect its refinement of the dataflow information. This incremental update can improve the precision of the analyzed results and expose more optimization opportunities. The final SSA form produced by the optimizations can be used by later optimization and parallelization passes.

Our approach improves the efficiency and effectiveness of scalar analyses and optimizations in two ways. First, we partition the variables by their pointer levels and analyze them one pointer level at a time. The pointer level of a variable is the maximum level of indirection to the variable. Analyzing one pointer level at a time avoids the complexity of modeling multiple levels of dependence through multiple levels of pointers, and thus allows simpler and more efficient algorithms without sacrificing the precision of the analyzed results. Second, in contrast to many existing approaches, which interleave optimizations at the procedure level, we interleave optimizations at the statement level, allowing faster response to the interactions among optimizations and reducing the number of program traversals needed to perform them.

We implement our approach in the Agassiz Compiler. Using the SPEC benchmark suite and several other pointer-intensive C programs, we demonstrate the effectiveness and efficiency of our approach.
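The abstract's notion of a pointer level can be illustrated with a small, hypothetical C fragment (the declarations below are invented for illustration, not taken from the dissertation): each additional `*` in a declaration adds one level of indirection, and under the approach described above the variables would be analyzed one such level at a time.

```c
#include <stdio.h>

int main(void) {
    int   x  = 42;   /* pointer level 0: x cannot be dereferenced            */
    int  *p  = &x;   /* pointer level 1: one dereference (*p) reaches an int */
    int **pp = &p;   /* pointer level 2: two dereferences (**pp) reach an int */

    printf("%d\n", **pp);   /* reads x through two levels of indirection */
    return 0;
}
```

Grouping `pp`, `p`, and `x` into separate pointer levels lets an analysis resolve one layer of indirection at a time instead of modeling all levels of dependence simultaneously, which is the simplification the abstract credits for the efficiency gain.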
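Similarly, the incremental SSA update described in the abstract can be sketched with a hypothetical example (the SSA names in the comments are illustrative, not the dissertation's actual notation): when one optimization removes an unreachable assignment, updating the SSA form in place immediately sharpens the points-to information seen by the next optimization.

```c
#include <stdio.h>

int main(void) {
    int a = 1, b = 2;
    int *p;

    p = &a;              /* p_1: points-to {a}                          */
    if (0)               /* condition folds to false                    */
        p = &b;          /* p_2: points-to {b}, now unreachable         */
                         /* join: p_3 = phi(p_1, p_2), points-to {a, b} */
    int x = *p;          /* before folding, *p_3 may read a or b        */

    printf("%d\n", x);
    return 0;
}

/* Once the untaken branch is removed and the SSA form is updated
   incrementally, the phi collapses and uses of p_3 become uses of
   p_1, so *p provably reads a -- exposing further opportunities
   such as constant propagation of x.                               */
```

This is the interaction the abstract refers to: performing the update per statement, rather than re-running a whole-procedure analysis, lets each optimization see the refinement immediately and reduces the number of traversals over the program.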