Integrating Scalar Analyses and Optimizations in a Parallelizing and Optimizing Compiler

Published Date

2000-02-11

Type

Report

Abstract

Scalar analyses and optimizations are indispensable yet challenging components of parallelizing and optimizing compilers. Scalar analyses statically collect information about how programs manipulate variables; scalar optimizations, in turn, improve both the efficiency of the compiled code and the precision of the analyzed results. These tasks become especially challenging when the language being compiled supports general pointer usage: many existing pointer analysis algorithms are either imprecise or inefficient, and the interactions between analyses and optimizations raise the question of how to order them.

In this dissertation, we propose a new approach to performing scalar analyses and optimizations. We first analyze both pointer and non-pointer variables interprocedurally and represent the results in an extended version of the traditional static single assignment (SSA) form. We then perform the optimizations in tandem over this SSA form. Each optimization incrementally updates the SSA form to reflect its refinement of the dataflow information; this incremental updating improves the precision of the analyzed results and exposes further optimization opportunities. The final SSA form can then be used by later optimization and parallelization phases.

Our approach improves the efficiency and effectiveness of scalar analyses and optimizations in two ways. First, we partition the variables by their pointer levels, where the pointer level of a variable is the maximum level of indirection that can be applied to it, and analyze one pointer level at a time. Analyzing one level at a time avoids the complexity of modeling multiple levels of dependence arising from multiple levels of pointers, and thus permits simpler and more efficient algorithms without sacrificing the precision of the results. Second, in contrast to many existing approaches that interleave optimizations at the procedure level, we interleave them at the statement level, which reacts faster to interactions among optimizations and reduces the number of program traversals required.

We implement our approach in the Agassiz Compiler. Using the SPEC benchmark suite and several other pointer-intensive C programs, we demonstrate the effectiveness and efficiency of our approach.
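
As a concrete illustration of pointer levels, consider the hypothetical C fragment below (the example is ours, not drawn from the dissertation). Analyzing the highest pointer level first resolves the points-to facts that each lower level depends on.

    /* Hypothetical example: pointer levels and level-by-level analysis. */
    int   x;      /* pointer level 0: x is never dereferenced            */
    int  *p;      /* pointer level 1: *p is one level of indirection     */
    int **q;      /* pointer level 2: **q is two levels of indirection   */

    void example(void)
    {
        q = &p;     /* level-2 analysis: q may point to p                */
        *q = &x;    /* level-1 analysis, using the level-2 result:
                       *q denotes p, so p may point to x                 */
        **q = 42;   /* level-0 (scalar) analysis: **q denotes x,
                       so this statement may write x                     */
    }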
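
The abstract does not describe the extended SSA notation itself; the sketch below borrows the chi-style may-definition annotation from factored SSA forms such as HSSA purely to suggest how pointer information can be folded into an SSA representation. The dissertation's extended SSA form may differ in its details.

    int a;          /* global, so a parameter p may legally point to it  */

    /* Hypothetical fragment; the SSA names in the comments show how an
       indirect store is modeled once pointer analysis has run.          */
    void f(int *p)
    {
        int b;
        a = 4;      /* a_1 = 4                                           */
        *p = 0;     /* if p may point to a, the store is a may-def of a:
                       a_2 = chi(a_1)   (chi-style annotation assumed)   */
        b = a + 1;  /* b_1 = a_2 + 1: b is not provably 5, because the
                       indirect store may have overwritten a             */
        (void)b;
    }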
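
Finally, a small self-contained sketch (ours, not the dissertation's) of why statement-level interleaving pays off: because each rewrite incrementally updates the SSA def-use chains, a constant folded at one statement is already visible at the next statement within the same traversal, instead of waiting for another whole-procedure pass.

    #include <stdio.h>

    /* Hypothetical example: the SSA comments trace how constant
       propagation cascades within a single statement-level traversal.   */
    static int g(void)
    {
        int x, y, z;
        x = 3;      /* x_1 = 3                                           */
        y = x + 1;  /* y_1 = x_1 + 1; folded at once to y_1 = 4, and the
                       use of y_1 below sees the update immediately      */
        z = y * 2;  /* z_1 = y_1 * 2; folds to z_1 = 8 in the same pass  */
        return z;   /* return z_1, i.e., return 8                        */
    }

    int main(void) { printf("%d\n", g()); return 0; }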

Suggested citation

Zheng, Bixia. (2000). Integrating Scalar Analyses and Optimizations in a Parallelizing and Optimizing Compiler. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/215400.
