Browsing by Author "Zheng, Bixia"
Now showing 1 - 4 of 4
Item A Hierarchical Approach to Context-Sensitive Interprocedural Alias Analysis (1999-04-21)
Zheng, Bixia; Yew, Pen-Chung

In this paper, we present a hierarchical flow-sensitive alias analysis algorithm which parameterizes the context-sensitive level. Our approach groups the pointers in a program by their maximum possible dereference levels. It then orders the analysis of each pointer group by its pointer level, starting from the highest level down to the lowest. During the analysis of each pointer group, a bottom-up traversal of the program call graph is followed by a top-down traversal, with the necessary interprocedural information propagated along the way. The interprocedural information is tagged with call chains, which are paths in the program call graph, to achieve context sensitivity. We also provide empirical results to quantify how different context-sensitive levels affect the precision and the efficiency of the algorithm. Our studies show that (1) the precision improvement achieved by increasing the context-sensitive level of the analysis varies significantly depending on the programs analyzed; and (2) increasing the maximum call-chain length for a higher degree of context sensitivity may trigger the exponential complexity problem [15, 10, 23]. Thus, it is very desirable for an algorithm to allow users to select the context-sensitive level that works best for a particular program.
By parameterizing the maximum call-chain length used in tagging the interprocedural information, our approach provides this type of flexibility.

Item Design & Implementation of a Framework for Integrating Front-End & Back-End Compilers (1999-04-20)
Yew, Pen-Chung; Li, Zhiyuan; Song, Yonghong; Tsai, Jenn-Yuan; Zheng, Bixia

We propose a new universal High-Level Information (HLI) format to effectively integrate front-end and back-end compilers by passing front-end information to the back-end compiler. Importing this information into an existing back-end leverages the state-of-the-art analysis and transformation capabilities of existing front-end compilers to allow the back-end greater optimization potential than it has when relying on only locally-extracted information. A version of the HLI has been implemented in the SUIF parallelizing compiler and the GCC back-end compiler. Experimental results with the SPEC benchmarks show that HLI can provide GCC with substantially more accurate data dependence information than it can obtain on its own. Our results show that the number of dependence edges in GCC can be reduced substantially for the benchmark programs studied, which provides greater flexibility to GCC's code scheduling pass, common subexpression elimination pass, loop-invariant code removal pass, and local register allocation pass. Even with GCC's optimizations limited to basic blocks, the use of HLI produces moderate speedups compared to using only GCC's dependence tests when the optimized programs are executed on a MIPS R10000 processor.

Item Integrating Scalar Analyses and Optimizations in a Parallelizing and Optimizing Compiler (2000-02-11)
Zheng, Bixia

Scalar analyses and optimizations are indispensable and challenging in parallelizing and optimizing compilers. Scalar analyses statically collect information on how programs manipulate variables.
Scalar optimizations, on the other hand, improve the efficiency of the compiled code and the precision of the analyzed results. Performing scalar analyses and optimizations is challenging, especially when the programming languages being considered support general pointer usage. This is because many existing pointer analysis algorithms are either imprecise or inefficient, and the interactions between analyses and optimizations raise the issue of ordering the analyses and the optimizations. In this dissertation, we propose a new approach to perform scalar analyses and optimizations. We first analyze both pointer and non-pointer variables interprocedurally. The analyzed results are represented by an extended version of the traditional static single assignment (SSA) form. We then perform the optimizations in tandem using the SSA form. Each optimization incrementally updates the SSA form to reflect its refinement of the dataflow information. This incremental update can improve the precision of the analyzed results and expose more optimization opportunities. The final SSA form after the optimizations can be used by later optimization and parallelization passes.

Our approach improves the efficiency and effectiveness of scalar analyses and optimizations in two ways. First, we partition the variables based on their pointer levels and analyze the variables one pointer level at a time. The pointer level of a variable is the maximum level of indirection to the variable. Analyzing one pointer level at a time avoids the complexity of modeling multiple levels of dependency due to multiple levels of pointers, and thus allows simpler and more efficient algorithms without sacrificing the precision of the analyzed results.
Second, in contrast to many existing approaches, which interleave optimizations at the procedure level, we interleave optimizations at the statement level to allow faster response to the interactions among optimizations and to reduce the number of program traversals needed to perform the optimizations. We implement our approach in the Agassiz Compiler. Using the SPEC benchmark suite and several other pointer-intensive C programs, we demonstrate the effectiveness and efficiency of our approach.

Item Superthreading: Integrating Compilation Technology and Processor Architecture for Cost-Effective Concurrent Multithreading (1997)
Tsai, Jenn-Yuan; Jiang, Zhenzhen; Li, Zhiyuan; Lilja, David; Wang, Xin; Yew, Pen-Chung; Zheng, Bixia; Glamm, Robert

As the number of transistors that can be integrated on a single chip continues to grow, it is important for computer architects to think beyond the traditional approaches of deeper pipelines and wider instruction issue units for improving performance. The single-threaded execution model limits these approaches to exploiting only the relatively small amount of instruction-level parallelism available in application programs. While integrating an entire multiprocessor onto a single chip is feasible, that architecture is limited to exploiting only relatively coarse-grained, heavy-weight parallelism. We propose the superthreaded architecture as an excellent alternative for utilizing the large number of transistors that will become available on a single high-density chip. As a hybrid of a wide-issue superscalar processor and a multiprocessor-on-a-chip, this new concurrent multithreading architecture can leverage the best of existing and future parallel hardware and software technologies.
By incorporating speculation for control dependences and run-time checking of data dependences, the superthreaded architecture can exploit the multiple granularities of parallelism available in general-purpose application programs to reduce the execution time of a single program.
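The call-chain tagging with a parameterized maximum chain length, described in the first and third abstracts above, can be sketched as follows. This is a minimal illustrative model, not the authors' implementation: the function names (`truncate`, `propagate`), the fact representation, and the call-site labels are all hypothetical.

```python
# Sketch of context-sensitivity bounded by a maximum call-chain length k.
# Interprocedural facts are tagged with the chain of call sites they came
# through; truncating the chain to length k trades precision for cost.

def truncate(chain, k):
    """Keep only the k most recent call sites; k = 0 yields a
    context-insensitive analysis (all chains collapse to one tag)."""
    return chain[-k:] if k > 0 else ()

def propagate(caller_facts, call_site, callee_facts, k):
    """Merge a callee's tagged facts into the caller's table, extending
    each tag with call_site and truncating it to length k. Facts whose
    truncated tags collide are merged (a loss of precision)."""
    merged = {chain: set(pts) for chain, pts in caller_facts.items()}
    for chain, points_to in callee_facts.items():
        new_chain = truncate(chain + (call_site,), k)
        merged.setdefault(new_chain, set()).update(points_to)
    return merged
```

With k = 1 the same callee fact reached through two different call sites stays distinguished; with k = 0 both collapse to a single context and their points-to sets merge, which is exactly the precision-versus-complexity trade-off the abstracts describe.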