Browsing by Subject "Automatic Parallelization"
Item
Exploiting parallelism in multicore processors through dynamic optimizations (2011-11)
Luo, Yangchun

Efficiently utilizing multi-core processors to realize their performance potential demands extracting thread-level parallelism from applications. Various novel and sophisticated execution models have been proposed to extract thread-level parallelism from sequential programs. One such execution model, Thread-Level Speculation (TLS), allows potentially dependent threads to execute speculatively in parallel. However, TLS execution is inherently unpredictable, and incorrect speculation can degrade the performance and/or energy efficiency of multi-core systems. To address these issues, this dissertation proposes dynamic optimizations that exploit the benefits of successful speculation while minimizing the impact of failed speculation.

First, we propose optimizations to dynamically determine where TLS should be applied in the original sequential program, whereas prior work has focused on using the compiler to statically select program regions. Our research shows that even a state-of-the-art compiler makes suboptimal decisions because of the unpredictability of TLS execution. In this dissertation, speculative threads are monitored using hardware-based counters and their performance impact is dynamically evaluated. Performance tuning policies are devised to adjust the behavior of speculative threads accordingly. Dynamic performance tuning naturally allows the system to adapt to program behaviors that are only known at runtime.

Second, we propose a heterogeneous multi-core architecture to support energy-efficient TLS. By carefully analyzing the behavior of standard benchmark workloads, we identify a set of heterogeneous components that differ in their power and performance trade-offs and are feasible to integrate. We have also devised an effective resource allocation scheme that dynamically monitors the program's behavior, analyzes its characteristics, and matches it with the most energy-efficient configuration of the system. Throttling mechanisms are introduced to mitigate the overhead associated with configuration changes. In the context of TLS, our findings show that on-chip heterogeneity and dynamic resource allocation are two key ingredients for achieving performance improvement in an energy-efficient way.
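
The first contribution described above monitors speculative threads with hardware-based counters and tunes their behavior at runtime. The sketch below illustrates one way such a tuning policy could be structured; the counter names, the sampling threshold, and the 1.2 profit margin are illustrative assumptions, not details taken from the dissertation.

```cpp
#include <cstdint>
#include <unordered_map>

// Per-region speculation statistics, assumed to come from hardware
// performance counters (field names are hypothetical).
struct SpecStats {
    uint64_t commit_cycles = 0;  // cycles in speculative threads that committed
    uint64_t squash_cycles = 0;  // cycles wasted in threads that were squashed
    uint64_t spawn_count   = 0;  // speculative threads spawned for this region
};

enum class Policy { Speculate, RunSequential };

// A simple tuning rule: keep speculating on a region only while the cycles
// recovered by committed threads outweigh the cycles thrown away by squashes.
// The 1.2 margin and the 16-sample warm-up are arbitrary placeholders.
Policy tune_region(const SpecStats& s) {
    if (s.spawn_count < 16)                       // too few samples to judge yet
        return Policy::Speculate;
    if (s.commit_cycles > 1.2 * s.squash_cycles)  // speculation is paying off
        return Policy::Speculate;
    return Policy::RunSequential;                 // speculation hurts: fall back
}

int main() {
    // region id -> observed statistics (a runtime would update these)
    std::unordered_map<int, SpecStats> stats = {
        {0, {100000, 20000, 64}},   // mostly successful speculation
        {1, {30000, 90000, 64}},    // mostly squashed work
    };
    for (auto& entry : stats)
        (void)tune_region(entry.second);  // runtime would act on the policy
    return 0;
}
```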
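The second contribution pairs a heterogeneous multi-core with a dynamic resource allocation scheme and throttles configuration changes to keep their overhead in check. The following sketch shows a minimal version of that idea, assuming efficiency is scored as performance per watt and that switching is rate-limited; the configuration values, the interval, and the gain threshold are placeholders rather than parameters from the dissertation.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// A candidate core configuration with an estimated performance and power
// cost for the current program phase. Values are illustrative only.
struct Config {
    double perf;   // estimated throughput for the current phase
    double power;  // estimated power draw
};

// Pick the configuration with the best energy efficiency (perf/power), but
// only switch if the gain exceeds a threshold and enough time has passed
// since the last switch -- a simple throttling rule.
std::size_t select_config(const std::array<Config, 3>& cfgs,
                          std::size_t current,
                          uint64_t cycles_since_switch) {
    constexpr uint64_t kMinInterval = 1'000'000;  // throttle window (cycles)
    constexpr double   kMinGain     = 1.10;       // require >10% efficiency gain

    if (cycles_since_switch < kMinInterval)
        return current;                           // throttled: stay put

    auto eff = [&](std::size_t i) { return cfgs[i].perf / cfgs[i].power; };
    std::size_t best = current;
    for (std::size_t i = 0; i < cfgs.size(); ++i)
        if (eff(i) > eff(best)) best = i;

    return (eff(best) > kMinGain * eff(current)) ? best : current;
}

int main() {
    std::array<Config, 3> cfgs = {{{1.0, 1.0}, {1.6, 2.0}, {0.7, 0.4}}};
    (void)select_config(cfgs, 0, 2'000'000);      // picks the perf/watt leader
    return 0;
}
```

Rate-limiting the switch decision is what keeps frequent reconfiguration from eroding the efficiency gains, in the spirit of the throttling mechanisms mentioned in the abstract.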