University Digital Conservancy



Browsing by Subject "Timing analysis"

Now showing 1 - 2 of 2
    Scalable methods for reliability analysis in digital circuits using physics-based device-level models
    (2012-10) Fang, Jianxin
    As technology has scaled aggressively, device reliability issues have become a growing concern in digital CMOS very large scale integrated (VLSI) circuits. There are three major effects that result in degradation of device reliability over time, namely, time-dependent dielectric breakdown (TDDB), bias-temperature instability (BTI), and hot carrier (HC) effects. Over the past several years, considerable success has been achieved at the level of individual devices in developing new models that accurately reconcile the empirical behavior of a device with the physics of reliability failure. However, there is a tremendous gulf between these achievements at the device level and the more primitive models that are actually used by circuit designers to drive the analysis and optimization of large systems. By and large, the latter models are decades old and fail to capture the intricacies of the major advances that have been made in understanding the physics of failure; hence, they cannot provide satisfactory accuracy. The few approaches that can be easily extended to handle new device models are primarily based on simulation at the transistor level and are computationally prohibitive for large circuits.
    This thesis addresses the circuit-level analysis of these reliability issues from a new perspective. The overall goal of this body of work is to bridge the gap between device-level physics-based models and circuit analysis and optimization for digital logic circuits. This is achieved by assimilating updated device-level models into circuit analysis and optimization through algorithms and methodologies that admit scalability, resulting in the ability to handle large circuits. A common thread that runs through many of the analysis approaches is performing accurate and computationally feasible cell-level modeling and characterization, once for each device technology, and then developing probabilistic techniques that utilize the properties of these characterized libraries to perform accurate analysis at the circuit level. Based on this philosophy, it is demonstrated that the proposed approaches for circuit reliability analysis can achieve accuracy while remaining scalable to large problem instances. The remainder of the abstract presents a list of specific contributions addressing individual mechanisms at the circuit level.
    Gate oxide TDDB is an effect that can result in circuit failure as devices carry large, unwanted currents through the gate due to oxide breakdown. Realistically, this results in catastrophic failures in logic circuits, and a useful metric for circuit reliability under TDDB is the distribution of the failure probability. The first part of this thesis develops an analytic model to compute this failure probability, departing from previous area-scaling-based approaches that assumed any device failure results in circuit failure. On the contrary, it is demonstrated that the location and circuit environment of a TDDB failure are critical in determining whether a circuit fails or not. Indeed, it is shown that a large number of device failures do not result in circuit failure due to the inherent resilience of logic circuits. The analysis begins by addressing the nominal case and then extends to analyze the effects of gate oxide TDDB in the more general case where process variations are taken into account. The resulting derivations demonstrate that the circuit failure probability is a Weibull function of time in the nominal case, and follows a lognormal distribution at a specified time instant under process variations. This result is then incorporated into a method that performs gate sizing to increase the robustness of a circuit to TDDB effects.
    Unlike gate oxide TDDB, which results in catastrophic failures, both BTI and HC effects result in temporal increases in the transistor threshold voltages, causing a circuit to degrade over time and eventually resulting in parametric failures as the circuit violates its timing specifications. Traditional analyses of HC effects are based on the so-called lucky electron model (LEM), and all known circuit-level analysis tools build upon this model. The LEM predicts that as device geometries and supply voltages shrink to the level of today's technology nodes, HC effects should disappear; however, this has clearly not been borne out by empirical observations on small-geometry devices. An alternative energy-based formulation to explain HC effects has emerged from the device community: this thesis uses this formulation to develop a scalable methodology for hot carrier analysis at the circuit level. The approach is built upon an efficient one-time library characterization that determines the age gain associated with any transition at the input of a gate in the cell library. This information is then utilized for circuit-level analysis using a probabilistic method that captures the impact of HC effects over time while incorporating the effect of process variations. This is combined with existing models for BTI, and simulation results show the combined impact of both BTI and HC effects on circuit delay degradation over time.
    In the last year or two, the accepted models for BTI have also gone through a remarkable shift, and this is addressed in the last part of the thesis. The traditional approach to analyzing BTI, also used in earlier parts of this thesis, was based on the reaction-diffusion (R-D) model, but lately the charge trapping (CT) model has gained a great deal of traction since it is capable of explaining some effects that R-D cannot; at the same time, there are some effects, notably the level of recovery, that are better explained by the R-D model. Device-level research has proposed that a combination of the two models can successfully explain BTI; however, most work on BTI has been carried out under the R-D model. One of the chief properties of the CT model is the high susceptibility of CT-based mechanisms to process variations: for example, it has been shown that CT models can result in alarming variations of several orders of magnitude in device lifetime for small-geometry transistors. This work therefore develops a novel approach for BTI analysis that incorporates the effect of the combined R-D and CT model, including variability effects, and determines whether the alarming level of variations at the device level is manifested in large logic circuits. The analysis techniques are embedded into a novel framework that uses library characterization and temporal statistical static timing analysis (T-SSTA) to capture process variations and the correlations in variability that arise from spatial or path correlations.
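    To make the distributional claims above concrete, the following is a brief sketch in generic notation; the symbols are illustrative and not taken from the thesis.

        % Nominal case: the circuit failure probability is a Weibull function of time,
        % with shape parameter \beta and characteristic life \eta (illustrative symbols).
        F_{\mathrm{ckt}}(t) = 1 - \exp\!\left[ -\left( t/\eta \right)^{\beta} \right]
        % Under process variations, the failure probability evaluated at a fixed time t_0
        % varies from die to die; per the abstract, this quantity follows a lognormal
        % distribution, i.e. its logarithm is normally distributed:
        \ln F_{\mathrm{ckt}}(t_0) \sim \mathcal{N}(\mu, \sigma^2)
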
    Timing estimation and optimization for physical design using machine learning approaches
    (2025-01) Jiang, Wenjing
    Rapid progress in semiconductor technology has driven integrated circuit (IC) designs to become increasingly complex, resulting in significant challenges in achieving optimality in design. The growing intricacy and high design costs of modern ICs demand accurate prediction of the quality of results (QoR) to guide early design decisions. Inaccurate predictions can lead to inefficient design iterations, degraded QoR, and even design failures. Therefore, improving QoR prediction accuracy while maintaining computational efficiency has become a critical objective in electronic design automation (EDA). Recent advances in machine learning (ML) have provided promising solutions to these challenges: ML-based predictive models and optimization methodologies have been developed to improve both quality and efficiency across the design flow.
    The first part of the thesis focuses on timing prediction after placement and clock tree synthesis. Because detailed routing information is unavailable in design stages prior to detailed routing (DR), timing prediction and optimization at those stages pose major challenges. This part first documents that having "oracle knowledge" of the final post-DR parasitics enables post-global routing (GR) optimization to produce improved final timing outcomes. To bridge the gap between post-GR timing estimation and final timing results during post-GR optimization, ML-based parasitic and interconnect delay models are proposed for accurate path delay estimation. These models, trained on diverse datasets, demonstrate higher prediction accuracy than traditional methods based on the route guides generated in the GR stage. Applied during post-GR optimization, they yield better post-DR timing slack without exacerbating routing congestion. The methodology is applied to both open-source tool flows and a commercial tool flow, and results on an open-source 45nm bulk enablement and a commercial 12nm FinFET enablement show the robustness and good generalization of the proposed models under varying clock constraints and noisy training data.
    The second part of the thesis focuses on engineering change orders (ECOs) in late design stages, where minimal design fixes are required to address the timing shifts caused by excessive IR drops. We integrate IR-drop-aware timing analysis and reinforcement learning (RL) to develop an efficient ECO timing optimization. The method operates after physical design and power grid synthesis, and rectifies IR-drop-induced timing degradation through gate sizing. It incorporates the conventional gate sizing technique, Lagrangian relaxation (LR), into a novel RL framework that trains a relational graph convolutional network (R-GCN) agent to sequentially size gates to fix timing violations. The R-GCN agent outperforms a classical LR-only algorithm in an open 45nm technology: it moves the Pareto front of the delay-power tradeoff curve to the left, saves runtime over prior approaches by running fast inference with trained models, and reduces the perturbation to placement by sizing fewer cells. It is also shown to be transferable across timing constraints and adaptable to unseen designs with fine-tuning, further highlighting its versatility and efficiency.
    The last part of the thesis studies the correlation between the proxy metrics used in traditional logic optimization and actual post-synthesis delay, and the importance of accurate timing estimation to the effectiveness of logic optimization. As circuit designs become more intricate, obtaining accurate performance estimates early enough for effective design space exploration becomes more time-consuming. Traditional logic optimization approaches often rely on proxy metrics to approximate post-synthesis performance and area; however, these proxies do not always correlate well with actual post-mapping delay and area, resulting in suboptimal designs. To address this issue, a ground-truth-based optimization flow is explored that directly incorporates the exact post-synthesis delay and area during optimization. While this approach improves design quality, it also significantly increases computational cost, since technology mapping must be completed for every logic optimization iteration, particularly for large-scale designs. To overcome this runtime challenge, we apply ML models to predict post-mapping delay and area using features extracted from the logic representation graph. Our experimental results show that the model achieves high prediction accuracy with good generalization to unseen designs. Furthermore, the ML-enhanced logic optimization flow significantly reduces runtime while maintaining comparable performance and area outcomes.
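    To make the ML-based estimation idea in the first and last parts concrete, the following is a minimal, hypothetical sketch of training a regressor that maps pre-route features to ground-truth post-route stage delay; the feature set, synthetic data, and model choice are placeholders, not the models used in the thesis.

        # Minimal sketch: learn post-detailed-route stage delay from pre-route features.
        # Hypothetical features and synthetic data; a real flow would extract features
        # from the placed/globally-routed netlist and label them with signoff timing.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # One row per timing arc. Columns (illustrative): estimated wirelength, fanout,
        # driver strength, GR congestion near the net, estimated load capacitance.
        X = rng.random((5000, 5))
        # Synthetic "ground-truth" post-DR stage delay in ns (placeholder relationship).
        y = 0.20 * X[:, 0] + 0.05 * X[:, 1] + 0.10 * X[:, 3] + 0.01 * rng.standard_normal(5000)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
        model.fit(X_tr, y_tr)

        pred = model.predict(X_te)
        print(f"MAE on held-out arcs: {mean_absolute_error(y_te, pred):.4f} ns")
        # During post-GR optimization, such predictions would stand in for
        # route-guide-based parasitic estimates when evaluating path delays.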
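    The ECO flow in the second part sequentially sizes gates to recover timing; the thesis trains an R-GCN reinforcement-learning agent to choose the moves. As a rough illustration of the decision loop only, the toy sketch below replaces the learned policy with a greedy heuristic on a single path with made-up delay and power numbers.

        # Toy sequential gate sizing on one timing path (illustrative numbers only).
        from dataclasses import dataclass

        @dataclass
        class Gate:
            name: str
            size: int  # index into the sizing options below
            delays: tuple = (0.30, 0.22, 0.17)  # stage delay (ns) per size, hypothetical
            powers: tuple = (1.0, 1.8, 3.1)     # leakage (uW) per size, hypothetical

            @property
            def delay(self):
                return self.delays[self.size]

        def path_slack(path, clock_period=1.0):
            return clock_period - sum(g.delay for g in path)

        def greedy_size(path, clock_period=1.0):
            """Upsize one gate at a time, picking the best delay-gain-per-power move,
            until timing is met or no move helps (stand-in for the RL agent's policy)."""
            while path_slack(path, clock_period) < 0:
                best, best_score = None, 0.0
                for g in path:
                    if g.size + 1 < len(g.delays):
                        delay_gain = g.delays[g.size] - g.delays[g.size + 1]
                        power_cost = g.powers[g.size + 1] - g.powers[g.size]
                        score = delay_gain / power_cost
                        if score > best_score:
                            best, best_score = g, score
                if best is None:
                    break  # no further sizing options
                best.size += 1
            return path

        path = [Gate(f"g{i}", 0) for i in range(4)]
        greedy_size(path, clock_period=1.0)
        print([(g.name, g.size) for g in path], "slack:", round(path_slack(path), 3))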
