Browsing by Subject "Graph Neural Networks"
Now showing 1 - 2 of 2
Item: Analog Design Automation in the Era of Machine Learning (2022-12), Kunal, Kishor

Analog and mixed-signal (AMS) circuits are everywhere -- in phones, smart watches, self-driving cars, humanoid robots, and IoT devices. However, the problem of automating analog design has perplexed several generations of researchers in electronic design automation (EDA). At its core, the difficulty lies in the fact that machine-generated designs have been unable to match the quality of the human designer, who recognizes blocks in a netlist and draws on experience to translate these blocks into a silicon layout. The ability to annotate blocks in a schematic or netlist-level description of a circuit is key to this entire process, but it is fraught with complexity. A major reason is the large number of variants of each circuit type, which an experienced designer can easily comprehend but which are difficult to encode into an EDA tool. The recent advent of machine learning (ML) provides pathways to breakthrough solutions for automated analog design. Such a capability can enable more widespread use of AMS circuits, which are widely known to have the potential to provide energy-efficient implementations for real-world applications. In fact, for a number of emerging applications, such as the design of ML hardware, AMS implementations can provide superior performance compared to conventional digital designs.

The first part of the thesis showcases applications of graph neural networks (GNNs) for analog layout automation within the ALIGN open-source EDA framework. The automatic identification of hierarchical functional blocks in analog designs can facilitate a variety of design automation tasks. For example, in circuit layout optimization, the optimal layout is dictated by constraints at each level, such as symmetry requirements, that depend on the topology of the hierarchical block. At higher levels of the design hierarchy, where numerous design variants are possible, recent advances in GNNs are leveraged, using a variety of GNN strategies, to identify circuit functional blocks, thus replicating the role of the human expert. At lower levels of the hierarchy, where the degrees of freedom in circuit topology are limited, structures are identified using graph-based algorithms. The proposed hierarchical recognition scheme enables the identification of layout constraints, such as symmetry and matching, that enable high-quality layout synthesis. The method is demonstrated to be scalable and highly accurate across a wide range of analog designs, identifying functional blocks such as low-noise amplifiers, operational transconductance amplifiers, mixers, oscillators, and band-pass filters within larger circuits.

Another challenge in analog layout automation is the need to identify matching and symmetry between elements in the circuit netlist. However, the set of symmetries is circuit-specific, and a versatile algorithm applicable to a broad variety of circuits has been elusive. The next part of this thesis presents a general methodology for the automated generation of symmetry constraints, and applies these constraints to guide automated layout synthesis. The proposed method operates hierarchically and uses graph-based algorithms to extract multiple axes of symmetry within a circuit. An important ingredient of the algorithm is its ability to identify arrays of repeated structures.
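The thesis develops its own hierarchical, graph-based matching algorithms; as a rough illustration of the exact-matching step, the following sketch (a hypothetical stand-in, not the ALIGN implementation) groups candidate subcircuit graphs by a Weisfeiler-Lehman hash, so that structurally identical cells land in the same bucket as candidate repeated arrays.

```python
# Hypothetical sketch: grouping structurally identical subcircuits so that
# repeated arrays can be detected. Not the ALIGN implementation; uses
# networkx's Weisfeiler-Lehman graph hash as a stand-in for the thesis's
# graph-based matching.
from collections import defaultdict
import networkx as nx

def find_repeated_structures(subcircuits):
    """Group candidate subcircuit graphs by WL hash.

    subcircuits: dict mapping a cell name to an nx.Graph whose nodes carry
    a 'type' attribute (e.g., 'nmos', 'pmos', 'res', 'cap').
    Returns groups of names whose graphs hash identically, i.e., candidate
    repeated arrays.
    """
    buckets = defaultdict(list)
    for name, g in subcircuits.items():
        h = nx.weisfeiler_lehman_graph_hash(g, node_attr="type", iterations=3)
        buckets[h].append(name)
    return [names for names in buckets.values() if len(names) > 1]

# Toy usage: two identical inverter-like cells and one different cell.
def inverter():
    g = nx.Graph()
    g.add_node("mp", type="pmos"); g.add_node("mn", type="nmos")
    g.add_edge("mp", "mn")
    return g

cap = nx.Graph()
cap.add_node("c", type="cap"); cap.add_node("gnd", type="net")
cap.add_edge("c", "gnd")

cells = {"inv1": inverter(), "inv2": inverter(), "cap1": cap}
print(find_repeated_structures(cells))  # -> [['inv1', 'inv2']]
```

Exact hashing of this kind only catches perfect replicas, which motivates the approximate-matching step the abstract turns to next.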
In some circuits, these "repeated" structures are not perfect replicas but exhibit a high degree of similarity, and they can only be identified through approximate graph matching. A fast graph neural network-based methodology is developed for this purpose, based on evaluating the graph edit distance between candidate structures. The algorithm is demonstrated on operational amplifiers, data converters, equalizers, and low-noise amplifiers.

The final part of the thesis focuses on the application of analog circuits to energy-efficient ML inference. Due to the inherent error tolerance of ML algorithms, many parts of the inference computation can be performed with adequate accuracy and low power at relatively low precision. Early approaches used digital approximate computing methods to explore this space. An alternative is to use analog circuits, which can deliver lower-power solutions but are well known to be more susceptible to noise, which degrades precision. Even so, several recent efforts have shown the benefit of using purely analog operations to achieve power-efficient computation at moderate precision. This work combines the best of both worlds by proposing a mixed-signal design approach, MiSOML, that optimally blends analog and digital computation for ML inference hardware, incorporating the cost of analog-to-digital and digital-to-analog converters where needed. Based on models for speed, accuracy, and power, an integer linear programming formulation is developed to optimize design metrics over the space of analog/digital implementations. On multiple ML architectures, MiSOML demonstrates a 5x to 8x energy improvement over 8-bit quantized digital implementations.

Item: Towards Learning Powerful Deep Graph Neural Networks and Embeddings (2020-06), Verma, Saurabh

Learning powerful data embeddings has recently become the core of machine learning algorithms, especially in the natural language processing and computer vision domains. In the graph domain, the applications of learning graph embeddings are vast, with use cases across fields such as bioinformatics, chemoinformatics, social networks, and recommendation systems. To date, the graph remains the most fundamental data structure for representing many forms of real-world data. However, its rich but complex structure makes forging powerful graph embeddings a significant challenge. Even standard deep learning techniques such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) cannot operate on data that lies beyond a 1D sequence (say, of words) or a 2D pixel grid (of images), and therefore cannot generalize to arbitrary graph structures. Recently, graph neural networks (GNNs) have been proposed to alleviate these limitations, but the current state of the art is far from mature in both theory and applications. To that end, this thesis aims to develop powerful graph embedding models for solving a wide variety of real-world problems on graphs. We study some of the major approaches to devising graph embeddings, namely graph kernels or spectra and GNNs. We expose and tackle some of their fundamental weaknesses and contribute several novel state-of-the-art graph embedding models. These models achieve superior performance over existing methods on many real-world graph problems, such as node classification, graph classification, and link prediction, and do so with desirable theoretical guarantees.
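For readers unfamiliar with GNNs, the sketch below shows the generic message-passing pattern that underlies most GNN layers: each node aggregates features from its neighbors and mixes them with its own through learned weights. This is a minimal, generic illustration, not any specific model from the thesis.

```python
# Minimal sketch of one message-passing GNN layer: every node averages its
# neighbours' features and combines them with its own via learned weights.
# Illustrates the general pattern only, not a specific model from the thesis.
import numpy as np

def gnn_layer(A, X, W_self, W_neigh):
    """One propagation step.

    A: (n, n) adjacency matrix, X: (n, d) node features,
    W_self, W_neigh: (d, d_out) weight matrices.
    """
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)  # avoid divide-by-zero
    neigh_mean = (A @ X) / deg                         # mean over neighbours
    return np.maximum(X @ W_self + neigh_mean @ W_neigh, 0.0)  # ReLU

# Toy usage on a 4-node path graph with random features and weights.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))
H = gnn_layer(A, X, rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
print(H.shape)  # (4, 16)
```

Because the aggregation is defined over each node's neighborhood rather than a fixed grid or sequence, the same layer applies to graphs of arbitrary shape and size, which is exactly the flexibility RNNs and CNNs lack.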
We first study the capabilities of graph kernel and spectrum approaches for yielding powerful graph embeddings in terms of uniqueness, stability, sparsity, and computational efficiency. Second, we propose the Graph Capsule Neural Network, which yields powerful graph embeddings by capturing much more of the information encoded in the graph structure than existing GNNs. Third, we devise the first universal and transferable GNN, making transfer learning possible in the graph domain: with this GNN, graph embeddings can be shared and transferred across different models and domains, reaping the benefits of transfer learning. Lastly, there is a dearth of theoretical exploration of GNN models, for instance of their generalization properties. We take the first step towards a deeper theoretical understanding of GNN models by analyzing their stability and deriving their generalization guarantees. To the best of our knowledge, we are the first to study stability bounds on graph learning in a semi-supervised setting and to derive related generalization bounds for GNN models. In summary, this thesis contributes several state-of-the-art graph embeddings and novel theory, specifically: (i) a powerful graph embedding, the Family of Graph Spectral Distances (Fgsd); (ii) a highly informative GNN, the Graph Capsule Neural Network (GCAPS); (iii) a universal and transferable GNN, the Deep Universal and Transferable Graph Neural Network (DUGNN); and (iv) stability theory and generalization guarantees for GNNs.
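As a rough illustration of the spectral-distance idea behind Fgsd, the sketch below histograms pairwise spectral distances computed from the graph Laplacian into a fixed-length graph embedding. It assumes the harmonic kernel f(lambda) = 1/lambda and a fixed histogram range, both illustrative choices; the thesis develops the full family of distances together with its uniqueness, stability, and sparsity theory.

```python
# Rough sketch of an Fgsd-style graph embedding: compute pairwise spectral
# distances from the graph Laplacian, then histogram them into a fixed-length
# vector. Assumes the harmonic kernel f(lambda) = 1/lambda as an example;
# not the thesis's exact recipe.
import numpy as np

def spectral_distance_embedding(A, bins=32, max_dist=10.0):
    """A: (n, n) symmetric adjacency matrix -> (bins,) histogram embedding."""
    L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
    lam, phi = np.linalg.eigh(L)              # eigenvalues / eigenvectors
    f = np.where(lam > 1e-9, 1.0 / np.maximum(lam, 1e-9), 0.0)  # drop nullspace
    # S[x, y] = sum_k f(lam_k) * (phi_k(x) - phi_k(y))^2
    diff = phi[:, None, :] - phi[None, :, :]  # (n, n, n) eigenvector gaps
    S = (diff ** 2 * f).sum(axis=-1)
    hist, _ = np.histogram(S[np.triu_indices_from(S, k=1)],
                           bins=bins, range=(0.0, max_dist))
    return hist / max(hist.sum(), 1)          # normalized histogram

# Toy usage: a triangle and a 3-node path yield different embeddings.
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(np.allclose(spectral_distance_embedding(tri),
                  spectral_distance_embedding(path)))  # False
```

Because the histogram depends only on the multiset of pairwise spectral distances, the resulting embedding is invariant to node ordering, one of the properties (alongside stability and sparsity) that the abstract highlights.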