Browsing by Subject "Distributed optimization"
Now showing 1 - 3 of 3
Item
Methods Of Distributed And Nonlinear Process Control: Structuredness, Optimality And Intelligence (2020-05) Tang, Wentao
Chemical processes are intrinsically nonlinear and often integrated into large-scale networks, which are difficult to control effectively. The traditional challenges faced by process control, as well as the modern vision of transitioning industries into a smart manufacturing paradigm, require the instillation of new perspectives and the application of new methods to the control of chemical processes. The goal is to realize highly automated, efficient, well-performing and flexible control strategies for nonlinear, interconnected and uncertain systems. Motivated by this, this thesis discusses the following three important aspects (objectives) of contemporary process control -- structuredness, optimality, and intelligence -- in the corresponding three parts.
1. For the control of process networks in a structured and distributed manner, a network-theoretic perspective is introduced, which suggests finding a decomposition of the problem according to the block structures in the network. Such a perspective is examined through sparse optimal control of Laplacian network dynamics. Community detection-based methods are proposed for input--output bipartite and variable-constraint network representations and applied to a benchmark chemical process.
2. For the optimality of control, we first derive a computationally efficient algorithm for nonconvex constrained distributed optimization with theoretically provable convergence properties -- ELLADA, which is applied to distributed nonlinear model predictive control of a benchmark process system. We then derive bilevel optimization formulations for the Lyapunov stability analysis of nonlinear systems, and stochastic optimization formulations for optimally designing the Lyapunov function, which can be further integrated with the optimal process design problem.
3. Towards a more intelligent paradigm of process control, we first investigate an advantageous Lie-Sobolev nonlinear system identification scheme and its effect on nonlinear model-based control. For model-free data-driven control, we discuss a distributed implementation of the adaptive dynamic programming idea. For chemical processes where states are mostly unmeasurable, dissipativity learning control (DLC) is proposed as a suitable framework for input--output data-driven control and applied to several nonlinear processes. Its theoretical foundations are also discussed.
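The ELLADA algorithm named in part 2 is only described at the abstract level here. As a rough orientation, the sketch below shows a generic global-consensus ADMM iteration on a toy separable least-squares problem; it illustrates the agent-wise "local solve, coordinate, dual update" structure that augmented-Lagrangian-based distributed algorithms such as ELLADA build on, but it is not the ELLADA algorithm itself, and the problem data, penalty parameter, and iteration count are all invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, n_vars, n_rows = 4, 5, 20

    # Made-up per-agent data for f_i(x) = 0.5 * ||A_i x - b_i||^2.
    A = [rng.standard_normal((n_rows, n_vars)) for _ in range(n_agents)]
    x_true = rng.standard_normal(n_vars)
    b = [Ai @ x_true + 0.01 * rng.standard_normal(n_rows) for Ai in A]

    rho = 1.0                                          # penalty parameter (arbitrary choice)
    x = [np.zeros(n_vars) for _ in range(n_agents)]    # local copies
    u = [np.zeros(n_vars) for _ in range(n_agents)]    # scaled dual variables
    z = np.zeros(n_vars)                               # consensus variable

    for _ in range(100):
        # Local proximal steps: each agent solves its own regularized subproblem (in parallel).
        x = [np.linalg.solve(Ai.T @ Ai + rho * np.eye(n_vars),
                             Ai.T @ bi + rho * (z - ui))
             for Ai, bi, ui in zip(A, b, u)]
        # Coordination step: average local estimates to update the consensus variable.
        z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
        # Dual ascent: penalize disagreement between local copies and the consensus.
        u = [ui + xi - z for xi, ui in zip(x, u)]

    print("max disagreement with consensus:", max(np.linalg.norm(xi - z) for xi in x))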
Item
Scalable Learning and Energy Management for Power Grids (2019-01) Zhang, Liang
Contemporary power grids are being challenged by unprecedented levels of voltage fluctuation due to the large-scale deployment of electric vehicles (EVs), demand-response programs, and renewable generation. Nonetheless, with proper coordination, EVs and responsive demands can be controlled to enhance grid efficiency and reliability by leveraging advances in power electronics, metering, and communication modules. In this context, the present thesis pioneers algorithmic innovations targeting timely opportunities emerging with future power systems in terms of learning, load control, and microgrid management. Our vision is twofold: advancing algorithms and their performance analysis, while contributing foundational developments that guarantee situational awareness, efficiency, and scalability of forthcoming smart power grids.
The first thrust to this end deals with real-time power grid monitoring, which comprises power system state estimation (PSSE), state forecasting, and topology identification modules. Due to the intrinsic nonconvexity of the PSSE task, optimal PSSE approaches have been either sensitive to initialization or computationally expensive. To bypass these hurdles, this thesis advocates deep neural networks (DNNs) for real-time PSSE. By unrolling an iterative physics-based prox-linear PSSE solver, a novel model-specific DNN with affordable training and minimal tuning effort is developed. To further enable system awareness ahead of the time horizon, as well as to endow the DNN-based estimator with resilience, deep recurrent neural networks (RNNs) are also pursued for state forecasting. Deep RNNs leverage the long-term nonlinear dependencies present in historical voltage time series to enable forecasting, and they are easy to implement. Finally, partial correlations based on multi-kernel learning, which account for nonlinear dependencies between given nodal measurements, are leveraged to unveil the connectivity of power grids.
The second thrust leverages the obtained state and topology information to design optimal load control and microgrid management schemes. With regard to EV load control, a decentralized protocol relying on the Frank-Wolfe algorithm is put forth to manage heterogeneous charging loads. The novel paradigm has minimal computational requirements and is resilient to lost updates. When higher levels of EV load cause violations of prescribed voltage limits, the underlying grid needs to be taken into account. In this context, communication-free local reactive power control and optimal decentralized energy management schemes are developed based on the proximal gradient method and the alternating direction method of multipliers, respectively.
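The Frank-Wolfe-based charging protocol is only named in the abstract, not specified. The following minimal valley-filling sketch (fleet size, rate caps, energy targets, and baseline load are all invented) shows why Frank-Wolfe lends itself to decentralized charging: a coordinator broadcasts a common gradient "price" signal, and each EV solves only a cheap linear subproblem over its own charging constraints.

    import numpy as np

    rng = np.random.default_rng(1)
    n_evs, n_slots = 5, 24
    base_load = 2.0 + np.sin(np.linspace(0.0, 2.0 * np.pi, n_slots))  # fictitious baseline demand
    cap = np.full((n_evs, n_slots), 0.5)                              # per-EV charging rate limits
    energy = np.full(n_evs, 6.0)                                      # per-EV energy requirements

    def lmo_ev(price, cap_i, energy_i):
        """Linear minimization over {0 <= s <= cap_i, sum(s) = energy_i}:
        greedily charge at full rate in the cheapest slots."""
        s = np.zeros_like(cap_i)
        remaining = energy_i
        for t in np.argsort(price):
            s[t] = min(cap_i[t], remaining)
            remaining -= s[t]
            if remaining <= 0.0:
                break
        return s

    # Aggregate objective 0.5 * ||base_load + sum_i X_i||^2 (valley filling).
    # Start from uniform charging, which is feasible for every EV.
    X = np.tile((energy / n_slots)[:, None], (1, n_slots))

    for k in range(300):
        price = base_load + X.sum(axis=0)    # common gradient signal broadcast to all EVs
        S = np.array([lmo_ev(price, cap[i], energy[i]) for i in range(n_evs)])
        gamma = 2.0 / (k + 2.0)              # standard diminishing Frank-Wolfe step size
        X = X + gamma * (S - X)

    print("aggregate load spread:", np.ptp(base_load + X.sum(axis=0)))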
Item
A Unified Framework for Understanding Distributed Optimization Algorithms: System Design and its Applications (2023-11) Zhang, Xinwei
More than ever before, technological advances across the spectrum have meant that we have individualized and decentralized access to data, resources, and human capital. The capability to utilize data generated at massive scale and in a distributed fashion (e.g., personal shopping records) together with distributed computation (e.g., fast smartphone processors) has simplified our lives, facilitated optimal resource allocation, and unlocked innovation across industries. Distributed algorithms play a central role in the optimal operation of distributed systems in many applications, such as machine learning, signal processing, and control. Significant research efforts have been devoted to developing and analyzing new algorithms for various applications. However, existing methods still face difficulties in using computational resources and distributed data safely and efficiently. The three major challenges in state-of-the-art distributed systems are 1) finding appropriate models to describe the resources and problems in the system, 2) developing a general approach to solving problems efficiently, and 3) ensuring participants' privacy. My thesis research focuses on building an algorithmic framework to resolve these fundamental and practical challenges. This thesis provides a fresh perspective from which to understand, analyze, and design distributed optimization algorithms.
Through the lens of multi-rate feedback control, this thesis theoretically proves that a wide class of distributed algorithms, including popular decentralized and federated schemes, can be viewed as discretizing a certain continuous-time feedback control system, possibly with multiple sampling rates, while preserving the same convergence behavior. Further, the proposed system unifies the stochasticity in a wide range of distributed optimization algorithms as several types of noise injected into the control system, and provides a uniform convergence analysis for a class of distributed stochastic optimization algorithms. The control-based framework is then applied to designing new algorithms in decentralized optimization and federated learning that meet different system requirements, such as guaranteeing convergence, attaining optimal performance, or addressing privacy concerns. In summary, this thesis establishes a control-based framework to understand, analyze, and design distributed optimization algorithms, with applications to decentralized optimization and federated learning algorithm design.
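The thesis's multi-rate control system is not spelled out in this abstract, so the sketch below only illustrates the simplest version of the viewpoint: forward-Euler discretization of a continuous-time "consensus feedback plus gradient feedback" flow recovers the classical decentralized gradient descent (DGD) iteration. The dynamics, graph, step size, and data are all invented for illustration and are not the thesis's construction.

    import numpy as np

    rng = np.random.default_rng(2)
    n_agents, dim = 4, 3

    # Each agent holds a private quadratic f_i(x) = 0.5 * ||A_i x - b_i||^2 (made-up data).
    A = [rng.standard_normal((6, dim)) for _ in range(n_agents)]
    b = [rng.standard_normal(6) for _ in range(n_agents)]
    grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

    # Ring-graph Laplacian couples neighboring agents.
    L = (2 * np.eye(n_agents)
         - np.roll(np.eye(n_agents), 1, axis=0)
         - np.roll(np.eye(n_agents), -1, axis=0))

    # Continuous-time dynamics (one common control-style viewpoint, not the thesis's system):
    #   dX/dt = -alpha * L X - grad F(X),
    # i.e. a consensus feedback term plus local gradient feedback.
    alpha, h = 1.0, 0.02                       # coupling gain and Euler step size (arbitrary)
    X = rng.standard_normal((n_agents, dim))   # row i is agent i's local iterate

    for _ in range(2000):
        G = np.array([grad(i, X[i]) for i in range(n_agents)])
        # Forward-Euler discretization; with W = I - h * alpha * L this is exactly
        # the decentralized gradient descent update  X <- W X - h * grad F(X).
        X = X - h * (alpha * (L @ X) + G)

    print("disagreement across agents:", np.linalg.norm(X - X.mean(axis=0)))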