Robust, Deep, and Reinforcement Learning for Management of Communication and Power Networks
2021-08
Type
Thesis or Dissertation
Abstract
Data-driven machine learning advances have effectively handled a wide spectrum of application domains. However, formidable challenges remain, especially in managing and optimizing next-generation complex cyber-physical systems, self-driving cars, and autonomous surgical systems, which call for ground-breaking control, monitoring, and decision-making tools that can guarantee robustness, scalability, and situational awareness. In this context, the present thesis first develops principled methods to robustify learning models against distributional uncertainties and adversarial data. The developed framework is particularly attractive when training and testing data are drawn from mismatched distributions. By leveraging the Wasserstein distance, the novel approaches minimize the worst-case expected loss over a prescribed family of data distributions. Building on this robust framework, the thesis next introduces a robust semi-supervised learning approach for networked data whose interdependencies are captured by graphs. Subsequently, the thesis contributes machine learning tools for next-generation wired and wireless networks through the design of intelligent caching modules based on deep reinforcement learning. These modules are equipped with storage devices and can thus prefetch popular contents (reusable information) during off-peak traffic hours and serve them at the network edge during peak traffic periods. Finally, the thesis contributes to the management and control of power networks, specifically distribution grids with high penetration of renewable sources and demand-response programs. Reactive power is optimally allocated both to utility-owned control devices (e.g., capacitor banks) and to smart inverters of distributed generation units with cyber capabilities. The resultant dynamic control algorithms are scalable and adapt in real time to changes in renewable generation and load consumption.
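To make the worst-case formulation above concrete, here is a minimal, hypothetical sketch — not the thesis' actual solver — of a Lagrangian relaxation of Wasserstein-robust logistic regression: an inner loop ascends the loss over per-sample perturbations penalized by their squared norm (a proxy for the Wasserstein budget), and an outer loop descends the resulting worst-case loss. All data, step sizes, and the penalty weight `gamma` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (hypothetical stand-in for mismatched-distribution settings).
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))

def logistic_loss(w, X, y):
    # Mean logistic loss: (1/n) sum log(1 + exp(-y_i <w, x_i>)).
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def grad_w(w, X, y):
    # Gradient of the mean logistic loss with respect to the weights w.
    s = -y / (1.0 + np.exp(y * (X @ w)))
    return X.T @ s / len(y)

def grad_x(w, X, y):
    # Per-sample gradient of each sample's own loss term with respect to x_i.
    s = -y / (1.0 + np.exp(y * (X @ w)))
    return s[:, None] * w[None, :]

def wdro_train(X, y, gamma=5.0, lr=0.1, inner_steps=5, outer_steps=200):
    """Lagrangian relaxation of Wasserstein-robust training: the inner loop
    ascends loss(x + delta) - gamma * ||delta||^2 over per-sample perturbations
    delta; the outer loop descends the worst-case loss in w."""
    w = np.zeros(X.shape[1])
    for _ in range(outer_steps):
        delta = np.zeros_like(X)
        for _ in range(inner_steps):
            g = grad_x(w, X + delta, y) - 2.0 * gamma * delta
            delta += 0.05 * g  # gradient ascent on the penalized adversarial objective
        w -= lr * grad_w(w, X + delta, y)
    return w

w = wdro_train(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

Larger `gamma` shrinks the admissible perturbations, recovering ordinary empirical risk minimization in the limit; smaller `gamma` trades nominal accuracy for robustness to distribution shift.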
To further enhance situational awareness in power networks, the thesis also contributes robust power system state estimation solvers.
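As a toy illustration of the reinforcement-learning caching idea — a deliberately stateless bandit simplification, not the deep-RL modules developed in the thesis — an agent with a single cache slot learns, purely from observed hits and misses, which content is most popular and hence worth prefetching. The three-content popularity profile below is a hypothetical example.

```python
import random

random.seed(1)

# Hypothetical instance: 3 contents with a fixed popularity profile that is
# unknown to the caching agent; the cache holds exactly one content.
N = 3
popularity = [0.7, 0.2, 0.1]

def draw_request():
    # Sample one content request from the (hidden) popularity distribution.
    r, acc = random.random(), 0.0
    for f, p in enumerate(popularity):
        acc += p
        if r < acc:
            return f
    return N - 1

# Q[a] estimates the expected reward (hit rate) of caching content a.
Q = [0.0] * N
alpha, eps = 0.1, 0.1  # learning rate and exploration probability
for t in range(5000):
    # Epsilon-greedy action selection: which content to prefetch this slot.
    a = random.randrange(N) if random.random() < eps else max(range(N), key=Q.__getitem__)
    reward = 1.0 if draw_request() == a else 0.0  # reward 1 on a cache hit
    Q[a] += alpha * (reward - Q[a])  # incremental value update

best = max(range(N), key=Q.__getitem__)
```

The full problem treated in the thesis is far richer — time-varying popularity, multi-slot caches, and deep function approximation — but the hit-driven feedback loop above is the core mechanism.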
Description
University of Minnesota Ph.D. dissertation. 2021. Major: Electrical Engineering. Advisor: Georgios Giannakis. 1 computer file (PDF); xii, 176 pages.
Suggested citation
Sadeghi, Alireza. (2021). Robust, Deep, and Reinforcement Learning for Management of Communication and Power Networks. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/225028.