Power electronics control using Reinforcement learning
Authors
Shekhar, Sameer
Published Date
2023-03
Type
Thesis or Dissertation
Abstract
Most power electronics systems require control of analog variables to meet system objectives. Control design for these systems is typically done using a first-principles approach, with performance heavily dependent on system parameters and operating conditions, which makes them susceptible to variations. Reinforcement learning (RL) can solve such control problems without making model assumptions. It can also learn complex control policies that address hard-to-model, nonlinear real-life dynamics and generalize beyond the training conditions. In this thesis, RL is employed to train deep Q-network (DQN) agents that interact with the power electronic plant environment to generate data from which a policy (or controller) is learned. The policy is then deployed on a microcontroller, and its performance is verified via hardware measurements. A deep dive connects the RL components to the power electronics system, addressing all aspects from problem formulation to policy learning, and three power electronics problems are considered. In the first hardware design, policies are trained to perform voltage regulation of a buck converter, both for a fixed output voltage and for a commanded reference voltage. New contributions of this work include the dependency of steady-state error on the action space, the formulation of the action-to-duty-ratio mapping, and control based on a commanded input. The second design considers a renewable energy application: maximum power point (MPP) tracking of a photovoltaic module. This DQN-based hardware implementation is shown to track the MPP optimally. The thesis also provides insights into the DQN neural network size required for MPP tracking, which can be a useful starting point for future research. The final design focuses on reliability: a policy is trained to control the temperature of the switching element of a buck converter along with safe charging of the connected battery. Novel methods to compress simulation time to obtain the data needed for training are proposed. This is a first-of-its-kind approach to obtaining an RL policy for fairly uncorrelated analog variables with slow dynamics. Throughout these experiments, comparisons with conventional control methods demonstrate the superior control performance of the RL-based policies. The RL policies also perform well for unseen states owing to the generalization offered by the DQN algorithm. The thesis further provides guidance to expedite the sample-inefficient RL training process, both from a power electronics standpoint (e.g., model and circuitry simplification) and from a machine learning standpoint (e.g., hyperparameter and neural network size selection). Finally, conclusions and opportunities for future exploration are provided. With the lower cost of training, improved algorithms and training methods, faster graphics processing units, and artificial intelligence accelerators, these intelligent RL methods will proliferate into the power industry. To that end, the work reported here should help expand RL applications in the field of power electronics.
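To make the voltage-regulation setup concrete, here is a minimal, hypothetical sketch (not the author's code) of DQN-based duty-ratio control for a buck converter. It assumes an averaged converter model with illustrative component values, a small discretized duty-ratio action set, and a squared-voltage-error reward; for brevity it omits the target network and replay-buffer cap of a full DQN implementation.

```python
# Minimal illustrative sketch: DQN learning a duty-ratio policy for a
# buck converter (averaged model). All names and parameter values are
# assumptions for illustration, not taken from the thesis.
import random
import numpy as np
import torch
import torch.nn as nn

class BuckEnv:
    """Forward-Euler averaged buck-converter model; state = [v_ref - v_out, i_L]."""
    def __init__(self, v_in=12.0, L=100e-6, C=100e-6, R=2.0, dt=10e-6, v_ref=5.0):
        self.v_in, self.L, self.C, self.R, self.dt, self.v_ref = v_in, L, C, R, dt, v_ref
        self.v = self.i = 0.0

    def step(self, duty):
        di = (duty * self.v_in - self.v) / self.L   # inductor current slope
        dv = (self.i - self.v / self.R) / self.C    # capacitor voltage slope
        self.i += di * self.dt
        self.v += dv * self.dt
        err = self.v_ref - self.v
        return np.array([err, self.i], dtype=np.float32), -err * err

ACTIONS = np.linspace(0.0, 1.0, 11)  # discretized duty ratios (action space)

q_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer, gamma, eps = [], 0.99, 0.1

env = BuckEnv()
s = np.array([env.v_ref, 0.0], dtype=np.float32)
for t in range(20000):
    # Epsilon-greedy selection over the discrete duty ratios
    if random.random() < eps:
        a = random.randrange(len(ACTIONS))
    else:
        with torch.no_grad():
            a = int(q_net(torch.tensor(s)).argmax())
    s2, r = env.step(ACTIONS[a])
    buffer.append((s, a, r, s2))
    s = s2
    if len(buffer) >= 64:
        # One-step TD update on a random minibatch (no target network, for brevity)
        batch = random.sample(buffer, 64)
        sb  = torch.tensor(np.array([b[0] for b in batch]))
        ab  = torch.tensor([b[1] for b in batch])
        rb  = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        s2b = torch.tensor(np.array([b[3] for b in batch]))
        with torch.no_grad():
            target = rb + gamma * q_net(s2b).max(dim=1).values
        q = q_net(sb).gather(1, ab.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In a workflow like the one the abstract describes, the trained network would then be exported (for example, as quantized weights) for deployment on a microcontroller, where inference over the small state vector selects the duty ratio each control cycle.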
Description
University of Minnesota Ph.D. dissertation. March 2023. Major: Electrical Engineering. Advisor: Ned Mohan. 1 computer file (PDF); xi, 129 pages.
Suggested citation
Shekhar, Sameer. (2023). Power electronics control using Reinforcement learning. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/269998.