Author: Anderson, Loren
Accessioned: 2022-08-29
Available: 2022-08-29
Issued: 2022-04
URI: https://hdl.handle.net/11299/241402
Description: University of Minnesota Ph.D. dissertation. 2022. Major: Mathematics. Advisor: Fadil Santosa. 1 computer file (PDF); 113 pages.

Abstract: We perform a comprehensive study of Bayesian sequential optimal experimental design techniques applied to inverse problems. We transform the Bayesian sequential optimal experimental design problem into a reinforcement learning problem to gauge the power of recent deep reinforcement learning algorithms against other baseline algorithms. Using KL-divergence as a measure of information gain, we construct objectives that maximize information gain for batch design, greedy design, black-box Bayesian optimization, multi-armed bandit optimization, dynamic programming, approximate dynamic programming, and reinforcement learning. This work showcases novel comparisons between the aforementioned methods and a new application of off-the-shelf reinforcement learning algorithms to Bayesian sequential optimal experimental design for inverse problems in differential equation models.

Language: en
Keywords: Bayesian Experimental Design; Deep Reinforcement Learning; Information Gain; Inverse Problems; Optimal Experimental Design; Sequential Experimental Design
Title: Bayesian Sequential Optimal Experimental Design for Inverse Problems Using Deep Reinforcement Learning
Type: Thesis or Dissertation
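To illustrate the KL-divergence objective mentioned in the abstract, here is a minimal sketch (not from the dissertation itself) of greedy design selection on a discretized parameter space: each candidate design is scored by its expected information gain, i.e. the expected KL divergence between the Bayesian posterior and the prior over the possible observations. All names and the toy numbers are assumptions for illustration.

```python
# Hypothetical sketch: score candidate experimental designs by expected
# information gain E_y[ KL(posterior || prior) ] on a discrete parameter grid.
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions; zero-probability bins contribute 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def expected_information_gain(prior, likelihoods):
    """likelihoods[y, theta] = p(y | theta, design). Returns E_y[KL(post || prior)]."""
    eig = 0.0
    for lik_y in likelihoods:            # loop over possible observations y
        evidence = float(lik_y @ prior)  # marginal p(y | design)
        if evidence > 0:
            posterior = lik_y * prior / evidence  # Bayes' rule on the grid
            eig += evidence * kl_divergence(posterior, prior)
    return eig

# Toy example: 3 parameter values, 2 candidate designs, binary outcome y.
prior = np.array([0.5, 0.3, 0.2])
design_a = np.array([[0.9, 0.5, 0.1],    # p(y=0 | theta) under design A
                     [0.1, 0.5, 0.9]])   # p(y=1 | theta) under design A
design_b = np.array([[0.6, 0.5, 0.4],    # design B: outcomes barely depend
                     [0.4, 0.5, 0.6]])   # on theta, so it is less informative
# Greedy step: pick the design with the largest expected information gain.
best = max([design_a, design_b], key=lambda L: expected_information_gain(prior, L))
```

In a sequential (greedy) scheme, the selected design's observed outcome would update the prior via Bayes' rule, and the scoring loop would repeat on the new posterior.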