Deep reinforcement learning for personalized treatment recommendation
Authors
Liu, Mingyang
Published Date
2023-03
Type
Thesis or Dissertation
Abstract
In precision medicine, the ultimate goal is to recommend the most effective treatment to an individual patient based on patient-specific molecular and clinical profiles, which may be high-dimensional. To advance cancer treatment, large-scale screenings of cancer cell lines against chemical compounds have been performed to better understand the relationship between genomic features and drug response; existing machine learning approaches rely exclusively on supervised learning, including penalized regression and recommender systems. When there is only one time point, the problem reduces to individualized treatment selection, which seeks to maximize a clinical outcome for a specific patient based on the patient's clinical or genomic characteristics, given patients' heterogeneous responses to treatments. Although developing such a rule is conceptually important to personalized medicine, existing methods such as $L_1$-penalized least squares \citep{qian2011performance} suffer from maximizing the clinical outcome only indirectly, while outcome weighted learning \citep{zhao2012estimating}, which maximizes the clinical outcome directly, is not robust to perturbations of the outcome. We first propose a weighted $\psi$-learning method to optimize an individualized treatment rule that, through the notion of separation, is robust to perturbations of data near the decision boundary. To deal with the resulting nonconvexity, we employ a difference-of-convex (DC) algorithm that solves the minimization iteratively, based on a decomposition of the cost function into a difference of two convex functions.
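To illustrate the general DC idea (a minimal sketch, not the dissertation's actual algorithm), note that a $\psi$-type loss admits a decomposition into two convex hinge-type functions, e.g. $\psi(z) = 2(1-z)_+ - 2(-z)_+$; each DC iteration linearizes the concave part at the current iterate and minimizes the resulting convex surrogate. The penalty, solver, and all names below are illustrative assumptions, and the outcome-based weights of the weighted $\psi$-learning method are omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def dc_psi_fit(X, y, lam=1.0, n_iter=20, tol=1e-6):
    """Illustrative DC algorithm for psi(z) = 2*(1-z)_+ - 2*(-z)_+,
    with z = y * (X @ w) and a ridge penalty (an assumption here).
    Each iteration linearizes h(w) = sum_i 2*(-z_i)_+ at the current
    iterate, then minimizes the convex surrogate g(w) - <grad_h, w>.
    """
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        z = y * (X @ w)
        # Subgradient of h at w: each i with z_i < 0 contributes -2*y_i*x_i.
        grad_h = -2.0 * (((z < 0).astype(float) * y) @ X)

        def surrogate(v):
            zv = y * (X @ v)
            g = 2.0 * np.maximum(0.0, 1.0 - zv).sum() + lam * (v @ v)
            return g - grad_h @ v

        # L-BFGS-B with numerical gradients; adequate for a demo despite
        # the surrogate's mild non-smoothness at the hinge.
        w_new = minimize(surrogate, w, method="L-BFGS-B").x
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w

# Toy usage with simulated data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=100))
w_hat = dc_psi_fit(X, y)
```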
On this ground, we also introduce a variable selection method that removes redundant variables to further improve performance. Finally, we illustrate the proposed method through simulations and a lung health study, and demonstrate that it achieves higher accuracy in predicting individualized treatments. However, it would be more efficient to apply reinforcement learning (RL) to learn sequentially as data accrue: selecting the most promising therapy for a patient given individual molecular and clinical features, then collecting and learning from the resulting data. To this end, we propose a novel personalized ranking system called Proximal Policy Optimization Ranking (PPORank), which ranks drugs by their predicted effects for each cell line (or patient) within the framework of deep reinforcement learning (DRL). Modeling treatment recommendation as a Markov decision process (MDP), the proposed method learns to recommend the most suitable drugs sequentially and to improve continuously over time. As a proof of concept, we conduct experiments on two large-scale cancer cell line data sets in addition to simulated data. The results demonstrate that the proposed DRL-based PPORank outperforms state-of-the-art competitors based on supervised learning. Taken together, we conclude that novel methods in the DRL framework hold great potential for precision medicine and merit further study.
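To make the MDP and PPO ingredients concrete, here is a schematic PyTorch sketch of a clipped-surrogate PPO update for a policy that scores candidate drugs given a cell line's features. The architecture, dimensions, reward construction, and advantage estimate are all hypothetical assumptions for illustration, not PPORank's actual design.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; illustrative only.
N_FEATURES = 64    # molecular/clinical features per cell line
N_DRUGS = 100      # candidate drugs to rank
CLIP_EPS = 0.2     # PPO clipping parameter

class DrugScorer(nn.Module):
    """Scores every candidate drug for one cell line; the softmax over
    scores defines the recommendation policy."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 128), nn.ReLU(),
            nn.Linear(128, N_DRUGS),
        )
    def forward(self, x):
        return self.net(x)

policy = DrugScorer()
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

def ppo_step(states, actions, old_logp, advantages):
    """One clipped-surrogate PPO update (schematic)."""
    dist = torch.distributions.Categorical(logits=policy(states))
    ratio = torch.exp(dist.log_prob(actions) - old_logp)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS) * advantages
    loss = -torch.min(unclipped, clipped).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy rollout: random "cell lines" and simulated drug-response rewards.
states = torch.randn(32, N_FEATURES)
with torch.no_grad():
    dist = torch.distributions.Categorical(logits=policy(states))
    actions = dist.sample()
    old_logp = dist.log_prob(actions)
rewards = torch.randn(32)                # stand-in for observed sensitivities
advantages = rewards - rewards.mean()    # crude mean baseline
print(ppo_step(states, actions, old_logp, advantages))
```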
Description
University of Minnesota Ph.D. dissertation. March 2023. Major: Statistics. Advisors: Xiaotong Shen, Wei Pan. 1 computer file (PDF); xi, 92 pages.
Suggested citation
Liu, Mingyang. (2023). Deep reinforcement learning for personalized treatment recommendation. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/257106.