Robust Fragmentation: A Data-Driven Approach to Decision-Making Under Distributional Ambiguity (2016-07)
Moulton, Jeffrey

Decision makers must often weigh many possible future scenarios when making a decision. A manager must choose inventory levels to maximize profit when demand is unknown, investors choose a portfolio to maximize gain in the face of uncertain stock price fluctuations, and service providers must pick facility locations to minimize distances traveled when the location of the next customer is random. When uncertainty is involved, the outcome is affected by factors outside the decision maker's control and knowledge at the time of the decision. Such factors are hard to account for, especially when exact information about the uncertainty is unavailable. It may be difficult to enumerate all possibilities or to estimate how likely each scenario is; there may be infinitely many scenarios, far too many to contemplate. Generally, a precise distribution of these factors is unavailable: the unknown data, which can be quantified as a vector of parameters, follows an unknown distribution, leading to the term distributional ambiguity. The decision maker may have only a sample of past events to guide them in balancing the costs and benefits of each possibility. Such limited historical data can provide guidance, but how best to use it? One would like a decision strategy that uses the data efficiently and performs well in expectation, but is also robust to noise, changing conditions, and small sample size.

To this end, we develop robust fragmentation (RF), a data-driven approach to decision-making under distributional ambiguity. We consider a stochastic programming framework in which the decision maker aims to optimize an expected outcome, but the governing distribution of the random parameters affecting that outcome is unknown; only a historical sample of past realizations of those parameters is available. Our goal is to leverage the historical data to approximate the true problem effectively and robustly. We fragment the support of the data into pieces to dissect its structure, and reduce the data by replacing it with summary statistics by region. These statistics are used to construct an ambiguity set for the distribution: the set of all distributions consistent with the same regional reduction. We thereby reduce the problem size and avoid overfitting to the training sample. Our approach allows for two types of ambiguity: ambiguity in support and ambiguity in probability. After constructing the ambiguity set, we optimize the worst-case expectation over it to obtain distributional robustness. Constraining the distribution regionally lets us capture detailed information about the distribution and keeps the approach from being too conservative, and the ambiguity can be tailored to the structure, amount, and reliability of the data.

Robust fragmentation generalizes and extends several classical and newer approaches to approximating a stochastic program, including the sample average approximation (SAA), moments-based distributionally robust optimization (MDRO), and distributionally robust optimization based on probabilities of individual events. We demonstrate how RF fits into this larger picture and provides an adaptive way of balancing the benefits and drawbacks of each kind of approach.
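To make the construction concrete, the following schematic states an RF-style problem in the regional-moments form suggested above. The regions S_i, probabilities p_i, and regional means mu_i are illustrative notation, not the thesis's exact formulation; the equalities could be relaxed to intervals or sets to model the ambiguity in probability and support mentioned above.

```latex
% Schematic RF-style formulation (illustrative notation, not the exact model):
% the support is fragmented into regions S_1, ..., S_K, and the sample is
% reduced to an estimated probability p_i and mean \mu_i for each region.
\min_{x \in X} \; \sup_{P \in \mathcal{D}} \; \mathbb{E}_P\big[ f(x, \xi) \big],
\qquad
\mathcal{D} = \left\{ P \;\middle|\;
  \begin{aligned}
    & P(\xi \in S_i) = p_i, && i = 1, \dots, K, \\
    & \mathbb{E}_P\big[ \xi \mid \xi \in S_i \big] = \mu_i, && i = 1, \dots, K
  \end{aligned}
\right\}
```

Read this way, taking K = 1 (a single region carrying the overall moments) roughly recovers a moments-based DRO, while shrinking each region to an individual sample point with equal weights collapses the ambiguity set to the empirical distribution and recovers SAA; this is the sense in which RF interpolates between the approaches named above.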
We make comparisons both theoretically and through numerical experiments. We outline a procedure for breaking up the data and formulating the RF optimization problem for polytopal, ellipsoidal, and other types of fragmentations, and we prove convergence results that establish the consistency of our approach. RF can handle overlapping regions in the fragmentation by adapting the way the statistics are computed. Our algorithm for fragmenting the data relies heavily on strategic clustering of the sample to dissect its modality; we discuss methods and heuristics for implementing the approach and compare different clustering methods in terms of how well they serve as a preliminary step to our procedure.

We consider applications in management science and financial mathematics and perform numerical experiments to demonstrate how the approach fits into existing frameworks and to evaluate its performance. The new approach has a clear advantage when the data is multimodal and the training sample is small, though not too small relative to the number of modes. Additionally, we illustrate some extensions of the general model. The black swan approach adds an extra layer of robustness to account for rare, unexpected events of large magnitude and consequence. Two-stage and multistage stochastic programs extend the framework to decisions that must be made in stages, where random occurrences after each stage influence the parameters of the next. We derive formulations for these extended models and investigate the production-transportation problem as an application.
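As an illustration of the clustering-driven fragmentation step, here is a minimal sketch in Python. The function name fragment_sample, the use of k-means, and the choice of statistics are assumptions made for the example; the thesis compares several clustering methods rather than prescribing this one.

```python
# Illustrative sketch of the fragmentation step, not the thesis's exact
# algorithm: cluster the historical sample to dissect its modality, then
# reduce each region to summary statistics (probability mass and mean).
import numpy as np
from sklearn.cluster import KMeans

def fragment_sample(data, n_regions, seed=0):
    """Cluster `data` (n_samples x dim) into regions and return, per region,
    the empirical probability and conditional mean that would feed the
    ambiguity set. Name and signature are assumptions for illustration."""
    km = KMeans(n_clusters=n_regions, n_init=10, random_state=seed).fit(data)
    stats = []
    for i in range(n_regions):
        pts = data[km.labels_ == i]
        stats.append({
            "prob": len(pts) / len(data),   # empirical P(xi in S_i)
            "mean": pts.mean(axis=0),       # empirical E[xi | xi in S_i]
            "center": km.cluster_centers_[i],
        })
    return stats

# Example: a bimodal demand sample; two regions capture the two modes.
rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(20, 2, (50, 1)),
                         rng.normal(60, 5, (50, 1))])
for s in fragment_sample(sample, n_regions=2):
    print(f"p = {s['prob']:.2f}, mean = {s['mean'][0]:.1f}")
```

The regional probabilities and means produced this way are exactly the inputs assumed by the schematic formulation given earlier; in a full implementation, the region shapes (polytopal, ellipsoidal, or other) and the handling of overlapping regions would follow the procedures described in the thesis.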