Browsing by Subject "Convex Optimization"
Now showing 1 - 7 of 7
Item: Computability of Preference, Utility, and Demand (Center for Economic Research, Department of Economics, University of Minnesota, 1996-12). Richter, Marcel K.; Wong, Kam-Chau.

This paper studies consumer theory from the bounded rationality approach proposed in Richter and Wong (1996a), with a "uniformity principle" constraining the magnitudes (prices, quantities, etc.) and the operations (perceiving, evaluating, choosing, communicating, etc.) that agents can use. In particular, we operate in a computability framework, where commodity quantities, prices, consumer preferences, utility functions, and demand functions are computable by finite algorithms (Richter and Wong (1996a)). We obtain a computable utility representation theorem. We prove an existence theorem for computable maximizers of quasiconcave computable utility functions (preferences), and prove the computability of the demand functions generated by such functions (preferences). We also provide a revealed preference characterization of computable rationality for the finite case. Beyond consumer theory, the results have applications in general equilibrium theory (Richter and Wong (1996a)).

Item: A Computational and Statistical Study of Convex and Nonconvex Optimization with Applications to Structured Source Demixing and Matrix Factorization Problems (2017-09). Kadkhodaie Elyaderani, Mojtaba.

Modern machine learning problems that emerge from real-world applications typically involve estimating high-dimensional model parameters, whose number may be of the same order as, or even significantly larger than, the number of measurements. In such high-dimensional settings, statistically consistent estimation of the true underlying models via classical approaches is often impossible due to the lack of identifiability. A recent solution to this issue is to incorporate regularization functions into estimation procedures in order to promote the intrinsic low-complexity structure of the underlying models. Statistical studies have established successful recovery of model parameters via structure-exploiting regularized estimators, and computational efforts have examined efficient numerical procedures for accurately solving the associated optimization problems. In this dissertation, we study the statistical and computational aspects of several regularized estimators that successfully reconstruct high-dimensional models. The investigated estimation frameworks are motivated by applications in different areas of engineering, such as structural health monitoring and recommendation systems. In particular, the group Lasso recovery guarantees provided in Chapter 2 give insight into the application of this estimator for localizing material defects in a structural diagnostics problem. Chapter 3 presents a convergence study of an accelerated variant of the well-known alternating direction method of multipliers (ADMM) for minimizing strongly convex functions. The analysis is followed by experimental evidence of the algorithm's applicability to a ranking problem. Finally, Chapter 4 presents a local convergence analysis of regularized factorization-based estimators for reconstructing low-rank matrices. Interestingly, the analysis of this chapter reveals the interplay between the statistical and computational aspects of such (non-convex) estimators. Therefore, it can be useful in a wide variety of problems that involve low-rank matrix estimation.
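The group Lasso estimator referenced in Chapter 2 of the abstract above admits a standard proximal-gradient implementation; the sketch below is a generic illustration rather than code from the dissertation, and the design matrix A, the group structure, and the weight lam are arbitrary placeholders.

```python
import numpy as np

def group_soft_threshold(v, t):
    """Proximal operator of t * ||v||_2 (block soft-thresholding)."""
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)
    return (1.0 - t / norm) * v

def group_lasso_pgd(A, y, groups, lam, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam * sum_g ||x_g||_2 by proximal gradient."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    step = 1.0 / L
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the least-squares loss
        z = x - step * grad
        for g in groups:                   # the prox is separable across groups
            x[g] = group_soft_threshold(z[g], step * lam)
    return x

# Toy usage: 3 groups of 2 features each, only the first group is active.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 6))
x_true = np.array([1.5, -2.0, 0.0, 0.0, 0.0, 0.0])
y = A @ x_true + 0.01 * rng.standard_normal(50)
groups = [[0, 1], [2, 3], [4, 5]]
print(group_lasso_pgd(A, y, groups, lam=1.0).round(2))
```

The block soft-thresholding step is what zeroes out entire groups at once, which is the structural behavior the recovery guarantees in Chapter 2 concern.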
Item: Geometric partitioning algorithms for fair division of geographic resources (2014-07). Devulapalli, Raghuveer.

This dissertation focuses on a fundamental but under-researched problem: how does one divide a piece of territory into smaller pieces in an efficient way? In particular, we are interested in the \emph{map segmentation problem} of partitioning a geographic region into smaller subregions for allocating resources or distributing a workload among multiple agents. This work yields useful solutions for a variety of fundamental problems, ranging from congressional districting, facility location, and supply chain management to air traffic control and vehicle routing. In a typical map segmentation problem, we are given a geographic region $R$, a probability density function defined on $R$ (representing, say, population density, the distribution of a natural resource, or locations of clients), and a set of points in $R$ (representing, say, service facilities or vehicle depots). We seek a \emph{partition} of $R$, that is, a collection of disjoint sub-regions $\{R_1, \ldots, R_n\}$ with $\bigcup_i R_i = R$, that optimizes some objective function while satisfying a shape condition. As examples of shape conditions, we may require that all sub-regions be compact, convex, star convex, simply connected (not having holes), connected, or merely measurable. Such problems are difficult because the search space is infinite-dimensional (since we are designing the boundaries between sub-regions) and because the shape conditions are generally difficult to enforce using standard optimization methods. There are also many interesting variants and extensions of this problem. It is often the case that the optimal partition for a problem changes over time as new information about the region is collected. In that case, we have an \emph{online} problem and we must re-draw the sub-region boundaries as time progresses. In addition, we often prefer to construct these sub-regions in a \emph{decentralized} fashion: that is, the sub-region assigned to agent $i$ should be computable using only information local to agent $i$ (such as nearby neighbors or information about its surroundings), and the optimal boundary between two sub-regions should be computable using only knowledge available to those two agents. This dissertation designs geometric algorithms that aim to solve the above problems while respecting the various design constraints. We describe the drawbacks of the current approach to solving map segmentation problems, its ineffectiveness at imposing geometric shape conditions, and its limited utility in solving the online version of the problem. Using an intrinsically interdisciplinary approach, combining elements from variational calculus, computational geometry, geometric probability theory, and vector space optimization, we formulate the problems geometrically and then use fast geometric algorithms to solve them. We demonstrate our success by solving problems with a particular choice of objective function while enforcing certain shape conditions. In fact, it turns out that such methods also give useful insights and algorithms for classical location problems such as the continuous $k$-medians problem, where the aim is to find optimal locations for facilities. We use a map segmentation technique to obtain a constant-factor approximation algorithm for the continuous $k$-medians problem in a convex polygon. We conclude the thesis by describing how we intend to build on this success and develop algorithms for larger classes of these problems.
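As a loose sketch of the partition-then-relocate idea behind the continuous $k$-medians discussion above (a generic discretized heuristic, not the constant-factor algorithm developed in the dissertation), one can sample the region, assign each sample point to its nearest facility to induce a partition into cells, and move each facility toward the geometric median of its cell:

```python
import numpy as np

def weiszfeld(points, x0, n_iter=20, eps=1e-9):
    """Approximate the geometric median of `points` by Weiszfeld iterations."""
    x = x0.copy()
    for _ in range(n_iter):
        d = np.linalg.norm(points - x, axis=1) + eps
        w = 1.0 / d
        x = (points * w[:, None]).sum(axis=0) / w.sum()
    return x

def k_medians_partition(samples, facilities, n_rounds=10):
    """Alternate nearest-facility assignment (a partition of the sampled region)
    with geometric-median relocation of each facility inside its own cell."""
    f = facilities.copy()
    for _ in range(n_rounds):
        dists = np.linalg.norm(samples[:, None, :] - f[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # nearest-facility sub-regions
        for j in range(len(f)):
            cell = samples[labels == j]
            if len(cell):
                f[j] = weiszfeld(cell, f[j])   # relocate within the cell
    return f, labels

# Toy usage: uniform samples in the unit square, 3 facilities.
rng = np.random.default_rng(1)
samples = rng.uniform(0, 1, size=(2000, 2))
facilities0 = rng.uniform(0, 1, size=(3, 2))
facilities, labels = k_medians_partition(samples, facilities0)
print(facilities.round(3))
```

The nearest-facility assignment produces the sub-region boundaries implicitly, which is the same "partition as a by-product of location" viewpoint the abstract describes.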
Item: High Dimensional Learning with Structure Inducing Constraints and Regularizers (2017-08). Asiaeetaheri, Amir.

Explosive growth in data generation through science and technology calls for new computational and analytical tools. For the statistical machine learning community, one major challenge is data sets whose dimension is larger than the number of samples. The low-sample, high-dimension regime violates a core assumption of most traditional learning methods. To address this challenge, many high-dimensional learning algorithms have been developed over the past decade. One significant high-dimensional problem in machine learning is linear regression where the number of features is greater than the number of samples. Initially, the primary focus of the high-dimensional linear regression literature was on estimating sparse coefficients through $l_1$-norm regularization. In a more general framework, one can assume that the underlying parameter has an intrinsic ``low-dimensional complexity'' or \emph{structure}. Recently, researchers have looked at structures beyond sparsity that are induced by \emph{any norm} used as the regularizer or constraint. In this thesis, we focus on two variants of the high-dimensional linear model, i.e., data sharing and errors-in-variables, where the structure of the parameter is captured by a suitable norm. We introduce estimators for these models and study their theoretical properties. We characterize the sample complexity of our estimators and establish non-asymptotic high-probability error bounds for them. Finally, we utilize dictionary learning and sparse coding to perform Twitter sentiment analysis as an application of high-dimensional learning. Some discrete machine learning problems can also be posed as constrained set-function optimization, where the constraints induce a structure over the solution set. In the second part of the thesis, we investigate a prominent set-function optimization problem, social influence maximization, under the novel ``heat conduction'' influence propagation model. We formulate the problem as submodular maximization with a cardinality constraint and provide an efficient algorithm for it. Through extensive experiments on several large real and synthetic networks, we show that our algorithm outperforms well-studied methods from the influence maximization literature.
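The cardinality-constrained submodular maximization mentioned at the end of the abstract above is typically attacked with the standard greedy rule; the sketch below uses a toy coverage objective as a stand-in for the ``heat conduction'' spread model from the thesis, so the reach sets are placeholders.

```python
def greedy_max_cover(reach_sets, k):
    """Greedy maximization of the monotone submodular coverage function
    f(S) = |union of reach_sets[i] for i in S| subject to |S| <= k.
    The greedy rule carries the classical (1 - 1/e) approximation guarantee."""
    chosen, covered = [], set()
    for _ in range(k):
        best_i, best_gain = None, 0
        for i, s in enumerate(reach_sets):
            if i in chosen:
                continue
            gain = len(s - covered)          # marginal coverage gain of adding i
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:                   # no element adds any coverage
            break
        chosen.append(best_i)
        covered |= reach_sets[best_i]
    return chosen, covered

# Toy usage: each candidate seed node reaches a set of other nodes.
reach = [{1, 2, 3}, {3, 4}, {5, 6, 7, 8}, {1, 8}, {2, 9}]
seeds, spread = greedy_max_cover(reach, k=2)
print(seeds, sorted(spread))
```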
Item: Noninvasive Electromagnetic Neuroimaging Of Epilepsy Networks (2018-05). Sohrabpour, Abbas.

In this dissertation, I propose a new source imaging algorithm that uses noninvasive surface measurements, such as EEG and MEG, to estimate underlying brain activity. I employ sparse signal processing techniques (imposing sparsity in multiple domains) together with an iterative reweighting scheme to obtain an algorithm that, objectively and without the need for subjective thresholds, yields extended solutions that not only precisely estimate the location of the underlying sources of activity in the brain but also provide a high-quality estimate of the sources' extent and size. This algorithm is formulated as a convex optimization problem. I also propose a scheme that extends the algorithm to sources that evolve over time, providing a spatio-temporal estimate of underlying brain activity. In this manner, an efficient algorithm that can image dynamic underlying brain networks is developed. The main application that motivated this algorithm, and in which it is validated, is imaging the epileptogenic tissue of focal epilepsy patients. By analyzing inter-ictal spikes and ictal signals from the EEG recordings of focal epilepsy patients and comparing the results to clinical findings, this dissertation shows that the proposed algorithm works well in real-world applications and clinical settings. These clinical findings include the surgically resected volume and the seizure onset zone identified by intra-cranial studies; our estimated epileptogenic tissue matched these findings well. While the algorithm was developed for and tested in this particular application, i.e., epileptic activity imaging, it is a general source imaging algorithm, and many other applications are possible, as pointed out throughout the dissertation.

Item: Quantification of the Impact of Uncertainty in Power Systems using Convex Optimization (2017-06). Choi, Hyungjin.

Rampant integration of renewable resources (e.g., photovoltaic and wind-energy conversion systems) and of uncontrollable and elastic loads (e.g., plug-in hybrid electric vehicles) is rapidly transforming power systems. In this environment, analytic methods to quantify the impact of parametric and input uncertainty will be critical to ensuring the reliable operation of next-generation power systems. This task is analytically and computationally challenging since power-system dynamics are nonlinear in nature. In this thesis, we present analytic methods to quantify the impact of parametric and input uncertainties for two important applications in power systems: i) uncertainty propagation in power-system differential-algebraic equation models and power flow, and ii) robust stability assessment of power-system dynamics. For the first topic, an optimization-based method is presented to estimate maximum and minimum bounds on state variables while acknowledging worst-case parametric and input uncertainties in the model. The approach leverages a second-order Taylor-series expansion of the states around a nominal (known) solution. Maximum and minimum bounds are then estimated from either a semidefinite relaxation of a quadratically constrained quadratic program or the alternating direction method of multipliers. For the second topic, an analytical method to quantify power-system stability margins under uncertainty is presented within the framework of Lyapunov's direct method. It focuses on the algorithmic construction of Lyapunov functions and the estimation of the robust region of attraction via sum-of-squares optimization problems, which can be translated into semidefinite programs. For both topics, numerical case studies are presented for different test systems to demonstrate and validate the proposed methods.
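The semidefinite relaxation mentioned above, which lifts a quadratically constrained quadratic program to a matrix variable, can be prototyped in a few lines with a generic modeling tool. The snippet below is a minimal Shor-relaxation sketch for an arbitrary small QCQP, not the power-system formulation from the thesis, and it assumes cvxpy with an SDP-capable solver is installed.

```python
import numpy as np
import cvxpy as cp

# Small nonconvex QCQP:  maximize x^T Q x  subject to  ||x||^2 <= 1.
n = 3
rng = np.random.default_rng(2)
B = rng.standard_normal((n, n))
Q = (B + B.T) / 2                       # symmetric, generally indefinite

# Shor relaxation: lift to M = [[X, x], [x^T, 1]] >> 0, with X standing in for x x^T.
M = cp.Variable((n + 1, n + 1), symmetric=True)
X = M[:n, :n]
constraints = [M >> 0, M[n, n] == 1, cp.trace(X) <= 1]
prob = cp.Problem(cp.Maximize(cp.trace(Q @ X)), constraints)
prob.solve()

# For this trust-region-type problem the relaxation is tight:
# the SDP bound equals max(lambda_max(Q), 0).
print("SDP bound:      ", prob.value)
print("lambda_max(Q):  ", np.linalg.eigvalsh(Q)[-1])
```

The same lifting device is what turns worst-case bound estimation over quadratic constraints into a tractable convex problem.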
Item: Successive convex approximation: analysis and applications (2014-05). Razaviyayn, Meisam.

The block coordinate descent (BCD) method is widely used for minimizing a continuous function $f$ of several block variables. At each iteration of this method, a single block of variables is optimized while the remaining variables are held fixed. To ensure the convergence of the BCD method, the subproblem for each block variable needs to be solved to its unique global optimum. Unfortunately, this requirement is often too restrictive for many practical scenarios. In this dissertation, we first study an alternative inexact BCD approach that updates the variable blocks by successively minimizing a sequence of approximations of $f$ that are either locally tight upper bounds of $f$ or strictly convex local approximations of $f$. Different block selection rules are considered, such as cyclic (Gauss-Seidel), greedy (Gauss-Southwell), randomized, or even multiple (parallel) simultaneous blocks. We characterize the convergence conditions and iteration complexity bounds for a fairly wide class of such methods, especially for the cases where the objective functions are either non-differentiable or non-convex. The case of a linear constraint is also studied briefly, using the alternating direction method of multipliers (ADMM) idea. In addition to the deterministic case, the problem of minimizing the expected value of a cost function parameterized by a random variable is also investigated. An inexact sample average approximation (SAA) method, developed based on the successive convex approximation idea, is proposed and its convergence is studied. Our analysis unifies and extends the existing convergence results for many classical algorithms such as the BCD method, the difference-of-convex-functions (DC) method, the expectation-maximization (EM) algorithm, and the classical stochastic (sub-)gradient (SG) method for nonsmooth nonconvex optimization, all of which are popular for large-scale optimization problems involving big data. In the second part of this dissertation, we apply the proposed framework to two practical problems: interference management in wireless networks and the dictionary learning problem for sparse representation. First, the computational complexity of these problems is studied. Then, using the successive convex approximation framework, we propose novel algorithms for these practical problems. The proposed algorithms are evaluated through extensive numerical experiments on real data.
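A minimal sketch of the surrogate-minimization idea described above, under assumptions not taken from the dissertation: the smooth loss is majorized by a Lipschitz quadratic and a concave log penalty by its linearization, so each convex subproblem has a closed-form solution. The problem instance and parameter values below are arbitrary placeholders.

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding (prox of a weighted l1 term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sca_log_penalty(A, y, lam=0.5, delta=0.1, n_iter=300):
    """Successive convex approximation for the nonconvex problem
        min_x 0.5*||A x - y||^2 + lam * sum_i log(delta + |x_i|).
    Each iteration minimizes a locally tight convex upper bound:
    a Lipschitz quadratic for the smooth loss plus a linearization
    of the concave penalty (i.e., a weighted l1 norm)."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    for _ in range(n_iter):
        w = lam / (delta + np.abs(x))        # weights from the penalty's slope at x
        z = x - (A.T @ (A @ x - y)) / L      # gradient step on the smooth part
        x = soft(z, w / L)                   # closed-form minimizer of the surrogate
    return x

# Toy usage: a sparse vector estimated from underdetermined noisy measurements.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.nonzero(np.abs(sca_log_penalty(A, y)) > 0.1)[0])
```

Because both surrogate pieces are globally valid upper bounds that are tight at the current iterate, the objective is monotonically non-increasing, which is the basic mechanism behind the convergence analyses surveyed in the abstract.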