
University Digital Conservancy



Browsing by Subject "Large-Scale Optimization"

    A Robust Control Perspective on Optimization of Strongly-Convex Functions
    (2016-07) Hu, Bin
Large-scale optimization is a central topic in big data science. First-order black-box optimization methods have been widely applied in machine learning problems, since the oracle complexity of these methods can be independent of the parameter dimension. In this dissertation, we formulate linear matrix inequality (LMI) conditions to analyze the convergence rates of various deterministic and stochastic optimization methods. We derive these LMIs using integral quadratic constraints (IQCs) and dissipation inequalities.

The first part of this dissertation analyzes deterministic first-order methods (gradient descent, Nesterov's method, etc.) via generalized eigenvalue problems (GEVPs). A standard dissipation inequality requires a non-negative definite storage function and "hard" IQCs, which must hold over all finite time horizons. We develop a modified dissipation inequality that requires neither non-negative definite storage functions nor hard IQCs. We then show that linear rate analysis of a given deterministic first-order method is equivalent to uniform stability analysis of a related scaled system, which enables the derivation of linear rate analysis conditions using standard IQCs for a scaled operator. A soft Zames-Falb IQC is derived and used in the modified dissipation inequality, leading to a GEVP formulation for linear rate analysis of first-order optimization methods.

In the second part of this dissertation, we extend the IQC framework to analyze stochastic optimization methods, which have been widely applied in empirical risk minimization and machine learning problems. We first combine jump system theory with IQCs to derive LMI conditions for rate analysis of the stochastic average gradient (SAG) method and its variants (SAGA, etc.). The resulting LMI conditions can be used to analyze the convergence rates of SAG, SAGA, and other related variants with uniform or non-uniform sampling strategies.
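The linear rates certified by such analyses can be checked numerically on simple instances. The sketch below (illustrative only, not the dissertation's LMI machinery; all variable names are assumptions) runs gradient descent on an m-strongly convex, L-smooth quadratic and compares the observed per-step contraction to the classical worst-case rate max(|1 - αm|, |1 - αL|):

```python
import numpy as np

# Gradient descent on f(x) = 0.5 * x^T diag(m, L) x, which is m-strongly
# convex and L-smooth. With stepsize alpha, the worst-case linear rate is
# rho = max(|1 - alpha*m|, |1 - alpha*L|); alpha = 2/(m+L) minimizes it.
m, L = 1.0, 10.0
alpha = 2.0 / (m + L)
rho = max(abs(1 - alpha * m), abs(1 - alpha * L))   # = (L - m)/(L + m)

H = np.diag([m, L])              # Hessian; the minimizer is x* = 0
x = np.array([1.0, 1.0])
errors = []
for _ in range(50):
    x = x - alpha * (H @ x)      # gradient step: grad f(x) = H x
    errors.append(np.linalg.norm(x))

# On this quadratic the contraction per step matches the predicted rate.
observed = errors[-1] / errors[-2]
print(f"predicted rate {rho:.4f}, observed {observed:.4f}")
```

For this diagonal quadratic both coordinates contract by exactly 9/11 per step, so the observed ratio coincides with the predicted rate; for general strongly convex functions the rate is an upper bound.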
We then develop LMI conditions to analyze the stochastic gradient (SG) method and its variants. The SG method with a constant stepsize typically achieves a linear convergence rate only up to some fixed tolerance. We develop stochastically averaged quadratic constraints with disturbance terms that quantify the inaccuracy of the SG method. Several known results about the SG method are recovered using our proposed LMI conditions, and we obtain new results on the convergence of the SG method under different conditions.
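The "linear rate up to a fixed tolerance" behavior of constant-stepsize SG is easy to observe empirically. The sketch below (an illustration, not the dissertation's analysis; the problem setup and names are assumptions) runs SG on a small least-squares problem: the distance to the minimizer shrinks roughly linearly at first and then hovers around a noise- and stepsize-dependent floor:

```python
import numpy as np

# Stochastic gradient with constant stepsize on least squares:
# f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2, sampling one term per step.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)    # noisy targets
x_star = np.linalg.lstsq(A, b, rcond=None)[0]    # minimizer of the full problem

alpha = 0.01                                     # constant stepsize
x = np.zeros(d)
dist = []
for _ in range(5000):
    i = rng.integers(n)                          # sample one data point
    grad_i = (A[i] @ x - b[i]) * A[i]            # stochastic gradient of term i
    x = x - alpha * grad_i
    dist.append(np.linalg.norm(x - x_star))

# Early iterations contract roughly linearly; late iterations plateau at a
# tolerance set by the gradient noise and the stepsize.
print(f"initial error {dist[0]:.3f}, final error {dist[-1]:.3f}")
```

Shrinking the stepsize lowers the final plateau but slows the initial linear phase, which is the trade-off the disturbance terms in the quadratic constraints are designed to capture.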
