Browsing by Subject "Optimization Theory"
Item: Hierarchical Optimization Problems: Theory, Algorithms and Applications (2024-08)
Tsaknakis, Ioannis

Optimization problems are ubiquitous, with applications spanning nearly all areas of science and engineering. Typically, the term "optimization problem" refers to minimization tasks, where the goal is to minimize the value of a single objective within a given set of constraints. However, many situations, such as robust learning and leader-follower games, require modeling with a hierarchy of more than one optimization task. This thesis focuses on such hierarchical optimization problems, specifically those consisting of two tasks, known as bilevel problems, and two important sub-classes: min-max problems and min-max problems with coupled constraints. Our contributions to these classes of problems are outlined as follows.

1. We develop algorithms for one-sided non-convex min-max problems. Our work is guided by practical considerations, particularly aiming to create simple iterative algorithms that are easy to implement and offer desirable (non-asymptotic) convergence guarantees, similar to gradient descent in minimization problems. Additionally, we develop an adversarial training method for the Stable Diffusion system, formulated as a min-max problem. The proposed method improves the model's robustness against certain prompting attacks, ensuring that the generated output remains consistent with the input prompt even when the latter is subject to adversarial manipulations.

2. We study min-max problems with coupled constraints. These are non-trivial extensions of min-max problems in which the constraints of one of the two tasks couple the min and the max variables. As this is a largely unexplored area, our contributions start at a fundamental level: studying the problem's complexity, developing its duality theory, defining notions of stationarity, and devising suitable reformulations. Moreover, we propose a class of deterministic and stochastic provably convergent algorithms for their solution.

3. We study bilevel problems with constraints in the lower level. The presence of constraints makes these problems more challenging than unconstrained ones, primarily due to the non-differentiability of the implicit function. For the case of linear inequality constraints, we introduce a perturbation-based technique that "smooths" the implicit objective, and we develop deterministic and stochastic implicit gradient algorithms. Additionally, when dealing with general inequality and linear equality constraints, we adopt a barrier-based penalty approach to transform the constrained problem into a simpler form. This transformation enables the development of practical gradient-based algorithms.
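The coupled-constraint setting of contribution 2 can be written generically as follows; the symbols are illustrative and not taken from the thesis:

```latex
\min_{x \in X} \; \max_{\substack{y \in Y \\ g(x, y) \le 0}} \; f(x, y)
```

Here the constraint $g(x, y) \le 0$ on the inner maximization depends on both the min variable $x$ and the max variable $y$; this coupling is what distinguishes the class from standard min-max problems, where the inner feasible set $Y$ is fixed.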
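The "simple iterative algorithms" mentioned for min-max problems can be illustrated by gradient descent-ascent (GDA), the min-max analogue of gradient descent. The sketch below is a minimal toy example, not the thesis's method: the quadratic objective, step sizes, and iteration count are all illustrative choices.

```python
import numpy as np

# Gradient descent-ascent (GDA) for a toy min-max problem:
#     min_x max_y  f(x, y) = x^2 + 2*x*y - y^2
# f is strongly convex in x and strongly concave in y, with a unique
# saddle point at (x, y) = (0, 0).

def grad_x(x, y):
    return 2 * x + 2 * y          # df/dx

def grad_y(x, y):
    return 2 * x - 2 * y          # df/dy

def gda(x0, y0, eta_x=0.05, eta_y=0.05, iters=2000):
    x, y = x0, y0
    for _ in range(iters):
        x = x - eta_x * grad_x(x, y)   # descent step on the min variable
        y = y + eta_y * grad_y(x, y)   # ascent step on the max variable
    return x, y

x_star, y_star = gda(1.0, -1.0)
print(x_star, y_star)  # both iterates approach the saddle point (0, 0)
```

Alternating the descent and ascent steps with small step sizes is enough for convergence in this convex-concave toy case; the non-convex settings studied in the thesis require the more careful algorithm design the abstract alludes to.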