Browsing by Subject "M-SGD"
Now showing 1 - 1 of 1
Optimization and Sampling using Iterative Algorithms (2022-07)
Bhattacharya, Riddhiman

Sampling and optimization are two key pillars of statistics, with applications in many other fields such as machine learning, physics, and chemistry, to name a few. Although there has been a recent boom in the literature on both optimization and sampling algorithms, some key questions about their properties remain unanswered. In this thesis we study several such algorithms in detail and address some of these questions; the common thread is that all of the algorithms are iterative. We study the Langevin Monte Carlo (LMC) algorithm with an incorrect gradient and establish asymptotic results. We also consider the Multiplicative Stochastic Gradient Descent (M-SGD) algorithm and show that its error term is asymptotically Gaussian. We further show that, under suitable assumptions, the M-SGD algorithm stays within order of the step size of a certain diffusion, irrespective of the weights used. We then establish convergence of the algorithm and a central limit theorem around the optimum in the strongly convex regime. Lastly, we consider the preconditioned LMC algorithm and derive non-asymptotic bounds in the strongly convex regime.
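For intuition, below is a minimal Python sketch of the update rules behind the algorithms named in the abstract. It is illustrative only: the function names, the mean-1 Gaussian weights in the M-SGD step, and the constant preconditioner are assumptions made here for a self-contained example, not the thesis's exact constructions or assumptions.

```python
import numpy as np

def lmc_step(x, grad_U, step, rng):
    """One Langevin Monte Carlo update:
    x' = x - step * grad_U(x) + sqrt(2 * step) * N(0, I).
    grad_U may be an inexact oracle, as in the "incorrect
    gradient" setting discussed in the abstract."""
    noise = rng.standard_normal(x.shape)
    return x - step * grad_U(x) + np.sqrt(2.0 * step) * noise

def msgd_step(x, per_sample_grads, step, rng):
    """One M-SGD update: per-sample gradients are averaged with
    random weights of mean 1, so the noise enters multiplicatively.
    per_sample_grads(x) returns an (n, d) array; the i.i.d.
    mean-1 Gaussian weights here are purely for illustration."""
    g = per_sample_grads(x)                      # shape (n, d)
    w = 1.0 + rng.standard_normal(g.shape[0])    # mean-1 random weights
    return x - step * (w @ g) / g.shape[0]

def precond_lmc_step(x, grad_U, P_sqrt, step, rng):
    """Preconditioned LMC with a constant preconditioner
    P = P_sqrt @ P_sqrt.T: the drift is scaled by P and the
    Gaussian noise by P_sqrt."""
    P = P_sqrt @ P_sqrt.T
    noise = rng.standard_normal(x.shape)
    return x - step * (P @ grad_U(x)) + np.sqrt(2.0 * step) * (P_sqrt @ noise)

# Example: strongly convex quadratic U(x) = 0.5 * ||x||^2, so
# grad_U(x) = x and LMC targets the standard Gaussian N(0, I).
rng = np.random.default_rng(0)
x = np.ones(2)
for _ in range(1000):
    x = lmc_step(x, lambda x: x, 0.01, rng)
```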