Distributed Optimization in Learning Algorithms with Parsimony in Communications


Published Date

2021-01

Type

Thesis or Dissertation

Abstract

Machine learning and big data technologies are fundamentally transforming our daily lives and our economies. An important factor fueling the progress of learning algorithms is the abundance of data generated every day. In many scenarios, including the Internet of Things, intelligent transportation systems, mobile/edge computing, and smart grids, datasets are generated and stored locally at different sites. Traditional centralized algorithms face challenges in these settings: they typically incur a high computation cost on a single machine, require extensive communication to collect the raw local data, and are vulnerable to failure of the host. Distributed learning and optimization algorithms, which largely avoid these problems, have therefore become promising alternatives and have attracted growing interest in recent years. Generally speaking, distributed algorithms solve problems collaboratively over multiple agents (machines, nodes, computation units, or cores) that communicate with one another. The main theme of this work is identifying efficient and effective ways to exploit distributed procedures and communication structures in such settings and applications.

The first part of this work develops a communication-efficient distributed optimization framework for general nonconvex nonsmooth signal processing and machine learning problems under an asynchronous protocol. An important criterion for distributed algorithms in latency- and communication-sensitive applications is that they complete tasks quickly with as few communication resources as possible. This part therefore presents an asynchronous distributed algorithm that reduces waiting time through updates exploiting local higher-order information, and investigates both its theoretical convergence guarantees and its simulated performance in strongly convex and in nonconvex nonsmooth scenarios.

The second part of this work examines the relationship between communication structures and efficiency in distributed optimization. Choosing a proper communication structure or pattern is an inexpensive way to reduce the communication cost of distributed algorithms. It is shown here that, in the setting of multi-agent policy evaluation, certain communication structures yield significant improvements in the efficiency of distributed algorithms, providing new insight into the design of communication networks. Specifically, a hierarchical structure that differentiates the roles of the agents during the evaluation process is proposed; it allows various mixing schemes (with mixing matrices that are not necessarily symmetric or doubly stochastic) to be chosen freely in order to reduce communication and computation cost while maintaining convergence comparable to that of homogeneous distributed algorithms. Extensive numerical experiments corroborate that the proposed approaches indeed improve on other state-of-the-art algorithms in total communication efficiency.
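To make the first part concrete, here is a minimal, hypothetical sketch of the general idea, not the dissertation's actual algorithm: an asynchronous server loop in which each worker reports a gradient and a Hessian computed from its local data only (the "local higher-order information"), and the server takes a damped Newton-type step using the freshest, possibly stale, report from every worker. The ridge-regression objective, the damping factor, and all constants are illustrative assumptions.

# Hypothetical sketch: asynchronous updates from local higher-order information.
# Problem: distributed ridge regression; every number here is made up.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, lam = 4, 5, 20, 0.1
A = [rng.standard_normal((m, d)) for _ in range(n)]   # worker i's local data
b = [rng.standard_normal(m) for _ in range(n)]

x = np.zeros(d)                          # server's current iterate
x_local = [x.copy() for _ in range(n)]   # each worker's possibly stale copy
g = [np.zeros(d) for _ in range(n)]      # last gradient received from each worker
H = [np.eye(d) for _ in range(n)]        # last Hessian received from each worker

for t in range(100):
    i = int(rng.integers(n))             # whichever worker reports first (asynchrony)
    # Worker i evaluates first- and second-order information at its stale copy.
    g[i] = A[i].T @ (A[i] @ x_local[i] - b[i]) + lam * x_local[i]
    H[i] = A[i].T @ A[i] + lam * np.eye(d)
    # Server takes a damped Newton-type step with the freshest report from everyone;
    # it does not wait for slow workers, which is what shortens the waiting time.
    x = x - 0.5 * np.linalg.solve(sum(H), sum(g))
    x_local[i] = x.copy()                # only the reporting worker syncs up

grad = sum(A[i].T @ (A[i] @ x - b[i]) + lam * x for i in range(n))
print("norm of full gradient:", np.linalg.norm(grad))  # ~0 at the optimum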
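For the second part, the sketch below illustrates the kind of flexibility the abstract refers to: a hierarchical (star) communication structure in which one hub agent averages over everyone while each leaf mixes only with itself and the hub. The resulting mixing matrix W is row stochastic but neither symmetric nor doubly stochastic. The topology, the matrix entries, and the omitted local TD update are assumptions made for illustration.

# Hypothetical sketch: hierarchical mixing with a row-stochastic (but not
# symmetric or doubly stochastic) matrix, as in multi-agent policy evaluation.
import numpy as np

n, d = 4, 3                        # agents 0..3; agent 0 acts as the hub
theta = np.random.default_rng(1).standard_normal((n, d))  # local parameters

W = np.array([[0.25, 0.25, 0.25, 0.25],   # hub: plain average of all agents
              [0.50, 0.50, 0.00, 0.00],   # leaf 1: itself + hub only
              [0.50, 0.00, 0.50, 0.00],   # leaf 2
              [0.50, 0.00, 0.00, 0.50]])  # leaf 3

assert np.allclose(W.sum(axis=1), 1.0)      # row stochastic...
assert not np.allclose(W.sum(axis=0), 1.0)  # ...but not doubly stochastic

for _ in range(50):
    theta = W @ theta              # one communication round of mixing
    # (a real algorithm would interleave a local TD update here)

print(theta)                       # all rows converge to a common point

Because W is primitive and row stochastic, its powers converge to a rank-one matrix, so repeated mixing drives all agents to a common parameter vector; leaves only ever talk to the hub, which is how a hierarchical structure can cut per-round communication relative to homogeneous all-to-neighbor schemes.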

Description

University of Minnesota Ph.D. dissertation. January 2020. Major: Electrical Engineering. Advisor: Jarvis Haupt. 1 computer file (PDF); vii, 138 pages.

Suggested citation

Ren, Jineng. (2021). Distributed Optimization in Learning Algorithms with Parsimony in Communications. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/219299.
