Author: Reisizadeh, Hadi
Accessioned: 2025-01-28
Available: 2025-01-28
Issued: 2024-08
URI: https://hdl.handle.net/11299/269608
Title: Efficient Algorithms for Distributed Networks with Applications in Communication and Learning
Type: Thesis or Dissertation
Language: en
Description: University of Minnesota Ph.D. dissertation. August 2024. Major: Electrical/Computer Engineering. Advisors: Soheil Mohajer, Mohammad Ali Maddah-Ali. 1 computer file (PDF); xii, 240 pages.

Abstract:
Distributed networks connect independent nodes so that they act as one powerful system. These geographically dispersed machines share resources (data, storage, processing) to handle large tasks efficiently. Such networks have various applications in communication and learning systems, including distributed caching and distributed machine learning. Distributed caching improves performance by storing frequently accessed data closer to users, reducing retrieval times and network load. Distributed machine learning leverages the combined computing power of multiple machines to train complex models on massive datasets, leading to faster and more scalable results. However, this distributed nature introduces its own set of challenges, including (1) communication efficiency, due to the exchange of large amounts of data over noisy communication links, (2) time-varying networks, due to the dynamic nature of network topologies, and (3) privacy concerns, due to the presence of eavesdropper nodes.

This dissertation examines two crucial distributed systems, distributed caching and distributed machine learning, with the goal of making them more efficient, reliable, and adaptable by tackling these key challenges. To achieve this, the research leverages advanced techniques from several fields, including distributed optimization, statistical learning theory, probability theory, and communication and coding theory.

In the first part of the thesis, we study a distributed caching setup in a fast-fading environment, where each user has a cache that stores a fraction of the transmitter's files. We focus on uncoded cache placement and the challenge of assigning signal levels without knowledge of the channel conditions. We study the asymptotic behavior of the system, characterize the maximum achievable source rate for various scenarios, and provide an upper bound. We also propose a linear-programming-based scheme that shows the looseness of this characterization in the general case.

In the second part, we consider distributed machine learning paradigms, focusing on methods that mitigate communication costs, time-varying links, and privacy concerns. To address the first two challenges, we propose a two-time-scale algorithm: one time scale suppresses the imperfect information received from neighboring agents, while the other operates on the gradients of the local cost functions. To resolve the privacy concern, we employ a differential privacy mechanism. In both parts, we support the theoretical results with numerical experiments that demonstrate significant improvements.
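To give a concrete picture of the two-time-scale idea described in the abstract, the following is a minimal, hypothetical NumPy sketch, not the dissertation's actual algorithm: the function name, the step-size schedules alpha(t) and beta(t), the doubly stochastic mixing matrices W_t, and the Gaussian noise used as a stand-in for the differential privacy mechanism are all illustrative assumptions.

import numpy as np

def two_time_scale_dp_gd(x0, grad_fns, mixing_matrix, num_iters,
                         alpha=lambda t: 1.0 / (t + 1) ** 0.6,   # gradient step size (faster decay)
                         beta=lambda t: 1.0 / (t + 1) ** 0.4,    # consensus step size (slower decay)
                         dp_sigma=0.1, seed=0):
    """Hypothetical decentralized two-time-scale gradient method.

    x0            : (n_agents, d) array of initial local iterates
    grad_fns      : list of callables, grad_fns[i](x_i) -> gradient of agent i's local cost
    mixing_matrix : callable t -> doubly stochastic (n_agents, n_agents) matrix W_t
                    modeling the time-varying communication graph
    dp_sigma      : std. dev. of Gaussian noise added to shared iterates
                    (illustrative stand-in for a differential privacy mechanism)
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n_agents, _ = x.shape
    for t in range(num_iters):
        W = mixing_matrix(t)
        # Each agent shares a noise-perturbed copy of its iterate with its neighbors.
        shared = x + dp_sigma * rng.standard_normal(x.shape)
        # Consensus time scale: damp the imperfect neighbor information via beta(t).
        consensus = x + beta(t) * (W @ shared - x)
        # Gradient time scale: descend on the local cost functions via alpha(t).
        grads = np.stack([grad_fns[i](x[i]) for i in range(n_agents)])
        x = consensus - alpha(t) * grads
    return x

# Toy usage: 4 agents with quadratic local costs f_i(x) = 0.5 * ||x - c_i||^2.
n, d = 4, 3
centers = np.arange(n * d, dtype=float).reshape(n, d)
grads = [(lambda c: (lambda x: x - c))(c) for c in centers]
W = np.full((n, n), 1.0 / n)                        # static complete graph, for simplicity
x_final = two_time_scale_dp_gd(np.zeros((n, d)), grads, lambda t: W, num_iters=2000)
print(x_final.mean(axis=0), centers.mean(axis=0))   # agents roughly approach the centers' average

The design intent mirrors the abstract: beta(t) decays more slowly than alpha(t), so the consensus step damps the noisy incoming information on a faster time scale than the gradient step drives the local costs toward the global minimizer.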