Distributed Machine Learning at Wireless Edge Networks: Joint Online Optimization of Computation and Communication

Project: Research project

Project Details

Description

Modern wireless edge networks (e.g., Device-to-Device and Internet-of-Things systems) support a broad range of computation and communication applications (e.g., autonomous driving and smart cities). In these applications, machine learning involves big-data computation that can easily overburden traditional central servers. Our proposed research will utilize distributed optimization at the local devices to alleviate these heavy computation burdens. However, a large amount of communication overhead is incurred as we migrate optimization from central servers to local devices. This motivates us to develop communication-efficient distributed optimization approaches, which require both machine learning and wireless communication techniques.

In most existing works, machine learning and wireless communications are considered separately. However, due to the strong dependence between the communication efficiency and the computation results being transmitted, we can significantly improve the computation performance by proactively optimizing communication, and vice versa. Meanwhile, most existing works neglect online optimization, which is often desirable for adapting learning and transmission to time-varying computation tasks and communication systems. In the proposed project, we aim to answer the following key research question: How to develop online distributed optimization algorithms that jointly consider computation and communication for machine learning at wireless edge networks?

In the proposed project, we promote a holistic joint online design of previously separated offline machine learning and wireless communication solutions. A main challenge we must tackle is the limited and dynamic computation and communication capacities of wireless devices and edge hosts. Furthermore, since there is a strong dependence between computation and communication, we need to jointly consider their impacts on both the learning performance and the communication cost. In addition, effective collaboration among wireless devices, edge hosts, and cloud servers in both computation and communication is critical to overall system performance.

To address these challenges in answering the above key "how to optimize" question, we propose to: 1) address a "when to optimize" problem, by designing an information-freshness-based online scheduling algorithm, to optimize the performance of online distributed learning with asynchronous computation under limited communication resources; 2) address a "what to optimize" problem, by exploring new distributed constrained online optimization theory, to jointly consider the improvement in machine learning model training and the costs of model aggregation, under dynamic computation and communication systems; 3) address a "where to optimize" problem, by developing an online distributed robust learning framework that enables effective collaboration among wireless devices, edge hosts, and cloud servers over multi-level hierarchical networks, trading off computation and communication performance. Finally, we will develop proof-of-concept prototype systems to demonstrate the performance of our proposals.
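To make the "when to optimize" idea concrete, the following is a minimal illustrative sketch (not the project's actual algorithm) of freshness-aware device scheduling for communication-limited distributed learning: in each round, only the devices whose local information is stalest (largest age) are allotted the limited uplink slots, and only their gradients are aggregated. All names and parameters here (local least-squares objectives, `BUDGET`, `local_grad`, the age-based rule) are assumptions for illustration.

```python
# Illustrative sketch, assuming freshness-aware (age-based) scheduling of
# uplink-limited devices in distributed gradient descent. Hypothetical setup.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_DEVICES, BUDGET, ROUNDS, LR = 5, 8, 3, 20, 0.1

# Each device i holds a local least-squares objective ||A_i w - b_i||^2.
A = [rng.normal(size=(10, DIM)) for _ in range(N_DEVICES)]
b = [a @ np.ones(DIM) + 0.01 * rng.normal(size=10) for a in A]

def local_grad(i, w):
    """Gradient of device i's local objective at the current model w."""
    return 2 * A[i].T @ (A[i] @ w - b[i]) / len(b[i])

w = np.zeros(DIM)
age = np.zeros(N_DEVICES)  # rounds since each device last uploaded

for t in range(ROUNDS):
    # "When to optimize": spend the BUDGET uplink slots on the stalest
    # devices, so the least fresh local information is refreshed first.
    chosen = np.argsort(age)[-BUDGET:]
    grads = [local_grad(i, w) for i in chosen]
    w -= LR * np.mean(grads, axis=0)  # aggregate only scheduled uploads
    age += 1
    age[chosen] = 0                   # scheduled devices become fresh again

loss = np.mean([np.mean((A[i] @ w - b[i]) ** 2) for i in range(N_DEVICES)])
```

Even though only 3 of 8 devices communicate per round, the age-based rule rotates uplink access so every local objective is refreshed regularly, illustrating the computation-communication trade-off the scheduling design targets.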

With our rich research experience in distributed learning, wireless networks, and online optimization, we expect the outcomes of this project to lead to new integrated computation-communication applications, thereby accelerating the growth and adoption of distributed machine learning technologies and wireless edge intelligence in the pertinent industries.
Status: Active
Effective start/end date: 1/01/25 - 31/12/27
