TY - JOUR
T1 - Edge-Based Communication Optimization for Distributed Federated Learning
AU - Wang, Tian
AU - Liu, Yan
AU - Zheng, Xi
AU - Dai, Hong-Ning
AU - Jia, Weijia
AU - Xie, Mande
N1 - Funding Information:
This work was supported in part by the Natural Science Foundation of Fujian Province of China under Grant 2020J06023, in part by the National Natural Science Foundation of China (NSFC) under Grants 61872154 and 61972352, in part by the UIC Start-up Research Fund under Grant R72021202, and in part by the Key Research and Development Program of Zhejiang Province under Grant 2021C03150.
Publisher Copyright:
© 2021 IEEE.
PY - 2022/7/1
Y1 - 2022/7/1
N2 - Federated learning can achieve distributed machine learning without
sharing the private and sensitive data of end devices. However, highly
concurrent access to cloud servers increases the transmission delay of
model updates. Moreover, some local models whose gradients oppose the
global model may be unnecessary, incurring considerable additional
communication cost. Existing work mainly focuses on reducing
communication rounds or cleaning defective local data, and neither takes
into account the latency caused by high server concurrency. To this
end, we study an edge-based communication optimization framework that
reduces the number of end devices directly connected to the parameter
server while avoiding the upload of unnecessary local updates.
Specifically, we cluster devices in the same network location and deploy
mobile edge nodes in different network locations to serve as hubs for
communication between the cloud and end devices, thereby avoiding the
latency associated with high server concurrency. Meanwhile, we propose a
cosine-similarity-based method to filter out unnecessary models, thus
avoiding unnecessary communication. Experimental results show that,
compared with traditional federated learning, the proposed scheme
reduces the number of local updates by 60% and increases the convergence
speed of the evaluated model by 10.3%.
KW - Clustering
KW - Communication optimization
KW - Federated learning
KW - Mobile edge nodes
KW - Model filtering
UR - http://www.scopus.com/inward/record.url?scp=85107362752&partnerID=8YFLogxK
U2 - 10.1109/TNSE.2021.3083263
DO - 10.1109/TNSE.2021.3083263
M3 - Journal article
SN - 2327-4697
VL - 9
SP - 2015
EP - 2024
JO - IEEE Transactions on Network Science and Engineering
JF - IEEE Transactions on Network Science and Engineering
IS - 4
ER -