Abstract
In this article, we investigate the problem of decentralized federated learning (DFL) in Internet of Things (IoT) systems, where IoT clients collectively train models for a common task without a central server and without sharing their private training data. Most existing DFL schemes consist of two alternating steps, i.e., model updating and model averaging. However, directly averaging model parameters to fuse models at the local clients suffers from client drift, especially when the training data are heterogeneous across clients, which leads to slow convergence and degraded learning performance. As a possible solution, we propose the DFL via mutual knowledge transfer (Def-KT) algorithm, in which local clients fuse models by transferring their learned knowledge to each other. Our experiments on the MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 data sets reveal that the proposed Def-KT algorithm significantly outperforms the baseline DFL methods with model averaging, i.e., Combo and FullAvg, especially when the training data are not independent and identically distributed (non-IID) across clients.
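Since the abstract contrasts parameter-averaging fusion with knowledge-transfer fusion, a minimal sketch may help fix ideas. Everything below is an illustration under stated assumptions, not the paper's reference implementation: the two-layer model, SGD optimizer, and the KL-plus-cross-entropy transfer loss (in the style of deep mutual learning) are choices made here for exposition; the exact Def-KT update rule and client scheduling are specified in the article itself.

```python
# Minimal sketch of the two fusion rules contrasted in the abstract.
# Assumed details (model, optimizer, loss weighting) are illustrative,
# not the authors' reference implementation of Def-KT.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset


def average_fusion(model_a, model_b):
    """Baseline fusion (as in Combo/FullAvg): average raw parameters.

    With heterogeneous (non-IID) client data, the averaged weights can sit
    far from either client's optimum, i.e., the client drift described above.
    """
    fused = copy.deepcopy(model_a)
    with torch.no_grad():
        for p_f, p_a, p_b in zip(fused.parameters(),
                                 model_a.parameters(),
                                 model_b.parameters()):
            p_f.copy_(0.5 * (p_a + p_b))
    return fused


def mutual_knowledge_transfer(model_a, model_b, loader, epochs=1, lr=0.01):
    """Fusion by mutual knowledge transfer (deep-mutual-learning style):
    each model learns from the other's soft predictions on the receiving
    client's local data, instead of mixing parameters directly."""
    opt_a = torch.optim.SGD(model_a.parameters(), lr=lr)
    opt_b = torch.optim.SGD(model_b.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            logits_a, logits_b = model_a(x), model_b(x)
            # Each loss = KL to the peer's (detached) predictions
            #           + cross-entropy on the local hard labels.
            loss_a = (F.kl_div(F.log_softmax(logits_a, dim=1),
                               F.softmax(logits_b, dim=1).detach(),
                               reduction="batchmean")
                      + F.cross_entropy(logits_a, y))
            loss_b = (F.kl_div(F.log_softmax(logits_b, dim=1),
                               F.softmax(logits_a, dim=1).detach(),
                               reduction="batchmean")
                      + F.cross_entropy(logits_b, y))
            opt_a.zero_grad()
            opt_b.zero_grad()
            loss_a.backward()
            loss_b.backward()
            opt_a.step()
            opt_b.step()
    return model_a, model_b


if __name__ == "__main__":
    torch.manual_seed(0)

    def make_model():
        return nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))

    model_a, model_b = make_model(), make_model()
    # Random tensors stand in for one client's private training set.
    data = TensorDataset(torch.randn(64, 20), torch.randint(0, 10, (64,)))
    loader = DataLoader(data, batch_size=16)

    fused = average_fusion(model_a, model_b)                        # baseline
    model_a, model_b = mutual_knowledge_transfer(model_a, model_b, loader)
```

The design difference to notice: averaging fuses models directly in weight space, while knowledge transfer moves information through predictions on local data, which is one intuition for why it can be less sensitive to models drifting apart under non-IID data.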
Original language | English (US) |
---|---|
Pages (from-to) | 1136-1147 |
Number of pages | 12 |
Journal | IEEE Internet of Things Journal |
Volume | 9 |
Issue number | 2 |
DOIs | |
State | Published - Jan 15 2022 |
Externally published | Yes |
Keywords
- Decentralized learning
- Internet of Things (IoT)
- federated learning (FL)
- knowledge transfer
ASJC Scopus subject areas
- Signal Processing
- Information Systems
- Hardware and Architecture
- Computer Science Applications
- Computer Networks and Communications