Decentralized Federated Learning via Mutual Knowledge Transfer

Chengxi Li, Gang Li, Pramod K. Varshney

Research output: Contribution to journal › Article › peer-review

46 Scopus citations


In this article, we investigate the problem of decentralized federated learning (DFL) in Internet of Things (IoT) systems, where a number of IoT clients collectively train models for a common task without sharing their private training data and without a central server. Most existing DFL schemes alternate between two steps, i.e., model updating and model averaging. However, directly averaging model parameters to fuse different models at the local clients suffers from client drift, especially when the training data are heterogeneous across clients, leading to slow convergence and degraded learning performance. As a possible solution, we propose the DFL via mutual knowledge transfer (Def-KT) algorithm, in which local clients fuse models by transferring their learned knowledge to each other. Our experiments on the MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 data sets reveal that the proposed Def-KT algorithm significantly outperforms the baseline DFL methods with model averaging, i.e., Combo and FullAvg, especially when the training data are not independent and identically distributed (non-IID) across different clients.
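The contrast between the two fusion rules described in the abstract can be sketched on a toy convex problem. The snippet below is an illustrative simplification, not the paper's actual Def-KT procedure: `sgd_step` and the distillation-style update (each client fitting its peer's predictions on its own local data) are assumptions made for the sketch, and on this convex task both rules converge; the paper's comparison concerns nonconvex deep models under non-IID data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two clients share the task y = 2x but hold non-IID data:
# client A only sees inputs in [0, 1), client B only in [4, 5).
x_a = rng.uniform(0, 1, 20); y_a = 2 * x_a
x_b = rng.uniform(4, 5, 20); y_b = 2 * x_b

def sgd_step(w, x, y, lr=0.02):
    """One gradient step on squared error for the scalar model y_hat = w * x."""
    grad = np.mean(2 * (w * x - y) * x)
    return w - lr * grad

# Fusion rule 1: model averaging (FullAvg/Combo-style baseline).
w1 = w2 = 0.0
for _ in range(300):
    w1, w2 = sgd_step(w1, x_a, y_a), sgd_step(w2, x_b, y_b)
    w1 = w2 = 0.5 * (w1 + w2)  # fuse by averaging parameters

# Fusion rule 2: mutual knowledge transfer (Def-KT-flavored sketch):
# instead of averaging parameters, each client learns from its peer's
# *predictions* on its own local data (a distillation-style update).
v1 = v2 = 0.0
for _ in range(300):
    v1, v2 = sgd_step(v1, x_a, y_a), sgd_step(v2, x_b, y_b)
    v1 = sgd_step(v1, x_a, v2 * x_a)  # A mimics B's outputs on A's data
    v2 = sgd_step(v2, x_b, v1 * x_b)  # B mimics A's outputs on B's data

print(w1, v1, v2)  # on this convex toy, all approach the true slope 2
```

The design difference is that averaging exchanges parameters while knowledge transfer exchanges model outputs, which is what allows the latter to fuse models without forcing their parameters to coincide.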

Original language: English (US)
Pages (from-to): 1136-1147
Number of pages: 12
Journal: IEEE Internet of Things Journal
Issue number: 2
State: Published - Jan 15 2022
Externally published: Yes

Keywords

  • Decentralized learning
  • Internet of Things (IoT)
  • federated learning (FL)
  • knowledge transfer

ASJC Scopus subject areas

  • Signal Processing
  • Information Systems
  • Hardware and Architecture
  • Computer Science Applications
  • Computer Networks and Communications

