Federated Learning: A Comprehensive Study on Decentralized Machine Learning

Authors

  • Shameem Akhter, Professor, Department of Computer Science, Khaja BandaNawaz University, Kalaburagi, Karnataka, India
  • Saniya Iram Khan, Student, Department of Computer Science, Khaja BandaNawaz University, Kalaburagi, Karnataka, India

DOI:

https://doi.org/10.61808/jsrt183

Keywords:

Federated Learning, Non-IID Data, Communication Costs, Data Privacy

Abstract

Federated Learning (FL) represents a paradigm shift in machine learning, enabling collaborative training across decentralized devices while preserving data privacy. This study comprehensively examines FL’s architecture, algorithms, and challenges using the MNIST dataset as a testbed. We compare two prominent algorithms—Federated Averaging (FedAvg) and Federated Proximal (FedProx)—under independent and identically distributed (IID) and non-IID conditions. Results show FedAvg achieving 97.5% accuracy on IID data, while FedProx outperforms it on non-IID data with 94.4% accuracy, a 2.1-point improvement over FedAvg’s 92.3%. We also assess scalability, communication costs, and privacy vulnerabilities, supported by graphs, tables, and images. This work underscores FL’s potential in privacy-sensitive applications like healthcare and IoT, identifying key limitations and future research directions.
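The two algorithms compared in the abstract can be illustrated with a minimal sketch: in each communication round, clients train locally and the server averages the resulting weights, with FedProx adding a proximal term that pulls local updates toward the global model. The sketch below uses a toy linear model on synthetic data rather than the MNIST CNN from the study; all function names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_update(w, w_global, X, y, lr=0.1, epochs=5, mu=0.0):
    # One client's local training: a few epochs of gradient descent
    # on a linear least-squares model (stand-in for the MNIST CNN).
    # mu > 0 adds FedProx's proximal term mu * (w - w_global), which
    # anchors local weights to the global model; mu = 0 is plain FedAvg.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y) + mu * (w - w_global)
        w = w - lr * grad
    return w

def fl_round(w_global, clients, mu=0.0):
    # One communication round: every client trains locally, then the
    # server averages the updates weighted by each client's data size.
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global.copy(), w_global, X, y, mu=mu))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return sum(s * w for s, w in zip(sizes / sizes.sum(), updates))

# Toy demo: two clients whose data comes from the same linear model.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (50, 100):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(30):          # 30 communication rounds
    w = fl_round(w, clients)  # set mu > 0 for FedProx behavior
```

On non-IID partitions, the proximal term limits how far each client's weights drift from the global model between rounds, which is the mechanism behind FedProx's advantage reported above.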

Published

26-03-2025

How to Cite

Shameem Akhter, & Saniya Iram Khan. (2025). Federated Learning: A Comprehensive Study on Decentralized Machine Learning. Journal of Scientific Research and Technology, 3(3), 12–20. https://doi.org/10.61808/jsrt183

Section

Articles