SplitFed: When Federated Learning Meets Split Learning

Authors

  • Chandra Thapa, CSIRO Data61
  • Pathum Chamikara Mahawaga Arachchige, CSIRO Data61
  • Seyit Camtepe, CSIRO Data61
  • Lichao Sun, Lehigh University

DOI:

https://doi.org/10.1609/aaai.v36i8.20825

Keywords:

Machine Learning (ML)

Abstract

Federated learning (FL) and split learning (SL) are two popular distributed machine learning approaches. Both follow a model-to-data scenario: clients train and test machine learning models without sharing raw data. SL provides better model privacy than FL because the model architecture is split between the clients and the server. Moreover, the split model makes SL a better option for resource-constrained environments. However, SL trains more slowly than FL due to its relay-based training across multiple clients. In this regard, this paper presents a novel approach, named splitfed learning (SFL), that amalgamates the two approaches, eliminating their inherent drawbacks, along with a refined architectural configuration incorporating differential privacy and PixelDP to enhance data privacy and model robustness. Our analysis and empirical results demonstrate that (pure) SFL provides test accuracy and communication efficiency similar to SL while significantly reducing the computation time per global epoch compared to SL when multiple clients are involved. Furthermore, as in SL, its communication efficiency over FL improves with the number of clients. Finally, the performance of SFL with privacy and robustness measures is evaluated under extended experimental settings.
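To make the described amalgamation concrete, the sketch below shows one way a global round of splitfed learning could look in PyTorch. This is a minimal illustration under stated assumptions, not the paper's reference implementation: the network split, layer sizes, learning rate, and synthetic data are invented for the example; clients are iterated sequentially here although SFL processes them in parallel; and the paper's differential-privacy and PixelDP measures are omitted.

    # Illustrative splitfed learning (SFL) sketch; all shapes and hyperparameters
    # are assumptions, not taken from the paper.
    import copy
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Client-side (front) and server-side (back) portions of the split model.
    def make_client_part():
        return nn.Sequential(nn.Linear(20, 32), nn.ReLU())

    def make_server_part():
        return nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

    num_clients = 3
    clients = [make_client_part() for _ in range(num_clients)]
    server = make_server_part()
    loss_fn = nn.CrossEntropyLoss()

    # Synthetic per-client batches standing in for private local datasets.
    data = [(torch.randn(8, 20), torch.randint(0, 2, (8,)))
            for _ in range(num_clients)]

    for epoch in range(5):  # global epochs
        # Sequential loop for clarity; in SFL the clients run in parallel.
        for c, (x, y) in zip(clients, data):
            c_opt = torch.optim.SGD(c.parameters(), lr=0.1)
            s_opt = torch.optim.SGD(server.parameters(), lr=0.1)

            # Client forward pass up to the cut layer; only the "smashed data"
            # (activations), never raw data, is sent to the main server.
            smashed = c(x)
            detached = smashed.detach().requires_grad_()

            # Main server finishes the forward pass and backpropagates
            # down to the cut layer.
            loss = loss_fn(server(detached), y)
            s_opt.zero_grad()
            loss.backward()
            s_opt.step()

            # The gradient at the cut layer is returned to the client, which
            # completes backpropagation through its local (front) part.
            c_opt.zero_grad()
            smashed.backward(detached.grad)
            c_opt.step()

        # Fed server: FedAvg over the client-side models, then broadcast
        # the averaged weights back to every client.
        avg = copy.deepcopy(clients[0].state_dict())
        for key in avg:
            avg[key] = torch.stack([c.state_dict()[key] for c in clients]).mean(0)
        for c in clients:
            c.load_state_dict(avg)

The point the sketch illustrates is that only cut-layer activations and their gradients cross the client-server boundary, while a fed server periodically averages the client-side weights; this combination is what lets SFL keep SL's model-split privacy while regaining FL's client-side parallelism.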

Published

2022-06-28

How to Cite

Thapa, C., Mahawaga Arachchige, P. C., Camtepe, S., & Sun, L. (2022). SplitFed: When Federated Learning Meets Split Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8485-8493. https://doi.org/10.1609/aaai.v36i8.20825

Section

AAAI Technical Track on Machine Learning III