Document Type

Thesis

Degree Name

Master of Applied Computing

Department

Physics and Computer Science

Program Name/Specialization

Applied Computing

Faculty/School

Faculty of Science

First Advisor

Dr. Lilatul Ferdouse

Advisor Role

Thesis Supervisor

Abstract

Federated Learning (FL) is a distributed learning paradigm that trains Machine Learning (ML) models on the local datasets of multiple clients without transmitting any training data points through the network. FL's performance improves as more clients participate; however, a larger network also introduces difficulties due to its heterogeneous nature. Two of the most challenging issues in FL are device and data heterogeneity, which refer to differences in participating clients' computational capacity and local data distribution, respectively. Due to device heterogeneity, clients with low computational capacity (low-CC) can delay the FL process, while data heterogeneity can introduce bias when a few clients in the network hold a dominating share of the training data points. Furthermore, recent advances in gradient inversion have called the privacy guarantees of FL into question, since attackers can reconstruct a training data point from the gradient values alone. In this work, we address three research problems in FL: I) device heterogeneity, II) data heterogeneity, and III) security concerns. We introduce the Federated Split Learning (FSL) framework, an improvement over conventional FL in which low-CC clients offload heavy computation to the edge server while performing the FL task. FSL incorporates the workload-offloading mechanism of Split Learning (SL), another prominent distributed learning paradigm. By integrating SL's workload offloading for low-CC clients into FL, we allow clients to participate in computationally intensive FL tasks despite local hardware limitations.
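The offloading idea above can be illustrated with a minimal split-learning forward pass. This is a hypothetical sketch, not the thesis's implementation: the layer sizes, the cut point, and the class names (`ClientHead`, `ServerTail`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class ClientHead:
    """Client-side layers up to the cut layer (runs on the low-CC device)."""
    def __init__(self, d_in=16, d_cut=8):
        self.W = rng.normal(scale=0.1, size=(d_in, d_cut))

    def forward(self, x):
        # Only the cut-layer activations ("smashed data") leave the
        # device; the raw training data points never do.
        return relu(x @ self.W)

class ServerTail:
    """Remaining layers after the cut: the heavy computation offloaded to the edge server."""
    def __init__(self, d_cut=8, d_out=4):
        self.W = rng.normal(scale=0.1, size=(d_cut, d_out))

    def forward(self, smashed):
        return smashed @ self.W

client, server = ClientHead(), ServerTail()
x = rng.normal(size=(2, 16))        # a local mini-batch of 2 samples
smashed = client.forward(x)         # computed on the client
logits = server.forward(smashed)    # completed on the edge server
print(logits.shape)                 # (2, 4)
```

In a full training round the server would also send the gradient at the cut layer back to the client, so each side updates only its own portion of the model.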
Additionally, we integrate existing optimization algorithms tailored for data heterogeneity into FSL, demonstrating the adaptability and robustness of the framework and showing that FSL can be extended to address data heterogeneity with minimal modification. Lastly, we propose a Differential Privacy (DP) mechanism, named Split-DP, that addresses the threat of gradient inversion attacks in FL, and we integrate Split-DP into FSL to enhance the framework's privacy. We validate all of our work empirically through simulation results. For FSL, we show up to a ~58% reduction in local hardware stress, along with up to a ~64% reduction in local training time, and we provide a theoretical proof of FSL's convergence properties. We construct a large-scale heterogeneous network to test the efficiency of the optimization algorithms in FSL and verify the implementation through several experiments. For Split-DP, we visualize how gradient inversion attacks reconstruct clients' local data in FL, then show that with Split-DP in place the same attacks fail to reconstruct the client data; we also present a privacy proof of the mechanism. Finally, we integrate Split-DP into the FSL framework and validate it empirically through simulation.
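For context, DP defenses against gradient inversion typically bound each gradient's influence and then randomize it. The sketch below shows a generic clip-and-noise step in that spirit; the function name `privatize_gradient` and its parameters are illustrative assumptions, and the actual Split-DP mechanism proposed in the thesis may apply noise differently.

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Generic DP-style gradient release: clip, then add Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng()
    # 1) Clip: bound the gradient's L2 norm, limiting how much any one
    #    training point can influence the released update (sensitivity).
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / (norm + 1e-12))
    # 2) Noise: Gaussian noise scaled to the clipping bound, so the
    #    released gradient no longer uniquely determines the data point.
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=grad.shape)
    return grad + noise

g = np.array([3.0, 4.0])            # ||g|| = 5, above the clip bound
g_priv = privatize_gradient(g, clip_norm=1.0, noise_multiplier=0.5,
                            rng=np.random.default_rng(0))
print(g_priv.shape)                 # same shape as the input gradient
```

A gradient inversion attack optimizes a candidate input to match the released gradient; once the gradient is clipped and noised, that match no longer recovers the original data point.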

Convocation Year

2026

Convocation Season

Spring

Available for download on Thursday, April 15, 2027
