Abstract:
To address privacy concerns in federated learning under heterogeneous data environments, we propose a heterogeneous federated unlearning algorithm that integrates knowledge distillation (KD) with a forgetting mechanism. The method extracts generic knowledge from the global model and distills it to local clients, preserving the learning capability of local models while removing information associated with sensitive data, thereby achieving privacy-oriented unlearning. Experimental results show that the approach maintains strong model performance across diverse data distributions and device configurations while substantially improving data privacy. This study provides an effective technical solution for privacy preservation in heterogeneous federated learning settings.
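The abstract does not specify the exact training objective, but one common way to combine distillation with a forgetting mechanism is to distill the global (teacher) model into the local (student) model on retained data while pushing the student's predictions on the forget set toward a uniform distribution. The PyTorch sketch below illustrates this reading only; the function name, loss weighting, and the uniform-target forgetting term are illustrative assumptions, not the paper's confirmed algorithm.

```python
import torch
import torch.nn.functional as F

def unlearning_distillation_step(student, teacher, retain_batch, forget_batch,
                                 optimizer, temperature=2.0, forget_weight=1.0):
    """One hypothetical client-side update: distill generic knowledge from the
    global (teacher) model on retained data, and erase information about the
    forget set by matching its outputs to a uniform distribution."""
    student.train()
    teacher.eval()
    x_retain, y_retain = retain_batch
    x_forget, _ = forget_batch

    # Distillation on retained data: match softened teacher outputs so the
    # local model keeps the global model's generic knowledge.
    with torch.no_grad():
        t_logits = teacher(x_retain)
    s_logits = student(x_retain)
    distill_loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Supervised loss preserves task performance on retained data.
    task_loss = F.cross_entropy(s_logits, y_retain)

    # Forgetting term (assumed): drive predictions on sensitive samples
    # toward uniform, removing class-specific information about them.
    f_logits = student(x_forget)
    uniform = torch.full_like(F.softmax(f_logits, dim=1),
                              1.0 / f_logits.size(1))
    forget_loss = F.kl_div(F.log_softmax(f_logits, dim=1), uniform,
                           reduction="batchmean")

    loss = task_loss + distill_loss + forget_weight * forget_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, raising `forget_weight` trades task accuracy for stronger removal of the sensitive samples' influence; the paper's actual balancing of these terms may differ.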