Hierarchical Federated Unlearning for Large Language Models

Abstract

Large Language Models (LLMs) are increasingly integrated into real-world applications, raising concerns about privacy, security, and the need to remove undesirable knowledge. We propose a scalable, privacy-preserving federated unlearning approach for LLMs that combines task-specific adapter learning with hierarchical merging.
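
The abstract does not spell out the aggregation procedure, so below is a minimal sketch of one plausible reading: each client trains a task-specific adapter, adapters are merged hierarchically (clients into clusters, clusters into a global adapter) by weighted averaging, and unlearning a client amounts to re-aggregating without its adapter. The two-level topology, the weighted-average merge rule, and all names here are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch only: a two-level (client -> cluster -> server)
# hierarchical merge of task-specific adapters, with unlearning realized
# by re-aggregating without the forgotten client's adapter. Topology,
# merge rule, and names are assumptions, not the paper's method.
from typing import Dict, List

import numpy as np

Adapter = Dict[str, np.ndarray]  # parameter name -> weight tensor


def merge_adapters(adapters: List[Adapter], weights: List[float]) -> Adapter:
    """Weighted average of adapters that share parameter names and shapes."""
    total = sum(weights)
    return {
        name: sum(w * a[name] for w, a in zip(weights, adapters)) / total
        for name in adapters[0]
    }


def hierarchical_merge(clusters: List[List[Adapter]],
                       sizes: List[List[int]]) -> Adapter:
    """Merge within each cluster first, then merge the cluster summaries."""
    cluster_adapters = [
        merge_adapters(adapters, [float(n) for n in ns])
        for adapters, ns in zip(clusters, sizes)
    ]
    cluster_weights = [float(sum(ns)) for ns in sizes]
    return merge_adapters(cluster_adapters, cluster_weights)


def unlearn_client(clusters: List[List[Adapter]], sizes: List[List[int]],
                   cluster_idx: int, client_idx: int) -> Adapter:
    """Re-run the hierarchical merge with one client's adapter removed."""
    kept = [list(c) for c in clusters]
    kept_sizes = [list(n) for n in sizes]
    del kept[cluster_idx][client_idx]
    del kept_sizes[cluster_idx][client_idx]
    # Drop any cluster that became empty after the removal.
    nonempty = [i for i, c in enumerate(kept) if c]
    return hierarchical_merge([kept[i] for i in nonempty],
                              [kept_sizes[i] for i in nonempty])


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def make_adapter() -> Adapter:
        return {"lora_A": rng.normal(size=(4, 16)),
                "lora_B": rng.normal(size=(16, 4))}

    clusters = [[make_adapter(), make_adapter()], [make_adapter()]]
    sizes = [[100, 50], [200]]
    merged = hierarchical_merge(clusters, sizes)
    forgotten = unlearn_client(clusters, sizes, cluster_idx=0, client_idx=1)
    print(np.linalg.norm(merged["lora_A"] - forgotten["lora_A"]))
```

In this simplified view, unlearning is exact at the aggregation level because the merge is a linear combination of adapters; the paper's actual mechanism for removing undesirable knowledge from the model may differ.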

Publication
KDD Workshop on Federated Learning for Data Mining and Graph Analytics, 2025

Author
Zhuangdi Zhu