Hierarchical Federated Unlearning for Large Language Models
Yisheng Zhong,
Zhengbang Yang,
Zhuangdi Zhu
October 2025
Abstract
Large Language Models (LLMs) are increasingly integrated into real-world applications, raising privacy and security concerns and creating a need to remove undesirable knowledge from trained models. We propose a scalable, privacy-preserving federated unlearning approach for LLMs that combines task-specific adapter learning with hierarchical merging.
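To make the idea of hierarchical merging concrete, the following is a minimal sketch, not the paper's actual algorithm: clients train lightweight adapter weights locally, adapters are first averaged within client clusters, and the cluster-level results are then averaged into a global adapter. All function names and the two-level grouping are illustrative assumptions.

```python
import numpy as np

def merge_adapters(adapters):
    # Element-wise average of a list of adapter parameter dicts
    # (e.g., LoRA-style low-rank matrices keyed by layer name).
    keys = adapters[0].keys()
    return {k: np.mean([a[k] for a in adapters], axis=0) for k in keys}

def hierarchical_merge(clusters):
    # Two-level merge: average within each cluster of clients,
    # then average the cluster-level adapters into one global adapter.
    cluster_models = [merge_adapters(c) for c in clusters]
    return merge_adapters(cluster_models)

# Toy example: 2 clusters of 2 clients, each holding one adapter matrix.
clients = [
    [{"lora_A": np.full((2, 2), v)} for v in vals]
    for vals in ([1.0, 3.0], [5.0, 7.0])
]
merged = hierarchical_merge(clients)
print(merged["lora_A"][0, 0])  # 4.0 (cluster means 2.0 and 6.0, then averaged)
```

In a real federated setting, the within-cluster merge would run on regional aggregators so that raw client adapters never reach the central server, which is one way hierarchy can support both scalability and privacy.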
Publication
KDD Workshop on Federated Learning for Data Mining and Graph Analytics, 2025