CATNIP: LLM Unlearning via Calibrated and Tokenized Negative Preference Alignment

Knowledge memorized by LLMs during pretraining raises critical safety and privacy concerns, motivating LLM unlearning as a technique for selectively removing the influence of undesirable knowledge. Existing approaches, rooted in Gradient …

DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher

LLM unlearning removes the influence of undesirable knowledge from a model without retraining from scratch, a capability indispensable for trustworthy AI. Existing unlearning methods face significant limitations: conventional …

Hierarchical Federated Unlearning for Large Language Models

Large Language Models (LLMs) are increasingly integrated into real-world applications, raising privacy and security concerns and creating a need to remove undesirable knowledge. We propose a federated unlearning approach for LLMs that is scalable and …

Web Intellectual Property at Risk: Preventing Unauthorized Real-Time Retrieval by Large Language Models

Protecting cyber Intellectual Property (IP) such as web content is an increasingly critical concern. The rise of large language models (LLMs) with online retrieval capabilities enables convenient access to information but often undermines the …

ChatWise: AI-Powered Engaging Conversations for Enhancing Senior Cognitive Wellbeing

Cognitive health in older adults is a growing challenge. We propose ChatWise, a strategy-guided AI chatbot that follows a dual-level conversation reasoning framework combining macro-level strategy planning with micro-level utterance generation.