Work

A Survey of Machine Unlearning in Generative AI Models

In this survey, anchored in generative models, machine unlearning approaches are reviewed, categorized, and discussed comprehensively and systematically. Existing unlearning approaches are classified into gradient-based techniques, task vectors, knowledge distillation, data sharding, and reliable unlearning methods. Going beyond previous works, this survey extends the review to attack methods that exploit vulnerabilities in generative models, and assesses the robustness of the unlearning methods against them. In addition, popular metrics and datasets in machine unlearning research are summarized and evaluated in terms of effectiveness, efficiency, and security. Finally, we shed light on the future directions of this emerging research topic by discussing applications, highlighting challenges, and exploring research frontiers for both the current machine unlearning community and new investigators to come.
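To make the first category above concrete, here is a minimal, hypothetical sketch of gradient-based unlearning: take gradient *ascent* steps on the forget set so the model un-fits those examples. The linear model, loss, and hyperparameters are illustrative choices, not taken from the survey.

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of the mean squared error 0.5 * mean((Xw - y)^2) w.r.t. w."""
    return X.T @ (X @ w - y) / len(y)

def gradient_ascent_unlearn(w, X_forget, y_forget, lr=0.1, steps=20):
    """Move the weights *up* the forget-set loss surface (illustrative only)."""
    w = w.copy()
    for _ in range(steps):
        w += lr * mse_grad(w, X_forget, y_forget)  # ascent, not descent
    return w
```

After a few ascent steps the model's loss on the forgotten examples rises, which is the basic mechanism these techniques refine with safeguards to preserve utility on retained data.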

Check out this project!

Exact-Fun: A Quantization-based Federated Unlearning Approach

In this paper, we study the unlearning problem in federated learning, which calls for a data deletion mechanism in the federated setting. First, a quantized federated learning (Q-FL) algorithm is developed to facilitate exact unlearning. Building on the quantized federated learning system, an exact and efficient federated unlearning (Exact-Fun) algorithm is designed to realize the goal of data deletion. Through theoretical analysis and experimental evaluation, we show that our proposed methods not only achieve the desired unlearning effectiveness but also attain higher unlearning efficiency than existing works.
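The intuition behind quantization-based exact unlearning can be sketched as follows: if removing a client's contribution does not change the *quantized* global model, deletion is already exact and no retraining is needed; otherwise the system falls back to recomputation from a checkpoint. This toy sketch is a loose illustration of that idea, not the Exact-Fun algorithm itself; the quantization step and aggregation rule are placeholder choices.

```python
import numpy as np

def quantize(w, step=0.5):
    """Snap model weights to a lattice with the given step size."""
    return np.round(w / step) * step

def aggregate(updates):
    """Toy aggregation: plain averaging of client updates."""
    return np.mean(updates, axis=0)

def exact_unlearn(updates, forget_idx, step=0.5):
    """Return (unlearned model, retraining_needed).

    If the quantized global model is unchanged after dropping the
    forgotten client's update, unlearning is exact for free; otherwise
    recomputation (here just re-aggregation) is triggered.
    """
    full_q = quantize(aggregate(updates), step)
    kept = [u for i, u in enumerate(updates) if i != forget_idx]
    retained_q = quantize(aggregate(kept), step)
    if np.array_equal(full_q, retained_q):
        return full_q, False   # quantized model unaffected: no retraining
    return retained_q, True    # model changed: retrain from checkpoint
```

A coarser quantization step makes the no-retraining case more frequent (higher efficiency) at the cost of model precision, which is the trade-off such designs must analyze.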

Dogged Backdoor Attack (DBA)

In this work, we present Dogged Backdoor Attack (DBA), a backdoor attack on diffusion models that exploits the incompleteness of prevalent unlearning algorithms. DBA operates by injecting imperceptible backdoor triggers into a small subset of training samples; when these poisoned samples are subsequently unlearned, the incomplete removal leaves the backdoor effect in place.

Life