TY - GEN
T1 - Selective Noise-Aided Machine Unlearning with Deep Feature Visualization
AU - Shaheryar, Muhammad
AU - Lee, Jong Taek
AU - Jung, Soon Ki
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - In the rapidly evolving landscape of machine learning, the concept of machine unlearning has become crucial for enhancing data privacy and system security. Our research presents an innovative unlearning technique, Selective Noise Unlearning (SNU), designed to reduce the model’s dependency on specific data subsets, known as the forget-set. By employing a noise-induced training paradigm, we effectively disrupt the patterns associated with the forget-set, facilitating unlearning within pre-trained models. This approach enhances computational efficiency by eliminating the need for extensive data retention, thereby streamlining the unlearning process. We validate SNU on the ResNet18 architecture using CIFAR-10 and MNIST. Through Grad-CAM visualizations, we demonstrate the model’s refocused attention following unlearning. Our method’s ability to achieve quick unlearning with as few as one to two epochs of retraining makes it a practical solution for scenarios requiring rapid adaptation. This research enhances data privacy, improves unlearning efficiency, and supports the enforcement of the right to be forgotten, opening avenues for future innovations in machine learning privacy.
AB - In the rapidly evolving landscape of machine learning, the concept of machine unlearning has become crucial for enhancing data privacy and system security. Our research presents an innovative unlearning technique, Selective Noise Unlearning (SNU), designed to reduce the model’s dependency on specific data subsets, known as the forget-set. By employing a noise-induced training paradigm, we effectively disrupt the patterns associated with the forget-set, facilitating unlearning within pre-trained models. This approach enhances computational efficiency by eliminating the need for extensive data retention, thereby streamlining the unlearning process. We validate SNU on the ResNet18 architecture using CIFAR-10 and MNIST. Through Grad-CAM visualizations, we demonstrate the model’s refocused attention following unlearning. Our method’s ability to achieve quick unlearning with as few as one to two epochs of retraining makes it a practical solution for scenarios requiring rapid adaptation. This research enhances data privacy, improves unlearning efficiency, and supports the enforcement of the right to be forgotten, opening avenues for future innovations in machine learning privacy.
KW - Class Activation Mapping
KW - Forget Class
KW - Machine Unlearning
UR - https://www.scopus.com/pages/publications/85218449173
U2 - 10.1007/978-3-031-77389-1_8
DO - 10.1007/978-3-031-77389-1_8
M3 - Conference contribution
AN - SCOPUS:85218449173
SN - 9783031773884
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 96
EP - 107
BT - Advances in Visual Computing - 19th International Symposium, ISVC 2024, Proceedings
A2 - Bebis, George
A2 - Patel, Vishal
A2 - Gu, Jinwei
A2 - Panetta, Julian
A2 - Gingold, Yotam
A2 - Johnsen, Kyle
A2 - Arefin, Mohammed Safayet
A2 - Dutta, Soumya
A2 - Biswas, Ayan
PB - Springer Science and Business Media Deutschland GmbH
T2 - 19th International Symposium on Visual Computing, ISVC 2024
Y2 - 21 October 2024 through 23 October 2024
ER -