| dc.contributor | Department of Computer Science | |
| dc.creator (Author) | 廖文宏 | |
| dc.creator (Author) | Liao, Wen-Hung;Lin, Yang-Jing | |
| dc.date (Date) | 2025-11 | |
| dc.date.accessioned | 11-Feb-2026 09:11:08 (UTC+8) | - |
| dc.date.available | 11-Feb-2026 09:11:08 (UTC+8) | - |
| dc.date.issued (Upload time) | 11-Feb-2026 09:11:08 (UTC+8) | - |
| dc.identifier.uri (URI) | https://nccur.lib.nccu.edu.tw/handle/140.119/161640 | - |
| dc.description.abstract (Abstract) | Machine unlearning refers to the process of removing the influence of specific training data from a machine learning model, thereby supporting privacy compliance and data governance. In this study, we extend prior work on weight-resetting unlearning methods by investigating the impact of selective layer-wise freezing on unlearning performance. Using the CIFAR-100 dataset and the ResNet-50 architecture as a testbed, we design a series of experiments that freeze different hierarchical layers during unlearning to assess their contribution to forgetting effectiveness and model recovery. We employ six evaluation metrics, including accuracy on the forget/retain sets, membership inference attacks (MIA), activation distance, Jensen-Shannon divergence, and Zero Retrain Forgetting (ZRF), to quantify the behavioral shift of the model during unlearning. Our results show that unlearning primarily relies on adjusting high-level features, with deeper layers being more influential in eliminating class-specific knowledge. Additionally, t-SNE visualizations reveal that forgotten samples tend to be reassigned to semantically similar categories, emulating a form of natural forgetting. These findings provide actionable insights into the internal dynamics of unlearning and suggest that targeted manipulation of higher-level features can significantly enhance unlearning effectiveness while preserving model utility. | |
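The weight-resetting-with-selective-freezing idea described in the abstract can be illustrated with a minimal toy sketch. This is not the paper's implementation: the layer names, the `reset_and_freeze` helper, and the stand-in weight matrices are all hypothetical, chosen only to mirror ResNet-50's shallow-to-deep stage ordering.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deep model: one weight matrix per stage, ordered
# shallow (generic features) to deep (class-specific features), loosely
# mirroring ResNet-50's stage names (hypothetical, for illustration).
layers = {name: rng.normal(size=(4, 4))
          for name in ["conv1", "layer1", "layer2", "layer3", "layer4"]}

def reset_and_freeze(layers, freeze_up_to):
    """Hypothetical helper: freeze every stage up to and including
    `freeze_up_to`, and re-initialize (weight-reset) the deeper,
    unfrozen stages so they can relearn on the retain set."""
    names = list(layers)
    frozen = set(names[:names.index(freeze_up_to) + 1])
    for name in names:
        if name not in frozen:
            layers[name] = rng.normal(size=layers[name].shape)  # reset
    return frozen

frozen = reset_and_freeze(layers, freeze_up_to="layer2")
# Only the unfrozen (deeper) stages would receive gradient updates
# during recovery fine-tuning; the frozen shallow stages keep their
# generic features intact.
trainable = [n for n in layers if n not in frozen]
print(trainable)  # → ['layer3', 'layer4']
```

In a real PyTorch model the freezing step would amount to setting `requires_grad = False` on the frozen stages' parameters before recovery fine-tuning; the sketch above only captures which stages are reset versus preserved.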
| dc.format.extent | 108 bytes | - |
| dc.format.mimetype | text/html | - |
| dc.relation (Relation) | Pattern Recognition and Computer Vision: 8th Asian Conference on Pattern Recognition, ACPR 2025, IAPR, pp.250-264 | |
| dc.subject (Keywords) | Machine Unlearning; Model Manipulation; Weight Reset; Selective Layer-wise Freezing | |
| dc.title (Title) | Selective Freezing of Feature Hierarchies in Deep Models for Machine Unlearning | |
| dc.type (Type) | conference | |
| dc.identifier.doi (DOI) | 10.1007/978-981-95-4398-4_18 | |
| dc.doi.uri (DOI) | https://doi.org/10.1007/978-981-95-4398-4_18 | |