Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience

  • Conference paper
Machine Learning and Knowledge Discovery in Databases. Research Track (ECML PKDD 2024)

Abstract

Federated learning (FL) has recently emerged as a compelling machine learning paradigm, prioritizing the protection of privacy for training data. The increasing demand to address issues such as “the right to be forgotten” and to combat data poisoning attacks highlights the importance of techniques, known as unlearning, that facilitate the removal of specific training data from trained FL models. Despite numerous unlearning methods proposed for centralized learning, they often prove inapplicable to FL due to fundamental differences in how the two paradigms operate. Consequently, unlearning in FL remains in its early stages and presents several challenges. Many existing unlearning solutions in FL require a costly retraining process, which can be burdensome for clients. Moreover, these methods are primarily validated through experiments and lack theoretical assurances. In this study, we introduce Fast-FedUL, an unlearning method tailored to FL that eliminates the need for retraining entirely. Through a careful analysis of the target client’s influence on the global model in each round, we develop an algorithm to systematically remove the impact of the target client from the trained model. Beyond empirical findings, we offer a theoretical analysis establishing an upper bound on the difference between our unlearned model and the exact retrained model (the one obtained by retraining with only the untargeted clients). Experimental results with backdoor attack scenarios indicate that Fast-FedUL effectively removes almost all traces of the target client (reducing the backdoor attack success rate on the unlearned model to a mere 0.01%) while retaining the knowledge of the untargeted clients (achieving an accuracy of up to 98% on the main task). Notably, Fast-FedUL attains the lowest time complexity among the compared methods, running up to 1000 times faster than retraining.
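The abstract describes the mechanism only at a high level. For intuition, the following minimal Python sketch shows the naive form of the idea: if the server kept each client's weighted update from every round, a target client's direct contributions can be rolled back without retraining. All names here (unlearn_client, update_history, target_id) are hypothetical, and the sketch deliberately omits the hard part that Fast-FedUL actually addresses, namely correcting for how the target's updates skewed the other clients' subsequent updates, with a provable bound on the resulting error.

```python
import numpy as np

def unlearn_client(global_weights, update_history, target_id):
    """Subtract a target client's stored per-round updates from the
    trained global model, without any retraining.

    Assumes FedAvg-style aggregation in which the server recorded, for
    every round t and client k, the weighted update delta[t][k] with
        w_{t+1} = w_t + sum_k delta[t][k].
    NOTE: this naive rollback ignores how the target's updates
    influenced the other clients' later training; handling that
    knock-on effect is the contribution of Fast-FedUL itself.
    """
    unlearned = {name: w.copy() for name, w in global_weights.items()}
    for round_updates in update_history:          # one dict per round
        target_delta = round_updates.get(target_id)
        if target_delta is None:                  # client not sampled
            continue
        for name, delta in target_delta.items():  # per-tensor rollback
            unlearned[name] -= delta
    return unlearned

# Toy usage: a one-tensor "model", two rounds, clients "a" and "b".
w = {"layer": np.array([1.0, 2.0])}
history = [
    {"a": {"layer": np.array([0.1, 0.1])},
     "b": {"layer": np.array([0.2, 0.0])}},
    {"a": {"layer": np.array([0.0, 0.3])}},
]
print(unlearn_client(w, history, target_id="a"))  # {'layer': array([0.9, 1.6])}
```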

T. T. Huynh and T. B. Nguyen—Both authors contributed equally to this research.


Acknowledgements

This work was funded by Vingroup Joint Stock Company (Vingroup JSC) and supported by the Vingroup Innovation Foundation (VINIF) under project code VINIF.2021.DA00128. This research was also funded by Hanoi University of Science and Technology (HUST) under grant number T2023-PC-028.

Author information

Correspondence to Phi Le Nguyen.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Huynh, T.T., et al. (2024). Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds.) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol. 14945. Springer, Cham. https://doi.org/10.1007/978-3-031-70362-1_4


  • DOI: https://doi.org/10.1007/978-3-031-70362-1_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70361-4

  • Online ISBN: 978-3-031-70362-1

  • eBook Packages: Computer Science; Computer Science (R0)
