Differential Privacy Regularization: Protecting Training Data Through Loss Function Regularization

Authors: Francisco Aguilera-Martínez, Fernando Berzal

Abstract: Training machine learning models based on neural networks requires
large datasets, which may contain sensitive information, yet the trained
models should not expose private information from those datasets.
Differentially private SGD (DP-SGD) protects the training data by modifying
the standard stochastic gradient descent (SGD) algorithm used to train new
models. In this short paper, a novel regularization strategy is proposed to
achieve the same goal more efficiently.

Source: http://arxiv.org/abs/2409.17144v1
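The contrast the abstract draws is between changing the optimizer and changing the loss. The sketch below, in plain NumPy, illustrates both sides: the DP-SGD step implements the standard per-example gradient clipping and Gaussian noise mechanism of Abadi et al. (2016), while the second step shows only the generic loss-regularization pattern. All names and parameters here (dp_sgd_step, C, sigma, lam) are illustrative assumptions, and the L2 penalty is a placeholder only: the abstract does not reveal the paper's actual privacy regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0):
    """One DP-SGD step (Abadi et al., 2016) for least-squares regression.

    Each per-example gradient is clipped to L2 norm at most C, the clipped
    gradients are averaged, and Gaussian noise scaled by sigma * C is added.
    """
    per_example = []
    for xi, yi in zip(X, y):
        g = (w @ xi - yi) * xi                     # gradient of 0.5 * (w.x - y)^2
        g = g / max(1.0, np.linalg.norm(g) / C)    # clip to norm C
        per_example.append(g)
    g_bar = np.mean(per_example, axis=0)
    noise = rng.normal(0.0, sigma * C / len(X), size=w.shape)
    return w - lr * (g_bar + noise)

def regularized_sgd_step(w, X, y, lr=0.1, lam=0.01):
    """Plain SGD on a loss with an added penalty term.

    The penalty here is an ordinary L2 term used ONLY as a placeholder for
    the loss-regularization pattern; it is NOT the regularizer proposed in
    the paper, which the abstract does not specify.
    """
    residual = X @ w - y
    g = X.T @ residual / len(X) + lam * w  # gradient of MSE/2 + (lam/2)||w||^2
    return w - lr * g

# Toy usage on synthetic linear-regression data.
X = rng.normal(size=(64, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=64)

w_dp, w_reg = np.zeros(5), np.zeros(5)
for _ in range(200):
    w_dp = dp_sgd_step(w_dp, X, y)
    w_reg = regularized_sgd_step(w_reg, X, y)
```

The efficiency argument is visible even in this toy: DP-SGD must compute and clip a gradient per training example, while the regularized step processes the whole batch with a single gradient computation.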
