SeungJae Lee
North London Collegiate School
Abstract
Deep learning now underpins many everyday applications, including health care, chatbots, entertainment, product recommendation, and virtual meetings. During training, deep learning models fit their parameters to datasets that may contain private information, and that information can be memorized by the model parameters. Because such privacy risks persist, applying differential privacy to deep learning is widely recognized as a way to provide rigorous mathematical privacy guarantees. This paper revisits differentially private stochastic gradient descent (DP-SGD) as a method for achieving strong privacy protection, examines the pros and cons of deploying the mechanism at the input, hidden, and output layers, and offers a broader outlook on this practice.
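For orientation, the following is a minimal sketch of the DP-SGD idea the abstract refers to: per-example gradients are clipped to an L2 bound and Gaussian noise is added before averaging. It uses a toy logistic-regression model in plain NumPy; the parameter names (clip bound C, noise multiplier sigma, learning rate lr) and the toy data are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
# Illustrative DP-SGD step (sketch only; not the paper's implementation).
import numpy as np

def dp_sgd_step(w, X, y, rng, lr=0.1, C=1.0, sigma=1.0):
    """One DP-SGD update for logistic regression on a minibatch (X, y)."""
    grads = []
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-xi @ w))         # predicted probability
        g = (p - yi) * xi                          # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / C)    # clip to L2 norm bound C
        grads.append(g)
    noise = rng.normal(0.0, sigma * C, size=w.shape)   # Gaussian noise
    g_avg = (np.sum(grads, axis=0) + noise) / len(X)   # noisy average gradient
    return w - lr * g_avg

# Toy usage: 32 random examples in 5 dimensions.
rng = np.random.default_rng(42)
X = rng.normal(size=(32, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(10):
    w = dp_sgd_step(w, X, y, rng)
print(w)
```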