Abstract: Machine learning models implicitly encode sensitive training data and may reveal users' private information under attacks such as model queries or model inspection. To address this problem, this paper proposes a sensitive-data privacy protection mentoring model, PATE-T, which provides a strong privacy guarantee for machine learning training data. The method combines, in a black-box manner, multiple Master models, each trained directly on one of several disjoint sensitive data sets. The Disciple is trained by transfer learning from the aggregated outputs of the Masters and cannot directly access the Masters or their underlying parameters. The Disciple's data domain is different from, but related to, the sensitive training data domain. In terms of differential privacy, an attacker can query the Disciple and inspect its internal workings, but cannot recover private information about the training data. Experiments show that the proposed privacy protection model achieves a favorable privacy/utility trade-off on the MNIST and SVHN data sets, with results superior to prior work.
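The black-box aggregation of Master models described above can be sketched as a noisy vote over their predicted labels; the Disciple only ever sees the aggregated result. This is a minimal illustrative sketch: the function name, the Laplace noise mechanism, and the `gamma` parameter are assumptions in the spirit of PATE-style aggregation, not the paper's exact mechanism.

```python
import numpy as np

def noisy_aggregate(master_predictions, num_classes, gamma=0.05, rng=None):
    """Return the class label with the highest noisy vote count.

    master_predictions: one predicted class label per Master model.
    gamma: inverse Laplace noise scale; smaller gamma means more noise
           and a stronger (illustrative) privacy guarantee.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Count how many Masters voted for each class.
    votes = np.bincount(master_predictions, minlength=num_classes)
    # Add Laplace noise to each vote count before taking the argmax,
    # so no single sensitive training example can swing the outcome.
    noisy_votes = votes + rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(noisy_votes))
```

Because only the noisy argmax is released, the Disciple's training labels carry limited information about any individual record in the Masters' sensitive data sets.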