NEW BLOG | Protecting Trained Models in Privacy-Preserving Federated Learning

This post is part of a series on privacy-preserving federated learning. The series is a collaboration between NIST and the UK government’s Responsible Technology Adoption Unit (RTA), previously known as the Centre for Data Ethics and Innovation. 

The last two posts in our series covered techniques for input privacy in privacy-preserving federated learning, in the context of horizontally and vertically partitioned data. To build a complete privacy-preserving federated learning system, these techniques must be combined with an approach for output privacy, which limits how much can be learned about individuals in the training data after the model has been trained.

As described in the second part of our post on privacy attacks in federated learning, trained models can leak significant information about their training data—including whole images and text snippets.

Training with Differential Privacy

The strongest known form of output privacy is differential privacy. Differential privacy is…
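The linked post explains differential privacy in full. As a rough sketch of what "training with differential privacy" can look like in practice, the illustrative PyTorch snippet below performs a single DP-SGD style update: each example's gradient is clipped to bound its influence, and Gaussian noise calibrated to that clipping bound is added before the parameters are updated. The toy model, clipping bound, noise multiplier, and learning rate are hypothetical placeholders, not values or code from the post.

import torch
import torch.nn as nn

# Hypothetical settings chosen for illustration only (not from the linked post)
CLIP_NORM = 1.0         # per-example gradient clipping bound
NOISE_MULTIPLIER = 1.1  # Gaussian noise scale relative to CLIP_NORM
LR = 0.1                # learning rate

model = nn.Linear(20, 2)         # toy model standing in for the real one
loss_fn = nn.CrossEntropyLoss()

def dp_sgd_step(batch_x, batch_y):
    """One DP-SGD step: clip each example's gradient, then add calibrated noise."""
    summed_grads = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(batch_x, batch_y):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        # Bound this example's influence by clipping its gradient norm
        total_norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
        scale = (CLIP_NORM / (total_norm + 1e-12)).clamp(max=1.0)
        for g_sum, p in zip(summed_grads, model.parameters()):
            g_sum += p.grad * scale
    # Add Gaussian noise to the summed, clipped gradients and take an SGD step
    with torch.no_grad():
        for p, g_sum in zip(model.parameters(), summed_grads):
            noise = torch.normal(0.0, NOISE_MULTIPLIER * CLIP_NORM, size=p.shape)
            p -= LR * (g_sum + noise) / len(batch_x)

# Toy usage with random data
features = torch.randn(8, 20)
labels = torch.randint(0, 2, (8,))
dp_sgd_step(features, labels)

Because each example's contribution to the update is bounded and then masked by noise, the trained model reveals only a limited, quantifiable amount about any single individual in the training data.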

Read the Blog