New Privacy-Preserving Federated Learning Blog Post

Dear Colleagues,

In the last two posts of our Privacy-Preserving Federated Learning (PPFL) blog series, we covered techniques for input privacy in PPFL in the context of horizontally and vertically partitioned data. However, to complete a PPFL system, these techniques must be combined with an approach for output privacy to limit what can be inferred about individuals after model training. Want to learn more about output privacy and training with differential privacy? Find out more in our new post, Protecting Trained Models in Privacy-Preserving Federated Learning!

Protecting Trained Models in Privacy-Preserving Federated Learning by Joseph Near and David Darais
Read the post.  

Read blog posts #1–#6 on our PPFL Blog Series page. We encourage readers to ask questions by contacting us at [email protected].

Meanwhile, stay tuned for the next PPFL blog post!

All the best,
NIST Privacy Engineering Program
