This event is addressed to master's students, doctoral students, and postdocs from Poland and abroad, primarily in the fields of cryptography, mathematics, and theoretical computer science.
Privacy in machine learning is an important and exciting research topic that focuses on how to reap the benefits of machine learning techniques while keeping training data and learned models private.
In the second edition of the IACR Privacy-Preserving Machine Learning (PPML) school, lecturers with both theoretical and practical experience will cover a wide range of topics in this area, such as secure computation, differential privacy, and federated learning.
Event agenda:
| from | to | Monday | Tuesday | Wednesday | Thursday |
|---|---|---|---|---|---|
| 09:20 | 09:30 | Stefan Dziembowski – opening remarks | | | |
| 09:30 | 11:00 | Rafael Dowsley – An ML crash course for the school | Nishanth Chandran – MPC & ML 2 | Emiliano de Cristofaro – Synthetic data (online talk) | |
| 11:00 | 11:30 | break | break | break | break |
| 11:30 | 13:00 | Nishanth Chandran – MPC & ML 1 | Peter Kairouz – Federated Learning & Differential Privacy 1 | Celia Kherfallah – TFHE in action using Concrete ML | Peter Kairouz – Federated Learning & Differential Privacy 2 |
| 13:00 | 14:30 | break | break | break | break |
| 14:30 | 16:00 | Yuriy Polyakov – FHE for ML | Yang Zhang – Model extraction & membership inference | Yang Zhang – Backdoor attacks and other fun attacks | Peter Kairouz – Federated Learning & Differential Privacy 3 (finishing at 15:30) |
| 16:00 | 16:30 | break | break | break | |
| 16:30 | 18:00 | Yuriy Polyakov – FHE in action using OpenFHE | Rafael Dowsley – PPML, just not for Deep Neural Nets | Dr. Grażyna Żebrowska (IDEAS NCBR board member) – Research and development at IDEAS NCBR | |