09.12.2022

IDEAS NCBR invites you to a research seminar devoted to "Model stealing and defenses. Federated learning".

During the seminar, our speakers Dr. Adam Dziedzic of the University of Toronto and the Vector Institute and Dr. Franziska Boenisch of the Vector Institute will give two lectures:

10:00 – Is this model mine? On stealing and defending machine learning models – Dr. Adam Dziedzic,

12:00 – What trust model is needed for federated learning to be private? – Dr. Franziska Boenisch.

Detailed descriptions of the topics, abstracts, and speaker bios (in English) can be found below.

The seminar will take place on Thursday, December 22 this year, at 10:00 a.m. in the conference room on the 1st floor of the UNITRA S.A. building at ul. Nowogrodzka 50 (the former Bank Rolny) in Warsaw.

Registrations will be accepted by e-mail until December 21 this year at: grazyna.wojcik@ideas-ncbr.pl.

When registering, please confirm whether you plan to attend in person or will follow the stream via Zoom.



1. Title: Is this model mine? On stealing and defending machine learning models.

Abstract: In model extraction attacks, adversaries can steal a machine learning model exposed via a public API by repeatedly querying it and adjusting their own model based on the obtained outputs. We present model stealing and defenses for supervised and self-supervised models. To prevent stealing of supervised models, existing defenses focus on detecting malicious queries and truncating or distorting outputs, thus necessarily introducing a tradeoff between robustness and model utility. Instead, we propose to impede model extraction by requiring users to complete a proof-of-work before they can read the model's predictions. This deters attackers by significantly increasing (even up to 100x) the computational effort needed to leverage query access for model extraction. Since we calibrate the effort required to complete the proof-of-work for each query, this introduces only a slight overhead for legitimate users (up to 2x). To achieve this, our calibration applies tools from differential privacy to measure the information revealed by a query. Our method requires no modifications of the victim model and can be applied by machine learning practitioners to guard their publicly exposed models against being easily stolen.

Unlike traditional model extraction on supervised models that return labels or low-dimensional scores, Self-Supervised Learning (SSL) encoders output representations, which are of significantly higher dimensionality than the outputs of supervised models. Recently, ML-as-a-Service providers have begun offering trained self-supervised models over inference APIs, which transform user inputs into useful representations for a fee. However, the high cost of training these models and their exposure over APIs both make black-box extraction a realistic security threat. We explore model stealing by constructing several novel attacks and evaluating existing classes of defenses. We find that approaches that train directly on a victim's stolen representations are query-efficient and enable high accuracy on downstream tasks. We then show that existing defenses against model extraction are inadequate and not easily retrofitted to the specificities of SSL. Finally, we present dataset inference and watermarking as promising directions toward defending SSL encoders against model stealing.
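
As a rough illustration of the proof-of-work defense described above, the following minimal Python sketch shows the general mechanism: the server hands out a hash puzzle whose difficulty grows with the estimated information content of a query, the client must solve it before the prediction is released, and verification stays cheap for the server. All names and the simple difficulty formula are illustrative assumptions; the calibration presented in the talk uses differential-privacy tools to measure the information revealed by each query.

import hashlib
import os

def puzzle_difficulty(information_cost: float, base_bits: int = 8) -> int:
    # Hypothetical calibration: more informative queries get harder puzzles.
    # (The method in the talk measures per-query information with differential privacy.)
    return base_bits + int(information_cost)

def solve_proof_of_work(challenge: bytes, difficulty_bits: int) -> int:
    # Client side: find a nonce whose SHA-256 hash has the required number of
    # leading zero bits. Expected work grows exponentially with the difficulty.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_proof_of_work(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    # Server side: a single hash suffices to check the solution.
    target = 1 << (256 - difficulty_bits)
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < target

# The API issues a challenge with a calibrated difficulty; the client must
# solve it before the prediction for its query is returned.
challenge = os.urandom(16)
bits = puzzle_difficulty(information_cost=4.0)
nonce = solve_proof_of_work(challenge, bits)
assert verify_proof_of_work(challenge, nonce, bits)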

Bio: Adam Dziedzic is a Postdoctoral Fellow at the University of Toronto and Vector Institute, advised by Prof. Nicolas Papernot. His research focus is on trustworthy machine learning, especially model stealing and defenses, as well as private and confidential collaborative machine learning. Adam finished his Ph.D. at the University of Chicago, advised by Prof. Sanjay Krishnan, where he worked on input and model compression for adaptive and robust neural networks. He obtained his Bachelor's and Master's degrees from the Warsaw University of Technology. Adam also studied at the Technical University of Denmark and EPFL. He has worked at CERN, Barclays Investment Bank, Microsoft Research, and Google.


2. Title: What trust model is needed for federated learning to be private?

Abstract: In federated learning (FL), data does not leave personal devices while they jointly train a machine learning model. Instead, these devices share gradients with a central party (e.g., a company). Because data never "leaves" personal devices, FL has been promoted as privacy-preserving. Yet, it was recently shown that this protection is but a thin facade, as even a passive attacker observing gradients can reconstruct the data of individual users. In this talk, I will explore the trust model required to implement practical privacy guarantees in FL by studying the protocol under the assumption of an untrusted central party. I will first show that in vanilla FL, when dealing with an untrusted central party, there is currently no way to provide meaningful privacy guarantees. I will show how gradients of the shared model directly leak some individual training data points, and how this leakage can be amplified through small, targeted manipulations of the model weights. Thereby, the central party can directly and perfectly extract sensitive user data at near-zero computational cost. I will then discuss defenses that implement privacy protection in FL. Here, I will introduce a novel practical attack against FL protected by secure aggregation and differential privacy, currently considered the most private instantiation of the protocol, showing that an actively malicious central party can still have the upper hand on privacy leakage. I will conclude my talk with an outlook on what it will take to achieve privacy guarantees in practice.
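
To make the passive gradient leakage mentioned above concrete, the following minimal Python sketch (an illustrative assumption, not the attack presented in the talk) shows how a single client input can be read off the gradients of one fully connected layer shared in vanilla FL: for a layer y = Wx + b, the weight gradient is the outer product of the output gradient and the input, so dividing one row of it by the matching bias gradient recovers the input exactly.

import numpy as np

# One fully connected layer with bias: y = W @ x + b.
rng = np.random.default_rng(0)
x = rng.normal(size=4)              # a user's private input
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
y = W @ x + b

# Hypothetical squared-error loss against some target t.
t = rng.normal(size=3)
dL_dy = 2 * (y - t)
dL_dW = np.outer(dL_dy, x)          # gradient the client would share in vanilla FL
dL_db = dL_dy

# The untrusted central party recovers the private input from the shared
# gradients alone (assuming the chosen bias-gradient entry is nonzero).
reconstructed_x = dL_dW[0] / dL_db[0]
assert np.allclose(reconstructed_x, x)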

Bio: Franziska is a Postdoctoral Fellow at the Vector Institute in Toronto. She obtained her Ph.D. in the Computer Science Department at Freie Universität Berlin. During her Ph.D., Franziska was a research associate at the Fraunhofer Institute for Applied and Integrated Security, Germany. Her research focuses on trustworthy machine learning and privacy. She received a Fraunhofer grant for outstanding female early career researchers and the German Industrial Research Foundation prize for her applied research.
