Securing Distributed Learning via Randomized Hash-Combs

13 May 2024
11:00 am
San Francesco Complex - Sagrestia

The massive deployment of Machine Learning (ML) models raises serious concerns about data protection. While much research is being carried out in this domain, hard challenges persist in achieving confidentiality and differential privacy in distributed and federated learning. In this talk, we describe a regulation-compliant data protection scheme for training ML models, applicable throughout the ML life cycle regardless of the underlying ML architecture. Designed from the data owner's perspective, our technique encodes both the training data and the parameters of ML models using a simple multi-hash encoding (Hash-Comb) of a randomised quantization. The hyper-parameters of our encoding scheme can be shared using standard Secure Multiparty Computation protocols.
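The abstract does not specify the encoding in detail; as a rough illustration only, the following hypothetical Python sketch shows the general shape of a multi-hash encoding of a randomised quantization: a value is quantized with a randomly drawn step (a secret hyper-parameter that could be shared via Secure Multiparty Computation), and the quantized result is hashed under several independent salts. All names and parameters here are assumptions, not the authors' actual scheme.

```python
import hashlib
import random

def hash_comb(value, num_hashes=4, quant_step_range=(0.01, 0.1), seed=42):
    """Hypothetical sketch of a Hash-Comb-style encoding (illustrative,
    not the scheme presented in the talk)."""
    rng = random.Random(seed)
    # Randomised quantization: the step is a secret hyper-parameter
    # drawn at random, to be shared among parties via SMC protocols.
    step = rng.uniform(*quant_step_range)
    quantized = round(value / step)
    # Multi-hash encoding: one salted digest per "tooth" of the comb.
    digests = []
    for _ in range(num_hashes):
        salt = rng.randbytes(16)
        h = hashlib.sha256(salt + str(quantized).encode()).hexdigest()
        digests.append(h)
    return step, digests
```

With the same seed (standing in for shared hyper-parameters), two parties obtain identical digests for the same value, so encoded training data and model parameters can be compared without revealing the underlying values.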


Join at: imt.lu/sagrestia

Speaker: 
Ernesto Damiani, University of Milan
Units: 
SYSMA