Paper 2022/1695
ELSA: Secure Aggregation for Federated Learning with Malicious Actors
Abstract
Federated learning (FL) is an increasingly popular approach for machine learning (ML) in cases where the training dataset is highly distributed. Clients perform local training on their datasets and the updates are then aggregated into the global model. Existing protocols for aggregation are either inefficient or do not consider the case of malicious actors in the system. This is a major barrier in making FL an ideal solution for privacy-sensitive ML applications. We present ELSA, a secure aggregation protocol for FL, which breaks this barrier: it is efficient and addresses the existence of malicious actors at the core of its design. Similar to prior work on Prio and Prio+, ELSA provides a novel secure aggregation protocol built out of distributed trust across two servers that keeps individual client updates private as long as one server is honest, defends against malicious clients, and is efficient end-to-end. Compared to prior works, the distinguishing theme in ELSA is that instead of the servers generating cryptographic correlations interactively, the clients act as untrusted dealers of these correlations without compromising the protocol's security. This leads to a much faster protocol while also achieving stronger security at that efficiency than prior work. We introduce new techniques that retain privacy even when a server is malicious at a small added cost of 7-25% in runtime with negligible increase in communication over the case of a semi-honest server. Our work improves end-to-end runtime over prior work with similar security guarantees by large margins: single-aggregator RoFL by up to 305x (for the models we consider) and distributed-trust Prio by up to 8x.
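To make the distributed-trust setting concrete, below is a minimal sketch (not from the paper, with illustrative function names) of the basic two-server additive secret-sharing idea that protocols like Prio, Prio+, and ELSA build on: each client splits its quantized update into two random shares, one per server, so neither server alone learns anything, yet combining the servers' local aggregates yields the sum of all client updates.

# Minimal sketch, not ELSA's actual protocol: two-server aggregation via
# additive secret sharing over the ring Z_{2^32} (ring size is an
# illustrative choice). Function names are hypothetical.
import secrets

MOD = 2**32

def share_update(update):
    """Split a quantized update (list of ints mod MOD) into two additive shares."""
    share0 = [secrets.randbelow(MOD) for _ in update]
    share1 = [(u - s) % MOD for u, s in zip(update, share0)]
    return share0, share1

def aggregate(share_vectors):
    """Server-side: sum the received share vectors coordinate-wise mod MOD."""
    total = [0] * len(share_vectors[0])
    for vec in share_vectors:
        total = [(t + v) % MOD for t, v in zip(total, vec)]
    return total

def reconstruct(agg0, agg1):
    """Combine the two servers' aggregates to recover the sum of all updates."""
    return [(a + b) % MOD for a, b in zip(agg0, agg1)]

# Example: three clients with updates already quantized to integers mod MOD.
clients = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
to_server0, to_server1 = [], []
for upd in clients:
    s0, s1 = share_update(upd)
    to_server0.append(s0)
    to_server1.append(s1)

assert reconstruct(aggregate(to_server0), aggregate(to_server1)) == [111, 222, 333]

What this sketch omits is precisely where ELSA's contributions lie: validating that malicious clients submit well-formed updates and retaining privacy against a malicious server, with the clients themselves acting as untrusted dealers of the cryptographic correlations that the servers need.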
Metadata
- Category: Cryptographic protocols
- Publication info: Published elsewhere. IEEE Security and Privacy (S&P) 2023
- Keywords: Secure Federated Learning, Malicious, Privacy, Distributed Trust
- Contact author(s): mayankr @ berkeley edu, tomshen @ berkeley edu, sameer @ devron ai, raluca popa @ berkeley edu
- History: 2022-12-10: approved; 2022-12-07: received
- Short URL: https://ia.cr/2022/1695
- License: CC BY
BibTeX
@misc{cryptoeprint:2022/1695,
      author = {Mayank Rathee and Conghao Shen and Sameer Wagh and Raluca Ada Popa},
      title = {{ELSA}: Secure Aggregation for Federated Learning with Malicious Actors},
      howpublished = {Cryptology {ePrint} Archive, Paper 2022/1695},
      year = {2022},
      url = {https://eprint.iacr.org/2022/1695}
}