$\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning

Authors: Harish Karthikeyan, Antigoni Polychroniadou

Abstract: Our work aims to minimize interaction in secure computation due to the high
cost and challenges associated with communication rounds, particularly in
scenarios with many clients. In this work, we revisit the problem of secure
aggregation in the single-server setting where a single evaluation server can
securely aggregate client-held individual inputs. Our key contribution is the
introduction of One-shot Private Aggregation ($\mathsf{OPA}$) where clients
speak only once (or even choose not to speak) per aggregation evaluation.
Because each client communicates only once per aggregation, dropouts and
dynamic participation are straightforward to manage, in contrast with
multi-round protocols and in keeping with plaintext secure aggregation, where
clients interact only once.
We construct $\mathsf{OPA}$ based on LWR, LWE, class groups, and DCR, and
demonstrate applications to privacy-preserving Federated Learning (FL) where
clients \emph{speak once}. This is a sharp departure from prior multi-round FL
protocols, whose study was initiated by Bonawitz et al. (CCS 2017). Moreover,
unlike the YOSO (You Only Speak Once) model for general secure computation,
$\mathsf{OPA}$ eliminates complex committee selection protocols to achieve
adaptive security. Beyond asymptotic improvements, $\mathsf{OPA}$ is practical,
outperforming state-of-the-art solutions. We benchmark logistic regression
classifiers on two datasets and also train an MLP classifier on the MNIST,
CIFAR-10, and CIFAR-100 datasets. We build two flavors of $\mathsf{OPA}$:
(1) from (threshold) key-homomorphic PRFs and (2) from seed-homomorphic PRGs
and secret sharing.
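To make the first flavor concrete, below is a minimal sketch of the mask-cancellation idea behind aggregation from a key-homomorphic PRF. It is illustrative only, not the paper's construction: the stand-in PRF $F_k(t) = k \cdot H(t) \bmod q$ is perfectly key homomorphic but not a secure PRF, whereas the paper instantiates such PRFs from LWR, LWE, class groups, and DCR, and the key setup shown here is simplified to a single summed key.

```python
# Toy sketch (NOT the paper's construction): mask cancellation in one-shot
# aggregation from a key-homomorphic PRF. The stand-in F_k(t) = k * H(t) mod q
# is perfectly key homomorphic but insecure; real instantiations come from
# LWR, LWE, class groups, and DCR.
import hashlib
import random

q = 2**61 - 1  # toy prime modulus

def H(label: str) -> int:
    # Public hash mapping a round label into Z_q.
    return int.from_bytes(hashlib.sha256(label.encode()).digest(), "big") % q

def F(k: int, label: str) -> int:
    # Key homomorphism: F(k1 + k2, t) == F(k1, t) + F(k2, t)  (mod q).
    return k * H(label) % q

n = 5
keys = [random.randrange(q) for _ in range(n)]  # per-client PRF keys
key_sum = sum(keys) % q  # in a real protocol, obtained via setup/secret sharing

# One-shot round: each client speaks once, sending a single masked value.
label = "round-42"
xs = [random.randrange(1000) for _ in range(n)]  # private inputs
masked = [(x + F(k, label)) % q for x, k in zip(xs, keys)]

# Server: the per-client masks collapse into one mask under the summed key.
agg = (sum(masked) - F(key_sum, label)) % q
assert agg == sum(xs) % q
print("recovered sum:", agg)
```

At a similarly high level, the seed-homomorphic PRG flavor follows the same pattern: clients expand secret-shared seeds with a PRG $G$ satisfying $G(s_1 + s_2) = G(s_1) + G(s_2)$, so the server can strip the combined mask after reconstructing the summed seed.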

Source: http://arxiv.org/abs/2410.22303v1
