Federated Learning Algorithms & Applications I – Invited Special Session


Session Type: Lecture
Session Code: A4L-E
Location: Room 5
Date & Time: Wednesday March 22, 2023 (14:00 - 15:00)
Chair: Sanjay Purushotham
Track: 12
Paper ID: 3036
Title: Federated Learning Beyond Consensus
Authors: Amrit Singh Bedi{4}, Chen Fan{3}, Alec Koppel{2}, Anit Kumar Sahu{1}, Furong Huang{4}, Dinesh Manocha{4}
Abstract: In federated learning (FL), the objective of collaboratively learning a global model through aggregation of model updates across devices tends to oppose the goal of personalization via local information. In this work, we calibrate this tradeoff in a quantitative manner through a multi-criterion optimization-based framework, which we cast as a constrained program: the objective for a device is its local objective, which it seeks to minimize while satisfying nonlinear constraints that quantify the proximity between the local and the global model. By considering the Lagrangian relaxation of this problem, we develop an algorithm that allows each node to minimize its local component of the Lagrangian through queries to a first-order gradient oracle. Then, the server executes Lagrange multiplier ascent steps followed by a Lagrange multiplier-weighted averaging step. We call this instantiation of the primal-dual method Federated Learning Beyond Consensus (FedBC). Theoretically, we establish that FedBC converges to a first-order stationary point at rates that match the state of the art, up to an additional error term that depends on the tolerance parameter of the proximity constraints. Overall, the analysis is a novel characterization of primal-dual methods applied to non-convex saddle point problems with nonlinear constraints. Finally, we demonstrate that FedBC balances the global and local model test accuracy metrics across a suite of datasets (Synthetic, MNIST, CIFAR-10, Shakespeare), achieving performance competitive with the state of the art.
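
The primal-dual scheme the abstract describes (local Lagrangian descent on the devices, then Lagrange multiplier ascent and multiplier-weighted averaging at the server) can be sketched on toy quadratic objectives. All numerical choices below, including the quadratic losses, the tolerance `eps`, and the step sizes `eta` and `eta_dual`, are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim = 5, 3
centers = rng.normal(size=(num_devices, dim))  # minimizers of the local objectives
eps = 0.5                    # tolerance of the proximity constraints
eta, eta_dual = 0.1, 0.05    # primal and dual step sizes

w = np.zeros((num_devices, dim))   # local models
lam = np.ones(num_devices)         # one Lagrange multiplier per device
w_bar = w.mean(axis=0)             # initial global model

for _ in range(2000):
    # Device side: one gradient step on the local Lagrangian component
    # f_i(w_i) + lam_i * (||w_i - w_bar||^2 - eps), with f_i(w) = 0.5*||w - c_i||^2.
    grad = (w - centers) + 2.0 * lam[:, None] * (w - w_bar)
    w = w - eta * grad
    # Server side: Lagrange multiplier ascent on the constraint slack,
    # projected back onto the nonnegative orthant...
    slack = np.sum((w - w_bar) ** 2, axis=1) - eps
    lam = np.maximum(0.0, lam + eta_dual * slack)
    # ...followed by Lagrange-multiplier-weighted averaging of local models.
    weights = lam / lam.sum() if lam.sum() > 0 else np.full(num_devices, 1.0 / num_devices)
    w_bar = (weights[:, None] * w).sum(axis=0)
```

The multiplier-weighted averaging gives more influence to devices whose proximity constraints are tight, while devices with inactive constraints (multiplier near zero) are left closer to their personalized local minimizers.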

Paper ID: 3195
Title: Federated Learning Over Cooperative Networks: Towards Energy- and Time-Efficient Distributed Machine Learning Over Time-Varying Data
Authors: Seyyedali Hosseinalipour{3}, Su Wang{2}, Nicolo Michelusi{1}, Vaneet Aggarwal{2}, Christopher Brinton{2}, David Love{2}, Mung Chiang{2}
Abstract: Federated learning (FL) has emerged as a popular technique for conducting distributed machine learning over wireless edge devices. In this work, we integrate cooperation among the devices, realized through device-to-device (D2D) communications, into the conventional FL architecture. We demonstrate how cooperation among the devices can be exploited to compensate for the heterogeneity of devices in terms of their datasets and communication/computation resources. We consider a realistic environment in which the distributions of the devices' local datasets evolve over time, which we capture through the notion of concept drift. Along this line, we further introduce idle times between local model training rounds at the devices to achieve resource savings, the durations of which are optimized according to the concept drift. We analytically characterize the performance of machine learning model training in our framework of interest and shed light on the notions of cold vs. warmed-up models and model inertia. We then formulate a general network optimization problem to optimize the model learning vs. resource efficiency tradeoff and solve it through a successive convex optimization technique. Through numerical results, we show the interconnections between idle times and concept drift, and the impact of device cooperation on the performance of the trained machine learning model.
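
The core D2D cooperation idea can be sketched simply: between visits to the server, devices run a few gossip (averaging) rounds over a D2D graph, which shrinks the model spread caused by device heterogeneity while leaving the server-side average unchanged. The ring topology and mixing matrix below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
num_devices, dim = 4, 2
# Doubly stochastic mixing matrix for a ring-shaped D2D topology:
# each device keeps half its own model and takes a quarter from each neighbor.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

models = rng.normal(size=(num_devices, dim))  # heterogeneous local models
models_init = models.copy()

for _ in range(3):           # a few D2D gossip rounds between server visits
    models = W @ models      # each device averages with its two ring neighbors

global_model = models.mean(axis=0)            # server-side aggregation is unchanged
spread_before = models_init.var(axis=0).sum()
spread_after = models.var(axis=0).sum()
```

Because the mixing matrix is doubly stochastic, gossip preserves the device average exactly, so the server recovers the same global model while per-device disagreement decays geometrically with each D2D round.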

Paper ID: 3188
Title: A Comprehensive Study of Gradient Inversion Attacks in Federated Learning and Baseline Defense Strategies
Authors: Pretom Roy Ovi, Aryya Gangopadhyay
Abstract: With a greater emphasis on data privacy and legislation, collaborative machine learning (ML) algorithms are being developed to protect sensitive private data. Federated learning (FL) is the most popular of these methods, enabling collaborative model construction among a large number of users without the requirement for explicit data sharing. Because FL models are built with a distributed training and gradient sharing protocol, they are more vulnerable to gradient inversion attacks, in which sensitive training data is extracted from raw gradients. Gradient inversion attacks to reconstruct data are regarded as among the most severe privacy risks in FL, as attackers covertly monitor gradient updates and backtrack from the gradients to obtain information about the raw data without compromising model training quality. Even without prior knowledge of the learning model, the attacker can breach the secrecy and confidentiality of the training data via the intermediate gradients. Existing FL protocol designs have been proven to exhibit vulnerabilities that can be exploited by adversaries both within and outside the system to compromise data privacy. Thus, it is critical to make FL system designers aware of the implications of future FL algorithm design on privacy preservation. Motivated by this, our work focuses on exploring the data confidentiality and integrity of FL, where we emphasize the intuitions, key approaches, and fundamental assumptions underlying existing gradient inversion attack strategies. Then we examine the limitations of the different approaches and evaluate their qualitative and quantitative performance in retrieving raw data. Furthermore, we assess the effectiveness of baseline defense mechanisms against these attacks and provide promising future research directions for more robust privacy preservation in FL.
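
To make the gradient inversion threat concrete, the toy example below reconstructs a private input from the shared gradient of a linear model with squared loss, in closed form. The model, the assumption that the label is known to the attacker, and all numerical choices are illustrative; this does not reproduce any specific attack surveyed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4
w = rng.normal(size=dim)         # shared model weights, known to the attacker
x_secret = rng.normal(size=dim)  # private training example
y = 1.0                          # its label, assumed known for this sketch

# Gradient of the squared loss 0.5*(w.x - y)^2 w.r.t. w -- what FL shares.
g_obs = (w @ x_secret - y) * x_secret

# The leaked gradient is a scalar multiple of the input: g = (s - y) * x
# with s = w.x. Substituting x = g / (s - y) into s = w.x gives the
# quadratic s^2 - y*s - (w.g) = 0, so the attacker recovers at most two
# candidate inputs in closed form, one of which is the true example.
disc = np.sqrt(y ** 2 + 4.0 * (w @ g_obs))   # discriminant equals |2s - y|, never negative
roots = [(y + disc) / 2.0, (y - disc) / 2.0]
candidates = [g_obs / (s - y) for s in roots]

errors = [np.linalg.norm(c - x_secret) for c in candidates]
```

This is the simplest instance of the general observation that gradients of input-facing layers carry (near-)direct copies of the training data, which is what optimization-based inversion attacks exploit in deeper models.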