Information-theoretic Methods for Machine Learning I – Invited Special Session


Session Type: Lecture
Session Code: A4L-D
Location: Room 4
Date & Time: Wednesday March 22, 2023 (14:00 - 15:00)
Chair: Flavio Calmon
Track: 12
Papers:
Paper ID: 3059
Title: Achieving Joint Privacy and Communication Efficiency in Federated Learning and Analytics
Authors: Ayfer Özgür, Wei-Ning Chen
Abstract: Two major challenges in federated learning (FL) and analytics (FA) are 1) preserving the privacy of the local samples and 2) communicating them efficiently to a central server, while achieving high accuracy for the end-to-end task. In this talk, we will explore how to address these two challenges jointly without needing a trusted server, under local differential privacy (where the locally privatized message satisfies DP) and distributed differential privacy (where the securely aggregated local information satisfies DP). In particular, we consider two canonical problems -- mean and frequency estimation -- that lie at the heart of FL and FA applications, respectively. We study the fundamental trade-offs between privacy, communication, and accuracy, and show how to achieve them efficiently with tools including random sampling, random projection, sketching, secure aggregation, and distributed privacy mechanisms. Surprisingly, under both local and distributed DP, we show that the requirements for communication efficiency and privacy can be aligned: with more stringent privacy requirements, one can compress more aggressively and achieve higher communication efficiency while preserving roughly the same level of accuracy. Finally, we will conclude the talk with some recent results and open problems that go beyond the worst-case bounds, showing that by incorporating information about the structure of the problem, one can potentially improve the overly pessimistic global minimax bounds and obtain better instance-dependent convergence.
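As a concrete illustration of how privacy and communication can be handled jointly, the sketch below combines 1-bit stochastic quantization with binary randomized response for locally private mean estimation: each client sends a single privatized bit, and the server debiases the average. This is a standard textbook construction under epsilon-local DP, not the specific mechanisms from the talk; all function names are illustrative.

```python
import numpy as np

def privatize(x, eps, rng):
    """1-bit quantization of x in [0, 1], then eps-LDP randomized response."""
    bit = rng.random() < x                        # unbiased: E[bit] = x
    keep = rng.random() < np.exp(eps) / (1.0 + np.exp(eps))
    return int(bit) if keep else 1 - int(bit)

def debiased_mean(bits, eps):
    """Unbiased estimate of the population mean from privatized bits."""
    p = np.exp(eps) / (1.0 + np.exp(eps))         # prob. the bit was kept
    # E[privatized bit] = (2p - 1) * mean + (1 - p), so invert:
    return (np.mean(bits) - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(0)
xs = rng.uniform(0.3, 0.7, size=50_000)           # clients' local values
bits = [privatize(x, eps=1.0, rng=rng) for x in xs]
est = debiased_mean(bits, eps=1.0)
```

Note that each client communicates exactly one bit regardless of epsilon; tightening privacy (smaller epsilon) only inflates the estimator's variance, loosely mirroring the alignment of privacy and compression described in the abstract.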
Paper ID: 3087
Title: Scalar Compression Methods for Binary Hypothesis Testing
Authors: Fabrizio Carpi, Siddharth Garg, Elza Erkip
Abstract: Modern machine learning algorithms typically require high computational power to carry out complex data analytics tasks. Thanks to high-throughput wireless links and enhanced processing power at the edge/cloud, resource-constrained devices can offload their data to the cloud for processing. This requires compressing the data so as to maximize the analytics task performance while satisfying the communication budget constraint. Remote execution therefore calls for a paradigm shift toward task-aware compression, in contrast to the traditional rate-distortion framework, which focuses on source reconstruction. Motivated by this setting, this talk considers a simple binary hypothesis testing scenario in which the resource-constrained transmitter performs scalar compression on data sampled from one of two distributions; the receiver performs a hypothesis test on multiple received samples to determine the correct source distribution. To this end, the task-oriented compression problem is formulated as finding the optimal source coder that maximizes the asymptotic error performance of the hypothesis test on the server side under a rate constraint. An analysis of how to design task-oriented compressors is provided, giving intuition about how to extract the semantic information relevant to the hypothesis test. A new source coding strategy based on a greedy optimization procedure is proposed, and the proposed compression scheme is shown to outperform the universal fixed-length single-shot coding scheme for a range of rate constraints.
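By Stein's lemma, the asymptotic type-II error exponent of the receiver's test equals the KL divergence between the two quantized output distributions, so a task-oriented scalar compressor should partition the source alphabet to preserve as much divergence as possible. The sketch below is a generic greedy merge heuristic over a discrete alphabet, shown only to convey the idea; it is not the authors' proposed scheme, and all names are illustrative.

```python
import numpy as np

def kl(p, q):
    """KL divergence (in nats) between discrete distributions p and q."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def induced_kl(p0, p1, bins):
    """KL divergence between the quantizer-output (bin) distributions."""
    q0 = np.array([p0[b].sum() for b in bins])
    q1 = np.array([p1[b].sum() for b in bins])
    return kl(q0, q1)

def greedy_quantizer(p0, p1, n_bins):
    """Repeatedly merge the adjacent bin pair that loses the least divergence."""
    bins = [[i] for i in range(len(p0))]
    while len(bins) > n_bins:
        candidates = [bins[:j] + [bins[j] + bins[j + 1]] + bins[j + 2:]
                      for j in range(len(bins) - 1)]
        bins = max(candidates, key=lambda b: induced_kl(p0, p1, b))
    return bins

# two hypotheses over a 5-letter alphabet, compressed to 1 bit (2 bins)
p0 = np.array([0.40, 0.30, 0.20, 0.05, 0.05])
p1 = np.array([0.05, 0.05, 0.20, 0.30, 0.40])
bins = greedy_quantizer(p0, p1, n_bins=2)
```

By the data-processing inequality, the induced divergence can never exceed the unquantized divergence; the design goal is to come as close to it as the rate budget allows.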
Paper ID: 3192
Title: Toward a Modern Theory of Lossy Compression
Authors: Sourbh Bhadane, Aaron Wagner, Johannes Ballé
Abstract: Artificial Neural Network (ANN)-based compressors have achieved remarkable performance on real-world multimedia sources, especially images. Yet their performance cannot be explained by classical rate-distortion theory for Gaussian sources with a mean square error distortion constraint, since this theory asserts that linear transforms are near-optimal when combined with entropy coding. We present results toward the development of a modern rate-distortion theory that can explain the superior performance of ANN-based compressors while also exposing their limitations. This is accomplished by characterizing the information-theoretic compression limits of various low-dimensional manifolds, and testing whether they are optimally compressed by state-of-the-art ANN-based compressors. We find that ANN-based compressors are exactly optimal in some cases but far from optimal in others.
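The classical compression limits that such a theory extends are rate-distortion functions, which for discrete sources can be computed numerically with the Blahut-Arimoto algorithm. Below is a minimal sketch of that classical computation for context; the low-dimensional manifold sources studied in the talk require different machinery.

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=500):
    """One point on the rate-distortion curve of a discrete source.

    p_x:  source distribution, shape (n,)
    dist: distortion matrix dist[x, x_hat], shape (n, m)
    beta: Lagrange multiplier trading rate against distortion
    Returns (rate_in_nats, expected_distortion).
    """
    q = np.full(dist.shape[1], 1.0 / dist.shape[1])   # reproduction marginal
    for _ in range(n_iter):
        w = q[None, :] * np.exp(-beta * dist)          # unnormalized p(x_hat|x)
        w /= w.sum(axis=1, keepdims=True)
        q = p_x @ w                                    # re-estimate the marginal
    rate = float(np.sum(p_x[:, None] * w * np.log(w / q[None, :])))
    distortion = float(np.sum(p_x[:, None] * w * dist))
    return rate, distortion

# fair binary source, Hamming distortion: R(D) = ln 2 - h(D) in nats
p_x = np.array([0.5, 0.5])
hamming = np.array([[0.0, 1.0], [1.0, 0.0]])
R, D = blahut_arimoto(p_x, hamming, beta=2.0)
```

For this source the computed point can be checked against the closed form R(D) = ln 2 - h(D), where h is the binary entropy function in nats.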