Networks I


Session Type: Lecture
Session Code: B4L-A
Location: Room 1
Date & Time: Thursday March 23, 2023 (14:00-15:00)
Chair: Andrey Garnaev
Track: 6
Papers (No., Title, Authors, Abstract):
Paper No.: 3085
Title: Communication-Efficient Federated Learning with Channel-Aware Sparsification Over Wireless Networks
Authors: Richeng Jin {3}, Philip Dai {1}, Kaiqi Xiong {2}
Abstract: Federated learning (FL) has recently emerged as a popular distributed learning paradigm, since it allows collaborative training of a global machine learning model while keeping each participating worker's training data local. This paradigm lets model training harness the computing power across the FL network while preserving the privacy of local training data. However, communication efficiency has become one of the major concerns in FL because of frequent model updates over the network, especially for devices in wireless networks with limited communication resources. Although various communication-efficient compression mechanisms (e.g., quantization and sparsification) have been incorporated into FL, most existing studies only optimize resource allocation given a predetermined compression mechanism, and few take wireless communication into consideration in the design of the compression mechanism itself. In this paper, we study the impact of sparsification and wireless channels on FL performance. Specifically, we propose a channel-aware sparsification mechanism and derive a closed-form solution for communication time allocation among workers in a TDMA setting. Extensive simulations validate the effectiveness of the proposed mechanism.
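The abstract above combines gradient sparsification with a channel-aware per-worker communication budget. A minimal sketch of the general idea, assuming a simple proportional-to-channel-gain allocation rule (the function names `topk_sparsify` and `channel_aware_budget` are hypothetical; the paper derives a closed-form time allocation, not reproduced here):

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a gradient vector, zero the rest."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of the k largest |entries|
    out[idx] = grad[idx]
    return out

def channel_aware_budget(channel_gains, total_budget):
    """Split a total sparsity budget across workers in proportion to channel gain.
    Illustrative assumption only; the paper's allocation is a derived closed form."""
    gains = np.asarray(channel_gains, dtype=float)
    shares = gains / gains.sum()
    return np.maximum(1, (shares * total_budget).astype(int))
```

Workers with better channels get to send more nonzero gradient entries per round, which is the intuition behind making the sparsifier channel-aware.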
Paper No.: 3118
Title: ACRE: Actor Critic Reinforcement Learning for Failure-Aware Edge Computing Migrations
Authors: Marie Siew, Shikhar Sharma, Carlee Joe-Wong
Abstract: In edge computing, users' service profiles are migrated in response to user mobility to minimize user-experienced delay, balanced against the migration cost. Because of imperfect information on transition probabilities and costs, reinforcement learning (RL) is often used to optimize service migration. Nevertheless, current works do not optimize service migration in light of occasional server failures. While server failures are rare, they impair the smooth and safe functioning of latency-sensitive edge computing applications such as autonomous driving and real-time obstacle detection, because users can no longer complete their computing jobs. As these failures occur with low probability, it is difficult for RL algorithms, which are data- and experience-driven, to learn an optimal service migration policy for both the usual and the rare-event scenarios. We therefore propose ImACRE, an algorithm that integrates importance sampling into actor-critic reinforcement learning to learn the optimal service profile and backup placement policy. Our algorithm uses importance sampling to sample rare events in a simulator at a rate proportional to their contribution to system costs, while balancing service migration's trade-off between delay and migration costs against failure, backup placement, and migration costs. Trace-driven experiments show that our algorithm reduces costs in the event of failures.
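The key mechanism in the abstract above is importance sampling: failures are oversampled in the simulator, then a likelihood ratio corrects the cost estimate so it stays unbiased. A minimal sketch of that reweighting, assuming i.i.d. per-step failures (the function names are hypothetical, and this is the correction alone, not the paper's full actor-critic update):

```python
import numpy as np

def importance_weight(p_true, p_sim):
    """Likelihood ratio correcting for rare events oversampled in the simulator."""
    return p_true / p_sim

def reweighted_cost(costs, failed, p_fail_true, p_fail_sim):
    """Mean per-step cost where simulated-failure steps are reweighted by the
    likelihood ratio, so oversampling failures does not bias the estimate."""
    w = np.where(failed, importance_weight(p_fail_true, p_fail_sim), 1.0)
    return float(np.mean(w * costs))
```

Sampling failures at, say, 50% in simulation while they occur at 1% in reality means each simulated failure carries weight 0.02, keeping the expected-cost estimate consistent while still giving the learner plenty of failure experience.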
Paper No.: 3151
Title: Multi-Agent Deep Reinforcement Learning for Multi-Cell Interference Mitigation
Authors: Madan Dahal, Mojtaba Vaezi
Abstract: Multi-cell interference management techniques typically require sharing channel state information (CSI) among all cells involved, making such algorithms impractical to deploy. To overcome this shortcoming, this paper develops an interference mitigation technique that requires no coordination among neighboring cells. The algorithm leverages distributed deep reinforcement learning and delivers a faster, more spectrally efficient solution than state-of-the-art centralized techniques. An important aspect of the proposed solution is that it scales very well with the number of cells in the network. Its effectiveness is verified by simulation over millimeter-wave networks with two to seven cells. Interestingly, the penalty for not sharing CSI decreases as the number of cells increases; in particular, for a 7-cell network, the proposed algorithm achieves 92% of the spectral efficiency obtained with CSI sharing.
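The distributed setup described above has each cell learn from its local observations only, with no CSI exchange between cells. A minimal sketch of such an independent per-cell learner, substituting tabular Q-learning for the paper's deep RL (the `LocalAgent` class and its discrete state/action spaces are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

class LocalAgent:
    """Per-cell agent acting on local observations only (no CSI sharing).
    Tabular Q-learning stand-in for the deep RL policy in the paper."""

    def __init__(self, n_states, n_actions, eps=0.1, lr=0.5, gamma=0.9):
        self.q = np.zeros((n_states, n_actions))  # local Q-table
        self.eps, self.lr, self.gamma = eps, lr, gamma

    def act(self, s):
        """Epsilon-greedy action (e.g., a discrete power/beam choice)."""
        if rng.random() < self.eps:
            return int(rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        """Standard Q-learning update from a locally observed transition."""
        target = r + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])
```

One such agent per cell, each trained on its own observed interference/reward signal, is the pattern that lets the scheme scale with the number of cells without any inter-cell CSI exchange.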