Distributed & Robust Machine Learning II – Invited Special Session

Session Type: Lecture
Session Code: A2L-E
Location: Room 5
Date & Time: Wednesday, March 22, 2023 (10:20 - 11:20)
Chair: Cong Shen
Track: 12

Paper ID: 3203
Title: Interpretable Skill Learning for Dynamic Treatment Regimes Through Imitation
Authors: Yushan Jiang, Wenchao Yu, Dongjin Song, Wei Cheng, Haifeng Chen
Abstract: Imitation learning, which mimics experts' skills from their demonstrations, has shown great success in discovering dynamic treatment regimes, i.e., the optimal decision rules for treating an individual patient based on the related evolving treatment and covariate history. Existing imitation learning methods, however, still lack the capability to interpret the underlying rationales of the learned policy in a faithful way. Moreover, since dynamic treatment regimes for patients often exhibit varying patterns, i.e., symptoms that transition from one to another, the flat policy learned by a vanilla imitation learning method is typically undesirable. To this end, we propose an Interpretable Skill Learning (ISL) framework to resolve the aforementioned challenges for dynamic treatment regimes through imitation. The key idea is to model each segment of experts' demonstrations with a prototype layer and integrate it with the imitation learning layer to enhance the interpretation capability. On one hand, the ISL framework provides interpretable explanations by matching prototypes to exemplar segments during the inference stage, which enables doctors to reason about the learned demonstrations based on human-understandable patient symptoms and lab results. On the other hand, the obtained skill embedding, consisting of prototypes, serves as conditional information to the imitation learning layer, which implicitly guides the policy network to provide a more accurate demonstration when the patient's state switches from one stage to another. Thorough empirical studies demonstrate that our proposed ISL technique achieves better performance than state-of-the-art methods. Moreover, the proposed ISL framework also exhibits good interpretability that cannot be observed in existing methods.
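The abstract's central mechanism (matching demonstration segments to learnable prototypes and feeding the resulting skill embedding into the policy network as conditioning) can be sketched roughly as follows. This is a minimal illustration assuming a PyTorch-style model; the class names, layer sizes, and the soft-matching rule are assumptions for illustration, not the paper's actual implementation.

import torch
import torch.nn as nn

# Hypothetical sketch (not the paper's code): a prototype layer that matches
# segment embeddings to learnable prototypes, and a policy network conditioned
# on the resulting skill embedding.

class PrototypeSkillLayer(nn.Module):
    def __init__(self, embed_dim, num_prototypes):
        super().__init__()
        # Each prototype is meant to capture one recurring "skill" pattern.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, embed_dim))

    def forward(self, segment_embed):
        # Negative distances act as similarities between segments and prototypes.
        sims = -torch.cdist(segment_embed, self.prototypes)
        weights = torch.softmax(sims, dim=-1)
        # Soft mixture of prototypes; at inference, the nearest prototype can be
        # traced back to an exemplar segment for interpretation.
        return weights @ self.prototypes

class ConditionedPolicy(nn.Module):
    def __init__(self, state_dim, embed_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, state, skill_embed):
        # Conditioning on the skill embedding lets the recommended treatment
        # change when the patient's state switches from one stage to another.
        return self.net(torch.cat([state, skill_embed], dim=-1))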
Paper ID: 3072
Title: Federated Learning via Indirect Server-Client Communications
Authors: Jieming Bian, Cong Shen, Jie Xu
Abstract: Federated Learning (FL) is a communication-efficient and privacy-preserving distributed machine learning framework that has recently gained a significant amount of research attention. Despite the different forms of FL algorithms (e.g., synchronous FL, asynchronous FL) and the underlying optimization methods, nearly all existing works implicitly assume the existence of a communication infrastructure that facilitates direct communication between the server and the clients for the model data exchange. This assumption, however, does not hold in many real-world applications that could benefit from distributed learning but lack a proper communication infrastructure (e.g., smart sensing in remote areas). In this paper, we propose a novel FL framework, named FedEx (short for FL via Model Express Delivery), that utilizes mobile transporters (e.g., Unmanned Aerial Vehicles) to establish indirect communication channels between the server and the clients. Two algorithms, called FedEx-Sync and FedEx-Async, are developed depending on whether the transporters adopt a synchronous or an asynchronous schedule. Even though the indirect communications introduce heterogeneous delays to clients for both the global model dissemination and the local model collection, we prove the convergence of both versions of FedEx. The convergence analysis subsequently sheds light on how to assign clients to different transporters and design the routes among the clients. The performance of FedEx is evaluated through experiments in a simulated network on two public datasets.
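To make the indirect-communication setting concrete, here is a minimal simulation sketch of one synchronous round in the spirit of FedEx-Sync, assuming PyTorch; the routing structure, function names, and plain averaging rule are illustrative assumptions, not the published algorithm.

import copy
import torch

# Illustrative sketch only: transporters carry the global model along fixed
# routes, each visited client trains locally, and the server averages the
# returned models once every transporter is back (synchronous schedule).

def local_update(global_model, data_loader, lr=0.01, epochs=1):
    # A client trains a copy of the (possibly stale) global model it received.
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def fedex_sync_round(global_model, transporter_routes, client_loaders):
    # transporter_routes, e.g. [[0, 2], [1, 3]]: clients visited per transporter.
    collected = []
    for route in transporter_routes:
        for client_id in route:
            collected.append(local_update(global_model, client_loaders[client_id]))
    # Synchronous schedule: wait for all transporters, then average all models.
    with torch.no_grad():
        avg = copy.deepcopy(collected[0].state_dict())
        for key in avg:
            for m in collected[1:]:
                avg[key] = avg[key] + m.state_dict()[key]
            avg[key] = avg[key] / len(collected)
    global_model.load_state_dict(avg)
    return global_model

In this toy version, each client's effective delay grows with its position on a transporter's route, which is the kind of heterogeneity the paper's convergence analysis uses to guide client assignment and route design.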