B4L-B: Trojans & Byzantines

Session Type: Lecture
Session Code: B4L-B
Location: Room 2
Date & Time: Thursday March 23, 2023 (14:00-15:00)
Chair: Yingbo Hua
Track: 8
Papers:
Paper No.: 3092
Title: Human-Machine Hierarchical Networks for Decision Making Under Byzantine Attacks
Authors: Chen Quan{1}, Baocheng Geng{2}, Yunghsiang S. Han{3}, Pramod K. Varshney{1}
Abstract: This paper proposes a belief-updating scheme in a human-machine collaborative decision-making network to combat Byzantine attacks. A hierarchical framework is used to realize the network, where local decisions from physical sensors act as reference decisions to improve the quality of human sensor decisions. During the decision-making process, the belief that each physical sensor is malicious is updated. The case when humans have side information available is investigated, and its impact is analyzed. Simulation results substantiate that the proposed scheme can significantly improve the quality of human sensor decisions, even when most physical sensors are malicious. Moreover, the performance of the proposed method does not necessarily depend on knowledge of the actual fraction of malicious physical sensors. Consequently, the proposed scheme can effectively defend against Byzantine attacks and improve the quality of human sensors' decisions, so that the performance of the human-machine collaborative system is enhanced.
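The paper's hierarchical scheme is more involved (and incorporates human side information), but the core idea of updating a belief that a sensor is malicious can be sketched with a generic Bayesian update. All probabilities below (`p_agree_honest`, `p_agree_malicious`) are hypothetical parameters chosen for illustration, not values from the paper:

```python
# Illustrative Bayesian belief update that a physical sensor is Byzantine.
# Each round we observe whether the sensor's local decision agrees with a
# reference (e.g., majority) decision; honest sensors are assumed to agree
# more often than malicious ones. Parameters are hypothetical.

def update_belief(prior_malicious, agrees_with_majority,
                  p_agree_honest=0.9, p_agree_malicious=0.3):
    """Posterior probability the sensor is malicious after one observation."""
    if agrees_with_majority:
        like_m, like_h = p_agree_malicious, p_agree_honest
    else:
        like_m, like_h = 1 - p_agree_malicious, 1 - p_agree_honest
    num = like_m * prior_malicious
    den = num + like_h * (1 - prior_malicious)
    return num / den

# A sensor that mostly disagrees with the majority is increasingly suspect.
belief = 0.5
for agrees in [False, False, True, False]:
    belief = update_belief(belief, agrees)
print(round(belief, 3))
```

Once the belief exceeds a threshold, the sensor's decisions can be discounted or excluded when forming the reference decision passed to the human sensors.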
Paper No.: 3119
Title: Game and Prospect Theoretic Hardware Trojan Testing
Authors: Satyaki Nan{3}, Laurent Njilla{1}, Swastik Brahma{4}, Charles Kamhoua{2}
Abstract: In this paper, we address the problem of hardware Trojan testing with the buyer of an Integrated Circuit (IC), who is referred to as the defender, and the malicious manufacturer of the IC, who is referred to as the attacker, strategically acting against each other. Our model accounts for both imperfections in the testing process and the costs incurred for performing testing. First, we analytically characterize Nash Equilibrium (NE) strategies for Trojan insertion and testing from the attacker's and the defender's perspectives, respectively, considering them to be fully rational. Further, we characterize NE-based Trojan insertion-testing strategies considering the attacker and the defender to have cognitive biases that make them exhibit irrationalities in their behaviors. Numerous simulation results are presented throughout the paper to provide important insights.
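The attacker-defender interaction described here has the shape of a classic inspection game. As a hedged sketch (not the paper's model, and ignoring its prospect-theoretic extension), the mixed-strategy NE of a 2x2 insert-vs-test game can be derived from the standard indifference conditions; all parameter names and values below are illustrative:

```python
# Illustrative 2x2 inspection game between an IC buyer (defender) and a
# malicious manufacturer (attacker). Hypothetical parameters:
#   d   = probability imperfect testing detects an inserted Trojan
#   c_t = defender's testing cost
#   L   = defender's loss if a Trojan goes undetected
#   G   = attacker's gain if the Trojan goes undetected
#   P   = attacker's penalty if the Trojan is caught

def mixed_ne(d, c_t, L, G, P):
    """Mixed-strategy NE via the indifference conditions:
    q (insertion prob.) makes the defender indifferent between testing and
    not testing; p (testing prob.) makes the attacker indifferent between
    inserting and not inserting."""
    q = c_t / (d * L)        # attacker's Trojan-insertion probability
    p = G / (d * (G + P))    # defender's testing probability
    return p, q

p, q = mixed_ne(d=0.8, c_t=1.0, L=10.0, G=5.0, P=20.0)
print(p, q)  # p = 0.25, q = 0.125
```

Note how cheaper or more reliable testing (lower `c_t`, higher `d`) drives the equilibrium insertion probability down, which matches the intuition that credible testing deters Trojan insertion.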
Paper No.: 3196
Title: Vulnerabilities of Deep Learning-Driven Semantic Communications to Backdoor (Trojan) Attacks
Authors: Yalin Sagduyu{3}, Tugba Erpek{3}, Sennur Ulukus{2}, Aylin Yener{1}
Abstract: This paper highlights vulnerabilities of deep learning-driven semantic communications to backdoor (Trojan) attacks. Semantic communications aims to convey a desired meaning while transferring information from a transmitter to its receiver. The encoder-decoder pair of an autoencoder, represented by deep neural networks (DNNs), is trained to reconstruct signals such as images at the receiver by transmitting latent features of small size over a limited number of channel uses. Meanwhile, the DNN of a semantic task classifier at the receiver is jointly trained with the autoencoder to check the meaning conveyed to the receiver. The complex decision space of the DNNs makes semantic communications susceptible to adversarial manipulations. In a backdoor (Trojan) attack, the adversary adds triggers to a small portion of training samples and changes their labels to a target label. When the transfer of images is considered, the triggers can be added to the images or, equivalently, to the corresponding transmitted or received signals. At test time, the adversary activates these triggers by providing poisoned samples as input to the encoder (or decoder) of semantic communications. The backdoor attack can effectively change the semantic information transferred for the poisoned input samples to a target meaning. As the performance of semantic communications improves with the signal-to-noise ratio and the number of channel uses, the success of the backdoor attack increases as well. Also, increasing the Trojan ratio in the training data makes the attack more successful. On the other hand, the attack is selective, and its effect on unpoisoned input samples remains small. Overall, this paper shows that the backdoor attack poses a serious threat to semantic communications and presents novel design guidelines to preserve the meaning of transferred information in the presence of backdoor attacks.
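The trigger-poisoning step the abstract describes (stamping a trigger on a small fraction of training samples and relabeling them to the target) can be sketched generically. This is not the paper's pipeline; the patch shape, `trojan_ratio`, and the `poison` helper are all illustrative assumptions:

```python
import numpy as np

# Illustrative backdoor poisoning of an image training set: stamp a small
# trigger patch on a fraction of images and flip their labels to the
# attacker's target. (In the paper's setting the trigger can equivalently
# be applied to the transmitted or received signals.)

def poison(images, labels, target_label, trojan_ratio=0.1, rng=None):
    """Return poisoned copies of (images, labels) plus the poisoned indices."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(trojan_ratio * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # hypothetical 3x3 white patch as trigger
    labels[idx] = target_label    # relabel poisoned samples to the target
    return images, labels, idx

X = np.zeros((100, 28, 28))
y = np.zeros(100, dtype=int)
Xp, yp, idx = poison(X, y, target_label=7, trojan_ratio=0.1)
print(len(idx), yp[idx[0]])  # 10 poisoned samples, each relabeled to 7
```

A model trained on `(Xp, yp)` learns to associate the patch with label 7; at test time the adversary activates the backdoor simply by stamping the same patch on an input, while clean inputs remain largely unaffected.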