【1】Xin Fan, Yue Wang, Yan Huo, Zhi Tian. Joint Optimization for Federated Learning Over the Air[C]//2022 IEEE International Conference on Communications (IEEE ICC 2022). IEEE, 2022: 1-6. (In the year of publication: Tsinghua Class B, CCF Class C, BJTU Class A)
Abstract:
In this paper, we focus on federated learning (FL) over the air based on analog aggregation transmission in realistic wireless networks. We first derive a closed-form expression for the expected convergence rate of FL over the air, which theoretically quantifies the impact of analog aggregation on FL. Based on that, we further develop a joint optimization model for accurate FL implementation, which allows a parameter server to select a subset of edge devices and determine an appropriate power scaling factor. Such a joint optimization of device selection and power control for FL over the air is then formulated as a mixed-integer programming problem. Finally, we efficiently solve this problem via a simple finite-set search method. Simulation results show that the proposed solutions developed for wireless channels outperform a benchmark method and achieve performance comparable to the ideal case in which FL is implemented over reliable and error-free wireless channels.
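For intuition, the following is a minimal NumPy sketch of the analog over-the-air aggregation model this abstract refers to, not the paper's actual algorithm: the power scaling factor b, the channel-inversion-style precoder, and the selection rule (simply picking the devices with the strongest channels) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def ota_aggregate(local_updates, channels, selected, b, noise_std=0.1):
    """Toy analog over-the-air aggregation of the selected devices' updates.

    Each selected device k pre-scales its update by b / h_k (channel-inversion
    style precoding with a hypothetical power scaling factor b); the parameter
    server receives the superposed signal plus noise and rescales it to
    estimate the average update.
    """
    d = local_updates[0].shape[0]
    rx = np.zeros(d)
    for k in selected:
        rx += (b / channels[k]) * channels[k] * local_updates[k]  # fading cancels
    rx += noise_std * rng.standard_normal(d)                      # receiver noise
    return rx / (b * len(selected))                               # post-scaling at the PS

# toy example: 10 devices, 5-dimensional updates, select the 6 strongest channels
updates  = [rng.standard_normal(5) for _ in range(10)]
channels = np.abs(rng.standard_normal(10)) + 0.1
selected = np.argsort(channels)[-6:]
print(ota_aggregate(updates, channels, selected, b=1.0))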
【2】Xin Fan, Yue Wang, Yan Huo, Zhi Tian. Best Effort Voting Power Control for Byzantine-resilient Federated Learning Over the Air[C]//2022 IEEE International Conference on Communications (IEEE ICC 2022). IEEE, 2022: 1-6. (In the year of publication: Tsinghua Class B, CCF Class C, BJTU Class A)
Abstract:
Analog aggregation based federated learning over the air (FLOA) provides high communication efficiency and privacy provisioning in the edge computing paradigm. When all edge devices (workers) simultaneously upload their local updates to the parameter server (PS) through commonly shared time-frequency resources, the PS can only obtain the averaged update rather than the individual local ones. As a result, such a concurrent transmission and aggregation scheme reduces communication latency and cost but makes FLOA vulnerable to Byzantine attacks. For the design of Byzantine-resilient FLOA, this paper starts by analyzing the channel inversion (CI) power control mechanism widely used in the existing FLOA literature. Our theoretical analysis indicates that although CI achieves good learning performance in non-attacking scenarios, it offers only limited defensive capability against Byzantine attacks. We then propose a novel scheme, the best effort voting (BEV) power control policy, integrated with stochastic gradient descent (SGD). The proposed BEV-SGD improves the robustness of FLOA to Byzantine attacks by allowing all workers to send their local updates at their maximum transmit power. Under the strongest attack, we derive the expected convergence rates of FLOA with CI and BEV, respectively. The comparison reveals that BEV outperforms its CI counterpart in terms of convergence behavior, which is verified by experimental simulations.
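As a rough illustration of the two power-control policies compared in this abstract, the sketch below contrasts channel inversion (CI) with a best-effort-style policy in a single toy aggregation step; the channel model, the power budget p_max, and the Byzantine update are made-up assumptions, not the paper's setup.

import numpy as np

rng = np.random.default_rng(1)

def aggregate(updates, channels, policy, p_max=1.0, noise_std=0.05):
    """Toy received aggregate under two power-control policies.

    'ci'  : channel inversion -- worker k transmits with gain 1/h_k, so every
            update contributes equally (Byzantine updates included).
    'bev' : best-effort style -- every worker transmits at maximum power, so
            contributions are weighted by p_max * h_k.
    """
    d = updates[0].shape[0]
    rx, total = np.zeros(d), 0.0
    for u, h in zip(updates, channels):
        g = 1.0 / h if policy == "ci" else p_max
        rx += g * h * u
        total += g * h
    rx += noise_std * rng.standard_normal(d)   # receiver noise at the PS
    return rx / total                          # normalized aggregate

# 8 honest workers plus 2 Byzantine workers sending a flipped, amplified mean update
honest = [rng.standard_normal(4) for _ in range(8)]
attack = [-10.0 * np.mean(honest, axis=0)] * 2
chans  = np.abs(rng.standard_normal(10)) + 0.1
for policy in ("ci", "bev"):
    print(policy, aggregate(honest + attack, chans, policy))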
【3】Tao Jing, Yue Wu, Yan Huo, Qinghe Gao. A Stackelberg Game based Physical Layer Authentication Strategy with Reinforcement Learning[C]//2022 IEEE International Conference on Communications (IEEE ICC 2022). IEEE, 2022: 1-6. (In the year of publication: Tsinghua Class B, CCF Class C, BJTU Class A)
Abstract:
Physical layer authentication, a promising complement to upper-layer authentication, is the first line of defense against malicious attacks in wireless communication. However, a smart spoofer can learn the rules of the receiver's authentication process and dynamically choose the proper time to send spoofing signals, which poses a severe threat to wireless communications. To address this, a Stackelberg game-based physical layer authentication strategy is proposed in this paper to model the interactions between the receiver and the smart spoofer. We first consider static game-based authentication under the worst-case condition in which the smart spoofer acts as the leader with privilege over the receiver, and derive the Stackelberg equilibrium of the static authentication strategy. Then, we propose a dynamic game-based strategy built on a reinforcement learning technique named Policy Hill Climbing, in which the spoofer always chooses the equilibrium solution while the receiver is unaware of the system parameters, such as the time-varying channel coefficient. Simulation results validate the effectiveness of the proposed authentication strategy and show that the Policy Hill Climbing algorithm improves the utility compared with a Q-learning-based algorithm.
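The dynamic strategy in this abstract relies on Policy Hill Climbing (PHC). The sketch below is a generic PHC agent (Q-learning plus a mixed strategy nudged toward the current greedy action) over an assumed discrete state/action space; it is not the paper's authentication model, and the clamping-and-renormalization step is just one simple way to keep the policy a valid distribution.

import random
from collections import defaultdict

class PolicyHillClimbing:
    """Minimal PHC agent: a Q-learning value update plus a mixed strategy
    pi(s, a) that climbs toward the greedy action by step size delta."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, delta=0.01):
        self.actions = list(actions)
        self.alpha, self.gamma, self.delta = alpha, gamma, delta
        self.Q  = defaultdict(lambda: {a: 0.0 for a in self.actions})
        self.pi = defaultdict(lambda: {a: 1.0 / len(self.actions) for a in self.actions})

    def act(self, state):
        # sample an action from the mixed strategy pi(state, .)
        r, acc = random.random(), 0.0
        for a, p in self.pi[state].items():
            acc += p
            if r <= acc:
                return a
        return self.actions[-1]

    def update(self, s, a, reward, s_next):
        # standard Q-learning value update
        target = reward + self.gamma * max(self.Q[s_next].values())
        self.Q[s][a] += self.alpha * (target - self.Q[s][a])
        # hill-climb the policy toward the greedy action by step size delta
        greedy = max(self.Q[s], key=self.Q[s].get)
        for b in self.actions:
            if b == greedy:
                self.pi[s][b] = min(1.0, self.pi[s][b] + self.delta)
            else:
                self.pi[s][b] = max(0.0, self.pi[s][b] - self.delta / (len(self.actions) - 1))
        # renormalize so pi(s, .) remains a probability distribution
        total = sum(self.pi[s].values())
        for b in self.actions:
            self.pi[s][b] /= total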