Presentation Information
[9A-01]Dynamic Trust-Weighted Federated Learning with Gradient-Based Time-Series Anomaly Detection for Robust Model Updates
*Berjab Nesrine1 (1. 東京科学大学)
Presenter category: General
Paper type: Short paper
Interactive presentation: No
Keywords:
Federated Learning, IoT, Poisoning Attacks, Security
Federated Learning (FL) enables decentralized machine learning, preserving data privacy while training across multiple clients. However, it remains vulnerable to poisoning attacks, in which malicious clients submit manipulated updates to undermine the global model. This paper proposes a novel defense mechanism that combines dynamic trust-weighted aggregation with gradient-based time-series anomaly detection. By continuously monitoring each client's gradient behavior, the system adjusts that client's influence on the global model in real time, ensuring robustness against both sudden and gradual poisoning attacks. Leveraging Singular Value Decomposition (SVD) for gradient trajectory analysis, the method addresses challenges in both IID and non-IID data distributions. Experiments validate the proposed approach in healthcare IoT scenarios, demonstrating significant improvements in security and robustness. This work is part of ongoing research and presents preliminary findings that outline the proposed methodology and its potential.
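To make the mechanism concrete, the sketch below (Python/NumPy) illustrates one possible form of the two components described in the abstract: an SVD-based anomaly score over a client's recent gradient trajectory, and a trust-weighted aggregation step that decays the weight of clients whose score exceeds a threshold. This is not the authors' implementation; the window length, decay rate, trust-recovery step, and threshold are illustrative assumptions.

```python
# Minimal sketch of SVD-based gradient-trajectory anomaly scoring and
# dynamic trust-weighted aggregation. All hyperparameters are assumptions.
import numpy as np

def svd_anomaly_score(grad_history: np.ndarray) -> float:
    """Score how far the latest gradient strays from the dominant SVD
    direction of the client's recent gradient trajectory.

    grad_history: (T, d) array of the client's last T flattened gradients.
    """
    if grad_history.shape[0] < 2:
        return 0.0
    past, latest = grad_history[:-1], grad_history[-1]
    # Dominant right-singular vector summarizes the past trajectory.
    _, _, vt = np.linalg.svd(past - past.mean(axis=0), full_matrices=False)
    principal = vt[0]
    # Residual energy of the latest gradient outside the principal direction,
    # relative to the typical magnitude of past gradients.
    residual = latest - (latest @ principal) * principal
    typical_norm = np.linalg.norm(past, axis=1).mean() + 1e-12
    return float(np.linalg.norm(residual) / typical_norm)

def trust_weighted_aggregate(updates, trust, scores, decay=0.5,
                             recovery=0.05, threshold=3.0):
    """Update per-client trust from anomaly scores, then aggregate updates.

    updates: list of (d,) client model updates for the current round
    trust:   (n,) current trust weights in [0, 1]
    scores:  (n,) anomaly scores from svd_anomaly_score
    """
    trust = np.asarray(trust, dtype=float)
    scores = np.asarray(scores, dtype=float)
    # Decay trust for anomalous clients; slowly recover it otherwise.
    trust = np.where(scores > threshold, trust * decay,
                     np.minimum(1.0, trust + recovery))
    weights = trust / (trust.sum() + 1e-12)
    aggregated = sum(w * u for w, u in zip(weights, updates))
    return aggregated, trust

# Toy usage: three clients, one submitting a suddenly scaled (poisoned) update.
rng = np.random.default_rng(0)
T, d = 5, 10
histories = [rng.normal(size=(T, d)) * 0.1 for _ in range(3)]
histories[2][-1] *= 50.0  # sudden poisoning attempt by client 2
scores = [svd_anomaly_score(h) for h in histories]
updates = [h[-1] for h in histories]
aggregated, trust = trust_weighted_aggregate(updates, np.ones(3), scores)
```

In this toy round, the poisoned client's anomaly score is far larger than the others', so its trust weight decays and its contribution to the aggregated update shrinks; honest clients gradually recover trust across rounds, which is one way to handle gradual as well as sudden attacks.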