Federated Learning and Privacy-Preserving AI: Investigating the Future of Decentralized Data Training through Secure Model Updates
Abstract
The rapid advancement of artificial intelligence (AI) has been accompanied by growing concerns around data privacy, security, and regulatory compliance. Federated Learning (FL) has emerged as a transformative approach to these challenges, enabling decentralized model training in which raw data remains on client devices and only encrypted or masked model updates are shared. This paper explores the future role of FL in privacy-preserving AI ecosystems, proposing an enhanced federated framework that combines secure aggregation, differential privacy, and adaptive client participation strategies. Through detailed experiments and evaluations, the proposed model demonstrates improved privacy guarantees, communication efficiency, and model robustness across highly heterogeneous client networks. Our findings outline critical design principles and open research directions for next-generation decentralized AI.
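To make the core idea concrete, the following is a minimal sketch (not the paper's actual implementation) of one federated round in the style of DP-FedAvg: each client's update is clipped to bound its influence, the clipped updates are averaged, and Gaussian noise is added to the aggregate. The function names, the clipping norm, and the noise multiplier are illustrative assumptions, not values from the paper.

```python
import numpy as np

def clip_update(update, clip_norm):
    # Scale the client's update so its L2 norm is at most clip_norm.
    # Bounding per-client influence is a prerequisite for the
    # Gaussian-noise differential-privacy guarantee.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm)

def dp_fedavg_round(client_updates, clip_norm=1.0, noise_mult=0.5, rng=None):
    # One server-side aggregation round: clip, average, add noise.
    # noise_mult controls the privacy/utility trade-off (hypothetical value).
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(client_updates),
                       size=avg.shape)
    return avg + noise

# Simulated local updates from three clients; raw data never leaves them,
# only these update vectors reach the server.
updates = [np.array([0.5, -1.2]), np.array([0.9, -0.8]), np.array([0.4, -1.0])]
new_global_delta = dp_fedavg_round(updates)
```

In a real deployment the aggregation step would additionally run under a secure-aggregation protocol so the server only ever sees the (noised) sum, never individual client updates.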