A Hybrid Cryptographic Framework for Privacy-Preserving Federated Learning under Gradient Leakage Threats

Authors

  • Arvind Kumar, Dr. Umesh Prasad

Keywords

Federated Learning, Privacy Preservation, Gradient Leakage, Secure Aggregation, Homomorphic Encryption, Differential Privacy, Hybrid Framework

Abstract

Federated Learning (FL) has emerged as a promising paradigm for collaborative model training without centralized data collection, addressing growing privacy concerns in sensitive domains such as healthcare, finance, and smart governance. Despite its conceptual advantages, recent studies have demonstrated that FL remains vulnerable to privacy breaches through gradient leakage, inference attacks, and data reconstruction techniques: adversaries can exploit shared model updates to recover sensitive training information, undermining FL's foundational privacy guarantees. This paper proposes a hybrid cryptographic framework that integrates secure aggregation, partially homomorphic encryption, and differential privacy to mitigate gradient leakage threats while preserving model utility and system scalability. Unlike existing approaches that rely on a single privacy mechanism, the proposed framework adopts a layered defense strategy that balances confidentiality, robustness, and computational feasibility. A comprehensive evaluation is conducted in simulated federated environments under adversarial settings. Experimental results demonstrate that the proposed framework significantly reduces information leakage while maintaining competitive model accuracy and acceptable communication overhead. The findings establish that hybrid privacy mechanisms provide stronger and more practical privacy guarantees than isolated solutions, supporting the secure deployment of federated learning in real-world systems.
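Two of the abstract's three layers can be sketched concretely: a client-side differential-privacy step (gradient clipping plus Gaussian noise, as in Abadi et al., 2016) followed by pairwise additive masking for secure aggregation (the core idea of Bonawitz et al., 2017), so the server only ever observes the sum of privatized updates. This is a hypothetical illustration, not the authors' implementation: the function names, clipping bound `clip`, noise scale `sigma`, and the shared-seed mask generation are assumptions made for brevity, and the partially homomorphic encryption layer is omitted.

```python
# Hypothetical sketch of DP noise + additive-mask secure aggregation.
# Parameters (clip, sigma) and the shared mask seed are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def dp_noise(update, clip=1.0, sigma=0.5):
    """DP layer: clip the update's L2 norm, then add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=update.shape)

def pairwise_masks(n_clients, dim, seed=42):
    """Zero-sum pairwise masks: for each pair (i, j), client i adds a
    shared random vector and client j subtracts it, so masks cancel
    in the aggregate but hide each individual upload."""
    mask_rng = np.random.default_rng(seed)
    masks = np.zeros((n_clients, dim))
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = mask_rng.normal(size=dim)
            masks[i] += m
            masks[j] -= m
    return masks

# Each client privatizes its gradient, then masks it before upload.
updates = [rng.normal(size=4) for _ in range(3)]
noisy = [dp_noise(u) for u in updates]
masks = pairwise_masks(3, 4)
uploads = [g + m for g, m in zip(noisy, masks)]

# The server sees only masked uploads; the masks cancel in the sum,
# so the aggregate equals the sum of the DP-noised updates.
aggregate = np.sum(uploads, axis=0)
assert np.allclose(aggregate, np.sum(noisy, axis=0))
```

In a real deployment the pairwise masks would be derived from per-pair Diffie-Hellman key agreement with secret-shared recovery for dropouts, rather than a single shared seed; the sketch keeps only the cancellation property that makes secure aggregation work.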

References

Bonawitz, K., Ivanov, V., Kreuter, B., et al. (2017). Practical secure aggregation for privacy-preserving machine learning. Proceedings of the ACM Conference on Computer and Communications Security (CCS), 1175–1191.

McMahan, B., Moore, E., Ramage, D., Hampson, S., & Arcas, B. A. y. (2017). Communication-efficient learning of deep networks from decentralized data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 1273–1282.

Dwork, C. (2006). Differential privacy. Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP), 1–12.

Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4), 211–407.

Zhu, L., Liu, Z., & Han, S. (2019). Deep leakage from gradients. Advances in Neural Information Processing Systems (NeurIPS), 14774–14784.

Hitaj, B., Ateniese, G., & Perez-Cruz, F. (2017). Deep models under the GAN: Information leakage from collaborative deep learning. Proceedings of the ACM CCS, 603–618.

Geyer, R. C., Klein, T., & Nabi, M. (2017). Differentially private federated learning: A client-level perspective. NeurIPS Workshop on Machine Learning on the Phone and other Consumer Devices.

Kairouz, P., McMahan, H. B., Avent, B., et al. (2021). Advances and open problems in federated learning. Foundations and Trends in Machine Learning, 14(1–2), 1–210.

Li, T., Sahu, A. K., Talwalkar, A., & Smith, V. (2020). Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, 37(3), 50–60.

Yang, Q., Liu, Y., Chen, T., & Tong, Y. (2019). Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology, 10(2), 1–19.

Truex, S., Baracaldo, N., Anwar, A., et al. (2019). A hybrid approach to privacy-preserving federated learning. Proceedings of the ACM Workshop on Artificial Intelligence and Security, 1–11.

Nasr, M., Shokri, R., & Houmansadr, A. (2019). Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks. IEEE Symposium on Security and Privacy, 739–753.

Melis, L., Song, C., De Cristofaro, E., & Shmatikov, V. (2019). Exploiting unintended feature leakage in collaborative learning. IEEE Symposium on Security and Privacy, 691–706.

Aono, Y., Hayashi, T., Wang, L., & Moriai, S. (2017). Privacy-preserving deep learning via additively homomorphic encryption. IEEE Transactions on Information Forensics and Security, 13(5), 1333–1345.

Gentry, C. (2009). Fully homomorphic encryption using ideal lattices. Proceedings of the 41st Annual ACM Symposium on Theory of Computing, 169–178.

Ben-Sasson, E., Chiesa, A., Garman, C., et al. (2014). Zerocash: Decentralized anonymous payments from Bitcoin. IEEE Symposium on Security and Privacy, 459–474.

Shokri, R., & Shmatikov, V. (2015). Privacy-preserving deep learning. Proceedings of the ACM CCS, 1310–1321.

Abadi, M., Chu, A., Goodfellow, I., et al. (2016). Deep learning with differential privacy. Proceedings of the ACM CCS, 308–318.

Papernot, N., McDaniel, P., Sinha, A., & Wellman, M. (2018). Towards the science of security and privacy in machine learning. IEEE European Symposium on Security and Privacy, 399–414.

Lyu, L., Yu, H., Yang, Q., & Chen, X. (2020). Threats to federated learning: A survey. arXiv preprint arXiv:2003.02133.

Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D., & Shmatikov, V. (2020). How to backdoor federated learning. Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2938–2948.

Sun, X., Wang, J., Xiong, J., et al. (2021). Secure aggregation for federated learning: A survey. IEEE Communications Surveys & Tutorials, 23(2), 1231–1261.

Zhao, Y., Li, M., Lai, L., et al. (2018). Federated learning with non-IID data. arXiv preprint arXiv:1806.00582.

Hard, A., Rao, K., Mathews, R., et al. (2018). Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604.

Xu, J., Glicksberg, B. S., Su, C., et al. (2021). Federated learning for healthcare informatics. Journal of Biomedical Informatics, 113, 103654.

Li, X., Gu, Y., Dvornek, N., et al. (2020). Multi-site fMRI analysis using privacy-preserving federated learning. Medical Image Analysis, 65, 101765.

Rieke, N., Hancox, J., Li, W., et al. (2020). The future of digital health with federated learning. npj Digital Medicine, 3(1), 1–7.

Brisimi, T. S., Chen, R., Mela, T., et al. (2018). Federated learning of predictive models from federated electronic health records. International Journal of Medical Informatics, 112, 59–67.

Xiong, J., Zhang, R., Li, M., et al. (2020). Privacy-preserving distributed machine learning via homomorphic encryption. IEEE Transactions on Information Forensics and Security, 15, 2381–2395.

Al-Rubaie, M., & Chang, J. M. (2019). Privacy-preserving machine learning: Threats and solutions. IEEE Security & Privacy, 17(2), 49–58.

How to Cite

Arvind Kumar, Dr. Umesh Prasad. (2024). A Hybrid Cryptographic Framework for Privacy-Preserving Federated Learning under Gradient Leakage Threats. International Journal of Engineering Science & Humanities, 14(2), 110–118. Retrieved from https://www.ijesh.com/j/article/view/550

Section

Original Research Articles
