Research Article

Optimizing Privacy, Utility, and Efficiency in a Constrained Multi-Objective Federated Learning Framework

Published: 13 December 2024

Abstract

Conventionally, federated learning optimizes a single objective, typically model utility. However, a trustworthy federated learning system must satisfy multiple objectives simultaneously, such as maximizing model performance, minimizing privacy leakage and training cost, and remaining robust to malicious attacks. Multi-Objective Optimization (MOO), which optimizes multiple conflicting objectives simultaneously, is therefore well suited to Trustworthy Federated Learning (TFL). In this article, we unify MOO and TFL by formulating the problem of Constrained Multi-Objective Federated Learning (CMOFL), under which existing MOO algorithms can be adapted to TFL straightforwardly. Unlike existing CMOFL algorithms that focus on utility, efficiency, fairness, and robustness, we optimize privacy leakage along with utility loss and training cost, the three primary objectives of a TFL system. We develop two improved CMOFL algorithms, based on NSGA-II and PSL respectively, to find Pareto optimal solutions effectively and efficiently, and we provide theoretical analysis of their convergence. We design quantitative measurements of privacy leakage, utility loss, and training cost for three privacy protection mechanisms: Randomization, BatchCrypt (an efficient homomorphic encryption scheme), and Sparsification. Empirical experiments conducted under all three protection mechanisms demonstrate the effectiveness of the proposed algorithms.
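The core idea of CMOFL, searching for Pareto optimal trade-offs among privacy leakage, utility loss, and training cost, can be illustrated with a minimal non-dominated-filtering sketch. The objective formulas below are hypothetical stand-ins for a randomization (Gaussian noise) mechanism, not the paper's actual measurements:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def objectives(sigma):
    """Toy objective triple (privacy leakage, utility loss, training cost)
    for a Gaussian-noise mechanism with noise scale sigma. These closed
    forms are illustrative assumptions, not the paper's measurements."""
    privacy_leakage = 1.0 / (1.0 + sigma)  # more noise -> less leakage
    utility_loss = sigma ** 2              # more noise -> worse model
    training_cost = 1.0                    # noise addition: roughly constant cost
    return (privacy_leakage, utility_loss, training_cost)

# Candidate noise scales, plus one clearly suboptimal configuration.
candidates = [objectives(s / 10) for s in range(1, 11)]
candidates.append((0.9, 0.5, 2.0))  # dominated by objectives(0.2) in all three
front = pareto_front(candidates)    # only the genuine trade-off points survive
```

A full CMOFL algorithm such as the paper's NSGA-II or PSL variants layers constraint handling, variation operators, and selection pressure on top of this basic non-dominated filtering.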


Cited By

  • (2024) A Meta-Learning Framework for Tuning Parameters of Protection Mechanisms in Trustworthy Federated Learning. ACM Transactions on Intelligent Systems and Technology 15, 3 (2024), 1–36. DOI: 10.1145/3652612. Online publication date: 17 May 2024.


          Published In

ACM Transactions on Intelligent Systems and Technology, Volume 15, Issue 6
December 2024, 727 pages
EISSN: 2157-6912
DOI: 10.1145/3613712
Editor: Huan Liu

Publisher

Association for Computing Machinery, New York, NY, United States

          Publication History

          Published: 13 December 2024
          Online AM: 24 October 2024
          Accepted: 24 September 2024
          Revised: 25 May 2024
          Received: 08 May 2023
          Published in TIST Volume 15, Issue 6


          Author Tags

          1. trustworthy federated learning
          2. multi-objective optimization
          3. privacy


