Multitask optimization
Multi-task optimization is a paradigm in the optimization literature that focuses on solving multiple self-contained tasks simultaneously.[1][2] The paradigm has been inspired by the well-established concepts of transfer learning[3] and multi-task learning[4] in predictive analytics.
The key motivation behind multi-task optimization is that if optimization tasks are related to each other in terms of their optimal solutions or the general characteristics of their function landscapes,[5] then search progress made on one task can be transferred to substantially accelerate the search on the others.
The success of the paradigm is not necessarily limited to one-way knowledge transfers from simpler to more complex tasks. In practice, one may even attempt to intentionally solve a harder task and, in doing so, incidentally solve several simpler tasks.[6]
There is a direct relationship between multitask optimization and multi-objective optimization.[7]
Methods
There are several common approaches for multi-task optimization: Bayesian optimization, evolutionary computation, and approaches based on game theory.[1]
Multi-task Bayesian optimization
Multi-task Bayesian optimization is a modern model-based approach that leverages the concept of knowledge transfer to speed up the automatic hyperparameter optimization process of machine learning algorithms.[8] The method builds a multi-task Gaussian process model on the data originating from different searches progressing in tandem.[9] The captured inter-task dependencies are then used to better inform the subsequent sampling of candidate solutions in the respective search spaces.
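The core modeling idea can be sketched with a minimal NumPy example: a joint Gaussian process over (input, task) pairs, using an intrinsic-coregionalization kernel (product of an input kernel and an inter-task correlation matrix B). Everything below — the toy objectives, the hand-fixed matrix B, the kernel length scale, and the upper-confidence-bound acquisition — is an illustrative assumption, not taken from the cited works, which learn these quantities from data.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel over 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def multitask_kernel(x1, t1, x2, t2, B, ls=0.3):
    """Intrinsic coregionalization: K((x,t),(x',t')) = k_x(x,x') * B[t,t']."""
    return rbf(x1, x2, ls) * B[np.ix_(t1, t2)]

# Observations from two related 1-D toy objectives (tasks 0 and 1).
f = [lambda x: np.sin(3 * x), lambda x: np.sin(3 * x + 0.2)]
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 8)
T = np.array([0] * 4 + [1] * 4)          # task index of each observation
y = np.array([f[t](x) for x, t in zip(X, T)])

B = np.array([[1.0, 0.9],                 # inter-task correlation matrix,
              [0.9, 1.0]])                # fixed by hand here, learned in practice

# GP posterior mean/variance for task 1 at candidate points; data from
# task 0 informs the prediction through the off-diagonal entries of B.
Xs = np.linspace(0, 1, 101)
Ts = np.ones(len(Xs), dtype=int)
K = multitask_kernel(X, T, X, T, B) + 1e-6 * np.eye(len(X))
Ks = multitask_kernel(Xs, Ts, X, T, B)
mu = Ks @ np.linalg.solve(K, y)
var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)

# Upper-confidence-bound acquisition: next point to evaluate on task 1.
next_x = Xs[np.argmax(mu + np.sqrt(np.maximum(var, 0.0)))]
```

The transfer happens entirely through B: setting its off-diagonal entries to zero would reduce the model to two independent Gaussian processes.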
Evolutionary multi-tasking
Evolutionary multi-tasking has been explored as a means of exploiting the implicit parallelism of population-based search algorithms to simultaneously progress multiple distinct optimization tasks. By mapping all tasks to a unified search space, the evolving population of candidate solutions can harness the hidden relationships between them through continuous genetic transfer. This transfer is induced when solutions associated with different tasks cross over.[2][10] Recently, modes of knowledge transfer that differ from direct solution crossover have been explored.[11]
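The mechanism can be illustrated with a heavily simplified, NumPy-only sketch in the spirit of the multifactorial scheme of [2]: each individual carries a skill factor (the task it is evaluated on), all individuals live in one unified space, and crossover between individuals of different tasks occurs with a random mating probability. The shifted-sphere tasks, the tiny evaluation budget, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two related tasks in a unified space [0, 1]^2: shifted sphere functions.
tasks = [lambda z: np.sum((z - 0.4) ** 2), lambda z: np.sum((z - 0.6) ** 2)]

POP, D, GENS, RMP = 40, 2, 60, 0.3   # RMP = random mating probability

pop = rng.uniform(0, 1, (POP, D))
skill = rng.integers(0, 2, POP)       # each individual specializes in one task
cost = np.array([tasks[s](p) for p, s in zip(pop, skill)])

for _ in range(GENS):
    i, j = rng.integers(0, POP, 2)
    # Assortative mating: crossover across tasks happens only with
    # probability RMP; this is the channel for implicit genetic transfer.
    if skill[i] == skill[j] or rng.random() < RMP:
        w = rng.random(D)
        child = w * pop[i] + (1 - w) * pop[j]
    else:
        child = pop[i] + 0.1 * rng.normal(size=D)   # mutation only
    child = np.clip(child, 0, 1)
    s = skill[rng.choice([i, j])]     # child inherits one parent's skill factor
    c = tasks[s](child)
    # Replace the worst member of the child's own task if the child improves on it.
    idx = np.arange(POP)[skill == s]
    worst = idx[np.argmax(cost[idx])]
    if c < cost[worst]:
        pop[worst], cost[worst] = child, c

best = [cost[skill == s].min() for s in (0, 1)]   # best cost found per task
```

Because the two optima are close in the unified space, cross-task offspring tend to land in regions that are good for both tasks; with unrelated tasks the same transfer can be harmful, which is why RMP is kept well below 1.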
Game-theoretic optimization
Game-theoretic approaches to multi-task optimization view the optimization problem as a game in which each task is a player. All players interact through the reward matrix of the game and try to reach a solution that satisfies all players (all tasks). This view provides insight into how to build efficient algorithms based on gradient descent (GD), which is particularly important for training deep neural networks.[12] In GD for multi-task learning, the difficulty is that each task provides its own loss, and it is not obvious how to combine the per-task losses into a single unified gradient; this has led to several different aggregation strategies.[13][14][15] The aggregation problem can be addressed by defining a game matrix in which the reward of each player is the agreement of its own gradient with the common gradient, and then setting the common gradient to be the Nash cooperative bargaining solution[16] of that system.
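The gradient-aggregation problem can be made concrete with a small NumPy sketch of one of the cited strategies, gradient surgery (in the spirit of Yu et al.[14]); the Nash-bargaining aggregation of [16] is more involved and is not shown. The function name, the random conflict-resolution order, and the toy gradients are illustrative assumptions.

```python
import numpy as np

def pcgrad(task_grads, seed=0):
    """Gradient-surgery-style aggregation: whenever a task gradient
    conflicts with another task's gradient (negative inner product),
    its component along that gradient is projected away before averaging."""
    rng = np.random.default_rng(seed)
    grads = [np.asarray(g, dtype=float) for g in task_grads]
    projected = []
    for i in range(len(grads)):
        g = grads[i].copy()
        for j in rng.permutation(len(grads)):   # random conflict-resolution order
            if j == i:
                continue
            dot = g @ grads[j]
            if dot < 0.0:                        # conflicting directions
                g -= dot / (grads[j] @ grads[j]) * grads[j]
        projected.append(g)
    # The shared update is the mean of the de-conflicted per-task gradients.
    return np.mean(projected, axis=0)

# Two conflicting task gradients: their naive average nearly cancels,
# while the surgery keeps the direction on which both tasks agree.
g1 = np.array([1.0, 0.1])
g2 = np.array([-1.0, 0.1])
update = pcgrad([g1, g2])   # first component ~0, second component amplified
```

Here naive averaging would give (0, 0.1), while the de-conflicted update points more strongly along the second axis, where the two tasks agree; this is exactly the failure mode that motivates the game-theoretic view of gradient aggregation.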
Applications
Algorithms for multi-task optimization span a wide array of real-world applications. Recent studies highlight the potential for speed-ups in the optimization of engineering design parameters by conducting related designs jointly in a multi-task manner.[10] In machine learning, the transfer of optimized features across related data sets can enhance the efficiency of the training process as well as improve the generalization capability of learned models.[17][18] In addition, the concept of multi-tasking has led to advances in automatic hyperparameter optimization of machine learning models and ensemble learning.[19][20]
Applications have also been reported in cloud computing,[21] with future developments geared towards cloud-based on-demand optimization services that can cater to multiple customers simultaneously.[2][22] Recent work has additionally shown applications in chemistry.[23]
See also
- Multi-objective optimization
- Multi-task learning
- Multicriteria classification
- Multiple-criteria decision analysis
References
- ^ a b Gupta, Abhishek; Ong, Yew-Soon; Feng, Liang (2018). "Insights on Transfer Optimization: Because Experience is the Best Teacher". IEEE Transactions on Emerging Topics in Computational Intelligence. 2: 51–64. doi:10.1109/TETCI.2017.2769104. hdl:10356/147980. S2CID 11510470.
- ^ a b c Gupta, Abhishek; Ong, Yew-Soon; Feng, Liang (2016). "Multifactorial Evolution: Toward Evolutionary Multitasking". IEEE Transactions on Evolutionary Computation. 20 (3): 343–357. doi:10.1109/TEVC.2015.2458037. hdl:10356/148174. S2CID 13767012.
- ^ Pan, Sinno Jialin; Yang, Qiang (2010). "A Survey on Transfer Learning". IEEE Transactions on Knowledge and Data Engineering. 22 (10): 1345–1359. doi:10.1109/TKDE.2009.191. S2CID 740063.
- ^ Caruana, R., "Multitask Learning", pp. 95-134 in Sebastian Thrun, Lorien Pratt (eds.) Learning to Learn, (1998) Springer ISBN 9780792380474
- ^ Cheng, Mei-Ying; Gupta, Abhishek; Ong, Yew-Soon; Ni, Zhi-Wei (2017). "Coevolutionary multitasking for concurrent global optimization: With case studies in complex engineering design". Engineering Applications of Artificial Intelligence. 64: 13–24. doi:10.1016/j.engappai.2017.05.008. S2CID 13767210.
- ^ Cabi, Serkan; Sergio Gómez Colmenarejo; Hoffman, Matthew W.; Denil, Misha; Wang, Ziyu; Nando de Freitas (2017). "The Intentional Unintentional Agent: Learning to Solve Many Continuous Control Tasks Simultaneously". arXiv:1707.03300 [cs.AI].
- ^ J. -Y. Li, Z. -H. Zhan, Y. Li and J. Zhang, "Multiple Tasks for Multiple Objectives: A New Multiobjective Optimization Method via Multitask Optimization," in IEEE Transactions on Evolutionary Computation, doi:10.1109/TEVC.2023.3294307
- ^ Swersky, K., Snoek, J., & Adams, R. P. (2013). Multi-task bayesian optimization. Advances in neural information processing systems (pp. 2004-2012).
- ^ Bonilla, E. V., Chai, K. M., & Williams, C. (2008). Multi-task Gaussian process prediction. Advances in neural information processing systems (pp. 153-160).
- ^ a b Ong, Y. S., & Gupta, A. (2016). Evolutionary multitasking: a computer science view of cognitive multitasking. Cognitive Computation, 8(2), 125-142.
- ^ Feng, Liang; Zhou, Lei; Zhong, Jinghui; Gupta, Abhishek; Ong, Yew-Soon; Tan, Kay-Chen; Qin, A. K. (2019). "Evolutionary Multitasking via Explicit Autoencoding". IEEE Transactions on Cybernetics. 49 (9): 3457–3470. doi:10.1109/TCYB.2018.2845361. PMID 29994415. S2CID 51613697.
- ^ Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). Deep Learning. MIT Press. ISBN 978-0-262-03561-3.
- ^ Liu, L.; Li, Y.; Kuang, Z.; Xue, J.; Chen, Y.; Yang, W.; Liao, Q.; Zhang, W. (2021). "Towards Impartial Multi-task Learning". Proceedings of the International Conference on Learning Representations (ICLR 2021). Retrieved 2022-11-20.
- ^ Tianhe, Yu; Saurabh, Kumar; Abhishek, Gupta; Sergey, Levine; Karol, Hausman; Chelsea, Finn (2020). "Gradient Surgery for Multi-Task Learning". Advances in Neural Information Processing Systems. 33. arXiv:2001.06782.
- ^ Liu, Bo; Liu, Xingchao; Jin, Xiaojie; Stone, Peter; Liu, Qiang (2021-10-26). "Conflict-Averse Gradient Descent for Multi-task Learning". arXiv:2110.14048 [cs.LG].
- ^ Navon, Aviv; Shamsian, Aviv; Achituve, Idan; Maron, Haggai; Kawaguchi, Kenji; Chechik, Gal; Fetaya, Ethan (2022). "Multi-Task Learning as a Bargaining Game". International Conference on Machine Learning.
- ^ Chandra, R., Gupta, A., Ong, Y. S., & Goh, C. K. (2016, October). Evolutionary multi-task learning for modular training of feedforward neural networks. In International Conference on Neural Information Processing (pp. 37-46). Springer, Cham.
- ^ Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? In Advances in neural information processing systems (pp. 3320-3328).
- ^ Wen, Yu-Wei; Ting, Chuan-Kang (2016). "Learning ensemble of decision trees through multifactorial genetic programming". 2016 IEEE Congress on Evolutionary Computation (CEC). pp. 5293–5300. doi:10.1109/CEC.2016.7748363. ISBN 978-1-5090-0623-6. S2CID 2617811.
- ^ Zhang, Boyu; Qin, A. K.; Sellis, Timos (2018). "Evolutionary feature subspaces generation for ensemble classification". Proceedings of the Genetic and Evolutionary Computation Conference. pp. 577–584. doi:10.1145/3205455.3205638. ISBN 978-1-4503-5618-3. S2CID 49564862.
- ^ Bao, Liang; Qi, Yutao; Shen, Mengqing; Bu, Xiaoxuan; Yu, Jusheng; Li, Qian; Chen, Ping (2018). "An Evolutionary Multitasking Algorithm for Cloud Computing Service Composition". Services – SERVICES 2018. Lecture Notes in Computer Science. Vol. 10975. pp. 130–144. doi:10.1007/978-3-319-94472-2_10. ISBN 978-3-319-94471-5.
- ^ Tang, J., Chen, Y., Deng, Z., Xiang, Y., & Joy, C. P. (2018). A Group-based Approach to Improve Multifactorial Evolutionary Algorithm. In IJCAI (pp. 3870-3876).
- ^ Felton, Kobi; Wigh, Daniel; Lapkin, Alexei (2021). "Multi-task Bayesian Optimization of Chemical Reactions". chemRxiv. doi:10.26434/chemrxiv.13250216.v2.