Article

A Multi-Surrogate Assisted Multi-Tasking Optimization Algorithm for High-Dimensional Expensive Problems

1 Faculty of Civil Engineering and Mechanics, Jiangsu University, Zhenjiang 212013, China
2 Ocean Institute, Northwestern Polytechnical University, Taicang 215400, China
3 School of Mechanical Engineering, Tianjin University, Tianjin 300354, China
* Authors to whom correspondence should be addressed.
Algorithms 2025, 18(1), 4; https://doi.org/10.3390/a18010004
Submission received: 25 November 2024 / Revised: 26 December 2024 / Accepted: 27 December 2024 / Published: 29 December 2024
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)

Abstract
Surrogate-assisted evolutionary algorithms (SAEAs) are widely used in the field of high-dimensional expensive optimization. However, real-world problems are usually complex and characterized by a variety of features, so choosing the most appropriate surrogate is very challenging. It has been shown that multiple surrogates can characterize the fitness landscape more accurately than a single surrogate. In this work, a multi-surrogate-assisted multi-tasking optimization algorithm (MSAMT) is proposed that solves high-dimensional problems by simultaneously optimizing multiple surrogates as related tasks using the generalized multi-factorial evolutionary algorithm. In the MSAMT, all exactly evaluated samples are initially grouped to form a collection of clusters. Subsequently, the search space is divided into several regions based on the clusters, and a surrogate is constructed in each region; together, these surrogates can describe the entire fitness landscape and improve the exploration capability of the algorithm. Near the current optimal solution, a novel ensemble surrogate is adopted to perform a local search that speeds up the convergence process. In the framework of a multi-tasking optimization algorithm, several surrogates are optimized simultaneously as related tasks. As a result, several optimal solutions spread throughout disjoint regions can be found for real function evaluation. Fourteen 10- to 100-dimensional test functions and a spatial truss design problem were used to compare the proposed approach with several recently proposed SAEAs. The results show that the proposed MSAMT performs better than the comparison algorithms on most test functions and the real engineering problem.

1. Introduction

Evolutionary algorithms (EAs), such as genetic algorithms (GAs) [1], differential evolution (DE) [2], and particle swarm optimization (PSO) [3], have been successfully utilized in various practical engineering problems, such as the design of aircraft wings [4], robotics [5], wireless networks [6], and the optimization of vehicle technology trajectory [7,8]. However, the utilization of EAs to address optimization problems requires hundreds of expensive exact fitness evaluations (FEs) to achieve an acceptable solution, which severely hinders their application in practical engineering problems. To solve this problem, SAEAs have been proposed. SAEAs are a class of techniques that combine EAs with surrogate models; they build inexpensive surrogate models to predict the fitness values of candidate solutions as a replacement for FEs in the EAs. In recent years, the use of SAEAs for optimization problems has received increasing attention because the computational cost of constructing surrogates and predicting fitness values is lower than that of exact FEs.
The model management strategy is crucial for SAEAs: it selects new samples for exact FEs and uses them to update the surrogate model [9]. Currently developed model management strategies are mainly categorized as generation-based and individual-based, where the former selects all samples of a generation for accurate FEs and the latter selects promising individuals for real evaluation. Both strategies have been shown to be effective, but in general, individual-based criteria are more flexible than generation-based criteria [10,11]. There are usually two criteria for selecting promising individuals. The first is the performance-based criterion, which chooses the individual with the best fitness value for accurate evaluation to effectively enhance the performance of the surrogate and facilitate the exploitation of the currently promising area [12]. The second is the uncertainty-based criterion, which can enhance exploration by selecting the most uncertain individual for accurate evaluation [13]. It has been shown [14] that using only one infill criterion may lead to early convergence or falling into a local optimum. Therefore, this paper selects the optimal or average individual for real function evaluation.
The current SAEAs can be classified into two principal categories. The first category is the single-layer SAEAs, which do not have hierarchically organized components and usually use only one model, i.e., a local or global surrogate, to assist one evolutionary algorithm. Li et al. [15] presented a fast algorithm using a global radial basis function (RBF) model to assist PSO, in which they combined the performance-based criterion with a new criterion that considers distance and fitness value concurrently, boosting the efficiency of the algorithm. Li et al. [16] also developed a surrogate-assisted multi-swarm optimization technique that alternately carries out teaching-learning-based optimization (TLBO) to enhance exploration and PSO to accelerate convergence. Global surrogates can summarize the overall fitness landscape. Nourian et al. [17] combined graph neural networks (GNNs) and PSO for the optimal design of truss structures. However, generally speaking, a large number of exactly evaluated samples are required to build highly accurate global surrogates when solving large-scale expensive optimization problems, which can make it difficult for algorithms to balance high accuracy against low time consumption. Therefore, local surrogates are commonly used to increase accuracy by capturing the regional characteristics of the fitness landscape. Cai et al. [18] presented a search technique based on a trust region, which establishes a local RBF model in the trust region surrounding the best solutions, thereby boosting the optimization efficiency. Pan et al. [19] used neighboring samples to train a more accurate local RBF model to assist the TLBO and DE alternately and proposed a new prescreening criterion.
The second type is known as the hierarchical SAEAs, where multiple models are used to assist different EAs. A fine local surrogate is utilized to depict the specifics of the fitness landscape, while a rough global surrogate is utilized to mitigate the impact of local optima. Sun et al. [20] developed a hierarchical surrogate that uses both local and global models to assist PSO in directing the swarm towards the global optima. Wang et al. [21] developed the use of a global RBF and a local Kriging model to assist DE for high-dimensional expensive problems. Kůdela et al. [22] integrated a global RBF and a new Lipschitz-based surrogate-assisted global optimization procedure with local optimization based on a local RBF model. Hierarchical SAEAs may incur greater computational overhead than single-layer SAEAs, which can provide a solution in a shorter amount of time. However, for high-dimensional multimodal problems, hierarchical SAEAs are preferred: they can first identify promising regions in the global search phase and then explore local regions more deeply, which is very beneficial for a comprehensive search of the decision space.
The reliability of SAEAs is mainly influenced by the surrogate and the evolutionary algorithm. As the problem dimension increases, a single surrogate easily causes the algorithm to be trapped in local optima. Meanwhile, a fixed surrogate makes it quite difficult to accurately describe different optimization problems. Therefore, to address this issue, ensembles of surrogates are widely used in SAEAs [23,24]. Ren et al. [25] developed a hybrid algorithm that first carries out a series of global searches and then alternates global and local searches, employing social learning particle swarm optimization (SL-PSO) and DE for global searches assisted by a surrogate ensemble and local searches assisted by an RBF model, respectively. Guo et al. [13] replaced the time-consuming Gaussian processes with a heterogeneous ensemble composed of models with different input characteristics, which improves the diversity and reliability of the surrogate. Moreover, research on adaptive surrogate model selection techniques has also received more and more attention. Li et al. [26] adaptively selected either an RBF model or an ensemble model to facilitate PSO according to the standard deviation. Liu et al. [27] introduced a decision space partition-based SAEA for addressing expensive optimization problems, which divides the space into several disjoint regions by the k-means clustering technique, establishes an RBF model in each region to assist PSO in the global search, and utilizes an adaptive surrogate to assist local exploitation. Zhang et al. [28] designed a hierarchical surrogate framework that adopts RBF-assisted TLBO to locate promising regions and dynamic surrogate-assisted DE to precisely determine the global optimal solution. The evolutionary algorithm is also a key component of SAEAs. When faced with a complex problem, a single EA may induce the optimization process to follow a similar trajectory and become trapped in local optima [29]. As a result, several SAEAs use different EAs at the same time. In addition, some SAEAs using multi-task evolutionary algorithms have been developed. Multi-task optimization (MTO) is a method that can process several optimization tasks simultaneously, and it can promote the overall task-solving progress through data transfer between different tasks. Liao et al. [30] introduced a multi-tasking algorithm design framework that uses the generalized multi-factorial evolutionary algorithm (G-MFEA) to assist global and local surrogates simultaneously in addressing expensive optimization problems. Yang et al. [31] took a global Kriging or a local RBF model as the assistant task at different stages and then used the MFEA to optimize the assistant task and the target task simultaneously.
In addition, multi-agent systems using the partition idea are also effective in preventing algorithms from falling into local optima. Intelligent agents evolve in the decomposed dimensional space to find local suboptimal solutions, and the global optimum is determined through the dynamic interaction of the agents. Akopov et al. [32] proposed a multi-agent genetic algorithm for multi-objective optimization, in which global optimization is achieved in the process of coordinating intelligent agents and the number of true function evaluations is significantly reduced. Zhong et al. [33] proposed a multi-agent genetic algorithm that fixes all agents in a lattice environment and then uses the competition or cooperation of these agents to find the globally optimal solution. Table 1 summarizes the important algorithms, including EAs and SAEAs.
Inspired by the above surrogate-assisted optimization techniques, this study presents a novel multi-surrogate-assisted multi-tasking optimization algorithm (MSAMT) for large-scale expensive optimization problems. The algorithm first divides the entire decision space into different regions and constructs a corresponding RBF model for each region to support the global exploration. Then, a novel dynamic ensemble of surrogates with high reliability is built near the current optimal solution to support the local exploitation. Finally, the G-MFEA is used to assist the multiple surrogates in searching each subregion and the space around the current optimal solution. The following are the innovations and major contributions of the proposed method:
  • Under the framework of the G-MFEA, a space partition strategy is used to introduce multiple surrogate models as optimization tasks, and global and local searches are carried out simultaneously so that the algorithm can achieve good exploration and considerably reduce the chance of falling into local optima.
  • A novel dynamic ensemble of surrogates is proposed. In this adaptive strategy, a single surrogate or an ensemble of two surrogates or three surrogates is selected as a local model from the base model pool (including RBF, polynomial response surface (PRS), and support vector regression (SVR)) according to the error index, thus accelerating the convergence.
  • Experimental results indicate that the proposed algorithm performs better than the comparison algorithms on most of the test functions and a truss design problem, yielding better results and faster convergence.
The rest of this paper is structured as follows: Section 2 briefly reviews the information on the surrogate models (RBF, PRS, SVR, and the ensemble of surrogates) and the MFEA, and introduces the details of the MSAMT. The performance of the proposed algorithm is evaluated by extensive empirical studies in Section 3. Lastly, Section 4 concludes this work.

2. Materials and Methods

This section first introduces the surrogate models (SVR, RBF, PRS, and the unified ensemble of surrogates) used in this work, followed by a brief description of the MFEA.

2.1. Polynomial Response Surface

Box and Wilson [36] first proposed the PRS model, which is one of the most commonly used surrogates. The second-order polynomial function is the most widely utilized, because overfitting occurs easily when the order exceeds three. The second-order PRS can be represented as follows:
$\hat{y}(x) = \beta_0 + \sum_{i=1}^{m} \beta_i x_i + \sum_{i=1}^{m} \beta_{ii} x_i^2 + \sum_{i=1}^{m-1} \sum_{j=i+1}^{m} \beta_{ij} x_i x_j$ (1)
where $\hat{y}(x)$ is the predicted value of the surrogate, $m$ denotes the number of input variables, and $\beta = \{\beta_0, \beta_i, \beta_{ii}, \beta_{ij}\}$ represents the coefficients that need to be determined.
PRS has the advantages of strong interpretability, simple computation, high efficiency, and good noise filtering, which make it effective for low-dimensional nonlinear problems. However, as the problem dimension increases, the accuracy of PRS becomes relatively low.
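To make this concrete, the following minimal Python sketch (with toy data that are not from this study) fits the second-order PRS of Equation (1) by ordinary least squares:

```python
# A minimal sketch of second-order PRS fitting; the data are illustrative only.
import numpy as np

def prs_design_matrix(X):
    n, m = X.shape
    cols = [np.ones(n)]                           # constant term beta_0
    cols += [X[:, i] for i in range(m)]           # linear terms beta_i * x_i
    cols += [X[:, i] ** 2 for i in range(m)]      # quadratic terms beta_ii * x_i^2
    cols += [X[:, i] * X[:, j]                    # interaction terms beta_ij * x_i * x_j
             for i in range(m - 1) for j in range(i + 1, m)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 3))          # 50 toy samples in 3 dimensions
y = 1.0 + X[:, 0] - 2.0 * X[:, 1] ** 2 + X[:, 0] * X[:, 2]

beta, *_ = np.linalg.lstsq(prs_design_matrix(X), y, rcond=None)
y_hat = prs_design_matrix(X) @ beta               # PRS predictions at the samples
```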

2.2. Radial Basis Function

The RBF model, which was first developed by Hardy [37], can be used to approximate irregular terrain. The interpolation form of RBF is as follows:
$\hat{y}(x) = \sum_{i=1}^{n} \omega_i \varphi_i(\lVert x - x_i \rVert)$ (2)
where $n$ represents the sample size, $\varphi_i(\cdot)$ represents the $i$th basis function, $\omega = \{\omega_1, \omega_2, \dots, \omega_n\}$ denotes the weight coefficients, and $\lVert \cdot \rVert$ represents the Euclidean distance between two samples. It is important to note that there are multiple kinds of basis functions, including thin plate splines, linear splines, cubic splines, and so on.
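A minimal sketch of the interpolation in Equation (2) follows. A Gaussian basis with an illustrative width sigma is assumed here because it keeps the interpolation matrix well behaved; the basis functions named above would be drop-in replacements:

```python
# A sketch of RBF interpolation; the Gaussian basis and its width are assumptions.
import numpy as np

def pairwise_dist(A, B):
    # Euclidean distances between the rows of A and the rows of B
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

def rbf_fit(X, y, sigma=1.0):
    Phi = np.exp(-(pairwise_dist(X, X) / sigma) ** 2)
    return np.linalg.solve(Phi, y)                # weights omega of Equation (2)

def rbf_predict(X_train, w, X_new, sigma=1.0):
    Phi = np.exp(-(pairwise_dist(X_new, X_train) / sigma) ** 2)
    return Phi @ w                                # predicted fitness at X_new
```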

2.3. Support Vector Regression

The SVR surrogate model is given as follows:
$\hat{y}(x) = \mu + \sum_{i=1}^{n} \omega_i \psi(x, x_i)$ (3)
For linear function approximation, SVR can be rewritten as follows:
$\hat{y}(x) = \mu + \langle \omega, x \rangle$ (4)
where $\mu$ is the threshold and $\langle \cdot, \cdot \rangle$ represents the dot product. Finding the weights corresponds to solving the following optimization problem, which keeps $\hat{y}(x)$ as flat as possible while tolerating deviations of at most $\varepsilon$ through the slack variables $\xi_i$ and $\xi_i^*$:
$\min \ \frac{1}{2}\lVert \omega \rVert^2 + \gamma \sum_{i=1}^{n} (\xi_i + \xi_i^*)$
$\text{s.t.} \quad y_i - \langle \omega, x_i \rangle - \mu \le \varepsilon + \xi_i, \quad i = 1, 2, \dots, n$
$\qquad \langle \omega, x_i \rangle + \mu - y_i \le \varepsilon + \xi_i^*, \quad i = 1, 2, \dots, n$
$\qquad \xi_i, \xi_i^* \ge 0, \quad i = 1, 2, \dots, n$ (5)
where γ represents a regularization parameter that controls the trade-off between the complexity of the model and the level at which errors are allowed to exceed ε . The following formula can be used to obtain the weight coefficient:
$\omega = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*)\, x_i$ (6)
where α i and α i * are the Lagrange multipliers. The SVR model obtained by inserting Equation (6) into Equation (4) is as follows:
$\hat{y}(x) = \mu + \sum_{i=1}^{n} (\alpha_i - \alpha_i^*)\, \langle x_i, x \rangle$ (7)
For nonlinear approximation, the kernel function can be used to replace the dot product of the input vectors to obtain the nonlinear transformation, and the resulting SVR model is expressed as follows:
$\hat{y}(x) = \mu + \sum_{i=1}^{n} (\alpha_i - \alpha_i^*)\, k(x_i, x)$ (8)
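As an illustration, the epsilon-SVR of Equations (3)-(8) is available in off-the-shelf libraries; the sketch below uses scikit-learn's SVR with purely illustrative hyperparameters (its parameter C plays the role of the regularization parameter $\gamma$ in Equation (5)):

```python
# A hedged sketch of training an SVR surrogate; data and settings are illustrative.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(80, 4))
y = np.sum(X ** 2, axis=1)                        # toy objective, not from the paper

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)   # kernel k replaces the dot product
model.fit(X, y)
y_hat = model.predict(X)                          # surrogate predictions y_hat(x)
```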

2.4. Dynamic Ensemble of Surrogates

Generally speaking, there is no single surrogate that outperforms other surrogates in all kinds of problems. Therefore, the ensemble surrogate, which consists of several component surrogates that can fully utilize the benefits of each surrogate, has been widely used. The prediction of the ensemble surrogate is obtained through weighting the predicted value of multiple surrogates, so the approximate fitness of the samples can be determined by the following:
$\hat{y}_{ens}(x) = \sum_{i=1}^{N_{ens}} \omega_i(x)\, \hat{y}_i(x)$ (9)
where $\hat{y}_{ens}(x)$ is the prediction of the ensemble of surrogates, $\hat{y}_i(x)$ denotes the prediction of the $i$th surrogate, $N_{ens}$ is the number of surrogates, and $\omega_i(x)$ is the weight factor of the $i$th surrogate, which must satisfy $\sum_{i=1}^{N_{ens}} \omega_i(x) = 1$.
According to the characteristics of the weight coefficients, the existing methods for ensemble modeling can be classified into global measures-based, local measures-based, and mixed measures-based. The weight coefficient in global measures is a constant calculated according to the global error of each surrogate, while the weight coefficient in local measures is influenced by the location of the prediction sample. The former ignores the difference in the fitting accuracy of the model in local areas, while the accuracy of the latter may decline away from the sample points. The mixed measures-based approach combines the advantages of the two, taking into account the regions both near and away from the samples. On the basis of the existing methods, Zhang et al. [38] suggested a new unified ensemble of surrogates (UES), which integrates global and local measures; the weight coefficients are determined according to the position of the predicted sample within the entire decision space as follows:
$\omega_i(x) = \frac{\omega_i^G(x)\,\lambda(x) + \omega_i^L(x)\,(1 - \lambda(x))}{\sum_{j=1}^{N_{ens}} \left[ \omega_j^G(x)\,\lambda(x) + \omega_j^L(x)\,(1 - \lambda(x)) \right]}$ (10)
where $\omega_i^G(x)$ and $\omega_i^L(x)$ represent the global and local weight coefficients of the $i$th surrogate, respectively, and $\lambda(x) = \sin\left(\frac{\pi}{2}\,\frac{d_1(x)}{d_2(x)}\right)$ is the control function used to determine the impact of the global and local measures, where $d_1(x)$ and $d_2(x)$ represent the distances between the predicted sample and its closest and second-closest samples, respectively.
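A minimal sketch of this mixed weighting follows; how the global and local weights are derived from the error measures of each component surrogate is beyond this snippet, so they enter as placeholder inputs:

```python
# A sketch of the UES prediction (Equations (9)-(10)); weights are placeholders.
import numpy as np

def ues_predict(x, X_train, preds, w_global, w_local):
    # preds: predictions of the component surrogates at x
    # w_global, w_local: unnormalized global/local weights of each surrogate
    d = np.sort(np.linalg.norm(X_train - x, axis=1))
    lam = np.sin(0.5 * np.pi * d[0] / d[1])       # control function lambda(x)
    raw = w_global * lam + w_local * (1.0 - lam)
    w = raw / raw.sum()                           # normalized so the weights sum to 1
    return float(np.dot(w, preds))                # weighted ensemble prediction
```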
As a fixed ensemble of KRG, RBF, and PRS, the UES is difficult to apply to high-dimensional problems. For one thing, the time cost of constructing the KRG model gradually exceeds the affordable range as the dimension increases. For another, the UES uses leave-one-out cross-validation to calculate the weight coefficients, which consumes a great deal of time when a fixed ensemble of component surrogates is used. To alleviate the running cost, Zhang et al. [28] developed a dynamic ensemble of surrogates (DES) based on the UES, which takes RBF, PRS, and SVR as component surrogates and uses the determination coefficient $R^2$ and the normalized root mean square error $NRMSE$ to filter the models. $R^2$ and $NRMSE$ are defined as follows:
$R^2 = 1 - \frac{\sum_{i=1}^{n_t} (f_i - \hat{f}_i)^2}{\sum_{i=1}^{n_t} (f_i - \bar{f})^2}$ (11)
$NRMSE = \sqrt{\frac{\sum_{i=1}^{n_t} (f_i - \hat{f}_i)^2}{\sum_{i=1}^{n_t} \hat{f}_i^2}}$ (12)
where $n_t$ denotes the number of test samples, $\hat{f}_i$ denotes the surrogate prediction for the $i$th test sample, $f_i$ denotes the exact function fitness of the $i$th test sample, and $\bar{f}$ is the average of the exact function values over the $n_t$ test samples.
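Both metrics are straightforward to compute; a short sketch following Equations (11) and (12) as reconstructed above:

```python
# R^2 and NRMSE over a test set of exact values f and predictions f_hat.
import numpy as np

def r_squared(f, f_hat):
    f, f_hat = np.asarray(f), np.asarray(f_hat)
    return 1.0 - np.sum((f - f_hat) ** 2) / np.sum((f - f.mean()) ** 2)

def nrmse(f, f_hat):
    f, f_hat = np.asarray(f), np.asarray(f_hat)
    return np.sqrt(np.sum((f - f_hat) ** 2) / np.sum(f_hat ** 2))
```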

2.5. Multi-Factorial Evolutionary Algorithm

Gupta et al. [34] first suggested the multi-factorial evolutionary algorithm (MFEA), which represents a prototypical multi-tasking algorithm design technique. The MFEA is inspired by the bio-cultural models of multifactorial inheritance, which mimics the processes of biological mutation, selection, and reproduction for generational evolution. The “multi-factorial” means that the MFEA can deal with multiple tasks simultaneously. The MFEA optimizes several tasks simultaneously for one evolutionary population in the unified space. The definition of the multi-factorial optimization problem is as follows:
$\{x_1^*, x_2^*, \dots, x_K^*\} = \arg\min \left\{ f_1(x), f_2(x), \dots, f_K(x) \right\}, \quad \text{s.t.} \ x_i^* \in \Omega_i, \ i = 1, 2, \dots, K$ (13)
where $K$ denotes the number of optimization problems, $f_i$ represents the $i$th optimization problem, $\Omega_i$ represents the decision space of the $i$th optimization problem, $D_i$ is the dimension of $\Omega_i$, and $x_i^*$ denotes the optimal solution of the $i$th optimization problem.
Each individual in the MFEA is defined by a collection of new properties, which are as follows:
Definition 1.
The factorial cost $\Psi_j^i$ of the individual $p_i$ on the $j$th optimization problem is given by $\Psi_j^i = f_j^i$, where $f_j^i$ is the objective value of $p_i$ with respect to the $j$th optimization problem.
Definition 2.
The factorial rank, designated as $r_j^i$, represents the position of the individual $p_i$ when the population is sorted in ascending order of factorial cost on the $j$th optimization problem.
Definition 3.
The scalar fitness $\varphi_i$ of the individual $p_i$ is obtained from its minimum factorial rank achieved across all tasks, i.e., $\varphi_i = 1 / \min_{j \in \{1, 2, \dots, K\}} r_j^i$.
Definition 4.
The skill factor of an individual $p_i$ represents the task on which it is most effective among all tasks, that is, $\tau_i = \arg\min_j r_j^i$.
In the MFEA, the evolutionary population $P$ is first created in the unified search space $\Omega$ with boundary [0, 1]. The dimension $D_\Omega$ of $\Omega$ is $D_\Omega = \max\{D_1, D_2, \dots, D_K\}$, where $D_i$ denotes the dimension of the $i$th optimization problem. Each individual is associated with an optimization task based on its factorial rank and obtains a skill factor. Then, offspring are generated through assortative mating, and skill factors are assigned to them through vertical cultural transmission. The above steps are repeated until the computational budget is depleted. Finally, the individuals with better scalar fitness are chosen as the parental population of the next generation.
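Definitions 1-4 can be computed directly from a matrix of factorial costs; a minimal sketch with a random toy population:

```python
# Factorial ranks, scalar fitness, and skill factors from a cost matrix (Definitions 1-4).
import numpy as np

rng = np.random.default_rng(42)
costs = rng.random((20, 3))      # Psi[i, j]: factorial cost of individual i on task j

# factorial rank r_j^i: 1-based position of individual i when task j is sorted ascending
ranks = np.argsort(np.argsort(costs, axis=0), axis=0) + 1
scalar_fitness = 1.0 / ranks.min(axis=1)          # phi_i = 1 / min_j r_j^i
skill_factor = np.argmin(ranks, axis=1)           # tau_i = argmin_j r_j^i
```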

2.6. Methodology

In this section, a novel SAEA, denoted as the MSAMT, is presented, and the general framework of the MSAMT is first introduced. Then, a global RBF model based on decision space partition and a new dynamic ensemble of surrogates are given. Two phases of the MSAMT are subsequently presented in detail, namely local search based on an ensemble surrogate and global search based on space partition.

2.6.1. The General Framework of the MSAMT Method

The main algorithm design framework of the MSAMT is given in Figure 1. During the surrogate construction process, all of the samples in the database are first divided into n clusters using k-means clustering to prevent the SAEA from entering local optima. Then, the entire space is divided into n disjoint regions, each of which is associated with a different cluster. Finally, a corresponding RBF model is constructed in each region to assist the optimizer in finding promising points in the global search, and a dynamic ensemble of surrogates is constructed using the best samples selected from the database to search for a promising point within the local region. The former is concerned with exploring the entire decision space, while the latter tends to exploit the subregion near the optimal individual. The promising points identified by the surrogate-assisted G-MFEA are accurately evaluated and added to the database (DB).
Algorithm 1 provides the pseudocode of the MSAMT. Firstly, N initial samples are generated by Latin hypercube sampling (LHS); each sample is evaluated by the real objective function and added to the DB. Secondly, all the exactly evaluated samples are clustered into n groups using k-means clustering, the entire space is divided into n disjoint areas based on the clusters, and an RBF model is built for each region. Thirdly, the $N_D$ samples having the best fitness values are chosen to construct the dynamic ensemble of surrogates proposed in Algorithm 2. Finally, the n + 1 surrogates are applied to help the G-MFEA evolve the population for $k_{max}$ generations, as shown in Algorithm 3; Algorithm 4 is applied to produce the initial evolutionary population. Individuals with superior fitness values predicted by the surrogates are exactly evaluated and archived in the DB. The global and local searches are conducted simultaneously, and each iteration locates n promising points distributed over the disjoint regions and one promising point in the region around the optimal individual. The entire process is repeated until the maximum number of FEs is exhausted.
Algorithm 1 Pseudocode of the MSAMT algorithm
Input: D , problem dimension; N , initial population size; N D , population size of dynamic ensemble of surrogates; N P , evolutionary population size; DB, database; N F E , the number of exact evaluations; N F E m a x , the maximum number of exact evaluations; f , the real objective function; n , the number of clusters; rmp, matching probability; k m a x , the maximum number for each running of multitasking optimization;
Output: the position of the best individual and its fitness value;
1: Generate N initial samples by LHS;
2: Evaluate exactly N initial samples by the real objective function and save them to the DB;
3: while  N F E < N F E m a x , do
4:   //Decision space partition based global search
5:   Randomly select n samples from the DB as clustering centers;
6:   Calculate the distance between all samples in the DB and each clustering center and assign each sample to the clustering center nearest to it;
7:   Calculate the mean of each cluster as the new clustering center;
8:   Repeat rows 6 and 7 until the new centers are unchanged from the previous centers, and output the n center positions;
9:   for  i = 1 to n , do
10:   Calculate the radius $r_i$ according to Equation (14);
11:  end for
12:  Determine in which region each of the samples in the database is located;
13:  for  i = 1 to n , do
14:   if the number of exactly evaluated samples distributed in the i th region is larger than 5 × D , then
15:    Select all exactly evaluated samples in the i th region to construct the RBF surrogate model;
16:   else
17:    Select the $5 \times D$ exactly evaluated samples nearest to the $i$th region to construct the RBF surrogate model;
18:   end if
19:  end for
20:  //Dynamic ensemble of surrogates based local search
21:  Apply Algorithm 2 to construct a dynamic ensemble of surrogates;
22:  //Perform global and local searches using RBF models and the dynamic ensemble of the surrogate-assisted G-MFEA, respectively;
23:  Apply Algorithm 3 to generate an evolutionary population and let k = 0 ;
24:  Apply Algorithm 4 to perform multi-tasking optimization;
25:  Locate the n + 1 individuals having the best predicted fitness;
26:  Evaluate the real fitness value of the n + 1 best individuals and save them to the DB;
27:   N F E = N F E + n + 1 ;
28: end while
Algorithm 2 Pseudocode for the dynamic ensemble of surrogates
Input: N D , population size of the dynamic ensemble of surrogates; DB, database;
Output: the dynamic ensemble of surrogates;
1: Apply k -fold cross validation (fivefold cross-validation is adopted in this paper) to N D best samples in the DB and build a pool of different single surrogate models (denoted as M p o o l );
2: Compute the error metrics ( R 2 and N R M S E ) of these surrogates according to Equations (11) and (12);
3: Sort the surrogates by $R^2$ in descending order and by $NRMSE$ in ascending order, and record the corresponding rank indices;
4: Add the R 2 and the N R M S E indices for each surrogate together to obtain the final ranking indicator for all surrogates (denoted as M s o r t );
5: if find($M_{sort}$ = 2) is not empty, then
6:   Remove the other surrogates from M p o o l and then use the single surrogate model with the ranking indicator of 2;
7: else if find($M_{sort}$ = 3) is not empty, then
8:   if there are two surrogates with the ranking indicator of 3, then
9:   Remove the other surrogate from M p o o l and formulate the two surrogates with the ranking indicator of 3 into an ensemble surrogate using the UES;
10:  end if
11: else
12:  Formulate the three surrogates into an ensemble surrogate using the UES;
13: end if
Algorithm 3 Pseudocode for multi-tasking optimization
Input: N p , evolutionary population; N P , evolutionary population size; k m a x , the maximum number for each running of multi-tasking optimization; M g , the global surrogate model; M l , the local surrogate model; θ , the frequency to change the translation direction; φ , the threshold value to start the decision variable translation strategy;
Output: the n + 1 optimal solutions of surrogate models;
1: Apply Algorithm 4 to generate the evolutionary population and let k = 0 ;
2: Predict the fitness of each individual in N p with the RBF surrogates and the ensemble surrogate;
3: Compute the skill factor ( τ ) of each individual;
4: while  k < k m a x , do
5:   Randomize the individuals in N p ;
6:   for  i = 1 to N P / 2 , do
7:    Select two parent candidates p i and p i + N P / 2 from N p ;
8:    Generate a random number r 1 , r 2 , and r 3 between 0 and 1;
9:    if  τ i = τ i + N P / 2 or r 1 < r m p , then
10:    Parents p i and p i + N P / 2 crossover to give two offspring individuals c i and c i + N P / 2 ;
11:   else
12:    Mutate parent p i slightly to give an offspring c i .
13:    Mutate parent p i + N P / 2 slightly to give an offspring c i + N P / 2 ;
14:   end if
15:   Assign skill factors by parent candidates to offspring through vertical cultural transmission;
16:   if $k > \varphi$ and mod($k$, $\theta$) = 0, then
17:    Calculate the translated direction of each task;
18:   end if
19:   Update the position of each offspring according to the translated direction of its corresponding task;
20:  end for
21:  Predict the fitness of each offspring with the RBF surrogates and the ensemble surrogate and select better individuals as next parents;
22:   k = k + 1 ;
23: end while
Algorithm 4 Pseudocode for generating the evolutionary population
Input: N P , evolutionary population size; DB, database; n u m , the number of samples in the DB;
Output: the evolutionary population N p ;
1: if $num < N_P/4$, then
2:   Add all samples from the DB to $N_p$;
3:   Generate $N_P - num$ individuals by LHS and add them to $N_p$;
4: else
5:   if $num < N_P/2$, then
6:    Add the first sample from the DB to $N_p$;
7:    Randomly select $N_P/4 - 1$ samples from the other samples in the DB and add them to $N_p$;
8:   else
9:    Add the first sample from the DB to $N_p$;
10:   Randomly select $N_P/4 - 1$ samples from the first $N_P/2$ samples in the DB and add them to $N_p$;
11:  end if
12:  Generate $N_P - N_P/4$ individuals by LHS and add them to $N_p$;
13: end if

2.6.2. The Global and Local Models

Various multimodal optimization problems contain several local optima, so an algorithm can easily become trapped in one of them. The proposed MSAMT adopts decision space partition to help the optimization algorithm carry out the global search. In this part of the work, we adopt the method suggested by Liu et al. [27] to partition the decision space. Firstly, all the samples in the database are divided into n different clusters using the k-means clustering technique, i.e., $c_1, c_2, \dots, c_n$. Then, the decision space is divided into n suitable disjoint regions according to the clusters. The radius of the $i$th region can be obtained as follows:
$r_i = \frac{\max_{j = 1, 2, \dots, n;\, j \ne i}\, dis(cp_i, cp_j)}{2} \sqrt[D]{\frac{D}{n - 1}}$ (14)
where $cp_i$ and $cp_j$ are the center positions of $c_i$ and $c_j$, respectively, $dis(cp_i, cp_j)$ represents the Euclidean distance from $cp_i$ to $cp_j$, and $n$ and $D$ denote the number of clusters and input variables, respectively.
After the decision space is divided, samples are selected from the DB to build an RBF model in each subregion separately (denoted as $M_g = \{M_g^1, M_g^2, \dots, M_g^n\}$). Given the $i$th region, if the number of accurately assessed samples in this region is greater than $5 \times D$, all of them are chosen for modeling; otherwise, the $5 \times D$ accurately assessed samples nearest to this region are selected for modeling.
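A hedged sketch of this partition step is given below, using scikit-learn's k-means and the radius of Equation (14) as reconstructed above; the 5 × D fallback mirrors the rule just described:

```python
# A sketch of the decision space partition; the database X_db is a placeholder.
import numpy as np
from sklearn.cluster import KMeans

def partition_regions(X_db, n):
    centers = KMeans(n_clusters=n, n_init=10).fit(X_db).cluster_centers_
    D = X_db.shape[1]
    dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    radii = dist.max(axis=1) / 2.0 * (D / (n - 1)) ** (1.0 / D)   # Equation (14)
    return centers, radii

def region_training_set(X_db, center, radius, D):
    d = np.linalg.norm(X_db - center, axis=1)
    inside = np.where(d <= radius)[0]
    if inside.size > 5 * D:
        return inside                  # enough exactly evaluated samples in the region
    return np.argsort(d)[: 5 * D]      # otherwise take the 5*D nearest samples
```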
As a dynamic ensemble of RBF, PRS, and SVR, the DES model has been used for high-dimensional optimization problems [28]. However, its accuracy and robustness need to be improved. On the one hand, the DES requires two parameters, $R^2_{lb}$ and $R^2_{ub}$, to select appropriate component surrogates, which may make it poorly suited to various optimization problems. On the other hand, the DES makes a first selection of component surrogates by $R^2$ and then a second selection by $R^2$ or $NRMSE$ according to the number of component surrogates remaining after the first selection. Therefore, the DES may use only one error metric when selecting component surrogates, which may bias the accuracy of the final ensemble surrogate.
To enhance the accuracy of the ensemble surrogate, a novel dynamic selection for the ensemble surrogate (DSES) is presented in this paper. Algorithm 2 provides the DSES technique in detail. The main distinctions between the DSES and the DES include the following two points. First, the DSES uses R 2 and N R M S E error metrics simultaneously to select component surrogates, ensuring that the final components used to formulate the ensemble surrogate have better fitting performance. Second, the DSES selects component surrogates from two perspectives at the same time. Therefore, it does not need to limit surrogates by setting thresholds, avoiding the trouble of adjusting parameters. The final DSES (denoted as M l ) can be a single surrogate or an ensemble of two or three surrogates.
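A minimal sketch of this selection rule, assuming that rank 1 is assigned to the largest $R^2$ and to the smallest $NRMSE$ so that a combined indicator of 2 marks a model that is best on both metrics:

```python
# DSES component selection (Algorithm 2); returns indices of the surviving surrogates.
import numpy as np

def dses_select(r2, nrmse):
    r2, nrmse = np.asarray(r2), np.asarray(nrmse)
    idx_r2 = np.argsort(np.argsort(-r2)) + 1        # rank 1 = largest R^2
    idx_nrmse = np.argsort(np.argsort(nrmse)) + 1   # rank 1 = smallest NRMSE
    m_sort = idx_r2 + idx_nrmse                     # combined ranking indicator
    if (m_sort == 2).any():
        return np.where(m_sort == 2)[0]             # single best surrogate
    if (m_sort == 3).sum() == 2:
        return np.where(m_sort == 3)[0]             # UES ensemble of the two
    return np.arange(m_sort.size)                   # UES ensemble of all three

# Example: SVR is best on R^2 but worst on NRMSE, so all three models are kept.
print(dses_select([0.91, 0.88, 0.95], [0.05, 0.04, 0.09]))   # -> [0 1 2]
```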
To demonstrate that the DSES has better performance than the DES, Figure 2 gives the $NRMSE$ values of both on the 20-dimensional F1 and F3 during the search process. From Figure 2, it can be observed that the $NRMSE$ values of the DSES and the DES fluctuated greatly in the early period, but both gradually stabilized as the number of real function evaluations increased. After stabilization, the DSES clearly had smaller $NRMSE$ values than the DES. Therefore, one can conclude that, across different optimization problems, the DSES builds surrogate models with higher reliability than the DES.

2.6.3. Multi-Tasking Optimization Based on Decision Space Partition and a Dynamic Ensemble of Surrogates

It has been demonstrated that multi-tasking optimization may effectively handle multiple optimization tasks simultaneously [30,31]. In this study, the generalized multi-factorial evolutionary algorithm (G-MFEA) [35] is utilized to identify the best solution. The G-MFEA is a generalization of the MFEA that adds two strategies, decision variable translation and decision variable shuffling, to promote knowledge transfer between optimization problems with different optimal solutions. The n RBF models and one dynamic ensemble of surrogates are optimized simultaneously as n + 1 tasks in the G-MFEA. The pseudocode for multi-tasking optimization is given in Algorithm 3.
In Algorithm 3, an evolutionary population $N_p$ with $N_P$ individuals is generated at first, and their skill factors are computed by predicting fitness using the n RBF models and the dynamic ensemble of surrogates. The parent population generates an offspring population through an assortative mating strategy, subsequently utilizing vertical cultural transmission to assign skill factors to the offspring population. A decision variable translation strategy is employed to map each offspring to a new location. The scalar fitness of the current populations is computed using the surrogates, and then individuals with better scalar fitness are chosen to become the next parent population. Algorithm 4 details the procedure of generating the initial evolutionary population. If the number of accurately assessed samples in the database is less than $N_P/4$, all the samples in the database are added to the evolutionary population; if the number is greater than $N_P/4$ but less than $N_P/2$, $N_P/4$ samples are randomly selected from the database and added to the evolutionary population; otherwise, the $N_P/2$ best samples are picked out from the database at first, and then $N_P/4$ samples are selected from them and added to the evolutionary population. Finally, LHS is used to generate the remaining evolutionary individuals. The initial evolutionary population is generated in such a way that both historical data and new data are included, which can lead to better algorithmic performance.
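A hedged sketch of one generation of this process follows; the uniform crossover and the small Gaussian mutation are simplified stand-ins for the variation operators used by the G-MFEA:

```python
# Assortative mating with vertical cultural transmission (simplified operators).
import numpy as np

rng = np.random.default_rng(2)

def assortative_mating(pop, tau, rmp=0.4):
    # pop: (NP, D) population in the unified [0, 1] space; tau: skill factors
    NP, D = pop.shape
    perm = rng.permutation(NP)
    pop, tau = pop[perm], tau[perm]
    children, child_tau = [], []
    for i in range(NP // 2):
        p1, p2 = pop[i], pop[i + NP // 2]
        t1, t2 = tau[i], tau[i + NP // 2]
        if t1 == t2 or rng.random() < rmp:        # same skill factor or cross-task mating
            mask = rng.random(D) < 0.5            # stand-in uniform crossover
            c1, c2 = np.where(mask, p1, p2), np.where(mask, p2, p1)
            s1 = t1 if rng.random() < 0.5 else t2 # each child imitates a random parent
            s2 = t1 if rng.random() < 0.5 else t2
        else:                                     # otherwise mutate each parent slightly
            c1 = np.clip(p1 + 0.05 * rng.standard_normal(D), 0.0, 1.0)
            c2 = np.clip(p2 + 0.05 * rng.standard_normal(D), 0.0, 1.0)
            s1, s2 = t1, t2                       # children inherit the parent's task
        children += [c1, c2]
        child_tau += [s1, s2]
    return np.asarray(children), np.asarray(child_tau)
```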
In addition, in order to avoid falling into local optima, this paper selects the individual with better fitness between the most promising individual and the average individual for exact evaluation and archiving in the database. The average individual is obtained as follows:
$\alpha = \frac{N_{FE}}{N_{FEmax}}$ (15)
$P = \frac{\sum_{i=1}^{N} P_i}{N} \times \alpha \times Gauss(0, 1)$ (16)
where α represents a parameter that rises with the number of exact evaluations, N denotes the number of samples, and G a u s s 0,1 denotes a random number that conforms to the Gaussian distribution.
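A minimal sketch of this construction, following Equations (15) and (16) as reconstructed above (the database contents and sizes are placeholders):

```python
# Average-individual construction; DB_X holds the exactly evaluated samples.
import numpy as np

def average_individual(DB_X, nfe, nfe_max, rng=np.random.default_rng(3)):
    alpha = nfe / nfe_max                         # Equation (15)
    mean = DB_X.mean(axis=0)                      # centroid of the N samples
    return mean * alpha * rng.standard_normal()   # Equation (16), as reconstructed
```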

3. Results and Discussion

A series of algorithm analyses is conducted to evaluate the performance of the MSAMT in this section. First, the test functions and parameter settings are provided, and the behavior of the MSAMT is studied in depth. Then, the MSAMT is compared with six recently proposed SAEAs, including the HSAOA [28], DSP-SAEA [27], BiS-SAHA [25], FSAPSO [15], GL-SADE [21], and LSADE [22], on fourteen test functions and a practical engineering problem. Finally, the computational complexity of the MSAMT is analyzed.

3.1. Test Problems

To study the effectiveness of the MSAMT, fourteen commonly used test functions with different characteristics were adopted, each tested in five dimensions, namely 10D, 20D, 30D, 50D, and 100D. Table 2 provides the relevant details about these functions.

3.2. Parameter Settings

In the MSAMT, for the 10D, 20D, and 30D test functions, the initial population size N was set to 5 × D, the maximum number of exact function evaluations $N_{FEmax}$ was set to 11 × D, and the sample size $N_D$ for training the ensemble surrogate was set to 2 × D, the same as used in [28]. For the 50D and 100D test functions, N and $N_D$ were both set to 11D/10, and $N_{FEmax}$ was set to 400. In the G-MFEA, the evolutionary population size $N_P$ was set to 100 as in [31], the iteration number for population evolution was set as $k_{max}$ = 30, and the matching probability was set as rmp = 0.4. In addition, a vital parameter in the MSAMT is the number of clusters n, which was fixed at 2. To avoid randomness, each SAEA was run independently 20 times in MATLAB R2021b on an Intel Core i5-10400F with a 2.90 GHz processor to obtain the comparison results, and the MSAMT was compared to the other SAEAs using the Wilcoxon rank sum test (significance level α = 0.05) on the fourteen test functions. The symbols "+", "-", and "=" indicate that the proposed MSAMT is superior, inferior, and comparable to the corresponding algorithm, respectively.

3.3. Behavior Study of the MSAMT

3.3.1. Effect of the Number of Clusters

The number of clusters n is an important parameter in the MSAMT. In order to study its impact on the MSAMT's performance, we compared the outcomes of the MSAMT variants with varying n values across the 10, 20, and 30 dimensions of the 14 test problems, as illustrated in Table A1.
Table A1 displays the results of the algorithm analysis, which demonstrate that the variants performed significantly differently on the same test problems. For example, the MSAMT with n = 2 outperformed the other variants on test problems such as F1, F4, and F5; the MSAMT with n = 5 performed well on the F7 test function; and with n = 6, the MSAMT performed best on test functions such as F9 and F10. Comparing the results of all variants, it was found that the MSAMT with n = 2 exhibited a superior performance on most of the test problems. Therefore, n = 2 was used in this paper.

3.3.2. Effect of Matching Probability on Multi-Tasking Optimization

Exploration and exploitation capabilities determine how well an optimization algorithm performs. In the G-MFEA, the matching probability rmp is used to balance exploration and exploitation. When rmp is closer to 0, the probability of cross-task mating becomes smaller, thereby enhancing the exploitation ability; however, this may result in the algorithm converging to a local optimum. Conversely, when rmp is closer to 1, the probability of cross-task mating increases, and the algorithm is able to search the decision space extensively, thus escaping from local optima.
Figure 3 plots the mean fitness values of the MSAMT with different r m p values on problems F2, F3, F5, and F9 with 20D. As Figure 3 shows, there was a considerable variation in the performance of MSAMT with different r m p values. Without considering the red outliers, the MSAMT could obtain the best overall performance when the matching probability r m p was equal to 0.4. Therefore, r m p was set to 0.4 in this work.

3.3.3. Effect of the Maximum Generation Number for Each Running of the G-MFEA

The maximum generation number k m a x for each running of the G-MFEA serves to balance the performance and optimization costs. A smaller k m a x may make it challenging for the algorithm to locate an improved solution, and a larger k m a x is beneficial for the discovery of the optimal solution but may result in an unaffordable running cost.
To find the value of $k_{max}$ that balances performance and optimization time, the same four test problems employed in the preceding section were utilized to assess the performance of the MSAMT variants with different $k_{max}$ values. Figure 4 displays the box plots of the average fitness values of the MSAMT variants with different $k_{max}$ values. It is evident from Figure 4 that the medians, the maxima, and the interquartile ranges of the MSAMT were worse when the $k_{max}$ value was equal to 10 or 20. For different test problems, the performance of the MSAMT with $k_{max}$ values greater than or equal to 30 displayed fluctuations, but the differences were not statistically significant. The maximum number of iterations for the G-MFEA was set to 50, taking into account both the optimization efficiency and the computing cost.

3.3.4. Effect of the Decision Space Partition

To evaluate the effectiveness of the decision space partition strategy, two variants, namely the proposed MSAMT and MSAMT-withoutDSP (in which only one RBF model is built to facilitate the global search), were compared using the same four test functions as in Section 3.3.3. Figure 5 gives the convergence curves of the two variants and shows that the MSAMT performed significantly better than the MSAMT-withoutDSP. In particular, the MSAMT converged much faster in the early search stage.

3.3.5. Effect of the DSES

To validate the impact of the DSES on the MSAMT, comparison studies of the proposed MSAMT and two variants, i.e., the MSAMT-KRG, which adopts the KRG model in the local search, and the MSAMT-fixed, which uses the original fixed UES, were conducted on the 20-dimensional test functions. Statistical results for the three variants are provided in Table A2. The MSAMT demonstrated a superior performance, achieving the best solution on 8 of the 14 test functions. The MSAMT-fixed beat the other two variants on four test functions, while the MSAMT-KRG exhibited the best results on two test functions. On all test functions, the MSAMT took less computational time than the other two variants. These results demonstrate the effectiveness of the DSES.

3.4. Comparison with Other Recently Proposed Algorithms

3.4.1. Experimental Results on 10D, 20D, and 30D Test Problems

Four recently proposed SAEAs were first selected as comparison algorithms, i.e., the HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO, and they were compared with the proposed MSAMT on the 14 test functions in 10D, 20D, and 30D to evaluate the performance of the MSAMT. The HSAOA is a hierarchical surrogate-assisted optimization algorithm that alternately uses DE and TLBO. During the local search phase, the HSAOA selects the samples that are closest to the optimal solution or have the best fitness values to construct a novel dynamic ensemble of surrogates. The DSP-SAEA is a two-stage search algorithm that integrates a global search phase based on a decision space partition strategy with a local search phase using an adaptive surrogate within the trust region. The BiS-SAHA performs the global and local search stages in turn using SL-PSO assisted by a surrogate ensemble and RBF-assisted DE. The FSAPSO is a fast optimization technique using global RBF-assisted PSO that suggests a novel evaluation criterion to select particles for real function evaluation.
Table A3 demonstrates that the proposed MSAMT had a superior performance compared to the other SAEAs, with the best results on 26 functions and the second-best results on 3 functions. Compared with the HSAOA, the MSAMT had a better performance on 28 test problems. Both algorithms use a dynamic ensemble of surrogates developed from the same base ensemble; however, the dynamic ensemble of surrogates proposed for the MSAMT has a better fitting performance, which helps the algorithm converge better. The MSAMT significantly outperformed the DSP-SAEA on 66.7% of the test functions. Although the MSAMT introduces the decision space partition strategy of the DSP-SAEA into its global search, it utilizes the G-MFEA to seek the optimal solution, thus facilitating information exchange between different decision space regions and preventing premature convergence to local optima. The MSAMT had a better performance than the BiS-SAHA on 31 out of 42 functions. Both use a surrogate ensemble, but the G-MFEA employed by the MSAMT enables the algorithm to converge better. The MSAMT had a better performance than the FSAPSO on 73.8% of the test functions. The FSAPSO uses only a global RBF model, while the MSAMT simultaneously utilizes RBF models and a dynamic ensemble of surrogates, which makes the MSAMT perform better than the FSAPSO.
The convergence curves of the MSAMT and the four comparison algorithms on 10-, 20-, and 30-dimensional test functions are given in Figure A1, Figure A2 and Figure A3, respectively. According to the figures, the MSAMT outperformed the competing algorithms overall, with better convergence on most of the test functions. The preceding analysis indicates that MSAMT is competitive for low- and medium-dimensional problems.

3.4.2. Experimental Results on 50D and 100D Test Problems

High-dimensional optimization problems are widely available in reality, and thus, this paper further validates the scalability of the MSAMT through 50D and 100D test problems. In addition to the four competing algorithms previously described, two additional algorithms are added in this section for comparison with the MSAMT, namely GL-SADE and LSADE. The GL-SADE alternately uses RBF and Kriging to assist DE to perform both global and local searches and then performs local searches again utilizing the rewarding mechanism if the local search yields a better result. The LSADE suggests a new surrogate model that is based on the Lipschitz and adopts DE as an optimization algorithm. Both the GL-SADE and LSADE were proposed for expensive optimization problems. Note that on 50D and 100D test problems, the initial population size N for GL-SADE was set to 200, while for the LSADE it was set to 100 and 200.
Table A4 shows that the MSAMT achieved the optimal mean results on 60.7% of the test functions, thereby exhibiting a superior performance compared to the four competing SAEAs. For 18 of the 28 test functions, the MSAMT performed better than the HSAOA. Compared with the DSP-SAEA and the FSAPSO, the MSAMT achieved better results on 19 test functions. The MSAMT had a better performance than the FSAPSO on 71.4% of the functions. Table A5 presents the comparison results of the MSAMT with the GL-SADE and the LSADE on functions with 50 and 100 dimensions. The MSAMT obtained better results on 19 out of 28 functions. Compared with the GL-SADE, the MSAMT was more competitive on 20 test functions. In contrast to the LSADE, the MSAMT had better mean values on 19 test functions.
Convergence curves of the MSAMT, HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO for the 50D and 100D test functions are given in Figure A4 and Figure A5, respectively. The figures demonstrate that the MSAMT had excellent convergence and could produce outcomes that are either better or comparable on most of the test functions. The HSAOA obtained the optimal results only on the F1-50 and F3-50 functions. The DSP-SAEA outperformed the other algorithms on the F9-100 and F11-100 functions, while the BiS-SAHA was most effective only for the F8-100 function. The FSAPSO obtained the best average value on five test problems, i.e., F6-50, F8-50, F9-50, F10-50, and F11-50. Figure A6 and Figure A7 show the convergence curves of the MSAMT, GL-SADE, and LSADE on the 50- and 100-dimensional test problems, respectively. The MSAMT achieved the best mean values for all functions except F6-50, F8, F9, F10-50, and F11-50. The GL-SADE beat the other algorithms only on the F9-100 function. The LSADE won on 7 out of 28 functions. In summary, the MSAMT achieved the best mean values on most of the test problems and was competitive in solving high-dimensional problems.

3.5. Computational Complexity Analysis of the MSAMT

Another essential part of measuring the performance of the algorithm is its computational complexity. The time complexity of the MSAMT is mainly determined by five main parts: the real function evaluation, the training of the dynamic ensemble of surrogates and the RBF models, the sorting of the samples in the database, the process of optimization using the G-MFEA, and other operations. The computational costs of other operations in the process are usually negligible, such as the generation and mutation of the population and the updating of the database.
Run times of the proposed MSAMT and the six competing algorithms on the 10-, 20-, 30-, 50-, and 100-dimensional test functions are given in Figure 6. Based on the statistics presented in Figure 6, for the 10-dimensional functions, the BiS-SAHA consumed the least time, followed by the FSAPSO. For the 20- and 30-dimensional functions, the FSAPSO had the lowest time cost, followed by the BiS-SAHA. The LSADE took the least time on the 50- and 100-dimensional functions, while the HSAOA and the DSP-SAEA required significantly more time than the other algorithms. This is because the FSAPSO, LSADE, and GL-SADE are built on a single surrogate, whereas the BiS-SAHA, MSAMT, DSP-SAEA, and HSAOA all use a surrogate ensemble. The proposed MSAMT consumed the most time on the 10-dimensional test functions, but as the problem dimension increased, its time cost became much lower than those of the HSAOA and the DSP-SAEA.

3.6. Case Study on a Spatial Truss Design Problem

Spatial truss design is a representative optimization design problem; it involves several design variables and is widely used to verify the prediction accuracy and robustness of different algorithms. In the following, the MSAMT proposed in this paper was applied to a spatial truss design problem and compared with existing algorithms. The design problem aims to find the best structural parameters under the maximum allowable horizontal displacement $V_{max}$, including the lengths of the horizontal, longitudinal, and vertical bars (denoted as $L_1$, $L_2$, and $L_3$) and the Young's moduli (denoted as $E_1$-$E_6$) and cross-sectional areas (denoted as $A_1$-$A_6$) of the vertical, horizontal/longitudinal, and layer diagonal bars from the first to the fourth floor. Figure 7 shows the finite element model of the spatial truss structure comprising 72 bar elements under horizontal loads ($F_1$-$F_8$). The horizontal loads were fixed values set as $F_1 = F_2 = 3.0 \times$ N, $F_3 = F_4 = 4.0 \times$ N, and $F_5 = F_6 = 5.0 \times$ N. The initial values and ranges of the parameters to be optimized are shown in Table 3. The objective function to be optimized is given by the following:
$\min_{x \in \Omega^d} \frac{V(x)}{V_{max}}, \quad \text{subject to:} \ x_{lb} \le x \le x_{ub}$ (17)
where $V$ represents the maximum horizontal displacement, $x$ is a vector containing the 15 parameters to be optimized, $x_{lb}$ and $x_{ub}$ are the lower and upper bounds of the parameters, respectively, and $d$ denotes the number of structural parameters ($d$ = 15).
The HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO were chosen as the competing algorithms, with the relevant parameters kept consistent with the configurations in the previous sections. The optimal structural parameters and statistical results of the MSAMT and the competing algorithms for the spatial truss structure design problem are given in Table A6 and Table 4, respectively. The data show that the MSAMT achieved the best results. The convergence history is given in Figure 8; the MSAMT converged faster than the comparison algorithms. In conclusion, the above results show that the MSAMT is highly competitive in solving real engineering optimal design problems.

4. Conclusions

This paper developed a multi-surrogate-assisted multi-tasking optimization algorithm (MSAMT) in the framework of the G-MFEA for expensive optimization problems. The proposed MSAMT first divides the space into several disjoint regions using a decision space partition strategy and then builds an RBF model on each region. A new dynamic ensemble of surrogates is then suggested, constructed using the samples with the best fitness values. Several RBF models and a surrogate ensemble are then optimized simultaneously as tasks in multi-tasking optimization using the generalized multi-factorial evolutionary algorithm.
To evaluate the performance of the proposed approach, algorithm analyses of the MSAMT were first carried out on fourteen test functions, analyzing the parameters, the decision space partition strategy, the dynamic ensemble of surrogates, and the computational complexity. Finally, fourteen 10- to 100-dimensional test functions and a spatial truss design problem were used to compare the proposed approach with several recently proposed SAEAs. The results show that the proposed MSAMT exhibits a superior performance compared to comparison algorithms in most of the test functions and an engineering real-world problem.
The present study shows that the surrogate models can greatly affect the optimization results, as they are utilized to select the samples that need to be evaluated exactly. Furthermore, as the modality of the optimization problem increases, both global and local searches may be unable to identify better individuals. Therefore, the proposed MSAMT enhances the population diversity through strategies such as the new dynamic ensemble of surrogates, the decision space partitioning, and the combination of local and global searches under the G-MFEA framework to improve the performance of the algorithm.
In this work, the components used for constructing the ensemble surrogate are RBF, PRS, and SVR, and more components such as Kriging and the response surface model (RSM) could be further investigated to improve the ability of the MSAMT to handle various optimization problems. In addition, the control functions in the formula for calculating the weight coefficients (Equation (10)) can be adaptively selected according to different model combinations to further explore more reasonable weighting factors. Finally, it is possible to consider how the suggested MSAMT makes trade-offs between different objectives, thereby extending it to multi-objective optimization.

Author Contributions

Methodology, investigation, software, writing—original draft, data curation, and formal analysis, H.L.; conceptualization, methodology, supervision, project administration, and writing—review and editing, L.C.; methodology, supervision, validation, funding acquisition, and writing—review and editing, J.Z.; visualization and writing—review and editing, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant No. 11872190).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Performances of the MSAMT with different n values on test functions with 10, 20, and 30 dimensions (each cell gives Mean/Std).
No. | D | n = 2 | n = 3 | n = 4 | n = 5 | n = 6
F1 | 10 | 6.1576 × 10^−04/9.0159 × 10^−04 | 1.1315 × 10^−03/2.1076 × 10^−03 | 1.3484 × 10^−03/1.6437 × 10^−03 | 5.2604 × 10^−03/1.4730 × 10^−03 | 6.4951 × 10^−03/1.2067 × 10^−02
F1 | 20 | 8.7857 × 10^−04/1.2927 × 10^−03 | 7.5156 × 10^−04/1.0109 × 10^−03 | 2.3772 × 10^−03/3.4051 × 10^−03 | 1.1820 × 10^−03/1.7333 × 10^−03 | 1.2078 × 10^−03/1.8732 × 10^−03
F1 | 30 | 5.0537 × 10^−04/8.3404 × 10^−04 | 7.4973 × 10^−04/1.1724 × 10^−03 | 2.7145 × 10^−03/5.4419 × 10^−03 | 1.2440 × 10^−03/1.6409 × 10^−03 | 1.7024 × 10^−03/4.0160 × 10^−03
F2 | 10 | 8.9377 × 10^0/1.9688 × 10^−02 | 8.9276 × 10^0/2.4215 × 10^−02 | 8.9145 × 10^0/1.8565 × 10^−02 | 8.9132 × 10^0/1.4603 × 10^−02 | 8.9213 × 10^0/1.1374 × 10^−02
F2 | 20 | 1.8948 × 10^1/4.3698 × 10^−02 | 1.8972 × 10^1/3.0864 × 10^−02 | 1.8968 × 10^1/2.9814 × 10^−02 | 1.8982 × 10^1/2.2537 × 10^−02 | 1.8957 × 10^1/3.4526 × 10^−02
F2 | 30 | 2.8907 × 10^1/6.3428 × 10^−02 | 2.8932 × 10^1/5.2952 × 10^−02 | 2.8933 × 10^1/5.1432 × 10^−02 | 2.8954 × 10^1/4.2116 × 10^−02 | 2.8942 × 10^1/4.6175 × 10^−02
F3 | 10 | 1.0046 × 10^−1/9.8901 × 10^−2 | 2.5499 × 10^−1/2.4696 × 10^−1 | 1.9278 × 10^−1/1.8896 × 10^−1 | 4.4993 × 10^−1/4.7951 × 10^−1 | 3.6344 × 10^−1/4.7301 × 10^−1
F3 | 20 | 3.5661 × 10^−2/3.4910 × 10^−2 | 3.8880 × 10^−2/4.9165 × 10^−2 | 3.4230 × 10^−2/2.5054 × 10^−2 | 5.9253 × 10^−2/6.3396 × 10^−2 | 3.6311 × 10^−2/4.0904 × 10^−2
F3 | 30 | 2.0405 × 10^−2/2.5210 × 10^−2 | 2.8334 × 10^−2/3.1228 × 10^−2 | 2.4139 × 10^−2/2.2255 × 10^−2 | 2.6908 × 10^−2/2.4515 × 10^−2 | 4.1592 × 10^−2/3.5518 × 10^−2
F4 | 10 | 2.4538 × 10^−1/2.5738 × 10^−1 | 2.9535 × 10^−1/3.3024 × 10^−1 | 4.1281 × 10^−1/3.4644 × 10^−1 | 4.5236 × 10^−1/3.5449 × 10^−1 | 3.7287 × 10^−1/3.7932 × 10^−1
F4 | 20 | 3.6554 × 10^−2/4.8874 × 10^−2 | 9.7207 × 10^−2/2.0167 × 10^−1 | 1.4333 × 10^−1/1.9419 × 10^−1 | 1.1596 × 10^−1/1.9961 × 10^−1 | 1.4841 × 10^−1/1.5769 × 10^−1
F4 | 30 | 2.2352 × 10^−2/3.6281 × 10^−2 | 3.9419 × 10^−2/6.3087 × 10^−2 | 5.1184 × 10^−2/8.1190 × 10^−2 | 9.0757 × 10^−2/1.3953 × 10^−1 | 6.1304 × 10^−2/9.2959 × 10^−2
F5 | 10 | 1.5706 × 10^−2/1.7458 × 10^−2 | 4.3639 × 10^−2/1.2757 × 10^−1 | 9.5155 × 10^−2/2.8352 × 10^−1 | 1.5851 × 10^−1/4.0449 × 10^−1 | 2.0106 × 10^−1/3.0228 × 10^−1
F5 | 20 | 2.5543 × 10^−2/4.0294 × 10^−2 | 3.9266 × 10^−2/6.3480 × 10^−2 | 2.8908 × 10^−2/4.8136 × 10^−2 | 2.2724 × 10^−2/6.1190 × 10^−2 | 2.8357 × 10^−2/3.9895 × 10^−2
F5 | 30 | 7.7361 × 10^−3/2.0119 × 10^−2 | 1.8737 × 10^−2/3.8945 × 10^−2 | 1.1835 × 10^−2/1.4120 × 10^−2 | 2.3746 × 10^−2/4.0766 × 10^−2 | 1.7943 × 10^−2/2.4801 × 10^−2
F6 | 10 | 3.9109 × 10^8/2.4226 × 10^8 | 3.2550 × 10^8/1.9101 × 10^8 | 3.4137 × 10^8/1.6906 × 10^8 | 3.4896 × 10^8/1.7638 × 10^8 | 3.3892 × 10^8/1.7821 × 10^8
F6 | 20 | 4.4850 × 10^8/1.6046 × 10^8 | 4.4514 × 10^8/2.0654 × 10^8 | 4.3962 × 10^8/1.3582 × 10^8 | 4.9135 × 10^8/1.6797 × 10^8 | 5.3339 × 10^8/1.9813 × 10^8
F6 | 30 | 1.6204 × 10^9/3.4767 × 10^8 | 1.7716 × 10^9/4.8095 × 10^8 | 1.8055 × 10^9/4.1151 × 10^8 | 1.8337 × 10^9/4.2732 × 10^8 | 1.9834 × 10^9/5.5657 × 10^8
F7 | 10 | −1.1906 × 10^2/8.2173 × 10^−2 | −1.1906 × 10^2/1.3282 × 10^−1 | −1.1906 × 10^2/1.2635 × 10^−1 | −1.1912 × 10^2/6.3425 × 10^1 | −1.1902 × 10^2/1.1162 × 10^−1
F7 | 20 | −1.1883 × 10^2/9.7998 × 10^−2 | −1.1885 × 10^2/8.0033 × 10^−2 | −1.1883 × 10^2/8.4834 × 10^−2 | −1.1885 × 10^2/8.4656 × 10^−2 | −1.1883 × 10^2/9.7280 × 10^−2
F7 | 30 | −1.1872 × 10^2/7.4332 × 10^−2 | −1.1870 × 10^2/5.8735 × 10^−2 | −1.1871 × 10^2/5.5026 × 10^−2 | −1.1873 × 10^2/4.8874 × 10^−2 | −1.1872 × 10^2/5.1845 × 10^−2
F8 | 10 | −2.1209 × 10^2/1.6889 × 10^1 | −2.1083 × 10^2/1.8400 × 10^1 | −2.0987 × 10^2/1.7706 × 10^1 | −2.0507 × 10^2/1.6862 × 10^1 | −2.0916 × 10^2/2.1757 × 10^1
F8 | 20 | −6.6481 × 10^1/2.9068 × 10^1 | −6.8770 × 10^1/2.8479 × 10^1 | −5.3536 × 10^1/2.5079 × 10^1 | −5.6753 × 10^1/2.2102 × 10^1 | −7.4236 × 10^1/2.9150 × 10^1
F8 | 30 | 7.6683 × 10^1/4.5207 × 10^1 | 9.2089 × 10^1/3.5833 × 10^1 | 7.5062 × 10^1/3.7829 × 10^1 | 7.8364 × 10^1/3.5119 × 10^1 | 6.6761 × 10^1/4.4122 × 10^1
F9 | 10 | −1.8777 × 10^2/1.5327 × 10^1 | −1.7113 × 10^2/2.4084 × 10^1 | −1.7066 × 10^2/2.7885 × 10^1 | −1.7145 × 10^2/2.1716 × 10^1 | −1.7240 × 10^2/2.4936 × 10^1
F9 | 20 | 6.4664 × 10^1/3.7553 × 10^1 | 6.7919 × 10^1/4.3892 × 10^1 | 6.6263 × 10^1/4.1520 × 10^1 | 5.5372 × 10^1/3.2486 × 10^1 | 4.8552 × 10^1/5.0830 × 10^1
F9 | 30 | 3.8454 × 10^2/5.8307 × 10^1 | 4.1287 × 10^2/5.2558 × 10^1 | 3.7172 × 10^2/9.9035 × 10^1 | 3.8358 × 10^2/1.0230 × 10^2 | 3.3660 × 10^2/9.2954 × 10^1
F10 | 10 | −2.9541 × 10^2/1.3311 × 10^−1 | −2.9542 × 10^2/1.3419 × 10^−1 | −2.9544 × 10^2/1.7197 × 10^−1 | −2.9543 × 10^2/1.4707 × 10^−1 | −2.9545 × 10^2/1.3905 × 10^−1
F10 | 20 | −2.9078 × 10^2/2.0940 × 10^−1 | −2.9076 × 10^2/1.3735 × 10^−1 | −2.9077 × 10^2/2.3274 × 10^−1 | −2.9073 × 10^2/1.7840 × 10^−1 | −2.9080 × 10^2/2.1605 × 10^−1
F10 | 30 | −2.8567 × 10^2/1.9433 × 10^−1 | −2.8567 × 10^2/2.0172 × 10^−1 | −2.8570 × 10^2/1.8898 × 10^−1 | −2.8567 × 10^2/2.2014 × 10^−1 | −2.8570 × 10^2/2.0380 × 10^−1
F11 | 10 | 7.6382 × 10^2/1.3531 × 10^2 | 7.9326 × 10^2/1.4033 × 10^2 | 7.8300 × 10^2/1.3476 × 10^2 | 7.4459 × 10^2/1.4442 × 10^2 | 7.4430 × 10^2/1.2479 × 10^2
F11 | 20 | 1.0332 × 10^3/1.4849 × 10^2 | 1.0469 × 10^3/1.5155 × 10^2 | 1.0438 × 10^3/1.6408 × 10^2 | 1.0268 × 10^3/1.7361 × 10^2 | 1.0437 × 10^3/1.5754 × 10^2
F11 | 30 | 1.2662 × 10^3/1.0251 × 10^2 | 1.2770 × 10^3/1.1732 × 10^2 | 1.2701 × 10^3/1.1117 × 10^2 | 1.2567 × 10^3/1.2224 × 10^2 | 1.2538 × 10^3/1.3072 × 10^2
F12 | 10 | 9.1010 × 10^2/1.5644 × 10^−1 | 9.1007 × 10^2/1.2011 × 10^−1 | 9.1026 × 10^2/3.9976 × 10^−1 | 9.1038 × 10^2/4.5278 × 10^−1 | 9.1037 × 10^2/7.8635 × 10^−1
F12 | 20 | 9.1001 × 10^2/9.9019 × 10^−3 | 9.1002 × 10^2/2.6173 × 10^−2 | 9.1002 × 10^2/2.4352 × 10^−2 | 9.1001 × 10^2/2.0584 × 10^−2 | 9.1005 × 10^2/1.1789 × 10^−1
F12 | 30 | 9.1000 × 10^2/6.0359 × 10^−3 | 9.1001 × 10^2/9.8419 × 10^−3 | 9.1001 × 10^2/1.0193 × 10^−2 | 9.1001 × 10^2/6.8736 × 10^−3 | 9.1001 × 10^2/1.2882 × 10^−2
F13 | 10 | 9.1014 × 10^2/2.2779 × 10^−1 | 9.1008 × 10^2/8.9839 × 10^−2 | 9.1020 × 10^2/3.9035 × 10^−1 | 9.1015 × 10^2/1.8995 × 10^−1 | 9.1048 × 10^2/1.0171 × 10^0
F13 | 20 | 9.1001 × 10^2/1.0621 × 10^−2 | 9.1002 × 10^2/1.9057 × 10^−2 | 9.1002 × 10^2/5.5961 × 10^−2 | 9.1003 × 10^2/6.8497 × 10^−2 | 9.1001 × 10^2/1.6544 × 10^−2
F13 | 30 | 9.1001 × 10^2/1.0603 × 10^−2 | 9.1001 × 10^2/4.4586 × 10^−3 | 9.1001 × 10^2/6.7372 × 10^−3 | 9.1001 × 10^2/7.0401 × 10^−3 | 9.1001 × 10^2/5.5567 × 10^−3
F14 | 10 | 9.1005 × 10^2/6.6851 × 10^−2 | 9.1008 × 10^2/9.6278 × 10^−2 | 9.1037 × 10^2/6.8048 × 10^−1 | 9.1042 × 10^2/6.8477 × 10^−1 | 9.1078 × 10^2/1.7724 × 10^0
F14 | 20 | 9.1001 × 10^2/1.2135 × 10^−2 | 9.1002 × 10^2/1.2772 × 10^−2 | 9.1001 × 10^2/8.6495 × 10^−3 | 9.1003 × 10^2/3.5058 × 10^−2 | 9.1001 × 10^2/2.4665 × 10^−2
F14 | 30 | 9.1001 × 10^2/6.3484 × 10^−3 | 9.1001 × 10^2/1.4759 × 10^−2 | 9.1001 × 10^2/1.0503 × 10^−2 | 9.1001 × 10^2/1.5119 × 10^−2 | 9.1001 × 10^2/6.8025 × 10^−3
Table A2. Performances of the MSAMT, MSAMT-KRG, and MSAMT-fixed on 20-dimensional test functions.
No. | MSAMT Mean/Std | MSAMT Time (s) | MSAMT-KRG Mean/Std | MSAMT-KRG Time (s) | MSAMT-Fixed Mean/Std | MSAMT-Fixed Time (s)
F1 | 7.1867 × 10^−4/1.2786 × 10^−3 | 2.2559 × 10^2 | 8.5112 × 10^−4/1.6087 × 10^−3 | 1.4392 × 10^3 | 5.6628 × 10^−5/1.2308 × 10^−4 | 1.8153 × 10^3
F2 | 1.8918 × 10^1/3.0714 × 10^−2 | 1.3417 × 10^2 | 1.8957 × 10^1/3.2927 × 10^−2 | 1.3663 × 10^3 | 1.8932 × 10^1/4.3047 × 10^−2 | 1.8511 × 10^3
F3 | 2.2813 × 10^−2/1.8972 × 10^−2 | 2.0433 × 10^2 | 3.1443 × 10^−2/3.0788 × 10^−2 | 1.4397 × 10^3 | 1.8529 × 10^−2/1.9483 × 10^−2 | 1.7036 × 10^3
F4 | 1.8587 × 10^−2/2.6338 × 10^−2 | 2.0119 × 10^2 | 4.6343 × 10^−2/6.6799 × 10^−2 | 1.4155 × 10^3 | 2.1738 × 10^−2/2.5219 × 10^−2 | 1.8022 × 10^3
F5 | 1.9954 × 10^−2/3.7313 × 10^−2 | 1.6742 × 10^2 | 6.4855 × 10^−3/1.2761 × 10^−2 | 1.4257 × 10^3 | 5.7906 × 10^−4/9.8625 × 10^−4 | 1.9176 × 10^3
F6 | 4.5970 × 10^8/1.7431 × 10^8 | 2.0610 × 10^2 | 4.4432 × 10^8/1.8899 × 10^8 | 1.0199 × 10^3 | 4.7695 × 10^8/2.0986 × 10^8 | 1.4189 × 10^3
F7 | −1.1891 × 10^2/8.6824 × 10^−2 | 2.0682 × 10^2 | −1.1883 × 10^2/9.6093 × 10^−2 | 1.0735 × 10^3 | −1.1884 × 10^2/7.2111 × 10^−2 | 1.3799 × 10^3
F8 | −6.1893 × 10^1/3.3596 × 10^1 | 1.9759 × 10^2 | −5.6754 × 10^1/2.7057 × 10^1 | 1.4987 × 10^3 | −4.6272 × 10^1/3.2021 × 10^1 | 1.9541 × 10^3
F9 | 7.3996 × 10^1/3.6260 × 10^1 | 2.1021 × 10^2 | 5.0348 × 10^1/4.5175 × 10^1 | 9.3336 × 10^2 | 6.6196 × 10^1/3.5719 × 10^1 | 1.3000 × 10^3
F10 | −2.9075 × 10^2/1.7213 × 10^−1 | 2.0911 × 10^2 | −2.9078 × 10^2/1.5655 × 10^−1 | 1.0734 × 10^3 | −2.9081 × 10^2/1.8345 × 10^−1 | 1.3873 × 10^3
F11 | 1.0258 × 10^3/1.7481 × 10^2 | 2.1239 × 10^2 | 1.0309 × 10^3/1.7726 × 10^2 | 1.1297 × 10^3 | 1.0365 × 10^3/1.8045 × 10^2 | 1.6549 × 10^3
F12 | 9.1001 × 10^2/9.8825 × 10^−3 | 1.9979 × 10^2 | 9.1001 × 10^2/1.2676 × 10^−2 | 1.4347 × 10^3 | 9.1001 × 10^2/6.2157 × 10^−3 | 1.8808 × 10^3
F13 | 9.1001 × 10^2/1.6433 × 10^−2 | 2.0068 × 10^2 | 9.1001 × 10^2/1.6238 × 10^−2 | 1.3507 × 10^3 | 9.1001 × 10^2/3.7595 × 10^−3 | 1.8873 × 10^3
F14 | 9.1001 × 10^2/7.5166 × 10^−3 | 1.9395 × 10^2 | 9.1001 × 10^2/1.5057 × 10^−2 | 1.4379 × 10^3 | 9.1001 × 10^2/4.5504 × 10^−3 | 1.7661 × 10^3
Table A3. Statistical results (AVG ± STD) obtained by the MSAMT, HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO on test functions with 10, 20, and 30 dimensions.
No. | D | MSAMT | HSAOA | DSP-SAEA | BiS-SAHA | FSAPSO
F1 | 10 | 2.2186 × 10^−3/7.3964 × 10^−3 | 2.4575 × 10^−3/2.4031 × 10^−3(+) | 3.3321 × 10^−7/6.0800 × 10^−7(−) | 3.1726 × 10^−2/3.9344 × 10^−2(+) | 2.7795 × 10^−2/3.0173 × 10^−2(+)
F1 | 20 | 7.1867 × 10^−4/1.2786 × 10^−3 | 9.5862 × 10^−3/2.1410 × 10^−2(+) | 9.8464 × 10^−2/1.4483 × 10^−1(+) | 1.5405 × 10^−1/1.1005 × 10^−1(+) | 1.9539 × 10^−1/1.3521 × 10^−1(+)
F1 | 30 | 5.4775 × 10^−4/8.8610 × 10^−4 | 8.1057 × 10^−3/1.8270 × 10^−2(+) | 4.9800 × 10^−1/2.7486 × 10^−1(+) | 5.3745 × 10^−1/2.1538 × 10^−1(+) | 6.1116 × 10^−1/3.2927 × 10^−1(+)
F2 | 10 | 8.9291 × 10^0/1.6436 × 10^−2 | 9.7479 × 10^0/1.4695 × 10^0(+) | 1.0654 × 10^1/1.5344 × 10^0(+) | 1.6730 × 10^1/5.7852 × 10^0(+) | 1.4193 × 10^1/7.3871 × 10^0(+)
F2 | 20 | 1.8918 × 10^1/3.0714 × 10^−2 | 2.3221 × 10^1/4.1746 × 10^0(+) | 2.3291 × 10^1/2.9022 × 10^0(+) | 5.6260 × 10^1/2.1739 × 10^1(+) | 4.6111 × 10^1/2.6400 × 10^1(+)
F2 | 30 | 2.8913 × 10^1/6.1577 × 10^−2 | 3.0580 × 10^1/1.6938 × 10^0(+) | 3.2337 × 10^1/1.2599 × 10^0(+) | 9.6668 × 10^1/2.8762 × 10^1(+) | 6.1852 × 10^1/1.9570 × 10^1(+)
F3 | 10 | 7.9531 × 10^−2/9.6623 × 10^−2 | 6.9486 × 10^−1/8.2074 × 10^−1(+) | 7.0093 × 10^0/2.4921 × 10^0(+) | 5.6567 × 10^0/2.8986 × 10^0(+) | 5.7167 × 10^0/2.4296 × 10^0(+)
F3 | 20 | 2.2813 × 10^−2/1.8972 × 10^−2 | 9.6387 × 10^−3/9.7639 × 10^−3(−) | 7.8720 × 10^0/1.8258 × 10^0(+) | 5.9863 × 10^0/4.0404 × 10^0(+) | 7.5047 × 10^0/2.7653 × 10^0(+)
F3 | 30 | 1.9428 × 10^−2/2.3069 × 10^−2 | 1.2704 × 10^−3/5.7678 × 10^−4(−) | 5.8237 × 10^0/1.2778 × 10^0(+) | 4.3556 × 10^0/1.6599 × 10^0(+) | 7.8665 × 10^0/2.3169 × 10^0(+)
F4 | 10 | 1.2388 × 10^−1/1.7845 × 10^−1 | 6.8511 × 10^−1/2.8208 × 10^−1(+) | 1.0043 × 10^0/7.5751 × 10^0(+) | 7.1742 × 10^−1/2.9906 × 10^−1(+) | 9.4176 × 10^−1/6.4448 × 10^−2(+)
F4 | 20 | 1.8587 × 10^−2/2.6338 × 10^−2 | 2.7908 × 10^−1/3.4399 × 10^−1(+) | 9.1369 × 10^−1/9.7421 × 10^−2(+) | 3.5613 × 10^−1/1.9511 × 10^−1(+) | 7.4943 × 10^−1/1.2541 × 10^−1(+)
F4 | 30 | 1.7011 × 10^−2/3.2582 × 10^−2 | 1.3709 × 10^−1/1.9915 × 10^−1(+) | 9.6615 × 10^−1/5.4062 × 10^−2(+) | 2.3262 × 10^−1/1.4185 × 10^−1(+) | 6.5227 × 10^−1/1.0699 × 10^−1(+)
F5 | 10 | 5.1065 × 10^−2/9.6333 × 10^−2 | 4.7218 × 10^1/2.8014 × 10^1(+) | 3.9081 × 10^1/1.4233 × 10^1(+) | 3.6467 × 10^1/2.1593 × 10^1(+) | 4.0986 × 10^1/1.6733 × 10^1(+)
F5 | 20 | 1.9954 × 10^−2/3.7313 × 10^−2 | 5.2701 × 10^1/2.5755 × 10^1(+) | 8.0158 × 10^1/2.6458 × 10^1(+) | 6.3888 × 10^1/3.0763 × 10^1(+) | 6.1867 × 10^1/3.2755 × 10^1(+)
F5 | 30 | 3.4102 × 10^−3/5.3363 × 10^−3 | 4.6209 × 10^1/2.4258 × 10^1(+) | 2.1261 × 10^1/1.2658 × 10^1(+) | 6.1469 × 10^1/2.9189 × 10^1(+) | 7.6937 × 10^1/3.5378 × 10^1(+)
F6 | 10 | 3.3595 × 10^8/1.6961 × 10^8 | 3.8227 × 10^7/4.1371 × 10^7(−) | 7.9564 × 10^7/6.1272 × 10^7(−) | 2.3740 × 10^7/3.0509 × 10^7(−) | 3.1398 × 10^7/3.0647 × 10^7(−)
F6 | 20 | 4.5970 × 10^8/1.7431 × 10^8 | 5.8898 × 10^8/2.0217 × 10^8(+) | 5.6116 × 10^8/2.8450 × 10^8(+) | 5.7606 × 10^8/2.2375 × 10^8(+) | 6.1811 × 10^8/2.9055 × 10^8(+)
F6 | 30 | 1.6180 × 10^9/3.5130 × 10^8 | 8.6408 × 10^7/4.6362 × 10^7(−) | 1.9755 × 10^8/7.0611 × 10^7(−) | 8.1104 × 10^7/4.5028 × 10^7(−) | 7.7800 × 10^7/3.3843 × 10^7(−)
F7 | 10 | −1.1908 × 10^2/1.3742 × 10^−1 | −1.1884 × 10^2/1.0851 × 10^−1(+) | −1.1907 × 10^2/1.2945 × 10^−1(=) | −1.1906 × 10^2/1.3686 × 10^−1(=) | −1.1902 × 10^2/1.2587 × 10^−1(=)
F7 | 20 | −1.1891 × 10^2/8.6824 × 10^−2 | −1.1888 × 10^2/9.6667 × 10^−2(=) | −1.1885 × 10^2/1.0603 × 10^−1(=) | −1.1885 × 10^2/1.0527 × 10^−1(=) | −1.1885 × 10^2/9.6983 × 10^−2(=)
F7 | 30 | −1.1876 × 10^2/5.8657 × 10^−2 | −1.1873 × 10^2/5.4940 × 10^−2(=) | −1.1874 × 10^2/5.3362 × 10^−2(=) | −1.1870 × 10^2/5.1004 × 10^−2(=) | −1.1872 × 10^2/4.7621 × 10^−2(=)
F8 | 10 | −2.1031 × 10^2/1.7589 × 10^1 | −2.8556 × 10^2/1.4680 × 10^1(−) | −2.7223 × 10^2/1.9182 × 10^1(−) | −2.9346 × 10^2/1.6692 × 10^1(−) | −2.8002 × 10^2/1.9757 × 10^1(−)
F8 | 20 | −6.1893 × 10^1/3.3596 × 10^1 | −2.5524 × 10^2/2.1918 × 10^1(−) | −2.3584 × 10^2/2.7262 × 10^1(−) | −2.6230 × 10^2/3.2670 × 10^1(−) | −2.4975 × 10^2/3.3611 × 10^1(−)
F8 | 30 | 9.7566 × 10^1/4.0362 × 10^1 | −2.2256 × 10^2/3.2951 × 10^1(−) | −1.6840 × 10^2/3.1401 × 10^1(−) | −1.1260 × 10^2/4.4708 × 10^1(−) | −2.2617 × 10^2/2.8949 × 10^1(−)
F9 | 10 | −1.7629 × 10^2/1.7730 × 10^1 | −2.8029 × 10^2/2.2697 × 10^1(−) | −2.6487 × 10^2/1.2332 × 10^1(−) | −2.9478 × 10^2/1.5338 × 10^1(−) | −2.7535 × 10^2/1.8625 × 10^1(−)
F9 | 20 | 7.3996 × 10^1/3.6260 × 10^1 | −1.1239 × 10^2/3.3385 × 10^1(−) | −1.5357 × 10^2/1.3680 × 10^1(−) | −1.1359 × 10^2/3.1975 × 10^1(−) | −1.4092 × 10^2/1.9300 × 10^1(−)
F9 | 30 | 4.0272 × 10^2/4.2241 × 10^1 | −1.6286 × 10^2/3.6614 × 10^1(−) | −1.1403 × 10^2/2.3933 × 10^1(−) | −9.3406 × 10^1/4.1676 × 10^1(−) | −1.6389 × 10^2/4.0406 × 10^1(−)
F10 | 10 | −2.9541 × 10^2/1.3752 × 10^−1 | −2.9552 × 10^2/1.9088 × 10^−1(=) | −2.9555 × 10^2/1.6229 × 10^−1(=) | −2.9557 × 10^2/2.2424 × 10^−1(=) | −2.9554 × 10^2/1.6612 × 10^−1(=)
F10 | 20 | −2.9075 × 10^2/1.7213 × 10^−1 | −2.9063 × 10^2/1.6249 × 10^−1(=) | −2.9073 × 10^2/1.9203 × 10^−1(=) | −2.9064 × 10^2/2.0176 × 10^−1(=) | −2.9069 × 10^2/2.1238 × 10^−1(=)
F10 | 30 | −2.8570 × 10^2/1.9496 × 10^−1 | −2.8577 × 10^2/1.9317 × 10^−1(=) | −2.8597 × 10^2/2.7387 × 10^−1(=) | −2.8580 × 10^2/2.3075 × 10^−1(=) | −2.8587 × 10^2/2.2160 × 10^−1(=)
F11 | 10 | 7.8293 × 10^2/1.4282 × 10^2 | 4.0859 × 10^2/8.5902 × 10^1(−) | 4.5692 × 10^2/9.1476 × 10^1(−) | 4.1606 × 10^2/8.9614 × 10^1(−) | 4.3799 × 10^2/1.3829 × 10^2(−)
F11 | 20 | 1.0258 × 10^3/1.7481 × 10^2 | 5.9594 × 10^2/9.8286 × 10^1(−) | 6.9914 × 10^2/1.1353 × 10^2(−) | 5.7719 × 10^2/7.4371 × 10^1(−) | 6.0940 × 10^2/1.0422 × 10^2(−)
F11 | 30 | 1.2613 × 10^3/1.2295 × 10^2 | 4.9137 × 10^2/9.0273 × 10^1(−) | 6.2205 × 10^2/9.6358 × 10^1(−) | 5.1114 × 10^2/1.3416 × 10^2(−) | 5.6040 × 10^2/1.0212 × 10^2(−)
F12 | 10 | 9.1006 × 10^2/1.4643 × 10^−1 | 1.1360 × 10^3/6.7100 × 10^1(+) | 1.2088 × 10^3/5.1323 × 10^1(+) | 1.1684 × 10^3/6.9199 × 10^1(+) | 1.1621 × 10^3/5.3208 × 10^1(+)
F12 | 20 | 9.1001 × 10^2/9.8825 × 10^−3 | 1.2808 × 10^3/8.0984 × 10^1(+) | 1.2678 × 10^3/4.4563 × 10^1(+) | 1.2720 × 10^3/7.2891 × 10^1(+) | 1.2653 × 10^3/4.0572 × 10^1(+)
F12 | 30 | 9.1001 × 10^2/5.5963 × 10^−3 | 1.0180 × 10^3/5.7919 × 10^1(+) | 1.0939 × 10^3/3.7824 × 10^1(+) | 9.9759 × 10^2/4.3238 × 10^1(+) | 1.0213 × 10^3/6.1759 × 10^1(+)
F13 | 10 | 9.1011 × 10^2/1.3738 × 10^−1 | 1.1264 × 10^3/6.7088 × 10^1(+) | 1.2016 × 10^3/7.0017 × 10^1(+) | 1.1395 × 10^3/3.9149 × 10^1(+) | 1.1673 × 10^3/6.1350 × 10^1(+)
F13 | 20 | 9.1001 × 10^2/1.6433 × 10^−2 | 1.2447 × 10^3/5.9141 × 10^1(+) | 1.2806 × 10^3/5.6512 × 10^1(+) | 1.2790 × 10^3/6.7908 × 10^1(+) | 1.2571 × 10^3/4.5539 × 10^1(+)
F13 | 30 | 9.1001 × 10^2/8.4273 × 10^−3 | 1.0098 × 10^3/4.6875 × 10^1(+) | 1.0963 × 10^3/3.7419 × 10^1(+) | 9.9866 × 10^2/3.4782 × 10^1(+) | 1.0440 × 10^3/6.4525 × 10^1(+)
F14 | 10 | 9.1010 × 10^2/1.8543 × 10^−1 | 1.1254 × 10^3/7.7471 × 10^1(+) | 1.2085 × 10^3/5.6439 × 10^1(+) | 1.1744 × 10^3/9.4485 × 10^1(+) | 1.1591 × 10^3/8.3353 × 10^1(+)
F14 | 20 | 9.1001 × 10^2/7.5166 × 10^−3 | 1.2435 × 10^3/7.2796 × 10^1(+) | 1.2788 × 10^3/5.1990 × 10^1(+) | 1.2569 × 10^3/8.4563 × 10^1(+) | 1.2566 × 10^3/3.8660 × 10^1(+)
F14 | 30 | 9.1001 × 10^2/4.6669 × 10^−3 | 9.9491 × 10^2/4.8337 × 10^1(+) | 1.0910 × 10^3/4.7972 × 10^1(+) | 1.0076 × 10^3/6.3577 × 10^1(+) | 1.0462 × 10^3/6.7583 × 10^1(+)
+/=/− | | | 24/5/13 | 24/6/12 | 25/6/11 | 25/6/11
Table A4. Statistical results (AVG ± STD) obtained by the MSAMT, HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO on test functions with 50 and 100 dimensions.
No. | D | MSAMT | HSAOA | DSP-SAEA | BiS-SAHA | FSAPSO
F1 | 50 | 1.5139 × 10^−3/2.9703 × 10^−3 | 6.6420 × 10^−4/1.1716 × 10^−3(−) | 8.5655 × 10^0/2.3768 × 10^0(+) | 3.5827 × 10^1/1.0570 × 10^2(+) | 8.6070 × 10^0/4.2065 × 10^0(+)
F1 | 100 | 2.2974 × 10^−3/4.1236 × 10^−3 | 2.7541 × 10^−1/5.4010 × 10^−1(+) | 2.2511 × 10^2/3.6066 × 10^1(+) | 5.6663 × 10^2/4.9770 × 10^2(+) | 1.7318 × 10^2/7.0301 × 10^1(+)
F2 | 50 | 4.8989 × 10^1/2.0683 × 10^−2 | 4.9004 × 10^1/3.8941 × 10^−1(+) | 6.4929 × 10^1/8.2349 × 10^0(+) | 3.5411 × 10^2/3.4942 × 10^2(+) | 9.0141 × 10^1/2.3460 × 10^1(+)
F2 | 100 | 9.8980 × 10^1/2.6847 × 10^−2 | 1.0434 × 10^2/2.6441 × 10^0(+) | 3.6493 × 10^2/4.7918 × 10^1(+) | 8.9207 × 10^2/3.4642 × 10^2(+) | 2.9185 × 10^2/3.7701 × 10^1(+)
F3 | 50 | 1.5949 × 10^−2/1.7975 × 10^−2 | 2.1525 × 10^−4/4.6254 × 10^−5(−) | 7.0014 × 10^0/1.3281 × 10^0(+) | 8.9919 × 10^0/3.6852 × 10^0(+) | 7.5177 × 10^0/7.1600 × 10^−1(+)
F3 | 100 | 1.6202 × 10^−2/1.6438 × 10^−2 | 2.1473 × 10^−2/1.0452 × 10^−2(+) | 7.9457 × 10^0/4.3654 × 10^−1(+) | 1.5824 × 10^1/1.9239 × 10^0(+) | 1.3845 × 10^1/1.3817 × 10^0(+)
F4 | 50 | 2.3249 × 10^−2/3.9907 × 10^−2 | 2.1299 × 10^−1/1.3435 × 10^−1(+) | 1.1393 × 10^0/4.7626 × 10^−2(+) | 4.5965 × 10^−1/1.6026 × 10^−1(+) | 7.7063 × 10^−1/6.8645 × 10^−2(+)
F4 | 100 | 2.5404 × 10^−2/4.1350 × 10^−2 | 2.1299 × 10^−1/1.3435 × 10^−1(+) | 5.9262 × 10^0/8.2541 × 10^−1(+) | 1.5578 × 10^1/2.7415 × 10^1(+) | 3.0273 × 10^0/1.0937 × 10^0(+)
F5 | 50 | 4.3405 × 10^−3/8.5700 × 10^−3 | 1.1369 × 10^2/3.9883 × 10^1(+) | 3.9735 × 10^2/3.8838 × 10^1(+) | 1.4639 × 10^2/6.6315 × 10^1(+) | 1.1791 × 10^2/3.1529 × 10^1(+)
F5 | 100 | 1.3211 × 10^−2/2.5329 × 10^−2 | 4.3194 × 10^2/2.2999 × 10^2(+) | 8.6395 × 10^2/3.0622 × 10^1(+) | 5.3052 × 10^2/1.2590 × 10^2(+) | 7.1114 × 10^2/1.2708 × 10^2(+)
F6 | 50 | 4.1817 × 10^9/1.0109 × 10^9 | 2.0962 × 10^8/8.6102 × 10^7(−) | 3.1424 × 10^8/9.2524 × 10^7(−) | 1.4923 × 10^8/6.2721 × 10^7(−) | 1.4518 × 10^8/4.9036 × 10^7(−)
F6 | 100 | 7.4188 × 10^9/1.0592 × 10^9 | 8.1092 × 10^9/1.4021 × 10^9(+) | 8.1388 × 10^9/1.5812 × 10^9(+) | 9.6883 × 10^9/2.0580 × 10^9(+) | 8.9691 × 10^9/1.4803 × 10^9(+)
F7 | 50 | −1.1865 × 10^2/4.2963 × 10^−2 | −1.1862 × 10^2/4.9648 × 10^−2(=) | −1.1865 × 10^2/5.0892 × 10^−2(=) | −1.1862 × 10^2/4.5914 × 10^−2(=) | −1.1863 × 10^2/3.7352 × 10^−2(=)
F7 | 100 | −1.1853 × 10^2/3.9229 × 10^−2 | −1.1852 × 10^2/3.9939 × 10^−2(=) | −1.1852 × 10^2/2.9043 × 10^−2(=) | −1.1851 × 10^2/2.4057 × 10^−2(=) | −1.1852 × 10^2/3.1575 × 10^−2(=)
F8 | 50 | 4.7430 × 10^2/4.0367 × 10^1 | 6.2601 × 10^1/4.0803 × 10^1(−) | 1.5347 × 10^2/3.5406 × 10^1(−) | −5.2526 × 10^1/4.7545 × 10^1(−) | −8.6248 × 10^1/4.4722 × 10^1(−)
F8 | 100 | 1.3291 × 10^3/5.3634 × 10^1 | 8.6902 × 10^2/6.8374 × 10^1(−) | 7.4663 × 10^2/5.4666 × 10^1(−) | 4.2095 × 10^2/7.9754 × 10^1(−) | 6.8752 × 10^2/1.1339 × 10^2(−)
F9 | 50 | 9.8795 × 10^2/5.0244 × 10^1 | 2.9357 × 10^2/1.0638 × 10^2(−) | 2.3344 × 10^2/4.2587 × 10^1(−) | 1.0728 × 10^2/7.2037 × 10^1(−) | −1.4294 × 10^1/4.0704 × 10^1(−)
F9 | 100 | 2.0480 × 10^3/5.4173 × 10^1 | 1.7680 × 10^3/1.3779 × 10^2(−) | 1.0183 × 10^3/7.6560 × 10^1(−) | 1.5369 × 10^3/1.6818 × 10^2(−) | 1.3951 × 10^3/1.7096 × 10^2(−)
F10 | 50 | −2.7589 × 10^2/1.8276 × 10^−1 | −2.7594 × 10^2/2.2129 × 10^−1(=) | −2.7613 × 10^2/3.6886 × 10^−1(−) | −2.7589 × 10^2/2.0396 × 10^−1(=) | −2.7618 × 10^2/1.7560 × 10^−1(−)
F10 | 100 | −2.5137 × 10^2/1.6843 × 10^−1 | −2.5114 × 10^2/1.7815 × 10^−1(=) | −2.5120 × 10^2/1.6293 × 10^−1(=) | −2.5107 × 10^2/1.8699 × 10^−1(=) | −2.5129 × 10^2/2.6863 × 10^−1(=)
F11 | 50 | 1.4403 × 10^3/8.7875 × 10^1 | 6.3811 × 10^2/1.3438 × 10^2(−) | 6.0781 × 10^2/6.7380 × 10^1(−) | 5.7609 × 10^2/8.6520 × 10^1(−) | 4.8557 × 10^2/9.5737 × 10^1(−)
F11 | 100 | 1.5965 × 10^3/5.8848 × 10^1 | 9.4489 × 10^2/9.5478 × 10^1(−) | 6.7137 × 10^2/3.6199 × 10^1(−) | 8.4819 × 10^2/7.7597 × 10^1(−) | 7.7777 × 10^2/9.1593 × 10^1(−)
F12 | 50 | 9.1000 × 10^2/2.2301 × 10^−3 | 9.2622 × 10^2/7.2549 × 10^1(+) | 1.1402 × 10^3/3.9379 × 10^1(+) | 1.0975 × 10^3/5.4559 × 10^1(+) | 1.1225 × 10^3/5.2349 × 10^1(+)
F12 | 100 | 9.1000 × 10^2/3.2326 × 10^−3 | 1.0103 × 10^3/5.1691 × 10^1(+) | 1.2434 × 10^3/4.7584 × 10^1(+) | 1.4063 × 10^3/3.4757 × 10^1(+) | 1.4119 × 10^3/3.9495 × 10^1(+)
F13 | 50 | 9.1000 × 10^2/4.4973 × 10^−3 | 9.4249 × 10^2/1.0050 × 10^2(+) | 1.1483 × 10^3/4.3132 × 10^1(+) | 1.1096 × 10^3/6.5641 × 10^1(+) | 1.1379 × 10^3/5.7170 × 10^1(+)
F13 | 100 | 9.1000 × 10^2/4.4477 × 10^−3 | 1.0144 × 10^3/6.9600 × 10^1(+) | 1.2516 × 10^3/4.2059 × 10^1(+) | 1.4071 × 10^3/4.0183 × 10^1(+) | 1.4066 × 10^3/3.1890 × 10^1(+)
F14 | 50 | 9.1000 × 10^2/3.8153 × 10^−3 | 9.3946 × 10^2/9.2767 × 10^1(+) | 1.1362 × 10^3/3.0108 × 10^1(+) | 1.1092 × 10^3/5.7922 × 10^1(+) | 1.1373 × 10^3/5.5069 × 10^1(+)
F14 | 100 | 9.1000 × 10^2/2.9016 × 10^−3 | 1.0065 × 10^3/4.9499 × 10^1(+) | 1.2338 × 10^3/3.9768 × 10^1(+) | 1.4038 × 10^3/3.4837 × 10^1(+) | 1.4129 × 10^3/2.9752 × 10^1(+)
+/=/− | | | 15/4/9 | 17/3/8 | 17/4/7 | 17/3/8
Table A5. Statistical results (AVG ± STD) obtained by the MSAMT, GL-SADE, and LSADE on test functions with 50 and 100 dimensions.
No. | D | MSAMT | GL-SADE | LSADE
F1 | 50 | 1.5139 × 10^−3/2.9703 × 10^−3 | 1.5209 × 10^3/1.3836 × 10^2(+) | 1.9977 × 10^2/6.6385 × 10^1(+)
F1 | 100 | 2.2974 × 10^−3/4.1236 × 10^−3 | 1.1735 × 10^4/7.3673 × 10^2(+) | 3.1721 × 10^3/4.0235 × 10^2(+)
F2 | 50 | 4.8989 × 10^1/2.0683 × 10^−2 | 1.6358 × 10^3/1.3374 × 10^2(+) | 2.2193 × 10^2/5.4929 × 10^1(+)
F2 | 100 | 9.8980 × 10^1/2.6847 × 10^−2 | 7.4367 × 10^3/6.0353 × 10^2(+) | 1.2336 × 10^3/2.1562 × 10^2(+)
F3 | 50 | 1.5949 × 10^−2/1.7975 × 10^−2 | 1.6100 × 10^1/1.0188 × 10^0(+) | 1.2094 × 10^1/1.8939 × 10^0(+)
F3 | 100 | 1.6202 × 10^−2/1.6438 × 10^−2 | 1.9667 × 10^1/2.3333 × 10^−1(+) | 1.6278 × 10^1/9.3989 × 10^−1(+)
F4 | 50 | 2.3249 × 10^−2/3.9907 × 10^−2 | 2.2429 × 10^0/6.5393 × 10^−1(+) | 2.2290 × 10^1/1.0124 × 10^1(+)
F4 | 100 | 2.5404 × 10^−2/4.1350 × 10^−2 | 8.1014 × 10^2/4.3202 × 10^1(+) | 2.4936 × 10^2/2.2081 × 10^1(+)
F5 | 50 | 4.3405 × 10^−3/8.5700 × 10^−3 | 5.3532 × 10^2/2.2348 × 10^1(+) | 3.3873 × 10^2/5.7416 × 10^1(+)
F5 | 100 | 1.3211 × 10^−2/2.5329 × 10^−2 | 1.1977 × 10^3/3.8349 × 10^1(+) | 9.8004 × 10^2/7.2114 × 10^1(+)
F6 | 50 | 4.1817 × 10^9/1.0109 × 10^9 | 1.2078 × 10^9/2.2629 × 10^8(−) | 6.4619 × 10^8/2.7999 × 10^8(−)
F6 | 100 | 7.4188 × 10^9/1.0592 × 10^9 | 1.3133 × 10^10/2.7524 × 10^9(+) | 1.1618 × 10^10/2.7631 × 10^9(+)
F7 | 50 | −1.1865 × 10^2/4.2963 × 10^−2 | −1.1864 × 10^2/4.4538 × 10^−2(=) | −1.1863 × 10^2/4.8161 × 10^−2(=)
F7 | 100 | −1.1853 × 10^2/3.9229 × 10^−2 | −1.1851 × 10^2/2.3853 × 10^−2(=) | −1.1851 × 10^2/2.2915 × 10^−2(=)
F8 | 50 | 4.7430 × 10^2/4.0367 × 10^1 | 1.6206 × 10^2/2.1817 × 10^1(−) | −2.1994 × 10^1/6.1268 × 10^1(−)
F8 | 100 | 1.3291 × 10^3/5.3634 × 10^1 | 8.4853 × 10^2/4.9467 × 10^1(−) | 6.0061 × 10^2/7.1430 × 10^1(−)
F9 | 50 | 9.8795 × 10^2/5.0244 × 10^1 | 2.3591 × 10^2/2.6033 × 10^1(−) | 5.2861 × 10^1/7.8954 × 10^1(−)
F9 | 100 | 2.0480 × 10^3/5.4173 × 10^1 | 1.1228 × 10^3/5.3780 × 10^1(−) | 1.0077 × 10^3/8.6161 × 10^1(−)
F10 | 50 | −2.7589 × 10^2/1.8276 × 10^−1 | −2.7585 × 10^2/1.6298 × 10^−1(=) | −2.7601 × 10^2/2.2793 × 10^−1(−)
F10 | 100 | −2.5137 × 10^2/1.6843 × 10^−1 | −2.5103 × 10^2/2.1463 × 10^−1(=) | −2.5108 × 10^2/1.6621 × 10^−1(=)
F11 | 50 | 1.4403 × 10^3/8.7875 × 10^1 | 6.1039 × 10^2/4.4920 × 10^1(−) | 5.3563 × 10^2/9.5648 × 10^1(−)
F11 | 100 | 1.5965 × 10^3/5.8848 × 10^1 | 7.0087 × 10^2/3.1616 × 10^1(−) | 6.4495 × 10^2/3.7649 × 10^1(−)
F12 | 50 | 9.1000 × 10^2/2.2301 × 10^−3 | 1.0618 × 10^3/4.2980 × 10^1(+) | 1.0789 × 10^3/6.2427 × 10^1(+)
F12 | 100 | 9.1000 × 10^2/3.2326 × 10^−3 | 1.5139 × 10^3/3.7043 × 10^1(+) | 1.4640 × 10^3/3.0232 × 10^1(+)
F13 | 50 | 9.1000 × 10^2/4.4973 × 10^−3 | 1.0569 × 10^3/4.4095 × 10^1(+) | 1.0810 × 10^3/6.3660 × 10^1(+)
F13 | 100 | 9.1000 × 10^2/4.4477 × 10^−3 | 1.5028 × 10^3/4.1610 × 10^1(+) | 1.4625 × 10^3/2.8474 × 10^1(+)
F14 | 50 | 9.1000 × 10^2/3.8153 × 10^−3 | 1.0662 × 10^3/4.2043 × 10^1(+) | 1.0799 × 10^3/6.1347 × 10^1(+)
F14 | 100 | 9.1000 × 10^2/2.9016 × 10^−3 | 1.5147 × 10^3/2.9138 × 10^1(+) | 1.4620 × 10^3/2.8016 × 10^1(+)
+/=/− | | | 17/4/7 | 17/3/8
Table A6. Optimal structural parameters obtained by the MSAMT, HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO on the spatial truss.
Methods | L1 (m) | L2 (m) | L3 (m) | E1 (Pa) | A1 (m²) | E2 (Pa) | A2 (m²) | E3 (Pa) | A3 (m²) | E4 (Pa) | A4 (m²) | E5 (Pa) | A5 (m²) | E6 (Pa) | A6 (m²)
MSAMT | 1.17 × 10^0 | 1.16 × 10^0 | 1.18 × 10^0 | 2.16 × 10^11 | 3.52 × 10^−3 | 2.20 × 10^11 | 3.37 × 10^−3 | 2.12 × 10^11 | 3.45 × 10^−3 | 2.19 × 10^11 | 3.54 × 10^−3 | 2.15 × 10^11 | 3.36 × 10^−3 | 2.19 × 10^11 | 3.48 × 10^−3
HSAOA | 1.03 × 10^0 | 1.00 × 10^0 | 1.02 × 10^0 | 2.11 × 10^11 | 3.10 × 10^−3 | 2.09 × 10^11 | 3.03 × 10^−3 | 1.99 × 10^11 | 2.88 × 10^−3 | 2.16 × 10^11 | 3.05 × 10^−3 | 2.12 × 10^11 | 2.89 × 10^−3 | 2.12 × 10^11 | 3.09 × 10^−3
DSP-SAEA | 1.02 × 10^0 | 1.02 × 10^0 | 1.03 × 10^0 | 2.07 × 10^11 | 3.30 × 10^−3 | 2.19 × 10^11 | 3.06 × 10^−3 | 1.98 × 10^11 | 3.01 × 10^−3 | 2.17 × 10^11 | 3.29 × 10^−3 | 2.17 × 10^11 | 3.17 × 10^−3 | 2.08 × 10^11 | 2.93 × 10^−3
BiS-SAHA | 1.00 × 10^0 | 9.59 × 10^−1 | 9.93 × 10^−1 | 2.13 × 10^11 | 2.97 × 10^−3 | 2.21 × 10^11 | 2.90 × 10^−3 | 1.97 × 10^11 | 3.11 × 10^−3 | 2.15 × 10^11 | 2.92 × 10^−3 | 2.08 × 10^11 | 2.89 × 10^−3 | 2.09 × 10^11 | 3.03 × 10^−3
FSAPSO | 1.01 × 10^0 | 1.00 × 10^0 | 9.98 × 10^−1 | 2.11 × 10^11 | 2.99 × 10^−3 | 2.15 × 10^11 | 2.98 × 10^−3 | 2.02 × 10^11 | 2.94 × 10^−3 | 2.12 × 10^11 | 3.15 × 10^−3 | 2.11 × 10^11 | 2.95 × 10^−3 | 2.10 × 10^11 | 2.96 × 10^−3
Table A7. List of abbreviations (sorted in order of appearance in the text).
Abbreviation | Full term
SAEAs | Surrogate-assisted evolutionary algorithms
MSAMT | Multi-surrogate-assisted multi-tasking optimization algorithm
EAs | Evolutionary algorithms
GA | Genetic algorithm
DE | Differential evolution
PSO | Particle swarm optimization
FEs | Fitness evaluations
RBF | Radial basis function
TLBO | Teaching-learning-based optimization
GNN | Graphical neural network
SL-PSO | Social learning particle swarm optimization
MTO | Multi-task optimization
G-MFEA | Generalized multi-factorial evolutionary algorithm
PR | Polynomial regression
PRS | Polynomial response surface
SVR | Support vector regression
UES | Unified ensemble of surrogates
DES | Dynamic ensemble of surrogates
MFEA | Multi-factorial evolutionary algorithm
LHS | Latin hypercube sampling
DSES | Dynamic selection for the ensemble surrogate
RSM | Response surface model
Figure A1. Convergence curves of the MSAMT, HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO on test problems with 10 dimensions (panels (a)–(n) show the convergence curves of F1–F14, respectively).
Figure A2. Convergence curves of the MSAMT, HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO on test problems with 20 dimensions (panels (a)–(n) show the convergence curves of F1–F14, respectively).
Figure A3. Convergence curves of the MSAMT, HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO on test problems with 30 dimensions (panels (a)–(n) show the convergence curves of F1–F14, respectively).
Figure A4. Convergence curves of the MSAMT, HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO on test problems with 50 dimensions (panels (a)–(n) show the convergence curves of F1–F14, respectively).
Figure A5. Convergence curves of the MSAMT, HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO on test problems with 100 dimensions (panels (a)–(n) show the convergence curves of F1–F14, respectively).
Figure A6. Convergence curves of the MSAMT, GL-SADE, and LSADE on test problems with 50 dimensions (panels (a)–(n) show the convergence curves of F1–F14, respectively).
Figure A7. Convergence curves of the MSAMT, GL-SADE, and LSADE on test problems with 100 dimensions (panels (a)–(n) show the convergence curves of F1–F14, respectively).

References

  1. Katoch, S.; Chauhan, S.S.; Kumar, V. A review on genetic algorithm: Past, present, and future. Multimedia Tools Appl. 2020, 80, 8091–8126.
  2. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2011, 15, 4–31.
  3. Yang, G.Y. A modified particle swarm optimizer algorithm. In Proceedings of the International Conference on Electronic Measurement and Instruments, Xi’an, China, 16–18 August 2007; pp. 2675–2679.
  4. Ong, Y.S.; Keane, A.J. Meta-Lamarckian learning in memetic algorithms. IEEE Trans. Evol. Comput. 2004, 8, 99–110.
  5. Nelson, A.L.; Barlow, G.J.; Doitsidis, L. Fitness functions in evolutionary robotics: A survey and analysis. Robot. Auton. Syst. 2009, 57, 345–370.
  6. Niyato, D.; Hossain, E. Dynamics of network selection in heterogeneous wireless networks: An evolutionary game approach. IEEE Trans. Veh. Technol. 2009, 58, 2008–2017.
  7. Chai, R.; Savvaris, A.; Tsourdos, A.; Xia, Y.; Chai, S. Solving multiobjective constrained trajectory optimization problem by an extended evolutionary algorithm. IEEE Trans. Cybern. 2020, 50, 1630–1643.
  8. Chai, R.; Tsourdos, A.; Savvaris, A.; Xia, Y.; Chai, S. Multiobjective optimal parking maneuver planning of autonomous wheeled vehicles. IEEE Trans. Ind. Electron. 2019, 67, 10809–10821.
  9. Jin, Y.; Wang, H.; Chugh, T.; Guo, D.; Miettinen, K. Data-driven evolutionary optimization: An overview and case studies. IEEE Trans. Evol. Comput. 2019, 23, 442–458.
  10. Shi, L.; Rasheed, K. A survey of fitness approximation methods applied in evolutionary algorithms. In Computational Intelligence in Expensive Optimization Problems; Springer: Berlin/Heidelberg, Germany, 2010; pp. 3–28.
  11. Branke, J.; Schmidt, C. Faster convergence by means of fitness estimation. Soft Comput. 2003, 9, 13–20.
  12. Wang, X.; Wang, G.G.; Song, B.; Wang, P.; Wang, Y. A novel evolutionary sampling assisted optimization method for high-dimensional expensive problems. IEEE Trans. Evol. Comput. 2019, 23, 815–827.
  13. Guo, D.; Jin, Y.; Ding, J.; Chai, T. Heterogeneous ensemble-based infill criterion for evolutionary multiobjective optimization of expensive problems. IEEE Trans. Cybern. 2019, 49, 1012–1025.
  14. Yu, M.; Liang, J.; Wu, Z.; Yang, Z. A twofold infill criterion-driven heterogeneous ensemble surrogate-assisted evolutionary algorithm for computationally expensive problems. Knowl.-Based Syst. 2022, 236, 107747.
  15. Li, F.; Shen, W.; Cai, X.; Gao, L. A fast surrogate-assisted particle swarm optimization algorithm for computationally expensive problems. Appl. Soft Comput. 2020, 92, 106303.
  16. Li, F.; Cai, X.; Gao, L.; Shen, W. A surrogate-assisted multiswarm optimization algorithm for high-dimensional computationally expensive problems. IEEE Trans. Cybern. 2021, 51, 1390–1402.
  17. Nourian, N.; El-Badry, M.; Jamshidi, M. Design optimization of truss structures using a graph neural network-based surrogate model. Algorithms 2023, 16, 380.
  18. Cai, X.; Gao, L.; Li, X. Efficient generalized surrogate-assisted evolutionary algorithm for high-dimensional expensive problems. IEEE Trans. Evol. Comput. 2020, 24, 365–379.
  19. Pan, J.S.; Liu, N.; Chu, S.C.; Lai, T. An efficient surrogate-assisted hybrid optimization algorithm for expensive optimization problems. Inf. Sci. 2021, 561, 304–325.
  20. Sun, C.; Jin, Y.; Zeng, J.; Yu, Y. A two-layer surrogate-assisted particle swarm optimization algorithm. Soft Comput. 2014, 19, 1461–1475.
  21. Wang, W.; Liu, H.L.; Tan, K.C. A surrogate-assisted differential evolution algorithm for high-dimensional expensive optimization problems. IEEE Trans. Cybern. 2023, 53, 2685–2697.
  22. Kůdela, J.; Matoušek, R. Combining Lipschitz and RBF surrogate models for high-dimensional computationally expensive problems. Inf. Sci. 2023, 619, 457–477.
  23. Rumpfkeil, M.P.; Bryson, D.; Beran, P. Multi-fidelity sparse polynomial chaos and kriging surrogate models applied to analytical benchmark problems. Algorithms 2022, 15, 101.
  24. Liu, N.; Pan, J.S.; Chu, S.C.; Lai, T. A surrogate-assisted bi-swarm evolutionary algorithm for expensive optimization. Appl. Intell. 2022, 53, 12448–12471.
  25. Ren, Z.; Sun, C.; Tan, Y.; Zhang, G.; Qin, S. A bi-stage surrogate-assisted hybrid algorithm for expensive optimization problems. Complex Intell. Syst. 2021, 7, 1391–1405.
  26. Li, X.; Li, S. An adaptive surrogate-assisted particle swarm optimization for expensive problems. Soft Comput. 2021, 25, 15051–15065.
  27. Liu, Y.; Liu, J.; Tan, S. Decision space partition based surrogate-assisted evolutionary algorithm for expensive optimization. Expert Syst. Appl. 2023, 214, 119075.
  28. Zhang, J.; Li, M.; Yue, X.; Shi, M. A hierarchical surrogate assisted optimization algorithm using teaching-learning-based optimization and differential evolution for high-dimensional expensive problems. Appl. Soft Comput. 2024, 152, 111212.
  29. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Ensemble strategies for population-based optimization algorithms–A survey. Swarm Evol. Comput. 2019, 44, 695–711.
  30. Liao, P.; Sun, C.; Zhang, G.; Jin, Y. Multi-surrogate multi-tasking optimization of expensive problems. Knowl.-Based Syst. 2020, 205, 106262.
  31. Yang, S.; Qi, Y.; Yang, R.; Ma, X.; Zhang, H. A surrogate assisted evolutionary multitasking optimization algorithm. Appl. Soft Comput. 2023, 132, 109775.
  32. Akopov, A.S.; Hevencev, M.A. A multi-agent genetic algorithm for multi-objective optimization. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Manchester, UK, 13–16 October 2013.
  33. Zhong, W.; Liu, J.; Xue, M.; Jiao, L. A multiagent genetic algorithm for global numerical optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2004, 34, 1128–1141.
  34. Gupta, A.; Ong, Y.S.; Feng, L. Multifactorial evolution: Toward evolutionary multitasking. IEEE Trans. Evol. Comput. 2016, 20, 343–357.
  35. Ding, J.; Yang, C.; Jin, Y.; Chai, T. Generalized multitasking for evolutionary optimization of expensive problems. IEEE Trans. Evol. Comput. 2017, 23, 44–58.
  36. Box, G.E.P.; Wilson, K.B. On the experimental attainment of optimum conditions. In Breakthroughs in Statistics: Methodology and Distribution; Springer: New York, NY, USA, 1992; pp. 270–310.
  37. Hardy, R.L. Multiquadric equations of topography and other irregular surfaces. J. Geophys. Res. 1971, 76, 1905–1915.
  38. Zhang, J.; Yue, X.; Qiu, J.; Zhang, M.; Wang, X. A unified ensemble of surrogates with global and local measures for global metamodelling. Eng. Optim. 2021, 53, 474–495.
Figure 1. The flow chart of the proposed MSAMT.
Figure 2. The NRMSE values of the DSES and the DES during the search process on the 20-dimensional functions F1 (a) and F3 (b).
Figure 3. Performances of the MSAMT with different rmp values on 20D F2, F3, F5, and F9 (panels (a)–(d) show the box plots of F2, F3, F5, and F9, respectively).
Figure 4. Performances of the MSAMT with different k_max values on 20D F2, F3, F5, and F9 (panels (a)–(d) show the box plots of F2, F3, F5, and F9, respectively).
Figure 5. Convergence curves of the MSAMT with/without the decision space partition on 20D F2, F3, F5, and F9 (panels (a)–(d) show the convergence curves of F2, F3, F5, and F9, respectively).
Figure 6. The average run times of the proposed algorithm and the competing algorithms on each test problem (panels (a)–(e) show the 10D, 20D, 30D, 50D, and 100D test problems, respectively).
Figure 7. Simulation diagram of the spatial truss structure.
Figure 8. Convergence curves of the MSAMT, HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO on the spatial truss.
Table 1. Currently important algorithms, including EAs and SAEAs.
Algorithm | Author, Reference | Surrogate Models | Evolutionary Algorithms | Year
MFEA | Gupta et al. [34] | × | MFEA | 2016
G-MFEA | Ding et al. [35] | × | G-MFEA | 2017
FSAPSO | Li et al. [15] | RBF | PSO | 2020
BiS-SAHA | Ren et al. [25] | RBF | SL-PSO, DE | 2021
GL-SADE | Wang et al. [21] | RBF, Kriging | DE | 2023
LSADE | Kůdela et al. [22] | RBF, Lipschitz | DE | 2023
DSP-SAEA | Liu et al. [27] | RBF, Kriging, and polynomial regression (PR) | PSO | 2023
SAMTO | Yang et al. [31] | Kriging, RBF | MFEA | 2023
HSAOA | Zhang et al. [28] | RBF, PRS, and SVR | TLBO, DE | 2024
Table 2. Test functions used in the experiment study.
No. | Problem | Dimension | Optimum | Property
F1 | Ellipsoid | 10, 20, 30, 50, and 100 | 0 | Unimodal
F2 | Rosenbrock | 10, 20, 30, 50, and 100 | 0 | Multimodal with narrow valleys
F3 | Ackley | 10, 20, 30, 50, and 100 | 0 | Multimodal
F4 | Griewank | 10, 20, 30, 50, and 100 | 0 | Multimodal
F5 | Rastrigin | 10, 20, 30, 50, and 100 | 0 | Multimodal
F6 | Shifted Rotated High Conditioned Elliptic Function | 10, 20, 30, 50, and 100 | −450 | Unimodal, shifted, rotated, and non-separable
F7 | Shifted Rotated Ackley’s Function with Global Optimum on Bounds | 10, 20, 30, 50, and 100 | −140 | Multimodal, shifted, rotated, and non-separable
F8 | Shifted Rastrigin | 10, 20, 30, 50, and 100 | −330 | Multimodal, shifted and separable
F9 | Shifted Rotated Rastrigin | 10, 20, 30, 50, and 100 | −330 | Very complicated multimodal
F10 | Shifted Rotated Expanded Scaffer’s F6 (Shifted Rosenbrock’s Function) Function | 10, 20, 30, 50, and 100 | −300 | Multimodal, shifted, and non-separable
F11 | Rotated Hybrid Composition Function 1 | 10, 20, 30, 50, and 100 | 120 | Very complicated multimodal
F12 | Rotated Hybrid Composition Function 2 | 10, 20, 30, 50, and 100 | 10 | Multimodal, rotated, and non-separable
F13 | Rotated Hybrid Composition Function 2 with a Narrow Basin for the Global Optimum | 10, 20, 30, 50, and 100 | 10 | Very complicated multimodal
F14 | Rotated Hybrid Composition Function 2 with the Global Optimum on the Bounds | 10, 20, 30, 50, and 100 | 10 | Multimodal, non-separable, and global optimum on the bound
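For reference, the sketch below implements the standard (unshifted, unrotated) textbook forms of the base problems F1–F5 in Table 2; the weighted form of the Ellipsoid is one common convention and is assumed here. The CEC2005-style variants F6–F14 additionally apply shift vectors, rotation matrices, and bias values, which are omitted.

```python
# Standard forms of the base benchmark functions F1–F5 (assumptions noted above).
import numpy as np

def ellipsoid(x):   # F1, unimodal; one common weighted convention
    i = np.arange(1, x.size + 1)
    return float(np.sum(i * x**2))

def rosenbrock(x):  # F2, narrow curved valley; minimum 0 at x = (1, ..., 1)
    return float(np.sum(100 * (x[1:] - x[:-1]**2) ** 2 + (x[:-1] - 1) ** 2))

def ackley(x):      # F3, many shallow local optima around the origin
    d = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

def griewank(x):    # F4, multimodal with a weak product coupling term
    i = np.arange(1, x.size + 1)
    return float(np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1)

def rastrigin(x):   # F5, highly multimodal with a regular lattice of optima
    return float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10))

print(rastrigin(np.zeros(20)))  # 0.0 at the global optimum
```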
Table 3. Initial values as well as upper and lower bounds of the parameters to be optimized (the exponents of E and A, lost in extraction, are restored here to be consistent with the optimal values in Table A6 and the ±30% bounds used for L).
Parameters | Initial Values | Upper Bounds | Lower Bounds
L1/L2/L3 | 1.0 m | 1.3 m | 0.7 m
E1/E2/E3/E4/E5/E6 | 2.1 × 10^11 Pa | 2.73 × 10^11 Pa | 1.47 × 10^11 Pa
A1/A2/A3/A4/A5/A6 | 3.0 × 10^−3 m² | 3.9 × 10^−3 m² | 2.1 × 10^−3 m²
Table 4. Statistical results obtained by the MSAMT, HSAOA, DSP-SAEA, BiS-SAHA, and FSAPSO on the spatial truss.
Methods | Best | Worst | Mean | Std
MSAMT | 1.01 × | 1.42 × | 4.00 × | 4.05 ×
HSAOA | 2.29 × | 2.57 × | 6.72 × | 6.44 ×
DSP-SAEA | 7.89 × | 1.27 × | 4.20 × | 3.99 ×
BiS-SAHA | 7.89 × | 1.61 × | 3.85 × | 4.74 ×
FSAPSO | 3.49 × | 2.26 × | 5.00 × | 5.77 ×
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
