
Corrigendum to "Balance of Communication and Convergence: Predefined-time Distributed Optimization Based on Zero-Gradient-Sum"

Renyongkang Zhang, Ge Guo, Senior Member, IEEE, and Zeng-Di Zhou

arXiv:2412.16163v1 [math.OC] 3 Dec 2024

Abstract—This paper proposes a distributed optimization algorithm with a convergence time that can be assigned in advance according to task requirements. To this end, a sliding manifold is introduced to drive the sum of local gradients to zero, based on which a distributed protocol is derived to reach a consensus minimizing the global cost. A novel approach for convergence analysis is derived in a unified settling-time framework, resulting in an algorithm that can precisely converge to the optimal solution at the prescribed time. The method is interesting in that it only requires the primal states to be shared over the network, which implies lower communication requirements. The result is extended to scenarios with time-varying objective functions by introducing local gradient prediction and non-smooth consensus terms. Numerical simulations are provided to corroborate the effectiveness of the proposed algorithms.

Index Terms—Distributed optimization, predefined-time convergence, multi-agent systems, time-varying optimization.

This paper has been accepted for publication in the IEEE Transactions on Cybernetics, doi: 10.1109/TCYB.2024.3498323.
This work was supported by the National Natural Science Foundation of China under Grants 62173079 and U1808205, in part by the Fundamental Research Funds for the Central Universities under Grant N2423049, and in part by the 2024 Hebei Provincial Doctoral Candidate/Postgraduate Student Innovation Ability Training Funding Project under Grants CXZZBS2024184 and CXZZSS2024178. (Corresponding author: Ge Guo.)
R. Zhang and Z. D. Zhou are with the College of Information Science and Engineering, Northeastern University, Shenyang 110819, China (e-mail: zryk1998@163.com; zhouzd199912@163.com).
G. Guo is with the State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang 110819, China, and also with the School of Control Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China (e-mail: geguo@yeah.net).

I. INTRODUCTION

DISTRIBUTED optimization has attracted considerable attention due to its widespread applications, e.g., in smart grids, sensor networks, and intelligent transportation systems [1]. The goal of distributed optimization is to cooperatively minimize a global objective formed by a summation of local cost functions using only local computation and communication. Various results on distributed optimization are available in both discrete-time (e.g., [2]-[5]) and continuous-time (e.g., [6]-[8]) domains. In particular, many continuous-time algorithms are well known, such as (sub)gradient-based consensus algorithms [9], [10], saddle point dynamics [6], proportional-integral (PI) algorithms [7], [11], zero-gradient-sum (ZGS) schemes [12], prediction-correction algorithms [13], and Newton-Raphson (NR) methods [14]. These algorithms achieve asymptotic or exponential convergence, namely, the optimal solution is reached only as time approaches infinity.

Distributed optimization algorithms ensuring convergence in a finite or fixed time are therefore preferable [15]-[19]. But the upper bound of the settling time is often difficult to specify, either depending on system initialization or involving complex parameter calculation. Therefore, a great deal of attention has been paid to predefined-time optimization, which allows the maximum convergence time to be explicitly prescribed in advance. Given the challenges of the problem, some works give approximate solutions or alternatives that are feasible but not strictly predefined-time optimal. For instance, the method in [20] achieves predefined-time consensus but only asymptotic optimization; the algorithms in [21]-[23] drive the system to a feasible domain in a given time, but the optimal solution is only approachable asymptotically; the approximate solutions in [24]-[26] converge to a neighborhood of the optimizer, but not to the exact optimum in a given time.

In some recent works, two types of strictly prescribed-time distributed optimization algorithms are given. The first type is based on consensus and gradient dynamics with the aid of an estimation of global cost information [30], [31]. These algorithms achieve the optimal solution in a preset time, but require the sharing of extra information such as estimator states or local gradients via the network, which implies poor privacy and high communication cost. The second type is derived in the context of ZGS or piecewise ZGS schemes, which are computationally and communicationally more promising for predefined-time distributed optimization since only the primal states are shared among the nodes [33]-[35]. It should be noted, however, that ZGS-based and piecewise ZGS-based algorithms are still limited and conservative, in that the former relies on initial conditions [33] while the latter involves a prior local minimization [34], [35].

Motivated by the aforementioned discussions, this paper aims to develop a predefined-time distributed optimization algorithm with less demanding initial conditions and communication requirements. To this end, a local-minimization-free ZGS framework based on sliding mode is designed, consisting of a sliding manifold to guarantee that the sum of gradients converges to zero and a distributed protocol to achieve global optimal consensus. We give a uniform framework for prescribing the convergence time of both the reach phase and the sliding phase, which is attractive due to its flexibility in regulating and fine-tuning the settling time in accordance with system performance and task objectives. This advantage
TABLE I: Comparison With Existing Predefined-Time Algorithms

| Algorithm | Predefined-Time Mechanism | Predefined-Time Exact Optimization | Key Idea | Required State of Communication |
|---|---|---|---|---|
| [20], [21] | Bipartite power feedback [27] | No | Gradient-based consensus algorithm | Dimension n |
| [22] | Time-varying scaling method [28] | No | Prediction-correction algorithm | Dimension 2n |
| [25] | Time-based generator [29] | No | Saddle point dynamics | Dimension 2n |
| [30], [31] | [30]: Time-varying scaling method [32] / [31]: Bipartite power feedback [27] | Yes | Estimation + consensus + gradient dynamics | [30]: Dimension 2n / [31]: Dimension 3n |
| [33]-[35] | [33]: Time-based generator [29] / [34]: Unilateral power feedback / [35]: Bipartite power feedback [36] | Yes | [33]: ZGS / [34], [35]: Piecewise ZGS | Dimension n |
| Proposed algorithm | Unilateral power feedback | Yes | Local-minimization-free ZGS | Dimension n |

is highlighted by extending the method to scenarios with time-varying objective functions, resulting in a practical algorithm running on local gradient prediction and consensus seeking. A detailed comparison with other related methods is provided in Table I, and the main contributions are summarized as follows:
1) Instead of the approximate optimization in [25], the predefined-time consensus-based asymptotic optimization in [20], or the feasible-domain-convergence-based asymptotic optimization in [21], [22], the algorithm presented in this paper achieves genuine prescribed-time distributed optimization.
2) Superior to the methods in [30], [31], in which a variety of information about the estimator and even the gradients is exchanged, our algorithm only requires the primal states to be shared over the network, implying low communication load and strong privacy preservation.
3) The convergence time is prescribed in a uniform framework and is independent of any other parameters. Compared with the traditional predefined-time ZGS-based distributed optimization methods [33]-[35], the proposed algorithm is free from local minimization and less conservative.

II. NOTATIONS AND PRELIMINARIES

Denote by R the set of all real numbers, and by R+, R^n and R^{n×m} the sets of positive real numbers, real n-dimensional vectors, and real n × m matrices, respectively. 1_n ∈ R^n and 0_n ∈ R^n denote the all-one and all-zero column vectors, and I_n ∈ R^{n×n} is the identity matrix. Define sig^α(·) = |·|^α sgn(·), where sgn(·) is the signum function and α ∈ R+. For z = [z1, z2, ..., zn]^T ∈ R^n, sgn(z) = [sgn(z1), sgn(z2), ..., sgn(zn)]^T ∈ R^n and sig^α(z) = [sig^α(z1), sig^α(z2), ..., sig^α(zn)]^T ∈ R^n. ∥z∥_p = (|z1|^p + ··· + |zn|^p)^{1/p} with p ≥ 1 denotes the p-norm. For z ≠ 0_n, the norm-normalized signum function is defined as SGN(z) = z/∥z∥. Specifically, sig^α(0) = 0, SGN(0) = 0, and the 2-norm is abbreviated as ∥z∥. Denote by diag(z) ∈ R^{n×n} the diagonal matrix with diagonal elements zi. The eigenvalues of A ∈ R^{n×n} are ordered as λ1(A) ≤ ··· ≤ λn(A), non-decreasing with respect to real parts. ∇f(·) and ∇²f(·) respectively denote the gradient and Hessian of a twice differentiable function f(·).

A differentiable function f: R^n → R on a convex set S ⊆ R^n is convex if and only if (z − x)^T(∇f(z) − ∇f(x)) ≥ 0, ∀x, z ∈ S. For a twice continuously differentiable function f: R^n → R and a positive constant ψ ∈ R+, if f is ψ-strongly convex, then the following conditions are equivalent:
(∇f(z) − ∇f(x))^T(z − x) ≥ ψ∥z − x∥², ∀x, z ∈ S;
f(z) − f(x) − ∇f(x)^T(z − x) ≥ (ψ/2)∥z − x∥², ∀x, z ∈ S;
∇²f(x) ≥ ψ I_n, ∀x ∈ S.
Besides, for any twice continuously differentiable function f: R^n → R and any constant Ψ > 0, if ∇f is Ψ-Lipschitz, then the following conditions are equivalent:
(∇f(z) − ∇f(x))^T(z − x) ≤ Ψ∥z − x∥², ∀x, z ∈ S;
f(z) − f(x) − ∇f(x)^T(z − x) ≤ (Ψ/2)∥z − x∥², ∀x, z ∈ S;
∇²f(x) ≤ Ψ I_n, ∀x ∈ S.

Denote by G = (V, E) a graph with node set V = {1, ..., N} and edge set E ⊆ V × V. Define A = [a_ij] ∈ R^{N×N} as the weighted adjacency matrix with a_ii = 0, a_ij > 0 if (j, i) ∈ E, and a_ij = 0 otherwise. L = [l_ij] ∈ R^{N×N} denotes the Laplacian matrix of G, where l_ij = −a_ij for i ≠ j and l_ii = Σ_{j=1}^N a_ij. For any undirected and connected graph G, the following conclusions hold: 1) L is positive semidefinite and satisfies λ1(L) = 0 < λ2(L) ≤ ··· ≤ λN(L); 2) for x = [x1, ..., xN]^T ∈ R^N and y = [y1, ..., yN]^T ∈ R^N, it holds that x^T L x = (1/2) Σ_{i,j} a_ij (x_i − x_j)² and Σ_{i,j} a_ij x_i (y_i − y_j) = (1/2) Σ_{i,j} a_ij (x_i − x_j)(y_i − y_j); 3) if 1_N^T x = 0, then x^T L x ≥ λ2(L) x^T x.

Lemma 1 ([37]): For z_i ∈ R with z_i ≥ 0 and p ∈ R, if 0 < p ≤ 1, then Σ_{i=1}^n z_i^p ≥ (Σ_{i=1}^n z_i)^p, and if p > 1, then Σ_{i=1}^n z_i^p ≥ n^{1−p} (Σ_{i=1}^n z_i)^p.

Lemma 2 ([38]): Consider the system ẋ(t) = f(x(t)), where x ∈ R^n and f: R^n → R^n. If there exists a Lyapunov function V(x) such that

V̇(x) ≤ − (α^{(βq−1)/p} Γ((1−βq)/p) / (p T_m)) exp(αV^p) V^{βq}, ∀x ∈ R^n \ {0},

where T_m > 0, α > 0, p > 0, β > 0, q > 0, βq < 1, and Γ(τ) = ∫_0^∞ t^{τ−1} exp(−t) dt, then the equilibrium of the system is globally predefined-time stable and the settling time is bounded by T_m.

Lemma 3: For the system ẋ(t) = f(x(t)) with x ∈ R^n and f: R^n → R^n, if there exists a Lyapunov function V(x) such that V̇(x) ≤ − (1/(αpT_m)) exp(αV^p) V^{1−p}, ∀x ∈ R^n \ {0}, where T_m > 0, α > 0 and 0 < p < 1, then the equilibrium of the system is globally predefined-time stable and the settling time is bounded by T_m.

Proof: By Lemma 2 with βq = 1 − p, β = 1, and 0 < p < 1, the coefficient α^{(βq−1)/p} Γ((1−βq)/p)/(pT_m) reduces to α^{−1}Γ(1)/(pT_m) = 1/(αpT_m), and the conclusion follows.

Remark 1: Unlike the bipartite power feedback in [31], which contains both powers greater than zero but less than one and powers greater than one, Lemma 3 achieves predefined-time stability by only using unilateral power feedback of
powers greater than zero but less than one, reducing computational complexity. A distributed optimization algorithm with predefined-time convergence based on Lemma 3 will be presented later in the article.

III. PROBLEM FORMULATION

Consider a system of N agents interacting over a communication topology G. Each agent i ∈ V is endowed with a local cost function f_i(z): R^n → R or a time-varying cost function f_i(z, t): R^n × R+ → R, which is accessible only to itself. The aim of this paper is to design a distributed algorithm for each agent to solve the global optimization problem min Σ_{i=1}^N f_i(z) or the global time-varying optimization problem min Σ_{i=1}^N f_i(z, t) using only local information and interaction with neighbors. Since the decision variables of all agents are the same, the above problems can be formulated as

min_{x_i} Σ_{i=1}^N f_i(x_i)  s.t. x_i = x_j, ∀i, j ∈ V   (1)

or

min_{x_i} Σ_{i=1}^N f_i(x_i, t)  s.t. x_i = x_j, ∀i, j ∈ V   (2)

where x_i ∈ R^n is the state of the i-th agent. Suppose that the optimization problems (1) and (2) are feasible, and denote by x* the optimal solution. Note that x* is a static value for problem (1), while it is a continuous trajectory that changes over time for (2). For convenience, the gradient and Hessian of f_i(·) with respect to x_i are denoted as ∇f_i(·) and ∇²f_i(·), respectively, throughout the following text.

Several assumptions and definitions are required in the following analysis.

Assumption 1: G is undirected and connected.

Assumption 2: Each local cost function f_i is twice continuously differentiable and ψ_i-strongly convex (ψ_i > 0) with respect to x_i. In addition, ∇f_i is Ψ_i-Lipschitz continuous (Ψ_i > 0) with respect to x_i.

Remark 2: Assumptions 1 and 2 are both standard and often used in the existing literature. The existence and uniqueness of the solution of the optimization problem is guaranteed by Assumption 2, and the optimal solution x* satisfies Σ_{i=1}^N ∇f_i(x*) = 0.

Definition 1 (Predefined-time optimization): Problem (1) or (2) is said to achieve predefined-time optimization if there exists a continuous-time algorithm such that the optimal solution x* is reached at a predefined time T_m, i.e., lim_{t→T_m} ∥x_i(t) − x*∥ = 0 and ∥x_i(t) − x*∥ = 0 for all t > T_m.

Definition 2 (Predefined-time approximate optimization, [25]): Problem (1) or (2) is said to achieve predefined-time approximate optimization if there exists a continuous-time algorithm such that
• lim_{t→T_m+} ∥x_i(t) − x*∥ ≤ ε and ∥x_i(t) − x*∥ ≤ ε for all t > T_m;
• lim_{t→∞} ∥x_i(t) − x*∥ = 0.

IV. MAIN RESULTS

A. Predefined-time Distributed Optimization Algorithms

In this section, a novel predefined-time distributed optimization algorithm is first provided to solve the time-invariant problem stated in (1). For each agent i ∈ V and positive constants η, p, T_m, c, consider

ẋ_i(t) = −(∇²f_i(x_i))^{−1} [ (1/(2pηT_m)) exp(∥s_i∥^{2p}) sig^{1−2p}(s_i) + (2c/(p(1−η)T_m)) Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} sig^{1−2p}(x_i−x_j) ]   (3a)

s_i(t) = ∇f_i(x_i) + ∫_0^t (2c/(p(1−η)T_m)) Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} sig^{1−2p}(x_i−x_j) dτ   (3b)

Here (3a) is a continuous-time distributed update law that solves the optimization problem, and (3b) is a sliding manifold that achieves zero-gradient-sum. The convergence analysis of algorithm (3) is given in the following theorem.

Theorem 1: Suppose that Assumptions 1 and 2 hold. Let Ψ̄ = max_{i∈V}{Ψ_i}. If 0 < η < 1, 0 < p < 1/2, T_m > 0, and c ≥ Ψ̄/(4λ2(L)), then distributed algorithm (3) drives the states x_i(t) of all agents to an optimal solution x* of (1) within a settling time bounded by T_m.

Proof: The proof is divided into two steps. Step 1: prove that the sum of local gradients goes to zero within ηT_m. Step 2: show that the states x_i(t) converge to the optimal solution within T_m.

Step 1: For 0 < t ≤ ηT_m, the time derivative of (3b) satisfies

ṡ_i(t) = −(1/(2pηT_m)) exp(∥s_i∥^{2p}) sig^{1−2p}(s_i)   (4)

Consider the Lyapunov function candidate V_{s,i}(s_i(t)) = (1/2)∥s_i∥². Its time derivative along (4) is V̇_{s,i} = −(1/(2pηT_m)) exp(∥s_i∥^{2p}) ∥s_i∥_{2−2p}^{2−2p}. Since 0 < p < 1/2, one has from Lemma 1 that

V̇_{s,i} ≤ −(1/(2pηT_m)) exp(∥s_i∥^{2p}) (∥s_i∥²)^{1−p} = −(1/(2^p pηT_m)) exp(2^p V_{s,i}^p) V_{s,i}^{1−p}   (5)

By Lemma 3, s_i(t), i ∈ V, reaches zero within the specified time ηT_m. Since Assumption 1 holds, it follows that

Σ_{i=1}^N Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} sig^{1−2p}(x_i−x_j) = 0   (6)

From (3b), we have

Σ_{i=1}^N ∇f_i(x_i(t)) = Σ_{i=1}^N s_i(t)   (7)

Combining (7) with lim_{t→ηT_m} s_i(t) = 0, we get Σ_{i=1}^N ∇f_i(x_i(t)) = Σ_{i=1}^N s_i(t) = 0 at t = ηT_m.
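Step 1's conclusion, that each s_i reaches zero no later than ηT_m, rests on Lemma 3. The bound can be checked numerically: substituting u = exp(−αV^p) into the comparison ODE V̇ = −(1/(αpT_m)) exp(αV^p) V^{1−p} gives u̇ = 1/T_m, so V hits zero at exactly T_m(1 − exp(−αV(0)^p)) < T_m. The sketch below is ours, not part of the paper; α, p and T_m are arbitrary admissible choices, and forward Euler (the solver used in Section V) integrates the ODE and confirms the bound.

```python
import math

def settle_time(v0, alpha, p, tm, dt=1e-4):
    """First time V reaches zero for V' = -(1/(alpha*p*tm))*exp(alpha*V^p)*V^(1-p)."""
    v, t = v0, 0.0
    while v > 0.0 and t < 2.0 * tm:
        dv = -(1.0 / (alpha * p * tm)) * math.exp(alpha * v ** p) * v ** (1.0 - p)
        v = max(v + dv * dt, 0.0)  # the continuous flow stops at the origin
        t += dt
    return t

if __name__ == "__main__":
    alpha, p, tm = 1.0, 0.5, 1.0
    for v0 in (0.5, 10.0, 50.0):
        t_num = settle_time(v0, alpha, p, tm)
        # closed-form settling time implied by the substitution u = exp(-alpha*V^p)
        t_exact = tm * (1.0 - math.exp(-alpha * v0 ** p))
        assert t_num < tm                       # predefined-time bound of Lemma 3
        assert abs(t_num - t_exact) < 0.05 * tm
        print(f"V0={v0:5.1f}: settles at t={t_num:.3f} <= Tm={tm}")
```

Note that the settling time saturates at T_m as V(0) grows, which is the defining feature of predefined-time (as opposed to merely fixed-time) stability.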
Step 2: For t > ηT_m, it follows from s_i = 0 that the dynamics satisfy

ẋ_i(t) = −(∇²f_i(x_i))^{−1} (2c/(p(1−η)T_m)) Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} sig^{1−2p}(x_i−x_j)   (8)

Substituting (8), it can be derived that d(Σ_{i=1}^N ∇f_i(x_i(t)))/dt = Σ_{i=1}^N ∇²f_i(x_i(t)) ẋ_i = 0. Since Σ_{i=1}^N ∇f_i(x_i(t)) = 0 at t = ηT_m, it holds that Σ_{i=1}^N ∇f_i(x_i(t)) = 0 for all t ≥ ηT_m, which indicates that the zero-gradient-sum manifold is maintained.

Choose the Lyapunov function for system (8)

V(x_i(t)) = Σ_{i=1}^N [ f_i(x*) − f_i(x_i) − ∇f_i^T(x_i)(x* − x_i) ]   (9)

It is clear from Assumption 2 that V ≥ Σ_{i=1}^N (ψ_i/2)∥x_i − x*∥². Denote x̃ = [x̃_1^T, ..., x̃_N^T]^T ∈ R^{nN}, where x̃_i = x_i − x̄ with x̄ = (1/N) Σ_{k=1}^N x_k. Since F(x*) ≤ F(x̄) with F(·) = Σ_{i=1}^N f_i(·), and Σ_{i=1}^N ∇f_i(x_i(t)) = 0, it is obvious that

Σ_{i=1}^N [ f_i(x̄) − f_i(x_i) − ∇f_i^T(x_i)(x̄ − x_i) ] − V(t)
= Σ_{i=1}^N [ f_i(x̄) − f_i(x*) ] − Σ_{i=1}^N ∇f_i^T(x_i)(x̄ − x*)
= Σ_{i=1}^N [ f_i(x̄) − f_i(x*) ] ≥ 0   (10)

Notice that the following inequalities hold

Σ_{i=1}^N [ f_i(x̄) − f_i(x_i) − ∇f_i^T(x_i)(x̄ − x_i) ] ≤ Σ_{i=1}^N (Ψ_i/2)∥x̃_i∥² ≤ (Ψ̄/2)∥x̃∥²   (11)

From (10) and (11), it follows that

V(t) ≤ (Ψ̄/2)∥x̃∥²   (12)

Differentiating V(t) along (8) yields

V̇ = Σ_{i=1}^N (x_i − x*)^T ∇²f_i(x_i) ẋ_i
= −(2c/(p(1−η)T_m)) Σ_{i=1}^N Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} (x_i − x*)^T sig^{1−2p}(x_i − x_j)   (13)

Given that Assumption 1 holds, one has

Σ_{i=1}^N Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} x_i^T sig^{1−2p}(x_i − x_j)
= (1/2) Σ_{i=1}^N Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} (x_i − x_j)^T sig^{1−2p}(x_i − x_j)   (14)

and

Σ_{i=1}^N Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} (x*)^T sig^{1−2p}(x_i − x_j) = 0   (15)

It follows that

V̇ = −(c/(p(1−η)T_m)) Σ_{i=1}^N Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} (x_i − x_j)^T sig^{1−2p}(x_i − x_j)
= −(c/(p(1−η)T_m)) Σ_{i=1}^N Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} ∥x_i − x_j∥_{2−2p}^{2−2p}
≤ −(c/(p(1−η)T_m)) Σ_{i=1}^N Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) (a_ij∥x_i − x_j∥²)^{1−p}   (16)

Note that the function f(z) = exp(z^{2p}) z^{2(1−p)} is convex. By applying Jensen's inequality, it can be deduced that

(1/(N(N−1))) Σ_{i=1}^N Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) (a_ij∥x_i−x_j∥²)^{1−p}
≥ exp( (Σ_{i,j} a_ij∥x_i−x_j∥² / (N(N−1)))^p ) (Σ_{i,j} a_ij∥x_i−x_j∥² / (N(N−1)))^{1−p}   (17)

Then, noting that x_i − x_j = x̃_i − x̃_j,

V̇ ≤ −(cN(N−1)/(p(1−η)T_m)) exp( (Σ_{i,j} a_ij∥x_i−x_j∥² / (N(N−1)))^p ) (Σ_{i,j} a_ij∥x̃_i−x̃_j∥² / (N(N−1)))^{1−p}   (18)

Since the graph is undirected and connected, 1^T x̃ = 0 and λ2(L ⊗ I_n) = λ2(L), we have

Σ_{i=1}^N Σ_{j=1}^N a_ij∥x_i−x_j∥² = 2x̃^T(L ⊗ I_n)x̃ ≥ 2λ2(L)∥x̃∥²   (19)
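The graph-theoretic step (19) combines two standard Laplacian facts from Section II (the quadratic-form identity and the spectral bound on the centered vector). They can be verified numerically; the sketch below (ours, with scalar states and an arbitrary test graph) uses the 3-node path graph, whose Laplacian eigenvalues are known to be 0, 1, 3, so λ2(L) = 1.

```python
# Numerical check of (19) on the 3-node path graph 1-2-3 (scalar states):
# sum_ij a_ij (x_i - x_j)^2 = 2 x~^T L x~ >= 2 lambda_2(L) ||x~||^2.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
N = 3
L = [[sum(A[i]) if i == k else -A[i][k] for k in range(N)] for i in range(N)]
lam2 = 1.0  # second-smallest eigenvalue of this particular Laplacian

for x in ([0.3, -1.2, 2.0], [5.0, 5.0, 5.0], [1.0, 0.0, -1.0]):
    xbar = sum(x) / N
    xt = [v - xbar for v in x]                 # centered states: 1^T x~ = 0
    pair = sum(A[i][j] * (x[i] - x[j]) ** 2 for i in range(N) for j in range(N))
    quad = sum(xt[i] * L[i][j] * xt[j] for i in range(N) for j in range(N))
    nrm2 = sum(v * v for v in xt)
    assert abs(pair - 2.0 * quad) < 1e-9       # quadratic-form identity
    assert 2.0 * quad >= 2.0 * lam2 * nrm2 - 1e-9  # spectral bound
print("identity and spectral bound in (19) hold")
```

The bound is tight on the eigenvector of λ2 (the third test vector above), which is the worst case the proof has to cover.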
Considering that f(z) = exp(z^{2p}) z^{2(1−p)} is monotonically increasing on [0, ∞), it can be concluded that

V̇ ≤ −(cN(N−1)/(p(1−η)T_m)) exp( (2x̃^T(L⊗I_n)x̃ / (N(N−1)))^p ) (2x̃^T(L⊗I_n)x̃ / (N(N−1)))^{1−p}
≤ −(cN(N−1)/(p(1−η)T_m)) exp( (2λ2(L)∥x̃∥² / (N(N−1)))^p ) (2λ2(L)∥x̃∥² / (N(N−1)))^{1−p}   (20)

From (12) and (20), one has

V̇ ≤ −(cN(N−1)/(p(1−η)T_m)) (4λ2(L)/(N(N−1)Ψ̄))^{1−p} exp( (4λ2(L)V/(N(N−1)Ψ̄))^p ) V^{1−p}   (21)

Since c ≥ Ψ̄/(4λ2(L)), it can be obtained that

V̇ ≤ −(1/(p(1−η)T_m)) (4λ2(L)/(N(N−1)Ψ̄))^{−p} exp( (4λ2(L)V/(N(N−1)Ψ̄))^p ) V^{1−p}   (22)

Invoking Lemma 3, we have V = 0 within the predefined time (1−η)T_m. With the help of the optimality conditions, it is clear that lim_{t→T_m} x_i(t) → x*, i ∈ V. ∎

Remark 3: The proposed algorithm is inspired by the ZGS framework [18] and predefined-time stability [34]. Compared with [18], the convergence time in this paper can be arbitrarily specified by users, does not depend on system parameters, and is established under a unified time by introducing the tuning parameter η. In contrast to [34], our algorithm is less conservative: it not only circumvents the requirement for local minimization, thereby avoiding a waste of computational time, but also offers a broader range of parameter selection (c ≥ Ψ̄/(4λ2(L)) in this paper versus k ≥ Ψ̄/(2λ2(L)) in [34]). It is also worth mentioning that the proposed method eliminates the singularity of the algorithm in [34] as the gradient approaches zero.

Remark 4 (Convergence of (3) Over Dynamically Changing Topologies): The convergence result in Theorem 1 is established for fixed undirected connected graphs. Since the proof of Theorem 1 relies on a Lyapunov function with no dependency on the communication topology, the convergence result readily extends to dynamically changing topologies, following the line of proof of Theorem 1. In other words, for a time-varying graph G(t) that is undirected and connected at all times and whose adjacency matrix is uniformly bounded and piecewise constant, algorithm (3) is still effective as long as c ≥ Ψ̄/(4λ_{2min}(L(t))) with λ_{2min}(L(t)) = min λ2(L(t)).

Remark 5 (Free-Will Arbitrary Time Convergence): The specified-time distributed optimization framework proposed in this paper relies on the predefined-time stability result in [38]. Different convergence results can be obtained easily by using other stability lemmas. For example, based on the results in [28], [39], an algorithm with arbitrary-time convergence can be obtained, as shown below:

ẋ_i(t) = −(∇²f_i(x_i))^{−1} [ (k/(ηT_m − t)) (I_n − exp(−diag(s_i))) 1_n + (c/((1−η)T_m − t)) Σ_{j=1}^N a_ij(χ_i − χ_j) ],
χ_i(t) = ( I_n − exp(−diag(Σ_{j=1}^N a_ij(x_i − x_j))) ) 1_n,
s_i(t) = ∇f_i(x_i) + ∫_0^t (c/((1−η)T_m − τ)) Σ_{j=1}^N a_ij(χ_i − χ_j) dτ   (23)

where 0 < η < 1, k ≥ 1, T_m > 0, and c ≥ Ψ̄/(λ2²(L)).

Remark 6 (Robustness to Disturbance): The proposed sliding-mode-based method can reject disturbances after a proper extension. Suppose that d_i(t) ∈ R^n is an unknown disturbance with ∥d_i(t)∥_1 ≤ D. Introducing a non-smooth term k sgn(s_i), the following algorithm, insensitive to disturbances, achieves predefined-time optimization:

ẋ_i(t) = −(∇²f_i(x_i))^{−1} [ (1/(2pηT_m)) exp(∥s_i∥^{2p}) sig^{1−2p}(s_i) + k sgn(s_i) + (2c/(p(1−η)T_m)) Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} sig^{1−2p}(x_i−x_j) ] + d_i(t)   (24a)

s_i(t) = ∇f_i(x_i) + ∫_0^t (2c/(p(1−η)T_m)) Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} sig^{1−2p}(x_i−x_j) dτ   (24b)

where 0 < η < 1, 0 < p < 1/2, T_m > 0, c ≥ Ψ̄/(4λ2(L)), and k ≥ HD with ∥∇²f_i(x_i)∥_1 ≤ H. The proof proceeds similarly to that of Theorem 1. A comparable result can be found in [18, Proposition 2], where an additional sliding manifold is involved to deal with the disturbance; in contrast, algorithm (24) guarantees both zero-gradient-sum and disturbance rejection through a common sliding mode.

Remark 7 (Fully Distributed Implementation): Since the parameter selection of algorithm (3) depends on the algebraic connectivity of the graph and the maximum Lipschitz constant of the local gradients, it is not fully distributed. However, the proposed framework can readily be extended to fully distributed approximate optimization with the help of the notion of time-base generators in [24]-[26]. Predefined-time approximate stability can be achieved by multiplying a well-defined dynamic system by a time-varying gain generated by a time-base generator. Consider the following algorithm:

ẋ_i(t) = −(∇²f_i(x_i))^{−1} [ (1/(2pηT_m)) exp(∥s_i∥^{2p}) sig^{1−2p}(s_i) + T(t, (1−η)T_m) Σ_{j=1}^N a_ij(x_i − x_j) ]   (25a)

s_i(t) = ∇f_i(x_i) + ∫_0^t T(τ, (1−η)T_m) Σ_{j=1}^N a_ij(x_i − x_j) dτ   (25b)

where 0 < η < 1, 0 < p < 1/2, T_m > 0, and T(t, (1−η)T_m) is a time-base generator as defined in [25].
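To make the update law (3) concrete, the following is a minimal forward-Euler sketch (ours, not the authors' code) for scalar states and quadratic costs f_i(x) = (x − b_i)², for which ∇f_i = 2(x − b_i), ∇²f_i = 2 and ψ_i = Ψ_i = 2. On an unweighted 4-node ring, λ2(L) = 2, so c = 0.5 satisfies c ≥ Ψ̄/(4λ2(L)); the step size 10⁻³ mirrors the solver setup in Section V. The unique zero of Σ∇f_i is x* = mean(b_i), and all states should agree on it by t = T_m.

```python
import math

def sig(z, a):  # sig^a(z) = |z|^a * sgn(z)
    return math.copysign(abs(z) ** a, z) if z != 0.0 else 0.0

def run_alg3(b, edges, x0, p=0.3, eta=0.4, tm=2.0, c=0.5, dt=1e-3):
    """Forward-Euler integration of (3a)-(3b): a_ij = 1 on each edge,
    scalar states, quadratic costs f_i(x) = (x - b_i)^2."""
    n = len(b)
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    x = list(x0)
    integ = [0.0] * n                          # running integral in (3b)
    k_r = 1.0 / (2.0 * p * eta * tm)           # reach-term gain in (3a)
    k_c = 2.0 * c / (p * (1.0 - eta) * tm)     # consensus-term gain in (3a)
    for _ in range(int(tm / dt)):
        dx, di = [0.0] * n, [0.0] * n
        for i in range(n):
            cons = sum(math.exp(abs(x[i] - x[j]) ** (2 * p))
                       * sig(x[i] - x[j], 1 - 2 * p) for j in nbrs[i])
            s_i = 2.0 * (x[i] - b[i]) + integ[i]         # sliding variable (3b)
            reach = k_r * math.exp(abs(s_i) ** (2 * p)) * sig(s_i, 1 - 2 * p)
            dx[i] = -0.5 * (reach + k_c * cons)          # (3a); Hessian^{-1} = 1/2
            di[i] = k_c * cons
        for i in range(n):
            x[i] += dx[i] * dt
            integ[i] += di[i] * dt
    return x

if __name__ == "__main__":
    b = [-1.0, 0.5, 1.0, 2.5]                  # x* = mean(b) = 0.75
    ring = [(0, 1), (1, 2), (2, 3), (3, 0)]    # unweighted ring: lambda_2(L) = 2
    x = run_alg3(b, ring, x0=[-1.5, -0.5, 0.5, 1.5])
    assert all(abs(xi - 0.75) < 0.05 for xi in x)
    print("states at Tm:", [round(xi, 4) for xi in x])
```

Because ṡ_i = −reach_i holds exactly along (3a)-(3b), each sliding variable decays independently of the consensus dynamics, which is the mechanism separating the reach phase (within ηT_m) from the sliding phase (within (1−η)T_m) in Theorem 1.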
It is obvious that the zero-gradient-sum is achieved within ηT_m and the states of all agents reach a small neighborhood of the optimal solution within T_m under algorithm (25). The proof can be done along similar lines as that of Theorem 1 and is omitted here to save space.

Remark 8: It is worth noting that the communication topology considered in this paper is fixed, and the connection weights are assumed to be piecewise constant even under dynamically switching topologies. However, sensing and communication capabilities are range-limited, so it is impractical to simply assume that network connectivity is preserved by default. One potential solution is to maintain network connectivity with artificial potential functions [40]-[42]. In the future, we will investigate how to implement ZGS algorithms with network connectivity preservation under preset time constraints.

Finally, we summarize the proposed algorithm (3) in Algorithm 1.

Algorithm 1 Predefined-time Distributed Optimization Algorithm
Input: x_i(0), ∇f_i, ∇²f_i, T_m, Ψ̄, λ2(L);
Output: x*;
1: Initialization: t = 0; select parameters η, p, c according to Theorem 1;
2: while t ≤ T_m do
3:   for each agent i ∈ V do
4:     Update x_i(t) by Eq. (3);
5:   end for
6:   Update t;
7: end while
8: return x* = x_i(T_m);

B. Predefined-time Distributed Time-varying Optimization Algorithms

Most existing distributed time-varying optimization methods are distributed implementations of prediction-correction algorithms utilizing average tracking techniques (see [13] and its extensions). In this section, time-varying optimization with predefined-time convergence is developed under the ZGS framework.

Assumption 3: Each cost function f_i has an identical Hessian, and the rate of change of the local gradients is bounded, i.e., ∇²f_i(x_i, t) = ∇²f_j(x_j, t) and ∥∂∇f_i(x_i, t)/∂t∥_∞ ≤ ω for i, j ∈ V and t ≥ 0.

Remark 9: Assumption 3 is mild and often adopted in time-varying optimization problems. It is easy to find functions that satisfy the requirements of Assumption 3, such as the quadratic functions f(x, t) = (γx + θ(t))² commonly used in energy optimization and engineering applications.

For each agent i, the distributed algorithm is designed as follows:

ẋ_i(t) = −(∇²f_i(x_i, t))^{−1} [ (1/(2pηT_m)) exp(∥s_i∥^{2p}) sig^{1−2p}(s_i) + (2c/(p(1−η)T_m)) Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} sig^{1−2p}(x_i−x_j) + μ Σ_{j=1}^N a_ij SGN(x_i − x_j) + ∂∇f_i(x_i, t)/∂t ]   (26a)

s_i(t) = ∇f_i(x_i) + ∫_0^t [ μ Σ_{j=1}^N a_ij SGN(x_i − x_j) + (2c/(p(1−η)T_m)) Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} sig^{1−2p}(x_i−x_j) ] dτ   (26b)

where η, p, T_m, c, μ are positive constants.

Theorem 2: Suppose that Assumptions 1-3 hold. Let ψ = min_{i∈V}{ψ_i}, Ψ̄ = max_{i∈V}{Ψ_i}, and a = min_{i,j∈V}{a_ij : a_ij ≠ 0}. If 0 < η < 1, 0 < p < 1/2, T_m > 0, c ≥ Ψ̄/(4λ2(L)), and μ ≥ 2NωΨ̄/(aψ), then distributed algorithm (26) drives the states x_i(t) of all agents to an optimal solution x*(t) of (2) within a settling time bounded by T_m.

Proof: For 0 < t ≤ ηT_m, differentiating (26b) yields

ṡ_i(t) = ∇²f_i(x_i, t) ẋ_i + ∂∇f_i(x_i, t)/∂t + μ Σ_{j=1}^N a_ij SGN(x_i − x_j) + (2c/(p(1−η)T_m)) Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} sig^{1−2p}(x_i−x_j)
= −(1/(2pηT_m)) exp(∥s_i∥^{2p}) sig^{1−2p}(s_i)   (27)

Similar to Theorem 1, it holds that lim_{t→ηT_m} s_i(t) = 0.

For t > ηT_m, the dynamics satisfy

ẋ_i(t) = −(∇²f_i(x_i, t))^{−1} [ μ Σ_{j=1}^N a_ij SGN(x_i − x_j) + ∂∇f_i(x_i, t)/∂t + (2c/(p(1−η)T_m)) Σ_{j=1}^N exp((a_ij∥x_i−x_j∥²)^p) a_ij^{1−p} sig^{1−2p}(x_i−x_j) ]   (28)

It follows from (28) that d(Σ_{i=1}^N ∇f_i(x_i, t))/dt = Σ_{i=1}^N ( ∇²f_i(x_i, t) ẋ_i + ∂∇f_i(x_i, t)/∂t ) = 0. Since Σ_{i=1}^N ∇f_i(x_i, t) = 0 at t = ηT_m, one has Σ_{i=1}^N ∇f_i(x_i, t) = 0 for all t ≥ ηT_m.

Denote H′(t) = (∇²f_i(x_i, t))^{−1} and x̃ = [x̃_1^T, ..., x̃_N^T]^T ∈ R^{nN}, where x̃_i = x_i − x̄ with x̄ = (1/N) Σ_{k=1}^N x_k. Consider the following Lyapunov function for system (28):

W(x_i(t)) = (1/2)∥x̃∥² = (1/2) Σ_{i=1}^N ∥x̃_i∥²   (29)


a1−p 1−2p
(xi − xj ) Algorithm 2 Predefined-time Distributed Time-varying Opti-
ij sig
mization Algorithm
N N
X X ∂ Input:
x̃Ti H ′ (t)

= −µ aij SGN (x̃i − x̃j )+ ∇fi (xi , t) ∂
xi (0), ∇fi , ∇2 fi , ∂t ∇fi , Tm , ω, ψ, Ψ̄ , λ2 (L), a;
i=1 j=1
∂t
Output:
N N
2c X X 2  x∗ (t);
− x̃Ti H ′ (t) exp (aij ∥x̃i − x̃j ∥ )p × 1: Initialization:
p(1 − η)Tm i=1 j=1
t = 0;
a1−p
ij sig
1−2p
(x̃i − x̃j ) Select parameters η, p, c, µ according to Theorem 2;
N N N 2: while t ≥ 0 do
µ XX ωX
≤− aij ∥x̃i − x̃j ∥ + ∥x̃i ∥ 3: for each agent i ∈ V do
2Ψ̄ i=1 j=1 ψ i=1
4: Update x(t) by Eq. (26);
c XN X N  5: end for
2 
− exp (aij ∥x̃i − x̃j ∥ )p × 6: Update t;
Ψ̄ p(1 − η)Tm i=1 j=1
7: if t ≥ Tm then
return x∗ (t) = xi (t);

2−2p
a1−p
ij ∥x̃i − x̃j ∥2−2p (30) 8:
9: end if
1
PN 10: end while
Notice that ∥x̃i ∥ ≤ ∥xi − xj ∥ ≤
PN PN PN PN j=1
N
i=1 j=1 ∥xi − xj ∥ = i=1 j=1 ∥x̃i − x̃j ∥. Combining
with Lemma 1, it can be concluded that
N N N N
µ XX Nω X X
ẆC ≤ − aij ∥x̃i − x̃j ∥+ ∥x̃i − x̃j ∥
2Ψ̄ i=1 j=1 ψ i=1 j=1
N X N  Fig. 1: Communication topology for multiagent systems (all
c X 2 
− exp (aij ∥x̃i − x̃j ∥ )p × weights are 1).
Ψ̄ p(1 − η)Tm i=1 j=1

2 1−p
aij ∥x̃i − x̃j ∥ (31)
formed using MATLAB/Simulink with an Euler solver and
Due to µ ≥ 2N ω Ψ̄ /(aψ) and c ≥ Ψ̄ / (4λ2 (L)), it is clear fundamental step-size 10−3 .
that
1 4λ2 (L) −p 4λ2 (L)W p  1−p
Ẇ ≤ − exp ( ) W A. A Numerical Case
p(1−η)Tm N (N −1) N (N −1)
(32)

By Lemma 3 and the optimality conditions, the conclusion can be obtained.

Remark 10: Note that Tm is an upper bound on the settling time of algorithms (3) and (26). Given the desired convergence time Tm, different convergence performance can be obtained by adjusting the other parameters η, p, c and µ. However, for use on physical systems the gains cannot be too large, due to actuator saturation; the choice of controller parameters is therefore a tradeoff between optimization performance and practical limitations. Besides, the discontinuous function SGN(·) in (26) may cause chattering of the control input. To reduce chattering, one can replace SGN(z) with the continuous approximation SĜN(z) = z/(∥z∥ + ϵ(t)), where ϵ̇(t) = −(1/(2pηTm)) exp(∥ϵ∥^{2p}) sig^{1−2p}(ϵ), which attenuates the chattering effect and makes the controller easier to implement in real applications [43].

The proposed time-varying algorithm (26) is summarized in Algorithm 2.

V. SIMULATION VERIFICATION

In this section, two simulation examples are provided to verify the distributed time-invariant algorithm (3) and the time-varying algorithm (26), respectively. All simulations are performed with the parameter settings specified in each example.

Firstly, let us verify the distributed time-invariant optimization algorithm (3). Consider a multi-agent system with 6 nodes that cooperatively solves the global optimization problem min Σ_{i=1}^{6} fi. The local cost functions fi assigned to the agents are listed as:

f1(x) = (x1 − 0.5)² + 2(x2 + 1.3)² − 0.5x1x2,
f2(x) = 2(x1 + 0.7)² + 1.5(x2 + 1.7)² + 0.3x1x2 + 0.3 sin(0.3x1 + 1.8) + 0.73 cos(0.5x2 + 1),
f3(x) = 2(x1 − 1.5)² + 2(x2 − 0.3)² + ln(2 + 0.1x1²) + ln(4 + 0.6x2²),
f4(x) = 0.5(x1 − 1.5)² + 1.5(x2 + 1.6)² + 0.5x1x2 + x1/√(2 + 0.4x1²) + 0.6x2/√(1 + x2²),
f5(x) = (x1 − 2)² + (x2 − 0.9)² + 0.7x1x2 + 0.3 exp(−0.4x1²) + 0.7 exp(−0.5x2²),
f6(x) = 1.5(x1 − 0.8)² + 2(x2 + 1.5)².

It can be verified that the optimal solution of the global objective is x∗ = [x1∗, x2∗]ᵀ = [0.7858, −0.9551]ᵀ. The communication topology among agents is depicted as the undirected connected graph shown in Fig. 1, and the agents' initial values are set as x1(0) = [xi1(0)]ᵀ = [−2, −1, 1, 2, 3, 4]ᵀ and x2(0) = [xi2(0)]ᵀ = [−3, −2, −1, 1, 2, 3]ᵀ.
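The claimed optimizer can be cross-checked numerically. The sketch below minimizes the sum of the six local costs with plain centralized gradient descent and finite-difference gradients; it is a sanity check only, not the distributed algorithm (3), and the step size and iteration count are arbitrary choices.

```python
import math

# Local cost functions f1..f6 transcribed from the example above.
def f1(x, y): return (x - 0.5)**2 + 2*(y + 1.3)**2 - 0.5*x*y
def f2(x, y): return (2*(x + 0.7)**2 + 1.5*(y + 1.7)**2 + 0.3*x*y
                      + 0.3*math.sin(0.3*x + 1.8) + 0.73*math.cos(0.5*y + 1))
def f3(x, y): return (2*(x - 1.5)**2 + 2*(y - 0.3)**2
                      + math.log(2 + 0.1*x*x) + math.log(4 + 0.6*y*y))
def f4(x, y): return (0.5*(x - 1.5)**2 + 1.5*(y + 1.6)**2 + 0.5*x*y
                      + x/math.sqrt(2 + 0.4*x*x) + 0.6*y/math.sqrt(1 + y*y))
def f5(x, y): return ((x - 2)**2 + (y - 0.9)**2 + 0.7*x*y
                      + 0.3*math.exp(-0.4*x*x) + 0.7*math.exp(-0.5*y*y))
def f6(x, y): return 1.5*(x - 0.8)**2 + 2*(y + 1.5)**2

def F(x, y):
    # Global objective: sum of the six local costs.
    return sum(f(x, y) for f in (f1, f2, f3, f4, f5, f6))

def grad(x, y, h=1e-6):
    # Central finite differences for the global gradient.
    return ((F(x + h, y) - F(x - h, y)) / (2*h),
            (F(x, y + h) - F(x, y - h)) / (2*h))

# Centralized gradient descent; F is strongly convex here, so a small
# fixed step converges.
x, y = 0.0, 0.0
for _ in range(5000):
    gx, gy = grad(x, y)
    x, y = x - 0.02*gx, y - 0.02*gy

print(round(x, 4), round(y, 4))  # close to the reported (0.7858, -0.9551)
```

This confirms the reported optimizer independently of the distributed protocol.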
The algorithm parameters are selected as p = 0.3, η = 0.4, c = 3 and Tm = 2. Fig. 2 depicts the evolution of the system states, from which it can be seen that the global optimizer is reached accurately and quickly. The trajectory of the global objective function shown in Fig. 3 also illustrates the validity of algorithm (3). Fig. 4 depicts the convergence performance of the algorithm with different parameters. As discussed in Remark 10, Tm is an upper bound on the desired settling time, and a variety of convergence rates can be obtained by adjusting the parameters. The algorithm is also simulated under dynamically switching topologies: Fig. 5a and 5b show the sequence of graphs and the switching signal, respectively. The convergence results are depicted in Fig. 6, which verifies the effectiveness of the algorithm.

To illustrate the advantages and scalability of the proposed distributed method, the algorithm is executed on a computing network consisting of 60 agents. The communication topology is a randomly generated connected graph, as shown in Fig. 7a. The cost functions of the agents are set to f(10i+k)(x) = f(i+1)(x) for i ∈ {0, 1, ..., 5} and k ∈ {1, 2, ..., 10}, where f1–f6 are as previously defined. The algorithm parameters remain unchanged, and the initial values are randomly selected in [−5, 5]. The convergence results are depicted in Figs. 7b–7c, showing that the computational burden of the algorithm is not affected by the network size.

Fig. 2: Evolution of xi(t) under algorithm (3). (a) xi1(t). (b) xi2(t).

Fig. 3: Evolution of the global objective function under algorithm (3).

Fig. 4: Performance evaluation of algorithm (3) with different parameters.

Fig. 5: Switching communication topologies for multi-agent systems. (a) Set of graphs. (b) Switching signal.

Fig. 6: Evolution of xi(t) under algorithm (3) over the switching graphs. (a) xi1(t). (b) xi2(t).

B. Formation for target encirclement

A case of achieving formation for target encirclement via the distributed time-varying optimization algorithm is provided to verify the proposed method (26). Consider a group of four robots interacting through the undirected connected graph shown in Fig. 8. The dynamics of each robot is modeled as the single integrator ṗi(t) = ui(t), i ∈ V.
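In a discrete-time simulation, such a single-integrator model is typically advanced with a fixed-step scheme. Below is a minimal forward-Euler sketch; the step size h and the constant input are illustrative assumptions, not values from the paper.

```python
def euler_step(p, u, h):
    """One forward-Euler step of the single-integrator model p_dot = u."""
    return [pi + h*ui for pi, ui in zip(p, u)]

# Example: a robot starting at the origin under a constant input,
# integrated over 1 s with step size h = 0.01.
p, u, h = [0.0, 0.0], [1.0, -0.5], 0.01
for _ in range(100):
    p = euler_step(p, u, h)
print(p)  # approximately [1.0, -0.5], matching p(1) = p(0) + u for constant u
```

For a constant input, forward Euler is exact up to floating-point rounding, which makes this a convenient correctness check before plugging in the actual control law.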
In this model, pi(t) ∈ R² and ui(t) ∈ R² are the position and the control input of the ith robot, and V = {1, 2, 3, 4}. The initial positions of the robots are set as p1(0) = [0, 0]ᵀ, p2(0) = [0, −1]ᵀ, p3(0) = [−1, 0]ᵀ and p4(0) = [0, 1]ᵀ, and the target source to be contained is defined as p∗(t) = [2 sin(t) + 0.5t, t]ᵀ. The goal of the group is to achieve lim_{t→Tm} ∥pi(t) − hi − p∗(t)∥² = 0, i ∈ V, where hi = 2[cos(2πi/N), sin(2πi/N)]ᵀ is the formation configuration of the robots. Each robot's observation of the target is p̂∗i(t), which is imprecise. It is assumed that p̂∗i(t) = p∗(t) + ϖi, where ϖ1 = [1, 1]ᵀ, ϖ2 = [−1, −1]ᵀ, ϖ3 = [0.5, 0.5]ᵀ and ϖ4 = [−0.3, −0.3]ᵀ. Define xi(t) = pi(t) − hi and fi(xi, t) = ∥xi − p̂∗i(t)∥². Note that ṗi(t) = ẋi(t), so the distributed optimization algorithm (26) for solving min Σ_{i=1}^{4} fi(xi, t) can be treated as the control law of each robot to achieve a relatively accurate enveloping configuration. The parameters are chosen as η = 0.5, p = 0.3, Tm = 2, c = 3, and µ = 44.

The position trajectories of the robots and the target source are shown in Fig. 9, from which it can be observed that all robots achieve encirclement of the target in formation. The evolution of the optimizer and of the global cost are depicted in Figs. 10 and 11, respectively. Finally, the control inputs of the four robots are given in Fig. 12. Due to the discontinuity of the sign function, a chattering phenomenon appears in the control input.

Fig. 7: Experiment of algorithm (3) over 60 agents. (a) Communication topology. (b) Evolution of xi1(t). (c) Evolution of xi2(t).

Fig. 8: Communication topology for the multi-robot system.

Fig. 9: Trajectories of robot i, i ∈ V, and the target source.

Fig. 10: Evolution of xi(t) under algorithm (26). (a) xi1(t). (b) xi2(t).

Fig. 11: Evolution of the global objective function under algorithm (26).

Fig. 12: Control inputs of robot i, i ∈ V.
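The contrast between the discontinuous sign function and the boundary-layer approximation of Remark 10 can be sketched as follows. For simplicity this sketch uses a fixed ε instead of the time-varying ε(t), and applies the sign componentwise; both are illustrative simplifications, not the paper's exact construction.

```python
import math

def sgn(z):
    # Componentwise sign: discontinuous at the origin, which is the
    # source of chattering when used in a feedback law.
    return [0.0 if zi == 0 else math.copysign(1.0, zi) for zi in z]

def sgn_smooth(z, eps):
    # Boundary-layer approximation z / (||z|| + eps): continuous
    # everywhere, and close to z/||z|| once ||z|| is much larger than eps.
    norm = math.sqrt(sum(zi*zi for zi in z))
    return [zi / (norm + eps) for zi in z]

z = [0.001, -0.002]                 # a small residual near the sliding manifold
print(sgn(z))                       # [1.0, -1.0]: full-amplitude switching
print(sgn_smooth(z, 0.1))           # small, smooth output near the origin
```

The smoothed version keeps the control input small inside the boundary layer instead of switching at full amplitude, which is why the inputs in the boundary-layer experiment below come out visibly smoother.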
The validity of algorithm (26) with the boundary-layer approximation is verified by keeping the experimental settings unchanged. The simulation results are shown in Figs. 13–16, from which it can be observed that each robot moves to the desired position and the control input becomes significantly smoother.

Fig. 13: Trajectories of robot i, i ∈ V, and the target source.

Fig. 14: Evolution of xi(t) under algorithm (26) with the boundary-layer approximation. (a) xi1(t). (b) xi2(t).

Fig. 15: Evolution of the global objective function under algorithm (26) with the boundary-layer approximation.

Fig. 16: Control inputs of robot i, i ∈ V.

VI. CONCLUSION

This paper investigates distributed optimization algorithms with predefined-time convergence and reduced communication requirements. By introducing a sliding manifold that drives the sum of local gradients to zero, a distributed algorithm is proposed to achieve global optimal consensus. The result is extended to time-varying cost functions by introducing non-smooth consensus terms and local gradient prediction. The proposed algorithms only require the primal states to be communicated, and the convergence time can be specified in advance within a unified settling-time framework.

In future research, we are interested in obtaining a discrete-time implementation of the proposed algorithm with formal convergence guarantees. Predefined-time distributed optimization algorithms with state-dependent interactions are also worth investigating.

REFERENCES

[1] P. Shi and B. Yan, "A survey on intelligent control for multiagent systems," IEEE Trans. Syst. Man Cybern., Syst., vol. 51, no. 1, pp. 161–175, Jan. 2021.
[2] J. Tsitsiklis, D. Bertsekas, and M. Athans, "Distributed asynchronous deterministic and stochastic gradient optimization algorithms," IEEE Trans. Autom. Control, vol. 31, no. 9, pp. 803–812, Sep. 1986.
[3] A. Nedic and A. Ozdaglar, "Distributed subgradient methods for multi-agent optimization," IEEE Trans. Autom. Control, vol. 54, no. 1, pp. 48–61, Jan. 2009.
[4] J. C. Duchi, A. Agarwal, and M. J. Wainwright, "Dual averaging for distributed optimization: Convergence analysis and network scaling," IEEE Trans. Autom. Control, vol. 57, no. 3, pp. 592–606, Mar. 2012.
[5] W. Shi, Q. Ling, G. Wu, and W. Yin, "EXTRA: An exact first-order algorithm for decentralized consensus optimization," SIAM J. Optim., vol. 25, no. 2, pp. 944–966, 2015.
[6] B. Gharesifard and J. Cortés, "Distributed continuous-time convex optimization on weight-balanced digraphs," IEEE Trans. Autom. Control, vol. 59, no. 3, pp. 781–786, Mar. 2014.
[7] S. S. Kia, J. Cortés, and S. Martínez, "Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication," Automatica, vol. 55, pp. 254–264, May 2015.
[8] R. Zhang and G. Guo, "Continuous distributed robust optimization of multi-agent systems with time-varying cost," IEEE Trans. Control Netw. Syst., 2023.
[9] P. Lin, W. Ren, and J. A. Farrell, "Distributed continuous-time optimization: Nonuniform gradient gains, finite-time convergence, and convex constraint set," IEEE Trans. Autom. Control, vol. 62, no. 5, pp. 2239–2253, May 2017.
[10] K. Lu, G. Jing, and L. Wang, "Distributed algorithm for solving convex inequalities," IEEE Trans. Autom. Control, vol. 63, no. 8, pp. 2670–2677, Aug. 2018.
[11] X. Wang, S. Mou, and B. D. O. Anderson, "Consensus-based distributed optimization enhanced by integral feedback," IEEE Trans. Autom. Control, vol. 68, no. 3, pp. 1894–1901, Mar. 2023.
[12] J. Lu and C. Y. Tang, "Zero-gradient-sum algorithms for distributed convex optimization: The continuous-time case," IEEE Trans. Autom. Control, vol. 57, no. 9, pp. 2348–2354, Sep. 2012.
[13] S. Rahili and W. Ren, "Distributed continuous-time convex optimization with time-varying cost functions," IEEE Trans. Autom. Control, vol. 62, no. 4, pp. 1590–1605, Apr. 2017.
[14] H. Moradian and S. S. Kia, "A distributed continuous-time modified Newton–Raphson algorithm," Automatica, vol. 136, p. 109886, 2022.
[15] M. Firouzbahrami and A. Nobakhti, "Cooperative fixed-time/finite-time distributed robust optimization of multi-agent systems," Automatica, vol. 142, pp. 110358–110366, 2022.
[16] K. Garg, M. Baranwal, and D. Panagou, "A fixed-time convergent distributed algorithm for strongly convex functions in a time-varying network," in Proc. 59th IEEE Conf. Decis. Control (CDC), Jeju, Korea (South), 2020, pp. 4405–4410.
[17] X. Wang, G. Wang, and S. Li, "A distributed fixed-time optimization algorithm for multi-agent systems," Automatica, vol. 122, p. 109289, 2020.
[18] G. Guo, R. Zhang, and Z.-D. Zhou, "A local-minimization-free zero-gradient-sum algorithm for distributed optimization," Automatica, vol. 157, p. 111247, 2023.
[19] X. Ju, S. Yuan, X. Yang, and P. Shi, "Fixed-time neurodynamic optimization algorithms and application to circuits design," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 71, no. 5, pp. 2171–2181, May 2024.
[20] K. Li, Q. Hu, Q. Liu, Z. Zeng, and F. Cheng, "A predefined-time consensus algorithm of multi-agent system for distributed constrained optimization," IEEE Trans. Netw. Sci. Eng., vol. 11, no. 1, pp. 957–968, Jan. 2024.
[21] T. Zhou, H. Wu, and J. Cao, "Distributed optimization in predefined-time for multi-agent systems over a directed network," Inf. Sci., vol. 615, pp. 743–757, 2022.
[22] Y. Zheng, Q. Liu, and J. Wang, "A specified-time convergent multi-agent system for distributed optimization with a time-varying objective function," IEEE Trans. Autom. Control, 2023.
[23] Y.-W. Wang, Y. Zhang, X.-K. Liu, and X. Chen, "Distributed predefined-time optimization and control for multi-bus dc microgrid," IEEE Trans. Power Syst., vol. 39, no. 4, pp. 5769–5779, Jul. 2024.
[24] Z. Guo and G. Chen, "Predefined-time distributed optimal allocation of resources: A time-base generator scheme," IEEE Trans. Syst., Man, Cybern., Syst., vol. 52, no. 1, pp. 438–447, Jan. 2022.
[25] Y. Liu, Z. Xia, and W. Gui, "Multi-objective distributed optimization via a predefined-time multi-agent approach," IEEE Trans. Autom. Control, 2023.
[26] Z. Guo and G. Chen, "Distributed dynamic event-triggered and practical predefined-time resource allocation in cyber–physical systems," Automatica, vol. 142, p. 110390, 2022.
[27] A. Polyakov, "Nonlinear feedback design for fixed-time stabilization of linear control systems," IEEE Trans. Autom. Control, vol. 57, no. 8, pp. 2106–2110, Aug. 2012.
[28] A. K. Pal, S. Kamal, S. K. Nagar, B. Bandyopadhyay, and L. Fridman, "Design of controllers with arbitrary convergence time," Automatica, vol. 112, p. 108710, 2020.
[29] H. M. Becerra, C. R. Vázquez, G. Arechavaleta, and J. Delfin, "Predefined-time convergence control for high-order integrator systems using time base generators," IEEE Trans. Control Syst. Technol., vol. 26, no. 5, pp. 1866–1873, Sep. 2018.
[30] X. Gong, Y. Cui, J. Shen, J. Xiong, and T. Huang, "Distributed optimization in prescribed-time: Theory and experiment," IEEE Trans. Netw. Sci. Eng., vol. 9, no. 2, pp. 564–576, Mar. 2022.
[31] Q. Cui, Y. Song, and C. Wen, "Prescribed time consensus control of multi-agent systems with minimized time-varying cost function," IEEE Trans. Autom. Control, pp. 1–8, 2023.
[32] Y. Wang, Y. Song, D. J. Hill, and M. Krstic, "Prescribed-time consensus and containment control of networked multiagent systems," IEEE Trans. Cybern., vol. 49, no. 4, pp. 1138–1147, Apr. 2019.
[33] S. Chen, H. Jiang, and Z. Yu, "Distributed predefined-time optimization algorithm: Dynamic event-triggered control," IEEE Trans. Control Netw. Syst., pp. 1–11, 2023.
[34] P. De Villeros, J. D. Sánchez-Torres, M. Defoort, M. Djemaï, and A. Loukianov, "Predefined-time formation control for multiagent systems-based on distributed optimization," IEEE Trans. Cybern., 2023.
[35] L. Ma, C. Hu, J. Yu, L. Wang, and H. Jiang, "Distributed fixed/preassigned-time optimization based on piecewise power-law design," IEEE Trans. Cybern., vol. 53, no. 7, pp. 4320–4333, Jul. 2023.
[36] C. Hu, H. He, and H. Jiang, "Fixed/preassigned-time synchronization of complex networks via improving fixed-time stability," IEEE Trans. Cybern., vol. 51, no. 6, pp. 2882–2892, Jun. 2021.
[37] Z. Zuo and L. Tie, "A new class of finite-time nonlinear consensus protocols for multi-agent systems," Int. J. Control, vol. 87, no. 2, pp. 363–370, 2014.
[38] J. D. Sánchez-Torres, M. Defoort, and A. J. Muñoz-Vázquez, "Predefined-time stabilisation of a class of nonholonomic systems," Int. J. Control, vol. 93, no. 12, pp. 2941–2948, 2020.
[39] A. K. Pal, S. Kamal, X. Yu, S. K. Nagar, and X. Xiong, "Free-will arbitrary time consensus for multiagent systems," IEEE Trans. Cybern., vol. 52, no. 6, pp. 4636–4646, Jun. 2022.
[40] B. Ning, Q.-L. Han, and Z. Zuo, "Distributed optimization of multiagent systems with preserved network connectivity," IEEE Trans. Cybern., vol. 49, no. 11, pp. 3980–3990, Nov. 2019.
[41] H. Hong, X. Yu, W. Yu, D. Zhang, and G. Wen, "Distributed convex optimization on state-dependent undirected graphs: Homogeneity technique," IEEE Trans. Control Netw. Syst., vol. 7, no. 1, pp. 42–52, Mar. 2020.
[42] Z. Wu, Z. Li, and J. Yu, "Designing zero-gradient-sum protocols for finite-time distributed optimization problem," IEEE Trans. Syst. Man, Cybern., Syst., vol. 52, no. 7, pp. 4569–4577, Jul. 2022.
[43] Y. Zhao, C. Xian, G. Wen, P. Huang, and W. Ren, "Design of distributed event-triggered average tracking algorithms for homogeneous and heterogeneous multiagent systems," IEEE Trans. Autom. Control, vol. 67, no. 3, pp. 1269–1284, 2022.

Renyongkang Zhang received the B.S. degree in automation from Northeastern University, Qinhuangdao, China, in 2020. He is currently pursuing the Ph.D. degree in control science and engineering at the College of Information Science and Engineering, Northeastern University, Shenyang, China. His current research focuses on control and optimization of multi-agent systems, with application to intelligent transportation. He was a recipient of the Best Paper Award at the 5th International Conference on Industrial Artificial Intelligence (IAI 2023).

Ge Guo received the B.S. degree and the Ph.D. degree from Northeastern University, Shenyang, China, in 1994 and 1998, respectively. From May 2000 to April 2005, he was with Lanzhou University of Technology, China, as a Professor and the director of the Institute of Intelligent Control and Robots. He then joined Dalian Maritime University, China, as a Professor with the Department of Automation. Since 2018, he has been a Professor with Northeastern University (NEU) and the director of the Center of Intelligent Transportation Systems of NEU. He has published over 200 international journal papers within his areas of interest, which include intelligent transportation systems, cyber-physical systems, etc. Dr. Guo is an Associate Editor of the IEEE Transactions on Intelligent Transportation Systems, the IEEE Transactions on Vehicular Technology, the IEEE Transactions on Intelligent Vehicles, the Information Sciences, the IEEE Intelligent Transportation Systems Magazine, the ACTA Automatica Sinica, the China Journal of Highway and Transport, and the Journal of Control and Decision. He won a series of awards including the CAA Young Scientist Award, the First Prize of Natural Science Award of Hebei Province, the First Prize of Science and Technology Progress Award of Gansu Province, etc.

Zeng-Di Zhou received the B.S. degree in measurement and control technology and instrument from Northeastern University, Qinhuangdao, China, in 2021. He is currently pursuing the Ph.D. degree in control science and engineering at the College of Information Science and Engineering, Northeastern University, Shenyang, China. His current research focuses on control and optimization of multi-agent systems.
