APPLICATIONES MATHEMATICAE
22,1 (1993), pp. 11–23
E. NAVARRO, R. COMPANY and L. JÓDAR (Valencia)
BESSEL MATRIX DIFFERENTIAL EQUATIONS:
EXPLICIT SOLUTIONS OF INITIAL
AND TWO-POINT BOUNDARY VALUE PROBLEMS
Abstract. In this paper we consider Bessel equations of the type $t^2 X^{(2)}(t) + tX^{(1)}(t) + (t^2 I - A^2)X(t) = 0$, where $A$ is an $n \times n$ complex matrix and $X(t)$ is an $n \times m$ matrix for $t > 0$. Following the ideas of the scalar case, we introduce the concept of a fundamental set of solutions for the above equation, expressed in terms of the data dimension. This concept allows us to give an explicit closed-form solution of initial and two-point boundary value problems related to the Bessel equation.
1. Introduction. Numerous problems from chemistry, physics and mechanics, both linear and nonlinear, are related to matrix differential equations of the type $t^2 X^{(2)}(t) + tA(t)X^{(1)}(t) + B(t)X(t) = 0$, where $A(t)$, $B(t)$ are matrix-valued functions [8], [10]. This paper is concerned with the Bessel matrix equation
(1.1) $t^2 X^{(2)}(t) + tX^{(1)}(t) + (t^2 I - A^2)X(t) = 0, \quad t > 0,$
where $A$ is a matrix in $\mathbb{C}^{n\times n}$, and $X(t)$ is a matrix in $\mathbb{C}^{n\times n}$, for $t > 0$. Note that the matrix problem (1.1) may be regarded as a system of coupled Bessel type equations that cannot be transformed into a set of independent equations if the matrix $A$ is not diagonalizable. Standard techniques to study problems related to (1.1) are based on the consideration of the extended first order system
$$tZ'(t) = M(t)Z(t)$$
where
1991 Mathematics Subject Classification: 33C10, 34A30, 47A60.
Key words and phrases: Bessel matrix equation, fundamental set, closed form solution,
boundary value problem, initial value problem.
(1.2) $M(t) = \begin{pmatrix} 0 & I \\ -t^2 I + A^2 & 0 \end{pmatrix}, \quad Z(t) = \begin{pmatrix} X(t) \\ tX'(t) \end{pmatrix}.$
Then series solutions for (1.2) may be obtained, and the relationship between
the solutions X(t) of (1.1) and Z(t) of (1.2) is given by X(t) = [I, 0]Z(t)
(see [4], [13] for details). This technique has two basic drawbacks: first, it involves an increase of the problem dimension and a lack of explicitness derived from the relationship $X(t) = [I, 0]Z(t)$; secondly, unlike the scalar case, it does not provide a pair of solutions of (1.1) which would allow us to give a closed form of the general solution of (1.1) involving a pair of parameters.
This paper is organized as follows. Section 2 is concerned with some
preliminaries that will be used in the following sections. In Section 3 we
construct series solutions of problem (1.1) and we propose a closed form of
the general solution of (1.1) for the case where the matrix A satisfies the
spectral condition
(1.3) For every eigenvalue $z \in \sigma(A)$, $2z$ is not an integer, and if $z, w$ belong to $\sigma(A)$ and $z \neq w$, then $z \pm w$ is not an integer.
Here σ(A) denotes the set of all eigenvalues of A. Finally, in Section 4 we
study the boundary value problem
$$t^2 X^{(2)}(t) + tX^{(1)}(t) + (t^2 I - A^2)X(t) = 0, \quad 0 < a \le t \le b,$$
(1.4) $$M_{11}X(a) + N_{11}X(b) + M_{12}X^{(1)}(a) + N_{12}X^{(1)}(b) = 0,$$
$$M_{21}X(a) + N_{21}X(b) + M_{22}X^{(1)}(a) + N_{22}X^{(1)}(b) = 0,$$
where $M_{ij}$, $N_{ij}$, for $1 \le i, j \le 2$, are matrices in $\mathbb{C}^{n\times n}$.
If $S$ is a matrix in $\mathbb{C}^{m\times n}$, we denote by $S^+$ its Moore–Penrose pseudoinverse; an account of the uses and properties of this concept may be found in [1].
2. Preliminaries. We begin this section with an algebraic result that provides a finite expression for the solution of a generalized algebraic Lyapunov matrix equation
(2.1) $A_1 + B_1 X - XD_1 = 0,$
where $A_1$, $B_1$, $D_1$ and the unknown $X$ are matrices in $\mathbb{C}^{n\times n}$.
Lemma 1. Suppose that the matrices $B_1$ and $D_1$ satisfy the spectral condition
(2.2) $\sigma(B_1) \cap \sigma(D_1) = \emptyset$
and let $p(z) = \sum_{k=0}^n a_k z^k$ be such that $p(B_1) = 0$. Then the only solution $X$ of equation (2.1) is given by
(2.3) $X = \Bigl(\sum_{j=1}^n \sum_{h=1}^j a_j B_1^{h-1} A_1 D_1^{j-h}\Bigr)\Bigl(\sum_{j=0}^n a_j D_1^j\Bigr)^{-1}.$
P r o o f. Under the hypothesis (2.2), equation (2.1) has only one solution [2], [12], and from Corollary 2 of [2], if $X$ is the only solution of (2.1), it follows that
(2.4) $V = \begin{pmatrix} B_1 & A_1 \\ 0 & D_1 \end{pmatrix} = W \begin{pmatrix} B_1 & 0 \\ 0 & D_1 \end{pmatrix} W^{-1}, \quad W = \begin{pmatrix} I & X \\ 0 & I \end{pmatrix}, \quad W^{-1} = \begin{pmatrix} I & -X \\ 0 & I \end{pmatrix}.$
From (2.4), it follows that
(2.5) $p(V) = W\, p\begin{pmatrix} B_1 & 0 \\ 0 & D_1 \end{pmatrix} W^{-1} = W \begin{pmatrix} p(B_1) & 0 \\ 0 & p(D_1) \end{pmatrix} W^{-1} = \begin{pmatrix} 0 & Xp(D_1) \\ 0 & p(D_1) \end{pmatrix}$
and, taking into account the polynomial calculus, there exists a matrix $M$ such that
(2.6) $p(V) = p\begin{pmatrix} B_1 & A_1 \\ 0 & D_1 \end{pmatrix} = \begin{pmatrix} p(B_1) & M \\ 0 & p(D_1) \end{pmatrix} = \begin{pmatrix} 0 & M \\ 0 & p(D_1) \end{pmatrix}.$
From (2.5) and (2.6) one sees that $Xp(D_1) = M$, and from the spectral mapping theorem [3, p. 569] and (2.2), the matrix $p(D_1)$ is invertible. Thus we have $X = M(p(D_1))^{-1}$. On the other hand, considering the powers $V^j$, $j = 0, 1, \ldots, n$, one finds that the $(i,2)$ block entry of the operator $V^j$, denoted by $V_{i,2}^j$ for $j = 1, \ldots, n$, $i = 1, 2$, satisfies
$$V_{1,2}^j = B_1 V_{1,2}^{j-1} + A_1 V_{2,2}^{j-1}, \quad V_{2,2}^j = D_1^j, \quad V_{1,2}^0 = 0, \quad V_{2,2}^0 = I.$$
Multiplying $V_{1,2}^j$ by the coefficient $a_j$ for $j = 0, 1, \ldots, n$ and summing, it follows that the $(1,2)$ block entry of the block matrix $p(V)$ is given by
$$M = \sum_{j=1}^n \sum_{h=1}^j a_j B_1^{h-1} A_1 D_1^{j-h}.$$
Hence the result is established.
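As an illustration, the finite formula (2.3) can be checked numerically. The sketch below (Python with NumPy; the function name `lyapunov_finite` is ours, not from the paper) takes for $p$ the characteristic polynomial of $B_1$, which annihilates $B_1$ by the Cayley–Hamilton theorem:

```python
import numpy as np

def lyapunov_finite(A1, B1, D1, a):
    """Candidate solution of A1 + B1 @ X - X @ D1 = 0 via the finite
    formula (2.3); `a` lists the coefficients a_0..a_n of a polynomial
    p with p(B1) = 0 (e.g. the characteristic polynomial of B1)."""
    n = len(a) - 1
    mp = np.linalg.matrix_power
    # M = sum_{j=1..n} sum_{h=1..j} a_j B1^{h-1} A1 D1^{j-h}
    M = sum(a[j] * mp(B1, h - 1) @ A1 @ mp(D1, j - h)
            for j in range(1, n + 1) for h in range(1, j + 1))
    # p(D1) is invertible because sigma(B1) and sigma(D1) are disjoint
    pD1 = sum(a[j] * mp(D1, j) for j in range(n + 1))
    return M @ np.linalg.inv(pD1)
```

Under the spectral condition (2.2), the spectral mapping theorem guarantees that $p(D_1)$ is invertible, so the final inversion is well defined.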
In accordance with the definition given in [6] for a time invariant regular
second order matrix differential equation, we introduce the concept of a
fundamental set of solutions for equations of the type
(2.7) $Y^{(2)}(t) + P(t)Y^{(1)}(t) + Q(t)Y(t) = 0.$
Definition 1. Consider equation (2.7), where $P(t)$, $Q(t)$ are continuous $\mathbb{C}^{n\times n}$-valued functions on an interval $J$ of the real line, and $Y(t) \in \mathbb{C}^{n\times n}$. We say that a pair of solutions $\{Y_1, Y_2\}$ is a fundamental set of solutions of (2.7) in the interval $J$ if for any solution $Z$ of (2.7) defined in $J$ there exist matrices $C, D \in \mathbb{C}^{n\times n}$, uniquely determined by $Z$, such that
(2.8) $Z(t) = Y_1(t)C + Y_2(t)D, \quad t \in J.$
The following result provides a useful characterization of a fundamental
set of solutions of (2.7) and it may be regarded as an analogue of Liouville’s
formula for the scalar case.
Lemma 2. Let $\{Y_1, Y_2\}$ be a pair of solutions of (2.7) defined on the interval $J$ and let $W(t)$ be the block matrix function
(2.9) $W(t) = \begin{pmatrix} Y_1(t) & Y_2(t) \\ Y_1^{(1)}(t) & Y_2^{(1)}(t) \end{pmatrix}.$
Then $\{Y_1, Y_2\}$ is a fundamental set of solutions of (2.7) on $J$ if there exists a point $t_1 \in J$ such that $W(t_1)$ is nonsingular in $\mathbb{C}^{2n\times 2n}$. In this case $W(t)$ is nonsingular for all $t \in J$.
P r o o f. Since $Y_1(t)$ and $Y_2(t)$ are solutions of (2.7), it follows that $W(t)$ defined by (2.9) satisfies
(2.10) $W^{(1)}(t) = \begin{pmatrix} 0 & I \\ -Q(t) & -P(t) \end{pmatrix} W(t), \quad t \in J.$
Thus if $G(t,s)$ is the state transition matrix of (2.10), with $G(t,t) = I$ [7, p. 598], it follows that $W(t) = G(t, t_1)W(t_1)$ for all $t \in J$. Hence the result is established, because $G(t,s)$ is invertible for all $t, s$ in $J$.
Note that in the interval $0 < t < \infty$, equation (1.1) takes the form (2.7) with $P(t) = I/t$ and $Q(t) = I - (A/t)^2$.
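For the scalar pair $\{J_\nu, J_{-\nu}\}$ with $2\nu$ not an integer, the nonsingularity required by Lemma 2 can be observed numerically. A sketch with SciPy's Bessel routines (the helper name is ours); the determinant of the block (2.9) matches the classical Wronskian identity $J_\nu(t)J_{-\nu}'(t) - J_\nu'(t)J_{-\nu}(t) = -2\sin(\nu\pi)/(\pi t)$:

```python
import numpy as np
from scipy.special import jv, jvp

def wronskian_block(nu, t):
    """The 2x2 matrix (2.9) for the scalar solution pair {J_nu, J_-nu}."""
    return np.array([[jv(nu, t),     jv(-nu, t)],
                     [jvp(nu, t, 1), jvp(-nu, t, 1)]])
```

Since the determinant is nonzero at every $t > 0$ when $2\nu$ is not an integer, any single evaluation point serves as the $t_1$ of Lemma 2.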
We conclude this section by recalling some facts concerning the reciprocal gamma function, which may be found in [4, p. 253]. The reciprocal gamma function, denoted by $\Gamma^{-1}(z) = 1/\Gamma(z)$, is an entire function of the complex variable $z$, and thus for any matrix $C \in \mathbb{C}^{n\times n}$, the Riesz–Dunford functional calculus shows that $\Gamma^{-1}(C)$ is a well defined matrix (see Chapter 7 of [3]). If $C$ is a matrix in $\mathbb{C}^{n\times n}$ such that
(2.11) $C + nI$ is invertible for every integer $n \ge 0,$
then from [4, p. 253] it follows that
(2.12) $C(C + I)(C + 2I)\ldots(C + nI)\Gamma^{-1}(C + (n+1)I) = \Gamma^{-1}(C).$
Under the condition (2.11), $\Gamma(C)$ is well defined and is the inverse matrix of $\Gamma^{-1}(C)$. From the properties of the functional calculus, $\Gamma^{-1}(C)$ commutes with $C$, and from [3, p. 557], $\Gamma(C)$ and $\Gamma^{-1}(C)$ are polynomials in $C$.
In particular, if $C$ is a matrix satisfying (2.11), and $\operatorname{Re}(z) > 0$ for every eigenvalue $z \in \sigma(C)$, then we have
(2.13) $\Gamma(C) = \int_0^\infty \exp(-t)\exp((C - I)\ln t)\,dt$
and this representation of $\Gamma(C)$ coincides with the power series expansion, the Riesz–Dunford formula for $\Gamma(C)$ [3, p. 555] and others (see [4, p. 253]). Note that if $C$ satisfies (2.11), from the previous comments and (2.12) we have
(2.14) $\Gamma(C + (n+1)I) = C(C + I)(C + 2I)\ldots(C + nI)\Gamma(C).$
Note that from (2.13) and (2.14), for matrices $C$ satisfying (2.11) the computation of $\Gamma(C)$ may be performed in a way analogous to the scalar case.
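Numerically, $\Gamma(C)$ may be evaluated through the same functional calculus, for instance with SciPy's generic matrix-function routine. A hedged sketch (the function name is ours; it assumes no eigenvalue of $C$ is a nonpositive integer), which also lets one check the identity (2.14) in the case $n = 0$, i.e. $\Gamma(C + I) = C\,\Gamma(C)$:

```python
import numpy as np
from scipy.linalg import funm
from scipy.special import gamma

def gamma_matrix(C):
    """Gamma function of a matrix argument via the matrix functional
    calculus (Schur-Parlett), assuming no eigenvalue of C is a
    nonpositive integer."""
    return funm(C, gamma)
```

For diagonal matrices this reduces entrywise to the scalar gamma function, in agreement with the spectral mapping theorem.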
3. Bessel matrix differential equations. Suppose that we are looking for solutions of equation (1.1) of the form
(3.1) $X(t) = \Bigl(\sum_{k \ge 0} C_k t^k\Bigr)t^Z$
where $C_k$ is a matrix in $\mathbb{C}^{n\times n}$, $Z \in \mathbb{C}^{n\times n}$ and $t^Z = \exp(Z \ln t)$, for $t > 0$. By taking formal derivatives in (3.1), it follows that
(3.2) $X^{(1)}(t) = \sum_{k \ge 0} C_k(kI + Z)t^{Z + (k-1)I}, \quad X^{(2)}(t) = \sum_{k \ge 0} C_k(kI + Z)(kI + Z - I)t^{Z + (k-2)I}.$
Assuming the convergence of the series (3.1), (3.2), and substituting into equation (1.1), it follows that
(3.3) $\Bigl\{\sum_{k \ge 0} [C_k(kI + Z)(kI + Z - I) + C_k(kI + Z) - A^2 C_k]t^k + \sum_{k \ge 2} C_{k-2}t^k\Bigr\}t^Z = 0.$
By equating to the zero matrix the coefficient of each power $t^k$ appearing in (3.3), it follows that the matrices $C_k$ must satisfy
(3.4) $C_0 Z(Z - I) + C_0 Z - A^2 C_0 = C_0 Z^2 - A^2 C_0 = 0,$
(3.5) $C_1(Z + I)Z + C_1(Z + I) - A^2 C_1 = C_1(Z + I)^2 - A^2 C_1 = 0,$
(3.6) $C_k(kI + Z)^2 - A^2 C_k = -C_{k-2}, \quad k \ge 2.$
Let $Z$ be a matrix in $\mathbb{C}^{n\times n}$ and let $C_0$ be an invertible matrix in $\mathbb{C}^{n\times n}$ such that
(3.7) $Z = C_0^{-1}AC_0.$
Then
(3.8) $\sigma(A) = \sigma(Z), \quad Z^2 = C_0^{-1}A^2C_0, \quad C_0 Z^2 - A^2 C_0 = 0.$
Given the matrix $Z$ defined by (3.7), from (1.3) and Lemma 1, the only solution $C_1$ of the matrix equation
$$C_1(Z + I)^2 - A^2 C_1 = 0$$
is the zero matrix, $C_1 = 0$. From (3.6) it follows that $C_{2m+1} = 0$ for $m \ge 0$. In order to determine the matrix coefficients $C_{2m}$, let $p(z)$ be an annihilating polynomial of the matrix $A^2$,
(3.9) $p(z) = \sum_{j=0}^n a_j z^j, \quad p(A^2) = 0.$
Under the hypothesis (1.3) it follows that $\sigma((kI + Z)^2) \cap \sigma(A^2) = \emptyset$ for $k \ge 1$, and from Lemma 1, the only solution $C_{2m}$ of the equation
(3.10) $A^2 C_{2m} - C_{2m}(2mI + Z)^2 = C_{2m-2}, \quad m \ge 1,$
is given by
(3.11) $C_{2m} = -\Bigl(\sum_{j=1}^n \sum_{h=1}^j a_j A^{2h-2} C_{2m-2}(2mI + Z)^{2(j-h)}\Bigr)\Bigl(\sum_{j=0}^n a_j(2mI + Z)^{2j}\Bigr)^{-1}.$
Note that once we choose the matrices $C_0$ and $Z$, all the matrix coefficients $C_{2m}$ for $m \ge 1$ are determined by (3.11).
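For numerical experiments, each $C_{2m}$ can alternatively be obtained by handing the Sylvester-type equation (3.10) to a standard solver instead of evaluating the finite sum (3.11). A sketch (the function name is ours):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def bessel_coeffs(A, Z, C0, M):
    """Coefficients C_0, C_2, ..., C_{2M} of the series (3.12), each
    obtained by solving (3.10):  A^2 C_{2m} - C_{2m}(2mI + Z)^2 = C_{2m-2}.
    solve_sylvester treats a x + x b = q, so we pass b = -(2mI + Z)^2."""
    n = A.shape[0]
    A2 = A @ A
    coeffs = [C0]
    for m in range(1, M + 1):
        S = 2.0 * m * np.eye(n) + Z
        coeffs.append(solve_sylvester(A2, -(S @ S), coeffs[-1]))
    return coeffs
```

In the scalar case $A = Z = \nu$, equation (3.10) reduces to the familiar Bessel recurrence $C_{2m} = -C_{2m-2}/(4m(m+\nu))$, which offers a quick sanity check.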
Now we are concerned with the proof of the convergence of the series
(3.12) $X(t, Z, C_0) = \Bigl(\sum_{m \ge 0} C_{2m}t^{2m}\Bigr)t^Z.$
The generalized power series (3.12) is convergent for $t > 0$ if the power series
(3.13) $U(t, Z, C_0) = \sum_{m \ge 0} C_{2m}t^{2m}$
is convergent for $t > 0$.
If $B$ is a matrix in $\mathbb{C}^{n\times n}$ and $B^H$ denotes the conjugate transpose of $B$, we denote by $\|B\|$ its spectral norm, defined to be the maximum of the set $\{|z|^{1/2} : z \in \sigma(B^H B)\}$. Taking norms in (3.10), for large values of $m$ it follows that
(3.14) $\|C_{2m-2}\| = \|C_{2m}(2mI + Z)^2 - A^2 C_{2m}\| \ge \bigl|\,\|C_{2m}(2mI + Z)^2\| - \|A^2 C_{2m}\|\,\bigr| \ge \|C_{2m}\|(4m^2 - 4m\|Z\| - \|Z^2\| - \|A^2\|).$
Hence
$$\frac{\|C_{2m}\|\,|t|^{2m}}{\|C_{2m-2}\|\,|t|^{2m-2}} \le \frac{|t|^2}{4m^2 - 4m\|Z\| - \|Z^2\| - \|A^2\|}$$
and this proves the absolute convergence of the series (3.13) for $t > 0$.
Now we are going to find a second solution of (1.1) of the form
(3.15) $X(t, -Z, C_0) = \Bigl(\sum_{k \ge 0} C_k^* t^k\Bigr)t^{-Z} = U(t, -Z, C_0)t^{-Z}$
where $C_0$ is the matrix satisfying (3.7). In a way analogous to the construction of $X(t, Z, C_0)$, it is straightforward to show that the matrices $C_k^*$ appearing in (3.15), for $k \ge 0$, with $C_0^* = C_0$, must satisfy the equations
(3.16) $C_0^* Z^2 - A^2 C_0^* = 0, \quad C_1^*(I - Z)^2 - A^2 C_1^* = 0, \quad C_k^*(kI - Z)^2 - A^2 C_k^* = -C_{k-2}^*, \quad k \ge 2.$
From the hypothesis (1.3), (3.16) and Lemma 1, it follows that $C_1^* = C_{2m+1}^* = 0$, and, for $m \ge 1$,
(3.17) $C_{2m}^* = -\Bigl(\sum_{j=1}^n \sum_{h=1}^j a_j A^{2h-2} C_{2m-2}^*(2mI - Z)^{2(j-h)}\Bigr)\Bigl(\sum_{j=0}^n a_j(2mI - Z)^{2j}\Bigr)^{-1}.$
The proof of the absolute convergence of the series
(3.18) $U(t, -Z, C_0) = \sum_{m \ge 0} C_{2m}^* t^{2m}$
for $t > 0$ is analogous to the previous proof for $U(t, Z, C_0)$.
Now we are going to prove that for any invertible matrices $C_0$ and $Z$ satisfying (3.7), the pair defined by $X(t, Z, C_0)$ and $X(t, -Z, C_0)$ is a fundamental set of solutions of (1.1) in $0 < t < \infty$. The Wroński block matrix function associated with this pair and defined by (2.9) takes the form
(3.19) $W(t) = \begin{pmatrix} U(t, Z, C_0)t^Z & U(t, -Z, C_0)t^{-Z} \\ U^{(1)}(t, Z, C_0)t^Z + U(t, Z, C_0)Zt^{Z-I} & U^{(1)}(t, -Z, C_0)t^{-Z} - U(t, -Z, C_0)Zt^{-Z-I} \end{pmatrix} = \begin{pmatrix} I & 0 \\ 0 & t^{-1}I \end{pmatrix} T(t) \begin{pmatrix} t^Z & 0 \\ 0 & t^{-Z} \end{pmatrix}$
where
(3.20) $T(t) = \begin{pmatrix} U(t, Z, C_0) & U(t, -Z, C_0) \\ U^{(1)}(t, Z, C_0)t + U(t, Z, C_0)Z & U^{(1)}(t, -Z, C_0)t - U(t, -Z, C_0)Z \end{pmatrix}.$
From (3.19) it is clear that $W(t)$ is invertible if and only if $T(t)$ is invertible. Note that $T(t)$ is a continuous $\mathbb{C}^{2n\times 2n}$-valued function defined in the interval $[0, \infty)$. Since $T(0)$ is the matrix
$$T(0) = \begin{pmatrix} C_0 & C_0 \\ C_0 Z & -C_0 Z \end{pmatrix},$$
it is invertible because of the invertibility of $C_0$, Lemma 1 of [5] and the fact that
$$-C_0 Z - (C_0 Z)C_0^{-1}C_0 = -2C_0 Z \quad\text{is invertible}.$$
From the invertibility of $T(0)$ and the Perturbation Lemma [9, p. 32], there exists a positive number $t_1$ such that $T(t)$ is invertible in $[0, t_1]$. This proves the invertibility of $W(t_1)$, and from Lemma 2 the pair $\{X(\cdot, Z, C_0), X(\cdot, -Z, C_0)\}$ is a fundamental set of solutions of equation (1.1) in $0 < t < \infty$. From the previous comments the following result has been proved:
Theorem 1. Let $C_0$ and $Z$ be invertible matrices in $\mathbb{C}^{n\times n}$ and let $A$ be a matrix in $\mathbb{C}^{n\times n}$ satisfying (1.3). Then the pair $\{X(\cdot, Z, C_0), X(\cdot, -Z, C_0)\}$ defined by (3.11), (3.12), (3.15), (3.17), (3.18) is a fundamental set of solutions of the Bessel equation (1.1) in $0 < t < \infty$. The general solution of (1.1) in $0 < t < \infty$ is given by
(3.21) $X(t) = X(t, Z, C_0)P + X(t, -Z, C_0)Q, \quad P, Q \in \mathbb{C}^{n\times n}.$
The unique solution of (1.1) satisfying the initial conditions $X(a) = E$, $X^{(1)}(a) = F$, with $0 < a < \infty$, is given by (3.21), where
$$\begin{pmatrix} P \\ Q \end{pmatrix} = (W(a))^{-1}\begin{pmatrix} E \\ F \end{pmatrix}$$
and $W(a)$ is defined by (3.19).
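In the scalar case the initial value formula of Theorem 1 can be sketched directly with SciPy's Bessel functions (the helper name `solve_bessel_ivp` is ours): form $W(a)$ from the pair $\{J_\nu, J_{-\nu}\}$ and solve for the two coefficients.

```python
import numpy as np
from scipy.special import jv, jvp

def solve_bessel_ivp(nu, a, E, F):
    """Scalar instance of the initial value formula of Theorem 1:
    coefficients P, Q with X = P*J_nu + Q*J_{-nu}, X(a) = E, X'(a) = F."""
    W = np.array([[jv(nu, a),     jv(-nu, a)],     # W(a), cf. (2.9)
                  [jvp(nu, a, 1), jvp(-nu, a, 1)]])
    return np.linalg.solve(W, np.array([E, F]))
```

The resulting combination can be cross-checked against a direct numerical integration of (1.1) written as $x'' = -x'/t - (1 - \nu^2/t^2)x$.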
R e m a r k 1. If we consider the Bessel equation (1.1) with vector-valued unknown $X(t)$, then, considering the fundamental set of solutions constructed in Theorem 1, the general solution of the vector problem (1.1) is given by (3.21) upon replacing the matrices $P$, $Q$ by arbitrary vectors $P$, $Q$ in $\mathbb{C}^{n\times 1}$.
Now we are interested in showing that, for the case where the matrix $A$ is diagonalizable and satisfies (1.3), the fundamental set of solutions constructed in Theorem 1 coincides with the well known one for the scalar case when $n = 1$, given in terms of the Bessel functions of the first kind.
Let $A$ be a diagonalizable matrix satisfying (1.3) and let $C_0$ be a basis of $\mathbb{C}^{n\times 1}$ composed of eigenvectors of $A$. If $\sigma(A) = \{\lambda_1, \ldots, \lambda_n\}$ and $Z = \operatorname{diag}(\lambda_s : 1 \le s \le n)$, then we have
$$Z = C_0^{-1}AC_0.$$
On the other hand, if we denote by $B^{(s)}$ the $s$th column of a matrix $B \in \mathbb{C}^{n\times n}$, taking the $s$th column in both members of equation (3.6), it follows that
(3.22) $((k + \lambda_s)^2 I - A^2)C_k^{(s)} = -C_{k-2}^{(s)}, \quad 1 \le s \le n, \ k \ge 2.$
Note that we may write the matrix $(m + \frac{1}{2}\lambda_s)^2 I - (A/2)^2$ in the form
(3.23) $(m + \tfrac{1}{2}\lambda_s)^2 I - (A/2)^2 = ((m + \tfrac{1}{2}\lambda_s)I + A/2)((m + \tfrac{1}{2}\lambda_s)I - A/2) = (mI + \tfrac{1}{2}(\lambda_s I + A))(mI + \tfrac{1}{2}(\lambda_s I - A)) = (mI + B_s)(mI + D_s),$
$$B_s = \tfrac{1}{2}(\lambda_s I + A), \quad D_s = \tfrac{1}{2}(\lambda_s I - A).$$
Considering (3.22) for even integers $k = 2m$, we have
(3.24) $C_{2m}^{(s)} = \frac{(-1)^m}{2^{2m}}\prod_{j=1}^m ((j + \tfrac{1}{2}\lambda_s)^2 I - (A/2)^2)^{-1}C_0^{(s)} = \frac{(-1)^m}{2^{2m}}\prod_{j=1}^m (jI + D_s)^{-1}(jI + B_s)^{-1}C_0^{(s)}, \quad 1 \le s \le n.$
Now consider the new basis of eigenvectors of $A$ defined by the matrix $K_0$ whose $s$th column is given by
(3.25) $C_0^{(s)} = 2^{\lambda_s}\Gamma(D_s + I)\Gamma(B_s + I)K_0^{(s)}, \quad 1 \le s \le n.$
Note that from (1.3) and (3.23), the matrices $\Gamma(D_s + I)$ and $\Gamma(B_s + I)$ are invertible and commute with $A$. This proves that the columns of $K_0$ define a basis of eigenvectors of $A$ satisfying
(3.26) $Z = K_0^{-1}AK_0.$
The corresponding equations (3.24) for $K_{2m}^{(s)}$ give
$$K_{2m}^{(s)} = \frac{(-1)^m}{2^{2m}}\prod_{j=1}^m (jI + D_s)^{-1}(jI + B_s)^{-1}\Gamma^{-1}(B_s + I)\Gamma^{-1}(D_s + I)C_0^{(s)}2^{-\lambda_s}.$$
Taking into account (2.14) and the fact that $jI + B_s$ and $jI + D_s$ commute shows that
$$K_{2m}^{(s)} = \frac{(-1)^m}{2^{2m}}\Gamma^{-1}(D_s + (m+1)I)\Gamma^{-1}(B_s + (m+1)I)C_0^{(s)}2^{-\lambda_s}, \quad 1 \le s \le n.$$
In matrix form the above expression may be written as
(3.27) $K_{2m} = \frac{(-1)^m}{2^{2m}}L_{2m}2^{-Z}, \quad L_{2m}^{(s)} = \Gamma^{-1}(D_s + (m+1)I)\Gamma^{-1}(B_s + (m+1)I)C_0^{(s)},$
and $X(t, Z, K_0)$ takes the form
(3.28) $X(t, Z, K_0) = \Bigl(\sum_{m \ge 0} K_{2m}t^{2m}\Bigr)t^Z = \sum_{m \ge 0}\frac{(-1)^m}{2^{2m}}L_{2m}t^{2m}(t/2)^Z, \quad t > 0.$
In an analogous way, if we denote by $B_s^*$ and $D_s^*$ the matrices
(3.29) $B_s^* = \tfrac{1}{2}(A - \lambda_s I), \quad D_s^* = \tfrac{1}{2}(-A - \lambda_s I), \quad 1 \le s \le n,$
and
(3.30) $K_{2m}^* = \frac{(-1)^m}{2^{2m}}L_{2m}^* 2^Z, \quad L_{2m}^{*(s)} = \Gamma^{-1}(D_s^* + (m+1)I)\Gamma^{-1}(B_s^* + (m+1)I)C_0^{(s)},$
then
(3.31) $X(t, -Z, K_0) = \Bigl(\sum_{m \ge 0} K_{2m}^* t^{2m}\Bigr)t^{-Z} = \sum_{m \ge 0}\frac{(-1)^m}{2^{2m}}L_{2m}^* t^{2m}(t/2)^{-Z}, \quad t > 0.$
Thus for the case where $A$ is diagonalizable and $\sigma(A) = \{\lambda_s : 1 \le s \le n\}$, Theorem 1 provides the fundamental set of solutions in $0 < t < \infty$ defined by $X(\cdot, Z, K_0)$ and $X(\cdot, -Z, K_0)$.
Now we show that in the scalar case, when $A = \nu$ is a complex number such that $2\nu$ is not an integer (which is the condition (1.3) for $n = 1$), the fundamental set of solutions of (1.1) given by (3.28) and (3.31) coincides with the Bessel functions of the first kind $J_\nu(x)$ and $J_{-\nu}(x)$, respectively.
Note that for the scalar case we have
$$A = Z = \nu, \quad C_0 = 1,$$
$$B_1 = \tfrac{1}{2}(\nu + \nu) = \nu, \quad B_1^* = \tfrac{1}{2}(\nu - \nu) = 0,$$
$$D_1 = \tfrac{1}{2}(\nu - \nu) = 0, \quad D_1^* = \tfrac{1}{2}(-\nu - \nu) = -\nu,$$
$$\Gamma^{-1}(B_1 + (m+1)I) = \Gamma^{-1}(\nu + m + 1),$$
$$\Gamma^{-1}(D_1 + (m+1)I) = \Gamma^{-1}(m + 1) = 1/m!,$$
$$L_{2m} = \frac{1}{m!\,\Gamma(\nu + m + 1)}, \quad L_{2m}^* = \frac{1}{m!\,\Gamma(-\nu + m + 1)},$$
(3.32) $K_0 = K_0^{(1)} = 2^{-Z}\Gamma^{-1}(B_1 + I)\Gamma^{-1}(D_1 + I) = 2^{-\nu}\Gamma^{-1}(\nu + 1)\Gamma^{-1}(1) = 2^{-\nu}\Gamma^{-1}(\nu + 1),$
$$K_0^* = K_0^{*(1)} = 2^Z\Gamma^{-1}(B_1^* + I)\Gamma^{-1}(D_1^* + I) = 2^\nu\Gamma^{-1}(1)\Gamma^{-1}(-\nu + 1) = 2^\nu\Gamma^{-1}(-\nu + 1).$$
Hence for the scalar case with $A = \nu$ such that $2\nu$ is not an integer, taking $K_0$ and $K_0^*$ defined by (3.32), it follows that the fundamental set of solutions of (1.1) given by (3.28), (3.31) is
$$X(t, \nu, K_0) = J_\nu(t), \quad X(t, -\nu, K_0) = J_{-\nu}(t), \quad t > 0,$$
where $J_\nu(t)$ and $J_{-\nu}(t)$ denote the Bessel functions of the first kind of orders $\nu$ and $-\nu$.
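The scalar identification can be confirmed numerically: summing the series (3.28) with $L_{2m} = 1/(m!\,\Gamma(\nu + m + 1))$ reproduces SciPy's $J_\nu$. A sketch (the function name is ours):

```python
import numpy as np
from scipy.special import gamma, jv

def bessel_series(nu, t, terms=30):
    """Scalar instance of (3.28):
    sum_m (-1)^m / (m! Gamma(nu+m+1)) * (t/2)^(2m+nu)."""
    m = np.arange(terms)
    return np.sum((-1.0) ** m / (gamma(m + 1.0) * gamma(nu + m + 1.0))
                  * (t / 2.0) ** (2 * m + nu))
```

For moderate $t$ the factorials in the denominator make the truncated series accurate to machine precision with a few dozen terms.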
4. Boundary value problems. Under the hypotheses and notation of Section 3, let $X(t, Z, C_0)$, $X(t, -Z, C_0)$ be a fundamental set of solutions of (1.1), constructed for matrices $Z$ and $C_0$ satisfying (3.7). Taking into account the expression (3.21) for the general solution of (1.1) in $t > 0$, its derivative is
(4.1) $X^{(1)}(t) = X^{(1)}(t, Z, C_0)P + X^{(1)}(t, -Z, C_0)Q = (U^{(1)}(t, Z, C_0)t^Z + U(t, Z, C_0)Zt^{Z-I})P + (U^{(1)}(t, -Z, C_0)t^{-Z} - U(t, -Z, C_0)Zt^{-Z-I})Q,$
where $U(t, Z, C_0)$, $U(t, -Z, C_0)$ are defined by (3.13) and (3.18), respectively, and $P$, $Q$ are arbitrary matrices in $\mathbb{C}^{n\times n}$.
If we impose on the general solution $X(t)$ of (1.1), described by (3.21), the boundary value conditions of (1.4), then from (3.21) and (4.1) it follows that problem (1.4) is solvable if and only if the algebraic system
(4.2) $S\begin{pmatrix} P \\ Q \end{pmatrix} = 0$
is compatible, where $S = (S_{ij})_{1 \le i, j \le 2}$ is the block matrix whose entries are
(4.3) $S_{i1} = M_{i1}U(a, Z, C_0)a^Z + N_{i1}U(b, Z, C_0)b^Z + M_{i2}(U^{(1)}(a, Z, C_0)a^Z + U(a, Z, C_0)Za^{Z-I}) + N_{i2}(U^{(1)}(b, Z, C_0)b^Z + U(b, Z, C_0)Zb^{Z-I}), \quad i = 1, 2,$
(4.4) $S_{i2} = M_{i1}U(a, -Z, C_0)a^{-Z} + N_{i1}U(b, -Z, C_0)b^{-Z} + M_{i2}(U^{(1)}(a, -Z, C_0)a^{-Z} - U(a, -Z, C_0)Za^{-Z-I}) + N_{i2}(U^{(1)}(b, -Z, C_0)b^{-Z} - U(b, -Z, C_0)Zb^{-Z-I}), \quad i = 1, 2.$
Thus the boundary value problem (1.4) is solvable if and only if the matrix $S$ defined by (4.3)–(4.4) is singular. Under this condition, from Theorem 2.3.2 of [11, p. 24], the general solution of the algebraic system (4.2) is given by
(4.5) $\begin{pmatrix} P \\ Q \end{pmatrix} = (I - S^+S)G, \quad G \in \mathbb{C}^{2n\times n}.$
Hence the general solution of problem (1.4), under the hypothesis of singularity of the matrix $S$, is given by (3.21), where the matrices $P$, $Q$ are given by (4.5) for an arbitrary matrix $G$ in $\mathbb{C}^{2n\times n}$.
Hence the following result has been established:
Theorem 2. Under the hypotheses and notation of Theorem 1, let $S$ be the block matrix defined by (4.3)–(4.4) and associated with the fundamental set $\{X(\cdot, Z, C_0), X(\cdot, -Z, C_0)\}$. Then the boundary value problem (1.4) is solvable if and only if $S$ is singular. Under this condition the general solution of (1.4) is given by (3.21), where $P$, $Q$ are matrices in $\mathbb{C}^{n\times n}$ given by (4.5).
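The Moore–Penrose construction (4.5) is straightforward to sketch with NumPy's `pinv` (the helper name is ours): every column of the result lies in the null space of $S$, and for a nonsingular $S$ the projector $I - S^+S$ vanishes, leaving only the trivial solution.

```python
import numpy as np

def null_space_solutions(S, G):
    """General solution of S x = 0 in the form x = (I - S^+ S) G, with
    S^+ the Moore-Penrose pseudoinverse (Rao-Mitra, Theorem 2.3.2)."""
    n = S.shape[1]
    return (np.eye(n) - np.linalg.pinv(S) @ S) @ G
```

Applied to the block matrix $S$ of (4.3)–(4.4), the columns of $G$ range over $\mathbb{C}^{2n\times n}$ and the output stacks the admissible pairs $(P, Q)$.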
Acknowledgements. This paper was supported by the D.G.I.C.Y.T.
grant PS90-140 and the NATO grant CRG 900040.
References
[1] S. L. Campbell and C. D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman, London, 1979.
[2] C. Davis and P. Rosenthal, Solving linear operator equations, Canad. J. Math. 26 (6) (1974), 1384–1389.
[3] N. Dunford and J. Schwartz, Linear Operators, Part I, Interscience, New York, 1957.
[4] E. Hille, Lectures on Ordinary Differential Equations, Addison-Wesley, 1969.
[5] L. Jódar, Explicit expressions for Sturm–Liouville operator problems, Proc. Edinburgh Math. Soc. 30 (1987), 301–309.
[6] L. Jódar, Explicit solutions for second order operator differential equations with two boundary value conditions, Linear Algebra Appl. 103 (1988), 35–53.
[7] T. Kailath, Linear Systems, Prentice-Hall, Englewood Cliffs, N.J., 1980.
[8] H. B. Keller and A. W. Wolfe, On the nonunique equilibrium states and buckling mechanism of spherical shells, J. Soc. Indust. Appl. Math. 13 (1965), 674–705.
[9] J. M. Ortega, Numerical Analysis. A Second Course, Academic Press, New York, 1972.
[10] S. V. Parter, M. L. Stein and P. R. Stein, On the multiplicity of solutions of a differential equation arising in chemical reactor theory, Tech. Rep. 194, Dept. of Computer Sciences, Univ. of Wisconsin, Madison, 1973.
[11] C. R. Rao and S. K. Mitra, Generalized Inverses of Matrices and its Applications, Wiley, New York, 1971.
[12] M. Rosenblum, On the operator equation BX − XA = Q, Duke Math. J. 23 (1956), 263–269.
[13] E. Weinmüller, A difference method for a singular boundary value problem of second order, Math. Comp. 42 (166) (1984), 441–464.
ENRIQUE NAVARRO, RAFAEL COMPANY AND LUCAS JÓDAR
DEPARTAMENTO DE MATEMÁTICA APLICADA
UNIVERSIDAD POLITÉCNICA DE VALENCIA
P.O. BOX 22.012
46022 VALENCIA, SPAIN
Received on 21.3.1992