Algebra 02
February 1, 2025
1 Vector Spaces
When you read the word vector you may immediately think of an arrow connecting two points in R2 (or R3 ).
Mathematically speaking, a vector is just an element of a vector space.
This raises the question : what is a vector space ? Roughly speaking, a vector space is a
set of objects that can be added together and multiplied by scalars.
Definition 1.0.1 A vector space is a set E of objects, called vectors, on which two operations
called addition and scalar multiplication have been defined satisfying the following properties.
If u, v, w are in E and if α, β ∈ R are scalars :
1. The sum u + v is in E. (closure under addition)
2. u + v = v + u (addition is commutative)
3. (u + v) + w = u + (v + w) (addition is associative)
4. There is a vector in E called the zero vector, denoted by 0, satisfying v + 0 = v.
5. For each v there is a vector −v in E such that v + (−v) = 0.
6. The scalar multiple of v by α, denoted α · v, is in E. (closure under scalar multiplication)
7. α · (u + v) = α · u + α · v.
8. (α + β) · v = α · v + β · v.
9. (αβ) · v = α · (β · v) .
10. 1 · v = v
Remark 1.0.1 1. Elements of E are called vectors, and elements of R are called scalars.
Instead of vector space over R, we also say R−vector space.
2. It can be shown that 0R · v = 0E for any vector v in E.
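As an informal illustration (not part of the course text), the following Python sketch spot-checks a few of the axioms above for E = R2, with vectors represented as pairs of floats; the helpers add and scale are ad hoc names introduced here.

import random

# Vectors of R^2 represented as pairs; ad hoc helpers for the two operations.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(alpha, v):
    return (alpha * v[0], alpha * v[1])

u = (random.random(), random.random())
v = (random.random(), random.random())
alpha, beta = random.random(), random.random()

# Axiom 2 : commutativity of addition.
assert add(u, v) == add(v, u)
# Axiom 4 : the zero vector.
assert add(v, (0.0, 0.0)) == v
# Axiom 8 : (alpha + beta).v = alpha.v + beta.v (up to rounding).
lhs = scale(alpha + beta, v)
rhs = add(scale(alpha, v), scale(beta, v))
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))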
To better understand the definition of a vector space, we first consider a few elementary
examples.
Example 1.0.1 1. R2 , R3 and more generally Rn are real vector spaces.
2. The set of functions from R into R is a vector space over R.
3. Let E be the unit disc in R2 :
E = {(x, y) ∈ R2 / x2 + y2 ≤ 1}
The disc is not closed under scalar multiplication. For example, take u = (1, 0) ∈ E
and multiply by, say, α = 2. Then αu = (2, 0) is not in E. Therefore, property (6) of the
definition of a vector space fails, and consequently the unit disc is not a vector space.
4. Let E be the graph of the quadratic function f (x) = x2 :
E = {(x, y) ∈ R2 / y = x2}
The set E is not closed under scalar multiplication. For example, u = (1, 1) is a point in
E but 2u = (2, 2) is not. You may also notice that E is not closed under addition either.
For example, both u = (1, 1) and v = (2, 4) are in E but u + v = (3, 5) is not
a point on the parabola E. Therefore, the graph of f (x) = x2 is not a vector space.
5. F(R, R) : The vector space of functions from R into R.
a/ Let f and g be two elements of F(R, R). The function f + g is defined by :
∀x ∈ R, (f + g)(x) = f (x) + g(x)
b/ If λ is a real number and f is a function in F(R, R), the function λ.f is defined pointwise
as follows :
∀x ∈ R, (λ.f )(x) = λf (x)
c/ The identity. The additive identity is the null function, defined by :
∀x ∈ R, f (x) = 0.
This function can be written 0E = 0F(R,R) .
d/ The inverses. The additive inverse of f in F(R, R) is the function g from R to R defined by :
∀x ∈ R, g(x) = −f (x).
The inverse of f is denoted −f .
6. Let E = R2 [X] = {P = aX2 + bX + c, a, b, c ∈ R} be the set of polynomials of degree
less than or equal to 2, with coefficients in R, equipped with the following operations :
a/ A law ” + ”, given by : ∀P, Q ∈ E, P = aX2 + bX + c, Q = a′X2 + b′X + c′,
P + Q = (a + a′)X2 + (b + b′)X + (c + c′).
b/ A law ” · ” defined by : ∀α ∈ R, ∀P ∈ E, P = aX2 + bX + c,
α · P = (αa)X2 + (αb)X + (αc).
(E, +, ·) is a vector space over R.
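As an informal aside, the two laws on R2 [X] can be mirrored in Python by storing a polynomial aX2 + bX + c as the coefficient triple (a, b, c); the helper names below are ad hoc and the sketch only illustrates the operations.

# A polynomial aX^2 + bX + c of R2[X] is stored as the coefficient triple (a, b, c).
def poly_add(P, Q):
    # law "+" : add coefficients componentwise
    return tuple(p + q for p, q in zip(P, Q))

def poly_scale(alpha, P):
    # law "." : multiply every coefficient by the scalar alpha
    return tuple(alpha * p for p in P)

P = (1.0, -2.0, 3.0)   # X^2 - 2X + 3
Q = (0.0, 4.0, -1.0)   # 4X - 1

print(poly_add(P, Q))        # (1.0, 2.0, 2.0)  i.e. X^2 + 2X + 2
print(poly_scale(2.0, P))    # (2.0, -4.0, 6.0) i.e. 2X^2 - 4X + 6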
1.1 Subspaces of Vector Spaces
Frequently, one encounters a vector space F that is a subset of a larger vector space E. In
this case, we would say that F is a subspace of E. Below is the formal definition.
Definition 1.1.1 Let E be a vector space. A subset F of E is called a subspace of E if it
satisfies the following properties :
1. The zero vector of E is also in F .
2. F is closed under addition, that is, if u and v are in F then u + v is in F .
3. F is closed under scalar multiplication, that is, if u is in F and α is a scalar then α · u
is in F .
Example 1.1.1 Let F be the graph of the function f (x) = 2x :
F = {(x, y) ∈ R2 | y = 2x} .
F is a subspace of E = R2 .
If x = 0 then y = 2 · 0 = 0 and therefore (0, 0) is in F .
Let u = (a, 2a) and v = (b, 2b) be elements of F . Then u + v = (a, 2a) + (b, 2b) = (a + b, 2a + 2b) =
(a + b, 2(a + b)). Because the x and y components of u + v satisfy y = 2x, the vector u + v is
in F . Thus, F is closed under addition.
Let α be any scalar and let u = (a, 2a) be an element of F . Then αu = (αa, α2a) = (αa, 2αa) ∈
F . F is closed under scalar multiplication.
All three conditions of a subspace are satisfied for F and therefore F is a subspace of E.
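As an informal aside, the three subspace conditions for F can also be spot-checked numerically; the Python sketch below (illustrative only, with an ad hoc membership test in_F) tests them on sample vectors.

def in_F(v, tol=1e-12):
    # membership test for F = {(x, y) : y = 2x}
    x, y = v
    return abs(y - 2 * x) < tol

# 1. the zero vector is in F
assert in_F((0.0, 0.0))

# 2. closure under addition, checked on sample vectors u = (a, 2a), v = (b, 2b)
u, v = (1.5, 3.0), (-2.0, -4.0)
s = (u[0] + v[0], u[1] + v[1])
assert in_F(u) and in_F(v) and in_F(s)

# 3. closure under scalar multiplication
alpha = -7.25
assert in_F((alpha * u[0], alpha * u[1]))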
Example 1.1.2 Let F be the first quadrant in R2 :
F = {(x, y) ∈ R2 | x ≥ 0, y ≥ 0} .
The set F contains the zero vector and the sum of two vectors in F is again in F . However,
F is not closed under scalar multiplication. For example, if u = (1, 1) and α = −1, then
αu = (−1, −1) is not in F because its components are negative.
Example 1.1.3 Let E = Rn [t] and consider the subset F of E :
F = {P (t) ∈ Rn [t] / P ′(1) = 0}
F is a subspace of E.
The zero polynomial 0(t) clearly has derivative at t = 1 equal to zero, that is, 0′(1) = 0,
and thus the zero polynomial is in F . Now suppose that P (t) and Q(t) are two polynomials
in F . Then P ′(1) = 0 and Q′(1) = 0, and from the rules of differentiation we compute
(P + Q)′(1) = P ′(1) + Q′(1) = 0 + 0 = 0.
Therefore, the polynomial (P + Q)(t) is in F , and thus F is closed under addition.
Now let α be any scalar and let P (t) be a polynomial in F . Then P ′(1) = 0. Using the rules
of differentiation, we compute that (αP )′(1) = αP ′(1) = α · 0 = 0. Therefore, the polynomial
(αP )(t) is in F and thus F is closed under scalar multiplication.
All three properties of a subspace hold for F and therefore F is a subspace of Rn [t].
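As an informal check (not part of the proof), the closure computations can be mirrored with sympy for, say, n = 3 and two arbitrarily chosen polynomials whose derivative vanishes at t = 1.

import sympy as sp

t = sp.symbols('t')
alpha = sp.Rational(5, 2)          # any scalar

# two polynomials of R3[t] whose derivative vanishes at t = 1
P = (t - 1)**2                     # P'(t) = 2(t - 1),  P'(1) = 0
Q = t**3 - 3*t                     # Q'(t) = 3t^2 - 3,  Q'(1) = 0

assert sp.diff(P, t).subs(t, 1) == 0
assert sp.diff(Q, t).subs(t, 1) == 0

# closure under addition and under scalar multiplication
assert sp.diff(P + Q, t).subs(t, 1) == 0
assert sp.diff(alpha * P, t).subs(t, 1) == 0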
Example 1.1.4 1. Any field K is a vector space over K.
2. Any field L containing a field K is a vector space over K, and K is a vector subspace of L.
3. C is a vector space over R and R is a subspace of C.
Example 1.1.5 Consider F = {(x, y) ∈ R2 / x2 + y2 < 0}. Since no point satisfies x2 + y2 < 0,
we have F = ∅, so F is not a subspace of R2 .
Example 1.1.6 Let F = {(x, y) ∈ R2 / x − y + 1 = 0}. We have 0R2 = (0, 0) ∉ F , since
0 − 0 + 1 ≠ 0, therefore F is not a subspace of R2 .
Example 1.1.7 Let F = {(x, y) ∈ R2 / xy ≥ 0}. We have (2, 1), (−1, −2) ∈ F , but
(2, 1) + (−1, −2) = (1, −1) ∉ F because it does not satisfy xy ≥ 0, so F is not a subspace of R2 .
1.2 Operations on vector subspaces
Proposition 1.2.1 Let K be a field, E a K−vector space, F and G two subspaces of E, then :
1. F ∩ G is a subspace of E.
2. F ∪ G is a subspace of E if and only if F ⊂ G or G ⊂ F .
Proof 1.2.1 (of 1.) Since F and G are subspaces of E, we have F ⊂ E and G ⊂ E, therefore
F ∩ G ⊂ E.
a/ 0E ∈ F and 0E ∈ G which means that 0E ∈ F ∩ G.
b/ ∀α, β ∈ K, ∀x, y ∈ F ∩ G (i.e. x, y ∈ F and x, y ∈ G), we have αx + βy ∈ F and αx + βy ∈ G,
therefore αx + βy ∈ F ∩ G. Then F ∩ G is a subspace of E.
Remark 1.2.1 Property (1) generalizes to any family of vector subspaces, i.e. if (Fi )i∈I , I ⊂ N,
is a family of vector subspaces, then ∩i∈I Fi is a subspace.
Example 1.2.1 Let E = R2 be the vector space on R. Consider the following subspaces F and
G:
F = {(x, y) ∈ R2 / y = 0} , G = {(x, y) ∈ R2 / x = 0} .
F and G are the x-axis and y-axis respectively.
Since (1, 0) ∈ F with (1, 0) ∉ G, then F ⊄ G, and (0, 1) ∈ G with (0, 1) ∉ F , then G ⊄ F .
Therefore, F ∪ G is not a subspace of R2 .
The result can also be obtained by noting that (1, 0), (0, 1) ∈ F ∪ G but (1, 0) + (0, 1) = (1, 1) ∉ F
and (1, 1) ∉ G, hence (1, 1) ∉ F ∪ G. This means that F ∪ G is not a subspace of E.
Theorem 1.2.1 Let K be a field, E a vector space over K, F and G two subspaces of E.
The set F + G defined by
F + G = {x + y / x ∈ F and y ∈ G} ⊂ E
is a subspace of E called sum of the subspaces F and G. If in addition F ∩ G = {0E }, we say
that the sum F + G is a direct sum and we write F ⊕ G.
Proof 1.2.2 F + G is a subspace of E :
1. 0E = 0E + 0E ∈ F + G because 0E ∈ F and 0E ∈ G since F and G are two subspaces of
E.
2. ∀α, β ∈ K, ∀z, z′ ∈ F + G, we can write z = x + y and z′ = x′ + y′ with x, x′ ∈ F and y, y′ ∈ G.
Since F and G are subspaces of E, then
αx + βx′ ∈ F and αy + βy′ ∈ G.
This means that (αx + βx′) + (αy + βy′) ∈ F + G.
Since (αx + βx′) + (αy + βy′) = α(x + y) + β(x′ + y′) = αz + βz′, we conclude that αz + βz′ ∈ F + G.
Example 1.2.2 Consider the vector space R3 and the subspaces F and H given by
F = {(x, y, z) ∈ R3 / x + y − z = 0} and H = {(x, y, z) ∈ R3 / x = y = 0} .
We have F + H = F ⊕ H. Indeed :
Let (x, y, z) ∈ F ∩ H, so (x, y, z) ∈ F , i.e. z = x + y, and (x, y, z) ∈ H, i.e. x = y = 0, so
x = y = z = 0, therefore F ∩ H = {0R3 } .
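As an informal numerical check of F ∩ H = {0R3 } : the intersection is the solution set of the three homogeneous equations x + y − z = 0, x = 0, y = 0, and the coefficient matrix below has full rank 3, so the only solution is the zero vector (Python/numpy sketch, illustrative only).

import numpy as np

# F ∩ H is the solution set of :  x + y - z = 0,  x = 0,  y = 0
A = np.array([[1.0, 1.0, -1.0],   # x + y - z = 0   (membership in F)
              [1.0, 0.0,  0.0],   # x = 0           (membership in H)
              [0.0, 1.0,  0.0]])  # y = 0           (membership in H)

# rank 3 on R^3 : the homogeneous system has only the trivial solution, i.e. F ∩ H = {0}
print(np.linalg.matrix_rank(A))   # 3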
Example 1.2.3 For any vector space E, there are two trivial subspaces of E, namely E itself
and the set consisting of the zero vector, F = {0}.
There is one particular way to generate a subspace of any given vector space E using the
span of a set of vectors.
2 Linear combinations, generating families, linearly independent families, bases, dimension.
2.1 Linear combinations
Let v1 , v2 , · · · , vn be a family of vectors of a vector space E over K. We call a linear combination
of these vectors any vector of the form
v = λ1 v1 + λ2 v2 + · · · + λn vn .
The scalars λ1 , · · · , λn are called the coefficients of the linear combination.
The span of {v1 , v2 , · · · , vn } is the set of all linear combinations of v1 , v2 , · · · , vn :
span {v1 , v2 , · · · , vn } = {λ1 v1 + λ2 v2 + · · · + λn vn / λ1 , · · · , λn ∈ K}
The span of a set of vectors in E is a subspace of E.
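In practice, deciding whether a vector v belongs to span {v1 , · · · , vn } amounts to solving the linear system λ1 v1 + · · · + λn vn = v. The Python sketch below (illustrative only; in_span is an ad hoc helper) tests membership with a rank comparison.

import numpy as np

def in_span(v, vectors, tol=1e-10):
    # v is in span{v1, ..., vn} iff adjoining v as a column does not increase the rank
    A = np.column_stack(vectors)                       # columns are v1, ..., vn
    rank_A = np.linalg.matrix_rank(A, tol=tol)
    rank_Av = np.linalg.matrix_rank(np.column_stack([A, v]), tol=tol)
    return rank_A == rank_Av

v1, v2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
print(in_span(np.array([2.0, 3.0, 5.0]), [v1, v2]))   # True : 2*v1 + 3*v2
print(in_span(np.array([0.0, 0.0, 1.0]), [v1, v2]))   # False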
2.2 Generating families
Definition 2.2.1 The family {v1 , v2 , · · · , vn } is a generating family of the vector space E if
every vector of E is a linear combination of the vectors v1 , v2 , · · · , vn . This can also be written :
∀v ∈ E, ∃λ1 , λ2 , · · · , λn ∈ K/ v = λ1 v1 + λ2 v2 + · · · + λn vn
We also say that the family {v1 , v2 , · · · , vn } generates the vector space E and we write
E = span {v1 , v2 , · · · , vn } .
Example 2.2.1 Let v1 = (2, 1), v2 = (1, 1) ∈ R2 . The vectors {v1 , v2 } form a
generating family of R2 . Indeed, let v = (x, y) ∈ R2 ; showing that v is a linear combination
of v1 and v2 amounts to showing the existence of two real numbers α and β such that
v = αv1 + βv2 . So we need to study the existence of solutions to the system :
2α + β = x
α+β =y
Its solutions are α = x − y and β = −x + 2y, whatever the real numbers x and y, so {v1 , v2 }
generates R2 . This shows that there can be several different finite families, not included in one
another (here {v1 , v2 } and the canonical basis), generating the same vector space.
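As an informal check, the system can also be solved numerically for a sample (x, y) and compared with the closed-form solution α = x − y, β = −x + 2y (Python/numpy sketch, illustrative only).

import numpy as np

x, y = 7.0, 3.0
# columns of A are v1 = (2, 1) and v2 = (1, 1); solve A @ (alpha, beta) = (x, y)
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
alpha, beta = np.linalg.solve(A, np.array([x, y]))

print(alpha, beta)                       # 4.0 -1.0
assert np.isclose(alpha, x - y)          # alpha = x - y
assert np.isclose(beta, -x + 2 * y)      # beta  = -x + 2y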
Example 2.2.2 Let E = Rn [X] be the vector space of polynomials of degree ≤ n. Then the
polynomials {1, X, · · · , X n } form a generating family of E.
2.3 Linearly independent families
Definition 2.3.1 A family {v1 , v2 , · · · , vn } of vectors of a vector space E is linearly independent
if the only linear combination of these vectors equal to the zero vector is the one whose
coefficients are all zero. We also say that the vectors v1 , v2 , · · · , vn are linearly independent.
This can be expressed as :
{v1 , v2 , · · · , vn } is a linearly independent family is equivalent to :
((λ1 , · · · , λn ) ∈ Kn and λ1 v1 + λ2 v2 + · · · + λn vn = 0E ) ⇒ λ1 = λ2 = · · · = λn = 0.
2.4 Linearly dependent families
Definition 2.4.1 A family that is not linearly independent is called a linearly dependent family.
We also say that the vectors {v1 , v2 , · · · , vn } are linearly dependent.
This can be expressed as : {v1 , v2 , · · · , vn } is a linearly dependent family is equivalent to
∃(λ1 , · · · , λn ) ∈ Kn − {0Kn } / λ1 v1 + λ2 v2 + · · · + λn vn = 0E .
Example 2.4.1 The polynomials P1 (X) = 1 − X, P2 (X) = 5 + 3X − 2X 2 and P3 (X) =
1 + 3X − X 2 form a linearly dependent family in the vector space R2 [X], because 3P1 (X) −
P2 (X) + 2P3 (X) = 0.
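As an informal check, the dependence relation 3P1 − P2 + 2P3 = 0 can be verified symbolically (Python/sympy sketch, illustrative only).

import sympy as sp

X = sp.symbols('X')
P1 = 1 - X
P2 = 5 + 3*X - 2*X**2
P3 = 1 + 3*X - X**2

# 3*P1 - P2 + 2*P3 should be identically zero
print(sp.expand(3*P1 - P2 + 2*P3))   # 0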
Example 2.4.2 In the vector space F (R, R) of functions from R into R, consider the family
{cos, sin}. Let’s show that it’s a linearly independent family.
Suppose we have λ cos + µ sin = 0, which is equivalent to ∀x ∈ R, λ cos(x) + µ sin(x) = 0. In
particular, for x = 0, this equality gives λ = 0. And for x = π/2, it gives µ = 0. So {cos, sin}
is a linearly independent family.
On the other hand, the family {cos2 , sin2 , 1} is linearly dependent because we have : cos2 + sin2 − 1 = 0.
The coefficients of the linear dependence relation are λ1 = 1, λ2 = 1, λ3 = −1.
Example 2.4.3 In the vector space R4 defined over the field R, consider the following vectors :
v1 = (1, 0, −1, 1), v2 = (0, 1, 1, 0), v3 = (1, 0, 0, 1), v4 = (0, 0, 0, 1), v5 = (1, 1, 0, 1).
The set {v1 , v2 , v3 , v4 } is linearly independent (to be verified). The set S2 = {v1 , v2 , v5 } is linearly
dependent (v5 = v1 + v2 ).
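Both claims can be checked with a rank computation : n vectors of R4 are linearly independent exactly when the matrix having them as columns has rank n (Python/numpy sketch, illustrative only).

import numpy as np

v1 = np.array([1.0, 0.0, -1.0, 1.0])
v2 = np.array([0.0, 1.0,  1.0, 0.0])
v3 = np.array([1.0, 0.0,  0.0, 1.0])
v4 = np.array([0.0, 0.0,  0.0, 1.0])
v5 = np.array([1.0, 1.0,  0.0, 1.0])

# 4 columns of rank 4 : {v1, v2, v3, v4} is linearly independent
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3, v4])))  # 4
# 3 columns of rank 2 : S2 = {v1, v2, v5} is linearly dependent (v5 = v1 + v2)
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v5])))      # 2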
Theorem 2.4.1 Let E be a vector space over the field K. A family F = {v1 , v2 , · · · , vn } of n
vectors of E (n ≥ 2) is linearly dependent if and only if at least one of the vectors of F is a
linear combination of the other vectors of F .
Remark 2.4.1 1. Any family containing a linearly dependent family is linearly dependent.
2. Any family included in a linearly independent family is linearly independent.
3. {v} is linearly independent if and only if v ≠ 0.
4. Any set containing the null vector is linearly dependent.
2.5 Basis
A basis of a vector space is a linearly independent generating family.
If B = (xi )i∈I , I ⊂ N, is a basis of E, then any x ∈ E can be written uniquely as a linear combination
of elements of B :
x = Σ_{i∈I} αi xi
The scalars (αi )i∈I , are called the coordinates of x in the basis B.
3 Finite dimensional vector spaces
Definition 3.0.1 If a vector space is spanned by a finite number of vectors, it is said to be
finite-dimensional.
Otherwise it is infinite-dimensional. The number of vectors in a basis for a finite-dimensional
vector space E is called the dimension of E and denoted dimE.
By convention, {0E } is a finite-dimensional space (of dimension 0).
Definition 3.0.2 A family {v1 , · · · , vn } of vectors of E is said to be a basis of E if and only
if, we have :
1. {v1 , · · · , vn } is a linearly independent family of E and
2. {v1 , · · · , vn } is a generating family of E.
Example 3.0.1 1. The set {1, i} is a basis of the R−vector space C.
Indeed, if a, b ∈ R are such that a · 1 + b · i = 0, then a + ib = 0 + i0 and therefore a = b = 0.
The set is therefore linearly independent.
For any complex number z, there are a, b ∈ R such that z = a + ib, so {1, i} is a generating
set of C ; it is therefore a basis of C.
2. In R3 , the set {e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)} forms a basis of R3 , called the
canonical basis of R3 .
The set {v1 = (1, 0, 1), v2 = (1, −1, 1), v3 = (0, 1, 1)} is a basis of R3 . Indeed :
a/ The family is linearly independent.
Let α1 , α2 , α3 ∈ R such that α1 v1 + α2 v2 + α3 v3 = 0R3 . Then
α1 + α2 = 0
−α2 + α3 = 0
α1 + α2 + α3 = 0
which leads to α1 = α2 = α3 = 0.
b/ The set generates R3 . Let (x, y, z) ∈ R3 . We are looking for α1 , α2 , α3 ∈ R such
that (x, y, z) = α1 v1 + α2 v2 + α3 v3 . We then obtain the system
α1 + α2 = x
−α2 + α3 = y
α1 + α2 + α3 = z
and we find α1 = 2x + y − z, α2 = −x − y + z and α3 = −x + z.
So span {v1 = (1, 0, 1), v2 = (1, −1, 1), v3 = (0, 1, 1)} = R3 . Then {v1 , v2 , v3 } is a basis of
R3 .
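As an informal check, part a/ amounts to the matrix with columns v1 , v2 , v3 being invertible, and part b/ to solving a linear system for the coordinates (Python/numpy sketch, illustrative only).

import numpy as np

# columns are v1 = (1, 0, 1), v2 = (1, -1, 1), v3 = (0, 1, 1)
A = np.array([[1.0,  1.0, 0.0],
              [0.0, -1.0, 1.0],
              [1.0,  1.0, 1.0]])

print(np.linalg.det(A))   # about -1.0 : nonzero, so the columns are linearly independent

# part b/ : the coordinates of (x, y, z) in this basis solve A @ (a1, a2, a3) = (x, y, z)
x, y, z = 1.0, 2.0, 3.0
print(np.linalg.solve(A, np.array([x, y, z])))
# [1. 0. 2.], i.e. (2x + y - z, -x - y + z, -x + z)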
More generally, we have :
Proposition 3.0.1 Canonical basis of Kn
Consider the vector space E = Kn over the field K.
The canonical (or standard) basis vectors of E are the unit vectors in each coordinate direction
of the vector space :
(e1 , e2 , · · · , en ), given by :
e1 = (1, 0, 0, · · · , 0), e2 = (0, 1, 0, 0, · · · , 0), · · · , en = (0, 0, 0, · · · , 0, 1).
Proposition 3.0.2 Canonical basis of Kn [X]
Let n ∈ N. Consider the vector space E = Kn [X] of polynomials of degree ≤ n with coefficients
in K. There is a specific basis of Kn [X] called canonical, given by {1, X, X 2 , · · · , X n } .
Theorem 3.0.1 Extracted basis theorem From any finite generating family of E,
we can extract a basis of E. In particular, a finite-dimensional space admits a basis.
Theorem 3.0.2 Incomplete basis theorem If E is finite-dimensional, then any linearly
independent family of E can be completed into a basis of E. To complete it, it suffices to adjoin
suitable vectors taken from a generating family of E.
Theorem 3.0.3 Dimension If E is finite-dimensional, then all bases of E have the same
number of vectors ; this common number is the dimension of E.
Corollary 3.0.1 If E is a finite-dimensional vector space (dimE = n) and if B = (v1 , v2 , · · · , vn )
is a family of n vectors of E, then the following conditions are equivalent :
1. B is linearly independent.
2. B is a generating set of E.
3. B is a basis of E.
Remark 3.0.1 1. In particular, in an n-dimensional space, a linearly independent family
always has at most n elements, and a generating family always has at least n elements.
2. If E and F are finite-dimensional, then dim(E × F ) = dim(E) + dim(F ). In particular,
dim(Kn ) = n.
3. dim(Kn [X]) = n + 1.
Definition 3.0.3 If (v1 , v2 , · · · , vn ) is a finite family of vectors of E, we call rank of (v1 , v2 , · · · , vn )
the dimension of F = span (v1 , v2 , · · · , vn ) .
Let G = {v1 = (2, 1), v2 = (4, 2), v3 = (−3, 4)} be a subset of R2 . Let’s determine the rank
of G.
The set G is linearly dependent (v2 = 2v1 ), so span (v1 , v2 , v3 ) = span (v2 , v3 ) , so rank(G) = 2.
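Concretely, the rank of G is the rank of the matrix whose columns are v1 , v2 , v3 (Python/numpy sketch, illustrative only).

import numpy as np

# columns are v1 = (2, 1), v2 = (4, 2), v3 = (-3, 4)
G = np.array([[2.0, 4.0, -3.0],
              [1.0, 2.0,  4.0]])
print(np.linalg.matrix_rank(G))   # 2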
3.1 Subspaces and dimension
If E is a finite-dimensional vector space and if F is a subspace of E, then we have dim(F ) ≤
dim(E) and Furthermore :
dim(F ) = dim(E) ⇔ F = E.
Grassmann formula : Let E be a finite-dimensional vector space and let F, G be two subspaces
of E. Then
dim(F + G) = dim(F ) + dim(G) − dim(F ∩ G).
In particular, F and G are in direct sum if and only if
dim(F + G) = dim(F ) + dim(G).
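As an informal illustration of the Grassmann formula, take F and H from Example 1.2.2 : F is spanned by (1, 0, 1) and (0, 1, 1), and H by (0, 0, 1) (spanning sets chosen here for the illustration). The Python/numpy sketch below recovers dim(F ∩ H) = 0, consistent with the sum being direct.

import numpy as np

F = np.column_stack([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])   # spanning set of F as columns
H = np.column_stack([[0.0, 0.0, 1.0]])                     # spanning set of H as a column

dim_F = np.linalg.matrix_rank(F)                    # 2
dim_H = np.linalg.matrix_rank(H)                    # 1
dim_sum = np.linalg.matrix_rank(np.hstack([F, H]))  # dim(F + H) = 3

# Grassmann : dim(F ∩ H) = dim F + dim H - dim(F + H) = 0, so the sum is direct
print(dim_F + dim_H - dim_sum)   # 0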