Lec 30

The document discusses linear independence of vectors. It defines linear independence as vectors not being linearly dependent. It provides examples of checking linear independence of two vectors in R2 by solving systems of equations. It also discusses that a set containing the zero vector is always linearly dependent, and that two non-zero vectors are linearly independent if they are not multiples of each other.

Mathematics for Data Science - 2

Professor. Sarang S. Sane


Department of Mathematics
Indian Institute of Technology, Madras
Lecture No. 30
Linear Independence – Part 1

Hello, and welcome to the Maths 2 component of this online B.Sc. program in data science. Today,
we are going to study the topic of Linear Independence. In the last couple of videos, we have
studied the idea of linear dependence. In particular, we saw the notion of linear combinations
of vectors. Let me remind you that we are now studying vectors in the sense of elements of a
vector space, which we defined two videos before this.

So, we defined the notion of a linear combination, which is nothing but a sum ∑ aᵢvᵢ, where the
vᵢ's are vectors in a vector space and the aᵢ's are real numbers. So, this is a finite sum
a₁v₁ + a₂v₂ + ⋯ + aₙvₙ. And we talked about linear dependence.
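Since we will compute with linear combinations throughout, here is a minimal sketch in Python (an illustration, not part of the lecture; the function name is my own) of the finite sum a₁v₁ + ⋯ + aₙvₙ for vectors represented as tuples:

```python
# Compute the linear combination a_1*v_1 + ... + a_n*v_n of vectors in R^n.
# Vectors are plain tuples; coefficients are real numbers.
def linear_combination(coeffs, vectors):
    dim = len(vectors[0])
    return tuple(sum(a * v[i] for a, v in zip(coeffs, vectors))
                 for i in range(dim))

# For example, 2*(1, 0) + 3*(0, 1) = (2, 3).
combo = linear_combination([2, 3], [(1, 0), (0, 1)])
```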

(Refer Slide Time: 01:11)

So, today, in this video, we are going to talk about linear independence. Let us quickly recall what
linear dependence is. We say that a set of vectors v₁, v₂, …, vₙ from a vector space V is linearly
dependent if there exist scalars a₁, a₂, …, aₙ such that a₁v₁ + a₂v₂ + ⋯ + aₙvₙ = 0. Now, of
course, there is a crucial extra condition, which is that not all of the coefficients should be 0.
So, at least one coefficient must be non-zero. If all the coefficients are 0, of course, we know
that a₁v₁ + a₂v₂ + ⋯ + aₙvₙ = 0.

So, the point is that there is some linear combination with not all coefficients zero whose sum is
still 0. If there exist such scalars, then we say that v₁, v₂, …, vₙ are linearly dependent.
Equivalently, in terms of linear combinations, which is something I defined last time and we just
talked about: if we can express the 0 vector as a linear combination of v₁, v₂, …, vₙ in which
some coefficient, meaning at least one coefficient, is non-zero, then we say that v₁, v₂, …, vₙ
are linearly dependent. So, we have studied this notion in the previous video.

(Refer Slide Time: 02:34)

So, in this video, we are going to talk about linear independence. We say that a set of vectors
v₁, v₂, …, vₙ from a vector space V is linearly independent if it is not linearly dependent. This
is a very easy definition: we have already studied what linear dependence is, and we say the
vectors are linearly independent if they are not linearly dependent. So, what does this mean in
terms of linear combinations and summing up to 0?

The equivalent formulation is: a set of vectors v₁, v₂, …, vₙ from a vector space V is said to be
linearly independent if the equation a₁v₁ + a₂v₂ + ⋯ + aₙvₙ = 0 can only be satisfied when every
aᵢ is 0. Just to go back to dependence: dependence said there are some scalars a₁, a₂, …, aₙ, not
all of which are 0, such that the sum a₁v₁ + a₂v₂ + ⋯ + aₙvₙ is 0. Linear independence is saying:
if there is a linear combination a₁v₁ + a₂v₂ + ⋯ + aₙvₙ which is 0, then the only way this can
happen is if the aᵢ's are all 0.

So, equivalently in terms of linear combinations, we are saying that a set of vectors
v₁, v₂, …, vₙ is linearly independent if the only linear combination of these vectors which
equals 0 is the linear combination with all coefficients 0. Let us take a pause and understand
what we are saying here. If all the aᵢ's are 0, then certainly ∑ aᵢvᵢ = 0; that direction is
automatic.

So, what we are saying is that if the sum is 0, then the coefficients must all be 0. In other
words, the only way of getting 0 as a linear combination is with all coefficients 0. I want to
emphasize what this means for checking that a set is linearly independent, which we will do at
the end of this video: we assume the linear combination is 0 and then have to check that all the
aᵢ's are 0.
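To make the check concrete, here is a hedged sketch in Python (the function name and representation are my own, not from the lecture): vectors v₁, …, vₖ are linearly independent exactly when the matrix having them as columns has rank k, which exact Gaussian elimination over the rationals can decide.

```python
from fractions import Fraction

def is_linearly_independent(vectors):
    """Decide whether vectors v_1, ..., v_k in R^n are linearly independent.

    They are independent exactly when the matrix whose columns are the
    vectors has rank k, computed here by exact Gaussian elimination.
    """
    k = len(vectors)
    # rows of the matrix whose columns are the given vectors
    rows = [[Fraction(x) for x in row] for row in zip(*vectors)]
    rank = 0
    for col in range(k):
        # find a pivot row for this column at or below the current rank
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] != 0),
                     None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # eliminate this column from every other row
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                factor = rows[r][col] / rows[rank][col]
                rows[r] = [x - factor * y for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank == k
```

We will see this confirm the worked examples below, e.g. (−1, 3) and (2, 0) in ℝ².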

(Refer Slide Time: 05:00)

So, let us do some examples. Let us first check out this example in ℝ². Look at the two
vectors (−1, 3) and (2, 0). We want to see which coefficients give us that the sum is 0. So,
you write the equation a(−1, 3) + b(2, 0) = (0, 0) and then you try to solve for a and b. To
do that, you equate the corresponding coordinates.

So, the first coordinate for the left hand side is −𝑎 + 2𝑏, the first coordinate for the right hand side
is 0. They are equal so that means these two must be the same. So, −𝑎 + 2𝑏 = 0. And the second
coordinate is 3𝑎 + 0𝑏. And the second coordinate on the right is 0. So, you get 3𝑎 = 0. So, we
have a system of linear equations, −𝑎 + 2𝑏 = 0 and 3𝑎 = 0.

Now, we know how to solve a system of linear equations; in fact, we know a very general method
for that, and towards the end of the video we will start using it. But for now, you can see this
is very easy to solve. Namely, 3a = 0, so a = 0. And once a = 0, you put that into the first
equation and you get 2b = 0, so b = 0.

So, the unique solution for this system is a = 0 and b = 0, which means that if
a(−1, 3) + b(2, 0) = (0, 0), the only coefficients which make this identity true are
a = 0 and b = 0. That means the vectors (−1, 3) and (2, 0) are linearly independent.
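As a quick sanity check (a brute-force sketch, not the general method), we can scan a small grid of integer coefficient pairs and confirm that only a = b = 0 satisfies a(−1, 3) + b(2, 0) = (0, 0):

```python
# Collect all integer pairs (a, b) in a small range for which
# a*(-1, 3) + b*(2, 0) = (0, 0); only (0, 0) should survive.
solutions = [(a, b)
             for a in range(-5, 6)
             for b in range(-5, 6)
             if (a * -1 + b * 2, a * 3 + b * 0) == (0, 0)]
```

Of course this only probes integer coefficients; the algebra above shows uniqueness over all reals.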

(Refer Slide Time: 06:54)

Let us look at the 0 vector. This is a very special vector in our vector space. Suppose I have a
set of vectors v₁, v₂, …, vₙ and one of the vectors, say vᵢ, is the 0 vector. Then what you do is
choose the i-th coefficient to be 1 and choose the j-th coefficient, for every j ≠ i, to be 0. If
you do that, then look at the linear combination a₁v₁ + a₂v₂ + ⋯ + aₙvₙ: for every vector with
j ≠ i, the term aⱼvⱼ is going to be 0, because aⱼ = 0.

On the other hand, for the i-th vector you have aᵢvᵢ, and vᵢ = 0, so aᵢvᵢ = 0. So, each of the
terms a₁v₁, a₂v₂, and so on up to aₙvₙ is 0; hence the sum is also 0, and the linear combination
is 0. But, of course, not all the coefficients are 0, because aᵢ = 1. So, one of the coefficients
is 1, and not all the coefficients are 0. What does that mean? It means that a set of vectors
which contains the 0 vector is always a linearly dependent set; it is not linearly independent.
So, if your v₁, v₂, …, vₙ happen to include 0, then the set is linearly dependent, not linearly
independent. Let us keep this in mind.
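This argument can be played out numerically. In this sketch the particular set of vectors is made up for illustration; it contains the zero vector, and the coefficients aᵢ = 1, aⱼ = 0 for j ≠ i give a non-trivial combination summing to the zero vector:

```python
# A set of vectors in R^2 containing the zero vector (illustrative values).
vectors = [(1, 2), (0, 0), (3, 4)]
i = vectors.index((0, 0))
# coefficient 1 on the zero vector, 0 everywhere else
coeffs = [1 if j == i else 0 for j in range(len(vectors))]
combo = tuple(sum(a * v[t] for a, v in zip(coeffs, vectors)) for t in range(2))
# combo is the zero vector even though not all coefficients are zero,
# so the set is linearly dependent.
```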

(Refer Slide Time: 08:37)

So, now let us ask the question: when are two non-zero vectors linearly independent? We have
already seen that if your collection of vectors contains the 0 vector, then it is a linearly
dependent set. So, now we take two non-zero vectors; that is the starting point for our
discussion. Let us take two non-zero vectors and ask: when are they linearly independent? When
is this set of vectors linearly independent?

So, let v₁ and v₂ be two non-zero vectors. We want to study when they are linearly independent;
instead, we will study when they are linearly dependent, and from there we will conclude when
they are linearly independent. So, suppose v₁ and v₂ are linearly dependent. That means
a₁v₁ + a₂v₂ = 0 for some coefficients a₁ and a₂, where at least one of a₁ or a₂ is not 0. That
is the definition of linear dependence.

But the point here is that, since the vectors are non-zero, if one of these coefficients is
non-zero, then the other had better be non-zero as well. Otherwise, say a₁ is non-zero but
a₂ = 0; then you would have a₁v₁, which is a non-zero vector, equal to 0, and that cannot be
possible. So, if one of them is not zero, then both of them are not 0, and that is really what
this statement here means. Keep in mind: one of a₁ or a₂ is non-zero, and so both of them must
be non-zero.
So, dividing by a₁ and putting c = −a₂/a₁, we get that v₁ = cv₂. And so v₁ and v₂ are multiples of
each other. So, what are we saying? We are saying that if v₁ and v₂ are linearly dependent, then
v₁ and v₂ are multiples of each other. And we can go backwards: we can reverse the implications
above and conclude that if v₁ and v₂ are multiples of each other, then they are linearly
dependent. If v₁ and v₂ are multiples, then v₁ = cv₂ for some c.

And then note that c is non-zero, because both v₁ and v₂ are non-zero vectors. Then you can
write this as v₁ − cv₂ = 0, and so the coefficients are 1 and −c; in fact, both of them are
non-zero. Actually, it is enough for one of them to be non-zero, but in this case both of them
are non-zero. And so we get that they are linearly dependent. That is what I mean by saying we
can reverse the implications.

So, the conclusion is: if you have two non-zero vectors, then they are linearly independent
precisely when they are not multiples of each other. We have seen that linear dependence of two
non-zero vectors is equivalent to them being multiples of each other, so linear independence is
exactly the same as saying that they are not multiples of each other. Let me also qualify the
coefficients a₁ and a₂ above: at least one of a₁ or a₂ is not 0, and, as I said, because both
the vectors are non-zero and there are only two vectors, in this case both coefficients must be
non-zero.

So, the take-home from here is: if you have two non-zero vectors, then linear independence
means that they are not multiples of each other. The point one is trying to make here is that
linear independence is a very useful and important notion, and it says something very important
about these vectors, namely that they are not multiples of each other.
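For two non-zero vectors with integer entries, this take-home gives a direct test: they are linearly dependent exactly when one is a scalar multiple of the other. A minimal sketch (the function name is my own):

```python
from fractions import Fraction

def is_multiple(v, w):
    """True iff v = c*w for some scalar c (v, w non-zero integer vectors)."""
    # wherever w has a zero entry, v must too, or no single c can work
    if any(b == 0 and a != 0 for a, b in zip(v, w)):
        return False
    # all entry-wise ratios over the non-zero entries of w must agree
    ratios = {Fraction(a, b) for a, b in zip(v, w) if b != 0}
    return len(ratios) == 1

# (2, 4) = 2*(1, 2), so those are dependent;
# (-1, 3) and (2, 0) are not multiples, matching the earlier example.
```

So two non-zero vectors are linearly independent exactly when `is_multiple` returns False.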

(Refer Slide Time: 13:12)


So, let us ask the same question for three vectors: when are three vectors linearly independent,
and what does that say? We will do the same thing: first study when they are dependent, and then
from there draw conclusions for when they are independent. So, suppose v₁, v₂ and v₃ are linearly
dependent. Then we have an equation of the form a₁v₁ + a₂v₂ + a₃v₃ = 0 for some coefficients
a₁, a₂, a₃, where at least one of the coefficients is non-zero, which is exactly what I wanted to
say on the previous slide as well: at least one of these is non-zero.

So, let us assume that a₁ is non-zero. In a minute we will also study the other cases, but this
is the prototype case. So, if a₁ ≠ 0, I can divide by a₁, take the terms for v₂ and v₃ to the
other side, and write v₁ = b₂v₂ + b₃v₃, where b₂ = −a₂/a₁ and b₃ = −a₃/a₁. So, v₁ is a linear
combination of the other two vectors; that is the main point.

So, now we can make the same argument if a₂ is non-zero or a₃ is non-zero. Remember that we know
at least one of them is non-zero; if it is not a₁, maybe a₂ is non-zero, maybe a₃ is non-zero.
And you can see that you can make the same argument: if a₂ is non-zero, you can express v₂ as a
linear combination of the other two, and if a₃ is non-zero, you can express v₃ as a linear
combination of the other two vectors.

So, once again these implications are reversible. Say v₁ = b₂v₂ + b₃v₃. Then I can take the v₂
and v₃ terms to the other side and write v₁ − b₂v₂ − b₃v₃ = 0. But remember that v₁ has
coefficient 1, so I get 1v₁ − b₂v₂ − b₃v₃ = 0, which tells me that I have a linear combination
equal to 0 in which at least one coefficient is non-zero, namely the coefficient of v₁, because
that is 1. And so they are linearly dependent. You can argue in the same way if v₂ is a linear
combination of v₁ and v₃, or if v₃ is a linear combination of v₁ and v₂. So, these implications
are reversible.

So, the upshot is that, if you have three vectors 𝑣1 , 𝑣2 , 𝑣3 , which are linearly dependent, this is
exactly the same as saying that one of these vectors is a linear combination of the other two vectors.
So, now we can talk about linear independence. We have studied when they are linearly dependent.
So, we can now answer the question about linearly independent.

So, if three vectors are linearly independent, then none of these vectors is a linear combination
of the other two. That is what we are saying. You can think of this geometrically; we will do
that in a few minutes, but already you can start thinking about what this means in terms of
geometry. So, the take-home conclusion here is: if three vectors are linearly independent, that
is exactly the same as saying that none of them is a linear combination of the other two.
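The rearrangement above can be checked numerically. In this sketch the coefficients and vectors are made up for illustration: with a₁ ≠ 0, setting b₂ = −a₂/a₁ and b₃ = −a₃/a₁ makes v₁ = b₂v₂ + b₃v₃, and the combination a₁v₁ + a₂v₂ + a₃v₃ indeed comes out to the zero vector:

```python
from fractions import Fraction

# Illustrative coefficients (a1 != 0) and vectors in R^3.
a1, a2, a3 = 2, -4, 6
v2, v3 = (1, 0, 1), (0, 1, 1)

# b2 = -a2/a1 and b3 = -a3/a1, so that v1 = b2*v2 + b3*v3.
b2, b3 = Fraction(-a2, a1), Fraction(-a3, a1)
v1 = tuple(b2 * x + b3 * y for x, y in zip(v2, v3))

# Check that a1*v1 + a2*v2 + a3*v3 is the zero vector.
combo = tuple(a1 * p + a2 * q + a3 * r for p, q, r in zip(v1, v2, v3))
```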

(Refer Slide Time: 17:06)

Let us look at an example in ℝ3 . So, let us take these three vectors (1, 1, 2), (1, 2, 0) and (0, 2, 1).
So, we want to ask whether or not they are linearly independent. So, let us take some arbitrary
coefficients 𝑎, 𝑏 and 𝑐. So, 𝑎(1, 1, 2) + 𝑏(1, 2, 0) + 𝑐(0, 2, 1) = (0, 0, 0). And then we ask, can
we get from here what are the choices for 𝑎, 𝑏, 𝑐, in particular, is there a choice where at least one
of 𝑎, 𝑏 or 𝑐 is non-zero.

So, we have the following system of linear equations. How do we get this? By equating the
corresponding coordinates. So, the first coordinate on the left hand side is 𝑎 ∗ 1 + 𝑏 ∗ 1 + 𝑐 ∗ 0 =
𝑎 + 𝑏 = 0. The first coordinate on the right hand side is 0, so this is 𝑎 + 𝑏 = 0. Similarly, if you
equate the second coordinates, you get 𝑎 + 2𝑏 + 2𝑐 = 0. And the third coordinate then you get
2𝑎 + 0𝑏 + 1𝑐 = 2𝑎 + 𝑐 = 0. So, that is how you get the system of equations.

Now, let us solve for a, b and c and see if we can get solutions which are non-zero. From the
first equation we get b = −a, and from the third equation we get c = −2a. If you substitute
these into the middle equation, you get an equation in a alone. Specifically, you get
a − 2a − 4a = −5a = 0, which tells you that a = 0. And then once a = 0, b = −a gives b = 0, and
c = −2a gives c = 0. So, the only solution for this system is a = 0, b = 0, c = 0.

So, what have we achieved? We have shown that if you have an equation like this with
coefficients a, b and c, then the only way the equation is satisfied is when a is 0, b is 0
and c is 0. That tells us that the three vectors (1, 1, 2), (1, 2, 0) and (0, 2, 1) are
linearly independent.
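As with the ℝ² example, a brute-force sketch over a small integer grid confirms the algebra: substituting b = −a and c = −2a into the middle equation gives −5a = 0, so only the all-zero coefficients satisfy the system.

```python
# Scan integer triples (a, b, c) for solutions of the system
#   a + b = 0,  a + 2b + 2c = 0,  2a + c = 0;
# only (0, 0, 0) should survive.
solutions = [(a, b, c)
             for a in range(-4, 5)
             for b in range(-4, 5)
             for c in range(-4, 5)
             if (a + b, a + 2*b + 2*c, 2*a + c) == (0, 0, 0)]
```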
(Refer Slide Time: 19:37)

So, we have seen in this video the notion of a linearly independent set of vectors. We say that
a set of vectors is linearly independent if it is not linearly dependent, which really means
that if you take a linear combination of these vectors and equate it to 0, the only way that is
possible is if all the coefficients are 0. Thank you.
