Optimization of Chemical Processes (CHE1011)

The document provides information about the course "Optimization of Chemical Processes" taught by Dr. Dharmendra Kumar Bal. It discusses key concepts in vector algebra and linear algebra that are relevant to chemical process optimization, including vectors, vector addition and subtraction, scalar multiplication, linear combinations, linear independence, dot products, matrices, matrix operations, determinants, and inverses.


OPTIMIZATION OF

CHEMICAL PROCESSES
(CHE1011)

Dr. Dharmendra Kumar Bal


Assistant Professor (Sr.)
School of Chemical Engineering

Vector
A vector is a directed line segment in N dimensions.
It has both "length" and "direction".
A vector in R^n is an ordered set of n real numbers;
for example, v = (1, 6, 3, 4) is in R^4.
[Figure: a vector shown as a directed arrow from point a to point b]

column vector: $\begin{bmatrix} 1 \\ 6 \\ 3 \\ 4 \end{bmatrix}$

row vector: $(1 \;\; 6 \;\; 3 \;\; 4)$
Vector Addition and Vector Subtraction

$$u + v = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} u_1 + v_1 \\ u_2 + v_2 \end{bmatrix}, \qquad u - v = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} - \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} u_1 - v_1 \\ u_2 - v_2 \end{bmatrix}$$
The difference of two vectors is the result of adding a negative vector
A – B = A + (-B)
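As a quick numerical check (an illustration, not from the slides), the component-wise rules above can be reproduced with NumPy arrays; u reuses the earlier R^4 example and v is an arbitrary assumed vector.

```python
# Minimal sketch of vector addition and subtraction with NumPy.
# u reuses the earlier example vector; v is an assumed example.
import numpy as np

u = np.array([1.0, 6.0, 3.0, 4.0])
v = np.array([2.0, 0.0, 1.0, 5.0])

print(u + v)      # element-wise sum:        [ 3.  6.  4.  9.]
print(u - v)      # element-wise difference: [-1.  6.  2. -1.]
print(u + (-v))   # A - B = A + (-B) gives the same result as u - v
```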
Properties of Vector Addition
Commutative: A+B = B+A

Associative: (A+B)+C = A+(B+C)

There is a ZERO vector 0 = [0, 0, …, 0]^T such that A + 0 = 0 + A = A
Note:
• B + (A-B) = A
• -(-B) = B
• -(A-B) = B-A
Vector Multiplication: By Scalar
αv= α(x1, x2)= (αx1, αx2)
Properties:
1. Distributive: $\alpha(A + B) = \alpha A + \alpha B$ and $(\alpha + \beta)A = \alpha A + \beta A$

2. Associative: $(\alpha\beta)A = \alpha(\beta A)$

Linear Combination
Given vectors v1, v2, …, vn in R^n and n real numbers c1, c2, …, cn, the vector x obtained by
x = c1 v1 + c2 v2 + … + cn vn
is called a linear combination of v1, v2, …, vn
Examples of linear combinations of v1 and v2:
$\tfrac{1}{2}v_1 + 5v_2$
$0.5v_2 = 0 \cdot v_1 + 0.5v_2$
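A linear combination is straightforward to compute numerically. The sketch below is illustrative only, with assumed example vectors v1 and v2 and the coefficients from the first example above.

```python
# Sketch: forming the linear combination x = c1*v1 + c2*v2 with NumPy.
import numpy as np

v1 = np.array([1.0, 2.0])   # assumed example vector
v2 = np.array([0.0, 1.0])   # assumed example vector

c1, c2 = 0.5, 5.0           # coefficients of (1/2)v1 + 5v2
x = c1 * v1 + c2 * v2
print(x)                    # [0.5 6. ]
```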
Linear Independence
A set of vectors {v1, v2, …, vk} is called linearly dependent if
there exist scalars c1, c2, …, ck ,
not all zero, such that c1v1 + c2v2 + … + ckvk = 0

The vectors are linearly independent if the above equation is


satisfied ONLY by c1 = c2 = … = ck = 0
Examples:
• The set S = {(1, 2), (2, 4)} is linearly dependent because
−2(1, 2) + 1(2, 4) = (0, 0)
• The set S = {(1, 0), (0, 1), (−2, 5)} is linearly dependent
because
2(1, 0) −5(0, 1) + 1(−2, 5) = (0, 0)
Linear Independence
Determine whether the following set of vectors is linearly dependent or linearly independent:
v1 = (1, 2, 3), v2 = (0, 1, 2), v3 = (−2, 0, 1)

Solution: c1v1 + c2v2 + c3v3 = 0


⇒ c1(1, 2, 3) + c2(0, 1, 2) + c3(−2, 0, 1) = (0, 0, 0)
⇒ (c1 − 2c3, 2c1 + c2, 3c1 + 2c2 + c3) = (0, 0, 0)
The first component gives c1 = 2c3, the second gives c2 = −2c1 = −4c3, and substituting into the third gives 6c3 − 8c3 + c3 = −c3 = 0.
⇒ c1 = c2 = c3 = 0
Therefore, the set of vectors is linearly independent
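The same conclusion can be checked numerically (a sketch, not part of the original solution): stack v1, v2, v3 as the rows of a matrix and compute its rank; a rank of 3 means the only solution of c1v1 + c2v2 + c3v3 = 0 is the trivial one.

```python
# Sketch: testing linear independence via the matrix rank.
import numpy as np

V = np.array([[ 1, 2, 3],    # v1
              [ 0, 1, 2],    # v2
              [-2, 0, 1]])   # v3

print(np.linalg.matrix_rank(V))   # 3 -> the vectors are linearly independent
```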
Linear Independence: Properties
• A set of vectors is linearly dependent if and only if
one of the vectors is a linear combination of the
others.
• Any set of vectors containing the zero vector is
linearly dependent.
• If a set of vectors is linearly independent, then
any subset of these vectors is also linearly
independent.
• If a set of vectors is linearly dependent, then any
larger set, containing this set, is also linearly
dependent.
Dot Product (Inner Product, Scalar Product) of Vectors
The dot product of two vectors is a scalar:
$$A \cdot B = A^T B = \begin{pmatrix} a & b & c \end{pmatrix} \begin{pmatrix} d \\ e \\ f \end{pmatrix} = ad + be + cf$$
The squared magnitude of a vector is the dot product of the vector with itself:
$$\|A\|^2 = A^T A = a^2 + b^2 + c^2$$
The dot product is related to the angle between the two vectors:
$$A \cdot B = \|A\|\,\|B\|\cos\theta$$
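These three relations can be illustrated with NumPy; the vectors A and B below are assumed example values, not taken from the slides.

```python
# Sketch: dot product, magnitude, and the angle between two vectors.
import numpy as np

A = np.array([1.0, 2.0, 2.0])
B = np.array([2.0, 0.0, 0.0])

dot = A @ B                      # ad + be + cf = 2.0
mag_A = np.sqrt(A @ A)           # |A| = sqrt(A.A) = 3.0
theta = np.arccos(dot / (mag_A * np.linalg.norm(B)))   # from A.B = |A||B|cos(theta)
print(dot, mag_A, np.degrees(theta))   # 2.0 3.0 ~70.5 degrees
```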
• Matrix Algebra
• Matrix Operations: Addition
• Matrix Multiplication
• Identity Matrix
• Zero Matrix
• Symmetric Matrix
• Diagonal matrix
Matrix Operations: Properties
• Matrix addition is commutative and associative: A + B = B + A and (A + B) + C = A + (B + C).
• Matrix multiplication is associative and distributive, (AB)C = A(BC) and A(B + C) = AB + AC, but in general it is not commutative: AB ≠ BA.
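The sketch below (illustrative, with assumed 2×2 matrices) shows these operations in NumPy, including the identity matrix and the fact that matrix multiplication is not commutative in general.

```python
# Sketch of basic matrix operations on assumed example matrices.
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
I = np.eye(2, dtype=int)             # identity matrix

print(A + B)                          # element-wise addition
print(A @ B)                          # matrix multiplication
print(np.array_equal(A @ I, A))       # True: A I = A
print(np.array_equal(A @ B, B @ A))   # False: AB != BA in general
```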
The Determinant of a Square Matrix

• The determinant of a matrix A is denoted by |A| (or det(A)).
• Determinants exist only for square matrices (n × n).
Properties of Determinants
• |A|=|A'|.
• If a row or column of A = 0, then |A|= 0.
• If every entry in a row or column of A is multiplied by k, then the determinant of the resulting matrix is k|A|.
• If two rows or columns are identical, |A| = 0.
• If two rows or columns are linear combinations of each other, |A| = 0.
• |A| remains unchanged if each element of a row is
multiplied by a constant and added to any other row.
• |AB| = |A| |B|
• Determinant of a diagonal matrix = product of the
diagonal elements
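Two of these properties are verified numerically in the sketch below; the matrices are assumed examples chosen only for illustration.

```python
# Sketch: |AB| = |A||B| and det(diagonal) = product of diagonal entries.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[1.0, 4.0],
              [2.0, 1.0]])

print(np.linalg.det(A @ B))                  # ~ -42.0
print(np.linalg.det(A) * np.linalg.det(B))   # ~ -42.0 = 6 * (-7)

D = np.diag([2.0, 3.0, 5.0])
print(np.linalg.det(D))                      # ~ 30.0 = 2 * 3 * 5
```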
Inverse of a Matrix

Let A denote an (n × n) matrix.
Let B denote an (n × n) matrix such that AB = BA = I.
If such a matrix B exists, then A is called invertible; B is called the inverse of A and is denoted by A-1.

Let A and B be two matrices whose inverses exist, and let C = AB. Then the inverse of the matrix C exists and C-1 = B-1A-1.
Inverse of a Matrix
The inverse A-1 is defined such that A A-1 = I. Not every matrix has an inverse.
If no inverse exists, then the matrix is called singular (non-invertible, det(A) = 0).
If A is non-singular, then A-1 is also non-singular.
If A and B are non-singular, then AB is also non-singular and (AB)-1 = B-1A-1.
• (ABC)-1 = C-1B-1A-1
• If A is non-singular, then its transpose is also non-singular. Also, (AT)-1 = (A-1)T
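A short numerical illustration of these statements (with assumed example matrices) is sketched below; np.linalg.inv raises an error for a singular matrix, which is why only the determinant of S is printed.

```python
# Sketch: inverses, A A^-1 = I, (AB)^-1 = B^-1 A^-1, and a singular case.
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
B = np.array([[1.0, 2.0],
              [3.0, 5.0]])

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))        # True
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ A_inv))    # True: (AB)^-1 = B^-1 A^-1

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rows are multiples of each other
print(np.linalg.det(S))      # ~ 0.0 -> singular, no inverse exists
```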
Rank of a Matrix
• The rank of a matrix is defined as
rank(A) = the number of linearly independent rows
= the number of linearly independent columns
• If a matrix A of dimension m × n (where n < m) is
of rank n, then A has maximum possible rank and
is said to be of full rank.
• rank(O) = 0; as long as A is not the zero matrix, rank(A) > 0.
• In general, the maximum possible rank of an
(m × n) matrix A is min(m, n).
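The sketch below uses NumPy's matrix_rank on assumed example matrices to illustrate these statements.

```python
# Sketch: ranks of a full-rank matrix, a rank-deficient matrix, and O.
import numpy as np

A = np.array([[ 1, 2, 3],
              [ 0, 1, 2],
              [-2, 0, 1]])
B = np.array([[1, 2],
              [2, 4],
              [3, 6]])        # second column = 2 * first column

print(np.linalg.matrix_rank(A))                 # 3: full rank, min(m, n) = 3
print(np.linalg.matrix_rank(B))                 # 1: columns are dependent
print(np.linalg.matrix_rank(np.zeros((3, 3))))  # 0: rank of the zero matrix
```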
Elementary Row and Column Operations
The following elementary row or column operations yield an
equivalent system:
 Interchanges: Two rows (or columns) can be interchanged
 Scaling: Multiplying a row (or column) by a nonzero
constant
 Sum: The row (or column) can be replaced by the sum of
that row (column) and a nonzero multiple of any other row
(column).
Elementary operations do not change the rank. One can therefore use EROs and ECOs to find the rank of a matrix (see the sketch below):
ERO ⇒ reduce the matrix by rows; the rank is the minimum number of rows with at least one nonzero entry
ECO ⇒ reduce the matrix by columns; the rank is the minimum number of columns with at least one nonzero entry
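The sketch below is one possible (assumed, simplified) implementation of this idea: it applies the interchange and sum operations to reduce an example matrix by rows and counts the rows that still contain a nonzero entry, which matches NumPy's matrix_rank.

```python
# Sketch: estimating the rank by elementary row operations (Gaussian
# elimination). rank_by_row_reduction is an assumed helper, not a
# function from the course material.
import numpy as np

def rank_by_row_reduction(M, tol=1e-10):
    A = M.astype(float).copy()
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):
        # Interchange: move a row with a nonzero entry in this column up
        pivot = next((r for r in range(pivot_row, rows) if abs(A[r, col]) > tol), None)
        if pivot is None:
            continue
        A[[pivot_row, pivot]] = A[[pivot, pivot_row]]
        # Sum: add a multiple of the pivot row to eliminate entries below it
        for r in range(pivot_row + 1, rows):
            A[r] -= (A[r, col] / A[pivot_row, col]) * A[pivot_row]
        pivot_row += 1
    # Rank = number of rows with at least one nonzero entry after reduction
    return int(np.sum(np.any(np.abs(A) > tol, axis=1)))

M = np.array([[1, 2, 3],
              [2, 4, 6],     # 2 * first row
              [1, 0, 1]])
print(rank_by_row_reduction(M), np.linalg.matrix_rank(M))   # 2 2
```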
