Introduction to Matrix Algebra

April 4, 2018 | Author: fathi123






Chapter 04.01
Introduction

After reading this chapter, you should be able to:

1. define what a matrix is,
2. identify special types of matrices, and
3. identify when two matrices are equal.

What does a matrix look like?

Matrices are everywhere. If you have used a spreadsheet such as Excel or Lotus or written a table, you have used a matrix. Matrices make presentation of numbers clearer and make calculations easier to program. Look at the matrix below about the sale of tires in a Blowout r'us store – given by quarter and make of tires.

             Q1   Q2   Q3   Q4
Tirestone    25   20    3    2
Michigan      5   10   15   25
Copper        6   16    7   27

If one wants to know how many Copper tires were sold in Quarter 4, we go along the row Copper and the column Q4 and find that it is 27.

So what is a matrix?

A matrix is a rectangular array of elements. The elements can be symbolic expressions or numbers. A matrix $[A]$ is denoted by
\[
[A] = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\]
Row $i$ of $[A]$ has $n$ elements and is
\[ [a_{i1} \; a_{i2} \; \cdots \; a_{in}] \]
and column $j$ of $[A]$ has $m$ elements and is
\[ \begin{bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{bmatrix} \]

Each matrix has rows and columns, and this defines the size of the matrix. If a matrix $[A]$ has $m$ rows and $n$ columns, the size of the matrix is denoted by $m \times n$. The matrix $[A]$ may also be denoted by $[A]_{m \times n}$ to show that $[A]$ is a matrix with $m$ rows and $n$ columns.

Each entry in the matrix is called an entry or element of the matrix and is denoted by $a_{ij}$, where $i$ is the row number and $j$ is the column number of the element.

The matrix for the tire sales example could be denoted by the matrix $[A]$ as
\[
[A] = \begin{bmatrix}
25 & 20 & 3 & 2 \\
5 & 10 & 15 & 25 \\
6 & 16 & 7 & 27
\end{bmatrix}
\]
There are 3 rows and 4 columns, so the size of the matrix is $3 \times 4$. In the above $[A]$ matrix, $a_{34} = 27$.

What are the special types of matrices?

Vector: A vector is a matrix that has only one row or one column. There are two types of vectors – row vectors and column vectors.
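The size and 1-based indexing convention just described can be sketched in code. Below is a minimal pure-Python illustration using the tire-sales matrix; the helper names `size` and `entry` are mine, not from the text:

```python
# Tire sales matrix [A]: rows are Tirestone, Michigan, Copper; columns are Q1..Q4.
A = [
    [25, 20, 3, 2],    # Tirestone
    [5, 10, 15, 25],   # Michigan
    [6, 16, 7, 27],    # Copper
]

def size(matrix):
    """Return (m, n): the number of rows and the number of columns."""
    return len(matrix), len(matrix[0])

def entry(matrix, i, j):
    """Return a_ij using the text's 1-based row/column convention."""
    return matrix[i - 1][j - 1]

print(size(A))         # (3, 4), i.e. a 3x4 matrix
print(entry(A, 3, 4))  # 27 Copper tires sold in quarter 4
```

Note the off-by-one adjustment: the text indexes rows and columns from 1, while Python lists index from 0.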
Row vector: If a matrix $[B]$ has one row, it is called a row vector, $[B] = [b_1 \; b_2 \; \cdots \; b_n]$, and $n$ is the dimension of the row vector.

Example 1
Give an example of a row vector.
Solution
$[B] = [25 \; 20 \; 3 \; 2 \; 0]$ is an example of a row vector of dimension 5.

Column vector: If a matrix $[C]$ has one column, it is called a column vector,
\[ [C] = \begin{bmatrix} c_1 \\ \vdots \\ c_m \end{bmatrix} \]
and $m$ is the dimension of the vector.

Example 2
Give an example of a column vector.
Solution
\[ [C] = \begin{bmatrix} 25 \\ 5 \\ 6 \end{bmatrix} \]
is an example of a column vector of dimension 3.

Submatrix: If some row(s) and/or column(s) of a matrix $[A]$ are deleted (possibly none, so a matrix is a submatrix of itself), the remaining matrix is called a submatrix of $[A]$.

Example 3
Find some of the submatrices of the matrix
\[ [A] = \begin{bmatrix} 4 & 6 & 2 \\ 3 & -1 & 2 \end{bmatrix} \]
Solution
\[
\begin{bmatrix} 4 & 6 & 2 \\ 3 & -1 & 2 \end{bmatrix}, \;
\begin{bmatrix} 4 & 6 \\ 3 & -1 \end{bmatrix}, \;
[4 \; 6 \; 2], \; [4], \;
\begin{bmatrix} 2 \\ 2 \end{bmatrix}
\]
are some of the submatrices of $[A]$. Can you find other submatrices of $[A]$?

Square matrix: If the number of rows $m$ of a matrix equals its number of columns $n$ (that is, $m = n$), then $[A]$ is called a square matrix. The entries $a_{11}, a_{22}, \ldots, a_{nn}$ are called the diagonal elements of a square matrix. Sometimes the diagonal of the matrix is also called the principal or main diagonal of the matrix.

Example 4
Give an example of a square matrix.
Solution
\[ [A] = \begin{bmatrix} 25 & 20 & 3 \\ 5 & 10 & 15 \\ 6 & 15 & 7 \end{bmatrix} \]
is a square matrix, as it has the same number of rows and columns, that is, 3. The diagonal elements of $[A]$ are $a_{11} = 25$, $a_{22} = 10$, $a_{33} = 7$.

Upper triangular matrix: An $m \times n$ matrix for which $a_{ij} = 0$ for $i > j$ is called an upper triangular matrix. That is, all the elements below the diagonal entries are zero.

Example 5
Give an example of an upper triangular matrix.
Solution
\[ [A] = \begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 0 & 15005 \end{bmatrix} \]
is an upper triangular matrix.

Lower triangular matrix: An $m \times n$ matrix for which $a_{ij} = 0$ for $j > i$ is called a lower triangular matrix. That is, all the elements above the diagonal entries are zero.
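The two triangular conditions can be checked mechanically from these definitions. A small pure-Python sketch (the function names are mine, and the sample matrices follow the chapter's triangular examples):

```python
def is_upper_triangular(A):
    """True if every element below the diagonal (i > j) is zero."""
    return all(A[i][j] == 0
               for i in range(len(A))
               for j in range(len(A[0])) if i > j)

def is_lower_triangular(A):
    """True if every element above the diagonal (j > i) is zero."""
    return all(A[i][j] == 0
               for i in range(len(A))
               for j in range(len(A[0])) if j > i)

U = [[10, -7, 0], [0, -0.001, 6], [0, 0, 15005]]  # upper triangular
L = [[1, 0, 0], [0.3, 1, 0], [0.6, 2.5, 1]]       # lower triangular
print(is_upper_triangular(U), is_lower_triangular(L))  # True True
```

A diagonal matrix satisfies both conditions at once, which is consistent with its definition as a square matrix whose only non-zero entries lie on the diagonal.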
Example 6
Give an example of a lower triangular matrix.
Solution
\[ [A] = \begin{bmatrix} 1 & 0 & 0 \\ 0.3 & 1 & 0 \\ 0.6 & 2.5 & 1 \end{bmatrix} \]
is a lower triangular matrix.

Diagonal matrix: A square matrix with all non-diagonal elements equal to zero is called a diagonal matrix; that is, only the diagonal entries of the square matrix can be non-zero ($a_{ij} = 0$ for $i \neq j$).

Example 7
Give an example of a diagonal matrix.
Solution
\[ [A] = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2.1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \]
is a diagonal matrix. Any or all of the diagonal entries of a diagonal matrix can be zero; the matrix above, with $a_{33} = 0$, is still a diagonal matrix.

Identity matrix: A diagonal matrix with all diagonal elements equal to one is called an identity matrix ($a_{ij} = 0$ for $i \neq j$, and $a_{ii} = 1$ for all $i$).

Example 8
Give an example of an identity matrix.
Solution
\[ [A] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
is an identity matrix.

Zero matrix: A matrix whose entries are all zero is called a zero matrix ($a_{ij} = 0$ for all $i$ and $j$).

Example 9
Give examples of a zero matrix.
Solution
\[
[A] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \;
[B] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \;
[C] = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \;
[D] = [0 \; 0 \; 0]
\]
are all examples of a zero matrix.

Tridiagonal matrix: A tridiagonal matrix is a square matrix in which all elements are zero except those on the major diagonal, the diagonal above the major diagonal, and the diagonal below the major diagonal.

Example 10
Give an example of a tridiagonal matrix.
Solution
\[ [A] = \begin{bmatrix} 2 & 4 & 0 & 0 \\ 2 & 3 & 9 & 0 \\ 0 & 0 & 5 & 2 \\ 0 & 0 & 3 & 6 \end{bmatrix} \]
is a tridiagonal matrix.

Do non-square matrices have diagonal entries?
Yes. For an $m \times n$ matrix $[A]$, the diagonal entries are $a_{11}, a_{22}, \ldots, a_{kk}$, where $k = \min\{m, n\}$.

Example 11
What are the diagonal entries of
\[ [A] = \begin{bmatrix} 3.2 & 5 \\ 6 & 7 \\ 2.9 & 3.2 \\ 5.6 & 7.8 \end{bmatrix} \; ? \]
Solution
The diagonal elements of $[A]$ are $a_{11} = 3.2$ and $a_{22} = 7$.

Diagonally dominant matrix: An $n \times n$ square matrix $[A]$ is a diagonally dominant matrix if
\[ |a_{ii}| \geq \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}| \quad \text{for all } i = 1, 2, \ldots, n \]
and
\[ |a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}| \quad \text{for at least one } i, \]
that is, for each row, the absolute value of the diagonal element is greater than or equal to the sum of the absolute values of the rest of the elements of that row, and the inequality is strictly greater than for at least one row. Diagonally dominant matrices are important in ensuring convergence in iterative schemes for solving simultaneous linear equations.

Example 12
Give examples of matrices that are diagonally dominant and of matrices that are not diagonally dominant.
Solution
\[ [A] = \begin{bmatrix} 15 & 6 & 7 \\ 2 & -4 & -2 \\ 3 & 2 & 6 \end{bmatrix} \]
is a diagonally dominant matrix, as
\[ |a_{11}| = 15 \geq |a_{12}| + |a_{13}| = |6| + |7| = 13 \]
\[ |a_{22}| = |-4| = 4 \geq |a_{21}| + |a_{23}| = |2| + |-2| = 4 \]
\[ |a_{33}| = 6 \geq |a_{31}| + |a_{32}| = |3| + |2| = 5 \]
and for at least one row (Rows 1 and 3 in this case) the inequality is strictly greater than.
\[ [B] = \begin{bmatrix} -15 & 6 & 9 \\ 2 & -4 & 2 \\ 3 & -2 & 5.001 \end{bmatrix} \]
is a diagonally dominant matrix, as
\[ |b_{11}| = |-15| = 15 \geq |6| + |9| = 15 \]
\[ |b_{22}| = |-4| = 4 \geq |2| + |2| = 4 \]
\[ |b_{33}| = |5.001| \geq |3| + |-2| = 5 \]
The inequalities are satisfied for all rows, and strictly for at least one row (in this case Row 3).
\[ [C] = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} \]
is not diagonally dominant, as
\[ |c_{22}| = 8 \leq |c_{21}| + |c_{23}| = |64| + |1| = 65 \]

When are two matrices considered to be equal?
Two matrices $[A]$ and $[B]$ are equal if the size of $[A]$ and $[B]$ is the same (the number of rows and columns are the same for $[A]$ and $[B]$) and $a_{ij} = b_{ij}$ for all $i$ and $j$.

Example 13
What would make
\[ [A] = \begin{bmatrix} 2 & 3 \\ 6 & 7 \end{bmatrix} \]
equal to
\[ [B] = \begin{bmatrix} b_{11} & 3 \\ 6 & b_{22} \end{bmatrix} \; ? \]
Solution
The two matrices $[A]$ and $[B]$ would be equal if $b_{11} = 2$ and $b_{22} = 7$.
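The diagonal-dominance test in Example 12 translates directly into code: check that every row satisfies the weak inequality, and that at least one row satisfies the strict one. A pure-Python sketch (the function name is mine, not from the text):

```python
def is_diagonally_dominant(A):
    """Weak dominance |a_ii| >= sum of off-diagonal |a_ij| in every row,
    and strict dominance in at least one row."""
    n = len(A)
    strict = False
    for i in range(n):
        off_diag = sum(abs(A[i][j]) for j in range(n) if j != i)
        if abs(A[i][i]) < off_diag:
            return False          # weak dominance fails for this row
        if abs(A[i][i]) > off_diag:
            strict = True         # strict dominance holds for this row
    return strict

A = [[15, 6, 7], [2, -4, -2], [3, 2, 6]]     # Example 12: dominant
C = [[25, 5, 1], [64, 8, 1], [144, 12, 1]]   # Example 12: not dominant
print(is_diagonally_dominant(A), is_diagonally_dominant(C))  # True False
```

Matrix [B] of Example 12, where two rows hold with equality and only the third is strict, also passes this test.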
Key Terms: Matrix, Vector, Submatrix, Square matrix, Equal matrices, Zero matrix, Identity matrix, Diagonal matrix, Upper triangular matrix, Lower triangular matrix, Tridiagonal matrix, Diagonally dominant matrix.

Chapter 04.02
Vectors

After reading this chapter, you should be able to:

1. define a vector,
2. add and subtract vectors,
3. find linear combinations of vectors and their relationship to a set of equations,
4. explain what it means to have a linearly independent set of vectors, and
5. find the rank of a set of vectors.

What is a vector?

A vector is a collection of numbers in a definite order. If it is a collection of $n$ numbers, it is called an $n$-dimensional vector. So the vector $\vec{A}$ given by
\[ \vec{A} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} \]
is an $n$-dimensional column vector with $n$ components $a_1, a_2, \ldots, a_n$. The above is a column vector. A row vector $\vec{B}$ is of the form $\vec{B} = [b_1, b_2, \ldots, b_n]$, where $\vec{B}$ is an $n$-dimensional row vector with $n$ components $b_1, b_2, \ldots, b_n$.

Example 1
Give an example of a 3-dimensional column vector.
Solution
Assume a point in space is given by its $(x, y, z)$ coordinates. Then if the values are $x = 3$, $y = 2$, $z = 5$, the column vector corresponding to the location of the point is
\[ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \\ 5 \end{bmatrix} \]

When are two vectors equal?
Two vectors $\vec{A}$ and $\vec{B}$ are equal if they are of the same dimension and if their corresponding components are equal. Given
\[ \vec{A} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} \text{ and } \vec{B} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} \]
then $\vec{A} = \vec{B}$ if $a_i = b_i$ for $i = 1, 2, \ldots, n$.

Example 2
What are the values of the unknown components in $\vec{B}$ if
\[ \vec{A} = \begin{bmatrix} 2 \\ 3 \\ 4 \\ 1 \end{bmatrix} \text{ and } \vec{B} = \begin{bmatrix} b_1 \\ 3 \\ 4 \\ b_4 \end{bmatrix} \]
and $\vec{A} = \vec{B}$?
Solution
$b_1 = 2$, $b_4 = 1$.

How do you add two vectors?
Two vectors can be added only if they are of the same dimension, and the addition is given by
\[ \vec{A} + \vec{B} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} = \begin{bmatrix} a_1 + b_1 \\ a_2 + b_2 \\ \vdots \\ a_n + b_n \end{bmatrix} \]

Example 3
Add the two vectors
\[ \vec{A} = \begin{bmatrix} 2 \\ 3 \\ 4 \\ 1 \end{bmatrix} \text{ and } \vec{B} = \begin{bmatrix} 5 \\ -2 \\ 3 \\ 7 \end{bmatrix} \]
Solution
\[ \vec{A} + \vec{B} = \begin{bmatrix} 2 + 5 \\ 3 - 2 \\ 4 + 3 \\ 1 + 7 \end{bmatrix} = \begin{bmatrix} 7 \\ 1 \\ 7 \\ 8 \end{bmatrix} \]

Example 4
A store sells three brands of tires: Tirestone, Michigan and Copper. In quarter 1, the sales are given by the column vector
\[ \vec{A}_1 = \begin{bmatrix} 25 \\ 5 \\ 6 \end{bmatrix} \]
where the rows represent the three brands of tires sold – Tirestone, Michigan and Copper respectively. In quarter 2, the sales are given by
\[ \vec{A}_2 = \begin{bmatrix} 20 \\ 10 \\ 6 \end{bmatrix} \]
What is the total sale of each brand of tire in the first half of the year?
Solution
The total sales are given by
\[ \vec{C} = \vec{A}_1 + \vec{A}_2 = \begin{bmatrix} 25 + 20 \\ 5 + 10 \\ 6 + 6 \end{bmatrix} = \begin{bmatrix} 45 \\ 15 \\ 12 \end{bmatrix} \]
So the number of Tirestone tires sold is 45, Michigan is 15 and Copper is 12 in the first half of the year.

What is a null vector?
A null vector is a vector where all the components are zero.

Example 5
Give an example of a null vector or zero vector.
Solution
The vector
\[ \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \]
is an example of a zero or null vector.

What is a unit vector?
A unit vector $\vec{U}$ is defined as
\[ \vec{U} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix} \]
where
\[ u_1^2 + u_2^2 + u_3^2 + \cdots + u_n^2 = 1 \]

Example 6
Give examples of 3-dimensional unit column vectors.
Solution
Examples include
\[ \begin{bmatrix} 1/\sqrt{3} \\ 1/\sqrt{3} \\ 1/\sqrt{3} \end{bmatrix}, \;
\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \;
\begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{bmatrix}, \;
\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \text{ etc.} \]

How do you multiply a vector by a scalar?
If $k$ is a scalar and $\vec{A}$ is an $n$-dimensional vector, then
\[ k\vec{A} = k \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} ka_1 \\ ka_2 \\ \vdots \\ ka_n \end{bmatrix} \]

Example 7
What is $2\vec{A}$ if
\[ \vec{A} = \begin{bmatrix} 25 \\ 20 \\ 5 \end{bmatrix} \; ? \]
Solution
\[ 2\vec{A} = \begin{bmatrix} 2 \times 25 \\ 2 \times 20 \\ 2 \times 5 \end{bmatrix} = \begin{bmatrix} 50 \\ 40 \\ 10 \end{bmatrix} \]

Example 8
A store sells three brands of tires: Tirestone, Michigan and Copper. In quarter 1, the sales are given by the column vector
\[ \vec{A} = \begin{bmatrix} 25 \\ 25 \\ 6 \end{bmatrix} \]
If the goal is to increase the sales of all tires by at least 25% in the next quarter, how many of each brand should be sold?
Solution
Since the goal is to increase the sales by 25%, one would multiply the $\vec{A}$ vector by 1.25:
\[ \vec{B} = 1.25 \begin{bmatrix} 25 \\ 25 \\ 6 \end{bmatrix} = \begin{bmatrix} 31.25 \\ 31.25 \\ 7.5 \end{bmatrix} \]
Since the number of tires must be an integer, we can say that the goal of sales is
\[ \vec{B} = \begin{bmatrix} 32 \\ 32 \\ 8 \end{bmatrix} \]

What do you mean by a linear combination of vectors?
Given $\vec{A}_1, \vec{A}_2, \ldots, \vec{A}_m$ as $m$ vectors of the same dimension $n$, and scalars $k_1, k_2, \ldots, k_m$, then
\[ k_1\vec{A}_1 + k_2\vec{A}_2 + \cdots + k_m\vec{A}_m \]
is a linear combination of the $m$ vectors.

Example 9
Find the linear combinations a) $\vec{A} - \vec{B}$ and b) $\vec{A} + \vec{B} - 3\vec{C}$, where
\[ \vec{A} = \begin{bmatrix} 2 \\ 3 \\ 6 \end{bmatrix}, \;
\vec{B} = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}, \;
\vec{C} = \begin{bmatrix} 10 \\ 1 \\ 2 \end{bmatrix} \]
Solution
a)
\[ \vec{A} - \vec{B} = \begin{bmatrix} 2 - 1 \\ 3 - 1 \\ 6 - 2 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix} \]
b)
\[ \vec{A} + \vec{B} - 3\vec{C} = \begin{bmatrix} 2 + 1 - 30 \\ 3 + 1 - 3 \\ 6 + 2 - 6 \end{bmatrix} = \begin{bmatrix} -27 \\ 1 \\ 2 \end{bmatrix} \]

What do you mean by vectors being linearly independent?
A set of vectors $\vec{A}_1, \vec{A}_2, \ldots, \vec{A}_m$ is considered to be linearly independent if
\[ k_1\vec{A}_1 + k_2\vec{A}_2 + \cdots + k_m\vec{A}_m = \vec{0} \]
has only one solution, namely
\[ k_1 = k_2 = \cdots = k_m = 0 \]

Example 10
Are the three vectors
\[ \vec{A}_1 = \begin{bmatrix} 25 \\ 64 \\ 144 \end{bmatrix}, \;
\vec{A}_2 = \begin{bmatrix} 5 \\ 8 \\ 12 \end{bmatrix}, \;
\vec{A}_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \]
linearly independent?
Solution
Writing the linear combination of the three vectors,
\[ k_1 \begin{bmatrix} 25 \\ 64 \\ 144 \end{bmatrix} + k_2 \begin{bmatrix} 5 \\ 8 \\ 12 \end{bmatrix} + k_3 \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \]
gives
\[ \begin{bmatrix} 25k_1 + 5k_2 + k_3 \\ 64k_1 + 8k_2 + k_3 \\ 144k_1 + 12k_2 + k_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \]
The above equations have only one solution, $k_1 = k_2 = k_3 = 0$. However, how do we show that this is the only solution? This is shown below. The above equations are
\[ 25k_1 + 5k_2 + k_3 = 0 \quad (1) \]
\[ 64k_1 + 8k_2 + k_3 = 0 \quad (2) \]
\[ 144k_1 + 12k_2 + k_3 = 0 \quad (3) \]
Subtracting Eqn (1) from Eqn (2) gives
\[ 39k_1 + 3k_2 = 0 \]
\[ k_2 = -13k_1 \quad (4) \]
Multiplying Eqn (1) by 8 and subtracting it from Eqn (2) that is first multiplied by 5 gives
\[ 120k_1 - 3k_3 = 0 \]
\[ k_3 = 40k_1 \quad (5) \]
Remember we found Eqn (4) and Eqn (5) just from Eqns (1) and (2).
Substitution of Eqns (4) and (5) in Eqn (3) for $k_2$ and $k_3$ gives
\[ 144k_1 + 12(-13k_1) + 40k_1 = 0 \]
\[ 28k_1 = 0 \]
\[ k_1 = 0 \]
This means that $k_1$ has to be zero, and coupled with (4) and (5), $k_2$ and $k_3$ are also zero. So the only solution is $k_1 = k_2 = k_3 = 0$. The three vectors hence are linearly independent.

Example 11
Are the three vectors
\[ \vec{A}_1 = \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix}, \;
\vec{A}_2 = \begin{bmatrix} 2 \\ 5 \\ 7 \end{bmatrix}, \;
\vec{A}_3 = \begin{bmatrix} 6 \\ 14 \\ 24 \end{bmatrix} \]
linearly independent?
Solution
By inspection,
\[ \vec{A}_3 = 2\vec{A}_1 + 2\vec{A}_2 \]
or
\[ -2\vec{A}_1 - 2\vec{A}_2 + \vec{A}_3 = \vec{0} \]
So the linear combination $k_1\vec{A}_1 + k_2\vec{A}_2 + k_3\vec{A}_3 = \vec{0}$ has a non-zero solution
\[ k_1 = -2, \; k_2 = -2, \; k_3 = 1 \]
Hence, the set of vectors is linearly dependent.

What if I cannot prove by inspection, what do I do? Put the linear combination of the three vectors equal to the zero vector,
\[ k_1 \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix} + k_2 \begin{bmatrix} 2 \\ 5 \\ 7 \end{bmatrix} + k_3 \begin{bmatrix} 6 \\ 14 \\ 24 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \]
to give
\[ k_1 + 2k_2 + 6k_3 = 0 \quad (1) \]
\[ 2k_1 + 5k_2 + 14k_3 = 0 \quad (2) \]
\[ 5k_1 + 7k_2 + 24k_3 = 0 \quad (3) \]
Multiplying Eqn (1) by 2 and subtracting from Eqn (2) gives
\[ k_2 + 2k_3 = 0 \]
\[ k_2 = -2k_3 \quad (4) \]
Multiplying Eqn (1) by 2.5 and subtracting from Eqn (2) gives
\[ -0.5k_1 - k_3 = 0 \]
\[ k_1 = -2k_3 \quad (5) \]
Remember we found Eqn (4) and Eqn (5) just from Eqns (1) and (2). Substituting Eqns (4) and (5) in Eqn (3) for $k_1$ and $k_2$ gives
\[ 5(-2k_3) + 7(-2k_3) + 24k_3 = 0 \]
\[ -10k_3 - 14k_3 + 24k_3 = 0 \]
\[ 0 = 0 \]
This means any values satisfying Eqns (4) and (5) will satisfy Eqns (1), (2) and (3) simultaneously. For example, choose $k_3 = 6$; then $k_2 = -12$ from Eqn (4) and $k_1 = -12$ from Eqn (5). Hence we have a nontrivial solution $[k_1 \; k_2 \; k_3] = [-12 \; -12 \; 6]$. This implies the three given vectors are linearly dependent. Can you find another nontrivial solution?

What about the following three vectors?
\[ \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix}, \;
\begin{bmatrix} 2 \\ 5 \\ 7 \end{bmatrix}, \;
\begin{bmatrix} 6 \\ 14 \\ 25 \end{bmatrix} \]
Are they linearly dependent or linearly independent? Note that the only difference between this set of vectors and the previous one is the third entry in the third vector.
Hence, equations (4) and (5) are still valid. What conclusion do you draw when you plug equations (4) and (5) into the third equation, $5k_1 + 7k_2 + 25k_3 = 0$? What has changed?

Example 12
Are the three vectors
\[ \vec{A}_1 = \begin{bmatrix} 25 \\ 64 \\ 89 \end{bmatrix}, \;
\vec{A}_2 = \begin{bmatrix} 5 \\ 8 \\ 13 \end{bmatrix}, \;
\vec{A}_3 = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} \]
linearly independent?
Solution
Writing the linear combination of the three vectors and equating it to the zero vector,
\[ k_1 \begin{bmatrix} 25 \\ 64 \\ 89 \end{bmatrix} + k_2 \begin{bmatrix} 5 \\ 8 \\ 13 \end{bmatrix} + k_3 \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \]
gives
\[ \begin{bmatrix} 25k_1 + 5k_2 + k_3 \\ 64k_1 + 8k_2 + k_3 \\ 89k_1 + 13k_2 + 2k_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \]
In addition to $k_1 = k_2 = k_3 = 0$, one can find other solutions for which $k_1, k_2, k_3$ are not all zero. For example, $k_1 = 1$, $k_2 = -13$, $k_3 = 40$ is also a solution. This implies
\[ 1 \begin{bmatrix} 25 \\ 64 \\ 89 \end{bmatrix} - 13 \begin{bmatrix} 5 \\ 8 \\ 13 \end{bmatrix} + 40 \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \]
So a linear combination with non-zero constants gives the zero vector. Hence $\vec{A}_1, \vec{A}_2, \vec{A}_3$ are linearly dependent.

What do you mean by the rank of a set of vectors?
From a set of $n$-dimensional vectors, the maximum number of linearly independent vectors in the set is called the rank of the set of vectors. Note that the rank of the vectors can never be greater than the dimension of the vectors.

Example 13
What is the rank of
\[ \vec{A}_1 = \begin{bmatrix} 25 \\ 64 \\ 144 \end{bmatrix}, \;
\vec{A}_2 = \begin{bmatrix} 5 \\ 8 \\ 12 \end{bmatrix}, \;
\vec{A}_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \; ? \]
Solution
Since we found in Example 2.10 that $\vec{A}_1, \vec{A}_2, \vec{A}_3$ are linearly independent, the rank of the set of vectors $\vec{A}_1, \vec{A}_2, \vec{A}_3$ is 3.

Example 14
What is the rank of
\[ \vec{A}_1 = \begin{bmatrix} 25 \\ 64 \\ 89 \end{bmatrix}, \;
\vec{A}_2 = \begin{bmatrix} 5 \\ 8 \\ 13 \end{bmatrix}, \;
\vec{A}_3 = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} \; ? \]
Solution
In Example 2.12 we found that $\vec{A}_1, \vec{A}_2, \vec{A}_3$ are linearly dependent; the rank of $\vec{A}_1, \vec{A}_2, \vec{A}_3$ is hence not 3 and is less than 3. Is it 2? Let us choose
\[ \vec{A}_1 = \begin{bmatrix} 25 \\ 64 \\ 89 \end{bmatrix}, \;
\vec{A}_2 = \begin{bmatrix} 5 \\ 8 \\ 13 \end{bmatrix} \]
A linear combination of $\vec{A}_1$ and $\vec{A}_2$ equal to zero has only one solution, the trivial one. Therefore, the rank is 2.

Example 15
What is the rank of
\[ \vec{A}_1 = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}, \;
\vec{A}_2 = \begin{bmatrix} 2 \\ 2 \\ 4 \end{bmatrix}, \;
\vec{A}_3 = \begin{bmatrix} 3 \\ 3 \\ 5 \end{bmatrix} \; ? \]
Solution
From inspection, $\vec{A}_2 = 2\vec{A}_1$, which implies
\[ 2\vec{A}_1 - \vec{A}_2 + 0\vec{A}_3 = \vec{0} \]
Hence $k_1\vec{A}_1 + k_2\vec{A}_2 + k_3\vec{A}_3 = \vec{0}$ has a nontrivial solution. So $\vec{A}_1, \vec{A}_2, \vec{A}_3$ are linearly dependent, and hence the rank of the three vectors is not 3. Since $\vec{A}_2 = 2\vec{A}_1$, $\vec{A}_1$ and $\vec{A}_2$ are linearly dependent, but
\[ k_1\vec{A}_1 + k_3\vec{A}_3 = \vec{0} \]
has the trivial solution as the only solution. So $\vec{A}_1$ and $\vec{A}_3$ are linearly independent. The rank of the above three vectors is 2.

Prove that if a set of vectors contains the null vector, the set of vectors is linearly dependent.
Let $\vec{A}_1, \vec{A}_2, \ldots, \vec{A}_m$ be a set of $n$-dimensional vectors and consider the linear combination
\[ k_1\vec{A}_1 + k_2\vec{A}_2 + \cdots + k_m\vec{A}_m = \vec{0} \]
Assuming $\vec{A}_1$ is the zero or null vector, any value of $k_1$ coupled with $k_2 = k_3 = \cdots = k_m = 0$ will satisfy the above equation. Hence, the set of vectors is linearly dependent, as more than one solution exists.

Prove that if a set of vectors is linearly independent, then a subset of the $m$ vectors also has to be linearly independent.
Let this subset be $\vec{A}_{a1}, \vec{A}_{a2}, \ldots, \vec{A}_{ap}$, where $p < m$. If this subset were linearly dependent, the linear combination
\[ k_1\vec{A}_{a1} + k_2\vec{A}_{a2} + \cdots + k_p\vec{A}_{ap} = \vec{0} \]
would have a non-trivial solution. Then
\[ k_1\vec{A}_{a1} + k_2\vec{A}_{a2} + \cdots + k_p\vec{A}_{ap} + 0\vec{A}_{a(p+1)} + \cdots + 0\vec{A}_{am} = \vec{0} \]
also has a non-trivial solution, where $\vec{A}_{a(p+1)}, \ldots, \vec{A}_{am}$ are the rest of the $(m - p)$ vectors. However, this contradicts the linear independence of the full set. Therefore, a subset of linearly independent vectors cannot be linearly dependent.

Prove that if a set of vectors is linearly dependent, then at least one vector can be written as a linear combination of the others.
Let $\vec{A}_1, \vec{A}_2, \ldots, \vec{A}_m$ be linearly dependent; then there exists a set of numbers $k_1, \ldots, k_m$, not all of which are zero, for the linear combination
\[ k_1\vec{A}_1 + k_2\vec{A}_2 + \cdots + k_m\vec{A}_m = \vec{0} \]
Let $k_p \neq 0$ be one of the non-zero values of $k_i$, $i = 1, \ldots, m$. Then
\[ \vec{A}_p = -\frac{k_1}{k_p}\vec{A}_1 - \frac{k_2}{k_p}\vec{A}_2 - \cdots - \frac{k_{p-1}}{k_p}\vec{A}_{p-1} - \frac{k_{p+1}}{k_p}\vec{A}_{p+1} - \cdots - \frac{k_m}{k_p}\vec{A}_m \]
and that proves the theorem.

Prove that if the dimension of a set of vectors is less than the number of vectors in the set, then the set of vectors is linearly dependent. Can you prove it?

How can vectors be used to write simultaneous linear equations?
If a set of $m$ linear equations with $n$ unknowns is written as
\[ a_{11}x_1 + \cdots + a_{1n}x_n = c_1 \]
\[ a_{21}x_1 + \cdots + a_{2n}x_n = c_2 \]
\[ \vdots \]
\[ a_{m1}x_1 + \cdots + a_{mn}x_n = c_m \]
where $x_1, x_2, \ldots, x_n$ are the unknowns, then in vector notation they can be written as
\[ x_1\vec{A}_1 + x_2\vec{A}_2 + \cdots + x_n\vec{A}_n = \vec{C} \]
where
\[ \vec{A}_1 = \begin{bmatrix} a_{11} \\ \vdots \\ a_{m1} \end{bmatrix}, \;
\vec{A}_2 = \begin{bmatrix} a_{12} \\ \vdots \\ a_{m2} \end{bmatrix}, \; \ldots, \;
\vec{A}_n = \begin{bmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{bmatrix}, \;
\vec{C} = \begin{bmatrix} c_1 \\ \vdots \\ c_m \end{bmatrix} \]
The problem now becomes whether you can find the scalars $x_1, x_2, \ldots, x_n$ such that the linear combination $x_1\vec{A}_1 + \cdots + x_n\vec{A}_n$ equals $\vec{C}$.

Example 16
Write
\[ 25x_1 + 5x_2 + x_3 = 106.8 \]
\[ 64x_1 + 8x_2 + x_3 = 177.2 \]
\[ 144x_1 + 12x_2 + x_3 = 279.2 \]
as a linear combination of vectors.
Solution
\[ x_1 \begin{bmatrix} 25 \\ 64 \\ 144 \end{bmatrix} + x_2 \begin{bmatrix} 5 \\ 8 \\ 12 \end{bmatrix} + x_3 \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix} \]

What is the definition of the dot product of two vectors?
Let $\vec{A} = [a_1, a_2, \ldots, a_n]$ and $\vec{B} = [b_1, b_2, \ldots, b_n]$ be two $n$-dimensional vectors. Then the dot product of the two vectors $\vec{A}$ and $\vec{B}$ is defined as
\[ \vec{A} \cdot \vec{B} = a_1b_1 + a_2b_2 + \cdots + a_nb_n = \sum_{i=1}^{n} a_ib_i \]
A dot product is also called an inner product or scalar product.

Example 17
Find the dot product of the two vectors $\vec{A} = (4, 1, 2, 3)$ and $\vec{B} = (3, 1, 7, 2)$.
Solution
\[ \vec{A} \cdot \vec{B} = (4)(3) + (1)(1) + (2)(7) + (3)(2) = 33 \]

Example 18
A product line needs three types of rubber as given in the table below.
Rubber type   Weight (lbs)   Cost per pound ($)
A             200            20.23
B             250            30.56
C             310            29.12

Use the definition of a dot product to find the total price of the rubber needed.
Solution
The weight vector is given by $\vec{W} = (200, 250, 310)$ and the cost vector is given by $\vec{C} = (20.23, 30.56, 29.12)$. The total cost of the rubber is the dot product of $\vec{W}$ and $\vec{C}$:
\[ \vec{W} \cdot \vec{C} = (200)(20.23) + (250)(30.56) + (310)(29.12) \]
\[ = 4046 + 7640 + 9027.2 \]
\[ = \$20713.20 \]

Key Terms: Vector, Addition of vectors, Subtraction of vectors, Unit vector, Scalar multiplication of vectors, Null vector, Linear combination of vectors, Linearly independent vectors, Rank, Dot product.

Chapter 04.03
Binary Matrix Operations

After reading this chapter, you should be able to:

1. add, subtract, and multiply matrices, and
2. apply rules of binary operations on matrices.

How do you add two matrices?
Two matrices $[A]$ and $[B]$ can be added only if they are the same size. The addition is then shown as
\[ [C] = [A] + [B] \]
where
\[ c_{ij} = a_{ij} + b_{ij} \]

Example 1
Add the following two matrices.
\[ [A] = \begin{bmatrix} 5 & 2 & 3 \\ 1 & 2 & 7 \end{bmatrix}, \quad
[B] = \begin{bmatrix} 6 & 7 & -2 \\ 3 & 5 & 19 \end{bmatrix} \]
Solution
\[ [C] = [A] + [B] = \begin{bmatrix} 5+6 & 2+7 & 3+(-2) \\ 1+3 & 2+5 & 7+19 \end{bmatrix} = \begin{bmatrix} 11 & 9 & 1 \\ 4 & 7 & 26 \end{bmatrix} \]

Example 2
Blowout r'us store has two store locations $A$ and $B$, and their sales of tires are given by make (in rows) and quarters (in columns) as shown below.
\[ [A] = \begin{bmatrix} 25 & 20 & 3 & 2 \\ 5 & 10 & 15 & 25 \\ 6 & 16 & 7 & 27 \end{bmatrix}, \quad
[B] = \begin{bmatrix} 20 & 5 & 4 & 0 \\ 3 & 6 & 15 & 21 \\ 4 & 1 & 7 & 20 \end{bmatrix} \]
where the rows represent the sale of Tirestone, Michigan and Copper tires respectively and the columns represent the quarter number: 1, 2, 3 and 4. What are the total tire sales for the two locations by make and quarter?
Solution
\[ [C] = [A] + [B] = \begin{bmatrix} 25+20 & 20+5 & 3+4 & 2+0 \\ 5+3 & 10+6 & 15+15 & 25+21 \\ 6+4 & 16+1 & 7+7 & 27+20 \end{bmatrix} = \begin{bmatrix} 45 & 25 & 7 & 2 \\ 8 & 16 & 30 & 46 \\ 10 & 17 & 14 & 47 \end{bmatrix} \]
So if one wants to know the total number of Copper tires sold in quarter 4 at the two locations, we would look at Row 3 – Column 4 to give $c_{34} = 47$.

How do you subtract two matrices?
Two matrices $[A]$ and $[B]$ can be subtracted only if they are the same size. The subtraction is then given by
\[ [D] = [A] - [B] \]
where
\[ d_{ij} = a_{ij} - b_{ij} \]

Example 3
Subtract matrix $[B]$ from matrix $[A]$.
\[ [A] = \begin{bmatrix} 5 & 2 & 3 \\ 1 & 2 & 7 \end{bmatrix}, \quad
[B] = \begin{bmatrix} 6 & 7 & -2 \\ 3 & 5 & 19 \end{bmatrix} \]
Solution
\[ [D] = [A] - [B] = \begin{bmatrix} 5-6 & 2-7 & 3-(-2) \\ 1-3 & 2-5 & 7-19 \end{bmatrix} = \begin{bmatrix} -1 & -5 & 5 \\ -2 & -3 & -12 \end{bmatrix} \]

Example 4
Blowout r'us has two store locations $A$ and $B$, and their sales of tires are given by make (in rows) and quarters (in columns) as shown below.
\[ [A] = \begin{bmatrix} 25 & 20 & 3 & 2 \\ 5 & 10 & 15 & 25 \\ 6 & 16 & 7 & 27 \end{bmatrix}, \quad
[B] = \begin{bmatrix} 20 & 5 & 4 & 0 \\ 3 & 6 & 15 & 21 \\ 4 & 1 & 7 & 20 \end{bmatrix} \]
where the rows represent the sale of Tirestone, Michigan and Copper tires respectively and the columns represent the quarter number: 1, 2, 3, and 4. How many more tires did store $A$ sell than store $B$ of each brand in each quarter?
Solution
\[ [D] = [A] - [B] = \begin{bmatrix} 25-20 & 20-5 & 3-4 & 2-0 \\ 5-3 & 10-6 & 15-15 & 25-21 \\ 6-4 & 16-1 & 7-7 & 27-20 \end{bmatrix} = \begin{bmatrix} 5 & 15 & -1 & 2 \\ 2 & 4 & 0 & 4 \\ 2 & 15 & 0 & 7 \end{bmatrix} \]
So if you want to know how many more Copper tires were sold in quarter 4 in store $A$ than store $B$, $d_{34} = 7$. Note that $d_{13} = -1$ implies that store $A$ sold 1 less Tirestone tire than store $B$ in quarter 3.

How do I multiply two matrices?
Two matrices $[A]$ and $[B]$ can be multiplied only if the number of columns of $[A]$ is equal to the number of rows of $[B]$, to give
\[ [C]_{m \times n} = [A]_{m \times p} [B]_{p \times n} \]
If $[A]$ is an $m \times p$ matrix and $[B]$ is a $p \times n$ matrix, the resulting matrix $[C]$ is an $m \times n$ matrix. So how does one calculate the elements of the $[C]$ matrix?
\[ c_{ij} = \sum_{k=1}^{p} a_{ik}b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{ip}b_{pj} \]
for each $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$. To put it in simpler terms, the $i$th row and $j$th column entry of the $[C]$ matrix in $[C] = [A][B]$ is calculated by multiplying the $i$th row of $[A]$ by the $j$th column of $[B]$, that is,
\[ c_{ij} = [a_{i1} \; a_{i2} \; \cdots \; a_{ip}] \begin{bmatrix} b_{1j} \\ b_{2j} \\ \vdots \\ b_{pj} \end{bmatrix} = \sum_{k=1}^{p} a_{ik}b_{kj} \]

Example 5
Given
\[ [A] = \begin{bmatrix} 5 & 2 & 3 \\ 1 & 2 & 7 \end{bmatrix}, \quad
[B] = \begin{bmatrix} 3 & -2 \\ 5 & -8 \\ 9 & -10 \end{bmatrix} \]
find $[C] = [A][B]$.
Solution
$c_{12}$ can be found by multiplying the first row of $[A]$ by the second column of $[B]$:
\[ c_{12} = [5 \; 2 \; 3] \begin{bmatrix} -2 \\ -8 \\ -10 \end{bmatrix} = (5)(-2) + (2)(-8) + (3)(-10) = -56 \]
Similarly, one can find the other elements of $[C]$ to give
\[ [C] = \begin{bmatrix} 52 & -56 \\ 76 & -88 \end{bmatrix} \]

Example 6
Blowout r'us store location $A$ has sales of tires given by make (in rows) and quarters (in columns) as shown below:
\[ [A] = \begin{bmatrix} 25 & 20 & 3 & 2 \\ 5 & 10 & 15 & 25 \\ 6 & 16 & 7 & 27 \end{bmatrix} \]
where the rows represent the sale of Tirestone, Michigan and Copper tires respectively and the columns represent the quarter number: 1, 2, 3, and 4. Find the per-quarter sales of store $A$ if the following are the prices of each tire: Tirestone $33.25, Michigan $40.19, Copper $25.03.
Solution
The answer is given by multiplying the price matrix by the quantity of sales of store $A$. The price matrix is $[33.25 \; 40.19 \; 25.03]$, so the per-quarter sales of store $A$ are given by
\[ [C] = [33.25 \; 40.19 \; 25.03] \begin{bmatrix} 25 & 20 & 3 & 2 \\ 5 & 10 & 15 & 25 \\ 6 & 16 & 7 & 27 \end{bmatrix} \]
Using $c_{1j} = \sum_{k=1}^{3} a_{1k}b_{kj}$,
\[ c_{11} = (33.25)(25) + (40.19)(5) + (25.03)(6) = \$1182.38 \]
Similarly,
\[ c_{12} = \$1467.38, \quad c_{13} = \$877.81, \quad c_{14} = \$1747.06 \]
Therefore, each quarter's sales of store $A$ in dollars are given by the four columns of the row vector
\[ [C] = [1182.38 \; 1467.38 \; 877.81 \; 1747.06] \]
Remember, since we are multiplying a $1 \times 3$ matrix by a $3 \times 4$ matrix, the resulting matrix is a $1 \times 4$ matrix.

What is the scalar product of a constant and a matrix?
If $[A]$ is an $m \times n$ matrix and $k$ is a real number, then the scalar product of $k$ and $[A]$ is another $m \times n$ matrix $[B]$, where $b_{ij} = k\,a_{ij}$.

Example 7
Let
\[ [A] = \begin{bmatrix} 2.1 & 3 & 2 \\ 5 & 1 & 6 \end{bmatrix} \]
Find $2[A]$.
Solution
\[ 2[A] = \begin{bmatrix} 2 \times 2.1 & 2 \times 3 & 2 \times 2 \\ 2 \times 5 & 2 \times 1 & 2 \times 6 \end{bmatrix} = \begin{bmatrix} 4.2 & 6 & 4 \\ 10 & 2 & 12 \end{bmatrix} \]

What is a linear combination of matrices?
If $[A_1], [A_2], \ldots, [A_p]$ are matrices of the same size and $k_1, k_2, \ldots, k_p$ are scalars, then
\[ k_1[A_1] + k_2[A_2] + \cdots + k_p[A_p] \]
is called a linear combination of $[A_1], [A_2], \ldots, [A_p]$.

Example 8
If
\[ [A_1] = \begin{bmatrix} 5 & 6 & 2 \\ 3 & 2 & 1 \end{bmatrix}, \;
[A_2] = \begin{bmatrix} 2.1 & 3 & 2 \\ 5 & 1 & 6 \end{bmatrix}, \;
[A_3] = \begin{bmatrix} 0 & 2.2 & 2 \\ 3 & 3.5 & 6 \end{bmatrix} \]
then find $[A_1] + 2[A_2] - 0.5[A_3]$.
Solution
\[ [A_1] + 2[A_2] - 0.5[A_3] = \begin{bmatrix} 5 & 6 & 2 \\ 3 & 2 & 1 \end{bmatrix} + \begin{bmatrix} 4.2 & 6 & 4 \\ 10 & 2 & 12 \end{bmatrix} - \begin{bmatrix} 0 & 1.1 & 1 \\ 1.5 & 1.75 & 3 \end{bmatrix} = \begin{bmatrix} 9.2 & 10.9 & 5 \\ 11.5 & 2.25 & 10 \end{bmatrix} \]

What are some of the rules of binary matrix operations?

Commutative law of addition: If $[A]$ and $[B]$ are $m \times n$ matrices, then
\[ [A] + [B] = [B] + [A] \]

Associative law of addition: If $[A]$, $[B]$ and $[C]$ are all $m \times n$ matrices, then
\[ ([A] + [B]) + [C] = [A] + ([B] + [C]) \]

Associative law of multiplication: If $[A]$, $[B]$ and $[C]$ are $m \times n$, $n \times p$ and $p \times r$ size matrices, respectively, then
\[ ([A][B])[C] = [A]([B][C]) \]
and the resulting matrix size on both sides of the equation is $m \times r$.

Distributive law: If $[A]$ and $[B]$ are $m \times n$ size matrices, and $[C]$ and $[D]$ are $n \times p$ size matrices, then
\[ [A]([C] + [D]) = [A][C] + [A][D] \]
\[ ([A] + [B])[C] = [A][C] + [B][C] \]
and the resulting matrix size on both sides of the equation is $m \times p$.

Example 9
Illustrate the associative law of multiplication of matrices using
\[ [A] = \begin{bmatrix} 1 & 2 \\ 3 & 5 \\ 0 & 2 \end{bmatrix}, \;
[B] = \begin{bmatrix} 2 & 5 \\ 9 & 6 \end{bmatrix}, \;
[C] = \begin{bmatrix} 2 & 1 \\ 3 & 5 \end{bmatrix} \]
Solution
\[ [B][C] = \begin{bmatrix} 2 & 5 \\ 9 & 6 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 3 & 5 \end{bmatrix} = \begin{bmatrix} 19 & 27 \\ 36 & 39 \end{bmatrix} \]
\[ [A]([B][C]) = \begin{bmatrix} 1 & 2 \\ 3 & 5 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} 19 & 27 \\ 36 & 39 \end{bmatrix} = \begin{bmatrix} 91 & 105 \\ 237 & 276 \\ 72 & 78 \end{bmatrix} \]
\[ [A][B] = \begin{bmatrix} 1 & 2 \\ 3 & 5 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} 2 & 5 \\ 9 & 6 \end{bmatrix} = \begin{bmatrix} 20 & 17 \\ 51 & 45 \\ 18 & 12 \end{bmatrix} \]
\[ ([A][B])[C] = \begin{bmatrix} 20 & 17 \\ 51 & 45 \\ 18 & 12 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 3 & 5 \end{bmatrix} = \begin{bmatrix} 91 & 105 \\ 237 & 276 \\ 72 & 78 \end{bmatrix} \]
The above illustrates the associative law of multiplication of matrices.

Is [A][B] = [B][A]?
If $[A][B]$ exists, the number of columns of $[A]$ has to be the same as the number of rows of $[B]$, and if $[B][A]$ exists, the number of columns of $[B]$ has to be the same as the number of rows of $[A]$. Now, for $[A][B] = [B][A]$, the resulting matrices from $[A][B]$ and $[B][A]$ have to be of the same size. This is only possible if $[A]$ and $[B]$ are square and of the same size. Even then, in general $[A][B] \neq [B][A]$.

Example 10
Determine if $[A][B] = [B][A]$ for the following matrices:
\[ [A] = \begin{bmatrix} 6 & 3 \\ 2 & 5 \end{bmatrix}, \quad
[B] = \begin{bmatrix} -3 & 2 \\ 1 & 5 \end{bmatrix} \]
Solution
\[ [A][B] = \begin{bmatrix} 6 & 3 \\ 2 & 5 \end{bmatrix} \begin{bmatrix} -3 & 2 \\ 1 & 5 \end{bmatrix} = \begin{bmatrix} -15 & 27 \\ -1 & 29 \end{bmatrix} \]
\[ [B][A] = \begin{bmatrix} -3 & 2 \\ 1 & 5 \end{bmatrix} \begin{bmatrix} 6 & 3 \\ 2 & 5 \end{bmatrix} = \begin{bmatrix} -14 & 1 \\ 16 & 28 \end{bmatrix} \]
\[ [A][B] \neq [B][A] \]

Key Terms: Addition of matrices, Subtraction of matrices, Multiplication of matrices, Scalar product of matrices, Linear combination of matrices, Rules of binary matrix operations.

Chapter 04.04
Unary Matrix Operations

After reading this chapter, you should be able to:

1. know what unary operations mean,
2. find the transpose of a square matrix and its relationship to symmetric matrices,
3. find the trace of a matrix, and
4. find the determinant of a matrix by the cofactor method.

What is the transpose of a matrix?
Let $[A]$ be an $m \times n$ matrix. Then $[B]$ is the transpose of $[A]$ if $b_{ij} = a_{ji}$ for all $i$ and $j$. That is, the $i$th row and $j$th column element of $[A]$ is the $j$th row and $i$th column element of $[B]$. Note that $[B]$ would be an $n \times m$ matrix. The transpose of $[A]$ is denoted by $[A]^T$.

Example 1
Find the transpose of
\[ [A] = \begin{bmatrix} 25 & 20 & 3 & 2 \\ 5 & 10 & 15 & 25 \\ 6 & 16 & 7 & 27 \end{bmatrix} \]
Solution
The transpose of $[A]$ is
\[ [A]^T = \begin{bmatrix} 25 & 5 & 6 \\ 20 & 10 & 16 \\ 3 & 15 & 7 \\ 2 & 25 & 27 \end{bmatrix} \]
Note, the transpose of a row vector is a column vector and the transpose of a column vector is a row vector. Also note that the transpose of a transpose of a matrix is the matrix itself, that is, $([A]^T)^T = [A]$. Also, $(A + B)^T = A^T + B^T$ and $(cA)^T = cA^T$.

What is a symmetric matrix?
A square matrix $[A]$ with real elements where $a_{ij} = a_{ji}$ for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, n$ is called a symmetric matrix. This is the same as saying that if $[A] = [A]^T$, then $[A]$ is a symmetric matrix.

Example 2
Give an example of a symmetric matrix.
Solution
\[ [A] = \begin{bmatrix} 21.2 & 3.2 & 6 \\ 3.2 & 21.5 & 8 \\ 6 & 8 & 9.3 \end{bmatrix} \]
is a symmetric matrix, as $a_{12} = a_{21} = 3.2$, $a_{13} = a_{31} = 6$ and $a_{23} = a_{32} = 8$.

What is a skew-symmetric matrix?
An $n \times n$ matrix is skew-symmetric if $a_{ij} = -a_{ji}$ for $i = 1, \ldots, n$ and $j = 1, \ldots, n$. This is the same as $[A] = -[A]^T$.

Example 3
Give an example of a skew-symmetric matrix.
Solution
\[ \begin{bmatrix} 0 & 1 & 2 \\ -1 & 0 & -5 \\ -2 & 5 & 0 \end{bmatrix} \]
is skew-symmetric, as $a_{12} = -a_{21} = 1$, $a_{13} = -a_{31} = 2$ and $a_{23} = -a_{32} = -5$. Since $a_{ii} = -a_{ii}$ only if $a_{ii} = 0$, all the diagonal elements of a skew-symmetric matrix have to be zero.

What is the trace of a matrix?
The trace of an n×n matrix [A] is the sum of the diagonal entries of [A], that is,

tr[A] = Σ (i = 1 to n) a_ii

Example 4

Find the trace of

[A] = [15   6  7]
      [ 2  -4  2]
      [ 3   2  6]

Solution

tr[A] = Σ (i = 1 to 3) a_ii = (15) + (-4) + (6) = 17

Example 5

The sales of tires are given by make (rows) and quarters (columns) for Blowoutr'us store location A, as shown below.

[A] = [25  20   3   2]
      [ 5  10  15  25]
      [ 6  16   7  27]

where the rows represent the sales of Tirestone, Michigan and Copper tires, and the columns represent quarter numbers 1, 2, 3, 4.

Find the total yearly revenue of store A if the prices of the tires vary by quarter as follows.

[B] = [33.25  30.01  35.02  30.05]
      [40.19  38.02  41.03  38.23]
      [25.03  22.02  27.03  22.95]

where the rows represent the cost of each tire made by Tirestone, Michigan and Copper, and the columns represent the quarter numbers.

Solution

To find the total tire revenue of store A for the whole year, we need to find the revenue from each brand of tire for the whole year and then add them to find the total. To do so, we need to rewrite the price matrix so that the quarters are in rows and the brand names are in the columns, that is, find the transpose of [B].

[C] = [B]^T = [33.25  40.19  25.03]
              [30.01  38.02  22.02]
              [35.02  41.03  27.03]
              [30.05  38.23  22.95]

Recognize now that if we find [A][C], we get

[D] = [A][C] = [1597  1965  1193]
               [1743  2152  1325]
               [1736  2169  1311]

The diagonal elements give the revenue from each brand of tire for the whole year, that is,

d_11 = $1597 (Tirestone revenue)
d_22 = $2152 (Michigan revenue)
d_33 = $1311 (Copper revenue)

The total yearly revenue from all three brands of tires is

Σ (i = 1 to 3) d_ii = 1597 + 2152 + 1311 = $5060

and this is the trace of the matrix [D].

Define the determinant of a matrix.

The determinant of a square matrix is a single unique real number corresponding to that matrix. For a matrix [A], the determinant is denoted by |A| or det(A). So do not use [A] and |A| interchangeably.

For a 2×2 matrix,

[A] = [a_11  a_12]
      [a_21  a_22]

det(A) = a_11 a_22 - a_12 a_21

How does one calculate the determinant of any square matrix?

Let [A] be an n×n matrix. The minor of entry a_ij is denoted by M_ij and is defined as the determinant of the (n-1)×(n-1) submatrix of [A] obtained by deleting the i-th row and j-th column of the matrix [A]. The determinant is then given by

det(A) = Σ (j = 1 to n) (-1)^(i+j) a_ij M_ij,  for any i = 1, 2, ..., n

or

det(A) = Σ (i = 1 to n) (-1)^(i+j) a_ij M_ij,  for any j = 1, 2, ..., n

coupled with det(A) = a_11 for a 1×1 matrix [A], as we can always reduce the determinant of a matrix to determinants of 1×1 matrices. The number (-1)^(i+j) M_ij is called the cofactor of a_ij and is denoted by C_ij. The above equation for the determinant can then be written as

det(A) = Σ (j = 1 to n) a_ij C_ij,  for any i = 1, 2, ..., n

or

det(A) = Σ (i = 1 to n) a_ij C_ij,  for any j = 1, 2, ..., n

The only reason why determinants are not generally calculated using this method is that it becomes computationally intensive. For an n×n matrix, it requires arithmetic operations proportional to n!.
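The cofactor expansion just described translates directly into a short recursive routine. The sketch below is not from the chapter (the function name det_cofactor is ours) and assumes NumPy is available; it expands along the first row exactly as in the formula above, and its cost grows like n!, which is why the method is impractical for large matrices.

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:                       # det of a 1x1 matrix [a11] is a11
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # minor M_1j: delete row 1 (index 0) and column j (index j)
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # cofactor sign (-1)**(1+j) in 1-based indexing is (-1)**j here
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(det_cofactor(A))   # -3.0, matching np.linalg.det(A)
```

Expanding along any other row or column (with the matching signs) would give the same value, which is a useful sanity check against a library routine such as np.linalg.det.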
Example 6

Find the determinant of

[A] = [ 25   5  1]
      [ 64   8  1]
      [144  12  1]

Solution

Method 1:

det(A) = Σ (j = 1 to 3) (-1)^(i+j) a_ij M_ij,  for any i = 1, 2, 3

Let i = 1 in the formula:

det(A) = Σ (j = 1 to 3) (-1)^(1+j) a_1j M_1j
       = a_11 M_11 - a_12 M_12 + a_13 M_13

M_11 = det [ 8  1] = -4
           [12  1]

M_12 = det [ 64  1] = -80
           [144  1]

M_13 = det [ 64   8] = -384
           [144  12]

det(A) = a_11 M_11 - a_12 M_12 + a_13 M_13
       = 25(-4) - 5(-80) + 1(-384)
       = -100 + 400 - 384
       = -84

Also, for i = 1, in terms of cofactors,

det(A) = Σ (j = 1 to 3) a_1j C_1j

C_11 = (-1)^(1+1) M_11 = M_11 = -4
C_12 = (-1)^(1+2) M_12 = -M_12 = 80
C_13 = (-1)^(1+3) M_13 = M_13 = -384

det(A) = a_11 C_11 + a_12 C_12 + a_13 C_13
       = (25)(-4) + (5)(80) + (1)(-384)
       = -100 + 400 - 384
       = -84

Method 2:

det(A) = Σ (i = 1 to 3) (-1)^(i+j) a_ij M_ij,  for any j = 1, 2, 3

Let j = 2 in the formula:

det(A) = Σ (i = 1 to 3) (-1)^(i+2) a_i2 M_i2
       = -a_12 M_12 + a_22 M_22 - a_32 M_32

M_12 = det [ 64  1] = -80
           [144  1]

M_22 = det [ 25  1] = -119
           [144  1]

M_32 = det [25  1] = -39
           [64  1]

det(A) = -a_12 M_12 + a_22 M_22 - a_32 M_32
       = -5(-80) + 8(-119) - 12(-39)
       = 400 - 952 + 468
       = -84

In terms of cofactors, for j = 2,

det(A) = Σ (i = 1 to 3) a_i2 C_i2

C_12 = (-1)^(1+2) M_12 = -M_12 = 80
C_22 = (-1)^(2+2) M_22 = M_22 = -119
C_32 = (-1)^(3+2) M_32 = -M_32 = 39

det(A) = a_12 C_12 + a_22 C_22 + a_32 C_32
       = (5)(80) + (8)(-119) + (12)(39)
       = 400 - 952 + 468
       = -84

Is there a relationship between det(AB), and det(A) and det(B)?
Yes. If [A] and [B] are square matrices of the same size, then

det(AB) = det(A) det(B)

Are there some other theorems that are important in finding the determinant of a square matrix?

Theorem 1: If a row or a column of an n×n matrix [A] is zero, then det(A) = 0.

Theorem 2: Let [A] be an n×n matrix. If a row is proportional to another row, then det(A) = 0.

Theorem 3: Let [A] be an n×n matrix. If a column is proportional to another column, then det(A) = 0.

Theorem 4: Let [A] be an n×n matrix. If a column or row of [A] is multiplied by k to result in matrix [B], then det(B) = k det(A).

Theorem 5: Let [A] be an n×n upper or lower triangular matrix. Then det(A) = Π (i = 1 to n) a_ii.

Example 7

What is the determinant of

[A] = [0  2  6  3]
      [0  3  7  4]
      [0  4  9  5]
      [0  5  2  1]

Solution

Since one of the columns (the first column in the above example) of [A] is zero, det(A) = 0.

Example 8

What is the determinant of

[A] = [2  1  6   4]
      [3  2  7   6]
      [5  4  2  10]
      [9  5  3  18]

Solution

det(A) is zero because the fourth column

[ 4]
[ 6]
[10]
[18]

is 2 times the first column

[2]
[3]
[5]
[9]

Example 9

If the determinant of

[A] = [ 25   5  1]
      [ 64   8  1]
      [144  12  1]

is -84, then what is the determinant of

[B] = [ 25  10.5  1]
      [ 64  16.8  1]
      [144  25.2  1]

Solution

Since the second column of [B] is 2.1 times the second column of [A],

det(B) = 2.1 det(A) = (2.1)(-84) = -176.4

Example 10

Given that the determinant of

[A] = [ 25   5  1]
      [ 64   8  1]
      [144  12  1]

is -84, what is the determinant of

[B] = [ 25   5     1   ]
      [  0  -4.8  -1.56]
      [144  12     1   ]

Solution

Since [B] is simply obtained by subtracting 2.56 times the first row of [A] from the second row of [A],

det(B) = det(A) = -84

Example 11

What is the determinant of

[A] = [25   5     1   ]
      [ 0  -4.8  -1.56]
      [ 0   0     0.7 ]

Solution

Since [A] is an upper triangular matrix,

det(A) = Π (i = 1 to 3) a_ii = (a_11)(a_22)(a_33) = (25)(-4.8)(0.7) = -84

Key Terms:

Transpose
Symmetric matrix
Skew-symmetric matrix
Trace of matrix
Determinant

Chapter 04.05
System of Equations

After reading this chapter, you should be able to:

1. set up simultaneous linear equations in matrix form and vice versa,
2. understand the concept of the inverse of a matrix,
3. know the difference between a consistent and inconsistent system of linear equations, and
4. learn that a system of linear equations can have a unique solution, no solution, or infinite solutions.

Matrix algebra is used for solving systems of equations. Can you illustrate this concept?

Matrix algebra is used to solve a system of simultaneous linear equations. In fact, for many mathematical procedures such as the solution of a set of nonlinear equations, interpolation, integration, and differential equations, the solutions reduce to a set of simultaneous linear equations. Let us illustrate with an example for interpolation.

Example 1

The upward velocity of a rocket is given at three different times in the following table.

Table 5.1. Velocity vs. time data for a rocket

Time, t (s)   Velocity, v (m/s)
 5            106.8
 8            177.2
12            279.2

The velocity data is approximated by a polynomial as

v(t) = at^2 + bt + c,  5 ≤ t ≤ 12.

Set up the equations in matrix form to find the coefficients a, b, c of the velocity profile.

Solution

The polynomial is going through three data points (t_1, v_1), (t_2, v_2), (t_3, v_3), where, from Table 5.1,

t_1 = 5,  v_1 = 106.8
t_2 = 8,  v_2 = 177.2
t_3 = 12, v_3 = 279.2

Requiring that v(t) = at^2 + bt + c passes through the three data points gives

v(t_1) = v_1 = a t_1^2 + b t_1 + c
v(t_2) = v_2 = a t_2^2 + b t_2 + c
v(t_3) = v_3 = a t_3^2 + b t_3 + c

Substituting the data (t_1, v_1), (t_2, v_2), (t_3, v_3) gives

a(5^2) + b(5) + c = 106.8
a(8^2) + b(8) + c = 177.2
a(12^2) + b(12) + c = 279.2

or

25a + 5b + c = 106.8
64a + 8b + c = 177.2
144a + 12b + c = 279.2

This set of equations can be rewritten in the matrix form as

[ 25a +  5b + c]   [106.8]
[ 64a +  8b + c] = [177.2]
[144a + 12b + c]   [279.2]

The above equation can be written as a linear combination as follows

  [ 25]     [ 5]     [1]   [106.8]
a [ 64] + b [ 8] + c [1] = [177.2]
  [144]     [12]     [1]   [279.2]

and further using matrix multiplication gives

[ 25   5  1] [a]   [106.8]
[ 64   8  1] [b] = [177.2]
[144  12  1] [c]   [279.2]

The above is an illustration of why matrix algebra is needed. The complete solution of this set of equations is given later in this chapter.

A general set of m linear equations and n unknowns,

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = c_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = c_2
..................................
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = c_m

can be rewritten in the matrix form as

[a_11  a_12 ... a_1n] [x_1]   [c_1]
[a_21  a_22 ... a_2n] [x_2]   [c_2]
[  .     .       .  ] [ . ] = [ . ]
[a_m1  a_m2 ... a_mn] [x_n]   [c_m]

Denoting the matrices by [A], [X], and [C], the system of equations is [A][X] = [C], where [A] is called the coefficient matrix, [C] is called the right hand side vector, and [X] is called the solution vector.

Sometimes [A][X] = [C] systems of equations are written in the augmented form, that is,

[A ⋮ C] = [a_11  a_12 ... a_1n ⋮ c_1]
          [a_21  a_22 ... a_2n ⋮ c_2]
          [  .     .       .   ⋮  . ]
          [a_m1  a_m2 ... a_mn ⋮ c_m]

A system of equations can be consistent or inconsistent. What does that mean?

A system of equations [A][X] = [C] is consistent if there is a solution, and it is inconsistent if there is no solution. However, a consistent system of equations does not mean a unique solution, that is, a consistent system of equations may have a unique solution or infinite solutions (Figure 5.1).

Figure 5.1. Consistent and inconsistent system of equations flow chart: a system [A][X] = [C] is either consistent (with a unique solution or infinite solutions) or inconsistent.

Example 2

Give examples of consistent and inconsistent systems of equations.
Solution

a) The system of equations

[2  4] [x]   [6]
[1  3] [y] = [4]

is a consistent system of equations as it has a unique solution, that is,

[x]   [1]
[y] = [1]

b) The system of equations

[2  4] [x]   [6]
[1  2] [y] = [3]

is also a consistent system of equations, but it has infinite solutions, as given as follows. Expanding the above set of equations,

2x + 4y = 6
x + 2y = 3

you can see that they are the same equation. Hence, any combination (x, y) that satisfies 2x + 4y = 6 is a solution. For example, (x, y) = (1, 1) is a solution. Other solutions include (x, y) = (0.5, 1.25), (x, y) = (0, 1.5), and so on.

c) The system of equations

[2  4] [x]   [6]
[1  2] [y] = [4]

is inconsistent as no solution exists.

How can one distinguish between a consistent and inconsistent system of equations?

A system of equations [A][X] = [C] is consistent if the rank of A is equal to the rank of the augmented matrix [A ⋮ C]. A system of equations [A][X] = [C] is inconsistent if the rank of A is less than the rank of the augmented matrix [A ⋮ C].

But, what do you mean by rank of a matrix?

The rank of a matrix is defined as the order of the largest square submatrix whose determinant is not zero.

Example 3

What is the rank of

[A] = [3  1  2]
      [2  0  5]
      [1  2  3]  ?

Solution

The largest square submatrix possible is of order 3, and that is [A] itself. Since det(A) = -23 ≠ 0, the rank of [A] is 3.

Example 4

What is the rank of

[A] = [3  1  2]
      [2  0  5]
      [5  1  7]  ?

Solution

The largest square submatrix of [A] is of order 3, and that is [A] itself. Since det(A) = 0, the rank of [A] is less than 3. The next largest square submatrix would be a 2×2 matrix. One of the square submatrices of [A] is

[B] = [3  1]
      [2  0]

and det(B) = -2 ≠ 0. Hence the rank of [A] is 2. There is no need to look at other 2×2 submatrices to establish that the rank of [A] is 2.
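The two rank calculations above can be checked numerically. In the hedged sketch below (NumPy's matrix_rank uses the singular value decomposition internally rather than determinants of submatrices, but it returns the same rank defined in this chapter), A3 and A4 are the matrices of Examples 3 and 4:

```python
import numpy as np

A3 = np.array([[3, 1, 2],
               [2, 0, 5],
               [1, 2, 3]], dtype=float)   # Example 3
A4 = np.array([[3, 1, 2],
               [2, 0, 5],
               [5, 1, 7]], dtype=float)   # Example 4 (row 3 = row 1 + row 2)

print(round(float(np.linalg.det(A3))))    # -23, nonzero, so rank is 3
print(np.linalg.matrix_rank(A3))          # 3
print(round(float(np.linalg.det(A4))))    # 0, so rank is less than 3
print(np.linalg.matrix_rank(A4))          # 2
```

Because matrix_rank works with floating-point singular values, it classifies a determinant that is merely tiny (round-off from an exactly singular matrix) as zero, which is what we want here.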
Example 5

How do I now use the concept of rank to find if

[ 25   5  1] [x_1]   [106.8]
[ 64   8  1] [x_2] = [177.2]
[144  12  1] [x_3]   [279.2]

is a consistent or inconsistent system of equations?

Solution

The coefficient matrix is

[A] = [ 25   5  1]
      [ 64   8  1]
      [144  12  1]

and the right hand side vector is

[C] = [106.8]
      [177.2]
      [279.2]

The augmented matrix is

[B] = [ 25   5  1 ⋮ 106.8]
      [ 64   8  1 ⋮ 177.2]
      [144  12  1 ⋮ 279.2]

Since there are no square submatrices of order 4, as [B] is a 3×4 matrix, the rank of [B] is at most 3. So let us look at the square submatrices of [B] of order 3; if any of these square submatrices have a determinant not equal to zero, then the rank is 3. For example, a submatrix of the augmented matrix [B] is

[D] = [ 25   5  1]
      [ 64   8  1]
      [144  12  1]

which has det(D) = -84 ≠ 0. Hence the rank of the augmented matrix [B] is 3. Since [A] = [D], the rank of [A] is 3. Since the rank of the augmented matrix [B] equals the rank of the coefficient matrix [A], the system of equations is consistent.

Example 6

Use the concept of rank of a matrix to find if

[25   5  1] [x_1]   [106.8]
[64   8  1] [x_2] = [177.2]
[89  13  2] [x_3]   [284.0]

is consistent or inconsistent.

Solution

The coefficient matrix is given by

[A] = [25   5  1]
      [64   8  1]
      [89  13  2]

and the right hand side is

[C] = [106.8]
      [177.2]
      [284.0]

The augmented matrix is

[B] = [25   5  1 : 106.8]
      [64   8  1 : 177.2]
      [89  13  2 : 284.0]

Since there are no square submatrices of order 4, as [B] is a 3×4 matrix, the rank of the augmented matrix [B] is at most 3. So let us look at square submatrices of the augmented matrix [B] of order 3 and see if any of these have determinants not equal to zero. For example, a square submatrix of the augmented matrix [B] is

[D] = [25   5  1]
      [64   8  1]
      [89  13  2]

which has det(D) = 0. This means we need to explore the other square submatrices of order 3 of the augmented matrix [B] and find their determinants. That is,

[E] = [ 5  1  106.8]
      [ 8  1  177.2]
      [13  2  284.0]

det(E) = 0

[F] = [25   5  106.8]
      [64   8  177.2]
      [89  13  284.0]

det(F) = 0

[G] = [25  1  106.8]
      [64  1  177.2]
      [89  2  284.0]

det(G) = 0

All the square submatrices of order 3×3 of the augmented matrix [B] have a zero determinant. So the rank of the augmented matrix [B] is less than 3. Is the rank of the augmented matrix [B] equal to 2? One of the 2×2 submatrices of the augmented matrix [B] is

[H] = [25  5]
      [64  8]

and det(H) = -120 ≠ 0. So the rank of the augmented matrix [B] is 2.

Now we need to find the rank of the coefficient matrix [A].

[A] = [25   5  1]
      [64   8  1]
      [89  13  2]

and det(A) = 0. So the rank of the coefficient matrix [A] is less than 3. A square submatrix of the coefficient matrix [A] is

[J] = [5  1]
      [8  1]

and det(J) = -3 ≠ 0. So the rank of the coefficient matrix [A] is 2. Hence, the rank of the coefficient matrix [A] equals the rank of the augmented matrix [B]. So the system of equations [A][X] = [C] is consistent.

Example 7

Use the concept of rank to find if

[25   5  1] [x_1]   [106.8]
[64   8  1] [x_2] = [177.2]
[89  13  2] [x_3]   [280.0]

is consistent or inconsistent.

Solution

The augmented matrix is

[B] = [25   5  1 : 106.8]
      [64   8  1 : 177.2]
      [89  13  2 : 280.0]

Since there are no square submatrices of order 4×4, as the augmented matrix [B] is a 3×4 matrix, the rank of the augmented matrix [B] is at most 3. So let us look at square submatrices of the augmented matrix [B] of order 3 and see if any of the 3×3 submatrices have a determinant not equal to zero. For example, a square submatrix of order 3×3 of [B] is

[D] = [25   5  1]
      [64   8  1]
      [89  13  2]

which has det(D) = 0. So we need to explore other square submatrices of the augmented matrix [B]:

[E] = [ 5  1  106.8]
      [ 8  1  177.2]
      [13  2  280.0]

det(E) = 12.0 ≠ 0

So the rank of the augmented matrix [B] is 3. The rank of the coefficient matrix [A] is 2 from the previous example. Since the rank of the coefficient matrix [A] is less than the rank of the augmented matrix [B], the system of equations is inconsistent.
Hence, no solution exists for [A][X] = [C].

If a solution exists, how do we know whether it is unique?

In a system of equations [A][X] = [C] that is consistent, the rank of the coefficient matrix [A] is the same as the rank of the augmented matrix [A ⋮ C]. If, in addition, the rank of the coefficient matrix [A] is the same as the number of unknowns, then the solution is unique; if the rank of the coefficient matrix [A] is less than the number of unknowns, then infinite solutions exist.

Figure 5.2. Flow chart of conditions for consistent and inconsistent systems of equations: for [A][X] = [C], the system is consistent if rank(A) = rank(A ⋮ C), with a unique solution if rank(A) = number of unknowns and infinite solutions if rank(A) < number of unknowns; the system is inconsistent if rank(A) < rank(A ⋮ C).

Example 8

We found that the following system of equations

[ 25   5  1] [x_1]   [106.8]
[ 64   8  1] [x_2] = [177.2]
[144  12  1] [x_3]   [279.2]

is a consistent system of equations. Does the system of equations have a unique solution or does it have infinite solutions?

Solution

The coefficient matrix is

[A] = [ 25   5  1]
      [ 64   8  1]
      [144  12  1]

and the right hand side is

[C] = [106.8]
      [177.2]
      [279.2]

While finding out whether the above equations were consistent in an earlier example, we found that the rank of the coefficient matrix [A] equals the rank of the augmented matrix [A ⋮ C] equals 3. The solution is unique, as the number of unknowns = 3 = rank of [A].

Example 9

We found that the following system of equations

[25   5  1] [x_1]   [106.8]
[64   8  1] [x_2] = [177.2]
[89  13  2] [x_3]   [284.0]

is a consistent system of equations. Is the solution unique or does it have infinite solutions?

Solution

While finding out whether the above equations were consistent, we found that the rank of the coefficient matrix [A] equals the rank of the augmented matrix [A ⋮ C] equals 2. Since the rank of [A] = 2 < number of unknowns = 3, infinite solutions exist.
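The rank comparisons used in Examples 5 through 9 can be collected into one helper. This is our own sketch (the function name classify is not from the text), and it uses NumPy's matrix_rank in place of the submatrix-determinant definition of rank:

```python
import numpy as np

def classify(A, C):
    """Classify [A][X] = [C] by comparing rank(A), rank(A|C), and unknowns."""
    A = np.asarray(A, dtype=float)
    C = np.asarray(C, dtype=float).reshape(-1, 1)
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(np.hstack([A, C]))
    if rank_A < rank_aug:
        return "inconsistent"                    # no solution
    if rank_A == A.shape[1]:                     # rank equals unknowns
        return "consistent, unique solution"
    return "consistent, infinite solutions"

A1 = [[25, 5, 1], [64, 8, 1], [144, 12, 1]]
A2 = [[25, 5, 1], [64, 8, 1], [89, 13, 2]]
print(classify(A1, [106.8, 177.2, 279.2]))   # unique (Example 8)
print(classify(A2, [106.8, 177.2, 284.0]))   # infinite (Example 9)
print(classify(A2, [106.8, 177.2, 280.0]))   # inconsistent (Example 7)
```

The three calls reproduce the three outcomes worked out by hand in the examples above.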
If we have more equations than unknowns in [A][X] = [C], does it mean the system is inconsistent?

No, it depends on the rank of the augmented matrix [A ⋮ C] and the rank of [A].

a) For example,

[ 25   5  1]         [106.8]
[ 64   8  1] [x_1]   [177.2]
[144  12  1] [x_2] = [279.2]
[ 89  13  2] [x_3]   [284.0]

is consistent, since

rank of augmented matrix = 3
rank of coefficient matrix = 3

Now, since the rank of [A] = 3 = number of unknowns, the solution is not only consistent but also unique.

b) For example,

[ 25   5  1]         [106.8]
[ 64   8  1] [x_1]   [177.2]
[144  12  1] [x_2] = [279.2]
[ 89  13  2] [x_3]   [280.0]

is inconsistent, since

rank of augmented matrix = 4
rank of coefficient matrix = 3

c) For example,

[25   5  1]         [106.8]
[64   8  1] [x_1]   [177.2]
[50  10  2] [x_2] = [213.6]
[89  13  2] [x_3]   [284.0]

is consistent, since

rank of augmented matrix = 2
rank of coefficient matrix = 2

But since the rank of [A] = 2 < number of unknowns = 3, infinite solutions exist.

Consistent systems of equations can only have a unique solution or infinite solutions. Can a system of equations have more than one but not an infinite number of solutions?

No, you can only have either a unique solution or infinite solutions. Let us suppose [A][X] = [C] has two solutions [Y] and [Z], so that

[A][Y] = [C]
[A][Z] = [C]

If r is a constant, then from the two equations

r[A][Y] = r[C]
(1 - r)[A][Z] = (1 - r)[C]

Adding the above two equations gives

r[A][Y] + (1 - r)[A][Z] = r[C] + (1 - r)[C]
[A](r[Y] + (1 - r)[Z]) = [C]

Hence r[Y] + (1 - r)[Z] is a solution of [A][X] = [C]. Since r is any scalar, there are infinite solutions of [A][X] = [C] of the form r[Y] + (1 - r)[Z].

Can you divide two matrices?

If [A][B] = [C] is defined, it might seem intuitive that [A] = [C]/[B], but matrix division is not defined like that. However, an inverse of a matrix can be defined for certain types of square matrices.
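Before moving on to inverses, the argument above, that two distinct solutions [Y] and [Z] of a consistent system generate a solution r[Y] + (1 - r)[Z] for every scalar r, can be spot-checked numerically. The sketch below reuses system (b) of Example 2, whose solutions (1, 1) and (0, 1.5) appeared earlier; the particular values of r are arbitrary:

```python
import numpy as np

A = np.array([[2.0, 4.0],
              [1.0, 2.0]])
C = np.array([6.0, 3.0])

Y = np.array([1.0, 1.0])    # one solution of [A][X] = [C]
Z = np.array([0.0, 1.5])    # another solution

# Any weighted combination r*Y + (1 - r)*Z is again a solution,
# so a consistent system cannot have exactly two solutions.
for r in (0.5, 2.0, -3.7):
    X = r * Y + (1 - r) * Z
    assert np.allclose(A @ X, C)
print("r*Y + (1 - r)*Z solves the system for every r tested")
```

The assertion holds for any r because A(rY + (1 - r)Z) = rC + (1 - r)C = C, exactly the algebra shown above.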
The inverse of a square matrix [A], if it exists, is denoted by [A]^-1 such that

[A][A]^-1 = [I] = [A]^-1[A]

where [I] is the identity matrix. In other words, let [A] be a square matrix. If [B] is another square matrix of the same size such that [B][A] = [I], then [B] is the inverse of [A]. [A] is then said to be invertible or nonsingular. If [A]^-1 does not exist, [A] is called noninvertible or singular.

If [A] and [B] are two n×n matrices such that [B][A] = [I], then these statements are also true:

- [B] is the inverse of [A]
- [A] is the inverse of [B]
- [A] and [B] are both invertible
- [A][B] = [I]
- [A] and [B] are both nonsingular
- all columns of [A] and [B] are linearly independent
- all rows of [A] and [B] are linearly independent.

Example 10

Determine if

[B] = [3  2]
      [5  3]

is the inverse of

[A] = [-3   2]
      [ 5  -3]

Solution

[B][A] = [3  2] [-3   2] = [1  0] = [I]
         [5  3] [ 5  -3]   [0  1]

Since [B][A] = [I], [B] is the inverse of [A] and [A] is the inverse of [B]. But we can also show that

[A][B] = [-3   2] [3  2] = [1  0] = [I]
         [ 5  -3] [5  3]   [0  1]

to show that [A] is the inverse of [B].

Can I use the concept of the inverse of a matrix to find the solution of a set of equations [A][X] = [C]?

Yes, if the number of equations is the same as the number of unknowns, the coefficient matrix [A] is a square matrix. Given

[A][X] = [C]

then, if [A]^-1 exists, multiplying both sides by [A]^-1 gives

[A]^-1[A][X] = [A]^-1[C]
[I][X] = [A]^-1[C]
[X] = [A]^-1[C]

This implies that if we are able to find [A]^-1, the solution vector of [A][X] = [C] is simply a multiplication of [A]^-1 and the right hand side vector [C].

How do I find the inverse of a matrix?
If [A] is an n×n matrix, then [A]^-1 is an n×n matrix, and according to the definition of the inverse of a matrix,

[A][A]^-1 = [I]

Denoting

[A] = [a_11  a_12 ... a_1n]
      [a_21  a_22 ... a_2n]
      [  .     .       .  ]
      [a_n1  a_n2 ... a_nn]

[A]^-1 = [a'_11  a'_12 ... a'_1n]
         [a'_21  a'_22 ... a'_2n]
         [  .      .        .   ]
         [a'_n1  a'_n2 ... a'_nn]

[I] = [1  0 ... 0]
      [0  1 ... 0]
      [.  .     .]
      [0  0 ... 1]

and using the definition of matrix multiplication, the first column of the [A]^-1 matrix can then be found by solving

[a_11  a_12 ... a_1n] [a'_11]   [1]
[a_21  a_22 ... a_2n] [a'_21]   [0]
[  .     .       .  ] [  .  ] = [.]
[a_n1  a_n2 ... a_nn] [a'_n1]   [0]

Similarly, one can find the other columns of the [A]^-1 matrix by changing the right hand side accordingly.

Example 11

The upward velocity of the rocket is given by

Table 5.2. Velocity vs. time data for a rocket

Time, t (s)   Velocity, v (m/s)
 5            106.8
 8            177.2
12            279.2

In an earlier example, we wanted to approximate the velocity profile by

v(t) = at^2 + bt + c,  5 ≤ t ≤ 12

We found that the coefficients a, b, c in v(t) are given by

[ 25   5  1] [a]   [106.8]
[ 64   8  1] [b] = [177.2]
[144  12  1] [c]   [279.2]

First, find the inverse of

[A] = [ 25   5  1]
      [ 64   8  1]
      [144  12  1]

and then use the definition of the inverse to find the coefficients a, b, c.

Solution

If

[A]^-1 = [a'_11  a'_12  a'_13]
         [a'_21  a'_22  a'_23]
         [a'_31  a'_32  a'_33]

is the inverse of [A], then

[ 25   5  1] [a'_11  a'_12  a'_13]   [1  0  0]
[ 64   8  1] [a'_21  a'_22  a'_23] = [0  1  0]
[144  12  1] [a'_31  a'_32  a'_33]   [0  0  1]

gives three sets of equations:

[ 25   5  1] [a'_11]   [1]
[ 64   8  1] [a'_21] = [0]
[144  12  1] [a'_31]   [0]

[ 25   5  1] [a'_12]   [0]
[ 64   8  1] [a'_22] = [1]
[144  12  1] [a'_32]   [0]

[ 25   5  1] [a'_13]   [0]
[ 64   8  1] [a'_23] = [0]
[144  12  1] [a'_33]   [1]

Solving the above three sets of equations separately gives

[a'_11]   [ 0.04762]     [a'_12]   [-0.08333]     [a'_13]   [ 0.03571]
[a'_21] = [-0.9524 ]     [a'_22] = [ 1.417  ]     [a'_23] = [-0.4643 ]
[a'_31]   [ 4.571  ]     [a'_32]   [-5.000  ]     [a'_33]   [ 1.429  ]

Hence

[A]^-1 = [ 0.04762  -0.08333   0.03571]
         [-0.9524    1.417    -0.4643 ]
         [ 4.571    -5.000     1.429  ]

Now [A][X] = [C], where

[X] = [a]        [C] = [106.8]
      [b]              [177.2]
      [c]              [279.2]

Using the definition of [A]^-1,

[A]^-1[A][X] = [A]^-1[C]
[X] = [A]^-1[C]

    = [ 0.04762  -0.08333   0.03571] [106.8]
      [-0.9524    1.417    -0.4643 ] [177.2]
      [ 4.571    -5.000     1.429  ] [279.2]

Hence

[a]   [ 0.2905]
[b] = [19.69  ]
[c]   [ 1.086 ]

So

v(t) = 0.2905 t^2 + 19.69 t + 1.086,  5 ≤ t ≤ 12

Is there another way to find the inverse of a matrix?

For small matrices, the inverse of an invertible matrix can be found by

[A]^-1 = (1 / det(A)) adj(A)

where

adj(A) = [C_11  C_12 ... C_1n]^T
         [C_21  C_22 ... C_2n]
         [  .     .       .  ]
         [C_n1  C_n2 ... C_nn]

where the C_ij are the cofactors of a_ij. The matrix

[C_11  C_12 ... C_1n]
[C_21  C_22 ... C_2n]
[  .     .       .  ]
[C_n1  C_n2 ... C_nn]

itself is called the matrix of cofactors from [A]. Cofactors are defined in Chapter 04.04.

Example 12

Find the inverse of

[A] = [ 25   5  1]
      [ 64   8  1]
      [144  12  1]

Solution

From Example 6 in Chapter 04.04, we found

det(A) = -84

Next we need to find the adjoint of [A]. The cofactors of [A] are found as follows. The minor of entry a_11 is

M_11 = det [ 8  1] = -4
           [12  1]

The cofactor of entry a_11 is

C_11 = (-1)^(1+1) M_11 = M_11 = -4

The minor of entry a_12 is

M_12 = det [ 64  1] = -80
           [144  1]

The cofactor of entry a_12 is

C_12 = (-1)^(1+2) M_12 = -M_12 = -(-80) = 80

Similarly,

C_13 = -384
C_21 = 7
C_22 = -119
C_23 = 420
C_31 = -3
C_32 = 39
C_33 = -120

Hence the matrix of cofactors of [A] is

[C] = [  -4    80  -384]
      [   7  -119   420]
      [  -3    39  -120]

The adjoint of matrix [A] is [C]^T,

adj(A) = [C]^T = [  -4     7    -3]
                 [  80  -119    39]
                 [-384   420  -120]

Hence

[A]^-1 = (1 / det(A)) adj(A)

       = (1 / (-84)) [  -4     7    -3]
                     [  80  -119    39]
                     [-384   420  -120]

       = [ 0.04762  -0.08333   0.03571]
         [-0.9524    1.417    -0.4643 ]
         [ 4.571    -5.000     1.429  ]

If the inverse of a square matrix [A] exists, is it unique?

Yes, the inverse of a square matrix is unique, if it exists. The proof is as follows. Assume that the inverse of [A] is [B] and, if this inverse is not unique, let another inverse of [A] exist, called [C]. If [B] is the inverse of [A], then

[B][A] = [I]

Multiply both sides by [C]:

[B][A][C] = [I][C]
[B][A][C] = [C]

Since [C] is an inverse of [A],

[A][C] = [I]

so

[B][I] = [C]
[B] = [C]

This shows that [B] and [C] are the same. So the inverse of [A] is unique.

Key Terms:

Consistent system
Inconsistent system
Infinite solutions
Unique solution
Rank
Inverse

Chapter 04.06
Gaussian Elimination

After reading this chapter, you should be able to:

1. solve a set of simultaneous linear equations using Naïve Gauss elimination,
2. learn the pitfalls of the Naïve Gauss elimination method,
3. understand the effect of round-off error when solving a set of linear equations with the Naïve Gauss elimination method,
4. learn how to modify the Naïve Gauss elimination method to the Gaussian elimination with partial pivoting method to avoid the pitfalls of the former method,
5. find the determinant of a square matrix using Gaussian elimination, and
6. understand the relationship between the determinant of a coefficient matrix and the solution of simultaneous linear equations.

How is a set of equations solved numerically?

One of the most popular techniques for solving simultaneous linear equations is the Gaussian elimination method. The approach is designed to solve a general set of n equations and n unknowns

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
..................................
a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + ... + a_nn x_n = b_n

Gaussian elimination consists of two steps:

1. Forward Elimination of Unknowns: In this step, the unknowns are eliminated in each equation starting with the first equation. This way, the equations are reduced to one equation and one unknown in each equation.
2. Back Substitution: In this step, starting from the last equation, each of the unknowns is found.

Forward Elimination of Unknowns:

In the first step of forward elimination, the first unknown, x_1, is eliminated from all rows below the first row. The first equation is selected as the pivot equation to eliminate x_1. So, to eliminate x_1 in the second equation, one divides the first equation by a_11 (hence called the pivot element) and then multiplies it by a_21. This is the same as multiplying the first equation by a_21/a_11 to give

a_21 x_1 + (a_21/a_11) a_12 x_2 + ... + (a_21/a_11) a_1n x_n = (a_21/a_11) b_1

Now, this equation can be subtracted from the second equation to give

(a_22 - (a_21/a_11) a_12) x_2 + ... + (a_2n - (a_21/a_11) a_1n) x_n = b_2 - (a_21/a_11) b_1

or

a'_22 x_2 + ... + a'_2n x_n = b'_2

where

a'_22 = a_22 - (a_21/a_11) a_12
...
a'_2n = a_2n - (a_21/a_11) a_1n

This procedure of eliminating x_1 is now repeated for the third equation to the n-th equation to reduce the set of equations as

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
          a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
          a'_32 x_2 + a'_33 x_3 + ... + a'_3n x_n = b'_3
          ..................................
          a'_n2 x_2 + a'_n3 x_3 + ... + a'_nn x_n = b'_n

This is the end of the first step of forward elimination. Now, for the second step of forward elimination, we start with the second equation as the pivot equation and a'_22 as the pivot element. So, to eliminate x_2 in the third equation, one divides the second equation by a'_22 (the pivot element) and then multiplies it by a'_32. This is the same as multiplying the second equation by a'_32/a'_22 and subtracting it from the third equation.
This makes the coefficient of x_2 zero in the third equation. The same procedure is now repeated for the fourth equation through the n-th equation to give

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
          a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
                      a''_33 x_3 + ... + a''_3n x_n = b''_3
                      ..................................
                      a''_n3 x_3 + ... + a''_nn x_n = b''_n

The next steps of forward elimination are conducted by using the third equation as the pivot equation, and so on. That is, there will be a total of n-1 steps of forward elimination. At the end of n-1 steps of forward elimination, we get a set of equations that looks like

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
          a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
                      a''_33 x_3 + ... + a''_3n x_n = b''_3
                      ..................................
                                  a_nn^(n-1) x_n = b_n^(n-1)

Back Substitution:

Now the equations are solved starting from the last equation, as it has only one unknown:

x_n = b_n^(n-1) / a_nn^(n-1)

Then the second last equation, that is, the (n-1)-th equation, has two unknowns, x_n and x_(n-1), but x_n is already known. This reduces the (n-1)-th equation also to one unknown. Back substitution hence can be represented for all equations by the formula

x_i = ( b_i^(i-1) - Σ (j = i+1 to n) a_ij^(i-1) x_j ) / a_ii^(i-1),  for i = n-1, n-2, ..., 1

and

x_n = b_n^(n-1) / a_nn^(n-1)

Example 1

The upward velocity of a rocket is given at three different times in Table 1.

Table 1. Velocity vs. time data.

Time, t (s)   Velocity, v (m/s)
 5            106.8
 8            177.2
12            279.2

The velocity data is approximated by a polynomial as

v(t) = a_1 t^2 + a_2 t + a_3,  5 ≤ t ≤ 12

The coefficients a_1, a_2, a_3 for the above expression are given by

[ 25   5  1] [a_1]   [106.8]
[ 64   8  1] [a_2] = [177.2]
[144  12  1] [a_3]   [279.2]

Find the values of a_1, a_2, a_3 using the Naïve Gauss elimination method. Find the velocity at t = 6, 7.5, 9, 11 seconds.
Solution

Forward Elimination of Unknowns
Since there are three equations, there will be two steps of forward elimination of unknowns.

First step
Divide Row 1 by 25 and then multiply it by 64, that is, multiply Row 1 by $64/25 = 2.56$.

$[25 \;\; 5 \;\; 1 \;|\; 106.8] \times 2.56$ gives Row 1 as $[64 \;\; 12.8 \;\; 2.56 \;|\; 273.408]$

Subtract the result from Row 2

$[64 \;\; 8 \;\; 1 \;|\; 177.2] - [64 \;\; 12.8 \;\; 2.56 \;|\; 273.408] = [0 \;\; {-4.8} \;\; {-1.56} \;|\; {-96.208}]$

to get the resulting equations as

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 144 & 12 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.208 \\ 279.2 \end{bmatrix}$$

Divide Row 1 by 25 and then multiply it by 144, that is, multiply Row 1 by $144/25 = 5.76$.

$[25 \;\; 5 \;\; 1 \;|\; 106.8] \times 5.76$ gives Row 1 as $[144 \;\; 28.8 \;\; 5.76 \;|\; 615.168]$

Subtract the result from Row 3

$[144 \;\; 12 \;\; 1 \;|\; 279.2] - [144 \;\; 28.8 \;\; 5.76 \;|\; 615.168] = [0 \;\; {-16.8} \;\; {-4.76} \;|\; {-335.968}]$

to get the resulting equations as

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & -16.8 & -4.76 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.208 \\ -335.968 \end{bmatrix}$$

Second step
We now divide Row 2 by $-4.8$ and then multiply by $-16.8$, that is, multiply Row 2 by $-16.8/{-4.8} = 3.5$.

$[0 \;\; {-4.8} \;\; {-1.56} \;|\; {-96.208}] \times 3.5$ gives Row 2 as $[0 \;\; {-16.8} \;\; {-5.46} \;|\; {-336.728}]$

Subtract the result from Row 3

$[0 \;\; {-16.8} \;\; {-4.76} \;|\; {-335.968}] - [0 \;\; {-16.8} \;\; {-5.46} \;|\; {-336.728}] = [0 \;\; 0 \;\; 0.7 \;|\; 0.76]$

to get the resulting equations as

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.208 \\ 0.76 \end{bmatrix}$$

Back substitution
From the third equation

$$0.7a_3 = 0.76$$
$$a_3 = \frac{0.76}{0.7} = 1.08571$$

Substituting the value of $a_3$ in the second equation,

$$-4.8a_2 - 1.56a_3 = -96.208$$
$$a_2 = \frac{-96.208 + 1.56a_3}{-4.8} = \frac{-96.208 + 1.56 \times 1.08571}{-4.8} = 19.6905$$

Substituting the value of $a_2$ and $a_3$ in the first equation,

$$25a_1 + 5a_2 + a_3 = 106.8$$
$$a_1 = \frac{106.8 - 5a_2 - a_3}{25} = \frac{106.8 - 5 \times 19.6905 - 1.08571}{25} = 0.290472$$

Hence the solution vector is

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0.290472 \\ 19.6905 \\ 1.08571 \end{bmatrix}$$

The polynomial that passes through the three data points is then

$$v(t) = a_1t^2 + a_2t + a_3 = 0.290472t^2 + 19.6905t + 1.08571, \quad 5 \le t \le 12$$

Since we want to find the velocity at $t = 6, 7.5, 9$ and $11$ seconds, we could simply substitute each value of $t$ in $v(t) = 0.290472t^2 + 19.6905t + 1.08571$ and find the corresponding velocity. For example, at $t = 6$

$$v(6) = 0.290472(6)^2 + 19.6905(6) + 1.08571 = 129.686 \text{ m/s}$$

However we could also find all the needed values of velocity at $t$ = 6, 7.5, 9, 11 seconds using matrix multiplication.

$$v(t) = \begin{bmatrix} 0.290472 & 19.6905 & 1.08571 \end{bmatrix}\begin{bmatrix} t^2 \\ t \\ 1 \end{bmatrix}$$

So if we want to find $v(6), v(7.5), v(9), v(11)$, it is given by

$$\begin{bmatrix} v(6) & v(7.5) & v(9) & v(11) \end{bmatrix} = \begin{bmatrix} 0.290472 & 19.6905 & 1.08571 \end{bmatrix}\begin{bmatrix} 6^2 & 7.5^2 & 9^2 & 11^2 \\ 6 & 7.5 & 9 & 11 \\ 1 & 1 & 1 & 1 \end{bmatrix}$$

$$= \begin{bmatrix} 0.290472 & 19.6905 & 1.08571 \end{bmatrix}\begin{bmatrix} 36 & 56.25 & 81 & 121 \\ 6 & 7.5 & 9 & 11 \\ 1 & 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 129.686 & 165.104 & 201.828 & 252.828 \end{bmatrix}$$

$v(6) = 129.686$ m/s
$v(7.5) = 165.104$ m/s
$v(9) = 201.828$ m/s
$v(11) = 252.828$ m/s

Example 2
Use Naïve Gauss elimination to solve

$$20x_1 + 15x_2 + 10x_3 = 45$$
$$-3x_1 - 2.249x_2 + 7x_3 = 1.751$$
$$5x_1 + x_2 + 3x_3 = 9$$

Use six significant digits with chopping in your calculations.

Solution
Working in the matrix form

$$\begin{bmatrix} 20 & 15 & 10 \\ -3 & -2.249 & 7 \\ 5 & 1 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 1.751 \\ 9 \end{bmatrix}$$

Forward Elimination of Unknowns

First step
Divide Row 1 by 20 and then multiply it by $-3$, that is, multiply Row 1 by $-3/20 = -0.15$.

$[20 \;\; 15 \;\; 10 \;|\; 45] \times (-0.15)$ gives Row 1 as $[{-3} \;\; {-2.25} \;\; {-1.5} \;|\; {-6.75}]$

Subtract the result from Row 2

$[{-3} \;\; {-2.249} \;\; 7 \;|\; 1.751] - [{-3} \;\; {-2.25} \;\; {-1.5} \;|\; {-6.75}] = [0 \;\; 0.001 \;\; 8.5 \;|\; 8.501]$

to get the resulting equations as

$$\begin{bmatrix} 20 & 15 & 10 \\ 0 & 0.001 & 8.5 \\ 5 & 1 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 8.501 \\ 9 \end{bmatrix}$$

Divide Row 1 by 20 and then multiply it by 5, that is, multiply Row 1 by $5/20 = 0.25$.

$[20 \;\; 15 \;\; 10 \;|\; 45] \times 0.25$ gives Row 1 as $[5 \;\; 3.75 \;\; 2.5 \;|\; 11.25]$

Subtract the result from Row 3

$[5 \;\; 1 \;\; 3 \;|\; 9] - [5 \;\; 3.75 \;\; 2.5 \;|\; 11.25] = [0 \;\; {-2.75} \;\; 0.5 \;|\; {-2.25}]$

to get the resulting equations as

$$\begin{bmatrix} 20 & 15 & 10 \\ 0 & 0.001 & 8.5 \\ 0 & -2.75 & 0.5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 8.501 \\ -2.25 \end{bmatrix}$$

Second step
Now for the second step of forward elimination, we will use Row 2 as the pivot equation and eliminate Row 3: Column 2. Divide Row 2 by 0.001 and then multiply it by $-2.75$, that is, multiply Row 2 by $-2.75/0.001 = -2750$.

$[0 \;\; 0.001 \;\; 8.5 \;|\; 8.501] \times (-2750)$ gives Row 2 as $[0 \;\; {-2.75} \;\; {-23375} \;|\; {-23377.75}]$

Rewriting within 6 significant digits with chopping

$[0 \;\; {-2.75} \;\; {-23375} \;|\; {-23377.7}]$

Subtract the result from Row 3

$[0 \;\; {-2.75} \;\; 0.5 \;|\; {-2.25}] - [0 \;\; {-2.75} \;\; {-23375} \;|\; {-23377.7}] = [0 \;\; 0 \;\; 23375.5 \;|\; 23375.45]$

Rewriting within 6 significant digits with chopping

$[0 \;\; 0 \;\; 23375.5 \;|\; 23375.4]$

to get the resulting equations as

$$\begin{bmatrix} 20 & 15 & 10 \\ 0 & 0.001 & 8.5 \\ 0 & 0 & 23375.5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 8.501 \\ 23375.4 \end{bmatrix}$$

This is the end of the forward elimination steps.

Back substitution
We can now solve the above equations by back substitution. From the third equation,

$$23375.5x_3 = 23375.4$$
$$x_3 = \frac{23375.4}{23375.5} = 0.999995$$

Substituting the value of $x_3$ in the second equation

$$0.001x_2 + 8.5x_3 = 8.501$$
$$x_2 = \frac{8.501 - 8.5x_3}{0.001} = \frac{8.501 - 8.5 \times 0.999995}{0.001} = \frac{8.501 - 8.49995}{0.001} = \frac{0.00105}{0.001} = 1.05$$

Substituting the value of $x_3$ and $x_2$ in the first equation,

$$20x_1 + 15x_2 + 10x_3 = 45$$
$$x_1 = \frac{45 - 15x_2 - 10x_3}{20} = \frac{45 - 15 \times 1.05 - 10 \times 0.999995}{20} = \frac{29.25 - 9.99995}{20} = \frac{19.2500}{20} = 0.9625$$

Hence the solution is

$$[X] = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.9625 \\ 1.05 \\ 0.999995 \end{bmatrix}$$

Compare this with the exact solution of

$$[X] = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$

Are there any pitfalls of the Naïve Gauss elimination method?
Yes, there are two pitfalls of the Naïve Gauss elimination method.
Division by zero: It is possible for division by zero to occur during the beginning of the $n-1$ steps of forward elimination. For example

$$5x_2 + 6x_3 = 11$$
$$4x_1 + 5x_2 + 7x_3 = 16$$
$$9x_1 + 2x_2 + 3x_3 = 15$$

will result in division by zero in the first step of forward elimination as the coefficient of $x_1$ in the first equation is zero as is evident when we write the equations in matrix form.

$$\begin{bmatrix} 0 & 5 & 6 \\ 4 & 5 & 7 \\ 9 & 2 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 11 \\ 16 \\ 15 \end{bmatrix}$$

But what about the equations below: Is division by zero a problem?

$$5x_1 + 6x_2 + 7x_3 = 18$$
$$10x_1 + 12x_2 + 3x_3 = 25$$
$$20x_1 + 17x_2 + 19x_3 = 56$$

Written in matrix form,

$$\begin{bmatrix} 5 & 6 & 7 \\ 10 & 12 & 3 \\ 20 & 17 & 19 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 18 \\ 25 \\ 56 \end{bmatrix}$$

there is no issue of division by zero in the first step of forward elimination. The pivot element is the coefficient of $x_1$ in the first equation, 5, and that is a non-zero number. However, at the end of the first step of forward elimination, we get the following equations in matrix form

$$\begin{bmatrix} 5 & 6 & 7 \\ 0 & 0 & -11 \\ 0 & -7 & -9 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 18 \\ -11 \\ -16 \end{bmatrix}$$

Now at the beginning of the 2nd step of forward elimination, the coefficient of $x_2$ in Equation 2 would be used as the pivot element. That element is zero and hence would create the division by zero problem. So it is important to consider that the possibility of division by zero can occur at the beginning of any step of forward elimination.

Round-off error: The Naïve Gauss elimination method is prone to round-off errors. This is true when there are large numbers of equations as errors propagate. Also, if there is subtraction of numbers from each other, it may create large errors. See the example below.

Example 3
Remember Example 2 where we used Naïve Gauss elimination to solve

$$20x_1 + 15x_2 + 10x_3 = 45$$
$$-3x_1 - 2.249x_2 + 7x_3 = 1.751$$
$$5x_1 + x_2 + 3x_3 = 9$$

using six significant digits with chopping in your calculations? Repeat the problem, but now use five significant digits with chopping in your calculations.
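Arithmetic with a fixed number of significant digits and chopping, as used in Examples 2 and 3, can be mimicked with a small helper. This is our own sketch (the name `chop` is not from the text), and it is only approximate since the underlying floats are binary rather than decimal:

```python
import math

def chop(x, digits):
    """Truncate x to the given number of significant decimal digits (chopping, no rounding)."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))      # position of the leading digit
    shift = 10.0 ** (digits - 1 - exponent)        # scale so the kept digits sit left of the point
    return math.copysign(math.floor(abs(x) * shift) / shift, x)

# -2750 * 8.501 = -23377.75 chops to -23377 with five significant digits,
# but to -23377.7 with six, which is why Examples 2 and 3 diverge at this step.
```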
Solution
Writing in the matrix form

$$\begin{bmatrix} 20 & 15 & 10 \\ -3 & -2.249 & 7 \\ 5 & 1 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 1.751 \\ 9 \end{bmatrix}$$

Forward Elimination of Unknowns

First step
Divide Row 1 by 20 and then multiply it by $-3$, that is, multiply Row 1 by $-3/20 = -0.15$.

$[20 \;\; 15 \;\; 10 \;|\; 45] \times (-0.15)$ gives Row 1 as $[{-3} \;\; {-2.25} \;\; {-1.5} \;|\; {-6.75}]$

Subtract the result from Row 2

$[{-3} \;\; {-2.249} \;\; 7 \;|\; 1.751] - [{-3} \;\; {-2.25} \;\; {-1.5} \;|\; {-6.75}] = [0 \;\; 0.001 \;\; 8.5 \;|\; 8.501]$

to get the resulting equations as

$$\begin{bmatrix} 20 & 15 & 10 \\ 0 & 0.001 & 8.5 \\ 5 & 1 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 8.501 \\ 9 \end{bmatrix}$$

Divide Row 1 by 20 and then multiply it by 5, that is, multiply Row 1 by $5/20 = 0.25$.

$[20 \;\; 15 \;\; 10 \;|\; 45] \times 0.25$ gives Row 1 as $[5 \;\; 3.75 \;\; 2.5 \;|\; 11.25]$

Subtract the result from Row 3

$[5 \;\; 1 \;\; 3 \;|\; 9] - [5 \;\; 3.75 \;\; 2.5 \;|\; 11.25] = [0 \;\; {-2.75} \;\; 0.5 \;|\; {-2.25}]$

to get the resulting equations as

$$\begin{bmatrix} 20 & 15 & 10 \\ 0 & 0.001 & 8.5 \\ 0 & -2.75 & 0.5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 8.501 \\ -2.25 \end{bmatrix}$$

Second step
Now for the second step of forward elimination, we will use Row 2 as the pivot equation and eliminate Row 3: Column 2. Divide Row 2 by 0.001 and then multiply it by $-2.75$, that is, multiply Row 2 by $-2.75/0.001 = -2750$.

$[0 \;\; 0.001 \;\; 8.5 \;|\; 8.501] \times (-2750)$ gives Row 2 as $[0 \;\; {-2.75} \;\; {-23375} \;|\; {-23377.75}]$

Rewriting within 5 significant digits with chopping

$[0 \;\; {-2.75} \;\; {-23375} \;|\; {-23377}]$

Subtract the result from Row 3

$[0 \;\; {-2.75} \;\; 0.5 \;|\; {-2.25}] - [0 \;\; {-2.75} \;\; {-23375} \;|\; {-23377}] = [0 \;\; 0 \;\; 23375.5 \;|\; 23374.75]$

Rewriting within 5 significant digits with chopping

$[0 \;\; 0 \;\; 23375 \;|\; 23374]$

to get the resulting equations as

$$\begin{bmatrix} 20 & 15 & 10 \\ 0 & 0.001 & 8.5 \\ 0 & 0 & 23375 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 8.501 \\ 23374 \end{bmatrix}$$

This is the end of the forward elimination steps.

Back substitution
We can now solve the above equations by back substitution. From the third equation,

$$23375x_3 = 23374$$
$$x_3 = \frac{23374}{23375} = 0.99995$$

Substituting the value of $x_3$ in the second equation

$$0.001x_2 + 8.5x_3 = 8.501$$
$$x_2 = \frac{8.501 - 8.5x_3}{0.001} = \frac{8.501 - 8.5 \times 0.99995}{0.001} = \frac{8.501 - 8.4995}{0.001} = \frac{0.0015}{0.001} = 1.5$$

Substituting the value of $x_3$ and $x_2$ in the first equation,

$$20x_1 + 15x_2 + 10x_3 = 45$$
$$x_1 = \frac{45 - 15x_2 - 10x_3}{20} = \frac{45 - 15 \times 1.5 - 10 \times 0.99995}{20} = \frac{22.5 - 9.9995}{20} = \frac{12.500}{20} = 0.625$$

Hence the solution is

$$[X] = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.625 \\ 1.5 \\ 0.99995 \end{bmatrix}$$

Compare this with the exact solution of

$$[X] = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$

What are some techniques for improving the Naïve Gauss elimination method?
As seen in Example 3, round-off errors were large when five significant digits were used as opposed to six significant digits. One method of decreasing the round-off error would be to use more significant digits, that is, use double or quad precision for representing the numbers. However, this would not avoid possible division by zero errors in the Naïve Gauss elimination method. To avoid division by zero as well as reduce (not eliminate) round-off error, Gaussian elimination with partial pivoting is the method of choice.

How does Gaussian elimination with partial pivoting differ from Naïve Gauss elimination?
The two methods are the same, except in the beginning of each step of forward elimination, a row switching is done based on the following criterion. If there are $n$ equations, then there are $n-1$ forward elimination steps. At the beginning of the $k$th step of forward elimination, one finds the maximum of

$$|a_{kk}|, \; |a_{k+1,k}|, \; \ldots, \; |a_{nk}|$$

Then if the maximum of these values is $|a_{pk}|$ in the $p$th row, $k \le p \le n$, then switch rows $p$ and $k$. The other steps of forward elimination are the same as the Naïve Gauss elimination method. The back substitution steps stay exactly the same as the Naïve Gauss elimination method.

Example 4
In the previous two examples, we used Naïve Gauss elimination to solve

$$20x_1 + 15x_2 + 10x_3 = 45$$
$$-3x_1 - 2.249x_2 + 7x_3 = 1.751$$
$$5x_1 + x_2 + 3x_3 = 9$$

using five and six significant digits with chopping in the calculations. Using five significant digits with chopping, the solution found was

$$[X] = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.625 \\ 1.5 \\ 0.99995 \end{bmatrix}$$

This is different from the exact solution of

$$[X] = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$

Find the solution using Gaussian elimination with partial pivoting using five significant digits with chopping in your calculations.

Solution

$$\begin{bmatrix} 20 & 15 & 10 \\ -3 & -2.249 & 7 \\ 5 & 1 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 1.751 \\ 9 \end{bmatrix}$$

Forward Elimination of Unknowns
Now for the first step of forward elimination, the absolute value of the first column elements is

$$|20|, \; |-3|, \; |5| \quad \text{or} \quad 20, \; 3, \; 5$$

So the largest absolute value is in Row 1. So as per Gaussian elimination with partial pivoting, the switch is between Row 1 and Row 1 to give

$$\begin{bmatrix} 20 & 15 & 10 \\ -3 & -2.249 & 7 \\ 5 & 1 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 1.751 \\ 9 \end{bmatrix}$$

Divide Row 1 by 20 and then multiply it by $-3$, that is, multiply Row 1 by $-3/20 = -0.15$.

$[20 \;\; 15 \;\; 10 \;|\; 45] \times (-0.15)$ gives Row 1 as $[{-3} \;\; {-2.25} \;\; {-1.5} \;|\; {-6.75}]$

Subtract the result from Row 2

$[{-3} \;\; {-2.249} \;\; 7 \;|\; 1.751] - [{-3} \;\; {-2.25} \;\; {-1.5} \;|\; {-6.75}] = [0 \;\; 0.001 \;\; 8.5 \;|\; 8.501]$

to get the resulting equations as

$$\begin{bmatrix} 20 & 15 & 10 \\ 0 & 0.001 & 8.5 \\ 5 & 1 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 8.501 \\ 9 \end{bmatrix}$$

Divide Row 1 by 20 and then multiply it by 5, that is, multiply Row 1 by $5/20 = 0.25$.

$[20 \;\; 15 \;\; 10 \;|\; 45] \times 0.25$ gives Row 1 as $[5 \;\; 3.75 \;\; 2.5 \;|\; 11.25]$

Subtract the result from Row 3

$[5 \;\; 1 \;\; 3 \;|\; 9] - [5 \;\; 3.75 \;\; 2.5 \;|\; 11.25] = [0 \;\; {-2.75} \;\; 0.5 \;|\; {-2.25}]$

to get the resulting equations as

$$\begin{bmatrix} 20 & 15 & 10 \\ 0 & 0.001 & 8.5 \\ 0 & -2.75 & 0.5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ 8.501 \\ -2.25 \end{bmatrix}$$

This is the end of the first step of forward elimination. Now for the second step of forward elimination, the absolute value of the second column elements below Row 1 is

$$|0.001|, \; |-2.75| \quad \text{or} \quad 0.001, \; 2.75$$

So the largest absolute value is in Row 3.
So Row 2 is switched with Row 3 to give

$$\begin{bmatrix} 20 & 15 & 10 \\ 0 & -2.75 & 0.5 \\ 0 & 0.001 & 8.5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ -2.25 \\ 8.501 \end{bmatrix}$$

Divide Row 2 by $-2.75$ and then multiply it by 0.001, that is, multiply Row 2 by $0.001/{-2.75} = -0.00036363$.

$[0 \;\; {-2.75} \;\; 0.5 \;|\; {-2.25}] \times (-0.00036363)$ gives Row 2 as $[0 \;\; 0.00099998 \;\; {-0.00018182} \;|\; 0.00081816]$

Subtract the result from Row 3

$[0 \;\; 0.001 \;\; 8.5 \;|\; 8.501] - [0 \;\; 0.00099998 \;\; {-0.00018182} \;|\; 0.00081816] = [0 \;\; 0 \;\; 8.50018182 \;|\; 8.50018184]$

Rewriting within 5 significant digits with chopping

$[0 \;\; 0 \;\; 8.5001 \;|\; 8.5001]$

to get the resulting equations as

$$\begin{bmatrix} 20 & 15 & 10 \\ 0 & -2.75 & 0.5 \\ 0 & 0 & 8.5001 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 45 \\ -2.25 \\ 8.5001 \end{bmatrix}$$

Back substitution

$$8.5001x_3 = 8.5001$$
$$x_3 = \frac{8.5001}{8.5001} = 1$$

Substituting the value of $x_3$ in Row 2

$$-2.75x_2 + 0.5x_3 = -2.25$$
$$x_2 = \frac{-2.25 - 0.5x_3}{-2.75} = \frac{-2.25 - 0.5 \times 1}{-2.75} = \frac{-2.75}{-2.75} = 1$$

Substituting the value of $x_3$ and $x_2$ in Row 1

$$20x_1 + 15x_2 + 10x_3 = 45$$
$$x_1 = \frac{45 - 15x_2 - 10x_3}{20} = \frac{45 - 15 \times 1 - 10 \times 1}{20} = \frac{20}{20} = 1$$

So the solution is

$$[X] = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$

This, in fact, is the exact solution. By coincidence only, in this case, the round-off error is fully removed.

Can we use Naïve Gauss elimination methods to find the determinant of a square matrix?
One of the more efficient ways to find the determinant of a square matrix is by taking advantage of the following two theorems on a determinant of matrices coupled with Naïve Gauss elimination.

Theorem 1: Let $[A]$ be an $n \times n$ matrix. Then, if $[B]$ is an $n \times n$ matrix that results from adding or subtracting a multiple of one row to another row, then $\det(A) = \det(B)$. (The same is true for column operations also.)

Theorem 2: Let $[A]$ be an $n \times n$ matrix that is upper triangular, lower triangular or diagonal, then

$$\det(A) = a_{11} \times a_{22} \times \cdots \times a_{ii} \times \cdots \times a_{nn} = \prod_{i=1}^{n} a_{ii}$$

This implies that if we apply the forward elimination steps of the Naïve Gauss elimination method, the determinant of the matrix stays the same according to Theorem 1. Then since at the end of the forward elimination steps, the resulting matrix is upper triangular, the determinant will be given by Theorem 2.

Example 5
Find the determinant of

$$[A] = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}$$

Solution
Remember in Example 1, we conducted the steps of forward elimination of unknowns using the Naïve Gauss elimination method on $[A]$ to give

$$[B] = \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$

According to Theorems 1 and 2

$$\det(A) = \det(B) = 25 \times (-4.8) \times 0.7 = -84.00$$

What if I cannot find the determinant of the matrix using the Naïve Gauss elimination method, for example, if I get division by zero problems during the Naïve Gauss elimination method?
Well, you can apply Gaussian elimination with partial pivoting. However, the determinant of the resulting upper triangular matrix may differ by a sign. The following theorem applies in addition to the previous two to find the determinant of a square matrix.

Theorem 3: Let $[A]$ be an $n \times n$ matrix. Then, if $[B]$ is a matrix that results from switching one row with another row, then $\det(B) = -\det(A)$.

Example 6
Find the determinant of

$$[A] = \begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix}$$

Solution
At the end of the forward elimination steps of Gaussian elimination with partial pivoting, we would obtain

$$[B] = \begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & 0 & 6.002 \end{bmatrix}$$

$$\det(B) = 10 \times 2.5 \times 6.002 = 150.05$$

Since rows were switched once during the forward elimination steps of Gaussian elimination with partial pivoting,

$$\det(A) = -\det(B) = -150.05$$

Example 7
Prove

$$\det(A) = \frac{1}{\det(A^{-1})}$$

Solution

$$[A][A]^{-1} = [I]$$
$$\det(AA^{-1}) = \det(I)$$
$$\det(A)\det(A^{-1}) = 1$$
$$\det(A) = \frac{1}{\det(A^{-1})}$$

If $[A]$ is an $n \times n$ matrix and $\det(A) \neq 0$, what other statements are equivalent to it?
1. $[A]$ is invertible.
2. $[A]^{-1}$ exists.
3. $[A][X] = [C]$ has a unique solution.
4. The solution of $[A][X] = [0]$ is $[X] = [0]$.
5. $[A][A]^{-1} = [I] = [A]^{-1}[A]$.

Key Terms:
Naïve Gauss Elimination
Partial Pivoting
Determinant

Chapter 04.08
Gauss-Seidel Method

After reading this chapter, you should be able to:
1. solve a set of equations using the Gauss-Seidel method,
2. recognize the advantages and pitfalls of the Gauss-Seidel method, and
3. determine under what conditions the Gauss-Seidel method always converges.

Why do we need another method to solve a set of simultaneous linear equations?
In certain cases, such as when a system of equations is large, iterative methods of solving equations are more advantageous. Elimination methods, such as Gaussian elimination, are prone to large round-off errors for a large set of equations. Iterative methods, such as the Gauss-Seidel method, give the user control of the round-off error. Also, if the physics of the problem are well known, initial guesses needed in iterative methods can be made more judiciously, leading to faster convergence.

What is the algorithm for the Gauss-Seidel method?
Given a general set of $n$ equations and $n$ unknowns, we have

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n = c_1$$
$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n = c_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{nn}x_n = c_n$$

If the diagonal elements are non-zero, each equation is rewritten for the corresponding unknown, that is, the first equation is rewritten with $x_1$ on the left hand side, the second equation is rewritten with $x_2$ on the left hand side, and so on as follows

$$x_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3 - \cdots - a_{1n}x_n}{a_{11}}$$
$$x_2 = \frac{c_2 - a_{21}x_1 - a_{23}x_3 - \cdots - a_{2n}x_n}{a_{22}}$$
$$\vdots$$
$$x_{n-1} = \frac{c_{n-1} - a_{n-1,1}x_1 - a_{n-1,2}x_2 - \cdots - a_{n-1,n-2}x_{n-2} - a_{n-1,n}x_n}{a_{n-1,n-1}}$$
$$x_n = \frac{c_n - a_{n1}x_1 - a_{n2}x_2 - \cdots - a_{n,n-1}x_{n-1}}{a_{nn}}$$

These equations can be rewritten in a summation form as

$$x_1 = \frac{c_1 - \sum_{j=1, j\neq 1}^{n} a_{1j}x_j}{a_{11}}$$
$$x_2 = \frac{c_2 - \sum_{j=1, j\neq 2}^{n} a_{2j}x_j}{a_{22}}$$
$$\vdots$$
$$x_{n-1} = \frac{c_{n-1} - \sum_{j=1, j\neq n-1}^{n} a_{n-1,j}x_j}{a_{n-1,n-1}}$$
$$x_n = \frac{c_n - \sum_{j=1, j\neq n}^{n} a_{nj}x_j}{a_{nn}}$$

Hence for any row $i$,

$$x_i = \frac{c_i - \sum_{j=1, j\neq i}^{n} a_{ij}x_j}{a_{ii}}, \quad i = 1, 2, \ldots, n.$$

Now to find the $x_i$'s, one assumes an initial guess for the $x_i$'s and then uses the rewritten equations to calculate the new estimates. Remember, one always uses the most recent estimates to calculate the next estimates, $x_i$. At the end of each iteration, one calculates the absolute relative approximate error for each $x_i$ as

$$\left|\epsilon_a\right|_i = \left| \frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}} \right| \times 100$$

where $x_i^{\text{new}}$ is the recently obtained value of $x_i$, and $x_i^{\text{old}}$ is the previous value of $x_i$. When the absolute relative approximate error for each $x_i$ is less than the pre-specified tolerance, the iterations are stopped.

Example 1
The upward velocity of a rocket is given at three different times in the following table

Table 1 Velocity vs. time data.

  Time, t (s)    Velocity, v (m/s)
  5              106.8
  8              177.2
  12             279.2

The velocity data is approximated by a polynomial as

$$v(t) = a_1t^2 + a_2t + a_3, \quad 5 \le t \le 12$$

Find the values of $a_1$, $a_2$, and $a_3$ using the Gauss-Seidel method. Assume an initial guess of the solution as

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix}$$

and conduct two iterations.
Solution
The polynomial is going through three data points $(t_1, v_1)$, $(t_2, v_2)$, and $(t_3, v_3)$ where from the above table

$$t_1 = 5, \; v_1 = 106.8$$
$$t_2 = 8, \; v_2 = 177.2$$
$$t_3 = 12, \; v_3 = 279.2$$

Requiring that $v(t) = a_1t^2 + a_2t + a_3$ passes through the three data points gives

$$v(t_1) = v_1 = a_1t_1^2 + a_2t_1 + a_3$$
$$v(t_2) = v_2 = a_1t_2^2 + a_2t_2 + a_3$$
$$v(t_3) = v_3 = a_1t_3^2 + a_2t_3 + a_3$$

Substituting the data $(t_1, v_1)$, $(t_2, v_2)$, and $(t_3, v_3)$ gives

$$a_1(5^2) + a_2(5) + a_3 = 106.8$$
$$a_1(8^2) + a_2(8) + a_3 = 177.2$$
$$a_1(12^2) + a_2(12) + a_3 = 279.2$$

or

$$25a_1 + 5a_2 + a_3 = 106.8$$
$$64a_1 + 8a_2 + a_3 = 177.2$$
$$144a_1 + 12a_2 + a_3 = 279.2$$

The coefficients $a_1$, $a_2$, and $a_3$ for the above expression are given by

$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$

Rewriting the equations gives

$$a_1 = \frac{106.8 - 5a_2 - a_3}{25}$$
$$a_2 = \frac{177.2 - 64a_1 - a_3}{8}$$
$$a_3 = \frac{279.2 - 144a_1 - 12a_2}{1}$$

Iteration #1
Given the initial guess of the solution vector as

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix}$$

we get

$$a_1 = \frac{106.8 - 5(2) - (5)}{25} = 3.6720$$
$$a_2 = \frac{177.2 - 64(3.6720) - (5)}{8} = -7.8510$$
$$a_3 = \frac{279.2 - 144(3.6720) - 12(-7.8510)}{1} = -155.36$$

The absolute relative approximate error for each $x_i$ then is

$$\left|\epsilon_a\right|_1 = \left|\frac{3.6720 - 1}{3.6720}\right| \times 100 = 72.76\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{-7.8510 - 2}{-7.8510}\right| \times 100 = 125.47\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{-155.36 - 5}{-155.36}\right| \times 100 = 103.22\%$$

At the end of the first iteration, the estimate of the solution vector is

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 3.6720 \\ -7.8510 \\ -155.36 \end{bmatrix}$$

and the maximum absolute relative approximate error is 125.47%.

Iteration #2
The estimate of the solution vector at the end of Iteration #1 is

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 3.6720 \\ -7.8510 \\ -155.36 \end{bmatrix}$$

Now we get

$$a_1 = \frac{106.8 - 5(-7.8510) - (-155.36)}{25} = 12.056$$
$$a_2 = \frac{177.2 - 64(12.056) - (-155.36)}{8} = -54.882$$
$$a_3 = \frac{279.2 - 144(12.056) - 12(-54.882)}{1} = -798.34$$

The absolute relative approximate error for each $x_i$ then is

$$\left|\epsilon_a\right|_1 = \left|\frac{12.056 - 3.6720}{12.056}\right| \times 100 = 69.543\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{-54.882 - (-7.8510)}{-54.882}\right| \times 100 = 85.695\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{-798.34 - (-155.36)}{-798.34}\right| \times 100 = 80.540\%$$

At the end of the second iteration the estimate of the solution vector is

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 12.056 \\ -54.882 \\ -798.34 \end{bmatrix}$$

and the maximum absolute relative approximate error is 85.695%.

Conducting more iterations gives the following values for the solution vector and the corresponding absolute relative approximate errors.

  Iteration   a1        |ea|1 %   a2         |ea|2 %   a3         |ea|3 %
  1           3.6720    72.767    -7.8510    125.47    -155.36    103.22
  2           12.056    69.543    -54.882    85.695    -798.34    80.540
  3           47.182    74.447    -255.51    78.521    -3448.9    76.852
  4           193.33    75.595    -1093.4    76.632    -14440     76.116
  5           800.53    75.850    -4577.2    76.112    -60072     75.963
  6           3322.6    75.906    -19049     75.972    -249580    75.931

As seen in the above table, the solution estimates are not converging to the true solution of

$$a_1 = 0.29048, \quad a_2 = 19.690, \quad a_3 = 1.0857$$

The above system of equations does not seem to converge. Why?
Well, a pitfall of most iterative methods is that they may or may not converge. However, the solution to a certain class of systems of simultaneous equations does always converge using the Gauss-Seidel method. This class of system of equations is where the coefficient matrix $[A]$ in $[A][X] = [C]$ is diagonally dominant, that is

$$|a_{ii}| \ge \sum_{j=1, j\neq i}^{n} |a_{ij}| \quad \text{for all } i$$

$$|a_{ii}| > \sum_{j=1, j\neq i}^{n} |a_{ij}| \quad \text{for at least one } i$$

If a system of equations has a coefficient matrix that is not diagonally dominant, it may or may not converge. Fortunately, many physical systems that result in simultaneous linear equations have a diagonally dominant coefficient matrix, which then assures convergence for iterative methods such as the Gauss-Seidel method of solving simultaneous linear equations.
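The Gauss-Seidel update and the absolute relative approximate error stopping criterion described above can be sketched as follows. This is our own illustrative code (the names are ours, not from the text), applied to a diagonally dominant system (the one of Example 2 that follows):

```python
def gauss_seidel(A, c, x0, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration; returns the estimate and the iteration count."""
    n = len(c)
    x = x0[:]
    for iteration in range(1, max_iter + 1):
        max_err = 0.0
        for i in range(n):
            old = x[i]
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (c[i] - s) / A[i][i]   # most recent estimates are used immediately
            if x[i] != 0:
                # absolute relative approximate error, in percent
                err = abs((x[i] - old) / x[i]) * 100
                max_err = max(max_err, err)
        if max_err < tol:
            break
    return x, iteration

# A diagonally dominant coefficient matrix: |a_ii| >= sum of off-diagonal
# magnitudes in every row, strictly greater in at least one row.
A = [[12, 3, -5], [1, 5, 3], [3, 7, 13]]
c = [1, 28, 76]
x, iters = gauss_seidel(A, c, [1.0, 0.0, 1.0])
```

Because the matrix is diagonally dominant, the iterates converge regardless of the initial guess; here they approach [1, 3, 4].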
Example 2
Find the solution to the following system of equations using the Gauss-Seidel method.

$$12x_1 + 3x_2 - 5x_3 = 1$$
$$x_1 + 5x_2 + 3x_3 = 28$$
$$3x_1 + 7x_2 + 13x_3 = 76$$

Use

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$

as the initial guess and conduct two iterations.

Solution
The coefficient matrix

$$[A] = \begin{bmatrix} 12 & 3 & -5 \\ 1 & 5 & 3 \\ 3 & 7 & 13 \end{bmatrix}$$

is diagonally dominant as

$$|a_{11}| = |12| = 12 > |a_{12}| + |a_{13}| = |3| + |-5| = 8$$
$$|a_{22}| = |5| = 5 > |a_{21}| + |a_{23}| = |1| + |3| = 4$$
$$|a_{33}| = |13| = 13 > |a_{31}| + |a_{32}| = |3| + |7| = 10$$

and the inequality is strictly greater than for at least one row. Hence, the solution should converge using the Gauss-Seidel method.

Rewriting the equations, we get

$$x_1 = \frac{1 - 3x_2 + 5x_3}{12}$$
$$x_2 = \frac{28 - x_1 - 3x_3}{5}$$
$$x_3 = \frac{76 - 3x_1 - 7x_2}{13}$$

Assuming an initial guess of

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$

Iteration #1

$$x_1 = \frac{1 - 3(0) + 5(1)}{12} = 0.50000$$
$$x_2 = \frac{28 - (0.50000) - 3(1)}{5} = 4.9000$$
$$x_3 = \frac{76 - 3(0.50000) - 7(4.9000)}{13} = 3.0923$$

The absolute relative approximate error at the end of the first iteration is

$$\left|\epsilon_a\right|_1 = \left|\frac{0.50000 - 1}{0.50000}\right| \times 100 = 100.00\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{4.9000 - 0}{4.9000}\right| \times 100 = 100.00\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{3.0923 - 1}{3.0923}\right| \times 100 = 67.662\%$$

The maximum absolute relative approximate error is 100.00%

Iteration #2

$$x_1 = \frac{1 - 3(4.9000) + 5(3.0923)}{12} = 0.14679$$
$$x_2 = \frac{28 - (0.14679) - 3(3.0923)}{5} = 3.7153$$
$$x_3 = \frac{76 - 3(0.14679) - 7(3.7153)}{13} = 3.8118$$

At the end of second iteration, the absolute relative approximate error is

$$\left|\epsilon_a\right|_1 = \left|\frac{0.14679 - 0.50000}{0.14679}\right| \times 100 = 240.61\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{3.7153 - 4.9000}{3.7153}\right| \times 100 = 31.889\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{3.8118 - 3.0923}{3.8118}\right| \times 100 = 18.874\%$$

The maximum absolute relative approximate error is 240.61%. This is greater than the value of 100.00% we obtained in the first iteration. Is the solution diverging? No, as you conduct more iterations, the solution converges as follows.
  Iteration   x1        |ea|1 %   x2       |ea|2 %    x3       |ea|3 %
  1           0.50000   100.00    4.9000   100.00     3.0923   67.662
  2           0.14679   240.61    3.7153   31.889     3.8118   18.874
  3           0.74275   80.236    3.1644   17.408     3.9708   4.0064
  4           0.94675   21.546    3.0281   4.4996     3.9971   0.65772
  5           0.99177   4.5391    3.0034   0.82499    4.0001   0.074383
  6           0.99919   0.74307   3.0001   0.10856    4.0001   0.00101

This is close to the exact solution vector of

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix}$$

Example 3
Given the system of equations

$$3x_1 + 7x_2 + 13x_3 = 76$$
$$x_1 + 5x_2 + 3x_3 = 28$$
$$12x_1 + 3x_2 - 5x_3 = 1$$

find the solution using the Gauss-Seidel method. Use

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$

as the initial guess.

Solution
Rewriting the equations, we get

$$x_1 = \frac{76 - 7x_2 - 13x_3}{3}$$
$$x_2 = \frac{28 - x_1 - 3x_3}{5}$$
$$x_3 = \frac{1 - 12x_1 - 3x_2}{-5}$$

Assuming an initial guess of

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$

the next six iterative values are given in the table below.

  Iteration   x1            |ea|1 %   x2            |ea|2 %   x3            |ea|3 %
  1           21.000        95.238    0.80000       100.00    50.680        98.027
  2           -196.15       110.71    14.421        94.453    -462.30       110.96
  3           1995.0        109.83    -116.02       112.43    4718.1        109.80
  4           -20149        109.90    1204.6        109.63    -47636        109.90
  5           2.0364e+05    109.89    -12140        109.92    4.8144e+05    109.89
  6           -2.0579e+06   109.89    1.2272e+05    109.89    -4.8653e+06   109.89

You can see that this solution is not converging and the coefficient matrix is not diagonally dominant. The coefficient matrix

$$[A] = \begin{bmatrix} 3 & 7 & 13 \\ 1 & 5 & 3 \\ 12 & 3 & -5 \end{bmatrix}$$

is not diagonally dominant as

$$|a_{11}| = |3| = 3 \le |a_{12}| + |a_{13}| = |7| + |13| = 20$$

Hence, the Gauss-Seidel method may or may not converge. However, it is the same set of equations as the previous example and that converged. The only difference is that we exchanged the first and the third equation with each other and that made the coefficient matrix not diagonally dominant. Therefore, it is possible that a system of equations can be made diagonally dominant if one exchanges the equations with each other. However, it is not possible for all cases.
For example, the following set of equations

$$x_1 + x_2 + x_3 = 3$$
$$2x_1 + 3x_2 + 4x_3 = 9$$
$$x_1 + 7x_2 + x_3 = 9$$

cannot be rewritten to make the coefficient matrix diagonally dominant.

Key Terms:
Gauss-Seidel method
Convergence of Gauss-Seidel method
Diagonally dominant matrix

Chapter 04.07
LU Decomposition

After reading this chapter, you should be able to:
1. identify when LU decomposition is numerically more efficient than Gaussian elimination,
2. decompose a nonsingular matrix into LU, and
3. show how LU decomposition is used to find the inverse of a matrix.

I hear about LU decomposition used as a method to solve a set of simultaneous linear equations. What is it?
We already studied two numerical methods of finding the solution to simultaneous linear equations, Naïve Gauss elimination and Gaussian elimination with partial pivoting. Then, why do we need to learn another method? To appreciate why LU decomposition could be a better choice than the Gauss elimination techniques in some cases, let us discuss first what LU decomposition is about.

For a nonsingular matrix $[A]$ on which one can successfully conduct the Naïve Gauss elimination forward elimination steps, one can always write it as

$$[A] = [L][U]$$

where

$[L]$ = lower triangular matrix
$[U]$ = upper triangular matrix

Then if one is solving a set of equations $[A][X] = [C]$, then

$$[L][U][X] = [C] \quad \text{as } [A] = [L][U]$$

Multiplying both sides by $[L]^{-1}$,

$$[L]^{-1}[L][U][X] = [L]^{-1}[C]$$
$$[I][U][X] = [L]^{-1}[C] \quad \text{as } [L]^{-1}[L] = [I]$$
$$[U][X] = [L]^{-1}[C] \quad \text{as } [I][U] = [U]$$

Let

$$[L]^{-1}[C] = [Z]$$

then

$$[L][Z] = [C] \quad (1)$$

and

$$[U][X] = [Z] \quad (2)$$

So we can solve Equation (1) first for $[Z]$ by using forward substitution and then use Equation (2) to calculate the solution vector $[X]$ by back substitution. This is all exciting, but LU decomposition looks more complicated than Gaussian elimination.
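The two-stage solve of Equations (1) and (2) can be sketched in code. This illustration is ours, not the chapter's; it uses the Doolittle form of the decomposition (1s on the diagonal of [L]), whose multipliers are exactly those of the Naïve Gauss forward elimination:

```python
def lu_decompose(A):
    """Doolittle LU decomposition: [A] = [L][U], with 1s on the diagonal of [L]."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]   # multiplier from forward elimination
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, c):
    """Solve [L][Z] = [C] by forward substitution, then [U][X] = [Z] by back substitution."""
    n = len(c)
    z = [0.0] * n
    for i in range(n):                    # Equation (1): forward substitution
        z[i] = (c[i] - sum(L[i][j] * z[j] for j in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):        # Equation (2): back substitution
        x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Solving the rocket-velocity system from the Gaussian elimination chapter
L, U = lu_decompose([[25, 5, 1], [64, 8, 1], [144, 12, 1]])
x = lu_solve(L, U, [106.8, 177.2, 279.2])
```

The factorization is done once; only the two cheap triangular solves involve the right hand side, which is the key to the efficiency argument that follows.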
Do we use LU decomposition because it is computationally more efficient than Gaussian elimination to solve a set of n equations given by [A][X] = [C]?
For a square matrix $[A]$ of $n \times n$ size, the computational time¹ $CT|_{DE}$ to decompose the $[A]$ matrix to $[L][U]$ form is given by

$$CT|_{DE} = T\left(\frac{8n^3}{3} + 4n^2 - \frac{20n}{3}\right)$$

where $T$ = clock cycle time².

The computational time $CT|_{FS}$ to solve by forward substitution $[L][Z] = [C]$ is given by

$$CT|_{FS} = T(4n^2 - 4n)$$

The computational time $CT|_{BS}$ to solve by back substitution $[U][X] = [Z]$ is given by

$$CT|_{BS} = T(4n^2 + 12n)$$

So, the total computational time to solve a set of equations by LU decomposition is

$$CT|_{LU} = CT|_{DE} + CT|_{FS} + CT|_{BS}$$
$$= T\left(\frac{8n^3}{3} + 4n^2 - \frac{20n}{3}\right) + T(4n^2 - 4n) + T(4n^2 + 12n)$$
$$= T\left(\frac{8n^3}{3} + 12n^2 + \frac{4n}{3}\right)$$

Now let us look at the computational time taken by Gaussian elimination. The computational time $CT|_{FE}$ for the forward elimination part,

$$CT|_{FE} = T\left(\frac{8n^3}{3} + 8n^2 - \frac{32n}{3}\right)$$

and the computational time $CT|_{BS}$ for the back substitution part is

$$CT|_{BS} = T(4n^2 + 12n)$$

So, the total computational time $CT|_{GE}$ to solve a set of equations by Gaussian elimination is

$$CT|_{GE} = CT|_{FE} + CT|_{BS}$$
$$= T\left(\frac{8n^3}{3} + 8n^2 - \frac{32n}{3}\right) + T(4n^2 + 12n)$$
$$= T\left(\frac{8n^3}{3} + 12n^2 + \frac{4n}{3}\right)$$

The computational time for Gaussian elimination and LU decomposition is identical.

¹ The time is calculated by first separately calculating the number of additions, subtractions, multiplications, and divisions in a procedure such as back substitution, etc. We then assume 4 clock cycles each for an add, subtract, or multiply operation, and 16 clock cycles for a divide operation as is the case for a typical AMD®-K7 chip. http://www.isi.edu/~draper/papers/mwscas07_kwon.pdf
² As an example, a 1.2 GHz CPU has a clock cycle of $1/(1.2 \times 10^9) = 0.833333$ ns.

This has confused me further!
Why learn the LU decomposition method when it takes the same computational time as Gaussian elimination, and that too when the two methods are so closely related? Please convince me that LU decomposition has its place in solving linear equations!

We now have the knowledge to convince you that the LU decomposition method has its place in the solution of simultaneous linear equations. Let us look at an example where the LU decomposition method is computationally more efficient than Gaussian elimination. Remember that in trying to find the inverse of the matrix [A] in Chapter 04.05, the problem reduces to solving n sets of equations with the n columns of the identity matrix as the right hand side vectors. For the calculation of each column of the inverse of [A], the coefficient matrix [A] in the set of equations [A][X] = [C] does not change. So if we use the LU decomposition method, the [A] = [L][U] decomposition needs to be done only once, the forward substitution (Equation 1) n times, and the back substitution (Equation 2) n times.

So the total computational time CT|inverse LU required to find the inverse of a matrix using LU decomposition is
  CT|inverse LU = 1 x CT|DE + n x CT|FS + n x CT|BS
                = 1 x T(8n^3/3 + 4n^2 - 20n/3) + n x T(4n^2 - 4n) + n x T(4n^2 + 12n)
                = T(32n^3/3 + 12n^2 - 20n/3)

In comparison, if the Gaussian elimination method were used to find the inverse of a matrix, the forward elimination as well as the back substitution would have to be done n times. The total computational time CT|inverse GE required to find the inverse of a matrix using Gaussian elimination is then
  CT|inverse GE = n x CT|FE + n x CT|BS
                = n x T(8n^3/3 + 8n^2 - 32n/3) + n x T(4n^2 + 12n)
                = T(8n^4/3 + 12n^3 + 4n^2/3)

Clearly for large n, CT|inverse GE >> CT|inverse LU, as CT|inverse GE has a dominating term of n^4 while CT|inverse LU has a dominating term of n^3.
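The n^4 versus n^3 difference can be made concrete with the same kind of symbolic check. This is an illustrative addition (function names are ours); the ratios it produces match the entries of Table 1 below.

```python
from fractions import Fraction as F

def ct_inverse_lu(n):
    # One decomposition + n forward substitutions + n back substitutions.
    return (F(8, 3) * n**3 + 4 * n**2 - F(20, 3) * n) \
        + n * (4 * n**2 - 4 * n) + n * (4 * n**2 + 12 * n)

def ct_inverse_ge(n):
    # n forward eliminations + n back substitutions.
    return n * (F(8, 3) * n**3 + 8 * n**2 - F(32, 3) * n) \
        + n * (4 * n**2 + 12 * n)

# The simplified forms quoted in the text:
assert ct_inverse_lu(100) == F(32, 3) * 100**3 + 12 * 100**2 - F(20, 3) * 100
assert ct_inverse_ge(100) == F(8, 3) * 100**4 + 12 * 100**3 + F(4, 3) * 100**2

ratio_10 = float(ct_inverse_ge(10) / ct_inverse_lu(10))    # about 3.28
ratio_100 = float(ct_inverse_ge(100) / ct_inverse_lu(100)) # about 25.83
```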
For large values of n, the Gaussian elimination method would take more computational time (approximately n/4 times more – prove it) than the LU decomposition method. Typical values of the ratio of the computational times for different values of n are given in Table 1.

Table 1. Comparing computational times of finding the inverse of a matrix using LU decomposition and Gaussian elimination.

  n                              10     100    1000   10000
  CT|inverse GE / CT|inverse LU  3.28   25.83  250.8  2501

Are you convinced now that LU decomposition has its place in solving systems of equations? We are now ready to answer other curious questions such as
1) How do I find the LU matrices for a nonsingular matrix [A]?
2) How do I conduct the forward and back substitution steps of Equations (1) and (2), respectively?

How do I decompose a nonsingular matrix [A], that is, how do I find [A] = [L][U]?

If the forward elimination steps of the Naïve Gauss elimination method can be applied on a nonsingular matrix, then [A] can be decomposed into LU form as

  [A] = [a11 a12 ... a1n]   [1    0    ... 0]   [u11 u12 ... u1n]
        [a21 a22 ... a2n] = [l21  1    ... 0] x [0   u22 ... u2n]
        [...           ]    [...            ]   [...            ]
        [an1 an2 ... ann]   [ln1  ln2  ... 1]   [0   0   ... unn]

The elements of the [U] matrix are exactly the same as the coefficient matrix one obtains at the end of the forward elimination steps in Naïve Gauss elimination. The lower triangular matrix [L] has 1 in each of its diagonal entries. The non-zero elements below the diagonal of [L] are the multipliers that made the corresponding entries zero in the upper triangular matrix [U] during forward elimination. Let us look at this using the same example as used in Naïve Gauss elimination.

Example 1
Find the LU decomposition of the matrix
  [A] = [ 25  5  1]
        [ 64  8  1]
        [144 12  1]

Solution
  [A] = [L][U]
      = [1    0    0]   [u11 u12 u13]
        [l21  1    0] x [0   u22 u23]
        [l31  l32  1]   [0   0   u33]
The [U] matrix is the same as the one found at the end of the forward elimination steps of the Naïve Gauss elimination method, that is
  [U] = [25   5     1   ]
        [ 0  -4.8  -1.56]
        [ 0   0     0.7 ]
To find l21 and l31, find the multipliers that were used to make the a21 and a31 elements zero in the first step of forward elimination of the Naïve Gauss elimination method. They were
  l21 = 64/25 = 2.56
  l31 = 144/25 = 5.76
To find l32, what multiplier was used to make the a32 element zero? Remember that the a32 element was made zero in the second step of forward elimination. The [A] matrix at the beginning of the second step of forward elimination was
  [25   5      1   ]
  [ 0  -4.8   -1.56]
  [ 0  -16.8  -4.76]
So
  l32 = -16.8/-4.8 = 3.5
Hence
  [L] = [1     0    0]
        [2.56  1    0]
        [5.76  3.5  1]
Confirm [L][U] = [A].
  [L][U] = [1     0    0]   [25   5     1   ]   [ 25  5  1]
           [2.56  1    0] x [ 0  -4.8  -1.56] = [ 64  8  1]
           [5.76  3.5  1]   [ 0   0     0.7 ]   [144 12  1]

Example 2
Use the LU decomposition method to solve the following simultaneous linear equations.
  [ 25  5  1] [a1]   [106.8]
  [ 64  8  1] [a2] = [177.2]
  [144 12  1] [a3]   [279.2]

Solution
Recall that [A][X] = [C], and if [A] = [L][U], then first solving [L][Z] = [C] and then [U][X] = [Z] gives the solution vector [X]. Now from the previous example,
  [A] = [L][U]
      = [1     0    0]   [25   5     1   ]
        [2.56  1    0] x [ 0  -4.8  -1.56]
        [5.76  3.5  1]   [ 0   0     0.7 ]
First solve [L][Z] = [C],
  [1     0    0] [z1]   [106.8]
  [2.56  1    0] [z2] = [177.2]
  [5.76  3.5  1] [z3]   [279.2]
to give
  z1 = 106.8
  2.56 z1 + z2 = 177.2
  5.76 z1 + 3.5 z2 + z3 = 279.2
Forward substitution starting from the first equation gives
  z1 = 106.8
  z2 = 177.2 - 2.56 z1
     = 177.2 - 2.56 x 106.8
     = -96.208
  z3 = 279.2 - 5.76 z1 - 3.5 z2
     = 279.2 - 5.76 x 106.8 - 3.5 x (-96.208)
     = 0.76
Hence
  [Z] = [z1]   [106.8  ]
        [z2] = [-96.208]
        [z3]   [0.76   ]
This matrix is the same as the right hand side obtained at the end of the forward elimination steps of the Naïve Gauss elimination method. Is this a coincidence?
Now solve [U][X] = [Z],
  [25   5     1   ] [a1]   [106.8  ]
  [ 0  -4.8  -1.56] [a2] = [-96.208]
  [ 0   0     0.7 ] [a3]   [0.76   ]
that is
  25 a1 + 5 a2 + a3 = 106.8
  -4.8 a2 - 1.56 a3 = -96.208
  0.7 a3 = 0.76
From the third equation,
  0.7 a3 = 0.76
  a3 = 0.76/0.7
     = 1.0857
Substituting the value of a3 in the second equation,
  -4.8 a2 - 1.56 a3 = -96.208
  a2 = (-96.208 + 1.56 a3)/(-4.8)
     = (-96.208 + 1.56 x 1.0857)/(-4.8)
     = 19.691
Substituting the values of a2 and a3 in the first equation,
  25 a1 + 5 a2 + a3 = 106.8
  a1 = (106.8 - 5 a2 - a3)/25
     = (106.8 - 5 x 19.691 - 1.0857)/25
     = 0.29048
Hence the solution vector is
  [a1]   [0.29048]
  [a2] = [19.691 ]
  [a3]   [1.0857 ]

How do I find the inverse of a square matrix using LU decomposition?

A matrix [B] is the inverse of [A] if [A][B] = [I] = [B][A]. How can we use LU decomposition to find the inverse of a matrix? Assume the first column of [B] (the inverse of [A]) is [b11 b21 ... bn1]^T. Then from the above definition of an inverse and the definition of matrix multiplication,
  [A] [b11]   [1]
      [b21] = [0]
      [...]    [...]
      [bn1]   [0]
Similarly, the second column of [B] is given by
  [A] [b12]   [0]
      [b22] = [1]
      [...]    [...]
      [bn2]   [0]
Similarly, all columns of [B] can be found by solving n different sets of equations with the right hand sides being the n columns of the identity matrix.

Example 3
Use LU decomposition to find the inverse of
  [A] = [ 25  5  1]
        [ 64  8  1]
        [144 12  1]

Solution
Knowing that
  [A] = [L][U]
      = [1     0    0]   [25   5     1   ]
        [2.56  1    0] x [ 0  -4.8  -1.56]
        [5.76  3.5  1]   [ 0   0     0.7 ]
we can solve for the first column of [B] = [A]^(-1) by solving
  [ 25  5  1] [b11]   [1]
  [ 64  8  1] [b21] = [0]
  [144 12  1] [b31]   [0]
First solve [L][Z] = [C], that is
  [1     0    0] [z1]   [1]
  [2.56  1    0] [z2] = [0]
  [5.76  3.5  1] [z3]   [0]
to give
  z1 = 1
  2.56 z1 + z2 = 0
  5.76 z1 + 3.5 z2 + z3 = 0
Forward substitution starting from the first equation gives
  z1 = 1
  z2 = 0 - 2.56 z1
     = 0 - 2.56(1)
     = -2.56
  z3 = 0 - 5.76 z1 - 3.5 z2
     = 0 - 5.76(1) - 3.5(-2.56)
     = 3.2
Hence
  [Z] = [z1]   [ 1   ]
        [z2] = [-2.56]
        [z3]   [ 3.2 ]
Now solve [U][X] = [Z], that is
  [25   5     1   ] [b11]   [ 1   ]
  [ 0  -4.8  -1.56] [b21] = [-2.56]
  [ 0   0     0.7 ] [b31]   [ 3.2 ]
  25 b11 + 5 b21 + b31 = 1
  -4.8 b21 - 1.56 b31 = -2.56
  0.7 b31 = 3.2
Back substitution starting from the third equation gives
  b31 = 3.2/0.7
      = 4.571
  b21 = (-2.56 + 1.56 b31)/(-4.8)
      = (-2.56 + 1.56 x 4.571)/(-4.8)
      = -0.9524
  b11 = (1 - 5 b21 - b31)/25
      = (1 - 5 x (-0.9524) - 4.571)/25
      = 0.04762
Hence the first column of the inverse of [A] is
  [b11]   [ 0.04762]
  [b21] = [-0.9524 ]
  [b31]   [ 4.571  ]
Similarly, solving
  [ 25  5  1] [b12]   [0]         [b12]   [-0.08333]
  [ 64  8  1] [b22] = [1]  gives  [b22] = [ 1.417  ]
  [144 12  1] [b32]   [0]         [b32]   [-5.000  ]
and solving
  [ 25  5  1] [b13]   [0]         [b13]   [ 0.03571]
  [ 64  8  1] [b23] = [0]  gives  [b23] = [-0.4643 ]
  [144 12  1] [b33]   [1]         [b33]   [ 1.429  ]
Hence
  [A]^(-1) = [ 0.04762  -0.08333   0.03571]
             [-0.9524    1.417    -0.4643 ]
             [ 4.571    -5.000     1.429  ]
Can you confirm the following for the above example?
  [A][A]^(-1) = [I] = [A]^(-1)[A]

Key Terms:
LU decomposition
Inverse
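The worked numbers of Examples 1 to 3 of this chapter can be reproduced with a short program. This sketch is an addition to the text, not part of the original chapter: the function names are ours, and the decomposition is the Naïve-Gauss-based (no pivoting) procedure described above.

```python
def lu_decompose(A):
    # Doolittle LU via Naive Gauss forward elimination: [U] is the eliminated
    # matrix and [L] stores the multipliers, with 1s on its diagonal.
    n = len(A)
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]      # e.g. l21 = 64/25 = 2.56
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, C):
    # Forward substitution on [L][Z] = [C] (L has unit diagonal, so no
    # division is needed), then back substitution on [U][X] = [Z].
    n = len(C)
    Z = [0.0] * n
    for i in range(n):
        Z[i] = C[i] - sum(L[i][j] * Z[j] for j in range(i))
    X = [0.0] * n
    for i in range(n - 1, -1, -1):
        X[i] = (Z[i] - sum(U[i][j] * X[j] for j in range(i + 1, n))) / U[i][i]
    return X

A = [[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]]
L, U = lu_decompose(A)                       # L holds 2.56, 5.76, 3.5
x = lu_solve(L, U, [106.8, 177.2, 279.2])    # Example 2's solution vector
col1 = lu_solve(L, U, [1.0, 0.0, 0.0])       # first column of the inverse
```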
Vector: A vector is a matrix that has only one row or one column. There are two types of vectors – row vectors and column vectors.

Row Vector: If a matrix [B] has one row, it is called a row vector
  [B] = [b1 b2 ... bn]
and n is the dimension of the row vector.

Example 1
Give an example of a row vector.
Solution
  [B] = [25 20 3 2 0]
is an example of a row vector of dimension 5.

Column Vector: If a matrix [C] has one column, it is called a column vector
  [C] = [c1]
        [...]
        [cm]
and m is the dimension of the vector.

Example 2
Give an example of a column vector.
Solution
  [C] = [25]
        [ 5]
        [ 6]
is an example of a column vector of dimension 3.

Submatrix: If some row(s) and/or column(s) of a matrix [A] are deleted (possibly none), the remaining matrix is called a submatrix of [A].

Example 3
Find some of the submatrices of the matrix
  [A] = [4  6  2]
        [3 -1  2]
Solution
  [4  6  2]   [4  6]
  [3 -1  2],  [3 -1],  [4 6 2],  [4],  [2]
are some of the submatrices of [A]. Can you find other submatrices of [A]?

Square matrix: If the number of rows m of a matrix is equal to the number of columns n of the matrix (m = n), then [A] is called a square matrix. The entries a11, a22, ..., ann are called the diagonal elements of a square matrix. Sometimes the diagonal of the matrix is also called the principal or main diagonal of the matrix.

Example 4
Give an example of a square matrix.
Solution
  [A] = [25 20  3]
        [ 5 10 15]
        [ 6 15  7]
is a square matrix, as it has the same number of rows and columns, that is, three. The diagonal elements of [A] are a11 = 25, a22 = 10, a33 = 7.

Upper triangular matrix: An m x n matrix for which aij = 0, i > j is called an upper triangular matrix. That is, all the elements below the diagonal entries are zero.

Example 5
Give an example of an upper triangular matrix.
Solution
  [A] = [10  -7      0    ]
        [ 0  -0.001  6    ]
        [ 0   0      15005]
is an upper triangular matrix.

Lower triangular matrix: An m x n matrix for which aij = 0, j > i is called a lower triangular matrix. That is, all the elements above the diagonal entries are zero.

Example 6
Give an example of a lower triangular matrix.
Solution
  [A] = [1    0    0]
        [0.3  1    0]
        [0.6  2.5  1]
is a lower triangular matrix.

Diagonal matrix: A square matrix with all non-diagonal elements equal to zero is called a diagonal matrix. That is, only the diagonal entries of the square matrix can be non-zero (aij = 0, i not equal to j).

Example 7
Give examples of a diagonal matrix.
Solution
  [A] = [3  0    0]
        [0  2.1  0]
        [0  0    5]
is a diagonal matrix. Any or all of the diagonal entries of a diagonal matrix can be zero. For example
  [A] = [3  0    0]
        [0  2.1  0]
        [0  0    0]
is also a diagonal matrix.

Identity matrix: A diagonal matrix with all diagonal elements equal to one is called an identity matrix (aij = 0, i not equal to j, and aii = 1 for all i).

Example 8
Give an example of an identity matrix.
Solution
  [A] = [1 0 0 0]
        [0 1 0 0]
        [0 0 1 0]
        [0 0 0 1]
is an identity matrix.

Zero matrix: A matrix whose entries are all zero is called a zero matrix (aij = 0 for all i and j).

Example 9
Give examples of a zero matrix.
Solution
  [A] = [0 0 0]      [B] = [0 0 0]
        [0 0 0]            [0 0 0]
        [0 0 0]

  [C] = [0 0 0 0]    [D] = [0 0 0]
        [0 0 0 0]
        [0 0 0 0]
are all examples of a zero matrix.

Tridiagonal matrices: A tridiagonal matrix is a square matrix in which all elements not on the following are zero: the major diagonal, the diagonal above the major diagonal, and the diagonal below the major diagonal.

Example 10
Give an example of a tridiagonal matrix.
Solution
  [A] = [2 4 0 0]
        [2 3 9 0]
        [0 0 5 2]
        [0 0 3 6]
is a tridiagonal matrix.

Do non-square matrices have diagonal entries?
Yes. For an m x n matrix [A], the diagonal entries are a11, a22, ..., akk where k = min{m, n}.

Example 11
What are the diagonal entries of
  [A] = [3.2  5  ]
        [6    7  ]
        [2.9  3.2]
        [5.6  7.8]
Solution
The diagonal entries of [A] are a11 = 3.2 and a22 = 7.

Diagonally Dominant Matrix: An n x n square matrix [A] is a diagonally dominant matrix if
  |aii| >= sum over j = 1 to n, j not equal to i, of |aij|  for all i = 1, 2, ..., n
and
  |aii| > sum over j = 1 to n, j not equal to i, of |aij|  for at least one i,
that is, for each row the absolute value of the diagonal element is greater than or equal to the sum of the absolute values of the rest of the elements of that row, and the inequality is strictly greater than for at least one row. Diagonally dominant matrices are important in ensuring convergence in iterative schemes of solving simultaneous linear equations.

Example 12
Give examples of diagonally dominant matrices and not diagonally dominant matrices.
Solution
  [A] = [15   6   7]
        [ 2  -4  -2]
        [ 3   2   6]
is a diagonally dominant matrix, as
  |a11| = |15| = 15 >= |a12| + |a13| = |6| + |7| = 13
  |a22| = |-4| = 4 >= |a21| + |a23| = |2| + |-2| = 4
  |a33| = |6| = 6 >= |a31| + |a32| = |3| + |2| = 5
and for at least one row (Rows 1 and 3 in this case) the inequality is strictly greater than.

  [B] = [-15   6   9    ]
        [  2  -4   2    ]
        [  3  -2   5.001]
is a diagonally dominant matrix, as
  |b11| = |-15| = 15 >= |b12| + |b13| = |6| + |9| = 15
  |b22| = |-4| = 4 >= |b21| + |b23| = |2| + |2| = 4
  |b33| = |5.001| >= |b31| + |b32| = |3| + |-2| = 5
The inequalities are satisfied for all rows, and the inequality is strictly greater than for at least one row (in this case, Row 3).

  [C] = [ 25  5  1]
        [ 64  8  1]
        [144 12  1]
is not diagonally dominant, as
  |c22| = |8| = 8 < |c21| + |c23| = |64| + |1| = 65

When are two matrices considered to be equal?

Example 13
What would make
  [A] = [2 3]
        [6 7]
equal to
  [B] = [b11  3  ]
        [6    b22]
Solution
The two matrices [A] and [B] would be equal if b11 = 2 and b22 = 7.
Formally, two matrices [A] and [B] are equal if the size of [A] and [B] is the same (the number of rows and the number of columns are the same for [A] and [B]) and aij = bij for all i and j.

Key Terms:
Matrix
Vector
Submatrix
Square matrix
Equal matrices
Zero matrix
Identity matrix
Diagonal matrix
Upper triangular matrix
Lower triangular matrix
Tridiagonal matrix
Diagonally dominant matrix

Chapter 04.02
Vectors

After reading this chapter, you should be able to:
1. define a vector,
2. add and subtract vectors,
3. find linear combinations of vectors and their relationship to a set of equations,
4. explain what it means to have a linearly independent set of vectors, and
5. find the rank of a set of vectors.

What is a vector?

A vector is a collection of numbers in a definite order. If it is a collection of n numbers, it is called an n-dimensional vector. So the vector A given by
  A = [a1]
      [a2]
      [...]
      [an]
is an n-dimensional column vector with n components a1, a2, ..., an. The above is a column vector. A row vector [B] is of the form
  B = [b1, b2, ..., bn]
where B is an n-dimensional row vector with n components b1, b2, ..., bn.

Example 1
Give an example of a 3-dimensional column vector.
Solution
Assume a point in space is given by its (x, y, z) coordinates. Then if the values are x = 3, y = 2, z = 5, the column vector corresponding to the location of the point is
  [x]   [3]
  [y] = [2]
  [z]   [5]

When are two vectors equal?

Two vectors A and B are equal if they are of the same dimension and if their corresponding components are equal.

Example 2
What are the values of the unknown components in B if
  A = [2]        B = [b1]
      [3]            [3 ]
      [4]            [4 ]
      [1]            [b4]
and A = B?
Solution
  b1 = 2, b4 = 1

How do you add two vectors?

Two vectors can be added only if they are of the same dimension, and the addition is given by
  [A] + [B] = [a1]   [b1]   [a1 + b1]
              [a2] + [b2] = [a2 + b2]
              [...]   [...]   [...    ]
              [an]   [bn]   [an + bn]

Example 3
Add the two vectors
  A = [2]        B = [ 5]
      [3]            [-2]
      [4]            [ 3]
      [1]            [ 7]
Solution
  A + B = [2 + 5]   [7]
          [3 - 2] = [1]
          [4 + 3]   [7]
          [1 + 7]   [8]

Example 4
A store sells three brands of tires: Tirestone, Michigan and Copper. In quarter 1, the sales are given by the column vector
  A1 = [25]
       [ 5]
       [ 6]
where the rows represent the number of Tirestone, Michigan and Copper tires sold, respectively. In quarter 2, the sales are given by
  A2 = [20]
       [10]
       [ 6]
What are the total sales of each brand of tire in the first half of the year?
Solution
The total sales would be given by
  C = A1 + A2
    = [25 + 20]   [45]
      [ 5 + 10] = [15]
      [ 6 +  6]   [12]
So the number of Tirestone tires sold is 45, Michigan is 15 and Copper is 12 in the first half of the year.

What is a null vector?

A null vector is a vector all of whose components are zero.

Example 5
Give an example of a null vector or zero vector.
Solution
  [0]
  [0]
  [0]
  [0]
is an example of a zero or null vector.

What is a unit vector?

A unit vector U is defined as
  U = [u1]
      [u2]
      [...]
      [un]
where
  u1^2 + u2^2 + u3^2 + ... + un^2 = 1

Example 6
Give examples of 3-dimensional unit column vectors.
Solution
Examples include
  [1/sqrt(3)]   [1/sqrt(2)]   [0]   [1]
  [1/sqrt(3)],  [0        ],  [1],  [0], etc.
  [1/sqrt(3)]   [1/sqrt(2)]   [0]   [0]

How do you multiply a vector by a scalar?

If k is a scalar and A is an n-dimensional vector, then
  kA = k [a1]   [ka1]
         [a2] = [ka2]
         [...]   [... ]
         [an]   [kan]

Example 7
What is 2A if
  A = [25]
      [20]
      [ 5]
Solution
  2A = 2 [25]   [50]
         [20] = [40]
         [ 5]   [10]

Example 8
A store sells three brands of tires: Tirestone, Michigan and Copper. In quarter 1, the sales are given by the column vector
  A = [25]
      [25]
      [ 6]
If the goal is to increase the sales of all tires by at least 25% in the next quarter, how many of each brand should be sold?
Solution
Since the goal is to increase the sales by 25%, one would multiply the A vector by 1.25,
  B = 1.25 [25]   [31.25]
           [25] = [31.25]
           [ 6]   [7.5  ]
Since the number of tires must be an integer, we can say that the goal of sales is
  B = [32]
      [32]
      [ 8]

What do you mean by a linear combination of vectors?

Given A1, A2, ..., Am as m vectors of the same dimension n, and if k1, k2, ..., km are scalars, then
  k1 A1 + k2 A2 + ... + km Am
is a linear combination of the m vectors.

Example 9
Find the linear combinations
a) A + B and
b) A + B - 3C
where
  A = [2]    B = [1]    C = [10]
      [3]        [1]        [ 1]
      [6]        [2]        [ 2]
Solution
a) A + B = [2 + 1]   [3]
           [3 + 1] = [4]
           [6 + 2]   [8]
b) A + B - 3C = [2 + 1 - 30]   [-27]
                [3 + 1 - 3 ] = [ 1 ]
                [6 + 2 - 6 ]   [ 2 ]

What do you mean by vectors being linearly independent?

A set of vectors A1, A2, ..., Am is considered to be linearly independent if
  k1 A1 + k2 A2 + ... + km Am = 0
has only one solution, k1 = k2 = ... = km = 0.
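The vector operations defined above (addition, scalar multiplication, and linear combination) can be sketched in a few lines. This code is an illustrative addition, not part of the original chapter; the function names are ours, and the checks use the numbers from Examples 3 and 9.

```python
def vec_add(a, b):
    # Vectors add component-wise; they must have the same dimension.
    assert len(a) == len(b)
    return [x + y for x, y in zip(a, b)]

def scalar_mult(k, a):
    # Multiplying a vector by a scalar k multiplies every component by k.
    return [k * x for x in a]

def linear_combination(ks, vectors):
    # k1*A1 + k2*A2 + ... + km*Am for scalars ks and same-dimension vectors.
    result = [0] * len(vectors[0])
    for k, v in zip(ks, vectors):
        result = vec_add(result, scalar_mult(k, v))
    return result

# Example 3: (2,3,4,1) + (5,-2,3,7) = (7,1,7,8)
s = vec_add([2, 3, 4, 1], [5, -2, 3, 7])
# Example 9b: A + B - 3C with A=(2,3,6), B=(1,1,2), C=(10,1,2)
c = linear_combination([1, 1, -3], [[2, 3, 6], [1, 1, 2], [10, 1, 2]])
```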
Example 10
Are the three vectors
  A1 = [ 25]    A2 = [ 5]    A3 = [1]
       [ 64]         [ 8]         [1]
       [144]         [12]         [1]
linearly independent?
Solution
Writing the linear combination of the three vectors
  k1 [ 25]      [ 5]      [1]   [0]
     [ 64] + k2 [ 8] + k3 [1] = [0]
     [144]      [12]      [1]   [0]
gives
  25 k1 + 5 k2 + k3 = 0    (1)
  64 k1 + 8 k2 + k3 = 0    (2)
  144 k1 + 12 k2 + k3 = 0  (3)
The above equations have only one solution, k1 = k2 = k3 = 0. However, how do we show that this is the only solution? This is shown below. Subtracting Eqn (1) from Eqn (2) gives
  39 k1 + 3 k2 = 0
  k2 = -13 k1    (4)
Multiplying Eqn (1) by 8 and subtracting it from Eqn (2) that is first multiplied by 5 gives
  120 k1 - 3 k3 = 0
  k3 = 40 k1    (5)
Remember we found Eqn (4) and Eqn (5) just from Eqns (1) and (2). Substitution of Eqns (4) and (5) in Eqn (3) for k2 and k3 gives
  144 k1 + 12(-13 k1) + 40 k1 = 0
  28 k1 = 0
  k1 = 0
This means that k1 has to be zero, and coupled with (4) and (5), k2 and k3 are also zero. So the only solution is k1 = k2 = k3 = 0. The three vectors hence are linearly independent.

Example 11
Are the three vectors
  A1 = [1]    A2 = [2]    A3 = [ 6]
       [2]         [5]         [14]
       [5]         [7]         [24]
linearly independent?
Solution
By inspection,
  A3 = 2 A1 + 2 A2
or
  2 A1 + 2 A2 - A3 = 0
So the linear combination
  k1 A1 + k2 A2 + k3 A3 = 0
has a non-zero solution
  k1 = 2, k2 = 2, k3 = -1
Hence, the set of vectors is linearly dependent.

What if I cannot prove by inspection, what do I do? Put the linear combination of the three vectors equal to the zero vector,
  k1 [1]      [2]      [ 6]   [0]
     [2] + k2 [5] + k3 [14] = [0]
     [5]      [7]      [24]   [0]
to give
  k1 + 2 k2 + 6 k3 = 0      (1)
  2 k1 + 5 k2 + 14 k3 = 0   (2)
  5 k1 + 7 k2 + 24 k3 = 0   (3)
Multiplying Eqn (1) by 2 and subtracting from Eqn (2) gives
  k2 + 2 k3 = 0
  k2 = -2 k3    (4)
Multiplying Eqn (1) by 2.5 and subtracting it from Eqn (2) gives
  -0.5 k1 - k3 = 0
  k1 = -2 k3    (5)
Remember we found Eqn (4) and Eqn (5) just from Eqns (1) and (2). Substituting Eqns (4) and (5) in Eqn (3) for k1 and k2 gives
  5(-2 k3) + 7(-2 k3) + 24 k3 = 0
  -10 k3 - 14 k3 + 24 k3 = 0
  0 = 0
This means any values satisfying Eqns (4) and (5) will satisfy Eqns (1), (2) and (3) simultaneously. For example, choose k3 = 6; then k2 = -12 from Eqn (4) and k1 = -12 from Eqn (5). Hence we have a nontrivial solution
  (k1, k2, k3) = (-12, -12, 6)
This implies the three given vectors are linearly dependent. Can you find another nontrivial solution?

What about the following three vectors?
  [1]    [2]    [ 6]
  [2],   [5],   [14]
  [5]    [7]    [25]
Are they linearly dependent or linearly independent? Note that the only difference between this set of vectors and the previous one is the third entry in the third vector. Hence, equations (4) and (5) are still valid. What conclusion do you draw when you plug in equations (4) and (5) in the third equation, 5 k1 + 7 k2 + 25 k3 = 0? What has changed?

Example 12
Are the three vectors
  A1 = [25]    A2 = [ 5]    A3 = [1]
       [64]         [ 8]         [1]
       [89]         [13]         [2]
linearly independent?
Solution
Writing the linear combination of the three vectors and equating to the zero vector
  k1 [25]      [ 5]      [1]   [0]
     [64] + k2 [ 8] + k3 [1] = [0]
     [89]      [13]      [2]   [0]
gives
  25 k1 + 5 k2 + k3 = 0
  64 k1 + 8 k2 + k3 = 0
  89 k1 + 13 k2 + 2 k3 = 0
In addition to k1 = k2 = k3 = 0, one can find other solutions for which k1, k2, k3 are not all equal to zero. For example, k1 = 1, k2 = -13, k3 = 40 is also a solution, that is
  1 [25]      [ 5]      [1]   [0]
    [64] - 13 [ 8] + 40 [1] = [0]
    [89]      [13]      [2]   [0]
So the linear combination that gives us a zero vector consists of non-zero constants. Hence A1, A2, A3 are linearly dependent.

What do you mean by the rank of a set of vectors?

From a set of n-dimensional vectors, the maximum number of linearly independent vectors in the set is called the rank of the set of vectors. Note that the rank of the vectors can never be greater than the dimension of the vectors.

Example 13
What is the rank of
  A1 = [ 25]    A2 = [ 5]    A3 = [1]
       [ 64]         [ 8]         [1]
       [144]         [12]         [1]
Solution
Since we found in Example 10 that A1, A2, A3 are linearly independent, the rank of the set of vectors A1, A2, A3 is 3.

Example 14
What is the rank of
  A1 = [25]    A2 = [ 5]    A3 = [1]
       [64]         [ 8]         [1]
       [89]         [13]         [2]
Solution
In Example 12, we found that A1, A2, A3 are linearly dependent. The rank of A1, A2, A3 is hence not 3, and is less than 3. Is it 2? Let us choose
  A1 = [25]    A2 = [ 5]
       [64]         [ 8]
       [89]         [13]
The linear combination of A1 and A2 equal to zero has only one solution, the trivial one. Therefore A1 and A2 are linearly independent, and the rank is 2.

Example 15
What is the rank of
  A1 = [1]    A2 = [2]    A3 = [3]
       [2]         [4]         [5]
Solution
From inspection, A2 = 2 A1, which implies
  2 A1 - A2 + 0 A3 = 0
So
  k1 A1 + k2 A2 + k3 A3 = 0
has a non-zero solution. Hence A1, A2, A3 are linearly dependent, and so the rank of the three vectors is not 3. But
  k1 A1 + k3 A3 = 0
has the trivial solution as the only solution, so A1 and A3 are linearly independent. The rank of the above three vectors is 2.

Prove that if a set of vectors contains the null vector, then the set of vectors is linearly dependent.

Let A1, A2, ..., Am be a set of n-dimensional vectors. Then
  k1 A1 + k2 A2 + ... + km Am = 0
is a linear combination of the m vectors. Assuming A1 is the zero or null vector, any value of k1 coupled with k2 = k3 = ... = km = 0 will satisfy the above equation. Hence, the set of vectors is linearly dependent, as more than one solution exists.

Prove that if a set of vectors is linearly independent, then a subset of the m vectors also has to be linearly independent.

Let this subset be Aa1, Aa2, ..., Aap, where p < m. Then if this subset is linearly dependent, the linear combination
  k1 Aa1 + k2 Aa2 + ... + kp Aap = 0
has a non-trivial solution. So
  k1 Aa1 + k2 Aa2 + ... + kp Aap + 0 Aa(p+1) + ... + 0 Aam = 0
also has a non-trivial solution, where Aa(p+1), ..., Aam are the rest of the (m - p) vectors. However, this is a contradiction, because the full set of m vectors is linearly independent. Therefore, a subset of linearly independent vectors cannot be linearly dependent.

Prove that if a set of vectors is linearly dependent, then at least one vector can be written as a linear combination of the others.

Let A1, A2, ..., Am be linearly dependent. Then there exists a set of numbers k1, ..., km, not all of which are zero, for the linear combination
  k1 A1 + k2 A2 + ... + km Am = 0
Let kp be one of the non-zero values of ki, i = 1, ..., m. Then
  Ap = -(k1/kp) A1 - ... - (k(p-1)/kp) A(p-1) - (k(p+1)/kp) A(p+1) - ... - (km/kp) Am
and that proves the theorem.

Prove that if the dimension of a set of vectors is less than the number of vectors in the set, then the set of vectors is linearly dependent. Can you prove it?
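The independence tests of Examples 10 to 12 amount to computing the rank of the matrix whose columns are the given vectors. The following sketch is an addition to the text (the function name is ours); it uses Gaussian elimination with exact rational arithmetic so the results are not blurred by round-off.

```python
from fractions import Fraction as F

def rank(vectors):
    # Rank of a set of column vectors = number of pivots found by
    # Gauss-Jordan elimination on the matrix whose columns are the vectors.
    rows = [[F(v[i]) for v in vectors] for i in range(len(vectors[0]))]
    r = 0
    for col in range(len(vectors)):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# Example 10's vectors are linearly independent (rank 3) ...
assert rank([[25, 64, 144], [5, 8, 12], [1, 1, 1]]) == 3
# ... while Examples 11 and 12 give linearly dependent sets (rank 2).
assert rank([[1, 2, 5], [2, 5, 7], [6, 14, 24]]) == 2
assert rank([[25, 64, 89], [5, 8, 13], [1, 1, 2]]) == 2
```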
How can vectors be used to write simultaneous linear equations?

If a set of m linear equations with n unknowns is written as
  a11 x1 + ... + a1n xn = c1
  a21 x1 + ... + a2n xn = c2
  ...
  am1 x1 + ... + amn xn = cm
where x1, x2, ..., xn are the unknowns, then in vector notation they can be written as
  x1 A1 + x2 A2 + ... + xn An = C
where
  A1 = [a11]    A2 = [a12]    ...    An = [a1n]    C = [c1]
       [... ]         [... ]               [... ]        [...]
       [am1]         [am2]               [amn]        [cm]
The problem now becomes whether you can find the scalars x1, x2, ..., xn such that the linear combination
  x1 A1 + ... + xn An = C

Example 16
Write
  25 x1 + 5 x2 + x3 = 106.8
  64 x1 + 8 x2 + x3 = 177.2
  144 x1 + 12 x2 + x3 = 279.2
as a linear combination of vectors.
Solution
  x1 [ 25]      [ 5]      [1]   [106.8]
     [ 64] + x2 [ 8] + x3 [1] = [177.2]
     [144]      [12]      [1]   [279.2]

What is the definition of the dot product of two vectors?

Let A = (a1, a2, ..., an) and B = (b1, b2, ..., bn) be two n-dimensional vectors. Then the dot product of the two vectors A and B is defined as
  A . B = a1 b1 + a2 b2 + ... + an bn = sum over i = 1 to n of ai bi
A dot product is also called an inner product or scalar product.

Example 17
Find the dot product of the two vectors A = (4, 1, 2, 3) and B = (3, 1, 7, 2).
Solution
  A . B = (4)(3) + (1)(1) + (2)(7) + (3)(2)
        = 33

Example 18
A product line needs three types of rubber as given in the table below.

  Rubber type   Weight (lbs)   Cost per pound ($)
  A             200            20.23
  B             250            30.56
  C             310            29.12

Use the definition of a dot product to find the total price of the rubber needed.
Solution
The weight vector is given by
  W = (200, 250, 310)
and the cost vector is given by
  C = (20.23, 30.56, 29.12)
The total cost of the rubber would be the dot product of W and C.
  W . C = (200)(20.23) + (250)(30.56) + (310)(29.12)
        = 4046 + 7640 + 9027.2
        = $20713.20

Key Terms:
Vector
Addition of vectors
Subtraction of vectors
Scalar multiplication of vectors
Null vector
Unit vector
Linear combination of vectors
Linearly independent vectors
Rank
Dot product

Chapter 04.03
Binary Matrix Operations

After reading this chapter, you should be able to
1. add, subtract, and multiply matrices, and
2. apply rules of binary operations on matrices.

How do you add two matrices?

Two matrices [A] and [B] can be added only if they are the same size. The addition is then shown as
  [C] = [A] + [B]
where
  cij = aij + bij

Example 1
Add the following two matrices.
  [A] = [5 2 3]    [B] = [6 7 -2]
        [1 2 7]          [3 5 19]
Solution
  [C] = [A] + [B]
      = [5 2 3] + [6 7 -2]
        [1 2 7]   [3 5 19]
      = [5+6  2+7  3-2 ]
        [1+3  2+5  7+19]
      = [11 9  1]
        [ 4 7 26]
Example 2
Blowout r'us store has two store locations A and B, and their sales of tires are given by make (in rows) and quarters (in columns) as shown below.
  [A] = [25 20 3 2; 5 10 15 25; 6 16 7 27]
  [B] = [20 5 4 0; 3 6 15 21; 4 1 7 20]
where the rows represent the sales of Tirestone, Michigan and Copper tires respectively, and the columns represent the quarter number: 1, 2, 3 and 4. What are the total tire sales for the two locations by make and quarter?
Solution
  [C] = [A] + [B]
      = [25+20  20+5  3+4  2+0; 5+3  10+6  15+15  25+21; 6+4  16+1  7+7  27+20]
      = [45 25 7 2; 8 16 30 46; 10 17 14 47]
So if one wants to know the total number of Copper tires sold in quarter 4 at the two locations, we would look at row 3, column 4 to give c34 = 47.

How do you subtract two matrices?
Two matrices [A] and [B] can be subtracted only if they are the same size. The subtraction is then given by
  [D] = [A] - [B]
where
  dij = aij - bij

Example 3
Subtract matrix [B] from matrix [A].
  [A] = [5 2 3; 1 2 7]
  [B] = [6 7 -2; 3 5 19]
Solution
  [D] = [A] - [B]
      = [5-6  2-7  3-(-2); 1-3  2-5  7-19]
      = [-1 -5 5; -2 -3 -12]

Example 4
Blowout r'us has two store locations A and B, and their sales of tires are given by make (in rows) and quarters (in columns) as shown below.
  [A] = [25 20 3 2; 5 10 15 25; 6 16 7 27]
  [B] = [20 5 4 0; 3 6 15 21; 4 1 7 20]
where the rows represent the sales of Tirestone, Michigan and Copper tires respectively, and the columns represent the quarter number: 1, 2, 3 and 4. How many more tires did store A sell than store B of each brand in each quarter?
Solution
  [D] = [A] - [B]
      = [25-20  20-5  3-4  2-0; 5-3  10-6  15-15  25-21; 6-4  16-1  7-7  27-20]
      = [5 15 -1 2; 2 4 0 4; 2 15 0 7]
So if you want to know how many more Copper tires were sold in quarter 4 in store A than store B, d34 = 7. Note that d13 = -1 implies that store A sold 1 less Michigan tire than store B in quarter 3.
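Element-by-element addition and subtraction, as defined above, can be sketched in a few lines (Python; not part of the original chapter). The checks reproduce c34 = 47 from Example 2 and d34 = 7 from Example 4:

```python
# cij = aij + bij and dij = aij - bij, defined only for same-size matrices
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Tire sales at store locations A and B (rows: brands, columns: quarters)
A = [[25, 20, 3, 2], [5, 10, 15, 25], [6, 16, 7, 27]]
B = [[20, 5, 4, 0], [3, 6, 15, 21], [4, 1, 7, 20]]

total = mat_add(A, B)  # combined sales by make and quarter
diff = mat_sub(A, B)   # how many more tires store A sold than store B
```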
How do I multiply two matrices?
Two matrices [A] and [B] can be multiplied only if the number of columns of [A] is equal to the number of rows of [B], to give
  [C]mxn = [A]mxp [B]pxn
If [A] is an m x p matrix and [B] is a p x n matrix, the resulting matrix [C] is an m x n matrix. So how does one calculate the elements of the [C] matrix?
  cij = sum of aik*bkj for k = 1 to p
      = ai1*b1j + ai2*b2j + ... + aip*bpj
for each i = 1, 2, ..., m and j = 1, 2, ..., n. To put it in simpler terms, the ith row and jth column element of [C] in [C] = [A][B] is calculated by multiplying the ith row of [A] by the jth column of [B].

Example 5
Given
  [A] = [5 2 3; 1 2 7]
  [B] = [3 -2; 5 -8; 9 -10]
find
  [C] = [A][B]
Solution
c12 can be found by multiplying the first row of [A] by the second column of [B]:
  c12 = (5)(-2) + (2)(-8) + (3)(-10) = -56
Similarly, one can find the other elements of [C] to give
  [C] = [52 -56; 76 -88]

Example 6
Blowout r'us store location A has sales of tires given by make (in rows) and quarters (in columns) as shown below.
  [A] = [25 20 3 2; 5 10 15 25; 6 16 7 27]
where the rows represent the sale of Tirestone, Michigan and Copper tires respectively, and the columns represent the quarter number: 1, 2, 3, and 4. Find the per-quarter sales of store A if the following are the prices of each tire:
  Tirestone = $33.25
  Michigan = $40.19
  Copper = $25.03
Solution
The answer is given by multiplying the price matrix by the quantity of sales of store A. The price matrix is [33.25 40.19 25.03], so the per-quarter sales of store A would be given by
  [C] = [33.25 40.19 25.03][25 20 3 2; 5 10 15 25; 6 16 7 27]
Using
  cij = sum of aik*bkj for k = 1 to 3
  c11 = a11*b11 + a12*b21 + a13*b31 = (33.25)(25) + (40.19)(5) + (25.03)(6) = $1182.38
Similarly, c12 = $1467.38, c13 = $877.81 and c14 = $1747.06. Remember that since we are multiplying a 1 x 3 matrix by a 3 x 4 matrix, the resulting matrix is a 1 x 4 matrix. Therefore, each quarter's sales of store A in dollars are given by the four columns of the row vector
  [C] = [1182.38 1467.38 877.81 1747.06]
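The summation formula cij = sum of aik*bkj can be implemented as a direct triple loop. This sketch (Python; not part of the original chapter) reproduces Example 5 and the per-quarter sales of Example 6:

```python
# cij = sum over k of aik * bkj; columns of [A] must equal rows of [B]
def matmul(A, B):
    assert len(A[0]) == len(B), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Example 5: a 2 x 3 matrix times a 3 x 2 matrix gives a 2 x 2 matrix
C5 = matmul([[5, 2, 3], [1, 2, 7]],
            [[3, -2], [5, -8], [9, -10]])

# Example 6: 1 x 3 price matrix times 3 x 4 sales matrix gives 1 x 4 quarterly sales
quarterly = matmul([[33.25, 40.19, 25.03]],
                   [[25, 20, 3, 2], [5, 10, 15, 25], [6, 16, 7, 27]])
```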
What is the scalar product of a constant and a matrix?
If [A] is an n x n matrix and k is a real number, then the scalar product of k and [A] is another n x n matrix [B], where bij = k*aij.

Example 7
Let
  [A] = [2.1 3 2; 5 1 6]
Find 2[A].
Solution
  2[A] = [2(2.1) 2(3) 2(2); 2(5) 2(1) 2(6)] = [4.2 6 4; 10 2 12]

What is a linear combination of matrices?
If [A1], [A2], ..., [Ap] are matrices of the same size and k1, k2, ..., kp are scalars, then
  k1[A1] + k2[A2] + ... + kp[Ap]
is called a linear combination of [A1], [A2], ..., [Ap].

Example 8
If
  [A1] = [5 6 2; 3 2 1], [A2] = [2.1 3 2; 5 1 6], [A3] = [0 2.2 2; 3 3.5 6]
then find [A1] + 2[A2] - 0.5[A3].
Solution
  [A1] + 2[A2] - 0.5[A3]
    = [5 6 2; 3 2 1] + [4.2 6 4; 10 2 12] - [0 1.1 1; 1.5 1.75 3]
    = [9.2 10.9 5; 11.5 2.25 10]

What are some of the rules of binary matrix operations?

Commutative law of addition
If [A] and [B] are m x n matrices, then
  [A] + [B] = [B] + [A]

Associative law of addition
If [A], [B] and [C] are all m x n matrices, then
  [A] + ([B] + [C]) = ([A] + [B]) + [C]

Associative law of multiplication
If [A], [B] and [C] are m x n, n x p and p x r size matrices, respectively, then
  [A]([B][C]) = ([A][B])[C]
and the resulting matrix size on both sides of the equation is m x r.

Distributive law
If [A] and [B] are m x n size matrices, and [C] and [D] are n x p size matrices, then
  [A]([C] + [D]) = [A][C] + [A][D]
  ([A] + [B])[C] = [A][C] + [B][C]
and the resulting matrix size on both sides of the equation is m x p.

Example 9
Illustrate the associative law of multiplication of matrices using
  [A] = [1 2; 3 5; 0 2], [B] = [2 5; 9 6], [C] = [2 1; 3 5]
Solution
  [B][C] = [2(2)+5(3)  2(1)+5(5); 9(2)+6(3)  9(1)+6(5)] = [19 27; 36 39]
  [A]([B][C]) = [1 2; 3 5; 0 2][19 27; 36 39] = [91 105; 237 276; 72 78]
  [A][B] = [1 2; 3 5; 0 2][2 5; 9 6] = [20 17; 51 45; 18 12]
  ([A][B])[C] = [20 17; 51 45; 18 12][2 1; 3 5] = [91 105; 237 276; 72 78]
The above illustrates the associative law of multiplication of matrices.

Is [A][B] = [B][A]?
If [A][B] exists, the number of columns of [A] has to be the same as the number of rows of [B], and if [B][A] exists, the number of columns of [B] has to be the same as the number of rows of [A]. Now for [A][B] = [B][A], the resulting matrices from [A][B] and [B][A] have to be of the same size. This is only possible if [A] and [B] are square and of the same size. Even then, in general,
  [A][B] != [B][A]

Example 10
Determine if [A][B] = [B][A] for the following matrices:
  [A] = [6 3; 2 5], [B] = [-3 2; 1 5]
Solution
  [A][B] = [6 3; 2 5][-3 2; 1 5] = [-15 27; -1 29]
  [B][A] = [-3 2; 1 5][6 3; 2 5] = [-14 1; 16 28]
so [A][B] != [B][A].

Key Terms:
Addition of matrices, Subtraction of matrices, Multiplication of matrices, Scalar product of matrices, Linear combination of matrices, Rules of binary matrix operations
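Before moving on, the chapter's operations can be spot-checked numerically. The following sketch (Python; not part of the original chapter) recomputes the linear combination of Example 8 and the non-commutativity of Example 10:

```python
# Scalar product (bij = k * aij), addition, and multiplication helpers
def scal(k, A):
    return [[k * x for x in row] for row in A]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Example 8: [A1] + 2[A2] - 0.5[A3]
A1 = [[5, 6, 2], [3, 2, 1]]
A2 = [[2.1, 3, 2], [5, 1, 6]]
A3 = [[0, 2.2, 2], [3, 3.5, 6]]
R = mat_add(mat_add(A1, scal(2, A2)), scal(-0.5, A3))

# Example 10: matrix multiplication is not commutative in general
A = [[6, 3], [2, 5]]
B = [[-3, 2], [1, 5]]
AB = matmul(A, B)  # [[-15, 27], [-1, 29]]
BA = matmul(B, A)  # [[-14, 1], [16, 28]]
```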
Chapter 04.04
Unary Matrix Operations

After reading this chapter, you should be able to:
1. know what unary operations mean,
2. find the transpose of a square matrix and its relationship to symmetric matrices,
3. find the trace of a matrix, and
4. find the determinant of a matrix by the cofactor method.

What is the transpose of a matrix?
Let [A] be an m x n matrix. Then [B] is the transpose of [A] if bij = aji for all i and j. That is, the ith row and jth column element of [A] is the jth row and ith column element of [B]. Note that [B] would be an n x m matrix. The transpose of [A] is denoted by [A]^T.

Example 1
Find the transpose of
  [A] = [25 20 3 2; 5 10 15 25; 6 16 7 27]
Solution
The transpose of [A] is
  [A]^T = [25 5 6; 20 10 16; 3 15 7; 2 25 27]
Note, the transpose of a row vector is a column vector and the transpose of a column vector is a row vector. Also, note that the transpose of a transpose of a matrix is the matrix itself, that is, ([A]^T)^T = [A]. Also, (A + B)^T = A^T + B^T and (cA)^T = c A^T.

What is a symmetric matrix?
A square matrix [A] with real elements where aij = aji for i = 1, 2, ..., n and j = 1, 2, ..., n is called a symmetric matrix. This is the same as [A] = [A]^T.

Example 2
Give an example of a symmetric matrix.
Solution
  [A] = [21.2 3.2 6; 3.2 21.5 8; 6 8 9.3]
is a symmetric matrix, as a12 = a21 = 3.2, a13 = a31 = 6 and a23 = a32 = 8.

What is a skew-symmetric matrix?
An n x n matrix is skew-symmetric if aij = -aji for i = 1, ..., n and j = 1, ..., n.

Example 3
Give an example of a skew-symmetric matrix.
Solution
  [A] = [0 1 2; -1 0 -5; -2 5 0]
is skew-symmetric, as a12 = -a21 = 1, a13 = -a31 = 2 and a23 = -a32 = -5.
Since aii = -aii only if aii = 0, all the diagonal elements of a skew-symmetric matrix have to be zero.

What is the trace of a matrix?
The trace of an n x n matrix [A] is the sum of the diagonal entries of [A], that is,
  tr[A] = sum of aii for i = 1 to n

Example 4
Find the trace of
  [A] = [15 6 7; 2 -4 2; 3 2 6]
Solution
  tr[A] = (15) + (-4) + (6) = 17

Example 5
The sales of tires are given by make (rows) and quarters (columns) for Blowout r'us store location A, as shown below.
  [A] = [25 20 3 2; 5 10 15 25; 6 16 7 27]
where the rows represent the sale of Tirestone, Michigan and Copper tires, and the columns represent the quarter numbers 1, 2, 3, 4. Find the total yearly revenue of store A if the prices of tires vary by quarters as follows:
  [B] = [33.25 30.01 35.02 30.05; 40.19 38.02 41.03 38.23; 25.03 22.02 27.03 22.95]
where the rows represent the cost of each tire made by Tirestone, Michigan and Copper, and the columns represent the quarter numbers.
Solution
To find the total tire sales of store A for the whole year, we need to find the sales of each brand of tire for the whole year and then add them to find the total sales. To do so, we need to rewrite the price matrix so that the quarters are in rows and the brand names are in the columns, that is, find the transpose of [B]:
  [C] = [B]^T = [33.25 40.19 25.03; 30.01 38.02 22.02; 35.02 41.03 27.03; 30.05 38.23 22.95]
Recognize now that if we find [A][C], we get
  [D] = [A][C] = [1597 1965 1193; 1743 2152 1325; 1736 2169 1311]
(entries rounded to the nearest dollar). The diagonal elements give the sales of each brand of tire for the whole year, that is,
  d11 = $1597 (Tirestone sales)
  d22 = $2152 (Michigan sales)
  d33 = $1311 (Copper sales)
The total yearly sales of all three brands of tires is
  sum of dii for i = 1 to 3 = 1597 + 2152 + 1311 = $5060
and this is the trace of the matrix [D].
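The unary operations introduced so far can be checked with a short script (Python; not part of the original chapter; the dollar figures in Example 5 are rounded to the nearest dollar, so the code rounds before comparing):

```python
# Transpose (bij = aji), multiplication, and trace (sum of diagonal entries)
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# Example 1: the transpose of the 3 x 4 sales matrix is 4 x 3
sales = [[25, 20, 3, 2], [5, 10, 15, 25], [6, 16, 7, 27]]
sales_T = transpose(sales)

# Example 2: a symmetric matrix equals its own transpose
S = [[21.2, 3.2, 6], [3.2, 21.5, 8], [6, 8, 9.3]]

# Example 4: trace of a 3 x 3 matrix, 15 - 4 + 6 = 17
tr = trace([[15, 6, 7], [2, -4, 2], [3, 2, 6]])

# Example 5: yearly revenue is the trace of sales times transposed prices
prices = [[33.25, 30.01, 35.02, 30.05],
          [40.19, 38.02, 41.03, 38.23],
          [25.03, 22.02, 27.03, 22.95]]
D = matmul(sales, transpose(prices))
yearly = trace(D)  # about $5060 before rounding
```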
Define the determinant of a matrix.
The determinant of a square matrix is a single unique real number corresponding to the matrix. For a matrix [A], the determinant is denoted by |A| or det(A). So do not use [A] and |A| interchangeably. For a 2 x 2 matrix,
  [A] = [a11 a12; a21 a22]
  det(A) = a11*a22 - a12*a21

How does one calculate the determinant of any square matrix?
Let [A] be an n x n matrix. The minor of entry aij is denoted by Mij and is defined as the determinant of the (n-1) x (n-1) submatrix of [A], where the submatrix is obtained by deleting the ith row and jth column of the matrix [A]. The determinant is then given by
  det(A) = sum over j of (-1)^(i+j) * aij * Mij for any i = 1, ..., n
or
  det(A) = sum over i of (-1)^(i+j) * aij * Mij for any j = 1, ..., n
coupled with det(A) = a11 for a 1 x 1 matrix [A], as we can always reduce the determinant of a matrix to determinants of 1 x 1 matrices. The number (-1)^(i+j) * Mij is called the cofactor of aij and is denoted by Cij. The above equation for the determinant can then be written as
  det(A) = sum over j of aij * Cij for any i = 1, ..., n
or
  det(A) = sum over i of aij * Cij for any j = 1, ..., n
The only reason why determinants are not generally calculated using this method is that it becomes computationally intensive: for an n x n matrix, it requires arithmetic operations proportional to n!.
Example 6
Find the determinant of
  [A] = [25 5 1; 64 8 1; 144 12 1]
Solution
Method 1: expand along the first row (i = 1):
  det(A) = a11*M11 - a12*M12 + a13*M13
The minors are
  M11 = det[8 1; 12 1] = 8 - 12 = -4
  M12 = det[64 1; 144 1] = 64 - 144 = -80
  M13 = det[64 8; 144 12] = 768 - 1152 = -384
so
  det(A) = 25(-4) - 5(-80) + 1(-384) = -100 + 400 - 384 = -84
Also, in terms of cofactors for i = 1:
  C11 = (-1)^(1+1) M11 = M11 = -4
  C12 = (-1)^(1+2) M12 = -M12 = 80
  C13 = (-1)^(1+3) M13 = M13 = -384
  det(A) = a11*C11 + a12*C12 + a13*C13 = 25(-4) + 5(80) + 1(-384) = -84

Method 2: expand along the second column (j = 2):
  det(A) = -a12*M12 + a22*M22 - a32*M32
The minors are
  M12 = det[64 1; 144 1] = -80
  M22 = det[25 1; 144 1] = 25 - 144 = -119
  M32 = det[25 1; 64 1] = 25 - 64 = -39
so
  det(A) = -5(-80) + 8(-119) - 12(-39) = 400 - 952 + 468 = -84
In terms of cofactors for j = 2:
  C12 = -M12 = 80, C22 = M22 = -119, C32 = -M32 = 39
  det(A) = a12*C12 + a22*C22 + a32*C32 = 5(80) + 8(-119) + 12(39) = 400 - 952 + 468 = -84

Is there a relationship between det(AB), and det(A) and det(B)?
Yes. If [A] and [B] are square matrices of the same size, then
  det(AB) = det(A) det(B)

Are there some other theorems that are important in finding the determinant of a square matrix?
Theorem 1: If a row or a column in an n x n matrix [A] is zero, then det(A) = 0.
Theorem 2: Let [A] be an n x n matrix. If a row is proportional to another row, then det(A) = 0.
Theorem 3: Let [A] be an n x n matrix. If a column is proportional to another column, then det(A) = 0.
Theorem 4: Let [A] be an n x n matrix. If a column or row is multiplied by k to result in matrix [B], then det(B) = k det(A).
Theorem 5: Let [A] be an n x n upper or lower triangular matrix. Then det(A) = product of aii for i = 1 to n.

Example 7
What is the determinant of
  [A] = [0 2 6 3; 0 3 7 4; 0 4 9 5; 0 5 2 1]
Solution
Since one of the columns (the first column in the above example) of [A] is zero, det(A) = 0.
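The cofactor expansion lends itself to a short recursive implementation. The sketch below (Python; not part of the original chapter) expands along the first row and checks Example 6 (det = -84), Example 7 (a zero column gives 0), and Theorem 5 (a triangular matrix's determinant is the product of its diagonal):

```python
# det(A) = a11 for a 1 x 1 matrix; otherwise expand along the first row:
# det(A) = sum over j of (-1)^(1+j) * a1j * M1j
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, column j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

d6 = det([[25, 5, 1], [64, 8, 1], [144, 12, 1]])                   # Example 6
d7 = det([[0, 2, 6, 3], [0, 3, 7, 4], [0, 4, 9, 5], [0, 5, 2, 1]])  # zero column
d_tri = det([[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]])            # 25 * -4.8 * 0.7
```

The recursion makes the n! cost mentioned above concrete: each n x n determinant spawns n determinants of size n-1.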
Example 8
What is the determinant of
  [A] = [2 1 6 4; 3 2 7 6; 5 4 2 10; 9 5 3 18]
Solution
det(A) is zero because the fourth column, [4; 6; 10; 18], is 2 times the first column, [2; 3; 5; 9].

Example 9
If the determinant of
  [A] = [25 5 1; 64 8 1; 144 12 1]
is -84, then what is the determinant of
  [B] = [25 10.5 1; 64 16.8 1; 144 25.2 1]
Solution
Since the second column of [B] is 2.1 times the second column of [A],
  det(B) = 2.1 det(A) = (2.1)(-84) = -176.4

Example 10
Given that the determinant of
  [A] = [25 5 1; 64 8 1; 144 12 1]
is -84, what is the determinant of
  [B] = [25 5 1; 0 -4.8 -1.56; 144 12 1]
Solution
Since [B] is simply obtained by subtracting 2.56 times the first row of [A] from the second row of [A],
  det(B) = det(A) = -84

Example 11
What is the determinant of
  [A] = [25 5 1; 0 -4.8 -1.56; 0 0 0.7]
Solution
Since [A] is an upper triangular matrix,
  det(A) = product of aii for i = 1 to 3 = a11*a22*a33 = (25)(-4.8)(0.7) = -84

Key Terms:
Transpose, Symmetric matrix, Skew-symmetric matrix, Trace of matrix, Determinant

Chapter 04.05
System of Equations

After reading this chapter, you should be able to:
1. setup simultaneous linear equations in matrix form and vice-versa,
2. understand the concept of the inverse of a matrix,
3. know the difference between a consistent and inconsistent system of linear equations, and
4. learn that a system of linear equations can have a unique solution, no solution or infinite solutions.
Matrix algebra is used for solving systems of equations. Can you illustrate this concept?
Matrix algebra is used to solve a system of simultaneous linear equations. In fact, for many mathematical procedures such as the solution to a set of nonlinear equations, interpolation, integration, and differential equations, the solutions reduce to a set of simultaneous linear equations. Let us illustrate with an example for interpolation.

Example 1
The upward velocity of a rocket is given at three different times in the following table.

  Table 5.1. Velocity vs. time data for a rocket
  Time, t (s)    Velocity, v (m/s)
  5              106.8
  8              177.2
  12             279.2

The velocity data is approximated by a polynomial as
  v(t) = a*t^2 + b*t + c, 5 <= t <= 12
Set up the equations in matrix form to find the coefficients a, b, c of the velocity profile.
Solution
The polynomial is going through three data points (t1, v1), (t2, v2) and (t3, v3), where from Table 5.1
  t1 = 5, v1 = 106.8
  t2 = 8, v2 = 177.2
  t3 = 12, v3 = 279.2
Requiring that v(t) = a*t^2 + b*t + c passes through the three data points gives
  v(t1) = v1 = a*t1^2 + b*t1 + c
  v(t2) = v2 = a*t2^2 + b*t2 + c
  v(t3) = v3 = a*t3^2 + b*t3 + c
Substituting the data (t1, v1), (t2, v2), and (t3, v3) gives
  25a + 5b + c = 106.8
  64a + 8b + c = 177.2
  144a + 12b + c = 279.2
This set of equations can be rewritten in the matrix form as
  [25 5 1; 64 8 1; 144 12 1][a; b; c] = [106.8; 177.2; 279.2]
The above equation can also be written as a linear combination as follows:
  a[25; 64; 144] + b[5; 8; 12] + c[1; 1; 1] = [106.8; 177.2; 279.2]
The above is an illustration of why matrix algebra is needed. The complete solution to the set of equations is given later in this chapter.

A general set of m linear equations and n unknowns,
  a11*x1 + a12*x2 + ... + a1n*xn = c1
  a21*x1 + a22*x2 + ... + a2n*xn = c2
  ...
  am1*x1 + am2*x2 + ... + amn*xn = cm
can be rewritten in the matrix form as
  [a11 a12 ... a1n; a21 a22 ... a2n; ...; am1 am2 ... amn][x1; x2; ...; xn] = [c1; c2; ...; cm]
Denoting the matrices by [A], [X], and [C], the system of equations is [A][X] = [C], where [A] is called the coefficient matrix, [C] is called the right hand side vector and [X] is called the solution vector. Sometimes [A][X] = [C] systems of equations are written in the augmented form, that is,
  [A|C] = [a11 a12 ... a1n | c1; a21 a22 ... a2n | c2; ...; am1 am2 ... amn | cm]

A system of equations can be consistent or inconsistent. What does that mean?
A system of equations [A][X] = [C] is consistent if there is a solution, and it is inconsistent if there is no solution. However, a consistent system of equations does not mean a unique solution; a consistent system of equations may have a unique solution or infinite solutions (Figure 5.1).

  Figure 5.1. Consistent and inconsistent system of equations flow chart:
  [A][X] = [C] is either a consistent system (with a unique solution or infinite solutions) or an inconsistent system.

Example 2
Give examples of consistent and inconsistent systems of equations.
Solution
a) The system of equations
  [2 4; 1 3][x; y] = [6; 4]
is a consistent system of equations, as it has a unique solution, that is, [x; y] = [1; 1].
b) The system of equations
  [2 4; 1 2][x; y] = [6; 3]
is also a consistent system of equations, but it has infinite solutions, as given as follows.
Expanding the above set of equations,
  2x + 4y = 6
  x + 2y = 3
you can see that they are the same equation. Hence, any combination of (x, y) that satisfies
  2x + 4y = 6
is a solution. For example, (x, y) = (1, 1) is a solution. Other solutions include (x, y) = (0.5, 1.25), (x, y) = (0, 1.5), and so on.
c) The system of equations
  [2 4; 1 2][x; y] = [6; 4]
is inconsistent, as no solution exists.

How can one distinguish between a consistent and inconsistent system of equations?
A system of equations [A][X] = [C] is consistent if the rank of [A] is equal to the rank of the augmented matrix [A|C]. A system of equations [A][X] = [C] is inconsistent if the rank of [A] is less than the rank of the augmented matrix [A|C].

But, what do you mean by rank of a matrix?
The rank of a matrix is defined as the order of the largest square submatrix whose determinant is not zero.

Example 3
What is the rank of
  [A] = [3 1 2; 2 0 5; 1 2 3]
Solution
The largest square submatrix possible is of order 3, and that is [A] itself. Since det(A) = -23 != 0, the rank of [A] = 3.

Example 4
What is the rank of
  [A] = [3 1 2; 2 0 5; 5 1 7]
Solution
The largest square submatrix of [A] is of order 3, and that is [A] itself. Since det(A) = 0, the rank of [A] is less than 3. The next largest square submatrix would be a 2 x 2 matrix. One of the square submatrices of [A] is
  [B] = [3 1; 2 0]
and det(B) = -2 != 0. Hence the rank of [A] is 2. There is no need to look at other 2 x 2 submatrices to establish that the rank of [A] is 2.
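In practice, rank is usually computed by Gaussian elimination (counting the nonzero pivot rows), which agrees with the largest-nonzero-determinant definition above. A sketch (Python; not part of the original chapter; the tolerance 1e-9 is an arbitrary choice for treating floating-point residues as zero):

```python
# Rank = number of nonzero pivot rows after forward elimination.
def rank(M, tol=1e-9):
    A = [[float(x) for x in row] for row in M]
    nrows, ncols, r = len(A), len(A[0]), 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if abs(A[i][c]) > tol), None)
        if piv is None:
            continue  # no pivot in this column
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, nrows):
            f = A[i][c] / A[r][c]
            for j in range(c, ncols):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

r3 = rank([[3, 1, 2], [2, 0, 5], [1, 2, 3]])  # Example 3: rank 3
r4 = rank([[3, 1, 2], [2, 0, 5], [5, 1, 7]])  # Example 4: rank 2 (row 3 = row 1 + row 2)
```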
Example 5
How do I now use the concept of rank to find if
  [25 5 1; 64 8 1; 144 12 1][x1; x2; x3] = [106.8; 177.2; 279.2]
is a consistent or inconsistent system of equations?
Solution
The coefficient matrix is
  [A] = [25 5 1; 64 8 1; 144 12 1]
and the right hand side vector is
  [C] = [106.8; 177.2; 279.2]
The augmented matrix is
  [B] = [25 5 1 | 106.8; 64 8 1 | 177.2; 144 12 1 | 279.2]
Since there are no square submatrices of order 4, as [B] is a 3 x 4 matrix, the rank of [B] is at most 3. So let us look at the square submatrices of [B] of order 3; if any of these square submatrices have a determinant not equal to zero, then the rank is 3. For example, a submatrix of the augmented matrix [B] is
  [D] = [25 5 1; 64 8 1; 144 12 1]
which has det(D) = -84 != 0. Hence the rank of the augmented matrix [B] is 3. Since [A] = [D], the rank of [A] is 3. Since the rank of the augmented matrix [B] equals the rank of the coefficient matrix [A], the system of equations is consistent.

Example 6
Use the concept of rank of a matrix to find if
  [25 5 1; 64 8 1; 89 13 2][x1; x2; x3] = [106.8; 177.2; 284.0]
is consistent or inconsistent.
Solution
The coefficient matrix is given by
  [A] = [25 5 1; 64 8 1; 89 13 2]
and the right hand side by
  [C] = [106.8; 177.2; 284.0]
The augmented matrix is
  [B] = [25 5 1 | 106.8; 64 8 1 | 177.2; 89 13 2 | 284.0]
Since there are no square submatrices of order 4, as [B] is a 3 x 4 matrix, the rank of the augmented matrix [B] is at most 3. So let us look at square submatrices of the augmented matrix [B] of order 3 and see if any of these have determinants not equal to zero. For example, a square submatrix of the augmented matrix [B] is
  [D] = [25 5 1; 64 8 1; 89 13 2]
which has det(D) = 0. This means we need to explore other square submatrices of order 3 of the augmented matrix [B] and find their determinants. That is,
  [E] = [5 1 106.8; 8 1 177.2; 13 2 284.0], det(E) = 0
  [F] = [25 1 106.8; 64 1 177.2; 89 2 284.0], det(F) = 0
  [G] = [25 5 106.8; 64 8 177.2; 89 13 284.0], det(G) = 0
All the square submatrices of order 3 x 3 of the augmented matrix [B] have a zero determinant.
So the rank of the augmented matrix [B] is less than 3. Is the rank of the augmented matrix [B] equal to 2? One of the 2 x 2 submatrices of the augmented matrix [B] is
  [H] = [25 5; 64 8]
and det(H) = -120 != 0. So the rank of the augmented matrix [B] is 2. Now we need to find the rank of the coefficient matrix [A]:
  [A] = [25 5 1; 64 8 1; 89 13 2], det(A) = 0
So the rank of the coefficient matrix [A] is less than 3. A square submatrix of the coefficient matrix [A] is
  [J] = [5 1; 8 1]
with det(J) = -3 != 0. So the rank of the coefficient matrix [A] is 2. Hence, the rank of the coefficient matrix [A] equals the rank of the augmented matrix [B]. So the system of equations [A][X] = [C] is consistent.

Example 7
Use the concept of rank to find if
  [25 5 1; 64 8 1; 89 13 2][x1; x2; x3] = [106.8; 177.2; 280.0]
is consistent or inconsistent.
Solution
The augmented matrix is
  [B] = [25 5 1 | 106.8; 64 8 1 | 177.2; 89 13 2 | 280.0]
Since there are no square submatrices of order 4, as the augmented matrix [B] is a 3 x 4 matrix, the rank of the augmented matrix [B] is at most 3. So let us look at square submatrices of the augmented matrix [B] of order 3 and see if any of the 3 x 3 submatrices have a determinant not equal to zero. For example, a square submatrix of order 3 x 3 of [B] is
  [D] = [25 5 1; 64 8 1; 89 13 2], det(D) = 0
So we need to explore other square submatrices of the augmented matrix [B]:
  [E] = [5 1 106.8; 8 1 177.2; 13 2 280.0], det(E) = 12.0 != 0
So the rank of the augmented matrix [B] is 3. The rank of the coefficient matrix [A] is 2, from the previous example. Since the rank of the coefficient matrix [A] is less than the rank of the augmented matrix [B], the system of equations is inconsistent. Hence, no solution exists for [A][X] = [C].

If a solution exists, how do we know whether it is unique?
In a system of equations [A][X] = [C] that is consistent, the rank of the coefficient matrix [A] is the same as the rank of the augmented matrix [A|C]. If in addition, the rank of the coefficient matrix [A] is the same as the number of unknowns, then the solution is unique; if the rank of the coefficient matrix [A] is less than the number of unknowns, then infinite solutions exist.

  Figure 5.2. Flow chart of conditions for consistent and inconsistent system of equations:
  consistent system if rank(A) = rank(A|C), with a unique solution if rank(A) = number of unknowns and infinite solutions if rank(A) < number of unknowns; inconsistent system if rank(A) < rank(A|C).

Example 8
We found that the following system of equations
  [25 5 1; 64 8 1; 144 12 1][x1; x2; x3] = [106.8; 177.2; 279.2]
is a consistent system of equations. Does the system of equations have a unique solution or does it have infinite solutions?
Solution
While finding out whether the above equations were consistent in an earlier example, we found that the rank of the coefficient matrix [A] equals the rank of the augmented matrix [A|C] equals 3. The solution is unique, as the number of unknowns = 3 = rank of [A].
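The rank-based classification of Examples 5-8 can be automated: compare rank(A) with rank(A|C), and then with the number of unknowns. A sketch (Python; not part of the original chapter; it reuses a small elimination-based rank helper with an arbitrary 1e-9 zero tolerance):

```python
# Classify [A][X] = [C] by comparing rank(A), rank(A|C), and the unknown count.
def rank(M, tol=1e-9):
    A = [[float(x) for x in row] for row in M]
    nrows, ncols, r = len(A), len(A[0]), 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if abs(A[i][c]) > tol), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, nrows):
            f = A[i][c] / A[r][c]
            for j in range(c, ncols):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

def classify(A, C):
    aug = [row + [rhs] for row, rhs in zip(A, C)]
    ra, raug = rank(A), rank(aug)
    if ra < raug:
        return "inconsistent"
    return "unique" if ra == len(A[0]) else "infinite"

s_unique = classify([[25, 5, 1], [64, 8, 1], [144, 12, 1]], [106.8, 177.2, 279.2])
s_none = classify([[25, 5, 1], [64, 8, 1], [89, 13, 2]], [106.8, 177.2, 280.0])
s_inf = classify([[25, 5, 1], [64, 8, 1], [89, 13, 2]], [106.8, 177.2, 284.0])
```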
Example 9
We found that the following system of equations
  [25 5 1; 64 8 1; 89 13 2][x1; x2; x3] = [106.8; 177.2; 284.0]
is a consistent system of equations. Is the solution unique or does it have infinite solutions?
Solution
While finding out whether the above equations were consistent, we found that the rank of the coefficient matrix [A] equals the rank of the augmented matrix [A|C] equals 2. Since the rank of [A] = 2 < number of unknowns = 3, infinite solutions exist.

If we have more equations than unknowns in [A][X] = [C], does it mean the system is inconsistent?
No, it depends on the rank of the augmented matrix [A|C] and the rank of [A].
a) For example,
  [25 5 1; 64 8 1; 144 12 1; 89 13 2][x1; x2; x3] = [106.8; 177.2; 279.2; 284.0]
is consistent, since rank of augmented matrix = 3 = rank of coefficient matrix. Now since the rank of [A] = 3 = number of unknowns, the solution is not only consistent but also unique.
b) For example,
  [25 5 1; 64 8 1; 144 12 1; 89 13 2][x1; x2; x3] = [106.8; 177.2; 279.2; 280.0]
is inconsistent, since rank of augmented matrix = 4 > rank of coefficient matrix = 3.
c) For example,
  [25 5 1; 64 8 1; 50 10 2][x1; x2; x3] = [106.8; 177.2; 213.6]
is consistent, since rank of augmented matrix = 2 = rank of coefficient matrix. But since the rank of [A] = 2 < the number of unknowns = 3, infinite solutions exist.

Can a system of equations have more than one but not an infinite number of solutions?
No, you can only have either a unique solution or infinite solutions. Let us suppose [A][X] = [C] has two solutions [Y] and [Z], so that
If [ A] 1 does not exist. [ A] is then called to be invertible or nonsingular.  [A] and [B] are both nonsingular  all columns of [A] and [B]are linearly independent  all rows of [A] and [B] are linearly independent. is denoted by [ A] 1 such that [ A][ A] 1  [ I ]  [ A] 1 [ A] Where [ I ] is the identity matrix. there are infinite solutions for [ A][ X ]  [C ] of the form r Y   1  r Z  Can you divide two matrices? 04.11 If [ A][ B]  [C ] is defined. In other words. then from the two equations r AY   r C  1  r AZ   1  r C  Adding the above two equations gives r  AY   1  r  AZ   r C   1  r C  ArY   1  r Z   C  Hence r Y   1  r Z  is a solution to AX   C  Since r is any scalar. The inverse of a square matrix [ A] . if existing. If [ B] is another square matrix of the same size such that [ B][ A]  [ I ] . Example 10 Determine if 3 2  [B]    5 3 is the inverse of .05. then [ B] is the inverse of [ A] .System of Equations [ A] [Y ]  [C ] [ A] [ Z ]  [C ] If r is a constant. However an inverse of a matrix can be defined for certain types of square matrices. [ B] is the inverse of [A] and [ A] is the inverse of [ B] . [C ] .05.04.05 [ B][ A]  [ I ] .12  3 2  [A]     5  3 Solution 3 [ B ][ A]   5 1  0  [I ] Since 2   3 2  3  5  3   0 1  Chapter 04. if the number of equations is the same as the number of unknowns. then [ A] 1 is a n  n matrix and according to the definition of inverse of a matrix [ A][ A] 1  [ I ] Denoting . Given [ A][ X ]  [C ] Then. Can I use the concept of the inverse of a matrix to find the solution of a set of equations [A] [X] = [C]? Yes. the coefficient matrix [ A] is a square matrix. How do I find the inverse of a matrix? If [ A] is a n  n matrix. the solution vector of [ A][ X ]  [C ] is simply a multiplication of [ A] 1 and the right hand side vector. multiplying both sides by [ A] 1 . But. 
if [ A] 1 exists. [ A] 1 [ A][ X ]  [ A] 1 [C ] [ I ][ X ]  [ A] 1 [C ] [ X ]  [ A] 1 [C ] This implies that if we are able to find [ A] 1 . we can also show that   3 2  3 2  [ A][ B]      5  3 5 3 1 0   0 1  [I] to show that [ A] is the inverse of [ B] . 2 In an earlier example. 5  t  12 We found that the coefficients a.System of Equations  a11 a  21 [ A]       a n1  a12 04. b.8 8 177. v (m/s) 5 106. Velocity vs time data for a rocket Time. Example 11 The upward velocity of the rocket is given by Table 5. and c in vt  are given by . we wanted to approximate the velocity profile by vt   at 2  bt  c.05. the first column of the [ A] 1 matrix can then be found by solving '  a11 a12   a1n   a11  1  '  a a 22   a 2 n  a 21  0  21                                ' a n1 a n 2   a nn  a n1  0      Similarly.2 12 279.13   a1n  a 22   a 2 n              a n 2   a nn   ' '  a11 a12   a1' n   ' ' '  a 21 a 22   a 2 n  [ A] 1                  ' a ' a '   a nn  n2  n1  1 0    0  0 1 0   0   [I ]    1        0     1    Using the definition of matrix multiplication. one can find the other columns of the [ A] 1 matrix by changing the right hand side accordingly.2. t (s) Velocity. b. then ' ' '  25 5 1  a11 a12 a13  1 0 0  64 8 1 a ' a ' '  a 23   0 1 0 22    21   ' ' ' 144 12 1  a31 a 32 a 33  0 0 1      gives three sets of equations '  25 5 1  a11  1  64 8 1 a '   0    21    ' 144 12 1  a31  0      '  25 5 1  a12  0  64 8 1 a '   1    22    ' 144 12 1  a32  0      '  25 5 1  a13  0  64 8 1 a '   0    23    144 12 1  a33  1  '     Solving the above three sets of equations separately gives '  a11   0. 
Similarly, solving with the other two right-hand sides gives
[a'12; a'22; a'32] = [-0.08333; 1.417; -5.000]
[a'13; a'23; a'33] = [0.03571; -0.4643; 1.429]
Hence
[A]^-1 = [ 0.04762  -0.08333   0.03571]
         [-0.9524    1.417    -0.4643 ]
         [ 4.571    -5.000     1.429  ]
Now, where
[A][X] = [C]
the solution is
[X] = [A]^-1 [C]
with
[X] = [a; b; c],  [C] = [106.8; 177.2; 279.2]
Using the definition of [A]^-1,
[a]   [ 0.04762  -0.08333   0.03571] [106.8]   [0.2905]
[b] = [-0.9524    1.417    -0.4643 ] [177.2] = [19.69 ]
[c]   [ 4.571    -5.000     1.429  ] [279.2]   [1.086 ]
Hence
a = 0.2905, b = 19.69, c = 1.086
So
v(t) = 0.2905 t^2 + 19.69 t + 1.086,  5 ≤ t ≤ 12

Is there another way to find the inverse of a matrix?

For small matrices, the inverse of an invertible matrix can be found by
[A]^-1 = (1/det(A)) adj(A)
where
adj(A) = [C11 C21 ... Cn1]
         [C12 C22 ... Cn2]
         [ ...           ]
         [C1n C2n ... Cnn]
and the Cij are the cofactors of the aij. The matrix
[C11 C12 ... C1n]
[C21 C22 ... C2n]
[ ...           ]
[Cn1 Cn2 ... Cnn]
itself is called the matrix of cofactors of [A]; the adjoint is its transpose, adj(A) = [C]^T. Cofactors are defined in Chapter 4.
Example 12
Find the inverse of
[A] = [25   5  1]
      [64   8  1]
      [144 12  1]

Solution
From Example 4.6 in Chapter 04.06, we found
det(A) = -84
Next we need to find the adjoint of [A]. The cofactors of [A] are found as follows. The minor of entry a11 is
M11 = |8   1|
      |12  1| = 8(1) - 1(12) = -4
The cofactor of entry a11 is
C11 = (-1)^(1+1) M11 = M11 = -4
The minor of entry a12 is
M12 = |64   1|
      |144  1| = 64 - 144 = -80
The cofactor of entry a12 is
C12 = (-1)^(1+2) M12 = -M12 = -(-80) = 80
Similarly,
C13 = -384, C21 = 7, C22 = -119, C23 = 420, C31 = -3, C32 = 39, C33 = -120
Hence the matrix of cofactors of [A] is
[C] = [-4     80   -384]
      [ 7   -119    420]
      [-3     39   -120]
The adjoint of matrix [A] is [C]^T,
adj(A) = [-4      7    -3 ]
         [ 80  -119    39 ]
         [-384  420  -120 ]
Hence
[A]^-1 = (1/det(A)) adj(A)
       = (1/(-84)) [-4      7    -3 ]
                   [ 80  -119    39 ]
                   [-384  420  -120 ]
       = [ 0.04762  -0.08333   0.03571]
         [-0.9524    1.417    -0.4643 ]
         [ 4.571    -5.000     1.429  ]

If the inverse of a square matrix [A] exists, is it unique?

Yes, the inverse of a square matrix is unique, if it exists. The proof is as follows. Assume that the inverse of [A] is [B] and, if this inverse is not unique, let another inverse of [A] exist, called [C]. Since [B] is an inverse of [A],
[B][A] = [I]
Multiplying both sides on the right by [C],
[B][A][C] = [I][C] = [C]
Since [C] is also an inverse of [A], [A][C] = [I], so
[B][A][C] = [B][I] = [B]
Hence [B] = [C]. This shows that [B] and [C] are the same, so the inverse of [A] is unique.

Key Terms:
Consistent system
Inconsistent system
Infinite solutions
Unique solution
Rank
Inverse
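Example 12's cofactor-and-adjoint route is easy to automate as a closing sketch. Assuming numpy; the helper name `inverse_via_adjoint` is ours, not the text's:

```python
import numpy as np

def inverse_via_adjoint(A):
    """Inverse from A^{-1} = adj(A)/det(A), where adj(A) is the
    transposed matrix of cofactors C_ij = (-1)^(i+j) * M_ij."""
    n = A.shape[0]
    C = np.zeros((n, n))  # matrix of cofactors
    for i in range(n):
        for j in range(n):
            # minor M_ij: determinant of A with row i and column j deleted
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)   # adj(A) = C^T

A = np.array([[25.0, 5.0, 1.0],
              [64.0, 8.0, 1.0],
              [144.0, 12.0, 1.0]])
A_inv = inverse_via_adjoint(A)
assert np.allclose(A_inv, np.linalg.inv(A))
```

The adjoint formula is only practical for small matrices: computing all n^2 cofactor determinants grows very quickly with n, which is why elimination-based methods are preferred in general.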
Chapter 04.06
Gaussian Elimination

After reading this chapter, you should be able to:
1. solve a set of simultaneous linear equations using Naïve Gauss elimination,
2. learn the pitfalls of the Naïve Gauss elimination method,
3. understand the effect of round-off error when solving a set of linear equations with the Naïve Gauss elimination method,
4. learn how to modify the Naïve Gauss elimination method to the Gaussian elimination with partial pivoting method to avoid pitfalls of the former method,
5. find the determinant of a square matrix using Gaussian elimination, and
6. understand the relationship between the determinant of a coefficient matrix and the solution of simultaneous linear equations.

How is a set of equations solved numerically?

One of the most popular techniques for solving simultaneous linear equations is the Gaussian elimination method. The approach is designed to solve a general set of n equations and n unknowns
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn
Gaussian elimination consists of two steps:
1. Forward Elimination of Unknowns: In this step, the unknown is eliminated in each equation starting with the first equation. This way, the equations are reduced to one equation and one unknown in each equation.
2. Back Substitution: In this step, starting from the last equation, each of the unknowns is found.

Forward Elimination of Unknowns:
In the first step of forward elimination, the first unknown, x1, is eliminated from all rows below the first row. The first equation is selected as the pivot equation to eliminate x1. So, to eliminate x1 in the second equation, one divides the first equation by a11 (hence called the pivot element) and then multiplies it by a21. This is the same as multiplying the first equation by a21/a11 to give
a21 x1 + (a21/a11) a12 x2 + ... + (a21/a11) a1n xn = (a21/a11) b1
Now, this equation can be subtracted from the second equation to give
(a22 - (a21/a11) a12) x2 + ... + (a2n - (a21/a11) a1n) xn = b2 - (a21/a11) b1
or
a'22 x2 + ... + a'2n xn = b'2
where
a'22 = a22 - (a21/a11) a12
...
a'2n = a2n - (a21/a11) a1n
This procedure of eliminating x1 is now repeated for the third equation to the nth equation to reduce the set of equations as
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
         a'32 x2 + a'33 x3 + ... + a'3n xn = b'3
         ...
         a'n2 x2 + a'n3 x3 + ... + a'nn xn = b'n
This is the end of the first step of forward elimination. Now, for the second step of forward elimination, we start with the second equation as the pivot equation and a'22 as the pivot element. So, to eliminate x2 in the third equation, one divides the second equation by a'22 (the pivot element) and then multiplies it by a'32. This is the same as multiplying the second equation by a'32/a'22 and subtracting it from the third equation. This makes the coefficient of x2 zero in the third equation. The same procedure is now repeated for the fourth equation till the nth equation to give
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                   a''33 x3 + ... + a''3n xn = b''3
                   ...
                   a''n3 x3 + ... + a''nn xn = b''n
This is the end of the second step of forward elimination.
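The elimination steps just described translate almost line for line into code. A sketch assuming numpy; the 3×3 test system is the rocket system worked by hand in Example 1 below:

```python
import numpy as np

def forward_eliminate(A, b):
    """Naive forward elimination (no pivoting): reduce [A|b] to
    upper triangular form in n-1 steps."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):            # step k uses row k as the pivot row
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]     # multiplier a_ik / a_kk
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    return A, b

A = np.array([[25.0, 5.0, 1.0],
              [64.0, 8.0, 1.0],
              [144.0, 12.0, 1.0]])
b = np.array([106.8, 177.2, 279.2])
U, c = forward_eliminate(A, b)
# Matches the hand calculation: rows [25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]
assert np.allclose(U, [[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]])
assert np.allclose(c, [106.8, -96.208, 0.76])
```

Note that the multiplier `m` divides by the pivot `A[k, k]`, which is exactly where the division-by-zero pitfall discussed later in this chapter comes from.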
The next steps of forward elimination are conducted by using the third equation as a pivot equation and so on. That is, there will be a total of n-1 steps of forward elimination. At the end of n-1 steps of forward elimination, we get a set of equations that look like
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                   a''33 x3 + ... + a''3n xn = b''3
                   ...
                            a_nn^(n-1) xn = b_n^(n-1)

Back Substitution:
Now the equations are solved starting from the last equation, as it has only one unknown:
xn = b_n^(n-1) / a_nn^(n-1)
Then the second last equation, that is the (n-1)th equation, has two unknowns, xn and x(n-1), but xn is already known. This reduces the (n-1)th equation also to one unknown. Back substitution hence can be represented for all equations by the formula
xi = ( b_i^(i-1) - Σ_{j=i+1}^{n} a_ij^(i-1) xj ) / a_ii^(i-1)   for i = n-1, n-2, ..., 1
and
xn = b_n^(n-1) / a_nn^(n-1)

Example 1
The upward velocity of a rocket is given at three different times in Table 1.

Table 1 Velocity vs. time data.
Time, t (s)    Velocity, v (m/s)
5              106.8
8              177.2
12             279.2

The velocity data is approximated by a polynomial as
v(t) = a1 t^2 + a2 t + a3,  5 ≤ t ≤ 12
The coefficients a1, a2, and a3 for the above expression are given by
[25   5  1] [a1]   [106.8]
[64   8  1] [a2] = [177.2]
[144 12  1] [a3]   [279.2]
Find the values of a1, a2, and a3 using the Naïve Gauss elimination method. Find the velocity at t = 6, 7.5, 9, 11 seconds.

Solution
Forward Elimination of Unknowns
Since there are three equations, there will be two steps of forward elimination of unknowns.
First step
Divide Row 1 by 25 and then multiply it by 64, that is, multiply Row 1 by 64/25 = 2.56.
[25 5 1 | 106.8] × 2.56 gives Row 1 as [64 12.8 2.56 | 273.408]
Subtract the result from Row 2
[64 8 1 | 177.2] - [64 12.8 2.56 | 273.408] = [0 -4.8 -1.56 | -96.208]
to get the resulting equations as
[25   5     1   ] [a1]   [106.8  ]
[0   -4.8  -1.56] [a2] = [-96.208]
[144  12    1   ] [a3]   [279.2  ]
Divide Row 1 by 25 and then multiply it by 144, that is, multiply Row 1 by 144/25 = 5.76.
[25 5 1 | 106.8] × 5.76 gives Row 1 as [144 28.8 5.76 | 615.168]
Subtract the result from Row 3
[144 12 1 | 279.2] - [144 28.8 5.76 | 615.168] = [0 -16.8 -4.76 | -335.968]
to get the resulting equations as
[25   5      1   ] [a1]   [106.8   ]
[0   -4.8   -1.56] [a2] = [-96.208 ]
[0   -16.8  -4.76] [a3]   [-335.968]
Second step
We now divide Row 2 by -4.8 and then multiply by -16.8, that is, multiply Row 2 by -16.8/-4.8 = 3.5.
[0 -4.8 -1.56 | -96.208] × 3.5 gives Row 2 as [0 -16.8 -5.46 | -336.728]
Subtract the result from Row 3
[0 -16.8 -4.76 | -335.968] - [0 -16.8 -5.46 | -336.728] = [0 0 0.7 | 0.76]
to get the resulting equations as
[25   5     1   ] [a1]   [106.8  ]
[0   -4.8  -1.56] [a2] = [-96.208]
[0    0     0.7 ] [a3]   [0.76   ]

Back substitution
From the third equation,
0.7 a3 = 0.76
a3 = 0.76/0.7 = 1.08571
Substituting the value of a3 in the second equation,
-4.8 a2 - 1.56 a3 = -96.208
a2 = (-96.208 + 1.56 a3)/(-4.8) = (-96.208 + 1.56 × 1.08571)/(-4.8) = 19.6905
Substituting the values of a2 and a3 in the first equation,
25 a1 + 5 a2 + a3 = 106.8
a1 = (106.8 - 5 a2 - a3)/25 = (106.8 - 5 × 19.6905 - 1.08571)/25 = 0.290472
Hence the solution vector is
[a1; a2; a3] = [0.290472; 19.6905; 1.08571]
The polynomial that passes through the three data points is then
v(t) = a1 t^2 + a2 t + a3 = 0.290472 t^2 + 19.6905 t + 1.08571,  5 ≤ t ≤ 12
Since we want to find the velocity at t = 6, 7.5, 9 and 11 seconds, we could simply substitute each value of t in v(t) and find the corresponding velocity. For example, at t = 6,
v(6) = 0.290472(6)^2 + 19.6905(6) + 1.08571 = 129.686 m/s
However, we could also find all the needed values of velocity at t = 6, 7.5, 9, 11 seconds using matrix multiplication:
v(t) = [0.290472  19.6905  1.08571] [t^2]
                                    [t  ]
                                    [1  ]
So if we want to find v(6), v(7.5), v(9), v(11), it is given by
[v(6) v(7.5) v(9) v(11)] = [0.290472  19.6905  1.08571] [36  56.25  81  121]
                                                        [6   7.5    9   11 ]
                                                        [1   1      1   1  ]
                         = [129.686  165.104  201.828  252.828]
v(6) = 129.686 m/s
v(7.5) = 165.104 m/s
v(9) = 201.828 m/s
v(11) = 252.828 m/s

Example 2
Use Naïve Gauss elimination to solve
20 x1 + 15 x2 + 10 x3 = 45
-3 x1 - 2.249 x2 + 7 x3 = 1.751
5 x1 + x2 + 3 x3 = 9
Use six significant digits with chopping in your calculations.
Solution
Working in the matrix form
[20   15      10] [x1]   [45   ]
[-3   -2.249   7] [x2] = [1.751]
[5     1       3] [x3]   [9    ]

Forward Elimination of Unknowns
First step
Divide Row 1 by 20 and then multiply it by -3, that is, multiply Row 1 by -3/20 = -0.15.
[20 15 10 | 45] × -0.15 gives Row 1 as [-3 -2.25 -1.5 | -6.75]
Subtract the result from Row 2
[-3 -2.249 7 | 1.751] - [-3 -2.25 -1.5 | -6.75] = [0 0.001 8.5 | 8.501]
to get the resulting equations as
[20  15     10 ] [x1]   [45   ]
[0   0.001  8.5] [x2] = [8.501]
[5   1      3  ] [x3]   [9    ]
Divide Row 1 by 20 and then multiply it by 5, that is, multiply Row 1 by 5/20 = 0.25.
[20 15 10 | 45] × 0.25 gives Row 1 as [5 3.75 2.5 | 11.25]
Subtract the result from Row 3
[5 1 3 | 9] - [5 3.75 2.5 | 11.25] = [0 -2.75 0.5 | -2.25]
to get the resulting equations as
[20  15     10 ] [x1]   [45   ]
[0   0.001  8.5] [x2] = [8.501]
[0  -2.75   0.5] [x3]   [-2.25]
Second step
Now for the second step of forward elimination, we will use Row 2 as the pivot equation and eliminate Row 3: Column 2. Divide Row 2 by 0.001 and then multiply it by -2.75, that is, multiply Row 2 by -2.75/0.001 = -2750.
[0 0.001 8.5 | 8.501] × -2750 gives Row 2 as [0 -2.75 -23375 | -23377.75]
Rewriting within 6 significant digits with chopping,
[0 -2.75 -23375 | -23377.7]
Subtract the result from Row 3
[0 -2.75 0.5 | -2.25] - [0 -2.75 -23375 | -23377.7] = [0 0 23375.5 | 23375.45]
Rewriting within 6 significant digits with chopping,
[0 0 23375.5 | 23375.4]
to get the resulting equations as
[20  15     10     ] [x1]   [45     ]
[0   0.001  8.5    ] [x2] = [8.501  ]
[0   0      23375.5] [x3]   [23375.4]
This is the end of the forward elimination steps.
Back substitution
We can now solve the above equations by back substitution.
From the third equation,
23375.5 x3 = 23375.4
x3 = 23375.4/23375.5 = 0.999995
Substituting the value of x3 in the second equation,
0.001 x2 + 8.5 x3 = 8.501
x2 = (8.501 - 8.5 x3)/0.001 = (8.501 - 8.5 × 0.999995)/0.001 = (8.501 - 8.49995)/0.001 = 0.00105/0.001 = 1.05
Substituting the values of x3 and x2 in the first equation,
20 x1 + 15 x2 + 10 x3 = 45
x1 = (45 - 15 x2 - 10 x3)/20 = (45 - 15 × 1.05 - 10 × 0.999995)/20 = (45 - 15.75 - 9.99995)/20 = 19.2500/20 = 0.9625
Hence the solution is
[X] = [x1; x2; x3] = [0.9625; 1.05; 0.999995]
Compare this with the exact solution
[X] = [x1; x2; x3] = [1; 1; 1]

Are there any pitfalls of the Naïve Gauss elimination method?

Yes, there are two pitfalls of the Naïve Gauss elimination method.

Division by zero: It is possible for division by zero to occur during the beginning of the n-1 steps of forward elimination. For example,
5 x2 + 6 x3 = 11
4 x1 + 5 x2 + 7 x3 = 16
9 x1 + 2 x2 + 3 x3 = 15
will result in division by zero in the first step of forward elimination, as the coefficient of x1 in the first equation is zero, as is evident when we write the equations in matrix form:
[0  5  6] [x1]   [11]
[4  5  7] [x2] = [16]
[9  2  3] [x3]   [15]
But what about the equations below? Is division by zero a problem?
5 x1 + 6 x2 + 7 x3 = 18
10 x1 + 12 x2 + 3 x3 = 25
20 x1 + 17 x2 + 19 x3 = 56
Written in matrix form,
[5   6   7 ] [x1]   [18]
[10  12  3 ] [x2] = [25]
[20  17  19] [x3]   [56]
there is no issue of division by zero in the first step of forward elimination. The pivot element is the coefficient of x1 in the first equation, 5, and that is a non-zero number. However, at the end of the first step of forward elimination, we get the following equations in matrix form:
[5   6    7 ] [x1]   [18 ]
[0   0  -11 ] [x2] = [-11]
[0  -7   -9 ] [x3]   [-16]
Now at the beginning of the 2nd step of forward elimination, the coefficient of x2 in Equation 2 would be used as the pivot element. That element is zero and hence would create the division by zero problem. So it is important to consider that the possibility of division by zero can occur at the beginning of any step of forward elimination.

Round-off error: The Naïve Gauss elimination method is prone to round-off errors. This is true when there are large numbers of equations, as errors propagate. Also, if there is subtraction of numbers from each other, it may create large errors. See the example below.

Example 3
Remember Example 2, where we used Naïve Gauss elimination to solve
20 x1 + 15 x2 + 10 x3 = 45
-3 x1 - 2.249 x2 + 7 x3 = 1.751
5 x1 + x2 + 3 x3 = 9
using six significant digits with chopping in your calculations? Repeat the problem, but now use five significant digits with chopping in your calculations.

Solution
Writing in the matrix form
[20   15      10] [x1]   [45   ]
[-3   -2.249   7] [x2] = [1.751]
[5     1       3] [x3]   [9    ]
Forward Elimination of Unknowns
First step
Divide Row 1 by 20 and then multiply it by -3, that is, multiply Row 1 by -3/20 = -0.15.
[20 15 10 | 45] × -0.15 gives Row 1 as [-3 -2.25 -1.5 | -6.75]
Subtract the result from Row 2 to get
[0 0.001 8.5 | 8.501]
Divide Row 1 by 20 and then multiply it by 5, that is, multiply Row 1 by 5/20 = 0.25, and subtract the result from Row 3
to get the resulting equations as
[20  15     10 ] [x1]   [45   ]
[0   0.001  8.5] [x2] = [8.501]
[0  -2.75   0.5] [x3]   [-2.25]
Second step
Now for the second step of forward elimination, we will use Row 2 as the pivot equation and eliminate Row 3: Column 2. Divide Row 2 by 0.001 and then multiply it by -2.75, that is, multiply Row 2 by -2.75/0.001 = -2750.
[0 0.001 8.5 | 8.501] × -2750 gives Row 2 as [0 -2.75 -23375 | -23377.75]
Rewriting within 5 significant digits with chopping,
[0 -2.75 -23375 | -23377]
Subtract the result from Row 3
[0 -2.75 0.5 | -2.25] - [0 -2.75 -23375 | -23377] = [0 0 23375.5 | 23374.75]
Rewriting within 5 significant digits with chopping,
[0 0 23375 | 23374]
to get the resulting equations as
[20  15     10   ] [x1]   [45   ]
[0   0.001  8.5  ] [x2] = [8.501]
[0   0      23375] [x3]   [23374]
This is the end of the forward elimination steps.
Back substitution
We can now solve the above equations by back substitution. From the third equation,
23375 x3 = 23374
x3 = 23374/23375 = 0.99995
Substituting the value of x3 in the second equation,
0.001 x2 + 8.5 x3 = 8.501
x2 = (8.501 - 8.5 × 0.99995)/0.001 = (8.501 - 8.4995)/0.001 = 0.0015/0.001 = 1.5
Substituting the values of x3 and x2 in the first equation,
20 x1 + 15 x2 + 10 x3 = 45
x1 = (45 - 15 × 1.5 - 10 × 0.99995)/20 = (45 - 22.5 - 9.9995)/20 = 12.500/20 = 0.625
Hence the solution is
[X] = [0.625; 1.5; 0.99995]
Compare this with the exact solution
[X] = [1; 1; 1]

What are some techniques for improving the Naïve Gauss elimination method?

As seen in Example 3, round-off errors were large when five significant digits were used as opposed to six significant digits. One method of decreasing the round-off error would be to use more significant digits, that is, use double or quad precision for representing the numbers. However, this would not avoid possible division-by-zero errors in the Naïve Gauss elimination method.
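The effect of chopping to a fixed number of significant digits can be reproduced with Python's `decimal` module, whose `ROUND_DOWN` mode is exactly chopping. This sketch (our code, not from the text) reruns the system of Examples 2 and 3 at six and five significant digits:

```python
from decimal import Decimal, getcontext, ROUND_DOWN

def naive_gauss_chopped(digits):
    """Naive Gauss elimination on the system of Examples 2/3, chopping
    every arithmetic result to `digits` significant digits."""
    getcontext().prec = digits
    getcontext().rounding = ROUND_DOWN   # ROUND_DOWN = chopping
    A = [[Decimal(v) for v in row] for row in
         [["20", "15", "10"], ["-3", "-2.249", "7"], ["5", "1", "3"]]]
    b = [Decimal("45"), Decimal("1.751"), Decimal("9")]
    n = 3
    for k in range(n - 1):                      # forward elimination
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] = A[i][j] - m * A[k][j]
            b[i] = b[i] - m * b[k]
    x = [Decimal(0)] * n                        # back substitution
    for i in range(n - 1, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s = s - A[i][j] * x[j]
        x[i] = s / A[i][i]
    return [float(v) for v in x]

# Six significant digits reproduces Example 2; five reproduces Example 3
assert naive_gauss_chopped(6) == [0.9625, 1.05, 0.999995]
assert naive_gauss_chopped(5) == [0.625, 1.5, 0.99995]
```

Dropping from six to five digits moves x1 from 0.9625 to 0.625 — the same dramatic loss of accuracy the hand calculation showed, driven by the tiny pivot 0.001.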
To avoid division by zero as well as to reduce (not eliminate) round-off error, Gaussian elimination with partial pivoting is the method of choice.

How does Gaussian elimination with partial pivoting differ from Naïve Gauss elimination?

The two methods are the same, except that in the beginning of each step of forward elimination, a row switching is done based on the following criterion. If there are n equations, then there are n-1 forward elimination steps. At the beginning of the kth step of forward elimination, one finds the maximum of
|a_kk|, |a_(k+1)k|, ..., |a_nk|
Then, if the maximum of these values is |a_pk| in the pth row, k ≤ p ≤ n, switch rows p and k. The other steps of forward elimination are the same as in the Naïve Gauss elimination method. The back substitution steps stay exactly the same as in the Naïve Gauss elimination method.

Example 4
In the previous two examples, we used Naïve Gauss elimination to solve
20 x1 + 15 x2 + 10 x3 = 45
-3 x1 - 2.249 x2 + 7 x3 = 1.751
5 x1 + x2 + 3 x3 = 9
using five and six significant digits with chopping in the calculations. Using five significant digits with chopping, the solution found was
[X] = [0.625; 1.5; 0.99995]
This is different from the exact solution of
[X] = [1; 1; 1]
Find the solution using Gaussian elimination with partial pivoting, using five significant digits with chopping in your calculations.

Solution
[20   15      10] [x1]   [45   ]
[-3   -2.249   7] [x2] = [1.751]
[5     1       3] [x3]   [9    ]
Forward Elimination of Unknowns
Now for the first step of forward elimination, the absolute values of the first column elements are
|20|, |-3|, |5| or 20, 3, 5
So the largest absolute value is in Row 1. So, as per Gaussian elimination with partial pivoting, the switch is between Row 1 and Row 1 (no switch), to give
[20   15      10] [x1]   [45   ]
[-3   -2.249   7] [x2] = [1.751]
[5     1       3] [x3]   [9    ]
Divide Row 1 by 20 and then multiply it by -3, that is, multiply Row 1 by -3/20 = -0.15.
[20 15 10 | 45] × -0.15 gives Row 1 as [-3 -2.25 -1.5 | -6.75]
Subtract the result from Row 2
[-3 -2.249 7 | 1.751] - [-3 -2.25 -1.5 | -6.75] = [0 0.001 8.5 | 8.501]
to get the resulting equations as
[20  15     10 ] [x1]   [45   ]
[0   0.001  8.5] [x2] = [8.501]
[5   1      3  ] [x3]   [9    ]
Divide Row 1 by 20 and then multiply it by 5, that is, multiply Row 1 by 5/20 = 0.25.
[20 15 10 | 45] × 0.25 gives Row 1 as [5 3.75 2.5 | 11.25]
Subtract the result from Row 3
[5 1 3 | 9] - [5 3.75 2.5 | 11.25] = [0 -2.75 0.5 | -2.25]
to get the resulting equations as
[20  15     10 ] [x1]   [45   ]
[0   0.001  8.5] [x2] = [8.501]
[0  -2.75   0.5] [x3]   [-2.25]
This is the end of the first step of forward elimination.
Now for the second step of forward elimination, the absolute values of the second column elements below Row 1 are
|0.001|, |-2.75| or 0.001, 2.75
So the largest absolute value is in Row 3. So Row 2 is switched with Row 3 to give
[20  15     10 ] [x1]   [45   ]
[0  -2.75   0.5] [x2] = [-2.25]
[0   0.001  8.5] [x3]   [8.501]
Divide Row 2 by -2.75 and then multiply it by 0.001, that is, multiply Row 2 by 0.001/(-2.75) = -0.00036363.
[0 -2.75 0.5 | -2.25] × -0.00036363 gives Row 2 as [0 0.00099998 -0.00018182 | 0.00081816]
Subtract the result from Row 3
[0 0.001 8.5 | 8.501] - [0 0.00099998 -0.00018182 | 0.00081816] = [0 0 8.50018182 | 8.50018184]
(the second-column entry, 0.001 - 0.00099998, is essentially zero). Rewriting within 5 significant digits with chopping,
[0 0 8.5001 | 8.5001]
to get the resulting equations as
[20  15     10    ] [x1]   [45    ]
[0  -2.75   0.5   ] [x2] = [-2.25 ]
[0   0      8.5001] [x3]   [8.5001]
Back substitution
8.5001 x3 = 8.5001
x3 = 8.5001/8.5001 = 1
Substituting the value of x3 in Row 2,
-2.75 x2 + 0.5 x3 = -2.25
x2 = (-2.25 - 0.5 x3)/(-2.75) = (-2.25 - 0.5)/(-2.75) = 1
Substituting the values of x3 and x2 in Row 1,
20 x1 + 15 x2 + 10 x3 = 45
x1 = (45 - 15 x2 - 10 x3)/20 = (45 - 15 - 10)/20 = 20/20 = 1
So the solution is
[X] = [x1; x2; x3] = [1; 1; 1]
This, in fact, is the exact solution. By coincidence only, in this case, the round-off error is fully removed.

Can we use Naïve Gauss elimination methods to find the determinant of a square matrix?

One of the more efficient ways to find the determinant of a square matrix is by taking advantage of the following two theorems on determinants of matrices, coupled with Naïve Gauss elimination.

Theorem 1: Let [A] be an n×n matrix. Then, if [B] is an n×n matrix that results from adding or subtracting a multiple of one row to another row, det(A) = det(B). (The same is true for column operations also.)

Theorem 2: Let [A] be an n×n matrix that is upper triangular, lower triangular or diagonal. Then
det(A) = a11 × a22 × ... × ann = Π_{i=1}^{n} aii
This implies that if we apply the forward elimination steps of the Naïve Gauss elimination method, the determinant of the matrix stays the same according to Theorem 1. Then, since at the end of the forward elimination steps the resulting matrix is upper triangular, the determinant is given by Theorem 2.

Example 5
Find the determinant of
[A] = [25   5  1]
      [64   8  1]
      [144 12  1]

Solution
Remember, in Example 1,
we conducted the steps of forward elimination of unknowns using the Naïve Gauss elimination method on [A] to give the upper triangular matrix
[B] = [25   5     1   ]
      [0   -4.8  -1.56]
      [0    0     0.7 ]
According to Theorem 2,
det(A) = det(B) = 25 × (-4.8) × 0.7 = -84.00

What if I cannot find the determinant of the matrix using the Naïve Gauss elimination method, for example, if I get division by zero problems during the Naïve Gauss elimination method?

Well, you can apply Gaussian elimination with partial pivoting. However, the determinant of the resulting upper triangular matrix may differ by a sign. The following theorem applies in addition to the previous two to find the determinant of a square matrix.

Theorem 3: Let [A] be an n×n matrix. Then, if [B] is a matrix that results from switching one row with another row, det(B) = -det(A).

Example 6
Find the determinant of
[A] = [10  -7     0]
      [-3   2.099 6]
      [5   -1     5]

Solution
At the end of the forward elimination steps of Gaussian elimination with partial pivoting, we would obtain
[B] = [10  -7   0    ]
      [0    2.5 5    ]
      [0    0   6.002]
det(B) = 10 × 2.5 × 6.002 = 150.05
Since rows were switched once during the forward elimination steps of Gaussian elimination with partial pivoting,
det(A) = -det(B) = -150.05

Example 7
Prove
det(A) = 1/det(A^-1)
Solution
[A][A]^-1 = [I]
det(A A^-1) = det(I)
det(A) det(A^-1) = 1
det(A) = 1/det(A^-1)

If [A] is an n×n matrix and det(A) ≠ 0, what other statements are equivalent to it?
1. [A] is invertible.
2. [A]^-1 exists.
3. [A][X] = [C] has a unique solution.
4. [A][X] = [0] has only the solution [X] = [0].
5. [A][A]^-1 = [I] = [A]^-1[A].

Key Terms:
Naïve Gauss Elimination
Partial Pivoting
Determinant
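Gaussian elimination with partial pivoting, together with Theorems 1-3 for the determinant, can be sketched as a closing example for this chapter. Assuming numpy; the right-hand side below is ours, chosen so the true solution is [1; 1; 1]:

```python
import numpy as np

def gauss_pivot(A, b):
    """Gaussian elimination with partial pivoting; returns x and det(A).
    det(A) = (-1)^(number of row swaps) * product of pivots (Theorems 1-3)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    swaps = 0
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # row with the largest pivot candidate
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
            swaps += 1
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    det = (-1) ** swaps * np.prod(np.diag(A))
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x, det

# Example 6's matrix: one row switch occurs, so det(A) = -det(B) = -150.05
A = np.array([[10.0, -7.0, 0.0], [-3.0, 2.099, 6.0], [5.0, -1.0, 5.0]])
b = np.array([3.0, 5.099, 9.0])   # hypothetical RHS; A @ [1,1,1] = b
x, det = gauss_pivot(A, b)
assert np.isclose(det, -150.05)
assert np.allclose(x, [1.0, 1.0, 1.0])
```

Note how the determinant falls out of the elimination for free: the row operations leave it unchanged (Theorem 1), each swap flips its sign (Theorem 3), and the product of the pivots gives the triangular determinant (Theorem 2).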
Chapter 04.08
Gauss-Seidel Method

After reading this chapter, you should be able to:
1. solve a set of equations using the Gauss-Seidel method,
2. recognize the advantages and pitfalls of the Gauss-Seidel method, and
3. determine under what conditions the Gauss-Seidel method always converges.

Why do we need another method to solve a set of simultaneous linear equations?

In certain cases, such as when a system of equations is large, iterative methods of solving equations are more advantageous. Elimination methods, such as Gaussian elimination, are prone to large round-off errors for a large set of equations. Iterative methods, such as the Gauss-Seidel method, give the user control of the round-off error. Also, if the physics of the problem are well known, initial guesses needed in iterative methods can be made more judiciously, leading to faster convergence.

What is the algorithm for the Gauss-Seidel method?

Given a general set of n equations and n unknowns, we have
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = c1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = c2
...
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = cn
If the diagonal elements are non-zero, each equation is rewritten for the corresponding unknown, that is, the first equation is rewritten with x1 on the left-hand side, the second equation is rewritten with x2 on the left-hand side, and so on, as follows
x1 = (c1 - a12 x2 - a13 x3 - ... - a1n xn)/a11
x2 = (c2 - a21 x1 - a23 x3 - ... - a2n xn)/a22
...
x(n-1) = (c(n-1) - a(n-1)1 x1 - a(n-1)2 x2 - ... - a(n-1)(n-2) x(n-2) - a(n-1)n xn)/a(n-1)(n-1)
xn = (cn - an1 x1 - an2 x2 - ... - an(n-1) x(n-1))/ann
These equations can be rewritten in a summation form as
xi = ( ci - Σ_{j=1, j≠i}^{n} aij xj ) / aii ,   i = 1, 2, ..., n
Hence, for any row i, one assumes an initial guess for the xi's and then uses the rewritten equations to calculate the new estimates. Remember, one always uses the most recent estimates to calculate the next estimates. At the end of each iteration, one calculates the absolute relative approximate error for each xi as
|εa|i = |(xi_new - xi_old)/xi_new| × 100
where xi_new is the recently obtained value of xi, and xi_old is the previous value of xi. When the absolute relative approximate error for each xi is less than the pre-specified tolerance, the iterations are stopped.

Example 1
The upward velocity of a rocket is given at three different times in the following table.
Table 1 Velocity vs. time data.
Time, t (s)    Velocity, v (m/s)
5              106.8
8              177.2
12             279.2

The velocity data is approximated by a polynomial as
v(t) = a1 t^2 + a2 t + a3,  5 ≤ t ≤ 12
Find the values of a1, a2, and a3 using the Gauss-Seidel method. Assume an initial guess of the solution as
[a1; a2; a3] = [1; 2; 5]
and conduct two iterations.

Solution
The polynomial is going through three data points (t1, v1), (t2, v2), (t3, v3) where, from the above table,
t1 = 5, v1 = 106.8
t2 = 8, v2 = 177.2
t3 = 12, v3 = 279.2
Requiring that v(t) = a1 t^2 + a2 t + a3 passes through the three data points gives
v(t1) = v1 = a1 t1^2 + a2 t1 + a3
v(t2) = v2 = a1 t2^2 + a2 t2 + a3
v(t3) = v3 = a1 t3^2 + a2 t3 + a3
Substituting the data (t1, v1), (t2, v2), (t3, v3) gives
a1(5^2) + a2(5) + a3 = 106.8
a1(8^2) + a2(8) + a3 = 177.2
a1(12^2) + a2(12) + a3 = 279.2
or
25 a1 + 5 a2 + a3 = 106.8
64 a1 + 8 a2 + a3 = 177.2
144 a1 + 12 a2 + a3 = 279.2
The coefficients a1, a2, and a3 are thus given by
[25   5  1] [a1]   [106.8]
[64   8  1] [a2] = [177.2]
[144 12  1] [a3]   [279.2]
Rewriting the equations gives
a1 = (106.8 - 5 a2 - a3)/25
a2 = (177.2 - 64 a1 - a3)/8
a3 = (279.2 - 144 a1 - 12 a2)/1
Iteration #1
Given the initial guess of the solution vector as
[a1; a2; a3] = [1; 2; 5]
we get
a1 = (106.8 - 5(2) - 5)/25 = 3.6720
a2 = (177.2 - 64(3.6720) - 5)/8 = -7.8510
a3 = (279.2 - 144(3.6720) - 12(-7.8510))/1 = -155.36
The absolute relative approximate error for each xi then is
|εa|1 = |(3.6720 - 1)/3.6720| × 100 = 72.76%
|εa|2 = |(-7.8510 - 2)/(-7.8510)| × 100 = 125.47%
|εa|3 = |(-155.36 - 5)/(-155.36)| × 100 = 103.22%
At the end of the first iteration,
The coefficients a1, a2, and a3 for the above expression are given by

  [ 25   5   1] [a1]   [106.8]
  [ 64   8   1] [a2] = [177.2]
  [144  12   1] [a3]   [279.2]

Rewriting the equations gives
  a1 = (106.8 - 5 a2 - a3) / 25
  a2 = (177.2 - 64 a1 - a3) / 8
  a3 = (279.2 - 144 a1 - 12 a2) / 1

Iteration #1
Given the initial guess of the solution vector as
  [a1, a2, a3]^T = [1, 2, 5]^T
we get
  a1 = (106.8 - 5(2) - (5)) / 25 = 3.6720
  a2 = (177.2 - 64(3.6720) - (5)) / 8 = -7.8510
  a3 = (279.2 - 144(3.6720) - 12(-7.8510)) / 1 = -155.36

The absolute relative approximate error for each xi then is
  |εa|1 = |(3.6720 - 1) / 3.6720| × 100 = 72.767%
  |εa|2 = |(-7.8510 - 2) / (-7.8510)| × 100 = 125.47%
  |εa|3 = |(-155.36 - 5) / (-155.36)| × 100 = 103.22%

At the end of the first iteration, the estimate of the solution vector is
  [a1, a2, a3]^T = [3.6720, -7.8510, -155.36]^T
and the maximum absolute relative approximate error is 125.47%.

Iteration #2
The estimate of the solution vector at the end of Iteration #1 is
  [a1, a2, a3]^T = [3.6720, -7.8510, -155.36]^T
Now we get
  a1 = (106.8 - 5(-7.8510) - (-155.36)) / 25 = 12.056
  a2 = (177.2 - 64(12.056) - (-155.36)) / 8 = -54.882
  a3 = (279.2 - 144(12.056) - 12(-54.882)) / 1 = -798.34

The absolute relative approximate error for each xi then is
  |εa|1 = |(12.056 - 3.6720) / 12.056| × 100 = 69.543%
  |εa|2 = |(-54.882 - (-7.8510)) / (-54.882)| × 100 = 85.695%
  |εa|3 = |(-798.34 - (-155.36)) / (-798.34)| × 100 = 80.540%

At the end of the second iteration, the estimate of the solution vector is
  [a1, a2, a3]^T = [12.056, -54.882, -798.34]^T
and the maximum absolute relative approximate error is 85.695%. Conducting more iterations gives the following values for the solution vector and the corresponding absolute relative approximate errors.
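The two hand iterations above can be checked with a short script. This is a sketch with our own helper name; it simply runs the rewritten equations in order, using the newest values as it goes.

```python
def gs_iterate(A, c, x, n_iter):
    """Run n_iter Gauss-Seidel sweeps on A x = c, updating x in place."""
    n = len(A)
    for _ in range(n_iter):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (c[i] - s) / A[i][i]
    return x

A = [[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]]
c = [106.8, 177.2, 279.2]

a = gs_iterate(A, c, [1.0, 2.0, 5.0], 1)
print(a)  # ≈ [3.6720, -7.8510, -155.36]
a = gs_iterate(A, c, a, 1)
print(a)  # ≈ [12.056, -54.882, -798.34]
```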
  Iteration   a1        |εa|1 %   a2         |εa|2 %   a3         |εa|3 %
  1           3.6720    72.767    -7.8510    125.47    -155.36    103.22
  2           12.056    69.543    -54.882    85.695    -798.34    80.540
  3           47.182    74.447    -255.51    78.521    -3448.9    76.852
  4           193.33    75.595    -1093.4    76.632    -14440     76.116
  5           800.53    75.850    -4577.2    76.112    -60072     75.963
  6           3322.6    75.906    -19049     75.972    -249580    75.931

As seen in the above table, the solution estimates are not converging to the true solution of
  a1 = 0.29048,  a2 = 19.690,  a3 = 1.0857

The above system of equations does not seem to converge. Why? Well, a pitfall of most iterative methods is that they may or may not converge. However, the solution to a certain class of systems of simultaneous equations does always converge using the Gauss-Seidel method. This class of systems is the one where the coefficient matrix [A] in [A][X] = [C] is diagonally dominant, that is

  |aii| ≥ Σ_{j=1, j≠i}^{n} |aij|  for all i, and
  |aii| > Σ_{j=1, j≠i}^{n} |aij|  for at least one i.

If a system of equations has a coefficient matrix that is not diagonally dominant, it may or may not converge. Fortunately, many physical systems that result in simultaneous linear equations have a diagonally dominant coefficient matrix, which then assures convergence for iterative methods such as the Gauss-Seidel method of solving simultaneous linear equations.

Example 2
Find the solution to the following system of equations using the Gauss-Seidel method.
  12 x1 + 3 x2 - 5 x3 = 1
  x1 + 5 x2 + 3 x3 = 28
  3 x1 + 7 x2 + 13 x3 = 76
Use
  [x1, x2, x3]^T = [1, 0, 1]^T
as the initial guess and conduct two iterations.

Solution
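The diagonal-dominance test just defined, together with a brute-force search for a row ordering that achieves it (relevant later in this chapter when equations are exchanged), can be sketched as follows. The helper names are our own.

```python
from itertools import permutations

def is_diagonally_dominant(A):
    """Weak dominance in every row, strict dominance in at least one row."""
    strict = False
    for i, row in enumerate(A):
        off = sum(abs(v) for j, v in enumerate(row) if j != i)
        if abs(row[i]) < off:
            return False
        if abs(row[i]) > off:
            strict = True
    return strict

def dominant_ordering(A):
    """Return a row permutation that makes A diagonally dominant, or None.

    Exchanging equations permutes the rows of [A] (and of [C]), so a
    brute-force search over row orders answers whether the system can
    be rearranged into diagonally dominant form.
    """
    for p in permutations(range(len(A))):
        if is_diagonally_dominant([A[i] for i in p]):
            return p
    return None
```

For the matrix of this example, `is_diagonally_dominant` returns True; with the first and third rows exchanged it returns False, but `dominant_ordering` recovers the good row order. For a system like x1 + x2 + x3 = 3, 2x1 + 3x2 + 4x3 = 9, x1 + 7x2 + x3 = 9, no ordering works and `dominant_ordering` returns None.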
The coefficient matrix
        [12   3   -5]
  [A] = [ 1   5    3]
        [ 3   7   13]
is diagonally dominant, as
  |a11| = |12| = 12 ≥ |a12| + |a13| = |3| + |-5| = 8
  |a22| = |5| = 5 ≥ |a21| + |a23| = |1| + |3| = 4
  |a33| = |13| = 13 ≥ |a31| + |a32| = |3| + |7| = 10
and the inequality is strictly greater than for at least one row. Hence, the solution should converge using the Gauss-Seidel method.

Rewriting the equations, we get
  x1 = (1 - 3 x2 + 5 x3) / 12
  x2 = (28 - x1 - 3 x3) / 5
  x3 = (76 - 3 x1 - 7 x2) / 13

Assuming an initial guess of
  [x1, x2, x3]^T = [1, 0, 1]^T

Iteration #1
  x1 = (1 - 3(0) + 5(1)) / 12 = 0.50000
  x2 = (28 - (0.50000) - 3(1)) / 5 = 4.9000
  x3 = (76 - 3(0.50000) - 7(4.9000)) / 13 = 3.0923

The absolute relative approximate error at the end of the first iteration is
  |εa|1 = |(0.50000 - 1) / 0.50000| × 100 = 100.00%
  |εa|2 = |(4.9000 - 0) / 4.9000| × 100 = 100.00%
  |εa|3 = |(3.0923 - 1) / 3.0923| × 100 = 67.662%
The maximum absolute relative approximate error is 100.00%.

Iteration #2
  x1 = (1 - 3(4.9000) + 5(3.0923)) / 12 = 0.14679
  x2 = (28 - (0.14679) - 3(3.0923)) / 5 = 3.7153
  x3 = (76 - 3(0.14679) - 7(3.7153)) / 13 = 3.8118

The absolute relative approximate error at the end of the second iteration is
  |εa|1 = |(0.14679 - 0.50000) / 0.14679| × 100 = 240.61%
  |εa|2 = |(3.7153 - 4.9000) / 3.7153| × 100 = 31.889%
  |εa|3 = |(3.8118 - 3.0923) / 3.8118| × 100 = 18.874%
The maximum absolute relative approximate error is 240.61%. This is greater than the value of 100.00% we obtained in the first iteration. Is the solution diverging? No, as you conduct more iterations, the solution converges as follows.

  Iteration   x1        |εa|1 %   x2       |εa|2 %   x3       |εa|3 %
  1           0.50000   100.00    4.9000   100.00    3.0923   67.662
  2           0.14679   240.61    3.7153   31.889    3.8118   18.874
  3           0.74275   80.236    3.1644   17.408    3.9708   4.0042
  4           0.94675   21.546    3.0281   4.4996    3.9971   0.65772
  5           0.99177   4.5394    3.0034   0.82499   4.0001   0.074383
  6           0.99919   0.74307   3.0001   0.10856   4.0001   0.00101

This is close to the exact solution vector of
  [x1, x2, x3]^T = [1, 3, 4]^T

Example 3
Given the system of equations
  3 x1 + 7 x2 + 13 x3 = 76
  x1 + 5 x2 + 3 x3 = 28
  12 x1 + 3 x2 - 5 x3 = 1
find the solution using the Gauss-Seidel method. Use
  [x1, x2, x3]^T = [1, 0, 1]^T
as the initial guess.
Solution
The coefficient matrix
        [ 3   7   13]
  [A] = [ 1   5    3]
        [12   3   -5]
is not diagonally dominant, as
  |a11| = |3| = 3 < |a12| + |a13| = |7| + |13| = 20
Hence, the Gauss-Seidel method may or may not converge.

Rewriting the equations, we get
  x1 = (76 - 7 x2 - 13 x3) / 3
  x2 = (28 - x1 - 3 x3) / 5
  x3 = (1 - 12 x1 - 3 x2) / (-5)

Assuming an initial guess of
  [x1, x2, x3]^T = [1, 0, 1]^T
the next six iterative values are given in the table below.

  Iteration   x1            |εa|1 %   x2            |εa|2 %   x3            |εa|3 %
  1           21.000        95.238    0.80000       100.00    50.680        98.027
  2           -196.15       110.71    14.421        94.453    -462.30       110.96
  3           1995.0        109.83    -116.02       112.43    4718.1        109.80
  4           -20149        109.90    1204.6        109.63    -47636        109.90
  5           2.0364×10^5   109.89    -1.2140×10^4  109.92    4.8144×10^5   109.89
  6           -2.0579×10^6  109.89    1.2272×10^5   109.89    -4.8653×10^6  109.89

You can see that this solution is not converging, and the coefficient matrix is not diagonally dominant. However, it is the same set of equations as the previous example, and that converged. The only difference is that we exchanged the first and the third equations with each other, and that made the coefficient matrix not diagonally dominant.

Therefore, it is possible that a system of equations can be made diagonally dominant if one exchanges the equations with each other. However, it is not possible for all cases. For example, the following set of equations
  x1 + x2 + x3 = 3
  2 x1 + 3 x2 + 4 x3 = 9
  x1 + 7 x2 + x3 = 9
cannot be rewritten to make the coefficient matrix diagonally dominant.

Key Terms:
Gauss-Seidel method
Convergence of Gauss-Seidel method
Diagonally dominant matrix

Chapter 04.07
LU Decomposition

After reading this chapter, you should be able to
1. identify when LU decomposition is numerically more efficient than Gaussian elimination,
2. decompose a nonsingular matrix into LU, and
3. show how LU decomposition is used to find the inverse of a matrix.

I hear about LU decomposition used as a method to solve a set of simultaneous linear equations. What is it?

We already studied two numerical methods of finding the solution to simultaneous linear equations – Naïve Gauss elimination and Gaussian elimination with partial pivoting. Then why do we need to learn another method? To appreciate why LU decomposition could be a better choice than the Gauss elimination techniques in some cases, let us first discuss what LU decomposition is about.

For a nonsingular matrix [A] on which one can successfully conduct the Naïve Gauss elimination forward elimination steps, one can always write it as
  [A] = [L][U]
where
  [L] = lower triangular matrix
  [U] = upper triangular matrix
Then, if one is solving a set of equations [A][X] = [C],
  [L][U][X] = [C]   as ([A] = [L][U])
Multiplying both sides by [L]^-1,
  [L]^-1 [L][U][X] = [L]^-1 [C]
  [I][U][X] = [L]^-1 [C]   as ([L]^-1 [L] = [I])
  [U][X] = [L]^-1 [C]   as ([I][U] = [U])
Denote
  [L]^-1 [C] = [Z]
Then the above gives the two triangular systems
  [L][Z] = [C]    (1)
and
  [U][X] = [Z]    (2)
So we can solve Equation (1) first for [Z] by using forward substitution and then use Equation (2) to calculate the solution vector [X] by back substitution.

This is all exciting, but LU decomposition looks more complicated than Gaussian elimination. Do we use LU decomposition because it is computationally more efficient than Gaussian elimination to solve a set of n equations given by [A][X] = [C]?

For a square matrix [A] of n×n size, the computational time[1] CT|DE to decompose the [A] matrix to [L][U] form is given by
  CT|DE = T (8n^3/3 + 4n^2 - 20n/3)
where
  T = clock cycle time[2].
The computational time CT|FS to solve by forward substitution [L][Z] = [C] is given by
  CT|FS = T (4n^2 - 4n)
The computational time CT|BS to solve by back substitution [U][X] = [Z] is given by
  CT|BS = T (4n^2 + 12n)
So, the total computational time to solve a set of equations by LU decomposition is
  CT|LU = CT|DE + CT|FS + CT|BS
        = T (8n^3/3 + 4n^2 - 20n/3) + T (4n^2 - 4n) + T (4n^2 + 12n)
        = T (8n^3/3 + 12n^2 + 4n/3)

Now let us look at the computational time taken by Gaussian elimination. The computational time CT|FE for the forward elimination part is
  CT|FE = T (8n^3/3 + 8n^2 - 32n/3)
and the computational time CT|BS for the back substitution part is
  CT|BS = T (4n^2 + 12n)

[1] The time is calculated by first separately calculating the number of additions, subtractions, multiplications, and divisions in a procedure such as back substitution, etc. We then assume 4 clock cycles each for an add, subtract, or multiply operation, and 16 clock cycles for a divide operation, as is the case for a typical AMD®-K7 chip. http://www.isi.edu/~draper/papers/mwscas07_kwon.pdf
[2] As an example, a 1.2 GHz CPU has a clock cycle of 1/(1.2 × 10^9) = 0.833333 ns.
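The two triangular solves of Equations (1) and (2) are simple loops; here is a sketch (the function names are ours, not from the text). Note that each routine does O(n^2) multiply-subtract work, consistent with the CT|FS and CT|BS operation counts quoted above.

```python
def forward_substitution(L, c):
    """Solve L z = c for a lower-triangular L (nonzero diagonal)."""
    n = len(L)
    z = [0.0] * n
    for i in range(n):
        z[i] = (c[i] - sum(L[i][j] * z[j] for j in range(i))) / L[i][i]
    return z

def back_substitution(U, z):
    """Solve U x = z for an upper-triangular U (nonzero diagonal)."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```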
So, the total computational time CT|GE to solve a set of equations by Gaussian elimination is
  CT|GE = CT|FE + CT|BS
        = T (8n^3/3 + 8n^2 - 32n/3) + T (4n^2 + 12n)
        = T (8n^3/3 + 12n^2 + 4n/3)
The computational time for Gaussian elimination and LU decomposition is identical.

This has confused me further! Why learn the LU decomposition method when it takes the same computational time as Gaussian elimination, and that too when the two methods are closely related? Please convince me that LU decomposition has its place in solving linear equations!

We now have the knowledge to convince you that the LU decomposition method has its place in the solution of simultaneous linear equations. Let us look at an example where the LU decomposition method is computationally more efficient than Gaussian elimination. Remember, in trying to find the inverse of the matrix [A] in Chapter 04.05, the problem reduces to solving n sets of equations with the n columns of the identity matrix as the right hand side vector. For the calculation of each column of the inverse of [A], the coefficient matrix [A] in the set of equations [A][X] = [C] does not change. So if we use the LU decomposition method, the [A] = [L][U] decomposition needs to be done only once, while the forward substitution (Equation 1) and the back substitution (Equation 2) each have to be done n times.

So the total computational time CT|inverse LU required to find the inverse of a matrix using LU decomposition is
  CT|inverse LU = 1 × CT|DE + n × CT|FS + n × CT|BS
               = T (8n^3/3 + 4n^2 - 20n/3) + n × T (4n^2 - 4n) + n × T (4n^2 + 12n)
               = T (32n^3/3 + 12n^2 - 20n/3)

In comparison, if the Gaussian elimination method were used to find the inverse of a matrix, the forward elimination as well as the back substitution would have to be done n times. The total computational time CT|inverse GE required to find the inverse of a matrix by using Gaussian elimination then is
  CT|inverse GE = n × CT|FE + n × CT|BS
               = n × T (8n^3/3 + 8n^2 - 32n/3) + n × T (4n^2 + 12n)
               = T (8n^4/3 + 12n^3 + 4n^2/3)

Clearly for large n, CT|inverse GE >> CT|inverse LU, as CT|inverse GE has a dominating term of n^4 while CT|inverse LU has a dominating term of n^3. For large values of n, the Gaussian elimination method would take approximately n/4 times more computational time (prove it) than the LU decomposition method. Typical values of the ratio of the computational time for different values of n are given in Table 1.

Table 1  Comparing computational times of finding the inverse of a matrix using LU decomposition and Gaussian elimination.
  n                               10     100    1000   10000
  CT|inverse GE / CT|inverse LU   3.28   25.83  250.8  2501

Are you convinced now that LU decomposition has its place in solving systems of equations? We are now ready to answer other curious questions such as
1) How do I find the [L] and [U] matrices for a nonsingular matrix [A]?
2) How do I conduct the forward and back substitution steps of Equations (1) and (2), respectively?

How do I decompose a non-singular matrix [A], that is, how do I find [A] = [L][U]?

If the forward elimination steps of the Naïve Gauss elimination method can be applied on a nonsingular matrix, then [A] can be decomposed into LU as

  [a11  a12  ...  a1n]   [1    0    ...  0] [u11  u12  ...  u1n]
  [a21  a22  ...  a2n] = [l21  1    ...  0] [0    u22  ...  u2n]
  [...               ]   [...             ] [...               ]
  [an1  an2  ...  ann]   [ln1  ln2  ...  1] [0    0    ...  unn]

The elements of the [U] matrix are exactly the same as the coefficient matrix one obtains at the end of the forward elimination steps in Naïve Gauss elimination. The lower triangular matrix [L] has 1 in its diagonal entries. The non-zero off-diagonal elements of [L] are the multipliers that made the corresponding entries zero in the upper triangular matrix [U] during forward elimination. Let us look at this using the same example as used in Naïve Gaussian elimination.
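The entries of Table 1 follow directly from the two operation-count polynomials above. A quick numerical check, using the formulas exactly as given (with the clock cycle time T factored out of the ratio):

```python
def ct_inverse_ge(n):
    # T * (8n^4/3 + 12n^3 + 4n^2/3), with T factored out
    return 8 * n**4 / 3 + 12 * n**3 + 4 * n**2 / 3

def ct_inverse_lu(n):
    # T * (32n^3/3 + 12n^2 - 20n/3), with T factored out
    return 32 * n**3 / 3 + 12 * n**2 - 20 * n / 3

for n in (10, 100, 1000, 10000):
    print(n, round(ct_inverse_ge(n) / ct_inverse_lu(n), 2))
```

The printed ratios reproduce Table 1 (approximately 3.3, 25.8, 251, and 2501) and show the roughly n/4 growth of the ratio.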
 25 5 1  a1  106.8   64 8 1 a  = 177.2     2    144 12 1  a 3  279.2     Solution Recall that [A][X ] = [C ] and if [A] = [L][U ] then first solving [L][Z ] = [C ] and then [U ][X ] = [Z ] gives the solution vector [X ] . Now in the previous example, we showed [A] = [L][U ] First solve [L][Z ] = [C ] 0 0 25 5 1   1 = 2.56 1 0  0 − 4.8 − 1.56    0 0.7  5.76 3.5 1  0    to give 0 0  z1  106.8   1 2.56 1 0  z  = 177.2    2    5.76 3.5 1  z 3  279.2      z1 = 106.8 2.56 z1 + z 2 = 177.2 5.76 z1 + 3.5z 2 + z3 = 279.2 Forward substitution starting from the first equation gives z1 = 106.8 LU Decomposition 04.07.7 Hence z 2 = 177.2 − 2.56 z1 = 177.2 − 2.56 × 106.8 = −96.208 z3 = 279.2 − 5.76 z1 − 3.5z 2 = 279.2 − 5.76 × 106.8 − 3.5 × (− 96.208) = 0.76  z1  [Z ] =  z 2     z3     106.8  = − 96.208      0.76  This matrix is same as the right hand side obtained at the end of the forward elimination steps of Naïve Gauss elimination method. Is this a coincidence? Now solve [U ][X ] = [Z ] 5 1   a1   106.8  25  0 − 4.8 − 1.56 a  = − 96.208    2   0 0 0.7   a3   0.76       25a1 + 5a 2 + a3 = 106.8 − 4.8a 2 − 1.56a3 = −96.208 0.7 a3 = 0.76 From the third equation 0.7 a3 = 0.76 0.76 a3 = 0.7 = 1.0857 Substituting the value of a3 in the second equation, − 4.8a 2 − 1.56a3 = −96.208 − 96.208 + 1.56a3 a2 = − 4.8 − 96.208 + 1.56 × 1.0857 = − 4.8 = 19.691 Substituting the value of a 2 and a3 in the first equation, 25a1 + 5a 2 + a3 = 106.8 a1 = 106.8 − 5a 2 − a3 25 04.07.8 106.8 − 5 × 19.691 − 1.0857 25 = 0.29048 Hence the solution vector is  a1  0.29048 a  =  19.691    2   a3   1.0857      = Chapter 04.07 How do I find the inverse of a square matrix using LU decomposition? A matrix [B ] is the inverse of [A] if [A][B] = [I ] = [B][A]. How can we use LU decomposition to find the inverse of the matrix? 
How do I find the inverse of a square matrix using LU decomposition?

A matrix [B] is the inverse of [A] if
  [A][B] = [I] = [B][A]
How can we use LU decomposition to find the inverse of the matrix? Assume the first column of [B] (the inverse of [A]) is
  [b11  b21  ...  bn1]^T
Then, from the above definition of an inverse and the definition of matrix multiplication,
  [A] [b11, b21, ..., bn1]^T = [1, 0, ..., 0]^T
Similarly, the second column of [B] is given by
  [A] [b12, b22, ..., bn2]^T = [0, 1, ..., 0]^T
Similarly, all columns of [B] can be found by solving n different sets of equations, with the columns of the right hand side being the n columns of the identity matrix.

Example 3
Use LU decomposition to find the inverse of
        [ 25   5   1]
  [A] = [ 64   8   1]
        [144  12   1]

Solution
Knowing that
  [A] = [L][U]
      = [1     0    0] [25    5      1   ]
        [2.56  1    0] [ 0   -4.8   -1.56]
        [5.76  3.5  1] [ 0    0      0.7 ]
we can solve for the first column of [B] = [A]^-1 by solving
  [ 25   5   1] [b11]   [1]
  [ 64   8   1] [b21] = [0]
  [144  12   1] [b31]   [0]
First solve [L][Z] = [C], that is
  [1     0    0] [z1]   [1]
  [2.56  1    0] [z2] = [0]
  [5.76  3.5  1] [z3]   [0]
to give
  z1 = 1
  2.56 z1 + z2 = 0
  5.76 z1 + 3.5 z2 + z3 = 0
Forward substitution starting from the first equation gives
  z1 = 1
  z2 = 0 - 2.56(1) = -2.56
  z3 = 0 - 5.76(1) - 3.5(-2.56) = 3.2
Hence
  [Z] = [1, -2.56, 3.2]^T
Now solve [U][X] = [Z], that is
  [25    5      1   ] [b11]   [1    ]
  [ 0   -4.8   -1.56] [b21] = [-2.56]
  [ 0    0      0.7 ] [b31]   [3.2  ]
  25 b11 + 5 b21 + b31 = 1
  -4.8 b21 - 1.56 b31 = -2.56
  0.7 b31 = 3.2
Backward substitution starting from the third equation gives
  b31 = 3.2/0.7 = 4.571
  b21 = (-2.56 + 1.56 b31)/(-4.8) = (-2.56 + 1.56 × 4.571)/(-4.8) = -0.9524
  b11 = (1 - 5 b21 - b31)/25 = (1 - 5(-0.9524) - 4.571)/25 = 0.04762
Hence the first column of the inverse of [A] is
  [b11, b21, b31]^T = [0.04762, -0.9524, 4.571]^T
Similarly, solving
  [ 25   5   1] [b12]   [0]                 [b12]   [-0.08333]
  [ 64   8   1] [b22] = [1]   gives         [b22] = [ 1.417  ]
  [144  12   1] [b32]   [0]                 [b32]   [-5.000  ]
and solving
  [ 25   5   1] [b13]   [0]                 [b13]   [ 0.03571]
  [ 64   8   1] [b23] = [0]   gives         [b23] = [-0.4643 ]
  [144  12   1] [b33]   [1]                 [b33]   [ 1.429  ]
Hence
            [ 0.04762  -0.08333   0.03571]
  [A]^-1 =  [-0.9524    1.417    -0.4643 ]
            [ 4.571    -5.000     1.429  ]
Can you confirm the following for the above example?
  [A][A]^-1 = [I] = [A]^-1 [A]

Key Terms:
LU decomposition
Inverse