Analysis of Algorithms
Matrix algorithms
Andres Mendez-Vazquez
November 22, 2015
Outline
1 Introduction
   Basic Definitions
   Matrix Examples
2 Matrix Operations
   Introduction
   Matrix Multiplication
   The Inverse
   Determinants
3 Improving the Complexity of Matrix Multiplication
   Back to Matrix Multiplication
   Strassen's Algorithm
   The Algorithm
   How Did He Do It?
   Complexity
4 Solving Systems of Linear Equations
   Introduction
   Lower Upper Decomposition
   Forward and Back Substitution
   Obtaining the Matrices
   Computing LU decomposition
   Computing LUP decomposition
5 Applications
   Inverting Matrices
   Least-squares Approximation
6 Exercises
   Some Exercises You Can Try!!!
Basic definitions
A matrix is a rectangular array of numbers:

A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}

The transpose is the matrix obtained by exchanging the rows and columns:

A^T = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}
Several cases of matrices
The zero matrix, e.g. for n = 5:

\begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}

The diagonal matrix:

\begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}
Several cases of matrices
The upper triangular matrix:

\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}
Operations on matrices
They define a vector space
Matrix addition.
Multiplication by a scalar.
The existence of a zero matrix.
Matrix Multiplication
What is Matrix Multiplication?
Given matrices A, B with dimensions n × n, the multiplication C = AB is defined by

c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}
Complexity and Algorithm
Algorithm: Complexity Θ(n³)
Square-Matrix-Multiply(A, B)
1 n = A.rows
2 let C be a new n × n matrix
3 for i = 1 to n
4   for j = 1 to n
5     C[i, j] = 0
6     for k = 1 to n
7       C[i, j] = C[i, j] + A[i, k] · B[k, j]
8 return C
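As a sanity check, here is a direct Python rendering of the pseudocode above (a sketch; matrices are assumed to be lists of lists):

```python
def square_matrix_multiply(A, B):
    """Naive Theta(n^3) product of two n x n matrices (lists of lists)."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# Multiplying by the 2 x 2 identity leaves a matrix unchanged.
assert square_matrix_multiply([[1, 0], [0, 1]], [[1, 2], [3, 4]]) == [[1, 2], [3, 4]]
```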
Matrix multiplication properties
Properties of the Multiplication
The identity exists for a matrix A of m × n:
   I_m A = A I_n = A.
The multiplication is associative:
   A(BC) = (AB)C.
In addition, multiplication is distributive over addition:
   A(B + C) = AB + AC
   (B + C)D = BD + CD
In addition
Definition
The inner product between vectors is defined as

x^T y = \sum_{i=1}^{n} x_i y_i
Matrix inverses
The inverse is defined as the matrix A^{-1} such that

AA^{-1} = A^{-1}A = I_n

Example
\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{-1} = \begin{pmatrix} 0 & 1 \\ 1 & -1 \end{pmatrix}
\Rightarrow
\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 1 \cdot 0 + 1 \cdot 1 & 1 \cdot 1 - 1 \cdot 1 \\ 1 \cdot 0 + 0 \cdot 1 & 1 \cdot 1 + 0 \cdot (-1) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}

Remark
A matrix that is invertible is called non-singular.
Properties of an inverse
Some properties are
(BA)^{-1} = A^{-1}B^{-1}
(A^{-1})^T = (A^T)^{-1}
The Rank of A
Rank of A
A collection of vectors x_1, x_2, ..., x_n is linearly dependent if there exist coefficients c_1, c_2, ..., c_n, not all zero, such that c_1x_1 + c_2x_2 + ... + c_nx_n = 0. The rank of a matrix is the maximum number of linearly independent rows.

Theorem 1
A square matrix has full rank if and only if it is nonsingular.
Other Theorems
A null vector x is a nonzero vector such that Ax = 0
Theorem 2: A matrix A has full column rank if and only if it does not have a null vector.
Then, for square matrices, we have
Corollary 3: A square matrix A is singular if and only if it has a null vector.
Determinants
A determinant can be defined recursively as follows:

\det(A) = \begin{cases} a_{11} & \text{if } n = 1 \\ \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det(A_{[1j]}) & \text{if } n > 1 \end{cases}   (1)

where A_{[ij]} denotes A with row i and column j deleted, and (-1)^{i+j}\det(A_{[ij]}) is called a cofactor.
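A direct Python rendering of this cofactor expansion (a sketch for small matrices only; the recursion does Θ(n!) work, so it is for illustration, not for production use):

```python
def det(A):
    """Determinant by cofactor expansion along the first row; A is a
    list of lists. The minor drops row 0 and column j (the A[1j] above)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # (-1)**(1 + j) in 1-based indexing equals (-1)**j in 0-based indexing.
        total += (-1) ** j * A[0][j] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == -2   # 1*4 - 2*3
```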
Theorems
Theorem 4 (determinant properties)
The determinant of a square matrix A has the following properties:
   If any row or any column of A is zero, then det(A) = 0.
   The determinant of A is multiplied by λ if the entries of any one row (or any one column) of A are all multiplied by λ.
   The determinant of A is unchanged if the entries in one row (respectively, column) are added to those in another row (respectively, column).
   The determinant of A equals the determinant of A^T.
   The determinant of A is multiplied by -1 if any two rows (or any two columns) are exchanged.

Theorem 5
An n × n matrix A is singular if and only if det(A) = 0.
Positive definite matrix
Definition
A matrix A is called positive definite if and only if x^T Ax > 0 for all x ≠ 0.

Theorem 6
For any matrix A with full column rank, the matrix A^T A is positive definite.
Matrix Multiplication
Problem description
Given n × n matrices A, B and C = AB, partition each one into four n/2 × n/2 blocks:

\begin{pmatrix} r & s \\ t & u \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e & f \\ g & h \end{pmatrix}

Thus, you could compute r, s, t and u using recursion!!!
r = ae + bg
s = af + bh
t = ce + dg
u = cf + dh
Problem
Complexity of previous approach
T(n) = 8T(n/2) + Θ(n²)

Thus
T(n) = Θ(n³)

Therefore
You need to use a different combination of products.
Strassen's Algorithm
It is a divide and conquer algorithm
Given matrices A, B, C with dimensions n × n, we recursively split the matrices so that we finish with 12 submatrices of size n/2 × n/2:

\begin{pmatrix} r & s \\ t & u \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e & f \\ g & h \end{pmatrix}

Remember the Gauss Trick?
Imagine the same for Matrix Multiplication.
Algorithm
Strassen's Algorithm
1 Divide the input matrices A and B into n/2 × n/2 submatrices.
2 Using Θ(n²) scalar additions and subtractions, compute 14 matrices A_1, B_1, ..., A_7, B_7, each of which is n/2 × n/2.
3 Recursively compute the seven matrix products P_i = A_iB_i for i = 1, 2, ..., 7.
4 Compute the desired matrix
   \begin{pmatrix} r & s \\ t & u \end{pmatrix}
   by adding and/or subtracting various combinations of the P_i matrices, using only Θ(n²) scalar additions and subtractions (a Python sketch of this scheme follows).
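A minimal Python sketch of the seven-product scheme, assuming numpy is available and n is a power of two; the products M_1, ..., M_7 follow the usual textbook formulation of Strassen's identities, which need not match the A_i, B_i numbering used on the following slides:

```python
import numpy as np

def strassen(A, B):
    """Strassen multiplication for n x n numpy arrays, n a power of two.
    Seven recursive half-size products instead of eight."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

A = np.random.rand(8, 8); B = np.random.rand(8, 8)
assert np.allclose(strassen(A, B), A @ B)
```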
Strassen Observed that
Trial and Error
First, he generated

P_i = A_iB_i = (\alpha_{i1}a + \alpha_{i2}b + \alpha_{i3}c + \alpha_{i4}d) \cdot (\beta_{i1}e + \beta_{i2}f + \beta_{i3}g + \beta_{i4}h)

where \alpha_{ij}, \beta_{ij} \in \{-1, 0, 1\}.
Then
r
r = ae + bg = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} +1 & 0 & 0 & 0 \\ 0 & 0 & +1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}

s
s = af + bh = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} 0 & +1 & 0 & 0 \\ 0 & 0 & 0 & +1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}
Therefore
t
t = ce + dg = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ +1 & 0 & 0 & 0 \\ 0 & 0 & +1 & 0 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}

u
u = cf + dh = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & +1 & 0 & 0 \\ 0 & 0 & 0 & +1 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}
Example: Compute s from the P_1 and P_2 matrices
Compute
s = P_1 + P_2

Where P_1
P_1 = A_1B_1 = a(f - h) = af - ah = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} 0 & +1 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}
Example: Compute s from the P_1 and P_2 matrices
Where P_2
P_2 = A_2B_2 = (a + b)h = ah + bh = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & +1 \\ 0 & 0 & 0 & +1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}

Adding the two products, the ah terms cancel:
s = P_1 + P_2 = (af - ah) + (ah + bh) = af + bh
Complexity
Because we are only computing 7 matrix products

T(n) = 7T(n/2) + Θ(n²) = Θ(n^{lg 7}) = O(n^{2.81})
Nevertheless
We often do not use Strassen's because
The constant factor hidden in the running time of the algorithm is larger than the constant factor of the naive Θ(n³) method.
When matrices are sparse, there are faster methods.
Strassen's is not as numerically stable as the naive method.
The submatrices formed at the levels of the recursion consume space.
The Holy Grail of Matrix Multiplication: O(n²)
In a method by Virginia Vassilevska Williams (2012), Assistant Professor at Stanford
The computational complexity of her method is ω < 2.3727, i.e. O(n^{2.3727}).
Better than Coppersmith and Winograd (1990), O(n^{2.375477}).

Many Researchers Believe that
The conjectures of Coppersmith, Winograd and Cohn et al. could lead to O(n²), though that would contradict a variant of the widely believed sunflower conjecture of Erdős and Rado.
Exercises
28.1-3
28.1-5
28.1-8
28.1-9
28.2-2
28.2-5
In Many Fields
From Optimization to Control
We are required to solve systems of simultaneous equations.

For Example
For polynomial curve fitting, we are given points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n).

We want
To find a polynomial of degree n - 1 with structure

p(x) = a_0 + a_1x + a_2x^2 + ... + a_{n-1}x^{n-1}
Thus
We can build a system of equations (see the worked sketch below):

a_0 + a_1x_1 + a_2x_1^2 + ... + a_{n-1}x_1^{n-1} = y_1
a_0 + a_1x_2 + a_2x_2^2 + ... + a_{n-1}x_2^{n-1} = y_2
   ...
a_0 + a_1x_n + a_2x_n^2 + ... + a_{n-1}x_n^{n-1} = y_n

We have n unknowns
a_0, a_1, a_2, ..., a_{n-1}
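For instance, this Vandermonde system can be assembled and solved directly (a sketch; numpy is an assumed tool here, and the three sample points are made up for illustration):

```python
import numpy as np

# Hypothetical sample points; any n points with distinct x_i work.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 7.0])
V = np.vander(xs, increasing=True)    # row i is (1, x_i, x_i^2, ...)
a = np.linalg.solve(V, ys)            # coefficients a_0, ..., a_{n-1}
assert np.allclose(np.polyval(a[::-1], xs), ys)   # p(x_i) = y_i
```

Here the solution is a = (1, 1, 1), i.e. p(x) = 1 + x + x², which indeed passes through all three points.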
Solving Systems of Linear Equations
Proceed as follows
We start with a set of linear equations in n unknowns x_1, x_2, ..., x_n:

a_{11}x_1 + a_{12}x_2 + ... + a_{1n}x_n = b_1
a_{21}x_1 + a_{22}x_2 + ... + a_{2n}x_n = b_2
   ...
a_{n1}x_1 + a_{n2}x_2 + ... + a_{nn}x_n = b_n

Something Notable
A set of values for x_1, x_2, ..., x_n that satisfies all of the equations simultaneously is said to be a solution to these equations.
In this section, we only treat the case in which there are exactly n equations in n unknowns.
Solving systems of linear equations
Continuation
We can conveniently rewrite the equations as the matrix-vector equation

\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}

or, equivalently, letting A = (a_{ij}), x = (x_j), and b = (b_i), as

Ax = b

In this section, we shall be concerned predominantly with the case in which A is nonsingular; after all, we want to invert A.
Overview of Lower Upper (LUP) Decomposition
Intuition
The idea behind LUP decomposition is to find three n × n matrices L, U, and P such that

PA = LU

where:
L is a unit lower triangular matrix.
U is an upper triangular matrix.
P is a permutation matrix.

Where
We call matrices L, U, and P satisfying the above equation an LUP decomposition of the matrix A.
What is a Permutation Matrix?
Basically
We represent the permutation P compactly by an array π[1..n]. For i = 1, 2, ..., n, the entry π[i] indicates that P_{i,π[i]} = 1 and P_{ij} = 0 for j ≠ π[i].

Thus
PA has a_{π[i],j} in row i and column j.
Pb has b_{π[i]} as its ith element.
A small indexing sketch follows.
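In code, the array π makes the row permutation a plain indexing step (a sketch; 0-based indices instead of the 1-based π above, numpy as an assumed tool):

```python
import numpy as np

pi = [2, 0, 1]            # 0-based: row i of PA is row pi[i] of A
A = np.array([[1, 2, 0], [3, 4, 4], [5, 6, 3]])
b = np.array([3, 7, 8])
PA, Pb = A[pi], b[pi]     # no explicit permutation matrix is ever built
assert (PA == np.array([[5, 6, 3], [1, 2, 0], [3, 4, 4]])).all()
```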
How can we use this to our advantage?
Look at this
Ax = b ⟹ PAx = Pb   (2)

Therefore
LUx = Pb   (3)

Now, if we let Ux = y
Ly = Pb   (4)
Thus
We first obtain y
Then, we obtain x.
Forward and Back Substitution
Forward substitution
Forward substitution can solve the lower triangular system Ly = Pb in Θ(n²) time, given L, P and b.

Then
Since L is unit lower triangular, the equation Ly = Pb can be rewritten as:

y_1 = b_{π[1]}
l_{21}y_1 + y_2 = b_{π[2]}
l_{31}y_1 + l_{32}y_2 + y_3 = b_{π[3]}
   ...
l_{n1}y_1 + l_{n2}y_2 + l_{n3}y_3 + ... + y_n = b_{π[n]}
Forward and Back Substitution
Back substitution
Back substitution is similar to forward substitution. Like forward substitution, this process runs in Θ(n²) time. Since U is upper triangular, we can rewrite the system Ux = y as

u_{11}x_1 + u_{12}x_2 + ... + u_{1,n-2}x_{n-2} + u_{1,n-1}x_{n-1} + u_{1,n}x_n = y_1
u_{22}x_2 + ... + u_{2,n-2}x_{n-2} + u_{2,n-1}x_{n-1} + u_{2,n}x_n = y_2
   ...
u_{n-2,n-2}x_{n-2} + u_{n-2,n-1}x_{n-1} + u_{n-2,n}x_n = y_{n-2}
u_{n-1,n-1}x_{n-1} + u_{n-1,n}x_n = y_{n-1}
u_{n,n}x_n = y_n
Example
We have

Ax = \begin{pmatrix} 1 & 2 & 0 \\ 3 & 4 & 4 \\ 5 & 6 & 3 \end{pmatrix} x = \begin{pmatrix} 3 \\ 7 \\ 8 \end{pmatrix} = b
Example
The L, U and P matrices

L = \begin{pmatrix} 1 & 0 & 0 \\ 0.2 & 1 & 0 \\ 0.6 & 0.5 & 1 \end{pmatrix}, \quad U = \begin{pmatrix} 5 & 6 & 3 \\ 0 & 0.8 & -0.6 \\ 0 & 0 & 2.5 \end{pmatrix}, \quad P = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
Example
Using forward substitution, we solve Ly = Pb for y:

Ly = \begin{pmatrix} 1 & 0 & 0 \\ 0.2 & 1 & 0 \\ 0.6 & 0.5 & 1 \end{pmatrix} y = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 3 \\ 7 \\ 8 \end{pmatrix} = Pb
Example
Using forward substitution, we get

y = \begin{pmatrix} 8 \\ 1.4 \\ 1.5 \end{pmatrix}
Example
Now, we use back substitution to solve Ux = y for x:

Ux = \begin{pmatrix} 5 & 6 & 3 \\ 0 & 0.8 & -0.6 \\ 0 & 0 & 2.5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 8 \\ 1.4 \\ 1.5 \end{pmatrix}
Example
Finally, we get

x = \begin{pmatrix} -1.4 \\ 2.2 \\ 0.6 \end{pmatrix}
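The whole example can be checked in a few lines (a sketch; numpy is an assumed tool):

```python
import numpy as np

A = np.array([[1, 2, 0], [3, 4, 4], [5, 6, 3]], dtype=float)
b = np.array([3, 7, 8], dtype=float)
P = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=float)
L = np.array([[1, 0, 0], [0.2, 1, 0], [0.6, 0.5, 1]])
U = np.array([[5, 6, 3], [0, 0.8, -0.6], [0, 0, 2.5]])

assert np.allclose(P @ A, L @ U)                        # the decomposition: PA = LU
assert np.allclose(A @ np.array([-1.4, 2.2, 0.6]), b)   # the computed x solves Ax = b
```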
Forward and Back Substitution
Given P, L, U, and b, the procedure LUP-Solve solves for x by combining forward and back substitution
LUP-Solve(L, U, π, b)
1 n = L.rows
2 let x be a new vector of length n
3 for i = 1 to n
4   y_i = b_{π[i]} - \sum_{j=1}^{i-1} l_{ij}y_j
5 for i = n downto 1
6   x_i = (y_i - \sum_{j=i+1}^{n} u_{ij}x_j) / u_{ii}
7 return x

Complexity
The running time is Θ(n²).
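A line-by-line Python rendering of LUP-Solve (a sketch; 0-based indices, matrices as lists of lists):

```python
def lup_solve(L, U, pi, b):
    """Solve Ax = b given PA = LU, with P encoded by the array pi
    (row i of PA is row pi[i] of A). Runs in Theta(n^2) time."""
    n = len(L)
    y = [0.0] * n
    for i in range(n):                  # forward substitution: Ly = Pb
        y[i] = b[pi[i]] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):        # back substitution: Ux = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[1, 0, 0], [0.2, 1, 0], [0.6, 0.5, 1]]
U = [[5, 6, 3], [0, 0.8, -0.6], [0, 0, 2.5]]
x = lup_solve(L, U, [2, 0, 1], [3, 7, 8])   # the example above: [-1.4, 2.2, 0.6]
```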
Ok, that works if we already have L, U and P!!!
Thus
We need to find those matrices.
How do we do it?
We are going to use something called Gaussian Elimination.
For this
We assume that A is an n × n matrix
Such that A is not singular.
We use a process known as Gaussian elimination to create the LU decomposition.
This algorithm is recursive in nature.

Properties
Clearly, if n = 1, we are done, for L = I_1 and U = A.
Computing an LU decomposition
For n > 1, we break A into four parts:

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} = \begin{pmatrix} a_{11} & w^T \\ v & A' \end{pmatrix}   (5)
Where
We have
v is a column (n - 1)-vector.
w^T is a row (n - 1)-vector.
A' is an (n - 1) × (n - 1) matrix.
Computing an LU decomposition
Thus, we can do the following:

A = \begin{pmatrix} a_{11} & w^T \\ v & A' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ v/a_{11} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{11} & w^T \\ 0 & A' - vw^T/a_{11} \end{pmatrix}

where A' - vw^T/a_{11} is called the Schur complement. Recursively decomposing the Schur complement as A' - vw^T/a_{11} = L'U' gives

A = \begin{pmatrix} 1 & 0 \\ v/a_{11} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{11} & w^T \\ 0 & L'U' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ v/a_{11} & L' \end{pmatrix} \begin{pmatrix} a_{11} & w^T \\ 0 & U' \end{pmatrix} = LU
Computing an LU decomposition
Pseudo-code running in Θ(n³)
LU-Decomposition(A)
1 n = A.rows
2 let L and U be new n × n matrices
3 initialize U with 0's below the diagonal
4 initialize L with 1's on the diagonal and 0's above the diagonal
5 for k = 1 to n
6   u_{kk} = a_{kk}
7   for i = k + 1 to n
8     l_{ik} = a_{ik}/u_{kk}    // l_{ik} holds v_i
9     u_{ki} = a_{ki}           // u_{ki} holds w_i^T
10  for i = k + 1 to n
11    for j = k + 1 to n
12      a_{ij} = a_{ij} - l_{ik}u_{kj}
13 return L and U
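The same elimination in Python, overwriting a copy of A in place exactly as the pseudocode does (a sketch; no pivoting, so every u_kk is assumed to be nonzero):

```python
def lu_decomposition(A):
    """In-place Gaussian elimination without pivoting, Theta(n^3).
    Returns unit lower-triangular L and upper-triangular U with A = LU."""
    n = len(A)
    A = [row[:] for row in A]                 # work on a copy
    for k in range(n):
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                # l_ik = a_ik / u_kk
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]  # Schur complement update
    L = [[A[i][j] if j < i else (1 if i == j else 0) for j in range(n)] for i in range(n)]
    U = [[A[i][j] if j >= i else 0 for j in range(n)] for i in range(n)]
    return L, U

# The 4 x 4 example on the next slide:
L, U = lu_decomposition([[2, 3, 1, 5], [6, 13, 5, 19], [2, 19, 10, 23], [4, 10, 11, 31]])
```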
Example
Here, we have this example; the matrix is updated in place, with the multipliers v/a_{kk} stored below the diagonal:

\begin{pmatrix} 2 & 3 & 1 & 5 \\ 6 & 13 & 5 & 19 \\ 2 & 19 & 10 & 23 \\ 4 & 10 & 11 & 31 \end{pmatrix}
\Rightarrow \begin{pmatrix} 13 & 5 & 19 \\ 19 & 10 & 23 \\ 10 & 11 & 31 \end{pmatrix} - \frac{1}{2}\begin{pmatrix} 6 \\ 2 \\ 4 \end{pmatrix}\begin{pmatrix} 3 & 1 & 5 \end{pmatrix} = \begin{pmatrix} 13 & 5 & 19 \\ 19 & 10 & 23 \\ 10 & 11 & 31 \end{pmatrix} - \frac{1}{2}\begin{pmatrix} 18 & 6 & 30 \\ 6 & 2 & 10 \\ 12 & 4 & 20 \end{pmatrix}, stored in place as \begin{pmatrix} 2 & 3 & 1 & 5 \\ 3 & 4 & 2 & 4 \\ 1 & 16 & 9 & 18 \\ 2 & 4 & 9 & 21 \end{pmatrix}
\Rightarrow \begin{pmatrix} 9 & 18 \\ 9 & 21 \end{pmatrix} - \frac{1}{4}\begin{pmatrix} 16 \\ 4 \end{pmatrix}\begin{pmatrix} 2 & 4 \end{pmatrix} = \begin{pmatrix} 9 & 18 \\ 9 & 21 \end{pmatrix} - \begin{pmatrix} 8 & 16 \\ 2 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 7 & 17 \end{pmatrix}, stored in place as \begin{pmatrix} 2 & 3 & 1 & 5 \\ 3 & 4 & 2 & 4 \\ 1 & 4 & 1 & 2 \\ 2 & 1 & 7 & 17 \end{pmatrix}
\Rightarrow finally 17 - 7 · 2 = 3, giving \begin{pmatrix} 2 & 3 & 1 & 5 \\ 3 & 4 & 2 & 4 \\ 1 & 4 & 1 & 2 \\ 2 & 1 & 7 & 3 \end{pmatrix}
Thus
We get the following decomposition

\begin{pmatrix} 2 & 3 & 1 & 5 \\ 6 & 13 & 5 & 19 \\ 2 & 19 & 10 & 23 \\ 4 & 10 & 11 & 31 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 3 & 1 & 0 & 0 \\ 1 & 4 & 1 & 0 \\ 2 & 1 & 7 & 1 \end{pmatrix} \begin{pmatrix} 2 & 3 & 1 & 5 \\ 0 & 4 & 2 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 3 \end{pmatrix}
Observations
Something Notable
The elements by which we divide during LU decomposition are called pivots.
They occupy the diagonal positions of the matrix U.
Why the permutation P?
It allows us to avoid dividing by 0.
Thus, What do we want?
We want P, L and U such that
PA = LU

However, we first move a non-zero element, a_{k1}
From somewhere in the first column to the (1, 1) position of the matrix.

In addition
We choose a_{k1} as the element in the first column with the greatest absolute value.
Exchange Rows
Thus
We exchange row 1 with row k, i.e., we multiply A by a permutation matrix Q on the left:

QA = \begin{pmatrix} a_{k1} & w^T \\ v & A' \end{pmatrix}

With
v = (a_{21}, a_{31}, ..., a_{n1})^T, where a_{11} replaces a_{k1}.
w^T = (a_{k2}, a_{k3}, ..., a_{kn}).
A' is an (n - 1) × (n - 1) matrix.
Now, a_{k1} ≠ 0
We have then

QA = \begin{pmatrix} a_{k1} & w^T \\ v & A' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ v/a_{k1} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & A' - vw^T/a_{k1} \end{pmatrix}
Important
Something Notable
If A is nonsingular, then the Schur complement A' - vw^T/a_{k1} is nonsingular, too.

Now, we can find recursively an LUP decomposition for it
P'(A' - vw^T/a_{k1}) = L'U'

Then, we define a new permutation matrix
P = \begin{pmatrix} 1 & 0 \\ 0 & P' \end{pmatrix} Q
Thus
We have

PA = \begin{pmatrix} 1 & 0 \\ 0 & P' \end{pmatrix} QA
   = \begin{pmatrix} 1 & 0 \\ 0 & P' \end{pmatrix} \begin{pmatrix} 1 & 0 \\ v/a_{k1} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & A' - vw^T/a_{k1} \end{pmatrix}
   = \begin{pmatrix} 1 & 0 \\ P'v/a_{k1} & P' \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & A' - vw^T/a_{k1} \end{pmatrix}
   = \begin{pmatrix} 1 & 0 \\ P'v/a_{k1} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & P'(A' - vw^T/a_{k1}) \end{pmatrix}
   = \begin{pmatrix} 1 & 0 \\ P'v/a_{k1} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & L'U' \end{pmatrix}
   = \begin{pmatrix} 1 & 0 \\ P'v/a_{k1} & L' \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & U' \end{pmatrix} = LU
Computing a LUP decomposition
Algorithm
LUP-Decomposition(A)
 1. n = A.rows
 2. let π[1..n] be a new array
 3. for i = 1 to n
 4.     π[i] = i
 5. for k = 1 to n
 6.     p = 0
 7.     for i = k to n
 8.         if |aik| > p
 9.             p = |aik|
10.             k' = i
11.     if p == 0
12.         error "singular matrix"
13.     exchange π[k] ←→ π[k']
14.     for i = 1 to n
15.         exchange aki ←→ ak'i
16.     for i = k + 1 to n
17.         aik = aik/akk
18.         for j = k + 1 to n
19.             aij = aij − aik · akj
79 / 102
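The following is a direct numpy translation of this pseudocode, offered as a sketch (the vectorized updates replace the explicit loops of lines 16-19, but the logic is the same); running it on the matrix of the example below reproduces the factors shown at the end of this subsection.

```python
import numpy as np

def lup_decomposition(A):
    """LUP decomposition with partial pivoting, following the pseudocode.

    Returns (LU, pi): the strict lower triangle of LU holds L (with an
    implicit unit diagonal), its upper triangle holds U, and pi records
    the row permutation (row pi[k] of the original A ends up as row k).
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    pi = np.arange(n)
    for k in range(n):
        # Lines 6-10: pick the pivot row k' with the largest |a_ik|.
        kp = k + np.argmax(np.abs(A[k:, k]))
        if A[kp, k] == 0.0:
            raise ValueError("singular matrix")
        # Lines 13-15: exchange pi[k] <-> pi[k'] and rows k <-> k'.
        pi[[k, kp]] = pi[[kp, k]]
        A[[k, kp]] = A[[kp, k]]
        # Lines 16-19: store the multipliers and update the Schur complement.
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return A, pi
```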
Computing a LUP decomposition
Example (the first column of each array shows π; as in the pseudocode,
the multipliers are stored in place below the diagonal)
Initial matrix:
1 |  2    0    2    0.6
2 |  3    3    4   -2
3 |  5    5    4    2
4 | -1   -2    3.4 -1
k = 1: the pivot is 5, so exchange rows 1 and 3:
3 |  5    5    4    2
2 |  3    3    4   -2
1 |  2    0    2    0.6
4 | -1   -2    3.4 -1
Eliminate column 1 (multipliers 0.6, 0.4, -0.2):
3 |  5    5    4    2
2 |  0.6  0    1.6 -3.2
1 |  0.4 -2    0.4 -0.2
4 | -0.2 -1    4.2 -0.6
k = 2: the pivot is -2, so exchange rows 2 and 3:
3 |  5    5    4    2
1 |  0.4 -2    0.4 -0.2
2 |  0.6  0    1.6 -3.2
4 | -0.2 -1    4.2 -0.6
Eliminate column 2 (multipliers 0 and 0.5):
3 |  5    5    4    2
1 |  0.4 -2    0.4 -0.2
2 |  0.6  0    1.6 -3.2
4 | -0.2  0.5  4   -0.5
A final pass (k = 3: exchange rows 3 and 4, then eliminate with
multiplier 0.4) produces the factors shown on the next slide, with
π = (3, 1, 4, 2).
80 / 102
Finally, you get
The Permutation and Decomposition
$$\underbrace{\begin{pmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{pmatrix}}_{P}
\underbrace{\begin{pmatrix} 2 & 0 & 2 & 0.6 \\ 3 & 3 & 4 & -2 \\ 5 & 5 & 4 & 2 \\ -1 & -2 & 3.4 & -1 \end{pmatrix}}_{A}
= \underbrace{\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0.4 & 1 & 0 & 0 \\ -0.2 & 0.5 & 1 & 0 \\ 0.6 & 0 & 0.4 & 1 \end{pmatrix}}_{L}
\underbrace{\begin{pmatrix} 5 & 5 & 4 & 2 \\ 0 & -2 & 0.4 & -0.2 \\ 0 & 0 & 4 & -0.5 \\ 0 & 0 & 0 & -3 \end{pmatrix}}_{U}$$
81 / 102
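As a quick check, the following numpy sketch verifies that these factors indeed satisfy PA = LU:

```python
import numpy as np

A = np.array([[2, 0, 2, 0.6],
              [3, 3, 4, -2],
              [5, 5, 4, 2],
              [-1, -2, 3.4, -1]])
P = np.array([[0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 1, 0, 0]])
L = np.array([[1, 0, 0, 0],
              [0.4, 1, 0, 0],
              [-0.2, 0.5, 1, 0],
              [0.6, 0, 0.4, 1]])
U = np.array([[5, 5, 4, 2],
              [0, -2, 0.4, -0.2],
              [0, 0, 4, -0.5],
              [0, 0, 0, -3]])
assert np.allclose(P @ A, L @ U)  # PA = LU holds
```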
Symmetric positive-definite matrices
Lemma 28.9
Any symmetric positive-definite matrix is nonsingular.
Lemma 28.10
If A is a symmetric positive-definite matrix, then every leading submatrix
of A is symmetric and positive-definite.
82 / 102
Symmetric positive-definite matrices
Definition: Schur complement
Let A be a symmetric positive-definite matrix, and let Ak be a leading
k × k submatrix of A. Partition A as:
$$A = \begin{pmatrix} A_k & B^T \\ B & C \end{pmatrix}$$
Then, the Schur complement of A with respect to Ak is defined to be
$$S = C - BA_k^{-1}B^T$$
83 / 102
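The following numpy sketch illustrates the definition; the 3 × 3 test matrix is an assumed example (any symmetric positive-definite matrix would do), and the final assertions anticipate Lemma 28.11 below:

```python
import numpy as np

# An assumed symmetric positive-definite example matrix.
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
k = 1
Ak = A[:k, :k]                          # leading k x k submatrix
B = A[k:, :k]
C = A[k:, k:]
S = C - B @ np.linalg.inv(Ak) @ B.T     # Schur complement of A w.r.t. Ak

# S is again symmetric positive-definite (Lemma 28.11):
assert np.allclose(S, S.T)
assert np.all(np.linalg.eigvalsh(S) > 0)
```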
Symmetric positive-definite matrices
Lemma 28.11 (Schur complement lemma)
If A is a symmetric positive-definite matrix and Ak is a leading k × k
submatrix of A, then the Schur complement of A with respect to Ak is
symmetric and positive-definite.
Corollary 28.12
LU decomposition of a symmetric positive-definite matrix never causes a
division by 0.
84 / 102
Outline
1 Introduction
Basic Definitions
Matrix Examples
2 Matrix Operations
Introduction
Matrix Multiplication
The Inverse
Determinants
3 Improving the Complexity of the Matrix Multiplication
Back to Matrix Multiplication
Strassen’s Algorithm
The Algorithm
How he did it?
Complexity
4 Solving Systems of Linear Equations
Introduction
Lower Upper Decomposition
Forward and Back Substitution
Obtaining the Matrices
Computing LU decomposition
Computing LUP decomposition
5 Applications
Inverting Matrices
Least-squares Approximation
6 Exercises
Some Exercises You Can Try!!!
85 / 102
Inverting matrices
LUP decomposition can be used to compute a matrix inverse
The computation of a matrix inverse can be sped up using techniques
such as Strassen's algorithm for matrix multiplication.
86 / 102
Computing a matrix inverse from an LUP decomposition
Proceed as follows
The equation AX = In can be viewed as a set of n distinct equations
of the form Axi = ei, for i = 1, ..., n.
We have an LUP decomposition of the matrix A in the form of three
matrices L, U, and P such that PA = LU.
Then we use forward and back substitution to solve Axi = ei for each column xi.
87 / 102
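A sketch of this procedure using scipy (lu_factor computes the LUP decomposition and lu_solve performs the forward and back substitutions):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2, 0, 2, 0.6],
              [3, 3, 4, -2],
              [5, 5, 4, 2],
              [-1, -2, 3.4, -1]])
lu, piv = lu_factor(A)                    # one Theta(n^3) LUP factorization
n = A.shape[0]
X = np.empty((n, n))
for i in range(n):
    e = np.zeros(n)
    e[i] = 1.0                            # i-th column of the identity
    X[:, i] = lu_solve((lu, piv), e)      # Theta(n^2) forward/back substitution
assert np.allclose(A @ X, np.eye(n))      # X = A^{-1}
```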
Complexity
First
We can compute each xi in time Θ(n²).
Thus, X can be computed in time Θ(n³).
The LUP decomposition itself is computed in time Θ(n³).
Finally
We can compute the inverse A⁻¹ of a matrix A in time Θ(n³).
88 / 102
Matrix multiplication and matrix inversion
Theorem 28.7
If we can invert an n × n matrix in time I(n), where I(n) = Ω(n²) and
I(n) satisfies the regularity condition I(3n) = O(I(n)), then we can
multiply two n × n matrices in time O(I(n)).
89 / 102
Matrix multiplication and matrix inversion
Theorem 28.8
If we can multiply two n × n real matrices in time M(n), where
M(n) = Ω(n²), M(n + k) = O(M(n)) for any k in the range 0 ≤ k ≤ n,
and M(n/2) ≤ cM(n) for some constant c < 1/2, then we can compute the
inverse of any real nonsingular n × n matrix in time O(M(n)).
90 / 102
Outline
1 Introduction
Basic Definitions
Matrix Examples
2 Matrix Operations
Introduction
Matrix Multiplication
The Inverse
Determinants
3 Improving the Complexity of the Matrix Multiplication
Back to Matrix Multiplication
Strassen’s Algorithm
The Algorithm
How he did it?
Complexity
4 Solving Systems of Linear Equations
Introduction
Lower Upper Decomposition
Forward and Back Substitution
Obtaining the Matrices
Computing LU decomposition
Computing LUP decomposition
5 Applications
Inverting Matrices
Least-squares Approximation
6 Exercises
Some Exercises You Can Try!!!
91 / 102
Least-squares Approximation
Fitting curves to given sets of data points is an important application
of symmetric positive-definite matrices.
Given
$(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)$
where the yi are known to be subject to measurement errors. We would
like to determine a function F(x) such that
$$y_i = F(x_i) + \eta_i \quad \text{for } i = 1, 2, \ldots, m$$
where the ηi are the approximation errors.
92 / 102
Least-squares Approximation
Continuation
The form of the function F depends on the problem at hand. Here we use
a linearly weighted sum of n basis functions:
$$F(x) = \sum_{j=1}^{n} c_j f_j(x)$$
A common choice is $f_j(x) = x^{j-1}$, which means that
$$F(x) = c_1 + c_2x + c_3x^2 + \ldots + c_nx^{n-1}$$
is a polynomial of degree n − 1 in x.
93 / 102
Least-squares Approximation
Continuation
Let
$$A = \begin{pmatrix} f_1(x_1) & f_2(x_1) & \ldots & f_n(x_1) \\ f_1(x_2) & f_2(x_2) & \ldots & f_n(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ f_1(x_m) & f_2(x_m) & \ldots & f_n(x_m) \end{pmatrix}$$
denote the matrix of values of the basis functions at the given points; that
is, aij = fj(xi). Let c = (ck) denote the desired size-n vector of
coefficients. Then,
$$Ac = \begin{pmatrix} f_1(x_1) & f_2(x_1) & \ldots & f_n(x_1) \\ f_1(x_2) & f_2(x_2) & \ldots & f_n(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ f_1(x_m) & f_2(x_m) & \ldots & f_n(x_m) \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} F(x_1) \\ F(x_2) \\ \vdots \\ F(x_m) \end{pmatrix}$$
94 / 102
Least-squares Approximation
Then
Thus, η = Ac − y is the m-vector of approximation errors. To minimize
the approximation errors, we choose to minimize the norm of the error
vector, which gives us a least-squares solution:
$$\|\eta\|^2 = \|Ac - y\|^2 = \sum_{i=1}^{m} \left( \sum_{j=1}^{n} a_{ij}c_j - y_i \right)^2$$
Thus
We can minimize ‖η‖ by differentiating ‖η‖² with respect to each ck and
then setting the result to 0:
$$\frac{d\|\eta\|^2}{dc_k} = \sum_{i=1}^{m} 2 \left( \sum_{j=1}^{n} a_{ij}c_j - y_i \right) a_{ik} = 0$$
95 / 102
Least-squares Approximation
Putting all the derivatives together
The n equations, for k = 1, 2, ..., n, are equivalent to the single matrix equation
$$(Ac - y)^T A = 0$$
or, equivalently,
$$A^T (Ac - y) = 0$$
which implies the normal equations
$$A^T Ac = A^T y$$
96 / 102
Least-squares Approximation
Continuation
The matrix $A^TA$ is always symmetric.
If A has full column rank, then $A^TA$ is positive-definite as well.
Hence, $(A^TA)^{-1}$ exists, and the solution to the equation $A^TAc = A^Ty$ is
$$c = \left( (A^TA)^{-1} A^T \right) y = A^+ y$$
where the matrix $A^+ = (A^TA)^{-1}A^T$ is called the pseudoinverse of the
matrix A.
97 / 102
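A minimal sketch of this formula, assuming A has full column rank; in practice numpy.linalg.pinv computes the pseudoinverse more robustly via the SVD:

```python
import numpy as np

def pseudoinverse(A):
    """A+ = (A^T A)^{-1} A^T; assumes A has full column rank."""
    return np.linalg.inv(A.T @ A) @ A.T

# For comparison, numpy's built-in SVD-based version: np.linalg.pinv(A)
```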
Least-squares Approximation
Continuation
As an example of producing a least-squares fit, suppose that we have the 5
data points (−1, 2), (1, 1), (2, 1), (3, 0), (5, 3).
98 / 102
Least-squares Approximation
Continuation
We start with the matrix of basis-function values
$$A = \begin{pmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ 1 & x_3 & x_3^2 \\ 1 & x_4 & x_4^2 \\ 1 & x_5 & x_5^2 \end{pmatrix} = \begin{pmatrix} 1 & -1 & 1 \\ 1 & 1 & 1 \\ 1 & 2 & 4 \\ 1 & 3 & 9 \\ 1 & 5 & 25 \end{pmatrix}$$
whose pseudoinverse is
$$A^+ = \begin{pmatrix} 0.500 & 0.300 & 0.200 & 0.100 & -0.100 \\ -0.388 & 0.093 & 0.190 & 0.193 & -0.088 \\ 0.060 & -0.036 & -0.048 & -0.036 & 0.060 \end{pmatrix}$$
99 / 102
Least-squares Approximation
Continuation
Multiplying y by A⁺, we obtain the coefficient vector
$$c = \begin{pmatrix} 1.200 \\ -0.757 \\ 0.214 \end{pmatrix}$$
which corresponds to the quadratic polynomial
$$F(x) = 1.200 - 0.757x + 0.214x^2$$
100 / 102
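The following numpy sketch reproduces this fit end to end; np.linalg.lstsq solves the same least-squares problem, though via a more numerically stable factorization than the explicit normal equations:

```python
import numpy as np

x = np.array([-1.0, 1.0, 2.0, 3.0, 5.0])
y = np.array([2.0, 1.0, 1.0, 0.0, 3.0])
A = np.vander(x, 3, increasing=True)   # columns: 1, x, x^2
c, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(c, 3))                  # approximately [ 1.2   -0.757  0.214]
```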
Outline
1 Introduction
Basic Definitions
Matrix Examples
2 Matrix Operations
Introduction
Matrix Multiplication
The Inverse
Determinants
3 Improving the Complexity of the Matrix Multiplication
Back to Matrix Multiplication
Strassen’s Algorithm
The Algorithm
How he did it?
Complexity
4 Solving Systems of Linear Equations
Introduction
Lower Upper Decomposition
Forward and Back Substitution
Obtaining the Matrices
Computing LU decomposition
Computing LUP decomposition
5 Applications
Inverting Matrices
Least-squares Approximation
6 Exercises
Some Exercises You Can Try!!!
101 / 102
Exercises
From Cormen’s book solve
34.5-1
34.5-2
34.5-3
34.5-4
34.5-5
34.5-7
34.5-8
102 / 102
Ad

More Related Content

What's hot (20)

Algorithm paradigms
Algorithm paradigmsAlgorithm paradigms
Algorithm paradigms
suresh5c2
 
Singular Value Decompostion (SVD): Worked example 1
Singular Value Decompostion (SVD): Worked example 1Singular Value Decompostion (SVD): Worked example 1
Singular Value Decompostion (SVD): Worked example 1
Isaac Yowetu
 
recurrence relations
 recurrence relations recurrence relations
recurrence relations
Anurag Cheela
 
LinearAlgebra.ppt
LinearAlgebra.pptLinearAlgebra.ppt
LinearAlgebra.ppt
vijaykumar838577
 
ML - Multiple Linear Regression
ML - Multiple Linear RegressionML - Multiple Linear Regression
ML - Multiple Linear Regression
Andrew Ferlitsch
 
Ant colony optimization
Ant colony optimizationAnt colony optimization
Ant colony optimization
Meenakshi Devi
 
Laplace transform and its applications
Laplace transform and its applicationsLaplace transform and its applications
Laplace transform and its applications
Nisarg Shah
 
Lesson 22: Quadratic Forms
Lesson 22: Quadratic FormsLesson 22: Quadratic Forms
Lesson 22: Quadratic Forms
Matthew Leingang
 
Introduction to matlab
Introduction to matlabIntroduction to matlab
Introduction to matlab
Khulna University
 
graph theory
graph theory graph theory
graph theory
ganith2k13
 
Singular Value Decompostion (SVD)
Singular Value Decompostion (SVD)Singular Value Decompostion (SVD)
Singular Value Decompostion (SVD)
Isaac Yowetu
 
Numerical Solution of Ordinary Differential Equations
Numerical Solution of Ordinary Differential EquationsNumerical Solution of Ordinary Differential Equations
Numerical Solution of Ordinary Differential Equations
Meenakshisundaram N
 
Introduction to Topological Data Analysis
Introduction to Topological Data AnalysisIntroduction to Topological Data Analysis
Introduction to Topological Data Analysis
Mason Porter
 
Python For Data Science Cheat Sheet
Python For Data Science Cheat SheetPython For Data Science Cheat Sheet
Python For Data Science Cheat Sheet
Karlijn Willems
 
Introduction to Graph Theory
Introduction to Graph TheoryIntroduction to Graph Theory
Introduction to Graph Theory
Yosuke Mizutani
 
code optimization 1...code optimization-1221849738688969-9
code optimization 1...code optimization-1221849738688969-9code optimization 1...code optimization-1221849738688969-9
code optimization 1...code optimization-1221849738688969-9
Sanjeev Raaz
 
Tree in discrete structure
Tree in discrete structureTree in discrete structure
Tree in discrete structure
Lahore Garrison University
 
B.tech ii unit-3 material multiple integration
B.tech ii unit-3 material multiple integrationB.tech ii unit-3 material multiple integration
B.tech ii unit-3 material multiple integration
Rai University
 
introduction to graph theory
introduction to graph theoryintroduction to graph theory
introduction to graph theory
Chuckie Balbuena
 
Soft computing
Soft computingSoft computing
Soft computing
ganeshpaul6
 
Algorithm paradigms
Algorithm paradigmsAlgorithm paradigms
Algorithm paradigms
suresh5c2
 
Singular Value Decompostion (SVD): Worked example 1
Singular Value Decompostion (SVD): Worked example 1Singular Value Decompostion (SVD): Worked example 1
Singular Value Decompostion (SVD): Worked example 1
Isaac Yowetu
 
recurrence relations
 recurrence relations recurrence relations
recurrence relations
Anurag Cheela
 
ML - Multiple Linear Regression
ML - Multiple Linear RegressionML - Multiple Linear Regression
ML - Multiple Linear Regression
Andrew Ferlitsch
 
Ant colony optimization
Ant colony optimizationAnt colony optimization
Ant colony optimization
Meenakshi Devi
 
Laplace transform and its applications
Laplace transform and its applicationsLaplace transform and its applications
Laplace transform and its applications
Nisarg Shah
 
Lesson 22: Quadratic Forms
Lesson 22: Quadratic FormsLesson 22: Quadratic Forms
Lesson 22: Quadratic Forms
Matthew Leingang
 
Singular Value Decompostion (SVD)
Singular Value Decompostion (SVD)Singular Value Decompostion (SVD)
Singular Value Decompostion (SVD)
Isaac Yowetu
 
Numerical Solution of Ordinary Differential Equations
Numerical Solution of Ordinary Differential EquationsNumerical Solution of Ordinary Differential Equations
Numerical Solution of Ordinary Differential Equations
Meenakshisundaram N
 
Introduction to Topological Data Analysis
Introduction to Topological Data AnalysisIntroduction to Topological Data Analysis
Introduction to Topological Data Analysis
Mason Porter
 
Python For Data Science Cheat Sheet
Python For Data Science Cheat SheetPython For Data Science Cheat Sheet
Python For Data Science Cheat Sheet
Karlijn Willems
 
Introduction to Graph Theory
Introduction to Graph TheoryIntroduction to Graph Theory
Introduction to Graph Theory
Yosuke Mizutani
 
code optimization 1...code optimization-1221849738688969-9
code optimization 1...code optimization-1221849738688969-9code optimization 1...code optimization-1221849738688969-9
code optimization 1...code optimization-1221849738688969-9
Sanjeev Raaz
 
B.tech ii unit-3 material multiple integration
B.tech ii unit-3 material multiple integrationB.tech ii unit-3 material multiple integration
B.tech ii unit-3 material multiple integration
Rai University
 
introduction to graph theory
introduction to graph theoryintroduction to graph theory
introduction to graph theory
Chuckie Balbuena
 

Viewers also liked (20)

Waves
WavesWaves
Waves
jghopwood
 
01 Machine Learning Introduction
01 Machine Learning Introduction 01 Machine Learning Introduction
01 Machine Learning Introduction
Andres Mendez-Vazquez
 
Preparation Data Structures 10 trees
Preparation Data Structures 10 treesPreparation Data Structures 10 trees
Preparation Data Structures 10 trees
Andres Mendez-Vazquez
 
11 Machine Learning Important Issues in Machine Learning
11 Machine Learning Important Issues in Machine Learning11 Machine Learning Important Issues in Machine Learning
11 Machine Learning Important Issues in Machine Learning
Andres Mendez-Vazquez
 
Drainage
DrainageDrainage
Drainage
Ansh Jindal
 
Preparation Data Structures 05 chain linear_list
Preparation Data Structures 05 chain linear_listPreparation Data Structures 05 chain linear_list
Preparation Data Structures 05 chain linear_list
Andres Mendez-Vazquez
 
Preparation Data Structures 07 stacks
Preparation Data Structures 07 stacksPreparation Data Structures 07 stacks
Preparation Data Structures 07 stacks
Andres Mendez-Vazquez
 
31 Machine Learning Unsupervised Cluster Validity
31 Machine Learning Unsupervised Cluster Validity31 Machine Learning Unsupervised Cluster Validity
31 Machine Learning Unsupervised Cluster Validity
Andres Mendez-Vazquez
 
Work energy power 2 reading assignment -revision 2
Work energy power 2 reading assignment -revision 2Work energy power 2 reading assignment -revision 2
Work energy power 2 reading assignment -revision 2
sashrilisdi
 
26 Computational Geometry
26 Computational Geometry26 Computational Geometry
26 Computational Geometry
Andres Mendez-Vazquez
 
Preparation Data Structures 11 graphs
Preparation Data Structures 11 graphsPreparation Data Structures 11 graphs
Preparation Data Structures 11 graphs
Andres Mendez-Vazquez
 
13 Machine Learning Supervised Decision Trees
13 Machine Learning Supervised Decision Trees13 Machine Learning Supervised Decision Trees
13 Machine Learning Supervised Decision Trees
Andres Mendez-Vazquez
 
03 Probability Review for Analysis of Algorithms
03 Probability Review for Analysis of Algorithms03 Probability Review for Analysis of Algorithms
03 Probability Review for Analysis of Algorithms
Andres Mendez-Vazquez
 
Periodic table (1)
Periodic table (1)Periodic table (1)
Periodic table (1)
jghopwood
 
What's the Matter?
What's the Matter?What's the Matter?
What's the Matter?
Stephen Taylor
 
07 Machine Learning - Expectation Maximization
07 Machine Learning - Expectation Maximization07 Machine Learning - Expectation Maximization
07 Machine Learning - Expectation Maximization
Andres Mendez-Vazquez
 
15 B-Trees
15 B-Trees15 B-Trees
15 B-Trees
Andres Mendez-Vazquez
 
F1
F1F1
F1
smartgeniusproduction
 
Preparation Data Structures 10 trees
Preparation Data Structures 10 treesPreparation Data Structures 10 trees
Preparation Data Structures 10 trees
Andres Mendez-Vazquez
 
11 Machine Learning Important Issues in Machine Learning
11 Machine Learning Important Issues in Machine Learning11 Machine Learning Important Issues in Machine Learning
11 Machine Learning Important Issues in Machine Learning
Andres Mendez-Vazquez
 
Preparation Data Structures 05 chain linear_list
Preparation Data Structures 05 chain linear_listPreparation Data Structures 05 chain linear_list
Preparation Data Structures 05 chain linear_list
Andres Mendez-Vazquez
 
Preparation Data Structures 07 stacks
Preparation Data Structures 07 stacksPreparation Data Structures 07 stacks
Preparation Data Structures 07 stacks
Andres Mendez-Vazquez
 
31 Machine Learning Unsupervised Cluster Validity
31 Machine Learning Unsupervised Cluster Validity31 Machine Learning Unsupervised Cluster Validity
31 Machine Learning Unsupervised Cluster Validity
Andres Mendez-Vazquez
 
Work energy power 2 reading assignment -revision 2
Work energy power 2 reading assignment -revision 2Work energy power 2 reading assignment -revision 2
Work energy power 2 reading assignment -revision 2
sashrilisdi
 
Preparation Data Structures 11 graphs
Preparation Data Structures 11 graphsPreparation Data Structures 11 graphs
Preparation Data Structures 11 graphs
Andres Mendez-Vazquez
 
13 Machine Learning Supervised Decision Trees
13 Machine Learning Supervised Decision Trees13 Machine Learning Supervised Decision Trees
13 Machine Learning Supervised Decision Trees
Andres Mendez-Vazquez
 
03 Probability Review for Analysis of Algorithms
03 Probability Review for Analysis of Algorithms03 Probability Review for Analysis of Algorithms
03 Probability Review for Analysis of Algorithms
Andres Mendez-Vazquez
 
Periodic table (1)
Periodic table (1)Periodic table (1)
Periodic table (1)
jghopwood
 
07 Machine Learning - Expectation Maximization
07 Machine Learning - Expectation Maximization07 Machine Learning - Expectation Maximization
07 Machine Learning - Expectation Maximization
Andres Mendez-Vazquez
 
Ad

Similar to 23 Matrix Algorithms (20)

01.02 linear equations
01.02 linear equations01.02 linear equations
01.02 linear equations
Andres Mendez-Vazquez
 
Numerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic EquationNumerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic Equation
payalpriyadarshinisa1
 
Chapter 3: Linear Systems and Matrices - Part 2/Slides
Chapter 3: Linear Systems and Matrices - Part 2/SlidesChapter 3: Linear Systems and Matrices - Part 2/Slides
Chapter 3: Linear Systems and Matrices - Part 2/Slides
Chaimae Baroudi
 
Rankmatrix
RankmatrixRankmatrix
Rankmatrix
Madhuri Sonkhaskar
 
Strassen's Matrix Multiplication divide and conquere algorithm
Strassen's Matrix Multiplication divide and conquere algorithmStrassen's Matrix Multiplication divide and conquere algorithm
Strassen's Matrix Multiplication divide and conquere algorithm
Ahmad177077
 
Solution of System of Linear Equations
Solution of System of Linear EquationsSolution of System of Linear Equations
Solution of System of Linear Equations
mofassair
 
Arjrandomjjejejj3ejjeejjdjddjjdjdjdjdjdjdjdjdjd
Arjrandomjjejejj3ejjeejjdjddjjdjdjdjdjdjdjdjdjdArjrandomjjejejj3ejjeejjdjddjjdjdjdjdjdjdjdjdjd
Arjrandomjjejejj3ejjeejjdjddjjdjdjdjdjdjdjdjdjd
12345arjitcs
 
Matrices and determinants
Matrices and determinantsMatrices and determinants
Matrices and determinants
Kum Visal
 
UNIT-1 MATRICES (All the Topics).pdf
UNIT-1 MATRICES (All the Topics).pdfUNIT-1 MATRICES (All the Topics).pdf
UNIT-1 MATRICES (All the Topics).pdf
AbdulKalam13785
 
Solving using systems
Solving using systemsSolving using systems
Solving using systems
holmsted
 
Matrices 2_System of Equations.pdf
Matrices 2_System of Equations.pdfMatrices 2_System of Equations.pdf
Matrices 2_System of Equations.pdf
Kwame Nkrumah University of Science and Technology
 
Determinants, crammers law, Inverse by adjoint and the applications
Determinants, crammers law,  Inverse by adjoint and the applicationsDeterminants, crammers law,  Inverse by adjoint and the applications
Determinants, crammers law, Inverse by adjoint and the applications
NikoBellic28
 
tw1979 Exercise 1 Report
tw1979 Exercise 1 Reporttw1979 Exercise 1 Report
tw1979 Exercise 1 Report
Thomas Wigg
 
Linear algebra03fallleturenotes01
Linear algebra03fallleturenotes01Linear algebra03fallleturenotes01
Linear algebra03fallleturenotes01
ST ZULAIHA NURHAJARURAHMAH
 
Linear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear AlgebraLinear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear Algebra
MUHAMMADUSMAN93058
 
Modern ontrol Systems lecture for bachelor students
Modern ontrol Systems lecture for bachelor studentsModern ontrol Systems lecture for bachelor students
Modern ontrol Systems lecture for bachelor students
AbusabahIAAhmed1
 
Cis435 week02
Cis435 week02Cis435 week02
Cis435 week02
ashish bansal
 
system of linear equations by Diler
system of linear equations by Dilersystem of linear equations by Diler
system of linear equations by Diler
kishor pokar
 
System of linear equations
System of linear equationsSystem of linear equations
System of linear equations
Diler4
 
Two algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networksTwo algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networks
ESCOM
 
Numerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic EquationNumerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic Equation
payalpriyadarshinisa1
 
Chapter 3: Linear Systems and Matrices - Part 2/Slides
Chapter 3: Linear Systems and Matrices - Part 2/SlidesChapter 3: Linear Systems and Matrices - Part 2/Slides
Chapter 3: Linear Systems and Matrices - Part 2/Slides
Chaimae Baroudi
 
Strassen's Matrix Multiplication divide and conquere algorithm
Strassen's Matrix Multiplication divide and conquere algorithmStrassen's Matrix Multiplication divide and conquere algorithm
Strassen's Matrix Multiplication divide and conquere algorithm
Ahmad177077
 
Solution of System of Linear Equations
Solution of System of Linear EquationsSolution of System of Linear Equations
Solution of System of Linear Equations
mofassair
 
Arjrandomjjejejj3ejjeejjdjddjjdjdjdjdjdjdjdjdjd
Arjrandomjjejejj3ejjeejjdjddjjdjdjdjdjdjdjdjdjdArjrandomjjejejj3ejjeejjdjddjjdjdjdjdjdjdjdjdjd
Arjrandomjjejejj3ejjeejjdjddjjdjdjdjdjdjdjdjdjd
12345arjitcs
 
Matrices and determinants
Matrices and determinantsMatrices and determinants
Matrices and determinants
Kum Visal
 
UNIT-1 MATRICES (All the Topics).pdf
UNIT-1 MATRICES (All the Topics).pdfUNIT-1 MATRICES (All the Topics).pdf
UNIT-1 MATRICES (All the Topics).pdf
AbdulKalam13785
 
Solving using systems
Solving using systemsSolving using systems
Solving using systems
holmsted
 
Determinants, crammers law, Inverse by adjoint and the applications
Determinants, crammers law,  Inverse by adjoint and the applicationsDeterminants, crammers law,  Inverse by adjoint and the applications
Determinants, crammers law, Inverse by adjoint and the applications
NikoBellic28
 
tw1979 Exercise 1 Report
tw1979 Exercise 1 Reporttw1979 Exercise 1 Report
tw1979 Exercise 1 Report
Thomas Wigg
 
Linear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear AlgebraLinear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear Algebra
MUHAMMADUSMAN93058
 
Modern ontrol Systems lecture for bachelor students
Modern ontrol Systems lecture for bachelor studentsModern ontrol Systems lecture for bachelor students
Modern ontrol Systems lecture for bachelor students
AbusabahIAAhmed1
 
system of linear equations by Diler
system of linear equations by Dilersystem of linear equations by Diler
system of linear equations by Diler
kishor pokar
 
System of linear equations
System of linear equationsSystem of linear equations
System of linear equations
Diler4
 
Two algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networksTwo algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networks
ESCOM
 
Ad

More from Andres Mendez-Vazquez (20)

2.03 bayesian estimation
2.03 bayesian estimation2.03 bayesian estimation
2.03 bayesian estimation
Andres Mendez-Vazquez
 
05 linear transformations
05 linear transformations05 linear transformations
05 linear transformations
Andres Mendez-Vazquez
 
01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors
Andres Mendez-Vazquez
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues
Andres Mendez-Vazquez
 
01.01 vector spaces
01.01 vector spaces01.01 vector spaces
01.01 vector spaces
Andres Mendez-Vazquez
 
06 recurrent neural_networks
06 recurrent neural_networks06 recurrent neural_networks
06 recurrent neural_networks
Andres Mendez-Vazquez
 
05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation
Andres Mendez-Vazquez
 
Zetta global
Zetta globalZetta global
Zetta global
Andres Mendez-Vazquez
 
01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning
Andres Mendez-Vazquez
 
25 introduction reinforcement_learning
25 introduction reinforcement_learning25 introduction reinforcement_learning
25 introduction reinforcement_learning
Andres Mendez-Vazquez
 
Neural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning SyllabusNeural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning Syllabus
Andres Mendez-Vazquez
 
Introduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabusIntroduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabus
Andres Mendez-Vazquez
 
Ideas 09 22_2018
Ideas 09 22_2018Ideas 09 22_2018
Ideas 09 22_2018
Andres Mendez-Vazquez
 
Ideas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data SciencesIdeas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data Sciences
Andres Mendez-Vazquez
 
Analysis of Algorithms Syllabus
Analysis of Algorithms  SyllabusAnalysis of Algorithms  Syllabus
Analysis of Algorithms Syllabus
Andres Mendez-Vazquez
 
20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations
Andres Mendez-Vazquez
 
18.1 combining models
18.1 combining models18.1 combining models
18.1 combining models
Andres Mendez-Vazquez
 
17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension
Andres Mendez-Vazquez
 
A basic introduction to learning
A basic introduction to learningA basic introduction to learning
A basic introduction to learning
Andres Mendez-Vazquez
 
Introduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems SyllabusIntroduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems Syllabus
Andres Mendez-Vazquez
 
01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors
Andres Mendez-Vazquez
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues
Andres Mendez-Vazquez
 
05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation
Andres Mendez-Vazquez
 
01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning
Andres Mendez-Vazquez
 
25 introduction reinforcement_learning
25 introduction reinforcement_learning25 introduction reinforcement_learning
25 introduction reinforcement_learning
Andres Mendez-Vazquez
 
Neural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning SyllabusNeural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning Syllabus
Andres Mendez-Vazquez
 
Introduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabusIntroduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabus
Andres Mendez-Vazquez
 
Ideas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data SciencesIdeas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data Sciences
Andres Mendez-Vazquez
 
20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations
Andres Mendez-Vazquez
 
Introduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems SyllabusIntroduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems Syllabus
Andres Mendez-Vazquez
 

Recently uploaded (20)

Automatic Quality Assessment for Speech and Beyond
Automatic Quality Assessment for Speech and BeyondAutomatic Quality Assessment for Speech and Beyond
Automatic Quality Assessment for Speech and Beyond
NU_I_TODALAB
 
Working with USDOT UTCs: From Conception to Implementation
Working with USDOT UTCs: From Conception to ImplementationWorking with USDOT UTCs: From Conception to Implementation
Working with USDOT UTCs: From Conception to Implementation
Alabama Transportation Assistance Program
 
🚀 TDX Bengaluru 2025 Unwrapped: Key Highlights, Innovations & Trailblazer Tak...
🚀 TDX Bengaluru 2025 Unwrapped: Key Highlights, Innovations & Trailblazer Tak...🚀 TDX Bengaluru 2025 Unwrapped: Key Highlights, Innovations & Trailblazer Tak...
🚀 TDX Bengaluru 2025 Unwrapped: Key Highlights, Innovations & Trailblazer Tak...
SanjeetMishra29
 
IBAAS 2023 Series_Lecture 8- Dr. Nandi.pdf
IBAAS 2023 Series_Lecture 8- Dr. Nandi.pdfIBAAS 2023 Series_Lecture 8- Dr. Nandi.pdf
IBAAS 2023 Series_Lecture 8- Dr. Nandi.pdf
VigneshPalaniappanM
 
Design Optimization of Reinforced Concrete Waffle Slab Using Genetic Algorithm
Design Optimization of Reinforced Concrete Waffle Slab Using Genetic AlgorithmDesign Optimization of Reinforced Concrete Waffle Slab Using Genetic Algorithm
Design Optimization of Reinforced Concrete Waffle Slab Using Genetic Algorithm
Journal of Soft Computing in Civil Engineering
 
Modeling the Influence of Environmental Factors on Concrete Evaporation Rate
Modeling the Influence of Environmental Factors on Concrete Evaporation RateModeling the Influence of Environmental Factors on Concrete Evaporation Rate
Modeling the Influence of Environmental Factors on Concrete Evaporation Rate
Journal of Soft Computing in Civil Engineering
 
Control Methods of Noise Pollutions.pptx
Control Methods of Noise Pollutions.pptxControl Methods of Noise Pollutions.pptx
Control Methods of Noise Pollutions.pptx
vvsasane
 
01.คุณลักษณะเฉพาะของอุปกรณ์_pagenumber.pdf
01.คุณลักษณะเฉพาะของอุปกรณ์_pagenumber.pdf01.คุณลักษณะเฉพาะของอุปกรณ์_pagenumber.pdf
01.คุณลักษณะเฉพาะของอุปกรณ์_pagenumber.pdf
PawachMetharattanara
 
GROUP 2 - MANUFACTURE OF LIME, GYPSUM AND CEMENT.pdf
GROUP 2 - MANUFACTURE OF LIME, GYPSUM AND CEMENT.pdfGROUP 2 - MANUFACTURE OF LIME, GYPSUM AND CEMENT.pdf
GROUP 2 - MANUFACTURE OF LIME, GYPSUM AND CEMENT.pdf
kemimafe11
 
Slide share PPT of NOx control technologies.pptx
Slide share PPT of  NOx control technologies.pptxSlide share PPT of  NOx control technologies.pptx
Slide share PPT of NOx control technologies.pptx
vvsasane
 
Using the Artificial Neural Network to Predict the Axial Strength and Strain ...
Using the Artificial Neural Network to Predict the Axial Strength and Strain ...Using the Artificial Neural Network to Predict the Axial Strength and Strain ...
Using the Artificial Neural Network to Predict the Axial Strength and Strain ...
Journal of Soft Computing in Civil Engineering
 
Deepfake Phishing: A New Frontier in Cyber Threats
Deepfake Phishing: A New Frontier in Cyber ThreatsDeepfake Phishing: A New Frontier in Cyber Threats
Deepfake Phishing: A New Frontier in Cyber Threats
RaviKumar256934
 
Environment .................................
Environment .................................Environment .................................
Environment .................................
shadyozq9
 
Unleashing the Power of Salesforce Flows &amp_ Slack Integration!.pptx
Unleashing the Power of Salesforce Flows &amp_ Slack Integration!.pptxUnleashing the Power of Salesforce Flows &amp_ Slack Integration!.pptx
Unleashing the Power of Salesforce Flows &amp_ Slack Integration!.pptx
SanjeetMishra29
 
860556374-10280271.pptx PETROLEUM COKE CALCINATION PLANT
860556374-10280271.pptx PETROLEUM COKE CALCINATION PLANT860556374-10280271.pptx PETROLEUM COKE CALCINATION PLANT
860556374-10280271.pptx PETROLEUM COKE CALCINATION PLANT
Pierre Celestin Eyock
 
acid base ppt and their specific application in food
acid base ppt and their specific application in foodacid base ppt and their specific application in food
acid base ppt and their specific application in food
Fatehatun Noor
 
Water Industry Process Automation & Control Monthly May 2025
Water Industry Process Automation & Control Monthly May 2025Water Industry Process Automation & Control Monthly May 2025
Water Industry Process Automation & Control Monthly May 2025
Water Industry Process Automation & Control
 
Slide share PPT of SOx control technologies.pptx
Slide share PPT of SOx control technologies.pptxSlide share PPT of SOx control technologies.pptx
Slide share PPT of SOx control technologies.pptx
vvsasane
 
Urban Transport Infrastructure September 2023
Urban Transport Infrastructure September 2023Urban Transport Infrastructure September 2023
Urban Transport Infrastructure September 2023
Rajesh Prasad
 
David Boutry - Specializes In AWS, Microservices And Python
David Boutry - Specializes In AWS, Microservices And PythonDavid Boutry - Specializes In AWS, Microservices And Python
David Boutry - Specializes In AWS, Microservices And Python
David Boutry
 
Automatic Quality Assessment for Speech and Beyond
Automatic Quality Assessment for Speech and BeyondAutomatic Quality Assessment for Speech and Beyond
Automatic Quality Assessment for Speech and Beyond
NU_I_TODALAB
 
🚀 TDX Bengaluru 2025 Unwrapped: Key Highlights, Innovations & Trailblazer Tak...
🚀 TDX Bengaluru 2025 Unwrapped: Key Highlights, Innovations & Trailblazer Tak...🚀 TDX Bengaluru 2025 Unwrapped: Key Highlights, Innovations & Trailblazer Tak...
🚀 TDX Bengaluru 2025 Unwrapped: Key Highlights, Innovations & Trailblazer Tak...
SanjeetMishra29
 
IBAAS 2023 Series_Lecture 8- Dr. Nandi.pdf
IBAAS 2023 Series_Lecture 8- Dr. Nandi.pdfIBAAS 2023 Series_Lecture 8- Dr. Nandi.pdf
IBAAS 2023 Series_Lecture 8- Dr. Nandi.pdf
VigneshPalaniappanM
 
Control Methods of Noise Pollutions.pptx
Control Methods of Noise Pollutions.pptxControl Methods of Noise Pollutions.pptx
Control Methods of Noise Pollutions.pptx
vvsasane
 
01.คุณลักษณะเฉพาะของอุปกรณ์_pagenumber.pdf
01.คุณลักษณะเฉพาะของอุปกรณ์_pagenumber.pdf01.คุณลักษณะเฉพาะของอุปกรณ์_pagenumber.pdf
01.คุณลักษณะเฉพาะของอุปกรณ์_pagenumber.pdf
PawachMetharattanara
 
GROUP 2 - MANUFACTURE OF LIME, GYPSUM AND CEMENT.pdf
GROUP 2 - MANUFACTURE OF LIME, GYPSUM AND CEMENT.pdfGROUP 2 - MANUFACTURE OF LIME, GYPSUM AND CEMENT.pdf
GROUP 2 - MANUFACTURE OF LIME, GYPSUM AND CEMENT.pdf
kemimafe11
 
Slide share PPT of NOx control technologies.pptx
Slide share PPT of  NOx control technologies.pptxSlide share PPT of  NOx control technologies.pptx
Slide share PPT of NOx control technologies.pptx
vvsasane
 
Deepfake Phishing: A New Frontier in Cyber Threats
Deepfake Phishing: A New Frontier in Cyber ThreatsDeepfake Phishing: A New Frontier in Cyber Threats
Deepfake Phishing: A New Frontier in Cyber Threats
RaviKumar256934
 
Environment .................................
Environment .................................Environment .................................
Environment .................................
shadyozq9
 
Unleashing the Power of Salesforce Flows &amp_ Slack Integration!.pptx
Unleashing the Power of Salesforce Flows &amp_ Slack Integration!.pptxUnleashing the Power of Salesforce Flows &amp_ Slack Integration!.pptx
Unleashing the Power of Salesforce Flows &amp_ Slack Integration!.pptx
SanjeetMishra29
 
860556374-10280271.pptx PETROLEUM COKE CALCINATION PLANT
860556374-10280271.pptx PETROLEUM COKE CALCINATION PLANT860556374-10280271.pptx PETROLEUM COKE CALCINATION PLANT
860556374-10280271.pptx PETROLEUM COKE CALCINATION PLANT
Pierre Celestin Eyock
 
acid base ppt and their specific application in food
acid base ppt and their specific application in foodacid base ppt and their specific application in food
acid base ppt and their specific application in food
Fatehatun Noor
 
Slide share PPT of SOx control technologies.pptx
Slide share PPT of SOx control technologies.pptxSlide share PPT of SOx control technologies.pptx
Slide share PPT of SOx control technologies.pptx
vvsasane
 
Urban Transport Infrastructure September 2023
Urban Transport Infrastructure September 2023Urban Transport Infrastructure September 2023
Urban Transport Infrastructure September 2023
Rajesh Prasad
 
David Boutry - Specializes In AWS, Microservices And Python
David Boutry - Specializes In AWS, Microservices And PythonDavid Boutry - Specializes In AWS, Microservices And Python
David Boutry - Specializes In AWS, Microservices And Python
David Boutry
 

23 Matrix Algorithms

  • 1. Analysis of Algorithms Matrix algorithms Andres Mendez-Vazquez November 22, 2015 1 / 102
  • 2. Outline 1 Introduction Basic Definitions Matrix Examples 2 Matrix Operations Introduction Matrix Multiplication The Inverse Determinants 3 Improving the Complexity of the Matrix Multiplication Back to Matrix Multiplication Strassen’s Algorithm The Algorithm How he did it? Complexity 4 Solving Systems of Linear Equations Introduction Lower Upper Decomposition Forward and Back Substitution Obtaining the Matrices Computing LU decomposition Computing LUP decomposition 5 Applications Inverting Matrices Least-squares Approximation 6 Exercises Some Exercises You Can Try!!! 2 / 102
  • 3. Outline 1 Introduction Basic Definitions Matrix Examples 2 Matrix Operations Introduction Matrix Multiplication The Inverse Determinants 3 Improving the Complexity of the Matrix Multiplication Back to Matrix Multiplication Strassen’s Algorithm The Algorithm How he did it? Complexity 4 Solving Systems of Linear Equations Introduction Lower Upper Decomposition Forward and Back Substitution Obtaining the Matrices Computing LU decomposition Computing LUP decomposition 5 Applications Inverting Matrices Least-squares Approximation 6 Exercises Some Exercises You Can Try!!! 3 / 102
  • 4. Basic definitions A matrix is a rectangular array of numbers A = a11 a12 a13 a21 a22 a23 = 1 2 3 4 5 6 A transpose matrix is the matrix obtained by exchanging the rows and columns AT =    1 4 2 5 3 6    4 / 102
  • 5. Basic definitions A matrix is a rectangular array of numbers A = a11 a12 a13 a21 a22 a23 = 1 2 3 4 5 6 A transpose matrix is the matrix obtained by exchanging the rows and columns AT =    1 4 2 5 3 6    4 / 102
  • 6. Outline 1 Introduction Basic Definitions Matrix Examples 2 Matrix Operations Introduction Matrix Multiplication The Inverse Determinants 3 Improving the Complexity of the Matrix Multiplication Back to Matrix Multiplication Strassen’s Algorithm The Algorithm How he did it? Complexity 4 Solving Systems of Linear Equations Introduction Lower Upper Decomposition Forward and Back Substitution Obtaining the Matrices Computing LU decomposition Computing LUP decomposition 5 Applications Inverting Matrices Least-squares Approximation 6 Exercises Some Exercises You Can Try!!! 5 / 102
  • 7. Several cases of matrices Zero matrix        0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0        The diagonal matrix       a11 0 · · · 0 0 a22 · · · 0 ... ... ... ... 0 0 · · · ann       6 / 102
  • 8. Several cases of matrices Zero matrix        0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0        The diagonal matrix       a11 0 · · · 0 0 a22 · · · 0 ... ... ... ... 0 0 · · · ann       6 / 102
  • 9. Several cases of matrices Upper triangular matrix       a11 a12 · · · a1n 0 a22 · · · a2n ... ... ... ... 0 0 · · · ann       7 / 102
  • 10. Outline 1 Introduction Basic Definitions Matrix Examples 2 Matrix Operations Introduction Matrix Multiplication The Inverse Determinants 3 Improving the Complexity of the Matrix Multiplication Back to Matrix Multiplication Strassen’s Algorithm The Algorithm How he did it? Complexity 4 Solving Systems of Linear Equations Introduction Lower Upper Decomposition Forward and Back Substitution Obtaining the Matrices Computing LU decomposition Computing LUP decomposition 5 Applications Inverting Matrices Least-squares Approximation 6 Exercises Some Exercises You Can Try!!! 8 / 102
  • 11. Operations on matrices They Define a Vectorial Space Matrix addition. Multiplication by scalar. The existence of zero. 9 / 102
  • 12. Operations on matrices They Define a Vectorial Space Matrix addition. Multiplication by scalar. The existence of zero. 9 / 102
  • 13. Operations on matrices They Define a Vectorial Space Matrix addition. Multiplication by scalar. The existence of zero. 9 / 102
  • 14. Outline 1 Introduction Basic Definitions Matrix Examples 2 Matrix Operations Introduction Matrix Multiplication The Inverse Determinants 3 Improving the Complexity of the Matrix Multiplication Back to Matrix Multiplication Strassen’s Algorithm The Algorithm How he did it? Complexity 4 Solving Systems of Linear Equations Introduction Lower Upper Decomposition Forward and Back Substitution Obtaining the Matrices Computing LU decomposition Computing LUP decomposition 5 Applications Inverting Matrices Least-squares Approximation 6 Exercises Some Exercises You Can Try!!! 10 / 102
  • 15. Matrix Multiplication What is Matrix Multiplication? Given A, B matrices with dimensions n×n, the multiplication is defined as C = AB cij = n k=1 aikbkj 11 / 102
  • 16. Complexity and Algorithm Algorithm: Complexity Θ (n3 ) Square-Matrix-Multiply(A, B) 1 n = A.rows 2 let C be a new matrix n × n 3 for i = 1 to n 4 for j = 1 to n 5 C [i, j] = 0 6 for k = 1 to n 7 C [i, j] = C [i, j] + A [i, j] ∗ B [i, j] 8 return C 12 / 102
  • 17. Matrix multiplication properties Properties of the Multiplication The Identity exist for a matrix A of m × n: ImA = AIn = A. The multiplication is associative: A(BC) = (AB)C. In addition, multiplication is distibutive A(B + C) = AB + AC (B + C)D = BD + CD 13 / 102
  • 18. Matrix multiplication properties Properties of the Multiplication The Identity exist for a matrix A of m × n: ImA = AIn = A. The multiplication is associative: A(BC) = (AB)C. In addition, multiplication is distibutive A(B + C) = AB + AC (B + C)D = BD + CD 13 / 102
  • 19. Matrix multiplication properties Properties of the Multiplication The Identity exist for a matrix A of m × n: ImA = AIn = A. The multiplication is associative: A(BC) = (AB)C. In addition, multiplication is distibutive A(B + C) = AB + AC (B + C)D = BD + CD 13 / 102
  • 20. In addition Definition The inner product between vectors is defied as xT y = n i=1 xiyi 14 / 102
  • 21. Outline 1 Introduction Basic Definitions Matrix Examples 2 Matrix Operations Introduction Matrix Multiplication The Inverse Determinants 3 Improving the Complexity of the Matrix Multiplication Back to Matrix Multiplication Strassen’s Algorithm The Algorithm How he did it? Complexity 4 Solving Systems of Linear Equations Introduction Lower Upper Decomposition Forward and Back Substitution Obtaining the Matrices Computing LU decomposition Computing LUP decomposition 5 Applications Inverting Matrices Least-squares Approximation 6 Exercises Some Exercises You Can Try!!! 15 / 102
  • 22. Matrix inverses The inverse is defined as the vector A−1 such that AA−1 = A−1 A = In Example 1 1 1 0 −1 = 0 1 1 −1 =⇒ 1 1 1 0 0 1 1 −1 = 1 · 0 + 1 · 1 1 · 1 − 1 · 1 1 · 0 + 1 · 0 1 · 1 + 0 · −1 = 1 0 0 1 Remark A matrix that is invertible is called non-singular. 16 / 102
  • 23. Matrix inverses The inverse is defined as the vector A−1 such that AA−1 = A−1 A = In Example 1 1 1 0 −1 = 0 1 1 −1 =⇒ 1 1 1 0 0 1 1 −1 = 1 · 0 + 1 · 1 1 · 1 − 1 · 1 1 · 0 + 1 · 0 1 · 1 + 0 · −1 = 1 0 0 1 Remark A matrix that is invertible is called non-singular. 16 / 102
  • 24. Matrix inverses The inverse is defined as the vector A−1 such that AA−1 = A−1 A = In Example 1 1 1 0 −1 = 0 1 1 −1 =⇒ 1 1 1 0 0 1 1 −1 = 1 · 0 + 1 · 1 1 · 1 − 1 · 1 1 · 0 + 1 · 0 1 · 1 + 0 · −1 = 1 0 0 1 Remark A matrix that is invertible is called non-singular. 16 / 102
  • 25. Properties of an inverse Some properties are (BA)−1 = A−1B−1 A−1 T = AT −1 17 / 102
  • 26. The Rank of A
A collection of vectors $x_1, x_2, ..., x_n$ is linearly dependent if there exist coefficients $c_1, c_2, ..., c_n$, not all zero, such that $c_1x_1 + c_2x_2 + ... + c_nx_n = 0$.
The rank of a matrix is the number of linearly independent rows.
Theorem 1. A square matrix has full rank if and only if it is nonsingular.
18 / 102
  • 28. Other Theorems
A null vector x is a nonzero vector such that Ax = 0.
Theorem 2. A matrix A has full column rank if and only if it does not have a null vector.
Then, for square matrices, we have
Corollary 3. A square matrix A is singular if and only if it has a null vector.
19 / 102
  • 30. Outline (next: Matrix Operations, Determinants) 20 / 102
  • 31. Determinants
A determinant can be defined recursively as follows:
$\det(A) = \begin{cases} a_{11} & \text{if } n = 1 \\ \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det(A_{[1j]}) & \text{if } n > 1 \end{cases}$  (1)
where $A_{[ij]}$ is the $(n-1) \times (n-1)$ submatrix obtained by deleting row i and column j, and $(-1)^{i+j}\det(A_{[ij]})$ is called the cofactor of $a_{ij}$.
21 / 102
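A direct, if exponential-time, Python transcription of this recursive definition, useful only for small matrices; the helper name minor is illustrative:

    def minor(A, i, j):
        # Submatrix A[ij]: delete row i and column j (0-indexed here).
        return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

    def det(A):
        n = len(A)
        if n == 1:
            return A[0][0]
        # Cofactor expansion along the first row; (-1)**j matches (-1)**(1+j)
        # of the 1-indexed formula because the parities coincide.
        return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

    # det([[1, 2], [3, 4]]) == -2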
  • 33. Theorems
Theorem 4 (determinant properties). The determinant of a square matrix A has the following properties:
If any row or any column of A is zero, then det(A) = 0.
The determinant of A is multiplied by λ if the entries of any one row (or any one column) of A are all multiplied by λ.
The determinant of A is unchanged if the entries in one row (respectively, column) are added to those in another row (respectively, column).
The determinant of A equals the determinant of $A^T$.
The determinant of A is multiplied by −1 if any two rows (or any two columns) are exchanged.
Theorem 5. An n × n matrix A is singular if and only if det(A) = 0.
22 / 102
  • 40. Positive definite matrix
Definition: a matrix A is positive definite if and only if $x^T A x > 0$ for all $x \neq 0$.
Theorem 6. For any matrix A with full column rank, the matrix $A^T A$ is positive definite.
(Reason: for $x \neq 0$, $x^T (A^T A) x = (Ax)^T (Ax) = \|Ax\|^2 > 0$, since full column rank means $Ax \neq 0$.)
23 / 102
  • 42. Outline (next: Improving the Complexity, Back to Matrix Multiplication) 24 / 102
  • 43. Matrix Multiplication
Problem description: given n × n matrices A, B and C = AB, partitioned into quadrants:
$\begin{pmatrix} r & s \\ t & u \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e & f \\ g & h \end{pmatrix}$
Thus, you could compute r, s, t and u using recursion:
r = ae + bg
s = af + bh
t = ce + dg
u = cf + dh
25 / 102
  • 45. Problem
Complexity of the previous approach:
$T(n) = 8T\left(\frac{n}{2}\right) + \Theta(n^2)$
Thus $T(n) = \Theta(n^3)$, since $n^{\log_2 8} = n^3$ dominates the $\Theta(n^2)$ combining cost.
Therefore, you need to use a different set of products.
26 / 102
  • 48. Outline (next: Strassen's Algorithm) 27 / 102
  • 49. Strassen's Algorithm
It is a divide and conquer algorithm. Given A, B, C matrices with dimensions n × n, we recursively split the matrices until we finish with 12 sub-matrices of size n/2 × n/2:
$\begin{pmatrix} r & s \\ t & u \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e & f \\ g & h \end{pmatrix}$
Remember the Gauss trick? Imagine the same for matrix multiplication.
28 / 102
  • 51. Outline (next: Strassen's Algorithm, The Algorithm) 29 / 102
  • 52. Algorithm
Strassen's Algorithm
1 Divide the input matrices A and B into n/2 × n/2 sub-matrices.
2 Using $\Theta(n^2)$ scalar additions and subtractions, compute 14 matrices $A_1, B_1, ..., A_7, B_7$, each of which is n/2 × n/2.
3 Recursively compute the seven matrix products $P_i = A_iB_i$ for i = 1, 2, ..., 7.
4 Compute the desired matrix $\begin{pmatrix} r & s \\ t & u \end{pmatrix}$ by adding and/or subtracting various combinations of the $P_i$ matrices, using only $\Theta(n^2)$ scalar additions and subtractions, as the sketch below illustrates.
30 / 102
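A compact Python sketch of the four steps above, for n a power of 2. The deck derives only P1 and P2 explicitly in the following slides; the remaining products and combinations below are the standard Strassen formulation, filled in here for completeness:

    def mat_add(A, B):
        return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

    def mat_sub(A, B):
        return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

    def strassen(A, B):
        # Assumes n x n matrices with n a power of 2 (pad otherwise).
        n = len(A)
        if n == 1:
            return [[A[0][0] * B[0][0]]]
        m = n // 2
        def quads(M):
            return ([row[:m] for row in M[:m]], [row[m:] for row in M[:m]],
                    [row[:m] for row in M[m:]], [row[m:] for row in M[m:]])
        a, b, c, d = quads(A)
        e, f, g, h = quads(B)
        P1 = strassen(a, mat_sub(f, h))               # P1 = a(f - h)
        P2 = strassen(mat_add(a, b), h)               # P2 = (a + b)h
        P3 = strassen(mat_add(c, d), e)               # P3 = (c + d)e
        P4 = strassen(d, mat_sub(g, e))               # P4 = d(g - e)
        P5 = strassen(mat_add(a, d), mat_add(e, h))   # P5 = (a + d)(e + h)
        P6 = strassen(mat_sub(b, d), mat_add(g, h))   # P6 = (b - d)(g + h)
        P7 = strassen(mat_sub(a, c), mat_add(e, f))   # P7 = (a - c)(e + f)
        r = mat_add(mat_sub(mat_add(P5, P4), P2), P6) # r = P5 + P4 - P2 + P6
        s = mat_add(P1, P2)                           # s = P1 + P2
        t = mat_add(P3, P4)                           # t = P3 + P4
        u = mat_sub(mat_sub(mat_add(P5, P1), P3), P7) # u = P5 + P1 - P3 - P7
        top = [rr + sr for rr, sr in zip(r, s)]
        bottom = [tr + ur for tr, ur in zip(t, u)]
        return top + bottom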
  • 58. Outline (next: Strassen's Algorithm, How he did it?) 31 / 102
  • 59. Strassen Observed that
Trial and error: first, he generated
$P_i = A_iB_i = (\alpha_{i1}a + \alpha_{i2}b + \alpha_{i3}c + \alpha_{i4}d) \cdot (\beta_{i1}e + \beta_{i2}f + \beta_{i3}g + \beta_{i4}h)$
where $\alpha_{ij}, \beta_{ij} \in \{-1, 0, 1\}$
32 / 102
  • 60. Then
r:
$r = ae + bg = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} +1 & 0 & 0 & 0 \\ 0 & 0 & +1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}$
s:
$s = af + bh = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} 0 & +1 & 0 & 0 \\ 0 & 0 & 0 & +1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}$
33 / 102
  • 63. Therefore
t:
$t = ce + dg = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ +1 & 0 & 0 & 0 \\ 0 & 0 & +1 & 0 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}$
u:
$u = cf + dh = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & +1 & 0 & 0 \\ 0 & 0 & 0 & +1 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}$
34 / 102
  • 65. Example
Computing s from the matrices $P_1$ and $P_2$: compute $s = P_1 + P_2$, where
$P_1 = A_1B_1 = a(f - h) = af - ah = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} 0 & +1 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}$
35 / 102
  • 70. Example
Where
$P_2 = A_2B_2 = (a + b)h = ah + bh = \begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & +1 \\ 0 & 0 & 0 & +1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} e \\ f \\ g \\ h \end{pmatrix}$
so that $P_1 + P_2 = af - ah + ah + bh = af + bh = s$.
36 / 102
  • 74. Outline (next: Strassen's Algorithm, Complexity) 37 / 102
  • 75. Complexity
Because we compute only 7 sub-products:
$T(n) = 7T\left(\frac{n}{2}\right) + \Theta(n^2) = \Theta\left(n^{\lg 7}\right) = O\left(n^{2.81}\right)$
38 / 102
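As a sanity check, case 1 of the master theorem resolves this recurrence (a short derivation not in the original slides):

    \[
    T(n) = 7\,T\!\left(\tfrac{n}{2}\right) + \Theta(n^2),
    \qquad a = 7,\; b = 2,\; n^{\log_b a} = n^{\log_2 7} \approx n^{2.807}.
    \]
    \[
    \text{Since } n^2 = O\!\left(n^{\log_2 7 - \epsilon}\right) \text{ for some } \epsilon > 0,
    \text{ case 1 gives } T(n) = \Theta\!\left(n^{\log_2 7}\right) = O\!\left(n^{2.81}\right).
    \]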
  • 76. Nevertheless
In practice we often do not use Strassen's algorithm because:
The constant factor hidden in its running time is larger than the constant factor of the naive $\Theta(n^3)$ method.
When matrices are sparse, there are faster methods.
Strassen's algorithm is not as numerically stable as the naive method.
The sub-matrices formed at the levels of the recursion consume space.
39 / 102
  • 80. The Holy Grail of Matrix Multiplications: O(n²)
A method by Virginia Vassilevska Williams (2012), Assistant Professor at Stanford: the computational complexity of her method is $\omega < 2.3727$, i.e. $O(n^{2.3727})$.
Better than Coppersmith and Winograd (1990): $O(n^{2.375477})$.
Many researchers believe that the Coppersmith-Winograd and Cohn et al. conjectures could lead to $O(n^2)$, contradicting a variant of the widely believed sunflower conjecture of Erdős and Rado.
40 / 102
  • 84. Outline (next: Solving Systems of Linear Equations, Introduction) 42 / 102
  • 85. In Many Fields
From optimization to control, we are required to solve systems of simultaneous equations.
For example, for polynomial curve fitting we are given points $(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)$.
We want to find a polynomial of degree n − 1 with structure
$p(x) = a_0 + a_1x + a_2x^2 + ... + a_{n-1}x^{n-1}$
43 / 102
  • 88. Thus
We can build a system of equations:
$a_0 + a_1x_1 + a_2x_1^2 + ... + a_{n-1}x_1^{n-1} = y_1$
$a_0 + a_1x_2 + a_2x_2^2 + ... + a_{n-1}x_2^{n-1} = y_2$
...
$a_0 + a_1x_n + a_2x_n^2 + ... + a_{n-1}x_n^{n-1} = y_n$
with n unknowns $a_0, a_1, a_2, ..., a_{n-1}$, as the sketch below illustrates.
44 / 102
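With NumPy this system can be set up and solved directly (a sketch; numpy.vander with increasing=True builds exactly this coefficient matrix, and the sample points here are invented for illustration):

    import numpy as np

    # Fit p(x) = a0 + a1*x + a2*x^2 through three sample points.
    xs = np.array([0.0, 1.0, 2.0])
    ys = np.array([1.0, 2.0, 5.0])

    V = np.vander(xs, increasing=True)   # rows are [1, x_i, x_i^2]
    a = np.linalg.solve(V, ys)           # coefficients a0, a1, a2
    # Here a == [1, 0, 1], i.e. p(x) = 1 + x^2, which passes through all three points.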
  • 90. Solving Systems of Linear Equations
Proceed as follows: we start with a set of linear equations in n unknowns $x_1, x_2, ..., x_n$:
$a_{11}x_1 + a_{12}x_2 + ... + a_{1n}x_n = b_1$
$a_{21}x_1 + a_{22}x_2 + ... + a_{2n}x_n = b_2$
...
$a_{n1}x_1 + a_{n2}x_2 + ... + a_{nn}x_n = b_n$
Something notable: a set of values for $x_1, x_2, ..., x_n$ that satisfies all of the equations simultaneously is said to be a solution to these equations. In this section, we treat only the case in which there are exactly n equations in n unknowns.
45 / 102
  • 94. Solving systems of linear equations (continuation)
We can conveniently rewrite the equations as the matrix-vector equation
$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$
or, equivalently, letting $A = (a_{ij})$, $x = (x_j)$, and $b = (b_i)$, as $Ax = b$.
In this section, we shall be concerned predominantly with the case in which A is nonsingular; after all, we want to invert A.
46 / 102
  • 99. Outline (next: Solving Systems of Linear Equations, Lower Upper Decomposition) 47 / 102
  • 100. Overview of Lower Upper (LUP) Decomposition
Intuition: the idea behind LUP decomposition is to find three n × n matrices L, U, and P such that
PA = LU
where:
L is a unit lower triangular matrix.
U is an upper triangular matrix.
P is a permutation matrix.
We call matrices L, U, and P satisfying the above equation an LUP decomposition of the matrix A.
48 / 102
  • 107. What is a Permutation Matrix?
We represent the permutation P compactly by an array π[1..n]: for i = 1, 2, ..., n, the entry π[i] indicates that $P_{i\pi[i]} = 1$ and $P_{ij} = 0$ for $j \neq \pi[i]$.
Thus, PA has $a_{\pi[i],j}$ in row i and column j, and Pb has $b_{\pi[i]}$ as its i-th element, as the small sketch below illustrates.
49 / 102
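A tiny Python illustration of this compact representation (0-indexed, unlike the 1-indexed slides; the values match the worked example a few slides ahead):

    # pi[i] = j means row i of PA is row j of A, and (Pb)[i] = b[pi[i]].
    pi = [2, 0, 1]                         # the permutation (3, 1, 2), 0-indexed
    b = [3, 7, 8]
    Pb = [b[pi[i]] for i in range(len(b))]
    # Pb == [8, 3, 7]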
  • 109. How can we use this to our advantage?
Look at this:
$Ax = b \Longrightarrow PAx = Pb$  (2)
Therefore
$LUx = Pb$  (3)
Now, if we let $Ux = y$:
$Ly = Pb$  (4)
50 / 102
  • 112. Thus
We first solve (4) for y; then we solve Ux = y for x.
51 / 102
  • 113. Outline (next: Forward and Back Substitution) 52 / 102
  • 114. Forward and Back Substitution
Forward substitution can solve the lower triangular system Ly = Pb in $\Theta(n^2)$ time, given L, P and b.
Since L is unit lower triangular, the equation Ly = Pb can be rewritten as:
$y_1 = b_{\pi[1]}$
$l_{21}y_1 + y_2 = b_{\pi[2]}$
$l_{31}y_1 + l_{32}y_2 + y_3 = b_{\pi[3]}$
...
$l_{n1}y_1 + l_{n2}y_2 + l_{n3}y_3 + ... + y_n = b_{\pi[n]}$
53 / 102
  • 116. Forward and Back Substitution
Back substitution is similar to forward substitution; like forward substitution, it runs in $\Theta(n^2)$ time. Since U is upper triangular, we can rewrite the system Ux = y as:
$u_{11}x_1 + u_{12}x_2 + ... + u_{1,n-2}x_{n-2} + u_{1,n-1}x_{n-1} + u_{1,n}x_n = y_1$
$u_{22}x_2 + ... + u_{2,n-2}x_{n-2} + u_{2,n-1}x_{n-1} + u_{2,n}x_n = y_2$
...
$u_{n-2,n-2}x_{n-2} + u_{n-2,n-1}x_{n-1} + u_{n-2,n}x_n = y_{n-2}$
$u_{n-1,n-1}x_{n-1} + u_{n-1,n}x_n = y_{n-1}$
$u_{nn}x_n = y_n$
54 / 102
  • 117. Example
We have:
$Ax = \begin{pmatrix} 1 & 2 & 0 \\ 3 & 4 & 4 \\ 5 & 6 & 3 \end{pmatrix} x = \begin{pmatrix} 3 \\ 7 \\ 8 \end{pmatrix} = b$
55 / 102
  • 118. Example
The L, U and P matrices:
$L = \begin{pmatrix} 1 & 0 & 0 \\ 0.2 & 1 & 0 \\ 0.6 & 0.5 & 1 \end{pmatrix}, \quad U = \begin{pmatrix} 5 & 6 & 3 \\ 0 & 0.8 & -0.6 \\ 0 & 0 & 2.5 \end{pmatrix}, \quad P = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$
56 / 102
  • 119. Example
Using forward substitution, we solve Ly = Pb for y:
$Ly = \begin{pmatrix} 1 & 0 & 0 \\ 0.2 & 1 & 0 \\ 0.6 & 0.5 & 1 \end{pmatrix} y = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 3 \\ 7 \\ 8 \end{pmatrix} = Pb$
57 / 102
  • 120. Example
Forward substitution gives:
$y = \begin{pmatrix} 8 \\ 1.4 \\ 1.5 \end{pmatrix}$
58 / 102
  • 121. Example
Now we use back substitution, solving Ux = y for x:
$Ux = \begin{pmatrix} 5 & 6 & 3 \\ 0 & 0.8 & -0.6 \\ 0 & 0 & 2.5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 8 \\ 1.4 \\ 1.5 \end{pmatrix}$
59 / 102
  • 122. Example
Finally, we get:
$x = \begin{pmatrix} -1.4 \\ 2.2 \\ 0.6 \end{pmatrix}$
60 / 102
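These numbers are easy to check with NumPy (a verification of the worked example, not part of the slides):

    import numpy as np

    A = np.array([[1.0, 2, 0], [3, 4, 4], [5, 6, 3]])
    b = np.array([3.0, 7, 8])
    L = np.array([[1.0, 0, 0], [0.2, 1, 0], [0.6, 0.5, 1]])
    U = np.array([[5.0, 6, 3], [0, 0.8, -0.6], [0, 0, 2.5]])
    P = np.array([[0.0, 0, 1], [1, 0, 0], [0, 1, 0]])
    x = np.array([-1.4, 2.2, 0.6])

    assert np.allclose(P @ A, L @ U)   # PA = LU
    assert np.allclose(A @ x, b)       # Ax = b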
  • 123. Forward and Back Substitution
Given P, L, U, and b, the procedure LUP-SOLVE solves for x by combining forward and back substitution.
LUP-SOLVE(L, U, π, b)
1 n = L.rows
2 let x be a new vector of length n
3 for i = 1 to n
4     $y_i = b_{\pi[i]} - \sum_{j=1}^{i-1} l_{ij}y_j$
5 for i = n downto 1
6     $x_i = \left(y_i - \sum_{j=i+1}^{n} u_{ij}x_j\right) / u_{ii}$
7 return x
Complexity: the running time is $\Theta(n^2)$.
61 / 102
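A direct Python transcription of LUP-SOLVE (0-indexed; names are illustrative):

    def lup_solve(L, U, pi, b):
        n = len(L)
        y = [0.0] * n
        x = [0.0] * n
        # Forward substitution: solve Ly = Pb.
        for i in range(n):
            y[i] = b[pi[i]] - sum(L[i][j] * y[j] for j in range(i))
        # Back substitution: solve Ux = y.
        for i in range(n - 1, -1, -1):
            x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
        return x

    # Using the earlier example:
    # L = [[1, 0, 0], [0.2, 1, 0], [0.6, 0.5, 1]]
    # U = [[5, 6, 3], [0, 0.8, -0.6], [0, 0, 2.5]]
    # lup_solve(L, U, [2, 0, 1], [3, 7, 8]) -> [-1.4, 2.2, 0.6] (up to rounding)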
  • 127. Outline (next: Obtaining the Matrices) 62 / 102
  • 128. Ok, that works if we already have L, U and P!!!
Thus, we need to find those matrices.
How do we do it? We are going to use something called Gaussian elimination.
63 / 102
  • 130. For this
We assume that A is n × n and nonsingular.
We use a process known as Gaussian elimination to create the LU decomposition. The algorithm is recursive in nature.
Properties: clearly, if n = 1, we are done, for $L = I_1$ and $U = A$.
64 / 102
  • 133. Outline (next: Computing LU decomposition) 65 / 102
  • 134. Computing LU decomposition
For n > 1, we break A into four parts:
$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} = \begin{pmatrix} a_{11} & w^T \\ v & A' \end{pmatrix}$  (5)
66 / 102
  • 135. Where
We have:
v is a column (n − 1)-vector.
$w^T$ is a row (n − 1)-vector.
A′ is an (n − 1) × (n − 1) matrix.
67 / 102
  • 139. Computing an LU decomposition
Thus, we can do the following:
$A = \begin{pmatrix} a_{11} & w^T \\ v & A' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ v/a_{11} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{11} & w^T \\ 0 & A' - vw^T/a_{11} \end{pmatrix}$
where $A' - vw^T/a_{11}$ is the Schur complement. Recursively factoring it as $A' - vw^T/a_{11} = L'U'$ gives
$A = \begin{pmatrix} 1 & 0 \\ v/a_{11} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{11} & w^T \\ 0 & L'U' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ v/a_{11} & L' \end{pmatrix} \begin{pmatrix} a_{11} & w^T \\ 0 & U' \end{pmatrix} = LU$
68 / 102
  • 144. Computing an LU decomposition
Pseudo-code, running in $\Theta(n^3)$:
LU-Decomposition(A)
1 n = A.rows
2 let L and U be new n × n matrices
3 initialize U with 0's below the diagonal
4 initialize L with 1's on the diagonal and 0's above the diagonal
5 for k = 1 to n
6     $u_{kk} = a_{kk}$
7     for i = k + 1 to n
8         $l_{ik} = a_{ik}/u_{kk}$      // $l_{ik}$ holds $v_i$
9         $u_{ki} = a_{ki}$             // $u_{ki}$ holds $w_i^T$
10    for i = k + 1 to n
11        for j = k + 1 to n
12            $a_{ij} = a_{ij} - l_{ik}u_{kj}$
13 return L and U
69 / 102
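The same procedure in Python, overwriting a copy of A in place and then reading off L and U (a sketch that assumes no zero pivot is encountered, which is why the LUP version follows):

    def lu_decomposition(A):
        n = len(A)
        a = [row[:] for row in A]          # work on a copy
        for k in range(n):
            # a[k][k] is the pivot; this fails if it is zero.
            for i in range(k + 1, n):
                a[i][k] /= a[k][k]         # store multiplier l_ik = a_ik / u_kk
                for j in range(k + 1, n):
                    a[i][j] -= a[i][k] * a[k][j]   # update the Schur complement
        L = [[1.0 if i == j else (a[i][j] if j < i else 0.0) for j in range(n)]
             for i in range(n)]
        U = [[a[i][j] if j >= i else 0.0 for j in range(n)] for i in range(n)]
        return L, U

    # For the 4 x 4 example on the next slides this returns exactly the L and U shown there.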
  • 153. Example
Here, we have this example (the array is updated in place; multipliers are stored below the diagonal):
$\begin{pmatrix} 2 & 3 & 1 & 5 \\ 6 & 13 & 5 & 19 \\ 2 & 19 & 10 & 23 \\ 4 & 10 & 11 & 31 \end{pmatrix}$
Step 1: pivot $a_{11} = 2$, $v = (6, 2, 4)^T$, $w^T = (3, 1, 5)$; the Schur complement is
$\begin{pmatrix} 13 & 5 & 19 \\ 19 & 10 & 23 \\ 10 & 11 & 31 \end{pmatrix} - \frac{1}{2}\begin{pmatrix} 6 \\ 2 \\ 4 \end{pmatrix}\begin{pmatrix} 3 & 1 & 5 \end{pmatrix} = \begin{pmatrix} 13 & 5 & 19 \\ 19 & 10 & 23 \\ 10 & 11 & 31 \end{pmatrix} - \begin{pmatrix} 9 & 3 & 15 \\ 3 & 1 & 5 \\ 6 & 2 & 10 \end{pmatrix} = \begin{pmatrix} 4 & 2 & 4 \\ 16 & 9 & 18 \\ 4 & 9 & 21 \end{pmatrix}$
giving the in-place array
$\begin{pmatrix} 2 & 3 & 1 & 5 \\ 3 & 4 & 2 & 4 \\ 1 & 16 & 9 & 18 \\ 2 & 4 & 9 & 21 \end{pmatrix}$
Step 2: pivot 4; $\begin{pmatrix} 9 & 18 \\ 9 & 21 \end{pmatrix} - \frac{1}{4}\begin{pmatrix} 16 \\ 4 \end{pmatrix}\begin{pmatrix} 2 & 4 \end{pmatrix} = \begin{pmatrix} 9 & 18 \\ 9 & 21 \end{pmatrix} - \begin{pmatrix} 8 & 16 \\ 2 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 7 & 17 \end{pmatrix}$, giving
$\begin{pmatrix} 2 & 3 & 1 & 5 \\ 3 & 4 & 2 & 4 \\ 1 & 4 & 1 & 2 \\ 2 & 1 & 7 & 17 \end{pmatrix}$
Step 3: pivot 1; $17 - 7 \cdot 2 = 3$, giving the final array
$\begin{pmatrix} 2 & 3 & 1 & 5 \\ 3 & 4 & 2 & 4 \\ 1 & 4 & 1 & 2 \\ 2 & 1 & 7 & 3 \end{pmatrix}$
70 / 102
  • 159. Thus
We get the following decomposition:
$\begin{pmatrix} 2 & 3 & 1 & 5 \\ 6 & 13 & 5 & 19 \\ 2 & 19 & 10 & 23 \\ 4 & 10 & 11 & 31 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 3 & 1 & 0 & 0 \\ 1 & 4 & 1 & 0 \\ 2 & 1 & 7 & 1 \end{pmatrix} \begin{pmatrix} 2 & 3 & 1 & 5 \\ 0 & 4 & 2 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 3 \end{pmatrix}$
71 / 102
  • 160. Outline (next: Computing LUP decomposition) 72 / 102
  • 161. Observations
Something notable: the elements by which we divide during LU decomposition are called pivots; they occupy the diagonal of the matrix U.
Why the permutation P? It allows us to avoid dividing by 0.
73 / 102
  • 164. Thus, what do we want?
We want P, L and U with PA = LU.
To do so, we move a non-zero element, $a_{k1}$, from somewhere in the first column to the (1, 1) position of the matrix.
In addition, we choose $a_{k1}$ as the element in the first column with the greatest absolute value.
74 / 102
  • 167. Exchange Rows
Thus, we exchange row 1 with row k, i.e. we multiply A by a permutation matrix Q on the left:
$QA = \begin{pmatrix} a_{k1} & w^T \\ v & A' \end{pmatrix}$
with:
$v = (a_{21}, a_{31}, ..., a_{n1})^T$, where $a_{11}$ replaces $a_{k1}$.
$w^T = (a_{k2}, a_{k3}, ..., a_{kn})$.
A′ an (n − 1) × (n − 1) matrix.
75 / 102
  • 169. Now, $a_{k1} \neq 0$
We have then:
$QA = \begin{pmatrix} a_{k1} & w^T \\ v & A' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ v/a_{k1} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & A' - vw^T/a_{k1} \end{pmatrix}$
76 / 102
  • 171. Important
Something notable: if A is nonsingular, then the Schur complement $A' - vw^T/a_{k1}$ is nonsingular, too.
Now, we can recursively find an LUP decomposition for it:
$P'\left(A' - vw^T/a_{k1}\right) = L'U'$
Then, we define a new permutation matrix
$P = \begin{pmatrix} 1 & 0 \\ 0 & P' \end{pmatrix} Q$
77 / 102
  • 174. Thus
We have:
$PA = \begin{pmatrix} 1 & 0 \\ 0 & P' \end{pmatrix} QA = \begin{pmatrix} 1 & 0 \\ 0 & P' \end{pmatrix} \begin{pmatrix} 1 & 0 \\ v/a_{k1} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & A' - vw^T/a_{k1} \end{pmatrix}$
$= \begin{pmatrix} 1 & 0 \\ P'v/a_{k1} & P' \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & A' - vw^T/a_{k1} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ P'v/a_{k1} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & P'\left(A' - vw^T/a_{k1}\right) \end{pmatrix}$
$= \begin{pmatrix} 1 & 0 \\ P'v/a_{k1} & I_{n-1} \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & L'U' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ P'v/a_{k1} & L' \end{pmatrix} \begin{pmatrix} a_{k1} & w^T \\ 0 & U' \end{pmatrix} = LU$
78 / 102
  • 180. Computing an LUP decomposition
Algorithm:
LUP-Decomposition(A)
1. n = A.rows
2. let π[1..n] be a new array
3. for i = 1 to n
4.     π[i] = i
5. for k = 1 to n
6.     p = 0
7.     for i = k to n
8.         if $|a_{ik}| > p$
9.             $p = |a_{ik}|$
10.            k′ = i
11.    if p == 0
12.        error "singular matrix"
13.    exchange π[k] ←→ π[k′]
14.    for i = 1 to n
15.        exchange $a_{ki}$ ←→ $a_{k'i}$
16.    for i = k + 1 to n
17.        $a_{ik} = a_{ik}/a_{kk}$
18.        for j = k + 1 to n
19.            $a_{ij} = a_{ij} - a_{ik}a_{kj}$
79 / 102
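The pivoting version in Python (a 0-indexed sketch of the pseudocode above, returning L and U packed together in the array a along with the permutation pi):

    def lup_decomposition(A):
        n = len(A)
        a = [row[:] for row in A]
        pi = list(range(n))
        for k in range(n):
            # Choose the largest |a[i][k]|, i >= k, as the pivot.
            kp = max(range(k, n), key=lambda i: abs(a[i][k]))
            if a[kp][k] == 0:
                raise ValueError("singular matrix")
            pi[k], pi[kp] = pi[kp], pi[k]
            a[k], a[kp] = a[kp], a[k]          # exchange whole rows
            for i in range(k + 1, n):
                a[i][k] /= a[k][k]             # store the multiplier
                for j in range(k + 1, n):
                    a[i][j] -= a[i][k] * a[k][j]
        return a, pi

    # For the 4 x 4 example that follows, lup_decomposition returns pi == [2, 0, 3, 1],
    # i.e. the permutation (3, 1, 4, 2) in the slides' 1-indexed notation.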
  • 189. Computing an LUP decomposition
Example (each array is shown with the row labels π on the left; L and U are stored in place):
π = (1, 2, 3, 4):
1: 2 0 2 0.6
2: 3 3 4 -2
3: 5 5 4 2
4: -1 -2 3.4 -1
Pivot on 5 (row 3); exchange rows 1 and 3, so π = (3, 2, 1, 4). After eliminating column 1 (multipliers 0.6, 0.4, -0.2 stored below the pivot):
3: 5 5 4 2
2: 0.6 0 1.6 -3.2
1: 0.4 -2 0.4 -0.2
4: -0.2 -1 4.2 -0.6
Pivot on -2 (row labeled 1); exchange rows 2 and 3, so π = (3, 1, 2, 4). After eliminating column 2 (multipliers 0 and 0.5):
3: 5 5 4 2
1: 0.4 -2 0.4 -0.2
2: 0.6 0 1.6 -3.2
4: -0.2 0.5 4 -0.5
Pivot on 4 (row labeled 4); exchange rows 3 and 4, so π = (3, 1, 4, 2). After eliminating column 3 (multiplier 0.4):
3: 5 5 4 2
1: 0.4 -2 0.4 -0.2
4: -0.2 0.5 4 -0.5
2: 0.6 0 0.4 -3
80 / 102
Finally, you get The Permutation and Decomposition
PA = LU, with

    | 0 0 1 0 |        |  2   0   2    0.6 |
P = | 1 0 0 0 |    A = |  3   3   4   -2   |
    | 0 0 0 1 |        |  5   5   4    2   |
    | 0 1 0 0 |        | -1  -2   3.4 -1   |

    |  1    0    0    0 |        | 5   5    4    2   |
L = |  0.4  1    0    0 |    U = | 0  -2    0.4 -0.2 |
    | -0.2  0.5  1    0 |        | 0   0    4   -0.5 |
    |  0.6  0    0.4  1 |        | 0   0    0   -3   |
81 / 102
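As a quick numerical check of this result, the following sketch (assuming NumPy is available) verifies that PA = LU for the matrices above:

    import numpy as np

    P = np.array([[0., 0., 1., 0.],
                  [1., 0., 0., 0.],
                  [0., 0., 0., 1.],
                  [0., 1., 0., 0.]])
    A = np.array([[ 2.,  0.,  2.,  0.6],
                  [ 3.,  3.,  4., -2. ],
                  [ 5.,  5.,  4.,  2. ],
                  [-1., -2.,  3.4, -1.]])
    L = np.array([[ 1. ,  0. ,  0. ,  0.],
                  [ 0.4,  1. ,  0. ,  0.],
                  [-0.2,  0.5,  1. ,  0.],
                  [ 0.6,  0. ,  0.4,  1.]])
    U = np.array([[ 5.,  5.,  4. ,  2. ],
                  [ 0., -2.,  0.4, -0.2],
                  [ 0.,  0.,  4. , -0.5],
                  [ 0.,  0.,  0. , -3. ]])

    assert np.allclose(P @ A, L @ U)   # the decomposition checks out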
Symmetric positive-definite matrices
Lemma 28.9
Any symmetric positive-definite matrix is nonsingular.
Lemma 28.10
If A is a symmetric positive-definite matrix, then every leading submatrix of A is symmetric and positive-definite.
82 / 102
Symmetric positive-definite matrices
Definition: Schur complement
Let A be a symmetric positive-definite matrix, and let Ak be the leading k × k submatrix of A. Partition A as

A = | Ak  B^T |
    | B   C   |

Then the Schur complement of A with respect to Ak is defined to be
S = C − B Ak^{-1} B^T
83 / 102
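A small numerical sketch of the definition, using NumPy; the 3 × 3 symmetric positive-definite matrix here is an illustrative choice, not from the slides:

    import numpy as np

    # A symmetric positive-definite matrix, partitioned with k = 1.
    A = np.array([[4., 2., 2.],
                  [2., 5., 1.],
                  [2., 1., 6.]])
    k = 1
    Ak = A[:k, :k]    # leading k x k submatrix
    B  = A[k:, :k]    # block below Ak
    C  = A[k:, k:]    # trailing block

    S = C - B @ np.linalg.inv(Ak) @ B.T   # Schur complement of A w.r.t. Ak
    print(S)                  # [[4. 0.] [0. 5.]]
    np.linalg.cholesky(S)     # succeeds, so S is indeed positive-definite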
Symmetric positive-definite matrices
Lemma 28.11 (Schur complement lemma)
If A is a symmetric positive-definite matrix and Ak is a leading k × k submatrix of A, then the Schur complement of A with respect to Ak is symmetric and positive-definite.
Corollary 28.12
LU decomposition of a symmetric positive-definite matrix never causes a division by 0.
84 / 102
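The corollary can be illustrated with a short sketch: plain Gaussian elimination with no row exchanges runs to completion on a symmetric positive-definite input because, by the Schur complement lemma, every pivot it meets is strictly positive (the example matrix is again an illustrative choice):

    import numpy as np

    def lu_no_pivoting(A):
        # Plain LU with no row exchanges; the only failure mode is a zero pivot.
        A = A.astype(float).copy()
        n = A.shape[0]
        for k in range(n):
            if A[k, k] == 0:
                raise ZeroDivisionError("zero pivot")    # never reached for SPD input
            A[k+1:, k] /= A[k, k]                                # multipliers l_ik
            A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])    # Schur update
        return A   # L strictly below the diagonal, U on and above it

    spd = np.array([[4., 2., 2.],
                    [2., 5., 1.],
                    [2., 1., 6.]])
    lu_no_pivoting(spd)   # completes; the pivots are 4, 4, 5, all positive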
Outline
1 Introduction
Basic Definitions
Matrix Examples
2 Matrix Operations
Introduction
Matrix Multiplication
The Inverse
Determinants
3 Improving the Complexity of the Matrix Multiplication
Back to Matrix Multiplication
Strassen’s Algorithm
The Algorithm
How he did it?
Complexity
4 Solving Systems of Linear Equations
Introduction
Lower Upper Decomposition
Forward and Back Substitution
Obtaining the Matrices
Computing LU decomposition
Computing LUP decomposition
5 Applications
Inverting Matrices
Least-squares Approximation
6 Exercises
Some Exercises You Can Try!!!
85 / 102
Inverting matrices
LUP decomposition can be used to compute a matrix inverse.
The computation of a matrix inverse can be sped up using techniques such as Strassen’s algorithm for matrix multiplication.
86 / 102
Computing a matrix inverse from a LUP decomposition
Proceed as follows
The equation AX = In can be viewed as a set of n distinct equations of the form Axi = ei, for i = 1, ..., n.
We have a LUP decomposition of the matrix A in the form of three matrices L, U, and P such that PA = LU.
Then we use forward and back substitution to solve Axi = ei for each column xi of the inverse.
87 / 102
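A sketch of this procedure in Python, using SciPy’s lu_factor/lu_solve for the decomposition and the two triangular solves (SciPy availability is assumed; the function name inverse_via_lup is illustrative):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def inverse_via_lup(A):
        # One LUP decomposition, then n forward/back substitutions,
        # one per column e_i of the identity matrix.
        n = A.shape[0]
        lu, piv = lu_factor(A)
        X = np.empty((n, n))
        for i in range(n):
            e = np.zeros(n)
            e[i] = 1.0
            X[:, i] = lu_solve((lu, piv), e)   # solve A x_i = e_i
        return X

    A = np.array([[ 2.,  0.,  2.,  0.6],
                  [ 3.,  3.,  4., -2. ],
                  [ 5.,  5.,  4.,  2. ],
                  [-1., -2.,  3.4, -1.]])
    assert np.allclose(inverse_via_lup(A) @ A, np.eye(4))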
Complexity
First
We can compute each xi in time Θ(n^2). Thus, the matrix X can be computed in time Θ(n^3).
The LUP decomposition itself is computed in time Θ(n^3).
Finally
We can compute the inverse A^{-1} of a matrix A in time Θ(n^3).
88 / 102
Matrix multiplication and matrix inversion
Theorem 28.7
If we can invert an n × n matrix in time I(n), where I(n) = Ω(n^2) and I(n) satisfies the regularity condition I(3n) = O(I(n)), then we can multiply two n × n matrices in time O(I(n)).
89 / 102
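The construction used to prove this theorem embeds A and B in a 3n × 3n block matrix whose inverse contains the product AB in its upper-right block. A NumPy sketch of that idea, purely as a demonstration of the reduction (not a practical way to multiply):

    import numpy as np

    def multiply_via_inversion(A, B):
        # D = | I  A  0 |          | I  -A   AB |
        #     | 0  I  B |,  D^-1 = | 0   I  -B  |
        #     | 0  0  I |          | 0   0   I  |
        n = A.shape[0]
        I = np.eye(n)
        Z = np.zeros((n, n))
        D = np.block([[I, A, Z],
                      [Z, I, B],
                      [Z, Z, I]])
        return np.linalg.inv(D)[:n, 2*n:]   # read AB off the upper-right block

    A = np.random.rand(3, 3)
    B = np.random.rand(3, 3)
    assert np.allclose(multiply_via_inversion(A, B), A @ B)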
Matrix multiplication and matrix inversion
Theorem 28.8
If we can multiply two n × n real matrices in time M(n), where M(n) = Ω(n^2), M(n) = O(M(n + k)) for any k in the range 0 ≤ k ≤ n, and M(n/2) ≤ cM(n) for some constant c < 1/2, then we can compute the inverse of any real nonsingular n × n matrix in time O(M(n)).
90 / 102
Outline
1 Introduction
Basic Definitions
Matrix Examples
2 Matrix Operations
Introduction
Matrix Multiplication
The Inverse
Determinants
3 Improving the Complexity of the Matrix Multiplication
Back to Matrix Multiplication
Strassen’s Algorithm
The Algorithm
How he did it?
Complexity
4 Solving Systems of Linear Equations
Introduction
Lower Upper Decomposition
Forward and Back Substitution
Obtaining the Matrices
Computing LU decomposition
Computing LUP decomposition
5 Applications
Inverting Matrices
Least-squares Approximation
6 Exercises
Some Exercises You Can Try!!!
91 / 102
Least-squares Approximation
Fitting curves to given sets of data points is an important application of symmetric positive-definite matrices.
Suppose that we are given m data points (x1, y1), (x2, y2), ..., (xm, ym), where the yi are known to be subject to measurement errors.
We would like to determine a function F(x) such that yi = F(xi) + ηi for i = 1, 2, ..., m, where the ηi are the approximation errors.
92 / 102
Least-squares Approximation
Continuation
The form of the function F depends on the problem at hand; here we take it to be a linear combination of n basis functions:
F(x) = Σ_{j=1}^{n} c_j f_j(x)
A common choice is f_j(x) = x^{j-1}, which means that
F(x) = c1 + c2 x + c3 x^2 + ... + cn x^{n-1}
is a polynomial of degree n − 1 in x.
93 / 102
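A short sketch (NumPy assumed) of building the matrix of basis-function values for this polynomial basis; design_matrix is an illustrative name:

    import numpy as np

    def design_matrix(xs, n):
        # m x n matrix with a_ij = f_j(x_i) for the basis f_j(x) = x^(j-1);
        # equivalent to np.vander(xs, n, increasing=True).
        return np.array([[x**j for j in range(n)] for x in xs], dtype=float)

    print(design_matrix([-1., 1., 2., 3., 5.], 3))   # columns: 1, x, x^2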
Least-squares Approximation
Continuation
Let

A = | f1(x1) f2(x1) ... fn(x1) |
    | f1(x2) f2(x2) ... fn(x2) |
    |   ...                    |
    | f1(xm) f2(xm) ... fn(xm) |

denote the m × n matrix of values of the basis functions at the given points; that is, aij = fj(xi). Let c = (ck) denote the desired size-n vector of coefficients. Then

     | F(x1) |
Ac = | F(x2) |
     |  ...  |
     | F(xm) |
94 / 102
Least-squares Approximation
Then
η = Ac − y is the size-m vector of approximation errors.
To minimize the approximation errors, we choose to minimize the norm of the error vector η, which gives us a least-squares solution:
||η||^2 = ||Ac − y||^2 = Σ_{i=1}^{m} ( Σ_{j=1}^{n} aij cj − yi )^2
Thus
We can minimize ||η|| by differentiating ||η||^2 with respect to each ck and then setting the result to 0:
d||η||^2 / dck = Σ_{i=1}^{m} 2 ( Σ_{j=1}^{n} aij cj − yi ) aik = 0
95 / 102
Least-squares Approximation
We can put all the derivatives together
The n equations for k = 1, 2, ..., n can be written in matrix form as
(Ac − y)^T A = 0
or, equivalently, as
A^T (Ac − y) = 0
which implies the normal equations
A^T A c = A^T y
96 / 102
Least-squares Approximation
Continuation
The matrix A^T A is always symmetric, and if A has full column rank, then A^T A is positive-definite as well.
Hence, (A^T A)^{-1} exists, and the solution to the equation A^T A c = A^T y is
c = ((A^T A)^{-1} A^T) y = A^+ y
where the matrix A^+ = (A^T A)^{-1} A^T is called the pseudoinverse of the matrix A.
97 / 102
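In code (a sketch, NumPy assumed), one would normally solve the normal equations directly rather than form the pseudoinverse explicitly; both give the same c when A has full column rank:

    import numpy as np

    def least_squares_fit(A, y):
        # Solve A^T A c = A^T y; numerically preferable to computing A^+.
        return np.linalg.solve(A.T @ A, A.T @ y)

    def pseudoinverse(A):
        # A^+ = (A^T A)^{-1} A^T, defined when A has full column rank.
        return np.linalg.inv(A.T @ A) @ A.T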
Least-squares Approximation
Continuation
As an example of producing a least-squares fit, suppose that we have 5 data points (−1, 2), (1, 1), (2, 1), (3, 0), (5, 3), shown as black dots in the following figure.
[Figure: the five data points together with the quadratic least-squares fit F(x).]
98 / 102
Least-squares Approximation
Continuation
We start with the matrix of basis-function values

    | 1  x1  x1^2 |   | 1  -1   1 |
    | 1  x2  x2^2 |   | 1   1   1 |
A = | 1  x3  x3^2 | = | 1   2   4 |
    | 1  x4  x4^2 |   | 1   3   9 |
    | 1  x5  x5^2 |   | 1   5  25 |

whose pseudoinverse is

      |  0.500  0.300  0.200  0.100 -0.100 |
A^+ = | -0.388  0.093  0.190  0.193 -0.088 |
      |  0.060 -0.036 -0.048 -0.036  0.060 |
99 / 102
Least-squares Approximation
Continuation
Multiplying y by A^+, we obtain the coefficient vector

    |  1.200 |
c = | -0.757 |
    |  0.214 |

which corresponds to the quadratic polynomial
F(x) = 1.200 − 0.757x + 0.214x^2
100 / 102
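The whole worked example can be reproduced in a few lines (NumPy assumed); the printed coefficients agree with the slide up to rounding:

    import numpy as np

    x = np.array([-1., 1., 2., 3., 5.])
    y = np.array([ 2., 1., 1., 0., 3.])

    A = np.vander(x, 3, increasing=True)    # columns: 1, x, x^2
    c = np.linalg.solve(A.T @ A, A.T @ y)   # normal equations
    print(c)                                # approx. [ 1.200 -0.757  0.214]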
Outline
1 Introduction
Basic Definitions
Matrix Examples
2 Matrix Operations
Introduction
Matrix Multiplication
The Inverse
Determinants
3 Improving the Complexity of the Matrix Multiplication
Back to Matrix Multiplication
Strassen’s Algorithm
The Algorithm
How he did it?
Complexity
4 Solving Systems of Linear Equations
Introduction
Lower Upper Decomposition
Forward and Back Substitution
Obtaining the Matrices
Computing LU decomposition
Computing LUP decomposition
5 Applications
Inverting Matrices
Least-squares Approximation
6 Exercises
Some Exercises You Can Try!!!
101 / 102
Exercises
From Cormen’s book, solve:
34.5-1
34.5-2
34.5-3
34.5-4
34.5-5
34.5-7
34.5-8
102 / 102