Tutorial 04 - Linear algebra (cont’d)
Iterative and gradient methods for solving systems of linear algebraic equations: the Jacobi method, the Gauss-Seidel method, the successive over-relaxation method, the conjugate gradient method, and power iteration for eigensystems.
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
Iterative methods for linear algebraic equations
Jacobi method
A system of \( n \) linear algebraic equations \( \mathbb{A} x = b \) with a diagonally dominant coefficient matrix \( \mathbb{A} \) can be solved iteratively using the Jacobi method:
\[ x_{i}^{(k+1)} = \frac{1}{a_{ii}} \Big( b_{i} - \sum_{j \neq i} a_{ij} x_{j}^{(k)} \Big), \quad i = 1, \dots, n, \]
where \( x^{(k)} \) and \( x^{(k+1)} \) are, respectively, the \( k \)-th and \( (k+1) \)-th iterates of \( x \).
Exercise 04.1: Implement the Jacobi method.
def jacobi_method(A, b, error_tolerance):
"""
Solves system of linear equations iteratively using Jacobi's algorithm.
Args:
A (array_like): A n-by-n diagonally dominant matrix
b (array_like): RHS vector of size n
error_tolerance (float): Error tolerance
Returns:
numpy.ndarray: Vector of solution
"""
# add your code here
You may evaluate the correctness of your implementation using the scipy.linalg.solve function.
A = np.random.rand(3, 3) + 10. * np.eye(3) # create diagonally dominant matrix to ensure convergence
b = np.random.rand(3)
np.testing.assert_almost_equal(jacobi_method(A, b, 1.0e-15), la.solve(A, b), decimal=15)
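For reference, one possible implementation is sketched below (a minimal sketch, not the only correct solution): it splits \( \mathbb{A} \) into its diagonal and off-diagonal parts and iterates until the infinity-norm change between successive iterates drops below the tolerance. The max_it safeguard and the zero starting vector are choices made here, not part of the exercise statement.
def jacobi_method(A, b, error_tolerance, max_it=10000):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(A)               # diagonal entries a_ii
    R = A - np.diagflat(D)       # off-diagonal part of A
    x = np.zeros_like(b)         # starting guess (any vector would do)
    for _ in range(max_it):      # safeguard against non-convergence
        x_new = (b - R @ x) / D  # x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii
        if np.linalg.norm(x_new - x, ord=np.inf) < error_tolerance:
            return x_new
        x = x_new
    return x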
Gauss-Seidel method
A system of \( n \) linear algebraic equations \( \mathbb{A} x = b \) with a diagonally dominant coefficient matrix \( \mathbb{A} \) can be solved iteratively using the Gauss-Seidel method:
\[ x_{i}^{(k+1)} = \frac{1}{a_{ii}} \Big( b_{i} - \sum_{j < i} a_{ij} x_{j}^{(k+1)} - \sum_{j > i} a_{ij} x_{j}^{(k)} \Big), \quad i = 1, \dots, n, \]
where \( x^{(k)} \) and \( x^{(k+1)} \) are, respectively, the \( k \)-th and \( (k+1) \)-th iterates of \( x \).
Exercise 04.2: Implement the Gauss-Seidel method.
def gauss_seidel_method(A, b, error_tolerance):
"""
Solves system of linear equations iteratively using Gauss-Seidel's algorithm.
Args:
A (array_like): A n-by-n diagonally dominant matrix
b (array_like): RHS vector of size n
error_tolerance (float): Error tolerance
Returns:
numpy.ndarray: Vector of solution
"""
# add your code here
You may evaluate the correctness of your implementation using the scipy.linalg.solve function.
A = np.random.rand(3, 3) + 10. * np.eye(3) # create diagonally dominant matrix to ensure convergence
b = np.random.rand(3)
np.testing.assert_almost_equal(gauss_seidel_method(A, b, 1.0e-15), la.solve(A, b), decimal=15)
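For reference, a possible Gauss-Seidel sketch is given below. Unlike Jacobi, the sweep uses components updated earlier in the same pass immediately; the max_it safeguard is again an addition made here.
def gauss_seidel_method(A, b, error_tolerance, max_it=10000):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros_like(b)
    for _ in range(max_it):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds (k+1)-th values, x[i+1:] still holds k-th values
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < error_tolerance:
            return x
    return x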
Successive over-relaxation method
A system of \( n \) linear algebraic equations \( \mathbb{A} x = b \) with a diagonally dominant coefficient matrix \( \mathbb{A} \) can be solved iteratively using the method of successive over-relaxation:
\[ x_{i}^{(k+1)} = (1 - \omega)\, x_{i}^{(k)} + \frac{\omega}{a_{ii}} \Big( b_{i} - \sum_{j < i} a_{ij} x_{j}^{(k+1)} - \sum_{j > i} a_{ij} x_{j}^{(k)} \Big), \quad i = 1, \dots, n, \]
where \( x^{(k)} \) and \( x^{(k+1)} \) are, respectively, the \( k \)-th and \( (k+1) \)-th iterates of \( x \), and \( \omega \in (0, 2) \) is the relaxation factor.
Exercise 04.3: Implement the successive over-relaxation (SOR) method.
def successive_overrelaxation_method(A, b, omega, error_tolerance):
"""
Solves system of linear equations iteratively using successive over-relaxation (SOR) method.
Args:
A (array_like): A n-by-n diagonally dominant matrix
b (array_like): RHS vector of size n
omega (float): Relaxation factor
error_tolerance (float): Error tolerance
Returns:
numpy.ndarray: Vector of solution
"""
# add your code here
You may evaluate the correctness of your implementation using the scipy.linalg.solve function.
A = np.random.rand(3, 3) + 10. * np.eye(3) # create diagonally dominant matrix to ensure convergence
b = np.random.rand(3)
omega = 0.9
np.testing.assert_almost_equal(successive_overrelaxation_method(A, b, omega, 1.0e-15), la.solve(A, b), decimal=15)
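A possible SOR sketch follows: each component first receives its Gauss-Seidel value, which is then blended with the previous iterate using the factor \( \omega \); setting \( \omega = 1 \) recovers Gauss-Seidel exactly. The max_it safeguard is an addition made here.
def successive_overrelaxation_method(A, b, omega, error_tolerance, max_it=10000):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros_like(b)
    for _ in range(max_it):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            gs = (b[i] - s) / A[i, i]                     # plain Gauss-Seidel value
            x[i] = (1.0 - omega) * x_old[i] + omega * gs  # relaxed update
        if np.linalg.norm(x - x_old, ord=np.inf) < error_tolerance:
            return x
    return x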
Exercise 04.4: Compare the number of iterations required to solve the system of 100 linear algebraic equations using Jacobi, Gauss-Seidel, and SOR methods. Try to find an optimal value of relaxation factor \( \omega \) in the case of SOR method.
# add your code here
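One possible approach is sketched below. It iterates a generic update function from the zero vector and counts sweeps until convergence; the helper names (count_iterations, jacobi_update, sor_update_factory), the 100-by-100 test system, and the grid of \( \omega \) values are illustrative choices, not prescribed by the exercise. Since \( \omega = 1 \) recovers Gauss-Seidel, one factory covers both methods.
def count_iterations(update, b, error_tolerance=1.0e-10, max_it=100000):
    """Iterate x <- update(x) from zero and count sweeps until convergence."""
    x = np.zeros_like(b)
    for k in range(1, max_it + 1):
        x_new = update(x)
        if np.linalg.norm(x_new - x, ord=np.inf) < error_tolerance:
            return k
        x = x_new
    return max_it

n = 100
A = np.random.rand(n, n) + n * np.eye(n)  # diagonally dominant 100x100 system
b = np.random.rand(n)
D = np.diag(A)
R = A - np.diagflat(D)

def jacobi_update(x):
    return (b - R @ x) / D

def sor_update_factory(omega):  # omega = 1.0 recovers Gauss-Seidel
    def update(x):
        x_new = x.copy()
        for i in range(n):
            s = A[i, :i] @ x_new[:i] + A[i, i + 1:] @ x[i + 1:]
            x_new[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        return x_new
    return update

print("Jacobi:      ", count_iterations(jacobi_update, b))
print("Gauss-Seidel:", count_iterations(sor_update_factory(1.0), b))

omegas = np.linspace(0.1, 1.9, 19)
counts = [count_iterations(sor_update_factory(w), b) for w in omegas]
plt.plot(omegas, counts, "o-")
plt.xlabel(r"relaxation factor $\omega$")
plt.ylabel("iterations to converge")
plt.show()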
Gradient methods for linear algebraic equations
Conjugate gradient method
A system of \( n \) linear algebraic equations \( \mathbb{A} x = b \) with a real, symmetric, and positive-definite coefficient matrix \( \mathbb{A} \) can be solved using the conjugate gradient method. The method minimizes the quadratic form \( \frac{1}{2} x^{\mathsf{T}} \mathbb{A} x - b^{\mathsf{T}} x \), whose unique minimizer is the solution of the system, by searching along successive mutually \( \mathbb{A} \)-conjugate directions.
Exercise 04.5: Implement the conjugate gradient method.
def conjugate_gradient_method(A, b, error_tolerance):
"""
Solves system of linear equations using conjugate gradient method.
Args:
A (array_like): A n-by-n real, symmetric, and positive-definite matrix
b (array_like): RHS vector of size n
error_tolerance (float): Error tolerance
Returns:
numpy.ndarray: Vector of solution
"""
# add your code here
You may evaluate the correctness of your implementation using the scipy.linalg.solve function.
A = np.random.rand(3, 3)
A = np.tril(A) + np.tril(A, -1).T + 10 * np.eye(3) # create symmetric and diagonally dominant matrix to ensure convergence
b = np.random.rand(3)
np.testing.assert_almost_equal(conjugate_gradient_method(A, b, 1.0e-15), la.solve(A, b), decimal=15)
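For reference, a compact sketch of the algorithm is given below. Starting from the zero vector, it moves along successive mutually \( \mathbb{A} \)-conjugate search directions; in exact arithmetic the residual vanishes after at most \( n \) steps, so the generous loop bound used here is only a rounding-error safeguard.
def conjugate_gradient_method(A, b, error_tolerance):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b)
    r = b - A @ x                      # residual of the initial guess
    p = r.copy()                       # first search direction
    rs_old = r @ r
    for _ in range(10 * len(b)):       # at most n steps in exact arithmetic
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length minimizing the quadratic form along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < error_tolerance:
            break
        p = r + (rs_new / rs_old) * p  # next direction, A-conjugate to the previous ones
        rs_old = rs_new
    return x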
Eigenvalues and eigenvectors
Power iteration method
The power iteration method finds the greatest (in absolute value) eigenvalue and the corresponding eigenvector of a diagonalizable matrix \( \mathbb{A} \). Starting with a vector \( v_{0} \), the algorithm is described by the recurrence
\[ v_{k+1} = \frac{\mathbb{A} v_{k}}{\lVert \mathbb{A} v_{k} \rVert}, \quad k = 0, 1, 2, \dots \]
The sequence above converges to an eigenvector associated with the dominant eigenvalue if the following two conditions are satisfied:
The matrix \( \mathbb{A} \) must have an eigenvalue that is strictly greater in magnitude than its other eigenvalues.
The starting vector \( v_{0} \) must have a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue.
Exercise 04.6: Implement the power iteration method.
def power_iteration(A, max_it):
"""
Finds the greatest (in absolute value) eigenvalue of a given matrix and its corresponding eigenvector.
Args:
A (array_like): A n-by-n diagonalizable matrix
max_it (int): Maximum number of iterations
Returns:
numpy.ndarray: Eigenvector corresponding to the greatest (in absolute value) eigenvalue
float: Greatest eigenvalue (in absolute value)
"""
# add your code here
You may evaluate the correctness of your implementation using the scipy.linalg.eigvals function.
A = np.random.rand(3, 3)
e_vec, e_val = power_iteration(A, 50)
np.testing.assert_almost_equal(e_val, np.max(np.abs(la.eigvals(A))), decimal=15)
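For reference, a minimal sketch: the iterate is renormalized at every step to keep it bounded, and the eigenvalue is estimated by the Rayleigh quotient. The random starting vector is a choice made here; it almost surely has a nonzero component along the dominant eigenvector.
def power_iteration(A, max_it):
    A = np.asarray(A, dtype=float)
    v = np.random.rand(A.shape[0])  # random start: dominant component nonzero almost surely
    for _ in range(max_it):
        w = A @ v
        v = w / np.linalg.norm(w)   # renormalize to keep the iterate bounded
    e_val = v @ A @ v               # Rayleigh quotient (v has unit norm)
    return v, e_val
Note that the Rayleigh quotient returns the signed eigenvalue, while the test above compares against the largest magnitude; the two agree here because a matrix with positive entries has a positive dominant eigenvalue.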