Tutorial 07 - Nonlinear equations

Root finding, bisection method, secant method, false position method, Newton-Raphson method.

import numpy as np
from scipy import linalg, optimize
import matplotlib.pyplot as plt

Bisection method

The bisection method is applicable for numerically solving the equation \( f(x) = 0 \) for \( x \in \mathbb{R} \), where \( f \) is a continuous function defined on an interval \( [a, b] \) and where \( f(a) \) and \( f(b) \) have opposite signs. In this case \( a \) and \( b \) are said to bracket a root.

At each step the method divides the interval \( [a, b] \) in two by computing the midpoint \( c = (a + b)/2 \) of the interval. Unless \( c \) is itself a root (which is very unlikely, but possible) there are now only two possibilities: either \( f(a) \) and \( f(c) \) have opposite signs and bracket a root, or \( f(c) \) and \( f(b) \) have opposite signs and bracket a root. The method selects the subinterval that is guaranteed to be a bracket as the new interval to be used in the next step. The process is continued until the interval is sufficiently small. When implementing the method on a computer, there can be problems with finite precision, so there are often additional convergence tests or limits on the number of iterations.
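The loop described above might be sketched as follows. This is only a minimal illustration, not the signature Exercise 07.1 asks for; the name `bisection_sketch` and the default tolerances are made up.

```python
def bisection_sketch(f, a, b, tol=1.0e-12, max_iter=500):
    """Halve a bracketing interval [a, b] until it is shorter than tol."""
    f_a, f_b = f(a), f(b)
    if f_a * f_b > 0.0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0
        f_c = f(c)
        if f_c == 0.0 or (b - a) / 2.0 < tol:
            return c
        if f_a * f_c < 0.0:      # root is bracketed by [a, c]
            b, f_b = c, f_c
        else:                    # root is bracketed by [c, b]
            a, f_a = c, f_c
    raise RuntimeError("bisection did not converge")

root = bisection_sketch(lambda x: x**2 - 2.0, 0.0, 2.0)  # converges to sqrt(2)
```

Because the interval halves at every step, the number of iterations needed is roughly \( \log_2((b - a) / \mathrm{tol}) \), independent of the function.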

Exercise 07.1: Implement the bisection method.

def bisection(f, a, b, error_tolerance=1.0e-15, max_iterations=500):
    """
    Finds the root of a continuous function using the bisection method.
    Args:
        f (function): Continuous function defined on an interval [a, b], where f(a) and f(b) have opposite signs
        a (float): Leftmost point of the interval
        b (float): Rightmost point of the interval
        error_tolerance (float): Error tolerance
        max_iterations (int): Maximum number of iterations
    Returns:
        float: The root of function f
    Raises:
        RuntimeError: Raises an exception when the root is not found
    """

    # add your code here

You may evaluate the correctness of your implementation using the scipy.optimize.root_scalar function.

def f(x):
    return x**2 - 1.0
try:
    np.testing.assert_array_almost_equal(bisection(f, 0.0, 4.0), optimize.root_scalar(f, bracket=[0.0, 4.0], method="bisect").root, decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")

Exercise 07.2: Find the only real root of the cubic function

\[ f(x) = x^3 - x^2 - 1 \]

within the interval \( [-3, 3] \) using the bisection method. Plot the function and mark the solution.

# add your code here

Note: The root of the cubic function in Exercise 07.2 can also be found analytically and is known as the supergolden ratio.

Secant method

The secant method numerically solves the equation \( f(x) = 0 \), \( x \in \mathbb{R} \), using a succession of roots of secant lines. Unlike the bisection method, the secant method does not require that the root remain bracketed, and hence it does not always converge.

Starting with initial values \( x^{(0)} \) and \( x^{(1)} \), the method calculates the equation of a secant line \( s(x) = px + q \) passing through the points \( [x^{(0)}, f(x^{(0)})] \) and \( [x^{(1)}, f(x^{(1)})] \). The slope of the line, \( p \), and the intercept, \( q \), can easily be obtained using basic algebra,

\[ p = \frac{f(x^{(1)}) - f(x^{(0)})}{x^{(1)} - x^{(0)}}, \quad q = f(x^{(0)}) - \frac{f(x^{(1)}) - f(x^{(0)})}{x^{(1)} - x^{(0)}} x^{(0)}. \]

The point where the secant line intersects the \( x \)-axis can be obtained by solving \( s(x) = 0 \),

\[ x = \frac{x^{(0)} f(x^{(1)}) - x^{(1)} f(x^{(0)})}{f(x^{(1)}) - f(x^{(0)})}. \]

This new value of \( x \) is then used as \( x^{(2)} \), and the process repeats (with \( x^{(1)} \) and \( x^{(2)} \) instead of \( x^{(0)} \) and \( x^{(1)} \)) until the difference between \( x^{(k)} \) and \( x^{(k - 1)}\) is sufficiently small.
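The iteration just described might be sketched like this (an illustrative sketch only; the name `secant_sketch` and the tolerances are not part of the exercise):

```python
def secant_sketch(f, x_0, x_1, tol=1.0e-12, max_iter=500):
    """Iterate the secant update until successive iterates are within tol."""
    f_0, f_1 = f(x_0), f(x_1)
    for _ in range(max_iter):
        if f_1 == f_0:
            raise RuntimeError("secant line is horizontal, cannot continue")
        x_2 = (x_0 * f_1 - x_1 * f_0) / (f_1 - f_0)  # root of the secant line
        if abs(x_2 - x_1) < tol:
            return x_2
        x_0, f_0 = x_1, f_1                          # shift the pair of iterates
        x_1, f_1 = x_2, f(x_2)
    raise RuntimeError("secant method did not converge")

root = secant_sketch(lambda x: x**2 - 2.0, 0.0, 2.0)  # converges to sqrt(2)
```

Note the guard against a horizontal secant line: if \( f(x^{(k)}) = f(x^{(k-1)}) \) the update formula divides by zero, which is one way the method can fail without a bracket.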

Exercise 07.3: Implement the secant method.

def secant_method(f, x_0, x_1, error_tolerance=1.0e-15, max_iterations=500):
    """
    Finds the root of a continuous function using the secant method.
    Args:
        f (function): Continuous function
        x_0 (float): First initial guess
        x_1 (float): Second initial guess
        error_tolerance (float): Error tolerance
        max_iterations (int): Maximum number of iterations
    Returns:
        float: The root of function f
    Raises:
        RuntimeError: Raises an exception when the root is not found
    """

    # add your code here

You may evaluate the correctness of your implementation using the scipy.optimize.root_scalar function.

def f(x):
    return x**2 - 1.0
try:
    np.testing.assert_array_almost_equal(secant_method(f, 0.0, 4.0), optimize.root_scalar(f, x0=0.0, x1=4.0, method="secant").root, decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")

Exercise 07.4: Find the only real root of the cubic function

\[ f(x) = x^3 - x - 1 \]

using the secant method (you may start with the initial values \( x_0 = 0 \) and \( x_1 = 2 \)). Plot the function and mark the solution.

# add your code here

Note: The root of the cubic function in Exercise 07.4 can also be found analytically and is known as the plastic number.

False position method

The false position method uses the same formula as the secant method. However, it does not apply the formula to \( x^{(k - 1)} \) and \( x^{(k - 2)} \), like the secant method, but to \( x^{(k - 1)} \) and the last iterate \( x^{(k)} \) such that \( f(x^{(k)}) \) and \( f(x^{(k - 1)}) \) have opposite signs, so that the two points continue to bracket the root. This means that the false position method always converges.
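A sketch of how the secant formula can be combined with the bracket-keeping rule (illustrative names and tolerances, not the required exercise signature):

```python
def false_position_sketch(f, a, b, tol=1.0e-12, max_iter=500):
    """Secant update restricted to endpoints that keep the root bracketed."""
    f_a, f_b = f(a), f(b)
    if f_a * f_b > 0.0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c_old = c
        c = (a * f_b - b * f_a) / (f_b - f_a)  # same formula as the secant method
        f_c = f(c)
        if f_c == 0.0 or abs(c - c_old) < tol:
            return c
        if f_a * f_c < 0.0:                    # keep [a, c] as the bracket
            b, f_b = c, f_c
        else:                                  # keep [c, b] as the bracket
            a, f_a = c, f_c
    raise RuntimeError("false position method did not converge")

root = false_position_sketch(lambda x: x**2 - 2.0, 0.0, 2.0)  # converges to sqrt(2)
```

The only difference from the secant sketch is the sign test deciding which endpoint to replace, which is exactly what guarantees convergence at the cost of the secant method's faster rate.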

Exercise 07.5: Implement the false position method.

def false_position_method(f, x_0, x_1, error_tolerance=1.0e-15, max_iterations=500):
    """
    Finds the root of a continuous function using the false position method.
    Args:
        f (function): Continuous function defined on an interval [x_0, x_1], where f(x_0) and f(x_1) have opposite signs
        x_0 (float): Leftmost point of the interval
        x_1 (float): Rightmost point of the interval
        error_tolerance (float): Error tolerance
        max_iterations (int): Maximum number of iterations
    Returns:
        float: The root of function f
    Raises:
        RuntimeError: Raises an exception when the root is not found
    """

    # add your code here

You may evaluate the correctness of your implementation using the scipy.optimize.root_scalar function.

def f(x):
    return x**2 - 1.0
try:
    # scipy.optimize.root_scalar has no false position method, so we compare against the root found by the secant method
    np.testing.assert_array_almost_equal(false_position_method(f, 0.0, 4.0), optimize.root_scalar(f, x0=0.0, x1=4.0, method="secant").root, decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")

Newton-Raphson method

Newton-Raphson method for one nonlinear equation:

The root of a nonlinear function \( f(x) \), \( x \in \mathbb{R} \), whose derivative \( f^{\prime}(x) \) is continuous and nonzero in the neighborhood of the root, is obtained iteratively using the following recursive formula,

\[ x^{(k + 1)} = x^{(k)} - \frac{f(x^{(k)})}{f^{\prime}(x^{(k)})} \]

with \( x^{(0)} \) being an initial guess. A finite difference can be used to approximate \( f^{\prime}(x) \) in the Newton-Raphson formula. This is not, however, a recommended procedure, because it may decrease the accuracy and the rate of convergence.
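The formula above might be sketched as follows, with an optional analytic derivative and a central-difference fallback (the name `newton_sketch`, the step size, and the tolerances are illustrative):

```python
def newton_sketch(f, x_0, dfdx=None, tol=1.0e-12, max_iter=100):
    """Newton-Raphson iteration; falls back to a central difference
    when no analytic derivative is supplied."""
    h = 1.0e-7                    # central-difference step (a compromise choice)
    for _ in range(max_iter):
        d = dfdx(x_0) if dfdx is not None else (f(x_0 + h) - f(x_0 - h)) / (2.0 * h)
        if d == 0.0:
            raise RuntimeError("derivative vanished, cannot continue")
        x_1 = x_0 - f(x_0) / d    # the Newton-Raphson update
        if abs(x_1 - x_0) < tol:
            return x_1
        x_0 = x_1
    raise RuntimeError("Newton-Raphson did not converge")

root = newton_sketch(lambda x: x**2 - 2.0, 1.0, dfdx=lambda x: 2.0 * x)
```

With the analytic derivative the iteration converges quadratically near the root; with the finite-difference fallback the derivative carries truncation and roundoff error, which is the accuracy trade-off the paragraph above warns about.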

Exercise 07.6: Implement the Newton-Raphson method for a one-unknown problem (you may use finite differences to approximate the derivative of the given function).

def newton_raphson(f, x_0, error_tolerance=1.0e-15, max_iterations=500):
    """
    Finds the solution of one nonlinear equation using the Newton-Raphson method.
    Args:
        f (function): Function whose derivative is continuous and nonzero in the neighborhood of a root
        x_0 (float): Initial guess
        error_tolerance (float): Error tolerance
        max_iterations (int): Maximum number of iterations
    Returns:
        float: The solution of a given equation
    Raises:
        RuntimeError: Raises an exception when the solution is not found
    """

    # add your code here

You may evaluate the correctness of your implementation using the scipy.optimize.root_scalar function.

def f(x):
    return x**2 - 1.0
def df(x):
    return 2.0 * x
try:
    np.testing.assert_array_almost_equal(newton_raphson(f, 4.0), optimize.root_scalar(f, fprime=df, x0=4.0, method="newton").root, decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")

Exercise 07.7: Find the solution of the equation

\[ x^3 - \cos x = 0 \]

using the Newton-Raphson method with an error tolerance of \( 10^{-15} \). How does the required number of iterations depend on the value of the initial guess?

# add your code here

Exercise 07.8: Find the solution of the equation

\[ \sqrt[3]{x} = 0 \]

using the Newton-Raphson method.

# add your code here

Warning: The Newton-Raphson method diverges in the case of the equation in Exercise 07.8, because the tangent line at the root is vertical.
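The divergence can be seen directly: for \( f(x) = \sqrt[3]{x} \) the update is \( x - f(x)/f^{\prime}(x) = x - 3x = -2x \), so each iterate doubles in magnitude. A small numerical check of this (illustrative only):

```python
def cbrt(x):
    # real cube root, defined for negative arguments as well
    return abs(x) ** (1.0 / 3.0) * (1.0 if x >= 0.0 else -1.0)

x = 0.1
for _ in range(5):
    d = (1.0 / 3.0) * abs(x) ** (-2.0 / 3.0)  # f'(x) = x^(-2/3) / 3
    x = x - cbrt(x) / d                       # Newton step: x -> -2x
    print(x)                                  # magnitude doubles every step
```

Starting from \( x = 0.1 \), the iterates oscillate in sign while growing without bound, no matter how close the initial guess is to the root at zero.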

Newton-Raphson method for system of nonlinear equations:

A system of \( n \) nonlinear equations \( f(x) = 0 \), where \( x \) and \( f \), respectively, denote the entire vectors of values \( x_i \) and functions \( f_i \), \( i = 0, 1, \dots, n - 1 \), is solved iteratively using the following recursive formula,

\[ x^{(k + 1)} = x^{(k)} + \delta_x. \]

The correction \( \delta_x \) is obtained by solving the following system of linear algebraic equations,

\[ J \cdot \delta_x = -f, \]

where \( J \) is the Jacobian matrix, \( J_{ij} = \partial f_i / \partial x_j \), with both \( J \) and \( f \) evaluated at \( x^{(k)} \).
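One way to sketch this for a general \( n \)-dimensional system, approximating the Jacobian column by column with forward differences (the function name and the step size `h` are illustrative assumptions, not the exercise's required two-unknown signature):

```python
import numpy as np

def newton_system_sketch(f, x_init, tol=1.0e-12, max_iter=100, h=1.0e-7):
    """Newton-Raphson for f: R^n -> R^n with a forward-difference Jacobian."""
    x = np.asarray(x_init, dtype=float)
    n = x.size
    for _ in range(max_iter):
        f_x = np.asarray(f(x), dtype=float)
        J = np.empty((n, n))
        for j in range(n):                     # J[:, j] approximates df / dx_j
            step = np.zeros(n)
            step[j] = h
            J[:, j] = (np.asarray(f(x + step), dtype=float) - f_x) / h
        delta = np.linalg.solve(J, -f_x)       # solve J @ delta = -f
        x = x + delta                          # x^(k+1) = x^(k) + delta
        if np.linalg.norm(delta) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# example: intersection of the unit circle with the line y = x
solution = newton_system_sketch(
    lambda v: [v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]], [1.0, 0.5]
)
```

Each iteration costs \( n \) extra function evaluations for the Jacobian plus one linear solve; with an analytic Jacobian the finite-difference loop would be replaced by a direct evaluation, avoiding the accuracy caveat mentioned for the one-dimensional case.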

Exercise 07.9: Implement the Newton-Raphson method for a two-unknown problem (you may use finite differences to approximate the derivatives of the given functions).

def newton_raphson_2d(f_0, f_1, x_0, y_0, error_tolerance=1.0e-15, max_iterations=500):
    """
    Finds the solution of a system of two equations using the Newton-Raphson method.
    Args:
        f_0 (function): Function whose derivative is continuous and nonzero in the neighborhood of a root
        f_1 (function): Function whose derivative is continuous and nonzero in the neighborhood of a root
        x_0 (float): Initial guess for x
        y_0 (float): Initial guess for y
        error_tolerance (float): Error tolerance
        max_iterations (int): Maximum number of iterations
    Returns:
        list: The solution vector of a given system of equations
    Raises:
        RuntimeError: Raises an exception when the solution is not found
    """

    # add your code here

Exercise 07.10: Solve the following system of nonlinear equations,

\[\begin{split} \begin{align} & x^2 - y^2 = -4 x + 2 y + 1, \\ & x^2 = -5y + 4, \end{align} \end{split}\]

using the Newton-Raphson method. Plot both functions and mark the root.

# add your code here