Tutorial 08 - Optimization

Searching for extrema of functions: golden section search, parabolic interpolation search, and gradient (steepest) descent.

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy import linalg, optimize

Steepest descent

The steepest descent (gradient descent) method searches for a local minimum of a differentiable function by repeatedly stepping in the direction of the negative gradient,

\[ x_{k+1} = x_k - \gamma \, \nabla f(x_k), \]

where \( \gamma > 0 \) is a step size multiplier. The iteration is stopped once two consecutive iterates differ by less than a prescribed error tolerance.

Exercise 08.5: Implement the steepest descent method for a function of one unknown.

def steepest_descent(f, x_0, step, error_tolerance=1.0e-15, max_iterations=100000):
    """
    Finds a local minimum of a function using the method of steepest descent.
    Args:
        f (function): A strictly unimodal and differentiable function in a neighborhood of the point x_0
        x_0 (float): Initial guess
        step (float): Step size multiplier
        error_tolerance (float): Error tolerance
        max_iterations (int): Maximum number of iterations
    Returns:
        float: The coordinate of the minimum
    Raises:
        RuntimeError: Raises an exception when the minimum is not found
    """

    # add your code here
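
One possible reference implementation is sketched below. It assumes the derivative is approximated by a central difference with a fixed step \( h = 10^{-7} \) (this value is a choice, not prescribed by the exercise) and stops once two consecutive iterates differ by less than the error tolerance.

def steepest_descent(f, x_0, step, error_tolerance=1.0e-15, max_iterations=100000):
    # a minimal sketch; see the docstring above for the interface
    x = x_0
    h = 1.0e-7  # step of the central-difference derivative estimate (assumed value)
    for _ in range(int(max_iterations)):
        # approximate f'(x) by a central difference
        gradient = (f(x + h) - f(x - h)) / (2.0 * h)
        # move downhill, i.e., against the derivative
        x_new = x - step * gradient
        # stop once two consecutive iterates are close enough
        if abs(x_new - x) < error_tolerance:
            return x_new
        x = x_new
    raise RuntimeError("The minimum was not found within the maximum number of iterations.")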

You may evaluate the correctness of your implementation using the scipy.optimize.minimize_scalar function.

def f(x):
    return np.sin(x)
try:
    np.testing.assert_array_almost_equal(steepest_descent(f, -2.0, 1.0e-3), 
        optimize.minimize_scalar(f, bracket=[-2.0, 0.0], tol=1.0e-7, method="golden").x, decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")

Exercise 08.6: Find a local minimum of the following function,

\[ f(x) = -x^3 \mathrm{e}^{-(x + 1)^2} + 10 \, x \tanh(x + 2) \]

with an error tolerance of \( 10^{-7} \) using the steepest descent method. Use the point \( x_0 = 0 \) as an initial guess.

  1. Print the coordinate of the minimum.

  2. Plot the function \( f \) and mark the minimum.

# add your code here
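
A possible solution sketch follows, reusing the steepest_descent routine from Exercise 08.5; the step size multiplier 1.0e-3 and the plotting range are assumptions.

def f(x):
    return -x**3 * np.exp(-(x + 1)**2) + 10.0 * x * np.tanh(x + 2)

# 1. the coordinate of the minimum
x_min = steepest_descent(f, 0.0, 1.0e-3, error_tolerance=1.0e-7)  # step size is an assumed value
print(f"Local minimum at x = {x_min}")

# 2. the function with the minimum marked
xs = np.linspace(-4.0, 4.0, 400)  # plotting range is an assumed choice
plt.plot(xs, f(xs), label=r"$f(x)$")
plt.plot(x_min, f(x_min), "ro", label="minimum")
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.legend()
plt.show()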

Exercise 08.7: Implement the steepest descent method for a function of \( n \in \mathbb{N} \) unknowns.

def steepest_descent_nd(f, x_0, step, error_tolerance=1.0e-15, max_iterations=100000):
    """
    Finds a local minimum of a function of several variables using the method of steepest descent.
    Args:
        f (function): A strictly unimodal and differentiable function in a neighborhood of a point x_0
        x_0 (numpy.ndarray): Initial guess
        step (float): Step size multiplier
        error_tolerance (float): Error tolerance
        max_iterations (int): Maximum number of iterations
    Returns:
        numpy.ndarray: The coordinates of the minimum
    Raises:
        RuntimeError: Raises an exception when the minimum is not found
    """

    # add your code here
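
A possible reference sketch, analogous to the one-dimensional version: each gradient component is approximated by a central difference with a fixed step \( h = 10^{-7} \) (an assumed value), and the iteration stops once two consecutive iterates are closer than the error tolerance in the Euclidean norm.

def steepest_descent_nd(f, x_0, step, error_tolerance=1.0e-15, max_iterations=100000):
    # a minimal sketch; see the docstring above for the interface
    x = np.asarray(x_0, dtype=float)
    h = 1.0e-7  # step of the central-difference estimates (assumed value)
    for _ in range(int(max_iterations)):
        # approximate the gradient component by component using central differences
        gradient = np.zeros_like(x)
        for i in range(x.size):
            offset = np.zeros_like(x)
            offset[i] = h
            gradient[i] = (f(x + offset) - f(x - offset)) / (2.0 * h)
        # move downhill, i.e., against the gradient
        x_new = x - step * gradient
        # stop once two consecutive iterates are close enough
        if np.linalg.norm(x_new - x) < error_tolerance:
            return x_new
        x = x_new
    raise RuntimeError("The minimum was not found within the maximum number of iterations.")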

You may evaluate the correctness of your implementation using the scipy.optimize.minimize function.

def f(x):
    return np.sin(x[0]) * np.sin(x[1])
try:
    np.testing.assert_array_almost_equal(steepest_descent_nd(f, np.array([-2.0, -2.0]), 1.0e-3), 
        optimize.minimize(f, np.array([-2.0, -2.0])).x, decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")

Exercise 08.8: Find a local minimum of the following function,

\[ f(x, y) = (1 - x)^2 + 100 (y - x^2)^2, \]

with an error tolerance of \( 10^{-7} \) using the steepest descent method. Use the point \( (x_0, y_0) = (0, 0) \) as an initial guess.

  1. Print the coordinates of the minimum.

  2. Plot the function \( f \) and mark the minimum.

# add your code here
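
A possible solution sketch using steepest_descent_nd from Exercise 08.7; the step size multiplier 1.0e-3 (small enough to keep the iteration stable in the steep transverse direction of the valley) and the plotting window are assumptions.

def f(x):
    return (1 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

# 1. the coordinates of the minimum
x_min = steepest_descent_nd(f, np.array([0.0, 0.0]), 1.0e-3, error_tolerance=1.0e-7)
print(f"Local minimum at (x, y) = ({x_min[0]}, {x_min[1]})")

# 2. a contour plot of the function with the minimum marked
xs = np.linspace(-0.5, 1.5, 200)  # plotting window is an assumed choice
ys = np.linspace(-0.5, 1.5, 200)
X, Y = np.meshgrid(xs, ys)
Z = f([X, Y])  # f only indexes its argument, so a list of coordinate grids works
plt.contour(X, Y, Z, levels=np.logspace(-1.0, 3.0, 15))
plt.plot(x_min[0], x_min[1], "ro", label="minimum")
plt.xlabel(r"$x$")
plt.ylabel(r"$y$")
plt.legend()
plt.show()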

Note: The function in Exercise 08.8 is called the Rosenbrock function and is used as a performance test problem for optimization algorithms. The global minimum of this function lies inside a long, narrow, parabola-shaped flat valley.