Tutorial 08 - Optimization
Searching for extrema of functions: golden-section search, parabolic interpolation search, and gradient (steepest) descent.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy import linalg, optimize
Golden-section search
Exercise 08.1: Implement the golden-section search method.
def golden_section_search(f, a, b, error_tolerance=1.0e-15, max_iterations=500):
    """
    Finds the minimum of a function using golden-section search.

    Args:
        f (function): A strictly unimodal function on [a, b]
        a (float): Left endpoint of the interval
        b (float): Right endpoint of the interval
        error_tolerance (float): Error tolerance
        max_iterations (int): Maximum number of iterations

    Returns:
        float: The coordinate of the minimum

    Raises:
        RuntimeError: Raised when the minimum is not found
    """
    # add your code here
You may evaluate the correctness of your implementation using the scipy.optimize.minimize_scalar
function.
def f(x):
    return np.sin(x)

try:
    np.testing.assert_array_almost_equal(
        golden_section_search(f, -2.0, 0.0, 1.0e-7),
        optimize.minimize_scalar(f, bracket=[-2.0, 0.0], tol=1.0e-7, method="golden").x,
        decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")
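For reference, one possible implementation is sketched below (a standard variant, not necessarily the intended solution): the bracket \( [a, b] \) shrinks by the inverse golden ratio each iteration, and one interior function value is reused per step so only one new evaluation is needed.

```python
import numpy as np

def golden_section_search(f, a, b, error_tolerance=1.0e-15, max_iterations=500):
    # Interval-shrinking factor: the inverse golden ratio, 1/phi ~ 0.618.
    inv_phi = (np.sqrt(5.0) - 1.0) / 2.0
    c = b - inv_phi * (b - a)   # left interior point
    d = a + inv_phi * (b - a)   # right interior point
    fc, fd = f(c), f(d)
    for _ in range(max_iterations):
        if abs(b - a) < error_tolerance:
            return (a + b) / 2.0
        if fc < fd:
            # Minimum lies in [a, d]; old c becomes the new right interior point.
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:
            # Minimum lies in [c, b]; old d becomes the new left interior point.
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    raise RuntimeError("Minimum not found within the maximum number of iterations.")

# The minimum of sin(x) on [-2, 0] is at -pi/2.
x_min = golden_section_search(np.sin, -2.0, 0.0, 1.0e-7)
```

Because the interval shrinks by a constant factor of about 0.618 per iteration, roughly 35 iterations suffice to reduce a width-2 bracket below \( 10^{-7} \).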
Exercise 08.2: Find a local minimum of the following function,
in the interval \( [-5, 5] \) with the error tolerance of \( 10^{-7} \) using the golden-section search method.
Print the coordinate of the minimum.
Plot the function \( f \) and mark the minimum.
# add your code here
Parabolic interpolation search
Exercise 08.3: Implement the parabolic interpolation search method.
def parabolic_interpolation_search(f, a, b, c, error_tolerance=1.0e-15, max_iterations=500):
    """
    Finds the minimum of a function using parabolic interpolation search.

    Args:
        f (function): A strictly unimodal function on [a, c]
        a (float): Left endpoint of the interval
        b (float): A point inside the interval
        c (float): Right endpoint of the interval
        error_tolerance (float): Error tolerance
        max_iterations (int): Maximum number of iterations

    Returns:
        float: The coordinate of the minimum

    Raises:
        RuntimeError: Raised when the minimum is not found
    """
    # add your code here
You may evaluate the correctness of your implementation using the scipy.optimize.minimize_scalar
function.
def f(x):
    return np.sin(x)

try:
    np.testing.assert_array_almost_equal(
        parabolic_interpolation_search(f, -2.0, -1.0, 0.0, 1.0e-7),
        optimize.minimize_scalar(f, bracket=[-2.0, 0.0], tol=1.0e-7, method="golden").x,
        decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")
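A possible sketch of successive parabolic interpolation: each iteration fits a parabola through \( (a, f(a)) \), \( (b, f(b)) \), \( (c, f(c)) \) and jumps to its vertex. The bracket-update rule below is one simple choice; no safeguarding of the kind used in Brent's method is included.

```python
import numpy as np

def parabolic_interpolation_search(f, a, b, c, error_tolerance=1.0e-15, max_iterations=500):
    for _ in range(max_iterations):
        fa, fb, fc = f(a), f(b), f(c)
        # Vertex of the parabola through (a, fa), (b, fb), (c, fc).
        numerator = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        denominator = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if denominator == 0.0:
            return b  # the three points are collinear; cannot improve further
        x = b - 0.5 * numerator / denominator
        if abs(x - b) < error_tolerance:
            return x
        # Keep a triple a < b < c that still brackets the minimum.
        if x < b:
            if f(x) < fb:
                b, c = x, b
            else:
                a = x
        else:
            if f(x) < fb:
                a, b = b, x
            else:
                c = x
    raise RuntimeError("Minimum not found within the maximum number of iterations.")

x_min = parabolic_interpolation_search(np.sin, -2.0, -1.0, 0.0, 1.0e-7)
```

Near the minimum the method converges superlinearly, so the tolerance is reached in far fewer iterations than golden-section search needs.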
Exercise 08.4: Find a local minimum of the following function,
in the interval \( [-5, 5] \) with the error tolerance of \( 10^{-7} \) using the parabolic interpolation search method.
Print the coordinate of the minimum.
Plot the function \( f \) and mark the minimum.
# add your code here
Steepest descent
Exercise 08.5: Implement the steepest descent method for a function of one unknown.
def steepest_descent(f, x_0, step, error_tolerance=1.0e-15, max_iterations=100000):
    """
    Finds the minimum of a function using the method of steepest descent.

    Args:
        f (function): A strictly unimodal function, differentiable in a neighborhood of x_0
        x_0 (float): Initial guess
        step (float): Step-size multiplier
        error_tolerance (float): Error tolerance
        max_iterations (int): Maximum number of iterations

    Returns:
        float: The coordinate of the minimum

    Raises:
        RuntimeError: Raised when the minimum is not found
    """
    # add your code here
You may evaluate the correctness of your implementation using the scipy.optimize.minimize_scalar
function.
def f(x):
    return np.sin(x)

try:
    np.testing.assert_array_almost_equal(
        steepest_descent(f, -2.0, 1.0e-3),
        optimize.minimize_scalar(f, bracket=[-2.0, 0.0], tol=1.0e-7, method="golden").x,
        decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")
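One possible sketch, approximating \( f'(x) \) with a central difference (an assumption; the exercise may instead expect an analytic derivative). Note the looser default tolerance: the numerical derivative has a noise floor of roughly \( 10^{-10} \), so a step-size criterion of \( 10^{-15} \) might never trigger.

```python
import numpy as np

def steepest_descent(f, x_0, step, error_tolerance=1.0e-9, max_iterations=100000):
    h = 1.0e-6  # central-difference step for approximating f'(x)
    x = x_0
    for _ in range(max_iterations):
        derivative = (f(x + h) - f(x - h)) / (2.0 * h)
        x_new = x - step * derivative
        # Stop when the update becomes smaller than the tolerance.
        if abs(x_new - x) < error_tolerance:
            return x_new
        x = x_new
    raise RuntimeError("Minimum not found within the maximum number of iterations.")

x_min = steepest_descent(np.sin, -2.0, 1.0e-3)
```

With a small fixed step multiplier the iteration contracts the error by a factor of roughly \( (1 - \text{step}) \) per step, so many thousands of iterations can be needed; this is why max_iterations defaults to a large value here.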
Exercise 08.6: Find a local minimum of the following function,
with the error tolerance of \( 10^{-7} \) using the steepest descent method. Use the point \( x_0 = 0 \) as an initial guess.
Print the coordinate of the minimum.
Plot the function \( f \) and mark the minimum.
# add your code here
Exercise 08.7: Implement the steepest descent method for a function of \( n \in \mathbb{N} \) unknowns.
def steepest_descent_nd(f, x_0, step, error_tolerance=1.0e-15, max_iterations=100000):
    """
    Finds the minimum of a function of n variables using the method of steepest descent.

    Args:
        f (function): A strictly unimodal function, differentiable in a neighborhood of x_0
        x_0 (numpy.ndarray): Initial guess
        step (float): Step-size multiplier
        error_tolerance (float): Error tolerance
        max_iterations (int): Maximum number of iterations

    Returns:
        numpy.ndarray: The coordinates of the minimum

    Raises:
        RuntimeError: Raised when the minimum is not found
    """
    # add your code here
You may evaluate the correctness of your implementation using the scipy.optimize.minimize
function.
def f(x):
    return np.sin(x[0]) * np.sin(x[1])

try:
    np.testing.assert_array_almost_equal(
        steepest_descent_nd(f, np.array([-2.0, -2.0]), 1.0e-3),
        optimize.minimize(f, np.array([-2.0, -2.0])).x,
        decimal=7)
except AssertionError as E:
    print(E)
else:
    print("The implementation is correct.")
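A sketch of the n-dimensional variant, again assuming a central-difference gradient and a looser default tolerance. The demo at the end uses a simple convex quadratic with a known minimum at \( (1, -2) \) (a function chosen here only for illustration, not one from the exercises).

```python
import numpy as np

def steepest_descent_nd(f, x_0, step, error_tolerance=1.0e-9, max_iterations=100000):
    h = 1.0e-6  # central-difference step for approximating the gradient
    x = np.asarray(x_0, dtype=float)
    for _ in range(max_iterations):
        # Approximate each partial derivative with a central difference.
        gradient = np.zeros_like(x)
        for i in range(x.size):
            e_i = np.zeros_like(x)
            e_i[i] = h
            gradient[i] = (f(x + e_i) - f(x - e_i)) / (2.0 * h)
        x_new = x - step * gradient
        # Stop when the update vector becomes shorter than the tolerance.
        if np.linalg.norm(x_new - x) < error_tolerance:
            return x_new
        x = x_new
    raise RuntimeError("Minimum not found within the maximum number of iterations.")

# Demo on a convex quadratic with its minimum at (1, -2).
x_min = steepest_descent_nd(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                            np.array([0.0, 0.0]), 1.0e-1)
```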
Exercise 08.8: Find a local minimum of the following function,
with the error tolerance of \( 10^{-7} \) using the steepest descent method. Use the point \( (x_0, y_0) = (0, 0) \) as an initial guess.
Print the coordinates of the minimum.
Plot the function \( f \) and mark the minimum.
# add your code here
Note: The function in Exercise 08.8 is called the Rosenbrock function and is commonly used as a performance test problem for optimization algorithms. The global minimum of this function is located inside a long, narrow, parabolic-shaped flat valley.
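SciPy ships the Rosenbrock function as optimize.rosen, so this claim can be checked directly; a quick sketch:

```python
import numpy as np
from scipy import optimize

# optimize.rosen evaluates the Rosenbrock function; its global minimum
# is f(1, 1) = 0, at the bottom of the narrow curved valley.
result = optimize.minimize(optimize.rosen, np.array([0.0, 0.0]), tol=1.0e-10)
print(result.x)    # approximately [1, 1]
print(result.fun)  # approximately 0
```

SciPy also provides the analytic gradient and Hessian (optimize.rosen_der, optimize.rosen_hess), which can be passed to optimize.minimize to speed up convergence.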