AnimatedOptimization.jl¶
Some optimization algorithms (applicable to any function), with animations of search progress for functions from R² → R.
These are meant for teaching. They have not been extensively tested and are likely not well-suited for other uses.
Optimization Methods¶
Heuristic¶
AnimatedOptimization.minrandomsearch — Method.
minrandomsearch(f, x0, npoints; var0=1.0, ftol = 1e-6,
                vtol = 1e-4, maxiter = 1000,
                vshrink=0.9, xrange=[-2., 3.],
                yrange=[-2.,6.])
Find the minimum of function f by random search. Creates an animation illustrating search progress.
Arguments
- f: function to minimize
- x0: starting value
- npoints: number of points in cloud
- var0: initial variance of points
- ftol: convergence tolerance for function value. Search terminates if both the function change is less than ftol and the variance is less than vtol.
- vtol: convergence tolerance for variance. Search terminates if both the function change is less than ftol and the variance is less than vtol.
- maxiter: maximum number of iterations
- vshrink: after every 100 iterations with no function improvement, the variance is reduced by this factor
- xrange: range of x-axis in animation
- yrange: range of y-axis in animation
- animate: whether to create animation
Returns
- (fmin, xmin, iter, info, anim): tuple consisting of the minimal function value, the minimizer, the number of iterations, convergence info, and an animation
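For example, a minimal usage sketch (the Rosenbrock test function, starting point, and saving the animation via Plots.jl's gif are illustrative assumptions, not part of the documented API):

using AnimatedOptimization
using Plots   # assumed: the returned anim is a Plots.jl animation

# Rosenbrock "banana" function, a standard 2-D test problem
banana(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

fmin, xmin, iter, info, anim = minrandomsearch(banana, [-1.0, 3.0], 20)
gif(anim, "randomsearch.gif", fps = 5)   # write the animation to disk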
Smooth, Unconstrained¶
AnimatedOptimization.graddescent — Method.
graddescent(f, x0; grad=x->ForwardDiff.gradient(f,x),
            γ0=1.0, ftol = 1e-6,
            xtol = 1e-4, gtol = 1e-6, maxiter = 1000,
            xrange=[-2., 3.],
            yrange=[-2.,6.], animate=true)
Find the minimum of function f by gradient descent.
Arguments
- f: function to minimize
- x0: starting value
- grad: function that computes the gradient of f
- γ0: initial step size multiplier
- ftol: tolerance for function value
- xtol: tolerance for x
- gtol: tolerance for gradient. Convergence requires meeting all three tolerances.
- maxiter: maximum iterations
- xrange: x-axis range for animation
- yrange: y-axis range for animation
- animate: whether to create animation
Returns
- (fmin, xmin, iter, info, anim): tuple consisting of the minimal function value, the minimizer, the number of iterations, convergence info, and an animation
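A sketch with a hand-coded gradient instead of the ForwardDiff default (the objective, gradient, and starting point are illustrative assumptions):

using AnimatedOptimization

f(x) = (x[1] - 1.0)^2 + 4.0 * (x[2] - 2.0)^2      # minimum at (1, 2)
g(x) = [2.0 * (x[1] - 1.0), 8.0 * (x[2] - 2.0)]   # analytic gradient of f

fmin, xmin, iter, info, anim = graddescent(f, [-1.0, 5.0]; grad = g, γ0 = 0.1)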
AnimatedOptimization.newton — Method.
newton(f, x0;
       grad=x->ForwardDiff.gradient(f,x),
       hess=x->ForwardDiff.hessian(f,x),
       ftol = 1e-6,
       xtol = 1e-4, gtol = 1e-6, maxiter = 1000,
       xrange=[-2., 3.],
       yrange=[-2.,6.], animate=true)
Find the minimum of function f by Newton’s method.
Arguments
- f: function to minimize
- x0: starting value
- grad: function that returns the gradient of f
- hess: function that returns the Hessian of f
- ftol: tolerance for function value
- xtol: tolerance for x
- gtol: tolerance for gradient. Convergence requires meeting all three tolerances.
- maxiter: maximum iterations
- xrange: x-axis range for animation
- yrange: y-axis range for animation
- animate: whether to create animation
Returns
- (fmin, xmin, iter, info, anim): tuple consisting of the minimal function value, the minimizer, the number of iterations, convergence info, and an animation
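A sketch relying on the ForwardDiff defaults for grad and hess (the objective is assumed for illustration); on a convex quadratic like this, Newton’s method should converge in a single step:

using AnimatedOptimization

# Convex quadratic: the Hessian [2 0.5; 0.5 8] is positive definite
f(x) = (x[1] - 1.0)^2 + 4.0 * (x[2] - 2.0)^2 + 0.5 * x[1] * x[2]

fmin, xmin, iter, info, anim = newton(f, [0.0, 0.0])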
Constrained¶
AnimatedOptimization.interiorpoint — Method.
interiorpoint(f, x0, c;
              L = (x,λ)->(f(x) - dot(λ,c(x))),
              ∇ₓL = (x,λ)->ForwardDiff.gradient(z->L(z,λ), x),
              ∇²ₓL = (x,λ)->ForwardDiff.hessian(z->L(z,λ), x),
              ∇c = x->ForwardDiff.jacobian(c,x),
              tol=1e-4, maxiter = 1000,
              μ0 = 1.0, μfactor = 0.2,
              xrange=[-2., 3.],
              yrange=[-2.,6.], animate=true)
Find the minimum of function f subject to c(x) ≥ 0 using a primal-dual interior point method.
Arguments
- f: function to minimize
- x0: starting value. Must have c(x0) > 0.
- c: constraint function. Must return an array.
- L = (x,λ)->(f(x) - dot(λ,c(x))): Lagrangian
- ∇ₓL = (x,λ)->ForwardDiff.gradient(z->L(z,λ), x): derivative of the Lagrangian wrt x
- ∇²ₓL = (x,λ)->ForwardDiff.hessian(z->L(z,λ), x): Hessian of the Lagrangian wrt x
- ∇c = x->ForwardDiff.jacobian(c,x): Jacobian of the constraints
- tol: convergence tolerance
- maxiter: maximum iterations
- μ0: initial μ
- μfactor: factor by which μ is decreased
- xrange: range of x-axis for animation
- yrange: range of y-axis for animation
- animate: whether to create an animation (if true, requires length(x) == 2)
- verbosity: higher values result in more printed output during search; 0 for no output, any number > 0 for some
Returns
- (fmin, xmin, iter, info, animate): tuple consisting of the minimal function value, the minimizer, the number of iterations, convergence info, and an animation
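A sketch of a problem where the constraint binds at the solution (the objective, constraint, and starting point are illustrative assumptions; note the requirement c(x0) > 0):

using AnimatedOptimization

f(x) = (x[1] - 2.0)^2 + (x[2] - 2.0)^2   # unconstrained minimum at (2, 2)
c(x) = [1.0 - x[1] - x[2]]               # feasible set: x₁ + x₂ ≤ 1

x0 = [0.0, 0.0]                          # strictly feasible: c(x0) = [1.0] > 0
fmin, xmin, iter, info, anim = interiorpoint(f, x0, c)
# the constrained minimizer should be near (0.5, 0.5)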
AnimatedOptimization.sequentialquadratic — Method.
sequentialquadratic(f, x0, c;
                    ∇f = x->ForwardDiff.gradient(f,x),
                    ∇c = x->ForwardDiff.jacobian(c,x),
                    L = (x,λ)->(f(x) - dot(λ,c(x))),
                    ∇ₓL = (x,λ)->ForwardDiff.gradient(z->L(z,λ), x),
                    ∇²ₓL = (x,λ)->ForwardDiff.hessian(z->L(z,λ), x),
                    tol=1e-4, maxiter = 1000,
                    trustradius=1.0, xrange=[-2., 3.],
                    yrange=[-2.,6.], animate=true, verbosity=1)
Find the minimum of function f subject to c(x) ≥ 0 by sequential quadratic programming.
Arguments
- f: function to minimize
- x0: starting value. Must have c(x0) > 0.
- c: constraint function. Must return an array.
- ∇f = x->ForwardDiff.gradient(f,x): gradient of f
- ∇c = x->ForwardDiff.jacobian(c,x): Jacobian of the constraints
- L = (x,λ)->(f(x) - dot(λ,c(x))): Lagrangian
- ∇ₓL = (x,λ)->ForwardDiff.gradient(z->L(z,λ), x): derivative of the Lagrangian wrt x
- ∇²ₓL = (x,λ)->ForwardDiff.hessian(z->L(z,λ), x): Hessian of the Lagrangian wrt x
- tol: convergence tolerance
- maxiter: maximum iterations
- trustradius: initial trust region radius
- xrange: range of x-axis for animation
- yrange: range of y-axis for animation
- animate: whether to create an animation (if true, requires length(x) == 2)
- verbosity: higher values result in more printed output during search; 0 for no output, any number > 0 for some
Returns
- (fmin, xmin, iter, info, animate): tuple consisting of the minimal function value, the minimizer, the number of iterations, convergence info, and an animation
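The same illustrative problem used for interiorpoint above can be passed here; a sketch:

using AnimatedOptimization

f(x) = (x[1] - 2.0)^2 + (x[2] - 2.0)^2
c(x) = [1.0 - x[1] - x[2]]   # binds at the solution near (0.5, 0.5)

fmin, xmin, iter, info, anim =
    sequentialquadratic(f, [0.0, 0.0], c; trustradius = 0.5, verbosity = 0)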
AnimatedOptimization.slqp — Method.
slqp(f, x0, c;
     ∇f = x->ForwardDiff.gradient(f,x),
     ∇c = x->ForwardDiff.jacobian(c,x),
     L = (x,λ)->(f(x) - dot(λ,c(x))),
     ∇ₓL = (x,λ)->ForwardDiff.gradient(z->L(z,λ), x),
     ∇²ₓL = (x,λ)->ForwardDiff.hessian(z->L(z,λ), x),
     tol=1e-4, maxiter = 1000,
     trustradius=1.0, xrange=[-2., 3.],
     yrange=[-2.,6.], animate=true, verbosity=1)
Find the minimum of function f subject to c(x) ≥ 0 by sequential linear quadratic programming.
See https://en.wikipedia.org/wiki/Sequential_linear-quadratic_programming for algorithm information.
Arguments
- f: function to minimize
- x0: starting value. Must have c(x0) > 0.
- c: constraint function. Must return an array.
- ∇f = x->ForwardDiff.gradient(f,x): gradient of f
- ∇c = x->ForwardDiff.jacobian(c,x): Jacobian of the constraints
- L = (x,λ)->(f(x) - dot(λ,c(x))): Lagrangian
- ∇ₓL = (x,λ)->ForwardDiff.gradient(z->L(z,λ), x): derivative of the Lagrangian wrt x
- ∇²ₓL = (x,λ)->ForwardDiff.hessian(z->L(z,λ), x): Hessian of the Lagrangian wrt x
- tol: convergence tolerance
- maxiter: maximum iterations
- trustradius: initial trust region radius
- xrange: range of x-axis for animation
- yrange: range of y-axis for animation
- animate: whether to create an animation (if true, requires length(x) == 2)
- verbosity: higher values result in more printed output during search; 0 for no output, any number > 0 for some
Returns
- (fmin, xmin, iter, info, animate): tuple consisting of the minimal function value, the minimizer, the number of iterations, convergence info, and an animation
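Usage mirrors sequentialquadratic; a sketch with the same illustrative problem:

using AnimatedOptimization

f(x) = (x[1] - 2.0)^2 + (x[2] - 2.0)^2
c(x) = [1.0 - x[1] - x[2]]

fmin, xmin, iter, info, anim = slqp(f, [0.0, 0.0], c; verbosity = 1)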