scalation.optimization
Members list
Type members
Classlikes
The BoundsConstraint trait provides a mechanism for bouncing back at constraint boundaries.
Attributes
- Supertypes
- class Object, trait Matchable, class Any
- Known subtypes
- class NelderMeadSimplex2, class SPSA
The ConjugateGradient class implements the Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which in this implementation uses the Polak-Ribiere technique.
dir_k = - grad (x) + beta * dir_k-1
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
Value parameters
- exactLS
- whether to use exact (e.g., GoldenLS) or inexact (e.g., WolfeLS) Line Search
- f
- the objective function to be minimized
- g
- the constraint function to be satisfied, if any
- ineq
- whether the constraint function must satisfy inequality or equality
Attributes
- Supertypes
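To make the direction update concrete, here is a small plain-Scala sketch (hypothetical helper names and standard arrays rather than ScalaTion's vector types; not the class's actual API):

```scala
// Illustrative sketch of the Polak-Ribiere direction update using plain Scala arrays.
def dot (a: Array[Double], b: Array[Double]): Double =
    a.indices.map (i => a(i) * b(i)).sum

// dir_k = -grad(x) + beta * dir_{k-1}, with beta given by the Polak-Ribiere formula
def prDirection (gNew: Array[Double], gOld: Array[Double], dirOld: Array[Double]): Array[Double] =
    val diff = Array.tabulate (gNew.length)(i => gNew(i) - gOld(i))      // g_k - g_{k-1}
    val beta = dot (gNew, diff) / dot (gOld, gOld)                       // Polak-Ribiere beta
    Array.tabulate (gNew.length)(i => -gNew(i) + beta * dirOld(i))       // new search direction
```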
The ConjugateGradient_NoLS class implements the Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which in this implementation uses the Polak-Ribiere technique.
dir_k = - grad (x) + beta * dir_k-1
min f(x) where f: R^n -> R
This version does not use a line search algorithm (_NoLS)
Value parameters
- f
- the objective function to be minimized
Attributes
- See also
- ConjugateGradient for one that uses line search.
- Supertypes
The CoordinateDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Coordinate Descent algorithm. Given a function f and a starting point x0, the algorithm picks coordinate directions (cyclically) and takes steps in those directions. The algorithm iterates until it converges.
dir_k = kth coordinate direction
min f(x)
Value parameters
- exactLS
- whether to use exact (e.g., GoldenLS) or inexact (e.g., WolfeLS) Line Search
- f
- the vector-to-scalar objective function
Attributes
- Supertypes
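The cyclic sweep over coordinate directions can be sketched as follows (plain Scala; a crude fixed-step probe stands in for the GoldenLS/WolfeLS line search the class actually uses, and the step size and iteration limit are arbitrary):

```scala
// Cyclic coordinate descent with a fixed-step probe in place of a line search (illustrative only).
def coordinateDescent (f: Array[Double] => Double, x0: Array[Double],
                       step: Double = 0.1, maxIter: Int = 1000): Array[Double] =
    val x = x0.clone ()
    var it = 0
    var improved = true
    while improved && it < maxIter do
        improved = false
        for k <- x.indices do                                 // cycle through the coordinate directions
            val fx = f (x)
            val xp = x.clone (); xp(k) += step                // probe forward along coordinate k
            val xm = x.clone (); xm(k) -= step                // probe backward along coordinate k
            if f (xp) < fx then { x(k) += step; improved = true }
            else if f (xm) < fx then { x(k) -= step; improved = true }
        it += 1
    x
```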
The GoldenSectionLS class performs a line search on 'f(x)' to find a minimal value for 'f'. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from 'x1' (often 0) to 'xmax'. A guess for 'xmax' must be given, but can be made larger during the expansion phase that occurs before the recursive golden section search is called. It works on scalar functions (see goldenSectionLSTest). If starting with a vector function 'f(x)', simply define a new function 'g(y) = x0 + direction * y' (see goldenSectionLSTest2).
Value parameters
- f
- the scalar objective function to minimize
- τ
- the tolerance for breaking the iterations
Attributes
- Supertypes
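The recursive golden-section phase can be illustrated with a self-contained sketch (plain Scala; the interval endpoints and tolerance are illustrative, only one new function evaluation is made per iteration by caching the other, and the expansion phase described above is omitted):

```scala
// Golden section search on [a0, b0] for a unimodal scalar function (illustrative only).
def goldenSection (f: Double => Double, a0: Double, b0: Double, tol: Double = 1e-6): Double =
    val gr = (math.sqrt (5.0) - 1.0) / 2.0                    // golden ratio factor ~ 0.618
    var (a, b) = (a0, b0)
    var c = b - gr * (b - a); var fc = f (c)
    var d = a + gr * (b - a); var fd = f (d)
    while b - a > tol do
        if fc < fd then
            b = d; d = c; fd = fc
            c = b - gr * (b - a); fc = f (c)                  // one new evaluation
        else
            a = c; c = d; fc = fd
            d = a + gr * (b - a); fd = f (d)                  // one new evaluation
    (a + b) / 2.0                                             // midpoint of the final bracket
```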
The GradientDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Gradient Descent algorithm. Given a function f and a starting point x0, the algorithm computes the gradient and takes steps in the opposite direction. The algorithm iterates until it converges. The class assumes that partial derivative functions are not available unless explicitly given via the setDerivatives method.
dir_k = -gradient (x)
minimize f(x)
Value parameters
- exactLS
- whether to use exact (e.g., GoldenLS) or inexact (e.g., WolfeLS) Line Search
- f
- the vector-to-scalar objective function
Attributes
- Supertypes
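A minimal sketch of the descent loop (plain Scala; a forward-difference gradient and a fixed learning rate stand in for the class's numeric differentiation and line search):

```scala
// Forward-difference gradient approximation (illustrative only).
def numGrad (f: Array[Double] => Double, x: Array[Double], h: Double = 1e-6): Array[Double] =
    val fx = f (x)
    Array.tabulate (x.length) { i =>
        val xp = x.clone (); xp(i) += h
        (f (xp) - fx) / h
    }

// Gradient descent: repeatedly step along dir_k = -gradient(x).
def gradientDescent (f: Array[Double] => Double, x0: Array[Double],
                     eta: Double = 0.01, maxIter: Int = 1000): Array[Double] =
    var x = x0.clone ()
    for _ <- 1 to maxIter do
        val g = numGrad (f, x)
        x = Array.tabulate (x.length)(i => x(i) - eta * g(i))
    x
```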
The GradientDescent_Adam class provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer implements an ADAptive Moment estimation (Adam) Optimizer.
Value parameters
- f
- the vector-to-scalar (V2S) objective/loss function
- grad
- the vector-to-vector (V2V) gradient function, grad f
- hparam
- the hyper-parameters
Attributes
- See also
- Supertypes
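The Adam update for a single parameter can be sketched as follows (plain Scala; the default hyper-parameter values shown are the commonly used ones and are assumptions here, not necessarily ScalaTion's hparam defaults):

```scala
// One Adam update for a single weight w given its gradient g (illustrative only).
// m, v are running first and second moment estimates; t is the step count (starting at 1).
def adamStep (w: Double, g: Double, m: Double, v: Double, t: Int,
              eta: Double = 0.001, b1: Double = 0.9, b2: Double = 0.999,
              eps: Double = 1e-8): (Double, Double, Double) =
    val m2 = b1 * m + (1 - b1) * g                            // update biased first moment
    val v2 = b2 * v + (1 - b2) * g * g                        // update biased second moment
    val mHat = m2 / (1 - math.pow (b1, t))                    // bias-corrected first moment
    val vHat = v2 / (1 - math.pow (b2, t))                    // bias-corrected second moment
    (w - eta * mHat / (math.sqrt (vHat) + eps), m2, v2)       // new weight and new moments
```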
The GradientDescent_Mo class provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer implements a Gradient Descent with Momentum Optimizer.
Value parameters
- f
- the vector-to-scalar (V2S) objective/loss function
- grad
- the vector-to-vector (V2V) gradient function ∇f
- hparam
- the hyper-parameters
Attributes
- See also
- Supertypes
The GradientDescent_Mo2 class provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer implements a Gradient Descent with Momentum Optimizer.
Value parameters
- f
- the vector-to-scalar objective function
- gr
- the vector-to-gradient function
Attributes
- See also
- Supertypes
The GradientDescent_NoLS class provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer implements a Gradient Descent with No Line Search Optimizer.
Value parameters
- f
- the vector-to-scalar (V2S) objective/loss function
- grad
- the vector-to-vector (V2V) gradient function, grad f
- hparam
- the hyper-parameters
Attributes
- See also
- Supertypes
The GridSearch companion object specifies default minimums and maximums for the grid's coordinate axes.
Attributes
- Companion
- class
- Supertypes
- class Object, trait Matchable, class Any
- Self type
- GridSearch.type
The GridSearch class performs grid search over an n-dimensional space to find a minimal objective value for f(x).
Value parameters
- f
- the objective function to be minimized
- g
- the constraint function to be satisfied, if any
- n
- the number of dimensions in the search space
- nSteps
- the number of steps each axis is divided into to form the grid
Attributes
- Companion
- object
- Supertypes
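The idea is easiest to see in two dimensions (plain Scala sketch; the bounds, step count and the 2-D restriction are illustrative, whereas the class itself handles n dimensions and optional constraints):

```scala
// Exhaustive grid search over the 2-D box [lo, hi] x [lo, hi] (illustrative only).
def gridSearch2D (f: (Double, Double) => Double, lo: Double, hi: Double,
                  nSteps: Int = 10): (Double, (Double, Double)) =
    val step = (hi - lo) / nSteps
    var best = (Double.MaxValue, (0.0, 0.0))
    for i <- 0 to nSteps; j <- 0 to nSteps do
        val (x0, x1) = (lo + i * step, lo + j * step)
        val fx = f (x0, x1)
        if fx < best._1 then best = (fx, (x0, x1))            // keep the point with the smallest f
    best                                                      // (minimal f value, minimizing point)
```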
The GridSearchLS class performs a line search on f(x) to find a minimal value for f. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from x1 (often 0) to xmax. A guess for xmax must be given. It works on scalar functions (see gridSearchLSTest). If starting with a vector function f(x), simply define a new function g(y) = x0 + direction * y (see gridSearchLSTest2).
Value parameters
- f
- the scalar objective function to minimize
Attributes
- Supertypes
The IntegerTabuSearch class performs tabu search to find minima of functions defined on integer vector domains Z^n. Tabu search will not re-visit points already deemed sub-optimal.
minimize f(x) subject to g(x) <= 0, x in Z^n
Value parameters
- f
- the objective function to be minimized (f maps an integer vector to a double)
- g
- the constraint function to be satisfied, if any
- maxStep
- the maximum/starting step size (make larger for larger domains)
Attributes
- Supertypes
- class Object, trait Matchable, class Any
The LassoAdmm object performs LASSO regression using the Alternating Direction Method of Multipliers (ADMM). Minimize the following objective function to find an optimal solution for x.
argmin_x (1/2)||Ax − b||_2^2 + λ||x||_1
A = data matrix
b = response vector
λ = weighting on the l_1 penalty
x = solution (coefficient vector)
Attributes
- See also
- euler.stat.yale.edu/~tba3/stat612/lectures/lec23/lecture23.pdf
- Supertypes
- class Object, trait Matchable, class Any
- Self type
- LassoAdmm.type
The LineSearch trait specifies the basic methods that Line Search (LS) algorithms extending this trait must implement. Line search is for one-dimensional optimization problems. The algorithms perform a line search to find an 'x'-value that minimizes a function 'f' that is passed into an implementing class.
x* = argmin f(x)
Attributes
- Supertypes
- class Object, trait Matchable, class Any
- Known subtypes
The Minimize trait sets the pattern for optimization algorithms for solving Non-Linear Programming (NLP) problems of the form:
minimize f(x)
where f is the objective/loss function to be minimized
Attributes
- Companion
- object
- Supertypes
- class Object, trait Matchable, class Any
- Known subtypes
- class BFGS_NoLS, class LBFGS, class LBFGS_NoLS, class ConjugateGradient_NoLS, class GradientDescent_Adam, class GradientDescent_Mo, class GradientDescent_Mo2, class GradientDescent_NoLS, class NelderMeadSimplex, class NewtonRaphson, class Newton_NoLS, ...
The Minimizer trait sets the pattern for optimization algorithms for solving Non-Linear Programming (NLP) problems of the form:
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
where f is the objective function to be minimized and g is the constraint function to be satisfied, if any
Classes mixing in this trait must implement a function fg that rolls the constraints into the objective function as penalties for constraint violation, a one-dimensional Line Search (LS) algorithm lineSearch, and an iterative method solve that searches for improved solutions, i.e., x-vectors with lower objective function values f(x).
Attributes
- Companion
- object
- Supertypes
- class Object, trait Matchable, class Any
- Known subtypes
- class BFGS, class LBFGS_B, class ConjugateGradient, class CoordinateDescent, class GradientDescent, class GridSearch, class NelderMeadSimplex2, class SPSA, ...
The MonitorEpochs trait is used to monitor the loss function over the epochs.
Attributes
- Supertypes
- class Object, trait Matchable, class Any
- Known subtypes
- class NelderMeadSimplex2, class SPSA
The NelderMeadSimplex solves Non-Linear Programming (NLP) problems using the Nelder-Mead Simplex algorithm. Given a function f and its dimension n, the algorithm moves a simplex defined by n + 1 points in order to find an optimal solution. The algorithm is derivative-free.
minimize f(x)
The algorithm requires between 1 and n+2 function evaluations per iteration.
Value parameters
- f
- the vector-to-scalar objective function
- n
- the dimension of the search space
Attributes
- Supertypes
The NelderMeadSimplex2 solves Non-Linear Programming (NLP) problems using the Nelder-Mead Simplex algorithm. Given a function f and its dimension n, the algorithm moves a simplex defined by n + 1 points in order to find an optimal solution. The algorithm is derivative-free.
minimize f(x)
Value parameters
- f
- the vector-to-scalar objective function
- n
- the dimension of the search space
Attributes
- Supertypes
- trait MonitorEpochs, trait BoundsConstraint, trait Minimizer, class Object, trait Matchable, class Any, ...
The NewtonRaphson class is used to find roots (zeros) for a one-dimensional (scalar) function 'f'. The solve method finds zeros for function 'f', while the optimize method finds local optima using the same logic, but applied to first and second derivatives.
Value parameters
- f
- the scalar function to find roots/optima of
Attributes
- Supertypes
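The root-finding iteration x_{k+1} = x_k - f(x_k) / f'(x_k) can be sketched directly (plain Scala; the derivative is approximated numerically, as in newtonRaphsonTest2, and the names are illustrative). The optimize method applies the same iteration to the first and second derivatives.

```scala
// Newton-Raphson root finding with a numerically approximated derivative (illustrative only).
def newtonRaphson (f: Double => Double, x0: Double,
                   tol: Double = 1e-9, maxIter: Int = 100): Double =
    val h = 1e-6
    var x = x0
    var it = 0
    while math.abs (f (x)) > tol && it < maxIter do
        val df = (f (x + h) - f (x - h)) / (2 * h)            // central-difference derivative
        x -= f (x) / df                                       // x_{k+1} = x_k - f(x_k) / f'(x_k)
        it += 1
    x
```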
The Newton_NoLS class is used to find optima for functions of vectors. The solve method finds local optima using the Newton method that deflects the gradient using the inverse Hessian.
min f(x) where f: R^n -> R
Value parameters
- f
- the vector to scalar function to find optima of
- useLS
- whether to use Line Search (LS)
Attributes
- See also
- Newton for one that uses a different line search.
- Supertypes
The PathMonitor trait specifies the logic needed to monitor a single path taken in a multidimensional graph.
Classes mixing in this trait should call the clearPath method before beginning to monitor a path and then should call the add2Path method whenever a new data point is produced in the path being monitored. After that, a call to the getPath method will return a deep copy of the path that was monitored throughout the calculations.
Attributes
- Supertypes
- class Object, trait Matchable, class Any
- Known subtypes
The SPSA class implements the Simultaneous Perturbation Stochastic Approximation algorithm for rough approximation of gradients.
Value parameters
- checkCon
- whether to check bounds constraints
- debug_
- whether to call in debug mode (does tracing)
- f
- the vector-to-scalar function whose approximate gradient is sought
- lower
- the lower bounds vector
- max_iter
- the maximum number of iterations
- upper
- the upper bounds vector
Attributes
- See also
- https://www.jhuapl.edu/spsa/PDF-SPSA/Matlab-SPSA_Alg.pdf
minimize f(x)
- Supertypes
- trait MonitorEpochs, trait BoundsConstraint, trait Minimizer, class Object, trait Matchable, class Any, ...
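The simultaneous-perturbation gradient estimate at the heart of the algorithm can be sketched as follows (plain Scala; the gain values a_k and c_k are passed in rather than produced by the usual SPSA gain sequences, and the names are illustrative, not the class's API):

```scala
import scala.util.Random

// One SPSA iteration: estimate the gradient from just two evaluations of f using a
// random +/-1 simultaneous perturbation, then take a gradient-descent-style step.
def spsaStep (f: Array[Double] => Double, x: Array[Double],
              a_k: Double, c_k: Double, rng: Random): Array[Double] =
    val delta  = Array.fill (x.length)(if rng.nextBoolean () then 1.0 else -1.0)
    val xPlus  = Array.tabulate (x.length)(i => x(i) + c_k * delta(i))
    val xMinus = Array.tabulate (x.length)(i => x(i) - c_k * delta(i))
    val diff   = f (xPlus) - f (xMinus)                       // only two evaluations of f
    Array.tabulate (x.length)(i => x(i) - a_k * diff / (2 * c_k * delta(i)))
```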
The StoppingRule trait provides stopping rules for early termination in iterative optimization algorithms.
Value parameters
- upLimit
- the number of upward (loss-increasing) steps allowed
Attributes
- Supertypes
- class Object, trait Matchable, class Any
- Known subtypes
- class GradientDescent_Adam, class GradientDescent_Mo, class GradientDescent_Mo2, class GradientDescent_NoLS
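A minimal sketch of such a rule (plain Scala, illustrative names only; not the trait's actual API): stop once the loss has risen for more than upLimit consecutive steps, remembering the best loss seen so far.

```scala
// Count consecutive upward (loss-increasing) steps and signal a stop once the count
// exceeds upLimit (illustrative only).
class EarlyStop (upLimit: Int = 4):
    private var bestLoss = Double.MaxValue
    private var upSteps  = 0

    def update (loss: Double): Boolean =                      // true => stop the iterations
        if loss < bestLoss then { bestLoss = loss; upSteps = 0 }
        else upSteps += 1
        upSteps > upLimit
```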
The TabuSearch class performs tabu search to find minima of functions defined on double (real) vector domains R^n. Tabu search will not re-visit points already deemed sub-optimal.
minimize f(x) subject to g(x) <= 0, x in R^n
Value parameters
- f
- the objective function to be minimized (f maps a double vector to a double)
- g
- the constraint function to be satisfied, if any
- maxStep
- the maximum/starting step size (make larger for larger domains)
Attributes
- Supertypes
- class Object, trait Matchable, class Any
The WolfeConditions class specifies conditions for inexact line search algorithms to find an acceptable/near-minimal point along a given search direction p that exhibits (1) SDC: sufficient decrease (f(x) enough less than f(0)) and (2) CC: the slope at x is less steep than the slope at 0. That is, the line search looks for a value for x satisfying the two Wolfe conditions.
f(x) <= f(0) + c1 * f'(0) * x     Wolfe condition 1 (Armijo condition)
f'(x) >= c2 * f'(0)               Wolfe condition 2 (Weak version, more robust)
|f'(x)| <= c2 * |f'(0)|           Wolfe condition 2 (Strong version)
Note: the c1 and c2 defaults below are intended for Quasi-Newton methods such as BFGS or L-BFGS
Value parameters
- c1
- constant for sufficient decrease (Wolfe condition 1: .0001 to .001)
- c2
- constant for curvature/slope constraint (Wolfe condition 2: .9 to .8)
- f
- the objective/loss function to minimize (vector-to-scalar)
- g
- the gradient of the objective/loss function (vector-to-vector)
Attributes
- Supertypes
- class Object, trait Matchable, class Any
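Both conditions are easy to state in code. The sketch below checks the sufficient-decrease and (weak) curvature conditions for a step size alpha along direction p (plain Scala with a simple dot-product helper; not the class's actual API):

```scala
// Check the (weak) Wolfe conditions for step size alpha along direction p (illustrative only).
def dotW (a: Array[Double], b: Array[Double]): Double =
    a.indices.map (i => a(i) * b(i)).sum

def wolfeOK (f: Array[Double] => Double, grad: Array[Double] => Array[Double],
             x: Array[Double], p: Array[Double], alpha: Double,
             c1: Double = 1e-4, c2: Double = 0.9): Boolean =
    val xNew   = Array.tabulate (x.length)(i => x(i) + alpha * p(i))
    val slope0 = dotW (grad (x), p)                           // directional derivative at 0
    val sdc = f (xNew) <= f (x) + c1 * alpha * slope0         // condition 1: sufficient decrease
    val cc  = dotW (grad (xNew), p) >= c2 * slope0            // condition 2: curvature (weak)
    sdc && cc
```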
The WolfeLS class performs an inexact line search on f to find a point x that exhibits (1) SDC: sufficient decrease (f(x) enough less than f(0)) and (2) CC: the slope at x is less steep than the slope at 0. That is, the line search looks for a value for x satisfying the two Wolfe conditions.
f(x) <= f(0) + c1 * f'(0) * x     Wolfe condition 1 (Armijo condition)
|f'(x)| <= |c2 * f'(0)|           Wolfe condition 2 (Strong version)
f'(x) >= c2 * f'(0)               Wolfe condition 2 (Weak version, more robust)
It works on scalar functions (@see wolfeLSTest). If starting with a vector function f(x), simply define a new function fl(a) = x0 + direction * a (@see wolfeLSTest2).
Value parameters
- c1
- constant for sufficient decrease (Wolfe condition 1)
- c2
- constant for curvature/slope constraint (Wolfe condition 2)
- f
- the scalar objective function to minimize
Attributes
- Supertypes
The WolfeLS2 class performs an inexact line search on f to find a point x that exhibits (1) SDC: sufficient decrease (f(x) enough less than f(0)) and (2) CC: the slope at x is less steep than the slope at 0. That is, the line search looks for a value for x satisfying the two Wolfe conditions.
f(x) <= f(0) + c1 * f'(0) * x     Wolfe condition 1 (Armijo condition)
|f'(x)| <= |c2 * f'(0)|           Wolfe condition 2 (Strong version)
f'(x) >= c2 * f'(0)               Wolfe condition 2 (Weak version, more robust)
It uses bisection (or interpolative search) to find an approximate local minimal point. Currently, the strong version is not supported. Note: the c1 and c2 defaults below are intended for Quasi-Newton methods such as BFGS or L-BFGS
Value parameters
- c1
- constant for sufficient decrease (Wolfe condition 1: .0001 to .001)
- c2
- constant for curvature/slope constraint (Wolfe condition 2: .9 to .8)
- f
- the objective/loss function to minimize
- g
- the gradient of the objective/loss function
Attributes
- Supertypes
- class Object, trait Matchable, class Any
The WolfeLS3 class performs an inexact line search on f to find a point x that exhibits (1) SDC: sufficient decrease (f(x) enough less than f(0)) and (2) CC: the slope at x is less steep than the slope at 0. That is, the line search looks for a value for x satisfying the two Wolfe conditions.
f(x) <= f(0) + c1 * f'(0) * x     Wolfe condition 1 (Armijo condition)
|f'(x)| <= |c2 * f'(0)|           Wolfe condition 2 (Strong version)
f'(x) >= c2 * f'(0)               Wolfe condition 2 (Weak version, more robust)
It uses bisection (or interpolative search) to find an approximate local minimal point. Currently, the strong version is not supported. Note: the c1 and c2 defaults below are intended for Quasi-Newton methods such as BFGS or L-BFGS
Value parameters
- c1
- constant for sufficient decrease (Wolfe condition 1: .0001 to .001)
- c2
- constant for curvature/slope constraint (Wolfe condition 2: .9 to .8)
- c3
- constant for noise control condition
- eg
- estimate of gradient noise
- f
- the objective/loss function to minimize
- g
- the gradient of the objective/loss function
Attributes
- Supertypes
- class Object, trait Matchable, class Any
Value members
Concrete methods
Return the better solution, the one with smaller functional value.
Value parameters
- best
- the best solution found so far
- cand
- the candidate solution (functional value f and vector x)
Attributes
Check whether the candidate solution has blown up.
Value parameters
- cand
- the candidate solution (functional value f and vector x)
Attributes
The conjugateGradientTest main function is used to test the ConjugateGradient class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.conjugateGradientTest
Attributes
The conjugateGradientTest2 main function is used to test the ConjugateGradient class. f(x) = x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.conjugateGradientTest2
Attributes
The conjugateGradientTest3 main function is used to test the ConjugateGradient class. f(x) = 1/x_0 + x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.conjugateGradientTest3
Attributes
The conjugateGradient_NoLSTest main function is used to test the ConjugateGradient_NoLS class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.conjugateGradient_NoLSTest
Attributes
The conjugateGradient_NoLSTest2 main function is used to test the ConjugateGradient_NoLS class. f(x) = x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.conjugateGradient_NoLSTest2
Attributes
The conjugateGradient_NoLSTest3 main function is used to test the ConjugateGradient_NoLS class. f(x) = 1/x_0 + x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.conjugateGradient_NoLSTest3
Attributes
The coordinateDescentTest main function is used to test the CoordinateDescent class.
runMain scalation.optimization.coordinateDescentTest
Attributes
Return the fast soft thresholding vector function.
Value parameters
- th
- the threshold (theta)
- v
- the vector to threshold
Attributes
The goldenSectionLSTest main function is used to test the GoldenSectionLS class on scalar functions.
runMain scalation.optimization.goldenSectionLSTest
Attributes
The goldenSectionLSTest2 main function is used to test the GoldenSectionLS class on vector functions.
runMain scalation.optimization.goldenSectionLSTest2
Attributes
The gradientDescentTest main function is used to test the GradientDescent class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.gradientDescentTest
Attributes
The gradientDescentTest2 main function is used to test the GradientDescent class. f(x) = x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.gradientDescentTest2
Attributes
The gradientDescentTest3 main function is used to test the GradientDescent class. f(x) = 1/x(0) + x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.gradientDescentTest3
Attributes
The gradientDescentTest4 main function is used to test the GradientDescent class. f(x) = x_0/4 + 5x_0^2 + x_0^4 - 9x_0^2 x_1 + 3x_1^2 + 2x_1^4
Attributes
- See also
- math.fullerton.edu/mathews/n2003/gradientsearch/GradientSearchMod/Links/GradientSearchMod_lnk_5.html
runMain scalation.optimization.gradientDescentTest4
The gradientDescent_AdamTest main function is used to test the GradientDescent_Adam class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.gradientDescent_AdamTest
Attributes
The gradientDescent_Mo2Test main function is used to test the GradientDescent_Mo2 class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.gradientDescent_Mo2Test
Attributes
The gradientDescent_MoTest main function is used to test the GradientDescent_Mo class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.gradientDescent_MoTest
Attributes
The gradientDescent_NoLSTest main function is used to test the GradientDescent_NoLS class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.gradientDescent_NoLSTest
Attributes
The gridSearchLSTest main function is used to test the GridSearchLS class on scalar functions.
runMain scalation.optimization.gridSearchLSTest
Attributes
The gridSearchLSTest2 main function is used to test the GridSearchLS class on vector functions.
runMain scalation.optimization.gridSearchLSTest2
Attributes
The gridSearchTest main function is used to test the GridSearch class on f(x): f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.gridSearchTest
Attributes
The gridSearchTest2 main function is used to test the GridSearch class on f(x): f(x) = x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.gridSearchTest2
Attributes
The hungarian method is an O(n^3) [ O(J^2W) ] implementation of the Hungarian algorithm (or Kuhn-Munkres algorithm) for assigning jobs to workers. Given J jobs and W workers, find a minimal cost assignment of JOBS to WORKERS such that each worker is assigned to at most one job and each job has one worker assigned. It solves the minimum-weighted bipartite graph matching problem.
minimize sum_j { cost(j, w) }
Value parameters
- cost
- the cost matrix: cost(j, w) = cost of assigning job j to worker w
Attributes
The hungarianTest main function tests the hungarian method.
Attributes
- See also
- http://people.whitman.edu/~hundledr/courses/M339S20/M339/Ch07_5.pdf
Minimal total cost = 51
runMain scalation.optimization.hungarianTest
The hungarianTest2 main function tests the hungarian method.
Attributes
- See also
- https://d13mk4zmvuctmz.cloudfront.net/assets/main/study-material/notes/electrical-engineering_engineering_operations-research_assignment-problems_notes.pdf
Solution:
job 4 assigned to worker 0 with cost (j = 4, w = 0) = 1.0
job 1 assigned to worker 1 with cost (j = 1, w = 1) = 5.0
job 0 assigned to worker 2 with cost (j = 0, w = 2) = 3.0
job 2 assigned to worker 3 with cost (j = 2, w = 3) = 2.0
job 3 assigned to worker 4 with cost (j = 3, w = 4) = 9.0
job -1 assigned to worker 5 with cost (j = -1, w = 5) = NA (worker 5 is unassigned)
Minimal total cost = 20
runMain scalation.optimization.hungarianTest2
The integerTabuSearchTest main method is used to test the IntegerTabuSearch class (unconstrained).
runMain scalation.optimization.integerTabuSearchTest
Attributes
The integerTabuSearchTest2 main method is used to test the IntegerTabuSearch class (constrained).
runMain scalation.optimization.integerTabuSearchTest2
Attributes
The lassoAdmmTest main function tests the LassoAdmm object using the following regression equation: y = b dot x = b_0 + b_1x_1 + b_2x_2.
Attributes
- See also
- statmaster.sdu.dk/courses/st111/module03/index.html
runMain scalation.optimization.lassoAdmmTest
The lassoAdmmTest2 main function tests the LassoAdmm object using the following regression equation: y = b dot x = b_0 + b_1x_1 + b_2x_2.
Attributes
- See also
- www.cs.jhu.edu/~svitlana/papers/non_refereed/optimization_1.pdf
runMain scalation.optimization.lassoAdmmTest2
The lassoAdmmTest3 main function tests the LassoAdmm object's use of soft-thresholding.
runMain scalation.optimization.lassoAdmmTest3
Attributes
The nLPTest main function is used to test several Non-Linear Programming (NLP) algorithms on unconstrained problems. Algorithms:
- Gradient Descent with Golden Section Line Search
- Polak-Ribiere Conjugate Gradient with Golden Section Line Search
- Gradient Descent with Wolfe Line Search (option in BFGS)
- Broyden–Fletcher–Goldfarb–Shanno (BFGS) with Wolfe Line Search
- Limited Memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) with Wolfe Line Search
- Limited Memory Broyden–Fletcher–Goldfarb–Shanno Bounded (LBFGS_B) with Wolfe Line Search
- Nelder-Mead Simplex
- Coordinate Descent
- Grid Search
runMain scalation.optimization.nLPTest
Attributes
The nLPTest2 main function is used to test several Non-Linear Programming (NLP) algorithms on constrained problems. FIX
runMain scalation.optimization.nLPTest2
Attributes
The nelderMeadSimplex2Test main function is used to test the NelderMeadSimplex2 class.
runMain scalation.optimization.nelderMeadSimplex2Test
Attributes
The nelderMeadSimplexTest main function is used to test the NelderMeadSimplex class.
runMain scalation.optimization.nelderMeadSimplexTest
Attributes
The newtonRaphsonTest main function is used to test the NewtonRaphson class. This test passes in a function for the derivative to find a root.
runMain scalation.optimization.newtonRaphsonTest
Attributes
The newtonRaphsonTest2 main function is used to test the NewtonRaphson class. This test numerically approximates the derivative to find a root.
runMain scalation.optimization.newtonRaphsonTest2
Attributes
The newtonRaphsonTest3 main function is used to test the NewtonRaphson class. This test numerically approximates the derivatives to find minima.
runMain scalation.optimization.newtonRaphsonTest3
Attributes
The newton_NoLSTest main function is used to test the Newton_NoLS class. This test numerically approximates the first derivative (gradient) and the second derivative (Hessian) to find minima.
runMain scalation.optimization.newton_NoLSTest
Attributes
The newton_NoLSTest2 main function is used to test the Newton_NoLS class. This test functionally evaluates the first derivative (gradient) and uses the Jacobian to numerically compute the second derivative (Hessian) from the gradient to find minima.
runMain scalation.optimization.newton_NoLSTest2
Attributes
The newton_NoLSTest3 main function is used to test the Newton_NoLS class. This test uses the Rosenbrock function.
runMain scalation.optimization.newton_NoLSTest3
Attributes
The sPSATest main function tests the SPSA class.
runMain scalation.optimization.sPSATest
Attributes
Show the assignments of jobs to workers and the accumulating costs.
Value parameters
- cost
- the cost matrix: cost(j, w) = cost of assigning job j to worker w
- job_acost
- the (job, acost) tuple
Attributes
Return the soft thresholding scalar function.
Value parameters
- th
- the threshold (theta)
- x
- the scalar to threshold
Attributes
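For reference, the standard scalar soft-thresholding operator used in LASSO/ADMM can be written as follows (plain Scala sketch, not necessarily the library's exact implementation):

```scala
// Scalar soft thresholding: shrink x toward zero by th; values inside [-th, th] map to zero.
def softThreshold (x: Double, th: Double): Double =
    if x > th then x - th
    else if x < -th then x + th
    else 0.0
```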
The tabuSearchTest main method is used to test the TabuSearch class (unconstrained).
runMain scalation.optimization.tabuSearchTest
Attributes
The tabuSearchTest2 main method is used to test the TabuSearch class (constrained).
runMain scalation.optimization.tabuSearchTest2
Attributes
The wolfeConditionsTest main function is used to test the WolfeConditions class.
runMain scalation.optimization.wolfeConditionsTest
Attributes
The wolfeLS2Test main function is used to test the WolfeLS2 class on scalar functions.
runMain scalation.optimization.wolfeLS2Test
Attributes
The wolfeLS2Test2 main function is used to test the WolfeLS2 class on scalar functions.
runMain scalation.optimization.wolfeLS2Test2
Attributes
The wolfeLS2Test3 main function is used to test the WolfeLS2 class on vector functions. This test uses the Rosenbrock function.
Attributes
- See also
- https://mikl.dk/post/2019-wolfe-conditions/
runMain scalation.optimization.wolfeLS2Test3
The wolfeLS2Test4 main function is used to test the WolfeLS2 class on scalar functions.
runMain scalation.optimization.wolfeLS2Test4
Attributes
The wolfeLS3Test main function is used to test the WolfeLS3 class on scalar functions.
runMain scalation.optimization.wolfeLS3Test
Attributes
The wolfeLS3Test2 main function is used to test the WolfeLS3 class on scalar functions.
runMain scalation.optimization.wolfeLS3Test2
Attributes
The wolfeLS3Test3 main function is used to test the WolfeLS3 class on vector functions. This test uses the Rosenbrock function.
Attributes
- See also
- https://mikl.dk/post/2019-wolfe-conditions/
runMain scalation.optimization.wolfeLS3Test3
The wolfeLS3Test4 main function is used to test the WolfeLS3 class on scalar functions.
runMain scalation.optimization.wolfeLS3Test4
Attributes
The wolfeLSTest main function is used to test the WolfeLS class on scalar functions.
runMain scalation.optimization.wolfeLSTest
Attributes
The wolfeLSTest2 main function is used to test the WolfeLS class on vector functions.
runMain scalation.optimization.wolfeLSTest2
Attributes
The wolfeLSTest3 main function is used to test the WolfeLS2 class on vector functions. This test uses the Rosenbrock function.
Attributes
- See also
- https://mikl.dk/post/2019-wolfe-conditions/
runMain scalation.optimization.wolfeLSTest3