scalation.optimization.quasi_newton
Members list
Type members
Classlikes
The BFGS class implements the Broyden–Fletcher–Goldfarb–Shanno (BFGS) Quasi-Newton Algorithm for solving Non-Linear Programming (NLP) problems. BFGS determines a search direction by deflecting the steepest descent direction vector (opposite the gradient) by multiplying it by a matrix that approximates the inverse Hessian. Note: this implementation may be set up to work with the matrix b (approximate Hessian) or directly with the aHi matrix (the inverse of b).
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
Value parameters
- exactLS: whether to use exact (e.g., GoldenLS) or inexact (e.g., WolfeLS) Line Search
- f: the objective function to be minimized
- g: the constraint function to be satisfied, if any
- ineq: whether the constraint is treated as inequality (default) or equality
Attributes
- Companion
- object
- Supertypes
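To make the "deflected steepest descent" idea concrete, here is a minimal standalone sketch of one BFGS iteration in plain Scala (not the ScalaTion API): the search direction is the negative gradient multiplied by the current inverse-Hessian approximation aHi, and aHi is then refreshed with the standard BFGS update. The objective is the documented test function f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1; the fixed step size and iteration count are illustrative assumptions in place of a real line search.

```scala
// Standalone sketch of the BFGS inverse-Hessian update (not the ScalaTion implementation).
object BFGSUpdateSketch:

    type Vec = Array [Double]
    type Mat = Array [Array [Double]]

    def f (x: Vec): Double = (x(0) - 3) * (x(0) - 3) + (x(1) - 4) * (x(1) - 4) + 1
    def grad (x: Vec): Vec = Array (2 * (x(0) - 3), 2 * (x(1) - 4))

    def dot (a: Vec, b: Vec): Double = a.zip (b).map (_ * _).sum
    def matVec (m: Mat, v: Vec): Vec = m.map (row => dot (row, v))

    def main (args: Array [String]): Unit =
        var x   = Array (0.0, 0.0)                                 // starting point
        var aHi = Array (Array (1.0, 0.0), Array (0.0, 1.0))       // inverse-Hessian approximation, initially I
        for _ <- 1 to 20 do
            val g    = grad (x)
            val dir  = matVec (aHi, g).map (-_)                    // deflected steepest-descent direction
            val step = 0.5                                         // a line search would normally choose this
            val xNew = x.zip (dir).map ((xi, di) => xi + step * di)
            val s    = xNew.zip (x).map (_ - _)                    // change in position
            val y    = grad (xNew).zip (g).map (_ - _)             // change in gradient
            val ys   = dot (y, s)
            if ys > 1e-12 then                                     // skip update when curvature info is unusable
                val n   = x.length
                val rho = 1.0 / ys
                // BFGS update: aHi <- (I - rho s y^T) aHi (I - rho y s^T) + rho s s^T
                val a = Array.tabulate (n, n) ((i, j) => (if i == j then 1.0 else 0.0) - rho * s(i) * y(j))
                val b = Array.tabulate (n, n) ((i, j) => (if i == j then 1.0 else 0.0) - rho * y(i) * s(j))
                val t = Array.tabulate (n, n) ((i, j) => (0 until n).map (l => a(i)(l) * aHi(l)(j)).sum)
                aHi   = Array.tabulate (n, n) ((i, j) =>
                            (0 until n).map (l => t(i)(l) * b(l)(j)).sum + rho * s(i) * s(j))
            x = xNew
        val xs = x.mkString ("(", ", ", ")")
        println (s"x = $xs   f(x) = ${f(x)}")                      // expect x near (3, 4), f(x) near 1

end BFGSUpdateSketch
```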
The BFGS_NoLS class is used to find optima for functions of vectors. The solve method finds local optima using a Quasi-Newton method that uses the BFGS update to approximate the inverse Hessian.
min f(x) where f: R^n -> R
Value parameters
- f: the vector to scalar function to find optima of
- useLS: whether to use Line Search (LS)
Attributes
- See also: BFGS for one that uses a different line search.
- Supertypes
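A hypothetical usage sketch for BFGS_NoLS follows. The constructor and solve signatures are assumptions inferred from the parameter list above (the real API may differ, e.g., solve may take additional arguments or return a (value, vector) pair); the objective is the documented test function.

```scala
// Hypothetical usage sketch -- constructor/solve signatures and import paths are assumptions.
import scalation.mathstat.VectorD
import scalation.optimization.quasi_newton.BFGS_NoLS

@main def bfgsNoLSSketch (): Unit =
    def f (x: VectorD): Double = (x(0) - 3) * (x(0) - 3) + (x(1) - 4) * (x(1) - 4) + 1
    val optimizer = new BFGS_NoLS (f)                  // assumed: useLS left at its default
    val x0  = VectorD (0.0, 0.0)                       // assumed starting point
    val opt = optimizer.solve (x0)                     // assumed: solve takes the initial guess
    println (s"optimum found near $opt")               // expect a point near (3, 4)
```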
The DM_LBFGS object provides an implementation of the direction-momentum Limited-memory Broyden–Fletcher–Goldfarb–Shanno for unconstrained optimization (dmL-BFGS) algorithm.
Attributes
- Supertypes
- Self type: DM_LBFGS.type
The EvaluationLogicNative trait specifies the requirements for the logic to be used for variable evaluation against the objective function in the lbfgsMain method of the LBFGS object. The methods provided in this trait are called directly by the code used by the BFGS class.
Classes mixing in this trait must implement the evaluate method, which is used to evaluate the gradients and objective function for a given state of the variables.
Attributes
- Supertypes: class Object, trait Matchable, class Any
- Known subtypes
The FunctionEvaluation case class stores the definition of a function evaluation in a format that adheres to the evaluation logic format used by the implementation of the Limited-memory Broyden–Fletcher–Goldfarb–Shanno for unconstrained optimization (L-BFGS) algorithm.
Value parameters
- gradFunction: the gradient vector-valued function (vector -> vector)
- objFunction: the multi-variate objective function (vector -> scalar)
Attributes
- Companion
- object
- Supertypes: trait Serializable, trait Product, trait Equals, trait EvaluationLogic, class Object, trait Matchable, class Any
The FunctionEvaluation companion object provides a factory method.
Attributes
- Companion
- class
- Supertypes: trait Singleton, trait Product, trait Mirror, trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
- Self type: FunctionEvaluation.type
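A hypothetical sketch of packaging an objective function and its gradient as a FunctionEvaluation follows. It mirrors the documented objFunction/gradFunction parameters; the import paths and the exact parameter order of the constructor (or factory method) are assumptions.

```scala
// Hypothetical sketch -- parameter order and import paths are assumptions.
import scalation.mathstat.VectorD
import scalation.optimization.quasi_newton.FunctionEvaluation

def objF (x: VectorD): Double   = (x(0) - 3) * (x(0) - 3) + (x(1) - 4) * (x(1) - 4) + 1
def gradF (x: VectorD): VectorD = VectorD (2 * (x(0) - 3), 2 * (x(1) - 4))

val evalLogic = FunctionEvaluation (objF, gradF)       // assumed order: (objFunction, gradFunction)
```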
The FunctionOptimization case class stores the definition of a function optimization in a format that adheres to the optimization logic format used by the implementation of the Limited-memory Broyden–Fletcher–Goldfarb–Shanno for unconstrained optimization (L-BFGS) algorithm.
Attributes
- Companion
- object
- Supertypes: trait Serializable, trait Product, trait Equals, trait OptimizationLogic, trait EvaluationLogic, class Object, trait Matchable, class Any
The FunctionOptimization companion object provides two factory methods.
Attributes
- Companion
- class
- Supertypes: trait Singleton, trait Product, trait Mirror, trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
- Self type: FunctionOptimization.type
The LBFGS class implements the Limited-Memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) Quasi-Newton Algorithm for solving Non-Linear Programming (NLP) problems.
minimize f(x)
Value parameters
- f: the multi-variate objective function to be minimized
- gr: its gradient vector-valued function
Attributes
The LBFGS object implements the Limited-memory Broyden–Fletcher–Goldfarb–Shanno for unconstrained optimization (L-BFGS) algorithm. This Scala implementation was based on the C implementation of the same algorithm found in the following link.
Attributes
The LBFGSBacktrackingArmijo object implements the backtracking Armijo line search algorithm for use in the implementation of L-BFGS.
Attributes
- Supertypes
- Self type
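To show what a backtracking Armijo search does, here is a standalone sketch in plain Scala (not the ScalaTion implementation): starting from an initial step, the step is repeatedly shrunk until the sufficient-decrease (Armijo) condition f(x + a*d) <= f(x) + ftol * a * (g . d) holds. The ftol value, shrink factor, and trial limit are illustrative.

```scala
// Standalone sketch of a backtracking Armijo line search (illustrative values).
def armijoBacktrack (f: Array [Double] => Double, g0: Array [Double],
                     x: Array [Double], d: Array [Double],
                     ftol: Double = 1e-4, shrink: Double = 0.5, maxTrials: Int = 40): Double =
    val f0 = f(x)
    val gd = g0.zip (d).map (_ * _).sum                            // directional derivative g . d
    def xAt (a: Double) = x.zip (d).map ((xi, di) => xi + a * di)  // trial point along the line
    var step   = 1.0
    var trials = 0
    while trials < maxTrials && f (xAt (step)) > f0 + ftol * step * gd do
        step *= shrink                                             // shrink the step and try again
        trials += 1
    step
```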
The LBFGSBacktrackingOrthantWise object implements the backtracking Orthant-Wise line search algorithm for use in the native implementation of L-BFGS.
Attributes
- Supertypes
- Self type
The LBFGSBacktrackingStrongWolfe object implements the backtracking Strong Wolfe line search algorithm for use in the native implementation of L-BFGS.
Attributes
- Supertypes
- Self type
The LBFGSBacktrackingWolfe object implements the backtracking Wolfe line search algorithm for use in the native implementation of L-BFGS.
Attributes
- Supertypes
- Self type
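The two Wolfe-based backtracking searches above differ only in the curvature condition they enforce. A standalone sketch of the acceptance test (not the ScalaTion code) is shown below; ftol is the sufficient-decrease coefficient and wolfe is the curvature coefficient described in LBFGSLineSearchPrms.

```scala
// Standalone sketch of the (strong) Wolfe acceptance test for a trial step a along direction d.
def wolfeOK (f0: Double, g0d: Double,          // f and directional derivative g . d at the start
             fa: Double, gad: Double,          // f and directional derivative at x + a * d
             a: Double, ftol: Double = 1e-4, wolfe: Double = 0.9,
             strong: Boolean = true): Boolean =
    val sufficientDecrease = fa <= f0 + ftol * a * g0d
    val curvature =
        if strong then math.abs (gad) <= wolfe * math.abs (g0d)    // strong Wolfe condition
        else gad >= wolfe * g0d                                    // (weak) Wolfe condition
    sufficientDecrease && curvature
```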
The LBFGSCallbackData case class is used to group together the EvaluationLogic specified for an L-BFGS optimization done by the LBFGS object with values that are the parameters for the methods of the EvaluationLogic. This allows the user to pass the optimization logic of the L-BFGS optimization as a parameter to different methods and classes while retaining the ability to call back the methods of said logic with the correct parameters.
Value parameters
- evaluationLogic: EvaluationLogic that describes the optimization steps for the L-BFGS optimization done by the LBFGS object.
- instance: user data provided for a given call of the L-BFGS optimization done by lbfgsMain on the LBFGS object. Can have any type defined by the user as long as it is the same one expected by the optimizationLogic parameter.
- n: the number of variables used in the optimization.
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
The LBFGSIterationData case class stores relevant data regarding the changes made to the variable and gradient values in a single iteration of the native implementation of the L-BFGS algorithm. This data is used in future iterations of the algorithm to improve the search direction used to minimize the function value.
Value parameters
- alpha: product between rho and the dot product between s and q of this iteration and the next iteration, respectively. This value is used to recalculate the q values for past iterations that are kept by the algorithm. For the k-th iteration of the algorithm, the resulting alpha_k will be: alpha_k = rho_k * (s_k^T • q_{k+1}).
- s: VectorD containing the difference between the estimates of the variable values (x), each stored in a VectorD, of the last 2 iterations of the algorithm. For the k-th iteration of the algorithm, the resulting s_k will be: s_k = x_{k+1} - x_k.
- y: VectorD containing the difference between the gradient vectors (g), each stored in a VectorD, of the last 2 iterations of the algorithm. For the k-th iteration of the algorithm, the resulting y_k will be: y_k = g_{k+1} - g_k.
- ys: dot product between y and s. This value is used in multiple steps when determining the search direction to take when minimizing the function value, hence it is calculated once for each iteration. For the k-th iteration of the algorithm, the resulting ys_k will be: ys_k = y_k^T • s_k.
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
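The per-iteration data described above (s, y, rho = 1/ys, alpha) is exactly what the L-BFGS two-loop recursion consumes to turn the current gradient into a search direction. Here is a standalone sketch in plain Scala arrays (not the ScalaTion data structures); it assumes the history list is ordered newest first.

```scala
// Standalone sketch of the L-BFGS two-loop recursion.
final case class IterData (s: Array [Double], y: Array [Double], rho: Double, var alpha: Double = 0.0)

def dot (a: Array [Double], b: Array [Double]): Double = a.zip (b).map (_ * _).sum

def twoLoopDirection (g: Array [Double], history: List [IterData]): Array [Double] =
    var q = g.clone                                        // work on a copy of the gradient
    for d <- history do                                    // first pass: newest to oldest
        d.alpha = d.rho * dot (d.s, q)
        q = q.zip (d.y).map ((qi, yi) => qi - d.alpha * yi)
    val scale = history.headOption.map (d => dot (d.s, d.y) / dot (d.y, d.y)).getOrElse (1.0)
    var r = q.map (_ * scale)                              // initial inverse-Hessian approximation gamma * I
    for d <- history.reverse do                            // second pass: oldest to newest
        val beta = d.rho * dot (d.y, r)
        r = r.zip (d.s).map ((ri, si) => ri + (d.alpha - beta) * si)
    r.map (-_)                                             // search direction is -H * g
```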
The LBFGSLineSearch trait specifies the requirements for a line search algorithm to be used in the native implementation of L-BFGS.
Classes mixing in this trait must implement the lineSearch method. The lineSearch method is used to find the optimal step, searching in a specific line, to be taken to minimize an objective function value.
Attributes
- Companion
- object
- Supertypes: class Object, trait Matchable, class Any
- Known subtypes: object LBFGSBacktrackingArmijo, object LBFGSBacktrackingOrthantWise, object LBFGSBacktrackingStrongWolfe, object LBFGSBacktrackingWolfe, object LBFGSMoreThuente
The LBFGSLineSearch companion object provides a method for selecting the line search algorithm.
Attributes
- Companion
- trait
- Supertypes: class Object, trait Matchable, class Any
- Self type: LBFGSLineSearch.type
The LBFGSLineSearchAlg enumeration describes possible line search algorithms to be used in the L-BFGS algorithm when determining the size of the step to be taken in gradient descent.
Value parameters
- number: numerical representation of the algorithm category
Attributes
- Supertypes: trait Enum, trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
The LBFGSLineSearchFailure case class describes a failure that occurred in the execution of a line search algorithm in the native implementation of the L-BFGS algorithm. Every line search algorithm used by the native L-BFGS implementation should return an instance of this case class upon encountering an error when searching for the optimal step to take in a given line.
Value parameters
- bestIncompleteResults: LBFGSLineSearchIncompleteResults containing the best results obtained from the incomplete execution of the line search algorithm.
- returnCode: LBFGSReturnCode describing the error responsible for causing a failure in the line search algorithm. Must be an error code, as returning a non-error code causes undefined behavior.
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
The LBFGSLineSearchIncomplete case class describes the incomplete optimization results obtained before an error occurred in the execution of a line search algorithm in the native implementation of the L-BFGS algorithm. This information might be useful for the user to determine the effectiveness of a certain line search algorithm before an error occurred.
Value parameters
- functionValue: objective function value for the variable values obtained after applying the best step found for the searched line before the occurrence of an error in the line search algorithm.
- variableValues: VectorD containing the values of the variables after applying the best step found for the searched line before the occurrence of an error in the line search algorithm.
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
The LBFGSLineSearchPrms case class is used to group together all parameters that control the line search routine used by the L-BFGS optimization process.
Value parameters
- defaultStep: the default step selected as the initial step for the line search routine. The default value is 1.0.
- ftol: controls the accuracy of the line search routine. Should be greater than zero and smaller than 0.5. The default value is 1e-4.
- gtol: controls the accuracy of the line search routine. If the function and gradient evaluations are inexpensive with respect to the cost of the iteration (which is sometimes the case when solving very large problems), it may be advantageous to set this parameter to a small value (e.g., 0.1). This parameter should be greater than the ftol parameter (default value of 1e-4) and smaller than 1.0. The default value is 0.9.
- maxLineSearch: maximum number of trials for the line search. Controls the number of function and gradient evaluations per iteration for the line search routine. The default value is 40.
- maxStep: maximum step of the line search routine. Does not need to be modified unless the exponents are too large for the machine being used, or unless the problem is extremely badly scaled (in which case the exponents should be increased). The default value is 1e20.
- minStep: minimum step of the line search routine. Does not need to be modified unless the exponents are too large for the machine being used, or unless the problem is extremely badly scaled (in which case the exponents should be increased). The default value is 1e-20.
- wolfe: a coefficient for the Wolfe condition. Only used when a backtracking line-search algorithm that relies on the Wolfe condition is chosen for the LBFGSLineSearchAlg lineSearch param (e.g., LBFGSLineSearchAlg.BacktrackingWolfe or LBFGSLineSearchAlg.BacktrackingStrongWolfe). Should be greater than the ftol parameter and smaller than 1.0. The default value is 0.9.
- xtol: the machine precision for floating-point values. Must be a positive value set by a client program to estimate the machine precision. The L-BFGS optimization will terminate with the return code LBFGSReturnCode.RoundingError if the relative width of the interval of uncertainty is less than this parameter. The default value is 1.0e-16.
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
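A hypothetical construction sketch follows, using the documented field names and defaults. Whether every field has a default and whether named arguments can be used in this order are assumptions about the case class.

```scala
// Hypothetical construction sketch -- field availability/order assumed from the list above.
import scalation.optimization.quasi_newton.LBFGSLineSearchPrms

val lsPrms = LBFGSLineSearchPrms (
    ftol          = 1e-4,      // sufficient-decrease coefficient, 0 < ftol < 0.5
    gtol          = 0.9,       // curvature coefficient, ftol < gtol < 1.0
    maxLineSearch = 40,        // at most 40 function/gradient evaluations per search
    minStep       = 1e-20,
    maxStep       = 1e20)
```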
The LBFGSLineSearchStep case class stores the results of a single line search step performed by a line search algorithm in the implementation of the L-BFGS algorithm. Every line search algorithm used by the native L-BFGS implementation should return an instance of this case class upon achieving a successful line search step.
Value parameters
- fx: the objective function value obtained after performing the line search step.
- g: VectorD representing the gradient vector obtained after performing the line search step.
- numberOfIterations: the number of iterations needed to determine the line search step performed.
- step: the step selected by the line search algorithm.
- x: VectorD representing the values of the variables obtained after performing the line search step.
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
The LBFGSMoreThuente object implements the MoreThuente line search algorithm for use in the native implementation of L-BFGS.
Attributes
- Supertypes
- Self type: LBFGSMoreThuente.type
The LBFGSPrms class is used to group together all parameters that control the L-BFGS optimization process.
Value parameters
- delta: delta for the convergence test. Determines the minimum rate of decrease of the objective function. The optimization stops iterating when the following condition is met: (f' - f)/f < delta. In the formula, f' is the objective value of past iterations ago (see the past parameter) and f is the objective value of the current iteration. The default value is 1e-5.
- epsilon: epsilon for the convergence test. Determines the accuracy with which the solution is to be found. A minimization terminates when: ||g|| < epsilon * max(1, ||x||). In the formula, ||.|| denotes the Euclidean L2-norm. The default value is 1e-5.
- lineSearch: LBFGSLineSearchAlg specifying which line search algorithm should be used. The default value is LBFGSLineSearchAlg.Default.
- lineSearchPrms: LBFGSLineSearchPrms specifying the parameters needed during the execution of the line search algorithm routine.
- m: number of past iterations stored in memory to approximate the inverse Hessian matrix of the current iteration. Values less than 3 are not recommended. Large values will result in excessive computing time. The default value is 6.
- maxIterations: the maximum number of iterations. The lbfgsMain and lbfgsMainCWrapper methods in Wrapper terminate an optimization process with the LBFGSReturnCode.MaximumIteration return code when the iteration count exceeds this parameter. Setting this parameter to zero continues the optimization process until convergence or an error. The default value is 0.
- orthantWise: Option of type OrthantWisePrms that specifies whether the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) optimization method should be used when calculating the value to be optimized. If this parameter is set to Some, then the lineSearch parameter should always be set to LBFGSLineSearchAlg.BacktrackingOrthantWise or the optimization will terminate with the return code LBFGSReturnCode.InvalidLineSearch. The same will occur when the lineSearch parameter is set to LBFGSLineSearchAlg.BacktrackingOrthantWise and this parameter is set to None.
- past: distance for the delta-based convergence test. Determines how many iterations back to look when computing the rate of decrease of the objective function. A value of zero implies the delta-based convergence test will not be performed. The default value is 0.
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
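The two stopping rules controlled by epsilon, delta, and past are simple to state in code. Here is a standalone sketch in plain Scala (not the ScalaTion code) of both tests, using the formulas given above.

```scala
// Standalone sketch of the L-BFGS convergence tests described above.
def norm (v: Array [Double]): Double = math.sqrt (v.map (x => x * x).sum)

// gradient-based test: ||g|| < epsilon * max(1, ||x||)
def gradientConverged (g: Array [Double], x: Array [Double], epsilon: Double = 1e-5): Boolean =
    norm (g) < epsilon * math.max (1.0, norm (x))

// delta-based test: (f' - f) / f < delta, where f' is the objective value `past` iterations ago
def deltaConverged (fPast: Double, fNow: Double, delta: Double = 1e-5): Boolean =
    (fPast - fNow) / fNow < delta
```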
The LBFGSResults case class is used to group together all the outputs resulting from a call to a method implementing the L-BFGS optimization algorithm.
Value parameters
- finalFunctionValue: Option value that represents the final value obtained for the objective function in the L-BFGS optimization. If the objective function was never evaluated due to errors in the arguments provided by the user to the L-BFGS method, this field will be set to None or Some(0) depending on how the L-BFGS method was implemented. For new L-BFGS implementations, returning None is preferred over Some(0) when the objective function is never evaluated.
- lineSearchIncomplete: Option value that represents the best incomplete results obtained by the line search algorithm before a failure occurred when performing a line search during the L-BFGS optimization. If the L-BFGS optimization is successful or produces an error that is not the result of a call to a line search algorithm, this value will be set to None. If the L-BFGS optimization is stopped due to an error produced by a call to a line search algorithm, this value will be set to Some with an instance of LBFGSLineSearchIncomplete that represents the best result obtained by the line search algorithm before the error occurred. Some L-BFGS implementations are incapable of returning this data and will always return None, regardless of the circumstances.
- optimizedVariables: VectorD that contains the optimized values of the variables that were provided as inputs to the L-BFGS optimization.
- returnCode: LBFGSReturnCode that represents the outcome of the L-BFGS optimization.
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
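A hypothetical sketch of reporting on an LBFGSResults value (e.g., one returned by a call to LBFGS.lbfgsMain) follows. Only the field names documented above are used; the import path and the idea that such a results value is obtained from lbfgsMain are assumptions.

```scala
// Hypothetical reporting sketch -- uses only the documented field names of LBFGSResults.
import scalation.optimization.quasi_newton.LBFGSResults

def report (results: LBFGSResults): Unit =
    println (s"return code        : ${results.returnCode}")
    println (s"optimized variables: ${results.optimizedVariables}")
    results.finalFunctionValue match
        case Some (fx) => println (s"final objective value: $fx")
        case None      => println ("objective function was never evaluated")
```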
The LBFGSReturnCode enumeration describes possible return codes of the L-BFGS optimization, including different ways the optimization may correctly conclude, possible errors with the parameters given and possible errors during the optimization process.
Value parameters
- code: integer value that represents the return code.
Attributes
- Companion
- object
- Supertypes: trait Enum, trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
The LBFGSReturnCode companion object ...
Attributes
- Companion
- enum
- Supertypes: trait Sum, trait Mirror, class Object, trait Matchable, class Any
- Self type: LBFGSReturnCode.type
The LBFGSVarEvaluationResults case class holds results from running evaluation logic used by the implementation of the Limited memory Broyden–Fletcher–Goldfarb–Shanno (BFGS) for unconstrained optimization (L-BFGS) algorithm.
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
The LBFGS_B companion object provides a factory method for Limited memory Broyden–Fletcher–Goldfarb–Shanno for Bounds constrained optimization.
The LBFGS_B class implements the Limited-memory Broyden–Fletcher–Goldfarb–Shanno for Bounds constrained optimization (L-BFGS-B) Quasi-Newton Algorithm for solving Non-Linear Programming (NLP) problems. L-BFGS-B determines a search direction by deflecting the steepest descent direction vector (opposite the gradient) by multiplying it by a matrix that approximates the inverse Hessian. Furthermore, only a few vectors represent the approximation of the Hessian matrix (limited memory). The parameters estimated are also bounded within user-specified lower and upper bounds.
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
Value parameters
- exactLS: whether to use exact (e.g., GoldenLS) or inexact (e.g., WolfeLS) Line Search
- f: the objective function to be minimized
- g: the constraint function to be satisfied, if any
- ineq: whether the constraint is treated as inequality (default) or equality
- l_u: (vector, vector) of lower and upper bounds for all input parameters
Attributes
- Companion
- object
- Supertypes
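The extra ingredient compared to plain L-BFGS is the bound handling: trial points must stay inside the user-specified lower/upper bounds l_u. Below is a standalone sketch of the simplest such mechanism, projection onto the box, in plain Scala; the actual L-BFGS-B algorithm uses a more refined gradient-projection strategy, so this is only an illustration of the constraint, not of the ScalaTion implementation.

```scala
// Standalone sketch: clamp a trial point into the box [lower, upper].
def project (x: Array [Double], lower: Array [Double], upper: Array [Double]): Array [Double] =
    x.indices.toArray.map (i => math.min (math.max (x(i), lower(i)), upper(i)))

@main def projectCheck (): Unit =
    val l = Array (0.0, 0.0)                                   // lower bounds
    val u = Array (2.5, 5.0)                                   // upper bounds
    val p = project (Array (3.0, 4.0), l, u)
    println (p.mkString ("(", ", ", ")"))                      // (2.5, 4.0): x_0 clipped to its upper bound
```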
The LBFGS_NoLS class is used to find optima for functions of vectors. The solve method finds local optima using a Quasi-Newton method, the Limited-Memory BFGS Method, that keeps track of the most recent m changes in x-positions and gradients. The Ring class is used to store the most recent m vectors.
min f(x) where f: R^n -> R
Value parameters
- f: the vector to scalar function to find optima of
- m: the memory size or number of historical s and y vectors to maintain
- n: the dimensionality of the vectors
- useLS: whether to use Line Search (LS)
Attributes
- See also: LBFGS for one that uses a different line search.
- Supertypes
The LineSearchTriInterval case class represents a trial interval for determining an optimal step, including the next step that should be evaluated based on the current endpoints of the trial interval. This class is used by the line search algorithm when attempting to minimize the objective function. In each iteration of the line search algorithm, the trial interval will be updated until certain conditions are met that result in a line search step being selected.
Value parameters
- brackt: indicates if the trial value t is bracketed. If set to true, the minimizer has been bracketed in an interval of uncertainty with endpoints between x and y.
- dx: derivative of the objective function obtained by using step x.
- dy: derivative of the objective function obtained by using step y.
- fx: objective function value obtained by using step x.
- fy: objective function value obtained by using step y.
- t: the new step chosen to be evaluated in an iteration of the line search algorithm based on the trial interval information.
- x: value of the endpoint in the step trial interval that yields the least function value at the moment.
- y: value of the second endpoint in the step trial interval (the first endpoint being x).
Attributes
- See also: Jorge J. Moré and David J. Thuente. Line search algorithms with guaranteed sufficient decrease. ACM Transactions on Mathematical Software (TOMS), Vol. 20, No. 3, pp. 286-307, 1994.
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
The OptimizationLogic trait specifies the requirements for the logic to be used in each step of an L-BFGS variable minimization done by the lbfgsMain method of the LBFGS object. The methods provided in this trait are called directly by the code used by the LBFGS class.
Classes mixing in this trait must implement two methods: evaluate and progress. The evaluate method is used to evaluate the gradients and objective function for a given state of the variables. The progress method is used to report on how the minimization process is progressing.
Attributes
- Supertypes
- Known subtypes: class FunctionOptimization
The OrthantWisePrms class is used to group together all parameters that control the Orthant-Wise method for minimizing the objective function value during the L-BFGS optimization process.
Value parameters
- c: coefficient for the L1 norm of variables. Must be set to a positive value to activate the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method, which minimizes the objective function F(x) combined with the L1 norm |x| of the variables: F(x) + C|x|. This parameter is the coefficient C for the |x| term. As the L1 norm |x| is not differentiable at zero, the code modifies function and gradient evaluations from a client program suitably. Thus, a client program only has to return the function value F(x) and gradients G(x) as usual.
- end: end index Option for computing the L1 norm of the variables. This parameter, which we shall henceforth call e, must be selected such that 0 < e ≤ N. It specifies the index number at which the code stops computing the L1 norm of the variables x. Setting this parameter to None or Some with a negative value will compute the L1 norm for all the variables x, which is useful when the number of variables x (N) is not known.
- start: start index for computing the L1 norm of the variables. This parameter, which we shall henceforth call b, must be selected such that 0 ≤ b < N. It specifies the index number from which the L1 norm of the variables x will be computed: |x| = |x_b| + |x_{b+1}| + ... + |x_N|. In other words, variables x_1, ..., x_{b-1} are not used for computing the L1 norm. Setting b to a non-zero value can protect variables x_1, ..., x_{b-1} from being regularized (e.g., if they represent a bias term of logistic regression). The default value is 0.
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
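The OWL-QN objective described above, F(x) + C|x| restricted to the index range [start, end), is easy to write down directly. Here is a standalone sketch in plain Scala (not the ScalaTion code); with start = 1 the first variable (e.g., a bias term) is left unregularized, and None or a negative end covers all variables, as documented.

```scala
// Standalone sketch of the OWL-QN regularized objective value.
def owlqnObjective (fx: Double, x: Array [Double], c: Double,
                    start: Int = 0, end: Option [Int] = None): Double =
    val stop = end.filter (_ >= 0).getOrElse (x.length)            // None or negative => use all variables
    val l1   = (start until stop).map (i => math.abs (x(i))).sum   // L1 norm over the selected indices
    fx + c * l1
```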
The QNewton object provides methods useful for Quasi-Newton optimizers.
Attributes
- Supertypes: class Object, trait Matchable, class Any
- Self type: QNewton.type
Types
The LBFGSLineSearchReturn type is a union type representing the return value for the lineSearch method of line search algorithms used by the native implementation of the L-BFGS algorithm.
A successful execution should return an LBFGSLineSearchStep, while an execution with errors should return an LBFGSLineSearchFailure with the appropriate LBFGSReturnCode error code. Returning an LBFGSReturnCode success code inside an LBFGSLineSearchFailure triggers undefined behavior.
Attributes
Value members
Concrete methods
The bFGSBealeFunction main function is used to test the BFGS class on f(x): f(x) = (1.5 - x(0) + x(0)*x(1))^2 + (2.25 - x(0) + x(0)*x(1)^2)^2 + (2.625 - x(0) + x(0)*x(1)^3)^2
runMain scalation.optimization.quasi_newton.bFGSBealeFunction
Attributes
The bFGSBohachevsky1Function main function is used to test the BFGS class on f(x): f(x) = x(0)^2 + 2*x(1)^2 - 0.3*cos(3*Pi*x(0)) - 0.4*cos(4*Pi*x(1)) + 0.7
runMain scalation.optimization.quasi_newton.bFGSBohachevsky1Function
Attributes
The bFGSBohachevsky2Function main function is used to test the BFGS class on f(x): f(x) = x(0)^2 + 2*x(1)^2 - 0.3*cos(3*Pi*x(0))*cos(4*Pi*x(1)) + 0.3
runMain scalation.optimization.quasi_newton.bFGSBohachevsky2Function
Attributes
The bFGSBohachevsky3Function main function is used to test the BFGS class on f(x): f(x) = x(0)^2 + 2*x(1)^2 - 0.3*cos(3*Pi*x(0) + 4*Pi*x(1)) + 0.3
runMain scalation.optimization.quasi_newton.bFGSBohachevsky3Function
Attributes
The bFGSBoothFunction main function is used to test the BFGS class on f(x): f(x) = (x(0) + 2 * x(1) - 7)^2 + (2 * x(0) + x(1) - 5)^2
runMain scalation.optimization.quasi_newton.bFGSBoothFunction
Attributes
The bFGSCamel3Function main function is used to test the BFGS class on f(x): f(x) = 2*x(0)^2 - 1.05*x(0)^4 + (1/6.0)*x(0)^6 + x(0)*x(1) + x(1)^2
runMain scalation.optimization.quasi_newton.bFGSCamel3Function
Attributes
The bFGSCubeFunction main function is used to test the BFGS class on f(x): f(x) = 100*(x(1) - x(0)^3)^2 + (1-x(0))^2
runMain scalation.optimization.quasi_newton.bFGSCubeFunction
Attributes
The bFGSFreudensteinRothFunction main function is used to test the BFGS class on f(x): f(x) = (x(0) - 13 + x(1)*((5 - x(1))*x(1) - 2))^2 + (x(0) - 29 + x(1)*((x(1) + 1)*x(1) - 14))^2
runMain scalation.optimization.quasi_newton.bFGSFreudensteinRothFunction
Attributes
The bFGSTest main function is used to test the BFGS class on f(x): f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.quasi_newton.bFGSTest
Attributes
The bFGSTest2 main function is used to test the BFGS class on f(x): f(x) = x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.quasi_newton.bFGSTest2
Attributes
The bFGSTest3 main function is used to test the BFGS_NoLS class. This test uses the Rosenbrock function. f(x) = (1 - x_0)^2 + 100 (x_1 - x_0^2)^2
runMain scalation.optimization.quasi_newton.bFGSTest3
Attributes
The bFGSTest4 main function is used to test the BFGS class on f(x): f(x) = 1/x(0) + x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.quasi_newton.bFGSTest4
Attributes
The bFGS_NoLSTest main function is used to test the BFGS_NoLS class. This test numerically approximates the derivatives to find minima. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.quasi_newton.bFGS_NoLSTest
Attributes
The bFGS_NoLSTest2 main function is used to test the BFGS_NoLS class. This test uses functions for partial derivatives to find minima. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.quasi_newton.bFGS_NoLSTest2
Attributes
The bFGS_NoLSTest3 main function is used to test the BFGS_NoLS class. This test uses the Rosenbrock function. f(x) = (1 - x_0)^2 + 100 (x_1 - x_0^2)^2
runMain scalation.optimization.quasi_newton.bFGS_NoLSTest3
Attributes
The bFGS_NoLSTest4 main function is used to test the BFGS_NoLS class on f(x): f(x) = 1/x(0) + x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.quasi_newton.bFGS_NoLSTest4
Attributes
The bealeFunctionLBFGSTest main function uses the Beale Function.
runMain scalation.optimization.quasi_newton.bealeFunctionLBFGSTest
Attributes
The bohachevsky1FunctionLBFGSTest main function uses the Bohachevsky1 Function.
runMain scalation.optimization.quasi_newton.bohachevsky1FunctionLBFGSTest
Attributes
The bohachevsky2FunctionLBFGSTest main function uses the Bohachevsky2 Function.
runMain scalation.optimization.quasi_newton.bohachevsky2FunctionLBFGSTest
Attributes
The bohachevsky3FunctionLBFGSTest main function uses the Bohachevsky3 Function.
runMain scalation.optimization.quasi_newton.bohachevsky3FunctionLBFGSTest
Attributes
The boothFunctionLBFGSTest main function uses the Booth Function to test the lbfgsMain method provided by the LBFGS object. Multiple tests are performed with different values for the variables.
The Booth Function can be described as follows:
- Input dimension: 2;
- Function domain: -10 ≤ x_i ≤ 10;
- Function definition: f(x) = (x_0 + 2 * x_1 - 7)^2 + (2 * x_0 + x_1 - 5)^2;
- Global minimum: x* = (1, 3); f(x*) = 0;
This test function can be run on the sbt shell with the following command:
runMain scalation.optimization.quasi_newton.boothFunctionLBFGSTest
Attributes
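Written out directly from the definition above, the Booth Function and its gradient look as follows; this is a standalone check in plain Scala (not the test code itself), and evaluating at the documented global minimum x* = (1, 3) gives f(x*) = 0 and a zero gradient.

```scala
// Standalone sketch of the Booth Function and its analytic gradient.
def booth (x: Array [Double]): Double =
    val a = x(0) + 2 * x(1) - 7
    val b = 2 * x(0) + x(1) - 5
    a * a + b * b

def boothGrad (x: Array [Double]): Array [Double] =
    val a = x(0) + 2 * x(1) - 7
    val b = 2 * x(0) + x(1) - 5
    Array (2 * a + 4 * b, 4 * a + 2 * b)

@main def boothCheck (): Unit =
    println (booth (Array (1.0, 3.0)))                                   // 0.0
    println (boothGrad (Array (1.0, 3.0)).mkString ("(", ", ", ")"))     // (0.0, 0.0)
```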
The camel3FunctionLBFGSTest main function uses the Camel3 Function.
runMain scalation.optimization.quasi_newton.camel3FunctionLBFGSTest
Attributes
The cubeFunctionLBFGSTest main function uses the Cube Function.
runMain scalation.optimization.quasi_newton.cubeFunctionLBFGSTest
Attributes
The freudensteinRothFunctionLBFGSTest main function uses the FreudensteinRoth Function.
runMain scalation.optimization.quasi_newton.freudensteinRothFunctionLBFGSTest
Attributes
The lBFGS_BTest main function is used to test the LBFGS_B class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.quasi_newton.lBFGS_BTest
Attributes
The lBFGS_BTest2 main function is used to test the LBFGS_B class. f(x) = x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.quasi_newton.lBFGS_BTest2
Attributes
The lBFGS_BTest3 main function is used to test the LBFGS_B class. f(x) = 1/x_0 + x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1
runMain scalation.optimization.quasi_newton.lBFGS_BTest3
Attributes
The lBFGS_NoLSTest main function is used to test the LBFGS_NoLS class. This test numerically approximates the derivatives to find minima.
runMain scalation.optimization.quasi_newton.lBFGS_NoLSTest
Attributes
The lBFGS_NoLSTest2 main function is used to test the LBFGS_NoLS class. This test uses functions for partial derivatives to find minima.
runMain scalation.optimization.quasi_newton.lBFGS_NoLSTest2
Attributes
The lBFGS_NoLSTest3 main function is used to test the LBFGS_NoLS class. This test uses the Rosenbrock function.
runMain scalation.optimization.quasi_newton.lBFGS_NoLSTest3
Attributes
The mccormickFunctionDMLBFGSTest main function uses the McCormick Function to test the dmlbfgsMain method provided by the dmLBFGS object. Multiple tests are performed with different values for the variables.
This test function can be run on the sbt shell with the following command:
runMain scalation.optimization.quasi_newton.mccormickFunctionDMLBFGSTest
Attributes
The mccormickFunctionLBFGSTest main function uses the McCormick Function to test the lbfgsMain method provided by the LBFGS object. Multiple tests are performed with different values for the variables.
runMain scalation.optimization.quasi_newton.mccormickFunctionLBFGSTest