Optimizer Comparison

This chapter provides a unified reference for choosing the right solver from the `numra::optim` module. The table below summarizes all available algorithms, followed by a decision flowchart and an error-handling guide.

| Solver | Function | Problem Type | Gradients | Constraints | Convergence |
|---|---|---|---|---|---|
| BFGS | `bfgs_minimize` | Unconstrained | Required | None | Superlinear |
| L-BFGS | `lbfgs_minimize` | Unconstrained | Required | None | Superlinear |
| Nelder-Mead | `nelder_mead` | Unconstrained | None | None | Linear |
| Powell | `powell` | Unconstrained | None | None | Superlinear |
| L-BFGS-B | `lbfgsb_minimize` | Bound-constrained | Required | Box bounds | Superlinear |
| Aug. Lagrangian | `augmented_lagrangian_minimize` | General NLP | Optional | Eq + Ineq + Bounds | Linear (outer) |
| SQP | `sqp_minimize` | General NLP | Required | Eq + Ineq | Superlinear |
| Levenberg-Marquardt | `lm_minimize` | Least squares | Jacobian | None | Superlinear |
| Simplex | `simplex_solve` | Linear program | N/A | Linear Eq + Ineq | Finite |
| Active Set QP | `active_set_qp_solve` | Quadratic program | N/A | Linear + Bounds | Finite |
| MILP | `milp_solve` | Mixed-integer LP | N/A | Linear + Integer | Finite |
| DE | `de_minimize` | Global (box) | None | Box bounds | Stochastic |
| CMA-ES | `cmaes_minimize` | Global (box) | None | Box bounds | Stochastic |
| NSGA-II | `nsga2_optimize` | Multi-objective | None | Box bounds | Stochastic |
| Robust | `RobustProblem::solve` | Uncertain params | Optional | Eq + Ineq + Bounds | Depends on inner |
| Stochastic | `StochasticProblem::solve` | Random params | Optional | Det + Chance | Depends on inner |

Use this guide to select the appropriate solver:

```text
Is the objective linear?
|-- Yes: Are there integer/binary variables?
|   |-- Yes --> milp_solve (Branch-and-Bound)
|   |-- No  --> simplex_solve (Revised Simplex)
|
|-- No: Is the objective quadratic with linear constraints?
    |-- Yes --> active_set_qp_solve (Active Set QP)
    |
    |-- No: Is it a least-squares problem (sum of squared residuals)?
        |-- Yes --> lm_minimize (Levenberg-Marquardt)
        |
        |-- No: Are there multiple objectives?
            |-- Yes --> nsga2_optimize (NSGA-II)
            |
            |-- No: Are parameters uncertain?
                |-- Yes: Worst-case safety?
                |   |-- Yes --> RobustProblem (Robust Opt.)
                |   |-- No  --> StochasticProblem (SAA/CVaR)
                |
                |-- No: Is the landscape multimodal/non-convex?
                    |-- Yes --> de_minimize or cmaes_minimize
                    |
                    |-- No: Are there constraints?
                        |-- Bounds only --> lbfgsb_minimize
                        |-- General     --> augmented_lagrangian_minimize or sqp_minimize
                        |-- None --> Are gradients available?
                            |-- Yes: n < 1000?  --> bfgs_minimize
                            |        n >= 1000? --> lbfgs_minimize
                            |-- No  --> nelder_mead or powell
```

The `OptimProblem` builder's `.solve()` method automatically dispatches to the appropriate solver:

```rust
use numra::optim::OptimProblem;

// Auto-selects based on problem structure:
let result = OptimProblem::new(2)
    .x0(&[1.0, 1.0])
    .objective(|x: &[f64]| x[0] * x[0] + x[1] * x[1])
    .gradient(|x: &[f64], g: &mut [f64]| {
        g[0] = 2.0 * x[0];
        g[1] = 2.0 * x[1];
    })
    .solve() // dispatches to L-BFGS (unconstrained + gradient)
    .unwrap();
```
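Conversely, if no gradient is supplied, the same call should route to a derivative-free method (Nelder-Mead, per the dispatch table below). A minimal sketch; the Rosenbrock objective is purely illustrative:

```rust
use numra::optim::OptimProblem;

// No gradient supplied: dispatch falls back to derivative-free Nelder-Mead.
let result = OptimProblem::new(2)
    .x0(&[-1.2, 1.0])
    .objective(|x: &[f64]| {
        // Rosenbrock function: a classic non-quadratic test, minimum at (1, 1).
        (1.0 - x[0]).powi(2) + 100.0 * (x[1] - x[0] * x[0]).powi(2)
    })
    .solve() // dispatches to Nelder-Mead (unconstrained, no gradient)
    .unwrap();
println!("converged = {}, f = {:.3e}", result.converged, result.f);
```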

The dispatch logic considers:

| Problem Structure | Auto-Selected Solver |
|---|---|
| Linear objective | Simplex or MILP |
| Quadratic objective | Active Set QP |
| Least-squares | Levenberg-Marquardt |
| Multi-objective | NSGA-II |
| Global flag set | Differential Evolution |
| Constraints present | Augmented Lagrangian |
| Bounds only | L-BFGS-B |
| Unconstrained + gradient | L-BFGS |
| Unconstrained, no gradient | Nelder-Mead |
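For the "Bounds only" row, a sketch of how a bound-constrained problem might look through the builder. Note that the `.bounds()` method name and its `(lower, upper)` pair format are assumptions made for illustration; consult the builder API for the exact call:

```rust
use numra::optim::OptimProblem;

let result = OptimProblem::new(2)
    .x0(&[0.5, 0.0])
    .objective(|x: &[f64]| (x[0] - 2.0).powi(2) + (x[1] + 1.0).powi(2))
    .gradient(|x: &[f64], g: &mut [f64]| {
        g[0] = 2.0 * (x[0] - 2.0);
        g[1] = 2.0 * (x[1] + 1.0);
    })
    // ASSUMED method name: attaches box bounds, triggering L-BFGS-B dispatch.
    .bounds(&[(0.0, 1.0), (-0.5, 0.5)])
    .solve()
    .unwrap();

// The unconstrained minimum (2, -1) lies outside the box, so some
// bounds end up active at the solution.
println!("active bounds: {:?}", result.active_bounds);
```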

Override auto-dispatch with `.solve_with()`:

```rust
use numra::optim::{OptimProblem, SolverChoice};

let result = OptimProblem::new(2)
    .x0(&[1.0, 1.0])
    .objective(|x: &[f64]| x[0] * x[0] + x[1] * x[1])
    .solve_with(SolverChoice::Bfgs)
    .unwrap();
```
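Explicit selection can be useful when the heuristic picks poorly, for example forcing dense BFGS on a small, smooth problem, or trying a global method when a local solver keeps stalling in the same basin.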

All solvers return `Result<OptimResult<S>, OptimError>`. The error variants cover all failure modes:

| Error Variant | Description | Common Cause |
|---|---|---|
| `LineSearchFailed` | Wolfe line search failed | Bad gradient, non-smooth objective |
| `NotDescentDirection` | Search direction has positive directional derivative | Numerical issues in Hessian approximation |
| `InvalidFunctionValue` | NaN or Inf objective value | Undefined region of objective function |
| `SingularMatrix` | Linear system solve failed | Degenerate constraints, poor conditioning |
| `DimensionMismatch` | Array sizes do not match | Programming error in problem setup |
| `NoObjective` | No objective function provided | Missing `.objective()` call |
| `NoInitialPoint` | No initial guess provided | Missing `.x0()` call |
| `Infeasible` | Constraints cannot be satisfied | Over-constrained problem |
| `Unbounded` | Objective can decrease without bound | Missing constraints in LP |
| `LPInfeasible` | LP has no feasible solution | Contradictory linear constraints |
| `QPNotPositiveSemiDefinite` | QP Hessian is indefinite | Non-convex QP formulation |
| `MILPInfeasible` | MILP has no feasible integer solution | Over-constrained integer program |
| `Other(String)` | Catch-all for miscellaneous errors | Various |
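For example, handling an equality-constrained problem while distinguishing an infeasibility report from non-convergence and from other failures: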
```rust
use numra::optim::{OptimProblem, OptimError};

match OptimProblem::new(2)
    .x0(&[1.0, 1.0])
    .objective(|x: &[f64]| x[0] + x[1])
    .constraint_eq(|x: &[f64]| x[0] * x[0] + x[1] * x[1] - 1.0)
    .solve()
{
    Ok(result) => {
        if result.converged {
            println!("Optimal: x={:?}, f={:.6}", result.x, result.f);
        } else {
            println!("Warning: did not converge. Status: {:?}", result.status);
            println!("Best found: x={:?}, f={:.6}", result.x, result.f);
        }
    }
    Err(OptimError::Infeasible { violation }) => {
        println!("Infeasible: max violation = {:.2e}", violation);
    }
    Err(e) => {
        println!("Optimization failed: {}", e);
    }
}
```
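Keep these practices in mind when working with the solvers: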
  1. Always provide gradients when possible. Finite-difference gradients cost 2n extra function evaluations and are less accurate.

  2. Use L-BFGS over BFGS for n > 1000. The O(n²) dense Hessian in BFGS becomes prohibitive.

  3. Tighten tolerances gradually. Start with default tolerances and only tighten `gtol`/`ftol` if the solution is insufficiently accurate.

  4. Provide a good initial point. For local solvers, the initial point determines which local minimum is found. For constrained problems, start near the feasible region.

  5. Use the builder API for complex problems. It handles solver selection, finite-difference fallbacks, and constraint formatting automatically.

  6. Combine global + local for multimodal problems: use DE or CMA-ES for exploration, then BFGS for polishing (see the sketch after this list).

  7. Check `result.history` for convergence diagnostics. A stalling gradient norm or oscillating objective suggests the solver is struggling.
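A sketch of the global-then-local pattern from tip 6. The `.global(true)` call is an assumed name for the "global flag" mentioned in the dispatch table, and the polishing stage leans on the builder's finite-difference fallback (tip 5) since no analytic gradient is given:

```rust
use numra::optim::{OptimProblem, SolverChoice};

// Himmelblau's function: four local minima, a standard multimodal test.
let f = |x: &[f64]| {
    (x[0] * x[0] + x[1] - 11.0).powi(2) + (x[0] + x[1] * x[1] - 7.0).powi(2)
};

// Stage 1: global exploration via Differential Evolution.
// ASSUMED method name: `.global(true)` sets the global flag.
let coarse = OptimProblem::new(2)
    .x0(&[0.0, 0.0])
    .objective(f)
    .global(true)
    .solve()
    .unwrap();

// Stage 2: local polish from the best point found, forcing BFGS.
// No gradient given, so the finite-difference fallback is used.
let polished = OptimProblem::new(2)
    .x0(&coarse.x)
    .objective(f)
    .solve_with(SolverChoice::Bfgs)
    .unwrap();

println!("coarse f = {:.3e}, polished f = {:.3e}", coarse.f, polished.f);
```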

Every solver populates these fields:

| Field | Type | Availability |
|---|---|---|
| `x` | `Vec<S>` | Always |
| `f` | `S` | Always |
| `grad` | `Vec<S>` | Always (empty for derivative-free) |
| `iterations` | `usize` | Always |
| `n_feval` | `usize` | Always |
| `n_geval` | `usize` | Always (0 for derivative-free) |
| `converged` | `bool` | Always |
| `message` | `String` | Always |
| `status` | `OptimStatus` | Always |
| `history` | `Vec<IterationRecord<S>>` | Always (may be empty) |
| `wall_time_secs` | `f64` | Always |
| `lambda_eq` | `Vec<S>` | Constrained solvers |
| `lambda_ineq` | `Vec<S>` | Constrained solvers |
| `active_bounds` | `Vec<usize>` | L-BFGS-B, QP |
| `constraint_violation` | `S` | Constrained solvers |
| `pareto` | `Option<ParetoResult<S>>` | NSGA-II only |
| `sensitivity` | `Option<ParamSensitivity<S>>` | Robust only |
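As a closing sketch, reading only the always-available fields after a solve:

```rust
use numra::optim::OptimProblem;

let result = OptimProblem::new(2)
    .x0(&[1.0, 1.0])
    .objective(|x: &[f64]| x[0] * x[0] + x[1] * x[1])
    .solve()
    .unwrap();

// Solution and objective value.
println!("x = {:?}, f = {:.3e}", result.x, result.f);
// Work performed by the solver.
println!(
    "{} iterations, {} f-evals, {} g-evals in {:.3} s",
    result.iterations, result.n_feval, result.n_geval, result.wall_time_secs
);
// Termination status and per-iteration diagnostics.
println!("status: {:?} ({} history records)", result.status, result.history.len());
```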