Single and double precision
The functionality presented here was introduced in OpEn version 0.12.0.
The new API is fully backward-compatible with previous versions of OpEn
with f64 being the default scalar type.
Overview
OpEn's Rust API now supports both f64 and f32. Note that with f32
you may encounter issues with convergence, especially if you are solving
particularly ill-conditioned problems. On the other hand, f32 is sometimes
the preferred type for embedded applications and can lead to lower
solve times.
When using f32: (i) make sure the problem is properly scaled,
and (ii) you may want to opt for less demanding tolerances.
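To see why f32 calls for looser tolerances, it helps to compare the machine epsilons of the two types: a tolerance close to (or below) the type's epsilon is unlikely to be reliably attainable. A minimal, self-contained sketch (plain Rust, not using OpEn itself):

```rust
fn main() {
    // Machine epsilon: the gap between 1.0 and the next representable value.
    println!("f32 epsilon = {:e}", f32::EPSILON); // roughly 1.19e-7
    println!("f64 epsilon = {:e}", f64::EPSILON); // roughly 2.22e-16

    // A tolerance of 1e-6 sits close to f32's epsilon, so residuals computed
    // in f32 may stall just above it; something like 1e-4 is a safer choice.
    assert!((f32::EPSILON as f64) > 1e-8);
    assert!(f64::EPSILON < 1e-15);
}
```

This is why the single-precision examples below use a tolerance of 1e-4 while the double-precision ones use 1e-6.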
PANOC example
Below are two examples of using the solver with single- and double-precision arithmetic.

Single precision:
use optimization_engine::{constraints, panoc::PANOCCache, Problem, SolverError};
use optimization_engine::panoc::PANOCOptimizer;
let tolerance = 1e-4_f32;
let lbfgs_memory = 10;
let radius = 1.0_f32;
let bounds = constraints::Ball2::new(None, radius);
let df = |u: &[f32], grad: &mut [f32]| -> Result<(), SolverError> {
    grad[0] = u[0] + u[1] + 1.0_f32;
    grad[1] = u[0] + 2.0_f32 * u[1] - 1.0_f32;
    Ok(())
};
let f = |u: &[f32], cost: &mut f32| -> Result<(), SolverError> {
    *cost = 0.5_f32 * (u[0] * u[0] + u[1] * u[1]);
    Ok(())
};
let problem = Problem::new(&bounds, df, f);
let mut cache = PANOCCache::<f32>::new(2, tolerance, lbfgs_memory);
let mut optimizer = PANOCOptimizer::new(problem, &mut cache);
let mut u = [0.0_f32, 0.0_f32];
let status = optimizer.solve(&mut u).unwrap();
assert!(status.has_converged());
Double precision:
use optimization_engine::{constraints, panoc::PANOCCache, Problem, SolverError};
use optimization_engine::panoc::PANOCOptimizer;
let tolerance = 1e-6;
let lbfgs_memory = 10;
let radius = 1.0;
let bounds = constraints::Ball2::new(None, radius);
let df = |u: &[f64], grad: &mut [f64]| -> Result<(), SolverError> {
    grad[0] = u[0] + u[1] + 1.0;
    grad[1] = u[0] + 2.0 * u[1] - 1.0;
    Ok(())
};
let f = |u: &[f64], cost: &mut f64| -> Result<(), SolverError> {
    *cost = 0.5 * (u[0] * u[0] + u[1] * u[1]);
    Ok(())
};
let problem = Problem::new(&bounds, df, f);
let mut cache = PANOCCache::new(2, tolerance, lbfgs_memory);
let mut optimizer = PANOCOptimizer::new(problem, &mut cache);
let mut u = [0.0, 0.0];
let status = optimizer.solve(&mut u).unwrap();
assert!(status.has_converged());
To use single precision, make sure that all of the following use f32:
- the initial guess u
- the closures for the cost and the gradient
- the constraints
- the cache, explicitly typed as PANOCCache::<f32> as in the example above
- any tolerances and numerical constants
Example with FBS
The same pattern applies to other solvers; here is the forward-backward splitting (FBS) solver in single precision.
use optimization_engine::{constraints, Problem, SolverError};
use optimization_engine::fbs::{FBSCache, FBSOptimizer};
use std::num::NonZeroUsize;
let bounds = constraints::Ball2::new(None, 0.2_f32);
let df = |u: &[f32], grad: &mut [f32]| -> Result<(), SolverError> {
    grad[0] = u[0] + u[1] + 1.0_f32;
    grad[1] = u[0] + 2.0_f32 * u[1] - 1.0_f32;
    Ok(())
};
let f = |u: &[f32], cost: &mut f32| -> Result<(), SolverError> {
    *cost = u[0] * u[0] + 2.0_f32 * u[1] * u[1] + u[0] - u[1] + 3.0_f32;
    Ok(())
};
let problem = Problem::new(&bounds, df, f);
let mut cache = FBSCache::<f32>::new(NonZeroUsize::new(2).unwrap(), 0.1_f32, 1e-6_f32);
let mut optimizer = FBSOptimizer::new(problem, &mut cache);
let mut u = [0.0_f32, 0.0_f32];
let status = optimizer.solve(&mut u).unwrap();
assert!(status.has_converged());
Example with ALM
ALM also supports both precisions. As with PANOC and FBS, the scalar type should be chosen once and then used consistently throughout the ALM problem, cache, mappings, and tolerances.
For example, if you use:
- AlmCache::<f32>
- PANOCCache::<f32>
- Ball2::<f32>
- closures of type |u: &[f32], ...|

then the whole ALM solve runs in single precision.
If instead you use plain f64 literals and &[f64] closures, the solver runs in double precision. This is the default behaviour.
Type inference tips
Rust usually infers the scalar type correctly, but explicit annotations are often helpful for f32.
Good ways to make f32 intent clear are:
- suffix literals, for example 1.0_f32 and 1e-4_f32
- annotate vectors and arrays, for example let mut u = [0.0_f32; 2];
- annotate caches explicitly, for example PANOCCache::<f32>::new(...)
- annotate closure arguments, for example |u: &[f32], grad: &mut [f32]|
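As a small illustration of how the suffix on a single literal can pin down the types of everything downstream (plain Rust, no OpEn dependency):

```rust
fn main() {
    let tolerance = 1e-4_f32; // the suffix fixes the scalar type here...
    let mut u = [0.0; 2];     // ...and inference makes this [f32; 2],
    u[0] = tolerance;         // because an element is later assigned an f32

    // 4 bytes per element: f32, not f64
    assert_eq!(std::mem::size_of_val(&u[0]), 4);
}
```

Without the assignment from an f32 value, the array would default to [f64; 2], which is why explicit annotations are a good habit in f32 code.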
Mixing f32 and f64
For example, the following combinations are problematic:
- u: &[f32] with a cost function writing to &mut f64
- Ball2::new(None, 1.0_f64) together with PANOCCache::<f32>
Choose one scalar type per optimization problem and use it everywhere.
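If your data arrives as f64 (for instance, from another part of the application) but you want to solve in f32, a common pattern is to convert once at the boundary and then stay in a single scalar type throughout. The sketch below is illustrative; the variable names are not part of OpEn's API:

```rust
fn main() {
    // Hypothetical f64 measurements produced elsewhere in the application.
    let measurement: [f64; 2] = [0.25, -0.5];

    // Convert once, at the boundary; everything after this point is f32.
    let mut u: [f32; 2] = [measurement[0] as f32, measurement[1] as f32];

    // ...pass `u` to an f32-typed solver, e.g. one built around PANOCCache::<f32>...
    u[0] += 1e-4_f32; // all subsequent arithmetic stays in f32

    assert_eq!(std::mem::size_of_val(&u[0]), 4); // f32 throughout
}
```

Keeping the conversion in one place makes it easy to audit that no f64 values leak into the f32 solve.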