# Matrix Factorizations
Matrix factorizations decompose a matrix into a product of simpler matrices that make subsequent operations (solving, least-squares, determinants) much cheaper. The key insight: factorize once at O(n³) cost, then solve repeatedly at O(n²) cost per right-hand side.
Numra provides four direct factorizations: LU, QR, Cholesky, and SVD. Each
caches the decomposition on construction and reuses it for every subsequent
solve() call.
## LU Factorization

LU with partial pivoting decomposes A as:

PA = LU

where P is a permutation matrix, L is unit lower triangular, and U is upper triangular. This is the general-purpose workhorse for solving Ax = b with any nonsingular square matrix.
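To see why repeated solves are cheap, here is a minimal hand-rolled sketch in plain Rust (illustrative only, not the numra API): once the triangular factors are known, each solve is just a forward sweep followed by a backward sweep.

```rust
// Illustrative sketch (plain Rust, not the numra API): given the triangular
// factors of a 2x2 matrix, each solve is a forward sweep (L y = b) and a
// backward sweep (U x = y).
fn solve_lu_2x2(l: [[f64; 2]; 2], u: [[f64; 2]; 2], b: [f64; 2]) -> [f64; 2] {
    // Forward substitution with unit lower triangular L
    let y0 = b[0];
    let y1 = b[1] - l[1][0] * y0;
    // Back substitution with upper triangular U
    let x1 = y1 / u[1][1];
    let x0 = (y0 - u[0][1] * x1) / u[0][0];
    [x0, x1]
}

fn main() {
    // A = [1 2; 3 4] factors (no row swap needed here) as
    // L = [1 0; 3 1], U = [1 2; 0 -2]
    let l = [[1.0, 0.0], [3.0, 1.0]];
    let u = [[1.0, 2.0], [0.0, -2.0]];

    let x = solve_lu_2x2(l, u, [5.0, 11.0]);
    assert!((x[0] - 1.0).abs() < 1e-12);
    assert!((x[1] - 2.0).abs() < 1e-12);
}
```

Both sweeps touch each matrix entry once, which is the O(n²) per-solve cost quoted above.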
### When to Use

- General (non-symmetric, non-SPD) square systems
- Multiple right-hand sides with the same coefficient matrix
- Inside implicit ODE solvers (Radau5, ESDIRK, BDF) where the Newton system is solved many times per step
```rust
use numra::linalg::{DenseMatrix, Matrix, LUFactorization};

let mut a: DenseMatrix<f64> = DenseMatrix::zeros(2, 2);
a.set(0, 0, 1.0); a.set(0, 1, 2.0);
a.set(1, 0, 3.0); a.set(1, 1, 4.0);

// Factor once
let lu = LUFactorization::new(&a).unwrap();
assert_eq!(lu.dim(), 2);

// Solve Ax = b (reuses the cached factorization)
let b1 = vec![5.0, 11.0];
let x1 = lu.solve(&b1).unwrap();
assert!((x1[0] - 1.0).abs() < 1e-10);
assert!((x1[1] - 2.0).abs() < 1e-10);

// Solve again with a different right-hand side -- no refactorization
let b2 = vec![5.0, 13.0];
let x2 = lu.solve(&b2).unwrap();
assert!((x2[0] - 3.0).abs() < 1e-10);
assert!((x2[1] - 1.0).abs() < 1e-10);
```

### In-Place Solve

When you want to overwrite `b` with the solution:
```rust
use numra::linalg::{DenseMatrix, Matrix, LUFactorization};

let mut a: DenseMatrix<f64> = DenseMatrix::zeros(2, 2);
a.set(0, 0, 2.0); a.set(1, 1, 3.0);

let lu = LUFactorization::new(&a).unwrap();
let mut b = vec![4.0, 9.0];
lu.solve_inplace(&mut b).unwrap();
// b is now [2.0, 3.0]
```

### Multiple Right-Hand Sides

Solve AX = B, where B has several columns (stored column-major):
```rust
use numra::linalg::{DenseMatrix, Matrix, LUFactorization};

let mut a: DenseMatrix<f64> = DenseMatrix::zeros(2, 2);
a.set(0, 0, 2.0); a.set(1, 1, 3.0);

let lu = LUFactorization::new(&a).unwrap();

// Two right-hand sides in column-major layout:
// col 0: [2, 6], col 1: [4, 9]
let b = vec![2.0, 6.0, 4.0, 9.0];
let x = lu.solve_multi(&b, 2).unwrap();
// col 0 solution: [1, 2], col 1 solution: [2, 3]
```

### The LUSolver Trait

DenseMatrix also implements the LUSolver trait for convenience:
```rust
use numra::linalg::{DenseMatrix, Matrix, LUSolver};

let mut a: DenseMatrix<f64> = DenseMatrix::zeros(2, 2);
a.set(0, 0, 1.0); a.set(0, 1, 2.0);
a.set(1, 0, 3.0); a.set(1, 1, 4.0);

// One-shot solve (factors internally)
let x = a.lu_solve(&vec![5.0, 11.0]).unwrap();

// Or get the factorization for reuse
let lu = a.lu_factor().unwrap();
```

### Numerical Properties
| Property | Value |
|---|---|
| Cost | ~2n³/3 flops for factorization, O(n²) per solve |
| Stability | Partial pivoting keeps all multipliers bounded by 1; stable in practice |
| Works on | Any nonsingular square matrix |
| Fails when | Matrix is singular (zero pivot) |
## QR Factorization

QR decomposes A as:

A = QR

where Q is orthogonal (QᵀQ = I) and R is upper triangular. QR is numerically more stable than LU for solving Ax = b and is essential for least-squares problems.
### When to Use

- Overdetermined systems (more rows than columns): minimize ||Ax - b||
- Square systems where extra numerical stability is worth the higher cost
- Building orthogonal bases (Gram-Schmidt, Krylov methods)
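As a concrete illustration of the orthogonal-basis bullet, here is one classical Gram-Schmidt step worked in plain Rust (illustrative only, not the numra API): orthogonalize the second column against the first to produce two columns of Q.

```rust
// Illustrative sketch (plain Rust, not the numra API): one classical
// Gram-Schmidt step, building two orthonormal columns of Q.
fn dot(a: &[f64; 3], b: &[f64; 3]) -> f64 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

// Returns orthonormal q1, q2 spanning the same plane as a1, a2.
fn gram_schmidt(a1: [f64; 3], a2: [f64; 3]) -> ([f64; 3], [f64; 3]) {
    // q1 = a1 / ||a1||
    let n1 = dot(&a1, &a1).sqrt();
    let q1 = [a1[0] / n1, a1[1] / n1, a1[2] / n1];

    // Subtract the component of a2 along q1, then normalize.
    let r12 = dot(&q1, &a2);
    let mut v = [a2[0] - r12 * q1[0], a2[1] - r12 * q1[1], a2[2] - r12 * q1[2]];
    let n2 = dot(&v, &v).sqrt();
    for x in v.iter_mut() {
        *x /= n2;
    }
    (q1, v)
}

fn main() {
    let (q1, q2) = gram_schmidt([1.0, 1.0, 1.0], [1.0, 2.0, 3.0]);
    // The columns are orthonormal: q1 . q2 = 0 and ||q1|| = 1
    assert!(dot(&q1, &q2).abs() < 1e-12);
    assert!((dot(&q1, &q1) - 1.0).abs() < 1e-12);
}
```

Production QR implementations use Householder reflections instead of Gram-Schmidt for better numerical behavior, but the orthogonalization idea is the same.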
### Square System Solve

```rust
use numra::linalg::{DenseMatrix, Matrix, QRFactorization};

let mut a: DenseMatrix<f64> = DenseMatrix::zeros(2, 2);
a.set(0, 0, 1.0); a.set(0, 1, 2.0);
a.set(1, 0, 3.0); a.set(1, 1, 4.0);

let qr = QRFactorization::new(&a).unwrap();
assert_eq!(qr.nrows(), 2);
assert_eq!(qr.ncols(), 2);

let b = vec![5.0, 11.0];
let x = qr.solve(&b).unwrap();
assert!((x[0] - 1.0).abs() < 1e-10);
assert!((x[1] - 2.0).abs() < 1e-10);
```

### Least-Squares Problems

For an overdetermined system Ax ≈ b with m > n, `solve_least_squares` finds the x that minimizes ||Ax - b||:
```rust
use numra::linalg::{DenseMatrix, Matrix, QRFactorization};

// Fit a line y = a + b*x through points (1,1), (2,2), (3,2)
let mut a: DenseMatrix<f64> = DenseMatrix::zeros(3, 2);
a.set(0, 0, 1.0); a.set(0, 1, 1.0);
a.set(1, 0, 1.0); a.set(1, 1, 2.0);
a.set(2, 0, 1.0); a.set(2, 1, 3.0);

let qr = QRFactorization::new(&a).unwrap();

let b = vec![1.0, 2.0, 2.0];
let x = qr.solve_least_squares(&b).unwrap();
// x[0] = intercept ~ 2/3, x[1] = slope ~ 0.5
assert!((x[0] - 2.0 / 3.0).abs() < 1e-10);
assert!((x[1] - 0.5).abs() < 1e-10);
```

### Numerical Properties
| Property | Value |
|---|---|
| Cost | ~2mn² - 2n³/3 flops |
| Stability | Always backward stable (orthogonal transformations) |
| Works on | Any m × n matrix with m ≥ n |
| Advantage over LU | Never amplifies rounding errors |
## Cholesky Factorization

For symmetric positive definite (SPD) matrices, Cholesky computes:

A = LLᵀ

where L is lower triangular. This is roughly twice as fast as LU and uses half the storage.
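To see where the factor comes from, here is a 2x2 Cholesky worked out by hand in plain Rust (illustrative only, not the numra API), using the same SPD matrix [4 2; 2 3] as the example below:

```rust
// Illustrative sketch (plain Rust, not the numra API): 2x2 Cholesky
// factorization of A = [a11 a12; a12 a22] into L = [l11 0; l21 l22].
fn cholesky_2x2(a11: f64, a12: f64, a22: f64) -> (f64, f64, f64) {
    let l11 = a11.sqrt();               // (1,1): l11^2 = a11
    let l21 = a12 / l11;                // (2,1): l11 * l21 = a12
    let l22 = (a22 - l21 * l21).sqrt(); // (2,2): l21^2 + l22^2 = a22
    (l11, l21, l22)
}

fn main() {
    let (l11, l21, l22) = cholesky_2x2(4.0, 2.0, 3.0);

    // Verify A = L L^T entry by entry
    assert!((l11 * l11 - 4.0).abs() < 1e-12);
    assert!((l11 * l21 - 2.0).abs() < 1e-12);
    assert!((l21 * l21 + l22 * l22 - 3.0).abs() < 1e-12);
}
```

Note that the (2,2) step takes a square root of a22 - l21²; that quantity is positive exactly when A is positive definite, which is why Cholesky doubles as an SPD test.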
### When to Use

- Covariance matrices, mass matrices, stiffness matrices from FEM
- Any system where you know A is SPD
- Detecting positive definiteness (Cholesky fails if A is not SPD)
```rust
use numra::linalg::{DenseMatrix, Matrix, CholeskyFactorization};

// SPD matrix: [4 2; 2 3]
let mut a: DenseMatrix<f64> = DenseMatrix::zeros(2, 2);
a.set(0, 0, 4.0); a.set(0, 1, 2.0);
a.set(1, 0, 2.0); a.set(1, 1, 3.0);

let chol = CholeskyFactorization::new(&a).unwrap();
assert_eq!(chol.dim(), 2);

let b = vec![6.0, 5.0];
let x = chol.solve(&b).unwrap();
assert!((x[0] - 1.0).abs() < 1e-10);
assert!((x[1] - 1.0).abs() < 1e-10);
```

### Detecting Non-SPD Matrices

If the matrix is not positive definite, `new` returns an error:
```rust
use numra::linalg::{DenseMatrix, Matrix, CholeskyFactorization};

// Not positive definite: [1 2; 2 1]
let mut a: DenseMatrix<f64> = DenseMatrix::zeros(2, 2);
a.set(0, 0, 1.0); a.set(0, 1, 2.0);
a.set(1, 0, 2.0); a.set(1, 1, 1.0);

match CholeskyFactorization::new(&a) {
    Err(numra::linalg::LinalgError::NotPositiveDefinite) => {
        println!("Matrix is not positive definite");
    }
    _ => {}
}
```

### Numerical Properties
| Property | Value |
|---|---|
| Cost | ~n³/3 flops (half of LU) |
| Stability | Always stable for SPD matrices |
| Works on | Symmetric positive definite matrices only |
| Advantage | 2x faster and half the storage of LU |
## SVD — Singular Value Decomposition

The SVD decomposes any m × n matrix A as:

A = UΣVᵀ

where U is an m × m orthogonal matrix, Σ is diagonal with non-negative singular values σ₁ ≥ σ₂ ≥ … ≥ 0, and V is an n × n orthogonal matrix.
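For intuition about what the singular values are, here is a small worked computation in plain Rust (illustrative only, not the numra API): for a 2x2 matrix, the singular values are the square roots of the eigenvalues of AᵀA, which can be read off from its trace and determinant.

```rust
// Illustrative sketch (plain Rust, not the numra API): singular values of a
// 2x2 matrix via the eigenvalues of the Gram matrix G = A^T A.
fn singular_values_2x2(a: [[f64; 2]; 2]) -> (f64, f64) {
    // G = A^T A is symmetric positive semidefinite
    let g11 = a[0][0] * a[0][0] + a[1][0] * a[1][0];
    let g12 = a[0][0] * a[0][1] + a[1][0] * a[1][1];
    let g22 = a[0][1] * a[0][1] + a[1][1] * a[1][1];

    // Eigenvalues of G from its trace and determinant
    let (t, d) = (g11 + g22, g11 * g22 - g12 * g12);
    let disc = (t * t / 4.0 - d).max(0.0).sqrt();

    // Singular values are the square roots, largest first
    ((t / 2.0 + disc).sqrt(), (t / 2.0 - disc).max(0.0).sqrt())
}

fn main() {
    // A = [3 0; 4 5]: A^T A = [25 20; 20 25], eigenvalues 45 and 5
    let (s1, s2) = singular_values_2x2([[3.0, 0.0], [4.0, 5.0]]);
    assert!((s1 - 45.0_f64.sqrt()).abs() < 1e-12);
    assert!((s2 - 5.0_f64.sqrt()).abs() < 1e-12);

    // Product of singular values equals |det A| = 15
    assert!((s1 * s2 - 15.0).abs() < 1e-9);
}
```

Real SVD implementations never form AᵀA explicitly (it squares the condition number); this route is shown only because it makes the definition concrete for a tiny matrix.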
### Full SVD

```rust
use numra::linalg::{DenseMatrix, Matrix, SvdDecomposition};

let mut a: DenseMatrix<f64> = DenseMatrix::zeros(3, 3);
a.set(0, 0, 3.0);
a.set(1, 1, 1.0);
a.set(2, 2, 2.0);

let svd = SvdDecomposition::new(&a).unwrap();

// Singular values in decreasing order
let s = svd.singular_values(); // [3.0, 2.0, 1.0]

// Orthogonal factor matrices
let u = svd.u(); // 3x3
let v = svd.v(); // 3x3
```

### Thin SVD

For tall or wide matrices, the thin SVD saves memory by computing only the "economy" factors:
```rust
use numra::linalg::{DenseMatrix, Matrix, ThinSvdDecomposition};

let mut a: DenseMatrix<f64> = DenseMatrix::zeros(4, 2);
a.set(0, 0, 1.0);
a.set(1, 1, 2.0);

let svd = ThinSvdDecomposition::new(&a).unwrap();
let u = svd.u(); // 4x2 (not 4x4)
let v = svd.v(); // 2x2
let s = svd.singular_values(); // [2.0, 1.0]
```

### Pseudoinverse, Rank, and Condition Number

The SVD provides the most numerically reliable way to compute these quantities:
```rust
use numra::linalg::{DenseMatrix, Matrix, SvdDecomposition};

let mut a: DenseMatrix<f64> = DenseMatrix::zeros(2, 2);
a.set(0, 0, 1.0); a.set(0, 1, 2.0);
a.set(1, 0, 2.0); a.set(1, 1, 4.0);

let svd = SvdDecomposition::new(&a).unwrap();

// Numerical rank (count singular values > tolerance)
let r = svd.rank(1e-10); // 1 (rank-deficient matrix)

// Condition number: sigma_max / sigma_min
let kappa = svd.cond(); // infinity for a singular matrix

// Moore-Penrose pseudoinverse: A^+
let pinv = svd.pseudoinverse();
```

### Convenience Methods on DenseMatrix

For quick one-off computations without managing the decomposition object:
```rust
use numra::linalg::{DenseMatrix, Matrix};

let a = DenseMatrix::from_row_major(2, 2, &[3.0, 0.0, 0.0, 1.0]);

let s = a.singular_values(); // [3.0, 1.0]
let k = a.cond(); // 3.0
let r = a.rank(1e-10); // 2
let pinv = a.pinv().unwrap(); // pseudoinverse

// Least-squares solve via SVD pseudoinverse
let mut tall = DenseMatrix::from_row_major(3, 2, &[1.0, 1.0, 1.0, 2.0, 1.0, 3.0]);
let b = vec![1.0, 2.0, 2.0];
let x = tall.lstsq(&b).unwrap();
```

### Numerical Properties
| Property | Value |
|---|---|
| Cost | O(mn²) flops, with a larger constant than QR |
| Stability | The gold standard — always backward stable |
| Works on | Any matrix (no restrictions) |
| Extra info | Rank, condition number, null space, range |
## Choosing a Factorization

| Situation | Best choice | Why |
|---|---|---|
| General square A | LU | Fastest general-purpose solver |
| Multiple RHS, same A | LU (cached) | Factor once, solve many |
| SPD system | Cholesky | 2x faster, guaranteed stable |
| Overdetermined least-squares | QR | Numerically stable |
| Rank-deficient system | SVD | Only reliable method |
| Condition number estimate | SVD | Direct access to sigma_max / sigma_min |
| ODE implicit step | LU | Newton system solved many times per step |
## Cost Comparison (n x n matrix)

| Factorization | Factor | Solve | Total (k solves) |
|---|---|---|---|
| LU | ~2n³/3 | ~2n² | ~2n³/3 + 2kn² |
| QR | ~4n³/3 | ~3n² | ~4n³/3 + 3kn² |
| Cholesky | ~n³/3 | ~2n² | ~n³/3 + 2kn² |
| SVD | O(n³), large constant | ~2n² | O(n³) + 2kn² |
The factorization cost dominates for a single solve. When k ≫ 1, all methods amortize to O(n²) per solve, but LU and Cholesky have the smallest constants.
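A quick back-of-the-envelope check of the amortization claim, in plain Rust (the constants are the usual textbook approximations, not measured numra costs):

```rust
// Rough flop-count arithmetic: LU cost per solve as the number of
// right-hand sides k grows, using the ~2n^3/3 factor and ~2n^2 solve costs.
fn lu_cost_per_solve(n: f64, k: f64) -> f64 {
    (2.0 * n.powi(3) / 3.0 + k * 2.0 * n * n) / k
}

fn main() {
    let n = 1000.0;

    // One solve pays for the whole factorization; many solves amortize it.
    let single = lu_cost_per_solve(n, 1.0);  // ~6.7e8 flops
    let many = lu_cost_per_solve(n, 1000.0); // ~2.7e6 flops, approaching 2n^2

    assert!(single > 100.0 * many);
    println!("1 solve: {:.2e} flops, 1000 solves: {:.2e} flops each", single, many);
}
```

With 1000 solves of a 1000 x 1000 system, the per-solve cost drops by more than two orders of magnitude, which is exactly why the implicit ODE solvers cache the factorization.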