The condition number of a mathematical object is a measure of how sensitive the object is to small changes. There are several types of condition numbers. In machine learning, you might want to know how sensitive a square matrix A that defines a set of linear equations Ax = b is. Or you might want to know how sensitive a tall matrix (more rows than columns) that holds training data for a prediction model is.
The one-norm condition number of a matrix A is defined as norm(A) * norm(Ainv), in words, the one-norm of A times the one-norm of the inverse of A.
The one-norm of a matrix is the maximum of the column sums of the magnitudes of the values. For example, if A is:
   3.0   0.0  -2.0   5.0
  -1.0   4.0   6.0   3.0
   4.0   1.0   0.0   3.0
  -3.0   2.0   4.0   5.0
The column sums of the absolute values are 11, 7, 12, 16, and so the one-norm is 16.
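The worked example above can be checked with a minimal standalone sketch (essentially the MatOneNorm() function from the demo program farther below):

```csharp
using System;

class OneNormDemo
{
  // one-norm = maximum column sum of the absolute values
  public static double MatOneNorm(double[][] A)
  {
    double result = 0.0;
    for (int j = 0; j < A[0].Length; ++j)  // each column
    {
      double colSum = 0.0;
      for (int i = 0; i < A.Length; ++i)   // walk down column
        colSum += Math.Abs(A[i][j]);
      if (colSum > result)
        result = colSum;
    }
    return result;
  }

  static void Main()
  {
    double[][] A = new double[][]
    {
      new double[] {  3.0, 0.0, -2.0, 5.0 },
      new double[] { -1.0, 4.0,  6.0, 3.0 },
      new double[] {  4.0, 1.0,  0.0, 3.0 },
      new double[] { -3.0, 2.0,  4.0, 5.0 }
    };
    // column sums of absolute values are 11, 7, 12, 16
    Console.WriteLine(MatOneNorm(A));  // prints 16
  }
}
```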
Note: Another common condition number for matrices is the two-norm condition number. It is defined as the ratio of the largest singular value of the matrix to the smallest singular value. Computing singular values is one of the most difficult tasks in all of numerical programming.
I put together a demo using the C# language. To account for tall machine learning style matrices, I computed the pseudo-inverse (which applies to any shape matrix) instead of the standard inverse (which applies to only square matrices).
There are dozens of ways to compute a pseudo-inverse, all of them very complicated. I used QR decomposition via the modified Gram-Schmidt algorithm, which is, in my opinion, the simplest possible algorithm for pseudo-inverse.
The output of my demo:
Begin one-norm condition number demo

Source (square) matrix A:
   3.0   0.0  -2.0   5.0
  -1.0   4.0   6.0   3.0
   4.0   1.0   0.0   3.0
  -3.0   2.0   4.0   5.0

The one-norm condition number = 53.3333

Source (non-square) matrix B:
   3.0   0.0  -2.0   5.0
  -1.0   4.0   6.0   3.0
   4.0   1.0   0.0   3.0
  -3.0   2.0   4.0   5.0
   5.0   3.0  -1.0   0.0

The one-norm condition number = 16.5219

End demo
It’s very difficult to interpret condition numbers, other than that a larger condition number means the matrix is more sensitive. In other words, small changes in the source matrix can lead to very large changes in the result (linear equation coefficients, matrix inverse for training, and so on).
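To build intuition, consider a hypothetical nearly singular matrix (not from the demo): because the second row is almost a duplicate of the first, the one-norm condition number is enormous. A minimal sketch, using the analytic 2x2 inverse for brevity rather than the demo's QR approach:

```csharp
using System;

class NearSingularDemo
{
  // one-norm condition number of a 2x2 matrix, using the
  // analytic inverse: inv = (1/det) * [[d, -b], [-c, a]]
  public static double CondOneNorm2x2(double[][] A)
  {
    double det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
    double[][] Ainv = new double[][]
    {
      new double[] {  A[1][1] / det, -A[0][1] / det },
      new double[] { -A[1][0] / det,  A[0][0] / det }
    };
    // one-norm = maximum column sum of absolute values
    double normA = Math.Max(Math.Abs(A[0][0]) + Math.Abs(A[1][0]),
      Math.Abs(A[0][1]) + Math.Abs(A[1][1]));
    double normAinv = Math.Max(Math.Abs(Ainv[0][0]) + Math.Abs(Ainv[1][0]),
      Math.Abs(Ainv[0][1]) + Math.Abs(Ainv[1][1]));
    return normA * normAinv;
  }

  static void Main()
  {
    // second row is almost a copy of the first row
    double[][] A = new double[][]
    {
      new double[] { 1.0, 1.0 },
      new double[] { 1.0, 1.0001 }
    };
    // roughly 40004 -- very ill-conditioned
    Console.WriteLine(CondOneNorm2x2(A).ToString("F1"));
  }
}
```

A well-conditioned matrix such as the identity gives 1.0, the smallest possible value; the closer the rows (or columns) are to being linearly dependent, the larger the condition number grows.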

I have a strange liking for comedy movies that have a special scene/condition: obviously fake bats. To me, these scenes are hilarious.
Left: In “Black Sheep” (1996), loser Mike Donnelly (actor Chris Farley) is the brother of rising politician Al Donnelly. Al’s campaign manager Steve Dodds (actor David Spade) is tasked with keeping Mike out of trouble and out of the press. Steve and Mike try to go into quarantine at a mountain cabin, complete with fake bat. Everything works out in the end. My grade = B.
Center: In “Spy” (2015), desk CIA agent Susan Cooper (actress Melissa McCarthy) works out of a basement office where all the losers are sequestered. The office also houses a colony of bats. This movie could have been excellent, but there were too many crude and sexual scenes to suit me. Even so, my grade = B+.
Right: In “Ace Ventura: When Nature Calls” (1995), detective Ace Ventura (actor Jim Carrey) loves all animals . . . except bats. He is hired to go to Nibia, Africa to find a kidnapped sacred white bat. I’m not a Jim Carrey fan, but this movie has some hilarious scenes. My grade = B+.
Demo program.

// matrix 1-norm condition number

using System;

namespace MatrixConditionNumberDefinition
{
  internal class Program
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin one-norm condition" +
        " number demo ");

      // 1. set up source matrix
      double[][] A = new double[4][];
      A[0] = new double[] { 3, 0, -2, 5 };
      A[1] = new double[] { -1, 4, 6, 3 };
      A[2] = new double[] { 4, 1, 0, 3 };
      A[3] = new double[] { -3, 2, 4, 5 };
      // A[4] = new double[] { 5, 3, -1, 0 };

      Console.WriteLine("\nSource (square) matrix A: ");
      MatShow(A, 1, 6);

      double cnA = MatOneNormCondition(A);
      Console.WriteLine("\nThe one-norm condition " +
        "number = " + cnA.ToString("F4"));

      double[][] B = new double[5][];
      B[0] = new double[] { 3, 0, -2, 5 };
      B[1] = new double[] { -1, 4, 6, 3 };
      B[2] = new double[] { 4, 1, 0, 3 };
      B[3] = new double[] { -3, 2, 4, 5 };
      B[4] = new double[] { 5, 3, -1, 0 };

      Console.WriteLine("\nSource (non-square) matrix B: ");
      MatShow(B, 1, 6);

      double cnB = MatOneNormCondition(B);
      Console.WriteLine("\nThe one-norm condition " +
        "number = " + cnB.ToString("F4"));

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main()

    // ------------------------------------------------------

    static double MatOneNormCondition(double[][] A)
    {
      // one-norm condition number for a matrix in a
      // machine learning scenario where nRows >= nCols
      // cond = norm(A) * norm(Ainv)
      double a = MatOneNorm(A);
      double b = MatOneNorm(QRGramSchmidt.MatPseudoInv(A));
      return a * b;
    }

    // ------------------------------------------------------

    static double MatOneNorm(double[][] A)
    {
      // max column sum of absolute values
      // do not assume A is a square matrix
      // assume m >= n
      int m = A.Length; int n = A[0].Length;
      double result = 0;
      for (int j = 0; j < n; ++j) // each column
      {
        double colSum = 0.0;
        for (int i = 0; i < m; ++i) // walk down column
        {
          colSum += Math.Abs(A[i][j]);
        }
        if (colSum > result)
          result = colSum;
      }
      return result;
    }

    // ------------------------------------------------------

    public static void MatShow(double[][] A, int dec,
      int wid)
    {
      int nRows = A.Length;
      int nCols = A[0].Length;
      double small = 1.0 / Math.Pow(10, dec);
      for (int i = 0; i < nRows; ++i)
      {
        for (int j = 0; j < nCols; ++j)
        {
          double v = A[i][j]; // avoid "-0.00000"
          if (Math.Abs(v) < small) v = 0.0;
          Console.Write(v.ToString("F" + dec).
            PadLeft(wid));
        }
        Console.WriteLine("");
      }
    }

    // ------------------------------------------------------

  } // class Program

  // ========================================================

  public class QRGramSchmidt
  {
    // container class for relaxed Moore-Penrose
    // pseudo-inverse using QR decomposition with modified
    // Gram-Schmidt algorithm

    public static double[][] MatPseudoInv(double[][] M)
    {
      // relaxed Moore-Penrose pseudo-inverse using QR-GS
      // A = Q*R, pinv(A) = inv(R) * trans(Q)
      int nr = M.Length; int nc = M[0].Length; // aka m, n
      if (nr < nc)
        Console.WriteLine("ERROR: Works only when m >= n");
      double[][] Q; double[][] R;
      MatDecompQR(M, out Q, out R); // Gram-Schmidt
      double[][] Rinv = MatInvUpperTri(R); // std algo
      double[][] Qinv = MatTranspose(Q); // is inv(Q)
      double[][] result = MatProduct(Rinv, Qinv);
      return result;
    }

    // ------------------------------------------------------

    private static void MatDecompQR(double[][] A,
      out double[][] Q, out double[][] R)
    {
      // QR decomposition, modified Gram-Schmidt
      // 'reduced' mode (all machine learning scenarios)
      int m = A.Length; int n = A[0].Length;
      if (m < n)
        throw new Exception("m must be >= n ");

      double[][] QQ = new double[m][]; // working Q mxn
      for (int i = 0; i < m; ++i)
        QQ[i] = new double[n];
      double[][] RR = new double[n][]; // working R nxn
      for (int i = 0; i < n; ++i)
        RR[i] = new double[n];

      for (int k = 0; k < n; ++k) // main loop each col
      {
        double[] v = new double[m];
        for (int i = 0; i < m; ++i) // col k
          v[i] = A[i][k];

        for (int j = 0; j < k; ++j) // inner loop
        {
          double[] colj = new double[QQ.Length];
          for (int i = 0; i < colj.Length; ++i)
            colj[i] = QQ[i][j];
          double vecdot = 0.0;
          for (int i = 0; i < colj.Length; ++i)
            vecdot += colj[i] * v[i];
          RR[j][k] = vecdot;
          // v = v - (R[j, k] * Q[:, j])
          for (int i = 0; i < v.Length; ++i)
            v[i] = v[i] - (RR[j][k] * QQ[i][j]);
        } // j

        double normv = 0.0;
        for (int i = 0; i < v.Length; ++i)
          normv += v[i] * v[i];
        normv = Math.Sqrt(normv);
        RR[k][k] = normv;
        // Q[:, k] = v / R[k, k]
        for (int i = 0; i < QQ.Length; ++i)
          QQ[i][k] = v[i] / (RR[k][k] + 1.0e-12);
      } // k

      Q = QQ;
      R = RR;
    }

    // ------------------------------------------------------

    private static double[][] MatInvUpperTri(double[][] A)
    {
      // used to invert R from QR
      int n = A.Length; // must be square matrix
      double[][] result = MatIdentity(n);
      for (int k = 0; k < n; ++k)
      {
        for (int j = 0; j < n; ++j)
        {
          for (int i = 0; i < k; ++i)
          {
            result[j][k] -= result[j][i] * A[i][k];
          }
          result[j][k] /= (A[k][k] + 1.0e-8); // avoid 0
        }
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatTranspose(double[][] M)
    {
      int nRows = M.Length; int nCols = M[0].Length;
      double[][] result = MatMake(nCols, nRows); // note
      for (int i = 0; i < nRows; ++i)
        for (int j = 0; j < nCols; ++j)
          result[j][i] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatMake(int nRows, int nCols)
    {
      double[][] result = new double[nRows][];
      for (int i = 0; i < nRows; ++i)
        result[i] = new double[nCols];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatIdentity(int n)
    {
      double[][] result = MatMake(n, n);
      for (int i = 0; i < n; ++i)
        result[i][i] = 1.0;
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatProduct(double[][] A,
      double[][] B)
    {
      int aRows = A.Length; int aCols = A[0].Length;
      int bRows = B.Length; int bCols = B[0].Length;
      if (aCols != bRows)
        throw new Exception("Non-conformable matrices");
      double[][] result = MatMake(aRows, bCols);
      for (int i = 0; i < aRows; ++i) // each row of A
        for (int j = 0; j < bCols; ++j) // each col of B
          for (int k = 0; k < aCols; ++k)
            result[i][j] += A[i][k] * B[k][j];
      return result;
    }

  } // class QRGramSchmidt

  // ========================================================

} // ns



















