One morning before work, I figured I’d implement a function to compute the determinant of a matrix using the row echelon form technique, using the C# language. And so I did.
In machine learning scenarios, the determinant is most often used to check if a matrix inverse exists. If the determinant is 0, the inverse doesn’t exist. If the determinant is anything else, the inverse does exist.
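In code, the invertibility check is just a comparison of the determinant against zero (in practice, against a small tolerance, because floating point arithmetic rarely produces an exact 0). A minimal sketch for the 2-by-2 case, using the explicit ad - bc formula (the Det2x2 and IsInvertible2x2 helper names are mine, not part of the demo program below):

```csharp
using System;

public static class DetCheck
{
  // Explicit 2x2 determinant: ad - bc
  public static double Det2x2(double[][] m)
  {
    return (m[0][0] * m[1][1]) - (m[0][1] * m[1][0]);
  }

  // An inverse exists only when the determinant is non-zero.
  // Compare against a small tolerance, not exactly 0.0.
  public static bool IsInvertible2x2(double[][] m)
  {
    return Math.Abs(Det2x2(m)) > 1.0e-8;
  }
}
```

For example, the matrix with rows (2, 9) and (1, 8) has determinant 2*8 - 9*1 = 7, so it is invertible; the matrix with rows (1, 2) and (2, 4) has determinant 0, so it is not.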
There are several algorithms that can be used to compute a matrix determinant. Based on my experience, the two most common are LU/LUP decomposition and Gaussian elimination to row echelon form. For my implementation, I used reduction to row echelon form, which is just Gaussian elimination by another name.
Maybe the best way to understand row echelon form is to look at a concrete example of a 7-by-7 matrix that's in row echelon form:
3.8  3.7  8.2  7.6  1.5  2.2  8.3
0.0  0.0  5.4  3.8  1.8  0.0  4.9
0.0  0.0  0.0  8.4  3.3  6.3  2.6
0.0  0.0  0.0  0.0  0.0  5.7  0.5
0.0  0.0  0.0  0.0  0.0  2.3  8.1
0.0  0.0  0.0  0.0  0.0  0.0  0.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0
A square matrix is in row echelon form when a.) any rows of all zeros are at the bottom, and b.) if you start at the upper left corner and draw a border so that the 0.0 entries are below it, you get a staircase shape (that isn't necessarily symmetric). The word "echelon" comes from the French échelon (the rung of a ladder), which traces back to the Latin scala ("ladder" or "stairs").
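The staircase condition can be stated precisely: the count of leading zeros in each row must never decrease as you go from top to bottom (this also forces any all-zero rows to the bottom). A minimal sketch of a checker based on that idea (the IsRowEchelon helper name is mine, not part of the demo program, and it uses exact comparison to 0.0 for illustration):

```csharp
using System;

public static class EchelonCheck
{
  // Count the leading 0.0 entries in a row.
  // Exact comparison to 0.0 is used here for simplicity.
  static int LeadingZeros(double[] row)
  {
    int count = 0;
    while (count < row.Length && row[count] == 0.0)
      ++count;
    return count;
  }

  // True when leading-zero counts never decrease from top
  // to bottom -- the "staircase" shape described above.
  public static bool IsRowEchelon(double[][] m)
  {
    int prev = -1;
    for (int i = 0; i < m.Length; ++i)
    {
      int lz = LeadingZeros(m[i]);
      if (lz < prev) return false;
      prev = lz;
    }
    return true;
  }
}
```

Applied to the 7-by-7 example above, the leading-zero counts are 0, 2, 3, 5, 5, 7, 7 -- non-decreasing, so the matrix passes the check.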
The source matrix is converted to row echelon form using three operations:
1.) Any two rows can be swapped.
2.) Any row (or a multiple of it) can be added to another row.
3.) Any row can be multiplied by a constant.
The determinant starts with a value of 1.0. If two rows are swapped, the determinant is multiplied by -1 (i.e., changes sign). If you add a multiple of one row to another, the determinant does not change. If you multiply a row by a constant k, the determinant is also multiplied by k.
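These three rules can be verified numerically on a 2-by-2 matrix, where the determinant has the explicit formula ad - bc. A small sketch (the helper names are mine, invented for this illustration):

```csharp
using System;

public static class RowOpDemo
{
  // Explicit 2x2 determinant: ad - bc
  public static double Det(double[][] m)
  {
    return (m[0][0] * m[1][1]) - (m[0][1] * m[1][0]);
  }

  // Rule 1: swapping the two rows flips the sign of the determinant.
  public static double[][] Swap(double[][] m)
  {
    return new double[][] { m[1], m[0] };
  }

  // Rule 2: adding k * row0 to row1 leaves the determinant unchanged.
  public static double[][] AddMultiple(double[][] m, double k)
  {
    return new double[][] {
      m[0],
      new double[] { m[1][0] + k * m[0][0], m[1][1] + k * m[0][1] } };
  }

  // Rule 3: multiplying row0 by k multiplies the determinant by k.
  public static double[][] ScaleRow0(double[][] m, double k)
  {
    return new double[][] {
      new double[] { k * m[0][0], k * m[0][1] },
      m[1] };
  }
}
```

For the matrix with rows (2, 9) and (1, 8), the determinant is 7. Swapping the rows gives -7, adding 3 times row 1 to row 2 still gives 7, and multiplying row 1 by 4 gives 28.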
The output of my demo is:
Begin matrix determinant using Gaussian elimination to row echelon form

Source matrix:
  2.0  9.0
  1.0  8.0

Determinant = 7.0000

Source matrix:
  5.0  9.0  7.0
  3.0  8.0 -1.0
 -2.0  1.0  6.0

Determinant = 234.0000

Source matrix:
  1.0 -4.0  3.0  0.0
  2.0  8.0  6.0 -1.0
  9.0  7.0  5.0 -1.0
 -1.0  0.0  2.0  3.0

Determinant = -1103.0000

Source matrix:
  1.0  4.0  3.0  0.0
  2.0  8.0  6.0  0.0
  9.0  7.0  5.0  1.0
 -1.0  0.0 -2.0 -3.0

Determinant = 0.0000

End demo
A good way to warm my brain up to start a day.

The row echelon reduction algorithm is, in my opinion, the simplest algorithm to compute a matrix determinant. Many of the guys I work with have an affinity for reduction of all kinds. I’ve always been fascinated by reduction in the form of scale models to represent reality. In particular, I’ve always loved model mine trains, perhaps due, in part, to riding on the old mine train rides at Disneyland and Knott’s Berry Farm, just a few minutes away from where I grew up in Southern California. Here’s a beautiful example of a model coal mine layout from “modelrailwaylayoutplans.com”.
Demo code:

using System;
using System.IO;
using System.Collections.Generic;

namespace MatrixDeterminantEchelon
{
  internal class MatrixDeterminantEchelonProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin matrix determinant using" +
        " Gaussian elimination to row echelon form ");

      // first example
      double[][] A = new double[2][];
      A[0] = new double[] { 2, 9 };
      A[1] = new double[] { 1, 8 };
      Console.WriteLine("\nSource matrix: ");
      MatShow(A, 1, 5);
      double detA = MatDeterminantEchelon(A);
      Console.WriteLine("\nDeterminant = " +
        detA.ToString("F4"));

      // second example
      double[][] B = new double[3][];
      B[0] = new double[] { 5, 9, 7 };
      B[1] = new double[] { 3, 8, -1 };
      B[2] = new double[] { -2, 1, 6 };
      Console.WriteLine("\nSource matrix: ");
      MatShow(B, 1, 5);
      double detB = MatDeterminantEchelon(B);
      Console.WriteLine("\nDeterminant = " +
        detB.ToString("F4"));

      // third example
      double[][] C = new double[4][];
      C[0] = new double[] { 1, -4, 3, 0 };
      C[1] = new double[] { 2, 8, 6, -1 };
      C[2] = new double[] { 9, 7, 5, -1 };
      C[3] = new double[] { -1, 0, 2, 3 };
      Console.WriteLine("\nSource matrix: ");
      MatShow(C, 1, 5);
      double detC = MatDeterminantEchelon(C);
      Console.WriteLine("\nDeterminant = " +
        detC.ToString("F4"));

      // fourth example -- a singular matrix
      double[][] D = new double[4][];
      D[0] = new double[] { 1, 4, 3, 0 };
      D[1] = new double[] { 2, 8, 6, 0 };  // r2 = 2 * r1
      D[2] = new double[] { 9, 7, 5, 1 };
      D[3] = new double[] { -1, 0, -2, -3 };
      Console.WriteLine("\nSource matrix: ");
      MatShow(D, 1, 5);
      double detD = MatDeterminantEchelon(D);
      Console.WriteLine("\nDeterminant = " +
        detD.ToString("F4"));

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main()

    static double MatDeterminantEchelon(double[][] A)
    {
      // using the row echelon (Gaussian elimination) algorithm
      int n = A.Length;  // assume A is square

      // small n shortcuts
      if (n == 1)
        return A[0][0];  // det of a 1x1 is its single value
      if (n == 2)
        return (A[0][0] * A[1][1]) - (A[0][1] * A[1][0]);
      if (n == 3)
      {
        double a = (A[1][1] * A[2][2]) - (A[1][2] * A[2][1]);
        double b = (A[1][0] * A[2][2]) - (A[1][2] * A[2][0]);
        double c = (A[1][0] * A[2][1]) - (A[1][1] * A[2][0]);
        double result =
          (A[0][0] * a) - (A[0][1] * b) + (A[0][2] * c);
        return result;
      }

      // row echelon for n = 4 or larger
      double det = 1.0;

      // make a copy of A so the source matrix isn't modified
      double[][] B = new double[n][];
      for (int i = 0; i < n; ++i)
        B[i] = new double[n];
      for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
          B[i][j] = A[i][j];

      for (int i = 0; i < n; ++i)  // main loop
      {
        int pr = i;  // pivot row: largest magnitude in column i
        for (int r = i + 1; r < n; ++r)
        {
          if (Math.Abs(B[r][i]) > Math.Abs(B[pr][i]))
            pr = r;
        }

        if (pr != i)  // swap rows pr and i
        {
          for (int k = 0; k < n; ++k)
          {
            double tmp = B[i][k];
            B[i][k] = B[pr][k];
            B[pr][k] = tmp;
          }
          det *= -1.0;  // a row swap flips the sign
        }

        if (Math.Abs(B[i][i]) < 1.0e-8)
          return 0.0;  // matrix is singular (or nearly so)

        det *= B[i][i];  // det = product of pivots, sign-adjusted

        for (int r = i + 1; r < n; ++r)  // eliminate below pivot
        {
          double scale = B[r][i] / B[i][i];  // pivot is not 0 here
          for (int k = i; k < n; ++k)
            B[r][k] -= scale * B[i][k];
        }
      } // main loop

      // MatShow(B, 4, 9);  // B is now in row echelon form
      return det;
    } // MatDeterminantEchelon

    static void MatShow(double[][] A, int dec, int wid)
    {
      for (int i = 0; i < A.Length; ++i)
      {
        for (int j = 0; j < A[0].Length; ++j)
        {
          double v = A[i][j];
          Console.Write(v.ToString("F" + dec).PadLeft(wid));
        }
        Console.WriteLine("");
      }
    }

    static double[][] MatLoad(string fn, int[] usecols,
      char sep, string comment)
    {
      // load a matrix from a text file; usecols = column indices
      List<double[]> result = new List<double[]>();
      string line = "";
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        string[] tokens = line.Split(sep);
        List<double> lst = new List<double>();
        for (int j = 0; j < usecols.Length; ++j)
          lst.Add(double.Parse(tokens[usecols[j]]));
        double[] row = lst.ToArray();
        result.Add(row);
      }
      sr.Close(); ifs.Close();
      return result.ToArray();
    }

  } // class Program
} // ns
