Before I start: The technique presented in this post is just an exploration of one way to compute SVD. The algorithm is (possibly) not numerically stable, and is (probably) not suitable when high accuracy (beyond 8 decimals) is needed. Additionally, the algorithm is very slow compared to standard SVD algorithms. That said, for small matrices, the technique for SVD presented here is the simplest I've seen.
The starting point of the problem is singular value decomposition (SVD). If you have a matrix A and decompose it using SVD, you get a matrix U, a vector s, and a matrix Vh, where A = U * S * Vh. Here S is a square matrix with the values of s on its diagonal. SVD has many important uses in machine learning, for example, computing a pseudo-inverse when training a linear model.
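For example, a pseudo-inverse can be computed directly from the SVD factors. Here is a minimal NumPy sketch (the example matrix is mine, and it assumes A has full rank so that no singular value is zero):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
U, s, Vh = np.linalg.svd(A, full_matrices=False)
# pseudo-inverse: A+ = V * inv(S) * U^T
A_pinv = np.matmul(Vh.T, np.matmul(np.diag(1.0 / s), U.T))
print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True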
Unfortunately, accurately computing SVD is mind-numbingly difficult. Two common SVD algorithms are Jacobi and Golub-Kahan-Reinsch. Both algorithms are very complex. But it’s possible to compute an approximation to SVD. Briefly, to approximate the SVD of a source matrix A, first compute the QR decomposition of A to get Q and R, and then compute the SVD of the R matrix.
This seems silly at first — using SVD to compute SVD — but the R result matrix of a QR decomposition is a square matrix, and it's possible to find its SVD using QR in a relatively simple way. In Python code:
def my_svd(A):
  # simple approximation using QR for tall skinny A
  Q, R = my_qr(A, mode='reduced')
  U, s, Vh = my_svd_square(R)
  U = np.matmul(Q, U)
  return U, s, Vh
The my_svd() function isn’t trivial, but it’s vastly less complicated than computing SVD using Jacobi or any other algorithm. The my_svd_square() helper function for a square matrix is also much simpler than computing SVD for a non-square matrix. In essence, the idea for approximating SVD is to break it down into two much simpler (but by no means easy) sub-problems that can be solved using QR decomposition.
The first part of the output of a demo:
Simple SVD approximation using simple QR

Source matrix A:
[[ 1.000000  6.000000  8.000000  5.000000]
 [ 0.000000  2.000000  4.000000  6.000000]
 [ 9.000000  3.000000  7.000000  7.000000]
 [-1.000000 -3.000000 -5.000000 -7.000000]
 [ 7.000000  5.000000  3.000000  1.000000]]

=================================

Using simple approximation:
Exited at iteration 19

U =
[[ 0.490569  0.357946 -0.658897 -0.443928]
 [ 0.309393  0.418534  0.228973  0.339519]
 [ 0.618658 -0.402788  0.526177 -0.422092]
 [-0.402797 -0.389987 -0.198233 -0.465342]
 [ 0.344434 -0.618365 -0.444148  0.541249]]

s =
[21.095067  8.254694  4.718203  1.611445]

Vh =
[[ 0.420588  0.395767  0.594453  0.559554]
 [-0.872923 -0.017625  0.219636  0.435263]
 [ 0.247105 -0.750910 -0.214769  0.573539]
 [ 0.007029  0.528386 -0.743142  0.410485]]

Check =
[[ 1.000000  6.000000  8.000000  5.000000]
 [ 0.000000  2.000000  4.000000  6.000000]
 [ 9.000000  3.000000  7.000000  7.000000]
 [-1.000000 -3.000000 -5.000000 -7.000000]
 [ 7.000000  5.000000  3.000000  1.000000]]
Pass
The Check matrix shows that U * S * Vh equals the source matrix A.
Here’s the second part of the demo output, which shows SVD using the built-in NumPy np.linalg.svd() function:
Using linalg.svd():

U =
[[-0.490569 -0.357946  0.658897 -0.443928]
 [-0.309393 -0.418534 -0.228973  0.339519]
 [-0.618658  0.402788 -0.526177 -0.422092]
 [ 0.402797  0.389987  0.198233 -0.465342]
 [-0.344434  0.618365  0.444148  0.541249]]

s =
[21.095067  8.254694  4.718203  1.611445]

Vh =
[[-0.420588 -0.395767 -0.594453 -0.559554]
 [ 0.872923  0.017625 -0.219636 -0.435263]
 [-0.247105  0.750910  0.214769 -0.573539]
 [ 0.007029  0.528386 -0.743142  0.410485]]

Check =
[[ 1.000000  6.000000  8.000000  5.000000]
 [-0.000000  2.000000  4.000000  6.000000]
 [ 9.000000  3.000000  7.000000  7.000000]
 [-1.000000 -3.000000 -5.000000 -7.000000]
 [ 7.000000  5.000000  3.000000  1.000000]]
Pass

End demo
The results are the same — almost. One of the hugely annoying details with SVD is that the signs of the values in the columns of U, and in the corresponding rows of Vh, are arbitrary. Singular values, by convention, are always positive or zero.
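A quick way to see the sign ambiguity: flipping the signs of any column j of U together with the matching row j of Vh leaves the product U * S * Vh unchanged. A minimal sketch:

import numpy as np

A = np.array([[1.0, 6.0], [0.0, 2.0], [9.0, 3.0]])
U, s, Vh = np.linalg.svd(A, full_matrices=False)
U[:, 0] *= -1   # flip the signs of column 0 of U ...
Vh[0, :] *= -1  # ... and of row 0 of Vh
C = np.matmul(U, np.matmul(np.diag(s), Vh))
print(np.allclose(C, A))  # True -- product is unchanged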
Note: The standard way to compute the singular value decomposition (SVD) of a square matrix A using the QR decomposition is via an iterative process based on the QR algorithm, specifically the Golub-Kahan-Reinsch algorithm. The overall process involves two main phases:
1. Bidiagonalization: Reduce the original matrix A to a bidiagonal matrix B using Householder reflections.
2. SVD of Bidiagonal Matrix: Compute the SVD of the bidiagonal matrix B using an iterative QR-based method.
My demo technique skips the bidiagonalization step.
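For reference, here is a minimal sketch of the bidiagonalization phase using Householder reflections. This is not part of my demo code (the householder() and bidiagonalize() names are just for illustration), and it assumes a tall-skinny A with rows >= cols and no all-zero subcolumns:

import numpy as np

def householder(x):
  # reflection (v, beta) such that (I - beta * outer(v, v)) * x
  # zeroes out all of x except the first element
  v = x.astype(np.float64).copy()
  v[0] += np.copysign(np.linalg.norm(x), x[0])  # assumes x not all zeros
  beta = 2.0 / np.dot(v, v)
  return v, beta

def bidiagonalize(A):
  # reduce A (rows >= cols) to upper-bidiagonal B where A = P * B * Q^T
  m, n = A.shape
  B = A.astype(np.float64).copy()
  P = np.eye(m)
  Q = np.eye(n)
  for j in range(n):
    # left reflection zeroes out B[j+1:, j]
    v, beta = householder(B[j:, j])
    H = np.eye(m)
    H[j:, j:] -= beta * np.outer(v, v)
    B = np.matmul(H, B)
    P = np.matmul(P, H)  # H is symmetric orthogonal
    if j < n-2:
      # right reflection zeroes out B[j, j+2:]
      v, beta = householder(B[j, j+1:])
      H = np.eye(n)
      H[j+1:, j+1:] -= beta * np.outer(v, v)
      B = np.matmul(B, H)
      Q = np.matmul(Q, H)
  return P, B, Q

A quick check is that np.allclose(np.matmul(P, np.matmul(B, Q.T)), A) should be True, with B zero everywhere except the diagonal and superdiagonal.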
I tested the demo code, but not thoroughly — 1,000 matrices with shape 7×5, with random values between -10.0 and +10.0 — and it passed all 1,000 tests.
I ran some tests with random-size matrices, up to 100×100, and although the tests all passed, performance was very slow — making the approximation technique not practical for most large-matrix scenarios.
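A test harness along those lines might look like this (a sketch, assuming the my_svd() function from the demo program below; the seed and tolerance are my guesses):

import numpy as np

rng = np.random.default_rng(0)  # hypothetical seed
num_pass = 0
for trial in range(1000):
  A = rng.uniform(-10.0, 10.0, size=(7, 5))
  U, s, Vh = my_svd(A)  # from the demo program below
  C = np.matmul(U, np.matmul(np.diag(s), Vh))
  if np.allclose(A, C, atol=1.0e-6):
    num_pass += 1
print(str(num_pass) + " of 1,000 tests passed ")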
Well, all in all, this was a very challenging and very interesting exploration.

Actor Richard Carlson (1912-1977) starred in several of my favorite old science fiction movies. Old movies often had to approximate various special effects, but that doesn’t detract from their appeal to me. Here are three, all from the year 1953.
Left: In “The Magnetic Monster”, a rogue scientist creates a new element that feeds on energy and doubles its mass every 11 hours. Unless it’s stopped, it will literally consume the Earth. Carlson plays Dr. Jeffrey Stewart who devises a plan to stop the menace.
Center: In “The Maze”, Carlson plays Gerald MacTeam. He inherits a creepy old Scottish castle with weird happenings at night. It turns out that the hidden master of the castle is a human-sized frog creature whose development stalled as an embryo. What the heck?! The story has a happy ending (well, except for the frog-man).
Right: In “It Came From Outer Space”, Carlson plays astronomer John Putnam. He witnesses an object crashing into the desert. It turns out that it’s an alien spaceship. The aliens can morph and they impersonate townspeople to get supplies to repair their ship. The story ends well for everyone.
Demo program:
# svd_approx_using_qr.py
# approximating svd using qr
# possibly not numerically stable. for fun only
import numpy as np
np.set_printoptions(precision=6, suppress=True,
  floatmode='fixed')
# -----------------------------------------------------------
def my_qr(A, mode):
  # QR decomposition from scratch
  # mode = 'complete' or 'reduced'
  m, n = A.shape  # tall skinny A only!
  k = np.minimum(m, n)
  R = A.copy()
  Q = np.eye(m)
  if m == n: end = n-1
  else: end = n
  for i in range(end):
    H = np.eye(m)
    # Householder routine follows
    a = R[i:, i]
    norm_a = np.copysign(np.linalg.norm(a), a[0])
    v = a / (a[0] + norm_a)
    v[0] = 1.0
    h = np.eye(len(a))
    V = np.matmul(v[:, None], v[None, :])
    h -= (2.0 / np.dot(v, v)) * V
    H[i:, i:] = h  # copy h into lower right
    Q = np.matmul(Q, H)
    R = np.matmul(H, R)
  # 'reduced' : returns Q, R with dims (m, k), (k, n)
  if mode == 'complete':
    return Q, R
  elif mode == 'reduced':
    return Q[0:m,0:k], R[0:k,0:n]
# -----------------------------------------------------------
def my_svd_square(A):
  # helper for my_svd()
  # square matrix only (such as R from QR decomp reduced)
  max_iter = 1000
  R1 = np.copy(A)  # diag will hold the s values
  U = np.eye(len(A))
  V = np.eye(len(A))
  for i in range(max_iter):
    # exit when off-diag cells of R1 are all close to 0
    # could check just every 100 iterations to improve perf
    if np.allclose(R1, np.diag(np.diag(R1)), \
      atol=1.0e-8):
      print("Exited at iteration " + str(i))
      break
    Q1, R1 = my_qr(R1, mode='reduced')
    Q2, R2 = my_qr(R1.T, mode='reduced')
    R1 = R2.T
    U = np.matmul(U, Q1)
    V = np.matmul(V, Q2)
    if i == max_iter-1:
      print("Warn: Exceeded max iterations ")
  # deal with singular values computed as negative
  s = np.copy(np.diag(R1))
  Vh = V.T
  for i in range(len(s)):
    if s[i] < 0.0:
      s[i] = -s[i]
      Vh[i,:] *= -1
  return U, s, Vh
# -----------------------------------------------------------
# -----------------------------------------------------------
def my_svd(A):
  # simple approximation using QR for tall skinny A
  Q, R = my_qr(A, mode='reduced')
  U, s, Vh = my_svd_square(R)
  U = np.matmul(Q, U)
  return U, s, Vh
# -----------------------------------------------------------
print("\nSimple SVD approximation using simple QR ")
A = np.array([
  [ 1,  6,  8,  5],
  [ 0,  2,  4,  6],
  [ 9,  3,  7,  7],
  [-1, -3, -5, -7],
  [ 7,  5,  3,  1]], dtype=np.float64)
print("\nSource matrix A: ")
print(A)
print("\n================================= ")
print("\nUsing simple approximation: ")
U, s, Vh = my_svd(A)
print("\nU = "); print(U)
print("\ns = "); print(s)
print("\nVh = "); print(Vh)
C = np.matmul(U, np.matmul(np.diag(s), Vh))
print("\nCheck = ")
print(C)
if np.allclose(A, C):
  print("Pass ")
print("\n================================= ")
print("\nUsing linalg.svd(): ")
U, s, Vh = np.linalg.svd(A, full_matrices=False)
print("\nU = "); print(U)
print("\ns = "); print(s)
print("\nVh = "); print(Vh)
C = np.matmul(U, np.matmul(np.diag(s), Vh))
print("\nCheck = ")
print(C)
if np.allclose(A, C):
  print("Pass ")
print("\nEnd demo ")
