Singular value decomposition (SVD) breaks down a matrix A into a matrix U, a vector s, and a matrix Vh, such that A = U * S * Vh. Here S is a diagonal matrix with the elements of vector s on the diagonal and 0s elsewhere.
If you want to compute the Moore-Penrose pseudo-inverse of A using SVD, it is A_pinv = V * Sinv * Uh, where V is the transpose of Vh, Sinv is S with each diagonal element replaced by its reciprocal, and Uh is the transpose of U.
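As a quick sanity check of that identity (a sketch, not part of the demo; it assumes all singular values are nonzero, so no epsilon guard is needed):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.], [5., 6.]])  # 3x2, full column rank

U, s, Vh = np.linalg.svd(A, full_matrices=False)  # reduced SVD
Sinv = np.diag(1.0 / s)                           # reciprocals of singular values
A_pinv = Vh.T @ Sinv @ U.T                        # V * Sinv * Uh

print(np.allclose(A_pinv, np.linalg.pinv(A)))     # True
```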
As it turns out, the NumPy svd() function accepts a source matrix to decompose, and a somewhat strange full_matrices Boolean parameter. The default value is full_matrices=True, but to compute the pseudo-inverse using the simple square-Sinv approach above, you must explicitly specify full_matrices=False.
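The parameter controls the shapes of the returned factors. A small sketch (using a throwaway 4x3 matrix, the same shape as the demo's) makes the difference concrete:

```python
import numpy as np

A = np.arange(12, dtype=float).reshape(4, 3)  # any 4x3 matrix; shapes are what matter

U, s, Vh = np.linalg.svd(A, full_matrices=True)   # the default
print(U.shape, s.shape, Vh.shape)                 # (4, 4) (3,) (3, 3)

U, s, Vh = np.linalg.svd(A, full_matrices=False)  # reduced SVD
print(U.shape, s.shape, Vh.shape)                 # (4, 3) (3,) (3, 3)
```

With full_matrices=True, U picks up an extra column for each row beyond the number of columns, which is what trips up the square-Sinv multiplication later.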
Here’s a demo. It starts by setting up a matrix that has more rows than columns, such as the kind you’d find as training data in a machine learning problem. The demo computes the pseudo-inverse directly using the np.linalg.pinv() function:
import numpy as np
np.set_printoptions(precision=4, suppress=True)
A = np.array([
  [1, -2, 3],
  [-5, 0, 2],
  [8, 5, -4],
  [-1, 0, 9]])
print("\nA: ")
print(A)
print("\n============================== ")
print("\nA pinv() using np.linalg.pinv(): \n")
A_pinv = np.linalg.pinv(A)
print(A_pinv)
The output is:
A:
[[ 1 -2  3]
 [-5  0  2]
 [ 8  5 -4]
 [-1  0  9]]

==============================

A pinv() using np.linalg.pinv():

[[ 0.0979 -0.1201  0.0392  0.0115]
 [-0.1707  0.1607  0.1317  0.0797]
 [ 0.0297  0.0038  0.0119  0.1057]]
Next, the demo computes the pseudo-inverse from scratch using SVD with full_matrices set to False:
print("\nComputing SVD with full_matrices=False ")
U, s, Vh = np.linalg.svd(A, full_matrices=False)
print("\nU: ")
print(U)
print("\ns: ")
print(s)
print("\nVh: ")
print(Vh)
Uh = U.T
V = Vh.T
p = len(s)
Sinv = np.zeros((p,p))
for i in range(p):
  Sinv[i][i] = 1.0 / (s[i] + 1.0e-12)  # epsilon guards against division by zero
# V * Sinv * Uh
A_pinv = np.matmul(V, np.matmul(Sinv,Uh))
print("\nA pinv() scratch with full_matrices=False \n")
print(A_pinv)
The output is the same as when computed directly by np.linalg.pinv() and is correct:
Computing SVD with full_matrices=False

U:
[[ 0.1662 -0.3022  0.6397]
 [ 0.3568  0.2561 -0.6442]
 [-0.7379 -0.5111 -0.3448]
 [ 0.5483 -0.7628 -0.2387]]

s:
[12.8087  7.423   3.292 ]

Vh:
[[-0.63   -0.314   0.7103]
 [-0.6613 -0.2628 -0.7026]
 [ 0.4073 -0.9123 -0.0421]]

A pinv() scratch with full_matrices=False

[[ 0.0979 -0.1201  0.0392  0.0115]
 [-0.1707  0.1607  0.1317  0.0797]
 [ 0.0297  0.0038  0.0119  0.1057]]
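Beyond eyeballing the numbers against pinv(), a true Moore-Penrose pseudo-inverse satisfies A * A_pinv * A = A and A_pinv * A * A_pinv = A_pinv. A quick check using the demo's matrix (a sketch; the epsilon guard is dropped since all three singular values are clearly nonzero):

```python
import numpy as np

A = np.array([[1, -2, 3],
              [-5, 0, 2],
              [8, 5, -4],
              [-1, 0, 9]], dtype=float)

U, s, Vh = np.linalg.svd(A, full_matrices=False)
A_pinv = Vh.T @ np.diag(1.0 / s) @ U.T          # V * Sinv * Uh

print(np.allclose(A @ A_pinv @ A, A))            # True
print(np.allclose(A_pinv @ A @ A_pinv, A_pinv))  # True
```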
However, when the demo tries to compute the pseudo-inverse using SVD with the default full_matrices=True, the program fails. The code:
print("\nComputing SVD with full_matrices=True ")
U, s, Vh = np.linalg.svd(A, full_matrices=True) # default!
print("\nU: ")
print(U)
print("\ns: ")
print(s)
print("\nVh: ")
print(Vh)
print("\n")
Uh = U.T
V = Vh.T
p = len(s)
Sinv = np.zeros((p,p))
for i in range(p):
  Sinv[i][i] = 1.0 / (s[i] + 1.0e-12)  # epsilon guards against division by zero
# V * Sinv * Uh
A_pinv = np.matmul(V, np.matmul(Sinv,Uh))
print("\nA pinv() scratch with full_matrices=True \n")
print(A_pinv)
The output:
Computing SVD with full_matrices=True

U:
[[ 0.1662 -0.3022  0.6397 -0.6869]
 [ 0.3568  0.2561 -0.6442 -0.6262]
 [-0.7379 -0.5111 -0.3448 -0.2748]
 [ 0.5483 -0.7628 -0.2387  0.246 ]]

s:
[12.8087  7.423   3.292 ]

Vh:
[[-0.63   -0.314   0.7103]
 [-0.6613 -0.2628 -0.7026]
 [ 0.4073 -0.9123 -0.0421]]

Traceback (most recent call last):
  File "C:\VSM\PseudoInverse\pseudoinv_numpy.py", line 65, in <module>
    A_pinv = np.matmul(V, np.matmul(Sinv,Uh))
                          ^^^^^^^^^^^^^^^^^^
ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 4 is different from 3)
So, the question in my mind is: why is the default for svd() full_matrices=True, when that causes this calculation of the pseudo-inverse to fail?
I have no definitive answer to that. I know that SVD is used for dozens of different purposes in mathematics, so there may well be scenarios where full_matrices=True is the correct option.
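For what it's worth, there is a standard workaround (an aside, not part of the demo): with full_matrices=True, the implicit S matrix is rectangular (4x3 here), so Sinv must be built as a rectangular matrix of the transposed shape (3x4) rather than a square one. The zero block in Sinv then absorbs the extra columns of U and the scratch computation goes through. A sketch:

```python
import numpy as np

A = np.array([[1, -2, 3],
              [-5, 0, 2],
              [8, 5, -4],
              [-1, 0, 9]], dtype=float)
n, m = A.shape  # 4 rows, 3 columns

U, s, Vh = np.linalg.svd(A, full_matrices=True)  # U is 4x4, Vh is 3x3
Sinv = np.zeros((m, n))                          # rectangular 3x4: transpose shape of S
for i in range(len(s)):
  Sinv[i][i] = 1.0 / s[i]                        # reciprocals on the diagonal
A_pinv = Vh.T @ Sinv @ U.T                       # (3x3)(3x4)(4x4) = 3x4

print(np.allclose(A_pinv, np.linalg.pinv(A)))    # True
```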

Over time, arcade machines have become more sensitive but much less interesting. Here are two fully_sensitive=False coin-operated electro-mechanical games from Marvin's Marvelous Mechanical Museum in West Bloomfield Township, Michigan.
Left: In "Dr. Chopandoff", you place your hand in the metal glove. The doctor slams the guillotine blade down, apparently chopping your fingers off. Nice illusion.
Right: This old 1930s-era automata machine from the UK depicts the Spanish Inquisition, complete with automated whipping, branding, and racking motions. Holy Moly!
Demo program:
# pseudoinv_numpy.py
import numpy as np
np.set_printoptions(precision=4, suppress=True)
A = np.array([
  [1, -2, 3],
  [-5, 0, 2],
  [8, 5, -4],
  [-1, 0, 9]])
print("\nA: ")
print(A)
print("\n============================== ")
print("\nA pinv() using np.linalg.pinv(): \n")
A_pinv = np.linalg.pinv(A)
print(A_pinv)
print("\n============================== ")
print("\nComputing SVD with full_matrices=False ")
U, s, Vh = np.linalg.svd(A, full_matrices=False)
print("\nU: ")
print(U)
print("\ns: ")
print(s)
print("\nVh: ")
print(Vh)
Uh = U.T
V = Vh.T
p = len(s)
Sinv = np.zeros((p,p))
for i in range(p):
  Sinv[i][i] = 1.0 / (s[i] + 1.0e-12)  # epsilon guards against division by zero
# V * Sinv * Uh
A_pinv = np.matmul(V, np.matmul(Sinv,Uh))
print("\nA pinv() scratch with full_matrices=False \n")
print(A_pinv)
print("\n============================== ")
print("\nComputing SVD with full_matrices=True ")
U, s, Vh = np.linalg.svd(A, full_matrices=True)  # default!
print("\nU: ")
print(U)
print("\ns: ")
print(s)
print("\nVh: ")
print(Vh)
print("\n")
Uh = U.T
V = Vh.T
p = len(s)
Sinv = np.zeros((p,p))
for i in range(p):
  Sinv[i][i] = 1.0 / (s[i] + 1.0e-12)  # epsilon guards against division by zero
# V * Sinv * Uh
A_pinv = np.matmul(V, np.matmul(Sinv,Uh))
print("\nA pinv() scratch with full_matrices=True \n")
print(A_pinv)
print("\n============================== ")
