“Linear Regression with Pseudo-Inverse Training Using JavaScript” in Visual Studio Magazine

I wrote an article titled “Linear Regression with Pseudo-Inverse Training Using JavaScript” in the February 2026 edition of Microsoft Visual Studio Magazine. See https://visualstudiomagazine.com/articles/2026/02/02/linear-regression-with-pseudo-inverse-training-using-javascript.aspx.

The goal of a machine learning regression problem is to predict a single numeric value. For example, you might want to predict the bank account balance of an employee based on annual income, age, and years of work experience. There are roughly a dozen main regression techniques, such as nearest neighbors regression, kernel ridge regression, neural network regression, and gradient boosting regression. Linear regression is the most fundamental technique.

The form of a linear regression prediction equation is y’ = (w0 * x0) + (w1 * x1) + . . . + (wn * xn) + b where y’ is the predicted value, the xi are predictor values, the wi are constants called model weights, and b is a constant called the bias. For example, y’ = predicted balance = (-0.54 * income) + (-0.38 * age) + (0.17 * experience) + 0.62. Training the model is the process of finding values of the weights and the bias so that predicted y values are close to the known correct target y values in a set of training data.
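The prediction equation is just a weighted sum plus a bias. A minimal JavaScript sketch (function and variable names here are illustrative, not from the article's demo code):

```javascript
// Compute y' = (w0 * x0) + (w1 * x1) + . . . + (wn * xn) + b
function predict(x, weights, bias) {
  let sum = 0.0;
  for (let i = 0; i < x.length; i++) {
    sum += weights[i] * x[i];  // accumulate wi * xi
  }
  return sum + bias;
}

// Using the example coefficients from the text:
// y' = (-0.54 * income) + (-0.38 * age) + (0.17 * experience) + 0.62
const yPred = predict([0.30, 0.40, 0.50], [-0.54, -0.38, 0.17], 0.62);
console.log(yPred.toFixed(4));  // 0.3910
```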

There are many different techniques to train a linear regression model. Three of the most common are 1.) stochastic gradient descent (SGD), 2.) relaxed Moore-Penrose pseudo-inverse, and 3.) closed form training via the normal equations. My article explains how to implement the relaxed MP pseudo-inverse training technique.

The output of the article demo program is:

Linear regression with pseudo-inverse (QR-Householder)
 training using JavaScript.

Loading synthetic train (200) and test (40) data from file

First three train X:
 -0.1660   0.4406  -0.9998  -0.3953  -0.7065
  0.0776  -0.1616   0.3704  -0.5911   0.7562
 -0.9452   0.3409  -0.1654   0.1174  -0.7192

First three train y:
   0.4840
   0.1568
   0.8054

Creating and training model
Done

Model weights/coefficients:
 -0.2656   0.0333  -0.0454   0.0358  -0.1146
Model bias/intercept: 0.3619

Evaluating model

Train acc (within 0.10) = 0.4600
Test acc (within 0.10) = 0.6500

Train MSE = 0.0026
Test MSE = 0.0020

Predicting for x =
  -0.1660    0.4406   -0.9998   -0.3953   -0.7065
Predicted y = 0.5329

End demo
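The demo output above reports an accuracy (fraction of predictions "within 0.10" of the target) and mean squared error. A minimal sketch of how metrics like these can be computed follows; the names are illustrative, the article's exact implementation may differ, and "within 0.10" is interpreted here as an absolute closeness threshold:

```javascript
// Fraction of predictions that are within eps of their targets.
function accuracy(preds, targets, eps) {
  let nCorrect = 0;
  for (let i = 0; i < preds.length; i++) {
    if (Math.abs(preds[i] - targets[i]) < eps) nCorrect++;
  }
  return nCorrect / preds.length;
}

// Mean squared error between predictions and targets.
function mse(preds, targets) {
  let sum = 0.0;
  for (let i = 0; i < preds.length; i++) {
    const d = preds[i] - targets[i];
    sum += d * d;
  }
  return sum / preds.length;
}

const preds = [0.50, 0.20, 0.90];
const targets = [0.48, 0.35, 0.88];
console.log(accuracy(preds, targets, 0.10));  // 2 of 3 within 0.10
console.log(mse(preds, targets).toFixed(4));
```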

In theory, the weight values of a linear regression model can be solved for using the equation w = inv(X) * y, where w is a vector that holds the weights and the bias, X is a “design matrix,” which is the matrix of training predictor data with a leading column of 1.0 values prepended, inv() is the matrix inverse operation, * means matrix-to-vector multiplication, and y is a vector of the target values. However, this equation won’t work directly because the matrix inverse applies only to square matrices (same number of rows as columns), and a design matrix is almost never square in machine learning scenarios.
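Building the design matrix is the easy part: prepend a column of 1.0 values to the predictor data so that the bias is absorbed as the first weight. A sketch (names illustrative):

```javascript
// Prepend a leading column of 1.0 values to the predictor matrix,
// turning it into a "design matrix" so the bias becomes weight w0.
function makeDesign(X) {
  return X.map(row => [1.0, ...row]);
}

const X = [[-0.1660, 0.4406], [0.0776, -0.1616]];
const D = makeDesign(X);
console.log(D);  // [[1, -0.166, 0.4406], [1, 0.0776, -0.1616]]
```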

One work-around is to use what’s called the relaxed Moore-Penrose pseudo-inverse. The math equation is w = pinv(X) * y. This technique works because the pseudo-inverse applies to a matrix of any shape. The technique is called relaxed because in a machine learning scenario, the pseudo-inverse implementation doesn’t have to fulfill all the mathematical requirements of a true MP pseudo-inverse.
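To make w = pinv(X) * y concrete, here is a from-scratch sketch that uses the normal-equations identity pinv(X) = inv(XᵀX) * Xᵀ, which is valid when X has full column rank. Note that this is not the article's approach: the demo uses a QR-Householder decomposition, which is more numerically stable. All names here are illustrative.

```javascript
function matTranspose(A) {
  return A[0].map((_, j) => A.map(row => row[j]));
}

function matMul(A, B) {
  const n = A.length, k = B.length, m = B[0].length;
  const C = Array.from({ length: n }, () => new Array(m).fill(0.0));
  for (let i = 0; i < n; i++)
    for (let p = 0; p < k; p++)
      for (let j = 0; j < m; j++)
        C[i][j] += A[i][p] * B[p][j];
  return C;
}

function matVec(A, v) {
  return A.map(row => row.reduce((s, a, j) => s + a * v[j], 0.0));
}

// Matrix inverse via Gauss-Jordan elimination with partial pivoting.
function matInv(M) {
  const n = M.length;
  const A = M.map((row, i) =>
    [...row, ...Array.from({ length: n }, (_, j) => (i === j ? 1.0 : 0.0))]);
  for (let col = 0; col < n; col++) {
    let piv = col;
    for (let r = col + 1; r < n; r++)
      if (Math.abs(A[r][col]) > Math.abs(A[piv][col])) piv = r;
    [A[col], A[piv]] = [A[piv], A[col]];       // swap pivot row up
    const d = A[col][col];
    for (let j = 0; j < 2 * n; j++) A[col][j] /= d;
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const f = A[r][col];
      for (let j = 0; j < 2 * n; j++) A[r][j] -= f * A[col][j];
    }
  }
  return A.map(row => row.slice(n));  // right half is the inverse
}

// w = pinv(X) * y computed as inv(X^T X) * X^T * y.
// If X is a design matrix, the bias comes back as w[0].
function solveWeights(X, y) {
  const Xt = matTranspose(X);
  return matVec(matMul(matInv(matMul(Xt, X)), Xt), y);
}

// Tiny example: data generated exactly by y = 0.5 + 2.0 * x.
const X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]];  // design matrix
const y = [0.5, 2.5, 4.5];
const w = solveWeights(X, y);
console.log(w.map(v => v.toFixed(4)));  // bias ~ 0.5000, weight ~ 2.0000
```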

Implementing linear regression with relaxed MP pseudo-inverse training from scratch requires a bit of effort compared to using a library function, such as those in the Python scikit-learn library. But a from-scratch implementation allows you to easily integrate a prediction model with other Web-oriented systems, and it allows you to easily modify the system. For example, you can add error checking that is relevant to your particular problem scenario, or you can add diagnostic console.log() statements to monitor training.



Linear regression is a classical machine learning technique. People in tubes is a classical science fiction movie technique.

Left: In “Lifeforce” (1985), a space crew discovers a huge derelict spacecraft that contains bodies in angular tubes. The crew brings the bodies back to Earth. Bad idea. Space vampires. Pretty good movie. I give it my personal B grade.

Center: In “Osiris” (2025), a team of U.S. Special Forces members is mysteriously abducted. They awaken in some pods on an alien spaceship. The aliens are bad. I give this movie a C+ grade.

Right: In “The Creation of the Humanoids” (1962), in the 23rd century, after a nuclear war, society has become dependent on robots. There are extremist human groups and extremist robot groups. The movie is slow and chatty. Some days I like the movie (grade B) and some days I don’t (grade C-).


This entry was posted in JavaScript, Machine Learning.