Four Different Techniques to Train a Linear Regression Model Using C#

The goal of a machine learning regression model is to predict a single numeric value. For example, you might want to predict a person’s credit score based on age, annual income, bank account balance, and so on. Common regression techniques include: linear regression, nearest neighbors regression, quadratic regression, kernel ridge regression, neural network regression, random forest regression, and gradient boosting regression. Each technique has pros and cons.

The form of a linear regression model is y’ = (w0 * x0) + (w1 * x1) + . . . + b, where y’ is the predicted value, the xi are predictor variables, the wi are constants called weights (or coefficients), and b is a constant called the bias (or intercept). Training is the process of finding good values for the weights and bias so that predicted y values closely match known correct target y values in a set of training data.
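
For example, with hypothetical weights w = (0.5, -0.2), bias b = 0.1, and input x = (3.0, 4.0), the predicted value is y’ = (0.5 * 3.0) + (-0.2 * 4.0) + 0.1 = 0.8. A minimal C# sketch of the prediction computation (all values here are made up for illustration):

// prediction sketch -- w, b, x values are hypothetical
double[] w = new double[] { 0.5, -0.2 };  // weights
double b = 0.1;  // bias
double[] x = new double[] { 3.0, 4.0 };  // predictors
double yPred = b;
for (int j = 0; j < w.Length; ++j)
  yPred += w[j] * x[j];
// yPred = 1.5 + (-0.8) + 0.1 = 0.8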

There are several techniques that can be used to train a linear regression model. I put together a demo, using the C# language, of four techniques:

1.) left pseudo-inverse via normal equations with Cholesky inverse.
2.) basic stochastic gradient descent (SGD).
3.) relaxed Moore-Penrose pseudo-inverse with QR modified Gram-Schmidt inverse.
4.) relaxed Moore-Penrose pseudo-inverse with SVD Jacobi inverse.

Technique #1 works well for most scenarios, but can fail when the training data has an unfortunate combination of very small and/or very large values. Technique #2 is iterative and works well for most training datasets, even huge ones, but requires you to specify a maximum number of training iterations and a constant called the learning rate, both of which must be determined by trial and error. Technique #3 can fail for very large datasets but is relatively easy to implement. Technique #4 is similar to #3 and is less likely to fail for large datasets, but it is very complicated to implement.
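
Under the hood, all four techniques solve the same least squares problem. If X is the design matrix (the training x values with a leading column of 1s to account for the bias) and Xt is the transpose of X, then technique #1 computes the combined bias-and-weights vector as w = inv(Xt * X) * Xt * y. Technique #3 factors X = Q * R and uses pinv(X) = inv(R) * Qt. Technique #4 factors X = U * S * Vt and uses pinv(X) = V * inv(S) * Ut. Technique #2 avoids matrix inversion entirely: for each training item, every weight is nudged by wj = wj - lrnRate * (y’ - y) * xj, which follows the gradient of the squared error.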

The output of my demo is:

Begin C# linear regression demo

Loading synthetic train (200) and test (40) data
Done

First three train X:
 -0.1660  0.4406 -0.9998 -0.3953 -0.7065
  0.0776 -0.1616  0.3704 -0.5911  0.7562
 -0.9452  0.3409 -0.1654  0.1174 -0.7192

First three train y:
  0.4840
  0.1568
  0.8054

==========================

1. Training model using left pseudo-inv via
   normal equations (Cholesky)
Done

Weights/coefficients:
-0.2656  0.0333  -0.0454  0.0358  -0.1146
Bias/constant: 0.3619

==========================

2. Training model using SGD
Setting lrnRate = 0.0010
Setting maxEpochs = 1000
epoch =     0  MSE =   0.1132
epoch =   200  MSE =   0.0026
epoch =   400  MSE =   0.0026
epoch =   600  MSE =   0.0026
epoch =   800  MSE =   0.0026
Done

Weights/coefficients:
-0.2656  0.0333  -0.0453  0.0358  -0.1146
Bias/constant: 0.3619

==========================

3. Training model using relaxed Moore-Penrose
   pseudo-inv via QR (Gram-Schmidt)
Done

Weights/coefficients:
-0.2656  0.0333  -0.0454  0.0358  -0.1146
Bias/constant: 0.3619

==========================

4. Training model using relaxed Moore-Penrose
   pseudo-inv via SVD (Jacobi)
Done

Weights/coefficients:
-0.2656  0.0333  -0.0454  0.0358  -0.1146
Bias/constant: 0.3619

==========================

End demo

The demo data is synthetic. There are just 200 training items. There are five predictor variables, so the regression model has five weights and a bias. The output shows that all four training techniques give the same model, meaning the values of the weights and the bias are the same (to four decimal places).
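
As a quick sanity check, you can apply the trained model to the first training item by hand. A minimal sketch, using the weights and bias values displayed above:

// sanity check: trained model on first training item
double[] w = new double[] { -0.2656, 0.0333, -0.0454,
  0.0358, -0.1146 };
double b = 0.3619;
double[] x = new double[] { -0.1660, 0.4406, -0.9998,
  -0.3953, -0.7065 };
double yPred = b;
for (int j = 0; j < w.Length; ++j)
  yPred += w[j] * x[j];
// yPred is approx 0.53 vs. target y = 0.4840, consistent
// with the training MSE of about 0.0026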

Good fun for me.



Machine learning regression is sort of like finding hidden patterns in datasets. Every cover of Playboy magazine (except for the very first issue in December 1953) has a logo of a bunny head. Sometimes the logo is obvious but sometimes the logo is very cleverly hidden in the cover photo. It’s kind of like a “Where’s Waldo” for adults.

Left: On the July 1982 issue, the bunny logo is sneakily incorporated into the telephone cord.

Right: On the June 1980 issue, the bunny logo is disguised as one of the butterflies.


Demo program. Very long.

using System;
using System.IO;
using System.Collections.Generic;

namespace LinearRegressionFourTrainingTechniques
{
  internal class LinearRegressionProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin C# linear regression demo ");

      // 1. load data
      Console.WriteLine("\nLoading synthetic train" +
        " (200) and test (40) data");
      string trainFile =
        "..\\..\\..\\Data\\synthetic_train_200.txt";
      int[] colsX = new int[] { 0, 1, 2, 3, 4 };
      int colY = 5;

      double[][] trainX =
        MatLoad(trainFile, colsX, ',', "#");
      double[] trainY =
        MatToVec(MatLoad(trainFile,
        new int[] { colY }, ',', "#"));

      string testFile =
        "..\\..\\..\\Data\\synthetic_test_40.txt";
      double[][] testX =
        MatLoad(testFile, colsX, ',', "#");
      double[] testY =
        MatToVec(MatLoad(testFile,
        new int[] { colY }, ',', "#"));
      Console.WriteLine("Done ");

      Console.WriteLine("\nFirst three train X: ");
      for (int i = 0; i "lt" 3; ++i)
        VecShow(trainX[i], 4, 8);

      Console.WriteLine("\nFirst three train y: ");
      for (int i = 0; i "lt" 3; ++i)
        Console.WriteLine(trainY[i].ToString("F4").
          PadLeft(8));

      //// 2. create model
      //Console.WriteLine("\nCreating linear " +
      //  "regression model ");
      //LinearRegressor model = new LinearRegressor();
      //Console.WriteLine("Done");

      // 3a. train model using left pseudo-inv
      Console.WriteLine("\n========================== ");
      Console.WriteLine("\n1. Training model using" +
        " left pseudo-inv via normal equations (Cholesky) ");
      LinearRegressor model1 = new LinearRegressor();
      model1.TrainClosed(trainX, trainY);
      Console.WriteLine("Done ");
      Console.WriteLine("\nWeights/coefficients: ");
      for (int i = 0; i "lt" model1.weights.Length; ++i)
        Console.Write(model1.weights[i].ToString("F4") +
          "  ");
      Console.WriteLine("\nBias/constant: " +
        model1.bias.ToString("F4"));

      // 3b. train model using SGD
      Console.WriteLine("\n========================== ");
      Console.WriteLine("\n2. Training model using SGD ");
      LinearRegressor model2 = new LinearRegressor();
      double lrnRate = 0.001;
      int maxEpochs = 1000;
      Console.WriteLine("Setting lrnRate = " +
        lrnRate.ToString("F4"));
      Console.WriteLine("Setting maxEpochs = " +
        maxEpochs);
      model2.TrainSGD(trainX, trainY, lrnRate, maxEpochs);
      Console.WriteLine("Done ");
      Console.WriteLine("\nWeights/coefficients: ");
      for (int i = 0; i "lt" model2.weights.Length; ++i)
        Console.Write(model2.weights[i].ToString("F4") +
          "  ");
      Console.WriteLine("\nBias/constant: " +
        model2.bias.ToString("F4"));

      // 3c. train model using MP pseudo-inv QR
      Console.WriteLine("\n========================== ");
      Console.WriteLine("\n3. Training model using" +
        " relaxed Moore-Penrose pseudo-inv via" +
        " QR (Gram-Schmidt) ");
      LinearRegressor model3 = new LinearRegressor();
      model3.TrainPinvQR(trainX, trainY);
      Console.WriteLine("Done ");
      Console.WriteLine("\nWeights/coefficients: ");
      for (int i = 0; i "lt" model3.weights.Length; ++i)
        Console.Write(model3.weights[i].ToString("F4") +
          "  ");
      Console.WriteLine("\nBias/constant: " +
        model3.bias.ToString("F4"));

      // 3d. train model using MP pseudo-inv SVD
      Console.WriteLine("\n========================== ");
      Console.WriteLine("\n4. Training model using" +
        " relaxed Moore-Penrose pseudo-inv via" +
        " SVD (Jacobi) ");
      LinearRegressor model4 = new LinearRegressor();
      model4.TrainPinvSVD(trainX, trainY);
      Console.WriteLine("Done ");
      Console.WriteLine("\nWeights/coefficients: ");
      for (int i = 0; i "lt" model4.weights.Length; ++i)
        Console.Write(model4.weights[i].ToString("F4") +
          "  ");
      Console.WriteLine("\nBias/constant: " +
        model4.bias.ToString("F4"));

      //// 4. evaluate model
      //Console.WriteLine("\nEvaluating model ");
      //double accTrain = model.Accuracy(trainX, trainY, 0.10);
      //Console.WriteLine("\nAccuracy train (within 0.10) = " +
      //  accTrain.ToString("F4"));
      //double accTest = model.Accuracy(testX, testY, 0.10);
      //Console.WriteLine("Accuracy test (within 0.10) = " +
      //  accTest.ToString("F4"));

      //double mseTrain = model.MSE(trainX, trainY);
      //Console.WriteLine("\nMSE train = " +
      //  mseTrain.ToString("F4"));
      //double mseTest = model.MSE(testX, testY);
      //Console.WriteLine("MSE test = " +
      //  mseTest.ToString("F4"));

      //// 5. use model
      //double[] x = trainX[0];
      //Console.WriteLine("\nPredicting for x = ");
      //VecShow(x, 4, 9);
      //double predY = model.Predict(x);
      //Console.WriteLine("Predicted y = " +
      //  predY.ToString("F4"));

      // 6. TODO: save model weights, bias to text file

      Console.WriteLine("\n========================== ");
      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main

    // ------------------------------------------------------
    // helpers for Main(): MatLoad, MatToVec, VecShow
    // ------------------------------------------------------

    static double[][] MatLoad(string fn, int[] usecols,
      char sep, string comment)
    {
      List"lt"double[]"gt" result = new List"lt"double[]"gt"();
      string line = "";
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        string[] tokens = line.Split(sep);
        List"lt"double"gt" lst = new List"lt"double"gt"();
        for (int j = 0; j "lt" usecols.Length; ++j)
          lst.Add(double.Parse(tokens[usecols[j]]));
        double[] row = lst.ToArray();
        result.Add(row);
      }
      sr.Close(); ifs.Close();
      return result.ToArray();
    }

    static double[] MatToVec(double[][] mat)
    {
      int nRows = mat.Length;
      int nCols = mat[0].Length;
      double[] result = new double[nRows * nCols];
      int k = 0;
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[k++] = mat[i][j];
      return result;
    }

    static void VecShow(double[] vec, int dec, int wid)
    {
      for (int i = 0; i "lt" vec.Length; ++i)
        Console.Write(vec[i].ToString("F" + dec).
          PadLeft(wid));
      Console.WriteLine("");
    }
  } // class Program

  // ========================================================

  public class LinearRegressor
  {
    public double[] weights;
    public double bias;
    private Random rnd;

    public LinearRegressor(int seed = 0)
    {
      this.weights = new double[0]; // quasi-null
      this.bias = 0.0;
      this.rnd = new Random(seed);
    }

    // ------------------------------------------------------

    public double Predict(double[] x)
    {
      double result = 0.0;
      for (int j = 0; j "lt" x.Length; ++j)
        result += x[j] * this.weights[j];
      result += this.bias;
      return result;
    }

    // ------------------------------------------------------

    public double Accuracy(double[][] dataX, double[] dataY,
      double pctClose)
    {
      int numCorrect = 0; int numWrong = 0;
      for (int i = 0; i "lt" dataX.Length; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        if (Math.Abs(predY - actualY) <
          Math.Abs(pctClose * actualY))
          ++numCorrect;
        else
          ++numWrong;
      }
      return (numCorrect * 1.0) / (numWrong + numCorrect);
    }

    // ------------------------------------------------------

    public double MSE(double[][] dataX, double[] dataY)
    {
      int n = dataX.Length;
      double sum = 0.0;
      for (int i = 0; i "lt" n; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        sum += (actualY - predY) * (actualY - predY);
      }
      return sum / n;
    }

    // ------------------------------------------------------

    public void TrainClosed(double[][] trainX,
      double[] trainY)
    {
      // train using left pinv via normal equations
      // can fail for very large trainX (then use SGD)
      int dim = trainX[0].Length;
      this.weights = new double[dim];

      double[][] X = Cholesky.MatToDesign(trainX);
      double[][] Xpinv = Cholesky.MatPseudoInv(X);
      double[] biasAndWts = 
        Cholesky.MatVecProduct(Xpinv, trainY);
      this.bias = biasAndWts[0]; // design col 0 is all 1s
      for (int i = 1; i < biasAndWts.Length; ++i)
        this.weights[i - 1] = biasAndWts[i];
      return;
    }

    // ------------------------------------------------------

    public void TrainSGD(double[][] trainX, double[] trainY,
      double lrnRate, int maxEpochs)
    {
      int n = trainX.Length;
      int dim = trainX[0].Length;
      this.weights = new double[dim];
      double lo = -0.01; double hi = 0.01;
      for (int i = 0; i "lt" dim; ++i)
        this.weights[i] = (hi - lo) *
          this.rnd.NextDouble() + lo;
      this.bias = (hi - lo) *
          this.rnd.NextDouble() + lo;

      int[] indices = new int[n];
      for (int i = 0; i "lt" n; ++i)
        indices[i] = i;

      for (int epoch = 0; epoch < maxEpochs; ++epoch)
      {
        Shuffle(indices, this.rnd);
        for (int i = 0; i "lt" n; ++i)
        {
          int ii = indices[i];
          double[] x = trainX[ii];
          double actualY = trainY[ii];
          double predY = this.Predict(x);

          for (int j = 0; j "lt" dim; ++j)
            this.weights[j] -= lrnRate *
              (predY - actualY) * x[j];
          this.bias -= lrnRate * (predY - actualY);
        }
        if (epoch % Math.Max(1, maxEpochs / 5) == 0)
        {
          double mse = this.MSE(trainX, trainY);
          string s1 = "epoch = " + 
            epoch.ToString().PadLeft(5);
          string s2 = "  MSE = " + 
            mse.ToString("F4").PadLeft(8);
          Console.WriteLine(s1 + s2);
        }
      }
    } // TrainSGD()

    // ------------------------------------------------------

    private static void Shuffle(int[] indices, Random rnd)
    {
      // helper for TrainSGD() -- Fisher-Yates shuffle
      int n = indices.Length;
      for (int i = 0; i < n; ++i)
      {
        int ri = rnd.Next(i, n);
        int tmp = indices[i];
        indices[i] = indices[ri];
        indices[ri] = tmp;
      }
    }

    // ------------------------------------------------------

    public void TrainPinvQR(double[][] trainX, double[] trainY)
    {
      // train using relaxed Moore-Penrose pinv, via
      // QR inverse, via QR decomp, via modified Gram-Schmidt

      int dim = trainX[0].Length;
      this.weights = new double[dim];

      double[][] X = QR_GramSchmidt.MatToDesign(trainX);
      double[][] Xpinv = QR_GramSchmidt.MatPseudoInv(X);
      double[] biasAndWts = 
        QR_GramSchmidt.MatVecProduct(Xpinv, trainY);
      this.bias = biasAndWts[0];
      for (int i = 1; i "lt" biasAndWts.Length; ++i)
        this.weights[i - 1] = biasAndWts[i];
      return;
    }

    // ------------------------------------------------------

    public void TrainPinvSVD(double[][] trainX, double[] trainY)
    {
      // train using relaxed Moore-Penrose pinv, via
      // SVD inverse, via SVD decomp, via Jacobi

      int dim = trainX[0].Length;
      this.weights = new double[dim];

      double[][] X = SVD_Jacobi.MatToDesign(trainX);
      double[][] Xpinv = SVD_Jacobi.MatPseudoInv(X);
      double[] biasAndWts = 
        SVD_Jacobi.MatVecProduct(Xpinv, trainY);
      this.bias = biasAndWts[0];
      for (int i = 1; i "lt" biasAndWts.Length; ++i)
        this.weights[i - 1] = biasAndWts[i];
      return;
    }

    // ------------------------------------------------------

  } // class LinearRegressor

  // ========================================================

  public class Cholesky
  {
    // container class for MatPseudoInv(), used in Train()
    // left pseudo-inv via normal equations

    public static double[][] MatPseudoInv(double[][] A)
    {
      // left pseudo-inverse via normal equations
      // A is design matrix where nRows must be >= nCols
      // pinv = inv(At * A) * At
      // calls MatInvCholesky, calls MatDecompCholesky
      double[][] At = MatTranspose(A);
      double[][] AtA = MatProduct(At, A);
      for (int i = 0; i < AtA.Length; ++i)
        AtA[i][i] += 1.0e-8; // condition before inv
      double[][] AtAinv = MatInvCholesky(AtA);
      double[][] pinv = MatProduct(AtAinv, At);
      return pinv;
    }

    // ------------------------------------------------------

    public static double[][] MatToDesign(double[][] X)
    {
      // utility function
      // construct design matrix from X (add col of 1.0s)
      int nRows = X.Length; // aka m
      int nCols = X[0].Length; // n
      
      double[][] result = MatMake(nRows, nCols + 1);
      for (int i = 0; i "lt" nRows; ++i)
      {
        result[i][0] = 1.0;
        for (int j = 1; j "lt" nCols + 1; ++j)
          result[i][j] = X[i][j - 1];
      }
      return result;
    }

    // ------------------------------------------------------

    public static double[] MatVecProduct(double[][] M,
      double[] v)
    {
      // M * v. return a regular vector
      int nRows = M.Length;
      int nCols = M[0].Length;
      int n = v.Length;
      if (nCols != n)
        throw new Exception("non-conform in MatVecProd");

      double[] result = new double[nRows];
      for (int i = 0; i "lt" nRows; ++i)
        for (int k = 0; k "lt" nCols; ++k)
          result[i] += M[i][k] * v[k];

      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatInvCholesky(double[][] A)
    {
      // A must be square, symmetric, positive definite
      // calls MatDecompCholesky()
      int m = A.Length;
      int n = A[0].Length;  // m == n
      double[][] L = MatDecompCholesky(A); // L * Lt = A

      double[][] result = MatMake(n, n); // Identity
      for (int i = 0; i "lt" n; ++i)
        result[i][i] = 1.0;

      for (int k = 0; k "lt" n; ++k)
      {
        for (int j = 0; j "lt" n; j++)
        {
          for (int i = 0; i "lt" k; i++)
          {
            result[k][j] -= result[i][j] * L[k][i];
          }
          result[k][j] /= L[k][k];
        }
      }

      for (int k = n - 1; k "gte" 0; --k)
      {
        for (int j = 0; j "lt" n; j++)
        {
          for (int i = k + 1; i "lt" n; i++)
          {
            result[k][j] -= result[i][j] * L[i][k];
          }
          result[k][j] /= L[k][k];
        }
      }
      return result;
    }

    // ----------------------------------------------------

    private static double[][] MatDecompCholesky(double[][] A)
    {
      // decompose A to L such that L * Lt = A
      // A must be square
      int m = A.Length; int n = A[0].Length;  // m == n
      double[][] L = MatMake(n, n);  // or m

      for (int i = 0; i "lt" n; ++i)
      {
        for (int j = 0; j "lte" i; ++j)
        {
          double sum = 0.0;
          for (int k = 0; k "lt" j; ++k)
            sum += L[i][k] * L[j][k];
          if (i == j)
          {
            double tmp = A[i][i] - sum;
            if (tmp "lt" 0.0)
              throw new
                Exception("decomp Cholesky fatal");
            L[i][j] = Math.Sqrt(tmp);
          }
          else
          {
            if (L[j][j] == 0.0)
              throw new
                Exception("decomp Cholesky fatal ");
            L[i][j] = (A[i][j] - sum) / L[j][j];
          }
        } // j
      } // i

      return L;
    }

    // ----------------------------------------------------
    // helpers: MatMake, MatTranspose, MatProduct
    // ----------------------------------------------------

    private static double[][] MatMake(int nRows, int nCols)
    {
      double[][] result = new double[nRows][];
      for (int i = 0; i "lt" nRows; ++i)
        result[i] = new double[nCols];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatTranspose(double[][] M)
    {
      int nr = M.Length; int nc = M[0].Length;
      double[][] result = new double[nc][]; // note
      for (int i = 0; i "lt" nc; ++i)
        result[i] = new double[nr];
      for (int i = 0; i "lt" nr; ++i)
        for (int j = 0; j "lt" nc; ++j)
          result[j][i] = M[i][j]; // note
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatProduct(double[][] A,
      double[][] B)
    {
      int aRows = A.Length; int aCols = A[0].Length;
      int bRows = B.Length; int bCols = B[0].Length;
      if (aCols != bRows)
        throw new Exception("Non-conformable matrices");

      double[][] result = new double[aRows][];
      for (int i = 0; i "lt" aRows; ++i)
        result[i] = new double[bCols];

      for (int i = 0; i "lt" aRows; ++i) // each row of A
        for (int j = 0; j "lt" bCols; ++j) // each col of B
          for (int k = 0; k "lt" aCols; ++k)
            result[i][j] += A[i][k] * B[k][j];
      return result;
    }

    // ------------------------------------------------------

  } // class Cholesky

  // ========================================================

  public class QR_GramSchmidt
  {
    public static double[][] MatPseudoInv(double[][] A)
    {
      // relaxed Moore-Penrose pseudo-inv via QR
      // decomposition, modified Gram-Schmidt algorithm
      // A is design matrix where nRows must be >= nCols
      // pinv = inv(R) * inv(Q)

      int nr = A.Length;
      int nc = A[0].Length; // aka m, n
      if (nr "lt" nc)
        Console.WriteLine("ERROR: Works only m "gte" n");

      double[][] Q; double[][] R;
      MatDecompQR(A, out Q, out R); // Gram-Schmidt
      double[][] Rinv = MatInvUpperTri(R); // std algo
      double[][] Qinv = MatTranspose(Q);  // is inv(Q)
      double[][] pinv = MatProduct(Rinv, Qinv);
      return pinv;
    }

    // ------------------------------------------------------

    public static double[][] MatToDesign(double[][] X)
    {
      // utility function
      // construct design matrix from X (add col of 1.0s)
      int nRows = X.Length; // aka m
      int nCols = X[0].Length; // n
      
      double[][] result = MatMake(nRows, nCols + 1);
      for (int i = 0; i "lt" nRows; ++i)
      {
        result[i][0] = 1.0;
        for (int j = 1; j "lt" nCols + 1; ++j)
          result[i][j] = X[i][j - 1];
      }
      return result;
    }

    // ------------------------------------------------------

    public static double[] MatVecProduct(double[][] M,
      double[] v)
    {
      // M * v. return a regular vector
      int nRows = M.Length;
      int nCols = M[0].Length;
      int n = v.Length;
      if (nCols != n)
        throw new Exception("non-conform in MatVecProd");

      double[] result = new double[nRows];
      for (int i = 0; i "lt" nRows; ++i)
        for (int k = 0; k "lt" nCols; ++k)
          result[i] += M[i][k] * v[k];

      return result;
    }

    // ------------------------------------------------------

    private static void MatDecompQR(double[][] A,
      out double[][] Q, out double[][] R)
    {
      // QR decomposition, modified Gram-Schmidt
      // 'reduced' mode (for machine learning scenarios)
      int m = A.Length; 
      int n = A[0].Length;
      if (m "lt" n)
        throw new Exception("m must be gte n ");

      double[][] QQ = new double[m][];  // working Q mxn
      for (int i = 0; i "lt" m; ++i)
        QQ[i] = new double[n];

      double[][] RR = new double[n][];  // working R nxn
      for (int i = 0; i "lt" n; ++i)
        RR[i] = new double[n];

      for (int k = 0; k "lt" n; ++k) // main loop each col
      {
        double[] v = new double[m];
        for (int i = 0; i "lt" m; ++i)  // col k
          v[i] = A[i][k];
        for (int j = 0; j "lt" k; ++j)  // inner loop
        {
          double[] colj = new double[QQ.Length];
          for (int i = 0; i "lt" colj.Length; ++i)
            colj[i] = QQ[i][j];

          double vecdot = 0.0;
          for (int i = 0; i "lt" colj.Length; ++i)
            vecdot += colj[i] * v[i];
          RR[j][k] = vecdot;

          // v = v - (R[j, k] * Q[:, j])
          for (int i = 0; i "lt" v.Length; ++i)
            v[i] = v[i] - (RR[j][k] * QQ[i][j]);
        } // j

        double normv = 0.0;
        for (int i = 0; i "lt" v.Length; ++i)
          normv += v[i] * v[i];
        normv = Math.Sqrt(normv);
        RR[k][k] = normv;

        // Q[:, k] = v / R[k, k]
        for (int i = 0; i "lt" QQ.Length; ++i)
          QQ[i][k] = v[i] / (RR[k][k] + 1.0e-12);

      } // k

      Q = QQ;
      R = RR;
    }

    // ------------------------------------------------------

    private static double[][] MatInvUpperTri(double[][] A)
    {
      // used to invert R from QR
      int n = A.Length;  // must be square matrix
      double[][] result = MatIdentity(n);

      for (int k = 0; k "lt" n; ++k)
      {
        for (int j = 0; j "lt" n; ++j)
        {
          for (int i = 0; i "lt" k; ++i)
          {
            result[j][k] -= result[j][i] * A[i][k];
          }
          result[j][k] /= (A[k][k] + 1.0e-8); // avoid 0
        }
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatTranspose(double[][] M)
    {
      int nRows = M.Length; int nCols = M[0].Length;
      double[][] result = MatMake(nCols, nRows);  // note
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[j][i] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatMake(int nRows, int nCols)
    {
      double[][] result = new double[nRows][];
      for (int i = 0; i "lt" nRows; ++i)
        result[i] = new double[nCols];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatIdentity(int n)
    {
      double[][] result = MatMake(n, n);
      for (int i = 0; i "lt" n; ++i)
        result[i][i] = 1.0;
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatProduct(double[][] A,
      double[][] B)
    {
      int aRows = A.Length; int aCols = A[0].Length;
      int bRows = B.Length; int bCols = B[0].Length;
      if (aCols != bRows)
        throw new Exception("Non-conformable matrices");

      double[][] result = MatMake(aRows, bCols);

      for (int i = 0; i "lt" aRows; ++i) // each row of A
        for (int j = 0; j "lt" bCols; ++j) // each col of B
          for (int k = 0; k "lt" aCols; ++k)
            result[i][j] += A[i][k] * B[k][j];

      return result;
    }

  } // class QR_GramSchmidt

  // ========================================================

  public class SVD_Jacobi
  {
    public static double[][] MatPseudoInv(double[][] A)
    {
      double[][] U; double[] s; double[][] Vh;
      MatDecompSVD_Jacobi(A, out U, out s, out Vh);

      double[][] Uh = MatTranspose(U);
      double[][] V = MatTranspose(Vh);

      double[][] S = MatMake(s.Length, s.Length);
      for (int i = 0; i "lt" s.Length; ++i)
        S[i][i] = 1.0 / (s[i] + 1.0e-12); // avoid div by 0
 
      double[][] result = MatProduct(V, MatProduct(S, Uh));
      return result;
    }

    // ------------------------------------------------------

    public static double[][] MatToDesign(double[][] X)
    {
      // utility function
      // construct design matrix from X (add col of 1.0s)
      int nRows = X.Length; // aka m
      int nCols = X[0].Length; // n
      
      double[][] result = MatMake(nRows, nCols + 1);
      for (int i = 0; i "lt" nRows; ++i)
      {
        result[i][0] = 1.0;
        for (int j = 1; j "lt" nCols + 1; ++j)
          result[i][j] = X[i][j - 1];
      }
      return result;
    }

    // ------------------------------------------------------

    public static double[] MatVecProduct(double[][] M,
      double[] v)
    {
      // M * v. return a regular vector
      int nRows = M.Length;
      int nCols = M[0].Length;
      int n = v.Length;
      if (nCols != n)
        throw new Exception("non-conform in MatVecProd");

      double[] result = new double[nRows];
      for (int i = 0; i "lt" nRows; ++i)
        for (int k = 0; k "lt" nCols; ++k)
          result[i] += M[i][k] * v[k];

      return result;
    }

    // ------------------------------------------------------

    public static void MatDecompSVD_Jacobi(double[][] M,
      out double[][] U, out double[] s, out double[][] Vh)
    {
      // one-sided Jacobi rotation algorithm
      double epsilon = 1.0e-15;

      double[][] A = MatCopyOf(M); // working U
      int m = A.Length; int n = A[0].Length;
      double[][] Q = MatIdentity(n); // working V
      double[] sv = new double[n];  // working s

      // initialize counters
      int count = 1;
      int pass = 0;
      double tolerance = 10 * m * epsilon; // heuristic

      // always do at least 12 sweeps
      int passMax = Math.Max(5 * n, 12); // heuristic

      // store the column error estimates for use
      // during orthogonalization

      for (int j = 0; j "lt" n; ++j)
      {
        double[] cj = MatGetColumn(A, j);
        double sj = VecNorm(cj);
        sv[j] = epsilon * sj;
      }

      // orthogonalize A using plane rotations
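      // main idea: rotate pairs of columns of A until all
      // columns are mutually orthogonal; the final column
      // norms are the singular values, the normalized
      // columns form U, the accumulated rotations form V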
      while (count "gt" 0 && pass "lte" passMax)
      {
        // initialize rotation counter
        count = n * (n - 1) / 2;

        for (int j = 0; j "lt" n - 1; ++j)
        {
          for (int k = j + 1; k "lt" n; ++k)
          {
            double cosine, sine;

            double[] cj = MatGetColumn(A, j);
            double[] ck = MatGetColumn(A, k);

            double p = 2.0 * VecDot(cj, ck);
            double a = VecNorm(cj);
            double b = VecNorm(ck);

            double q = a * a - b * b;
            double v = Hypot(p, q);

            // test for columns j,k orthogonal,
            // or dominant errors 
            double abserr_a = sv[j];
            double abserr_b = sv[k];

            bool sorted = (a "gte" b);
            bool orthog = (Math.Abs(p) "lte"
              tolerance * (a * b));
            bool bada = (a "lt" abserr_a);
            bool badb = (b "lt" abserr_b);

            if (sorted == true && (orthog == true ||
              bada == true || badb == true))
            {
              --count;
              continue;
            }

            // calculate rotation angles
            if (v == 0 || sorted == false)
            {
              cosine = 0.0; sine = 1.0;
            }
            else
            {
              cosine = Math.Sqrt((v + q) / (2.0 * v));
              sine = p / (2.0 * v * cosine);
            }

            // apply rotation to A (working U)
            for (int i = 0; i "lt" m; ++i)
            {
              double Aik = A[i][k];
              double Aij = A[i][j];
              A[i][j] = Aij * cosine + Aik * sine;
              A[i][k] = -Aij * sine + Aik * cosine;
            }

            // update singular values
            sv[j] = Math.Abs(cosine) * abserr_a +
              Math.Abs(sine) * abserr_b;
            sv[k] = Math.Abs(sine) * abserr_a +
              Math.Abs(cosine) * abserr_b;

            // apply rotation to Q (working V)
            for (int i = 0; i "lt" n; ++i)
            {
              double Qij = Q[i][j];
              double Qik = Q[i][k];
              Q[i][j] = Qij * cosine + Qik * sine;
              Q[i][k] = -Qij * sine + Qik * cosine;
            } // i
          } // k
        } // j

        ++pass;
      } // while

      //  compute singular values
      double prevNorm = -1.0;

      for (int j = 0; j "lt" n; ++j)
      {
        double[] column = MatGetColumn(A, j);
        double norm = VecNorm(column);

        // determine if singular value is zero
        if (norm == 0.0 || prevNorm == 0.0
          || (j > 0 &&
            norm <= tolerance * prevNorm))
        {
          sv[j] = 0.0;
          for (int i = 0; i "lt" m; ++i)
            A[i][j] = 0.0;
          prevNorm = 0.0;
        }
        else
        {
          sv[j] = norm;
          for (int i = 0; i "lt" m; ++i)
            A[i][j] = A[i][j] * 1.0 / norm;
          prevNorm = norm;
        }
      }

      if (count "gt" 0)
      {
        Console.WriteLine("Jacobi iterations did not" +
          " converge");
      }

      U = A;
      Vh = MatTranspose(Q);
      s = sv;

      // to sync with np.linalg.svd() full_matrices=False
      // shapes and allow U*S*Vh:
      // if m "lt" n, extract 1st m columns of U
      //   extract 1st m values of s
      //   extract 1st m rows of Vh

      if (m "lt" n)
      {
        U = MatExtractFirstColumns(U, m);
        s = VecExtractFirst(s, m);
        Vh = MatExtractFirstRows(Vh, m);
      }

    } // MatDecompSVD_Jacobi()

    // ------------------------------------------------------

    // === helper functions =================================
    //
    // MatMake, MatCopy, MatIdentity, MatGetColumn,
    // MatExtractFirstColumns, MatExtractFirstRows,
    // MatTranspose, MatProduct, VecNorm, VecDot,
    // Hypot, VecExtractFirst
    //
    // ======================================================

    private static double[][] MatMake(int nRows, int nCols)
    {
      double[][] result = new double[nRows][];
      for (int i = 0; i "lt" nRows; ++i)
        result[i] = new double[nCols];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatCopyOf(double[][] M)
    {
      int nr = M.Length; int nc = M[0].Length;
      double[][] result = MatMake(nr, nc);
      for (int i = 0; i "lt" nr; ++i)
        for (int j = 0; j "lt" nc; ++j)
          result[i][j] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatIdentity(int n)
    {
      double[][] result = MatMake(n, n);
      for (int i = 0; i "lt" n; ++i)
        result[i][i] = 1.0;
      return result;
    }

    // ------------------------------------------------------

    private static double[] MatGetColumn(double[][] M, int j)
    {
      int nRows = M.Length;
      double[] result = new double[nRows];
      for (int i = 0; i "lt" nRows; ++i)
        result[i] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] 
      MatExtractFirstColumns(double[][] M, int n)
    {
      int nRows = M.Length;
      double[][] result = MatMake(nRows, n);
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" n; ++j)
          result[i][j] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] 
      MatExtractFirstRows(double[][] M, int n)
    {
      int nCols = M[0].Length;
      double[][] result = MatMake(n, nCols);
      for (int i = 0; i "lt" n; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[i][j] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatTranspose(double[][] M)
    {
      int nRows = M.Length;
      int nCols = M[0].Length;
      double[][] result = MatMake(nCols, nRows);
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[j][i] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatProduct(double[][] A,
      double[][] B)
    {
      int aRows = A.Length;
      int aCols = A[0].Length;
      int bRows = B.Length;
      int bCols = B[0].Length;
      if (aCols != bRows)
        throw new Exception("Non-conformable matrices");

      double[][] result = MatMake(aRows, bCols);

      for (int i = 0; i "lt" aRows; ++i)
        for (int j = 0; j "lt" bCols; ++j)
          for (int k = 0; k "lt" aCols; ++k)
            result[i][j] += A[i][k] * B[k][j];

      return result;
    }

    // ------------------------------------------------------

    private static double VecNorm(double[] v)
    {
      double sum = 0.0;
      int n = v.Length;
      for (int i = 0; i "lt" n; ++i)
        sum += v[i] * v[i];
      return Math.Sqrt(sum);
    }

    // ------------------------------------------------------

    private static double VecDot(double[] v1, double[] v2)
    {
      int n = v1.Length;
      double sum = 0.0;
      for (int i = 0; i "lt" n; ++i)
        sum += v1[i] * v2[i];
      return sum;
    }

    // ------------------------------------------------------

    private static double Hypot(double x, double y)
    {
      // sqrt(x^2 + y^2) scaled to avoid overflow -- std technique
      double xabs = Math.Abs(x);
      double yabs = Math.Abs(y);
      double min, max;

      if (xabs "lt" yabs)
      {
        min = xabs; max = yabs;
      }
      else
      {
        min = yabs; max = xabs;
      }

      if (min == 0)
        return max;

      double u = min / max;
      return max * Math.Sqrt(1 + u * u);
    }

    // ------------------------------------------------------

    private static double[] VecExtractFirst(double[] v,
      int n)
    {
      double[] result = new double[n];
      for (int i = 0; i "lt" n; ++i)
        result[i] = v[i];
      return result;
    }

    // ------------------------------------------------------

  } // class SVD_Jacobi

  // ========================================================

} // ns

Training data:

# synthetic_train_200.txt
#
-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
-0.8299, -0.9219, -0.6603,  0.7563, -0.8033,  0.7955
 0.0663,  0.3838, -0.3690,  0.3730,  0.6693,  0.3206
-0.9634,  0.5003,  0.9777,  0.4963, -0.4391,  0.7377
-0.1042,  0.8172, -0.4128, -0.4244, -0.7399,  0.4801
-0.9613,  0.3577, -0.5767, -0.4689, -0.0169,  0.6861
-0.7065,  0.1786,  0.3995, -0.7953, -0.1719,  0.5569
 0.3888, -0.1716, -0.9001,  0.0718,  0.3276,  0.2500
 0.1731,  0.8068, -0.7251, -0.7214,  0.6148,  0.3297
-0.2046, -0.6693,  0.8550, -0.3045,  0.5016,  0.2129
 0.2473,  0.5019, -0.3022, -0.4601,  0.7918,  0.2613
-0.1438,  0.9297,  0.3269,  0.2434, -0.7705,  0.5171
 0.1568, -0.1837, -0.5259,  0.8068,  0.1474,  0.3307
-0.9943,  0.2343, -0.3467,  0.0541,  0.7719,  0.5581
 0.2467, -0.9684,  0.8589,  0.3818,  0.9946,  0.1092
-0.6553, -0.7257,  0.8652,  0.3936, -0.8680,  0.7018
 0.8460,  0.4230, -0.7515, -0.9602, -0.9476,  0.1996
-0.9434, -0.5076,  0.7201,  0.0777,  0.1056,  0.5664
 0.9392,  0.1221, -0.9627,  0.6013, -0.5341,  0.1533
 0.6142, -0.2243,  0.7271,  0.4942,  0.1125,  0.1661
 0.4260,  0.1194, -0.9749, -0.8561,  0.9346,  0.2230
 0.1362, -0.5934, -0.4953,  0.4877, -0.6091,  0.3810
 0.6937, -0.5203, -0.0125,  0.2399,  0.6580,  0.1460
-0.6864, -0.9628, -0.8600, -0.0273,  0.2127,  0.5387
 0.9772,  0.1595, -0.2397,  0.1019,  0.4907,  0.1611
 0.3385, -0.4702, -0.8673, -0.2598,  0.2594,  0.2270
-0.8669, -0.4794,  0.6095, -0.6131,  0.2789,  0.4700
 0.0493,  0.8496, -0.4734, -0.8681,  0.4701,  0.3516
 0.8639, -0.9721, -0.5313,  0.2336,  0.8980,  0.1412
 0.9004,  0.1133,  0.8312,  0.2831, -0.2200,  0.1782
 0.0991,  0.8524,  0.8375, -0.2102,  0.9265,  0.2150
-0.6521, -0.7473, -0.7298,  0.0113, -0.9570,  0.7422
 0.6190, -0.3105,  0.8802,  0.1640,  0.7577,  0.1056
 0.6895,  0.8108, -0.0802,  0.0927,  0.5972,  0.2214
 0.1982, -0.9689,  0.1870, -0.1326,  0.6147,  0.1310
-0.3695,  0.7858,  0.1557, -0.6320,  0.5759,  0.3773
-0.1596,  0.3581,  0.8372, -0.9992,  0.9535,  0.2071
-0.2468,  0.9476,  0.2094,  0.6577,  0.1494,  0.4132
 0.1737,  0.5000,  0.7166,  0.5102,  0.3961,  0.2611
 0.7290, -0.3546,  0.3416, -0.0983, -0.2358,  0.1332
-0.3652,  0.2438, -0.1395,  0.9476,  0.3556,  0.4170
-0.6029, -0.1466, -0.3133,  0.5953,  0.7600,  0.4334
-0.4596, -0.4953,  0.7098,  0.0554,  0.6043,  0.2775
 0.1450,  0.4663,  0.0380,  0.5418,  0.1377,  0.2931
-0.8636, -0.2442, -0.8407,  0.9656, -0.6368,  0.7429
 0.6237,  0.7499,  0.3768,  0.1390, -0.6781,  0.2185
-0.5499,  0.1850, -0.3755,  0.8326,  0.8193,  0.4399
-0.4858, -0.7782, -0.6141, -0.0008,  0.4572,  0.4197
 0.7033, -0.1683,  0.2334, -0.5327, -0.7961,  0.1776
 0.0317, -0.0457, -0.6947,  0.2436,  0.0880,  0.3345
 0.5031, -0.5559,  0.0387,  0.5706, -0.9553,  0.3107
-0.3513,  0.7458,  0.6894,  0.0769,  0.7332,  0.3170
 0.2205,  0.5992, -0.9309,  0.5405,  0.4635,  0.3532
-0.4806, -0.4859,  0.2646, -0.3094,  0.5932,  0.3202
 0.9809, -0.3995, -0.7140,  0.8026,  0.0831,  0.1600
 0.9495,  0.2732,  0.9878,  0.0921,  0.0529,  0.1289
-0.9476, -0.6792,  0.4913, -0.9392, -0.2669,  0.5966
 0.7247,  0.3854,  0.3819, -0.6227, -0.1162,  0.1550
-0.5922, -0.5045, -0.4757,  0.5003, -0.0860,  0.5863
-0.8861,  0.0170, -0.5761,  0.5972, -0.4053,  0.7301
 0.6877, -0.2380,  0.4997,  0.0223,  0.0819,  0.1404
 0.9189,  0.6079, -0.9354,  0.4188, -0.0700,  0.1907
-0.1428, -0.7820,  0.2676,  0.6059,  0.3936,  0.2790
 0.5324, -0.3151,  0.6917, -0.1425,  0.6480,  0.1071
-0.8432, -0.9633, -0.8666, -0.0828, -0.7733,  0.7784
-0.9444,  0.5097, -0.2103,  0.4939, -0.0952,  0.6787
-0.0520,  0.6063, -0.1952,  0.8094, -0.9259,  0.4836
 0.5477, -0.7487,  0.2370, -0.9793,  0.0773,  0.1241
 0.2450,  0.8116,  0.9799,  0.4222,  0.4636,  0.2355
 0.8186, -0.1983, -0.5003, -0.6531, -0.7611,  0.1511
-0.4714,  0.6382, -0.3788,  0.9648, -0.4667,  0.5950
 0.0673, -0.3711,  0.8215, -0.2669, -0.1328,  0.2677
-0.9381,  0.4338,  0.7820, -0.9454,  0.0441,  0.5518
-0.3480,  0.7190,  0.1170,  0.3805, -0.0943,  0.4724
-0.9813,  0.1535, -0.3771,  0.0345,  0.8328,  0.5438
-0.1471, -0.5052, -0.2574,  0.8637,  0.8737,  0.3042
-0.5454, -0.3712, -0.6505,  0.2142, -0.1728,  0.5783
 0.6327, -0.6297,  0.4038, -0.5193,  0.1484,  0.1153
-0.5424,  0.3282, -0.0055,  0.0380, -0.6506,  0.6613
 0.1414,  0.9935,  0.6337,  0.1887,  0.9520,  0.2540
-0.9351, -0.8128, -0.8693, -0.0965, -0.2491,  0.7353
 0.9507, -0.6640,  0.9456,  0.5349,  0.6485,  0.1059
-0.0462, -0.9737, -0.2940, -0.0159,  0.4602,  0.2606
-0.0627, -0.0852, -0.7247, -0.9782,  0.5166,  0.2977
 0.0478,  0.5098, -0.0723, -0.7504, -0.3750,  0.3335
 0.0090,  0.3477,  0.5403, -0.7393, -0.9542,  0.4415
-0.9748,  0.3449,  0.3736, -0.1015,  0.8296,  0.4358
 0.2887, -0.9895, -0.0311,  0.7186,  0.6608,  0.2057
 0.1570, -0.4518,  0.1211,  0.3435, -0.2951,  0.3244
 0.7117, -0.6099,  0.4946, -0.4208,  0.5476,  0.1096
-0.2929, -0.5726,  0.5346, -0.3827,  0.4665,  0.2465
 0.4889, -0.5572, -0.5718, -0.6021, -0.7150,  0.2163
-0.7782,  0.3491,  0.5996, -0.8389, -0.5366,  0.6516
-0.5847,  0.8347,  0.4226,  0.1078, -0.3910,  0.6134
 0.8469,  0.4121, -0.0439, -0.7476,  0.9521,  0.1571
-0.6803, -0.5948, -0.1376, -0.1916, -0.7065,  0.7156
 0.2878,  0.5086, -0.5785,  0.2019,  0.4979,  0.2980
 0.2764,  0.1943, -0.4090,  0.4632,  0.8906,  0.2960
-0.8877,  0.6705, -0.6155, -0.2098, -0.3998,  0.7107
-0.8398,  0.8093, -0.2597,  0.0614, -0.0118,  0.6502
-0.8476,  0.0158, -0.4769, -0.2859, -0.7839,  0.7715
 0.5751, -0.7868,  0.9714, -0.6457,  0.1448,  0.1175
 0.4802, -0.7001,  0.1022, -0.5668,  0.5184,  0.1090
 0.4458, -0.6469,  0.7239, -0.9604,  0.7205,  0.0779
 0.5175,  0.4339,  0.9747, -0.4438, -0.9924,  0.2879
 0.8678,  0.7158,  0.4577,  0.0334,  0.4139,  0.1678
 0.5406,  0.5012,  0.2264, -0.1963,  0.3946,  0.2088
-0.9938,  0.5498,  0.7928, -0.5214, -0.7585,  0.7687
 0.7661,  0.0863, -0.4266, -0.7233, -0.4197,  0.1466
 0.2277, -0.3517, -0.0853, -0.1118,  0.6563,  0.1767
 0.3499, -0.5570, -0.0655, -0.3705,  0.2537,  0.1632
 0.7547, -0.1046,  0.5689, -0.0861,  0.3125,  0.1257
 0.8186,  0.2110,  0.5335,  0.0094, -0.0039,  0.1391
 0.6858, -0.8644,  0.1465,  0.8855,  0.0357,  0.1845
-0.4967,  0.4015,  0.0805,  0.8977,  0.2487,  0.4663
 0.6760, -0.9841,  0.9787, -0.8446, -0.3557,  0.1509
-0.1203, -0.4885,  0.6054, -0.0443, -0.7313,  0.4854
 0.8557,  0.7919, -0.0169,  0.7134, -0.1628,  0.2002
 0.0115, -0.6209,  0.9300, -0.4116, -0.7931,  0.4052
-0.7114, -0.9718,  0.4319,  0.1290,  0.5892,  0.3661
 0.3915,  0.5557, -0.1870,  0.2955, -0.6404,  0.2954
-0.3564, -0.6548, -0.1827, -0.5172, -0.1862,  0.4622
 0.2392, -0.4959,  0.5857, -0.1341, -0.2850,  0.2470
-0.3394,  0.3947, -0.4627,  0.6166, -0.4094,  0.5325
 0.7107,  0.7768, -0.6312,  0.1707,  0.7964,  0.2757
-0.1078,  0.8437, -0.4420,  0.2177,  0.3649,  0.4028
-0.3139,  0.5595, -0.6505, -0.3161, -0.7108,  0.5546
 0.4335,  0.3986,  0.3770, -0.4932,  0.3847,  0.1810
-0.2562, -0.2894, -0.8847,  0.2633,  0.4146,  0.4036
 0.2272,  0.2966, -0.6601, -0.7011,  0.0284,  0.2778
-0.0743, -0.1421, -0.0054, -0.6770, -0.3151,  0.3597
-0.4762,  0.6891,  0.6007, -0.1467,  0.2140,  0.4266
-0.4061,  0.7193,  0.3432,  0.2669, -0.7505,  0.6147
-0.0588,  0.9731,  0.8966,  0.2902, -0.6966,  0.4955
-0.0627, -0.1439,  0.1985,  0.6999,  0.5022,  0.3077
 0.1587,  0.8494, -0.8705,  0.9827, -0.8940,  0.4263
-0.7850,  0.2473, -0.9040, -0.4308, -0.8779,  0.7199
 0.4070,  0.3369, -0.2428, -0.6236,  0.4940,  0.2215
-0.0242,  0.0513, -0.9430,  0.2885, -0.2987,  0.3947
-0.5416, -0.1322, -0.2351, -0.0604,  0.9590,  0.3683
 0.1055,  0.7783, -0.2901, -0.5090,  0.8220,  0.2984
-0.9129,  0.9015,  0.1128, -0.2473,  0.9901,  0.4776
-0.9378,  0.1424, -0.6391,  0.2619,  0.9618,  0.5368
 0.7498, -0.0963,  0.4169,  0.5549, -0.0103,  0.1614
-0.2612, -0.7156,  0.4538, -0.0460, -0.1022,  0.3717
 0.7720,  0.0552, -0.1818, -0.4622, -0.8560,  0.1685
-0.4177,  0.0070,  0.9319, -0.7812,  0.3461,  0.3052
-0.0001,  0.5542, -0.7128, -0.8336, -0.2016,  0.3803
 0.5356, -0.4194, -0.5662, -0.9666, -0.2027,  0.1776
-0.2378,  0.3187, -0.8582, -0.6948, -0.9668,  0.5474
-0.1947, -0.3579,  0.1158,  0.9869,  0.6690,  0.2992
 0.3992,  0.8365, -0.9205, -0.8593, -0.0520,  0.3154
-0.0209,  0.0793,  0.7905, -0.1067,  0.7541,  0.1864
-0.4928, -0.4524, -0.3433,  0.0951, -0.5597,  0.6261
-0.8118,  0.7404, -0.5263, -0.2280,  0.1431,  0.6349
 0.0516, -0.8480,  0.7483,  0.9023,  0.6250,  0.1959
-0.3212,  0.1093,  0.9488, -0.3766,  0.3376,  0.2735
-0.3481,  0.5490, -0.3484,  0.7797,  0.5034,  0.4379
-0.5785, -0.9170, -0.3563, -0.9258,  0.3877,  0.4121
 0.3407, -0.1391,  0.5356,  0.0720, -0.9203,  0.3458
-0.3287, -0.8954,  0.2102,  0.0241,  0.2349,  0.3247
-0.1353,  0.6954, -0.0919, -0.9692,  0.7461,  0.3338
 0.9036, -0.8982, -0.5299, -0.8733, -0.1567,  0.1187
 0.7277, -0.8368, -0.0538, -0.7489,  0.5458,  0.0830
 0.9049,  0.8878,  0.2279,  0.9470, -0.3103,  0.2194
 0.7957, -0.1308, -0.5284,  0.8817,  0.3684,  0.2172
 0.4647, -0.4931,  0.2010,  0.6292, -0.8918,  0.3371
-0.7390,  0.6849,  0.2367,  0.0626, -0.5034,  0.7039
-0.1567, -0.8711,  0.7940, -0.5932,  0.6525,  0.1710
 0.7635, -0.0265,  0.1969,  0.0545,  0.2496,  0.1445
 0.7675,  0.1354, -0.7698, -0.5460,  0.1920,  0.1728
-0.5211, -0.7372, -0.6763,  0.6897,  0.2044,  0.5217
 0.1913,  0.1980,  0.2314, -0.8816,  0.5006,  0.1998
 0.8964,  0.0694, -0.6149,  0.5059, -0.9854,  0.1825
 0.1767,  0.7104,  0.2093,  0.6452,  0.7590,  0.2832
-0.3580, -0.7541,  0.4426, -0.1193, -0.7465,  0.5657
-0.5996,  0.5766, -0.9758, -0.3933, -0.9572,  0.6800
 0.9950,  0.1641, -0.4132,  0.8579,  0.0142,  0.2003
-0.4717, -0.3894, -0.2567, -0.5111,  0.1691,  0.4266
 0.3917, -0.8561,  0.9422,  0.5061,  0.6123,  0.1212
-0.0366, -0.1087,  0.3449, -0.1025,  0.4086,  0.2475
 0.3633,  0.3943,  0.2372, -0.6980,  0.5216,  0.1925
-0.5325, -0.6466, -0.2178, -0.3589,  0.6310,  0.3568
 0.2271,  0.5200, -0.1447, -0.8011, -0.7699,  0.3128
 0.6415,  0.1993,  0.3777, -0.0178, -0.8237,  0.2181
-0.5298, -0.0768, -0.6028, -0.9490,  0.4588,  0.4356
 0.6870, -0.1431,  0.7294,  0.3141,  0.1621,  0.1632
-0.5985,  0.0591,  0.7889, -0.3900,  0.7419,  0.2945
 0.3661,  0.7984, -0.8486,  0.7572, -0.6183,  0.3449
 0.6995,  0.3342, -0.3113, -0.6972,  0.2707,  0.1712
 0.2565,  0.9126,  0.1798, -0.6043, -0.1413,  0.2893
-0.3265,  0.9839, -0.2395,  0.9854,  0.0376,  0.4770
 0.2690, -0.1722,  0.9818,  0.8599, -0.7015,  0.3954
-0.2102, -0.0768,  0.1219,  0.5607, -0.0256,  0.3949
 0.8216, -0.9555,  0.6422, -0.6231,  0.3715,  0.0801
-0.2896,  0.9484, -0.7545, -0.6249,  0.7789,  0.4370
-0.9985, -0.5448, -0.7092, -0.5931,  0.7926,  0.5402

Test data:

# synthetic_test_40.txt
#
 0.7462,  0.4006, -0.0590,  0.6543, -0.0083,  0.1935
 0.8495, -0.2260, -0.0142, -0.4911,  0.7699,  0.1078
-0.2335, -0.4049,  0.4352, -0.6183, -0.7636,  0.5088
 0.1810, -0.5142,  0.2465,  0.2767, -0.3449,  0.3136
-0.8650,  0.7611, -0.0801,  0.5277, -0.4922,  0.7140
-0.2358, -0.7466, -0.5115, -0.8413, -0.3943,  0.4533
 0.4834,  0.2300,  0.3448, -0.9832,  0.3568,  0.1360
-0.6502, -0.6300,  0.6885,  0.9652,  0.8275,  0.3046
-0.3053,  0.5604,  0.0929,  0.6329, -0.0325,  0.4756
-0.7995,  0.0740, -0.2680,  0.2086,  0.9176,  0.4565
-0.2144, -0.2141,  0.5813,  0.2902, -0.2122,  0.4119
-0.7278, -0.0987, -0.3312, -0.5641,  0.8515,  0.4438
 0.3793,  0.1976,  0.4933,  0.0839,  0.4011,  0.1905
-0.8568,  0.9573, -0.5272,  0.3212, -0.8207,  0.7415
-0.5785,  0.0056, -0.7901, -0.2223,  0.0760,  0.5551
 0.0735, -0.2188,  0.3925,  0.3570,  0.3746,  0.2191
 0.1230, -0.2838,  0.2262,  0.8715,  0.1938,  0.2878
 0.4792, -0.9248,  0.5295,  0.0366, -0.9894,  0.3149
-0.4456,  0.0697,  0.5359, -0.8938,  0.0981,  0.3879
 0.8629, -0.8505, -0.4464,  0.8385,  0.5300,  0.1769
 0.1995,  0.6659,  0.7921,  0.9454,  0.9970,  0.2330
-0.0249, -0.3066, -0.2927, -0.4923,  0.8220,  0.2437
 0.4513, -0.9481, -0.0770, -0.4374, -0.9421,  0.2879
-0.3405,  0.5931, -0.3507, -0.3842,  0.8562,  0.3987
 0.9538,  0.0471,  0.9039,  0.7760,  0.0361,  0.1706
-0.0887,  0.2104,  0.9808,  0.5478, -0.3314,  0.4128
-0.8220, -0.6302,  0.0537, -0.1658,  0.6013,  0.4306
-0.4123, -0.2880,  0.9074, -0.0461, -0.4435,  0.5144
 0.0060,  0.2867, -0.7775,  0.5161,  0.7039,  0.3599
-0.7968, -0.5484,  0.9426, -0.4308,  0.8148,  0.2979
 0.7811,  0.8450, -0.6877,  0.7594,  0.2640,  0.2362
-0.6802, -0.1113, -0.8325, -0.6694, -0.6056,  0.6544
 0.3821,  0.1476,  0.7466, -0.5107,  0.2592,  0.1648
 0.7265,  0.9683, -0.9803, -0.4943, -0.5523,  0.2454
-0.9049, -0.9797, -0.0196, -0.9090, -0.4433,  0.6447
-0.4607,  0.1811, -0.2389,  0.4050, -0.0078,  0.5229
 0.2664, -0.2932, -0.4259, -0.7336,  0.8742,  0.1834
-0.4507,  0.1029, -0.6294, -0.1158, -0.6294,  0.6081
 0.8948, -0.0124,  0.9278,  0.2899, -0.0314,  0.1534
-0.1323, -0.8813, -0.0146, -0.0697,  0.6135,  0.2386