Quadratic Regression Using C# Applied to the Diabetes Dataset – Poor Results

I write code almost every day. Like most things, writing code is a skill that must be practiced, and I enjoy writing code. One morning before work, I figured I’d run the well-known Diabetes Dataset through a quadratic regression model. I wasn’t entirely sure what to expect, but the prediction results were not good at all.

But the quadratic regression results are useful as a baseline for comparison with other regression techniques: nearest neighbors regression, kernel ridge regression, neural network regression, decision tree regression, bagging tree regression, random forest regression, adaptive boosting regression, and gradient boosting regression.

The Diabetes Dataset looks like:

59, 2, 32.1, 101.00, 157,  93.2, 38, 4.00, 4.8598, 87, 151
48, 1, 21.6,  87.00, 183, 103.2, 70, 3.00, 3.8918, 69,  75
72, 2, 30.5,  93.00, 156,  93.6, 41, 4.00, 4.6728, 85, 141
. . .

Each line represents a patient. The first 10 values on each line are predictors. The last value on each line is the target value (a diabetes metric) to predict. The predictors are: age, sex, body mass index, blood pressure, serum cholesterol, low-density lipoproteins, high-density lipoproteins, total cholesterol, triglycerides, blood sugar. There are 442 data items.

The sex encoding isn’t explained, but I suspect male = 1, female = 2 because there are 235 1-values and 206 2-values.

I converted the sex values from 1,2 into 0,1. Then I applied divide-by-constant normalization by dividing the 10 predictor columns by (100, 1, 100, 1000, 1000, 1000, 100, 10, 10, 1000) and the target y values by 1000. The resulting encoded and normalized data looks like:

0.5900, 1.0000, 0.3210, . . . 0.1510
0.4800, 0.0000, 0.2160, . . . 0.0750
0.7200, 1.0000, 0.3050, . . . 0.1410
. . .
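The encoding and normalization can be sketched in a few lines. This is an illustrative Python aside (the demo itself is C#), assuming the raw column order shown above:

```python
# divide-by-constant normalization of one raw diabetes data line
DIVISORS = [100, 1, 100, 1000, 1000, 1000, 100, 10, 10, 1000]

def normalize(raw):
    # raw = 10 predictors followed by the target value
    x = list(raw)
    x[1] -= 1  # sex: 1,2 -> 0,1
    preds = [x[i] / DIVISORS[i] for i in range(10)]
    return preds + [raw[10] / 1000]  # target divided by 1000

row = [59, 2, 32.1, 101.00, 157, 93.2, 38, 4.00, 4.8598, 87, 151]
print([round(v, 4) for v in normalize(row)])
# first normalized line: 0.59, 1.0, 0.321, . . . 0.151
```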

I split the 442 items into a 342-item training set and a 100-item test set.

Suppose there are five predictors, instead of the 10 in the diabetes set: (x0, x1, x2, x3, x4). The prediction equation for basic linear regression is:

y’ = (w0 * x0) + (w1 * x1) + (w2 * x2) + (w3 * x3) + (w4 * x4) + b

The wi are model weights (aka coefficients), and b is the model bias (aka intercept). The values of the weights and the bias must be determined by training, so that predicted y’ values are close to the known, correct y values in a set of training data.

The prediction equation for quadratic regression with five predictors is:

y’ = (w0 * x0) + (w1 * x1) + (w2 * x2) + (w3 * x3) + (w4 * x4) +

(w5 * x0*x0) + (w6 * x1*x1) + (w7 * x2*x2) +
(w8 * x3*x3) + (w9 * x4*x4) +

(w10 * x0*x1) + (w11 * x0*x2) + (w12 * x0*x3) + (w13 * x0*x4) +
(w14 * x1*x2) + (w15 * x1*x3) + (w16 * x1*x4) +
(w17 * x2*x3) + (w18 * x2*x4) +
(w19 * x3*x4)

+ b

The squared (“quadratic”) xi^2 terms handle non-linear structure. If there are n predictors, there are n squared terms. The xi * xj terms between all possible pairs of original predictors handle interactions between predictors. If there are n predictors, there are (n * (n-1)) / 2 interaction terms. So for the Diabetes Dataset, there are 10 base weights, 10 squared (quadratic) weights, 45 interaction weights, and one bias, for a total of 66 trainable parameters.
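To double-check the weight count, here is a tiny Python helper (an aside, not part of the demo):

```python
# trainable parameters for quadratic regression with n predictors:
# n base + n squared + n*(n-1)/2 interaction weights, plus 1 bias
def num_params(n):
    return n + n + (n * (n - 1)) // 2 + 1

print(num_params(10))  # 66 parameters for the 10-predictor Diabetes Dataset
```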

Training is the process of finding values for the weights and the bias so that the model predicts well. There are several different training techniques, including relaxed Moore-Penrose pseudo-inverse training, stochastic gradient descent training, pseudo-inverse via normal equations closed form training, L-BFGS training, and others.

I implemented a quadratic regression model using C#. For training, I used pseudo-inverse via normal equations closed form training with Cholesky inverse.

The math equation is:

w = inv(Xt * X) * Xt * y

The w is a vector with the model bias in cell [0] followed by the weights in the remaining cells. X is the design matrix of the training predictors (the predictors with a leading column of 1s added). Xt is the transpose of the design matrix. The y is the vector of target values. The inv() is any matrix inverse, but because Xt * X is square, symmetric, and positive definite, Cholesky inverse is simplest (but by no means simple). The * means matrix multiplication or matrix-vector multiplication.
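The equation can be checked numerically on a tiny made-up dataset. A minimal NumPy sketch (the data here is arbitrary, not the diabetes data):

```python
import numpy as np

# four items, two predictors (arbitrary values for illustration)
rawX = np.array([[0.2, 0.5],
                 [0.4, 0.1],
                 [0.6, 0.9],
                 [0.8, 0.3]])
y = np.array([0.3, 0.2, 0.7, 0.4])

# design matrix: prepend a column of 1s for the bias
X = np.column_stack([np.ones(len(rawX)), rawX])

# w = inv(Xt * X) * Xt * y ; cell [0] holds the bias
w = np.linalg.inv(X.T @ X) @ X.T @ y
print(w)

# sanity check: the least-squares residual is orthogonal to X's columns
print(X.T @ (y - X @ w))  # approximately all zeros
```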

The output of the demo is:

Begin C# quadratic regression

Loading diabetes train (342) and test (100) data
Done

First three train X:
  0.5900  1.0000  0.3210  . . .  0.0870
  0.4800  0.0000  0.2160  . . .  0.0690
  0.7200  1.0000  0.3050  . . .  0.0850

First three train y:
  0.1510
  0.0750
  0.1410

Creating quadratic regression model

Training using pinv via normal equations (Cholesky)
Done

Model base weights:
 -1.4581 -0.0582 -3.2849 -4.2132 -4.5851
 12.2959  0.2567 -1.1348 -7.8512  0.2727

Model quadratic weights:
  0.2379 -0.0684  2.6120 -3.1128 59.7557
 -5.1118  0.3890  1.9664  9.7555 13.9667

Model interaction weights:
   0.1145  -0.1991   0.7579  -9.0780   6.2090
   1.4729   0.7399   2.3095   0.6664   0.1702
   0.7738   1.2678  -1.5112  -0.0329  -0.0037
  -0.2937   0.2082   7.0970 -26.0752  27.2721
   1.7366  -2.0284   7.2057   0.3844 -54.9873
  64.2456   5.7804  -0.8845  17.7970 -49.7961
 -35.7050  -6.6220 -13.5282 -13.6286  92.1111
  -6.0483   1.8248  -0.7395 -96.0245   2.7453
  -0.3896  -6.1198   0.7387   4.7425  -7.4372

Model bias/intercept:   2.2539

Evaluating model
Accuracy train (within 0.10) = 0.2339
Accuracy test (within 0.10) = 0.2400

MSE train = 0.0024
MSE test = 0.0034

Predicting for x =
   0.5900   1.0000   0.3210   0.1010   0.1570
   0.0932   0.3800   0.4000   0.4860   0.0870
Predicted y = 0.2265

End demo

I didn’t expect great results, but the quadratic regression model was even worse than I guessed it would be. The model scored just 23.39% accuracy on the training data (80 out of 342 correct) and 24.00% on the test data (24 out of 100 correct). A prediction is scored correct if it is within 10% of the true target value.
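The scoring rule can be expressed in a couple of lines. A Python sketch of the within-10% accuracy criterion (the demo uses the equivalent C# Accuracy() method):

```python
# a prediction is correct if it's within pct of the true target value
def accuracy(preds, actuals, pct=0.10):
    n_correct = sum(1 for p, a in zip(preds, actuals)
                    if abs(p - a) < abs(pct * a))
    return n_correct / len(actuals)

print(accuracy([0.95, 1.50, 2.00], [1.00, 1.00, 2.10]))  # 2 of 3 correct
```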

Interesting. It’s usually impossible to tell in advance if a quadratic regression model will fit a particular dataset, and this is a good example of that.



Quadratic regression is an old machine learning technique, but one which still has appeal for certain problem scenarios. I like old Japanese science fiction movies from the 1950s and 1960s — they still have entertainment appeal to me. Here are three that feature aliens with flying saucers.

Left: “Warning from Space” (1956) – Friendly aliens who look like human-sized starfish travel to Earth to warn that a rogue star is on a collision course with our planet. With the aliens’ help, the rogue star is destroyed just before impact with Earth. I give this movie my personal B grade.

Center: “Invasion of Astro-Monster” (1965) (also known as “Godzilla vs. Monster Zero”) – Planet X is discovered directly opposite the Sun (so it’s hidden from Earth). Planet X is inhabited by Xiliens who want to borrow Godzilla and Rodan to fight Monster Zero, aka King Ghidorah, a giant three-headed dragon creature who is systematically destroying planet X. But the Xiliens secretly want to conquer Earth. The Xiliens are defeated. My grade = B-.

Right: “Battle in Outer Space” (1959) – The evil planet Natal threatens Earth. They intend to colonize and enslave everyone. Early battles don’t go well for Earth but newly developed Atomic Heat Cannons mounted on space fighters that look like X-15 rocket planes finally save the Earth. The aliens had large mother-ship flying saucers (shown) and smaller fighter saucers. My grade = B.


Demo program. Replace “lt” (less than), “gt” (greater than), “lte”, “gte” with Boolean operator symbols. (My blog editor chokes on symbols).

using System;
using System.IO;
using System.Collections.Generic;

namespace QuadraticRegression
{
  internal class QuadraticRegressionProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin C# quadratic regression ");

      Console.WriteLine("\nLoading diabetes train" +
        " (342) and test (100) data");
      string trainFile =
        "..\\..\\..\\Data\\diabetes_norm_train_342.txt";
      int[] colsX = 
        new int[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
      int colY = 10;
      double[][] trainX =
        MatLoad(trainFile, colsX, ',', "#");
      double[] trainY =
        MatToVec(MatLoad(trainFile,
        new int[] { colY }, ',', "#"));

      string testFile =
        "..\\..\\..\\Data\\diabetes_norm_test_100.txt";
      double[][] testX =
        MatLoad(testFile, colsX, ',', "#");
      double[] testY =
        MatToVec(MatLoad(testFile,
        new int[] { colY }, ',', "#"));
      Console.WriteLine("Done ");

      Console.WriteLine("\nFirst three train X: ");
      for (int i = 0; i "lt" 3; ++i)
        VecShow(trainX[i], 4, 8);

      Console.WriteLine("\nFirst three train y: ");
      for (int i = 0; i "lt" 3; ++i)
        Console.WriteLine(trainY[i].ToString("F4").
          PadLeft(8));

      // 2. create and train model
      Console.WriteLine("\nCreating quadratic regression" +
        " model ");
      QuadraticRegressor model = new QuadraticRegressor();

      // closed-form pseudo-inverse via normal equations
      // (Cholesky inverse)
      Console.WriteLine("\nTraining using pinv via normal " +
        "equations (Cholesky)");
      model.TrainClosed(trainX, trainY);
      Console.WriteLine("Done ");

      // 3. show model weights
      Console.WriteLine("\nModel base weights: ");
      int dim = trainX[0].Length;
      for (int i = 0; i "lt" dim; ++i)
        Console.Write(model.weights[i].
          ToString("F4").PadLeft(9));
      Console.WriteLine("");

      Console.WriteLine("\nModel quadratic weights: ");
      for (int i = dim; i "lt" dim + dim; ++i)
        Console.Write(model.weights[i].
          ToString("F4").PadLeft(9));
      Console.WriteLine("");

      Console.WriteLine("\nModel interaction weights: ");
      for (int i = dim + dim; i "lt" model.weights.Length; ++i)
      {
        Console.Write(model.weights[i].
          ToString("F4").PadLeft(9));
        if ((i - (dim + dim)) % 5 == 4)  // 5 weights per row
          Console.WriteLine("");
      }
      Console.WriteLine("");

      Console.WriteLine("\nModel bias/intercept: " +
        model.bias.ToString("F4").PadLeft(8));

      // 4. evaluate model
      Console.WriteLine("\nEvaluating model ");
      double accTrain = model.Accuracy(trainX, trainY, 0.10);
      Console.WriteLine("Accuracy train (within 0.10) = " +
        accTrain.ToString("F4"));
      double accTest = model.Accuracy(testX, testY, 0.10);
      Console.WriteLine("Accuracy test (within 0.10) = " +
        accTest.ToString("F4"));

      double mseTrain = model.MSE(trainX, trainY);
      Console.WriteLine("\nMSE train = " +
        mseTrain.ToString("F4"));
      double mseTest = model.MSE(testX, testY);
      Console.WriteLine("MSE test = " +
        mseTest.ToString("F4"));

      // 5. use model
      double[] x = trainX[0];
      Console.WriteLine("\nPredicting for x = ");
      VecShow(x, 4, 9);
      double predY = model.Predict(x);
      Console.WriteLine("\nPredicted y = " +
        predY.ToString("F4"));

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main

    // ------------------------------------------------------
    // helpers for Main()
    // ------------------------------------------------------

    static double[][] MatLoad(string fn, int[] usecols,
      char sep, string comment)
    {
      List"lt"double[]"gt" result =
        new List"lt"double[]"gt"();
      string line = "";
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        string[] tokens = line.Split(sep);
        List"lt"double"gt" lst = new List"lt"double"gt"();
        for (int j = 0; j "lt" usecols.Length; ++j)
          lst.Add(double.Parse(tokens[usecols[j]]));
        double[] row = lst.ToArray();
        result.Add(row);
      }
      sr.Close(); ifs.Close();
      return result.ToArray();
    }

    static double[] MatToVec(double[][] mat)
    {
      int nRows = mat.Length;
      int nCols = mat[0].Length;
      double[] result = new double[nRows * nCols];
      int k = 0;
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[k++] = mat[i][j];
      return result;
    }

    static void VecShow(double[] vec, int dec, int wid)
    {
      for (int i = 0; i "lt" vec.Length; ++i)
        Console.Write(vec[i].ToString("F" + dec).
          PadLeft(wid));
      Console.WriteLine("");
    }
  } // class QuadraticRegressionProgram

  // ========================================================

  public class QuadraticRegressor
  {
    public double[] weights;  // regular, quad, interactions
    public double bias;
    private Random rnd;  // for SGD training

    public QuadraticRegressor(int seed = 0)
    {
      this.weights = new double[0];  // empty, but not null
      this.bias = 0; // dummy value
      this.rnd = new Random(seed);
    }

    // ------------------------------------------------------

    public double Predict(double[] x)
    {
      int dim = x.Length;
      double result = 0.0;

      int p = 0; // points into this.weights
      for (int i = 0; i "lt" dim; ++i)   // regular
        result += x[i] * this.weights[p++];

      for (int i = 0; i "lt" dim; ++i)  // quadratic
        result += x[i] * x[i] * this.weights[p++];

      for (int i = 0; i "lt" dim - 1; ++i)  // interactions
        for (int j = i + 1; j "lt" dim; ++j)
          result += x[i] * x[j] * this.weights[p++];

      result += this.bias;
      return result;
    }

    // ------------------------------------------------------

    // ------------------------------------------------------

    public double Accuracy(double[][] dataX, double[] dataY,
      double pctClose)
    {
      int numCorrect = 0; int numWrong = 0;
      for (int i = 0; i "lt" dataX.Length; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        if (Math.Abs(predY - actualY) "lt"
          Math.Abs(pctClose * actualY))
          ++numCorrect;
        else
          ++numWrong;
      }
      return (numCorrect * 1.0) / (numWrong + numCorrect);
    }

    // ------------------------------------------------------

    public double MSE(double[][] dataX,
      double[] dataY)
    {
      int n = dataX.Length;
      double sum = 0.0;
      for (int i = 0; i "lt" n; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        sum += (actualY - predY) * (actualY - predY);
      }
      return sum / n;
    }

    // ======================================================

    public void TrainClosed(double[][] trainX,
      double[] trainY)
    {
      // pseudo-inverse training via normal equations
      // w = inv(Xt * X) * Xt * y (X is the design matrix)
      // Cholesky inverse: Xt * X is positive definite
      int dim = trainX[0].Length;
      int nInteractions = (dim * (dim - 1)) / 2;
      this.weights = new double[dim + dim + nInteractions];

      double[][] X = MatAugment(trainX); // add quad terms
      double[][] Xt = MatTranspose(X);
      double[][] XtX = MatProduct(Xt, X); // could fail
      for (int i = 0; i "lt" XtX.Length; ++i)
        XtX[i][i] += 1.0e-8;  // condition Xt*X
      double[][] invXtX = MatInvCholesky(XtX);
      double[][] invXtXXt = MatProduct(invXtX, Xt);
      double[] biasAndWts = MatVecProd(invXtXXt, trainY);
      this.bias = biasAndWts[0];
      for (int i = 1; i "lt" biasAndWts.Length; ++i)
        this.weights[i - 1] = biasAndWts[i];
      return;
    }

    // helpers for TrainClosed: 
    // MatMake, MatIdentity, MatAugment, MatTranspose,
    // MatProduct, MatVecProd, MatInvCholesky,
    // MatDecompCholesky, MatInvLowerTri,
    // MatInvUpperTri
    
    private static double[][] MatMake(int nRows, int nCols)
    {
      double[][] result = new double[nRows][];
      for (int i = 0; i "lt" nRows; ++i)
        result[i] = new double[nCols];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatIdentity(int n)
    {
      double[][] result = MatMake(n, n);
      for (int i = 0; i "lt" n; ++i)
        result[i][i] = 1.0;
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatAugment(double[][] X)
    {
      // design matrix: programmatically add quadratic
      // terms, then add leading col of 1.0s
      int nRows = X.Length;  // src and dest
      int dim = X[0].Length;
      int nInteractions = dim * (dim - 1) / 2;
      int nColsDest = 1 + dim + dim + nInteractions;

      double[][] result = new double[nRows][];
      for (int i = 0; i "lt" nRows; i++)
        result[i] = new double[nColsDest];

      for (int i = 0; i "lt" nRows; ++i)
      {
        int p = 0; // points to column of result

        result[i][p++] = 1.0;  // leading 1.0

        for (int j = 0; j "lt" dim;  ++j) // base
          result[i][p++] = X[i][j];

        for (int j = 0; j "lt" dim;  ++j) // quadratic
          result[i][p++] = X[i][j] * X[i][j];

        for (int j = 0; j "lt" dim - 1; ++j)  // interactions
          for (int k = j + 1; k "lt" dim; ++k)
            result[i][p++] = X[i][j] * X[i][k];
      }

      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatTranspose(double[][] M)
    {
      int nRows = M.Length; int nCols = M[0].Length;
      double[][] result = MatMake(nCols, nRows);  // note
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[j][i] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatProduct(double[][] A,
      double[][] B)
    {
      int aRows = A.Length; int aCols = A[0].Length;
      int bRows = B.Length; int bCols = B[0].Length;
      if (aCols != bRows)
        throw new Exception("Non-conformable matrices");

      double[][] result = MatMake(aRows, bCols);

      for (int i = 0; i "lt" aRows; ++i) // each row of A
        for (int j = 0; j "lt" bCols; ++j) // each col of B
          for (int k = 0; k "lt" aCols; ++k)
            result[i][j] += A[i][k] * B[k][j];

      return result;
    }

    // ------------------------------------------------------

    private static double[] MatVecProd(double[][] M,
      double[] v)
    {
      // return a regular vector
      int nRows = M.Length; int nCols = M[0].Length;
      int n = v.Length;
      if (nCols != n)
        throw new Exception("non-conform in MatVecProd");

      double[] result = new double[nRows];
      for (int i = 0; i "lt" nRows; ++i)
        for (int k = 0; k "lt" nCols; ++k)
          result[i] += M[i][k] * v[k];

      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatInvCholesky(double[][] A)
    {
      // A must be square symmetric positive definite
      // A = L * Lt, inv(A) = inv(Lt) * inv(L)
      double[][] L = MatDecompCholesky(A); // lower tri
      double[][] Lt = MatTranspose(L); // upper tri
      double[][] invL = MatInvLowerTri(L);
      double[][] invLt = MatInvUpperTri(Lt);
      double[][] result = MatProduct(invLt, invL);
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatDecompCholesky(double[][] A)
    {
      // A must be square, symmetric, positive definite
      int m = A.Length; int n = A[0].Length;  // m == n
      double[][] L = new double[n][];
      for (int i = 0; i "lt" n; ++i)
        L[i] = new double[n];

      for (int i = 0; i "lt" n; ++i)
      {
        for (int j = 0; j "lte" i; ++j)
        {
          double sum = 0.0;
          for (int k = 0; k "lt" j; ++k)
            sum += L[i][k] * L[j][k];
          if (i == j)
          {
            double tmp = A[i][i] - sum;
            if (tmp "lt" 0.0)
              throw new
                Exception("decomp Cholesky fatal");
            L[i][j] = Math.Sqrt(tmp);
          }
          else
          {
            if (L[j][j] == 0.0)
              throw new
                Exception("decomp Cholesky fatal ");
            L[i][j] = (A[i][j] - sum) / L[j][j];
          }
        } // j
      } // i

      return L;
    }

    // ------------------------------------------------------

    private static double[][] MatInvLowerTri(double[][] A)
    {
      int n = A.Length;  // must be square matrix
      double[][] result = MatIdentity(n);

      for (int k = 0; k "lt" n; ++k)
      {
        for (int j = 0; j "lt" n; ++j)
        {
          for (int i = 0; i "lt" k; ++i)
          {
            result[k][j] -= result[i][j] * A[k][i];
          }
          result[k][j] /= (A[k][k] + 1.0e-8);
        }
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatInvUpperTri(double[][] A)
    {
      int n = A.Length;  // must be square matrix
      double[][] result = MatIdentity(n);

      for (int k = 0; k "lt" n; ++k)
      {
        for (int j = 0; j "lt" n; ++j)
        {
          for (int i = 0; i "lt" k; ++i)
          {
            result[j][k] -= result[j][i] * A[i][k];
          }
          result[j][k] /= (A[k][k] + 1.0e-8); // avoid 0
        }
      }
      return result;
    }

    // ------------------------------------------------------

  } // class QuadraticRegressor

  // ========================================================

} // ns

Training data:


# diabetes_norm_train_342.txt
# cols [0] to [9] predictors. col [10] target
# norm division constants:
# 100, -1, 100, 1000, 1000, 1000, 100, 10, 10, 1000, 1000
#
0.5900, 1.0000, 0.3210, 0.1010, 0.1570, 0.0932, 0.3800, 0.4000, 0.4860, 0.0870, 0.1510
0.4800, 0.0000, 0.2160, 0.0870, 0.1830, 0.1032, 0.7000, 0.3000, 0.3892, 0.0690, 0.0750
0.7200, 1.0000, 0.3050, 0.0930, 0.1560, 0.0936, 0.4100, 0.4000, 0.4673, 0.0850, 0.1410
0.2400, 0.0000, 0.2530, 0.0840, 0.1980, 0.1314, 0.4000, 0.5000, 0.4890, 0.0890, 0.2060
0.5000, 0.0000, 0.2300, 0.1010, 0.1920, 0.1254, 0.5200, 0.4000, 0.4291, 0.0800, 0.1350
0.2300, 0.0000, 0.2260, 0.0890, 0.1390, 0.0648, 0.6100, 0.2000, 0.4190, 0.0680, 0.0970
0.3600, 1.0000, 0.2200, 0.0900, 0.1600, 0.0996, 0.5000, 0.3000, 0.3951, 0.0820, 0.1380
0.6600, 1.0000, 0.2620, 0.1140, 0.2550, 0.1850, 0.5600, 0.4550, 0.4249, 0.0920, 0.0630
0.6000, 1.0000, 0.3210, 0.0830, 0.1790, 0.1194, 0.4200, 0.4000, 0.4477, 0.0940, 0.1100
0.2900, 0.0000, 0.3000, 0.0850, 0.1800, 0.0934, 0.4300, 0.4000, 0.5385, 0.0880, 0.3100
0.2200, 0.0000, 0.1860, 0.0970, 0.1140, 0.0576, 0.4600, 0.2000, 0.3951, 0.0830, 0.1010
0.5600, 1.0000, 0.2800, 0.0850, 0.1840, 0.1448, 0.3200, 0.6000, 0.3584, 0.0770, 0.0690
0.5300, 0.0000, 0.2370, 0.0920, 0.1860, 0.1092, 0.6200, 0.3000, 0.4304, 0.0810, 0.1790
0.5000, 1.0000, 0.2620, 0.0970, 0.1860, 0.1054, 0.4900, 0.4000, 0.5063, 0.0880, 0.1850
0.6100, 0.0000, 0.2400, 0.0910, 0.2020, 0.1154, 0.7200, 0.3000, 0.4291, 0.0730, 0.1180
0.3400, 1.0000, 0.2470, 0.1180, 0.2540, 0.1842, 0.3900, 0.7000, 0.5037, 0.0810, 0.1710
0.4700, 0.0000, 0.3030, 0.1090, 0.2070, 0.1002, 0.7000, 0.3000, 0.5215, 0.0980, 0.1660
0.6800, 1.0000, 0.2750, 0.1110, 0.2140, 0.1470, 0.3900, 0.5000, 0.4942, 0.0910, 0.1440
0.3800, 0.0000, 0.2540, 0.0840, 0.1620, 0.1030, 0.4200, 0.4000, 0.4443, 0.0870, 0.0970
0.4100, 0.0000, 0.2470, 0.0830, 0.1870, 0.1082, 0.6000, 0.3000, 0.4543, 0.0780, 0.1680
0.3500, 0.0000, 0.2110, 0.0820, 0.1560, 0.0878, 0.5000, 0.3000, 0.4511, 0.0950, 0.0680
0.2500, 1.0000, 0.2430, 0.0950, 0.1620, 0.0986, 0.5400, 0.3000, 0.3850, 0.0870, 0.0490
0.2500, 0.0000, 0.2600, 0.0920, 0.1870, 0.1204, 0.5600, 0.3000, 0.3970, 0.0880, 0.0680
0.6100, 1.0000, 0.3200, 0.1037, 0.2100, 0.0852, 0.3500, 0.6000, 0.6107, 0.1240, 0.2450
0.3100, 0.0000, 0.2970, 0.0880, 0.1670, 0.1034, 0.4800, 0.4000, 0.4357, 0.0780, 0.1840
0.3000, 1.0000, 0.2520, 0.0830, 0.1780, 0.1184, 0.3400, 0.5000, 0.4852, 0.0830, 0.2020
0.1900, 0.0000, 0.1920, 0.0870, 0.1240, 0.0540, 0.5700, 0.2000, 0.4174, 0.0900, 0.1370
0.4200, 0.0000, 0.3190, 0.0830, 0.1580, 0.0876, 0.5300, 0.3000, 0.4466, 0.1010, 0.0850
0.6300, 0.0000, 0.2440, 0.0730, 0.1600, 0.0914, 0.4800, 0.3000, 0.4635, 0.0780, 0.1310
0.6700, 1.0000, 0.2580, 0.1130, 0.1580, 0.0542, 0.6400, 0.2000, 0.5293, 0.1040, 0.2830
0.3200, 0.0000, 0.3050, 0.0890, 0.1820, 0.1106, 0.5600, 0.3000, 0.4344, 0.0890, 0.1290
0.4200, 0.0000, 0.2030, 0.0710, 0.1610, 0.0812, 0.6600, 0.2000, 0.4234, 0.0810, 0.0590
0.5800, 1.0000, 0.3800, 0.1030, 0.1500, 0.1072, 0.2200, 0.7000, 0.4644, 0.0980, 0.3410
0.5700, 0.0000, 0.2170, 0.0940, 0.1570, 0.0580, 0.8200, 0.2000, 0.4443, 0.0920, 0.0870
0.5300, 0.0000, 0.2050, 0.0780, 0.1470, 0.0842, 0.5200, 0.3000, 0.3989, 0.0750, 0.0650
0.6200, 1.0000, 0.2350, 0.0803, 0.2250, 0.1128, 0.8600, 0.2620, 0.4875, 0.0960, 0.1020
0.5200, 0.0000, 0.2850, 0.1100, 0.1950, 0.0972, 0.6000, 0.3000, 0.5242, 0.0850, 0.2650
0.4600, 0.0000, 0.2740, 0.0780, 0.1710, 0.0880, 0.5800, 0.3000, 0.4828, 0.0900, 0.2760
0.4800, 1.0000, 0.3300, 0.1230, 0.2530, 0.1636, 0.4400, 0.6000, 0.5425, 0.0970, 0.2520
0.4800, 1.0000, 0.2770, 0.0730, 0.1910, 0.1194, 0.4600, 0.4000, 0.4852, 0.0920, 0.0900
0.5000, 1.0000, 0.2560, 0.1010, 0.2290, 0.1622, 0.4300, 0.5000, 0.4779, 0.1140, 0.1000
0.2100, 0.0000, 0.2010, 0.0630, 0.1350, 0.0690, 0.5400, 0.3000, 0.4094, 0.0890, 0.0550
0.3200, 1.0000, 0.2540, 0.0903, 0.1530, 0.1004, 0.3400, 0.4500, 0.4533, 0.0830, 0.0610
0.5400, 0.0000, 0.2420, 0.0740, 0.2040, 0.1090, 0.8200, 0.2000, 0.4174, 0.1090, 0.0920
0.6100, 1.0000, 0.3270, 0.0970, 0.1770, 0.1184, 0.2900, 0.6000, 0.4997, 0.0870, 0.2590
0.5600, 1.0000, 0.2310, 0.1040, 0.1810, 0.1164, 0.4700, 0.4000, 0.4477, 0.0790, 0.0530
0.3300, 0.0000, 0.2530, 0.0850, 0.1550, 0.0850, 0.5100, 0.3000, 0.4554, 0.0700, 0.1900
0.2700, 0.0000, 0.1960, 0.0780, 0.1280, 0.0680, 0.4300, 0.3000, 0.4443, 0.0710, 0.1420
0.6700, 1.0000, 0.2250, 0.0980, 0.1910, 0.1192, 0.6100, 0.3000, 0.3989, 0.0860, 0.0750
0.3700, 1.0000, 0.2770, 0.0930, 0.1800, 0.1194, 0.3000, 0.6000, 0.5030, 0.0880, 0.1420
0.5800, 0.0000, 0.2570, 0.0990, 0.1570, 0.0916, 0.4900, 0.3000, 0.4407, 0.0930, 0.1550
0.6500, 1.0000, 0.2790, 0.1030, 0.1590, 0.0968, 0.4200, 0.4000, 0.4615, 0.0860, 0.2250
0.3400, 0.0000, 0.2550, 0.0930, 0.2180, 0.1440, 0.5700, 0.4000, 0.4443, 0.0880, 0.0590
0.4600, 0.0000, 0.2490, 0.1150, 0.1980, 0.1296, 0.5400, 0.4000, 0.4277, 0.1030, 0.1040
0.3500, 0.0000, 0.2870, 0.0970, 0.2040, 0.1268, 0.6400, 0.3000, 0.4190, 0.0930, 0.1820
0.3700, 0.0000, 0.2180, 0.0840, 0.1840, 0.1010, 0.7300, 0.3000, 0.3912, 0.0930, 0.1280
0.3700, 0.0000, 0.3020, 0.0870, 0.1660, 0.0960, 0.4000, 0.4150, 0.5011, 0.0870, 0.0520
0.4100, 0.0000, 0.2050, 0.0800, 0.1240, 0.0488, 0.6400, 0.2000, 0.4025, 0.0750, 0.0370
0.6000, 0.0000, 0.2040, 0.1050, 0.1980, 0.0784, 0.9900, 0.2000, 0.4635, 0.0790, 0.1700
0.6600, 1.0000, 0.2400, 0.0980, 0.2360, 0.1464, 0.5800, 0.4000, 0.5063, 0.0960, 0.1700
0.2900, 0.0000, 0.2600, 0.0830, 0.1410, 0.0652, 0.6400, 0.2000, 0.4078, 0.0830, 0.0610
0.3700, 1.0000, 0.2680, 0.0790, 0.1570, 0.0980, 0.2800, 0.6000, 0.5043, 0.0960, 0.1440
0.4100, 1.0000, 0.2570, 0.0830, 0.1810, 0.1066, 0.6600, 0.3000, 0.3738, 0.0850, 0.0520
0.3900, 0.0000, 0.2290, 0.0770, 0.2040, 0.1432, 0.4600, 0.4000, 0.4304, 0.0740, 0.1280
0.6700, 1.0000, 0.2400, 0.0830, 0.1430, 0.0772, 0.4900, 0.3000, 0.4431, 0.0940, 0.0710
0.3600, 1.0000, 0.2410, 0.1120, 0.1930, 0.1250, 0.3500, 0.6000, 0.5106, 0.0950, 0.1630
0.4600, 1.0000, 0.2470, 0.0850, 0.1740, 0.1232, 0.3000, 0.6000, 0.4644, 0.0960, 0.1500
0.6000, 1.0000, 0.2500, 0.0897, 0.1850, 0.1208, 0.4600, 0.4020, 0.4511, 0.0920, 0.0970
0.5900, 1.0000, 0.2360, 0.0830, 0.1650, 0.1000, 0.4700, 0.4000, 0.4500, 0.0920, 0.1600
0.5300, 0.0000, 0.2210, 0.0930, 0.1340, 0.0762, 0.4600, 0.3000, 0.4078, 0.0960, 0.1780
0.4800, 0.0000, 0.1990, 0.0910, 0.1890, 0.1096, 0.6900, 0.3000, 0.3951, 0.1010, 0.0480
0.4800, 0.0000, 0.2950, 0.1310, 0.2070, 0.1322, 0.4700, 0.4000, 0.4935, 0.1060, 0.2700
0.6600, 1.0000, 0.2600, 0.0910, 0.2640, 0.1466, 0.6500, 0.4000, 0.5568, 0.0870, 0.2020
0.5200, 1.0000, 0.2450, 0.0940, 0.2170, 0.1494, 0.4800, 0.5000, 0.4585, 0.0890, 0.1110
0.5200, 1.0000, 0.2660, 0.1110, 0.2090, 0.1264, 0.6100, 0.3000, 0.4682, 0.1090, 0.0850
0.4600, 1.0000, 0.2350, 0.0870, 0.1810, 0.1148, 0.4400, 0.4000, 0.4710, 0.0980, 0.0420
0.4000, 1.0000, 0.2900, 0.1150, 0.0970, 0.0472, 0.3500, 0.2770, 0.4304, 0.0950, 0.1700
0.2200, 0.0000, 0.2300, 0.0730, 0.1610, 0.0978, 0.5400, 0.3000, 0.3829, 0.0910, 0.2000
0.5000, 0.0000, 0.2100, 0.0880, 0.1400, 0.0718, 0.3500, 0.4000, 0.5112, 0.0710, 0.2520
0.2000, 0.0000, 0.2290, 0.0870, 0.1910, 0.1282, 0.5300, 0.4000, 0.3892, 0.0850, 0.1130
0.6800, 0.0000, 0.2750, 0.1070, 0.2410, 0.1496, 0.6400, 0.4000, 0.4920, 0.0900, 0.1430
0.5200, 1.0000, 0.2430, 0.0860, 0.1970, 0.1336, 0.4400, 0.5000, 0.4575, 0.0910, 0.0510
0.4400, 0.0000, 0.2310, 0.0870, 0.2130, 0.1264, 0.7700, 0.3000, 0.3871, 0.0720, 0.0520
0.3800, 0.0000, 0.2730, 0.0810, 0.1460, 0.0816, 0.4700, 0.3000, 0.4466, 0.0810, 0.2100
0.4900, 0.0000, 0.2270, 0.0653, 0.1680, 0.0962, 0.6200, 0.2710, 0.3892, 0.0600, 0.0650
0.6100, 0.0000, 0.3300, 0.0950, 0.1820, 0.1148, 0.5400, 0.3000, 0.4190, 0.0740, 0.1410
0.2900, 1.0000, 0.1940, 0.0830, 0.1520, 0.1058, 0.3900, 0.4000, 0.3584, 0.0830, 0.0550
0.6100, 0.0000, 0.2580, 0.0980, 0.2350, 0.1258, 0.7600, 0.3000, 0.5112, 0.0820, 0.1340
0.3400, 1.0000, 0.2260, 0.0750, 0.1660, 0.0918, 0.6000, 0.3000, 0.4263, 0.1080, 0.0420
0.3600, 0.0000, 0.2190, 0.0890, 0.1890, 0.1052, 0.6800, 0.3000, 0.4369, 0.0960, 0.1110
0.5200, 0.0000, 0.2400, 0.0830, 0.1670, 0.0866, 0.7100, 0.2000, 0.3850, 0.0940, 0.0980
0.6100, 0.0000, 0.3120, 0.0790, 0.2350, 0.1568, 0.4700, 0.5000, 0.5050, 0.0960, 0.1640
0.4300, 0.0000, 0.2680, 0.1230, 0.1930, 0.1022, 0.6700, 0.3000, 0.4779, 0.0940, 0.0480
0.3500, 0.0000, 0.2040, 0.0650, 0.1870, 0.1056, 0.6700, 0.2790, 0.4277, 0.0780, 0.0960
0.2700, 0.0000, 0.2480, 0.0910, 0.1890, 0.1068, 0.6900, 0.3000, 0.4190, 0.0690, 0.0900
0.2900, 0.0000, 0.2100, 0.0710, 0.1560, 0.0970, 0.3800, 0.4000, 0.4654, 0.0900, 0.1620
0.6400, 1.0000, 0.2730, 0.1090, 0.1860, 0.1076, 0.3800, 0.5000, 0.5308, 0.0990, 0.1500
0.4100, 0.0000, 0.3460, 0.0873, 0.2050, 0.1426, 0.4100, 0.5000, 0.4673, 0.1100, 0.2790
0.4900, 1.0000, 0.2590, 0.0910, 0.1780, 0.1066, 0.5200, 0.3000, 0.4575, 0.0750, 0.0920
0.4800, 0.0000, 0.2040, 0.0980, 0.2090, 0.1394, 0.4600, 0.5000, 0.4771, 0.0780, 0.0830
0.5300, 0.0000, 0.2800, 0.0880, 0.2330, 0.1438, 0.5800, 0.4000, 0.5050, 0.0910, 0.1280
0.5300, 1.0000, 0.2220, 0.1130, 0.1970, 0.1152, 0.6700, 0.3000, 0.4304, 0.1000, 0.1020
0.2300, 0.0000, 0.2900, 0.0900, 0.2160, 0.1314, 0.6500, 0.3000, 0.4585, 0.0910, 0.3020
0.6500, 1.0000, 0.3020, 0.0980, 0.2190, 0.1606, 0.4000, 0.5000, 0.4522, 0.0840, 0.1980
0.4100, 0.0000, 0.3240, 0.0940, 0.1710, 0.1044, 0.5600, 0.3000, 0.3970, 0.0760, 0.0950
0.5500, 1.0000, 0.2340, 0.0830, 0.1660, 0.1016, 0.4600, 0.4000, 0.4522, 0.0960, 0.0530
0.2200, 0.0000, 0.1930, 0.0820, 0.1560, 0.0932, 0.5200, 0.3000, 0.3989, 0.0710, 0.1340
0.5600, 0.0000, 0.3100, 0.0787, 0.1870, 0.1414, 0.3400, 0.5500, 0.4060, 0.0900, 0.1440
0.5400, 1.0000, 0.3060, 0.1033, 0.1440, 0.0798, 0.3000, 0.4800, 0.5142, 0.1010, 0.2320
0.5900, 1.0000, 0.2550, 0.0953, 0.1900, 0.1394, 0.3500, 0.5430, 0.4357, 0.1170, 0.0810
0.6000, 1.0000, 0.2340, 0.0880, 0.1530, 0.0898, 0.5800, 0.3000, 0.3258, 0.0950, 0.1040
0.5400, 0.0000, 0.2680, 0.0870, 0.2060, 0.1220, 0.6800, 0.3000, 0.4382, 0.0800, 0.0590
0.2500, 0.0000, 0.2830, 0.0870, 0.1930, 0.1280, 0.4900, 0.4000, 0.4382, 0.0920, 0.2460
0.5400, 1.0000, 0.2770, 0.1130, 0.2000, 0.1284, 0.3700, 0.5000, 0.5153, 0.1130, 0.2970
0.5500, 0.0000, 0.3660, 0.1130, 0.1990, 0.0944, 0.4300, 0.4630, 0.5730, 0.0970, 0.2580
0.4000, 1.0000, 0.2650, 0.0930, 0.2360, 0.1470, 0.3700, 0.7000, 0.5561, 0.0920, 0.2290
0.6200, 1.0000, 0.3180, 0.1150, 0.1990, 0.1286, 0.4400, 0.5000, 0.4883, 0.0980, 0.2750
0.6500, 0.0000, 0.2440, 0.1200, 0.2220, 0.1356, 0.3700, 0.6000, 0.5509, 0.1240, 0.2810
0.3300, 1.0000, 0.2540, 0.1020, 0.2060, 0.1410, 0.3900, 0.5000, 0.4868, 0.1050, 0.1790
0.5300, 0.0000, 0.2200, 0.0940, 0.1750, 0.0880, 0.5900, 0.3000, 0.4942, 0.0980, 0.2000
0.3500, 0.0000, 0.2680, 0.0980, 0.1620, 0.1036, 0.4500, 0.4000, 0.4205, 0.0860, 0.2000
0.6600, 0.0000, 0.2800, 0.1010, 0.1950, 0.1292, 0.4000, 0.5000, 0.4860, 0.0940, 0.1730
0.6200, 1.0000, 0.3390, 0.1010, 0.2210, 0.1564, 0.3500, 0.6000, 0.4997, 0.1030, 0.1800
0.5000, 1.0000, 0.2960, 0.0943, 0.3000, 0.2424, 0.3300, 0.9090, 0.4812, 0.1090, 0.0840
0.4700, 0.0000, 0.2860, 0.0970, 0.1640, 0.0906, 0.5600, 0.3000, 0.4466, 0.0880, 0.1210
0.4700, 1.0000, 0.2560, 0.0940, 0.1650, 0.0748, 0.4000, 0.4000, 0.5526, 0.0930, 0.1610
0.2400, 0.0000, 0.2070, 0.0870, 0.1490, 0.0806, 0.6100, 0.2000, 0.3611, 0.0780, 0.0990
0.5800, 1.0000, 0.2620, 0.0910, 0.2170, 0.1242, 0.7100, 0.3000, 0.4691, 0.0680, 0.1090
0.3400, 0.0000, 0.2060, 0.0870, 0.1850, 0.1122, 0.5800, 0.3000, 0.4304, 0.0740, 0.1150
0.5100, 0.0000, 0.2790, 0.0960, 0.1960, 0.1222, 0.4200, 0.5000, 0.5069, 0.1200, 0.2680
0.3100, 1.0000, 0.3530, 0.1250, 0.1870, 0.1124, 0.4800, 0.4000, 0.4890, 0.1090, 0.2740
0.2200, 0.0000, 0.1990, 0.0750, 0.1750, 0.1086, 0.5400, 0.3000, 0.4127, 0.0720, 0.1580
0.5300, 1.0000, 0.2440, 0.0920, 0.2140, 0.1460, 0.5000, 0.4000, 0.4500, 0.0970, 0.1070
0.3700, 1.0000, 0.2140, 0.0830, 0.1280, 0.0696, 0.4900, 0.3000, 0.3850, 0.0840, 0.0830
0.2800, 0.0000, 0.3040, 0.0850, 0.1980, 0.1156, 0.6700, 0.3000, 0.4344, 0.0800, 0.1030
0.4700, 0.0000, 0.3160, 0.0840, 0.1540, 0.0880, 0.3000, 0.5100, 0.5199, 0.1050, 0.2720
0.2300, 0.0000, 0.1880, 0.0780, 0.1450, 0.0720, 0.6300, 0.2000, 0.3912, 0.0860, 0.0850
0.5000, 0.0000, 0.3100, 0.1230, 0.1780, 0.1050, 0.4800, 0.4000, 0.4828, 0.0880, 0.2800
0.5800, 1.0000, 0.3670, 0.1170, 0.1660, 0.0938, 0.4400, 0.4000, 0.4949, 0.1090, 0.3360
0.5500, 0.0000, 0.3210, 0.1100, 0.1640, 0.0842, 0.4200, 0.4000, 0.5242, 0.0900, 0.2810
0.6000, 1.0000, 0.2770, 0.1070, 0.1670, 0.1146, 0.3800, 0.4000, 0.4277, 0.0950, 0.1180
0.4100, 0.0000, 0.3080, 0.0810, 0.2140, 0.1520, 0.2800, 0.7600, 0.5136, 0.1230, 0.3170
0.6000, 1.0000, 0.2750, 0.1060, 0.2290, 0.1438, 0.5100, 0.4000, 0.5142, 0.0910, 0.2350
0.4000, 0.0000, 0.2690, 0.0920, 0.2030, 0.1198, 0.7000, 0.3000, 0.4190, 0.0810, 0.0600
0.5700, 1.0000, 0.3070, 0.0900, 0.2040, 0.1478, 0.3400, 0.6000, 0.4710, 0.0930, 0.1740
0.3700, 0.0000, 0.3830, 0.1130, 0.1650, 0.0946, 0.5300, 0.3000, 0.4466, 0.0790, 0.2590
0.4000, 1.0000, 0.3190, 0.0950, 0.1980, 0.1356, 0.3800, 0.5000, 0.4804, 0.0930, 0.1780
0.3300, 0.0000, 0.3500, 0.0890, 0.2000, 0.1304, 0.4200, 0.4760, 0.4927, 0.1010, 0.1280
0.3200, 1.0000, 0.2780, 0.0890, 0.2160, 0.1462, 0.5500, 0.4000, 0.4304, 0.0910, 0.0960
0.3500, 1.0000, 0.2590, 0.0810, 0.1740, 0.1024, 0.3100, 0.6000, 0.5313, 0.0820, 0.1260
0.5500, 0.0000, 0.3290, 0.1020, 0.1640, 0.1062, 0.4100, 0.4000, 0.4431, 0.0890, 0.2880
0.4900, 0.0000, 0.2600, 0.0930, 0.1830, 0.1002, 0.6400, 0.3000, 0.4543, 0.0880, 0.0880
0.3900, 1.0000, 0.2630, 0.1150, 0.2180, 0.1582, 0.3200, 0.7000, 0.4935, 0.1090, 0.2920
0.6000, 1.0000, 0.2230, 0.1130, 0.1860, 0.1258, 0.4600, 0.4000, 0.4263, 0.0940, 0.0710
0.6700, 1.0000, 0.2830, 0.0930, 0.2040, 0.1322, 0.4900, 0.4000, 0.4736, 0.0920, 0.1970
0.4100, 1.0000, 0.3200, 0.1090, 0.2510, 0.1706, 0.4900, 0.5000, 0.5056, 0.1030, 0.1860
0.4400, 0.0000, 0.2540, 0.0950, 0.1620, 0.0926, 0.5300, 0.3000, 0.4407, 0.0830, 0.0250
0.4800, 1.0000, 0.2330, 0.0893, 0.2120, 0.1428, 0.4600, 0.4610, 0.4754, 0.0980, 0.0840
0.4500, 0.0000, 0.2030, 0.0743, 0.1900, 0.1262, 0.4900, 0.3880, 0.4304, 0.0790, 0.0960
0.4700, 0.0000, 0.3040, 0.1200, 0.1990, 0.1200, 0.4600, 0.4000, 0.5106, 0.0870, 0.1950
0.4600, 0.0000, 0.2060, 0.0730, 0.1720, 0.1070, 0.5100, 0.3000, 0.4249, 0.0800, 0.0530
0.3600, 1.0000, 0.3230, 0.1150, 0.2860, 0.1994, 0.3900, 0.7000, 0.5472, 0.1120, 0.2170
0.3400, 0.0000, 0.2920, 0.0730, 0.1720, 0.1082, 0.4900, 0.4000, 0.4304, 0.0910, 0.1720
0.5300, 1.0000, 0.3310, 0.1170, 0.1830, 0.1190, 0.4800, 0.4000, 0.4382, 0.1060, 0.1310
0.6100, 0.0000, 0.2460, 0.1010, 0.2090, 0.1068, 0.7700, 0.3000, 0.4836, 0.0880, 0.2140
0.3700, 0.0000, 0.2020, 0.0810, 0.1620, 0.0878, 0.6300, 0.3000, 0.4025, 0.0880, 0.0590
0.3300, 1.0000, 0.2080, 0.0840, 0.1250, 0.0702, 0.4600, 0.3000, 0.3784, 0.0660, 0.0700
0.6800, 0.0000, 0.3280, 0.1057, 0.2050, 0.1164, 0.4000, 0.5130, 0.5493, 0.1170, 0.2200
0.4900, 1.0000, 0.3190, 0.0940, 0.2340, 0.1558, 0.3400, 0.7000, 0.5398, 0.1220, 0.2680
0.4800, 0.0000, 0.2390, 0.1090, 0.2320, 0.1052, 0.3700, 0.6000, 0.6107, 0.0960, 0.1520
0.5500, 1.0000, 0.2450, 0.0840, 0.1790, 0.1058, 0.6600, 0.3000, 0.3584, 0.0870, 0.0470
0.4300, 0.0000, 0.2210, 0.0660, 0.1340, 0.0772, 0.4500, 0.3000, 0.4078, 0.0800, 0.0740
0.6000, 1.0000, 0.3300, 0.0970, 0.2170, 0.1256, 0.4500, 0.5000, 0.5447, 0.1120, 0.2950
0.3100, 1.0000, 0.1900, 0.0930, 0.1370, 0.0730, 0.4700, 0.3000, 0.4443, 0.0780, 0.1010
0.5300, 1.0000, 0.2730, 0.0820, 0.1190, 0.0550, 0.3900, 0.3000, 0.4828, 0.0930, 0.1510
0.6700, 0.0000, 0.2280, 0.0870, 0.1660, 0.0986, 0.5200, 0.3000, 0.4344, 0.0920, 0.1270
0.6100, 1.0000, 0.2820, 0.1060, 0.2040, 0.1320, 0.5200, 0.4000, 0.4605, 0.0960, 0.2370
0.6200, 0.0000, 0.2890, 0.0873, 0.2060, 0.1272, 0.3300, 0.6240, 0.5434, 0.0990, 0.2250
0.6000, 0.0000, 0.2560, 0.0870, 0.2070, 0.1258, 0.6900, 0.3000, 0.4111, 0.0840, 0.0810
0.4200, 0.0000, 0.2490, 0.0910, 0.2040, 0.1418, 0.3800, 0.5000, 0.4796, 0.0890, 0.1510
0.3800, 1.0000, 0.2680, 0.1050, 0.1810, 0.1192, 0.3700, 0.5000, 0.4820, 0.0910, 0.1070
0.6200, 0.0000, 0.2240, 0.0790, 0.2220, 0.1474, 0.5900, 0.4000, 0.4357, 0.0760, 0.0640
0.6100, 1.0000, 0.2690, 0.1110, 0.2360, 0.1724, 0.3900, 0.6000, 0.4812, 0.0890, 0.1380
0.6100, 1.0000, 0.2310, 0.1130, 0.1860, 0.1144, 0.4700, 0.4000, 0.4812, 0.1050, 0.1850
0.5300, 0.0000, 0.2860, 0.0880, 0.1710, 0.0988, 0.4100, 0.4000, 0.5050, 0.0990, 0.2650
0.2800, 1.0000, 0.2470, 0.0970, 0.1750, 0.0996, 0.3200, 0.5000, 0.5380, 0.0870, 0.1010
0.2600, 1.0000, 0.3030, 0.0890, 0.2180, 0.1522, 0.3100, 0.7000, 0.5159, 0.0820, 0.1370
0.3000, 0.0000, 0.2130, 0.0870, 0.1340, 0.0630, 0.6300, 0.2000, 0.3689, 0.0660, 0.1430
0.5000, 0.0000, 0.2610, 0.1090, 0.2430, 0.1606, 0.6200, 0.4000, 0.4625, 0.0890, 0.1410
0.4800, 0.0000, 0.2020, 0.0950, 0.1870, 0.1174, 0.5300, 0.4000, 0.4419, 0.0850, 0.0790
0.5100, 0.0000, 0.2520, 0.1030, 0.1760, 0.1122, 0.3700, 0.5000, 0.4898, 0.0900, 0.2920
0.4700, 1.0000, 0.2250, 0.0820, 0.1310, 0.0668, 0.4100, 0.3000, 0.4754, 0.0890, 0.1780
0.6400, 1.0000, 0.2350, 0.0970, 0.2030, 0.1290, 0.5900, 0.3000, 0.4318, 0.0770, 0.0910
0.5100, 1.0000, 0.2590, 0.0760, 0.2400, 0.1690, 0.3900, 0.6000, 0.5075, 0.0960, 0.1160
0.3000, 0.0000, 0.2090, 0.1040, 0.1520, 0.0838, 0.4700, 0.3000, 0.4663, 0.0970, 0.0860
0.5600, 1.0000, 0.2870, 0.0990, 0.2080, 0.1464, 0.3900, 0.5000, 0.4727, 0.0970, 0.1220
0.4200, 0.0000, 0.2210, 0.0850, 0.2130, 0.1386, 0.6000, 0.4000, 0.4277, 0.0940, 0.0720
0.6200, 1.0000, 0.2670, 0.1150, 0.1830, 0.1240, 0.3500, 0.5000, 0.4788, 0.1000, 0.1290
0.3400, 0.0000, 0.3140, 0.0870, 0.1490, 0.0938, 0.4600, 0.3000, 0.3829, 0.0770, 0.1420
0.6000, 0.0000, 0.2220, 0.1047, 0.2210, 0.1054, 0.6000, 0.3680, 0.5628, 0.0930, 0.0900
0.6400, 0.0000, 0.2100, 0.0923, 0.2270, 0.1468, 0.6500, 0.3490, 0.4331, 0.1020, 0.1580
0.3900, 1.0000, 0.2120, 0.0900, 0.1820, 0.1104, 0.6000, 0.3000, 0.4060, 0.0980, 0.0390
0.7100, 1.0000, 0.2650, 0.1050, 0.2810, 0.1736, 0.5500, 0.5000, 0.5568, 0.0840, 0.1960
0.4800, 1.0000, 0.2920, 0.1100, 0.2180, 0.1516, 0.3900, 0.6000, 0.4920, 0.0980, 0.2220
0.7900, 1.0000, 0.2700, 0.1030, 0.1690, 0.1108, 0.3700, 0.5000, 0.4663, 0.1100, 0.2770
0.4000, 0.0000, 0.3070, 0.0990, 0.1770, 0.0854, 0.5000, 0.4000, 0.5338, 0.0850, 0.0990
0.4900, 1.0000, 0.2880, 0.0920, 0.2070, 0.1400, 0.4400, 0.5000, 0.4745, 0.0920, 0.1960
0.5100, 0.0000, 0.3060, 0.1030, 0.1980, 0.1066, 0.5700, 0.3000, 0.5148, 0.1000, 0.2020
0.5700, 0.0000, 0.3010, 0.1170, 0.2020, 0.1396, 0.4200, 0.5000, 0.4625, 0.1200, 0.1550
0.5900, 1.0000, 0.2470, 0.1140, 0.1520, 0.1048, 0.2900, 0.5000, 0.4511, 0.0880, 0.0770
0.5100, 0.0000, 0.2770, 0.0990, 0.2290, 0.1456, 0.6900, 0.3000, 0.4277, 0.0770, 0.1910
0.7400, 0.0000, 0.2980, 0.1010, 0.1710, 0.1048, 0.5000, 0.3000, 0.4394, 0.0860, 0.0700
0.6700, 0.0000, 0.2670, 0.1050, 0.2250, 0.1354, 0.6900, 0.3000, 0.4635, 0.0960, 0.0730
0.4900, 0.0000, 0.1980, 0.0880, 0.1880, 0.1148, 0.5700, 0.3000, 0.4394, 0.0930, 0.0490
0.5700, 0.0000, 0.2330, 0.0880, 0.1550, 0.0636, 0.7800, 0.2000, 0.4205, 0.0780, 0.0650
0.5600, 1.0000, 0.3510, 0.1230, 0.1640, 0.0950, 0.3800, 0.4000, 0.5043, 0.1170, 0.2630
0.5200, 1.0000, 0.2970, 0.1090, 0.2280, 0.1628, 0.3100, 0.8000, 0.5142, 0.1030, 0.2480
0.6900, 0.0000, 0.2930, 0.1240, 0.2230, 0.1390, 0.5400, 0.4000, 0.5011, 0.1020, 0.2960
0.3700, 0.0000, 0.2030, 0.0830, 0.1850, 0.1246, 0.3800, 0.5000, 0.4719, 0.0880, 0.2140
0.2400, 0.0000, 0.2250, 0.0890, 0.1410, 0.0680, 0.5200, 0.3000, 0.4654, 0.0840, 0.1850
0.5500, 1.0000, 0.2270, 0.0930, 0.1540, 0.0942, 0.5300, 0.3000, 0.3526, 0.0750, 0.0780
0.3600, 0.0000, 0.2280, 0.0870, 0.1780, 0.1160, 0.4100, 0.4000, 0.4654, 0.0820, 0.0930
0.4200, 1.0000, 0.2400, 0.1070, 0.1500, 0.0850, 0.4400, 0.3000, 0.4654, 0.0960, 0.2520
0.2100, 0.0000, 0.2420, 0.0760, 0.1470, 0.0770, 0.5300, 0.3000, 0.4443, 0.0790, 0.1500
0.4100, 0.0000, 0.2020, 0.0620, 0.1530, 0.0890, 0.5000, 0.3000, 0.4249, 0.0890, 0.0770
0.5700, 1.0000, 0.2940, 0.1090, 0.1600, 0.0876, 0.3100, 0.5000, 0.5333, 0.0920, 0.2080
0.2000, 1.0000, 0.2210, 0.0870, 0.1710, 0.0996, 0.5800, 0.3000, 0.4205, 0.0780, 0.0770
0.6700, 1.0000, 0.2360, 0.1113, 0.1890, 0.1054, 0.7000, 0.2700, 0.4220, 0.0930, 0.1080
0.3400, 0.0000, 0.2520, 0.0770, 0.1890, 0.1206, 0.5300, 0.4000, 0.4344, 0.0790, 0.1600
0.4100, 1.0000, 0.2490, 0.0860, 0.1920, 0.1150, 0.6100, 0.3000, 0.4382, 0.0940, 0.0530
0.3800, 1.0000, 0.3300, 0.0780, 0.3010, 0.2150, 0.5000, 0.6020, 0.5193, 0.1080, 0.2200
0.5100, 0.0000, 0.2350, 0.1010, 0.1950, 0.1210, 0.5100, 0.4000, 0.4745, 0.0940, 0.1540
0.5200, 1.0000, 0.2640, 0.0913, 0.2180, 0.1520, 0.3900, 0.5590, 0.4905, 0.0990, 0.2590
0.6700, 0.0000, 0.2980, 0.0800, 0.1720, 0.0934, 0.6300, 0.3000, 0.4357, 0.0820, 0.0900
0.6100, 0.0000, 0.3000, 0.1080, 0.1940, 0.1000, 0.5200, 0.3730, 0.5347, 0.1050, 0.2460
0.6700, 1.0000, 0.2500, 0.1117, 0.1460, 0.0934, 0.3300, 0.4420, 0.4585, 0.1030, 0.1240
0.5600, 0.0000, 0.2700, 0.1050, 0.2470, 0.1606, 0.5400, 0.5000, 0.5088, 0.0940, 0.0670
0.6400, 0.0000, 0.2000, 0.0747, 0.1890, 0.1148, 0.6200, 0.3050, 0.4111, 0.0910, 0.0720
0.5800, 1.0000, 0.2550, 0.1120, 0.1630, 0.1106, 0.2900, 0.6000, 0.4762, 0.0860, 0.2570
0.5500, 0.0000, 0.2820, 0.0910, 0.2500, 0.1402, 0.6700, 0.4000, 0.5366, 0.1030, 0.2620
0.6200, 1.0000, 0.3330, 0.1140, 0.1820, 0.1140, 0.3800, 0.5000, 0.5011, 0.0960, 0.2750
0.5700, 1.0000, 0.2560, 0.0960, 0.2000, 0.1330, 0.5200, 0.3850, 0.4318, 0.1050, 0.1770
0.2000, 1.0000, 0.2420, 0.0880, 0.1260, 0.0722, 0.4500, 0.3000, 0.3784, 0.0740, 0.0710
0.5300, 1.0000, 0.2210, 0.0980, 0.1650, 0.1052, 0.4700, 0.4000, 0.4159, 0.0810, 0.0470
0.3200, 1.0000, 0.3140, 0.0890, 0.1530, 0.0842, 0.5600, 0.3000, 0.4159, 0.0900, 0.1870
0.4100, 0.0000, 0.2310, 0.0860, 0.1480, 0.0780, 0.5800, 0.3000, 0.4094, 0.0600, 0.1250
0.6000, 0.0000, 0.2340, 0.0767, 0.2470, 0.1480, 0.6500, 0.3800, 0.5136, 0.0770, 0.0780
0.2600, 0.0000, 0.1880, 0.0830, 0.1910, 0.1036, 0.6900, 0.3000, 0.4522, 0.0690, 0.0510
0.3700, 0.0000, 0.3080, 0.1120, 0.2820, 0.1972, 0.4300, 0.7000, 0.5342, 0.1010, 0.2580
0.4500, 0.0000, 0.3200, 0.1100, 0.2240, 0.1342, 0.4500, 0.5000, 0.5412, 0.0930, 0.2150
0.6700, 0.0000, 0.3160, 0.1160, 0.1790, 0.0904, 0.4100, 0.4000, 0.5472, 0.1000, 0.3030
0.3400, 1.0000, 0.3550, 0.1200, 0.2330, 0.1466, 0.3400, 0.7000, 0.5568, 0.1010, 0.2430
0.5000, 0.0000, 0.3190, 0.0783, 0.2070, 0.1492, 0.3800, 0.5450, 0.4595, 0.0840, 0.0910
0.7100, 0.0000, 0.2950, 0.0970, 0.2270, 0.1516, 0.4500, 0.5000, 0.5024, 0.1080, 0.1500
0.5700, 1.0000, 0.3160, 0.1170, 0.2250, 0.1076, 0.4000, 0.6000, 0.5958, 0.1130, 0.3100
0.4900, 0.0000, 0.2030, 0.0930, 0.1840, 0.1030, 0.6100, 0.3000, 0.4605, 0.0930, 0.1530
0.3500, 0.0000, 0.4130, 0.0810, 0.1680, 0.1028, 0.3700, 0.5000, 0.4949, 0.0940, 0.3460
0.4100, 1.0000, 0.2120, 0.1020, 0.1840, 0.1004, 0.6400, 0.3000, 0.4585, 0.0790, 0.0630
0.7000, 1.0000, 0.2410, 0.0823, 0.1940, 0.1492, 0.3100, 0.6260, 0.4234, 0.1050, 0.0890
0.5200, 0.0000, 0.2300, 0.1070, 0.1790, 0.1237, 0.4250, 0.4210, 0.4159, 0.0930, 0.0500
0.6000, 0.0000, 0.2560, 0.0780, 0.1950, 0.0954, 0.9100, 0.2000, 0.3761, 0.0870, 0.0390
0.6200, 0.0000, 0.2250, 0.1250, 0.2150, 0.0990, 0.9800, 0.2000, 0.4500, 0.0950, 0.1030
0.4400, 1.0000, 0.3820, 0.1230, 0.2010, 0.1266, 0.4400, 0.5000, 0.5024, 0.0920, 0.3080
0.2800, 1.0000, 0.1920, 0.0810, 0.1550, 0.0946, 0.5100, 0.3000, 0.3850, 0.0870, 0.1160
0.5800, 1.0000, 0.2900, 0.0850, 0.1560, 0.1092, 0.3600, 0.4000, 0.3989, 0.0860, 0.1450
0.3900, 1.0000, 0.2400, 0.0897, 0.1900, 0.1136, 0.5200, 0.3650, 0.4804, 0.1010, 0.0740
0.3400, 1.0000, 0.2060, 0.0980, 0.1830, 0.0920, 0.8300, 0.2000, 0.3689, 0.0920, 0.0450
0.6500, 0.0000, 0.2630, 0.0700, 0.2440, 0.1662, 0.5100, 0.5000, 0.4898, 0.0980, 0.1150
0.6600, 1.0000, 0.3460, 0.1150, 0.2040, 0.1394, 0.3600, 0.6000, 0.4963, 0.1090, 0.2640
0.5100, 0.0000, 0.2340, 0.0870, 0.2200, 0.1088, 0.9300, 0.2000, 0.4511, 0.0820, 0.0870
0.5000, 1.0000, 0.2920, 0.1190, 0.1620, 0.0852, 0.5400, 0.3000, 0.4736, 0.0950, 0.2020
0.5900, 1.0000, 0.2720, 0.1070, 0.1580, 0.1020, 0.3900, 0.4000, 0.4443, 0.0930, 0.1270
0.5200, 0.0000, 0.2700, 0.0783, 0.1340, 0.0730, 0.4400, 0.3050, 0.4443, 0.0690, 0.1820
0.6900, 1.0000, 0.2450, 0.1080, 0.2430, 0.1364, 0.4000, 0.6000, 0.5808, 0.1000, 0.2410
0.5300, 0.0000, 0.2410, 0.1050, 0.1840, 0.1134, 0.4600, 0.4000, 0.4812, 0.0950, 0.0660
0.4700, 1.0000, 0.2530, 0.0980, 0.1730, 0.1056, 0.4400, 0.4000, 0.4762, 0.1080, 0.0940
0.5200, 0.0000, 0.2880, 0.1130, 0.2800, 0.1740, 0.6700, 0.4000, 0.5273, 0.0860, 0.2830
0.3900, 0.0000, 0.2090, 0.0950, 0.1500, 0.0656, 0.6800, 0.2000, 0.4407, 0.0950, 0.0640
0.6700, 1.0000, 0.2300, 0.0700, 0.1840, 0.1280, 0.3500, 0.5000, 0.4654, 0.0990, 0.1020
0.5900, 1.0000, 0.2410, 0.0960, 0.1700, 0.0986, 0.5400, 0.3000, 0.4466, 0.0850, 0.2000
0.5100, 1.0000, 0.2810, 0.1060, 0.2020, 0.1222, 0.5500, 0.4000, 0.4820, 0.0870, 0.2650
0.2300, 1.0000, 0.1800, 0.0780, 0.1710, 0.0960, 0.4800, 0.4000, 0.4905, 0.0920, 0.0940
0.6800, 0.0000, 0.2590, 0.0930, 0.2530, 0.1812, 0.5300, 0.5000, 0.4543, 0.0980, 0.2300
0.4400, 0.0000, 0.2150, 0.0850, 0.1570, 0.0922, 0.5500, 0.3000, 0.3892, 0.0840, 0.1810
0.6000, 1.0000, 0.2430, 0.1030, 0.1410, 0.0866, 0.3300, 0.4000, 0.4673, 0.0780, 0.1560
0.5200, 0.0000, 0.2450, 0.0900, 0.1980, 0.1290, 0.2900, 0.7000, 0.5298, 0.0860, 0.2330
0.3800, 0.0000, 0.2130, 0.0720, 0.1650, 0.0602, 0.8800, 0.2000, 0.4431, 0.0900, 0.0600
0.6100, 0.0000, 0.2580, 0.0900, 0.2800, 0.1954, 0.5500, 0.5000, 0.4997, 0.0900, 0.2190
0.6800, 1.0000, 0.2480, 0.1010, 0.2210, 0.1514, 0.6000, 0.4000, 0.3871, 0.0870, 0.0800
0.2800, 1.0000, 0.3150, 0.0830, 0.2280, 0.1494, 0.3800, 0.6000, 0.5313, 0.0830, 0.0680
0.6500, 1.0000, 0.3350, 0.1020, 0.1900, 0.1262, 0.3500, 0.5000, 0.4970, 0.1020, 0.3320
0.6900, 0.0000, 0.2810, 0.1130, 0.2340, 0.1428, 0.5200, 0.4000, 0.5278, 0.0770, 0.2480
0.5100, 0.0000, 0.2430, 0.0853, 0.1530, 0.0716, 0.7100, 0.2150, 0.3951, 0.0820, 0.0840
0.2900, 0.0000, 0.3500, 0.0983, 0.2040, 0.1426, 0.5000, 0.4080, 0.4043, 0.0910, 0.2000
0.5500, 1.0000, 0.2350, 0.0930, 0.1770, 0.1268, 0.4100, 0.4000, 0.3829, 0.0830, 0.0550
0.3400, 1.0000, 0.3000, 0.0830, 0.1850, 0.1072, 0.5300, 0.3000, 0.4820, 0.0920, 0.0850
0.6700, 0.0000, 0.2070, 0.0830, 0.1700, 0.0998, 0.5900, 0.3000, 0.4025, 0.0770, 0.0890
0.4900, 0.0000, 0.2560, 0.0760, 0.1610, 0.0998, 0.5100, 0.3000, 0.3932, 0.0780, 0.0310
0.5500, 1.0000, 0.2290, 0.0810, 0.1230, 0.0672, 0.4100, 0.3000, 0.4304, 0.0880, 0.1290
0.5900, 1.0000, 0.2510, 0.0900, 0.1630, 0.1014, 0.4600, 0.4000, 0.4357, 0.0910, 0.0830
0.5300, 0.0000, 0.3320, 0.0827, 0.1860, 0.1068, 0.4600, 0.4040, 0.5112, 0.1020, 0.2750
0.4800, 1.0000, 0.2410, 0.1100, 0.2090, 0.1346, 0.5800, 0.4000, 0.4407, 0.1000, 0.0650
0.5200, 0.0000, 0.2950, 0.1043, 0.2110, 0.1328, 0.4900, 0.4310, 0.4984, 0.0980, 0.1980
0.6900, 0.0000, 0.2960, 0.1220, 0.2310, 0.1284, 0.5600, 0.4000, 0.5451, 0.0860, 0.2360
0.6000, 1.0000, 0.2280, 0.1100, 0.2450, 0.1898, 0.3900, 0.6000, 0.4394, 0.0880, 0.2530
0.4600, 1.0000, 0.2270, 0.0830, 0.1830, 0.1258, 0.3200, 0.6000, 0.4836, 0.0750, 0.1240
0.5100, 1.0000, 0.2620, 0.1010, 0.1610, 0.0996, 0.4800, 0.3000, 0.4205, 0.0880, 0.0440
0.6700, 1.0000, 0.2350, 0.0960, 0.2070, 0.1382, 0.4200, 0.5000, 0.4898, 0.1110, 0.1720
0.4900, 0.0000, 0.2210, 0.0850, 0.1360, 0.0634, 0.6200, 0.2190, 0.3970, 0.0720, 0.1140
0.4600, 1.0000, 0.2650, 0.0940, 0.2470, 0.1602, 0.5900, 0.4000, 0.4935, 0.1110, 0.1420
0.4700, 0.0000, 0.3240, 0.1050, 0.1880, 0.1250, 0.4600, 0.4090, 0.4443, 0.0990, 0.1090
0.7500, 0.0000, 0.3010, 0.0780, 0.2220, 0.1542, 0.4400, 0.5050, 0.4779, 0.0970, 0.1800
0.2800, 0.0000, 0.2420, 0.0930, 0.1740, 0.1064, 0.5400, 0.3000, 0.4220, 0.0840, 0.1440
0.6500, 1.0000, 0.3130, 0.1100, 0.2130, 0.1280, 0.4700, 0.5000, 0.5247, 0.0910, 0.1630
0.4200, 0.0000, 0.3010, 0.0910, 0.1820, 0.1148, 0.4900, 0.4000, 0.4511, 0.0820, 0.1470
0.5100, 0.0000, 0.2450, 0.0790, 0.2120, 0.1286, 0.6500, 0.3000, 0.4522, 0.0910, 0.0970
0.5300, 1.0000, 0.2770, 0.0950, 0.1900, 0.1018, 0.4100, 0.5000, 0.5464, 0.1010, 0.2200
0.5400, 0.0000, 0.2320, 0.1107, 0.2380, 0.1628, 0.4800, 0.4960, 0.4913, 0.1080, 0.1900
0.7300, 0.0000, 0.2700, 0.1020, 0.2110, 0.1210, 0.6700, 0.3000, 0.4745, 0.0990, 0.1090
0.5400, 0.0000, 0.2680, 0.1080, 0.1760, 0.0806, 0.6700, 0.3000, 0.4956, 0.1060, 0.1910
0.4200, 0.0000, 0.2920, 0.0930, 0.2490, 0.1742, 0.4500, 0.6000, 0.5004, 0.0920, 0.1220
0.7500, 0.0000, 0.3120, 0.1177, 0.2290, 0.1388, 0.2900, 0.7900, 0.5724, 0.1060, 0.2300
0.5500, 1.0000, 0.3210, 0.1127, 0.2070, 0.0924, 0.2500, 0.8280, 0.6105, 0.1110, 0.2420
0.6800, 1.0000, 0.2570, 0.1090, 0.2330, 0.1126, 0.3500, 0.7000, 0.6057, 0.1050, 0.2480
0.5700, 0.0000, 0.2690, 0.0980, 0.2460, 0.1652, 0.3800, 0.7000, 0.5366, 0.0960, 0.2490
0.4800, 0.0000, 0.3140, 0.0753, 0.2420, 0.1516, 0.3800, 0.6370, 0.5568, 0.1030, 0.1920
0.6100, 1.0000, 0.2560, 0.0850, 0.1840, 0.1162, 0.3900, 0.5000, 0.4970, 0.0980, 0.1310
0.6900, 0.0000, 0.3700, 0.1030, 0.2070, 0.1314, 0.5500, 0.4000, 0.4635, 0.0900, 0.2370
0.3800, 0.0000, 0.3260, 0.0770, 0.1680, 0.1006, 0.4700, 0.4000, 0.4625, 0.0960, 0.0780
0.4500, 1.0000, 0.2120, 0.0940, 0.1690, 0.0968, 0.5500, 0.3000, 0.4454, 0.1020, 0.1350
0.5100, 1.0000, 0.2920, 0.1070, 0.1870, 0.1390, 0.3200, 0.6000, 0.4382, 0.0950, 0.2440
0.7100, 1.0000, 0.2400, 0.0840, 0.1380, 0.0858, 0.3900, 0.4000, 0.4190, 0.0900, 0.1990
0.5700, 0.0000, 0.3610, 0.1170, 0.1810, 0.1082, 0.3400, 0.5000, 0.5268, 0.1000, 0.2700
0.5600, 1.0000, 0.2580, 0.1030, 0.1770, 0.1144, 0.3400, 0.5000, 0.4963, 0.0990, 0.1640
0.3200, 1.0000, 0.2200, 0.0880, 0.1370, 0.0786, 0.4800, 0.3000, 0.3951, 0.0780, 0.0720
0.5000, 0.0000, 0.2190, 0.0910, 0.1900, 0.1112, 0.6700, 0.3000, 0.4078, 0.0770, 0.0960
0.4300, 0.0000, 0.3430, 0.0840, 0.2560, 0.1726, 0.3300, 0.8000, 0.5529, 0.1040, 0.3060
0.5400, 1.0000, 0.2520, 0.1150, 0.1810, 0.1200, 0.3900, 0.5000, 0.4701, 0.0920, 0.0910
0.3100, 0.0000, 0.2330, 0.0850, 0.1900, 0.1308, 0.4300, 0.4000, 0.4394, 0.0770, 0.2140
0.5600, 0.0000, 0.2570, 0.0800, 0.2440, 0.1516, 0.5900, 0.4000, 0.5118, 0.0950, 0.0950
0.4400, 0.0000, 0.2510, 0.1330, 0.1820, 0.1130, 0.5500, 0.3000, 0.4249, 0.0840, 0.2160
0.5700, 1.0000, 0.3190, 0.1110, 0.1730, 0.1162, 0.4100, 0.4000, 0.4369, 0.0870, 0.2630

Test data:


# diabetes_norm_test_100.txt
#
0.6400, 1.0000, 0.2840, 0.1110, 0.1840, 0.1270, 0.4100, 0.4000, 0.4382, 0.0970, 0.1780
0.4300, 0.0000, 0.2810, 0.1210, 0.1920, 0.1210, 0.6000, 0.3000, 0.4007, 0.0930, 0.1130
0.1900, 0.0000, 0.2530, 0.0830, 0.2250, 0.1566, 0.4600, 0.5000, 0.4719, 0.0840, 0.2000
0.7100, 1.0000, 0.2610, 0.0850, 0.2200, 0.1524, 0.4700, 0.5000, 0.4635, 0.0910, 0.1390
0.5000, 1.0000, 0.2800, 0.1040, 0.2820, 0.1968, 0.4400, 0.6000, 0.5328, 0.0950, 0.1390
0.5900, 1.0000, 0.2360, 0.0730, 0.1800, 0.1074, 0.5100, 0.4000, 0.4682, 0.0840, 0.0880
0.5700, 0.0000, 0.2450, 0.0930, 0.1860, 0.0966, 0.7100, 0.3000, 0.4522, 0.0910, 0.1480
0.4900, 1.0000, 0.2100, 0.0820, 0.1190, 0.0854, 0.2300, 0.5000, 0.3970, 0.0740, 0.0880
0.4100, 1.0000, 0.3200, 0.1260, 0.1980, 0.1042, 0.4900, 0.4000, 0.5412, 0.1240, 0.2430
0.2500, 1.0000, 0.2260, 0.0850, 0.1300, 0.0710, 0.4800, 0.3000, 0.4007, 0.0810, 0.0710
0.5200, 1.0000, 0.1970, 0.0810, 0.1520, 0.0534, 0.8200, 0.2000, 0.4419, 0.0820, 0.0770
0.3400, 0.0000, 0.2120, 0.0840, 0.2540, 0.1134, 0.5200, 0.5000, 0.6094, 0.0920, 0.1090
0.4200, 1.0000, 0.3060, 0.1010, 0.2690, 0.1722, 0.5000, 0.5000, 0.5455, 0.1060, 0.2720
0.2800, 1.0000, 0.2550, 0.0990, 0.1620, 0.1016, 0.4600, 0.4000, 0.4277, 0.0940, 0.0600
0.4700, 1.0000, 0.2330, 0.0900, 0.1950, 0.1258, 0.5400, 0.4000, 0.4331, 0.0730, 0.0540
0.3200, 1.0000, 0.3100, 0.1000, 0.1770, 0.0962, 0.4500, 0.4000, 0.5187, 0.0770, 0.2210
0.4300, 0.0000, 0.1850, 0.0870, 0.1630, 0.0936, 0.6100, 0.2670, 0.3738, 0.0800, 0.0900
0.5900, 1.0000, 0.2690, 0.1040, 0.1940, 0.1266, 0.4300, 0.5000, 0.4804, 0.1060, 0.3110
0.5300, 0.0000, 0.2830, 0.1010, 0.1790, 0.1070, 0.4800, 0.4000, 0.4788, 0.1010, 0.2810
0.6000, 0.0000, 0.2570, 0.1030, 0.1580, 0.0846, 0.6400, 0.2000, 0.3850, 0.0970, 0.1820
0.5400, 1.0000, 0.3610, 0.1150, 0.1630, 0.0984, 0.4300, 0.4000, 0.4682, 0.1010, 0.3210
0.3500, 1.0000, 0.2410, 0.0947, 0.1550, 0.0974, 0.3200, 0.4840, 0.4852, 0.0940, 0.0580
0.4900, 1.0000, 0.2580, 0.0890, 0.1820, 0.1186, 0.3900, 0.5000, 0.4804, 0.1150, 0.2620
0.5800, 0.0000, 0.2280, 0.0910, 0.1960, 0.1188, 0.4800, 0.4000, 0.4984, 0.1150, 0.2060
0.3600, 1.0000, 0.3910, 0.0900, 0.2190, 0.1358, 0.3800, 0.6000, 0.5421, 0.1030, 0.2330
0.4600, 1.0000, 0.4220, 0.0990, 0.2110, 0.1370, 0.4400, 0.5000, 0.5011, 0.0990, 0.2420
0.4400, 1.0000, 0.2660, 0.0990, 0.2050, 0.1090, 0.4300, 0.5000, 0.5580, 0.1110, 0.1230
0.4600, 0.0000, 0.2990, 0.0830, 0.1710, 0.1130, 0.3800, 0.4500, 0.4585, 0.0980, 0.1670
0.5400, 0.0000, 0.2100, 0.0780, 0.1880, 0.1074, 0.7000, 0.3000, 0.3970, 0.0730, 0.0630
0.6300, 1.0000, 0.2550, 0.1090, 0.2260, 0.1032, 0.4600, 0.5000, 0.5951, 0.0870, 0.1970
0.4100, 1.0000, 0.2420, 0.0900, 0.1990, 0.1236, 0.5700, 0.4000, 0.4522, 0.0860, 0.0710
0.2800, 0.0000, 0.2540, 0.0930, 0.1410, 0.0790, 0.4900, 0.3000, 0.4174, 0.0910, 0.1680
0.1900, 0.0000, 0.2320, 0.0750, 0.1430, 0.0704, 0.5200, 0.3000, 0.4635, 0.0720, 0.1400
0.6100, 1.0000, 0.2610, 0.1260, 0.2150, 0.1298, 0.5700, 0.4000, 0.4949, 0.0960, 0.2170
0.4800, 0.0000, 0.3270, 0.0930, 0.2760, 0.1986, 0.4300, 0.6420, 0.5148, 0.0910, 0.1210
0.5400, 1.0000, 0.2730, 0.1000, 0.2000, 0.1440, 0.3300, 0.6000, 0.4745, 0.0760, 0.2350
0.5300, 1.0000, 0.2660, 0.0930, 0.1850, 0.1224, 0.3600, 0.5000, 0.4890, 0.0820, 0.2450
0.4800, 0.0000, 0.2280, 0.1010, 0.1100, 0.0416, 0.5600, 0.2000, 0.4127, 0.0970, 0.0400
0.5300, 0.0000, 0.2880, 0.1117, 0.1450, 0.0872, 0.4600, 0.3150, 0.4078, 0.0850, 0.0520
0.2900, 1.0000, 0.1810, 0.0730, 0.1580, 0.0990, 0.4100, 0.4000, 0.4500, 0.0780, 0.1040
0.6200, 0.0000, 0.3200, 0.0880, 0.1720, 0.0690, 0.3800, 0.4000, 0.5784, 0.1000, 0.1320
0.5000, 1.0000, 0.2370, 0.0920, 0.1660, 0.0970, 0.5200, 0.3000, 0.4443, 0.0930, 0.0880
0.5800, 1.0000, 0.2360, 0.0960, 0.2570, 0.1710, 0.5900, 0.4000, 0.4905, 0.0820, 0.0690
0.5500, 1.0000, 0.2460, 0.1090, 0.1430, 0.0764, 0.5100, 0.3000, 0.4357, 0.0880, 0.2190
0.5400, 0.0000, 0.2260, 0.0900, 0.1830, 0.1042, 0.6400, 0.3000, 0.4304, 0.0920, 0.0720
0.3600, 0.0000, 0.2780, 0.0730, 0.1530, 0.1044, 0.4200, 0.4000, 0.3497, 0.0730, 0.2010
0.6300, 1.0000, 0.2410, 0.1110, 0.1840, 0.1122, 0.4400, 0.4000, 0.4935, 0.0820, 0.1100
0.4700, 1.0000, 0.2650, 0.0700, 0.1810, 0.1048, 0.6300, 0.3000, 0.4190, 0.0700, 0.0510
0.5100, 1.0000, 0.3280, 0.1120, 0.2020, 0.1006, 0.3700, 0.5000, 0.5775, 0.1090, 0.2770
0.4200, 0.0000, 0.1990, 0.0760, 0.1460, 0.0832, 0.5500, 0.3000, 0.3664, 0.0790, 0.0630
0.3700, 1.0000, 0.2360, 0.0940, 0.2050, 0.1388, 0.5300, 0.4000, 0.4190, 0.1070, 0.1180
0.2800, 0.0000, 0.2210, 0.0820, 0.1680, 0.1006, 0.5400, 0.3000, 0.4205, 0.0860, 0.0690
0.5800, 0.0000, 0.2810, 0.1110, 0.1980, 0.0806, 0.3100, 0.6000, 0.6068, 0.0930, 0.2730
0.3200, 0.0000, 0.2650, 0.0860, 0.1840, 0.1016, 0.5300, 0.4000, 0.4990, 0.0780, 0.2580
0.2500, 1.0000, 0.2350, 0.0880, 0.1430, 0.0808, 0.5500, 0.3000, 0.3584, 0.0830, 0.0430
0.6300, 0.0000, 0.2600, 0.0857, 0.1550, 0.0782, 0.4600, 0.3370, 0.5037, 0.0970, 0.1980
0.5200, 0.0000, 0.2780, 0.0850, 0.2190, 0.1360, 0.4900, 0.4000, 0.5136, 0.0750, 0.2420
0.6500, 1.0000, 0.2850, 0.1090, 0.2010, 0.1230, 0.4600, 0.4000, 0.5075, 0.0960, 0.2320
0.4200, 0.0000, 0.3060, 0.1210, 0.1760, 0.0928, 0.6900, 0.3000, 0.4263, 0.0890, 0.1750
0.5300, 0.0000, 0.2220, 0.0780, 0.1640, 0.0810, 0.7000, 0.2000, 0.4174, 0.1010, 0.0930
0.7900, 1.0000, 0.2330, 0.0880, 0.1860, 0.1284, 0.3300, 0.6000, 0.4812, 0.1020, 0.1680
0.4300, 0.0000, 0.3540, 0.0930, 0.1850, 0.1002, 0.4400, 0.4000, 0.5318, 0.1010, 0.2750
0.4400, 0.0000, 0.3140, 0.1150, 0.1650, 0.0976, 0.5200, 0.3000, 0.4344, 0.0890, 0.2930
0.6200, 1.0000, 0.3780, 0.1190, 0.1130, 0.0510, 0.3100, 0.4000, 0.5043, 0.0840, 0.2810
0.3300, 0.0000, 0.1890, 0.0700, 0.1620, 0.0918, 0.5900, 0.3000, 0.4025, 0.0580, 0.0720
0.5600, 0.0000, 0.3500, 0.0793, 0.1950, 0.1408, 0.4200, 0.4640, 0.4111, 0.0960, 0.1400
0.6600, 0.0000, 0.2170, 0.1260, 0.2120, 0.1278, 0.4500, 0.4710, 0.5278, 0.1010, 0.1890
0.3400, 1.0000, 0.2530, 0.1110, 0.2300, 0.1620, 0.3900, 0.6000, 0.4977, 0.0900, 0.1810
0.4600, 1.0000, 0.2380, 0.0970, 0.2240, 0.1392, 0.4200, 0.5000, 0.5366, 0.0810, 0.2090
0.5000, 0.0000, 0.3180, 0.0820, 0.1360, 0.0692, 0.5500, 0.2000, 0.4078, 0.0850, 0.1360
0.6900, 0.0000, 0.3430, 0.1130, 0.2000, 0.1238, 0.5400, 0.4000, 0.4710, 0.1120, 0.2610
0.3400, 0.0000, 0.2630, 0.0870, 0.1970, 0.1200, 0.6300, 0.3000, 0.4249, 0.0960, 0.1130
0.7100, 1.0000, 0.2700, 0.0933, 0.2690, 0.1902, 0.4100, 0.6560, 0.5242, 0.0930, 0.1310
0.4700, 0.0000, 0.2720, 0.0800, 0.2080, 0.1456, 0.3800, 0.6000, 0.4804, 0.0920, 0.1740
0.4100, 0.0000, 0.3380, 0.1233, 0.1870, 0.1270, 0.4500, 0.4160, 0.4318, 0.1000, 0.2570
0.3400, 0.0000, 0.3300, 0.0730, 0.1780, 0.1146, 0.5100, 0.3490, 0.4127, 0.0920, 0.0550
0.5100, 0.0000, 0.2410, 0.0870, 0.2610, 0.1756, 0.6900, 0.4000, 0.4407, 0.0930, 0.0840
0.4300, 0.0000, 0.2130, 0.0790, 0.1410, 0.0788, 0.5300, 0.3000, 0.3829, 0.0900, 0.0420
0.5500, 0.0000, 0.2300, 0.0947, 0.1900, 0.1376, 0.3800, 0.5000, 0.4277, 0.1060, 0.1460
0.5900, 1.0000, 0.2790, 0.1010, 0.2180, 0.1442, 0.3800, 0.6000, 0.5187, 0.0950, 0.2120
0.2700, 1.0000, 0.3360, 0.1100, 0.2460, 0.1566, 0.5700, 0.4000, 0.5088, 0.0890, 0.2330
0.5100, 1.0000, 0.2270, 0.1030, 0.2170, 0.1624, 0.3000, 0.7000, 0.4812, 0.0800, 0.0910
0.4900, 1.0000, 0.2740, 0.0890, 0.1770, 0.1130, 0.3700, 0.5000, 0.4905, 0.0970, 0.1110
0.2700, 0.0000, 0.2260, 0.0710, 0.1160, 0.0434, 0.5600, 0.2000, 0.4419, 0.0790, 0.1520
0.5700, 1.0000, 0.2320, 0.1073, 0.2310, 0.1594, 0.4100, 0.5630, 0.5030, 0.1120, 0.1200
0.3900, 1.0000, 0.2690, 0.0930, 0.1360, 0.0754, 0.4800, 0.3000, 0.4143, 0.0990, 0.0670
0.6200, 1.0000, 0.3460, 0.1200, 0.2150, 0.1292, 0.4300, 0.5000, 0.5366, 0.1230, 0.3100
0.3700, 0.0000, 0.2330, 0.0880, 0.2230, 0.1420, 0.6500, 0.3400, 0.4357, 0.0820, 0.0940
0.4600, 0.0000, 0.2110, 0.0800, 0.2050, 0.1444, 0.4200, 0.5000, 0.4533, 0.0870, 0.1830
0.6800, 1.0000, 0.2350, 0.1010, 0.1620, 0.0854, 0.5900, 0.3000, 0.4477, 0.0910, 0.0660
0.5100, 0.0000, 0.3150, 0.0930, 0.2310, 0.1440, 0.4900, 0.4700, 0.5252, 0.1170, 0.1730
0.4100, 0.0000, 0.2080, 0.0860, 0.2230, 0.1282, 0.8300, 0.3000, 0.4078, 0.0890, 0.0720
0.5300, 0.0000, 0.2650, 0.0970, 0.1930, 0.1224, 0.5800, 0.3000, 0.4143, 0.0990, 0.0490
0.4500, 0.0000, 0.2420, 0.0830, 0.1770, 0.1184, 0.4500, 0.4000, 0.4220, 0.0820, 0.0640
0.3300, 0.0000, 0.1950, 0.0800, 0.1710, 0.0854, 0.7500, 0.2000, 0.3970, 0.0800, 0.0480
0.6000, 1.0000, 0.2820, 0.1120, 0.1850, 0.1138, 0.4200, 0.4000, 0.4984, 0.0930, 0.1780
0.4700, 1.0000, 0.2490, 0.0750, 0.2250, 0.1660, 0.4200, 0.5000, 0.4443, 0.1020, 0.1040
0.6000, 1.0000, 0.2490, 0.0997, 0.1620, 0.1066, 0.4300, 0.3770, 0.4127, 0.0950, 0.1320
0.3600, 0.0000, 0.3000, 0.0950, 0.2010, 0.1252, 0.4200, 0.4790, 0.5130, 0.0850, 0.2200
0.3600, 0.0000, 0.1960, 0.0710, 0.2500, 0.1332, 0.9700, 0.3000, 0.4595, 0.0920, 0.0570
Posted in Machine Learning

Comparing Two Versions of QR-Householder Decomposition Implemented Using C#

One early morning, at about 3:00 AM, I figured I’d take a look at an idea that had been percolating in my mind for a while. I have two different C# implementations of a function to compute the QR decomposition of a matrix. Both use the Householder algorithm, as opposed to the Gram-Schmidt or Givens algorithms.

The first QR-Householder implementation is relatively simple (though still quite complex) and has accuracy of approximately 1.0e-8. I use it for several machine learning functions (the Moore-Penrose pseudo-inverse in particular).
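The post doesn’t show how the pseudo-inverse uses QR, but the standard identity is: for a full-rank tall matrix A with reduced decomposition A = Q * R, the Moore-Penrose pseudo-inverse is pinv(A) = inv(R) * Q'. Here is a minimal sketch of that idea. The Q and R values are hand-computed for this particular A — they are assumed example values for illustration, not output from my MatDecomQR1 code.

```csharp
using System;

// Sketch: for full-rank tall A with reduced QR A = Q * R,
// pinv(A) = inv(R) * Q'. Q and R below are hand-computed
// example values, not output of the blog's MatDecomQR1 code.
public class PinvViaQRSketch
{
  // pinv = inv(R) * Q' for an m x 2 Q and 2 x 2 upper-triangular R
  public static double[][] PseudoInverse(double[][] Q, double[][] R)
  {
    int m = Q.Length;
    // invert the 2x2 upper-triangular R directly
    double[][] rinv = new double[2][];
    rinv[0] = new double[] { 1.0 / R[0][0],
      -R[0][1] / (R[0][0] * R[1][1]) };
    rinv[1] = new double[] { 0.0, 1.0 / R[1][1] };

    double[][] result = new double[2][];
    for (int i = 0; i < 2; ++i)
    {
      result[i] = new double[m];
      for (int j = 0; j < m; ++j)
        for (int k = 0; k < 2; ++k)
          result[i][j] += rinv[i][k] * Q[j][k];  // Q'[k][j] == Q[j][k]
    }
    return result;
  }

  static void Main()
  {
    double[][] A = new double[3][];
    A[0] = new double[] { 3.0, 0.0 };
    A[1] = new double[] { 4.0, 0.0 };
    A[2] = new double[] { 0.0, 1.0 };

    // hand-computed reduced QR of A (orthonormal cols, upper-tri R)
    double[][] Q = new double[3][];
    Q[0] = new double[] { 0.6, 0.0 };
    Q[1] = new double[] { 0.8, 0.0 };
    Q[2] = new double[] { 0.0, 1.0 };
    double[][] R = new double[2][];
    R[0] = new double[] { 5.0, 0.0 };
    R[1] = new double[] { 0.0, 1.0 };

    double[][] pinv = PseudoInverse(Q, R);

    // check: pinv * A should be the 2x2 identity matrix
    for (int i = 0; i < 2; ++i)
    {
      for (int j = 0; j < 2; ++j)
      {
        double sum = 0.0;
        for (int k = 0; k < 3; ++k) sum += pinv[i][k] * A[k][j];
        Console.Write(sum.ToString("F4") + "  ");
      }
      Console.WriteLine();
    }
  }
}
```

Because A has full column rank, the product pinv * A comes out as the 2x2 identity. For least squares regression, the solution of A * w = y is then just w = pinv(A) * y.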

The second QR-Householder implementation is more complicated and has accuracy of approximately 1.0e-12. I use it primarily as a helper function for the extraordinarily complicated SVD (singular value decomposition).

Both QR-Householder versions work only for matrices where the number of rows is greater than or equal to the number of columns (the standard case, and adequate for nearly all machine learning scenarios).
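The core idea shared by both implementations can be sketched in a few dozen lines. This is illustrative only — it is not my MatDecomQR1 or MatDecomQR2 code, just the basic Householder technique: for each column, build a reflection H = I - (2 / v'v) * v * v' that zeroes out the entries below the diagonal, accumulating R = H * R and Q = Q * H.

```csharp
using System;

// Minimal full Householder QR sketch for a tall (m >= n) matrix.
// Illustrative only -- not the blog's MatDecomQR1 / MatDecomQR2 code.
public class HouseholderQRSketch
{
  public static void Decompose(double[][] A, out double[][] Q,
    out double[][] R)
  {
    int m = A.Length; int n = A[0].Length;
    Q = Identity(m);
    R = Copy(A);
    for (int k = 0; k < n; ++k)
    {
      // Householder vector v zeroes R[k+1..m-1][k]
      double norm = 0.0;
      for (int i = k; i < m; ++i) norm += R[i][k] * R[i][k];
      norm = Math.Sqrt(norm);
      if (norm == 0.0) continue;
      double alpha = (R[k][k] > 0.0) ? -norm : norm;  // sign for stability
      double[] v = new double[m];
      v[k] = R[k][k] - alpha;
      for (int i = k + 1; i < m; ++i) v[i] = R[i][k];
      double vtv = 0.0;
      for (int i = k; i < m; ++i) vtv += v[i] * v[i];
      if (vtv == 0.0) continue;
      // apply H = I - (2 / v'v) v v' :  R = H * R
      for (int j = 0; j < n; ++j)
      {
        double dot = 0.0;
        for (int i = k; i < m; ++i) dot += v[i] * R[i][j];
        for (int i = k; i < m; ++i)
          R[i][j] -= (2.0 * dot / vtv) * v[i];
      }
      // accumulate Q = Q * H
      for (int i = 0; i < m; ++i)
      {
        double dot = 0.0;
        for (int j = k; j < m; ++j) dot += Q[i][j] * v[j];
        for (int j = k; j < m; ++j)
          Q[i][j] -= (2.0 * dot / vtv) * v[j];
      }
    }
  }

  private static double[][] Identity(int n)
  {
    double[][] result = new double[n][];
    for (int i = 0; i < n; ++i)
    {
      result[i] = new double[n];
      result[i][i] = 1.0;
    }
    return result;
  }

  private static double[][] Copy(double[][] M)
  {
    double[][] result = new double[M.Length][];
    for (int i = 0; i < M.Length; ++i)
      result[i] = (double[])M[i].Clone();
    return result;
  }

  static void Main()
  {
    double[][] A = new double[4][];
    A[0] = new double[] { 12, -51, 4 };
    A[1] = new double[] { 6, 167, -68 };
    A[2] = new double[] { -4, 24, -41 };
    A[3] = new double[] { 1, 1, 0 };

    Decompose(A, out double[][] Q, out double[][] R);

    // sanity check: Q * R should reconstruct A
    double maxErr = 0.0;
    for (int i = 0; i < 4; ++i)
      for (int j = 0; j < 3; ++j)
      {
        double sum = 0.0;
        for (int k = 0; k < 4; ++k) sum += Q[i][k] * R[k][j];
        maxErr = Math.Max(maxErr, Math.Abs(sum - A[i][j]));
      }
    Console.WriteLine("max |Q*R - A| = " + maxErr.ToString("E1"));
  }
}
```

The sign choice for alpha (opposite of R[k][k]) avoids catastrophic cancellation in v[k], which is one of the reasons Householder QR is numerically preferable to classical Gram-Schmidt.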

The first version has a ‘reduced’ parameter that selects full or reduced results. But in machine learning, the reduced result is always used. The second version computes only a reduced result because that’s what’s required for SVD.

The first (medium-accuracy, medium-complexity) QR-Householder version has 91 lines of code and 7 helper functions. The second (higher-accuracy, higher-complexity) version has just 50 lines of code but 16 helper functions.

The first version looks like:

  public class MatDecomQR1 // older version based on rosetta
  {
    public static void MatDecomposeQR(double[][] M,
      out double[][] Q, out double[][] R, bool reduced)
    {
      . . . // 91 LOC
    } 

    // ------------------------------------------------------

    // 7 helpers:
    private static double[][] MatMake(int nRows,
      int nCols) { . . }
    private static double[][] MatIdentity(int n) { . . }
    private static double[][] MatCopy(double[][] M) { . . }
    private static double[][] MatProduct(double[][] A,
      double[][] B) { . . }
    private static double[][] VecToMat(double[] vec,
        int nRows, int nCols) { . . }
    private static double VecNorm(double[] vec) { . . }
    private static double VecDot(double[] v1,
      double[] v2) { . . }
  }

The second version looks like:

  public class MatDecomQR2 // newer version for SVD
  {
    public static void MatDecompQR(double[][] A,
      out double[][] Q, out double[][] R)
    {
      . . . // 50 LOC
    } 

    // ------------------------------------------------------

    // 16 helpers:
    private static double[][] MatMake(int nRows,
      int nCols) { . . }
    private static double[][] MatIdentity(int n) { . . }
    private static double[][] MatCopy(double[][] M) { . . }
    private static double[] MatColToEnd(double[][] A,
      int k) { . . }
    private static double[][] MatRowColToEnd(double[][] A,
      int k) { . . }
    private static double[] VecCopy(double[] v) { . . }
    private static double[] VecMatProduct(double[] v,
      double[][] A) { . . }
    private static double[][] VecOuter(double[] v1,
      double[] v2) { . . }
    private static double[][] MatScalarMult(double x,
      double[][] A) { . . }
    private static void MatSubtractCorner(double[][] A,
      int k, double[][] C) { . . }
    private static double[][]
      MatExtractAllRowsColToEnd(double[][] A, int k) { . . }
    private static double[] MatVecProduct(double[][] A,
      double[] v) { . . }
    private static void MatSubtractRows(double[][] A,
      int k, double[][] C) { . . }
    private static double[][]
      MatExtractFirstCols(double[][] A, int k) { . . }
    private static double[][]
      MatExtractFirstRows(double[][] A, int k) { . . }
    private static double VecNorm(double[] vec) { . . }
  } // MatDecomQR2

After exploring both versions, I didn’t come to any surprising conclusions. The first QR-Householder implementation is suitable for general-purpose machine learning use. The second QR-Householder implementation is suitable as a helper for SVD.
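A useful sanity check for either QR implementation is to verify that Q times R reconstructs the source matrix. The sketch below is self-contained rather than calling MatDecomposeQR: it uses a trivially decomposed example (for an upper triangular A, Q = identity and R = A is a valid QR), but the helper works on output from either version.

```csharp
using System;

public class QRCheck
{
  // max absolute difference between Q*R and A;
  // should be near zero for a correct decomposition
  public static double MaxReconstructError(double[][] Q,
    double[][] R, double[][] A)
  {
    double maxDiff = 0.0;
    for (int i = 0; i < A.Length; ++i)
      for (int j = 0; j < A[0].Length; ++j)
      {
        double sum = 0.0;
        for (int k = 0; k < R.Length; ++k)
          sum += Q[i][k] * R[k][j];
        double diff = Math.Abs(sum - A[i][j]);
        if (diff > maxDiff) maxDiff = diff;
      }
    return maxDiff;
  }

  public static void Main()
  {
    double[][] A = new double[][] {
      new double[] { 2.0, 1.0 },
      new double[] { 0.0, 3.0 } };
    double[][] Q = new double[][] {  // identity
      new double[] { 1.0, 0.0 },
      new double[] { 0.0, 1.0 } };
    // for upper triangular A, Q = I and R = A is a valid QR
    Console.WriteLine(MaxReconstructError(Q, A, A));
  }
}
```

A second check, not shown, is that trans(Q) * Q should be the identity matrix.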



Comparing two different versions of QR decomposition was an interesting exploration.


Demo program. Replace “lt” (less than), “gt”, “lte”, “gte” with Boolean operator symbols (my blog editor chokes on symbols).

using System;
using System.IO;

namespace MatDecompQR
{
  internal class Program
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin matrix QR decomposition" +
        " Householder variations ");

      Console.WriteLine("\nKey takeaway: standard QR" +
        " decomp algorithms work only for nRows gte nCols ");

      double[][] Q;
      double[][] R;

      double[][] A = new double[6][];
      A[0] = new double[] { 1, 2, 3, 4, 5 };
      A[1] = new double[] { 0, -3, 5, -7, 9 };
      A[2] = new double[] { 2, 0, -2, 0, -2 };
      A[3] = new double[] { 4, -1, 5, 6, 1 };
      A[4] = new double[] { 3, 6, 8, 2, 2 };
      A[5] = new double[] { 5, -2, 4, -4, 3 };

      Console.WriteLine("\nSource (tall) matrix A: ");
      MatShow(A, 1, 6);

      double[][] B = new double[3][];
      B[0] = new double[] { 3, 1, 1, 0, 5 };
      B[1] = new double[] { -1, 3, 1, -2, 4 };
      B[2] = new double[] { 0, 2, 2, 1, -3 };

      Console.WriteLine("\nSource (wide) matrix B: ");
      MatShow(B, 1, 6);

      Console.WriteLine("\nComputing QR for A using" +
        " older rosetta version ");
      MatDecomQR1.MatDecomposeQR(A, out Q, out R,
        reduced: true);
      Console.WriteLine("\nQ = ");
      MatShow(Q, 6, 11);
      Console.WriteLine("\nR = ");
      MatShow(R, 6, 11);

      //// the rosetta version fails for wide matrices
      //Console.WriteLine("\nThe old rosetta version" +
      //  " fails for wide matrices ");

      Console.WriteLine("\n=========================== ");

      Console.WriteLine("\nComputing QR for A using newer" +
        " hybrid version for SVD ");
      MatDecomQR2.MatDecompQR(A, out Q, out R);
      Console.WriteLine("\nQ = ");
      MatShow(Q, 6, 11);
      Console.WriteLine("\nR = ");
      MatShow(R, 6, 11);

      //Console.WriteLine("\nComputing QR for B using newer" +
      //  " AI hybrid version ");
      //MatDecomQR2.MatDecompQR(B, out Q, out R);
      //Console.WriteLine("\nQ = ");
      //MatShow(Q, 6, 11);
      //Console.WriteLine("\nR = ");
      //MatShow(R, 6, 11);

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main

    // ------------------------------------------------------

    static void MatShow(double[][] M, int dec, int wid)
    {
      for (int i = 0; i "lt" M.Length; ++i)
      {
        for (int j = 0; j "lt" M[0].Length; ++j)
        {
          double v = M[i][j];
          Console.Write(v.ToString("F" + dec).
            PadLeft(wid));
        }
        Console.WriteLine("");
      }
    }

  } // Program

  public class MatDecomQR1 // older version based on rosetta
  {
    public static void MatDecomposeQR(double[][] M,
      out double[][] Q, out double[][] R, bool reduced)
    {
      // QR decomposition, Householder algorithm.
      // see rosettacode.org/wiki/QR_decomposition
      int m = M.Length;
      int n = M[0].Length;

      //if (m "lt" n)
      //  throw new Exception("No rows less than cols");

      double[][] QQ = MatIdentity(m); // working Q
      double[][] RR = MatCopy(M); // working R

      int end;
      if (m == n) end = n - 1;
      else end = n;

      for (int i = 0; i "lt" end; ++i)
      {
        double[][] H = MatIdentity(m);
        double[] a = new double[m - i]; // corr
        int k = 0;
        for (int ii = i; ii "lt" m; ++ii) // corr
          a[k++] = RR[ii][i];

        double normA = VecNorm(a);
        if (a[0] "lt" 0.0 && normA "gt" 0.0) // corr
          normA = -normA;
        else if (a[0] "gt" 0.0 && normA "lt" 0.0)
          normA = -normA;

        double[] v = new double[a.Length];
        for (int j = 0; j "lt" v.Length; ++j)
          v[j] = a[j] / (a[0] + normA);
        v[0] = 1.0;

        // Householder algorithm
        double[][] h = MatIdentity(a.Length);
        double vvDot = VecDot(v, v);
        double[][] A = VecToMat(v, v.Length, 1);
        double[][] B = VecToMat(v, 1, v.Length);
        double[][] AB = MatProduct(A, B);

        for (int ii = 0; ii "lt" h.Length; ++ii)
          for (int jj = 0; jj "lt" h[0].Length; ++jj)
            h[ii][jj] -= (2.0 / vvDot) * AB[ii][jj];

        // copy h[][] into lower right corner of H[][]
        int d = m - h.Length; // corr
        for (int ii = 0; ii "lt" h.Length; ++ii)
          for (int jj = 0; jj "lt" h[0].Length; ++jj)
            H[ii + d][jj + d] = h[ii][jj];

        QQ = MatProduct(QQ, H);
        RR = MatProduct(H, RR);
      } // i

      if (reduced == false)
      {
        Q = QQ; // working results into the out params
        R = RR;
        return;
      }
      //else if (reduced == true)
      {
        int qRows = QQ.Length; int qCols = QQ[0].Length;
        int rRows = RR.Length; int rCols = RR[0].Length;
        // assumes m "gte" n !!

        // square-up R
        int dim = Math.Min(rRows, rCols);
        double[][] Rsquared = MatMake(dim, dim);
        for (int i = 0; i "lt" dim; ++i)
          for (int j = 0; j "lt" dim; ++j)
            Rsquared[i][j] = RR[i][j];

        // Q needs same number columns as R
        // so that inv(R) * trans(Q) works
        double[][] Qtrimmed = MatMake(qRows, dim);
        for (int i = 0; i "lt" qRows; ++i)
          for (int j = 0; j "lt" dim; ++j)
            Qtrimmed[i][j] = QQ[i][j];

        Q = Qtrimmed;
        R = Rsquared;
        return;
      }
    } // MatDecomposeQR()

    // ------------------------------------------------------

    private static double[][] MatMake(int nRows, int nCols)
    {
      double[][] result = new double[nRows][];
      for (int i = 0; i "lt" nRows; ++i)
        result[i] = new double[nCols];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatIdentity(int n)
    {
      double[][] result = MatMake(n, n);
      for (int i = 0; i "lt" n; ++i)
        result[i][i] = 1.0;
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatCopy(double[][] M)
    {
      int nr = M.Length; int nc = M[0].Length;
      double[][] result = MatMake(nr, nc);
      for (int i = 0; i "lt" nr; ++i)
        for (int j = 0; j "lt" nc; ++j)
          result[i][j] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatProduct(double[][] A,
      double[][] B)
    {
      int aRows = A.Length; int aCols = A[0].Length;
      int bRows = B.Length; int bCols = B[0].Length;
      if (aCols != bRows)
        throw new Exception("Non-conformable matrices");

      double[][] result = MatMake(aRows, bCols);

      for (int i = 0; i "lt" aRows; ++i) // each row of A
        for (int j = 0; j "lt" bCols; ++j) // each col of B
          for (int k = 0; k "lt" aCols; ++k)
            result[i][j] += A[i][k] * B[k][j];

      return result;
    }

    // ------------------------------------------------------

    private static double[][] VecToMat(double[] vec,
        int nRows, int nCols)
    {
      double[][] result = MatMake(nRows, nCols);
      int k = 0;
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[i][j] = vec[k++];
      return result;
    }

    // ------------------------------------------------------

    private static double VecNorm(double[] vec)
    {
      int n = vec.Length;
      double sum = 0.0;
      for (int i = 0; i "lt" n; ++i)
        sum += vec[i] * vec[i];
      return Math.Sqrt(sum);
    }

    // ------------------------------------------------------

    private static double VecDot(double[] v1, double[] v2)
    {
      double result = 0.0;
      int n = v1.Length;
      for (int i = 0; i "lt" n; ++i)
        result += v1[i] * v2[i];
      return result;
    }

    // ------------------------------------------------------

  } // MatDecomQR1

  public class MatDecomQR2 // newer hybrid version from AI
  {
    public static void MatDecompQR(double[][] A,
      out double[][] Q, out double[][] R)
    {
      // reduced QR decomp using Householder reflections
      // this version used by SVD
      int m = A.Length; int n = A[0].Length;
      //if (m "lt" n)
      //  throw new Exception("No rows less than cols");

      double[][] QQ = MatIdentity(m);  // working Q
      double[][] RR = MatCopy(A);  // working R

      for (int k = 0; k "lt" n; ++k)
      {
        double[] x = MatColToEnd(RR, k); // RR[k:, k]
        double normx = VecNorm(x);
        if (Math.Abs(normx) "lt" 1.0e-12) continue;

        double[] v = VecCopy(x);
        double sign;
        if (x[0] "gte" 0.0) sign = 1.0; else sign = -1.0;
        v[0] += sign * normx;
        double normv = VecNorm(v);
        for (int i = 0; i "lt" v.Length; ++i)
          v[i] /= normv;

        // apply reflection to R
        double[][] tmp1 =
          MatRowColToEnd(RR, k); // RR[k:, k:]
        double[] tmp2 = VecMatProduct(v, tmp1);
        double[][] tmp3 = VecOuter(v, tmp2);
        double[][] B = MatScalarMult(2.0, tmp3);
        MatSubtractCorner(RR, k, B);  // R[k:, k:] -= B

        // accumulate Q
        double[][] tmp4 =
          MatExtractAllRowsColToEnd(QQ, k); // QQ[:, k:]
        double[] tmp5 = MatVecProduct(tmp4, v);
        double[][] tmp6 = VecOuter(tmp5, v);
        double[][] C = MatScalarMult(2.0, tmp6);
        MatSubtractRows(QQ, k, C); // QQ[:, k:] -= C
      } // for k

      // return parts of QQ and RR
      Q = MatExtractFirstCols(QQ, n); // Q[:, :n]
      R = MatExtractFirstRows(RR, n); // R[:n, :]

    } // MatDecompQR()

    // ------------------------------------------------------

    private static double[][] MatMake(int nRows, int nCols)
    {
      double[][] result = new double[nRows][];
      for (int i = 0; i "lt" nRows; ++i)
        result[i] = new double[nCols];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatIdentity(int n)
    {
      double[][] result = MatMake(n, n);
      for (int i = 0; i "lt" n; ++i)
        result[i][i] = 1.0;
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatCopy(double[][] M)
    {
      int nr = M.Length; int nc = M[0].Length;
      double[][] result = MatMake(nr, nc);
      for (int i = 0; i "lt" nr; ++i)
        for (int j = 0; j "lt" nc; ++j)
          result[i][j] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[] MatColToEnd(double[][] A,
      int k)
    {
      // last part of col k, from [k,k] to end
      int m = A.Length; int n = A[0].Length;
      double[] result = new double[m - k];
      for (int i = 0; i "lt" m - k; ++i)
        result[i] = A[i + k][k];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatRowColToEnd(double[][] A,
      int k)
    {
      // block of A, row k to end, col k to end
      // A[k:, k:]
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m - k, n - k);
      for (int i = 0; i "lt" m - k; ++i)
        for (int j = 0; j "lt" n - k; ++j)
          result[i][j] = A[i + k][j + k];
      return result;
    }

    // ------------------------------------------------------

    private static double[] VecCopy(double[] v)
    {
      double[] result = new double[v.Length];
      for (int i = 0; i "lt" v.Length; ++i)
        result[i] = v[i];
      return result;
    }

    // ------------------------------------------------------

    private static double[] VecMatProduct(double[] v,
      double[][] A)
    {
      // v * A
      int m = A.Length; int n = A[0].Length;
      double[] result = new double[n];
      for (int j = 0; j "lt" n; ++j)
        for (int i = 0; i "lt" m; ++i)
          result[j] += v[i] * A[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] VecOuter(double[] v1,
      double[] v2)
    {
      // vector outer product
      double[][] result = MatMake(v1.Length, v2.Length);
      for (int i = 0; i "lt" v1.Length; ++i)
        for (int j = 0; j "lt" v2.Length; ++j)
          result[i][j] = v1[i] * v2[j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatScalarMult(double x,
      double[][] A)
    {
      // x * A element-wise
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m, n);
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n; ++j)
          result[i][j] = x * A[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static void MatSubtractCorner(double[][] A,
      int k, double[][] C)
    {
      // subtract elements in C from lower right A
      // A[k:, k:] -= C
      int m = A.Length; int n = A[0].Length;
      for (int i = 0; i "lt" m - k; ++i)
        for (int j = 0; j "lt" n - k; ++j)
          A[i + k][j + k] -= C[i][j];

      return;  // modify A in-place 
    }

    // ------------------------------------------------------

    private static double[][]
      MatExtractAllRowsColToEnd(double[][] A, int k)
    {
      // A[:, k:]
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m, n - k);
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n - k; ++j)
          result[i][j] = A[i][j + k];
      return result;
    }

    // -----------------------------------------------------

    private static double[] MatVecProduct(double[][] A,
      double[] v)
    {
      // A * v
      int m = A.Length; int n = A[0].Length;
      double[] result = new double[m];
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n; ++j)
          result[i] += A[i][j] * v[j];
      return result;
    }

    // ------------------------------------------------------

    private static void MatSubtractRows(double[][] A,
      int k, double[][] C)
    {
      // A[:, k:] -= C
      int m = A.Length; int n = A[0].Length;
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n - k; ++j)
          A[i][j + k] -= C[i][j];

      return;  // modify A in-place
    }

    // ------------------------------------------------------

    private static double[][]
      MatExtractFirstCols(double[][] A, int k)
    {
      // A[:, :k]
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m, k);
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" k; ++j)
          result[i][j] = A[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][]
      MatExtractFirstRows(double[][] A, int k)
    {
      // A[:k, :]
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(k, n);
      for (int i = 0; i "lt" k; ++i)
        for (int j = 0; j "lt" n; ++j)
          result[i][j] = A[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double VecNorm(double[] vec)
    {
      int n = vec.Length;
      double sum = 0.0;
      for (int i = 0; i "lt" n; ++i)
        sum += vec[i] * vec[i];
      return Math.Sqrt(sum);
    }

    // ------------------------------------------------------

  } // MatDecomQR2

} // ns
Posted in Machine Learning

Quadratic Regression with Pseudo-Inverse Training Implemented Using C#

The goal of a machine learning regression problem is to predict a single numeric value. Quadratic regression is an enhanced form of basic linear regression. One of several ways to train a quadratic regression model is to use pseudo-inverse training.

Suppose there are five predictors, (x0, x1, x2, x3, x4). The prediction equation for basic linear regression is:

y’ = (w0 * x0) + (w1 * x1) + (w2 * x2) + (w3 * x3) + (w4 * x4) + b

The wi are model weights (aka coefficients), and b is the model bias (aka intercept). The values of the weights and the bias must be determined by training, so that predicted y’ values are close to the known, correct y values in a set of training data.
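In code, the basic linear prediction equation is just a dot product plus the bias. A minimal sketch, with made-up weight and bias values for illustration:

```csharp
using System;

public class LinearPredictDemo
{
  // y' = (w0 * x0) + (w1 * x1) + ... + (w4 * x4) + b
  public static double Predict(double[] x, double[] w,
    double b)
  {
    double sum = b;
    for (int i = 0; i < x.Length; ++i)
      sum += w[i] * x[i];
    return sum;
  }

  public static void Main()
  {
    double[] x = new double[] { 1.0, 2.0, 3.0, 4.0, 5.0 };
    double[] w = new double[] { 0.1, 0.2, 0.3, 0.4, 0.5 };
    double b = 0.5;  // made-up values
    Console.WriteLine(Predict(x, w, b).ToString("F4"));
  }
}
```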

The prediction equation for quadratic regression with five predictors is:

y’ = (w0 * x0) + (w1 * x1) + (w2 * x2) + (w3 * x3) + (w4 * x4) +

(w5 * x0*x0) + (w6 * x1*x1) + (w7 * x2*x2) +
(w8 * x3*x3) + (w9 * x4*x4) +

(w10 * x0*x1) + (w11 * x0*x2) + (w12 * x0*x3) + (w13 * x0*x4) +
(w14 * x1*x2) + (w15 * x1*x3) + (w16 * x1*x4) +
(w17 * x2*x3) + (w18 * x2*x4) +
(w19 * x3*x4)

+ b

The squared (“quadratic”) xi^2 terms handle non-linear structure. If there are n predictors, there are also n squared terms. The xi * xj terms between all possible pairs of original predictors handle interactions between predictors. If there are n predictors, there are (n * (n-1)) / 2 interaction terms.
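As a quick check of the counts: with n = 5 predictors there are 5 base terms, 5 squared terms, and (5 * 4) / 2 = 10 interaction terms, for 20 weights plus the bias. A tiny sketch:

```csharp
using System;

public class TermCountDemo
{
  // number of weights for quadratic regression with n
  // predictors: n base + n squared + n*(n-1)/2 interaction
  public static int NumWeights(int n)
  {
    return n + n + (n * (n - 1)) / 2;
  }

  public static void Main()
  {
    Console.WriteLine(NumWeights(5));   // 5 + 5 + 10 = 20
    Console.WriteLine(NumWeights(10));  // 10 + 10 + 45 = 65
  }
}
```

For the 10-predictor Diabetes data, a quadratic regression model has 65 weights plus a bias, so the number of terms grows quickly with the number of predictors.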

Training is the process of finding values for the weights and the bias so that the model predicts well. There are several different training techniques, including pseudo-inverse training, stochastic gradient descent training, closed form training, L-BFGS training, and others.

The math equation for pseudo-inverse training is w = pinv(DX) * y, where w is a vector that holds the weights and bias, pinv() is the Moore-Penrose pseudo-inverse function, DX is a design matrix (a matrix of the training x values with a column of 1.0s added at the front), and * is matrix-to-vector multiplication.
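The column ordering of the design matrix matters because the bias ends up at position [0] of the result vector. One way to build a single design-matrix row, following the ordering in the prediction equation (leading 1.0, base terms, squared terms, then interaction terms):

```csharp
using System;

public class DesignRowDemo
{
  // expand one predictor vector into a design-matrix row:
  // leading 1.0 (for the bias), base terms, squared terms,
  // then the i-lt-j interaction terms
  public static double[] ExpandRow(double[] x)
  {
    int n = x.Length;
    double[] row = new double[1 + n + n + (n * (n - 1)) / 2];
    int p = 0;  // points to column of row
    row[p++] = 1.0;
    for (int i = 0; i < n; ++i) row[p++] = x[i];
    for (int i = 0; i < n; ++i) row[p++] = x[i] * x[i];
    for (int i = 0; i < n - 1; ++i)
      for (int j = i + 1; j < n; ++j)
        row[p++] = x[i] * x[j];
    return row;
  }

  public static void Main()
  {
    double[] x = new double[] { 2, 3, 4, 5, 6 };
    double[] row = ExpandRow(x);
    Console.WriteLine(row.Length);  // 21 columns
  }
}
```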

For my demo, I used one of my standard datasets. The data looks like:

-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
. . .

The first five values on each line are the x predictors. The last value on each line is the target y variable to predict. The data is synthetic, and was generated by a 5-10-1 neural network with random weights and biases. There are 200 training items and 40 test items.

The output of my demo is:

Begin C# quadratic regression with pseudo-inverse
 (QR-Householder) training

Loading synthetic train (200) and test (40) data
Done

First three train X:
 -0.1660  0.4406 -0.9998 -0.3953 -0.7065
  0.0776 -0.1616  0.3704 -0.5911  0.7562
 -0.9452  0.3409 -0.1654  0.1174 -0.7192

First three train y:
  0.4840
  0.1568
  0.8054

Creating quadratic regression model

Starting pseudo-inverse training
Done

Model base weights:
 -0.2630  0.0354 -0.0421  0.0341 -0.1124

Model quadratic weights:
  0.0655  0.0194  0.0051  0.0047  0.0243

Model interaction weights:
  0.0043  0.0249  0.0071  0.1081 -0.0012 -0.0093
  0.0362  0.0085 -0.0568  0.0016

Model bias/intercept:   0.3220

Evaluating model
Accuracy train (within 0.10) = 0.8850
Accuracy test (within 0.10) = 0.9250

MSE train = 0.0003
MSE test = 0.0005

Predicting for x =
  -0.1660   0.4406  -0.9998  -0.3953  -0.7065

Predicted y = 0.4843

End demo

When pseudo-inverse training works, it’s highly effective, in part because it doesn’t require parameters like a learning rate and maximum epochs. But the technique is very complicated and can (very rarely) fail.
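The rare failure case happens when the design matrix has linearly dependent columns. Then the R matrix from the QR decomposition has a zero (or near-zero) value on its diagonal, and inverting R blows up. A hypothetical guard, assuming R comes from one of the decomposition routines:

```csharp
using System;

public class PinvGuard
{
  // an upper triangular matrix is invertible exactly when
  // every diagonal element is non-zero; a near-zero value
  // signals that pseudo-inverse training will fail
  public static bool IsInvertibleUpperTri(double[][] R,
    double tol)
  {
    for (int i = 0; i < R.Length; ++i)
      if (Math.Abs(R[i][i]) < tol) return false;
    return true;
  }

  public static void Main()
  {
    double[][] good = new double[][] {
      new double[] { 2.0, 1.0 },
      new double[] { 0.0, 3.0 } };
    double[][] bad = new double[][] {
      new double[] { 2.0, 1.0 },
      new double[] { 0.0, 0.0 } };  // zero on diagonal
    Console.WriteLine(IsInvertibleUpperTri(good, 1.0e-12));
    Console.WriteLine(IsInvertibleUpperTri(bad, 1.0e-12));
  }
}
```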



The term “machine learning” was coined by computer scientist Arthur Samuel in 1959. I’ve always been fascinated by old coin-operated automata machines. There were quite a few execution-themed machines made from the 1920s through the 1950s.

This is “The American Execution” automaton. It was produced by the Canova Novelty Co. in 1920. Quite gruesome.


Demo program. Replace “lt” (less than), “gt”, “lte”, “gte” with Boolean operator symbols. (My blog editor chokes on symbols).

using System;
using System.IO;
using System.Collections.Generic;

namespace QuadraticRegressionPinv
{
  internal class QuadraticRegressionPinvProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin C# quadratic regression" +
        " with pseudo-inverse (QR-Householder) training ");

      // 1. load data
      Console.WriteLine("\nLoading synthetic train" +
        " (200) and test (40) data");
      string trainFile =
        "..\\..\\..\\Data\\synthetic_train_200.txt";
      int[] colsX = new int[] { 0, 1, 2, 3, 4 };
      double[][] trainX =
        MatLoad(trainFile, colsX, ',', "#");
      double[] trainY =
        MatToVec(MatLoad(trainFile,
        new int[] { 5 }, ',', "#"));

      string testFile =
        "..\\..\\..\\Data\\synthetic_test_40.txt";
      double[][] testX =
        MatLoad(testFile, colsX, ',', "#");
      double[] testY =
        MatToVec(MatLoad(testFile,
        new int[] { 5 }, ',', "#"));
      Console.WriteLine("Done ");

      Console.WriteLine("\nFirst three train X: ");
      for (int i = 0; i "lt" 3; ++i)
        VecShow(trainX[i], 4, 8);

      Console.WriteLine("\nFirst three train y: ");
      for (int i = 0; i "lt" 3; ++i)
        Console.WriteLine(trainY[i].ToString("F4").
          PadLeft(8));

      // 2. create and train model
      Console.WriteLine("\nCreating quadratic " +
        "regression model ");
      QuadraticRegressor model = new QuadraticRegressor();

      Console.WriteLine("\nStarting pseudo-inverse " +
        "training ");
      model.Train(trainX, trainY);
      Console.WriteLine("Done ");

      //Console.WriteLine("\nStarting SGD training ");
      //model.TrainSGD(trainX, trainY, 0.01, 200);
      //Console.WriteLine("Done ");

      // 3. show model weights
      Console.WriteLine("\nModel base weights: ");
      int dim = trainX[0].Length;
      for (int i = 0; i "lt" dim; ++i)
        Console.Write(model.weights[i].
          ToString("F4").PadLeft(8));
      Console.WriteLine("");

      Console.WriteLine("\nModel quadratic weights: ");
      for (int i = dim; i "lt" dim + dim; ++i)
        Console.Write(model.weights[i].
          ToString("F4").PadLeft(8));
      Console.WriteLine("");

      Console.WriteLine("\nModel interaction weights: ");
      for (int i = dim + dim; i "lt" model.weights.Length; ++i)
      {
        Console.Write(model.weights[i].
          ToString("F4").PadLeft(8));
        if (i "gt" dim+dim && i % dim == 0)
          Console.WriteLine("");
      }
      Console.WriteLine("");

      Console.WriteLine("\nModel bias/intercept: " +
        model.bias.ToString("F4").PadLeft(8));

      // 4. evaluate model
      Console.WriteLine("\nEvaluating model ");
      double accTrain = model.Accuracy(trainX, trainY, 0.10);
      Console.WriteLine("Accuracy train (within 0.10) = " +
        accTrain.ToString("F4"));
      double accTest = model.Accuracy(testX, testY, 0.10);
      Console.WriteLine("Accuracy test (within 0.10) = " +
        accTest.ToString("F4"));

      double mseTrain = model.MSE(trainX, trainY);
      Console.WriteLine("\nMSE train = " +
        mseTrain.ToString("F4"));
      double mseTest = model.MSE(testX, testY);
      Console.WriteLine("MSE test = " +
        mseTest.ToString("F4"));

      // 5. use model
      double[] x = trainX[0];
      Console.WriteLine("\nPredicting for x = ");
      VecShow(x, 4, 9);
      double predY = model.Predict(x);
      Console.WriteLine("\nPredicted y = " +
        predY.ToString("F4"));

      // 6. TODO: implement model Save() and Load()

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main

    // ------------------------------------------------------
    // helpers for Main()
    // ------------------------------------------------------

    static double[][] MatLoad(string fn, int[] usecols,
      char sep, string comment)
    {
      List"lt"double[]"gt" result =
        new List"lt"double[]"gt"();
      string line = "";
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        string[] tokens = line.Split(sep);
        List"lt"double"gt" lst = new List"lt"double"gt"();
        for (int j = 0; j "lt" usecols.Length; ++j)
          lst.Add(double.Parse(tokens[usecols[j]]));
        double[] row = lst.ToArray();
        result.Add(row);
      }
      sr.Close(); ifs.Close();
      return result.ToArray();
    }

    static double[] MatToVec(double[][] M)
    {
      int nRows = M.Length;
      int nCols = M[0].Length;
      double[] result = new double[nRows * nCols];
      int k = 0;
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[k++] = M[i][j];
      return result;
    }

    static void VecShow(double[] vec, int dec, int wid)
    {
      for (int i = 0; i "lt" vec.Length; ++i)
        Console.Write(vec[i].ToString("F" + dec).
          PadLeft(wid));
      Console.WriteLine("");
    }
  } // class Program

  // ========================================================

  public class QuadraticRegressor
  {
    public double[] weights;  // regular, quad, interactions
    public double bias;
    private Random rnd;  // not used w/ Pinv training

    public QuadraticRegressor(int seed = 0)
    {
      this.weights = new double[0];  // empty, but not null
      this.bias = 0; // dummy value
      this.rnd = new Random(seed); 
    }

    // ------------------------------------------------------

    public void Train(double[][] trainX, double[] trainY)
    {
      // w = pinv(designX) * y
      int nRows = trainX.Length; // not used
      int dim = trainX[0].Length;
      int nInteractions = (dim * (dim - 1)) / 2;
      this.weights = new double[dim + dim + nInteractions];

      double[][] X = MatDesign(trainX);  // design X
      double[][] Xpinv = MatPseudoInv(X); // QR version
      double[] biasAndWts = MatVecProd(Xpinv, trainY);
      this.bias = biasAndWts[0];  // bias is at [0]
      for (int i = 1; i "lt" biasAndWts.Length; ++i)
        this.weights[i - 1] = biasAndWts[i];
      return;
    }

    // ------------------------------------------------------
    
    public double Predict(double[] x)
    {
      int dim = x.Length;
      double result = 0.0;

      int p = 0; // points into this.weights
      for (int i = 0; i "lt" dim; ++i)   // regular
        result += x[i] * this.weights[p++];

      for (int i = 0; i "lt" dim; ++i)  // quadratic
        result += x[i] * x[i] * this.weights[p++];

      for (int i = 0; i "lt" dim-1; ++i)  // interactions
        for (int j = i+1; j "lt" dim; ++j)
          result += x[i] * x[j] * this.weights[p++]; 
 
      result += this.bias;
      return result;
    }

    // ------------------------------------------------------

    public double Accuracy(double[][] dataX, double[] dataY,
      double pctClose)
    {
      int numCorrect = 0; int numWrong = 0;
      for (int i = 0; i "lt" dataX.Length; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        if (Math.Abs(predY - actualY) "lt"
          Math.Abs(pctClose * actualY))
          ++numCorrect;
        else
          ++numWrong;
      }
      return (numCorrect * 1.0) / (numWrong + numCorrect);
    }

    // ------------------------------------------------------

    public double MSE(double[][] dataX,
      double[] dataY)
    {
      int n = dataX.Length;
      double sum = 0.0;
      for (int i = 0; i "lt" n; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        sum += (actualY - predY) * (actualY - predY);
      }
      return sum / n;
    }

    // ------------------------------------------------------
    // helpers for Train():
    //  MatDesign(), MatVecProd(), MatPseudoInv() 
    // sub-helpers:
    //  MatDecomposeQR(), MatInvUpperTri()
    // minor helpers:
    //  MatCopy(), MatTranspose(), MatMake(), MatIdentity(),
    //  MatProduct(), VecNorm(), VecDot(), VecToMat()
    // ------------------------------------------------------

    private static double[][] MatDesign(double[][] trainX)
    {
      // design matrix: add leading col of 1.0s
      int nRows = trainX.Length;  // src and dest
      int dim = trainX[0].Length;
      int nInteractions = dim * (dim - 1) / 2;
      int nColsDest = 1 + dim + dim + nInteractions;

      double[][] result = new double[nRows][];
      for (int i = 0; i "lt" nRows; i++)
        result[i] = new double[nColsDest];

      for (int i = 0; i "lt" nRows; ++i)
      {
        int p = 0; // points to column of result

        result[i][p++] = 1.0;  // leading 1.0

        for (int j = 0; j "lt" dim;  ++j) // base
          result[i][p++] = trainX[i][j];

        for (int j = 0; j "lt" dim;  ++j) // quadratic
          result[i][p++] = trainX[i][j] * trainX[i][j];

        for (int j = 0; j "lt" dim - 1; ++j) // interactions
          for (int k = j+1; k "lt" dim; ++k)
            result[i][p++] = trainX[i][j] * trainX[i][k];
      }

      return result;
    }

    // ------------------------------------------------------

    private static double[] MatVecProd(double[][] M,
      double[] v)
    {
      // return a regular vector
      int nRows = M.Length;
      int nCols = M[0].Length;
      int n = v.Length;
      if (nCols != n)
        throw new Exception("non-conformable in MatVecProd");

      double[] result = new double[nRows];
      for (int i = 0; i "lt" nRows; ++i)
        for (int k = 0; k "lt" nCols; ++k)
          result[i] += M[i][k] * v[k];

      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatPseudoInv(double[][] M)
    {
      // Moore-Penrose pseudo-inverse using QR decomp
      // A = Q*R, pinv(A) = inv(R) * trans(Q)

      int m = M.Length; int n = M[0].Length;
      if (m "lt" n)
        Console.WriteLine("ERROR: Works only m "gte" n");

      double[][] Q; double[][] R;
      MatDecomposeQR(M, out Q, out R, true); // reduced

      double[][] Rinv = MatInvUpperTri(R); // std algo
      double[][] Qtrans = MatTranspose(Q);  // is inv(Q)
      double[][] result = MatProduct(Rinv, Qtrans);
      return result;
    }

    // ------------------------------------------------------

    private static void MatDecomposeQR(double[][] M,
      out double[][] Q, out double[][] R, bool reduced)
    {
      // QR decomposition, Householder algorithm.
      int m = M.Length;
      int n = M[0].Length;

      if (m "lt" n)
        throw new Exception("No rows less than cols");

      double[][] QQ = MatIdentity(m); // working Q
      double[][] RR = MatCopy(M); // working R

      int end;
      if (m == n) end = n - 1;
      else end = n;

      for (int i = 0; i "lt" end; ++i)
      {
        double[][] H = MatIdentity(m);
        double[] a = new double[m - i]; // corr
        int k = 0;
        for (int ii = i; ii "lt" m; ++ii) // corr
          a[k++] = RR[ii][i];

        double normA = VecNorm(a);
        if (a[0] "lt" 0.0 && normA "gt" 0.0) // corr
          normA = -normA;
        else if (a[0] "gt" 0.0 && normA "lt" 0.0)
          normA = -normA;

        double[] v = new double[a.Length];
        for (int j = 0; j "lt" v.Length; ++j)
          v[j] = a[j] / (a[0] + normA);
        v[0] = 1.0;

        // Householder algorithm
        double[][] h = MatIdentity(a.Length);
        double vvDot = VecDot(v, v);
        double[][] A = VecToMat(v, v.Length, 1);
        double[][] B = VecToMat(v, 1, v.Length);
        double[][] AB = MatProduct(A, B);

        for (int ii = 0; ii "lt" h.Length; ++ii)
          for (int jj = 0; jj "lt" h[0].Length; ++jj)
            h[ii][jj] -= (2.0 / vvDot) * AB[ii][jj];

        // copy h[][] into lower right corner of H[][]
        int d = m - h.Length; // corr
        for (int ii = 0; ii "lt" h.Length; ++ii)
          for (int jj = 0; jj "lt" h[0].Length; ++jj)
            H[ii + d][jj + d] = h[ii][jj];

        QQ = MatProduct(QQ, H);
        RR = MatProduct(H, RR);
      } // i

      if (reduced == false)
      {
        Q = QQ; // working results into the out params
        R = RR;
        return;
      }
      //else if (reduced == true)
      {
        int qRows = QQ.Length; int qCols = QQ[0].Length;
        int rRows = RR.Length; int rCols = RR[0].Length;
        // assumes m "gte" n !!

        // square-up R
        int dim = Math.Min(rRows, rCols);
        double[][] Rsquared = MatMake(dim, dim);
        for (int i = 0; i "lt" dim; ++i)
          for (int j = 0; j "lt" dim; ++j)
            Rsquared[i][j] = RR[i][j];

        // Q needs same number columns as R
        // so that inv(R) * trans(Q) works
        double[][] Qtrimmed = MatMake(qRows, dim);
        for (int i = 0; i "lt" qRows; ++i)
          for (int j = 0; j "lt" dim; ++j)
            Qtrimmed[i][j] = QQ[i][j];

        Q = Qtrimmed;
        R = Rsquared;
        return;
      }
    } // MatDecomposeQR()

    // ------------------------------------------------------

    private static double[][] MatInvUpperTri(double[][] U)
    {
      // used to invert R from QR
      int n = U.Length;  // must be square matrix
      double[][] result = MatIdentity(n);

      for (int k = 0; k "lt" n; ++k)
      {
        for (int j = 0; j "lt" n; ++j)
        {
          for (int i = 0; i "lt" k; ++i)
          {
            result[j][k] -= result[j][i] * U[i][k];
          }
          result[j][k] /= (U[k][k] + 1.0e-8); // avoid 0
        }
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatCopy(double[][] M)
    {
      int nr = M.Length; int nc = M[0].Length;
      double[][] result = MatMake(nr, nc);
      for (int i = 0; i "lt" nr; ++i)
        for (int j = 0; j "lt" nc; ++j)
          result[i][j] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatTranspose(double[][] M)
    {
      int nr = M.Length;
      int nc = M[0].Length;
      double[][] result = MatMake(nc, nr);  // note
      for (int i = 0; i "lt" nr; ++i)
        for (int j = 0; j "lt" nc; ++j)
          result[j][i] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatMake(int nRows, int nCols)
    {
      double[][] result = new double[nRows][];
      for (int i = 0; i "lt" nRows; ++i)
        result[i] = new double[nCols];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatIdentity(int n)
    {
      double[][] result = MatMake(n, n);
      for (int i = 0; i "lt" n; ++i)
        result[i][i] = 1.0;
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatProduct(double[][] A,
      double[][] B)
    {
      int aRows = A.Length;
      int aCols = A[0].Length;
      int bRows = B.Length;
      int bCols = B[0].Length;
      if (aCols != bRows)
        throw new Exception("Non-conformable matrices");

      double[][] result = MatMake(aRows, bCols);

      for (int i = 0; i "lt" aRows; ++i) // each row of A
        for (int j = 0; j "lt" bCols; ++j) // each col of B
          for (int k = 0; k "lt" aCols; ++k)
            result[i][j] += A[i][k] * B[k][j];

      return result;
    }

    // ------------------------------------------------------

    private static double VecNorm(double[] vec)
    {
      int n = vec.Length;
      double sum = 0.0;
      for (int i = 0; i "lt" n; ++i)
        sum += vec[i] * vec[i];
      return Math.Sqrt(sum);
    }

    // ------------------------------------------------------

   
    private static double VecDot(double[] v1, double[] v2)
    {
      double result = 0.0;
      int n = v1.Length;
      for (int i = 0; i "lt" n; ++i)
        result += v1[i] * v2[i];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] VecToMat(double[] vec,
      int nRows, int nCols)
    {
      double[][] result = MatMake(nRows, nCols);
      int k = 0;
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[i][j] = vec[k++];
      return result;
    }

    // ------------------------------------------------------
   
  } // class QuadraticRegressor

  // ========================================================

} // ns

Training data:

# synthetic_train_200.txt
#
-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
-0.8299, -0.9219, -0.6603,  0.7563, -0.8033,  0.7955
 0.0663,  0.3838, -0.3690,  0.3730,  0.6693,  0.3206
-0.9634,  0.5003,  0.9777,  0.4963, -0.4391,  0.7377
-0.1042,  0.8172, -0.4128, -0.4244, -0.7399,  0.4801
-0.9613,  0.3577, -0.5767, -0.4689, -0.0169,  0.6861
-0.7065,  0.1786,  0.3995, -0.7953, -0.1719,  0.5569
 0.3888, -0.1716, -0.9001,  0.0718,  0.3276,  0.2500
 0.1731,  0.8068, -0.7251, -0.7214,  0.6148,  0.3297
-0.2046, -0.6693,  0.8550, -0.3045,  0.5016,  0.2129
 0.2473,  0.5019, -0.3022, -0.4601,  0.7918,  0.2613
-0.1438,  0.9297,  0.3269,  0.2434, -0.7705,  0.5171
 0.1568, -0.1837, -0.5259,  0.8068,  0.1474,  0.3307
-0.9943,  0.2343, -0.3467,  0.0541,  0.7719,  0.5581
 0.2467, -0.9684,  0.8589,  0.3818,  0.9946,  0.1092
-0.6553, -0.7257,  0.8652,  0.3936, -0.8680,  0.7018
 0.8460,  0.4230, -0.7515, -0.9602, -0.9476,  0.1996
-0.9434, -0.5076,  0.7201,  0.0777,  0.1056,  0.5664
 0.9392,  0.1221, -0.9627,  0.6013, -0.5341,  0.1533
 0.6142, -0.2243,  0.7271,  0.4942,  0.1125,  0.1661
 0.4260,  0.1194, -0.9749, -0.8561,  0.9346,  0.2230
 0.1362, -0.5934, -0.4953,  0.4877, -0.6091,  0.3810
 0.6937, -0.5203, -0.0125,  0.2399,  0.6580,  0.1460
-0.6864, -0.9628, -0.8600, -0.0273,  0.2127,  0.5387
 0.9772,  0.1595, -0.2397,  0.1019,  0.4907,  0.1611
 0.3385, -0.4702, -0.8673, -0.2598,  0.2594,  0.2270
-0.8669, -0.4794,  0.6095, -0.6131,  0.2789,  0.4700
 0.0493,  0.8496, -0.4734, -0.8681,  0.4701,  0.3516
 0.8639, -0.9721, -0.5313,  0.2336,  0.8980,  0.1412
 0.9004,  0.1133,  0.8312,  0.2831, -0.2200,  0.1782
 0.0991,  0.8524,  0.8375, -0.2102,  0.9265,  0.2150
-0.6521, -0.7473, -0.7298,  0.0113, -0.9570,  0.7422
 0.6190, -0.3105,  0.8802,  0.1640,  0.7577,  0.1056
 0.6895,  0.8108, -0.0802,  0.0927,  0.5972,  0.2214
 0.1982, -0.9689,  0.1870, -0.1326,  0.6147,  0.1310
-0.3695,  0.7858,  0.1557, -0.6320,  0.5759,  0.3773
-0.1596,  0.3581,  0.8372, -0.9992,  0.9535,  0.2071
-0.2468,  0.9476,  0.2094,  0.6577,  0.1494,  0.4132
 0.1737,  0.5000,  0.7166,  0.5102,  0.3961,  0.2611
 0.7290, -0.3546,  0.3416, -0.0983, -0.2358,  0.1332
-0.3652,  0.2438, -0.1395,  0.9476,  0.3556,  0.4170
-0.6029, -0.1466, -0.3133,  0.5953,  0.7600,  0.4334
-0.4596, -0.4953,  0.7098,  0.0554,  0.6043,  0.2775
 0.1450,  0.4663,  0.0380,  0.5418,  0.1377,  0.2931
-0.8636, -0.2442, -0.8407,  0.9656, -0.6368,  0.7429
 0.6237,  0.7499,  0.3768,  0.1390, -0.6781,  0.2185
-0.5499,  0.1850, -0.3755,  0.8326,  0.8193,  0.4399
-0.4858, -0.7782, -0.6141, -0.0008,  0.4572,  0.4197
 0.7033, -0.1683,  0.2334, -0.5327, -0.7961,  0.1776
 0.0317, -0.0457, -0.6947,  0.2436,  0.0880,  0.3345
 0.5031, -0.5559,  0.0387,  0.5706, -0.9553,  0.3107
-0.3513,  0.7458,  0.6894,  0.0769,  0.7332,  0.3170
 0.2205,  0.5992, -0.9309,  0.5405,  0.4635,  0.3532
-0.4806, -0.4859,  0.2646, -0.3094,  0.5932,  0.3202
 0.9809, -0.3995, -0.7140,  0.8026,  0.0831,  0.1600
 0.9495,  0.2732,  0.9878,  0.0921,  0.0529,  0.1289
-0.9476, -0.6792,  0.4913, -0.9392, -0.2669,  0.5966
 0.7247,  0.3854,  0.3819, -0.6227, -0.1162,  0.1550
-0.5922, -0.5045, -0.4757,  0.5003, -0.0860,  0.5863
-0.8861,  0.0170, -0.5761,  0.5972, -0.4053,  0.7301
 0.6877, -0.2380,  0.4997,  0.0223,  0.0819,  0.1404
 0.9189,  0.6079, -0.9354,  0.4188, -0.0700,  0.1907
-0.1428, -0.7820,  0.2676,  0.6059,  0.3936,  0.2790
 0.5324, -0.3151,  0.6917, -0.1425,  0.6480,  0.1071
-0.8432, -0.9633, -0.8666, -0.0828, -0.7733,  0.7784
-0.9444,  0.5097, -0.2103,  0.4939, -0.0952,  0.6787
-0.0520,  0.6063, -0.1952,  0.8094, -0.9259,  0.4836
 0.5477, -0.7487,  0.2370, -0.9793,  0.0773,  0.1241
 0.2450,  0.8116,  0.9799,  0.4222,  0.4636,  0.2355
 0.8186, -0.1983, -0.5003, -0.6531, -0.7611,  0.1511
-0.4714,  0.6382, -0.3788,  0.9648, -0.4667,  0.5950
 0.0673, -0.3711,  0.8215, -0.2669, -0.1328,  0.2677
-0.9381,  0.4338,  0.7820, -0.9454,  0.0441,  0.5518
-0.3480,  0.7190,  0.1170,  0.3805, -0.0943,  0.4724
-0.9813,  0.1535, -0.3771,  0.0345,  0.8328,  0.5438
-0.1471, -0.5052, -0.2574,  0.8637,  0.8737,  0.3042
-0.5454, -0.3712, -0.6505,  0.2142, -0.1728,  0.5783
 0.6327, -0.6297,  0.4038, -0.5193,  0.1484,  0.1153
-0.5424,  0.3282, -0.0055,  0.0380, -0.6506,  0.6613
 0.1414,  0.9935,  0.6337,  0.1887,  0.9520,  0.2540
-0.9351, -0.8128, -0.8693, -0.0965, -0.2491,  0.7353
 0.9507, -0.6640,  0.9456,  0.5349,  0.6485,  0.1059
-0.0462, -0.9737, -0.2940, -0.0159,  0.4602,  0.2606
-0.0627, -0.0852, -0.7247, -0.9782,  0.5166,  0.2977
 0.0478,  0.5098, -0.0723, -0.7504, -0.3750,  0.3335
 0.0090,  0.3477,  0.5403, -0.7393, -0.9542,  0.4415
-0.9748,  0.3449,  0.3736, -0.1015,  0.8296,  0.4358
 0.2887, -0.9895, -0.0311,  0.7186,  0.6608,  0.2057
 0.1570, -0.4518,  0.1211,  0.3435, -0.2951,  0.3244
 0.7117, -0.6099,  0.4946, -0.4208,  0.5476,  0.1096
-0.2929, -0.5726,  0.5346, -0.3827,  0.4665,  0.2465
 0.4889, -0.5572, -0.5718, -0.6021, -0.7150,  0.2163
-0.7782,  0.3491,  0.5996, -0.8389, -0.5366,  0.6516
-0.5847,  0.8347,  0.4226,  0.1078, -0.3910,  0.6134
 0.8469,  0.4121, -0.0439, -0.7476,  0.9521,  0.1571
-0.6803, -0.5948, -0.1376, -0.1916, -0.7065,  0.7156
 0.2878,  0.5086, -0.5785,  0.2019,  0.4979,  0.2980
 0.2764,  0.1943, -0.4090,  0.4632,  0.8906,  0.2960
-0.8877,  0.6705, -0.6155, -0.2098, -0.3998,  0.7107
-0.8398,  0.8093, -0.2597,  0.0614, -0.0118,  0.6502
-0.8476,  0.0158, -0.4769, -0.2859, -0.7839,  0.7715
 0.5751, -0.7868,  0.9714, -0.6457,  0.1448,  0.1175
 0.4802, -0.7001,  0.1022, -0.5668,  0.5184,  0.1090
 0.4458, -0.6469,  0.7239, -0.9604,  0.7205,  0.0779
 0.5175,  0.4339,  0.9747, -0.4438, -0.9924,  0.2879
 0.8678,  0.7158,  0.4577,  0.0334,  0.4139,  0.1678
 0.5406,  0.5012,  0.2264, -0.1963,  0.3946,  0.2088
-0.9938,  0.5498,  0.7928, -0.5214, -0.7585,  0.7687
 0.7661,  0.0863, -0.4266, -0.7233, -0.4197,  0.1466
 0.2277, -0.3517, -0.0853, -0.1118,  0.6563,  0.1767
 0.3499, -0.5570, -0.0655, -0.3705,  0.2537,  0.1632
 0.7547, -0.1046,  0.5689, -0.0861,  0.3125,  0.1257
 0.8186,  0.2110,  0.5335,  0.0094, -0.0039,  0.1391
 0.6858, -0.8644,  0.1465,  0.8855,  0.0357,  0.1845
-0.4967,  0.4015,  0.0805,  0.8977,  0.2487,  0.4663
 0.6760, -0.9841,  0.9787, -0.8446, -0.3557,  0.1509
-0.1203, -0.4885,  0.6054, -0.0443, -0.7313,  0.4854
 0.8557,  0.7919, -0.0169,  0.7134, -0.1628,  0.2002
 0.0115, -0.6209,  0.9300, -0.4116, -0.7931,  0.4052
-0.7114, -0.9718,  0.4319,  0.1290,  0.5892,  0.3661
 0.3915,  0.5557, -0.1870,  0.2955, -0.6404,  0.2954
-0.3564, -0.6548, -0.1827, -0.5172, -0.1862,  0.4622
 0.2392, -0.4959,  0.5857, -0.1341, -0.2850,  0.2470
-0.3394,  0.3947, -0.4627,  0.6166, -0.4094,  0.5325
 0.7107,  0.7768, -0.6312,  0.1707,  0.7964,  0.2757
-0.1078,  0.8437, -0.4420,  0.2177,  0.3649,  0.4028
-0.3139,  0.5595, -0.6505, -0.3161, -0.7108,  0.5546
 0.4335,  0.3986,  0.3770, -0.4932,  0.3847,  0.1810
-0.2562, -0.2894, -0.8847,  0.2633,  0.4146,  0.4036
 0.2272,  0.2966, -0.6601, -0.7011,  0.0284,  0.2778
-0.0743, -0.1421, -0.0054, -0.6770, -0.3151,  0.3597
-0.4762,  0.6891,  0.6007, -0.1467,  0.2140,  0.4266
-0.4061,  0.7193,  0.3432,  0.2669, -0.7505,  0.6147
-0.0588,  0.9731,  0.8966,  0.2902, -0.6966,  0.4955
-0.0627, -0.1439,  0.1985,  0.6999,  0.5022,  0.3077
 0.1587,  0.8494, -0.8705,  0.9827, -0.8940,  0.4263
-0.7850,  0.2473, -0.9040, -0.4308, -0.8779,  0.7199
 0.4070,  0.3369, -0.2428, -0.6236,  0.4940,  0.2215
-0.0242,  0.0513, -0.9430,  0.2885, -0.2987,  0.3947
-0.5416, -0.1322, -0.2351, -0.0604,  0.9590,  0.3683
 0.1055,  0.7783, -0.2901, -0.5090,  0.8220,  0.2984
-0.9129,  0.9015,  0.1128, -0.2473,  0.9901,  0.4776
-0.9378,  0.1424, -0.6391,  0.2619,  0.9618,  0.5368
 0.7498, -0.0963,  0.4169,  0.5549, -0.0103,  0.1614
-0.2612, -0.7156,  0.4538, -0.0460, -0.1022,  0.3717
 0.7720,  0.0552, -0.1818, -0.4622, -0.8560,  0.1685
-0.4177,  0.0070,  0.9319, -0.7812,  0.3461,  0.3052
-0.0001,  0.5542, -0.7128, -0.8336, -0.2016,  0.3803
 0.5356, -0.4194, -0.5662, -0.9666, -0.2027,  0.1776
-0.2378,  0.3187, -0.8582, -0.6948, -0.9668,  0.5474
-0.1947, -0.3579,  0.1158,  0.9869,  0.6690,  0.2992
 0.3992,  0.8365, -0.9205, -0.8593, -0.0520,  0.3154
-0.0209,  0.0793,  0.7905, -0.1067,  0.7541,  0.1864
-0.4928, -0.4524, -0.3433,  0.0951, -0.5597,  0.6261
-0.8118,  0.7404, -0.5263, -0.2280,  0.1431,  0.6349
 0.0516, -0.8480,  0.7483,  0.9023,  0.6250,  0.1959
-0.3212,  0.1093,  0.9488, -0.3766,  0.3376,  0.2735
-0.3481,  0.5490, -0.3484,  0.7797,  0.5034,  0.4379
-0.5785, -0.9170, -0.3563, -0.9258,  0.3877,  0.4121
 0.3407, -0.1391,  0.5356,  0.0720, -0.9203,  0.3458
-0.3287, -0.8954,  0.2102,  0.0241,  0.2349,  0.3247
-0.1353,  0.6954, -0.0919, -0.9692,  0.7461,  0.3338
 0.9036, -0.8982, -0.5299, -0.8733, -0.1567,  0.1187
 0.7277, -0.8368, -0.0538, -0.7489,  0.5458,  0.0830
 0.9049,  0.8878,  0.2279,  0.9470, -0.3103,  0.2194
 0.7957, -0.1308, -0.5284,  0.8817,  0.3684,  0.2172
 0.4647, -0.4931,  0.2010,  0.6292, -0.8918,  0.3371
-0.7390,  0.6849,  0.2367,  0.0626, -0.5034,  0.7039
-0.1567, -0.8711,  0.7940, -0.5932,  0.6525,  0.1710
 0.7635, -0.0265,  0.1969,  0.0545,  0.2496,  0.1445
 0.7675,  0.1354, -0.7698, -0.5460,  0.1920,  0.1728
-0.5211, -0.7372, -0.6763,  0.6897,  0.2044,  0.5217
 0.1913,  0.1980,  0.2314, -0.8816,  0.5006,  0.1998
 0.8964,  0.0694, -0.6149,  0.5059, -0.9854,  0.1825
 0.1767,  0.7104,  0.2093,  0.6452,  0.7590,  0.2832
-0.3580, -0.7541,  0.4426, -0.1193, -0.7465,  0.5657
-0.5996,  0.5766, -0.9758, -0.3933, -0.9572,  0.6800
 0.9950,  0.1641, -0.4132,  0.8579,  0.0142,  0.2003
-0.4717, -0.3894, -0.2567, -0.5111,  0.1691,  0.4266
 0.3917, -0.8561,  0.9422,  0.5061,  0.6123,  0.1212
-0.0366, -0.1087,  0.3449, -0.1025,  0.4086,  0.2475
 0.3633,  0.3943,  0.2372, -0.6980,  0.5216,  0.1925
-0.5325, -0.6466, -0.2178, -0.3589,  0.6310,  0.3568
 0.2271,  0.5200, -0.1447, -0.8011, -0.7699,  0.3128
 0.6415,  0.1993,  0.3777, -0.0178, -0.8237,  0.2181
-0.5298, -0.0768, -0.6028, -0.9490,  0.4588,  0.4356
 0.6870, -0.1431,  0.7294,  0.3141,  0.1621,  0.1632
-0.5985,  0.0591,  0.7889, -0.3900,  0.7419,  0.2945
 0.3661,  0.7984, -0.8486,  0.7572, -0.6183,  0.3449
 0.6995,  0.3342, -0.3113, -0.6972,  0.2707,  0.1712
 0.2565,  0.9126,  0.1798, -0.6043, -0.1413,  0.2893
-0.3265,  0.9839, -0.2395,  0.9854,  0.0376,  0.4770
 0.2690, -0.1722,  0.9818,  0.8599, -0.7015,  0.3954
-0.2102, -0.0768,  0.1219,  0.5607, -0.0256,  0.3949
 0.8216, -0.9555,  0.6422, -0.6231,  0.3715,  0.0801
-0.2896,  0.9484, -0.7545, -0.6249,  0.7789,  0.4370
-0.9985, -0.5448, -0.7092, -0.5931,  0.7926,  0.5402

Test data:

# synthetic_test_40.txt
#
 0.7462,  0.4006, -0.0590,  0.6543, -0.0083,  0.1935
 0.8495, -0.2260, -0.0142, -0.4911,  0.7699,  0.1078
-0.2335, -0.4049,  0.4352, -0.6183, -0.7636,  0.5088
 0.1810, -0.5142,  0.2465,  0.2767, -0.3449,  0.3136
-0.8650,  0.7611, -0.0801,  0.5277, -0.4922,  0.7140
-0.2358, -0.7466, -0.5115, -0.8413, -0.3943,  0.4533
 0.4834,  0.2300,  0.3448, -0.9832,  0.3568,  0.1360
-0.6502, -0.6300,  0.6885,  0.9652,  0.8275,  0.3046
-0.3053,  0.5604,  0.0929,  0.6329, -0.0325,  0.4756
-0.7995,  0.0740, -0.2680,  0.2086,  0.9176,  0.4565
-0.2144, -0.2141,  0.5813,  0.2902, -0.2122,  0.4119
-0.7278, -0.0987, -0.3312, -0.5641,  0.8515,  0.4438
 0.3793,  0.1976,  0.4933,  0.0839,  0.4011,  0.1905
-0.8568,  0.9573, -0.5272,  0.3212, -0.8207,  0.7415
-0.5785,  0.0056, -0.7901, -0.2223,  0.0760,  0.5551
 0.0735, -0.2188,  0.3925,  0.3570,  0.3746,  0.2191
 0.1230, -0.2838,  0.2262,  0.8715,  0.1938,  0.2878
 0.4792, -0.9248,  0.5295,  0.0366, -0.9894,  0.3149
-0.4456,  0.0697,  0.5359, -0.8938,  0.0981,  0.3879
 0.8629, -0.8505, -0.4464,  0.8385,  0.5300,  0.1769
 0.1995,  0.6659,  0.7921,  0.9454,  0.9970,  0.2330
-0.0249, -0.3066, -0.2927, -0.4923,  0.8220,  0.2437
 0.4513, -0.9481, -0.0770, -0.4374, -0.9421,  0.2879
-0.3405,  0.5931, -0.3507, -0.3842,  0.8562,  0.3987
 0.9538,  0.0471,  0.9039,  0.7760,  0.0361,  0.1706
-0.0887,  0.2104,  0.9808,  0.5478, -0.3314,  0.4128
-0.8220, -0.6302,  0.0537, -0.1658,  0.6013,  0.4306
-0.4123, -0.2880,  0.9074, -0.0461, -0.4435,  0.5144
 0.0060,  0.2867, -0.7775,  0.5161,  0.7039,  0.3599
-0.7968, -0.5484,  0.9426, -0.4308,  0.8148,  0.2979
 0.7811,  0.8450, -0.6877,  0.7594,  0.2640,  0.2362
-0.6802, -0.1113, -0.8325, -0.6694, -0.6056,  0.6544
 0.3821,  0.1476,  0.7466, -0.5107,  0.2592,  0.1648
 0.7265,  0.9683, -0.9803, -0.4943, -0.5523,  0.2454
-0.9049, -0.9797, -0.0196, -0.9090, -0.4433,  0.6447
-0.4607,  0.1811, -0.2389,  0.4050, -0.0078,  0.5229
 0.2664, -0.2932, -0.4259, -0.7336,  0.8742,  0.1834
-0.4507,  0.1029, -0.6294, -0.1158, -0.6294,  0.6081
 0.8948, -0.0124,  0.9278,  0.2899, -0.0314,  0.1534
-0.1323, -0.8813, -0.0146, -0.0697,  0.6135,  0.2386

Decision Tree Regression From-Scratch C#, No Pointers, No Recursion, No Stack-Build

Decision tree regression is a machine learning technique that incorporates a set of if-then rules in a tree data structure to predict a single numeric value. For example, a decision tree regression model prediction might be, “If employee age is greater than 39.0 and age is less than or equal to 42.5 and years-experience is less than or equal to 8.0 and height is greater than 64.0 then bank account balance is $754.14.”

This blog post presents decision tree regression, implemented from scratch with C#, using List storage for tree nodes (no pointers/references), a sequential iterative algorithm to build the tree (no recursion, no stack), weighted variance minimization for the node split function, and storage of the associated row indices in each node. Whew!

Perhaps one of the main points is that decision tree regression has a huge number of possible approaches, and each approach has many implementation options.

The demo data looks like:

-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
. . .

The first five values on each line are the x predictors. The last value on each line is the target y variable to predict. The data is synthetic, generated by a 5-10-1 neural network with random weights and biases, so the data has an underlying structure that can be predicted. There are 200 training items and 40 test items.

The output of the demo program is:

Begin decision tree regression (no stack construction)

Loading synthetic train (200) and test (40) data
Done

First three train X:
 -0.1660  0.4406 -0.9998 -0.3953 -0.7065
  0.0776 -0.1616  0.3704 -0.5911  0.7562
 -0.9452  0.3409 -0.1654  0.1174 -0.7192

First three train y:
  0.4840
  0.1568
  0.8054

Setting maxDepth = 3
Setting minSamples = 2
Setting minLeaf = 18
Using default numSplitCols = -1

Creating and training tree
Done

Tree:
ID 0   | idx   0 | v  -0.2102 | L   1 | R   2 | y   0.3493 | leaf F
ID 1   | idx   4 | v   0.1431 | L   3 | R   4 | y   0.5345 | leaf F
ID 2   | idx   0 | v   0.3915 | L   5 | R   6 | y   0.2382 | leaf F
ID 3   | idx   0 | v  -0.6553 | L   7 | R   8 | y   0.6358 | leaf F
ID 4   | idx  -1 | v   0.0000 | L   9 | R  10 | y   0.4123 | leaf T
ID 5   | idx   4 | v  -0.2987 | L  11 | R  12 | y   0.3032 | leaf F
ID 6   | idx   2 | v   0.3777 | L  13 | R  14 | y   0.1701 | leaf F
ID 7   | idx  -1 | v   0.0000 | L  15 | R  16 | y   0.6952 | leaf T
ID 8   | idx  -1 | v   0.0000 | L  17 | R  18 | y   0.5598 | leaf T
ID 11  | idx  -1 | v   0.0000 | L  23 | R  24 | y   0.4101 | leaf T
ID 12  | idx  -1 | v   0.0000 | L  25 | R  26 | y   0.2613 | leaf T
ID 13  | idx  -1 | v   0.0000 | L  27 | R  28 | y   0.1882 | leaf T
ID 14  | idx  -1 | v   0.0000 | L  29 | R  30 | y   0.1381 | leaf T

Evaluating model
Accuracy train (within 0.10) = 0.3750
Accuracy test (within 0.10) = 0.4750

MSE train = 0.0048
MSE test = 0.0054

Predicting for trainX[0] =
  -0.1660   0.4406  -0.9998  -0.3953  -0.7065
Predicted y = 0.4101

IF
column 0  >   -0.2102 AND
column 0  <=   0.3915 AND
column 4  <=  -0.2987 AND
THEN
predicted = 0.4101

End demo

There is a lot going on in the output. If you compare the output with the visual representation of the tree, most of the ideas of decision tree regression should become clear:



The demo decision tree has maxDepth = 3. This creates a full tree with 15 nodes (some of which could be empty). In general, if a tree has maxDepth = n, then a full tree has 2^(n+1) - 1 nodes. The tree nodes could be stored as C# program-defined Node objects connected via pointers/references. Instead, the demo stores the tree in a List collection with size/count = 15, where each item in the List is a Node object.

With this design, if the root node is at index [0], then the left child is at [1] and the right child is at [2]. In general, if a node is at index [i], then the left child is at index [2*i+1] and the right child is at index [2*i+2].
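The node-count and child-index arithmetic can be sketched as a few helper methods. This is a minimal sketch, not code from the demo; the helper names Left, Right, and Parent are mine:

```csharp
using System;

// Hypothetical helpers -- the demo computes these
// expressions inline rather than via named methods.
class TreeIndexDemo
{
  public static int Left(int i) { return 2 * i + 1; }
  public static int Right(int i) { return 2 * i + 2; }
  public static int Parent(int i) { return (i - 1) / 2; }

  static void Main()
  {
    int maxDepth = 3;  // as in the demo
    int numNodes = (int)Math.Pow(2, maxDepth + 1) - 1;
    Console.WriteLine("numNodes = " + numNodes);  // 15
    Console.WriteLine("children of [1]: " +
      Left(1) + " " + Right(1));  // 3 4
    Console.WriteLine("parent of [4]: " + Parent(4));  // 1
  }
}
```

Notice that a node's children always have larger indices than the node itself, which is what makes a simple sequential scan over the List a valid build order.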

The tree splitting part of constructing a decision tree regression system is perhaps best understood by looking at a concrete example. The root node of the demo decision tree is associated with all 200 rows of the training data. The splitting process selects some of the 200 rows to be assigned to the left child node, and the remaining rows to be assigned to the right child node.

For simplicity, suppose that instead of the demo training data with 200 rows and 5 columns of predictors, a tree node is associated with just 5 rows of data, and the data has just 3 columns of predictors:

     X                    y
[0] 0.99  0.22  0.77     3.0
[1] 0.55  0.44  0.00     9.0
[2] 0.88  0.66  0.88     7.0
[3] 0.11  0.33  0.22     1.0
[4] 0.55  0.88  0.33     5.0
     [0]   [1]   [2]

The split algorithm will select some of the five rows to go to the left child and the remaining rows to go to the right child. The idea is to select the two sets of rows so that their associated y values are close together. The demo implementation does this by finding the split that minimizes the weighted variance of the target y values. (There are several other split criteria.)

The splitting algorithm examines every possible x value in every column. For each possible candidate split value (also called the threshold), the weighted variance of the y values of the resulting split is computed. For example, for x = 0.33 in column [1], the rows where the values in column [1] are less than or equal to 0.33 are rows [3] and [0] with associated y values (1.0, 3.0). The other rows are [1], [2], [4] with associated y values (9.0, 7.0, 5.0).

The variance of a set of y values is the average of the squared differences between the values and their mean (average).

For this proposed split, the mean of y values (1.0, 3.0) is (1.0 + 3.0) / 2 = 2.0, and so the variance is ((1.0 - 2.0)^2 + (3.0 - 2.0)^2) / 2 = 1.00.

The mean of y values (9.0, 7.0, 5.0) is (9.0 + 7.0 + 5.0) / 3 = 7.0, and the variance is ((9.0 - 7.0)^2 + (7.0 - 7.0)^2 + (5.0 - 7.0)^2) / 3 = 2.67.

Because the proposed split has different numbers of rows (two for the left child, three for the right child), a weighted variance is used: ((2 * 1.00) + (3 * 2.67)) / 5 = 2.00.

This process is repeated for every x value in every column. The x value and its column that generate the smallest weighted variance define the split.
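The weighted variance calculation for the worked example above can be verified with a short sketch. The Variance() helper name is mine, and the sketch checks just the single candidate threshold 0.33 in column [1], whereas the demo examines every x value in every column:

```csharp
using System;
using System.Collections.Generic;

class SplitDemo
{
  // population variance: average squared deviation from mean
  static double Variance(List<double> ys)
  {
    double mean = 0.0;
    foreach (double y in ys) mean += y;
    mean /= ys.Count;
    double sum = 0.0;
    foreach (double y in ys) sum += (y - mean) * (y - mean);
    return sum / ys.Count;
  }

  static void Main()
  {
    // column [1] of the 5-row example, and the y targets
    double[] col1 = { 0.22, 0.44, 0.66, 0.33, 0.88 };
    double[] y = { 3.0, 9.0, 7.0, 1.0, 5.0 };
    double thresh = 0.33;  // candidate split value

    List<double> leftY = new List<double>();
    List<double> rightY = new List<double>();
    for (int i = 0; i < col1.Length; ++i)
      if (col1[i] <= thresh) leftY.Add(y[i]);
      else rightY.Add(y[i]);

    // weight each child's variance by its row count
    double wv = (leftY.Count * Variance(leftY) +
      rightY.Count * Variance(rightY)) / y.Length;
    Console.WriteLine("weighted variance = " +
      wv.ToString("F2"));  // 2.00
  }
}
```

The printed result, 2.00, matches the hand calculation above.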

The demo does not use recursion to build the tree. Recursion is a common approach in university computer science data structures classes, but it is not a good fit for a production environment. Another common approach is to use a stack, pushing and popping nodes as they are created. Instead, the demo uses sequential iteration, constructing the tree nodes in index order. In high-level pseudo-code:

initialize all nodes in tree list to dummy values
initialize the root node (all rows of data)

for-each node in tree-list
  if curr node has no rows
    continue // parent could not create node

  if node too deep in tree to have children OR
  node doesn't have enough rows to split
    curr node is a leaf node
    continue

  get split info (col, threshold) of curr node

  if proposed split doesn't have enough rows
    curr node is a leaf node
    continue

  use split info to construct left child of curr node
  use split info to construct right child of curr node
end-for

The build-tree algorithm is short but very tricky. Implementation took me many hours of coding and debugging.
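The pseudo-code above can be sketched as a runnable toy, under stated assumptions: a fixed threshold stands in for the demo's weighted-variance split search, and each node's prediction is the mean of its y values. All names here are mine, not the demo's:

```csharp
using System;
using System.Collections.Generic;

// Simplified sketch of the sequential build pattern
// (no recursion, no stack, no pointers).
class BuildDemo
{
  class Node
  {
    public double thresh; public bool isLeaf;
    public double value;
    public List<int> rows = new List<int>();
  }

  static void Main()
  {
    double[] x = { 0.99, 0.55, 0.88, 0.11, 0.55 };
    double[] y = { 3.0, 9.0, 7.0, 1.0, 5.0 };

    int maxDepth = 2;
    int numNodes = (int)Math.Pow(2, maxDepth + 1) - 1; // 7
    List<Node> tree = new List<Node>();
    for (int i = 0; i < numNodes; ++i) tree.Add(new Node());
    for (int i = 0; i < x.Length; ++i) tree[0].rows.Add(i);

    for (int i = 0; i < numNodes; ++i) // sequential scan
    {
      Node curr = tree[i];
      if (curr.rows.Count == 0) continue; // never populated

      double sum = 0.0;
      foreach (int r in curr.rows) sum += y[r];
      curr.value = sum / curr.rows.Count; // node prediction

      int depth = 0; // depth computed from the node's index
      for (int p = i; p > 0; p = (p - 1) / 2) ++depth;

      if (depth >= maxDepth || curr.rows.Count < 2)
      { curr.isLeaf = true; continue; }

      curr.thresh = 0.55; // toy; demo searches every x value
      foreach (int r in curr.rows)
        if (x[r] <= curr.thresh) tree[2 * i + 1].rows.Add(r);
        else tree[2 * i + 2].rows.Add(r);
    }

    Console.WriteLine("root value = " +
      tree[0].value.ToString("F2"));
  }
}
```

The key point is that the for loop visits nodes strictly in index order; a node's children always have larger indices, so they are guaranteed to be processed later in the same loop.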

Decision trees are rarely used by themselves because they usually overfit the training data, and predict poorly on new, previously unseen data. Multiple decision trees are usually combined into an ensemble technique such as bagging trees (“bootstrap aggregation”), random forest regression, adaptive boosting regression, and gradient boosting regression.



Computer science decision trees are generally well-behaved. But comic book trees and plants are not always so nice.

“The House of Mystery” was a popular comic book title for many years. The first issue appeared in December 1950. For the next 10 years, the stories were about the supernatural and horror.

From 1960 to 1968, most of the stories were about aliens or little known super heroes. Starting in June 1968, the stories switched back to horror.

Left: Issue #245, September 1976

Center: Issue #251, March 1977.

Right: Issue #301, February 1982.


Demo code. Replace “lt” (less than), “gt”, “lte”, “gte” with Boolean operator symbols (my blog editor chokes on symbols).

using System;
using System.IO;
using System.Collections.Generic;

// full List storage with indexes (no pointers)
// iteration-based construction (no stack, no recursion)
// the Nodes store associated row idxs (interpretability)

namespace DecisionTreeRegressionNoStack
{
  internal class DecisionTreeRegressionProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin decision tree" +
        " regression (no stack construction) ");

      // 1. load data
      Console.WriteLine("\nLoading synthetic train" +
        " (200) and test (40) data");
      string trainFile = "..\\..\\..\\Data\\" +
        "synthetic_train_200.txt";
      int[] colsX = new int[] { 0, 1, 2, 3, 4 };
      double[][] trainX =
        MatLoad(trainFile, colsX, ',', "#");
      double[] trainY =
        MatToVec(MatLoad(trainFile,
        new int[] { 5 }, ',', "#"));

      string testFile = "..\\..\\..\\Data\\" +
        "synthetic_test_40.txt";
      double[][] testX =
        MatLoad(testFile, colsX, ',', "#");
      double[] testY =
        MatToVec(MatLoad(testFile,
        new int[] { 5 }, ',', "#"));
      Console.WriteLine("Done ");

      Console.WriteLine("\nFirst three train X: ");
      for (int i = 0; i "lt" 3; ++i)
        VecShow(trainX[i], 4, 8);

      Console.WriteLine("\nFirst three train y: ");
      for (int i = 0; i "lt" 3; ++i)
        Console.WriteLine(trainY[i].ToString("F4").
          PadLeft(8));

      // 2. create and train/build tree
      //int maxDepth = 5;
      //int minSamples = 2; // min rows to try split
      //int minLeaf = 1;    // min rows to accept after split
      //int numSplitCols = -1;  // all columns

      int maxDepth = 3;
      int minSamples = 2;
      int minLeaf = 18;
      int numSplitCols = -1;  // all columns

      Console.WriteLine("\nSetting maxDepth = " +
        maxDepth);
      Console.WriteLine("Setting minSamples = " +
        minSamples);
      Console.WriteLine("Setting minLeaf = " +
        minLeaf);
      Console.WriteLine("Using default numSplitCols = -1 ");

      Console.WriteLine("\nCreating and training tree ");
      DecisionTreeRegressor dtr =
        new DecisionTreeRegressor(maxDepth, minSamples,
        minLeaf, numSplitCols, seed: 0);
      dtr.Train(trainX, trainY);
      Console.WriteLine("Done ");

      Console.WriteLine("\nTree: ");
      dtr.Display(showRows: false);
     
      // 3. evaluate model
      Console.WriteLine("\nEvaluating model ");
      double accTrain = dtr.Accuracy(trainX, trainY, 0.10);
      Console.WriteLine("Accuracy train (within 0.10) = " +
        accTrain.ToString("F4"));
      double accTest = dtr.Accuracy(testX, testY, 0.10);
      Console.WriteLine("Accuracy test (within 0.10) = " +
        accTest.ToString("F4"));

      double mseTrain = dtr.MSE(trainX, trainY);
      Console.WriteLine("\nMSE train = " +
        mseTrain.ToString("F4"));
      double mseTest = dtr.MSE(testX, testY);
      Console.WriteLine("MSE test = " +
        mseTest.ToString("F4"));

      // 4. use model
      Console.WriteLine("\nPredicting for trainX[0] = ");
      double[] x = trainX[0];
      VecShow(x, 4, 9);
      double predY = dtr.Predict(x);
      Console.WriteLine("Predicted y = " +
        predY.ToString("F4"));

      dtr.Explain(x);

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();

    } // Main()

    // ------------------------------------------------------
    // helpers for Main():
    //   MatLoad(), MatToVec(), VecShow().
    // ------------------------------------------------------

    static double[][] MatLoad(string fn, int[] usecols,
      char sep, string comment)
    {
      List"lt"double[]"gt" result =
        new List"lt"double[]"gt"();
      string line = "";
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        string[] tokens = line.Split(sep);
        List"lt"double"gt" lst = new List"lt"double"gt"();
        for (int j = 0; j "lt" usecols.Length; ++j)
          lst.Add(double.Parse(tokens[usecols[j]]));
        double[] row = lst.ToArray();
        result.Add(row);
      }
      sr.Close(); ifs.Close();
      return result.ToArray();
    }

    static double[] MatToVec(double[][] mat)
    {
      int nRows = mat.Length;
      int nCols = mat[0].Length;
      double[] result = new double[nRows * nCols];
      int k = 0;
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[k++] = mat[i][j];
      return result;
    }

    static void VecShow(double[] vec, int dec, int wid)
    {
      for (int i = 0; i "lt" vec.Length; ++i)
        Console.Write(vec[i].ToString("F" + dec).
          PadLeft(wid));
      Console.WriteLine("");
    }

  } // class Program

  // ========================================================

  public class DecisionTreeRegressor
  {
    public int maxDepth;
    public int minSamples;  // aka min_samples_split
    public int minLeaf;  // min number of values in a leaf
    public int numSplitCols; // mostly for random forest
    public List<Node> tree = new List<Node>();
    public Random rnd;  // order in which cols are searched

    public double[][] trainX;  // store data by ref
    public double[] trainY;

    // ------------------------------------------------------

    public class Node
    {
      public int id;
      public int colIdx;  // aka featureIdx
      public double thresh;
      public int left;  // index into List
      public int right;
      public double value;
      public bool isLeaf;
      public List<int> rows;  // assoc rows in train data

      public Node(int id, int colIdx, double thresh,
        int left, int right, double value, bool isLeaf,
        List<int> rows)
      {
        this.id = id;
        this.colIdx = colIdx;
        this.thresh = thresh;
        this.left = left;
        this.right = right;
        this.value = value;
        this.isLeaf = isLeaf;

        this.rows = rows;
      }
    } // class Node

    // --------------------------------------------

    public DecisionTreeRegressor(int maxDepth = 2,
      int minSamples = 2, int minLeaf = 1,
      int numSplitCols = -1, int seed = 0)
    {
      // if maxDepth = 0, tree has just a root node
      // if maxDepth = 1, at most 3 nodes (root, l, r)
      // if maxDepth = n, at most 2^(n+1) - 1 nodes
      this.maxDepth = maxDepth;
      this.minSamples = minSamples;
      this.minLeaf = minLeaf;
      this.numSplitCols = numSplitCols;  // for ran. forest

      // create full tree List with dummy nodes
      int numNodes = (int)Math.Pow(2, (maxDepth + 1)) - 1;
      for (int i = 0; i < numNodes; ++i)
      {
        //this.tree.Add(null); // OK, but let's avoid ptrs
        Node n = new Node(i, -1, 0.0, -1, -1, 0.0,
          false, new List<int>());
        this.tree.Add(n); // dummy node
      }
      this.rnd = new Random(seed);
    }

    // ------------------------------------------------------
    // public: Train(), Predict(), Explain(), Accuracy(),
    //   MSE(), Display().
    // helpers: MakeTree(), BestSplit(), TreeTargetMean(),
    //   TreeTargetVariance().
    // ------------------------------------------------------

    public void Train(double[][] trainX, double[] trainY)
    {
      this.trainX = trainX;  // store by reference
      this.trainY = trainY;
      this.MakeTree();
    }

    // ------------------------------------------------------

    public double Predict(double[] x)
    {
      int p = 0;
      Node currNode = this.tree[p];
      while (currNode.isLeaf == false && p < this.tree.Count)
      {
        if (x[currNode.colIdx] <= currNode.thresh)
          p = currNode.left;
        else
          p = currNode.right;
        currNode = this.tree[p];
      }
      return this.tree[p].value;
    }

    // ------------------------------------------------------

    public void Explain(double[] x)
    {
      int p = 0;
      Console.WriteLine("\nIF ");
      Node currNode = this.tree[p];
      while (currNode.isLeaf == false)
      {
        Console.Write("column ");
        Console.Write(currNode.colIdx + " ");
        if (x[currNode.colIdx] <= currNode.thresh)
        {
          Console.Write(" <= ");
          Console.WriteLine(currNode.thresh.
            ToString("F4").PadLeft(8) + " AND ");
          p = currNode.left;
          currNode = this.tree[p];
        }
        else
        {
          Console.Write(" >  ");
          Console.WriteLine(currNode.thresh.
            ToString("F4").PadLeft(8) + " AND ");
          p = currNode.right;
          currNode = this.tree[p];
        }
      }
      Console.WriteLine("THEN \npredicted = " +
        currNode.value.ToString("F4"));
    }

    // ------------------------------------------------------

    public double Accuracy(double[][] dataX, double[] dataY,
      double pctClose)
    {
      int numCorrect = 0; int numWrong = 0;
      for (int i = 0; i < dataX.Length; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        if (Math.Abs(predY - actualY) <
          Math.Abs(pctClose * actualY))
          ++numCorrect;
        else
          ++numWrong;
      }
      return (numCorrect * 1.0) / (numWrong + numCorrect);
    }

    // ------------------------------------------------------

    public double MSE(double[][] dataX,
      double[] dataY)
    {
      // standard machine learning MSE
      int n = dataX.Length;
      double sum = 0.0;
      for (int i = 0; i < n; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        sum += (actualY - predY) * (actualY - predY);
      }
      return sum / n;
    }

    // ------------------------------------------------------

    public void Display(bool showRows)
    {
      for (int i = 0; i < this.tree.Count; ++i)
      {
        Node n = this.tree[i];
        // check for empty nodes
        if (n == null) continue; // should never happen
        // this condition identifies an empty node
        if (n.colIdx == -1 && n.isLeaf == false) continue;

        string s1 = "ID " +
          n.id.ToString().PadRight(3) + " | ";
        string s2 = "idx " +
          n.colIdx.ToString().PadLeft(3) + " | ";
        string s3 = "v " +
          n.thresh.ToString("F4").PadLeft(8) + " | ";
        string s4 = "L " +
          n.left.ToString().PadLeft(3) + " | ";
        string s5 = "R " +
          n.right.ToString().PadLeft(3) + " | ";
        string s6 = "y " +
          n.value.ToString("F4").PadLeft(8) + " | ";
        string s7 = "leaf " + (n.isLeaf == true ? "T" : "F");
          //n.isLeaf.ToString().PadLeft(6);

        string s8;
        if (showRows == true)
        {
          s8 = "\nRows:";
          for (int j = 0; j < n.rows.Count; ++j)
          {
            if (j > 0 && j % 20 == 0)
              s8 += "\n";
            s8 += n.rows[j].ToString().PadLeft(4) + " ";
          }
        }
        else
        {
          s8 = "";
        }
        Console.WriteLine(s1 + s2 + s3 + s4 +
          s5 + s6 + s7 + s8);
        if (showRows == true) Console.WriteLine("\n");
      }
    } // Display()

    // ------------------------------------------------------

    // helpers: MakeTree(), BestSplit(),
    // TreeTargetMean(), TreeTargetVariance()

    private void MakeTree()
    {
      // no recursion, no pointers, List storage, no stack
      if (this.numSplitCols == -1) // use all cols
        this.numSplitCols = this.trainX[0].Length;

      // all nodes have been initialized to
      // new Node(i, -1, 0.0, -1, -1, 0.0, false, [])
      // id is set, but colIdx, thresh, leftIdx,
      // rightIdx, predY, isLeaf, rows must be computed

      // prepare root node
      List<int> allRows = new List<int>();
      for (int i = 0; i < this.trainX.Length; ++i)
        allRows.Add(i);
      double grandMean = this.TreeTargetMean(allRows);

      Node root = new Node(0, -1, 0.0, 1, 2,
        grandMean, false, allRows);
      this.tree[0] = root; // still need colIdx, thresh, val

      for (int i = 0; i < this.tree.Count; ++i)
      {
        Node currNode = this.tree[i];
        List<int> currRows = currNode.rows;

        // this can happen using sequential traversal
        // parent was unable to create node; it is empty
        if (currRows.Count == 0) { continue; }

        // curr node too deep to have children OR
        // curr node not enough rows to split
        if (currNode.id >= (int)Math.Pow(2,
          (this.maxDepth)) - 1 ||
          currRows.Count < this.minSamples)
        {
          currNode.value = this.TreeTargetMean(currRows);
          currNode.isLeaf = true;
          continue;
        }

        double[] splitInfo = this.BestSplit(currRows);
        int colIdx = (int)splitInfo[0];
        double thresh = splitInfo[1];

        if (colIdx == -1)  // unable to split
        {
          currNode.value = this.TreeTargetMean(currRows);
          currNode.isLeaf = true;
          continue;
        }

        // got good split so at internal, non-leaf node
        currNode.colIdx = colIdx;
        currNode.thresh = thresh;

        // make the data splits for children
        // find left rows and right rows
        List<int> leftIdxs = new List<int>();
        List<int> rightIdxs = new List<int>();

        for (int k = 0; k < currRows.Count; ++k)
        {
          int r = currRows[k];
          if (this.trainX[r][colIdx] <= thresh)
            leftIdxs.Add(r);
          else
            rightIdxs.Add(r);
        }

        int leftID = currNode.id * 2 + 1;
        Node currNodeLeft = new Node(leftID, -1, 0.0,
          2 * leftID + 1, 2 * leftID + 2,
          this.TreeTargetMean(leftIdxs), false, leftIdxs);
        this.tree[leftID] = currNodeLeft;

        int rightID = currNode.id * 2 + 2;
        Node currNodeRight = new Node(rightID, -1, 0.0,
          2 * rightID + 1, 2 * rightID + 2,
          this.TreeTargetMean(rightIdxs), false, rightIdxs);
        this.tree[rightID] = currNodeRight;
      } // i
      return;
    }

    // ------------------------------------------------------

    private double[] BestSplit(List<int> rows)
    {
      // implicit params: numSplitCols, minLeaf
      // result[0] = best col idx (as double)
      // result[1] = best split value
      rows.Sort();

      int bestColIdx = -1;  // indicates bad split
      double bestThresh = 0.0;
      double bestVar = double.MaxValue;  // smaller is better

      int nRows = rows.Count;  // or dataY.Length
      int nCols = this.trainX[0].Length;

      if (nRows == 0)
      {
        throw new Exception("empty data in BestSplit()");
      }

      // process cols in scrambled order
      int[] colIndices = new int[nCols];
      for (int k = 0; k < nCols; ++k)
        colIndices[k] = k;
      // shuffle, inline Fisher-Yates
      int n = colIndices.Length;
      for (int i = 0; i < n; ++i)
      {
        int ri = rnd.Next(i, n);  // be careful
        int tmp = colIndices[i];
        colIndices[i] = colIndices[ri];
        colIndices[ri] = tmp;
      }

      // numSplitCols is usually all columns (-1)
      for (int j = 0; j < this.numSplitCols; ++j)
      {
        int colIdx = colIndices[j];
        HashSet<double> examineds =
          new HashSet<double>();

        for (int i = 0; i < nRows; ++i) // each row
        {
          // if curr thresh been seen, skip it
          double thresh = this.trainX[rows[i]][colIdx];
          if (examineds.Contains(thresh)) continue;
          examineds.Add(thresh);

          // get row idxs where x is <= or > thresh
          List<int> leftIdxs = new List<int>();
          List<int> rightIdxs = new List<int>();
          for (int k = 0; k < nRows; ++k)
          {
            if (this.trainX[rows[k]][colIdx] <= thresh)
              leftIdxs.Add(rows[k]);
            else
              rightIdxs.Add(rows[k]);
          }

          // check if proposed split has too few values
          if (leftIdxs.Count < this.minLeaf ||
            rightIdxs.Count < this.minLeaf)
            continue;  // to next row

          double leftVar =
            this.TreeTargetVariance(leftIdxs);
          double rightVar =
            this.TreeTargetVariance(rightIdxs);
          double weightedVar = (leftIdxs.Count * leftVar +
            rightIdxs.Count * rightVar) / nRows;

          if (weightedVar < bestVar)
          {
            // if this never happens, bestColIdx remains -1
            // which means a bad split. used in MakeTree()
            bestColIdx = colIdx;
            bestThresh = thresh;
            bestVar = weightedVar;
          }

        } // each row
      } // j each col

      double[] result = new double[2];  // out params ugly
      result[0] = 1.0 * bestColIdx;
      result[1] = bestThresh;
      return result;

    } // BestSplit()

    // ------------------------------------------------------

    private double TreeTargetMean(List<int> rows)
    {
      // mean of rows items in trainY: for node prediction
      double sum = 0.0;
      for (int i = 0; i < rows.Count; ++i)
      {
        int r = rows[i];
        sum += this.trainY[r];
      }
      return sum / rows.Count;
    }

    // ------------------------------------------------------

    private double TreeTargetVariance(List<int> rows)
    {
      double mean = this.TreeTargetMean(rows);
      double sum = 0.0;
      for (int i = 0; i < rows.Count; ++i)
      {
        int r = rows[i];
        sum += (this.trainY[r] - mean) *
          (this.trainY[r] - mean);
      }
      return sum / rows.Count;
    }

    // ------------------------------------------------------

  } // class DecisionTreeRegressor

} // ns

Training data:

# synthetic_train_200.txt
#
-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
-0.8299, -0.9219, -0.6603,  0.7563, -0.8033,  0.7955
 0.0663,  0.3838, -0.3690,  0.3730,  0.6693,  0.3206
-0.9634,  0.5003,  0.9777,  0.4963, -0.4391,  0.7377
-0.1042,  0.8172, -0.4128, -0.4244, -0.7399,  0.4801
-0.9613,  0.3577, -0.5767, -0.4689, -0.0169,  0.6861
-0.7065,  0.1786,  0.3995, -0.7953, -0.1719,  0.5569
 0.3888, -0.1716, -0.9001,  0.0718,  0.3276,  0.2500
 0.1731,  0.8068, -0.7251, -0.7214,  0.6148,  0.3297
-0.2046, -0.6693,  0.8550, -0.3045,  0.5016,  0.2129
 0.2473,  0.5019, -0.3022, -0.4601,  0.7918,  0.2613
-0.1438,  0.9297,  0.3269,  0.2434, -0.7705,  0.5171
 0.1568, -0.1837, -0.5259,  0.8068,  0.1474,  0.3307
-0.9943,  0.2343, -0.3467,  0.0541,  0.7719,  0.5581
 0.2467, -0.9684,  0.8589,  0.3818,  0.9946,  0.1092
-0.6553, -0.7257,  0.8652,  0.3936, -0.8680,  0.7018
 0.8460,  0.4230, -0.7515, -0.9602, -0.9476,  0.1996
-0.9434, -0.5076,  0.7201,  0.0777,  0.1056,  0.5664
 0.9392,  0.1221, -0.9627,  0.6013, -0.5341,  0.1533
 0.6142, -0.2243,  0.7271,  0.4942,  0.1125,  0.1661
 0.4260,  0.1194, -0.9749, -0.8561,  0.9346,  0.2230
 0.1362, -0.5934, -0.4953,  0.4877, -0.6091,  0.3810
 0.6937, -0.5203, -0.0125,  0.2399,  0.6580,  0.1460
-0.6864, -0.9628, -0.8600, -0.0273,  0.2127,  0.5387
 0.9772,  0.1595, -0.2397,  0.1019,  0.4907,  0.1611
 0.3385, -0.4702, -0.8673, -0.2598,  0.2594,  0.2270
-0.8669, -0.4794,  0.6095, -0.6131,  0.2789,  0.4700
 0.0493,  0.8496, -0.4734, -0.8681,  0.4701,  0.3516
 0.8639, -0.9721, -0.5313,  0.2336,  0.8980,  0.1412
 0.9004,  0.1133,  0.8312,  0.2831, -0.2200,  0.1782
 0.0991,  0.8524,  0.8375, -0.2102,  0.9265,  0.2150
-0.6521, -0.7473, -0.7298,  0.0113, -0.9570,  0.7422
 0.6190, -0.3105,  0.8802,  0.1640,  0.7577,  0.1056
 0.6895,  0.8108, -0.0802,  0.0927,  0.5972,  0.2214
 0.1982, -0.9689,  0.1870, -0.1326,  0.6147,  0.1310
-0.3695,  0.7858,  0.1557, -0.6320,  0.5759,  0.3773
-0.1596,  0.3581,  0.8372, -0.9992,  0.9535,  0.2071
-0.2468,  0.9476,  0.2094,  0.6577,  0.1494,  0.4132
 0.1737,  0.5000,  0.7166,  0.5102,  0.3961,  0.2611
 0.7290, -0.3546,  0.3416, -0.0983, -0.2358,  0.1332
-0.3652,  0.2438, -0.1395,  0.9476,  0.3556,  0.4170
-0.6029, -0.1466, -0.3133,  0.5953,  0.7600,  0.4334
-0.4596, -0.4953,  0.7098,  0.0554,  0.6043,  0.2775
 0.1450,  0.4663,  0.0380,  0.5418,  0.1377,  0.2931
-0.8636, -0.2442, -0.8407,  0.9656, -0.6368,  0.7429
 0.6237,  0.7499,  0.3768,  0.1390, -0.6781,  0.2185
-0.5499,  0.1850, -0.3755,  0.8326,  0.8193,  0.4399
-0.4858, -0.7782, -0.6141, -0.0008,  0.4572,  0.4197
 0.7033, -0.1683,  0.2334, -0.5327, -0.7961,  0.1776
 0.0317, -0.0457, -0.6947,  0.2436,  0.0880,  0.3345
 0.5031, -0.5559,  0.0387,  0.5706, -0.9553,  0.3107
-0.3513,  0.7458,  0.6894,  0.0769,  0.7332,  0.3170
 0.2205,  0.5992, -0.9309,  0.5405,  0.4635,  0.3532
-0.4806, -0.4859,  0.2646, -0.3094,  0.5932,  0.3202
 0.9809, -0.3995, -0.7140,  0.8026,  0.0831,  0.1600
 0.9495,  0.2732,  0.9878,  0.0921,  0.0529,  0.1289
-0.9476, -0.6792,  0.4913, -0.9392, -0.2669,  0.5966
 0.7247,  0.3854,  0.3819, -0.6227, -0.1162,  0.1550
-0.5922, -0.5045, -0.4757,  0.5003, -0.0860,  0.5863
-0.8861,  0.0170, -0.5761,  0.5972, -0.4053,  0.7301
 0.6877, -0.2380,  0.4997,  0.0223,  0.0819,  0.1404
 0.9189,  0.6079, -0.9354,  0.4188, -0.0700,  0.1907
-0.1428, -0.7820,  0.2676,  0.6059,  0.3936,  0.2790
 0.5324, -0.3151,  0.6917, -0.1425,  0.6480,  0.1071
-0.8432, -0.9633, -0.8666, -0.0828, -0.7733,  0.7784
-0.9444,  0.5097, -0.2103,  0.4939, -0.0952,  0.6787
-0.0520,  0.6063, -0.1952,  0.8094, -0.9259,  0.4836
 0.5477, -0.7487,  0.2370, -0.9793,  0.0773,  0.1241
 0.2450,  0.8116,  0.9799,  0.4222,  0.4636,  0.2355
 0.8186, -0.1983, -0.5003, -0.6531, -0.7611,  0.1511
-0.4714,  0.6382, -0.3788,  0.9648, -0.4667,  0.5950
 0.0673, -0.3711,  0.8215, -0.2669, -0.1328,  0.2677
-0.9381,  0.4338,  0.7820, -0.9454,  0.0441,  0.5518
-0.3480,  0.7190,  0.1170,  0.3805, -0.0943,  0.4724
-0.9813,  0.1535, -0.3771,  0.0345,  0.8328,  0.5438
-0.1471, -0.5052, -0.2574,  0.8637,  0.8737,  0.3042
-0.5454, -0.3712, -0.6505,  0.2142, -0.1728,  0.5783
 0.6327, -0.6297,  0.4038, -0.5193,  0.1484,  0.1153
-0.5424,  0.3282, -0.0055,  0.0380, -0.6506,  0.6613
 0.1414,  0.9935,  0.6337,  0.1887,  0.9520,  0.2540
-0.9351, -0.8128, -0.8693, -0.0965, -0.2491,  0.7353
 0.9507, -0.6640,  0.9456,  0.5349,  0.6485,  0.1059
-0.0462, -0.9737, -0.2940, -0.0159,  0.4602,  0.2606
-0.0627, -0.0852, -0.7247, -0.9782,  0.5166,  0.2977
 0.0478,  0.5098, -0.0723, -0.7504, -0.3750,  0.3335
 0.0090,  0.3477,  0.5403, -0.7393, -0.9542,  0.4415
-0.9748,  0.3449,  0.3736, -0.1015,  0.8296,  0.4358
 0.2887, -0.9895, -0.0311,  0.7186,  0.6608,  0.2057
 0.1570, -0.4518,  0.1211,  0.3435, -0.2951,  0.3244
 0.7117, -0.6099,  0.4946, -0.4208,  0.5476,  0.1096
-0.2929, -0.5726,  0.5346, -0.3827,  0.4665,  0.2465
 0.4889, -0.5572, -0.5718, -0.6021, -0.7150,  0.2163
-0.7782,  0.3491,  0.5996, -0.8389, -0.5366,  0.6516
-0.5847,  0.8347,  0.4226,  0.1078, -0.3910,  0.6134
 0.8469,  0.4121, -0.0439, -0.7476,  0.9521,  0.1571
-0.6803, -0.5948, -0.1376, -0.1916, -0.7065,  0.7156
 0.2878,  0.5086, -0.5785,  0.2019,  0.4979,  0.2980
 0.2764,  0.1943, -0.4090,  0.4632,  0.8906,  0.2960
-0.8877,  0.6705, -0.6155, -0.2098, -0.3998,  0.7107
-0.8398,  0.8093, -0.2597,  0.0614, -0.0118,  0.6502
-0.8476,  0.0158, -0.4769, -0.2859, -0.7839,  0.7715
 0.5751, -0.7868,  0.9714, -0.6457,  0.1448,  0.1175
 0.4802, -0.7001,  0.1022, -0.5668,  0.5184,  0.1090
 0.4458, -0.6469,  0.7239, -0.9604,  0.7205,  0.0779
 0.5175,  0.4339,  0.9747, -0.4438, -0.9924,  0.2879
 0.8678,  0.7158,  0.4577,  0.0334,  0.4139,  0.1678
 0.5406,  0.5012,  0.2264, -0.1963,  0.3946,  0.2088
-0.9938,  0.5498,  0.7928, -0.5214, -0.7585,  0.7687
 0.7661,  0.0863, -0.4266, -0.7233, -0.4197,  0.1466
 0.2277, -0.3517, -0.0853, -0.1118,  0.6563,  0.1767
 0.3499, -0.5570, -0.0655, -0.3705,  0.2537,  0.1632
 0.7547, -0.1046,  0.5689, -0.0861,  0.3125,  0.1257
 0.8186,  0.2110,  0.5335,  0.0094, -0.0039,  0.1391
 0.6858, -0.8644,  0.1465,  0.8855,  0.0357,  0.1845
-0.4967,  0.4015,  0.0805,  0.8977,  0.2487,  0.4663
 0.6760, -0.9841,  0.9787, -0.8446, -0.3557,  0.1509
-0.1203, -0.4885,  0.6054, -0.0443, -0.7313,  0.4854
 0.8557,  0.7919, -0.0169,  0.7134, -0.1628,  0.2002
 0.0115, -0.6209,  0.9300, -0.4116, -0.7931,  0.4052
-0.7114, -0.9718,  0.4319,  0.1290,  0.5892,  0.3661
 0.3915,  0.5557, -0.1870,  0.2955, -0.6404,  0.2954
-0.3564, -0.6548, -0.1827, -0.5172, -0.1862,  0.4622
 0.2392, -0.4959,  0.5857, -0.1341, -0.2850,  0.2470
-0.3394,  0.3947, -0.4627,  0.6166, -0.4094,  0.5325
 0.7107,  0.7768, -0.6312,  0.1707,  0.7964,  0.2757
-0.1078,  0.8437, -0.4420,  0.2177,  0.3649,  0.4028
-0.3139,  0.5595, -0.6505, -0.3161, -0.7108,  0.5546
 0.4335,  0.3986,  0.3770, -0.4932,  0.3847,  0.1810
-0.2562, -0.2894, -0.8847,  0.2633,  0.4146,  0.4036
 0.2272,  0.2966, -0.6601, -0.7011,  0.0284,  0.2778
-0.0743, -0.1421, -0.0054, -0.6770, -0.3151,  0.3597
-0.4762,  0.6891,  0.6007, -0.1467,  0.2140,  0.4266
-0.4061,  0.7193,  0.3432,  0.2669, -0.7505,  0.6147
-0.0588,  0.9731,  0.8966,  0.2902, -0.6966,  0.4955
-0.0627, -0.1439,  0.1985,  0.6999,  0.5022,  0.3077
 0.1587,  0.8494, -0.8705,  0.9827, -0.8940,  0.4263
-0.7850,  0.2473, -0.9040, -0.4308, -0.8779,  0.7199
 0.4070,  0.3369, -0.2428, -0.6236,  0.4940,  0.2215
-0.0242,  0.0513, -0.9430,  0.2885, -0.2987,  0.3947
-0.5416, -0.1322, -0.2351, -0.0604,  0.9590,  0.3683
 0.1055,  0.7783, -0.2901, -0.5090,  0.8220,  0.2984
-0.9129,  0.9015,  0.1128, -0.2473,  0.9901,  0.4776
-0.9378,  0.1424, -0.6391,  0.2619,  0.9618,  0.5368
 0.7498, -0.0963,  0.4169,  0.5549, -0.0103,  0.1614
-0.2612, -0.7156,  0.4538, -0.0460, -0.1022,  0.3717
 0.7720,  0.0552, -0.1818, -0.4622, -0.8560,  0.1685
-0.4177,  0.0070,  0.9319, -0.7812,  0.3461,  0.3052
-0.0001,  0.5542, -0.7128, -0.8336, -0.2016,  0.3803
 0.5356, -0.4194, -0.5662, -0.9666, -0.2027,  0.1776
-0.2378,  0.3187, -0.8582, -0.6948, -0.9668,  0.5474
-0.1947, -0.3579,  0.1158,  0.9869,  0.6690,  0.2992
 0.3992,  0.8365, -0.9205, -0.8593, -0.0520,  0.3154
-0.0209,  0.0793,  0.7905, -0.1067,  0.7541,  0.1864
-0.4928, -0.4524, -0.3433,  0.0951, -0.5597,  0.6261
-0.8118,  0.7404, -0.5263, -0.2280,  0.1431,  0.6349
 0.0516, -0.8480,  0.7483,  0.9023,  0.6250,  0.1959
-0.3212,  0.1093,  0.9488, -0.3766,  0.3376,  0.2735
-0.3481,  0.5490, -0.3484,  0.7797,  0.5034,  0.4379
-0.5785, -0.9170, -0.3563, -0.9258,  0.3877,  0.4121
 0.3407, -0.1391,  0.5356,  0.0720, -0.9203,  0.3458
-0.3287, -0.8954,  0.2102,  0.0241,  0.2349,  0.3247
-0.1353,  0.6954, -0.0919, -0.9692,  0.7461,  0.3338
 0.9036, -0.8982, -0.5299, -0.8733, -0.1567,  0.1187
 0.7277, -0.8368, -0.0538, -0.7489,  0.5458,  0.0830
 0.9049,  0.8878,  0.2279,  0.9470, -0.3103,  0.2194
 0.7957, -0.1308, -0.5284,  0.8817,  0.3684,  0.2172
 0.4647, -0.4931,  0.2010,  0.6292, -0.8918,  0.3371
-0.7390,  0.6849,  0.2367,  0.0626, -0.5034,  0.7039
-0.1567, -0.8711,  0.7940, -0.5932,  0.6525,  0.1710
 0.7635, -0.0265,  0.1969,  0.0545,  0.2496,  0.1445
 0.7675,  0.1354, -0.7698, -0.5460,  0.1920,  0.1728
-0.5211, -0.7372, -0.6763,  0.6897,  0.2044,  0.5217
 0.1913,  0.1980,  0.2314, -0.8816,  0.5006,  0.1998
 0.8964,  0.0694, -0.6149,  0.5059, -0.9854,  0.1825
 0.1767,  0.7104,  0.2093,  0.6452,  0.7590,  0.2832
-0.3580, -0.7541,  0.4426, -0.1193, -0.7465,  0.5657
-0.5996,  0.5766, -0.9758, -0.3933, -0.9572,  0.6800
 0.9950,  0.1641, -0.4132,  0.8579,  0.0142,  0.2003
-0.4717, -0.3894, -0.2567, -0.5111,  0.1691,  0.4266
 0.3917, -0.8561,  0.9422,  0.5061,  0.6123,  0.1212
-0.0366, -0.1087,  0.3449, -0.1025,  0.4086,  0.2475
 0.3633,  0.3943,  0.2372, -0.6980,  0.5216,  0.1925
-0.5325, -0.6466, -0.2178, -0.3589,  0.6310,  0.3568
 0.2271,  0.5200, -0.1447, -0.8011, -0.7699,  0.3128
 0.6415,  0.1993,  0.3777, -0.0178, -0.8237,  0.2181
-0.5298, -0.0768, -0.6028, -0.9490,  0.4588,  0.4356
 0.6870, -0.1431,  0.7294,  0.3141,  0.1621,  0.1632
-0.5985,  0.0591,  0.7889, -0.3900,  0.7419,  0.2945
 0.3661,  0.7984, -0.8486,  0.7572, -0.6183,  0.3449
 0.6995,  0.3342, -0.3113, -0.6972,  0.2707,  0.1712
 0.2565,  0.9126,  0.1798, -0.6043, -0.1413,  0.2893
-0.3265,  0.9839, -0.2395,  0.9854,  0.0376,  0.4770
 0.2690, -0.1722,  0.9818,  0.8599, -0.7015,  0.3954
-0.2102, -0.0768,  0.1219,  0.5607, -0.0256,  0.3949
 0.8216, -0.9555,  0.6422, -0.6231,  0.3715,  0.0801
-0.2896,  0.9484, -0.7545, -0.6249,  0.7789,  0.4370
-0.9985, -0.5448, -0.7092, -0.5931,  0.7926,  0.5402

Test data:

# synthetic_test_40.txt
#
 0.7462,  0.4006, -0.0590,  0.6543, -0.0083,  0.1935
 0.8495, -0.2260, -0.0142, -0.4911,  0.7699,  0.1078
-0.2335, -0.4049,  0.4352, -0.6183, -0.7636,  0.5088
 0.1810, -0.5142,  0.2465,  0.2767, -0.3449,  0.3136
-0.8650,  0.7611, -0.0801,  0.5277, -0.4922,  0.7140
-0.2358, -0.7466, -0.5115, -0.8413, -0.3943,  0.4533
 0.4834,  0.2300,  0.3448, -0.9832,  0.3568,  0.1360
-0.6502, -0.6300,  0.6885,  0.9652,  0.8275,  0.3046
-0.3053,  0.5604,  0.0929,  0.6329, -0.0325,  0.4756
-0.7995,  0.0740, -0.2680,  0.2086,  0.9176,  0.4565
-0.2144, -0.2141,  0.5813,  0.2902, -0.2122,  0.4119
-0.7278, -0.0987, -0.3312, -0.5641,  0.8515,  0.4438
 0.3793,  0.1976,  0.4933,  0.0839,  0.4011,  0.1905
-0.8568,  0.9573, -0.5272,  0.3212, -0.8207,  0.7415
-0.5785,  0.0056, -0.7901, -0.2223,  0.0760,  0.5551
 0.0735, -0.2188,  0.3925,  0.3570,  0.3746,  0.2191
 0.1230, -0.2838,  0.2262,  0.8715,  0.1938,  0.2878
 0.4792, -0.9248,  0.5295,  0.0366, -0.9894,  0.3149
-0.4456,  0.0697,  0.5359, -0.8938,  0.0981,  0.3879
 0.8629, -0.8505, -0.4464,  0.8385,  0.5300,  0.1769
 0.1995,  0.6659,  0.7921,  0.9454,  0.9970,  0.2330
-0.0249, -0.3066, -0.2927, -0.4923,  0.8220,  0.2437
 0.4513, -0.9481, -0.0770, -0.4374, -0.9421,  0.2879
-0.3405,  0.5931, -0.3507, -0.3842,  0.8562,  0.3987
 0.9538,  0.0471,  0.9039,  0.7760,  0.0361,  0.1706
-0.0887,  0.2104,  0.9808,  0.5478, -0.3314,  0.4128
-0.8220, -0.6302,  0.0537, -0.1658,  0.6013,  0.4306
-0.4123, -0.2880,  0.9074, -0.0461, -0.4435,  0.5144
 0.0060,  0.2867, -0.7775,  0.5161,  0.7039,  0.3599
-0.7968, -0.5484,  0.9426, -0.4308,  0.8148,  0.2979
 0.7811,  0.8450, -0.6877,  0.7594,  0.2640,  0.2362
-0.6802, -0.1113, -0.8325, -0.6694, -0.6056,  0.6544
 0.3821,  0.1476,  0.7466, -0.5107,  0.2592,  0.1648
 0.7265,  0.9683, -0.9803, -0.4943, -0.5523,  0.2454
-0.9049, -0.9797, -0.0196, -0.9090, -0.4433,  0.6447
-0.4607,  0.1811, -0.2389,  0.4050, -0.0078,  0.5229
 0.2664, -0.2932, -0.4259, -0.7336,  0.8742,  0.1834
-0.4507,  0.1029, -0.6294, -0.1158, -0.6294,  0.6081
 0.8948, -0.0124,  0.9278,  0.2899, -0.0314,  0.1534
-0.1323, -0.8813, -0.0146, -0.0697,  0.6135,  0.2386
Posted in Machine Learning

“Random Forest Regression Using C#” in Visual Studio Magazine

I wrote an article titled “Random Forest Regression Using C#” in the March 2026 edition of Microsoft Visual Studio Magazine. See https://visualstudiomagazine.com/articles/2026/03/18/random-forest-regression-using-csharp.aspx.

The goal of a machine learning regression problem is to predict a single numeric value. For example, you might want to predict the price of a house based on its square footage, number of bedrooms, property tax rate, and so on.

A simple decision tree regressor encodes a virtual set of if-then rules to make a prediction. For example, “if house-age is greater than 10.0 and house-age is less than or equal to 13.5 and bedrooms is greater than 3.0, then price is $525,665.38.” Simple decision trees usually overfit their training data, and so they predict poorly on new, previously unseen data.
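The example if-then rule can be written as literal C#. This is a toy sketch: the helper name and the fallback value are mine for illustration, not from the article.

```csharp
using System;

class RuleDemo
{
  // the example if-then rule from the text, hard-coded;
  // the 250000.00 fallback is a made-up default -- in a real
  // tree some other leaf's value would be returned instead
  static double PredictPrice(double houseAge, double bedrooms)
  {
    if (houseAge > 10.0 && houseAge <= 13.5 && bedrooms > 3.0)
      return 525665.38;
    return 250000.00;
  }

  static void Main()
  {
    Console.WriteLine(PredictPrice(12.0, 4.0).ToString("F2"));
    // 525665.38
  }
}
```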

A random forest is a collection of simple decision tree regressors that have been trained on different random subsets of the source training data. This process usually goes a long way toward limiting the overfitting problem.

To make a prediction for an input vector x, each tree in the forest makes a prediction and the final predicted y value is the average of the predictions. A bagging (“bootstrap aggregation”) regression system is a specific type of random forest system where all columns/predictors of the source training data are always used to construct the training data subsets.
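The averaging step can be sketched like this, with each tree represented by a generic predictor delegate (the delegate representation is my simplification; the demo program uses a DecisionTreeRegressor class):

```csharp
using System;

class ForestAverageDemo
{
  // final prediction = average of the individual tree predictions
  static double ForestPredict(Func<double[], double>[] trees,
    double[] x)
  {
    double sum = 0.0;
    foreach (Func<double[], double> t in trees)
      sum += t(x);
    return sum / trees.Length;
  }

  static void Main()
  {
    // three stand-in "trees" with fixed outputs
    Func<double[], double>[] trees = {
      (x) => 0.40, (x) => 0.50, (x) => 0.45
    };
    double y = ForestPredict(trees, new double[] { 0.1, 0.2 });
    Console.WriteLine(y.ToString("F4"));  // 0.4500
  }
}
```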

The VSM article presents a complete demo of random forest regression using the C# language. The output of the demo is:

Begin C# Random Forest regression demo

Loading synthetic train (200) and test (40) data
Done

First three train X:
 -0.1660  0.4406 -0.9998 -0.3953 -0.7065
  0.0776 -0.1616  0.3704 -0.5911  0.7562
 -0.9452  0.3409 -0.1654  0.1174 -0.7192

First three train y:
  0.4840
  0.1568
  0.8054

Setting nTrees = 100
Setting maxDepth = 6
Setting minSamples = 2
Setting minLeaf = 1
Setting nCols = 5
Setting nRows = 150

Creating and training random forest regression model
Done

Accuracy train (within 0.10) = 0.8050
Accuracy test (within 0.10) = 0.5750

MSE train = 0.0006
MSE test = 0.0016

Predicting for x =
  -0.1660   0.4406  -0.9998  -0.3953  -0.7065
y = 0.4728

End demo

The demo data is synthetic. It was generated by a 5-10-1 neural network with random weights and bias values. The idea here is that the synthetic data does have an underlying, but complex, structure which can be predicted.
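A sketch of how such synthetic data might be generated. The tanh hidden activation, identity output activation, and the [-1, +1] weight range are my assumptions; the article doesn't specify them.

```csharp
using System;

class SyntheticGenDemo
{
  static double[][] ihWts = new double[5][];  // input-hidden wts
  static double[] hBiases = new double[10];
  static double[] hoWts = new double[10];     // hidden-output wts
  static double oBias;

  static void InitWeights(Random rnd)
  {
    // all weights and biases random in [-1, +1]
    for (int i = 0; i < 5; ++i)
    {
      ihWts[i] = new double[10];
      for (int j = 0; j < 10; ++j)
        ihWts[i][j] = 2.0 * rnd.NextDouble() - 1.0;
    }
    for (int j = 0; j < 10; ++j)
    {
      hBiases[j] = 2.0 * rnd.NextDouble() - 1.0;
      hoWts[j] = 2.0 * rnd.NextDouble() - 1.0;
    }
    oBias = 2.0 * rnd.NextDouble() - 1.0;
  }

  // one forward pass of the 5-10-1 network
  static double ComputeY(double[] x)
  {
    double[] h = new double[10];
    for (int j = 0; j < 10; ++j)
    {
      double s = hBiases[j];
      for (int i = 0; i < 5; ++i) s += x[i] * ihWts[i][j];
      h[j] = Math.Tanh(s);  // assumed hidden activation
    }
    double y = oBias;  // identity output activation (assumed)
    for (int j = 0; j < 10; ++j) y += h[j] * hoWts[j];
    return y;
  }

  static void Main()
  {
    Random rnd = new Random(0);
    InitWeights(rnd);
    double[] x = new double[5];
    for (int i = 0; i < 5; ++i)
      x[i] = 2.0 * rnd.NextDouble() - 1.0;  // predictors in [-1, +1]
    Console.WriteLine(ComputeY(x).ToString("F4"));
  }
}
```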

When using decision trees for regression, it’s not necessary to normalize the training data predictor values because no distance between data items is computed. However, it’s not a bad idea to normalize the predictors just in case you want to send the data to other regression algorithms that do require normalization.

Random forest regression is most often used with data that has strictly numeric predictor variables. It is possible to use random forest regression with mixed categorical and numeric data, by using ordinal encoding on the categorical data. In theory, ordinal encoding shouldn’t work well. For example, if you have a predictor variable color with possible encoded values red = 0, blue = 1, green = 2, then red will always be less-than-or-equal to any other color value in the decision tree construction process. However, in practice, ordinal encoding for random forest regression often works well.
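For instance, ordinal encoding of the hypothetical color predictor described above (the mapping values are arbitrary):

```csharp
using System;

class OrdinalEncodeDemo
{
  // map categorical color values to arbitrary numeric codes
  static double OrdinalEncode(string color)
  {
    if (color == "red") return 0.0;
    if (color == "blue") return 1.0;
    if (color == "green") return 2.0;
    throw new ArgumentException("unknown color: " + color);
  }

  static void Main()
  {
    // in any tree split, "red" (0.0) is always <= other color codes
    Console.WriteLine(OrdinalEncode("red") + " " +
      OrdinalEncode("green"));  // 0 2
  }
}
```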

The motivation for combining many simple decision tree regressors into a forest is the fact that a simple decision tree will always overfit training data if the tree is deep enough. By using a collection of trees that have been trained on different random subsets of the source data, the averaged prediction of the collection is much less likely to overfit.



Random forest regression is conceptually related to bagging (bootstrap aggregation) regression, adaptive boosting regression, and gradient boosting regression.

I’m a big fan of 1950s science fiction movies. To my eye, several of the actresses in my favorite films seem to have somewhat similar physical appearances.

Left: Actress Julie Adams (1926-2019) is best known for her lead role in “Creature from the Black Lagoon” (1954).

Center: Actress Joan Taylor (1929-2012) had lead roles in “Earth vs. the Flying Saucers” (1956) and “20 Million Miles to Earth” (1957).

Right: Actress Joan Weldon (1930-2021) was in “Them!” (1954) — the well-known giant ant movie.


Posted in Machine Learning

Linear Regression Using C# Applied to the Diabetes Dataset – Poor Results

I write code every day. Like most things, writing code is a skill that must be practiced. One morning before work, I figured I’d run the well-known Diabetes Dataset through a linear regression model. I fully expected a poor prediction model, and that’s exactly what happened.

But the linear regression results are useful as a baseline for comparison with other regression techniques: nearest neighbors regression, quadratic regression, kernel ridge regression, neural network regression, decision tree regression, bagging tree regression, random forest regression, adaptive boosting regression, and gradient boosting regression.

The Diabetes Dataset looks like:

59, 2, 32.1, 101.00, 157,  93.2, 38, 4.00, 4.8598, 87, 151
48, 1, 21.6,  87.00, 183, 103.2, 70, 3.00, 3.8918, 69,  75
72, 2, 30.5,  93.00, 156,  93.6, 41, 4.00, 4.6728, 85, 141
. . .

Each line represents a patient. The first 10 values on each line are predictors. The last value on each line is the target value (a diabetes metric) to predict. The predictors are: age, sex, body mass index, blood pressure, serum cholesterol, low-density lipoproteins, high-density lipoproteins, total cholesterol, triglycerides, blood sugar. There are 442 data items.

The sex encoding isn’t explained, but I suspect male = 1, female = 2 (there are 235 “1” values and 206 “2” values).

I converted the sex values from 1,2 into 0,1. Then I applied divide-by-constant normalization by dividing the 10 predictor columns by (100, 1, 100, 1000, 1000, 1000, 100, 10, 10, 1000) and the target y values by 1000. The resulting encoded and normalized data looks like:

0.5900, 1.0000, 0.3210, . . . 0.1510
0.4800, 0.0000, 0.2160, . . . 0.0750
0.7200, 1.0000, 0.3050, . . . 0.1410
. . .
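
To make the preprocessing concrete, here is a minimal sketch of the encode-and-normalize step, assuming each raw line has already been parsed into an array of 11 doubles. The NormalizeDemo class and NormalizeRow method names are just for illustration; they aren't part of the demo program below.

```csharp
using System;

public class NormalizeDemo
{
  // divide-by-constant divisors for the 10 predictors and the target
  static readonly double[] divisors = new double[]
    { 100, 1, 100, 1000, 1000, 1000, 100, 10, 10, 1000, 1000 };

  public static double[] NormalizeRow(double[] raw)
  {
    double[] result = new double[raw.Length];
    for (int j = 0; j < raw.Length; ++j)
      result[j] = raw[j] / divisors[j];
    result[1] = raw[1] - 1.0;  // recode sex: 1,2 becomes 0,1
    return result;
  }

  static void Main()
  {
    double[] raw = new double[]  // first data line
      { 59, 2, 32.1, 101.00, 157, 93.2, 38, 4.00, 4.8598, 87, 151 };
    double[] norm = NormalizeRow(raw);
    for (int j = 0; j < norm.Length; ++j)
      Console.Write(norm[j].ToString("F4") + " ");
    Console.WriteLine();
    // 0.5900 1.0000 0.3210 0.1010 0.1570 0.0932 0.3800 0.4000 0.4860 0.0870 0.1510
  }
}
```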

I split the 442 items into a 342-item training set and a 100-item test set.
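
The demo loads pre-split files, and how the original 442 items were divided isn't stated, so the fixed-seed shuffle-then-split below is an illustrative assumption, not necessarily the method actually used. The SplitDemo name is hypothetical.

```csharp
using System;

public class SplitDemo
{
  // shuffle row indices, then take the first nTrain rows as train
  public static void Split(double[][] data, int nTrain, int seed,
    out double[][] train, out double[][] test)
  {
    int n = data.Length;
    int[] idx = new int[n];
    for (int i = 0; i < n; ++i) idx[i] = i;
    Random rnd = new Random(seed);
    for (int i = n - 1; i > 0; --i)  // Fisher-Yates shuffle
    {
      int j = rnd.Next(0, i + 1);  // 0 to i inclusive
      int tmp = idx[i]; idx[i] = idx[j]; idx[j] = tmp;
    }
    train = new double[nTrain][];
    test = new double[n - nTrain][];
    for (int i = 0; i < n; ++i)
      if (i < nTrain) train[i] = data[idx[i]];
      else test[i - nTrain] = data[idx[i]];
  }

  static void Main()
  {
    double[][] data = new double[442][];
    for (int i = 0; i < 442; ++i)
      data[i] = new double[] { i };  // dummy one-column rows
    double[][] train; double[][] test;
    Split(data, 342, 0, out train, out test);
    Console.WriteLine(train.Length + " train, " +
      test.Length + " test");  // 342 train, 100 test
  }
}
```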

I implemented a linear regression model using C#. For training, I used closed-form training via the normal equations, computing the pseudo-inverse with a Cholesky matrix inverse. The math equation is:

w = inv(Xt * X) * Xt * y

The w is a vector with the model bias in cell [0] followed by the weights in the remaining cells. X is the design matrix of the training predictors (the predictors with a leading column of 1s added). Xt is the transpose of the design matrix. The y is the vector of target values. The inv() is any matrix inverse, but because Xt * X is square, symmetric, and positive definite, Cholesky inversion is simplest (but by no means simple). The * means matrix multiplication or matrix-vector multiplication.
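
To see the normal equations in action on a tiny problem, here is a sketch with a single predictor, where inv() reduces to the 2-by-2 matrix inverse formula. The three data points lie exactly on y = 1 + 2x, so the computed bias is 1 and the weight is 2. The FitLine name is illustrative only; the full demo program uses general matrix helpers instead.

```csharp
using System;

public class NormalEqDemo
{
  // closed-form w = inv(Xt * X) * Xt * y for one predictor,
  // where X has a leading column of 1s and a column of x values
  public static double[] FitLine(double[] x, double[] y)
  {
    int n = x.Length;
    double sx = 0.0, sxx = 0.0, sy = 0.0, sxy = 0.0;
    for (int i = 0; i < n; ++i)
    {
      sx += x[i]; sxx += x[i] * x[i];
      sy += y[i]; sxy += x[i] * y[i];
    }
    // Xt * X = [[n, sx], [sx, sxx]] and Xt * y = [sy, sxy]
    double det = n * sxx - sx * sx;  // 2x2 inverse determinant
    double bias = (sxx * sy - sx * sxy) / det;
    double wt = (n * sxy - sx * sy) / det;
    return new double[] { bias, wt };
  }

  static void Main()
  {
    double[] bw = FitLine(new double[] { 1.0, 2.0, 3.0 },
      new double[] { 3.0, 5.0, 7.0 });
    Console.WriteLine("bias = " + bw[0].ToString("F1") +
      "  weight = " + bw[1].ToString("F1"));
    // bias = 1.0  weight = 2.0
  }
}
```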

The output of the demo is:

Begin C# linear regression demo

Loading diabetes train (342) and test (100) data
Done

First three train X:
  0.5900  1.0000  0.3210  0.1010  . . .  0.0870
  0.4800  0.0000  0.2160  0.0870  . . .  0.0690
  0.7200  1.0000  0.3050  0.0930  . . .  0.0850

First three train y:
  0.1510
  0.0750
  0.1410

Creating linear regression model
Done

Training using pinv normal equations Cholesky
Done

Weights/coefficients:
-0.0031  -0.0235  0.5557  1.0414  -0.5531
 0.2503  -0.0272  0.0469  0.5555   0.3634
Bias/constant: -0.3013

Evaluating model

Accuracy train (within 0.10) = 0.1813
Accuracy test (within 0.10) = 0.2800

MSE train = 0.0029
MSE test = 0.0027

Predicting for x =
   0.5900   1.0000   0.3210   0.1010   0.1570
   0.0932   0.3800   0.4000   0.4860   0.0870
Predicted y = 0.2034

End demo

As expected, the linear regression model had poor prediction accuracy: just 18.13% on the training data (62 out of 342 correct) and 28.00% on the test data (28 out of 100 correct). A prediction is scored correct if it is within 10% of the true target value; for example, if the true target is 0.1510, any prediction between 0.1359 and 0.1661 counts as correct.



I don’t know much about art. But here are three images in the art deco style that appeal to my eye. I like the minimalist, linear style.


Demo program. Replace “lt” (less than), “gt”, “lte”, “gte” with the corresponding comparison operator symbols. (My blog editor chokes on symbols.)

using System;
using System.IO;
using System.Collections.Generic;

namespace LinearRegressionDiabetesClosedCholesky
{
  internal class LinearRegressionProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin C# linear regression demo ");

      // 1. load data
      Console.WriteLine("\nLoading diabetes train" +
        " (342) and test (100) data");
      string trainFile =
        "..\\..\\..\\Data\\diabetes_norm_train_342.txt";
      int[] colsX = 
        new int[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
      int colY = 10;
      double[][] trainX =
        MatLoad(trainFile, colsX, ',', "#");
      double[] trainY =
        MatToVec(MatLoad(trainFile,
        new int[] { colY }, ',', "#"));

      string testFile =
        "..\\..\\..\\Data\\diabetes_norm_test_100.txt";
      double[][] testX =
        MatLoad(testFile, colsX, ',', "#");
      double[] testY =
        MatToVec(MatLoad(testFile,
        new int[] { colY }, ',', "#"));
      Console.WriteLine("Done ");

      Console.WriteLine("\nFirst three train X: ");
      for (int i = 0; i "lt" 3; ++i)
        VecShow(trainX[i], 4, 8);

      Console.WriteLine("\nFirst three train y: ");
      for (int i = 0; i "lt" 3; ++i)
        Console.WriteLine(trainY[i].ToString("F4").
          PadLeft(8));

      // 2. create and train model
      Console.WriteLine("\nCreating linear regression model ");
      LinearRegressor model = new LinearRegressor();
      Console.WriteLine("Done");

      Console.WriteLine("\nTraining using pinv normal" +
        " equations Cholesky ");
      model.TrainClosed(trainX, trainY);
      Console.WriteLine("Done ");

      ////Console.WriteLine("\nTraining using SGD ");
      ////model.TrainSGD(trainX, trainY, 0.001, 20000);

      // show model parameters
      Console.WriteLine("\nWeights/coefficients: ");
      for (int i = 0; i "lt" model.weights.Length; ++i)
        Console.Write(model.weights[i].ToString("F4") + "  ");
      Console.WriteLine("\nBias/constant: " +
        model.bias.ToString("F4"));

      // 3. evaluate model
      Console.WriteLine("\nEvaluating model ");
      double accTrain = model.Accuracy(trainX, trainY, 0.10);
      Console.WriteLine("\nAccuracy train (within 0.10) = " +
        accTrain.ToString("F4"));
      double accTest = model.Accuracy(testX, testY, 0.10);
      Console.WriteLine("Accuracy test (within 0.10) = " +
        accTest.ToString("F4"));

      double mseTrain = model.MSE(trainX, trainY);
      Console.WriteLine("\nMSE train = " +
        mseTrain.ToString("F4"));
      double mseTest = model.MSE(testX, testY);
      Console.WriteLine("MSE test = " +
        mseTest.ToString("F4"));

      // 4. use model
      double[] x = trainX[0];
      Console.WriteLine("\nPredicting for x = ");
      VecShow(x, 4, 9);
      double predY = model.Predict(x);
      Console.WriteLine("Predicted y = " +
        predY.ToString("F4"));

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main

    // ------------------------------------------------------
    // helpers for Main(): MatLoad, MatToVec, VecShow
    // ------------------------------------------------------

    static double[][] MatLoad(string fn, int[] usecols,
      char sep, string comment)
    {
      List"lt"double[]"gt" result = 
        new List"lt"double[]"gt"();
      string line = "";
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        string[] tokens = line.Split(sep);
        List"lt"double"gt" lst = new List"lt"double"gt"();
        for (int j = 0; j "lt" usecols.Length; ++j)
          lst.Add(double.Parse(tokens[usecols[j]]));
        double[] row = lst.ToArray();
        result.Add(row);
      }
      sr.Close(); ifs.Close();
      return result.ToArray();
    }

    static double[] MatToVec(double[][] mat)
    {
      int nRows = mat.Length;
      int nCols = mat[0].Length;
      double[] result = new double[nRows * nCols];
      int k = 0;
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[k++] = mat[i][j];
      return result;
    }

    static void VecShow(double[] vec, int dec, int wid)
    {
      for (int i = 0; i "lt" vec.Length; ++i)
        Console.Write(vec[i].ToString("F" + dec).
          PadLeft(wid));
      Console.WriteLine("");
    }
  } // Program

  public class LinearRegressor
  {
    public double[] weights;
    public double bias;
    private Random rnd;  // not used by closed-form training

    public LinearRegressor(int seed = 0)
    {
      this.weights = new double[0]; // quasi-null
      this.bias = 0.0;
      this.rnd = new Random(seed);
    }

    // ------------------------------------------------------

    public double Predict(double[] x)
    {
      double result = 0.0;
      for (int j = 0; j "lt" x.Length; ++j)
        result += x[j] * this.weights[j];
      result += this.bias;
      return result;
    }

    // ------------------------------------------------------

    public void TrainClosed(double[][] trainX,
      double[] trainY)
    {
      // pinverse via normal equations with Cholesky inv
      // w = inv(Xt * X) * Xt * y
      int dim = trainX[0].Length;
      this.weights = new double[dim];

      double[][] X = MatDesign(trainX); // add leading 1s col
      double[][] Xt = MatTranspose(X);
      double[][] XtX = MatProduct(Xt, X); // could fail
      for (int i = 0; i "lt" XtX.Length; ++i)
        XtX[i][i] += 1.0e-8;  // condition Xt*X
      double[][] invXtX = MatInvCholesky(XtX);
      double[][] invXtXXt = MatProduct(invXtX, Xt);
      double[] biasAndWts = MatVecProd(invXtXXt, trainY);
      this.bias = biasAndWts[0];
      for (int i = 1; i "lt" biasAndWts.Length; ++i)
        this.weights[i - 1] = biasAndWts[i];
      return;
    }

    // ------------------------------------------------------

    public double Accuracy(double[][] dataX, double[] dataY,
      double pctClose)
    {
      int numCorrect = 0; int numWrong = 0;
      for (int i = 0; i "lt" dataX.Length; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        if (Math.Abs(predY - actualY) "lt"
          Math.Abs(pctClose * actualY))
          ++numCorrect;
        else
          ++numWrong;
      }
      return (numCorrect * 1.0) / (numWrong + numCorrect);
    }

    // ------------------------------------------------------

    public double MSE(double[][] dataX, double[] dataY)
    {
      int n = dataX.Length;
      double sum = 0.0;
      for (int i = 0; i "lt" n; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        sum += (actualY - predY) * (actualY - predY);
      }
      return sum / n;
    }

    // ------------------------------------------------------

    // ------------------------------------------------------
    // helpers for TrainClosed()
    // MatDesign, MatTranspose, MatProduct, MatVecProd
    // MatInvCholesky, MatDecompCholesky, 
    // MatInvLowerTri, MatInvUpperTri
    // MatMake, MatIdentity

    private static double[][] MatDesign(double[][] X)
    {
      // add a leading column of 1s
      int nRows = X.Length; int nCols = X[0].Length;
      double[][] result = new double[X.Length][];
      for (int i = 0; i "lt" nRows; ++i)
        result[i] = new double[nCols + 1];

      for (int i = 0; i "lt" nRows; ++i)
      {
        result[i][0] = 1.0;
        for (int j = 1; j "lt" nCols + 1; ++j)
          result[i][j] = X[i][j - 1];
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatTranspose(double[][] M)
    {
      int nRows = M.Length; int nCols = M[0].Length;
      double[][] result = MatMake(nCols, nRows);  // note swapped dims
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[j][i] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatMake(int nRows, int nCols)
    {
      double[][] result = new double[nRows][];
      for (int i = 0; i "lt" nRows; ++i)
        result[i] = new double[nCols];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatProduct(double[][] A,
      double[][] B)
    {
      int aRows = A.Length; int aCols = A[0].Length;
      int bRows = B.Length; int bCols = B[0].Length;
      if (aCols != bRows)
        throw new Exception("Non-conformable matrices");

      double[][] result = MatMake(aRows, bCols);

      for (int i = 0; i "lt" aRows; ++i) // each row of A
        for (int j = 0; j "lt" bCols; ++j) // each col of B
          for (int k = 0; k "lt" aCols; ++k)
            result[i][j] += A[i][k] * B[k][j];

      return result;
    }

    // ------------------------------------------------------

    private static double[] MatVecProd(double[][] M,
      double[] v)
    {
      // return a regular vector
      int nRows = M.Length; int nCols = M[0].Length;
      int n = v.Length;
      if (nCols != n)
        throw new Exception("non-conform in MatVecProd");

      double[] result = new double[nRows];
      for (int i = 0; i "lt" nRows; ++i)
        for (int k = 0; k "lt" nCols; ++k)
          result[i] += M[i][k] * v[k];

      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatInvCholesky(double[][] A)
    {
      // A must be square symmetric positive definite
      // A = L * Lt, inv(A) = inv(Lt) * inv(L)
      double[][] L = MatDecompCholesky(A); // lower tri
      double[][] Lt = MatTranspose(L); // upper tri
      double[][] invL = MatInvLowerTri(L);
      double[][] invLt = MatInvUpperTri(Lt);
      double[][] result = MatProduct(invLt, invL);
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatDecompCholesky(double[][] A)
    {
      // A must be square, symmetric, positive definite
      int m = A.Length; int n = A[0].Length;  // m == n
      double[][] L = new double[n][];
      for (int i = 0; i "lt" n; ++i)
        L[i] = new double[n];

      for (int i = 0; i "lt" n; ++i)
      {
        for (int j = 0; j "lte" i; ++j)
        {
          double sum = 0.0;
          for (int k = 0; k "lt" j; ++k)
            sum += L[i][k] * L[j][k];
          if (i == j)
          {
            double tmp = A[i][i] - sum;
            if (tmp "lt" 0.0)
              throw new
                Exception("decomp Cholesky fatal");
            L[i][j] = Math.Sqrt(tmp);
          }
          else
          {
            if (L[j][j] == 0.0)
              throw new
                Exception("decomp Cholesky fatal ");
            L[i][j] = (A[i][j] - sum) / L[j][j];
          }
        } // j
      } // i

      return L;
    }

    // ------------------------------------------------------
 
    private static double[][] MatIdentity(int n)
    {
      double[][] result = MatMake(n, n);
      for (int i = 0; i "lt" n; ++i)
        result[i][i] = 1.0;
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatInvLowerTri(double[][] A)
    {
      int n = A.Length;  // must be square matrix
      double[][] result = MatIdentity(n);

      for (int k = 0; k "lt" n; ++k)
      {
        for (int j = 0; j "lt" n; ++j)
        {
          for (int i = 0; i "lt" k; ++i)
          {
            result[k][j] -= result[i][j] * A[k][i];
          }
          result[k][j] /= (A[k][k] + 1.0e-8);
        }
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatInvUpperTri(double[][] A)
    {
      int n = A.Length;  // must be square matrix
      double[][] result = MatIdentity(n);

      for (int k = 0; k "lt" n; ++k)
      {
        for (int j = 0; j "lt" n; ++j)
        {
          for (int i = 0; i "lt" k; ++i)
          {
            result[j][k] -= result[j][i] * A[i][k];
          }
          result[j][k] /= (A[k][k] + 1.0e-8); // avoid 0
        }
      }
      return result;
    }
    // ------------------------------------------------------
  } // class LinearRegressor

} // ns

Training data:


# diabetes_norm_train_342.txt
# cols [0] to [9] predictors. col [10] target
# norm division constants:
# 100, -1, 100, 1000, 1000, 1000, 100, 10, 10, 1000, 1000
#
0.5900, 1.0000, 0.3210, 0.1010, 0.1570, 0.0932, 0.3800, 0.4000, 0.4860, 0.0870, 0.1510
0.4800, 0.0000, 0.2160, 0.0870, 0.1830, 0.1032, 0.7000, 0.3000, 0.3892, 0.0690, 0.0750
0.7200, 1.0000, 0.3050, 0.0930, 0.1560, 0.0936, 0.4100, 0.4000, 0.4673, 0.0850, 0.1410
0.2400, 0.0000, 0.2530, 0.0840, 0.1980, 0.1314, 0.4000, 0.5000, 0.4890, 0.0890, 0.2060
0.5000, 0.0000, 0.2300, 0.1010, 0.1920, 0.1254, 0.5200, 0.4000, 0.4291, 0.0800, 0.1350
0.2300, 0.0000, 0.2260, 0.0890, 0.1390, 0.0648, 0.6100, 0.2000, 0.4190, 0.0680, 0.0970
0.3600, 1.0000, 0.2200, 0.0900, 0.1600, 0.0996, 0.5000, 0.3000, 0.3951, 0.0820, 0.1380
0.6600, 1.0000, 0.2620, 0.1140, 0.2550, 0.1850, 0.5600, 0.4550, 0.4249, 0.0920, 0.0630
0.6000, 1.0000, 0.3210, 0.0830, 0.1790, 0.1194, 0.4200, 0.4000, 0.4477, 0.0940, 0.1100
0.2900, 0.0000, 0.3000, 0.0850, 0.1800, 0.0934, 0.4300, 0.4000, 0.5385, 0.0880, 0.3100
0.2200, 0.0000, 0.1860, 0.0970, 0.1140, 0.0576, 0.4600, 0.2000, 0.3951, 0.0830, 0.1010
0.5600, 1.0000, 0.2800, 0.0850, 0.1840, 0.1448, 0.3200, 0.6000, 0.3584, 0.0770, 0.0690
0.5300, 0.0000, 0.2370, 0.0920, 0.1860, 0.1092, 0.6200, 0.3000, 0.4304, 0.0810, 0.1790
0.5000, 1.0000, 0.2620, 0.0970, 0.1860, 0.1054, 0.4900, 0.4000, 0.5063, 0.0880, 0.1850
0.6100, 0.0000, 0.2400, 0.0910, 0.2020, 0.1154, 0.7200, 0.3000, 0.4291, 0.0730, 0.1180
0.3400, 1.0000, 0.2470, 0.1180, 0.2540, 0.1842, 0.3900, 0.7000, 0.5037, 0.0810, 0.1710
0.4700, 0.0000, 0.3030, 0.1090, 0.2070, 0.1002, 0.7000, 0.3000, 0.5215, 0.0980, 0.1660
0.6800, 1.0000, 0.2750, 0.1110, 0.2140, 0.1470, 0.3900, 0.5000, 0.4942, 0.0910, 0.1440
0.3800, 0.0000, 0.2540, 0.0840, 0.1620, 0.1030, 0.4200, 0.4000, 0.4443, 0.0870, 0.0970
0.4100, 0.0000, 0.2470, 0.0830, 0.1870, 0.1082, 0.6000, 0.3000, 0.4543, 0.0780, 0.1680
0.3500, 0.0000, 0.2110, 0.0820, 0.1560, 0.0878, 0.5000, 0.3000, 0.4511, 0.0950, 0.0680
0.2500, 1.0000, 0.2430, 0.0950, 0.1620, 0.0986, 0.5400, 0.3000, 0.3850, 0.0870, 0.0490
0.2500, 0.0000, 0.2600, 0.0920, 0.1870, 0.1204, 0.5600, 0.3000, 0.3970, 0.0880, 0.0680
0.6100, 1.0000, 0.3200, 0.1037, 0.2100, 0.0852, 0.3500, 0.6000, 0.6107, 0.1240, 0.2450
0.3100, 0.0000, 0.2970, 0.0880, 0.1670, 0.1034, 0.4800, 0.4000, 0.4357, 0.0780, 0.1840
0.3000, 1.0000, 0.2520, 0.0830, 0.1780, 0.1184, 0.3400, 0.5000, 0.4852, 0.0830, 0.2020
0.1900, 0.0000, 0.1920, 0.0870, 0.1240, 0.0540, 0.5700, 0.2000, 0.4174, 0.0900, 0.1370
0.4200, 0.0000, 0.3190, 0.0830, 0.1580, 0.0876, 0.5300, 0.3000, 0.4466, 0.1010, 0.0850
0.6300, 0.0000, 0.2440, 0.0730, 0.1600, 0.0914, 0.4800, 0.3000, 0.4635, 0.0780, 0.1310
0.6700, 1.0000, 0.2580, 0.1130, 0.1580, 0.0542, 0.6400, 0.2000, 0.5293, 0.1040, 0.2830
0.3200, 0.0000, 0.3050, 0.0890, 0.1820, 0.1106, 0.5600, 0.3000, 0.4344, 0.0890, 0.1290
0.4200, 0.0000, 0.2030, 0.0710, 0.1610, 0.0812, 0.6600, 0.2000, 0.4234, 0.0810, 0.0590
0.5800, 1.0000, 0.3800, 0.1030, 0.1500, 0.1072, 0.2200, 0.7000, 0.4644, 0.0980, 0.3410
0.5700, 0.0000, 0.2170, 0.0940, 0.1570, 0.0580, 0.8200, 0.2000, 0.4443, 0.0920, 0.0870
0.5300, 0.0000, 0.2050, 0.0780, 0.1470, 0.0842, 0.5200, 0.3000, 0.3989, 0.0750, 0.0650
0.6200, 1.0000, 0.2350, 0.0803, 0.2250, 0.1128, 0.8600, 0.2620, 0.4875, 0.0960, 0.1020
0.5200, 0.0000, 0.2850, 0.1100, 0.1950, 0.0972, 0.6000, 0.3000, 0.5242, 0.0850, 0.2650
0.4600, 0.0000, 0.2740, 0.0780, 0.1710, 0.0880, 0.5800, 0.3000, 0.4828, 0.0900, 0.2760
0.4800, 1.0000, 0.3300, 0.1230, 0.2530, 0.1636, 0.4400, 0.6000, 0.5425, 0.0970, 0.2520
0.4800, 1.0000, 0.2770, 0.0730, 0.1910, 0.1194, 0.4600, 0.4000, 0.4852, 0.0920, 0.0900
0.5000, 1.0000, 0.2560, 0.1010, 0.2290, 0.1622, 0.4300, 0.5000, 0.4779, 0.1140, 0.1000
0.2100, 0.0000, 0.2010, 0.0630, 0.1350, 0.0690, 0.5400, 0.3000, 0.4094, 0.0890, 0.0550
0.3200, 1.0000, 0.2540, 0.0903, 0.1530, 0.1004, 0.3400, 0.4500, 0.4533, 0.0830, 0.0610
0.5400, 0.0000, 0.2420, 0.0740, 0.2040, 0.1090, 0.8200, 0.2000, 0.4174, 0.1090, 0.0920
0.6100, 1.0000, 0.3270, 0.0970, 0.1770, 0.1184, 0.2900, 0.6000, 0.4997, 0.0870, 0.2590
0.5600, 1.0000, 0.2310, 0.1040, 0.1810, 0.1164, 0.4700, 0.4000, 0.4477, 0.0790, 0.0530
0.3300, 0.0000, 0.2530, 0.0850, 0.1550, 0.0850, 0.5100, 0.3000, 0.4554, 0.0700, 0.1900
0.2700, 0.0000, 0.1960, 0.0780, 0.1280, 0.0680, 0.4300, 0.3000, 0.4443, 0.0710, 0.1420
0.6700, 1.0000, 0.2250, 0.0980, 0.1910, 0.1192, 0.6100, 0.3000, 0.3989, 0.0860, 0.0750
0.3700, 1.0000, 0.2770, 0.0930, 0.1800, 0.1194, 0.3000, 0.6000, 0.5030, 0.0880, 0.1420
0.5800, 0.0000, 0.2570, 0.0990, 0.1570, 0.0916, 0.4900, 0.3000, 0.4407, 0.0930, 0.1550
0.6500, 1.0000, 0.2790, 0.1030, 0.1590, 0.0968, 0.4200, 0.4000, 0.4615, 0.0860, 0.2250
0.3400, 0.0000, 0.2550, 0.0930, 0.2180, 0.1440, 0.5700, 0.4000, 0.4443, 0.0880, 0.0590
0.4600, 0.0000, 0.2490, 0.1150, 0.1980, 0.1296, 0.5400, 0.4000, 0.4277, 0.1030, 0.1040
0.3500, 0.0000, 0.2870, 0.0970, 0.2040, 0.1268, 0.6400, 0.3000, 0.4190, 0.0930, 0.1820
0.3700, 0.0000, 0.2180, 0.0840, 0.1840, 0.1010, 0.7300, 0.3000, 0.3912, 0.0930, 0.1280
0.3700, 0.0000, 0.3020, 0.0870, 0.1660, 0.0960, 0.4000, 0.4150, 0.5011, 0.0870, 0.0520
0.4100, 0.0000, 0.2050, 0.0800, 0.1240, 0.0488, 0.6400, 0.2000, 0.4025, 0.0750, 0.0370
0.6000, 0.0000, 0.2040, 0.1050, 0.1980, 0.0784, 0.9900, 0.2000, 0.4635, 0.0790, 0.1700
0.6600, 1.0000, 0.2400, 0.0980, 0.2360, 0.1464, 0.5800, 0.4000, 0.5063, 0.0960, 0.1700
0.2900, 0.0000, 0.2600, 0.0830, 0.1410, 0.0652, 0.6400, 0.2000, 0.4078, 0.0830, 0.0610
0.3700, 1.0000, 0.2680, 0.0790, 0.1570, 0.0980, 0.2800, 0.6000, 0.5043, 0.0960, 0.1440
0.4100, 1.0000, 0.2570, 0.0830, 0.1810, 0.1066, 0.6600, 0.3000, 0.3738, 0.0850, 0.0520
0.3900, 0.0000, 0.2290, 0.0770, 0.2040, 0.1432, 0.4600, 0.4000, 0.4304, 0.0740, 0.1280
0.6700, 1.0000, 0.2400, 0.0830, 0.1430, 0.0772, 0.4900, 0.3000, 0.4431, 0.0940, 0.0710
0.3600, 1.0000, 0.2410, 0.1120, 0.1930, 0.1250, 0.3500, 0.6000, 0.5106, 0.0950, 0.1630
0.4600, 1.0000, 0.2470, 0.0850, 0.1740, 0.1232, 0.3000, 0.6000, 0.4644, 0.0960, 0.1500
0.6000, 1.0000, 0.2500, 0.0897, 0.1850, 0.1208, 0.4600, 0.4020, 0.4511, 0.0920, 0.0970
0.5900, 1.0000, 0.2360, 0.0830, 0.1650, 0.1000, 0.4700, 0.4000, 0.4500, 0.0920, 0.1600
0.5300, 0.0000, 0.2210, 0.0930, 0.1340, 0.0762, 0.4600, 0.3000, 0.4078, 0.0960, 0.1780
0.4800, 0.0000, 0.1990, 0.0910, 0.1890, 0.1096, 0.6900, 0.3000, 0.3951, 0.1010, 0.0480
0.4800, 0.0000, 0.2950, 0.1310, 0.2070, 0.1322, 0.4700, 0.4000, 0.4935, 0.1060, 0.2700
0.6600, 1.0000, 0.2600, 0.0910, 0.2640, 0.1466, 0.6500, 0.4000, 0.5568, 0.0870, 0.2020
0.5200, 1.0000, 0.2450, 0.0940, 0.2170, 0.1494, 0.4800, 0.5000, 0.4585, 0.0890, 0.1110
0.5200, 1.0000, 0.2660, 0.1110, 0.2090, 0.1264, 0.6100, 0.3000, 0.4682, 0.1090, 0.0850
0.4600, 1.0000, 0.2350, 0.0870, 0.1810, 0.1148, 0.4400, 0.4000, 0.4710, 0.0980, 0.0420
0.4000, 1.0000, 0.2900, 0.1150, 0.0970, 0.0472, 0.3500, 0.2770, 0.4304, 0.0950, 0.1700
0.2200, 0.0000, 0.2300, 0.0730, 0.1610, 0.0978, 0.5400, 0.3000, 0.3829, 0.0910, 0.2000
0.5000, 0.0000, 0.2100, 0.0880, 0.1400, 0.0718, 0.3500, 0.4000, 0.5112, 0.0710, 0.2520
0.2000, 0.0000, 0.2290, 0.0870, 0.1910, 0.1282, 0.5300, 0.4000, 0.3892, 0.0850, 0.1130
0.6800, 0.0000, 0.2750, 0.1070, 0.2410, 0.1496, 0.6400, 0.4000, 0.4920, 0.0900, 0.1430
0.5200, 1.0000, 0.2430, 0.0860, 0.1970, 0.1336, 0.4400, 0.5000, 0.4575, 0.0910, 0.0510
0.4400, 0.0000, 0.2310, 0.0870, 0.2130, 0.1264, 0.7700, 0.3000, 0.3871, 0.0720, 0.0520
0.3800, 0.0000, 0.2730, 0.0810, 0.1460, 0.0816, 0.4700, 0.3000, 0.4466, 0.0810, 0.2100
0.4900, 0.0000, 0.2270, 0.0653, 0.1680, 0.0962, 0.6200, 0.2710, 0.3892, 0.0600, 0.0650
0.6100, 0.0000, 0.3300, 0.0950, 0.1820, 0.1148, 0.5400, 0.3000, 0.4190, 0.0740, 0.1410
0.2900, 1.0000, 0.1940, 0.0830, 0.1520, 0.1058, 0.3900, 0.4000, 0.3584, 0.0830, 0.0550
0.6100, 0.0000, 0.2580, 0.0980, 0.2350, 0.1258, 0.7600, 0.3000, 0.5112, 0.0820, 0.1340
0.3400, 1.0000, 0.2260, 0.0750, 0.1660, 0.0918, 0.6000, 0.3000, 0.4263, 0.1080, 0.0420
0.3600, 0.0000, 0.2190, 0.0890, 0.1890, 0.1052, 0.6800, 0.3000, 0.4369, 0.0960, 0.1110
0.5200, 0.0000, 0.2400, 0.0830, 0.1670, 0.0866, 0.7100, 0.2000, 0.3850, 0.0940, 0.0980
0.6100, 0.0000, 0.3120, 0.0790, 0.2350, 0.1568, 0.4700, 0.5000, 0.5050, 0.0960, 0.1640
0.4300, 0.0000, 0.2680, 0.1230, 0.1930, 0.1022, 0.6700, 0.3000, 0.4779, 0.0940, 0.0480
0.3500, 0.0000, 0.2040, 0.0650, 0.1870, 0.1056, 0.6700, 0.2790, 0.4277, 0.0780, 0.0960
0.2700, 0.0000, 0.2480, 0.0910, 0.1890, 0.1068, 0.6900, 0.3000, 0.4190, 0.0690, 0.0900
0.2900, 0.0000, 0.2100, 0.0710, 0.1560, 0.0970, 0.3800, 0.4000, 0.4654, 0.0900, 0.1620
0.6400, 1.0000, 0.2730, 0.1090, 0.1860, 0.1076, 0.3800, 0.5000, 0.5308, 0.0990, 0.1500
0.4100, 0.0000, 0.3460, 0.0873, 0.2050, 0.1426, 0.4100, 0.5000, 0.4673, 0.1100, 0.2790
0.4900, 1.0000, 0.2590, 0.0910, 0.1780, 0.1066, 0.5200, 0.3000, 0.4575, 0.0750, 0.0920
0.4800, 0.0000, 0.2040, 0.0980, 0.2090, 0.1394, 0.4600, 0.5000, 0.4771, 0.0780, 0.0830
0.5300, 0.0000, 0.2800, 0.0880, 0.2330, 0.1438, 0.5800, 0.4000, 0.5050, 0.0910, 0.1280
0.5300, 1.0000, 0.2220, 0.1130, 0.1970, 0.1152, 0.6700, 0.3000, 0.4304, 0.1000, 0.1020
0.2300, 0.0000, 0.2900, 0.0900, 0.2160, 0.1314, 0.6500, 0.3000, 0.4585, 0.0910, 0.3020
0.6500, 1.0000, 0.3020, 0.0980, 0.2190, 0.1606, 0.4000, 0.5000, 0.4522, 0.0840, 0.1980
0.4100, 0.0000, 0.3240, 0.0940, 0.1710, 0.1044, 0.5600, 0.3000, 0.3970, 0.0760, 0.0950
0.5500, 1.0000, 0.2340, 0.0830, 0.1660, 0.1016, 0.4600, 0.4000, 0.4522, 0.0960, 0.0530
0.2200, 0.0000, 0.1930, 0.0820, 0.1560, 0.0932, 0.5200, 0.3000, 0.3989, 0.0710, 0.1340
0.5600, 0.0000, 0.3100, 0.0787, 0.1870, 0.1414, 0.3400, 0.5500, 0.4060, 0.0900, 0.1440
0.5400, 1.0000, 0.3060, 0.1033, 0.1440, 0.0798, 0.3000, 0.4800, 0.5142, 0.1010, 0.2320
0.5900, 1.0000, 0.2550, 0.0953, 0.1900, 0.1394, 0.3500, 0.5430, 0.4357, 0.1170, 0.0810
0.6000, 1.0000, 0.2340, 0.0880, 0.1530, 0.0898, 0.5800, 0.3000, 0.3258, 0.0950, 0.1040
0.5400, 0.0000, 0.2680, 0.0870, 0.2060, 0.1220, 0.6800, 0.3000, 0.4382, 0.0800, 0.0590
0.2500, 0.0000, 0.2830, 0.0870, 0.1930, 0.1280, 0.4900, 0.4000, 0.4382, 0.0920, 0.2460
0.5400, 1.0000, 0.2770, 0.1130, 0.2000, 0.1284, 0.3700, 0.5000, 0.5153, 0.1130, 0.2970
0.5500, 0.0000, 0.3660, 0.1130, 0.1990, 0.0944, 0.4300, 0.4630, 0.5730, 0.0970, 0.2580
0.4000, 1.0000, 0.2650, 0.0930, 0.2360, 0.1470, 0.3700, 0.7000, 0.5561, 0.0920, 0.2290
0.6200, 1.0000, 0.3180, 0.1150, 0.1990, 0.1286, 0.4400, 0.5000, 0.4883, 0.0980, 0.2750
0.6500, 0.0000, 0.2440, 0.1200, 0.2220, 0.1356, 0.3700, 0.6000, 0.5509, 0.1240, 0.2810
0.3300, 1.0000, 0.2540, 0.1020, 0.2060, 0.1410, 0.3900, 0.5000, 0.4868, 0.1050, 0.1790
0.5300, 0.0000, 0.2200, 0.0940, 0.1750, 0.0880, 0.5900, 0.3000, 0.4942, 0.0980, 0.2000
0.3500, 0.0000, 0.2680, 0.0980, 0.1620, 0.1036, 0.4500, 0.4000, 0.4205, 0.0860, 0.2000
0.6600, 0.0000, 0.2800, 0.1010, 0.1950, 0.1292, 0.4000, 0.5000, 0.4860, 0.0940, 0.1730
0.6200, 1.0000, 0.3390, 0.1010, 0.2210, 0.1564, 0.3500, 0.6000, 0.4997, 0.1030, 0.1800
0.5000, 1.0000, 0.2960, 0.0943, 0.3000, 0.2424, 0.3300, 0.9090, 0.4812, 0.1090, 0.0840
0.4700, 0.0000, 0.2860, 0.0970, 0.1640, 0.0906, 0.5600, 0.3000, 0.4466, 0.0880, 0.1210
0.4700, 1.0000, 0.2560, 0.0940, 0.1650, 0.0748, 0.4000, 0.4000, 0.5526, 0.0930, 0.1610
0.2400, 0.0000, 0.2070, 0.0870, 0.1490, 0.0806, 0.6100, 0.2000, 0.3611, 0.0780, 0.0990
0.5800, 1.0000, 0.2620, 0.0910, 0.2170, 0.1242, 0.7100, 0.3000, 0.4691, 0.0680, 0.1090
0.3400, 0.0000, 0.2060, 0.0870, 0.1850, 0.1122, 0.5800, 0.3000, 0.4304, 0.0740, 0.1150
0.5100, 0.0000, 0.2790, 0.0960, 0.1960, 0.1222, 0.4200, 0.5000, 0.5069, 0.1200, 0.2680
0.3100, 1.0000, 0.3530, 0.1250, 0.1870, 0.1124, 0.4800, 0.4000, 0.4890, 0.1090, 0.2740
0.2200, 0.0000, 0.1990, 0.0750, 0.1750, 0.1086, 0.5400, 0.3000, 0.4127, 0.0720, 0.1580
0.5300, 1.0000, 0.2440, 0.0920, 0.2140, 0.1460, 0.5000, 0.4000, 0.4500, 0.0970, 0.1070
0.3700, 1.0000, 0.2140, 0.0830, 0.1280, 0.0696, 0.4900, 0.3000, 0.3850, 0.0840, 0.0830
0.2800, 0.0000, 0.3040, 0.0850, 0.1980, 0.1156, 0.6700, 0.3000, 0.4344, 0.0800, 0.1030
0.4700, 0.0000, 0.3160, 0.0840, 0.1540, 0.0880, 0.3000, 0.5100, 0.5199, 0.1050, 0.2720
0.2300, 0.0000, 0.1880, 0.0780, 0.1450, 0.0720, 0.6300, 0.2000, 0.3912, 0.0860, 0.0850
0.5000, 0.0000, 0.3100, 0.1230, 0.1780, 0.1050, 0.4800, 0.4000, 0.4828, 0.0880, 0.2800
0.5800, 1.0000, 0.3670, 0.1170, 0.1660, 0.0938, 0.4400, 0.4000, 0.4949, 0.1090, 0.3360
0.5500, 0.0000, 0.3210, 0.1100, 0.1640, 0.0842, 0.4200, 0.4000, 0.5242, 0.0900, 0.2810
0.6000, 1.0000, 0.2770, 0.1070, 0.1670, 0.1146, 0.3800, 0.4000, 0.4277, 0.0950, 0.1180
0.4100, 0.0000, 0.3080, 0.0810, 0.2140, 0.1520, 0.2800, 0.7600, 0.5136, 0.1230, 0.3170
0.6000, 1.0000, 0.2750, 0.1060, 0.2290, 0.1438, 0.5100, 0.4000, 0.5142, 0.0910, 0.2350
0.4000, 0.0000, 0.2690, 0.0920, 0.2030, 0.1198, 0.7000, 0.3000, 0.4190, 0.0810, 0.0600
0.5700, 1.0000, 0.3070, 0.0900, 0.2040, 0.1478, 0.3400, 0.6000, 0.4710, 0.0930, 0.1740
0.3700, 0.0000, 0.3830, 0.1130, 0.1650, 0.0946, 0.5300, 0.3000, 0.4466, 0.0790, 0.2590
0.4000, 1.0000, 0.3190, 0.0950, 0.1980, 0.1356, 0.3800, 0.5000, 0.4804, 0.0930, 0.1780
0.3300, 0.0000, 0.3500, 0.0890, 0.2000, 0.1304, 0.4200, 0.4760, 0.4927, 0.1010, 0.1280
0.3200, 1.0000, 0.2780, 0.0890, 0.2160, 0.1462, 0.5500, 0.4000, 0.4304, 0.0910, 0.0960
0.3500, 1.0000, 0.2590, 0.0810, 0.1740, 0.1024, 0.3100, 0.6000, 0.5313, 0.0820, 0.1260
0.5500, 0.0000, 0.3290, 0.1020, 0.1640, 0.1062, 0.4100, 0.4000, 0.4431, 0.0890, 0.2880
0.4900, 0.0000, 0.2600, 0.0930, 0.1830, 0.1002, 0.6400, 0.3000, 0.4543, 0.0880, 0.0880
0.3900, 1.0000, 0.2630, 0.1150, 0.2180, 0.1582, 0.3200, 0.7000, 0.4935, 0.1090, 0.2920
0.6000, 1.0000, 0.2230, 0.1130, 0.1860, 0.1258, 0.4600, 0.4000, 0.4263, 0.0940, 0.0710
0.6700, 1.0000, 0.2830, 0.0930, 0.2040, 0.1322, 0.4900, 0.4000, 0.4736, 0.0920, 0.1970
0.4100, 1.0000, 0.3200, 0.1090, 0.2510, 0.1706, 0.4900, 0.5000, 0.5056, 0.1030, 0.1860
0.4400, 0.0000, 0.2540, 0.0950, 0.1620, 0.0926, 0.5300, 0.3000, 0.4407, 0.0830, 0.0250
0.4800, 1.0000, 0.2330, 0.0893, 0.2120, 0.1428, 0.4600, 0.4610, 0.4754, 0.0980, 0.0840
0.4500, 0.0000, 0.2030, 0.0743, 0.1900, 0.1262, 0.4900, 0.3880, 0.4304, 0.0790, 0.0960
0.4700, 0.0000, 0.3040, 0.1200, 0.1990, 0.1200, 0.4600, 0.4000, 0.5106, 0.0870, 0.1950
0.4600, 0.0000, 0.2060, 0.0730, 0.1720, 0.1070, 0.5100, 0.3000, 0.4249, 0.0800, 0.0530
0.3600, 1.0000, 0.3230, 0.1150, 0.2860, 0.1994, 0.3900, 0.7000, 0.5472, 0.1120, 0.2170
0.3400, 0.0000, 0.2920, 0.0730, 0.1720, 0.1082, 0.4900, 0.4000, 0.4304, 0.0910, 0.1720
0.5300, 1.0000, 0.3310, 0.1170, 0.1830, 0.1190, 0.4800, 0.4000, 0.4382, 0.1060, 0.1310
0.6100, 0.0000, 0.2460, 0.1010, 0.2090, 0.1068, 0.7700, 0.3000, 0.4836, 0.0880, 0.2140
0.3700, 0.0000, 0.2020, 0.0810, 0.1620, 0.0878, 0.6300, 0.3000, 0.4025, 0.0880, 0.0590
0.3300, 1.0000, 0.2080, 0.0840, 0.1250, 0.0702, 0.4600, 0.3000, 0.3784, 0.0660, 0.0700
0.6800, 0.0000, 0.3280, 0.1057, 0.2050, 0.1164, 0.4000, 0.5130, 0.5493, 0.1170, 0.2200
0.4900, 1.0000, 0.3190, 0.0940, 0.2340, 0.1558, 0.3400, 0.7000, 0.5398, 0.1220, 0.2680
0.4800, 0.0000, 0.2390, 0.1090, 0.2320, 0.1052, 0.3700, 0.6000, 0.6107, 0.0960, 0.1520
0.5500, 1.0000, 0.2450, 0.0840, 0.1790, 0.1058, 0.6600, 0.3000, 0.3584, 0.0870, 0.0470
0.4300, 0.0000, 0.2210, 0.0660, 0.1340, 0.0772, 0.4500, 0.3000, 0.4078, 0.0800, 0.0740
0.6000, 1.0000, 0.3300, 0.0970, 0.2170, 0.1256, 0.4500, 0.5000, 0.5447, 0.1120, 0.2950
0.3100, 1.0000, 0.1900, 0.0930, 0.1370, 0.0730, 0.4700, 0.3000, 0.4443, 0.0780, 0.1010
0.5300, 1.0000, 0.2730, 0.0820, 0.1190, 0.0550, 0.3900, 0.3000, 0.4828, 0.0930, 0.1510
0.6700, 0.0000, 0.2280, 0.0870, 0.1660, 0.0986, 0.5200, 0.3000, 0.4344, 0.0920, 0.1270
0.6100, 1.0000, 0.2820, 0.1060, 0.2040, 0.1320, 0.5200, 0.4000, 0.4605, 0.0960, 0.2370
0.6200, 0.0000, 0.2890, 0.0873, 0.2060, 0.1272, 0.3300, 0.6240, 0.5434, 0.0990, 0.2250
0.6000, 0.0000, 0.2560, 0.0870, 0.2070, 0.1258, 0.6900, 0.3000, 0.4111, 0.0840, 0.0810
0.4200, 0.0000, 0.2490, 0.0910, 0.2040, 0.1418, 0.3800, 0.5000, 0.4796, 0.0890, 0.1510
0.3800, 1.0000, 0.2680, 0.1050, 0.1810, 0.1192, 0.3700, 0.5000, 0.4820, 0.0910, 0.1070
0.6200, 0.0000, 0.2240, 0.0790, 0.2220, 0.1474, 0.5900, 0.4000, 0.4357, 0.0760, 0.0640
0.6100, 1.0000, 0.2690, 0.1110, 0.2360, 0.1724, 0.3900, 0.6000, 0.4812, 0.0890, 0.1380
0.6100, 1.0000, 0.2310, 0.1130, 0.1860, 0.1144, 0.4700, 0.4000, 0.4812, 0.1050, 0.1850
0.5300, 0.0000, 0.2860, 0.0880, 0.1710, 0.0988, 0.4100, 0.4000, 0.5050, 0.0990, 0.2650
0.2800, 1.0000, 0.2470, 0.0970, 0.1750, 0.0996, 0.3200, 0.5000, 0.5380, 0.0870, 0.1010
0.2600, 1.0000, 0.3030, 0.0890, 0.2180, 0.1522, 0.3100, 0.7000, 0.5159, 0.0820, 0.1370
0.3000, 0.0000, 0.2130, 0.0870, 0.1340, 0.0630, 0.6300, 0.2000, 0.3689, 0.0660, 0.1430
0.5000, 0.0000, 0.2610, 0.1090, 0.2430, 0.1606, 0.6200, 0.4000, 0.4625, 0.0890, 0.1410
0.4800, 0.0000, 0.2020, 0.0950, 0.1870, 0.1174, 0.5300, 0.4000, 0.4419, 0.0850, 0.0790
0.5100, 0.0000, 0.2520, 0.1030, 0.1760, 0.1122, 0.3700, 0.5000, 0.4898, 0.0900, 0.2920
0.4700, 1.0000, 0.2250, 0.0820, 0.1310, 0.0668, 0.4100, 0.3000, 0.4754, 0.0890, 0.1780
0.6400, 1.0000, 0.2350, 0.0970, 0.2030, 0.1290, 0.5900, 0.3000, 0.4318, 0.0770, 0.0910
0.5100, 1.0000, 0.2590, 0.0760, 0.2400, 0.1690, 0.3900, 0.6000, 0.5075, 0.0960, 0.1160
0.3000, 0.0000, 0.2090, 0.1040, 0.1520, 0.0838, 0.4700, 0.3000, 0.4663, 0.0970, 0.0860
0.5600, 1.0000, 0.2870, 0.0990, 0.2080, 0.1464, 0.3900, 0.5000, 0.4727, 0.0970, 0.1220
0.4200, 0.0000, 0.2210, 0.0850, 0.2130, 0.1386, 0.6000, 0.4000, 0.4277, 0.0940, 0.0720
0.6200, 1.0000, 0.2670, 0.1150, 0.1830, 0.1240, 0.3500, 0.5000, 0.4788, 0.1000, 0.1290
0.3400, 0.0000, 0.3140, 0.0870, 0.1490, 0.0938, 0.4600, 0.3000, 0.3829, 0.0770, 0.1420
0.6000, 0.0000, 0.2220, 0.1047, 0.2210, 0.1054, 0.6000, 0.3680, 0.5628, 0.0930, 0.0900
0.6400, 0.0000, 0.2100, 0.0923, 0.2270, 0.1468, 0.6500, 0.3490, 0.4331, 0.1020, 0.1580
0.3900, 1.0000, 0.2120, 0.0900, 0.1820, 0.1104, 0.6000, 0.3000, 0.4060, 0.0980, 0.0390
0.7100, 1.0000, 0.2650, 0.1050, 0.2810, 0.1736, 0.5500, 0.5000, 0.5568, 0.0840, 0.1960
0.4800, 1.0000, 0.2920, 0.1100, 0.2180, 0.1516, 0.3900, 0.6000, 0.4920, 0.0980, 0.2220
0.7900, 1.0000, 0.2700, 0.1030, 0.1690, 0.1108, 0.3700, 0.5000, 0.4663, 0.1100, 0.2770
0.4000, 0.0000, 0.3070, 0.0990, 0.1770, 0.0854, 0.5000, 0.4000, 0.5338, 0.0850, 0.0990
0.4900, 1.0000, 0.2880, 0.0920, 0.2070, 0.1400, 0.4400, 0.5000, 0.4745, 0.0920, 0.1960
0.5100, 0.0000, 0.3060, 0.1030, 0.1980, 0.1066, 0.5700, 0.3000, 0.5148, 0.1000, 0.2020
0.5700, 0.0000, 0.3010, 0.1170, 0.2020, 0.1396, 0.4200, 0.5000, 0.4625, 0.1200, 0.1550
0.5900, 1.0000, 0.2470, 0.1140, 0.1520, 0.1048, 0.2900, 0.5000, 0.4511, 0.0880, 0.0770
0.5100, 0.0000, 0.2770, 0.0990, 0.2290, 0.1456, 0.6900, 0.3000, 0.4277, 0.0770, 0.1910
0.7400, 0.0000, 0.2980, 0.1010, 0.1710, 0.1048, 0.5000, 0.3000, 0.4394, 0.0860, 0.0700
0.6700, 0.0000, 0.2670, 0.1050, 0.2250, 0.1354, 0.6900, 0.3000, 0.4635, 0.0960, 0.0730
0.4900, 0.0000, 0.1980, 0.0880, 0.1880, 0.1148, 0.5700, 0.3000, 0.4394, 0.0930, 0.0490
0.5700, 0.0000, 0.2330, 0.0880, 0.1550, 0.0636, 0.7800, 0.2000, 0.4205, 0.0780, 0.0650
0.5600, 1.0000, 0.3510, 0.1230, 0.1640, 0.0950, 0.3800, 0.4000, 0.5043, 0.1170, 0.2630
0.5200, 1.0000, 0.2970, 0.1090, 0.2280, 0.1628, 0.3100, 0.8000, 0.5142, 0.1030, 0.2480
0.6900, 0.0000, 0.2930, 0.1240, 0.2230, 0.1390, 0.5400, 0.4000, 0.5011, 0.1020, 0.2960
0.3700, 0.0000, 0.2030, 0.0830, 0.1850, 0.1246, 0.3800, 0.5000, 0.4719, 0.0880, 0.2140
0.2400, 0.0000, 0.2250, 0.0890, 0.1410, 0.0680, 0.5200, 0.3000, 0.4654, 0.0840, 0.1850
0.5500, 1.0000, 0.2270, 0.0930, 0.1540, 0.0942, 0.5300, 0.3000, 0.3526, 0.0750, 0.0780
0.3600, 0.0000, 0.2280, 0.0870, 0.1780, 0.1160, 0.4100, 0.4000, 0.4654, 0.0820, 0.0930
0.4200, 1.0000, 0.2400, 0.1070, 0.1500, 0.0850, 0.4400, 0.3000, 0.4654, 0.0960, 0.2520
0.2100, 0.0000, 0.2420, 0.0760, 0.1470, 0.0770, 0.5300, 0.3000, 0.4443, 0.0790, 0.1500
0.4100, 0.0000, 0.2020, 0.0620, 0.1530, 0.0890, 0.5000, 0.3000, 0.4249, 0.0890, 0.0770
0.5700, 1.0000, 0.2940, 0.1090, 0.1600, 0.0876, 0.3100, 0.5000, 0.5333, 0.0920, 0.2080
0.2000, 1.0000, 0.2210, 0.0870, 0.1710, 0.0996, 0.5800, 0.3000, 0.4205, 0.0780, 0.0770
0.6700, 1.0000, 0.2360, 0.1113, 0.1890, 0.1054, 0.7000, 0.2700, 0.4220, 0.0930, 0.1080
0.3400, 0.0000, 0.2520, 0.0770, 0.1890, 0.1206, 0.5300, 0.4000, 0.4344, 0.0790, 0.1600
0.4100, 1.0000, 0.2490, 0.0860, 0.1920, 0.1150, 0.6100, 0.3000, 0.4382, 0.0940, 0.0530
0.3800, 1.0000, 0.3300, 0.0780, 0.3010, 0.2150, 0.5000, 0.6020, 0.5193, 0.1080, 0.2200
0.5100, 0.0000, 0.2350, 0.1010, 0.1950, 0.1210, 0.5100, 0.4000, 0.4745, 0.0940, 0.1540
0.5200, 1.0000, 0.2640, 0.0913, 0.2180, 0.1520, 0.3900, 0.5590, 0.4905, 0.0990, 0.2590
0.6700, 0.0000, 0.2980, 0.0800, 0.1720, 0.0934, 0.6300, 0.3000, 0.4357, 0.0820, 0.0900
0.6100, 0.0000, 0.3000, 0.1080, 0.1940, 0.1000, 0.5200, 0.3730, 0.5347, 0.1050, 0.2460
0.6700, 1.0000, 0.2500, 0.1117, 0.1460, 0.0934, 0.3300, 0.4420, 0.4585, 0.1030, 0.1240
0.5600, 0.0000, 0.2700, 0.1050, 0.2470, 0.1606, 0.5400, 0.5000, 0.5088, 0.0940, 0.0670
0.6400, 0.0000, 0.2000, 0.0747, 0.1890, 0.1148, 0.6200, 0.3050, 0.4111, 0.0910, 0.0720
0.5800, 1.0000, 0.2550, 0.1120, 0.1630, 0.1106, 0.2900, 0.6000, 0.4762, 0.0860, 0.2570
0.5500, 0.0000, 0.2820, 0.0910, 0.2500, 0.1402, 0.6700, 0.4000, 0.5366, 0.1030, 0.2620
0.6200, 1.0000, 0.3330, 0.1140, 0.1820, 0.1140, 0.3800, 0.5000, 0.5011, 0.0960, 0.2750
0.5700, 1.0000, 0.2560, 0.0960, 0.2000, 0.1330, 0.5200, 0.3850, 0.4318, 0.1050, 0.1770
0.2000, 1.0000, 0.2420, 0.0880, 0.1260, 0.0722, 0.4500, 0.3000, 0.3784, 0.0740, 0.0710
0.5300, 1.0000, 0.2210, 0.0980, 0.1650, 0.1052, 0.4700, 0.4000, 0.4159, 0.0810, 0.0470
0.3200, 1.0000, 0.3140, 0.0890, 0.1530, 0.0842, 0.5600, 0.3000, 0.4159, 0.0900, 0.1870
0.4100, 0.0000, 0.2310, 0.0860, 0.1480, 0.0780, 0.5800, 0.3000, 0.4094, 0.0600, 0.1250
0.6000, 0.0000, 0.2340, 0.0767, 0.2470, 0.1480, 0.6500, 0.3800, 0.5136, 0.0770, 0.0780
0.2600, 0.0000, 0.1880, 0.0830, 0.1910, 0.1036, 0.6900, 0.3000, 0.4522, 0.0690, 0.0510
0.3700, 0.0000, 0.3080, 0.1120, 0.2820, 0.1972, 0.4300, 0.7000, 0.5342, 0.1010, 0.2580
0.4500, 0.0000, 0.3200, 0.1100, 0.2240, 0.1342, 0.4500, 0.5000, 0.5412, 0.0930, 0.2150
0.6700, 0.0000, 0.3160, 0.1160, 0.1790, 0.0904, 0.4100, 0.4000, 0.5472, 0.1000, 0.3030
0.3400, 1.0000, 0.3550, 0.1200, 0.2330, 0.1466, 0.3400, 0.7000, 0.5568, 0.1010, 0.2430
0.5000, 0.0000, 0.3190, 0.0783, 0.2070, 0.1492, 0.3800, 0.5450, 0.4595, 0.0840, 0.0910
0.7100, 0.0000, 0.2950, 0.0970, 0.2270, 0.1516, 0.4500, 0.5000, 0.5024, 0.1080, 0.1500
0.5700, 1.0000, 0.3160, 0.1170, 0.2250, 0.1076, 0.4000, 0.6000, 0.5958, 0.1130, 0.3100
0.4900, 0.0000, 0.2030, 0.0930, 0.1840, 0.1030, 0.6100, 0.3000, 0.4605, 0.0930, 0.1530
0.3500, 0.0000, 0.4130, 0.0810, 0.1680, 0.1028, 0.3700, 0.5000, 0.4949, 0.0940, 0.3460
0.4100, 1.0000, 0.2120, 0.1020, 0.1840, 0.1004, 0.6400, 0.3000, 0.4585, 0.0790, 0.0630
0.7000, 1.0000, 0.2410, 0.0823, 0.1940, 0.1492, 0.3100, 0.6260, 0.4234, 0.1050, 0.0890
0.5200, 0.0000, 0.2300, 0.1070, 0.1790, 0.1237, 0.4250, 0.4210, 0.4159, 0.0930, 0.0500
0.6000, 0.0000, 0.2560, 0.0780, 0.1950, 0.0954, 0.9100, 0.2000, 0.3761, 0.0870, 0.0390
0.6200, 0.0000, 0.2250, 0.1250, 0.2150, 0.0990, 0.9800, 0.2000, 0.4500, 0.0950, 0.1030
0.4400, 1.0000, 0.3820, 0.1230, 0.2010, 0.1266, 0.4400, 0.5000, 0.5024, 0.0920, 0.3080
0.2800, 1.0000, 0.1920, 0.0810, 0.1550, 0.0946, 0.5100, 0.3000, 0.3850, 0.0870, 0.1160
0.5800, 1.0000, 0.2900, 0.0850, 0.1560, 0.1092, 0.3600, 0.4000, 0.3989, 0.0860, 0.1450
0.3900, 1.0000, 0.2400, 0.0897, 0.1900, 0.1136, 0.5200, 0.3650, 0.4804, 0.1010, 0.0740
0.3400, 1.0000, 0.2060, 0.0980, 0.1830, 0.0920, 0.8300, 0.2000, 0.3689, 0.0920, 0.0450
0.6500, 0.0000, 0.2630, 0.0700, 0.2440, 0.1662, 0.5100, 0.5000, 0.4898, 0.0980, 0.1150
0.6600, 1.0000, 0.3460, 0.1150, 0.2040, 0.1394, 0.3600, 0.6000, 0.4963, 0.1090, 0.2640
0.5100, 0.0000, 0.2340, 0.0870, 0.2200, 0.1088, 0.9300, 0.2000, 0.4511, 0.0820, 0.0870
0.5000, 1.0000, 0.2920, 0.1190, 0.1620, 0.0852, 0.5400, 0.3000, 0.4736, 0.0950, 0.2020
0.5900, 1.0000, 0.2720, 0.1070, 0.1580, 0.1020, 0.3900, 0.4000, 0.4443, 0.0930, 0.1270
0.5200, 0.0000, 0.2700, 0.0783, 0.1340, 0.0730, 0.4400, 0.3050, 0.4443, 0.0690, 0.1820
0.6900, 1.0000, 0.2450, 0.1080, 0.2430, 0.1364, 0.4000, 0.6000, 0.5808, 0.1000, 0.2410
0.5300, 0.0000, 0.2410, 0.1050, 0.1840, 0.1134, 0.4600, 0.4000, 0.4812, 0.0950, 0.0660
0.4700, 1.0000, 0.2530, 0.0980, 0.1730, 0.1056, 0.4400, 0.4000, 0.4762, 0.1080, 0.0940
0.5200, 0.0000, 0.2880, 0.1130, 0.2800, 0.1740, 0.6700, 0.4000, 0.5273, 0.0860, 0.2830
0.3900, 0.0000, 0.2090, 0.0950, 0.1500, 0.0656, 0.6800, 0.2000, 0.4407, 0.0950, 0.0640
0.6700, 1.0000, 0.2300, 0.0700, 0.1840, 0.1280, 0.3500, 0.5000, 0.4654, 0.0990, 0.1020
0.5900, 1.0000, 0.2410, 0.0960, 0.1700, 0.0986, 0.5400, 0.3000, 0.4466, 0.0850, 0.2000
0.5100, 1.0000, 0.2810, 0.1060, 0.2020, 0.1222, 0.5500, 0.4000, 0.4820, 0.0870, 0.2650
0.2300, 1.0000, 0.1800, 0.0780, 0.1710, 0.0960, 0.4800, 0.4000, 0.4905, 0.0920, 0.0940
0.6800, 0.0000, 0.2590, 0.0930, 0.2530, 0.1812, 0.5300, 0.5000, 0.4543, 0.0980, 0.2300
0.4400, 0.0000, 0.2150, 0.0850, 0.1570, 0.0922, 0.5500, 0.3000, 0.3892, 0.0840, 0.1810
0.6000, 1.0000, 0.2430, 0.1030, 0.1410, 0.0866, 0.3300, 0.4000, 0.4673, 0.0780, 0.1560
0.5200, 0.0000, 0.2450, 0.0900, 0.1980, 0.1290, 0.2900, 0.7000, 0.5298, 0.0860, 0.2330
0.3800, 0.0000, 0.2130, 0.0720, 0.1650, 0.0602, 0.8800, 0.2000, 0.4431, 0.0900, 0.0600
0.6100, 0.0000, 0.2580, 0.0900, 0.2800, 0.1954, 0.5500, 0.5000, 0.4997, 0.0900, 0.2190
0.6800, 1.0000, 0.2480, 0.1010, 0.2210, 0.1514, 0.6000, 0.4000, 0.3871, 0.0870, 0.0800
0.2800, 1.0000, 0.3150, 0.0830, 0.2280, 0.1494, 0.3800, 0.6000, 0.5313, 0.0830, 0.0680
0.6500, 1.0000, 0.3350, 0.1020, 0.1900, 0.1262, 0.3500, 0.5000, 0.4970, 0.1020, 0.3320
0.6900, 0.0000, 0.2810, 0.1130, 0.2340, 0.1428, 0.5200, 0.4000, 0.5278, 0.0770, 0.2480
0.5100, 0.0000, 0.2430, 0.0853, 0.1530, 0.0716, 0.7100, 0.2150, 0.3951, 0.0820, 0.0840
0.2900, 0.0000, 0.3500, 0.0983, 0.2040, 0.1426, 0.5000, 0.4080, 0.4043, 0.0910, 0.2000
0.5500, 1.0000, 0.2350, 0.0930, 0.1770, 0.1268, 0.4100, 0.4000, 0.3829, 0.0830, 0.0550
0.3400, 1.0000, 0.3000, 0.0830, 0.1850, 0.1072, 0.5300, 0.3000, 0.4820, 0.0920, 0.0850
0.6700, 0.0000, 0.2070, 0.0830, 0.1700, 0.0998, 0.5900, 0.3000, 0.4025, 0.0770, 0.0890
0.4900, 0.0000, 0.2560, 0.0760, 0.1610, 0.0998, 0.5100, 0.3000, 0.3932, 0.0780, 0.0310
0.5500, 1.0000, 0.2290, 0.0810, 0.1230, 0.0672, 0.4100, 0.3000, 0.4304, 0.0880, 0.1290
0.5900, 1.0000, 0.2510, 0.0900, 0.1630, 0.1014, 0.4600, 0.4000, 0.4357, 0.0910, 0.0830
0.5300, 0.0000, 0.3320, 0.0827, 0.1860, 0.1068, 0.4600, 0.4040, 0.5112, 0.1020, 0.2750
0.4800, 1.0000, 0.2410, 0.1100, 0.2090, 0.1346, 0.5800, 0.4000, 0.4407, 0.1000, 0.0650
0.5200, 0.0000, 0.2950, 0.1043, 0.2110, 0.1328, 0.4900, 0.4310, 0.4984, 0.0980, 0.1980
0.6900, 0.0000, 0.2960, 0.1220, 0.2310, 0.1284, 0.5600, 0.4000, 0.5451, 0.0860, 0.2360
0.6000, 1.0000, 0.2280, 0.1100, 0.2450, 0.1898, 0.3900, 0.6000, 0.4394, 0.0880, 0.2530
0.4600, 1.0000, 0.2270, 0.0830, 0.1830, 0.1258, 0.3200, 0.6000, 0.4836, 0.0750, 0.1240
0.5100, 1.0000, 0.2620, 0.1010, 0.1610, 0.0996, 0.4800, 0.3000, 0.4205, 0.0880, 0.0440
0.6700, 1.0000, 0.2350, 0.0960, 0.2070, 0.1382, 0.4200, 0.5000, 0.4898, 0.1110, 0.1720
0.4900, 0.0000, 0.2210, 0.0850, 0.1360, 0.0634, 0.6200, 0.2190, 0.3970, 0.0720, 0.1140
0.4600, 1.0000, 0.2650, 0.0940, 0.2470, 0.1602, 0.5900, 0.4000, 0.4935, 0.1110, 0.1420
0.4700, 0.0000, 0.3240, 0.1050, 0.1880, 0.1250, 0.4600, 0.4090, 0.4443, 0.0990, 0.1090
0.7500, 0.0000, 0.3010, 0.0780, 0.2220, 0.1542, 0.4400, 0.5050, 0.4779, 0.0970, 0.1800
0.2800, 0.0000, 0.2420, 0.0930, 0.1740, 0.1064, 0.5400, 0.3000, 0.4220, 0.0840, 0.1440
0.6500, 1.0000, 0.3130, 0.1100, 0.2130, 0.1280, 0.4700, 0.5000, 0.5247, 0.0910, 0.1630
0.4200, 0.0000, 0.3010, 0.0910, 0.1820, 0.1148, 0.4900, 0.4000, 0.4511, 0.0820, 0.1470
0.5100, 0.0000, 0.2450, 0.0790, 0.2120, 0.1286, 0.6500, 0.3000, 0.4522, 0.0910, 0.0970
0.5300, 1.0000, 0.2770, 0.0950, 0.1900, 0.1018, 0.4100, 0.5000, 0.5464, 0.1010, 0.2200
0.5400, 0.0000, 0.2320, 0.1107, 0.2380, 0.1628, 0.4800, 0.4960, 0.4913, 0.1080, 0.1900
0.7300, 0.0000, 0.2700, 0.1020, 0.2110, 0.1210, 0.6700, 0.3000, 0.4745, 0.0990, 0.1090
0.5400, 0.0000, 0.2680, 0.1080, 0.1760, 0.0806, 0.6700, 0.3000, 0.4956, 0.1060, 0.1910
0.4200, 0.0000, 0.2920, 0.0930, 0.2490, 0.1742, 0.4500, 0.6000, 0.5004, 0.0920, 0.1220
0.7500, 0.0000, 0.3120, 0.1177, 0.2290, 0.1388, 0.2900, 0.7900, 0.5724, 0.1060, 0.2300
0.5500, 1.0000, 0.3210, 0.1127, 0.2070, 0.0924, 0.2500, 0.8280, 0.6105, 0.1110, 0.2420
0.6800, 1.0000, 0.2570, 0.1090, 0.2330, 0.1126, 0.3500, 0.7000, 0.6057, 0.1050, 0.2480
0.5700, 0.0000, 0.2690, 0.0980, 0.2460, 0.1652, 0.3800, 0.7000, 0.5366, 0.0960, 0.2490
0.4800, 0.0000, 0.3140, 0.0753, 0.2420, 0.1516, 0.3800, 0.6370, 0.5568, 0.1030, 0.1920
0.6100, 1.0000, 0.2560, 0.0850, 0.1840, 0.1162, 0.3900, 0.5000, 0.4970, 0.0980, 0.1310
0.6900, 0.0000, 0.3700, 0.1030, 0.2070, 0.1314, 0.5500, 0.4000, 0.4635, 0.0900, 0.2370
0.3800, 0.0000, 0.3260, 0.0770, 0.1680, 0.1006, 0.4700, 0.4000, 0.4625, 0.0960, 0.0780
0.4500, 1.0000, 0.2120, 0.0940, 0.1690, 0.0968, 0.5500, 0.3000, 0.4454, 0.1020, 0.1350
0.5100, 1.0000, 0.2920, 0.1070, 0.1870, 0.1390, 0.3200, 0.6000, 0.4382, 0.0950, 0.2440
0.7100, 1.0000, 0.2400, 0.0840, 0.1380, 0.0858, 0.3900, 0.4000, 0.4190, 0.0900, 0.1990
0.5700, 0.0000, 0.3610, 0.1170, 0.1810, 0.1082, 0.3400, 0.5000, 0.5268, 0.1000, 0.2700
0.5600, 1.0000, 0.2580, 0.1030, 0.1770, 0.1144, 0.3400, 0.5000, 0.4963, 0.0990, 0.1640
0.3200, 1.0000, 0.2200, 0.0880, 0.1370, 0.0786, 0.4800, 0.3000, 0.3951, 0.0780, 0.0720
0.5000, 0.0000, 0.2190, 0.0910, 0.1900, 0.1112, 0.6700, 0.3000, 0.4078, 0.0770, 0.0960
0.4300, 0.0000, 0.3430, 0.0840, 0.2560, 0.1726, 0.3300, 0.8000, 0.5529, 0.1040, 0.3060
0.5400, 1.0000, 0.2520, 0.1150, 0.1810, 0.1200, 0.3900, 0.5000, 0.4701, 0.0920, 0.0910
0.3100, 0.0000, 0.2330, 0.0850, 0.1900, 0.1308, 0.4300, 0.4000, 0.4394, 0.0770, 0.2140
0.5600, 0.0000, 0.2570, 0.0800, 0.2440, 0.1516, 0.5900, 0.4000, 0.5118, 0.0950, 0.0950
0.4400, 0.0000, 0.2510, 0.1330, 0.1820, 0.1130, 0.5500, 0.3000, 0.4249, 0.0840, 0.2160
0.5700, 1.0000, 0.3190, 0.1110, 0.1730, 0.1162, 0.4100, 0.4000, 0.4369, 0.0870, 0.2630

Test data:

# diabetes_norm_test_100.txt
#
0.6400, 1.0000, 0.2840, 0.1110, 0.1840, 0.1270, 0.4100, 0.4000, 0.4382, 0.0970, 0.1780
0.4300, 0.0000, 0.2810, 0.1210, 0.1920, 0.1210, 0.6000, 0.3000, 0.4007, 0.0930, 0.1130
0.1900, 0.0000, 0.2530, 0.0830, 0.2250, 0.1566, 0.4600, 0.5000, 0.4719, 0.0840, 0.2000
0.7100, 1.0000, 0.2610, 0.0850, 0.2200, 0.1524, 0.4700, 0.5000, 0.4635, 0.0910, 0.1390
0.5000, 1.0000, 0.2800, 0.1040, 0.2820, 0.1968, 0.4400, 0.6000, 0.5328, 0.0950, 0.1390
0.5900, 1.0000, 0.2360, 0.0730, 0.1800, 0.1074, 0.5100, 0.4000, 0.4682, 0.0840, 0.0880
0.5700, 0.0000, 0.2450, 0.0930, 0.1860, 0.0966, 0.7100, 0.3000, 0.4522, 0.0910, 0.1480
0.4900, 1.0000, 0.2100, 0.0820, 0.1190, 0.0854, 0.2300, 0.5000, 0.3970, 0.0740, 0.0880
0.4100, 1.0000, 0.3200, 0.1260, 0.1980, 0.1042, 0.4900, 0.4000, 0.5412, 0.1240, 0.2430
0.2500, 1.0000, 0.2260, 0.0850, 0.1300, 0.0710, 0.4800, 0.3000, 0.4007, 0.0810, 0.0710
0.5200, 1.0000, 0.1970, 0.0810, 0.1520, 0.0534, 0.8200, 0.2000, 0.4419, 0.0820, 0.0770
0.3400, 0.0000, 0.2120, 0.0840, 0.2540, 0.1134, 0.5200, 0.5000, 0.6094, 0.0920, 0.1090
0.4200, 1.0000, 0.3060, 0.1010, 0.2690, 0.1722, 0.5000, 0.5000, 0.5455, 0.1060, 0.2720
0.2800, 1.0000, 0.2550, 0.0990, 0.1620, 0.1016, 0.4600, 0.4000, 0.4277, 0.0940, 0.0600
0.4700, 1.0000, 0.2330, 0.0900, 0.1950, 0.1258, 0.5400, 0.4000, 0.4331, 0.0730, 0.0540
0.3200, 1.0000, 0.3100, 0.1000, 0.1770, 0.0962, 0.4500, 0.4000, 0.5187, 0.0770, 0.2210
0.4300, 0.0000, 0.1850, 0.0870, 0.1630, 0.0936, 0.6100, 0.2670, 0.3738, 0.0800, 0.0900
0.5900, 1.0000, 0.2690, 0.1040, 0.1940, 0.1266, 0.4300, 0.5000, 0.4804, 0.1060, 0.3110
0.5300, 0.0000, 0.2830, 0.1010, 0.1790, 0.1070, 0.4800, 0.4000, 0.4788, 0.1010, 0.2810
0.6000, 0.0000, 0.2570, 0.1030, 0.1580, 0.0846, 0.6400, 0.2000, 0.3850, 0.0970, 0.1820
0.5400, 1.0000, 0.3610, 0.1150, 0.1630, 0.0984, 0.4300, 0.4000, 0.4682, 0.1010, 0.3210
0.3500, 1.0000, 0.2410, 0.0947, 0.1550, 0.0974, 0.3200, 0.4840, 0.4852, 0.0940, 0.0580
0.4900, 1.0000, 0.2580, 0.0890, 0.1820, 0.1186, 0.3900, 0.5000, 0.4804, 0.1150, 0.2620
0.5800, 0.0000, 0.2280, 0.0910, 0.1960, 0.1188, 0.4800, 0.4000, 0.4984, 0.1150, 0.2060
0.3600, 1.0000, 0.3910, 0.0900, 0.2190, 0.1358, 0.3800, 0.6000, 0.5421, 0.1030, 0.2330
0.4600, 1.0000, 0.4220, 0.0990, 0.2110, 0.1370, 0.4400, 0.5000, 0.5011, 0.0990, 0.2420
0.4400, 1.0000, 0.2660, 0.0990, 0.2050, 0.1090, 0.4300, 0.5000, 0.5580, 0.1110, 0.1230
0.4600, 0.0000, 0.2990, 0.0830, 0.1710, 0.1130, 0.3800, 0.4500, 0.4585, 0.0980, 0.1670
0.5400, 0.0000, 0.2100, 0.0780, 0.1880, 0.1074, 0.7000, 0.3000, 0.3970, 0.0730, 0.0630
0.6300, 1.0000, 0.2550, 0.1090, 0.2260, 0.1032, 0.4600, 0.5000, 0.5951, 0.0870, 0.1970
0.4100, 1.0000, 0.2420, 0.0900, 0.1990, 0.1236, 0.5700, 0.4000, 0.4522, 0.0860, 0.0710
0.2800, 0.0000, 0.2540, 0.0930, 0.1410, 0.0790, 0.4900, 0.3000, 0.4174, 0.0910, 0.1680
0.1900, 0.0000, 0.2320, 0.0750, 0.1430, 0.0704, 0.5200, 0.3000, 0.4635, 0.0720, 0.1400
0.6100, 1.0000, 0.2610, 0.1260, 0.2150, 0.1298, 0.5700, 0.4000, 0.4949, 0.0960, 0.2170
0.4800, 0.0000, 0.3270, 0.0930, 0.2760, 0.1986, 0.4300, 0.6420, 0.5148, 0.0910, 0.1210
0.5400, 1.0000, 0.2730, 0.1000, 0.2000, 0.1440, 0.3300, 0.6000, 0.4745, 0.0760, 0.2350
0.5300, 1.0000, 0.2660, 0.0930, 0.1850, 0.1224, 0.3600, 0.5000, 0.4890, 0.0820, 0.2450
0.4800, 0.0000, 0.2280, 0.1010, 0.1100, 0.0416, 0.5600, 0.2000, 0.4127, 0.0970, 0.0400
0.5300, 0.0000, 0.2880, 0.1117, 0.1450, 0.0872, 0.4600, 0.3150, 0.4078, 0.0850, 0.0520
0.2900, 1.0000, 0.1810, 0.0730, 0.1580, 0.0990, 0.4100, 0.4000, 0.4500, 0.0780, 0.1040
0.6200, 0.0000, 0.3200, 0.0880, 0.1720, 0.0690, 0.3800, 0.4000, 0.5784, 0.1000, 0.1320
0.5000, 1.0000, 0.2370, 0.0920, 0.1660, 0.0970, 0.5200, 0.3000, 0.4443, 0.0930, 0.0880
0.5800, 1.0000, 0.2360, 0.0960, 0.2570, 0.1710, 0.5900, 0.4000, 0.4905, 0.0820, 0.0690
0.5500, 1.0000, 0.2460, 0.1090, 0.1430, 0.0764, 0.5100, 0.3000, 0.4357, 0.0880, 0.2190
0.5400, 0.0000, 0.2260, 0.0900, 0.1830, 0.1042, 0.6400, 0.3000, 0.4304, 0.0920, 0.0720
0.3600, 0.0000, 0.2780, 0.0730, 0.1530, 0.1044, 0.4200, 0.4000, 0.3497, 0.0730, 0.2010
0.6300, 1.0000, 0.2410, 0.1110, 0.1840, 0.1122, 0.4400, 0.4000, 0.4935, 0.0820, 0.1100
0.4700, 1.0000, 0.2650, 0.0700, 0.1810, 0.1048, 0.6300, 0.3000, 0.4190, 0.0700, 0.0510
0.5100, 1.0000, 0.3280, 0.1120, 0.2020, 0.1006, 0.3700, 0.5000, 0.5775, 0.1090, 0.2770
0.4200, 0.0000, 0.1990, 0.0760, 0.1460, 0.0832, 0.5500, 0.3000, 0.3664, 0.0790, 0.0630
0.3700, 1.0000, 0.2360, 0.0940, 0.2050, 0.1388, 0.5300, 0.4000, 0.4190, 0.1070, 0.1180
0.2800, 0.0000, 0.2210, 0.0820, 0.1680, 0.1006, 0.5400, 0.3000, 0.4205, 0.0860, 0.0690
0.5800, 0.0000, 0.2810, 0.1110, 0.1980, 0.0806, 0.3100, 0.6000, 0.6068, 0.0930, 0.2730
0.3200, 0.0000, 0.2650, 0.0860, 0.1840, 0.1016, 0.5300, 0.4000, 0.4990, 0.0780, 0.2580
0.2500, 1.0000, 0.2350, 0.0880, 0.1430, 0.0808, 0.5500, 0.3000, 0.3584, 0.0830, 0.0430
0.6300, 0.0000, 0.2600, 0.0857, 0.1550, 0.0782, 0.4600, 0.3370, 0.5037, 0.0970, 0.1980
0.5200, 0.0000, 0.2780, 0.0850, 0.2190, 0.1360, 0.4900, 0.4000, 0.5136, 0.0750, 0.2420
0.6500, 1.0000, 0.2850, 0.1090, 0.2010, 0.1230, 0.4600, 0.4000, 0.5075, 0.0960, 0.2320
0.4200, 0.0000, 0.3060, 0.1210, 0.1760, 0.0928, 0.6900, 0.3000, 0.4263, 0.0890, 0.1750
0.5300, 0.0000, 0.2220, 0.0780, 0.1640, 0.0810, 0.7000, 0.2000, 0.4174, 0.1010, 0.0930
0.7900, 1.0000, 0.2330, 0.0880, 0.1860, 0.1284, 0.3300, 0.6000, 0.4812, 0.1020, 0.1680
0.4300, 0.0000, 0.3540, 0.0930, 0.1850, 0.1002, 0.4400, 0.4000, 0.5318, 0.1010, 0.2750
0.4400, 0.0000, 0.3140, 0.1150, 0.1650, 0.0976, 0.5200, 0.3000, 0.4344, 0.0890, 0.2930
0.6200, 1.0000, 0.3780, 0.1190, 0.1130, 0.0510, 0.3100, 0.4000, 0.5043, 0.0840, 0.2810
0.3300, 0.0000, 0.1890, 0.0700, 0.1620, 0.0918, 0.5900, 0.3000, 0.4025, 0.0580, 0.0720
0.5600, 0.0000, 0.3500, 0.0793, 0.1950, 0.1408, 0.4200, 0.4640, 0.4111, 0.0960, 0.1400
0.6600, 0.0000, 0.2170, 0.1260, 0.2120, 0.1278, 0.4500, 0.4710, 0.5278, 0.1010, 0.1890
0.3400, 1.0000, 0.2530, 0.1110, 0.2300, 0.1620, 0.3900, 0.6000, 0.4977, 0.0900, 0.1810
0.4600, 1.0000, 0.2380, 0.0970, 0.2240, 0.1392, 0.4200, 0.5000, 0.5366, 0.0810, 0.2090
0.5000, 0.0000, 0.3180, 0.0820, 0.1360, 0.0692, 0.5500, 0.2000, 0.4078, 0.0850, 0.1360
0.6900, 0.0000, 0.3430, 0.1130, 0.2000, 0.1238, 0.5400, 0.4000, 0.4710, 0.1120, 0.2610
0.3400, 0.0000, 0.2630, 0.0870, 0.1970, 0.1200, 0.6300, 0.3000, 0.4249, 0.0960, 0.1130
0.7100, 1.0000, 0.2700, 0.0933, 0.2690, 0.1902, 0.4100, 0.6560, 0.5242, 0.0930, 0.1310
0.4700, 0.0000, 0.2720, 0.0800, 0.2080, 0.1456, 0.3800, 0.6000, 0.4804, 0.0920, 0.1740
0.4100, 0.0000, 0.3380, 0.1233, 0.1870, 0.1270, 0.4500, 0.4160, 0.4318, 0.1000, 0.2570
0.3400, 0.0000, 0.3300, 0.0730, 0.1780, 0.1146, 0.5100, 0.3490, 0.4127, 0.0920, 0.0550
0.5100, 0.0000, 0.2410, 0.0870, 0.2610, 0.1756, 0.6900, 0.4000, 0.4407, 0.0930, 0.0840
0.4300, 0.0000, 0.2130, 0.0790, 0.1410, 0.0788, 0.5300, 0.3000, 0.3829, 0.0900, 0.0420
0.5500, 0.0000, 0.2300, 0.0947, 0.1900, 0.1376, 0.3800, 0.5000, 0.4277, 0.1060, 0.1460
0.5900, 1.0000, 0.2790, 0.1010, 0.2180, 0.1442, 0.3800, 0.6000, 0.5187, 0.0950, 0.2120
0.2700, 1.0000, 0.3360, 0.1100, 0.2460, 0.1566, 0.5700, 0.4000, 0.5088, 0.0890, 0.2330
0.5100, 1.0000, 0.2270, 0.1030, 0.2170, 0.1624, 0.3000, 0.7000, 0.4812, 0.0800, 0.0910
0.4900, 1.0000, 0.2740, 0.0890, 0.1770, 0.1130, 0.3700, 0.5000, 0.4905, 0.0970, 0.1110
0.2700, 0.0000, 0.2260, 0.0710, 0.1160, 0.0434, 0.5600, 0.2000, 0.4419, 0.0790, 0.1520
0.5700, 1.0000, 0.2320, 0.1073, 0.2310, 0.1594, 0.4100, 0.5630, 0.5030, 0.1120, 0.1200
0.3900, 1.0000, 0.2690, 0.0930, 0.1360, 0.0754, 0.4800, 0.3000, 0.4143, 0.0990, 0.0670
0.6200, 1.0000, 0.3460, 0.1200, 0.2150, 0.1292, 0.4300, 0.5000, 0.5366, 0.1230, 0.3100
0.3700, 0.0000, 0.2330, 0.0880, 0.2230, 0.1420, 0.6500, 0.3400, 0.4357, 0.0820, 0.0940
0.4600, 0.0000, 0.2110, 0.0800, 0.2050, 0.1444, 0.4200, 0.5000, 0.4533, 0.0870, 0.1830
0.6800, 1.0000, 0.2350, 0.1010, 0.1620, 0.0854, 0.5900, 0.3000, 0.4477, 0.0910, 0.0660
0.5100, 0.0000, 0.3150, 0.0930, 0.2310, 0.1440, 0.4900, 0.4700, 0.5252, 0.1170, 0.1730
0.4100, 0.0000, 0.2080, 0.0860, 0.2230, 0.1282, 0.8300, 0.3000, 0.4078, 0.0890, 0.0720
0.5300, 0.0000, 0.2650, 0.0970, 0.1930, 0.1224, 0.5800, 0.3000, 0.4143, 0.0990, 0.0490
0.4500, 0.0000, 0.2420, 0.0830, 0.1770, 0.1184, 0.4500, 0.4000, 0.4220, 0.0820, 0.0640
0.3300, 0.0000, 0.1950, 0.0800, 0.1710, 0.0854, 0.7500, 0.2000, 0.3970, 0.0800, 0.0480
0.6000, 1.0000, 0.2820, 0.1120, 0.1850, 0.1138, 0.4200, 0.4000, 0.4984, 0.0930, 0.1780
0.4700, 1.0000, 0.2490, 0.0750, 0.2250, 0.1660, 0.4200, 0.5000, 0.4443, 0.1020, 0.1040
0.6000, 1.0000, 0.2490, 0.0997, 0.1620, 0.1066, 0.4300, 0.3770, 0.4127, 0.0950, 0.1320
0.3600, 0.0000, 0.3000, 0.0950, 0.2010, 0.1252, 0.4200, 0.4790, 0.5130, 0.0850, 0.2200
0.3600, 0.0000, 0.1960, 0.0710, 0.2500, 0.1332, 0.9700, 0.3000, 0.4595, 0.0920, 0.0570
Posted in Machine Learning | Leave a comment

Computing Matrix QR Decomposition Using Givens Rotations With C#

If you have a matrix, there are several ways to decompose it: LUP decomposition, SVD, QR decomposition, and Cholesky decomposition (which applies only to positive-definite matrices). QR decomposition is used by many algorithms, such as computing a matrix inverse or pseudo-inverse.

If you have a matrix A, the QR decomposition gives you a matrix Q with orthonormal columns and an upper triangular matrix R such that A = Q * R (where * is matrix multiplication).

There are three main algorithms to compute a QR decomposition: Householder reflections, modified Gram-Schmidt, and Givens rotations, and each has several variations. Householder is by far the most complicated of the three but gives the best precision. Modified Gram-Schmidt is dramatically simpler than Householder but less precise for pathological matrices. Givens is roughly comparable to modified Gram-Schmidt in both complexity and precision.
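The mechanics of a single Givens rotation can be sketched in a few lines of JavaScript (a minimal sketch of the idea only; the givens() function name is mine, not from the demo):

```javascript
// A Givens rotation zeroes out one element: given a pivot value a and a
// value b below it, choose c and s so the rotation maps b to exactly 0.
// Convention used here: c = a/r, s = -b/r, where r = sqrt(a^2 + b^2).
function givens(a, b) {
  const r = Math.hypot(a, b);
  return { c: a / r, s: -b / r };
}

// Example: zero out b = 4.0 against pivot a = 3.0
const { c, s } = givens(3.0, 4.0);
const rotatedA = (c * 3.0) - (s * 4.0);  // the new pivot value, r = 5.0
const rotatedB = (s * 3.0) + (c * 4.0);  // 0.0 (up to rounding error)
console.log(rotatedA.toFixed(4), rotatedB.toFixed(4));
```

Applying such rotations to pairs of rows of A, one below-diagonal element at a time, reduces A to upper triangular form (the R result); accumulating the rotations gives Q.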

One evening, I was waiting for the rain to stop so I could walk my dogs Kevin and Riley. I figured I’d make use of the time by implementing QR decomposition using the Givens rotations algorithm, using the C# language.

The output of my demo:

Begin QR decomposition using Givens rotations with C#
Less precise than QR-Householder but dramatically simpler

Source (tall) matrix A:
   1.0   2.0   3.0   4.0   5.0
   0.0  -3.0   5.0  -7.0   9.0
   2.0   0.0  -2.0   0.0  -2.0
   4.0  -1.0   5.0   6.0   1.0
   3.0   6.0   8.0   2.0   2.0
   5.0  -2.0   4.0  -4.0   3.0

Computing QR
Done

Q =
   0.134840   0.258894   0.147094   0.310903   0.869379
   0.000000  -0.410745   0.759588  -0.274531   0.180705
   0.269680  -0.029872  -0.526675  -0.219131   0.300339
   0.539360  -0.196660   0.116670   0.738280  -0.260150
   0.404520   0.776682   0.316260  -0.255197  -0.226191
   0.674200  -0.348511  -0.101841  -0.412033   0.049823

R =
   7.416198   0.809040   8.494918   1.887760   3.505839
   0.000000   7.303797   2.618812   5.678242  -2.031322
   0.000000   0.000000   7.998637  -2.988836   9.068775
   0.000000   0.000000   0.000000   8.732743  -1.486219
   0.000000   0.000000   0.000000   0.000000   4.809501

End demo

I verified the results using the NumPy np.linalg.qr() library function and got the same values. The demo implementation returns the "reduced" Q and R, which is standard. It works only for "tall" matrices, where the number of rows is greater than or equal to the number of columns, which covers essentially all machine learning scenarios.
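A second sanity check, independent of NumPy, is to multiply the computed Q and R back together and confirm that the product reproduces A. Here is a hedged JavaScript sketch using the Q and R values shown above; because those values are rounded to six decimals, the reconstruction agrees with A only to roughly three decimals:

```javascript
// Source matrix A (6x5) and the demo's computed Q (6x5) and R (5x5)
const A = [
  [1, 2, 3, 4, 5], [0, -3, 5, -7, 9], [2, 0, -2, 0, -2],
  [4, -1, 5, 6, 1], [3, 6, 8, 2, 2], [5, -2, 4, -4, 3]];
const Q = [
  [0.134840, 0.258894, 0.147094, 0.310903, 0.869379],
  [0.000000, -0.410745, 0.759588, -0.274531, 0.180705],
  [0.269680, -0.029872, -0.526675, -0.219131, 0.300339],
  [0.539360, -0.196660, 0.116670, 0.738280, -0.260150],
  [0.404520, 0.776682, 0.316260, -0.255197, -0.226191],
  [0.674200, -0.348511, -0.101841, -0.412033, 0.049823]];
const R = [
  [7.416198, 0.809040, 8.494918, 1.887760, 3.505839],
  [0.0, 7.303797, 2.618812, 5.678242, -2.031322],
  [0.0, 0.0, 7.998637, -2.988836, 9.068775],
  [0.0, 0.0, 0.0, 8.732743, -1.486219],
  [0.0, 0.0, 0.0, 0.0, 4.809501]];

// compute Q * R and track the largest absolute difference from A
let maxDiff = 0.0;
for (let i = 0; i < 6; ++i) {
  for (let j = 0; j < 5; ++j) {
    let sum = 0.0;
    for (let k = 0; k < 5; ++k)
      sum += Q[i][k] * R[k][j];
    maxDiff = Math.max(maxDiff, Math.abs(sum - A[i][j]));
  }
}
console.log("max |Q*R - A| = " + maxDiff);  // near zero
```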

Good fun on a rainy evening in the Pacific Northwest.



I’m a big fan of science fiction movies — good movies, bad movies, old movies, new movies — any kind of science fiction movies. Here are three that were OK (I give all three a “C” grade on my purely personal “A” to “F” scale) but could have been great.

Left: “Valerian and the City of a Thousand Planets” (2017) – When I heard “Valerian” was going to be directed by Luc Besson, I was excited. Among other films, Besson did “The Fifth Element” (1997), one of my favorite movies of all time. And then . . disaster. The two main characters look like a teen brother and sister who are halfway through some sort of gender transition therapy to make Valerian into a fembot and Laureline into an enforcer on a WNBA basketball team. They act like that too. Everything else about the movie was quite good, but the two lead characters destroyed the movie for me.

Center: “War of the Worlds” (2005) – One of my top ten sci-fi movies of the 1950s and 60s is the 1953 version of “War of the Worlds”, so when I heard a new version was coming, directed by the famous Steven Spielberg, I was excited. And then . . disaster. The first half of the movie is quite good, especially the initial invasion scenes in the city. But the movie utterly disintegrates in the second half. Also, I greatly disliked the cinematography that used a hazy tint for the entire movie — it gave me a splitting headache.

Right: “John Carter” (2012) – My friends and I all grew up reading “A Princess of Mars” (1912) by ER Burroughs, and it is hands-down our favorite book of all time. When we heard that Disney was adapting the book into a movie, we were all excited. And then . . disaster. The main problem is the actress who plays Princess Dejah Thoris. Instead of a princess, she comes off as an annoying, abrasive, tattoo-covered, entitled girl-thug. She single-handedly destroyed the movie.


The complete demo code:

using System;
using System.IO;

namespace MatrixDecompQRGivens
{
  internal class Program
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin QR decomposition using" +
        " Givens rotations with C# ");
      Console.WriteLine("Less precise than QR-Householder" +
        " but dramatically simpler ");
      
      double[][] A = new double[6][];
      A[0] = new double[] { 1, 2, 3, 4, 5 };
      A[1] = new double[] { 0, -3, 5, -7, 9 };
      A[2] = new double[] { 2, 0, -2, 0, -2 };
      A[3] = new double[] { 4, -1, 5, 6, 1 };
      A[4] = new double[] { 3, 6, 8, 2, 2 };
      A[5] = new double[] { 5, -2, 4, -4, 3 };

      Console.WriteLine("\nSource (tall) matrix A: ");
      MatShow(A, 1, 6);

      Console.WriteLine("\nComputing QR ");
      double[][] Q;
      double[][] R;
      MatDecompQR(A, out Q, out R);
      Console.WriteLine("Done ");

      Console.WriteLine("\nQ = ");
      MatShow(Q, 6, 11);
      Console.WriteLine("\nR = ");
      MatShow(R, 6, 11);

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main()

    // ------------------------------------------------------

    static void MatShow(double[][] M, int dec, int wid)
    {
      for (int i = 0; i < M.Length; ++i)
      {
        for (int j = 0; j < M[0].Length; ++j)
        {
          double v = M[i][j];
          Console.Write(v.ToString("F" + dec).
            PadLeft(wid));
        }
        Console.WriteLine("");
      }
    }

    // ------------------------------------------------------

    static void MatDecompQR(double[][] A, out double[][] Q,
      out double[][] R)
    {
      // QR decomposition using Givens rotations
      // 'reduced' mode (all machine learning scenarios)
      int m = A.Length; int n = A[0].Length;
      if (m < n)
        throw new Exception("m must be >= n ");
      int k = Math.Min(m, n);

      double[][] QQ = new double[m][];  // working Q mxm
      for (int i = 0; i < m; ++i)
        QQ[i] = new double[m];
      for (int i = 0; i < m; ++i)
        QQ[i][i] = 1.0;  // Identity

      double[][] RR = new double[m][];  // working R mxn
      for (int i = 0; i < m; ++i)
        RR[i] = new double[n];
      for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j)
          RR[i][j] = A[i][j];  // copy of A

      for (int j = 0; j < k; ++j) // outer loop
      {
        for (int i = m-1; i > j; --i) // inner loop
        {
          double a = RR[j][j];
          double b = RR[i][j];
          //if (b == 0.0) continue;  // risky
          if (Math.Abs(b) < 1.0e-12) continue;

          double r = Math.Sqrt((a * a) + (b * b));
          double c = a / r;
          double s = -b / r;

          for (int col = j; col < n; ++col)
          {
            double tmp = 
              (c * RR[j][col]) - (s * RR[i][col]);
            RR[i][col] = 
              (s * RR[j][col]) + (c * RR[i][col]);
            RR[j][col] = tmp;
          }

          for (int row = 0; row < m; ++row)
          {
            double tmp = 
              (c * QQ[row][j]) - (s * QQ[row][i]);
            QQ[row][i] = 
              (s * QQ[row][j]) + (c * QQ[row][i]);
            QQ[row][j] = tmp;
          }
        } // inner i loop

      } // j loop

      // Q_reduced = Q[:, :k] // all rows, cols 0 to k-1
      // R_reduced = R[:k, :] // rows 0 to k-1, all cols

      double[][] Qred = new double[m][];
      for (int i = 0; i < m; ++i)
        Qred[i] = new double[k];
      for (int i = 0; i < m; ++i)
        for (int j = 0; j < k; ++j)
          Qred[i][j] = QQ[i][j];
      Q = Qred;

      double[][] Rred = new double[k][];
      for (int i = 0; i < k; ++i)
        Rred[i] = new double[n];
      for (int i = 0; i < k; ++i)
        for (int j = 0; j < n; ++j)
          Rred[i][j] = RR[i][j];
      R = Rred;

    } // MatDecompQR()

  } // Program

} // ns

Decision Tree Regression From Scratch Using JavaScript Without Pointers or Recursion

Decision tree regression is a machine learning technique that incorporates a set of if-then rules in a tree data structure to predict a single numeric value. For example, a decision tree regression model prediction might be, “If employee age is greater than 29.0 and age is less than or equal to 32.5 and years-experience is less than or equal to 4.0 and height is greater than 67.0 then bank account balance is $655.17.”

This blog post presents decision tree regression, implemented from scratch in JavaScript. The implementation uses flat list storage for the tree nodes (no pointers/references), a first-in-first-out queue to build the tree (no recursion), and weighted variance minimization for the node split function. Each node also stores the indices of the training rows associated with it.

The demo data looks like:

 0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
. . .

The first five values on each line are the x predictors. The last value on each line is the target y value to predict. The data is synthetic: it was generated by a 5-10-1 neural network with random weights and biases, so it has an underlying structure that can be learned. There are 200 training items and 40 test items.

The key parts of the output of the demo program are:

JavaScript decision tree regression

Loading synthetic train (200), test (40)
Done

First three train x:
-0.1660   0.4406  -0.9998  -0.3953  -0.7065
 0.0776  -0.1616   0.3704  -0.5911   0.7562
-0.9452   0.3409  -0.1654   0.1174  -0.7192

First three train y:
0.484 0.1568 0.8054

Setting maxDepth = 3
Setting minSamples = 2
Setting minLeaf = 18
Using default numSplitCols = -1

Creating and training tree
Done

ID   0 | col  0 | v -0.2102 | L  1 | R  2 | y 0.3493 | leaf F
ID   1 | col  4 | v  0.1431 | L  3 | R  4 | y 0.5345 | leaf F
ID   2 | col  0 | v  0.3915 | L  5 | R  6 | y 0.2382 | leaf F
ID   3 | col  0 | v -0.6553 | L  7 | R  8 | y 0.6358 | leaf F
ID   4 | col -1 | v  0.0000 | L  9 | R 10 | y 0.4123 | leaf T
ID   5 | col  4 | v -0.2987 | L 11 | R 12 | y 0.3032 | leaf F
ID   6 | col  2 | v  0.3777 | L 13 | R 14 | y 0.1701 | leaf F
ID   7 | col -1 | v  0.0000 | L 15 | R 16 | y 0.6952 | leaf T
ID   8 | col -1 | v  0.0000 | L 17 | R 18 | y 0.5598 | leaf T
ID  11 | col -1 | v  0.0000 | L 23 | R 24 | y 0.4101 | leaf T
ID  12 | col -1 | v  0.0000 | L 25 | R 26 | y 0.2613 | leaf T
ID  13 | col -1 | v  0.0000 | L 27 | R 28 | y 0.1882 | leaf T
ID  14 | col -1 | v  0.0000 | L 29 | R 30 | y 0.1381 | leaf T

Evaluating tree model

Train acc (within 0.10) = 0.3750
Test acc (within 0.10) = 0.4750

Train MSE = 0.0048
Test MSE = 0.0054

Predicting for x =
-0.1660   0.4406  -0.9998  -0.3953  -0.7065
y = 0.4101

IF
column 0  >   -0.2102 AND
column 0  <=   0.3915 AND
column 4  <=  -0.2987 AND
THEN
predicted = 0.4101

End decision tree regression demo

If you compare this output with the visual representation, most of the ideas of decision tree regression should become clear:

The demo decision tree has maxDepth = 3. This creates a tree with 15 nodes (some of which could be empty). In general, if a tree has maxDepth = n, then it has 2^(n+1) - 1 nodes. These tree nodes could be stored as JavaScript program-defined Node objects and connected via pointers/references. But the architecture used by the demo decision tree is to create a JavaScript array with length 15, where each item in the array is a Node object.

With this design, if the root node is at index [0], then the left child is at [1] and the right child is at [2]. In general, if a node is at index [i], then the left child is at index [2*i+1] and the right child is at index [2*i+2].
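The index arithmetic can be sketched in isolation (a minimal snippet, separate from the demo program; the helper names are illustrative, and normal operator symbols are used):

```javascript
// flat-array storage for a full binary tree:
// children and parent of node [i] are at fixed indices
function leftChild(i) { return 2 * i + 1; }
function rightChild(i) { return 2 * i + 2; }
function parent(i) { return Math.trunc((i - 1) / 2); }

// a full tree with maxDepth = n has 2^(n+1) - 1 nodes
function numNodes(maxDepth) {
  return Math.pow(2, maxDepth + 1) - 1;
}

console.log(leftChild(0), rightChild(0));  // 1 2
console.log(parent(6));                    // 2
console.log(numNodes(3));                  // 15
```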

The tree splitting part of constructing a decision tree regression system is perhaps best understood by looking at a concrete example. The root node of the demo decision tree is associated with all 200 rows of the training data. The splitting process selects some of the 200 rows to be assigned to the left child node, and the remaining rows to be assigned to the right child node.

For simplicity, suppose that instead of the demo training data with 200 rows and 5 columns of predictors, a tree node is associated with just 5 rows of data, and the data has just 3 columns of predictors:

     X                    y
[0] 0.97  0.25  0.75     3.0
[1] 0.53  0.41  0.00     9.0
[2] 0.86  0.64  0.86     7.0
[3] 0.14  0.37  0.25     2.0
[4] 0.53  0.86  0.37     6.0
     [0]   [1]   [2]

The split algorithm will select some of the five rows to go to the left child and the remaining rows to go to the right child. The idea is to select the two sets of rows in a way so that their associated y values are close together. The demo implementation does this by finding the split that minimizes the weighted variance of the target y values.

The splitting algorithm examines every possible x value in every row and every column. For each possible candidate split value (also called the threshold), the weighted variance of the y values of the resulting split is computed. For example, for x = 0.37 in column [1], the rows where the values in column [1] are less than or equal to 0.37 are rows [3] and [0] with associated y values (2.0, 3.0). The other rows are [1], [2], [4] with associated y values (9.0, 7.0, 6.0).

The variance of a set of y values is the average of the sum of the squared differences between the values and their mean (average).

For this proposed split, the variance of y values (2.0, 3.0) is computed as 0.25. The mean of (2.0, 3.0) = (2.0 + 3.0) / 2 = 2.5 and so the variance is ((2.0 - 2.5)^2 + (3.0 - 2.5)^2) / 2 = 0.25.

The mean of y values (9.0, 7.0, 6.0) = (9.0 + 7.0 + 6.0) / 3 = 7.33 and the variance is ((9.0 - 7.33)^2 + (7.0 - 7.33)^2 + (6.0 - 7.33)^2) / 3 = 1.56.

Because the proposed split has different numbers of rows (two rows for the left child and three rows for the right child), the weighted variance of the proposed split is used and is ((2 * 0.25) + (3 * 1.56)) / 5 = 1.03.
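The worked example can be checked with a short snippet (a sketch, not part of the demo program; normal operator symbols are used here, and the helper names are illustrative):

```javascript
// mean of a list of y values
function mean(ys) {
  let sum = 0.0;
  for (let i = 0; i < ys.length; ++i) sum += ys[i];
  return sum / ys.length;
}

// population variance: average squared deviation from the mean
function variance(ys) {
  let m = mean(ys);
  let sum = 0.0;
  for (let i = 0; i < ys.length; ++i)
    sum += (ys[i] - m) * (ys[i] - m);
  return sum / ys.length;
}

// the proposed split from the worked example
let leftY = [2.0, 3.0];
let rightY = [9.0, 7.0, 6.0];
let wv = (leftY.length * variance(leftY) +
  rightY.length * variance(rightY)) /
  (leftY.length + rightY.length);

console.log(variance(leftY).toFixed(4));   // 0.2500
console.log(variance(rightY).toFixed(4));  // 1.5556
console.log(wv.toFixed(4));                // 1.0333
```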

This process is repeated for every x value in the training data. The x threshold value and its column that generate the smallest weighted variance define the split. In practice it's common to keep track of which values in a given column have already been checked, to avoid unnecessary work computing weighted variance.
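One way to track already-examined threshold values is sketched below. The demo program uses an ordinary array with includes(); a JavaScript Set, shown here as an alternative, gives faster membership lookups. The sample column values are hypothetical:

```javascript
// skip duplicate candidate thresholds in a column
let colValues = [0.25, 0.41, 0.25, 0.37, 0.41, 0.86];
let examined = new Set();
let candidates = [];
for (let i = 0; i < colValues.length; ++i) {
  let thresh = colValues[i];
  if (examined.has(thresh)) continue;  // already checked
  examined.add(thresh);
  candidates.push(thresh);  // a new threshold to evaluate
}
console.log(candidates);  // [ 0.25, 0.41, 0.37, 0.86 ]
```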

When no further split is made at a node (a leaf node), the predicted y value for the node is the average of the y values associated with the node. The demo program also computes predicted y values for internal, non-leaf nodes, but those predicted values are not used.

The demo does not use recursion to build the tree. Instead, the demo uses a stack data structure. In high-level pseudo-code:

create empty stack
push (root, depth = 0)
while stack not empty
  pop (currNode, currDepth)
  if at maxDepth or not enough data then
    node is a leaf, predicted = avg of associated y values
    continue
  compute split value and split column using the
    associated rows stored in currNode
  if unable to split then
    node is a leaf, predicted = avg of associated y values
    continue
  compute left-rows, right-rows
  create and push (leftNode, currDepth+1)
  create and push (rightNode, currDepth+1)
end-while

The build-tree algorithm is short but very tricky. Each stack entry holds a tree node (along with its associated rows of the training data) and the node's depth in the tree. The depth is tracked so that the algorithm knows when maxDepth has been reached.
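The stack behavior can be sketched in isolation (illustrative node IDs only; a JavaScript array acts as a LIFO stack via push() and pop(), which is what the demo's makeTree() relies on):

```javascript
// iterative traversal of a full binary tree using a stack
let maxDepth = 2;
let visited = [];
let stack = [];
stack.push({ id: 0, depth: 0 });  // root at depth 0
while (stack.length > 0) {
  let info = stack.pop();  // last-in, first-out
  visited.push(info.id);
  if (info.depth === maxDepth) continue;  // leaf level
  stack.push({ id: 2 * info.id + 1, depth: info.depth + 1 });
  stack.push({ id: 2 * info.id + 2, depth: info.depth + 1 });
}
console.log(visited.length);  // 7 nodes for maxDepth = 2
```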



My love of math, computer science, and machine learning can probably be traced back to the games I loved to play as a young man: poker, Yahtzee, blackjack, and many others.

I was also fascinated by gambling machines of all kinds. When I was about ten years old, my Uncle Brent gave me a small mechanical slot machine toy from Vegas after one of his trips there.

Here is "Twenty One" made in 1936 by Groetchen, a Chicago company. It's a lot like blackjack where you have to beat the dealer's total without going over 21.

Left: You put a coin in and pull the handle. You get to see the first two numbers. Here, 7 + 6 = 13. Now, if you want you can draw a card by pulling the small lever in position 3.

Right: The card drawn was a 3, so the player has a total of 16. If you want to stand with that total, you pull the "House" lever to reveal the House hand. Here it is 20, so the player loses.


Demo code. Replace "lt" (less than), "gt", "lte", "gte" with Boolean operator symbols (my blog editor chokes on symbols).

// decision_tree_regression.js
// node.js v22.16.0

let FS = require('fs');  // file load

class DecisionTreeRegressor
{
  // --------------------------------------------------------

  constructor(maxDepth, minSamples,
    minLeaf, numSplitCols, seed)
  {
    // if maxDepth = n, list has 2^(n+1) - 1 nodes
    this.maxDepth = maxDepth;
    this.minSamples = minSamples;
    this.minLeaf = minLeaf;
    this.numSplitCols = numSplitCols; // for random forest

    // create full tree
    this.tree = [];  // a list of Node objects
    let numNodes = Math.pow(2, (maxDepth + 1)) - 1;
    for (let i = 0; i "lt" numNodes; ++i) {
      let n = new this.Node(i, -1, 0.0, -1,
        -1, 0.0, false, []);
      this.tree[i] = n; // dummy nodes
    }

    this.seed = seed + 0.5;  // quasi random
    this.trainX = null;  // supplied in train()
    this.trainY = null;  
  }

  // --------------------------------------------------------

    Node = function(id, colIdx, thresh, left, right, value,
      isLeaf, rows) // call:  n = new this.Node(0, . .);
    {
      this.id = id; this.colIdx = colIdx;
      this.thresh = thresh; this.left = left;
      this.right = right; this.value = value;
      this.isLeaf = isLeaf;
      this.rows = rows; // a list
    }

    StackInfo = function(node, depth)
    {
      this.node = node;
      this.depth = depth;
    }

  // --------------------------------------------------------

  // core methods: 
  // train(), predict(), explain(), accuracy(),
  //   mse(), display()

  // --------------------------------------------------------

  train(trainX, trainY)
  {
    this.trainX = trainX;  // by ref
    this.trainY = trainY;
    this.makeTree();
  }

  // --------------------------------------------------------

  predict(x)
  {
    let p = 0;
    let currNode = this.tree[p];
    while (currNode.isLeaf == false &&
    p "lt" this.tree.length) {
      if (x[currNode.colIdx] "lte" currNode.thresh)
        p = currNode.left;
      else
        p = currNode.right;
      currNode = this.tree[p];
    }
    return this.tree[p].value;
  }

  // --------------------------------------------------------

  explain(x)
  {
    let p = 0;
    console.log("\nIF ");
    let currNode = this.tree[p];
    while (currNode.isLeaf == false) {
      process.stdout.write("column ");
      process.stdout.write(currNode.colIdx + " ");
      if (x[currNode.colIdx] "lte" currNode.thresh) {
        process.stdout.write(" "lte" ");
        console.log(currNode.thresh.toFixed(4).
        toString().padStart(8) + " AND ");
        p = currNode.left;
        currNode = this.tree[p];
      }
      else {
        process.stdout.write(" "gt"  ");
        console.log(currNode.thresh.toFixed(4).
        toString().padStart(8) + " AND ");
        p = currNode.right;
        currNode = this.tree[p];
      }
    }
    console.log("THEN \npredicted = " +
      currNode.value.toFixed(4).toString());
  }

  // --------------------------------------------------------

  accuracy(dataX, dataY, pctClose)
  {
    let nCorrect = 0; let nWrong = 0;
    for (let i = 0; i "lt" dataX.length; ++i) {
      let x = dataX[i];
      let actualY = dataY[i];
      let predY = this.predict(x);
      if (Math.abs(predY - actualY) "lt" 
        Math.abs(pctClose * actualY)) {
        ++nCorrect;
      }
      else {
        ++nWrong;
      }
    }
    return (nCorrect * 1.0) / (nCorrect + nWrong);
  }

  // --------------------------------------------------------

  mse(dataX, dataY)
  {
    let n = dataX.length;
    let sum = 0.0;
    for (let i = 0; i "lt" n; ++i) {
      let x = dataX[i];
      let actualY = dataY[i];
      let predY = this.predict(x);
      sum += (actualY - predY) * (actualY - predY);
    }
    return sum / n;
  }

  // --------------------------------------------------------

  display(showRows)
  {
    let size = this.tree.length;
    for (let i = 0; i "lt" size; ++i) { // each node
      let n = this.tree[i]; // convenience
      if (n == null) continue;
      if (n.colIdx == -1 && n.isLeaf == false) continue;

      let s = "";
      s += "ID " + n.id.toString().padStart(3) + " ";
      s += "| col " + n.colIdx.toString().padStart(2) + " ";
      s += "| v " + n.thresh.toFixed(4).
        toString().padStart(8) + " ";
      s += "| L " + n.left.toString().padStart(2) + " ";
      s += "| R " + n.right.toString().padStart(2) + " ";
      s += "| y " + n.value.toFixed(4).toString() + " ";
      let tf = n.isLeaf == true ? "T" : "F";
      s += "| leaf " + tf;

      if (showRows == true) {
        s += "\nrows: ";
        for (let k = 0; k "lt" n.rows.length; ++k) {
          if (k "gt" 0 && k % 20 == 0)
              s += "\n";
          s += n.rows[k].toString() + " ";
        }
      }
      console.log(s);
      if (showRows == true) console.log("");
    }
  } // display()

  // --------------------------------------------------------

  // helpers for train(): makeTree(), bestSplit(), next(),
  //   nextInt(), shuffle(), treeTargetMean(),
  //   treeTargetVariance()

  // --------------------------------------------------------

  makeTree()
  {
    if (this.numSplitCols == -1)
      this.numSplitCols = this.trainX[0].length;

    let allRows = [];
    for (let i = 0; i "lt" this.trainX.length; ++i)
      allRows.push(i);
    let grandMean = this.treeTargetMean(allRows);

    let root = new this.Node(0, -1, 0.0, 1, 2,
        grandMean, false, allRows);
      
    let stack = [];  // push/pop: last-in, first-out
    stack.push(new this.StackInfo(root, 0));
    while (stack.length "gt" 0) {
      let info = stack.pop();
      let currNode = info.node;
      this.tree[currNode.id] = currNode;

      let currRows = info.node.rows;
      let currDepth = info.depth;

      if (currDepth == this.maxDepth ||
        currRows.length "lt" this.minSamples) {
        currNode.value = this.treeTargetMean(currRows);
        currNode.isLeaf = true;
        continue;
      }

      let splitInfo = this.bestSplit(currRows);
      let colIdx = Math.trunc(splitInfo[0]);
      let thresh = splitInfo[1];

      if (colIdx == -1) {  // unable to split
        currNode.value = this.treeTargetMean(currRows);
        currNode.isLeaf = true;
        continue;
      }

      // good split so at internal, non-leaf node
      currNode.colIdx = colIdx;
      currNode.thresh = thresh;

      // make the data splits for children
      // find left rows and right rows
      let leftIdxs = [];
      let rightIdxs = [];

      for (let k = 0; k "lt" currRows.length; ++k) {
        let r = currRows[k];
        if (this.trainX[r][colIdx] "lte" thresh)
          leftIdxs.push(r);
        else
          rightIdxs.push(r);
      }

      let leftID = currNode.id * 2 + 1;
      let currNodeLeft = new this.Node(leftID, -1, 0.0,
        2 * leftID + 1, 2 * leftID + 2,
        this.treeTargetMean(leftIdxs), false, leftIdxs);
      stack.push(new this.StackInfo(currNodeLeft,
        currDepth + 1));

      let rightID = currNode.id * 2 + 2;
      let currNodeRight = new this.Node(rightID, -1, 0.0,
        2*rightID + 1, 2 * rightID + 2,
        this.treeTargetMean(rightIdxs), false, rightIdxs);
      stack.push(new this.StackInfo(currNodeRight,
        currDepth + 1));

    } // while
  } // makeTree()

  // --------------------------------------------------------

  bestSplit(rows)
  {
    // result[0] = best col idx (as double)
    // result[1] = best split value

    let bestColIdx = -1;  // indicates bad split
    let bestThresh = 0.0;
    let bestVar = Number.MAX_VALUE;  // smaller is better

    let nRows = rows.length;
    let nCols = this.trainX[0].length;

    if (nRows == 0)
      console.log("FATAL: empty data in bestSplit()");

    // process cols in scrambled order
    let colIndices = [];
    for (let k = 0; k "lt" nCols; ++k)
      colIndices[k] = k;
    this.shuffle(colIndices);

    // each column of trainX
    for (let j = 0; j "lt" this.numSplitCols; ++j) {
      let colIdx = colIndices[j];
      let examineds = [];

      for (let i = 0; i "lt" nRows; ++i) {  // each row
        // if curr thresh been seen, skip it
        let thresh = this.trainX[rows[i]][colIdx];
        if (examineds.includes(thresh)) continue;
        examineds.push(thresh);

        // get row idxs where x is lte, gt thresh
        let leftIdxs = [];
        let rightIdxs = [];
        for (let k = 0; k "lt" nRows; ++k) {
          if (this.trainX[rows[k]][colIdx] "lte" thresh)
            leftIdxs.push(rows[k]);
          else
            rightIdxs.push(rows[k]);
        }

        // Check if proposed split has too few values
        if (leftIdxs.length "lt" this.minLeaf || 
        rightIdxs.length "lt" this.minLeaf) 
          continue;  // to next row

        let leftVar = this.treeTargetVariance(leftIdxs);
        let rightVar = this.treeTargetVariance(rightIdxs);
        let weightedVar = (leftIdxs.length * leftVar +
          rightIdxs.length * rightVar) / nRows;

        if (weightedVar "lt" bestVar) {
          bestColIdx = colIdx;
          bestThresh = thresh;
          bestVar = weightedVar;
        }
      } // each row
    } // j each col

    let result = [-1, -1];
    result[0] = bestColIdx;
    result[1] = bestThresh;
    return result;
  } // bestSplit()

  // --------------------------------------------------------

  next()
  {
    // return sort-of-random in [0.0, 1.0)
    let x = Math.sin(this.seed) * 1000;
    let result = x - Math.floor(x);  // [0.0,1.0)
    this.seed = result;  // for next call
    return result;
  }

  // --------------------------------------------------------

  nextInt(lo, hi)
  {
    let x = this.next();
    return Math.trunc((hi - lo) * x + lo);
  }

  // --------------------------------------------------------

  shuffle(indices)
  {
    // Fisher-Yates
    for (let i = 0; i "lt" indices.length; ++i) {
      let ri = this.nextInt(i, indices.length);
      let tmp = indices[ri];
      indices[ri] = indices[i];
      indices[i] = tmp;
    }
  }

  // -------------------------------------------------------- 

  treeTargetMean(rows)
  {
    let sum = 0.0;
    for (let i = 0; i "lt" rows.length; ++i) {
      let r = rows[i];
      sum += this.trainY[r];
    }
    return sum / rows.length;
  }

  // ------------------------------------------------------

  treeTargetVariance(rows)
  {
    let mean = this.treeTargetMean(rows);
    let sum = 0.0;
    for (let i = 0; i "lt" rows.length; ++i) {
      let r = rows[i];
      sum += (this.trainY[r] - mean) *
        (this.trainY[r] - mean);
    }
    return sum / rows.length;
  }

} // class DecisionTreeRegressor

// ----------------------------------------------------------
// helper functions for main(): loadTxt(), vecShow()
// ----------------------------------------------------------

function loadTxt(fn, delimit, usecols, comment) {
  // load numeric matrix from text file
  let all = FS.readFileSync(fn, "utf8");  // giant string
  all = all.trim();  // strip final crlf in file
  let lines = all.split("\n");  // array of lines

  // count number non-comment lines
  let nRows = 0;
  for (let i = 0; i "lt" lines.length; ++i) {
    if (!lines[i].startsWith(comment))
      ++nRows;
  }
  let nCols = usecols.length;
  //let result = matMake(nRows, nCols, 0.0);
  let result = [];
  for (let i = 0; i "lt" nRows; ++i) {
    result[i] = [];
    for (let j = 0; j "lt" nCols; ++j) {
      result[i][j] = 0.0;
    }
  }
 
  let r = 0;  // into lines
  let i = 0;  // into result[][]
  while (r "lt" lines.length) {
    if (lines[r].startsWith(comment)) {
      ++r;  // next row
    }
    else {
      let tokens = lines[r].split(delimit);
      for (let j = 0; j "lt" nCols; ++j) {
        result[i][j] = parseFloat(tokens[usecols[j]]);
      }
      ++r;
      ++i;
    }
  }

  if (usecols.length "gt" 1)
    return result;

  // if length of usecols is just 1, convert matrix to vector
  // the curr matrix is shape nRows x 1
  if (usecols.length == 1) {
    let vec = [];
    for (let i = 0; i "lt" nRows; ++i)
      vec[i] = result[i][0];
    return vec;
  }
}

// ----------------------------------------------------------

function vecShow(v, dec, len)
{
  for (let i = 0; i "lt" v.length; ++i) {
    if (i != 0 && i % len == 0) {
      process.stdout.write("\n");
    }
    if (v[i] "gte" 0.0) {
      process.stdout.write(" ");  // + or - space
    }
    process.stdout.write(v[i].toFixed(dec));
    process.stdout.write("  ");
  }
  process.stdout.write("\n");
}

// ----------------------------------------------------------

function main()
{
  console.log("\nJavaScript decision tree regression ");

  // 1. load data
  console.log("\nLoading synthetic train (200), test (40)");
  let trainFile = ".\\Data\\synthetic_train_200.txt";
  let testFile = ".\\Data\\synthetic_test_40.txt";

  let trainX = loadTxt(trainFile, ",", [0,1,2,3,4], "#");
  let trainY = loadTxt(trainFile, ",", [5], "#");
  let testX = loadTxt(testFile, ",", [0,1,2,3,4], "#");
  let testY = loadTxt(testFile, ",", [5], "#");
  console.log("Done ");

  console.log("\nFirst three train x: ");
  for (let i = 0; i "lt" 3; ++i)
    vecShow(trainX[i], 4, 8);  // 4 decimals 8 wide
  console.log("\nFirst three train y: ");
  for (let i = 0; i "lt" 3; ++i)
    process.stdout.write(trainY[i].toString() + " ");
  console.log("");

  // 2. create and train tree
  let maxDepth = 3;
  let minSamples = 2;
  let minLeaf = 18;  // 18
  let numSplitCols = -1;  // all columns
  let seed = 0;
  console.log("\nSetting maxDepth = " +
    maxDepth.toString());
  console.log("Setting minSamples = " +
    minSamples.toString());
  console.log("Setting minLeaf = " +
    minLeaf.toString());
  console.log("Using default numSplitCols = -1 ");

  console.log("\nCreating and training tree ");
  let dtr = 
    new DecisionTreeRegressor(maxDepth, minSamples,
      minLeaf, numSplitCols, seed);

  dtr.train(trainX, trainY);
  console.log("Done \n");
  dtr.display(false); // don't show rows

  // 3. evaluate tree
  console.log("\nEvaluating tree model ");
  let trainAcc = dtr.accuracy(trainX, trainY, 0.10);
  let testAcc = dtr.accuracy(testX, testY, 0.10);
  console.log("\nTrain acc (within 0.10) = " +
    trainAcc.toFixed(4).toString());
  console.log("Test acc (within 0.10) = " +
    testAcc.toFixed(4).toString());

  let trainMSE = dtr.mse(trainX, trainY);
  let testMSE = dtr.mse(testX, testY);
  console.log("\nTrain MSE = " +
    trainMSE.toFixed(4).toString());
  console.log("Test MSE = " +
    testMSE.toFixed(4).toString());

  // 4. use tree to make prediction
  let x = trainX[0];
  console.log("\nPredicting for x = ");
  vecShow(x, 4, 9);
  let predY = dtr.predict(x);
  console.log("y = " + predY.toFixed(4).toString());

  dtr.explain(x);

  console.log("\nEnd decision tree regression demo ");
}

main();

Training data:

# synthetic_train_200.txt
#
-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
-0.8299, -0.9219, -0.6603,  0.7563, -0.8033,  0.7955
 0.0663,  0.3838, -0.3690,  0.3730,  0.6693,  0.3206
-0.9634,  0.5003,  0.9777,  0.4963, -0.4391,  0.7377
-0.1042,  0.8172, -0.4128, -0.4244, -0.7399,  0.4801
-0.9613,  0.3577, -0.5767, -0.4689, -0.0169,  0.6861
-0.7065,  0.1786,  0.3995, -0.7953, -0.1719,  0.5569
 0.3888, -0.1716, -0.9001,  0.0718,  0.3276,  0.2500
 0.1731,  0.8068, -0.7251, -0.7214,  0.6148,  0.3297
-0.2046, -0.6693,  0.8550, -0.3045,  0.5016,  0.2129
 0.2473,  0.5019, -0.3022, -0.4601,  0.7918,  0.2613
-0.1438,  0.9297,  0.3269,  0.2434, -0.7705,  0.5171
 0.1568, -0.1837, -0.5259,  0.8068,  0.1474,  0.3307
-0.9943,  0.2343, -0.3467,  0.0541,  0.7719,  0.5581
 0.2467, -0.9684,  0.8589,  0.3818,  0.9946,  0.1092
-0.6553, -0.7257,  0.8652,  0.3936, -0.8680,  0.7018
 0.8460,  0.4230, -0.7515, -0.9602, -0.9476,  0.1996
-0.9434, -0.5076,  0.7201,  0.0777,  0.1056,  0.5664
 0.9392,  0.1221, -0.9627,  0.6013, -0.5341,  0.1533
 0.6142, -0.2243,  0.7271,  0.4942,  0.1125,  0.1661
 0.4260,  0.1194, -0.9749, -0.8561,  0.9346,  0.2230
 0.1362, -0.5934, -0.4953,  0.4877, -0.6091,  0.3810
 0.6937, -0.5203, -0.0125,  0.2399,  0.6580,  0.1460
-0.6864, -0.9628, -0.8600, -0.0273,  0.2127,  0.5387
 0.9772,  0.1595, -0.2397,  0.1019,  0.4907,  0.1611
 0.3385, -0.4702, -0.8673, -0.2598,  0.2594,  0.2270
-0.8669, -0.4794,  0.6095, -0.6131,  0.2789,  0.4700
 0.0493,  0.8496, -0.4734, -0.8681,  0.4701,  0.3516
 0.8639, -0.9721, -0.5313,  0.2336,  0.8980,  0.1412
 0.9004,  0.1133,  0.8312,  0.2831, -0.2200,  0.1782
 0.0991,  0.8524,  0.8375, -0.2102,  0.9265,  0.2150
-0.6521, -0.7473, -0.7298,  0.0113, -0.9570,  0.7422
 0.6190, -0.3105,  0.8802,  0.1640,  0.7577,  0.1056
 0.6895,  0.8108, -0.0802,  0.0927,  0.5972,  0.2214
 0.1982, -0.9689,  0.1870, -0.1326,  0.6147,  0.1310
-0.3695,  0.7858,  0.1557, -0.6320,  0.5759,  0.3773
-0.1596,  0.3581,  0.8372, -0.9992,  0.9535,  0.2071
-0.2468,  0.9476,  0.2094,  0.6577,  0.1494,  0.4132
 0.1737,  0.5000,  0.7166,  0.5102,  0.3961,  0.2611
 0.7290, -0.3546,  0.3416, -0.0983, -0.2358,  0.1332
-0.3652,  0.2438, -0.1395,  0.9476,  0.3556,  0.4170
-0.6029, -0.1466, -0.3133,  0.5953,  0.7600,  0.4334
-0.4596, -0.4953,  0.7098,  0.0554,  0.6043,  0.2775
 0.1450,  0.4663,  0.0380,  0.5418,  0.1377,  0.2931
-0.8636, -0.2442, -0.8407,  0.9656, -0.6368,  0.7429
 0.6237,  0.7499,  0.3768,  0.1390, -0.6781,  0.2185
-0.5499,  0.1850, -0.3755,  0.8326,  0.8193,  0.4399
-0.4858, -0.7782, -0.6141, -0.0008,  0.4572,  0.4197
 0.7033, -0.1683,  0.2334, -0.5327, -0.7961,  0.1776
 0.0317, -0.0457, -0.6947,  0.2436,  0.0880,  0.3345
 0.5031, -0.5559,  0.0387,  0.5706, -0.9553,  0.3107
-0.3513,  0.7458,  0.6894,  0.0769,  0.7332,  0.3170
 0.2205,  0.5992, -0.9309,  0.5405,  0.4635,  0.3532
-0.4806, -0.4859,  0.2646, -0.3094,  0.5932,  0.3202
 0.9809, -0.3995, -0.7140,  0.8026,  0.0831,  0.1600
 0.9495,  0.2732,  0.9878,  0.0921,  0.0529,  0.1289
-0.9476, -0.6792,  0.4913, -0.9392, -0.2669,  0.5966
 0.7247,  0.3854,  0.3819, -0.6227, -0.1162,  0.1550
-0.5922, -0.5045, -0.4757,  0.5003, -0.0860,  0.5863
-0.8861,  0.0170, -0.5761,  0.5972, -0.4053,  0.7301
 0.6877, -0.2380,  0.4997,  0.0223,  0.0819,  0.1404
 0.9189,  0.6079, -0.9354,  0.4188, -0.0700,  0.1907
-0.1428, -0.7820,  0.2676,  0.6059,  0.3936,  0.2790
 0.5324, -0.3151,  0.6917, -0.1425,  0.6480,  0.1071
-0.8432, -0.9633, -0.8666, -0.0828, -0.7733,  0.7784
-0.9444,  0.5097, -0.2103,  0.4939, -0.0952,  0.6787
-0.0520,  0.6063, -0.1952,  0.8094, -0.9259,  0.4836
 0.5477, -0.7487,  0.2370, -0.9793,  0.0773,  0.1241
 0.2450,  0.8116,  0.9799,  0.4222,  0.4636,  0.2355
 0.8186, -0.1983, -0.5003, -0.6531, -0.7611,  0.1511
-0.4714,  0.6382, -0.3788,  0.9648, -0.4667,  0.5950
 0.0673, -0.3711,  0.8215, -0.2669, -0.1328,  0.2677
-0.9381,  0.4338,  0.7820, -0.9454,  0.0441,  0.5518
-0.3480,  0.7190,  0.1170,  0.3805, -0.0943,  0.4724
-0.9813,  0.1535, -0.3771,  0.0345,  0.8328,  0.5438
-0.1471, -0.5052, -0.2574,  0.8637,  0.8737,  0.3042
-0.5454, -0.3712, -0.6505,  0.2142, -0.1728,  0.5783
 0.6327, -0.6297,  0.4038, -0.5193,  0.1484,  0.1153
-0.5424,  0.3282, -0.0055,  0.0380, -0.6506,  0.6613
 0.1414,  0.9935,  0.6337,  0.1887,  0.9520,  0.2540
-0.9351, -0.8128, -0.8693, -0.0965, -0.2491,  0.7353
 0.9507, -0.6640,  0.9456,  0.5349,  0.6485,  0.1059
-0.0462, -0.9737, -0.2940, -0.0159,  0.4602,  0.2606
-0.0627, -0.0852, -0.7247, -0.9782,  0.5166,  0.2977
 0.0478,  0.5098, -0.0723, -0.7504, -0.3750,  0.3335
 0.0090,  0.3477,  0.5403, -0.7393, -0.9542,  0.4415
-0.9748,  0.3449,  0.3736, -0.1015,  0.8296,  0.4358
 0.2887, -0.9895, -0.0311,  0.7186,  0.6608,  0.2057
 0.1570, -0.4518,  0.1211,  0.3435, -0.2951,  0.3244
 0.7117, -0.6099,  0.4946, -0.4208,  0.5476,  0.1096
-0.2929, -0.5726,  0.5346, -0.3827,  0.4665,  0.2465
 0.4889, -0.5572, -0.5718, -0.6021, -0.7150,  0.2163
-0.7782,  0.3491,  0.5996, -0.8389, -0.5366,  0.6516
-0.5847,  0.8347,  0.4226,  0.1078, -0.3910,  0.6134
 0.8469,  0.4121, -0.0439, -0.7476,  0.9521,  0.1571
-0.6803, -0.5948, -0.1376, -0.1916, -0.7065,  0.7156
 0.2878,  0.5086, -0.5785,  0.2019,  0.4979,  0.2980
 0.2764,  0.1943, -0.4090,  0.4632,  0.8906,  0.2960
-0.8877,  0.6705, -0.6155, -0.2098, -0.3998,  0.7107
-0.8398,  0.8093, -0.2597,  0.0614, -0.0118,  0.6502
-0.8476,  0.0158, -0.4769, -0.2859, -0.7839,  0.7715
 0.5751, -0.7868,  0.9714, -0.6457,  0.1448,  0.1175
 0.4802, -0.7001,  0.1022, -0.5668,  0.5184,  0.1090
 0.4458, -0.6469,  0.7239, -0.9604,  0.7205,  0.0779
 0.5175,  0.4339,  0.9747, -0.4438, -0.9924,  0.2879
 0.8678,  0.7158,  0.4577,  0.0334,  0.4139,  0.1678
 0.5406,  0.5012,  0.2264, -0.1963,  0.3946,  0.2088
-0.9938,  0.5498,  0.7928, -0.5214, -0.7585,  0.7687
 0.7661,  0.0863, -0.4266, -0.7233, -0.4197,  0.1466
 0.2277, -0.3517, -0.0853, -0.1118,  0.6563,  0.1767
 0.3499, -0.5570, -0.0655, -0.3705,  0.2537,  0.1632
 0.7547, -0.1046,  0.5689, -0.0861,  0.3125,  0.1257
 0.8186,  0.2110,  0.5335,  0.0094, -0.0039,  0.1391
 0.6858, -0.8644,  0.1465,  0.8855,  0.0357,  0.1845
-0.4967,  0.4015,  0.0805,  0.8977,  0.2487,  0.4663
 0.6760, -0.9841,  0.9787, -0.8446, -0.3557,  0.1509
-0.1203, -0.4885,  0.6054, -0.0443, -0.7313,  0.4854
 0.8557,  0.7919, -0.0169,  0.7134, -0.1628,  0.2002
 0.0115, -0.6209,  0.9300, -0.4116, -0.7931,  0.4052
-0.7114, -0.9718,  0.4319,  0.1290,  0.5892,  0.3661
 0.3915,  0.5557, -0.1870,  0.2955, -0.6404,  0.2954
-0.3564, -0.6548, -0.1827, -0.5172, -0.1862,  0.4622
 0.2392, -0.4959,  0.5857, -0.1341, -0.2850,  0.2470
-0.3394,  0.3947, -0.4627,  0.6166, -0.4094,  0.5325
 0.7107,  0.7768, -0.6312,  0.1707,  0.7964,  0.2757
-0.1078,  0.8437, -0.4420,  0.2177,  0.3649,  0.4028
-0.3139,  0.5595, -0.6505, -0.3161, -0.7108,  0.5546
 0.4335,  0.3986,  0.3770, -0.4932,  0.3847,  0.1810
-0.2562, -0.2894, -0.8847,  0.2633,  0.4146,  0.4036
 0.2272,  0.2966, -0.6601, -0.7011,  0.0284,  0.2778
-0.0743, -0.1421, -0.0054, -0.6770, -0.3151,  0.3597
-0.4762,  0.6891,  0.6007, -0.1467,  0.2140,  0.4266
-0.4061,  0.7193,  0.3432,  0.2669, -0.7505,  0.6147
-0.0588,  0.9731,  0.8966,  0.2902, -0.6966,  0.4955
-0.0627, -0.1439,  0.1985,  0.6999,  0.5022,  0.3077
 0.1587,  0.8494, -0.8705,  0.9827, -0.8940,  0.4263
-0.7850,  0.2473, -0.9040, -0.4308, -0.8779,  0.7199
 0.4070,  0.3369, -0.2428, -0.6236,  0.4940,  0.2215
-0.0242,  0.0513, -0.9430,  0.2885, -0.2987,  0.3947
-0.5416, -0.1322, -0.2351, -0.0604,  0.9590,  0.3683
 0.1055,  0.7783, -0.2901, -0.5090,  0.8220,  0.2984
-0.9129,  0.9015,  0.1128, -0.2473,  0.9901,  0.4776
-0.9378,  0.1424, -0.6391,  0.2619,  0.9618,  0.5368
 0.7498, -0.0963,  0.4169,  0.5549, -0.0103,  0.1614
-0.2612, -0.7156,  0.4538, -0.0460, -0.1022,  0.3717
 0.7720,  0.0552, -0.1818, -0.4622, -0.8560,  0.1685
-0.4177,  0.0070,  0.9319, -0.7812,  0.3461,  0.3052
-0.0001,  0.5542, -0.7128, -0.8336, -0.2016,  0.3803
 0.5356, -0.4194, -0.5662, -0.9666, -0.2027,  0.1776
-0.2378,  0.3187, -0.8582, -0.6948, -0.9668,  0.5474
-0.1947, -0.3579,  0.1158,  0.9869,  0.6690,  0.2992
 0.3992,  0.8365, -0.9205, -0.8593, -0.0520,  0.3154
-0.0209,  0.0793,  0.7905, -0.1067,  0.7541,  0.1864
-0.4928, -0.4524, -0.3433,  0.0951, -0.5597,  0.6261
-0.8118,  0.7404, -0.5263, -0.2280,  0.1431,  0.6349
 0.0516, -0.8480,  0.7483,  0.9023,  0.6250,  0.1959
-0.3212,  0.1093,  0.9488, -0.3766,  0.3376,  0.2735
-0.3481,  0.5490, -0.3484,  0.7797,  0.5034,  0.4379
-0.5785, -0.9170, -0.3563, -0.9258,  0.3877,  0.4121
 0.3407, -0.1391,  0.5356,  0.0720, -0.9203,  0.3458
-0.3287, -0.8954,  0.2102,  0.0241,  0.2349,  0.3247
-0.1353,  0.6954, -0.0919, -0.9692,  0.7461,  0.3338
 0.9036, -0.8982, -0.5299, -0.8733, -0.1567,  0.1187
 0.7277, -0.8368, -0.0538, -0.7489,  0.5458,  0.0830
 0.9049,  0.8878,  0.2279,  0.9470, -0.3103,  0.2194
 0.7957, -0.1308, -0.5284,  0.8817,  0.3684,  0.2172
 0.4647, -0.4931,  0.2010,  0.6292, -0.8918,  0.3371
-0.7390,  0.6849,  0.2367,  0.0626, -0.5034,  0.7039
-0.1567, -0.8711,  0.7940, -0.5932,  0.6525,  0.1710
 0.7635, -0.0265,  0.1969,  0.0545,  0.2496,  0.1445
 0.7675,  0.1354, -0.7698, -0.5460,  0.1920,  0.1728
-0.5211, -0.7372, -0.6763,  0.6897,  0.2044,  0.5217
 0.1913,  0.1980,  0.2314, -0.8816,  0.5006,  0.1998
 0.8964,  0.0694, -0.6149,  0.5059, -0.9854,  0.1825
 0.1767,  0.7104,  0.2093,  0.6452,  0.7590,  0.2832
-0.3580, -0.7541,  0.4426, -0.1193, -0.7465,  0.5657
-0.5996,  0.5766, -0.9758, -0.3933, -0.9572,  0.6800
 0.9950,  0.1641, -0.4132,  0.8579,  0.0142,  0.2003
-0.4717, -0.3894, -0.2567, -0.5111,  0.1691,  0.4266
 0.3917, -0.8561,  0.9422,  0.5061,  0.6123,  0.1212
-0.0366, -0.1087,  0.3449, -0.1025,  0.4086,  0.2475
 0.3633,  0.3943,  0.2372, -0.6980,  0.5216,  0.1925
-0.5325, -0.6466, -0.2178, -0.3589,  0.6310,  0.3568
 0.2271,  0.5200, -0.1447, -0.8011, -0.7699,  0.3128
 0.6415,  0.1993,  0.3777, -0.0178, -0.8237,  0.2181
-0.5298, -0.0768, -0.6028, -0.9490,  0.4588,  0.4356
 0.6870, -0.1431,  0.7294,  0.3141,  0.1621,  0.1632
-0.5985,  0.0591,  0.7889, -0.3900,  0.7419,  0.2945
 0.3661,  0.7984, -0.8486,  0.7572, -0.6183,  0.3449
 0.6995,  0.3342, -0.3113, -0.6972,  0.2707,  0.1712
 0.2565,  0.9126,  0.1798, -0.6043, -0.1413,  0.2893
-0.3265,  0.9839, -0.2395,  0.9854,  0.0376,  0.4770
 0.2690, -0.1722,  0.9818,  0.8599, -0.7015,  0.3954
-0.2102, -0.0768,  0.1219,  0.5607, -0.0256,  0.3949
 0.8216, -0.9555,  0.6422, -0.6231,  0.3715,  0.0801
-0.2896,  0.9484, -0.7545, -0.6249,  0.7789,  0.4370
-0.9985, -0.5448, -0.7092, -0.5931,  0.7926,  0.5402

Test data:

# synthetic_test_40.txt
#
 0.7462,  0.4006, -0.0590,  0.6543, -0.0083,  0.1935
 0.8495, -0.2260, -0.0142, -0.4911,  0.7699,  0.1078
-0.2335, -0.4049,  0.4352, -0.6183, -0.7636,  0.5088
 0.1810, -0.5142,  0.2465,  0.2767, -0.3449,  0.3136
-0.8650,  0.7611, -0.0801,  0.5277, -0.4922,  0.7140
-0.2358, -0.7466, -0.5115, -0.8413, -0.3943,  0.4533
 0.4834,  0.2300,  0.3448, -0.9832,  0.3568,  0.1360
-0.6502, -0.6300,  0.6885,  0.9652,  0.8275,  0.3046
-0.3053,  0.5604,  0.0929,  0.6329, -0.0325,  0.4756
-0.7995,  0.0740, -0.2680,  0.2086,  0.9176,  0.4565
-0.2144, -0.2141,  0.5813,  0.2902, -0.2122,  0.4119
-0.7278, -0.0987, -0.3312, -0.5641,  0.8515,  0.4438
 0.3793,  0.1976,  0.4933,  0.0839,  0.4011,  0.1905
-0.8568,  0.9573, -0.5272,  0.3212, -0.8207,  0.7415
-0.5785,  0.0056, -0.7901, -0.2223,  0.0760,  0.5551
 0.0735, -0.2188,  0.3925,  0.3570,  0.3746,  0.2191
 0.1230, -0.2838,  0.2262,  0.8715,  0.1938,  0.2878
 0.4792, -0.9248,  0.5295,  0.0366, -0.9894,  0.3149
-0.4456,  0.0697,  0.5359, -0.8938,  0.0981,  0.3879
 0.8629, -0.8505, -0.4464,  0.8385,  0.5300,  0.1769
 0.1995,  0.6659,  0.7921,  0.9454,  0.9970,  0.2330
-0.0249, -0.3066, -0.2927, -0.4923,  0.8220,  0.2437
 0.4513, -0.9481, -0.0770, -0.4374, -0.9421,  0.2879
-0.3405,  0.5931, -0.3507, -0.3842,  0.8562,  0.3987
 0.9538,  0.0471,  0.9039,  0.7760,  0.0361,  0.1706
-0.0887,  0.2104,  0.9808,  0.5478, -0.3314,  0.4128
-0.8220, -0.6302,  0.0537, -0.1658,  0.6013,  0.4306
-0.4123, -0.2880,  0.9074, -0.0461, -0.4435,  0.5144
 0.0060,  0.2867, -0.7775,  0.5161,  0.7039,  0.3599
-0.7968, -0.5484,  0.9426, -0.4308,  0.8148,  0.2979
 0.7811,  0.8450, -0.6877,  0.7594,  0.2640,  0.2362
-0.6802, -0.1113, -0.8325, -0.6694, -0.6056,  0.6544
 0.3821,  0.1476,  0.7466, -0.5107,  0.2592,  0.1648
 0.7265,  0.9683, -0.9803, -0.4943, -0.5523,  0.2454
-0.9049, -0.9797, -0.0196, -0.9090, -0.4433,  0.6447
-0.4607,  0.1811, -0.2389,  0.4050, -0.0078,  0.5229
 0.2664, -0.2932, -0.4259, -0.7336,  0.8742,  0.1834
-0.4507,  0.1029, -0.6294, -0.1158, -0.6294,  0.6081
 0.8948, -0.0124,  0.9278,  0.2899, -0.0314,  0.1534
-0.1323, -0.8813, -0.0146, -0.0697,  0.6135,  0.2386
Posted in JavaScript, Machine Learning

The Ladybug on the Clock Question

I was sitting in an airport, waiting to board a flight. I figured I’d use the time to write a computer simulation to solve the Ladybug on the Clock question.

Imagine a standard clock. A ladybug flies in and lands on the “12”. At each second, the ladybug randomly moves forward one number, or backwards one number. She continues doing this until she hits all 12 numbers. What is the probability that the last number she lands on is the “6”?

As it turns out, the counterintuitive answer is that all 11 numbers (1 through 11 — the 12 doesn’t count because the ladybug starts there) are equally likely to be the last number landed on. The probability for each of the 11 numbers is 1 / 11 = 0.0909. This is a classic result for a simple random walk on a cycle: every position other than the starting position is equally likely to be the last one visited.

The simulation was a bit trickier than I expected, but even so, it only took me about 20 minutes to code up, because I have written simulations like this many times over the past 50 years (starting with my school days at U.C. Irvine when I wrote a craps dice game simulation).

I ran the simulation for 100,000 trials. The output:

Begin ladybug-clock simulation
Setting n_trials = 100000
Done

Last number landed frequencies:

   1  0.0906
   2  0.0910
   3  0.0901
   4  0.0897
   5  0.0910
   6  0.0913
   7  0.0922
   8  0.0911
   9  0.0915
  10  0.0910
  11  0.0905

I could have gotten more accurate results by running the simulation longer.

Good fun sitting in an airport.

Demo program:

# ladybug_clock.py
#
# A ladybug starts on the "12" of a clock.
# Each second she randomly moves forward one number,
# or back one number.
# What is the prob that the last number she 
# touches is the "6"?

import numpy as np

print("\nBegin ladybug-clock simulation ")

counts = np.zeros(13, dtype=np.int64)
rnd = np.random.RandomState(0)

n_trials = 100000
print("Setting n_trials = " + str(n_trials))
for trial in range(n_trials):
  positions = np.zeros(13, dtype=np.int64)
  spot = 12
  positions[12] = 1
  while np.sum(positions) != 11:
    flip = rnd.randint(low=0, high=2) # 0 or 1
    if flip == 0:  # go backward
      if spot == 1: spot = 12
      else: spot -= 1
    elif flip == 1: # go forward
      if spot == 12: spot = 1
      else: spot += 1
    positions[spot] = 1
  # the single unvisited number must be the last
  # number the ladybug lands on
  for i in range(1,13):
    if positions[i] == 0:
      counts[i] += 1
print("Done ")

print("\nLast number landed frequencies: \n")
for i in range(1,12):
  print("%4d  %0.4f" % (i, (counts[i] / n_trials)))
Posted in Miscellaneous

“Quadratic Regression with SGD Training Using JavaScript” in Visual Studio Magazine

I wrote an article titled “Quadratic Regression with SGD Training Using JavaScript” in the March 2026 edition of Microsoft Visual Studio Magazine. See https://visualstudiomagazine.com/articles/2026/03/11/quadratic-regression-with-sgd-training-using-javascript.aspx.

The goal of a machine learning regression problem is to predict a single numeric value. For example, you might want to predict an employee’s salary based on age, IQ test score, high school grade point average, and so on. There are approximately a dozen common regression techniques. The most basic technique is called linear regression.

The form of a basic linear regression prediction model is y’ = (w0 * x0) + (w1 * x1) + . . . + b, where y’ is the predicted value, the xi are predictor values, the wi are weights, and b is the bias. Quadratic regression extends linear regression. The form of a quadratic regression model is y’ = (w0 * x0) + . . . + (w(n-1) * x(n-1)) + (wj * x0 * x0) + . . . + (wk * x0 * x1) + . . . + b. There are derived predictors that are the square of each original predictor, and interaction terms that are the product of all possible pairs of original predictors.
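For example, with n = 3 predictors a quadratic regression model has 3 linear terms, 3 squared terms, and 3 interaction terms. A quick Python sketch (illustrative only, not code from the article) that enumerates them:

```python
# enumerate the weighted terms of a quadratic regression
# model for n = 3 predictors
from itertools import combinations

n = 3
linear = ["x%d" % i for i in range(n)]
squared = ["x%d^2" % i for i in range(n)]
interactions = ["x%d*x%d" % (i, j) for (i, j) in combinations(range(n), 2)]

terms = linear + squared + interactions
print(terms)
# ['x0', 'x1', 'x2', 'x0^2', 'x1^2', 'x2^2', 'x0*x1', 'x0*x2', 'x1*x2']
print("total weighted terms = %d (plus one bias)" % len(terms))
```

Each of the nine terms gets its own weight, so a model with 3 predictors has 9 weights and 1 bias.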

Compared to basic linear regression, quadratic regression can handle more complex data. And compared to the most powerful regression techniques that are designed to handle complex data, quadratic regression often has slightly worse prediction accuracy, but is much easier to implement and train, and has much better model interpretability.

My article presents a demo of quadratic regression, implemented from scratch, trained with stochastic gradient descent (SGD), using the JavaScript language. The output of the demo program is:

Begin quadratic regression with SGD training
 using node.js JavaScript

Loading train (200) and test (40) from file

First three train X:
 -0.1660   0.4406  -0.9998  -0.3953  -0.7065
  0.0776  -0.1616   0.3704  -0.5911   0.7562
 -0.9452   0.3409  -0.1654   0.1174  -0.7192

First three train y:
   0.4840
   0.1568
   0.8054

Creating quadratic regression model

Setting lrnRate = 0.001
Setting maxEpochs = 1000

Starting SGD training
epoch =      0  MSE = 0.0925  acc = 0.0100
epoch =    200  MSE = 0.0003  acc = 0.9050
epoch =    400  MSE = 0.0003  acc = 0.8850
epoch =    600  MSE = 0.0003  acc = 0.8850
epoch =    800  MSE = 0.0003  acc = 0.8850
Done

Model base weights:
 -0.2630  0.0354 -0.0420  0.0341 -0.1124

Model quadratic weights:
  0.0655  0.0194  0.0051  0.0047  0.0243

Model interaction weights:
  0.0043  0.0249  0.0071  0.1081 -0.0012 -0.0093
  0.0362  0.0085 -0.0568  0.0016

Model bias: 0.3220

Computing model accuracy

Train acc (within 0.10) = 0.8850
Test acc (within 0.10) = 0.9250

Train MSE = 0.0003
Test MSE = 0.0005

Predicting for x =
  -0.1660    0.4406   -0.9998   -0.3953   -0.7065
Predicted y = 0.4843

End demo

The demo data is synthetic and was generated by a 5-10-1 neural network with random weight and bias values. The accuracy function scores a prediction as correct if it’s within 10% of the correct target value.
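The accuracy rule can be sketched in a few lines of Python (the actual demo is JavaScript; the accuracy() helper here is hypothetical):

```python
def accuracy(y_preds, y_actuals, pct_close=0.10):
    # a prediction is scored correct if it is within
    # pct_close (10%) of the corresponding target value
    n_correct = 0
    for (p, y) in zip(y_preds, y_actuals):
        if abs(p - y) < abs(pct_close * y):
            n_correct += 1
    return n_correct / len(y_actuals)

# 0.48 is within 10% of 0.50, but 0.20 is not within 10% of 0.30
print(accuracy([0.48, 0.20], [0.50, 0.30]))  # 0.5
```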

The squared (aka “quadratic”) xi^2 terms handle non-linear structure. If there are n predictors, there are also n squared terms. The xi * xj terms between all possible pairs of original predictors handle interactions between predictors. If there are n predictors, there are (n * (n-1)) / 2 interaction terms.

Therefore, in general, if there are n original predictor variables, there are a total of n + n + (n * (n-1))/2 model weights and one bias. Behind the scenes, the derived xi^2 squared terms and the derived xi*xj interaction terms are computed programmatically on-the-fly, as opposed to explicitly creating an augmented dataset.
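The on-the-fly computation can be sketched in Python (the demo itself is JavaScript; the parameter names here are illustrative):

```python
def predict(x, base_w, quad_w, inter_w, bias):
    # quadratic regression prediction where the squared and
    # interaction terms are computed on the fly, rather than
    # stored in an augmented dataset
    n = len(x)
    y = bias
    for i in range(n):      # n linear terms
        y += base_w[i] * x[i]
    for i in range(n):      # n squared terms
        y += quad_w[i] * x[i] * x[i]
    k = 0
    for i in range(n):      # n*(n-1)/2 interaction terms, pairs i < j
        for j in range(i + 1, n):
            y += inter_w[k] * x[i] * x[j]
            k += 1
    return y

# for n = 2 predictors: 2 base weights, 2 squared weights,
# 1 interaction weight, 1 bias
print(predict([1.0, 2.0], [0.5, 0.5], [0.1, 0.1], [0.2], 0.3))
```

For the demo's n = 5 predictors this gives 5 + 5 + 10 = 20 weights plus the bias, which matches the base, quadratic, and interaction weight counts shown in the demo output.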

Quadratic regression has a nice balance of prediction power and interpretability. The model weights/coefficients are easy to interpret. If the predictor values have been normalized to the same scale, a larger magnitude means a larger effect, and the sign of a weight indicates the direction of the effect. But you should be a bit careful when interpreting the interaction weights. If an interaction weight is positive, the associated term increases y when the corresponding pair of predictor values have the same sign (both positive or both negative), because their product is then positive.



I wrote this blog post while I was visiting Japan. I’m not a huge fan of the Japanese manga style of art, but I do like several animated movies from Studio Ghibli. Three of my favorites feature a young girl as the central character, reflecting Japan’s mildly creepy (to me anyway) cultural focus on young girls.

Left: In “Spirited Away” (2001), a 10-year-old girl named Chihiro must save her parents from a witch named Yubaba. Great art and a totally weird plot. My grade = A-.

Center: In “My Neighbor Totoro” (1988), university professor Tatsuo Kusakabe, his sick wife Yasuko, and his two young daughters Satsuki and Mei, move into an old house near a hospital. Totoro is one of several spirits who live nearby. My grade = A-.

Right: In “Kiki’s Delivery Service” (1989), Kiki is a young witch who runs a delivery service, aided by her cat Jiji. My grade = A-.


Posted in JavaScript, Machine Learning