“Random Forest Regression Using C#” in Visual Studio Magazine

I wrote an article titled “Random Forest Regression Using C#” in the March 2026 edition of Microsoft Visual Studio Magazine. See https://visualstudiomagazine.com/articles/2026/03/18/random-forest-regression-using-csharp.aspx.

The goal of a machine learning regression problem is to predict a single numeric value. For example, you might want to predict the price of a house based on its square footage, number of bedrooms, property tax rate, and so on.

A simple decision tree regressor encodes a virtual set of if-then rules to make a prediction. For example, “if house-age is greater than 10.0 and house-age is less than or equal to 13.5 and bedrooms is greater than 3.0, then price is $525,665.38.” Simple decision trees usually overfit their training data, and so the tree predicts poorly on new, previously unseen data.
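The example rule can be written out directly as code. Here is a minimal sketch (the class and method names are mine, and only the one root-to-leaf path from the example rule is shown; a real tree has a leaf value at the end of every path):

```csharp
using System;

public static class TreeRuleDemo
{
  // One root-to-leaf path of a decision tree regressor, written
  // out as explicit if-then logic. The thresholds and the leaf
  // value are the ones from the example rule in the text.
  public static double Predict(double houseAge, double bedrooms)
  {
    if (houseAge > 10.0)
      if (houseAge <= 13.5)
        if (bedrooms > 3.0)
          return 525665.38;  // leaf: predicted price
    return 0.0;  // other paths would return other leaf values
  }

  public static void Main()
  {
    Console.WriteLine(Predict(12.0, 4.0));  // 525665.38
  }
}
```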

A random forest is a collection of simple decision tree regressors that have been trained on different random subsets of the source training data. This process usually goes a long way toward limiting the overfitting problem.

To make a prediction for an input vector x, each tree in the forest makes a prediction and the final predicted y value is the average of the predictions. A bagging (“bootstrap aggregation”) regression system is a specific type of random forest system where all columns/predictors of the source training data are always used to construct the training data subsets.
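The averaging step is simple. A minimal sketch, where the “trees” are stand-in functions rather than real trained decision tree regressors:

```csharp
using System;

public static class ForestAverageDemo
{
  // A random forest prediction is the average of the individual
  // tree predictions. Here each "tree" is a stand-in function;
  // in a real forest each would be a trained decision tree.
  public static double PredictForest(Func<double[], double>[] trees,
    double[] x)
  {
    double sum = 0.0;
    foreach (Func<double[], double> tree in trees)
      sum += tree(x);
    return sum / trees.Length;
  }

  public static void Main()
  {
    Func<double[], double>[] trees = new Func<double[], double>[]
    {
      (x) => 0.40, (x) => 0.50, (x) => 0.54  // stand-in trees
    };
    double y = PredictForest(trees, new double[] { 0.1, 0.2 });
    Console.WriteLine(y.ToString("F4"));  // 0.4800
  }
}
```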

The VSM article presents a complete demo of random forest regression using the C# language. The output of the demo is:

Begin C# Random Forest regression demo

Loading synthetic train (200) and test (40) data
Done

First three train X:
 -0.1660  0.4406 -0.9998 -0.3953 -0.7065
  0.0776 -0.1616  0.3704 -0.5911  0.7562
 -0.9452  0.3409 -0.1654  0.1174 -0.7192

First three train y:
  0.4840
  0.1568
  0.8054

Setting nTrees = 100
Setting maxDepth = 6
Setting minSamples = 2
Setting minLeaf = 1
Setting nCols = 5
Setting nRows = 150

Creating and training random forest regression model
Done

Accuracy train (within 0.10) = 0.8050
Accuracy test (within 0.10) = 0.5750

MSE train = 0.0006
MSE test = 0.0016

Predicting for x =
  -0.1660   0.4406  -0.9998  -0.3953  -0.7065
y = 0.4728

End demo

The demo data is synthetic. It was generated by a 5-10-1 neural network with random weights and bias values. The idea here is that the synthetic data does have an underlying, but complex, structure which can be predicted.

When using decision trees for regression, it’s not necessary to normalize the training data predictor values because no distance between data items is computed. However, it’s not a bad idea to normalize the predictors just in case you want to send the data to other regression algorithms that do require normalization.

Random forest regression is most often used with data that has strictly numeric predictor variables. It is possible to use random forest regression with mixed categorical and numeric data, by using ordinal encoding on the categorical data. In theory, ordinal encoding shouldn’t work well. For example, if you have a predictor variable color with possible encoded values red = 0, blue = 1, green = 2, then red will always be less-than-or-equal to any other color value in the decision tree construction process. However, in practice, ordinal encoding for random forest regression often works well.
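A minimal sketch of ordinal encoding (the Encode helper and the mapping dictionary are mine, not from the article). After encoding, a tree splits the column with ordinary numeric comparisons such as color less-than-or-equal 0.5, which is why the arbitrary ordering matters in theory:

```csharp
using System;
using System.Collections.Generic;

public static class OrdinalEncodeDemo
{
  // Ordinal encoding maps each category to an arbitrary numeric
  // code so that a decision tree can split on the column with
  // ordinary numeric threshold comparisons.
  public static double[] Encode(string[] values,
    Dictionary<string, double> map)
  {
    double[] result = new double[values.Length];
    for (int i = 0; i < values.Length; ++i)
      result[i] = map[values[i]];
    return result;
  }

  public static void Main()
  {
    var map = new Dictionary<string, double>()
      { { "red", 0.0 }, { "blue", 1.0 }, { "green", 2.0 } };
    double[] enc = Encode(new string[] { "green", "red", "blue" },
      map);
    Console.WriteLine(string.Join(" ", enc));  // 2 0 1
  }
}
```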

The motivation for combining many simple decision tree regressors into a forest is the fact that a simple decision tree will always overfit training data if the tree is deep enough. By using a collection of trees that have been trained on different random subsets of the source data, the averaged prediction of the collection is much less likely to overfit.



Random forest regression is conceptually related to bagging (bootstrap aggregation) regression, adaptive boosting regression, and gradient boosting regression.

I’m a big fan of 1950s science fiction movies. To my eye, several of the actresses in my favorite films seem to have somewhat similar physical appearances.

Left: Actress Julie Adams (1926-2019) is best known for her lead role in “Creature from the Black Lagoon” (1954).

Center: Actress Joan Taylor (1929-2012) had lead roles in “Earth vs. the Flying Saucers” (1956) and “20 Million Miles to Earth” (1957).

Right: Actress Joan Weldon (1930-2021) was in “Them!” (1954) — the well-known giant ant movie.


Posted in Machine Learning

Linear Regression Using C# Applied to the Diabetes Dataset – Poor Results

I write code every day. Like most things, writing code is a skill that must be practiced. One morning before work, I figured I’d run the well-known Diabetes Dataset through a linear regression model. I fully expected a poor prediction model, and that’s exactly what happened.

But the linear regression results are useful as a baseline for comparison with other regression techniques: nearest neighbors regression, quadratic regression, kernel ridge regression, neural network regression, decision tree regression, bagging tree regression, random forest regression, adaptive boosting regression, and gradient boosting regression.

The Diabetes Dataset looks like:

59, 2, 32.1, 101.00, 157,  93.2, 38, 4.00, 4.8598, 87, 151
48, 1, 21.6,  87.00, 183, 103.2, 70, 3.00, 3.8918, 69,  75
72, 2, 30.5,  93.00, 156,  93.6, 41, 4.00, 4.6728, 85, 141
. . .

Each line represents a patient. The first 10 values on each line are predictors. The last value on each line is the target value (a diabetes metric) to predict. The predictors are: age, sex, body mass index, blood pressure, serum cholesterol, low-density lipoproteins, high-density lipoproteins, total cholesterol, triglycerides, blood sugar. There are 442 data items.

The sex encoding isn’t explained, but I suspect male = 1, female = 2 because there are 235 1-values and 206 2-values.

I converted the sex values from 1,2 into 0,1. Then I applied divide-by-constant normalization by dividing the 10 predictor columns by (100, 1, 100, 1000, 1000, 1000, 100, 10, 10, 1000) and the target y values by 1000. The resulting encoded and normalized data looks like:

0.5900, 1.0000, 0.3210, . . . 0.1510
0.4800, 0.0000, 0.2160, . . . 0.0750
0.7200, 1.0000, 0.3050, . . . 0.1410
. . .
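The normalization step can be sketched in a few lines (the Normalize helper name and the hard-coded constants array are mine; the constants are the ones listed above, and sex is assumed already recoded from 1,2 to 0,1):

```csharp
using System;

public static class NormalizeDemo
{
  // Divide-by-constant normalization: each of the 10 predictor
  // columns is divided by a fixed constant chosen so that all
  // values land in roughly [0, 1].
  static readonly double[] consts = new double[]
    { 100, 1, 100, 1000, 1000, 1000, 100, 10, 10, 1000 };

  public static double[] Normalize(double[] rawX)
  {
    double[] result = new double[rawX.Length];
    for (int j = 0; j < rawX.Length; ++j)
      result[j] = rawX[j] / consts[j];
    return result;
  }

  public static void Main()
  {
    // first raw data line, sex recoded 2 -> 1
    double[] raw = new double[]
      { 59, 1, 32.1, 101.00, 157, 93.2, 38, 4.00, 4.8598, 87 };
    double[] norm = Normalize(raw);
    Console.WriteLine(norm[0] + " " + norm[2]);  // 0.59 0.321
  }
}
```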

I split the 442 items into a 342-item training set and a 100-item test set.

I implemented a linear regression model using C#. For training, I used the closed-form normal equations (pseudo-inverse) technique with Cholesky matrix inversion. The math equation is:

w = inv(Xt * X) * Xt * y

The w is a vector of the model bias in cell [0] followed by the weights in the remaining cells. X is a design matrix of the training predictors (the predictors with a leading column of 1s added). Xt is the transpose of the design matrix. The y is the vector of target values. The inv() is any matrix inverse; because Xt * X is square, symmetric, and positive definite, Cholesky inversion is simplest (but by no means simple). The * means matrix multiplication or matrix-to-vector multiplication.
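To see the normal equations in action on a case small enough to check by hand, here is a tiny worked instance (the Fit helper is mine, and it inverts the 2x2 Xt * X matrix directly instead of using Cholesky):

```csharp
using System;

public static class NormalEqDemo
{
  // Tiny worked instance of w = inv(Xt * X) * Xt * y for points
  // on a line. The design matrix has a leading column of 1s, so
  // the result is { bias, weight }. For a 2x2 system the inverse
  // is written out directly; the demo program below uses
  // Cholesky inversion for the general case.
  public static double[] Fit(double[] xs, double[] ys)
  {
    int n = xs.Length;
    double a = n, b = 0.0, d = 0.0;  // Xt*X = [[a, b], [b, d]]
    double u = 0.0, v = 0.0;         // Xt*y = [u, v]
    for (int i = 0; i < n; ++i)
    {
      b += xs[i]; d += xs[i] * xs[i];
      u += ys[i]; v += xs[i] * ys[i];
    }
    double det = a * d - b * b;
    double bias = (d * u - b * v) / det;
    double wt = (a * v - b * u) / det;
    return new double[] { bias, wt };
  }

  public static void Main()
  {
    // three points that lie exactly on y = 2x + 1
    double[] w = Fit(new double[] { 1, 2, 3 },
      new double[] { 3, 5, 7 });
    Console.WriteLine(w[0].ToString("F4") + " " +
      w[1].ToString("F4"));  // 1.0000 2.0000
  }
}
```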

The output of the demo is:

Begin C# linear regression demo

Loading diabetes train (342) and test (100) data
Done

First three train X:
  0.5900  1.0000  0.3210  0.1010  . . .  0.0870
  0.4800  0.0000  0.2160  0.0870  . . .  0.0690
  0.7200  1.0000  0.3050  0.0930  . . .  0.0850

First three train y:
  0.1510
  0.0750
  0.1410

Creating linear regression model
Done

Training using pinv normal equations Cholesky
Done

Weights/coefficients:
-0.0031  -0.0235  0.5557  1.0414  -0.5531
 0.2503  -0.0272  0.0469  0.5555   0.3634
Bias/constant: -0.3013

Evaluating model

Accuracy train (within 0.10) = 0.1813
Accuracy test (within 0.10) = 0.2800

MSE train = 0.0029
MSE test = 0.0027

Predicting for x =
   0.5900   1.0000   0.3210   0.1010   0.1570
   0.0932   0.3800   0.4000   0.4860   0.0870
Predicted y = 0.2034

End demo

As expected, the linear regression model had poor prediction accuracy: just 18.13% on the training data (62 out of 342 correct) and 28.00% on the test data (28 out of 100 correct). A prediction is scored correct if it is within 10% of the true target value.



I don’t know much about art. But here are three images in the art deco style that appeal to my eye. I like the minimalist, linear style.


Demo program:

using System;
using System.IO;
using System.Collections.Generic;

namespace LinearRegressionDiabetesClosedCholesky
{
  internal class LinearRegressionProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin C# linear regression demo ");

      // 1. load data
      Console.WriteLine("\nLoading diabetes train" +
        " (342) and test (100) data");
      string trainFile =
        "..\\..\\..\\Data\\diabetes_norm_train_342.txt";
      int[] colsX =
        new int[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
      int colY = 10;
      double[][] trainX =
        MatLoad(trainFile, colsX, ',', "#");
      double[] trainY =
        MatToVec(MatLoad(trainFile,
        new int[] { colY }, ',', "#"));

      string testFile =
        "..\\..\\..\\Data\\diabetes_norm_test_100.txt";
      double[][] testX =
        MatLoad(testFile, colsX, ',', "#");
      double[] testY =
        MatToVec(MatLoad(testFile,
        new int[] { colY }, ',', "#"));
      Console.WriteLine("Done ");

      Console.WriteLine("\nFirst three train X: ");
      for (int i = 0; i < 3; ++i)
        VecShow(trainX[i], 4, 8);

      Console.WriteLine("\nFirst three train y: ");
      for (int i = 0; i < 3; ++i)
        Console.WriteLine(trainY[i].ToString("F4").
          PadLeft(8));

      // 2. create and train model
      Console.WriteLine("\nCreating linear regression model ");
      LinearRegressor model = new LinearRegressor();
      Console.WriteLine("Done");

      Console.WriteLine("\nTraining using pinv normal" +
        " equations Cholesky ");
      model.TrainClosed(trainX, trainY);
      Console.WriteLine("Done ");

      ////Console.WriteLine("\nTraining using SGD ");
      ////model.TrainSGD(trainX, trainY, 0.001, 20000);

      // show model parameters
      Console.WriteLine("\nWeights/coefficients: ");
      for (int i = 0; i < model.weights.Length; ++i)
        Console.Write(model.weights[i].ToString("F4") + "  ");
      Console.WriteLine("\nBias/constant: " +
        model.bias.ToString("F4"));

      // 3. evaluate model
      Console.WriteLine("\nEvaluating model ");
      double accTrain = model.Accuracy(trainX, trainY, 0.10);
      Console.WriteLine("\nAcc train (within 0.10) = " +
        accTrain.ToString("F4"));
      double accTest = model.Accuracy(testX, testY, 0.10);
      Console.WriteLine("Acc test (within 0.10) = " +
        accTest.ToString("F4"));

      double mseTrain = model.MSE(trainX, trainY);
      Console.WriteLine("\nMSE train = " +
        mseTrain.ToString("F4"));
      double mseTest = model.MSE(testX, testY);
      Console.WriteLine("MSE test = " +
        mseTest.ToString("F4"));

      // 4. use model
      double[] x = trainX[0];
      Console.WriteLine("\nPredicting for x = ");
      VecShow(x, 4, 9);
      double predY = model.Predict(x);
      Console.WriteLine("Predicted y = " +
        predY.ToString("F4"));

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main

    // ------------------------------------------------------
    // helpers for Main(): MatLoad, MatToVec, VecShow
    // ------------------------------------------------------

    static double[][] MatLoad(string fn, int[] usecols,
      char sep, string comment)
    {
      List<double[]> result =
        new List<double[]>();
      string line = "";
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        string[] tokens = line.Split(sep);
        List<double> lst = new List<double>();
        for (int j = 0; j < usecols.Length; ++j)
          lst.Add(double.Parse(tokens[usecols[j]]));
        double[] row = lst.ToArray();
        result.Add(row);
      }
      sr.Close(); ifs.Close();
      return result.ToArray();
    }

    static double[] MatToVec(double[][] mat)
    {
      int nRows = mat.Length;
      int nCols = mat[0].Length;
      double[] result = new double[nRows * nCols];
      int k = 0;
      for (int i = 0; i < nRows; ++i)
        for (int j = 0; j < nCols; ++j)
          result[k++] = mat[i][j];
      return result;
    }

    static void VecShow(double[] vec, int dec, int wid)
    {
      for (int i = 0; i < vec.Length; ++i)
        Console.Write(vec[i].ToString("F" + dec).
          PadLeft(wid));
      Console.WriteLine("");
    }
  } // Program

  public class LinearRegressor
  {
    public double[] weights;
    public double bias;
    private Random rnd;  // not used by closed-form training

    public LinearRegressor(int seed = 0)
    {
      this.weights = new double[0]; // quasi-null
      this.bias = 0.0;
      this.rnd = new Random(seed);
    }

    // ------------------------------------------------------

    public double Predict(double[] x)
    {
      double result = 0.0;
      for (int j = 0; j < x.Length; ++j)
        result += x[j] * this.weights[j];
      result += this.bias;
      return result;
    }

    // ------------------------------------------------------

    public void TrainClosed(double[][] trainX,
      double[] trainY)
    {
      // pinverse via normal equations with Cholesky inv
      // w = inv(Xt * X) * Xt * y
      int dim = trainX[0].Length;
      this.weights = new double[dim];

      double[][] X = MatDesign(trainX); // add leading 1s col
      double[][] Xt = MatTranspose(X);
      double[][] XtX = MatProduct(Xt, X); // could fail
      for (int i = 0; i < XtX.Length; ++i)
        XtX[i][i] += 1.0e-8;  // condition Xt*X
      double[][] invXtX = MatInvCholesky(XtX);
      double[][] invXtXXt = MatProduct(invXtX, Xt);
      double[] biasAndWts = MatVecProd(invXtXXt, trainY);
      this.bias = biasAndWts[0];
      for (int i = 1; i < biasAndWts.Length; ++i)
        this.weights[i - 1] = biasAndWts[i];
      return;
    }

    // ------------------------------------------------------

    public double Accuracy(double[][] dataX, double[] dataY,
      double pctClose)
    {
      int numCorrect = 0; int numWrong = 0;
      for (int i = 0; i < dataX.Length; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        if (Math.Abs(predY - actualY) <
          Math.Abs(pctClose * actualY))
          ++numCorrect;
        else
          ++numWrong;
      }
      return (numCorrect * 1.0) / (numWrong + numCorrect);
    }

    // ------------------------------------------------------

    public double MSE(double[][] dataX, double[] dataY)
    {
      int n = dataX.Length;
      double sum = 0.0;
      for (int i = 0; i < n; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        sum += (actualY - predY) * (actualY - predY);
      }
      return sum / n;
    }

    // ------------------------------------------------------
    // helpers for TrainClosed()
    // MatDesign, MatTranspose, MatProduct, MatVecProd
    // MatInvCholesky, MatDecompCholesky,
    // MatInvLowerTri, MatInvUpperTri
    // MatMake, MatIdentity

    private static double[][] MatDesign(double[][] X)
    {
      // add a leading column of 1s
      int nRows = X.Length; int nCols = X[0].Length;
      double[][] result = new double[X.Length][];
      for (int i = 0; i < nRows; ++i)
        result[i] = new double[nCols + 1];

      for (int i = 0; i < nRows; ++i)
      {
        result[i][0] = 1.0;
        for (int j = 1; j < nCols + 1; ++j)
          result[i][j] = X[i][j - 1];
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatTranspose(double[][] M)
    {
      int nRows = M.Length; int nCols = M[0].Length;
      double[][] result = MatMake(nCols, nRows);  // note swapped dims
      for (int i = 0; i < nRows; ++i)
        for (int j = 0; j < nCols; ++j)
          result[j][i] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatMake(int nRows, int nCols)
    {
      double[][] result = new double[nRows][];
      for (int i = 0; i < nRows; ++i)
        result[i] = new double[nCols];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatProduct(double[][] A,
      double[][] B)
    {
      int aRows = A.Length; int aCols = A[0].Length;
      int bRows = B.Length; int bCols = B[0].Length;
      if (aCols != bRows)
        throw new Exception("Non-conformable matrices");

      double[][] result = MatMake(aRows, bCols);

      for (int i = 0; i < aRows; ++i) // each row of A
        for (int j = 0; j < bCols; ++j) // each col of B
          for (int k = 0; k < aCols; ++k)
            result[i][j] += A[i][k] * B[k][j];

      return result;
    }

    // ------------------------------------------------------

    private static double[] MatVecProd(double[][] M,
      double[] v)
    {
      // return a regular vector
      int nRows = M.Length; int nCols = M[0].Length;
      int n = v.Length;
      if (nCols != n)
        throw new Exception("non-conform in MatVecProd");

      double[] result = new double[nRows];
      for (int i = 0; i < nRows; ++i)
        for (int k = 0; k < nCols; ++k)
          result[i] += M[i][k] * v[k];

      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatInvCholesky(double[][] A)
    {
      // A must be square symmetric positive definite
      // A = L * Lt, inv(A) = inv(Lt) * inv(L)
      double[][] L = MatDecompCholesky(A); // lower tri
      double[][] Lt = MatTranspose(L); // upper tri
      double[][] invL = MatInvLowerTri(L);
      double[][] invLt = MatInvUpperTri(Lt);
      double[][] result = MatProduct(invLt, invL);
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatDecompCholesky(double[][] A)
    {
      // A must be square, symmetric, positive definite
      int m = A.Length; int n = A[0].Length;  // m == n
      double[][] L = new double[n][];
      for (int i = 0; i < n; ++i)
        L[i] = new double[n];

      for (int i = 0; i < n; ++i)
      {
        for (int j = 0; j <= i; ++j)
        {
          double sum = 0.0;
          for (int k = 0; k < j; ++k)
            sum += L[i][k] * L[j][k];
          if (i == j)
          {
            double tmp = A[i][i] - sum;
            if (tmp < 0.0)
              throw new
                Exception("decomp Cholesky fatal");
            L[i][j] = Math.Sqrt(tmp);
          }
          else
          {
            if (L[j][j] == 0.0)
              throw new
                Exception("decomp Cholesky fatal ");
            L[i][j] = (A[i][j] - sum) / L[j][j];
          }
        } // j
      } // i

      return L;
    }

    // ------------------------------------------------------

    private static double[][] MatIdentity(int n)
    {
      double[][] result = MatMake(n, n);
      for (int i = 0; i < n; ++i)
        result[i][i] = 1.0;
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatInvLowerTri(double[][] A)
    {
      int n = A.Length;  // must be square matrix
      double[][] result = MatIdentity(n);

      for (int k = 0; k < n; ++k)
      {
        for (int j = 0; j < n; ++j)
        {
          for (int i = 0; i < k; ++i)
          {
            result[k][j] -= result[i][j] * A[k][i];
          }
          result[k][j] /= (A[k][k] + 1.0e-8);
        }
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatInvUpperTri(double[][] A)
    {
      int n = A.Length;  // must be square matrix
      double[][] result = MatIdentity(n);

      for (int k = 0; k < n; ++k)
      {
        for (int j = 0; j < n; ++j)
        {
          for (int i = 0; i < k; ++i)
          {
            result[j][k] -= result[j][i] * A[i][k];
          }
          result[j][k] /= (A[k][k] + 1.0e-8); // avoid 0
        }
      }
      return result;
    }
    // ------------------------------------------------------
  } // class LinearRegressor

} // ns

Training data:


# diabetes_norm_train_342.txt
# cols [0] to [9] predictors. col [10] target
# norm division constants:
# 100, -1, 100, 1000, 1000, 1000, 100, 10, 10, 1000, 1000
#
0.5900, 1.0000, 0.3210, 0.1010, 0.1570, 0.0932, 0.3800, 0.4000, 0.4860, 0.0870, 0.1510
0.4800, 0.0000, 0.2160, 0.0870, 0.1830, 0.1032, 0.7000, 0.3000, 0.3892, 0.0690, 0.0750
0.7200, 1.0000, 0.3050, 0.0930, 0.1560, 0.0936, 0.4100, 0.4000, 0.4673, 0.0850, 0.1410
0.2400, 0.0000, 0.2530, 0.0840, 0.1980, 0.1314, 0.4000, 0.5000, 0.4890, 0.0890, 0.2060
0.5000, 0.0000, 0.2300, 0.1010, 0.1920, 0.1254, 0.5200, 0.4000, 0.4291, 0.0800, 0.1350
0.2300, 0.0000, 0.2260, 0.0890, 0.1390, 0.0648, 0.6100, 0.2000, 0.4190, 0.0680, 0.0970
0.3600, 1.0000, 0.2200, 0.0900, 0.1600, 0.0996, 0.5000, 0.3000, 0.3951, 0.0820, 0.1380
0.6600, 1.0000, 0.2620, 0.1140, 0.2550, 0.1850, 0.5600, 0.4550, 0.4249, 0.0920, 0.0630
0.6000, 1.0000, 0.3210, 0.0830, 0.1790, 0.1194, 0.4200, 0.4000, 0.4477, 0.0940, 0.1100
0.2900, 0.0000, 0.3000, 0.0850, 0.1800, 0.0934, 0.4300, 0.4000, 0.5385, 0.0880, 0.3100
0.2200, 0.0000, 0.1860, 0.0970, 0.1140, 0.0576, 0.4600, 0.2000, 0.3951, 0.0830, 0.1010
0.5600, 1.0000, 0.2800, 0.0850, 0.1840, 0.1448, 0.3200, 0.6000, 0.3584, 0.0770, 0.0690
0.5300, 0.0000, 0.2370, 0.0920, 0.1860, 0.1092, 0.6200, 0.3000, 0.4304, 0.0810, 0.1790
0.5000, 1.0000, 0.2620, 0.0970, 0.1860, 0.1054, 0.4900, 0.4000, 0.5063, 0.0880, 0.1850
0.6100, 0.0000, 0.2400, 0.0910, 0.2020, 0.1154, 0.7200, 0.3000, 0.4291, 0.0730, 0.1180
0.3400, 1.0000, 0.2470, 0.1180, 0.2540, 0.1842, 0.3900, 0.7000, 0.5037, 0.0810, 0.1710
0.4700, 0.0000, 0.3030, 0.1090, 0.2070, 0.1002, 0.7000, 0.3000, 0.5215, 0.0980, 0.1660
0.6800, 1.0000, 0.2750, 0.1110, 0.2140, 0.1470, 0.3900, 0.5000, 0.4942, 0.0910, 0.1440
0.3800, 0.0000, 0.2540, 0.0840, 0.1620, 0.1030, 0.4200, 0.4000, 0.4443, 0.0870, 0.0970
0.4100, 0.0000, 0.2470, 0.0830, 0.1870, 0.1082, 0.6000, 0.3000, 0.4543, 0.0780, 0.1680
0.3500, 0.0000, 0.2110, 0.0820, 0.1560, 0.0878, 0.5000, 0.3000, 0.4511, 0.0950, 0.0680
0.2500, 1.0000, 0.2430, 0.0950, 0.1620, 0.0986, 0.5400, 0.3000, 0.3850, 0.0870, 0.0490
0.2500, 0.0000, 0.2600, 0.0920, 0.1870, 0.1204, 0.5600, 0.3000, 0.3970, 0.0880, 0.0680
0.6100, 1.0000, 0.3200, 0.1037, 0.2100, 0.0852, 0.3500, 0.6000, 0.6107, 0.1240, 0.2450
0.3100, 0.0000, 0.2970, 0.0880, 0.1670, 0.1034, 0.4800, 0.4000, 0.4357, 0.0780, 0.1840
0.3000, 1.0000, 0.2520, 0.0830, 0.1780, 0.1184, 0.3400, 0.5000, 0.4852, 0.0830, 0.2020
0.1900, 0.0000, 0.1920, 0.0870, 0.1240, 0.0540, 0.5700, 0.2000, 0.4174, 0.0900, 0.1370
0.4200, 0.0000, 0.3190, 0.0830, 0.1580, 0.0876, 0.5300, 0.3000, 0.4466, 0.1010, 0.0850
0.6300, 0.0000, 0.2440, 0.0730, 0.1600, 0.0914, 0.4800, 0.3000, 0.4635, 0.0780, 0.1310
0.6700, 1.0000, 0.2580, 0.1130, 0.1580, 0.0542, 0.6400, 0.2000, 0.5293, 0.1040, 0.2830
0.3200, 0.0000, 0.3050, 0.0890, 0.1820, 0.1106, 0.5600, 0.3000, 0.4344, 0.0890, 0.1290
0.4200, 0.0000, 0.2030, 0.0710, 0.1610, 0.0812, 0.6600, 0.2000, 0.4234, 0.0810, 0.0590
0.5800, 1.0000, 0.3800, 0.1030, 0.1500, 0.1072, 0.2200, 0.7000, 0.4644, 0.0980, 0.3410
0.5700, 0.0000, 0.2170, 0.0940, 0.1570, 0.0580, 0.8200, 0.2000, 0.4443, 0.0920, 0.0870
0.5300, 0.0000, 0.2050, 0.0780, 0.1470, 0.0842, 0.5200, 0.3000, 0.3989, 0.0750, 0.0650
0.6200, 1.0000, 0.2350, 0.0803, 0.2250, 0.1128, 0.8600, 0.2620, 0.4875, 0.0960, 0.1020
0.5200, 0.0000, 0.2850, 0.1100, 0.1950, 0.0972, 0.6000, 0.3000, 0.5242, 0.0850, 0.2650
0.4600, 0.0000, 0.2740, 0.0780, 0.1710, 0.0880, 0.5800, 0.3000, 0.4828, 0.0900, 0.2760
0.4800, 1.0000, 0.3300, 0.1230, 0.2530, 0.1636, 0.4400, 0.6000, 0.5425, 0.0970, 0.2520
0.4800, 1.0000, 0.2770, 0.0730, 0.1910, 0.1194, 0.4600, 0.4000, 0.4852, 0.0920, 0.0900
0.5000, 1.0000, 0.2560, 0.1010, 0.2290, 0.1622, 0.4300, 0.5000, 0.4779, 0.1140, 0.1000
0.2100, 0.0000, 0.2010, 0.0630, 0.1350, 0.0690, 0.5400, 0.3000, 0.4094, 0.0890, 0.0550
0.3200, 1.0000, 0.2540, 0.0903, 0.1530, 0.1004, 0.3400, 0.4500, 0.4533, 0.0830, 0.0610
0.5400, 0.0000, 0.2420, 0.0740, 0.2040, 0.1090, 0.8200, 0.2000, 0.4174, 0.1090, 0.0920
0.6100, 1.0000, 0.3270, 0.0970, 0.1770, 0.1184, 0.2900, 0.6000, 0.4997, 0.0870, 0.2590
0.5600, 1.0000, 0.2310, 0.1040, 0.1810, 0.1164, 0.4700, 0.4000, 0.4477, 0.0790, 0.0530
0.3300, 0.0000, 0.2530, 0.0850, 0.1550, 0.0850, 0.5100, 0.3000, 0.4554, 0.0700, 0.1900
0.2700, 0.0000, 0.1960, 0.0780, 0.1280, 0.0680, 0.4300, 0.3000, 0.4443, 0.0710, 0.1420
0.6700, 1.0000, 0.2250, 0.0980, 0.1910, 0.1192, 0.6100, 0.3000, 0.3989, 0.0860, 0.0750
0.3700, 1.0000, 0.2770, 0.0930, 0.1800, 0.1194, 0.3000, 0.6000, 0.5030, 0.0880, 0.1420
0.5800, 0.0000, 0.2570, 0.0990, 0.1570, 0.0916, 0.4900, 0.3000, 0.4407, 0.0930, 0.1550
0.6500, 1.0000, 0.2790, 0.1030, 0.1590, 0.0968, 0.4200, 0.4000, 0.4615, 0.0860, 0.2250
0.3400, 0.0000, 0.2550, 0.0930, 0.2180, 0.1440, 0.5700, 0.4000, 0.4443, 0.0880, 0.0590
0.4600, 0.0000, 0.2490, 0.1150, 0.1980, 0.1296, 0.5400, 0.4000, 0.4277, 0.1030, 0.1040
0.3500, 0.0000, 0.2870, 0.0970, 0.2040, 0.1268, 0.6400, 0.3000, 0.4190, 0.0930, 0.1820
0.3700, 0.0000, 0.2180, 0.0840, 0.1840, 0.1010, 0.7300, 0.3000, 0.3912, 0.0930, 0.1280
0.3700, 0.0000, 0.3020, 0.0870, 0.1660, 0.0960, 0.4000, 0.4150, 0.5011, 0.0870, 0.0520
0.4100, 0.0000, 0.2050, 0.0800, 0.1240, 0.0488, 0.6400, 0.2000, 0.4025, 0.0750, 0.0370
0.6000, 0.0000, 0.2040, 0.1050, 0.1980, 0.0784, 0.9900, 0.2000, 0.4635, 0.0790, 0.1700
0.6600, 1.0000, 0.2400, 0.0980, 0.2360, 0.1464, 0.5800, 0.4000, 0.5063, 0.0960, 0.1700
0.2900, 0.0000, 0.2600, 0.0830, 0.1410, 0.0652, 0.6400, 0.2000, 0.4078, 0.0830, 0.0610
0.3700, 1.0000, 0.2680, 0.0790, 0.1570, 0.0980, 0.2800, 0.6000, 0.5043, 0.0960, 0.1440
0.4100, 1.0000, 0.2570, 0.0830, 0.1810, 0.1066, 0.6600, 0.3000, 0.3738, 0.0850, 0.0520
0.3900, 0.0000, 0.2290, 0.0770, 0.2040, 0.1432, 0.4600, 0.4000, 0.4304, 0.0740, 0.1280
0.6700, 1.0000, 0.2400, 0.0830, 0.1430, 0.0772, 0.4900, 0.3000, 0.4431, 0.0940, 0.0710
0.3600, 1.0000, 0.2410, 0.1120, 0.1930, 0.1250, 0.3500, 0.6000, 0.5106, 0.0950, 0.1630
0.4600, 1.0000, 0.2470, 0.0850, 0.1740, 0.1232, 0.3000, 0.6000, 0.4644, 0.0960, 0.1500
0.6000, 1.0000, 0.2500, 0.0897, 0.1850, 0.1208, 0.4600, 0.4020, 0.4511, 0.0920, 0.0970
0.5900, 1.0000, 0.2360, 0.0830, 0.1650, 0.1000, 0.4700, 0.4000, 0.4500, 0.0920, 0.1600
0.5300, 0.0000, 0.2210, 0.0930, 0.1340, 0.0762, 0.4600, 0.3000, 0.4078, 0.0960, 0.1780
0.4800, 0.0000, 0.1990, 0.0910, 0.1890, 0.1096, 0.6900, 0.3000, 0.3951, 0.1010, 0.0480
0.4800, 0.0000, 0.2950, 0.1310, 0.2070, 0.1322, 0.4700, 0.4000, 0.4935, 0.1060, 0.2700
0.6600, 1.0000, 0.2600, 0.0910, 0.2640, 0.1466, 0.6500, 0.4000, 0.5568, 0.0870, 0.2020
0.5200, 1.0000, 0.2450, 0.0940, 0.2170, 0.1494, 0.4800, 0.5000, 0.4585, 0.0890, 0.1110
0.5200, 1.0000, 0.2660, 0.1110, 0.2090, 0.1264, 0.6100, 0.3000, 0.4682, 0.1090, 0.0850
0.4600, 1.0000, 0.2350, 0.0870, 0.1810, 0.1148, 0.4400, 0.4000, 0.4710, 0.0980, 0.0420
0.4000, 1.0000, 0.2900, 0.1150, 0.0970, 0.0472, 0.3500, 0.2770, 0.4304, 0.0950, 0.1700
0.2200, 0.0000, 0.2300, 0.0730, 0.1610, 0.0978, 0.5400, 0.3000, 0.3829, 0.0910, 0.2000
0.5000, 0.0000, 0.2100, 0.0880, 0.1400, 0.0718, 0.3500, 0.4000, 0.5112, 0.0710, 0.2520
0.2000, 0.0000, 0.2290, 0.0870, 0.1910, 0.1282, 0.5300, 0.4000, 0.3892, 0.0850, 0.1130
0.6800, 0.0000, 0.2750, 0.1070, 0.2410, 0.1496, 0.6400, 0.4000, 0.4920, 0.0900, 0.1430
0.5200, 1.0000, 0.2430, 0.0860, 0.1970, 0.1336, 0.4400, 0.5000, 0.4575, 0.0910, 0.0510
0.4400, 0.0000, 0.2310, 0.0870, 0.2130, 0.1264, 0.7700, 0.3000, 0.3871, 0.0720, 0.0520
0.3800, 0.0000, 0.2730, 0.0810, 0.1460, 0.0816, 0.4700, 0.3000, 0.4466, 0.0810, 0.2100
0.4900, 0.0000, 0.2270, 0.0653, 0.1680, 0.0962, 0.6200, 0.2710, 0.3892, 0.0600, 0.0650
0.6100, 0.0000, 0.3300, 0.0950, 0.1820, 0.1148, 0.5400, 0.3000, 0.4190, 0.0740, 0.1410
0.2900, 1.0000, 0.1940, 0.0830, 0.1520, 0.1058, 0.3900, 0.4000, 0.3584, 0.0830, 0.0550
0.6100, 0.0000, 0.2580, 0.0980, 0.2350, 0.1258, 0.7600, 0.3000, 0.5112, 0.0820, 0.1340
0.3400, 1.0000, 0.2260, 0.0750, 0.1660, 0.0918, 0.6000, 0.3000, 0.4263, 0.1080, 0.0420
0.3600, 0.0000, 0.2190, 0.0890, 0.1890, 0.1052, 0.6800, 0.3000, 0.4369, 0.0960, 0.1110
0.5200, 0.0000, 0.2400, 0.0830, 0.1670, 0.0866, 0.7100, 0.2000, 0.3850, 0.0940, 0.0980
0.6100, 0.0000, 0.3120, 0.0790, 0.2350, 0.1568, 0.4700, 0.5000, 0.5050, 0.0960, 0.1640
0.4300, 0.0000, 0.2680, 0.1230, 0.1930, 0.1022, 0.6700, 0.3000, 0.4779, 0.0940, 0.0480
0.3500, 0.0000, 0.2040, 0.0650, 0.1870, 0.1056, 0.6700, 0.2790, 0.4277, 0.0780, 0.0960
0.2700, 0.0000, 0.2480, 0.0910, 0.1890, 0.1068, 0.6900, 0.3000, 0.4190, 0.0690, 0.0900
0.2900, 0.0000, 0.2100, 0.0710, 0.1560, 0.0970, 0.3800, 0.4000, 0.4654, 0.0900, 0.1620
0.6400, 1.0000, 0.2730, 0.1090, 0.1860, 0.1076, 0.3800, 0.5000, 0.5308, 0.0990, 0.1500
0.4100, 0.0000, 0.3460, 0.0873, 0.2050, 0.1426, 0.4100, 0.5000, 0.4673, 0.1100, 0.2790
0.4900, 1.0000, 0.2590, 0.0910, 0.1780, 0.1066, 0.5200, 0.3000, 0.4575, 0.0750, 0.0920
0.4800, 0.0000, 0.2040, 0.0980, 0.2090, 0.1394, 0.4600, 0.5000, 0.4771, 0.0780, 0.0830
0.5300, 0.0000, 0.2800, 0.0880, 0.2330, 0.1438, 0.5800, 0.4000, 0.5050, 0.0910, 0.1280
0.5300, 1.0000, 0.2220, 0.1130, 0.1970, 0.1152, 0.6700, 0.3000, 0.4304, 0.1000, 0.1020
0.2300, 0.0000, 0.2900, 0.0900, 0.2160, 0.1314, 0.6500, 0.3000, 0.4585, 0.0910, 0.3020
0.6500, 1.0000, 0.3020, 0.0980, 0.2190, 0.1606, 0.4000, 0.5000, 0.4522, 0.0840, 0.1980
0.4100, 0.0000, 0.3240, 0.0940, 0.1710, 0.1044, 0.5600, 0.3000, 0.3970, 0.0760, 0.0950
0.5500, 1.0000, 0.2340, 0.0830, 0.1660, 0.1016, 0.4600, 0.4000, 0.4522, 0.0960, 0.0530
0.2200, 0.0000, 0.1930, 0.0820, 0.1560, 0.0932, 0.5200, 0.3000, 0.3989, 0.0710, 0.1340
0.5600, 0.0000, 0.3100, 0.0787, 0.1870, 0.1414, 0.3400, 0.5500, 0.4060, 0.0900, 0.1440
0.5400, 1.0000, 0.3060, 0.1033, 0.1440, 0.0798, 0.3000, 0.4800, 0.5142, 0.1010, 0.2320
0.5900, 1.0000, 0.2550, 0.0953, 0.1900, 0.1394, 0.3500, 0.5430, 0.4357, 0.1170, 0.0810
0.6000, 1.0000, 0.2340, 0.0880, 0.1530, 0.0898, 0.5800, 0.3000, 0.3258, 0.0950, 0.1040
0.5400, 0.0000, 0.2680, 0.0870, 0.2060, 0.1220, 0.6800, 0.3000, 0.4382, 0.0800, 0.0590
0.2500, 0.0000, 0.2830, 0.0870, 0.1930, 0.1280, 0.4900, 0.4000, 0.4382, 0.0920, 0.2460
0.5400, 1.0000, 0.2770, 0.1130, 0.2000, 0.1284, 0.3700, 0.5000, 0.5153, 0.1130, 0.2970
0.5500, 0.0000, 0.3660, 0.1130, 0.1990, 0.0944, 0.4300, 0.4630, 0.5730, 0.0970, 0.2580
0.4000, 1.0000, 0.2650, 0.0930, 0.2360, 0.1470, 0.3700, 0.7000, 0.5561, 0.0920, 0.2290
0.6200, 1.0000, 0.3180, 0.1150, 0.1990, 0.1286, 0.4400, 0.5000, 0.4883, 0.0980, 0.2750
0.6500, 0.0000, 0.2440, 0.1200, 0.2220, 0.1356, 0.3700, 0.6000, 0.5509, 0.1240, 0.2810
0.3300, 1.0000, 0.2540, 0.1020, 0.2060, 0.1410, 0.3900, 0.5000, 0.4868, 0.1050, 0.1790
0.5300, 0.0000, 0.2200, 0.0940, 0.1750, 0.0880, 0.5900, 0.3000, 0.4942, 0.0980, 0.2000
0.3500, 0.0000, 0.2680, 0.0980, 0.1620, 0.1036, 0.4500, 0.4000, 0.4205, 0.0860, 0.2000
0.6600, 0.0000, 0.2800, 0.1010, 0.1950, 0.1292, 0.4000, 0.5000, 0.4860, 0.0940, 0.1730
0.6200, 1.0000, 0.3390, 0.1010, 0.2210, 0.1564, 0.3500, 0.6000, 0.4997, 0.1030, 0.1800
0.5000, 1.0000, 0.2960, 0.0943, 0.3000, 0.2424, 0.3300, 0.9090, 0.4812, 0.1090, 0.0840
0.4700, 0.0000, 0.2860, 0.0970, 0.1640, 0.0906, 0.5600, 0.3000, 0.4466, 0.0880, 0.1210
0.4700, 1.0000, 0.2560, 0.0940, 0.1650, 0.0748, 0.4000, 0.4000, 0.5526, 0.0930, 0.1610
0.2400, 0.0000, 0.2070, 0.0870, 0.1490, 0.0806, 0.6100, 0.2000, 0.3611, 0.0780, 0.0990
0.5800, 1.0000, 0.2620, 0.0910, 0.2170, 0.1242, 0.7100, 0.3000, 0.4691, 0.0680, 0.1090
0.3400, 0.0000, 0.2060, 0.0870, 0.1850, 0.1122, 0.5800, 0.3000, 0.4304, 0.0740, 0.1150
0.5100, 0.0000, 0.2790, 0.0960, 0.1960, 0.1222, 0.4200, 0.5000, 0.5069, 0.1200, 0.2680
0.3100, 1.0000, 0.3530, 0.1250, 0.1870, 0.1124, 0.4800, 0.4000, 0.4890, 0.1090, 0.2740
0.2200, 0.0000, 0.1990, 0.0750, 0.1750, 0.1086, 0.5400, 0.3000, 0.4127, 0.0720, 0.1580
0.5300, 1.0000, 0.2440, 0.0920, 0.2140, 0.1460, 0.5000, 0.4000, 0.4500, 0.0970, 0.1070
0.3700, 1.0000, 0.2140, 0.0830, 0.1280, 0.0696, 0.4900, 0.3000, 0.3850, 0.0840, 0.0830
0.2800, 0.0000, 0.3040, 0.0850, 0.1980, 0.1156, 0.6700, 0.3000, 0.4344, 0.0800, 0.1030
0.4700, 0.0000, 0.3160, 0.0840, 0.1540, 0.0880, 0.3000, 0.5100, 0.5199, 0.1050, 0.2720
0.2300, 0.0000, 0.1880, 0.0780, 0.1450, 0.0720, 0.6300, 0.2000, 0.3912, 0.0860, 0.0850
0.5000, 0.0000, 0.3100, 0.1230, 0.1780, 0.1050, 0.4800, 0.4000, 0.4828, 0.0880, 0.2800
0.5800, 1.0000, 0.3670, 0.1170, 0.1660, 0.0938, 0.4400, 0.4000, 0.4949, 0.1090, 0.3360
0.5500, 0.0000, 0.3210, 0.1100, 0.1640, 0.0842, 0.4200, 0.4000, 0.5242, 0.0900, 0.2810
0.6000, 1.0000, 0.2770, 0.1070, 0.1670, 0.1146, 0.3800, 0.4000, 0.4277, 0.0950, 0.1180
0.4100, 0.0000, 0.3080, 0.0810, 0.2140, 0.1520, 0.2800, 0.7600, 0.5136, 0.1230, 0.3170
0.6000, 1.0000, 0.2750, 0.1060, 0.2290, 0.1438, 0.5100, 0.4000, 0.5142, 0.0910, 0.2350
0.4000, 0.0000, 0.2690, 0.0920, 0.2030, 0.1198, 0.7000, 0.3000, 0.4190, 0.0810, 0.0600
0.5700, 1.0000, 0.3070, 0.0900, 0.2040, 0.1478, 0.3400, 0.6000, 0.4710, 0.0930, 0.1740
0.3700, 0.0000, 0.3830, 0.1130, 0.1650, 0.0946, 0.5300, 0.3000, 0.4466, 0.0790, 0.2590
0.4000, 1.0000, 0.3190, 0.0950, 0.1980, 0.1356, 0.3800, 0.5000, 0.4804, 0.0930, 0.1780
0.3300, 0.0000, 0.3500, 0.0890, 0.2000, 0.1304, 0.4200, 0.4760, 0.4927, 0.1010, 0.1280
0.3200, 1.0000, 0.2780, 0.0890, 0.2160, 0.1462, 0.5500, 0.4000, 0.4304, 0.0910, 0.0960
0.3500, 1.0000, 0.2590, 0.0810, 0.1740, 0.1024, 0.3100, 0.6000, 0.5313, 0.0820, 0.1260
0.5500, 0.0000, 0.3290, 0.1020, 0.1640, 0.1062, 0.4100, 0.4000, 0.4431, 0.0890, 0.2880
0.4900, 0.0000, 0.2600, 0.0930, 0.1830, 0.1002, 0.6400, 0.3000, 0.4543, 0.0880, 0.0880
0.3900, 1.0000, 0.2630, 0.1150, 0.2180, 0.1582, 0.3200, 0.7000, 0.4935, 0.1090, 0.2920
0.6000, 1.0000, 0.2230, 0.1130, 0.1860, 0.1258, 0.4600, 0.4000, 0.4263, 0.0940, 0.0710
0.6700, 1.0000, 0.2830, 0.0930, 0.2040, 0.1322, 0.4900, 0.4000, 0.4736, 0.0920, 0.1970
0.4100, 1.0000, 0.3200, 0.1090, 0.2510, 0.1706, 0.4900, 0.5000, 0.5056, 0.1030, 0.1860
0.4400, 0.0000, 0.2540, 0.0950, 0.1620, 0.0926, 0.5300, 0.3000, 0.4407, 0.0830, 0.0250
0.4800, 1.0000, 0.2330, 0.0893, 0.2120, 0.1428, 0.4600, 0.4610, 0.4754, 0.0980, 0.0840
0.4500, 0.0000, 0.2030, 0.0743, 0.1900, 0.1262, 0.4900, 0.3880, 0.4304, 0.0790, 0.0960
0.4700, 0.0000, 0.3040, 0.1200, 0.1990, 0.1200, 0.4600, 0.4000, 0.5106, 0.0870, 0.1950
0.4600, 0.0000, 0.2060, 0.0730, 0.1720, 0.1070, 0.5100, 0.3000, 0.4249, 0.0800, 0.0530
0.3600, 1.0000, 0.3230, 0.1150, 0.2860, 0.1994, 0.3900, 0.7000, 0.5472, 0.1120, 0.2170
0.3400, 0.0000, 0.2920, 0.0730, 0.1720, 0.1082, 0.4900, 0.4000, 0.4304, 0.0910, 0.1720
0.5300, 1.0000, 0.3310, 0.1170, 0.1830, 0.1190, 0.4800, 0.4000, 0.4382, 0.1060, 0.1310
0.6100, 0.0000, 0.2460, 0.1010, 0.2090, 0.1068, 0.7700, 0.3000, 0.4836, 0.0880, 0.2140
0.3700, 0.0000, 0.2020, 0.0810, 0.1620, 0.0878, 0.6300, 0.3000, 0.4025, 0.0880, 0.0590
0.3300, 1.0000, 0.2080, 0.0840, 0.1250, 0.0702, 0.4600, 0.3000, 0.3784, 0.0660, 0.0700
0.6800, 0.0000, 0.3280, 0.1057, 0.2050, 0.1164, 0.4000, 0.5130, 0.5493, 0.1170, 0.2200
0.4900, 1.0000, 0.3190, 0.0940, 0.2340, 0.1558, 0.3400, 0.7000, 0.5398, 0.1220, 0.2680
0.4800, 0.0000, 0.2390, 0.1090, 0.2320, 0.1052, 0.3700, 0.6000, 0.6107, 0.0960, 0.1520
0.5500, 1.0000, 0.2450, 0.0840, 0.1790, 0.1058, 0.6600, 0.3000, 0.3584, 0.0870, 0.0470
0.4300, 0.0000, 0.2210, 0.0660, 0.1340, 0.0772, 0.4500, 0.3000, 0.4078, 0.0800, 0.0740
0.6000, 1.0000, 0.3300, 0.0970, 0.2170, 0.1256, 0.4500, 0.5000, 0.5447, 0.1120, 0.2950
0.3100, 1.0000, 0.1900, 0.0930, 0.1370, 0.0730, 0.4700, 0.3000, 0.4443, 0.0780, 0.1010
0.5300, 1.0000, 0.2730, 0.0820, 0.1190, 0.0550, 0.3900, 0.3000, 0.4828, 0.0930, 0.1510
0.6700, 0.0000, 0.2280, 0.0870, 0.1660, 0.0986, 0.5200, 0.3000, 0.4344, 0.0920, 0.1270
0.6100, 1.0000, 0.2820, 0.1060, 0.2040, 0.1320, 0.5200, 0.4000, 0.4605, 0.0960, 0.2370
0.6200, 0.0000, 0.2890, 0.0873, 0.2060, 0.1272, 0.3300, 0.6240, 0.5434, 0.0990, 0.2250
0.6000, 0.0000, 0.2560, 0.0870, 0.2070, 0.1258, 0.6900, 0.3000, 0.4111, 0.0840, 0.0810
0.4200, 0.0000, 0.2490, 0.0910, 0.2040, 0.1418, 0.3800, 0.5000, 0.4796, 0.0890, 0.1510
0.3800, 1.0000, 0.2680, 0.1050, 0.1810, 0.1192, 0.3700, 0.5000, 0.4820, 0.0910, 0.1070
0.6200, 0.0000, 0.2240, 0.0790, 0.2220, 0.1474, 0.5900, 0.4000, 0.4357, 0.0760, 0.0640
0.6100, 1.0000, 0.2690, 0.1110, 0.2360, 0.1724, 0.3900, 0.6000, 0.4812, 0.0890, 0.1380
0.6100, 1.0000, 0.2310, 0.1130, 0.1860, 0.1144, 0.4700, 0.4000, 0.4812, 0.1050, 0.1850
0.5300, 0.0000, 0.2860, 0.0880, 0.1710, 0.0988, 0.4100, 0.4000, 0.5050, 0.0990, 0.2650
0.2800, 1.0000, 0.2470, 0.0970, 0.1750, 0.0996, 0.3200, 0.5000, 0.5380, 0.0870, 0.1010
0.2600, 1.0000, 0.3030, 0.0890, 0.2180, 0.1522, 0.3100, 0.7000, 0.5159, 0.0820, 0.1370
0.3000, 0.0000, 0.2130, 0.0870, 0.1340, 0.0630, 0.6300, 0.2000, 0.3689, 0.0660, 0.1430
0.5000, 0.0000, 0.2610, 0.1090, 0.2430, 0.1606, 0.6200, 0.4000, 0.4625, 0.0890, 0.1410
0.4800, 0.0000, 0.2020, 0.0950, 0.1870, 0.1174, 0.5300, 0.4000, 0.4419, 0.0850, 0.0790
0.5100, 0.0000, 0.2520, 0.1030, 0.1760, 0.1122, 0.3700, 0.5000, 0.4898, 0.0900, 0.2920
0.4700, 1.0000, 0.2250, 0.0820, 0.1310, 0.0668, 0.4100, 0.3000, 0.4754, 0.0890, 0.1780
0.6400, 1.0000, 0.2350, 0.0970, 0.2030, 0.1290, 0.5900, 0.3000, 0.4318, 0.0770, 0.0910
0.5100, 1.0000, 0.2590, 0.0760, 0.2400, 0.1690, 0.3900, 0.6000, 0.5075, 0.0960, 0.1160
0.3000, 0.0000, 0.2090, 0.1040, 0.1520, 0.0838, 0.4700, 0.3000, 0.4663, 0.0970, 0.0860
0.5600, 1.0000, 0.2870, 0.0990, 0.2080, 0.1464, 0.3900, 0.5000, 0.4727, 0.0970, 0.1220
0.4200, 0.0000, 0.2210, 0.0850, 0.2130, 0.1386, 0.6000, 0.4000, 0.4277, 0.0940, 0.0720
0.6200, 1.0000, 0.2670, 0.1150, 0.1830, 0.1240, 0.3500, 0.5000, 0.4788, 0.1000, 0.1290
0.3400, 0.0000, 0.3140, 0.0870, 0.1490, 0.0938, 0.4600, 0.3000, 0.3829, 0.0770, 0.1420
0.6000, 0.0000, 0.2220, 0.1047, 0.2210, 0.1054, 0.6000, 0.3680, 0.5628, 0.0930, 0.0900
0.6400, 0.0000, 0.2100, 0.0923, 0.2270, 0.1468, 0.6500, 0.3490, 0.4331, 0.1020, 0.1580
0.3900, 1.0000, 0.2120, 0.0900, 0.1820, 0.1104, 0.6000, 0.3000, 0.4060, 0.0980, 0.0390
0.7100, 1.0000, 0.2650, 0.1050, 0.2810, 0.1736, 0.5500, 0.5000, 0.5568, 0.0840, 0.1960
0.4800, 1.0000, 0.2920, 0.1100, 0.2180, 0.1516, 0.3900, 0.6000, 0.4920, 0.0980, 0.2220
0.7900, 1.0000, 0.2700, 0.1030, 0.1690, 0.1108, 0.3700, 0.5000, 0.4663, 0.1100, 0.2770
0.4000, 0.0000, 0.3070, 0.0990, 0.1770, 0.0854, 0.5000, 0.4000, 0.5338, 0.0850, 0.0990
0.4900, 1.0000, 0.2880, 0.0920, 0.2070, 0.1400, 0.4400, 0.5000, 0.4745, 0.0920, 0.1960
0.5100, 0.0000, 0.3060, 0.1030, 0.1980, 0.1066, 0.5700, 0.3000, 0.5148, 0.1000, 0.2020
0.5700, 0.0000, 0.3010, 0.1170, 0.2020, 0.1396, 0.4200, 0.5000, 0.4625, 0.1200, 0.1550
0.5900, 1.0000, 0.2470, 0.1140, 0.1520, 0.1048, 0.2900, 0.5000, 0.4511, 0.0880, 0.0770
0.5100, 0.0000, 0.2770, 0.0990, 0.2290, 0.1456, 0.6900, 0.3000, 0.4277, 0.0770, 0.1910
0.7400, 0.0000, 0.2980, 0.1010, 0.1710, 0.1048, 0.5000, 0.3000, 0.4394, 0.0860, 0.0700
0.6700, 0.0000, 0.2670, 0.1050, 0.2250, 0.1354, 0.6900, 0.3000, 0.4635, 0.0960, 0.0730
0.4900, 0.0000, 0.1980, 0.0880, 0.1880, 0.1148, 0.5700, 0.3000, 0.4394, 0.0930, 0.0490
0.5700, 0.0000, 0.2330, 0.0880, 0.1550, 0.0636, 0.7800, 0.2000, 0.4205, 0.0780, 0.0650
0.5600, 1.0000, 0.3510, 0.1230, 0.1640, 0.0950, 0.3800, 0.4000, 0.5043, 0.1170, 0.2630
0.5200, 1.0000, 0.2970, 0.1090, 0.2280, 0.1628, 0.3100, 0.8000, 0.5142, 0.1030, 0.2480
0.6900, 0.0000, 0.2930, 0.1240, 0.2230, 0.1390, 0.5400, 0.4000, 0.5011, 0.1020, 0.2960
0.3700, 0.0000, 0.2030, 0.0830, 0.1850, 0.1246, 0.3800, 0.5000, 0.4719, 0.0880, 0.2140
0.2400, 0.0000, 0.2250, 0.0890, 0.1410, 0.0680, 0.5200, 0.3000, 0.4654, 0.0840, 0.1850
0.5500, 1.0000, 0.2270, 0.0930, 0.1540, 0.0942, 0.5300, 0.3000, 0.3526, 0.0750, 0.0780
0.3600, 0.0000, 0.2280, 0.0870, 0.1780, 0.1160, 0.4100, 0.4000, 0.4654, 0.0820, 0.0930
0.4200, 1.0000, 0.2400, 0.1070, 0.1500, 0.0850, 0.4400, 0.3000, 0.4654, 0.0960, 0.2520
0.2100, 0.0000, 0.2420, 0.0760, 0.1470, 0.0770, 0.5300, 0.3000, 0.4443, 0.0790, 0.1500
0.4100, 0.0000, 0.2020, 0.0620, 0.1530, 0.0890, 0.5000, 0.3000, 0.4249, 0.0890, 0.0770
0.5700, 1.0000, 0.2940, 0.1090, 0.1600, 0.0876, 0.3100, 0.5000, 0.5333, 0.0920, 0.2080
0.2000, 1.0000, 0.2210, 0.0870, 0.1710, 0.0996, 0.5800, 0.3000, 0.4205, 0.0780, 0.0770
0.6700, 1.0000, 0.2360, 0.1113, 0.1890, 0.1054, 0.7000, 0.2700, 0.4220, 0.0930, 0.1080
0.3400, 0.0000, 0.2520, 0.0770, 0.1890, 0.1206, 0.5300, 0.4000, 0.4344, 0.0790, 0.1600
0.4100, 1.0000, 0.2490, 0.0860, 0.1920, 0.1150, 0.6100, 0.3000, 0.4382, 0.0940, 0.0530
0.3800, 1.0000, 0.3300, 0.0780, 0.3010, 0.2150, 0.5000, 0.6020, 0.5193, 0.1080, 0.2200
0.5100, 0.0000, 0.2350, 0.1010, 0.1950, 0.1210, 0.5100, 0.4000, 0.4745, 0.0940, 0.1540
0.5200, 1.0000, 0.2640, 0.0913, 0.2180, 0.1520, 0.3900, 0.5590, 0.4905, 0.0990, 0.2590
0.6700, 0.0000, 0.2980, 0.0800, 0.1720, 0.0934, 0.6300, 0.3000, 0.4357, 0.0820, 0.0900
0.6100, 0.0000, 0.3000, 0.1080, 0.1940, 0.1000, 0.5200, 0.3730, 0.5347, 0.1050, 0.2460
0.6700, 1.0000, 0.2500, 0.1117, 0.1460, 0.0934, 0.3300, 0.4420, 0.4585, 0.1030, 0.1240
0.5600, 0.0000, 0.2700, 0.1050, 0.2470, 0.1606, 0.5400, 0.5000, 0.5088, 0.0940, 0.0670
0.6400, 0.0000, 0.2000, 0.0747, 0.1890, 0.1148, 0.6200, 0.3050, 0.4111, 0.0910, 0.0720
0.5800, 1.0000, 0.2550, 0.1120, 0.1630, 0.1106, 0.2900, 0.6000, 0.4762, 0.0860, 0.2570
0.5500, 0.0000, 0.2820, 0.0910, 0.2500, 0.1402, 0.6700, 0.4000, 0.5366, 0.1030, 0.2620
0.6200, 1.0000, 0.3330, 0.1140, 0.1820, 0.1140, 0.3800, 0.5000, 0.5011, 0.0960, 0.2750
0.5700, 1.0000, 0.2560, 0.0960, 0.2000, 0.1330, 0.5200, 0.3850, 0.4318, 0.1050, 0.1770
0.2000, 1.0000, 0.2420, 0.0880, 0.1260, 0.0722, 0.4500, 0.3000, 0.3784, 0.0740, 0.0710
0.5300, 1.0000, 0.2210, 0.0980, 0.1650, 0.1052, 0.4700, 0.4000, 0.4159, 0.0810, 0.0470
0.3200, 1.0000, 0.3140, 0.0890, 0.1530, 0.0842, 0.5600, 0.3000, 0.4159, 0.0900, 0.1870
0.4100, 0.0000, 0.2310, 0.0860, 0.1480, 0.0780, 0.5800, 0.3000, 0.4094, 0.0600, 0.1250
0.6000, 0.0000, 0.2340, 0.0767, 0.2470, 0.1480, 0.6500, 0.3800, 0.5136, 0.0770, 0.0780
0.2600, 0.0000, 0.1880, 0.0830, 0.1910, 0.1036, 0.6900, 0.3000, 0.4522, 0.0690, 0.0510
0.3700, 0.0000, 0.3080, 0.1120, 0.2820, 0.1972, 0.4300, 0.7000, 0.5342, 0.1010, 0.2580
0.4500, 0.0000, 0.3200, 0.1100, 0.2240, 0.1342, 0.4500, 0.5000, 0.5412, 0.0930, 0.2150
0.6700, 0.0000, 0.3160, 0.1160, 0.1790, 0.0904, 0.4100, 0.4000, 0.5472, 0.1000, 0.3030
0.3400, 1.0000, 0.3550, 0.1200, 0.2330, 0.1466, 0.3400, 0.7000, 0.5568, 0.1010, 0.2430
0.5000, 0.0000, 0.3190, 0.0783, 0.2070, 0.1492, 0.3800, 0.5450, 0.4595, 0.0840, 0.0910
0.7100, 0.0000, 0.2950, 0.0970, 0.2270, 0.1516, 0.4500, 0.5000, 0.5024, 0.1080, 0.1500
0.5700, 1.0000, 0.3160, 0.1170, 0.2250, 0.1076, 0.4000, 0.6000, 0.5958, 0.1130, 0.3100
0.4900, 0.0000, 0.2030, 0.0930, 0.1840, 0.1030, 0.6100, 0.3000, 0.4605, 0.0930, 0.1530
0.3500, 0.0000, 0.4130, 0.0810, 0.1680, 0.1028, 0.3700, 0.5000, 0.4949, 0.0940, 0.3460
0.4100, 1.0000, 0.2120, 0.1020, 0.1840, 0.1004, 0.6400, 0.3000, 0.4585, 0.0790, 0.0630
0.7000, 1.0000, 0.2410, 0.0823, 0.1940, 0.1492, 0.3100, 0.6260, 0.4234, 0.1050, 0.0890
0.5200, 0.0000, 0.2300, 0.1070, 0.1790, 0.1237, 0.4250, 0.4210, 0.4159, 0.0930, 0.0500
0.6000, 0.0000, 0.2560, 0.0780, 0.1950, 0.0954, 0.9100, 0.2000, 0.3761, 0.0870, 0.0390
0.6200, 0.0000, 0.2250, 0.1250, 0.2150, 0.0990, 0.9800, 0.2000, 0.4500, 0.0950, 0.1030
0.4400, 1.0000, 0.3820, 0.1230, 0.2010, 0.1266, 0.4400, 0.5000, 0.5024, 0.0920, 0.3080
0.2800, 1.0000, 0.1920, 0.0810, 0.1550, 0.0946, 0.5100, 0.3000, 0.3850, 0.0870, 0.1160
0.5800, 1.0000, 0.2900, 0.0850, 0.1560, 0.1092, 0.3600, 0.4000, 0.3989, 0.0860, 0.1450
0.3900, 1.0000, 0.2400, 0.0897, 0.1900, 0.1136, 0.5200, 0.3650, 0.4804, 0.1010, 0.0740
0.3400, 1.0000, 0.2060, 0.0980, 0.1830, 0.0920, 0.8300, 0.2000, 0.3689, 0.0920, 0.0450
0.6500, 0.0000, 0.2630, 0.0700, 0.2440, 0.1662, 0.5100, 0.5000, 0.4898, 0.0980, 0.1150
0.6600, 1.0000, 0.3460, 0.1150, 0.2040, 0.1394, 0.3600, 0.6000, 0.4963, 0.1090, 0.2640
0.5100, 0.0000, 0.2340, 0.0870, 0.2200, 0.1088, 0.9300, 0.2000, 0.4511, 0.0820, 0.0870
0.5000, 1.0000, 0.2920, 0.1190, 0.1620, 0.0852, 0.5400, 0.3000, 0.4736, 0.0950, 0.2020
0.5900, 1.0000, 0.2720, 0.1070, 0.1580, 0.1020, 0.3900, 0.4000, 0.4443, 0.0930, 0.1270
0.5200, 0.0000, 0.2700, 0.0783, 0.1340, 0.0730, 0.4400, 0.3050, 0.4443, 0.0690, 0.1820
0.6900, 1.0000, 0.2450, 0.1080, 0.2430, 0.1364, 0.4000, 0.6000, 0.5808, 0.1000, 0.2410
0.5300, 0.0000, 0.2410, 0.1050, 0.1840, 0.1134, 0.4600, 0.4000, 0.4812, 0.0950, 0.0660
0.4700, 1.0000, 0.2530, 0.0980, 0.1730, 0.1056, 0.4400, 0.4000, 0.4762, 0.1080, 0.0940
0.5200, 0.0000, 0.2880, 0.1130, 0.2800, 0.1740, 0.6700, 0.4000, 0.5273, 0.0860, 0.2830
0.3900, 0.0000, 0.2090, 0.0950, 0.1500, 0.0656, 0.6800, 0.2000, 0.4407, 0.0950, 0.0640
0.6700, 1.0000, 0.2300, 0.0700, 0.1840, 0.1280, 0.3500, 0.5000, 0.4654, 0.0990, 0.1020
0.5900, 1.0000, 0.2410, 0.0960, 0.1700, 0.0986, 0.5400, 0.3000, 0.4466, 0.0850, 0.2000
0.5100, 1.0000, 0.2810, 0.1060, 0.2020, 0.1222, 0.5500, 0.4000, 0.4820, 0.0870, 0.2650
0.2300, 1.0000, 0.1800, 0.0780, 0.1710, 0.0960, 0.4800, 0.4000, 0.4905, 0.0920, 0.0940
0.6800, 0.0000, 0.2590, 0.0930, 0.2530, 0.1812, 0.5300, 0.5000, 0.4543, 0.0980, 0.2300
0.4400, 0.0000, 0.2150, 0.0850, 0.1570, 0.0922, 0.5500, 0.3000, 0.3892, 0.0840, 0.1810
0.6000, 1.0000, 0.2430, 0.1030, 0.1410, 0.0866, 0.3300, 0.4000, 0.4673, 0.0780, 0.1560
0.5200, 0.0000, 0.2450, 0.0900, 0.1980, 0.1290, 0.2900, 0.7000, 0.5298, 0.0860, 0.2330
0.3800, 0.0000, 0.2130, 0.0720, 0.1650, 0.0602, 0.8800, 0.2000, 0.4431, 0.0900, 0.0600
0.6100, 0.0000, 0.2580, 0.0900, 0.2800, 0.1954, 0.5500, 0.5000, 0.4997, 0.0900, 0.2190
0.6800, 1.0000, 0.2480, 0.1010, 0.2210, 0.1514, 0.6000, 0.4000, 0.3871, 0.0870, 0.0800
0.2800, 1.0000, 0.3150, 0.0830, 0.2280, 0.1494, 0.3800, 0.6000, 0.5313, 0.0830, 0.0680
0.6500, 1.0000, 0.3350, 0.1020, 0.1900, 0.1262, 0.3500, 0.5000, 0.4970, 0.1020, 0.3320
0.6900, 0.0000, 0.2810, 0.1130, 0.2340, 0.1428, 0.5200, 0.4000, 0.5278, 0.0770, 0.2480
0.5100, 0.0000, 0.2430, 0.0853, 0.1530, 0.0716, 0.7100, 0.2150, 0.3951, 0.0820, 0.0840
0.2900, 0.0000, 0.3500, 0.0983, 0.2040, 0.1426, 0.5000, 0.4080, 0.4043, 0.0910, 0.2000
0.5500, 1.0000, 0.2350, 0.0930, 0.1770, 0.1268, 0.4100, 0.4000, 0.3829, 0.0830, 0.0550
0.3400, 1.0000, 0.3000, 0.0830, 0.1850, 0.1072, 0.5300, 0.3000, 0.4820, 0.0920, 0.0850
0.6700, 0.0000, 0.2070, 0.0830, 0.1700, 0.0998, 0.5900, 0.3000, 0.4025, 0.0770, 0.0890
0.4900, 0.0000, 0.2560, 0.0760, 0.1610, 0.0998, 0.5100, 0.3000, 0.3932, 0.0780, 0.0310
0.5500, 1.0000, 0.2290, 0.0810, 0.1230, 0.0672, 0.4100, 0.3000, 0.4304, 0.0880, 0.1290
0.5900, 1.0000, 0.2510, 0.0900, 0.1630, 0.1014, 0.4600, 0.4000, 0.4357, 0.0910, 0.0830
0.5300, 0.0000, 0.3320, 0.0827, 0.1860, 0.1068, 0.4600, 0.4040, 0.5112, 0.1020, 0.2750
0.4800, 1.0000, 0.2410, 0.1100, 0.2090, 0.1346, 0.5800, 0.4000, 0.4407, 0.1000, 0.0650
0.5200, 0.0000, 0.2950, 0.1043, 0.2110, 0.1328, 0.4900, 0.4310, 0.4984, 0.0980, 0.1980
0.6900, 0.0000, 0.2960, 0.1220, 0.2310, 0.1284, 0.5600, 0.4000, 0.5451, 0.0860, 0.2360
0.6000, 1.0000, 0.2280, 0.1100, 0.2450, 0.1898, 0.3900, 0.6000, 0.4394, 0.0880, 0.2530
0.4600, 1.0000, 0.2270, 0.0830, 0.1830, 0.1258, 0.3200, 0.6000, 0.4836, 0.0750, 0.1240
0.5100, 1.0000, 0.2620, 0.1010, 0.1610, 0.0996, 0.4800, 0.3000, 0.4205, 0.0880, 0.0440
0.6700, 1.0000, 0.2350, 0.0960, 0.2070, 0.1382, 0.4200, 0.5000, 0.4898, 0.1110, 0.1720
0.4900, 0.0000, 0.2210, 0.0850, 0.1360, 0.0634, 0.6200, 0.2190, 0.3970, 0.0720, 0.1140
0.4600, 1.0000, 0.2650, 0.0940, 0.2470, 0.1602, 0.5900, 0.4000, 0.4935, 0.1110, 0.1420
0.4700, 0.0000, 0.3240, 0.1050, 0.1880, 0.1250, 0.4600, 0.4090, 0.4443, 0.0990, 0.1090
0.7500, 0.0000, 0.3010, 0.0780, 0.2220, 0.1542, 0.4400, 0.5050, 0.4779, 0.0970, 0.1800
0.2800, 0.0000, 0.2420, 0.0930, 0.1740, 0.1064, 0.5400, 0.3000, 0.4220, 0.0840, 0.1440
0.6500, 1.0000, 0.3130, 0.1100, 0.2130, 0.1280, 0.4700, 0.5000, 0.5247, 0.0910, 0.1630
0.4200, 0.0000, 0.3010, 0.0910, 0.1820, 0.1148, 0.4900, 0.4000, 0.4511, 0.0820, 0.1470
0.5100, 0.0000, 0.2450, 0.0790, 0.2120, 0.1286, 0.6500, 0.3000, 0.4522, 0.0910, 0.0970
0.5300, 1.0000, 0.2770, 0.0950, 0.1900, 0.1018, 0.4100, 0.5000, 0.5464, 0.1010, 0.2200
0.5400, 0.0000, 0.2320, 0.1107, 0.2380, 0.1628, 0.4800, 0.4960, 0.4913, 0.1080, 0.1900
0.7300, 0.0000, 0.2700, 0.1020, 0.2110, 0.1210, 0.6700, 0.3000, 0.4745, 0.0990, 0.1090
0.5400, 0.0000, 0.2680, 0.1080, 0.1760, 0.0806, 0.6700, 0.3000, 0.4956, 0.1060, 0.1910
0.4200, 0.0000, 0.2920, 0.0930, 0.2490, 0.1742, 0.4500, 0.6000, 0.5004, 0.0920, 0.1220
0.7500, 0.0000, 0.3120, 0.1177, 0.2290, 0.1388, 0.2900, 0.7900, 0.5724, 0.1060, 0.2300
0.5500, 1.0000, 0.3210, 0.1127, 0.2070, 0.0924, 0.2500, 0.8280, 0.6105, 0.1110, 0.2420
0.6800, 1.0000, 0.2570, 0.1090, 0.2330, 0.1126, 0.3500, 0.7000, 0.6057, 0.1050, 0.2480
0.5700, 0.0000, 0.2690, 0.0980, 0.2460, 0.1652, 0.3800, 0.7000, 0.5366, 0.0960, 0.2490
0.4800, 0.0000, 0.3140, 0.0753, 0.2420, 0.1516, 0.3800, 0.6370, 0.5568, 0.1030, 0.1920
0.6100, 1.0000, 0.2560, 0.0850, 0.1840, 0.1162, 0.3900, 0.5000, 0.4970, 0.0980, 0.1310
0.6900, 0.0000, 0.3700, 0.1030, 0.2070, 0.1314, 0.5500, 0.4000, 0.4635, 0.0900, 0.2370
0.3800, 0.0000, 0.3260, 0.0770, 0.1680, 0.1006, 0.4700, 0.4000, 0.4625, 0.0960, 0.0780
0.4500, 1.0000, 0.2120, 0.0940, 0.1690, 0.0968, 0.5500, 0.3000, 0.4454, 0.1020, 0.1350
0.5100, 1.0000, 0.2920, 0.1070, 0.1870, 0.1390, 0.3200, 0.6000, 0.4382, 0.0950, 0.2440
0.7100, 1.0000, 0.2400, 0.0840, 0.1380, 0.0858, 0.3900, 0.4000, 0.4190, 0.0900, 0.1990
0.5700, 0.0000, 0.3610, 0.1170, 0.1810, 0.1082, 0.3400, 0.5000, 0.5268, 0.1000, 0.2700
0.5600, 1.0000, 0.2580, 0.1030, 0.1770, 0.1144, 0.3400, 0.5000, 0.4963, 0.0990, 0.1640
0.3200, 1.0000, 0.2200, 0.0880, 0.1370, 0.0786, 0.4800, 0.3000, 0.3951, 0.0780, 0.0720
0.5000, 0.0000, 0.2190, 0.0910, 0.1900, 0.1112, 0.6700, 0.3000, 0.4078, 0.0770, 0.0960
0.4300, 0.0000, 0.3430, 0.0840, 0.2560, 0.1726, 0.3300, 0.8000, 0.5529, 0.1040, 0.3060
0.5400, 1.0000, 0.2520, 0.1150, 0.1810, 0.1200, 0.3900, 0.5000, 0.4701, 0.0920, 0.0910
0.3100, 0.0000, 0.2330, 0.0850, 0.1900, 0.1308, 0.4300, 0.4000, 0.4394, 0.0770, 0.2140
0.5600, 0.0000, 0.2570, 0.0800, 0.2440, 0.1516, 0.5900, 0.4000, 0.5118, 0.0950, 0.0950
0.4400, 0.0000, 0.2510, 0.1330, 0.1820, 0.1130, 0.5500, 0.3000, 0.4249, 0.0840, 0.2160
0.5700, 1.0000, 0.3190, 0.1110, 0.1730, 0.1162, 0.4100, 0.4000, 0.4369, 0.0870, 0.2630

Test data:


# diabetes_norm_test_100.txt
#
0.6400, 1.0000, 0.2840, 0.1110, 0.1840, 0.1270, 0.4100, 0.4000, 0.4382, 0.0970, 0.1780
0.4300, 0.0000, 0.2810, 0.1210, 0.1920, 0.1210, 0.6000, 0.3000, 0.4007, 0.0930, 0.1130
0.1900, 0.0000, 0.2530, 0.0830, 0.2250, 0.1566, 0.4600, 0.5000, 0.4719, 0.0840, 0.2000
0.7100, 1.0000, 0.2610, 0.0850, 0.2200, 0.1524, 0.4700, 0.5000, 0.4635, 0.0910, 0.1390
0.5000, 1.0000, 0.2800, 0.1040, 0.2820, 0.1968, 0.4400, 0.6000, 0.5328, 0.0950, 0.1390
0.5900, 1.0000, 0.2360, 0.0730, 0.1800, 0.1074, 0.5100, 0.4000, 0.4682, 0.0840, 0.0880
0.5700, 0.0000, 0.2450, 0.0930, 0.1860, 0.0966, 0.7100, 0.3000, 0.4522, 0.0910, 0.1480
0.4900, 1.0000, 0.2100, 0.0820, 0.1190, 0.0854, 0.2300, 0.5000, 0.3970, 0.0740, 0.0880
0.4100, 1.0000, 0.3200, 0.1260, 0.1980, 0.1042, 0.4900, 0.4000, 0.5412, 0.1240, 0.2430
0.2500, 1.0000, 0.2260, 0.0850, 0.1300, 0.0710, 0.4800, 0.3000, 0.4007, 0.0810, 0.0710
0.5200, 1.0000, 0.1970, 0.0810, 0.1520, 0.0534, 0.8200, 0.2000, 0.4419, 0.0820, 0.0770
0.3400, 0.0000, 0.2120, 0.0840, 0.2540, 0.1134, 0.5200, 0.5000, 0.6094, 0.0920, 0.1090
0.4200, 1.0000, 0.3060, 0.1010, 0.2690, 0.1722, 0.5000, 0.5000, 0.5455, 0.1060, 0.2720
0.2800, 1.0000, 0.2550, 0.0990, 0.1620, 0.1016, 0.4600, 0.4000, 0.4277, 0.0940, 0.0600
0.4700, 1.0000, 0.2330, 0.0900, 0.1950, 0.1258, 0.5400, 0.4000, 0.4331, 0.0730, 0.0540
0.3200, 1.0000, 0.3100, 0.1000, 0.1770, 0.0962, 0.4500, 0.4000, 0.5187, 0.0770, 0.2210
0.4300, 0.0000, 0.1850, 0.0870, 0.1630, 0.0936, 0.6100, 0.2670, 0.3738, 0.0800, 0.0900
0.5900, 1.0000, 0.2690, 0.1040, 0.1940, 0.1266, 0.4300, 0.5000, 0.4804, 0.1060, 0.3110
0.5300, 0.0000, 0.2830, 0.1010, 0.1790, 0.1070, 0.4800, 0.4000, 0.4788, 0.1010, 0.2810
0.6000, 0.0000, 0.2570, 0.1030, 0.1580, 0.0846, 0.6400, 0.2000, 0.3850, 0.0970, 0.1820
0.5400, 1.0000, 0.3610, 0.1150, 0.1630, 0.0984, 0.4300, 0.4000, 0.4682, 0.1010, 0.3210
0.3500, 1.0000, 0.2410, 0.0947, 0.1550, 0.0974, 0.3200, 0.4840, 0.4852, 0.0940, 0.0580
0.4900, 1.0000, 0.2580, 0.0890, 0.1820, 0.1186, 0.3900, 0.5000, 0.4804, 0.1150, 0.2620
0.5800, 0.0000, 0.2280, 0.0910, 0.1960, 0.1188, 0.4800, 0.4000, 0.4984, 0.1150, 0.2060
0.3600, 1.0000, 0.3910, 0.0900, 0.2190, 0.1358, 0.3800, 0.6000, 0.5421, 0.1030, 0.2330
0.4600, 1.0000, 0.4220, 0.0990, 0.2110, 0.1370, 0.4400, 0.5000, 0.5011, 0.0990, 0.2420
0.4400, 1.0000, 0.2660, 0.0990, 0.2050, 0.1090, 0.4300, 0.5000, 0.5580, 0.1110, 0.1230
0.4600, 0.0000, 0.2990, 0.0830, 0.1710, 0.1130, 0.3800, 0.4500, 0.4585, 0.0980, 0.1670
0.5400, 0.0000, 0.2100, 0.0780, 0.1880, 0.1074, 0.7000, 0.3000, 0.3970, 0.0730, 0.0630
0.6300, 1.0000, 0.2550, 0.1090, 0.2260, 0.1032, 0.4600, 0.5000, 0.5951, 0.0870, 0.1970
0.4100, 1.0000, 0.2420, 0.0900, 0.1990, 0.1236, 0.5700, 0.4000, 0.4522, 0.0860, 0.0710
0.2800, 0.0000, 0.2540, 0.0930, 0.1410, 0.0790, 0.4900, 0.3000, 0.4174, 0.0910, 0.1680
0.1900, 0.0000, 0.2320, 0.0750, 0.1430, 0.0704, 0.5200, 0.3000, 0.4635, 0.0720, 0.1400
0.6100, 1.0000, 0.2610, 0.1260, 0.2150, 0.1298, 0.5700, 0.4000, 0.4949, 0.0960, 0.2170
0.4800, 0.0000, 0.3270, 0.0930, 0.2760, 0.1986, 0.4300, 0.6420, 0.5148, 0.0910, 0.1210
0.5400, 1.0000, 0.2730, 0.1000, 0.2000, 0.1440, 0.3300, 0.6000, 0.4745, 0.0760, 0.2350
0.5300, 1.0000, 0.2660, 0.0930, 0.1850, 0.1224, 0.3600, 0.5000, 0.4890, 0.0820, 0.2450
0.4800, 0.0000, 0.2280, 0.1010, 0.1100, 0.0416, 0.5600, 0.2000, 0.4127, 0.0970, 0.0400
0.5300, 0.0000, 0.2880, 0.1117, 0.1450, 0.0872, 0.4600, 0.3150, 0.4078, 0.0850, 0.0520
0.2900, 1.0000, 0.1810, 0.0730, 0.1580, 0.0990, 0.4100, 0.4000, 0.4500, 0.0780, 0.1040
0.6200, 0.0000, 0.3200, 0.0880, 0.1720, 0.0690, 0.3800, 0.4000, 0.5784, 0.1000, 0.1320
0.5000, 1.0000, 0.2370, 0.0920, 0.1660, 0.0970, 0.5200, 0.3000, 0.4443, 0.0930, 0.0880
0.5800, 1.0000, 0.2360, 0.0960, 0.2570, 0.1710, 0.5900, 0.4000, 0.4905, 0.0820, 0.0690
0.5500, 1.0000, 0.2460, 0.1090, 0.1430, 0.0764, 0.5100, 0.3000, 0.4357, 0.0880, 0.2190
0.5400, 0.0000, 0.2260, 0.0900, 0.1830, 0.1042, 0.6400, 0.3000, 0.4304, 0.0920, 0.0720
0.3600, 0.0000, 0.2780, 0.0730, 0.1530, 0.1044, 0.4200, 0.4000, 0.3497, 0.0730, 0.2010
0.6300, 1.0000, 0.2410, 0.1110, 0.1840, 0.1122, 0.4400, 0.4000, 0.4935, 0.0820, 0.1100
0.4700, 1.0000, 0.2650, 0.0700, 0.1810, 0.1048, 0.6300, 0.3000, 0.4190, 0.0700, 0.0510
0.5100, 1.0000, 0.3280, 0.1120, 0.2020, 0.1006, 0.3700, 0.5000, 0.5775, 0.1090, 0.2770
0.4200, 0.0000, 0.1990, 0.0760, 0.1460, 0.0832, 0.5500, 0.3000, 0.3664, 0.0790, 0.0630
0.3700, 1.0000, 0.2360, 0.0940, 0.2050, 0.1388, 0.5300, 0.4000, 0.4190, 0.1070, 0.1180
0.2800, 0.0000, 0.2210, 0.0820, 0.1680, 0.1006, 0.5400, 0.3000, 0.4205, 0.0860, 0.0690
0.5800, 0.0000, 0.2810, 0.1110, 0.1980, 0.0806, 0.3100, 0.6000, 0.6068, 0.0930, 0.2730
0.3200, 0.0000, 0.2650, 0.0860, 0.1840, 0.1016, 0.5300, 0.4000, 0.4990, 0.0780, 0.2580
0.2500, 1.0000, 0.2350, 0.0880, 0.1430, 0.0808, 0.5500, 0.3000, 0.3584, 0.0830, 0.0430
0.6300, 0.0000, 0.2600, 0.0857, 0.1550, 0.0782, 0.4600, 0.3370, 0.5037, 0.0970, 0.1980
0.5200, 0.0000, 0.2780, 0.0850, 0.2190, 0.1360, 0.4900, 0.4000, 0.5136, 0.0750, 0.2420
0.6500, 1.0000, 0.2850, 0.1090, 0.2010, 0.1230, 0.4600, 0.4000, 0.5075, 0.0960, 0.2320
0.4200, 0.0000, 0.3060, 0.1210, 0.1760, 0.0928, 0.6900, 0.3000, 0.4263, 0.0890, 0.1750
0.5300, 0.0000, 0.2220, 0.0780, 0.1640, 0.0810, 0.7000, 0.2000, 0.4174, 0.1010, 0.0930
0.7900, 1.0000, 0.2330, 0.0880, 0.1860, 0.1284, 0.3300, 0.6000, 0.4812, 0.1020, 0.1680
0.4300, 0.0000, 0.3540, 0.0930, 0.1850, 0.1002, 0.4400, 0.4000, 0.5318, 0.1010, 0.2750
0.4400, 0.0000, 0.3140, 0.1150, 0.1650, 0.0976, 0.5200, 0.3000, 0.4344, 0.0890, 0.2930
0.6200, 1.0000, 0.3780, 0.1190, 0.1130, 0.0510, 0.3100, 0.4000, 0.5043, 0.0840, 0.2810
0.3300, 0.0000, 0.1890, 0.0700, 0.1620, 0.0918, 0.5900, 0.3000, 0.4025, 0.0580, 0.0720
0.5600, 0.0000, 0.3500, 0.0793, 0.1950, 0.1408, 0.4200, 0.4640, 0.4111, 0.0960, 0.1400
0.6600, 0.0000, 0.2170, 0.1260, 0.2120, 0.1278, 0.4500, 0.4710, 0.5278, 0.1010, 0.1890
0.3400, 1.0000, 0.2530, 0.1110, 0.2300, 0.1620, 0.3900, 0.6000, 0.4977, 0.0900, 0.1810
0.4600, 1.0000, 0.2380, 0.0970, 0.2240, 0.1392, 0.4200, 0.5000, 0.5366, 0.0810, 0.2090
0.5000, 0.0000, 0.3180, 0.0820, 0.1360, 0.0692, 0.5500, 0.2000, 0.4078, 0.0850, 0.1360
0.6900, 0.0000, 0.3430, 0.1130, 0.2000, 0.1238, 0.5400, 0.4000, 0.4710, 0.1120, 0.2610
0.3400, 0.0000, 0.2630, 0.0870, 0.1970, 0.1200, 0.6300, 0.3000, 0.4249, 0.0960, 0.1130
0.7100, 1.0000, 0.2700, 0.0933, 0.2690, 0.1902, 0.4100, 0.6560, 0.5242, 0.0930, 0.1310
0.4700, 0.0000, 0.2720, 0.0800, 0.2080, 0.1456, 0.3800, 0.6000, 0.4804, 0.0920, 0.1740
0.4100, 0.0000, 0.3380, 0.1233, 0.1870, 0.1270, 0.4500, 0.4160, 0.4318, 0.1000, 0.2570
0.3400, 0.0000, 0.3300, 0.0730, 0.1780, 0.1146, 0.5100, 0.3490, 0.4127, 0.0920, 0.0550
0.5100, 0.0000, 0.2410, 0.0870, 0.2610, 0.1756, 0.6900, 0.4000, 0.4407, 0.0930, 0.0840
0.4300, 0.0000, 0.2130, 0.0790, 0.1410, 0.0788, 0.5300, 0.3000, 0.3829, 0.0900, 0.0420
0.5500, 0.0000, 0.2300, 0.0947, 0.1900, 0.1376, 0.3800, 0.5000, 0.4277, 0.1060, 0.1460
0.5900, 1.0000, 0.2790, 0.1010, 0.2180, 0.1442, 0.3800, 0.6000, 0.5187, 0.0950, 0.2120
0.2700, 1.0000, 0.3360, 0.1100, 0.2460, 0.1566, 0.5700, 0.4000, 0.5088, 0.0890, 0.2330
0.5100, 1.0000, 0.2270, 0.1030, 0.2170, 0.1624, 0.3000, 0.7000, 0.4812, 0.0800, 0.0910
0.4900, 1.0000, 0.2740, 0.0890, 0.1770, 0.1130, 0.3700, 0.5000, 0.4905, 0.0970, 0.1110
0.2700, 0.0000, 0.2260, 0.0710, 0.1160, 0.0434, 0.5600, 0.2000, 0.4419, 0.0790, 0.1520
0.5700, 1.0000, 0.2320, 0.1073, 0.2310, 0.1594, 0.4100, 0.5630, 0.5030, 0.1120, 0.1200
0.3900, 1.0000, 0.2690, 0.0930, 0.1360, 0.0754, 0.4800, 0.3000, 0.4143, 0.0990, 0.0670
0.6200, 1.0000, 0.3460, 0.1200, 0.2150, 0.1292, 0.4300, 0.5000, 0.5366, 0.1230, 0.3100
0.3700, 0.0000, 0.2330, 0.0880, 0.2230, 0.1420, 0.6500, 0.3400, 0.4357, 0.0820, 0.0940
0.4600, 0.0000, 0.2110, 0.0800, 0.2050, 0.1444, 0.4200, 0.5000, 0.4533, 0.0870, 0.1830
0.6800, 1.0000, 0.2350, 0.1010, 0.1620, 0.0854, 0.5900, 0.3000, 0.4477, 0.0910, 0.0660
0.5100, 0.0000, 0.3150, 0.0930, 0.2310, 0.1440, 0.4900, 0.4700, 0.5252, 0.1170, 0.1730
0.4100, 0.0000, 0.2080, 0.0860, 0.2230, 0.1282, 0.8300, 0.3000, 0.4078, 0.0890, 0.0720
0.5300, 0.0000, 0.2650, 0.0970, 0.1930, 0.1224, 0.5800, 0.3000, 0.4143, 0.0990, 0.0490
0.4500, 0.0000, 0.2420, 0.0830, 0.1770, 0.1184, 0.4500, 0.4000, 0.4220, 0.0820, 0.0640
0.3300, 0.0000, 0.1950, 0.0800, 0.1710, 0.0854, 0.7500, 0.2000, 0.3970, 0.0800, 0.0480
0.6000, 1.0000, 0.2820, 0.1120, 0.1850, 0.1138, 0.4200, 0.4000, 0.4984, 0.0930, 0.1780
0.4700, 1.0000, 0.2490, 0.0750, 0.2250, 0.1660, 0.4200, 0.5000, 0.4443, 0.1020, 0.1040
0.6000, 1.0000, 0.2490, 0.0997, 0.1620, 0.1066, 0.4300, 0.3770, 0.4127, 0.0950, 0.1320
0.3600, 0.0000, 0.3000, 0.0950, 0.2010, 0.1252, 0.4200, 0.4790, 0.5130, 0.0850, 0.2200
0.3600, 0.0000, 0.1960, 0.0710, 0.2500, 0.1332, 0.9700, 0.3000, 0.4595, 0.0920, 0.0570
Posted in Machine Learning

Computing Matrix QR Decomposition Using Givens Rotations With C#

If you have a matrix, there are several ways to decompose it: LUP decomposition, SVD decomposition, QR decomposition, and Cholesky decomposition (only for positive-definite matrices). The QR decomposition can be used for many algorithms, such as computing a matrix inverse or pseudo-inverse.

If you have a matrix A, the QR decomposition gives you a matrix Q with orthonormal columns and an upper-triangular matrix R such that A = Q * R (where the * is matrix multiplication).

There are three main algorithms to compute a QR decomposition: Householder, modified Gram-Schmidt, Givens. Each of these three algorithms has several variations. Householder is by far the most complicated of the three, but gives the best answer precision. Modified Gram-Schmidt is dramatically simpler than Householder, but not as precise for pathological matrices. Givens is roughly comparable to modified Gram-Schmidt in terms of complexity and precision.
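The heart of the Givens approach is a 2-by-2 rotation that zeroes out one matrix element at a time. Here is a minimal Python sketch of that one rotation, using the standard textbook formulation (this is my own illustration, not the C# demo code below):

```python
import math

def givens(a, b):
    # compute c, s so that the rotation [[c, s], [-s, c]]
    # applied to the vector (a, b) zeroes out the b component
    if b == 0.0:
        return 1.0, 0.0
    r = math.hypot(a, b)  # sqrt(a^2 + b^2), without overflow
    return a / r, b / r   # c = cos(theta), s = sin(theta)

c, s = givens(3.0, 4.0)
# rotated vector is (c*a + s*b, -s*a + c*b)
print(c * 3.0 + s * 4.0)   # 5.0 (the norm of the vector)
print(-s * 3.0 + c * 4.0)  # 0.0 (the zeroed-out element)
```

A full Givens QR repeatedly applies such rotations to A, from the left, to zero out all elements below the main diagonal; the accumulated rotations form Q-transpose and what remains of A is R.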

One evening, I was waiting for the rain to stop so I could walk my dogs Kevin and Riley. I figured I’d make use of the time by implementing QR decomposition using the Givens rotations algorithm, using the C# language.

The output of my demo:

Begin QR decomposition using Givens rotations with C#
Less precise than QR-Householder but dramatically simpler

Source (tall) matrix A:
   1.0   2.0   3.0   4.0   5.0
   0.0  -3.0   5.0  -7.0   9.0
   2.0   0.0  -2.0   0.0  -2.0
   4.0  -1.0   5.0   6.0   1.0
   3.0   6.0   8.0   2.0   2.0
   5.0  -2.0   4.0  -4.0   3.0

Computing QR
Done

Q =
   0.134840   0.258894   0.147094   0.310903   0.869379
   0.000000  -0.410745   0.759588  -0.274531   0.180705
   0.269680  -0.029872  -0.526675  -0.219131   0.300339
   0.539360  -0.196660   0.116670   0.738280  -0.260150
   0.404520   0.776682   0.316260  -0.255197  -0.226191
   0.674200  -0.348511  -0.101841  -0.412033   0.049823

R =
   7.416198   0.809040   8.494918   1.887760   3.505839
   0.000000   7.303797   2.618812   5.678242  -2.031322
   0.000000   0.000000   7.998637  -2.988836   9.068775
   0.000000   0.000000   0.000000   8.732743  -1.486219
   0.000000   0.000000   0.000000   0.000000   4.809501

End demo

I verified the results by using the NumPy np.linalg.qr() library function, and got the same Q and R. The demo implementation returns “reduced” Q and R, which is standard. The demo implementation works only for “tall” matrices where the number of rows is greater than or equal to the number of columns, which is standard for machine learning scenarios.
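That NumPy sanity check is easy to reproduce. Note that np.linalg.qr() returns the reduced Q and R by default, and the signs of individual columns of Q (and rows of R) can differ between implementations, but Q times R must always reconstruct A:

```python
import numpy as np

# the same 6x5 "tall" source matrix as the demo
A = np.array([
    [1,  2,  3,  4,  5],
    [0, -3,  5, -7,  9],
    [2,  0, -2,  0, -2],
    [4, -1,  5,  6,  1],
    [3,  6,  8,  2,  2],
    [5, -2,  4, -4,  3]], dtype=np.float64)

Q, R = np.linalg.qr(A)  # 'reduced' mode: Q is 6x5, R is 5x5

print(Q.shape, R.shape)                 # (6, 5) (5, 5)
print(np.allclose(Q @ R, A))            # True: reconstructs A
print(np.allclose(Q.T @ Q, np.eye(5)))  # True: orthonormal columns
print(np.allclose(np.triu(R), R))       # True: R is upper triangular
```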

Good fun on a rainy evening in the Pacific Northwest.



I’m a big fan of science fiction movies — good movies, bad movies, old movies, new movies — any kind of science fiction movies. Here are three that were OK (I give all three a “C” grade on my purely personal “A” to “F” scale) but could have been great.

Left: “Valerian and the City of a Thousand Planets” (2017) – When I heard “Valerian” was going to be directed by Luc Besson, I was excited. Among other films, Besson did “The Fifth Element” (1997), one of my favorite movies of all time. And then . . disaster. The main characters look like a teen brother and sister who are halfway through some sort of gender transition therapy to make Valerian into a fembot and Laureline into an enforcer player on a WNBA basketball team. They act like that too. Everything else about the movie was quite good, but the two lead characters destroyed the movie for me.

Center: “War of the Worlds” (2005) – One of my top ten sci-fi movies of the 1950s and 60s is the 1953 version of “War of the Worlds”, so when I heard a new version was coming, directed by the famous Steven Spielberg, I was excited. And then . . disaster. The first half of the movie is quite good, especially the initial invasion scenes in the city. But the movie utterly disintegrates in the second half. Also, I greatly disliked the cinematography that used a hazy tint for the entire movie — it gave me a splitting headache.

Right: “John Carter” (2012) – My friends and I all grew up reading “A Princess of Mars” (1912) by ER Burroughs, and it is hands-down our favorite book of all time. When we heard that Disney was adapting the book to a movie, we were all excited. And then . . disaster. The main problem is the actress who plays Princess Dejah Thoris. Instead of a princess, she comes off as an annoying, abrasive, tattoo-covered, entitled girl-thug. She single-handedly destroyed the movie.


Demo code. Replace “lt” (less than), “gt”, “lte”, “gte” with Boolean operator symbols (my blog editor chokes on symbols).

using System;
using System.IO;

namespace MatrixDecompQRGivens
{
  internal class Program
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin QR decomposition using" +
        " Givens rotations with C# ");
      Console.WriteLine("Less precise than QR-Householder" +
        " but dramatically simpler ");
      
      double[][] A = new double[6][];
      A[0] = new double[] { 1, 2, 3, 4, 5 };
      A[1] = new double[] { 0, -3, 5, -7, 9 };
      A[2] = new double[] { 2, 0, -2, 0, -2 };
      A[3] = new double[] { 4, -1, 5, 6, 1 };
      A[4] = new double[] { 3, 6, 8, 2, 2 };
      A[5] = new double[] { 5, -2, 4, -4, 3 };

      Console.WriteLine("\nSource (tall) matrix A: ");
      MatShow(A, 1, 6);

      Console.WriteLine("\nComputing QR ");
      double[][] Q;
      double[][] R;
      MatDecompQR(A, out Q, out R);
      Console.WriteLine("Done ");

      Console.WriteLine("\nQ = ");
      MatShow(Q, 6, 11);
      Console.WriteLine("\nR = ");
      MatShow(R, 6, 11);

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main()

    // ------------------------------------------------------

    static void MatShow(double[][] M, int dec, int wid)
    {
      for (int i = 0; i "lt" M.Length; ++i)
      {
        for (int j = 0; j "lt" M[0].Length; ++j)
        {
          double v = M[i][j];
          Console.Write(v.ToString("F" + dec).
            PadLeft(wid));
        }
        Console.WriteLine("");
      }
    }

    // ------------------------------------------------------

    static void MatDecompQR(double[][] A, out double[][] Q,
      out double[][] R)
    {
      // QR decomposition using Givens rotations
      // 'reduced' mode (all machine learning scenarios)
      int m = A.Length; int n = A[0].Length;
      if (m "lt" n)
        throw new Exception("m must be gte n ");
      int k = Math.Min(m, n);

      double[][] QQ = new double[m][];  // working Q mxm
      for (int i = 0; i "lt" m; ++i)
        QQ[i] = new double[m];
      for (int i = 0; i "lt" m; ++i)
        QQ[i][i] = 1.0;  // Identity

      double[][] RR = new double[m][];  // working R mxn
      for (int i = 0; i "lt" m; ++i)
        RR[i] = new double[n];
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n; ++j)
          RR[i][j] = A[i][j];  // copy of A

      for (int j = 0; j "lt" k; ++j) // outer loop
      {
        for (int i = m-1; i "gt" j; --i) // inner loop
        {
          double a = RR[j][j];
          double b = RR[i][j];
          //if (b == 0.0) continue;  // risky
          if (Math.Abs(b) "lt" 1.0e-12) continue;

          double r = Math.Sqrt((a * a) + (b * b));
          double c = a / r;
          double s = -b / r;

          for (int col = j; col "lt" n; ++col)
          {
            double tmp = 
              (c * RR[j][col]) - (s * RR[i][col]);
            RR[i][col] = 
              (s * RR[j][col]) + (c * RR[i][col]);
            RR[j][col] = tmp;
          }

          for (int row = 0; row "lt" m; ++row)
          {
            double tmp = 
              (c * QQ[row][j]) - (s * QQ[row][i]);
            QQ[row][i] = 
              (s * QQ[row][j]) + (c * QQ[row][i]);
            QQ[row][j] = tmp;
          }
        } // inner i loop

      } // j loop

      // Q_reduced = Q[:, :k] // all rows, col 0 to k
      // R_reduced = R[:k, :] // rows 0 to k, all cols

      double[][] Qred = new double[m][];
      for (int i = 0; i "lt" m; ++i)
        Qred[i] = new double[k];
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" k; ++j)
          Qred[i][j] = QQ[i][j];
      Q = Qred;

      double[][] Rred = new double[k][];
      for (int i = 0; i "lt" k; ++i)
        Rred[i] = new double[n];
      for (int i = 0; i "lt" k; ++i)
        for (int j = 0; j "lt" n; ++j)
          Rred[i][j] = RR[i][j];
      R = Rred;

    } // MatDecompQR()

  } // Program

} // ns
Posted in Machine Learning

Decision Tree Regression From Scratch Using JavaScript Without Pointers or Recursion

Decision tree regression is a machine learning technique that incorporates a set of if-then rules in a tree data structure to predict a single numeric value. For example, a decision tree regression model prediction might be, “If employee age is greater than 29.0 and age is less than or equal to 32.5 and years-experience is less than or equal to 4.0 and height is greater than 67.0 then bank account balance is $655.17.”

This blog post presents decision tree regression, implemented from scratch in JavaScript, using list storage for tree nodes (no pointers/references), a LIFO stack to build the tree (no recursion), weighted variance minimization for the node split function, and associated row indices saved in each node.

The demo data looks like:

-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
. . .

The first five values on each line are the x predictors. The last value on each line is the target y variable to predict. The data is synthetic and was generated by a 5-10-1 neural network with random weights and bias. Therefore, the data has an underlying structure which can be predicted. There are 200 training items and 40 test items.
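The post doesn't show the exact data generator, but the idea can be sketched in a few lines of JavaScript. The tanh hidden activation, logistic sigmoid output (which keeps y in the (0, 1) range seen in the data), and the [-0.5, +0.5] weight range are my assumptions, not details from the post.

```javascript
// Hypothetical sketch: generate one synthetic (x, y) item with a
// 5-10-1 neural network that has random weights and biases.
// Activations and weight ranges are assumptions.
function makeItem(wh, bh, wo, bo) {
  let x = [];
  for (let j = 0; j < 5; ++j)
    x[j] = 2.0 * Math.random() - 1.0;  // predictors in [-1, +1]

  let hidden = [];
  for (let h = 0; h < 10; ++h) {
    let sum = bh[h];
    for (let j = 0; j < 5; ++j)
      sum += x[j] * wh[j][h];
    hidden[h] = Math.tanh(sum);  // assumed hidden activation
  }

  let oSum = bo;
  for (let h = 0; h < 10; ++h)
    oSum += hidden[h] * wo[h];
  let y = 1.0 / (1.0 + Math.exp(-oSum));  // target in (0, 1)
  return [x, y];
}

// random weights and biases in [-0.5, +0.5]
let wh = [], bh = [], wo = [];
for (let j = 0; j < 5; ++j) {
  wh[j] = [];
  for (let h = 0; h < 10; ++h)
    wh[j][h] = Math.random() - 0.5;
}
for (let h = 0; h < 10; ++h) {
  bh[h] = Math.random() - 0.5;
  wo[h] = Math.random() - 0.5;
}
let bo = Math.random() - 0.5;
let [x, y] = makeItem(wh, bh, wo, bo);
```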

The key parts of the output of the demo program are:

JavaScript decision tree regression

Loading synthetic train (200), test (40)
Done

First three train x:
-0.1660   0.4406  -0.9998  -0.3953  -0.7065
 0.0776  -0.1616   0.3704  -0.5911   0.7562
-0.9452   0.3409  -0.1654   0.1174  -0.7192

First three train y:
0.484 0.1568 0.8054

Setting maxDepth = 3
Setting minSamples = 2
Setting minLeaf = 18
Using default numSplitCols = -1

Creating and training tree
Done

ID   0 | col  0 | v -0.2102 | L  1 | R  2 | y 0.3493 | leaf F
ID   1 | col  4 | v  0.1431 | L  3 | R  4 | y 0.5345 | leaf F
ID   2 | col  0 | v  0.3915 | L  5 | R  6 | y 0.2382 | leaf F
ID   3 | col  0 | v -0.6553 | L  7 | R  8 | y 0.6358 | leaf F
ID   4 | col -1 | v  0.0000 | L  9 | R 10 | y 0.4123 | leaf T
ID   5 | col  4 | v -0.2987 | L 11 | R 12 | y 0.3032 | leaf F
ID   6 | col  2 | v  0.3777 | L 13 | R 14 | y 0.1701 | leaf F
ID   7 | col -1 | v  0.0000 | L 15 | R 16 | y 0.6952 | leaf T
ID   8 | col -1 | v  0.0000 | L 17 | R 18 | y 0.5598 | leaf T
ID  11 | col -1 | v  0.0000 | L 23 | R 24 | y 0.4101 | leaf T
ID  12 | col -1 | v  0.0000 | L 25 | R 26 | y 0.2613 | leaf T
ID  13 | col -1 | v  0.0000 | L 27 | R 28 | y 0.1882 | leaf T
ID  14 | col -1 | v  0.0000 | L 29 | R 30 | y 0.1381 | leaf T

Evaluating tree model

Train acc (within 0.10) = 0.3750
Test acc (within 0.10) = 0.4750

Train MSE = 0.0048
Test MSE = 0.0054

Predicting for x =
-0.1660   0.4406  -0.9998  -0.3953  -0.7065
y = 0.4101

IF
column 0  >   -0.2102 AND
column 0  <=   0.3915 AND
column 4  <=  -0.2987 AND
THEN
predicted = 0.4101

End decision tree regression demo

If you compare this output with the visual representation of the tree, most of the ideas of decision tree regression should become clear:

The demo decision tree has maxDepth = 3. This creates a tree with 15 nodes (some of which could be empty). In general, if a tree has maxDepth = n, then it has 2^(n+1) - 1 nodes. These tree nodes could be stored as JavaScript program-defined Node objects and connected via pointers/references. But the architecture used by the demo decision tree is to create a List collection with size/count = 15, where each item in the List is a Node object.

With this design, if the root node is at index [0], then the left child is at [1] and the right child is at [2]. In general, if a node is at index [i], then the left child is at index [2*i+1] and the right child is at index [2*i+2].
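The index arithmetic can be packaged as tiny helper functions. These helpers are for illustration only; the demo program computes the indices inline.

```javascript
// For a full binary tree stored in a flat list, child and parent
// positions are pure arithmetic -- no pointers needed.
function leftChild(i)  { return 2 * i + 1; }
function rightChild(i) { return 2 * i + 2; }
function parent(i)     { return Math.trunc((i - 1) / 2); }
// a tree with maxDepth = n has 2^(n+1) - 1 nodes
function numNodes(maxDepth) { return Math.pow(2, maxDepth + 1) - 1; }

console.log(leftChild(0));   // 1
console.log(rightChild(2));  // 6
console.log(parent(6));      // 2
console.log(numNodes(3));    // 15
```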

The tree splitting part of constructing a decision tree regression system is perhaps best understood by looking at a concrete example. The root node of the demo decision tree is associated with all 200 rows of the training data. The splitting process selects some of the 200 rows to be assigned to the left child node, and the remaining rows to be assigned to the right child node.

For simplicity, suppose that instead of the demo training data with 200 rows and 5 columns of predictors, a tree node is associated with just 5 rows of data, and the data has just 3 columns of predictors:

     X                    y
[0] 0.97  0.25  0.75     3.0
[1] 0.53  0.41  0.00     9.0
[2] 0.86  0.64  0.86     7.0
[3] 0.14  0.37  0.25     2.0
[4] 0.53  0.86  0.37     6.0
     [0]   [1]   [2]

The split algorithm will select some of the five rows to go to the left child and the remaining rows to go to the right child. The idea is to select the two sets of rows in a way so that their associated y values are close together. The demo implementation does this by finding the split that minimizes the weighted variance of the target y values.

The splitting algorithm examines every possible x value in every row and every column. For each possible candidate split value (also called the threshold), the weighted variance of the y values of the resulting split is computed. For example, for x = 0.37 in column [1], the rows where the values in column [1] are less than or equal to 0.37 are rows [3] and [0] with associated y values (2.0, 3.0). The other rows are [1], [2], [4] with associated y values (9.0, 7.0, 6.0).

The variance of a set of y values is the average of the sum of the squared differences between the values and their mean (average).

For this proposed split, the variance of y values (2.0, 3.0) is computed as 0.25. The mean of (2.0, 3.0) = (2.0 + 3.0) / 2 = 2.5 and so the variance is ((2.0 - 2.5)^2 + (3.0 - 2.5)^2) / 2 = 0.25.

The mean of y values (9.0, 7.0, 6.0) = (9.0 + 7.0 + 6.0) / 3 = 7.33 and the variance is ((9.0 - 7.33)^2 + (7.0 - 7.33)^2 + (6.0 - 7.33)^2) / 3 = 1.56.

Because the proposed split has different numbers of rows (two rows for the left child and three rows for the right child), the weighted variance of the proposed split is used and is ((2 * 0.25) + (3 * 1.56)) / 5 = 1.03.
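The worked example can be checked with a few lines of JavaScript. This is a standalone sketch, not part of the demo program, but the variance() function matches the logic of the demo's treeTargetVariance() method.

```javascript
// population variance: mean of squared deviations from the mean
function variance(ys) {
  let mean = ys.reduce((a, b) => a + b, 0.0) / ys.length;
  let sum = 0.0;
  for (let i = 0; i < ys.length; ++i)
    sum += (ys[i] - mean) * (ys[i] - mean);
  return sum / ys.length;
}

// proposed split on x = 0.37 in column [1]
let leftY = [2.0, 3.0];        // y values for rows [3], [0]
let rightY = [9.0, 7.0, 6.0];  // y values for rows [1], [2], [4]
let n = leftY.length + rightY.length;
let wv = (leftY.length * variance(leftY) +
  rightY.length * variance(rightY)) / n;

console.log(variance(leftY).toFixed(2));   // 0.25
console.log(variance(rightY).toFixed(2));  // 1.56
console.log(wv.toFixed(2));                // 1.03
```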

This process is repeated for every x value in the training data. The x threshold value and its column that generate the smallest weighted variance define the split. In practice it's common to keep track of which values in a given column have already been checked, to avoid unnecessary work computing weighted variance.

When a node is not split further (a leaf node), its predicted y value is the average of the y values associated with the node. The demo program also computes predicted y values for internal, non-leaf nodes, but those predicted values are not used.

The demo does not use recursion to build the tree. Instead, the demo uses a stack data structure. In high-level pseudo-code:

create empty stack
push (root, depth = 0)
while stack not empty
  pop (currNode, currDepth)
  if at maxDepth or not enough data then
    node is a leaf, predicted = avg of associated y values
    continue
  compute split value and split column using the
    associated rows stored in currNode
  if unable to split then
    node is a leaf, predicted = avg of associated y values
    continue
  compute left-rows, right-rows
  create and push (leftNode, currDepth+1)
  create and push (rightNode, currDepth+1)
end-while

The build-tree algorithm is short but quite tricky. Each stack entry holds a tree node (which stores its associated rows of the training data) and the current depth of the tree. The current depth is maintained so that the algorithm knows when maxDepth has been reached.
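The stack-instead-of-recursion pattern can be isolated in a tiny sketch. This toy buildOrder() function (not part of the demo) just records the order in which node IDs would be processed for a full tree; because a stack is last-in, first-out, the right subtree of each node is visited before the left.

```javascript
// Explicit stack of (nodeID, depth) pairs replaces the call stack.
function buildOrder(maxDepth) {
  let visited = [];
  let stack = [];
  stack.push({ id: 0, depth: 0 });  // root
  while (stack.length > 0) {
    let info = stack.pop();  // LIFO
    visited.push(info.id);
    if (info.depth == maxDepth) continue;  // leaf level reached
    stack.push({ id: 2 * info.id + 1, depth: info.depth + 1 });
    stack.push({ id: 2 * info.id + 2, depth: info.depth + 1 });
  }
  return visited;
}

console.log(buildOrder(2).join(" "));  // 0 2 6 5 1 4 3
```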



My love of math, computer science, and machine learning can probably be traced back to the games I loved to play as a young man: poker, Yahtzee, blackjack, and many others.

I was also fascinated by gambling machines of all kinds. When I was about ten years old, my Uncle Brent gave me a small mechanical slot machine toy from Vegas after one of his trips there.

Here is "Twenty One" made in 1936 by the Groetchen Tool Company of Chicago. It's a lot like blackjack where you have to beat the dealer's total without going over 21.

Left: You put a coin in and pull the handle. You get to see the first two numbers. Here, 7 + 6 = 13. Now, if you want you can draw a card by pulling the small lever in position 3.

Right: The card drawn was a 3, so the player has a total of 16. If you want to stand with that total, you pull the "House" lever to reveal the House hand. Here it is 20, so the player loses.


Demo code. Replace "lt" (less than), "gt", "lte", "gte" with Boolean operator symbols (my blog editor chokes on symbols).

// decision_tree_regression.js
// node.js v22.16.0

let FS = require('fs');  // file load

class DecisionTreeRegressor
{
  // --------------------------------------------------------

  constructor(maxDepth, minSamples,
    minLeaf, numSplitCols, seed)
  {
    // if maxDepth = n, list has 2^(n+1) - 1 nodes
    this.maxDepth = maxDepth;
    this.minSamples = minSamples;
    this.minLeaf = minLeaf;
    this.numSplitCols = numSplitCols; // for random forest

    // create full tree
    this.tree = [];  // a list of Node objects
    let numNodes = Math.pow(2, (maxDepth + 1)) - 1;
    for (let i = 0; i "lt" numNodes; ++i) {
      let n = new this.Node(i, -1, 0.0, -1,
        -1, 0.0, false, []);
      this.tree[i] = n; // dummy nodes
    }

    this.seed = seed + 0.5;  // quasi random
    this.trainX = null;  // supplied in train()
    this.trainY = null;  
  }

  // --------------------------------------------------------

    Node = function(id, colIdx, thresh, left, right, value,
      isLeaf, rows) // call:  n = new this.Node(0, . .);
    {
      this.id = id; this.colIdx = colIdx;
      this.thresh = thresh; this.left = left;
      this.right = right; this.value = value;
      this.isLeaf = isLeaf;
      this.rows = rows; // a list
    }

    StackInfo = function(node, depth)
    {
      this.node = node;
      this.depth = depth;
    }

  // --------------------------------------------------------

  // core methods: 
  // train(), predict(), explain(), accuracy(),
  //   mse(), display()

  // --------------------------------------------------------

  train(trainX, trainY)
  {
    this.trainX = trainX;  // by ref
    this.trainY = trainY;
    this.makeTree();
  }

  // --------------------------------------------------------

  predict(x)
  {
    let p = 0;
    let currNode = this.tree[p];
    while (currNode.isLeaf == false &&
    p "lt" this.tree.length) {
      if (x[currNode.colIdx] "lte" currNode.thresh)
        p = currNode.left;
      else
        p = currNode.right;
      currNode = this.tree[p];
    }
    return this.tree[p].value;
  }

  // --------------------------------------------------------

  explain(x)
  {
    let p = 0;
    console.log("\nIF ");
    let currNode = this.tree[p];
    while (currNode.isLeaf == false) {
      process.stdout.write("column ");
      process.stdout.write(currNode.colIdx + " ");
      if (x[currNode.colIdx] "lte" currNode.thresh) {
        process.stdout.write(" "lte" ");
        console.log(currNode.thresh.toFixed(4).
        toString().padStart(8) + " AND ");
        p = currNode.left;
        currNode = this.tree[p];
      }
      else {
        process.stdout.write(" "gt"  ");
        console.log(currNode.thresh.toFixed(4).
        toString().padStart(8) + " AND ");
        p = currNode.right;
        currNode = this.tree[p];
      }
    }
    console.log("THEN \npredicted = " +
      currNode.value.toFixed(4).toString());
  }

  // --------------------------------------------------------

  accuracy(dataX, dataY, pctClose)
  {
    let nCorrect = 0; let nWrong = 0;
    for (let i = 0; i "lt" dataX.length; ++i) {
      let x = dataX[i];
      let actualY = dataY[i];
      let predY = this.predict(x);
      if (Math.abs(predY - actualY) "lt" 
        Math.abs(pctClose * actualY)) {
        ++nCorrect;
      }
      else {
        ++nWrong;
      }
    }
    return (nCorrect * 1.0) / (nCorrect + nWrong);
  }

  // --------------------------------------------------------

  mse(dataX, dataY)
  {
    let n = dataX.length;
    let sum = 0.0;
    for (let i = 0; i "lt" n; ++i) {
      let x = dataX[i];
      let actualY = dataY[i];
      let predY = this.predict(x);
      sum += (actualY - predY) * (actualY - predY);
    }
    return sum / n;
  }

  // --------------------------------------------------------

  display(showRows)
  {
    let size = this.tree.length;
    for (let i = 0; i "lt" size; ++i) { // each node
      let n = this.tree[i]; // convenience
      if (n == null) continue;
      if (n.colIdx == -1 && n.isLeaf == false) continue;

      let s = "";
      s += "ID " + n.id.toString().padStart(3) + " ";
      s += "| col " + n.colIdx.toString().padStart(2) + " ";
      s += "| v " + n.thresh.toFixed(4).
        toString().padStart(8) + " ";
      s += "| L " + n.left.toString().padStart(2) + " ";
      s += "| R " + n.right.toString().padStart(2) + " ";
      s += "| y " + n.value.toFixed(4).toString() + " ";
      let tf = n.isLeaf == true ? "T" : "F";
      s += "| leaf " + tf;

      if (showRows == true) {
        s += "\nrows: ";
        for (let k = 0; k "lt" n.rows.length; ++k) {
          if (k "gt" 0 && k % 20 == 0)
              s += "\n";
          s += n.rows[k].toString() + " ";
        }
      }
      console.log(s);
      if (showRows == true) console.log("");
    }
  } // display()

  // --------------------------------------------------------

  // helpers for train(): makeTree(), bestSplit(), next(),
  //   nextInt(), shuffle(), treeTargetMean(),
  //   treeTargetVariance()

  // --------------------------------------------------------

  makeTree()
  {
    if (this.numSplitCols == -1)
      this.numSplitCols = this.trainX[0].length;

    let allRows = [];
    for (let i = 0; i "lt" this.trainX.length; ++i)
      allRows.push(i);
    let grandMean = this.treeTargetMean(allRows);

    let root = new this.Node(0, -1, 0.0, 1, 2,
        grandMean, false, allRows);
      
    let stack = [];  // last-in, first-out (LIFO)
    stack.push(new this.StackInfo(root, 0));
    while (stack.length "gt" 0) {
      let info = stack.pop();
      let currNode = info.node;
      this.tree[currNode.id] = currNode;

      let currRows = info.node.rows;
      let currDepth = info.depth;

      if (currDepth == this.maxDepth ||
        currRows.length "lt" this.minSamples) {
        currNode.value = this.treeTargetMean(currRows);
        currNode.isLeaf = true;
        continue;
      }

      let splitInfo = this.bestSplit(currRows);
      let colIdx = Math.trunc(splitInfo[0]);
      let thresh = splitInfo[1];

      if (colIdx == -1) {  // unable to split
        currNode.value = this.treeTargetMean(currRows);
        currNode.isLeaf = true;
        continue;
      }

      // good split so at internal, non-leaf node
      currNode.colIdx = colIdx;
      currNode.thresh = thresh;

      // make the data splits for children
      // find left rows and right rows
      let leftIdxs = [];
      let rightIdxs = [];

      for (let k = 0; k "lt" currRows.length; ++k) {
        let r = currRows[k];
        if (this.trainX[r][colIdx] "lte" thresh)
          leftIdxs.push(r);
        else
          rightIdxs.push(r);
      }

      let leftID = currNode.id * 2 + 1;
      let currNodeLeft = new this.Node(leftID, -1, 0.0,
        2 * leftID + 1, 2 * leftID + 2,
        this.treeTargetMean(leftIdxs), false, leftIdxs);
      stack.push(new this.StackInfo(currNodeLeft,
        currDepth + 1));

      let rightID = currNode.id * 2 + 2;
      let currNodeRight = new this.Node(rightID, -1, 0.0,
        2*rightID + 1, 2 * rightID + 2,
        this.treeTargetMean(rightIdxs), false, rightIdxs);
      stack.push(new this.StackInfo(currNodeRight,
        currDepth + 1));

    } // while
  } // makeTree()

  // --------------------------------------------------------

  bestSplit(rows)
  {
    // result[0] = best col idx (as double)
    // result[1] = best split value

    let bestColIdx = -1;  // indicates bad split
    let bestThresh = 0.0;
    let bestVar = Number.MAX_VALUE;  // smaller is better

    let nRows = rows.length;
    let nCols = this.trainX[0].length;

    if (nRows == 0)
      console.log("FATAL: empty data in bestSplit()");

    // process cols in scrambled order
    let colIndices = [];
    for (let k = 0; k "lt" nCols; ++k)
      colIndices[k] = k;
    this.shuffle(colIndices);

    // each column of trainX
    for (let j = 0; j "lt" this.numSplitCols; ++j) {
      let colIdx = colIndices[j];
      let examineds = [];

      for (let i = 0; i "lt" nRows; ++i) {  // each row
        // if curr thresh been seen, skip it
        let thresh = this.trainX[rows[i]][colIdx];
        if (examineds.includes(thresh)) continue;
        examineds.push(thresh);

        // get row idxs where x is lte, gt thresh
        let leftIdxs = [];
        let rightIdxs = [];
        for (let k = 0; k "lt" nRows; ++k) {
          if (this.trainX[rows[k]][colIdx] "lte" thresh)
            leftIdxs.push(rows[k]);
          else
            rightIdxs.push(rows[k]);
        }

        // Check if proposed split has too few values
        if (leftIdxs.length "lt" this.minLeaf || 
        rightIdxs.length "lt" this.minLeaf) 
          continue;  // to next row

        let leftVar = this.treeTargetVariance(leftIdxs);
        let rightVar = this.treeTargetVariance(rightIdxs);
        let weightedVar = (leftIdxs.length * leftVar +
          rightIdxs.length * rightVar) / nRows;

        if (weightedVar "lt" bestVar) {
          bestColIdx = colIdx;
          bestThresh = thresh;
          bestVar = weightedVar;
        }
      } // each row
    } // j each col

    let result = [-1, -1];
    result[0] = bestColIdx;
    result[1] = bestThresh;
    return result;
  } // bestSplit()

  // --------------------------------------------------------

  next()
  {
    // return sort-of-random in [0.0, 1.0)
    let x = Math.sin(this.seed) * 1000;
    let result = x - Math.floor(x);  // [0.0,1.0)
    this.seed = result;  // for next call
    return result;
  }

  // --------------------------------------------------------

  nextInt(lo, hi)
  {
    let x = this.next();
    return Math.trunc((hi - lo) * x + lo);
  }

  // --------------------------------------------------------

  shuffle(indices)
  {
    // Fisher-Yates
    for (let i = 0; i "lt" indices.length; ++i) {
      let ri = this.nextInt(i, indices.length);
      let tmp = indices[ri];
      indices[ri] = indices[i];
      indices[i] = tmp;
    }
  }

  // -------------------------------------------------------- 

  treeTargetMean(rows)
  {
    let sum = 0.0;
    for (let i = 0; i "lt" rows.length; ++i) {
      let r = rows[i];
      sum += this.trainY[r];
    }
    return sum / rows.length;
  }

  // ------------------------------------------------------

  treeTargetVariance(rows)
  {
    let mean = this.treeTargetMean(rows);
    let sum = 0.0;
    for (let i = 0; i "lt" rows.length; ++i) {
      let r = rows[i];
      sum += (this.trainY[r] - mean) *
        (this.trainY[r] - mean);
    }
    return sum / rows.length;
  }

} // class DecisionTreeRegressor

// ----------------------------------------------------------
// helper functions for main(): loadTxt(), vecShow()
// ----------------------------------------------------------

function loadTxt(fn, delimit, usecols, comment) {
  // load numeric matrix from text file
  let all = FS.readFileSync(fn, "utf8");  // giant string
  all = all.trim();  // strip final crlf in file
  let lines = all.split("\n");  // array of lines

  // count number non-comment lines
  let nRows = 0;
  for (let i = 0; i "lt" lines.length; ++i) {
    if (!lines[i].startsWith(comment))
      ++nRows;
  }
  let nCols = usecols.length;
  //let result = matMake(nRows, nCols, 0.0);
  let result = [];
  for (let i = 0; i "lt" nRows; ++i) {
    result[i] = [];
    for (let j = 0; j "lt" nCols; ++j) {
      result[i][j] = 0.0;
    }
  }
 
  let r = 0;  // into lines
  let i = 0;  // into result[][]
  while (r "lt" lines.length) {
    if (lines[r].startsWith(comment)) {
      ++r;  // next row
    }
    else {
      let tokens = lines[r].split(delimit);
      for (let j = 0; j "lt" nCols; ++j) {
        result[i][j] = parseFloat(tokens[usecols[j]]);
      }
      ++r;
      ++i;
    }
  }

  if (usecols.length "gt" 1)
    return result;

  // if length of usecols is just 1, convert matrix to vector
  // the curr matrix is shape nRows x 1
  if (usecols.length == 1) {
    let vec = [];
    for (let i = 0; i "lt" nRows; ++i)
      vec[i] = result[i][0];
    return vec;
  }
}

// ----------------------------------------------------------

function vecShow(v, dec, len)
{
  for (let i = 0; i "lt" v.length; ++i) {
    if (i != 0 && i % len == 0) {
      process.stdout.write("\n");
    }
    if (v[i] "gte" 0.0) {
      process.stdout.write(" ");  // + or - space
    }
    process.stdout.write(v[i].toFixed(dec));
    process.stdout.write("  ");
  }
  process.stdout.write("\n");
}

// ----------------------------------------------------------

function main()
{
  console.log("\nJavaScript decision tree regression ");

  // 1. load data
  console.log("\nLoading synthetic train (200), test (40)");
  let trainFile = ".\\Data\\synthetic_train_200.txt";
  let testFile = ".\\Data\\synthetic_test_40.txt";

  let trainX = loadTxt(trainFile, ",", [0,1,2,3,4], "#");
  let trainY = loadTxt(trainFile, ",", [5], "#");
  let testX = loadTxt(testFile, ",", [0,1,2,3,4], "#");
  let testY = loadTxt(testFile, ",", [5], "#");
  console.log("Done ");

  console.log("\nFirst three train x: ");
  for (let i = 0; i "lt" 3; ++i)
    vecShow(trainX[i], 4, 8);  // 4 decimals 8 wide
  console.log("\nFirst three train y: ");
  for (let i = 0; i "lt" 3; ++i)
    process.stdout.write(trainY[i].toString() + " ");
  console.log("");

  // 2. create and train tree
  let maxDepth = 3;
  let minSamples = 2;
  let minLeaf = 18;  // 18
  let numSplitCols = -1;  // all columns
  let seed = 0;
  console.log("\nSetting maxDepth = " +
    maxDepth.toString());
  console.log("Setting minSamples = " +
    minSamples.toString());
  console.log("Setting minLeaf = " +
    minLeaf.toString());
  console.log("Using default numSplitCols = -1 ");

  console.log("\nCreating and training tree ");
  let dtr = 
    new DecisionTreeRegressor(maxDepth, minSamples,
      minLeaf, numSplitCols, seed);

  dtr.train(trainX, trainY);
  console.log("Done \n");
  dtr.display(false); // don't show rows

  // 3. evaluate tree
  console.log("\nEvaluating tree model ");
  let trainAcc = dtr.accuracy(trainX, trainY, 0.10);
  let testAcc = dtr.accuracy(testX, testY, 0.10);
  console.log("\nTrain acc (within 0.10) = " +
    trainAcc.toFixed(4).toString());
  console.log("Test acc (within 0.10) = " +
    testAcc.toFixed(4).toString());

  let trainMSE = dtr.mse(trainX, trainY);
  let testMSE = dtr.mse(testX, testY);
  console.log("\nTrain MSE = " +
    trainMSE.toFixed(4).toString());
  console.log("Test MSE = " +
    testMSE.toFixed(4).toString());

  // 4. use tree to make prediction
  let x = trainX[0];
  console.log("\nPredicting for x = ");
  vecShow(x, 4, 9);
  let predY = dtr.predict(x);
  console.log("y = " + predY.toFixed(4).toString());

  dtr.explain(x);

  console.log("\nEnd decision tree regression demo ");
}

main();

Training data:

# synthetic_train_200.txt
#
-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
-0.8299, -0.9219, -0.6603,  0.7563, -0.8033,  0.7955
 0.0663,  0.3838, -0.3690,  0.3730,  0.6693,  0.3206
-0.9634,  0.5003,  0.9777,  0.4963, -0.4391,  0.7377
-0.1042,  0.8172, -0.4128, -0.4244, -0.7399,  0.4801
-0.9613,  0.3577, -0.5767, -0.4689, -0.0169,  0.6861
-0.7065,  0.1786,  0.3995, -0.7953, -0.1719,  0.5569
 0.3888, -0.1716, -0.9001,  0.0718,  0.3276,  0.2500
 0.1731,  0.8068, -0.7251, -0.7214,  0.6148,  0.3297
-0.2046, -0.6693,  0.8550, -0.3045,  0.5016,  0.2129
 0.2473,  0.5019, -0.3022, -0.4601,  0.7918,  0.2613
-0.1438,  0.9297,  0.3269,  0.2434, -0.7705,  0.5171
 0.1568, -0.1837, -0.5259,  0.8068,  0.1474,  0.3307
-0.9943,  0.2343, -0.3467,  0.0541,  0.7719,  0.5581
 0.2467, -0.9684,  0.8589,  0.3818,  0.9946,  0.1092
-0.6553, -0.7257,  0.8652,  0.3936, -0.8680,  0.7018
 0.8460,  0.4230, -0.7515, -0.9602, -0.9476,  0.1996
-0.9434, -0.5076,  0.7201,  0.0777,  0.1056,  0.5664
 0.9392,  0.1221, -0.9627,  0.6013, -0.5341,  0.1533
 0.6142, -0.2243,  0.7271,  0.4942,  0.1125,  0.1661
 0.4260,  0.1194, -0.9749, -0.8561,  0.9346,  0.2230
 0.1362, -0.5934, -0.4953,  0.4877, -0.6091,  0.3810
 0.6937, -0.5203, -0.0125,  0.2399,  0.6580,  0.1460
-0.6864, -0.9628, -0.8600, -0.0273,  0.2127,  0.5387
 0.9772,  0.1595, -0.2397,  0.1019,  0.4907,  0.1611
 0.3385, -0.4702, -0.8673, -0.2598,  0.2594,  0.2270
-0.8669, -0.4794,  0.6095, -0.6131,  0.2789,  0.4700
 0.0493,  0.8496, -0.4734, -0.8681,  0.4701,  0.3516
 0.8639, -0.9721, -0.5313,  0.2336,  0.8980,  0.1412
 0.9004,  0.1133,  0.8312,  0.2831, -0.2200,  0.1782
 0.0991,  0.8524,  0.8375, -0.2102,  0.9265,  0.2150
-0.6521, -0.7473, -0.7298,  0.0113, -0.9570,  0.7422
 0.6190, -0.3105,  0.8802,  0.1640,  0.7577,  0.1056
 0.6895,  0.8108, -0.0802,  0.0927,  0.5972,  0.2214
 0.1982, -0.9689,  0.1870, -0.1326,  0.6147,  0.1310
-0.3695,  0.7858,  0.1557, -0.6320,  0.5759,  0.3773
-0.1596,  0.3581,  0.8372, -0.9992,  0.9535,  0.2071
-0.2468,  0.9476,  0.2094,  0.6577,  0.1494,  0.4132
 0.1737,  0.5000,  0.7166,  0.5102,  0.3961,  0.2611
 0.7290, -0.3546,  0.3416, -0.0983, -0.2358,  0.1332
-0.3652,  0.2438, -0.1395,  0.9476,  0.3556,  0.4170
-0.6029, -0.1466, -0.3133,  0.5953,  0.7600,  0.4334
-0.4596, -0.4953,  0.7098,  0.0554,  0.6043,  0.2775
 0.1450,  0.4663,  0.0380,  0.5418,  0.1377,  0.2931
-0.8636, -0.2442, -0.8407,  0.9656, -0.6368,  0.7429
 0.6237,  0.7499,  0.3768,  0.1390, -0.6781,  0.2185
-0.5499,  0.1850, -0.3755,  0.8326,  0.8193,  0.4399
-0.4858, -0.7782, -0.6141, -0.0008,  0.4572,  0.4197
 0.7033, -0.1683,  0.2334, -0.5327, -0.7961,  0.1776
 0.0317, -0.0457, -0.6947,  0.2436,  0.0880,  0.3345
 0.5031, -0.5559,  0.0387,  0.5706, -0.9553,  0.3107
-0.3513,  0.7458,  0.6894,  0.0769,  0.7332,  0.3170
 0.2205,  0.5992, -0.9309,  0.5405,  0.4635,  0.3532
-0.4806, -0.4859,  0.2646, -0.3094,  0.5932,  0.3202
 0.9809, -0.3995, -0.7140,  0.8026,  0.0831,  0.1600
 0.9495,  0.2732,  0.9878,  0.0921,  0.0529,  0.1289
-0.9476, -0.6792,  0.4913, -0.9392, -0.2669,  0.5966
 0.7247,  0.3854,  0.3819, -0.6227, -0.1162,  0.1550
-0.5922, -0.5045, -0.4757,  0.5003, -0.0860,  0.5863
-0.8861,  0.0170, -0.5761,  0.5972, -0.4053,  0.7301
 0.6877, -0.2380,  0.4997,  0.0223,  0.0819,  0.1404
 0.9189,  0.6079, -0.9354,  0.4188, -0.0700,  0.1907
-0.1428, -0.7820,  0.2676,  0.6059,  0.3936,  0.2790
 0.5324, -0.3151,  0.6917, -0.1425,  0.6480,  0.1071
-0.8432, -0.9633, -0.8666, -0.0828, -0.7733,  0.7784
-0.9444,  0.5097, -0.2103,  0.4939, -0.0952,  0.6787
-0.0520,  0.6063, -0.1952,  0.8094, -0.9259,  0.4836
 0.5477, -0.7487,  0.2370, -0.9793,  0.0773,  0.1241
 0.2450,  0.8116,  0.9799,  0.4222,  0.4636,  0.2355
 0.8186, -0.1983, -0.5003, -0.6531, -0.7611,  0.1511
-0.4714,  0.6382, -0.3788,  0.9648, -0.4667,  0.5950
 0.0673, -0.3711,  0.8215, -0.2669, -0.1328,  0.2677
-0.9381,  0.4338,  0.7820, -0.9454,  0.0441,  0.5518
-0.3480,  0.7190,  0.1170,  0.3805, -0.0943,  0.4724
-0.9813,  0.1535, -0.3771,  0.0345,  0.8328,  0.5438
-0.1471, -0.5052, -0.2574,  0.8637,  0.8737,  0.3042
-0.5454, -0.3712, -0.6505,  0.2142, -0.1728,  0.5783
 0.6327, -0.6297,  0.4038, -0.5193,  0.1484,  0.1153
-0.5424,  0.3282, -0.0055,  0.0380, -0.6506,  0.6613
 0.1414,  0.9935,  0.6337,  0.1887,  0.9520,  0.2540
-0.9351, -0.8128, -0.8693, -0.0965, -0.2491,  0.7353
 0.9507, -0.6640,  0.9456,  0.5349,  0.6485,  0.1059
-0.0462, -0.9737, -0.2940, -0.0159,  0.4602,  0.2606
-0.0627, -0.0852, -0.7247, -0.9782,  0.5166,  0.2977
 0.0478,  0.5098, -0.0723, -0.7504, -0.3750,  0.3335
 0.0090,  0.3477,  0.5403, -0.7393, -0.9542,  0.4415
-0.9748,  0.3449,  0.3736, -0.1015,  0.8296,  0.4358
 0.2887, -0.9895, -0.0311,  0.7186,  0.6608,  0.2057
 0.1570, -0.4518,  0.1211,  0.3435, -0.2951,  0.3244
 0.7117, -0.6099,  0.4946, -0.4208,  0.5476,  0.1096
-0.2929, -0.5726,  0.5346, -0.3827,  0.4665,  0.2465
 0.4889, -0.5572, -0.5718, -0.6021, -0.7150,  0.2163
-0.7782,  0.3491,  0.5996, -0.8389, -0.5366,  0.6516
-0.5847,  0.8347,  0.4226,  0.1078, -0.3910,  0.6134
 0.8469,  0.4121, -0.0439, -0.7476,  0.9521,  0.1571
-0.6803, -0.5948, -0.1376, -0.1916, -0.7065,  0.7156
 0.2878,  0.5086, -0.5785,  0.2019,  0.4979,  0.2980
 0.2764,  0.1943, -0.4090,  0.4632,  0.8906,  0.2960
-0.8877,  0.6705, -0.6155, -0.2098, -0.3998,  0.7107
-0.8398,  0.8093, -0.2597,  0.0614, -0.0118,  0.6502
-0.8476,  0.0158, -0.4769, -0.2859, -0.7839,  0.7715
 0.5751, -0.7868,  0.9714, -0.6457,  0.1448,  0.1175
 0.4802, -0.7001,  0.1022, -0.5668,  0.5184,  0.1090
 0.4458, -0.6469,  0.7239, -0.9604,  0.7205,  0.0779
 0.5175,  0.4339,  0.9747, -0.4438, -0.9924,  0.2879
 0.8678,  0.7158,  0.4577,  0.0334,  0.4139,  0.1678
 0.5406,  0.5012,  0.2264, -0.1963,  0.3946,  0.2088
-0.9938,  0.5498,  0.7928, -0.5214, -0.7585,  0.7687
 0.7661,  0.0863, -0.4266, -0.7233, -0.4197,  0.1466
 0.2277, -0.3517, -0.0853, -0.1118,  0.6563,  0.1767
 0.3499, -0.5570, -0.0655, -0.3705,  0.2537,  0.1632
 0.7547, -0.1046,  0.5689, -0.0861,  0.3125,  0.1257
 0.8186,  0.2110,  0.5335,  0.0094, -0.0039,  0.1391
 0.6858, -0.8644,  0.1465,  0.8855,  0.0357,  0.1845
-0.4967,  0.4015,  0.0805,  0.8977,  0.2487,  0.4663
 0.6760, -0.9841,  0.9787, -0.8446, -0.3557,  0.1509
-0.1203, -0.4885,  0.6054, -0.0443, -0.7313,  0.4854
 0.8557,  0.7919, -0.0169,  0.7134, -0.1628,  0.2002
 0.0115, -0.6209,  0.9300, -0.4116, -0.7931,  0.4052
-0.7114, -0.9718,  0.4319,  0.1290,  0.5892,  0.3661
 0.3915,  0.5557, -0.1870,  0.2955, -0.6404,  0.2954
-0.3564, -0.6548, -0.1827, -0.5172, -0.1862,  0.4622
 0.2392, -0.4959,  0.5857, -0.1341, -0.2850,  0.2470
-0.3394,  0.3947, -0.4627,  0.6166, -0.4094,  0.5325
 0.7107,  0.7768, -0.6312,  0.1707,  0.7964,  0.2757
-0.1078,  0.8437, -0.4420,  0.2177,  0.3649,  0.4028
-0.3139,  0.5595, -0.6505, -0.3161, -0.7108,  0.5546
 0.4335,  0.3986,  0.3770, -0.4932,  0.3847,  0.1810
-0.2562, -0.2894, -0.8847,  0.2633,  0.4146,  0.4036
 0.2272,  0.2966, -0.6601, -0.7011,  0.0284,  0.2778
-0.0743, -0.1421, -0.0054, -0.6770, -0.3151,  0.3597
-0.4762,  0.6891,  0.6007, -0.1467,  0.2140,  0.4266
-0.4061,  0.7193,  0.3432,  0.2669, -0.7505,  0.6147
-0.0588,  0.9731,  0.8966,  0.2902, -0.6966,  0.4955
-0.0627, -0.1439,  0.1985,  0.6999,  0.5022,  0.3077
 0.1587,  0.8494, -0.8705,  0.9827, -0.8940,  0.4263
-0.7850,  0.2473, -0.9040, -0.4308, -0.8779,  0.7199
 0.4070,  0.3369, -0.2428, -0.6236,  0.4940,  0.2215
-0.0242,  0.0513, -0.9430,  0.2885, -0.2987,  0.3947
-0.5416, -0.1322, -0.2351, -0.0604,  0.9590,  0.3683
 0.1055,  0.7783, -0.2901, -0.5090,  0.8220,  0.2984
-0.9129,  0.9015,  0.1128, -0.2473,  0.9901,  0.4776
-0.9378,  0.1424, -0.6391,  0.2619,  0.9618,  0.5368
 0.7498, -0.0963,  0.4169,  0.5549, -0.0103,  0.1614
-0.2612, -0.7156,  0.4538, -0.0460, -0.1022,  0.3717
 0.7720,  0.0552, -0.1818, -0.4622, -0.8560,  0.1685
-0.4177,  0.0070,  0.9319, -0.7812,  0.3461,  0.3052
-0.0001,  0.5542, -0.7128, -0.8336, -0.2016,  0.3803
 0.5356, -0.4194, -0.5662, -0.9666, -0.2027,  0.1776
-0.2378,  0.3187, -0.8582, -0.6948, -0.9668,  0.5474
-0.1947, -0.3579,  0.1158,  0.9869,  0.6690,  0.2992
 0.3992,  0.8365, -0.9205, -0.8593, -0.0520,  0.3154
-0.0209,  0.0793,  0.7905, -0.1067,  0.7541,  0.1864
-0.4928, -0.4524, -0.3433,  0.0951, -0.5597,  0.6261
-0.8118,  0.7404, -0.5263, -0.2280,  0.1431,  0.6349
 0.0516, -0.8480,  0.7483,  0.9023,  0.6250,  0.1959
-0.3212,  0.1093,  0.9488, -0.3766,  0.3376,  0.2735
-0.3481,  0.5490, -0.3484,  0.7797,  0.5034,  0.4379
-0.5785, -0.9170, -0.3563, -0.9258,  0.3877,  0.4121
 0.3407, -0.1391,  0.5356,  0.0720, -0.9203,  0.3458
-0.3287, -0.8954,  0.2102,  0.0241,  0.2349,  0.3247
-0.1353,  0.6954, -0.0919, -0.9692,  0.7461,  0.3338
 0.9036, -0.8982, -0.5299, -0.8733, -0.1567,  0.1187
 0.7277, -0.8368, -0.0538, -0.7489,  0.5458,  0.0830
 0.9049,  0.8878,  0.2279,  0.9470, -0.3103,  0.2194
 0.7957, -0.1308, -0.5284,  0.8817,  0.3684,  0.2172
 0.4647, -0.4931,  0.2010,  0.6292, -0.8918,  0.3371
-0.7390,  0.6849,  0.2367,  0.0626, -0.5034,  0.7039
-0.1567, -0.8711,  0.7940, -0.5932,  0.6525,  0.1710
 0.7635, -0.0265,  0.1969,  0.0545,  0.2496,  0.1445
 0.7675,  0.1354, -0.7698, -0.5460,  0.1920,  0.1728
-0.5211, -0.7372, -0.6763,  0.6897,  0.2044,  0.5217
 0.1913,  0.1980,  0.2314, -0.8816,  0.5006,  0.1998
 0.8964,  0.0694, -0.6149,  0.5059, -0.9854,  0.1825
 0.1767,  0.7104,  0.2093,  0.6452,  0.7590,  0.2832
-0.3580, -0.7541,  0.4426, -0.1193, -0.7465,  0.5657
-0.5996,  0.5766, -0.9758, -0.3933, -0.9572,  0.6800
 0.9950,  0.1641, -0.4132,  0.8579,  0.0142,  0.2003
-0.4717, -0.3894, -0.2567, -0.5111,  0.1691,  0.4266
 0.3917, -0.8561,  0.9422,  0.5061,  0.6123,  0.1212
-0.0366, -0.1087,  0.3449, -0.1025,  0.4086,  0.2475
 0.3633,  0.3943,  0.2372, -0.6980,  0.5216,  0.1925
-0.5325, -0.6466, -0.2178, -0.3589,  0.6310,  0.3568
 0.2271,  0.5200, -0.1447, -0.8011, -0.7699,  0.3128
 0.6415,  0.1993,  0.3777, -0.0178, -0.8237,  0.2181
-0.5298, -0.0768, -0.6028, -0.9490,  0.4588,  0.4356
 0.6870, -0.1431,  0.7294,  0.3141,  0.1621,  0.1632
-0.5985,  0.0591,  0.7889, -0.3900,  0.7419,  0.2945
 0.3661,  0.7984, -0.8486,  0.7572, -0.6183,  0.3449
 0.6995,  0.3342, -0.3113, -0.6972,  0.2707,  0.1712
 0.2565,  0.9126,  0.1798, -0.6043, -0.1413,  0.2893
-0.3265,  0.9839, -0.2395,  0.9854,  0.0376,  0.4770
 0.2690, -0.1722,  0.9818,  0.8599, -0.7015,  0.3954
-0.2102, -0.0768,  0.1219,  0.5607, -0.0256,  0.3949
 0.8216, -0.9555,  0.6422, -0.6231,  0.3715,  0.0801
-0.2896,  0.9484, -0.7545, -0.6249,  0.7789,  0.4370
-0.9985, -0.5448, -0.7092, -0.5931,  0.7926,  0.5402

Test data:

# synthetic_test_40.txt
#
 0.7462,  0.4006, -0.0590,  0.6543, -0.0083,  0.1935
 0.8495, -0.2260, -0.0142, -0.4911,  0.7699,  0.1078
-0.2335, -0.4049,  0.4352, -0.6183, -0.7636,  0.5088
 0.1810, -0.5142,  0.2465,  0.2767, -0.3449,  0.3136
-0.8650,  0.7611, -0.0801,  0.5277, -0.4922,  0.7140
-0.2358, -0.7466, -0.5115, -0.8413, -0.3943,  0.4533
 0.4834,  0.2300,  0.3448, -0.9832,  0.3568,  0.1360
-0.6502, -0.6300,  0.6885,  0.9652,  0.8275,  0.3046
-0.3053,  0.5604,  0.0929,  0.6329, -0.0325,  0.4756
-0.7995,  0.0740, -0.2680,  0.2086,  0.9176,  0.4565
-0.2144, -0.2141,  0.5813,  0.2902, -0.2122,  0.4119
-0.7278, -0.0987, -0.3312, -0.5641,  0.8515,  0.4438
 0.3793,  0.1976,  0.4933,  0.0839,  0.4011,  0.1905
-0.8568,  0.9573, -0.5272,  0.3212, -0.8207,  0.7415
-0.5785,  0.0056, -0.7901, -0.2223,  0.0760,  0.5551
 0.0735, -0.2188,  0.3925,  0.3570,  0.3746,  0.2191
 0.1230, -0.2838,  0.2262,  0.8715,  0.1938,  0.2878
 0.4792, -0.9248,  0.5295,  0.0366, -0.9894,  0.3149
-0.4456,  0.0697,  0.5359, -0.8938,  0.0981,  0.3879
 0.8629, -0.8505, -0.4464,  0.8385,  0.5300,  0.1769
 0.1995,  0.6659,  0.7921,  0.9454,  0.9970,  0.2330
-0.0249, -0.3066, -0.2927, -0.4923,  0.8220,  0.2437
 0.4513, -0.9481, -0.0770, -0.4374, -0.9421,  0.2879
-0.3405,  0.5931, -0.3507, -0.3842,  0.8562,  0.3987
 0.9538,  0.0471,  0.9039,  0.7760,  0.0361,  0.1706
-0.0887,  0.2104,  0.9808,  0.5478, -0.3314,  0.4128
-0.8220, -0.6302,  0.0537, -0.1658,  0.6013,  0.4306
-0.4123, -0.2880,  0.9074, -0.0461, -0.4435,  0.5144
 0.0060,  0.2867, -0.7775,  0.5161,  0.7039,  0.3599
-0.7968, -0.5484,  0.9426, -0.4308,  0.8148,  0.2979
 0.7811,  0.8450, -0.6877,  0.7594,  0.2640,  0.2362
-0.6802, -0.1113, -0.8325, -0.6694, -0.6056,  0.6544
 0.3821,  0.1476,  0.7466, -0.5107,  0.2592,  0.1648
 0.7265,  0.9683, -0.9803, -0.4943, -0.5523,  0.2454
-0.9049, -0.9797, -0.0196, -0.9090, -0.4433,  0.6447
-0.4607,  0.1811, -0.2389,  0.4050, -0.0078,  0.5229
 0.2664, -0.2932, -0.4259, -0.7336,  0.8742,  0.1834
-0.4507,  0.1029, -0.6294, -0.1158, -0.6294,  0.6081
 0.8948, -0.0124,  0.9278,  0.2899, -0.0314,  0.1534
-0.1323, -0.8813, -0.0146, -0.0697,  0.6135,  0.2386
Posted in C#, Machine Learning | Leave a comment

The Ladybug on the Clock Question

I was sitting in an airport, waiting to board a flight. I figured I’d use the time to write a computer simulation to solve the Ladybug on the Clock question.

Imagine a standard clock. A ladybug flies in and lands on the “12”. At each second, the ladybug randomly moves forward one number, or backwards one number. She continues doing this until she hits all 12 numbers. What is the probability that the last number she lands on is the “6”?

As it turns out, the counterintuitive answer is that all 11 numbers (1 through 11; the 12 doesn't count because the ladybug starts there) are equally likely to be the last number landed on. The probability for each of the 11 numbers is 1/11 = 0.0909.

The simulation was a bit trickier than I expected, but even so, it only took me about 20 minutes to code up a simulation, because I have written simulations like this many times over the past 50 years (starting with my school days at U.C. Irvine when I wrote a craps dice game simulation).

I ran the simulation for 100,000 trials. The output:

Begin ladybug-clock simulation
Setting n_trials = 100000
Done

Last number landed frequencies:

   1  0.0906
   2  0.0910
   3  0.0901
   4  0.0897
   5  0.0910
   6  0.0913
   7  0.0922
   8  0.0911
   9  0.0915
  10  0.0910
  11  0.0905

I could have gotten more accurate results by running the simulation longer.

Good fun sitting in an airport.

Demo program:

# ladybug_clock.py
#
# A ladybug starts on the "12" of a clock.
# Each second she randomly moves forward one number,
# or back one number.
# What is the prob that the last number she 
# touches is the "6"?

import numpy as np

print("\nBegin ladybug-clock simulation ")

counts = np.zeros(13, dtype=np.int64)
rnd = np.random.RandomState(0)

n_trials = 100000
print("Setting n_trials = " + str(n_trials))
for trial in range(n_trials):
  positions = np.zeros(13, dtype=np.int64)
  spot = 12
  positions[12] = 1
  while np.sum(positions) != 11:
    flip = rnd.randint(low=0, high=2) # 0 or 1
    if flip == 0:  # go backward
      if spot == 1: spot = 12
      else: spot -= 1
    elif flip == 1: # go forward
      if spot == 12: spot = 1
      else: spot += 1
    positions[spot] = 1
  # find the missing spot: the one unvisited
  # number is necessarily the last she reaches
  for i in range(1,13):
    if positions[i] == 0:
      # print("last landed = " + str(i))
      counts[i] += 1
print("Done ")

print("\nLast number landed frequencies: \n")
for i in range(1,12):
  print("%4d  %0.4f" % (i, (counts[i] / n_trials)))
Posted in Miscellaneous | Leave a comment

“Quadratic Regression with SGD Training Using JavaScript” in Visual Studio Magazine

I wrote an article titled “Quadratic Regression with SGD Training Using JavaScript” in the March 2026 edition of Microsoft Visual Studio Magazine. See https://visualstudiomagazine.com/articles/2026/03/11/quadratic-regression-with-sgd-training-using-javascript.aspx.

The goal of a machine learning regression problem is to predict a single numeric value. For example, you might want to predict an employee’s salary based on age, IQ test score, high school grade point average, and so on. There are approximately a dozen common regression techniques. The most basic technique is called linear regression.

The form of a basic linear regression prediction model is y' = (w0 * x0) + (w1 * x1) + . . . + b, where y' is the predicted value, the xi are predictor values, the wi are weights, and b is the bias. Quadratic regression extends linear regression. The form of a quadratic regression model is y' = (w0 * x0) + . . . + (wn * xn) + (wj * x0 * x0) + . . . + (wk * x0 * x1) + . . . + b. There are derived predictors that are the square of each original predictor, and interaction terms that are the product of all possible pairs of original predictors.
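As a concrete sketch of the term expansion described above (the function name is illustrative, not from the article's code), the derived predictors for an input vector can be generated like this:

```python
# Sketch: expand an input vector into quadratic regression terms:
# original predictors, squared terms, and all pairwise interactions.

def expand_quadratic(x):
    feats = list(x)                    # original predictors
    feats += [xi * xi for xi in x]     # squared terms
    n = len(x)
    for i in range(n):                 # interaction terms, all pairs
        for j in range(i + 1, n):
            feats.append(x[i] * x[j])
    return feats

x = [0.5, -0.2, 0.1]
print(len(expand_quadratic(x)))  # 3 + 3 + 3 = 9 derived predictors
```

The predicted y is then the weighted sum of these expanded terms plus the bias.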

Compared to basic linear regression, quadratic regression can handle more complex data. And compared to the most powerful regression techniques that are designed to handle complex data, quadratic regression often has slightly worse prediction accuracy, but is much easier to implement and train, and has much better model interpretability.

My article presents a demo of quadratic regression, implemented from scratch, trained with stochastic gradient descent (SGD), using the JavaScript language. The output of the demo program is:

Begin quadratic regression with SGD training
 using node.js JavaScript

Loading train (200) and test (40) from file

First three train X:
 -0.1660   0.4406  -0.9998  -0.3953  -0.7065
  0.0776  -0.1616   0.3704  -0.5911   0.7562
 -0.9452   0.3409  -0.1654   0.1174  -0.7192

First three train y:
   0.4840
   0.1568
   0.8054

Creating quadratic regression model

Setting lrnRate = 0.001
Setting maxEpochs = 1000

Starting SGD training
epoch =      0  MSE = 0.0925  acc = 0.0100
epoch =    200  MSE = 0.0003  acc = 0.9050
epoch =    400  MSE = 0.0003  acc = 0.8850
epoch =    600  MSE = 0.0003  acc = 0.8850
epoch =    800  MSE = 0.0003  acc = 0.8850
Done

Model base weights:
 -0.2630  0.0354 -0.0420  0.0341 -0.1124

Model quadratic weights:
  0.0655  0.0194  0.0051  0.0047  0.0243

Model interaction weights:
  0.0043  0.0249  0.0071  0.1081 -0.0012 -0.0093
  0.0362  0.0085 -0.0568  0.0016

Model bias: 0.3220

Computing model accuracy

Train acc (within 0.10) = 0.8850
Test acc (within 0.10) = 0.9250

Train MSE = 0.0003
Test MSE = 0.0005

Predicting for x =
  -0.1660    0.4406   -0.9998   -0.3953   -0.7065
Predicted y = 0.4843

End demo

The demo data is synthetic and was generated by a 5-10-1 neural network with random weight and bias values. The accuracy function scores a prediction as correct if it’s within 10% of the correct target value.

The squared (aka “quadratic”) xi^2 terms handle non-linear structure. If there are n predictors, there are also n squared terms. The xi * xj terms between all possible pairs of original predictors handle interactions between predictors. If there are n predictors, there are (n * (n-1)) / 2 interaction terms.

Therefore, in general, if there are n original predictor variables, there are a total of n + n + (n * (n-1))/2 model weights and one bias. Behind the scenes, the derived xi^2 squared terms and the derived xi*xj interaction terms are computed programmatically on-the-fly, as opposed to explicitly creating an augmented dataset.
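A quick sanity check of the weight-count formula for the demo's n = 5 predictors (a sketch, not code from the article):

```python
# Number of quadratic regression weights for n = 5 predictors:
# base terms + squared terms + interaction terms.
n = 5
n_weights = n + n + (n * (n - 1)) // 2
print(n_weights)  # 20: matches the demo's 5 base + 5 quadratic + 10 interaction weights
```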

Quadratic regression has a nice balance of prediction power and interpretability. The model weights/coefficients are easy to interpret. If the predictor values have been normalized to the same scale, larger magnitudes mean larger effect, and the sign of a weight indicates the direction of the effect. But you should be a bit careful when interpreting the interaction weights. If an interaction weight is positive, it can indicate an increase in y when the corresponding pair of predictor values are both positive or both negative.



I wrote this blog post while I was visiting Japan. I’m not a huge fan of the Japanese manga style of art, but I do like several animated movies from Studio Ghibli. Three of my favorites feature a young girl as the central character, reflecting Japan’s mildly creepy (to me anyway) cultural focus on young girls.

Left: In “Spirited Away” (2001), a 10-year-old girl named Chihiro must save her parents from a witch named Yubaba. Great art and a totally weird plot. My grade = A-.

Center: In “My Neighbor Totoro” (1988), university professor Tatsuo Kusakabe, his sick wife Yasuko, and his two young daughters Satsuki and Mei, move into an old house near a hospital. Totoro is one of several spirits who live nearby. My grade = A-.

Right: In “Kiki’s Delivery Service” (1989), Kiki is a young witch who runs a delivery service, aided by her cat Jiji. My grade = A-.


Posted in JavaScript, Machine Learning | Leave a comment

Nearest Neighbors Regression Using C# Applied to the Diabetes Dataset – Poor Results

I write code almost every day. Writing code is a skill that must be practiced, and I enjoy writing code. One morning before work, I figured I’d run the well-known Diabetes Dataset through a nearest neighbors regression model. I didn’t expect very good results, and that’s what happened.

But the nearest neighbors regression results are useful as a baseline for comparison with more powerful regression techniques: quadratic regression, kernel ridge regression, neural network regression, decision tree regression, bagging tree regression, random forest regression, adaptive boosting regression, and gradient boosting regression.

The Diabetes Dataset looks like:

59, 2, 32.1, 101.00, 157,  93.2, 38, 4.00, 4.8598, 87, 151
48, 1, 21.6,  87.00, 183, 103.2, 70, 3.00, 3.8918, 69,  75
72, 2, 30.5,  93.00, 156,  93.6, 41, 4.00, 4.6728, 85, 141
. . .

Each line represents a patient. The first 10 values on each line are predictors. The last value on each line is the target value (a diabetes metric) to predict. The predictors are: age, sex, body mass index, blood pressure, serum cholesterol, low-density lipoproteins, high-density lipoproteins, total cholesterol, triglycerides, blood sugar. There are 442 data items.

The sex encoding isn’t explained, but I suspect male = 1, female = 2 (because there are 235 1-values and 206 2-values).

I converted the sex values from 1,2 into 0,1. Then I applied divide-by-constant normalization by dividing the 10 predictor columns by (100, 1, 100, 1000, 1000, 1000, 100, 10, 10, 1000) and the target y values by 1000. The resulting encoded and normalized diabetes data looks like:

0.5900, 1.0000, 0.3210, . . . 0.1510
0.4800, 0.0000, 0.2160, . . . 0.0750
0.7200, 1.0000, 0.3050, . . . 0.1410
. . .
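The encode-and-normalize step above can be sketched like this (the helper name is illustrative; the divisors are the ones listed above):

```python
# Sketch: recode sex from 1,2 to 0,1 then apply
# divide-by-constant normalization to one raw diabetes line.
divisors = [100, 1, 100, 1000, 1000, 1000, 100, 10, 10, 1000]

def normalize(raw):                 # raw = 10 predictors + target
    x = list(raw[:10])
    x[1] = x[1] - 1.0               # sex: 1,2 -> 0,1
    x = [v / d for (v, d) in zip(x, divisors)]
    y = raw[10] / 1000.0            # target y
    return x, y

raw = [59, 2, 32.1, 101.00, 157, 93.2, 38, 4.00, 4.8598, 87, 151]
x, y = normalize(raw)
print(round(x[0], 4), x[1], round(y, 4))  # 0.59 1.0 0.151
```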

I split the 442 items into a 342-item training set and a 100-item test set.

Nearest neighbors regression is arguably the simplest of all regression techniques. Suppose you want to predict the y value for an input vector x of predictor values. You scan through a set of training data (sometimes called the reference data) and find the k closest data items, where the value of k is typically about 3 or 5. Then you fetch the associated k target values from the training data, compute their average, and that average is the predicted y value.
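A minimal Python sketch of the idea, using equal neighbor weights (names are illustrative, not taken from the C# demo that follows):

```python
# Sketch: k-nearest-neighbors regression with equal weights.
import math

def knn_predict(train_x, train_y, x, k=5):
    dists = [math.dist(x, xi) for xi in train_x]  # Euclidean distances
    nearest = sorted(range(len(dists)),
                     key=lambda i: dists[i])[:k]  # indices, near to far
    return sum(train_y[i] for i in nearest) / k   # average target value

train_x = [[0.0], [1.0], [2.0], [3.0]]
train_y = [0.0, 1.0, 2.0, 3.0]
print(knn_predict(train_x, train_y, [1.1], k=2))  # 1.5
```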

I implemented a nearest neighbors regression model using C#. The output of the demo is:

Begin k-NN regression using C#

Loading diabetes train (342) and test (100) data
Done

First three train X:
  0.5900  1.0000  0.3210  . . .  0.0870
  0.4800  0.0000  0.2160  . . .  0.0690
  0.7200  1.0000  0.3050  . . .  0.0850

First three train y:
  0.1510
  0.0750
  0.1410

Creating k-NN regression model, k=5
Done

Accuracy train (within 0.10): 0.2602
Accuracy test (within 0.10): 0.1700

MSE train = 0.0027
MSE test = 0.0041

Predicting for trainX[0]
Predicted y = 0.1828

Explaining for train[0]
X = [  0]   0.59  . . .  0.09 | y = 0.1510 | dist = 0.0000 | wt = 0.2000
X = [215]   0.56  . . .  0.12 | y = 0.2630 | dist = 0.0598 | wt = 0.2000
X = [271]   0.59  . . .  0.09 | y = 0.1270 | dist = 0.0663 | wt = 0.2000
X = [341]   0.57  . . .  0.09 | y = 0.2630 | dist = 0.0678 | wt = 0.2000
X = [  8]   0.60  . . .  0.09 | y = 0.1100 | dist = 0.0686 | wt = 0.2000

Predicted y = 0.1828

End demo

The demo shows that nearest neighbors regression is highly interpretable. With k set to 5, to predict the y value for the first training item, the model finds the 5 closest training items: items [0], [215], [271], [341], [8]. The associated y values are (0.1510, 0.2630, 0.1270, 0.2630, 0.1100). The predicted y value is the average: (0.1510 + . . + 0.1100) / 5 = 0.1828. The demo weights each of the 5 items equally (wt = 0.2000). An alternative design is to give higher weights to closer items.
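The alternative weighting design can be sketched as inverse-distance weights normalized to sum to 1 (a sketch; the eps term and the function name are illustrative):

```python
# Sketch: inverse-distance neighbor weights. Closer items get larger
# weights; eps guards against division by zero for an exact match.
def inv_dist_weights(dists, eps=1e-8):
    inv = [1.0 / (d + eps) for d in dists]
    s = sum(inv)
    return [v / s for v in inv]   # normalized to sum to 1

w = inv_dist_weights([0.1, 0.2, 0.4])
print([round(v, 4) for v in w])   # nearest item gets the largest weight
```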

I didn’t expect great results and that’s what the model produced. The model scored just 26.02% accuracy on the training data (89 out of 342 correct) and 17.00% on the test data (17 out of 100 correct). A prediction is scored correct if it is within 10% of the true target value.

Interesting. It’s usually impossible to tell in advance if a nearest neighbors regression model will fit a particular dataset, and this is a good example of that.



Quadratic regression is an old machine learning technique that can often be effective (in terms of predictive accuracy). Shadow photography is an old technique that can often be effective (in terms of artistry, at least according to my eye).


Demo program:

using System;
using System.IO;
using System.Collections.Generic;

namespace NearestNeighborsRegression
{
  internal class NearestNeighborsProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin k-NN regression" +
        " using C# ");

      // 1. load data
      Console.WriteLine("\nLoading diabetes train" +
        " (342) and test (100) data");
      string trainFile =
        "..\\..\\..\\Data\\diabetes_norm_train_342.txt";
      int[] colsX = 
        new int[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
      int colY = 10;
      double[][] trainX =
        MatLoad(trainFile, colsX, ',', "#");
      double[] trainY =
        MatToVec(MatLoad(trainFile,
        new int[] { colY }, ',', "#"));

      string testFile =
        "..\\..\\..\\Data\\diabetes_norm_test_100.txt";
      double[][] testX =
        MatLoad(testFile, colsX, ',', "#");
      double[] testY =
        MatToVec(MatLoad(testFile,
        new int[] { colY }, ',', "#"));
      Console.WriteLine("Done ");

      Console.WriteLine("\nFirst three train X: ");
      for (int i = 0; i < 3; ++i)
        VecShow(trainX[i], 4, 8);

      Console.WriteLine("\nFirst three train y: ");
      for (int i = 0; i < 3; ++i)
        Console.WriteLine(trainY[i].
          ToString("F4").PadLeft(8));

      // 2. create and train/fit model
      int numNeighbors = 5;
      Console.WriteLine("\nCreating k-NN regression " +
        "model, k=" + numNeighbors);
      KNNR model = new KNNR(numNeighbors);
      model.Train(trainX, trainY); // aka Fit()
      Console.WriteLine("Done ");

      // 3. evaluate model
      double accTrain = model.Accuracy(trainX, trainY, 0.10);
      Console.WriteLine("\nAccuracy train (within 0.10): " +
        accTrain.ToString("F4"));

      double accTest = model.Accuracy(testX, testY, 0.10);
      Console.WriteLine("Accuracy test (within 0.10): " +
        accTest.ToString("F4"));

      double mseTrain = model.MSE(trainX, trainY);
      Console.WriteLine("\nMSE train = " +
        mseTrain.ToString("F4"));
      double mseTest = model.MSE(testX, testY);
      Console.WriteLine("MSE test = " +
        mseTest.ToString("F4"));

      // 4. use model
      Console.WriteLine("\nPredicting for trainX[0] ");
      double[] x = trainX[0];
      double y = model.Predict(x);
      Console.WriteLine("Predicted y = " +
        y.ToString("F4").PadLeft(4));

      Console.WriteLine("\nExplaining for train[0] ");
      model.Explain(x);

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main

    // helpers for Main(): MatLoad, MatToVec, VecShow

    static double[][] MatLoad(string fn, int[] usecols,
      char sep, string comment)
    {
      List<double[]> result = new List<double[]>();
      string line = "";
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        string[] tokens = line.Split(sep);
        List<double> lst = new List<double>();
        for (int j = 0; j < usecols.Length; ++j)
          lst.Add(double.Parse(tokens[usecols[j]]));
        double[] row = lst.ToArray();
        result.Add(row);
      }
      sr.Close(); ifs.Close();
      return result.ToArray();
    }

    static double[] MatToVec(double[][] mat)
    {
      int nRows = mat.Length;
      int nCols = mat[0].Length;
      double[] result = new double[nRows * nCols];
      int k = 0;
      for (int i = 0; i < nRows; ++i)
        for (int j = 0; j < nCols; ++j)
          result[k++] = mat[i][j];
      return result;
    }

    public static void VecShow(double[] vec,
      int dec, int wid)
    {
      for (int i = 0; i < vec.Length; ++i)
        Console.Write(vec[i].ToString("F" + dec).
          PadLeft(wid));
      Console.WriteLine("");
    }

  } // class Program

  public class KNNR
  {
    public int nNeighbors;
    public double[] weights;
    public double[][] trainX;
    public double[] trainY;

    public KNNR(int nNeighbors)
    {
      this.nNeighbors = nNeighbors;
      this.weights = new double[nNeighbors];
      for (int k = 0; k < nNeighbors; ++k)
        this.weights[k] = 1.0 / nNeighbors;
      this.trainX = new double[0][]; // quasi-null
      this.trainY = new double[0];
    }

    public void Train(double[][] trainX,
      double[] trainY)
    {
      this.trainX = trainX; // store by ref
      this.trainY = trainY;
    }

    public double Predict(double[] x)
    {
      // compute distances from x to all trainX
      int N = trainX.Length;
      double[] dists = new double[N];
      for (int i = 0; i < N; ++i)
        dists[i] = Distance(x, this.trainX[i]);
      // determine sort order
      int[] sortedIdxs = ArgSort(dists);
      double sum = 0.0;
      for (int k = 0; k < this.nNeighbors; ++k)
      {
        int idx = sortedIdxs[k]; // near to far
        sum += this.trainY[idx] * this.weights[k];
      }
      return sum;
    }

    public void Explain(double[] x)
    {
      // 0. set up ordering/indices
      int n = this.trainX.Length;
      int[] indices = new int[n];
      for (int i = 0; i < n; ++i)
        indices[i] = i;

      // 1. compute distances from x to all trainX
      double[] distances = new double[n];
      for (int i = 0; i < n; ++i)
        distances[i] = Distance(x, this.trainX[i]);

      // 2. sort distances indices of X and Y, by distances
      Array.Sort(distances, indices);

      // 3. compute predicted y
      double predY = 0.0;
      for (int k = 0; k < this.nNeighbors; ++k)
        predY += this.weights[k] * this.trainY[indices[k]];

      // 4. show info 
      for (int k = 0; k < this.nNeighbors; ++k)
      {
        int i = indices[k];
        Console.Write("X = ");
        Console.Write("[" + i.ToString().PadLeft(3) + "] ");
 
        for (int j = 0; j < this.trainX[i].Length; ++j)
          Console.Write(this.trainX[i][j].
            ToString("F2").PadLeft(6));

        Console.Write(" | y = ");
        Console.Write(this.trainY[i].ToString("F4"));
        Console.Write(" | dist = ");
        Console.Write(distances[k].ToString("F4"));
        Console.Write(" | wt = ");
        Console.Write(this.weights[k].ToString("F4"));
        Console.WriteLine("");
      }

      Console.WriteLine("\nPredicted y = " +
        predY.ToString("F4"));
    } // Explain

    public double Accuracy(double[][] dataX,
      double[] dataY, double pctClose)
    {
      int nCorrect = 0; int nWrong = 0;
      for (int i = 0; i < dataX.Length; ++i)
      {
        double predY = this.Predict(dataX[i]);
        double actualY = dataY[i];
        if (Math.Abs(predY - actualY) <
          Math.Abs(pctClose * actualY))
          ++nCorrect;
        else
          ++nWrong;
      }
      return (nCorrect * 1.0) / (nCorrect + nWrong);
    }

    public double MSE(double[][] dataX, double[] dataY)
    {
      double sum = 0.0;
      for (int i = 0; i < dataX.Length; ++i)
      {
        double predY = this.Predict(dataX[i]);
        double actualY = dataY[i];
        sum += (predY - actualY) * (predY - actualY);
      }
      return sum / dataX.Length;
    }

    private static double Distance(double[] v1, double[] v2)
    {
      // Euclidean distance
      int n = v1.Length;
      double sum = 0.0;
      for (int i = 0; i < n; ++i)
        sum += (v1[i] - v2[i]) * (v1[i] - v2[i]);
      return Math.Sqrt(sum);
    }

    private static int[] ArgSort(double[] values)
    {
      int n = values.Length;
      double[] copy = new double[n];
      int[] indices = new int[n];
      for (int i = 0; i < n; ++i)
      {
        copy[i] = values[i];
        indices[i] = i;
      }
      Array.Sort(copy, indices);
      return indices;
    }
  } // class KNNR

} // ns

Training data:


# diabetes_norm_train_342.txt
# cols [0] to [9] predictors. col [10] target
# norm division constants:
# 100, -1, 100, 1000, 1000, 1000, 100, 10, 10, 1000, 1000
#
0.5900, 1.0000, 0.3210, 0.1010, 0.1570, 0.0932, 0.3800, 0.4000, 0.4860, 0.0870, 0.1510
0.4800, 0.0000, 0.2160, 0.0870, 0.1830, 0.1032, 0.7000, 0.3000, 0.3892, 0.0690, 0.0750
0.7200, 1.0000, 0.3050, 0.0930, 0.1560, 0.0936, 0.4100, 0.4000, 0.4673, 0.0850, 0.1410
0.2400, 0.0000, 0.2530, 0.0840, 0.1980, 0.1314, 0.4000, 0.5000, 0.4890, 0.0890, 0.2060
0.5000, 0.0000, 0.2300, 0.1010, 0.1920, 0.1254, 0.5200, 0.4000, 0.4291, 0.0800, 0.1350
0.2300, 0.0000, 0.2260, 0.0890, 0.1390, 0.0648, 0.6100, 0.2000, 0.4190, 0.0680, 0.0970
0.3600, 1.0000, 0.2200, 0.0900, 0.1600, 0.0996, 0.5000, 0.3000, 0.3951, 0.0820, 0.1380
0.6600, 1.0000, 0.2620, 0.1140, 0.2550, 0.1850, 0.5600, 0.4550, 0.4249, 0.0920, 0.0630
0.6000, 1.0000, 0.3210, 0.0830, 0.1790, 0.1194, 0.4200, 0.4000, 0.4477, 0.0940, 0.1100
0.2900, 0.0000, 0.3000, 0.0850, 0.1800, 0.0934, 0.4300, 0.4000, 0.5385, 0.0880, 0.3100
0.2200, 0.0000, 0.1860, 0.0970, 0.1140, 0.0576, 0.4600, 0.2000, 0.3951, 0.0830, 0.1010
0.5600, 1.0000, 0.2800, 0.0850, 0.1840, 0.1448, 0.3200, 0.6000, 0.3584, 0.0770, 0.0690
0.5300, 0.0000, 0.2370, 0.0920, 0.1860, 0.1092, 0.6200, 0.3000, 0.4304, 0.0810, 0.1790
0.5000, 1.0000, 0.2620, 0.0970, 0.1860, 0.1054, 0.4900, 0.4000, 0.5063, 0.0880, 0.1850
0.6100, 0.0000, 0.2400, 0.0910, 0.2020, 0.1154, 0.7200, 0.3000, 0.4291, 0.0730, 0.1180
0.3400, 1.0000, 0.2470, 0.1180, 0.2540, 0.1842, 0.3900, 0.7000, 0.5037, 0.0810, 0.1710
0.4700, 0.0000, 0.3030, 0.1090, 0.2070, 0.1002, 0.7000, 0.3000, 0.5215, 0.0980, 0.1660
0.6800, 1.0000, 0.2750, 0.1110, 0.2140, 0.1470, 0.3900, 0.5000, 0.4942, 0.0910, 0.1440
0.3800, 0.0000, 0.2540, 0.0840, 0.1620, 0.1030, 0.4200, 0.4000, 0.4443, 0.0870, 0.0970
0.4100, 0.0000, 0.2470, 0.0830, 0.1870, 0.1082, 0.6000, 0.3000, 0.4543, 0.0780, 0.1680
0.3500, 0.0000, 0.2110, 0.0820, 0.1560, 0.0878, 0.5000, 0.3000, 0.4511, 0.0950, 0.0680
0.2500, 1.0000, 0.2430, 0.0950, 0.1620, 0.0986, 0.5400, 0.3000, 0.3850, 0.0870, 0.0490
0.2500, 0.0000, 0.2600, 0.0920, 0.1870, 0.1204, 0.5600, 0.3000, 0.3970, 0.0880, 0.0680
0.6100, 1.0000, 0.3200, 0.1037, 0.2100, 0.0852, 0.3500, 0.6000, 0.6107, 0.1240, 0.2450
0.3100, 0.0000, 0.2970, 0.0880, 0.1670, 0.1034, 0.4800, 0.4000, 0.4357, 0.0780, 0.1840
0.3000, 1.0000, 0.2520, 0.0830, 0.1780, 0.1184, 0.3400, 0.5000, 0.4852, 0.0830, 0.2020
0.1900, 0.0000, 0.1920, 0.0870, 0.1240, 0.0540, 0.5700, 0.2000, 0.4174, 0.0900, 0.1370
0.4200, 0.0000, 0.3190, 0.0830, 0.1580, 0.0876, 0.5300, 0.3000, 0.4466, 0.1010, 0.0850
0.6300, 0.0000, 0.2440, 0.0730, 0.1600, 0.0914, 0.4800, 0.3000, 0.4635, 0.0780, 0.1310
0.6700, 1.0000, 0.2580, 0.1130, 0.1580, 0.0542, 0.6400, 0.2000, 0.5293, 0.1040, 0.2830
0.3200, 0.0000, 0.3050, 0.0890, 0.1820, 0.1106, 0.5600, 0.3000, 0.4344, 0.0890, 0.1290
0.4200, 0.0000, 0.2030, 0.0710, 0.1610, 0.0812, 0.6600, 0.2000, 0.4234, 0.0810, 0.0590
0.5800, 1.0000, 0.3800, 0.1030, 0.1500, 0.1072, 0.2200, 0.7000, 0.4644, 0.0980, 0.3410
0.5700, 0.0000, 0.2170, 0.0940, 0.1570, 0.0580, 0.8200, 0.2000, 0.4443, 0.0920, 0.0870
0.5300, 0.0000, 0.2050, 0.0780, 0.1470, 0.0842, 0.5200, 0.3000, 0.3989, 0.0750, 0.0650
0.6200, 1.0000, 0.2350, 0.0803, 0.2250, 0.1128, 0.8600, 0.2620, 0.4875, 0.0960, 0.1020
0.5200, 0.0000, 0.2850, 0.1100, 0.1950, 0.0972, 0.6000, 0.3000, 0.5242, 0.0850, 0.2650
0.4600, 0.0000, 0.2740, 0.0780, 0.1710, 0.0880, 0.5800, 0.3000, 0.4828, 0.0900, 0.2760
0.4800, 1.0000, 0.3300, 0.1230, 0.2530, 0.1636, 0.4400, 0.6000, 0.5425, 0.0970, 0.2520
0.4800, 1.0000, 0.2770, 0.0730, 0.1910, 0.1194, 0.4600, 0.4000, 0.4852, 0.0920, 0.0900
0.5000, 1.0000, 0.2560, 0.1010, 0.2290, 0.1622, 0.4300, 0.5000, 0.4779, 0.1140, 0.1000
0.2100, 0.0000, 0.2010, 0.0630, 0.1350, 0.0690, 0.5400, 0.3000, 0.4094, 0.0890, 0.0550
0.3200, 1.0000, 0.2540, 0.0903, 0.1530, 0.1004, 0.3400, 0.4500, 0.4533, 0.0830, 0.0610
0.5400, 0.0000, 0.2420, 0.0740, 0.2040, 0.1090, 0.8200, 0.2000, 0.4174, 0.1090, 0.0920
0.6100, 1.0000, 0.3270, 0.0970, 0.1770, 0.1184, 0.2900, 0.6000, 0.4997, 0.0870, 0.2590
0.5600, 1.0000, 0.2310, 0.1040, 0.1810, 0.1164, 0.4700, 0.4000, 0.4477, 0.0790, 0.0530
0.3300, 0.0000, 0.2530, 0.0850, 0.1550, 0.0850, 0.5100, 0.3000, 0.4554, 0.0700, 0.1900
0.2700, 0.0000, 0.1960, 0.0780, 0.1280, 0.0680, 0.4300, 0.3000, 0.4443, 0.0710, 0.1420
0.6700, 1.0000, 0.2250, 0.0980, 0.1910, 0.1192, 0.6100, 0.3000, 0.3989, 0.0860, 0.0750
0.3700, 1.0000, 0.2770, 0.0930, 0.1800, 0.1194, 0.3000, 0.6000, 0.5030, 0.0880, 0.1420
0.5800, 0.0000, 0.2570, 0.0990, 0.1570, 0.0916, 0.4900, 0.3000, 0.4407, 0.0930, 0.1550
0.6500, 1.0000, 0.2790, 0.1030, 0.1590, 0.0968, 0.4200, 0.4000, 0.4615, 0.0860, 0.2250
0.3400, 0.0000, 0.2550, 0.0930, 0.2180, 0.1440, 0.5700, 0.4000, 0.4443, 0.0880, 0.0590
0.4600, 0.0000, 0.2490, 0.1150, 0.1980, 0.1296, 0.5400, 0.4000, 0.4277, 0.1030, 0.1040
0.3500, 0.0000, 0.2870, 0.0970, 0.2040, 0.1268, 0.6400, 0.3000, 0.4190, 0.0930, 0.1820
0.3700, 0.0000, 0.2180, 0.0840, 0.1840, 0.1010, 0.7300, 0.3000, 0.3912, 0.0930, 0.1280
0.3700, 0.0000, 0.3020, 0.0870, 0.1660, 0.0960, 0.4000, 0.4150, 0.5011, 0.0870, 0.0520
0.4100, 0.0000, 0.2050, 0.0800, 0.1240, 0.0488, 0.6400, 0.2000, 0.4025, 0.0750, 0.0370
0.6000, 0.0000, 0.2040, 0.1050, 0.1980, 0.0784, 0.9900, 0.2000, 0.4635, 0.0790, 0.1700
0.6600, 1.0000, 0.2400, 0.0980, 0.2360, 0.1464, 0.5800, 0.4000, 0.5063, 0.0960, 0.1700
0.2900, 0.0000, 0.2600, 0.0830, 0.1410, 0.0652, 0.6400, 0.2000, 0.4078, 0.0830, 0.0610
0.3700, 1.0000, 0.2680, 0.0790, 0.1570, 0.0980, 0.2800, 0.6000, 0.5043, 0.0960, 0.1440
0.4100, 1.0000, 0.2570, 0.0830, 0.1810, 0.1066, 0.6600, 0.3000, 0.3738, 0.0850, 0.0520
0.3900, 0.0000, 0.2290, 0.0770, 0.2040, 0.1432, 0.4600, 0.4000, 0.4304, 0.0740, 0.1280
0.6700, 1.0000, 0.2400, 0.0830, 0.1430, 0.0772, 0.4900, 0.3000, 0.4431, 0.0940, 0.0710
0.3600, 1.0000, 0.2410, 0.1120, 0.1930, 0.1250, 0.3500, 0.6000, 0.5106, 0.0950, 0.1630
0.4600, 1.0000, 0.2470, 0.0850, 0.1740, 0.1232, 0.3000, 0.6000, 0.4644, 0.0960, 0.1500
0.6000, 1.0000, 0.2500, 0.0897, 0.1850, 0.1208, 0.4600, 0.4020, 0.4511, 0.0920, 0.0970
0.5900, 1.0000, 0.2360, 0.0830, 0.1650, 0.1000, 0.4700, 0.4000, 0.4500, 0.0920, 0.1600
0.5300, 0.0000, 0.2210, 0.0930, 0.1340, 0.0762, 0.4600, 0.3000, 0.4078, 0.0960, 0.1780
0.4800, 0.0000, 0.1990, 0.0910, 0.1890, 0.1096, 0.6900, 0.3000, 0.3951, 0.1010, 0.0480
0.4800, 0.0000, 0.2950, 0.1310, 0.2070, 0.1322, 0.4700, 0.4000, 0.4935, 0.1060, 0.2700
0.6600, 1.0000, 0.2600, 0.0910, 0.2640, 0.1466, 0.6500, 0.4000, 0.5568, 0.0870, 0.2020
0.5200, 1.0000, 0.2450, 0.0940, 0.2170, 0.1494, 0.4800, 0.5000, 0.4585, 0.0890, 0.1110
0.5200, 1.0000, 0.2660, 0.1110, 0.2090, 0.1264, 0.6100, 0.3000, 0.4682, 0.1090, 0.0850
0.4600, 1.0000, 0.2350, 0.0870, 0.1810, 0.1148, 0.4400, 0.4000, 0.4710, 0.0980, 0.0420
0.4000, 1.0000, 0.2900, 0.1150, 0.0970, 0.0472, 0.3500, 0.2770, 0.4304, 0.0950, 0.1700
0.2200, 0.0000, 0.2300, 0.0730, 0.1610, 0.0978, 0.5400, 0.3000, 0.3829, 0.0910, 0.2000
0.5000, 0.0000, 0.2100, 0.0880, 0.1400, 0.0718, 0.3500, 0.4000, 0.5112, 0.0710, 0.2520
0.2000, 0.0000, 0.2290, 0.0870, 0.1910, 0.1282, 0.5300, 0.4000, 0.3892, 0.0850, 0.1130
0.6800, 0.0000, 0.2750, 0.1070, 0.2410, 0.1496, 0.6400, 0.4000, 0.4920, 0.0900, 0.1430
0.5200, 1.0000, 0.2430, 0.0860, 0.1970, 0.1336, 0.4400, 0.5000, 0.4575, 0.0910, 0.0510
0.4400, 0.0000, 0.2310, 0.0870, 0.2130, 0.1264, 0.7700, 0.3000, 0.3871, 0.0720, 0.0520
0.3800, 0.0000, 0.2730, 0.0810, 0.1460, 0.0816, 0.4700, 0.3000, 0.4466, 0.0810, 0.2100
0.4900, 0.0000, 0.2270, 0.0653, 0.1680, 0.0962, 0.6200, 0.2710, 0.3892, 0.0600, 0.0650
0.6100, 0.0000, 0.3300, 0.0950, 0.1820, 0.1148, 0.5400, 0.3000, 0.4190, 0.0740, 0.1410
0.2900, 1.0000, 0.1940, 0.0830, 0.1520, 0.1058, 0.3900, 0.4000, 0.3584, 0.0830, 0.0550
0.6100, 0.0000, 0.2580, 0.0980, 0.2350, 0.1258, 0.7600, 0.3000, 0.5112, 0.0820, 0.1340
0.3400, 1.0000, 0.2260, 0.0750, 0.1660, 0.0918, 0.6000, 0.3000, 0.4263, 0.1080, 0.0420
0.3600, 0.0000, 0.2190, 0.0890, 0.1890, 0.1052, 0.6800, 0.3000, 0.4369, 0.0960, 0.1110
0.5200, 0.0000, 0.2400, 0.0830, 0.1670, 0.0866, 0.7100, 0.2000, 0.3850, 0.0940, 0.0980
0.6100, 0.0000, 0.3120, 0.0790, 0.2350, 0.1568, 0.4700, 0.5000, 0.5050, 0.0960, 0.1640
0.4300, 0.0000, 0.2680, 0.1230, 0.1930, 0.1022, 0.6700, 0.3000, 0.4779, 0.0940, 0.0480
0.3500, 0.0000, 0.2040, 0.0650, 0.1870, 0.1056, 0.6700, 0.2790, 0.4277, 0.0780, 0.0960
0.2700, 0.0000, 0.2480, 0.0910, 0.1890, 0.1068, 0.6900, 0.3000, 0.4190, 0.0690, 0.0900
0.2900, 0.0000, 0.2100, 0.0710, 0.1560, 0.0970, 0.3800, 0.4000, 0.4654, 0.0900, 0.1620
0.6400, 1.0000, 0.2730, 0.1090, 0.1860, 0.1076, 0.3800, 0.5000, 0.5308, 0.0990, 0.1500
0.4100, 0.0000, 0.3460, 0.0873, 0.2050, 0.1426, 0.4100, 0.5000, 0.4673, 0.1100, 0.2790
0.4900, 1.0000, 0.2590, 0.0910, 0.1780, 0.1066, 0.5200, 0.3000, 0.4575, 0.0750, 0.0920
0.4800, 0.0000, 0.2040, 0.0980, 0.2090, 0.1394, 0.4600, 0.5000, 0.4771, 0.0780, 0.0830
0.5300, 0.0000, 0.2800, 0.0880, 0.2330, 0.1438, 0.5800, 0.4000, 0.5050, 0.0910, 0.1280
0.5300, 1.0000, 0.2220, 0.1130, 0.1970, 0.1152, 0.6700, 0.3000, 0.4304, 0.1000, 0.1020
0.2300, 0.0000, 0.2900, 0.0900, 0.2160, 0.1314, 0.6500, 0.3000, 0.4585, 0.0910, 0.3020
0.6500, 1.0000, 0.3020, 0.0980, 0.2190, 0.1606, 0.4000, 0.5000, 0.4522, 0.0840, 0.1980
0.4100, 0.0000, 0.3240, 0.0940, 0.1710, 0.1044, 0.5600, 0.3000, 0.3970, 0.0760, 0.0950
0.5500, 1.0000, 0.2340, 0.0830, 0.1660, 0.1016, 0.4600, 0.4000, 0.4522, 0.0960, 0.0530
0.2200, 0.0000, 0.1930, 0.0820, 0.1560, 0.0932, 0.5200, 0.3000, 0.3989, 0.0710, 0.1340
0.5600, 0.0000, 0.3100, 0.0787, 0.1870, 0.1414, 0.3400, 0.5500, 0.4060, 0.0900, 0.1440
0.5400, 1.0000, 0.3060, 0.1033, 0.1440, 0.0798, 0.3000, 0.4800, 0.5142, 0.1010, 0.2320
0.5900, 1.0000, 0.2550, 0.0953, 0.1900, 0.1394, 0.3500, 0.5430, 0.4357, 0.1170, 0.0810
0.6000, 1.0000, 0.2340, 0.0880, 0.1530, 0.0898, 0.5800, 0.3000, 0.3258, 0.0950, 0.1040
0.5400, 0.0000, 0.2680, 0.0870, 0.2060, 0.1220, 0.6800, 0.3000, 0.4382, 0.0800, 0.0590
0.2500, 0.0000, 0.2830, 0.0870, 0.1930, 0.1280, 0.4900, 0.4000, 0.4382, 0.0920, 0.2460
0.5400, 1.0000, 0.2770, 0.1130, 0.2000, 0.1284, 0.3700, 0.5000, 0.5153, 0.1130, 0.2970
0.5500, 0.0000, 0.3660, 0.1130, 0.1990, 0.0944, 0.4300, 0.4630, 0.5730, 0.0970, 0.2580
0.4000, 1.0000, 0.2650, 0.0930, 0.2360, 0.1470, 0.3700, 0.7000, 0.5561, 0.0920, 0.2290
0.6200, 1.0000, 0.3180, 0.1150, 0.1990, 0.1286, 0.4400, 0.5000, 0.4883, 0.0980, 0.2750
0.6500, 0.0000, 0.2440, 0.1200, 0.2220, 0.1356, 0.3700, 0.6000, 0.5509, 0.1240, 0.2810
0.3300, 1.0000, 0.2540, 0.1020, 0.2060, 0.1410, 0.3900, 0.5000, 0.4868, 0.1050, 0.1790
0.5300, 0.0000, 0.2200, 0.0940, 0.1750, 0.0880, 0.5900, 0.3000, 0.4942, 0.0980, 0.2000
0.3500, 0.0000, 0.2680, 0.0980, 0.1620, 0.1036, 0.4500, 0.4000, 0.4205, 0.0860, 0.2000
0.6600, 0.0000, 0.2800, 0.1010, 0.1950, 0.1292, 0.4000, 0.5000, 0.4860, 0.0940, 0.1730
0.6200, 1.0000, 0.3390, 0.1010, 0.2210, 0.1564, 0.3500, 0.6000, 0.4997, 0.1030, 0.1800
0.5000, 1.0000, 0.2960, 0.0943, 0.3000, 0.2424, 0.3300, 0.9090, 0.4812, 0.1090, 0.0840
0.4700, 0.0000, 0.2860, 0.0970, 0.1640, 0.0906, 0.5600, 0.3000, 0.4466, 0.0880, 0.1210
0.4700, 1.0000, 0.2560, 0.0940, 0.1650, 0.0748, 0.4000, 0.4000, 0.5526, 0.0930, 0.1610
0.2400, 0.0000, 0.2070, 0.0870, 0.1490, 0.0806, 0.6100, 0.2000, 0.3611, 0.0780, 0.0990
0.5800, 1.0000, 0.2620, 0.0910, 0.2170, 0.1242, 0.7100, 0.3000, 0.4691, 0.0680, 0.1090
0.3400, 0.0000, 0.2060, 0.0870, 0.1850, 0.1122, 0.5800, 0.3000, 0.4304, 0.0740, 0.1150
0.5100, 0.0000, 0.2790, 0.0960, 0.1960, 0.1222, 0.4200, 0.5000, 0.5069, 0.1200, 0.2680
0.3100, 1.0000, 0.3530, 0.1250, 0.1870, 0.1124, 0.4800, 0.4000, 0.4890, 0.1090, 0.2740
0.2200, 0.0000, 0.1990, 0.0750, 0.1750, 0.1086, 0.5400, 0.3000, 0.4127, 0.0720, 0.1580
0.5300, 1.0000, 0.2440, 0.0920, 0.2140, 0.1460, 0.5000, 0.4000, 0.4500, 0.0970, 0.1070
0.3700, 1.0000, 0.2140, 0.0830, 0.1280, 0.0696, 0.4900, 0.3000, 0.3850, 0.0840, 0.0830
0.2800, 0.0000, 0.3040, 0.0850, 0.1980, 0.1156, 0.6700, 0.3000, 0.4344, 0.0800, 0.1030
0.4700, 0.0000, 0.3160, 0.0840, 0.1540, 0.0880, 0.3000, 0.5100, 0.5199, 0.1050, 0.2720
0.2300, 0.0000, 0.1880, 0.0780, 0.1450, 0.0720, 0.6300, 0.2000, 0.3912, 0.0860, 0.0850
0.5000, 0.0000, 0.3100, 0.1230, 0.1780, 0.1050, 0.4800, 0.4000, 0.4828, 0.0880, 0.2800
0.5800, 1.0000, 0.3670, 0.1170, 0.1660, 0.0938, 0.4400, 0.4000, 0.4949, 0.1090, 0.3360
0.5500, 0.0000, 0.3210, 0.1100, 0.1640, 0.0842, 0.4200, 0.4000, 0.5242, 0.0900, 0.2810
0.6000, 1.0000, 0.2770, 0.1070, 0.1670, 0.1146, 0.3800, 0.4000, 0.4277, 0.0950, 0.1180
0.4100, 0.0000, 0.3080, 0.0810, 0.2140, 0.1520, 0.2800, 0.7600, 0.5136, 0.1230, 0.3170
0.6000, 1.0000, 0.2750, 0.1060, 0.2290, 0.1438, 0.5100, 0.4000, 0.5142, 0.0910, 0.2350
0.4000, 0.0000, 0.2690, 0.0920, 0.2030, 0.1198, 0.7000, 0.3000, 0.4190, 0.0810, 0.0600
0.5700, 1.0000, 0.3070, 0.0900, 0.2040, 0.1478, 0.3400, 0.6000, 0.4710, 0.0930, 0.1740
0.3700, 0.0000, 0.3830, 0.1130, 0.1650, 0.0946, 0.5300, 0.3000, 0.4466, 0.0790, 0.2590
0.4000, 1.0000, 0.3190, 0.0950, 0.1980, 0.1356, 0.3800, 0.5000, 0.4804, 0.0930, 0.1780
0.3300, 0.0000, 0.3500, 0.0890, 0.2000, 0.1304, 0.4200, 0.4760, 0.4927, 0.1010, 0.1280
0.3200, 1.0000, 0.2780, 0.0890, 0.2160, 0.1462, 0.5500, 0.4000, 0.4304, 0.0910, 0.0960
0.3500, 1.0000, 0.2590, 0.0810, 0.1740, 0.1024, 0.3100, 0.6000, 0.5313, 0.0820, 0.1260
0.5500, 0.0000, 0.3290, 0.1020, 0.1640, 0.1062, 0.4100, 0.4000, 0.4431, 0.0890, 0.2880
0.4900, 0.0000, 0.2600, 0.0930, 0.1830, 0.1002, 0.6400, 0.3000, 0.4543, 0.0880, 0.0880
0.3900, 1.0000, 0.2630, 0.1150, 0.2180, 0.1582, 0.3200, 0.7000, 0.4935, 0.1090, 0.2920
0.6000, 1.0000, 0.2230, 0.1130, 0.1860, 0.1258, 0.4600, 0.4000, 0.4263, 0.0940, 0.0710
0.6700, 1.0000, 0.2830, 0.0930, 0.2040, 0.1322, 0.4900, 0.4000, 0.4736, 0.0920, 0.1970
0.4100, 1.0000, 0.3200, 0.1090, 0.2510, 0.1706, 0.4900, 0.5000, 0.5056, 0.1030, 0.1860
0.4400, 0.0000, 0.2540, 0.0950, 0.1620, 0.0926, 0.5300, 0.3000, 0.4407, 0.0830, 0.0250
0.4800, 1.0000, 0.2330, 0.0893, 0.2120, 0.1428, 0.4600, 0.4610, 0.4754, 0.0980, 0.0840
0.4500, 0.0000, 0.2030, 0.0743, 0.1900, 0.1262, 0.4900, 0.3880, 0.4304, 0.0790, 0.0960
0.4700, 0.0000, 0.3040, 0.1200, 0.1990, 0.1200, 0.4600, 0.4000, 0.5106, 0.0870, 0.1950
0.4600, 0.0000, 0.2060, 0.0730, 0.1720, 0.1070, 0.5100, 0.3000, 0.4249, 0.0800, 0.0530
0.3600, 1.0000, 0.3230, 0.1150, 0.2860, 0.1994, 0.3900, 0.7000, 0.5472, 0.1120, 0.2170
0.3400, 0.0000, 0.2920, 0.0730, 0.1720, 0.1082, 0.4900, 0.4000, 0.4304, 0.0910, 0.1720
0.5300, 1.0000, 0.3310, 0.1170, 0.1830, 0.1190, 0.4800, 0.4000, 0.4382, 0.1060, 0.1310
0.6100, 0.0000, 0.2460, 0.1010, 0.2090, 0.1068, 0.7700, 0.3000, 0.4836, 0.0880, 0.2140
0.3700, 0.0000, 0.2020, 0.0810, 0.1620, 0.0878, 0.6300, 0.3000, 0.4025, 0.0880, 0.0590
0.3300, 1.0000, 0.2080, 0.0840, 0.1250, 0.0702, 0.4600, 0.3000, 0.3784, 0.0660, 0.0700
0.6800, 0.0000, 0.3280, 0.1057, 0.2050, 0.1164, 0.4000, 0.5130, 0.5493, 0.1170, 0.2200
0.4900, 1.0000, 0.3190, 0.0940, 0.2340, 0.1558, 0.3400, 0.7000, 0.5398, 0.1220, 0.2680
0.4800, 0.0000, 0.2390, 0.1090, 0.2320, 0.1052, 0.3700, 0.6000, 0.6107, 0.0960, 0.1520
0.5500, 1.0000, 0.2450, 0.0840, 0.1790, 0.1058, 0.6600, 0.3000, 0.3584, 0.0870, 0.0470
0.4300, 0.0000, 0.2210, 0.0660, 0.1340, 0.0772, 0.4500, 0.3000, 0.4078, 0.0800, 0.0740
0.6000, 1.0000, 0.3300, 0.0970, 0.2170, 0.1256, 0.4500, 0.5000, 0.5447, 0.1120, 0.2950
0.3100, 1.0000, 0.1900, 0.0930, 0.1370, 0.0730, 0.4700, 0.3000, 0.4443, 0.0780, 0.1010
0.5300, 1.0000, 0.2730, 0.0820, 0.1190, 0.0550, 0.3900, 0.3000, 0.4828, 0.0930, 0.1510
0.6700, 0.0000, 0.2280, 0.0870, 0.1660, 0.0986, 0.5200, 0.3000, 0.4344, 0.0920, 0.1270
0.6100, 1.0000, 0.2820, 0.1060, 0.2040, 0.1320, 0.5200, 0.4000, 0.4605, 0.0960, 0.2370
0.6200, 0.0000, 0.2890, 0.0873, 0.2060, 0.1272, 0.3300, 0.6240, 0.5434, 0.0990, 0.2250
0.6000, 0.0000, 0.2560, 0.0870, 0.2070, 0.1258, 0.6900, 0.3000, 0.4111, 0.0840, 0.0810
0.4200, 0.0000, 0.2490, 0.0910, 0.2040, 0.1418, 0.3800, 0.5000, 0.4796, 0.0890, 0.1510
0.3800, 1.0000, 0.2680, 0.1050, 0.1810, 0.1192, 0.3700, 0.5000, 0.4820, 0.0910, 0.1070
0.6200, 0.0000, 0.2240, 0.0790, 0.2220, 0.1474, 0.5900, 0.4000, 0.4357, 0.0760, 0.0640
0.6100, 1.0000, 0.2690, 0.1110, 0.2360, 0.1724, 0.3900, 0.6000, 0.4812, 0.0890, 0.1380
0.6100, 1.0000, 0.2310, 0.1130, 0.1860, 0.1144, 0.4700, 0.4000, 0.4812, 0.1050, 0.1850
0.5300, 0.0000, 0.2860, 0.0880, 0.1710, 0.0988, 0.4100, 0.4000, 0.5050, 0.0990, 0.2650
0.2800, 1.0000, 0.2470, 0.0970, 0.1750, 0.0996, 0.3200, 0.5000, 0.5380, 0.0870, 0.1010
0.2600, 1.0000, 0.3030, 0.0890, 0.2180, 0.1522, 0.3100, 0.7000, 0.5159, 0.0820, 0.1370
0.3000, 0.0000, 0.2130, 0.0870, 0.1340, 0.0630, 0.6300, 0.2000, 0.3689, 0.0660, 0.1430
0.5000, 0.0000, 0.2610, 0.1090, 0.2430, 0.1606, 0.6200, 0.4000, 0.4625, 0.0890, 0.1410
0.4800, 0.0000, 0.2020, 0.0950, 0.1870, 0.1174, 0.5300, 0.4000, 0.4419, 0.0850, 0.0790
0.5100, 0.0000, 0.2520, 0.1030, 0.1760, 0.1122, 0.3700, 0.5000, 0.4898, 0.0900, 0.2920
0.4700, 1.0000, 0.2250, 0.0820, 0.1310, 0.0668, 0.4100, 0.3000, 0.4754, 0.0890, 0.1780
0.6400, 1.0000, 0.2350, 0.0970, 0.2030, 0.1290, 0.5900, 0.3000, 0.4318, 0.0770, 0.0910
0.5100, 1.0000, 0.2590, 0.0760, 0.2400, 0.1690, 0.3900, 0.6000, 0.5075, 0.0960, 0.1160
0.3000, 0.0000, 0.2090, 0.1040, 0.1520, 0.0838, 0.4700, 0.3000, 0.4663, 0.0970, 0.0860
0.5600, 1.0000, 0.2870, 0.0990, 0.2080, 0.1464, 0.3900, 0.5000, 0.4727, 0.0970, 0.1220
0.4200, 0.0000, 0.2210, 0.0850, 0.2130, 0.1386, 0.6000, 0.4000, 0.4277, 0.0940, 0.0720
0.6200, 1.0000, 0.2670, 0.1150, 0.1830, 0.1240, 0.3500, 0.5000, 0.4788, 0.1000, 0.1290
0.3400, 0.0000, 0.3140, 0.0870, 0.1490, 0.0938, 0.4600, 0.3000, 0.3829, 0.0770, 0.1420
0.6000, 0.0000, 0.2220, 0.1047, 0.2210, 0.1054, 0.6000, 0.3680, 0.5628, 0.0930, 0.0900
0.6400, 0.0000, 0.2100, 0.0923, 0.2270, 0.1468, 0.6500, 0.3490, 0.4331, 0.1020, 0.1580
0.3900, 1.0000, 0.2120, 0.0900, 0.1820, 0.1104, 0.6000, 0.3000, 0.4060, 0.0980, 0.0390
0.7100, 1.0000, 0.2650, 0.1050, 0.2810, 0.1736, 0.5500, 0.5000, 0.5568, 0.0840, 0.1960
0.4800, 1.0000, 0.2920, 0.1100, 0.2180, 0.1516, 0.3900, 0.6000, 0.4920, 0.0980, 0.2220
0.7900, 1.0000, 0.2700, 0.1030, 0.1690, 0.1108, 0.3700, 0.5000, 0.4663, 0.1100, 0.2770
0.4000, 0.0000, 0.3070, 0.0990, 0.1770, 0.0854, 0.5000, 0.4000, 0.5338, 0.0850, 0.0990
0.4900, 1.0000, 0.2880, 0.0920, 0.2070, 0.1400, 0.4400, 0.5000, 0.4745, 0.0920, 0.1960
0.5100, 0.0000, 0.3060, 0.1030, 0.1980, 0.1066, 0.5700, 0.3000, 0.5148, 0.1000, 0.2020
0.5700, 0.0000, 0.3010, 0.1170, 0.2020, 0.1396, 0.4200, 0.5000, 0.4625, 0.1200, 0.1550
0.5900, 1.0000, 0.2470, 0.1140, 0.1520, 0.1048, 0.2900, 0.5000, 0.4511, 0.0880, 0.0770
0.5100, 0.0000, 0.2770, 0.0990, 0.2290, 0.1456, 0.6900, 0.3000, 0.4277, 0.0770, 0.1910
0.7400, 0.0000, 0.2980, 0.1010, 0.1710, 0.1048, 0.5000, 0.3000, 0.4394, 0.0860, 0.0700
0.6700, 0.0000, 0.2670, 0.1050, 0.2250, 0.1354, 0.6900, 0.3000, 0.4635, 0.0960, 0.0730
0.4900, 0.0000, 0.1980, 0.0880, 0.1880, 0.1148, 0.5700, 0.3000, 0.4394, 0.0930, 0.0490
0.5700, 0.0000, 0.2330, 0.0880, 0.1550, 0.0636, 0.7800, 0.2000, 0.4205, 0.0780, 0.0650
0.5600, 1.0000, 0.3510, 0.1230, 0.1640, 0.0950, 0.3800, 0.4000, 0.5043, 0.1170, 0.2630
0.5200, 1.0000, 0.2970, 0.1090, 0.2280, 0.1628, 0.3100, 0.8000, 0.5142, 0.1030, 0.2480
0.6900, 0.0000, 0.2930, 0.1240, 0.2230, 0.1390, 0.5400, 0.4000, 0.5011, 0.1020, 0.2960
0.3700, 0.0000, 0.2030, 0.0830, 0.1850, 0.1246, 0.3800, 0.5000, 0.4719, 0.0880, 0.2140
0.2400, 0.0000, 0.2250, 0.0890, 0.1410, 0.0680, 0.5200, 0.3000, 0.4654, 0.0840, 0.1850
0.5500, 1.0000, 0.2270, 0.0930, 0.1540, 0.0942, 0.5300, 0.3000, 0.3526, 0.0750, 0.0780
0.3600, 0.0000, 0.2280, 0.0870, 0.1780, 0.1160, 0.4100, 0.4000, 0.4654, 0.0820, 0.0930
0.4200, 1.0000, 0.2400, 0.1070, 0.1500, 0.0850, 0.4400, 0.3000, 0.4654, 0.0960, 0.2520
0.2100, 0.0000, 0.2420, 0.0760, 0.1470, 0.0770, 0.5300, 0.3000, 0.4443, 0.0790, 0.1500
0.4100, 0.0000, 0.2020, 0.0620, 0.1530, 0.0890, 0.5000, 0.3000, 0.4249, 0.0890, 0.0770
0.5700, 1.0000, 0.2940, 0.1090, 0.1600, 0.0876, 0.3100, 0.5000, 0.5333, 0.0920, 0.2080
0.2000, 1.0000, 0.2210, 0.0870, 0.1710, 0.0996, 0.5800, 0.3000, 0.4205, 0.0780, 0.0770
0.6700, 1.0000, 0.2360, 0.1113, 0.1890, 0.1054, 0.7000, 0.2700, 0.4220, 0.0930, 0.1080
0.3400, 0.0000, 0.2520, 0.0770, 0.1890, 0.1206, 0.5300, 0.4000, 0.4344, 0.0790, 0.1600
0.4100, 1.0000, 0.2490, 0.0860, 0.1920, 0.1150, 0.6100, 0.3000, 0.4382, 0.0940, 0.0530
0.3800, 1.0000, 0.3300, 0.0780, 0.3010, 0.2150, 0.5000, 0.6020, 0.5193, 0.1080, 0.2200
0.5100, 0.0000, 0.2350, 0.1010, 0.1950, 0.1210, 0.5100, 0.4000, 0.4745, 0.0940, 0.1540
0.5200, 1.0000, 0.2640, 0.0913, 0.2180, 0.1520, 0.3900, 0.5590, 0.4905, 0.0990, 0.2590
0.6700, 0.0000, 0.2980, 0.0800, 0.1720, 0.0934, 0.6300, 0.3000, 0.4357, 0.0820, 0.0900
0.6100, 0.0000, 0.3000, 0.1080, 0.1940, 0.1000, 0.5200, 0.3730, 0.5347, 0.1050, 0.2460
0.6700, 1.0000, 0.2500, 0.1117, 0.1460, 0.0934, 0.3300, 0.4420, 0.4585, 0.1030, 0.1240
0.5600, 0.0000, 0.2700, 0.1050, 0.2470, 0.1606, 0.5400, 0.5000, 0.5088, 0.0940, 0.0670
0.6400, 0.0000, 0.2000, 0.0747, 0.1890, 0.1148, 0.6200, 0.3050, 0.4111, 0.0910, 0.0720
0.5800, 1.0000, 0.2550, 0.1120, 0.1630, 0.1106, 0.2900, 0.6000, 0.4762, 0.0860, 0.2570
0.5500, 0.0000, 0.2820, 0.0910, 0.2500, 0.1402, 0.6700, 0.4000, 0.5366, 0.1030, 0.2620
0.6200, 1.0000, 0.3330, 0.1140, 0.1820, 0.1140, 0.3800, 0.5000, 0.5011, 0.0960, 0.2750
0.5700, 1.0000, 0.2560, 0.0960, 0.2000, 0.1330, 0.5200, 0.3850, 0.4318, 0.1050, 0.1770
0.2000, 1.0000, 0.2420, 0.0880, 0.1260, 0.0722, 0.4500, 0.3000, 0.3784, 0.0740, 0.0710
0.5300, 1.0000, 0.2210, 0.0980, 0.1650, 0.1052, 0.4700, 0.4000, 0.4159, 0.0810, 0.0470
0.3200, 1.0000, 0.3140, 0.0890, 0.1530, 0.0842, 0.5600, 0.3000, 0.4159, 0.0900, 0.1870
0.4100, 0.0000, 0.2310, 0.0860, 0.1480, 0.0780, 0.5800, 0.3000, 0.4094, 0.0600, 0.1250
0.6000, 0.0000, 0.2340, 0.0767, 0.2470, 0.1480, 0.6500, 0.3800, 0.5136, 0.0770, 0.0780
0.2600, 0.0000, 0.1880, 0.0830, 0.1910, 0.1036, 0.6900, 0.3000, 0.4522, 0.0690, 0.0510
0.3700, 0.0000, 0.3080, 0.1120, 0.2820, 0.1972, 0.4300, 0.7000, 0.5342, 0.1010, 0.2580
0.4500, 0.0000, 0.3200, 0.1100, 0.2240, 0.1342, 0.4500, 0.5000, 0.5412, 0.0930, 0.2150
0.6700, 0.0000, 0.3160, 0.1160, 0.1790, 0.0904, 0.4100, 0.4000, 0.5472, 0.1000, 0.3030
0.3400, 1.0000, 0.3550, 0.1200, 0.2330, 0.1466, 0.3400, 0.7000, 0.5568, 0.1010, 0.2430
0.5000, 0.0000, 0.3190, 0.0783, 0.2070, 0.1492, 0.3800, 0.5450, 0.4595, 0.0840, 0.0910
0.7100, 0.0000, 0.2950, 0.0970, 0.2270, 0.1516, 0.4500, 0.5000, 0.5024, 0.1080, 0.1500
0.5700, 1.0000, 0.3160, 0.1170, 0.2250, 0.1076, 0.4000, 0.6000, 0.5958, 0.1130, 0.3100
0.4900, 0.0000, 0.2030, 0.0930, 0.1840, 0.1030, 0.6100, 0.3000, 0.4605, 0.0930, 0.1530
0.3500, 0.0000, 0.4130, 0.0810, 0.1680, 0.1028, 0.3700, 0.5000, 0.4949, 0.0940, 0.3460
0.4100, 1.0000, 0.2120, 0.1020, 0.1840, 0.1004, 0.6400, 0.3000, 0.4585, 0.0790, 0.0630
0.7000, 1.0000, 0.2410, 0.0823, 0.1940, 0.1492, 0.3100, 0.6260, 0.4234, 0.1050, 0.0890
0.5200, 0.0000, 0.2300, 0.1070, 0.1790, 0.1237, 0.4250, 0.4210, 0.4159, 0.0930, 0.0500
0.6000, 0.0000, 0.2560, 0.0780, 0.1950, 0.0954, 0.9100, 0.2000, 0.3761, 0.0870, 0.0390
0.6200, 0.0000, 0.2250, 0.1250, 0.2150, 0.0990, 0.9800, 0.2000, 0.4500, 0.0950, 0.1030
0.4400, 1.0000, 0.3820, 0.1230, 0.2010, 0.1266, 0.4400, 0.5000, 0.5024, 0.0920, 0.3080
0.2800, 1.0000, 0.1920, 0.0810, 0.1550, 0.0946, 0.5100, 0.3000, 0.3850, 0.0870, 0.1160
0.5800, 1.0000, 0.2900, 0.0850, 0.1560, 0.1092, 0.3600, 0.4000, 0.3989, 0.0860, 0.1450
0.3900, 1.0000, 0.2400, 0.0897, 0.1900, 0.1136, 0.5200, 0.3650, 0.4804, 0.1010, 0.0740
0.3400, 1.0000, 0.2060, 0.0980, 0.1830, 0.0920, 0.8300, 0.2000, 0.3689, 0.0920, 0.0450
0.6500, 0.0000, 0.2630, 0.0700, 0.2440, 0.1662, 0.5100, 0.5000, 0.4898, 0.0980, 0.1150
0.6600, 1.0000, 0.3460, 0.1150, 0.2040, 0.1394, 0.3600, 0.6000, 0.4963, 0.1090, 0.2640
0.5100, 0.0000, 0.2340, 0.0870, 0.2200, 0.1088, 0.9300, 0.2000, 0.4511, 0.0820, 0.0870
0.5000, 1.0000, 0.2920, 0.1190, 0.1620, 0.0852, 0.5400, 0.3000, 0.4736, 0.0950, 0.2020
0.5900, 1.0000, 0.2720, 0.1070, 0.1580, 0.1020, 0.3900, 0.4000, 0.4443, 0.0930, 0.1270
0.5200, 0.0000, 0.2700, 0.0783, 0.1340, 0.0730, 0.4400, 0.3050, 0.4443, 0.0690, 0.1820
0.6900, 1.0000, 0.2450, 0.1080, 0.2430, 0.1364, 0.4000, 0.6000, 0.5808, 0.1000, 0.2410
0.5300, 0.0000, 0.2410, 0.1050, 0.1840, 0.1134, 0.4600, 0.4000, 0.4812, 0.0950, 0.0660
0.4700, 1.0000, 0.2530, 0.0980, 0.1730, 0.1056, 0.4400, 0.4000, 0.4762, 0.1080, 0.0940
0.5200, 0.0000, 0.2880, 0.1130, 0.2800, 0.1740, 0.6700, 0.4000, 0.5273, 0.0860, 0.2830
0.3900, 0.0000, 0.2090, 0.0950, 0.1500, 0.0656, 0.6800, 0.2000, 0.4407, 0.0950, 0.0640
0.6700, 1.0000, 0.2300, 0.0700, 0.1840, 0.1280, 0.3500, 0.5000, 0.4654, 0.0990, 0.1020
0.5900, 1.0000, 0.2410, 0.0960, 0.1700, 0.0986, 0.5400, 0.3000, 0.4466, 0.0850, 0.2000
0.5100, 1.0000, 0.2810, 0.1060, 0.2020, 0.1222, 0.5500, 0.4000, 0.4820, 0.0870, 0.2650
0.2300, 1.0000, 0.1800, 0.0780, 0.1710, 0.0960, 0.4800, 0.4000, 0.4905, 0.0920, 0.0940
0.6800, 0.0000, 0.2590, 0.0930, 0.2530, 0.1812, 0.5300, 0.5000, 0.4543, 0.0980, 0.2300
0.4400, 0.0000, 0.2150, 0.0850, 0.1570, 0.0922, 0.5500, 0.3000, 0.3892, 0.0840, 0.1810
0.6000, 1.0000, 0.2430, 0.1030, 0.1410, 0.0866, 0.3300, 0.4000, 0.4673, 0.0780, 0.1560
0.5200, 0.0000, 0.2450, 0.0900, 0.1980, 0.1290, 0.2900, 0.7000, 0.5298, 0.0860, 0.2330
0.3800, 0.0000, 0.2130, 0.0720, 0.1650, 0.0602, 0.8800, 0.2000, 0.4431, 0.0900, 0.0600
0.6100, 0.0000, 0.2580, 0.0900, 0.2800, 0.1954, 0.5500, 0.5000, 0.4997, 0.0900, 0.2190
0.6800, 1.0000, 0.2480, 0.1010, 0.2210, 0.1514, 0.6000, 0.4000, 0.3871, 0.0870, 0.0800
0.2800, 1.0000, 0.3150, 0.0830, 0.2280, 0.1494, 0.3800, 0.6000, 0.5313, 0.0830, 0.0680
0.6500, 1.0000, 0.3350, 0.1020, 0.1900, 0.1262, 0.3500, 0.5000, 0.4970, 0.1020, 0.3320
0.6900, 0.0000, 0.2810, 0.1130, 0.2340, 0.1428, 0.5200, 0.4000, 0.5278, 0.0770, 0.2480
0.5100, 0.0000, 0.2430, 0.0853, 0.1530, 0.0716, 0.7100, 0.2150, 0.3951, 0.0820, 0.0840
0.2900, 0.0000, 0.3500, 0.0983, 0.2040, 0.1426, 0.5000, 0.4080, 0.4043, 0.0910, 0.2000
0.5500, 1.0000, 0.2350, 0.0930, 0.1770, 0.1268, 0.4100, 0.4000, 0.3829, 0.0830, 0.0550
0.3400, 1.0000, 0.3000, 0.0830, 0.1850, 0.1072, 0.5300, 0.3000, 0.4820, 0.0920, 0.0850
0.6700, 0.0000, 0.2070, 0.0830, 0.1700, 0.0998, 0.5900, 0.3000, 0.4025, 0.0770, 0.0890
0.4900, 0.0000, 0.2560, 0.0760, 0.1610, 0.0998, 0.5100, 0.3000, 0.3932, 0.0780, 0.0310
0.5500, 1.0000, 0.2290, 0.0810, 0.1230, 0.0672, 0.4100, 0.3000, 0.4304, 0.0880, 0.1290
0.5900, 1.0000, 0.2510, 0.0900, 0.1630, 0.1014, 0.4600, 0.4000, 0.4357, 0.0910, 0.0830
0.5300, 0.0000, 0.3320, 0.0827, 0.1860, 0.1068, 0.4600, 0.4040, 0.5112, 0.1020, 0.2750
0.4800, 1.0000, 0.2410, 0.1100, 0.2090, 0.1346, 0.5800, 0.4000, 0.4407, 0.1000, 0.0650
0.5200, 0.0000, 0.2950, 0.1043, 0.2110, 0.1328, 0.4900, 0.4310, 0.4984, 0.0980, 0.1980
0.6900, 0.0000, 0.2960, 0.1220, 0.2310, 0.1284, 0.5600, 0.4000, 0.5451, 0.0860, 0.2360
0.6000, 1.0000, 0.2280, 0.1100, 0.2450, 0.1898, 0.3900, 0.6000, 0.4394, 0.0880, 0.2530
0.4600, 1.0000, 0.2270, 0.0830, 0.1830, 0.1258, 0.3200, 0.6000, 0.4836, 0.0750, 0.1240
0.5100, 1.0000, 0.2620, 0.1010, 0.1610, 0.0996, 0.4800, 0.3000, 0.4205, 0.0880, 0.0440
0.6700, 1.0000, 0.2350, 0.0960, 0.2070, 0.1382, 0.4200, 0.5000, 0.4898, 0.1110, 0.1720
0.4900, 0.0000, 0.2210, 0.0850, 0.1360, 0.0634, 0.6200, 0.2190, 0.3970, 0.0720, 0.1140
0.4600, 1.0000, 0.2650, 0.0940, 0.2470, 0.1602, 0.5900, 0.4000, 0.4935, 0.1110, 0.1420
0.4700, 0.0000, 0.3240, 0.1050, 0.1880, 0.1250, 0.4600, 0.4090, 0.4443, 0.0990, 0.1090
0.7500, 0.0000, 0.3010, 0.0780, 0.2220, 0.1542, 0.4400, 0.5050, 0.4779, 0.0970, 0.1800
0.2800, 0.0000, 0.2420, 0.0930, 0.1740, 0.1064, 0.5400, 0.3000, 0.4220, 0.0840, 0.1440
0.6500, 1.0000, 0.3130, 0.1100, 0.2130, 0.1280, 0.4700, 0.5000, 0.5247, 0.0910, 0.1630
0.4200, 0.0000, 0.3010, 0.0910, 0.1820, 0.1148, 0.4900, 0.4000, 0.4511, 0.0820, 0.1470
0.5100, 0.0000, 0.2450, 0.0790, 0.2120, 0.1286, 0.6500, 0.3000, 0.4522, 0.0910, 0.0970
0.5300, 1.0000, 0.2770, 0.0950, 0.1900, 0.1018, 0.4100, 0.5000, 0.5464, 0.1010, 0.2200
0.5400, 0.0000, 0.2320, 0.1107, 0.2380, 0.1628, 0.4800, 0.4960, 0.4913, 0.1080, 0.1900
0.7300, 0.0000, 0.2700, 0.1020, 0.2110, 0.1210, 0.6700, 0.3000, 0.4745, 0.0990, 0.1090
0.5400, 0.0000, 0.2680, 0.1080, 0.1760, 0.0806, 0.6700, 0.3000, 0.4956, 0.1060, 0.1910
0.4200, 0.0000, 0.2920, 0.0930, 0.2490, 0.1742, 0.4500, 0.6000, 0.5004, 0.0920, 0.1220
0.7500, 0.0000, 0.3120, 0.1177, 0.2290, 0.1388, 0.2900, 0.7900, 0.5724, 0.1060, 0.2300
0.5500, 1.0000, 0.3210, 0.1127, 0.2070, 0.0924, 0.2500, 0.8280, 0.6105, 0.1110, 0.2420
0.6800, 1.0000, 0.2570, 0.1090, 0.2330, 0.1126, 0.3500, 0.7000, 0.6057, 0.1050, 0.2480
0.5700, 0.0000, 0.2690, 0.0980, 0.2460, 0.1652, 0.3800, 0.7000, 0.5366, 0.0960, 0.2490
0.4800, 0.0000, 0.3140, 0.0753, 0.2420, 0.1516, 0.3800, 0.6370, 0.5568, 0.1030, 0.1920
0.6100, 1.0000, 0.2560, 0.0850, 0.1840, 0.1162, 0.3900, 0.5000, 0.4970, 0.0980, 0.1310
0.6900, 0.0000, 0.3700, 0.1030, 0.2070, 0.1314, 0.5500, 0.4000, 0.4635, 0.0900, 0.2370
0.3800, 0.0000, 0.3260, 0.0770, 0.1680, 0.1006, 0.4700, 0.4000, 0.4625, 0.0960, 0.0780
0.4500, 1.0000, 0.2120, 0.0940, 0.1690, 0.0968, 0.5500, 0.3000, 0.4454, 0.1020, 0.1350
0.5100, 1.0000, 0.2920, 0.1070, 0.1870, 0.1390, 0.3200, 0.6000, 0.4382, 0.0950, 0.2440
0.7100, 1.0000, 0.2400, 0.0840, 0.1380, 0.0858, 0.3900, 0.4000, 0.4190, 0.0900, 0.1990
0.5700, 0.0000, 0.3610, 0.1170, 0.1810, 0.1082, 0.3400, 0.5000, 0.5268, 0.1000, 0.2700
0.5600, 1.0000, 0.2580, 0.1030, 0.1770, 0.1144, 0.3400, 0.5000, 0.4963, 0.0990, 0.1640
0.3200, 1.0000, 0.2200, 0.0880, 0.1370, 0.0786, 0.4800, 0.3000, 0.3951, 0.0780, 0.0720
0.5000, 0.0000, 0.2190, 0.0910, 0.1900, 0.1112, 0.6700, 0.3000, 0.4078, 0.0770, 0.0960
0.4300, 0.0000, 0.3430, 0.0840, 0.2560, 0.1726, 0.3300, 0.8000, 0.5529, 0.1040, 0.3060
0.5400, 1.0000, 0.2520, 0.1150, 0.1810, 0.1200, 0.3900, 0.5000, 0.4701, 0.0920, 0.0910
0.3100, 0.0000, 0.2330, 0.0850, 0.1900, 0.1308, 0.4300, 0.4000, 0.4394, 0.0770, 0.2140
0.5600, 0.0000, 0.2570, 0.0800, 0.2440, 0.1516, 0.5900, 0.4000, 0.5118, 0.0950, 0.0950
0.4400, 0.0000, 0.2510, 0.1330, 0.1820, 0.1130, 0.5500, 0.3000, 0.4249, 0.0840, 0.2160
0.5700, 1.0000, 0.3190, 0.1110, 0.1730, 0.1162, 0.4100, 0.4000, 0.4369, 0.0870, 0.2630
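The comma-delimited rows above use '#' lines as comments. A minimal C# sketch of how such a file might be read into an array-of-arrays matrix (the helper name LoadMatrix is hypothetical, not from the article's demo code):

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

// Hypothetical loader sketch: reads comma-delimited numeric rows,
// skipping blank lines and lines that start with '#'.
static double[][] LoadMatrix(string path)
{
  var rows = new List<double[]>();
  foreach (string line in File.ReadLines(path))
  {
    string trimmed = line.Trim();
    if (trimmed.Length == 0 || trimmed.StartsWith("#"))
      continue;  // skip blanks and comment lines
    string[] tokens = trimmed.Split(',');
    double[] row = new double[tokens.Length];
    for (int j = 0; j < tokens.Length; ++j)
      row[j] = double.Parse(tokens[j].Trim(),
        CultureInfo.InvariantCulture);
    rows.Add(row);
  }
  return rows.ToArray();
}
```

Using CultureInfo.InvariantCulture guards against locales where the decimal separator is a comma; the predictor columns and the target column can then be split out of each row as needed.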

Test data:

# diabetes_norm_test_100.txt
#
0.6400, 1.0000, 0.2840, 0.1110, 0.1840, 0.1270, 0.4100, 0.4000, 0.4382, 0.0970, 0.1780
0.4300, 0.0000, 0.2810, 0.1210, 0.1920, 0.1210, 0.6000, 0.3000, 0.4007, 0.0930, 0.1130
0.1900, 0.0000, 0.2530, 0.0830, 0.2250, 0.1566, 0.4600, 0.5000, 0.4719, 0.0840, 0.2000
0.7100, 1.0000, 0.2610, 0.0850, 0.2200, 0.1524, 0.4700, 0.5000, 0.4635, 0.0910, 0.1390
0.5000, 1.0000, 0.2800, 0.1040, 0.2820, 0.1968, 0.4400, 0.6000, 0.5328, 0.0950, 0.1390
0.5900, 1.0000, 0.2360, 0.0730, 0.1800, 0.1074, 0.5100, 0.4000, 0.4682, 0.0840, 0.0880
0.5700, 0.0000, 0.2450, 0.0930, 0.1860, 0.0966, 0.7100, 0.3000, 0.4522, 0.0910, 0.1480
0.4900, 1.0000, 0.2100, 0.0820, 0.1190, 0.0854, 0.2300, 0.5000, 0.3970, 0.0740, 0.0880
0.4100, 1.0000, 0.3200, 0.1260, 0.1980, 0.1042, 0.4900, 0.4000, 0.5412, 0.1240, 0.2430
0.2500, 1.0000, 0.2260, 0.0850, 0.1300, 0.0710, 0.4800, 0.3000, 0.4007, 0.0810, 0.0710
0.5200, 1.0000, 0.1970, 0.0810, 0.1520, 0.0534, 0.8200, 0.2000, 0.4419, 0.0820, 0.0770
0.3400, 0.0000, 0.2120, 0.0840, 0.2540, 0.1134, 0.5200, 0.5000, 0.6094, 0.0920, 0.1090
0.4200, 1.0000, 0.3060, 0.1010, 0.2690, 0.1722, 0.5000, 0.5000, 0.5455, 0.1060, 0.2720
0.2800, 1.0000, 0.2550, 0.0990, 0.1620, 0.1016, 0.4600, 0.4000, 0.4277, 0.0940, 0.0600
0.4700, 1.0000, 0.2330, 0.0900, 0.1950, 0.1258, 0.5400, 0.4000, 0.4331, 0.0730, 0.0540
0.3200, 1.0000, 0.3100, 0.1000, 0.1770, 0.0962, 0.4500, 0.4000, 0.5187, 0.0770, 0.2210
0.4300, 0.0000, 0.1850, 0.0870, 0.1630, 0.0936, 0.6100, 0.2670, 0.3738, 0.0800, 0.0900
0.5900, 1.0000, 0.2690, 0.1040, 0.1940, 0.1266, 0.4300, 0.5000, 0.4804, 0.1060, 0.3110
0.5300, 0.0000, 0.2830, 0.1010, 0.1790, 0.1070, 0.4800, 0.4000, 0.4788, 0.1010, 0.2810
0.6000, 0.0000, 0.2570, 0.1030, 0.1580, 0.0846, 0.6400, 0.2000, 0.3850, 0.0970, 0.1820
0.5400, 1.0000, 0.3610, 0.1150, 0.1630, 0.0984, 0.4300, 0.4000, 0.4682, 0.1010, 0.3210
0.3500, 1.0000, 0.2410, 0.0947, 0.1550, 0.0974, 0.3200, 0.4840, 0.4852, 0.0940, 0.0580
0.4900, 1.0000, 0.2580, 0.0890, 0.1820, 0.1186, 0.3900, 0.5000, 0.4804, 0.1150, 0.2620
0.5800, 0.0000, 0.2280, 0.0910, 0.1960, 0.1188, 0.4800, 0.4000, 0.4984, 0.1150, 0.2060
0.3600, 1.0000, 0.3910, 0.0900, 0.2190, 0.1358, 0.3800, 0.6000, 0.5421, 0.1030, 0.2330
0.4600, 1.0000, 0.4220, 0.0990, 0.2110, 0.1370, 0.4400, 0.5000, 0.5011, 0.0990, 0.2420
0.4400, 1.0000, 0.2660, 0.0990, 0.2050, 0.1090, 0.4300, 0.5000, 0.5580, 0.1110, 0.1230
0.4600, 0.0000, 0.2990, 0.0830, 0.1710, 0.1130, 0.3800, 0.4500, 0.4585, 0.0980, 0.1670
0.5400, 0.0000, 0.2100, 0.0780, 0.1880, 0.1074, 0.7000, 0.3000, 0.3970, 0.0730, 0.0630
0.6300, 1.0000, 0.2550, 0.1090, 0.2260, 0.1032, 0.4600, 0.5000, 0.5951, 0.0870, 0.1970
0.4100, 1.0000, 0.2420, 0.0900, 0.1990, 0.1236, 0.5700, 0.4000, 0.4522, 0.0860, 0.0710
0.2800, 0.0000, 0.2540, 0.0930, 0.1410, 0.0790, 0.4900, 0.3000, 0.4174, 0.0910, 0.1680
0.1900, 0.0000, 0.2320, 0.0750, 0.1430, 0.0704, 0.5200, 0.3000, 0.4635, 0.0720, 0.1400
0.6100, 1.0000, 0.2610, 0.1260, 0.2150, 0.1298, 0.5700, 0.4000, 0.4949, 0.0960, 0.2170
0.4800, 0.0000, 0.3270, 0.0930, 0.2760, 0.1986, 0.4300, 0.6420, 0.5148, 0.0910, 0.1210
0.5400, 1.0000, 0.2730, 0.1000, 0.2000, 0.1440, 0.3300, 0.6000, 0.4745, 0.0760, 0.2350
0.5300, 1.0000, 0.2660, 0.0930, 0.1850, 0.1224, 0.3600, 0.5000, 0.4890, 0.0820, 0.2450
0.4800, 0.0000, 0.2280, 0.1010, 0.1100, 0.0416, 0.5600, 0.2000, 0.4127, 0.0970, 0.0400
0.5300, 0.0000, 0.2880, 0.1117, 0.1450, 0.0872, 0.4600, 0.3150, 0.4078, 0.0850, 0.0520
0.2900, 1.0000, 0.1810, 0.0730, 0.1580, 0.0990, 0.4100, 0.4000, 0.4500, 0.0780, 0.1040
0.6200, 0.0000, 0.3200, 0.0880, 0.1720, 0.0690, 0.3800, 0.4000, 0.5784, 0.1000, 0.1320
0.5000, 1.0000, 0.2370, 0.0920, 0.1660, 0.0970, 0.5200, 0.3000, 0.4443, 0.0930, 0.0880
0.5800, 1.0000, 0.2360, 0.0960, 0.2570, 0.1710, 0.5900, 0.4000, 0.4905, 0.0820, 0.0690
0.5500, 1.0000, 0.2460, 0.1090, 0.1430, 0.0764, 0.5100, 0.3000, 0.4357, 0.0880, 0.2190
0.5400, 0.0000, 0.2260, 0.0900, 0.1830, 0.1042, 0.6400, 0.3000, 0.4304, 0.0920, 0.0720
0.3600, 0.0000, 0.2780, 0.0730, 0.1530, 0.1044, 0.4200, 0.4000, 0.3497, 0.0730, 0.2010
0.6300, 1.0000, 0.2410, 0.1110, 0.1840, 0.1122, 0.4400, 0.4000, 0.4935, 0.0820, 0.1100
0.4700, 1.0000, 0.2650, 0.0700, 0.1810, 0.1048, 0.6300, 0.3000, 0.4190, 0.0700, 0.0510
0.5100, 1.0000, 0.3280, 0.1120, 0.2020, 0.1006, 0.3700, 0.5000, 0.5775, 0.1090, 0.2770
0.4200, 0.0000, 0.1990, 0.0760, 0.1460, 0.0832, 0.5500, 0.3000, 0.3664, 0.0790, 0.0630
0.3700, 1.0000, 0.2360, 0.0940, 0.2050, 0.1388, 0.5300, 0.4000, 0.4190, 0.1070, 0.1180
0.2800, 0.0000, 0.2210, 0.0820, 0.1680, 0.1006, 0.5400, 0.3000, 0.4205, 0.0860, 0.0690
0.5800, 0.0000, 0.2810, 0.1110, 0.1980, 0.0806, 0.3100, 0.6000, 0.6068, 0.0930, 0.2730
0.3200, 0.0000, 0.2650, 0.0860, 0.1840, 0.1016, 0.5300, 0.4000, 0.4990, 0.0780, 0.2580
0.2500, 1.0000, 0.2350, 0.0880, 0.1430, 0.0808, 0.5500, 0.3000, 0.3584, 0.0830, 0.0430
0.6300, 0.0000, 0.2600, 0.0857, 0.1550, 0.0782, 0.4600, 0.3370, 0.5037, 0.0970, 0.1980
0.5200, 0.0000, 0.2780, 0.0850, 0.2190, 0.1360, 0.4900, 0.4000, 0.5136, 0.0750, 0.2420
0.6500, 1.0000, 0.2850, 0.1090, 0.2010, 0.1230, 0.4600, 0.4000, 0.5075, 0.0960, 0.2320
0.4200, 0.0000, 0.3060, 0.1210, 0.1760, 0.0928, 0.6900, 0.3000, 0.4263, 0.0890, 0.1750
0.5300, 0.0000, 0.2220, 0.0780, 0.1640, 0.0810, 0.7000, 0.2000, 0.4174, 0.1010, 0.0930
0.7900, 1.0000, 0.2330, 0.0880, 0.1860, 0.1284, 0.3300, 0.6000, 0.4812, 0.1020, 0.1680
0.4300, 0.0000, 0.3540, 0.0930, 0.1850, 0.1002, 0.4400, 0.4000, 0.5318, 0.1010, 0.2750
0.4400, 0.0000, 0.3140, 0.1150, 0.1650, 0.0976, 0.5200, 0.3000, 0.4344, 0.0890, 0.2930
0.6200, 1.0000, 0.3780, 0.1190, 0.1130, 0.0510, 0.3100, 0.4000, 0.5043, 0.0840, 0.2810
0.3300, 0.0000, 0.1890, 0.0700, 0.1620, 0.0918, 0.5900, 0.3000, 0.4025, 0.0580, 0.0720
0.5600, 0.0000, 0.3500, 0.0793, 0.1950, 0.1408, 0.4200, 0.4640, 0.4111, 0.0960, 0.1400
0.6600, 0.0000, 0.2170, 0.1260, 0.2120, 0.1278, 0.4500, 0.4710, 0.5278, 0.1010, 0.1890
0.3400, 1.0000, 0.2530, 0.1110, 0.2300, 0.1620, 0.3900, 0.6000, 0.4977, 0.0900, 0.1810
0.4600, 1.0000, 0.2380, 0.0970, 0.2240, 0.1392, 0.4200, 0.5000, 0.5366, 0.0810, 0.2090
0.5000, 0.0000, 0.3180, 0.0820, 0.1360, 0.0692, 0.5500, 0.2000, 0.4078, 0.0850, 0.1360
0.6900, 0.0000, 0.3430, 0.1130, 0.2000, 0.1238, 0.5400, 0.4000, 0.4710, 0.1120, 0.2610
0.3400, 0.0000, 0.2630, 0.0870, 0.1970, 0.1200, 0.6300, 0.3000, 0.4249, 0.0960, 0.1130
0.7100, 1.0000, 0.2700, 0.0933, 0.2690, 0.1902, 0.4100, 0.6560, 0.5242, 0.0930, 0.1310
0.4700, 0.0000, 0.2720, 0.0800, 0.2080, 0.1456, 0.3800, 0.6000, 0.4804, 0.0920, 0.1740
0.4100, 0.0000, 0.3380, 0.1233, 0.1870, 0.1270, 0.4500, 0.4160, 0.4318, 0.1000, 0.2570
0.3400, 0.0000, 0.3300, 0.0730, 0.1780, 0.1146, 0.5100, 0.3490, 0.4127, 0.0920, 0.0550
0.5100, 0.0000, 0.2410, 0.0870, 0.2610, 0.1756, 0.6900, 0.4000, 0.4407, 0.0930, 0.0840
0.4300, 0.0000, 0.2130, 0.0790, 0.1410, 0.0788, 0.5300, 0.3000, 0.3829, 0.0900, 0.0420
0.5500, 0.0000, 0.2300, 0.0947, 0.1900, 0.1376, 0.3800, 0.5000, 0.4277, 0.1060, 0.1460
0.5900, 1.0000, 0.2790, 0.1010, 0.2180, 0.1442, 0.3800, 0.6000, 0.5187, 0.0950, 0.2120
0.2700, 1.0000, 0.3360, 0.1100, 0.2460, 0.1566, 0.5700, 0.4000, 0.5088, 0.0890, 0.2330
0.5100, 1.0000, 0.2270, 0.1030, 0.2170, 0.1624, 0.3000, 0.7000, 0.4812, 0.0800, 0.0910
0.4900, 1.0000, 0.2740, 0.0890, 0.1770, 0.1130, 0.3700, 0.5000, 0.4905, 0.0970, 0.1110
0.2700, 0.0000, 0.2260, 0.0710, 0.1160, 0.0434, 0.5600, 0.2000, 0.4419, 0.0790, 0.1520
0.5700, 1.0000, 0.2320, 0.1073, 0.2310, 0.1594, 0.4100, 0.5630, 0.5030, 0.1120, 0.1200
0.3900, 1.0000, 0.2690, 0.0930, 0.1360, 0.0754, 0.4800, 0.3000, 0.4143, 0.0990, 0.0670
0.6200, 1.0000, 0.3460, 0.1200, 0.2150, 0.1292, 0.4300, 0.5000, 0.5366, 0.1230, 0.3100
0.3700, 0.0000, 0.2330, 0.0880, 0.2230, 0.1420, 0.6500, 0.3400, 0.4357, 0.0820, 0.0940
0.4600, 0.0000, 0.2110, 0.0800, 0.2050, 0.1444, 0.4200, 0.5000, 0.4533, 0.0870, 0.1830
0.6800, 1.0000, 0.2350, 0.1010, 0.1620, 0.0854, 0.5900, 0.3000, 0.4477, 0.0910, 0.0660
0.5100, 0.0000, 0.3150, 0.0930, 0.2310, 0.1440, 0.4900, 0.4700, 0.5252, 0.1170, 0.1730
0.4100, 0.0000, 0.2080, 0.0860, 0.2230, 0.1282, 0.8300, 0.3000, 0.4078, 0.0890, 0.0720
0.5300, 0.0000, 0.2650, 0.0970, 0.1930, 0.1224, 0.5800, 0.3000, 0.4143, 0.0990, 0.0490
0.4500, 0.0000, 0.2420, 0.0830, 0.1770, 0.1184, 0.4500, 0.4000, 0.4220, 0.0820, 0.0640
0.3300, 0.0000, 0.1950, 0.0800, 0.1710, 0.0854, 0.7500, 0.2000, 0.3970, 0.0800, 0.0480
0.6000, 1.0000, 0.2820, 0.1120, 0.1850, 0.1138, 0.4200, 0.4000, 0.4984, 0.0930, 0.1780
0.4700, 1.0000, 0.2490, 0.0750, 0.2250, 0.1660, 0.4200, 0.5000, 0.4443, 0.1020, 0.1040
0.6000, 1.0000, 0.2490, 0.0997, 0.1620, 0.1066, 0.4300, 0.3770, 0.4127, 0.0950, 0.1320
0.3600, 0.0000, 0.3000, 0.0950, 0.2010, 0.1252, 0.4200, 0.4790, 0.5130, 0.0850, 0.2200
0.3600, 0.0000, 0.1960, 0.0710, 0.2500, 0.1332, 0.9700, 0.3000, 0.4595, 0.0920, 0.0570
Posted in Machine Learning | Leave a comment

Example of Roach Infestation Optimization for Rastrigin’s Function Using C#

One year ago, I wrote an article for the now-defunct Microsoft MSDN Magazine titled “Roach Infestation Optimization”. The C# code associated with that article vanished into the void, so I figured I’d rehydrate the demo program.

Roach infestation optimization (RIO) is a numerical optimization algorithm that’s loosely based on the behavior of common cockroaches such as Periplaneta americana. Let me explain.

In machine learning, a numerical optimization algorithm is often used to find values for a set of variables (usually called weights) that minimize some measure of error. Determining the values of the weights is called training the model. Training uses a collection of data that has known correct output values; the optimization algorithm searches for the weights that minimize the error between the model's computed outputs and the known correct outputs.

The most common training algorithms are based on calculus derivatives (stochastic gradient descent), but there are also algorithms that are based on the behaviors of natural systems. These are sometimes called bio-inspired algorithms. These bio-inspired algorithms are usually not practical, but they’re interesting.

The goal of the demo program is to use RIO to find the minimum value of the Rastrigin function with eight input variables. The Rastrigin function is a standard benchmark function used to evaluate the effectiveness of numerical optimization algorithms. The function has a known minimum value of 0.0 located at x = (0, 0, . . 0) where the number of 0 values is equal to the number of input values.
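To make the benchmark concrete, here is a minimal sketch of the Rastrigin function in Python (the demo's C# Error() method computes this same sum, then squares the difference from the true minimum of 0.0):

```python
import math

def rastrigin(x):
    # standard Rastrigin benchmark:
    # f(x) = sum(xi^2 - 10*cos(2*pi*xi) + 10)
    # global minimum is 0.0 at x = (0, 0, ..., 0)
    return sum((xi * xi) - (10 * math.cos(2 * math.pi * xi)) + 10
               for xi in x)

print(rastrigin([0.0] * 8))  # 0.0 at the global minimum
print(rastrigin([0.5] * 8))  # large value away from the minimum
```

The many cosine bumps are what create the local minima that trap optimization algorithms.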

The Rastrigin function is extremely difficult for most optimization algorithms because it has many peaks and valleys that create local minima that can trap an algorithm. It's not possible to easily visualize the Rastrigin function with eight input values, but you can get a good idea of the function's characteristics by examining a graph of the function for two input values.

The output of the demo is:

Begin roach infestation optimization demo

Goal is minimize Rastrigin's function in 8 dimensions
Problem known min value = 0.0 at (0, 0, .. 0)

Setting number of roaches  = 50
Setting maximum time = 10000

Starting roach optimization

time =    500 best error = 104.76530
time =   1000 best error = 67.36795
time =   1500 best error = 12.11031
time =   2000 best error = 8.90949
time =   2500 best error = 3.99195
Mass extinction at t =   2500
time =   3000 best error = 3.99195
time =   3500 best error = 1.80840
time =   4000 best error = 1.02166
time =   4500 best error = 0.99382
time =   5000 best error = 0.00000
Mass extinction at t =   5000
time =   5500 best error = 0.00000
time =   6000 best error = 0.00000
time =   6500 best error = 0.00000
time =   7000 best error = 0.00000
time =   7500 best error = 0.00000
Mass extinction at t =   7500
time =   8000 best error = 0.00000
time =   8500 best error = 0.00000
time =   9000 best error = 0.00000
time =   9500 best error = 0.00000

Roach algorithm completed

Best error found = 0.000000 at:

0.0000 0.0000 -0.0000 0.0000 -0.0000 0.0000 -0.0000 -0.0000

End roach infestation optimization demo

The demo program sets the number of roaches to 50. Each simulated roach has a position that represents a possible solution to the minimization problem. More roaches increase the chance of finding the true optimal solution at the expense of performance.

RIO is an iterative process and requires a maximum loop counter value. The demo sets the maximum value to 10,000 iterations. The maximum number of iterations will vary from problem to problem.

In the demo, the best (smallest) error associated with the best roach position found so far was displayed every 500 time units. After the algorithm finished, the best position found by any roach was x = (0, 0, 0, 0, 0, 0, 0, 0), which is, in fact, the correct answer. But notice that if the maximum number of iterations had been set to 3,000 instead of 10,000, RIO would not have found the global minimum. RIO is not guaranteed to find an optimal solution in practical scenarios.

RIO is essentially a variation of particle swarm optimization (PSO). The main difference is that in RIO, roaches that are close to each other will sometimes exchange best-position information with each other.

Bio-inspired algorithms such as roach infestation optimization and particle swarm optimization are usually too slow to be practical. But if/when quantum computing becomes a reality, these algorithms could become essential techniques.



“Terra Formars” (2016) is an absolutely bonkers Japanese science fiction movie. In the 21st century, Earth plans to colonize Mars. To terraform the planet, the surface of Mars is seeded with moss and cockroaches to spread the moss.

Then 500 years later, a crew of about 16 men and women is sent to Mars. They discover that the roaches have evolved into very nasty human-sized roaches. Then . . . well, all kinds of bizarre craziness ensues.

This movie is one of many things Japanese that absolutely baffle me.


Demo program. (My blog editor chokes on Boolean operator symbols, so raw listings sometimes show "lt" for less-than, "gt" for greater-than, and so on.)

using System;

// "Roach Infestation Optimization" (2008)
// IEEE Swarm Intelligence Symposium
// T. Havens, C. Spain, N. Salmon, J. Keller

namespace RoachOptimization
{
  class RoachOptimizationProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin roach infestation " +
        "optimization demo\n");

      int dim = 8;
      int numRoaches = 50;
      int tMax = 10000;
      Random rnd = new Random(1);
      
      Console.WriteLine("Goal is minimize Rastrigin's " +
        "function in " + dim + " dimensions");
      Console.WriteLine("Problem known min value = 0.0 " +
        "at (0, 0, .. 0) \n");
      Console.WriteLine("Setting number of roaches  = " +
        numRoaches);
      Console.WriteLine("Setting maximum time = " +
        tMax);
      
      Console.WriteLine("\nStarting roach optimization \n");
      double[] answer = 
        SolveRastrigin(dim, numRoaches, tMax, rnd);
      Console.WriteLine("\nRoach algorithm completed ");

      double err = Error(answer);
      Console.WriteLine("\nBest error found = " +
        err.ToString("F6") + " at: \n");

      for (int i = 0; i < dim; ++i)
        Console.Write(answer[i].ToString("F4") + " ");
      Console.WriteLine("");

      Console.WriteLine("\nEnd roach infestation " +
        "optimization demo ");
      Console.ReadLine();
    } // Main

    // -----------------------------------------------------

    static double[] SolveRastrigin(int dim, int numRoaches,
      int tMax, Random rnd)
    {
      // estimate solution to Rastrigin's function
      // using roach infestation optimization
      double C0 = 0.7;  // RIO is based on PSO
      double C1 = 1.43;
      // probs of exchanging info based on number neighbors
      // actual roach behavior: 0.49, 0.63, 0.65
      // double[] A = new double[] { 0.49, 0.63, 0.65 }; 
      double[] A = new double[] { 0.2, 0.3, 0.4 }; // better 

      // smaller divisor (e.g., 5 instead of 10)
      // roaches stay longer
      int tHunger = tMax / 10;  
      double minX = -10.0;
      double maxX = 10.0;
      int extinction = tMax / 4;  // mass extinction restart
      //Random rnd = new Random(seed);  // random order

      Roach[] herd = new Roach[numRoaches];  // 'intrusion'
      for (int i = 0; i < numRoaches; ++i)
        herd[i] = new Roach(dim, minX, maxX, tHunger, i);

      int t = 0;  // loop counter (time)
      int[] indices = new int[numRoaches];
      for (int i = 0; i < numRoaches; ++i)
        indices[i] = i;

      double bestError = double.MaxValue;  // by any roach
      double[] bestPosition = new double[dim];

      int displayInterval = tMax / 20; // 20 times

      // distances between all pairs of roaches
      double[][] distances = new double[numRoaches][];
      for (int i = 0; i < numRoaches; ++i)
        distances[i] = new double[numRoaches];

      while (t < tMax) // main loop
      {
        if (t > 0 && t % displayInterval == 0)
        {
          Console.Write("time = " +
            t.ToString().PadLeft(6));
          Console.WriteLine(" best error = " +
            bestError.ToString("F5"));
        }

        // compute distances between all pairs
        for (int i = 0; i < numRoaches - 1; ++i)
        {
          for (int j = i + 1; j < numRoaches; ++j)
          {
            double d = 
              Distance(herd[i].position, herd[j].position);
            distances[i][j] = distances[j][i] = d;
          }
        }

        // find 'quarter' median distance
        double[] sortedDists = 
          new double[numRoaches * (numRoaches - 1) / 2];
        int k = 0;
        for (int i = 0; i < numRoaches - 1; ++i)
        {
          for (int j = i + 1; j < numRoaches; ++j)
          {
            sortedDists[k++] = distances[i][j];
          }
        }

        Array.Sort(sortedDists);
        double medianDist = 
          sortedDists[(int)(sortedDists.Length / 4)];

        Shuffle(indices, rnd); // process roaches rnd order

        for (int i = 0; i < numRoaches; ++i)  // each roach
        {
          int idx = indices[i]; // roach index

          Roach curr = herd[idx]; // ref to current roach
          int numNeighbors = 0;

          // calculate number of neighbors of curr roach
          for (int j = 0; j < numRoaches; ++j)  
          {
            if (j == idx) continue; // not a neighbor self
            double d = distances[idx][j];
            if (d < medianDist) // is a neighbor to roach[j]
              ++numNeighbors;
          }

          // at most 3 neighbors are relevant
          int effectiveNeighbors = numNeighbors;
          if (effectiveNeighbors >= 3)  
            effectiveNeighbors = 3;

          // possibly swap group best information
          for (int j = 0; j < numRoaches; ++j)
          {
            if (j == idx) continue; // not neighbor of self
            if (effectiveNeighbors == 0) continue;
            double prob = rnd.NextDouble();
            if (prob > A[effectiveNeighbors - 1])
              continue; // don't always swap

            double d = distances[idx][j];
            if (d < medianDist) // is a neighbor
            {
              // exchange info if curr roach better
              if (curr.error < herd[j].error)
              {
                for (int p = 0; p < dim; ++p)
                {
                  // use curr roach's personal best
                  // as the group best for both roaches
                  herd[j].groupBestPos[p] = 
                    curr.personalBestPos[p];  
                  curr.groupBestPos[p] = 
                    curr.personalBestPos[p];
                }
              }
              else // roach[j] is better than curr
              {
                for (int p = 0; p < dim; ++p)
                {
                  curr.groupBestPos[p] = 
                    herd[j].personalBestPos[p];
                  herd[j].groupBestPos[p] = 
                    herd[j].personalBestPos[p];
                }
              }

            } // if a neighbor
          } // j, each neighbor

          if (curr.hunger < tHunger) // move roach
          {
            // new velocity
            for (int p = 0; p < dim; ++p)
              curr.velocity[p] = 
                (C0 * curr.velocity[p]) +
                (C1 * rnd.NextDouble() *
                (curr.personalBestPos[p] - 
                curr.position[p])) +
                (C1 * rnd.NextDouble() *
                (curr.groupBestPos[p] - 
                curr.position[p]));

            // update position
            for (int p = 0; p < dim; ++p)
              curr.position[p] = 
                curr.position[p] + curr.velocity[p];

            double e = Error(curr.position); // Rastrigin
            curr.error = e;

            // new personal best position?
            if (curr.error < curr.personalBestErr)
            {
              curr.personalBestErr = curr.error;
              for (int p = 0; p < dim; ++p)
                curr.personalBestPos[p] = curr.position[p];
            }

            // new global best position?
            if (curr.error < bestError)
            {
              bestError = curr.error;
              for (int p = 0; p < dim; ++p)
                bestPosition[p] = curr.position[p];
            }

            ++curr.hunger;
          }
          else // hungry. leave group to random position
          {
            herd[idx] = 
              new Roach(dim, minX, maxX, tHunger, t);
          }

        } // each roach

        // mass extinction?
        if (t > 0 && t % extinction == 0)  
        {
          Console.WriteLine("Mass extinction at t = " +
            t.ToString().PadLeft(6));
          for (int i = 0; i < numRoaches; ++i)
            herd[i] = new Roach(dim, minX, maxX, tHunger, i);
        }

        ++t;
      } // main while loop

      return bestPosition;

    } // SolveRastrigin

    // ------------------------------------------------------

    public static double Error(double[] x)
    {
      // Rastrigin function has min value of 0.0
      double trueMin = 0.0;
      double rastrigin = 0.0;
      for (int i = 0; i < x.Length; ++i)
      {
        double xi = x[i];
        rastrigin += (xi * xi) -
          (10 * Math.Cos(2 * Math.PI * xi)) + 10;
      }

      return (rastrigin - trueMin) * (rastrigin - trueMin);
    }

    // ------------------------------------------------------

    static double Distance(double[] pos1, double[] pos2)
    {
      double sum = 0.0;
      for (int i = 0; i < pos1.Length; ++i)
        sum += (pos1[i] - pos2[i]) * (pos1[i] - pos2[i]);
      return Math.Sqrt(sum);
    }

    // ------------------------------------------------------

    static void Shuffle(int[] indices, Random rnd)
    {
      //Random rnd = new Random(seed);
      for (int i = 0; i < indices.Length; ++i)
      {
        int r = rnd.Next(i, indices.Length);
        int tmp = indices[r];
        indices[r] = indices[i];
        indices[i] = tmp;
      }
    }

  } // Program

  // --------------------------------------------------------

  public class Roach
  {
    public int dim;
    public double[] position;
    public double[] velocity;
    public double error; // at curr position

    public double[] personalBestPos; // best position found
    public double personalBestErr; // 

    public double[] groupBestPos; // by any roach
 
    public int hunger;
    private Random rnd;

    // ------------------------------------------------------

    public Roach(int dim, double minX, double maxX,
      int tHunger, int rndSeed)
    {
      this.dim = dim;
      this.position = new double[dim];
      this.velocity = new double[dim];
      this.personalBestPos = new double[dim];
      this.groupBestPos = new double[dim];

      this.rnd = new Random(rndSeed);
      this.hunger = this.rnd.Next(0, tHunger);

      for (int i = 0; i < dim; ++i)
      {
        this.position[i] = 
          (maxX - minX) * rnd.NextDouble() + minX;
        this.velocity[i] = 
          (maxX - minX) * rnd.NextDouble() + minX;
        this.personalBestPos[i] = this.position[i];
        this.groupBestPos[i] = this.position[i];
      }

      this.error = 
        RoachOptimizationProgram.Error(this.position);
      this.personalBestErr = this.error;
    }

  } // class Roach

} // ns

Singular Value Decomposition Using C#

If you have a matrix A, the singular value decomposition (SVD) gives you a matrix U, a vector s, and a matrix Vh, so that A = U * S * Vh, where S is the square matrix with the s values on the diagonal and 0s off the diagonal. For example, if s = [2, 4, 6], then S is the 3x3 diagonal matrix with 2, 4, 6 on its diagonal.

SVD is used in dozens of important software algorithms.

SVD is an extremely complicated topic, and computing the SVD of a matrix is one of the most challenging problems in all of computer programming. Many researchers worked for many years on the problem, and came up with many different versions of SVD implementations that vary in accuracy, complexity, and robustness. The standard SVD implementation in LAPACK is over 10,000 lines of Fortran code.

I put together an implementation of SVD (Householder + QR version) using the C# language. Unfortunately, my implementation is not as stable as my older implementation that uses the Jacobi algorithm. The first part of my demo output is:

Source (tall) matrix:
   1.0   2.0   3.0   4.0   5.0
   0.0  -3.0   5.0  -7.0   9.0
   2.0   0.0  -2.0   0.0  -2.0
   4.0  -1.0   5.0   6.0   1.0
   3.0   6.0   8.0   2.0   2.0
   5.0  -2.0   4.0  -4.0   3.0

Computing SVD
Done

U =
   0.319267   0.295047   0.358160   0.465731   0.652040
   0.625260  -0.614481   0.189087   0.174205  -0.138090
  -0.129989   0.032917  -0.342862  -0.160369   0.572690
   0.284056   0.494504  -0.490845   0.494308  -0.381366
   0.486033   0.495867   0.263713  -0.648653  -0.103790
   0.416300  -0.209426  -0.638701  -0.248874   0.267559

s =
  15.967661  12.793149   6.297366   5.706879   2.478679

Vh =
   0.296544   0.035215   0.708796  -0.130799   0.625557
   0.217254   0.416871   0.261754   0.803400  -0.255056
  -0.745282   0.555722  -0.030757   0.039094   0.365040
  -0.187161  -0.609726  -0.196995   0.579569   0.467438
   0.523820   0.379983  -0.623971   0.003540   0.438033

Behind the scenes, I verified my implementation by feeding the same source matrix A to the NumPy library np.linalg.svd() function to make sure I got the same results. I also constructed the S matrix from the s vector, and then verified that A = U * S * Vh.
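That check is easy to replicate. Here is a minimal sketch using NumPy (assuming NumPy is installed; full_matrices=False returns the same reduced shapes as the demo):

```python
import numpy as np

# the tall source matrix from the demo
A = np.array([[1, 2, 3, 4, 5],
              [0, -3, 5, -7, 9],
              [2, 0, -2, 0, -2],
              [4, -1, 5, 6, 1],
              [3, 6, 8, 2, 2],
              [5, -2, 4, -4, 3]], dtype=np.float64)

# reduced SVD: U is 6x5, s has 5 values, Vh is 5x5
U, s, Vh = np.linalg.svd(A, full_matrices=False)

S = np.diag(s)  # build the square S matrix from the s vector
print(np.allclose(A, U @ S @ Vh))  # True: A = U * S * Vh
```

One caveat when comparing implementations: a column of U and the corresponding row of Vh can both flip sign and the product is unchanged, so sign differences from the demo output aren't errors.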

For the second part of my demo, I used a “wide” matrix that has more columns than rows – the SVD algorithm works differently for tall matrices than it does for wide matrices:
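The demo handles the wide case with the transpose trick: transposing both sides of A-transpose = U1 * S * V1h gives A = (V1h-transpose) * S * (U1-transpose). Here is a minimal sketch of the idea using NumPy:

```python
import numpy as np

# the wide source matrix from the demo (3 rows, 5 columns)
B = np.array([[3, 1, 1, 0, 5],
              [-1, 3, 1, -2, 4],
              [0, 2, 2, 1, -3]], dtype=np.float64)

# SVD of the (tall) transpose: B.T = U1 * diag(s) * V1h
U1, s, V1h = np.linalg.svd(B.T, full_matrices=False)

# transposing both sides gives B = V1h.T * diag(s) * U1.T,
# so U = V1h.T and Vh = U1.T for the original wide matrix
U = V1h.T
Vh = U1.T
print(np.allclose(B, U @ np.diag(s) @ Vh))  # True
```

This is the same swap the demo code performs in the m < n branch of MatDecomp().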

Source (wide) matrix:
   3.0   1.0   1.0   0.0   5.0
  -1.0   3.0   1.0  -2.0   4.0
   0.0   2.0   2.0   1.0  -3.0

Computing SVD
Done

U =
  -0.727149   0.195146  -0.658158
  -0.621912  -0.593195   0.511219
   0.290654  -0.781049  -0.552705

s =
   7.639218   4.023860   3.232785

Vh =
  -0.204149  -0.263322  -0.100502   0.200868  -0.915716
   0.292911  -0.781970  -0.487131   0.100734   0.235122
  -0.768902  -0.071118  -0.387390  -0.487240   0.127506

My C# implementation is intended mostly for machine learning scenarios where accuracy to 1.0e-8 is good enough, as opposed to scientific programming where accuracy to 1.0e-15 is typical.

I worked on this SVD implementation off-and-on for several years. I finally got some extended uninterrupted time and coded up a demo, but the implementation took several weeks to write. I used over a dozen major resources. The implementation here has all normal error-checking removed to keep the number of lines of code as small as possible.

I’ve written some complex code over the years but this SVD implementation is one of the most complex. I consider the code presented in this blog post to be one of the most interesting explorations of my software engineering career.



The development of an algorithm to compute the SVD of a matrix was one of the key advances in the history of computer programming.

The history of pinball machines has always interested me.

The Gottlieb “Flipper Pool” (1965) introduced “in-lanes” — narrow channels on either side of the flippers at the bottom of the machine, often leading to a flipper or a special feature like a kickback.

Previous innovations included the first electrically powered bumpers (1948), modern flipper geometry (1950), automatic reel scoring (1953), and drop targets (1962).


Demo program. Very long, very complicated. (My blog editor chokes on Boolean operator symbols, so raw listings sometimes show "lt" for less-than, "gt" for greater-than, and so on.)

using System;
using System.IO;
using System.Collections.Generic;

// no significant .NET dependencies

namespace MatrixSingularValueDecomposition
{
  internal class SVDProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin SVD using scratch C# ");
      double[][] A = new double[6][];
      A[0] = new double[] { 1, 2, 3, 4, 5 };
      A[1] = new double[] { 0, -3, 5, -7, 9 };
      A[2] = new double[] { 2, 0, -2, 0, -2 };
      A[3] = new double[] { 4, -1, 5, 6, 1 };
      A[4] = new double[] { 3, 6, 8, 2, 2 };
      A[5] = new double[] { 5, -2, 4, -4, 3 };

      Console.WriteLine("\nSource (tall) matrix: ");
      MatShow(A, 1, 6);

      Console.WriteLine("\nComputing SVD ");
      double[][] U;
      double[] s;
      double[][] Vh;
      SVD.MatDecomp(A, out U, out s, out Vh);
      Console.WriteLine("Done ");

      Console.WriteLine("\nU = ");
      MatShow(U, 6, 11);

      Console.WriteLine("\ns = ");
      for (int i = 0; i < s.Length; ++i)
        Console.Write(s[i].ToString("F6").PadLeft(11));
      Console.WriteLine("");

      Console.WriteLine("\nVh = ");
      MatShow(Vh, 6, 11);

      double[][] B = new double[3][];
      B[0] = new double[] { 3, 1, 1, 0, 5 };
      B[1] = new double[] { -1, 3, 1, -2, 4 };
      B[2] = new double[] { 0, 2, 2, 1, -3 };

      Console.WriteLine("\n============================ ");

      Console.WriteLine("\nSource (wide) matrix: ");
      MatShow(B, 1, 6);

      Console.WriteLine("\nComputing SVD ");
      SVD.MatDecomp(B, out U, out s, out Vh);
      Console.WriteLine("Done ");

      Console.WriteLine("\nU = ");
      MatShow(U, 6, 11);

      Console.WriteLine("\ns = ");
      for (int i = 0; i < s.Length; ++i)
        Console.Write(s[i].ToString("F6").PadLeft(11));
      Console.WriteLine("");

      Console.WriteLine("\nVh = ");
      MatShow(Vh, 6, 11);

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main

    // helpers for Main()

    // ------------------------------------------------------

    static void MatShow(double[][] M, int dec, int wid)
    {
      for (int i = 0; i < M.Length; ++i)
      {
        for (int j = 0; j < M[0].Length; ++j)
        {
          double v = M[i][j];
          Console.Write(v.ToString("F" + dec).
            PadLeft(wid));
        }
        Console.WriteLine("");
      }
    }

    // ------------------------------------------------------

    static double[][] MatLoad(string fn, int[] usecols,
      char sep, string comment)
    {
      // ex: double[][] A =
      //  MatLoad("data.txt", new int[] {0,1,2}, ',', "#");
      List<double[]> result =
        new List<double[]>();
      string line = "";
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        string[] tokens = line.Split(sep);
        List<double> lst = new List<double>();
        for (int j = 0; j < usecols.Length; ++j)
          lst.Add(double.Parse(tokens[usecols[j]]));
        double[] row = lst.ToArray();
        result.Add(row);
      }
      sr.Close(); ifs.Close();
      return result.ToArray();
    }

  } // class Program

  // --------------------------------------------------------

  public class SVD
  {
    public static void MatDecomp(double[][] A,
      out double[][] U, out double[] s, out double[][] Vt,
      double tol = 1.0e-12)
    {
      int m = A.Length; int n = A[0].Length;
      if (m >= n)
      {
        double[][] Q; double[][] R;
        MatDecompQR(A, out Q, out R);

        double[][] RtR = MatProduct(MatTranspose(R), R);

        double[] evals; double[][] V;
        MatDecompEigen(RtR, out evals, out V);

        int[] sortIdxs = VecReversed(ArgSort(evals));
        evals = VecSortedUsingIdxs(evals, sortIdxs);
        V = MatSortedUsingIdxs(V, sortIdxs);

        double[] ss = VecSqrt(VecClip(evals, 0.0));
        int[] idxs = VecIndicesWhereNonNegative(ss, tol);
        double[] ssNonZero = VecRemoveZeros(ss, idxs);
        double[][] VNonZero = MatRemoveCols(V, idxs);

        double[][] tmp1 = MatProduct(R, VNonZero);
        double[][] tmp2 = MatDivVector(tmp1, ssNonZero);
        double[][] UU = MatProduct(Q, tmp2);

        U = UU;
        s = ssNonZero;
        Vt = MatTranspose(VNonZero);
      } // m >= n
      else
      {
        // use the transpose trick
        double[][] dummyU;
        double[] dummyS;
        double[][] dummyVt;
        MatDecomp(MatTranspose(A),
          out dummyU, out dummyS, out dummyVt);
        U = MatTranspose(dummyVt);
        s = dummyS;
        Vt = MatTranspose(dummyU);
      } // m < n

    } // MatDecomp()

    // ------------------------------------------------------
    // primary helpers: MatDecompQR() and MatDecompEigen()
    // ------------------------------------------------------

    private static void MatDecompQR(double[][] A,
      out double[][] Q, out double[][] R)
    {
      // reduced QR decomp using Householder reflections
      int m = A.Length; int n = A[0].Length;
      double[][] QQ = MatIdentity(m);  // working Q
      double[][] RR = MatCopy(A);  // working R

      for (int k = 0; k < n; ++k)
      {
        double[] x = MatColToEnd(RR, k); // RR[k:, k]
        double normx = VecNorm(x);
        if (Math.Abs(normx) < 1.0e-12) continue;

        double[] v = VecCopy(x);
        double sign;
        if (x[0] >= 0.0) sign = 1.0; else sign = -1.0;
        v[0] += sign * normx;
        double normv = VecNorm(v);
        for (int i = 0; i < v.Length; ++i)
          v[i] /= normv;

        // apply reflection to R
        double[][] tmp1 = 
          MatRowColToEnd(RR, k); // RR[k:, k:]
        double[] tmp2 = VecMatProduct(v, tmp1);
        double[][] tmp3 = VecOuter(v, tmp2);
        double[][] B = MatScalarMult(2.0, tmp3);
        MatSubtractCorner(RR, k, B);  // R[k:, k:] -= B

        // accumulate Q
        double[][] tmp4 = 
          MatExtractAllRowsColToEnd(QQ, k); // QQ[:, k:]
        double[] tmp5 = MatVecProduct(tmp4, v);
        double[][] tmp6 = VecOuter(tmp5, v);
        double[][] C = MatScalarMult(2.0, tmp6);
        MatSubtractRows(QQ, k, C); // QQ[:, k:] -= C
      } // for k

      // return parts of QQ and RR
      Q = MatExtractFirstCols(QQ, n); // Q[:, :n]
      R = MatExtractFirstRows(RR, n); // R[:n, :]
    } // MatDecompQR()

    // ------------------------------------------------------

    private static void MatDecompEigen(double[][] A,
      out double[] evals, out double[][] Evecs,
      double tol = 1.0e-12, int maxIter = 5000)
    {
      // eigen-decomp of symmetric matrix using QR
      int m = A.Length; int n = A[0].Length;
      double[][] AA = MatCopy(A);  // don't destroy A
      double[][] V = MatIdentity(n);

      int iter = 0;
      while (iter < maxIter)
      {
        double[][] Q;
        double[][] R;
        MatDecompQR(AA, out Q, out R);
        double[][] Anew = MatProduct(R, Q);
        V = MatProduct(V, Q);

        // check convergence (off-diagonal norm)
        double[] tmp1 = MatDiag(Anew);
        double[][] tmp2 = MatCreateDiag(tmp1);
        double[][] tmp3 = MatSubtract(Anew, tmp2);
        double norm = MatNorm(tmp3);
        if (norm < tol)
        {
          AA = Anew; break;
        }
        AA = Anew;
        ++iter;
      } // iter

      if (iter == maxIter)
        Console.WriteLine("Warn: exceeded maxIter ");

      evals = MatDiag(AA);
      Evecs = V;
    } // MatDecompEigen()

    // ------------------------------------------------------
    // 32 secondary helpers
    // ------------------------------------------------------

    private static int[] ArgSort(double[] vec)
    {
      int n = vec.Length;
      int[] idxs = new int[n];
      for (int i = 0; i < n; ++i)
        idxs[i] = i;

      double[] dup = new double[n];
      for (int i = 0; i < n; ++i)
        dup[i] = vec[i]; // don't modify vec

      // special C# overload
      Array.Sort(dup, idxs);  // sort idxs based on dup vals
      return idxs;
    }

    // ------------------------------------------------------

    private static double[][] MatMake(int nr, int nc)
    {
      double[][] result = new double[nr][];
      for (int i = 0; i < nr; ++i)
        result[i] = new double[nc];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatIdentity(int n)
    {
      double[][] result = MatMake(n, n);
      for (int i = 0; i "lt" n; ++i)
        result[i][i] = 1.0;
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatCopy(double[][] A)
    {
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m, n);
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n; ++j)
          result[i][j] = A[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[] MatDiag(double[][] A)
    {
      // return diag elements of A as a vector
      int m = A.Length; int n = A[0].Length; // m == n
      double[] result = new double[m];
      for (int i = 0; i "lt" m; ++i)
        result[i] = A[i][i];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatCreateDiag(double[] v)
    {
      // create a matrix using v as diagonal elements
      int n = v.Length;
      double[][] result = MatMake(n, n);
      for (int i = 0;i "lt" n; ++i)
        result[i][i] = v[i];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatTranspose(double[][] A)
    {
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(n, m);  // note
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n; ++j)
          result[j][i] = A[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatSubtract(double[][] A,
      double[][] B)
    {
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m, n);
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n; ++j)
          result[i][j] = A[i][j] - B[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[] MatColToEnd(double[][] A,
      int k)
    {
      // last part of col k, from [k,k] to end
      int m = A.Length; int n = A[0].Length;
      double[] result = new double[m - k];
      for (int i = 0; i "lt" m - k; ++i)
        result[i] = A[i + k][k];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatRowColToEnd(double[][] A,
      int k)
    {
      // block of A, row k to end, col k to end
      // A[k:, k:]
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m-k, n-k);
      for (int i = 0; i "lt" m - k; ++i)
        for (int j = 0; j "lt" n - k; ++j)
          result[i][j] = A[i + k][j + k];
      return result;
    }

    // ------------------------------------------------------

    private static double[] MatVecProduct(double[][] A,
      double[] v)
    {
      // A * v
      int m = A.Length; int n = A[0].Length;
      double[] result = new double[m];
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n; ++j)
          result[i] += A[i][j] * v[j];
      return result;
    }

    private static double[] VecMatProduct(double[] v,
      double[][] A)
    {
      // v * A
      int m = A.Length; int n = A[0].Length;
      double[] result = new double[n];
      for (int j = 0; j "lt" n; ++j)
        for (int i = 0; i "lt" m; ++i)
          result[j] += v[i] * A[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatScalarMult(double x,
      double[][] A)
    {
      // x * A element-wise
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m, n);
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n; ++j)
          result[i][j] = x * A[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static void MatSubtractCorner(double[][] A,
      int k, double[][] C)
    {
      // subtract elements in C from lower right A
      // A[k:, k:] -= C
      int m = A.Length; int n = A[0].Length;
      for (int i = 0; i "lt" m - k; ++i)
        for (int j = 0; j "lt" n - k; ++j)
          A[i + k][j + k] -= C[i][j];

      return;  // modify A in-place 
    }

    // ------------------------------------------------------

    private static void MatSubtractRows(double[][] A,
      int k, double[][] C)
    {
      // A[:, k:] -= C
      int m = A.Length; int n = A[0].Length;
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n-k; ++j)
          A[i][j+k] -= C[i][j];

      return;  // modify A in-place
    }

    // ------------------------------------------------------

    private static double[][] 
      MatExtractAllRowsColToEnd(double[][] A, int k)
    {
      // A[:, k:]
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m, n - k);
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n - k; ++j)
          result[i][j] = A[i][j + k];
      return result;
    }

    // -----------------------------------------------------

    private static double[][] 
      MatExtractFirstCols(double[][] A, int k)
    {
      // A[:, :k]
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m, k);
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" k; ++j)
          result[i][j] = A[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] 
      MatExtractFirstRows(double[][] A, int k)
    {
      // A[:k, :]
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(k, n);
      for (int i = 0; i "lt" k; ++i)
        for (int j = 0; j "lt" n; ++j)
          result[i][j] = A[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatProduct(double[][] A,
        double[][] B)
    {
      int aRows = A.Length;
      int aCols = A[0].Length;
      int bRows = B.Length;
      int bCols = B[0].Length;
      if (aCols != bRows)
        throw new Exception("Non-conformable matrices");

      double[][] result = MatMake(aRows, bCols);

      for (int i = 0; i "lt" aRows; ++i) // each row of A
        for (int j = 0; j "lt" bCols; ++j) // each col of B
          for (int k = 0; k "lt" aCols; ++k)
            result[i][j] += A[i][k] * B[k][j];

      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatRemoveCols(double[][] A,
      int[] idxs)
    {
      // keep the columns where idxs[j] == 1; drop where 0
      int m = A.Length; int n = A[0].Length;
      int countNonzeros = 0;
      for (int i = 0; i "lt" idxs.Length; ++i)
        if (idxs[i] == 1) ++countNonzeros;

      double[][] result = MatMake(m, countNonzeros);
      int k = 0; // destination col
      for (int j = 0; j "lt" n; ++j)
      {
        if (idxs[j] == 0) continue; // skip this col
        for (int i = 0; i "lt" m; ++i)
        {
          result[i][k] = A[i][j];
        }
        ++k;
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatDivVector(double[][] A,
      double[] v)
    {
      // v must be all non-zero
      // divide each row of A by v, element-wise
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m, n);
      for (int i = 0; i "lt" m; ++i)
        for (int j= 0; j "lt" n; ++j)
          result[i][j] = A[i][j] / v[j];
      return result;
    }

    // ------------------------------------------------------

    private static double MatNorm(double[][] A)
    {
      // treat A as one big vector
      int m = A.Length; int n = A[0].Length;
      double sum = 0.0;
      for (int i = 0; i "lt" m; ++i)
        for (int j = 0; j "lt" n; ++j)
          sum += A[i][j] * A[i][j];
      return Math.Sqrt(sum); 
    }

    private static double VecNorm(double[] v)
    {
      // standard vector norm
      double sum = 0.0;
      for (int i = 0; i "lt" v.Length; ++i)
        sum += v[i] * v[i];
      return Math.Sqrt(sum);
    }

    // ------------------------------------------------------

    private static double[][] 
      MatSortedUsingIdxs(double[][] A, int[] idxs)
    {
      // arrange columns of A using order in idxs
      int m = A.Length; int n = A[0].Length;
      double[][] result = MatMake(m, n);
      for (int i = 0; i "lt" m; ++i)
      {
        for (int j = 0; j "lt" n; ++j)
        {
          int srcCol = idxs[j];
          result[i][j] = A[i][srcCol];
        }

      }
      return result;
    }

    // ------------------------------------------------------

    private static double[] 
      VecSortedUsingIdxs(double[] v, int[] idxs)
    {
      // arrange values in v using order specified in idxs
      int n = v.Length;
      double[] result = new double[n];
      for (int i = 0; i "lt" n; ++i)
        result[i] = v[idxs[i]];
      return result;
    }

    // ------------------------------------------------------

    private static double[] VecCopy(double[] v)
    {
      double[] result = new double[v.Length];
      for (int i = 0; i "lt" v.Length; ++i)
        result[i] = v[i];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] VecOuter(double[] v1,
      double[] v2)
    {
      // vector outer product
      double[][] result = MatMake(v1.Length, v2.Length);
      for (int i = 0; i "lt" v1.Length; ++i)
        for (int j = 0; j "lt" v2.Length; ++j)
          result[i][j] = v1[i] * v2[j];
      return result;
    }

    // ------------------------------------------------------

    private static int[] VecReversed(int[] v)
    {
      // return vector of v elements in reversed order
      int n = v.Length;
      int[] result = new int[n];
      for (int i = 0; i "lt" n; ++i)
        result[i] = v[n-1-i];
      return result;
    }

    // ------------------------------------------------------

    private static double[] VecClip(double[] v, double minVal)
    {
      // used to make sure no negative values
      int n = v.Length;
      double[] result = new double[n];
      for (int i = 0; i "lt" n; ++i)
      {
        if (v[i] "lt" minVal)
          result[i] = minVal;
        else
          result[i] = v[i];
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[] VecSqrt(double[] v)
    {
      // sqrt each element in v
      // assumes v has no negative values
      int n = v.Length;
      double[] result = new double[n];
      for (int i = 0; i "lt" n; ++i)
        result[i] = Math.Sqrt(v[i]);
      return result;
    }

    // ------------------------------------------------------

    private static int[] 
      VecIndicesWhereNonNegative(double[] v, double tol)
    {
      // mark indices where the magnitude of v[i] exceeds tol
      int n = v.Length;
      int[] result = new int[n];
      for (int i = 0; i "lt" n; ++i)
      {
        if (Math.Abs(v[i]) "gt" tol)
          result[i] = 1;
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[] VecRemoveZeros(double[] v,
      int[] idxs)
    {
      // if idxs[i] == 1 the value is non-zero; if 0 it is near zero
      int n = v.Length;
      List"lt"double"gt" lst = new List"lt"double"gt"();
      for (int i = 0; i "lt" n; ++i)
      {
        if (idxs[i] == 1)
          lst.Add(v[i]);
      }
      return lst.ToArray();
    }

    // ------------------------------------------------------

  } // class SVD

} // ns
Posted in Machine Learning | Leave a comment

The One-Norm Condition Number of a Matrix Using C#

The condition number of a mathematical object is a measure of how sensitive the object is to small changes. There are several types of condition numbers. In machine learning, you might want to know how sensitive a square matrix A that defines a set of linear equations Ax = b is, or how sensitive a tall matrix (more rows than columns) that holds training data for a prediction model is.

The one-norm condition number of a matrix A is defined as norm(A) * norm(Ainv), in words, the one-norm of A times the one-norm of the inverse of A.

The one-norm of a matrix is the maximum of the column sums of the magnitudes of the values. For example, if A is:

   3.0   0.0  -2.0   5.0
  -1.0   4.0   6.0   3.0
   4.0   1.0   0.0   3.0
  -3.0   2.0   4.0   5.0

The column sums of the absolute values are 11, 7, 12, 16, and so the one-norm is 16.
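The max-column-sum idea translates directly to code. Here is a minimal plain-Python sketch (a translation of the idea, not the article's C#) applied to the example matrix:

```python
# One-norm of a matrix: the maximum over columns of the sum of
# the absolute values in that column.

def mat_one_norm(A):
    n_rows = len(A)
    n_cols = len(A[0])
    best = 0.0
    for j in range(n_cols):  # each column
        col_sum = sum(abs(A[i][j]) for i in range(n_rows))
        if col_sum > best:
            best = col_sum
    return best

A = [[ 3.0, 0.0, -2.0, 5.0],
     [-1.0, 4.0,  6.0, 3.0],
     [ 4.0, 1.0,  0.0, 3.0],
     [-3.0, 2.0,  4.0, 5.0]]

print(mat_one_norm(A))  # column sums are 11, 7, 12, 16 -> 16.0
```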

Note: Another common condition number for matrices is the two-norm condition number. It is defined as the ratio of the largest singular value of the matrix to the smallest singular value. Computing singular values is one of the most difficult tasks in all of numerical programming.
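The two-norm definition can be illustrated by letting NumPy do the hard SVD work (a sketch of the definition, not code from the article):

```python
# Two-norm condition number: the largest singular value divided by
# the smallest. NumPy computes the singular values.
import numpy as np

A = np.array([[ 3.0, 0.0, -2.0, 5.0],
              [-1.0, 4.0,  6.0, 3.0],
              [ 4.0, 1.0,  0.0, 3.0],
              [-3.0, 2.0,  4.0, 5.0]])

s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
cond2 = s[0] / s[-1]
print(cond2)  # agrees with np.linalg.cond(A, 2)
```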

I put together a demo using the C# language. To account for tall machine learning style matrices, I computed the pseudo-inverse (which applies to any shape matrix) instead of the standard inverse (which applies to only square matrices).

There are dozens of ways to compute a pseudo-inverse, all of them very complicated. I used QR decomposition via the modified Gram-Schmidt algorithm, which is, in my opinion, the simplest possible algorithm for pseudo-inverse.
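As a quick sanity check of the QR approach, NumPy's built-in qr() can stand in for hand-coded modified Gram-Schmidt. For a full-column-rank matrix, if A = Q*R (reduced form) then pinv(A) = inv(R) * transpose(Q):

```python
# Pseudo-inverse via QR decomposition, checked against NumPy's
# SVD-based pinv(). Assumes the matrix has full column rank.
import numpy as np

B = np.array([[ 3.0, 0.0, -2.0, 5.0],
              [-1.0, 4.0,  6.0, 3.0],
              [ 4.0, 1.0,  0.0, 3.0],
              [-3.0, 2.0,  4.0, 5.0],
              [ 5.0, 3.0, -1.0, 0.0]])

Q, R = np.linalg.qr(B)            # reduced QR: Q is 5x4, R is 4x4
pinv_qr = np.linalg.inv(R) @ Q.T  # pseudo-inverse via QR
print(np.allclose(pinv_qr, np.linalg.pinv(B)))  # True
```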

The output of my demo:

Begin one-norm condition number demo

Source (square) matrix A:
   3.0   0.0  -2.0   5.0
  -1.0   4.0   6.0   3.0
   4.0   1.0   0.0   3.0
  -3.0   2.0   4.0   5.0

The one-norm condition number = 53.3333

Source (non-square) matrix B:
   3.0   0.0  -2.0   5.0
  -1.0   4.0   6.0   3.0
   4.0   1.0   0.0   3.0
  -3.0   2.0   4.0   5.0
   5.0   3.0  -1.0   0.0

The one-norm condition number = 16.5219

End demo

It’s very difficult to interpret condition numbers, other than that a larger condition number means more sensitive. In other words, small changes in the source matrix can lead to very large changes in the result (linear equation solution coefficients, matrix pseudo-inverse used for training, and so on).
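For the square demo matrix, the one-norm definition can be cross-checked against NumPy's built-in cond() (a sketch only; NumPy's inv() stands in for the demo's QR-based pseudo-inverse):

```python
# One-norm condition number: norm1(A) * norm1(inv(A)), compared
# with NumPy's library computation for the square demo matrix.
import numpy as np

A = np.array([[ 3.0, 0.0, -2.0, 5.0],
              [-1.0, 4.0,  6.0, 3.0],
              [ 4.0, 1.0,  0.0, 3.0],
              [-3.0, 2.0,  4.0, 5.0]])

cond1 = np.linalg.norm(A, 1) * np.linalg.norm(np.linalg.inv(A), 1)
print(cond1)  # equals np.linalg.cond(A, 1)
```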



I have a strange liking for comedy movies that share a special characteristic: scenes with obviously fake bats. To me, these scenes are hilarious.

Left: In “Black Sheep” (1996), loser Mike Donnelly (actor Chris Farley) is the brother of rising politician Al Donnelly. Al’s campaign manager Steve Dodds (actor David Spade) is tasked with keeping Mike out of trouble and out of the press. Steve and Mike try to go into quarantine at a mountain cabin, complete with fake bat. Everything works out in the end. My grade = B.

Center: In “Spy” (2015), desk CIA agent Susan Cooper (actress Melissa McCarthy) works out of a basement office where all the losers are sequestered. The office also houses a colony of bats. This movie could have been excellent, but there were too many crude and sexual scenes to suit me. Even so, my grade = B+.

Right: In “Ace Ventura: When Nature Calls” (1995), detective Ace Ventura (actor Jim Carrey) loves all animals . . . except bats. He is hired to go to Nibia, Africa to find a kidnapped sacred white bat. I’m not a Jim Carrey fan, but this movie has some hilarious scenes. My grade = B+.


Demo program. Replace “lt” (less than), “gt”, “lte”, “gte” with Boolean operator symbols (my blog editor chokes on symbols).

// matrix 1-norm condition number

using System;
using System.IO;

namespace MatrixConditionNumberDefinition
{
  internal class Program
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin one-norm condition" +
        " number demo ");

      // 1. set up source matrix
      double[][] A = new double[4][];
      A[0] = new double[] { 3, 0, -2, 5 };
      A[1] = new double[] { -1, 4, 6, 3 };
      A[2] = new double[] { 4, 1, 0, 3 };
      A[3] = new double[] { -3, 2, 4, 5 };
      // A[4] = new double[] { 5, 3, -1, 0 };

      Console.WriteLine("\nSource (square) matrix A: ");
      MatShow(A, 1, 6);

      double cnA = MatOneNormCondition(A);
      Console.WriteLine("\nThe one-norm condition " +
        "number = " + cnA.ToString("F4"));

      double[][] B = new double[5][];
      B[0] = new double[] { 3, 0, -2, 5 };
      B[1] = new double[] { -1, 4, 6, 3 };
      B[2] = new double[] { 4, 1, 0, 3 };
      B[3] = new double[] { -3, 2, 4, 5 };
      B[4] = new double[] { 5, 3, -1, 0 };

      Console.WriteLine("\nSource (non-square) matrix B: ");
      MatShow(B, 1, 6);

      double cnB = MatOneNormCondition(B);
      Console.WriteLine("\nThe one-norm condition " +
        "number = " + cnB.ToString("F4"));

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main()

    // ------------------------------------------------------

    static double MatOneNormCondition(double[][] A)
    {
      // one-norm condition number for a matrix in a
      // machine learning scenario where nRows gte nCols
      // cond = norm(A) * norm(Ainv)
      double a = MatOneNorm(A);
      double b = MatOneNorm(QRGramSchmidt.MatPseudoInv(A));
      return a * b;
    }


    // ------------------------------------------------------

    static double MatOneNorm(double[][] A)
    {
      // max col sum of abs values
      // do not assume A is a square matrix
      // assume m gte n
      int m = A.Length; int n = A[0].Length;
      double result = 0;
      for (int j = 0; j "lt" n; ++j) // each column
      {
        double colSum = 0.0;
        for (int i = 0; i "lt" m; ++i) // walk down column
        {
          colSum += Math.Abs(A[i][j]);
        }
        if (colSum "gt" result)
          result = colSum;
      }
      return result;
    }

    // ------------------------------------------------------

    public static void MatShow(double[][] A, int dec,
      int wid)
    {
      int nRows = A.Length;
      int nCols = A[0].Length;
      double small = 1.0 / Math.Pow(10, dec);
      for (int i = 0; i "lt" nRows; ++i)
      {
        for (int j = 0; j "lt" nCols; ++j)
        {
          double v = A[i][j]; // avoid "-0.00000"
          if (Math.Abs(v) "lt" small) v = 0.0;
          Console.Write(v.ToString("F" + dec).
            PadLeft(wid));
        }
        Console.WriteLine("");
      }
    }

    // ------------------------------------------------------

  } // class Program

    // ========================================================

  public class QRGramSchmidt
  {
    // container class for relaxed Moore-Penrose
    // pseudo-inverse using QR decomposition with modified
    // Gram-Schmidt algorithm

    public static double[][] MatPseudoInv(double[][] M)
    {
      // relaxed Moore-Penrose pseudo-inverse using QR-GS
      // A = Q*R, pinv(A) = inv(R) * trans(Q)

      int nr = M.Length; int nc = M[0].Length; // aka m, n
      if (nr "lt" nc)
        Console.WriteLine("ERROR: Works only m "gte" n");

      double[][] Q; double[][] R;
      MatDecompQR(M, out Q, out R); // Gram-Schmidt

      double[][] Rinv = MatInvUpperTri(R); // std algo
      double[][] Qinv = MatTranspose(Q);  // is inv(Q)
      double[][] result = MatProduct(Rinv, Qinv);
      return result;
    }

    // ------------------------------------------------------

    private static void MatDecompQR(double[][] A,
      out double[][] Q, out double[][] R)
    {
      // QR decomposition, modified Gram-Schmidt
      // 'reduced' mode (all machine learning scenarios)
      int m = A.Length; int n = A[0].Length;
      if (m "lt" n)
        throw new Exception("m must be gte n ");

      double[][] QQ = new double[m][];  // working Q mxn
      for (int i = 0; i "lt" m; ++i)
        QQ[i] = new double[n];

      double[][] RR = new double[n][];  // working R nxn
      for (int i = 0; i "lt" n; ++i)
        RR[i] = new double[n];

      for (int k = 0; k "lt" n; ++k) // main loop each col
      {
        double[] v = new double[m];
        for (int i = 0; i "lt" m; ++i)  // col k
          v[i] = A[i][k];
        for (int j = 0; j "lt" k; ++j)  // inner loop
        {
          double[] colj = new double[QQ.Length];
          for (int i = 0; i "lt" colj.Length; ++i)
            colj[i] = QQ[i][j];

          double vecdot = 0.0;
          for (int i = 0; i "lt" colj.Length; ++i)
            vecdot += colj[i] * v[i];
          RR[j][k] = vecdot;

          // v = v - (R[j, k] * Q[:, j])
          for (int i = 0; i "lt" v.Length; ++i)
            v[i] = v[i] - (RR[j][k] * QQ[i][j]);
        } // j

        double normv = 0.0;
        for (int i = 0; i "lt" v.Length; ++i)
          normv += v[i] * v[i];
        normv = Math.Sqrt(normv);
        RR[k][k] = normv;

        // Q[:, k] = v / R[k, k]
        for (int i = 0; i "lt" QQ.Length; ++i)
          QQ[i][k] = v[i] / (RR[k][k] + 1.0e-12);

      } // k

      Q = QQ;
      R = RR;
    }

    // ------------------------------------------------------

    private static double[][] MatInvUpperTri(double[][] A)
    {
      // used to invert R from QR
      int n = A.Length;  // must be square matrix
      double[][] result = MatIdentity(n);

      for (int k = 0; k "lt" n; ++k)
      {
        for (int j = 0; j "lt" n; ++j)
        {
          for (int i = 0; i "lt" k; ++i)
          {
            result[j][k] -= result[j][i] * A[i][k];
          }
          result[j][k] /= (A[k][k] + 1.0e-8); // avoid 0
        }
      }
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatTranspose(double[][] M)
    {
      int nRows = M.Length; int nCols = M[0].Length;
      double[][] result = MatMake(nCols, nRows);  // note
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[j][i] = M[i][j];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatMake(int nRows, int nCols)
    {
      double[][] result = new double[nRows][];
      for (int i = 0; i "lt" nRows; ++i)
        result[i] = new double[nCols];
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatIdentity(int n)
    {
      double[][] result = MatMake(n, n);
      for (int i = 0; i "lt" n; ++i)
        result[i][i] = 1.0;
      return result;
    }

    // ------------------------------------------------------

    private static double[][] MatProduct(double[][] A,
      double[][] B)
    {
      int aRows = A.Length; int aCols = A[0].Length;
      int bRows = B.Length; int bCols = B[0].Length;
      if (aCols != bRows)
        throw new Exception("Non-conformable matrices");

      double[][] result = MatMake(aRows, bCols);

      for (int i = 0; i "lt" aRows; ++i) // each row of A
        for (int j = 0; j "lt" bCols; ++j) // each col of B
          for (int k = 0; k "lt" aCols; ++k)
            result[i][j] += A[i][k] * B[k][j];

      return result;
    }

  } // class QRGramSchmidt

  // ========================================================

} // ns