Two Reasons Why Drop-First Encoding for Linear Regression Categorical Predictor Variables is Preferable to One-Hot Encoding

Bottom line: When using linear regression on data that has categorical predictor variables, you must use drop-first encoding instead of one-hot encoding if you train the model using a closed-form pseudo-inverse technique (left pseudo-inverse, Moore-Penrose pseudo-inverse). If you use an iterative technique such as SGD or L-BFGS to train, you can use either one-hot encoding or drop-first encoding, but drop-first is preferred because it gives a more interpretable model. In short, for linear regression, drop-first encoding of categorical predictors is the preferred encoding technique.

The goal of a machine learning regression model is to predict a single numeric value. For example, suppose you have data that looks like:

F, 24, michigan, 29500.00, liberal
M, 39, oklahoma, 51200.00, moderate
F, 63, nebraska, 75800.00, conservative
M, 36, michigan, 44500.00, moderate
F, 27, nebraska, 28600.00, liberal
. . .

The fields are sex, age, State (Michigan, Nebraska, or Oklahoma), annual income, and political leaning (conservative, moderate, or liberal). And suppose you want to predict income from the other four variables.

The most fundamental regression technique is linear regression.

In most scenarios, the best way to normalize and encode the data for linear regression is to use divide-by-constant normalization and drop-first encoding:

1, 0.24, 0 0, 0.29500, 0 1
0, 0.39, 0 1, 0.51200, 1 0
1, 0.63, 1 0, 0.75800, 0 0
0, 0.36, 0 0, 0.44500, 1 0
1, 0.27, 1 0, 0.28600, 0 1
. . .

The age values are divided by 100, and the income values are divided by 100,000. The binary sex values are encoded as M = 0, F = 1.
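Divide-by-constant normalization is just a fixed scaling chosen so that values land in roughly [0, 1]. A minimal sketch of the idea (the divisors 100 and 100,000 come from the text above; the variable names are mine):

```csharp
using System;

class NormalizeDemo
{
  static void Main()
  {
    double age = 24.0;
    double income = 29500.00;

    // divide-by-constant normalization: fixed divisors, trivial to invert
    double normAge = age / 100.0;          // 0.24
    double normIncome = income / 100000.0; // 0.295

    Console.WriteLine(normAge + " " + normIncome);
  }
}
```

Unlike min-max or z-score normalization, the divisors don't depend on the training data, so new data can be normalized without storing any statistics.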

If the three State values were one-hot encoded, they would be Michigan = 100, Nebraska = 010, Oklahoma = 001, but drop-first drops the first digit, giving Michigan = 00, Nebraska = 10, Oklahoma = 01.

If the three political leaning values were one-hot encoded, they would be conservative = 100, moderate = 010, liberal = 001, but drop-first drops the first digit, giving conservative = 00, moderate = 10, liberal = 01.
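The two mappings can be sketched as a small helper. This is just an illustration, not part of the demo program below, and the method names are mine:

```csharp
using System;

class EncodingDemo
{
  // one-hot: N categories -> N columns, a single 1 at the category index
  static double[] OneHot(int idx, int nCats)
  {
    double[] result = new double[nCats];
    result[idx] = 1.0;
    return result;
  }

  // drop-first: N categories -> N-1 columns; category 0 is all zeros,
  // category k (k >= 1) has a 1 at position k-1
  static double[] DropFirst(int idx, int nCats)
  {
    double[] result = new double[nCats - 1];
    if (idx > 0) result[idx - 1] = 1.0;
    return result;
  }

  static void Main()
  {
    // states in order: michigan = 0, nebraska = 1, oklahoma = 2
    string[] states = { "michigan", "nebraska", "oklahoma" };
    for (int i = 0; i < states.Length; ++i)
      Console.WriteLine(states[i] + ": one-hot = " +
        String.Join(" ", OneHot(i, 3)) + "  drop-first = " +
        String.Join(" ", DropFirst(i, 3)));
  }
}
```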

One-hot encoding includes redundant information: both one-hot encoding and drop-first encoding unambiguously identify a variable's value, but drop-first does so using one less bit of information.

Now, if you train a linear regression model using a pseudo-inverse (either the left pseudo-inverse via the normal equations, or the Moore-Penrose pseudo-inverse via SVD or QR decomposition), one-hot encoding creates multicollinearity in the data and the matrix inverse operation will usually fail. So, when using a pseudo-inverse training technique, drop-first encoding is pretty much required.
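A quick way to see the multicollinearity: with one-hot encoding, the state columns in each row always sum to 1, which exactly equals the constant bias column of the design matrix, so X'X is singular and has no inverse. A minimal check (my own sketch, using a tiny made-up design matrix, not part of the demo):

```csharp
using System;

class SingularityDemo
{
  // determinant by Laplace expansion -- fine for tiny matrices
  static double Det(double[][] m)
  {
    int n = m.Length;
    if (n == 1) return m[0][0];
    double sum = 0.0;
    for (int c = 0; c < n; ++c)
    {
      double[][] minor = new double[n - 1][];
      for (int i = 1; i < n; ++i)
      {
        minor[i - 1] = new double[n - 1];
        int k = 0;
        for (int j = 0; j < n; ++j)
          if (j != c) minor[i - 1][k++] = m[i][j];
      }
      sum += (c % 2 == 0 ? 1 : -1) * m[0][c] * Det(minor);
    }
    return sum;
  }

  static void Main()
  {
    // design matrix: bias column + one-hot state columns (two rows per state)
    // each row: bias column (1) equals the sum of the three state columns
    double[][] X = new double[][] {
      new double[] { 1, 1, 0, 0 }, new double[] { 1, 1, 0, 0 },
      new double[] { 1, 0, 1, 0 }, new double[] { 1, 0, 1, 0 },
      new double[] { 1, 0, 0, 1 }, new double[] { 1, 0, 0, 1 } };

    // XtX = X' * X
    int dim = 4;
    double[][] XtX = new double[dim][];
    for (int i = 0; i < dim; ++i)
    {
      XtX[i] = new double[dim];
      for (int j = 0; j < dim; ++j)
        for (int r = 0; r < X.Length; ++r)
          XtX[i][j] += X[r][i] * X[r][j];
    }

    Console.WriteLine("det(X'X) = " + Det(XtX));  // 0 -- not invertible
  }
}
```

Dropping the first state column removes the linear dependence, and X'X becomes invertible.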

If you train a linear regression model using stochastic gradient descent (SGD), you don't run into the matrix inverse failure issue, so one-hot encoding will work. However, because one-hot encoding contains redundant information, different SGD runs can produce wildly different model weights and bias values that nevertheless give the same prediction accuracy. So, if you use one-hot encoding and SGD training, you get a model that predicts well, but one where the weights and bias can't be interpreted well. Therefore, you might as well just use drop-first encoding.
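The non-identifiability is easy to verify directly: because exactly one of the one-hot state columns is 1 in every row, adding a constant c to each state weight and subtracting c from the bias leaves every prediction unchanged. A minimal sketch with hypothetical weight values (not values from the demo run):

```csharp
using System;

class RedundancyDemo
{
  static double Predict(double[] x, double[] wts, double bias)
  {
    double sum = bias;
    for (int j = 0; j < x.Length; ++j)
      sum += x[j] * wts[j];
    return sum;
  }

  static void Main()
  {
    // x = (age, michigan, nebraska, oklahoma) with one-hot state = oklahoma
    double[] x = new double[] { 0.39, 0.0, 0.0, 1.0 };

    double[] wts1 = new double[] { 0.80, 0.10, 0.20, 0.30 };
    double bias1 = 0.05;

    // shift every state weight by c = 0.5, absorb -c into the bias
    double c = 0.5;
    double[] wts2 = new double[] { 0.80, 0.10 + c, 0.20 + c, 0.30 + c };
    double bias2 = bias1 - c;

    Console.WriteLine(Predict(x, wts1, bias1));  // same prediction
    Console.WriteLine(Predict(x, wts2, bias2));  // same prediction
  }
}
```

Since every c gives a different but equally good model, SGD has no reason to prefer one set of weights over another, which is why run-to-run weight values vary so much with one-hot encoding.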

Note that for neural networks, one-hot encoding is typically used with SGD training because neural networks can't be interpreted easily anyway, so drop-first encoding provides no advantage.


It’s important to pay attention to details when working with machine learning. I love old science fiction movies from the 1950s. Minor characters can make a big difference in movies.

A relatively obscure actor named John Zaremba (1908-1986) played a supporting role in three of my favorites. He added a lot to the final result.

Left: In “Earth vs. the Flying Saucers” (1956), Zaremba plays Professor Kanter, a scientist who helped develop a sonic weapon to defeat an alien invasion.

Center: In “The Magnetic Monster” (1953), Zaremba plays Engineer Watson, the chief engineer for the Department of Power and Light. He receives the initial phone call alerting authorities to the existence of a new man-made element that doubles in mass every 11 hours, threatening to throw Earth out of orbit.

Right: In “20 Million Miles to Earth” (1957), Zaremba plays Dr. Judson Uhl, a scientist involved with a secret manned mission to Venus. The spacecraft brings back a small Venusian creature who grows to giant size and causes havoc.


Demo program. Replace “lt” (less than), “gt”, “lte”, “gte” with the corresponding comparison operator symbols. (My blog editor chokes on symbols).

using System;
using System.IO;
using System.Collections.Generic;

namespace LinearRegressionCategorical
{
  internal class LinearRegressionProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin C# linear regression ");
      Console.WriteLine("Predict salary from sex, age," +
        " State, politics ");

      // 1. load data
      Console.WriteLine("\nLoading people train" +
        " (200) and test (40) data");
      string trainFile =
        "C:\\VSM\\LinearRegressionCategorical\\" +
        "Data\\people_train_df.txt";
      int[] colsX = new int[] { 0, 1, 2, 3, 5, 6 };
      int colY = 4;
      double[][] trainX =
        MatLoad(trainFile, colsX, ',', "#");
      double[] trainY =
        MatToVec(MatLoad(trainFile,
        new int[] { colY }, ',', "#"));

      string testFile =
        "C:\\VSM\\LinearRegressionCategorical\\" +
        "Data\\people_test_df.txt";
      double[][] testX =
        MatLoad(testFile, colsX, ',', "#");
      double[] testY =
        MatToVec(MatLoad(testFile,
        new int[] { colY }, ',', "#"));
      Console.WriteLine("Done ");

      Console.WriteLine("\nFirst three train X: ");
      for (int i = 0; i "lt" 3; ++i)
        VecShow(trainX[i], 4, 8);

      Console.WriteLine("\nFirst three train y: ");
      for (int i = 0; i "lt" 3; ++i)
        Console.WriteLine(trainY[i].ToString("F5").
          PadLeft(8));

      // 2. create and train model
      double lrnRate = 0.001;
      int maxEpochs = 3000;
      int seed = 0;
      Console.WriteLine("\nSetting lrnRate = " +
        lrnRate.ToString("F4"));
      Console.WriteLine("Setting maxEpochs = " +
        maxEpochs);

      Console.WriteLine("\nCreating and training" +
        " Linear Regression model using SGD ");
      LinearRegressor model =
        new LinearRegressor(seed);
      model.Train(trainX, trainY, lrnRate, maxEpochs);
      Console.WriteLine("Done ");

      // 2b. show model parameters
      Console.WriteLine("\nCoefficients/weights: ");
      for (int i = 0; i "lt" model.weights.Length; ++i)
        Console.Write(model.weights[i].ToString("F4") + "  ");
      Console.WriteLine("\nBias/constant: " +
        model.bias.ToString("F4"));

      // 3. evaluate model
      Console.WriteLine("\nEvaluating model ");
      double accTrain = model.Accuracy(trainX, trainY, 0.10);
      Console.WriteLine("\nAccuracy train (within 0.10) = " +
        accTrain.ToString("F4"));
      double accTest = model.Accuracy(testX, testY, 0.10);
      Console.WriteLine("Accuracy test (within 0.10) = " +
        accTest.ToString("F4"));

      double mseTrain = model.MSE(trainX, trainY);
      Console.WriteLine("\nMSE train = " +
        mseTrain.ToString("F6"));
      double mseTest = model.MSE(testX, testY);
      Console.WriteLine("MSE test = " +
        mseTest.ToString("F6"));

      // 4. use model to predict first training item
      double[] x = trainX[0];
      Console.WriteLine("\nPredicting for x = ");
      VecShow(x, 4, 9);
      double predY = model.Predict(x);
      Console.WriteLine("\nPredicted y = " +
        predY.ToString("F5"));

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();

    } // Main()

    // ------------------------------------------------------
    // helpers for Main()
    // ------------------------------------------------------

    static double[][] MatLoad(string fn, int[] usecols,
      char sep, string comment)
    {
      List"lt"double[]"gt" result = 
        new List"lt"double[]"gt"();
      string line = "";
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        string[] tokens = line.Split(sep);
        List"lt"double"gt" lst = new List"lt"double"gt"();
        for (int j = 0; j "lt" usecols.Length; ++j)
          lst.Add(double.Parse(tokens[usecols[j]]));
        double[] row = lst.ToArray();
        result.Add(row);
      }
      sr.Close(); ifs.Close();
      return result.ToArray();
    }

    static double[] MatToVec(double[][] mat)
    {
      int nRows = mat.Length;
      int nCols = mat[0].Length;
      double[] result = new double[nRows * nCols];
      int k = 0;
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[k++] = mat[i][j];
      return result;
    }

    static void VecShow(double[] vec, int dec, int wid)
    {
      for (int i = 0; i "lt" vec.Length; ++i)
        Console.Write(vec[i].ToString("F" + dec).
          PadLeft(wid));
      Console.WriteLine("");
    }

  } // class LinearRegressionProgram

  public class LinearRegressor
  {
    public double[] weights;
    public double bias;
    private Random rnd;

    public LinearRegressor(int seed = 0)
    {
      this.weights = new double[0];
      this.bias = 0;
      this.rnd = new Random(seed);
    }

    public void Train(double[][] trainX, double[] trainY,
      double lrnRate, int maxEpochs)
    {
      int n = trainX.Length;
      int dim = trainX[0].Length;
      this.weights = new double[dim];
      double low = -0.01; double hi = 0.01;
      for (int i = 0; i "lt" dim; ++i)
        this.weights[i] = (hi - low) *
          this.rnd.NextDouble() + low;
      this.bias = (hi - low) *
          this.rnd.NextDouble() + low;

      int[] indices = new int[n];
      for (int i = 0; i "lt" n; ++i)
        indices[i] = i;

      for (int epoch = 0; epoch "lt" maxEpochs; ++epoch)
      {
        Shuffle(indices, this.rnd);
        for (int i = 0; i "lt" n; ++i)
        {
          int idx = indices[i];
          double[] x = trainX[idx];
          double predY = this.Predict(x);
          double actualY = trainY[idx];
          for (int j = 0; j "lt" dim; ++j)
            this.weights[j] -= lrnRate *
              (predY - actualY) * x[j];
          this.bias -= lrnRate * (predY - actualY);
        }
        if (epoch % (int)(maxEpochs / 5) == 0)
        {
          double mse = this.MSE(trainX, trainY);
          string s1 = "epoch = " + 
            epoch.ToString().PadLeft(5);
          string s2 = "  MSE = " + 
            mse.ToString("F6").PadLeft(8);
          Console.WriteLine(s1 + s2);
        }
      }
    }

    public double Predict(double[] x)
    {
      double result = 0.0;
      for (int j = 0; j "lt" x.Length; ++j)
        result += x[j] * this.weights[j];
      result += this.bias;
      return result;
    }

    public double Accuracy(double[][] dataX, double[] dataY,
      double pctClose)
    {
      int numCorrect = 0; int numWrong = 0;
      for (int i = 0; i "lt" dataX.Length; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        if (Math.Abs(predY - actualY) "lt"
          Math.Abs(pctClose * actualY))
          ++numCorrect;
        else
          ++numWrong;
      }
      return (numCorrect * 1.0) / (numWrong + numCorrect);
    }

    public double MSE(double[][] dataX, double[] dataY)
    {
      int n = dataX.Length;
      double sum = 0.0;
      for (int i = 0; i "lt" n; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        sum += (actualY - predY) * (actualY - predY);
      }
      return sum / n;
    }

    private static void Shuffle(int[] indices, Random rnd)
    {
      // Fisher-Yates
      int n = indices.Length;
      for (int i = 0; i "lt" n; ++i)
      {
        int ri = rnd.Next(i, n);
        int tmp = indices[i];
        indices[i] = indices[ri];
        indices[ri] = tmp;
      }
    }

  } // class LinearRegressor


} // ns

Training data:

# people_train_df.txt
# sex (0 = male, 1 = female)
# age, state (michigan, nebraska, oklahoma), income,
# politics type (conservative, moderate, liberal)
#
1, 0.24, 0, 0, 0.29500, 0, 1
0, 0.39, 0, 1, 0.51200, 1, 0
1, 0.63, 1, 0, 0.75800, 0, 0
0, 0.36, 0, 0, 0.44500, 1, 0
1, 0.27, 1, 0, 0.28600, 0, 1
1, 0.50, 1, 0, 0.56500, 1, 0
1, 0.50, 0, 1, 0.55000, 1, 0
0, 0.19, 0, 1, 0.32700, 0, 0
1, 0.22, 1, 0, 0.27700, 1, 0
0, 0.39, 0, 1, 0.47100, 0, 1
1, 0.34, 0, 0, 0.39400, 1, 0
0, 0.22, 0, 0, 0.33500, 0, 0
1, 0.35, 0, 1, 0.35200, 0, 1
0, 0.33, 1, 0, 0.46400, 1, 0
1, 0.45, 1, 0, 0.54100, 1, 0
1, 0.42, 1, 0, 0.50700, 1, 0
0, 0.33, 1, 0, 0.46800, 1, 0
1, 0.25, 0, 1, 0.30000, 1, 0
0, 0.31, 1, 0, 0.46400, 0, 0
1, 0.27, 0, 0, 0.32500, 0, 1
1, 0.48, 0, 0, 0.54000, 1, 0
0, 0.64, 1, 0, 0.71300, 0, 1
1, 0.61, 1, 0, 0.72400, 0, 0
1, 0.54, 0, 1, 0.61000, 0, 0
1, 0.29, 0, 0, 0.36300, 0, 0
1, 0.50, 0, 1, 0.55000, 1, 0
1, 0.55, 0, 1, 0.62500, 0, 0
1, 0.40, 0, 0, 0.52400, 0, 0
1, 0.22, 0, 0, 0.23600, 0, 1
1, 0.68, 1, 0, 0.78400, 0, 0
0, 0.60, 0, 0, 0.71700, 0, 1
0, 0.34, 0, 1, 0.46500, 1, 0
0, 0.25, 0, 1, 0.37100, 0, 0
0, 0.31, 1, 0, 0.48900, 1, 0
1, 0.43, 0, 1, 0.48000, 1, 0
1, 0.58, 1, 0, 0.65400, 0, 1
0, 0.55, 1, 0, 0.60700, 0, 1
0, 0.43, 1, 0, 0.51100, 1, 0
0, 0.43, 0, 1, 0.53200, 1, 0
0, 0.21, 0, 0, 0.37200, 0, 0
1, 0.55, 0, 1, 0.64600, 0, 0
1, 0.64, 1, 0, 0.74800, 0, 0
0, 0.41, 0, 0, 0.58800, 1, 0
1, 0.64, 0, 1, 0.72700, 0, 0
0, 0.56, 0, 1, 0.66600, 0, 1
1, 0.31, 0, 1, 0.36000, 1, 0
0, 0.65, 0, 1, 0.70100, 0, 1
1, 0.55, 0, 1, 0.64300, 0, 0
0, 0.25, 0, 0, 0.40300, 0, 0
1, 0.46, 0, 1, 0.51000, 1, 0
0, 0.36, 0, 0, 0.53500, 0, 0
1, 0.52, 1, 0, 0.58100, 1, 0
1, 0.61, 0, 1, 0.67900, 0, 0
1, 0.57, 0, 1, 0.65700, 0, 0
0, 0.46, 1, 0, 0.52600, 1, 0
0, 0.62, 0, 0, 0.66800, 0, 1
1, 0.55, 0, 1, 0.62700, 0, 0
0, 0.22, 0, 1, 0.27700, 1, 0
0, 0.50, 0, 0, 0.62900, 0, 0
0, 0.32, 1, 0, 0.41800, 1, 0
0, 0.21, 0, 1, 0.35600, 0, 0
1, 0.44, 1, 0, 0.52000, 1, 0
1, 0.46, 1, 0, 0.51700, 1, 0
1, 0.62, 1, 0, 0.69700, 0, 0
1, 0.57, 1, 0, 0.66400, 0, 0
0, 0.67, 0, 1, 0.75800, 0, 1
1, 0.29, 0, 0, 0.34300, 0, 1
1, 0.53, 0, 0, 0.60100, 0, 0
0, 0.44, 0, 0, 0.54800, 1, 0
1, 0.46, 1, 0, 0.52300, 1, 0
0, 0.20, 1, 0, 0.30100, 1, 0
0, 0.38, 0, 0, 0.53500, 1, 0
1, 0.50, 1, 0, 0.58600, 1, 0
1, 0.33, 1, 0, 0.42500, 1, 0
0, 0.33, 1, 0, 0.39300, 1, 0
1, 0.26, 1, 0, 0.40400, 0, 0
1, 0.58, 0, 0, 0.70700, 0, 0
1, 0.43, 0, 1, 0.48000, 1, 0
0, 0.46, 0, 0, 0.64400, 0, 0
1, 0.60, 0, 0, 0.71700, 0, 0
0, 0.42, 0, 0, 0.48900, 1, 0
0, 0.56, 0, 1, 0.56400, 0, 1
0, 0.62, 1, 0, 0.66300, 0, 1
0, 0.50, 0, 0, 0.64800, 1, 0
1, 0.47, 0, 1, 0.52000, 1, 0
0, 0.67, 1, 0, 0.80400, 0, 1
0, 0.40, 0, 1, 0.50400, 1, 0
1, 0.42, 1, 0, 0.48400, 1, 0
1, 0.64, 0, 0, 0.72000, 0, 0
0, 0.47, 0, 0, 0.58700, 0, 1
1, 0.45, 1, 0, 0.52800, 1, 0
0, 0.25, 0, 1, 0.40900, 0, 0
1, 0.38, 0, 0, 0.48400, 0, 0
1, 0.55, 0, 1, 0.60000, 1, 0
0, 0.44, 0, 0, 0.60600, 1, 0
1, 0.33, 0, 0, 0.41000, 1, 0
1, 0.34, 0, 1, 0.39000, 1, 0
1, 0.27, 1, 0, 0.33700, 0, 1
1, 0.32, 1, 0, 0.40700, 1, 0
1, 0.42, 0, 1, 0.47000, 1, 0
0, 0.24, 0, 1, 0.40300, 0, 0
1, 0.42, 1, 0, 0.50300, 1, 0
1, 0.25, 0, 1, 0.28000, 0, 1
1, 0.51, 1, 0, 0.58000, 1, 0
0, 0.55, 1, 0, 0.63500, 0, 1
1, 0.44, 0, 0, 0.47800, 0, 1
0, 0.18, 0, 0, 0.39800, 0, 0
0, 0.67, 1, 0, 0.71600, 0, 1
1, 0.45, 0, 1, 0.50000, 1, 0
1, 0.48, 0, 0, 0.55800, 1, 0
0, 0.25, 1, 0, 0.39000, 1, 0
0, 0.67, 0, 0, 0.78300, 1, 0
1, 0.37, 0, 1, 0.42000, 1, 0
0, 0.32, 0, 0, 0.42700, 1, 0
1, 0.48, 0, 0, 0.57000, 1, 0
0, 0.66, 0, 1, 0.75000, 0, 1
1, 0.61, 0, 0, 0.70000, 0, 0
0, 0.58, 0, 1, 0.68900, 1, 0
1, 0.19, 0, 0, 0.24000, 0, 1
1, 0.38, 0, 1, 0.43000, 1, 0
0, 0.27, 0, 0, 0.36400, 1, 0
1, 0.42, 0, 0, 0.48000, 1, 0
1, 0.60, 0, 0, 0.71300, 0, 0
0, 0.27, 0, 1, 0.34800, 0, 0
1, 0.29, 1, 0, 0.37100, 0, 0
0, 0.43, 0, 0, 0.56700, 1, 0
1, 0.48, 0, 0, 0.56700, 1, 0
1, 0.27, 0, 1, 0.29400, 0, 1
0, 0.44, 0, 0, 0.55200, 0, 0
1, 0.23, 1, 0, 0.26300, 0, 1
0, 0.36, 1, 0, 0.53000, 0, 1
1, 0.64, 0, 1, 0.72500, 0, 0
1, 0.29, 0, 1, 0.30000, 0, 1
0, 0.33, 0, 0, 0.49300, 1, 0
0, 0.66, 1, 0, 0.75000, 0, 1
0, 0.21, 0, 1, 0.34300, 0, 0
1, 0.27, 0, 0, 0.32700, 0, 1
1, 0.29, 0, 0, 0.31800, 0, 1
0, 0.31, 0, 0, 0.48600, 1, 0
1, 0.36, 0, 1, 0.41000, 1, 0
1, 0.49, 1, 0, 0.55700, 1, 0
0, 0.28, 0, 0, 0.38400, 0, 0
0, 0.43, 0, 1, 0.56600, 1, 0
0, 0.46, 1, 0, 0.58800, 1, 0
1, 0.57, 0, 0, 0.69800, 0, 0
0, 0.52, 0, 1, 0.59400, 1, 0
0, 0.31, 0, 1, 0.43500, 1, 0
0, 0.55, 0, 0, 0.62000, 0, 1
1, 0.50, 0, 0, 0.56400, 1, 0
1, 0.48, 1, 0, 0.55900, 1, 0
0, 0.22, 0, 1, 0.34500, 0, 0
1, 0.59, 0, 1, 0.66700, 0, 0
1, 0.34, 0, 0, 0.42800, 0, 1
0, 0.64, 0, 0, 0.77200, 0, 1
1, 0.29, 0, 1, 0.33500, 0, 1
0, 0.34, 1, 0, 0.43200, 1, 0
0, 0.61, 0, 0, 0.75000, 0, 1
1, 0.64, 0, 1, 0.71100, 0, 0
0, 0.29, 0, 0, 0.41300, 0, 0
1, 0.63, 1, 0, 0.70600, 0, 0
0, 0.29, 1, 0, 0.40000, 0, 0
0, 0.51, 0, 0, 0.62700, 1, 0
0, 0.24, 0, 1, 0.37700, 0, 0
1, 0.48, 1, 0, 0.57500, 1, 0
1, 0.18, 0, 0, 0.27400, 0, 0
1, 0.18, 0, 0, 0.20300, 0, 1
1, 0.33, 1, 0, 0.38200, 0, 1
0, 0.20, 0, 1, 0.34800, 0, 0
1, 0.29, 0, 1, 0.33000, 0, 1
0, 0.44, 0, 1, 0.63000, 0, 0
0, 0.65, 0, 1, 0.81800, 0, 0
0, 0.56, 0, 0, 0.63700, 0, 1
0, 0.52, 0, 1, 0.58400, 1, 0
0, 0.29, 1, 0, 0.48600, 0, 0
0, 0.47, 1, 0, 0.58900, 1, 0
1, 0.68, 0, 0, 0.72600, 0, 1
1, 0.31, 0, 1, 0.36000, 1, 0
1, 0.61, 1, 0, 0.62500, 0, 1
1, 0.19, 1, 0, 0.21500, 0, 1
1, 0.38, 0, 1, 0.43000, 1, 0
0, 0.26, 0, 0, 0.42300, 0, 0
1, 0.61, 1, 0, 0.67400, 0, 0
1, 0.40, 0, 0, 0.46500, 1, 0
0, 0.49, 0, 0, 0.65200, 1, 0
1, 0.56, 0, 0, 0.67500, 0, 0
0, 0.48, 1, 0, 0.66000, 1, 0
1, 0.52, 0, 0, 0.56300, 0, 1
0, 0.18, 0, 0, 0.29800, 0, 0
0, 0.56, 0, 1, 0.59300, 0, 1
0, 0.52, 1, 0, 0.64400, 1, 0
0, 0.18, 1, 0, 0.28600, 1, 0
0, 0.58, 0, 0, 0.66200, 0, 1
0, 0.39, 1, 0, 0.55100, 1, 0
0, 0.46, 0, 0, 0.62900, 1, 0
0, 0.40, 1, 0, 0.46200, 1, 0
0, 0.60, 0, 0, 0.72700, 0, 1
1, 0.36, 1, 0, 0.40700, 0, 1
1, 0.44, 0, 0, 0.52300, 1, 0
1, 0.28, 0, 0, 0.31300, 0, 1
1, 0.54, 0, 1, 0.62600, 0, 0

Test data:

# people_test_df.txt
#
0, 0.51, 0, 0, 0.61200, 1, 0
0, 0.32, 1, 0, 0.46100, 1, 0
1, 0.55, 0, 0, 0.62700, 0, 0
1, 0.25, 0, 1, 0.26200, 0, 1
1, 0.33, 0, 1, 0.37300, 0, 1
0, 0.29, 1, 0, 0.46200, 0, 0
1, 0.65, 0, 0, 0.72700, 0, 0
0, 0.43, 1, 0, 0.51400, 1, 0
0, 0.54, 1, 0, 0.64800, 0, 1
1, 0.61, 1, 0, 0.72700, 0, 0
1, 0.52, 1, 0, 0.63600, 0, 0
1, 0.30, 1, 0, 0.33500, 0, 1
1, 0.29, 0, 0, 0.31400, 0, 1
0, 0.47, 0, 1, 0.59400, 1, 0
1, 0.39, 1, 0, 0.47800, 1, 0
1, 0.47, 0, 1, 0.52000, 1, 0
0, 0.49, 0, 0, 0.58600, 1, 0
0, 0.63, 0, 1, 0.67400, 0, 1
0, 0.30, 0, 0, 0.39200, 0, 0
0, 0.61, 0, 1, 0.69600, 0, 1
0, 0.47, 0, 1, 0.58700, 1, 0
1, 0.30, 0, 1, 0.34500, 0, 1
0, 0.51, 0, 1, 0.58000, 1, 0
0, 0.24, 0, 0, 0.38800, 1, 0
0, 0.49, 0, 0, 0.64500, 1, 0
1, 0.66, 0, 1, 0.74500, 0, 0
0, 0.65, 0, 0, 0.76900, 0, 0
0, 0.46, 1, 0, 0.58000, 0, 0
0, 0.45, 0, 1, 0.51800, 1, 0
0, 0.47, 0, 0, 0.63600, 0, 0
0, 0.29, 0, 0, 0.44800, 0, 0
0, 0.57, 0, 1, 0.69300, 0, 1
0, 0.20, 0, 0, 0.28700, 0, 1
0, 0.35, 0, 0, 0.43400, 1, 0
0, 0.61, 0, 1, 0.67000, 0, 1
0, 0.31, 0, 1, 0.37300, 1, 0
1, 0.18, 0, 0, 0.20800, 0, 1
1, 0.26, 0, 1, 0.29200, 0, 1
0, 0.28, 0, 0, 0.36400, 0, 1
0, 0.59, 0, 1, 0.69400, 0, 1