Several years ago, I wrote an article in the old Microsoft MSDN Magazine where I showed an example of k-nearest neighbors classification. In that article, I mentioned the possibility of combining a k-NN model with a neural network model — an “ensemble” model. An ensemble model simply combines the results of two or more prediction models into a single result.
Suppose you have a k-NN multi-class classification model for a 3-class problem. For a particular input x, the k-NN model predicts the class probabilities to be [0.4000, 0.3500, 0.2500] and so the prediction is class 0. For the same input x, the neural model predicts the class probabilities to be [0.2000, 0.3600, 0.4400] and so the prediction is class 2. If you average the prediction probabilities you get [0.3000, 0.3550, 0.3450] and so the ensemble combined prediction is class 1.
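To make the averaging concrete, the computation can be sketched in a few lines of C#. The class and variable names here are illustrative, not part of the demo program that follows:

```csharp
using System;

public class EnsembleAvgDemo
{
  // index of the largest value in v[]
  public static int ArgMax(double[] v)
  {
    int result = 0;
    for (int i = 1; i < v.Length; ++i)
      if (v[i] > v[result]) result = i;
    return result;
  }

  public static void Main()
  {
    double[] probsKNN = { 0.4000, 0.3500, 0.2500 }; // k-NN predicts class 0
    double[] probsNN = { 0.2000, 0.3600, 0.4400 };  // neural net predicts class 2
    double[] avg = new double[probsKNN.Length];
    for (int i = 0; i < avg.Length; ++i)
      avg[i] = (probsKNN[i] + probsNN[i]) / 2.0;
    // avg = [0.3000, 0.3550, 0.3450]
    Console.WriteLine("ensemble prediction = class " + ArgMax(avg)); // class 1
  }
}
```

A common variation is a weighted average, where the model with better standalone accuracy gets a larger weight.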
A reader asked me to construct an example of an ensemble model that uses a neural network model and a k-nearest neighbors model. The ideas underlying an ensemble model are very simple but there are tons of details to deal with. I had to spend a lot of time to make sure that the two models could handle a single set of data. For example, the target Y data is read into memory as type double, but a neural network requires one-hot encoded data and a k-NN model requires an array of integer data.
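For example, the label 2 (liberal) must appear as the integer 2 for the k-NN model but as the one-hot vector [0, 0, 1] for the neural network. A minimal sketch of the two conversions (the helper name ToOneHot is illustrative; the full demo below uses equivalent Utils methods):

```csharp
using System;

public class EncodingDemo
{
  // integer labels to one-hot, e.g. label 2 with 3 classes -> [0, 0, 1]
  public static double[][] ToOneHot(int[] labels, int nClasses)
  {
    double[][] result = new double[labels.Length][];
    for (int i = 0; i < labels.Length; ++i)
    {
      result[i] = new double[nClasses]; // all 0.0
      result[i][labels[i]] = 1.0;
    }
    return result;
  }

  public static void Main()
  {
    double[] rawY = { 2.0, 1.0, 0.0 };  // target Y as read from file
    int[] intY = new int[rawY.Length];  // form needed by the k-NN model
    for (int i = 0; i < rawY.Length; ++i)
      intY[i] = (int)rawY[i];
    double[][] oneHotY = ToOneHot(intY, 3); // form needed by the neural net
    Console.WriteLine(oneHotY[0][2]); // 1
  }
}
```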
For my example, I set up some synthetic data that looks like:
0.24, 0.29500, 2
0.39, 0.51200, 1
0.63, 0.75800, 0
0.36, 0.44500, 1
. . .
Each line of data represents a person. The fields are age (divided by 100), income (divided by $100,000), political leaning (0 = conservative, 1 = moderate, 2 = liberal). There are 200 data items. The goal is to predict political leaning from age and income.
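The normalization is simple division. For instance, a hypothetical person who is 24 years old, has a $29,500 income, and leans liberal would be stored as:

```csharp
using System;

public class NormalizeDemo
{
  public static void Main()
  {
    double rawAge = 24.0;       // years
    double rawIncome = 29500.0; // dollars
    int leaning = 2;            // 0 = conservative, 1 = moderate, 2 = liberal
    double x0 = rawAge / 100.0;       // age divided by 100
    double x1 = rawIncome / 100000.0; // income divided by $100,000
    Console.WriteLine(x0 + ", " + x1 + ", " + leaning); // 0.24, 0.295, 2
  }
}
```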
My ensemble demo creates a 2-20-3 neural network. I trained the network using 10,000 epochs, a batch size of 10, and a learning rate of 0.02. These training parameters were determined by a bit of trial and error. The neural network model scored 0.7000 accuracy (140 out of 200 correct).
The demo creates a simple k-nearest neighbors model with k = 5. Note that a k-NN model doesn’t need explicit training. The k-NN model scored 0.7400 accuracy (148 out of 200 correct).
The ensemble model combines the predicted probabilities of the two models by using a simple average. The ensemble approach scored 0.7450 accuracy (149 out of 200 correct).

Ensemble systems combine different machine learning techniques. Mixed media art combines different art techniques. Here are three representative examples from an Internet image search for “mixed media portrait”. Left: Portrait by artist Loui Jover. Center: Portrait by artist Leon Bosboom. Right: Portrait by artist Minh Hang.
The complete demo code is presented below. It is quite long and complex, and has not been fully tested.
// multi-class classification ensemble
// k-NN + Neural Net
using System;
using System.IO;
namespace Ensemble
{
internal class EnsembleProgram
{
static void Main(string[] args)
{
Console.WriteLine("\nBegin kNN + NN ensemble demo ");
Console.WriteLine("Predict political leaning from" +
" age and income ");
string trainFile =
"..\\..\\..\\Data\\people_train.txt";
// age, income, political leaning
// 0.24, 0.29500, 2
// 0.39, 0.51200, 1
double[][] trainX = Utils.MatLoad(trainFile,
new int[] { 0, 1 }, ',', "#");
//double[][] trainY = Utils.MatLoad(trainFile, new int[]
// { 2 }, ',', "#");
int[] trainY = Utils.MatToIntVec(Utils.MatLoad(trainFile,
new int[] { 2 }, ',', "#"));
//double[][] neuralTrainY = Utils.MatToOneHot(trainY, 3);
//int[] neighborsTrainY = Utils.MatToIntVec(trainY);
// create and train NN
Console.WriteLine("\nCreating and training 2-20-3" +
" neural net ");
NeuralNetwork nn = new NeuralNetwork(2, 20, 3);
nn.Train(trainX, trainY, 0.02, 10, 10000, true);
double accNN = nn.Accuracy(trainX, trainY);
Console.WriteLine("NN accuracy = " + accNN.ToString("F4"));
// create and "train" KNN
Console.WriteLine("\nCreating and training k=5 nearest" +
" neighbors model ");
KNN knn = new KNN(5, 3);
knn.Train(trainX, trainY);
double accKNN = knn.Accuracy(trainX, trainY);
Console.WriteLine("KNN accuracy = " +
accKNN.ToString("F4"));
Console.WriteLine("\nEvaluating a combined NN" +
" and KNN model ");
double accEnsemble = EnsembleAccuracy(nn, knn, trainX,
trainY);
Console.WriteLine("Ensemble accuracy = " +
accEnsemble.ToString("F4"));
Console.WriteLine("\nEnd demo ");
Console.ReadLine();
} // Main()
public static double[] EnsemblePredictProbs(double[] x,
NeuralNetwork nn, KNN knn)
{
double[] probsNN = nn.PredictProbs(x);
double[] probsKNN = knn.PredictProbs(x);
double[] result = new double[probsNN.Length];
for (int i = 0; i < result.Length; ++i)
result[i] = (probsNN[i] + probsKNN[i]) / 2;
return result;
}
public static double EnsembleAccuracy(NeuralNetwork nn,
KNN knn, double[][] dataX, int[] dataY)
{
int nCorrect = 0; int nWrong = 0;
for (int i = 0; i < dataX.Length; ++i)
{
double[] x = dataX[i];
double[] probs = EnsemblePredictProbs(x, nn, knn);
int predClass = Utils.ArgMax(probs);
int actualClass = dataY[i];
if (predClass == actualClass)
++nCorrect;
else
++nWrong;
}
return (nCorrect * 1.0) / (nCorrect + nWrong);
}
} // Program
// =========================================================
public class KNN
{
public int k;
public int nOutput;
public double[][] trainX;
public int[] trainY;
public KNN(int k, int nOutput)
{
this.k = k;
this.nOutput = nOutput;
} // ctor
public void Train(double[][] trainX, int[] trainY)
{
this.trainX = trainX;
this.trainY = trainY;
}
public double[] PredictProbs(double[] x)
{
double[] result = new double[this.nOutput];
// 1. compute distances from x to all train X
int N = this.trainX.Length;
double[] distances = new double[N];
for (int i = 0; i < N; ++i)
distances[i] = DistFunc(x, this.trainX[i]);
// 2. compute ordering of the distances
int[] ordering = new int[N];
for (int i = 0; i < N; ++i)
ordering[i] = i;
double[] distancesCopy = new double[N];
Array.Copy(distances, distancesCopy, distances.Length);
Array.Sort(distancesCopy, ordering);
// count class labels of the closest
for (int i = 0; i < this.k; ++i)
{
int idx = ordering[i];
int c = this.trainY[idx];
++result[c];
}
// convert counts to probs
for (int i = 0; i < this.nOutput; ++i)
result[i] /= this.k;
return result;
} // PredictProbs
private static double DistFunc(double[] x1,
double[] x2)
{
int dim = x1.Length;
double sum = 0.0;
for (int i = 0; i < dim; ++i)
{
double diff = x1[i] - x2[i];
sum += diff * diff;
}
return Math.Sqrt(sum);
}
public double Accuracy(double[][] dataX, int[] dataY)
{
int nCorrect = 0; int nWrong = 0;
for (int i = 0; i < dataX.Length; ++i)
{
double[] x = dataX[i];
double[] probs = this.PredictProbs(x);
int predClass = Utils.ArgMax(probs);
int actualClass = dataY[i];
if (predClass == actualClass)
++nCorrect;
else
++nWrong;
}
return (nCorrect * 1.0) / (nCorrect + nWrong);
}
} // class KNN
// ========================================================
public class NeuralNetwork
{
private int ni; // number input nodes
private int nh; // hidden
private int no; // output
private double[] iNodes;
private double[][] ihWeights; // input-hidden
private double[] hBiases;
private double[] hNodes;
private double[][] hoWeights; // hidden-output
private double[] oBiases;
private double[] oNodes;
private Random rnd; // wt init and train shuffle
// --------------------------------------------------------
public NeuralNetwork(int numIn, int numHid,
int numOut, int seed = 0)
{
this.ni = numIn; // 2 for this demo
this.nh = numHid; // 20
this.no = numOut; // 3
this.iNodes = new double[numIn];
this.ihWeights = Utils.MatCreate(numIn, numHid);
this.hBiases = new double[numHid];
this.hNodes = new double[numHid];
this.hoWeights = Utils.MatCreate(numHid, numOut);
this.oBiases = new double[numOut];
this.oNodes = new double[numOut];
this.rnd = new Random(seed);
this.InitWeights(); // all weights and biases
} // ctor
// --------------------------------------------------------
private void InitWeights() // helper for ctor
{
// weights and biases to small random values
double lo = -0.10; double hi = +0.10;
int numWts = (this.ni * this.nh) +
(this.nh * this.no) + this.nh + this.no;
double[] initialWeights = new double[numWts];
for (int i = 0; i < initialWeights.Length; ++i)
initialWeights[i] =
(hi - lo) * rnd.NextDouble() + lo;
this.SetWeights(initialWeights);
}
// --------------------------------------------------------
public void SetWeights(double[] wts)
{
// copy serialized weights and biases in wts[]
// to ih weights, ih biases, ho weights, ho biases
int numWts = (this.ni * this.nh) +
(this.nh * this.no) + this.nh + this.no;
if (wts.Length != numWts)
throw new Exception("Bad array in SetWeights");
int k = 0; // points into wts param
for (int i = 0; i < this.ni; ++i)
for (int j = 0; j < this.nh; ++j)
this.ihWeights[i][j] = wts[k++];
for (int i = 0; i < this.nh; ++i)
this.hBiases[i] = wts[k++];
for (int i = 0; i < this.nh; ++i)
for (int j = 0; j < this.no; ++j)
this.hoWeights[i][j] = wts[k++];
for (int i = 0; i < this.no; ++i)
this.oBiases[i] = wts[k++];
}
// --------------------------------------------------------
public double[] GetWeights()
{
int numWts = (this.ni * this.nh) +
(this.nh * this.no) + this.nh + this.no;
double[] result = new double[numWts];
int k = 0;
for (int i = 0; i < ihWeights.Length; ++i)
for (int j = 0; j < this.ihWeights[0].Length; ++j)
result[k++] = this.ihWeights[i][j];
for (int i = 0; i < this.hBiases.Length; ++i)
result[k++] = this.hBiases[i];
for (int i = 0; i < this.hoWeights.Length; ++i)
for (int j = 0; j < this.hoWeights[0].Length; ++j)
result[k++] = this.hoWeights[i][j];
for (int i = 0; i < this.oBiases.Length; ++i)
result[k++] = this.oBiases[i];
return result;
}
// --------------------------------------------------------
public double[] ComputeOutput(double[] x)
{
double[] hSums = new double[this.nh]; // scratch
double[] oSums = new double[this.no]; // out sums
for (int i = 0; i < x.Length; ++i)
this.iNodes[i] = x[i];
// note: no need to copy x-values unless
// you implement a ToString.
// more efficient to simply use the x[] directly.
// 1. compute i-h sum of weights * inputs
for (int j = 0; j < this.nh; ++j)
for (int i = 0; i < this.ni; ++i)
hSums[j] += this.iNodes[i] *
this.ihWeights[i][j]; // note +=
// 2. add biases to hidden sums
for (int i = 0; i < this.nh; ++i)
hSums[i] += this.hBiases[i];
// 3. apply hidden activation
for (int i = 0; i < this.nh; ++i)
this.hNodes[i] = HyperTan(hSums[i]);
// 4. compute h-o sum of wts * hOutputs
for (int j = 0; j < this.no; ++j)
for (int i = 0; i < this.nh; ++i)
oSums[j] += this.hNodes[i] *
this.hoWeights[i][j]; // note +=
// 5. add biases to output sums
for (int i = 0; i < this.no; ++i)
oSums[i] += this.oBiases[i];
double[] softOut = Softmax(oSums);
Array.Copy(softOut, this.oNodes, softOut.Length);
double[] retResult = new double[this.no];
Array.Copy(this.oNodes, retResult, retResult.Length);
return retResult;
}
// --------------------------------------------------------
public double[] PredictProbs(double[] x)
{
double[] result = this.ComputeOutput(x);
return result;
}
// --------------------------------------------------------
private static double HyperTan(double x)
{
if (x < -10.0) return -1.0;
else if (x > 10.0) return 1.0;
else return Math.Tanh(x);
}
// --------------------------------------------------------
private static double[] Softmax(double[] logits)
{
// determine max logit
// does all output nodes at once so scale
// doesn't have to be re-computed each time
double max = logits[0];
for (int i = 0; i < logits.Length; ++i)
if (logits[i] > max) max = logits[i];
// scaling factor -- sum of exp(each val - max)
double scale = 0.0;
for (int i = 0; i < logits.Length; ++i)
scale += Math.Exp(logits[i] - max);
double[] result = new double[logits.Length];
for (int i = 0; i < logits.Length; ++i)
result[i] = Math.Exp(logits[i] - max) / scale;
return result; // now scaled so that xi sum to 1.0
}
// --------------------------------------------------------
public void Train(double[][] trainX, int[] trainY,
double lrnRate, int batSize,
int maxEpochs, bool verbose)
{
// convert y data to one-hot encoded
double[][] trainYoneHot =
Utils.VecToOneHot(trainY, this.no);
// 0. create accumulated grads
double[][] hoGrads =
Utils.MatCreate(this.nh, this.no);
double[] obGrads = new double[this.no];
double[][] ihGrads =
Utils.MatCreate(this.ni, this.nh);
double[] hbGrads = new double[this.nh];
double[] oSignals = new double[this.no];
double[] hSignals = new double[this.nh];
// create indices
int n = trainX.Length;
int[] indices = new int[n];
for (int i = 0; i < n; ++i)
indices[i] = i;
// calc freq of progress and batches-per-epoch
int freq = maxEpochs / 5;
int numBatches = n / batSize; // int division
for (int epoch = 0; epoch < maxEpochs; ++epoch)
{
Shuffle(indices);
for (int batIdx = 0; batIdx < numBatches; ++batIdx)
{
// zero out all grads from previous batch
for (int i = 0; i < this.ni; ++i)
for (int j = 0; j < this.nh; ++j)
ihGrads[i][j] = 0.0;
for (int j = 0; j < this.nh; ++j)
hbGrads[j] = 0.0;
for (int j = 0; j < this.nh; ++j)
for (int k = 0; k < this.no; ++k)
hoGrads[j][k] = 0.0;
for (int k = 0; k < this.no; ++k)
obGrads[k] = 0.0;
// accumulate grads for each item in batch
for (int ii = 0; ii < batSize; ++ii)
{
int idx = indices[batIdx * batSize + ii]; // offset into curr batch
double[] x = trainX[idx];
double[] y = trainYoneHot[idx];
this.ComputeOutput(x);
// 1. compute output node scratch signals
for (int k = 0; k < this.no; ++k)
{
// double derivative = 1.0;
// double derivative = (1 - this.outputs[i]) *
// this.outputs[i]; // softmax + MSE loss
oSignals[k] = 1 * (this.oNodes[k] - y[k]);
}
// --------------------------------------------------
// 2. accum hidden-to-output gradients
for (int j = 0; j < this.nh; ++j)
{
for (int k = 0; k < this.no; ++k)
{
hoGrads[j][k] += oSignals[k] *
this.hNodes[j]; // note the +=
}
}
// 3. accum output node bias gradients
for (int k = 0; k < this.no; ++k)
{
obGrads[k] += oSignals[k] * 1.0; // 1.0 dummy
}
// --------------------------------------------------
// 4. compute hidden node signals
for (int j = 0; j < this.nh; ++j)
{
double sum = 0.0;
for (int k = 0; k < this.no; ++k)
{
sum += oSignals[k] * this.hoWeights[j][k];
}
double derivative =
(1 - this.hNodes[j]) *
(1 + this.hNodes[j]); // tanh
hSignals[j] = derivative * sum;
}
// --------------------------------------------------
// 5. accum input-to-hidden gradients
for (int i = 0; i < this.ni; ++i)
{
for (int j = 0; j < this.nh; ++j)
{
ihGrads[i][j] += hSignals[j] *
this.iNodes[i];
}
}
// --------------------------------------------------
// 6. accum hidden node bias gradients
for (int j = 0; j < this.nh; ++j)
{
hbGrads[j] += hSignals[j] * 1.0; // 1.0 dummy
}
} // curr batch
// divide all accumulated gradients by batch size
// a. hidden-to-output gradients
for (int j = 0; j < this.nh; ++j)
for (int k = 0; k < this.no; ++k)
hoGrads[j][k] /= batSize;
// b. output node bias gradients
for (int k = 0; k < this.no; ++k)
obGrads[k] /= batSize;
// c. input-to-hidden gradients
for (int i = 0; i < this.ni; ++i)
for (int j = 0; j < this.nh; ++j)
ihGrads[i][j] /= batSize;
// d. hidden node bias gradients
for (int j = 0; j < this.nh; ++j)
hbGrads[j] /= batSize;
// --------------------------------------------------
// 7. update input-to-hidden weights
for (int i = 0; i < this.ni; ++i)
{
for (int j = 0; j < this.nh; ++j)
{
double delta = -1.0 * lrnRate * ihGrads[i][j];
this.ihWeights[i][j] += delta;
}
}
// 8. update hidden node biases
for (int j = 0; j < this.nh; ++j)
{
double delta = -1.0 * lrnRate * hbGrads[j];
this.hBiases[j] += delta;
}
// 9. update hidden-to-output weights
for (int j = 0; j < this.nh; ++j)
{
for (int k = 0; k < this.no; ++k)
{
double delta = -1.0 * lrnRate * hoGrads[j][k];
this.hoWeights[j][k] += delta;
}
}
// --------------------------------------------------
// 10. update output node biases
for (int k = 0; k < this.no; ++k)
{
double delta = -1.0 * lrnRate * obGrads[k];
this.oBiases[k] += delta;
}
} // batches
if (verbose == true && (epoch % freq == 0))
{
double mcee = this.MeanCrossEntError(trainX,
trainYoneHot);
double acc = this.Accuracy(trainX, trainY);
string s1 = "epoch: " + epoch.ToString().PadLeft(4);
string s2 = " MCEE = " + mcee.ToString("F4");
string s3 = " acc = " + acc.ToString("F4");
Console.WriteLine(s1 + s2 + s3);
}
} // epoch
} // Train()
// --------------------------------------------------------
private void Shuffle(int[] sequence)
{
// Fisher-Yates
for (int i = 0; i < sequence.Length; ++i)
{
int r = this.rnd.Next(i, sequence.Length);
int tmp = sequence[r];
sequence[r] = sequence[i];
sequence[i] = tmp;
//sequence[i] = i; // for testing
}
} // Shuffle
// --------------------------------------------------------
public double MeanSqError(double[][] trainX,
double[][] trainY)
{
// MSE - useful for progress (easier to interpret)
int n = trainX.Length;
double sumSquaredError = 0.0;
for (int i = 0; i < n; ++i)
{
double[] predY = this.ComputeOutput(trainX[i]);
double[] actualY = trainY[i];
for (int j = 0; j < this.no; ++j)
{
sumSquaredError += (predY[j] - actualY[j]) *
(predY[j] - actualY[j]);
}
}
return sumSquaredError / n;
} // MSE loss
// --------------------------------------------------------
public double MeanCrossEntError(double[][] trainX,
double[][] trainY)
{
int n = trainX.Length;
double sum = 0.0;
for (int i = 0; i < n; ++i)
{
double[] predY = this.ComputeOutput(trainX[i]);
int idx = Utils.ArgMax(trainY[i]);
sum += -Math.Log(predY[idx]);
}
return sum / n;
} // MCEE loss
// --------------------------------------------------------
public double Accuracy(double[][] dataX, int[] dataY)
{
int n = dataX.Length;
int nCorrect = 0; int nWrong = 0;
for (int i = 0; i < n; ++i)
{
double[] predY = this.ComputeOutput(dataX[i]); // probs
int actualY = dataY[i];
if (Utils.ArgMax(predY) == actualY)
++nCorrect;
else
++nWrong;
}
return (nCorrect * 1.0) / (nCorrect + nWrong);
}
// --------------------------------------------------------
public int[][] ConfusionMatrix(double[][] dataX,
double[][] dataY)
{
int n = this.no;
int[][] result = new int[n][]; // nxn
for (int i = 0; i < n; ++i)
result[i] = new int[n];
for (int i = 0; i < dataX.Length; ++i)
{
double[] x = dataX[i]; // inputs
int targetK = Utils.ArgMax(dataY[i]);
double[] probs = this.ComputeOutput(x);
int predK = Utils.ArgMax(probs);
++result[targetK][predK];
}
return result;
}
public void ShowConfusion(int[][] cm)
{
int n = cm.Length;
for (int i = 0; i < n; ++i)
{
Console.Write("actual " + i + ": ");
for (int j = 0; j < n; ++j)
{
Console.Write(cm[i][j].ToString().PadLeft(4) + " ");
}
Console.WriteLine("");
}
}
} // NeuralNetwork class
// ========================================================
public class Utils
{
public static double[][] VecToMat(double[] vec,
int rows, int cols)
{
// vector to row vec/matrix
double[][] result = MatCreate(rows, cols);
int k = 0;
for (int i = 0; i < rows; ++i)
for (int j = 0; j < cols; ++j)
result[i][j] = vec[k++];
return result;
}
// --------------------------------------------------------
public static double[][] MatCreate(int rows,
int cols)
{
double[][] result = new double[rows][];
for (int i = 0; i < rows; ++i)
result[i] = new double[cols];
return result;
}
// --------------------------------------------------------
static int NumNonCommentLines(string fn,
string comment)
{
int ct = 0;
string line = "";
FileStream ifs = new FileStream(fn,
FileMode.Open);
StreamReader sr = new StreamReader(ifs);
while ((line = sr.ReadLine()) != null)
if (line.StartsWith(comment) == false)
++ct;
sr.Close(); ifs.Close();
return ct;
}
// --------------------------------------------------------
public static double[][] MatLoad(string fn,
int[] usecols, char sep, string comment)
{
// count number of non-comment lines
int nRows = NumNonCommentLines(fn, comment);
int nCols = usecols.Length;
double[][] result = MatCreate(nRows, nCols);
string line = "";
string[] tokens = null;
FileStream ifs = new FileStream(fn, FileMode.Open);
StreamReader sr = new StreamReader(ifs);
int i = 0;
while ((line = sr.ReadLine()) != null)
{
if (line.StartsWith(comment) == true)
continue;
tokens = line.Split(sep);
for (int j = 0; j < nCols; ++j)
{
int k = usecols[j]; // into tokens
result[i][j] = double.Parse(tokens[k]);
}
++i;
}
sr.Close(); ifs.Close();
return result;
}
// --------------------------------------------------------
public static double[] MatToDoubleVec(double[][] m)
{
int rows = m.Length;
int cols = m[0].Length;
double[] result = new double[rows * cols];
int k = 0;
for (int i = 0; i < rows; ++i)
for (int j = 0; j < cols; ++j)
result[k++] = m[i][j];
return result;
}
public static int[] MatToIntVec(double[][] m)
{
int rows = m.Length;
int cols = m[0].Length;
int[] result = new int[rows * cols];
int k = 0;
for (int i = 0; i < rows; ++i)
for (int j = 0; j < cols; ++j)
result[k++] = (int)m[i][j];
return result;
}
// --------------------------------------------------------
public static double[][] MatToOneHot(double[][] m, int n)
{
// convert ordinal (0,1,2 . .) to one-hot
int rows = m.Length;
int cols = m[0].Length; // assumed 1
double[][] result = MatCreate(rows, n);
for (int i = 0; i < rows; ++i)
{
int k = (int)m[i][0]; // 0,1,2 . .
result[i] = new double[n]; // [0.0 0.0 0.0]
result[i][k] = 1.0; // [ 0.0 1.0 0.0]
}
return result;
}
public static double[][] VecToOneHot(int[] vec, int n)
{
int N = vec.Length;
double[][] result = MatCreate(N, n); // all zeros
for (int i = 0; i < N; ++i)
result[i][vec[i]] = 1.0;
return result;
}
// --------------------------------------------------------
public static void MatShow(double[][] m,
int dec, int wid)
{
for (int i = 0; i < m.Length; ++i)
{
for (int j = 0; j < m[0].Length; ++j)
{
double v = m[i][j];
if (Math.Abs(v) < 1.0e-8) v = 0.0; // hack
Console.Write(v.ToString("F" +
dec).PadLeft(wid));
}
Console.WriteLine("");
}
}
// --------------------------------------------------------
public static void VecShow(int[] vec, int wid)
{
for (int i = 0; i < vec.Length; ++i)
Console.Write(vec[i].ToString().PadLeft(wid));
Console.WriteLine("");
}
// --------------------------------------------------------
public static void VecShow(double[] vec,
int dec, int wid, bool newLine)
{
for (int i = 0; i < vec.Length; ++i)
{
double x = vec[i];
if (Math.Abs(x) < 1.0e-8) x = 0.0;
Console.Write(x.ToString("F" +
dec).PadLeft(wid));
}
if (newLine == true)
Console.WriteLine("");
}
public static int ArgMax(double[] v)
{
// index of largest value in v[]
int result = 0;
double big = v[0];
for (int i = 0; i < v.Length; ++i)
{
if (v[i] > big)
{
result = i;
big = v[i];
}
}
return result;
}
} // Utils class
// ========================================================
} // ns
Demo data:
# people_train.txt
#
0.24, 0.29500, 2
0.39, 0.51200, 1
0.63, 0.75800, 0
0.36, 0.44500, 1
0.27, 0.28600, 2
0.50, 0.56500, 1
0.50, 0.55000, 1
0.19, 0.32700, 0
0.22, 0.27700, 1
0.39, 0.47100, 2
0.34, 0.39400, 1
0.22, 0.33500, 0
0.35, 0.35200, 2
0.33, 0.46400, 1
0.45, 0.54100, 1
0.42, 0.50700, 1
0.33, 0.46800, 1
0.25, 0.30000, 1
0.31, 0.46400, 0
0.27, 0.32500, 2
0.48, 0.54000, 1
0.64, 0.71300, 2
0.61, 0.72400, 0
0.54, 0.61000, 0
0.29, 0.36300, 0
0.50, 0.55000, 1
0.55, 0.62500, 0
0.40, 0.52400, 0
0.22, 0.23600, 2
0.68, 0.78400, 0
0.60, 0.71700, 2
0.34, 0.46500, 1
0.25, 0.37100, 0
0.31, 0.48900, 1
0.43, 0.48000, 1
0.58, 0.65400, 2
0.55, 0.60700, 2
0.43, 0.51100, 1
0.43, 0.53200, 1
0.21, 0.37200, 0
0.55, 0.64600, 0
0.64, 0.74800, 0
0.41, 0.58800, 1
0.64, 0.72700, 0
0.56, 0.66600, 2
0.31, 0.36000, 1
0.65, 0.70100, 2
0.55, 0.64300, 0
0.25, 0.40300, 0
0.46, 0.51000, 1
0.36, 0.53500, 0
0.52, 0.58100, 1
0.61, 0.67900, 0
0.57, 0.65700, 0
0.46, 0.52600, 1
0.62, 0.66800, 2
0.55, 0.62700, 0
0.22, 0.27700, 1
0.50, 0.62900, 0
0.32, 0.41800, 1
0.21, 0.35600, 0
0.44, 0.52000, 1
0.46, 0.51700, 1
0.62, 0.69700, 0
0.57, 0.66400, 0
0.67, 0.75800, 2
0.29, 0.34300, 2
0.53, 0.60100, 0
0.44, 0.54800, 1
0.46, 0.52300, 1
0.20, 0.30100, 1
0.38, 0.53500, 1
0.50, 0.58600, 1
0.33, 0.42500, 1
0.33, 0.39300, 1
0.26, 0.40400, 0
0.58, 0.70700, 0
0.43, 0.48000, 1
0.46, 0.64400, 0
0.60, 0.71700, 0
0.42, 0.48900, 1
0.56, 0.56400, 2
0.62, 0.66300, 2
0.50, 0.64800, 1
0.47, 0.52000, 1
0.67, 0.80400, 2
0.40, 0.50400, 1
0.42, 0.48400, 1
0.64, 0.72000, 0
0.47, 0.58700, 2
0.45, 0.52800, 1
0.25, 0.40900, 0
0.38, 0.48400, 0
0.55, 0.60000, 1
0.44, 0.60600, 1
0.33, 0.41000, 1
0.34, 0.39000, 1
0.27, 0.33700, 2
0.32, 0.40700, 1
0.42, 0.47000, 1
0.24, 0.40300, 0
0.42, 0.50300, 1
0.25, 0.28000, 2
0.51, 0.58000, 1
0.55, 0.63500, 2
0.44, 0.47800, 2
0.18, 0.39800, 0
0.67, 0.71600, 2
0.45, 0.50000, 1
0.48, 0.55800, 1
0.25, 0.39000, 1
0.67, 0.78300, 1
0.37, 0.42000, 1
0.32, 0.42700, 1
0.48, 0.57000, 1
0.66, 0.75000, 2
0.61, 0.70000, 0
0.58, 0.68900, 1
0.19, 0.24000, 2
0.38, 0.43000, 1
0.27, 0.36400, 1
0.42, 0.48000, 1
0.60, 0.71300, 0
0.27, 0.34800, 0
0.29, 0.37100, 0
0.43, 0.56700, 1
0.48, 0.56700, 1
0.27, 0.29400, 2
0.44, 0.55200, 0
0.23, 0.26300, 2
0.36, 0.53000, 2
0.64, 0.72500, 0
0.29, 0.30000, 2
0.33, 0.49300, 1
0.66, 0.75000, 2
0.21, 0.34300, 0
0.27, 0.32700, 2
0.29, 0.31800, 2
0.31, 0.48600, 1
0.36, 0.41000, 1
0.49, 0.55700, 1
0.28, 0.38400, 0
0.43, 0.56600, 1
0.46, 0.58800, 1
0.57, 0.69800, 0
0.52, 0.59400, 1
0.31, 0.43500, 1
0.55, 0.62000, 2
0.50, 0.56400, 1
0.48, 0.55900, 1
0.22, 0.34500, 0
0.59, 0.66700, 0
0.34, 0.42800, 2
0.64, 0.77200, 2
0.29, 0.33500, 2
0.34, 0.43200, 1
0.61, 0.75000, 2
0.64, 0.71100, 0
0.29, 0.41300, 0
0.63, 0.70600, 0
0.29, 0.40000, 0
0.51, 0.62700, 1
0.24, 0.37700, 0
0.48, 0.57500, 1
0.18, 0.27400, 0
0.18, 0.20300, 2
0.33, 0.38200, 2
0.20, 0.34800, 0
0.29, 0.33000, 2
0.44, 0.63000, 0
0.65, 0.81800, 0
0.56, 0.63700, 2
0.52, 0.58400, 1
0.29, 0.48600, 0
0.47, 0.58900, 1
0.68, 0.72600, 2
0.31, 0.36000, 1
0.61, 0.62500, 2
0.19, 0.21500, 2
0.38, 0.43000, 1
0.26, 0.42300, 0
0.61, 0.67400, 0
0.40, 0.46500, 1
0.49, 0.65200, 1
0.56, 0.67500, 0
0.48, 0.66000, 1
0.52, 0.56300, 2
0.18, 0.29800, 0
0.56, 0.59300, 2
0.52, 0.64400, 1
0.18, 0.28600, 1
0.58, 0.66200, 2
0.39, 0.55100, 1
0.46, 0.62900, 1
0.40, 0.46200, 1
0.60, 0.72700, 2
0.36, 0.40700, 2
0.44, 0.52300, 1
0.28, 0.31300, 2
0.54, 0.62600, 0
