Recently, I have been taking an extended look at implementing various machine learning regression algorithms using C# — linear regression, k-nearest neighbors regression, kernel ridge regression, Gaussian process regression, AdaBoost regression, gradient boosting regression, random forest regression, Poisson regression, naive Bayes regression, quantile regression, and a few others. The calling interface to each implementation was a bit different. During these explorations, I gave some thought to the idea of using a uniform API, so that all the calling interfaces are (nearly) the same.
Because the scikit-learn library has become a de facto standard for classical machine learning, I figured the scikit API design would be a good one to imitate. Most scikit regressor modules have a constructor that accepts all the pertinent parameters, including parameters, such as the learning rate, that are needed only by the fit() method. With this design, using any scikit regression module looks like:
from sklearn import SomeSortOfRegressor
model = SomeSortOfRegressor(param_1, param_2, . . param_n)
model.fit(X, y)  # X matrix of predictors, y vector of target values
X = [[some input values]]
y = model.predict(X)
So the scikit API has constructors with a large number of parameters, often 20 or more, but the fit() method always has just two parameters, X and y. (Note: there are a few rare exceptions.)
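To make the pattern concrete, here is a minimal from-scratch sketch of a scikit-style estimator. TinyMeanRegressor is my own toy class, not part of scikit; it just shows the shape of the API: hyperparameters in the constructor, fit() gets only the data and returns self, predict() takes a 2D list of rows.

```python
class TinyMeanRegressor:
    """Toy scikit-style regressor: always predicts the (scaled) mean."""
    def __init__(self, shrink=1.0):    # all hyperparameters go here
        self.shrink = shrink
        self.mean_ = None
    def fit(self, X, y):               # data only, like scikit fit(X, y)
        self.mean_ = self.shrink * sum(y) / len(y)
        return self                    # scikit convention: return self
    def predict(self, X):              # 2D list-of-rows in, list out
        return [self.mean_ for _ in X]

model = TinyMeanRegressor(shrink=1.0)
model.fit([[0.1], [0.2], [0.3]], [1.0, 2.0, 3.0])
print(model.predict([[0.5], [0.9]]))   # [2.0, 2.0]
```

The point is that every estimator, no matter how complicated, is used with the same three-call pattern.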
Normally, when I implement a regression model using C#, I keep my constructor relatively simple and pass the parameters needed for training to a Train() method, which is the opposite of the standard scikit interface design.
Anyway, I refactored one of my usual C# neural network implementations so that the constructor accepts all parameters except the training data, and the Train() method accepts just trainX and trainY. (I didn't use the name fit() to make it clear that my C# implementation isn't a one-to-one mapping to scikit.)
double[][] trainX = // load from file
double[] trainY = // load from file
int numInput = 5;
int numHidden = 100;
int numOutput = 1;
int maxEpochs = 5000;
double lrnRate = 0.05;
int batSize = 10;
NeuralNetworkRegressor nn =
new NeuralNetworkRegressor(numInput, numHidden, numOutput,
maxEpochs, lrnRate, batSize, seed: 99);
Console.WriteLine("Starting training ");
nn.Train(trainX, trainY);
Hmmmm. On the one hand, I like the idea of a consistent API across different regression systems. On the other hand, the consistent API design violates the principle that you should pass to a method/function only the parameters that the method/function actually needs. Of course, you can argue that a class constructor could reasonably require all the parameters needed by any of the class methods, but the counter-argument is: in that case, why not pass the training data to the constructor too?
In the end, I think that for C#, passing all parameters, including those needed only for training, to the constructor is not a better design than passing training parameters such as the learning rate to the Train() method. Briefly, one of the reasons for the scikit API design is that Python does not support function/method overloading. C# does support overloading, so there's no need to pass a zillion parameters with default arguments to a constructor. (This is a subtle idea that I'm not in the mood to explain fully in this post.)
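To illustrate the overloading point: in Python, a second def with the same name silently replaces the first, so there is no way to offer both train(X, y) and train(X, y, lrn_rate) as true overloads. The idiomatic Python workaround is one method with defaulted keyword arguments, which is essentially the scikit constructor design. A quick sketch (all names are mine):

```python
class Demo:
    def train(self, X, y):
        return "two-arg version"
    def train(self, X, y, lrn_rate):     # silently REPLACES the first def
        return "three-arg version"

d = Demo()
try:
    d.train([1], [2])                    # the two-arg version is gone
except TypeError as e:
    print("no overloading:", e)
print(d.train([1], [2], 0.05))           # three-arg version

# hence the Python idiom: one function, many defaulted keyword parameters
def train(X, y, lrn_rate=0.05, max_epochs=5000, bat_size=10):
    return (lrn_rate, max_epochs, bat_size)
```

In C#, Train(double[][] x, double[] y) and Train(double[][] x, double[] y, double lrnRate) could simply coexist as overloads.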
There were a surprising number of small details that I encountered during my exploration. For example, the scikit predict(X) method accepts a 2D matrix of predictor values because Python is horribly slow at explicit loops but has optimizations for matrix operations. My Predict(x) method accepts a single vector of predictor values. A complete discussion of all the details would take many pages, so I won't go into it here.
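The difference is mostly about where the per-item loop lives. A sketch of the idea (predict_one is a hypothetical stand-in for a trained model's per-vector prediction, not scikit code):

```python
def predict_one(x, w=0.25, b=0.5):
    # hypothetical per-vector prediction, like my C# Predict(x)
    return b + w * sum(x)

def predict_batch(X):
    # scikit-style predict(X): 2D matrix in, vector of predictions out.
    # In C# this explicit loop is cheap; in pure Python it is slow,
    # which is why scikit pushes the work into NumPy matrix operations.
    return [predict_one(row) for row in X]

print(predict_batch([[1.0, 2.0], [0.0, 0.0]]))   # [1.25, 0.5]
```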
The output of my demo program is:
Begin C# NN regression demo

Loading synthetic train (200) and test (40) data
Done

First three train X:
 -0.1660   0.4406  -0.9998  -0.3953  -0.7065
  0.0776  -0.1616   0.3704  -0.5911   0.7562
 -0.9452   0.3409  -0.1654   0.1174  -0.7192

First three train y:
  0.4840
  0.1568
  0.8054

Creating 5-100-1 tanh() identity() neural network
Setting maxEpochs = 5000
Setting lrnRate = 0.050
Setting batSize = 10
Done

Starting (batch) training
epoch:    0  MSE = 0.0336  acc = 0.1550
epoch:  500  MSE = 0.0010  acc = 0.6700
epoch: 1000  MSE = 0.0005  acc = 0.7750
epoch: 1500  MSE = 0.0004  acc = 0.8150
epoch: 2000  MSE = 0.0005  acc = 0.8150
epoch: 2500  MSE = 0.0005  acc = 0.7800
epoch: 3000  MSE = 0.0003  acc = 0.8550
epoch: 3500  MSE = 0.0002  acc = 0.8700
epoch: 4000  MSE = 0.0002  acc = 0.8750
epoch: 4500  MSE = 0.0002  acc = 0.9150
Done

Evaluating model
Accuracy (10%) on train data = 0.9050
Accuracy (10%) on test data = 0.9250

Predicting y for train[0]
Predicted y = 0.4979

End demo
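The "Accuracy (10%)" lines count a prediction as correct when it is within 10% of the true target value. The metric is easy to sketch in Python (my own illustration, mirroring the demo's Accuracy() method):

```python
def accuracy(preds, actuals, pct_close=0.10):
    # a prediction is correct when |pred - actual| < pct_close * |actual|
    n_correct = sum(1 for p, a in zip(preds, actuals)
                    if abs(p - a) < abs(pct_close * a))
    return n_correct / len(actuals)

# only the first of these three predictions is within 10% of its target
print(round(accuracy([0.4979, 0.20, 0.90],
                     [0.4840, 0.1568, 0.8054]), 4))   # 0.3333
```

Because the targets here are all well away from zero, a relative closeness threshold is reasonable; for targets near zero it would need an absolute fallback.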
The data is synthetic, and was (coincidentally) generated by a 5-10-1 PyTorch neural network with random weights and biases.
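One hypothetical way to generate data like this is to push random input vectors through a fixed random tanh network. The function below is my own sketch of the idea; the function name, weight ranges, and seed are my assumptions, not the actual generator:

```python
import math, random

def make_synthetic(n_rows, n_in=5, n_hid=10, seed=0):
    # feed random inputs through a fixed random n_in-n_hid-1 tanh network
    rnd = random.Random(seed)
    ih = [[rnd.uniform(-1, 1) for _ in range(n_hid)]
          for _ in range(n_in)]                      # input-hidden wts
    ho = [rnd.uniform(-1, 1) for _ in range(n_hid)]  # hidden-output wts
    rows = []
    for _ in range(n_rows):
        x = [rnd.uniform(-1, 1) for _ in range(n_in)]
        h = [math.tanh(sum(x[i] * ih[i][j] for i in range(n_in)))
             for j in range(n_hid)]
        y = sum(h[j] * ho[j] for j in range(n_hid))
        rows.append((x, y))
    return rows

data = make_synthetic(200)   # 200 rows of (5 inputs, 1 target)
```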
Maybe the moral of the story is that software engineering involves multiple skills. Getting code to work is one thing, designing APIs is another, and sometimes the two conflict.

Perhaps the biggest disadvantage of neural network regression systems, compared to other techniques, is that neural networks are relatively difficult to tune. I’ve been driving for just about 60 years now, and during those years I’ve owned a lot of automobiles — well over 40. Of the cars I’ve owned, these three were the most difficult to tune.
Left: I learned to drive using my mother’s 1956 Plymouth Suburban station wagon with an awesome three-speed manual transmission on the steering column (“three on the tree”). But the first car that I owned was a 1957 Ford Custom 300. It cost $200 and came from my father’s friend Al Wood who owned an auto repair business. Automobile technology was very primitive in those days and so I had to regularly adjust the carburetor, adjust the ignition timing, and so on. My car was a pinkish-salmon and white two-tone (not the green and white shown here) and we called her Pinky. All cars had names in those days.
Center: My girlfriend at the time, Jane, and I owned a 1975 red Triumph TR7. That car looked great (so did Jane) but the electrical system, by the not-well-loved Lucas Electrical company, often needed tuning in the form of replacement parts. Jane needed absolutely no fine-tuning — she was great in every way.
Right: At one point in my career, I had just moved from Hawaii to Washington State to take a job at the huge tech company, where I still work today (after a couple of detours to other companies). I was renting a car (a green Toyota Corolla) but after a few weeks, I got tired of paying so much for a rental car. One night after work, I went out with the thought in my head that I wasn’t going back to my hotel room at the Silver Cloud Inn in Redmond until I bought a car. I ended up with a 1997 black Dodge Stratus with a manual transmission. It was a decent car but it needed frequent tuning of the brakes and transmission, and so I only owned it for a few months until I traded it in for a 1998 BMW 5 series with a rare but very cool manual transmission.
Demo program. If the listing below shows "lt" (less than), "gt", "lte", "gte", or "and" in place of Boolean operator symbols, replace them with the corresponding symbols. (My blog editor chokes on symbols).
using System;
using System.IO;
using System.Collections.Generic;
namespace NeuralNetworkRegressionSynthetic
{
internal class NeuralNetworkRegressionProgram
{
static void Main(string[] args)
{
Console.WriteLine("\nBegin C# NN regression demo ");
// 1. load data
Console.WriteLine("\nLoading synthetic train" +
" (200) and test (40) data");
string trainFile =
"..\\..\\..\\Data\\synthetic_train_200.txt";
int[] colsX = new int[] { 0, 1, 2, 3, 4 };
double[][] trainX =
Utils.MatLoad(trainFile, colsX, ',', "#");
double[] trainY =
Utils.MatToVec(Utils.MatLoad(trainFile,
new int[] { 5 }, ',', "#"));
string testFile =
"..\\..\\..\\Data\\synthetic_test_40.txt";
double[][] testX =
Utils.MatLoad(testFile, colsX, ',', "#");
double[] testY =
Utils.MatToVec(Utils.MatLoad(testFile,
new int[] { 5 }, ',', "#"));
Console.WriteLine("Done ");
Console.WriteLine("\nFirst three train X: ");
for (int i = 0; i < 3; ++i)
Utils.VecShow(trainX[i], 4, 8, newLine: true);
Console.WriteLine("\nFirst three train y: ");
for (int i = 0; i < 3; ++i)
Console.WriteLine(trainY[i].ToString("F4").
PadLeft(8));
Console.WriteLine("\nCreating 5-100-1 tanh()" +
" identity() neural network ");
int maxEpochs = 5000;
double lrnRate = 0.05;
int batSize = 10;
//int maxEpochs = 3000;
//double lrnRate = 0.05;
//int batSize = 10;
Console.WriteLine("\nSetting maxEpochs = " +
maxEpochs);
Console.WriteLine("Setting lrnRate = " +
lrnRate.ToString("F3"));
Console.WriteLine("Setting batSize = " + batSize);
NeuralNetworkRegressor nn =
new NeuralNetworkRegressor(5, 100, 1, maxEpochs,
lrnRate, batSize, seed: 99);
Console.WriteLine("Done ");
Console.WriteLine("\nStarting (batch) training ");
nn.Train(trainX, trainY);
Console.WriteLine("Done ");
Console.WriteLine("\nEvaluating model ");
double trainAcc = nn.Accuracy(trainX, trainY, 0.10);
Console.WriteLine("Accuracy (10%) on train data = " +
trainAcc.ToString("F4"));
double testAcc = nn.Accuracy(testX, testY, 0.10);
Console.WriteLine("Accuracy (10%) on test data = " +
testAcc.ToString("F4"));
Console.WriteLine("\nPredicting y for train[0] ");
double[] x = trainX[0];
double predY = nn.Predict(x);
Console.WriteLine("Predicted y = " +
predY.ToString("F4"));
Console.WriteLine("\nEnd demo ");
Console.ReadLine();
//// save trained model wts
//Console.WriteLine("\nSaving model wts to file ");
//string fn = "..\\..\\..\\Models\\synthetic_wts.txt";
//nn.SaveWeights(fn);
//Console.WriteLine("Done ");
//// load saved wts later
//Console.WriteLine("\nLoading saved wts to new NN ");
//NeuralNetworkRegressor nn2 = create but training
// params are irrelevant
//nn2.LoadWeights(fn);
//Console.WriteLine("Done ");
//double[] x = { five input values };
//double y2 = nn2.Predict(x);
//Console.WriteLine("Predicted y = " +
// y2.ToString("F5"));
} // Main()
} // class Program
// ========================================================
public class NeuralNetworkRegressor
{
private int ni; // number input nodes
private int nh;
private int no;
private double[] iNodes;
private double[][] ihWeights; // input-hidden
private double[] hBiases;
private double[] hNodes;
private double[][] hoWeights; // hidden-output
private double[] oBiases;
private double[] oNodes; // single val as array
// gradients
private double[][] ihGrads;
private double[] hbGrads;
private double[][] hoGrads;
private double[] obGrads;
// training
private int maxEpochs;
private double lrnRate;
private int batSize;
private Random rnd;
// ------------------------------------------------------
public NeuralNetworkRegressor(int numIn, int numHid,
int numOut, int maxEpochs, double lrnRate, int batSize,
int seed)
{
// architecture
this.ni = numIn;
this.nh = numHid;
this.no = numOut; // 1 for regression
this.iNodes = new double[numIn];
this.ihWeights = Utils.MatCreate(numIn, numHid);
this.hBiases = new double[numHid];
this.hNodes = new double[numHid];
this.hoWeights = Utils.MatCreate(numHid, numOut);
this.oBiases = new double[numOut]; // [1]
this.oNodes = new double[numOut]; // [1]
this.ihGrads = Utils.MatCreate(numIn, numHid);
this.hbGrads = new double[numHid];
this.hoGrads = Utils.MatCreate(numHid, numOut);
this.obGrads = new double[numOut];
// training
this.maxEpochs = maxEpochs;
this.lrnRate = lrnRate;
this.batSize = batSize;
this.rnd = new Random(seed);
this.InitWeights(); // all weights and biases
} // ctor
// ------------------------------------------------------
private void InitWeights() // helper for ctor
{
// weights and biases to small random values
double lo = -0.01; double hi = +0.01;
int numWts = (this.ni * this.nh) +
(this.nh * this.no) + this.nh + this.no;
double[] initialWeights = new double[numWts];
for (int i = 0; i < initialWeights.Length; ++i)
initialWeights[i] =
(hi - lo) * rnd.NextDouble() + lo;
this.SetWeights(initialWeights);
}
// ------------------------------------------------------
public void SetWeights(double[] wts)
{
// copy serialized weights and biases in wts[]
// to ih weights, ih biases, ho weights, ho biases
int numWts = (this.ni * this.nh) +
(this.nh * this.no) + this.nh + this.no;
if (wts.Length != numWts)
throw new Exception("Bad array in SetWeights");
int k = 0; // points into wts param
for (int i = 0; i < this.ni; ++i)
for (int j = 0; j < this.nh; ++j)
this.ihWeights[i][j] = wts[k++];
for (int i = 0; i < this.nh; ++i)
this.hBiases[i] = wts[k++];
for (int i = 0; i < this.nh; ++i)
for (int j = 0; j < this.no; ++j)
this.hoWeights[i][j] = wts[k++];
for (int i = 0; i < this.no; ++i)
this.oBiases[i] = wts[k++];
}
// ------------------------------------------------------
public double[] GetWeights()
{
int numWts = (this.ni * this.nh) +
(this.nh * this.no) + this.nh + this.no;
double[] result = new double[numWts];
int k = 0;
for (int i = 0; i < ihWeights.Length; ++i)
for (int j = 0; j < this.ihWeights[0].Length; ++j)
result[k++] = this.ihWeights[i][j];
for (int i = 0; i < this.hBiases.Length; ++i)
result[k++] = this.hBiases[i];
for (int i = 0; i < this.hoWeights.Length; ++i)
for (int j = 0; j < this.hoWeights[0].Length; ++j)
result[k++] = this.hoWeights[i][j];
for (int i = 0; i < this.oBiases.Length; ++i)
result[k++] = this.oBiases[i];
return result;
}
// ------------------------------------------------------
public double Predict(double[] x)
{
double[] hSums = new double[this.nh]; // scratch
double[] oSums = new double[this.no]; // out sums
for (int i = 0; i < x.Length; ++i)
this.iNodes[i] = x[i];
// note: no need to copy x-values unless
// you implement a ToString.
// more efficient to simply use the X[] directly.
// 1. compute i-h sum of weights * inputs
for (int j = 0; j < this.nh; ++j)
for (int i = 0; i < this.ni; ++i)
hSums[j] += this.iNodes[i] *
this.ihWeights[i][j]; // note +=
// 2. add biases to hidden sums
for (int i = 0; i < this.nh; ++i)
hSums[i] += this.hBiases[i];
// 3. apply hidden activation
for (int i = 0; i < this.nh; ++i)
this.hNodes[i] = HyperTan(hSums[i]);
// 4. compute h-o sum of wts * hOutputs
for (int j = 0; j < this.no; ++j)
for (int i = 0; i < this.nh; ++i)
oSums[j] += this.hNodes[i] *
this.hoWeights[i][j]; // [1]
// 5. add biases to output sums
for (int i = 0; i < this.no; ++i)
oSums[i] += this.oBiases[i];
// 6. apply output activation
for (int i = 0; i < this.no; ++i)
this.oNodes[i] = Identity(oSums[i]);
return this.oNodes[0]; // single value
}
// ------------------------------------------------------
private static double HyperTan(double x)
{
if (x < -10.0) return -1.0;
else if (x > 10.0) return 1.0;
else return Math.Tanh(x);
}
// ------------------------------------------------------
private static double Identity(double x)
{
return x;
}
// ------------------------------------------------------
// ------------------------------------------------------
private void ZeroOutGrads()
{
for (int i = 0; i < this.ni; ++i)
for (int j = 0; j < this.nh; ++j)
this.ihGrads[i][j] = 0.0;
for (int j = 0; j < this.nh; ++j)
this.hbGrads[j] = 0.0;
for (int j = 0; j < this.nh; ++j)
for (int k = 0; k < this.no; ++k)
this.hoGrads[j][k] = 0.0;
for (int k = 0; k < this.no; ++k)
this.obGrads[k] = 0.0;
} // ZeroOutGrads()
// ------------------------------------------------------
private void AccumGrads(double y)
{
double[] oSignals = new double[this.no];
double[] hSignals = new double[this.nh];
// 1. compute output node scratch signals
for (int k = 0; k < this.no; ++k)
{
double derivative = 1.0;
oSignals[k] = derivative * (this.oNodes[k] - y);
}
// 2. accum hidden-to-output gradients
for (int j = 0; j < this.nh; ++j)
for (int k = 0; k < this.no; ++k)
hoGrads[j][k] +=
oSignals[k] * this.hNodes[j];
// 3. accum output node bias gradients
for (int k = 0; k < this.no; ++k)
obGrads[k] +=
oSignals[k] * 1.0; // 1.0 dummy
// 4. compute hidden node signals
for (int j = 0; j < this.nh; ++j)
{
double sum = 0.0;
for (int k = 0; k < this.no; ++k)
sum += oSignals[k] * this.hoWeights[j][k];
double derivative =
(1 - this.hNodes[j]) *
(1 + this.hNodes[j]); // assumes tanh
hSignals[j] = derivative * sum;
}
// 5. accum input-to-hidden gradients
for (int i = 0; i < this.ni; ++i)
for (int j = 0; j < this.nh; ++j)
this.ihGrads[i][j] +=
hSignals[j] * this.iNodes[i];
// 6. accum hidden node bias gradients
for (int j = 0; j < this.nh; ++j)
this.hbGrads[j] +=
hSignals[j] * 1.0; // 1.0 dummy
} // AccumGrads
// ------------------------------------------------------
private void UpdateWeights(double lrnRate)
{
// assumes all gradients computed
// 1. update input-to-hidden weights
for (int i = 0; i < this.ni; ++i)
{
for (int j = 0; j < this.nh; ++j)
{
double delta = -1.0 * lrnRate *
this.ihGrads[i][j];
this.ihWeights[i][j] += delta;
}
}
// 2. update hidden node biases
for (int j = 0; j < this.nh; ++j)
{
double delta = -1.0 * lrnRate *
this.hbGrads[j];
this.hBiases[j] += delta;
}
// 3. update hidden-to-output weights
for (int j = 0; j < this.nh; ++j)
{
for (int k = 0; k < this.no; ++k)
{
double delta = -1.0 * lrnRate *
this.hoGrads[j][k];
this.hoWeights[j][k] += delta;
}
}
// 4. update output node biases
for (int k = 0; k < this.no; ++k)
{
double delta = -1.0 * lrnRate *
this.obGrads[k];
this.oBiases[k] += delta;
}
} // UpdateWeights()
// ------------------------------------------------------
public void Train(double[][] trainX, double[] trainY)
{
int n = trainX.Length; // 200
int batchesPerEpoch = n / this.batSize; // 20
int freq = maxEpochs / 10; // to show progress
int[] indices = new int[n];
for (int i = 0; i < n; ++i)
indices[i] = i;
// ----------------------------------------------------
//
// n = 200; bs = 10
// batches per epoch = 200 / 10 = 20
// for epoch = 0; epoch < maxEpochs; ++epoch
//   shuffle indices
//   for batch = 0; batch < bpe; ++batch
//     for item = 0; item < bs; ++item
//       compute output
//       accum grads
//     end-item
//     update weights
//     zero-out grads
//   end-batches
// end-epochs
//
// ----------------------------------------------------
for (int epoch = 0; epoch < this.maxEpochs; ++epoch)
{
this.Shuffle(indices);
int ptr = 0; // points into indices
for (int batIdx = 0; batIdx < batchesPerEpoch;
++batIdx) // 0, 1, . . 19
{
for (int i = 0; i < this.batSize; ++i) // 0 . . 9
{
int ii = indices[ptr++]; // compute output
double[] x = trainX[ii];
double y = trainY[ii];
this.Predict(x); // into this.oNodes
this.AccumGrads(y);
}
this.UpdateWeights(this.lrnRate);
this.ZeroOutGrads(); // prep for next batch
} // batches
if (epoch % freq == 0) // progress every few epochs
{
double mse = this.Error(trainX, trainY);
double acc = this.Accuracy(trainX, trainY, 0.10);
string s1 = "epoch: " + epoch.ToString().PadLeft(4);
string s2 = " MSE = " + mse.ToString("F4");
string s3 = " acc = " + acc.ToString("F4");
Console.WriteLine(s1 + s2 + s3);
}
} // epoch
} // Train
// ------------------------------------------------------
private void Shuffle(int[] sequence)
{
for (int i = 0; i < sequence.Length; ++i)
{
int r = this.rnd.Next(i, sequence.Length);
int tmp = sequence[r];
sequence[r] = sequence[i];
sequence[i] = tmp;
//sequence[i] = i; // for testing
}
} // Shuffle
// ------------------------------------------------------
public double Error(double[][] trainX, double[] trainY)
{
// MSE
int n = trainX.Length;
double sumSquaredError = 0.0;
for (int i = 0; i < n; ++i)
{
double predY = this.Predict(trainX[i]);
double actualY = trainY[i];
sumSquaredError += (predY - actualY) *
(predY - actualY);
}
return sumSquaredError / n;
} // Error
// ------------------------------------------------------
public double Accuracy(double[][] dataX,
double[] dataY, double pctClose)
{
// percent of predictions within pctClose of target
int n = dataX.Length;
int nCorrect = 0;
int nWrong = 0;
for (int i = 0; i < n; ++i)
{
double predY = this.Predict(dataX[i]);
double actualY = dataY[i];
if (Math.Abs(predY - actualY) <
Math.Abs(pctClose * actualY))
++nCorrect;
else
++nWrong;
}
return (nCorrect * 1.0) / (nCorrect + nWrong);
}
// ------------------------------------------------------
public void SaveWeights(string fn)
{
FileStream ofs = new FileStream(fn, FileMode.Create);
StreamWriter sw = new StreamWriter(ofs);
double[] wts = this.GetWeights();
for (int i = 0; i < wts.Length; ++i)
sw.WriteLine(wts[i].ToString("F8")); // one per line
sw.Close();
ofs.Close();
}
public void LoadWeights(string fn)
{
FileStream ifs = new FileStream(fn, FileMode.Open);
StreamReader sr = new StreamReader(ifs);
List<double> listWts = new List<double>();
string line = ""; // one wt per line
while ((line = sr.ReadLine()) != null)
{
// if (line.StartsWith(comment) == true)
// continue;
listWts.Add(double.Parse(line));
}
sr.Close();
ifs.Close();
double[] wts = listWts.ToArray();
this.SetWeights(wts);
}
// ------------------------------------------------------
} // NeuralNetwork class
// ========================================================
public class Utils
{
public static double[][] VecToMat(double[] vec,
int rows, int cols)
{
// vector to row vec/matrix
double[][] result = MatCreate(rows, cols);
int k = 0;
for (int i = 0; i < rows; ++i)
for (int j = 0; j < cols; ++j)
result[i][j] = vec[k++];
return result;
}
// ------------------------------------------------------
public static double[][] MatCreate(int rows,
int cols)
{
double[][] result = new double[rows][];
for (int i = 0; i < rows; ++i)
result[i] = new double[cols];
return result;
}
// ------------------------------------------------------
public static double[][] MatLoad(string fn, int[] usecols,
char sep, string comment)
{
List<double[]> result =
new List<double[]>();
string line = "";
FileStream ifs = new FileStream(fn, FileMode.Open);
StreamReader sr = new StreamReader(ifs);
while ((line = sr.ReadLine()) != null)
{
if (line.StartsWith(comment) == true)
continue;
string[] tokens = line.Split(sep);
List<double> lst = new List<double>();
for (int j = 0; j < usecols.Length; ++j)
lst.Add(double.Parse(tokens[usecols[j]]));
double[] row = lst.ToArray();
result.Add(row);
}
sr.Close(); ifs.Close();
return result.ToArray();
}
// ------------------------------------------------------
public static double[] MatToVec(double[][] m)
{
int rows = m.Length;
int cols = m[0].Length;
double[] result = new double[rows * cols];
int k = 0;
for (int i = 0; i < rows; ++i)
for (int j = 0; j < cols; ++j)
result[k++] = m[i][j];
return result;
}
// ------------------------------------------------------
public static void MatShow(double[][] m,
int dec, int wid)
{
for (int i = 0; i < m.Length; ++i)
{
for (int j = 0; j < m[0].Length; ++j)
{
double v = m[i][j];
if (Math.Abs(v) < 1.0e-8) v = 0.0; // hack
Console.Write(v.ToString("F" +
dec).PadLeft(wid));
}
Console.WriteLine("");
}
}
// ------------------------------------------------------
public static void VecShow(int[] vec, int wid)
{
for (int i = 0; i < vec.Length; ++i)
Console.Write(vec[i].ToString().PadLeft(wid));
Console.WriteLine("");
}
// ------------------------------------------------------
public static void VecShow(double[] vec,
int dec, int wid, bool newLine)
{
for (int i = 0; i < vec.Length; ++i)
{
double x = vec[i];
if (Math.Abs(x) < 1.0e-8) x = 0.0;
Console.Write(x.ToString("F" +
dec).PadLeft(wid));
}
if (newLine == true)
Console.WriteLine("");
}
} // Utils class
// ========================================================
} // ns
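A good way to gain confidence in the hand-derived gradients in AccumGrads() is a numerical gradient check. This Python sketch (biases omitted for brevity; all names are mine) verifies that the analytic input-to-hidden gradient for a tanh-hidden, identity-output network with per-item loss 0.5 * (o - y)^2 matches a central-difference estimate:

```python
import math, random

rnd = random.Random(1)
ni, nh = 2, 3   # tiny 2-3-1 network, no biases
ih = [[rnd.uniform(-1, 1) for _ in range(nh)] for _ in range(ni)]
ho = [rnd.uniform(-1, 1) for _ in range(nh)]
x, y = [0.4, -0.7], 0.3

def forward():
    h = [math.tanh(sum(x[i] * ih[i][j] for i in range(ni)))
         for j in range(nh)]
    return h, sum(h[j] * ho[j] for j in range(nh))

def loss():
    _, o = forward()
    return 0.5 * (o - y) ** 2

# analytic gradient for ih[0][0], same formula as AccumGrads():
# (o - y) * w_ho[0] * (1 - h[0]^2) * x[0]
h, o = forward()
grad = (o - y) * ho[0] * (1.0 - h[0] * h[0]) * x[0]

# central-difference estimate of the same gradient
eps = 1.0e-5
ih[0][0] += eps;  up = loss()
ih[0][0] -= 2 * eps;  dn = loss()
ih[0][0] += eps   # restore original weight
num = (up - dn) / (2 * eps)
print(abs(grad - num) < 1.0e-7)   # True
```

The same check extends to the hidden-to-output weights and the bias gradients; it is cheap insurance against a sign or index error in backprop code.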
Training data:
# synthetic_train_200.txt # -0.1660, 0.4406, -0.9998, -0.3953, -0.7065, 0.4840 0.0776, -0.1616, 0.3704, -0.5911, 0.7562, 0.1568 -0.9452, 0.3409, -0.1654, 0.1174, -0.7192, 0.8054 0.9365, -0.3732, 0.3846, 0.7528, 0.7892, 0.1345 -0.8299, -0.9219, -0.6603, 0.7563, -0.8033, 0.7955 0.0663, 0.3838, -0.3690, 0.3730, 0.6693, 0.3206 -0.9634, 0.5003, 0.9777, 0.4963, -0.4391, 0.7377 -0.1042, 0.8172, -0.4128, -0.4244, -0.7399, 0.4801 -0.9613, 0.3577, -0.5767, -0.4689, -0.0169, 0.6861 -0.7065, 0.1786, 0.3995, -0.7953, -0.1719, 0.5569 0.3888, -0.1716, -0.9001, 0.0718, 0.3276, 0.2500 0.1731, 0.8068, -0.7251, -0.7214, 0.6148, 0.3297 -0.2046, -0.6693, 0.8550, -0.3045, 0.5016, 0.2129 0.2473, 0.5019, -0.3022, -0.4601, 0.7918, 0.2613 -0.1438, 0.9297, 0.3269, 0.2434, -0.7705, 0.5171 0.1568, -0.1837, -0.5259, 0.8068, 0.1474, 0.3307 -0.9943, 0.2343, -0.3467, 0.0541, 0.7719, 0.5581 0.2467, -0.9684, 0.8589, 0.3818, 0.9946, 0.1092 -0.6553, -0.7257, 0.8652, 0.3936, -0.8680, 0.7018 0.8460, 0.4230, -0.7515, -0.9602, -0.9476, 0.1996 -0.9434, -0.5076, 0.7201, 0.0777, 0.1056, 0.5664 0.9392, 0.1221, -0.9627, 0.6013, -0.5341, 0.1533 0.6142, -0.2243, 0.7271, 0.4942, 0.1125, 0.1661 0.4260, 0.1194, -0.9749, -0.8561, 0.9346, 0.2230 0.1362, -0.5934, -0.4953, 0.4877, -0.6091, 0.3810 0.6937, -0.5203, -0.0125, 0.2399, 0.6580, 0.1460 -0.6864, -0.9628, -0.8600, -0.0273, 0.2127, 0.5387 0.9772, 0.1595, -0.2397, 0.1019, 0.4907, 0.1611 0.3385, -0.4702, -0.8673, -0.2598, 0.2594, 0.2270 -0.8669, -0.4794, 0.6095, -0.6131, 0.2789, 0.4700 0.0493, 0.8496, -0.4734, -0.8681, 0.4701, 0.3516 0.8639, -0.9721, -0.5313, 0.2336, 0.8980, 0.1412 0.9004, 0.1133, 0.8312, 0.2831, -0.2200, 0.1782 0.0991, 0.8524, 0.8375, -0.2102, 0.9265, 0.2150 -0.6521, -0.7473, -0.7298, 0.0113, -0.9570, 0.7422 0.6190, -0.3105, 0.8802, 0.1640, 0.7577, 0.1056 0.6895, 0.8108, -0.0802, 0.0927, 0.5972, 0.2214 0.1982, -0.9689, 0.1870, -0.1326, 0.6147, 0.1310 -0.3695, 0.7858, 0.1557, -0.6320, 0.5759, 0.3773 -0.1596, 0.3581, 0.8372, -0.9992, 0.9535, 0.2071 
-0.2468, 0.9476, 0.2094, 0.6577, 0.1494, 0.4132 0.1737, 0.5000, 0.7166, 0.5102, 0.3961, 0.2611 0.7290, -0.3546, 0.3416, -0.0983, -0.2358, 0.1332 -0.3652, 0.2438, -0.1395, 0.9476, 0.3556, 0.4170 -0.6029, -0.1466, -0.3133, 0.5953, 0.7600, 0.4334 -0.4596, -0.4953, 0.7098, 0.0554, 0.6043, 0.2775 0.1450, 0.4663, 0.0380, 0.5418, 0.1377, 0.2931 -0.8636, -0.2442, -0.8407, 0.9656, -0.6368, 0.7429 0.6237, 0.7499, 0.3768, 0.1390, -0.6781, 0.2185 -0.5499, 0.1850, -0.3755, 0.8326, 0.8193, 0.4399 -0.4858, -0.7782, -0.6141, -0.0008, 0.4572, 0.4197 0.7033, -0.1683, 0.2334, -0.5327, -0.7961, 0.1776 0.0317, -0.0457, -0.6947, 0.2436, 0.0880, 0.3345 0.5031, -0.5559, 0.0387, 0.5706, -0.9553, 0.3107 -0.3513, 0.7458, 0.6894, 0.0769, 0.7332, 0.3170 0.2205, 0.5992, -0.9309, 0.5405, 0.4635, 0.3532 -0.4806, -0.4859, 0.2646, -0.3094, 0.5932, 0.3202 0.9809, -0.3995, -0.7140, 0.8026, 0.0831, 0.1600 0.9495, 0.2732, 0.9878, 0.0921, 0.0529, 0.1289 -0.9476, -0.6792, 0.4913, -0.9392, -0.2669, 0.5966 0.7247, 0.3854, 0.3819, -0.6227, -0.1162, 0.1550 -0.5922, -0.5045, -0.4757, 0.5003, -0.0860, 0.5863 -0.8861, 0.0170, -0.5761, 0.5972, -0.4053, 0.7301 0.6877, -0.2380, 0.4997, 0.0223, 0.0819, 0.1404 0.9189, 0.6079, -0.9354, 0.4188, -0.0700, 0.1907 -0.1428, -0.7820, 0.2676, 0.6059, 0.3936, 0.2790 0.5324, -0.3151, 0.6917, -0.1425, 0.6480, 0.1071 -0.8432, -0.9633, -0.8666, -0.0828, -0.7733, 0.7784 -0.9444, 0.5097, -0.2103, 0.4939, -0.0952, 0.6787 -0.0520, 0.6063, -0.1952, 0.8094, -0.9259, 0.4836 0.5477, -0.7487, 0.2370, -0.9793, 0.0773, 0.1241 0.2450, 0.8116, 0.9799, 0.4222, 0.4636, 0.2355 0.8186, -0.1983, -0.5003, -0.6531, -0.7611, 0.1511 -0.4714, 0.6382, -0.3788, 0.9648, -0.4667, 0.5950 0.0673, -0.3711, 0.8215, -0.2669, -0.1328, 0.2677 -0.9381, 0.4338, 0.7820, -0.9454, 0.0441, 0.5518 -0.3480, 0.7190, 0.1170, 0.3805, -0.0943, 0.4724 -0.9813, 0.1535, -0.3771, 0.0345, 0.8328, 0.5438 -0.1471, -0.5052, -0.2574, 0.8637, 0.8737, 0.3042 -0.5454, -0.3712, -0.6505, 0.2142, -0.1728, 0.5783 0.6327, -0.6297, 0.4038, 
-0.5193, 0.1484, 0.1153 -0.5424, 0.3282, -0.0055, 0.0380, -0.6506, 0.6613 0.1414, 0.9935, 0.6337, 0.1887, 0.9520, 0.2540 -0.9351, -0.8128, -0.8693, -0.0965, -0.2491, 0.7353 0.9507, -0.6640, 0.9456, 0.5349, 0.6485, 0.1059 -0.0462, -0.9737, -0.2940, -0.0159, 0.4602, 0.2606 -0.0627, -0.0852, -0.7247, -0.9782, 0.5166, 0.2977 0.0478, 0.5098, -0.0723, -0.7504, -0.3750, 0.3335 0.0090, 0.3477, 0.5403, -0.7393, -0.9542, 0.4415 -0.9748, 0.3449, 0.3736, -0.1015, 0.8296, 0.4358 0.2887, -0.9895, -0.0311, 0.7186, 0.6608, 0.2057 0.1570, -0.4518, 0.1211, 0.3435, -0.2951, 0.3244 0.7117, -0.6099, 0.4946, -0.4208, 0.5476, 0.1096 -0.2929, -0.5726, 0.5346, -0.3827, 0.4665, 0.2465 0.4889, -0.5572, -0.5718, -0.6021, -0.7150, 0.2163 -0.7782, 0.3491, 0.5996, -0.8389, -0.5366, 0.6516 -0.5847, 0.8347, 0.4226, 0.1078, -0.3910, 0.6134 0.8469, 0.4121, -0.0439, -0.7476, 0.9521, 0.1571 -0.6803, -0.5948, -0.1376, -0.1916, -0.7065, 0.7156 0.2878, 0.5086, -0.5785, 0.2019, 0.4979, 0.2980 0.2764, 0.1943, -0.4090, 0.4632, 0.8906, 0.2960 -0.8877, 0.6705, -0.6155, -0.2098, -0.3998, 0.7107 -0.8398, 0.8093, -0.2597, 0.0614, -0.0118, 0.6502 -0.8476, 0.0158, -0.4769, -0.2859, -0.7839, 0.7715 0.5751, -0.7868, 0.9714, -0.6457, 0.1448, 0.1175 0.4802, -0.7001, 0.1022, -0.5668, 0.5184, 0.1090 0.4458, -0.6469, 0.7239, -0.9604, 0.7205, 0.0779 0.5175, 0.4339, 0.9747, -0.4438, -0.9924, 0.2879 0.8678, 0.7158, 0.4577, 0.0334, 0.4139, 0.1678 0.5406, 0.5012, 0.2264, -0.1963, 0.3946, 0.2088 -0.9938, 0.5498, 0.7928, -0.5214, -0.7585, 0.7687 0.7661, 0.0863, -0.4266, -0.7233, -0.4197, 0.1466 0.2277, -0.3517, -0.0853, -0.1118, 0.6563, 0.1767 0.3499, -0.5570, -0.0655, -0.3705, 0.2537, 0.1632 0.7547, -0.1046, 0.5689, -0.0861, 0.3125, 0.1257 0.8186, 0.2110, 0.5335, 0.0094, -0.0039, 0.1391 0.6858, -0.8644, 0.1465, 0.8855, 0.0357, 0.1845 -0.4967, 0.4015, 0.0805, 0.8977, 0.2487, 0.4663 0.6760, -0.9841, 0.9787, -0.8446, -0.3557, 0.1509 -0.1203, -0.4885, 0.6054, -0.0443, -0.7313, 0.4854 0.8557, 0.7919, -0.0169, 0.7134, -0.1628, 
0.2002 0.0115, -0.6209, 0.9300, -0.4116, -0.7931, 0.4052 -0.7114, -0.9718, 0.4319, 0.1290, 0.5892, 0.3661 0.3915, 0.5557, -0.1870, 0.2955, -0.6404, 0.2954 -0.3564, -0.6548, -0.1827, -0.5172, -0.1862, 0.4622 0.2392, -0.4959, 0.5857, -0.1341, -0.2850, 0.2470 -0.3394, 0.3947, -0.4627, 0.6166, -0.4094, 0.5325 0.7107, 0.7768, -0.6312, 0.1707, 0.7964, 0.2757 -0.1078, 0.8437, -0.4420, 0.2177, 0.3649, 0.4028 -0.3139, 0.5595, -0.6505, -0.3161, -0.7108, 0.5546 0.4335, 0.3986, 0.3770, -0.4932, 0.3847, 0.1810 -0.2562, -0.2894, -0.8847, 0.2633, 0.4146, 0.4036 0.2272, 0.2966, -0.6601, -0.7011, 0.0284, 0.2778 -0.0743, -0.1421, -0.0054, -0.6770, -0.3151, 0.3597 -0.4762, 0.6891, 0.6007, -0.1467, 0.2140, 0.4266 -0.4061, 0.7193, 0.3432, 0.2669, -0.7505, 0.6147 -0.0588, 0.9731, 0.8966, 0.2902, -0.6966, 0.4955 -0.0627, -0.1439, 0.1985, 0.6999, 0.5022, 0.3077 0.1587, 0.8494, -0.8705, 0.9827, -0.8940, 0.4263 -0.7850, 0.2473, -0.9040, -0.4308, -0.8779, 0.7199 0.4070, 0.3369, -0.2428, -0.6236, 0.4940, 0.2215 -0.0242, 0.0513, -0.9430, 0.2885, -0.2987, 0.3947 -0.5416, -0.1322, -0.2351, -0.0604, 0.9590, 0.3683 0.1055, 0.7783, -0.2901, -0.5090, 0.8220, 0.2984 -0.9129, 0.9015, 0.1128, -0.2473, 0.9901, 0.4776 -0.9378, 0.1424, -0.6391, 0.2619, 0.9618, 0.5368 0.7498, -0.0963, 0.4169, 0.5549, -0.0103, 0.1614 -0.2612, -0.7156, 0.4538, -0.0460, -0.1022, 0.3717 0.7720, 0.0552, -0.1818, -0.4622, -0.8560, 0.1685 -0.4177, 0.0070, 0.9319, -0.7812, 0.3461, 0.3052 -0.0001, 0.5542, -0.7128, -0.8336, -0.2016, 0.3803 0.5356, -0.4194, -0.5662, -0.9666, -0.2027, 0.1776 -0.2378, 0.3187, -0.8582, -0.6948, -0.9668, 0.5474 -0.1947, -0.3579, 0.1158, 0.9869, 0.6690, 0.2992 0.3992, 0.8365, -0.9205, -0.8593, -0.0520, 0.3154 -0.0209, 0.0793, 0.7905, -0.1067, 0.7541, 0.1864 -0.4928, -0.4524, -0.3433, 0.0951, -0.5597, 0.6261 -0.8118, 0.7404, -0.5263, -0.2280, 0.1431, 0.6349 0.0516, -0.8480, 0.7483, 0.9023, 0.6250, 0.1959 -0.3212, 0.1093, 0.9488, -0.3766, 0.3376, 0.2735 -0.3481, 0.5490, -0.3484, 0.7797, 0.5034, 0.4379 
-0.5785, -0.9170, -0.3563, -0.9258, 0.3877, 0.4121 0.3407, -0.1391, 0.5356, 0.0720, -0.9203, 0.3458 -0.3287, -0.8954, 0.2102, 0.0241, 0.2349, 0.3247 -0.1353, 0.6954, -0.0919, -0.9692, 0.7461, 0.3338 0.9036, -0.8982, -0.5299, -0.8733, -0.1567, 0.1187 0.7277, -0.8368, -0.0538, -0.7489, 0.5458, 0.0830 0.9049, 0.8878, 0.2279, 0.9470, -0.3103, 0.2194 0.7957, -0.1308, -0.5284, 0.8817, 0.3684, 0.2172 0.4647, -0.4931, 0.2010, 0.6292, -0.8918, 0.3371 -0.7390, 0.6849, 0.2367, 0.0626, -0.5034, 0.7039 -0.1567, -0.8711, 0.7940, -0.5932, 0.6525, 0.1710 0.7635, -0.0265, 0.1969, 0.0545, 0.2496, 0.1445 0.7675, 0.1354, -0.7698, -0.5460, 0.1920, 0.1728 -0.5211, -0.7372, -0.6763, 0.6897, 0.2044, 0.5217 0.1913, 0.1980, 0.2314, -0.8816, 0.5006, 0.1998 0.8964, 0.0694, -0.6149, 0.5059, -0.9854, 0.1825 0.1767, 0.7104, 0.2093, 0.6452, 0.7590, 0.2832 -0.3580, -0.7541, 0.4426, -0.1193, -0.7465, 0.5657 -0.5996, 0.5766, -0.9758, -0.3933, -0.9572, 0.6800 0.9950, 0.1641, -0.4132, 0.8579, 0.0142, 0.2003 -0.4717, -0.3894, -0.2567, -0.5111, 0.1691, 0.4266 0.3917, -0.8561, 0.9422, 0.5061, 0.6123, 0.1212 -0.0366, -0.1087, 0.3449, -0.1025, 0.4086, 0.2475 0.3633, 0.3943, 0.2372, -0.6980, 0.5216, 0.1925 -0.5325, -0.6466, -0.2178, -0.3589, 0.6310, 0.3568 0.2271, 0.5200, -0.1447, -0.8011, -0.7699, 0.3128 0.6415, 0.1993, 0.3777, -0.0178, -0.8237, 0.2181 -0.5298, -0.0768, -0.6028, -0.9490, 0.4588, 0.4356 0.6870, -0.1431, 0.7294, 0.3141, 0.1621, 0.1632 -0.5985, 0.0591, 0.7889, -0.3900, 0.7419, 0.2945 0.3661, 0.7984, -0.8486, 0.7572, -0.6183, 0.3449 0.6995, 0.3342, -0.3113, -0.6972, 0.2707, 0.1712 0.2565, 0.9126, 0.1798, -0.6043, -0.1413, 0.2893 -0.3265, 0.9839, -0.2395, 0.9854, 0.0376, 0.4770 0.2690, -0.1722, 0.9818, 0.8599, -0.7015, 0.3954 -0.2102, -0.0768, 0.1219, 0.5607, -0.0256, 0.3949 0.8216, -0.9555, 0.6422, -0.6231, 0.3715, 0.0801 -0.2896, 0.9484, -0.7545, -0.6249, 0.7789, 0.4370 -0.9985, -0.5448, -0.7092, -0.5931, 0.7926, 0.5402
Test data:
# synthetic_test_40.txt # 0.7462, 0.4006, -0.0590, 0.6543, -0.0083, 0.1935 0.8495, -0.2260, -0.0142, -0.4911, 0.7699, 0.1078 -0.2335, -0.4049, 0.4352, -0.6183, -0.7636, 0.5088 0.1810, -0.5142, 0.2465, 0.2767, -0.3449, 0.3136 -0.8650, 0.7611, -0.0801, 0.5277, -0.4922, 0.7140 -0.2358, -0.7466, -0.5115, -0.8413, -0.3943, 0.4533 0.4834, 0.2300, 0.3448, -0.9832, 0.3568, 0.1360 -0.6502, -0.6300, 0.6885, 0.9652, 0.8275, 0.3046 -0.3053, 0.5604, 0.0929, 0.6329, -0.0325, 0.4756 -0.7995, 0.0740, -0.2680, 0.2086, 0.9176, 0.4565 -0.2144, -0.2141, 0.5813, 0.2902, -0.2122, 0.4119 -0.7278, -0.0987, -0.3312, -0.5641, 0.8515, 0.4438 0.3793, 0.1976, 0.4933, 0.0839, 0.4011, 0.1905 -0.8568, 0.9573, -0.5272, 0.3212, -0.8207, 0.7415 -0.5785, 0.0056, -0.7901, -0.2223, 0.0760, 0.5551 0.0735, -0.2188, 0.3925, 0.3570, 0.3746, 0.2191 0.1230, -0.2838, 0.2262, 0.8715, 0.1938, 0.2878 0.4792, -0.9248, 0.5295, 0.0366, -0.9894, 0.3149 -0.4456, 0.0697, 0.5359, -0.8938, 0.0981, 0.3879 0.8629, -0.8505, -0.4464, 0.8385, 0.5300, 0.1769 0.1995, 0.6659, 0.7921, 0.9454, 0.9970, 0.2330 -0.0249, -0.3066, -0.2927, -0.4923, 0.8220, 0.2437 0.4513, -0.9481, -0.0770, -0.4374, -0.9421, 0.2879 -0.3405, 0.5931, -0.3507, -0.3842, 0.8562, 0.3987 0.9538, 0.0471, 0.9039, 0.7760, 0.0361, 0.1706 -0.0887, 0.2104, 0.9808, 0.5478, -0.3314, 0.4128 -0.8220, -0.6302, 0.0537, -0.1658, 0.6013, 0.4306 -0.4123, -0.2880, 0.9074, -0.0461, -0.4435, 0.5144 0.0060, 0.2867, -0.7775, 0.5161, 0.7039, 0.3599 -0.7968, -0.5484, 0.9426, -0.4308, 0.8148, 0.2979 0.7811, 0.8450, -0.6877, 0.7594, 0.2640, 0.2362 -0.6802, -0.1113, -0.8325, -0.6694, -0.6056, 0.6544 0.3821, 0.1476, 0.7466, -0.5107, 0.2592, 0.1648 0.7265, 0.9683, -0.9803, -0.4943, -0.5523, 0.2454 -0.9049, -0.9797, -0.0196, -0.9090, -0.4433, 0.6447 -0.4607, 0.1811, -0.2389, 0.4050, -0.0078, 0.5229 0.2664, -0.2932, -0.4259, -0.7336, 0.8742, 0.1834 -0.4507, 0.1029, -0.6294, -0.1158, -0.6294, 0.6081 0.8948, -0.0124, 0.9278, 0.2899, -0.0314, 0.1534 -0.1323, -0.8813, -0.0146, -0.0697, 
0.6135, 0.2386
