I put together a demo program that trains a kernel ridge regression model using evolutionary optimization. Bottom line: the technique did not work as well as I had hoped — training was very slow and the resulting prediction model was not as accurate as one trained using the standard matrix inverse technique.
Kernel ridge regression (KRR) is a technique that adds the “kernel trick” to basic linear regression so that the KRR prediction model can deal with complex data where the relationship between predictors and target isn’t linear. The “ridge” means there is built-in L2 (aka ridge) regularization to prevent overfitting.
A KRR model can be trained using a closed form solution with a matrix inverse. However, if you have 1,000 data items, the model has 1,000 weights, and the kernel matrix to invert has shape 1,000-by-1,000. This is doable. But once the source data grows beyond a few thousand items, KRR training using a matrix inverse becomes impractical (although you can still use gradient descent techniques). In other words, KRR doesn’t scale well to large datasets.
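To make the closed-form approach concrete, here is a minimal sketch in Python (my own function names, pure standard library, with a small Gaussian-elimination solver standing in for the matrix inverse). It builds the kernel matrix, adds the ridge term on the diagonal, and solves (K + alpha*I) w = y for the dual weights:

```python
import math

def rbf(v1, v2, gamma):
    # Gaussian / RBF kernel: exp(-gamma * ||v1 - v2||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(v1, v2)))

def train_krr(train_x, train_y, gamma, alpha):
    # build the kernel matrix, add alpha on the diagonal (the "ridge"),
    # then solve (K + alpha*I) w = y by Gaussian elimination
    n = len(train_x)
    aug = [[rbf(train_x[i], train_x[j], gamma) for j in range(n)] +
           [train_y[i]] for i in range(n)]
    for i in range(n):
        aug[i][i] += alpha
    for col in range(n):  # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    w = [0.0] * n  # back substitution
    for r in range(n - 1, -1, -1):
        w[r] = (aug[r][n] - sum(aug[r][c] * w[c]
                for c in range(r + 1, n))) / aug[r][r]
    return w

def predict(w, train_x, x, gamma):
    # prediction = weighted sum of kernel values against all train items
    return sum(w[i] * rbf(train_x[i], x, gamma) for i in range(len(train_x)))
```

With 1,000 training items this is a solve on a 1,000-by-1,000 system, which is exactly why the technique stops scaling.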
For an experiment, I figured I’d try to train a KRR model using evolutionary optimization. In very high-level pseudo-code:
create a population of possible solutions/chromosomes
loop max_generations times
  pick two good parents from population
  use parents to create a child solution
  mutate the child slightly
  replace a bad solution in population with the child
  create a new random solution
  replace a bad solution in pop with random solution
  keep track if new best solution found
end-loop
return best solution found
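The loop above can be sketched as a generic minimizer. This is a hedged Python illustration with my own names and simplified selection (picking the best or worst of a random half of the population stands in for the tau/sigma/omega selection pressures used in the actual demo below):

```python
import random

def evolve(fitness, dim, pop_size=20, max_gen=3000, m_rate=0.5, seed=0):
    # create a population of random solutions in [-1, +1]
    rnd = random.Random(seed)
    pop = [[rnd.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    errs = [fitness(c) for c in pop]
    best_idx = min(range(pop_size), key=lambda i: errs[i])
    best_soln, best_err = pop[best_idx][:], errs[best_idx]

    def pick_good():  # best of a random half of the population
        return min(rnd.sample(range(pop_size), pop_size // 2),
                   key=lambda i: errs[i])

    def pick_bad():   # worst of a random half of the population
        return max(rnd.sample(range(pop_size), pop_size // 2),
                   key=lambda i: errs[i])

    for gen in range(max_gen):
        p1, p2 = pick_good(), pick_good()
        cut = rnd.randrange(1, dim)          # one-point crossover (dim >= 2)
        child = pop[p1][:cut] + pop[p2][cut:]
        for j in range(dim):                 # mutate the child slightly
            if rnd.random() < m_rate:
                child[j] += rnd.uniform(-0.01, 0.01)
        c_err = fitness(child)
        if c_err < best_err:                 # track best solution found
            best_soln, best_err = child[:], c_err
        bad = pick_bad()                     # child replaces a bad solution
        pop[bad], errs[bad] = child, c_err
        rnd_soln = [rnd.uniform(-1, 1) for _ in range(dim)]
        r_err = fitness(rnd_soln)            # fresh random soln for diversity
        if r_err < best_err:
            best_soln, best_err = rnd_soln[:], r_err
        bad = pick_bad()
        pop[bad], errs[bad] = rnd_soln, r_err
    return best_soln, best_err
```

For KRR training, the fitness function is the mean squared error of the candidate weight vector over the training data.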
For my demo I used a set of synthetic data with five predictor values and one target value to predict. The data was generated by a 5-10-1 neural network with random weight and bias values, so the data is complex but learnable in principle. There are 200 training items (and so the trained model has 200 weights) and 40 test items. The output of one of my demo runs is:
Begin Kernel Ridge Regression with evolutionary
optimization training
Loading train (200) and test (40) from file
Done
First three train X:
-0.1660 0.4406 -0.9998 -0.3953 -0.7065
0.0776 -0.1616 0.3704 -0.5911 0.7562
-0.9452 0.3409 -0.1654 0.1174 -0.7192
First three train y:
0.4840
0.1568
0.8054
Setting RBF gamma = 0.100
Setting alpha = 0.001
Creating KRR model
Done
Setting popSize = 200
Setting maxGen = 2000000
Setting mRate = 0.8000
Setting tau (parent pressure) = 0.50
Setting sigma (child replace) = 0.50
Setting omega (rnd replace) = 0.50
Starting evolutionary training
generation = 0 error = 0.01689683 accu (0.10) = 0.2600
generation = 200000 error = 0.00034347 accu (0.10) = 0.8650
generation = 400000 error = 0.00020685 accu (0.10) = 0.9300
generation = 600000 error = 0.00018679 accu (0.10) = 0.9250
generation = 800000 error = 0.00018128 accu (0.10) = 0.9250
generation = 1000000 error = 0.00017854 accu (0.10) = 0.9050
generation = 1200000 error = 0.00017724 accu (0.10) = 0.9050
generation = 1400000 error = 0.00017648 accu (0.10) = 0.9050
generation = 1600000 error = 0.00017582 accu (0.10) = 0.9100
generation = 1800000 error = 0.00017518 accu (0.10) = 0.9050
Done
Model weights/coefficients:
-0.1663 -0.1893 0.1732 0.3802 0.4804 -0.0986
0.4848 -0.0285 -0.0668 0.0963 -0.2160 0.0216
. . . (200 weights)
-0.0143 -0.0940 0.0637 0.1092 -0.2955 0.2420
0.3143 0.3899
Computing model accuracy
Train acc (within 0.10) = 0.9050
Test acc (within 0.10) = 0.9500
Train MSE = 0.00017455
Test MSE = 0.00012633
Predicting for x =
-0.1660 0.4406 -0.9998 -0.3953 -0.7065
predicted y = 0.4829
End demo
The evolutionary optimization training starts out very well, but then gets stuck at about 90% to 93% accuracy.
A minor problem I encountered is that the ridge regularization in standard KRR training is simple and effective: you add a small value (often about 0.001) to the diagonal elements of the kernel matrix before inverting the matrix. But evolutionary optimization training doesn’t use a kernel matrix, so there’s no obvious way to add regularization. In my demo I skipped explicit regularization, my thought being that evolutionary optimization implicitly adds noise.
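One simple option (which I did not use in the demo) is to fold an explicit L2 penalty into the fitness function itself. A minimal hedged sketch, assuming the same RBF kernel and dual-weight prediction scheme as the demo; the function names are my own:

```python
import math

def rbf(v1, v2, gamma):
    # Gaussian / RBF kernel
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(v1, v2)))

def predict(weights, train_x, x, gamma):
    # weighted sum of kernel values against all training items
    return sum(w * rbf(tx, x, gamma) for w, tx in zip(weights, train_x))

def penalized_error(weights, train_x, train_y, gamma, alpha):
    # MSE plus an explicit L2 penalty on the weights -- one way to
    # recover ridge-style regularization when there is no kernel
    # matrix diagonal to perturb
    n = len(train_x)
    mse = sum((predict(weights, train_x, train_x[i], gamma) -
               train_y[i]) ** 2 for i in range(n)) / n
    return mse + alpha * sum(w * w for w in weights)
```

Using this as the evolutionary fitness function would penalize large weight values in the same spirit as the alpha-on-the-diagonal trick.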
For my demo, I used a hard-coded radial basis function (RBF), also known as the Gaussian kernel, for the kernel function. Most KRR implementations allow you to specify one of several kernel functions, such as linear, polynomial, and so on. But RBF is an excellent kernel function for most datasets, and it only requires one parameter (gamma).
Training a KRR model using evolutionary optimization is very slow. To train my model, with just 200 data items, using a population size of 200, for 2,000,000 generations took over two hours on a fairly fast machine. Standard training using scikit (which uses the matrix inverse technique for training) only took a few seconds. Here’s some key output from a scikit KernelRidge module that ran almost instantaneously:
Begin scikit KernelRidge RBF demo

Loading train and test data
Done

First three X data:
[[-0.1660  0.4406 -0.9998 -0.3953 -0.7065]
 [ 0.0776 -0.1616  0.3704 -0.5911  0.7562]
 [-0.9452  0.3409 -0.1654  0.1174 -0.7192]] . . .

First three y targets:
[0.4840 0.1568 0.8054] . . .

Creating and training KRR RBF model
Setting gamma = 0.100, alpha = 0.001
Done

Dual coefficients:
[ -4.0621  -9.3182  -8.8092 -14.0933  -0.3002   0.1056   5.1146   4.3951
   0.1753   5.2028   8.8409  -0.4698   7.0688   4.7942   2.1142   5.5679
   4.8962   7.5384  -8.4670   5.0930  13.1902  -8.0109  -5.4780  -2.5951
  -3.1010   7.0071  -9.1779  -0.7936   6.3949   5.9605   5.0747   0.3396
  18.5395  -2.7000   0.4168  -3.1407  -5.5714 -13.1522  -8.2336   6.4788
 -10.2406  11.7951 -10.6603 -10.7788  -1.6000  -0.7263 -16.0858  -1.5681
 -12.9580   0.3914  -6.9487  -1.6890  -9.8436  -5.4678   4.1521  -1.4978
   4.7248  -8.8854 -13.0858 -11.1040   7.7854   8.8629  -4.1085   1.9902
  -7.3150  -8.5995  -3.0083  -1.8794  -6.7644 -17.8075   0.8345  -4.1221
   1.9570  13.6729  -2.3569   2.6958  -8.5640   4.1899  -4.8593   6.1456
   3.4228  16.8676   4.4296   6.4875  -0.9622   4.3707  -1.4808 -18.4243
  11.2566 -12.1761  10.2213   8.2260  21.3728  -8.3055 -14.5729   9.2123
   4.2088 -12.0034   6.0130   0.9240  13.3422 -11.6248   4.4520  -5.9944
   0.6189  -1.1055  -6.3877  -0.4614  -0.1605  22.5837 -13.2100  -4.7317
  -9.3882  -3.7890   3.0957  -2.5600   0.6684   2.7375  -0.5020  12.4123
  -3.0796   0.9748   0.1481   0.7383  17.2000  -8.8991  -1.2841  -1.4301
   2.1305   5.3976   1.0769  -4.3476  -4.7575   1.4967  -4.0716  -0.7248
   6.2342  19.9161   3.4816 -12.8797  12.8022  -6.7232   4.2184  -8.3022
   2.4988  -1.4373  -9.4516  -6.9603   4.7524   1.5667  -4.1711  -2.3520
  12.3394  -6.8946   6.2633   1.4896  11.5521  11.4834   3.5178 -22.8366
   2.2423   8.2786  -2.4143 -17.6250   7.2524   7.2352  -4.9042   5.0042
  -0.0958   9.2463  13.9077  10.2577   5.5424   3.2610   8.6191  -1.1175
   8.0135   1.6988   2.0254   8.0727  15.0779  -8.1617  -3.8956   1.4725
   2.6730  -8.5577 -18.9562  -6.6963  -1.2733  11.2704  -1.9976   6.5690
   1.9351   1.2072  -0.6203   0.6920 -12.3647  -1.7577  -5.6308   3.4553]

Accuracy (0.10) train = 0.9700
Accuracy (0.10) test = 0.9750

MSE train = 0.00006562
MSE test = 0.00016499

Predicting for:
[[-0.1660 0.4406 -0.9998 -0.3953 -0.7065]]
Predicted y = 0.4881

End demo
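For reference, a minimal scikit-learn sketch that produces this kind of output might look like the following. This is a hedged illustration with made-up toy data, not the demo's 200-item dataset:

```python
# minimal scikit-learn KernelRidge sketch (toy made-up data)
import numpy as np
from sklearn.kernel_ridge import KernelRidge

X = np.array([[0.0], [0.5], [1.0], [1.5]])
y = np.array([0.1, 0.4, 0.9, 1.2])

# closed-form training: solves (K + alpha*I) w = y internally
model = KernelRidge(alpha=0.001, kernel="rbf", gamma=1.0)
model.fit(X, y)
print(model.dual_coef_)       # one dual weight per training item
print(model.predict(X[:1]))   # predict for the first training item
```

The `dual_coef_` attribute corresponds to the weights/coefficients that the evolutionary version searches for directly.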
In short, for KRR, standard training is hundreds or thousands of times faster and produces a significantly more accurate prediction model. Sigh.
One of my motivations for looking at evolutionary optimization training for KRR is that I’m interested in the possible use of evolutionary optimization training for kernel support vector regression. Unlike KRR, kernel SVR cannot be easily trained using matrix or gradient techniques, because the SVR loss function is not differentiable everywhere. This is the kind of scenario where bio-inspired techniques like evolutionary optimization or particle swarm optimization might be valuable.
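The differentiability problem comes from SVR's epsilon-insensitive loss. A tiny sketch (my own function name) shows the kink:

```python
def eps_insensitive(pred, actual, eps=0.1):
    # SVR's epsilon-insensitive loss: zero inside the eps-tube around
    # the target, linear outside. The kink at |pred - actual| == eps
    # is where the loss is not differentiable, which is what rules out
    # plain gradient descent for kernel SVR.
    return max(0.0, abs(pred - actual) - eps)
```

Evolutionary optimization doesn't care about the kink because it never computes a gradient — it only compares fitness values.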
By the way, one of my sub-experiments used a different technique to initialize the population of possible solutions. I created a kernel matrix, computed the starting approximation used by Newton matrix inversion, and then computed a rough approximation of the model weights. I thought I was being very clever but the technique didn’t show any improvement over standard random initialization.
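That initialization idea can be sketched with the standard Newton–Schulz iteration for approximate matrix inversion. A hedged pure-Python illustration (my own names; the starting guess X0 = A^T / (||A||_1 * ||A||_inf) is the textbook choice, not necessarily exactly what the commented-out demo code uses):

```python
def newton_inverse(A, iters=30):
    # Newton-Schulz iteration: X_{k+1} = X_k (2I - A X_k).
    # The start X0 = A^T / (||A||_1 * ||A||_inf) guarantees convergence
    # for a nonsingular A; a truncated run gives a rough inverse that
    # could be used to seed an evolutionary population.
    n = len(A)
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(v) for v in row) for row in A)
    X = [[A[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]

    def matmul(P, Q):
        return [[sum(P[i][k] * Q[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    for _ in range(iters):
        AX = matmul(A, X)
        R = [[(2.0 if i == j else 0.0) - AX[i][j] for j in range(n)]
             for i in range(n)]
        X = matmul(X, R)
    return X
```

Multiplying the (approximate) inverse of K + alpha*I by the target vector y gives approximate model weights to seed the population with.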
Well, as I expected, evolutionary optimization training is just too slow. Darn. But when quantum computing becomes a reality, the situation may change and evolutionary optimization techniques might become very useful.

The evolution of TV cartoon shows has always interested me. Nickelodeon was launched in 1979 and was the first cable channel aimed at children and young adults. Over the years, Nickelodeon has featured many wonderful cartoon shows that were written with adult humor.
Left: “SpongeBob SquarePants” is by far the best known animated show from Nickelodeon. The series was introduced in 1999 and is still running strong. My two favorite characters on the show are cranky Squidward and tightwad Mr. Krabs.
Center: “The Fairly OddParents” ran from 2001 to 2017. Ten-year-old Timmy Turner has two fairy godparents, Cosmo and Wanda, who grant him wishes, but Timmy’s wishes often (well, always) backfire. I like Vicky, the suspicious babysitter.
Right: “The Angry Beavers” ran from 1997 to 2003. The series features brothers Norbert and Daggett who move out from home to be on their own. The stories varied a lot but were always wildly creative.
Demo program. Replace “lt” (less than), “gt”, “lte”, “gte” with Boolean operator symbols (my blog editor chokes on symbols).
using System;
using System.IO;
using System.Collections.Generic;
namespace KernelRidgeRegressionEvo
{
internal class KRREvoProgram
{
static void Main(string[] args)
{
Console.WriteLine("\nBegin Kernel Ridge " +
"Regression with evolutionary optimization training ");
// 1. Load data
Console.WriteLine("\nLoading train (200) and" +
" test (40) from file ");
string trainFile =
"..\\..\\..\\Data\\synthetic_train_200.txt";
double[][] trainX = MatLoad(trainFile,
new int[] { 0, 1, 2, 3, 4 }, ',', "#");
double[] trainY =
MatToVec(MatLoad(trainFile,
new int[] { 5 }, ',', "#"));
string testFile =
"..\\..\\..\\Data\\synthetic_test_40.txt";
double[][] testX = MatLoad(testFile,
new int[] { 0, 1, 2, 3, 4 }, ',', "#");
double[] testY =
MatToVec(MatLoad(testFile,
new int[] { 5 }, ',', "#"));
Console.WriteLine("Done ");
Console.WriteLine("\nFirst three train X: ");
for (int i = 0; i "lt" 3; ++i)
VecShow(trainX[i], 4, 8);
Console.WriteLine("\nFirst three train y: ");
for (int i = 0; i "lt" 3; ++i)
Console.WriteLine(trainY[i].ToString("F4").
PadLeft(8));
// 2. create and train
double gamma = 0.1; // RBF param
double alpha = 1.0e-3; // regularization; not used
Console.WriteLine("\nSetting RBF gamma = " +
gamma.ToString("F3"));
Console.WriteLine("Setting alpha = " +
alpha.ToString("F3"));
Console.WriteLine("\nCreating KRR model ");
KRR model = new KRR(gamma, alpha, seed:0);
Console.WriteLine("Done ");
int popSize = 200;
int maxGen = 2000000;
double mRate = 0.80;
double tau = 0.50; // parent selection pressure
double sigma = 0.50; // child replace pressure
double omega = 0.50; // rnd soln replace pressure
Console.WriteLine("\nSetting popSize = " + popSize);
Console.WriteLine("Setting maxGen = " + maxGen);
Console.WriteLine("Setting mRate = " +
mRate.ToString("F4"));
Console.WriteLine("Setting tau (parent pressure) = " +
tau.ToString("F2"));
Console.WriteLine("Setting sigma (child replace) = " +
sigma.ToString("F2"));
Console.WriteLine("Setting omega (rnd replace) = " +
omega.ToString("F2"));
Console.WriteLine("\nStarting evolutionary training ");
model.TrainEvo(trainX, trainY, popSize, maxGen,
mRate, tau, sigma, omega);
Console.WriteLine("Done ");
Console.WriteLine("\nModel weights/coefficients: ");
for (int i = 0; i "lt" model.weights.Length; ++i)
{
if (i "gt" 0 && i % 6 == 0)
Console.WriteLine("");
Console.Write(model.weights[i].ToString("F4").
PadLeft(10));
}
Console.WriteLine("");
// 3. evaluate
Console.WriteLine("\nComputing model accuracy ");
double trainAcc = model.Accuracy(trainX, trainY, 0.10);
double testAcc = model.Accuracy(testX, testY, 0.10);
Console.WriteLine("\nTrain acc (within 0.10) = " +
trainAcc.ToString("F4"));
Console.WriteLine("Test acc (within 0.10) = " +
testAcc.ToString("F4"));
double trainMSE = model.Error(trainX, trainY);
double testMSE = model.Error(testX, testY);
Console.WriteLine("\nTrain MSE = " +
trainMSE.ToString("F8"));
Console.WriteLine("Test MSE = " +
testMSE.ToString("F8"));
// 4. use model
double[] x = trainX[0];
Console.WriteLine("\nPredicting for x = ");
VecShow(x, 4, 9);
double predY = model.Predict(x);
Console.WriteLine("\npredicted y = " +
predY.ToString("F4"));
Console.WriteLine("\nEnd demo ");
Console.ReadLine();
} // Main
// ------------------------------------------------------
// helpers for Main()
// ------------------------------------------------------
static double[][] MatLoad(string fn, int[] usecols,
char sep, string comment)
{
List"lt"double[]"gt" result =
new List"lt"double[]"gt"();
string line = "";
FileStream ifs = new FileStream(fn, FileMode.Open);
StreamReader sr = new StreamReader(ifs);
while ((line = sr.ReadLine()) != null)
{
if (line.StartsWith(comment) == true)
continue;
string[] tokens = line.Split(sep);
List"lt"double"gt" lst = new List"lt"double"gt"();
for (int j = 0; j "lt" usecols.Length; ++j)
lst.Add(double.Parse(tokens[usecols[j]]));
double[] row = lst.ToArray();
result.Add(row);
}
sr.Close(); ifs.Close();
return result.ToArray();
}
static double[] MatToVec(double[][] mat)
{
int nRows = mat.Length;
int nCols = mat[0].Length;
double[] result = new double[nRows * nCols];
int k = 0;
for (int i = 0; i "lt" nRows; ++i)
for (int j = 0; j "lt" nCols; ++j)
result[k++] = mat[i][j];
return result;
}
static void VecShow(double[] vec, int dec, int wid)
{
for (int i = 0; i "lt" vec.Length; ++i)
Console.Write(vec[i].ToString("F" + dec).
PadLeft(wid));
Console.WriteLine("");
}
} // Program
public class KRR
{
public double gamma; // for RBF kernel
public double alpha; // regularization noise
public double[][] trainX; // needed for prediction
public double[] trainY; // not necessary
public double[] weights; // one per trainX item
private Random rnd;
// ------------------------------------------------------
public class Cell
{
public double[] chromo; // soln = wts, no bias!
public double error;
public Cell(int solnLen)
{
this.chromo = new double[solnLen];
this.error = 0.0;
}
} // Cell
// ------------------------------------------------------
public KRR(double gamma, double alpha, int seed = 0)
{
this.gamma = gamma;
this.alpha = alpha;
this.rnd = new Random(seed);
}
// ------------------------------------------------------
public double Predict(double[] x)
{
int N = this.trainX.Length;
double sum = 0.0;
for (int i = 0; i "lt" N; ++i)
{
double[] xx = this.trainX[i];
double k = Rbf(x, xx, this.gamma);
sum += this.weights[i] * k;
}
return sum;
}
// ------------------------------------------------------
private static double Rbf(double[] v1, double[] v2,
double gamma)
{
// the gamma version not len_scale version
int dim = v1.Length;
double sum = 0.0;
for (int i = 0; i "lt" dim; ++i)
{
sum += (v1[i] - v2[i]) * (v1[i] - v2[i]);
}
return Math.Exp(-1 * gamma * sum);
}
// ------------------------------------------------------
public double Accuracy(double[][] dataX, double[] dataY,
double pctClose)
{
int numCorrect = 0; int numWrong = 0;
int n = dataX.Length;
for (int i = 0; i "lt" n; ++i)
{
double[] x = dataX[i];
double predY = this.Predict(x);
double actualY = dataY[i];
if (Math.Abs(predY - actualY)
"lt" Math.Abs(pctClose * actualY))
numCorrect += 1;
else
numWrong += 1;
}
return (numCorrect * 1.0) / (numCorrect + numWrong);
}
//-------------------------------------------------------
public double Error(double[][] dataX, double[] dataY)
{
double sum = 0.0;
int n = dataX.Length;
for (int i = 0; i "lt" n; ++i)
{
double[] x = dataX[i];
double predY = this.Predict(x);
double actualY = dataY[i];
sum += (predY - actualY) * (predY - actualY);
}
return sum / n;
}
// ------------------------------------------------------
// ------------------------------------------------------
public double PredictUsing(double[] soln, double[] x)
{
int N = this.trainX.Length;
double sum = 0.0;
for (int i = 0; i "lt" N; ++i)
{
double[] xx = this.trainX[i];
double k = Rbf(x, xx, this.gamma);
//sum += this.wts[i] * k;
sum += soln[i] * k;
}
return sum;
}
public double ErrorUsing(double[] soln,
double[][] dataX, double[] dataY)
{
// mean squared error
int n = dataX.Length;
double sum = 0.0;
for (int i = 0; i "lt" n; ++i)
{
double[] x = dataX[i];
double predY = this.PredictUsing(soln, x);
double actualY = dataY[i];
sum += (predY - actualY) * (predY - actualY);
}
return sum / n;
}
public double AccuracyUsing(double[] soln,
double[][] dataX, double[] dataY,
double pctClose)
{
int numCorrect = 0; int numWrong = 0;
int n = dataX.Length;
for (int i = 0; i "lt" n; ++i)
{
double[] x = dataX[i];
double predY = this.PredictUsing(soln, x);
double actualY = dataY[i];
if (Math.Abs(predY - actualY)
"lt" Math.Abs(pctClose * actualY))
numCorrect += 1;
else
numWrong += 1;
}
return (numCorrect * 1.0) / (numCorrect + numWrong);
}
// ------------------------------------------------------
public void TrainEvo(double[][] trainX, double[] trainY,
int popSize, int maxGen, double mRate = 0.10,
double tau = 0.50, double sigma = 0.50,
double omega = 0.50)
{
// training data needed to predict
this.trainX = trainX; // by ref -- could copy
this.trainY = trainY; // not used this version
//// compute approx solution using K matrix
//int N = trainX.Length;
//double[][] K = new double[N][];
//for (int i = 0; i "lt" N; ++i)
// K[i] = new double[N];
//for (int i = 0; i "lt" N; ++i)
// for (int j = 0; j "lt" N; ++j)
// K[i][j] = Rbf(trainX[i], trainX[j], this.gamma);
//// add regularization on diagonal
//for (int i = 0; i "lt" N; ++i)
// K[i][i] += this.alpha;
//// compute Newton inverse start as approx
//double[][] approxInverse = NewtonStart(K);
//double[] approxWeights = VecMatProd(trainY,
// approxInverse);
// 1. allocate wts (now that dim is known via trainX)
int solnLen = trainX.Length; // one per data, no bias
this.weights = new double[solnLen];
// 2. create a pop of possible solns (wts)
//double lo = -10.0; double hi = +10.0;
double a = -0.10; double b = +0.10;
Cell[] pop = new Cell[popSize];
for (int i = 0; i "lt" popSize; ++i)
{
pop[i] = new Cell(solnLen);
for (int j = 0; j "lt" solnLen; ++j)
pop[i].chromo[j] = (b - a) *
this.rnd.NextDouble() + a;
//pop[i].chromo[j] = approxWeights[j] +
// ((b - a) * this.rnd.NextDouble()) + a;
pop[i].error = ErrorUsing(pop[i].chromo, trainX,
trainY);
}
// 3. set up goal: find best solution/chromo
double[] bestSoln = new double[solnLen]; // best found
double bestError = double.MaxValue;
// 4. main loop
for (int gen = 0; gen "lt" maxGen; ++gen)
{
// 4a. select two good parents (idxs)
int numItems = (int)(popSize * tau);
int[] allIndices = new int[popSize];
for (int i = 0; i "lt" popSize; ++i)
allIndices[i] = i;
this.Shuffle(allIndices);
int parent1 = allIndices[0];
double bestError1 = pop[parent1].error;
for (int i = 0; i "lt" numItems; ++i)
{
int idx = allIndices[i];
if (pop[idx].error "lt" bestError1)
{
parent1 = idx;
bestError1 = pop[idx].error;
}
}
this.Shuffle(allIndices);
int parent2 = allIndices[0];
double bestError2 = pop[parent2].error;
for (int i = 0; i "lt" numItems; ++i)
{
int idx = allIndices[i];
if (pop[idx].error "lt" bestError2)
{
parent2 = idx;
bestError2 = pop[idx].error;
}
}
// 4b. create a child
Cell child = new Cell(solnLen);
int crossIdx = this.rnd.Next(1, solnLen);
for (int j = 0; j "lt" crossIdx; ++j)
child.chromo[j] = pop[parent1].chromo[j]; // left
for (int j = crossIdx; j "lt" solnLen; ++j)
child.chromo[j] = pop[parent2].chromo[j]; // right
// 4c. mutate child and compute its error
//double mLo = -1.00; double mHi = 1.00;
double mLo = -0.001; double mHi = +0.001;
for (int j = 0; j "lt" solnLen; ++j)
{
double p = rnd.NextDouble();
if (p "lt" mRate) // rarely
{
child.chromo[j] += (mHi - mLo) *
this.rnd.NextDouble() + mLo;
}
}
child.error = ErrorUsing(child.chromo, trainX, trainY);
// 4d. is child new best soln?
if (child.error "lt" bestError)
{
bestError = child.error;
for (int j = 0; j "lt" solnLen; ++j)
{
bestSoln[j] = child.chromo[j];
}
}
// 4e. select a bad Cell to replace
this.Shuffle(allIndices);
numItems = (int)(popSize * sigma);
int badIdx = allIndices[0];
double worstError = pop[badIdx].error;
for (int i = 0; i "lt" numItems; ++i)
{
int idx = allIndices[i];
if (pop[idx].error "gt" worstError)
{
badIdx = idx;
worstError = pop[idx].error;
}
}
// 4f. replace bad Cell with new child
pop[badIdx] = child; // by ref
// 4g. create a random solution/Cell/chromo
Cell rndCell = new Cell(solnLen);
for (int j = 0; j "lt" solnLen; ++j)
rndCell.chromo[j] =
((b - a) * this.rnd.NextDouble()) + a;
//rndCell.chromo[j] = approxWeights[j] +
// ((b - a) * this.rnd.NextDouble()) + a;
rndCell.error =
ErrorUsing(rndCell.chromo, trainX, trainY);
// 4h. is random Cell new best soln?
if (rndCell.error "lt" bestError)
{
bestError = rndCell.error;
for (int j = 0; j "lt" solnLen; ++j)
{
bestSoln[j] = rndCell.chromo[j];
}
}
// 4i. replace bad Cell with random solution/Cell
this.Shuffle(allIndices);
numItems = (int)(popSize * omega);
badIdx = allIndices[0];
worstError = pop[badIdx].error;
for (int i = 0; i "lt" numItems; ++i)
{
int idx = allIndices[i];
if (pop[idx].error "gt" worstError)
{
badIdx = idx;
worstError = pop[idx].error;
}
}
// 4j. replace bad Cell with random solution
pop[badIdx] = rndCell; // by ref
// show progress
if (gen % (maxGen / 10) == 0) // display progress
{
double bestAcc =
this.AccuracyUsing(bestSoln, trainX, trainY, 0.10);
string s1 = "generation = " +
gen.ToString().PadLeft(8);
string s2 = " error = " +
bestError.ToString("F8").PadLeft(12);
string s3 = " accu (0.10) = " +
bestAcc.ToString("F4");
Console.WriteLine(s1 + s2 + s3);
}
} // maxGen
// 5. copy best soln found to model weights
for (int j = 0; j "lt" solnLen; ++j)
this.weights[j] = bestSoln[j];
} // TrainEvo
private void Shuffle(int[] arr)
{
// Fisher-Yates algorithm
int n = arr.Length;
for (int i = 0; i "lt" n; ++i)
{
int ri = this.rnd.Next(i, n); // random index
int tmp = arr[ri];
arr[ri] = arr[i];
arr[i] = tmp;
}
}
//public static double[][] MatProduct(double[][] matA,
// double[][] matB)
//{
// int aRows = matA.Length;
// int aCols = matA[0].Length;
// int bRows = matB.Length;
// int bCols = matB[0].Length;
// if (aCols != bRows)
// throw new Exception("Non-conformable matrices");
// double[][] result = new double[aRows][];
// for (int i = 0; i "lt" aRows; ++i)
// result[i] = new double[bCols];
// for (int i = 0; i "lt" aRows; ++i) // each row of A
// for (int j = 0; j "lt" bCols; ++j) // each col of B
// for (int k = 0; k "lt" aCols; ++k)
// result[i][j] += matA[i][k] * matB[k][j];
// return result;
//}
//// ------------------------------------------------------
//public static double[] VecMatProd(double[] v,
// double[][] m)
//{
// // one-dim vec * two-dim mat
// int nRows = m.Length;
// int nCols = m[0].Length;
// int n = v.Length;
// if (n != nCols)
// throw new Exception("non-comform in VecMatProd");
// double[] result = new double[n];
// for (int i = 0; i "lt" n; ++i)
// {
// for (int j = 0; j "lt" nCols; ++j)
// {
// result[i] += v[j] * m[i][j];
// }
// }
// return result;
//}
//// ------------------------------------------------------
//static double[][] NewtonStart(double[][] m)
//{
// int n = m.Length;
// double maxRowSum = 0.0;
// double maxColSum = 0.0;
// for (int i = 0; i "lt" n; ++i)
// {
// double rowSum = 0.0;
// for (int j = 0; j "lt" n; ++j)
// rowSum += Math.Abs(m[i][j]);
// if (rowSum "gt" maxRowSum)
// maxRowSum = rowSum;
// }
// for (int j = 0; j "lt" n; ++j)
// {
// double colSum = 0.0;
// for (int i = 0; i "lt" n; ++i)
// colSum += Math.Abs(m[i][j]);
// if (colSum "gt" maxColSum)
// maxColSum = colSum;
// }
// double[][] result = MatTranspose(m);
// double t = 1.0 / (maxRowSum * maxRowSum + 1.0e-8);
// for (int i = 0; i "lt" m.Length; ++i)
// for (int j = 0; j "lt" m.Length; ++j)
// result[i][j] *= t;
// return result;
//} // NewtonStart()
//// ------------------------------------------------------
//public static double[][] MatTranspose(double[][] m)
//{
// int nr = m.Length; int nc = m[0].Length;
// //double[][] result = MatMake(nc, nr); // note
// double[][] result = new double[nc][];
// for (int j = 0; j "lt" nc; ++j)
// result[j] = new double[nr];
// for (int i = 0; i "lt" nr; ++i)
// for (int j = 0; j "lt" nc; ++j)
// result[j][i] = m[i][j];
// return result;
//}
// ------------------------------------------------------
} // KRR
} // ns
Training data:
# synthetic_train_200.txt # -0.1660, 0.4406, -0.9998, -0.3953, -0.7065, 0.4840 0.0776, -0.1616, 0.3704, -0.5911, 0.7562, 0.1568 -0.9452, 0.3409, -0.1654, 0.1174, -0.7192, 0.8054 0.9365, -0.3732, 0.3846, 0.7528, 0.7892, 0.1345 -0.8299, -0.9219, -0.6603, 0.7563, -0.8033, 0.7955 0.0663, 0.3838, -0.3690, 0.3730, 0.6693, 0.3206 -0.9634, 0.5003, 0.9777, 0.4963, -0.4391, 0.7377 -0.1042, 0.8172, -0.4128, -0.4244, -0.7399, 0.4801 -0.9613, 0.3577, -0.5767, -0.4689, -0.0169, 0.6861 -0.7065, 0.1786, 0.3995, -0.7953, -0.1719, 0.5569 0.3888, -0.1716, -0.9001, 0.0718, 0.3276, 0.2500 0.1731, 0.8068, -0.7251, -0.7214, 0.6148, 0.3297 -0.2046, -0.6693, 0.8550, -0.3045, 0.5016, 0.2129 0.2473, 0.5019, -0.3022, -0.4601, 0.7918, 0.2613 -0.1438, 0.9297, 0.3269, 0.2434, -0.7705, 0.5171 0.1568, -0.1837, -0.5259, 0.8068, 0.1474, 0.3307 -0.9943, 0.2343, -0.3467, 0.0541, 0.7719, 0.5581 0.2467, -0.9684, 0.8589, 0.3818, 0.9946, 0.1092 -0.6553, -0.7257, 0.8652, 0.3936, -0.8680, 0.7018 0.8460, 0.4230, -0.7515, -0.9602, -0.9476, 0.1996 -0.9434, -0.5076, 0.7201, 0.0777, 0.1056, 0.5664 0.9392, 0.1221, -0.9627, 0.6013, -0.5341, 0.1533 0.6142, -0.2243, 0.7271, 0.4942, 0.1125, 0.1661 0.4260, 0.1194, -0.9749, -0.8561, 0.9346, 0.2230 0.1362, -0.5934, -0.4953, 0.4877, -0.6091, 0.3810 0.6937, -0.5203, -0.0125, 0.2399, 0.6580, 0.1460 -0.6864, -0.9628, -0.8600, -0.0273, 0.2127, 0.5387 0.9772, 0.1595, -0.2397, 0.1019, 0.4907, 0.1611 0.3385, -0.4702, -0.8673, -0.2598, 0.2594, 0.2270 -0.8669, -0.4794, 0.6095, -0.6131, 0.2789, 0.4700 0.0493, 0.8496, -0.4734, -0.8681, 0.4701, 0.3516 0.8639, -0.9721, -0.5313, 0.2336, 0.8980, 0.1412 0.9004, 0.1133, 0.8312, 0.2831, -0.2200, 0.1782 0.0991, 0.8524, 0.8375, -0.2102, 0.9265, 0.2150 -0.6521, -0.7473, -0.7298, 0.0113, -0.9570, 0.7422 0.6190, -0.3105, 0.8802, 0.1640, 0.7577, 0.1056 0.6895, 0.8108, -0.0802, 0.0927, 0.5972, 0.2214 0.1982, -0.9689, 0.1870, -0.1326, 0.6147, 0.1310 -0.3695, 0.7858, 0.1557, -0.6320, 0.5759, 0.3773 -0.1596, 0.3581, 0.8372, -0.9992, 0.9535, 0.2071 
-0.2468, 0.9476, 0.2094, 0.6577, 0.1494, 0.4132 0.1737, 0.5000, 0.7166, 0.5102, 0.3961, 0.2611 0.7290, -0.3546, 0.3416, -0.0983, -0.2358, 0.1332 -0.3652, 0.2438, -0.1395, 0.9476, 0.3556, 0.4170 -0.6029, -0.1466, -0.3133, 0.5953, 0.7600, 0.4334 -0.4596, -0.4953, 0.7098, 0.0554, 0.6043, 0.2775 0.1450, 0.4663, 0.0380, 0.5418, 0.1377, 0.2931 -0.8636, -0.2442, -0.8407, 0.9656, -0.6368, 0.7429 0.6237, 0.7499, 0.3768, 0.1390, -0.6781, 0.2185 -0.5499, 0.1850, -0.3755, 0.8326, 0.8193, 0.4399 -0.4858, -0.7782, -0.6141, -0.0008, 0.4572, 0.4197 0.7033, -0.1683, 0.2334, -0.5327, -0.7961, 0.1776 0.0317, -0.0457, -0.6947, 0.2436, 0.0880, 0.3345 0.5031, -0.5559, 0.0387, 0.5706, -0.9553, 0.3107 -0.3513, 0.7458, 0.6894, 0.0769, 0.7332, 0.3170 0.2205, 0.5992, -0.9309, 0.5405, 0.4635, 0.3532 -0.4806, -0.4859, 0.2646, -0.3094, 0.5932, 0.3202 0.9809, -0.3995, -0.7140, 0.8026, 0.0831, 0.1600 0.9495, 0.2732, 0.9878, 0.0921, 0.0529, 0.1289 -0.9476, -0.6792, 0.4913, -0.9392, -0.2669, 0.5966 0.7247, 0.3854, 0.3819, -0.6227, -0.1162, 0.1550 -0.5922, -0.5045, -0.4757, 0.5003, -0.0860, 0.5863 -0.8861, 0.0170, -0.5761, 0.5972, -0.4053, 0.7301 0.6877, -0.2380, 0.4997, 0.0223, 0.0819, 0.1404 0.9189, 0.6079, -0.9354, 0.4188, -0.0700, 0.1907 -0.1428, -0.7820, 0.2676, 0.6059, 0.3936, 0.2790 0.5324, -0.3151, 0.6917, -0.1425, 0.6480, 0.1071 -0.8432, -0.9633, -0.8666, -0.0828, -0.7733, 0.7784 -0.9444, 0.5097, -0.2103, 0.4939, -0.0952, 0.6787 -0.0520, 0.6063, -0.1952, 0.8094, -0.9259, 0.4836 0.5477, -0.7487, 0.2370, -0.9793, 0.0773, 0.1241 0.2450, 0.8116, 0.9799, 0.4222, 0.4636, 0.2355 0.8186, -0.1983, -0.5003, -0.6531, -0.7611, 0.1511 -0.4714, 0.6382, -0.3788, 0.9648, -0.4667, 0.5950 0.0673, -0.3711, 0.8215, -0.2669, -0.1328, 0.2677 -0.9381, 0.4338, 0.7820, -0.9454, 0.0441, 0.5518 -0.3480, 0.7190, 0.1170, 0.3805, -0.0943, 0.4724 -0.9813, 0.1535, -0.3771, 0.0345, 0.8328, 0.5438 -0.1471, -0.5052, -0.2574, 0.8637, 0.8737, 0.3042 -0.5454, -0.3712, -0.6505, 0.2142, -0.1728, 0.5783 0.6327, -0.6297, 0.4038, 
-0.5193, 0.1484, 0.1153
-0.5424, 0.3282, -0.0055, 0.0380, -0.6506, 0.6613
0.1414, 0.9935, 0.6337, 0.1887, 0.9520, 0.2540
-0.9351, -0.8128, -0.8693, -0.0965, -0.2491, 0.7353
0.9507, -0.6640, 0.9456, 0.5349, 0.6485, 0.1059
-0.0462, -0.9737, -0.2940, -0.0159, 0.4602, 0.2606
-0.0627, -0.0852, -0.7247, -0.9782, 0.5166, 0.2977
0.0478, 0.5098, -0.0723, -0.7504, -0.3750, 0.3335
0.0090, 0.3477, 0.5403, -0.7393, -0.9542, 0.4415
-0.9748, 0.3449, 0.3736, -0.1015, 0.8296, 0.4358
0.2887, -0.9895, -0.0311, 0.7186, 0.6608, 0.2057
0.1570, -0.4518, 0.1211, 0.3435, -0.2951, 0.3244
0.7117, -0.6099, 0.4946, -0.4208, 0.5476, 0.1096
-0.2929, -0.5726, 0.5346, -0.3827, 0.4665, 0.2465
0.4889, -0.5572, -0.5718, -0.6021, -0.7150, 0.2163
-0.7782, 0.3491, 0.5996, -0.8389, -0.5366, 0.6516
-0.5847, 0.8347, 0.4226, 0.1078, -0.3910, 0.6134
0.8469, 0.4121, -0.0439, -0.7476, 0.9521, 0.1571
-0.6803, -0.5948, -0.1376, -0.1916, -0.7065, 0.7156
0.2878, 0.5086, -0.5785, 0.2019, 0.4979, 0.2980
0.2764, 0.1943, -0.4090, 0.4632, 0.8906, 0.2960
-0.8877, 0.6705, -0.6155, -0.2098, -0.3998, 0.7107
-0.8398, 0.8093, -0.2597, 0.0614, -0.0118, 0.6502
-0.8476, 0.0158, -0.4769, -0.2859, -0.7839, 0.7715
0.5751, -0.7868, 0.9714, -0.6457, 0.1448, 0.1175
0.4802, -0.7001, 0.1022, -0.5668, 0.5184, 0.1090
0.4458, -0.6469, 0.7239, -0.9604, 0.7205, 0.0779
0.5175, 0.4339, 0.9747, -0.4438, -0.9924, 0.2879
0.8678, 0.7158, 0.4577, 0.0334, 0.4139, 0.1678
0.5406, 0.5012, 0.2264, -0.1963, 0.3946, 0.2088
-0.9938, 0.5498, 0.7928, -0.5214, -0.7585, 0.7687
0.7661, 0.0863, -0.4266, -0.7233, -0.4197, 0.1466
0.2277, -0.3517, -0.0853, -0.1118, 0.6563, 0.1767
0.3499, -0.5570, -0.0655, -0.3705, 0.2537, 0.1632
0.7547, -0.1046, 0.5689, -0.0861, 0.3125, 0.1257
0.8186, 0.2110, 0.5335, 0.0094, -0.0039, 0.1391
0.6858, -0.8644, 0.1465, 0.8855, 0.0357, 0.1845
-0.4967, 0.4015, 0.0805, 0.8977, 0.2487, 0.4663
0.6760, -0.9841, 0.9787, -0.8446, -0.3557, 0.1509
-0.1203, -0.4885, 0.6054, -0.0443, -0.7313, 0.4854
0.8557, 0.7919, -0.0169, 0.7134, -0.1628, 0.2002
0.0115, -0.6209, 0.9300, -0.4116, -0.7931, 0.4052
-0.7114, -0.9718, 0.4319, 0.1290, 0.5892, 0.3661
0.3915, 0.5557, -0.1870, 0.2955, -0.6404, 0.2954
-0.3564, -0.6548, -0.1827, -0.5172, -0.1862, 0.4622
0.2392, -0.4959, 0.5857, -0.1341, -0.2850, 0.2470
-0.3394, 0.3947, -0.4627, 0.6166, -0.4094, 0.5325
0.7107, 0.7768, -0.6312, 0.1707, 0.7964, 0.2757
-0.1078, 0.8437, -0.4420, 0.2177, 0.3649, 0.4028
-0.3139, 0.5595, -0.6505, -0.3161, -0.7108, 0.5546
0.4335, 0.3986, 0.3770, -0.4932, 0.3847, 0.1810
-0.2562, -0.2894, -0.8847, 0.2633, 0.4146, 0.4036
0.2272, 0.2966, -0.6601, -0.7011, 0.0284, 0.2778
-0.0743, -0.1421, -0.0054, -0.6770, -0.3151, 0.3597
-0.4762, 0.6891, 0.6007, -0.1467, 0.2140, 0.4266
-0.4061, 0.7193, 0.3432, 0.2669, -0.7505, 0.6147
-0.0588, 0.9731, 0.8966, 0.2902, -0.6966, 0.4955
-0.0627, -0.1439, 0.1985, 0.6999, 0.5022, 0.3077
0.1587, 0.8494, -0.8705, 0.9827, -0.8940, 0.4263
-0.7850, 0.2473, -0.9040, -0.4308, -0.8779, 0.7199
0.4070, 0.3369, -0.2428, -0.6236, 0.4940, 0.2215
-0.0242, 0.0513, -0.9430, 0.2885, -0.2987, 0.3947
-0.5416, -0.1322, -0.2351, -0.0604, 0.9590, 0.3683
0.1055, 0.7783, -0.2901, -0.5090, 0.8220, 0.2984
-0.9129, 0.9015, 0.1128, -0.2473, 0.9901, 0.4776
-0.9378, 0.1424, -0.6391, 0.2619, 0.9618, 0.5368
0.7498, -0.0963, 0.4169, 0.5549, -0.0103, 0.1614
-0.2612, -0.7156, 0.4538, -0.0460, -0.1022, 0.3717
0.7720, 0.0552, -0.1818, -0.4622, -0.8560, 0.1685
-0.4177, 0.0070, 0.9319, -0.7812, 0.3461, 0.3052
-0.0001, 0.5542, -0.7128, -0.8336, -0.2016, 0.3803
0.5356, -0.4194, -0.5662, -0.9666, -0.2027, 0.1776
-0.2378, 0.3187, -0.8582, -0.6948, -0.9668, 0.5474
-0.1947, -0.3579, 0.1158, 0.9869, 0.6690, 0.2992
0.3992, 0.8365, -0.9205, -0.8593, -0.0520, 0.3154
-0.0209, 0.0793, 0.7905, -0.1067, 0.7541, 0.1864
-0.4928, -0.4524, -0.3433, 0.0951, -0.5597, 0.6261
-0.8118, 0.7404, -0.5263, -0.2280, 0.1431, 0.6349
0.0516, -0.8480, 0.7483, 0.9023, 0.6250, 0.1959
-0.3212, 0.1093, 0.9488, -0.3766, 0.3376, 0.2735
-0.3481, 0.5490, -0.3484, 0.7797, 0.5034, 0.4379
-0.5785, -0.9170, -0.3563, -0.9258, 0.3877, 0.4121
0.3407, -0.1391, 0.5356, 0.0720, -0.9203, 0.3458
-0.3287, -0.8954, 0.2102, 0.0241, 0.2349, 0.3247
-0.1353, 0.6954, -0.0919, -0.9692, 0.7461, 0.3338
0.9036, -0.8982, -0.5299, -0.8733, -0.1567, 0.1187
0.7277, -0.8368, -0.0538, -0.7489, 0.5458, 0.0830
0.9049, 0.8878, 0.2279, 0.9470, -0.3103, 0.2194
0.7957, -0.1308, -0.5284, 0.8817, 0.3684, 0.2172
0.4647, -0.4931, 0.2010, 0.6292, -0.8918, 0.3371
-0.7390, 0.6849, 0.2367, 0.0626, -0.5034, 0.7039
-0.1567, -0.8711, 0.7940, -0.5932, 0.6525, 0.1710
0.7635, -0.0265, 0.1969, 0.0545, 0.2496, 0.1445
0.7675, 0.1354, -0.7698, -0.5460, 0.1920, 0.1728
-0.5211, -0.7372, -0.6763, 0.6897, 0.2044, 0.5217
0.1913, 0.1980, 0.2314, -0.8816, 0.5006, 0.1998
0.8964, 0.0694, -0.6149, 0.5059, -0.9854, 0.1825
0.1767, 0.7104, 0.2093, 0.6452, 0.7590, 0.2832
-0.3580, -0.7541, 0.4426, -0.1193, -0.7465, 0.5657
-0.5996, 0.5766, -0.9758, -0.3933, -0.9572, 0.6800
0.9950, 0.1641, -0.4132, 0.8579, 0.0142, 0.2003
-0.4717, -0.3894, -0.2567, -0.5111, 0.1691, 0.4266
0.3917, -0.8561, 0.9422, 0.5061, 0.6123, 0.1212
-0.0366, -0.1087, 0.3449, -0.1025, 0.4086, 0.2475
0.3633, 0.3943, 0.2372, -0.6980, 0.5216, 0.1925
-0.5325, -0.6466, -0.2178, -0.3589, 0.6310, 0.3568
0.2271, 0.5200, -0.1447, -0.8011, -0.7699, 0.3128
0.6415, 0.1993, 0.3777, -0.0178, -0.8237, 0.2181
-0.5298, -0.0768, -0.6028, -0.9490, 0.4588, 0.4356
0.6870, -0.1431, 0.7294, 0.3141, 0.1621, 0.1632
-0.5985, 0.0591, 0.7889, -0.3900, 0.7419, 0.2945
0.3661, 0.7984, -0.8486, 0.7572, -0.6183, 0.3449
0.6995, 0.3342, -0.3113, -0.6972, 0.2707, 0.1712
0.2565, 0.9126, 0.1798, -0.6043, -0.1413, 0.2893
-0.3265, 0.9839, -0.2395, 0.9854, 0.0376, 0.4770
0.2690, -0.1722, 0.9818, 0.8599, -0.7015, 0.3954
-0.2102, -0.0768, 0.1219, 0.5607, -0.0256, 0.3949
0.8216, -0.9555, 0.6422, -0.6231, 0.3715, 0.0801
-0.2896, 0.9484, -0.7545, -0.6249, 0.7789, 0.4370
-0.9985, -0.5448, -0.7092, -0.5931, 0.7926, 0.5402
Test data:
# synthetic_test_40.txt
#
0.7462, 0.4006, -0.0590, 0.6543, -0.0083, 0.1935
0.8495, -0.2260, -0.0142, -0.4911, 0.7699, 0.1078
-0.2335, -0.4049, 0.4352, -0.6183, -0.7636, 0.5088
0.1810, -0.5142, 0.2465, 0.2767, -0.3449, 0.3136
-0.8650, 0.7611, -0.0801, 0.5277, -0.4922, 0.7140
-0.2358, -0.7466, -0.5115, -0.8413, -0.3943, 0.4533
0.4834, 0.2300, 0.3448, -0.9832, 0.3568, 0.1360
-0.6502, -0.6300, 0.6885, 0.9652, 0.8275, 0.3046
-0.3053, 0.5604, 0.0929, 0.6329, -0.0325, 0.4756
-0.7995, 0.0740, -0.2680, 0.2086, 0.9176, 0.4565
-0.2144, -0.2141, 0.5813, 0.2902, -0.2122, 0.4119
-0.7278, -0.0987, -0.3312, -0.5641, 0.8515, 0.4438
0.3793, 0.1976, 0.4933, 0.0839, 0.4011, 0.1905
-0.8568, 0.9573, -0.5272, 0.3212, -0.8207, 0.7415
-0.5785, 0.0056, -0.7901, -0.2223, 0.0760, 0.5551
0.0735, -0.2188, 0.3925, 0.3570, 0.3746, 0.2191
0.1230, -0.2838, 0.2262, 0.8715, 0.1938, 0.2878
0.4792, -0.9248, 0.5295, 0.0366, -0.9894, 0.3149
-0.4456, 0.0697, 0.5359, -0.8938, 0.0981, 0.3879
0.8629, -0.8505, -0.4464, 0.8385, 0.5300, 0.1769
0.1995, 0.6659, 0.7921, 0.9454, 0.9970, 0.2330
-0.0249, -0.3066, -0.2927, -0.4923, 0.8220, 0.2437
0.4513, -0.9481, -0.0770, -0.4374, -0.9421, 0.2879
-0.3405, 0.5931, -0.3507, -0.3842, 0.8562, 0.3987
0.9538, 0.0471, 0.9039, 0.7760, 0.0361, 0.1706
-0.0887, 0.2104, 0.9808, 0.5478, -0.3314, 0.4128
-0.8220, -0.6302, 0.0537, -0.1658, 0.6013, 0.4306
-0.4123, -0.2880, 0.9074, -0.0461, -0.4435, 0.5144
0.0060, 0.2867, -0.7775, 0.5161, 0.7039, 0.3599
-0.7968, -0.5484, 0.9426, -0.4308, 0.8148, 0.2979
0.7811, 0.8450, -0.6877, 0.7594, 0.2640, 0.2362
-0.6802, -0.1113, -0.8325, -0.6694, -0.6056, 0.6544
0.3821, 0.1476, 0.7466, -0.5107, 0.2592, 0.1648
0.7265, 0.9683, -0.9803, -0.4943, -0.5523, 0.2454
-0.9049, -0.9797, -0.0196, -0.9090, -0.4433, 0.6447
-0.4607, 0.1811, -0.2389, 0.4050, -0.0078, 0.5229
0.2664, -0.2932, -0.4259, -0.7336, 0.8742, 0.1834
-0.4507, 0.1029, -0.6294, -0.1158, -0.6294, 0.6081
0.8948, -0.0124, 0.9278, 0.2899, -0.0314, 0.1534
-0.1323, -0.8813, -0.0146, -0.0697, 0.6135, 0.2386
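Each line of the data files holds five comma-separated predictor values followed by one target value, and lines that begin with "#" are comments. A minimal sketch of how such a file could be read with NumPy (the load_data helper name is mine, not part of the demo program):

```python
import numpy as np

def load_data(fname):
    # comments="#" skips header lines; delimiter="," splits each row
    # into the six float values; ndmin=2 keeps a 2-D shape even if
    # the file happens to contain only one data row
    data = np.loadtxt(fname, delimiter=",", comments="#",
                      dtype=np.float64, ndmin=2)
    X = data[:, 0:5]  # first five columns: predictors
    y = data[:, 5]    # last column: target to predict
    return X, y
```

For example, load_data("synthetic_test_40.txt") would return a (40, 5) matrix of predictors and a 40-item vector of targets.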
