A few days ago, I was looking at linear support vector regression (linear SVR). Linear SVR is unusual in the sense that a model cannot be trained using standard gradient-based techniques such as stochastic gradient descent or L-BFGS, because the loss/error function (“epsilon-insensitive loss”) is not differentiable everywhere. So, I used an evolutionary algorithm to train a demo SVR model.
The experiment with an evolutionary algorithm was a success, but I wondered how particle swarm optimization (PSO) would fare for training a linear SVR model. Bottom line: using PSO for training a linear SVR model worked even better than using an evolutionary algorithm.
My demo data looks like:
-0.1660, 0.4406, -0.9998, -0.3953, -0.7065, 0.4840
0.0776, -0.1616, 0.3704, -0.5911, 0.7562, 0.1568
-0.9452, 0.3409, -0.1654, 0.1174, -0.7192, 0.8054
0.9365, -0.3732, 0.3846, 0.7528, 0.7892, 0.1345
. . .
The first five values on each line are the predictor values. The last value on each line is the target value to predict. The data is synthetic. It was generated by a 5-10-1 neural network with random weights and biases, so the data is predictable in theory, but not too predictable by linear techniques. There are 200 training items and 40 test items.
For regular linear regression, during training, all data items contribute equally to the error/loss via mean squared error. But for support vector regression, items with predicted y values that are close (within a small distance epsilon) to the actual target y values do not directly contribute to the loss. Only items where the predicted y value differs from the true target y value by more than epsilon contribute to the loss. This is called epsilon-insensitive loss.
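For example, using made-up values, suppose epsilon = 0.10 and a training item has true target y = 0.50. If the predicted y’ = 0.55, the difference is 0.05, which is inside the epsilon tube, so the item contributes nothing to the loss. But if the predicted y’ = 0.75, the difference is 0.25, so the item contributes 0.25 - 0.10 = 0.15 to the loss. In other words, each item’s contribution is max(0, |y - y’| - epsilon).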
The diagram below gives you a rough idea for a scenario where there is just one predictor value x. Each dot is a training data item. The red line is the linear prediction equation created using epsilon-insensitive loss. The epsilon value (like a lower-case ‘e’, often about 0.10 or so) creates a tube around the prediction equation. Data items that fall within the tube do not contribute to the loss value. Each data item that falls outside the tube generates a loss value usually denoted by Greek xi (looks like a script ‘E’).
The loss function for linear SVR is (1/2 * ||w||^2) + (C * sum(xi values)). The ||w||^2 term is the squared vector norm of the weights (including the bias, in this demo’s formulation). For example, if there are three predictors, the linear prediction equation is y’ = (w0 * x0) + (w1 * x1) + (w2 * x2) + b and ||w||^2 = w0^2 + w1^2 + w2^2 + b^2. C is a free parameter, usually about 1.0 or 2.0. A larger value of C penalizes outlier data items more than a smaller value does. The idea of minimizing the magnitudes of the weights is subtle: it creates a “flatter” prediction equation, meaning the weight values stay in roughly the same magnitude range rather than being wildly different.
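As a concrete example with made-up values: suppose there are two predictors with w = (0.5, -0.3) and b = 0.2. Then ||w||^2 = 0.25 + 0.09 + 0.04 = 0.38 and the first term of the loss is 0.38 / 2 = 0.19. If exactly two data items fall outside the epsilon tube, with xi deviations of 0.15 and 0.08, and C = 1.0, the total loss is 0.19 + (1.0 * (0.15 + 0.08)) = 0.42.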
Now here’s where the difficulty of linear support vector regression arises. The loss function is not differentiable, which means that standard techniques for finding the values of the weights and the bias, in particular the stochastic gradient descent optimization algorithm, don’t work. This means you must use extremely complex techniques like quadratic programming algorithms, or . . . use something like particle swarm optimization. In pseudo-code:
create a swarm of particles (possible solutions)
loop max_iteration times
for-each particle
compute a new velocity using
1. curr velocity,
2. best known previous position (solution),
3. and best known previous position of any particle in swarm
use new velocity to update position/solution
keep track if new best solution found
end-for
end-loop
return best position/solution found
The details are very clever but a bit tricky. The two equations that control how a particle “moves” (updates its position/solution) are:
v(t+1) = (w * v(t)) +
         (c1 * r1 * (p(t) - x(t))) +
         (c2 * r2 * (g(t) - x(t)))
x(t+1) = x(t) + v(t+1)
In the second equation, x(t) is the position/solution at time t and x(t+1) is the new position/solution. The v(t+1) is the new velocity.
In the first equation, the new velocity has three components. The first component is a constant w times the current velocity. The second component uses constant c1, random value r1, the best position the particle has encountered p(t), and the current position. The third component has constant c2, random value r2, the best position any particle has encountered g(t), and the current position.
Suppose the goal is to find the values of (x0, x1) that minimize f(x0, x1) = x0^2 + x1^2. The obvious solution is (x0, x1) = (0.0, 0.0) but pretend you don’t know this. Suppose at some time t, a particle has current position/solution x(t) = (3.0, 4.0). And the current velocity = (-1.0, -1.5). And magic constants w = 0.7, c1 = 1.4, c2 = 1.4. And random values r1 = 0.5 and r2 = 0.6. And suppose the best position seen by the particle so far is (2.5, 3.6), and the best position seen by any particle so far is (2.3, 3.4).
The new velocity is:
v(t+1) = (w * v(t)) +
         (c1 * r1 * (p(t) - x(t))) +
         (c2 * r2 * (g(t) - x(t)))
       = (0.7 * (-1.0, -1.5)) +
         (1.4 * 0.5 * ((2.5, 3.6) - (3.0, 4.0))) +
         (1.4 * 0.6 * ((2.3, 3.4) - (3.0, 4.0)))
= (-0.70, -1.05) + (-0.35, -0.28) + (-0.59, -0.50)
= (-1.64, -1.83)
The new position/solution is:
x(t+1) = x(t) + v(t+1)
= (3.0, 4.0) + (-1.64, -1.83)
= (1.36, 2.17)
Recall that the optimal solution is (x0, x1) = (0.0, 0.0). The update process has improved the old position/solution from (3.0, 4.0) to (1.36, 2.17). If the process continued, the particle’s position/solution would quickly approach the optimal solution of (0.0, 0.0).
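To make the update process concrete, here is a minimal, self-contained C# sketch of a full PSO loop applied to this same sphere function. The swarm size, iteration count, and initialization ranges are made-up values for illustration only; they are not the values used by the SVR demo program presented below.

// Minimal PSO sketch: minimize f(x0, x1) = x0^2 + x1^2
// using the two update equations described above.
using System;

public class PsoSphereSketch
{
  public static void Main()
  {
    Random rnd = new Random(0);
    int numParticles = 10; int dim = 2; int maxIter = 100;
    double w = 0.729, c1 = 1.49445, c2 = 1.49445;

    double[][] pos = new double[numParticles][];   // positions
    double[][] vel = new double[numParticles][];   // velocities
    double[][] best = new double[numParticles][];  // per-particle bests
    double[] bestLoss = new double[numParticles];
    double[] gBest = new double[dim];               // global best
    double gBestLoss = double.MaxValue;

    for (int i = 0; i < numParticles; ++i)  // initialize swarm
    {
      pos[i] = new double[dim];
      vel[i] = new double[dim];
      best[i] = new double[dim];
      for (int j = 0; j < dim; ++j)
      {
        pos[i][j] = 10.0 * rnd.NextDouble() - 5.0;  // in [-5, +5]
        vel[i][j] = 2.0 * rnd.NextDouble() - 1.0;   // in [-1, +1]
        best[i][j] = pos[i][j];
      }
      bestLoss[i] = Loss(pos[i]);
      if (bestLoss[i] < gBestLoss)
      {
        gBestLoss = bestLoss[i];
        Array.Copy(pos[i], gBest, dim);
      }
    }

    for (int iter = 0; iter < maxIter; ++iter)
    {
      for (int i = 0; i < numParticles; ++i)
      {
        for (int j = 0; j < dim; ++j)  // the two update equations
        {
          double r1 = rnd.NextDouble();
          double r2 = rnd.NextDouble();
          vel[i][j] = (w * vel[i][j]) +
            (c1 * r1 * (best[i][j] - pos[i][j])) +
            (c2 * r2 * (gBest[j] - pos[i][j]));
          pos[i][j] += vel[i][j];
        }
        double loss = Loss(pos[i]);
        if (loss < bestLoss[i])  // new particle best?
        {
          bestLoss[i] = loss;
          Array.Copy(pos[i], best[i], dim);
        }
        if (loss < gBestLoss)  // new global best?
        {
          gBestLoss = loss;
          Array.Copy(pos[i], gBest, dim);
        }
      }
    }
    Console.WriteLine("best loss found = " +
      gBestLoss.ToString("F6"));  // very close to 0.0
  }

  static double Loss(double[] x)  // the sphere function
  {
    double sum = 0.0;
    for (int j = 0; j < x.Length; ++j)
      sum += x[j] * x[j];
    return sum;
  }
}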
The output of my demo program is:
Begin linear support vector regression with particle swarm training demo

Loading synthetic train (200) and test (40) data
Done

First three train X:
 -0.1660  0.4406 -0.9998 -0.3953 -0.7065
  0.0776 -0.1616  0.3704 -0.5911  0.7562
 -0.9452  0.3409 -0.1654  0.1174 -0.7192

First three train y:
  0.4840
  0.1568
  0.8054

Creating SVR linear model
Done

Setting SVR parameters:
C = 1.00
epsilon = 0.10

Setting particle swarm training parameters:
numParticles = 100
maxIter = 1000

Starting PSO training
iteration =      0  error = 414.7436  accuracy (0.15) = 0.0050
iteration =    200  error =   0.1774  accuracy (0.15) = 0.6200
iteration =    400  error =   0.1774  accuracy (0.15) = 0.6200
iteration =    600  error =   0.1774  accuracy (0.15) = 0.6200
iteration =    800  error =   0.1774  accuracy (0.15) = 0.6200
Done

Coefficients/weights:
-0.2588 0.0299 -0.0520 0.0259 -0.1014
Bias/constant/intercept: 0.3729

Coeffs via scikit library:
-0.2588 0.0300 -0.0520 0.0259 -0.1014
Intercept: 0.3729

Evaluating model
Accuracy train (within 0.15) = 0.6200
Accuracy test (within 0.15) = 0.6750

Predicting for x =
 -0.1660  0.4406 -0.9998 -0.3953 -0.7065
0.5424

End demo
The demo worked very nicely. To verify the particle swarm results, I ran the data through the Python language scikit library LinearSVR module and got essentially identical values for the weights and the bias.
A very interesting exploration.

On Christmas Day in 1962, I got the “Broadside” board game. It’s fairly complicated, which appealed to me, and I liked the game a lot. Unfortunately, none of the other boys in the neighborhood had the patience required, so I only got to play the game a handful of times. Don’t get me wrong — the other boys in the neighborhood were very bright and all went on to successful careers: electrical engineering (Kenny), medicine (Tommy), law (Roger), and sales (Rick and Bill).
It might be fun to develop an AI system to play the game against humans. But it would require a very big effort. Life is awesome — I wish I had more time to explore things like an AI for Broadside.
Demo program. Replace “lt” (less than), “gt”, “lte”, “gte” with comparison operator symbols (my lame blog editor chokes on symbols).
using System;
using System.IO;
using System.Collections.Generic;
namespace SupportVectorRegressionLinearSwarm
{
internal class SupportVectorRegressionLinearSwarmProgram
{
static void Main(string[] args)
{
Console.WriteLine("\nBegin linear support vector " +
"regression with particle swarm training demo ");
// 1. load data
Console.WriteLine("\nLoading synthetic train" +
" (200) and test (40) data");
string trainFile =
"..\\..\\..\\Data\\synthetic_train_200.txt";
int[] colsX = new int[] { 0, 1, 2, 3, 4 };
double[][] trainX =
MatLoad(trainFile, colsX, ',', "#");
double[] trainY =
MatToVec(MatLoad(trainFile,
new int[] { 5 }, ',', "#"));
string testFile =
"..\\..\\..\\Data\\synthetic_test_40.txt";
double[][] testX =
MatLoad(testFile, colsX, ',', "#");
double[] testY =
MatToVec(MatLoad(testFile,
new int[] { 5 }, ',', "#"));
Console.WriteLine("Done ");
Console.WriteLine("\nFirst three train X: ");
for (int i = 0; i "lt" 3; ++i)
VecShow(trainX[i], 4, 8);
Console.WriteLine("\nFirst three train y: ");
for (int i = 0; i "lt" 3; ++i)
Console.WriteLine(trainY[i].ToString("F4").
PadLeft(8));
// 2. create and train
Console.WriteLine("\nCreating SVR linear " +
"model ");
SupportVectorRegressor model =
new SupportVectorRegressor(seed: 0);
Console.WriteLine("Done ");
double C = 1.0;
double epsilon = 0.10;
Console.WriteLine("\nSetting SVR parameters: ");
Console.WriteLine("C = " + C.ToString("F2"));
Console.WriteLine("epsilon = " +
epsilon.ToString("F2"));
int numParticles = 100;
int maxIter = 1000;
Console.WriteLine("\nSetting particle swarm " +
"training parameters: ");
Console.WriteLine("numParticles = " + numParticles);
Console.WriteLine("maxIter = " + maxIter);
Console.WriteLine("\nStarting PSO training ");
model.TrainSwarm(trainX, trainY, C, epsilon,
numParticles, maxIter);
Console.WriteLine("Done");
// 2b. show trained model weights and bias
Console.WriteLine("\nCoefficients/weights: ");
for (int i = 0; i "lt" model.weights.Length; ++i)
Console.Write(model.weights[i].ToString("F4") + " ");
Console.WriteLine("\nBias/constant/intercept: " +
model.bias.ToString("F4"));
// 2c. show scikit LinearSVR wts and bias
Console.WriteLine("\nCoeffs via scikit library: ");
Console.WriteLine("-0.2588 0.0300 -0.0520 0.0259" +
" -0.1014");
Console.WriteLine("Intercept: 0.3729");
// 3. evaluate model
Console.WriteLine("\nEvaluating model ");
double accTrain = model.Accuracy(trainX, trainY, 0.15);
Console.WriteLine("Accuracy train (within 0.15) = " +
accTrain.ToString("F4"));
double accTest = model.Accuracy(testX, testY, 0.15);
Console.WriteLine("Accuracy test (within 0.15) = " +
accTest.ToString("F4"));
// 4. use model
double[] x = trainX[0];
Console.WriteLine("\nPredicting for x = ");
VecShow(x, 4, 8);
double y = model.Predict(x);
Console.WriteLine(y.ToString("F4"));
Console.WriteLine("\nEnd demo ");
Console.ReadLine();
} // Main()
// ------------------------------------------------------
// helpers for Main()
// ------------------------------------------------------
static double[][] MatLoad(string fn, int[] usecols,
char sep, string comment)
{
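// load numeric matrix from comma-delimited text file;
// lines that begin with the comment token are skipped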
List"lt"double[]"gt" result =
new List"lt"double[]"gt"();
string line = "";
FileStream ifs = new FileStream(fn, FileMode.Open);
StreamReader sr = new StreamReader(ifs);
while ((line = sr.ReadLine()) != null)
{
if (line.StartsWith(comment) == true)
continue;
string[] tokens = line.Split(sep);
List"lt"double"gt" lst = new List"lt"double"gt"();
for (int j = 0; j "lt" usecols.Length; ++j)
lst.Add(double.Parse(tokens[usecols[j]]));
double[] row = lst.ToArray();
result.Add(row);
}
sr.Close(); ifs.Close();
return result.ToArray();
}
static double[] MatToVec(double[][] mat)
{
int nRows = mat.Length;
int nCols = mat[0].Length;
double[] result = new double[nRows * nCols];
int k = 0;
for (int i = 0; i "lt" nRows; ++i)
for (int j = 0; j "lt" nCols; ++j)
result[k++] = mat[i][j];
return result;
}
static void VecShow(double[] vec, int dec, int wid)
{
for (int i = 0; i "lt" vec.Length; ++i)
Console.Write(vec[i].ToString("F" + dec).
PadLeft(wid));
Console.WriteLine("");
}
} // class Program
public class SupportVectorRegressor
{
// ------------------------------------------------------
public class Particle
{
public double[] position; // soln = wts + bias
public double loss; // epsilon-insensitive
public double[] velocity; // to determine next position
public double[] bestPosition; // best seen
public double bestLoss;
public Particle(int solnLen)
{
this.position = new double[solnLen];
this.velocity = new double[solnLen];
this.bestPosition = new double[solnLen];
}
} // Particle
// ------------------------------------------------------
public double[] weights; // aka coefficients
public double bias; // aka constant, intercept
private Random rnd;
public SupportVectorRegressor(int seed)
{
this.rnd = new Random(seed);
}
public double Predict(double[] x)
{
double result = 0.0;
for (int j = 0; j "lt" x.Length; ++j)
result += x[j] * this.weights[j];
result += this.bias;
return result;
}
public double Accuracy(double[][] dataX, double[] dataY,
double pctClose)
{
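// a prediction counts as correct if it is within
// pctClose (e.g., 0.15 = 15%) of the true target value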
int numCorrect = 0; int numWrong = 0;
for (int i = 0; i "lt" dataX.Length; ++i)
{
double actualY = dataY[i];
double predY = this.Predict(dataX[i]);
if (Math.Abs(predY - actualY) "lt"
Math.Abs(pctClose * actualY))
++numCorrect;
else
++numWrong;
}
return (numCorrect * 1.0) / (numWrong + numCorrect);
}
// ------------------------------------------------------
public void TrainSwarm(double[][] trainX, double[] trainY,
double C, double epsilon,
int numParticles, int maxIter)
{
double lo = -10.0; double hi = 10.0;
int dim = trainX[0].Length;
int solnLen = dim + 1; // add 1 for bias
this.weights = new double[dim];
this.bias = 0.0;
double[] globalBestPosition = new double[solnLen];
double globalBestLoss = double.MaxValue;
double w = 0.729; // inertia weight
double c1 = 1.49445; // cognitive weight
double c2 = 1.49445; // social weight
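// (these w, c1, c2 values are commonly recommended in PSO research)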
// create the swarm
Particle[] swarm = new Particle[numParticles];
for (int i = 0; i "lt" numParticles; ++i)
{
swarm[i] = new Particle(solnLen);
for (int j = 0; j "lt" solnLen; ++j)
swarm[i].position[j] = (hi - lo) *
this.rnd.NextDouble() + lo;
swarm[i].loss = this.LossUsing(swarm[i].position,
trainX, trainY, C, epsilon);
for (int j = 0; j "lt" solnLen; ++j)
swarm[i].velocity[j] = (hi - lo) *
this.rnd.NextDouble() + lo;
for (int j = 0; j "lt" solnLen; ++j)
swarm[i].bestPosition[j] = swarm[i].position[j];
swarm[i].bestLoss = swarm[i].loss;
}
// set global bests
for (int i = 0; i "lt" numParticles; ++i)
{
if (swarm[i].loss "lt" globalBestLoss)
{
globalBestLoss = swarm[i].loss;
for (int j = 0; j "lt" solnLen; ++j)
globalBestPosition[j] = swarm[i].position[j];
}
}
// main PSO processing loop
for (int iter = 0; iter "lt" maxIter; ++iter)
{
for (int i = 0; i "lt" numParticles; ++i)
{
Particle currP = swarm[i]; // ref for clarity
for (int j = 0; j "lt" solnLen; ++j) // 1. velocity
{
double r1 = this.rnd.NextDouble();
double r2 = this.rnd.NextDouble();
currP.velocity[j] = (w * currP.velocity[j]) +
(c1 * r1 * (currP.bestPosition[j] -
currP.position[j])) +
(c2 * r2 * (globalBestPosition[j] -
currP.position[j]));
}
for (int j = 0; j "lt" solnLen; ++j) // 2. position
{
currP.position[j] = currP.position[j] +
currP.velocity[j];
}
// 3. update particle's loss
currP.loss = this.LossUsing(currP.position,
trainX, trainY, C, epsilon);
// 4. check if particle new best
if (currP.loss "lt" currP.bestLoss)
{
currP.bestLoss = currP.loss;
for (int j = 0; j "lt" solnLen; ++j)
currP.bestPosition[j] = currP.position[j];
}
// 5. check if new global best found
if (currP.loss "lt" globalBestLoss)
{
globalBestLoss = currP.loss;
for (int j = 0; j "lt" solnLen; ++j)
globalBestPosition[j] = currP.position[j];
}
} // each particle
if (iter % (maxIter / 5) == 0) // display progress
{
double bestAcc =
this.AccuracyUsing(globalBestPosition,
trainX, trainY, 0.15);
string s1 = "iteration = " +
iter.ToString().PadLeft(6);
string s2 = " error = " +
globalBestLoss.ToString("F4").PadLeft(8);
string s3 = " accuracy (0.15) = " +
bestAcc.ToString("F4");
Console.WriteLine(s1 + s2 + s3);
}
} // iter
// copy best soln found into model
for (int j = 0; j "lt" solnLen - 1; ++j)
this.weights[j] = globalBestPosition[j];
this.bias = globalBestPosition[solnLen - 1];
} // TrainSwarm()
private double PredictUsing(double[] soln, double[] x)
{
// bias is last cell of soln
double result = 0.0;
for (int i = 0; i "lt" x.Length; ++i)
result += x[i] * soln[i];
result += soln[soln.Length - 1]; // the bias
return result;
}
public double LossUsing(double[] soln,
double[][] dataX, double[] dataY,
double C, double eps)
{
// 1. sum of wts-squared (squared vector norm)
double sum = 0.0;
for (int j = 0; j "lt" soln.Length; ++j)
sum += soln[j] * soln[j];
// 2. outliers penalty
double sumPenalties = 0.0;
for (int i = 0; i "lt" dataX.Length; ++i)
{
double[] x = dataX[i];
double predY = PredictUsing(soln, x);
double actualY = dataY[i];
if (Math.Abs(actualY - predY) "gt" eps)
sumPenalties += Math.Abs(predY - actualY) - eps;
}
double loss = (sum / 2.0) + (C * sumPenalties);
return loss;
}
private double AccuracyUsing(double[] soln, double[][] dataX,
double[] dataY, double pctClose)
{
int numCorrect = 0; int numWrong = 0;
int N = dataX.Length;
for (int i = 0; i "lt" N; ++i)
{
double[] x = dataX[i];
double actualY = dataY[i];
double predY = this.PredictUsing(soln, x);
if (Math.Abs(predY - actualY) "lt"
Math.Abs(pctClose * actualY))
++numCorrect;
else
++numWrong;
}
return (1.0 * numCorrect) / N;
}
} // class SupportVectorRegressor
} // ns
Training data:
# synthetic_train_200.txt
#
-0.1660, 0.4406, -0.9998, -0.3953, -0.7065, 0.4840
0.0776, -0.1616, 0.3704, -0.5911, 0.7562, 0.1568
-0.9452, 0.3409, -0.1654, 0.1174, -0.7192, 0.8054
0.9365, -0.3732, 0.3846, 0.7528, 0.7892, 0.1345
-0.8299, -0.9219, -0.6603, 0.7563, -0.8033, 0.7955
0.0663, 0.3838, -0.3690, 0.3730, 0.6693, 0.3206
-0.9634, 0.5003, 0.9777, 0.4963, -0.4391, 0.7377
-0.1042, 0.8172, -0.4128, -0.4244, -0.7399, 0.4801
-0.9613, 0.3577, -0.5767, -0.4689, -0.0169, 0.6861
-0.7065, 0.1786, 0.3995, -0.7953, -0.1719, 0.5569
0.3888, -0.1716, -0.9001, 0.0718, 0.3276, 0.2500
0.1731, 0.8068, -0.7251, -0.7214, 0.6148, 0.3297
-0.2046, -0.6693, 0.8550, -0.3045, 0.5016, 0.2129
0.2473, 0.5019, -0.3022, -0.4601, 0.7918, 0.2613
-0.1438, 0.9297, 0.3269, 0.2434, -0.7705, 0.5171
0.1568, -0.1837, -0.5259, 0.8068, 0.1474, 0.3307
-0.9943, 0.2343, -0.3467, 0.0541, 0.7719, 0.5581
0.2467, -0.9684, 0.8589, 0.3818, 0.9946, 0.1092
-0.6553, -0.7257, 0.8652, 0.3936, -0.8680, 0.7018
0.8460, 0.4230, -0.7515, -0.9602, -0.9476, 0.1996
-0.9434, -0.5076, 0.7201, 0.0777, 0.1056, 0.5664
0.9392, 0.1221, -0.9627, 0.6013, -0.5341, 0.1533
0.6142, -0.2243, 0.7271, 0.4942, 0.1125, 0.1661
0.4260, 0.1194, -0.9749, -0.8561, 0.9346, 0.2230
0.1362, -0.5934, -0.4953, 0.4877, -0.6091, 0.3810
0.6937, -0.5203, -0.0125, 0.2399, 0.6580, 0.1460
-0.6864, -0.9628, -0.8600, -0.0273, 0.2127, 0.5387
0.9772, 0.1595, -0.2397, 0.1019, 0.4907, 0.1611
0.3385, -0.4702, -0.8673, -0.2598, 0.2594, 0.2270
-0.8669, -0.4794, 0.6095, -0.6131, 0.2789, 0.4700
0.0493, 0.8496, -0.4734, -0.8681, 0.4701, 0.3516
0.8639, -0.9721, -0.5313, 0.2336, 0.8980, 0.1412
0.9004, 0.1133, 0.8312, 0.2831, -0.2200, 0.1782
0.0991, 0.8524, 0.8375, -0.2102, 0.9265, 0.2150
-0.6521, -0.7473, -0.7298, 0.0113, -0.9570, 0.7422
0.6190, -0.3105, 0.8802, 0.1640, 0.7577, 0.1056
0.6895, 0.8108, -0.0802, 0.0927, 0.5972, 0.2214
0.1982, -0.9689, 0.1870, -0.1326, 0.6147, 0.1310
-0.3695, 0.7858, 0.1557, -0.6320, 0.5759, 0.3773
-0.1596, 0.3581, 0.8372, -0.9992, 0.9535, 0.2071
-0.2468, 0.9476, 0.2094, 0.6577, 0.1494, 0.4132
0.1737, 0.5000, 0.7166, 0.5102, 0.3961, 0.2611
0.7290, -0.3546, 0.3416, -0.0983, -0.2358, 0.1332
-0.3652, 0.2438, -0.1395, 0.9476, 0.3556, 0.4170
-0.6029, -0.1466, -0.3133, 0.5953, 0.7600, 0.4334
-0.4596, -0.4953, 0.7098, 0.0554, 0.6043, 0.2775
0.1450, 0.4663, 0.0380, 0.5418, 0.1377, 0.2931
-0.8636, -0.2442, -0.8407, 0.9656, -0.6368, 0.7429
0.6237, 0.7499, 0.3768, 0.1390, -0.6781, 0.2185
-0.5499, 0.1850, -0.3755, 0.8326, 0.8193, 0.4399
-0.4858, -0.7782, -0.6141, -0.0008, 0.4572, 0.4197
0.7033, -0.1683, 0.2334, -0.5327, -0.7961, 0.1776
0.0317, -0.0457, -0.6947, 0.2436, 0.0880, 0.3345
0.5031, -0.5559, 0.0387, 0.5706, -0.9553, 0.3107
-0.3513, 0.7458, 0.6894, 0.0769, 0.7332, 0.3170
0.2205, 0.5992, -0.9309, 0.5405, 0.4635, 0.3532
-0.4806, -0.4859, 0.2646, -0.3094, 0.5932, 0.3202
0.9809, -0.3995, -0.7140, 0.8026, 0.0831, 0.1600
0.9495, 0.2732, 0.9878, 0.0921, 0.0529, 0.1289
-0.9476, -0.6792, 0.4913, -0.9392, -0.2669, 0.5966
0.7247, 0.3854, 0.3819, -0.6227, -0.1162, 0.1550
-0.5922, -0.5045, -0.4757, 0.5003, -0.0860, 0.5863
-0.8861, 0.0170, -0.5761, 0.5972, -0.4053, 0.7301
0.6877, -0.2380, 0.4997, 0.0223, 0.0819, 0.1404
0.9189, 0.6079, -0.9354, 0.4188, -0.0700, 0.1907
-0.1428, -0.7820, 0.2676, 0.6059, 0.3936, 0.2790
0.5324, -0.3151, 0.6917, -0.1425, 0.6480, 0.1071
-0.8432, -0.9633, -0.8666, -0.0828, -0.7733, 0.7784
-0.9444, 0.5097, -0.2103, 0.4939, -0.0952, 0.6787
-0.0520, 0.6063, -0.1952, 0.8094, -0.9259, 0.4836
0.5477, -0.7487, 0.2370, -0.9793, 0.0773, 0.1241
0.2450, 0.8116, 0.9799, 0.4222, 0.4636, 0.2355
0.8186, -0.1983, -0.5003, -0.6531, -0.7611, 0.1511
-0.4714, 0.6382, -0.3788, 0.9648, -0.4667, 0.5950
0.0673, -0.3711, 0.8215, -0.2669, -0.1328, 0.2677
-0.9381, 0.4338, 0.7820, -0.9454, 0.0441, 0.5518
-0.3480, 0.7190, 0.1170, 0.3805, -0.0943, 0.4724
-0.9813, 0.1535, -0.3771, 0.0345, 0.8328, 0.5438
-0.1471, -0.5052, -0.2574, 0.8637, 0.8737, 0.3042
-0.5454, -0.3712, -0.6505, 0.2142, -0.1728, 0.5783
0.6327, -0.6297, 0.4038, -0.5193, 0.1484, 0.1153
-0.5424, 0.3282, -0.0055, 0.0380, -0.6506, 0.6613
0.1414, 0.9935, 0.6337, 0.1887, 0.9520, 0.2540
-0.9351, -0.8128, -0.8693, -0.0965, -0.2491, 0.7353
0.9507, -0.6640, 0.9456, 0.5349, 0.6485, 0.1059
-0.0462, -0.9737, -0.2940, -0.0159, 0.4602, 0.2606
-0.0627, -0.0852, -0.7247, -0.9782, 0.5166, 0.2977
0.0478, 0.5098, -0.0723, -0.7504, -0.3750, 0.3335
0.0090, 0.3477, 0.5403, -0.7393, -0.9542, 0.4415
-0.9748, 0.3449, 0.3736, -0.1015, 0.8296, 0.4358
0.2887, -0.9895, -0.0311, 0.7186, 0.6608, 0.2057
0.1570, -0.4518, 0.1211, 0.3435, -0.2951, 0.3244
0.7117, -0.6099, 0.4946, -0.4208, 0.5476, 0.1096
-0.2929, -0.5726, 0.5346, -0.3827, 0.4665, 0.2465
0.4889, -0.5572, -0.5718, -0.6021, -0.7150, 0.2163
-0.7782, 0.3491, 0.5996, -0.8389, -0.5366, 0.6516
-0.5847, 0.8347, 0.4226, 0.1078, -0.3910, 0.6134
0.8469, 0.4121, -0.0439, -0.7476, 0.9521, 0.1571
-0.6803, -0.5948, -0.1376, -0.1916, -0.7065, 0.7156
0.2878, 0.5086, -0.5785, 0.2019, 0.4979, 0.2980
0.2764, 0.1943, -0.4090, 0.4632, 0.8906, 0.2960
-0.8877, 0.6705, -0.6155, -0.2098, -0.3998, 0.7107
-0.8398, 0.8093, -0.2597, 0.0614, -0.0118, 0.6502
-0.8476, 0.0158, -0.4769, -0.2859, -0.7839, 0.7715
0.5751, -0.7868, 0.9714, -0.6457, 0.1448, 0.1175
0.4802, -0.7001, 0.1022, -0.5668, 0.5184, 0.1090
0.4458, -0.6469, 0.7239, -0.9604, 0.7205, 0.0779
0.5175, 0.4339, 0.9747, -0.4438, -0.9924, 0.2879
0.8678, 0.7158, 0.4577, 0.0334, 0.4139, 0.1678
0.5406, 0.5012, 0.2264, -0.1963, 0.3946, 0.2088
-0.9938, 0.5498, 0.7928, -0.5214, -0.7585, 0.7687
0.7661, 0.0863, -0.4266, -0.7233, -0.4197, 0.1466
0.2277, -0.3517, -0.0853, -0.1118, 0.6563, 0.1767
0.3499, -0.5570, -0.0655, -0.3705, 0.2537, 0.1632
0.7547, -0.1046, 0.5689, -0.0861, 0.3125, 0.1257
0.8186, 0.2110, 0.5335, 0.0094, -0.0039, 0.1391
0.6858, -0.8644, 0.1465, 0.8855, 0.0357, 0.1845
-0.4967, 0.4015, 0.0805, 0.8977, 0.2487, 0.4663
0.6760, -0.9841, 0.9787, -0.8446, -0.3557, 0.1509
-0.1203, -0.4885, 0.6054, -0.0443, -0.7313, 0.4854
0.8557, 0.7919, -0.0169, 0.7134, -0.1628, 0.2002
0.0115, -0.6209, 0.9300, -0.4116, -0.7931, 0.4052
-0.7114, -0.9718, 0.4319, 0.1290, 0.5892, 0.3661
0.3915, 0.5557, -0.1870, 0.2955, -0.6404, 0.2954
-0.3564, -0.6548, -0.1827, -0.5172, -0.1862, 0.4622
0.2392, -0.4959, 0.5857, -0.1341, -0.2850, 0.2470
-0.3394, 0.3947, -0.4627, 0.6166, -0.4094, 0.5325
0.7107, 0.7768, -0.6312, 0.1707, 0.7964, 0.2757
-0.1078, 0.8437, -0.4420, 0.2177, 0.3649, 0.4028
-0.3139, 0.5595, -0.6505, -0.3161, -0.7108, 0.5546
0.4335, 0.3986, 0.3770, -0.4932, 0.3847, 0.1810
-0.2562, -0.2894, -0.8847, 0.2633, 0.4146, 0.4036
0.2272, 0.2966, -0.6601, -0.7011, 0.0284, 0.2778
-0.0743, -0.1421, -0.0054, -0.6770, -0.3151, 0.3597
-0.4762, 0.6891, 0.6007, -0.1467, 0.2140, 0.4266
-0.4061, 0.7193, 0.3432, 0.2669, -0.7505, 0.6147
-0.0588, 0.9731, 0.8966, 0.2902, -0.6966, 0.4955
-0.0627, -0.1439, 0.1985, 0.6999, 0.5022, 0.3077
0.1587, 0.8494, -0.8705, 0.9827, -0.8940, 0.4263
-0.7850, 0.2473, -0.9040, -0.4308, -0.8779, 0.7199
0.4070, 0.3369, -0.2428, -0.6236, 0.4940, 0.2215
-0.0242, 0.0513, -0.9430, 0.2885, -0.2987, 0.3947
-0.5416, -0.1322, -0.2351, -0.0604, 0.9590, 0.3683
0.1055, 0.7783, -0.2901, -0.5090, 0.8220, 0.2984
-0.9129, 0.9015, 0.1128, -0.2473, 0.9901, 0.4776
-0.9378, 0.1424, -0.6391, 0.2619, 0.9618, 0.5368
0.7498, -0.0963, 0.4169, 0.5549, -0.0103, 0.1614
-0.2612, -0.7156, 0.4538, -0.0460, -0.1022, 0.3717
0.7720, 0.0552, -0.1818, -0.4622, -0.8560, 0.1685
-0.4177, 0.0070, 0.9319, -0.7812, 0.3461, 0.3052
-0.0001, 0.5542, -0.7128, -0.8336, -0.2016, 0.3803
0.5356, -0.4194, -0.5662, -0.9666, -0.2027, 0.1776
-0.2378, 0.3187, -0.8582, -0.6948, -0.9668, 0.5474
-0.1947, -0.3579, 0.1158, 0.9869, 0.6690, 0.2992
0.3992, 0.8365, -0.9205, -0.8593, -0.0520, 0.3154
-0.0209, 0.0793, 0.7905, -0.1067, 0.7541, 0.1864
-0.4928, -0.4524, -0.3433, 0.0951, -0.5597, 0.6261
-0.8118, 0.7404, -0.5263, -0.2280, 0.1431, 0.6349
0.0516, -0.8480, 0.7483, 0.9023, 0.6250, 0.1959
-0.3212, 0.1093, 0.9488, -0.3766, 0.3376, 0.2735
-0.3481, 0.5490, -0.3484, 0.7797, 0.5034, 0.4379
-0.5785, -0.9170, -0.3563, -0.9258, 0.3877, 0.4121
0.3407, -0.1391, 0.5356, 0.0720, -0.9203, 0.3458
-0.3287, -0.8954, 0.2102, 0.0241, 0.2349, 0.3247
-0.1353, 0.6954, -0.0919, -0.9692, 0.7461, 0.3338
0.9036, -0.8982, -0.5299, -0.8733, -0.1567, 0.1187
0.7277, -0.8368, -0.0538, -0.7489, 0.5458, 0.0830
0.9049, 0.8878, 0.2279, 0.9470, -0.3103, 0.2194
0.7957, -0.1308, -0.5284, 0.8817, 0.3684, 0.2172
0.4647, -0.4931, 0.2010, 0.6292, -0.8918, 0.3371
-0.7390, 0.6849, 0.2367, 0.0626, -0.5034, 0.7039
-0.1567, -0.8711, 0.7940, -0.5932, 0.6525, 0.1710
0.7635, -0.0265, 0.1969, 0.0545, 0.2496, 0.1445
0.7675, 0.1354, -0.7698, -0.5460, 0.1920, 0.1728
-0.5211, -0.7372, -0.6763, 0.6897, 0.2044, 0.5217
0.1913, 0.1980, 0.2314, -0.8816, 0.5006, 0.1998
0.8964, 0.0694, -0.6149, 0.5059, -0.9854, 0.1825
0.1767, 0.7104, 0.2093, 0.6452, 0.7590, 0.2832
-0.3580, -0.7541, 0.4426, -0.1193, -0.7465, 0.5657
-0.5996, 0.5766, -0.9758, -0.3933, -0.9572, 0.6800
0.9950, 0.1641, -0.4132, 0.8579, 0.0142, 0.2003
-0.4717, -0.3894, -0.2567, -0.5111, 0.1691, 0.4266
0.3917, -0.8561, 0.9422, 0.5061, 0.6123, 0.1212
-0.0366, -0.1087, 0.3449, -0.1025, 0.4086, 0.2475
0.3633, 0.3943, 0.2372, -0.6980, 0.5216, 0.1925
-0.5325, -0.6466, -0.2178, -0.3589, 0.6310, 0.3568
0.2271, 0.5200, -0.1447, -0.8011, -0.7699, 0.3128
0.6415, 0.1993, 0.3777, -0.0178, -0.8237, 0.2181
-0.5298, -0.0768, -0.6028, -0.9490, 0.4588, 0.4356
0.6870, -0.1431, 0.7294, 0.3141, 0.1621, 0.1632
-0.5985, 0.0591, 0.7889, -0.3900, 0.7419, 0.2945
0.3661, 0.7984, -0.8486, 0.7572, -0.6183, 0.3449
0.6995, 0.3342, -0.3113, -0.6972, 0.2707, 0.1712
0.2565, 0.9126, 0.1798, -0.6043, -0.1413, 0.2893
-0.3265, 0.9839, -0.2395, 0.9854, 0.0376, 0.4770
0.2690, -0.1722, 0.9818, 0.8599, -0.7015, 0.3954
-0.2102, -0.0768, 0.1219, 0.5607, -0.0256, 0.3949
0.8216, -0.9555, 0.6422, -0.6231, 0.3715, 0.0801
-0.2896, 0.9484, -0.7545, -0.6249, 0.7789, 0.4370
-0.9985, -0.5448, -0.7092, -0.5931, 0.7926, 0.5402
Test data:
# synthetic_test_40.txt
#
0.7462, 0.4006, -0.0590, 0.6543, -0.0083, 0.1935
0.8495, -0.2260, -0.0142, -0.4911, 0.7699, 0.1078
-0.2335, -0.4049, 0.4352, -0.6183, -0.7636, 0.5088
0.1810, -0.5142, 0.2465, 0.2767, -0.3449, 0.3136
-0.8650, 0.7611, -0.0801, 0.5277, -0.4922, 0.7140
-0.2358, -0.7466, -0.5115, -0.8413, -0.3943, 0.4533
0.4834, 0.2300, 0.3448, -0.9832, 0.3568, 0.1360
-0.6502, -0.6300, 0.6885, 0.9652, 0.8275, 0.3046
-0.3053, 0.5604, 0.0929, 0.6329, -0.0325, 0.4756
-0.7995, 0.0740, -0.2680, 0.2086, 0.9176, 0.4565
-0.2144, -0.2141, 0.5813, 0.2902, -0.2122, 0.4119
-0.7278, -0.0987, -0.3312, -0.5641, 0.8515, 0.4438
0.3793, 0.1976, 0.4933, 0.0839, 0.4011, 0.1905
-0.8568, 0.9573, -0.5272, 0.3212, -0.8207, 0.7415
-0.5785, 0.0056, -0.7901, -0.2223, 0.0760, 0.5551
0.0735, -0.2188, 0.3925, 0.3570, 0.3746, 0.2191
0.1230, -0.2838, 0.2262, 0.8715, 0.1938, 0.2878
0.4792, -0.9248, 0.5295, 0.0366, -0.9894, 0.3149
-0.4456, 0.0697, 0.5359, -0.8938, 0.0981, 0.3879
0.8629, -0.8505, -0.4464, 0.8385, 0.5300, 0.1769
0.1995, 0.6659, 0.7921, 0.9454, 0.9970, 0.2330
-0.0249, -0.3066, -0.2927, -0.4923, 0.8220, 0.2437
0.4513, -0.9481, -0.0770, -0.4374, -0.9421, 0.2879
-0.3405, 0.5931, -0.3507, -0.3842, 0.8562, 0.3987
0.9538, 0.0471, 0.9039, 0.7760, 0.0361, 0.1706
-0.0887, 0.2104, 0.9808, 0.5478, -0.3314, 0.4128
-0.8220, -0.6302, 0.0537, -0.1658, 0.6013, 0.4306
-0.4123, -0.2880, 0.9074, -0.0461, -0.4435, 0.5144
0.0060, 0.2867, -0.7775, 0.5161, 0.7039, 0.3599
-0.7968, -0.5484, 0.9426, -0.4308, 0.8148, 0.2979
0.7811, 0.8450, -0.6877, 0.7594, 0.2640, 0.2362
-0.6802, -0.1113, -0.8325, -0.6694, -0.6056, 0.6544
0.3821, 0.1476, 0.7466, -0.5107, 0.2592, 0.1648
0.7265, 0.9683, -0.9803, -0.4943, -0.5523, 0.2454
-0.9049, -0.9797, -0.0196, -0.9090, -0.4433, 0.6447
-0.4607, 0.1811, -0.2389, 0.4050, -0.0078, 0.5229
0.2664, -0.2932, -0.4259, -0.7336, 0.8742, 0.1834
-0.4507, 0.1029, -0.6294, -0.1158, -0.6294, 0.6081
0.8948, -0.0124, 0.9278, 0.2899, -0.0314, 0.1534
-0.1323, -0.8813, -0.0146, -0.0697, 0.6135, 0.2386
Python scikit library program:
# svr_linear_scikit.py
import numpy as np
# from sklearn.svm import SVR
from sklearn.svm import LinearSVR
# -----------------------------------------------------------
def accuracy(model, data_X, data_y, pct_close):
  # correct within pct of true target
  n_correct = 0; n_wrong = 0
  for i in range(len(data_X)):
    X = data_X[i].reshape(1, -1)  # one-item batch
    y = data_y[i]
    pred = model.predict(X)  # predicted target value
    if np.abs(pred - y) "lt" np.abs(pct_close * y):
      n_correct += 1
    else:
      n_wrong += 1
  acc = (n_correct * 1.0) / (n_correct + n_wrong)
  return acc
# -----------------------------------------------------------
print("\nBegin scikit SVR linear demo ")
np.random.seed(1)
np.set_printoptions(suppress=True, precision=4,
floatmode='fixed')
print("\nLoading train and test data ")
train_file = ".\\Data\\synthetic_train_200.txt"
train_X = np.loadtxt(train_file, usecols=[0,1,2,3,4],
comments="#", delimiter=",", dtype=np.float64)
train_y = np.loadtxt(train_file, usecols=5, comments="#",
delimiter=",", dtype=np.float64)
test_file = ".\\Data\\synthetic_test_40.txt"
test_X = np.loadtxt(test_file, usecols=[0,1,2,3,4],
comments="#", delimiter=",", dtype=np.float64)
test_y = np.loadtxt(test_file, usecols=5, comments="#",
delimiter=",", dtype=np.float64)
print("Done ")
print("\nFirst three X data: ")
print(train_X[0:3][:])
print(". . .")
print("\nFirst three y targets: ")
print(train_y[0:3])
print(". . .")
print("\nCreating and training SVR linear model ")
C = 1.0
epsilon = 0.10
print("Setting C = %0.3f, epsilon = %0.3f " % (C, epsilon))
# model = SVR(kernel='linear', C=C, epsilon=epsilon)
model = LinearSVR(C=C, epsilon=epsilon,
  max_iter=10000, dual='auto')  # epsilon-insensitive (L1) loss
model.fit(train_X, train_y)
print("Done ")
print("\nCoefficients: ")
print(model.coef_)
print("Intercept %0.4f " % model.intercept_)
acc_train = accuracy(model, train_X, train_y, 0.15)
print("\nAccuracy (0.15) train = %0.4f " % acc_train)
acc_test = accuracy(model, test_X, test_y, 0.15)
print("Accuracy (0.15) test = %0.4f " % acc_test)
print("\nEnd demo ")

