An Example of AdaBoost Regression Using the scikit Library

The goal of a regression problem is to predict a single numeric value. Techniques include linear regression, nearest neighbors regression, kernel ridge regression, neural network regression, and decision tree regression.

Decision tree regression almost always overfits, leading to a model that predicts the training data well, but predicts new, previously unseen data poorly.

There are four common ensemble techniques that use a collection of simple decision trees to discourage model overfitting. In order of complexity they are: 1.) bagging (“bootstrap aggregation”), 2.) random forest, 3.) adaptive boosting, 4.) gradient boosting (several variations including extreme gradient boosting).

I put together a demo of adaptive boosting using the Python language scikit-learn library. Although there are several variations of adaptive boosting, as far as I know, only one specific algorithm is ever used in practice: the AdaBoost.R2 technique. The name derives from AdaBoost, a binary classification technique that was modified to do regression; the "R2" refers to the second regression version, described in 1997 ("Improving Regressors using Boosting Techniques", Drucker, 1997).

The AdaBoost.R2 algorithm creates a collection of decision trees, where each new tree is trained by placing more weight on the data items that were poorly predicted by the previous trees, so each successive tree focuses on the most difficult items. To make a prediction, a weighted average (technically, scikit uses a weighted median) of the predictions of the trees is computed. (Each tree has a weight that is a measure of its prediction confidence.)
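To make the prediction step concrete, here is a minimal sketch of combining per-tree predictions using confidence weights. This is just the general idea, not the exact AdaBoost.R2 math; the two hard-coded predictions and weights are made up for illustration.

import numpy as np

def ensemble_predict(tree_preds, tree_weights):
  # tree_preds: predictions from each tree for one data item
  # tree_weights: per-tree confidence weights (larger = more trusted)
  w = np.array(tree_weights, dtype=np.float64)
  w = w / np.sum(w)                          # normalize the weights
  return np.sum(w * np.array(tree_preds))    # weighted average

print(ensemble_predict([0.50, 0.44], [2.0, 1.0]))  # 0.48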

For my demo, I used a set of synthetic data that I generated using a neural network with random weights and biases. The data looks like:

-0.1660,  0.4406, -0.9998, -0.3953, -0.7065, 0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562, 0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192, 0.8054
. . .

The first five values on each line are the predictors. The sixth value is the target to predict. All predictor values are between -1.0 and 1.0. Normalizing the predictor values is not necessary but is helpful when using the data with other regression techniques that require normalization (such as k-nearest neighbors regression). There are 200 items in the training data and 40 items in the test data.

The output of the scikit AdaBoost regression demo program is:

Begin scikit AdaBoost.R2 demo

Loading synthetic train (200), test (40) data
Done

First three X predictors:
[[-0.1660  0.4406 -0.9998 -0.3953 -0.7065]
 [ 0.0776 -0.1616  0.3704 -0.5911  0.7562]
 [-0.9452  0.3409 -0.1654  0.1174 -0.7192]]

First three y targets:
0.4840
0.1568
0.8054

Setting n_estimators = 100
Setting max_depth = 6
Setting min_samples_split = 2

Training AdaBoost.R2 model
Done
Created 100 learners

Accuracy train (within 0.10): 0.8950
Accuracy test (within 0.10): 0.6000

MSE train: 0.0001
MSE test: 0.0018

Predicting for x =
[-0.1660  0.4406 -0.9998 -0.3953 -0.7065]
predicted y = 0.4840

End demo

I implemented an accuracy() function and a mean squared error function. Alas, these results show why AdaBoost regression is rarely used — in spite of a very complex algorithm, the model still overfits significantly.

AdaBoost regression is a meta-algorithm, meaning that instead of using decision trees as the basic learners (“weak learners”), you can use any other basic regression technique. I have experimented with alternative weak learners, but AdaBoost rarely does any better than simpler regression techniques.
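For example, here is a minimal sketch (assuming the scikit-learn AdaBoostRegressor, as in the demo below) of swapping in ordinary linear regression as the weak learner. The dummy data is made up just so the sketch runs; in my experience the results are rarely better than the default tree-based version.

import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.uniform(-1.0, 1.0, size=(200, 5))   # dummy predictors
y = rng.uniform(0.0, 1.0, size=200)         # dummy targets

alt_model = AdaBoostRegressor(estimator=LinearRegression(),
  n_estimators=50, random_state=1)
alt_model.fit(X, y)
print(alt_model.predict(X[0].reshape(1, -1)))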



The old AdaBoost.R2 regression technique doesn’t have a very good reputation. I like all kinds of old science fiction movies, including many that don’t have good reputations.

Left: This is the palace throne room from “The Chronicles of Riddick” (2004). This movie gets very poor reviews, but I like it a lot — very creative with fantastic set designs.

Right: This is the palace in “Dune” (1984) when the Navigator enters in his chamber. Again, the movie gets very poor reviews, but I think it’s quite good.


Demo program.

# scikit_adaboostR2.py

import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

# -----------------------------------------------------------

def accuracy(model, data_X, data_y, pct_close):
  # assumes model has a predict(X)
  n = len(data_X)
  n_correct = 0; n_wrong = 0
  for i in range(n):
    x = data_X[i].reshape(1,-1)  # make it a matrix
    y = data_y[i]
    y_pred = model.predict(x)  # predict() expects 2D

    if np.abs(y - y_pred) < np.abs(y * pct_close):
      n_correct += 1
    else: 
      n_wrong += 1
  # print("Correct = " + str(n_correct))
  # print("Wrong   = " + str(n_wrong))
  return n_correct / (n_correct + n_wrong)

# -----------------------------------------------------------

def MSE(model, data_X, data_y):
  n = len(data_X)
  sum = 0.0
  for i in range(n):
    x = data_X[i].reshape(1,-1)
    y = data_y[i]
    y_pred = model.predict(x)
    sum += (y - y_pred) * (y - y_pred)

  return sum / n

# -----------------------------------------------------------

def main():
  print("\nBegin scikit AdaBoost.R2 demo ")

  np.set_printoptions(precision=4, suppress=True,
    floatmode='fixed')
  np.random.seed(0)  # not used this version

  # 1. load data
  print("\nLoading synthetic train (200), test (40) data ")
  train_file = ".\\Data\\synthetic_train_200.txt"
  # -0.1660,0.4406,-0.9998,-0.3953,-0.7065,0.4840
  #  0.0776,-0.1616,0.3704,-0.5911,0.7562,0.1568
  # -0.9452,0.3409,-0.1654,0.1174,-0.7192,0.8054
  # . . .

  train_X = np.loadtxt(train_file, comments="#",
    usecols=[0,1,2,3,4],
    delimiter=",",  dtype=np.float64)
  train_y = np.loadtxt(train_file, comments="#", usecols=5,
    delimiter=",",  dtype=np.float64)

  test_file = ".\\Data\\synthetic_test_40.txt"
  test_X = np.loadtxt(test_file, comments="#",
    usecols=[0,1,2,3,4],
    delimiter=",",  dtype=np.float64)
  test_y = np.loadtxt(test_file, comments="#", usecols=5,
    delimiter=",",  dtype=np.float64)
  print("Done ")

  print("\nFirst three X predictors: ")
  print(train_X[0:3,:])
  print("\nFirst three y targets: ")
  for i in range(3):
    print("%0.4f" % train_y[i])

  n_estimators = 100
  max_depth = 6
  min_samples_split = 2

  print("\nSetting n_estimators = " + \
    str(n_estimators))
  print("Setting max_depth = " + \
    str(max_depth))
  print("Setting min_samples_split = " + \
    str(min_samples_split))

  # sklearn.ensemble.AdaBoostRegressor(estimator=None,
  #   *, n_estimators=50, learning_rate=1.0, loss='linear',
  # random_state=None)

  model = \
    AdaBoostRegressor(estimator= \
    DecisionTreeRegressor(max_depth=max_depth,
    min_samples_split=min_samples_split),
    n_estimators=n_estimators,
    random_state=1)

  print("\nTraining AdaBoost.R2 model ")
  model.fit(train_X, train_y)
  print("Done " )

  n_learners = len(model.estimators_)
  print("Created " + str(n_learners) + " learners ")

  acc_train = accuracy(model, train_X, train_y, 0.10)
  print("\nAccuracy train (within 0.10): %0.4f " % acc_train)
  acc_test = accuracy(model, test_X, test_y, 0.10)
  print("Accuracy test (within 0.10): %0.4f " % acc_test)

  mse_train = MSE(model, train_X, train_y)
  print("\nMSE train: %0.4f " % mse_train)
  mse_test = MSE(model, test_X, test_y)
  print("MSE test: %0.4f " % mse_test)

  print("\nPredicting for x = ")
  print(train_X[0])
  pred_y = model.predict(train_X[0].reshape(1,-1))
  print("Predicted y = %0.4f " % pred_y)

  print("\nEnd demo ")

if __name__ == "__main__":
  main()

Training data:

# synthetic_train_200.txt
#
-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
-0.8299, -0.9219, -0.6603,  0.7563, -0.8033,  0.7955
 0.0663,  0.3838, -0.3690,  0.3730,  0.6693,  0.3206
-0.9634,  0.5003,  0.9777,  0.4963, -0.4391,  0.7377
-0.1042,  0.8172, -0.4128, -0.4244, -0.7399,  0.4801
-0.9613,  0.3577, -0.5767, -0.4689, -0.0169,  0.6861
-0.7065,  0.1786,  0.3995, -0.7953, -0.1719,  0.5569
 0.3888, -0.1716, -0.9001,  0.0718,  0.3276,  0.2500
 0.1731,  0.8068, -0.7251, -0.7214,  0.6148,  0.3297
-0.2046, -0.6693,  0.8550, -0.3045,  0.5016,  0.2129
 0.2473,  0.5019, -0.3022, -0.4601,  0.7918,  0.2613
-0.1438,  0.9297,  0.3269,  0.2434, -0.7705,  0.5171
 0.1568, -0.1837, -0.5259,  0.8068,  0.1474,  0.3307
-0.9943,  0.2343, -0.3467,  0.0541,  0.7719,  0.5581
 0.2467, -0.9684,  0.8589,  0.3818,  0.9946,  0.1092
-0.6553, -0.7257,  0.8652,  0.3936, -0.8680,  0.7018
 0.8460,  0.4230, -0.7515, -0.9602, -0.9476,  0.1996
-0.9434, -0.5076,  0.7201,  0.0777,  0.1056,  0.5664
 0.9392,  0.1221, -0.9627,  0.6013, -0.5341,  0.1533
 0.6142, -0.2243,  0.7271,  0.4942,  0.1125,  0.1661
 0.4260,  0.1194, -0.9749, -0.8561,  0.9346,  0.2230
 0.1362, -0.5934, -0.4953,  0.4877, -0.6091,  0.3810
 0.6937, -0.5203, -0.0125,  0.2399,  0.6580,  0.1460
-0.6864, -0.9628, -0.8600, -0.0273,  0.2127,  0.5387
 0.9772,  0.1595, -0.2397,  0.1019,  0.4907,  0.1611
 0.3385, -0.4702, -0.8673, -0.2598,  0.2594,  0.2270
-0.8669, -0.4794,  0.6095, -0.6131,  0.2789,  0.4700
 0.0493,  0.8496, -0.4734, -0.8681,  0.4701,  0.3516
 0.8639, -0.9721, -0.5313,  0.2336,  0.8980,  0.1412
 0.9004,  0.1133,  0.8312,  0.2831, -0.2200,  0.1782
 0.0991,  0.8524,  0.8375, -0.2102,  0.9265,  0.2150
-0.6521, -0.7473, -0.7298,  0.0113, -0.9570,  0.7422
 0.6190, -0.3105,  0.8802,  0.1640,  0.7577,  0.1056
 0.6895,  0.8108, -0.0802,  0.0927,  0.5972,  0.2214
 0.1982, -0.9689,  0.1870, -0.1326,  0.6147,  0.1310
-0.3695,  0.7858,  0.1557, -0.6320,  0.5759,  0.3773
-0.1596,  0.3581,  0.8372, -0.9992,  0.9535,  0.2071
-0.2468,  0.9476,  0.2094,  0.6577,  0.1494,  0.4132
 0.1737,  0.5000,  0.7166,  0.5102,  0.3961,  0.2611
 0.7290, -0.3546,  0.3416, -0.0983, -0.2358,  0.1332
-0.3652,  0.2438, -0.1395,  0.9476,  0.3556,  0.4170
-0.6029, -0.1466, -0.3133,  0.5953,  0.7600,  0.4334
-0.4596, -0.4953,  0.7098,  0.0554,  0.6043,  0.2775
 0.1450,  0.4663,  0.0380,  0.5418,  0.1377,  0.2931
-0.8636, -0.2442, -0.8407,  0.9656, -0.6368,  0.7429
 0.6237,  0.7499,  0.3768,  0.1390, -0.6781,  0.2185
-0.5499,  0.1850, -0.3755,  0.8326,  0.8193,  0.4399
-0.4858, -0.7782, -0.6141, -0.0008,  0.4572,  0.4197
 0.7033, -0.1683,  0.2334, -0.5327, -0.7961,  0.1776
 0.0317, -0.0457, -0.6947,  0.2436,  0.0880,  0.3345
 0.5031, -0.5559,  0.0387,  0.5706, -0.9553,  0.3107
-0.3513,  0.7458,  0.6894,  0.0769,  0.7332,  0.3170
 0.2205,  0.5992, -0.9309,  0.5405,  0.4635,  0.3532
-0.4806, -0.4859,  0.2646, -0.3094,  0.5932,  0.3202
 0.9809, -0.3995, -0.7140,  0.8026,  0.0831,  0.1600
 0.9495,  0.2732,  0.9878,  0.0921,  0.0529,  0.1289
-0.9476, -0.6792,  0.4913, -0.9392, -0.2669,  0.5966
 0.7247,  0.3854,  0.3819, -0.6227, -0.1162,  0.1550
-0.5922, -0.5045, -0.4757,  0.5003, -0.0860,  0.5863
-0.8861,  0.0170, -0.5761,  0.5972, -0.4053,  0.7301
 0.6877, -0.2380,  0.4997,  0.0223,  0.0819,  0.1404
 0.9189,  0.6079, -0.9354,  0.4188, -0.0700,  0.1907
-0.1428, -0.7820,  0.2676,  0.6059,  0.3936,  0.2790
 0.5324, -0.3151,  0.6917, -0.1425,  0.6480,  0.1071
-0.8432, -0.9633, -0.8666, -0.0828, -0.7733,  0.7784
-0.9444,  0.5097, -0.2103,  0.4939, -0.0952,  0.6787
-0.0520,  0.6063, -0.1952,  0.8094, -0.9259,  0.4836
 0.5477, -0.7487,  0.2370, -0.9793,  0.0773,  0.1241
 0.2450,  0.8116,  0.9799,  0.4222,  0.4636,  0.2355
 0.8186, -0.1983, -0.5003, -0.6531, -0.7611,  0.1511
-0.4714,  0.6382, -0.3788,  0.9648, -0.4667,  0.5950
 0.0673, -0.3711,  0.8215, -0.2669, -0.1328,  0.2677
-0.9381,  0.4338,  0.7820, -0.9454,  0.0441,  0.5518
-0.3480,  0.7190,  0.1170,  0.3805, -0.0943,  0.4724
-0.9813,  0.1535, -0.3771,  0.0345,  0.8328,  0.5438
-0.1471, -0.5052, -0.2574,  0.8637,  0.8737,  0.3042
-0.5454, -0.3712, -0.6505,  0.2142, -0.1728,  0.5783
 0.6327, -0.6297,  0.4038, -0.5193,  0.1484,  0.1153
-0.5424,  0.3282, -0.0055,  0.0380, -0.6506,  0.6613
 0.1414,  0.9935,  0.6337,  0.1887,  0.9520,  0.2540
-0.9351, -0.8128, -0.8693, -0.0965, -0.2491,  0.7353
 0.9507, -0.6640,  0.9456,  0.5349,  0.6485,  0.1059
-0.0462, -0.9737, -0.2940, -0.0159,  0.4602,  0.2606
-0.0627, -0.0852, -0.7247, -0.9782,  0.5166,  0.2977
 0.0478,  0.5098, -0.0723, -0.7504, -0.3750,  0.3335
 0.0090,  0.3477,  0.5403, -0.7393, -0.9542,  0.4415
-0.9748,  0.3449,  0.3736, -0.1015,  0.8296,  0.4358
 0.2887, -0.9895, -0.0311,  0.7186,  0.6608,  0.2057
 0.1570, -0.4518,  0.1211,  0.3435, -0.2951,  0.3244
 0.7117, -0.6099,  0.4946, -0.4208,  0.5476,  0.1096
-0.2929, -0.5726,  0.5346, -0.3827,  0.4665,  0.2465
 0.4889, -0.5572, -0.5718, -0.6021, -0.7150,  0.2163
-0.7782,  0.3491,  0.5996, -0.8389, -0.5366,  0.6516
-0.5847,  0.8347,  0.4226,  0.1078, -0.3910,  0.6134
 0.8469,  0.4121, -0.0439, -0.7476,  0.9521,  0.1571
-0.6803, -0.5948, -0.1376, -0.1916, -0.7065,  0.7156
 0.2878,  0.5086, -0.5785,  0.2019,  0.4979,  0.2980
 0.2764,  0.1943, -0.4090,  0.4632,  0.8906,  0.2960
-0.8877,  0.6705, -0.6155, -0.2098, -0.3998,  0.7107
-0.8398,  0.8093, -0.2597,  0.0614, -0.0118,  0.6502
-0.8476,  0.0158, -0.4769, -0.2859, -0.7839,  0.7715
 0.5751, -0.7868,  0.9714, -0.6457,  0.1448,  0.1175
 0.4802, -0.7001,  0.1022, -0.5668,  0.5184,  0.1090
 0.4458, -0.6469,  0.7239, -0.9604,  0.7205,  0.0779
 0.5175,  0.4339,  0.9747, -0.4438, -0.9924,  0.2879
 0.8678,  0.7158,  0.4577,  0.0334,  0.4139,  0.1678
 0.5406,  0.5012,  0.2264, -0.1963,  0.3946,  0.2088
-0.9938,  0.5498,  0.7928, -0.5214, -0.7585,  0.7687
 0.7661,  0.0863, -0.4266, -0.7233, -0.4197,  0.1466
 0.2277, -0.3517, -0.0853, -0.1118,  0.6563,  0.1767
 0.3499, -0.5570, -0.0655, -0.3705,  0.2537,  0.1632
 0.7547, -0.1046,  0.5689, -0.0861,  0.3125,  0.1257
 0.8186,  0.2110,  0.5335,  0.0094, -0.0039,  0.1391
 0.6858, -0.8644,  0.1465,  0.8855,  0.0357,  0.1845
-0.4967,  0.4015,  0.0805,  0.8977,  0.2487,  0.4663
 0.6760, -0.9841,  0.9787, -0.8446, -0.3557,  0.1509
-0.1203, -0.4885,  0.6054, -0.0443, -0.7313,  0.4854
 0.8557,  0.7919, -0.0169,  0.7134, -0.1628,  0.2002
 0.0115, -0.6209,  0.9300, -0.4116, -0.7931,  0.4052
-0.7114, -0.9718,  0.4319,  0.1290,  0.5892,  0.3661
 0.3915,  0.5557, -0.1870,  0.2955, -0.6404,  0.2954
-0.3564, -0.6548, -0.1827, -0.5172, -0.1862,  0.4622
 0.2392, -0.4959,  0.5857, -0.1341, -0.2850,  0.2470
-0.3394,  0.3947, -0.4627,  0.6166, -0.4094,  0.5325
 0.7107,  0.7768, -0.6312,  0.1707,  0.7964,  0.2757
-0.1078,  0.8437, -0.4420,  0.2177,  0.3649,  0.4028
-0.3139,  0.5595, -0.6505, -0.3161, -0.7108,  0.5546
 0.4335,  0.3986,  0.3770, -0.4932,  0.3847,  0.1810
-0.2562, -0.2894, -0.8847,  0.2633,  0.4146,  0.4036
 0.2272,  0.2966, -0.6601, -0.7011,  0.0284,  0.2778
-0.0743, -0.1421, -0.0054, -0.6770, -0.3151,  0.3597
-0.4762,  0.6891,  0.6007, -0.1467,  0.2140,  0.4266
-0.4061,  0.7193,  0.3432,  0.2669, -0.7505,  0.6147
-0.0588,  0.9731,  0.8966,  0.2902, -0.6966,  0.4955
-0.0627, -0.1439,  0.1985,  0.6999,  0.5022,  0.3077
 0.1587,  0.8494, -0.8705,  0.9827, -0.8940,  0.4263
-0.7850,  0.2473, -0.9040, -0.4308, -0.8779,  0.7199
 0.4070,  0.3369, -0.2428, -0.6236,  0.4940,  0.2215
-0.0242,  0.0513, -0.9430,  0.2885, -0.2987,  0.3947
-0.5416, -0.1322, -0.2351, -0.0604,  0.9590,  0.3683
 0.1055,  0.7783, -0.2901, -0.5090,  0.8220,  0.2984
-0.9129,  0.9015,  0.1128, -0.2473,  0.9901,  0.4776
-0.9378,  0.1424, -0.6391,  0.2619,  0.9618,  0.5368
 0.7498, -0.0963,  0.4169,  0.5549, -0.0103,  0.1614
-0.2612, -0.7156,  0.4538, -0.0460, -0.1022,  0.3717
 0.7720,  0.0552, -0.1818, -0.4622, -0.8560,  0.1685
-0.4177,  0.0070,  0.9319, -0.7812,  0.3461,  0.3052
-0.0001,  0.5542, -0.7128, -0.8336, -0.2016,  0.3803
 0.5356, -0.4194, -0.5662, -0.9666, -0.2027,  0.1776
-0.2378,  0.3187, -0.8582, -0.6948, -0.9668,  0.5474
-0.1947, -0.3579,  0.1158,  0.9869,  0.6690,  0.2992
 0.3992,  0.8365, -0.9205, -0.8593, -0.0520,  0.3154
-0.0209,  0.0793,  0.7905, -0.1067,  0.7541,  0.1864
-0.4928, -0.4524, -0.3433,  0.0951, -0.5597,  0.6261
-0.8118,  0.7404, -0.5263, -0.2280,  0.1431,  0.6349
 0.0516, -0.8480,  0.7483,  0.9023,  0.6250,  0.1959
-0.3212,  0.1093,  0.9488, -0.3766,  0.3376,  0.2735
-0.3481,  0.5490, -0.3484,  0.7797,  0.5034,  0.4379
-0.5785, -0.9170, -0.3563, -0.9258,  0.3877,  0.4121
 0.3407, -0.1391,  0.5356,  0.0720, -0.9203,  0.3458
-0.3287, -0.8954,  0.2102,  0.0241,  0.2349,  0.3247
-0.1353,  0.6954, -0.0919, -0.9692,  0.7461,  0.3338
 0.9036, -0.8982, -0.5299, -0.8733, -0.1567,  0.1187
 0.7277, -0.8368, -0.0538, -0.7489,  0.5458,  0.0830
 0.9049,  0.8878,  0.2279,  0.9470, -0.3103,  0.2194
 0.7957, -0.1308, -0.5284,  0.8817,  0.3684,  0.2172
 0.4647, -0.4931,  0.2010,  0.6292, -0.8918,  0.3371
-0.7390,  0.6849,  0.2367,  0.0626, -0.5034,  0.7039
-0.1567, -0.8711,  0.7940, -0.5932,  0.6525,  0.1710
 0.7635, -0.0265,  0.1969,  0.0545,  0.2496,  0.1445
 0.7675,  0.1354, -0.7698, -0.5460,  0.1920,  0.1728
-0.5211, -0.7372, -0.6763,  0.6897,  0.2044,  0.5217
 0.1913,  0.1980,  0.2314, -0.8816,  0.5006,  0.1998
 0.8964,  0.0694, -0.6149,  0.5059, -0.9854,  0.1825
 0.1767,  0.7104,  0.2093,  0.6452,  0.7590,  0.2832
-0.3580, -0.7541,  0.4426, -0.1193, -0.7465,  0.5657
-0.5996,  0.5766, -0.9758, -0.3933, -0.9572,  0.6800
 0.9950,  0.1641, -0.4132,  0.8579,  0.0142,  0.2003
-0.4717, -0.3894, -0.2567, -0.5111,  0.1691,  0.4266
 0.3917, -0.8561,  0.9422,  0.5061,  0.6123,  0.1212
-0.0366, -0.1087,  0.3449, -0.1025,  0.4086,  0.2475
 0.3633,  0.3943,  0.2372, -0.6980,  0.5216,  0.1925
-0.5325, -0.6466, -0.2178, -0.3589,  0.6310,  0.3568
 0.2271,  0.5200, -0.1447, -0.8011, -0.7699,  0.3128
 0.6415,  0.1993,  0.3777, -0.0178, -0.8237,  0.2181
-0.5298, -0.0768, -0.6028, -0.9490,  0.4588,  0.4356
 0.6870, -0.1431,  0.7294,  0.3141,  0.1621,  0.1632
-0.5985,  0.0591,  0.7889, -0.3900,  0.7419,  0.2945
 0.3661,  0.7984, -0.8486,  0.7572, -0.6183,  0.3449
 0.6995,  0.3342, -0.3113, -0.6972,  0.2707,  0.1712
 0.2565,  0.9126,  0.1798, -0.6043, -0.1413,  0.2893
-0.3265,  0.9839, -0.2395,  0.9854,  0.0376,  0.4770
 0.2690, -0.1722,  0.9818,  0.8599, -0.7015,  0.3954
-0.2102, -0.0768,  0.1219,  0.5607, -0.0256,  0.3949
 0.8216, -0.9555,  0.6422, -0.6231,  0.3715,  0.0801
-0.2896,  0.9484, -0.7545, -0.6249,  0.7789,  0.4370
-0.9985, -0.5448, -0.7092, -0.5931,  0.7926,  0.5402

Test data:

# synthetic_test_40.txt
#
 0.7462,  0.4006, -0.0590,  0.6543, -0.0083,  0.1935
 0.8495, -0.2260, -0.0142, -0.4911,  0.7699,  0.1078
-0.2335, -0.4049,  0.4352, -0.6183, -0.7636,  0.5088
 0.1810, -0.5142,  0.2465,  0.2767, -0.3449,  0.3136
-0.8650,  0.7611, -0.0801,  0.5277, -0.4922,  0.7140
-0.2358, -0.7466, -0.5115, -0.8413, -0.3943,  0.4533
 0.4834,  0.2300,  0.3448, -0.9832,  0.3568,  0.1360
-0.6502, -0.6300,  0.6885,  0.9652,  0.8275,  0.3046
-0.3053,  0.5604,  0.0929,  0.6329, -0.0325,  0.4756
-0.7995,  0.0740, -0.2680,  0.2086,  0.9176,  0.4565
-0.2144, -0.2141,  0.5813,  0.2902, -0.2122,  0.4119
-0.7278, -0.0987, -0.3312, -0.5641,  0.8515,  0.4438
 0.3793,  0.1976,  0.4933,  0.0839,  0.4011,  0.1905
-0.8568,  0.9573, -0.5272,  0.3212, -0.8207,  0.7415
-0.5785,  0.0056, -0.7901, -0.2223,  0.0760,  0.5551
 0.0735, -0.2188,  0.3925,  0.3570,  0.3746,  0.2191
 0.1230, -0.2838,  0.2262,  0.8715,  0.1938,  0.2878
 0.4792, -0.9248,  0.5295,  0.0366, -0.9894,  0.3149
-0.4456,  0.0697,  0.5359, -0.8938,  0.0981,  0.3879
 0.8629, -0.8505, -0.4464,  0.8385,  0.5300,  0.1769
 0.1995,  0.6659,  0.7921,  0.9454,  0.9970,  0.2330
-0.0249, -0.3066, -0.2927, -0.4923,  0.8220,  0.2437
 0.4513, -0.9481, -0.0770, -0.4374, -0.9421,  0.2879
-0.3405,  0.5931, -0.3507, -0.3842,  0.8562,  0.3987
 0.9538,  0.0471,  0.9039,  0.7760,  0.0361,  0.1706
-0.0887,  0.2104,  0.9808,  0.5478, -0.3314,  0.4128
-0.8220, -0.6302,  0.0537, -0.1658,  0.6013,  0.4306
-0.4123, -0.2880,  0.9074, -0.0461, -0.4435,  0.5144
 0.0060,  0.2867, -0.7775,  0.5161,  0.7039,  0.3599
-0.7968, -0.5484,  0.9426, -0.4308,  0.8148,  0.2979
 0.7811,  0.8450, -0.6877,  0.7594,  0.2640,  0.2362
-0.6802, -0.1113, -0.8325, -0.6694, -0.6056,  0.6544
 0.3821,  0.1476,  0.7466, -0.5107,  0.2592,  0.1648
 0.7265,  0.9683, -0.9803, -0.4943, -0.5523,  0.2454
-0.9049, -0.9797, -0.0196, -0.9090, -0.4433,  0.6447
-0.4607,  0.1811, -0.2389,  0.4050, -0.0078,  0.5229
 0.2664, -0.2932, -0.4259, -0.7336,  0.8742,  0.1834
-0.4507,  0.1029, -0.6294, -0.1158, -0.6294,  0.6081
 0.8948, -0.0124,  0.9278,  0.2899, -0.0314,  0.1534
-0.1323, -0.8813, -0.0146, -0.0697,  0.6135,  0.2386

“Linear Regression with Pseudo-Inverse Training Using C#” in Visual Studio Magazine

I wrote an article titled “Linear Regression with Pseudo-Inverse Training Using C#” in the December 2025 edition of Microsoft Visual Studio Magazine. See https://visualstudiomagazine.com/articles/2025/12/15/linear-regression-with-pseudo-inverse-training-using-csharp.aspx.

The goal of a machine learning regression problem is to predict a single numeric value. For example, you might want to predict the bank account balance of an employee based on his age, height, and years of work experience.

There are roughly a dozen main regression techniques, including nearest neighbors regression and neural network regression. Linear regression is the most fundamental technique.

The form of a linear regression prediction equation is y' = (w0 * x0) + (w1 * x1) + . . . + (wn * xn) + b, where y' is the predicted value, the xi are predictor values, the wi are constants called model weights, and b is a constant called the bias. For example, y' = predicted balance = (-0.54 * age) + (0.38 * height) + (0.11 * experience) + 0.72. Training the model is the process of finding the values of the weights and the bias so that the predicted y values are close to the known correct target y values in a set of training data.

There are three main techniques to train a linear regression model: stochastic gradient descent (SGD), pseudo-inverse, and closed form training. My article explains how to implement the pseudo-inverse training technique.
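As a rough illustration in Python/NumPy (the article's demo is in C#), pseudo-inverse training boils down to appending a column of 1s to the predictors so the bias is learned along with the weights, then applying the pseudo-inverse. The tiny made-up data here is just so the sketch runs:

import numpy as np

def train_pinv(train_X, train_y):
  n = len(train_X)
  design = np.hstack((train_X, np.ones((n, 1))))  # add bias column
  wb = np.linalg.pinv(design) @ train_y           # least-squares fit
  return wb[:-1], wb[-1]                          # (weights, bias)

def predict(x, w, b):
  return np.dot(x, w) + b   # y' = (w0 * x0) + . . . + (wn * xn) + b

X = np.array([[0.1, 0.2], [0.3, -0.4], [-0.5, 0.6], [0.7, 0.8]])
y = np.array([0.5, 0.1, 0.9, 0.3])
w, b = train_pinv(X, y)
print(predict(X[0], w, b))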

The output of my demo program is:

Begin C# linear regression using pseudo-inverse training

Loading synthetic train (200) and test (40) data
Done

First three train X:
 -0.6046  0.7260  0.9668 -0.6723  0.1947
  0.9341  0.0945  0.9454  0.4296  0.3955
 -0.9820 -0.2269 -0.9117  0.9133 -0.1277

First three train y:
  0.7180
  0.2507
  0.5698

Creating and training Linear Regression model 
 using QR p-inverse
Done

Coefficients/weights:
-0.2500  -0.0220  0.0272  -0.1434  0.0511
Bias/constant: 0.4938

Evaluating model

Accuracy train (within 0.10) = 0.7500
Accuracy test (within 0.10) = 0.7750

MSE train = 0.0013
MSE test = 0.0011

Predicting for x =
  -0.6046   0.7260   0.9668  -0.6723   0.1947

Predicted y = 0.7616

End demo

There are several algorithms to compute a pseudo-inverse. The demo uses QR decomposition via the Householder algorithm.
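As a quick cross-check of the QR idea in Python (NumPy's qr() uses Householder reflections under the hood, while np.linalg.pinv() uses SVD), for a matrix with full column rank the pseudo-inverse is inverse(R) times Q-transpose:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # full column rank
Q, R = np.linalg.qr(A)              # A = Q @ R
A_pinv = np.linalg.solve(R, Q.T)    # pinv(A) = inv(R) @ Q.T
print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True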

For large datasets, training with SGD is often best, but it requires values for a learning rate parameter and a maximum epochs parameter, and those must be determined by trial and error. For medium size datasets, training with pseudo-inverse works well, but it is the most complicated technique. For small datasets, closed form training is simpler than pseudo-inverse, but it can fail more easily.



I’m a big fan of 1950s science fiction movies. Two of my favorites featured spacecraft with tubes.

Left: In “Forbidden Planet” (1956), the crew of the starship C-57D go to planet Altair IV to determine the fate of an earlier expedition. The crew enter “DC” stations as the ship decelerates out of hyperdrive. The tubes/stations are bordered by a green energy field rather than glass or plastic.

Right: In “This Island Earth” (1955), scientists from Earth are recruited by aliens from Metaluna to help in a war with the Zagons. The Metaluna spacecraft has some sort of conditioning tubes to deal with the differences between the atmospheres of Earth and Metaluna.



Christmas Potatoes

It’s Christmas today. I love Christmas because dinner is always a big event with many people involved. My favorite food on Christmas, somewhat strangely, is potatoes. Here are my ten favorite types of potatoes on Christmas. Listed in no particular order — I love all ten.


1. Mashed Potatoes – I make pretty good mashed potatoes, but I’ve got to admit my friend Ken and his wife Marcela, daughters Michelle and Melissa, and granddaughter Mila, make the best I’ve ever eaten (every year for the last 25+).


2. Potato Soup – I like plain potato soup. No “loaded”, no cheese, no this, no that (well, maybe a little bit of bacon allowed).


3. Hash Browns – I rarely eat hash brown potatoes except when I travel, most often to Las Vegas. When I was growing up in the 1950s and 60s, they were called “hashed browns”. I only eat breakfast from fast food McDonald’s two or three times a year, but their pseudo hash browns are surprisingly good.


4. Scalloped Potatoes – I like both plain and “au gratin” variations. I’ve got to admit that I really like the Betty Crocker kind that come out of a box.


5. Stir Fried Cubed Potatoes – A few times a year, I’ll get a craving, go to my local QFC grocery store, buy a couple of Yukon Gold potatoes, cut them up, and fry them in butter.


6. Baked Potato – I don’t eat baked potatoes very often. They take too long to cook at home, and when I’m eating out at a restaurant, I’ll always gravitate to mashed potatoes. But when I do eat a baked potato, I want it topped only with butter — no sour cream, no chives, no anything. Just butter — lots of it.


7. Potato Cakes – My father used to make potato cakes for me and my brother and my sister. Leftover mashed potatoes formed into patties, floured, fried in oil. Good memories. RIP mom, dad, Roger.


8. French Fries – I love all kinds. My favorite dipping sauce is tartar sauce, with ketchup a distant second.


9. Potato Skins – I usually only get these as appetizers at parties. There’s a wide range but I prefer a minimal version, without gobs of cheese and other toppings.


10. Potato Salad – I can, and often have, made an entire meal of nothing but a good-sized bowl of potato salad. I like the Reser's brand. They make many varieties, but I like plain, old-fashioned potato salad the best.


11. Tater Tots – I know this is supposed to be a "top 10" list, but I can't leave out tater tots. I love tater tots but I don't eat them very often because once I start, it's difficult to stop eating them. I especially like tots when I eat steak. Tater Tots were invented in 1953 by brothers Nephi and Golden Grigg in Ontario, Oregon, as a way to use up leftover potato shavings from their Ore-Ida frozen French fry production.




NFL 2025 Week 17 Predictions – Zoltar Struggles with Wild Vegas Point Spread Swings

Zoltar is my NFL football prediction computer program. It uses a neural network and a type of reinforcement learning. Here are Zoltar’s predictions for week #17 of the 2025 season.

Zoltar:  commanders  by    1  opp =     cowboys    | Vegas:  commanders  by  3.5
Zoltar:       lions  by    0  opp =     vikings    | Vegas:       lions  by    3
Zoltar:     broncos  by    0  opp =      chiefs    | Vegas:      chiefs  by  4.5
Zoltar:    chargers  by    4  opp =      texans    | Vegas:      texans  by  1.5
Zoltar:     packers  by    6  opp =      ravens    | Vegas:     packers  by  2.5
Zoltar:    seahawks  by    5  opp =    panthers    | Vegas:    seahawks  by  7.5
Zoltar:     bengals  by    6  opp =   cardinals    | Vegas:     bengals  by  6.5
Zoltar:      saints  by    0  opp =      titans    | Vegas:      titans  by  3.5
Zoltar:    steelers  by    9  opp =      browns    | Vegas:    steelers  by  2.5
Zoltar:     jaguars  by    0  opp =       colts    | Vegas:       colts  by  1.5
Zoltar:  buccaneers  by    0  opp =    dolphins    | Vegas:  buccaneers  by  1.5
Zoltar:    patriots  by    5  opp =        jets    | Vegas:    patriots  by  1.5
Zoltar:     raiders  by    2  opp =      giants    | Vegas:      giants  by  3.5
Zoltar:       bills  by    4  opp =      eagles    | Vegas:       bills  by  1.5
Zoltar: fortyniners  by    2  opp =       bears    | Vegas: fortyniners  by  3.5
Zoltar:        rams  by    4  opp =     falcons    | Vegas:        rams  by  8.5

Zoltar theoretically suggests betting when the Vegas line is significantly different from Zoltar’s prediction. For the last few weeks of this season I’ve been using a very conservative threshold of 5 points difference. During the middle of the season I used a more aggressive threshold of 3 points.

For week #17, using a conservative threshold of 5 points, Zoltar likes two Vegas underdogs and one Vegas favorite:

texans       at     chargers: Bet on Vegas underdog chargers
steelers     at       browns: Bet on Vegas favorite  steelers
giants       at      raiders: Bet on Vegas underdog raiders

A bet on the Vegas underdog Chargers against the Texans will pay off if the Chargers win by any score, or if the favored Texans win but by less than 1.5 points (i.e., 1 point). If a favored team wins by exactly the point spread, the wager is a push. This is why point spreads often have a 0.5 added — called “the hook” — to eliminate pushes.

I use the early Vegas point spreads, usually posted on late Monday night, right after the Monday Night Football game. By the time you read this, the point spreads will have changed. This season the point spreads are changing wildly. A swing of 7 points is not uncommon.

I speculate that this is due to a huge increase in betting. When a lot of money is bet on one team in a matchup, the bookmakers must make a huge change in the point spread to encourage betting on the other team. Bookmakers only make money when the betting amounts are close to equal on both teams.

This week, 7 of the 16 scheduled games had a swing of 5 points or more. For example, the Patriots-Jets game opened with the Patriots favored by 1.5 points. But by Tuesday morning, the Patriots had become favored by 13.5 points — a swing of 12 points.

At the end of the season, predictions get very dicey. Injuries have accumulated and teams that have nothing to gain often sit key players to avoid injuries.

Put another way, Zoltar’s predictions have to be taken with a huge grain of salt.

Theoretically, if you must bet $110 to win $100 (typical in Vegas) then you'll make money if you predict at 53% accuracy or better. (The break-even point is where 100 * p = 110 * (1 - p), which gives p = 110 / 210 = 0.524, so roughly 53%.) But realistically, you need to predict at 60% accuracy or better, to account for logistics and things like data entry errors.

In week #16, against the Vegas point spread, Zoltar went 1-3 using a conservative 5.0 points as the advice threshold. Bad week, including a very bad recommendation on the Texans-Raiders game.

For the season, against the spread, Zoltar is 43-27 (~61% accuracy).

Just for fun, I track how well Zoltar does when trying to predict just which team will win a game. This isn't useful except for parlay betting. In week #16, just predicting the winning team, Zoltar went 7-6, with 3 games too close for Zoltar to express an opinion, which is poor. Vegas was very poor too, going just 8-8 when just predicting the winning team.



My prediction system is named after the Zoltar fortune teller machine that can be found in arcades. Coin operated fortune tellers have been around for well over 100 years.

Center: The strange Sega Astrodata fortune teller was manufactured in 1971. Users would input their birthdate and then the machine would punch holes in a card to indicate seven predictions for things like "Your financial prospects will be" (poor, favorable, good, very good, excellent). When I was a teenager, one summer I worked at a miniature golf course in Anaheim, California. The property had an arcade with some nice machines, including an Astrodata, but the Astrodata was broken more than half the time.

Right: The nice Egyptian Seeress was manufactured in 1929 by the Exhibit Supply Company (ESCO), a prominent maker of arcade games from the 1920s through the 1960s. The machine would give the user a business card sized fortune like, “You have a happy optimistic disposition. You are artistic, idealistic, and sometimes impractical. You are sympathetic and loving. You like to travel and you are eager to improve your mind.”



Random Forest Regression from Scratch Using Python

Naive decision tree regression prediction models usually overfit the training data. The model is accurate on the training data, but has poor accuracy and high MSE on new, previously unseen data.

One of several ways to deal with decision tree overfitting is to create a collection of trees, training each tree on a different subset of the full training data. Then, to predict, compute the average of the predictions of all the trees. Simple.

There are two closely related simple tree ensemble techniques: random forest regression, and bagging ("bootstrap aggregation") tree regression. (More sophisticated ensemble techniques are adaptive boosting and gradient boosting). In high-level pseudo-code, training a random forest looks like:

create an empty collection of decision trees
loop each number of trees times
  create a rows-subset of training data
  train curr tree using rows-subset, but
    with random columns used for each training split
  add trained tree to collection of trees
end loop

The rows-subset can have the same number of rows as the source training data, or fewer rows. When rows are randomly selected, the selection is done "with replacement," so a specific row might be included in the subset more than once, and some rows might not be included at all. The rows-subset uses all columns. But in random forest regression, during training, when each split is computed, a different randomly selected columns-subset is used.

In bagging tree regression, everything is the same except there is no selection of random columns during splits — all columns are used.
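For comparison, here is a minimal sketch (assuming the scikit-learn library) of configuring the two techniques. In the random forest, max_features limits the columns considered at each split; the bagging version considers all columns at every split. Usage (fit and predict) is the same as in the demo below.

from sklearn.ensemble import RandomForestRegressor, BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# random forest: random rows per tree, random columns per split
rf = RandomForestRegressor(n_estimators=10, max_depth=6,
  max_features=4, random_state=0)

# bagging: random rows per tree, all columns at every split
bag = BaggingRegressor(
  estimator=DecisionTreeRegressor(max_depth=6),
  n_estimators=10, random_state=0)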



When I write code, I often sketch out the data structures involved using paper and pencil. I grew up just after World War II, in a time before ball point pens and copy machines, and so using paper and pencil puts me in my intellectual comfort zone.


If you have a decision tree implementation, then implementing random forest regression from scratch is relatively easy. You do have to modify the decision tree regressor code to select random columns during the split process, but that modification is straightforward. If you don't have the source code for a decision tree regressor, I don't think implementing a random forest regressor from scratch is feasible.

Here's my demo code method that trains a random forest regressor:

  def fit(self, X, y):
    for i in range(self.n_trees):
      # special tree that allows random columns during split
      curr_tree = \
        MyDecisionTreeRegressor(max_depth=self.max_depth,
          min_samples=self.min_samples,
          n_split_cols=self.n_split_cols)

      # create random rows-subset of training data
      # for each tree
      rnd_rows = self.rnd.choice(self.n_rows, 
        size=(self.n_rows), replace=True)
      subset_X = X[rnd_rows,:]
      subset_y = y[rnd_rows]

      # train tree on subset and add to list
      # random cols will be used during splits
      curr_tree.fit(subset_X, subset_y)
      self.trees.append(curr_tree)
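The random column selection happens inside the tree's split-search code rather than in fit(). The idea, as a small self-contained sketch (the full best_split() method in the complete listing below does this in place), is:

import numpy as np

def pick_split_cols(n_cols, n_split_cols, rnd):
  # rnd is a np.random.RandomState; n_split_cols = -1 means all cols
  cols = np.arange(n_cols)
  rnd.shuffle(cols)                # random order
  if n_split_cols != -1:
    cols = cols[0:n_split_cols]    # keep just a few columns
  return cols                      # only these are examined for the split

# 4 of the 5 column indices, in random order
print(pick_split_cols(5, 4, np.random.RandomState(0)))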

For my demo, I used a set of synthetic data that I generated using a neural network with random weights and biases. The data looks like:

-0.1660,  0.4406, -0.9998, -0.3953, -0.7065, 0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562, 0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192, 0.8054
. . .

The first five values on each line are the predictors. The sixth value is the target to predict. All predictor values are between -1.0 and 1.0. Normalizing the predictor values is not necessary but is helpful when using the data with other regression techniques that require normalization (such as k-nearest neighbors regression). There are 200 items in the training data and 40 items in the test data.

The output of the from-scratch random forest regression demo program is:

Begin random forest regression scratch Python

Loading synthetic train (200), test (40) data
Done

First three X predictors:
[[-0.1660  0.4406 -0.9998 -0.3953 -0.7065]
 [ 0.0776 -0.1616  0.3704 -0.5911  0.7562]
 [-0.9452  0.3409 -0.1654  0.1174 -0.7192]]

First three y targets:
0.4840
0.1568
0.8054

Setting num_trees = 10
Setting n_rows = 200
Setting n_split_cols = 4
Setting max_depth = 6
Setting min_samples = 2

Creating and training random forest model
Done

Accuracy train (within 0.10): 0.7800
Accuracy test (within 0.10): 0.5750

MSE train: 0.0006
MSE test: 0.0020

End random forest regression scratch Python demo

====================

Using scikit RandomForestRegressor:

Creating and training scikit Random Forest model
Done

Accuracy train (within 0.10): 0.7850
Accuracy test (within 0.10): 0.6500

MSE train: 0.0005
MSE test: 0.0019

I validated my from-scratch demo by running the data through the scikit-learn Python language library RandomForestRegressor module. The results were essentially the same (given the randomness of column selection during tree splits).

Much fun.



Three nice images (to my eye anyway) from an Internet search for "alien forest".


Demo program.

# random_forest_regression_scratch.py

# each tree is trained on a subset of the data, with some
# rows possibly duplicated, and some rows possibly not used.
# subset uses all columns, but during training, at each
# split, only n_split_cols randomly selected columns are used.

import numpy as np

# ===========================================================

class MyRandomForestRegressor:  # avoid scikit name collision
  def __init__(self, n_trees, n_rows, n_split_cols,
    max_depth=3, min_samples=2, seed=0):
    self.n_trees = n_trees
    self.n_rows = n_rows  # for main subset
    self.n_split_cols = n_split_cols  # each split calculation
    self.max_depth = max_depth
    self.min_samples = min_samples
    self.trees = []
    self.rnd = np.random.RandomState(seed)

  def fit(self, X, y):
    for i in range(self.n_trees):
      # special tree that allows random columns during split
      curr_tree = \
        MyDecisionTreeRegressor(max_depth=self.max_depth,
          min_samples=self.min_samples,
          n_split_cols=self.n_split_cols)

      # create random rows-subset of training data
      # for each tree
      rnd_rows = self.rnd.choice(self.n_rows, 
        size=(self.n_rows), replace=True)
      subset_X = X[rnd_rows,:]
      subset_y = y[rnd_rows]

      # train tree on subset and add to list
      # random cols will be used during splits
      curr_tree.fit(subset_X, subset_y)
      self.trees.append(curr_tree)

  def predict_one(self, x):
    sum = 0.0
    for i in range(self.n_trees):
      pred_y = self.trees[i].predict_one(x)
      sum += pred_y
    return sum / self.n_trees

  def predict(self, X):
    result = np.zeros(len(X), dtype=np.float64)
    for i in range(len(X)):
      result[i] = self.predict_one(X[i])
    return result    

# ===========================================================


# ===========================================================

class MyDecisionTreeRegressor:  # avoid scikit name collision
  # if max_depth = n, tree has at most 2^(n+1) - 1 nodes.

  def __init__(self, max_depth=3, min_samples=2,
    n_split_cols=-1, seed=0):
    self.max_depth = max_depth
    self.min_samples = min_samples # aka min_samples_split
    self.n_split_cols = n_split_cols  # mostly random forest
    self.root = None
    self.rnd = np.random.RandomState(seed) # split col order

  # ===============================================

  class Node:
    def __init__(self, id=0, col_idx=-1, thresh=0.0,
        left=None, right=None, value=0.0, is_leaf=False):
      self.id = id  # useful for debugging
      self.col_idx = col_idx
      self.thresh = thresh
      self.left = left
      self.right = right
      self.value = value
      self.is_leaf = is_leaf  # False for an in-node

  # ===============================================

  def best_split(self, X, y):
    best_col_idx = -1  # indicates a bad split
    best_thresh = 0.0
    best_mse = np.inf  # smaller is better
    n_rows, n_cols = X.shape

    rnd_cols = np.arange(n_cols)
    self.rnd.shuffle(rnd_cols)
    if self.n_split_cols != -1:  # just use some cols
      rnd_cols = rnd_cols[0:self.n_split_cols]

    for j in range(len(rnd_cols)):
      col_idx = rnd_cols[j]
      examined_threshs = set()
      for i in range(n_rows):
        thresh = X[i][col_idx]  # candidate threshold value

        if thresh in examined_threshs:
          continue
        examined_threshs.add(thresh)

        # get rows where x is lte, gt thresh
        left_idxs = np.where(X[:,col_idx] <= thresh)[0]
        right_idxs = np.where(X[:,col_idx] > thresh)[0]

        # check proposed split
        if len(left_idxs) == 0 or \
          len(right_idxs) == 0:
          continue

        # get left and right y values
        left_y_vals = y[left_idxs]  # not empty
        right_y_vals = y[right_idxs]  # not empty

        # compute proposed split MSE
        mse_left = self.vector_mse(left_y_vals)
        mse_right = self.vector_mse(right_y_vals)
        split_mse = (len(left_y_vals) * mse_left + \
          len(right_y_vals) * mse_right) / n_rows

        if split_mse < best_mse:
          best_col_idx = col_idx
          best_thresh = thresh
          best_mse = split_mse          

    return best_col_idx, best_thresh  # -1 is bad/no split

  # ---------------------------------------------------------

  def vector_mse(self, y):  # variance but called MSE
    if len(y) == 0:
      return 0.0  # should never get here
    return np.var(y)

  # ---------------------------------------------------------

  def make_tree(self, X, y):
    root = self.Node()  # is_leaf is False
    stack = [(root, X, y, 0)]  # curr depth = 0

    while (len(stack) > 0):
      curr_node, curr_X, curr_y, curr_depth = stack.pop()

      if curr_depth == self.max_depth or \
        len(curr_y) < self.min_samples:
        curr_node.value = np.mean(curr_y)
        curr_node.is_leaf = True
        continue

      col_idx, thresh = self.best_split(curr_X, curr_y) 

      if col_idx == -1:  # cannot split
        curr_node.value = np.mean(curr_y)
        curr_node.is_leaf = True
        continue
      
      # got a good split so at an internal, non-leaf node
      curr_node.col_idx = col_idx
      curr_node.thresh = thresh

      # create and attach child nodes
      left_idxs = np.where(curr_X[:,col_idx] <= thresh)[0]
      right_idxs = np.where(curr_X[:,col_idx] > thresh)[0]

      left_X = curr_X[left_idxs,:]
      left_y = curr_y[left_idxs]
      right_X = curr_X[right_idxs,:]
      right_y = curr_y[right_idxs]

      curr_node.left = self.Node(id=2*curr_node.id+1)
      stack.append((curr_node.left,
        left_X, left_y, curr_depth+1))

      curr_node.right = self.Node(id=2*curr_node.id+2)
      stack.append((curr_node.right,
        right_X, right_y, curr_depth+1))
      
    return root

  # ---------------------------------------------------------      

  def fit(self, X, y):
    self.root = self.make_tree(X, y)

  # ---------------------------------------------------------

  def predict_one(self, x):
    curr = self.root
    while curr.is_leaf == False:
      if x[curr.col_idx] <= curr.thresh:
        curr = curr.left
      else:
        curr = curr.right
    return curr.value

  def predict(self, X):  # scikit always uses a matrix input
    result = np.zeros(len(X), dtype=np.float64)
    for i in range(len(X)):
      result[i] = self.predict_one(X[i])
    return result

  # ---------------------------------------------------------

# ===========================================================
# ===========================================================

# -----------------------------------------------------------

def accuracy(model, data_X, data_y, pct_close):
  # assumes model has a predict(X)
  n = len(data_X)
  n_correct = 0; n_wrong = 0
  for i in range(n):
    x = data_X[i].reshape(1,-1)  # make it a matrix
    y = data_y[i]
    y_pred = model.predict(x)  # predict() expects 2D

    if np.abs(y - y_pred) < np.abs(y * pct_close):
      n_correct += 1
    else: 
      n_wrong += 1
  # print("Correct = " + str(n_correct))
  # print("Wrong   = " + str(n_wrong))
  return n_correct / (n_correct + n_wrong)

# -----------------------------------------------------------

def MSE(model, data_X, data_y):
  n = len(data_X)
  sum = 0.0
  for i in range(n):
    x = data_X[i].reshape(1,-1)
    y = data_y[i]
    y_pred = model.predict(x)
    sum += (y - y_pred) * (y - y_pred)

  return sum / n

# -----------------------------------------------------------

def main():
  print("\nBegin random forest regression scratch Python ")

  np.set_printoptions(precision=4, suppress=True,
    floatmode='fixed')
  np.random.seed(0)  # not used this version

  # 1. load data
  print("\nLoading synthetic train (200), test (40) data ")
  train_file = ".\\Data\\synthetic_train_200.txt"
  # -0.1660,0.4406,-0.9998,-0.3953,-0.7065,0.4840
  #  0.0776,-0.1616,0.3704,-0.5911,0.7562,0.1568
  # -0.9452,0.3409,-0.1654,0.1174,-0.7192,0.8054
  # . . .

  train_X = np.loadtxt(train_file, comments="#",
    usecols=[0,1,2,3,4],
    delimiter=",",  dtype=np.float64)
  train_y = np.loadtxt(train_file, comments="#", usecols=5,
    delimiter=",",  dtype=np.float64)

  test_file = ".\\Data\\synthetic_test_40.txt"
  test_X = np.loadtxt(test_file, comments="#",
    usecols=[0,1,2,3,4],
    delimiter=",",  dtype=np.float64)
  test_y = np.loadtxt(test_file, comments="#", usecols=5,
    delimiter=",",  dtype=np.float64)
  print("Done ")

  print("\nFirst three X predictors: ")
  print(train_X[0:3,:])
  print("\nFirst three y targets: ")
  for i in range(3):
    print("%0.4f" % train_y[i])

  # 2. create and train model
  nt = 10   # number trees
  nr = 200  # number rows
  nc = 4    # number cols to use during splits
  md = 6    # max_depth
  ms = 2    # min_samples to consider a split

  print("\nSetting num_trees = " + str(nt))
  print("Setting n_rows = " + str(nr))
  print("Setting n_split_cols = " + str(nc))
  print("Setting max_depth = " + str(md))
  print("Setting min_samples = " + str(ms))

  print("\nCreating and training random forest model ")
  model = MyRandomForestRegressor(n_trees=nt, n_rows=nr,
    n_split_cols=nc, max_depth=md, min_samples=ms, seed=0)
  model.fit(train_X, train_y)
  print("Done ")

  # 3. evaluate model
  acc_train = accuracy(model, train_X, train_y, 0.10)
  print("\nAccuracy train (within 0.10): %0.4f " % acc_train)
  acc_test = accuracy(model, test_X, test_y, 0.10)
  print("Accuracy test (within 0.10): %0.4f " % acc_test)

  mse_train = MSE(model, train_X, train_y)
  print("\nMSE train: %0.4f " % mse_train)
  mse_test = MSE(model, test_X, test_y)
  print("MSE test: %0.4f " % mse_test)

  # 4. use model
  x = train_X[0].reshape(1,-1)
  print("\nPredicting for: ")
  print(x)
  y_pred = model.predict(x)
  print("Predicted y = %0.4f " % y_pred)

  print("\nEnd random forest regression scratch Python demo ")

  print("\n==================== ")

  print("\nUsing scikit RandomForestRegressor: ")
  from sklearn.ensemble import RandomForestRegressor

  print("\nCreating and training scikit Random Forest model ")
  rfr = RandomForestRegressor(n_estimators=10,
    max_depth=6, max_features=4, random_state=0)
  rfr.fit(train_X, train_y)
  print("Done ")

  acc_train = accuracy(rfr, train_X, train_y, 0.10)
  print("\nAccuracy train (within 0.10): %0.4f " % acc_train)
  acc_test = accuracy(rfr, test_X, test_y, 0.10)
  print("Accuracy test (within 0.10): %0.4f " % acc_test)

  mse_train = MSE(rfr, train_X, train_y)
  print("\nMSE train: %0.4f " % mse_train)
  mse_test = MSE(rfr, test_X, test_y)
  print("MSE test: %0.4f " % mse_test)

  x = train_X[0].reshape(1,-1)
  print("\nPredicting for: ")
  print(x)
  y_pred = rfr.predict(x)
  print("Predicted y = %0.4f " % y_pred)


if __name__ == "__main__":
  main()

Training data:

# synthetic_train_200.txt
#
-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
-0.8299, -0.9219, -0.6603,  0.7563, -0.8033,  0.7955
 0.0663,  0.3838, -0.3690,  0.3730,  0.6693,  0.3206
-0.9634,  0.5003,  0.9777,  0.4963, -0.4391,  0.7377
-0.1042,  0.8172, -0.4128, -0.4244, -0.7399,  0.4801
-0.9613,  0.3577, -0.5767, -0.4689, -0.0169,  0.6861
-0.7065,  0.1786,  0.3995, -0.7953, -0.1719,  0.5569
 0.3888, -0.1716, -0.9001,  0.0718,  0.3276,  0.2500
 0.1731,  0.8068, -0.7251, -0.7214,  0.6148,  0.3297
-0.2046, -0.6693,  0.8550, -0.3045,  0.5016,  0.2129
 0.2473,  0.5019, -0.3022, -0.4601,  0.7918,  0.2613
-0.1438,  0.9297,  0.3269,  0.2434, -0.7705,  0.5171
 0.1568, -0.1837, -0.5259,  0.8068,  0.1474,  0.3307
-0.9943,  0.2343, -0.3467,  0.0541,  0.7719,  0.5581
 0.2467, -0.9684,  0.8589,  0.3818,  0.9946,  0.1092
-0.6553, -0.7257,  0.8652,  0.3936, -0.8680,  0.7018
 0.8460,  0.4230, -0.7515, -0.9602, -0.9476,  0.1996
-0.9434, -0.5076,  0.7201,  0.0777,  0.1056,  0.5664
 0.9392,  0.1221, -0.9627,  0.6013, -0.5341,  0.1533
 0.6142, -0.2243,  0.7271,  0.4942,  0.1125,  0.1661
 0.4260,  0.1194, -0.9749, -0.8561,  0.9346,  0.2230
 0.1362, -0.5934, -0.4953,  0.4877, -0.6091,  0.3810
 0.6937, -0.5203, -0.0125,  0.2399,  0.6580,  0.1460
-0.6864, -0.9628, -0.8600, -0.0273,  0.2127,  0.5387
 0.9772,  0.1595, -0.2397,  0.1019,  0.4907,  0.1611
 0.3385, -0.4702, -0.8673, -0.2598,  0.2594,  0.2270
-0.8669, -0.4794,  0.6095, -0.6131,  0.2789,  0.4700
 0.0493,  0.8496, -0.4734, -0.8681,  0.4701,  0.3516
 0.8639, -0.9721, -0.5313,  0.2336,  0.8980,  0.1412
 0.9004,  0.1133,  0.8312,  0.2831, -0.2200,  0.1782
 0.0991,  0.8524,  0.8375, -0.2102,  0.9265,  0.2150
-0.6521, -0.7473, -0.7298,  0.0113, -0.9570,  0.7422
 0.6190, -0.3105,  0.8802,  0.1640,  0.7577,  0.1056
 0.6895,  0.8108, -0.0802,  0.0927,  0.5972,  0.2214
 0.1982, -0.9689,  0.1870, -0.1326,  0.6147,  0.1310
-0.3695,  0.7858,  0.1557, -0.6320,  0.5759,  0.3773
-0.1596,  0.3581,  0.8372, -0.9992,  0.9535,  0.2071
-0.2468,  0.9476,  0.2094,  0.6577,  0.1494,  0.4132
 0.1737,  0.5000,  0.7166,  0.5102,  0.3961,  0.2611
 0.7290, -0.3546,  0.3416, -0.0983, -0.2358,  0.1332
-0.3652,  0.2438, -0.1395,  0.9476,  0.3556,  0.4170
-0.6029, -0.1466, -0.3133,  0.5953,  0.7600,  0.4334
-0.4596, -0.4953,  0.7098,  0.0554,  0.6043,  0.2775
 0.1450,  0.4663,  0.0380,  0.5418,  0.1377,  0.2931
-0.8636, -0.2442, -0.8407,  0.9656, -0.6368,  0.7429
 0.6237,  0.7499,  0.3768,  0.1390, -0.6781,  0.2185
-0.5499,  0.1850, -0.3755,  0.8326,  0.8193,  0.4399
-0.4858, -0.7782, -0.6141, -0.0008,  0.4572,  0.4197
 0.7033, -0.1683,  0.2334, -0.5327, -0.7961,  0.1776
 0.0317, -0.0457, -0.6947,  0.2436,  0.0880,  0.3345
 0.5031, -0.5559,  0.0387,  0.5706, -0.9553,  0.3107
-0.3513,  0.7458,  0.6894,  0.0769,  0.7332,  0.3170
 0.2205,  0.5992, -0.9309,  0.5405,  0.4635,  0.3532
-0.4806, -0.4859,  0.2646, -0.3094,  0.5932,  0.3202
 0.9809, -0.3995, -0.7140,  0.8026,  0.0831,  0.1600
 0.9495,  0.2732,  0.9878,  0.0921,  0.0529,  0.1289
-0.9476, -0.6792,  0.4913, -0.9392, -0.2669,  0.5966
 0.7247,  0.3854,  0.3819, -0.6227, -0.1162,  0.1550
-0.5922, -0.5045, -0.4757,  0.5003, -0.0860,  0.5863
-0.8861,  0.0170, -0.5761,  0.5972, -0.4053,  0.7301
 0.6877, -0.2380,  0.4997,  0.0223,  0.0819,  0.1404
 0.9189,  0.6079, -0.9354,  0.4188, -0.0700,  0.1907
-0.1428, -0.7820,  0.2676,  0.6059,  0.3936,  0.2790
 0.5324, -0.3151,  0.6917, -0.1425,  0.6480,  0.1071
-0.8432, -0.9633, -0.8666, -0.0828, -0.7733,  0.7784
-0.9444,  0.5097, -0.2103,  0.4939, -0.0952,  0.6787
-0.0520,  0.6063, -0.1952,  0.8094, -0.9259,  0.4836
 0.5477, -0.7487,  0.2370, -0.9793,  0.0773,  0.1241
 0.2450,  0.8116,  0.9799,  0.4222,  0.4636,  0.2355
 0.8186, -0.1983, -0.5003, -0.6531, -0.7611,  0.1511
-0.4714,  0.6382, -0.3788,  0.9648, -0.4667,  0.5950
 0.0673, -0.3711,  0.8215, -0.2669, -0.1328,  0.2677
-0.9381,  0.4338,  0.7820, -0.9454,  0.0441,  0.5518
-0.3480,  0.7190,  0.1170,  0.3805, -0.0943,  0.4724
-0.9813,  0.1535, -0.3771,  0.0345,  0.8328,  0.5438
-0.1471, -0.5052, -0.2574,  0.8637,  0.8737,  0.3042
-0.5454, -0.3712, -0.6505,  0.2142, -0.1728,  0.5783
 0.6327, -0.6297,  0.4038, -0.5193,  0.1484,  0.1153
-0.5424,  0.3282, -0.0055,  0.0380, -0.6506,  0.6613
 0.1414,  0.9935,  0.6337,  0.1887,  0.9520,  0.2540
-0.9351, -0.8128, -0.8693, -0.0965, -0.2491,  0.7353
 0.9507, -0.6640,  0.9456,  0.5349,  0.6485,  0.1059
-0.0462, -0.9737, -0.2940, -0.0159,  0.4602,  0.2606
-0.0627, -0.0852, -0.7247, -0.9782,  0.5166,  0.2977
 0.0478,  0.5098, -0.0723, -0.7504, -0.3750,  0.3335
 0.0090,  0.3477,  0.5403, -0.7393, -0.9542,  0.4415
-0.9748,  0.3449,  0.3736, -0.1015,  0.8296,  0.4358
 0.2887, -0.9895, -0.0311,  0.7186,  0.6608,  0.2057
 0.1570, -0.4518,  0.1211,  0.3435, -0.2951,  0.3244
 0.7117, -0.6099,  0.4946, -0.4208,  0.5476,  0.1096
-0.2929, -0.5726,  0.5346, -0.3827,  0.4665,  0.2465
 0.4889, -0.5572, -0.5718, -0.6021, -0.7150,  0.2163
-0.7782,  0.3491,  0.5996, -0.8389, -0.5366,  0.6516
-0.5847,  0.8347,  0.4226,  0.1078, -0.3910,  0.6134
 0.8469,  0.4121, -0.0439, -0.7476,  0.9521,  0.1571
-0.6803, -0.5948, -0.1376, -0.1916, -0.7065,  0.7156
 0.2878,  0.5086, -0.5785,  0.2019,  0.4979,  0.2980
 0.2764,  0.1943, -0.4090,  0.4632,  0.8906,  0.2960
-0.8877,  0.6705, -0.6155, -0.2098, -0.3998,  0.7107
-0.8398,  0.8093, -0.2597,  0.0614, -0.0118,  0.6502
-0.8476,  0.0158, -0.4769, -0.2859, -0.7839,  0.7715
 0.5751, -0.7868,  0.9714, -0.6457,  0.1448,  0.1175
 0.4802, -0.7001,  0.1022, -0.5668,  0.5184,  0.1090
 0.4458, -0.6469,  0.7239, -0.9604,  0.7205,  0.0779
 0.5175,  0.4339,  0.9747, -0.4438, -0.9924,  0.2879
 0.8678,  0.7158,  0.4577,  0.0334,  0.4139,  0.1678
 0.5406,  0.5012,  0.2264, -0.1963,  0.3946,  0.2088
-0.9938,  0.5498,  0.7928, -0.5214, -0.7585,  0.7687
 0.7661,  0.0863, -0.4266, -0.7233, -0.4197,  0.1466
 0.2277, -0.3517, -0.0853, -0.1118,  0.6563,  0.1767
 0.3499, -0.5570, -0.0655, -0.3705,  0.2537,  0.1632
 0.7547, -0.1046,  0.5689, -0.0861,  0.3125,  0.1257
 0.8186,  0.2110,  0.5335,  0.0094, -0.0039,  0.1391
 0.6858, -0.8644,  0.1465,  0.8855,  0.0357,  0.1845
-0.4967,  0.4015,  0.0805,  0.8977,  0.2487,  0.4663
 0.6760, -0.9841,  0.9787, -0.8446, -0.3557,  0.1509
-0.1203, -0.4885,  0.6054, -0.0443, -0.7313,  0.4854
 0.8557,  0.7919, -0.0169,  0.7134, -0.1628,  0.2002
 0.0115, -0.6209,  0.9300, -0.4116, -0.7931,  0.4052
-0.7114, -0.9718,  0.4319,  0.1290,  0.5892,  0.3661
 0.3915,  0.5557, -0.1870,  0.2955, -0.6404,  0.2954
-0.3564, -0.6548, -0.1827, -0.5172, -0.1862,  0.4622
 0.2392, -0.4959,  0.5857, -0.1341, -0.2850,  0.2470
-0.3394,  0.3947, -0.4627,  0.6166, -0.4094,  0.5325
 0.7107,  0.7768, -0.6312,  0.1707,  0.7964,  0.2757
-0.1078,  0.8437, -0.4420,  0.2177,  0.3649,  0.4028
-0.3139,  0.5595, -0.6505, -0.3161, -0.7108,  0.5546
 0.4335,  0.3986,  0.3770, -0.4932,  0.3847,  0.1810
-0.2562, -0.2894, -0.8847,  0.2633,  0.4146,  0.4036
 0.2272,  0.2966, -0.6601, -0.7011,  0.0284,  0.2778
-0.0743, -0.1421, -0.0054, -0.6770, -0.3151,  0.3597
-0.4762,  0.6891,  0.6007, -0.1467,  0.2140,  0.4266
-0.4061,  0.7193,  0.3432,  0.2669, -0.7505,  0.6147
-0.0588,  0.9731,  0.8966,  0.2902, -0.6966,  0.4955
-0.0627, -0.1439,  0.1985,  0.6999,  0.5022,  0.3077
 0.1587,  0.8494, -0.8705,  0.9827, -0.8940,  0.4263
-0.7850,  0.2473, -0.9040, -0.4308, -0.8779,  0.7199
 0.4070,  0.3369, -0.2428, -0.6236,  0.4940,  0.2215
-0.0242,  0.0513, -0.9430,  0.2885, -0.2987,  0.3947
-0.5416, -0.1322, -0.2351, -0.0604,  0.9590,  0.3683
 0.1055,  0.7783, -0.2901, -0.5090,  0.8220,  0.2984
-0.9129,  0.9015,  0.1128, -0.2473,  0.9901,  0.4776
-0.9378,  0.1424, -0.6391,  0.2619,  0.9618,  0.5368
 0.7498, -0.0963,  0.4169,  0.5549, -0.0103,  0.1614
-0.2612, -0.7156,  0.4538, -0.0460, -0.1022,  0.3717
 0.7720,  0.0552, -0.1818, -0.4622, -0.8560,  0.1685
-0.4177,  0.0070,  0.9319, -0.7812,  0.3461,  0.3052
-0.0001,  0.5542, -0.7128, -0.8336, -0.2016,  0.3803
 0.5356, -0.4194, -0.5662, -0.9666, -0.2027,  0.1776
-0.2378,  0.3187, -0.8582, -0.6948, -0.9668,  0.5474
-0.1947, -0.3579,  0.1158,  0.9869,  0.6690,  0.2992
 0.3992,  0.8365, -0.9205, -0.8593, -0.0520,  0.3154
-0.0209,  0.0793,  0.7905, -0.1067,  0.7541,  0.1864
-0.4928, -0.4524, -0.3433,  0.0951, -0.5597,  0.6261
-0.8118,  0.7404, -0.5263, -0.2280,  0.1431,  0.6349
 0.0516, -0.8480,  0.7483,  0.9023,  0.6250,  0.1959
-0.3212,  0.1093,  0.9488, -0.3766,  0.3376,  0.2735
-0.3481,  0.5490, -0.3484,  0.7797,  0.5034,  0.4379
-0.5785, -0.9170, -0.3563, -0.9258,  0.3877,  0.4121
 0.3407, -0.1391,  0.5356,  0.0720, -0.9203,  0.3458
-0.3287, -0.8954,  0.2102,  0.0241,  0.2349,  0.3247
-0.1353,  0.6954, -0.0919, -0.9692,  0.7461,  0.3338
 0.9036, -0.8982, -0.5299, -0.8733, -0.1567,  0.1187
 0.7277, -0.8368, -0.0538, -0.7489,  0.5458,  0.0830
 0.9049,  0.8878,  0.2279,  0.9470, -0.3103,  0.2194
 0.7957, -0.1308, -0.5284,  0.8817,  0.3684,  0.2172
 0.4647, -0.4931,  0.2010,  0.6292, -0.8918,  0.3371
-0.7390,  0.6849,  0.2367,  0.0626, -0.5034,  0.7039
-0.1567, -0.8711,  0.7940, -0.5932,  0.6525,  0.1710
 0.7635, -0.0265,  0.1969,  0.0545,  0.2496,  0.1445
 0.7675,  0.1354, -0.7698, -0.5460,  0.1920,  0.1728
-0.5211, -0.7372, -0.6763,  0.6897,  0.2044,  0.5217
 0.1913,  0.1980,  0.2314, -0.8816,  0.5006,  0.1998
 0.8964,  0.0694, -0.6149,  0.5059, -0.9854,  0.1825
 0.1767,  0.7104,  0.2093,  0.6452,  0.7590,  0.2832
-0.3580, -0.7541,  0.4426, -0.1193, -0.7465,  0.5657
-0.5996,  0.5766, -0.9758, -0.3933, -0.9572,  0.6800
 0.9950,  0.1641, -0.4132,  0.8579,  0.0142,  0.2003
-0.4717, -0.3894, -0.2567, -0.5111,  0.1691,  0.4266
 0.3917, -0.8561,  0.9422,  0.5061,  0.6123,  0.1212
-0.0366, -0.1087,  0.3449, -0.1025,  0.4086,  0.2475
 0.3633,  0.3943,  0.2372, -0.6980,  0.5216,  0.1925
-0.5325, -0.6466, -0.2178, -0.3589,  0.6310,  0.3568
 0.2271,  0.5200, -0.1447, -0.8011, -0.7699,  0.3128
 0.6415,  0.1993,  0.3777, -0.0178, -0.8237,  0.2181
-0.5298, -0.0768, -0.6028, -0.9490,  0.4588,  0.4356
 0.6870, -0.1431,  0.7294,  0.3141,  0.1621,  0.1632
-0.5985,  0.0591,  0.7889, -0.3900,  0.7419,  0.2945
 0.3661,  0.7984, -0.8486,  0.7572, -0.6183,  0.3449
 0.6995,  0.3342, -0.3113, -0.6972,  0.2707,  0.1712
 0.2565,  0.9126,  0.1798, -0.6043, -0.1413,  0.2893
-0.3265,  0.9839, -0.2395,  0.9854,  0.0376,  0.4770
 0.2690, -0.1722,  0.9818,  0.8599, -0.7015,  0.3954
-0.2102, -0.0768,  0.1219,  0.5607, -0.0256,  0.3949
 0.8216, -0.9555,  0.6422, -0.6231,  0.3715,  0.0801
-0.2896,  0.9484, -0.7545, -0.6249,  0.7789,  0.4370
-0.9985, -0.5448, -0.7092, -0.5931,  0.7926,  0.5402

Test data:

# synthetic_test_40.txt
#
 0.7462,  0.4006, -0.0590,  0.6543, -0.0083,  0.1935
 0.8495, -0.2260, -0.0142, -0.4911,  0.7699,  0.1078
-0.2335, -0.4049,  0.4352, -0.6183, -0.7636,  0.5088
 0.1810, -0.5142,  0.2465,  0.2767, -0.3449,  0.3136
-0.8650,  0.7611, -0.0801,  0.5277, -0.4922,  0.7140
-0.2358, -0.7466, -0.5115, -0.8413, -0.3943,  0.4533
 0.4834,  0.2300,  0.3448, -0.9832,  0.3568,  0.1360
-0.6502, -0.6300,  0.6885,  0.9652,  0.8275,  0.3046
-0.3053,  0.5604,  0.0929,  0.6329, -0.0325,  0.4756
-0.7995,  0.0740, -0.2680,  0.2086,  0.9176,  0.4565
-0.2144, -0.2141,  0.5813,  0.2902, -0.2122,  0.4119
-0.7278, -0.0987, -0.3312, -0.5641,  0.8515,  0.4438
 0.3793,  0.1976,  0.4933,  0.0839,  0.4011,  0.1905
-0.8568,  0.9573, -0.5272,  0.3212, -0.8207,  0.7415
-0.5785,  0.0056, -0.7901, -0.2223,  0.0760,  0.5551
 0.0735, -0.2188,  0.3925,  0.3570,  0.3746,  0.2191
 0.1230, -0.2838,  0.2262,  0.8715,  0.1938,  0.2878
 0.4792, -0.9248,  0.5295,  0.0366, -0.9894,  0.3149
-0.4456,  0.0697,  0.5359, -0.8938,  0.0981,  0.3879
 0.8629, -0.8505, -0.4464,  0.8385,  0.5300,  0.1769
 0.1995,  0.6659,  0.7921,  0.9454,  0.9970,  0.2330
-0.0249, -0.3066, -0.2927, -0.4923,  0.8220,  0.2437
 0.4513, -0.9481, -0.0770, -0.4374, -0.9421,  0.2879
-0.3405,  0.5931, -0.3507, -0.3842,  0.8562,  0.3987
 0.9538,  0.0471,  0.9039,  0.7760,  0.0361,  0.1706
-0.0887,  0.2104,  0.9808,  0.5478, -0.3314,  0.4128
-0.8220, -0.6302,  0.0537, -0.1658,  0.6013,  0.4306
-0.4123, -0.2880,  0.9074, -0.0461, -0.4435,  0.5144
 0.0060,  0.2867, -0.7775,  0.5161,  0.7039,  0.3599
-0.7968, -0.5484,  0.9426, -0.4308,  0.8148,  0.2979
 0.7811,  0.8450, -0.6877,  0.7594,  0.2640,  0.2362
-0.6802, -0.1113, -0.8325, -0.6694, -0.6056,  0.6544
 0.3821,  0.1476,  0.7466, -0.5107,  0.2592,  0.1648
 0.7265,  0.9683, -0.9803, -0.4943, -0.5523,  0.2454
-0.9049, -0.9797, -0.0196, -0.9090, -0.4433,  0.6447
-0.4607,  0.1811, -0.2389,  0.4050, -0.0078,  0.5229
 0.2664, -0.2932, -0.4259, -0.7336,  0.8742,  0.1834
-0.4507,  0.1029, -0.6294, -0.1158, -0.6294,  0.6081
 0.8948, -0.0124,  0.9278,  0.2899, -0.0314,  0.1534
-0.1323, -0.8813, -0.0146, -0.0697,  0.6135,  0.2386

Implementing Quadratic Regression with SGD Training Using C#

The goal of a machine learning regression problem is to predict a single numeric value. For example, you might want to predict an employee’s salary based on age, height, high school grade point average, and so on. There are approximately a dozen common regression techniques. The most basic technique is called linear regression.

Linear regression is simple, but it doesn’t work well with data that has a non-linear structure, or data that has interactions between predictor variables. Quadratic regression extends basic linear regression to handle such data.

I put together a demo, using C#. The demo data is synthetic and looks like:

-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
. . .

The first five values on each line are the x predictors. The last value on each line is the target y variable to predict. There are 200 training items and 40 test items. The data was generated by a neural network with random weights and biases.

The output of the demo program is:

Begin C# quadratic regression with SGD training

Loading synthetic train (200) and test (40) data
Done

First three train X:
 -0.1660  0.4406 -0.9998 -0.3953 -0.7065
  0.0776 -0.1616  0.3704 -0.5911  0.7562
 -0.9452  0.3409 -0.1654  0.1174 -0.7192

First three train y:
  0.4840
  0.1568
  0.8054

Creating quadratic regression model

Setting lrnRate = 0.001
Setting maxEpochs = 1000

Starting SGD training
epoch =     0  MSE =   0.0957
epoch =   200  MSE =   0.0003
epoch =   400  MSE =   0.0003
epoch =   600  MSE =   0.0003
epoch =   800  MSE =   0.0003
Done

Model base weights:
 -0.2630  0.0354 -0.0420  0.0341 -0.1124

Model quadratic weights:
  0.0655  0.0194  0.0051  0.0047  0.0243

Model interaction weights:
  0.0043  0.0249  0.0071  0.1081 -0.0012 -0.0093
  0.0362  0.0085 -0.0568  0.0016

Model bias/intercept:   0.3220

Evaluating model
Accuracy train (within 0.10) = 0.8850
Accuracy test (within 0.10) = 0.9250

MSE train = 0.0003
MSE test = 0.0005

Predicting for x =
  -0.1660   0.4406  -0.9998  -0.3953  -0.7065

Predicted y = 0.4843

End demo

The quadratic regression model accuracy is very good. A prediction is scored as accurate if it’s within 10% of the true target y value.

Quadratic regression is an extension of linear regression. Suppose, as in the demo data, each data item has five predictors (aka features), (x0, x1, x2, x3, x4). The prediction equation for basic linear regression is:

y' = (w0 * x0) + (w1 * x1) + (w2 * x2) +
     (w3 * x3) + (w4 * x4) + b

The wi are model weights (aka coefficients), and b is the model bias (aka intercept). The values of the weights and the bias must be determined by training, so that predicted y’ values are close to the known, correct y values in a set of training data.

The prediction equation for quadratic regression is:

y' = (w0 * x0) + (w1 * x1) + (w2 * x2) + 
     (w3 * x3) + (w4 * x4) + 

     (w5 * x0*x0) + (w6 * x1*x1) + (w7 * x2*x2) +
     (w8 * x3*x3) + (w9 * x4*x4) + 

     (w10 * x0*x1) + (w11 * x0*x2) + (w12 * x0*x3) +
     (w13 * x0*x4) + (w14 * x1*x2) + (w15 * x1*x3) +
     (w16 * x1*x4) + (w17 * x2*x3) + (w18 * x2*x4) + 
     (w19 * x3*x4)

     + b

The squared (aka “quadratic”) xi^2 terms handle non-linear structure. If there are n predictors, there are also n squared terms. The xi * xj terms between all possible pairs of original predictors handle interactions between predictors. If there are n predictors, there are (n * (n-1)) / 2 interaction terms.

Behind the scenes, the derived xi^2 squared terms and the derived xi*xj interaction terms are computed programmatically on the fly, rather than by explicitly creating an augmented static dataset.

A quadratic regression model is trained on the derived data. Even though the derived terms multiply base predictors together, the model is still linear in its weights.
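To make the derived-term bookkeeping concrete, below is a minimal Python/NumPy sketch (separate from the C# demo; the quad_expand() name is just for illustration) that expands one 5-element predictor vector into its base, squared, and pairwise interaction terms. With n = 5 there are 5 + 5 + 10 = 20 derived values, which matches the 5 base weights, 5 quadratic weights, and 10 interaction weights displayed by the demo, plus the single bias.

# quad_expand.py -- illustrative sketch only, not part of the C# demo
import numpy as np

def quad_expand(x):
  base = [xi for xi in x]          # the original x0 .. x4
  squares = [xi * xi for xi in x]  # the xi^2 terms
  inters = [x[i] * x[j]            # the pairwise xi * xj terms
    for i in range(len(x) - 1)
    for j in range(i + 1, len(x))]
  return np.array(base + squares + inters)

x = np.array([-0.1660, 0.4406, -0.9998, -0.3953, -0.7065])
z = quad_expand(x)
print(len(z))  # 20 = 5 base + 5 squared + 10 interaction terms

The C# demo Predict() method computes exactly these terms on the fly inside its three loops rather than building an expanded dataset.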

Quadratic regression is a subset of polynomial regression. For example, cubic regression includes xi^3 terms, quartic regression includes xi^4, and so on.

Quadratic regression is a superset of linear regression with two-way interactions. Linear regression with two-way interactions includes only the derived xi * xj terms, and omits the xi^2 terms.



I remember learning about quadratic regression when I was a college student over 50 years ago. Quadratic regression is an interesting part of the history of machine learning.

For some unknown reason, I’ve always been interested in the history of early “atmospheric diving suits”.

Left: The “Iron Man” suit was designed and built by C. Macduffee and T.T. Gray, and operated circa 1910. It was clunky but functional.

Center: This suit is one of several designed and built by the German company Neufeldt-Kuhnke, circa 1920.

Right: The “Tritonia” was developed by Joseph Peress in 1930. It dived to 404 feet in Loch Ness, a significant feat for the time.


Demo program. Replace “lt” (less than), “gt”, “lte”, “gte”, with Boolean operator symbols (my blog editor chokes on symbols).

using System;
using System.IO;
using System.Collections.Generic;

namespace QuadraticRegressionSGD
{
  internal class QuadraticRegressionProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin C# quadratic regression" +
        " with SGD training ");

      // 1. load data
      Console.WriteLine("\nLoading synthetic train" +
        " (200) and test (40) data");
      string trainFile =
        "..\\..\\..\\Data\\synthetic_train_200.txt";
      int[] colsX = new int[] { 0, 1, 2, 3, 4 };
      double[][] trainX =
        MatLoad(trainFile, colsX, ',', "#");
      double[] trainY =
        MatToVec(MatLoad(trainFile,
        new int[] { 5 }, ',', "#"));

      string testFile =
        "..\\..\\..\\Data\\synthetic_test_40.txt";
      double[][] testX =
        MatLoad(testFile, colsX, ',', "#");
      double[] testY =
        MatToVec(MatLoad(testFile,
        new int[] { 5 }, ',', "#"));
      Console.WriteLine("Done ");

      Console.WriteLine("\nFirst three train X: ");
      for (int i = 0; i "lt" 3; ++i)
        VecShow(trainX[i], 4, 8);

      Console.WriteLine("\nFirst three train y: ");
      for (int i = 0; i "lt" 3; ++i)
        Console.WriteLine(trainY[i].ToString("F4").
          PadLeft(8));

      // 2. create and train model
      Console.WriteLine("\nCreating quadratic regression" +
        " model ");
      QuadraticRegressor model = new QuadraticRegressor();

      double lrnRate = 0.001;
      int maxEpochs = 1000;
      Console.WriteLine("\nSetting lrnRate = " + 
        lrnRate.ToString("F3"));
      Console.WriteLine("Setting maxEpochs = " + 
        maxEpochs);

      Console.WriteLine("\nStarting SGD training ");
      model.Train(trainX, trainY, lrnRate, maxEpochs);
      Console.WriteLine("Done ");

      // 3. show model weights
      Console.WriteLine("\nModel base weights: ");
      int dim = trainX[0].Length;
      for (int i = 0; i "lt" dim; ++i)
        Console.Write(model.weights[i].
          ToString("F4").PadLeft(8));
      Console.WriteLine("");

      Console.WriteLine("\nModel quadratic weights: ");
      for (int i = dim; i "lt" dim + dim; ++i)
        Console.Write(model.weights[i].
          ToString("F4").PadLeft(8));
      Console.WriteLine("");

      Console.WriteLine("\nModel interaction weights: ");
      for (int i = dim + dim; i "lt" model.weights.Length; ++i)
      {
        Console.Write(model.weights[i].
          ToString("F4").PadLeft(8));
        if (i "gt" dim+dim && i % dim == 0)
          Console.WriteLine("");
      }
      Console.WriteLine("");

      Console.WriteLine("\nModel bias/intercept: " +
        model.bias.ToString("F4").PadLeft(8));

      // 4. evaluate model
      Console.WriteLine("\nEvaluating model ");
      double accTrain = model.Accuracy(trainX, trainY, 0.10);
      Console.WriteLine("Accuracy train (within 0.10) = " +
        accTrain.ToString("F4"));
      double accTest = model.Accuracy(testX, testY, 0.10);
      Console.WriteLine("Accuracy test (within 0.10) = " +
        accTest.ToString("F4"));

      double mseTrain = model.MSE(trainX, trainY);
      Console.WriteLine("\nMSE train = " +
        mseTrain.ToString("F4"));
      double mseTest = model.MSE(testX, testY);
      Console.WriteLine("MSE test = " +
        mseTest.ToString("F4"));

      // 5. use model
      double[] x = trainX[0];
      Console.WriteLine("\nPredicting for x = ");
      VecShow(x, 4, 9);
      double predY = model.Predict(x);
      Console.WriteLine("\nPredicted y = " +
        predY.ToString("F4"));

      // 6. TODO: implement model Save() and Load()

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main

    // ------------------------------------------------------
    // helpers for Main()
    // ------------------------------------------------------

    static double[][] MatLoad(string fn, int[] usecols,
      char sep, string comment)
    {
      List"lt"double[]"gt" result =
        new List"lt"double[]"gt"();
      string line = "";
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        string[] tokens = line.Split(sep);
        List"lt"double"gt" lst = new List"lt"double"gt"();
        for (int j = 0; j "lt" usecols.Length; ++j)
          lst.Add(double.Parse(tokens[usecols[j]]));
        double[] row = lst.ToArray();
        result.Add(row);
      }
      sr.Close(); ifs.Close();
      return result.ToArray();
    }

    static double[] MatToVec(double[][] mat)
    {
      int nRows = mat.Length;
      int nCols = mat[0].Length;
      double[] result = new double[nRows * nCols];
      int k = 0;
      for (int i = 0; i "lt" nRows; ++i)
        for (int j = 0; j "lt" nCols; ++j)
          result[k++] = mat[i][j];
      return result;
    }

    static void VecShow(double[] vec, int dec, int wid)
    {
      for (int i = 0; i "lt" vec.Length; ++i)
        Console.Write(vec[i].ToString("F" + dec).
          PadLeft(wid));
      Console.WriteLine("");
    }
  } // class Program

  // ========================================================

  public class QuadraticRegressor
  {
    public double[] weights;  // regular, quad, interactions
    public double bias;
    private Random rnd;

    public QuadraticRegressor(int seed = 0)
    {
      this.weights = new double[0];  // empty, but not null
      this.bias = 0; // dummy value
      this.rnd = new Random(seed);  // shuffle order
    }

    // ------------------------------------------------------

    public double Predict(double[] x)
    {
      int dim = x.Length;
      double result = 0.0;

      int p = 0; // points into this.weights
      for (int i = 0; i "lt" dim; ++i)   // regular
        result += x[i] * this.weights[p++];

      for (int i = 0; i "lt" dim; ++i)  // quadratic
        result += x[i] * x[i] * this.weights[p++];

      for (int i = 0; i "lt" dim-1; ++i)  // interactions
        for (int j = i+1; j "lt" dim; ++j)
          result += x[i] * x[j] * this.weights[p++]; 
 
      result += this.bias;
      return result;
    }

    // ------------------------------------------------------

    public void Train(double[][] trainX, double[] trainY,
      double lrnRate, int maxEpochs)
    {
      int nRows = trainX.Length;
      int dim = trainX[0].Length;
      int nInteractions = (dim * (dim - 1)) / 2;
      this.weights = new double[dim + dim + nInteractions];
 
      double low = -0.01; double hi = 0.01;
      for (int i = 0; i "lt" dim; ++i)
        this.weights[i] = (hi - low) *
          this.rnd.NextDouble() + low;

      this.bias = (hi - low) *
          this.rnd.NextDouble() + low;

      int[] indices = new int[nRows];
      for (int i = 0; i "lt" nRows; ++i)
        indices[i] = i;
      
      for (int epoch = 0; epoch "lt" maxEpochs; ++epoch)
      {
        // shuffle order of train data
        int n = indices.Length;
        for (int i = 0; i "lt" n; ++i)
        {
          int ri = this.rnd.Next(i, n);
          int tmp = indices[i];
          indices[i] = indices[ri];
          indices[ri] = tmp;
        }

        for (int i = 0; i "lt" nRows; ++i)
        {
          int ii = indices[i];
          double[] x = trainX[ii];
          double predY = this.Predict(x);
          double actualY = trainY[ii];

          int p = 0; // points into weights
          // update regular weights
          for (int j = 0; j "lt" dim; ++j)
            this.weights[p++] -= lrnRate *
              (predY - actualY) * x[j];

          // update quadratic weights
          for (int j = 0; j "lt" dim; ++j)
            this.weights[p++] -= lrnRate *
              (predY - actualY) * x[j] * x[j];

          // update interaction weights
          for (int j = 0; j "lt" dim - 1; ++j)
            for (int k = j + 1; k "lt" dim; ++k)
               this.weights[p++] -= lrnRate *
                (predY - actualY) * x[j] * x[k];
 
          // update the bias
          this.bias -= lrnRate * (predY - actualY) * 1.0;
        }
        if (epoch % (int)(maxEpochs / 5) == 0)
        {
          double mse = this.MSE(trainX, trainY);
          string s = "";
          s += "epoch = " + epoch.ToString().PadLeft(5);
          s += "  MSE = " + mse.ToString("F4").PadLeft(8);
          Console.WriteLine(s);
        }
      }

    } // Train()

    // ------------------------------------------------------

    public double Accuracy(double[][] dataX, double[] dataY,
      double pctClose)
    {
      int numCorrect = 0; int numWrong = 0;
      for (int i = 0; i "lt" dataX.Length; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        if (Math.Abs(predY - actualY) "lt"
          Math.Abs(pctClose * actualY))
          ++numCorrect;
        else
          ++numWrong;
      }
      return (numCorrect * 1.0) / (numWrong + numCorrect);
    }

    public double MSE(double[][] dataX,
      double[] dataY)
    {
      int n = dataX.Length;
      double sum = 0.0;
      for (int i = 0; i "lt" n; ++i)
      {
        double actualY = dataY[i];
        double predY = this.Predict(dataX[i]);
        sum += (actualY - predY) * (actualY - predY);
      }
      return sum / n;
    }
   
  } // class QuadraticRegressor

  // ========================================================

} // ns

Training data:

# synthetic_train_200.txt
#
-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
-0.8299, -0.9219, -0.6603,  0.7563, -0.8033,  0.7955
 0.0663,  0.3838, -0.3690,  0.3730,  0.6693,  0.3206
-0.9634,  0.5003,  0.9777,  0.4963, -0.4391,  0.7377
-0.1042,  0.8172, -0.4128, -0.4244, -0.7399,  0.4801
-0.9613,  0.3577, -0.5767, -0.4689, -0.0169,  0.6861
-0.7065,  0.1786,  0.3995, -0.7953, -0.1719,  0.5569
 0.3888, -0.1716, -0.9001,  0.0718,  0.3276,  0.2500
 0.1731,  0.8068, -0.7251, -0.7214,  0.6148,  0.3297
-0.2046, -0.6693,  0.8550, -0.3045,  0.5016,  0.2129
 0.2473,  0.5019, -0.3022, -0.4601,  0.7918,  0.2613
-0.1438,  0.9297,  0.3269,  0.2434, -0.7705,  0.5171
 0.1568, -0.1837, -0.5259,  0.8068,  0.1474,  0.3307
-0.9943,  0.2343, -0.3467,  0.0541,  0.7719,  0.5581
 0.2467, -0.9684,  0.8589,  0.3818,  0.9946,  0.1092
-0.6553, -0.7257,  0.8652,  0.3936, -0.8680,  0.7018
 0.8460,  0.4230, -0.7515, -0.9602, -0.9476,  0.1996
-0.9434, -0.5076,  0.7201,  0.0777,  0.1056,  0.5664
 0.9392,  0.1221, -0.9627,  0.6013, -0.5341,  0.1533
 0.6142, -0.2243,  0.7271,  0.4942,  0.1125,  0.1661
 0.4260,  0.1194, -0.9749, -0.8561,  0.9346,  0.2230
 0.1362, -0.5934, -0.4953,  0.4877, -0.6091,  0.3810
 0.6937, -0.5203, -0.0125,  0.2399,  0.6580,  0.1460
-0.6864, -0.9628, -0.8600, -0.0273,  0.2127,  0.5387
 0.9772,  0.1595, -0.2397,  0.1019,  0.4907,  0.1611
 0.3385, -0.4702, -0.8673, -0.2598,  0.2594,  0.2270
-0.8669, -0.4794,  0.6095, -0.6131,  0.2789,  0.4700
 0.0493,  0.8496, -0.4734, -0.8681,  0.4701,  0.3516
 0.8639, -0.9721, -0.5313,  0.2336,  0.8980,  0.1412
 0.9004,  0.1133,  0.8312,  0.2831, -0.2200,  0.1782
 0.0991,  0.8524,  0.8375, -0.2102,  0.9265,  0.2150
-0.6521, -0.7473, -0.7298,  0.0113, -0.9570,  0.7422
 0.6190, -0.3105,  0.8802,  0.1640,  0.7577,  0.1056
 0.6895,  0.8108, -0.0802,  0.0927,  0.5972,  0.2214
 0.1982, -0.9689,  0.1870, -0.1326,  0.6147,  0.1310
-0.3695,  0.7858,  0.1557, -0.6320,  0.5759,  0.3773
-0.1596,  0.3581,  0.8372, -0.9992,  0.9535,  0.2071
-0.2468,  0.9476,  0.2094,  0.6577,  0.1494,  0.4132
 0.1737,  0.5000,  0.7166,  0.5102,  0.3961,  0.2611
 0.7290, -0.3546,  0.3416, -0.0983, -0.2358,  0.1332
-0.3652,  0.2438, -0.1395,  0.9476,  0.3556,  0.4170
-0.6029, -0.1466, -0.3133,  0.5953,  0.7600,  0.4334
-0.4596, -0.4953,  0.7098,  0.0554,  0.6043,  0.2775
 0.1450,  0.4663,  0.0380,  0.5418,  0.1377,  0.2931
-0.8636, -0.2442, -0.8407,  0.9656, -0.6368,  0.7429
 0.6237,  0.7499,  0.3768,  0.1390, -0.6781,  0.2185
-0.5499,  0.1850, -0.3755,  0.8326,  0.8193,  0.4399
-0.4858, -0.7782, -0.6141, -0.0008,  0.4572,  0.4197
 0.7033, -0.1683,  0.2334, -0.5327, -0.7961,  0.1776
 0.0317, -0.0457, -0.6947,  0.2436,  0.0880,  0.3345
 0.5031, -0.5559,  0.0387,  0.5706, -0.9553,  0.3107
-0.3513,  0.7458,  0.6894,  0.0769,  0.7332,  0.3170
 0.2205,  0.5992, -0.9309,  0.5405,  0.4635,  0.3532
-0.4806, -0.4859,  0.2646, -0.3094,  0.5932,  0.3202
 0.9809, -0.3995, -0.7140,  0.8026,  0.0831,  0.1600
 0.9495,  0.2732,  0.9878,  0.0921,  0.0529,  0.1289
-0.9476, -0.6792,  0.4913, -0.9392, -0.2669,  0.5966
 0.7247,  0.3854,  0.3819, -0.6227, -0.1162,  0.1550
-0.5922, -0.5045, -0.4757,  0.5003, -0.0860,  0.5863
-0.8861,  0.0170, -0.5761,  0.5972, -0.4053,  0.7301
 0.6877, -0.2380,  0.4997,  0.0223,  0.0819,  0.1404
 0.9189,  0.6079, -0.9354,  0.4188, -0.0700,  0.1907
-0.1428, -0.7820,  0.2676,  0.6059,  0.3936,  0.2790
 0.5324, -0.3151,  0.6917, -0.1425,  0.6480,  0.1071
-0.8432, -0.9633, -0.8666, -0.0828, -0.7733,  0.7784
-0.9444,  0.5097, -0.2103,  0.4939, -0.0952,  0.6787
-0.0520,  0.6063, -0.1952,  0.8094, -0.9259,  0.4836
 0.5477, -0.7487,  0.2370, -0.9793,  0.0773,  0.1241
 0.2450,  0.8116,  0.9799,  0.4222,  0.4636,  0.2355
 0.8186, -0.1983, -0.5003, -0.6531, -0.7611,  0.1511
-0.4714,  0.6382, -0.3788,  0.9648, -0.4667,  0.5950
 0.0673, -0.3711,  0.8215, -0.2669, -0.1328,  0.2677
-0.9381,  0.4338,  0.7820, -0.9454,  0.0441,  0.5518
-0.3480,  0.7190,  0.1170,  0.3805, -0.0943,  0.4724
-0.9813,  0.1535, -0.3771,  0.0345,  0.8328,  0.5438
-0.1471, -0.5052, -0.2574,  0.8637,  0.8737,  0.3042
-0.5454, -0.3712, -0.6505,  0.2142, -0.1728,  0.5783
 0.6327, -0.6297,  0.4038, -0.5193,  0.1484,  0.1153
-0.5424,  0.3282, -0.0055,  0.0380, -0.6506,  0.6613
 0.1414,  0.9935,  0.6337,  0.1887,  0.9520,  0.2540
-0.9351, -0.8128, -0.8693, -0.0965, -0.2491,  0.7353
 0.9507, -0.6640,  0.9456,  0.5349,  0.6485,  0.1059
-0.0462, -0.9737, -0.2940, -0.0159,  0.4602,  0.2606
-0.0627, -0.0852, -0.7247, -0.9782,  0.5166,  0.2977
 0.0478,  0.5098, -0.0723, -0.7504, -0.3750,  0.3335
 0.0090,  0.3477,  0.5403, -0.7393, -0.9542,  0.4415
-0.9748,  0.3449,  0.3736, -0.1015,  0.8296,  0.4358
 0.2887, -0.9895, -0.0311,  0.7186,  0.6608,  0.2057
 0.1570, -0.4518,  0.1211,  0.3435, -0.2951,  0.3244
 0.7117, -0.6099,  0.4946, -0.4208,  0.5476,  0.1096
-0.2929, -0.5726,  0.5346, -0.3827,  0.4665,  0.2465
 0.4889, -0.5572, -0.5718, -0.6021, -0.7150,  0.2163
-0.7782,  0.3491,  0.5996, -0.8389, -0.5366,  0.6516
-0.5847,  0.8347,  0.4226,  0.1078, -0.3910,  0.6134
 0.8469,  0.4121, -0.0439, -0.7476,  0.9521,  0.1571
-0.6803, -0.5948, -0.1376, -0.1916, -0.7065,  0.7156
 0.2878,  0.5086, -0.5785,  0.2019,  0.4979,  0.2980
 0.2764,  0.1943, -0.4090,  0.4632,  0.8906,  0.2960
-0.8877,  0.6705, -0.6155, -0.2098, -0.3998,  0.7107
-0.8398,  0.8093, -0.2597,  0.0614, -0.0118,  0.6502
-0.8476,  0.0158, -0.4769, -0.2859, -0.7839,  0.7715
 0.5751, -0.7868,  0.9714, -0.6457,  0.1448,  0.1175
 0.4802, -0.7001,  0.1022, -0.5668,  0.5184,  0.1090
 0.4458, -0.6469,  0.7239, -0.9604,  0.7205,  0.0779
 0.5175,  0.4339,  0.9747, -0.4438, -0.9924,  0.2879
 0.8678,  0.7158,  0.4577,  0.0334,  0.4139,  0.1678
 0.5406,  0.5012,  0.2264, -0.1963,  0.3946,  0.2088
-0.9938,  0.5498,  0.7928, -0.5214, -0.7585,  0.7687
 0.7661,  0.0863, -0.4266, -0.7233, -0.4197,  0.1466
 0.2277, -0.3517, -0.0853, -0.1118,  0.6563,  0.1767
 0.3499, -0.5570, -0.0655, -0.3705,  0.2537,  0.1632
 0.7547, -0.1046,  0.5689, -0.0861,  0.3125,  0.1257
 0.8186,  0.2110,  0.5335,  0.0094, -0.0039,  0.1391
 0.6858, -0.8644,  0.1465,  0.8855,  0.0357,  0.1845
-0.4967,  0.4015,  0.0805,  0.8977,  0.2487,  0.4663
 0.6760, -0.9841,  0.9787, -0.8446, -0.3557,  0.1509
-0.1203, -0.4885,  0.6054, -0.0443, -0.7313,  0.4854
 0.8557,  0.7919, -0.0169,  0.7134, -0.1628,  0.2002
 0.0115, -0.6209,  0.9300, -0.4116, -0.7931,  0.4052
-0.7114, -0.9718,  0.4319,  0.1290,  0.5892,  0.3661
 0.3915,  0.5557, -0.1870,  0.2955, -0.6404,  0.2954
-0.3564, -0.6548, -0.1827, -0.5172, -0.1862,  0.4622
 0.2392, -0.4959,  0.5857, -0.1341, -0.2850,  0.2470
-0.3394,  0.3947, -0.4627,  0.6166, -0.4094,  0.5325
 0.7107,  0.7768, -0.6312,  0.1707,  0.7964,  0.2757
-0.1078,  0.8437, -0.4420,  0.2177,  0.3649,  0.4028
-0.3139,  0.5595, -0.6505, -0.3161, -0.7108,  0.5546
 0.4335,  0.3986,  0.3770, -0.4932,  0.3847,  0.1810
-0.2562, -0.2894, -0.8847,  0.2633,  0.4146,  0.4036
 0.2272,  0.2966, -0.6601, -0.7011,  0.0284,  0.2778
-0.0743, -0.1421, -0.0054, -0.6770, -0.3151,  0.3597
-0.4762,  0.6891,  0.6007, -0.1467,  0.2140,  0.4266
-0.4061,  0.7193,  0.3432,  0.2669, -0.7505,  0.6147
-0.0588,  0.9731,  0.8966,  0.2902, -0.6966,  0.4955
-0.0627, -0.1439,  0.1985,  0.6999,  0.5022,  0.3077
 0.1587,  0.8494, -0.8705,  0.9827, -0.8940,  0.4263
-0.7850,  0.2473, -0.9040, -0.4308, -0.8779,  0.7199
 0.4070,  0.3369, -0.2428, -0.6236,  0.4940,  0.2215
-0.0242,  0.0513, -0.9430,  0.2885, -0.2987,  0.3947
-0.5416, -0.1322, -0.2351, -0.0604,  0.9590,  0.3683
 0.1055,  0.7783, -0.2901, -0.5090,  0.8220,  0.2984
-0.9129,  0.9015,  0.1128, -0.2473,  0.9901,  0.4776
-0.9378,  0.1424, -0.6391,  0.2619,  0.9618,  0.5368
 0.7498, -0.0963,  0.4169,  0.5549, -0.0103,  0.1614
-0.2612, -0.7156,  0.4538, -0.0460, -0.1022,  0.3717
 0.7720,  0.0552, -0.1818, -0.4622, -0.8560,  0.1685
-0.4177,  0.0070,  0.9319, -0.7812,  0.3461,  0.3052
-0.0001,  0.5542, -0.7128, -0.8336, -0.2016,  0.3803
 0.5356, -0.4194, -0.5662, -0.9666, -0.2027,  0.1776
-0.2378,  0.3187, -0.8582, -0.6948, -0.9668,  0.5474
-0.1947, -0.3579,  0.1158,  0.9869,  0.6690,  0.2992
 0.3992,  0.8365, -0.9205, -0.8593, -0.0520,  0.3154
-0.0209,  0.0793,  0.7905, -0.1067,  0.7541,  0.1864
-0.4928, -0.4524, -0.3433,  0.0951, -0.5597,  0.6261
-0.8118,  0.7404, -0.5263, -0.2280,  0.1431,  0.6349
 0.0516, -0.8480,  0.7483,  0.9023,  0.6250,  0.1959
-0.3212,  0.1093,  0.9488, -0.3766,  0.3376,  0.2735
-0.3481,  0.5490, -0.3484,  0.7797,  0.5034,  0.4379
-0.5785, -0.9170, -0.3563, -0.9258,  0.3877,  0.4121
 0.3407, -0.1391,  0.5356,  0.0720, -0.9203,  0.3458
-0.3287, -0.8954,  0.2102,  0.0241,  0.2349,  0.3247
-0.1353,  0.6954, -0.0919, -0.9692,  0.7461,  0.3338
 0.9036, -0.8982, -0.5299, -0.8733, -0.1567,  0.1187
 0.7277, -0.8368, -0.0538, -0.7489,  0.5458,  0.0830
 0.9049,  0.8878,  0.2279,  0.9470, -0.3103,  0.2194
 0.7957, -0.1308, -0.5284,  0.8817,  0.3684,  0.2172
 0.4647, -0.4931,  0.2010,  0.6292, -0.8918,  0.3371
-0.7390,  0.6849,  0.2367,  0.0626, -0.5034,  0.7039
-0.1567, -0.8711,  0.7940, -0.5932,  0.6525,  0.1710
 0.7635, -0.0265,  0.1969,  0.0545,  0.2496,  0.1445
 0.7675,  0.1354, -0.7698, -0.5460,  0.1920,  0.1728
-0.5211, -0.7372, -0.6763,  0.6897,  0.2044,  0.5217
 0.1913,  0.1980,  0.2314, -0.8816,  0.5006,  0.1998
 0.8964,  0.0694, -0.6149,  0.5059, -0.9854,  0.1825
 0.1767,  0.7104,  0.2093,  0.6452,  0.7590,  0.2832
-0.3580, -0.7541,  0.4426, -0.1193, -0.7465,  0.5657
-0.5996,  0.5766, -0.9758, -0.3933, -0.9572,  0.6800
 0.9950,  0.1641, -0.4132,  0.8579,  0.0142,  0.2003
-0.4717, -0.3894, -0.2567, -0.5111,  0.1691,  0.4266
 0.3917, -0.8561,  0.9422,  0.5061,  0.6123,  0.1212
-0.0366, -0.1087,  0.3449, -0.1025,  0.4086,  0.2475
 0.3633,  0.3943,  0.2372, -0.6980,  0.5216,  0.1925
-0.5325, -0.6466, -0.2178, -0.3589,  0.6310,  0.3568
 0.2271,  0.5200, -0.1447, -0.8011, -0.7699,  0.3128
 0.6415,  0.1993,  0.3777, -0.0178, -0.8237,  0.2181
-0.5298, -0.0768, -0.6028, -0.9490,  0.4588,  0.4356
 0.6870, -0.1431,  0.7294,  0.3141,  0.1621,  0.1632
-0.5985,  0.0591,  0.7889, -0.3900,  0.7419,  0.2945
 0.3661,  0.7984, -0.8486,  0.7572, -0.6183,  0.3449
 0.6995,  0.3342, -0.3113, -0.6972,  0.2707,  0.1712
 0.2565,  0.9126,  0.1798, -0.6043, -0.1413,  0.2893
-0.3265,  0.9839, -0.2395,  0.9854,  0.0376,  0.4770
 0.2690, -0.1722,  0.9818,  0.8599, -0.7015,  0.3954
-0.2102, -0.0768,  0.1219,  0.5607, -0.0256,  0.3949
 0.8216, -0.9555,  0.6422, -0.6231,  0.3715,  0.0801
-0.2896,  0.9484, -0.7545, -0.6249,  0.7789,  0.4370
-0.9985, -0.5448, -0.7092, -0.5931,  0.7926,  0.5402

Test data:

# synthetic_test_40.txt
#
 0.7462,  0.4006, -0.0590,  0.6543, -0.0083,  0.1935
 0.8495, -0.2260, -0.0142, -0.4911,  0.7699,  0.1078
-0.2335, -0.4049,  0.4352, -0.6183, -0.7636,  0.5088
 0.1810, -0.5142,  0.2465,  0.2767, -0.3449,  0.3136
-0.8650,  0.7611, -0.0801,  0.5277, -0.4922,  0.7140
-0.2358, -0.7466, -0.5115, -0.8413, -0.3943,  0.4533
 0.4834,  0.2300,  0.3448, -0.9832,  0.3568,  0.1360
-0.6502, -0.6300,  0.6885,  0.9652,  0.8275,  0.3046
-0.3053,  0.5604,  0.0929,  0.6329, -0.0325,  0.4756
-0.7995,  0.0740, -0.2680,  0.2086,  0.9176,  0.4565
-0.2144, -0.2141,  0.5813,  0.2902, -0.2122,  0.4119
-0.7278, -0.0987, -0.3312, -0.5641,  0.8515,  0.4438
 0.3793,  0.1976,  0.4933,  0.0839,  0.4011,  0.1905
-0.8568,  0.9573, -0.5272,  0.3212, -0.8207,  0.7415
-0.5785,  0.0056, -0.7901, -0.2223,  0.0760,  0.5551
 0.0735, -0.2188,  0.3925,  0.3570,  0.3746,  0.2191
 0.1230, -0.2838,  0.2262,  0.8715,  0.1938,  0.2878
 0.4792, -0.9248,  0.5295,  0.0366, -0.9894,  0.3149
-0.4456,  0.0697,  0.5359, -0.8938,  0.0981,  0.3879
 0.8629, -0.8505, -0.4464,  0.8385,  0.5300,  0.1769
 0.1995,  0.6659,  0.7921,  0.9454,  0.9970,  0.2330
-0.0249, -0.3066, -0.2927, -0.4923,  0.8220,  0.2437
 0.4513, -0.9481, -0.0770, -0.4374, -0.9421,  0.2879
-0.3405,  0.5931, -0.3507, -0.3842,  0.8562,  0.3987
 0.9538,  0.0471,  0.9039,  0.7760,  0.0361,  0.1706
-0.0887,  0.2104,  0.9808,  0.5478, -0.3314,  0.4128
-0.8220, -0.6302,  0.0537, -0.1658,  0.6013,  0.4306
-0.4123, -0.2880,  0.9074, -0.0461, -0.4435,  0.5144
 0.0060,  0.2867, -0.7775,  0.5161,  0.7039,  0.3599
-0.7968, -0.5484,  0.9426, -0.4308,  0.8148,  0.2979
 0.7811,  0.8450, -0.6877,  0.7594,  0.2640,  0.2362
-0.6802, -0.1113, -0.8325, -0.6694, -0.6056,  0.6544
 0.3821,  0.1476,  0.7466, -0.5107,  0.2592,  0.1648
 0.7265,  0.9683, -0.9803, -0.4943, -0.5523,  0.2454
-0.9049, -0.9797, -0.0196, -0.9090, -0.4433,  0.6447
-0.4607,  0.1811, -0.2389,  0.4050, -0.0078,  0.5229
 0.2664, -0.2932, -0.4259, -0.7336,  0.8742,  0.1834
-0.4507,  0.1029, -0.6294, -0.1158, -0.6294,  0.6081
 0.8948, -0.0124,  0.9278,  0.2899, -0.0314,  0.1534
-0.1323, -0.8813, -0.0146, -0.0697,  0.6135,  0.2386

Bagging Tree Regression from Scratch Using Python

Naive decision tree regression prediction models usually overfit the training data. The model is accurate on the training data, but has poor accuracy and MSE on new, previously unseen data.

One of several ways to deal with decision tree overfitting is to create a collection of trees, training each tree on a subset of the full training data. Then, to make a prediction, compute the average of the predictions of all the trees. Simple.

There are two closely related tree ensemble techniques: bagging (“bootstrap aggregation”) tree regression, and random forest regression. In bagging tree regression, training looks like:

create an empty collection of decision trees
loop each number of trees times
  create a rows-subset of training data
  train curr tree using subset
  add trained tree to collection of trees
end loop

The rows-subset can have the same number of rows as the source training data, or fewer rows. When rows are randomly selected, the selection is done “with replacement” so a specific row might be included in the subset more than once, and some rows might not be included at all. The training data subset uses all columns.
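As a quick illustration of the “with replacement” behavior (this snippet is separate from the demo code), sampling 200 row indices from 200 rows with replacement typically yields only about 63 percent distinct rows; the duplicated rows appear more than once in the subset, and the unchosen rows are simply not seen by that particular tree.

import numpy as np

rnd = np.random.RandomState(0)
n_rows = 200
# bootstrap sample: row indices drawn with replacement
rnd_rows = rnd.choice(n_rows, size=n_rows, replace=True)
n_distinct = len(np.unique(rnd_rows))
print(n_distinct)           # typically around 0.63 * 200 = 126
print(n_rows - n_distinct)  # rows not used by this tree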

In random forest regression, during training, when each split is computed, a different columns-subset of the rows-subset of the training data is used:

create an empty collection of decision trees
loop each number of trees times
  create a rows-subset of training data
  train curr tree using rows-subset, but
    with random columns used for each training split
  add trained tree to collection of trees
end loop

In other words, bagging and random forest are the same except that random forest uses only some of the columns of training data when finding split values, while bagging uses all the columns. Put another way, bagging regression is a type of random forest regression.
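To make the difference concrete, here is a small sketch (separate from the demo program; the variable names are just for illustration) of how a columns-subset could be drawn for one split. The MyDecisionTreeRegressor class in the demo code below does essentially this in its best_split() method via the n_split_cols parameter: leaving n_split_cols at -1 uses all columns (bagging behavior), while setting it smaller than the number of columns gives random forest behavior.

import numpy as np

rnd = np.random.RandomState(0)
n_cols = 5         # total number of predictor columns
n_split_cols = 3   # columns examined at one split (random forest style)

cols = np.arange(n_cols)
rnd.shuffle(cols)               # random column order
rf_cols = cols[0:n_split_cols]  # random forest: only some columns
bag_cols = np.arange(n_cols)    # bagging: all columns at every split
print(rf_cols)
print(bag_cols)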

It turns out that bagging tree regression was invented first (Breiman, 1996). Breiman later realized that random forests (2001) were better, but by then it was too late to drop the now-redundant “bagging tree” terminology.

Here’s my demo code method that trains a bagging tree regressor:

  def fit(self, X, y):
    for i in range(self.n_trees):
      curr_tree = \
        MyDecisionTreeRegressor(max_depth=self.max_depth,
          min_samples=self.min_samples, seed=0)

      # create random rows-subset of training data
      rnd_rows = self.rnd.choice(self.n_rows, 
        size=(self.n_rows), replace=True)
      subset_X = X[rnd_rows,:]
      subset_y = y[rnd_rows]

      # train tree on subset and add to list
      curr_tree.fit(subset_X, subset_y)
      self.trees.append(curr_tree)

For my demo, I used a set of synthetic data that I generated using a neural network with random weights and biases. The data looks like:

-0.1660,  0.4406, -0.9998, -0.3953, -0.7065, 0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562, 0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192, 0.8054
. . .

The first five values on each line are the predictors. The sixth value is the target to predict. All predictor values are between -1.0 and 1.0. Normalizing the predictor values is not necessary but is helpful when using the data with other regression techniques that require normalization (such as k-nearest neighbors regression). There are 200 items in the training data and 40 items in the test data.

The output of the from-scratch bagging tree regression demo program is:

Begin bagging tree regression scratch Python

Loading synthetic train (200), test (40) data
Done

First three X predictors:
[[-0.1660  0.4406 -0.9998 -0.3953 -0.7065]
 [ 0.0776 -0.1616  0.3704 -0.5911  0.7562]
 [-0.9452  0.3409 -0.1654  0.1174 -0.7192]]

First three y targets:
0.4840
0.1568
0.8054

Setting num_trees = 10
Setting num_rows = 200
Setting max_depth = 6
Setting min_samples = 2

Creating and training bagging tree model
Done

Accuracy train (within 0.10): 0.7950
Accuracy test (within 0.10): 0.5250

MSE train: 0.0005
MSE test: 0.0027

End bagging tree regression scratch Python demo

====================

Using scikit BaggingRegressor:

Setting n_estimators = 10

Creating and training scikit bagging tree model
Done

Accuracy train (within 0.10): 0.7850
Accuracy test (within 0.10): 0.6250

MSE train: 0.0005
MSE test: 0.0013

I validated my demo by running the data through the scikit-learn Python language library BaggingRegressor module. The results were essentially the same. But for both systems, the prediction model was overfitted. The effectiveness of bagging tree regression depends on the data — sometimes bagging tree works well, sometimes not.

My demo works correctly, but there are several things in the MyDecisionTreeRegressor class that I’m not at all happy with. I’ll revamp this code when I get the chance. . .

Good fun.



I’ve never really liked the term “bagging” as a shortcut for “bootstrap aggregation”. When I think of bagging, I usually think about the time-honored tradition of sports fans who are dissatisfied with their team wearing bags over their heads. Left: This Detroit Lions fan has a nice knitted cap accessory to add some style to his bag. Center: This Cleveland Browns fan appears to be doubly disappointed. Right: This New York Jets fan did not do a very good job with his bag head design.


Demo program. Replace “lt” (less than), “gt”, “lte”, “gte” with Boolean operator symbols. (My blog editor chokes on symbols).

# bagging_tree_regression_scratch.py

# each tree is trained on a subset of the data, with some
# rows possibly duplicated, and some rows possibly not used.
# subset uses all columns.

import numpy as np

# ===========================================================

class BaggingTreeRegressor:
  def __init__(self, n_trees, n_rows, max_depth=3,
    min_samples=2, seed=1):
    self.n_trees = n_trees
    self.n_rows = n_rows
    self.max_depth = max_depth
    self.min_samples = min_samples
    self.trees = []
    self.rnd = np.random.RandomState(seed)

  def fit(self, X, y):
    for i in range(self.n_trees):
      curr_tree = \
        MyDecisionTreeRegressor(max_depth=self.max_depth,
          min_samples=self.min_samples, seed=0)

      # create random rows-subset of training data
      rnd_rows = self.rnd.choice(self.n_rows, 
        size=(self.n_rows), replace=True)
      subset_X = X[rnd_rows,:]
      subset_y = y[rnd_rows]

      # train tree on subset and add to list
      curr_tree.fit(subset_X, subset_y)
      self.trees.append(curr_tree)

  def predict_one(self, x):
    sum = 0.0
    for i in range(self.n_trees):
      pred_y = self.trees[i].predict_one(x)
      sum += pred_y
    return sum / self.n_trees

  def predict(self, X):
    result = np.zeros(len(X), dtype=np.float64)
    for i in range(len(X)):
      result[i] = self.predict_one(X[i])
    return result    

# ===========================================================

# ===========================================================

class MyDecisionTreeRegressor:  # avoid scikit name collision
  # if max_depth = n, tree has at most 2^(n+1) - 1 nodes.

  def __init__(self, max_depth=3, min_samples=2,
    n_split_cols=-1, seed=0):
    self.max_depth = max_depth
    self.min_samples = min_samples # aka min_samples_split
    self.n_split_cols = n_split_cols  # mostly random forest
    self.root = None
    self.rnd = np.random.RandomState(seed) # split col order

  # ===============================================

  class Node:
    def __init__(self, id=0, col_idx=-1, thresh=0.0,
        left=None, right=None, value=0.0, is_leaf=False):
      self.id = id  # useful for debugging
      self.col_idx = col_idx
      self.thresh = thresh
      self.left = left
      self.right = right
      self.value = value
      self.is_leaf = is_leaf  # False for an in-node

  # ===============================================

  def best_split(self, X, y):
    best_col_idx = -1  # indicates a bad split
    best_thresh = 0.0
    best_mse = np.inf  # smaller is better
    n_rows, n_cols = X.shape

    rnd_cols = np.arange(n_cols)
    self.rnd.shuffle(rnd_cols)
    if self.n_split_cols != -1:  # just use some cols
      rnd_cols = rnd_cols[0:self.n_split_cols]

    for j in range(len(rnd_cols)):
      col_idx = rnd_cols[j]
      examined_threshs = set()
      for i in range(n_rows):
        thresh = X[i][col_idx]  # candidate threshold value

        if thresh in examined_threshs:
          continue
        examined_threshs.add(thresh)

        # get rows where x is lte, gt thresh
        left_idxs = np.where(X[:,col_idx] "lte" thresh)[0]
        right_idxs = np.where(X[:,col_idx] "gt" thresh)[0]

        # check proposed split
        if len(left_idxs) == 0 or \
          len(right_idxs) == 0:
          continue

        # get left and right y values
        left_y_vals = y[left_idxs]  # not empty
        right_y_vals = y[right_idxs]  # not empty

        # compute proposed split MSE
        mse_left = self.vector_mse(left_y_vals)
        mse_right = self.vector_mse(right_y_vals)
        split_mse = (len(left_y_vals) * mse_left + \
          len(right_y_vals) * mse_right) / n_rows

        if split_mse "lt" best_mse:
          best_col_idx = col_idx
          best_thresh = thresh
          best_mse = split_mse          

    return best_col_idx, best_thresh  # -1 is bad/no split

  # ---------------------------------------------------------

  def vector_mse(self, y):  # variance but called MSE
    if len(y) == 0:
      return 0.0  # should never get here
    # return np.mean((y - np.mean(y)) ** 2)
    return np.var(y)

  # ---------------------------------------------------------

  def make_tree(self, X, y):
    root = self.Node()  # is_leaf is False
    stack = [(root, X, y, 0)]  # curr depth = 0

    while (len(stack) "gt" 0):
      curr_node, curr_X, curr_y, curr_depth = stack.pop()

      if curr_depth == self.max_depth or \
        len(curr_y) "lt" self.min_samples:
        curr_node.value = np.mean(curr_y)
        curr_node.is_leaf = True
        continue

      col_idx, thresh = self.best_split(curr_X, curr_y) 

      if col_idx == -1:  # cannot split
        curr_node.value = np.mean(curr_y)
        curr_node.is_leaf = True
        continue
      
      # got a good split so at an internal, non-leaf node
      curr_node.col_idx = col_idx
      curr_node.thresh = thresh

      # create and attach child nodes
      left_idxs = np.where(curr_X[:,col_idx] "lte" thresh)[0]
      right_idxs = np.where(curr_X[:,col_idx] "gt" thresh)[0]

      left_X = curr_X[left_idxs,:]
      left_y = curr_y[left_idxs]
      right_X = curr_X[right_idxs,:]
      right_y = curr_y[right_idxs]

      curr_node.left = self.Node(id=2*curr_node.id+1)
      stack.append((curr_node.left,
        left_X, left_y, curr_depth+1))

      curr_node.right = self.Node(id=2*curr_node.id+2)
      stack.append((curr_node.right,
        right_X, right_y, curr_depth+1))
      
    return root

  # ---------------------------------------------------------      

  def fit(self, X, y):
    self.root = self.make_tree(X, y)

  # ---------------------------------------------------------

  def predict_one(self, x):
    curr = self.root
    while curr.is_leaf == False:
      if x[curr.col_idx] "lte" curr.thresh:
        curr = curr.left
      else:
        curr = curr.right
    return curr.value

  def predict(self, X):  # scikit always uses a matrix input
    result = np.zeros(len(X), dtype=np.float64)
    for i in range(len(X)):
      result[i] = self.predict_one(X[i])
    return result

  # ---------------------------------------------------------

# ===========================================================



# ===========================================================

# -----------------------------------------------------------

def accuracy(model, data_X, data_y, pct_close):
  # assumes model has a predict(X)
  n = len(data_X)
  n_correct = 0; n_wrong = 0
  for i in range(n):
    x = data_X[i].reshape(1,-1)  # make it a matrix
    y = data_y[i]
    y_pred = model.predict(x)  # predict() expects 2D

    if np.abs(y - y_pred) "lt" np.abs(y * pct_close):
      n_correct += 1
    else: 
      n_wrong += 1
  # print("Correct = " + str(n_correct))
  # print("Wrong   = " + str(n_wrong))
  return n_correct / (n_correct + n_wrong)

# -----------------------------------------------------------

def MSE(model, data_X, data_y):
  n = len(data_X)
  sum = 0.0
  for i in range(n):
    x = data_X[i].reshape(1,-1)
    y = data_y[i]
    y_pred = model.predict(x)
    sum += (y - y_pred) * (y - y_pred)

  return sum / n

# -----------------------------------------------------------

def main():
  print("\nBegin bagging tree regression scratch Python ")

  np.set_printoptions(precision=4, suppress=True,
    floatmode='fixed')
  np.random.seed(0)  # not used this version

  # 1. load data
  print("\nLoading synthetic train (200), test (40) data ")
  train_file = ".\\Data\\synthetic_train_200.txt"
  # -0.1660,0.4406,-0.9998,-0.3953,-0.7065,0.4840
  #  0.0776,-0.1616,0.3704,-0.5911,0.7562,0.1568
  # -0.9452,0.3409,-0.1654,0.1174,-0.7192,0.8054
  # . . .

  train_X = np.loadtxt(train_file, comments="#",
    usecols=[0,1,2,3,4],
    delimiter=",",  dtype=np.float64)
  train_y = np.loadtxt(train_file, comments="#", usecols=5,
    delimiter=",",  dtype=np.float64)

  test_file = ".\\Data\\synthetic_test_40.txt"
  test_X = np.loadtxt(test_file, comments="#",
    usecols=[0,1,2,3,4],
    delimiter=",",  dtype=np.float64)
  test_y = np.loadtxt(test_file, comments="#", usecols=5,
    delimiter=",",  dtype=np.float64)
  print("Done ")

  print("\nFirst three X predictors: ")
  print(train_X[0:3,:])
  print("\nFirst three y targets: ")
  for i in range(3):
    print("%0.4f" % train_y[i])

  # 2. create and train model
  nt = 10  # number trees
  nr = 200  # number rows
  md = 6  # max_depth
  ms = 2  # min_samples to consider a split

  print("\nSetting num_trees = " + str(nt))
  print("Setting num_rows = " + str(nr))
  print("Setting max_depth = " + str(md))
  print("Setting min_samples = " + str(ms))

  print("\nCreating and training bagging tree model ")
  model = BaggingTreeRegressor(n_trees=nt, n_rows=nr,
    max_depth=md, min_samples=ms, seed=0)
  model.fit(train_X, train_y)
  print("Done ")

  # 3. evaluate model
  acc_train = accuracy(model, train_X, train_y, 0.10)
  print("\nAccuracy train (within 0.10): %0.4f " % acc_train)
  acc_test = accuracy(model, test_X, test_y, 0.10)
  print("Accuracy test (within 0.10): %0.4f " % acc_test)

  mse_train = MSE(model, train_X, train_y)
  print("\nMSE train: %0.4f " % mse_train)
  mse_test = MSE(model, test_X, test_y)
  print("MSE test: %0.4f " % mse_test)

  # 4. use model
  x = train_X[0].reshape(1,-1)
  print("\nPredicting for: ")
  print(x)
  y_pred = model.predict(x)
  print("Predicted y = %0.4f " % y_pred)

  print("\nEnd bagging tree regression scratch Python demo ")

  print("\n==================== ")

  print("\nUsing scikit BaggingRegressor: ")

  from sklearn.ensemble import BaggingRegressor
  from sklearn.tree import DecisionTreeRegressor

  n_estimators = 10
  print("\nSetting n_estimators = " + str(n_estimators))
  print("\nCreating and training scikit bagging tree model ")
  bt = \
    BaggingRegressor(estimator=\
      DecisionTreeRegressor(max_depth=md,
      min_samples_split=ms),
      n_estimators=n_estimators, random_state=0)
  bt.fit(train_X, train_y)
  print("Done ")

  acc_train = accuracy(bt, train_X, train_y, 0.10)
  print("\nAccuracy train (within 0.10): %0.4f " % acc_train)
  acc_test = accuracy(bt, test_X, test_y, 0.10)
  print("Accuracy test (within 0.10): %0.4f " % acc_test)

  mse_train = MSE(bt, train_X, train_y)
  print("\nMSE train: %0.4f " % mse_train)
  mse_test = MSE(bt, test_X, test_y)
  print("MSE test: %0.4f " % mse_test)

  x = train_X[0].reshape(1,-1)
  print("\nPredicting for: ")
  print(x)
  y_pred = bt.predict(x)
  print("Predicted y = %0.4f " % y_pred)

if __name__ == "__main__":
  main()

Training data:

# synthetic_train_200.txt
#
-0.1660,  0.4406, -0.9998, -0.3953, -0.7065,  0.4840
 0.0776, -0.1616,  0.3704, -0.5911,  0.7562,  0.1568
-0.9452,  0.3409, -0.1654,  0.1174, -0.7192,  0.8054
 0.9365, -0.3732,  0.3846,  0.7528,  0.7892,  0.1345
-0.8299, -0.9219, -0.6603,  0.7563, -0.8033,  0.7955
 0.0663,  0.3838, -0.3690,  0.3730,  0.6693,  0.3206
-0.9634,  0.5003,  0.9777,  0.4963, -0.4391,  0.7377
-0.1042,  0.8172, -0.4128, -0.4244, -0.7399,  0.4801
-0.9613,  0.3577, -0.5767, -0.4689, -0.0169,  0.6861
-0.7065,  0.1786,  0.3995, -0.7953, -0.1719,  0.5569
 0.3888, -0.1716, -0.9001,  0.0718,  0.3276,  0.2500
 0.1731,  0.8068, -0.7251, -0.7214,  0.6148,  0.3297
-0.2046, -0.6693,  0.8550, -0.3045,  0.5016,  0.2129
 0.2473,  0.5019, -0.3022, -0.4601,  0.7918,  0.2613
-0.1438,  0.9297,  0.3269,  0.2434, -0.7705,  0.5171
 0.1568, -0.1837, -0.5259,  0.8068,  0.1474,  0.3307
-0.9943,  0.2343, -0.3467,  0.0541,  0.7719,  0.5581
 0.2467, -0.9684,  0.8589,  0.3818,  0.9946,  0.1092
-0.6553, -0.7257,  0.8652,  0.3936, -0.8680,  0.7018
 0.8460,  0.4230, -0.7515, -0.9602, -0.9476,  0.1996
-0.9434, -0.5076,  0.7201,  0.0777,  0.1056,  0.5664
 0.9392,  0.1221, -0.9627,  0.6013, -0.5341,  0.1533
 0.6142, -0.2243,  0.7271,  0.4942,  0.1125,  0.1661
 0.4260,  0.1194, -0.9749, -0.8561,  0.9346,  0.2230
 0.1362, -0.5934, -0.4953,  0.4877, -0.6091,  0.3810
 0.6937, -0.5203, -0.0125,  0.2399,  0.6580,  0.1460
-0.6864, -0.9628, -0.8600, -0.0273,  0.2127,  0.5387
 0.9772,  0.1595, -0.2397,  0.1019,  0.4907,  0.1611
 0.3385, -0.4702, -0.8673, -0.2598,  0.2594,  0.2270
-0.8669, -0.4794,  0.6095, -0.6131,  0.2789,  0.4700
 0.0493,  0.8496, -0.4734, -0.8681,  0.4701,  0.3516
 0.8639, -0.9721, -0.5313,  0.2336,  0.8980,  0.1412
 0.9004,  0.1133,  0.8312,  0.2831, -0.2200,  0.1782
 0.0991,  0.8524,  0.8375, -0.2102,  0.9265,  0.2150
-0.6521, -0.7473, -0.7298,  0.0113, -0.9570,  0.7422
 0.6190, -0.3105,  0.8802,  0.1640,  0.7577,  0.1056
 0.6895,  0.8108, -0.0802,  0.0927,  0.5972,  0.2214
 0.1982, -0.9689,  0.1870, -0.1326,  0.6147,  0.1310
-0.3695,  0.7858,  0.1557, -0.6320,  0.5759,  0.3773
-0.1596,  0.3581,  0.8372, -0.9992,  0.9535,  0.2071
-0.2468,  0.9476,  0.2094,  0.6577,  0.1494,  0.4132
 0.1737,  0.5000,  0.7166,  0.5102,  0.3961,  0.2611
 0.7290, -0.3546,  0.3416, -0.0983, -0.2358,  0.1332
-0.3652,  0.2438, -0.1395,  0.9476,  0.3556,  0.4170
-0.6029, -0.1466, -0.3133,  0.5953,  0.7600,  0.4334
-0.4596, -0.4953,  0.7098,  0.0554,  0.6043,  0.2775
 0.1450,  0.4663,  0.0380,  0.5418,  0.1377,  0.2931
-0.8636, -0.2442, -0.8407,  0.9656, -0.6368,  0.7429
 0.6237,  0.7499,  0.3768,  0.1390, -0.6781,  0.2185
-0.5499,  0.1850, -0.3755,  0.8326,  0.8193,  0.4399
-0.4858, -0.7782, -0.6141, -0.0008,  0.4572,  0.4197
 0.7033, -0.1683,  0.2334, -0.5327, -0.7961,  0.1776
 0.0317, -0.0457, -0.6947,  0.2436,  0.0880,  0.3345
 0.5031, -0.5559,  0.0387,  0.5706, -0.9553,  0.3107
-0.3513,  0.7458,  0.6894,  0.0769,  0.7332,  0.3170
 0.2205,  0.5992, -0.9309,  0.5405,  0.4635,  0.3532
-0.4806, -0.4859,  0.2646, -0.3094,  0.5932,  0.3202
 0.9809, -0.3995, -0.7140,  0.8026,  0.0831,  0.1600
 0.9495,  0.2732,  0.9878,  0.0921,  0.0529,  0.1289
-0.9476, -0.6792,  0.4913, -0.9392, -0.2669,  0.5966
 0.7247,  0.3854,  0.3819, -0.6227, -0.1162,  0.1550
-0.5922, -0.5045, -0.4757,  0.5003, -0.0860,  0.5863
-0.8861,  0.0170, -0.5761,  0.5972, -0.4053,  0.7301
 0.6877, -0.2380,  0.4997,  0.0223,  0.0819,  0.1404
 0.9189,  0.6079, -0.9354,  0.4188, -0.0700,  0.1907
-0.1428, -0.7820,  0.2676,  0.6059,  0.3936,  0.2790
 0.5324, -0.3151,  0.6917, -0.1425,  0.6480,  0.1071
-0.8432, -0.9633, -0.8666, -0.0828, -0.7733,  0.7784
-0.9444,  0.5097, -0.2103,  0.4939, -0.0952,  0.6787
-0.0520,  0.6063, -0.1952,  0.8094, -0.9259,  0.4836
 0.5477, -0.7487,  0.2370, -0.9793,  0.0773,  0.1241
 0.2450,  0.8116,  0.9799,  0.4222,  0.4636,  0.2355
 0.8186, -0.1983, -0.5003, -0.6531, -0.7611,  0.1511
-0.4714,  0.6382, -0.3788,  0.9648, -0.4667,  0.5950
 0.0673, -0.3711,  0.8215, -0.2669, -0.1328,  0.2677
-0.9381,  0.4338,  0.7820, -0.9454,  0.0441,  0.5518
-0.3480,  0.7190,  0.1170,  0.3805, -0.0943,  0.4724
-0.9813,  0.1535, -0.3771,  0.0345,  0.8328,  0.5438
-0.1471, -0.5052, -0.2574,  0.8637,  0.8737,  0.3042
-0.5454, -0.3712, -0.6505,  0.2142, -0.1728,  0.5783
 0.6327, -0.6297,  0.4038, -0.5193,  0.1484,  0.1153
-0.5424,  0.3282, -0.0055,  0.0380, -0.6506,  0.6613
 0.1414,  0.9935,  0.6337,  0.1887,  0.9520,  0.2540
-0.9351, -0.8128, -0.8693, -0.0965, -0.2491,  0.7353
 0.9507, -0.6640,  0.9456,  0.5349,  0.6485,  0.1059
-0.0462, -0.9737, -0.2940, -0.0159,  0.4602,  0.2606
-0.0627, -0.0852, -0.7247, -0.9782,  0.5166,  0.2977
 0.0478,  0.5098, -0.0723, -0.7504, -0.3750,  0.3335
 0.0090,  0.3477,  0.5403, -0.7393, -0.9542,  0.4415
-0.9748,  0.3449,  0.3736, -0.1015,  0.8296,  0.4358
 0.2887, -0.9895, -0.0311,  0.7186,  0.6608,  0.2057
 0.1570, -0.4518,  0.1211,  0.3435, -0.2951,  0.3244
 0.7117, -0.6099,  0.4946, -0.4208,  0.5476,  0.1096
-0.2929, -0.5726,  0.5346, -0.3827,  0.4665,  0.2465
 0.4889, -0.5572, -0.5718, -0.6021, -0.7150,  0.2163
-0.7782,  0.3491,  0.5996, -0.8389, -0.5366,  0.6516
-0.5847,  0.8347,  0.4226,  0.1078, -0.3910,  0.6134
 0.8469,  0.4121, -0.0439, -0.7476,  0.9521,  0.1571
-0.6803, -0.5948, -0.1376, -0.1916, -0.7065,  0.7156
 0.2878,  0.5086, -0.5785,  0.2019,  0.4979,  0.2980
 0.2764,  0.1943, -0.4090,  0.4632,  0.8906,  0.2960
-0.8877,  0.6705, -0.6155, -0.2098, -0.3998,  0.7107
-0.8398,  0.8093, -0.2597,  0.0614, -0.0118,  0.6502
-0.8476,  0.0158, -0.4769, -0.2859, -0.7839,  0.7715
 0.5751, -0.7868,  0.9714, -0.6457,  0.1448,  0.1175
 0.4802, -0.7001,  0.1022, -0.5668,  0.5184,  0.1090
 0.4458, -0.6469,  0.7239, -0.9604,  0.7205,  0.0779
 0.5175,  0.4339,  0.9747, -0.4438, -0.9924,  0.2879
 0.8678,  0.7158,  0.4577,  0.0334,  0.4139,  0.1678
 0.5406,  0.5012,  0.2264, -0.1963,  0.3946,  0.2088
-0.9938,  0.5498,  0.7928, -0.5214, -0.7585,  0.7687
 0.7661,  0.0863, -0.4266, -0.7233, -0.4197,  0.1466
 0.2277, -0.3517, -0.0853, -0.1118,  0.6563,  0.1767
 0.3499, -0.5570, -0.0655, -0.3705,  0.2537,  0.1632
 0.7547, -0.1046,  0.5689, -0.0861,  0.3125,  0.1257
 0.8186,  0.2110,  0.5335,  0.0094, -0.0039,  0.1391
 0.6858, -0.8644,  0.1465,  0.8855,  0.0357,  0.1845
-0.4967,  0.4015,  0.0805,  0.8977,  0.2487,  0.4663
 0.6760, -0.9841,  0.9787, -0.8446, -0.3557,  0.1509
-0.1203, -0.4885,  0.6054, -0.0443, -0.7313,  0.4854
 0.8557,  0.7919, -0.0169,  0.7134, -0.1628,  0.2002
 0.0115, -0.6209,  0.9300, -0.4116, -0.7931,  0.4052
-0.7114, -0.9718,  0.4319,  0.1290,  0.5892,  0.3661
 0.3915,  0.5557, -0.1870,  0.2955, -0.6404,  0.2954
-0.3564, -0.6548, -0.1827, -0.5172, -0.1862,  0.4622
 0.2392, -0.4959,  0.5857, -0.1341, -0.2850,  0.2470
-0.3394,  0.3947, -0.4627,  0.6166, -0.4094,  0.5325
 0.7107,  0.7768, -0.6312,  0.1707,  0.7964,  0.2757
-0.1078,  0.8437, -0.4420,  0.2177,  0.3649,  0.4028
-0.3139,  0.5595, -0.6505, -0.3161, -0.7108,  0.5546
 0.4335,  0.3986,  0.3770, -0.4932,  0.3847,  0.1810
-0.2562, -0.2894, -0.8847,  0.2633,  0.4146,  0.4036
 0.2272,  0.2966, -0.6601, -0.7011,  0.0284,  0.2778
-0.0743, -0.1421, -0.0054, -0.6770, -0.3151,  0.3597
-0.4762,  0.6891,  0.6007, -0.1467,  0.2140,  0.4266
-0.4061,  0.7193,  0.3432,  0.2669, -0.7505,  0.6147
-0.0588,  0.9731,  0.8966,  0.2902, -0.6966,  0.4955
-0.0627, -0.1439,  0.1985,  0.6999,  0.5022,  0.3077
 0.1587,  0.8494, -0.8705,  0.9827, -0.8940,  0.4263
-0.7850,  0.2473, -0.9040, -0.4308, -0.8779,  0.7199
 0.4070,  0.3369, -0.2428, -0.6236,  0.4940,  0.2215
-0.0242,  0.0513, -0.9430,  0.2885, -0.2987,  0.3947
-0.5416, -0.1322, -0.2351, -0.0604,  0.9590,  0.3683
 0.1055,  0.7783, -0.2901, -0.5090,  0.8220,  0.2984
-0.9129,  0.9015,  0.1128, -0.2473,  0.9901,  0.4776
-0.9378,  0.1424, -0.6391,  0.2619,  0.9618,  0.5368
 0.7498, -0.0963,  0.4169,  0.5549, -0.0103,  0.1614
-0.2612, -0.7156,  0.4538, -0.0460, -0.1022,  0.3717
 0.7720,  0.0552, -0.1818, -0.4622, -0.8560,  0.1685
-0.4177,  0.0070,  0.9319, -0.7812,  0.3461,  0.3052
-0.0001,  0.5542, -0.7128, -0.8336, -0.2016,  0.3803
 0.5356, -0.4194, -0.5662, -0.9666, -0.2027,  0.1776
-0.2378,  0.3187, -0.8582, -0.6948, -0.9668,  0.5474
-0.1947, -0.3579,  0.1158,  0.9869,  0.6690,  0.2992
 0.3992,  0.8365, -0.9205, -0.8593, -0.0520,  0.3154
-0.0209,  0.0793,  0.7905, -0.1067,  0.7541,  0.1864
-0.4928, -0.4524, -0.3433,  0.0951, -0.5597,  0.6261
-0.8118,  0.7404, -0.5263, -0.2280,  0.1431,  0.6349
 0.0516, -0.8480,  0.7483,  0.9023,  0.6250,  0.1959
-0.3212,  0.1093,  0.9488, -0.3766,  0.3376,  0.2735
-0.3481,  0.5490, -0.3484,  0.7797,  0.5034,  0.4379
-0.5785, -0.9170, -0.3563, -0.9258,  0.3877,  0.4121
 0.3407, -0.1391,  0.5356,  0.0720, -0.9203,  0.3458
-0.3287, -0.8954,  0.2102,  0.0241,  0.2349,  0.3247
-0.1353,  0.6954, -0.0919, -0.9692,  0.7461,  0.3338
 0.9036, -0.8982, -0.5299, -0.8733, -0.1567,  0.1187
 0.7277, -0.8368, -0.0538, -0.7489,  0.5458,  0.0830
 0.9049,  0.8878,  0.2279,  0.9470, -0.3103,  0.2194
 0.7957, -0.1308, -0.5284,  0.8817,  0.3684,  0.2172
 0.4647, -0.4931,  0.2010,  0.6292, -0.8918,  0.3371
-0.7390,  0.6849,  0.2367,  0.0626, -0.5034,  0.7039
-0.1567, -0.8711,  0.7940, -0.5932,  0.6525,  0.1710
 0.7635, -0.0265,  0.1969,  0.0545,  0.2496,  0.1445
 0.7675,  0.1354, -0.7698, -0.5460,  0.1920,  0.1728
-0.5211, -0.7372, -0.6763,  0.6897,  0.2044,  0.5217
 0.1913,  0.1980,  0.2314, -0.8816,  0.5006,  0.1998
 0.8964,  0.0694, -0.6149,  0.5059, -0.9854,  0.1825
 0.1767,  0.7104,  0.2093,  0.6452,  0.7590,  0.2832
-0.3580, -0.7541,  0.4426, -0.1193, -0.7465,  0.5657
-0.5996,  0.5766, -0.9758, -0.3933, -0.9572,  0.6800
 0.9950,  0.1641, -0.4132,  0.8579,  0.0142,  0.2003
-0.4717, -0.3894, -0.2567, -0.5111,  0.1691,  0.4266
 0.3917, -0.8561,  0.9422,  0.5061,  0.6123,  0.1212
-0.0366, -0.1087,  0.3449, -0.1025,  0.4086,  0.2475
 0.3633,  0.3943,  0.2372, -0.6980,  0.5216,  0.1925
-0.5325, -0.6466, -0.2178, -0.3589,  0.6310,  0.3568
 0.2271,  0.5200, -0.1447, -0.8011, -0.7699,  0.3128
 0.6415,  0.1993,  0.3777, -0.0178, -0.8237,  0.2181
-0.5298, -0.0768, -0.6028, -0.9490,  0.4588,  0.4356
 0.6870, -0.1431,  0.7294,  0.3141,  0.1621,  0.1632
-0.5985,  0.0591,  0.7889, -0.3900,  0.7419,  0.2945
 0.3661,  0.7984, -0.8486,  0.7572, -0.6183,  0.3449
 0.6995,  0.3342, -0.3113, -0.6972,  0.2707,  0.1712
 0.2565,  0.9126,  0.1798, -0.6043, -0.1413,  0.2893
-0.3265,  0.9839, -0.2395,  0.9854,  0.0376,  0.4770
 0.2690, -0.1722,  0.9818,  0.8599, -0.7015,  0.3954
-0.2102, -0.0768,  0.1219,  0.5607, -0.0256,  0.3949
 0.8216, -0.9555,  0.6422, -0.6231,  0.3715,  0.0801
-0.2896,  0.9484, -0.7545, -0.6249,  0.7789,  0.4370
-0.9985, -0.5448, -0.7092, -0.5931,  0.7926,  0.5402

Test data:

# synthetic_test_40.txt
#
 0.7462,  0.4006, -0.0590,  0.6543, -0.0083,  0.1935
 0.8495, -0.2260, -0.0142, -0.4911,  0.7699,  0.1078
-0.2335, -0.4049,  0.4352, -0.6183, -0.7636,  0.5088
 0.1810, -0.5142,  0.2465,  0.2767, -0.3449,  0.3136
-0.8650,  0.7611, -0.0801,  0.5277, -0.4922,  0.7140
-0.2358, -0.7466, -0.5115, -0.8413, -0.3943,  0.4533
 0.4834,  0.2300,  0.3448, -0.9832,  0.3568,  0.1360
-0.6502, -0.6300,  0.6885,  0.9652,  0.8275,  0.3046
-0.3053,  0.5604,  0.0929,  0.6329, -0.0325,  0.4756
-0.7995,  0.0740, -0.2680,  0.2086,  0.9176,  0.4565
-0.2144, -0.2141,  0.5813,  0.2902, -0.2122,  0.4119
-0.7278, -0.0987, -0.3312, -0.5641,  0.8515,  0.4438
 0.3793,  0.1976,  0.4933,  0.0839,  0.4011,  0.1905
-0.8568,  0.9573, -0.5272,  0.3212, -0.8207,  0.7415
-0.5785,  0.0056, -0.7901, -0.2223,  0.0760,  0.5551
 0.0735, -0.2188,  0.3925,  0.3570,  0.3746,  0.2191
 0.1230, -0.2838,  0.2262,  0.8715,  0.1938,  0.2878
 0.4792, -0.9248,  0.5295,  0.0366, -0.9894,  0.3149
-0.4456,  0.0697,  0.5359, -0.8938,  0.0981,  0.3879
 0.8629, -0.8505, -0.4464,  0.8385,  0.5300,  0.1769
 0.1995,  0.6659,  0.7921,  0.9454,  0.9970,  0.2330
-0.0249, -0.3066, -0.2927, -0.4923,  0.8220,  0.2437
 0.4513, -0.9481, -0.0770, -0.4374, -0.9421,  0.2879
-0.3405,  0.5931, -0.3507, -0.3842,  0.8562,  0.3987
 0.9538,  0.0471,  0.9039,  0.7760,  0.0361,  0.1706
-0.0887,  0.2104,  0.9808,  0.5478, -0.3314,  0.4128
-0.8220, -0.6302,  0.0537, -0.1658,  0.6013,  0.4306
-0.4123, -0.2880,  0.9074, -0.0461, -0.4435,  0.5144
 0.0060,  0.2867, -0.7775,  0.5161,  0.7039,  0.3599
-0.7968, -0.5484,  0.9426, -0.4308,  0.8148,  0.2979
 0.7811,  0.8450, -0.6877,  0.7594,  0.2640,  0.2362
-0.6802, -0.1113, -0.8325, -0.6694, -0.6056,  0.6544
 0.3821,  0.1476,  0.7466, -0.5107,  0.2592,  0.1648
 0.7265,  0.9683, -0.9803, -0.4943, -0.5523,  0.2454
-0.9049, -0.9797, -0.0196, -0.9090, -0.4433,  0.6447
-0.4607,  0.1811, -0.2389,  0.4050, -0.0078,  0.5229
 0.2664, -0.2932, -0.4259, -0.7336,  0.8742,  0.1834
-0.4507,  0.1029, -0.6294, -0.1158, -0.6294,  0.6081
 0.8948, -0.0124,  0.9278,  0.2899, -0.0314,  0.1534
-0.1323, -0.8813, -0.0146, -0.0697,  0.6135,  0.2386
Posted in Machine Learning | Leave a comment

“How AI Is Accelerating Machine Learning Development” on the Pure AI Web Site

I contributed technical content and opinions to an article titled “How AI Is Accelerating Machine Learning Development” in the December 2025 edition of the Pure AI web site. See https://pureai.com/articles/2025/12/03/how-ai-is-accelerating-machine-learning-development.aspx.

In a nutshell, AI can 1.) accurately determine which machine learning techniques are most likely to produce the best prediction results, 2.) generate machine learning program code, and 3.) tune machine learning model hyperparameters. As a crude rule of thumb, using generative AI typically reduces the time needed to create a machine learning prediction model by at least half, and often much more.

One example that I provided shows how AI can generate machine learning program code:

Suppose you submit a prompt such as, “Please give me a simple program, using the scikit-learn library, that creates a kernel ridge regression model for the dataset.”

The resulting program code is:

import numpy as np
import pandas as pd
from sklearn.kernel_ridge import KernelRidge

np.set_printoptions(precision=4, suppress=True,
  floatmode='fixed')

# Load the dataset
data = \
  pd.read_csv("synthetic_train_200.txt",
    comment="#")
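# note (my observation, not from the article): read_csv infers a header
# row by default, so header=None may be needed if the first non-comment
# line of the file is data rather than column names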

X = data.iloc[:, :-1].values   # predictors
y = data.iloc[:, -1].values    # target

# Create and train the Kernel Ridge Regression model
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
model.fit(X, y)

# Predict on the data (no split)
y_pred = model.predict(X)

print("\nFirst 3 actual values:")
print(y[:3])

print("\nFirst 3 predictions:")
print(y_pred[:3])

The code is correct and when run, the output is:

First 3 actual values:
[0.1568, 0.8054, 0.1345]

First 3 predictions:
[0.1798, 0.7659, 0.1279]
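
The article also mentions hyperparameter tuning (the third capability listed above). As a rough illustration of the kind of code AI can generate for that task, here is a minimal sketch I put together (it is not from the article) that uses the scikit-learn GridSearchCV class to tune the alpha and gamma hyperparameters of a kernel ridge model. The file name, grid values, and scoring choice are arbitrary assumptions:

# tune_kernel_ridge.py
# minimal hyperparameter tuning sketch -- grid values are assumptions
import pandas as pd
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

data = pd.read_csv("synthetic_train_200.txt", comment="#", header=None)
X = data.iloc[:, :-1].values   # predictors
y = data.iloc[:, -1].values    # target

param_grid = {"alpha": [0.01, 0.1, 1.0], "gamma": [0.1, 0.5, 1.0]}

# 5-fold cross-validation over the grid, scored by negative MSE
tuner = GridSearchCV(KernelRidge(kernel="rbf"), param_grid,
  scoring="neg_mean_squared_error", cv=5)
tuner.fit(X, y)

print("Best hyperparameters:", tuner.best_params_)

The best parameter combination found by the search could then be plugged into the KernelRidge constructor in the program above.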

In the article, I offer some opinions: “AI is definitely going to eliminate many tech jobs, but I don’t think AI will replace the data scientist role. I suspect that it’s more likely that AI will dramatically increase the speed and efficiency of data science activities.

“However, it’s quite possible that AI will negatively impact the job market for entry-level data scientist positions, which is a role that tends to be filled by recent college graduates.”



Steampunk is a sub-genre of science fiction that combines Victorian-era sensibilities with retro-futuristic, steam-powered technology. Here’s a nice AI-generated example of a steampunk car.


Posted in Machine Learning | Leave a comment

NFL 2025 Week 16 Predictions – Zoltar Says Bet the Farm on Vegas Favorites Texans Against the Raiders

Zoltar is my NFL football prediction computer program. It uses a neural network and a type of specialized reinforcement learning. Here are Zoltar’s predictions for week #16 of the 2025 season.

Zoltar:    seahawks  by    4  opp =        rams    | Vegas:        rams  by  1.5
Zoltar:     packers  by    0  opp =       bears    | Vegas:     packers  by  1.5
Zoltar:      eagles  by    4  opp =  commanders    | Vegas:      eagles  by    6
Zoltar:    chargers  by    2  opp =     cowboys    | Vegas:     cowboys  by    2
Zoltar:      saints  by    6  opp =        jets    | Vegas:      saints  by    3
Zoltar:      chiefs  by    9  opp =      titans    | Vegas:      chiefs  by    4
Zoltar:  buccaneers  by    0  opp =    panthers    | Vegas:  buccaneers  by    3
Zoltar:       bills  by   12  opp =      browns    | Vegas:       bills  by    9
Zoltar:     vikings  by    5  opp =      giants    | Vegas:     vikings  by    3
Zoltar:    dolphins  by    6  opp =     bengals    | Vegas:     bengals  by  2.5
Zoltar:     falcons  by    0  opp =   cardinals    | Vegas:     falcons  by  2.5
Zoltar:     broncos  by    8  opp =     jaguars    | Vegas:     broncos  by    3
Zoltar:      texans  by   18  opp =     raiders    | Vegas:      texans  by    9
Zoltar:       lions  by    6  opp =    steelers    | Vegas:       lions  by    6
Zoltar:      ravens  by    4  opp =    patriots    | Vegas:      ravens  by    3
Zoltar:       colts  by    2  opp = fortyniners    | Vegas: fortyniners  by  6.5

Zoltar theoretically suggests betting when the Vegas line is “significantly” different from Zoltar’s prediction. In the beginning and end parts of the season, I use a conservative 4 points difference as the threshold. In the middle part of the season I use a more aggressive 3 points difference as the threshold.
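
A minimal sketch of the advice rule, where both margins are expressed from the same team’s point of view so the two predictions can be compared directly (the variable names and example values are my own, not Zoltar’s actual code):

# advice rule sketch -- names and example values are assumptions, not Zoltar's code
THRESHOLD = 4.0        # conservative value used early and late in the season

zoltar_margin = 4.0    # Zoltar: Seahawks by 4
vegas_margin = -1.5    # Vegas: Rams by 1.5, i.e. Seahawks by -1.5

diff = zoltar_margin - vegas_margin    # 5.5
if diff >= THRESHOLD:
  print("Bet on the Seahawks")    # Zoltar likes them more than Vegas does
elif diff <= -THRESHOLD:
  print("Bet on the Rams")
else:
  print("No suggested bet -- the difference is too small")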

For week #16 Zoltar likes Vegas favorites Chiefs, Broncos, and Texans, and Vegas underdogs Seahawks, Dolphins, and Colts:

rams         at     seahawks: Bet on Vegas underdog seahawks
chiefs       at       titans: Bet on Vegas favorite  chiefs
bengals      at     dolphins: Bet on Vegas underdog dolphins
jaguars      at      broncos: Bet on Vegas favorite broncos
raiders      at       texans: Bet on Vegas favorite texans
fortyniners  at        colts: Bet on Vegas underdog colts

For example, a bet on the underdog Seahawks against the Rams will pay off if the Seahawks win by any score, or if the favored Rams win but by less than 1.5 points (i.e., 1 point). If a favored team wins by exactly the point spread, the wager is a push. This is why point spreads often have a 0.5 added — called “the hook” — to eliminate pushes.
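
Here is a tiny sketch of how a point-spread wager on a favorite is settled; the function is my own illustration, not part of Zoltar:

# settling a point-spread bet on the favorite -- illustration only
def settle_favorite_bet(favorite_score, underdog_score, spread):
  margin = favorite_score - underdog_score
  if margin > spread:
    return "win"
  if margin == spread:
    return "push"    # cannot happen when the spread carries the 0.5 hook
  return "lose"

print(settle_favorite_bet(24, 21, 3.0))   # push -- favorite won by exactly the spread
print(settle_favorite_bet(24, 21, 2.5))   # win -- the hook eliminates the push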

I use the early Vegas point spreads, usually posted late on Monday night, right after the Monday Night Football game. By the time you read this, the point spreads will certainly have changed. I’ve noticed that, compared to last year, point spreads are changing wildly by late Tuesday. A swing of 7 points is not uncommon.

I speculate that the wild swings are due to a huge increase in betting. When a lot of money is bet on one team in a matchup, the bookmakers must make a huge change in the point spread to encourage betting on the other team. Bookmakers only make money when the betting amounts are close to equal on both teams.

Theoretically, if you must bet $110 to win $100 (typical in Vegas) then you’ll make money if you predict at 53% accuracy or better. But realistically, you need to predict at 60% accuracy or better, to take into account overhead and things like data entry error.
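
The 53% figure follows from the break-even condition for risking $110 to win $100. A quick check:

# break-even win probability p when risking $110 to win $100:
# p * 100 - (1 - p) * 110 >= 0  implies  p >= 110 / 210
p = 110.0 / (100.0 + 110.0)
print(p)    # 0.5238..., so roughly 53 percent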

In week #15, against the Vegas point spread, Zoltar went 3-0 using 4.0 points as the advice threshold. Zoltar correctly liked Vegas favorite Eagles (they destroyed the Raiders), and Vegas underdogs Colts (they only lost by 3 to the Seahawks) and Broncos (they won outright against the Packers).

For the season, against the spread, Zoltar is 42-24 (~63% accuracy).

Just for fun, I track how well Zoltar does when just trying to predict which team will win a game. This isn’t useful except for parlay betting. In week #15, just predicting the winning team, Zoltar went 9-2 with 5 games too close for Zoltar to express an opinion.

Vegas was 10-6 at just predicting the winning team. Not good but not terrible either.


My system is named after the Zoltar fortune teller machine you can find in arcades. Many years ago (1971-73), the Kellogg’s breakfast products company put rules for an NFL simulation into their Pop Tarts snacks (introduced in 1964). The simulation used a deck of cards as the random number generator. For example, the player who has the ball decides to run or pass and then picks a card from the shuffled deck. Suppose the player chooses run and picks a 3 of clubs. The result is a 12-yard gain. A 3 of diamonds, hearts, or spades results in a 5-yard loss.


Posted in Zoltar | Leave a comment

You Must Use full_matrices=False with the NumPy SVD function to Compute the Pseudo-Inverse

Singular value decomposition (SVD) breaks down a matrix A into a matrix U, a vector s, and a matrix Vh, such that A = U * S * Vh. The S is a diagonal matrix with the elements of vector s on the diagonal and 0s elsewhere.

If you want to compute the Moore-Penrose pseudo-inverse of A using SVD, it is A_pinv = V * Sinv * Uh. The V is the transpose of Vh, Sinv is S with the reciprocals of the singular values on its diagonal, and Uh is the transpose of U.

As it turns out, the NumPy SVD function accepts a source matrix to decompose, and a strange full_matrices Boolean parameter. The default value is full_matrices=True, but in order to compute the pseudo-inverse, you must explicitly specify full_matrices=False.

Here’s a demo. It starts by setting up a matrix that has more rows than columns, such as the kind you’d find as training data in a machine learning problem. The demo computes the pseudo-inverse directly using the np.linalg.pinv() function:

import numpy as np
np.set_printoptions(precision=4, suppress=True)

A = np.array([
  [1, -2, 3],
  [-5, 0, 2],
  [8, 5, -4],
  [-1, 0, 9]])

print("\nA: ")
print(A)

print("\n============================== ")

print("\nA pinv() using np.linalg.pinv(): \n")
A_pinv = np.linalg.pinv(A)
print(A_pinv)

The output is:

A:
[[ 1 -2  3]
 [-5  0  2]
 [ 8  5 -4]
 [-1  0  9]]

==============================

A pinv() using np.linalg.pinv():

[[ 0.0979 -0.1201  0.0392  0.0115]
 [-0.1707  0.1607  0.1317  0.0797]
 [ 0.0297  0.0038  0.0119  0.1057]]

Next, the demo computes the pseudo-inverse by using the SVD with the full_matrices set to False:

print("\nComputing SVD with full_matrices=False ")
U, s, Vh = np.linalg.svd(A, full_matrices=False)
print("\nU: ")
print(U)
print("\ns: ")
print(s)
print("\nVh: ")
print(Vh)

Uh = U.T
V = Vh.T
p = len(s)
Sinv = np.zeros((p,p))
for i in range(p):
  Sinv[i][i] = 1.0 / (s[i] + 1.0e-12)

# V * Sinv * Uh
A_pinv = np.matmul(V, np.matmul(Sinv,Uh))
print("\nA pinv() scratch with full_matrices=False \n")
print(A_pinv)

The output is the same as when computed directly by np.linalg.pinv() and is correct:

Computing SVD with full_matrices=False

U:
[[ 0.1662 -0.3022  0.6397]
 [ 0.3568  0.2561 -0.6442]
 [-0.7379 -0.5111 -0.3448]
 [ 0.5483 -0.7628 -0.2387]]

s:
[12.8087  7.423   3.292 ]

Vh:
[[-0.63   -0.314   0.7103]
 [-0.6613 -0.2628 -0.7026]
 [ 0.4073 -0.9123 -0.0421]]

A pinv() scratch with full_matrices=False

[[ 0.0979 -0.1201  0.0392  0.0115]
 [-0.1707  0.1607  0.1317  0.0797]
 [ 0.0297  0.0038  0.0119  0.1057]]

However, when the demo tries to compute the pseudo-inverse using SVD with the default full_matrices=True, the program fails. The code:

print("\nComputing SVD with full_matrices=True ")
U, s, Vh = np.linalg.svd(A, full_matrices=True) # default!
print("\nU: ")
print(U)
print("\ns: ")
print(s)
print("\nVh: ")
print(Vh)
print("\n")

Uh = U.T
V = Vh.T
p = len(s)
Sinv = np.zeros((p,p))
for i in range(p):
  Sinv[i][i] = 1.0 / (s[i] + 1.0e-12)

# V * Sinv * Uh
A_pinv = np.matmul(V, np.matmul(Sinv,Uh))
print("\nA pinv() scratch with full_matrices=True \n")
print(A_pinv)

The output:

Computing SVD with full_matrices=True

U:
[[ 0.1662 -0.3022  0.6397 -0.6869]
 [ 0.3568  0.2561 -0.6442 -0.6262]
 [-0.7379 -0.5111 -0.3448 -0.2748]
 [ 0.5483 -0.7628 -0.2387  0.246 ]]

s:
[12.8087  7.423   3.292 ]

Vh:
[[-0.63   -0.314   0.7103]
 [-0.6613 -0.2628 -0.7026]
 [ 0.4073 -0.9123 -0.0421]]

Traceback (most recent call last):
  File "C:\VSM\PseudoInverse\pseudoinv_numpy.py",
 line 65, in <module>
    A_pinv = np.matmul(V, np.matmul(Sinv,Uh))
                          ^^^^^^^^^^^^^^^^^^
ValueError: matmul: Input operand 1 has a mismatch
 in its core dimension 0, with gufunc signature 
(n?,k),(k,m?)->(n?,m?) (size 4 is different from 3)

So, the question in my mind is, “Why is the default for svd() full_matrices=True when that causes the calculation of the pseudo-inverse to fail?”

I have no answer to that. I know that SVD is used for dozens of different purposes in mathematics, so there may be scenarios where “full_matrices=True” is the correct option.
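
The shape mismatch explains the failure: with full_matrices=True, the U is 4x4 instead of 4x3, but the scratch code still builds a 3x3 Sinv, so the matrices no longer chain together. If you really did want to use the full matrices, one workaround is to build a rectangular Sinv with the transpose shape of S. Here is a sketch I put together; it is not part of the demo program:

# pseudo-inverse using full_matrices=True by building a rectangular Sinv
# (a sketch, not part of the original demo program)
import numpy as np

A = np.array([
  [1, -2, 3],
  [-5, 0, 2],
  [8, 5, -4],
  [-1, 0, 9]], dtype=np.float64)

U, s, Vh = np.linalg.svd(A, full_matrices=True)   # U is 4x4, Vh is 3x3
m, n = A.shape                                    # 4, 3
Sinv = np.zeros((n, m))                           # 3x4, the transpose shape of S
for i in range(len(s)):
  Sinv[i][i] = 1.0 / s[i]

A_pinv = Vh.T @ Sinv @ U.T    # (3x3) @ (3x4) @ (4x4) = 3x4
print(A_pinv)                 # matches np.linalg.pinv(A)

With the rectangular Sinv the dimensions line up, but the extra column of U contributes nothing to the result, so full_matrices=False remains the simpler choice.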



Over time, arcade machines have become more sensitive but much less interesting. Here are two fully_sensitive=False coin-operated electro-mechanical games from Marvin’s Marvelous Mechanical Museum in West Bloomfield Township, Michigan.

Left: In “Dr. Chopandoff”, you place your hand in the metal glove. The doctor slams the guillotine blade down, apparently chopping your fingers off. Nice illusion.

Right: This old 1930s era automata machine from the UK depicts the Spanish Inquisition, complete with automated whipping, branding, and racking motions. Holy Moly!


Demo program:

# pseudoinv_numpy.py

import numpy as np
import scipy as sp

np.set_printoptions(precision=4, suppress=True)
A = np.array([
  [1, -2, 3],
  [-5, 0, 2],
  [8, 5, -4],
  [-1, 0, 9]])

print("\nA: ")
print(A)

print("\n============================== ")

print("\nA pinv() using np.linalg.pinv(): \n")
A_pinv = np.linalg.pinv(A)
print(A_pinv)

print("\n============================== ")

print("\nComputing SVD with full_matrices=False ")
U, s, Vh = np.linalg.svd(A, full_matrices=False)
print("\nU: ")
print(U)
print("\ns: ")
print(s)
print("\nVh: ")
print(Vh)

Uh = U.T
V = Vh.T
p = len(s)
Sinv = np.zeros((p,p))
for i in range(p):
  Sinv[i][i] = 1.0 / (s[i] + 1.0e-12)

# V * Sinv * Uh
A_pinv = np.matmul(V, np.matmul(Sinv,Uh))
print("\nA pinv() scratch with full_matrices=False \n")
print(A_pinv)

print("\n============================== ")

print("\nComputing SVD with full_matrices=True ")
U, s, Vh = np.linalg.svd(A, full_matrices=True)
print("\nU: ")
print(U)
print("\ns: ")
print(s)
print("\nVh: ")
print(Vh)
print("\n")

Uh = U.T
V = Vh.T
p = len(s)
Sinv = np.zeros((p,p))
for i in range(p):
  Sinv[i][i] = 1.0 / (s[i] + 1.0e-12)

# V * Sinv * Uh
A_pinv = np.matmul(V, np.matmul(Sinv,Uh))
print("\nA pinv() scratch with full_matrices=True \n")
print(A_pinv)

print("\n============================== ")
Posted in Machine Learning | Leave a comment