Monthly Archives: January 2025

Random Neighborhoods Regression Using C#

I woke up one morning with a mild cold and headache. The headache wasn’t too bad, but it was bad enough that I didn’t want to work on difficult topics like PyTorch neural systems or simulated quantum computing. So I … Continue reading

Posted in Machine Learning | Leave a comment

I Will Be Speaking At The Visual Studio Live Conference in March 2025

I will be giving a talk titled “Introduction to Neural Networks Using C#” at the 2025 Visual Studio Live conference. The event runs March 10-14, 2025, in Las Vegas. See https://vslive.com/lasvegas. Before I go any further, the VS Live organizers … Continue reading

Posted in Conferences | Leave a comment

NFL 2024 Week 20 (Division Championships) Predictions – Zoltar Agrees with Las Vegas

Zoltar is my NFL football prediction computer program. It uses a neural network and a type of reinforcement learning. Here are Zoltar’s predictions for week #20 (division championship games) of the 2024 season. Zoltar: chiefs by 6 dog = texans … Continue reading

Posted in Zoltar | Leave a comment

Linear Regression With Two-Way Interactions From Scratch Using C#

Suppose you have a regression problem with three predictor variables, x0, x1, x2. Standard linear regression creates a prediction equation like: y = (w0 * x0) + (w1 * x1) + (w2 * x2) + b where w0, w1, w2 … Continue reading
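With two-way interactions, the prediction equation gains a term wij * xi * xj for every pair i < j, so three predictors add three interaction weights. Here is a minimal Python sketch of just the prediction step (the post itself uses C#, and the weight values below are made up purely for illustration):

```python
from itertools import combinations

def predict(x, w, w_inter, b):
    """Linear prediction with all two-way interaction terms.
    x: predictor values, w: per-variable weights,
    w_inter: one weight per (i, j) pair with i < j, b: bias."""
    y = b + sum(wi * xi for wi, xi in zip(w, x))
    for wij, (i, j) in zip(w_inter, combinations(range(len(x)), 2)):
        y += wij * x[i] * x[j]
    return y

# three predictors -> three interaction pairs: (0,1), (0,2), (1,2)
y = predict([1.0, 2.0, 3.0], w=[0.5, -0.2, 0.1],
            w_inter=[0.3, 0.0, -0.1], b=1.0)
```

Training then fits the interaction weights exactly like the ordinary weights, because each product xi * xj is just another derived input column.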

Posted in Machine Learning | 1 Comment

Explaining Why PyTorch Multi-Class Classification Neural Networks Use NLLLoss (Negative Log-Likelihood Loss)

I was preparing to teach a class on PyTorch neural networks at the large tech company I work for. To prepare, I wanted to mentally review PyTorch neural network basics, including the mysterious NLLLoss function. 1. Early Days: One-Hot Targets, … Continue reading
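The key fact is that NLLLoss does no math beyond negating the log-probability of the target class; the preceding LogSoftmax layer does the heavy lifting. A plain-Python sketch of that arithmetic (not the PyTorch API itself, just the computation it performs on one item):

```python
import math

def log_softmax(logits):
    # subtract the max before exponentiating for numerical stability
    m = max(logits)
    lse = m + math.log(sum(math.exp(z - m) for z in logits))
    return [z - lse for z in logits]

def nll_loss(log_probs, target):
    # negative log-likelihood: just the negated log-prob of the true class
    return -log_probs[target]

logits = [2.0, 0.5, -1.0]   # raw network outputs (made-up values)
loss = nll_loss(log_softmax(logits), target=0)
```

Composing the two steps is exactly what PyTorch's CrossEntropyLoss does internally, which is why a network that ends in LogSoftmax pairs with NLLLoss, while a network with raw output nodes pairs with CrossEntropyLoss.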

Posted in PyTorch | Leave a comment

Gradient Boosting Regression From Semi-Scratch Using Python

Gradient boosting regression (GBR) is a common machine learning technique to predict a single numeric value. The technique combines many small decision trees (typically 100 or 200) to produce a final prediction. One morning before work, I figured I’d put … Continue reading
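The core boosting loop fits each new tree to the residuals of the ensemble built so far, then adds the tree's output scaled by a learning rate. A semi-from-scratch Python sketch using depth-1 trees (stumps) on a single feature; the toy data and hyperparameters here are made up for illustration:

```python
def fit_stump(xs, ys):
    # depth-1 regression tree: find the split threshold that
    # minimizes the squared error of the two leaf means
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - lm) ** 2 for y in left) + \
              sum((y - rm) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gbr_fit(xs, ys, n_trees=100, lr=0.1):
    base = sum(ys) / len(ys)            # start from the mean target
    trees, resid = [], [y - base for y in ys]
    for _ in range(n_trees):
        stump = fit_stump(xs, resid)    # fit the current residuals
        trees.append(stump)
        resid = [r - lr * stump(x) for x, r in zip(xs, resid)]
    return lambda x: base + lr * sum(t(x) for t in trees)

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.2, 1.9, 3.1, 3.9, 5.2, 5.8]
model = gbr_fit(xs, ys)
```

Real GBR libraries use deeper trees and multiple features, but the residual-fitting loop is the same shape.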

Posted in Machine Learning | Leave a comment

“Random Forest Regression and Bagging Regression Using C#” in Visual Studio Magazine

I wrote an article titled “Random Forest Regression and Bagging Regression Using C#” in the January 2025 edition of Microsoft Visual Studio Magazine. See https://visualstudiomagazine.com/Articles/2025/01/02/Random-Forest-Regression-and-Bagging-Regression-Using-CSharp.aspx. A machine learning random forest regression system predicts a single numeric value. A random forest … Continue reading

Posted in Machine Learning | Leave a comment

NFL 2024 Week 19 (Wildcard) Predictions – Zoltar Agrees with Las Vegas Except for Chargers vs. Texans

Zoltar is my NFL football prediction computer program. It uses a neural network and a type of reinforcement learning. Here are Zoltar’s predictions for week #19 (wildcard playoff games) of the 2024 season. Zoltar: texans by 4 dog = chargers … Continue reading

Posted in Zoltar | Leave a comment

Kernel Ridge Regression Using C# with Newton Iteration Matrix Inverse Training

A regression problem is one where the goal is to predict a single numeric value. Common machine learning regression techniques include linear regression, k-nearest neighbors, Gaussian process, decision tree, AdaBoost (including bagging), gradient boosting, and kernel ridge regression … Continue reading
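The Newton iteration matrix inverse in the title refers to the Newton-Schulz scheme X ← X(2I − AX), which converges quadratically to A⁻¹ from the standard starting guess X₀ = Aᵀ / (‖A‖₁ ‖A‖∞). A small pure-Python sketch (the post itself uses C#; the 2×2 matrix is a made-up stand-in for the kernel-plus-ridge matrix):

```python
def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def newton_inverse(A, n_iter=30):
    """Newton-Schulz iteration X <- X (2I - A X); converges to A^-1
    with the initial guess X0 = A^T / (norm1(A) * normInf(A))."""
    n = len(A)
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norm_inf = max(sum(abs(v) for v in row) for row in A)
    X = [[A[j][i] / (norm1 * norm_inf) for j in range(n)] for i in range(n)]
    I2 = [[2.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(n_iter):
        AX = mat_mul(A, X)
        M = [[I2[i][j] - AX[i][j] for j in range(n)] for i in range(n)]
        X = mat_mul(X, M)
    return X

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite, like K + alpha*I
Ainv = newton_inverse(A)
```

The appeal for training is that the iteration uses only matrix multiplications, with no pivoting or decomposition, at the cost of more total arithmetic than a direct method.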

Posted in Machine Learning | Leave a comment

Gaussian Process Regression From Scratch Using C# with Cholesky Decomposition Matrix Inverse

Gaussian process regression (GPR) is a powerful machine learning technique to predict a single numeric value. When GPR works, it often works very well. The major disadvantages of GPR are: 1.) it uses matrix inversion, which means GPR is not … Continue reading
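In GPR training, the Cholesky factorization K = L·Lᵀ of the symmetric positive definite kernel matrix is the standard stable route: it can produce an explicit inverse or, more commonly, solve K·w = y directly by forward and back substitution. A pure-Python sketch of the solve (the post uses C#; the small matrix below is a made-up stand-in for a kernel matrix plus noise):

```python
import math

def cholesky(A):
    # lower-triangular L with A = L L^T; A must be symmetric positive definite
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_cholesky(L, b):
    # solve L L^T x = b: forward substitution, then back substitution
    n = len(L)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

K = [[4.0, 2.0], [2.0, 3.0]]   # stands in for K + noise*I (SPD)
y = [10.0, 8.0]
w = solve_cholesky(cholesky(K), y)
```

Cholesky costs roughly half as much as a general LU factorization and fails loudly (negative value under the square root) if the matrix is not positive definite, which is itself a useful diagnostic for a badly conditioned kernel.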

Posted in Machine Learning | Leave a comment