I wrote an article titled “Neural Network Dropout Training” in the May 2014 issue of Visual Studio Magazine. See http://visualstudiomagazine.com/articles/2014/05/01/neural-network-dropout-training.aspx.
Dropout training is a relatively new technique. Its purpose is to avoid over-fitting, a situation where training produces weights and bias values that give the neural network very high accuracy (often 100%) on the training data, but the model predicts poorly when presented with new, non-training data.
Although there are a handful of papers available on the Internet that explain the theory of dropout training, I couldn’t find any concrete examples of how to actually implement the technique. So, I figured it out myself.
The idea of dropout is simple: During the training process, as each data item is presented, a random 50% of the hidden nodes and their connections are dropped from the neural network. This prevents the hidden nodes from co-adapting with each other, forcing the model to rely on only a subset of the hidden nodes. This makes the resulting neural network more robust. Another way of looking at dropout training is that dropout generates many different virtual subsets of the original neural network and then these subsets are averaged to give a final network that generalizes well.
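To make the idea concrete, here is a minimal NumPy sketch of the forward pass for a single hidden layer with dropout. This is illustrative only, not the article's code: the layer sizes, the tanh activation, and the `forward_hidden` function are assumptions for the demo. During training, each hidden node is dropped with probability 0.5; at test time all nodes are used but the activations are halved so their expected magnitude matches what downstream weights saw during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_hidden(x, w, b, drop_prob=0.5, training=True):
    # Hypothetical hidden layer: tanh activation (an assumption for this demo).
    h = np.tanh(x @ w + b)
    if training:
        # For this data item, randomly drop ~50% of the hidden nodes
        # by zeroing their activations (and, implicitly, their connections).
        mask = rng.random(h.shape) >= drop_prob
        return h * mask
    # At test time, keep every node but scale activations down so their
    # expected value matches the training-time (dropped) network.
    return h * (1.0 - drop_prob)

# Tiny demo: 4 inputs feeding 6 hidden nodes (sizes chosen arbitrarily).
x = rng.standard_normal(4)
w = rng.standard_normal((4, 6))
b = np.zeros(6)

h_train = forward_hidden(x, w, b, training=True)  # some entries zeroed
h_test = forward_hidden(x, w, b, training=False)  # all entries, scaled by 0.5
```

In a full implementation, the same mask used in the forward pass must also be applied during back-propagation for that data item, so dropped nodes receive no weight updates.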
