A Neural Network using Just Python and NumPy

For the past dozen months or so, I’ve been working with neural network libraries including TensorFlow, Keras, CNTK, and PyTorch. As a mental exercise, I decided to implement a neural network from scratch, using just raw Python and NumPy.

I’ve done this before. In fact, until about three years ago, implementing a neural network from scratch was just about the only option available, because the major neural network libraries didn’t exist yet.

The exercise was a lot of fun. My implementation used online training — processing one item of training data at a time. Using mini-batch training is quite a bit more complicated, so I’ll save that for another day. Additionally, I used mean squared error rather than the more common cross entropy error, mostly because back-propagation with MSE is a bit easier to understand than back-propagation with cross entropy.
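To make the idea concrete, here is a minimal sketch of online training with MSE in raw Python and NumPy. The architecture (2-5-1, tanh hidden layer, identity output), the XOR toy data, and the learning rate are my own illustrative choices, not the exact code from my implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 2, 5, 1

# uniform random initialization of weights and biases
W1 = rng.uniform(-0.5, 0.5, (n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.uniform(-0.5, 0.5, (n_hid, n_out))
b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden layer, tanh activation
    y = h @ W2 + b2            # identity output (regression)
    return h, y

def mse(X, T):
    return np.mean([0.5 * np.sum((forward(x)[1] - t) ** 2)
                    for x, t in zip(X, T)])

# toy training data: XOR treated as a 0/1 regression target
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

loss_before = mse(X, T)
lr = 0.1
for epoch in range(5000):
    for x, t in zip(X, T):                  # online: one item at a time
        h, y = forward(x)
        grad_y = y - t                      # dE/dy for E = 0.5*(y - t)^2
        grad_W2 = np.outer(h, grad_y)       # output-layer weight gradient
        grad_b2 = grad_y
        grad_h = (W2 @ grad_y) * (1.0 - h ** 2)   # back-prop through tanh
        grad_W1 = np.outer(x, grad_h)       # hidden-layer weight gradient
        grad_b1 = grad_h
        W2 -= lr * grad_W2
        b2 -= lr * grad_b2
        W1 -= lr * grad_W1
        b1 -= lr * grad_b1
loss_after = mse(X, T)
```

The key point is that every training item triggers an immediate weight update; with mini-batch training you would instead accumulate gradients over a batch before updating, which complicates the bookkeeping.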

As I was coding, I was mildly surprised to realize how much I’d learned over the past several months while using Keras and CNTK. For example, when I initialized weights and biases, I used a uniform random distribution, but I could have used Glorot Uniform or Glorot Normal initialization, because I now understand those algorithms well.
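For reference, the three initialization schemes mentioned differ only in how the random range is chosen. A sketch in NumPy, where the fan-in and fan-out values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
fan_in, fan_out = 4, 7   # illustrative layer dimensions

# plain uniform: a fixed, arbitrary small range
w_uniform = rng.uniform(-0.01, 0.01, (fan_in, fan_out))

# Glorot Uniform: range scaled by layer size, limit = sqrt(6 / (fan_in + fan_out))
limit = np.sqrt(6.0 / (fan_in + fan_out))
w_glorot_uniform = rng.uniform(-limit, limit, (fan_in, fan_out))

# Glorot Normal: mean 0, std = sqrt(2 / (fan_in + fan_out))
std = np.sqrt(2.0 / (fan_in + fan_out))
w_glorot_normal = rng.normal(0.0, std, (fan_in, fan_out))
```

The Glorot schemes scale the magnitude of the initial weights to the size of the layer, which helps keep signal variance roughly constant as it flows through the network.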

The moral of the story is that the field of machine learning is moving very fast and it’s important to stay as up to date as possible. And that requires daily effort.



The F-104 jet fighter set a world speed record of 1404 miles per hour (2260 km/h) in 1958. That’s fast, even by the standards of modern jet fighters 60 years later.
