An ordinal classification problem (confusingly, also called ordinal regression) is one where the goal is to predict a class label in situations where the labels have an ordering. For example, you might want to predict the price of a house, based on things like area in sq. feet, where the house price in the training data is 0 = low, 1 = medium, 2 = high, 3 = very high.
You could just use regular neural classification techniques, but that approach doesn't take advantage of the ordering information in the data. Put differently, if the true class label is 2 = high, the error for a prediction of 0 = low should be greater than the error for a prediction of 1 = medium.
For ordinal classification, I use a technique that I haven’t seen described anywhere else. But the idea is obvious so maybe the technique is used under some fancy name. If the training data has ordinal class labels like 0, 1, 2, 3 then I convert them to float targets of 0.125, 0.375, 0.625, 0.875. I create a neural network that emits a single numeric value between 0.0 and 1.0 and use mean squared error to compare a computed output with the associated float target. If you think this through, you’ll see how the ordering information is used.
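To see why mean squared error on the float targets respects the ordering, here's a small standalone sketch (not part of the demo program) for k = 4 classes:

```python
import numpy as np

# Float targets for k = 4 ordinal classes.
targets = np.array([0.125, 0.375, 0.625, 0.875], dtype=np.float32)

true_class = 2  # true label is 2 = high
# Squared error if the network output lands on the "medium" target:
err_medium = (targets[true_class] - targets[1]) ** 2  # (0.625 - 0.375)^2
# Squared error if the output lands on the "low" target:
err_low = (targets[true_class] - targets[0]) ** 2     # (0.625 - 0.125)^2

print(err_medium)  # 0.0625
print(err_low)     # 0.25 -- the farther prediction gets the larger loss
```

Because the targets are evenly spaced, a prediction that is off by two ordinal classes incurs four times the squared-error loss of a prediction that is off by one class.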
I recently upgraded my Keras code library to version 2.6 and so I figured I’d code up a demo of ordinal classification using that version. I generated a 200-item set of synthetic training data that looks like:
-1  0.1275  0 1 0  2  0 0 1
 1  0.1100  1 0 0  3  1 0 0
-1  0.1375  0 0 1  0  0 1 0
 1  0.1975  0 1 0  2  0 0 1
. . .
Each item is a house. The first column is air conditioning (-1 = no, +1 = yes), the second column is the area in square feet (divided by 10,000), the next three columns are the one-hot encoded style (1,0,0 = art_deco, 0,1,0 = bungalow, 0,0,1 = colonial), the next column is the price (0 = low, 1 = medium, 2 = high, 3 = very high), and the last three columns are the local school (1,0,0 = johnson, 0,1,0 = kennedy, 0,0,1 = lincoln).
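As a concrete check of the encoding scheme, here's a small hypothetical helper (not part of the demo program) that builds the eight predictor values for one house; the price column is the target to predict and so is not part of the predictor vector:

```python
# Hypothetical helper: encode the eight predictor values for one house.
def encode_house(has_ac, sq_feet, style, school):
  styles  = {"art_deco": [1,0,0], "bungalow": [0,1,0], "colonial": [0,0,1]}
  schools = {"johnson": [1,0,0], "kennedy": [0,1,0], "lincoln": [0,0,1]}
  x = [1 if has_ac else -1, sq_feet / 10000.0]  # AC flag, normalized area
  return x + styles[style] + schools[school]

print(encode_house(False, 1275, "bungalow", "lincoln"))
# [-1, 0.1275, 0, 1, 0, 0, 0, 1]
```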
The key to the ordinal classification technique I use is mapping ordinal labels to float targets. For k = 4 classes, the idea can be explained graphically:
0---------------1---------------2---------------3---------------4
0.00           0.25            0.50            0.75            1.00
      0.125           0.375           0.625           0.875
There are 4 bins, one for each class label. The float targets are the midpoints of the bins when the total length is normalized to 1.0. A function to compute the targets for ordinal classification is:
def make_float_targets(k):
  targets = np.zeros(k, dtype=np.float32)
  start = 1.0 / (2 * k)  # like 0.125
  delta = 1.0 / k        # like 0.250
  for i in range(k):
    targets[i] = start + (i * delta)
  return targets
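A quick sanity check (with the function repeated so the snippet runs standalone) reproduces the midpoint values in the diagram:

```python
import numpy as np

def make_float_targets(k):
  targets = np.zeros(k, dtype=np.float32)
  start = 1.0 / (2 * k)  # first midpoint, like 0.125
  delta = 1.0 / k        # bin width, like 0.250
  for i in range(k):
    targets[i] = start + (i * delta)
  return targets

print(make_float_targets(4))  # [0.125 0.375 0.625 0.875]
```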
I coded up a demo using Keras 2.6 without too much trouble, other than the usual glitches that happen with any neural system. I noticed that when I computed classification accuracy, using an item-by-item approach was brutally slow. I suspect this is because there is a lot of conversion between Numpy arrays and Keras/TensorFlow tensors. Anyway, I wrote an accuracy function that used a set approach.
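The set-based idea can be sketched with plain NumPy, using made-up arrays in place of model.predict() output: a prediction counts as correct when it falls within half a bin width (0.125 for k = 4) of its float target, and the whole comparison is done in one vectorized pass instead of an item-by-item loop.

```python
import numpy as np

k = 4
max_delta = (1.0 / k) / 2  # 0.125: half of one bin's width

# Made-up true float targets and computed outputs (stand-ins for
# train_y and model.predict(train_x) -- no Keras model needed here).
targets = np.array([[0.125], [0.875], [0.625]])
preds   = np.array([[0.150], [0.600], [0.610]])

correct = np.abs(targets - preds) <= max_delta  # one vectorized pass
acc = np.sum(correct) / len(targets)            # 2 of 3 are correct
print(acc)
```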
Good fun. Neural network technologies have advanced quickly, but are still relatively crude. When more powerful computing engines become available (probably via quantum computing), neural networks will do things that are impossible to imagine today.

Advances in aircraft engines enabled amazing performance improvements in just a few years. Left: The British S.E.5a (1917) had a top speed of 130 mph. Center: Just 20 years later, the British Spitfire Mk I (1937) had a top speed of 360 mph. Right: Just 20 years later, the U.S. Vought F-8 Crusader (1957) had a top speed of 1,200 mph.
Code and data below. Long.
# houses_ordinal.py
# ordinal classification on House data
# Keras 2.6.0 in TensorFlow 2.6.0
# Anaconda3-2020.02 Python 3.7.6 Windows 10
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2' # suppress CPU warn
import numpy as np
from tensorflow import keras as K
class MyLogger(K.callbacks.Callback):
  def __init__(self, n, data_x, data_y):
    self.n = n  # print loss every n epochs
    self.data_x = data_x
    self.data_y = data_y

  def on_epoch_end(self, epoch, logs={}):
    if epoch % self.n == 0:
      curr_loss = logs.get('loss')  # loss on curr batch
      print("epoch = %4d  curr batch loss (MSE) = %0.6f " \
        % (epoch, curr_loss))
# -----------------------------------------------------------
def accuracy_fast(model, data_x, data_y, k):
  max_delta = (1.0 / k) / 2  # if k=4, max_delta = 0.125
  all_oupt = model.predict(data_x)  # np array
  computed_deltas = np.abs(data_y - all_oupt)
  correct_items = (computed_deltas <= max_delta)
  n_correct = np.sum(correct_items)
  n = len(data_x)
  acc = n_correct / n
  return acc
# -----------------------------------------------------------
def float_oupt_to_class(oupt, k):
  # if k=4, end_pts = [0.0, 0.25, 0.50, 0.75, 1.0]
  end_pts = np.zeros(k+1, dtype=np.float32)
  delta = 1.0 / k
  for i in range(k):
    end_pts[i] = i * delta
  end_pts[k] = 1.0
  for i in range(k):
    if oupt >= end_pts[i] and oupt <= end_pts[i+1]:
      return i
  return -1  # fatal error
# -----------------------------------------------------------
def main():
  # 0. get ready
  print("\nHouse price ordinal classification using Keras ")
  print("Prices: 0 = low, 1 = medium, 2 = high, 3 = v high ")
  print("Features: AC, sq ft, style, (price), school ")
  np.random.seed(1)
  kv = K.__version__
  print("Using Keras version: ", kv)

  # 1. load ordinal train data from file
  print("\nLoading raw House data into memory ")
  train_file = ".\\Data\\houses_train_ord.txt"  # 200 lines
  train_x = np.loadtxt(train_file,
    usecols=[0,1,2,3,4, 6,7,8],
    delimiter="\t", skiprows=0, dtype=np.float32)
  train_y = np.loadtxt(train_file, usecols=[5],
    delimiter="\t", skiprows=0, dtype=np.float32)

  # 2. convert ordinal labels (like 2) to float targets
  print("\nConverting ordinal labels to float targets ")
  n = len(train_y)
  for i in range(n):  # hard-coded is simple, efficient
    if int(train_y[i]) == 0: train_y[i] = 0.125
    elif int(train_y[i]) == 1: train_y[i] = 0.375
    elif int(train_y[i]) == 2: train_y[i] = 0.625
    elif int(train_y[i]) == 3: train_y[i] = 0.875
    else: print("Fatal logic error ")
  train_y = np.reshape(train_y, (-1,1))  # 2D

  # 3. create neural network classifier
  print("\nCreating 8-10-10-1 neural network ")
  model = K.models.Sequential()  # could use Model()
  model.add(K.layers.Dense(units=10, input_dim=8,
    activation='tanh'))  # hid1
  model.add(K.layers.Dense(units=10,
    activation='tanh'))  # hid2
  model.add(K.layers.Dense(units=1,
    activation='sigmoid'))  # output layer
  lrn_rate = 0.010
  opt = K.optimizers.Adam(learning_rate=lrn_rate)
  model.compile(loss='mean_squared_error',
    optimizer=opt, metrics=['mse'])

  # ---------------------------------------------------------

  # 4. train model
  bat_size = 4
  max_epochs = 500
  log_every = 100
  my_logger = MyLogger(log_every, train_x, train_y)
  print("\nbat_size = %3d " % bat_size)
  print("lrn_rate = %0.3f " % lrn_rate)
  print("loss = MSELoss ")
  print("optimizer = Adam ")
  print("max_epochs = %3d " % max_epochs)
  print("\nStarting training ")
  h = model.fit(train_x, train_y, batch_size=bat_size,
    epochs=max_epochs, verbose=0, callbacks=[my_logger])
  print("Training finished ")

  # 5. evaluate trained model loss and accuracy
  eval_results = model.evaluate(train_x, train_y, verbose=0)
  print("\nFinal overall loss (MSE) on train = %0.6f" \
    % eval_results[0])
  train_acc = accuracy_fast(model, train_x, train_y, k=4)
  print("Accuracy on train data = %0.4f" % train_acc)

  # 6. save trained model
  print("\nSaving trained model as houses_model.h5 ")
  # model.save_weights(".\\Models\\houses_model_wts.h5")
  # model.save(".\\Models\\houses_model.h5")

  # ---------------------------------------------------------

  # 7. use model to make a prediction
  np.set_printoptions(formatter={'float': '{: 0.2f}'.format})
  print("\nSetting up AC = no, sqft = 2300, style = colonial")
  print(" school district = kennedy")
  x = np.array([[-1, 0.2300, 0,0,1, 0,1,0]],
    dtype=np.float32)
  print("\nUsing model to predict house price for: ")
  print(x)
  pred_price = model.predict(x)[0][0]  # raw output in [0.0, 1.0]
  c = float_oupt_to_class(pred_price, k=4)
  labels = ["low", "medium", "high", "very high"]
  print("\nPredicted price raw output: %0.4f" % pred_price)
  print("Predicted price ordinal label: %d " % c)
  print("Predicted price friendly class: \"%s\" " \
    % labels[c])
  print("\nEnd House ordinal price using Keras ")

if __name__ == "__main__":
  main()
# data: copy, paste, save as houses_train_ord.txt
# -1 0.1275 0 1 0 0 0 0 1
# 1 0.1100 1 0 0 0 1 0 0
# -1 0.1375 0 0 1 0 0 1 0
# 1 0.1975 0 1 0 2 0 0 1
# -1 0.1200 0 0 1 0 1 0 0
# -1 0.2500 0 1 0 2 0 1 0
# 1 0.1275 1 0 0 1 0 0 1
# -1 0.1750 0 0 1 1 0 0 1
# -1 0.2500 0 1 0 2 0 0 1
# 1 0.1800 0 1 0 1 1 0 0
# 1 0.0975 1 0 0 0 0 0 1
# -1 0.1100 0 1 0 0 0 1 0
# 1 0.1975 0 0 1 1 0 0 1
# -1 0.3175 1 0 0 3 0 1 0
# -1 0.1700 0 1 0 1 1 0 0
# 1 0.1650 0 1 0 1 0 1 0
# -1 0.2250 0 1 0 2 0 1 0
# -1 0.2125 0 1 0 2 0 1 0
# 1 0.1675 0 1 0 1 0 1 0
# 1 0.1550 1 0 0 1 0 1 0
# -1 0.1375 0 0 1 0 1 0 0
# -1 0.2425 0 1 0 2 1 0 0
# 1 0.3200 0 0 1 3 0 1 0
# -1 0.3075 1 0 0 3 0 1 0
# -1 0.2700 1 0 0 2 0 0 1
# 1 0.1700 0 1 0 1 0 0 1
# -1 0.1475 1 0 0 1 1 0 0
# -1 0.2500 0 1 0 2 0 0 1
# -1 0.2750 1 0 0 2 0 0 1
# -1 0.2000 1 0 0 2 1 0 0
# -1 0.1100 0 0 1 0 1 0 0
# -1 0.3400 1 0 0 3 0 1 0
# 1 0.3000 0 0 1 3 1 0 0
# 1 0.1550 0 1 0 1 0 1 0
# -1 0.2150 0 1 0 1 0 0 1
# -1 0.2900 0 0 1 3 0 1 0
# 1 0.2750 0 0 1 2 0 1 0
# 1 0.2175 0 1 0 2 0 1 0
# 1 0.2150 0 1 0 2 0 0 1
# 1 0.1050 1 0 0 1 1 0 0
# -1 0.2775 1 0 0 2 0 0 1
# -1 0.3225 1 0 0 3 0 1 0
# 1 0.2075 0 1 0 2 1 0 0
# -1 0.3225 1 0 0 3 0 0 1
# 1 0.2800 0 0 1 3 0 0 1
# -1 0.1575 0 1 0 1 0 0 1
# 1 0.3250 0 0 1 3 0 0 1
# -1 0.2750 1 0 0 2 0 0 1
# 1 0.1250 1 0 0 1 1 0 0
# -1 0.2325 0 1 0 2 0 0 1
# 1 0.1825 1 0 0 2 1 0 0
# -1 0.2600 0 1 0 2 0 1 0
# -1 0.3075 1 0 0 3 0 0 1
# -1 0.2875 1 0 0 3 0 0 1
# 1 0.2300 0 1 0 2 0 1 0
# 1 0.3100 0 0 1 3 1 0 0
# -1 0.2750 1 0 0 2 0 0 1
# 1 0.1125 0 1 0 0 0 0 1
# 1 0.2525 1 0 0 2 1 0 0
# 1 0.1625 0 1 0 1 0 1 0
# 1 0.1075 1 0 0 1 0 0 1
# -1 0.2200 0 1 0 2 0 1 0
# -1 0.2300 0 1 0 2 0 1 0
# -1 0.3100 1 0 0 3 0 1 0
# -1 0.2875 1 0 0 3 0 1 0
# 1 0.3375 0 0 1 3 0 0 1
# -1 0.1450 0 0 1 0 1 0 0
# -1 0.2650 1 0 0 2 1 0 0
# 1 0.2225 0 1 0 2 1 0 0
# -1 0.2300 0 1 0 2 0 1 0
# 1 0.1025 0 1 0 0 0 1 0
# 1 0.1925 0 1 0 2 1 0 0
# -1 0.2525 0 1 0 2 0 1 0
# -1 0.1650 0 1 0 1 0 1 0
# 1 0.1650 0 1 0 1 0 1 0
# -1 0.1300 1 0 0 1 0 1 0
# -1 0.2900 1 0 0 3 1 0 0
# -1 0.2175 0 1 0 1 0 0 1
# 1 0.2300 1 0 0 2 1 0 0
# -1 0.3000 1 0 0 3 1 0 0
# 1 0.2125 0 1 0 1 1 0 0
# 1 0.2825 0 0 1 2 0 0 1
# 1 0.3125 0 0 1 3 0 1 0
# 1 0.2500 0 1 0 2 1 0 0
# -1 0.2375 0 1 0 2 0 0 1
# 1 0.3375 0 0 1 3 0 1 0
# 1 0.2000 0 1 0 2 0 0 1
# -1 0.2100 0 1 0 1 0 1 0
# -1 0.3225 1 0 0 3 1 0 0
# 1 0.2375 0 0 1 2 1 0 0
# -1 0.2250 0 1 0 2 0 1 0
# 1 0.1250 1 0 0 1 0 0 1
# -1 0.1925 1 0 0 1 1 0 0
# -1 0.2750 0 1 0 2 0 0 1
# 1 0.2200 0 1 0 2 1 0 0
# -1 0.1675 0 1 0 1 1 0 0
# -1 0.1700 0 1 0 1 0 0 1
# -1 0.1350 0 0 1 0 0 1 0
# -1 0.1600 0 1 0 1 0 1 0
# -1 0.2125 0 1 0 1 0 0 1
# 1 0.1200 1 0 0 1 0 0 1
# -1 0.2100 0 1 0 2 0 1 0
# -1 0.1250 0 0 1 0 0 0 1
# -1 0.2550 0 1 0 2 0 1 0
# 1 0.2750 0 0 1 2 0 1 0
# -1 0.2200 0 0 1 1 1 0 0
# 1 0.0925 1 0 0 1 1 0 0
# 1 0.3350 0 0 1 3 0 1 0
# -1 0.2250 0 1 0 2 0 0 1
# -1 0.2425 0 1 0 2 1 0 0
# 1 0.1275 0 1 0 1 0 1 0
# 1 0.3350 0 1 0 3 1 0 0
# -1 0.1850 0 1 0 1 0 0 1
# 1 0.1600 0 1 0 1 1 0 0
# -1 0.2400 0 1 0 2 1 0 0
# 1 0.3300 0 0 1 3 0 0 1
# -1 0.3075 1 0 0 3 1 0 0
# 1 0.2900 0 1 0 3 0 0 1
# -1 0.0950 0 0 1 0 1 0 0
# -1 0.1900 0 1 0 1 0 0 1
# 1 0.1375 0 1 0 1 1 0 0
# -1 0.2100 0 1 0 1 1 0 0
# -1 0.3025 1 0 0 3 1 0 0
# 1 0.1375 1 0 0 0 0 0 1
# -1 0.1475 1 0 0 1 0 1 0
# 1 0.2150 0 1 0 2 1 0 0
# -1 0.2400 0 1 0 2 1 0 0
# -1 0.1375 0 0 1 0 0 0 1
# 1 0.2200 1 0 0 2 1 0 0
# -1 0.1150 0 0 1 0 0 1 0
# 1 0.1825 0 0 1 2 0 1 0
# -1 0.3225 1 0 0 3 0 0 1
# -1 0.1450 0 0 1 0 0 0 1
# 1 0.1675 0 1 0 1 1 0 0
# 1 0.3325 0 0 1 3 0 1 0
# 1 0.1075 1 0 0 0 0 0 1
# -1 0.1350 0 0 1 0 1 0 0
# -1 0.1450 0 0 1 0 1 0 0
# 1 0.1575 0 1 0 1 1 0 0
# -1 0.1825 0 1 0 1 0 0 1
# -1 0.2450 0 1 0 2 0 1 0
# 1 0.1425 1 0 0 1 1 0 0
# 1 0.2175 0 1 0 2 0 0 1
# 1 0.2325 0 1 0 2 0 1 0
# -1 0.2875 1 0 0 3 1 0 0
# 1 0.2625 0 1 0 2 0 0 1
# 1 0.1575 0 1 0 1 0 0 1
# 1 0.2750 0 0 1 2 1 0 0
# -1 0.2500 0 1 0 2 1 0 0
# -1 0.2400 0 1 0 2 0 1 0
# 1 0.1100 1 0 0 0 0 0 1
# -1 0.2975 1 0 0 3 0 0 1
# -1 0.1725 0 0 1 1 1 0 0
# 1 0.3225 0 0 1 3 1 0 0
# -1 0.1450 0 0 1 0 0 0 1
# 1 0.1725 0 1 0 1 0 1 0
# 1 0.3050 0 0 1 3 1 0 0
# -1 0.3200 1 0 0 3 0 0 1
# 1 0.1450 1 0 0 1 1 0 0
# -1 0.3175 1 0 0 3 0 1 0
# 1 0.1475 1 0 0 1 0 1 0
# 1 0.2575 0 1 0 2 1 0 0
# 1 0.1200 1 0 0 1 0 0 1
# -1 0.2425 0 1 0 2 0 1 0
# -1 0.0900 1 0 0 0 1 0 0
# -1 0.0925 0 0 1 0 1 0 0
# -1 0.1650 0 0 1 1 0 1 0
# 1 0.1025 1 0 0 0 0 0 1
# -1 0.1475 0 0 1 0 0 0 1
# 1 0.2225 1 0 0 2 0 0 1
# 1 0.3250 1 0 0 3 0 0 1
# 1 0.2800 0 0 1 2 1 0 0
# 1 0.2625 0 1 0 2 0 0 1
# 1 0.1450 1 0 0 1 0 1 0
# 1 0.2350 0 1 0 2 0 1 0
# -1 0.3425 0 0 1 3 1 0 0
# -1 0.1575 0 1 0 1 0 0 1
# -1 0.3075 0 0 1 2 0 1 0
# -1 0.0950 0 0 1 0 0 1 0
# -1 0.1925 0 1 0 1 0 0 1
# 1 0.1300 1 0 0 1 1 0 0
# -1 0.3075 1 0 0 3 0 1 0
# -1 0.2000 0 1 0 1 1 0 0
# 1 0.2475 0 1 0 3 1 0 0
# -1 0.2825 1 0 0 3 1 0 0
# 1 0.2425 0 1 0 3 0 1 0
# -1 0.2625 0 0 1 2 1 0 0
# 1 0.0900 1 0 0 0 1 0 0
# 1 0.2800 0 0 1 2 0 0 1
# 1 0.2600 0 1 0 2 0 1 0
# 1 0.0900 0 1 0 0 0 1 0
# 1 0.2900 0 0 1 3 1 0 0
# 1 0.1950 0 1 0 2 0 1 0
# 1 0.2325 0 1 0 2 1 0 0
# 1 0.2025 0 1 0 1 0 1 0
# 1 0.3025 0 0 1 3 1 0 0
# -1 0.1800 0 0 1 1 0 1 0
# -1 0.2225 0 1 0 2 1 0 0
# -1 0.1425 0 0 1 0 1 0 0
# -1 0.2725 1 0 0 2 0 0 1
