When I work with PyTorch I almost always use a traditional local installation. Some of my colleagues like to use the Google colab online environment because you can run a PyTorch program in a browser on a machine that has absolutely no prerequisites. This technique is also useful when I teach an introductory PyTorch training class because it doesn’t require a complicated setup process.
I hadn’t used colab in quite a long time so one cold, wet Pacific Northwest morning, after walking my dogs, I figured I’d run one of my standard PyTorch demo programs on colab.
I grabbed a Windows laptop machine that didn’t have Python or PyTorch installed. I launched a Chrome browser and logged in using my Google gmail account. I navigated to colab.research.google.com to launch a new colab project via File | New Notebook.
I wanted to use a GPU instead of the default CPU so I clicked on Runtime | Change runtime type | T4 GPU | Save.
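The demo program below hard-codes the CUDA device. If you want a notebook that degrades gracefully when no GPU runtime is attached (Colab sometimes falls back to CPU), a common defensive sketch is:

```python
import torch as T

# use the GPU if the runtime has one attached, otherwise fall back to CPU
device = T.device("cuda:0" if T.cuda.is_available() else "cpu")
print(device)
```

The rest of the program stays unchanged because all tensors and the network are moved with .to(device).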
Next, I created and uploaded a file of training data and a file of test data to the Session storage. The comma-delimited data looks like:
1, 0.24, 1, 0, 0, 0.2950, 2
-1, 0.39, 0, 0, 1, 0.5120, 1
1, 0.63, 0, 1, 0, 0.7580, 0
-1, 0.36, 1, 0, 0, 0.4450, 1
. . .
Each line represents a person. The fields are sex (male = -1, female = +1), age (divided by 100), state (one-hot encoded: Michigan = 1 0 0, Nebraska = 0 1 0, Oklahoma = 0 0 1), income (divided by $100,000), and political leaning (0 = conservative, 1 = moderate, 2 = liberal). There are 200 training items and 40 test items.
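The encoding and normalization from raw values (age in years, income in dollars, state as a string) to this format is mechanical. A minimal sketch, using a hypothetical encode_person() helper that is not part of the demo:

```python
def encode_person(sex, age, state, income):
  # sex: "M" -> -1, "F" -> +1
  sex_enc = -1 if sex == "M" else 1
  # state: one-hot in the order michigan, nebraska, oklahoma
  states = ["michigan", "nebraska", "oklahoma"]
  state_enc = [0, 0, 0]
  state_enc[states.index(state)] = 1
  # age divided by 100, income divided by $100,000
  return [sex_enc, age / 100] + state_enc + [income / 100_000]

print(encode_person("M", 30, "oklahoma", 50_000))
# [-1, 0.3, 0, 0, 1, 0.5] -- the demo's prediction input
```

The politics value (0, 1, or 2) is the label to predict, so it isn't part of the encoded input.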
Next, I copied the demo program into the single code cell in the project Notebook. I clicked on the Run icon in the code cell. My first attempt had a minor typo and the error was flagged with a red underline. After correcting the error, I clicked on Run, and this time the demo executed successfully. The output was:
Begin People predict politics type

Creating People Datasets

Creating 6-(10-10)-3 neural network

bat_size = 10
loss = NLLLoss()
optimizer = SGD
max_epochs = 1000
lrn_rate = 0.010

Starting training
epoch =     0  |  loss =    23.1379
epoch =   200  |  loss =    19.1540
epoch =   400  |  loss =    16.1640
epoch =   600  |  loss =    12.8285
epoch =   800  |  loss =    11.5388
Training done

Computing model accuracy
Accuracy on training data = 0.8150
Accuracy on test data = 0.7500

Predicting politics for M 30 oklahoma $50,000:
[[0.6905 0.3049 0.0047]]

Saving trained model state

End People predict politics demo
Very nice. A good way to start my day.
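A side note on the prediction display: because the network's forward() method applies log_softmax() (to pair with NLLLoss), the raw outputs are log-probabilities, and applying exp() recovers pseudo-probabilities that sum to 1. A small sketch with made-up output-node values:

```python
import numpy as np

# made-up raw output-node values for one item, three classes
z = np.array([2.0, 1.2, -3.1])

log_probs = z - np.log(np.sum(np.exp(z)))  # what log_softmax() computes
probs = np.exp(log_probs)                  # pseudo-probabilities

print(probs)  # the three values sum to 1.0; largest is class 0
```

This is why the demo calls T.exp(logits) before printing the prediction.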

One of the characteristics of using colab is that it has multiple layers — browser to colab to Jupyter to Google cloud to PyTorch to Python to runtime.
The traditional clothing of many Russian republics has multiple layers to deal with extreme cold.
Left: Tuva is in southern Siberia and has a population of about 340,000 people. Center: Yakutia (officially the Sakha Republic) is very large geographically and has a population of about 645,000 people. It’s probably best known (to American engineers anyway) as the Yakutsk territory in the Risk game. Right: Khakassia is medium size and has a population of about 540,000 people.
As I write this blog post, Russia is out of control with its terrible invasion of Ukraine. I have worked with many engineers from Russia at the huge U.S. tech company I work for, and they’ve all been good guys. I hope that by the time you are reading this post, the conflict in Ukraine is over.
Demo program:
# people_politics_colab.py
# predict politics type from sex, age, state, income
# running on colab-GPU

import numpy as np
import torch as T
device = T.device('cuda:0')  # apply to Tensor or Module

# -----------------------------------------------------------

class PeopleDataset(T.utils.data.Dataset):
  # sex  age  state  income  politics
  # -1, 0.27, 0, 1, 0, 0.7610, 2
  # +1, 0.19, 0, 0, 1, 0.6550, 0
  # sex: -1 = male, +1 = female
  # state: michigan, nebraska, oklahoma
  # politics: conservative, moderate, liberal

  def __init__(self, src_file):
    all_xy = np.loadtxt(src_file, usecols=range(0,7),
      delimiter=",", comments="#", dtype=np.float32)
    tmp_x = all_xy[:,0:6]  # cols [0,6) = [0,5]
    tmp_y = all_xy[:,6]    # 1-D

    self.x_data = T.tensor(tmp_x,
      dtype=T.float32).to(device)
    self.y_data = T.tensor(tmp_y,
      dtype=T.int64).to(device)  # 1-D

  def __len__(self):
    return len(self.x_data)

  def __getitem__(self, idx):
    preds = self.x_data[idx]
    trgts = self.y_data[idx]
    return preds, trgts  # as a Tuple

# -----------------------------------------------------------

class Net(T.nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.hid1 = T.nn.Linear(6, 10)  # 6-(10-10)-3
    self.hid2 = T.nn.Linear(10, 10)
    self.oupt = T.nn.Linear(10, 3)

    T.nn.init.xavier_uniform_(self.hid1.weight)
    T.nn.init.zeros_(self.hid1.bias)
    T.nn.init.xavier_uniform_(self.hid2.weight)
    T.nn.init.zeros_(self.hid2.bias)
    T.nn.init.xavier_uniform_(self.oupt.weight)
    T.nn.init.zeros_(self.oupt.bias)

  def forward(self, x):
    z = T.tanh(self.hid1(x))
    z = T.tanh(self.hid2(z))
    z = T.log_softmax(self.oupt(z), dim=1)  # NLLLoss()
    return z

# -----------------------------------------------------------

def accuracy(model, ds):
  # assumes model.eval()
  # item-by-item version
  n_correct = 0; n_wrong = 0
  for i in range(len(ds)):
    X = ds[i][0].reshape(1,-1)  # make it a batch
    Y = ds[i][1].reshape(1)     # 0 1 or 2, 1D
    with T.no_grad():
      oupt = model(X)  # logits form

    big_idx = T.argmax(oupt)  # 0 or 1 or 2
    if big_idx == Y:
      n_correct += 1
    else:
      n_wrong += 1

  acc = (n_correct * 1.0) / (n_correct + n_wrong)
  return acc

# -----------------------------------------------------------

def accuracy_quick(model, dataset):
  # assumes model.eval()
  X = dataset[0:len(dataset)][0]
  # Y = T.flatten(dataset[0:len(dataset)][1])
  Y = dataset[0:len(dataset)][1]

  with T.no_grad():
    oupt = model(X)
  # (_, arg_maxs) = T.max(oupt, dim=1)
  arg_maxs = T.argmax(oupt, dim=1)  # argmax() is new
  num_correct = T.sum(Y==arg_maxs)
  acc = (num_correct * 1.0 / len(dataset))
  return acc.item()

# -----------------------------------------------------------

def confusion_matrix_multi(model, ds, n_classes):
  # n_classes must be greater than 2
  cm = np.zeros((n_classes,n_classes), dtype=np.int64)
  for i in range(len(ds)):
    X = ds[i][0].reshape(1,-1)  # make it a batch
    Y = ds[i][1].reshape(1)  # actual class 0 1 or 2, 1D
    with T.no_grad():
      oupt = model(X)  # logits form
    pred_class = T.argmax(oupt)  # 0,1,2
    cm[Y][pred_class] += 1
  return cm

# -----------------------------------------------------------

def show_confusion(cm):
  dim = len(cm)
  mx = np.max(cm)             # largest count in cm
  wid = len(str(mx)) + 1      # width to print
  fmt = "%" + str(wid) + "d"  # like "%3d"
  for i in range(dim):
    print("actual ", end="")
    print("%3d:" % i, end="")
    for j in range(dim):
      print(fmt % cm[i][j], end="")
    print("")
  print("------------")
  print("predicted ", end="")
  for j in range(dim):
    print(fmt % j, end="")
  print("")

# -----------------------------------------------------------

def main():
  # 0. get started
  print("\nBegin People predict politics type ")
  T.manual_seed(1)
  np.random.seed(1)

  # 1. create DataLoader objects
  print("\nCreating People Datasets ")

  train_file = "./people_train.txt"  # colab tmp local
  train_ds = PeopleDataset(train_file)  # 200 rows

  test_file = "./people_test.txt"
  test_ds = PeopleDataset(test_file)  # 40 rows

  bat_size = 10
  train_ldr = T.utils.data.DataLoader(train_ds,
    batch_size=bat_size, shuffle=True)

  # 2. create network
  print("\nCreating 6-(10-10)-3 neural network ")
  net = Net().to(device)
  net.train()

  # 3. train model
  max_epochs = 1000
  ep_log_interval = 200
  lrn_rate = 0.01

  loss_func = T.nn.NLLLoss()  # assumes log_softmax()
  optimizer = T.optim.SGD(net.parameters(), lr=lrn_rate)

  print("\nbat_size = %3d " % bat_size)
  print("loss = " + str(loss_func))
  print("optimizer = SGD")
  print("max_epochs = %3d " % max_epochs)
  print("lrn_rate = %0.3f " % lrn_rate)

  print("\nStarting training ")
  for epoch in range(0, max_epochs):
    # T.manual_seed(epoch+1)  # checkpoint reproducibility
    epoch_loss = 0  # for one full epoch

    for (batch_idx, batch) in enumerate(train_ldr):
      X = batch[0]  # inputs
      Y = batch[1]  # correct class/label/politics

      optimizer.zero_grad()
      oupt = net(X)
      loss_val = loss_func(oupt, Y)  # a tensor
      epoch_loss += loss_val.item()  # accumulate
      loss_val.backward()
      optimizer.step()

    if epoch % ep_log_interval == 0:
      print("epoch = %5d  |  loss = %10.4f" % \
        (epoch, epoch_loss))
  print("Training done ")

  # 4. evaluate model accuracy
  print("\nComputing model accuracy")
  net.eval()
  acc_train = accuracy(net, train_ds)  # item-by-item
  print("Accuracy on training data = %0.4f" % acc_train)
  acc_test = accuracy(net, test_ds)
  print("Accuracy on test data = %0.4f" % acc_test)

  # 5. make a prediction
  print("\nPredicting politics for M 30 oklahoma $50,000: ")
  X = np.array([[-1, 0.30, 0,0,1, 0.5000]], dtype=np.float32)
  X = T.tensor(X, dtype=T.float32).to(device)

  with T.no_grad():
    logits = net(X)      # do not sum to 1.0
  probs = T.exp(logits)  # sum to 1.0
  probs = probs.cpu().numpy()  # numpy vector prints better
  np.set_printoptions(precision=4, suppress=True)
  print(probs)

  # 6. save model (state_dict approach)
  print("\nSaving trained model state ")
  fn = "./people_model.pt"
  T.save(net.state_dict(), fn)

  print("\nEnd People predict politics demo ")

if __name__ == "__main__":
  main()
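Step 6 of the demo saves the trained model's state_dict. Reloading it later requires re-instantiating the network first, because a state_dict holds only weights and biases, not the architecture. A minimal self-contained sketch on CPU (an untrained copy of the 6-(10-10)-3 network stands in for the trained demo model):

```python
import torch as T

device = T.device("cpu")  # "cuda:0" on the Colab GPU runtime

class Net(T.nn.Module):  # same 6-(10-10)-3 architecture as the demo
  def __init__(self):
    super(Net, self).__init__()
    self.hid1 = T.nn.Linear(6, 10)
    self.hid2 = T.nn.Linear(10, 10)
    self.oupt = T.nn.Linear(10, 3)
  def forward(self, x):
    z = T.tanh(self.hid1(x))
    z = T.tanh(self.hid2(z))
    return T.log_softmax(self.oupt(z), dim=1)

net = Net().to(device)
T.save(net.state_dict(), "./people_model.pt")  # save, as in step 6

model = Net().to(device)  # reload: fresh network plus saved weights
model.load_state_dict(T.load("./people_model.pt"))
model.eval()  # set eval mode before making predictions
```

One Colab caveat: session storage is temporary, so a saved .pt file disappears when the runtime is recycled unless you download it or copy it to Google Drive.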
Training data:
# people_train.txt
# sex (M=-1, F=1)  age  state (michigan,
#  nebraska, oklahoma)  income
# politics (conservative, moderate, liberal)
#
1, 0.24, 1, 0, 0, 0.2950, 2
-1, 0.39, 0, 0, 1, 0.5120, 1
1, 0.63, 0, 1, 0, 0.7580, 0
-1, 0.36, 1, 0, 0, 0.4450, 1
1, 0.27, 0, 1, 0, 0.2860, 2
1, 0.50, 0, 1, 0, 0.5650, 1
1, 0.50, 0, 0, 1, 0.5500, 1
-1, 0.19, 0, 0, 1, 0.3270, 0
1, 0.22, 0, 1, 0, 0.2770, 1
-1, 0.39, 0, 0, 1, 0.4710, 2
1, 0.34, 1, 0, 0, 0.3940, 1
-1, 0.22, 1, 0, 0, 0.3350, 0
1, 0.35, 0, 0, 1, 0.3520, 2
-1, 0.33, 0, 1, 0, 0.4640, 1
1, 0.45, 0, 1, 0, 0.5410, 1
1, 0.42, 0, 1, 0, 0.5070, 1
-1, 0.33, 0, 1, 0, 0.4680, 1
1, 0.25, 0, 0, 1, 0.3000, 1
-1, 0.31, 0, 1, 0, 0.4640, 0
1, 0.27, 1, 0, 0, 0.3250, 2
1, 0.48, 1, 0, 0, 0.5400, 1
-1, 0.64, 0, 1, 0, 0.7130, 2
1, 0.61, 0, 1, 0, 0.7240, 0
1, 0.54, 0, 0, 1, 0.6100, 0
1, 0.29, 1, 0, 0, 0.3630, 0
1, 0.50, 0, 0, 1, 0.5500, 1
1, 0.55, 0, 0, 1, 0.6250, 0
1, 0.40, 1, 0, 0, 0.5240, 0
1, 0.22, 1, 0, 0, 0.2360, 2
1, 0.68, 0, 1, 0, 0.7840, 0
-1, 0.60, 1, 0, 0, 0.7170, 2
-1, 0.34, 0, 0, 1, 0.4650, 1
-1, 0.25, 0, 0, 1, 0.3710, 0
-1, 0.31, 0, 1, 0, 0.4890, 1
1, 0.43, 0, 0, 1, 0.4800, 1
1, 0.58, 0, 1, 0, 0.6540, 2
-1, 0.55, 0, 1, 0, 0.6070, 2
-1, 0.43, 0, 1, 0, 0.5110, 1
-1, 0.43, 0, 0, 1, 0.5320, 1
-1, 0.21, 1, 0, 0, 0.3720, 0
1, 0.55, 0, 0, 1, 0.6460, 0
1, 0.64, 0, 1, 0, 0.7480, 0
-1, 0.41, 1, 0, 0, 0.5880, 1
1, 0.64, 0, 0, 1, 0.7270, 0
-1, 0.56, 0, 0, 1, 0.6660, 2
1, 0.31, 0, 0, 1, 0.3600, 1
-1, 0.65, 0, 0, 1, 0.7010, 2
1, 0.55, 0, 0, 1, 0.6430, 0
-1, 0.25, 1, 0, 0, 0.4030, 0
1, 0.46, 0, 0, 1, 0.5100, 1
-1, 0.36, 1, 0, 0, 0.5350, 0
1, 0.52, 0, 1, 0, 0.5810, 1
1, 0.61, 0, 0, 1, 0.6790, 0
1, 0.57, 0, 0, 1, 0.6570, 0
-1, 0.46, 0, 1, 0, 0.5260, 1
-1, 0.62, 1, 0, 0, 0.6680, 2
1, 0.55, 0, 0, 1, 0.6270, 0
-1, 0.22, 0, 0, 1, 0.2770, 1
-1, 0.50, 1, 0, 0, 0.6290, 0
-1, 0.32, 0, 1, 0, 0.4180, 1
-1, 0.21, 0, 0, 1, 0.3560, 0
1, 0.44, 0, 1, 0, 0.5200, 1
1, 0.46, 0, 1, 0, 0.5170, 1
1, 0.62, 0, 1, 0, 0.6970, 0
1, 0.57, 0, 1, 0, 0.6640, 0
-1, 0.67, 0, 0, 1, 0.7580, 2
1, 0.29, 1, 0, 0, 0.3430, 2
1, 0.53, 1, 0, 0, 0.6010, 0
-1, 0.44, 1, 0, 0, 0.5480, 1
1, 0.46, 0, 1, 0, 0.5230, 1
-1, 0.20, 0, 1, 0, 0.3010, 1
-1, 0.38, 1, 0, 0, 0.5350, 1
1, 0.50, 0, 1, 0, 0.5860, 1
1, 0.33, 0, 1, 0, 0.4250, 1
-1, 0.33, 0, 1, 0, 0.3930, 1
1, 0.26, 0, 1, 0, 0.4040, 0
1, 0.58, 1, 0, 0, 0.7070, 0
1, 0.43, 0, 0, 1, 0.4800, 1
-1, 0.46, 1, 0, 0, 0.6440, 0
1, 0.60, 1, 0, 0, 0.7170, 0
-1, 0.42, 1, 0, 0, 0.4890, 1
-1, 0.56, 0, 0, 1, 0.5640, 2
-1, 0.62, 0, 1, 0, 0.6630, 2
-1, 0.50, 1, 0, 0, 0.6480, 1
1, 0.47, 0, 0, 1, 0.5200, 1
-1, 0.67, 0, 1, 0, 0.8040, 2
-1, 0.40, 0, 0, 1, 0.5040, 1
1, 0.42, 0, 1, 0, 0.4840, 1
1, 0.64, 1, 0, 0, 0.7200, 0
-1, 0.47, 1, 0, 0, 0.5870, 2
1, 0.45, 0, 1, 0, 0.5280, 1
-1, 0.25, 0, 0, 1, 0.4090, 0
1, 0.38, 1, 0, 0, 0.4840, 0
1, 0.55, 0, 0, 1, 0.6000, 1
-1, 0.44, 1, 0, 0, 0.6060, 1
1, 0.33, 1, 0, 0, 0.4100, 1
1, 0.34, 0, 0, 1, 0.3900, 1
1, 0.27, 0, 1, 0, 0.3370, 2
1, 0.32, 0, 1, 0, 0.4070, 1
1, 0.42, 0, 0, 1, 0.4700, 1
-1, 0.24, 0, 0, 1, 0.4030, 0
1, 0.42, 0, 1, 0, 0.5030, 1
1, 0.25, 0, 0, 1, 0.2800, 2
1, 0.51, 0, 1, 0, 0.5800, 1
-1, 0.55, 0, 1, 0, 0.6350, 2
1, 0.44, 1, 0, 0, 0.4780, 2
-1, 0.18, 1, 0, 0, 0.3980, 0
-1, 0.67, 0, 1, 0, 0.7160, 2
1, 0.45, 0, 0, 1, 0.5000, 1
1, 0.48, 1, 0, 0, 0.5580, 1
-1, 0.25, 0, 1, 0, 0.3900, 1
-1, 0.67, 1, 0, 0, 0.7830, 1
1, 0.37, 0, 0, 1, 0.4200, 1
-1, 0.32, 1, 0, 0, 0.4270, 1
1, 0.48, 1, 0, 0, 0.5700, 1
-1, 0.66, 0, 0, 1, 0.7500, 2
1, 0.61, 1, 0, 0, 0.7000, 0
-1, 0.58, 0, 0, 1, 0.6890, 1
1, 0.19, 1, 0, 0, 0.2400, 2
1, 0.38, 0, 0, 1, 0.4300, 1
-1, 0.27, 1, 0, 0, 0.3640, 1
1, 0.42, 1, 0, 0, 0.4800, 1
1, 0.60, 1, 0, 0, 0.7130, 0
-1, 0.27, 0, 0, 1, 0.3480, 0
1, 0.29, 0, 1, 0, 0.3710, 0
-1, 0.43, 1, 0, 0, 0.5670, 1
1, 0.48, 1, 0, 0, 0.5670, 1
1, 0.27, 0, 0, 1, 0.2940, 2
-1, 0.44, 1, 0, 0, 0.5520, 0
1, 0.23, 0, 1, 0, 0.2630, 2
-1, 0.36, 0, 1, 0, 0.5300, 2
1, 0.64, 0, 0, 1, 0.7250, 0
1, 0.29, 0, 0, 1, 0.3000, 2
-1, 0.33, 1, 0, 0, 0.4930, 1
-1, 0.66, 0, 1, 0, 0.7500, 2
-1, 0.21, 0, 0, 1, 0.3430, 0
1, 0.27, 1, 0, 0, 0.3270, 2
1, 0.29, 1, 0, 0, 0.3180, 2
-1, 0.31, 1, 0, 0, 0.4860, 1
1, 0.36, 0, 0, 1, 0.4100, 1
1, 0.49, 0, 1, 0, 0.5570, 1
-1, 0.28, 1, 0, 0, 0.3840, 0
-1, 0.43, 0, 0, 1, 0.5660, 1
-1, 0.46, 0, 1, 0, 0.5880, 1
1, 0.57, 1, 0, 0, 0.6980, 0
-1, 0.52, 0, 0, 1, 0.5940, 1
-1, 0.31, 0, 0, 1, 0.4350, 1
-1, 0.55, 1, 0, 0, 0.6200, 2
1, 0.50, 1, 0, 0, 0.5640, 1
1, 0.48, 0, 1, 0, 0.5590, 1
-1, 0.22, 0, 0, 1, 0.3450, 0
1, 0.59, 0, 0, 1, 0.6670, 0
1, 0.34, 1, 0, 0, 0.4280, 2
-1, 0.64, 1, 0, 0, 0.7720, 2
1, 0.29, 0, 0, 1, 0.3350, 2
-1, 0.34, 0, 1, 0, 0.4320, 1
-1, 0.61, 1, 0, 0, 0.7500, 2
1, 0.64, 0, 0, 1, 0.7110, 0
-1, 0.29, 1, 0, 0, 0.4130, 0
1, 0.63, 0, 1, 0, 0.7060, 0
-1, 0.29, 0, 1, 0, 0.4000, 0
-1, 0.51, 1, 0, 0, 0.6270, 1
-1, 0.24, 0, 0, 1, 0.3770, 0
1, 0.48, 0, 1, 0, 0.5750, 1
1, 0.18, 1, 0, 0, 0.2740, 0
1, 0.18, 1, 0, 0, 0.2030, 2
1, 0.33, 0, 1, 0, 0.3820, 2
-1, 0.20, 0, 0, 1, 0.3480, 0
1, 0.29, 0, 0, 1, 0.3300, 2
-1, 0.44, 0, 0, 1, 0.6300, 0
-1, 0.65, 0, 0, 1, 0.8180, 0
-1, 0.56, 1, 0, 0, 0.6370, 2
-1, 0.52, 0, 0, 1, 0.5840, 1
-1, 0.29, 0, 1, 0, 0.4860, 0
-1, 0.47, 0, 1, 0, 0.5890, 1
1, 0.68, 1, 0, 0, 0.7260, 2
1, 0.31, 0, 0, 1, 0.3600, 1
1, 0.61, 0, 1, 0, 0.6250, 2
1, 0.19, 0, 1, 0, 0.2150, 2
1, 0.38, 0, 0, 1, 0.4300, 1
-1, 0.26, 1, 0, 0, 0.4230, 0
1, 0.61, 0, 1, 0, 0.6740, 0
1, 0.40, 1, 0, 0, 0.4650, 1
-1, 0.49, 1, 0, 0, 0.6520, 1
1, 0.56, 1, 0, 0, 0.6750, 0
-1, 0.48, 0, 1, 0, 0.6600, 1
1, 0.52, 1, 0, 0, 0.5630, 2
-1, 0.18, 1, 0, 0, 0.2980, 0
-1, 0.56, 0, 0, 1, 0.5930, 2
-1, 0.52, 0, 1, 0, 0.6440, 1
-1, 0.18, 0, 1, 0, 0.2860, 1
-1, 0.58, 1, 0, 0, 0.6620, 2
-1, 0.39, 0, 1, 0, 0.5510, 1
-1, 0.46, 1, 0, 0, 0.6290, 1
-1, 0.40, 0, 1, 0, 0.4620, 1
-1, 0.60, 1, 0, 0, 0.7270, 2
1, 0.36, 0, 1, 0, 0.4070, 2
1, 0.44, 1, 0, 0, 0.5230, 1
1, 0.28, 1, 0, 0, 0.3130, 2
1, 0.54, 0, 0, 1, 0.6260, 0
Test data:
# people_test.txt
#
-1, 0.51, 1, 0, 0, 0.6120, 1
-1, 0.32, 0, 1, 0, 0.4610, 1
1, 0.55, 1, 0, 0, 0.6270, 0
1, 0.25, 0, 0, 1, 0.2620, 2
1, 0.33, 0, 0, 1, 0.3730, 2
-1, 0.29, 0, 1, 0, 0.4620, 0
1, 0.65, 1, 0, 0, 0.7270, 0
-1, 0.43, 0, 1, 0, 0.5140, 1
-1, 0.54, 0, 1, 0, 0.6480, 2
1, 0.61, 0, 1, 0, 0.7270, 0
1, 0.52, 0, 1, 0, 0.6360, 0
1, 0.30, 0, 1, 0, 0.3350, 2
1, 0.29, 1, 0, 0, 0.3140, 2
-1, 0.47, 0, 0, 1, 0.5940, 1
1, 0.39, 0, 1, 0, 0.4780, 1
1, 0.47, 0, 0, 1, 0.5200, 1
-1, 0.49, 1, 0, 0, 0.5860, 1
-1, 0.63, 0, 0, 1, 0.6740, 2
-1, 0.30, 1, 0, 0, 0.3920, 0
-1, 0.61, 0, 0, 1, 0.6960, 2
-1, 0.47, 0, 0, 1, 0.5870, 1
1, 0.30, 0, 0, 1, 0.3450, 2
-1, 0.51, 0, 0, 1, 0.5800, 1
-1, 0.24, 1, 0, 0, 0.3880, 1
-1, 0.49, 1, 0, 0, 0.6450, 1
1, 0.66, 0, 0, 1, 0.7450, 0
-1, 0.65, 1, 0, 0, 0.7690, 0
-1, 0.46, 0, 1, 0, 0.5800, 0
-1, 0.45, 0, 0, 1, 0.5180, 1
-1, 0.47, 1, 0, 0, 0.6360, 0
-1, 0.29, 1, 0, 0, 0.4480, 0
-1, 0.57, 0, 0, 1, 0.6930, 2
-1, 0.20, 1, 0, 0, 0.2870, 2
-1, 0.35, 1, 0, 0, 0.4340, 1
-1, 0.61, 0, 0, 1, 0.6700, 2
-1, 0.31, 0, 0, 1, 0.3730, 1
1, 0.18, 1, 0, 0, 0.2080, 2
1, 0.26, 0, 0, 1, 0.2920, 2
-1, 0.28, 1, 0, 0, 0.3640, 2
-1, 0.59, 0, 0, 1, 0.6940, 2

