I came across an obscure machine learning technique called Tsetlin Machine (TM) binary classification. See the Wikipedia article at en.wikipedia.org/wiki/Tsetlin_machine. Briefly, TM classification combines rules (like decision trees), mathematical modeling (like logistic regression), automata theory, and symbolic/propositional/Boolean logic. Yes, it's complicated.
Tsetlin binary classification is binary in two ways. The prediction is binary (0 or 1), and Tsetlin classification also requires that all predictor values be binary, 0-1 encoded.
I found the more-or-less official code for Tsetlin binary classification at github.com/cair/TsetlinMachine/tree/master. But that code is written in Cython (.pyx files), a hybrid of Python and C. I refactored the code to plain Python with NumPy, and I implemented a complete end-to-end demo.
For my demo data, I used the Iris Dataset from the official Tsetlin github repository — sort of. The 150-item Iris Dataset has three classes of Iris species to predict (setosa, versicolor, virginica). Because Tsetlin classification is designed for binary classification, I used only the 50 setosa items (class 0) and the 50 versicolor items (class 1). I used the first 40 items of each class to make an 80-item training set, and the remaining 10 items of each class to make a 20-item test set. (Note: like any binary classification technique, Tsetlin classification can handle multi-class problems by using the somewhat hacky one-vs-rest / one-vs-all technique.)
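The one-vs-rest mechanics are easy to sketch. The example below uses a hypothetical stand-in binary scorer (a toy nearest-centroid model, not a Tsetlin Machine) just to show the pattern: train one binary model per class, then predict the class whose model is most confident.

```python
import numpy as np

# Hedged sketch of one-vs-rest multi-class classification. The BinaryStub
# scorer is a hypothetical nearest-centroid toy, NOT a Tsetlin Machine;
# it stands in for any binary model that can score an input.
class BinaryStub:
  def fit(self, X, y):
    self.mean1 = X[y == 1].mean(axis=0)  # centroid of the "1" class
  def score(self, x):
    return -np.sum((x - self.mean1) ** 2)  # higher = more class-1-like

def ovr_fit(X, y, n_classes):
  # train one binary model per class: class c vs. everything else
  models = []
  for c in range(n_classes):
    m = BinaryStub()
    m.fit(X, (y == c).astype(int))
    models.append(m)
  return models

def ovr_predict(models, x):
  # predict the class whose one-vs-rest model is most confident
  return int(np.argmax([m.score(x) for m in models]))
```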
The output of my demo is:
Begin Tsetlin Machine binary classification

Loading binary-features two-class Iris data

First three X train items:
[[0 0 1 1 0 0 1 0 0 0 0 0 0 0 0 0]
 [0 0 1 1 0 0 0 1 0 0 0 0 0 0 0 0]
 [0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0]]

First three y target (0-1) values:
0
0
0

Setting n_clauses = 20
Setting n_states = 50
Setting s (random update inverse frequency) = 3.0
Setting threshold (voting max/min) = 10
Creating Tsetlin Machine binary classifier
Done

Setting max_epochs = 100
Starting training
. . . . . . . . . .
Done

Accuracy (train) = 0.8875
Accuracy (test) = 0.9500

Predicting class for train_X[0]
Predicted y = 0

End demo
There is a lot going on here. First, the raw Iris Dataset looks like:
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
. . .
7.0,3.2,4.7,1.4,Iris-versicolor
6.4,3.2,4.5,1.5,Iris-versicolor
. . .
To use Tsetlin classification, the data must be converted to binary form. Here is how the Tsetlin github Iris data is encoded:
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
. . .
0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1
. . .
The raw Iris Dataset has four predictors: sepal length, sepal width, petal length, petal width. Each predictor value is mapped to four binary values, making 16 predictors per line, followed by the target class. Annoyingly, the github page doesn’t explain how the data is encoded, but after looking at it for a while, I deciphered that:
[0.0, 1.5] = 0000
[1.6, 3.1] = 0001
[3.2, 4.7] = 0010
[4.8, 6.3] = 0011
[6.4, 7.9] = 0100
This encoding is unusual. Notice that the first encoded value for each predictor is always 0, making it irrelevant. A more obvious encoding scheme would be to bucket the numeric predictor values into four buckets, and then use one-hot encoding. But I used the github encoding.
The Tsetlin classifier requires four parameter values that must be determined by trial and error:
Setting n_clauses = 20
Setting n_states = 50
Setting s (random update inverse frequency) = 3.0
Setting threshold (voting max/min) = 10
Loosely, the number of clauses is essentially the number of rules. The number of states refers to the automata component. The s parameter adds some randomness during training, to avoid model overfitting. The threshold value acts to clip output so that some clauses don’t dominate the prediction.
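The voting-and-clipping idea can be illustrated standalone. The clause outputs below are made-up values, not taken from the demo; as in the demo code, even-indexed clauses vote +1 when they fire and odd-indexed clauses vote -1:

```python
import numpy as np

# Standalone illustration of clause voting with threshold clipping.
# The clause outputs here are made-up values for demonstration.
threshold = 10
clause_output = np.array([1, 0, 1, 1, 1, 0])    # which clauses fired
clause_sign = np.array([+1, -1, +1, -1, +1, -1])  # even +1, odd -1
votes = int(np.sum(clause_output * clause_sign))  # +1 +1 -1 +1 = 2
votes = max(-threshold, min(threshold, votes))    # clip (a no-op here)
pred = 1 if votes >= 0 else 0                     # vote total >= 0 -> class 1
print(votes, pred)  # 2 1
```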
I spent many hours dissecting and translating the github Cython code to Python code, and now I have a pretty good understanding of how Tsetlin binary classification works.
Tsetlin classification has a small group of enthusiastic supporters, but the technique isn’t used very often. From a practical point of view, I think that Tsetlin binary classification isn’t used much because having to encode numeric predictors to binary is somewhat time-consuming, and the technique training process is a bit difficult to tune. On the other hand, for problem scenarios where the predictors are all categorical (e.g., color is red, blue, or green), the binary encoding isn’t a major issue.
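For purely categorical predictors, the required binary encoding is just one-hot encoding. A minimal sketch (the color values are hypothetical):

```python
# One-hot encoding a categorical predictor for Tsetlin-style binary input.
# The color categories are hypothetical example values.
def one_hot(value, categories):
  return [1 if c == value else 0 for c in categories]

colors = ["red", "blue", "green"]
print(one_hot("blue", colors))  # [0, 1, 0]
```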
The main source research paper is "The Tsetlin Machine – A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic" by Ole-Christoffer Granmo, 2021. It's available at arxiv.org/abs/1804.01508. The technique is named after M.L. Tsetlin, a Soviet mathematician who worked on learning automata in the 1960s.
Granmo has extended the Tsetlin binary classification technique to multi-class classification and to regression. I haven't looked at these techniques yet.

Every couple of months or so, I check out the YouTube channel of an artist who goes by the tag Hey Anglomangler. He puts together creative videos using some sort of AI software. The videos are surreal and usually in binary (shades of black and white), which enhances the effect of nightmarish beauty. Here are two screen shots from a video titled “The Triangle” (youtube.com/watch?v=zNckfF6O2Xk).
The complete demo code is presented below.
# tsetlin_machine_binary.py
#
# binary classification
# based on TsetlinMachine.pyx (Pyrex)
# from github.com/cair/TsetlinMachine/tree/master
import numpy as np
class TsetlinMachine:
  # binary classifier
  def __init__(self, n_clauses, n_features,
      n_states, s, threshold, seed=0):
    self.n_clauses = n_clauses
    self.n_features = n_features
    self.n_states = n_states
    self.s = s  # inverse update freq
    self.threshold = threshold  # voting min/max
    self.rnd = np.random.RandomState(seed)
    self.ta_state = self.rnd.choice([self.n_states,
      self.n_states+1], size=(self.n_clauses,
      self.n_features, 2)).astype(dtype=np.int32)
    # print(self.ta_state); input()
    self.clause_output = \
      np.zeros(shape=self.n_clauses, dtype=np.int32)
    self.feedback_to_clauses = \
      np.zeros(shape=self.n_clauses, dtype=np.int32)
    self.clause_sign = np.zeros(self.n_clauses,
      dtype=np.int32)
    for j in range(self.n_clauses):
      if j % 2 == 0:
        self.clause_sign[j] = 1   # even clauses vote +1
      else:
        self.clause_sign[j] = -1  # odd clauses vote -1
  # -----------------------------------------------------------

  def calculate_clause_output(self, x):
    # each cell is 0 or 1
    for j in range(self.n_clauses):
      self.clause_output[j] = 1
      for k in range(self.n_features):
        # action_include = self.action(self.ta_state[j,k,0])
        # action_include_negated = \
        #   self.action(self.ta_state[j,k,1])
        if self.ta_state[j,k,0] <= self.n_states:
          action_include = 0
        else:
          action_include = 1
        if self.ta_state[j,k,1] <= self.n_states:
          action_exclude = 0
        else:
          action_exclude = 1
        if (action_include == 1 and x[k] == 0) or \
           (action_exclude == 1 and x[k] == 1):
          self.clause_output[j] = 0
          break
  # -----------------------------------------------------------

  def predict(self, x):
    self.calculate_clause_output(x)
    output_sum = self.sum_clause_votes()  # -thresh to +thresh
    if output_sum >= 0:
      return 1
    else:
      return 0
  # -----------------------------------------------------------

  # def action(self, state):
  #   if state <= self.n_states:
  #     return 0
  #   else:
  #     return 1

  # -----------------------------------------------------------

  # def get_state(self, clause, feature, automaton_type):
  #   return self.ta_state[clause,feature,automaton_type]

  # -----------------------------------------------------------

  def sum_clause_votes(self):
    # value between -thresh and +thresh
    output_sum = 0
    for j in range(self.n_clauses):
      output_sum += self.clause_output[j] * \
        self.clause_sign[j]
    if output_sum > self.threshold:
      output_sum = self.threshold
    elif output_sum < -self.threshold:
      output_sum = -self.threshold
    return output_sum
  # -----------------------------------------------------------
  # evaluate() replaced by accuracy()
  # -----------------------------------------------------------

  # def evaluate(self, X, y):
  #   n_examples = len(X)
  #   xi = np.zeros((self.n_features,), dtype=np.int32)
  #   errors = 0
  #   for l in range(n_examples):
  #     for j in range(self.n_features):
  #       xi[j] = X[l,j]
  #     self.calculate_clause_output(xi)
  #     output_sum = self.sum_clause_votes()
  #     if output_sum >= 0 and y[l] == 0:
  #       errors += 1
  #     elif output_sum < 0 and y[l] == 1:
  #       errors += 1
  #
  #   return 1.0 - ((1.0 * errors) / n_examples)
  # -----------------------------------------------------------

  def accuracy(self, X, y):
    n_correct = 0; n_wrong = 0
    for i in range(len(X)):
      xi = X[i]
      actual_y = y[i]            # 0 or 1
      pred_y = self.predict(xi)  # 0 or 1
      if actual_y == pred_y:
        n_correct += 1
      else:
        n_wrong += 1
    return (n_correct * 1.0) / (n_correct + n_wrong)
  # -----------------------------------------------------------

  def update(self, x, y):
    # worker for fit()
    self.calculate_clause_output(x)       # each cell 0 or 1
    output_sum = self.sum_clause_votes()  # -thresh, +thresh

    # decide which clauses get feedback, and of which type
    for j in range(self.n_clauses):
      self.feedback_to_clauses[j] = 0  # -1 or +1

    if y == 1:
      for j in range(self.n_clauses):
        if self.rnd.rand() > 1.0 * (self.threshold - \
            output_sum) / (2 * self.threshold):
          continue
        if self.clause_sign[j] == 1:
          self.feedback_to_clauses[j] = 1   # Type I Feedback
        else:
          self.feedback_to_clauses[j] = -1  # Type II
    elif y == 0:
      for j in range(self.n_clauses):
        if self.rnd.rand() > 1.0 * (self.threshold + \
            output_sum) / (2 * self.threshold):
          continue
        if self.clause_sign[j] == 1:
          self.feedback_to_clauses[j] = -1  # Type II
        else:
          self.feedback_to_clauses[j] = 1   # Type I

    # apply the feedback by moving automata states
    for j in range(self.n_clauses):
      if self.feedback_to_clauses[j] == 1:  # Type I
        if self.clause_output[j] == 0:
          for k in range(self.n_features):
            if self.rnd.rand() <= 1.0 / self.s:
              if self.ta_state[j,k,0] > 1:
                self.ta_state[j,k,0] -= 1
            if self.rnd.rand() <= 1.0 / self.s:
              if self.ta_state[j,k,1] > 1:
                self.ta_state[j,k,1] -= 1
        elif self.clause_output[j] == 1:
          for k in range(self.n_features):
            if x[k] == 1:
              if self.rnd.rand() <= \
                  1.0 * (self.s-1) / self.s:
                if self.ta_state[j,k,0] < \
                    self.n_states * 2:
                  self.ta_state[j,k,0] += 1
              if self.rnd.rand() <= 1.0 / self.s:
                if self.ta_state[j,k,1] > 1:
                  self.ta_state[j,k,1] -= 1
            elif x[k] == 0:
              if self.rnd.rand() <= \
                  1.0 * (self.s-1) / self.s:
                if self.ta_state[j,k,1] < \
                    self.n_states * 2:
                  self.ta_state[j,k,1] += 1
              if self.rnd.rand() <= 1.0 / self.s:
                if self.ta_state[j,k,0] > 1:
                  self.ta_state[j,k,0] -= 1
      elif self.feedback_to_clauses[j] == -1:  # Type II
        if self.clause_output[j] == 1:
          for k in range(self.n_features):
            # action_include = \
            #   self.action(self.ta_state[j,k,0])
            # action_include_negated = \
            #   self.action(self.ta_state[j,k,1])
            if self.ta_state[j,k,0] <= self.n_states:
              action_include = 0
            else:
              action_include = 1
            if self.ta_state[j,k,1] <= self.n_states:
              action_exclude = 0
            else:
              action_exclude = 1
            if x[k] == 0:
              if action_include == 0 and \
                 self.ta_state[j,k,0] < self.n_states * 2:
                self.ta_state[j,k,0] += 1
            elif x[k] == 1:
              if action_exclude == 0 and \
                 self.ta_state[j,k,1] < self.n_states * 2:
                self.ta_state[j,k,1] += 1
  # -----------------------------------------------------------

  def fit(self, X, y, max_epochs):
    n_examples = len(X)
    xi = np.zeros(self.n_features, dtype=np.int32)
    # xi = np.zeros((self.n_features,), dtype=np.int32)
    indices = np.arange(n_examples)
    for epoch in range(max_epochs):
      if epoch % 10 == 0: print(". ", flush=True, end="")
      self.rnd.shuffle(indices)
      for i in range(n_examples):
        example_id = indices[i]
        target_y = y[example_id]
        for j in range(self.n_features):
          xi[j] = X[example_id,j]
        self.update(xi, target_y)
    print("")
    return
# -----------------------------------------------------------

def main():
  print("\nBegin Tsetlin Machine binary classification ")

  print("\nLoading binary-features two-class Iris data ")
  train_file = ".\\Data\\iris_two_classes_train_80.txt"
  train_X = np.loadtxt(train_file,
    usecols=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15],
    delimiter=",", comments="#", dtype=np.int32)
  print("\nFirst three X train items: ")
  print(train_X[0:3,:])
  train_y = np.loadtxt(train_file, usecols=16,
    delimiter=",", comments="#", dtype=np.int32)
  print("\nFirst three y target (0-1) values: ")
  for i in range(3):
    print(train_y[i])

  test_file = ".\\Data\\iris_two_classes_test_20.txt"
  test_X = np.loadtxt(test_file,
    usecols=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15],
    delimiter=",", comments="#", dtype=np.int32)
  test_y = np.loadtxt(test_file, usecols=16,
    delimiter=",", comments="#", dtype=np.int32)

  n_clauses = 20  # number of rules
  n_features = 16
  n_states = 50
  s = 3.0  # how often random updates occur, using 1/s
  threshold = 10  # aka T, controls voting
  print("\nSetting n_clauses = " + str(n_clauses))
  print("Setting n_states = " + str(n_states))
  print("Setting s (random update inverse " + \
    "frequency) = %0.1f " % s)
  print("Setting threshold (voting max/min) = " + \
    str(threshold))

  print("Creating Tsetlin Machine binary classifier ")
  tm = TsetlinMachine(n_clauses, n_features, n_states,
    s, threshold)
  print("Done ")

  max_epochs = 100
  print("\nSetting max_epochs = " + str(max_epochs))
  print("Starting training ")
  tm.fit(train_X, train_y, max_epochs)
  print("Done ")

  train_acc = tm.accuracy(train_X, train_y)
  print("\nAccuracy (train) = %0.4f " % train_acc)
  test_acc = tm.accuracy(test_X, test_y)
  print("Accuracy (test) = %0.4f " % test_acc)

  print("\nPredicting class for train_X[0] ")
  pred_y = tm.predict(train_X[0])
  print("Predicted y = " + str(pred_y))

  print("\nEnd demo ")

if __name__ == "__main__":
  main()
Training data:
# iris_two_classes_train_80.txt
#
# sepal length (cols 0,1,2,3)
# sepal width (cols 4,5,6,7)
# petal length (cols 8,9,10,11)
# petal width (cols 12,13,14,15)
# species (col 16) setosa = 0, versicolor = 1
# (no virginica)
#
# apparent feature encoding:
# [0.0, 1.5] = 0000
# [1.6, 3.1] = 0001
# [3.2, 4.7] = 0010
# [4.8, 6.3] = 0011
# [6.4, 7.9] = 0100
#
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
#
0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1
0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1
0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
Test data:
# iris_two_classes_test_20.txt
#
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0
0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
#
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1
0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1
