I was sitting in an airport waiting to board a flight to go speak at a conference. I was pretty bored, so to pass the time I thought about using PyTorch to estimate f(x) = sin(x) using a third-degree polynomial. Yes, my life is pretty sad.
After I boarded the plane in Seattle, I had about two hours before reaching Las Vegas, so I got my laptop out and started coding. The key idea is that when you create a PyTorch neural network using a Linear layer, the weights are automatically set up for you, but when you don't use a standard layer, you must explicitly wrap your weights in Parameter objects so that they are registered with the module and can be learned.
It was an interesting experiment that gave me some insights into how PyTorch works.
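The contrast can be sketched in a few lines (a minimal illustration I put together, not part of the demo below): a Linear layer registers its weight and bias automatically, while a raw tensor is invisible to the optimizer until you wrap it in Parameter.

```python
import torch as T

# A Linear layer creates and registers its weight and bias
# as trainable parameters automatically.
lin = T.nn.Linear(3, 1)
print(len(list(lin.parameters())))  # 2 -- weight and bias

# A raw tensor attribute is NOT trainable by default; wrapping
# it in Parameter registers it with the enclosing Module.
class Cubic(T.nn.Module):
  def __init__(self):
    super().__init__()
    self.a = T.nn.Parameter(T.randn(1))  # learnable coefficient
  def forward(self, x):
    return self.a * x**3

c = Cubic()
print(len(list(c.parameters())))  # 1 -- just self.a
```

Anything returned by parameters() is what an optimizer like SGD will actually update; a plain tensor attribute would be silently skipped.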

Left: Image from a search for “funny help sign”. Right: Image from a search for “funny sin() help”. Microsoft Paint is my graphic design tool of choice too. And I like the microscope in the lower right — no math classroom is complete without a microscope to examine details.
Demo Code:
# low_level.py
# approximate sin(x) using a 3rd degree polynomial

import numpy as np
import torch as T

device = T.device('cpu')

class Poly(T.nn.Module):
  # f(x) = ax^3 + bx^2 + cx + d
  def __init__(self):
    super(Poly, self).__init__()  # plain super().__init__() also works in Python 3
    self.a = T.nn.Parameter(T.randn(1).to(device))
    self.b = T.nn.Parameter(T.randn(1).to(device))
    self.c = T.nn.Parameter(T.randn(1).to(device))
    self.d = T.nn.Parameter(T.randn(1).to(device))

  def forward(self, x):
    y = (self.a * x**3) + (self.b * x**2) + \
      (self.c * x) + self.d
    return y

def main():
  # 0. set up
  print("\nEstimate sin(x) using ax^3 + bx^2 + cx + d ")
  T.manual_seed(1)
  np.random.seed(1)

  # 1. create model
  print("\nCreating polynomial model ")
  poly = Poly().to(device)

  # 2. train model
  print("\nTraining ")
  lo = -np.pi; hi = np.pi  # range of input vals
  loss_func = T.nn.MSELoss()
  optimizer = T.optim.SGD(poly.parameters(), lr=1.0e-4)
  acc_loss = 0.0
  for i in range(100_000):
    x = (hi - lo) * T.rand(100).to(device) + lo  # 100 inputs
    target = T.sin(x)                # 100 correct values
    y = poly(x)                      # 100 predicted values
    loss_val = loss_func(y, target)  # MSE loss
    acc_loss += loss_val.item()      # accumulate
    optimizer.zero_grad()            # prep gradients
    loss_val.backward()              # compute gradients
    optimizer.step()                 # update weights
    if i % 10_000 == 0:
      print("iter = %6d | loss = %12.4f " % (i, acc_loss))
      acc_loss = 0.0
  print("Done ")

  # 3. examine model
  print("\nPolynomial model weights: ")
  print("a = %10.4f " % poly.a.item())
  print("b = %10.4f " % poly.b.item())
  print("c = %10.4f " % poly.c.item())
  print("d = %10.4f " % poly.d.item())

  # 4. use model
  print("\nUsing model: ")
  X = T.linspace(-3.0, +3.0, 5)
  for x in X:
    with T.no_grad():
      y = poly(x)
    t = T.sin(x)
    print("x = %9.4f | sin(x) = %9.4f | "
      "predicted = %9.4f " % (x, t, y))

  print("\nEnd demo ")

if __name__ == "__main__":
  main()
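One way to sanity-check the trained coefficients (my addition; the demo itself doesn't do this) is a closed-form least-squares cubic fit over the same interval using numpy.polyfit. Since sin(x) is an odd function, the even coefficients b and d should come out essentially zero.

```python
import numpy as np

# Closed-form least-squares cubic fit to sin(x) on [-pi, pi],
# as a reference answer for the gradient-descent result.
xs = np.linspace(-np.pi, np.pi, 1000)
coeffs = np.polyfit(xs, np.sin(xs), 3)  # highest power first: [a, b, c, d]
a, b, c, d = coeffs
print("a = %10.4f" % a)  # roughly -0.093
print("b = %10.4f" % b)  # ~ 0 (sin is odd)
print("c = %10.4f" % c)  # roughly  0.857
print("d = %10.4f" % d)  # ~ 0 (sin is odd)
```

If the SGD-trained a, b, c, d land near these values, the low-level Parameter approach is working as intended.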
