Bottom line: I created an algorithm that combines particle swarm optimization (PSO) with evolutionary optimization (EO). The combined “evolutionary particle swarm optimization” (EPSO) algorithm worked about as well as ordinary PSO and about as well as ordinary EO, but didn’t significantly outperform either. And all three techniques are much too slow for practical use, except in rare scenarios.
In PSO, you have a collection (swarm) of particles. A particle is a possible solution. For example, to train a kernel ridge regression system that has 40 training data items, a possible solution is a vector of 40 values/weights like (0.1234, -6.5432, 1.6677, . . ., -4.6802). The possible solution is called the position of the particle. Each particle also has a velocity vector that determines where the particle will move to (i.e., a new position/solution), and an error value that measures how good the particle is.
In EO, you have a collection (population) of individuals. An individual is a possible solution. Each individual/solution has an error value. An individual is like a particle without a velocity.
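To make the two representations concrete, here is a minimal Python sketch. The class design and field names are my own illustrative choices, not code from my experiment:

import numpy as np

class Particle:
    # a PSO particle: a candidate solution plus a velocity and an error
    def __init__(self, dim, rnd):
        self.position = rnd.uniform(-1.0, 1.0, dim)  # candidate solution
        self.velocity = rnd.uniform(-1.0, 1.0, dim)  # determines next move
        self.error = float('inf')                    # quality of position
        self.best_position = self.position.copy()    # best solution seen so far
        self.best_error = float('inf')

# an EO individual is just a position and an error value,
# in effect a Particle without the velocity field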
My EPSO is based mostly on PSO, but with the evolutionary mechanisms applied to particles. In high-level pseudo-code:
create a swarm of random particles/solutions
loop max_iter times
  for-each particle
    # standard PSO mechanism
    compute new velocity based on
      1.) curr velocity and inertia constant,
      2.) best position known so far,
      3.) best position known by any particle in swarm
    compute new position/solution based on
      1.) curr position,
      2.) new velocity
    check if new best position found
  end-each particle
  create a random particle
  check if new best position found
  replace a weak particle with random particle
  slowly decay inertia constant
  # EO mechanism
  pick two good particles as parents
  combine parents to create a child particle
  mutate child position slightly
  check if new best position found
  replace a weak particle with child particle
end-loop
return best particle/position/solution found
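Here is a runnable Python sketch of the loop above, applied to a generic error function rather than the KRR training problem. This is a simplified illustration of the idea, not my actual experiment code; constants such as the inertia (0.729), the cognitive/social constants (1.49445), and the mutation scale (0.01) are common illustrative values, not necessarily the ones I used:

import numpy as np

def epso(error_fn, dim, num_particles=50, max_iter=1000, seed=0):
    rnd = np.random.default_rng(seed)
    w = 0.729                      # inertia constant
    c1 = c2 = 1.49445              # cognitive and social constants
    pos = rnd.uniform(-1.0, 1.0, (num_particles, dim))
    vel = rnd.uniform(-1.0, 1.0, (num_particles, dim))
    err = np.array([error_fn(p) for p in pos])
    best_pos = pos.copy()          # each particle's best-known position
    best_err = err.copy()
    g = int(np.argmin(best_err))   # index of swarm-best particle
    swarm_pos = best_pos[g].copy()
    swarm_err = float(best_err[g])

    for it in range(max_iter):
        # standard PSO velocity and position update
        r1 = rnd.random((num_particles, dim))
        r2 = rnd.random((num_particles, dim))
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (swarm_pos - pos)
        pos = pos + vel
        err = np.array([error_fn(p) for p in pos])
        better = err < best_err
        best_pos[better] = pos[better]
        best_err[better] = err[better]
        g = int(np.argmin(best_err))
        if best_err[g] < swarm_err:
            swarm_pos = best_pos[g].copy()
            swarm_err = float(best_err[g])

        # replace a weak (high-error) particle with a new random one
        weak = int(np.argmax(err))
        pos[weak] = rnd.uniform(-1.0, 1.0, dim)
        vel[weak] = rnd.uniform(-1.0, 1.0, dim)
        err[weak] = error_fn(pos[weak])
        best_pos[weak] = pos[weak].copy()
        best_err[weak] = err[weak]
        if err[weak] < swarm_err:
            swarm_pos = pos[weak].copy()
            swarm_err = float(err[weak])

        w *= 0.9999   # slowly decay the inertia constant

        # EO mechanism: crossover two good particles, mutate the child
        order = np.argsort(err)
        p1 = pos[order[0]]                      # two good parents
        p2 = pos[order[1]]
        mask = rnd.random(dim) < 0.5
        child = np.where(mask, p1, p2)          # uniform crossover
        child = child + rnd.normal(0.0, 0.01, dim)  # slight mutation
        child_err = error_fn(child)
        if child_err < swarm_err:
            swarm_pos = child.copy()
            swarm_err = float(child_err)
        weak = int(np.argmax(err))              # replace a weak particle
        pos[weak] = child
        vel[weak] = np.zeros(dim)
        err[weak] = child_err
        best_pos[weak] = child.copy()
        best_err[weak] = child_err

    return swarm_pos, swarm_err

# example: minimize the sphere function in 5 dimensions
best, err = epso(lambda v: float(np.sum(v * v)), dim=5)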
My motivation for trying EPSO is that standard PSO improves very quickly during the early iterations, but then has difficulty making further progress. My thought was that maybe the EO evolutionary mechanisms would help zero in on an optimal solution during the last iterations.
A secondary motivation is that I stumbled upon a difficult, interesting problem to test the combined PSO + EO = EPSO algorithm: training a kernel ridge regression (KRR) prediction system on synthetic data that was generated by a neural network. KRR has a closed form solution that uses matrix inversion, so I could compute an exact reference solution (via the scikit-learn KernelRidge module) and evaluate how well the EPSO algorithm works.
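For reference, the exact closed-form solution takes just a few lines with scikit-learn. A minimal sketch, using random stand-in data in place of my 40-item training file, with the same gamma and alpha values as in the run below:

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rnd = np.random.default_rng(0)
train_X = rnd.uniform(-1.0, 1.0, (40, 5))  # stand-in for the real 40 x 5 training data
train_y = rnd.uniform(0.0, 1.0, 40)

model = KernelRidge(alpha=0.001, kernel='rbf', gamma=0.1)
model.fit(train_X, train_y)

exact_weights = model.dual_coef_   # the 40 weights that EPSO tries to find
preds = model.predict(train_X)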
The output of one EPSO experiment run is:
Begin Kernel Ridge Regression with particle swarm training
Loading train (40) and test (10) from file
Done
First three train X:
-0.1660 0.4406 -0.9998 -0.3953 -0.7065
0.0776 -0.1616 0.3704 -0.5911 0.7562
-0.9452 0.3409 -0.1654 0.1174 -0.7192
First three train y:
0.4840
0.1568
0.8054
Setting RBF gamma = 0.100
Setting alpha (not used) = 0.001
Setting numParticles = 200
Setting maxIter = 800000
Create and train KRR model
iteration = 0 error = 5.5992 acc (0.10) = 0.0000
iteration = 80000 error = 0.0001 acc (0.10) = 0.9750
iteration = 160000 error = 0.0001 acc (0.10) = 0.9750
iteration = 240000 error = 0.0000 acc (0.10) = 0.9750
iteration = 320000 error = 0.0000 acc (0.10) = 0.9750
iteration = 400000 error = 0.0000 acc (0.10) = 0.9750
iteration = 480000 error = 0.0000 acc (0.10) = 0.9750
iteration = 560000 error = 0.0000 acc (0.10) = 0.9750
iteration = 640000 error = 0.0000 acc (0.10) = 0.9750
iteration = 720000 error = 0.0000 acc (0.10) = 0.9750
Done
Model weights/coefficients:
-0.4925 -1.3416 -1.2886 0.4076
1.1884 -0.0383 2.0108 -0.2465
0.7647 0.3891 0.0929 1.8920
0.6258 0.0615 -0.0775 -0.6205
1.1838 0.6132 0.8567 0.1959
-1.7657 -0.1421 -1.6978 -1.2034
-2.5358 -0.0538 0.2271 0.4450
-1.4915 0.3418 0.4750 2.6162
1.2590 -2.0907 2.4247 0.3753
1.0250 -2.5499 -1.9359 2.0047
Computing model accuracy
Train acc (within 0.10) = 0.9750
Test acc (within 0.10) = 0.7000
Predicting for x =
-0.1660 0.4406 -0.9998 -0.3953 -0.7065
predicted y = 0.4865
End demo
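The acc (0.10) values in the output are the fraction of predictions that land within 0.10 of the true target value. Conceptually, the metric looks something like this sketch (not my actual demo code):

def accuracy(predict, X, y, tol=0.10):
    # fraction of items where the prediction is within tol of the target
    n_correct = sum(1 for xi, yi in zip(X, y) if abs(predict(xi) - yi) < tol)
    return n_correct / len(y)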
Well, EPSO works, but it’s not a significant improvement on PSO or EO, and all three techniques are just too slow to be practical. But that’s part of working in research. You have to experiment to find these things out. And even failed experiments give me (or anyone else) new insights that will lead to successful ideas down the road.
Note: I normally post my code, but won’t do so for this experiment. The code is a bit sloppy and has several magic constants that should be parameterized.

My combined particle swarm and evolutionary optimization algorithm is pretty interesting to me. In the 1970s, I used to love playing old “bingo pinball” machines in Las Vegas with my college roommate Ed Koolish when we’d drive out there from our dorm room at U.C. Irvine. These electro-mechanical machines were produced in the 1950s and 1960s and were a combination of pinball, bingo, and slot machine. The machines were very complicated and required a bit of skill to maneuver the game configuration to optimize your chance of winning.
