In biological systems, spike-timing-dependent plasticity (STDP) is a process that adjusts the strength of synaptic connections between neurons.
In an engineering context, STDP is a mechanism for adjusting the weights between spiking neurons. Put slightly differently, STDP is a technique that can be used to train a spiking neural network.
Note: The ideas behind STDP go back many years, but as best I can determine, the first explicit use of the term spike-timing-dependent plasticity was in a research article published in Nature in 2000. The article used two hyphens (spike-timing-dependent plasticity), but since that time all possible permutations of hyphens and no hyphens (spike timing-dependent plasticity, spike-timing dependent plasticity, etc.) have been used interchangeably.
The STDP process adjusts weights based on the relative timing of a particular neuron's input and output spikes. If an input spike to a neuron occurs closely before that neuron's output spike, then the associated weight is increased. If an input spike occurs closely after an output spike, then the associated weight is decreased. In biology, the term plasticity means adaptive changes to a biological system in response to changes in the environment. The term was adopted by computer scientists for use with software spiking neural networks.
The STDP process has many variations and is quite complex. A highly simplified explanation is illustrated by the diagram below. There are three input neurons, labeled input0, input1, and input2. The output neuron fired a spike at time t = 8. The input0 neuron fired spikes at t = 6 and t = 7, immediately before the output spike. Using STDP, the weight connecting input0 to output, wt0, is strengthened. The idea is that there is likely a cause-effect relationship.

Simplified example of spike-timing-dependent plasticity
The input1 neuron fired a single spike at time t = 10, shortly after the output spike occurred at t = 8. Therefore, the weight connecting neuron input1 to output, wt1, will be decreased. Compared to the two spikes from input0 immediately before the output spike, there is only a single spike somewhat after the output spike, so the decrease in wt1 will be a bit smaller in magnitude than the increase in wt0.
The input2 neuron fired two spikes, at times t = 2 and t = 4. Because these two spikes are not immediately before or after the output spike at t = 8, the connecting weight, wt2, is not adjusted.
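The simplified example above can be sketched in a few lines of code. The window span of 3 time steps and the 0.1 adjustment size are hypothetical values chosen only for illustration; real STDP uses an exponentially decaying adjustment rather than a fixed step.

```python
# Minimal sketch of the simplified STDP example.
# SPAN and STEP are hypothetical illustration values.
SPAN = 3     # input spikes within this many time steps of the output spike count
STEP = 0.1   # fixed weight adjustment per qualifying input spike

def adjust(weight, input_spike_times, output_spike_time):
    for t_in in input_spike_times:
        dt = output_spike_time - t_in
        if 0 < dt <= SPAN:        # input shortly BEFORE output: strengthen
            weight += STEP
        elif -SPAN <= dt < 0:     # input shortly AFTER output: weaken
            weight -= STEP
    return weight

t_out = 8
print(adjust(0.5, [6, 7], t_out))   # wt0: two spikes just before -> increased
print(adjust(0.5, [10], t_out))     # wt1: one spike just after -> decreased
print(adjust(0.5, [2, 4], t_out))   # wt2: spikes too early -> unchanged
```

Running the sketch with the spike times from the diagram increases the hypothetical starting weight of 0.5 for input0, decreases it for input1, and leaves it unchanged for input2.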

Weight update equations for a regular neural network (top) and a spiking neural network (bottom)
The STDP process has several parameters related to defining the time “immediately or closely before or after an output spike” and quantifying the numerical magnitude to increase or decrease a weight connecting neurons. The equations in the figure below show examples of the weight update equations for a standard artificial neural network using stochastic gradient descent (top) and a spiking neural network using basic STDP (bottom).
Training a standard neural network uses the calculus gradient (sometimes shortened to "derivative" when the context is clear) of the activation function used on the output nodes/neurons. The top equation shows the stochastic gradient descent weight update for a weight connecting node j to node k, where o(k) is a computed output value (such as 0.8333) and t(k) is a target value (such as 1). The equation assumes the output nodes use the softmax function for activation.
The bottom three equations show the basic STDP weight update for a weight connecting input neuron j to output neuron k. Function W is called the window function and controls the magnitude of the change in a weight based on the difference between the timing of input spikes (f) and output spikes (n). The use of the exp() function decays the magnitude of the weight change, so that input spiking close in time to output spiking has a greater effect on the associated weight, and spiking outside of a certain time interval has essentially no effect.
The A+ and A- constants are hyperparameters that must be determined by trial and error; they are somewhat analogous to the learning rate used with standard neural networks. The tau+ and tau- constants control the maximum time span before and after an output spike event where an input spike has an effect. In the STDP example diagram above, the tau spans are arbitrarily set to -3 and +3 and indicated by red braces in order to provide a concrete example. In biological systems, the values for tau typically have order of magnitude 10 milliseconds, but in spiking neural networks the tau values are purely hyperparameters.
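A window function of the kind described above can be sketched as follows. The specific values for A+, A-, tau+ and tau- are hypothetical; note that with a pure exp() decay the effect never reaches exactly zero, but beyond a few tau it is negligible.

```python
import math

# Sketch of an STDP window function W, under assumed hyperparameter values.
# A_plus, A_minus, tau_plus, tau_minus are hypothetical.
A_plus, A_minus = 0.10, 0.05
tau_plus, tau_minus = 3.0, 3.0

def window(dt):
    """Weight change for one (input spike, output spike) pair.
    dt = output spike time - input spike time.
    Positive dt (input fired before output) gives an increase;
    negative dt gives a decrease; exp() shrinks the effect as |dt| grows."""
    if dt > 0:
        return A_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -A_minus * math.exp(dt / tau_minus)
    return 0.0
```

For example, window(1) returns a larger increase than window(2), because an input spike one time step before the output spike is a stronger cause-effect candidate than one two steps before, and window(-1) returns a (smaller-magnitude, with these constants) decrease.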
Implementing SNN training from scratch is a huge undertaking and is not feasible in most scenarios. You usually need to use a spiking neural network library such as Brian (briansimulator.org), NEST (www.nest-simulator.org), PySNN (github.com/BasBuller/PySNN), or BindsNET (binds.cs.umass.edu/bindsnet.html).

One meaning of the word plasticity is “the quality of being easily shaped or molded”. Abstract artists seem to be able to view the world through a plasticity filter.