TY - JOUR
T1 - Spiking Neural Networks - Part II
T2 - Detecting Spatio-Temporal Patterns
AU - Skatchkovsky, Nicolas
AU - Jang, Hyeryung
AU - Simeone, Osvaldo
N1 - Publisher Copyright:
© 1997-2012 IEEE.
PY - 2021/6
Y1 - 2021/6
N2 - Inspired by the operation of biological brains, Spiking Neural Networks (SNNs) have the unique ability to detect information encoded in spatio-temporal patterns of spiking signals. Examples of data types requiring spatio-temporal processing include logs of time stamps, e.g., of tweets, and outputs of neural prostheses and neuromorphic sensors. In this letter, the second of a series of three review papers on SNNs, we first review models and training algorithms for the dominant approach that considers SNNs as Recurrent Neural Networks (RNNs) and adapts learning rules based on backpropagation through time to the requirements of SNNs. In order to tackle the non-differentiability of the spiking mechanism, state-of-the-art solutions use surrogate gradients that approximate the threshold activation function with a differentiable function. Then, we describe an alternative approach that relies on probabilistic models for spiking neurons, allowing the derivation of local learning rules via stochastic estimates of the gradient. Finally, experiments are provided for neuromorphic data sets, yielding insights on accuracy and convergence under different SNN models.
AB - Inspired by the operation of biological brains, Spiking Neural Networks (SNNs) have the unique ability to detect information encoded in spatio-temporal patterns of spiking signals. Examples of data types requiring spatio-temporal processing include logs of time stamps, e.g., of tweets, and outputs of neural prostheses and neuromorphic sensors. In this letter, the second of a series of three review papers on SNNs, we first review models and training algorithms for the dominant approach that considers SNNs as Recurrent Neural Networks (RNNs) and adapts learning rules based on backpropagation through time to the requirements of SNNs. In order to tackle the non-differentiability of the spiking mechanism, state-of-the-art solutions use surrogate gradients that approximate the threshold activation function with a differentiable function. Then, we describe an alternative approach that relies on probabilistic models for spiking neurons, allowing the derivation of local learning rules via stochastic estimates of the gradient. Finally, experiments are provided for neuromorphic data sets, yielding insights on accuracy and convergence under different SNN models.
KW - Neuromorphic computing
KW - spiking neural networks (SNNs)
UR - http://www.scopus.com/inward/record.url?scp=85099569279&partnerID=8YFLogxK
U2 - 10.1109/LCOMM.2021.3050242
DO - 10.1109/LCOMM.2021.3050242
M3 - Article
AN - SCOPUS:85099569279
SN - 1089-7798
VL - 25
SP - 1741
EP - 1745
JO - IEEE Communications Letters
JF - IEEE Communications Letters
IS - 6
M1 - 9317741
ER -