A generalized neural tangent kernel for surrogate gradient learning

L Eilers, RM Memmesheimer… - Advances in Neural Information Processing Systems, 2025 - proceedings.neurips.cc
State-of-the-art neural network training methods depend on the gradient of the network
function. Therefore, they cannot be applied to networks whose activation functions do not …
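
Surrogate gradient learning sidesteps such non-differentiable activations, most commonly the Heaviside spike of a spiking neuron, by keeping the hard step in the forward pass while substituting a smooth derivative in the backward pass. A minimal PyTorch sketch of that mechanism follows; the fast-sigmoid surrogate shape and its scale are illustrative choices, not details taken from the paper.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # non-differentiable step: spike iff potential exceeds 0

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative, 1 / (1 + |v|)^2, used in place of the
        # true derivative of the step (which is zero almost everywhere).
        return grad_output / (1.0 + v.abs()) ** 2

spike = SpikeSurrogate.apply

v = torch.randn(4, requires_grad=True)
spike(v).sum().backward()
print(v.grad)  # nonzero gradients despite the hard threshold
```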

Exact gradients for stochastic spiking neural networks driven by rough signals

C Holberg, C Salvi - arXiv preprint arXiv:2405.13587, 2024 - arxiv.org
We introduce a mathematically rigorous framework based on rough path theory to model
stochastic spiking neural networks (SSNNs) as stochastic differential equations with event …
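
As a hedged sketch of the modeling setup the abstract describes, the snippet below simulates a leaky integrate-and-fire neuron driven by Brownian noise with a plain Euler-Maruyama scheme and hard reset events. The rough-path machinery the paper actually uses to obtain exact gradients through these events is not reproduced here, and all parameter values are illustrative.

```python
import torch

def simulate_stochastic_lif(i_in, tau=10.0, v_th=1.0, sigma=0.1, dt=0.1, seed=0):
    """Euler-Maruyama simulation of dV = (-V/tau + I_t) dt + sigma dW,
    with a spike-and-reset event whenever V crosses the threshold v_th."""
    gen = torch.Generator().manual_seed(seed)
    v = torch.zeros(())
    spike_times = []
    for t, i_t in enumerate(i_in):
        dw = dt ** 0.5 * torch.randn((), generator=gen)  # Brownian increment
        v = v + (-v / tau + i_t) * dt + sigma * dw
        if v >= v_th:            # event discontinuity: record spike, reset potential
            spike_times.append(t)
            v = torch.zeros(())
    return spike_times

print(simulate_stochastic_lif(torch.full((1000,), 0.15)))
```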

Digital Computing Continuum Abstraction for Neuromorphic Systems

F Sandin, U Bodin, A Lindgren… - 2024 International …, 2024 - ieeexplore.ieee.org
The rising complexity and data generation in cyber-physical systems and the Internet of
Things require a shift towards an edge-to-cloud computing continuum ecosystem with …

Training Physical Neural Networks for Analog In-Memory Computing

Y Sakemi, Y Okamoto, T Morie, S Nobukawa… - arXiv preprint arXiv …, 2024 - arxiv.org
In-memory computing (IMC) architectures mitigate the von Neumann bottleneck
encountered in traditional deep learning accelerators. Their energy efficiency can realize deep …
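
A common ingredient of training for analog IMC, shown here as a hedged illustration rather than the authors' specific method, is hardware-aware training: perturbing the weights during the forward pass so the learned solution tolerates conductance variation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    """Linear layer whose weights see multiplicative noise on every training
    forward pass, a simple stand-in for analog conductance variation."""

    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        w = self.weight
        if self.training:
            # Each pass samples a fresh device-like weight perturbation, pushing
            # the optimizer toward solutions that tolerate the noise.
            w = w * (1 + self.noise_std * torch.randn_like(w))
        return F.linear(x, w, self.bias)

layer = NoisyLinear(8, 4)
y = layer(torch.randn(2, 8))  # training mode: weights are perturbed
```

The multiplicative form reflects that conductance error in analog arrays often scales with the stored value; an additive term would model read noise instead.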

IKUN: Initialization to Keep snn training and generalization great with sUrrogate-stable variaNce

D Chang, D Wang, X Yang - arXiv preprint arXiv:2411.18250, 2024 - arxiv.org
Weight initialization significantly impacts the convergence and performance of neural
networks. While traditional methods like Xavier and Kaiming initialization are widely used …
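
Both named schemes scale the weight variance by layer fan-in (and, for Xavier, fan-out) so that activation statistics stay stable with depth. The sketch below demonstrates only this classical effect on a deep ReLU stack with arbitrary depth and width; IKUN's surrogate-stable variance criterion is not specified in the snippet and is not shown.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def output_std(init_fn, depth=20, width=512):
    """Push unit-variance input through a deep ReLU stack and report how the
    activation scale survives under a given weight initializer."""
    x = torch.randn(1024, width)
    for _ in range(depth):
        layer = nn.Linear(width, width, bias=False)
        init_fn(layer.weight)
        x = torch.relu(layer(x))
    return x.std().item()

print("Xavier :", output_std(nn.init.xavier_normal_))   # decays: no ReLU gain
print("Kaiming:", output_std(nn.init.kaiming_normal_))  # stable: sqrt(2) fan-in gain
```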