Memory in plain sight: A survey of the uncanny resemblances between diffusion models and associative memories

B Hoover, H Strobelt, D Krotov, J Hoffman… - … Memory & Hopfield …, 2023 - openreview.net
Diffusion Models (DMs) have recently set state-of-the-art on many generation benchmarks.
However, there are myriad ways to describe them mathematically, which makes it difficult to …

DINE: Dimensional interpretability of node embeddings

S Piaggesi, M Khosla, A Panisson… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Graph representation learning methods, such as node embeddings, are powerful
approaches to map nodes into a latent vector space, allowing their use for various graph …

Binary Associative Memory Networks: A Review of Mathematical Framework and Capacity Analysis

H Bao, Z Zhao - Information Sciences, 2024 - Elsevier
In recent years, heightened interest has been ignited in associative memory networks,
largely attributed to their perceived equivalence with the attention mechanism, a …

Sparse distributed memory is a continual learner

T Bricken, X Davies, D Singh, D Krotov… - arXiv preprint arXiv …, 2023 - arxiv.org
Continual learning is a problem for artificial neural networks that their biological counterparts
are adept at solving. Building on work using Sparse Distributed Memory (SDM) to connect a …

Learning to modulate random weights can induce task-specific contexts for economical meta and continual learning

J Hong, TP Pavlic - arXiv preprint arXiv:2204.04297, 2022 - arxiv.org
Neural networks are vulnerable to catastrophic forgetting when data distributions are non-stationary
during continual online learning; learning of a later task often leads to forgetting of …

SNNLP: energy-efficient natural language processing using spiking neural networks

RA Knipper, K Mishty, M Sadi, SKK Santu - arXiv preprint arXiv …, 2024 - arxiv.org
As spiking neural networks receive more attention, we look toward applications of this
computing paradigm in fields other than computer vision and signal processing. One major …

Using connectome features to constrain echo state networks

J Morra, M Daley - 2023 International Joint Conference on …, 2023 - ieeexplore.ieee.org
We report an improvement to the conventional Echo State Network (ESN) across three
benchmark chaotic time-series prediction tasks using fruit fly connectome data alone. We …

Continual Learning in Bio-plausible Spiking Neural Networks with Hebbian and Spike Timing Dependent Plasticity: A Survey and Perspective

A Safa - arXiv preprint arXiv:2407.17305, 2024 - arxiv.org
Recently, the use of bio-plausible learning techniques such as Hebbian and Spike-Timing-Dependent
Plasticity (STDP) has drawn significant attention for the design of compute …

Learning sparse binary code for maximum inner product search

C Ma, F Yu, Y Yu, W Li - Proceedings of the 30th ACM International …, 2021 - dl.acm.org
Maximum inner product search (MIPS), combined with the hashing method, has become a
standard solution to similarity search problems. It often achieves an order of magnitude …

Brain-inspired wiring economics for artificial neural networks

XJ Zhang, JM Moore, TT Gao, X Zhang, G Yan - PNAS Nexus, 2025 - academic.oup.com
Wiring patterns of brain networks embody a trade-off between information transmission,
geometric constraints, and metabolic cost, all of which must be balanced to meet functional …