Computing of neuromorphic materials: an emerging approach for bioengineering solutions

C Prakash, LR Gupta, A Mehta, H Vasudev… - Materials …, 2023 - pubs.rsc.org
The potential of neuromorphic computing to bring about revolutionary advancements in
multiple disciplines, such as artificial intelligence (AI), robotics, neurology, and cognitive …

Application of complex systems topologies in artificial neural networks optimization: An overview

S Kaviani, I Sohn - Expert Systems with Applications, 2021 - Elsevier
Following the success of artificial neural networks (ANNs) in different domains, intense
research has recently centered on changing the network architecture to optimize the …

Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks

T Hoefler, D Alistarh, T Ben-Nun, N Dryden… - Journal of Machine …, 2021 - jmlr.org
The growing energy and performance costs of deep learning have driven the community to
reduce the size of neural networks by selectively pruning components. Similarly to their …
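
The survey above centers on pruning; for reference, here is a minimal numpy sketch of one-shot unstructured magnitude pruning, the simplest baseline such surveys cover. The function name and the example layer shape are illustrative, not taken from the paper.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly `sparsity`
    of the weights are removed (one-shot, unstructured pruning)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return weights * (np.abs(weights) > threshold)

# Example: prune a random 256x512 layer to ~90% sparsity.
w = np.random.randn(256, 512)
w_sparse = magnitude_prune(w, 0.9)
print(1.0 - np.count_nonzero(w_sparse) / w.size)  # ~0.9
```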

Chasing sparsity in vision transformers: An end-to-end exploration

T Chen, Y Cheng, Z Gan, L Yuan… - Advances in Neural …, 2021 - proceedings.neurips.cc
Vision transformers (ViTs) have recently received explosive popularity, but their enormous
model sizes and training costs remain daunting. Conventional post-training pruning often …

Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science

DC Mocanu, E Mocanu, P Stone, PH Nguyen… - Nature …, 2018 - nature.com
Following the success of deep learning in various domains, artificial neural networks are
currently among the most widely used artificial intelligence methods. Taking inspiration from the …
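
The adaptive sparse connectivity referenced here (the SET algorithm) alternates training with prune-and-regrow updates on the connectivity mask. Below is a hedged numpy sketch of one such topology update, assuming a single weight matrix and boolean mask; the function name, the regrow initialization scale, and the default zeta are my own choices, and details such as the Erdős–Rényi initialization from the paper are omitted.

```python
import numpy as np

def set_topology_update(w, mask, zeta=0.3, rng=None):
    """One SET-style step: drop the `zeta` fraction of active connections with
    the smallest magnitude, then regrow the same number at random empty
    positions, keeping the total number of connections constant."""
    rng = np.random.default_rng() if rng is None else rng
    w, mask = w.copy(), mask.copy()
    active = np.flatnonzero(mask)
    n_drop = int(zeta * active.size)
    if n_drop == 0:
        return w, mask
    # Drop the weakest active connections.
    weakest = active[np.argsort(np.abs(w.ravel()[active]))[:n_drop]]
    mask.ravel()[weakest] = False
    w.ravel()[weakest] = 0.0
    # Regrow the same number of connections at random inactive positions.
    grown = rng.choice(np.flatnonzero(~mask), size=n_drop, replace=False)
    mask.ravel()[grown] = True
    w.ravel()[grown] = rng.normal(0.0, 0.01, size=n_drop)  # small re-initialization
    return w, mask
```

In the paper this kind of update runs after each training epoch, so the sparse topology gradually adapts to the data while the parameter count stays fixed.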

Outlier weighed layerwise sparsity (OWL): A missing secret sauce for pruning LLMs to high sparsity

L Yin, Y Wu, Z Zhang, CY Hsieh, Y Wang, Y Jia… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs), renowned for their remarkable performance across diverse
domains, present a challenge when it comes to practical deployment due to their colossal …
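
OWL's central idea, as the abstract describes it, is to allocate non-uniform sparsity across layers according to how many outlier weights each layer contains. The sketch below is a rough, simplified reading of that idea rather than the paper's exact formula: the outlier threshold `m`, the clipping margin `lam`, and the normalization are all assumptions made for illustration.

```python
import numpy as np

def owl_layer_sparsities(layer_weights, target_sparsity=0.7, m=5.0, lam=0.08):
    """Assign per-layer sparsity so that layers with more outlier weights
    (|w| > m * mean|w|) are pruned less, while staying near the global target."""
    ratios = np.array([np.mean(np.abs(w) > m * np.abs(w).mean())
                       for w in layer_weights])
    # Centre around the target: more outliers -> lower sparsity for that layer.
    shift = ratios - ratios.mean()
    sparsities = target_sparsity - lam * shift / (np.abs(shift).max() + 1e-12)
    return np.clip(sparsities, target_sparsity - lam, target_sparsity + lam)
```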

Do we actually need dense over-parameterization? In-time over-parameterization in sparse training

S Liu, L Yin, DC Mocanu… - … on Machine Learning, 2021 - proceedings.mlr.press
In this paper, we introduce a new perspective on training deep neural networks capable of
state-of-the-art performance without the need for the expensive over-parameterization by …
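
The "in-time over-parameterization" perspective is that a sparse network whose connectivity keeps changing during training can, in aggregate, explore far more parameters than it holds at any one step. The toy simulation below illustrates only that counting argument; the sizes and the random drop-and-grow rule are illustrative, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, density, steps = 10_000, 0.1, 50
n_active = int(density * n_params)

mask = np.zeros(n_params, dtype=bool)
mask[rng.choice(n_params, n_active, replace=False)] = True
ever_explored = mask.copy()

for _ in range(steps):
    # Each topology update drops 30% of active weights and regrows elsewhere.
    drop = rng.choice(np.flatnonzero(mask), int(0.3 * n_active), replace=False)
    mask[drop] = False
    grow = rng.choice(np.flatnonzero(~mask), drop.size, replace=False)
    mask[grow] = True
    ever_explored |= mask

print(f"density per step: {density:.0%}, "
      f"parameters explored over training: {ever_explored.mean():.0%}")
```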

Equivalence of restricted Boltzmann machines and tensor network states

J Chen, S Cheng, H Xie, L Wang, T Xiang - Physical Review B, 2018 - APS
The restricted Boltzmann machine (RBM) is one of the fundamental building blocks of deep
learning. RBM finds wide applications in dimensional reduction, feature extraction, and …
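
For readers unfamiliar with the building block itself: a binary RBM is defined by the energy E(v, h) = -a·v - b·h - v·W·h, and sampling proceeds by block Gibbs updates between the visible and hidden layers. The snippet below is a generic textbook sketch of those two pieces, not code from the paper, and the tensor-network mapping discussed there is outside its scope.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_energy(v, h, W, a, b):
    """Energy of a joint configuration: E(v, h) = -a.v - b.h - v.W.h."""
    return -(a @ v + b @ h + v @ W @ h)

def gibbs_step(v, W, a, b, rng=None):
    """One block Gibbs sweep: sample hidden given visible, then visible given hidden."""
    rng = np.random.default_rng() if rng is None else rng
    h = (rng.random(b.size) < sigmoid(b + v @ W)).astype(float)
    v_new = (rng.random(a.size) < sigmoid(a + W @ h)).astype(float)
    return v_new, h
```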

Dynamic sparse network for time series classification: Learning what to “see”

Q Xiao, B Wu, Y Zhang, S Liu… - Advances in …, 2022 - proceedings.neurips.cc
The receptive field (RF), which determines the region of the time series that is “seen” and used, is
critical to improving performance in time series classification (TSC). However, the …

SpaceNet: Make free space for continual learning

G Sokar, DC Mocanu, M Pechenizkiy - Neurocomputing, 2021 - Elsevier
The continual learning (CL) paradigm aims to enable neural networks to learn tasks
continually in a sequential fashion. The fundamental challenge in this learning paradigm is …