Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments

X Bai, X Wang, X Liu, Q Liu, J Song, N Sebe, B Kim - Pattern Recognition, 2021 - Elsevier
Deep learning has recently achieved great success in many visual recognition tasks.
However, the deep neural networks (DNNs) are often perceived as black-boxes, making …

Patch diffusion: Faster and more data-efficient training of diffusion models

Z Wang, Y Jiang, H Zheng, P Wang… - Advances in neural …, 2024 - proceedings.neurips.cc
Diffusion models are powerful, but they require a lot of time and data to train. We propose
Patch Diffusion, a generic patch-wise training framework, to significantly reduce the training …

Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks

T Hoefler, D Alistarh, T Ben-Nun, N Dryden… - Journal of Machine …, 2021 - jmlr.org
The growing energy and performance costs of deep learning have driven the community to
reduce the size of neural networks by selectively pruning components. Similarly to their …

Pruning and quantization for deep neural network acceleration: A survey

T Liang, J Glossner, L Wang, S Shi, X Zhang - Neurocomputing, 2021 - Elsevier
Deep neural networks have been applied in many applications exhibiting extraordinary
abilities in the field of computer vision. However, complex network architectures challenge …

Model compression and hardware acceleration for neural networks: A comprehensive survey

L Deng, G Li, S Han, L Shi, Y Xie - Proceedings of the IEEE, 2020 - ieeexplore.ieee.org
Domain-specific hardware is becoming a promising topic in the backdrop of improvement
slow down for general-purpose processors due to the foreseeable end of Moore's Law …

Revisiting random channel pruning for neural network compression

Y Li, K Adamczewski, W Li, S Gu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Channel (or 3D filter) pruning serves as an effective way to accelerate the inference of
neural networks. There has been a flurry of algorithms that try to solve this practical problem …

Dreaming to distill: Data-free knowledge transfer via DeepInversion

H Yin, P Molchanov, JM Alvarez, Z Li… - Proceedings of the …, 2020 - openaccess.thecvf.com
We introduce DeepInversion, a new method for synthesizing images from the image
distribution used to train a deep neural network. We "invert" a trained network (teacher) to …

Importance estimation for neural network pruning

P Molchanov, A Mallya, S Tyree… - Proceedings of the …, 2019 - openaccess.thecvf.com
Structural pruning of neural network parameters reduces computational, energy, and
memory transfer costs during inference. We propose a novel method that estimates the …