Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments
Deep learning has recently achieved great success in many visual recognition tasks.
However, deep neural networks (DNNs) are often perceived as black boxes, making …
Patch diffusion: Faster and more data-efficient training of diffusion models
Diffusion models are powerful, but they require a lot of time and data to train. We propose
Patch Diffusion, a generic patch-wise training framework, to significantly reduce the training …
Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks
The growing energy and performance costs of deep learning have driven the community to
reduce the size of neural networks by selectively pruning components. Similarly to their …
Pruning and quantization for deep neural network acceleration: A survey
Deep neural networks have been applied in many applications exhibiting extraordinary
abilities in the field of computer vision. However, complex network architectures challenge …
Model compression and hardware acceleration for neural networks: A comprehensive survey
Domain-specific hardware is becoming a promising topic against the backdrop of the
slowdown in improvement for general-purpose processors due to the foreseeable end of Moore's Law …
Revisiting random channel pruning for neural network compression
Channel (or 3D filter) pruning serves as an effective way to accelerate the inference of
neural networks. There has been a flurry of algorithms that try to solve this practical problem …
Dreaming to distill: Data-free knowledge transfer via DeepInversion
We introduce DeepInversion, a new method for synthesizing images from the image
distribution used to train a deep neural network. We "invert" a trained network (teacher) to …
Importance estimation for neural network pruning
Structural pruning of neural network parameters reduces computational, energy, and
memory transfer costs during inference. We propose a novel method that estimates the …