Explainable reinforcement learning: A survey and comparative review
Explainable reinforcement learning (XRL) is an emerging subfield of explainable machine
learning that has attracted considerable attention in recent years. The goal of XRL is to …
A comprehensive review of binary neural network
Deep learning (DL) has recently changed the development of intelligent systems and is
widely adopted in many real-life applications. Despite their various benefits and potentials …
A survey of quantization methods for efficient neural network inference
This chapter provides approaches to the problem of quantizing the numerical values in deep
Neural Network computations, covering the advantages/disadvantages of current methods …
Pruning and quantization for deep neural network acceleration: A survey
Deep neural networks have been applied in many applications exhibiting extraordinary
abilities in the field of computer vision. However, complex network architectures challenge …
VanillaNet: the power of minimalism in deep learning
At the heart of foundation models is the philosophy of "more is different", exemplified by the
astonishing success in computer vision and natural language processing. However, the …
Dynamic convolution: Attention over convolution kernels
Light-weight convolutional neural networks (CNNs) suffer performance degradation as their
low computational budgets constrain both the depth (number of convolution layers) and the …
Binary neural networks: A survey
The binary neural network, largely saving the storage and computation, serves as a
promising technique for deploying deep models on resource-limited devices. However, the …
A model or 603 exemplars: Towards memory-efficient class-incremental learning
Real-world applications require the classification model to adapt to new classes without
forgetting old ones. Correspondingly, Class-Incremental Learning (CIL) aims to train a …
Forward and backward information retention for accurate binary neural networks
Weight and activation binarization is an effective approach to deep neural network
compression and can accelerate the inference by leveraging bitwise operations. Although …
PD-Quant: Post-training quantization based on prediction difference metric
Post-training quantization (PTQ) is a neural network compression technique that converts a
full-precision model into a quantized model using lower-precision data types. Although it can …