Explainable reinforcement learning: A survey and comparative review

S Milani, N Topin, M Veloso, F Fang - ACM Computing Surveys, 2024 - dl.acm.org
Explainable reinforcement learning (XRL) is an emerging subfield of explainable machine
learning that has attracted considerable attention in recent years. The goal of XRL is to …

A comprehensive review of binary neural network

C Yuan, SS Agaian - Artificial Intelligence Review, 2023 - Springer
Deep learning (DL) has recently changed the development of intelligent systems and is
widely adopted in many real-life applications. Despite their various benefits and potentials …

A survey of quantization methods for efficient neural network inference

A Gholami, S Kim, Z Dong, Z Yao… - Low-Power Computer …, 2022 - taylorfrancis.com
This chapter provides approaches to the problem of quantizing the numerical values in deep
neural network computations, covering the advantages/disadvantages of current methods …

Pruning and quantization for deep neural network acceleration: A survey

T Liang, J Glossner, L Wang, S Shi, X Zhang - Neurocomputing, 2021 - Elsevier
Deep neural networks have been applied across many domains, exhibiting extraordinary
abilities in the field of computer vision. However, complex network architectures challenge …

Vanillanet: the power of minimalism in deep learning

H Chen, Y Wang, J Guo, D Tao - Advances in Neural …, 2024 - proceedings.neurips.cc
At the heart of foundation models is the philosophy of "more is different", exemplified by the
astonishing success in computer vision and natural language processing. However, the …

Dynamic convolution: Attention over convolution kernels

Y Chen, X Dai, M Liu, D Chen… - Proceedings of the …, 2020 - openaccess.thecvf.com
Light-weight convolutional neural networks (CNNs) suffer performance degradation as their
low computational budgets constrain both the depth (number of convolution layers) and the …

Binary neural networks: A survey

H Qin, R Gong, X Liu, X Bai, J Song, N Sebe - Pattern Recognition, 2020 - Elsevier
The binary neural network, which greatly reduces storage and computation costs, serves as a
promising technique for deploying deep models on resource-limited devices. However, the …

A model or 603 exemplars: Towards memory-efficient class-incremental learning

DW Zhou, QW Wang, HJ Ye, DC Zhan - arXiv preprint arXiv:2205.13218, 2022 - arxiv.org
Real-world applications require the classification model to adapt to new classes without
forgetting old ones. Correspondingly, Class-Incremental Learning (CIL) aims to train a …

Forward and backward information retention for accurate binary neural networks

H Qin, R Gong, X Liu, M Shen, Z Wei… - Proceedings of the …, 2020 - openaccess.thecvf.com
Weight and activation binarization is an effective approach to deep neural network
compression and can accelerate the inference by leveraging bitwise operations. Although …

Pd-quant: Post-training quantization based on prediction difference metric

J Liu, L Niu, Z Yuan, D Yang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Post-training quantization (PTQ) is a neural network compression technique that converts a
full-precision model into a quantized model using lower-precision data types. Although it can …