AdaBin: Improving binary neural networks with adaptive binary sets

Z Tu, X Chen, P Ren, Y Wang - European conference on computer vision, 2022 - Springer
This paper studies the Binary Neural Networks (BNNs) in which weights and activations are
both binarized into 1-bit values, thus greatly reducing the memory usage and computational …
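For context on what "binarized into 1-bit values" typically means, here is a minimal NumPy sketch of the conventional sign-function weight binarization with a single scaling factor; this is the standard baseline, not AdaBin's adaptive binary sets, and all names are illustrative.

```python
import numpy as np

def binarize_weights(w):
    """Standard 1-bit weight binarization baseline:
    map each weight to {-1, +1} via sign() and keep a single
    scale alpha = mean(|w|) to preserve overall magnitude.
    (AdaBin generalizes the fixed {-1, +1} pair; not shown here.)"""
    alpha = np.abs(w).mean()          # per-tensor scale factor
    bits = np.where(w >= 0, 1.0, -1.0)  # 1-bit values in {-1, +1}
    return alpha * bits, bits, alpha

# usage: binarize a small random weight tensor
w = np.random.randn(4, 4).astype(np.float32)
w_bin, bits, alpha = binarize_weights(w)
```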

MIMO radar unimodular waveform design with learned complex circle manifold network

K Zhong, J Hu, Z Zhao, X Yu, G Cui… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Waveform design with constant modulus constraint (CMC) is of great importance in multiple-
input–multiple-output radar systems. Both the relaxations in model-based waveform design …
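As background on the constant modulus constraint (CMC) mentioned in this snippet: a unimodular waveform has samples of unit magnitude, x_n = e^{j\phi_n}, and the simplest projection onto that complex-circle manifold keeps each sample's phase while forcing its modulus to one. The sketch below (NumPy, illustrative names) shows only that generic projection, not the paper's learned network.

```python
import numpy as np

def project_unimodular(x, eps=1e-12):
    """Project a complex vector onto the constant-modulus (unit-circle)
    manifold: keep each sample's phase, force its magnitude to 1.
    Generic CMC projection; not the paper's learned model."""
    return x / np.maximum(np.abs(x), eps)

x = np.random.randn(16) + 1j * np.random.randn(16)
u = project_unimodular(x)
assert np.allclose(np.abs(u), 1.0)
```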

Real-time controllable denoising for image and video

Z Zhang, Y Jiang, W Shao, X Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Controllable image denoising aims to generate clean samples with human perceptual priors
and balance sharpness and smoothness. In traditional filter-based denoising methods, this …

Mixed-precision quantization for federated learning on resource-constrained heterogeneous devices

H Chen, H Vikalo - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
While federated learning (FL) systems often utilize quantization to battle communication and
computational bottlenecks, they have heretofore been limited to deploying fixed-precision …

Data quality-aware mixed-precision quantization via hybrid reinforcement learning

Y Wang, S Guo, J Guo, Y Zhang… - … on Neural Networks …, 2024 - ieeexplore.ieee.org
Mixed-precision quantization mostly predetermines the model bit-width settings before
actual training due to the non-differentiable bit-width sampling process, obtaining suboptimal …

An energy-and-area-efficient CNN accelerator for universal powers-of-two quantization

T **a, B Zhao, J Ma, G Fu, W Zhao… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
CNN model computation on edge devices is tightly restricted to the limited resource and
power budgets, which motivates the low-bit quantization technology to compress CNN …
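To illustrate what powers-of-two quantization generally entails, the NumPy sketch below rounds each weight to the nearest signed power of two within an exponent budget, so multiplications can be realized as bit shifts in hardware. The exponent range and names are illustrative assumptions, not the accelerator's specific scheme.

```python
import numpy as np

def quantize_pow2(w, min_exp=-6, max_exp=0):
    """Round each weight to the nearest signed power of two (or zero),
    so a multiply reduces to a bit shift. The exponent range
    [min_exp, max_exp] is an illustrative budget, not from the paper."""
    sign = np.sign(w)
    mag = np.abs(w)
    # nearest exponent in the log2 domain, clipped to the allowed range
    exp = np.clip(np.round(np.log2(np.maximum(mag, 2.0 ** min_exp))),
                  min_exp, max_exp)
    q = sign * (2.0 ** exp)
    # weights too small to represent are flushed to zero
    q[mag < 2.0 ** (min_exp - 1)] = 0.0
    return q

w = np.random.randn(8).astype(np.float32) * 0.5
print(quantize_pow2(w))
```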

Contemporary advances in neural network quantization: A survey

M Li, Z Huang, L Chen, J Ren, M Jiang… - … Joint Conference on …, 2024 - ieeexplore.ieee.org
In the realm of deep learning, the advent of large-scale pre-trained models has significantly
advanced computer vision and natural language processing. However, deploying these …

BiPer: Binary Neural Networks using a Periodic Function

E Vargas, CV Correa, C Hinojosa… - Proceedings of the …, 2024 - openaccess.thecvf.com
Quantized neural networks employ reduced precision representations for both weights and
activations. This quantization process significantly reduces the memory requirements and …

A 127.8 TOPS/W arbitrarily quantized 1-to-8b scalable-precision accelerator for general-purpose deep learning with reduction of storage, logic and latency waste

S Moon, HG Mun, H Son, JY Sim - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
Research on deep learning accelerators has focused on inference tasks to improve
performance by means of maximally utilizing sparsity and quantization. Unlike CNN-only …

Convolutional neural networks quantization with double-stage squeeze-and-threshold

B Wu, B Waschneck, CG Mayr - International Journal of Neural …, 2022 - World Scientific
It has been proven that, compared to using 32-bit floating-point numbers in the training
phase, Deep Convolutional Neural Networks (DCNNs) can operate with low-precision …