AdaBin: Improving binary neural networks with adaptive binary sets
This paper studies Binary Neural Networks (BNNs), in which weights and activations are
both binarized to 1-bit values, thus greatly reducing memory usage and computational …
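A minimal sketch of the 1-bit binarization this entry refers to, assuming the common sign-plus-scaling baseline (mean absolute value as the scale); AdaBin's own contribution, adaptive binary sets learned per layer, is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def binarize_tensor(w):
    """Baseline 1-bit weight binarization: sign codes plus a scalar scale.

    The scale alpha = mean(|w|) limits quantization error; this is the common
    XNOR-Net-style baseline, not AdaBin's adaptive binary sets.
    """
    alpha = np.mean(np.abs(w))            # per-tensor scaling factor
    codes = np.where(w >= 0, 1.0, -1.0)   # 1-bit codes in {-1, +1}
    return alpha * codes

w = np.random.randn(3, 3).astype(np.float32)
print(binarize_tensor(w))                 # values in {-alpha, +alpha}
```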
MIMO radar unimodular waveform design with learned complex circle manifold network
Waveform design with constant modulus constraint (CMC) is of great importance in multiple-
input–multiple-output radar systems. Both the relaxations in model-based waveform design …
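A minimal sketch of the constant modulus constraint itself, assuming a simple phase-only projection onto the complex circle manifold; the function name and epsilon guard are illustrative, and the learned network from the paper is not shown.

```python
import numpy as np

def project_unit_modulus(x, eps=1e-12):
    """Project complex samples onto the unit circle (constant modulus).

    Each sample keeps only its phase, x_n -> exp(j * angle(x_n)), which is
    the feasible set a unimodular waveform must lie on.
    """
    return x / np.maximum(np.abs(x), eps)

x = np.random.randn(64) + 1j * np.random.randn(64)   # unconstrained design
u = project_unit_modulus(x)
print(np.allclose(np.abs(u), 1.0))                    # True: unimodular
```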
Real-time controllable denoising for image and video
Controllable image denoising aims to generate clean samples with human perceptual priors
and balance sharpness and smoothness. In traditional filter-based denoising methods, this …
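A minimal sketch of the controllability idea in traditional filter-based denoising, assuming a Gaussian filter and a linear blend as the control knob; this only illustrates the sharpness/smoothness trade-off the snippet mentions, not the paper's real-time method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def controllable_denoise(noisy, level, sigma=2.0):
    """Trade sharpness against smoothness with a single control level.

    level = 0 returns the sharp, noisy input; level = 1 returns the fully
    smoothed image; values in between interpolate. The filter choice and the
    linear blend are illustrative assumptions.
    """
    smooth = gaussian_filter(noisy, sigma=sigma)
    return (1.0 - level) * noisy + level * smooth

img = np.random.rand(64, 64).astype(np.float32)
for level in (0.0, 0.5, 1.0):
    print(level, float(controllable_denoise(img, level).std()))
```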
Mixed-precision quantization for federated learning on resource-constrained heterogeneous devices
While federated learning (FL) systems often utilize quantization to battle communication and
computational bottlenecks, they have heretofore been limited to deploying fixed-precision …
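A minimal sketch of the per-device knob a mixed-precision FL system could tune, assuming a symmetric min/max uniform quantizer; the paper's bit-width assignment policy is not reproduced, and the function name and scaling scheme are assumptions.

```python
import numpy as np

def uniform_quantize(x, bits):
    """Symmetric uniform quantization of a model update to `bits` precision.

    Lower bit widths shrink the payload a resource-constrained device must
    compute with and upload, at the cost of larger quantization error.
    """
    qmax = 2 ** (bits - 1) - 1                          # e.g. 127 for 8 bits
    scale = max(float(np.max(np.abs(x))) / qmax, 1e-12)
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                                    # dequantized values

update = np.random.randn(10_000).astype(np.float32)
for bits in (8, 4, 2):                                  # heterogeneous devices
    mse = float(np.mean((update - uniform_quantize(update, bits)) ** 2))
    print(f"{bits}-bit  MSE={mse:.2e}")
```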
Data quality-aware mixed-precision quantization via hybrid reinforcement learning
Mixed-precision quantization mostly predetermines the model bit-width settings before
actual training, due to the non-differentiable bit-width sampling process, yielding suboptimal …
An energy-and-area-efficient CNN accelerator for universal powers-of-two quantization
CNN model computation on edge devices is tightly constrained by limited resource and
power budgets, which motivates low-bit quantization technology to compress CNN …
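A minimal sketch of powers-of-two quantization, the property that lets an accelerator replace multiplications with bit shifts; the exponent range and the zero-flush threshold below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def po2_quantize(x, min_exp=-8, max_exp=0):
    """Quantize values to signed powers of two: q = sign(x) * 2**k.

    With power-of-two levels, a multiply in the datapath becomes a shift by k,
    which is where the energy and area savings come from.
    """
    mag = np.abs(x)
    exp = np.clip(np.round(np.log2(np.maximum(mag, 2.0 ** min_exp))),
                  min_exp, max_exp)
    q = np.sign(x) * 2.0 ** exp
    return np.where(mag < 2.0 ** (min_exp - 1), 0.0, q)  # flush tiny values

w = 0.5 * np.random.randn(8).astype(np.float32)
print(po2_quantize(w))
```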
Contemporary advances in neural network quantization: A survey
M Li, Z Huang, L Chen, J Ren, M Jiang… - … Joint Conference on …, 2024 - ieeexplore.ieee.org
In the realm of deep learning, the advent of large-scale pre-trained models has significantly
advanced computer vision and natural language processing. However, deploying these …
BiPer: Binary Neural Networks using a Periodic Function
Quantized neural networks employ reduced precision representations for both weights and
activations. This quantization process significantly reduces the memory requirements and …
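A minimal sketch of what the title suggests: expressing binarization through a periodic function, with a smooth periodic surrogate for the backward pass. The square-wave/sine pairing and the frequency omega are assumptions about the general idea, not BiPer's exact formulation.

```python
import numpy as np

def periodic_binarize(x, omega=1.0):
    """Forward pass: binarize through a square wave, sign(sin(omega * x)),
    instead of a plain sign(x)."""
    return np.where(np.sin(omega * x) >= 0.0, 1.0, -1.0)

def periodic_surrogate_grad(x, omega=1.0):
    """Backward pass: gradient of the smooth first harmonic sin(omega * x),
    used as a differentiable stand-in for the discontinuous square wave."""
    return omega * np.cos(omega * x)

x = np.linspace(-3.0, 3.0, 7)
print(periodic_binarize(x))
print(periodic_surrogate_grad(x))
```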
A 127.8 TOPS/W arbitrarily quantized 1-to-8b scalable-precision accelerator for general-purpose deep learning with reduction of storage, logic and latency waste
Research on deep learning accelerators has focused on inference tasks to improve
performance by maximally exploiting sparsity and quantization. Unlike CNN-only …
Convolutional neural networks quantization with double-stage squeeze-and-threshold
It has been proven that, compared to using 32-bit floating-point numbers in the training
phase, Deep Convolutional Neural Networks (DCNNs) can operate with low-precision …