Model quantization and hardware acceleration for vision transformers: A comprehensive survey
Vision Transformers (ViTs) have recently garnered considerable attention, emerging as a promising alternative to convolutional neural networks (CNNs) in several vision-related …
Efficient multimodal large language models: A survey
In the past year, Multimodal Large Language Models (MLLMs) have demonstrated remarkable performance in tasks such as visual question answering, visual understanding …
Outlier-aware slicing for post-training quantization in vision transformer
Post-Training Quantization (PTQ) is a vital technique for network compression and acceleration, gaining prominence as model sizes increase. This paper addresses a critical …
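For readers unfamiliar with PTQ itself, the basic operation is to map full-precision weights onto a low-bit integer grid using a scale and zero-point derived from the tensor's range. The NumPy sketch below is a generic uniform-quantization baseline under that assumption, not the outlier-aware slicing method this paper proposes; all function names are hypothetical.

```python
import numpy as np

def quantize_uniform(w: np.ndarray, num_bits: int = 8):
    """Asymmetric uniform quantization of a tensor to a signed integer grid."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # guard against a constant tensor
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map integer codes back to (approximate) floating-point values."""
    return (q.astype(np.float32) - zero_point) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # ViT-like weight matrix used purely as illustrative data
    w = rng.normal(0.0, 0.02, size=(768, 768)).astype(np.float32)
    q, scale, zp = quantize_uniform(w, num_bits=4)
    w_hat = dequantize(q, scale, zp)
    print("mean squared quantization error:", float(np.mean((w - w_hat) ** 2)))
```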
ERQ: Error reduction for post-training quantization of vision transformers
Post-training quantization (PTQ) for vision transformers (ViTs) has garnered significant attention due to its efficiency in compressing models. However, existing methods typically …
I&S-ViT: An inclusive & stable method for pushing the limit of post-training ViTs quantization
Albeit the scalable performance of vision transformers (ViTs), the dense computational costs (training & inference) undermine their position in industrial applications. Post-training …
Data quality-aware mixed-precision quantization via hybrid reinforcement learning
Mixed-precision quantization mostly predetermines the model bit-width settings before actual training due to the non-differential bit-width sampling process, obtaining suboptimal …
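As a rough illustration of the mixed-precision idea (and not the hybrid reinforcement-learning scheme described in this paper), one can assign per-layer bit-widths greedily, upgrading whichever layer gains the most from extra precision until an average bit budget is reached. The sketch below assumes a simple per-layer quantization-error proxy; the function names and the budget heuristic are illustrative.

```python
import numpy as np

def quant_error(w: np.ndarray, bits: int) -> float:
    """MSE introduced by symmetric uniform quantization at a given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = float(np.abs(w).max()) / qmax or 1.0
    w_hat = np.clip(np.round(w / scale), -qmax - 1, qmax) * scale
    return float(np.mean((w - w_hat) ** 2))

def assign_bitwidths(layers, candidate_bits=(2, 4, 8), avg_budget=4.0):
    """Greedy mixed-precision assignment: start every layer at the lowest
    bit-width and repeatedly upgrade the layer whose upgrade removes the
    most quantization error, until the average bit budget is spent."""
    bits = {name: candidate_bits[0] for name in layers}
    avg = lambda: sum(bits.values()) / len(bits)
    while avg() < avg_budget:
        best, gain = None, 0.0
        for name, w in layers.items():
            i = candidate_bits.index(bits[name])
            if i + 1 == len(candidate_bits):
                continue  # already at the highest candidate bit-width
            g = quant_error(w, bits[name]) - quant_error(w, candidate_bits[i + 1])
            if g > gain:
                best, gain = name, g
        if best is None:
            break
        bits[best] = candidate_bits[candidate_bits.index(bits[best]) + 1]
    return bits

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # toy "layers" with increasing weight spread to make sensitivities differ
    layers = {f"block{i}.fc": rng.normal(0, 0.02 * (i + 1), size=(256, 256)) for i in range(4)}
    print(assign_bitwidths(layers))
```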
MagR: Weight magnitude reduction for enhancing post-training quantization
In this paper, we present a simple optimization-based preprocessing technique called Weight Magnitude Reduction (MagR) to improve the performance of post-training …
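The motivation behind reducing weight magnitudes is straightforward: a symmetric uniform quantizer uses a step size of roughly max|W| / (2^(b-1) − 1), so a handful of large-magnitude weights inflate the step, and hence the rounding error, for every other weight. The toy comparison below only demonstrates that effect; it is not the MagR preprocessing itself, which is an optimization-based method.

```python
import numpy as np

def sym_quant_mse(w: np.ndarray, bits: int = 4) -> float:
    """MSE of symmetric uniform quantization; the step size scales with max |w|."""
    qmax = 2 ** (bits - 1) - 1
    step = float(np.abs(w).max()) / qmax
    w_hat = np.clip(np.round(w / step), -qmax - 1, qmax) * step
    return float(np.mean((w - w_hat) ** 2))

rng = np.random.default_rng(2)
body = rng.normal(0.0, 0.02, size=10_000)
w_plain = body.copy()
w_outlier = body.copy()
w_outlier[:5] = 0.5  # a handful of large-magnitude weights

print("MSE without outliers:", sym_quant_mse(w_plain))
print("MSE with outliers:   ", sym_quant_mse(w_outlier))  # outliers inflate the step size
```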
COMQ: A backpropagation-free algorithm for post-training quantization
Post-training quantization (PTQ) has emerged as a practical approach to compress large neural networks, making them highly efficient for deployment. However, effectively reducing …
Hierarchical Mixed-Precision Post-Training Quantization for SAR Ship Detection Networks
H Wei, Z Wang, Y Ni - Remote Sensing, 2024 - mdpi.com
Convolutional neural network (CNN)-based synthetic aperture radar (SAR) ship detection models operating directly on satellites can reduce transmission latency and improve real …
MetaAug: Meta-data Augmentation for Post-training Quantization
Post-Training Quantization (PTQ) has received significant attention because it requires only a small set of calibration data to quantize a full-precision model, which is more …
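The calibration the abstract refers to is, in its plainest form, a statistics pass: run a small batch of images through the full-precision model, record activation ranges, and derive quantization parameters from them. The sketch below shows that generic recipe with percentile clipping; it is not MetaAug's meta-data augmentation, and the activations here are synthetic stand-ins.

```python
import numpy as np

def calibrate_activation_range(activations, percentile=99.9):
    """Derive an activation clipping range from a small calibration set.
    A high percentile instead of the raw max keeps the range robust to outliers."""
    flat = np.concatenate([a.ravel() for a in activations])
    lo = float(np.percentile(flat, 100.0 - percentile))
    hi = float(np.percentile(flat, percentile))
    return lo, hi

def quant_params(lo, hi, bits=8):
    """Scale and zero-point for asymmetric quantization over [lo, hi]."""
    qmin, qmax = 0, 2 ** bits - 1
    scale = (hi - lo) / (qmax - qmin)
    zero_point = int(round(qmin - lo / scale))
    return scale, zero_point

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # stand-in for post-GELU/ReLU activations gathered from ~32 calibration images
    calib = [np.maximum(rng.normal(0, 1, size=(197, 768)), 0) for _ in range(32)]
    lo, hi = calibrate_activation_range(calib)
    print("range:", (lo, hi), "scale/zero-point:", quant_params(lo, hi))
```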