ADFQ-ViT: Activation-Distribution-Friendly Post-Training Quantization for Vision Transformers
Vision Transformers (ViTs) have exhibited exceptional performance across diverse computer vision tasks, while their substantial parameter size incurs significantly increased …
Towards Accurate Post-Training Quantization of Vision Transformers via Error Reduction
Post-training quantization (PTQ) for vision transformers (ViTs) has received increasing attention from both academic and industrial communities due to its minimal data needs and …
Semantics Prompting Data-Free Quantization for Low-Bit Vision Transformers
Data-free quantization (DFQ), which facilitates model quantization without real data to address increasing concerns about data security, has garnered significant attention within …
Low-Bit Quantization Favors Undertrained LLMs: Scaling Laws for Quantized LLMs with 100T Training Tokens
We reveal that low-bit quantization favors undertrained large language models (LLMs) by observing that models with larger sizes or fewer training tokens experience less quantization …
Mixed Non-linear Quantization for Vision Transformers
The majority of quantization methods have been proposed to reduce the model size of Vision Transformers, yet most of them have overlooked the quantization of non-linear …
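All of the listed papers build on the same primitive: mapping floating-point weights or activations to low-bit integers. None of the snippets above carry implementation details, so the following is only a generic NumPy sketch of per-tensor uniform affine quantization, the baseline that post-training quantization methods refine; the function names, the min/max calibration, and the 4-bit setting are illustrative assumptions, not taken from any listed paper.

```python
import numpy as np

def quantize_uniform(x, num_bits=8):
    """Per-tensor uniform affine quantization: floats -> num_bits-wide integers.

    Calibration here is plain min/max; the papers above differ mainly in how
    they choose these ranges and handle outliers (simplified away here).
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (float(x.max()) - float(x.min())) / (qmax - qmin)
    scale = max(scale, 1e-8)  # guard against a constant tensor
    zero_point = int(round(qmin - float(x.min()) / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map integers back to an approximation of the original floats."""
    return scale * (q.astype(np.float32) - zero_point)

# Quantize a random stand-in for a weight tensor and measure round-trip error.
w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize_uniform(w, num_bits=4)
w_hat = dequantize(q, s, z)
print("max abs error at 4 bits:", np.abs(w - w_hat).max())
```

Running the sketch with larger num_bits shrinks the round-trip error, which is exactly the accuracy-versus-size trade-off these papers address.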