Dataset quantization with active learning based adaptive sampling

Z Zhao, Y Shang, J Wu, Y Yan - European Conference on Computer …, 2024 - Springer
Deep learning has made remarkable progress recently, largely due to the availability of
large, well-labeled datasets. However, training on such datasets elevates costs and …

Enhancing Post-training Quantization Calibration through Contrastive Learning

Y Shang, G Liu, RR Kompella… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Post-training quantization (PTQ) converts a pre-trained full-precision (FP) model into a
quantized model in a training-free manner. Determining suitable quantization parameters …
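
As a rough illustration of what "determining suitable quantization parameters" means in practice, the sketch below derives a per-tensor scale and zero-point for 8-bit asymmetric quantization from min/max statistics of a small calibration set. The min-max rule, array names, and bit width are illustrative assumptions, not the contrastive calibration procedure proposed in the paper.

import numpy as np

def minmax_calibrate(calib_activations, num_bits=8):
    # Derive scale and zero-point for asymmetric unsigned quantization
    # from the min/max of a calibration batch.
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min = min(0.0, float(calib_activations.min()))  # keep zero exactly representable
    x_max = max(0.0, float(calib_activations.max()))
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, num_bits=8):
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 2 ** num_bits - 1).astype(np.uint8)

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

calib = np.random.randn(512, 128).astype(np.float32)  # hypothetical calibration activations
scale, zp = minmax_calibrate(calib)
x = np.random.randn(4, 128).astype(np.float32)
x_hat = dequantize(quantize(x, scale, zp), scale, zp)
print("mean absolute quantization error:", np.abs(x - x_hat).mean())

PTQ methods differ mainly in how these parameters are chosen from the calibration data (min-max, percentile clipping, or error-minimizing search), which is the calibration step this paper targets.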

Dataset distillation from first principles: Integrating core information extraction and purposeful learning

V Kungurtsev, Y Peng, J Gu, S Vahidian… - arXiv preprint arXiv …, 2024 - arxiv.org

… Contrastive Pre-training for Data Efficiency

Y Guo, M Kankanhalli - arXiv preprint arXiv:2411.09126, 2024 - arxiv.org
While contrastive pre-training is widely employed, its data efficiency problem has remained
relatively under-explored thus far. Existing methods often rely on static coreset selection …
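
For context on the "static coreset selection" the snippet contrasts against, the sketch below picks a fixed subset of a dataset once, before pre-training, by ranking examples with a precomputed per-example score. The scoring source, the 30% budget, and the hardest-first rule are illustrative assumptions rather than the method of this paper.

import numpy as np

def static_coreset(scores, budget_fraction=0.3, keep="hardest"):
    # Choose a fixed subset of example indices once, before pre-training starts.
    n_keep = max(1, int(len(scores) * budget_fraction))
    order = np.argsort(scores)          # ascending: easy -> hard
    return order[-n_keep:] if keep == "hardest" else order[:n_keep]

rng = np.random.default_rng(0)
scores = rng.random(10_000)             # hypothetical per-example scores (e.g., proxy-model loss)
coreset_idx = static_coreset(scores, budget_fraction=0.3)
print(f"kept {len(coreset_idx)} of {len(scores)} examples")

The point the abstract makes is that such a subset is selected once and never revisited during pre-training, which is what "static" refers to.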

BACON: Bayesian Optimal Condensation Framework for Dataset Distillation

Z Zhou, H Zhao, G Cheng, X Li, S Lyu, W Feng… - arXiv preprint arXiv …, 2024 - arxiv.org
Dataset Distillation (DD) aims to distill knowledge from extensive datasets into more
compact ones while preserving performance on the test set, thereby reducing storage costs …
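
Because the snippet only states the goal of dataset distillation, a concrete sketch may help: the loop below optimizes a handful of learnable synthetic images so that the gradients they induce on a small model mimic those from real batches (the classic gradient-matching flavor of DD). This is a generic sketch, not BACON's Bayesian optimal condensation objective; the architecture, the random stand-in data, and the hyperparameters are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny classifier used only to produce gradients to match (architecture is an assumption).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))

# Synthetic dataset: ten learnable images, one per class (size is an assumption).
syn_x = torch.randn(10, 1, 28, 28, requires_grad=True)
syn_y = torch.arange(10)
opt = torch.optim.SGD([syn_x], lr=0.1)

for step in range(100):
    # A real mini-batch would come from the full dataset; random tensors stand in here.
    real_x, real_y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))

    real_loss = F.cross_entropy(model(real_x), real_y)
    syn_loss = F.cross_entropy(model(syn_x), syn_y)

    # Gradients the real and synthetic batches induce on the model parameters.
    g_real = torch.autograd.grad(real_loss, list(model.parameters()))
    g_syn = torch.autograd.grad(syn_loss, list(model.parameters()), create_graph=True)

    # Update the synthetic images so the two sets of gradients agree.
    match = sum(F.mse_loss(gs, gr.detach()) for gs, gr in zip(g_syn, g_real))
    opt.zero_grad()
    match.backward()
    opt.step()

Full methods also periodically re-train or re-initialize the model and sample real batches per class; those details are omitted here.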

DCT: Divide-and-Conquer Transformer Network with Knowledge Transfer for Query-driven HOI Detection

C Sun, B Duan, H Latapie, G Liu, Y Yan - Proceedings of the 2024 …, 2024 - dl.acm.org

Going Beyond Feature Similarity: Effective Dataset Distillation based on Class-aware Conditional Mutual Information

X Zhong, B Chen, H Fang, X Gu, ST Xia… - arXiv preprint arXiv …, 2024 - arxiv.org
Dataset distillation (DD) aims to minimize the time and memory consumption needed for
training deep neural networks on large datasets, by creating a smaller synthetic dataset that …
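
Since the snippet cuts off before reaching the class-aware conditional mutual information named in the title, a brief refresher may be useful: conditional mutual information I(X; Y | Z) measures how much two variables share beyond what the conditioning variable (here, the class) already explains. The sketch below is a plain plug-in estimator for discrete samples, not the class-aware CMI criterion defined in the paper; the example variables are hypothetical.

import numpy as np
from collections import Counter

def conditional_mutual_information(x, y, z):
    # Plug-in estimate of I(X; Y | Z) in nats for discrete samples.
    n = len(x)
    pxyz, pxz, pyz, pz = Counter(zip(x, y, z)), Counter(zip(x, z)), Counter(zip(y, z)), Counter(z)
    cmi = 0.0
    for (xi, yi, zi), c in pxyz.items():
        p_xyz = c / n
        cmi += p_xyz * np.log(p_xyz * (pz[zi] / n) / ((pxz[(xi, zi)] / n) * (pyz[(yi, zi)] / n)))
    return cmi

rng = np.random.default_rng(0)
z = rng.integers(0, 10, size=5000)             # class labels
x = (z + rng.integers(0, 3, size=5000)) % 10   # feature strongly tied to the class
y = rng.integers(0, 4, size=5000)              # variable roughly independent of x given z
print("estimated I(X; Y | Z):", conditional_mutual_information(x, y, z))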