A comprehensive survey of dataset distillation
Deep learning technology has developed at an unprecedented pace over the last decade and has
become the primary choice in many application domains. This progress is mainly attributed …
Dataset quantization
State-of-the-art deep neural networks are trained with large amounts of data (millions or even
billions of samples). The expensive computation and memory costs make it difficult to train them …
Preventing zero-shot transfer degradation in continual learning of vision-language models
Continual learning (CL) can help pre-trained vision-language models efficiently adapt to
new or under-trained data distributions without re-training. Nevertheless, during the …
Towards lossless dataset distillation via difficulty-aligned trajectory matching
The ultimate goal of Dataset Distillation is to synthesize a small synthetic dataset such that a
model trained on this synthetic set will perform equally well as a model trained on the full …
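For context, the trajectory-matching objective this line of work builds on trains a student network on the synthetic set and penalizes its drift from an "expert" trajectory trained on the real data. A minimal sketch, with notation assumed here rather than taken from the paper ($\theta^{*}_{t}$ are expert parameters at step $t$, $\hat{\theta}_{t+N}$ the student parameters after $N$ steps on the synthetic set starting from $\theta^{*}_{t}$, and $M$ the expert horizon being matched):

$$
\mathcal{L}_{\text{TM}} \;=\; \frac{\lVert \hat{\theta}_{t+N} - \theta^{*}_{t+M} \rVert_2^2}{\lVert \theta^{*}_{t} - \theta^{*}_{t+M} \rVert_2^2}
$$

Difficulty alignment, per the title, concerns which segments $t$ of the expert trajectory the synthetic data is matched against; the exact schedule is specified in the paper.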
Does graph distillation see like vision dataset counterpart?
Training on large-scale graphs has achieved remarkable results in graph representation
learning, but its cost and storage have attracted increasing concerns. Existing graph …
Efficient dataset distillation via minimax diffusion
Dataset distillation reduces the storage and computational consumption of training a
network by generating a small surrogate dataset that encapsulates rich information of the …
Generalized large-scale data condensation via various backbone and statistical matching
The lightweight" local-match-global" matching introduced by SRe2L successfully creates a
distilled dataset with comprehensive information on the full 224x224 ImageNet-1k. However …
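For context, SRe2L's "local-match-global" matching optimizes synthetic images so that their batch (local) feature statistics under a pretrained backbone match the model's running BatchNorm (global) statistics. A minimal sketch of that regularizer, with layer-wise means $\mu_l$ and variances $\sigma_l^2$ as assumed notation:

$$
\mathcal{L}_{\text{stat}} \;=\; \sum_{l} \Big( \big\lVert \mu_l(\tilde{x}) - \mu_l^{\text{BN}} \big\rVert_2 \;+\; \big\lVert \sigma_l^2(\tilde{x}) - \sigma_l^{2,\text{BN}} \big\rVert_2 \Big)
$$

The generalization in this paper, per its title, replaces the single backbone and BatchNorm statistics with various backbones and broader statistical matching; the details are in the paper.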
You only condense once: Two rules for pruning condensed datasets
Dataset condensation is a crucial tool for enhancing training efficiency by reducing the size
of the training dataset, particularly in on-device scenarios. However, these scenarios have …
M3D: Dataset condensation by minimizing maximum mean discrepancy
Training state-of-the-art (SOTA) deep models often requires extensive data, resulting in
substantial training and storage costs. To address these challenges, dataset condensation …
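Maximum mean discrepancy has a standard empirical estimator, sketched here under assumed notation ($\mathcal{T}$ the real set, $\mathcal{S}$ the synthetic set, $f$ a feature extractor, $k$ a kernel; the paper's exact choices of $f$ and $k$ are its own):

$$
\widehat{\mathrm{MMD}}^2(\mathcal{T},\mathcal{S}) \;=\; \frac{1}{|\mathcal{T}|^2}\sum_{i,j} k\big(f(x_i),f(x_j)\big) \;-\; \frac{2}{|\mathcal{T}|\,|\mathcal{S}|}\sum_{i,j} k\big(f(x_i),f(s_j)\big) \;+\; \frac{1}{|\mathcal{S}|^2}\sum_{i,j} k\big(f(s_i),f(s_j)\big)
$$

Minimizing this over the synthetic set $\mathcal{S}$ pushes its feature distribution toward that of the real data.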
Navigating complexity: Toward lossless graph condensation via expanding window matching
Graph condensation aims to reduce the size of a large-scale graph dataset by synthesizing
a compact counterpart without sacrificing the performance of Graph Neural Networks …