MGDD: A meta generator for fast dataset distillation

S Liu, X Wang - Advances in Neural Information Processing …, 2024 - proceedings.neurips.cc
Existing dataset distillation (DD) techniques typically rely on iterative strategies to synthesize
condensed datasets, where datasets before and after distillation are forward and backward …

Dataset distillation by automatic training trajectories

D Liu, J Gu, H Cao, C Trinitis, M Schulz - European Conference on …, 2024 - Springer
Dataset Distillation is used to create a concise, yet informative, synthetic dataset that can
replace the original dataset for training purposes. Some leading methods in this domain …

M3D: Dataset condensation by minimizing maximum mean discrepancy

H Zhang, S Li, P Wang, D Zeng, S Ge - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Training state-of-the-art (SOTA) deep models often requires extensive data, resulting in
substantial training and storage costs. To address these challenges, dataset condensation …
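The title's objective, maximum mean discrepancy (MMD), is a standard kernel-based distance between two sample sets. As an illustration only (not the paper's actual implementation), a minimal biased MMD² estimator with an RBF kernel can be written as:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd_squared(x, y, gamma=1.0):
    # Biased estimator of squared maximum mean discrepancy:
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())
```

In a condensation setting, `x` would hold features of real data and `y` features of the synthetic set; minimizing `mmd_squared` over `y` pulls the synthetic distribution toward the real one. The function names and `gamma` default here are illustrative assumptions.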

Ameliorate Spurious Correlations in Dataset Condensation

J Cui, R Wang, Y Xiong, CJ Hsieh - arXiv preprint arXiv:2406.06609, 2024 - arxiv.org
Dataset Condensation has emerged as a technique for compressing large datasets into
smaller synthetic counterparts, facilitating downstream training tasks. In this paper, we study …

γ-Razor: Hardness-Aware Dataset Pruning for Efficient Neural Network Training

L Liu, P Zhang, Y Liang, J Liu, L Morra… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Training deep neural networks (DNNs) on large-scale datasets is often inefficient, with large
computational needs and significant energy consumption. Although great efforts have been …
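Hardness-aware pruning, as named in the title, generally means ranking training examples by a difficulty score and keeping only an informative subset. A minimal sketch of that idea, assuming per-example loss as the hardness score (the paper's actual criterion and scoring are not shown in this snippet):

```python
import numpy as np

def prune_by_hardness(losses, keep_fraction=0.5):
    # Keep the hardest examples (highest per-example loss).
    # Returns sorted indices into the original dataset.
    losses = np.asarray(losses)
    n_keep = max(1, int(len(losses) * keep_fraction))
    order = np.argsort(losses)[::-1]  # descending by loss
    return np.sort(order[:n_keep])
```

For example, with losses `[0.1, 0.9, 0.5, 0.2]` and `keep_fraction=0.5`, the two hardest examples (indices 1 and 2) are retained. `prune_by_hardness` and `keep_fraction` are hypothetical names for illustration.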

One-Shot Collaborative Data Distillation

W Holland, C Thapa, SA Siddiqui, W Shao… - arXiv preprint arXiv …, 2024 - arxiv.org
Large machine-learning training datasets can be distilled into small collections of
informative synthetic data samples. These synthetic sets support efficient model learning …

ECHO: Efficient Dataset Condensation by Higher-Order Distribution Alignment

H Zhang, S Li, P Wang, D Zeng, S Ge - arXiv preprint arXiv:2312.15927, 2023 - arxiv.org
In the era of deep learning, training deep neural networks often requires extensive data,
leading to substantial costs. Dataset condensation addresses this by learning a small …

Data-Efficient Generation for Dataset Distillation

Z Li, W Zhang, S Cechnicka, B Kainz - arXiv preprint arXiv:2409.03929, 2024 - arxiv.org
While deep learning techniques have proven successful in image-related tasks, the
exponentially increased data storage and computation costs become a significant …

Data-Efficient and Generalizable Machine Learning in Complex Environments

X Xia - 2024 - ses.library.usyd.edu.au
In an age marked by an unprecedented influx of data across diverse domains, the quest for
effective machine learning (ML) solutions has increased significantly. However, data …

[PDF][PDF] Implementation of Neural Key Generation Algorithm For IoT Devices

Z Guitouni, A Zairi, M Zrigui - Journal of Computer Science …, 2023 - journal.ypidathu.or.id
In the realm of Internet of Things (IoT) systems, the generation of cryptographic keys is
crucial for ensuring secure data transmission and device authentication. However …