Survey: Image mixing and deleting for data augmentation

H Naveed, S Anwar, M Hayat, K Javed… - Engineering Applications of …, 2024 - Elsevier
Neural networks are prone to overfitting and memorizing data patterns. To avoid overfitting
and enhance their generalization and performance, various methods have been suggested …

A survey of mix-based data augmentation: Taxonomy, methods, applications, and explainability

C Cao, F Zhou, Y Dai, J Wang, K Zhang - ACM Computing Surveys, 2024 - dl.acm.org
Data augmentation (DA) is indispensable in modern machine learning and deep neural
networks. The basic idea of DA is to construct new training data to improve the model's …
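As a concrete illustration of that basic idea, the sketch below constructs a new training sample by blending two inputs and their labels in the style of mixup; this is a minimal, hedged NumPy example (the function name and toy data are illustrative, not drawn from the survey):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two samples and their one-hot labels (generic mixup-style DA)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2     # pixel-wise image blend
    y = lam * y1 + (1.0 - lam) * y2     # blended labels stay soft
    return x, y

# Toy usage: mix two fake 32x32 RGB images from classes 3 and 7.
rng = np.random.default_rng(0)
img_a, img_b = rng.random((32, 32, 3)), rng.random((32, 32, 3))
lab_a, lab_b = np.eye(10)[3], np.eye(10)[7]
x_mix, y_mix = mixup(img_a, lab_a, img_b, lab_b, rng=rng)
```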

Rethinking federated learning with domain shift: A prototype view

W Huang, M Ye, Z Shi, H Li, B Du - 2023 IEEE/CVF Conference …, 2023 - ieeexplore.ieee.org
Federated learning shows great promise as a privacy-preserving collaborative learning
technique. However, prevalent solutions mainly focus on all private data sampled from the …

Partmix: Regularization strategy to learn part discovery for visible-infrared person re-identification

M Kim, S Kim, J Park, S Park… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Modern data augmentation using a mixture-based technique can regularize models against
overfitting to the training data in various computer vision applications, but a proper data …

Supporting clustering with contrastive learning

D Zhang, F Nan, X Wei, S Li, H Zhu, K McKeown… - arXiv preprint arXiv …, 2021 - arxiv.org
Unsupervised clustering aims at discovering the semantic categories of data according to
some distance measured in the representation space. However, different categories often …
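For context, methods in this line typically shape that representation space with an NT-Xent/InfoNCE-style contrastive loss. The following is a minimal generic sketch of that objective in NumPy, assuming L2-normalized embeddings and in-batch negatives; it is not SCCL's exact formulation:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """Generic NT-Xent / InfoNCE loss over a batch of paired views.

    z1, z2: (N, D) arrays of L2-normalized embeddings; row i of z1 and
    row i of z2 are embeddings of two augmented views of the same input.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)        # (2N, D) stacked views
    sim = (z @ z.T) / temperature               # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)              # a sample is not its own negative
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index per row
    # Row-wise log-softmax, max-shifted for numerical stability.
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - m - np.log(np.exp(sim - m).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Toy usage with random unit-norm embeddings.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(8, 16)); b /= np.linalg.norm(b, axis=1, keepdims=True)
print(info_nce(a, b))
```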

Exploring patch-wise semantic relation for contrastive learning in image-to-image translation tasks

C Jung, G Kwon, JC Ye - … of the IEEE/CVF conference on …, 2022 - openaccess.thecvf.com
Recently, contrastive learning-based image translation methods have been proposed, which
contrast different spatial locations to enhance spatial correspondence. However, the …

Equivariant contrastive learning

R Dangovski, L Jing, C Loh, S Han… - arXiv preprint arXiv …, 2021 - arxiv.org
In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good
representations by encouraging them to be invariant under meaningful transformations …

Byol for audio: Self-supervised learning for general-purpose audio representation

D Niizumi, D Takeuchi, Y Ohishi… - … Joint Conference on …, 2021 - ieeexplore.ieee.org
Inspired by the recent progress in self-supervised learning for computer vision that
generates supervision using data augmentations, we explore a new general-purpose audio …
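For readers unfamiliar with BYOL, its core mechanics can be sketched as follows: an online network predicts the target network's projection of a differently augmented view, and the target weights track the online weights by exponential moving average. The NumPy sketch below shows only this generic objective; the function names are illustrative, and the paper's audio-specific encoder and augmentations are omitted:

```python
import numpy as np

def byol_loss(p_online, z_target):
    """Normalized-MSE regression of the online network's prediction onto
    the (stop-gradient) target projection; equals 2 - 2*cosine similarity."""
    p = p_online / np.linalg.norm(p_online, axis=1, keepdims=True)
    z = z_target / np.linalg.norm(z_target, axis=1, keepdims=True)
    return (2.0 - 2.0 * (p * z).sum(axis=1)).mean()

def ema_update(target_params, online_params, tau=0.99):
    """The target network is not trained by gradients; its weights track
    the online network via an exponential moving average."""
    return [tau * t + (1.0 - tau) * o for t, o in zip(target_params, online_params)]

# Toy usage: prediction and target projection for a batch of 4 clips.
rng = np.random.default_rng(1)
pred, proj = rng.normal(size=(4, 32)), rng.normal(size=(4, 32))
print(byol_loss(pred, proj))
```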

Semi-supervised vision transformers at scale

Z Cai, A Ravichandran, P Favaro… - Advances in …, 2022 - proceedings.neurips.cc
We study semi-supervised learning (SSL) for vision transformers (ViT), an under-explored
topic despite the wide adoption of ViT architectures across different tasks. To tackle this …

Hallucination improves the performance of unsupervised visual representation learning

J Wu, J Hobbs, N Hovakimyan - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Contrastive learning models based on the Siamese structure have demonstrated remarkable
performance in self-supervised learning. This success of contrastive learning relies on …