Survey: Image mixing and deleting for data augmentation
Neural networks are prone to overfitting and memorizing data patterns. To avoid overfitting
and enhance their generalization and performance, various methods have been suggested …
A survey of mix-based data augmentation: Taxonomy, methods, applications, and explainability
Data augmentation (DA) is indispensable in modern machine learning and deep neural
networks. The basic idea of DA is to construct new training data to improve the model's …
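The mixing idea behind this family of methods can be illustrated with a minimal mixup-style sketch: two training examples and their one-hot labels are blended with a Beta-sampled weight to form a new synthetic example. The function name, shapes, and default `alpha` below are illustrative assumptions, not details taken from the survey.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two examples and their one-hot labels with a Beta-sampled weight.

    Illustrative sketch of mixup-style augmentation; not code from the survey.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)           # mixing coefficient in [0, 1]
    x = lam * x1 + (1 - lam) * x2          # pixel-wise convex combination of inputs
    y = lam * y1 + (1 - lam) * y2          # matching soft (mixed) label
    return x, y, lam

# Usage: mix two toy 4x4 "images" with one-hot labels
a, b = np.ones((4, 4)), np.zeros((4, 4))
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y, lam = mixup(a, ya, b, yb)
```

Training then proceeds on `(x, y)` as if it were a real example; because the label is mixed with the same coefficient as the input, the loss remains consistent with the interpolation.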
Rethinking federated learning with domain shift: A prototype view
Federated learning shows great promise as a privacy-preserving collaborative learning
technique. However, prevalent solutions mainly focus on all private data sampled from the …
Partmix: Regularization strategy to learn part discovery for visible-infrared person re-identification
Modern data augmentation using a mixture-based technique can regularize models against
overfitting to the training data in various computer vision applications, but a proper data …
Supporting clustering with contrastive learning
Unsupervised clustering aims at discovering the semantic categories of data according to
some distance measured in the representation space. However, different categories often …
Exploring patch-wise semantic relation for contrastive learning in image-to-image translation tasks
Recently, contrastive learning-based image translation methods have been proposed, which
contrast different spatial locations to enhance the spatial correspondence. However, the …
Equivariant contrastive learning
In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good
representations by encouraging them to be invariant under meaningful transformations …
Byol for audio: Self-supervised learning for general-purpose audio representation
Inspired by the recent progress in self-supervised learning for computer vision that
generates supervision using data augmentations, we explore a new general-purpose audio …
Semi-supervised vision transformers at scale
We study semi-supervised learning (SSL) for vision transformers (ViT), an under-explored
topic despite the wide adoption of the ViT architectures to different tasks. To tackle this …
Hallucination improves the performance of unsupervised visual representation learning
Contrastive learning models based on Siamese structure have demonstrated remarkable
performance in self-supervised learning. This success of contrastive learning relies on …