Survey on self-supervised learning: auxiliary pretext tasks and contrastive learning methods in imaging

S Albelwi - Entropy, 2022 - mdpi.com
Although deep learning algorithms have achieved significant progress in a variety of
domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) …
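
To make the two families in the survey's title concrete, here is a minimal sketch of one classic auxiliary pretext task, rotation prediction, in PyTorch. The tiny encoder and input sizes are placeholders, not any specific architecture from the survey.

```python
import torch
import torch.nn as nn

# Minimal rotation-prediction pretext task (RotNet-style): the network
# classifies which of four rotations was applied, so it learns useful
# features from unlabeled images.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, 4)  # 4 classes: 0, 90, 180, 270 degrees

images = torch.randn(8, 3, 32, 32)   # unlabeled batch
k = torch.randint(0, 4, (8,))        # free pseudo-labels: rotation index
rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                       for img, r in zip(images, k)])

logits = head(encoder(rotated))
loss = nn.functional.cross_entropy(logits, k)  # supervised by free labels
loss.backward()
```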

Cross-image relational knowledge distillation for semantic segmentation

C Yang, H Zhou, Z An, X Jiang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Current Knowledge Distillation (KD) methods for semantic segmentation often
guide the student to mimic the teacher's structured information generated from individual …
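
Although the snippet does not show the paper's exact objective, the core idea of cross-image relational distillation can be sketched as matching teacher and student pixel-to-pixel affinities computed across two images; the shapes and the MSE matching loss below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cross_image_relation(feat_a, feat_b):
    """Pairwise pixel-to-pixel similarity between two images' feature maps.
    feat_*: (C, H, W) -> (H*W, H*W) cross-image affinity matrix."""
    a = F.normalize(feat_a.flatten(1).t(), dim=1)  # (H*W, C)
    b = F.normalize(feat_b.flatten(1).t(), dim=1)
    return a @ b.t()

# Hypothetical teacher/student feature maps for two training images.
t_a, t_b = torch.randn(256, 16, 16), torch.randn(256, 16, 16)
s_a, s_b = torch.randn(256, 16, 16, requires_grad=True), torch.randn(256, 16, 16)

# Distill the cross-image relational structure, not individual pixels.
loss = F.mse_loss(cross_image_relation(s_a, s_b),
                  cross_image_relation(t_a, t_b))
loss.backward()
```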

FedX: Unsupervised federated learning with cross knowledge distillation

S Han, S Park, F Wu, S Kim, C Wu, X Xie… - European Conference on …, 2022 - Springer
This paper presents FedX, an unsupervised federated learning framework. Our model learns
unbiased representation from decentralized and heterogeneous local data. It employs a two …
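
A hedged sketch of the cross-distillation idea: the local model matches the downloaded global model's relational distribution over a set of anchor embeddings via KL divergence. All tensors are random stand-ins, and the actual FedX objective also includes local contrastive terms not shown here.

```python
import torch
import torch.nn.functional as F

def relation_logits(z, anchors, tau=0.1):
    # Similarity of each embedding to a shared set of anchor embeddings.
    return F.normalize(z, dim=1) @ F.normalize(anchors, dim=1).t() / tau

# Hypothetical local (student) and global (teacher) embeddings for the
# same unlabeled local batch; anchors stand in for a random batch.
z_local  = torch.randn(32, 128, requires_grad=True)
z_global = torch.randn(32, 128)
anchors  = torch.randn(64, 128)

# Cross knowledge distillation: match the global model's relational
# distribution over anchors to debias the local representation.
p_teacher = F.softmax(relation_logits(z_global, anchors), dim=1)
log_p_student = F.log_softmax(relation_logits(z_local, anchors), dim=1)
loss = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
loss.backward()
```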

MixSKD: Self-knowledge distillation from mixup for image recognition

C Yang, Z An, H Zhou, L Cai, X Zhi, J Wu, Y Xu… - … on Computer Vision, 2022 - Springer
Unlike the conventional Knowledge Distillation (KD), Self-KD allows a network to
learn knowledge from itself without any guidance from extra networks. This paper proposes …
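
One way to read the mixup-based self-distillation idea: the network's interpolated predictions on the two source images supervise its own prediction on the mixed image. The toy classifier below is an assumption-laden sketch, not the paper's architecture or full loss.

```python
import torch
import torch.nn.functional as F

net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

x1, x2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
lam = 0.7                          # mixup coefficient
x_mix = lam * x1 + (1 - lam) * x2  # mixed input image

# Self-distillation signal: the network's interpolated predictions on the
# two source images act as the "teacher" for the mixed image.
with torch.no_grad():
    p_target = lam * F.softmax(net(x1), dim=1) \
             + (1 - lam) * F.softmax(net(x2), dim=1)

log_p_mix = F.log_softmax(net(x_mix), dim=1)
loss = F.kl_div(log_p_mix, p_target, reduction="batchmean")
loss.backward()
```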

Multi-mode online knowledge distillation for self-supervised visual representation learning

K Song, J **e, S Zhang, Z Luo - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Self-supervised learning (SSL) has made remarkable progress in visual representation
learning. Some studies combine SSL with knowledge distillation (SSL-KD) to boost the …
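
A generic SSL-KD feature-distillation term, cosine alignment between normalized student and teacher embeddings of the same view, is sketched below. The two linear "networks" are placeholders, and the paper's multi-mode online scheme involves more than this single term.

```python
import torch
import torch.nn.functional as F

# Hypothetical pretrained SSL teacher and smaller student, both mapping
# images to embeddings; the student learns from the teacher's
# representation of the same augmented view.
teacher = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 256))
student = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 256))

x = torch.randn(16, 3, 32, 32)
with torch.no_grad():
    z_t = F.normalize(teacher(x), dim=1)
z_s = F.normalize(student(x), dim=1)

# Feature distillation: maximize cosine similarity to the teacher.
kd_loss = (1 - (z_s * z_t).sum(dim=1)).mean()
kd_loss.backward()
```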

HeteFedRec: Federated recommender systems with model heterogeneity

W Yuan, L Qu, L Cui, Y Tong, X Zhou… - 2024 IEEE 40th …, 2024 - ieeexplore.ieee.org
Owing to the nature of privacy protection, federated recommender systems (FedRecs) have
garnered increasing interest in the realm of on-device recommender systems. However …
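
Model heterogeneity means clients cannot be averaged parameter-for-parameter. The sketch below shows one naive reconciliation, averaging only the leading item-embedding dimensions that all clients share; this is purely illustrative and not HeteFedRec's actual aggregation protocol.

```python
import torch

# Three clients train item-embedding tables of different widths
# (model heterogeneity); the server averages only the shared leading
# dimensions as a simple, assumed reconciliation rule.
n_items = 100
client_dims = [16, 32, 64]
client_tables = [torch.randn(n_items, d) for d in client_dims]

d_min = min(client_dims)
shared = torch.stack([t[:, :d_min] for t in client_tables]).mean(dim=0)

# Each client takes back the aggregated shared slice and keeps its
# private higher dimensions untouched.
for t in client_tables:
    t[:, :d_min] = shared
```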

PromptKD: Unsupervised prompt distillation for vision-language models

Z Li, X Li, X Fu, X Zhang, W Wang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Prompt learning has emerged as a valuable technique in enhancing vision-language
models (VLMs) such as CLIP for downstream tasks in specific domains. Existing work mainly …
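
The distillation signal here needs no labels: a frozen teacher's image-to-class logits supervise a student whose class prompts are learnable. The sketch below uses random features as stand-ins for CLIP encoders, so every dimension and module is a hypothetical placeholder.

```python
import torch
import torch.nn.functional as F

n_classes, dim = 10, 64
image_feats = F.normalize(torch.randn(32, dim), dim=1)          # unlabeled batch
teacher_text = F.normalize(torch.randn(n_classes, dim), dim=1)  # frozen teacher

# Learnable student prompt embeddings, one per class.
student_prompts = torch.randn(n_classes, dim, requires_grad=True)

t_logits = image_feats @ teacher_text.t() / 0.07
s_logits = image_feats @ F.normalize(student_prompts, dim=1).t() / 0.07

# Match the teacher's soft class distribution on unlabeled images.
loss = F.kl_div(F.log_softmax(s_logits, dim=1),
                F.softmax(t_logits, dim=1), reduction="batchmean")
loss.backward()
```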

Distilling segmenters from CNNs and transformers for remote sensing images' semantic segmentation

Z Dong, G Gao, T Liu, Y Gu… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Semantic segmentation is a crucial task in remote sensing and has been predominantly
performed using convolutional neural networks (CNNs) for the past decade. Recently …
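
One simple way to distill from both a CNN teacher and a transformer teacher is to fuse their per-pixel class distributions and match them with KL divergence. The uniform averaging below is an assumed fusion rule for illustration, not necessarily the paper's.

```python
import torch
import torch.nn.functional as F

# Per-pixel class logits, shaped (batch, classes, H, W).
B, K, H, W = 2, 6, 32, 32
cnn_logits = torch.randn(B, K, H, W)
vit_logits = torch.randn(B, K, H, W)
student_logits = torch.randn(B, K, H, W, requires_grad=True)

# Assumed fusion: average the two teachers' soft segmentation maps.
with torch.no_grad():
    p_teacher = 0.5 * (F.softmax(cnn_logits, dim=1)
                       + F.softmax(vit_logits, dim=1))

loss = F.kl_div(F.log_softmax(student_logits, dim=1), p_teacher,
                reduction="batchmean")
loss.backward()
```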

Online knowledge distillation via mutual contrastive learning for visual recognition

C Yang, Z An, H Zhou, F Zhuang, Y Xu… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Teacher-free online Knowledge Distillation (KD) aims to train an ensemble of multiple
student models collaboratively so that they distill knowledge from each other. Although existing …
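
Mutual contrastive learning between peers can be sketched as a symmetric InfoNCE in which the two networks' embeddings of the same image form the positive pair; the embedding sizes and temperature below are arbitrary choices, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """Cross-network InfoNCE: z1[i] and z2[i] embed the same image through
    two different peer networks and form the positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# Hypothetical embeddings from two collaboratively trained students.
z_net1 = torch.randn(32, 128, requires_grad=True)
z_net2 = torch.randn(32, 128, requires_grad=True)

# Symmetric mutual contrastive loss: each network is the other's source
# of contrastive knowledge, with no pretrained teacher involved.
loss = 0.5 * (info_nce(z_net1, z_net2) + info_nce(z_net2, z_net1))
loss.backward()
```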

Contrastive learning models for sentence representations

L Xu, H **e, Z Li, FL Wang, W Wang, Q Li - ACM Transactions on …, 2023 - dl.acm.org
Sentence representation learning is a crucial task in natural language processing, as the
quality of learned representations directly influences downstream tasks, such as sentence …
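
A widely used instance of contrastive sentence-representation learning is the SimCSE-style dropout trick: encode each sentence twice and treat the two stochastic embeddings as positives, with the rest of the batch as negatives. The toy encoder below stands in for a real language model.

```python
import torch
import torch.nn.functional as F

# Dropout acts as the augmentation: two forward passes over the same
# sentences yield two different embeddings of each sentence.
encoder = torch.nn.Sequential(torch.nn.Linear(300, 128), torch.nn.Dropout(0.1))
encoder.train()  # keep dropout active so the two passes differ

sent_vecs = torch.randn(16, 300)  # stand-in featurized sentence inputs
z1 = F.normalize(encoder(sent_vecs), dim=1)
z2 = F.normalize(encoder(sent_vecs), dim=1)

logits = z1 @ z2.t() / 0.05       # temperature-scaled similarities
loss = F.cross_entropy(logits, torch.arange(16))  # diagonal = positives
loss.backward()
```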