Survey on self-supervised learning: auxiliary pretext tasks and contrastive learning methods in imaging
S Albelwi - Entropy, 2022 - mdpi.com
Although deep learning algorithms have achieved significant progress in a variety of
domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) …
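For context, the contrastive branch of SSL surveyed here typically optimizes an InfoNCE objective over augmented view pairs. A minimal PyTorch sketch, with batch construction and temperature as illustrative assumptions rather than the survey's own code:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE loss over a batch of paired embeddings.

    z1, z2: (N, D) projections of two augmented views of the same N images.
    Each (z1[i], z2[i]) pair is a positive; all other pairings in the batch
    serve as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (N, N) cosine-similarity logits
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# usage sketch: z1, z2 = encoder(augment(x)), encoder(augment(x))
```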
Cross-image relational knowledge distillation for semantic segmentation
Current Knowledge Distillation (KD) methods for semantic segmentation often
guide the student to mimic the teacher's structured information generated from individual …
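The individual-image baseline that the abstract contrasts with is usually a per-pixel KL term between teacher and student class maps. A minimal sketch of that baseline (not the paper's cross-image relational loss; the temperature is an assumed hyperparameter):

```python
import torch.nn.functional as F

def pixelwise_kd_loss(student_logits, teacher_logits, T=4.0):
    """Per-pixel KL divergence between teacher and student class distributions.

    Logits are (N, C, H, W) segmentation outputs. This is the individual-image
    KD baseline; cross-image relational KD additionally matches pixel-to-pixel
    similarities across different images in the batch.
    """
    s = F.log_softmax(student_logits / T, dim=1)
    t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(s, t, reduction="batchmean") * T * T
```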
Fedx: Unsupervised federated learning with cross knowledge distillation
This paper presents FedX, an unsupervised federated learning framework. Our model learns
unbiased representations from decentralized and heterogeneous local data. It employs a two …
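Reading "cross knowledge distillation" generically, a client could regularize its local encoder toward the frozen server-aggregated encoder on unlabeled local data. A speculative sketch of that pattern only; the snippet truncates FedX's actual two-part objective, and all names here are hypothetical:

```python
import torch.nn.functional as F

def cross_kd_loss(local_emb, global_emb, temperature=0.1):
    """Match the local model's similarity structure to the global model's.

    local_emb, global_emb: (N, D) embeddings of the same unlabeled batch from
    the client model and the frozen server-aggregated model. Hypothetical
    illustration of cross-model distillation in federated SSL, not FedX's code.
    """
    local_emb = F.normalize(local_emb, dim=1)
    global_emb = F.normalize(global_emb, dim=1)
    p_local = F.log_softmax(local_emb @ local_emb.t() / temperature, dim=1)
    p_global = F.softmax(global_emb @ global_emb.t() / temperature, dim=1)
    return F.kl_div(p_local, p_global, reduction="batchmean")
```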
Mixskd: Self-knowledge distillation from mixup for image recognition
Unlike the conventional Knowledge Distillation (KD), Self-KD allows a network to
learn knowledge from itself without any guidance from extra networks. This paper proposes …
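The general mixup-based self-distillation idea can be sketched as enforcing consistency between the prediction on a mixed image and the same mixture of the network's own predictions on the source images. Hyperparameters and structure below are illustrative, not MixSKD's exact configuration:

```python
import torch
import torch.nn.functional as F

def mixup_self_kd_loss(model, x1, x2, alpha=0.2):
    """Self-distillation signal from mixup, as a generic sketch.

    The prediction on the mixed image is pushed toward the same mixture of the
    model's own predictions on the two source images, so the network supervises
    itself without an external teacher.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x1 + (1 - lam) * x2
    with torch.no_grad():                        # self-teacher branch
        p_target = lam * F.softmax(model(x1), dim=1) \
                   + (1 - lam) * F.softmax(model(x2), dim=1)
    log_p_mix = F.log_softmax(model(x_mix), dim=1)
    return F.kl_div(log_p_mix, p_target, reduction="batchmean")
```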
Multi-mode online knowledge distillation for self-supervised visual representation learning
K Song, J **e, S Zhang, Z Luo - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Self-supervised learning (SSL) has made remarkable progress in visual representation
learning. Some studies combine SSL with knowledge distillation (SSL-KD) to boost the …
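One standard ingredient in SSL-KD pipelines (shown here as generic background, not this paper's multi-mode scheme) is a teacher maintained as an exponential moving average of the student, as in BYOL/DINO-style methods:

```python
import copy
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    """Update teacher weights as an exponential moving average of the student.

    A common building block in self-supervised knowledge distillation; the
    momentum value is an assumed hyperparameter.
    """
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

# usage sketch: teacher = copy.deepcopy(student), then call ema_update
# after every optimizer step on the student.
```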
Hetefedrec: Federated recommender systems with model heterogeneity
Owing to their privacy-preserving nature, federated recommender systems (FedRecs) have
garnered increasing interest in the realm of on-device recommender systems. However …
Promptkd: Unsupervised prompt distillation for vision-language models
Prompt learning has emerged as a valuable technique in enhancing vision-language
models (VLMs) such as CLIP for downstream tasks in specific domains. Existing work mainly …
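At its core, unsupervised prompt distillation can be read as fitting a student's learnable prompts so that its image-to-text-class logits match a frozen teacher VLM's logits on unlabeled images. A generic sketch of that objective; names and temperature are assumptions, not PromptKD's code:

```python
import torch.nn.functional as F

def prompt_distill_loss(student_logits, teacher_logits, T=2.0):
    """KL between teacher and student image-to-text-class logits.

    Both tensors are (N, num_classes) similarities between image features and
    per-class text (prompt) features. Only the student's prompt vectors would
    receive gradients; the teacher VLM stays frozen.
    """
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T
```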
Distilling segmenters from cnns and transformers for remote sensing images semantic segmentation
Z Dong, G Gao, T Liu, Y Gu… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Semantic segmentation is a crucial task in remote sensing and has been predominantly
performed using convolutional neural networks (CNNs) for the past decade. Recently …
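A simple way to distill from both architecture families is to mix the CNN and transformer teachers' per-pixel soft labels. A generic multi-teacher sketch under that assumption, not the paper's specific transfer scheme:

```python
import torch.nn.functional as F

def dual_teacher_kd_loss(student_logits, cnn_logits, vit_logits, T=4.0, w=0.5):
    """Distill a segmentation student from a CNN and a transformer teacher.

    All logits are (N, C, H, W); the two teachers' per-pixel class
    distributions are averaged with weight w before the KL term.
    """
    target = w * F.softmax(cnn_logits / T, dim=1) \
             + (1 - w) * F.softmax(vit_logits / T, dim=1)
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    target, reduction="batchmean") * T * T
```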
Online knowledge distillation via mutual contrastive learning for visual recognition
Teacher-free online Knowledge Distillation (KD) aims to train an ensemble of student
models collaboratively so that they distill knowledge from each other. Although existing …
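The cross-network term in mutual contrastive learning can be sketched by treating peer embeddings of the same image as positives, so each student distills representation knowledge into the other. A simplified two-student version; the symmetrized loss and temperature are assumptions:

```python
import torch
import torch.nn.functional as F

def mutual_contrastive_loss(za, zb, temperature=0.1):
    """Cross-network contrastive term between two peer students.

    za, zb: (N, D) embeddings of the same batch from students A and B.
    za[i] is pulled toward zb[i] and pushed away from zb[j], j != i,
    and symmetrically for B, so the peers teach each other.
    """
    za = F.normalize(za, dim=1)
    zb = F.normalize(zb, dim=1)
    logits = za @ zb.t() / temperature
    labels = torch.arange(za.size(0), device=za.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```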
Contrastive learning models for sentence representations
Sentence representation learning is a crucial task in natural language processing, as the
quality of learned representations directly influences downstream tasks, such as sentence …
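A canonical example from this literature is SimCSE-style unsupervised contrastive learning, where two dropout passes over the same sentence form a positive pair. A minimal sketch; `encode` is an assumed callable mapping sentences to embeddings with dropout active:

```python
import torch
import torch.nn.functional as F

def simcse_style_loss(encode, sentences, temperature=0.05):
    """Unsupervised sentence contrastive loss in the SimCSE style.

    Encoding the same batch twice with dropout enabled yields two slightly
    different embeddings per sentence that act as a positive pair; the other
    sentences in the batch serve as negatives.
    """
    z1 = F.normalize(encode(sentences), dim=1)   # first dropout pass
    z2 = F.normalize(encode(sentences), dim=1)   # second dropout pass
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(len(sentences), device=z1.device)
    return F.cross_entropy(logits, labels)
```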