RS-CLIP: Zero shot remote sensing scene classification via contrastive vision-language supervision

X Li, C Wen, Y Hu, N Zhou - … Journal of Applied Earth Observation and …, 2023 - Elsevier
Zero-shot remote sensing scene classification aims to solve the scene classification problem
on unseen categories and has attracted considerable research attention in the remote sensing …

Machine and deep learning methods for radiomics

M Avanzo, L Wei, J Stancanello, M Vallieres… - Medical …, 2020 - Wiley Online Library
Radiomics is an emerging area in quantitative image analysis that aims to relate large‐scale
extracted imaging information to clinical and biological endpoints. The development of …

Revisiting weak-to-strong consistency in semi-supervised semantic segmentation

L Yang, L Qi, L Feng, W Zhang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In this work, we revisit the weak-to-strong consistency framework, popularized by FixMatch
from semi-supervised classification, where the prediction of a weakly perturbed image …

Deep long-tailed learning: A survey

Y Zhang, B Kang, B Hooi, S Yan… - IEEE transactions on …, 2023 - ieeexplore.ieee.org
Deep long-tailed learning, one of the most challenging problems in visual recognition, aims
to train well-performing deep models from a large number of images that follow a long-tailed …

Multimae: Multi-modal multi-task masked autoencoders

R Bachmann, D Mizrahi, A Atanov, A Zamir - European Conference on …, 2022 - Springer
We propose a pre-training strategy called Multi-modal Multi-task Masked Autoencoders
(MultiMAE). It differs from standard Masked Autoencoding in two key aspects: I) it can …

St++: Make self-training work better for semi-supervised semantic segmentation

L Yang, W Zhuo, L Qi, Y Shi… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Self-training via pseudo labeling is a conventional, simple, and popular pipeline to leverage
unlabeled data. In this work, we first construct a strong baseline of self-training (namely ST) …

Dash: Semi-supervised learning with dynamic thresholding

Y Xu, L Shang, J Ye, Q Qian, YF Li… - International …, 2021 - proceedings.mlr.press
While semi-supervised learning (SSL) has received tremendous attention in many machine
learning tasks due to its successful use of unlabeled data, existing SSL algorithms use either …

Self-training multi-sequence learning with transformer for weakly supervised video anomaly detection

S Li, F Liu, L Jiao - Proceedings of the AAAI Conference on Artificial …, 2022 - ojs.aaai.org
Weakly supervised Video Anomaly Detection (VAD) using Multi-Instance Learning
(MIL) is usually based on the fact that the anomaly score of an abnormal snippet is higher …

Rethinking pre-training and self-training

B Zoph, G Ghiasi, TY Lin, Y Cui, H Liu… - Advances in neural …, 2020 - proceedings.neurips.cc
Pre-training is a dominant paradigm in computer vision. For example, supervised ImageNet
pre-training is commonly used to initialize the backbones of object detection and …

FDA: Fourier domain adaptation for semantic segmentation

Y Yang, S Soatto - … of the IEEE/CVF conference on …, 2020 - openaccess.thecvf.com
We describe a simple method for unsupervised domain adaptation, whereby the
discrepancy between the source and target distributions is reduced by swapping the low …