RS-CLIP: Zero shot remote sensing scene classification via contrastive vision-language supervision
Zero-shot remote sensing scene classification aims to solve the scene classification problem
on unseen categories and has attracted considerable research attention in the remote sensing …
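A generic zero-shot scoring sketch in the spirit of this entry, using OpenAI's `clip` package; the prompt template, class list, and file name are illustrative assumptions, and RS-CLIP's own contrastive fine-tuning is not reproduced:

```python
# Zero-shot scene classification with a CLIP-style model (sketch).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Remote sensing scene categories (illustrative, not the RS-CLIP label set).
classes = ["airport", "beach", "forest", "harbor", "residential area"]
prompts = clip.tokenize([f"a satellite photo of a {c}" for c in classes]).to(device)

image = preprocess(Image.open("scene.png")).unsqueeze(0).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(prompts)
    # Cosine similarity between the image and each class prompt, softmaxed.
    image_feat /= image_feat.norm(dim=-1, keepdim=True)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print(classes[probs.argmax().item()])
```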
Machine and deep learning methods for radiomics
Radiomics is an emerging area in quantitative image analysis that aims to relate large‐scale
extracted imaging information to clinical and biological endpoints. The development of …
Revisiting weak-to-strong consistency in semi-supervised semantic segmentation
In this work, we revisit the weak-to-strong consistency framework, popularized by FixMatch
from semi-supervised classification, where the prediction of a weakly perturbed image …
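A minimal sketch of the FixMatch-style weak-to-strong consistency objective this entry revisits, assuming a `model` that maps a batch of images to per-class logits; the confidence threshold value is illustrative:

```python
# FixMatch-style weak-to-strong consistency loss (sketch).
import torch
import torch.nn.functional as F

def consistency_loss(model, weak_batch, strong_batch, threshold=0.95):
    """weak_batch/strong_batch: two augmentations of the same unlabeled images."""
    with torch.no_grad():
        # Pseudo-label from the weakly perturbed view.
        probs = F.softmax(model(weak_batch), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()  # keep only confident predictions
    # Cross-entropy on the strongly perturbed view, masked by confidence.
    loss = F.cross_entropy(model(strong_batch), pseudo, reduction="none")
    return (loss * mask).mean()
```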
Deep long-tailed learning: A survey
Deep long-tailed learning, one of the most challenging problems in visual recognition, aims
to train well-performing deep models from a large number of images that follow a long-tailed …
MultiMAE: Multi-modal multi-task masked autoencoders
We propose a pre-training strategy called Multi-modal Multi-task Masked Autoencoders
(MultiMAE). It differs from standard Masked Autoencoding in two key aspects: (i) it can …
ST++: Make self-training work better for semi-supervised semantic segmentation
Self-training via pseudo labeling is a conventional, simple, and popular pipeline to leverage
unlabeled data. In this work, we first construct a strong baseline of self-training (namely ST) …
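The basic self-training pipeline the entry describes, as a sketch; `train` and `predict` are hypothetical helpers, and ST++'s selective re-training refinements are not reproduced:

```python
# Plain self-training (ST) baseline, as a sketch.
def self_training(labeled, unlabeled, train, predict):
    teacher = train(labeled)                                 # 1. supervised baseline
    pseudo = [(x, predict(teacher, x)) for x in unlabeled]   # 2. pseudo-label unlabeled data
    student = train(labeled + pseudo)                        # 3. retrain on the union
    return student
```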
Dash: Semi-supervised learning with dynamic thresholding
While semi-supervised learning (SSL) has received tremendous attention in many machine
learning tasks due to its successful use of unlabeled data, existing SSL algorithms use either …
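A hedged sketch of dynamic thresholding in the spirit of Dash: an unlabeled example is kept only while its pseudo-label loss sits under a threshold that decays during training; `rho_init` and `gamma` are illustrative hyperparameter names, not the paper's exact schedule:

```python
# Dash-style dynamic thresholding (sketch).
import torch
import torch.nn.functional as F

def dash_mask(logits, pseudo_labels, step, rho_init=1.0, gamma=1.27):
    # Threshold decays geometrically with the training step.
    rho_t = rho_init * gamma ** (-step)
    loss = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (loss < rho_t).float()  # 1 = trust this example, 0 = drop it
```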
Self-training multi-sequence learning with transformer for weakly supervised video anomaly detection
Weakly supervised Video Anomaly Detection (VAD) using Multi-Instance Learning
(MIL) is usually based on the fact that the anomaly score of an abnormal snippet is higher …
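A sketch of the standard MIL ranking objective such methods build on: the top anomaly score in an abnormal video should exceed the top score in a normal one by a margin. The paper's transformer backbone and multi-sequence learning are not shown:

```python
# MIL ranking loss for weakly supervised VAD (sketch).
import torch
import torch.nn.functional as F

def mil_ranking_loss(abnormal_scores, normal_scores, margin=1.0):
    """Each tensor: (num_snippets,) anomaly scores for one video."""
    return F.relu(margin - abnormal_scores.max() + normal_scores.max())
```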
Rethinking pre-training and self-training
Pre-training is a dominant paradigm in computer vision. For example, supervised ImageNet
pre-training is commonly used to initialize the backbones of object detection and …
FDA: Fourier domain adaptation for semantic segmentation
We describe a simple method for unsupervised domain adaptation, whereby the
discrepancy between the source and target distributions is reduced by swapping the low …
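A minimal numpy sketch of the Fourier swap this abstract describes, assuming images as float `(H, W, C)` arrays; `beta`, which sizes the swapped low-frequency block, is a hyperparameter:

```python
# FDA core step (sketch): replace the low-frequency amplitude spectrum of a
# source image with the target's, keeping the source phase.
import numpy as np

def fda_transfer(src, trg, beta=0.01):
    """src, trg: float arrays of shape (H, W, C) in [0, 1]."""
    fft_src = np.fft.fft2(src, axes=(0, 1))
    fft_trg = np.fft.fft2(trg, axes=(0, 1))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)

    # Swap a centered low-frequency block (after shifting DC to the center).
    amp_src = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_trg = np.fft.fftshift(amp_trg, axes=(0, 1))
    h, w = src.shape[:2]
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_trg[ch - b:ch + b + 1, cw - b:cw + b + 1]
    amp_src = np.fft.ifftshift(amp_src, axes=(0, 1))

    # Recombine swapped amplitude with the original source phase.
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src), axes=(0, 1))
    return np.real(out)
```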