Survey on self-supervised learning: auxiliary pretext tasks and contrastive learning methods in imaging
S Albelwi - Entropy, 2022 - mdpi.com
Although deep learning algorithms have achieved significant progress in a variety of
domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) …
Does negative sampling matter? A review with insights into its theory and applications
Negative sampling has swiftly risen to prominence as a focal point of research, with wide-
ranging applications spanning machine learning, computer vision, natural language …
Exploring cross-image pixel contrast for semantic segmentation
Current semantic segmentation methods focus only on mining "local" context, i.e.,
dependencies between pixels within individual images, by context-aggregation modules …
What to hide from your students: Attention-guided masked image modeling
Transformers and masked language modeling are quickly being adopted and explored in
computer vision as vision transformers and masked image modeling (MIM). In this work, we …
Hybrid contrastive learning of tri-modal representation for multimodal sentiment analysis
The wide application of smart devices enables the availability of multimodal data, which can
be utilized in many tasks. In the field of multimodal sentiment analysis, most previous works …
Cross-image pixel contrasting for semantic segmentation
This work studies the problem of image semantic segmentation. Current approaches focus
mainly on mining “local” context, i.e., dependencies between pixels within individual images …
Understanding contrastive learning via distributionally robust optimization
J Wu, J Chen, J Wu, W Shi… - Advances in Neural …, 2023 - proceedings.neurips.cc
This study reveals the inherent tolerance of contrastive learning (CL) towards sampling bias,
wherein negative samples may encompass similar semantics (e.g., labels). However, existing …
Contextrast: Contextual contrastive learning for semantic segmentation
Despite great improvements in semantic segmentation, challenges persist because of the
lack of local/global contexts and the relationship between them. In this paper, we propose …
When does contrastive visual representation learning work?
Recent self-supervised representation learning techniques have largely closed the gap
between supervised and unsupervised learning on ImageNet classification. While the …
TimesURL: Self-supervised contrastive learning for universal time series representation learning
Learning universal time series representations applicable to various types of downstream
tasks is challenging but valuable in real applications. Recently, researchers have attempted …