Attention-aligned transformer for image captioning
Z Fei - Proceedings of the AAAI Conference on Artificial …, 2022 - ojs.aaai.org
Recently, attention-based image captioning models, which are expected to ground correct
image regions for proper word generations, have achieved remarkable performance …
CSOT: Curriculum and structure-aware optimal transport for learning with noisy labels
Learning with noisy labels (LNL) poses a significant challenge in training a well-generalized
model while avoiding overfitting to corrupted labels. Recent advances have achieved …
How well do unsupervised learning algorithms model human real-time and life-long learning?
Humans learn from visual inputs at multiple timescales, both rapidly and flexibly acquiring
visual knowledge over short periods, and robustly accumulating online learning progress …
Does continual learning equally forget all parameters?
Distribution shift (e.g., task or domain shift) in continual learning (CL) usually results in
catastrophic forgetting of previously learned knowledge. Although it can be alleviated by …
Robust Positive-Unlabeled Learning via Noise Negative Sample Self-correction
Learning from positive and unlabeled data is known as positive-unlabeled (PU) learning in
literature and has attracted much attention in recent years. One common approach in PU …
Efficient data subset selection to generalize training across models: transductive and inductive networks
Existing subset selection methods for efficient learning predominantly employ discrete
combinatorial and model-specific approaches, which lack generalizability---for each new …
Minimax optimization: The case of convex-submodular
Minimax optimization has been central in addressing various applications in machine
learning, game theory, and control theory. Prior literature has thus far mainly focused on …
SGD Biased towards Early Important Samples for Efficient Training
In deep learning, using larger training datasets usually leads to more accurate models.
However, simply adding more but redundant data may be inefficient, as some training …
Curriculum design for teaching via demonstrations: theory and applications
We consider the problem of teaching via demonstrations in sequential decision-making
settings. In particular, we study how to design a personalized curriculum over …
Which samples should be learned first: Easy or hard?
X Zhou, O Wu - IEEE Transactions on Neural Networks and …, 2023 - ieeexplore.ieee.org
Treating each training sample unequally is prevalent in many machine-learning tasks.
Numerous weighting schemes have been proposed. Some schemes take the easy-first …