Robust natural language processing: Recent advances, challenges, and future directions
Recent natural language processing (NLP) techniques have accomplished high
performance on benchmark data sets, primarily due to the significant improvement in the …
Recent advances of foundation language models-based continual learning: A survey
Recently, foundation language models (LMs) have marked significant achievements in the
domains of natural language processing and computer vision. Unlike traditional neural …
Calibrate before use: Improving few-shot performance of language models
GPT-3 can perform numerous tasks when provided a natural language prompt that contains
a few training examples. We show that this type of few-shot learning can be unstable: the …
Learning transferable visual models from natural language supervision
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined
object categories. This restricted form of supervision limits their generality and usability since …
A primer in BERTology: What we know about how BERT works
Transformer-based models have pushed state of the art in many areas of NLP, but our
understanding of what is behind their success is still limited. This paper is the first survey of …
Language models are unsupervised multitask learners
Natural language processing tasks, such as question answering, machine translation,
reading comprehension, and summarization, are typically approached with supervised …
oLMpics-On what language model pre-training captures
Recent success of pre-trained language models (LMs) has spurred widespread interest in
the language capabilities that they possess. However, efforts to understand whether LM …
Artificial intelligence foundation and pre-trained models: Fundamentals, applications, opportunities, and social impacts
A Kolides, A Nawaz, A Rathor, D Beeman… - … Modelling Practice and …, 2023 - Elsevier
With the emergence of foundation models (FMs) that are trained on large amounts of data at
scale and adaptable to a wide range of downstream applications, AI is experiencing a …
Selective question answering under domain shift
To avoid giving wrong answers, question answering (QA) models need to know when to
abstain from answering. Moreover, users often ask questions that diverge from the model's …
The effect of natural distribution shift on question answering models
We build four new test sets for the Stanford Question Answering Dataset (SQuAD) and
evaluate the ability of question-answering systems to generalize to new data. Our first test …