AMMUS: A survey of transformer-based pretrained models in natural language processing
KS Kalyan, A Rajasekharan, S Sangeetha - arXiv preprint arXiv …, 2021 - arxiv.org
Transformer-based pretrained language models (T-PTLMs) have achieved great success in
almost every NLP task. The evolution of these models started with GPT and BERT. These …
Language model behavior: A comprehensive survey
Transformer language models have received widespread public attention, yet their
generated text is often surprising even to NLP researchers. In this survey, we discuss over …
A systematic review of transformer-based pre-trained language models through self-supervised learning
Transfer learning is a technique utilized in deep learning applications to transmit learned
inference to a different target domain. The approach is mainly to solve the problem of a few …
Towards tracing trustworthiness dynamics: Revisiting pre-training period of large language models
Ensuring the trustworthiness of large language models (LLMs) is crucial. Most studies
concentrate on fully pre-trained LLMs to better understand and improve LLMs' …
Semantic structure in deep learning
E Pavlick - Annual Review of Linguistics, 2022 - annualreviews.org
Deep learning has recently come to dominate computational linguistics, leading to claims of
human-level performance in a range of language processing tasks. Like much previous …
A closer look at how fine-tuning changes BERT
Given the prevalence of pre-trained contextualized representations in today's NLP, there
have been many efforts to understand what information they contain, and why they seem to …
How transfer learning impacts linguistic knowledge in deep NLP models?
Transfer learning from pre-trained neural language models towards downstream tasks has
been a predominant theme in NLP recently. Several researchers have shown that deep NLP …
Towards trustworthy and aligned machine learning: A data-centric survey with causality perspectives
The trustworthiness of machine learning has emerged as a critical topic in the field,
encompassing various applications and research areas such as robustness, security …
Estimating knowledge in large language models without generating a single token
D Gottesman, M Geva - arXiv preprint arXiv:2406.12673, 2024 - arxiv.org
To evaluate knowledge in large language models (LLMs), current methods query the model
and then evaluate its generated responses. In this work, we ask whether evaluation can be …
TopoBERT: Exploring the topology of fine-tuned word representations
Transformer-based language models such as BERT and its variants have found widespread
use in natural language processing (NLP). A common way of using these models is to fine …