A survey of knowledge enhanced pre-trained language models
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning methods, have yielded promising performance on various tasks in …
A comprehensive study of knowledge editing for large language models
Large Language Models (LLMs) have shown extraordinary capabilities in understanding
and generating text that closely mirrors human communication. However, a primary …
Mass-editing memory in a transformer
Recent work has shown exciting promise in updating large language models with new
memories, so as to replace obsolete information or add specialized knowledge. However …
Editing large language models: Problems, methods, and opportunities
Despite the ability to train capable LLMs, the methodology for maintaining their relevancy
and rectifying errors remains elusive. To this end, the past few years have witnessed a surge …
Evaluating the ripple effects of knowledge editing in language models
Modern language models capture a large body of factual knowledge. However, some facts
can be incorrectly induced or become obsolete over time, resulting in factually incorrect …
Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space
Transformer-based language models (LMs) are at the core of modern NLP, but their internal
prediction construction process is opaque and largely not understood. In this work, we make …
Calibrating factual knowledge in pretrained language models
Previous literature has shown that Pretrained Language Models (PLMs) can store factual knowledge. However, we find that facts stored in PLMs are not always correct. It …
Ontoprotein: Protein pretraining with gene ontology embedding
Self-supervised protein language models have proved their effectiveness in learning protein representations. With increasing computational power, current protein language …
A study on ReLU and softmax in transformer
The Transformer architecture consists of self-attention and feed-forward networks (FFNs)
which can be viewed as key-value memories according to previous works. However, FFN …
Human parity on commonsenseqa: Augmenting self-attention with external attention
Most of today's AI systems focus on using self-attention mechanisms and transformer
architectures on large amounts of diverse data to achieve impressive performance gains. In …