Language model behavior: A comprehensive survey
Transformer language models have received widespread public attention, yet their
generated text is often surprising even to NLP researchers. In this survey, we discuss over …
Position information in transformers: An overview
Transformers are arguably the main workhorse in recent natural language processing
research. By definition, a Transformer is invariant with respect to reordering of the input …
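The invariance claim in this snippet is easy to check directly: a minimal NumPy sketch (hypothetical random weights, no positional encoding) showing that plain scaled dot-product self-attention is permutation-equivariant, so reordering the input tokens merely reorders the outputs.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Plain scaled dot-product self-attention, no position encoding.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

perm = rng.permutation(n)
out = self_attention(X, Wq, Wk, Wv)
out_perm = self_attention(X[perm], Wq, Wk, Wv)

# Permuting the input rows just permutes the output rows:
assert np.allclose(out[perm], out_perm)
```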
DKPLM: decomposable knowledge-enhanced pre-trained language model for natural language understanding
Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained
models with relation triples injected from knowledge graphs to improve language …
Monotonic location attention for length generalization
We explore different ways to utilize position-based cross-attention in seq2seq networks to
enable length generalization in algorithmic tasks. We show that a simple approach of …
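As an illustration of what purely position-based cross-attention means (a hypothetical sketch, not the paper's monotonic scheme): the weights below depend only on how far an encoder position is from the current decoder step, not on token content, which is why such schemes can generalize to lengths unseen in training.

```python
import numpy as np

def location_attention(dec_len, enc_len, enc_values, sharpness=4.0):
    # Hypothetical sketch: attention weights computed from positions alone
    # (no query/key content). Each decoder step t places a soft focus on
    # encoder position t.
    dec_pos = np.arange(dec_len)[:, None]
    enc_pos = np.arange(enc_len)[None, :]
    scores = -sharpness * np.abs(dec_pos - enc_pos)   # peak where t == s
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ enc_values                       # (dec_len, d)

enc_values = np.random.default_rng(0).normal(size=(10, 8))
context = location_attention(dec_len=12, enc_len=10, enc_values=enc_values)
```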
Revisiting and advancing Chinese natural language understanding with accelerated heterogeneous knowledge pre-training
Recently, knowledge-enhanced pre-trained language models (KEPLMs) improve context-
aware representations via learning from structured relations in knowledge graphs, and/or …
SeqNet: An efficient neural network for automatic malware detection
J Xu, W Fu, H Bu, Z Wang, L Ying - arXiv preprint arXiv:2205.03850, 2022 - arxiv.org
Malware continues to evolve rapidly, and more than 450,000 new samples are captured
every day, which makes manual malware analysis impractical. However, existing deep …
Word order matters when you increase masking
Word order, an essential property of natural languages, is injected into Transformer-based
neural language models using position encoding. However, recent experiments have shown …
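The claim that word order enters the model only through position encoding can be made concrete with the standard additive sinusoidal scheme from the original Transformer; a minimal sketch (not necessarily the masking setup this paper studies):

```python
import numpy as np

def sinusoidal_position_encoding(seq_len, d_model):
    # Fixed sinusoidal scheme from "Attention Is All You Need": even
    # dimensions get sin, odd dimensions get cos, with wavelengths forming
    # a geometric progression up to 10000 * 2*pi.
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]       # (1, d_model // 2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# Word order enters the model only through this additive signal:
# inputs = token_embeddings + sinusoidal_position_encoding(seq_len, d_model)
```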
TRELM: Towards Robust and Efficient Pre-training for Knowledge-Enhanced Language Models
KEPLMs are pre-trained models that utilize external knowledge to enhance language
understanding. Previous language models facilitated knowledge acquisition by …
Bridging the gap between position-based and content-based self-attention for neural machine translation
F Schmidt, MA Di Gangi - … of the Eighth Conference on Machine …, 2023 - aclanthology.org
Position-based token-mixing approaches, such as FNet and MLPMixer, have been shown to be
exciting attention alternatives for computer vision and natural language understanding. The …
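For context, FNet's token mixing really is position-based: the published operation replaces learned attention with a fixed 2D Fourier transform over the sequence and hidden dimensions, keeping only the real part. A minimal sketch:

```python
import numpy as np

def fnet_mixing(x):
    # FNet-style token mixing: 2D discrete Fourier transform over the
    # (sequence, hidden) axes, real part only. No learned attention
    # weights; how tokens mix depends on positions alone.
    return np.fft.fft2(x).real

seq_len, d_model = 16, 32
x = np.random.default_rng(0).normal(size=(seq_len, d_model))
mixed = fnet_mixing(x)   # same shape; every token now "sees" all others
```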
Capturing natural position relationships: A neural differential equation approach
C Ji, L Wang, J Qin, X Kang, Z Wang - Pattern Recognition Letters, 2024 - Elsevier
The Transformer has emerged as the predominant model in Natural Language Processing
due to its exceptional performance in various sequence modeling tasks, particularly in …