Efficient methods for natural language processing: A survey
Recent work in natural language processing (NLP) has yielded appealing results from
scaling model parameters and training data; however, using only scale to improve …
Ulysses Tesemõ: a new large corpus for Brazilian legal and governmental domain
The increasing use of artificial intelligence methods in the legal field has sparked interest in
applying Natural Language Processing techniques to handle legal tasks and reduce the …
[PDF][PDF] Corpus complexity matters in pretraining language models
A Agrawal, S Singh - Proceedings of The Fourth Workshop on …, 2023 - aclanthology.org
It is well known that filtering low-quality data before pretraining language models or
selecting suitable data from domains similar to downstream task datasets generally leads to …
Learning from Impairment: Leveraging Insights from Clinical Linguistics in Language Modelling Research
D Brunato - arXiv preprint arXiv:2412.15785, 2024 - arxiv.org
This position paper investigates the potential of integrating insights from language
impairment research and its clinical treatment to develop human-inspired learning strategies …
Gradual Syntactic Label Replacement for Language Model Pre-Training
Pre-training serves as a foundation of recent NLP models, where language modeling tasks
are performed over large texts. Typical models like BERT and GPT take the corpus as a …
Language model pre-training with linguistically motivated curriculum learning
Pre-training serves as a foundation of recent NLP models, where language modeling task is
performed over large texts. It has been shown that data affects the quality of pre-training, and …
Adaptive multi-corpora language model training for speech recognition
Neural network language model (NNLM) plays an essential role in automatic speech
recognition (ASR) systems, especially in adaptation tasks when text-only data is available. In …
Emergent Syntactic Behaviors and Mechanisms in Neural Language Models
A Mueller - 2023 - jscholarship.library.jhu.edu
Knowledge of syntax—the structure of phrases and sentences—is necessary for
robust generalization in natural language processing tasks. One reason for the success of …
A Study on the Transfer Learning Capability of Task-Oriented Multi-Modal Dialogue State Tracking
HK Hsu - Thesis, Department of Computer Science and Information Engineering, National Taiwan University, 2023 - airitilibrary.com
In multi-modal task-oriented dialogues, agents need the ability to comprehend the multi-
modal context perceived by the user. For this purpose, Simmc and Simmc2 were introduced …