Exploring the landscape of machine unlearning: A comprehensive survey and taxonomy
Machine unlearning (MU) is gaining increasing attention due to the need to remove or
modify predictions made by machine learning (ML) models. While training models have …
Revisiting out-of-distribution robustness in nlp: Benchmarks, analysis, and llms evaluations
This paper reexamines the research on out-of-distribution (OOD) robustness in the field of
NLP. We find that the distribution shift settings in previous studies commonly lack adequate …
Fine-tuning large neural language models for biomedical natural language processing
Large neural language models have transformed modern natural language processing
(NLP) applications. However, fine-tuning such models for specific tasks remains challenging …
State-of-the-art generalisation research in NLP: a taxonomy and review
The ability to generalise well is one of the primary desiderata of natural language
processing (NLP). Yet, what 'good generalisation' entails and how it should be evaluated is …
Human parity on commonsenseqa: Augmenting self-attention with external attention
Most of today's AI systems focus on using self-attention mechanisms and transformer
architectures on large amounts of diverse data to achieve impressive performance gains. In …
Metro: Efficient denoising pretraining of large scale autoencoding language models with model generated signals
We present an efficient method of pretraining large-scale autoencoding language models
using training signals generated by an auxiliary model. Originated in ELECTRA, this training …
UnitedQA: A hybrid approach for open domain question answering
To date, most recent work under the retrieval-reader framework for open-domain QA
focuses on either extractive or generative reader exclusively. In this paper, we study a hybrid …
Enhancing machine-generated text detection: adversarial fine-tuning of pre-trained language models
DH Lee, B Jang - IEEE Access, 2024 - ieeexplore.ieee.org
Advances in large language models (LLMs) have revolutionized the natural language
processing field. However, the text generated by LLMs can result in various issues, such as …
DIALKI: Knowledge identification in conversational systems through dialogue-document contextualization
Identifying relevant knowledge to be used in conversational systems that are grounded in
long documents is critical to effective response generation. We introduce a knowledge …
Reeval: Automatic hallucination evaluation for retrieval-augmented large language models via transferable adversarial attacks
Despite remarkable advancements in mitigating hallucinations in large language models
(LLMs) by retrieval augmentation, it remains challenging to measure the reliability of LLMs …