A comprehensive survey on pretrained foundation models: A history from bert to chatgpt
Abstract Pretrained Foundation Models (PFMs) are regarded as the foundation for various
downstream tasks across different data modalities. A PFM (e.g., BERT, ChatGPT, GPT-4) is …
Machine translation systems and quality assessment: a systematic review
I Rivera-Trigueros - Language Resources and Evaluation, 2022 - Springer
Nowadays, in the globalised context in which we find ourselves, language barriers can still
be an obstacle to accessing information. On occasions, it is impossible to satisfy the demand …
Rewarded soups: Towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards
Foundation models are first pre-trained on vast unsupervised datasets and then fine-tuned
on labeled data. Reinforcement learning, notably from human feedback (RLHF), can further …
BARTScore: Evaluating generated text as text generation
A wide variety of NLP applications, such as machine translation, summarization, and dialog,
involve text generation. One major challenge for these applications is how to evaluate …
Benchmarking large language models on CMExam - a comprehensive Chinese medical exam dataset
Recent advancements in large language models (LLMs) have transformed the field of
question answering (QA). However, evaluating LLMs in the medical field is challenging due …
Exploring and distilling posterior and prior knowledge for radiology report generation
Automatically generating radiology reports can improve current clinical practice in diagnostic
radiology. On one hand, it can relieve radiologists from the heavy burden of report writing; …
On faithfulness and factuality in abstractive summarization
It is well known that the standard likelihood training and approximate decoding objectives in
neural text generation models lead to less human-like responses for open-ended tasks such …
Comparison of text preprocessing methods
CP Chai - Natural Language Engineering, 2023 - cambridge.org
Text preprocessing is not only an essential step to prepare the corpus for modeling but also
a key area that directly affects the natural language processing (NLP) application results. For …
Extractive summarization as text matching
This paper creates a paradigm shift with regard to the way we build neural extractive
summarization systems. Instead of following the commonly used framework of extracting …
NeuroLogic A*esque decoding: Constrained text generation with lookahead heuristics
The dominant paradigm for neural text generation is left-to-right decoding from
autoregressive language models. Constrained or controllable generation under complex …