Does fine-tuning LLMs on new knowledge encourage hallucinations?
When large language models are aligned via supervised fine-tuning, they may encounter
new factual information that was not acquired through pre-training. It is often conjectured that …
Monotonic paraphrasing improves generalization of language model prompting
Performance of large language models (LLMs) may vary with different prompts or
instructions of even the same task. One commonly recognized factor for this phenomenon is …
Panda: Preference adaptation for enhancing domain-specific abilities of LLMs
While large language models (LLMs) have demonstrated considerable capabilities across
various natural language tasks, they often fall short of the performance achieved by domain …
Famicom: Further demystifying prompts for language models with task-agnostic performance estimation
Language models have shown impressive in-context-learning capabilities, which allow them
to benefit from input prompts and perform better on downstream end tasks. Existing works …
Familiarity-aware evidence compression for retrieval augmented generation
Retrieval Augmented Generation (RAG) improves large language models (LLMs) by
incorporating non-parametric knowledge through evidence retrieval from external sources …
Gradual Learning: Optimizing Fine-Tuning with Partially Mastered Knowledge in Large Language Models
During the pretraining phase, large language models (LLMs) acquire vast amounts of
knowledge from extensive text corpora. Nevertheless, in later stages such as fine-tuning and …
Delving into the Reversal Curse: How Far Can Large Language Models Generalize?
While large language models (LLMs) showcase unprecedented capabilities, they also
exhibit certain inherent limitations when facing seemingly trivial tasks. A prime example is …
Adapting Generative Large Language Models for Information Extraction from Unstructured Electronic Health Records in Residential Aged Care: A …
Information extraction (IE) of unstructured electronic health records is challenging
due to the semantic complexity of textual data. Generative large language models (LLMs) …
AmbigDocs: Reasoning across Documents on Different Entities under the Same Name
Different entities with the same name can be difficult to distinguish. Handling confusing entity
mentions is a crucial skill for language models (LMs). For example, given the question "…
Evaluating machine learning approaches for multi-label classification of unstructured electronic health records with a generative large language model
Multi-label classification of unstructured electronic health records (EHR) poses challenges
due to the inherent semantic complexity in textual data. Advances in natural language …