A survey on evaluation of large language models
Large language models (LLMs) are gaining increasing popularity in both academia and
industry, owing to their unprecedented performance in various applications. As LLMs …
Empowering biomedical discovery with AI agents
We envision "AI scientists" as systems capable of skeptical learning and reasoning that
empower biomedical research through collaborative agents that integrate AI models and …
Can LLMs express their uncertainty? An empirical evaluation of confidence elicitation in LLMs
Empowering large language models to accurately express confidence in their answers is
essential for trustworthy decision-making. Previous confidence elicitation methods, which …
Large legal fictions: Profiling legal hallucinations in large language models
Do large language models (LLMs) know the law? LLMs are increasingly being used to
augment legal practice, education, and research, yet their revolutionary potential is …
"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust
Widely deployed large language models (LLMs) can produce convincing yet incorrect
outputs, potentially misleading users who may rely on them as if they were correct. To …
Evaluation and analysis of hallucination in large vision-language models
Large Vision-Language Models (LVLMs) have recently achieved remarkable success.
However, LVLMs are still plagued by the hallucination problem, which limits the practicality …
Does fine-tuning LLMs on new knowledge encourage hallucinations?
When large language models are aligned via supervised fine-tuning, they may encounter
new factual information that was not acquired through pre-training. It is often conjectured that …
Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration
Despite efforts to expand the knowledge of large language models (LLMs), knowledge gaps--missing or outdated information in LLMs--might always persist given the evolving nature of …
Alignment for honesty
Recent research has made significant strides in applying alignment techniques to enhance
the helpfulness and harmlessness of large language models (LLMs) in accordance with …
Label-free node classification on graphs with large language models (LLMs)
In recent years, there have been remarkable advancements in node classification achieved
by Graph Neural Networks (GNNs). However, they necessitate abundant high-quality labels …