Foundation models in robotics: Applications, challenges, and the future
We survey applications of pretrained foundation models in robotics. Traditional deep
learning models in robotics are trained on small datasets tailored for specific tasks, which …
LM vs LM: Detecting factual errors via cross examination
A prominent weakness of modern language models (LMs) is their tendency to generate
factually incorrect text, which hinders their usability. A natural question is whether such …
Navigating the grey area: How expressions of uncertainty and overconfidence affect language models
The increased deployment of LMs for real-world tasks involving knowledge and facts makes
it important to understand model epistemology: what LMs think they know, and how their …
Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration
Despite efforts to expand the knowledge of large language models (LLMs), knowledge gaps-
-missing or outdated information in LLMs--might always persist given the evolving nature of …
Evaluation of uncertainty quantification methods in multi-label classification: A case study with automatic diagnosis of electrocardiogram
Artificial Intelligence (AI) use in automated Electrocardiogram (ECG) classification has
continuously attracted the research community's interest, motivated by their promising …
One vs. many: Comprehending accurate information from multiple erroneous and inconsistent AI generations
As Large Language Models (LLMs) are nondeterministic, the same input can generate
different outputs, some of which may be incorrect or hallucinated. If run again, the LLM may …
Knowing what LLMs do not know: A simple yet effective self-detection method
Large Language Models (LLMs) have shown great potential in Natural Language
Processing (NLP) tasks. However, recent literature reveals that LLMs generate nonfactual …
Gaussian stochastic weight averaging for Bayesian low-rank adaptation of large language models
E Onal, K Flöge, E Caldwell, A Sheverdin… - arXiv preprint arXiv …, 2024 - arxiv.org
Fine-tuned Large Language Models (LLMs) often suffer from overconfidence and poor
calibration, particularly when fine-tuned on small datasets. To address these challenges, we …
Hallucination detection in LLMs: Fast and memory-efficient fine-tuned models
Uncertainty estimation is a necessary component when implementing AI in high-risk
settings, such as autonomous cars, medicine, or insurance. Large Language Models …
Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs
From common-sense reasoning to domain-specific tasks, parameter-efficient fine-tuning
(PEFT) methods for large language models (LLMs) have showcased significant performance …