Attention heads of large language models: A survey
Z Zheng, Y Wang, Y Huang, S Song, M Yang… - arXiv preprint arXiv…, 2024 - arxiv.org
Since the advent of ChatGPT, Large Language Models (LLMs) have excelled in various
tasks but remain as black-box systems. Consequently, the reasoning bottlenecks of LLMs …
The impact of LLM hallucinations on motor skill learning: A case study in badminton
Y Qiu - IEEE Access, 2024 - ieeexplore.ieee.org
The rise of Generative Artificial Intelligence, including Large Language Models (LLMs), has
enabled users to engage in self-guided learning of sports skills through conversation-based …
Internal Activation as the Polar Star for Steering Unsafe LLM Behavior
Large language models (LLMs) have demonstrated exceptional capabilities across a wide
range of tasks but also pose significant risks due to their potential to generate harmful …
Correctness Assessment of Code Generated by Large Language Models Using Internal Representations
Ensuring the correctness of code generated by Large Language Models (LLMs) presents a
significant challenge in AI-driven software development. Existing approaches predominantly …
The Energy Loss Phenomenon in RLHF: A New Perspective on Mitigating Reward Hacking
This work identifies the Energy Loss Phenomenon in Reinforcement Learning from Human
Feedback (RLHF) and its connection to reward hacking. Specifically, energy loss in the final …
What are Models Thinking about? Understanding Large Language Model Hallucination "Psychology" through Model Inner State Analysis
P Wang, Y Liu, Y Lu, J Hong, Y Wu - arXiv preprint arXiv:2502.13490, 2025 - arxiv.org
Large language model (LLM) systems suffer from the models' unstable ability to generate
valid and factual content, resulting in hallucination generation. Current hallucination …
Consistency in Large Language Models Ensures Reliable Patient Feedback Classification
Z Loi, D Morquin, X Derzko, X Corbier, S Gauthier… - medRxiv, 2024 - medrxiv.org
Evaluating hospital service quality depends on analyzing patient satisfaction feedback.
Human-led analyses of patient feedback have been inconsistent and time-consuming, while …
Attention heads of large language models
Z Zheng, Y Wang, Y Huang, S Song, M Yang, B Tang… - Patterns - cell.com
Large language models (LLMs) have demonstrated performance approaching human levels
in tasks such as long-text comprehension and mathematical reasoning, but they remain …
Current technologies for the sustainability of generative language models and for reflecting recency over time
정선호 - 한국산학기술학회논문지, 2024 - dbpia.co.kr
Abstract: This study addresses limitations of language models, such as hallucination and factuality problems that can arise when generative AI models are used without up-to-date information, and surveys technologies for reflecting recency in real time …