Explainable generative AI (GenXAI): A survey, conceptualization, and research agenda
J Schneider - Artificial Intelligence Review, 2024 - Springer
Generative AI (GenAI) represents a shift from AI's ability to “recognize” to its ability to
“generate” solutions for a wide range of tasks. As generated solutions and applications grow …
Not all tokens are what you need for pretraining
Previous language model pre-training methods have uniformly applied a next-token
prediction loss to all training tokens. Challenging this norm, we posit that "Not all tokens in a …
Rho-1: Not all tokens are what you need
Previous language model pre-training methods have uniformly applied a next-token
prediction loss to all training tokens. Challenging this norm, we posit that "Not all tokens in a corpus are equally important for language model training". Our …
GTBench: Uncovering the strategic reasoning limitations of LLMs via game-theoretic evaluations
As Large Language Models (LLMs) are integrated into critical real-world applications, their
strategic and logical reasoning abilities are increasingly crucial. This paper evaluates LLMs' …
A peek into token bias: Large language models are not yet genuine reasoners
This study introduces a hypothesis-testing framework to assess whether large language
models (LLMs) possess genuine reasoning abilities or primarily depend on token bias. We …
A comprehensive survey of small language models in the era of large language models: Techniques, enhancements, applications, collaboration with LLMs, and …
Large language models (LLMs) have demonstrated emergent abilities in text generation,
question answering, and reasoning, facilitating various tasks and domains. Despite their …
InstructGraph: Boosting large language models via graph-centric instruction tuning and preference alignment
Do current large language models (LLMs) better solve graph reasoning and generation
tasks with parameter updates? In this paper, we propose InstructGraph, a framework that …
SelfPiCo: Self-guided partial code execution with LLMs
Code executability plays a vital role in software debugging and testing (e.g., detecting runtime
exceptions or assertion violations). However, code execution, especially partial or arbitrary …
Can Large Language Models Understand Symbolic Graphics Programs?
Against the backdrop of enthusiasm for large language models (LLMs), there is an urgent
need to scientifically assess their capabilities and shortcomings. This is nontrivial in part …
Mitigating catastrophic forgetting in language transfer via model merging
As open-weight large language models (LLMs) achieve ever more impressive performances
across a wide range of tasks in English, practitioners aim to adapt these models to different …