Psychometric predictive power of large language models
Instruction tuning aligns the response of large language models (LLMs) with human
preferences. Despite such efforts in human--LLM alignment, we find that instruction tuning …
Holmes ⌕ A Benchmark to Assess the Linguistic Competence of Language Models
We introduce Holmes, a new benchmark designed to assess language models' (LMs')
linguistic competence—their unconscious understanding of linguistic phenomena …
Can Large Language Models Interpret Noun-Noun Compounds? A Linguistically-Motivated Study on Lexicalized and Novel Compounds
Noun-noun compound interpretation is the task in which a model is given one such
construction and asked to provide a paraphrase, making the semantic relation …
Brain-like language processing via a shallow untrained multihead attention network
Large Language Models (LLMs) have been shown to be effective models of the human
language system, with some models predicting most of the explainable variance of brain activity in …
Large Language Models Are Human-Like Internally
Recent cognitive modeling studies have reported that larger language models (LMs) exhibit
a poorer fit to human reading behavior, leading to claims of their cognitive implausibility. In …
Log Probabilities Are a Reliable Estimate of Semantic Plausibility in Base and Instruction-Tuned Language Models
Semantic plausibility (e.g., knowing that "the actor won the award" is more likely than
"the actor won the battle") serves as an effective proxy for general world knowledge. Language …
From Words to Worlds: Compositionality for Cognitive Architectures
Large language models (LLMs) are very performant connectionist systems, but do they
exhibit more compositionality? More importantly, is that part of why they perform so well? We …
Activating Distributed Visual Region within LLMs for Efficient and Effective Vision-Language Training and Inference
Large Vision-Language Models (LVLMs) typically learn visual capacity through visual
instruction tuning, involving updates to both a projector and their LLM backbones. Drawing …
The potential--and the pitfalls--of using pre-trained language models as cognitive science theories
Many studies have evaluated the cognitive alignment of Pre-trained Language Models
(PLMs), i.e., their correspondence to adult performance across a range of cognitive domains …
On Representational Dissociation of Language and Arithmetic in Large Language Models
The association between language and (non-linguistic) thinking ability in humans has long
been debated, and recently, neuroscientific evidence of brain activity patterns has been …