Feature contamination: Neural networks learn uncorrelated features and fail to generalize
Learning representations that generalize under distribution shifts is critical for building
robust machine learning models. However, despite significant efforts in recent years …
All or none: Identifiable linear properties of next-token predictors in language modeling
We analyze identifiability as a possible explanation for the ubiquity of linear properties
across language models, such as the vector difference between the representations of …
Generalization from Starvation: Hints of Universality in LLM Knowledge Graph Learning
Motivated by interpretability and reliability, we investigate how neural networks represent
knowledge during graph learning. We find hints of universality, where equivalent …
Harmonic Loss Trains Interpretable AI Models
In this paper, we introduce **harmonic loss** as an alternative to the standard cross-entropy
loss for training neural networks and large language models (LLMs). Harmonic loss enables …
Representational Analysis of Binding in Language Models
Entity tracking is essential for complex reasoning. To perform in-context entity tracking,
language models (LMs) must bind an entity to its attribute (e.g., bind a container to its content) …
On Representational Dissociation of Language and Arithmetic in Large Language Models
The association between language and (non-linguistic) thinking ability in humans has long
been debated, and recently, neuroscientific evidence of brain activity patterns has been …
Think-to-Talk or Talk-to-Think? When LLMs Come Up with an Answer in Multi-Step Reasoning
This study investigates the internal reasoning mechanism of language models during
symbolic multi-step reasoning, motivated by the question of whether chain-of-thought (CoT) …