Do prompt-based models really understand the meaning of their prompts?
Recently, a boom of papers has shown extraordinary progress in zero-shot and few-shot
learning with various prompt-based models. It is commonly argued that prompts help models …
The ambiguity of BERTology: what do large language models represent?
T Buder-Gröndahl - Synthese, 2023 - Springer
The field of “BERTology” aims to locate linguistic representations in large language models
(LLMs). These have commonly been interpreted as representing structural descriptions …
Explicitly representing syntax improves sentence-to-layout prediction of unexpected situations
Recognizing visual entities in a natural language sentence and arranging them in a 2D
spatial layout require a compositional understanding of language and space. This task of …
Measuring the knowledge acquisition-utilization gap in pretrained language models
While pre-trained language models (PLMs) have shown evidence of acquiring vast amounts
of knowledge, it remains unclear how much of this parametric knowledge is actually usable …
Word order does matter (and shuffled language models know it)
Recent studies have shown that language models pretrained and/or fine-tuned on randomly
permuted sentences exhibit competitive performance on GLUE, putting into question the …
Multilingual Nonce Dependency Treebanks: Understanding how Language Models Represent and Process Syntactic Structure
We introduce SPUD (Semantically Perturbed Universal Dependencies), a framework for
creating nonce treebanks for the multilingual Universal Dependencies (UD) corpora. SPUD …
Linguistic Structure Induction from Language Models
O Momen - arXiv preprint arXiv:2403.09714, 2024 - arxiv.org
Linear sequences of words are implicitly represented in our brains by hierarchical structures
that organize the composition of words in sentences. Linguists formalize different …
Seeing Syntax: Uncovering Syntactic Learning Limitations in Vision-Language Models
Vision-language models (VLMs) serve as foundation models for multi-modal applications
such as image captioning and text-to-image generation. Recent studies have highlighted …
The emergence of grammatical structure from inter-predictability
J Mansfield, C Kemp - 2023 - osf.io
Recent research has shown that words or morphemes that are closer to each other in linear
order tend to have higher statistical inter-predictability, measured as mutual information. We …
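The "inter-predictability, measured as mutual information" in the last snippet is concrete enough to illustrate. Below is a minimal sketch, not the authors' code or data, of mutual information between adjacent tokens; the toy corpus and whitespace tokenization are placeholder assumptions for illustration only.

    # Sketch: mutual information I(X;Y) between adjacent token positions,
    # estimated from bigram counts over a toy corpus (illustrative assumption).
    from collections import Counter
    from math import log2

    corpus = [
        "the dog chased the cat",
        "the cat chased the dog",
        "a dog saw a cat",
    ]

    pairs = Counter()            # joint counts over (left, right) bigrams
    left, right = Counter(), Counter()  # marginal counts per position
    for sentence in corpus:
        tokens = sentence.split()
        for a, b in zip(tokens, tokens[1:]):
            pairs[(a, b)] += 1
            left[a] += 1
            right[b] += 1

    n = sum(pairs.values())
    # I(X;Y) = sum over (x, y) of p(x, y) * log2( p(x, y) / (p(x) * p(y)) )
    mi = sum(
        (c / n) * log2((c / n) / ((left[a] / n) * (right[b] / n)))
        for (a, b), c in pairs.items()
    )
    print(f"adjacent-token MI: {mi:.3f} bits")

Higher values of this quantity indicate that adjacent elements predict each other more strongly, which is the statistical relationship the snippet links to grammatical structure.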