Expectations over unspoken alternatives predict pragmatic inferences
Scalar inferences (SI) are a signature example of how humans interpret language based on
unspoken alternatives. While empirical studies have demonstrated that human SI rates are …
From word types to tokens and back: A survey of approaches to word meaning representation and interpretation
M Apidianaki - Computational Linguistics, 2023 - direct.mit.edu
Vector-based word representation paradigms situate lexical meaning at different levels of
abstraction. Distributional and static embedding models generate a single vector per word …
Unsupervised contrast-consistent ranking with language models
Language models contain ranking-based knowledge and are powerful solvers of in-context
ranking tasks. For instance, they may have parametric knowledge about the ordering of …
Testing large language models on compositionality and inference with phrase-level adjective-noun entailment
Previous work has demonstrated that pre-trained large language models (LLMs) acquire
knowledge during pre-training which enables reasoning over relationships between words …
The role of relevance for scalar diversity: a usage-based approach
Scalar inferences occur when a weaker statement like It's warm is used when a stronger one
like It's hot could have been used instead, resulting in the inference that whoever produced …
Putting Words in BERT's Mouth: Navigating Contextualized Vector Spaces with Pseudowords
We present a method for exploring regions around individual points in a contextualized
vector space (particularly, BERT space), as a way to investigate how these regions …
Life after BERT: What do Other Muppets Understand about Language?
Existing pre-trained transformer analysis works usually focus only on one or two model
families at a time, overlooking the variability of the architecture and pre-training objectives. In …
Adjective scale probe: can language models encode formal semantics information?
It is an open question what semantic representations transformer-based language models
can encode and whether they have access to more abstract aspects of semantic meaning …
Not wacky vs. definitely wacky: A study of scalar adverbs in pretrained language models
Vector space models of word meaning all share the assumption that words occurring in
similar contexts have similar meanings. In such models, words that are similar in their topical …
similar contexts have similar meanings. In such models, words that are similar in their topical …
Representation of Lexical Stylistic Features in Language Models' Embedding Space
The representation space of pretrained Language Models (LMs) encodes rich information
about words and their relationships (e.g., similarity, hypernymy, polysemy) as well as abstract …