BERT is to NLP what AlexNet is to CV: Can pre-trained language models identify analogies?

A Ushio, L Espinosa-Anke, S Schockaert… - arXiv preprint arXiv …, 2021 - arxiv.org
Analogies play a central role in human commonsense reasoning. The ability to recognize
analogies such as "eye is to seeing what ear is to hearing", sometimes referred to as …

The linear representation hypothesis and the geometry of large language models

K Park, YJ Choe, V Veitch - arXiv preprint arXiv:2311.03658, 2023 - arxiv.org
Informally, the 'linear representation hypothesis' is the idea that high-level concepts are
represented linearly as directions in some representation space. In this paper, we address …

AnalogyKB: Unlocking analogical reasoning of language models with a million-scale knowledge base

S Yuan, J Chen, C Sun, J Liang, Y Xiao… - arXiv preprint arXiv …, 2023 - arxiv.org
Analogical reasoning is a fundamental cognitive ability of humans. However, current
language models (LMs) still struggle to achieve human-like performance in analogical …

Beneath surface similarity: Large language models make reasonable scientific analogies after structure abduction

S Yuan, J Chen, X Ge, Y Xiao, D Yang - arXiv preprint arXiv:2305.12660, 2023 - arxiv.org
The vital role of analogical reasoning in human cognition allows us to grasp novel concepts
by linking them with familiar ones through shared relational structures. Despite the attention …

Understanding and fixing the modality gap in vision-language models

V Udandarao - Master's thesis, University of Cambridge, 2022 - mlmi.eng.cam.ac.uk
Contrastive language-image pre-training has emerged as a simple yet effective way to
train large-scale vision-language models [165, 83, 181, 220] that are capable of learning …

Rumor detection in social media based on multi-hop graphs and differential time series

J Chen, W Zhang, H Ma, S Yang - Mathematics, 2023 - mdpi.com
The widespread dissemination of rumors (fake information) on online social media has had
a detrimental impact on public opinion and the social environment. This necessitates the …

Contrastive loss is all you need to recover analogies as parallel lines

N Ri, FT Lee, N Verma - arXiv preprint arXiv:2306.08221, 2023 - arxiv.org
While static word embedding models are known to represent linguistic analogies as parallel
lines in high-dimensional space, the underlying mechanism as to why they result in such …
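The "parallel lines" structure that this snippet refers to can be illustrated with toy vectors (a minimal sketch using made-up embeddings for man/king/woman/queen, not the output of any trained model): an analogy a : b :: c : d holds geometrically when the offset b − a matches d − c, so the two word pairs are connected by parallel segments.

```python
import numpy as np

# Hypothetical 3-dimensional "embeddings", constructed by hand so that
# man : king :: woman : queen forms a parallelogram. Real static
# embeddings (e.g. word2vec, GloVe) exhibit this only approximately.
emb = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "king":  np.array([1.0, 1.0, 0.0]),
    "woman": np.array([0.0, 0.0, 1.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
}

# "Parallel lines": the offset king - man equals queen - woman,
# i.e. both analogy pairs share the same relational direction.
offset_1 = emb["king"] - emb["man"]
offset_2 = emb["queen"] - emb["woman"]
print(np.allclose(offset_1, offset_2))  # True

# Equivalently, the classic analogy completion:
# king - man + woman lands on queen.
predicted = emb["king"] - emb["man"] + emb["woman"]
print(np.allclose(predicted, emb["queen"]))  # True
```

In a trained model the equality is approximate, so analogy completion is usually done by nearest-neighbor search over the vocabulary rather than an exact match.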

Past Meets Present: Creating Historical Analogy with Large Language Models

N Li, S Yuan, J Chen, J Liang, F Wei, Z Liang… - arXiv preprint arXiv …, 2024 - arxiv.org
Historical analogies, which compare known past events with contemporary but unfamiliar
events, help people make decisions and understand the world …

Compositionality of complex graphemes in the undeciphered Proto-Elamite script using image and text embedding models

L Born, K Kelley, MW Monroe… - Findings of the …, 2021 - aclanthology.org
We introduce a language modeling architecture which operates over sequences of images,
or over multimodal sequences of images with associated labels. We use this architecture …

Forging better axes: Evaluating and improving the measurement of semantic dimensions in word embeddings

A Boutyline, E Johnston - Retrieved from osf.io/preprints/socarxiv/576h3, 2023 - files.osf.io
Word embeddings are a powerful tool for measuring cultural meaning using large text
corpora. In sociology, some of their most common applications estimate relationships …