Distributional semantics and linguistic theory

G Boleda - Annual Review of Linguistics, 2020 - annualreviews.org
Distributional semantics provides multidimensional, graded, empirically induced word
representations that successfully capture many aspects of meaning in natural languages, as …

Distributional models of word meaning

A Lenci - Annual Review of Linguistics, 2018 - annualreviews.org
Distributional semantics is a usage-based model of meaning, based on the assumption that
the statistical distribution of linguistic items in context plays a key role in characterizing their …

TimeLMs: Diachronic language models from Twitter

D Loureiro, F Barbieri, L Neves, LE Anke… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite its importance, the time variable has been largely neglected in the NLP and
language model literature. In this paper, we present TimeLMs, a set of language models …

Climbing towards NLU: On meaning, form, and understanding in the age of data

EM Bender, A Koller - Proceedings of the 58th annual meeting of …, 2020 - aclanthology.org
The success of the large neural language models on many NLP tasks is exciting. However,
we find that these successes sometimes lead to hype in which these models are being …

Time-aware language models as temporal knowledge bases

B Dhingra, JR Cole, JM Eisenschlos… - Transactions of the …, 2022 - direct.mit.edu
Many facts come with an expiration date, from the name of the President to the basketball
team LeBron James plays for. However, most language models (LMs) are trained on …

Can large language models transform computational social science?

C Ziems, W Held, O Shaikh, J Chen, Z Zhang… - Computational …, 2024 - direct.mit.edu
Large language models (LLMs) are capable of successfully performing many language
processing tasks zero-shot (without training data). If zero-shot LLMs can also reliably classify …

Historical representations of social groups across 200 years of word embeddings from Google Books

TES Charlesworth, A Caliskan… - Proceedings of the …, 2022 - National Academy of Sciences
Using word embeddings from 850 billion words in English-language Google Books, we
provide an extensive analysis of historical change and stability in social group …

Word embeddings: What works, what doesn't, and how to tell the difference for applied research

PL Rodriguez, A Spirling - The Journal of Politics, 2022 - journals.uchicago.edu
Word embeddings are becoming popular for political science research, yet we know little
about their properties and performance. To help scholars seeking to use these techniques …

SemEval-2020 task 1: Unsupervised lexical semantic change detection

D Schlechtweg, B McGillivray, S Hengchen… - arXiv preprint arXiv …, 2020 - arxiv.org
Lexical Semantic Change detection, i.e., the task of identifying words that change meaning
over time, is a very active research area, with applications in NLP, lexicography, and …

Representation learning for dynamic graphs: A survey

SM Kazemi, R Goel, K Jain, I Kobyzev, A Sethi… - Journal of Machine …, 2020 - jmlr.org
Graphs arise naturally in many real-world applications including social networks,
recommender systems, ontologies, biology, and computational finance. Traditionally …