Vector-space models of semantic representation from a cognitive perspective: A discussion of common misconceptions

F Günther, L Rinaldi, M Marelli - … on Psychological Science, 2019 - journals.sagepub.com
Models that represent meaning as high-dimensional numerical vectors—such as latent
semantic analysis (LSA), hyperspace analogue to language (HAL), bound encoding of the …
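
To give a concrete sense of the model family this entry refers to, here is a minimal count-based sketch in the spirit of HAL: word meanings as co-occurrence count vectors, with similarity measured by cosine. The toy corpus, window size, and variable names are illustrative assumptions, not details taken from the paper.

# Minimal sketch (not from the paper): a toy HAL-style count model.
# Word meanings are co-occurrence count vectors; semantic similarity
# is the cosine between those vectors.
from collections import defaultdict
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2  # co-occurrence window size (an arbitrary illustrative choice)

vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

# Count how often each word appears near each other word.
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            counts[index[w], index[corpus[j]]] += 1

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words that occur in similar contexts end up with similar vectors.
print(cosine(counts[index["cat"]], counts[index["dog"]]))
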

Local interpretations for explainable natural language processing: A survey

S Luo, H Ivison, SC Han, J Poon - ACM Computing Surveys, 2024 - dl.acm.org
As the use of deep learning techniques has grown across various fields over the past
decade, complaints about the opacity of these black-box models have increased …

COMPS: Conceptual minimal pair sentences for testing robust property knowledge and its inheritance in pre-trained language models

K Misra, JT Rayz, A Ettinger - arXiv preprint arXiv:2210.01963, 2022 - arxiv.org
A characteristic feature of human semantic cognition is its ability to not only store and
retrieve the properties of concepts observed through experience, but to also facilitate the …

Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop

A Alishahi, G Chrupała, T Linzen - Natural Language Engineering, 2019 - cambridge.org
The Empirical Methods in Natural Language Processing (EMNLP) 2018 workshop
BlackboxNLP was dedicated to resources and techniques specifically developed for …

Exploring what is encoded in distributional word vectors: A neurobiologically motivated analysis

A Utsumi - Cognitive Science, 2020 - Wiley Online Library
Distributional semantic models, or word embeddings, are used pervasively for both cognitive
modeling and practical applications because of their remarkable ability to represent the …

Equity beyond bias in language technologies for education

E Mayfield, M Madaio, S Prabhumoye… - Proceedings of the …, 2019 - aclanthology.org
There is a long record of research on equity in schools. As machine learning researchers
begin to study fairness and bias in earnest, language technologies in education have an …

Digital begriffsgeschichte: Tracing semantic change using word embeddings

M Wevers, M Koolen - Historical Methods: A Journal of Quantitative …, 2020 - Taylor & Francis
Recently, the use of word embedding models (WEM) has received ample attention in the
natural language processing community. These models can capture semantic information in …

Probing neural language models for human tacit assumptions

N Weir, A Poliak, B Van Durme - arXiv preprint arXiv:2004.04877, 2020 - arxiv.org
Humans carry stereotypic tacit assumptions (STAs) (Prince, 1978), or propositional beliefs
about generic concepts. Such associations are crucial for understanding natural language …

Images of the unseen: Extrapolating visual representations for abstract and concrete words in a data-driven computational model

F Günther, MA Petilli, A Vergallito, M Marelli - Psychological Research, 2022 - Springer
Theories of grounded cognition assume that conceptual representations are grounded in
sensorimotor experience. However, abstract concepts such as jealousy or childhood have …

Better hit the nail on the head than beat around the bush: Removing protected attributes with a single projection

P Haghighatkhah, A Fokkens, P Sommerauer… - arXiv preprint arXiv …, 2022 - arxiv.org
Bias elimination and recent probing studies attempt to remove specific information from
embedding spaces. Here it is important to remove as much of the target information as …
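
The title describes removing protected attributes from embedding spaces with a single projection; below is a minimal sketch of that general idea, projecting embeddings onto the orthogonal complement of one candidate bias direction (here, a mean-difference direction). The data, direction choice, and names are illustrative assumptions rather than the authors' exact method.

# Minimal sketch (not the paper's exact method): removing a protected
# attribute from embeddings by projecting out a single direction.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))          # toy embeddings
y = rng.integers(0, 2, size=100)        # toy binary protected attribute

# One simple choice of direction: the difference between class means.
d = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
d = d / np.linalg.norm(d)

# A single projection onto the orthogonal complement of d.
X_clean = X - np.outer(X @ d, d)

# After the projection, the embeddings carry no variation along d.
print(np.allclose(X_clean @ d, 0.0))
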