A reproducible survey on word embeddings and ontology-based methods for word similarity: Linear combinations outperform the state of the art

JJ Lastra-Díaz, J Goikoetxea, MAH Taieb… - … Applications of Artificial …, 2019 - Elsevier
Human similarity and relatedness judgements between concepts underlie most cognitive
capabilities, such as categorisation, memory, decision-making and reasoning. For this …

Null it out: Guarding protected attributes by iterative nullspace projection

S Ravfogel, Y Elazar, H Gonen, M Twiton… - arXiv preprint arXiv …, 2020 - arxiv.org
The ability to control for the kinds of information encoded in neural representation has a
variety of use cases, especially in light of the challenge of interpreting these models. We …
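The entry above names iterative nullspace projection (INLP), which repeatedly finds a linear direction predictive of a protected attribute and projects representations onto its nullspace. A minimal NumPy sketch, with the class-mean difference standing in for the trained linear classifier used in the actual method:

```python
import numpy as np

def inlp(X, y, n_iters=10):
    """Iterative nullspace projection (sketch).

    X: (n, d) representations; y: binary protected attribute.
    Each iteration removes one linear direction predictive of y.
    """
    X = X.astype(float).copy()
    d = X.shape[1]
    P_total = np.eye(d)
    for _ in range(n_iters):
        # Stand-in for a trained linear probe: the class-mean difference,
        # the direction a linear classifier would most readily exploit.
        w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
        norm = np.linalg.norm(w)
        if norm < 1e-9:          # nothing left to remove
            break
        w = w / norm
        # Project onto the nullspace of w (rank-1 removal).
        P = np.eye(d) - np.outer(w, w)
        X = X @ P
        P_total = P @ P_total
    return X, P_total
```

After projection, the class means coincide along the removed direction, so a linear probe on the cleaned representations has nothing to latch onto in that subspace.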

ConceptNet 5.5: An open multilingual graph of general knowledge

R Speer, J Chin, C Havasi - Proceedings of the AAAI conference on …, 2017 - ojs.aaai.org
Machine learning about language can be improved by supplying it with specific
knowledge and sources of external information. We present here a new version of the linked …

Evaluating word embedding models: Methods and experimental results

B Wang, A Wang, F Chen, Y Wang… - APSIPA transactions on …, 2019 - cambridge.org
This work conducts an extensive evaluation of a large number of word embedding models
for language processing applications. First, we introduce popular word …

Using the output embedding to improve language models

O Press, L Wolf - arXiv preprint arXiv:1608.05859, 2016 - arxiv.org
We study the topmost weight matrix of neural network language models. We show that this
matrix constitutes a valid word embedding. When training language models, we recommend …
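The entry above concerns weight tying: reusing the input embedding matrix as the output (softmax) projection. A toy NumPy sketch of the tied configuration, where a single matrix `E` (an assumption for illustration) serves both roles:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 10, 4

# One shared matrix: rows are input embeddings, and the same matrix
# (transposed) scores the vocabulary at the output -- the "tied" setup.
E = rng.normal(size=(vocab, dim))

def forward(token_ids, E):
    """Toy language-model step: embed tokens, treat the embedding itself
    as the hidden state, then score the vocabulary with the same matrix."""
    h = E[token_ids]       # input embedding lookup
    logits = h @ E.T       # output projection reuses E (weight tying)
    return logits

logits = forward(np.array([1, 3]), E)
```

In an untied model the output projection would be a separate `(vocab, dim)` matrix; tying halves the embedding parameter count and, as the paper argues, often improves perplexity.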

The “Small World of Words” English word association norms for over 12,000 cue words

S De Deyne, DJ Navarro, A Perfors, M Brysbaert… - Behavior research …, 2019 - Springer
Word associations have been used widely in psychology, but the validity of their application
strongly depends on the number of cues included in the study and the extent to which they …

Learning gender-neutral word embeddings

J Zhao, Y Zhou, Z Li, W Wang, KW Chang - arXiv preprint arXiv …, 2018 - arxiv.org
Word embedding models have become a fundamental component in a wide range of
Natural Language Processing (NLP) applications. However, embeddings trained on human …

Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors

M Baroni, G Dinu, G Kruszewski - … of the 52nd Annual Meeting of …, 2014 - aclanthology.org
Context-predicting models (more commonly known as embeddings or neural language
models) are the new kids on the distributional semantics block. Despite the buzz …
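The "context-counting" side of the comparison above builds word vectors directly from co-occurrence counts, typically reweighted by positive pointwise mutual information (PPMI). A minimal sketch on a made-up 4-word co-occurrence matrix:

```python
import numpy as np

# Toy co-occurrence counts (rows = target words, columns = context words);
# in a count model these rows, reweighted by PPMI, *are* the word vectors.
C = np.array([[0, 4, 1, 1],
              [4, 0, 2, 0],
              [1, 2, 0, 3],
              [1, 0, 3, 0]], dtype=float)

total = C.sum()
p_ij = C / total                            # joint probability estimates
p_i = C.sum(axis=1, keepdims=True) / total  # target marginals
p_j = C.sum(axis=0, keepdims=True) / total  # context marginals

with np.errstate(divide="ignore"):
    pmi = np.log(p_ij / (p_i * p_j))
ppmi = np.maximum(pmi, 0.0)  # clamp negatives and -inf (zero counts) to 0
```

Context-predicting models (word2vec-style) instead learn the vectors by optimizing a prediction objective; the paper's contribution is a controlled comparison of the two families.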

On the dimensionality of word embedding

Z Yin, Y Shen - Advances in neural information processing …, 2018 - proceedings.neurips.cc
In this paper, we provide a theoretical understanding of word embedding and its
dimensionality. Motivated by the unitary-invariance of word embedding, we propose the …
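The unitary invariance the snippet above refers to is the fact that rotating all embedding vectors by the same orthogonal matrix changes nothing observable about them, since inner products and cosine similarities are preserved. A quick NumPy check on made-up vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(5, 3))            # toy embedding matrix, 5 words in 3-d

# Random orthogonal matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

def pairwise_cosine(M):
    """All pairwise cosine similarities between the rows of M."""
    n = M / np.linalg.norm(M, axis=1, keepdims=True)
    return n @ n.T

# Rotating every vector by Q leaves all pairwise similarities unchanged --
# the invariance that motivates the paper's dimensionality analysis.
sims_before = pairwise_cosine(E)
sims_after = pairwise_cosine(E @ Q)
```

Because only such invariant quantities matter, the paper can analyze embedding quality as a function of dimensionality alone rather than of any particular coordinate system.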

Rethinking embedding coupling in pre-trained language models

HW Chung, T Fevry, H Tsai, M Johnson… - arXiv preprint arXiv …, 2020 - arxiv.org
We re-evaluate the standard practice of sharing weights between input and output
embeddings in state-of-the-art pre-trained language models. We show that decoupled …