On the explainability of natural language processing deep models

JE Zini, M Awad - ACM Computing Surveys, 2022 - dl.acm.org
Despite their success, deep networks are used as black-box models with outputs that are not
easily explainable during the learning and the prediction phases. This lack of interpretability …

An introduction to neural information retrieval

B Mitra, N Craswell - Foundations and Trends® in Information …, 2018 - nowpublishers.com
Neural ranking models for information retrieval (IR) use shallow or deep neural networks to
rank search results in response to a query. Traditional learning to rank models employ …

Word embedding for understanding natural language: a survey

Y Li, T Yang - Guide to big data applications, 2018 - Springer
Word embedding, where semantic and syntactic features are captured from unlabeled text
data, is a basic procedure in Natural Language Processing (NLP). The extracted features …

From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing

H Zamani, M Dehghani, WB Croft… - Proceedings of the 27th …, 2018 - dl.acm.org
The availability of massive data and computing power allowing for effective data driven
neural approaches is having a major impact on machine learning and information retrieval …

Learning deep sparse regularizers with applications to multi-view clustering and semi-supervised classification

S Wang, Z Chen, S Du, Z Lin - IEEE Transactions on Pattern …, 2021 - ieeexplore.ieee.org
Sparsity-constrained optimization problems are common in machine learning, such as
sparse coding, low-rank minimization and compressive sensing. However, most previous …

Bag-of-concepts: Comprehending document representation through clustering words in distributed representation

HK Kim, H Kim, S Cho - Neurocomputing, 2017 - Elsevier
Two document representation methods are mainly used in solving text mining problems.
Known for its intuitive and simple interpretability, the bag-of-words method represents a …

Neural models for information retrieval

B Mitra, N Craswell - arXiv preprint arXiv:1705.01509, 2017 - arxiv.org
Neural ranking models for information retrieval (IR) use shallow or deep neural networks to
rank search results in response to a query. Traditional learning to rank models employ …

Mixed dimension embeddings with application to memory-efficient recommendation systems

AA Ginart, M Naumov, D Mudigere… - … on Information Theory …, 2021 - ieeexplore.ieee.org
Embedding representations power machine intelligence in many applications, including
recommendation systems, but they are space intensive, potentially occupying hundreds of …

Word2Sense: Sparse interpretable word embeddings

A Panigrahi, HV Simhadri… - Proceedings of the 57th …, 2019 - aclanthology.org
We present an unsupervised method to generate Word2Sense word embeddings that are
interpretable—each dimension of the embedding space corresponds to a fine-grained …

Adaptive cross-contextual word embedding for word polysemy with unsupervised topic modeling

S Li, R Pan, H Luo, X Liu, G Zhao - Knowledge-Based Systems, 2021 - Elsevier
Because of its efficiency, word embedding has been widely used in many natural language
processing and text modeling tasks. It aims to represent each word by a vector such that …