On the explainability of natural language processing deep models
Despite their success, deep networks are used as black-box models with outputs that are not
easily explainable during the learning and the prediction phases. This lack of interpretability …
An introduction to neural information retrieval
B Mitra, N Craswell - Foundations and Trends® in Information …, 2018 - nowpublishers.com
Neural ranking models for information retrieval (IR) use shallow or deep neural networks to
rank search results in response to a query. Traditional learning to rank models employ …
Word embedding for understanding natural language: a survey
Word embedding, where semantic and syntactic features are captured from unlabeled text
data, is a basic procedure in Natural Language Processing (NLP). The extracted features …
From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing
The availability of massive data and computing power allowing for effective data driven
neural approaches is having a major impact on machine learning and information retrieval …
Learning deep sparse regularizers with applications to multi-view clustering and semi-supervised classification
Sparsity-constrained optimization problems are common in machine learning, such as
sparse coding, low-rank minimization and compressive sensing. However, most of previous …
sparse coding, low-rank minimization and compressive sensing. However, most of previous …
Bag-of-concepts: Comprehending document representation through clustering words in distributed representation
Two document representation methods are mainly used in solving text mining problems.
Known for its intuitive and simple interpretability, the bag-of-words method represents a …
Neural models for information retrieval
B Mitra, N Craswell - arxiv preprint arxiv:1705.01509, 2017 - arxiv.org
Neural ranking models for information retrieval (IR) use shallow or deep neural networks to
rank search results in response to a query. Traditional learning to rank models employ …
Mixed dimension embeddings with application to memory-efficient recommendation systems
Embedding representations power machine intelligence in many applications, including
recommendation systems, but they are space-intensive, potentially occupying hundreds of …
Word2Sense: Sparse interpretable word embeddings
A Panigrahi, HV Simhadri… - Proceedings of the 57th …, 2019 - aclanthology.org
We present an unsupervised method to generate Word2Sense word embeddings that are
interpretable—each dimension of the embedding space corresponds to a fine-grained …
Adaptive cross-contextual word embedding for word polysemy with unsupervised topic modeling
Because of its efficiency, word embedding has been widely used in many natural language
processing and text modeling tasks. It aims to represent each word by a vector such that …