Neural approaches to conversational AI

J Gao, M Galley, L Li - The 41st international ACM SIGIR conference on …, 2018 - dl.acm.org
This tutorial surveys neural approaches to conversational AI that were developed in the last
few years. We group conversational systems into three categories: (1) question answering …

Deep learning based recommender system: A survey and new perspectives

S Zhang, L Yao, A Sun, Y Tay - ACM computing surveys (CSUR), 2019 - dl.acm.org
With the growing volume of online information, recommender systems have been an
effective strategy to overcome information overload. The utility of recommender systems …

Graph neural networks for natural language processing: A survey

L Wu, Y Chen, K Shen, X Guo, H Gao… - … and Trends® in …, 2023 - nowpublishers.com
Deep learning has become the dominant approach in addressing various tasks in Natural
Language Processing (NLP). Although text inputs are typically represented as a sequence …

A unified MRC framework for named entity recognition

X Li, J Feng, Y Meng, Q Han, F Wu, J Li - arXiv preprint arXiv:1910.11476, 2019 - arxiv.org
The task of named entity recognition (NER) is normally divided into nested NER and flat
NER depending on whether named entities are nested or not. Models are usually separately …

Attention is not explanation

S Jain, BC Wallace - arXiv preprint arXiv:1902.10186, 2019 - arxiv.org
Attention mechanisms have seen wide adoption in neural NLP models. In addition to
improving predictive performance, these are often touted as affording transparency: models …

Towards VQA models that can read

A Singh, V Natarajan, M Shah… - Proceedings of the …, 2019 - openaccess.thecvf.com
Studies have shown that a dominant class of questions asked by visually impaired users on
images of their surroundings involves reading text in the image. But today's VQA models can …

Attention, please! A survey of neural attention models in deep learning

A de Santana Correia, EL Colombini - Artificial Intelligence Review, 2022 - Springer
In humans, Attention is a core property of all perceptual and cognitive operations. Given our
limited ability to process competing sources, attention mechanisms select, modulate, and …

See more, know more: Unsupervised video object segmentation with co-attention siamese networks

X Lu, W Wang, C Ma, J Shen… - Proceedings of the …, 2019 - openaccess.thecvf.com
We introduce a novel network, called the CO-attention Siamese Network (COSNet), to
address the unsupervised video object segmentation task from a holistic view. We …

Mining cross-image semantics for weakly supervised semantic segmentation

G Sun, W Wang, J Dai, L Van Gool - … , Glasgow, UK, August 23–28, 2020 …, 2020 - Springer
This paper studies the problem of learning semantic segmentation from image-level
supervision only. Current popular solutions leverage object localization maps from …

QANet: Combining local convolution with global self-attention for reading comprehension

AW Yu, D Dohan, MT Luong, R Zhao, K Chen… - arXiv preprint arXiv …, 2018 - arxiv.org
Current end-to-end machine reading and question answering (Q&A) models are primarily
based on recurrent neural networks (RNNs) with attention. Despite their success, these …