Conversational agents in therapeutic interventions for neurodevelopmental disorders: a survey

F Catania, M Spitale, F Garzotto - ACM Computing Surveys, 2023 - dl.acm.org
Neurodevelopmental Disorders (NDD) are a group of conditions with onset in the
developmental period characterized by deficits in the cognitive and social areas …

Hierarchical graph network for multi-hop question answering

Y Fang, S Sun, Z Gan, R Pillai, S Wang… - arXiv preprint arXiv …, 2019 - arxiv.org
In this paper, we present Hierarchical Graph Network (HGN) for multi-hop question
answering. To aggregate clues from scattered texts across multiple paragraphs, a …
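
As an illustrative aside, the core aggregation step in such graph networks can be sketched in a few lines of NumPy. The toy hierarchy (question, paragraph, and sentence nodes), the features, and the mean-aggregation rule below are generic assumptions for illustration, not HGN's actual architecture:

```python
import numpy as np

def aggregate(node_feats: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """One round of mean-neighbor message passing.

    node_feats: (num_nodes, dim) feature matrix.
    adj:        (num_nodes, num_nodes) binary adjacency (1 = edge).
    """
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)       # avoid divide-by-zero
    messages = adj @ node_feats / deg                      # mean over neighbors
    return node_feats + messages                           # residual update

# Toy hierarchy: node 0 = question, nodes 1-2 = paragraphs, nodes 3-5 = sentences.
feats = np.random.randn(6, 8)
adj = np.zeros((6, 6))
adj[0, 1] = adj[0, 2] = 1                  # question <-> paragraphs
adj[1, 3] = adj[1, 4] = adj[2, 5] = 1      # paragraphs <-> their sentences
adj = adj + adj.T                          # make edges bidirectional
updated = aggregate(feats, adj)            # clues propagate across the hierarchy
```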

Efficient visual tracking with exemplar transformers

P Blatter, M Kanakis, M Danelljan… - Proceedings of the …, 2023 - openaccess.thecvf.com
The design of more complex and powerful neural network models has significantly
advanced the state-of-the-art in visual object tracking. These advances can be attributed to …

Sparse self-attention transformer for image inpainting

W Huang, Y Deng, S Hui, Y Wu, S Zhou, J Wang - Pattern Recognition, 2024 - Elsevier
Learning-based image inpainting methods have made remarkable progress in recent years.
Nevertheless, these methods still suffer from issues such as blurring, artifacts, and …
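
For context, one generic way to sparsify self-attention is to keep only each query's top-k scores before the softmax. The sketch below assumes this simple top-k pattern purely for illustration; the paper's actual sparsity scheme may differ:

```python
import numpy as np

def topk_sparse_attention(Q, K, V, k=4):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # (n, n) scaled dot products
    # Mask out everything below each query's k-th largest score.
    kth = np.sort(scores, axis=-1)[:, -k][:, None]
    scores = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                             # each query mixes only k values

Q, K, V = (np.random.randn(16, 32) for _ in range(3))
out = topk_sparse_attention(Q, K, V, k=4)          # (16, 32)
```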

Poolingformer: Long document modeling with pooling attention

H Zhang, Y Gong, Y Shen, W Li, J Lv… - International …, 2021 - proceedings.mlr.press
In this paper, we introduce a two-level attention schema, Poolingformer, for long document
modeling. Its first level uses a smaller sliding window pattern to aggregate information from …
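
The two-level idea can be illustrated with a minimal NumPy sketch: level one attends within a sliding window, level two attends over mean-pooled key/value blocks of the full sequence. The window size, pooling factor, and additive combination below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def two_level_attention(Q, K, V, window=8, pool=4):
    n, d = Q.shape
    out = np.zeros_like(V)
    # Level 1: local sliding-window attention around each position.
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        w = softmax(Q[i] @ K[lo:hi].T / np.sqrt(d))
        out[i] = w @ V[lo:hi]
    # Level 2: attention over mean-pooled key/value blocks (coarse global view).
    Kp = K[: n - n % pool].reshape(-1, pool, d).mean(axis=1)
    Vp = V[: n - n % pool].reshape(-1, pool, d).mean(axis=1)
    out += softmax(Q @ Kp.T / np.sqrt(d)) @ Vp
    return out

Q, K, V = (np.random.randn(64, 16) for _ in range(3))
y = two_level_attention(Q, K, V)                   # (64, 16)
```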

Linrec: Linear attention mechanism for long-term sequential recommender systems

L Liu, L Cai, C Zhang, X Zhao, J Gao, W Wang… - Proceedings of the 46th …, 2023 - dl.acm.org
Transformer models have achieved remarkable success in sequential recommender
systems (SRSs). However, computing the attention matrix in traditional dot-product attention …
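
For background, the standard kernel trick behind linear attention reorders the matrix products so that no n x n attention matrix is ever formed. The elu+1 feature map below is a common generic choice for illustration, not necessarily LinRec's own mechanism:

```python
import numpy as np

def elu_plus_one(x):
    return np.where(x > 0, x + 1.0, np.exp(x))     # positive feature map phi

def linear_attention(Q, K, V):
    Qf, Kf = elu_plus_one(Q), elu_plus_one(K)
    kv = Kf.T @ V                                  # (d, d): O(n d^2), not O(n^2 d)
    z = Qf @ Kf.sum(axis=0)                        # (n,) per-query normalizer
    return (Qf @ kv) / z[:, None]

n, d = 1024, 32
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = linear_attention(Q, K, V)                    # (n, 32), linear in n
```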

Token pooling in vision transformers for image classification

D Marin, JHR Chang, A Ranjan… - Proceedings of the …, 2023 - openaccess.thecvf.com
Pooling is commonly used to improve the computation-accuracy trade-off of convolutional
networks. By aggregating neighboring feature values on the image grid, pooling layers …
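
A minimal sketch of the idea, assuming a simple non-overlapping 2x2 average merge on the token grid (the paper's pooling is more adaptive than this): halving the grid in each dimension quarters the token count, which directly cuts the cost of subsequent attention layers.

```python
import numpy as np

def pool_tokens(tokens: np.ndarray, grid: int) -> np.ndarray:
    """tokens: (grid*grid, dim) sequence laid out on a grid x grid image grid."""
    d = tokens.shape[-1]
    t = tokens.reshape(grid, grid, d)
    # Average each non-overlapping 2x2 neighborhood of tokens.
    t = t.reshape(grid // 2, 2, grid // 2, 2, d).mean(axis=(1, 3))
    return t.reshape(-1, d)                        # (grid/2 * grid/2, dim)

tokens = np.random.randn(14 * 14, 768)             # e.g. a 14x14 ViT token grid
pooled = pool_tokens(tokens, grid=14)              # (49, 768): 4x fewer tokens
```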

Efficient long sequence modeling via state space augmented transformer

S Zuo, X Liu, J Jiao, D Charles, E Manavoglu… - arXiv preprint arXiv …, 2022 - arxiv.org
Transformer models have achieved superior performance in various natural language
processing tasks. However, the quadratic computational cost of the attention mechanism …
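
For reference, a bare-bones diagonal state-space scan of the kind such hybrids build on processes a length-n sequence in O(n) time rather than the O(n^2) of full attention. The parameterization below and its combination with attention are illustrative assumptions, not the paper's model:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """x: (n, d_in); A: (d_state,) diagonal decay; B: (d_state, d_in); C: (d_out, d_state)."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:                          # h_t = A * h_{t-1} + B x_t ; y_t = C h_t
        h = A * h + B @ x_t
        ys.append(C @ h)
    return np.stack(ys)                    # (n, d_out)

n, d_in, d_state, d_out = 256, 16, 32, 16
x = np.random.randn(n, d_in)
A = np.exp(-np.random.rand(d_state))       # stable decays in (0, 1)
B = np.random.randn(d_state, d_in) * 0.1
C = np.random.randn(d_out, d_state) * 0.1
y = ssm_scan(x, A, B, C)                   # (256, 16)
```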

Understanding self-attention mechanism via dynamical system perspective

Z Huang, M Liang, J Qin, S Zhong… - Proceedings of the …, 2023 - openaccess.thecvf.com
The self-attention mechanism (SAM) is widely used in various fields of artificial intelligence
and has successfully boosted the performance of different models. However, current …
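
As a baseline for what SAM computes, here is the standard single-head scaled dot-product self-attention that such analyses take as their object of study (the projection matrices are random stand-ins):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # (n, n) pairwise interactions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                             # each token mixes all others

n, d = 32, 16
X = np.random.randn(n, d)
Wq, Wk, Wv = (np.random.randn(d, d) / np.sqrt(d) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                # (32, 16)
```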

Sparsity in transformers: A systematic literature review

M Farina, U Ahmad, A Taha, H Younes, Y Mesbah… - Neurocomputing, 2024 - Elsevier
Transformers have become the state-of-the-art architectures for various tasks in Natural
Language Processing (NLP) and Computer Vision (CV); however, their space and …