Transformers in medical imaging: A survey

F Shamshad, S Khan, SW Zamir, MH Khan… - Medical Image …, 2023 - Elsevier
Following unprecedented success on natural language tasks, Transformers have been
successfully applied to several computer vision problems, achieving state-of-the-art results …

Transformers in vision: A survey

S Khan, M Naseer, M Hayat, SW Zamir… - ACM Computing …, 2022 - dl.acm.org
Astounding results from Transformer models on natural language tasks have intrigued the
vision community to study their application to computer vision problems. Among their salient …

Full stack optimization of transformer inference: a survey

S Kim, C Hooper, T Wattanawong, M Kang… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent advances in state-of-the-art DNN architecture design have been moving toward
Transformer models. These models achieve superior accuracy across a wide range of …

M³ViT: Mixture-of-experts vision transformer for efficient multi-task learning with model-accelerator co-design

Z Fan, R Sarkar, Z Jiang, T Chen… - Advances in …, 2022 - proceedings.neurips.cc
Multi-task learning (MTL) encapsulates multiple learned tasks in a single model and often
lets those tasks learn better jointly. Multi-tasking models have become successful and often …
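
Since the entry's title names the mechanism, a minimal sketch of top-k expert routing, the building block that mixture-of-experts vision transformers such as M³ViT build on, may be useful. The layer sizes, expert count, and k below are illustrative assumptions, not values from the paper.

```python
# Minimal top-k mixture-of-experts (MoE) layer; all sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim=256, hidden=512, num_experts=4, k=1):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts)          # learned router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                                # x: (num_tokens, dim)
        scores = F.softmax(self.gate(x), dim=-1)         # (num_tokens, num_experts)
        weight, idx = scores.topk(self.k, dim=-1)        # each token picks its k best experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):        # real kernels dispatch sparsely instead
            hit = (idx == e).any(dim=-1)                 # tokens routed to expert e
            if hit.any():
                w = weight[hit][idx[hit] == e].unsqueeze(-1)
                out[hit] = out[hit] + w * expert(x[hit])
        return out

tokens = torch.randn(8, 256)                             # e.g. 8 patch tokens
print(TopKMoE()(tokens).shape)                           # torch.Size([8, 256])
```

With k=1 only one expert FFN runs per token, which is the kind of conditional sparsity a model-accelerator co-design can exploit.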

A survey of techniques for optimizing transformer inference

KT Chitty-Venkata, S Mittal, M Emani… - Journal of Systems …, 2023 - Elsevier
Recent years have seen a phenomenal rise in the performance and applications of
transformer neural networks. The family of transformer networks, including Bidirectional …

Sanger: A co-design framework for enabling sparse attention using reconfigurable architecture

L Lu, Y Jin, H Bi, Z Luo, P Li, T Wang… - MICRO-54: 54th Annual …, 2021 - dl.acm.org
In recent years, attention-based models have achieved impressive performance in natural
language processing and computer vision applications by effectively capturing contextual …
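
For context, the sketch below shows mask-based sparse attention, the computation pattern a sparse-attention accelerator like Sanger targets: attention scores are estimated, thresholded into a sparsity mask, and only surviving entries participate in the softmax. The prediction step and the threshold here are illustrative assumptions, not the paper's pipeline.

```python
# Mask-based sparse attention sketch; shapes and threshold are illustrative.
import torch

def sparse_attention(q, k, v, threshold=0.02):
    d = q.shape[-1]
    scores = (q @ k.transpose(-2, -1)) / d**0.5          # (seq, seq) score matrix
    probs = torch.softmax(scores, dim=-1)                # stand-in for a cheap prediction pass
    mask = probs >= threshold                            # keep only strong query-key links
    # threshold < 1/seq_len guarantees every row keeps at least its maximum entry
    masked = torch.where(mask, scores, torch.full_like(scores, float("-inf")))
    return torch.softmax(masked, dim=-1) @ v             # attend over surviving entries only

q, k, v = (torch.randn(16, 64) for _ in range(3))
print(sparse_attention(q, k, v).shape)                   # torch.Size([16, 64])
```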

Accelerating transformer-based deep learning models on FPGAs using column balanced block pruning

H Peng, S Huang, T Geng, A Li, W Jiang… - … on Quality Electronic …, 2021 - ieeexplore.ieee.org
Although Transformer-based language representations achieve state-of-the-art accuracy on
various natural language processing (NLP) tasks, the large model size has been …
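
As I understand the title's technique, column balanced block pruning splits a weight matrix into row blocks and keeps the same number of largest-magnitude weights in every column of each block, so parallel hardware lanes see a balanced nonzero count. A minimal sketch, with block size and keep count as illustrative assumptions:

```python
# Column-balanced block pruning sketch; block_rows and keep are illustrative.
import numpy as np

def column_balanced_block_prune(w, block_rows=4, keep=2):
    w = w.copy()
    for r in range(0, w.shape[0], block_rows):
        block = w[r:r + block_rows]                      # view into w: (block_rows, cols)
        order = np.argsort(-np.abs(block), axis=0)       # per-column ranking by magnitude
        drop = order[keep:]                              # row indices to zero, per column
        block[drop, np.arange(block.shape[1])] = 0.0     # writes through the view into w
    return w

w = np.random.randn(8, 6)
pruned = column_balanced_block_prune(w)
print((pruned != 0).sum(axis=0))                         # 2 kept per block x 2 blocks: [4 4 4 4 4 4]
```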

An algorithm–hardware co-optimized framework for accelerating N:M sparse transformers

C Fang, A Zhou, Z Wang - IEEE Transactions on Very Large …, 2022 - ieeexplore.ieee.org
The Transformer has been an indispensable staple in deep learning. However, for real-life
applications, it is very challenging to deploy efficient Transformers due to the immense …
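
The N:M pattern in the title is concrete enough to illustrate: out of every M consecutive weights, only the N largest in magnitude survive (2:4 is the variant NVIDIA's sparse tensor cores support). The sketch below shows the pruning pattern only; the paper's training algorithm and hardware design are not reproduced.

```python
# N:M fine-grained pruning sketch (2:4 by default); pattern only.
import numpy as np

def nm_prune(w, n=2, m=4):
    flat = w.reshape(-1, m)                              # groups of m consecutive weights
    order = np.argsort(-np.abs(flat), axis=1)            # rank each group by magnitude
    mask = np.zeros_like(flat, dtype=bool)
    np.put_along_axis(mask, order[:, :n], True, axis=1)  # keep the n largest per group
    return (flat * mask).reshape(w.shape)

w = np.random.randn(4, 8)                                # row length divisible by m
sparse = nm_prune(w)
print((sparse.reshape(-1, 4) != 0).sum(axis=1))          # every group of 4 keeps exactly 2
```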

A survey on hardware accelerators for large language models

C Kachris - Applied Sciences, 2025 - mdpi.com
Large language models (LLMs) have emerged as powerful tools for natural language
processing tasks, revolutionizing the field with their ability to understand and generate …

Auto-ViT-Acc: An FPGA-aware automatic acceleration framework for vision transformer with mixed-scheme quantization

Z Li, M Sun, A Lu, H Ma, G Yuan, Y Xie… - … Conference on Field …, 2022 - ieeexplore.ieee.org
Vision transformers (ViTs) are emerging with significantly improved accuracy in computer
vision tasks. However, their complex architecture and enormous computation/storage …
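
The "mixed-scheme" in the title refers, as I understand it, to combining uniform fixed-point quantization with power-of-two quantization, whose multiplies reduce to bit shifts on FPGA LUT fabric. Below is a minimal sketch of the two level sets, with the 4-bit width as an illustrative assumption and the paper's scheme-assignment search omitted.

```python
# Fixed-point vs. power-of-two weight quantization sketch; 4-bit is illustrative.
import numpy as np

def quant_fixed(w, bits=4):
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)      # uniform step size
    return np.round(w / scale) * scale                   # evenly spaced levels

def quant_pow2(w, bits=4):
    s = np.abs(w).max()
    mag = np.clip(np.abs(w) / s, 1e-12, None)            # normalize magnitudes to (0, 1]
    exp = np.clip(np.round(np.log2(mag)), -(2 ** (bits - 1) - 1), 0)
    return np.sign(w) * s * 2.0 ** exp                   # power-of-two levels: shift-only math

w = np.random.randn(4, 4) * 0.5
print(quant_fixed(w))
print(quant_pow2(w))
```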