Transformers in medical imaging: A survey

F Shamshad, S Khan, SW Zamir, MH Khan… - Medical image …, 2023 - Elsevier
Following unprecedented success on natural language tasks, Transformers have been
successfully applied to several computer vision problems, achieving state-of-the-art results …

Transformers in vision: A survey

S Khan, M Naseer, M Hayat, SW Zamir… - ACM computing …, 2022 - dl.acm.org
Astounding results from Transformer models on natural language tasks have intrigued the
vision community to study their application to computer vision problems. Among their salient …

A survey on efficient inference for large language models

Z Zhou, X Ning, K Hong, T Fu, J Xu, S Li, Y Lou… - arXiv preprint arXiv …, 2024 - arxiv.org

FlightLLM: Efficient large language model inference with a complete mapping flow on FPGAs
S Zeng, J Liu, G Dai, X Yang, T Fu, H Wang… - Proceedings of the …, 2024 - dl.acm.org
Transformer-based Large Language Models (LLMs) have made a significant impact on
various domains. However, LLMs' efficiency suffers from both heavy computation and …
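
The "heavy computation" this entry points to is dominated by the attention operator, whose cost grows quadratically with sequence length. A minimal NumPy sketch of scaled dot-product attention makes the quadratic term explicit (toy sizes; all names below are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention.

    Q, K, V: (seq_len, d) arrays. The score matrix Q @ K.T is
    (seq_len, seq_len), so time and memory grow as O(seq_len^2 * d).
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # (n, n) -- the quadratic term
    return softmax(scores) @ V

rng = np.random.default_rng(0)
n, d = 1024, 64
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(attention(Q, K, V).shape)  # (1024, 64)
```

Materializing that (n, n) score matrix at full precision is what accelerators in this vein attack with sparsity and reduced-precision arithmetic.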

Sanger: A co-design framework for enabling sparse attention using reconfigurable architecture

L Lu, Y Jin, H Bi, Z Luo, P Li, T Wang… - MICRO-54: 54th Annual …, 2021 - dl.acm.org
In recent years, attention-based models have achieved impressive performance in natural
language processing and computer vision applications by effectively capturing contextual …
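
Sanger's core idea, as the title suggests, is to predict where attention scores matter with a cheap low-precision pass, then compute exact attention only at those positions. Below is a hedged sketch of that two-stage pattern; the quantizer, the threshold rule, and all function names are simplifying assumptions, since the paper's actual design is a software/hardware co-design on a reconfigurable array:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def quantize(x, bits=4):
    # Crude uniform quantizer standing in for the low-precision
    # prediction pass (illustrative only).
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

def sparse_attention(Q, K, V, keep=0.05):
    d = Q.shape[-1]
    # Stage 1: cheap approximate scores -> binary sparsity mask.
    approx = softmax(quantize(Q) @ quantize(K).T / np.sqrt(d))
    mask = approx >= keep * approx.max(axis=-1, keepdims=True)
    # Stage 2: exact attention, evaluated only where the mask is set
    # (emulated densely here; hardware would skip masked positions).
    scores = np.where(mask, Q @ K.T / np.sqrt(d), -np.inf)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
n, d = 256, 64
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(sparse_attention(Q, K, V).shape)  # (256, 64)
```

Thresholding relative to each row's maximum guarantees every query keeps at least one key, so no softmax row is entirely masked out.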

ViA: A novel vision-transformer accelerator based on FPGA

T Wang, L Gong, C Wang, Y Yang… - … on Computer-Aided …, 2022 - ieeexplore.ieee.org
Since Google proposed the Transformer in 2017, it has driven significant progress in natural
language processing (NLP). However, this progress comes at the cost of a large amount of …

HeatViT: Hardware-efficient adaptive token pruning for vision transformers

P Dong, M Sun, A Lu, Y Xie, K Liu… - … Symposium on High …, 2023 - ieeexplore.ieee.org
While vision transformers (ViTs) have continuously achieved new milestones in the field of
computer vision, their sophisticated network architectures with high computation and …
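
The token-pruning idea behind HeatViT can be sketched compactly: score image tokens by how much the class token attends to them, keep the top fraction, and package the rest into a single aggregate token so their information is not discarded outright. The scoring rule and keep ratio below are simplifying assumptions (HeatViT uses a learned, hardware-friendly token selector), so this is a sketch of the general technique rather than the paper's method:

```python
import numpy as np

def prune_tokens(tokens, cls_attn, keep_ratio=0.5):
    """tokens: (n, d) patch embeddings (class token excluded).
    cls_attn: (n,) attention weights from the class token to each patch.
    Keeps the top-k tokens and packs the rest into one aggregate token."""
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    order = np.argsort(cls_attn)[::-1]          # most-attended first
    kept, pruned = order[:k], order[k:]
    # Attention-weighted average of pruned tokens -> one "package" token.
    w = cls_attn[pruned]
    package = (w[:, None] * tokens[pruned]).sum(axis=0) / (w.sum() + 1e-6)
    return np.vstack([tokens[kept], package[None, :]])

rng = np.random.default_rng(0)
tokens = rng.standard_normal((196, 384))        # 14x14 patches, ViT-S width
scores = rng.standard_normal(196)
cls_attn = np.exp(scores - scores.max())
cls_attn /= cls_attn.sum()
print(prune_tokens(tokens, cls_attn).shape)     # (99, 384): 98 kept + 1 package
```

Keeping a package token rather than dropping pruned patches is what makes the pruning "adaptive" in spirit: downstream layers still see a coarse summary of the discarded regions.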