Transformer-patcher: One mistake worth one neuron

Z Huang, Y Shen, X Zhang, J Zhou, W Rong… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Transformer-based Pretrained Language Models (PLMs) dominate almost all Natural
Language Processing (NLP) tasks. Nevertheless, they still make mistakes from time to time …

What all do audio transformer models hear? probing acoustic representations for language delivery and its structure

J Shah, YK Singla, C Chen, RR Shah - arXiv preprint arXiv:2101.00387, 2021 - arxiv.org
In recent times, BERT based transformer models have become an inseparable part of
the 'tech stack' of text processing models. Similar progress is being observed in the speech …

Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?

C Han, Q Wang, Y Cui, W Wang, L Huang, S Qi… - arXiv preprint arXiv …, 2024 - arxiv.org
As the scale of vision models continues to grow, the emergence of Visual Prompt Tuning
(VPT) as a parameter-efficient transfer learning technique has gained attention due to its …

TriviaHG: A dataset for automatic hint generation from factoid questions

J Mozafari, A Jangra, A Jatowt - … of the 47th International ACM SIGIR …, 2024 - dl.acm.org
Nowadays, individuals tend to engage in dialogues with Large Language Models, seeking
answers to their questions. In times when such answers are readily accessible to anyone …

What do audio transformers hear? probing their representations for language delivery & structure

YK Singla, J Shah, C Chen… - 2022 IEEE International …, 2022 - ieeexplore.ieee.org
Transformer models across multiple domains such as natural language processing and
speech form an unavoidable part of the tech stack of practitioners and researchers alike. Au …

[HTML] A method for extracting tumor events from clinical CT examination reports

Q Pan, F Zhao, X Chen, D Chen - Journal of Biomedical Informatics, 2023 - Elsevier
Accurate and efficient extraction of key information related to diseases from medical
examination reports, such as X-ray and ultrasound images, CT scans, and others, is crucial …

Emotion AWARE: an artificial intelligence framework for adaptable, robust, explainable, and multi-granular emotion analysis

G Gamage, D De Silva, N Mills, D Alahakoon… - Journal of Big Data, 2024 - Springer
Emotions are fundamental to human behaviour. How we feel, individually and collectively,
determines how humanity evolves and advances into our shared future. The rapid …

Visual explanation for open-domain question answering with bert

Z Shao, S Sun, Y Zhao, S Wang, Z Wei… - … on Visualization and …, 2023 - ieeexplore.ieee.org
Open-domain question answering (OpenQA) is an essential but challenging task in natural
language processing that aims to answer questions in natural language formats on the basis …

NLRG at SemEval-2021 task 5: toxic spans detection leveraging BERT-based token classification and span prediction techniques

G Chhablani, A Sharma, H Pandey, Y Bhartia… - arXiv preprint arXiv …, 2021 - arxiv.org
Toxicity detection of text has been a popular NLP task in recent years. In SemEval-2021
Task-5 Toxic Spans Detection, the focus is on detecting toxic spans within passages. Most …

IS FND: a novel interpretable self-ensembled semi-supervised model based on transformers for fake news detection

S RBV - Journal of Intelligent Information Systems, 2024 - Springer
One of the serious consequences of social media usage is fake information dissemination
that drives society towards negativity. Existing solutions focus on supervised fake news …