AMMUS: A survey of transformer-based pretrained models in natural language processing

KS Kalyan, A Rajasekharan, S Sangeetha - arXiv preprint arXiv …, 2021 - arxiv.org
Transformer-based pretrained language models (T-PTLMs) have achieved great success in
almost every NLP task. The evolution of these models started with GPT and BERT. These …

Pre-trained language models in biomedical domain: A systematic survey

B Wang, Q **e, J Pei, Z Chen, P Tiwari, Z Li… - ACM Computing …, 2023 - dl.acm.org
Pre-trained language models (PLMs) have been the de facto paradigm for most natural
language processing tasks. This also benefits the biomedical domain: researchers from …

Neural natural language processing for unstructured data in electronic health records: a review

I Li, J Pan, J Goldwasser, N Verma, WP Wong… - Computer Science …, 2022 - Elsevier
Electronic health records (EHRs), digital collections of patient healthcare events and
observations, are ubiquitous in medicine and critical to healthcare delivery, operations, and …

AMMU: A survey of transformer-based biomedical pretrained language models

KS Kalyan, A Rajasekharan, S Sangeetha - Journal of Biomedical …, 2022 - Elsevier
Transformer-based pretrained language models (PLMs) have started a new era in modern
natural language processing (NLP). These models combine the power of transformers …

Biomedical question answering: a survey of approaches and challenges

Q Jin, Z Yuan, G Xiong, Q Yu, H Ying, C Tan… - ACM Computing …, 2022 - dl.acm.org
Automatic Question Answering (QA) has been successfully applied in various domains such
as search engines and chatbots. Biomedical QA (BQA), as an emerging QA task, enables …

Uncertainty quantification with pre-trained language models: A large-scale empirical analysis

Y Xiao, PP Liang, U Bhatt, W Neiswanger… - arXiv preprint arXiv …, 2022 - arxiv.org
Pre-trained language models (PLMs) have gained increasing popularity due to their
compelling prediction performance in diverse natural language processing (NLP) tasks …

Artificial intelligence foundation and pre-trained models: Fundamentals, applications, opportunities, and social impacts

A Kolides, A Nawaz, A Rathor, D Beeman… - … Modelling Practice and …, 2023 - Elsevier
With the emergence of foundation models (FMs) that are trained on large amounts of data at
scale and adaptable to a wide range of downstream applications, AI is experiencing a …

Paper Plain: Making Medical Research Papers Approachable to Healthcare Consumers with Natural Language Processing

T August, LL Wang, J Bragg, MA Hearst… - ACM Transactions on …, 2023 - dl.acm.org
When seeking information not covered in patient-friendly documents, healthcare consumers
may turn to the research literature. Reading medical papers, however, can be a challenging …

Knowledgeable preference alignment for LLMs in domain-specific question answering

Y Zhang, Z Chen, Y Fang, Y Lu, F Li, W Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Deploying large language models (LLMs) to real scenarios for domain-specific question
answering (QA) is a key thrust for LLM applications, which poses numerous challenges …

Transformers in healthcare: A survey

S Nerella, S Bandyopadhyay, J Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
With Artificial Intelligence (AI) increasingly permeating various aspects of society, including
healthcare, the adoption of the Transformers neural network architecture is rapidly changing …