Pre-trained language models in biomedical domain: A systematic survey

B Wang, Q Xie, J Pei, Z Chen, P Tiwari, Z Li… - ACM Computing …, 2023 - dl.acm.org
Pre-trained language models (PLMs) have been the de facto paradigm for most natural
language processing tasks. This also benefits the biomedical domain: researchers from …

AMMU: a survey of transformer-based biomedical pretrained language models

KS Kalyan, A Rajasekharan, S Sangeetha - Journal of biomedical …, 2022 - Elsevier
Transformer-based pretrained language models (PLMs) have started a new era in modern
natural language processing (NLP). These models combine the power of transformers …

A survey of large language models for healthcare: from data, technology, and applications to accountability and ethics

K He, R Mao, Q Lin, Y Ruan, X Lan, M Feng… - Information …, 2025 - Elsevier
The utilization of large language models (LLMs) for Healthcare has generated both
excitement and concern due to their ability to effectively respond to free-text queries with …

Self-alignment pretraining for biomedical entity representations

F Liu, E Shareghi, Z Meng, M Basaldella… - arXiv preprint arXiv …, 2020 - arxiv.org
Despite the widespread success of self-supervised learning via masked language models
(MLM), accurately capturing fine-grained semantic relationships in the biomedical domain …

BioBART: Pretraining and evaluation of a biomedical generative language model

H Yuan, Z Yuan, R Gan, J Zhang, Y Xie… - arXiv preprint arXiv …, 2022 - arxiv.org
Pretrained language models have served as important backbones for natural language
processing. Recently, in-domain pretraining has been shown to benefit various domain …

A survey on clinical natural language processing in the United Kingdom from 2007 to 2022

H Wu, M Wang, J Wu, F Francis, YH Chang… - NPJ digital …, 2022 - nature.com
Much of the knowledge and information needed for enabling high-quality clinical research is
stored in free-text format. Natural language processing (NLP) has been used to extract …

A comprehensive evaluation of large language models on benchmark biomedical text processing tasks

I Jahan, MTR Laskar, C Peng, JX Huang - Computers in biology and …, 2024 - Elsevier
Recently, Large Language Models (LLMs) have demonstrated impressive
capability to solve a wide range of tasks. However, despite their success across various …

Fast, effective, and self-supervised: Transforming masked language models into universal lexical and sentence encoders

F Liu, I Vulić, A Korhonen, N Collier - arXiv preprint arXiv:2104.08027, 2021 - arxiv.org
Pretrained Masked Language Models (MLMs) have revolutionised NLP in recent years.
However, previous work has indicated that off-the-shelf MLMs are not effective as universal …

Does the magic of BERT apply to medical code assignment? A quantitative study

S Ji, M Hölttä, P Marttinen - Computers in biology and medicine, 2021 - Elsevier
Unsupervised pretraining is an integral part of many natural language processing systems,
and transfer learning with language models has achieved remarkable results in downstream …

A comprehensive survey on evaluating large language model applications in the medical industry

Y Huang, K Tang, M Chen, B Wang - arXiv preprint arXiv:2404.15777, 2024 - arxiv.org
Since the inception of the Transformer architecture in 2017, Large Language Models (LLMs)
such as GPT and BERT have evolved significantly, impacting various industries with their …