ChatGPT for shaping the future of dentistry: the potential of multi-modal large language model

H Huang, O Zheng, D Wang, J Yin, Z Wang… - International Journal of …, 2023 - nature.com
ChatGPT, a lite and conversational variant of Generative Pretrained Transformer 4 (GPT-4) developed by OpenAI, is one of the milestone Large Language Models (LLMs) with …

Pre-trained language models in biomedical domain: A systematic survey

B Wang, Q Xie, J Pei, Z Chen, P Tiwari, Z Li… - ACM Computing …, 2023 - dl.acm.org
Pre-trained language models (PLMs) have been the de facto paradigm for most natural
language processing tasks. This also benefits the biomedical domain: researchers from …

The flan collection: Designing data and methods for effective instruction tuning

S Longpre, L Hou, T Vu, A Webson… - International …, 2023 - proceedings.mlr.press
We study the design decisions of publicly available instruction tuning methods by
reproducing and breaking down the development of Flan 2022 (Chung et al., 2022) …
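
For concreteness, instruction tuning trains on records that pair an instruction (and optional input) with a target response. Below is a minimal sketch of such a record flattened into a training string; the field names and layout are illustrative assumptions, not the actual Flan 2022 schema.

# Illustrative instruction-tuning record; field names are assumptions,
# not the Flan 2022 schema.
example = {
    "instruction": "Summarize the following clinical note in one sentence.",
    "input": "Patient presents with a three-day history of fever and productive cough.",
    "target": "A patient with three days of fever and productive cough.",
}

def to_training_string(record):
    # Flatten the record into the single text sequence a decoder-only
    # model would be fine-tuned on.
    return f"{record['instruction']}\n\n{record['input']}\n\n{record['target']}"

print(to_training_string(example))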

A generalist vision–language foundation model for diverse biomedical tasks

K Zhang, R Zhou, E Adhikarla, Z Yan, Y Liu, J Yu… - Nature Medicine, 2024 - nature.com
Traditional biomedical artificial intelligence (AI) models, designed for specific tasks or
modalities, often exhibit limited flexibility in real-world deployment and struggle to utilize …

Matching patients to clinical trials with large language models

Q Jin, Z Wang, CS Floudas, F Chen, C Gong… - Nature …, 2024 - nature.com
Patient recruitment is challenging for clinical trials. We introduce TrialGPT, an end-to-end
framework for zero-shot patient-to-trial matching with large language models. TrialGPT …
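
As a rough illustration of zero-shot patient-to-trial matching with a prompted LLM (a sketch only, not TrialGPT's actual pipeline; the generate parameter stands in for any hypothetical text-generation backend):

def match_patient_to_trial(generate, patient_note, trial_criteria):
    # Build a zero-shot prompt asking the model for an eligibility call.
    prompt = (
        "Decide whether the patient is eligible for the clinical trial.\n\n"
        f"Patient note:\n{patient_note}\n\n"
        f"Eligibility criteria:\n{trial_criteria}\n\n"
        "Answer ELIGIBLE, INELIGIBLE, or UNCERTAIN, then explain briefly."
    )
    return generate(prompt)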

Prompt engineering for healthcare: Methodologies and applications

J Wang, E Shi, S Yu, Z Wu, C Ma, H Dai, Q Yang… - arXiv preprint arXiv …, 2023 - arxiv.org
Prompt engineering is a critical technique in the field of natural language processing that
involves designing and optimizing the prompts used to input information into models, aiming …
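
A hedged sketch of what a designed prompt might look like for a healthcare task; the template wording is an assumption for illustration, not a method taken from the survey.

# Hypothetical prompt template for a clinical question-answering task.
TEMPLATE = (
    "You are a cautious clinical assistant.\n"
    "Question: {question}\n"
    "Answer concisely and state any uncertainty explicitly."
)

def build_prompt(question):
    return TEMPLATE.format(question=question)

print(build_prompt("What are common contraindications for ibuprofen?"))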

Publicly available clinical BERT embeddings

E Alsentzer, JR Murphy, W Boag, WH Weng… - arXiv preprint arXiv …, 2019 - arxiv.org
Contextual word embedding models such as ELMo (Peters et al., 2018) and BERT (Devlin et
al., 2018) have dramatically improved performance for many natural language processing …
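
The released clinical BERT weights can be loaded with Hugging Face transformers; a minimal sketch, assuming the published model ID emilyalsentzer/Bio_ClinicalBERT:

from transformers import AutoModel, AutoTokenizer

# Load the publicly released clinical BERT checkpoint.
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

inputs = tokenizer("Patient admitted with acute chest pain.", return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # contextual embeddings, one vector per token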

Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets

Y Peng, S Yan, Z Lu - arXiv preprint arXiv:1906.05474, 2019 - arxiv.org
Inspired by the success of the General Language Understanding Evaluation benchmark, we
introduce the Biomedical Language Understanding Evaluation (BLUE) benchmark to …

BiomedGPT: A unified and generalist biomedical generative pre-trained transformer for vision, language, and multimodal tasks

K Zhang, J Yu, E Adhikarla, R Zhou, Z Yan… - arXiv e-prints, 2023 - ui.adsabs.harvard.edu
Conventional task- and modality-specific artificial intelligence (AI) models are inflexible in
real-world deployment and maintenance for biomedicine. At the same time, the growing …

[PDF] Revealing the Dark Secrets of BERT

O Kovaleva - arXiv preprint arXiv:1908.08593, 2019 - fq.pkwyx.com
BERT-based architectures currently give state-of-the-art performance on many NLP tasks,
but little is known about the exact mechanisms that contribute to their success. In the current …
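
Analyses of this kind typically inspect per-head attention maps; a generic sketch using the output_attentions flag in Hugging Face transformers (an illustration of the general technique, not the paper's exact methodology):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The patient was discharged home.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer.
first_layer_first_head = outputs.attentions[0][0, 0]
print(first_layer_first_head.shape)  # (seq_len, seq_len)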