The current and future state of AI interpretation of medical images

P Rajpurkar, MP Lungren - New England Journal of Medicine, 2023 - Mass Medical Soc

The shaky foundations of large language models and foundation models for electronic health records

M Wornow, Y Xu, R Thapa, B Patel, E Steinberg… - npj Digital …, 2023 - nature.com
The success of foundation models such as ChatGPT and AlphaFold has spurred significant
interest in building similar models for electronic medical records (EMRs) to improve patient …

A survey of large language models for healthcare: from data, technology, and applications to accountability and ethics

K He, R Mao, Q Lin, Y Ruan, X Lan, M Feng… - Information …, 2025 - Elsevier
The utilization of large language models (LLMs) for Healthcare has generated both
excitement and concern due to their ability to effectively respond to free-text queries with …

Large-scale multi-modal pre-trained models: A comprehensive survey

X Wang, G Chen, G Qian, P Gao, XY Wei… - Machine Intelligence …, 2023 - Springer
With the urgent demand for generalized deep models, many pre-trained big models are
proposed, such as bidirectional encoder representations (BERT), vision transformer (ViT) …

Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment

Y Liu, Y Yao, JF Ton, X Zhang, RGH Cheng… - arXiv preprint arXiv …, 2023 - arxiv.org

… the generative artificial intelligence (AI) research landscape

TR McIntosh, T Susnjak, T Liu, P Watters… - arXiv preprint arXiv …, 2023 - arxiv.org
This comprehensive survey explored the evolving landscape of generative Artificial
Intelligence (AI), with a specific focus on the transformative impacts of Mixture of Experts …

Chinese CLIP: Contrastive vision-language pretraining in Chinese

A Yang, J Pan, J Lin, R Men, Y Zhang, J Zhou… - arXiv preprint arXiv …, 2022 - arxiv.org
The tremendous success of CLIP (Radford et al., 2021) has promoted the research and
application of contrastive learning for vision-language pretraining. In this work, we construct …

After the question-answering ChatGPT: Opportunities and challenges of super-large pre-trained models

卢经纬, 郭超, 戴星原, 缪青海, 王兴霞, 杨静, 王飞跃 - Acta Automatica Sinica (自动化学报), 2023 - aas.net.cn
Super-large pre-trained models (PTMs) have emerged as a rapidly rising research direction in artificial intelligence in recent years, achieving historically unprecedented
results in natural language processing (NLP), computer vision, and many other tasks …