Large language models in medical and healthcare fields: applications, advances, and challenges

D Wang, S Zhang - Artificial Intelligence Review, 2024 - Springer
Large language models (LLMs) are increasingly recognized for their advanced language
capabilities, offering significant assistance in diverse areas like medical communication …

SemEval-2023 task 7: Multi-evidence natural language inference for clinical trial data

M Jullien, M Valentino, H Frost, P O'Regan… - arXiv preprint arXiv …, 2023 - arxiv.org
This paper describes the results of SemEval 2023 task 7--Multi-Evidence Natural Language
Inference for Clinical Trial Data (NLI4CT)--consisting of 2 tasks, a Natural Language …

Several categories of large language models (LLMs): A short survey

S Pahune, M Chandrasekharan - arXiv preprint arXiv:2307.10188, 2023 - arxiv.org
Large Language Models (LLMs) have become effective tools for natural language
processing and have been used in many different fields. This essay offers a succinct …

To the cutoff... and beyond? A longitudinal perspective on LLM data contamination

M Roberts, H Thakur, C Herlihy, C White… - The Twelfth …, 2023 - openreview.net
Recent claims about the impressive abilities of large language models (LLMs) are often
supported by evaluating publicly available benchmarks. Since LLMs train on wide swaths of …

NLI4CT: Multi-evidence natural language inference for clinical trial reports

M Jullien, M Valentino, H Frost, P O'Regan… - arXiv preprint arXiv …, 2023 - arxiv.org
How can we interpret and retrieve medical evidence to support clinical decisions? Clinical
trial reports (CTR) amassed over the years contain indispensable information for the …

Natural language inference model for customer advocacy detection in online customer engagement

B Abu-Salih, M Alweshah, M Alazab, M Al-Okaily… - Machine Learning, 2024 - Springer
Online customer advocacy has developed as a distinctive strategic way to improve
organisational performance by fostering favourable reciprocal affinitive customer behaviours …

How often are errors in natural language reasoning due to paraphrastic variability?

N Srikanth, M Carpuat, R Rudinger - Transactions of the Association …, 2024 - direct.mit.edu
Large language models have been shown to behave inconsistently in response to meaning-
preserving paraphrastic inputs. At the same time, researchers evaluate the knowledge and …

Partial-input baselines show that NLI models can ignore context, but they don't

N Srikanth, R Rudinger - arXiv preprint arXiv:2205.12181, 2022 - arxiv.org
When strong partial-input baselines reveal artifacts in crowdsourced NLI datasets, the
performance of full-input models trained on such datasets is often dismissed as reliance on …

Understanding and mitigating spurious correlations in text classification with neighborhood analysis

O Chew, HT Lin, KW Chang, KH Huang - arXiv preprint arXiv:2305.13654, 2023 - arxiv.org
Recent research has revealed that machine learning models have a tendency to leverage
spurious correlations that exist in the training set but may not hold true in general …

Exploring named entity recognition and relation extraction for ontology and medical records integration

DP da Silva, W da Rosa Fröhlich, BH de Mello… - Informatics in medicine …, 2023 - Elsevier
The available natural language data in electronic health records is of noteworthy interest to
health research and development. Nevertheless, their manual analysis is not feasible and …