Large language models for data annotation: A survey

Z Tan, A Beigi, S Wang, R Guo, A Bhattacharjee… - arXiv preprint arXiv:…, 2024 - arxiv.org
Data annotation is the labeling or tagging of raw data with relevant information, essential for
improving the efficacy of machine learning models. The process, however, is labor-intensive …

From generation to judgment: Opportunities and challenges of llm-as-a-judge

D Li, B Jiang, L Huang, A Beigi, C Zhao, Z Tan… - arXiv preprint arXiv:…, 2024 - arxiv.org
Assessment and evaluation have long been critical challenges in artificial intelligence (AI)
and natural language processing (NLP). However, traditional methods, whether matching …

Large language models as annotators: Enhancing generalization of nlp models at minimal cost

P Bansal, A Sharma - arXiv preprint arXiv:2306.15766, 2023 - arxiv.org
State-of-the-art supervised NLP models achieve high accuracy but are also susceptible to
failures on inputs from low-data regimes, such as domains that are not represented in …

A survey on stability of learning with limited labelled data and its sensitivity to the effects of randomness

B Pecher, I Srba, M Bielikova - ACM Computing Surveys, 2024 - dl.acm.org
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-
learning, or few-shot learning, aims to effectively train a model using only a small amount of …

Cost-effective in-context learning for entity resolution: A design space exploration

M Fan, X Han, J Fan, C Chai, N Tang… - 2024 IEEE 40th …, 2024 - ieeexplore.ieee.org
Entity resolution (ER) is an important data integration task with a wide spectrum of
applications. The state-of-the-art solutions on ER rely on pre-trained language models …

Cue-CoT: Chain-of-thought prompting for responding to in-depth dialogue questions with LLMs

H Wang, R Wang, F Mi, Y Deng, Z Wang… - arXiv preprint arXiv:…, 2023 - arxiv.org
Large Language Models (LLMs), such as ChatGPT, greatly empower dialogue
systems with strong language understanding and generation capabilities. However, most of …

Causal prompting: Debiasing large language model prompting based on front-door adjustment

C Zhang, L Zhang, J Wu, Y He, D Zhou - arXiv preprint arXiv:2403.02738, 2024 - arxiv.org
Despite the notable advancements of existing prompting methods, such as In-Context
Learning and Chain-of-Thought for Large Language Models (LLMs), they still face …

Advancing entity recognition in biomedicine via instruction tuning of large language models

VK Keloth, Y Hu, Q Xie, X Peng, Y Wang… - …, 2024 - academic.oup.com
Abstract Motivation Large Language Models (LLMs) have the potential to revolutionize the
field of Natural Language Processing, excelling not only in text generation and reasoning …

Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI

T Papamarkou, M Skoularidou, K Palla… - … on Machine Learning, 2024 - openreview.net
In the current landscape of deep learning research, there is a predominant emphasis on
achieving high predictive accuracy in supervised tasks involving large image and language …

Universal self-adaptive prompting

X Wan, R Sun, H Nakhost, H Dai… - arXiv preprint arXiv:…, 2023 - arxiv.org
A hallmark of modern large language models (LLMs) is their impressive general zero-shot
and few-shot abilities, often elicited through in-context learning (ICL) via prompting …