Large language models for data annotation: A survey
Z Tan, A Beigi, S Wang, R Guo, A Bhattacharjee… - arXiv preprint arXiv …, 2024 - arxiv.org
Data annotation is the labeling or tagging of raw data with relevant information, essential for
improving the efficacy of machine learning models. The process, however, is labor-intensive …
From generation to judgment: Opportunities and challenges of LLM-as-a-judge
Assessment and evaluation have long been critical challenges in artificial intelligence (AI)
and natural language processing (NLP). However, traditional methods, whether matching …
Large language models as annotators: Enhancing generalization of NLP models at minimal cost
State-of-the-art supervised NLP models achieve high accuracy but are also susceptible to
failures on inputs from low-data regimes, such as domains that are not represented in …
A survey on stability of learning with limited labelled data and its sensitivity to the effects of randomness
B Pecher, I Srba, M Bielikova - ACM Computing Surveys, 2024 - dl.acm.org
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-
learning, or few-shot learning, aims to effectively train a model using only a small amount of …
Cost-effective in-context learning for entity resolution: A design space exploration
Entity resolution (ER) is an important data integration task with a wide spectrum of
applications. The state-of-the-art solutions on ER rely on pre-trained language models …
Cue-CoT: Chain-of-thought prompting for responding to in-depth dialogue questions with LLMs
Large Language Models (LLMs), such as ChatGPT, greatly empower dialogue
systems with strong language understanding and generation capabilities. However, most of …
Causal prompting: Debiasing large language model prompting based on front-door adjustment
Despite the notable advancements of existing prompting methods, such as In-Context
Learning and Chain-of-Thought for Large Language Models (LLMs), they still face …
Advancing entity recognition in biomedicine via instruction tuning of large language models
Motivation: Large Language Models (LLMs) have the potential to revolutionize the
field of Natural Language Processing, excelling not only in text generation and reasoning …
Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI
T Papamarkou, M Skoularidou, K Palla… - … on Machine Learning, 2024 - openreview.net
In the current landscape of deep learning research, there is a predominant emphasis on
achieving high predictive accuracy in supervised tasks involving large image and language …
Universal self-adaptive prompting
A hallmark of modern large language models (LLMs) is their impressive general zero-shot
and few-shot abilities, often elicited through in-context learning (ICL) via prompting …