Large language models for data annotation: A survey
Data annotation is the labeling or tagging of raw data with relevant information, essential for
improving the efficacy of machine learning models. The process, however, is labor-intensive …
Large language models for data annotation and synthesis: A survey
Data annotation and synthesis generally refers to the labeling or generating of raw data with
relevant information, which could be used for improving the efficacy of machine learning …
Sparsity-guided holistic explanation for LLMs with interpretable inference-time intervention
Abstract Large Language Models (LLMs) have achieved unprecedented breakthroughs in
various natural language processing domains. However, the enigmatic "black-box" nature of …
Disinformation detection: An evolving challenge in the age of LLMs
The advent of generative Large Language Models (LLMs) such as ChatGPT has catalyzed
transformative advancements across multiple domains. However, alongside these …
CEB: Compositional evaluation benchmark for fairness in large language models
As Large Language Models (LLMs) are increasingly deployed to handle various natural
language processing (NLP) tasks, concerns regarding the potential negative societal …
Exploring large language models for feature selection: A data-centric perspective
The rapid advancement of Large Language Models (LLMs) has significantly influenced
various domains, leveraging their exceptional few-shot and zero-shot learning capabilities …
Hide and seek in noise labels: Noise-robust collaborative active learning with LLMs-powered assistance
B Yuan, Y Chen, Y Zhang, W Jiang - Proceedings of the 62nd …, 2024 - aclanthology.org
Learning from noisy labels (LNL) is a challenge that arises in many real-world scenarios
where collected training data can contain incorrect or corrupted labels. Most existing …
Catching chameleons: Detecting evolving disinformation generated using large language models
Despite recent advancements in detecting disinformation generated by large language
models (LLMs), current efforts overlook the ever-evolving nature of this disinformation. In this …
Towards robust and generalized parameter-efficient fine-tuning for noisy label learning
Parameter-efficient fine-tuning (PEFT) has enabled the efficient optimization of cumbersome
language models in real-world settings. However, as datasets in such environments often …
Constructing Concept-Based Models to Mitigate Spurious Correlations with Minimal Human Effort
Enhancing model interpretability can address spurious correlations by revealing how
models draw their predictions. Concept Bottleneck Models (CBMs) can provide a principled …