Large language models for generative information extraction: A survey
Abstract Information Extraction (IE) aims to extract structural knowledge from plain natural
language texts. Recently, generative Large Language Models (LLMs) have demonstrated …
Chinese named entity recognition: The state of the art
Abstract Named Entity Recognition (NER), one of the most fundamental problems in natural
language processing, seeks to identify the boundaries and types of entities with specific …
PTR: Prompt tuning with rules for text classification
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …
KoLA: Carefully benchmarking world knowledge of large language models
The unprecedented performance of large language models (LLMs) necessitates
improvements in evaluations. Rather than merely exploring the breadth of LLM abilities, we …
OpenPrompt: An open-source framework for prompt-learning
Prompt-learning has become a new paradigm in modern natural language processing,
which directly adapts pre-trained language models (PLMs) to cloze-style prediction …
Large language model is not a good few-shot information extractor, but a good reranker for hard samples!
Large Language Models (LLMs) have made remarkable strides in various tasks. Whether
LLMs are competitive few-shot solvers for information extraction (IE) tasks, however, remains …
Is information extraction solved by ChatGPT? An analysis of performance, evaluation criteria, robustness and errors
ChatGPT has stimulated the research boom in the field of large language models. In this
paper, we assess the capabilities of ChatGPT from four perspectives including Performance …
Template-free prompt tuning for few-shot NER
Prompt-based methods have been successfully applied in sentence-level few-shot learning
tasks, mostly owing to the sophisticated design of templates and label words. However …
Revisiting out-of-distribution robustness in NLP: Benchmarks, analysis, and LLMs evaluations
This paper reexamines the research on out-of-distribution (OOD) robustness in the field of
NLP. We find that the distribution shift settings in previous studies commonly lack adequate …
CONTaiNER: Few-shot named entity recognition via contrastive learning
Named Entity Recognition (NER) in Few-Shot setting is imperative for entity tagging in low
resource domains. Existing approaches only learn class-specific semantic features and …