Large language models for generative information extraction: A survey

D Xu, W Chen, W Peng, C Zhang, T Xu, X Zhao… - Frontiers of Computer …, 2024 - Springer
Information Extraction (IE) aims to extract structural knowledge from plain natural
language texts. Recently, generative Large Language Models (LLMs) have demonstrated …
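The generative-IE framing the survey covers can be illustrated with a short sketch: prompt an instruction-tuned LLM for structured triples and validate the output. The checkpoint name and JSON schema below are illustrative assumptions, not anything the survey prescribes.

```python
# Minimal sketch of generative IE: prompt an instruction-tuned LLM to emit
# (subject, relation, object) triples as JSON, then parse and validate them.
import json
from transformers import pipeline

# Assumed checkpoint; any instruction-tuned causal LM would do for the sketch.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

text = "Marie Curie won the Nobel Prize in Physics in 1903."
prompt = (
    "Extract (subject, relation, object) triples from the text below.\n"
    "Answer with a JSON list of objects with keys subject, relation, object.\n"
    f"Text: {text}\nTriples:"
)

out = generator(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"]
completion = out[len(prompt):]  # keep only the newly generated part

try:
    triples = json.loads(completion.strip())
except json.JSONDecodeError:
    triples = []  # generative output must be validated; parsing can fail
print(triples)
```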

Chinese named entity recognition: The state of the art

P Liu, Y Guo, F Wang, G Li - Neurocomputing, 2022 - Elsevier
Named Entity Recognition (NER), one of the most fundamental problems in natural
language processing, seeks to identify the boundaries and types of entities with specific …
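As a toy illustration of the task this survey addresses, identifying both boundaries and types, the sketch below decodes BIO tags into typed spans; the tokens and tags are made-up examples.

```python
# Decode BIO tags into (span text, entity type) pairs: NER output consists
# of entity boundaries plus their types.
def bio_to_spans(tokens, tags):
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):        # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.append((" ".join(tokens[start:i]), label))
                start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
    return spans

tokens = ["Li", "Na", "won", "the", "French", "Open"]
tags   = ["B-PER", "I-PER", "O", "O", "B-MISC", "I-MISC"]
print(bio_to_spans(tokens, tags))  # [('Li Na', 'PER'), ('French Open', 'MISC')]
```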

PTR: Prompt tuning with rules for text classification

X Han, W Zhao, N Ding, Z Liu, M Sun - AI Open, 2022 - Elsevier
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …
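A rough sketch of PTR's core idea: compose cloze sub-prompts under a logic rule, so a relation label is predicted through several [MASK] slots rather than one. The templates and label words below are illustrative assumptions, not the paper's exact ones.

```python
# Compose sub-prompts for relation classification: one slot for each entity's
# type and one for the relation between them, scored jointly by the PLM.
SUBJ_TYPE = "the [MASK] {subj}"   # sub-prompt: type of the subject entity
OBJ_TYPE  = "the [MASK] {obj}"    # sub-prompt: type of the object entity
RELATION  = "{s} [MASK] {o}"      # sub-prompt: relation between the two

def build_prompt(sentence, subj, obj):
    s = SUBJ_TYPE.format(subj=subj)
    o = OBJ_TYPE.format(obj=obj)
    return f"{sentence} {RELATION.format(s=s, o=o)}."

# A rule such as "person + was born in + location => per:city_of_birth" maps
# each [MASK] to its own set of label words.
print(build_prompt("Steve Jobs was born in San Francisco.",
                   "Steve Jobs", "San Francisco"))
```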

KoLA: Carefully benchmarking world knowledge of large language models

J Yu, X Wang, S Tu, S Cao, D Zhang-Li, X Lv… - arXiv preprint arXiv …, 2023 - arxiv.org
The unprecedented performance of large language models (LLMs) necessitates
improvements in evaluations. Rather than merely exploring the breadth of LLM abilities, we …

OpenPrompt: An open-source framework for prompt-learning

N Ding, S Hu, W Zhao, Y Chen, Z Liu, HT Zheng… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompt-learning has become a new paradigm in modern natural language processing,
which directly adapts pre-trained language models (PLMs) to cloze-style prediction …
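A minimal cloze-style classification sketch with OpenPrompt, following the library's documented quick-start (exact signatures may differ across versions); the template and label words are illustrative.

```python
# Prompt-learning with OpenPrompt: a template turns classification into
# cloze prediction, and a verbalizer maps label words back to classes.
import torch
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer
from openprompt import PromptForClassification, PromptDataLoader
from openprompt.data_utils import InputExample

plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")

template = ManualTemplate(text='{"placeholder":"text_a"} It was {"mask"}.',
                          tokenizer=tokenizer)
verbalizer = ManualVerbalizer(classes=["negative", "positive"],
                              label_words={"negative": ["terrible"],
                                           "positive": ["great"]},
                              tokenizer=tokenizer)
model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)

data = [InputExample(guid=0, text_a="A gripping, beautifully shot film.")]
loader = PromptDataLoader(dataset=data, template=template, tokenizer=tokenizer,
                          tokenizer_wrapper_class=WrapperClass)
with torch.no_grad():
    for batch in loader:
        print(model(batch).argmax(dim=-1))  # predicted class index
```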

Large language model is not a good few-shot information extractor, but a good reranker for hard samples!

Y Ma, Y Cao, YC Hong, A Sun - arXiv preprint arXiv:2303.08559, 2023 - arxiv.org
Large Language Models (LLMs) have made remarkable strides in various tasks. Whether
LLMs are competitive few-shot solvers for information extraction (IE) tasks, however, remains …
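The paper's filter-then-rerank recipe can be sketched as follows: trust a small supervised model on its confident predictions, and hand only the low-confidence (hard) samples to an LLM as a multiple-choice problem. small_model and ask_llm are hypothetical stand-ins, and the threshold and top-k values are assumptions.

```python
# Filter-then-rerank sketch: the small model handles easy samples; the LLM
# only reranks hard ones, where the small model is unsure.
def extract(sample, small_model, ask_llm, threshold=0.9):
    candidates = small_model(sample)      # [(label, confidence), ...] sorted desc
    best_label, confidence = candidates[0]
    if confidence >= threshold:
        return best_label                 # easy sample: trust the small model
    top_k = [label for label, _ in candidates[:5]]
    prompt = (f"Sentence: {sample}\n"
              f"Which label fits best? Options: {', '.join(top_k)}\nAnswer:")
    return ask_llm(prompt)                # hard sample: LLM picks among top-k
```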

Is information extraction solved by ChatGPT? An analysis of performance, evaluation criteria, robustness and errors

R Han, T Peng, C Yang, B Wang, L Liu… - arXiv preprint arXiv …, 2023 - researchgate.net
ChatGPT has stimulated a research boom in the field of large language models. In this
paper, we assess the capabilities of ChatGPT from four perspectives including Performance …

Template-free prompt tuning for few-shot NER

R Ma, X Zhou, T Gui, Y Tan, L Li, Q Zhang… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompt-based methods have been successfully applied in sentence-level few-shot learning
tasks, mostly owing to the sophisticated design of templates and label words. However …
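A sketch of the template-free idea: instead of a hand-crafted template, the LM head is trained to predict a label word at each entity position and the input token itself everywhere else. The label-word mapping below is an illustrative assumption.

```python
# Build LM training targets for template-free NER: entity tokens map to
# label words, non-entity tokens map to themselves.
LABEL_WORDS = {"PER": "person", "LOC": "place", "O": None}

def build_lm_targets(tokens, tags):
    """Map each input token to its LM target token."""
    targets = []
    for token, tag in zip(tokens, tags):
        word = LABEL_WORDS.get(tag.split("-")[-1])
        targets.append(word if word is not None else token)
    return targets

tokens = ["John", "lives", "in", "Berlin"]
tags   = ["B-PER", "O", "O", "B-LOC"]
print(build_lm_targets(tokens, tags))  # ['person', 'lives', 'in', 'place']
```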

Revisiting out-of-distribution robustness in NLP: Benchmarks, analysis, and LLMs evaluations

L Yuan, Y Chen, G Cui, H Gao, F Zou… - Advances in …, 2023 - proceedings.neurips.cc
This paper reexamines the research on out-of-distribution (OOD) robustness in the field of
NLP. We find that the distribution shift settings in previous studies commonly lack adequate …

CONTaiNER: Few-shot named entity recognition via contrastive learning

SSS Das, A Katiyar, RJ Passonneau… - arXiv preprint arXiv …, 2021 - arxiv.org
Named Entity Recognition (NER) in the few-shot setting is imperative for entity tagging in
low-resource domains. Existing approaches only learn class-specific semantic features and …
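A sketch in the spirit of CONTaiNER's token-level contrastive objective; note that the paper models tokens as Gaussian embeddings with a KL-based distance, whereas this sketch substitutes a plain supervised InfoNCE over point embeddings.

```python
# Supervised contrastive loss over token embeddings: tokens sharing an
# entity type are pulled together, all others are pushed apart.
import torch
import torch.nn.functional as F

def token_contrastive_loss(embeddings, labels, temperature=0.1):
    """embeddings: (n_tokens, dim); labels: (n_tokens,) entity-type ids."""
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.T / temperature                        # pairwise similarities
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye  # same-type pairs
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(eye, float("-inf")), dim=1, keepdim=True)
    per_token = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return per_token[pos.any(1)].mean()                # tokens with >=1 positive

emb = torch.randn(6, 32)
lab = torch.tensor([1, 1, 0, 0, 2, 2])
print(token_contrastive_loss(emb, lab))
```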