Recent advances in natural language processing via large pre-trained language models: A survey

B Min, H Ross, E Sulem, APB Veyseh… - ACM Computing …, 2023 - dl.acm.org
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …

A comprehensive survey on relation extraction: Recent advances and new frontiers

X Zhao, Y Deng, M Yang, L Wang, R Zhang… - ACM Computing …, 2024 - dl.acm.org
Relation extraction (RE) involves identifying the relations between entities from underlying
content. RE serves as the foundation for many natural language processing (NLP) and …

P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks

X Liu, K Ji, Y Fu, WL Tam, Z Du, Z Yang… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompt tuning, which only tunes continuous prompts with a frozen language model,
substantially reduces per-task storage and memory usage at training. However, in the …

Knowledge graph-enhanced molecular contrastive learning with functional prompt

Y Fang, Q Zhang, N Zhang, Z Chen, X Zhuang… - Nature Machine …, 2023 - nature.com
Deep learning models can accurately predict molecular properties and help make the
search for potential drug candidates faster and more efficient. Many existing methods are …

LLMaAA: Making large language models as active annotators

R Zhang, Y Li, Y Ma, M Zhou, L Zou - arXiv preprint arXiv:2310.19596, 2023 - arxiv.org
Prevalent supervised learning methods in natural language processing (NLP) are
notoriously data-hungry, demanding large amounts of high-quality annotated data. In …

DeepStruct: Pretraining of language models for structure prediction

C Wang, X Liu, Z Chen, H Hong, J Tang… - arXiv preprint arXiv …, 2022 - arxiv.org
We introduce a method for improving the structural understanding abilities of language
models. Unlike previous approaches that finetune the models with task-specific …

Universal information extraction as unified semantic matching

J Lou, Y Lu, D Dai, W Jia, H Lin, X Han… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
The challenge of information extraction (IE) lies in the diversity of label schemas and the
heterogeneity of structures. Traditional methods require task-specific model design and rely …

Augmenting low-resource text classification with graph-grounded pre-training and prompting

Z Wen, Y Fang - Proceedings of the 46th International ACM SIGIR …, 2023 - dl.acm.org
Text classification is a fundamental problem in information retrieval with many real-world
applications, such as predicting the topics of online articles and the categories of e …

Revisiting large language models as zero-shot relation extractors

G Li, P Wang, W Ke - arXiv preprint arXiv:2310.05028, 2023 - arxiv.org
Relation extraction (RE) consistently involves a certain degree of labeled or unlabeled data,
even under the zero-shot setting. Recent studies have shown that large language models …

Consistency guided knowledge retrieval and denoising in LLMs for zero-shot document-level relation triplet extraction

Q Sun, K Huang, X Yang, R Tong, K Zhang… - Proceedings of the ACM …, 2024 - dl.acm.org
Document-level Relation Triplet Extraction (DocRTE) is a fundamental task in information
systems that aims to simultaneously extract entities with semantic relations from a document …