Nested named entity recognition: a survey

Y Wang, H Tong, Z Zhu, Y Li - ACM Transactions on Knowledge …, 2022 - dl.acm.org
With the rapid development of text mining, many studies observe that text generally contains
a variety of implicit information, and it is important to develop techniques for extracting such …

A survey on arabic named entity recognition: Past, recent advances, and future trends

X Qu, Y Gu, Q Xia, Z Li, Z Wang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
As more and more Arabic texts emerge on the Internet, extracting important information
from these texts is especially useful. As a fundamental technology, named entity …

Template-free prompt tuning for few-shot NER

R Ma, X Zhou, T Gui, Y Tan, L Li, Q Zhang… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompt-based methods have been successfully applied in sentence-level few-shot learning
tasks, mostly owing to the sophisticated design of templates and label words. However …

A survey on programmatic weak supervision

J Zhang, CY Hsieh, Y Yu, C Zhang, A Ratner - arXiv preprint arXiv …, 2022 - arxiv.org
Labeling training data has become one of the major roadblocks to using machine learning.
Among various weak supervision paradigms, programmatic weak supervision (PWS) has …

WRENCH: A comprehensive benchmark for weak supervision

J Zhang, Y Yu, Y Li, Y Wang, Y Yang, M Yang… - arXiv preprint arXiv …, 2021 - arxiv.org
Recent Weak Supervision (WS) approaches have had widespread success in easing the
bottleneck of labeling training data for machine learning by synthesizing labels from multiple …

SumGNN: multi-typed drug interaction prediction via efficient knowledge graph summarization

Y Yu, K Huang, C Zhang, LM Glass, J Sun… - …, 2021 - academic.oup.com
Motivation: Thanks to the increasing availability of drug–drug interaction (DDI) datasets and
large biomedical knowledge graphs (KGs), accurate detection of adverse DDI using …

Assessing generalizability of CodeBERT

X Zhou, DG Han, D Lo - 2021 IEEE International Conference on …, 2021 - ieeexplore.ieee.org
Pre-trained models like BERT have achieved strong improvements on many natural
language processing (NLP) tasks, showing their great generalizability. The success of pre …

Fine-tuning pre-trained language model with weak supervision: A contrastive-regularized self-training approach

Y Yu, S Zuo, H Jiang, W Ren, T Zhao… - arXiv preprint arXiv …, 2020 - arxiv.org
Fine-tuned pre-trained language models (LMs) have achieved enormous success in many
natural language processing (NLP) tasks, but they still require excessive labeled data in the …

Meta self-training for few-shot neural sequence labeling

Y Wang, S Mukherjee, H Chu, Y Tu, M Wu… - Proceedings of the 27th …, 2021 - dl.acm.org
Neural sequence labeling is widely adopted for many Natural Language Processing (NLP)
tasks, such as Named Entity Recognition (NER) and slot tagging for dialog systems and …

Distantly-supervised named entity recognition with noise-robust learning and language model augmented self-training

Y Meng, Y Zhang, J Huang, X Wang, Y Zhang… - arXiv preprint arXiv …, 2021 - arxiv.org
We study the problem of training named entity recognition (NER) models using only distantly-
labeled data, which can be automatically obtained by matching entity mentions in the raw …