Revisiting out-of-distribution robustness in NLP: Benchmarks, analysis, and LLMs evaluations

L Yuan, Y Chen, G Cui, H Gao, F Zou… - Advances in …, 2023 - proceedings.neurips.cc
This paper reexamines the research on out-of-distribution (OOD) robustness in the field of
NLP. We find that the distribution shift settings in previous studies commonly lack adequate …

PromptNER: Prompting for named entity recognition

D Ashok, ZC Lipton - arXiv preprint arXiv:2305.15444, 2023 - arxiv.org
In a surprising turn, Large Language Models (LLMs) together with a growing arsenal of
prompt-based heuristics now offer powerful off-the-shelf approaches providing few-shot …

GLUE-X: Evaluating natural language understanding models from an out-of-distribution generalization perspective

L Yang, S Zhang, L Qin, Y Li, Y Wang, H Liu… - arXiv preprint arXiv …, 2022 - arxiv.org
Pre-trained language models (PLMs) are known to improve the generalization performance
of natural language understanding models by leveraging large amounts of data during the …

Cross-domain data augmentation with domain-adaptive language modeling for aspect-based sentiment analysis

J Yu, Q Zhao, R Xia - Proceedings of the 61st Annual Meeting of …, 2023 - aclanthology.org
Abstract Cross-domain Aspect-Based Sentiment Analysis (ABSA) aims to leverage the
useful knowledge from a source domain to identify aspect-sentiment pairs in sentences from …

RFiD: Towards rational fusion-in-decoder for open-domain question answering

C Wang, H Yu, Y Zhang - arXiv preprint arXiv:2305.17041, 2023 - arxiv.org
Open-Domain Question Answering (ODQA) systems necessitate a reader model capable of
generating answers by simultaneously referring to multiple passages. Although …

Prompting large language models for counterfactual generation: An empirical study

Y Li, M Xu, X Miao, S Zhou, T Qian - arXiv preprint arXiv:2305.14791, 2023 - arxiv.org
Large language models (LLMs) have made remarkable progress in a wide range of natural
language understanding and generation tasks. However, their ability to generate …

VerifiNER: verification-augmented NER via knowledge-grounded reasoning with large language models

S Kim, K Seo, H Chae, J Yeo, D Lee - arXiv preprint arXiv:2402.18374, 2024 - arxiv.org
Recent approaches in domain-specific named entity recognition (NER), such as biomedical
NER, have shown remarkable advances. However, they still lack faithfulness, producing …

Mere contrastive learning for cross-domain sentiment analysis

Y Luo, F Guo, Z Liu, Y Zhang - arXiv preprint arXiv:2208.08678, 2022 - arxiv.org
Cross-domain sentiment analysis aims to predict the sentiment of texts in the target domain
using the model trained on the source domain to cope with the scarcity of labeled data …

Decoupled hyperbolic graph attention network for cross-domain named entity recognition

J Xu, Y Cai - Proceedings of the 46th International ACM SIGIR …, 2023 - dl.acm.org
To address the scarcity of massive labeled data, cross-domain named entity recognition
(cross-domain NER) attracts increasing attention. Recent studies focus on decomposing …

A novel prompting method for few-shot NER via LLMs

Q Cheng, L Chen, Z Hu, J Tang, Q Xu, B Ning - Natural Language …, 2024 - Elsevier
In various natural language processing tasks, significant strides have been made by Large
Language Models (LLMs). Researchers leverage prompting methods to guide LLMs in …