Going beyond XAI: A systematic survey for explanation-guided learning

Y Gao, S Gu, J Jiang, SR Hong, D Yu, L Zhao - ACM Computing Surveys, 2024 - dl.acm.org
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing
DNNs become more complex and diverse, ranging from improving a conventional model …

A survey on text classification: From traditional to deep learning

Q Li, H Peng, J Li, C Xia, R Yang, L Sun… - ACM Transactions on …, 2022 - dl.acm.org
Text classification is the most fundamental and essential task in natural language
processing. The last decade has seen a surge of research in this area due to the …

A survey on text classification: From shallow to deep learning

Q Li, H Peng, J Li, C Xia, R Yang, L Sun, PS Yu… - arXiv preprint arXiv …, 2020 - arxiv.org
Text classification is the most fundamental and essential task in natural language
processing. The last decade has seen a surge of research in this area due to the …

C2L: Causally contrastive learning for robust text classification

S Choi, M Jeong, H Han, S Hwang - … of the AAAI Conference on Artificial …, 2022 - ojs.aaai.org
Despite the super-human accuracy of recent deep models in NLP tasks, their robustness is
reportedly limited due to their reliance on spurious patterns. We thus aim to leverage …

Debiasing NLU models via causal intervention and counterfactual reasoning

B Tian, Y Cao, Y Zhang, C Xing - … of the AAAI Conference on Artificial …, 2022 - ojs.aaai.org
Recent studies have shown that strong Natural Language Understanding (NLU) models are
prone to relying on annotation biases of the datasets as a shortcut, which goes against the …

De-biased attention supervision for text classification with causality

Y Wu, Y Liu, Z Zhao, W Lu, Y Zhang, C Sun… - Proceedings of the …, 2024 - ojs.aaai.org
In text classification models, while the unsupervised attention mechanism can enhance
performance, it often produces attention distributions that are puzzling to humans, such as …

Causal keyword driven reliable text classification with large language model feedback

R Song, Y Li, M Tian, H Wang, F Giunchiglia… - Information Processing & …, 2025 - Elsevier
Recent studies show that Pre-trained Language Models (PLMs) are prone to shortcut learning,
which reduces their effectiveness on Out-Of-Distribution (OOD) samples, prompting research on the …

Supervised copy mechanism for grammatical error correction

K Al-Sabahi, K Yang - IEEE Access, 2023 - ieeexplore.ieee.org
AI has introduced a new reform direction for traditional education, such as automating
Grammatical Error Correction (GEC) to reduce teachers' workload and improve efficiency …

Perturbation-based self-supervised attention for attention bias in text classification

H Feng, Z Lin, Q Ma - IEEE/ACM Transactions on Audio …, 2023 - ieeexplore.ieee.org
In text classification, traditional attention mechanisms usually focus too much on frequent
words and need extensive labeled data in order to learn. This article proposes a …

Automatic construction of context-aware sentiment lexicon in the financial domain using direction-dependent words

J Park, HJ Lee, S Cho - arXiv preprint arXiv:2106.05723, 2021 - arxiv.org
Increasing attention has been drawn to the sentiment analysis of financial documents. The
most popular examples of such documents include analyst reports and economic news, the …