Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting
Recent studies have revealed that grammatical error correction methods in the sequence-to-sequence paradigm are vulnerable to adversarial attacks. Large Language Models (LLMs) …
PROTECT: Parameter-Efficient Tuning for Few-Shot Robust Chinese Text Correction
Non-normative texts and euphemisms are widespread on the Internet, making content moderation more difficult. These phenomena result from misspelling errors or …
Learning from Mistakes: Self-correct Adversarial Training for Chinese Unnatural Text Correction
Unnatural text correction aims to automatically detect and correct spelling errors or adversarial perturbation errors in sentences. Existing methods typically rely on fine-tuning or …
Tibyan Corpus: Balanced and Comprehensive Error Coverage Corpus Using ChatGPT for Arabic Grammatical Error Correction
A. Alrehili, A. Alhothali - arXiv preprint arXiv:2411.04588, 2024 - arxiv.org
Natural language processing (NLP) utilizes text data augmentation to overcome sample size constraints. Increasing the sample size is a natural and widely used strategy for alleviating …
Evaluating Performance of LLaMA2 Large Language Model Enhanced by QLoRA Fine-Tuning for English Grammatical Error Correction
Large Language Models (LLMs) have experienced significant advancements across various contexts. However, their impact on vertical fields remains understudied and …