Prompting large language model for machine translation: A case study

B Zhang, B Haddow, A Birch - International Conference on …, 2023 - proceedings.mlr.press
Prompting has shown excellent performance with little or even no supervised training
across many tasks. However, prompting for machine translation is still under …
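
To make the setting concrete, the following is a minimal sketch of few-shot prompting for
translation: in-context source/target pairs are concatenated ahead of the new source sentence.
The language pair, example sentences, and template wording are illustrative assumptions, not the
prompt format evaluated in the paper.

# Minimal sketch of few-shot prompting for machine translation.
# The example pairs and the template are illustrative, not the
# prompt format studied in the paper.

FEW_SHOT = [
    ("Das Haus ist alt.", "The house is old."),
    ("Ich trinke Kaffee.", "I drink coffee."),
]

def build_prompt(source: str) -> str:
    """Concatenate in-context translation pairs, then the new source sentence."""
    blocks = [f"German: {src}\nEnglish: {tgt}" for src, tgt in FEW_SHOT]
    blocks.append(f"German: {source}\nEnglish:")
    return "\n\n".join(blocks)

print(build_prompt("Der Zug ist spät."))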

AugGPT: Leveraging ChatGPT for text data augmentation

H Dai, Z Liu, W Liao, X Huang, Y Cao… - … Transactions on Big …, 2025 - ieeexplore.ieee.org
Text data augmentation is an effective strategy for overcoming the challenge of limited
sample sizes in many natural language processing (NLP) tasks. This challenge is especially …
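
As a rough sketch of the augmentation loop this line of work uses: ask a chat model for
label-preserving paraphrases of each training sample. call_llm below is a hypothetical stand-in
for whatever chat-completion client is available, and the prompt wording is illustrative rather
than the paper's.

# Sketch of LLM-based text augmentation in the spirit of AugGPT.
# call_llm() is a hypothetical placeholder for a real chat-completion API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a chat-completion client here")

def augment(text: str, label: str, n: int = 3) -> list[str]:
    """Ask the model for n paraphrases that preserve the sample's label."""
    prompt = (
        f"Rephrase the following sentence {n} different ways while keeping "
        f"its meaning (label: {label}). One rephrasing per line.\n\n{text}"
    )
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]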

Prompt learning for news recommendation

Z Zhang, B Wang - Proceedings of the 46th International ACM SIGIR …, 2023 - dl.acm.org
Some recent news recommendation (NR) methods introduce a Pre-trained Language Model
(PLM) to encode news representations by following the vanilla pre-train and fine-tune …

Multitask prompt tuning enables parameter-efficient transfer learning

Z Wang, R Panda, L Karlinsky, R Feris, H Sun… - arXiv preprint arXiv …, 2023 - arxiv.org
Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on
learned prompt vectors, has emerged as a promising approach for efficiently adapting large …
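
The mechanism the entry summarizes can be sketched in a few lines of PyTorch: trainable prompt
vectors are prepended to the input embeddings of a frozen encoder, so only the prompt receives
gradients. The toy encoder below is a stand-in for a real PLM, and the cross-task prompt
decomposition this paper proposes is not shown.

import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Prepend trainable soft-prompt vectors to a frozen encoder's inputs.

    The encoder here is a toy stand-in; with a real PLM one would pass the
    concatenated embeddings through its inputs_embeds-style argument.
    """

    def __init__(self, encoder: nn.Module, d_model: int, prompt_len: int = 20):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # the pretrained model stays frozen
        self.prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq, d_model)
        batch = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.encoder(torch.cat([prompt, token_embeds], dim=1))

# Only the prompt vectors receive gradients:
enc = nn.TransformerEncoder(nn.TransformerEncoderLayer(64, 4, batch_first=True), 2)
model = PromptTunedEncoder(enc, d_model=64)
out = model(torch.randn(2, 10, 64))  # shape (2, 30, 64) after the 20 prompt slots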

Mask-guided BERT for few-shot text classification

W Liao, Z Liu, H Dai, Z Wu, Y Zhang, X Huang, Y Chen… - Neurocomputing, 2024 - Elsevier
Transformer-based language models have achieved significant success in various domains.
However, the data-intensive nature of the transformer architecture requires much labeled …

ConnPrompt: Connective-cloze prompt learning for implicit discourse relation recognition

W Xiang, Z Wang, L Dai, B Wang - Proceedings of the 29th …, 2022 - aclanthology.org
Implicit Discourse Relation Recognition (IDRR) aims to detect and classify the relation
sense between two text segments without an explicit connective. Vanilla pre-train and fine …
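
The connective-cloze idea can be sketched with an off-the-shelf masked LM: place a mask token
between the two arguments, score candidate connectives at that position, and map the winning
connective to a relation sense. The template and the connective-to-sense table below are
illustrative assumptions, not the paper's verbalizer.

from transformers import pipeline

# Score candidate connectives at a cloze position between the two arguments,
# then map the best-scoring connective to a coarse relation sense.
# The template and the connective -> sense mapping are illustrative only.
SENSE = {"because": "Contingency", "so": "Contingency", "but": "Comparison",
         "then": "Temporal", "also": "Expansion"}

fill = pipeline("fill-mask", model="bert-base-uncased")

def predict_sense(arg1: str, arg2: str) -> str:
    preds = fill(f"{arg1} [MASK] {arg2}", targets=list(SENSE))
    best = max(preds, key=lambda p: p["score"])
    return SENSE[best["token_str"]]

print(predict_sense("The company cut prices.", "sales rose sharply."))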

XPrompt: Exploring the extreme of prompt tuning

F Ma, C Zhang, L Ren, J Wang, Q Wang, W Wu… - arXiv preprint arXiv …, 2022 - arxiv.org
Prompt tuning learns soft prompts to condition frozen Pre-trained Language Models (PLMs)
for performing downstream tasks in a parameter-efficient manner. While prompt tuning has …

Unified multi-modal pre-training for few-shot sentiment analysis with prompt-based learning

Y Yu, D Zhang, S Li - Proceedings of the 30th ACM international …, 2022 - dl.acm.org
Multi-modal sentiment analysis (MSA) has attracted growing interest in both academia
and industry. Conventional studies typically require massive labeled data to …

Batched low-rank adaptation of foundation models

Y Wen, S Chaudhuri - arXiv preprint arXiv:2312.05677, 2023 - arxiv.org
Low-Rank Adaptation (LoRA) has recently gained attention for fine-tuning foundation
models by incorporating trainable low-rank matrices, thereby reducing the number of …
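
For reference, the LoRA update itself is compact: a frozen weight matrix plus a trainable
low-rank product scaled by alpha/r. The sketch below shows a single adapted linear layer; the
batched serving of many adapters that this paper addresses is not shown.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = Wx + (alpha/r) * B(Ax)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512), r=8)
y = layer(torch.randn(4, 512))  # only A and B receive gradients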

Meta-prompt based learning for low-resource false information detection

Y Huang, M Gao, J Wang, J Yin, K Shu, Q Fan… - Information Processing & …, 2023 - Elsevier
The widespread dissemination of false information has detrimental effects on society, and its
detection has received broad attention. When new domains appear, the relevant labeled data …