Prompting large language model for machine translation: A case study
Research on prompting has shown excellent performance with little or even no supervised
training across many tasks. However, prompting for machine translation is still under …
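The basic recipe is simple enough to sketch: pack a few source-target demonstration pairs into one prompt and let the model complete the final target side. A minimal illustration, assuming a generic text-completion interface; the template wording and demo pairs are illustrative, not the paper's:

```python
# Minimal sketch of few-shot prompting for translation. The template
# wording and demonstration pairs are illustrative, not the paper's.

def build_mt_prompt(demos, source, src_lang="German", tgt_lang="English"):
    """Assemble a few-shot prompt from (source, target) demonstration pairs."""
    blocks = [f"{src_lang}: {s}\n{tgt_lang}: {t}" for s, t in demos]
    # Leave the final target side empty for the model to complete.
    blocks.append(f"{src_lang}: {source}\n{tgt_lang}:")
    return "\n\n".join(blocks)

demos = [("Guten Morgen.", "Good morning."),
         ("Wie geht es dir?", "How are you?")]
print(build_mt_prompt(demos, "Ich lerne gerne Sprachen."))
```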
Auggpt: Leveraging chatgpt for text data augmentation
Text data augmentation is an effective strategy for overcoming the challenge of limited
sample sizes in many natural language processing (NLP) tasks. This challenge is especially …
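In this spirit, a minimal augmentation loop asks a chat model to paraphrase each labeled sample and reuses the original label. A hedged sketch; `query_llm` is a hypothetical stand-in for an actual chat API call, and the prompt wording is not AugGPT's exact template:

```python
# Sketch of LLM-based augmentation in the spirit of AugGPT: ask a chat
# model to paraphrase each labeled sample and reuse the original label.
# `query_llm` is a hypothetical stand-in for an actual chat API call.

def augment(samples, query_llm, n=5):
    """Expand (text, label) pairs with n paraphrases per sample."""
    out = list(samples)
    for text, label in samples:
        prompt = (f"Rephrase the following sentence into {n} semantically "
                  f"equivalent variations, one per line:\n{text}")
        for line in query_llm(prompt).splitlines():
            variant = line.strip().lstrip("0123456789.- ")  # drop list markers
            if variant:
                out.append((variant, label))
    return out
```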
Prompt learning for news recommendation
Some recent news recommendation (NR) methods introduce a Pre-trained Language Model
(PLM) to encode news representation by following the vanilla pre-train and fine-tune …
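A prompt-learning alternative to that pipeline recasts click prediction as a cloze task the PLM already knows how to fill. A hypothetical template, not taken from the paper:

```python
# Hypothetical cloze-style prompt for click prediction: the PLM fills
# "[MASK]" with "yes"/"no", whose token logits score the candidate item.
# The template wording is illustrative, not the paper's.

def nr_prompt(history_titles, candidate_title):
    history = " ; ".join(history_titles)
    return (f"A user has read: {history}. "
            f"Would the user also read: {candidate_title}? [MASK]")

print(nr_prompt(["Fed raises rates", "Markets rally"], "Inflation cools in May"))
```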
Multitask prompt tuning enables parameter-efficient transfer learning
Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on
learned prompt vectors, has emerged as a promising approach for efficiently adapting large …
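The single-task mechanism being adapted here is easy to sketch: prepend a small matrix of learnable vectors to the input embeddings and freeze everything else. A minimal PyTorch illustration; sizes are arbitrary, and the backbone is assumed to accept input embeddings directly:

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Prompt tuning in miniature: only `self.prompt` receives gradients;
    the pretrained backbone stays frozen. Sizes here are illustrative,
    and the backbone is assumed to accept input embeddings directly."""

    def __init__(self, backbone, n_tokens=20, d_model=768):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # freeze the base model
            p.requires_grad = False
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, input_embeds):           # (batch, seq_len, d_model)
        prompt = self.prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return self.backbone(torch.cat([prompt, input_embeds], dim=1))
```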
Mask-guided BERT for few-shot text classification
Transformer-based language models have achieved significant success in various domains.
However, the data-intensive nature of the transformer architecture requires large amounts of labeled …
ConnPrompt: Connective-cloze prompt learning for implicit discourse relation recognition
Implicit Discourse Relation Recognition (IDRR) aims to detect and classify the relation sense between two text segments without an explicit connective. Vanilla pre-train and fine …
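The connective-cloze idea can be sketched concisely: place a masked connective between the two arguments and map whatever connective the masked-LM predicts onto a relation sense. The template and the connective-to-sense table below are a small illustrative subset, not the paper's actual design:

```python
# Connective-cloze sketch: a masked connective sits between the two
# arguments; the predicted connective is mapped to a relation sense.
# Table and template are illustrative, not the paper's actual design.

CONNECTIVE_TO_SENSE = {
    "because": "Contingency.Cause",
    "but": "Comparison.Contrast",
    "then": "Temporal.Asynchronous",
    "specifically": "Expansion.Restatement",
}

def cloze_prompt(arg1, arg2, mask_token="[MASK]"):
    return f"{arg1} {mask_token} {arg2}"

print(cloze_prompt("The roads were icy", "several flights were delayed."))
# A masked-LM picks the connective; the table converts it to a sense label.
```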
Xprompt: Exploring the extreme of prompt tuning
Prompt tuning learns soft prompts to condition frozen Pre-trained Language Models (PLMs)
for performing downstream tasks in a parameter-efficient manner. While prompt tuning has …
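The "parameter-efficient" framing is easy to quantify with a back-of-the-envelope count; the backbone size and prompt length below are illustrative assumptions, not figures from the paper:

```python
# Trainable-parameter count, assuming BERT-base (~110M parameters,
# hidden size 768) and 20 soft prompt tokens; both are illustrative
# choices, not figures from the paper.

backbone_params = 110_000_000
prompt_params = 20 * 768                      # 15,360 trainable parameters
print(f"trainable fraction: {prompt_params / backbone_params:.5%}")
# -> trainable fraction: 0.01396%
```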
Unified multi-modal pre-training for few-shot sentiment analysis with prompt-based learning
Multi-modal sentiment analysis (MSA) has attracted increasing attention in both academia and industry. Conventional studies typically require massive amounts of labeled data to …
Batched low-rank adaptation of foundation models
Low-Rank Adaptation (LoRA) has recently gained attention for fine-tuning foundation
models by incorporating trainable low-rank matrices, thereby reducing the number of …
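Setting aside the batching contribution of this paper, LoRA itself reduces to freezing a pretrained weight and learning a low-rank additive update. A minimal PyTorch sketch, with illustrative rank and scaling hyperparameters:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """LoRA in miniature: freeze a pretrained linear layer and learn a
    low-rank update W + (alpha/r) * B @ A. Rank and scaling are
    illustrative hyperparameters."""

    def __init__(self, base: nn.Linear, r=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():       # freeze pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no update at start
        self.scale = alpha / r

    def forward(self, x):                      # x: (batch, in_features)
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())
```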
Meta-prompt based learning for low-resource false information detection
The widespread dissemination of false information has detrimental effects on society, and false information detection has therefore received broad attention. When new domains emerge, the relevant labeled data …