GPT3Mix: Leveraging large-scale language models for text augmentation

KM Yoo, D Park, J Kang, SW Lee, W Park - arXiv preprint arXiv …, 2021 - arxiv.org
Large-scale language models such as GPT-3 are excellent few-shot learners, allowing them
to be controlled via natural text prompts. Recent studies report that prompt-based direct …

Federated social recommendation with graph neural network

Z Liu, L Yang, Z Fan, H Peng, PS Yu - ACM Transactions on Intelligent …, 2022 - dl.acm.org
Recommender systems, designed to predict users' potential interests in items by learning embeddings, have flourished in recent years. Recent developments of the Graph …

Augmenting sequential recommendation with pseudo-prior items via reversely pre-training transformer

Z Liu, Z Fan, Y Wang, PS Yu - Proceedings of the 44th International ACM …, 2021 - dl.acm.org
Sequential Recommendation characterizes evolving patterns by modeling item sequences chronologically. Its essential target is to capture the item transition …

Mixup-transformer: Dynamic data augmentation for NLP tasks

L Sun, C Xia, W Yin, T Liang, PS Yu, L He - arXiv preprint arXiv …, 2020 - arxiv.org
Mixup is the latest data augmentation technique that linearly interpolates input examples
and the corresponding labels. It has shown strong effectiveness in image classification by …

State-of-the-art generalisation research in NLP: a taxonomy and review

D Hupkes, M Giulianelli, V Dankers, M Artetxe… - arXiv preprint arXiv …, 2022 - arxiv.org
The ability to generalise well is one of the primary desiderata of natural language
processing (NLP). Yet, what 'good generalisation' entails and how it should be evaluated is …

Few-shot intent detection via contrastive pre-training and fine-tuning

J Zhang, T Bui, S Yoon, X Chen, Z Liu, C Xia… - arXiv preprint arXiv …, 2021 - arxiv.org
In this work, we focus on a more challenging few-shot intent detection scenario where many
intents are fine-grained and semantically similar. We present a simple yet effective few-shot …

Crafting clarity: Leveraging large language models to decode consumer reviews

SV Praveen, P Gajjar, RK Ray, A Dutt - Journal of Retailing and Consumer …, 2024 - Elsevier
Abstract Large Language Models (LLMs) have emerged as powerful tools for understanding
consumer perceptions and extracting insights from unstructured textual data. This study …

Incremental few-shot text classification with multi-round new classes: Formulation, dataset and system

C Xia, W Yin, Y Feng, P Yu - arXiv preprint arXiv:2104.11882, 2021 - arxiv.org
Text classification is usually studied by labeling natural language texts with relevant
categories from a predefined set. In the real world, new classes might keep challenging the …

Effectiveness of pre-training for few-shot intent classification

H Zhang, Y Zhang, LM Zhan, J Chen, G Shi… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper investigates the effectiveness of pre-training for few-shot intent classification.
While existing paradigms commonly further pre-train language models such as BERT on a …

Fine-tuning pre-trained language models for few-shot intent detection: Supervised pre-training and isotropization

H Zhang, H Liang, Y Zhang, L Zhan, X Lu… - arXiv preprint arXiv …, 2022 - arxiv.org
It is challenging to train a good intent classifier for a task-oriented dialogue system with only
a few annotations. Recent studies have shown that fine-tuning pre-trained language models …