CAMEL: Communicative agents for "mind" exploration of large language model society

G Li, H Hammoud, H Itani… - Advances in Neural …, 2023 - proceedings.neurips.cc
The rapid advancement of chat-based language models has led to remarkable progress in
complex task-solving. However, their success heavily relies on human input to guide the …

Baize: An open-source chat model with parameter-efficient tuning on self-chat data

C Xu, D Guo, N Duan, J McAuley - arXiv preprint arXiv:2304.01196, 2023 - arxiv.org
Chat models, such as ChatGPT, have shown impressive capabilities and have been rapidly
adopted across numerous domains. However, these models are only accessible through a …

Data augmentation using LLMs: data perspectives, learning paradigms and challenges

B Ding, C Qin, R Zhao, T Luo, X Li… - Findings of the …, 2024 - aclanthology.org
In the rapidly evolving field of large language models (LLMs), data augmentation (DA) has
emerged as a pivotal technique for enhancing model performance by diversifying training …

User simulation for evaluating information access systems

K Balog, CX Zhai - Proceedings of the Annual International ACM SIGIR …, 2023 - dl.acm.org
With the emergence of various information access systems exhibiting increasing complexity,
there is a critical need for sound and scalable means of automatic evaluation. To address …

SODA: Million-scale dialogue distillation with social commonsense contextualization

H Kim, J Hessel, L Jiang, P West, X Lu, Y Yu… - arXiv preprint arXiv …, 2022 - arxiv.org
We present SODA: the first publicly available, million-scale high-quality social dialogue
dataset. Using SODA, we train COSMO: a generalizable conversation agent outperforming …

Explanations from large language models make small reasoners better

S Li, J Chen, Y Shen, Z Chen, X Zhang, Z Li… - arXiv preprint arXiv …, 2022 - arxiv.org
Integrating free-text explanations to in-context learning of large language models (LLM) is
shown to elicit strong reasoning capabilities along with reasonable explanations. In this …

Lift yourself up: Retrieval-augmented text generation with self-memory

X Cheng, D Luo, X Chen, L Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
With direct access to human-written reference as memory, retrieval-augmented generation
has achieved much progress in a wide range of text generation tasks. Since better memory …

A survey on data synthesis and augmentation for large language models

K Wang, J Zhu, M Ren, Z Liu, S Li, Z Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
The success of Large Language Models (LLMs) is inherently linked to the availability of vast,
diverse, and high-quality data for training and evaluation. However, the growth rate of high …

SGP-TOD: Building task bots effortlessly via schema-guided LLM prompting

X Zhang, B Peng, K Li, J Zhou, H Meng - arXiv preprint arXiv:2305.09067, 2023 - arxiv.org
Building end-to-end task bots and maintaining their integration with new functionalities using
minimal human efforts is a long-standing challenge in dialog research. Recently large …

Unlocking the potential of user feedback: Leveraging large language model as user simulators to enhance dialogue system

Z Hu, Y Feng, AT Luu, B Hooi, A Lipani - Proceedings of the 32nd ACM …, 2023 - dl.acm.org
Dialogue systems and large language models (LLMs) have gained considerable attention.
However, the direct utilization of LLMs as task-oriented dialogue (TOD) models has been …