The rise and potential of large language model based agents: A survey

Z Xi, W Chen, X Guo, W He, Y Ding, B Hong… - Science China …, 2025 - Springer
For a long time, researchers have sought artificial intelligence (AI) that matches or exceeds
human intelligence. AI agents, which are artificial entities capable of sensing the …

Optimizing prompts for text-to-image generation

Y Hao, Z Chi, L Dong, F Wei - Advances in Neural …, 2024 - proceedings.neurips.cc
Well-designed prompts can guide text-to-image models to generate amazing images.
However, performant prompts are often model-specific and misaligned with user input …

Bridging the gap: A survey on integrating (human) feedback for natural language generation

P Fernandes, A Madaan, E Liu, A Farinhas… - Transactions of the …, 2023 - direct.mit.edu
Natural language generation has witnessed significant advancements due to the training of
large language models on vast internet-scale datasets. Despite these advancements, there …

Fine-grained human feedback gives better rewards for language model training

Z Wu, Y Hu, W Shi, N Dziri, A Suhr… - Advances in …, 2024 - proceedings.neurips.cc
Language models (LMs) often exhibit undesirable text generation behaviors,
including generating false, toxic, or irrelevant outputs. Reinforcement learning from human …

Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback

HR Kirk, B Vidgen, P Röttger, SA Hale - arXiv preprint arXiv:2303.05453, 2023 - arxiv.org
Large language models (LLMs) are used to generate content for a wide range of tasks, and
are set to reach a growing audience in coming years due to integration in product interfaces …

PLACES: Prompting language models for social conversation synthesis

M Chen, A Papangelis, C Tao, S Kim… - arXiv preprint arXiv …, 2023 - arxiv.org
Collecting high-quality conversational data can be very expensive for most applications and
infeasible for others due to privacy, ethical, or similar concerns. A promising direction to …

A Primer on Seq2Seq Models for Generative Chatbots

V Scotti, L Sbattella, R Tedesco - ACM Computing Surveys, 2023 - dl.acm.org
The recent spread of Deep Learning-based solutions for Artificial Intelligence and the
development of Large Language Models have significantly pushed forward the Natural …

On improving summarization factual consistency from natural language feedback

Y Liu, B Deb, M Teruel, A Halfaker, D Radev… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite the recent progress in language generation models, their outputs may not always
meet user expectations. In this work, we study whether informational feedback in natural …

I2D2: Inductive knowledge distillation with NeuroLogic and self-imitation

C Bhagavatula, JD Hwang, D Downey, RL Bras… - arXiv preprint arXiv …, 2022 - arxiv.org
Commonsense capabilities of pre-trained language models dramatically improve with scale,
leading many to believe that scale is the only winning recipe. But is it? Here, we investigate …

Reasons to reject? Aligning language models with judgments

W Xu, D Cai, Z Zhang, W Lam, S Shi - arXiv preprint arXiv:2312.14591, 2023 - arxiv.org
As humans, we consistently engage in interactions with our peers and receive feedback in
the form of natural language. This language feedback allows us to reflect on our actions …