A comprehensive overview of large language models

H Naveed, AU Khan, S Qiu, M Saqib, S Anwar… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in
natural language processing tasks and beyond. This success of LLMs has led to a large …

Crosslingual generalization through multitask finetuning

N Muennighoff, T Wang, L Sutawika, A Roberts… - arXiv preprint arXiv …, 2022 - arxiv.org
Multitask prompted finetuning (MTF) has been shown to help large language models
generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused …

Datasets for large language models: A comprehensive survey

Y Liu, J Cao, C Liu, K Ding, L Jin - arXiv preprint arXiv:2402.18041, 2024 - arxiv.org
This paper embarks on an exploration into the Large Language Model (LLM) datasets,
which play a crucial role in the remarkable advancements of LLMs. The datasets serve as …

VisIT-Bench: A benchmark for vision-language instruction following inspired by real-world use

Y Bitton, H Bansal, J Hessel, R Shao, W Zhu… - arXiv preprint arXiv …, 2023 - arxiv.org
We introduce VisIT-Bench (Visual InsTruction Benchmark), a benchmark for evaluation of
instruction-following vision-language models for real-world use. Our starting point is curating …

DataComp-LM: In search of the next generation of training sets for language models

J Li, A Fang, G Smyrnis, M Ivgi, M Jordan… - arXiv preprint arXiv …, 2024 - arxiv.org
We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset
experiments with the goal of improving language models. As part of DCLM, we provide a …

Data management for large language models: A survey

Z Wang, W Zhong, Y Wang, Q Zhu, F Mi, B Wang… - CoRR, 2023 - openreview.net
Data plays a fundamental role in the training of Large Language Models (LLMs). Effective
data management, particularly in the formulation of a well-suited training dataset, holds …

The shifted and the overlooked: A task-oriented investigation of user-GPT interactions

S Ouyang, S Wang, Y Liu, M Zhong, Y Jiao… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent progress in Large Language Models (LLMs) has produced models that exhibit
remarkable performance across a variety of NLP tasks. However, it remains unclear whether …

Active instruction tuning: Improving cross-task generalization by training on prompt sensitive tasks

PN Kung, F Yin, D Wu, KW Chang, N Peng - arXiv preprint arXiv …, 2023 - arxiv.org
Instruction tuning (IT) achieves impressive zero-shot generalization results by training large
language models (LLMs) on a massive amount of diverse tasks with instructions. However …

Suri: Multi-constraint instruction following for long-form text generation

CM Pham, S Sun, M Iyyer - arXiv preprint arXiv:2406.19371, 2024 - arxiv.org
Existing research on instruction following largely focuses on tasks with simple instructions
and short responses. In this work, we explore multi-constraint instruction following for …

Muffin: Curating multi-faceted instructions for improving instruction following

R Lou, K Zhang, J Xie, Y Sun, J Ahn, H Xu… - The Twelfth …, 2023 - openreview.net
In the realm of large language models (LLMs), enhancing instruction-following capability
often involves curating expansive training data. This is achieved through two primary …