The rise and potential of large language model based agents: A survey
For a long time, researchers have sought artificial intelligence (AI) that matches or exceeds
human intelligence. AI agents, which are artificial entities capable of sensing the …
A comprehensive overview of large language models
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in
natural language processing tasks and beyond. This success of LLMs has led to a large …
GPT-4 technical report
We report the development of GPT-4, a large-scale, multimodal model which can accept
image and text inputs and produce text outputs. While less capable than humans in many …
Llama 2: Open foundation and fine-tuned chat models
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large
language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine …
QLoRA: Efficient finetuning of quantized LLMs
We present QLoRA, an efficient finetuning approach that reduces memory usage enough to
finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit …
A survey of large language models
Language is essentially a complex, intricate system of human expressions governed by
grammatical rules. It poses a significant challenge to develop capable AI algorithms for …
Self-refine: Iterative refinement with self-feedback
Like humans, large language models (LLMs) do not always generate the best output on their
first try. Motivated by how humans refine their written text, we introduce Self-Refine, an …
Universal and transferable adversarial attacks on aligned language models
Because "out-of-the-box" large language models are capable of generating a great deal of
objectionable content, recent work has focused on aligning these models in an attempt to …
LIMA: Less is more for alignment
Large language models are trained in two stages: (1) unsupervised pretraining from raw text,
to learn general-purpose representations, and (2) large scale instruction tuning and …
The flan collection: Designing data and methods for effective instruction tuning
We study the design decisions of publicly available instruction tuning methods, by
reproducing and breaking down the development of Flan 2022 (Chung et al., 2022) …