Recent advances in natural language processing via large pre-trained language models: A survey
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …
Advances, challenges and opportunities in creating data for trustworthy AI
As artificial intelligence (AI) transitions from research to deployment, creating the appropriate
datasets and data pipelines to develop and evaluate AI models is increasingly the biggest …
Llama 2: Open foundation and fine-tuned chat models
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large
language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine …
Pythia: A suite for analyzing large language models across training and scaling
How do large language models (LLMs) develop and evolve over the course of training?
How do these patterns change as models scale? To answer these questions, we introduce …
PaLM: Scaling language modeling with Pathways
Large language models have been shown to achieve remarkable performance across a
variety of natural language tasks using few-shot learning, which drastically reduces the …
BLOOM: A 176B-parameter open-access multilingual language model
Large language models (LLMs) have been shown to be able to perform new tasks based on
a few demonstrations or natural language instructions. While these capabilities have led to …
Holistic evaluation of language models
Language models (LMs) are becoming the foundation for almost all major language
technologies, but their capabilities, limitations, and risks are not well understood. We present …
The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only
Large language models are commonly trained on a mixture of filtered web data and curated
high-quality corpora, such as social media conversations, books, or technical papers. This …
Taxonomy of risks posed by language models
Responsible innovation on large-scale Language Models (LMs) requires foresight into and
in-depth understanding of the risks these models may pose. This paper develops a …
OBELICS: An open web-scale filtered dataset of interleaved image-text documents
Large multimodal models trained on natural documents, which interleave images and text,
outperform models trained on image-text pairs on various multimodal benchmarks …