Tool learning with foundation models
Humans possess an extraordinary ability to create and utilize tools. With the advent of
foundation models, artificial intelligence systems have the potential to be equally adept in …
Woodpecker: Hallucination correction for multimodal large language models
Hallucination is a big shadow hanging over the rapidly evolving multimodal large language models (MLLMs), referring to the phenomenon that the generated text is inconsistent with the image content …
Enabling large language models to generate text with citations
Large language models (LLMs) have emerged as a widely-used tool for information
seeking, but their generated outputs are prone to hallucination. In this work, our aim is to …
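The setup here conditions the model on a handful of retrieved passages and asks it to cite them inline as [k]. A minimal sketch of that prompt construction plus a crude citation check; the prompt wording, helper names, and example answer are illustrative assumptions, not the paper's exact recipe:

```python
import re

def build_cited_prompt(question, passages):
    # Number the retrieved passages so the model can refer back to them as [k].
    ctx = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the passages below, and after each "
        "claim cite its supporting passage as [k].\n\n"
        f"{ctx}\n\nQuestion: {question}\nAnswer:"
    )

def cited_ids(answer, n_passages):
    # Which passage ids does the answer actually cite? (A crude recall check.)
    return {int(m) for m in re.findall(r"\[(\d+)\]", answer) if 1 <= int(m) <= n_passages}

passages = [
    "The Amazon is the largest rainforest on Earth.",
    "The Amazon river discharges more water than any other river.",
]
print(build_cited_prompt("Why is the Amazon notable?", passages))
answer = "It is the largest rainforest [1] with the highest-discharge river [2]."
print(cited_ids(answer, len(passages)))  # {1, 2}
```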
Factuality enhanced language models for open-ended text generation
Pretrained language models (LMs) are susceptible to generating text with nonfactual information. In this work, we measure and improve the factual accuracy of large-scale LMs …
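One of the decoding-side remedies this paper proposes is factual-nucleus sampling: the nucleus mass p decays as a sentence progresses, so later tokens are drawn more greedily, and p resets at each sentence boundary. A minimal sketch over a toy next-token distribution; the λ and ω values here are illustrative, not the paper's tuned settings:

```python
import numpy as np

def nucleus_filter(probs, p):
    # Keep the smallest set of tokens whose cumulative probability reaches p,
    # then renormalize over that set (standard top-p filtering).
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cum, p) + 1]
    out = np.zeros_like(probs)
    out[keep] = probs[keep]
    return out / out.sum()

def decayed_p(step, p=0.9, lam=0.9, omega=0.3):
    # Factual-nucleus schedule: p_t = max(omega, p * lam**t),
    # where `step` resets to 0 at the start of every new sentence.
    return max(omega, p * lam ** step)

rng = np.random.default_rng(0)
probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])
for t in range(4):  # t would reset after sentence-ending punctuation
    token = rng.choice(len(probs), p=nucleus_filter(probs, decayed_p(t)))
    print(t, round(decayed_p(t), 3), token)
```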
Dense text retrieval based on pretrained language models: A survey
Text retrieval is a long-standing research topic in information seeking, where a system is required to return relevant information resources in response to users' queries in natural language. From …
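The survey's central object is the dense (dual-encoder) retriever: queries and documents are mapped into one vector space, the corpus is encoded offline, and relevance reduces to an inner product at query time. A minimal sketch with a deterministic hashed "encoder" standing in for a trained model such as a BERT bi-encoder:

```python
import zlib
import numpy as np

def embed(text, dim=64):
    # Toy stand-in for a trained dual encoder: average of per-word
    # pseudo-random vectors, seeded deterministically from each word.
    vecs = [np.random.default_rng(zlib.crc32(w.encode())).standard_normal(dim)
            for w in text.lower().split()]
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

docs = [
    "dense retrieval maps queries and documents into one vector space",
    "sparse methods like bm25 match on exact term overlap",
]
index = np.stack([embed(d) for d in docs])     # offline: encode the corpus once
scores = index @ embed("vector space search")  # online: one dot product per doc
print(docs[int(np.argmax(scores))])
```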
Autoregressive search engines: Generating substrings as document identifiers
Knowledge-intensive language tasks require NLP systems to both provide the
correct answer and retrieve supporting evidence for it in a given corpus. Autoregressive …
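The idea here (SEAL) is to let the generator emit, under constrained decoding, only strings that occur verbatim in the corpus, so the generated substring itself identifies the supporting documents. A toy character-level sketch of that constraint; the real system constrains subword tokens with an FM-index rather than a hash map:

```python
from collections import defaultdict

corpus = [
    "autoregressive search engines generate substrings",
    "substrings act as document identifiers",
]

# Index which character may follow each corpus substring, and which documents
# each substring occurs in.
allowed, occurs = defaultdict(set), defaultdict(set)
for doc_id, doc in enumerate(corpus):
    for i in range(len(doc)):
        for j in range(i, len(doc)):
            allowed[doc[i:j]].add(doc[j])
            occurs[doc[i:j + 1]].add(doc_id)

def constrained_decode(start, steps):
    # Greedily extend `start`, only emitting characters that keep the output
    # an exact substring of some document; a real model would rank the
    # allowed options by LM probability instead of alphabetically.
    out = start
    for _ in range(steps):
        options = allowed.get(out)
        if not options:
            break
        out += min(options)
    return out, sorted(occurs[out])

print(constrained_decode("document id", 8))  # extends toward "document identifier"
```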
Rarr: Researching and revising what language models say, using language models
Language models (LMs) now excel at many tasks such as few-shot learning, question
answering, reasoning, and dialog. However, they sometimes generate unsupported or …
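RARR's loop, roughly: for each claim in a drafted answer, retrieve evidence and edit the claim only where the evidence disagrees. A minimal sketch with stand-in components; `retrieve`, `agrees`, and `revise` are stubs for the paper's query-generation, agreement, and editing models, not its actual implementations:

```python
def research_and_revise(draft, retrieve, agrees, revise):
    # One RARR-style pass: check each sentence against retrieved evidence
    # and rewrite it only when the evidence disagrees.
    revised = []
    for sentence in draft.split(". "):
        evidence = retrieve(sentence)
        revised.append(sentence if agrees(sentence, evidence)
                       else revise(sentence, evidence))
    return ". ".join(revised)

# Toy stubs: a one-entry "search engine" and string-overlap "agreement".
retrieve = lambda s: "Mount Everest is 8849 m tall" if "Everest" in s else ""
agrees = lambda s, e: not e or "8849" in s
revise = lambda s, e: e
draft = "Mount Everest is 9000 m tall. It lies in the Himalayas"
print(research_and_revise(draft, retrieve, agrees, revise))
# -> "Mount Everest is 8849 m tall. It lies in the Himalayas"
```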
Internet-augmented language models through few-shot prompting for open-domain question answering
In this work, we aim to capitalize on the unique few-shot capabilities of large-scale language
models (LSLMs) to overcome some of their challenges with respect to grounding to factual …
Re2G: Retrieve, rerank, generate
As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces
become larger and larger. However, for tasks that require a large amount of knowledge, non …
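The title spells out the pipeline: a cheap first-stage retriever over the whole corpus, a stronger reranker over the shortlist, then a generator conditioned on the survivors. A toy end-to-end sketch; Re2G itself combines BM25/DPR retrieval with a trained cross-encoder reranker and a BART generator, all of which are stubbed here:

```python
def retrieve(query, corpus, k=4):
    # First stage: cheap term-overlap recall over the whole corpus.
    q = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))[:k]

def rerank(query, candidates, k=2):
    # Second stage: a stronger scorer applied only to the short candidate
    # list; Re2G trains a cross-encoder here, this stub scores phrase hits.
    return sorted(candidates, key=lambda d: -int(query.lower() in d.lower()))[:k]

def generate(query, passages):
    # Final stage: condition a seq2seq model on the reranked passages (stubbed).
    return f"Q: {query} | grounded on: {passages}"

corpus = [
    "retrieve then rerank narrows candidates before generation",
    "reranking uses a cross encoder over query document pairs",
    "bm25 is a sparse retrieval baseline",
]
print(generate("rerank", rerank("rerank", retrieve("rerank", corpus))))
```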
Temporalwiki: A lifelong benchmark for training and evaluating ever-evolving language models
Language Models (LMs) become outdated as the world changes; they often fail to perform
tasks requiring recent factual information which was absent or different during training, a …