Retrieval-augmented generation for natural language processing: A survey
Large language models (LLMs) have demonstrated great success in various fields,
benefiting from the vast number of parameters that store knowledge. However, LLMs still …
Text-to-image diffusion models in generative AI: A survey
This survey reviews text-to-image diffusion models in the context of diffusion models having
become popular for a wide range of generative tasks. As a self-contained work, this …
Learning to retrieve prompts for in-context learning
In-context learning is a recent paradigm in natural language understanding, where a large
pre-trained language model (LM) observes a test instance and a few training examples as …
What Makes Good In-Context Examples for GPT-3?
GPT-3 has attracted lots of attention due to its superior performance across a wide range
of NLP tasks, especially with its powerful and versatile in-context few-shot learning ability …
Retrieval augmentation reduces hallucination in conversation
Despite showing increasingly human-like conversational abilities, state-of-the-art dialogue
models often suffer from factual incorrectness and hallucination of knowledge (Roller et al …
In-context examples selection for machine translation
Large-scale generative models show an impressive ability to perform a wide range of
Natural Language Processing (NLP) tasks using in-context learning, where a few examples …
Beyond english-centric multilingual machine translation
Existing work in translation has demonstrated the potential of massively multilingual machine
translation by training a single model able to translate between any pair of languages …
Text classification via large language models
Despite the remarkable success of large-scale Language Models (LLMs) such as GPT-3,
they still significantly underperform fine-tuned models in the task of text …
Retrieval-augmented diffusion models
Novel architectures have recently improved generative image synthesis leading to excellent
visual quality in various tasks. Much of this success is due to the scalability of these …
Findings of the 2022 conference on machine translation (WMT22)
This paper presents the results of the General Machine Translation Task organised as part
of the Conference on Machine Translation (WMT) 2022. In the general MT task, participants …