A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT
Recently, ChatGPT, along with DALL-E-2 and Codex, has been gaining significant attention
from society. As a result, many individuals have become interested in related resources and …
When large language models meet personalization: Perspectives of challenges and opportunities
The advent of large language models marks a revolutionary breakthrough in artificial
intelligence. With the unprecedented scale of training and model parameters, the capability …
A survey of knowledge enhanced pre-trained language models
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-
supervised learning, have yielded promising performance on various tasks in …
Sparks: Inspiration for science writing using language models
Large-scale language models are rapidly improving, performing well on a wide variety of
tasks with little to no customization. In this work we investigate how language models can …
KagNet: Knowledge-aware graph networks for commonsense reasoning
Commonsense reasoning aims to empower machines with the human ability to make
presumptions about ordinary situations in our daily life. In this paper, we propose a textual …
COMET: Commonsense transformers for automatic knowledge graph construction
We present the first comprehensive study on automatic knowledge base construction for two
prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet …
KRISP: Integrating implicit and symbolic knowledge for open-domain knowledge-based VQA
One of the most challenging question types in VQA is when answering the question requires
outside knowledge not present in the image. In this work we study open-domain knowledge …
SWAG: A large-scale adversarial dataset for grounded commonsense inference
Given a partial description like "she opened the hood of the car," humans can reason about
the situation and anticipate what might come next ("then, she examined the engine"). In this …
CommonGen: A constrained text generation challenge for generative commonsense reasoning
Recently, large-scale pre-trained language models have demonstrated impressive
performance on several commonsense-reasoning benchmark datasets. However, building …
Commonsense knowledge mining from pretrained models
Inferring commonsense knowledge is a key challenge in machine learning. Due to the
sparsity of training data, previous work has shown that supervised methods for …