A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT

Y Cao, S Li, Y Liu, Z Yan, Y Dai, PS Yu… - arXiv preprint arXiv…, 2023 - arxiv.org
Recently, ChatGPT, along with DALL-E-2 and Codex, has been gaining significant attention
from society. As a result, many individuals have become interested in related resources and …

When large language models meet personalization: Perspectives of challenges and opportunities

J Chen, Z Liu, X Huang, C Wu, Q Liu, G Jiang, Y Pu… - World Wide Web, 2024 - Springer
The advent of large language models marks a revolutionary breakthrough in artificial
intelligence. With the unprecedented scale of training and model parameters, the capability …

A survey of knowledge enhanced pre-trained language models

L Hu, Z Liu, Z Zhao, L Hou, L Nie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised
learning, have yielded promising performance on various tasks in …

Sparks: Inspiration for science writing using language models

KI Gero, V Liu, L Chilton - Proceedings of the 2022 ACM Designing …, 2022 - dl.acm.org
Large-scale language models are rapidly improving, performing well on a wide variety of
tasks with little to no customization. In this work we investigate how language models can …

KagNet: Knowledge-aware graph networks for commonsense reasoning

BY Lin, X Chen, J Chen, X Ren - arXiv preprint arXiv:1909.02151, 2019 - arxiv.org
Commonsense reasoning aims to empower machines with the human ability to make
presumptions about ordinary situations in our daily life. In this paper, we propose a textual …

COMET: Commonsense transformers for automatic knowledge graph construction

A Bosselut, H Rashkin, M Sap, C Malaviya… - arXiv preprint arXiv…, 2019 - arxiv.org
We present the first comprehensive study on automatic knowledge base construction for two
prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet …

KRISP: Integrating implicit and symbolic knowledge for open-domain knowledge-based VQA

K Marino, X Chen, D Parikh, A Gupta… - Proceedings of the …, 2021 - openaccess.thecvf.com
One of the most challenging question types in VQA is when answering the question requires
outside knowledge not present in the image. In this work we study open-domain knowledge …

SWAG: A large-scale adversarial dataset for grounded commonsense inference

R Zellers, Y Bisk, R Schwartz, Y Choi - arXiv preprint arXiv:1808.05326, 2018 - arxiv.org
Given a partial description like "she opened the hood of the car," humans can reason about
the situation and anticipate what might come next ("then, she examined the engine"). In this …

CommonGen: A constrained text generation challenge for generative commonsense reasoning

BY Lin, W Zhou, M Shen, P Zhou… - arXiv preprint arXiv…, 2019 - arxiv.org
Recently, large-scale pre-trained language models have demonstrated impressive
performance on several commonsense-reasoning benchmark datasets. However, building …

Commonsense knowledge mining from pretrained models

J Davison, J Feldman, AM Rush - Proceedings of the 2019 …, 2019 - aclanthology.org
Inferring commonsense knowledge is a key challenge in machine learning. Due to the
sparsity of training data, previous work has shown that supervised methods for …