A survey of knowledge enhanced pre-trained language models

L Hu, Z Liu, Z Zhao, L Hou, L Nie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-
supervised learning, have yielded promising performance on various tasks in …

A survey of knowledge-enhanced text generation

W Yu, C Zhu, Z Li, Z Hu, Q Wang, H Ji… - ACM Computing …, 2022 - dl.acm.org
The goal of text-to-text generation is to make machines express themselves like humans in many
applications such as conversation, summarization, and translation. It is one of the most …

Unifying large language models and knowledge graphs: A roadmap

S Pan, L Luo, Y Wang, C Chen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Large language models (LLMs), such as ChatGPT and GPT-4, are making new waves in the
field of natural language processing and artificial intelligence, due to their emergent ability …

K-lite: Learning transferable visual models with external knowledge

S Shen, C Li, X Hu, Y Xie, J Yang… - Advances in …, 2022 - proceedings.neurips.cc
The new generation of state-of-the-art computer vision systems is trained from natural
language supervision, ranging from simple object category names to descriptive captions …

Jaket: Joint pre-training of knowledge graph and language understanding

D Yu, C Zhu, Y Yang, M Zeng - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
Knowledge graphs (KGs) contain rich information about world knowledge, entities,
and relations. Thus, they can be great supplements to existing pre-trained language models …

A survey of multi-task learning in natural language processing: Regarding task relatedness and training methods

Z Zhang, W Yu, M Yu, Z Guo, M Jiang - arXiv preprint arXiv:2204.03508, 2022 - arxiv.org
Multi-task learning (MTL) has become increasingly popular in natural language processing
(NLP) because it improves the performance of related tasks by exploiting their …

Learning customized visual models with retrieval-augmented knowledge

H Liu, K Son, J Yang, C Liu, J Gao… - Proceedings of the …, 2023 - openaccess.thecvf.com
Image-text contrastive learning models such as CLIP have demonstrated strong task transfer
ability. The high generality and usability of these visual models are achieved via a web-scale …

Adaprompt: Adaptive model training for prompt-based nlp

Y Chen, Y Liu, L Dong, S Wang, C Zhu, M Zeng… - arXiv preprint arXiv …, 2022 - arxiv.org
Prompt-based learning, with its capability to tackle zero-shot and few-shot NLP tasks, has
gained much attention in the community. The main idea is to bridge the gap between NLP …

Knowledge-augmented methods for natural language processing

C Zhu, Y Xu, X Ren, BY Lin, M Jiang, W Yu - Proceedings of the sixteenth …, 2023 - dl.acm.org
Knowledge in NLP has been a rising trend, especially after the advent of large-scale pre-
trained models. Knowledge is critical to equip statistics-based models with common sense …

Diversifying content generation for commonsense reasoning with mixture of knowledge graph experts

W Yu, C Zhu, L Qin, Z Zhang, T Zhao… - arXiv preprint arXiv …, 2022 - arxiv.org
Generative commonsense reasoning (GCR) in natural language is the task of reasoning about
commonsense knowledge while generating coherent text. Recent years have seen a surge of interest in …