Pre-trained language models for text generation: A survey

J Li, T Tang, WX Zhao, JY Nie, JR Wen - ACM Computing Surveys, 2024 - dl.acm.org
Text Generation aims to produce plausible and readable text in human language from input
data. The resurgence of deep learning has greatly advanced this field, in particular, with the …

Transformer technology in molecular science

J Jiang, L Ke, L Chen, B Dou, Y Zhu… - Wiley …, 2024 - Wiley Online Library
A transformer is the foundational architecture behind large language models designed to
handle sequential data by using mechanisms of self‐attention to weigh the importance of …
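The self-attention mechanism referenced in this snippet weights each position's value vector by the similarity between query and key vectors. A minimal single-head sketch in NumPy (the shapes and the single-head simplification are illustrative assumptions, not this paper's implementation):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X.

    X:          (seq_len, d_model) input token representations
    Wq, Wk, Wv: (d_model, d_k) projection matrices (assumed learned elsewhere)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # project inputs to queries/keys/values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise similarities, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                            # each position: weighted sum of value vectors
```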

One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline

M Bevilacqua, R Blloshmi, R Navigli - Proceedings of the AAAI …, 2021 - ojs.aaai.org
In Text-to-AMR parsing, current state-of-the-art semantic parsers use cumbersome pipelines
integrating several different modules or components, and exploit graph recategorization, i.e., …

Investigating pretrained language models for graph-to-text generation

LFR Ribeiro, M Schmitt, H Schütze… - arXiv preprint arXiv …, 2020 - arxiv.org
Graph-to-text generation aims to generate fluent texts from graph-based data. In this paper,
we investigate two recently proposed pretrained language models (PLMs) and analyze the …

Graph pre-training for AMR parsing and generation

X Bai, Y Chen, Y Zhang - arXiv preprint arXiv:2203.07836, 2022 - arxiv.org
Abstract meaning representation (AMR) highlights the core semantic information of text in a
graph structure. Recently, pre-trained language models (PLMs) have advanced tasks of …
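AMR, as described here, encodes a sentence's predicate-argument structure as a rooted, directed graph. A small illustration using the open-source penman library (the example sentence and its AMR are standard textbook ones, not drawn from this paper):

```python
import penman  # pip install penman

# AMR for "The boy wants to go", written in PENMAN notation.
amr = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"

graph = penman.decode(amr)
for source, role, target in graph.triples:
    print(source, role, target)
# e.g. ('w', ':instance', 'want-01'), ('w', ':ARG0', 'b'), ...
```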

JointGT: Graph-text joint representation learning for text generation from knowledge graphs

P Ke, H Ji, Y Ran, X Cui, L Wang, L Song, X Zhu… - arXiv preprint arXiv …, 2021 - arxiv.org
Existing pre-trained models for knowledge-graph-to-text (KG-to-text) generation simply fine-
tune text-to-text pre-trained models such as BART or T5 on KG-to-text datasets, which …
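The fine-tuning recipe this snippet refers to treats a linearized knowledge graph as the source sequence and the reference text as the target. A minimal sketch with Hugging Face transformers (the checkpoint, the `<H>/<R>/<T>` linearization format, and the single training step are illustrative assumptions):

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# One KG-to-text pair: triples flattened into a source string (marker format is an assumption;
# in practice the <H>/<R>/<T> markers would be added to the tokenizer vocabulary).
source = "<H> Alan Bean <R> occupation <T> astronaut <H> Alan Bean <R> birth place <T> Wheeler"
target = "Alan Bean is an astronaut who was born in Wheeler."

batch = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# Standard seq2seq fine-tuning step: cross-entropy loss on the target tokens.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```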

Tailor: Generating and perturbing text with semantic controls

A Ross, T Wu, H Peng, ME Peters… - arXiv preprint arXiv …, 2021 - arxiv.org
Controlled text perturbation is useful for evaluating and improving model generalizability.
However, current techniques rely on training a model for every target perturbation, which is …

Structural adapters in pretrained language models for AMR-to-text generation

LFR Ribeiro, Y Zhang, I Gurevych - arXiv preprint arXiv:2103.09120, 2021 - arxiv.org
Pretrained language models (PLMs) have recently advanced graph-to-text generation, where
the input graph is linearized into a sequence and fed into the PLM to obtain its …
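The linearization step mentioned here turns the graph into a token sequence before it reaches the PLM. A minimal depth-first sketch (the bracketed format and the toy graph are one common convention, assumed for illustration rather than this paper's exact scheme):

```python
def linearize(graph, root):
    """Depth-first linearization of a labeled graph into a token sequence.

    graph: dict mapping node -> list of (edge_label, child_node)
    Returns a flat token list such as ['(', 'want-01', ':ARG0', '(', 'boy', ')', ...].
    Re-entrant nodes are simply repeated here; real AMR linearizations use variables.
    """
    tokens = ["(", root]
    for label, child in graph.get(root, []):
        tokens.append(label)
        tokens.extend(linearize(graph, child))
    tokens.append(")")
    return tokens

amr = {"want-01": [(":ARG0", "boy"), (":ARG1", "go-02")],
       "go-02": [(":ARG0", "boy")]}
print(" ".join(linearize(amr, "want-01")))
# ( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 ( boy ) ) )
```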

An information fusion based approach to context-based fine-tuning of GPT models

T Nguyen-Mau, AC Le, DH Pham, VN Huynh - Information Fusion, 2024 - Elsevier
In the new era of Artificial Intelligence (AI), Generative Pre-Trained Transformer (GPT) has
emerged as a central technique for generating human-like texts. Over recent years, there …

DEAM: Dialogue coherence evaluation using AMR-based semantic manipulations

S Ghazarian, N Wen, A Galstyan, N Peng - arXiv preprint arXiv …, 2022 - arxiv.org
Automatic evaluation metrics are essential for the rapid development of open-domain
dialogue systems as they facilitate hyper-parameter tuning and comparison between …