Pre-trained language models for text generation: A survey
Text Generation aims to produce plausible and readable text in human language from input
data. The resurgence of deep learning has greatly advanced this field, in particular, with the …
Transformer technology in molecular science
A transformer is the foundational architecture behind large language models, designed to
handle sequential data by using self-attention mechanisms to weigh the importance of …
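The self-attention mechanism this abstract refers to can be pictured with a toy sketch; the following single-head NumPy implementation is illustrative only (matrices and dimensions are made up, not code from the cited work):

```python
# Toy sketch of scaled dot-product self-attention: each position's output is a
# weighted sum of all positions' values, with weights given by query-key similarity.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])            # pairwise similarities
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)           # softmax over positions
    return weights @ v                                   # attention-weighted values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))                             # 5 tokens, d_model = 16
w_q, w_k, w_v = (rng.normal(size=(16, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)                   # shape (5, 8)
```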
One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline
In Text-to-AMR parsing, current state-of-the-art semantic parsers use cumbersome pipelines
integrating several different modules or components, and exploit graph recategorization, i.e. …
Investigating pretrained language models for graph-to-text generation
Graph-to-text generation aims to generate fluent texts from graph-based data. In this paper,
we investigate two recently proposed pretrained language models (PLMs) and analyze the …
Graph pre-training for AMR parsing and generation
Abstract meaning representation (AMR) highlights the core semantic information of text in a
graph structure. Recently, pre-trained language models (PLMs) have advanced tasks of …
JointGT: Graph-text joint representation learning for text generation from knowledge graphs
Existing pre-trained models for knowledge-graph-to-text (KG-to-text) generation simply fine-
tune text-to-text pre-trained models such as BART or T5 on KG-to-text datasets, which …
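The "simply fine-tune" baseline this abstract contrasts itself with amounts to flattening the knowledge-graph triples into a string and training a text-to-text model on it. A hedged sketch using Hugging Face Transformers and T5 follows; the <H>/<R>/<T> markers and example triples are illustrative, not taken from the paper:

```python
# Sketch of the plain linearize-and-fine-tune KG-to-text baseline (one training step).
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Flatten the triple set into a single source string; real systems usually add
# the marker tokens to the tokenizer vocabulary first.
triples = [("Alan Turing", "field", "computer science"),
           ("Alan Turing", "birthPlace", "London")]
source = " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)
target = "Alan Turing, who was born in London, worked in computer science."

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss   # standard seq2seq cross-entropy
loss.backward()                              # optimizer step omitted for brevity
```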
Tailor: Generating and perturbing text with semantic controls
Controlled text perturbation is useful for evaluating and improving model generalizability.
However, current techniques rely on training a model for every target perturbation, which is …
Structural adapters in pretrained language models for AMR-to-text generation
Pretrained language models (PLM) have recently advanced graph-to-text generation, where
the input graph is linearized into a sequence and fed into the PLM to obtain its …
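The linearization step described here can be pictured with a small toy example: a depth-first traversal that turns the graph into a bracketed token sequence a PLM can read as ordinary text. The graph and labels below are invented for illustration:

```python
# Toy graph linearization: turn an edge-labelled graph into a flat token sequence.
def linearize(graph, node, visited=None):
    visited = set() if visited is None else visited
    visited.add(node)
    tokens = ["(", node]
    for label, child in graph.get(node, []):
        tokens.append(f":{label}")
        if child in visited:
            tokens.append(child)                 # re-entrant node: emit label only
        else:
            tokens.extend(linearize(graph, child, visited))
    tokens.append(")")
    return tokens

# "The boy wants to go." as a tiny AMR-like graph.
amr = {"want-01": [("ARG0", "boy"), ("ARG1", "go-02")],
       "go-02": [("ARG0", "boy")]}
print(" ".join(linearize(amr, "want-01")))
# ( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 boy ) )
```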
An information fusion based approach to context-based fine-tuning of GPT models
In the new era of Artificial Intelligence (AI), the Generative Pre-trained Transformer (GPT) has
emerged as a central technique for generating human-like text. Over recent years, there …
DEAM: Dialogue coherence evaluation using AMR-based semantic manipulations
Automatic evaluation metrics are essential for the rapid development of open-domain
dialogue systems as they facilitate hyper-parameter tuning and comparison between …