Large language models on graphs: A comprehensive survey

B Jin, G Liu, C Han, M Jiang, H Ji… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Large language models (LLMs), such as GPT-4 and LLaMA, are creating significant
advancements in natural language processing, due to their strong text encoding/decoding …

Recent advances in natural language processing via large pre-trained language models: A survey

B Min, H Ross, E Sulem, APB Veyseh… - ACM Computing …, 2023 - dl.acm.org
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …

BeaverTails: Towards improved safety alignment of LLM via a human-preference dataset

J Ji, M Liu, J Dai, X Pan, C Zhang… - Advances in …, 2024 - proceedings.neurips.cc
In this paper, we introduce the BeaverTails dataset, aimed at fostering research on safety
alignment in large language models (LLMs). This dataset uniquely separates annotations of …
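A minimal sketch of what such a decoupled record might look like; the field names and the two-axis split shown here are illustrative assumptions, not the dataset's actual schema.

    from dataclasses import dataclass

    # Hypothetical record layout: harmlessness and helpfulness are annotated
    # independently, so a response can be rated helpful yet unsafe (or vice versa).
    @dataclass
    class QARecord:
        prompt: str
        response: str
        is_safe: bool      # harmlessness judgment
        helpful_rank: int  # helpfulness judged on a separate axis

    rec = QARecord(
        prompt="How do I secure my home network?",
        response="Change default router passwords and enable WPA3.",
        is_safe=True,
        helpful_rank=1,
    )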

Benchmarking large language models for news summarization

T Zhang, F Ladhak, E Durmus, P Liang… - Transactions of the …, 2024 - direct.mit.edu
Large language models (LLMs) have shown promise for automatic summarization, but the
reasons behind their successes are poorly understood. By conducting a human evaluation …

An empirical study of training end-to-end vision-and-language transformers

ZY Dou, Y Xu, Z Gan, J Wang, S Wang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Vision-and-language (VL) pre-training has proven to be highly effective on various
VL downstream tasks. While recent work has shown that fully transformer-based VL models …

BARTScore: Evaluating generated text as text generation

W Yuan, G Neubig, P Liu - Advances in Neural Information …, 2021 - proceedings.neurips.cc
A wide variety of NLP applications, such as machine translation, summarization, and dialog,
involve text generation. One major challenge for these applications is how to evaluate …
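The core idea can be sketched in a few lines: treat evaluation itself as a generation problem and score a candidate by the likelihood a seq2seq model assigns to producing it from the source. The checkpoint and mean-log-probability reduction below are plausible choices, not necessarily the paper's exact configuration.

    import torch
    from transformers import BartForConditionalGeneration, BartTokenizer

    tok = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn").eval()

    def bart_score(source: str, hypothesis: str) -> float:
        # Higher score = the model would more readily generate the hypothesis
        # from the source, used as a proxy for quality/faithfulness.
        src = tok(source, return_tensors="pt", truncation=True)
        tgt = tok(hypothesis, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**src, labels=tgt["input_ids"]).logits
            logprobs = torch.log_softmax(logits, dim=-1)
            token_lp = logprobs.gather(-1, tgt["input_ids"].unsqueeze(-1)).squeeze(-1)
        return token_lp.mean().item()  # average token log-likelihood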

ChatGPT as a factual inconsistency evaluator for text summarization

Z Luo, Q Xie, S Ananiadou - arXiv preprint arXiv:2303.15621, 2023 - arxiv.org
The performance of text summarization has been greatly boosted by pre-trained language
models. A main concern of existing methods is that most generated summaries are not …
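A rough sketch of the zero-shot judging setup, assuming the current OpenAI Python client; the prompt wording and model name are illustrative, not the paper's exact protocol.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def judge_consistency(article: str, summary: str) -> str:
        # Ask for a binary entailment-style verdict on the summary.
        prompt = (
            "Decide whether the summary is factually consistent with the article.\n"
            f"Article: {article}\n"
            f"Summary: {summary}\n"
            "Answer Yes or No."
        )
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content.strip()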

GLM: General language model pretraining with autoregressive blank infilling

Z Du, Y Qian, X Liu, M Ding, J Qiu, Z Yang… - arXiv preprint arXiv …, 2021 - arxiv.org
There have been various types of pretraining architectures including autoencoding models
(e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5) …
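A toy illustration of the blank-infilling objective: mask out a span, then have the model regenerate it autoregressively after the corrupted context. This is simplified to a single span with made-up sentinel tokens; GLM itself samples multiple spans, shuffles them, and uses 2D positional encodings.

    import random

    def make_infilling_example(tokens, span_ratio=0.15):
        # Corrupt the input by replacing one span with [MASK], then append
        # the span as the autoregressive target, delimited by sentinels.
        n = max(1, int(len(tokens) * span_ratio))
        start = random.randrange(len(tokens) - n + 1)
        span = tokens[start:start + n]
        corrupted = tokens[:start] + ["[MASK]"] + tokens[start + n:]
        return corrupted + ["[START]"] + span + ["[END]"]

    print(make_infilling_example("the cat sat on the mat".split()))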

PTR: Prompt tuning with rules for text classification

X Han, W Zhao, N Ding, Z Liu, M Sun - AI Open, 2022 - Elsevier
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …
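A sketch of the rule-composed template idea: each condition of a logic rule contributes a sub-prompt with its own [MASK], and a masked LM scores candidate label words at every mask. The template wording and label words below are illustrative, not PTR's exact ones.

    # Sub-prompts composed by the rule:
    #   (subj is a person) AND (relation) AND (obj is an organization)
    LABEL_WORDS = {
        "subj_type": ["person", "organization"],
        "relation":  ["founded", "joined"],
        "obj_type":  ["person", "organization"],
    }

    def build_prompt(sentence: str, subj: str, obj: str) -> str:
        return f"{sentence} The [MASK] {subj} [MASK] the [MASK] {obj}."

    print(build_prompt("Jobs started Apple in 1976.", "Jobs", "Apple"))

A masked LM (e.g., BERT) then fills each [MASK] from its label-word set, and the joint assignment maps back to a relation class.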

Prefix-tuning: Optimizing continuous prompts for generation

XL Li, P Liang - arXiv preprint arXiv:2101.00190, 2021 - arxiv.org
Fine-tuning is the de facto way to leverage large pretrained language models to perform
downstream tasks. However, it modifies all the language model parameters and therefore …
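A simplified PyTorch sketch of the idea: freeze the backbone and train only a short sequence of continuous prefix vectors. For brevity this prepends the prefix at the embedding layer only, whereas the paper prepends trainable activations to the keys and values of every attention layer.

    import torch
    from torch import nn

    class PrefixTunedLM(nn.Module):
        def __init__(self, lm, prefix_len=10, d_model=768):
            # lm: any HF-style model that accepts an inputs_embeds kwarg.
            super().__init__()
            self.lm = lm
            for p in self.lm.parameters():
                p.requires_grad = False  # backbone stays frozen
            self.prefix = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)

        def forward(self, input_embeds):  # [batch, seq, d_model]
            batch = input_embeds.size(0)
            prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
            return self.lm(inputs_embeds=torch.cat([prefix, input_embeds], dim=1))

Only the prefix parameters receive gradients, so each downstream task stores a few thousand values instead of a full copy of the model.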