Large language models on graphs: A comprehensive survey
Large language models (LLMs), such as GPT-4 and LLaMA, are driving significant
advances in natural language processing, thanks to their strong text encoding/decoding …
Recent advances in natural language processing via large pre-trained language models: A survey
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …
BeaverTails: Towards improved safety alignment of LLM via a human-preference dataset
In this paper, we introduce the BeaverTails dataset, aimed at fostering research on safety
alignment in large language models (LLMs). This dataset uniquely separates annotations of …
Benchmarking large language models for news summarization
Large language models (LLMs) have shown promise for automatic summarization but the
reasons behind their successes are poorly understood. By conducting a human evaluation …
An empirical study of training end-to-end vision-and-language transformers
Vision-and-language (VL) pre-training has proven to be highly effective on various
VL downstream tasks. While recent work has shown that fully transformer-based VL models …
BARTScore: Evaluating generated text as text generation
A wide variety of NLP applications, such as machine translation, summarization, and dialog,
involve text generation. One major challenge for these applications is how to evaluate …
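A minimal sketch of the BARTScore idea: score a candidate text by the likelihood a seq2seq model assigns to it when generating from the source. The Hugging Face model name and the use of the mean cross-entropy loss are simplifying assumptions, not the paper's exact weighted-log-probability formulation.

```python
# Sketch of the BARTScore idea: score a candidate by how probable BART
# finds it as a generation conditioned on the source text.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
model.eval()

def bart_score(source: str, candidate: str) -> float:
    src = tokenizer(source, return_tensors="pt", truncation=True)
    tgt = tokenizer(candidate, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(input_ids=src.input_ids,
                    attention_mask=src.attention_mask,
                    labels=tgt.input_ids)
    # out.loss is the mean token-level cross-entropy on the candidate;
    # negating it gives an average log-likelihood, so higher is better.
    return -out.loss.item()

print(bart_score("The cat sat on the mat all afternoon.", "A cat rested on a mat."))
```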
ChatGPT as a factual inconsistency evaluator for text summarization
The performance of text summarization has been greatly boosted by pre-trained language
models. A main concern of existing methods is that most generated summaries are not …
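In that spirit, a hedged sketch of prompting a ChatGPT-style model to act as a binary factual-consistency judge; the prompt wording and model name are illustrative assumptions, not the authors' exact evaluation protocol.

```python
# Illustrative sketch (not the paper's exact prompt or model) of using a
# chat model as a yes/no factual-consistency judge for a summary.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_consistency(article: str, summary: str) -> str:
    prompt = (f"Article:\n{article}\n\nSummary:\n{summary}\n\n"
              "Is the summary factually consistent with the article? "
              "Answer Yes or No, then briefly explain.")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration only
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```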
GLM: General language model pretraining with autoregressive blank infilling
There have been various types of pretraining architectures, including autoencoding models
(e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5) …
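GLM unifies these by blanking spans out of the input and regenerating them autoregressively. A toy sketch of how such a training example could be laid out; the markers and whitespace tokenization are illustrative, not GLM's actual vocabulary.

```python
# Toy layout of a GLM-style blank-infilling example: Part A is the corrupted
# text (attended bidirectionally), Part B is the blanked span, predicted
# left to right after the context.
import random

def make_infilling_example(tokens, span_len=2):
    start = random.randrange(len(tokens) - span_len)
    span = tokens[start:start + span_len]            # the span to blank out
    part_a = tokens[:start] + ["[MASK]"] + tokens[start + span_len:]
    part_b = ["[START]"] + span + ["[END]"]          # regenerated autoregressively
    return part_a + part_b

print(make_infilling_example("the quick brown fox jumps over the lazy dog".split()))
```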
PTR: Prompt tuning with rules for text classification
Recently, prompt tuning has been widely applied to elicit the rich knowledge in
pre-trained language models (PLMs) for NLP tasks. Although prompt tuning has achieved …
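A toy sketch of PTR's core move: compose sub-prompts under a simple logic rule into one cloze template, let a masked LM fill the [MASK] slots, and map the filled words back to a class label via a verbalizer. The rule, templates, and label names below are illustrative, not the paper's exact designs.

```python
# Illustrative rule: REL(subj, obj) <- TYPE(subj) AND LINK(subj, obj) AND TYPE(obj).
# Each conjunct contributes one sub-prompt with its own [MASK] slot.
def build_prompt(sentence: str, subj: str, obj: str) -> str:
    return (f"{sentence} The {subj} is a [MASK]. "
            f"{subj}'s [MASK] is {obj}. The {obj} is a [MASK].")

# Verbalizer: a tuple of predicted mask words identifies one relation label.
VERBALIZER = {
    ("person", "birthplace", "city"): "per:city_of_birth",
    ("organization", "founder", "person"): "org:founded_by",
}

print(build_prompt("Steve Jobs was born in San Francisco.",
                   "Steve Jobs", "San Francisco"))
```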
Prefix-tuning: Optimizing continuous prompts for generation
Fine-tuning is the de facto way to leverage large pretrained language models to perform
downstream tasks. However, it modifies all the language model parameters and therefore …
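A minimal sketch of the contrast prefix-tuning draws: freeze every pretrained weight and train only a small matrix of continuous prefix vectors. For brevity this sketch prepends the prefix at the embedding layer and assumes a base model whose forward accepts embeddings; the full method instead injects prefixes into the keys and values of every attention layer.

```python
import torch
import torch.nn as nn

class PrefixTuned(nn.Module):
    def __init__(self, base: nn.Module, embed_dim: int, prefix_len: int = 10):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False               # pretrained weights stay frozen
        # The only trainable parameters: prefix_len continuous vectors.
        self.prefix = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        prefix = self.prefix.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return self.base(torch.cat([prefix, input_embeds], dim=1))

# Usage with a toy frozen encoder: only the 10x32 prefix receives gradients.
toy = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True), num_layers=2)
model = PrefixTuned(toy, embed_dim=32)
print(model(torch.randn(2, 5, 32)).shape)  # (2, 15, 32): 10 prefix + 5 input positions
```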