Chatting about ChatGPT: how may AI and GPT impact academia and libraries?

BD Lund, T Wang - Library Hi Tech News, 2023 - emerald.com
Purpose This paper aims to provide an overview of key definitions related to ChatGPT, a
public tool developed by OpenAI, and its underlying technology, Generative Pretrained …

ChatGPT is not all you need. A State of the Art Review of large Generative AI models

R Gozalo-Brizuela, EC Garrido-Merchan - arXiv preprint arXiv:2301.04655, 2023 - arxiv.org
During the last two years, a plethora of large generative models such as
ChatGPT or Stable Diffusion have been published. Concretely, these models are able to …

ChatGPT and a new academic reality: Artificial Intelligence‐written research papers and the ethics of the large language models in scholarly publishing

BD Lund, T Wang, NR Mannuru, B Nie… - Journal of the …, 2023 - Wiley Online Library
This article discusses OpenAI's ChatGPT, a generative pre‐trained transformer, which uses
natural language processing to fulfill text‐based user requests (i.e., a “chatbot”). The history …

Analyzing leakage of personally identifiable information in language models

N Lukas, A Salem, R Sim, S Tople… - … IEEE Symposium on …, 2023 - ieeexplore.ieee.org
Language Models (LMs) have been shown to leak information about training data through
sentence-level membership inference and reconstruction attacks. Understanding the risk of …

Decoding ChatGPT: A taxonomy of existing research, current challenges, and possible future directions

SS Sohail, F Farhat, Y Himeur, M Nadeem… - Journal of King Saud …, 2023 - Elsevier
Chat Generative Pre-trained Transformer (ChatGPT) has gained significant interest
and attention since its launch in November 2022. It has shown impressive performance in …

Pre-trained language models for text generation: A survey

J Li, T Tang, WX Zhao, JY Nie, JR Wen - ACM Computing Surveys, 2024 - dl.acm.org
Text Generation aims to produce plausible and readable text in human language from input
data. The resurgence of deep learning has greatly advanced this field, in particular, with the …

Transformer-patcher: One mistake worth one neuron

Z Huang, Y Shen, X Zhang, J Zhou, W Rong… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Transformer-based Pretrained Language Models (PLMs) dominate almost all Natural
Language Processing (NLP) tasks. Nevertheless, they still make mistakes from time to time …

Creating and detecting fake reviews of online products

J Salminen, C Kandpal, AM Kamel, S Jung… - Journal of Retailing and …, 2022 - Elsevier
Customers increasingly rely on reviews for product information. However, the usefulness of
online reviews is impeded by fake reviews that give an untruthful picture of product quality …

Transforming conversations with AI—A comprehensive study of ChatGPT

G Bansal, V Chamola, A Hussain, M Guizani… - Cognitive …, 2024 - Springer
In the field of cognitive computing, conversational AI has witnessed remarkable progress,
largely driven by the development of the Generative Pre-trained Transformer (GPT) series …

A simple language model for task-oriented dialogue

E Hosseini-Asl, B McCann, CS Wu… - Advances in Neural …, 2020 - proceedings.neurips.cc
Task-oriented dialogue is often decomposed into three tasks: understanding user input,
deciding actions, and generating a response. While such decomposition might suggest a …