A survey of GPT-3 family large language models including ChatGPT and GPT-4

KS Kalyan - Natural Language Processing Journal, 2024 - Elsevier
Large language models (LLMs) are a special class of pretrained language models (PLMs)
obtained by scaling model size, pretraining corpus and computation. LLMs, because of their …

A review on sentiment analysis from social media platforms

M Rodríguez-Ibáñez, A Casánez-Ventura… - Expert Systems with …, 2023 - Elsevier
Sentiment analysis has proven to be a valuable tool to gauge public opinion in different
disciplines. It has been successfully employed in financial market prediction, health issues …

Overview of EXIST 2021: sEXism Identification in Social neTworks

F Rodríguez-Sánchez… - … del Lenguaje Natural, 2021 - journal.sepln.org
The paper describes the organization, goals, and results of the sEXism Identification in
Social neTworks (EXIST) challenge, a shared task proposed for the first time at IberLEF …

ChatGPT: Jack of all trades, master of none

J Kocoń, I Cichecki, O Kaszyca, M Kochanek, D Szydło… - Information …, 2023 - Elsevier
OpenAI has released the Chat Generative Pre-trained Transformer (ChatGPT) and
revolutionized the approach to human-model interaction in artificial intelligence. The first …

RWKV: Reinventing RNNs for the transformer era

B Peng, E Alcaide, Q Anthony, A Albalak… - arXiv preprint arXiv …, 2023 - arxiv.org
Transformers have revolutionized almost all natural language processing (NLP) tasks but
suffer from memory and computational complexity that scales quadratically with sequence …

GPT is an effective tool for multilingual psychological text analysis

S Rathje, DM Mirea, I Sucholutsky, R Marjieh… - Proceedings of the …, 2024 - pnas.org
The social and behavioral sciences have been increasingly using automated text analysis to
measure psychological constructs in text. We explore whether GPT, the large-language …

Rethinking the role of demonstrations: What makes in-context learning work?

S Min, X Lyu, A Holtzman, M Artetxe, M Lewis… - arXiv preprint arXiv …, 2022 - arxiv.org
Large language models (LMs) are able to in-context learn: perform a new task via inference
alone by conditioning on a few input-label pairs (demonstrations) and making predictions for …
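As an illustration of the in-context learning setup this abstract describes, the following minimal Python sketch builds a few-shot prompt from input-label demonstrations and leaves the test input unlabeled for the model to complete. The review texts, sentiment labels, and the notion of sending the prompt to a completion endpoint are illustrative assumptions, not details from the cited paper.

    # Minimal sketch of in-context learning: the model sees a few input-label
    # demonstrations followed by a test input, and predicts the label via
    # inference alone (no gradient updates). Examples are hypothetical.
    demonstrations = [
        ("The movie was a delight from start to finish.", "positive"),
        ("I want my two hours back.", "negative"),
        ("A serviceable but forgettable thriller.", "negative"),
    ]

    def build_icl_prompt(demos, test_input):
        """Concatenate input-label pairs, then the unlabeled test input."""
        lines = [f"Review: {x}\nSentiment: {y}" for x, y in demos]
        lines.append(f"Review: {test_input}\nSentiment:")
        return "\n\n".join(lines)

    prompt = build_icl_prompt(demonstrations, "An instant classic.")
    print(prompt)
    # A real run would send `prompt` to an LM completion endpoint and read
    # the continuation as the predicted label.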

From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair NLP models

S Feng, CY Park, Y Liu, Y Tsvetkov - arXiv preprint arXiv:2305.08283, 2023 - arxiv.org
Language models (LMs) are pretrained on diverse data sources, including news, discussion
forums, books, and online encyclopedias. A significant portion of this data includes opinions …

MetaICL: Learning to learn in context

S Min, M Lewis, L Zettlemoyer, H Hajishirzi - arXiv preprint arXiv …, 2021 - arxiv.org
We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training
framework for few-shot learning where a pretrained language model is tuned to do in …

A holistic approach to undesired content detection in the real world

T Markov, C Zhang, S Agarwal, FE Nekoul… - Proceedings of the …, 2023 - ojs.aaai.org
We present a holistic approach to building a robust and useful natural language
classification system for real-world content moderation. The success of such a system relies …