A survey of GPT-3 family large language models including ChatGPT and GPT-4
KS Kalyan - Natural Language Processing Journal, 2024 - Elsevier
Large language models (LLMs) are a special class of pretrained language models (PLMs)
obtained by scaling model size, pretraining corpus and computation. LLMs, because of their …
A review on sentiment analysis from social media platforms
M Rodríguez-Ibáñez, A Casáñez-Ventura… - Expert Systems with …, 2023 - Elsevier
Sentiment analysis has proven to be a valuable tool to gauge public opinion in different
disciplines. It has been successfully employed in financial market prediction, health issues …
Overview of EXIST 2021: sexism identification in social networks
F Rodríguez-Sánchez… - … del Lenguaje Natural, 2021 - journal.sepln.org
The paper describes the organization, goals, and results of the sEXism Identification in
Social neTworks (EXIST) challenge, a shared task proposed for the first time at IberLEF …
ChatGPT: Jack of all trades, master of none
OpenAI has released the Chat Generative Pre-trained Transformer (ChatGPT) and
revolutionized the approach in artificial intelligence to human-model interaction. The first …
RWKV: Reinventing RNNs for the transformer era
Transformers have revolutionized almost all natural language processing (NLP) tasks but
suffer from memory and computational complexity that scales quadratically with sequence …
GPT is an effective tool for multilingual psychological text analysis
The social and behavioral sciences have been increasingly using automated text analysis to
measure psychological constructs in text. We explore whether GPT, the large-language …
Rethinking the role of demonstrations: What makes in-context learning work?
Large language models (LMs) are able to in-context learn--perform a new task via inference
alone by conditioning on a few input-label pairs (demonstrations) and making predictions for …
From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair NLP models
Language models (LMs) are pretrained on diverse data sources, including news, discussion
forums, books, and online encyclopedias. A significant portion of this data includes opinions …
Metaicl: Learning to learn in context
We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training
framework for few-shot learning where a pretrained language model is tuned to do in …
A holistic approach to undesired content detection in the real world
We present a holistic approach to building a robust and useful natural language
classification system for real-world content moderation. The success of such a system relies …