A comprehensive overview of large language models
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in
natural language processing tasks and beyond. This success of LLMs has led to a large …
Pre-trained language models and their applications
Pre-trained language models have achieved striking success in natural language
processing (NLP), leading to a paradigm shift from supervised learning to pre-training …
A survey of large language models
Language is essentially a complex, intricate system of human expressions governed by
grammatical rules. It poses a significant challenge to develop capable AI algorithms for …
Regulating ChatGPT and other large generative AI models
P Hacker, A Engel, M Mauer - Proceedings of the 2023 ACM Conference …, 2023 - dl.acm.org
Large generative AI models (LGAIMs), such as ChatGPT, GPT-4 or Stable Diffusion, are
rapidly transforming the way we communicate, illustrate, and create. However, AI regulation …
P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks
Prompt tuning, which only tunes continuous prompts with a frozen language model,
substantially reduces per-task storage and memory usage at training. However, in the …
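The P-tuning abstract above describes the core idea of prompt tuning: only a small set of continuous prompt vectors is trained per task while the language model's weights stay frozen. A minimal sketch of that idea, using a toy embedding table and illustrative names (the "model", dimensions, and vocabulary below are all assumptions, not the paper's implementation):

```python
import random

EMBED_DIM = 4    # toy embedding width
PROMPT_LEN = 3   # number of trainable continuous prompt vectors

# Frozen token embeddings -- a stand-in for a pre-trained model's table.
frozen_embeddings = {
    "hello": [0.1, 0.2, 0.3, 0.4],
    "world": [0.4, 0.3, 0.2, 0.1],
}

# Trainable continuous prompts: the ONLY per-task parameters updated,
# which is why per-task storage stays small.
random.seed(0)
prompt = [[random.uniform(-0.1, 0.1) for _ in range(EMBED_DIM)]
          for _ in range(PROMPT_LEN)]

def forward(tokens):
    """Prepend the learned prompt vectors to the frozen token embeddings."""
    return prompt + [frozen_embeddings[t] for t in tokens]

seq = forward(["hello", "world"])
# The model would attend over PROMPT_LEN prompt vectors plus the inputs.
```

In a real setting the gradient flows only into `prompt`; the frozen table (and the transformer around it) is never updated, so each task adds just `PROMPT_LEN * EMBED_DIM` parameters.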
A bibliometric review of large language models research from 2017 to 2023
Large language models (LLMs), such as OpenAI's Generative Pre-trained Transformer
(GPT), are a class of language models that have demonstrated outstanding performance …
Predictability and surprise in large generative models
Large-scale pre-training has recently emerged as a technique for creating capable, general-
purpose, generative models such as GPT-3, Megatron-Turing NLG, Gopher, and many …
Generating training data with language models: Towards zero-shot language understanding
Pretrained language models (PLMs) have demonstrated remarkable performance in various
natural language processing tasks: Unidirectional PLMs (e.g., GPT) are well known for their …
Multitask prompted training enables zero-shot task generalization
Large language models have recently been shown to attain reasonable zero-shot
generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that …
Democratizing artificial intelligence: How no-code AI can leverage machine learning operations
Organizations are increasingly seeking to generate value and insights from their data by
integrating advances in artificial intelligence (AI) (e.g., machine learning (ML) systems) into …