Large language models for software engineering: A systematic literature review
Large Language Models (LLMs) have significantly impacted numerous domains, including
Software Engineering (SE). Many recent publications have explored LLMs applied to …
A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT
Recently, ChatGPT, along with DALL-E-2 and Codex, has been gaining significant attention
from society. As a result, many individuals have become interested in related resources and …
Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation
Program synthesis has been long studied with recent approaches focused on directly using
the power of Large Language Models (LLMs) to generate code. Programming benchmarks …
DecodingTrust: A comprehensive assessment of trustworthiness in GPT models
Abstract Generative Pre-trained Transformer (GPT) models have exhibited exciting progress
in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the …
A survey of GPT-3 family large language models including ChatGPT and GPT-4
KS Kalyan - Natural Language Processing Journal, 2024 - Elsevier
Large language models (LLMs) are a special class of pretrained language models (PLMs)
obtained by scaling model size, pretraining corpus and computation. LLMs, because of their …
Generalizing to unseen domains: A survey on domain generalization
Machine learning systems generally assume that the training and testing distributions are
the same. To this end, a key requirement is to develop models that can generalize to unseen …
UniXcoder: Unified cross-modal pre-training for code representation
Pre-trained models for programming languages have recently demonstrated great success
on code intelligence. To support both code-related understanding and generation tasks …
CodeT5+: Open code large language models for code understanding and generation
Large language models (LLMs) pretrained on vast source code have achieved prominent
progress in code intelligence. However, existing code LLMs have two main limitations in …
CodeRL: Mastering code generation through pretrained models and deep reinforcement learning
Program synthesis or code generation aims to generate a program that satisfies a problem
specification. Recent approaches using large-scale pretrained language models (LMs) have …
Unified pre-training for program understanding and generation
Code summarization and generation empower conversion between programming language
(PL) and natural language (NL), while code translation avails the migration of legacy code …