An empirical study on fine-tuning large language models of code for automated program repair

K Huang, X Meng, J Zhang, Y Liu… - 2023 38th IEEE/ACM …, 2023 - ieeexplore.ieee.org
The advent of large language models (LLMs) has opened up new opportunities for
automated program repair (APR). In particular, some recent studies have explored how to …

Unveiling memorization in code models

Z Yang, Z Zhao, C Wang, J Shi, D Kim, D Han… - Proceedings of the IEEE …, 2024 - dl.acm.org
The availability of large-scale datasets, advanced architectures, and powerful computational
resources have led to effective code models that automate diverse software engineering …

An empirical study of automated vulnerability localization with large language models

J Zhang, C Wang, A Li, W Sun, C Zhang, W Ma… - arXiv preprint arXiv …, 2024 - arxiv.org
Recently, Automated Vulnerability Localization (AVL) has attracted much attention, aiming to
facilitate diagnosis by pinpointing the lines of code responsible for discovered …

Codegen4libs: A two-stage approach for library-oriented code generation

M Liu, T Yang, Y Lou, X Du, Y Wang… - 2023 38th IEEE/ACM …, 2023 - ieeexplore.ieee.org
Automated code generation has been extensively studied in recent literature. In this work,
we first survey 66 participants to motivate a more pragmatic code generation scenario, i.e. …

An empirical study of parameter-efficient fine-tuning methods for pre-trained code models

J Liu, C Sha, X Peng - 2023 38th IEEE/ACM International …, 2023 - ieeexplore.ieee.org
Pre-trained code models (e.g., CodeBERT and CodeT5) have demonstrated their code
intelligence in various software engineering tasks, such as code summarization. And full fine …