Deep learning-based software engineering: progress, challenges, and opportunities

X Chen, X Hu, Y Huang, H Jiang, W Ji, Y Jiang… - Science China …, 2025 - Springer
Researchers have recently achieved significant advances in deep learning techniques,
which in turn has substantially advanced other research disciplines, such as natural …

A systematic survey and critical review on evaluating large language models: Challenges, limitations, and recommendations

MTR Laskar, S Alqahtani, MS Bari… - Proceedings of the …, 2024 - aclanthology.org
Large Language Models (LLMs) have recently gained significant attention due to
their remarkable capabilities in performing diverse tasks across various domains. However …

AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework

Q Wu, G Bansal, J Zhang, Y Wu, S Zhang, E Zhu… - arXiv preprint arXiv …, 2023 - arxiv.org
This technical report presents AutoGen, a new framework that enables development of LLM
applications using multiple agents that can converse with each other to solve tasks. AutoGen …

LeanDojo: Theorem proving with retrieval-augmented language models

K Yang, A Swope, A Gu, R Chalamala… - Advances in …, 2024 - proceedings.neurips.cc
Large language models (LLMs) have shown promise in proving formal theorems using proof
assistants such as Lean. However, existing methods are difficult to reproduce or build on …

A survey on RAG meeting LLMs: Towards retrieval-augmented large language models

W Fan, Y Ding, L Ning, S Wang, H Li, D Yin… - Proceedings of the 30th …, 2024 - dl.acm.org
As one of the most advanced techniques in AI, Retrieval-Augmented Generation (RAG) can
offer reliable and up-to-date external knowledge, providing huge convenience for numerous …

CodeT5+: Open code large language models for code understanding and generation

Y Wang, H Le, AD Gotmare, NDQ Bui, J Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) pretrained on vast source code have achieved prominent
progress in code intelligence. However, existing code LLMs have two main limitations in …

Is ChatGPT the ultimate programming assistant--how far is it?

H Tian, W Lu, TO Li, X Tang, SC Cheung… - arXiv preprint arXiv …, 2023 - arxiv.org
Recently, the ChatGPT LLM has received great attention: it can be used as a bot for
discussing source code, prompting it to suggest changes, provide descriptions or even …

Exploring parameter-efficient fine-tuning techniques for code generation with large language models

M Weyssow, X Zhou, K Kim, D Lo… - ACM Transactions on …, 2023 - dl.acm.org
Large language models (LLMs) demonstrate impressive capabilities to generate accurate
code snippets given natural language intents in a zero-shot manner, i.e., without the need for …

NatGen: Generative pre-training by “naturalizing” source code

S Chakraborty, T Ahmed, Y Ding, PT Devanbu… - Proceedings of the 30th …, 2022 - dl.acm.org
Pre-trained generative language models (e.g., PLBART, CodeT5, SPT-Code) for source
code yielded strong results on several tasks in the past few years, including code generation …

Lift yourself up: Retrieval-augmented text generation with self-memory

X Cheng, D Luo, X Chen, L Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
With direct access to human-written reference as memory, retrieval-augmented generation
has achieved much progress in a wide range of text generation tasks. Since better memory …