How to Select Pre-Trained Code Models for Reuse? A Learning Perspective

Z Bi, Y Wan, Z Chu, Y Hu, J Zhang, H Zhang… - arXiv preprint arXiv …, 2025 - arxiv.org
Pre-training a language model and then fine-tuning it has been shown to be an efficient and
effective technique for a wide range of code intelligence tasks, such as code generation …

Process-Supervised Reinforcement Learning for Code Generation

Y Ye, T Zhang, W Jiang, H Huang - arXiv preprint arXiv:2502.01715, 2025 - arxiv.org
Existing reinforcement learning strategies based on outcome supervision have proven
effective in enhancing the performance of large language models (LLMs) for code …

PATCH: Empowering Large Language Model with Programmer-Intent Guidance and Collaborative-Behavior Simulation for Automatic Bug Fixing

Y Zhang, Z Jin, Y Xing, G Li, F Liu, J Zhu, W Dou… - arXiv preprint arXiv …, 2025 - arxiv.org
Bug fixing is of significant importance in software development and maintenance. Recent
research has made substantial strides in exploring the potential of large language models …