Trustworthy and synergistic artificial intelligence for software engineering: Vision and roadmaps

D Lo - 2023 IEEE/ACM International Conference on Software …, 2023 - ieeexplore.ieee.org
For decades, much software engineering research has been dedicated to devising
automated solutions aimed at enhancing developer productivity and elevating software …

Security of Language Models for Code: A Systematic Literature Review

Y Chen, W Sun, C Fang, Z Chen, Y Ge, T Han… - arXiv preprint arXiv …, 2024 - arxiv.org
Language models for code (CodeLMs) have emerged as powerful tools for code-related
tasks, outperforming traditional methods and standard machine learning approaches …

Exploiting the adversarial example vulnerability of transfer learning of source code

Y Yang, H Fan, C Lin, Q Li, Z Zhao… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
State-of-the-art source code classification models exhibit excellent task transferability, in
which the source code encoders are first pre-trained on a source domain dataset in a self …

Unveiling code pre-trained models: Investigating syntax and semantics capacities

W Ma, S Liu, M Zhao, X Xie, W Wang, Q Hu… - ACM Transactions on …, 2024 - dl.acm.org
Code models have made significant advancements in code intelligence by encoding
knowledge about programming languages. While previous studies have explored the …

A survey on robustness attacks for deep code models

Y Qu, S Huang, Y Yao - Automated Software Engineering, 2024 - Springer
With the widespread application of deep learning in software engineering, deep code
models have played an important role in improving code quality and development efficiency …

An explanation method for models of code

Y Wang, K Wang, L Wang - Proceedings of the ACM on Programming …, 2023 - dl.acm.org
This paper introduces a novel method, called WheaCha, for explaining the predictions of
code models. Similar to attribution methods, WheaCha seeks to identify input features that …

ALANCA: Active Learning Guided Adversarial Attacks for Code Comprehension on Diverse Pre-trained and Large Language Models

D Liu, S Zhang - 2024 IEEE International Conference on …, 2024 - ieeexplore.ieee.org
Neural code models have demonstrated their efficacy across a range of code
comprehension tasks, including vulnerability detection, code classification, automatic code …

Transfer attacks and defenses for large language models on coding tasks

C Zhang, Z Wang, R Mangal, M Fredrikson… - arXiv preprint arXiv …, 2023 - arxiv.org
Modern large language models (LLMs), such as ChatGPT, have demonstrated impressive
capabilities for coding tasks including writing and reasoning about code. They improve upon …

Exploiting code symmetries for learning program semantics

K Pei, W Li, Q Jin, S Liu, S Geng, L Cavallaro… - arXiv preprint arXiv …, 2023 - arxiv.org
This paper tackles the challenge of teaching code semantics to Large Language Models
(LLMs) for program analysis by incorporating code symmetries into the model architecture …