Pitfalls in language models for code intelligence: A taxonomy and survey

X She, Y Liu, Y Zhao, Y He, L Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Modern language models (LMs) have been successfully employed in source code
generation and understanding, leading to a significant increase in research focused on …

A systematic literature review on explainability for machine/deep learning-based software engineering research

S Cao, X Sun, R Widyasari, D Lo, X Wu, L Bo… - arXiv preprint arXiv …, 2024 - arxiv.org
The remarkable achievements of Artificial Intelligence (AI) algorithms, particularly in
Machine Learning (ML) and Deep Learning (DL), have fueled their extensive deployment …

Counterfactual explanations for models of code

J Cito, I Dillig, V Murali, S Chandra - Proceedings of the 44th …, 2022 - dl.acm.org
Machine learning (ML) models play an increasingly prevalent role in many software
engineering tasks. However, because most models are now powered by opaque deep …

Graph neural networks for vulnerability detection: A counterfactual explanation

Z Chu, Y Wan, Q Li, Y Wu, H Zhang, Y Sui… - Proceedings of the 33rd …, 2024 - dl.acm.org
Vulnerability detection is crucial for ensuring the security and reliability of software systems.
Recently, Graph Neural Networks (GNNs) have emerged as a prominent code embedding …

Test optimization in DNN testing: A survey

Q Hu, Y Guo, X Xie, M Cordy, L Ma… - ACM Transactions on …, 2024 - dl.acm.org
This article presents a comprehensive survey on test optimization in deep neural network
(DNN) testing. Here, test optimization refers to testing with low data labeling effort. We …

Self-adapting machine learning-based systems via a probabilistic model checking framework

M Casimiro, D Soares, D Garlan, L Rodrigues… - ACM Transactions on …, 2024 - dl.acm.org
This article focuses on the problem of optimizing the system utility of Machine Learning (ML)-
based systems in the presence of ML mispredictions. This is achieved via the use of self …

Leveraging feature bias for scalable misprediction explanation of machine learning models

J Gesi, X Shen, Y Geng, Q Chen… - 2023 IEEE/ACM 45th …, 2023 - ieeexplore.ieee.org
Interpreting and debugging machine learning models is necessary to ensure the robustness
of the machine learning models. Explaining mispredictions can help significantly in doing so …

Interpretation-based code summarization

M Geng, S Wang, D Dong, H Wang… - 2023 IEEE/ACM 31st …, 2023 - ieeexplore.ieee.org
Code comment, i.e., the natural language text that describes the semantics of a code snippet, is
an important way for developers to comprehend the code. Recently, a number of …

A survey of trojans in neural models of source code: Taxonomy and techniques

A Hussain, MRI Rabin, T Ahmed, N Ayoobi… - arXiv preprint arXiv …, 2023 - arxiv.org
In this work, we study the literature in Explainable AI and Safe AI to understand the poisoning of
neural models of code. In order to do so, we first establish a novel taxonomy for Trojan AI for …

Inferring data preconditions from deep learning models for trustworthy prediction in deployment

S Ahmed, H Gao, H Rajan - Proceedings of the 46th IEEE/ACM …, 2024 - dl.acm.org
Deep learning models are trained with certain assumptions about the data during the
development stage and then used for prediction in the deployment stage. It is important to …