Large language models for software engineering: A systematic literature review

X Hou, Y Zhao, Y Liu, Z Yang, K Wang, L Li… - ACM Transactions on …, 2024 - dl.acm.org
Large Language Models (LLMs) have significantly impacted numerous domains, including
Software Engineering (SE). Many recent publications have explored LLMs applied to …

LLM4Vuln: A unified evaluation framework for decoupling and enhancing LLMs' vulnerability reasoning

Y Sun, D Wu, Y Xue, H Liu, W Ma, L Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have demonstrated significant potential in various tasks,
including vulnerability detection. However, current efforts in this area are preliminary, lacking …

When LLMs meet cybersecurity: A systematic literature review

J Zhang, H Bu, H Wen, Y Chen, L Li, H Zhu - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid advancements in large language models (LLMs) have opened new avenues
across various fields, including cybersecurity, which faces an ever-evolving threat landscape …

CEBin: A cost-effective framework for large-scale binary code similarity detection

H Wang, Z Gao, C Zhang, M Sun, Y Zhou… - Proceedings of the 33rd …, 2024 - dl.acm.org
Binary code similarity detection (BCSD) is a fundamental technique for various applications.
Many BCSD solutions have been proposed recently, which mostly are embedding-based …
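
The snippet notes that most recent BCSD solutions are embedding-based: each binary function is mapped to a vector, and similarity is computed in that vector space. As a minimal illustrative sketch (assuming some external embedding model; this is not CEBin's actual cost-effective retrieval pipeline), candidate functions can be ranked by cosine similarity:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two function embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_candidates(query_emb: np.ndarray, corpus: dict, top_k: int = 5):
    # Rank corpus functions (name -> embedding) by similarity to the query function.
    scores = {name: cosine_similarity(query_emb, emb) for name, emb in corpus.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]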

CLAP: Learning transferable binary code representations with natural language supervision

H Wang, Z Gao, C Zhang, Z Sha, M Sun… - Proceedings of the 33rd …, 2024 - dl.acm.org
Binary code representation learning has shown strong performance in binary analysis
tasks. However, existing solutions often have poor transferability, particularly in few-shot and zero …

LLM for mobile: An initial roadmap

D Chen, Y Liu, M Zhou, Y Zhao, H Wang… - ACM Transactions on …, 2024 - dl.acm.org
When mobile meets LLMs, mobile app users deserve more intelligent usage
experiences. For this to happen, we argue that there is a strong need to apply LLMs for the …

LLM4Decompile: Decompiling Binary Code with Large Language Models

H Tan, Q Luo, J Li, Y Zhang - arXiv preprint arXiv:2403.05286, 2024 - arxiv.org
Decompilation aims to restore compiled code to human-readable source code, but struggles
with details like names and structure. Large language models (LLMs) show promise for …
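
As background for how an LLM can be applied to decompilation, the sketch below prompts a causal language model with assembly and asks it to emit C source. The checkpoint name and prompt format are placeholders, not the exact ones released or used by LLM4Decompile:

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "some-org/decompile-model"  # placeholder; substitute a decompilation-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

asm = """
push rbp
mov rbp, rsp
mov eax, edi
imul eax, eax
pop rbp
ret
"""

prompt = f"# Recover the original C source for this x86-64 assembly:\n{asm}\n# C source:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, i.e. the model's C reconstruction.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))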

A Progressive Transformer for Unifying Binary Code Embedding and Knowledge Transfer

H Lu, H Cai, Y Liang, A Bianchi, ZB Celik - arXiv preprint arXiv:2412.11177, 2024 - arxiv.org
Language model approaches have recently been integrated into binary analysis tasks, such
as function similarity detection and function signature recovery. These models typically …

Fast, Fine-Grained Equivalence Checking for Neural Decompilers

L Dramko, CL Goues, EJ Schwartz - arXiv preprint arXiv:2501.04811, 2025 - arxiv.org
Neural decompilers are machine learning models that reconstruct the source code from an
executable program. Critical to the lifecycle of any machine learning model is an evaluation …

Source Code Foundation Models are Transferable Binary Analysis Knowledge Bases

Z Su, X Xu, Z Huang, K Zhang, X Zhang - arXiv preprint arXiv:2405.19581, 2024 - arxiv.org
Human-Oriented Binary Reverse Engineering (HOBRE) lies at the intersection of binary and
source code, aiming to lift binary code to human-readable content relevant to source code …