An extensive study on pre-trained models for program understanding and generation
Automatic program understanding and generation techniques could significantly advance
the productivity of programmers and have been widely studied by academia and industry …
Deep Learning for Code Intelligence: Survey, Benchmark and Toolkit
Code intelligence leverages machine learning techniques to extract knowledge from
extensive code corpora, with the aim of developing intelligent tools to improve the quality …
Pitfalls in language models for code intelligence: A taxonomy and survey
Modern language models (LMs) have been successfully employed in source code
generation and understanding, leading to a significant increase in research focused on …
Self-supervised bug detection and repair
M Allamanis, H Jackson-Flux… - Advances in Neural …, 2021 - proceedings.neurips.cc
Machine learning-based program analyses have recently shown the promise of
integrating formal and probabilistic reasoning towards aiding software development …
Code prediction by feeding trees to transformers
Code prediction, more specifically autocomplete, has become an essential feature in
modern IDEs. Autocomplete is more effective when the desired next token is at (or close to) …
Bridging pre-trained models and downstream tasks for source code understanding
With the great success of pre-trained models, the pretrain-then-finetune paradigm has been
widely adopted on downstream tasks for source code understanding. However, compared to …
ContraBERT: Enhancing code pre-trained models via contrastive learning
Large-scale pre-trained models such as CodeBERT, GraphCodeBERT have earned
widespread attention from both academia and industry. Attributed to the superior ability in …
Adversarial examples for models of code
Neural models of code have shown impressive results when performing tasks such as
predicting method names and identifying certain kinds of bugs. We show that these models …
You see what I want you to see: poisoning vulnerabilities in neural code search
Searching and reusing code snippets from open-source software repositories based on
natural-language queries can greatly improve programming productivity. Recently, deep …
Transformers meet directed graphs
Transformers were originally proposed as a sequence-to-sequence model for text but have
become vital for a wide range of modalities, including images, audio, video, and undirected …