Trustworthy distributed AI systems: Robustness, privacy, and governance
Emerging Distributed AI systems are revolutionizing big data computing and data
processing capabilities with growing economic and societal impact. However, recent studies …
Exploring model learning heterogeneity for boosting ensemble robustness
Deep neural network ensembles hold the potential of improving generalization performance
for complex learning tasks. This paper presents formal analysis and empirical evaluation to …
ShiftAttack: Towards Attacking the Localization Ability of Object Detector
State-of-the-art (SOTA) adversarial attacks expose vulnerabilities in object detectors, often
resulting in erroneous predictions. However, existing adversarial attacks neglect the stealth …
Adversarial defenses for object detectors based on Gabor convolutional layers
A Amirkhani, MP Karimi - The visual computer, 2022 - Springer
Despite their many advantages and positive features, deep neural networks are
extremely vulnerable to adversarial attacks. This drawback has substantially reduced …
Perception poisoning attacks in federated learning
Federated learning (FL) enables decentralized training of deep neural networks (DNNs) for
object detection over a distributed population of clients. It allows edge clients to keep their …
Pick-object-attack: Type-specific adversarial attack for object detection
Many recent studies have shown that deep neural models are vulnerable to adversarial
samples: images with imperceptible perturbations, for example, can fool image classifiers. In …
PapMOT: Exploring Adversarial Patch Attack Against Multiple Object Tracking
Tracking multiple objects in a continuous video stream is crucial for many computer vision
tasks. It involves detecting and associating objects with their respective identities across …
Using frequency attention to make adversarial patch powerful against person detector
X Lei, X Cai, C Lu, Z Jiang, Z Gong, L Lu - IEEE Access, 2022 - ieeexplore.ieee.org
Deep neural networks (DNNs) are vulnerable to adversarial attacks. In particular, object
detectors may be attacked by applying a particular adversarial patch to the image. However …
Robust object detection fusion against deception
Deep neural network (DNN) based object detection has become an integral part of
numerous cyber-physical systems, perceiving physical environments and responding …
Imperio: language-guided backdoor attacks for arbitrary model control
Natural language processing (NLP) has received unprecedented attention. While
advancements in NLP models have led to extensive research into their backdoor …