Trustworthy distributed AI systems: Robustness, privacy, and governance
Emerging Distributed AI systems are revolutionizing big data computing and data
processing capabilities with growing economic and societal impact. However, recent studies …
Exploring model learning heterogeneity for boosting ensemble robustness
Deep neural network ensembles hold the potential to improve generalization performance
for complex learning tasks. This paper presents formal analysis and empirical evaluation to …
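For context on what a heterogeneous ensemble looks like in code, the sketch below (PyTorch; untrained torchvision models used only to keep it self-contained) averages the softmax outputs of three architecturally different members. The member choice and plain averaging are illustrative assumptions, not the paper's heterogeneity analysis or selection method; the intuition is simply that a perturbation crafted against one architecture is less likely to fool the average of several dissimilar ones.

import torch
import torch.nn as nn
import torchvision

# Three architecturally heterogeneous members (weights=None keeps the sketch
# self-contained; in practice each member would be trained).
members = [
    torchvision.models.resnet18(weights=None),
    torchvision.models.mobilenet_v3_small(weights=None),
    torchvision.models.densenet121(weights=None),
]

class SoftVotingEnsemble(nn.Module):
    """Average the members' softmax outputs."""
    def __init__(self, models):
        super().__init__()
        self.models = nn.ModuleList(models)

    def forward(self, x):
        probs = torch.stack([m(x).softmax(dim=-1) for m in self.models])
        return probs.mean(dim=0)

ensemble = SoftVotingEnsemble(members).eval()
x = torch.rand(2, 3, 224, 224)              # stand-in inputs
pred = ensemble(x).argmax(dim=-1)           # ensemble prediction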
ShiftAttack: Towards Attacking the Localization Ability of Object Detector
State-of-the-art (SOTA) adversarial attacks expose vulnerabilities in object detectors, often
resulting in erroneous predictions. However, existing adversarial attacks neglect the stealth …
Adversarial defenses for object detectors based on Gabor convolutional layers
Despite their many advantages and positive features, deep neural networks are extremely vulnerable to adversarial attacks. This drawback has substantially reduced …
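As a rough illustration of the defense idea, the sketch below builds a fixed bank of Gabor filters and uses it as a drop-in first convolutional layer in PyTorch. The kernel size, orientations, and the choice to keep the filters frozen are assumptions made for the sketch; the paper's actual Gabor convolutional layer design may differ.

import math
import torch
import torch.nn as nn

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """One real-valued Gabor kernel of shape (size, size)."""
    half = size // 2
    y, x = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_t = x * math.cos(theta) + y * math.sin(theta)
    y_t = -x * math.sin(theta) + y * math.cos(theta)
    envelope = torch.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    carrier = torch.cos(2 * math.pi * x_t / lambd + psi)
    return envelope * carrier

class GaborConv2d(nn.Module):
    """First-layer convolution whose filters form a fixed Gabor bank."""
    def __init__(self, in_channels=3, orientations=8, size=7, sigma=2.0, lambd=4.0):
        super().__init__()
        kernels = torch.stack([
            gabor_kernel(size, sigma, k * math.pi / orientations, lambd)
            for k in range(orientations)
        ])                                            # (orientations, size, size)
        weight = kernels.unsqueeze(1).repeat(1, in_channels, 1, 1) / in_channels
        self.register_buffer("weight", weight)        # frozen: not a trainable Parameter
        self.padding = size // 2

    def forward(self, x):
        return nn.functional.conv2d(x, self.weight, padding=self.padding)

feats = GaborConv2d()(torch.rand(1, 3, 32, 32))       # -> (1, 8, 32, 32)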
Perception poisoning attacks in federated learning
Federated learning (FL) enables decentralized training of deep neural networks (DNNs) for
object detection over a distributed population of clients. It allows edge clients to keep their …
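A minimal sketch of this threat model, assuming the torchvision detection annotation format and plain FedAvg aggregation: the malicious client flips class labels in its local annotations before training, and the server's weighted average cannot tell its update apart from an honest one. The helper names and the person-to-car flip are illustrative, not the paper's attack configuration.

import copy
import torch

def flip_class_labels(target, src_label, dst_label):
    """Class-flipping perception poison on one image's annotations
    ({"boxes": FloatTensor[N, 4], "labels": Int64Tensor[N]})."""
    poisoned = {k: v.clone() for k, v in target.items()}
    poisoned["labels"][poisoned["labels"] == src_label] = dst_label
    return poisoned

def fedavg(client_state_dicts, client_weights):
    """Weighted FedAvg over client parameters; poisoned and honest updates
    are aggregated identically."""
    total = float(sum(client_weights))
    avg = copy.deepcopy(client_state_dicts[0])
    for key, ref in avg.items():
        if not torch.is_floating_point(ref):
            continue                                   # keep integer buffers as-is
        avg[key] = sum((w / total) * sd[key]
                       for sd, w in zip(client_state_dicts, client_weights))
    return avg

# Toy example: relabel every "person" (COCO class 1) box as "car" (class 3).
clean = {"boxes": torch.tensor([[10., 10., 50., 80.]]), "labels": torch.tensor([1])}
poisoned = flip_class_labels(clean, src_label=1, dst_label=3)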
Pick-object-attack: Type-specific adversarial attack for object detection
Many recent studies have shown that deep neural models are vulnerable to adversarial
samples: images with imperceptible perturbations, for example, can fool image classifiers. In …
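To make "type-specific" concrete, here is a hedged sketch of a PGD-style attack on a COCO-pretrained torchvision Faster R-CNN where the loss is computed only over boxes of one chosen category, so other categories are largely untouched. The model choice, the two loss terms, the hyperparameters, and the `type_specific_pgd` helper are assumptions for illustration, not the paper's exact formulation.

import torch
import torchvision

# weights="DEFAULT" assumes torchvision >= 0.13; train mode makes the model
# return its loss dict instead of detections.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()
for p in model.parameters():
    p.requires_grad_(False)                             # only gradients w.r.t. the image are needed

def type_specific_pgd(image, boxes, labels, target_class, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD over the whole image, with the loss restricted to boxes of `target_class`.
    Assumes `image` is a (3, H, W) tensor in [0, 1] that contains the target class."""
    keep = labels == target_class
    targets = [{"boxes": boxes[keep], "labels": labels[keep]}]
    x_adv = image.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        losses = model([x_adv], targets)
        loss = losses["loss_classifier"] + losses["loss_box_reg"]
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                  # ascend the target-class loss
            x_adv = image + (x_adv - image).clamp(-eps, eps)     # stay in the L_inf ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

# Hypothetical call: x_adv = type_specific_pgd(img, gt_boxes, gt_labels, target_class=3)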
PapMOT: Exploring Adversarial Patch Attack Against Multiple Object Tracking
Tracking multiple objects in a continuous video stream is crucial for many computer vision
tasks. It involves detecting and associating objects with their respective identities across …
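For reference, the association step that such patch attacks aim to corrupt often reduces to IoU-based Hungarian matching between existing tracks and new detections. The sketch below (NumPy + SciPy) is a generic version of that step under that assumption, not the specific trackers evaluated in the paper.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_matrix(tracks, detections):
    """Pairwise IoU between (T, 4) track boxes and (D, 4) detection boxes, [x1, y1, x2, y2]."""
    t = tracks[:, None, :]
    d = detections[None, :, :]
    x1 = np.maximum(t[..., 0], d[..., 0]); y1 = np.maximum(t[..., 1], d[..., 1])
    x2 = np.minimum(t[..., 2], d[..., 2]); y2 = np.minimum(t[..., 3], d[..., 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_t = (t[..., 2] - t[..., 0]) * (t[..., 3] - t[..., 1])
    area_d = (d[..., 2] - d[..., 0]) * (d[..., 3] - d[..., 1])
    return inter / (area_t + area_d - inter + 1e-9)

def associate(tracks, detections, iou_threshold=0.3):
    """Hungarian matching on IoU; pairs above the threshold keep their identity,
    the rest spawn new tracks or mark existing tracks as missed."""
    iou = iou_matrix(tracks, detections)
    rows, cols = linear_sum_assignment(-iou)            # maximize total IoU
    return [(r, c) for r, c in zip(rows, cols) if iou[r, c] >= iou_threshold]

tracks = np.array([[0., 0., 10., 10.], [20., 20., 35., 40.]])
dets = np.array([[1., 1., 11., 11.], [100., 100., 120., 130.]])
print(associate(tracks, dets))                          # [(0, 0)]: the far detection is unmatched

A patch that nudges boxes or scores enough to break these matches is one way such an attack can induce identity switches.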
Using frequency attention to make adversarial patch powerful against person detector
X Lei, X Cai, C Lu, Z Jiang, Z Gong, L Lu - IEEE Access, 2022 - ieeexplore.ieee.org
Deep neural networks (DNNs) are vulnerable to adversarial attacks. In particular, object
detectors may be attacked by applying a crafted adversarial patch to the image. However …
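A minimal sketch of patch optimization against a person detector, assuming a COCO-pretrained torchvision Faster R-CNN in eval mode (class 1 = "person"), gradients through its post-processed scores, a fixed 64x64 patch location, and random stand-in images. The frequency-attention weighting the title refers to is not modeled; the loss here is simply the summed person confidences.

import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)                             # only the patch is optimized
PERSON = 1
images = [torch.rand(3, 480, 640) for _ in range(4)]    # stand-ins for real training images

def paste_patch(image, patch, top, left):
    """Overwrite a square region of the image with the (clamped) patch."""
    out = image.clone()
    ph, pw = patch.shape[1:]
    out[:, top:top + ph, left:left + pw] = patch.clamp(0, 1)
    return out

def person_score_sum(image):
    """Sum of post-NMS 'person' confidences; driving it toward zero hides people."""
    det = model([image])[0]
    return det["scores"][det["labels"] == PERSON].sum()

patch = torch.rand(3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)
for step in range(200):                                 # toy budget; real attacks run far longer
    loss = person_score_sum(paste_patch(images[step % len(images)], patch, top=40, left=40))
    opt.zero_grad()
    loss.backward()
    opt.step()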
TransRPN: Towards the transferable adversarial perturbations using region proposal networks and beyond
Adversarial perturbations for object detectors have drawn increasing attention due to
applications in video surveillance and autonomous driving. However, few works have …
Towards interpreting vulnerability of object detection models via adversarial distillation
Recent works have shown that deep learning models are highly vulnerable to adversarial
examples, limiting the application of deep learning in security-critical systems. This paper …