AdvSim: Generating safety-critical scenarios for self-driving vehicles

J Wang, A Pun, J Tu, S Manivasagam… - Proceedings of the …, 2021 - openaccess.thecvf.com
As self-driving systems improve, simulating scenarios where the autonomy stack may
fail becomes more important. Traditionally, these scenarios are generated for a few scenes …

Artificial intelligence and crime: A primer for criminologists

KJ Hayward, MM Maas - Crime, Media, Culture, 2021 - journals.sagepub.com
This article introduces the concept of Artificial Intelligence (AI) to a criminological audience.
After a general review of the phenomenon (including brief explanations of important cognate …

Fooling thermal infrared pedestrian detectors in real world using small bulbs

X Zhu, X Li, J Li, Z Wang, X Hu - Proceedings of the AAAI conference on …, 2021 - ojs.aaai.org
Thermal infrared detection systems play an important role in many areas such as night
security, autonomous driving, and body temperature detection. They have the unique …

Clipped BagNet: Defending against sticker attacks with clipped bag-of-features

Z Zhang, B Yuan, M McCoyd… - 2020 IEEE Security and …, 2020 - ieeexplore.ieee.org
Many works have demonstrated that neural networks are vulnerable to adversarial
examples. We examine the adversarial sticker attack, where the attacker places a sticker …
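The clipped bag-of-features idea described in the snippet can be sketched as follows. This is a minimal illustration under the assumption that the model emits per-patch class logits; `clipped_aggregate` and `clip_hi` are illustrative names, not the paper's API:

```python
import numpy as np

def clipped_aggregate(patch_logits, clip_hi=1.0):
    """Aggregate per-patch class evidence with an upper clip.

    patch_logits: array of shape (num_patches, num_classes).
    Clipping each patch's logits before spatial averaging bounds
    how much any single patch (e.g. one covered by a malicious
    sticker) can contribute to the final prediction.
    """
    return np.clip(patch_logits, None, clip_hi).mean(axis=0)
```

With 10 patches and `clip_hi=1.0`, even a patch whose logit an attacker drives to an arbitrarily large value contributes at most 0.1 to the averaged score, which is the intuition behind clipping as a sticker defense.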

Adversarial pixel masking: A defense against physical attacks for pre-trained object detectors

PH Chiang, CS Chan, SH Wu - … of the 29th ACM international conference …, 2021 - dl.acm.org
Object detection based on pre-trained deep neural networks (DNNs) has achieved
impressive performance and enabled many applications. However, DNN-based object …

Reverse engineering of imperceptible adversarial image perturbations

Y Gong, Y Yao, Y Li, Y Zhang, X Liu, X Lin… - arXiv preprint arXiv …, 2022 - arxiv.org
It has been well recognized that neural network-based image classifiers are easily fooled by
images with tiny perturbations crafted by an adversary. There has been a vast volume of …

Adv3D: Generating safety-critical 3D objects through closed-loop simulation

J Sarva, J Wang, J Tu, Y Xiong, S Manivasagam… - arXiv preprint arXiv …, 2023 - arxiv.org
Self-driving vehicles (SDVs) must be rigorously tested on a wide range of scenarios to
ensure safe deployment. The industry typically relies on closed-loop simulation to evaluate …

Surreptitious adversarial examples through functioning QR code

A Chindaudom, P Siritanawan, K Sumongkayothin… - Journal of …, 2022 - mdpi.com
Continuous advances in Convolutional Neural Networks (CNNs) and
deep learning have been applied to facilitate various tasks of human life. However, security …

Distributed adversarial training to robustify deep neural networks at scale

G Zhang, S Lu, Y Zhang, X Chen… - Uncertainty in …, 2022 - proceedings.mlr.press
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where
adversarial perturbations to the inputs can change or manipulate classification. To defend …
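The input perturbations the snippet refers to, which also form the inner maximization step of adversarial training, are often generated with the fast gradient sign method. A minimal PyTorch sketch, where `fgsm_perturb` and `eps` are illustrative names rather than the paper's implementation:

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps=0.03):
    """One-step FGSM: move each input pixel by +/- eps in the
    direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient, keep pixels in [0, 1].
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```

Adversarial training then minimizes the loss on `fgsm_perturb(model, x, y)` (or a multi-step variant) instead of on clean `x`; distributing that doubly expensive loop across workers is the scaling problem the entry above addresses.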

Learning transferable 3D adversarial cloaks for deep trained detectors

A Maesumi, M Zhu, Y Wang, T Chen, Z Wang… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial
patches on 3D human meshes. We sample triangular faces on a reference human mesh …