Review the state-of-the-art technologies of semantic segmentation based on deep learning

Y Mo, Y Wu, X Yang, F Liu, Y Liao - Neurocomputing, 2022 - Elsevier
The goal of semantic segmentation is to segment the input image according to semantic
information and predict the semantic category of each pixel from a given label set. With the …
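As a rough illustration of the per-pixel prediction described in this abstract (a minimal NumPy sketch, not code from the paper; the 21-class label set and the 64x64 image size are illustrative assumptions):

    import numpy as np

    # Hypothetical output of a segmentation network: a class score for every
    # pixel of one 64x64 image over a 21-class label set (e.g. PASCAL VOC).
    logits = np.random.randn(21, 64, 64)

    # Semantic segmentation is per-pixel classification: each pixel gets the
    # label with the highest score, producing a 64x64 label map.
    label_map = logits.argmax(axis=0)  # shape (64, 64), values in 0..20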

A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

X Huang, D Kroening, W Ruan, J Sharp, Y Sun… - Computer Science …, 2020 - Elsevier
In the past few years, significant progress has been made on deep neural networks (DNNs)
in achieving human-level performance on several long-standing tasks. With the broader …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
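To make the trigger mechanism sketched in this abstract concrete (a minimal illustrative sketch, not the method of this or any other paper listed here; the trigger size, position, pixel value, and target label are all assumptions):

    import numpy as np

    def poison(image, target_label=0):
        # Stamp a small white square (the trigger) into a corner of the image
        # and relabel the sample: classic patch-style backdoor poisoning.
        # Trigger size/position and the target label are arbitrary choices.
        poisoned = image.copy()
        poisoned[-4:, -4:] = 1.0  # 4x4 trigger patch, bottom-right corner
        return poisoned, target_label

    # A model trained on a mix of clean and poisoned samples behaves normally
    # on benign inputs but predicts target_label whenever the trigger appears.
    clean_image = np.random.rand(32, 32)
    poisoned_image, label = poison(clean_image)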

Frequency-driven imperceptible adversarial attack on semantic similarity

C Luo, Q Lin, W Xie, B Wu, J Xie… - Proceedings of the …, 2022 - openaccess.thecvf.com
Current adversarial attack research reveals the vulnerability of learning-based classifiers
against carefully crafted perturbations. However, most existing attack methods have inherent …

Invisible backdoor attacks on deep neural networks via steganography and regularization

S Li, M Xue, BZH Zhao, H Zhu… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been proven vulnerable to backdoor attacks, in which
hidden features (patterns) are trained into a normal model and activated only by some …

Artificial intelligence security: Threats and countermeasures

Y Hu, W Kuang, Z Qin, K Li, J Zhang, Y Gao… - ACM Computing …, 2021 - dl.acm.org
In recent years, with rapid technological advances in both computing hardware and
algorithms, Artificial Intelligence (AI) has demonstrated significant advantages over human …

Enhancing adversarial example transferability with an intermediate level attack

Q Huang, I Katsman, H He, Z Gu… - Proceedings of the …, 2019 - openaccess.thecvf.com
Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool
trained models. Adversarial examples often exhibit black-box transfer, meaning that …

Domain impression: A source data free domain adaptation method

VK Kurmi, VK Subramanian… - Proceedings of the …, 2021 - openaccess.thecvf.com
Unsupervised domain adaptation methods solve the adaptation problem for an unlabeled
target set, assuming that the source dataset is available with all labels. However, the …

Towards transferable adversarial attack against deep face recognition

Y Zhong, W Deng - IEEE Transactions on Information Forensics …, 2020 - ieeexplore.ieee.org
Face recognition has achieved great success in the last five years due to the development of
deep learning methods. However, deep convolutional neural networks (DCNNs) have been …

Rethinking the trigger of backdoor attack

Y Li, T Zhai, B Wu, Y Jiang, Z Li, S Xia - arXiv preprint arXiv:2004.04692, 2020 - arxiv.org
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs),
such that the predictions of the infected model will be maliciously changed if the hidden …