Model architecture level privacy leakage in neural networks

Y Li, H Yan, T Huang, Z Pan, J Lai, X Zhang… - Science China …, 2024 - Springer
Privacy leakage is one of the most critical issues in machine learning and has attracted
growing interest for tasks such as demonstrating potential threats in model attacks and …

PISA: Pixel skipping-based attentional black-box adversarial attack

J Wang, Z Yin, J Jiang, J Tang, B Luo - Computers & Security, 2022 - Elsevier
Studies on black-box, evolutionary-algorithm-based adversarial attacks have
become increasingly popular due to the intractability of acquiring the structural knowledge of …

SUNY: A visual interpretation framework for convolutional neural networks from a necessary and sufficient perspective

X Xuan, Z Deng, HT Lin, Z Kong… - Proceedings of the …, 2024 - openaccess.thecvf.com
In spite of the ongoing evolution of deep learning, Convolutional Neural Networks (CNNs)
remain the de facto choice for numerous vision applications. To foster trust, researchers have …

A two-stage frequency-domain generation algorithm based on differential evolution for black-box adversarial samples

X Song, D Xu, C Peng, Y Zhang, Y Xue - Expert Systems with Applications, 2024 - Elsevier
Adversarial sample generation is a hot issue in the security field of deep learning.
Evolutionary algorithms have been widely used to solve this problem in recent years because …

Active forgetting via influence estimation for neural networks

X Meng, Y Yang, X Liu, N Jiang - International Journal of …, 2022 - Wiley Online Library
The rapid explosion of user data, especially in applications of neural networks, involves
analyzing data collected from individuals, which brings convenience to daily life. Meanwhile …

Trustworthy adaptive adversarial perturbations in social networks

J Zhang, J Wang, H Wang, X Luo, B Ma - Journal of Information Security …, 2024 - Elsevier
Deep neural networks have achieved excellent performance across research areas and
applications, but they have proven susceptible to adversarial examples. Generating …

Adversarial example defense via perturbation grading strategy

S Zhu, W Lyu, B Li, Z Yin, B Luo - International Forum on Digital TV and …, 2022 - Springer
Deep Neural Networks have been widely used in many fields. However, studies
have shown that DNNs are easily attacked by adversarial examples, which have tiny …

Multi-Objective Differential Evolution and Unseen Adversarial Sample Generation

W Zhao, S Alwidian, H Ge, S Rahnamayan… - Proceedings of the …, 2022 - dl.acm.org
Adversarial attacks expose the vulnerabilities of machine learning models: slightly
perturbing a model's input data can alter its output. In this paper, we …

One-for-Many: A flexible adversarial attack on different DNN-based systems

AE Baia - 2023 - flore.unifi.it
Deep Neural Networks (DNNs) have become the de facto standard technology in most
computer vision applications due to their exceptional performance and versatile applicability …