Model architecture level privacy leakage in neural networks
Privacy leakage is one of the most critical issues in machine learning and has attracted
growing interest for tasks such as demonstrating potential threats in model attacks and …
PISA: Pixel skip-based attentional black-box adversarial attack
The studies on black-box and evolutionary algorithm-based adversarial attacks have
become increasingly popular due to the intractable acquisition of the structural knowledge of …
Suny: A visual interpretation framework for convolutional neural networks from a necessary and sufficient perspective
In spite of the ongoing evolution of deep learning, Convolutional Neural Networks (CNNs)
remain the de facto choice for numerous vision applications. To foster trust, researchers have …
A two-stage frequency-domain generation algorithm based on differential evolution for black-box adversarial samples
X Song, D Xu, C Peng, Y Zhang, Y Xue - Expert Systems with Applications, 2024 - Elsevier
The adversarial sample generation problem is a hot issue in the security field of deep learning.
Evolutionary algorithms have been widely used to solve this problem in recent years because …
Active forgetting via influence estimation for neural networks
The rapid explosion of user data, especially in applications of neural networks, involves
analyzing data collected from individuals, which brings convenience to life. Meanwhile …
Trustworthy adaptive adversarial perturbations in social networks
Deep neural networks have achieved excellent performance in various research and
applications, but they have proven to be susceptible to adversarial examples. Generating …
Adversarial example defense via perturbation grading strategy
Deep Neural Networks have been widely used in many fields. However, studies
have shown that DNNs are easily attacked by adversarial examples, which have tiny …
Multi-Objective Differential Evolution and Unseen Adversarial Sample Generation
Adversarial attacks expose the vulnerabilities of machine learning models that could alter
the models' outputs by slightly perturbing the input data of the models. In this paper, we …
One-for-Many: A flexible adversarial attack on different DNN-based systems
AE Baia - 2023 - flore.unifi.it
Deep Neural Networks (DNNs) have become the de facto standard technology in most
computer vision applications due to their exceptional performance and versatile applicability …