Adversarial learning targeting deep neural network classification: A comprehensive review of defenses against attacks
With the wide deployment of machine learning (ML)-based systems for a variety of applications
including medical, military, automotive, genomic, multimedia, and social networking, there is …
ActiveNeRF: Learning where to see with uncertainty estimation
Recently, Neural Radiance Fields (NeRF) have shown promising performance on
reconstructing 3D scenes and synthesizing novel views from a sparse set of 2D images …
A review of deep learning security and privacy defensive techniques
In recent years, deep learning has shown excellent performance in different areas
such as image recognition, pattern matching, and even cybersecurity. Deep learning has …
CADE: Detecting and explaining concept drift samples for security applications
Concept drift poses a critical challenge to deploying machine learning models to solve practical
security problems. Due to the dynamic behavior changes of attackers (and/or the benign …
A survey on decentralized federated learning
E Gabrielli, G Pica, G Tolomei - arXiv preprint arXiv:2308.04604, 2023 - arxiv.org
In recent years, federated learning (FL) has become a very popular paradigm for training
distributed, large-scale, and privacy-preserving machine learning (ML) systems. In contrast …
How to steal a machine learning classifier with deep learning
This paper presents an exploratory machine learning attack based on deep learning to infer
the functionality of an arbitrary classifier by polling it as a black box, and using returned …
Poisoning attacks and defenses on artificial intelligence: A survey
Machine learning models have been widely adopted in several fields. However, recent
studies have shown several vulnerabilities to attacks with the potential to jeopardize the …
Cognition-based networks: A new perspective on network optimization using learning and distributed intelligence
In response to the new challenges in the design and operation of communication networks,
and taking inspiration from how living beings deal with complexity and scalability, in this …
Living-off-the-land command detection using active learning
In recent years, enterprises have been targeted by advanced adversaries who leverage
creative ways to infiltrate their systems and move laterally to gain access to critical data. One …
Handling adversarial concept drift in streaming data
TS Sethi, M Kantardzic - Expert systems with applications, 2018 - Elsevier
Classifiers operating in a dynamic, real-world environment are vulnerable to adversarial
activity, which causes the data distribution to change over time. These changes are …