Adversarial learning targeting deep neural network classification: A comprehensive review of defenses against attacks

DJ Miller, Z Xiang, G Kesidis - Proceedings of the IEEE, 2020 - ieeexplore.ieee.org
With wide deployment of machine learning (ML)-based systems for a variety of applications
including medical, military, automotive, genomic, multimedia, and social networking, there is …

Activenerf: Learning where to see with uncertainty estimation

X Pan, Z Lai, S Song, G Huang - European Conference on Computer …, 2022 - Springer
Recently, Neural Radiance Fields (NeRF) has shown promising performance on
reconstructing 3D scenes and synthesizing novel views from a sparse set of 2D images …

A review of deep learning security and privacy defensive techniques

MI Tariq, NA Memon, S Ahmed… - Mobile Information …, 2020 - Wiley Online Library
In recent years, deep learning has shown excellent performance in different areas
like image recognition, pattern matching, and even cybersecurity. Deep learning has …

CADE: Detecting and explaining concept drift samples for security applications

L Yang, W Guo, Q Hao, A Ciptadi… - 30th USENIX Security …, 2021 - usenix.org
Concept drift poses a critical challenge to deploying machine learning models to solve practical
security problems. Due to the dynamic behavior changes of attackers (and/or the benign …

A survey on decentralized federated learning

E Gabrielli, G Pica, G Tolomei - arXiv preprint arXiv:2308.04604, 2023 - arxiv.org
In recent years, federated learning (FL) has become a very popular paradigm for training
distributed, large-scale, and privacy-preserving machine learning (ML) systems. In contrast …

How to steal a machine learning classifier with deep learning

Y Shi, Y Sagduyu, A Grushin - 2017 IEEE International …, 2017 - ieeexplore.ieee.org
This paper presents an exploratory machine learning attack based on deep learning to infer
the functionality of an arbitrary classifier by polling it as a black box, and using returned …

Poisoning attacks and defenses on artificial intelligence: A survey

MA Ramirez, SK Kim, HA Hamadi, E Damiani… - arXiv preprint arXiv …, 2022 - arxiv.org
Machine learning models have been widely adopted in several fields. However, recent
studies have shown several vulnerabilities to attacks with the potential to jeopardize the …

Cognition-based networks: A new perspective on network optimization using learning and distributed intelligence

M Zorzi, A Zanella, A Testolin, MDF De Grazia… - IEEE …, 2015 - ieeexplore.ieee.org
In response to the new challenges in the design and operation of communication networks,
and taking inspiration from how living beings deal with complexity and scalability, in this …

Living-off-the-land command detection using active learning

T Ongun, JW Stokes, JB Or, K Tian… - Proceedings of the 24th …, 2021 - dl.acm.org
In recent years, enterprises have been targeted by advanced adversaries who leverage
creative ways to infiltrate their systems and move laterally to gain access to critical data. One …

Handling adversarial concept drift in streaming data

TS Sethi, M Kantardzic - Expert systems with applications, 2018 - Elsevier
Classifiers operating in a dynamic, real-world environment are vulnerable to adversarial
activity, which causes the data distribution to change over time. These changes are …