A systematic review of adversarial machine learning attacks, defensive controls and technologies

J Malik, R Muthalagu, PM Pawar - IEEE Access, 2024 - ieeexplore.ieee.org
Adversarial machine learning (AML) attacks have become a major concern for organizations
in recent years, as AI has become the industry's focal point and GenAI applications have …

Double-win quant: Aggressively winning robustness of quantized deep neural networks via random precision training and inference

Y Fu, Q Yu, M Li, V Chandra… - … Conference on Machine …, 2021 - proceedings.mlr.press
Quantization is promising in enabling powerful yet complex deep neural networks (DNNs) to
be deployed into resource constrained platforms. However, quantized DNNs are vulnerable …

2-in-1 accelerator: Enabling random precision switch for winning both adversarial robustness and efficiency

Y Fu, Y Zhao, Q Yu, C Li, Y Lin - MICRO-54: 54th Annual IEEE/ACM …, 2021 - dl.acm.org
The recent breakthroughs of deep neural networks (DNNs) and the advent of billions of
Internet of Things (IoT) devices have excited an explosive demand for intelligent IoT devices …

Improving adversarial robustness in weight-quantized neural networks

C Song, E Fallon, H Li - arXiv preprint arXiv:2012.14965, 2020 - arxiv.org
Neural networks are getting deeper and more computation-intensive nowadays.
Quantization is a useful technique in deploying neural networks on hardware platforms and …

A layer-wise adversarial-aware quantization optimization for improving robustness

C Song, R Ranjan, H Li - arXiv preprint arXiv:2110.12308, 2021 - arxiv.org
Neural networks are achieving better accuracy at the expense of higher energy and computational cost. After
quantization, the cost can be greatly reduced, and the quantized models are more hardware …

Analyzing and improving the robustness of tabular classifiers using counterfactual explanations

P Rasouli, IC Yu - 2021 20th IEEE International Conference on …, 2021 - ieeexplore.ieee.org
Recent studies have revealed that Machine Learning (ML) models are vulnerable to
adversarial perturbations. Such perturbations can be intentionally or accidentally added to …

Enhancing Dehaze Method in Real hill Based Images using Gaussian Filter Over Gabor Filter for Better Accuracy

SH Venkata Sai, S Parthiban… - 2023 Second …, 2023 - ieeexplore.ieee.org
Enhancing dehaze method in real hill based images using Gaussian filter over Gabor filter
for better exactness. The Gaussian filter (N = 10) and Gabor filter method (N = 10), these two …

Algorithm-Hardware Co-Design Towards Efficient and Robust Edge Vision Applications

Y Fu - 2022 - search.proquest.com
The recent breakthroughs of deep neural networks (DNNs) and the advent of billions of
Internet of Things (IoT) devices have excited an explosive demand for intelligent IoT devices …

[PDF] Local Explainability of Tabular Machine Learning Models and its Impact on Model Reliability

P Rasouli - 2023 - duo.uio.no
ML models are widely used in real-world applications, but their increasing complexity has
made them opaque black boxes, hindering their safe adoption in critical areas. This thesis …

Robustness Analysis and Improvement in Neural Networks and Neuromorphic Computing

C Song - 2021 - search.proquest.com
Deep learning and neural networks have great potential but remain at risk. The so-called
adversarial attacks, which apply small perturbations on input samples to fool models …