A systematic review of adversarial machine learning attacks, defensive controls and technologies
Adversarial machine learning (AML) attacks have become a major concern for organizations
in recent years, as AI has become the industry's focal point and GenAI applications have …
Double-win quant: Aggressively winning robustness of quantized deep neural networks via random precision training and inference
Quantization is promising in enabling powerful yet complex deep neural networks (DNNs) to
be deployed into resource-constrained platforms. However, quantized DNNs are vulnerable …
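The two quantization entries above revolve around the same idea: drawing the inference bit-width at random so that an adversarial perturbation cannot be tuned to one fixed precision. The sketch below illustrates that idea with a plain uniform quantizer in NumPy; the bit-width menu, the symmetric quantizer, and the single linear layer are illustrative assumptions, not the papers' actual training or accelerator scheme.

```python
# Minimal sketch of random-precision inference: quantize the operands of a
# layer at a bit-width drawn fresh on every forward pass (assumed scheme,
# not the papers' exact method).
import numpy as np

def uniform_quantize(x: np.ndarray, num_bits: int) -> np.ndarray:
    """Symmetric uniform quantization of x to num_bits, dequantized back to float."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

def random_precision_forward(x, w, bit_choices=(4, 6, 8), rng=None):
    """One linear layer whose operands are quantized at a freshly sampled precision."""
    rng = rng or np.random.default_rng()
    bits = int(rng.choice(bit_choices))   # a new bit-width is sampled per call
    return uniform_quantize(x, bits) @ uniform_quantize(w, bits), bits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, w = rng.standard_normal((1, 16)), rng.standard_normal((16, 8))
    y, bits = random_precision_forward(x, w, rng=rng)
    print(f"sampled {bits}-bit precision; output shape {y.shape}")
```

The randomness is the point: an attacker who crafts a perturbation against the 8-bit model has no guarantee it transfers to a 4-bit or 6-bit forward pass.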
2-in-1 accelerator: Enabling random precision switch for winning both adversarial robustness and efficiency
The recent breakthroughs of deep neural networks (DNNs) and the advent of billions of
Internet of Things (IoT) devices have excited an explosive demand for intelligent IoT devices …
Improving adversarial robustness in weight-quantized neural networks
Neural networks are getting deeper and more computation-intensive nowadays.
Quantization is a useful technique in deploying neural networks on hardware platforms and …
A layer-wise adversarial-aware quantization optimization for improving robustness
Neural networks achieve higher accuracy at the cost of greater energy and computation. After
quantization, this cost can be greatly reduced, and the quantized models are more hardware …
Analyzing and improving the robustness of tabular classifiers using counterfactual explanations
Recent studies have revealed that Machine Learning (ML) models are vulnerable to
adversarial perturbations. Such perturbations can be intentionally or accidentally added to …
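For readers unfamiliar with counterfactual explanations, the toy sketch below shows the basic mechanic the entry above builds on: nudge a feature of a tabular sample until the classifier's prediction flips. The greedy single-feature search, the synthetic data, and the logistic-regression model are assumptions for illustration only, not the paper's algorithm.

```python
# Toy counterfactual search for a tabular classifier: shift one feature at a
# time, in either direction, until the predicted class changes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

def simple_counterfactual(x, model, step=0.1, max_steps=200):
    """Greedily nudge one feature at a time (both directions) until the class flips."""
    original = model.predict(x.reshape(1, -1))[0]
    for feature in range(x.shape[0]):
        for direction in (1.0, -1.0):
            cf = x.copy()
            for _ in range(max_steps):
                cf[feature] += direction * step
                if model.predict(cf.reshape(1, -1))[0] != original:
                    return cf
    return None  # no flip found within the search budget

cf = simple_counterfactual(X[0], clf)
print("counterfactual found:", cf is not None)
```

The distance between the sample and its counterfactual is one crude proxy for robustness: the smaller the required change, the more fragile the decision.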
Enhancing Dehaze Method in Real Hill-Based Images Using Gaussian Filter Over Gabor Filter for Better Accuracy
SH Venkata Sai, S Parthiban… - 2023 Second …, 2023 - ieeexplore.ieee.org
Enhancing the dehaze method in real hill-based images using a Gaussian filter over a Gabor filter
for better accuracy. The Gaussian filter (N = 10) and the Gabor filter method (N = 10), these two …
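As a rough illustration of the Gaussian-filter side of the comparison above, the sketch below smooths a grayscale image with scipy.ndimage.gaussian_filter. The sigma value and the random stand-in image are assumptions; the snippet's N = 10 refers to the paper's comparison groups, not a filter parameter, and a full dehazing pipeline involves more than a single blur.

```python
# Illustrative Gaussian smoothing step of the kind compared against a Gabor
# filter in the entry above (assumed parameters, not the paper's setup).
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_smooth(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Apply an isotropic Gaussian blur to a grayscale image array."""
    return gaussian_filter(image.astype(float), sigma=sigma)

if __name__ == "__main__":
    hazy = np.random.default_rng(0).random((64, 64))  # stand-in for a hazy hill image
    smoothed = gaussian_smooth(hazy, sigma=2.0)
    print(smoothed.shape, smoothed.dtype)
```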
Algorithm-Hardware Co-Design Towards Efficient and Robust Edge Vision Applications
Y Fu - 2022 - search.proquest.com
The recent breakthroughs of deep neural networks (DNNs) and the advent of billions of
Internet of Things (IoT) devices have excited an explosive demand for intelligent IoT devices …
[PDF] Local Explainability of Tabular Machine Learning Models and Its Impact on Model Reliability
P Rasouli - 2023 - duo.uio.no
ML models are widely used in real-world applications, but their increasing complexity has
made them opaque black boxes, hindering their safe adoption in critical areas. This thesis …
Robustness Analysis and Improvement in Neural Networks and Neuromorphic Computing
C Song - 2021 - search.proquest.com
Deep learning and neural networks have great potential but remain at risk. So-called
adversarial attacks, which apply small perturbations to input samples to fool models …
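Since several entries in this list hinge on such input perturbations, a minimal FGSM-style sketch may help fix the idea: for a logistic model p = sigmoid(w·x + b), the input gradient of the cross-entropy loss is (p − y)·w, and the attack steps by eps in its sign direction. FGSM is used here as a generic illustration, not as the method of any particular work listed above.

```python
# Minimal FGSM-style perturbation against a hand-rolled logistic model:
# step the input by eps in the sign of the loss gradient to push the score down.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """Return x plus an eps-sized sign-of-gradient perturbation that increases the loss."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w            # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, b = rng.standard_normal(10), 0.0
    x, y = rng.standard_normal(10), 1.0
    x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
    print("clean score:", float(sigmoid(w @ x + b)),
          "adversarial score:", float(sigmoid(w @ x_adv + b)))
```

Even this one-step attack reliably lowers the score for the true class, which is the vulnerability the defenses surveyed in the entries above try to close.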