Explainable AI: A review of machine learning interpretability methods

P Linardatos, V Papastefanopoulos, S Kotsiantis - Entropy, 2020 - mdpi.com
Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption,
with machine learning systems demonstrating superhuman performance in a significant …

Machine learning in mental health: A systematic review of the HCI literature to support the development of effective and implementable ML systems

A Thieme, D Belgrave, G Doherty - ACM Transactions on Computer …, 2020 - dl.acm.org
High prevalence of mental illness and the need for effective mental health care, combined
with recent advances in AI, have led to an increase in explorations of how the field of machine …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Robust physical-world attacks on deep learning visual classification

K Eykholt, I Evtimov, E Fernandes… - Proceedings of the …, 2018 - openaccess.thecvf.com
Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to
adversarial examples, resulting from small-magnitude perturbations added to the input …
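
The perturbations referred to here are small, deliberately crafted changes to the input. As a generic illustration of the idea (not the physical-world attack studied in this paper), a minimal fast gradient sign method (FGSM) sketch in PyTorch might look as follows, assuming a standard classifier with image tensors scaled to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module,
                 x: torch.Tensor,
                 label: torch.Tensor,
                 epsilon: float = 8 / 255) -> torch.Tensor:
    """Craft a small-magnitude adversarial perturbation with FGSM.

    Each pixel moves by at most `epsilon` in the direction that increases
    the classification loss, which is often enough to change the predicted
    class while remaining visually imperceptible.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()      # one signed gradient step
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in the valid range

# Usage (hypothetical names): x_adv = fgsm_perturb(classifier, images, labels)
```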

Countering adversarial images using input transformations

C Guo, M Rana, M Cisse, L Van Der Maaten - arXiv preprint arXiv …, 2017 - arxiv.org
This paper investigates strategies that defend against adversarial-example attacks on image-
classification systems by transforming the inputs before feeding them to the system …
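
The defense described here preprocesses inputs before classification. A minimal sketch of one such transformation, a lossy JPEG round-trip, is shown below; the PIL/NumPy helper is an illustrative assumption rather than the paper's exact pipeline:

```python
import io

import numpy as np
from PIL import Image

def jpeg_round_trip(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Re-encode an image as JPEG and decode it again before classification.

    The lossy round-trip discards much of the high-frequency detail that
    small adversarial perturbations tend to occupy, while leaving the
    semantic content of the image largely intact.
    """
    buffer = io.BytesIO()
    Image.fromarray(image.astype(np.uint8)).save(buffer, format="JPEG",
                                                 quality=quality)
    buffer.seek(0)
    return np.asarray(Image.open(buffer))

# Usage (hypothetical names): logits = classifier(jpeg_round_trip(x))
```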

Improving transferability of adversarial examples with input diversity

C Xie, Z Zhang, Y Zhou, S Bai, J Wang… - Proceedings of the …, 2019 - openaccess.thecvf.com
Though CNNs have achieved the state-of-the-art performance on various vision tasks, they
are vulnerable to adversarial examples---crafted by adding human-imperceptible …

Audio adversarial examples: Targeted attacks on speech-to-text

N Carlini, D Wagner - 2018 IEEE Security and Privacy …, 2018 - ieeexplore.ieee.org
We construct targeted audio adversarial examples on automatic speech recognition. Given
any audio waveform, we can produce another that is over 99.9% similar, but transcribes as …

Mitigating adversarial effects through randomization

C Xie, J Wang, Z Zhang, Z Ren, A Yuille - arXiv preprint arXiv:1711.01991, 2017 - arxiv.org
Convolutional neural networks have demonstrated high accuracy on various tasks in recent
years. However, they are extremely vulnerable to adversarial examples. For example …
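
The mitigation named in the title randomizes the input at inference time. A minimal sketch of random resizing plus random zero-padding before the forward pass, written for a PyTorch model with NCHW tensors as an assumed setup (not the paper's exact configuration):

```python
import random

import torch
import torch.nn.functional as F

def randomized_forward(model: torch.nn.Module,
                       x: torch.Tensor,
                       target_size: int = 331) -> torch.Tensor:
    """Randomly resize and zero-pad the input before the forward pass.

    The random geometry makes it harder for a fixed adversarial
    perturbation to stay aligned with the features it was crafted against.
    """
    _, _, h, _ = x.shape
    # Resize to a random resolution strictly below the target size.
    new_size = random.randint(min(h, target_size - 1), target_size - 1)
    x = F.interpolate(x, size=(new_size, new_size),
                      mode="bilinear", align_corners=False)
    # Distribute the remaining pixels as random zero-padding on each side.
    pad = target_size - new_size
    left, top = random.randint(0, pad), random.randint(0, pad)
    x = F.pad(x, (left, pad - left, top, pad - top), value=0.0)
    return model(x)

# Usage (hypothetical names): averaging several randomized passes trades
# compute for a more stable prediction, e.g.
#   probs = torch.stack([randomized_forward(net, x).softmax(-1)
#                        for _ in range(10)]).mean(0)
```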

Adversarial attacks and defenses in deep learning

K Ren, T Zheng, Z Qin, X Liu - Engineering, 2020 - Elsevier
With the rapid developments of artificial intelligence (AI) and deep learning (DL) techniques,
it is critical to ensure the security and robustness of the deployed algorithms. Recently, the …

Adversarially robust generalization requires more data

L Schmidt, S Santurkar, D Tsipras… - Advances in neural …, 2018 - proceedings.neurips.cc
Machine learning models are often susceptible to adversarial perturbations of their
inputs. Even small perturbations can cause state-of-the-art classifiers with high "standard" …