Learning MLPs on graphs: A unified view of effectiveness, robustness, and efficiency

Y Tian, C Zhang, Z Guo, X Zhang… - … Conference on Learning …, 2022 - openreview.net
While Graph Neural Networks (GNNs) have demonstrated their efficacy in dealing with non-
Euclidean structural data, they are difficult to deploy in real applications due to the …

Sensitivity analysis of Takagi–Sugeno fuzzy neural network

J Wang, Q Chang, T Gao, K Zhang, NR Pal - Information Sciences, 2022 - Elsevier
In this paper, we first define a measure of statistical sensitivity of a zero-order Takagi–
Sugeno (TS) fuzzy neural network (FNN) with respect to perturbation of weights and …
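
The snippet names the core object: a zero-order TS fuzzy system whose output is a firing-strength-weighted average of constant consequents, analyzed under weight perturbation. Below is a minimal illustrative sketch, not the paper's closed-form statistical sensitivity measure: a toy zero-order TS system with Gaussian memberships, with output deviation under Gaussian perturbation of the consequents estimated by Monte Carlo. All parameters, noise levels, and sample counts are arbitrary placeholders.

```python
# Toy zero-order Takagi-Sugeno fuzzy system + Monte Carlo sensitivity estimate.
# Illustrative only; not the statistical sensitivity measure defined in the paper.
import numpy as np

rng = np.random.default_rng(0)

def ts_output(x, centers, sigmas, consequents):
    """Zero-order TS inference: firing-strength-weighted average of constants."""
    # Firing strength of each rule: product of Gaussian memberships per input.
    w = np.exp(-((x - centers) ** 2) / (2 * sigmas ** 2)).prod(axis=1)
    return (w @ consequents) / (w.sum() + 1e-12)

# Arbitrary toy system: 5 rules over 2 inputs.
centers = rng.uniform(-1, 1, size=(5, 2))
sigmas = np.full((5, 2), 0.5)
consequents = rng.normal(size=5)

def sensitivity(noise_std, n_samples=2000):
    """Mean squared output deviation under Gaussian perturbation of consequents."""
    dev = 0.0
    for _ in range(n_samples):
        x = rng.uniform(-1, 1, size=2)
        noisy = consequents + rng.normal(scale=noise_std, size=5)
        dev += (ts_output(x, centers, sigmas, noisy)
                - ts_output(x, centers, sigmas, consequents)) ** 2
    return dev / n_samples

for s in (0.01, 0.05, 0.1):
    print(f"sigma={s:.2f}  estimated sensitivity={sensitivity(s):.6f}")
```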

Exploring Winograd convolution for cost-effective neural network fault tolerance

X Xue, C Liu, B Liu, H Huang, Y Wang… - … Transactions on Very …, 2023 - ieeexplore.ieee.org
Winograd convolution is generally utilized to optimize convolution performance and computational
efficiency because it reduces the number of multiplication operations, but the reliability issues brought …
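
As background for why Winograd is used at all, here is a minimal 1-D Winograd F(2,3) sketch: it produces two convolution outputs from a 3-tap filter with 4 element-wise multiplications instead of 6, using the standard F(2,3) transform matrices. The paper's fault-tolerance analysis is not modeled; the direct-correlation result is included only to confirm the transform.

```python
# Minimal 1-D Winograd F(2,3): y = A^T [(G g) * (B^T d)].
import numpy as np

BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of the valid correlation of a 4-vector d with a 3-tap filter g."""
    U = G @ g    # transformed filter (4 values)
    V = BT @ d   # transformed input tile (4 values)
    M = U * V    # the 4 element-wise multiplications
    return AT @ M

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])

reference = np.array([d[i:i + 3] @ g for i in range(2)])  # direct correlation
print(winograd_f23(d, g), reference)  # both should match: [4.5, 6.0]
```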

FAT: Training neural networks for reliable inference under hardware faults

U Zahid, G Gambardella, NJ Fraser… - 2020 IEEE …, 2020 - ieeexplore.ieee.org
Deep neural networks (DNNs) are state-of-the-art algorithms for multiple applications,
ranging from image classification to speech recognition. While providing excellent …

A weight perturbation-based regularisation technique for convolutional neural networks and the application in medical imaging

A Khatami, A Nazari, A Khosravi, CP Lim… - Expert systems with …, 2020 - Elsevier
A convolutional neural network has the capacity to learn multiple representation levels and
abstraction in order to provide a better understanding of image data. In addition, a good …

Learning Optimized Structure of Neural Networks by Hidden Node Pruning With Regularization

X **e, H Zhang, J Wang, Q Chang… - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
We propose three different methods to determine the optimal number of hidden nodes
based on L1 regularization for a multilayer perceptron network. The first two methods …
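
A hedged sketch of the general idea the snippet describes, not the paper's three specific methods: penalize each hidden unit's outgoing weights with an L1 term during training, then treat units whose outgoing weights are driven to (near) zero as prunable. The PyTorch model, toy data, L1 strength, and pruning threshold below are all assumed for illustration.

```python
# L1-regularized training of a one-hidden-layer MLP, followed by hidden-node
# pruning based on outgoing-weight magnitude. Illustrative sketch only.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
y = (X[:, 0] * X[:, 1] > 0).float().unsqueeze(1)   # toy binary target

hidden = 64
net = nn.Sequential(nn.Linear(10, hidden), nn.Tanh(), nn.Linear(hidden, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1e-3                                          # assumed L1 strength

for _ in range(500):
    opt.zero_grad()
    loss = bce(net(X), y)
    # L1 on the output layer's columns: one column per hidden node.
    loss = loss + lam * net[2].weight.abs().sum()
    loss.backward()
    opt.step()

# A hidden node is a pruning candidate when its outgoing weights are near zero.
importance = net[2].weight.abs().sum(dim=0)         # shape: (hidden,)
keep = importance > 1e-2                            # assumed threshold
print(f"kept {int(keep.sum())} of {hidden} hidden nodes")
```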

NOSMOG: Learning noise-robust and structure-aware MLPs on graphs

Y Tian, C Zhang, Z Guo, X Zhang… - arXiv preprint arXiv …, 2022 - arxiv.org
While Graph Neural Networks (GNNs) have demonstrated their efficacy in dealing with non-
Euclidean structural data, they are difficult to deploy in real applications due to the …

A simple and efficient tensor calculus

S Laue, M Mitterreiter, J Giesen - … of the AAAI Conference on Artificial …, 2020 - ojs.aaai.org
Computing derivatives of tensor expressions, also known as tensor calculus, is a
fundamental task in machine learning. A key concern is the efficiency of evaluating the …
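
A small worked example of the kind of derivative such a system must produce, checked numerically (this illustrates the task, not the paper's algorithm): for f(x) = x^T A x, the gradient is (A + A^T) x.

```python
# Standard tensor-calculus identity, verified with central finite differences.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
x = rng.normal(size=4)

analytic = (A + A.T) @ x

eps = 1e-6
numeric = np.zeros_like(x)
for i in range(4):
    e = np.zeros(4)
    e[i] = eps
    numeric[i] = ((x + e) @ A @ (x + e) - (x - e) @ A @ (x - e)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-5))  # True
```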

On the robustness of Kolmogorov-Arnold networks: An adversarial perspective

T Alter, R Lapid, M Sipper - arXiv preprint arXiv:2408.13809, 2024 - arxiv.org
Kolmogorov-Arnold Networks (KANs) have recently emerged as a novel approach to
function approximation, demonstrating remarkable potential in various domains. Despite …

Improving noise tolerance of mixed-signal neural networks

M Klachko, MR Mahmoodi… - 2019 International Joint …, 2019 - ieeexplore.ieee.org
Mixed-signal hardware accelerators for deep learning achieve orders of magnitude better
power efficiency than their digital counterparts. In the ultra-low power consumption regime …