Learning MLPs on graphs: A unified view of effectiveness, robustness, and efficiency
While Graph Neural Networks (GNNs) have demonstrated their efficacy in dealing with non-
Euclidean structural data, they are difficult to deploy in real applications due to the …
Sensitivity analysis of Takagi–Sugeno fuzzy neural network
In this paper, we first define a measure of statistical sensitivity of a zero-order Takagi–
Sugeno (TS) fuzzy neural network (FNN) with respect to perturbation of weights and …
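The sensitivity notion this entry studies can be illustrated generically: measure how much a model's output moves when its weights are perturbed. Below is a minimal Monte Carlo sketch of such a statistical sensitivity estimate; the function name and estimator are my own illustration, and the paper's exact definition for zero-order TS fuzzy neural networks may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def statistical_sensitivity(f, w, x, sigma=0.05, n=400):
    """Root-mean-square output deviation of f(w, x) under
    i.i.d. N(0, sigma^2) perturbations of the weights w."""
    base = f(w, x)
    devs = [(f(w + rng.normal(0.0, sigma, w.shape), x) - base) ** 2
            for _ in range(n)]
    return float(np.sqrt(np.mean(devs)))
```

For a linear model f(w, x) = w·x the estimate approaches sigma·‖x‖, which gives a quick sanity check on the estimator.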
Exploring Winograd convolution for cost-effective neural network fault tolerance
Winograd is generally utilized to optimize convolution performance and computational
efficiency because of the reduced multiplication operations, but the reliability issues brought …
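The multiplication savings the snippet refers to can be seen in the smallest Winograd case, F(2,3): two outputs of a 3-tap 1-D filter cost 4 multiplications instead of the direct method's 6. The sketch below is the standard textbook transform, not code from this paper.

```python
def winograd_f23(d, g):
    """Winograd F(2,3): 4 input samples d, 3 filter taps g -> 2 outputs,
    using 4 multiplications (m1..m4) instead of 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter-side transform (precomputable once per filter).
    a = (g0 + g1 + g2) / 2.0
    b = (g0 - g1 + g2) / 2.0
    # The four multiplications.
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * a
    m3 = (d2 - d1) * b
    m4 = (d1 - d3) * g2
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_conv(d, g):
    """Reference: valid 1-D correlation with a 3-tap filter."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(len(d) - 2)]
```

The intermediate products m1..m4 are exactly the values whose corruption by hardware faults the paper's reliability analysis is concerned with.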
FAT: Training neural networks for reliable inference under hardware faults
Deep neural networks (DNNs) are state-of-the-art algorithms for multiple applications,
spanning from image classification to speech recognition. While providing excellent …
A weight perturbation-based regularisation technique for convolutional neural networks and the application in medical imaging
A convolutional neural network has the capacity to learn multiple representation levels and
abstraction in order to provide a better understanding of image data. In addition, a good …
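The core idea of weight-perturbation regularisation can be sketched in a few lines: evaluate the training loss at noisy copies of the weights so the optimiser is steered toward solutions that are insensitive to small weight changes. This is my own generic illustration, assuming Gaussian noise; the paper's specific perturbation scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_loss(loss_fn, w, sigma=0.01, n_samples=16):
    """Average loss over n_samples Gaussian perturbations of the
    weights w; minimising this favours flat, robust minima."""
    return float(np.mean([loss_fn(w + rng.normal(0.0, sigma, w.shape))
                          for _ in range(n_samples)]))
```

In practice the perturbed loss (or a mixture of clean and perturbed losses) replaces the ordinary training objective at each step.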
Learning Optimized Structure of Neural Networks by Hidden Node Pruning With L1 Regularization
X Xie, H Zhang, J Wang, Q Chang… - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
We propose three different methods to determine the optimal number of hidden nodes
based on L1 regularization for a multilayer perceptron network. The first two methods …
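The pruning recipe the snippet describes can be sketched generically: train with an L1 penalty on each hidden node's outgoing weights, then delete nodes whose weights the penalty has driven (near) zero. The helper names and the threshold below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def l1_penalty(W_out):
    """Per-node and total L1 norm of outgoing weights.
    W_out has shape (n_hidden, n_outputs)."""
    per_node = np.abs(W_out).sum(axis=1)
    return per_node, float(per_node.sum())

def prune_hidden_nodes(W_in, W_out, tol=1e-3):
    """Remove hidden nodes whose outgoing L1 norm fell below tol.
    W_in has shape (n_inputs, n_hidden)."""
    per_node, _ = l1_penalty(W_out)
    keep = per_node > tol
    return W_in[:, keep], W_out[keep]
```

After pruning, the smaller network is typically fine-tuned briefly to recover any lost accuracy.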
NOSMOG: Learning noise-robust and structure-aware MLPs on graphs
While Graph Neural Networks (GNNs) have demonstrated their efficacy in dealing with non-
Euclidean structural data, they are difficult to deploy in real applications due to the …
A simple and efficient tensor calculus
Computing derivatives of tensor expressions, also known as tensor calculus, is a
fundamental task in machine learning. A key concern is the efficiency of evaluating the …
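A small worked example of the kind of tensor derivative such a calculus evaluates: for f(x) = xᵀAx the gradient is (A + Aᵀ)x. The finite-difference check below is a generic sanity test of that closed form, not this paper's algorithm.

```python
import numpy as np

def f(A, x):
    """Quadratic form f(x) = x^T A x."""
    return float(x @ A @ x)

def grad_f(A, x):
    """Closed-form gradient of the quadratic form: (A + A^T) x."""
    return (A + A.T) @ x

def numerical_grad(A, x, h=1e-6):
    """Central finite-difference gradient, for verification."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(A, x + e) - f(A, x - e)) / (2 * h)
    return g
```

The efficiency concern the abstract raises is about evaluating expressions like (A + Aᵀ)x without materialising unnecessary intermediates, which becomes significant for higher-order tensors.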
On the robustness of Kolmogorov-Arnold networks: An adversarial perspective
Kolmogorov-Arnold Networks (KANs) have recently emerged as a novel approach to
function approximation, demonstrating remarkable potential in various domains. Despite …
Improving noise tolerance of mixed-signal neural networks
Mixed-signal hardware accelerators for deep learning achieve orders of magnitude better
power efficiency than their digital counterparts. In the ultra-low power consumption regime …