Deep randomized neural networks

C Gallicchio, S Scardapane - Recent Trends in Learning From Data …, 2020 - Springer
Randomized Neural Networks explore the behavior of neural systems where the
majority of connections are fixed, either in a stochastic or a deterministic fashion. Typical …
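
A minimal sketch of the idea this line of work studies: a hidden layer whose random weights stay fixed, with only a linear readout trained (ELM/random-feature style). The data, layer width, and ridge penalty below are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical): learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# Hidden layer with *fixed* random weights: only the linear readout is trained.
n_hidden = 100
W = rng.standard_normal((1, n_hidden))   # fixed random input weights
b = rng.standard_normal(n_hidden)        # fixed random biases
H = np.tanh(X @ W + b)                   # random nonlinear features

# Closed-form ridge-regression readout (the only trained parameters).
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

print("train MSE:", np.mean((H @ beta - y) ** 2))
```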

Sparse random neural networks for online anomaly detection on sensor nodes

S Leroux, P Simoens - Future Generation Computer Systems, 2023 - Elsevier
Whether it is used for predictive maintenance, intrusion detection or surveillance, on-device
anomaly detection is a very valuable functionality in sensor and Internet-of-things (IoT) …
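
One way to picture the approach, as a sketch under assumptions rather than the paper's exact method: a fixed, sparse random network embeds sensor readings, and a reading is flagged when its embedding falls far from the region occupied by normal data. The layer sizes, sparsity level, and distance-threshold score below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_random_layer(d_in, d_out, density=0.1):
    """Random weights where most entries are exactly zero (cheap on sensor nodes)."""
    W = rng.standard_normal((d_in, d_out))
    W *= rng.random((d_in, d_out)) < density  # keep ~10% of connections
    return W

# Fixed, untrained sparse random network used as a feature extractor.
W1, W2 = sparse_random_layer(16, 64), sparse_random_layer(64, 32)
embed = lambda X: np.tanh(np.tanh(X @ W1) @ W2)

# "Normal" sensor readings define a reference region in embedding space.
X_normal = rng.standard_normal((500, 16))
mu = embed(X_normal).mean(axis=0)
thresh = np.quantile(np.linalg.norm(embed(X_normal) - mu, axis=1), 0.99)

# Online scoring: flag a new reading whose embedding is far from normal.
x_new = rng.standard_normal((1, 16)) + 5.0  # shifted, likely anomalous
score = np.linalg.norm(embed(x_new) - mu)
print("anomaly" if score > thresh else "normal", score)
```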

Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior

Z Wang, B Kim, LP Kaelbling - Advances in Neural …, 2018 - proceedings.neurips.cc
Bayesian optimization usually assumes that a Bayesian prior is given. However, the strong
theoretical guarantees in Bayesian optimization are often regrettably compromised in …
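
The remedy the snippet points toward is estimating the unknown prior from related tasks. A rough sketch of that idea, assuming past tasks were evaluated on a shared candidate grid: use their empirical mean and covariance as the GP prior, then run UCB-style Bayesian optimization on a new task. The grid, task generator, noise terms, and UCB constant here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared candidate grid; past tasks supply function values on this grid.
grid = np.linspace(0, 1, 50)
F = np.stack([np.sin(5 * grid + rng.uniform(0, 2)) for _ in range(30)])  # 30 past tasks

# Estimate the unknown GP prior empirically from the meta-training tasks.
mu0 = F.mean(axis=0)                                      # empirical prior mean
K0 = np.cov(F, rowvar=False) + 1e-6 * np.eye(len(grid))   # empirical prior covariance

# GP-UCB on a new task using the *estimated* prior.
f_new = np.sin(5 * grid + 1.3)
idx, obs = [], []
for t in range(10):
    if idx:
        Kxx = K0[np.ix_(idx, idx)] + 1e-4 * np.eye(len(idx))
        Ksx = K0[:, idx]
        alpha = np.linalg.solve(Kxx, np.array(obs) - mu0[idx])
        mu = mu0 + Ksx @ alpha
        var = np.diag(K0) - np.sum(Ksx * np.linalg.solve(Kxx, Ksx.T).T, axis=1)
    else:
        mu, var = mu0, np.diag(K0)
    i = int(np.argmax(mu + 2.0 * np.sqrt(np.maximum(var, 0))))  # UCB acquisition
    idx.append(i)
    obs.append(f_new[i])

print("best value found:", max(obs))
```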

Deep neural networks with multi-branch architectures are intrinsically less non-convex

H Zhang, J Shao… - The 22nd International …, 2019 - proceedings.mlr.press
Several recently proposed neural network architectures, such as ResNeXt, Inception,
Xception, SqueezeNet and Wide ResNet, are based on the design idea of having multiple …
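
A multi-branch block in the sense of the title is several narrow subnetworks applied in parallel and aggregated; the paper argues the loss landscape becomes less non-convex as the number of branches grows. A toy forward pass is below; aggregation by averaging is an assumption here, since it varies by architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_params(d_in, width, d_out):
    """One narrow two-layer subnetwork (a single branch)."""
    return rng.standard_normal((d_in, width)), rng.standard_normal((width, d_out))

def multi_branch_forward(x, branches):
    """Average of K parallel subnetworks, ResNeXt/Inception-style aggregation."""
    return sum(np.maximum(x @ W1, 0) @ W2 for W1, W2 in branches) / len(branches)

K = 8  # more (narrower) branches -> loss surface closer to convex, per the paper
branches = [branch_params(10, 4, 1) for _ in range(K)]
x = rng.standard_normal((5, 10))
print(multi_branch_forward(x, branches).shape)  # (5, 1)
```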

Orthogonal over-parameterized training

W Liu, R Lin, Z Liu, JM Rehg, L Paull… - Proceedings of the …, 2021 - openaccess.thecvf.com
The inductive bias of a neural network is largely determined by the architecture and the
training algorithm. To achieve good generalization, how to effectively train a neural network …
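
The approach keeps the randomly initialized weights fixed and learns only an orthogonal transform applied on top of them, which preserves the spectrum of the initialization. A sketch using the Cayley transform, one standard way to parameterize orthogonal matrices; the specific parameterization and sizes here are illustrative, not necessarily the paper's choice.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 6
W0 = rng.standard_normal((d, d))  # fixed, randomly initialized weights (never trained)

# Learnable parameters form a skew-symmetric matrix A = M - M^T;
# the Cayley transform maps it to an orthogonal matrix R.
M = 0.1 * rng.standard_normal((d, d))  # stands in for learned parameters
A = M - M.T
R = np.linalg.solve(np.eye(d) + A, np.eye(d) - A)  # R = (I + A)^{-1} (I - A)

W = R @ W0  # effective weight: an orthogonal transform of the random init

print("R orthogonal?", np.allclose(R @ R.T, np.eye(d), atol=1e-8))
print("singular values preserved?",
      np.allclose(np.linalg.svd(W, compute_uv=False),
                  np.linalg.svd(W0, compute_uv=False)))
```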

Regularizing neural networks via minimizing hyperspherical energy

R Lin, W Liu, Z Liu, C Feng, Z Yu… - Proceedings of the …, 2020 - openaccess.thecvf.com
Inspired by the Thomson problem in physics, where the distribution of mutually repelling
electrons on a unit sphere can be modeled via minimizing some potential energy …
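
Concretely, the regularizer treats the normalized neuron weight vectors like charged particles on a sphere and penalizes configurations where they crowd together. A sketch of such a hyperspherical-energy term with an inverse-distance potential; the exponent s and the regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def hyperspherical_energy(W, s=1.0, eps=1e-8):
    """Thomson-style potential of neuron directions: sum_{i<j} 1 / ||w_i - w_j||^s,
    with each row of W first projected onto the unit sphere."""
    Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
    D = np.linalg.norm(Wn[:, None, :] - Wn[None, :, :], axis=-1)  # pairwise distances
    iu = np.triu_indices(len(W), k=1)                             # each pair once
    return np.sum(1.0 / (D[iu] ** s + eps))

W = rng.standard_normal((32, 16))          # 32 neurons, 16-dim weight vectors
reg = 1e-3 * hyperspherical_energy(W)      # added to the task loss during training
print(reg)
```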

Every local minimum value is the global minimum value of induced model in nonconvex machine learning

K Kawaguchi, J Huang, LP Kaelbling - Neural Computation, 2019 - direct.mit.edu
For nonconvex optimization in machine learning, this article proves that every local minimum
achieves the globally optimal value of the perturbable gradient basis model at any …

Deep kernel learning networks with multiple learning paths

P Xu, Y Wang, X Chen, Z Tian - ICASSP 2022-2022 IEEE …, 2022 - ieeexplore.ieee.org
This paper proposes deep kernel learning networks with multiple learning paths (DKL-MLP)
for nonlinear function approximation. Leveraging the random feature (RF) mapping …
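
The random feature map referred to here is, in the standard construction of Rahimi and Recht, a randomized cosine feature whose inner products approximate an RBF kernel; DKL-MLP builds on such maps along multiple paths. A sketch of the basic map only, with illustrative bandwidth and feature count:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features=256, gamma=1.0):
    """Rahimi-Recht map: z(x)^T z(y) approximates the RBF kernel exp(-gamma ||x-y||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b), (W, b)

X = rng.standard_normal((100, 5))
Z, _ = random_fourier_features(X, n_features=2048, gamma=0.5)

# Compare the approximation against the exact RBF kernel on one pair.
exact = np.exp(-0.5 * np.linalg.norm(X[0] - X[1]) ** 2)
print("exact:", exact, "RFF approx:", Z[0] @ Z[1])
```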

Diffusion Random Feature Model

E Saha, G Tran - arxiv preprint arxiv:2310.04417, 2023 - arxiv.org
Diffusion probabilistic models have been successfully used to generate data from noise.
However, most diffusion models are computationally expensive and difficult to interpret with …
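
For context only: the "generate data from noise" mechanism referenced here is the standard DDPM-style forward process sketched below, which a learned model is trained to invert. This is generic background, not the paper's random feature construction; the schedule constants are the usual illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard DDPM-style forward process:
# x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,  eps ~ N(0, I).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

x0 = rng.standard_normal(8)   # a toy "data" vector
t = 500
eps = rng.standard_normal(8)
xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

# A generative model learns to invert this noising step by step; the cited paper
# studies a random-feature-based model in this setting to cut cost and aid
# interpretability (per its abstract).
print("signal fraction at t=500:", np.sqrt(alpha_bar[t]))
```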