Relative loss bounds for single neurons
DP Helmbold, J Kivinen… - IEEE Transactions on …, 1999 - ieeexplore.ieee.org
We analyze and compare the well-known gradient descent algorithm and the more recent
exponentiated gradient algorithm for training a single neuron with an arbitrary transfer …
A new convex objective function for the supervised learning of single-layer neural networks
This paper proposes a novel supervised learning method for single-layer feedforward neural
networks. This approach uses an alternative objective function to that based on the MSE …
A global optimum approach for one-layer neural networks
The article presents a method for learning the weights in one-layer feed-forward neural
networks minimizing either the sum of squared errors or the maximum absolute error …
LANN-SVD: a non-iterative SVD-based learning algorithm for one-layer neural networks
O Fontenla-Romero, B Pérez-Sánchez… - IEEE transactions on …, 2017 - ieeexplore.ieee.org
In the scope of data analytics, the volume of a data set can be defined as a product of
instance size and dimensionality of the data. In many real problems, data sets are mainly …
DSVD-autoencoder: a scalable distributed privacy-preserving method for one-class classification
O Fontenla-Romero, B Pérez-Sánchez… - … Journal of Intelligent …, 2021 - Wiley Online Library
One‐class classification has gained interest as a solution to certain kinds of problems typical
in a wide variety of real environments like anomaly or novelty detection. Autoencoder is the …
An incremental non-iterative learning method for one-layer feedforward neural networks
O Fontenla-Romero, B Perez-Sanchez… - Applied Soft …, 2018 - Elsevier
In machine learning literature, and especially in the literature referring to artificial neural
networks, most methods are iterative and operate in batch mode. However, many of the …
On the uniqueness of weights in single-layer perceptrons
FM Coetzee, VL Stonick - IEEE transactions on neural networks, 1996 - ieeexplore.ieee.org
In this paper the geometric formulation of the single layer perceptron weight optimization
problem previously described by Coetzee et al. (1993, 1996) is combined with results from …
Some notes on perceptron learning
M Budinich - Journal of Physics A: Mathematical and General, 1993 - iopscience.iop.org
The author extends the geometrical approach to the perceptron and shows that, given n
examples, learning is of maximal difficulty when the number of inputs d is such that n = 5d …
Properties of feedforward neural networks
In his seminal paper Cover (1965) used geometrical arguments to compute the probability of
separating two sets of patterns with a perceptron. The authors extend these ideas to …
A temporal memory network with state-dependent thresholds
J Ghosh, S Wang - IEEE International Conference on Neural …, 1993 - ieeexplore.ieee.org
A fully connected recurrent network that is capable of storing, recalling, and generating a
pattern sequence is presented. This network reproduces a memorized sequence by …