Tolga Ergen
Research Scientist, LG AI Research
Verified email at stanford.edu · Homepage
Title · Cited by · Year
Unsupervised anomaly detection with LSTM neural networks
T Ergen, SS Kozat
IEEE Transactions on Neural Networks and Learning Systems 31 (8), 3127-3141, 2019
390 · 2019
Online training of LSTM networks in distributed systems for variable length data sequences
T Ergen, SS Kozat
IEEE Transactions on Neural Networks and Learning Systems 29 (10), 5159-5165, 2017
120 · 2017
Efficient online learning algorithms based on LSTM neural networks
T Ergen, SS Kozat
IEEE Transactions on Neural Networks and Learning Systems 29 (8), 3772-3783, 2017
117 · 2017
Neural networks are convex regularizers: Exact polynomial-time convex optimization formulations for two-layer networks
M Pilanci, T Ergen
International Conference on Machine Learning, 7695-7705, 2020
116 · 2020
Revealing the Structure of Deep Neural Networks via Convex Duality
T Ergen, M Pilanci
arXiv preprint arXiv:2002.09773, 2020
91* · 2020
Convex geometry and duality of over-parameterized neural networks
T Ergen, M Pilanci
Journal of Machine Learning Research 22 (212), 1-63, 2021
63 · 2021
Implicit Convex Regularizers of CNN Architectures: Convex Optimization of Two- and Three-Layer Networks in Polynomial Time
T Ergen, M Pilanci
arXiv preprint arXiv:2006.14798, 2020
50 · 2020
Vector-output ReLU neural network problems are copositive programs: Convex analysis of two-layer networks and polynomial-time algorithms
A Sahiner, T Ergen, J Pauly, M Pilanci
arXiv preprint arXiv:2012.13329, 2020
44 · 2020
Global optimality beyond two layers: Training deep ReLU networks via convex programs
T Ergen, M Pilanci
International Conference on Machine Learning, 2993-3003, 2021
41 · 2021
Demystifying batch normalization in ReLU networks: Equivalent convex optimization models and implicit regularization
T Ergen, A Sahiner, B Ozturkler, J Pauly, M Mardani, M Pilanci
arXiv preprint arXiv:2103.01499, 2021
35 · 2021
Unraveling attention via convex duality: Analysis and interpretations of vision transformers
A Sahiner, T Ergen, B Ozturkler, J Pauly, M Mardani, M Pilanci
International Conference on Machine Learning, 19050-19088, 2022
34 · 2022
Convex geometry of two-layer ReLU networks: Implicit autoencoding and interpretable models
T Ergen, M Pilanci
International Conference on Artificial Intelligence and Statistics, 4024-4033, 2020
32 · 2020
Energy-efficient LSTM networks for online learning
T Ergen, AH Mirza, SS Kozat
IEEE Transactions on Neural Networks and Learning Systems 31 (8), 3114-3126, 2019
25 · 2019
Hidden convexity of Wasserstein GANs: Interpretable generative models with closed-form solutions
A Sahiner, T Ergen, B Ozturkler, B Bartan, J Pauly, M Mardani, M Pilanci
arXiv preprint arXiv:2107.05680, 2021
21 · 2021
Path regularization: A convexity and sparsity inducing regularization for parallel ReLU networks
T Ergen, M Pilanci
Advances in Neural Information Processing Systems 36, 59761-59786, 2023
19 · 2023
Convex optimization for shallow neural networks
T Ergen, M Pilanci
2019 57th Annual Allerton Conference on Communication, Control, and …, 2019
18 · 2019
Parallel deep neural networks have zero duality gap
Y Wang, T Ergen, M Pilanci
arXiv preprint arXiv:2110.06482, 2021
16 · 2021
Convex neural autoregressive models: Towards tractable, expressive, and theoretically-backed models for sequential forecasting and generation
V Gupta, B Bartan, T Ergen, M Pilanci
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
16* · 2021
A novel distributed anomaly detection algorithm based on support vector machines
T Ergen, SS Kozat
Digital Signal Processing 99, 102657, 2020
14 · 2020
Globally optimal training of neural networks with threshold activation functions
T Ergen, HI Gulluk, J Lacotte, M Pilanci
arXiv preprint arXiv:2303.03382, 2023
10 · 2023
Articles 1–20