Towards the first adversarially robust neural network model on MNIST L Schott, J Rauber, M Bethge, W Brendel International Conference on Learning Representations 2019, 2018 | 457 | 2018 |
A simple way to make neural networks robust against diverse image corruptions E Rusak, L Schott, RS Zimmermann, J Bitterwolf, O Bringmann, M Bethge, ... Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020 | 236 | 2020 |
Comparative study of deep learning software frameworks S Bahrampour, N Ramakrishnan, L Schott, M Shah arXiv preprint arXiv:1511.06435, 2015 | 235 | 2015 |
Towards nonlinear disentanglement in natural data with temporal sparse coding D Klindt, L Schott, Y Sharma, I Ustyuzhaninov, W Brendel, M Bethge, ... arXiv preprint arXiv:2007.10930, 2020 | 149 | 2020 |
Comparative study of Caffe, Neon, Theano, and Torch for deep learning S Bahrampour, N Ramakrishnan, L Schott, M Shah | 140 | 2016 |
Visual representation learning does not generalize strongly within the same domain L Schott, J Von Kügelgen, F Träuble, P Gehler, C Russell, M Bethge, ... arXiv preprint arXiv:2107.08221, 2021 | 73 | 2021 |
Score-based generative classifiers RS Zimmermann, L Schott, Y Song, BA Dunn, DA Klindt arXiv preprint arXiv:2110.00473, 2021 | 72 | 2021 |
Increasing the robustness of DNNs against image corruptions by playing the game of noise E Rusak, L Schott, R Zimmermann, J Bitterwolf, O Bringmann, M Bethge, ... | 54 | 2020 |
Learned watershed: End-to-end learning of seeded segmentation S Wolf, L Schott, U Köthe, F Hamprecht Proceedings of the IEEE International Conference on Computer Vision, 2011-2019, 2017 | 53 | 2017 |
Deep learning on symbolic representations for large-scale heterogeneous time-series event prediction S Zhang, S Bahrampour, N Ramakrishnan, L Schott, M Shah International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2016 | 30 | 2016 |
Understanding neural coding on latent manifolds by sharing features and dividing ensembles M Bjerke, L Schott, KT Jensen, C Battistin, DA Klindt, BA Dunn arXiv preprint arXiv:2210.03155, 2022 | 9 | 2022 |
Towards the first adversarially robust neural network model on MNIST L Schott, J Rauber, M Bethge, W Brendel arXiv preprint arXiv:1805.09190, 2018 | 9 | 2018 |
Comparative study of deep learning software frameworks S Bahrampour, N Ramakrishnan, L Schott, M Shah arXiv preprint arXiv:1511.06435, 2015 | 5 | 2015 |
Comparative study of Caffe, Neon, Theano, and Torch for deep learning S Bahrampour, N Ramakrishnan, L Schott, M Shah arXiv 1511, 2015 | 4 | 2015 |
Mind the gap between synthetic and real: Utilizing transfer learning to probe the boundaries of stable diffusion generated data L Hennicke, CM Adriano, H Giese, JM Koehler, L Schott arXiv preprint arXiv:2405.03243, 2024 | 3 | 2024 |
Analytical uncertainty-based loss weighting in multi-task learning L Kirchdorfer, C Elich, S Kutsche, H Stuckenschmidt, L Schott, JM Köhler arXiv preprint arXiv:2408.07985, 2024 | 2 | 2024 |
Challenging common assumptions in multi-task learning C Elich, L Kirchdorfer, JM Köhler, L Schott arXiv preprint arXiv:2311.04698, 2023 | 2 | 2023 |
Method for training a machine learning model L Schott, JM Koehler, C Blaiotta US Patent App. 18/774,344, 2025 | | 2025 |
Device and method for classifying a digital image with an image classifier, for training the image classifier, and for determining an image dataset for the training L Schott, C Blaiotta US Patent App. 18/771,370, 2025 | | 2025 |
Attention Is All You Need For Mixture-of-Depths Routing A Gadhikar, SK Majumdar, N Popp, P Saranrittichai, M Rapp, L Schott arXiv preprint arXiv:2412.20875, 2024 | | 2024 |