Daniel Soudry
Associate Professor
Verified email at technion.ac.il - Homepage
Title
Cited by
Year
Binarized neural networks
I Hubara, M Courbariaux, D Soudry, R El-Yaniv, Y Bengio
Advances in neural information processing systems 29, 2016
Cited by 6217*, 2016
Quantized neural networks: Training neural networks with low precision weights and activations
I Hubara, M Courbariaux, D Soudry, R El-Yaniv, Y Bengio
Journal of Machine Learning Research 18 (187), 1-30, 2018
Cited by 2385, 2018
Simultaneous denoising, deconvolution, and demixing of calcium imaging data
EA Pnevmatikakis, D Soudry, Y Gao, TA Machado, J Merel, D Pfau, ...
Neuron 89 (2), 285-299, 2016
Cited by 1152, 2016
The implicit bias of gradient descent on separable data
D Soudry, E Hoffer, MS Nacson, S Gunasekar, N Srebro
Journal of Machine Learning Research 19 (70), 1-57, 2018
Cited by 1041, 2018
Train longer, generalize better: closing the generalization gap in large batch training of neural networks
E Hoffer, I Hubara, D Soudry
Advances in neural information processing systems 30, 2017
Cited by 1029, 2017
Post training 4-bit quantization of convolutional networks for rapid-deployment
R Banner, Y Nahshan, D Soudry
Advances in Neural Information Processing Systems 32, 2019
Cited by 799*, 2019
Characterizing implicit bias in terms of optimization geometry
S Gunasekar, J Lee, D Soudry, N Srebro
International Conference on Machine Learning, 1832-1841, 2018
Cited by 500, 2018
Implicit bias of gradient descent on linear convolutional networks
S Gunasekar, JD Lee, D Soudry, N Srebro
Advances in neural information processing systems 31, 2018
Cited by 474, 2018
Scalable methods for 8-bit training of neural networks
R Banner, I Hubara, E Hoffer, D Soudry
Advances in neural information processing systems 31, 2018
Cited by 421, 2018
Kernel and rich regimes in overparametrized models
B Woodworth, S Gunasekar, JD Lee, E Moroshko, P Savarese, I Golan, ...
Conference on Learning Theory, 3635-3673, 2020
Cited by 417, 2020
Augment your batch: Improving generalization through instance repetition
E Hoffer, T Ben-Nun, I Hubara, N Giladi, T Hoefler, D Soudry
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 332*, 2020
Accurate post training quantization with small calibration sets
I Hubara, Y Nahshan, Y Hanani, R Banner, D Soudry
International Conference on Machine Learning, 4466-4475, 2021
Cited by 323*, 2021
Memristor-based multilayer neural networks with online gradient descent training
D Soudry, D Di Castro, A Gal, A Kolodny, S Kvatinsky
IEEE transactions on neural networks and learning systems 26 (10), 2408-2421, 2015
Cited by 319, 2015
Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights
D Soudry, I Hubara, R Meir
Advances in neural information processing systems 27, 2014
Cited by 307, 2014
No bad local minima: Data independent training error guarantees for multilayer neural networks
D Soudry, Y Carmon
arXiv preprint arXiv:1605.08361, 2016
Cited by 272, 2016
Norm matters: efficient and accurate normalization schemes in deep networks
E Hoffer, R Banner, I Golan, D Soudry
Advances in Neural Information Processing Systems 31, 2018
Cited by 190, 2018
Convergence of gradient descent on separable data
MS Nacson, J Lee, S Gunasekar, PHP Savarese, N Srebro, D Soudry
arXiv preprint arXiv:1803.01905, 2018
Cited by 179, 2018
Task-agnostic continual learning using online variational bayes with fixed-point updates
C Zeno, I Golan, E Hoffer, D Soudry
Neural Computation 33 (11), 3139-3177, 2021
Cited by 178, 2021
How do infinite width bounded norm networks look in function space?
P Savarese, I Evron, D Soudry, N Srebro
Conference on Learning Theory, 2667-2690, 2019
Cited by 178, 2019
A function space view of bounded norm infinite width relu nets: The multivariate case
G Ongie, R Willett, D Soudry, N Srebro
arXiv preprint arXiv:1910.01635, 2019
Cited by 167, 2019
Articles 1–20