Hanie Sedghi
Staff Research Scientist, Google DeepMind
Verified email at google.com · Homepage
Title
Cited by
Year
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
Gemini Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ...
arXiv preprint arXiv:2403.05530, 2024
1211 · 2024
What is being transferred in transfer learning?
B Neyshabur, H Sedghi, C Zhang
Neural Information Processing Systems (NeurIPS), 2020
592 · 2020
Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods
M Janzamin, H Sedghi, A Anandkumar
arXiv preprint arXiv:1506.08473, 2015
264 · 2015
The singular values of convolutional layers
H Sedghi, V Gupta, PM Long
arXiv preprint arXiv:1805.10408, 2018
230 · 2018
The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks
R Entezari, H Sedghi, O Saukh, B Neyshabur
International Conference on Learning Representations, 2022
210 · 2022
Leveraging Unlabeled Data to Predict Out-of-Distribution Performance
S Garg, S Balakrishnan, ZC Lipton, B Neyshabur, H Sedghi
International Conference on Learning Representations, 2022
156 · 2022
Generalization bounds for deep convolutional neural networks
PM Long, H Sedghi
International Conference on Learning Representations, 2020
154* · 2020
Exploring the Limits of Large Scale Pre-training
S Abnar, M Dehghani, B Neyshabur, H Sedghi
International Conference on Learning Representations, 2022
133 · 2022
Provable tensor methods for learning mixtures of generalized linear models
H Sedghi, M Janzamin, A Anandkumar
Artificial Intelligence and Statistics, 1223-1231, 2016
111 · 2016
Beyond human data: Scaling self-training for problem-solving with language models
A Singh, JD Co-Reyes, R Agarwal, A Anand, P Patil, X Garcia, PJ Liu, ...
arXiv preprint arXiv:2312.06585, 2023
98 · 2023
The Deep Bootstrap Framework: Good Online Learners are Good Offline Generalizers
P Nakkiran, B Neyshabur, H Sedghi
International Conference on Learning Representations, 2021
88 · 2021
Provable methods for training neural networks with sparse connectivity
H Sedghi, A Anandkumar
arXiv preprint arXiv:1412.2693, 2014
84 · 2014
REPAIR: REnormalizing Permuted Activations for Interpolation Repair
K Jordan, H Sedghi, O Saukh, R Entezari, B Neyshabur
International Conference on Learning Representations (ICLR), 2023
77 · 2022
Statistical structure learning to ensure data integrity in smart grid
H Sedghi, E Jonckheere
IEEE Transactions on Smart Grid 6 (4), 1924-1933, 2015
73 · 2015
The intriguing role of module criticality in the generalization of deep networks
NS Chatterji, B Neyshabur, H Sedghi
International Conference on Learning Representations, 2020
70 · 2020
Can Neural Network Memorization Be Localized?
P Maini, MC Mozer, H Sedghi, ZC Lipton, JZ Kolter, C Zhang
International Conference on Machine Learning (ICML), 2023
50 · 2023
MLSys: The new frontier of machine learning systems
A Ratner, D Alistarh, G Alonso, DG Andersen, P Bailis, S Bird, N Carlini, ...
arXiv preprint arXiv:1904.03257, 2019
50 · 2019
Score function features for discriminative learning: Matrix and tensor framework
M Janzamin, H Sedghi, A Anandkumar
arXiv preprint arXiv:1412.2863, 2014
50 · 2014
Statistical structure learning of smart grid for detection of false data injection
H Sedghi, E Jonckheere
2013 IEEE Power & Energy Society General Meeting, 1-5, 2013
45 · 2013
Teaching algorithmic reasoning via in-context learning
H Zhou, A Nova, H Larochelle, A Courville, B Neyshabur, H Sedghi
arXiv preprint arXiv:2211.09066, 2022
37 · 2022
Articles 1–20