Title | Authors | Venue | Cited by | Year
On the power of curriculum learning in training deep networks | G Hacohen, D Weinshall | International Conference on Machine Learning, 2535-2544 | 518 | 2019
Active learning on a budget: Opposite strategies suit high and low budgets | G Hacohen, A Dekel, D Weinshall | International Conference on Machine Learning (ICML) 2022, 8175-8195 | 122 | 2022
Let's agree to agree: Neural networks share classification order on real datasets | G Hacohen, L Choshen, D Weinshall | International Conference on Machine Learning, 3950-3960 | 60 | 2020
Active Learning Through a Covering Lens | O Yehuda, A Dekel, G Hacohen, D Weinshall | Advances in Neural Information Processing Systems (NeurIPS) 2022 | 52 | 2022
The grammar-learning trajectories of neural language models | L Choshen, G Hacohen, D Weinshall, O Abend | ACL 2022 | 41 | 2021
Principal components bias in over-parameterized linear models, and its manifestation in deep neural networks | G Hacohen, D Weinshall | Journal of Machine Learning Research 23 (155), 1-46 | 18* | 2022
How to select which active learning strategy is best suited for your specific problem and budget | G Hacohen, D Weinshall | Advances in Neural Information Processing Systems 36 | 6 | 2024
Pruning the unlabeled data to improve semi-supervised learning | G Hacohen, D Weinshall | arXiv preprint arXiv:2308.14058 | 2 | 2023
Semi-supervised learning in the few-shot zero-shot scenario | N Fluss, G Hacohen, D Weinshall | arXiv preprint arXiv:2308.14119 | 1 | 2023
Forgetting Order of Continual Learning: Examples That are Learned First are Forgotten Last | G Hacohen, T Tuytelaars | arXiv preprint arXiv:2406.09935 | | 2024
MiSAL: Active Learning for Every Budget | G Hacohen, D Weinshall | | |