| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Super-convergence: Very fast training of neural networks using large learning rates | LN Smith, N Topin | Artificial Intelligence and Machine Learning for Multi-Domain Operations … | 1773 | 2019 |
| Super-convergence: Very fast training of residual networks using large learning rates | LN Smith, N Topin | | 279* | 2017 |
| MineRL: A Large-Scale Dataset of Minecraft Demonstrations | WH Guss, B Houghton, N Topin, P Wang, C Codel, M Veloso, ... | arXiv preprint arXiv:1907.13440 | 248 | 2019 |
| The MineRL Competition on Sample Efficient Reinforcement Learning Using Human Priors | WH Guss, C Codel, K Hofmann, B Houghton, N Kuno, S Milani, ... | arXiv preprint arXiv:1904.10079 | 110 | 2019 |
| Generation of Policy-Level Explanations for Reinforcement Learning | N Topin, M Veloso | AAAI 2019 | 89 | 2019 |
| Deep convolutional neural network design patterns | LN Smith, N Topin | arXiv preprint arXiv:1611.00847 | 88 | 2016 |
| Explainable reinforcement learning: A survey and comparative review | S Milani, N Topin, M Veloso, F Fang | ACM Computing Surveys 56 (7), 1-36 | 75 | 2024 |
| Conservative Q-Improvement: Reinforcement Learning for an Interpretable Decision-Tree Policy | AM Roth, N Topin, P Jamshidi, M Veloso | arXiv preprint arXiv:1907.01180 | 75 | 2019 |
| A Survey of Explainable Reinforcement Learning | S Milani, N Topin, M Veloso, F Fang | arXiv preprint arXiv:2202.08434 | 65 | 2022 |
| Iterative Bounding MDPs: Learning Interpretable Policies via Non-Interpretable Methods | N Topin, S Milani, F Fang, M Veloso | AAAI 2021 | 40 | 2021 |
| Retrospective analysis of the 2019 MineRL competition on sample efficient reinforcement learning | S Milani, N Topin, B Houghton, WH Guss, SP Mohanty, K Nakata, ... | NeurIPS 2019 Competition and Demonstration Track, 203-214 | 34* | 2020 |
| Exploring loss function topology with cyclical learning rates | LN Smith, N Topin | arXiv preprint arXiv:1702.04283 | 30 | 2017 |
| MineRL Diamond 2021 Competition: Overview, results, and lessons learned | A Kanervisto, S Milani, K Ramanauskas, N Topin, Z Lin, J Li, J Shi, D Ye, ... | NeurIPS 2021 Competitions and Demonstrations Track, 13-28 | 29 | 2022 |
| The MineRL 2020 Competition on Sample Efficient Reinforcement Learning Using Human Priors | WH Guss, MY Castro, S Devlin, B Houghton, NS Kuno, C Loomis, S Milani, ... | arXiv preprint arXiv:2101.11071 | 29 | 2021 |
| Portable Option Discovery for Automated Learning Transfer in Object-Oriented Markov Decision Processes | N Topin, N Haltmeyer, S Squire, J Winder, M desJardins, ... | IJCAI, 3856-3864 | 29 | 2015 |
| The MineRL BASALT Competition on Learning from Human Feedback | R Shah, C Wild, SH Wang, N Alex, B Houghton, W Guss, S Mohanty, ... | arXiv preprint arXiv:2107.01969 | 26 | 2021 |
| Use-case-grounded simulations for explanation evaluation | V Chen, N Johnson, N Topin, G Plumb, A Talwalkar | Advances in Neural Information Processing Systems 35, 1764-1775 | 21 | 2022 |
| MAVIPER: Learning Decision Tree Policies for Interpretable Multi-Agent Reinforcement Learning | S Milani, Z Zhang, N Topin, ZR Shi, C Kamhoua, EE Papalexakis, F Fang | Machine Learning and Knowledge Discovery in Databases: European Conference … | 19 | 2023 |
| Towards robust and domain agnostic reinforcement learning competitions: MineRL 2020 | WH Guss, S Milani, N Topin, B Houghton, S Mohanty, A Melnik, A Harter, ... | NeurIPS 2020 Competition and Demonstration Track, 233-252 | 17 | 2021 |
| Guaranteeing reproducibility in deep learning competitions | B Houghton, S Milani, N Topin, W Guss, K Hofmann, D Perez-Liebana, ... | arXiv preprint arXiv:2005.06041 | 10 | 2020 |