Stable-Baselines3: Reliable Reinforcement Learning Implementations. A Raffin, A Hill, M Ernestus, A Gleave, A Kanervisto, N Dormann. Journal of Machine Learning Research (JMLR) 22 (268), 1-8, 2021. Cited by 2765.
Stable Baselines. A Hill, A Raffin, M Ernestus, A Gleave, A Kanervisto, R Traoré, P Dhariwal, ... GitHub repository, 2018. Cited by 979.
Open X-Embodiment: Robotic Learning Datasets and RT-X Models. Open X-Embodiment Collaboration: A O'Neill, A Rehman, A Maddukuri, A Gupta, A Padalkar, A Lee, A Pooley, ... 2024 IEEE International Conference on Robotics and Automation (ICRA), 6892-6903, 2024. Cited by 455*.
RL Baselines Zoo. A Raffin. GitHub repository, 2018. Cited by 248.
PythonRobotics: a Python code collection of robotics algorithms. A Sakai, D Ingram, J Dinius, K Chawla, A Raffin, A Paques. arXiv preprint arXiv:1808.10703, 2018. Cited by 134.
The 37 Implementation Details of Proximal Policy Optimization. S Huang, RFJ Dossa, A Raffin, A Kanervisto, W Wang. ICLR, 2022. Cited by 121.
Smooth Exploration for Robotic Reinforcement Learning. A Raffin, J Kober, F Stulp. Conference on Robot Learning, 2021. Cited by 105*.
Decoupling feature extraction from policy learning: assessing benefits of state representation learning in goal based robotics. A Raffin, A Hill, R Traoré, T Lesort, N Díaz-Rodríguez, D Filliat. arXiv preprint arXiv:1901.08651, 2019. Cited by 66.
S-RL toolbox: Environments, datasets and evaluation metrics for state representation learning. A Raffin, A Hill, R Traoré, T Lesort, N Díaz-Rodríguez, D Filliat. arXiv preprint arXiv:1809.09369, 2018. Cited by 37.
A2C is a special case of PPO. S Huang, A Kanervisto, A Raffin, W Wang, S Ontañón, RFJ Dossa. arXiv preprint arXiv:2205.09123, 2022. Cited by 27.
Unsupervised learning of state representations for multiple tasks. A Raffin, S Höfer, R Jonschkowski, O Brock, F Stulp. 2016. Cited by 11*.
Learning to exploit elastic actuators for quadruped locomotion. A Raffin, D Seidel, J Kober, A Albu-Schäffer, J Silvério, F Stulp. arXiv preprint arXiv:2209.07171, 2022. Cited by 10.
Guiding real-world reinforcement learning for in-contact manipulation tasks with Shared Control Templates. A Padalkar, G Quere, A Raffin, J Silvério, F Stulp. Autonomous Robots 48 (4), 12, 2024. Cited by 8*.
Making reinforcement learning work on swimmer. M Franceschetti, C Lacoux, R Ohouens, A Raffin, O Sigaud. arXiv preprint arXiv:2208.07587, 2022. Cited by 8.
Fault-tolerant six-DoF pose estimation for tendon-driven continuum mechanisms. A Raffin, B Deutschmann, F Stulp. Frontiers in Robotics and AI 8, 619238, 2021. Cited by 8.
Open RL Benchmark: Comprehensive Tracked Experiments for Reinforcement Learning. S Huang, Q Gallouédec, F Felten, A Raffin, RFJ Dossa, Y Zhao, ... arXiv preprint arXiv:2402.03046, 2024. Cited by 7.
Two-Stage Learning of Highly Dynamic Motions with Rigid and Articulated Soft Quadrupeds. F Vezzi, J Ding, A Raffin, J Kober, C Della Santina. IEEE International Conference on Robotics and Automation (ICRA), 2024. Cited by 7.
An Open-Loop Baseline for Reinforcement Learning Locomotion Tasks. A Raffin, O Sigaud, J Kober, A Albu-Schäffer, J Silvério, F Stulp. Reinforcement Learning Conference (RLC), 2024. Cited by 4*.
Toward Space Exploration on Legs: ISS-to-Earth Teleoperation Experiments with a Quadruped Robot. D Seidel, A Schmidt, X Luo, A Raffin, L Mayershofer, T Ehlert, D Calzolari, ... 2024 IEEE Conference on Telepresence, 10-15, 2024.
Everything Is Awesome If You Are Part of a (Robotic) Team: Preliminary Insights from the First ISS-to-Surface Multi-Robot Collaboration with Scalable Autonomy Teleoperation. NYS Lii, T Krueger, P Schmaus, D Leidner, S Paternostro, AS Bauer, ... 75th International Astronautical Congress (IAC), 2024.