A review of safe reinforcement learning: Methods, theories and applications

S Gu, L Yang, Y Du, G Chen, F Walter… - … on Pattern Analysis …, 2024 - ieeexplore.ieee.org
Reinforcement Learning (RL) has achieved tremendous success in many complex decision-making tasks. However, safety concerns arise when deploying RL in real-world …

A survey of robot manipulation in contact

M Suomalainen, Y Karayiannidis, V Kyrki - Robotics and Autonomous …, 2022 - Elsevier
In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or …

Safe control under input limits with neural control barrier functions

S Liu, C Liu, J Dolan - Conference on Robot Learning, 2023 - proceedings.mlr.press
We propose new methods to synthesize control barrier function (CBF)-based safe controllers that avoid input saturation, which can cause safety violations. In particular, our method is …
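
For orientation, the standard construct behind such controllers is the control barrier function condition enforced pointwise through a quadratic program. Below is a minimal sketch of the input-constrained CBF-QP for a generic control-affine system; it illustrates the constraint being synthesized for, not the paper's neural parameterization or training procedure.

    % Input-constrained CBF quadratic program (generic sketch, not the paper's method)
    \begin{aligned}
    u^{*}(x) = \arg\min_{u} \;& \tfrac{1}{2}\,\lVert u - u_{\mathrm{nom}}(x)\rVert^{2} \\
    \text{s.t.}\;& \nabla h(x)^{\top}\bigl(f(x) + g(x)\,u\bigr) \ge -\alpha\bigl(h(x)\bigr), \\
    & u_{\min} \le u \le u_{\max},
    \end{aligned}

where $\dot{x} = f(x) + g(x)u$ is the dynamics, $h$ is the CBF whose zero-superlevel set is the safe set, and $\alpha$ is an extended class-$\mathcal{K}$ function. The difficulty such works address is that the CBF inequality and the input box can be jointly infeasible unless $h$ is constructed with the input limits taken into account.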

Model-based safe deep reinforcement learning via a constrained proximal policy optimization algorithm

AK Jayant, S Bhatnagar - Advances in Neural Information …, 2022 - proceedings.neurips.cc
During initial iterations of training in most Reinforcement Learning (RL) algorithms, agents
perform a significant number of random exploratory steps. In the real world, this can limit the …
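
As background, many constrained policy-optimization methods, including PPO-style ones, build on a Lagrangian relaxation: the policy is updated on reward minus a multiplier-weighted cost, while the multiplier is adapted by dual ascent on the constraint violation. The sketch below shows only that dual-ascent loop with made-up rollout estimates; it is illustrative and is not the algorithm proposed in the paper.

    # Illustrative dual-ascent loop for a Lagrangian-relaxed CMDP objective.
    # All quantities below are stand-ins for what an RL algorithm would estimate
    # from rollouts; the numbers are arbitrary.
    import numpy as np

    cost_limit = 25.0   # constraint budget d
    lam = 0.0           # Lagrange multiplier
    lam_lr = 0.05       # dual step size
    rng = np.random.default_rng(0)

    for it in range(10):
        est_return = 100.0 + rng.normal(0.0, 5.0)          # estimated J_r(pi)
        est_cost = 40.0 - 2.0 * it + rng.normal(0.0, 1.0)   # estimated J_c(pi)

        # Primal step (not shown): update the policy to increase
        # L = J_r(pi) - lam * (J_c(pi) - cost_limit).
        lagrangian = est_return - lam * (est_cost - cost_limit)

        # Dual ascent: grow lambda while the cost constraint is violated.
        lam = max(0.0, lam + lam_lr * (est_cost - cost_limit))
        print(f"iter {it}: cost~{est_cost:.1f}  lambda={lam:.2f}  L={lagrangian:.1f}")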

The exact sample complexity gain from invariances for kernel regression

B Tahmasebi, S Jegelka - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In practice, encoding invariances into models improves sample complexity. In this work, we
study this phenomenon from a theoretical perspective. In particular, we provide minimax …
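
The phenomenon studied here can be made concrete with a small example: encoding a known invariance into kernel regression by averaging the base kernel over the group orbit, which effectively shrinks the hypothesis space. The snippet below does this for cyclic shifts with an RBF kernel; it only illustrates the setting and does not reproduce the paper's minimax analysis.

    # Orbit-averaged (shift-invariant) kernel ridge regression: a toy instance of
    # encoding an invariance into a kernel method. Illustrative only.
    import numpy as np

    def rbf(x, y, gamma=1.0):
        return np.exp(-gamma * np.sum((x - y) ** 2))

    def shift_invariant_kernel(x, y, gamma=1.0):
        # Average over all cyclic shifts of y. Because the RBF kernel is invariant
        # under a simultaneous shift of both arguments, this average is symmetric,
        # positive semidefinite, and invariant to shifts of either input.
        return np.mean([rbf(x, np.roll(y, s), gamma) for s in range(len(y))])

    rng = np.random.default_rng(0)
    d, n = 8, 40
    X = rng.normal(size=(n, d))
    target = lambda x: np.sum(np.sort(x) ** 3)        # shift-invariant target
    y = np.array([target(x) for x in X]) + 0.1 * rng.normal(size=n)

    K = np.array([[shift_invariant_kernel(a, b) for b in X] for a in X])
    alpha = np.linalg.solve(K + 1e-2 * np.eye(n), y)  # kernel ridge regression

    predict = lambda x: np.array([shift_invariant_kernel(x, b) for b in X]) @ alpha
    x_test = rng.normal(size=d)
    print(predict(x_test), predict(np.roll(x_test, 3)))  # equal by construction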

Safe reinforcement learning using black-box reachability analysis

M Selim, A Alanwar, S Kousik, G Gao… - IEEE Robotics and …, 2022 - ieeexplore.ieee.org
Reinforcement learning (RL) is capable of sophisticated motion planning and control for
robots in uncertain environments. However, state-of-the-art deep RL approaches typically …
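
Concretely, reachability-based safety layers of this kind maintain an over-approximation of the set of states the system can reach and intervene when it intersects an unsafe region. The sketch below propagates a zonotope through a known linear model as a minimal illustration; the cited approach works in the black-box, data-driven setting, which is not reproduced here.

    # Minimal zonotope reachability step for x+ = A x + B u + w with a KNOWN
    # linear model; generic sketch, not the paper's black-box method.
    import numpy as np

    class Zonotope:
        """Set {c + G @ xi : ||xi||_inf <= 1}."""
        def __init__(self, center, generators):
            self.c = np.asarray(center, dtype=float)
            self.G = np.asarray(generators, dtype=float)

        def interval_hull(self):
            r = np.sum(np.abs(self.G), axis=1)   # per-dimension radius
            return self.c - r, self.c + r

    def step(A, B, u, X, W):
        """Reachable set after one step, for x in zonotope X and w in zonotope W."""
        c = A @ X.c + B @ u + W.c
        G = np.hstack([A @ X.G, W.G])
        return Zonotope(c, G)

    A = np.array([[1.0, 0.1], [0.0, 1.0]])       # double-integrator-like model
    B = np.array([[0.0], [0.1]])
    X = Zonotope([0.0, 0.0], 0.05 * np.eye(2))   # initial state uncertainty
    W = Zonotope([0.0, 0.0], 0.01 * np.eye(2))   # additive disturbance bound

    for k in range(5):
        X = step(A, B, np.array([1.0]), X, W)
    lo_bnd, hi_bnd = X.interval_hull()
    print("state box after 5 steps:", lo_bnd, hi_bnd)  # check against unsafe regions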

Fast kinodynamic planning on the constraint manifold with deep neural networks

P Kicki, P Liu, D Tateo, H Bou-Ammar… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
Motion planning is a mature area of research in robotics, with many well-established methods based on optimization or sampling of the state space, suitable for solving kinematic …

Regularized deep signed distance fields for reactive motion generation

P Liu, K Zhang, D Tateo, S Jauhri… - 2022 IEEE/RSJ …, 2022 - ieeexplore.ieee.org
Autonomous robots should operate in real-world dynamic environments and collaborate
with humans in tight spaces. A key component for allowing robots to leave structured lab and …
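
The appeal of signed distance fields for this purpose is that the distance value gives clearance and its gradient gives the direction away from the nearest obstacle, so a reactive, repulsive velocity term can be read off directly. The toy sketch below uses an analytic sphere SDF in place of the learned, regularized deep SDF studied in the paper.

    # Reactive obstacle avoidance from a signed distance field: analytic sphere
    # SDF used as a stand-in for a learned deep SDF. Illustrative only.
    import numpy as np

    def sphere_sdf(p, center, radius):
        return np.linalg.norm(p - center) - radius

    def sdf_gradient(p, center, radius):
        d = p - center
        return d / np.linalg.norm(d)             # unit vector pointing away

    def reactive_velocity(p, goal, center, radius, margin=0.2, gain=1.0):
        v = goal - p                             # attractive term toward the goal
        dist = sphere_sdf(p, center, radius)
        if dist < margin:                        # inside the influence margin
            v += gain * (margin - dist) * sdf_gradient(p, center, radius)
        return v

    p = np.array([0.0, 0.0, 0.5])
    goal = np.array([1.0, 0.0, 0.5])
    obstacle_c, obstacle_r = np.array([0.5, 0.0, 0.5]), 0.15
    print(reactive_velocity(p, goal, obstacle_c, obstacle_r))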

Triple-q: A model-free algorithm for constrained reinforcement learning with sublinear regret and zero constraint violation

H Wei, X Liu, L Ying - International Conference on Artificial …, 2022 - proceedings.mlr.press
This paper presents the first model-free, simulator-free reinforcement learning algorithm for
Constrained Markov Decision Processes (CMDPs) with sublinear regret and zero constraint …
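
For reference, the CMDP setting behind such guarantees can be written as a constrained optimization over policies, with regret and cumulative constraint violation as the performance measures; the formulation below is the standard one and only sketches the objects the sublinear-regret / zero-violation claims refer to.

    % Standard episodic CMDP objective (sketch of the general setting)
    \max_{\pi}\; J_r(\pi) = \mathbb{E}_{\pi}\Bigl[\sum_{h=1}^{H} r_h(s_h, a_h)\Bigr]
    \quad \text{s.t.} \quad
    J_c(\pi) = \mathbb{E}_{\pi}\Bigl[\sum_{h=1}^{H} c_h(s_h, a_h)\Bigr] \le d

    % Over K episodes with policies \pi_1, ..., \pi_K and best feasible policy \pi^*:
    \mathrm{Regret}(K) = \sum_{k=1}^{K} \bigl(J_r(\pi^{*}) - J_r(\pi_k)\bigr),
    \qquad
    \mathrm{Violation}(K) = \sum_{k=1}^{K} \bigl(J_c(\pi_k) - d\bigr)_{+}

In these terms, "sublinear regret with zero constraint violation" means the first quantity grows sublinearly in K while the second remains zero.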