Provable and practical: Efficient exploration in reinforcement learning via Langevin Monte Carlo
We present a scalable and effective exploration strategy based on Thompson sampling for
reinforcement learning (RL). One of the key shortcomings of existing Thompson sampling …
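Since the abstract is cut off, here is a minimal sketch of the Langevin Monte Carlo update named in the title, run on a toy Gaussian posterior; the function name, step size, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def langevin_step(theta, grad, step_size, rng):
    """One Langevin Monte Carlo update: a gradient step plus Gaussian noise.

    Iterating this update yields approximate samples from a posterior whose
    negative log-density has gradient grad(theta); in Thompson-sampling RL
    the sampled parameters would define the Q-function used to act.
    """
    noise = rng.normal(size=theta.shape)
    return theta - step_size * grad(theta) + np.sqrt(2.0 * step_size) * noise

# Toy example: draw an approximate sample from a Gaussian posterior N(1, 0.5).
rng = np.random.default_rng(0)
grad = lambda th: (th - 1.0) / 0.5          # gradient of the negative log-density
theta = np.zeros(1)
for _ in range(5000):
    theta = langevin_step(theta, grad, step_size=1e-2, rng=rng)
print(theta)                                 # one approximate posterior draw
```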
Optimistic posterior sampling for reinforcement learning with few samples and tight guarantees
D Tiapkin, D Belomestny… - Advances in …, 2022 - proceedings.neurips.cc
We consider reinforcement learning in an environment modeled by an episodic, tabular,
step-dependent Markov decision process of horizon $H$ with $S$ states, and $A$ …
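For context, guarantees in this tabular episodic setting are usually stated for the cumulative regret over $K$ episodes (standard notation, not necessarily the paper's):

$\mathrm{Regret}(K) = \sum_{k=1}^{K} \bigl( V^{\star}_{1}(s^{k}_{1}) - V^{\pi_k}_{1}(s^{k}_{1}) \bigr),$

where $V^{\star}_{1}$ is the optimal value function at the first step, $\pi_k$ is the policy played in episode $k$, and $s^{k}_{1}$ is that episode's initial state.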
Model-based uncertainty in value functions
We consider the problem of quantifying uncertainty over expected cumulative rewards in
model-based reinforcement learning. In particular, we focus on characterizing the variance …
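The snippet is truncated, but as background on how variance interacts with the Bellman operator, the classic return-variance recursion (stated under a fixed model, so it captures aleatoric rather than the model-based uncertainty the paper targets) reads:

$M^{\pi}(s) = \mathrm{Var}\bigl[\, r + \gamma V^{\pi}(s') \mid s \,\bigr] + \gamma^{2}\, \mathbb{E}\bigl[ M^{\pi}(s') \mid s \bigr],$

where $M^{\pi}(s)$ is the variance of the discounted return from state $s$ under policy $\pi$ and the randomness on the right is over the action, reward, and next state.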
Model-free posterior sampling via learning rate randomization
D Tiapkin, D Belomestny… - Advances in …, 2024 - proceedings.neurips.cc
In this paper, we introduce Randomized Q-learning (RandQL), a novel randomized model-
free algorithm for regret minimization in episodic Markov Decision Processes (MDPs). To the …
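A minimal sketch of the learning-rate-randomization idea named in the title, assuming a tabular setting; the Beta schedule and function name below are illustrative, not RandQL's exact rule.

```python
import numpy as np

def randomized_q_update(q, n_visits, s, a, r, s_next, rng, gamma=0.99):
    """Tabular Q-learning step with a randomized learning rate.

    The step size is drawn from a Beta distribution whose second parameter
    grows with the visit count, so early updates are aggressive and noisy
    while later ones concentrate -- an illustrative schedule only.
    """
    n = n_visits[s, a]
    lr = rng.beta(1.0, n + 1.0)               # random step size in (0, 1)
    target = r + gamma * q[s_next].max()      # standard bootstrap target
    q[s, a] += lr * (target - q[s, a])
    n_visits[s, a] = n + 1

# Minimal usage on a 3-state, 2-action toy problem.
rng = np.random.default_rng(0)
q, n_visits = np.zeros((3, 2)), np.zeros((3, 2), dtype=int)
randomized_q_update(q, n_visits, s=0, a=1, r=1.0, s_next=2, rng=rng)
```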
Posterior sampling for deep reinforcement learning
R Sasso, M Conserva… - … Conference on Machine …, 2023 - proceedings.mlr.press
Despite remarkable successes, deep reinforcement learning algorithms remain sample
inefficient: they require an enormous amount of trial and error to find good policies. Model …
Optimistic Thompson sampling-based algorithms for episodic reinforcement learning
Abstract We propose two Thompson Sampling-like, model-based learning algorithms for
episodic Markov decision processes (MDPs) with a finite time horizon. Our proposed …
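To convey the flavour of "optimistic" Thompson sampling, here is a sketch in the simplest possible setting, a Gaussian bandit; the paper's algorithms are model-based and operate on episodic MDPs, so this is only an illustration and the names are hypothetical.

```python
import numpy as np

def optimistic_thompson_action(means, stds, rng):
    """Pick an arm by optimistic Thompson sampling in a Gaussian bandit.

    Each arm's index is the larger of its posterior draw and its posterior
    mean, so sampling noise can only add optimism, never pessimism.
    """
    draws = rng.normal(means, stds)
    index = np.maximum(draws, means)     # clip draws from below at the mean
    return int(np.argmax(index))

rng = np.random.default_rng(0)
print(optimistic_thompson_action(np.array([0.2, 0.5]), np.array([0.3, 0.3]), rng))
```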
MinMaxMin Q-learning
MinMaxMin Q-learning is a novel optimistic Actor-Critic algorithm that
addresses the problem of overestimation bias (Q-estimations are overestimating …
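The abstract is truncated, so as a generic illustration of how an ensemble minimum curbs overestimation (the mechanism this line of work builds on, not MinMaxMin's exact rule), consider:

```python
import numpy as np

def ensemble_min_target(q_ensemble, r, s_next, gamma=0.99):
    """Bootstrap target that takes a minimum over an ensemble of Q-tables.

    q_ensemble has shape (n_members, n_states, n_actions).  Using the
    smallest greedy next-state value across members counteracts the
    overestimation that a single max would produce.
    """
    greedy_values = q_ensemble[:, s_next, :].max(axis=-1)   # one value per member
    return r + gamma * greedy_values.min()

q_ensemble = np.random.default_rng(0).normal(size=(4, 3, 2))
print(ensemble_min_target(q_ensemble, r=1.0, s_next=2))
```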
A general recipe for the analysis of randomized multi-armed bandit algorithms
In this paper we propose a general methodology to derive regret bounds for randomized
multi-armed bandit algorithms. It consists in checking a set of sufficient conditions on the …
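For reference, the regret these randomized algorithms aim to control is, in standard notation (not necessarily the paper's),

$R_T = T\,\mu^{\star} - \mathbb{E}\Bigl[\sum_{t=1}^{T} \mu_{A_t}\Bigr],$

where $\mu^{\star}$ is the mean of the best arm and $A_t$ is the (randomly sampled) arm pulled at round $t$.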
A review of convex optimization of Markov decision processes
VD Rudenko, NE Yudin, AA Vasin - Computer Research and …, 2023 - mathnet.ru
This article surveys both the historical achievements and the recent
results in the area of Markov decision processes (Markov Decision …
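As background on what convex optimization of MDPs refers to, the classical linear-programming view of a discounted MDP optimizes over occupancy measures (standard formulation, not quoted from the review):

$\max_{d \ge 0} \sum_{s,a} d(s,a)\, r(s,a) \quad \text{s.t.} \quad \sum_{a} d(s',a) = (1-\gamma)\,\mu(s') + \gamma \sum_{s,a} P(s' \mid s,a)\, d(s,a) \;\; \forall s',$

where $d$ is the discounted state-action occupancy measure, $\mu$ the initial-state distribution, and $\gamma$ the discount factor.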
Efficient and stable deep reinforcement learning: selective priority timing entropy
L Huo, J Mao, H San, S Zhang, R Li, L Fu - Applied Intelligence, 2024 - Springer
Deep reinforcement learning (DRL) has made significant strides in addressing tasks with
high-dimensional continuous action spaces. However, the field still faces the challenges of …