Dynamic pricing and learning: historical origins, current research, and new directions

AV Den Boer - Surveys in operations research and management …, 2015 - Elsevier
The topic of dynamic pricing and learning has received a considerable amount of attention
in recent years, from different scientific communities. We survey these literature streams: we …

Neural Thompson sampling

W Zhang, D Zhou, L Li, Q Gu - arXiv preprint arXiv:2010.00827, 2020 - arxiv.org
Thompson Sampling (TS) is one of the most effective algorithms for solving contextual multi-
armed bandit problems. In this paper, we propose a new algorithm, called Neural Thompson …

[BOOK][B] Bandit algorithms

T Lattimore, C Szepesvári - 2020 - books.google.com
Decision-making in the face of uncertainty is a significant challenge in machine learning,
and the multi-armed bandit model is a commonly used framework to address it. This …

Introduction to multi-armed bandits

A Slivkins - Foundations and Trends® in Machine Learning, 2019 - nowpublishers.com
Multi-armed bandits is a simple but very powerful framework for algorithms that make
decisions over time under uncertainty. An enormous body of work has accumulated over the …

Neural contextual bandits with ucb-based exploration

D Zhou, L Li, Q Gu - International Conference on Machine …, 2020 - proceedings.mlr.press
We study the stochastic contextual bandit problem, where the reward is generated from an
unknown function with additive noise. No assumption is made about the reward function …

Weight uncertainty in neural networks

C Blundell, J Cornebise… - … on machine learning, 2015 - proceedings.mlr.press
We introduce a new, efficient, principled and backpropagation-compatible algorithm for
learning a probability distribution on the weights of a neural network, called Bayes by …

Regret analysis of stochastic and nonstochastic multi-armed bandit problems

S Bubeck, N Cesa-Bianchi - Foundations and Trends® in …, 2012 - nowpublishers.com
Multi-armed bandit problems are the most basic examples of sequential decision problems
with an exploration-exploitation trade-off. This is the balance between staying with the option …
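The exploration-exploitation balance this survey analyzes is commonly illustrated with the UCB1 index policy, which plays the arm maximizing an empirical mean plus a confidence bonus. The sketch below is a minimal illustration for Bernoulli arms, not an algorithm taken from the survey itself; the `arms`/`rounds` names and the simulation setup are assumptions for the example.

```python
import math
import random

def ucb1(arms, rounds, seed=0):
    """UCB1 on simulated Bernoulli arms.

    `arms` holds the true success probabilities (unknown to the policy).
    After pulling each arm once, play the arm maximizing
    mean_i + sqrt(2 * ln(t) / n_i).
    """
    rng = random.Random(seed)
    n = [0] * len(arms)        # pull counts
    mean = [0.0] * len(arms)   # empirical means
    for t in range(1, rounds + 1):
        if t <= len(arms):
            i = t - 1          # initialization: pull each arm once
        else:
            i = max(range(len(arms)),
                    key=lambda j: mean[j] + math.sqrt(2 * math.log(t) / n[j]))
        reward = 1 if rng.random() < arms[i] else 0
        n[i] += 1
        mean[i] += (reward - mean[i]) / n[i]  # incremental mean update
    return n, mean
```

Run on a two-armed instance, the confidence bonus shrinks for well-sampled arms, so the policy concentrates pulls on the empirically better arm while still occasionally revisiting the other.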

Provably optimal algorithms for generalized linear contextual bandits

L Li, Y Lu, D Zhou - International Conference on Machine …, 2017 - proceedings.mlr.press
Contextual bandits are widely used in Internet services from news recommendation to
advertising, and to Web search. Generalized linear models (logistic regression in …

Thompson sampling for contextual bandits with linear payoffs

S Agrawal, N Goyal - International conference on machine …, 2013 - proceedings.mlr.press
Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a
randomized algorithm based on Bayesian ideas, and has recently generated significant …
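The "randomized algorithm based on Bayesian ideas" described here is easiest to see in the Beta-Bernoulli case: sample a value from each arm's posterior and play the argmax. The sketch below illustrates that idea only; it is not the linear-payoff algorithm analyzed in this paper, and the Beta(1, 1) priors and `arms`/`rounds` names are assumptions for the example.

```python
import random

def thompson_sampling(arms, rounds, seed=0):
    """Beta-Bernoulli Thompson Sampling.

    `arms` holds the true success probabilities (unknown to the
    algorithm); each arm starts from a uniform Beta(1, 1) prior,
    updated with observed successes and failures.
    """
    rng = random.Random(seed)
    alpha = [1] * len(arms)  # posterior successes + 1
    beta = [1] * len(arms)   # posterior failures + 1
    total_reward = 0
    for _ in range(rounds):
        # Draw one posterior sample per arm, play the best sample.
        samples = [rng.betavariate(alpha[i], beta[i])
                   for i in range(len(arms))]
        i = max(range(len(arms)), key=samples.__getitem__)
        reward = 1 if rng.random() < arms[i] else 0
        alpha[i] += reward
        beta[i] += 1 - reward
        total_reward += reward
    return total_reward, alpha, beta
```

Because posterior draws for a rarely pulled arm stay diffuse, that arm keeps a nonzero chance of being sampled highest, which is exactly how the randomization trades off exploration against exploitation.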

Learning to optimize via posterior sampling

D Russo, B Van Roy - Mathematics of Operations Research, 2014 - pubsonline.informs.org
This paper considers the use of a simple posterior sampling algorithm to balance between
exploration and exploitation when learning to optimize actions such as in multiarmed bandit …