Decision-theoretic planning under uncertainty with information rewards for active cooperative perception
Partially observable Markov decision processes (POMDPs) provide a principled framework
for modeling an agent's decision-making problem when the agent needs to consider noisy …
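Since several of the snippets below lean on the same machinery, a minimal sketch of the POMDP belief update may help as context. The two-state model here is a generic illustration with invented numbers, not anything from the cited paper:

```python
import numpy as np

# Belief update for a two-state POMDP: b'(s') ∝ O(o | s', a) * Σ_s T(s' | s, a) b(s).
# The matrices below are illustrative placeholders, not taken from the cited paper.
T = np.array([[0.9, 0.1],    # T[s, s']: transition probabilities under some fixed action
              [0.2, 0.8]])
O = np.array([0.85, 0.15])   # O[s']: likelihood of the received observation in each s'

def belief_update(b, T, O):
    """Return the posterior belief after acting and receiving the observation."""
    predicted = b @ T            # predict step: Σ_s T(s' | s, a) b(s)
    posterior = O * predicted    # correct step: weight by observation likelihood
    return posterior / posterior.sum()

b = np.array([0.5, 0.5])         # uniform prior over the hidden states
print(belief_update(b, T, O))    # belief shifts toward the state that explains o
```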
Bayesian reinforcement learning in factored POMDPs
Bayesian approaches provide a principled solution to the exploration-exploitation trade-off
in Reinforcement Learning. Typical approaches, however, either assume a fully observable …
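One standard Bayesian answer to the exploration-exploitation trade-off the abstract mentions is posterior (Thompson) sampling: draw a model from the posterior, act greedily under it, then update. A minimal Bernoulli-bandit sketch follows, with invented reward rates, and deliberately much simpler than the factored-POMDP setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = [0.3, 0.6, 0.5]               # hidden Bernoulli reward rates (invented)
alpha = np.ones(3)                     # Beta(1, 1) prior per arm
beta = np.ones(3)

for _ in range(1000):
    theta = rng.beta(alpha, beta)      # sample one model from the posterior
    a = int(np.argmax(theta))          # act greedily under the sampled model
    r = rng.random() < true_p[a]       # observe a Bernoulli reward
    alpha[a] += r                      # conjugate posterior update
    beta[a] += 1 - r

print(alpha / (alpha + beta))          # posterior means concentrate on the best arm
```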
Probabilistic decision model for adaptive task planning in human-robot collaborative assembly based on designer and operator intents
M Cramer, K Kellens… - IEEE Robotics and …, 2021 - ieeexplore.ieee.org
In the manufacturing industry, the era of mass customization has arrived. Combining the
complementary strengths of humans and robots will make it possible to cope with growing product …
Learning state-variable relationships in POMCP: A framework for mobile robots
We address the problem of learning relationships on state variables in Partially Observable
Markov Decision Processes (POMDPs) to improve planning performance. Specifically, we …
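POMCP, which this paper builds on, plans by Monte Carlo tree search over a particle-filter belief. The stripped-down sketch below keeps only the core idea, estimating action values by random rollouts from sampled belief particles; the `sim` object with `actions` and a generative `step(state, action) -> (state, reward)` method is a hypothetical stand-in, and the state-variable relationship learning that is the paper's contribution is omitted entirely:

```python
import random

def rollout_value(sim, s, depth, gamma=0.95):
    """Accumulate discounted reward along one random rollout from state s."""
    total, discount = 0.0, 1.0
    for _ in range(depth):
        a = random.choice(sim.actions)
        s, r = sim.step(s, a)          # assumed generative model: (state, reward)
        total += discount * r
        discount *= gamma
    return total

def plan(sim, particles, n=200, depth=20, gamma=0.95):
    """Pick the action with the best Monte Carlo value over belief particles."""
    best_a, best_v = None, float("-inf")
    for a in sim.actions:
        v = 0.0
        for _ in range(n):
            s = random.choice(particles)       # sample a state from the belief
            s2, r = sim.step(s, a)
            v += r + gamma * rollout_value(sim, s2, depth)
        if v / n > best_v:
            best_a, best_v = a, v / n
    return best_a
```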
Exploiting submodular value functions for scaling up active perception
In active perception tasks, an agent aims to select sensory actions that reduce its uncertainty
about one or more hidden variables. For example, a mobile robot takes sensory actions to …
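The key structural fact this line of work exploits is that many information measures are submodular, so greedily selecting sensory actions is near-optimal (within 1 − 1/e of the best subset, by the classic Nemhauser-Wolsey-Fisher bound). Here is a toy sketch of that greedy step with an invented coverage-style objective, not the paper's belief-based value function:

```python
def greedy_select(candidates, k, gain):
    """Greedily pick k sensors maximizing marginal gain of a submodular objective."""
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: gain(chosen + [c]) - gain(chosen))
        chosen.append(best)
    return chosen

# Toy submodular objective: how many hidden-variable indices a sensor set covers.
coverage = {"cam1": {0, 1}, "cam2": {1, 2}, "lidar": {2, 3, 4}}
value = lambda S: len(set().union(*(coverage[s] for s in S))) if S else 0
print(greedy_select(list(coverage), 2, value))   # e.g. ['lidar', 'cam1']
```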
Multi-goal motion planning using traveling salesman problem in belief space
In this paper, the multi-goal motion planning problem in an environment with some
background information about its map is addressed in detail. The motion planning goal is to …
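Visiting several goals in a good order is, as the title says, a travelling-salesman problem at heart. The paper orders goals in belief space; the sketch below shows only the much simpler Euclidean nearest-neighbour heuristic, with invented coordinates, as a point of reference:

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy TSP heuristic: always visit the closest unvisited goal next."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

goals = [(0, 0), (5, 1), (1, 4), (6, 5)]   # invented goal coordinates
print(nearest_neighbor_tour(goals))        # [0, 2, 1, 3]
```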
Constrained control of large graph-based MDPs under measurement uncertainty
We consider controlling a graph-based Markov decision process (GMDP) with a control
capacity constraint given only uncertain measurements of the underlying state. We also …
An integrated approach to solving influence diagrams and finite-horizon partially observable decision processes
EA Hansen - Artificial Intelligence, 2021 - Elsevier
We show how to integrate a variable elimination approach to solving influence diagrams
with a value iteration approach to solving finite-horizon partially observable Markov decision …
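For reference, the value-iteration half of that combination reduces, in the fully observable finite-horizon case, to plain backward induction. A minimal sketch on an invented three-state MDP follows; the POMDP and variable-elimination parts of the paper are not reproduced here:

```python
import numpy as np

# Toy MDP: 3 states, 2 actions. T[a, s, s'] and R[s, a] are invented for illustration.
T = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
              [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]])
R = np.array([[0.0, 1.0], [0.0, 1.0], [5.0, 0.0]])

H = 10                 # planning horizon
V = np.zeros(3)        # value of the empty tail: V_0(s) = 0
for _ in range(H):     # backward induction: V_{t+1}(s) = max_a [R(s, a) + Σ_s' T V_t(s')]
    Q = R + np.einsum("aij,j->ia", T, V)
    V = Q.max(axis=1)
print(V)               # horizon-H values; Q.argmax(axis=1) is the greedy first-step policy
```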
A value equivalence approach for solving interactive dynamic influence diagrams
Interactive dynamic influence diagrams (I-DIDs) are recognized graphical models for
sequential multiagent decision making under uncertainty. They represent the problem of …
A partially observable Markov-decision-process-based blackboard architecture for cognitive agents in partially observable environments
H Itoh, H Nakano, R Tokushima… - … on Cognitive and …, 2020 - ieeexplore.ieee.org
Partial observability, or the inability of an agent to fully observe the state of its environment,
exists in many real-world problem domains. However, most cognitive architectures do not …