Partially observable Markov decision processes and robotics
H Kurniawati - Annual Review of Control, Robotics, and …, 2022 - annualreviews.org
Planning under uncertainty is critical to robotics. The partially observable Markov decision
process (POMDP) is a mathematical framework for such planning problems. POMDPs are …
POMDP-based statistical spoken dialog systems: A review
Statistical dialog systems (SDSs) are motivated by the need for a data-driven framework that
reduces the cost of laboriously handcrafting complex dialog managers and that provides …
[BOOK][B] Algorithms for decision making
A broad introduction to algorithms for decision making under uncertainty, introducing the
underlying mathematical problem formulations and the algorithms for solving them …
Partially observable Markov decision processes in robotics: A survey
Noisy sensing, imperfect control, and environment changes are defining characteristics of
many real-world robot tasks. The partially observable Markov decision process (POMDP) …
Model-based reinforcement learning: A survey
Sequential decision making, commonly formalized as Markov Decision Process (MDP)
optimization, is an important challenge in artificial intelligence. Two key approaches to this …
[HTML] Perception, planning, control, and coordination for autonomous vehicles
Autonomous vehicles are expected to play a key role in the future of urban transportation
systems, as they offer potential for additional safety, increased productivity, greater …
Robust reinforcement learning on state observations with learned optimal adversary
We study the robustness of reinforcement learning (RL) with adversarially perturbed state
observations, which aligns with the setting of many adversarial attacks to deep …
[BOOK][B] Partially observed Markov decision processes
V Krishnamurthy - 2016 - books.google.com
Covering formulation, algorithms, and structural results, and linking theory to real-world
applications in controlled sensing (including social learning, adaptive radars and sequential …
Online algorithms for POMDPs with continuous state, action, and observation spaces
Online solvers for partially observable Markov decision processes have been applied to
problems with large discrete state spaces, but continuous state, action, and observation …
Rational quantitative attribution of beliefs, desires and percepts in human mentalizing
Social cognition depends on our capacity for 'mentalizing', or explaining an agent's
behaviour in terms of their mental states. The development and neural substrates of …