The probabilistic model checker Storm
We present the probabilistic model checker Storm. Storm supports the analysis of discrete-
and continuous-time variants of both Markov chains and Markov decision processes. Storm …
Safe reinforcement learning via shielding under partial observability
Safe exploration is a common problem in reinforcement learning (RL) that aims to prevent
agents from making disastrous decisions while exploring their environment. A family of …
Parameter synthesis in Markov models
S Junges - 2020 - publications.rwth-aachen.de
Markov models comprise states with probabilistic transitions. The analysis of these models is
ubiquitous and studied in, among others, reliability engineering, artificial intelligence …
Model-free, model-based, and general intelligence
H Geffner - arXiv preprint arXiv:1806.02308, 2018 - arxiv.org
During the 60s and 70s, AI researchers explored intuitions about intelligence by writing
programs that displayed intelligent behavior. Many good ideas came out from this work but …
Finite-state controllers of POMDPs via parameter synthesis
We study finite-state controllers (FSCs) for partially observable Markov decision processes
(POMDPs) that are provably correct with respect to given specifications. The key insight is …
Enforcing almost-sure reachability in POMDPs
Partially-Observable Markov Decision Processes (POMDPs) are a well-known
stochastic model for sequential decision making under limited information. We consider the …
Verifiable RNN-based policies for POMDPs under temporal logic constraints
Recurrent neural networks (RNNs) have emerged as an effective representation of control
policies in sequential decision-making problems. However, a major drawback in the …
Under-approximating expected total rewards in POMDPs
We consider the problem: is the optimal expected total reward to reach a goal state in a
partially observable Markov decision process (POMDP) below a given threshold? We tackle …
Task-aware verifiable RNN-based policies for partially observable Markov decision processes
Partially observable Markov decision processes (POMDPs) are models for sequential
decision-making under uncertainty and incomplete information. Machine learning methods …
Planning and SAT
J Rintanen - Handbook of Satisfiability, 2021 - ebooks.iospress.nl
The planning problem in Artificial Intelligence was the first application of SAT to reasoning
about transition systems and a direct precursor to the use of SAT in a number of other …