Optimal or Greedy Decision Trees? Revisiting their Objectives, Tuning, and Performance

JGM van der Linden, D Vos, MM de Weerdt… - arXiv preprint arXiv …, 2024 - arxiv.org
Decision trees are traditionally trained using greedy heuristics that locally optimize an
impurity or information metric. Recently, there has been a surge of interest in optimal …
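
As a point of reference for the greedy baseline this snippet describes, the following is a minimal, hypothetical sketch of choosing a single split by locally minimizing Gini impurity; the function names and the brute-force threshold scan are illustrative assumptions, not code from the paper.

import numpy as np

def gini(y):
    # Gini impurity of a label vector y.
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_greedy_split(X, y):
    # Scan every (feature, threshold) pair and return the one with the
    # lowest weighted child impurity -- the purely local criterion that
    # greedy tree induction optimizes, as opposed to a global objective.
    best_feature, best_threshold, best_score = None, None, np.inf
    n = len(y)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            score = (left.sum() * gini(y[left]) + (~left).sum() * gini(y[~left])) / n
            if score < best_score:
                best_feature, best_threshold, best_score = j, t, score
    return best_feature, best_threshold, best_score

Optimal decision-tree methods instead search over entire trees (e.g., by dynamic programming or mathematical programming), trading this locality for global guarantees on the chosen objective.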

Necessary and sufficient conditions for optimal decision trees using dynamic programming

J van der Linden, M de Weerdt… - Advances in Neural …, 2024 - proceedings.neurips.cc
Global optimization of decision trees has been shown to be promising in terms of accuracy, size,
and consequently human comprehensibility. However, many of the methods used rely on …

Safety verification of decision-tree policies in continuous time

C Schilling, A Lukina, E Demirović… - Advances in Neural …, 2024 - proceedings.neurips.cc
Decision trees have gained popularity as interpretable surrogate models for learning-based
control policies. However, providing safety guarantees for systems controlled by decision …

An Oracle-Guided Approach to Constrained Policy Synthesis Under Uncertainty

R Andriushchenko, M Češka, F Macák, S Junges… - Journal of Artificial …, 2025 - jair.org
Dealing with aleatoric uncertainty is key in many domains involving sequential decision
making, e.g., planning in AI, network protocols, and symbolic program synthesis. This paper …

A Novel Tree-Based Method for Interpretable Reinforcement Learning

Y Li, S Qi, X Wang, J Zhang, L Cui - ACM Transactions on Knowledge …, 2024 - dl.acm.org
Deep reinforcement learning (DRL) has garnered remarkable success across various
domains, propelled by advancements in deep learning (DL) technologies. However, the …

Policies Grow on Trees: Model Checking Families of MDPs

R Andriushchenko, M Češka, S Junges… - arXiv preprint arXiv …, 2024 - arxiv.org
Markov decision processes (MDPs) provide a fundamental model for sequential decision
making under process uncertainty. A classical synthesis task is to compute for a given MDP …

Optimizing Interpretable Decision Tree Policies for Reinforcement Learning

D Vos, S Verwer - arXiv preprint arXiv:2408.11632, 2024 - arxiv.org
Reinforcement learning techniques leveraging deep learning have made tremendous
progress in recent years. However, the complexity of neural networks prevents practitioners …

Constraint-Generation Policy Optimization (CGPO): Nonlinear Programming for Policy Optimization in Mixed Discrete-Continuous MDPs

M Gimelfarb, A Taitler, S Sanner - arXiv preprint arXiv:2401.12243, 2024 - arxiv.org
We propose Constraint-Generation Policy Optimization (CGPO) for optimizing policy
parameters within compact and interpretable policy classes for mixed discrete-continuous …

Small Decision Trees for MDPs with Deductive Synthesis

R Andriushchenko, M Češka, S Junges… - arXiv preprint arXiv …, 2025 - arxiv.org
Markov decision processes (MDPs) describe sequential decision-making processes; an MDP
policy returns an advised action for every state in that process. Classical algorithms can …

In Search of Trees: Decision-Tree Policy Synthesis for Black-Box Systems via Search

E Demirović, C Schilling, A Lukina - arXiv preprint arXiv:2409.03260, 2024 - arxiv.org
Decision trees, owing to their interpretability, are attractive as control policies for (dynamical)
systems. Unfortunately, constructing, or synthesising, such policies is a challenging task …
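
To make the shared object of study in the control- and RL-oriented entries above concrete, here is a minimal, hand-written decision-tree policy for a CartPole-like system; the observation layout, thresholds, and action encoding are illustrative assumptions, not taken from any of the cited papers.

def tree_policy(obs):
    # obs = (cart_position, cart_velocity, pole_angle, pole_angular_velocity).
    # Each internal node tests one state variable against a threshold and
    # each leaf returns a discrete action (0 = push left, 1 = push right).
    _, _, angle, angular_velocity = obs
    if angle <= 0.0:
        return 0 if angular_velocity <= 0.0 else 1
    return 1 if angular_velocity >= 0.0 else 0

print(tree_policy((0.0, 0.1, 0.05, -0.2)))  # pole tilted right but swinging back -> push left (0)

Because every decision can be read directly from such a tree, policies of this form are natural targets for the verification and synthesis problems the entries above address, in contrast to neural network policies.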