Convergence of finite memory Q learning for POMDPs and near optimality of learned policies under filter stability. A.D. Kara, S. Yüksel. Mathematics of Operations Research 48(4), 2066–2093, 2023. [Cited by 44]
Near optimality of finite memory feedback policies in partially observed Markov decision processes. A.D. Kara, S. Yüksel. Journal of Machine Learning Research 23(11), 1–46, 2022. [Cited by 38]
Robustness to incorrect system models in stochastic control. A.D. Kara, S. Yüksel. SIAM Journal on Control and Optimization 58(2), 1144–1182, 2020. [Cited by 37]
Weak Feller property of non-linear filters. A.D. Kara, N. Saldi, S. Yüksel. Systems & Control Letters 134, 104512, 2019. [Cited by 36]
Q-learning for MDPs with general spaces: Convergence and near optimality via quantization under weak continuity. A.D. Kara, N. Saldi, S. Yüksel. Journal of Machine Learning Research 24(199), 1–34, 2023. [Cited by 28]
Robustness to incorrect priors in partially observed stochastic control. A.D. Kara, S. Yüksel. SIAM Journal on Control and Optimization 57(3), 1929–1964, 2019. [Cited by 28]
Q-learning for stochastic control under general information structures and non-Markovian environments. A.D. Kara, S. Yüksel. arXiv preprint arXiv:2311.00123, 2023. [Cited by 15]
Robustness to incorrect models and data-driven learning in average-cost optimal stochastic control. A.D. Kara, M. Raginsky, S. Yüksel. Automatica 139, 110179, 2022. [Cited by 15]
Approximate Q-learning for controlled diffusion processes and its near optimality. E. Bayraktar, A.D. Kara. SIAM Journal on Mathematics of Data Science 5(3), 615–638, 2023. [Cited by 9]
Robustness to approximations and model learning in MDPs and POMDPs. A.D. Kara, S. Yüksel. Modern Trends in Controlled Stochastic Processes: Theory and Applications, V …, 2021. [Cited by 8]
Average cost optimality of partially observed MDPs: Contraction of nonlinear filters and existence of optimal solutions and approximations. Y.E. Demirci, A.D. Kara, S. Yüksel. SIAM Journal on Control and Optimization 62(6), 2859–2883, 2024. [Cited by 7]
Robustness to incorrect system models in stochastic control and application to data-driven learning. A.D. Kara, S. Yüksel. 2018 IEEE Conference on Decision and Control (CDC), 2753–2758, 2018. [Cited by 7]
Finite approximations and Q learning for mean field type multi agent control. E. Bayraktar, N. Bäuerle, A.D. Kara. arXiv preprint arXiv:2211.09633, 2023. [Cited by 4]
Finite approximations for mean field type multi-agent control and their near optimality. E. Bayraktar, N. Bäuerle, A.D. Kara. arXiv preprint arXiv:2211.09633, 2022. [Cited by 4]
Robustness to incorrect models in average-cost optimal stochastic control. A.D. Kara, M. Raginsky, S. Yüksel. 2019 IEEE 58th Conference on Decision and Control (CDC), 7970–7975, 2019. [Cited by 4]
Q-learning for continuous state and action MDPs under average cost criteria. A.D. Kara, S. Yüksel. arXiv preprint arXiv:2308.07591, 2023. [Cited by 3]
Near optimality of finite memory policies for POMDPs with continuous spaces. A.D. Kara, E. Bayraktar, S. Yüksel. 2022 IEEE 61st Conference on Decision and Control (CDC), 2301–2306, 2022. [Cited by 2]
Convergence and near optimality of Q-learning with finite memory for partially observed models. A.D. Kara, S. Yüksel. 2021 60th IEEE Conference on Decision and Control (CDC), 1603–1608, 2021. [Cited by 2]
Infinite horizon average cost optimality criteria for mean-field control. E. Bayraktar, A.D. Kara. SIAM Journal on Control and Optimization 62(5), 2776–2806, 2024. [Cited by 1]
Learning with linear function approximations in mean-field control. E. Bayraktar, A.D. Kara. arXiv preprint arXiv:2408.00991, 2024. [Cited by 1]