Linear quadratic control using model-free reinforcement learning

FA Yaghmaie, F Gustafsson… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
In this article, we consider the linear quadratic (LQ) control problem with process and
measurement noises. We analyze the LQ problem in terms of the average cost and the …

Optimal leader-following consensus control of multi-agent systems: A neural network based graphical game approach

Y Ren, Q Wang, Z Duan - IEEE Transactions on Network …, 2022 - ieeexplore.ieee.org
In this article, the optimal leader-following consensus control problem of multi-agent systems
is solved using a novel neural network-based (NN-based) integrated heuristic dynamic …

Distributed consensus protocol for multi-agent differential graphical games

S Zhang, Z Zhang, R Cui, W Yan… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
This brief investigates the multi-agent differential graphical game for high-order systems. A
modified cost function for each agent is presented, and a fully distributed control protocol …

Output‐feedback Q‐learning for discrete‐time linear H∞ tracking control: A Stackelberg game approach

Y Ren, Q Wang, Z Duan - International Journal of Robust and …, 2022 - Wiley Online Library
In this article, an output‐feedback Q‐learning algorithm is proposed for the discrete‐time
linear system to deal with the H∞ tracking control problem. The problem is formulated …

Adaptive fuzzy sliding-mode consensus control of nonlinear under-actuated agents in a near-optimal reinforcement learning framework

A Mousavi, AHD Markazi, E Khanmirza - Journal of the Franklin Institute, 2022 - Elsevier
This study presents a new framework for merging the Adaptive Fuzzy Sliding-Mode Control
(AFSMC) with an off-policy Reinforcement Learning (RL) algorithm to control nonlinear …

Using reinforcement learning for model-free linear quadratic control with process and measurement noises

FA Yaghmaie, F Gustafsson - 2019 IEEE 58th Conference on …, 2019 - ieeexplore.ieee.org
In this paper, we analyze a Linear Quadratic (LQ) control problem in terms of the average
cost and the structure of the value function. We develop a completely model-free …

Numerically efficient H∞ analysis of cooperative multi-agent systems

I Nakić, D Tolić, Z Tomljanović, I Palunko - Journal of the Franklin Institute, 2022 - Elsevier
This article proposes a numerically efficient approach for computing the maximal (or
minimal) impact one agent has on the cooperative system it belongs to. For example, if one …

Robust distributed Nash equilibrium solution for multi‐agent differential graphical games

S Zhang, Z Zhang, R Cui, W Yan - IET Control Theory & …, 2024 - Wiley Online Library
This paper studies the differential graphical games for linear multi‐agent systems with
modelling uncertainties. A robust optimal control policy that seeks the distributed Nash …

Distributed Nash equilibrium learning: A second‐order proximal algorithm

W Pan, Y Lu, Z Jia, W Zhang - International Journal of Robust …, 2021 - Wiley Online Library
This article addresses the distributed Nash equilibrium (NE) seeking problem for multiagent
networked games with partial decision information. We employ a quadratically approximated …

Model-free H∞ synchronization of leader–follower systems with guaranteed convergence rate using reinforcement learning

A Rahdarian, S Shamaghdari - International Journal of Dynamics and …, 2023 - Springer
In this paper, a model-free optimal reinforcement learning (RL)-based approach is
presented for solving the optimal synchronization problem for leader–follower multi-agent …