Citation: Jinxuan Zhang, Chang-E Ren. Event-triggered H∞ consensus control for input-constrained multi-agent systems via reinforcement learning [J]. Control Theory and Technology, 2024, 22(1): 25-38.



Event-triggered H∞ consensus control for input-constrained multi-agent systems via reinforcement learning
Jinxuan Zhang1, Chang-E Ren1
(1 College of Information Engineering, Capital Normal University, Beijing 100089, China)
Abstract:
This article presents an event-triggered H∞ consensus control scheme using reinforcement learning (RL) for nonlinear second-order multi-agent systems (MASs) with control constraints. First, considering the control constraints, the constrained H∞ consensus problem is transformed into a multi-player zero-sum game with non-quadratic performance functions. Then, an event-triggered control method is presented to conserve communication resources, and a new triggering condition is developed for each agent to make the triggering threshold independent of the disturbance attenuation level. To derive the optimal controller that minimizes the cost function under the worst-case disturbance, a constrained Hamilton–Jacobi–Bellman (HJB) equation is defined. Since this equation is difficult to solve analytically due to its strong nonlinearity, RL is employed to obtain the optimal controller. Specifically, the optimal performance function and the worst-case disturbance are approximated by a time-triggered critic network, while the optimal controller is approximated by an event-triggered actor network. After that, Lyapunov analysis is used to prove that the system is uniformly ultimately bounded (UUB) stable and that the network weight errors are UUB. Finally, a simulation example demonstrates the effectiveness of the proposed control strategy.
Key words:  H∞ optimal control · Input constraints · Multi-agent systems (MASs) · Reinforcement learning (RL)
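The actor-critic structure summarized in the abstract can be sketched for a single agent as follows. This is an illustrative sketch only, not the paper's equations: the double-integrator plant, quadratic critic basis, fixed actor gain, stage cost, learning rate, and constant triggering threshold are all assumptions. It shows the three ingredients named in the abstract: a tanh-saturated control respecting the input constraint, an event condition that refreshes the controller's held state only when a measurement gap exceeds a threshold, and a time-triggered critic update driving a Bellman residual toward zero.

```python
import numpy as np

np.random.seed(0)
u_max = 1.0          # input constraint: |u| <= u_max
dt = 0.01            # integration step
theta = 0.05         # event-triggering threshold (assumed constant)
alpha_c = 0.1        # critic learning rate (assumed)

def phi(x):
    # Critic basis: assumed quadratic features of the 2-D state
    return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

Wc = np.zeros(3)             # critic network weights
K = np.array([1.0, 1.5])     # fixed gain standing in for the actor network

x = np.array([1.0, 0.0])     # agent state [position, velocity]
x_hat = x.copy()             # last event-sampled state held by the actor
events = 0

for _ in range(2000):
    # Event-triggered actor: refresh x_hat only when the gap exceeds theta
    if np.linalg.norm(x - x_hat) > theta:
        x_hat = x.copy()
        events += 1

    # Saturated control (non-quadratic cost setting): |u| <= u_max always
    u = u_max * np.tanh(-(K @ x_hat) / u_max)

    # Assumed double-integrator dynamics, forward-Euler step
    x_new = x + dt * np.array([x[1], u])

    # Time-triggered critic update: gradient step on the Bellman residual
    r = x @ x + u * u                              # assumed stage cost
    delta = r * dt + Wc @ phi(x_new) - Wc @ phi(x)
    Wc -= alpha_c * delta * (phi(x_new) - phi(x))

    x = x_new
```

Because the actor holds `x_hat` between events, control updates occur far less often than the sampling rate, which is the communication saving the event-triggered scheme targets; the critic, by contrast, updates at every step, matching the time-triggered critic / event-triggered actor split described in the abstract.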
DOI:https://doi.org/10.1007/s11768-023-00177-4