Citation: Y. Geng, X. Ruan. Quasi-Newton-type optimized iterative learning control for discrete linear time invariant systems [J]. Control Theory and Technology, 2015, 13(3): 256–265.



Quasi-Newton-type optimized iterative learning control for discrete linear time invariant systems
Y. Geng, X. Ruan
(School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an Shaanxi 710049, China)
Abstract:
In this paper, a quasi-Newton-type optimized iterative learning control (ILC) algorithm is investigated for a class of discrete linear time-invariant systems. The proposed learning algorithm updates the learning gain matrix with a quasi-Newton-type matrix instead of the inverse of the plant. The monotone convergence of the proposed algorithm is analyzed by mathematical induction, which shows that the tracking error converges monotonically to zero within a finite number of iterations. Compared with existing optimized ILC algorithms, the proposed learning law benefits from the superlinear convergence of the quasi-Newton method: it converges faster, is robust to ill-conditioning of the system model, and therefore suits a wide range of applications. Numerical simulations demonstrate its validity and effectiveness.
Key words:  Iterative learning control, optimization, quasi-Newton method, inverse plant
Received: November 05, 2014    Revised: July 04, 2015
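
A minimal sketch of the idea summarized in the abstract, written in Python/NumPy: the learning gain matrix H is refined trial by trial with a Broyden-type quasi-Newton update so that it approaches the inverse of the lifted plant matrix G without ever inverting the model explicitly. The Broyden update, the helper names (build_lifted_plant, quasi_newton_ilc), and the example system matrices are illustrative assumptions, not the paper's exact learning law or notation.

import numpy as np

def build_lifted_plant(A, B, C, N):
    """Lifted (supervector) model y = G u of a discrete LTI system over N samples.
    G is lower-triangular Toeplitz with the Markov parameters C A^k B on its diagonals."""
    markov = []
    Apow = np.eye(A.shape[0])
    for _ in range(N):
        markov.append((C @ Apow @ B).item())
        Apow = A @ Apow
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = markov[i - j]
    return G

def quasi_newton_ilc(G, y_d, iterations=30):
    """ILC whose learning gain H is a quasi-Newton (Broyden-type) estimate of G^{-1}."""
    N = len(y_d)
    u = np.zeros(N)
    H = np.eye(N)                     # initial gain: no plant inversion required
    u_prev = y_prev = None
    error_norms = []
    for _ in range(iterations):
        y = G @ u                     # one trial on the (simulated) plant
        e = y_d - y
        error_norms.append(np.linalg.norm(e))
        if u_prev is not None:
            du, dy = u - u_prev, y - y_prev
            denom = dy @ dy
            if denom > 1e-12:
                # Broyden ("bad") update: enforce H dy = du, so H drifts toward G^{-1}
                H += np.outer(du - H @ dy, dy) / denom
        u_prev, y_prev = u.copy(), y.copy()
        u = u + H @ e                 # optimized ILC update with the quasi-Newton gain
    return u, error_norms

if __name__ == "__main__":
    # Illustrative second-order plant (assumed, not taken from the paper)
    A = np.array([[0.5, 0.1],
                  [0.0, 0.3]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 1.0]])
    N = 50
    G = build_lifted_plant(A, B, C, N)
    y_d = np.sin(2 * np.pi * np.arange(N) / N)    # reference trajectory
    _, error_norms = quasi_newton_ilc(G, y_d)
    print("tracking-error norms:", [round(v, 4) for v in error_norms])

Running the script prints the trial-by-trial tracking-error norms, which gives a quick way to inspect the convergence behaviour; the paper itself establishes monotone convergence and the quasi-Newton speed-up formally for its exact update law.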