Fuzzy-based adaptive optimization of unknown discrete-time nonlinear Markov jump systems with off-policy reinforcement learning
Fang, H., Tu, Y., Wang, H. (ORCID: 0000-0003-2789-9530), He, S., Liu, F., Ding, Z. and Cheng, S.S.
(2022)
Fuzzy-based adaptive optimization of unknown discrete-time nonlinear Markov jump systems with off-policy reinforcement learning.
IEEE Transactions on Fuzzy Systems. Early Access.
Abstract
This paper explores a novel adaptive optimal control strategy for a class of sophisticated discrete-time nonlinear Markov jump systems (DTNMJSs) via Takagi-Sugeno (T-S) fuzzy models and reinforcement learning (RL) techniques. Firstly, the original nonlinear system model is represented by fuzzy approximation, so that the associated optimal control problem reduces to designing fuzzy controllers for linear fuzzy systems with Markov jumping parameters. Subsequently, we derive the fuzzy coupled algebraic Riccati equations (FCAREs) for the fuzzy-based discrete-time linear Markov jump systems by using Hamiltonian-Bellman methods. Following this, an online fuzzy optimization algorithm for DTNMJSs, together with the associated equivalence proof, is given. Then, a fully model-free off-policy fuzzy RL algorithm with proven convergence is derived for DTNMJSs, requiring no knowledge of the system dynamics or transition probabilities. Finally, two simulation examples, a single-link robotic arm and a half-car active suspension, are given to verify the effectiveness and good performance of the proposed approach.
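To give a flavour of the coupled Riccati equations the abstract refers to, the sketch below solves the mode-coupled discrete-time algebraic Riccati equations of a standard Markov jump LQR problem by fixed-point iteration. This is a minimal illustration of the general construction, not the paper's fuzzy model-free algorithm: the two-mode system matrices, transition probabilities, and cost weights are all made-up example data, and a model-based iteration is used in place of the authors' off-policy RL scheme.

```python
import numpy as np

# Hypothetical two-mode Markov jump linear system (illustrative data only).
A = [np.array([[0.9, 0.1], [0.0, 0.8]]),
     np.array([[0.7, 0.2], [0.1, 0.9]])]
B = [np.array([[0.0], [1.0]]),
     np.array([[1.0], [0.5]])]
Pr = np.array([[0.8, 0.2],        # mode transition probability matrix
               [0.3, 0.7]])
Q, R = np.eye(2), np.eye(1)       # state and input cost weights

def coupled_dare(A, B, Pr, Q, R, iters=500):
    """Fixed-point iteration on the mode-coupled Riccati equations
    P_i = Q + A_i' E_i A_i - A_i' E_i B_i (R + B_i' E_i B_i)^{-1} B_i' E_i A_i,
    where E_i = sum_j Pr[i, j] * P_j couples the modes."""
    N = len(A)
    P = [np.copy(Q) for _ in range(N)]
    for _ in range(iters):
        E = [sum(Pr[i, j] * P[j] for j in range(N)) for i in range(N)]
        P = [Q + A[i].T @ E[i] @ A[i]
             - A[i].T @ E[i] @ B[i] @ np.linalg.solve(
                   R + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
             for i in range(N)]
    return P

P = coupled_dare(A, B, Pr, Q, R)
# Mode-dependent optimal feedback gains u_k = -K_i x_k.
E = [sum(Pr[i, j] * P[j] for j in range(2)) for i in range(2)]
K = [np.linalg.solve(R + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
     for i in range(2)]
```

The key difference from a single-mode Riccati equation is the coupling term `E_i`, an expectation of the next-step value matrices over the mode transition probabilities; the paper's contribution is solving equations of this type without knowing `A`, `B`, or `Pr`.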
| Item Type: | Journal Article |
|---|---|
| Murdoch Affiliation(s): | Engineering and Energy |
| Publisher: | IEEE |
| Copyright: | © 2022 IEEE |
| URI: | http://researchrepository.murdoch.edu.au/id/eprint/64791 |