Murdoch University Research Repository


A fuzzy system with common linear-term consequents equivalent to FLNN and GMM

Zhang, Y., Wang, G. ORCID: 0000-0002-5258-0532, Chung, F-L. and Wang, S. (2022) A fuzzy system with common linear-term consequents equivalent to FLNN and GMM. International Journal of Machine Learning and Cybernetics.

Link to Published Version: https://doi.org/10.1007/s13042-021-01460-z
*Subscription may be required

Abstract

In this study, a novel Takagi–Sugeno–Kang (TSK) fuzzy system, termed CLT–TSK, in which the consequents of all fuzzy rules share a common linear term, is developed and shown to have four distinctive merits: (1) because far fewer parameters are involved, CLT–TSK has enhanced interpretability; (2) a slightly modified functional-link neural network (FLNN), an extensively used computational intelligence tool, is provably equivalent to CLT–TSK, so FLNN is revisited for the first time from the perspective of fuzzy models; (3) under the mild assumption that each component of a Gaussian mixture model (GMM) contributes equally to the formulation of the GMM, in the sense that the intrinsic structure of the training samples in each component has the same effect on the corresponding label structure, CLT–TSK is theoretically proved to be equivalent to GMM, which helps us understand both CLT–TSK and FLNN from a new statistical perspective; (4) the output expression of CLT–TSK accords exactly with the recent observation that a simple regression term should be a basic yet very important component of final prediction models for various data modeling tasks. Owing to the equivalence among CLT–TSK, FLNN and GMM, their respective learning methods can now be employed interchangeably, and any new method for training one of these three models provides a potential learning method for the other two. In particular, building on our previous work on the least learning machine, we develop a fast learning method for CLT–TSK. Experimental results on several kinds of datasets demonstrate the promising classification and runtime performance of CLT–TSK.
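To illustrate the shared-consequent idea in merit (1), the sketch below shows a TSK-style predictor whose rules all share one linear term, so the output reduces to a global regression plus rule-weighted offsets. This is an illustrative assumption, not the paper's exact formulation: the antecedent form (Gaussian memberships), the parameter names, and the rule-specific bias terms are all hypothetical.

```python
import numpy as np

def clt_tsk_predict(x, centers, widths, a, b):
    """Toy TSK predictor with a common linear-term consequent.

    x       : input vector, shape (d,)
    centers : Gaussian antecedent centers, shape (K, d)   [assumed form]
    widths  : Gaussian antecedent widths,  shape (K, d)   [assumed form]
    a       : linear-term weights shared by all K rules, shape (d,)
    b       : rule-specific scalar offsets, shape (K,)    [hypothetical]
    """
    # Firing strength of each rule's Gaussian antecedent
    fire = np.exp(-np.sum((x - centers) ** 2 / (2.0 * widths ** 2), axis=1))
    # Normalized firing strengths
    mu = fire / fire.sum()
    # Common linear term a^T x plus the rule-weighted offsets:
    # only K + d + ... parameters instead of K * (d + 1) consequent weights
    return a @ x + mu @ b
```

Because the linear weights `a` are shared, the rule base contributes only the scalar offsets `b`, which is one way to read both the parameter-reduction claim in merit (1) and the "simple regression as a basic component" observation in merit (4).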

Item Type: Journal Article
Murdoch Affiliation(s): IT, Media and Communications
Publisher: Springer Nature
Copyright: © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany
URI: http://researchrepository.murdoch.edu.au/id/eprint/63542