Exploiting layerwise convexity of rectifier networks with sign constrained weights

An, S., Boussaid, F., Bennamoun, M. and Sohel, F. (2018) Exploiting layerwise convexity of rectifier networks with sign constrained weights. Neural Networks, 105, pp. 419-430.

PDF - Authors' Version
Embargoed until June 2020.

Link to Published Version: https://doi.org/10.1016/j.neunet.2018.06.005
*Subscription may be required

Abstract

By introducing sign constraints on the weights, this paper proposes sign constrained rectifier networks (SCRNs), whose training can be solved efficiently by the well-known majorization–minimization (MM) algorithms. We prove that the proposed two-hidden-layer SCRNs, which exhibit negative weights in the second hidden layer and non-negative weights in the output layer, are capable of separating any number of disjoint pattern sets. Furthermore, the proposed two-hidden-layer SCRNs can decompose the patterns of each class into several clusters so that each cluster is convexly separable from all the patterns of the other classes. This provides a means to learn the pattern structures and analyse the discriminant factors between different classes of patterns. Experimental results are provided to show the benefits of sign constraints in improving classification performance and the efficiency of the proposed MM algorithm.
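To illustrate the architecture the abstract describes, the following is a minimal NumPy sketch of a two-hidden-layer rectifier network with sign-constrained weights. All layer sizes, the input data, and the way the constraints are enforced (taking absolute values of random draws) are illustrative assumptions, not the paper's training procedure: the first layer is unconstrained and carves out hyperplane features, the second hidden layer has non-positive weights (so each of its ReLU units fires on a convex region), and the output layer has non-negative weights (so the classifier responds on a union of such convex regions).

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)

# Illustrative layer sizes: 2 inputs, 4 first-layer units, 3 second-layer units.
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)   # unconstrained first layer
W2 = -np.abs(rng.normal(size=(3, 4)))                  # sign constraint: W2 <= 0
b2 = rng.normal(size=3)
w3 = np.abs(rng.normal(size=3))                        # sign constraint: w3 >= 0
b3 = 0.0

def scrn_forward(x):
    h1 = relu(W1 @ x + b1)   # hyperplane features
    h2 = relu(W2 @ h1 + b2)  # each unit is active on a convex region of the input
    return w3 @ h2 + b3      # non-negative mixing: response on a union of convex regions
```

The sign pattern is what makes each second-layer unit's active region convex: inside it, the pre-activation is a concave (non-positive combination of convex ReLU) function, and the set where a concave function is positive is convex.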

Publication Type: Journal Article
Murdoch Affiliation: School of Engineering and Information Technology
Publisher: Elsevier
Copyright: © 2018 Elsevier Ltd.
URI: http://researchrepository.murdoch.edu.au/id/eprint/41283