Computer vision for human-machine interaction
Ke, Q., Liu, J., Bennamoun, M., An, S., Sohel, F. and Boussaid, F. (2018) Computer vision for human-machine interaction. In: Leo, M. and Farinella, G.M., (eds.) Computer Vision for Assistive Healthcare. Academic Press, pp. 127-145.
Abstract
Human-Machine Interaction (HMI) refers to the communication and interaction between a human and a machine via a user interface. Natural user interfaces, such as gestures, have gained increasing attention because they allow humans to control machines through natural and intuitive behaviours. In gesture-based HMI, a sensor such as the Microsoft Kinect captures human postures and motions, which are then processed to control a machine. The key task of gesture-based HMI is to recognize meaningful expressions of human motion from the data provided by the Kinect, including RGB (red, green, blue), depth, and skeleton information. In this chapter, we focus on the gesture recognition task for HMI and introduce current deep learning methods that have been used for human motion analysis and RGB-D-based gesture recognition. More specifically, we briefly introduce convolutional neural networks (CNNs) and then present several CNN-based deep learning frameworks that have been used for gesture recognition with RGB, depth, and skeleton sequences.
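As a rough illustration of the kind of CNN pipeline the abstract refers to (not the chapter's own architecture), the sketch below classifies a gesture from a single 4-channel RGB-D frame. It assumes PyTorch is available; the class name `GestureCNN`, the layer sizes, and the 20-class output are hypothetical choices for illustration only.

```python
# Minimal sketch (not from the chapter): a small CNN that scores gesture
# classes from one RGB-D frame, e.g. captured by a Kinect sensor.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, num_classes: int = 20):  # hypothetical class count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1),  # 4 channels: RGB + depth
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # pool to one value per feature map
            nn.Flatten(),
            nn.Linear(64, num_classes),  # one score per gesture class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: a batch of two 112x112 RGB-D frames -> per-class scores.
model = GestureCNN(num_classes=20)
scores = model(torch.randn(2, 4, 112, 112))
print(scores.shape)  # torch.Size([2, 20])
```

In practice, the frameworks surveyed in the chapter operate on sequences (RGB, depth, or skeleton) rather than single frames, so temporal information would be aggregated as well, for example by stacking frames or encoding motion into an image-like representation before applying the CNN.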
| Item Type: | Book Chapter |
|---|---|
| Murdoch Affiliation(s): | School of Engineering and Information Technology |
| Publisher: | Academic Press |
| Publisher's Website: | https://www.elsevier.com/books/computer-vision-for... |
| URI: | http://researchrepository.murdoch.edu.au/id/eprint/41411 |