Murdoch University Research Repository

Acoustic scene classification using joint time-frequency image-based feature representations

Abidin, S., Togneri, R. and Sohel, F. (2018) Acoustic scene classification using joint time-frequency image-based feature representations. In: 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) 2018, 27 - 30 November 2018, Auckland, New Zealand.

The classification of acoustic scenes is important in emerging applications such as automatic audio surveillance, machine listening and multimedia content analysis. In this paper, we present an approach to acoustic scene classification that uses joint time-frequency image-based feature representations. For acoustic scene classification, a joint time-frequency representation (TFR) has been shown to better capture important information across a wide range of low and middle frequencies in the audio signal. The audio signal is converted into Constant-Q Transform (CQT) and Mel-spectrum TFRs, and local binary patterns (LBP) are used to extract features from these TFRs. To ensure that localized spectral information is not lost, the TFRs are divided into a number of zones. Score-level fusion is then performed to further improve classification accuracy. Our technique achieves a competitive classification accuracy of 83.4% on the DCASE 2016 development dataset compared with the current state of the art.
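The feature-extraction pipeline described above (LBP codes computed over a time-frequency image, with the image split into zones so that localized spectral information is preserved) can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation: the function names, the choice of a basic 3x3 LBP neighbourhood, the zoning along the frequency axis, and the number of zones are all assumptions for the sake of the example.

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern codes for the interior pixels of a
    2D time-frequency image: each neighbour >= centre contributes one bit.
    (Bit ordering here is an arbitrary choice for illustration.)"""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def zoned_lbp_histogram(tfr, n_zones=4):
    """Split a TFR (frequency x time) into frequency zones and concatenate
    the per-zone LBP histograms, so localized spectral detail is kept."""
    feats = []
    for zone in np.array_split(tfr, n_zones, axis=0):
        hist, _ = np.histogram(lbp_codes(zone), bins=256,
                               range=(0, 256), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Example on a random stand-in for a CQT or Mel-spectrum image:
rng = np.random.default_rng(0)
tfr = rng.random((64, 100))        # 64 frequency bins x 100 time frames
feature_vector = zoned_lbp_histogram(tfr, n_zones=4)
print(feature_vector.shape)        # 4 zones x 256-bin histograms
```

A feature vector like this would be computed separately for the CQT and Mel-spectrum images; the score-level fusion step then combines the classifier scores obtained from each representation (e.g. by averaging posteriors) rather than concatenating the features themselves.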

Item Type: Conference Paper
Murdoch Affiliation(s): School of Engineering and Information Technology