
Learning shape retrieval from different modalities

Tabia, H. and Laga, H. (2017) Learning shape retrieval from different modalities. Neurocomputing, 253, pp. 24-33.

PDF - Authors' Version
Embargoed until August 2019.

Link to Published Version: http://dx.doi.org/10.1016/j.neucom.2017.01.101
*Subscription may be required

Abstract

We propose in this paper a new framework for 3D shape retrieval using queries of different modalities, including 3D models, images and sketches. The main scientific challenge is that different modalities have different representations and thus lie in different spaces. Moreover, the features that can be extracted from 2D images or 2D sketches often differ from those that can be computed from 3D models. Our solution is a new method based on Convolutional Neural Networks (CNNs) that embeds all these entities into a common space. We propose a novel 3D shape descriptor based on local CNN features encoded using vectors of locally aggregated descriptors (VLAD) instead of conventional global CNN features. Using a kernel function computed from 3D shape similarity, we build a target space onto which images in the wild and sketches can be projected via two different CNNs. With this construction, matching can be performed in the common target space between entities of the same modality (sketch–sketch, image–image and 3D shape–3D shape) and, more importantly, across modalities (sketch–image, sketch–3D shape and image–3D shape). We demonstrate the performance of the proposed framework on several benchmarks, including large-scale SHREC 3D datasets.
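The VLAD encoding step mentioned in the abstract — aggregating local CNN features into a single shape descriptor — can be sketched as follows. This is a minimal illustration of standard VLAD encoding, not the authors' implementation; the function name, the use of a k-means codebook, and the normalization choices are assumptions.

```python
import numpy as np

def vlad_encode(local_features, codebook):
    """Aggregate local descriptors into a single VLAD vector.

    local_features: (n, d) array of local CNN features (hypothetical input).
    codebook: (k, d) array of cluster centroids, e.g. learned by k-means.
    Returns a flattened (k*d,) L2-normalized VLAD descriptor.
    """
    k, d = codebook.shape
    # Assign each local feature to its nearest centroid.
    dists = np.linalg.norm(local_features[:, None, :] - codebook[None, :, :], axis=2)
    assignments = np.argmin(dists, axis=1)
    # Accumulate residuals (feature minus centroid) per cluster.
    vlad = np.zeros((k, d))
    for i, c in enumerate(assignments):
        vlad[c] += local_features[i] - codebook[c]
    # Signed square-root (power) normalization, a common VLAD refinement.
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    # Global L2 normalization of the flattened vector.
    vlad = vlad.flatten()
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad
```

Descriptors produced this way live in a fixed-dimensional space regardless of how many local features a shape yields, which is what makes cross-modal matching in a common target space possible.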

Publication Type: Journal Article
Murdoch Affiliation: School of Engineering and Information Technology
Publisher: Elsevier BV
Copyright: © 2017 Elsevier B.V.
URI: http://researchrepository.murdoch.edu.au/id/eprint/36216