Attention-Based Image Captioning Using DenseNet Features
Hossain, M.Z., Sohel, F., Shiratuddin, M.F., Laga, H.ORCID: 0000-0002-4758-7510 and Bennamoun, M.
(2019)
Attention-Based Image Captioning Using DenseNet Features.
In: 26th International Conference on Neural Information Processing (ICONIP 2019), 12 - 15 December 2019, Sydney, NSW, Australia
Abstract
We present an attention-based image captioning method using DenseNet features. Conventional image captioning methods rely on visual information from the whole scene to generate captions. Such a mechanism often fails to capture the salient objects in the scene and can produce semantically incorrect captions. We instead use an attention mechanism that focuses on the relevant parts of the image to generate a fine-grained description of that image, using image features extracted by DenseNet. We conduct our experiments on the MSCOCO dataset. Our proposed method achieves 53.6, 39.8, and 29.5 on the BLEU-2, BLEU-3, and BLEU-4 metrics, respectively, which is superior to the state-of-the-art methods.
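To illustrate the soft-attention idea described in the abstract, the sketch below computes attention weights over a grid of CNN region features and returns an attended context vector for the caption decoder. This is a minimal NumPy sketch, not the authors' implementation: the dimensions, weight matrices (`W_f`, `W_h`, `w_a`), and additive-attention scoring are illustrative assumptions.

```python
import numpy as np

def soft_attention(features, hidden, W_f, W_h, w_a):
    """Additive (Bahdanau-style) soft attention over image regions.

    features: (L, D) region features, e.g. a DenseNet conv map flattened
              from a 7x7 spatial grid into L = 49 regions.
    hidden:   (H,) decoder hidden state at the current time step.
    W_f, W_h, w_a: hypothetical learned projection parameters.
    """
    # Score each region against the decoder state (additive attention).
    scores = np.tanh(features @ W_f + hidden @ W_h) @ w_a   # (L,)
    # Softmax over regions -> attention weights that sum to 1.
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    # Weighted sum of region features -> context vector for the decoder.
    context = alpha @ features                              # (D,)
    return context, alpha

# Toy usage with random parameters (shapes only are meaningful here).
rng = np.random.default_rng(0)
L_regions, D, H, A = 49, 64, 32, 16
features = rng.standard_normal((L_regions, D))
hidden = rng.standard_normal(H)
W_f = rng.standard_normal((D, A))
W_h = rng.standard_normal((H, A))
w_a = rng.standard_normal(A)
context, alpha = soft_attention(features, hidden, W_f, W_h, w_a)
```

At each decoding step, `context` would be fed to the caption decoder alongside the previous word, so the generated word can be conditioned on the image regions the model currently attends to.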
Item Type: | Conference Paper
---|---
Murdoch Affiliation(s): | Information Technology, Mathematics and Statistics
URI: | http://researchrepository.murdoch.edu.au/id/eprint/54609