Document Type
Book Chapter
Rights
Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence
Disciplines
1.2 COMPUTER AND INFORMATION SCIENCE
Abstract
This paper examines to what degree current deep learning architectures for image caption generation capture spatial language. On the basis of an evaluation of examples of generated captions from the literature, we argue that these systems capture what objects are in the image data but not where these objects are located: the captions generated by these systems are the output of a language model conditioned on the output of an object detector that cannot capture fine-grained location information. Although language models provide useful knowledge for image captions, we argue that deep learning image captioning architectures should also model geometric relations between objects.
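The architecture the abstract criticises, a language model conditioned on an object detector's output, can be illustrated with a toy sketch. The code below is not the implementation of any system reviewed in the paper; it is a minimal PyTorch example (the class name, layer sizes, and object labels are all illustrative assumptions) of a caption decoder conditioned on a bag-of-detected-objects vector. Because the conditioning vector records only which objects were detected, not their coordinates, the decoder receives no signal about spatial relations between objects, which is the limitation the paper identifies.

```python
import torch
import torch.nn as nn

class ConditionedCaptioner(nn.Module):
    """Toy caption decoder conditioned on a global object-feature vector.

    The conditioning vector is a bag of detected object labels with no
    coordinates, so the decoder can learn *what* is in the image but
    receives no signal about *where* objects are relative to each other.
    """

    def __init__(self, num_objects, vocab_size, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # Project the (spatially pooled) object vector into the LSTM's
        # initial hidden state; any location information is lost here.
        self.init_h = nn.Linear(num_objects, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, object_vector, caption_tokens):
        h0 = torch.tanh(self.init_h(object_vector)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        emb = self.embed(caption_tokens)
        hidden, _ = self.lstm(emb, (h0, c0))
        return self.out(hidden)  # next-word scores at each position

# Hypothetical usage: 10 object classes, a 20-word vocabulary.
model = ConditionedCaptioner(num_objects=10, vocab_size=20)
objects = torch.zeros(1, 10)
objects[0, [2, 7]] = 1.0               # e.g. "dog" and "sofa" detected, no locations
tokens = torch.randint(0, 20, (1, 5))  # previous caption words
logits = model(objects, tokens)
print(logits.shape)                    # torch.Size([1, 5, 20])
```

In this sketch, swapping the positions of the two detected objects leaves the conditioning vector unchanged, so "the dog on the sofa" and "the sofa on the dog" are indistinguishable to the decoder except through language-model priors.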
Recommended Citation
Kelleher, J.D. & Dobnik, S. (2017). What is not where: the challenge of integrating spatial representations into deep learning architectures. In CLASP Papers in Computational Linguistics Vol. 1: Proceedings of the Conference on Logic and Machine Learning in Natural Language (LaML 2017), pp. 41–52. Gothenburg, 12–13 June 2017, edited by Simon Dobnik and Shalom Lappin. ISSN: 2002-9764. URI: http://hdl.handle.net/2077/54911
Funder
ADAPT Research Centre
Publication Details
In CLASP Papers in Computational Linguistics Volume 1: Proceedings of the Conference on Logic and Machine Learning in Natural Language (LaML 2017), pages 41–52, Gothenburg, 12–13 June 2017, edited by Simon Dobnik and Shalom Lappin. ISSN: 2002-9764. URI: http://hdl.handle.net/2077/54911