Document Type
Conference Paper
Rights
Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence
Disciplines
1.2 COMPUTER AND INFORMATION SCIENCE, Linguistics
Abstract
The challenge for computational models of spatial descriptions for situated dialogue systems is the integration of information from different modalities. The semantics of spatial descriptions are grounded in at least two sources of information: (i) a geometric representation of space and (ii) the functional interaction of related objects. We train several neural language models on descriptions of scenes from a dataset of image captions and examine whether the functional or geometric bias of spatial descriptions reported in the literature is reflected in the estimated perplexity of these models. The results of these experiments have implications for the creation of models of spatial lexical semantics for human-robot dialogue systems. Furthermore, they also provide insight into the kinds of semantic knowledge captured by neural language models trained on spatial descriptions, which has implications for image captioning systems.
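For reference, the perplexity measure mentioned in the abstract is, in its standard form, the exponentiated average negative log-likelihood that a language model assigns to a description; the exact formulation used in the paper is not reproduced here, but a minimal sketch for a description of N tokens is:

\[
\mathrm{PPL}(w_1, \ldots, w_N) = \exp\!\left( -\frac{1}{N} \sum_{i=1}^{N} \log p(w_i \mid w_1, \ldots, w_{i-1}) \right)
\]

Lower perplexity on a class of spatial descriptions indicates that the trained model predicts such descriptions more readily.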
DOI
https://www.aclweb.org/anthology/W18-1401
Recommended Citation
Dobnik, S., Ghanimifard, M., & Kelleher, J. (2018). Exploring the functional and geometric bias of spatial relations using neural language models. In Proceedings of the First International Workshop on Spatial Language Understanding (SpLU-2018) at the 2018 Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT), New Orleans, Louisiana, USA. June 5/6, 2018. https://www.aclweb.org/anthology/W18-1401
Funder
ADAPT Centre for Digital Content Technology
Publication Details
In Proceedings of the First International Workshop on Spatial Language Understanding (SpLU-2018) at the 2018 Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT), New Orleans, Louisiana, USA. June 5/6, 2018.