Document Type
Article
Rights
Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence
Abstract
This article describes the application of computational models of spatial prepositions to visually situated dialog systems. Spatial prepositions are important in such dialogs because people often use them to refer to entities in the visual context. We first describe a generic architecture for a visually situated dialog system and highlight the interactions between the spatial cognition module, which provides the interface to the models of prepositional semantics, and the other components in the architecture. We then present two new computational models, one of topological and one of projective spatial prepositions. The main novelty of these models is that they account for the contextual effect that other (distractor) objects in a visual scene can have on the region described by a given preposition. Finally, we present psycholinguistic tests evaluating our approach to distractor interference on prepositional semantics, and we illustrate how these models are used for both interpretation and generation of prepositional expressions.
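The abstract summarizes the models only at a high level. As an illustration of the general idea, the sketch below shows one hypothetical way a distractor-sensitive applicability score for a projective preposition such as "to the right of" might be computed: a target's fit to an angular field around the landmark is attenuated by better-aligned, nearer competitor objects. The field shape, the attenuation rule, and all names (e.g., projective_applicability, angular_score) are assumptions made for this sketch, not the models defined in the article.

```python
"""Illustrative sketch only (not the article's actual model): a
distractor-sensitive applicability score for a projective preposition
such as "to the right of". All names and the attenuation rule are
assumptions for illustration."""

import math
from typing import List, Tuple

Point = Tuple[float, float]


def angular_score(landmark: Point, obj: Point, direction_deg: float) -> float:
    """Base applicability: 1.0 when obj lies exactly along the preposition's
    canonical direction from the landmark, falling linearly to 0.0 at 90 deg."""
    dx, dy = obj[0] - landmark[0], obj[1] - landmark[1]
    angle = math.degrees(math.atan2(dy, dx))
    deviation = abs((angle - direction_deg + 180.0) % 360.0 - 180.0)
    return max(0.0, 1.0 - deviation / 90.0)


def projective_applicability(
    landmark: Point,
    target: Point,
    distractors: List[Point],
    direction_deg: float = 0.0,  # 0 deg = "to the right of" in screen coordinates
) -> float:
    """Hypothetical distractor interference: each distractor that fits the
    preposition better than the target and sits closer to the landmark
    attenuates the target's score."""
    target_score = angular_score(landmark, target, direction_deg)
    target_dist = math.dist(landmark, target)
    penalty = 1.0
    for d in distractors:
        d_score = angular_score(landmark, d, direction_deg)
        d_dist = math.dist(landmark, d)
        if d_score > target_score and d_dist < target_dist:
            # A stronger, nearer competitor reduces the target's applicability.
            penalty *= 1.0 - 0.5 * (d_score - target_score)
    return target_score * penalty


if __name__ == "__main__":
    landmark = (0.0, 0.0)
    target = (2.0, 0.5)          # roughly to the right of the landmark
    distractors = [(1.0, 0.0)]   # a better-aligned, nearer competitor
    print(projective_applicability(landmark, target, distractors))
```

In this toy formulation, interpretation amounts to scoring candidate targets for a given landmark and preposition and choosing the highest-scoring one, while generation amounts to checking whether the intended target's score is high enough, relative to the distractors, for the expression to be unambiguous; the article develops these ideas with its own topological and projective models.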
DOI
https://doi.org/10.1162/coli.06-78-prep14
Recommended Citation
Kelleher, J., Costello, F.: Applying Computational Models of Spatial Prepositions to Visually Situated Dialog. Computational Linguistics, Vol. 35, No. 2, Pages 271-306. 2009. doi:10.1162/coli.06-78-prep14
Included in
Artificial Intelligence and Robotics Commons, Cognition and Perception Commons, Computational Linguistics Commons, Psycholinguistics and Neurolinguistics Commons, Semantics and Pragmatics Commons
Publication Details
Computational Linguistics, Vol. 35, No. 2, Pages 271-306