Document Type
Conference Paper
Rights
Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence
Disciplines
Computer Sciences
Abstract
Human-Robot Interaction (HRI) invariably involves dialogue about objects in the environment in which the agents are situated. This paper focuses on the issue of resolving discourse references to such visual objects. It addresses the problem using strategies for intra-modal fusion (identifying that different occurrences concern the same object) and inter-modal fusion (relating object references across different modalities). Core to these strategies are sensori-motoric coordination and ontology-based mediation between content in different modalities. The approach has been fully implemented, and is illustrated with several working examples.
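As a rough illustration of the two fusion strategies named in the abstract, the sketch below shows how intra-modal fusion might merge repeated visual detections of the same object, and how inter-modal fusion might relate a linguistic referring expression to tracked visual objects through ontology-based mediation. Everything here (the VisualObject record, the ONTOLOGY table, the distance threshold) is a hypothetical construction for exposition, not the system described in the paper.

    # Minimal illustrative sketch (hypothetical; not the authors' implementation).
    from dataclasses import dataclass, field

    # Toy ontology mediating between linguistic terms and visual categories.
    ONTOLOGY = {
        "ball": {"sphere"},
        "box": {"cube", "cuboid"},
        "thing": {"sphere", "cube", "cuboid"},
    }

    @dataclass
    class VisualObject:
        category: str                  # category assigned by the vision system
        position: tuple                # (x, y) estimate in the scene frame
        mentions: list = field(default_factory=list)

    def intra_modal_fuse(tracked, observation, max_dist=0.2):
        """Intra-modal fusion: decide whether a new visual observation is a
        re-occurrence of an already-tracked object; if so, merge them."""
        for obj in tracked:
            near = all(abs(a - b) <= max_dist
                       for a, b in zip(obj.position, observation.position))
            if obj.category == observation.category and near:
                obj.position = observation.position  # keep the latest estimate
                return obj
        tracked.append(observation)                  # genuinely new object
        return observation

    def inter_modal_resolve(tracked, noun):
        """Inter-modal fusion: relate a linguistic reference to visual
        objects via ontology-based mediation between the two modalities."""
        visual_cats = ONTOLOGY.get(noun, set())
        matches = [o for o in tracked if o.category in visual_cats]
        for obj in matches:
            obj.mentions.append(noun)                # bind word to object
        return matches

    scene = []
    intra_modal_fuse(scene, VisualObject("sphere", (0.50, 1.00)))
    intra_modal_fuse(scene, VisualObject("sphere", (0.55, 1.05)))  # fused with the first
    print(len(scene), [o.category for o in inter_modal_resolve(scene, "ball")])
    # -> 1 ['sphere']

The paper's system naturally operates over much richer representations (dialogue structure, visual tracking, ontologies), but the control flow above conveys the division of labour between the two strategies.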
DOI
https://doi.org/10.1007/11768029_12
Recommended Citation
Kruijff, G., Kelleher, J. & Hawes, N. (2006). Information Fusion for Visual Reference Resolution in Dynamic Situated Dialogue. PIT 06: Proceedings of Perception and Interactive Technologies, Kloster Irsee, Germany. doi:10.1007/11768029_12
Publication Details
In PIT 06: Proceedings of Perception and Interactive Technologies, 2006, Kloster Irsee, Germany. Published in the LNCS/LNAI/LNBI series by Springer Verlag (Baratoff, A., et al., eds.).