Document Type
Conference Paper
Rights
Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence
Disciplines
Computer Sciences
Abstract
The paper presents an approach to using structural descriptions, obtained through a human-robot tutoring dialogue, as labels for the visual object models a robot learns. The paper shows how structural descriptions can relate models for different aspects of one and the same object, and how relating descriptions for visual models and discourse referents enables incremental updating of model descriptions through dialogue (either robot- or human-initiated). The approach has been implemented in an integrated architecture for human-assisted robot visual learning.
DOI
https://doi.org/10.1145/1121241.1121307
Recommended Citation
Kruijff, G., Kelleher, J.D. & Berginc, G. (2006). Structural descriptions in human-assisted robot visual learning. HRI '06: Proceedings of the 1st Annual Conference on Human-Robot Interaction, Salt Lake City UT, USA. doi:10.1145/1121241.1121307
Publication Details
In Proceedings of the 1st Annual Conference on Human-Robot Interaction (HRI '06). Salt Lake City UT, USA.