Document Type

Conference Paper

Rights

Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence

Disciplines

Computer Sciences

Publication Details

In Proceedings of the 1st Annual Conference on Human-Robot Interaction (HRI '06), Salt Lake City, UT, USA.

Abstract

The paper presents an approach to using structural descriptions, obtained through a human-robot tutoring dialogue, as labels for the visual object models a robot learns. It shows how structural descriptions can relate models for different aspects of one and the same object, and how relating descriptions for visual models and discourse referents enables incremental updating of model descriptions through dialogue (either robot- or human-initiated). The approach has been implemented in an integrated architecture for human-assisted robot visual learning.

DOI

https://doi.org/10.1145/1121241.1121307
