This item is available under a Creative Commons License for non-commercial use only
In human-robot interaction (HRI), it is essential that the robot interprets and reacts to a human's utterances in a manner that reflects their intended meaning. In this paper we present a collection of novel techniques that allow a robot to interpret and execute spoken commands describing manipulation goals involving qualitative spatial constraints (e.g., "put the red ball near the blue cube"). The resulting implemented system integrates computer vision, potential field models of spatial relationships, and action planning to mediate between the continuous real world and the discrete, qualitative representations used for symbolic reasoning.
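The mediation the abstract describes can be illustrated with a minimal sketch: a continuous potential field scores how well a spatial relation such as "near" holds between two objects, and thresholding that score yields a discrete predicate usable by a symbolic planner. The function names, the Gaussian field shape, and the parameter values below are illustrative assumptions, not the paper's actual model.

```python
import math

def near_potential(target_xy, landmark_xy, sigma=0.5):
    """Continuous potential for the 'near' relation (assumed Gaussian
    shape): 1.0 when the target sits on the landmark, decaying smoothly
    with Euclidean distance."""
    dx = target_xy[0] - landmark_xy[0]
    dy = target_xy[1] - landmark_xy[1]
    d2 = dx * dx + dy * dy
    return math.exp(-d2 / (2.0 * sigma * sigma))

def holds_near(target_xy, landmark_xy, threshold=0.5, sigma=0.5):
    """Discretise the continuous potential into a qualitative predicate
    near(target, landmark) that a symbolic planner can reason over.
    The threshold value is an illustrative assumption."""
    return near_potential(target_xy, landmark_xy, sigma) >= threshold
```

In this sketch, perception supplies the object coordinates, the potential field grounds the relation in continuous space, and the thresholded predicate is what the planner sees, so plan goals like "near(red_ball, blue_cube)" stay qualitative while execution remains metric.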
Brenner, Michael: Mediating between qualitative and quantitative representations for task-orientated human-robot interaction. Proceedings of IJCAI-07, Hyderabad, India, 2007.