Task-based evaluation of context-sensitive referring expressions in human-robot dialogue

Bibliographic Details
Published in: Language, Cognition and Neuroscience, Vol. 29, no. 8, pp. 1018–1034
Main Authors: Foster, Mary Ellen; Giuliani, Manuel; Isard, Amy
Format: Journal Article
Language: English
Published: Routledge, 14-09-2014
Description
Summary: The standard referring-expression generation task involves creating stand-alone descriptions intended solely to distinguish a target object from its context. However, when an artificial system refers to objects in the course of interactive, embodied dialogue with a human partner, the setting is very different: references in situated dialogue can take into account aspects of the physical, interactive and task-level context, and are therefore unlike those found in corpora of stand-alone references. Moreover, the dominant method of evaluating generated references measures corpus similarity, whereas in an interactive context extrinsic measures such as task success and user preference are more relevant; numerous studies have found little or no correlation between such extrinsic metrics and the predictions of commonly used corpus-similarity metrics. To explore these issues, we introduce a humanoid robot designed to cooperate with a human partner on a joint construction task. We then describe the context-sensitive reference-generation algorithm implemented for this robot, which was inspired by the referring phenomena found in the Joint Construction Task corpus of human-human joint construction dialogues. The context-sensitive algorithm was evaluated in two user studies comparing it to a baseline algorithm, using a combination of objective performance measures and subjective user-satisfaction scores. In both studies, objective task performance and dialogue quality were the same for the two versions of the system; however, the context-sensitive system scored more highly on subjective measures of interaction quality.
ISSN: 2327-3798, 2327-3801
DOI: 10.1080/01690965.2013.855802