AssistQ: Affordance-centric Question-driven Task Completion for Egocentric Assistant
Format: Journal Article
Language: English
Published: 08-03-2022
Summary: A long-standing goal of intelligent assistants such as AR glasses and robots has been to assist users in affordance-centric real-world scenarios, such as "How can I run the microwave for one minute?". However, there is still no clear task definition or suitable benchmark. In this paper, we define a new task called Affordance-centric Question-driven Task Completion (AQTC), in which the AI assistant should learn from instructional videos to provide step-by-step help from the user's point of view. To support the task, we constructed AssistQ, a new dataset comprising 531 question-answer samples from 100 newly filmed instructional videos. We also developed a novel Question-to-Actions (Q2A) model to address the AQTC task and validated it on the AssistQ dataset. The results show that our model significantly outperforms several VQA-related baselines while still leaving large room for improvement. We expect our task and dataset to advance the development of egocentric AI assistants. Our project page is available at: https://showlab.github.io/assistq/
DOI: 10.48550/arxiv.2203.04203