MARS: Multimodal Active Robotic Sensing for Articulated Characterization
Format: Journal Article
Language: English
Published: 01-07-2024
Summary: Precise perception of articulated objects is vital for empowering service robots. Recent studies mainly focus on point clouds, a single-modal approach that often neglects vital texture and lighting details and assumes ideal conditions such as optimal viewpoints, which are unrepresentative of real-world scenarios. To address these limitations, we introduce MARS, a novel framework for articulated object characterization. It features a multi-modal fusion module that uses multi-scale RGB features to enhance point cloud features, coupled with reinforcement learning-based active sensing for autonomous optimization of observation viewpoints. In experiments conducted with various articulated object instances from the PartNet-Mobility dataset, our method outperformed current state-of-the-art methods in joint parameter estimation accuracy. Additionally, through active sensing, MARS further reduces errors, demonstrating enhanced efficiency in handling suboptimal viewpoints. Furthermore, our method effectively generalizes to real-world articulated objects, enhancing robot interactions. Code is available at https://github.com/robhlzeng/MARS.
DOI: 10.48550/arxiv.2407.01191
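
The summary describes a multi-modal fusion module in which multi-scale RGB features enhance point cloud features. Below is a minimal, hypothetical PyTorch sketch of that general idea; the class name, feature dimensions, and the concatenation-based fusion design are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of RGB-to-point-cloud feature fusion (illustrative only;
# layer names, dimensions, and concatenation-based fusion are assumptions).
import torch
import torch.nn as nn


class RGBPointFusion(nn.Module):
    """Fuse multi-scale RGB features into per-point features."""

    def __init__(self, point_dim=128, rgb_dims=(64, 128, 256), out_dim=128):
        super().__init__()
        # One linear projection per RGB scale, mapping image features to the point width.
        self.rgb_proj = nn.ModuleList([nn.Linear(d, point_dim) for d in rgb_dims])
        # MLP that mixes the concatenated point and projected RGB features.
        self.fuse = nn.Sequential(
            nn.Linear(point_dim * (1 + len(rgb_dims)), out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, point_feats, rgb_feats_per_scale):
        # point_feats: (N, point_dim)
        # rgb_feats_per_scale: list of (N, rgb_dims[i]) tensors, e.g. image features
        # sampled at each point's projected pixel location at several scales.
        projected = [proj(f) for proj, f in zip(self.rgb_proj, rgb_feats_per_scale)]
        fused = torch.cat([point_feats] + projected, dim=-1)
        return self.fuse(fused)


# Toy usage with random tensors standing in for real features.
points = torch.randn(1024, 128)
rgb = [torch.randn(1024, d) for d in (64, 128, 256)]
enhanced = RGBPointFusion()(points, rgb)  # -> (1024, 128)
```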