Learning 3D joint constraints from vision-based motion capture datasets
Published in: IPSJ Transactions on Computer Vision and Applications, Vol. 11, No. 1, pp. 1–9
Main Authors:
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 25-06-2019 (Springer Nature B.V.)
Summary: Realistic estimation and synthesis of articulated human motion must satisfy anatomical constraints on joint angles. A data-driven approach is used to learn human joint limits from 3D motion capture datasets. We represent joint constraints with a new formulation (s₁, s₂, τ) using the swing-twist representation in exponential-map form. Our parameterization is applied to the Human3.6M dataset to create a lookup map for each joint. These maps enable us to generate 'synthetic' datasets covering the entire rotation space of a given joint. A set of neural network discriminators is then trained on the synthetic datasets to learn valid/invalid joint rotations. The discriminators achieve accuracies of 94.4–99.4% across the different joints. We validate the precision-accuracy trade-off of the discriminators and qualitatively evaluate classified poses with an interactive tool. The learned discriminators can be used as 'priors' for human pose estimation and motion synthesis.
ISSN: 1882-6695
DOI: 10.1186/s41074-019-0057-z
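
The (s₁, s₂, τ) formulation in the summary decomposes each joint rotation into a swing (two exponential-map components in the plane orthogonal to the joint's twist axis) and a twist angle τ about that axis. Below is a minimal sketch of such a swing-twist decomposition, assuming the rotation arrives as a unit quaternion in (w, x, y, z) order; the function names and the choice of in-plane basis are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def quat_normalize(q):
    return q / np.linalg.norm(q)

def quat_conjugate(q):
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(p, q):
    # Hamilton product of two quaternions in (w, x, y, z) order.
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def swing_twist_params(q, twist_axis):
    """Decompose a unit quaternion q into swing * twist about twist_axis
    and return (s1, s2, tau): the swing as a 2D exponential-map vector in
    the plane orthogonal to the axis, plus the twist angle tau (radians)."""
    q = quat_normalize(np.asarray(q, dtype=float))
    if q[0] < 0:                       # keep rotation angles in [0, pi]
        q = -q
    a = np.asarray(twist_axis, dtype=float)
    a = a / np.linalg.norm(a)

    # Twist: project the quaternion's vector part onto the axis.
    proj = np.dot(q[1:], a) * a
    twist = np.array([q[0], *proj])
    norm = np.linalg.norm(twist)
    if norm < 1e-12:                   # 180-degree swing: twist undefined
        twist = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        twist /= norm
    tau = 2.0 * np.arctan2(np.dot(twist[1:], a), twist[0])

    # Swing: the residual rotation, so that q = swing * twist.
    swing = quat_normalize(quat_multiply(q, quat_conjugate(twist)))
    angle = 2.0 * np.arctan2(np.linalg.norm(swing[1:]), swing[0])
    if angle < 1e-12:
        exp_map = np.zeros(3)
    else:
        exp_map = angle * swing[1:] / np.linalg.norm(swing[1:])

    # The swing axis is orthogonal to the twist axis, so its exponential
    # map has only two free components in a basis (e1, e2) of that plane.
    e1 = np.cross(a, [1.0, 0.0, 0.0])
    if np.linalg.norm(e1) < 1e-6:      # axis nearly parallel to x: use y
        e1 = np.cross(a, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(a, e1)
    return np.dot(exp_map, e1), np.dot(exp_map, e2), tau
```

Mapping valid Human3.6M poses through such a function would populate the per-joint lookup maps the summary describes; samples drawn outside the occupied region of (s₁, s₂, τ) space would then serve as the 'invalid' class of the synthetic datasets.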
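The summary also describes training per-joint neural discriminators on those synthetic valid/invalid samples. The paper's actual network design is not given in this record, so the sketch below is a generic hypothetical: a small PyTorch MLP over the three (s₁, s₂, τ) parameters with a binary cross-entropy objective.

```python
import torch
import torch.nn as nn

class JointLimitDiscriminator(nn.Module):
    """Hypothetical per-joint discriminator: a small MLP mapping the
    (s1, s2, tau) parameters to a probability that the rotation is valid."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def train(model, loader, epochs=10):
    # loader yields (x, y): x is an (N, 3) batch of (s1, s2, tau) samples,
    # y an (N, 1) batch of 0/1 validity labels from the synthetic dataset.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = bce(model(x), y)
            loss.backward()
            opt.step()
```

Once trained, such a discriminator could be queried as a differentiable 'prior' inside a pose-estimation or motion-synthesis objective, penalizing rotations it classifies as invalid.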