Authors: Zongmian Li, Jiri Sedlar, Justin Carpentier, Ivan Laptev, Nicolas Mansard, Josef Sivic
Where published: CVPR 2019
arXiv: 1904.02683
Document: PDF, DOI
Artifact development version: GitHub
Abstract URL: https://arxiv.org/abs/1904.02683v2
In this paper, we introduce a method to automatically reconstruct the 3D motion of a person interacting with an object from a single RGB video. Our method estimates the 3D poses of the person and the object, contact positions, and forces and torques actuated by the human limbs. The main contributions of this work are three-fold. First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modeling contacts and the dynamics of their interactions. This is cast as a large-scale trajectory optimization problem. Second, we develop a method to automatically recognize from the input video the position and timing of contacts between the person and the object or the ground, thereby significantly simplifying the complexity of the optimization. Third, we validate our approach on a recent MoCap dataset with ground truth contact forces and demonstrate its performance on a new dataset of Internet videos showing people manipulating a variety of tools in unconstrained environments.
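To make the idea of casting joint motion and force estimation as a trajectory optimization concrete, below is a minimal, hypothetical sketch. It is not the authors' formulation (which optimizes full 3D human and object poses, contact positions, and 6D contact forces): it is a toy 1D point-mass problem in which the decision variables are a position trajectory and the actuation forces, the objective trades off agreement with noisy "visual" position estimates against force magnitude, and discrete Newton dynamics are enforced as equality constraints. All names and numerical values here are illustrative assumptions.

```python
# Toy trajectory optimization with dynamics constraints (NOT the paper's method).
import numpy as np
from scipy.optimize import minimize

T, dt, m, g = 20, 0.05, 5.0, 9.81                 # horizon, time step, mass, gravity
t = np.arange(T) * dt
x_obs = 0.5 * np.sin(2.0 * t) + 0.02 * np.random.randn(T)  # noisy "observed" positions

def unpack(z):
    # Decision vector z stacks positions x_0..x_{T-1} and forces f_0..f_{T-1}.
    return z[:T], z[T:]

def objective(z):
    x, f = unpack(z)
    data_term = np.sum((x - x_obs) ** 2)          # stay close to the observed motion
    force_term = 1e-4 * np.sum(f ** 2)            # penalize large actuation forces
    return data_term + force_term

def dynamics_residual(z):
    # Discrete Newton's law m * x_ddot = f - m * g at interior time steps.
    x, f = unpack(z)
    x_ddot = (x[2:] - 2.0 * x[1:-1] + x[:-2]) / dt ** 2
    return m * x_ddot - (f[1:-1] - m * g)

# Warm start: observed positions plus gravity-compensating forces.
z0 = np.concatenate([x_obs, np.full(T, m * g)])
res = minimize(objective, z0, method="SLSQP",
               constraints={"type": "eq", "fun": dynamics_residual},
               options={"maxiter": 500})

x_opt, f_opt = unpack(res.x)
print("converged:", res.success,
      "| max dynamics violation:", np.abs(dynamics_residual(res.x)).max())
```

In the paper itself, the analogous problem is much larger: the trajectory variables cover the person's full-body and object 6D poses, the forces are contact forces and torques at the limbs, and the recognized contact positions and timings from the video determine where and when those force variables are active, which is what keeps the optimization tractable.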