
Estimating 3D Motion and Forces of Person-Object Interactions from Monocular Video

Artifact ID: lib:4fd4530b265a527e (v1.0.0)

Authors: Zongmian Li, Jiri Sedlar, Justin Carpentier, Ivan Laptev, Nicolas Mansard, Josef Sivic
Where published: CVPR 2019
arXiv: 1904.02683
Document: PDF / DOI
Artifact development version: GitHub
Abstract URL: https://arxiv.org/abs/1904.02683v2


In this paper, we introduce a method to automatically reconstruct the 3D motion of a person interacting with an object from a single RGB video. Our method estimates the 3D poses of the person and the object, contact positions, and forces and torques actuated by the human limbs. The main contributions of this work are three-fold. First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modeling contacts and the dynamics of their interactions. This is cast as a large-scale trajectory optimization problem. Second, we develop a method to automatically recognize from the input video the position and timing of contacts between the person and the object or the ground, thereby significantly simplifying the complexity of the optimization. Third, we validate our approach on a recent MoCap dataset with ground truth contact forces and demonstrate its performance on a new dataset of Internet videos showing people manipulating a variety of tools in unconstrained environments.
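To make the first contribution concrete: the abstract casts joint motion-and-force estimation as a large-scale trajectory optimization that fits the person's and object's trajectories to image evidence while enforcing contact dynamics. Below is a minimal, heavily simplified sketch of that general pattern only, not the paper's actual formulation; the problem sizes, the toy unit-mass dynamics, the stand-in 2D observations, and the use of scipy's least_squares in place of a dedicated large-scale solver are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical problem sizes: T frames, NQ person+object DoFs, NC contacts.
T, NQ, NC = 30, 10, 2
DT = 1.0 / 30.0  # video frame interval in seconds

rng = np.random.default_rng(0)
q_2d_obs = rng.standard_normal((T, NQ))  # stand-in for per-frame 2D pose estimates

def dynamics_residual(q_prev, q_cur, q_next, forces):
    """Discretized equations of motion, schematically M(q) qdd + h(q, qd) = tau + J^T f.
    Replaced here by a toy placeholder (unit mass, no gravity, trivial contact
    Jacobian) so the sketch stays self-contained."""
    qdd = (q_next - 2.0 * q_cur + q_prev) / DT**2  # finite-difference acceleration
    applied = np.zeros(NQ)
    applied[:NC] = forces  # toy assumption: each contact force acts on one DoF
    return qdd - applied

def residuals(x):
    # Decision variables: the full joint trajectory and per-frame contact forces.
    q = x[: T * NQ].reshape(T, NQ)
    f = x[T * NQ:].reshape(T, NC)
    res = [(q - q_2d_obs).ravel()]           # data term: match 2D observations
    for t in range(1, T - 1):                # physics term at interior frames
        res.append(dynamics_residual(q[t - 1], q[t], q[t + 1], f[t]))
    res.append(1e-3 * f.ravel())             # mild regularization of forces
    return np.concatenate(res)

x0 = np.zeros(T * NQ + T * NC)
sol = least_squares(residuals, x0)
print("final cost:", sol.cost)
```

In this structure, the contact positions and timings recognized from the video (the second contribution) would enter as fixed constraints on which force variables are active, which is consistent with the abstract's point that recognizing contacts significantly simplifies the optimization.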

Relevant initiatives

Related knowledge about this paper
Reproduced results (crowd-benchmarking and competitions)
Artifact and reproducibility checklists
Common formats for research projects and shared artifacts
Reproducibility initiatives
