PyMAF: 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop

ICCV, 2021 (Oral)
¹Institute of Automation, Chinese Academy of Sciences  ²Nanjing University  ³The University of Sydney  ⁴Tsinghua University
* Equal contribution
[Update] 🚩 Please see PyMAF-X [TPAMI 2023] for full-body model recovery.

Abstract

Regression-based methods have recently shown promising results in reconstructing human meshes from monocular images. By directly mapping raw pixels to model parameters, these methods produce parametric models in a feed-forward manner via neural networks. However, even minor deviations in the parameters can lead to noticeable misalignment between the estimated meshes and the image evidence. To address this issue, we propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop in our deep regressor, which leverages a feature pyramid and explicitly rectifies the predicted parameters based on the mesh-image alignment status. In PyMAF, given the currently predicted parameters, mesh-aligned evidence is extracted from finer-resolution features and fed back for parameter rectification. To reduce noise and enhance the reliability of this evidence, an auxiliary pixel-wise supervision is imposed on the feature encoder, providing mesh-image correspondence guidance that helps the network preserve the most relevant information in the spatial features. The efficacy of our approach is validated on several benchmarks, including Human3.6M, 3DPW, LSP, and COCO, where experimental results show that it consistently improves the mesh-image alignment of the reconstructions.
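
To make the feedback loop concrete, the sketch below illustrates, in PyTorch-style Python, how mesh-aligned evidence can be sampled from a feature pyramid and fed back to rectify the parameters. It is a minimal illustration under assumed names and sizes: sample_mesh_aligned_features, FeedbackRegressor, pymaf_style_loop, the MLP widths, and the project_fn stand-in for the SMPL forward pass plus camera projection are all hypothetical, not the authors' implementation, and the auxiliary pixel-wise supervision on the encoder is omitted.

# Minimal sketch of the pyramidal mesh-alignment feedback idea (assumed names/sizes,
# not the released PyMAF code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def sample_mesh_aligned_features(feat_map, vert_xy):
    """Bilinearly sample per-vertex features at projected 2D vertex locations.

    feat_map: (B, C, H, W) spatial feature map from one pyramid level.
    vert_xy:  (B, V, 2) vertex projections normalized to [-1, 1].
    Returns:  (B, V * C) flattened mesh-aligned evidence.
    """
    grid = vert_xy.unsqueeze(2)                                    # (B, V, 1, 2)
    sampled = F.grid_sample(feat_map, grid, align_corners=False)   # (B, C, V, 1)
    return sampled.squeeze(-1).transpose(1, 2).reshape(vert_xy.size(0), -1)


class FeedbackRegressor(nn.Module):
    """One rectification step: current params + mesh-aligned evidence -> updated params."""

    def __init__(self, feat_dim, n_verts, param_dim):
        super().__init__()
        # Hypothetical MLP width; the actual regressor design may differ.
        self.mlp = nn.Sequential(
            nn.Linear(n_verts * feat_dim + param_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, param_dim),
        )

    def forward(self, mesh_aligned_feat, params):
        residual = self.mlp(torch.cat([mesh_aligned_feat, params], dim=1))
        return params + residual                                   # explicit rectification


def pymaf_style_loop(pyramid_feats, init_params, regressors, project_fn):
    """Iterate over pyramid levels (coarse to fine), refining the parameters.

    pyramid_feats: list of (B, C, H_t, W_t) maps, finer resolution at later indices.
    project_fn:    hypothetical callable mapping current params to (B, V, 2)
                   normalized vertex projections (SMPL forward + camera projection).
    """
    params = init_params
    for feat_map, regressor in zip(pyramid_feats, regressors):
        vert_xy = project_fn(params)                               # mesh from current estimate
        evidence = sample_mesh_aligned_features(feat_map, vert_xy)
        params = regressor(evidence, params)                       # feed evidence back, rectify
    return params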

Videos

Demo (frame-by-frame reconstruction, no post-processing)

Demo videos clipped from external sources. Credit: cedro.

Comparison with SPIN & VIBE

BibTeX


@inproceedings{pymaf2021,
  title={PyMAF: 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop},
  author={Zhang, Hongwen and Tian, Yating and Zhou, Xinchi and Ouyang, Wanli and Liu, Yebin and Wang, Limin and Sun, Zhenan},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2021}
}