ECON: Explicit Clothed humans Optimized via Normal integration

CVPR 2023 (Highlight)

1Max Planck Institute for Intelligent Systems, 2Osaka University, 3University of Amsterdam

Human digitization from a color image. ECON combines the best aspects of implicit and explicit surfaces to infer high-fidelity 3D humans, even with loose clothing or in challenging poses. It does so in three steps: (1) It infers detailed 2D normal maps for the front and back sides. (2) The normal maps are converted into detailed, yet incomplete, 2.5D front and back surfaces, guided by a SMPL-X estimate. (3) It then "inpaints" the missing geometry between the two surfaces. If the face or hands are noisy, they can optionally be replaced with the ones from SMPL-X, which have cleaner geometry.

ECON on challenging poses
ECON on loose clothing

Abstract

The combination of artist-curated scans and deep implicit functions (IF) is enabling the creation of detailed, clothed, 3D humans from images. However, existing methods are far from perfect. IF-based methods recover free-form geometry but produce disembodied limbs or degenerate shapes for unseen poses or clothes. To increase robustness for these cases, existing work uses an explicit parametric body model to constrain surface reconstruction, but this limits the recovery of free-form surfaces such as loose clothing that deviates from the body. What we want is a method that combines the best properties of implicit and explicit methods. To this end, we make two key observations: (1) current networks are better at inferring detailed 2D maps than full 3D surfaces, and (2) a parametric model can be seen as a “canvas” for stitching together detailed surface patches. Building on these observations, ECON infers high-fidelity 3D humans even in loose clothes and challenging poses, while having realistic faces and fingers. This goes beyond previous methods. Quantitative evaluation on the CAPE and Renderpeople datasets shows that ECON is more accurate than the state of the art. Perceptual studies also show that ECON’s perceived realism is better by a large margin.

Intro Video (English)



Talk on ECON+ICON (in Chinese)



Rasputin Demo (ECON + HybrIK-X)



Method Overview

ECON takes as input an RGB image, $\mathcal{I}$, and a SMPL-X body, $\mathcal{M}^\text{b}$. Conditioned on the front and back normal images, $\mathcal{N}^\text{b}$, rendered from the body, ECON first predicts front and back clothing normal maps, $\hat{\mathcal{N}}^\text{c}$. These two normal maps, along with the body depth maps, $\mathcal{Z}^\text{b}$, are fed into a d-BiNI optimizer to produce detailed front and back surfaces, $\{\mathcal{M}_\text{F}, \mathcal{M}_\text{B}\}$. From these partial surfaces and the body estimate $\mathcal{M}^\text{b}$, IF-Nets+ implicitly completes the missing geometry as $\mathcal{R}_\text{IF}$. Optionally taking the face and hands from $\mathcal{M}^\text{b}$, screened Poisson reconstruction combines all of the above into the final watertight mesh $\mathcal{R}$.
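A minimal Python sketch of this pipeline is below. The `normal_net`, `d_bini`, and `ifnets_plus` callables and their signatures are hypothetical placeholders for ECON's networks and optimizer, not the project's actual API; only the final screened Poisson step uses a real Open3D call.

```python
import numpy as np
import open3d as o3d

# Hypothetical stand-ins for ECON's components (names and signatures assumed):
#   normal_net  : predicts clothed-body normal maps from image + rendered body normals
#   d_bini      : depth-aware bilateral normal integration (front/back)
#   ifnets_plus : implicit shape completion conditioned on partial surfaces + body

def reconstruct(image, smplx_body, normal_net, d_bini, ifnets_plus):
    # (1) Render front/back body normal and depth maps from the SMPL-X estimate.
    N_body_F, N_body_B = smplx_body.render_normal_maps()
    Z_body_F, Z_body_B = smplx_body.render_depth_maps()

    # (2) Predict detailed front/back clothing normal maps, conditioned on the body.
    N_cloth_F = normal_net(image, N_body_F, side="front")
    N_cloth_B = normal_net(image, N_body_B, side="back")

    # (3) d-BiNI: integrate each normal map into a 2.5D surface,
    #     regularized by the coarse body depth map.
    M_front = d_bini(N_cloth_F, Z_body_F)
    M_back = d_bini(N_cloth_B, Z_body_B)

    # (4) Implicitly complete the geometry missing between the two surfaces.
    R_IF = ifnets_plus(M_front, M_back, smplx_body)

    # (5) Stitch everything into one watertight mesh via screened Poisson
    #     surface reconstruction (a real Open3D call).
    points = np.concatenate([M_front.points, M_back.points, R_IF.points])
    normals = np.concatenate([M_front.normals, M_back.normals, R_IF.normals])
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.normals = o3d.utility.Vector3dVector(normals)
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    return mesh
```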
CVPR'23 (Highlight) | ECON: One Digital Human, Interpreted Both Explicitly and Implicitly (video in Chinese)

ECON Applications

ECON as "3D Guidance" in SHHQ Dataset Multi-person w/ Occlusion
"All-in-One" Blender add-on SMPL-X based Animation (Instruction)


Related Links

For more work on similar tasks, please check out the following papers.

  • ICON and PaMIR reconstruct 3D clothed humans from a single image using an implicit function together with the explicit SMPL mesh.
  • PIFu, PIFuHD and MonoPort reconstruct them using implicit functions without introducing any 3D prior.
  • BiNI robustly reconstructs a 3D surface from a single normal map while preserving depth discontinuities (see the sketch after this list).
  • PIXIE and PyMAF-X estimate the expressive SMPL-X body from a single image.
  • IF-Nets completes partial 3D data using implicit feature networks.
  • Any-Shot GIN uses a similar sandwich-like structure to estimate 3D shapes of novel classes from a single RGB image.
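For intuition, normal integration can be written as a least-squares problem. This is a schematic sketch only: BiNI's actual objective uses bilateral weights and d-BiNI adds further terms, so the formulation below matches neither paper exactly, and the weight $\lambda$ is illustrative. Given a unit normal map $\mathbf{n} = (n_x, n_y, n_z)$ under orthographic projection, a surface $z(u,v)$ satisfies $n_z\,\partial z/\partial u + n_x = 0$ and $n_z\,\partial z/\partial v + n_y = 0$, and d-BiNI additionally anchors $z$ to the coarse SMPL-X body depth $\mathcal{Z}^\text{b}$:

$$ \min_{z} \sum_{u,v} \Big( n_z \frac{\partial z}{\partial u} + n_x \Big)^2 + \Big( n_z \frac{\partial z}{\partial v} + n_y \Big)^2 + \lambda \sum_{u,v} \big( z - \mathcal{Z}^\text{b} \big)^2 $$

BiNI replaces the plain squared residuals with bilaterally weighted one-sided differences so that depth discontinuities are preserved rather than smoothed over; ECON's d-BiNI additionally uses the SMPL-X body depth maps so the front and back half-surfaces stay consistent with each other.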

Acknowledgments & Disclosure

We thank Lea Hering and Radek Daněček for proofreading, Yao Feng, Haven Feng, and Weiyang Liu for their feedback and discussions, and Tsvetelina Alexiadis for her help with the AMT perceptual study. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860768 (CLIPE Project).

MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a part-time employee of Meshcapade, his research was performed solely at, and funded solely by, the Max Planck Society.

Contact

For technical questions, please contact yuliang.xiu@tue.mpg.de
For commercial licensing, please contact ps-licensing@tue.mpg.de

BibTeX

@inproceedings{xiu2023econ,
  title     = {{ECON: Explicit Clothed humans Optimized via Normal integration}},
  author    = {Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023},
}