
# ibug.face_reconstruction

This repository contains the inference code for 3D face reconstruction based on DECA, together with pretrained weights. The FLAME blendshape model is currently supported.

## Updates

- 2022.11.14: Support rendering of the reconstructed mesh
- 2022.11.12: Support inference with the detail model
- 2022.11.01: Support inference with 100 landmarks and better expression regularisation
- 2022.10.11: Support inference with the AR linear model

## Prerequisites

The test script additionally requires common packages such as PyTorch, NumPy, and OpenCV.

## How to Install

1. Install the code:

   ```shell
   git clone https://github.com/jacksoncsy/face_reconstruction.git
   cd face_reconstruction
   pip install -e .
   ```
2. Download the 3DMM assets:
   (a) Download FLAME, choose FLAME 2020, unzip it, and move `generic_model.pkl` into `./ibug/face_reconstruction/deca/assets/flame`.
   (b) Download the FLAME landmark asset and move it into `./ibug/face_reconstruction/deca/assets/flame`.
   (c) (Optional) Download the AR multilinear models and assets, and move them into `./ibug/face_reconstruction/deca/assets/ar_multilinear`.
   (d) (Optional) Download the AR linear models and assets, and move them into the corresponding folders (e.g., for ARLv1, `./ibug/face_reconstruction/deca/assets/ar_linear_v1`).

3. Download the pretrained models:
   (a) Download the FLAME-based pretrained models and put them into `./ibug/face_reconstruction/deca/weights`.
   (b) (Optional) Download the AR-based pretrained models and put them into `./ibug/face_reconstruction/deca/weights`.
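After the steps above, the assets and weights should sit roughly in the following layout (exact filenames depend on which optional downloads you chose):

```
ibug/face_reconstruction/deca/
├── assets/
│   ├── flame/            # generic_model.pkl and the FLAME landmark asset
│   ├── ar_multilinear/   # optional AR multilinear models and assets
│   └── ar_linear_v1/     # optional AR linear models (one folder per version)
└── weights/              # pretrained models
```

A quick smoke test such as `python -c "import ibug.face_reconstruction"` should then succeed.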

## How to Test

- To test on live video: `python face_reconstruction_test.py [-i webcam_index]`
- To test on a video file: `python face_reconstruction_test.py [-i input_file] [-o output_file]`

## How to Use

Call the 3D face reconstructor from Python, given 68 2D landmarks as input:

```python
import cv2

from ibug.face_reconstruction import DecaCoarsePredictor

# Instantiate the 3D reconstructor
reconstructor = DecaCoarsePredictor(device='cuda:0')

# Fit the 3DMM to the face specified by the 68 2D landmarks
# (landmarks: an array of 68 (x, y) image coordinates from a landmark detector).
image = cv2.imread("test.jpg")
reconstruction_results = reconstructor(image, landmarks, rgb=False)
```
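The image above is loaded with OpenCV, whose `imread` returns channels in BGR order, which is why the call passes `rgb=False`. A minimal, repo-independent sketch of the channel-order difference that flag accounts for:

```python
import numpy as np

# A 1x1 "red" pixel in BGR order: OpenCV stores blue first, red last.
bgr = np.zeros((1, 1, 3), dtype=np.uint8)
bgr[..., 2] = 255          # in BGR, index 2 is the red channel

# Reversing the channel axis converts BGR -> RGB.
rgb = bgr[..., ::-1]
print(rgb[0, 0].tolist())  # [255, 0, 0]: red is now the first channel
```

If your pipeline instead loads images in RGB order (e.g. via PIL), pass `rgb=True`.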

## References

```bibtex
@article{DECA:Siggraph2021,
  title = {Learning an Animatable Detailed {3D} Face Model from In-The-Wild Images},
  author = {Feng, Yao and Feng, Haiwen and Black, Michael J. and Bolkart, Timo},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
  volume = {40},
  number = {8},
  year = {2021},
  url = {https://doi.org/10.1145/3450626.3459936}
}
```