This repo contains the inference code for 3D face reconstruction based on DECA, together with several pretrained weights. We currently support the FLAME blendshape model.
[Updates]
- 2022.11.14: Support rendering reconstructed mesh
- 2022.11.12: Support inference with detail model
- 2022.11.01: Support inference with 100 landmarks, better expression regularisation
- 2022.10.11: Support inference with AR linear model
[Dependencies]
- NumPy: pip install numpy
- OpenCV: pip install opencv-python
- PyTorch: pip install torch torchvision
- PyTorch3D: recommended to install from a local clone
- scikit-image: pip install scikit-image
- SciPy: pip install scipy
- ibug.face_detection: See this repository for details: https://github.com/hhj1897/face_detection.
- ibug.face_alignment: See this repository for details: https://github.com/hhj1897/face_alignment.
[How to Install]
- Install the code:
git clone https://github.com/jacksoncsy/face_reconstruction.git
cd face_reconstruction
pip install -e .
- Download the 3DMM assets:
(a) Download FLAME, choose FLAME 2020, unzip it, and move generic_model.pkl into ./ibug/face_reconstruction/deca/assets/flame
(b) Download FLAME landmark asset, and move it into ./ibug/face_reconstruction/deca/assets/flame
(c) (Optional) Download AR multilinear models and assets, and move them into ./ibug/face_reconstruction/deca/assets/ar_multilinear
(d) (Optional) Download AR linear models and assets, and move them into the corresponding folders (e.g., for ARLv1, move into ./ibug/face_reconstruction/deca/assets/ar_linear_v1).
- Download the pretrained models:
(a) Download the FLAME-based pretrained models, and put them into ./ibug/face_reconstruction/deca/weights
(b) (Optional) Download the AR-based pretrained models, and put them into ./ibug/face_reconstruction/deca/weights
[How to Test]
- To test on live video: python face_reconstruction_test.py [-i webcam_index]
- To test on a video file: python face_reconstruction_test.py [-i input_file] [-o output_file]
[How to Use]
import cv2

from ibug.face_reconstruction import DecaCoarsePredictor

# Instantiate the 3D reconstructor
reconstructor = DecaCoarsePredictor(device='cuda:0')

# Load the test image; OpenCV reads images in BGR order, hence rgb=False below
image = cv2.imread("test.jpg")

# Fit the 3DMM to the face specified by its 68 2D landmarks
# (landmarks should be obtained beforehand from a 2D landmark detector)
reconstruction_results = reconstructor(image, landmarks, rgb=False)
[Citation]
@inproceedings{DECA:Siggraph2021,
  title = {Learning an Animatable Detailed {3D} Face Model from In-The-Wild Images},
  author = {Feng, Yao and Feng, Haiwen and Black, Michael J. and Bolkart, Timo},
  journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH)},
  volume = {40},
  number = {8},
  year = {2021},
  url = {https://doi.org/10.1145/3450626.3459936}
}