Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set

***07/20/2021: A PyTorch implementation, which has much better performance and is much easier to use, is available now. This repo will not be maintained in the future.***

This is a tensorflow implementation of the following paper:

Y. Deng, J. Yang, S. Xu, D. Chen, Y. Jia, and X. Tong, Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set, IEEE Computer Vision and Pattern Recognition Workshop (CVPRW) on Analysis and Modeling of Faces and Gestures (AMFG), 2019. (Best Paper Award!)

The method enforces hybrid-level weakly-supervised training for CNN-based 3D face reconstruction. It is fast, accurate, and robust to pose and occlusions. It achieves state-of-the-art performance on multiple datasets such as FaceWarehouse, MICC Florence, and BU-3DFE.

Features

● Accurate shapes

The method reconstructs faces with high accuracy. Quantitative evaluations (shape errors in mm) on several benchmarks show its state-of-the-art performance:

|Method|FaceWareHouse|Florence|BU3DFE|
|:---|:---:|:---:|:---:|
|Tewari et al. 17|2.19±0.54|-|-|
|Tewari et al. 18|1.84±0.38|-|-|
|Genova et al. 18|-|1.77±0.53|-|
|Sela et al. 17|-|-|2.91±0.60|
|PRN 18|-|-|1.86±0.47|
|Ours|1.81±0.50|1.67±0.50|1.40±0.31|

(Please refer to our paper for more details about these results)

● High fidelity textures

The method produces high-fidelity face textures while preserving the identity information of the input images. Scene illumination is also disentangled to generate a pure albedo.

● Robust

The method provides reasonable results under extreme conditions such as large poses and occlusions.

● Aligned with images

Our method aligns the reconstructed faces with the input images. It provides face pose estimation and 68 facial landmarks, which are useful for other tasks. We evaluate landmark accuracy (NME) on the AFLW_2000 dataset, as shown in the table below:

|Method|[0°,30°]|[30°,60°]|[60°,90°]|Overall|
|:---|:---:|:---:|:---:|:---:|
|3DDFA 16|3.78|4.54|7.93|5.42|
|3DDFA+SDM 16|3.43|4.24|7.17|4.94|
|Bulat et al. 17|2.47|3.01|4.31|3.26|
|PRN 18|2.75|3.51|4.61|3.62|
|Ours|2.56|3.11|4.45|3.37|

● Easy and Fast

Faces are represented with the Basel Face Model 2009, which makes further manipulations (e.g., expression transfer) easy. ResNet-50 is used as the backbone network, achieving over 50 fps (on a GTX 1080) for reconstruction.
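
As a rough illustration of why this representation is convenient for manipulation, the sketch below combines identity and expression coefficients in the standard linear 3DMM way. It is only a sketch: the variable names (mean_shape, id_base, exp_base) and the 80/64 coefficient dimensions are assumptions for illustration and may not match the exact field names in the BFM files used by this repo.

import numpy as np

def build_face_shape(mean_shape, id_base, exp_base, id_coeff, exp_coeff):
    """Combine identity and expression coefficients into per-vertex positions.

    mean_shape: (3N,)    flattened mean face vertices
    id_base:    (3N, 80) identity basis (assumed dimension)
    exp_base:   (3N, 64) expression basis (assumed dimension)
    id_coeff:   (80,)    identity coefficients predicted by R-Net
    exp_coeff:  (64,)    expression coefficients predicted by R-Net
    """
    shape = mean_shape + id_base @ id_coeff + exp_base @ exp_coeff
    return shape.reshape(-1, 3)  # (N, 3) vertex positions

# Expression transfer: keep the identity coefficients of face A and
# borrow the expression coefficients of face B.
# shape_transfer = build_face_shape(mean_shape, id_base, exp_base, id_coeff_a, exp_coeff_b)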

Getting Started

Testing Requirements

Installation

1. Clone the repository

git clone https://github.com/Microsoft/Deep3DFaceReconstruction --recursive
cd Deep3DFaceReconstruction

2. Set up the python environment

If you use anaconda, run the following:

conda create -n deep3d python=3.6
source activate deep3d
conda install tensorflow-gpu==1.12.0 scipy
pip install pillow argparse

Alternatively, you can install tensorflow via pip (in this case, you need to link /usr/local/cuda to cuda-9.0):

pip install tensorflow-gpu==1.12.0

3. Compile tf_mesh_renderer

If you install tensorflow using pip, we provide a pre-compiled binary file (rasterize_triangles_kernel.so) of the library. Note that the pre-compiled file can only be run with tensorflow 1.12.

If you install tensorflow using conda, you have to compile tf_mesh_renderer from source with Bazel. Set -D_GLIBCXX_USE_CXX11_ABI=1 in ./mesh_renderer/kernels/BUILD before compiling:

cd tf_mesh_renderer
git checkout ba27ea1798
git checkout master WORKSPACE
bazel test ...
cd ..

If the library is compiled correctly, there should be a file named "rasterize_triangles_kernel.so" in ./tf_mesh_renderer/bazel-bin/mesh_renderer/kernels.

After compilation, copy the corresponding files to the ./renderer subfolder (the commands below are run from the repository root):

cp ./tf_mesh_renderer/mesh_renderer/{camera_utils.py,mesh_renderer.py,rasterize_triangles.py} ./renderer/
cp ./tf_mesh_renderer/bazel-bin/mesh_renderer/kernels/rasterize_triangles_kernel.so ./renderer/

If you download our pre-compiled binary file, put it into ./renderer subfolder as well.

Replace the library path in Line 26 of ./renderer/rasterize_triangles.py with "./renderer/rasterize_triangles_kernel.so".

Replace the "xrange" function in Line 109 of ./renderer/rasterize_triangles.py with "range" for compatibility with Python 3.

Testing with pre-trained network

  1. Download the Basel Face Model. Due to the license agreement of the Basel Face Model, you have to download the BFM09 model after submitting an application on its home page. After getting access to the BFM data, download "01_MorphableModel.mat" and put it into the ./BFM subfolder.

  2. Download the Expression Basis provided by Guo et al. You can find a link named "CoarseData" in the first row of the Introduction part of their repository. Download and unzip Coarse_Dataset.zip, then put "Exp_Pca.bin" into the ./BFM subfolder. The expression basis is constructed using FaceWarehouse data and transferred to the BFM topology.

  3. Download the pre-trained reconstruction network, unzip it and put "FaceReconModel.pb" into ./network subfolder.

  4. Run the demo code.

python demo.py
  5. The ./input subfolder contains several test images, and the ./output subfolder stores their reconstruction results. For each input test image, two output files are produced after running the demo code (see the loading snippet after this list):
    • "xxx.mat" :
      • cropped_img: an RGB image after alignment, which is the input to the R-Net
      • recon_img: an RGBA reconstruction image aligned with the input image (only on Linux).
      • coeff: output coefficients of R-Net.
      • face_shape: vertex positions of 3D face in the world coordinate.
      • face_texture: vertex texture of 3D face, which excludes lighting effect.
      • face_color: vertex color of 3D face, which takes lighting into consideration.
      • lm_68p: 68 2D facial landmarks derived from the reconstructed 3D face. The landmarks are aligned with cropped_img.
      • lm_5p: 5 detected landmarks aligned with cropped_img.
    • "xxx_mesh.obj" : 3D face mesh in the world coordinate (best viewed in MeshLab).

Training requirements

  • Training is only supported on Linux. To train a new model from scratch, additional requirements are needed on top of those listed for testing.
  • Facenet provided by Sandberg et al. In our paper, we use a network to extract perceptual face features. That network model cannot be publicly released. As an alternative, we recommend using the Facenet from Sandberg et al. This repo uses the version 20170512-110547 trained on MS-Celeb-1M. The training process has been tested with this model to ensure similar results.
  • Resnet50-v1 pre-trained on ImageNet from Tensorflow Slim. We use the version resnet_v1_50_2016_08_28.tar.gz as an initialization of the face reconstruction network.
  • 68-facial-landmark detector. We use 68 facial landmarks for loss calculation during training. To make the training process reproducible, we provide a lightweight detector that produces results comparable to the method of Bulat et al. The detector is trained on 300WLP, LFW, and LS3D-W.

Training preparation

  1. Download the pre-trained weights of Facenet provided by Sandberg et al., unzip it and put all files in ./weights/id_net.
  2. Download the pre-trained weights of Resnet_v1_50 provided by Tensorflow Slim, unzip it and put resnet_v1_50.ckpt in ./weights/resnet.
  3. Download the 68 landmark detector, put the file in ./network.

Data pre-processing

  1. To train our model with custom images, 5 facial landmarks for each image are needed in advance for the image pre-alignment process. We recommend using dlib or MTCNN. Use these public face detectors to get 5 landmarks, and save all images and their corresponding landmarks in <raw_img_path> (see the detection sketch after the commands below). Note that an image and its detected landmark file should have the same name.
  2. Align images and generate 68 landmarks as well as skin masks for training:
# Run following command for data pre-processing. By default, the code uses example images in ./input and saves the processed data in ./processed_data
python preprocess_img.py

# Alternatively, you can set your custom image path and save path
python preprocess_img.py --img_path <raw_img_path> --save_path <save_path_for_processed_data>
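
As a rough example of the landmark-detection step, the sketch below uses the third-party mtcnn package (pip install mtcnn) to detect the 5 keypoints and writes them next to each image. The landmark order and the text format (one "x y" pair per line, same base name as the image) are assumptions here; adjust them to whatever preprocess_img.py expects.

import glob
import os

import numpy as np
from PIL import Image
from mtcnn import MTCNN  # third-party MTCNN implementation

detector = MTCNN()
raw_img_path = './input'  # <raw_img_path>

for img_file in glob.glob(os.path.join(raw_img_path, '*.jpg')):
    img = np.array(Image.open(img_file).convert('RGB'))
    faces = detector.detect_faces(img)
    if not faces:
        continue  # no face found; skip this image
    kp = faces[0]['keypoints']
    pts = np.array([kp['left_eye'], kp['right_eye'], kp['nose'],
                    kp['mouth_left'], kp['mouth_right']], dtype=np.float32)
    # Assumed landmark file format: one "x y" pair per line, same base name as the image.
    np.savetxt(os.path.splitext(img_file)[0] + '.txt', pts, fmt='%.2f')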

Training networks

  1. Train the reconstruction network with the following command:
# By default, the code uses the data in ./processed_data as training data as well as validation data
python train.py

# Alternatively, you can set your custom data path
python train.py --data_path <custom_data_path> --val_data_path <custom_val_data_path> --model_name <custom_model_name>

  2. Monitor the training process via tensorboard:
tensorboard --logdir=result/<custom_model_name> --port=10001
  3. Evaluate the trained model:
python demo.py --use_pb 0 --pretrain_weights <custom_weights>.ckpt

Training a model with a batch size of 16 for 200K iterations takes about 20 hours on a single Tesla M40 GPU.

Latest Update

2020.4

The face reconstruction process has been fully ported to tensorflow; the old version used numpy. We have also integrated the rendering process into the framework, so reconstruction images aligned with the input can be obtained without extra effort. The whole process is tensorflow-based, which allows gradient back-propagation for other tasks.

2020.6

Uploaded a pre-trained model with the white-light assumption described in the paper.

2020.12

Uploaded the training code for single-image face reconstruction.

Note

  1. An image pre-alignment with 5 facial landmarks is necessary before reconstruction. In our image pre-processing stage, we solve a least-squares problem between 5 facial landmarks on the image and 5 facial landmarks of the BFM09 average 3D face to cancel out face scale and misalignment. To get 5 facial landmarks, you can choose any open-source face detector that returns them, such as dlib or MTCNN. However, these traditional 2D detectors may return wrong landmarks under large poses, which could influence the alignment result. Therefore, we recommend using the method of Bulat et al. to get facial landmarks (3D definition) with semantic consistency for large-pose images. Note that our model is trained without position augmentation, so a bad alignment may lead to inaccurate reconstruction results. We put some examples in the ./input subfolder for reference.

  2. We assume a pinhole camera model for face projection. The camera is positioned at (0,0,10) (in dm) in the world coordinate system and points along the negative z axis. We empirically set the camera fov to 12.6° and fix it during training and inference. Faces in canonical views are located at the origin of the world coordinate system and face the positive z axis. Rotations and translations predicted by the R-Net are all with respect to the world coordinate system (see the projection sketch after these notes).

  3. The current model is trained using 3-channel (r,g,b) scene illumination instead of the white light described in the paper. As a result, the gamma coefficient that controls lighting has a dimension of 27 instead of 9 (see the shading sketch after these notes).

  4. We exclude the ear and neck regions of the original BFM09 model to let the network concentrate on the face region. To see which vertices of the original model are preserved, check select_vertex_id.mat in the ./BFM subfolder. Note that the indices start from 1.

  5. Our model may give inferior results for images with severe perspective distortions (e.g., some selfies). In addition, we cannot handle faces with closed eyes well, due to the lack of such images in the training data.

  6. If you have any further questions, please contact Yu Deng (dengyu2008@hotmail.com) and Jiaolong Yang (jiaoyan@microsoft.com).
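
To make the camera convention in note 2 concrete, here is a minimal numpy sketch of a pinhole projection under those assumptions. The 224×224 crop size and the derived focal length are only illustrative; the exact image-coordinate convention (origin, y-flip) used inside face_decoder.py may differ.

import numpy as np

IMG_SIZE = 224                          # size of the aligned crop fed to R-Net (assumed)
FOV_DEG = 12.6                          # field of view, fixed during training and inference
CAMERA_Z = 10.0                         # camera sits at (0, 0, 10) looking down the -z axis

# Focal length in pixels implied by the fov: f = (w / 2) / tan(fov / 2)
focal = 0.5 * IMG_SIZE / np.tan(0.5 * np.deg2rad(FOV_DEG))

def project(vertices):
    """Project world-space face vertices (N, 3) onto the image plane (N, 2)."""
    depth = CAMERA_Z - vertices[:, 2]   # distance in front of the camera; faces sit near the origin
    x = focal * vertices[:, 0] / depth + IMG_SIZE / 2.0
    y = focal * vertices[:, 1] / depth + IMG_SIZE / 2.0
    return np.stack([x, y], axis=1)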
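
The 27-dimensional gamma in note 3 corresponds to 9 spherical-harmonics coefficients per color channel. Below is a generic sketch of order-2 SH shading with per-channel coefficients; the exact basis constants, normal handling, and any constant offsets inside face_decoder.py may differ, so treat it only as an illustration of where the 27 numbers go.

import numpy as np

def sh_basis(normals):
    """Order-2 spherical harmonics basis at unit normals (N, 3) -> (N, 9)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.full_like(x, 0.282095),     # Y_0^0
        0.488603 * y,                  # Y_1^-1
        0.488603 * z,                  # Y_1^0
        0.488603 * x,                  # Y_1^1
        1.092548 * x * y,              # Y_2^-2
        1.092548 * y * z,              # Y_2^-1
        0.315392 * (3 * z ** 2 - 1),   # Y_2^0
        1.092548 * x * z,              # Y_2^1
        0.546274 * (x ** 2 - y ** 2),  # Y_2^2
    ], axis=1)

def shade(face_texture, normals, gamma):
    """face_texture: (N, 3) per-vertex albedo, gamma: (27,) -> (N, 3) lit colors."""
    sh = sh_basis(normals)             # (N, 9)
    gamma = gamma.reshape(3, 9)        # 9 coefficients for each of r, g, b
    lighting = sh @ gamma.T            # (N, 3) per-channel irradiance
    return face_texture * lighting     # corresponds roughly to face_color in the output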

Citation

Please cite the following paper if this model helps your research:

@inproceedings{deng2019accurate,
    title={Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set},
    author={Yu Deng and Jiaolong Yang and Sicheng Xu and Dong Chen and Yunde Jia and Xin Tong},
    booktitle={IEEE Computer Vision and Pattern Recognition Workshops},
    year={2019}
}

The face images on this page are from the public CelebA dataset released by MMLab, CUHK.