This repo is the official implementation of 3DDFA_V2, titled Towards Fast, Accurate and Stable 3D Dense Face Alignment, accepted at ECCV 2020, which extends 3DDFA. The supplementary material is here. The gif above shows a webcam demo of the tracking result, recorded in my lab.

Compared to 3DDFA, 3DDFA_V2 achieves better accuracy and stability. It also replaces Dlib with the fast face detector FaceBoxes, and includes a simple 3D renderer written in C++ and Cython. The repo supports onnxruntime: with the default backbone, regressing the 3DMM parameters of a single input image takes about 1.35 ms on CPU.

See requirements.txt; the repo is tested on macOS and Linux. Windows users may refer to the FAQ for building issues. Note that this repo requires Python 3. The major dependencies are PyTorch, numpy, opencv-python, and onnxruntime.
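Before the network can regress 3DMM parameters, the face crop has to be brought to the fixed 120x120 input size and normalized. The following is a minimal, pure-numpy sketch of such a preprocessing step; the nearest-neighbor resize and the normalization constants (mean 127.5, scale 128) are illustrative assumptions, not the repo's exact pipeline.

```python
import numpy as np

def preprocess(face_bgr, size=120):
    """Resize a face crop to the network input size (nearest-neighbor,
    pure numpy for illustration) and normalize to roughly [-1, 1].
    The constants here are illustrative, not the repo's exact ones."""
    h, w = face_bgr.shape[:2]
    # Nearest-neighbor sampling grid over rows and columns
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = face_bgr[ys[:, None], xs[None, :]]
    img = resized.astype(np.float32)
    img = (img - 127.5) / 128.0           # scale to ~[-1, 1]
    return img.transpose(2, 0, 1)[None]   # NCHW batch of one

x = preprocess(np.zeros((200, 160, 3), dtype=np.uint8))
print(x.shape)  # (1, 3, 120, 120)
```

The resulting NCHW tensor is what an onnxruntime `InferenceSession` (or the PyTorch model) would consume; in practice the repo uses opencv-python for the resize.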
Features
- Tracking is implemented simply by running alignment on successive frames
- If the head pose angle exceeds 90° or the motion is too fast, alignment may fail
- A threshold is used as a rough check of the tracking state, but it is unstable
- Refer to demo.ipynb or Google Colab for a step-by-step tutorial on running on a still image
- The default backbone is MobileNet_V1 with input size 120x120
- The onnx option greatly reduces the overall CPU latency
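The threshold-based tracking check mentioned above can be sketched as follows. This is a hypothetical illustration, not the repo's actual code: the function name `tracking_ok` and the 15-pixel threshold are assumptions, and the idea is simply that a large landmark displacement between consecutive frames signals a tracking failure and triggers re-detection.

```python
import numpy as np

MOTION_THRESH = 15.0  # pixels; illustrative value, not the repo's

def tracking_ok(prev_lms, cur_lms, thresh=MOTION_THRESH):
    """Return False when the mean landmark displacement between two
    frames exceeds the threshold, signalling that alignment-based
    tracking likely failed and the face detector should be re-run."""
    motion = np.linalg.norm(cur_lms - prev_lms, axis=1).mean()
    return bool(motion < thresh)

prev = np.zeros((68, 2))      # 68 2D landmarks from the previous frame
steady = prev + 2.0           # small drift: keep tracking
jump = prev + 50.0            # large jump: re-run the face detector
print(tracking_ok(prev, steady), tracking_ok(prev, jump))  # True False
```

A fixed pixel threshold like this is exactly why the check is unstable: the right value depends on face size, frame rate, and camera motion.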