Feng Liu


I am currently a postdoctoral researcher in the Computer Vision Lab at Michigan State University, where I am fortunate to be advised by Prof. Xiaoming Liu. I received my Ph.D. in Computer Science from Sichuan University, where I was advised by Prof. Zhisheng You and Prof. Qijun Zhao.


My research interests span the joint analysis of 2D images and 3D shapes, including 3D modeling, semantic correspondence, and coherent 3D scene reconstruction.


cv | email | github | google scholar


liufeng6@msu.edu
  

News


      [Sep 2022] One paper accepted to NeurIPS 2022
      [Jul 2022] Two papers accepted to ECCV 2022
      [Sep 2021] One paper accepted to NeurIPS 2021
      [Feb 2021] One paper accepted to CVPR 2021
      [Sep 2020] One paper accepted to NeurIPS 2020 as an oral presentation (1.1% acceptance rate)
      [May 2020] One paper accepted to TPAMI
      [Feb 2020] One paper accepted to CVPR 2020

Publications



2022

The State of Aerial Surveillance: A Survey
Kien Nguyen, Clinton Fookes, Sridha Sridharan, Yingli Tian, Feng Liu, Xiaoming Liu, Arun Ross
TPAMI, 2022 (Under Review)
bibtex   abstract   pdf  

The rapid emergence of airborne platforms and imaging sensors is enabling new forms of aerial surveillance due to their unprecedented advantages in scale, mobility, deployment and covert observation capabilities. This paper provides a comprehensive overview of human-centric aerial surveillance tasks from a computer vision and pattern recognition perspective. It aims to provide readers with an in-depth systematic review and technical analysis of the current state of aerial surveillance tasks using drones, UAVs and other airborne platforms. The main object of interest is humans, where single or multiple subjects are to be detected, identified, tracked, re-identified and have their behavior analyzed. More specifically, for each of these four tasks, we first discuss unique challenges in performing these tasks in an aerial setting compared to a ground-based setting. We then review and analyze the aerial datasets publicly available for each task, delve deep into the approaches in the aerial literature, and investigate how they presently address the aerial challenges. We conclude the paper with a discussion of the missing gaps and open research questions to inform future research avenues.

@article{ the-state-of-aerial-surveillance-a-survey,
  author = { Kien Nguyen and Clinton Fookes and Sridha Sridharan and Yingli Tian and Feng Liu and Xiaoming Liu and Arun Ross },
  title = { The State of Aerial Surveillance: A Survey },
  journal = { IEEE Transactions on Pattern Analysis and Machine Intelligence },
  month = { January },
  year = { 2022 },
}

[New] Cluster and Aggregate: Face Recognition with Large Probe Set
Minchul Kim, Feng Liu, Anil Jain, Xiaoming Liu
NeurIPS, 2022
bibtex   abstract   project page   pdf   supp  


@inproceedings{ cluster-and-aggregate-face-recognition-with-large-probe-set,
  author = { Minchul Kim and Feng Liu and Anil Jain and Xiaoming Liu },
  title = { Cluster and Aggregate: Face Recognition with Large Probe Set },
  booktitle = { Proceedings of the Thirty-sixth Conference on Neural Information Processing Systems },
  address = { New Orleans, LA },
  month = { December },
  year = { 2022 },
}

[New] 2D GANs Meet Unsupervised Single-View 3D Reconstruction
Feng Liu, Xiaoming Liu
ECCV, 2022
bibtex   abstract   project page   pdf   supp   code

Recent research has shown that controllable image generation based on pre-trained GANs can benefit a wide range of computer vision tasks. However, less attention has been devoted to 3D vision tasks. In light of this, we propose a novel image-conditioned neural implicit field, which can leverage 2D supervision from GAN-generated multi-view images and perform single-view reconstruction of generic objects. First, a novel offline StyleGAN-based generator is presented to generate plausible pseudo images with full control over the viewpoint. Then, we propose to utilize a neural implicit function, along with a differentiable renderer, to learn 3D geometry from pseudo images with object masks and rough pose initializations. To further handle unreliable supervision, we introduce a novel uncertainty module to predict uncertainty maps, which remedy the negative effect of uncertain regions in the pseudo images, leading to better reconstruction performance. The effectiveness of our approach is demonstrated through superior single-view 3D reconstruction results on generic objects.

@inproceedings{ 2d-gans-meet-unsupervised-single-view-3d-reconstruction,
  author = { Feng Liu and Xiaoming Liu },
  title = { 2D GANs Meet Unsupervised Single-View 3D Reconstruction },
  booktitle = { Proceedings of the European Conference on Computer Vision },
  address = { Tel-Aviv, Israel },
  month = { October },
  year = { 2022 },
}
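
A minimal sketch (an assumption on my part, not the released code) of the uncertainty-weighted supervision described in the abstract above: renderings from the implicit field are compared against GAN-generated pseudo images, with a predicted per-pixel uncertainty map down-weighting unreliable regions. The heteroscedastic |error|/sigma + log sigma form is one standard choice.

import torch

def uncertainty_weighted_l1(rendered, pseudo, log_sigma):
    """rendered, pseudo: (B, 3, H, W); log_sigma: (B, 1, H, W) predicted log-uncertainty."""
    sigma = torch.exp(log_sigma)
    residual = (rendered - pseudo).abs().mean(dim=1, keepdim=True)  # per-pixel L1 over RGB
    # Pixels with high predicted uncertainty contribute less; the log-sigma
    # term keeps the network from declaring every pixel uncertain.
    return (residual / sigma + log_sigma).mean()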

[New] Controllable and Guided Face Synthesis for Unconstrained Face Recognition
Feng Liu, Minchul Kim, Anil Jain, Xiaoming Liu
ECCV, 2022
bibtex   abstract   project page   pdf   supp   code

Although significant advances have been made in face recognition (FR), FR in unconstrained environments remains challenging due to the domain gap between the semi-constrained training datasets and unconstrained testing scenarios. To address this problem, we propose a controllable face synthesis model (CFSM) that can mimic the distribution of target datasets in a style latent space. CFSM learns a linear subspace with orthogonal bases in the style latent space, with precise control over the diversity and degree of synthesis. Furthermore, the pre-trained synthesis model can be guided by the FR model, making the resulting images more beneficial for FR model training. Moreover, the learned orthogonal bases characterize the target dataset distributions and can be utilized to measure the distributional similarity among face datasets. Our approach yields significant performance gains on unconstrained benchmarks such as IJB-B, IJB-C, TinyFace and IJB-S (+5.76% Rank-1).

@inproceedings{ controllable-and-guided-face-synthesis-for-unconstrained-face-recognition,
  author = { Feng Liu and Minchul Kim and Anil Jain and Xiaoming Liu },
  title = { Controllable and Guided Face Synthesis for Unconstrained Face Recognition },
  booktitle = { Proceedings of the European Conference on Computer Vision },
  address = { Tel-Aviv, Israel },
  month = { October },
  year = { 2022 },
}
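
A hedged sketch of the linear style-subspace idea from the abstract above: style codes are perturbed along learned orthogonal basis directions, so a small coefficient vector controls the degree and diversity of synthesis. The class name, dimensions, and QR orthonormalization are illustrative assumptions, not the released CFSM interface.

import torch

class StyleSubspace(torch.nn.Module):
    def __init__(self, style_dim=512, n_bases=10):
        super().__init__()
        self.bases = torch.nn.Parameter(torch.randn(n_bases, style_dim))

    def forward(self, z, alpha):
        """z: (B, style_dim) style codes; alpha: (B, n_bases) control coefficients."""
        # Orthonormalize the learned bases so each direction acts independently.
        q, _ = torch.linalg.qr(self.bases.t())   # (style_dim, n_bases)
        return z + alpha @ q.t()                 # shift z within the learned subspace

Scaling the norm of alpha then directly controls the magnitude of the synthesized style change.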

2021

Shape My Face: Registering 3D Face Scans by Surface-to-Surface Translation
Mehdi Bahri, Eimear O'Sullivan, Shunwang Gong, Feng Liu, Xiaoming Liu, Michael Bronstein, Stefanos Zafeiriou
IJCV, 2021
bibtex   abstract   pdf  

Standard registration algorithms need to be independently applied to each surface to register, following careful pre-processing and hand-tuning. Recently, learning-based approaches have emerged that reduce the registration of new scans to running inference with a previously-trained model. The potential benefits are multifold: inference is typically orders of magnitude faster than solving a new instance of a difficult optimization problem, deep learning models can be made robust to noise and corruption, and the trained model may be re-used for other tasks, e.g. through transfer learning. In this paper, we cast the registration task as a surface-to-surface translation problem, and design a model to reliably capture the latent geometric information directly from raw 3D face scans. We introduce Shape-My-Face (SMF), a powerful encoder-decoder architecture based on an improved point cloud encoder, a novel visual attention mechanism, graph convolutional decoders with skip connections, and a specialized mouth model that we smoothly integrate with the mesh convolutions. Compared to the previous state-of-the-art learning algorithms for non-rigid registration of face scans, SMF only requires the raw data to be rigidly aligned (with scaling) with a pre-defined face template. Additionally, our model provides topologically-sound meshes with minimal supervision, offers faster training time, has orders of magnitude fewer trainable parameters, is more robust to noise, and can generalize to previously unseen datasets. We extensively evaluate the quality of our registrations on diverse data. We demonstrate the robustness and generalizability of our model with in-the-wild face scans across different modalities, sensor types, and resolutions. Finally, we show that, by learning to register scans, SMF produces a hybrid linear and non-linear morphable model. Manipulation of the latent space of SMF allows for shape generation, and morphing applications such as expression transfer in-the-wild. We train SMF on a dataset of human faces comprising 9 large-scale databases on commodity hardware.

@article{ shape-my-face-registering-3d-face-scans-by-surface-to-surface-translation,
  author = { Mehdi Bahri and Eimear O'Sullivan and Shunwang Gong and Feng Liu and Xiaoming Liu and Michael Bronstein and Stefanos Zafeiriou },
  title = { Shape My Face: Registering 3D Face Scans by Surface-to-Surface Translation },
  journal = { International Journal of Computer Vision (IJCV) },
  volume = { 129 },
  month = { June },
  year = { 2021 },
  pages = { 2680--2713 },
}

Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image
Feng Liu, Xiaoming Liu
NeurIPS, 2021
bibtex   abstract   project page   pdf   supp   video   code

Inferring 3D locations and shapes of multiple objects from a single 2D image is a long-standing objective of computer vision. Most of the existing works either predict one of these 3D properties or focus on solving both for a single object. One fundamental challenge lies in how to learn an effective representation of the image that is well-suited for 3D detection and reconstruction. In this work, we propose to learn a regular grid of 3D voxel features from the input image, aligned with 3D scene space via a 3D feature lifting operator. Based on the 3D voxel features, our novel CenterNet-3D detection head formulates the 3D detection as keypoint detection in the 3D space. Moreover, we devise an efficient coarse-to-fine reconstruction module, including coarse-level voxelization and a novel local PCA-SDF shape representation, which enables fine detail reconstruction and one order of magnitude faster inference than prior methods. With complementary supervision from both 3D detection and reconstruction, the 3D voxel features become geometry- and context-preserving, benefiting both tasks. The effectiveness of our approach is demonstrated through 3D detection and reconstruction in single-object and multiple-object scenarios.

@inproceedings{ voxel-based-3d-detection-and-reconstruction-of-multiple-objects-from-a-single-image,
  author = { Feng Liu and Xiaoming Liu },
  title = { Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image },
  booktitle = { Proceedings of the Thirty-fifth Conference on Neural Information Processing Systems },
  address = { Virtual },
  month = { December },
  year = { 2021 },
}
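
A minimal sketch (my reading of the mechanics, not the released code) of a CenterNet-style detection head in 3D, as described in the abstract above: object centers are read off a predicted 3D heatmap by keeping local maxima, the direct 3D analogue of 2D CenterNet peak extraction.

import torch
import torch.nn.functional as F

def extract_3d_centers(heatmap, k=20):
    """heatmap: (B, C, D, H, W) per-class center likelihoods over the voxel grid."""
    # 3D max-pool NMS: a voxel survives only if it is the maximum of its neighborhood.
    pooled = F.max_pool3d(heatmap, kernel_size=3, stride=1, padding=1)
    peaks = heatmap * (pooled == heatmap)
    scores, idx = peaks.flatten(1).topk(k)   # top-k peaks across classes and voxels
    return scores, idx                       # idx decodes back to (class, z, y, x)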

Fully Understanding Generic Objects: Modeling, Segmentation, and Reconstruction
Feng Liu, Luan Tran, Xiaoming Liu
CVPR, 2021
bibtex   abstract   project page   pdf   supp   video   code

Inferring 3D structure of a generic object from a 2D image is a long-standing objective of computer vision. Conventional approaches either learn completely from CAD-generated synthetic data, which have difficulty in inference from real images, or generate 2.5D depth image via intrinsic decomposition, which is limited compared to the full 3D reconstruction. One fundamental challenge lies in how to leverage numerous real 2D images without any 3D ground truth. To address this issue, we take an alternative approach with semi-supervised learning. That is, for a 2D image of a generic object, we decompose it into latent representations of category, shape and albedo, lighting and camera projection matrix, decode the representations to segmented 3D shape and albedo respectively, and fuse these components to render an image well approximating the input image. Using a category-adaptive 3D joint occupancy field (JOF), we show that the complete shape and albedo modeling enables us to leverage real 2D images in both modeling and model fitting. The effectiveness of our approach is demonstrated through superior 3D reconstruction from a single image, being either synthetic or real, and shape segmentation.

@inproceedings{ fully-understanding-generic-objects-modeling-segmentation-and-reconstruction,
  author = { Feng Liu and Luan Tran and Xiaoming Liu },
  title = { Fully Understanding Generic Objects: Modeling, Segmentation, and Reconstruction },
  booktitle = { Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition },
  address = { Nashville, TN },
  month = { June },
  year = { 2021 },
}
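
A schematic sketch, under assumed interfaces, of the render-and-compare supervision described above: an image is decomposed into latent codes, decoded to shape and albedo by a joint occupancy field, and rendered back so the photometric gap to the real input drives learning without 3D ground truth. The names encoder, jof, and renderer are placeholders, not the released API.

import torch
import torch.nn.functional as F

def self_supervised_step(img, encoder, jof, renderer):
    """img: (B, 3, H, W), a real or synthetic photo of a generic object."""
    z_shape, z_albedo, light, cam = encoder(img)   # latent decomposition
    shape, albedo = jof(z_shape, z_albedo)         # segmented 3D shape and albedo
    recon = renderer(shape, albedo, light, cam)    # differentiable rendering
    return F.l1_loss(recon, img)                   # approximate the input image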

2020

Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence
Feng Liu, Xiaoming Liu
NeurIPS, 2020 (Oral presentation) [1.1% (105/9454)]
bibtex   abstract   project page   pdf   supp   video   poster   code

The goal of this paper is to learn dense 3D shape correspondence for topology-varying objects in an unsupervised manner. Conventional implicit functions estimate the occupancy of a 3D point given a shape latent code. Instead, our novel implicit function produces a part embedding vector for each 3D point, which is assumed to be similar to its densely corresponded point in another 3D shape of the same object category. Furthermore, we implement dense correspondence through an inverse function mapping from the part embedding to a corresponded 3D point. Both functions are jointly learned with several effective loss functions to realize our assumption, together with the encoder generating the shape latent code. During inference, if a user selects an arbitrary point on the source shape, our algorithm can automatically generate a confidence score indicating whether there is a correspondence on the target shape, as well as the corresponding semantic point if there is one. Such a mechanism inherently benefits man-made objects with different part constitutions. The effectiveness of our approach is demonstrated through unsupervised 3D semantic correspondence and shape segmentation.

@inproceedings{ learning-implicit-functions-for-topology-varying-dense-3d-shape-correspondence,
  author = { Feng Liu and Xiaoming Liu },
  title = { Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence },
  booktitle = { Proceedings of the 2020 Conference on Neural Information Processing Systems },
  address = { Virtual },
  month = { December },
  year = { 2020 },
}
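
A sketch of the correspondence mechanism described above, under assumed interfaces: embed(points, z) is the implicit function producing a part embedding per 3D point, and invert(emb, z) is the inverse function mapping embeddings back onto the shape coded by z. The cycle-gap confidence test is my paraphrase of the paper's confidence score, with a hypothetical threshold tau.

import torch

def correspond(p_src, z_src, z_tgt, embed, invert, tau=0.05):
    """p_src: (N, 3) query points on the source shape; returns matches and a confidence mask."""
    e = embed(p_src, z_src)        # (N, E) part embeddings of the source points
    p_tgt = invert(e, z_tgt)       # (N, 3) candidate corresponding points on the target
    # Cycle check: re-embed the target points; a large embedding gap signals
    # that no reliable correspondence exists (e.g., a missing part).
    cycle_gap = (embed(p_tgt, z_tgt) - e).norm(dim=-1)
    return p_tgt, cycle_gap < tau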

On Learning Disentangled Representations for Gait Recognition
Ziyuan Zhang, Luan Tran, Feng Liu, Xiaoming Liu
IEEE TPAMI, 2020
bibtex   abstract   project page   pdf   dataset   code

Gait, the walking pattern of individuals, is one of the most important biometrics modalities. Most of the existing gait recognition methods take silhouettes or articulated body models as the gait features. These methods suffer from degraded recognition performance when handling confounding variables, such as clothing, carrying and view angle. To remedy this issue, we propose a novel AutoEncoder framework to explicitly disentangle pose and appearance features from RGB imagery, and the LSTM-based integration of pose features over time produces the gait feature. In addition, we collect a Frontal-View Gait (FVG) dataset to focus on gait recognition from frontal-view walking, which is a challenging problem since it contains minimal gait cues compared to other views. FVG also includes other important variations, e.g., walking speed, carrying, and clothing. With extensive experiments on the CASIA-B, USF and FVG datasets, our method demonstrates superior performance to the state of the art quantitatively, the ability of feature disentanglement qualitatively, and promising computational efficiency.

@article{ on-learning-disentangled-representations-for-gait-recognition,
  author = { Ziyuan Zhang and Luan Tran and Feng Liu and Xiaoming Liu },
  title = { On Learning Disentangled Representations for Gait Recognition },
  journal = { IEEE Transactions on Pattern Analysis and Machine Intelligence },
  month = { May },
  year = { 2020 },
}
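
A minimal sketch, with assumed shapes, of the temporal integration step described above: per-frame pose features (already disentangled from appearance by the autoencoder) are aggregated by an LSTM whose final output serves as the gait feature. This is illustrative, not the released model.

import torch

class GaitAggregator(torch.nn.Module):
    def __init__(self, pose_dim=128, gait_dim=256):
        super().__init__()
        self.lstm = torch.nn.LSTM(pose_dim, gait_dim, batch_first=True)

    def forward(self, pose_seq):
        """pose_seq: (B, T, pose_dim) pose features over T video frames."""
        out, _ = self.lstm(pose_seq)
        return out[:, -1]          # (B, gait_dim) gait feature from the final step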

On the Detection of Digital Face Manipulation
Hao Dang*, Feng Liu*, Joel Stehouwer*, Xiaoming Liu, Anil Jain
CVPR, 2020
bibtex   abstract   project page   pdf   supp   poster   dataset   code

Detecting manipulated facial images and videos is an increasingly important topic in digital media forensics. As advanced face synthesis and manipulation methods become available, new types of fake face representations are being created and raise significant concerns for their implications in social media. Hence, it is crucial to detect the manipulated face image and localize manipulated regions. Instead of simply using multi-task learning to simultaneously detect manipulated images and predict the manipulated mask (regions), we propose to utilize the attention mechanism to process and improve the feature maps of the classification model. The learned attention maps highlight the informative regions to further improve the binary classification, and also visualize the manipulated regions. In addition, to enable our study of manipulated face detection and localization, we collect a large-scale database that contains numerous types of facial forgeries. With this dataset, we perform a thorough analysis of data-driven fake face detection. We demonstrate that the use of an attention mechanism improves manipulated region localization and fake detection.

@inproceedings{ on-the-detection-of-digital-face-manipulation,
  author = { Hao Dang* and Feng Liu* and Joel Stehouwer* and Xiaoming Liu and Anil Jain },
  title = { On the Detection of Digital Face Manipulation },
  booktitle = { Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition },
  address = { Seattle, WA },
  month = { June },
  year = { 2020 },
}
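
A hedged sketch of the attention idea described above: a single-channel attention map is predicted from an intermediate feature map, multiplied back into the features feeding the real/fake classifier, and read out directly as an estimate of the manipulated regions. Layer sizes are illustrative assumptions.

import torch

class ManipulationAttention(torch.nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        self.to_map = torch.nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat):
        """feat: (B, C, H, W) backbone features."""
        attn = torch.sigmoid(self.to_map(feat))   # (B, 1, H, W), values in [0, 1]
        refined = feat * attn                     # highlight informative regions
        return refined, attn                      # attn doubles as localization output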

2019

3D Face Modeling from Diverse Raw Scan Data
Feng Liu, Luan Tran, Xiaoming Liu
ICCV, 2019 (Oral presentation) [4.3% (187/4303)]
bibtex   abstract   project page   pdf   supp   video   poster   code

Traditional 3D face models learn a latent representation of faces using linear subspaces from limited scans of a single database. The main roadblock of building a large-scale face model from diverse 3D databases lies in the lack of dense correspondence among raw scans. To address these problems, this paper proposes an innovative framework to jointly learn a nonlinear face model from a diverse set of raw 3D scan databases and establish dense point-to-point correspondence among their scans. Specifically, by treating input scans as unorganized point clouds, we explore the use of PointNet architectures for converting point clouds to identity and expression feature representations, from which the decoder networks recover their 3D face shapes. Further, we propose a weakly supervised learning approach that does not require correspondence label for the scans. We demonstrate the superior dense correspondence and representation power of our proposed method, and its contribution to single-image 3D face reconstruction.

@inproceedings{ 3d-face-modeling-from-diverse-raw-scan-data,
  author = { Feng Liu and Luan Tran and Xiaoming Liu },
  title = { 3D Face Modeling from Diverse Raw Scan Data },
  booktitle = { Proceedings of the International Conference on Computer Vision },
  address = { Seoul, South Korea },
  month = { October },
  year = { 2019 },
}
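
A hedged sketch of the encoder split described above: a PointNet-style shared MLP with max pooling maps an unorganized scan to an order-invariant global feature, which two heads separate into identity and expression codes for the decoders. Dimensions are illustrative assumptions.

import torch

class ScanEncoder(torch.nn.Module):
    def __init__(self, id_dim=256, exp_dim=64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Conv1d(3, 64, 1), torch.nn.ReLU(),
            torch.nn.Conv1d(64, 1024, 1), torch.nn.ReLU())
        self.to_id = torch.nn.Linear(1024, id_dim)
        self.to_exp = torch.nn.Linear(1024, exp_dim)

    def forward(self, pts):
        """pts: (B, N, 3) raw scan points in arbitrary order."""
        feat = self.mlp(pts.transpose(1, 2)).max(dim=2).values  # order-invariant pooling
        return self.to_id(feat), self.to_exp(feat)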

Towards High-fidelity Nonlinear 3D Face Morphable Model
Luan Tran, Feng Liu, Xiaoming Liu
CVPR, 2019
bibtex   abstract   project page   pdf   poster   code

Embedding 3D morphable basis functions into deep neural networks opens great potential for models with better representation power. However, to faithfully learn those models from an image collection, it requires strong regularization to overcome ambiguities involved in the learning process. This critically prevents us from learning high fidelity face models which are needed to represent face images in high level of details. To address this problem, this paper presents a novel approach to learn additional proxies as means to side-step strong regularizations, as well as leverages to promote detailed shape/albedo. To ease the learning, we also propose to use a dual-pathway network, a carefully-designed architecture that brings a balance between global and local-based models. By improving the nonlinear 3D morphable model in both learning objective and network architecture, we present a model which is superior in capturing higher level of details than the linear or its precedent nonlinear counterparts. As a result, our model achieves state-of-the-art performance on 3D face reconstruction by solely optimizing latent representations.

@inproceedings{ towards-high-fidelity-nonlinear-3d-face-morphable-model,
  author = { Luan Tran and Feng Liu and Xiaoming Liu },
  title = { Towards High-fidelity Nonlinear 3D Face Morphable Model },
  booktitle = { Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition },
  address = { Long Beach, CA },
  month = { June },
  year = { 2019 },
}

2018

Joint Face Alignment and 3D Face Reconstruction with Application to Face Recognition
Feng Liu, Qijun Zhao, Xiaoming Liu, Dan Zeng
IEEE TPAMI, 2018
bibtex   abstract   pdf

Face alignment and 3D face reconstruction are traditionally accomplished as separate tasks. By exploring the strong correlation between 2D landmarks and 3D shapes, in contrast, we propose a joint face alignment and 3D face reconstruction method to simultaneously solve these two problems for 2D face images of arbitrary poses and expressions. This method, based on a summation model of 3D faces and cascaded regression in 2D and 3D shape spaces, iteratively and alternately applies two cascaded regressors, one for updating 2D landmarks and the other for 3D shape. The 3D shape and the landmarks are correlated via a 3D-to-2D mapping matrix, which is updated in each iteration to refine the location and visibility of 2D landmarks. Unlike existing methods, the proposed method can fully automatically generate both pose-and-expression-normalized (PEN) and expressive 3D faces and localize both visible and invisible 2D landmarks. Based on the PEN 3D faces, we devise a method to enhance face recognition accuracy across poses and expressions. Both linear and nonlinear implementations of the proposed method are presented and evaluated in this paper. Extensive experiments show that the proposed method can achieve state-of-the-art accuracy in both face alignment and 3D face reconstruction, and benefit face recognition owing to its reconstructed PEN 3D face.

@article{ joint-face-alignment-and-3d-face-reconstruction-with-application-to-face-recognition,
  author = { Feng Liu and Qijun Zhao and Xiaoming Liu and Dan Zeng },
  title = { Joint Face Alignment and 3D Face Reconstruction with Application to Face Recognition },
  journal = { IEEE Transactions on Pattern Analysis and Machine Intelligence },
  month = { November },
  year = { 2018 },
}
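
A schematic sketch of one iteration of the alternating cascade described above, with assumed regressor interfaces: reg2d predicts a landmark update from image features, reg3d predicts a 3D update from that landmark update, and the 3D-to-2D mapping is re-fit by least squares. For brevity, shape3d here holds only the 3D positions of the landmark vertices.

import numpy as np

def cascade_step(landmarks, shape3d, features, reg2d, reg3d):
    """landmarks: (L, 2) current 2D landmarks; shape3d: (L, 3) 3D landmark positions."""
    delta2d = reg2d(features, landmarks)   # landmark adjustment from local features
    landmarks = landmarks + delta2d
    shape3d = shape3d + reg3d(delta2d)     # 3D adjustment driven by the 2D update
    # Re-fit the 3D-to-2D mapping M by least squares in homogeneous coordinates,
    # then reproject the updated 3D landmarks to further refine the 2D ones.
    X = np.hstack([shape3d, np.ones((len(shape3d), 1))])   # (L, 4)
    M, *_ = np.linalg.lstsq(X, landmarks, rcond=None)      # (4, 2) affine mapping
    return X @ M, shape3d, M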

Disentangling Features in 3D Face Shapes for Joint Face Reconstruction and Recognition
Feng Liu, Ronghang Zhu, Dan Zeng, Qijun Zhao, Xiaoming Liu
CVPR, 2018
bibtex   abstract   pdf   supp   poster

This paper proposes an encoder-decoder network to disentangle shape features during 3D face reconstruction from single 2D images, such that the tasks of reconstructing accurate 3D face shapes and learning discriminative shape features for face recognition can be accomplished simultaneously. Unlike existing 3D face reconstruction methods, our proposed method directly regresses dense 3D face shapes from single 2D images, and tackles identity and residual (i.e., non-identity) components in 3D face shapes explicitly and separately based on a composite 3D face shape model with latent representations. We devise a training process for the proposed network with a joint loss measuring both face identification error and 3D face shape reconstruction error. To construct training data we develop a method for fitting 3D morphable model (3DMM) to multiple 2D images of a subject. Comprehensive experiments have been done on MICC, BU3DFE, LFW and YTF databases. The results show that our method expands the capacity of 3DMM for capturing discriminative shape features and facial detail, and thus outperforms existing methods both in 3D face reconstruction accuracy and in face recognition accuracy.

@inproceedings{ disentangling-features-in-3d-face-shapes-for-joint-face-reconstruction-and-recognition,
  author = { Feng Liu and Ronghang Zhu and Dan Zeng and Qijun Zhao and Xiaoming Liu },
  title = { Disentangling Features in 3D Face Shapes for Joint Face Reconstruction and Recognition },
  booktitle = { Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition },
  address = { Salt Lake City, UT },
  month = { June },
  year = { 2018 },
}
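
A minimal sketch of the composite shape model described above: the decoder reconstructs a dense 3D face as the sum of an identity component and a residual (non-identity) component, each decoded from its own latent code. Module names and dimensions are assumptions for illustration; z_id alone feeds recognition.

import torch

class CompositeFaceDecoder(torch.nn.Module):
    def __init__(self, id_dim=256, res_dim=256, n_verts=20000):
        super().__init__()
        self.n_verts = n_verts
        self.decode_id = torch.nn.Linear(id_dim, n_verts * 3)
        self.decode_res = torch.nn.Linear(res_dim, n_verts * 3)

    def forward(self, z_id, z_res):
        """Returns (B, n_verts, 3) dense shapes: identity plus residual components."""
        s_id = self.decode_id(z_id).view(-1, self.n_verts, 3)
        s_res = self.decode_res(z_res).view(-1, self.n_verts, 3)
        return s_id + s_res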

2017

Multi-Dim: A Multi-Dimensional Face Database Towards the Application of 3D Technology in Real-World Scenarios
Feng Liu, Jun Hu, Jianwei Sun, Yang Wang, Qijun Zhao
IJCB, 2017
bibtex   abstract   pdf   supp   poster

Three-dimensional (3D) faces are increasingly utilized in many face-related tasks. Despite the promising improvement achieved by 3D face technology, it is still hard to thoroughly evaluate the performance and effect of 3D face technology in real-world applications where variations frequently occur in pose, illumination, expression and many other factors. This is due to the lack of benchmark databases that contain both high precision full-view 3D faces and their 2D face images/videos under different conditions. In this paper, we present such a multi-dimensional face database (namely Multi-Dim) of high precision 3D face scans, high definition photos, 2D still face images with varying pose and expression, low quality 2D surveillance video clips, along with ground truth annotations for them. Based on this Multi-Dim face database, extensive evaluation experiments have been done with state-of-the-art baseline methods for constructing 3D morphable model, reconstructing 3D faces from single images, 3D-assisted pose normalization for face verification, and 3D-rendered multiview gallery for face identification. Our results show that 3D face technology does help in improving unconstrained 2D face recognition when the probe 2D face images are of reasonable quality, whereas it deteriorates rather than improves the face recognition accuracy when the probe 2D face images are of poor quality. We will make Multi-Dim freely available to the community for the purpose of advancing the 3D-based unconstrained 2D face recognition and related techniques towards real-world applications.

@inproceedings{ multi-dim-a-multi-dimensional-face-database,
  author = { Feng Liu and Jun Hu and Jianwei Sun and Yang Wang and Qijun Zhao },
  title = { Multi-Dim: A Multi-Dimensional Face Database Towards the Application of 3D Technology in Real-World Scenarios },
  booktitle = { Proceedings of the International Joint Conference on Biometrics (IJCB) },
  pages = { 342--351 },
  year = { 2017 },
}

2016

Joint Face Alignment and 3D Face Reconstruction
Feng Liu, Dan Zeng, Qijun Zhao, Xiaoming Liu
ECCV, 2016 (Spotlight presentation) [4.6% (72/1561)]
bibtex   abstract   pdf

We present an approach to simultaneously solve the two problems of face alignment and 3D face reconstruction from an input 2D face image of arbitrary poses and expressions. The proposed method iteratively and alternately applies two sets of cascaded regressors, one for updating 2D landmarks and the other for updating the reconstructed pose-expression-normalized (PEN) 3D face shape. The 3D face shape and the landmarks are correlated via a 3D-to-2D mapping matrix. In each iteration, adjustment to the landmarks is first estimated via a landmark regressor, and this landmark adjustment is also used to estimate 3D face shape adjustment via a shape regressor. The 3D-to-2D mapping is then computed based on the adjusted 3D face shape and 2D landmarks, and it further refines the 2D landmarks. An effective algorithm is devised to learn these regressors based on a training dataset of paired annotated 3D face shapes and 2D face images. Compared with existing methods, the proposed method can fully automatically generate PEN 3D face shapes in real time from a single 2D face image and locate both visible and invisible 2D landmarks. Extensive experiments show that the proposed method can achieve state-of-the-art accuracy in both face alignment and 3D face reconstruction, and benefit face recognition owing to its reconstructed PEN 3D face shapes.

@inproceedings{ joint-face-alignment-and-3d-face-reconstruction,
  author = { Feng Liu and Dan Zeng and Qijun Zhao and Xiaoming Liu },
  title = { Joint Face Alignment and 3D Face Reconstruction },
  booktitle = { Proceedings of the European Conference on Computer Vision },
  address = { Amsterdam, The Netherlands },
  month = { October },
  year = { 2016 },
}

Academic Services


    Conference Reviewer: CVPR {2019, 2020, 2021, 2022}, ICCV {2019, 2021}, ECCV {2020, 2022}, NeurIPS 2022, ICLR {2022, 2023}, AAAI {2020, 2021, 2022}, IJCAI {2021, 2022}, WACV {2020, 2021, 2022}, ACCV 2020, IJCB 2020, ICB 2019, FG 2019

    Journal Reviewer: TPAMI, TIFS, TIP, PR, TMM, TOMM


        Website design inspired by here. Last updated 12/19/2020.