Update readme.md
cleardusk committed Jul 18, 2018
1 parent 078edcb commit 44ed21e
Showing 5 changed files with 29 additions and 11 deletions.
23 changes: 21 additions & 2 deletions readme.md
@@ -25,6 +25,14 @@ Several results (inferred from model *phase1_wpdc_vdc.pth.tar*) are shown below
publisher={IEEE}
}

@inproceedings{zhu2016face,
  title={Face Alignment Across Large Poses: A 3D Solution},
  author={Zhu, Xiangyu and Lei, Zhen and Liu, Xiaoming and Shi, Hailin and Li, Stan Z},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={146--155},
  year={2016}
}



## Requirements
- PyTorch >= 0.4.0
@@ -33,14 +41,16 @@ Several results (inferred from model *phase1_wpdc_vdc.pth.tar*) are shown below
I strongly recommend using Python 3.6 instead of older versions for its better design.

## Inference speed
With a batch size of 128, one MobileNet-V1 inference pass takes about 34.7 ms, an average of about **0.27 ms per image**.

<p align="left">
<img src="imgs/inference_speed.png" alt="Inference speed" width="600px">
</p>
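
The per-image figure is simply the batch latency divided by the batch size; a quick sketch of the arithmetic, using the numbers measured above:

```python
def per_image_latency_ms(batch_latency_ms: float, batch_size: int) -> float:
    """Average per-image latency given the latency of one full batch."""
    return batch_latency_ms / batch_size

# Numbers from the measurement above: 34.7 ms for a batch of 128 images.
print(round(per_image_latency_ms(34.7, 128), 2))  # → 0.27
```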

## Evaluation
First, download the cropped AFLW and AFLW2000-3D test sets in [test.data.zip](https://pan.baidu.com/s/1DTVGCG5k0jjjhOc8GcSLOw), then unzip it and put it in the root directory.
Next, run the benchmark code by providing the trained model path.
-I have already provided four pre-trained models in `models` directory. These models are trained using different loss in the first stage. The model size is about 13M due to the high efficiency of mobilenet-v1 structure.
+I have already provided four pre-trained models in the `models` directory. These models are trained with different losses in the first stage. The model size is only about 13 MB thanks to the efficient MobileNet-V1 structure.
```
python3 ./benchmark.py -c models/phase1_wpdc_vdc.pth.tar
```
@@ -55,6 +65,15 @@ The performance of the pre-trained models is shown below. In the first stage, the
| *phase1_wpdc_vdc.pth.tar* | **5.401±0.754** | **4.252±0.976** |
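
The numbers in the table are mean ± std of the Normalized Mean Error (NME) over each test set. A minimal sketch of the metric, assuming the per-image landmark error is divided by a normalization factor such as the bounding-box size (the exact normalization used here is defined in `benchmark.py`; the names below are illustrative):

```python
import math

def nme(pred, gt, normalizer):
    """Normalized Mean Error: mean landmark distance / normalization factor.

    pred, gt: lists of (x, y) landmark coordinates for one image.
    normalizer: e.g. a bounding-box size (an assumption for illustration).
    """
    dists = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(dists) / (len(dists) * normalizer)

# Toy example: every predicted landmark is off by 1 pixel, normalizer 100.
pred = [(1.0, 0.0), (11.0, 0.0), (21.0, 0.0)]
gt = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]
print(nme(pred, gt, 100.0))  # → 0.01
```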

## Training
-The training scripts lie in `training` directory.
+The training scripts lie in the `training` directory. The related resources are listed in the table below.

| Data | Link | Description |
|:-:|:-:|:-:|
| train.configs | [BaiduYun](https://pan.baidu.com/s/1ozZVs26-xE49sF7nystrKQ) or [Google Drive]() (Coming soon), 217M | Directory containing the 3DMM parameters and file lists of the training dataset |
| train_aug_120x120.zip | [BaiduYun](https://pan.baidu.com/s/19QNGst2E1pRKL7Dtx_L1MA) or [Google Drive]() (Coming soon), 2.15G | Cropped images of the augmented training dataset |
| test.data.zip | [BaiduYun](https://pan.baidu.com/s/1DTVGCG5k0jjjhOc8GcSLOw) or [Google Drive]() (Coming soon), 151M | Cropped images of the AFLW and AFLW2000-3D test sets |

After preparing the training dataset and configuration files, go into the `training` directory and run the bash scripts to train. All training parameters are set in the bash scripts.
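
Before launching the scripts, it can help to check that the `train.configs` files the bash scripts reference are actually in place. A small hypothetical helper (the file names are taken from the scripts in this commit; the helper itself is not part of the repo):

```python
from pathlib import Path

# File names referenced by the training scripts in this commit.
REQUIRED = [
    'train_aug_120x120.list.train',
    'train_aug_120x120.list.val',
    'param_all_norm_val.pkl',
]

def missing_configs(config_dir):
    """Return the required files that are absent from config_dir."""
    root = Path(config_dir)
    return [name for name in REQUIRED if not (root / name).exists()]

# Before training, this list should be empty:
print(missing_configs('../train.configs'))
```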

## Acknowledgement
Thanks for your interest in this repo. If your research benefits from this repo, please cite it and star it : )
3 changes: 2 additions & 1 deletion train.py
@@ -224,7 +224,8 @@ def main():
if Path(args.resume).is_file():
logging.info(f'=> loading checkpoint {args.resume}')

-checkpoint = torch.load(args.resume)['state_dict']
+checkpoint = torch.load(args.resume, map_location=lambda storage, loc: storage)['state_dict']
+# checkpoint = torch.load(args.resume)['state_dict']
model.load_state_dict(checkpoint)

else:
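
The `map_location=lambda storage, loc: storage` added above keeps every deserialized tensor on the CPU, so a checkpoint saved on a GPU machine can be restored without CUDA. Since a PyTorch checkpoint is a pickled dictionary, the structure that `['state_dict']` indexes into can be illustrated with plain `pickle` (the `epoch` key and the toy weights are hypothetical stand-ins):

```python
import os
import pickle
import tempfile

# A PyTorch checkpoint is a pickled dict; train.py indexes its 'state_dict'
# entry. The 'epoch' key and the tiny weights here are hypothetical stand-ins.
ckpt = {'state_dict': {'fc.weight': [0.1, 0.2]}, 'epoch': 42}

path = os.path.join(tempfile.mkdtemp(), 'ckpt.pth.tar')
with open(path, 'wb') as f:
    pickle.dump(ckpt, f)

with open(path, 'rb') as f:
    state_dict = pickle.load(f)['state_dict']

print(sorted(state_dict))  # → ['fc.weight']
```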
2 changes: 1 addition & 1 deletion training/train_pdc.sh
@@ -24,5 +24,5 @@ LOG_FILE="${LOG_DIR}/${LOG_ALIAS}_`date +'%Y-%m-%d_%H:%M.%S'`.log"
--workers=8 \
--filelists-train="../train.configs/train_aug_120x120.list.train" \
--filelists-val="../train.configs/train_aug_120x120.list.val" \
-  --root="/mnt/ramdisk/train_aug_120x120" \
+  --root="/path/to/train_aug_120x120" \
--log-file="${LOG_FILE}"
5 changes: 2 additions & 3 deletions training/train_vdc.sh
@@ -15,7 +15,7 @@ LOG_FILE="${LOG_DIR}/${LOG_ALIAS}_`date +'%Y-%m-%d_%H:%M.%S'`.log"
--param-fp-val='../train.configs/param_all_norm_val.pkl' \
--warmup=-1 \
--opt-style=resample \
-  --resample-num=232 \
+  --resample-num=132 \
--batch-size=512 \
--base-lr=0.00001 \
--epochs=50 \
@@ -25,6 +25,5 @@ LOG_FILE="${LOG_DIR}/${LOG_ALIAS}_`date +'%Y-%m-%d_%H:%M.%S'`.log"
--workers=8 \
--filelists-train="../train.configs/train_aug_120x120.list.train" \
--filelists-val="../train.configs/train_aug_120x120.list.val" \
-  --root="/mnt/ramdisk/train_aug_120x120" \
+  --root="/path/to/train_aug_120x120" \
--log-file="${LOG_FILE}"

7 changes: 3 additions & 4 deletions training/train_wpdc.sh
@@ -15,16 +15,15 @@ LOG_FILE="${LOG_DIR}/${LOG_ALIAS}_`date +'%Y-%m-%d_%H:%M.%S'`.log"
--param-fp-val='../train.configs/param_all_norm_val.pkl' \
--warmup=5 \
--opt-style=resample \
-  --resample-num=232 \
+  --resample-num=132 \
--batch-size=512 \
--base-lr=0.02 \
--epochs=50 \
--milestones=30,40 \
--print-freq=50 \
-  --devices-id=0,1,2,3 \
+  --devices-id=0,1 \
--workers=8 \
--filelists-train="../train.configs/train_aug_120x120.list.train" \
--filelists-val="../train.configs/train_aug_120x120.list.val" \
-  --root="/mnt/ramdisk/train_aug_120x120" \
+  --root="/path/to/train_aug_120x120" \
--log-file="${LOG_FILE}"
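
The `--milestones=30,40` flag above describes a step learning-rate schedule: the rate drops at each listed epoch. A sketch assuming the common decay factor of 0.1 per milestone (the actual factor is set in the training code, so `gamma` here is an assumption):

```python
def step_lr(base_lr, epoch, milestones, gamma=0.1):
    """Learning rate after decaying by `gamma` at each passed milestone."""
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * gamma ** passed

# With base-lr=0.02 and milestones 30,40 as in train_wpdc.sh:
for epoch in (0, 30, 40):
    print(epoch, step_lr(0.02, epoch, (30, 40)))
```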
