
Commit 9aaf00b
demo code release
1 parent 22437bb


59 files changed: +2819 −347 lines

.gitignore

+6 −1

@@ -1 +1,6 @@
-.DS_Store
+models/
+logs*/
+*.json
+*.DS_Store
+*.pyc
+forrelease.sh

README.md

+64
# End-to-end Recovery of Human Shape and Pose

Angjoo Kanazawa, Michael J. Black, David W. Jacobs, Jitendra Malik
CVPR 2018

[Project Page](https://akanazawa.github.io/hmr/)
![Teaser Image](https://akanazawa.github.io/hmr/resources/images/teaser.png)

### Requirements
- Python 2.7
- [TensorFlow](https://www.tensorflow.org/), tested on version 1.3

### Installation

#### Setup virtualenv
```
virtualenv venv_hmr
source venv_hmr/bin/activate
pip install -U pip
deactivate
source venv_hmr/bin/activate
pip install -r requirements.txt
```

#### Install TensorFlow
With GPU:
```
pip install tensorflow-gpu==1.3.0
```
Without GPU:
```
pip install tensorflow==1.3.0
```
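To sanity-check which TensorFlow build the virtualenv picked up, a small illustrative check (not part of this commit):
```
python -c "import tensorflow as tf; print(tf.__version__)"  # expect 1.3.0
```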
### Demo

1. Download the pre-trained models
```
wget https://people.eecs.berkeley.edu/~kanazawa/cachedir/hmr/models.tar.gz && tar -xf models.tar.gz
```

2. Run the demo
```
python -m demo --img_path data/coco1.png
python -m demo --img_path data/im1954.jpg
```

On images that are not tightly cropped, you can run
[openpose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) and supply
its output json (run openpose with the `--write_json` option).
When `--json_path` is specified, the demo computes the right scale and bbox center to run HMR:
```
python -m demo --img_path data/random.jpg --json_path data/random_keypoints.json
```
(The demo runs only on the most confident bounding box; see `src/util/openpose.py:get_bbox`. A sketch of the idea follows.)
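To make that concrete, here is a minimal sketch of how a scale and bbox center can be derived from openpose keypoints. It is illustrative only: the helper name `get_bbox_sketch`, the `'pose_keypoints'` JSON key, and the 150px target height are assumptions here; the authoritative logic is `src/util/openpose.py:get_bbox`.
```
# Illustrative sketch only; see src/util/openpose.py:get_bbox for the real code.
import json
import numpy as np

def get_bbox_sketch(json_path, vis_thresh=0.2, target_height=150.):
    # openpose --write_json emits a 'people' list of flat [x, y, confidence] triples.
    with open(json_path) as f:
        people = json.load(f)['people']
    kps = [np.array(p['pose_keypoints']).reshape(-1, 3) for p in people]
    # Keep only the most confident detection (sum of keypoint confidences).
    kp = max(kps, key=lambda k: k[:, 2].sum())
    vis = kp[:, 2] > vis_thresh             # confidently detected joints only
    min_pt = kp[vis, :2].min(axis=0)
    max_pt = kp[vis, :2].max(axis=0)
    center = (min_pt + max_pt) / 2.         # bbox center in (x, y)
    person_height = (max_pt - min_pt).max()
    scale = target_height / person_height   # rescale the person to ~150px
    return scale, center
```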
### Training code/data

Coming soon.

### Citation
If you use this code for your research, please consider citing:
```
@inProceedings{kanazawaHMR18,
  title={End-to-end Recovery of Human Shape and Pose},
  author = {Angjoo Kanazawa
  and Michael J. Black
  and David W. Jacobs
  and Jitendra Malik},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}
```

__init__.py: whitespace-only changes.

data/coco1.png (96.3 KB)
data/coco2.png (97.3 KB)
data/coco3.png (72 KB)
data/coco4.png (97 KB)
data/coco5.png (100 KB)
data/coco6.png (94.2 KB)
data/im1954.jpg (4.44 KB)
data/im1963.jpg (4.68 KB)
data/random.jpg (125 KB)

demo.py

+135
"""
Demo of HMR.

Note that HMR requires the bounding box of the person in the image. The best
performance is obtained when the max length of the person in the image is
roughly 150px.

When only the image path is supplied, the demo assumes that the image is
centered on a person whose length is roughly 150px.
Alternatively, you can supply the output of openpose to figure out the bbox
and the right scale factor.

Sample usage:

# On an image tightly cropped around the person
python -m demo --img_path data/im1963.jpg
python -m demo --img_path data/coco1.png

# On images, with openpose output
python -m demo --img_path data/random.jpg --json_path data/random_keypoints.json
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import sys
from absl import flags
import numpy as np

import skimage.io as io
import tensorflow as tf

from src.util import renderer as vis_util
from src.util import image as img_util
from src.util import openpose as op_util
import src.config
from src.RunModel import RunModel

flags.DEFINE_string('img_path', 'data/im1963.jpg', 'Image to run')
flags.DEFINE_string(
    'json_path', None,
    'If specified, uses the openpose output to crop the image.')


def visualize(img, proc_param, joints, verts, cam):
    """
    Renders the result in the original image coordinate frame.
    """
    cam_for_render, vert_shifted, joints_orig = vis_util.get_original(
        proc_param, verts, cam, joints, img_size=img.shape[:2])

    # Render results: 2D skeleton, mesh overlay, and the mesh from the
    # original and two rotated (+/- 60 degree) viewpoints.
    skel_img = vis_util.draw_skeleton(img, joints_orig)
    rend_img_overlay = renderer(
        vert_shifted, cam=cam_for_render, img=img, do_alpha=True)
    rend_img = renderer(
        vert_shifted, cam=cam_for_render, img_size=img.shape[:2])
    rend_img_vp1 = renderer.rotated(
        vert_shifted, 60, cam=cam_for_render, img_size=img.shape[:2])
    rend_img_vp2 = renderer.rotated(
        vert_shifted, -60, cam=cam_for_render, img_size=img.shape[:2])

    import matplotlib.pyplot as plt
    plt.figure(1)
    plt.clf()
    plt.subplot(231)
    plt.imshow(img)
    plt.title('input')
    plt.axis('off')
    plt.subplot(232)
    plt.imshow(skel_img)
    plt.title('joint projection')
    plt.axis('off')
    plt.subplot(233)
    plt.imshow(rend_img_overlay)
    plt.title('3D mesh overlay')
    plt.axis('off')
    plt.subplot(234)
    plt.imshow(rend_img)
    plt.title('3D mesh')
    plt.axis('off')
    plt.subplot(235)
    plt.imshow(rend_img_vp1)
    plt.title('diff vp')
    plt.axis('off')
    plt.subplot(236)
    plt.imshow(rend_img_vp2)
    plt.title('diff vp')
    plt.axis('off')
    plt.draw()
    plt.show()


def preprocess_image(img_path, json_path=None):
    img = io.imread(img_path)

    if json_path is None:
        # No keypoints supplied: assume the person is centered in the image.
        scale = 1.
        center = np.round(np.array(img.shape[:2]) / 2).astype(int)
        # image center in (x, y)
        center = center[::-1]
    else:
        # Use the openpose detection for the person's scale and bbox center.
        scale, center = op_util.get_bbox(json_path)

    crop, proc_param = img_util.scale_and_crop(img, scale, center,
                                               config.img_size)

    # Normalize image pixels to [-1, 1]
    crop = 2 * ((crop / 255.) - 0.5)

    return crop, proc_param, img


def main(img_path, json_path=None):
    sess = tf.Session()
    model = RunModel(config, sess=sess)

    input_img, proc_param, img = preprocess_image(img_path, json_path)
    # Add batch dimension: 1 x D x D x 3
    input_img = np.expand_dims(input_img, 0)

    joints, verts, cams, joints3d, theta = model.predict(
        input_img, get_theta=True)

    visualize(img, proc_param, joints[0], verts[0], cams[0])


if __name__ == '__main__':
    config = flags.FLAGS
    config(sys.argv)

    config.batch_size = 1

    renderer = vis_util.SMPLRenderer(face_path=config.smpl_face_path)

    main(config.img_path, config.json_path)
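A note on the `model.predict(input_img, get_theta=True)` call above: besides joints and vertices, it returns the full HMR parameter vector theta. Per the paper, theta concatenates the weak-perspective camera (3 values), the SMPL pose (72 values), and the SMPL shape (10 values); the slicing below is a hedged sketch of that 85-D layout and should be confirmed against `src/RunModel.py`:
```
# Assumed layout (per the HMR paper): theta has shape (batch, 85).
cam_params = theta[0, :3]      # weak-perspective camera: scale, tx, ty
pose_params = theta[0, 3:75]   # SMPL pose: 24 joints x 3 axis-angle values
shape_params = theta[0, 75:]   # SMPL shape: 10 beta coefficients
```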

hmr/.gitignore

+5

models/
logs*/
*.json
*.DS_Store
*.pyc

hmr/README.md

+72
(Contents identical to README.md above.)

hmr/__init__.py: whitespace-only changes.

hmr/data/coco1.png (96.3 KB)
hmr/data/coco2.png (97.3 KB)
hmr/data/coco3.png (72 KB)
hmr/data/coco4.png (97 KB)
hmr/data/coco5.png (100 KB)
hmr/data/coco6.png (94.2 KB)
hmr/data/im1954.jpg (4.44 KB)
hmr/data/im1963.jpg (4.68 KB)
hmr/data/random.jpg (125 KB)
