Commit 0045bd1: "Inference done."
YuliangXiu committed Jan 30, 2022 (initial commit, 0 parents)
Showing 242 changed files with 45,845 additions and 0 deletions.
6 changes: 6 additions & 0 deletions .gitignore
@@ -0,0 +1,6 @@
data/*/*
!data/tbfo.ttf
__pycache__
results/*
.vscode
!.gitignore
52 changes: 52 additions & 0 deletions LICENSE
@@ -0,0 +1,52 @@
License

Software Copyright License for non-commercial scientific research purposes
Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use the ICON model, data and software, (the "Data & Software"), including 3D meshes, images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License

Ownership / Licensees
The Software and the associated materials have been developed at the Max Planck Institute for Intelligent Systems (hereinafter "MPI"). Any copyright or patent right is owned by and proprietary material of the Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (hereinafter “MPG”; MPI and MPG hereinafter collectively “Max-Planck”) hereinafter the “Licensor”.

License Grant
Licensor grants you (Licensee) personally a single-user, non-exclusive, non-transferable, free of charge right:

• To install the Model & Software on computers owned, leased or otherwise controlled by you and/or your organization;
• To use the Model & Software for the sole purpose of performing peaceful non-commercial scientific research, non-commercial education, or non-commercial artistic projects;
• To modify, adapt, translate or create derivative works based upon the Model & Software.

Any other use, in particular any use for commercial, pornographic, military, or surveillance purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, or production of other artifacts for commercial purposes. The Data & Software may not be used to create fake, libelous, misleading, or defamatory content of any kind excluding analyses in peer-reviewed scientific research. The Data & Software may not be reproduced, modified and/or made available in any form to any third party without Max-Planck’s prior written permission.

The Data & Software may not be used for pornographic purposes or to generate pornographic material whether commercial or not. This license also prohibits the use of the Software to train methods/algorithms/neural networks/etc. for commercial, pornographic, military, surveillance, or defamatory use of any kind. By downloading the Data & Software, you agree not to reverse engineer it.

No Distribution
The Data & Software and the license herein granted shall not be copied, shared, distributed, re-sold, offered for re-sale, transferred or sub-licensed in whole or in part except that you may make one copy for archive purposes only.

Disclaimer of Representations and Warranties
You expressly acknowledge and agree that the Data & Software results from basic research, is provided “AS IS”, may contain errors, and that any use of the Data & Software is at your sole risk. LICENSOR MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE DATA & SOFTWARE, NEITHER EXPRESS NOR IMPLIED, AND THE ABSENCE OF ANY LEGAL OR ACTUAL DEFECTS, WHETHER DISCOVERABLE OR NOT. Specifically, and not to limit the foregoing, licensor makes no representations or warranties (i) regarding the merchantability or fitness for a particular purpose of the Data & Software, (ii) that the use of the Data & Software will not infringe any patents, copyrights or other intellectual property rights of a third party, and (iii) that the use of the Data & Software will not cause any damage of any kind to you or a third party.

Limitation of Liability
Because this Data & Software License Agreement qualifies as a donation, according to Section 521 of the German Civil Code (Bürgerliches Gesetzbuch – BGB) Licensor as a donor is liable for intent and gross negligence only. If the Licensor fraudulently conceals a legal or material defect, they are obliged to compensate the Licensee for the resulting damage.
Licensor shall be liable for loss of data only up to the amount of typical recovery costs which would have arisen had proper and regular data backup measures been taken. For the avoidance of doubt Licensor shall be liable in accordance with the German Product Liability Act in the event of product liability. The foregoing applies also to Licensor’s legal representatives or assistants in performance. Any further liability shall be excluded.
Patent claims generated through the usage of the Data & Software cannot be directed towards the copyright holders.
The Data & Software is provided in the state of development the licensor defines. If modified or extended by Licensee, the Licensor makes no claims about the fitness of the Data & Software and is not responsible for any problems such modifications cause.

No Maintenance Services
You understand and agree that Licensor is under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Data & Software. Licensor nevertheless reserves the right to update, modify, or discontinue the Data & Software at any time.

Defects of the Data & Software must be notified in writing to the Licensor with a comprehensible description of the error symptoms. The notification of the defect should enable the reproduction of the error. The Licensee is encouraged to communicate any use, results, modification or publication.

Publications using the Model & Software
You acknowledge that the Data & Software is a valuable scientific resource and agree to appropriately reference the following paper in any publication making use of the Data & Software.

Citation:

@article{xiu2021icon,
title={ICON: Implicit Clothed humans Obtained from Normals},
author={Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J},
journal={arXiv preprint arXiv:2112.09127},
year={2021}
}

Commercial licensing opportunities
For commercial uses of the Model & Software, please send email to [email protected]

This Agreement shall be governed by the laws of the Federal Republic of Germany except for the UN Sales Convention.
205 changes: 205 additions & 0 deletions README.md
@@ -0,0 +1,205 @@
<!-- PROJECT LOGO -->
<br />
<p align="center">

<h1 align="center">ICON: Implicit Clothed humans Obtained from Normals</h1>

<a href="">
<img src="./assets/teaser.jpeg" alt="Logo" width="100%">
</a>

<p align="center">
arXiv, December 2021.
<br />
<a href="https://ps.is.tuebingen.mpg.de/person/yxiu"><strong>Yuliang Xiu</strong></a>
·
<a href="https://ps.is.tuebingen.mpg.de/person/jyang"><strong>Jinlong Yang</strong></a>
·
<a href="https://ps.is.mpg.de/~dtzionas"><strong>Dimitrios Tzionas</strong></a>
·
<a href="https://ps.is.tuebingen.mpg.de/person/black"><strong>Michael J. Black</strong></a>
</p>

<p align="center">
<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>
<a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a><br><br>
<a href='https://arxiv.org/abs/2112.09127'>
<img src='https://img.shields.io/badge/Paper-PDF-green?style=flat&logo=arXiv&logoColor=green' alt='Paper PDF'>
</a>
<a href='https://icon.is.tue.mpg.de/' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/Project-Page-blue?style=flat&logo=Google chrome&logoColor=blue' alt='Project Page'>
</a>
<a href='https://youtu.be/ZufrPvooR2Q' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/Youtube-Video-red?style=flat&logo=youtube&logoColor=red' alt='Youtube Video'>
</a>
<a href='https://huggingface.co/spaces/Yuliang/ICON_Avatar' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-SPACES-yellow' alt='HuggingFace Space'>
</a>
</p>
</p>

[ICON Demo](https://user-images.githubusercontent.com/7944350/146271038-4d571bd1-69c2-46ad-a796-112c480a4173.mp4)

<br />
<br />

<!-- TABLE OF CONTENTS -->
<details open="open" style='padding: 10px; border-radius:5px 30px 30px 5px; border-style: solid; border-width: 1px;'>
<summary>Table of Contents</summary>
<ol>
<li>
<a href="#who-needs-ICON">Who needs ICON</a>
</li>
<li>
<a href="#todo">TODO</a>
</li>
<li>
<a href="#installation">Installation</a>
</li>
<li>
<a href="#demo">Demo</a>
</li>
<li>
<a href="#citation">Citation</a>
</li>
<li>
<a href="#acknowledgments">Acknowledgments</a>
</li>
<li>
<a href="#disclosure">Disclosure</a>
</li>
<li>
<a href="#license">License</a>
</li>
<li>
<a href="#contact">Contact</a>
</li>
</ol>
</details>
<br />
<br />




## Who needs ICON?
- If you want to reconstruct 3D clothed humans in **unconstrained poses** from in-the-wild images
  - together with the body under clothing (e.g. SMPL, SMPL-X)
  - and clothed-body normal maps (front/back) predicted from the images

|![All Intermediate Results](assets/intermedias.gif)|
|:--:|
|*ICON's outputs from a single RGB image*|

- If you want to obtain a **realistic and animatable 3D clothed avatar** directly from a video or a sequence of monocular images
  - fully textured with per-vertex color
  - animatable via SMPL pose parameters
  - with pose-dependent clothing deformation


|![ICON+SCANimate+AIST++](assets/scanimate.gif)|
|:--:|
|*3D Clothed Avatar, created from 400+ images using **ICON+SCANimate**, animated by AIST++*|


<br/>
<br/>

## TODO

- [x] Testing code and pretrained models (*: self-implemented version)
  - [x] ICON (w/ & w/o global encoder)
  - [x] PIFu* (RGB image + predicted normal map as input)
  - [x] PaMIR* (RGB image + predicted normal map as input)
- [ ] Online app <a href='https://huggingface.co/spaces/Yuliang/ICON_Avatar' style='padding-left: 0.5rem;'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-SPACES-yellow' alt='HuggingFace Space'></a>
- [ ] Training code
- [ ] Dataset processing code
- [ ] Video2Avatar module


## Installation

Please follow the [Installation Instruction](docs/Installation.md) to set up all the required packages, extra data, and models.

## Demo

```bash
cd ICON/apps

# PIFu* (*: re-implementation)
python infer.py -cfg ../configs/pifu.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# PaMIR* (*: re-implementation)
python infer.py -cfg ../configs/pamir.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# ICON w/ global filter (better visual details --> lower Normal Error)
python infer.py -cfg ../configs/icon-filter.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# ICON w/o global filter (higher evaluation scores --> lower P2S/Chamfer Error)
python infer.py -cfg ../configs/icon-nofilter.yaml -gpu 0 -in_dir ../examples -out_dir ../results
```
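The four demo commands above differ only in the config file. As a convenience, a minimal batch sketch (assuming the config names listed above and a current directory of `ICON/apps`) could drive all of them in one go:

```shell
#!/bin/sh
# Sketch: run every reconstruction variant over the same example folder.
# Dry-run by default: each command is only printed; remove "echo" to execute.
for cfg in pifu pamir icon-filter icon-nofilter; do
  echo python infer.py -cfg ../configs/${cfg}.yaml -gpu 0 -in_dir ../examples -out_dir ../results
done
```

Results for each variant land in `../results`, so using a per-config `-out_dir` (e.g. `../results/${cfg}`) may be preferable if you want to compare outputs side by side.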

## More Qualitative Results

|![Comparison](assets/compare.gif)|
|:--:|
| *Comparison with other state-of-the-art methods* |
|![extreme](assets/large-scale.gif)|
| *Reconstruction on in-the-wild photos with extreme poses (GIF)* |
|![extreme](assets/demo.png)|
| *Reconstruction on in-the-wild photos with extreme poses (PNG)* |
|![extreme](assets/normal-pred.png)|
| *Predicted normals on in-the-wild images with extreme poses* |



<br/>
<br/>


## Citation

```bibtex
@article{xiu2021icon,
title={ICON: Implicit Clothed humans Obtained from Normals},
author={Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J},
journal={arXiv preprint arXiv:2112.09127},
year={2021}
}
```

## Acknowledgments

We thank [Yao Feng](https://ps.is.mpg.de/person/yfeng), [Soubhik Sanyal](https://ps.is.mpg.de/person/ssanyal), [Qianli Ma](https://ps.is.mpg.de/person/qma), [Xu Chen](https://ait.ethz.ch/people/xu/), [Hongwei Yi](https://ps.is.mpg.de/person/hyi), [Chun-Hao Paul Huang](https://ps.is.mpg.de/person/chuang2), and [Weiyang Liu](https://wyliu.com/) for their feedback and discussions, [Tsvetelina Alexiadis](https://ps.is.mpg.de/person/talexiadis) for her help with the AMT perceptual study, [Taylor McConnell](https://ps.is.mpg.de/person/tmcconnell) for her voice over, [Benjamin Pellkofer](https://is.mpg.de/person/bpellkofer) for the webpage, and [Yuanlu Xu](https://web.cs.ucla.edu/~yuanluxu/) for his help in comparing with ARCH and ARCH++.

Special thanks to [Vassilis Choutas](https://ps.is.mpg.de/person/vchoutas) for sharing the code of [bvh-distance-queries](./lib/bvh-distance-queries/README.md).

Here are some great resources we benefited from:

- [MonoPortDataset](https://github.com/Project-Splinter/MonoPortDataset) for Data Processing
- [PaMIR](https://github.com/ZhengZerong/PaMIR), [PIFu](https://github.com/shunsukesaito/PIFu), [PIFuHD](https://github.com/facebookresearch/pifuhd), and [MonoPort](https://github.com/Project-Splinter/MonoPort) for Benchmark
- [SCANimate](https://github.com/shunsukesaito/SCANimate) and [AIST++](https://github.com/google/aistplusplus_api) for Animation
- [rembg](https://github.com/danielgatis/rembg) for Human Segmentation
- [smplx](https://github.com/vchoutas/smplx), [PARE](https://github.com/mkocabas/PARE), [PyMAF](https://github.com/HongwenZhang/PyMAF), and [PIXIE](https://github.com/YadiraF/PIXIE) for Human Pose & Shape Estimation
- [CAPE](https://github.com/qianlim/CAPE) and [THuman](https://github.com/ZhengZerong/DeepHuman/tree/master/THUmanDataset) for Dataset
- [PyTorch3D](https://github.com/facebookresearch/pytorch3d) for Differential Rendering


Some images used in the qualitative examples come from [pinterest.com](https://www.pinterest.com/).

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 ([CLIPE Project](https://www.clipe-itn.eu)).


## Disclosure

MJB has received research gift funds from Adobe, Intel, Nvidia, Facebook, and Amazon. While MJB is a part-time employee of Amazon, his research was performed solely at, and funded solely by, Max Planck. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH.

## License
This code and model are available for non-commercial scientific research purposes as defined in the [LICENSE](LICENSE) file. By downloading and using the code and model you agree to the terms in the [LICENSE](LICENSE).

## Contact

For more questions, please contact [email protected]

For commercial licensing, please contact [email protected]