(CVPR 2020) DUNIT: Detection-Based Unsupervised Image-to-Image Translation


Deblina Bhattacharjee, Seungryong Kim, Guillaume Vizier, Mathieu Salzmann

[Figure: abstract/teaser figure from the paper]

CVPR 2020 Paper

Project Organization

├── LICENSE
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── docker             <- Dockerfiles for running the models
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── app.py             <- Interactive demonstration of the behavior of the multi-box losses
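
Because `setup.py` makes the project pip-installable in editable mode (`pip install -e .`), the `src` package can be imported from anywhere. The following is a minimal sketch of how the modules in the tree above might be driven end to end; the module paths come from the layout, but the `main()` entry points are assumptions for illustration, not functions documented by the repo:

```python
# Illustrative only: module paths follow the project tree, but the main()
# entry points are assumed, not taken from the repository.
from src.data import make_dataset
from src.features import build_features
from src.models import train_model, predict_model
from src.visualization import visualize

if __name__ == "__main__":
    make_dataset.main()      # download or generate the raw data
    build_features.main()    # turn raw data into features for modeling
    train_model.main()       # train the translation model
    predict_model.main()     # run inference with the trained model
    visualize.main()         # create exploratory and results visualizations
```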

Installation

  1. Clone the repo: `git clone https://github.com/IVRL/Dunit.git`
  2. Install the requirements: `pip install -r requirements.txt`

Interactive visualization of the behavior of the multi-box losses

  1. Run `python app.py`
  2. Open `localhost:8050` in your favorite browser.
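
Port 8050 is Dash's default, which suggests `app.py` is a Dash application. Below is a minimal, hypothetical sketch of an interactive box-overlap demo of this kind; the layout, callback, and IoU computation are illustrative assumptions, not the actual contents of `app.py`:

```python
# Hypothetical Dash sketch (not the repo's app.py): slide one box past another
# and display their intersection-over-union, served on localhost:8050.
import dash
from dash import dcc, html
from dash.dependencies import Input, Output
import plotly.graph_objects as go

app = dash.Dash(__name__)

app.layout = html.Div([
    html.H3("Box overlap demo (illustrative only)"),
    dcc.Slider(id="shift", min=0, max=2, step=0.1, value=0.5),
    dcc.Graph(id="boxes"),
])

@app.callback(Output("boxes", "figure"), Input("shift", "value"))
def update(shift):
    # Fixed unit box A and a unit box B translated horizontally by `shift`.
    ax0, ay0, ax1, ay1 = 0.0, 0.0, 1.0, 1.0
    bx0, by0, bx1, by1 = shift, 0.0, shift + 1.0, 1.0
    inter = max(0.0, min(ax1, bx1) - max(ax0, bx0)) * \
            max(0.0, min(ay1, by1) - max(ay0, by0))
    iou = inter / (2.0 - inter)  # both areas are 1, so union = 2 - inter
    fig = go.Figure()
    fig.add_shape(type="rect", x0=ax0, y0=ay0, x1=ax1, y1=ay1, line=dict(color="blue"))
    fig.add_shape(type="rect", x0=bx0, y0=by0, x1=bx1, y1=by1, line=dict(color="red"))
    fig.update_layout(title=f"IoU = {iou:.2f}",
                      xaxis_range=[-0.5, 3.5], yaxis_range=[-0.5, 1.5])
    return fig

if __name__ == "__main__":
    app.run_server(debug=True)  # serves on localhost:8050 by default
```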

Citation

If you find the code, data, or models useful, please cite this paper:

@InProceedings{Bhattacharjee_2020_CVPR,
  author    = {Bhattacharjee, Deblina and Kim, Seungryong and Vizier, Guillaume and Salzmann, Mathieu},
  title     = {DUNIT: Detection-Based Unsupervised Image-to-Image Translation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020}
}

License

[Creative Commons Attribution Non-commercial No Derivatives](http://creativecommons.org/licenses/by-nc-nd/3.0/)

Project based on the cookiecutter data science project template. #cookiecutterdatascience