Semi-Parallel Deep Neural Network (SPDNN) Hybrid Architecture, First Application on Depth from Monocular Camera

This repository contains the code for the method presented in the following paper:

Bazrafkan, S., Javidnia, H., Lemley, J. and Corcoran, P., 2017. "Depth from Monocular Images using a Semi-Parallel Deep Neural Network (SPDNN) Hybrid Architecture". arXiv preprint arXiv:1703.03867


As described in the Training section of the paper, four experiments are designed in this project (a sketch of how each experiment's input can be assembled follows the list):

Exp1: Input: Left Visible Image + Pixel-wise Segmented Image. Target: Post-Processed Depth map.

Exp2: Input: Left Visible Image. Target: Post-Processed Depth map.

Exp3: Input: Left Visible Image + Pixel-wise Segmented Image. Target: Depth map.

Exp4: Input: Left Visible Image. Target: Depth map.
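
The sketch below is not part of the original codebase; it only illustrates how the input for each experiment could be assembled: Exp1 and Exp3 stack the left visible image with its pixel-wise segmentation into a two-channel array, while Exp2 and Exp4 use the visible image alone. The file paths, grayscale conversion, and channel ordering are assumptions.

```python
# Minimal input-assembly sketch; file names, grayscale conversion and
# channel ordering are assumptions, not the repository's actual pipeline.
import cv2
import numpy as np

def build_input(left_image_path, segmentation_path=None):
    """Return a C x H x W array: visible image + segmentation (Exp1/Exp3),
    or the visible image alone (Exp2/Exp4)."""
    left = cv2.imread(left_image_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    if segmentation_path is None:
        return left[np.newaxis, :, :]              # Exp2 / Exp4: 1 channel
    seg = cv2.imread(segmentation_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    return np.stack([left, seg], axis=0)           # Exp1 / Exp3: 2 channels

# Hypothetical usage:
# x = build_input("kitti/image_2/000000_10.png", "kitti/segnet/000000_10.png")
```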


To prepare the input for training:

  1. Install Caffe SegNet
  2. Train SegNet on the CamVid road scene database
  3. Use the trained model to segment the images of the KITTI 2012 and 2015 datasets (see the inference sketch below).
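
If you use the Caffe Python bindings, the following hedged sketch shows how a trained SegNet model could be applied to a KITTI frame. The prototxt/caffemodel paths and the "data"/"argmax" blob names are assumptions based on common SegNet deployments; adjust them to your trained network.

```python
# Hedged SegNet inference sketch with the Caffe Python bindings.
# Paths, input size, and blob names are assumptions; adapt them to your model.
import caffe
import cv2
import numpy as np

caffe.set_mode_gpu()
net = caffe.Net("segnet_inference.prototxt",   # hypothetical deploy prototxt
                "segnet_camvid.caffemodel",    # hypothetical trained weights
                caffe.TEST)

image = cv2.imread("kitti/image_2/000000_10.png")       # BGR, H x W x 3
image = cv2.resize(image, (480, 360))                    # typical SegNet CamVid size
blob = image.transpose(2, 0, 1)[np.newaxis, ...].astype(np.float32)

net.blobs["data"].reshape(*blob.shape)
net.blobs["data"].data[...] = blob
net.forward()

labels = net.blobs["argmax"].data[0, 0]                  # per-pixel class indices
cv2.imwrite("kitti/segnet/000000_10.png", labels.astype(np.uint8))
```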


To prepare the target for training:

  1. Estimate depth from the KITTI stereo pairs using the Adaptive Random Walk with Restart algorithm (a stand-in sketch follows this list)
  2. Post-process the initial depth maps using our post-processing method
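
The depth maps in the paper come from the Adaptive Random Walk with Restart algorithm; purely as a stand-in illustration, the sketch below computes a disparity map with OpenCV's semi-global block matching and converts it to depth. The focal length and baseline are placeholders; take the real values from the KITTI calibration files.

```python
# Stand-in stereo depth sketch (NOT the paper's ARWR method): disparity from
# OpenCV StereoSGBM, then depth = focal * baseline / disparity.
# File paths and calibration values are placeholders.
import cv2
import numpy as np

left = cv2.imread("kitti/image_2/000000_10.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("kitti/image_3/000000_10.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point scale

focal_px = 721.5     # placeholder focal length in pixels (read from KITTI calibration)
baseline_m = 0.54    # placeholder stereo baseline in metres
depth = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)

# 8-bit clipping here is only for a quick visual check of the result.
cv2.imwrite("kitti/depth/000000_10.png", np.clip(depth, 0, 255).astype(np.uint8))
```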

You can reproduce the experiments described in the paper using the code in this repository and the prepared data.


Please cite the following paper when using this code:

Bazrafkan, S., Javidnia, H., Lemley, J. and Corcoran, P., 2017. "Depth from Monocular Images using a Semi-Parallel Deep Neural Network (SPDNN) Hybrid Architecture". arXiv preprint arXiv:1703.03867
