JiangHe96/DL4sSR

Benchmark for a paper submitted to Information Fusion

Spectral super-resolution meets deep learning: achievements and challenges

Jiang He, Qiangqiang Yuan, Jie Li, Yi Xiao, Denghong Liu, Huanfeng Shen, and Liangpei Zhang, Wuhan University

  • Code for the paper "Spectral super-resolution meets deep learning: achievements and challenges", published in Information Fusion.

  • A benchmark of deep learning-based spectral super-resolution algorithms, covering the workflows of spectral recovery, colorization, and spectral compressive imaging.

Loading Datasets

We provide classical datasets for three applications:

Spectral recovery: ARAD_1K dataset

The dataset used for spectral recovery is the public hyperspectral image dataset ARAD_1K, released for NTIRE 2022.

Only the RGB images are provided on GitHub; the labels of the training dataset can be downloaded from Zenodo.

Please put the training dataset into ./2SR/dataset/ and unzip it to ./2SR/dataset/Train_Spec/.

Colorization: SUN dataset

We use only part of the SUN dataset; details can be found in our paper.

The test images are provided on GitHub; the training dataset can be downloaded from Zenodo.

Please put the training dataset into ./1colorization/ and name it color_train.h5.
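Before training, it can help to verify what the HDF5 file actually contains. A minimal sketch using h5py, assuming only that color_train.h5 is a standard HDF5 file (the dataset key names inside it are not documented here, so none are hard-coded):

```python
import h5py


def list_h5_contents(path):
    """Return (name, shape, dtype) for every dataset in an HDF5 file."""
    entries = []
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            # visititems walks groups recursively; keep only datasets
            if isinstance(obj, h5py.Dataset):
                entries.append((name, obj.shape, obj.dtype))
        f.visititems(visit)
    return entries


if __name__ == "__main__":
    # Hypothetical path from the instructions above
    for name, shape, dtype in list_h5_contents("./1colorization/color_train.h5"):
        print(f"{name}: shape={shape}, dtype={dtype}")
```

Running this once on the downloaded file shows which keys the data loader should read.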

Spectral compressive imaging: CAVE dataset

We designed our SCI procedure following TSA-Net.

./3SCI/mask.mat is the mask used in the degradation. The training dataset can be downloaded from Zenodo.

Please put the training dataset into ./3SCI/ and name it 26train_256_enhanced.h5.
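The TSA-Net-style degradation codes each spectral band with the mask, shifts it along one spatial axis, and sums the bands into a single 2-D snapshot measurement. A sketch of that forward model follows; the band count, shift step, and array layout are illustrative assumptions, not the repository's exact implementation:

```python
import numpy as np


def sci_measurement(hsi, mask, step=2):
    """Simulate a CASSI-style snapshot measurement (TSA-Net setting, sketched).

    hsi:  hyperspectral cube of shape (H, W, bands)
    mask: binary coding mask of shape (H, W), e.g. loaded from mask.mat
    step: dispersion shift in pixels per band (illustrative default)
    """
    h, w, bands = hsi.shape
    # The measurement is widened by the total dispersion shift
    meas = np.zeros((h, w + step * (bands - 1)), dtype=hsi.dtype)
    for b in range(bands):
        coded = hsi[:, :, b] * mask                 # spatial coding by the mask
        meas[:, b * step : b * step + w] += coded   # shift by band index and sum
    return meas
```

Reconstruction networks in ./3SCI/ then learn to invert this many-to-one mapping.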

Model Zoo

We have collected some classical spectral super-resolution algorithms, including DenseUnet [42], CanNet [45], HSCNN+ [50], sRCNN [53], AWAN [60], FMNet [69], HRNet [70], HSRnet [71], HSACS [73], GDNet [77], and SSDCN [79].

If you want to use the pretrained models, please contact [email protected].

Your own models

If you want to run your own models with this benchmark, put your model file xxxxx.py into ./models/. Then define your model in the specific application; for example, to run your model in spectral recovery, define it in ./2SR/model.py.
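As an illustration, wiring a new model into ./2SR/model.py could look like the hypothetical sketch below. The class name, channel counts, and the `build_model` dispatch function are all invented for this example; the repository's actual dispatch code may differ:

```python
import torch
import torch.nn as nn


class MyNet(nn.Module):
    """Toy RGB -> 31-band spectral recovery network (illustrative only)."""

    def __init__(self, in_channels=3, out_channels=31):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)


def build_model(method):
    """Map the --method string from argparse to a model instance."""
    if method == "MyNet":
        return MyNet()
    raise ValueError(f"Unknown method: {method}")
```

With a dispatch like this, `--method MyNet` on the training command line would select the new model.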

Running Details

The code has been tested on PyTorch 1.6.

Spectral recovery

Our implementation was improved following MST++.

For training, run train_adalr.py under ./2SR:

python train_adalr.py --method HSRnet --batchSize 2 --gpus 0

More details can be found in the 'argparse' help in train_adalr.py.

After training, run demo_try.py to obtain the test results:

python demo_try.py --model CanNet --name CanNet_b8_adalr0.0008 --time 2023_04_17_23_09_36 --epoch 200 --gpus 0 --data_root dataset/

'time' and 'name' can be found in ./2SR/checkpoint/.

Colorization

For training, run train_adalr.py under ./1colorization:

python train_adalr.py --method HSRnet_color --batchSize 2 --gpus 0

More details can be found in the 'argparse' help in train_adalr.py.

After training, run demo_try.py to obtain the test results:

python demo_try.py --name CanNet_10_b8_adam_L1loss_adalr0.0004 --scale 1 --gpus 0

'scale' is the spatial resolution ratio, used to calculate ERGAS. Note: you should change 'bestepoch' in demo_try.py, or change 'path' to your checkpoint path.
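For reference, one common definition of ERGAS, with 'scale' as the spatial resolution ratio, can be sketched as:

```python
import numpy as np


def ergas(ref, est, scale=1):
    """ERGAS (one common definition):
    100/scale * sqrt(mean over bands of (RMSE_k / mean_k)^2).

    ref, est: arrays of shape (H, W, bands); ref is the ground truth.
    """
    terms = []
    for k in range(ref.shape[-1]):
        rmse = np.sqrt(np.mean((ref[..., k] - est[..., k]) ** 2))
        terms.append((rmse / np.mean(ref[..., k])) ** 2)
    return 100.0 / scale * np.sqrt(np.mean(terms))
```

With scale=1, as in the command above, ERGAS reduces to 100 times the root-mean of the per-band relative RMSEs squared; a perfect reconstruction scores 0.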

Spectral compressive imaging

For the standard training, run train.py under ./3SCI:

python train.py --method SSDCN --batchSize 2 --gpus 0

More details can be found in the 'argparse' help in train.py.

For the new assumed spectral imaging setting in our paper, run train_meaninput.py under ./3SCI:

python train_meaninput.py --method CanNet --batchSize 2 --gpus 0

More details can be found in the 'argparse' help in train_meaninput.py. Before training, download the new training dataset 26train_256_enhanced.h5.

After training, run test.py to obtain the test results:

python test.py --name FMNet_step10_b1_adam_L1loss_adalr0.0001 --scale 1 --gpus 0

'scale' is the spatial resolution ratio, used to calculate ERGAS. Note: 'test_ite' is set to 200 in this application.

Contact

If you have any questions, please feel free to contact [email protected].

Reference

Please cite:

@article{hj2023_DL4sSR,
title={Spectral super-resolution meets deep learning: achievements and challenges},
author={He, Jiang and Yuan, Qiangqiang and Li, Jie and Xiao, Yi and Liu, Denghong and Shen, Huanfeng and Zhang, Liangpei},
journal={Information Fusion},
volume={97},
pages={101812},
year={2023},
}
