Build a prototyping pipeline to test different hyperspectral reconstruction and segmentation approaches leveraging public data and/or simulated data.
In this Google Sheets spreadsheet you can find the papers and datasets we have found that we believe might be useful for the project. Feel free to add any information you might have.
Additionally (if you haven't already), it might also be useful to check the Project Proposal's references. More specifically, I would recommend:
- Multi-Spectral Imaging via Computed Tomography (MUSIC) - Comparing Unsupervised Spectral Segmentations for Material Differentiation
- Spectral computed tomography: fundamental principles and recent developments
Ideally we keep the issues updated so we know who's doing what and what state the project is currently in. Nevertheless, here are some important deadlines:
- mid-presentation: 06.07.2023. By this date we are supposed to have a baseline script/model as well as the presentation.
- final presentation: 21.09.2023
- Adapt a method to do Hyperspectral CT Segmentation
- Approach 1:
- Slice Reconstruction from Histograms (using FBP, ART with TV, or any other similar method)
- Slice Segmentation and Classification (based on Automatic multi-organ segmentation in dual-energy CT (DECT) with dedicated 3D fully convolutional DECT networks)
- Find Method to Stack and visualize reconstruction
- Approach 2:
- Do Slice Reconstruction from Histograms using a learning-based approach (some methods can be found in the spreadsheet)
- Do direct reconstruction and segmentation using a learning-based approach.
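The classical reconstruction step in Approach 1 can be sketched with scikit-image's `radon`/`iradon` (FBP). The Shepp-Logan phantom below is only a stand-in for a single-energy-bin sinogram; in the real pipeline the same reconstruction would run once per spectral bin.

```python
# Hedged sketch: FBP slice reconstruction with scikit-image.
# The phantom stands in for one energy bin of the MUSIC data.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))      # ground-truth slice
angles = np.linspace(0.0, 180.0, 180, endpoint=False)  # projection angles
sinogram = radon(image, theta=angles)                  # forward projection
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMSE: {error:.4f}")
```

Swapping `iradon` for an iterative solver (e.g. SART with a TV penalty) follows the same sinogram-in, slice-out interface.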
Download the MUSIC 2D and MUSIC 3D spectral datasets and place them in the root of the project. After this, run the following script to transform the localized segmentations to our global mapping:
```
python src/DETCTCNN/data/dataset_relabeling.py --dataset /path/to/data_root
```
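The relabeling idea, mapping each sample's local label IDs onto one shared global class mapping, can be sketched with a NumPy lookup table. The mapping below is purely illustrative; the project's actual mapping lives in `dataset_relabeling.py`.

```python
import numpy as np

# Hypothetical local->global mapping for one sample; the real mapping
# differs per dataset and is defined in dataset_relabeling.py.
local_to_global = {0: 0, 1: 5, 2: 3, 3: 7}

local_seg = np.array([[0, 1, 1],
                      [2, 3, 0]])

# Build a lookup table so relabeling is one vectorized indexing operation.
lut = np.zeros(max(local_to_global) + 1, dtype=np.int64)
for local_id, global_id in local_to_global.items():
    lut[local_id] = global_id

global_seg = lut[local_seg]
print(global_seg)
```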
Install docker (if you don't already have it).
After that, go to the root folder of the repo and simply run:
```
docker build -t mlmi -f Dockerfile .
```
Once the image has been built, you can run it by executing:
```
sudo docker run --rm -v <DATASET FOLDER PATH>:/workspace/dataset -v <REPO FOLDER PATH>:/workspace --gpus all -it mlmi
```
From there, you can run commands as you normally would.
To exit the container, simply run `exit`.
This project contains several sub-topics explored during the Praktikum in order to achieve good results with Hyperspectral data.
Some pre-computed visualizations can be found here.
- The original data contained segmentations that are not appropriate for learning methods. Therefore, the data preprocessing files can be found here.
- The full dataset consists of volumes with 128 hyperspectral bands, which quickly runs into the curse of dimensionality. We therefore performed some data exploration: which bands are most informative per class, dimensionality reduction techniques like PCA, and data separation with UMAP.
- As PCA was not very useful, we explored Band Selection techniques, such as OPBS here and BSNet here.
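The PCA step of the exploration can be sketched via an SVD of the centered pixel-by-band matrix. The data below is a synthetic stand-in (a few informative bands among near-constant noise), not the MUSIC data itself.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in: 1000 pixels x 128 spectral bands, where only a few
# bands carry signal and the rest are near-constant noise.
n_pixels, n_bands = 1000, 128
X = rng.normal(0.0, 0.01, size=(n_pixels, n_bands))
X[:, [10, 40, 90]] += rng.normal(0.0, 1.0, size=(n_pixels, 3))

# PCA via SVD of the centered data: the explained-variance curve shows
# how few components capture most of the spectral variation.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)
print(f"variance in first 3 components: {explained[:3].sum():.3f}")
```

The caveat found in practice still applies: high explained variance does not guarantee the components are discriminative per class, which motivated the band selection methods below.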
Since we lack a substantial amount of 3D data (~4 samples with usable segmentations), we implemented a 2D convolutional network based on DECTCNN but adapted to hyperspectral data.
```
python src/DETCTCNN/model/train.py
python src/DETCTCNN/model/inference.py
python src/DETCTCNN/inference/3D_inference.py
```
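The adaptation can be sketched as a small fully convolutional network whose first layer accepts 128 input channels (one per band) and whose 1x1 head emits per-pixel class scores. Layer sizes and the class count here are illustrative; the actual model lives in `src/DETCTCNN/model`.

```python
import torch
import torch.nn as nn

# Minimal DECTCNN-style sketch adapted to 128 hyperspectral channels.
# Depth, widths, and n_classes are assumptions, not the real config.
class HyperspectralSegNet(nn.Module):
    def __init__(self, in_bands=128, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 1x1 conv produces a per-pixel class score map.
        self.classifier = nn.Conv2d(64, n_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))

model = HyperspectralSegNet()
x = torch.randn(1, 128, 64, 64)   # (batch, bands, H, W)
logits = model(x)
print(logits.shape)
```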
While exploring the effect of changing our network's receptive field to focus on the hyperspectral dimension, we posed the segmentation challenge as a per-pixel classification problem. Thus, we implemented a 1D convolutional network for the segmentation problem.
```
python src/OneD/OneDLogReg_train.py
python src/OneD/OneDLogReg_inference.py
python src/OneD/OneDLogReg_3Dinference.py
```
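The per-pixel framing can be sketched as follows: each pixel's 128-band spectrum is treated as a 1D signal and classified independently. The architecture and class count below are illustrative assumptions, not the project's `OneDLogReg` model.

```python
import torch
import torch.nn as nn

# Sketch of per-pixel spectral classification with a tiny 1D CNN.
class SpectrumClassifier(nn.Module):
    def __init__(self, n_classes=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),  # slide over bands
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool1d(1),                     # pool over bands
            nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, spectra):          # spectra: (n_pixels, n_bands)
        return self.net(spectra.unsqueeze(1))

model = SpectrumClassifier()
pixels = torch.randn(512, 128)           # 512 pixels from one slice
logits = model(pixels)
print(logits.shape)
```

Because each pixel is classified independently, a segmented slice is recovered simply by reshaping the per-pixel predictions back to the image grid.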
We modified BSNet for our particular dataset. To train and run the band selection network, follow the notebook at:
```
jupyter notebook band_selection/BSNetConvMusic.ipynb
```
We modified OPBS for our particular dataset. To run the band selection process, run the following script:
```
python band_selection/opbs.py
```
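The core OPBS idea, greedily picking the band with the largest residual after projecting out the already-selected bands, can be sketched with plain NumPy. This is a simplified illustration, not the optimized algorithm in `band_selection/opbs.py`.

```python
import numpy as np

def opbs_select(X, k):
    """Simplified orthogonal-projection band selection sketch.

    X: (n_pixels, n_bands) matrix. Returns indices of k bands that
    successively maximize the residual norm after projecting out the
    already-selected bands (a Gram-Schmidt-style greedy loop).
    """
    residual = (X - X.mean(axis=0)).copy()
    selected = []
    for _ in range(k):
        norms = np.linalg.norm(residual, axis=0)
        norms[selected] = -1.0            # never re-pick a band
        best = int(np.argmax(norms))
        selected.append(best)
        # Project every band onto the chosen one and subtract it.
        v = residual[:, best] / (np.linalg.norm(residual[:, best]) + 1e-12)
        residual -= np.outer(v, v @ residual)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
X[:, 3] *= 10.0                           # make band 3 dominate the variance
print(opbs_select(X, 3))                  # band 3 is selected first
```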