This is the official code release of A Hyperspectral Imaging Guided Robotic Grasping System.
[paper] [project] [code] [Datasets] [CAD files]
## Environment

The complete deployment of the project includes the following components:
- Model Training and Inference
- Robotic Manipulation
- PRISM Control (Hyperspectral Camera, Motors)
Because the hyperspectral camera control interface requires Windows, the project is developed on:
- Windows 10

However, model training and inference can run on any platform that supports PyTorch, such as Ubuntu 20.04 (tested).
### Create conda environment and install PyTorch

This code is tested with Python 3.10.14 on Ubuntu 20.04 and Windows 10.

```shell
conda create -n prism python=3.10
conda activate prism
# PyTorch with CUDA 11.8
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
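After installation, a quick sanity check (a minimal sketch; run it inside the `prism` environment) confirms the interpreter version and the PyTorch/CUDA setup:

```python
import sys

# Print the interpreter version; should report 3.10.x inside the prism env.
print("Python", sys.version.split()[0])

try:
    import torch
    # True only if the CUDA 11.8 build found a usable GPU.
    print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed yet - run the pip command above")
```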
### Dependencies

Install dependencies:

```shell
pip install joblib
pip install tqdm
pip install tensorboard
pip install omegaconf
pip install opencv-python
pip install matplotlib
pip install scipy
pip install scikit-learn
pip install plantcv
pip install spectral
pip install numpy==1.26.4
pip install h5py
```
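Among these, `spectral`, `plantcv`, and `h5py` handle hyperspectral data. As a rough sketch of the data layout (the dimensions here are made up for illustration, not taken from the project's datasets), a hypercube is an `(H, W, bands)` array and a mean spectrum has one value per band:

```python
import numpy as np

# A hyperspectral cube is an (H, W, bands) array; fabricate a small one.
H, W, BANDS = 4, 4, 10  # hypothetical dimensions for illustration
cube = np.random.rand(H, W, BANDS).astype(np.float32)

# Mean spectrum over all pixels: one value per spectral band.
mean_spectrum = cube.reshape(-1, BANDS).mean(axis=0)
print(mean_spectrum.shape)  # (10,)
```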
The animation script has only been tested in PyCharm. In the run configuration, uncheck "Run with Python Console" and disable "View > Scientific Mode".

Run the command below to play the PRISM working animation:

```shell
python scripts/prism_animation.py
```
You can modify the config parameter `model_type` in `config/train.yaml` to train a specific model:

```shell
python scripts/train.py
```
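The snippet below sketches what such a config override amounts to; the key name `model_type` comes from this README, while the values and the tiny parser are purely illustrative (the project lists OmegaConf as a dependency, which presumably does the real YAML loading):

```python
def parse_simple_yaml(text):
    """Parse flat 'key: value' lines into a dict (illustrative only)."""
    cfg = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if ":" in line:
            key, value = line.split(":", 1)
            cfg[key.strip()] = value.strip()
    return cfg

# Hypothetical contents standing in for config/train.yaml.
example = """
model_type: spectral_cnn   # hypothetical value
lr: 0.001
"""
cfg = parse_simple_yaml(example)
cfg["model_type"] = "spectral_transformer"  # override before training
print(cfg["model_type"])  # spectral_transformer
```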
You can also run the test script to evaluate a trained model:

```shell
python scripts/test.py
```
All C++ device control code is in the `c_device` folder, which includes control modules for Modbus devices, the Nachi robot, and the Specim line-scan camera:

```
c_device
├── libModbus
├── nachi
└── specim
```
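The `libModbus` module speaks the Modbus protocol to the motors. As an illustration of what such a module puts on the wire (the slave ID, register address, and count below are example values, not the project's actual device configuration), here is a minimal Python sketch that builds a Modbus RTU "read holding registers" (function 0x03) request frame with its CRC-16:

```python
import struct

def modbus_crc16(data: bytes) -> int:
    """Standard Modbus CRC-16 (polynomial 0xA001, init 0xFFFF)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def read_holding_registers(slave: int, address: int, count: int) -> bytes:
    """Build a Modbus RTU function-0x03 request frame."""
    pdu = struct.pack(">BBHH", slave, 0x03, address, count)
    crc = modbus_crc16(pdu)
    return pdu + struct.pack("<H", crc)  # CRC is transmitted low byte first

# Example: read 1 register at address 0 from slave 1.
frame = read_holding_registers(1, 0, 1)
print(frame.hex())  # 010300000001840a
```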