Sorry to bother you @SorourMo. Since I only have datasets consisting of RGB images at the moment, I had to modify your source code to accept three-channel inputs, using only the RGB channels of your cloud dataset for training. Furthermore, I resized the 384x384 images to 256x256 to better fit my own task.
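Roughly, my changes amount to the following (a minimal sketch only; the function and parameter names are placeholders, not the actual names in your code):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(input_rows=256, input_cols=256, num_channels=3):
    # Reduced input size (256x256) and channel count (3, RGB only) compared
    # with the original 384x384 multi-band setup; the rest of the network
    # is left untouched, this stub just illustrates the input change.
    inputs = layers.Input(shape=(input_rows, input_cols, num_channels))
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel cloud probability
    return Model(inputs, outputs)

model = build_model()
```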
However, I found two issues in the training procedure. First, the code runs very slowly: a single epoch takes about 20~30 minutes on my NVIDIA RTX 3090 GPU, while GPU utilization is 0% most of the time. Second, after 100 epochs of training, the predictions are all blank images without any semantic content. Although the modifications above may make the model suboptimal, such a complete failure seems hard to believe.
I would greatly appreciate any advice you could give me.
@lizijue
From my perspective, the output (in the Predictions folder) from test_main.py of this project is not a real DN-value TIFF file. It contains pixel-level probabilities (for example, the direct output of the sigmoid (or softmax) activation in the last layer of a CNN).
We need to binarize the probabilities into binary masks using the provided code (.m file) at https://github.com/SorourMo/38-Cloud-A-Cloud-Segmentation-Dataset/tree/master/evaluation
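If you just want to check quickly whether the predictions are truly blank, the same idea can be sketched in Python (the file path and the 0.5 threshold here are assumptions; the official .m script also stitches the 384x384 patches back into full scenes and may use a different threshold):

```python
import numpy as np
from PIL import Image

# Load one predicted probability map (hypothetical path, adjust to your output).
prob = np.array(Image.open("Predictions/example_patch.TIF"), dtype=np.float32)

# Some writers scale probabilities to 0-255 instead of 0-1.
if prob.max() > 1.0:
    prob /= 255.0

# Threshold into a binary cloud mask (0.5 is an assumed cutoff).
mask = (prob >= 0.5).astype(np.uint8) * 255
Image.fromarray(mask).save("example_patch_mask.png")
```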