Basic ImageDataset usage #1943
Unanswered
elijahrockers asked this question in Q&A
Replies: 1 comment
-
Hi @elijahrockers, I would first suggest looking at other tutorials such as this one which don't use |
-
Don't know if this is the right place to ask, but I'm experimenting with the 3d_regression tutorial. As a quick overview, I'm trying to train a regression model to predict which axial slice to choose (L3 slice detection). I have the NIfTI images and the labels, which are just the integer index of the correct axial slice.
My idea was to train a regression model to predict a number between 0 and 1, which would correspond to the slice index. For example, if there are 200 axial slices in total and the labeled one is slice 150, the label would be transformed from 150 to 0.75 (150/200). That way, labels would be on roughly the same scale even for images with different total numbers of axial slices.
However, even if I preprocess the labels this way, the ImageDataset then uses transform to resize the images to (96, 96, 96), which is fine since I want them to fit the architecture. But on the inverse, the predicted label would be a "percentage" of 96: say 0.35 is predicted for an image, that would correspond to slice 34 (rounded) out of 96, and I would then have to do another inverse transformation from 34/96 back to 71/200.
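The forward and inverse mapping described above is just a pair of rescalings, which can be sketched as (function names here are made up for illustration):

```python
def slice_to_fraction(slice_idx, n_slices):
    """Normalize an axial slice index to a [0, 1] fraction of the volume depth."""
    return slice_idx / n_slices

def fraction_to_slice(frac, n_slices):
    """Map a predicted fraction back to an integer slice index."""
    return round(frac * n_slices)

# Label preprocessing: slice 150 of 200 -> 0.75
label = slice_to_fraction(150, 200)          # 0.75

# Inverse at inference: prediction 0.35 on a (96, 96, 96) resampled volume
resampled_slice = fraction_to_slice(0.35, 96)            # slice 34 of 96
original_slice = fraction_to_slice(34 / 96, 200)         # slice 71 of 200
```

Note that if the prediction target is the fraction itself, the second hop through 96 is unnecessary: the fraction can be mapped straight back to the original slice count, since resizing preserves the relative position along the axis.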
I could create my own Dataset class and DataLoader to do all these transformations (i.e., find the original number of axial slices, transform the labels), but is there some way I'm supposed to use the ImageDataset that MONAI imports here instead?
Also, any suggestions on the overall ML problem are welcome ... I'm still learning.