Hi, I followed the tutorial to train a segmentation model on my own MRI data. The image and labelmap dimensions (256x256x16) differ from the tutorial's, but everything is grayscale, so one channel only. The labelmap is a binary segmentation; the segmented area can be less than 1/4 of the whole image.

(The tutorial also uses 3D images with binary segmentation, so why can it use `model = UNet(spatial_dims=3, in_channels=1, out_channels=2, ...)`? Mine gave me errors for that.)
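For my own reference, here is my understanding of the two binary-segmentation conventions (a sketch, not taken from the tutorial): with `out_channels=2` the two channels are softmaxed and the label is one-hot encoded (`to_onehot_y=True`), while with `out_channels=1` a sigmoid is applied to the single channel and the label stays as a 0/1 mask.

```python
import torch

# Sketch of the two conventions for binary segmentation (my understanding,
# not taken from the tutorial):
# out_channels=2 -> softmax across channels, label one-hot encoded (to_onehot_y=True)
# out_channels=1 -> sigmoid on the single channel, label kept as 0/1
logits_2ch = torch.randn(1, 2, 4, 4, 4)        # (batch, channels, D, H, W)
probs_2ch = torch.softmax(logits_2ch, dim=1)   # per-voxel channel probs sum to 1

logits_1ch = torch.randn(1, 1, 4, 4, 4)
probs_1ch = torch.sigmoid(logits_1ch)          # foreground probability per voxel
```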
The code I changed for my needs:
```python
# I used the same augmentations for the training and validation sets
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    RandBiasFieldd,
    RandFlipd,
    RandScaleIntensityd,
    RandShiftIntensityd,
    RandZoomd,
)

augmentation_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"]),
        EnsureChannelFirstd(keys=["image", "label"]),
        RandFlipd(keys=["image", "label"], prob=0.1, spatial_axis=0),
        RandFlipd(keys=["image", "label"], prob=0.1, spatial_axis=1),
        RandFlipd(keys=["image", "label"], prob=0.1, spatial_axis=2),
        RandScaleIntensityd(keys="image", prob=0.3, factors=5),
        RandShiftIntensityd(keys="image", prob=0.5, offsets=5),
        RandBiasFieldd(keys="image", coeff_range=(0.2, 0.3), prob=0.1),
        RandZoomd(keys=["image", "label"], prob=0.1, min_zoom=0.8, max_zoom=1.3, mode=["area", "nearest"]),
    ]
)
```
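As a sanity check on the dictionary transforms above: passing `keys=["image", "label"]` to a spatial transform applies the identical operation to both tensors, so they stay aligned. A pure-torch equivalent of the flip (my sketch, not MONAI code):

```python
import torch

# Pure-torch sketch of what RandFlipd with keys=["image", "label"] does once
# it fires: the identical flip is applied to both tensors so they stay aligned.
image = torch.arange(8.0).reshape(1, 2, 2, 2)  # (channel, D, H, W)
label = (image > 3).float()                    # binary labelmap derived from image

flipped_image = torch.flip(image, dims=[1])    # spatial_axis=0 -> tensor dim 1
flipped_label = torch.flip(label, dims=[1])
```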
```python
# code skipped ...
import torch
from monai.losses import DiceLoss
from monai.metrics import DiceMetric
from monai.networks.layers import Norm
from monai.networks.nets import UNet

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

UNet_metadata = {
    "spatial_dims": 3,
    "in_channels": 1,
    "out_channels": 1,
    "channels": (16, 32, 64, 128, 256),
    "strides": (2, 2, 2, 2),
    "norm": Norm.BATCH,
}
model = UNet(**UNet_metadata).to(device)

loss_function = DiceLoss(include_background=False, smooth_nr=0, smooth_dr=1e-5, to_onehot_y=False, sigmoid=True)
loss_type = "DiceLoss"
dice_metric = DiceMetric(include_background=True, reduction="mean")
```
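To convince myself the single-channel + sigmoid setup is sound, here is a soft dice computed by hand on a toy tensor (my sketch of the formula I believe `DiceLoss` uses, with the same `smooth_nr=0`, `smooth_dr=1e-5`):

```python
import torch

# Toy soft-dice computed by hand (my assumption of what DiceLoss with
# sigmoid=True, to_onehot_y=False does on a single-channel output).
logits = torch.tensor([[[[[4.0, -4.0], [4.0, -4.0]]]]])  # (B=1, C=1, D=1, H=2, W=2)
target = torch.tensor([[[[[1.0, 0.0], [1.0, 0.0]]]]])

probs = torch.sigmoid(logits)                 # sigmoid applied inside the loss
smooth_nr, smooth_dr = 0.0, 1e-5
intersection = (probs * target).sum()
dice = (2 * intersection + smooth_nr) / (probs.sum() + target.sum() + smooth_dr)
loss = 1 - dice                               # near 0 when prediction matches target
```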
```python
# code skipped ...
from monai.inferers import sliding_window_inference

# validation part in each epoch
...
with torch.no_grad():
    for index, val_data in enumerate(val_loader):
        val_inputs, val_labels = val_data["image"].to(device), val_data["label"].to(device)
        roi_size = (32, 32, 32)  # the tutorial uses (160, 160, 160)
        sw_batch_size = 2
        val_outputs = sliding_window_inference(val_inputs, roi_size, sw_batch_size, model)
        ...
```
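One thing I am unsure about in the skipped code: with a sigmoid output, I believe the raw logits from `sliding_window_inference` have to be thresholded to a binary mask before being handed to `DiceMetric` (in MONAI this would be `Activations(sigmoid=True)` plus `AsDiscrete(threshold=0.5)` — my assumption, not verified against the tutorial). A pure-torch equivalent of that thresholding:

```python
import torch

# Pure-torch sketch of the post-processing I think is needed before DiceMetric:
# apply sigmoid to the raw network logits, then threshold at 0.5 to get a
# binary {0, 1} mask. Feeding raw logits straight into the metric would not
# match the binary labels.
val_outputs = torch.tensor([[[[[2.0, -3.0], [1.5, -0.5]]]]])  # raw logits
binary_pred = (torch.sigmoid(val_outputs) > 0.5).float()      # values in {0., 1.}
```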
I set `in_channels` and `out_channels` to 1. If I used 1 and 2, or 2 and 1, both gave me shape-mismatch errors. In `DiceLoss` I used no one-hot encoding and sigmoid instead of softmax, and I included the background in `DiceMetric`.

The average loss per epoch looks reasonable, decreasing from 0.9 to xx ..., but the mean dice is always 0, at least for the first 300 epochs; I stopped after running 300 epochs.

Also, when I used the tutorial code to check the best model's output against the input image and label, the image and label were shown, but nothing showed up in the prediction box.

It seems like something is wrong in the validation step. Does anyone know why the dice is always 0? (If the dice *loss* were 0, wouldn't that mean the ground truth and predictions match perfectly?)

The tutorial images have scalar type float and range from -1xxx to 3xxx; their labels are saved as scalars, 0 or 1.

My images have scalar type unsigned short and range from 0 to 1xxxx; my labels are saved as a labelmap, 0 or 1.

Not sure if that makes any difference?
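Since my intensity range (unsigned short, 0 to 1xxxx) is so different from the tutorial's, I wonder whether the fixed magnitudes in my intensity augmentations (e.g. `offsets=5`) are effectively tiny. A sketch of min-max normalization that would bring the images into [0, 1] first (the values below are made up for illustration):

```python
import numpy as np

# Min-max normalization sketch: my uint16 images span a much larger range than
# the tutorial's float images, so fixed augmentation magnitudes (offsets=5)
# are relatively tiny. Example values below are made up for illustration.
img = np.array([0, 5000, 10000], dtype=np.uint16).astype(np.float32)
lo, hi = img.min(), img.max()
img_norm = (img - lo) / (hi - lo)  # rescaled to [0, 1]
```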
Thanks