Retrain 2D U-Net: sparse annotations #1324
MirenLurBarquin asked this question in Q&A (unanswered)
Hi @MirenLurBarquin - I'm one of the AxonDeepSeg maintainers. Would you be able to share a sample image and the resulting segmentation mask you get from AxonDeepSeg?
Dear all,
I am currently using AxonDeepSeg and Ivadomed for my project; however, I have some questions as I am not obtaining the expected results.
I have a TEM dataset of rat brain tissue, for which axon and myelin segmentation was performed using AxonDeepSeg. The images are in `.tif` format, of size `10.24 x 10.24 µm` and a resolution of `2048 x 2048` pixels (i.e. a pixel size of `0.005 x 0.005 µm`).
Due to the characteristics of the images (e.g., contrast and definition), the segmentation outputs exhibit issues such as holes, sharp edges, and missing axons and myelin. To address this, we attempted to postprocess these segmentations, retaining only the blobs that appeared accurate in each mask, as sketched below. This approach resulted in a sparse segmentation dataset.
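For illustration, a minimal sketch of this kind of blob filtering with scikit-image. In practice we judged blobs by how accurate they looked, so the simple area threshold here is just a stand-in for that criterion, and the file names are placeholders:

```python
import numpy as np
from skimage import io, measure

# Load a binary mask produced by AxonDeepSeg (file name is a placeholder).
mask = io.imread("axon_mask.png") > 0

# Label connected components (blobs) in the mask.
labels = measure.label(mask, connectivity=2)

# Placeholder criterion: a simple area threshold stands in for the
# visual "does this blob look accurate?" judgement we actually used.
min_area = 500

keep = np.zeros_like(mask, dtype=bool)
for region in measure.regionprops(labels):
    if region.area >= min_area:
        keep[labels == region.label] = True

# Save the sparse mask containing only the retained blobs.
io.imsave("axon_mask_sparse.png", (keep * 255).astype(np.uint8))
```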
Next, we retrained the model on the sparse dataset, hoping to achieve a dense segmentation better than the one produced by AxonDeepSeg. Unfortunately, this was not the outcome. For the retraining process, we used Ivadomed.
I have attached the latest configuration file I am using (having experimented with various other configurations), and I will outline some specific notes and questions that may be contributing to the unsatisfactory results:
By default Ivadomed uses `DiceLoss`, but after failing to obtain a dense segmentation I tried `GeneralizedDiceLoss` with `no background`, which gave much worse results than the base case: an empty axon class for `"transfer_learning": {..., "retrain_fraction": 1.0, "reset": true}`, and only small axons for `"transfer_learning": {..., "retrain_fraction": 0.3, "reset": true}` (both variants are sketched below). Is my understanding correct that `no background` masks the background out of the loss computation, so that the model is not penalized when it finds new axons and myelin in background regions?
Has someone solved a similar task using Ivadomed and AxonDeepSeg?
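For concreteness, a minimal sketch of the two blocks in question, patched into the attached config via a small script. This assumes the standard Ivadomed config schema (a `"loss"` block with a `"name"` field, and the `"transfer_learning"` keys quoted above); the `retrain_model` path and output file name are placeholders:

```python
import json

# Load the attached base configuration.
with open("config_microscopy_base.json") as f:
    config = json.load(f)

# Loss: Ivadomed defaults to DiceLoss; this is the variant I tried.
config["loss"] = {"name": "GeneralizedDiceLoss"}

# Transfer learning: one of the two settings compared above
# (the "retrain_model" path is a placeholder).
config["transfer_learning"] = {
    "retrain_model": "path/to/pretrained_model",
    "retrain_fraction": 0.3,  # also tried 1.0
    "reset": True,
}

# Write the modified configuration used for retraining.
with open("config_retrain.json", "w") as f:
    json.dump(config, f, indent=4)
```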
Any idea or input is more than welcome :)
Thanks in advance!
config_microscopy_base.json