Releases · juglab/EmbedSeg
v0.2.5
What's Changed
- v1.0.1: 2d + 3d code examples by @lmanan in #6
- v0.2.3 by @lmanan in #7
- Automatic calculation of crop size by @lmanan in #13
- v0.2.5 - tag (a) by @lmanan in #19
- V0.2.5 - tag (b) by @lmanan in #20
- V0.2.5 - tag (c) by @lmanan in #21
- V0.2.5 - tag (d) by @lmanan in #22
- Update train.py by @ajinkya-kulkarni in #29
- Update utils.py by @ajinkya-kulkarni in #28
New Contributors
- @ajinkya-kulkarni made their first contribution in #29
Full Changelog: v0.2.0...v0.2.5
MIDL Notebooks
This release was used to compute numbers for the MIDL publication and is stable.
- Image intensities were normalized by dividing pixel intensities by 255 (for 8-bit images) or 65535 (for unsigned 16-bit images). While this normalization strategy led to faster training, it sometimes led to poorer OOD performance. In future releases, the default will be set to `min-max-percentile` normalization (the model takes longer to reach the same val IoU, but inference performance is better).
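For reference, a minimal sketch of the two strategies (the function names and percentile defaults are illustrative, not the package API):

```python
import numpy as np

def normalize_by_dtype_max(img):
    # MIDL-era default: divide by the dtype maximum
    # (255 for 8-bit images, 65535 for unsigned 16-bit images).
    return img / np.iinfo(img.dtype).max

def normalize_min_max_percentile(img, pmin=1.0, pmax=99.8, eps=1e-20):
    # Planned default: rescale between low and high intensity percentiles;
    # slower to reach the same val IoU, but more robust at inference time.
    lo, hi = np.percentile(img, (pmin, pmax))
    return (img.astype(np.float32) - lo) / (hi - lo + eps)
```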
Minor bug-fixes
A minor update since release v0.2.2. This includes:
- Add `display_zslice` and `save_checkpoint_frequency` parameters to the `configs` dictionary here (see the sketch after this list)
- Support for visualization in setups where `virtual_batch_multiplier` > 1 is still missing
- Also hardcoded the install version of `tifffile` in `setup.py` here, because the latest version at present (2021.6.14) generates a warning message with the `imsave` command while generating crops for the `bbbc010-2012` dataset. This version specification will be relaxed in release `v0.2.4`
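A sketch of the two new keys (the values are placeholders; consult the example notebooks for the actual defaults):

```python
configs = dict(
    # ... existing training configuration entries ...
    save_checkpoint_frequency=10,  # save a model checkpoint every 10 epochs
    display_zslice=16,             # z-slice shown when visualizing 3d volumes
)
```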
TODOs include:
- Plan to update the `pytorch` version to `1.9.0` in release `v0.2.4` (the version currently used is `1.1.0`)
- Plan to add a `tile and stitch` capability in release `v0.2.4` for handling large 2d and 3d images during inference
- Plan to add a `max_crops_per_image` parameter in release `v0.2.4` to set an optional upper bound on the number of crops extracted from each image
- Plan to save all instance crops and center crops as RLE files in release `v0.2.4` (see the run-length-encoding sketch after this list)
- Plan to add an optional mask parameter during training which excludes certain regions of the image from the loss computation, in release `v0.2.4`
- Plan to fix a bug in the evaluation of `var_loss` and to obtain crops of the desired size through additional padding
- Plan to include support for more classes
- Normalization for 3d ==> (0, 1, 2)
- Make normalization the default option for better extensibility
- Parallelize operations like cropping
- Eliminate the specification of grid size in notebooks - set it to some default value
- Simplify the notebooks further
- Make colab versions of the notebooks
- Test the `center=learn` capability for learning the center freely
- Add the ILP formulation for stitching 2d instance predictions
- Add the code for converting predictions from 2d model on xy, yz and xz slices to generate a 3D instance segmentation
- Add more examples from medical image datasets
- Add `threejs` visualizations of the instance segmentations. Explain how to generate these meshes, smoothen them and import them with a `threejs` script
- Padding with `reflection` instead of `constant` mode
- Include `cluster_with_seeds` in case nuclei or cell detections are additionally available
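As a sketch of the RLE plan above, a generic run-length encoder/decoder for binary masks could look as follows (an illustration, not the eventual EmbedSeg implementation):

```python
import numpy as np

def rle_encode(mask):
    """Run-length encode a binary mask, flattened in C order.

    Returns a list of (start, length) pairs, one per foreground run.
    """
    flat = np.asarray(mask, dtype=bool).ravel()
    # Pad with False so that every run has a well-defined start and end.
    padded = np.concatenate([[False], flat, [False]])
    changes = np.flatnonzero(padded[1:] != padded[:-1])
    starts, ends = changes[::2], changes[1::2]
    return list(zip(starts.tolist(), (ends - starts).tolist()))

def rle_decode(runs, shape):
    """Reconstruct the binary mask from (start, length) pairs."""
    flat = np.zeros(int(np.prod(shape)), dtype=bool)
    for start, length in runs:
        flat[start:start + length] = True
    return flat.reshape(shape)
```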
3d example notebooks
- Add all 3d example notebooks
- Pad images with average background intensity instead of 0
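A minimal sketch of this padding change, assuming the background is estimated as the mean intensity outside the labelled instances (the helper name is hypothetical, not the notebook code):

```python
import numpy as np

def pad_with_background(img, instance_mask, target_shape):
    # Estimate background as the mean intensity over non-instance pixels,
    # then pad with that value instead of 0.
    bg = img[instance_mask == 0].mean()
    pads = [(0, t - s) for s, t in zip(img.shape, target_shape)]
    return np.pad(img, pads, mode="constant", constant_values=bg)
```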
Functional 2d + 3d code
Major changes:
- Add 3d example notebooks for two datasets
- Correct `min_object_size` (now evaluated by looking at the train and validation masks; see the sketch after this list)
- Save `tif` images with datatype `np.uint16` (in the prediction notebooks)
- Provide support in case evaluation GT images are not available (during prediction)
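A sketch of how `min_object_size` can be estimated from the ground-truth label images, together with the 16-bit `tif` export (using scikit-image and tifffile; the exact reduction used in the notebooks may differ):

```python
import numpy as np
import tifffile
from skimage.measure import regionprops

def estimate_min_object_size(label_images):
    # Smallest instance size (in pixels/voxels) over all train + val masks.
    sizes = [prop.area
             for lbl in label_images
             for prop in regionprops(lbl.astype(np.int32))]
    return int(min(sizes))

def save_prediction(path, prediction):
    # Predictions are written as unsigned 16-bit tif images.
    tifffile.imwrite(path, prediction.astype(np.uint16))
```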
Some things which are still incorrect in v0.2.0:
- `n_y` should be set to `n_x` for equal pixel/voxel sizes in the y and x dimensions. This is fixed in `v0.2.1`
- `anisotropy_factor` is wrongly calculated in the 3d notebooks (it was calculated as the reciprocal; see the sketch below). This is fixed in `v0.2.1`
- `train_size` was set to 600 for the `bbbc012-2010` dataset. This is raised to 1200 in `v0.2.1`
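For clarity, the corrected convention (to the best of my reading of the 3d notebooks) is the ratio of the axial to the lateral voxel size, not its reciprocal:

```python
def anisotropy_factor(voxel_size_z, voxel_size_xy):
    # e.g. 1.0 um spacing in z and 0.25 um in x/y gives a factor of 4.0;
    # the v0.2.0 notebooks mistakenly computed 0.25 (the reciprocal).
    return voxel_size_z / voxel_size_xy
```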
Functional 2d code, datasets, fully trained models and colormap
- Initial functional 2d code (`min_object_size` was hard-coded to 36 and will be updated in later iterations)
- Assets include:
  - 2d images and GT instance annotations
  - 3d images and GT instance annotations
  - fully trained models (`*demo.tar`, trained from scratch up to 200 iterations)
  - glasbey-like colormap (`cmap_60.npy`)
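A short sketch of using the bundled colormap with matplotlib (the label image here is a random placeholder):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# cmap_60.npy stores an array of visually distinct colors.
cmap = ListedColormap(np.load("cmap_60.npy"))

instance_labels = np.random.randint(0, 60, size=(128, 128))  # placeholder
plt.imshow(instance_labels, cmap=cmap, interpolation="nearest")
plt.axis("off")
plt.show()
```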
Data is released under the CC-BY 4.0 license (see license).