Hi all,
I notice that with things like CARE, StarDist, and n2v my images are opened as 64-bit NumPy arrays.
Which leads me to the question: is TensorFlow using 64-bit float precision there? Wouldn't 32 bit be enough, and reduce the memory footprint by a factor of 2?
Thanks for the discussion!
Oli
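For scale, a float64 image is exactly twice the size of its float32 counterpart; a quick plain-NumPy check (the array shape here is just an example, not from the issue):

```python
import numpy as np

# A 1024x1024 image in double precision vs. single precision.
img64 = np.zeros((1024, 1024), dtype=np.float64)
img32 = img64.astype(np.float32)

print(img64.nbytes)  # 8388608 bytes (8 MiB)
print(img32.nbytes)  # 4194304 bytes (4 MiB) -- half the footprint
```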
IMO csbdeep/stardist should and will convert everything to float32 by default, since double precision is not necessary (and would be slower, as you observe). Where do you see 64-bit floats being generated?
E.g.
import numpy as np
from csbdeep.data import RawData, create_patches

x = np.random.randint(0, 10, (10, 100, 100))
raw_data = RawData.from_arrays(x, x, axes="YX")
X, Y, XY_axes = create_patches(raw_data, patch_size=(64, 64), n_patches_per_image=10)
print(X.dtype)
# float32
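As an aside on where the float64 arrays often come from: many NumPy operations promote to double precision by default, so even uint8/uint16 images end up as float64 after a typical normalization step. A minimal illustration in plain NumPy (this is not csbdeep code, just a common source of the behavior):

```python
import numpy as np

# A typical 16-bit microscopy-style image.
img = np.random.randint(0, 255, (100, 100), dtype=np.uint16)

# Dividing an integer array by a Python float promotes to float64.
norm = img / 255.0
print(norm.dtype)  # float64

# An explicit cast keeps the result in single precision.
norm32 = (img / 255.0).astype(np.float32)
print(norm32.dtype)  # float32
```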