Would a dev be kind enough to provide a very high-level overview of how Watershed works in 3D mode? Let's say the input is a stack of binarized images, representing the output of the Threshold module on nuclei images acquired on multiple Z planes. When Watershed is applied to the binarized images, is segmentation performed on each plane in the stack separately, followed by some kind of registration step to find/connect single objects across multiple planes? Or are multiple planes considered simultaneously to perform the initial segmentation? Or is it some other approach entirely?
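For concreteness, here's a minimal sketch of what the second alternative (treating the whole volume at once) would look like. This uses scikit-image's watershed on a toy 3D volume; it's only an illustration of volumetric watershed in general, not a claim about how CellProfiler's Watershed module is actually implemented, and the sphere geometry and marker positions are made up for the example:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Hypothetical stand-in for a binarized nuclei stack: a 3D volume
# containing two overlapping spheres (two "touching nuclei").
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
sphere1 = (zz - 15) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 10 ** 2
sphere2 = (zz - 25) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 10 ** 2
binary = sphere1 | sphere2

# Euclidean distance transform computed over the full 3D volume,
# not plane by plane.
distance = ndi.distance_transform_edt(binary)

# One seed marker per object (here placed by hand at the sphere centers).
markers = np.zeros(binary.shape, dtype=int)
markers[15, 20, 20] = 1
markers[25, 20, 20] = 2

# Watershed floods the 3D volume as a whole, so an object spanning
# several Z planes comes out as a single connected label.
labels = watershed(-distance, markers, mask=binary)
print(np.unique(labels))  # background 0 plus the two object labels
```

In this volumetric approach there is no separate registration/linking step: connectivity across Z is handled directly by the 3D flood, which is why I'm curious whether the module works this way or per-plane.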
Thanks much in advance!