Help with segmentation of cells and pixel count




I’m attaching two links to separate images. These images are merged red and green filter images from fluorescent microscopy. The green stain was intended to identify the membrane of the skeletal muscle cells in the cross-section, and the red stain was intended to identify a protein target predominantly localized to the membrane.

The primary outcomes of this analysis are to count the number of skeletal muscle cells in the cross-section (circular objects separated by membranes) and then to divide the number of red pixels above an appropriate intensity by the number of fibers in the cross-section, as a surrogate for an increase or decrease in the protein’s expression at two separate time points.

Hence, I need good segmentation and separation of the cells based on the membrane stain. Second, I need an accurate count of the red pixels that are not “background fluorescence,” so that I can confidently infer the expression of the protein target of interest. I’ve employed smoothing, image math, and thresholding strategies in pilot pipelines, but the output images are not as pristine and accurate as I would like. I’m hopeful you can provide assistance, as most of the other images look similar to these, and this is a relatively straightforward analysis.
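For what it’s worth, the per-fiber metric I’m after could be sketched like this outside CellProfiler. This is just a rough illustration on made-up synthetic arrays standing in for the real channels, with an arbitrary threshold, not my actual pipeline:

```python
import numpy as np

# Synthetic stand-ins for the real data: a red channel and a fiber
# label image from a prior segmentation (values are illustrative only).
red = np.zeros((64, 64))
red[10:14, 10:14] = 0.9    # bright membrane-localized signal
red[40:42, 40:42] = 0.05   # dim background fluorescence

fibers = np.zeros((64, 64), dtype=int)
fibers[5:20, 5:20] = 1     # fiber 1
fibers[35:50, 35:50] = 2   # fiber 2

threshold = 0.2            # an assumed cutoff above background level
n_fibers = int(fibers.max())
red_positive = int((red > threshold).sum())
per_fiber_signal = red_positive / n_fibers
print(n_fibers, red_positive, per_fiber_signal)  # 2 16 8.0
```

The key point is just that the ratio is (red pixels above threshold) divided by (fiber count), so both the segmentation and the threshold choice feed directly into the final number.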

Here are the links to the images.

Thanks so much for any assistance!



It would be helpful if you posted the pipeline you have so far; it’s easier to help you edit what you’ve already got (since of course you know your images better than we do!) than to start from scratch.


Hi bcimini,

Here is the pipeline as it stands. I’m getting logical values, but I just want to make sure the accuracy is very high for these measurements. Hence, I’d appreciate your counsel for improving the pipeline to any extent.

EXAMPLE_ANALYSIS.cpproj (459.2 KB)

Thank you for your time.


Wow, those are really hard images, and you’ve done a fantastic job on them!

I honestly don’t see a way your pipeline can be improved in any automated way; it’s already doing very, very well (at least as far as I can tell. I see a couple of your large fibers being split in half, but otherwise it seems quite accurate; if there are other trouble points for you, can you point them out or screenshot/annotate them so we can get a better sense of the issue?). You say you’re getting logical values, so it seems to me you have a choice at this point. You’ll be the best judge of which of the two paths to take, since you know more about what you need, why you need it, what the error tolerance is, and so on.

  1. Say that this is as accurate and well optimized as it’s likely to be. Every experiment has some amount of imprecision built in, whether it’s qPCR (otherwise we’d never need technical replicates!), a Western blot (where exactly do you stop calling the haze of the band?), or a microscope image. Unless you have reason to believe otherwise, there’s no reason a priori to suspect that the imprecision here is greater than in any other technique, OR that whatever small amount of error creeps in won’t be propagated equally across all your images. Particularly if you have negative and positive controls, this would be a great time to look at them and see if the measurements you’re getting from the two seem well separated; if so, my recommendation would be that your work here is done.

  2. Instead of or after the FilterObjects step in your pipeline, add an EditObjectsManually step and do manual correction of the objects on every image. You’ll increase your accuracy, but at the expense of MUCH lower throughput; in most cases, I believe you’re better off at 95% accuracy with enough time to do another experiment than at 99% accuracy (particularly once the number of images/objects becomes large, as the errors “wash out” over a larger set AND as correction takes longer), but again, you’ll have a better sense than I do of which is better for your particular case.

Good luck, and again nice work! Please feel free to follow up if you have more questions.


Thanks for the reassurance, bcimini. I appreciate the advice and perspective.

I was wondering if you could offer an additional method, separate from the one I’ve used, for removing some of the green “background fluorescence” inside of the cells to better contrast the membrane against its surroundings. I considered a CorrectIllumination module series after viewing the ColorToGray output, but I’ve never worked with those modules and was unsure if the juice was worth the squeeze there. Do you think those modules would help produce a better-contrasted green filter image? Do you have any further recommendations for controlling background fluorescence or uneven illumination?
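To make the question concrete, here is my rough understanding of what an illumination-correction step would do, sketched with SciPy on a synthetic image. The wide-Gaussian background estimate and the division are assumptions on my part about how the modules work, not actual module settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic green channel: a bright membrane-like line plus a smooth
# left-to-right illumination gradient (the "uneven illumination").
yy, xx = np.mgrid[0:128, 0:128]
img = 0.3 * xx / 127.0          # uneven illumination background
img[:, 60:64] += 0.5            # membrane-like bright structure

# Estimate the illumination function with a very wide Gaussian
# (roughly what CorrectIlluminationCalculate's smoothing produces),
# then divide it out (one of CorrectIlluminationApply's options).
background = gaussian_filter(img, sigma=40)
corrected = img / np.maximum(background, 1e-6)
```

After division, the membrane stands out against a roughly flat background instead of sitting on a gradient, which should make a single global threshold behave more consistently across the field.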


Strictly speaking, the nice thing about EnhanceOrSuppressFeatures’ “Neurite” mode is that you shouldn’t NEED to do any background subtraction ahead of time. From left to right, I’m showing the results of EnhanceOrSuppressFeatures after 1) your current workflow (a MeasureImageIntensity followed by a background subtraction), 2) a workflow using CorrectIlluminationCalculate + CorrectIlluminationApply, and 3) just running EnhanceOrSuppressFeatures on your OrigGreen image. To my eye, 2 and 3 are both pretty similar (and both a bit better than 1), so there’s no need to add extra processing.
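For anyone reading along without CellProfiler open, the spirit of this kind of thin-structure enhancement can be approximated with a morphological white top-hat. To be clear, this is a simple stand-in I’m using for illustration, not the actual filter behind the “Neurite” mode; the footprint size is an assumption tied to the synthetic membrane width:

```python
import numpy as np
from scipy.ndimage import white_tophat

# Synthetic membrane image: thin bright lines on a dim, flat interior.
img = np.full((100, 100), 0.2)
img[:, 50:52] = 0.9          # a vertical "membrane", 2 px wide
img[50:52, :] = 0.9          # a horizontal one

# A white top-hat with a footprint wider than the membranes removes
# broad interior fluorescence while keeping the thin bright lines.
enhanced = white_tophat(img, size=7)
```

The membranes survive at full contrast while the flat interior goes to roughly zero, which is the same practical effect as running the enhancement directly on the raw image without a separate background-subtraction step.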

Just for extra fun, I took it one step further to see how the three methods would affect the membrane thresholding you’re getting, both with (bottom) and without (top) rescaling first (so your current outcome is the bottom left corner). Essentially you’re getting nearly the same answer no matter how you do it, though I fully admit I didn’t do much optimization at all.

All that being said, I really think if these images are representative you’re about at your optimal analysis. If you want to keep playing with it though, the pipeline with the few extra steps I added to test the optimization is attached. LAT_1_ANALYSIS_optimization.cppipe (36.4 KB)


Thanks so much for your time and effort. I will proceed with the analysis. Hope to interact again in the future.