Is there a module assigning multiple classes to one image?


Hello everyone,

I am new to CellProfiler and I need your advice on an image analysis problem I have.
I want to classify objects, but these objects will not belong purely to one class. What I need is an estimate of how closely an object resembles each class. For example: 20% Golgi, 80% ER.

Is this possible to realize with a module?
How does the classification module determine the class of an object? Is there an intermediate step where this likelihood information is available?

If you have any other suggestion on how to achieve this goal by combination with any other method (e.g. SOM?), this would be very welcome.

Thank you in advance,

Daniela Richter


Unfortunately, no, we do not have a module that allows multi-class assignments to an image. However, it is possible to classify individual objects within an image. To do so, use the ClassifyObjects module after you identify and measure your objects. Then you can specify classes such as ‘large’ or ‘small’. An example pipeline that demonstrates how to use the ClassifyObjects module is the ‘Classified Colonies’ example.
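To illustrate why this gives only single-class assignments: a ClassifyObjects-style classification bins each object into exactly one class based on a threshold over one measurement. Here is a minimal, purely illustrative Python sketch (the function name, threshold, and measurement values are hypothetical, not CellProfiler's actual implementation):

```python
# Hypothetical sketch of single-bin classification, as done by a
# ClassifyObjects-style module: each object falls into exactly one
# bin based on a threshold over one measurement (e.g. object area),
# so mixed-class scores like "20% Golgi, 80% ER" are not possible.

def classify_objects(areas, threshold=500.0):
    """Assign each object the single label 'small' or 'large'."""
    return ["small" if area < threshold else "large" for area in areas]

# Example: four objects with hypothetical area measurements.
labels = classify_objects([120.0, 830.5, 501.2, 60.0])
# labels -> ["small", "large", "large", "small"]
```

Each object ends up in one and only one bin, which is why a mixture estimate has to come from a downstream analysis instead.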

Hope that helps!


I agree with Martha; this sounds like a downstream data-analysis project. Of course, if your classification is very simple (how much protein fluorescence intensity is in the nucleus vs. the cytoplasm, for example), that can easily be done in CellProfiler using its Measure modules, possibly together with CalculateRatios. But for more complex scoring like you describe, have CellProfiler produce a database full of measurements for each object, then use some kind of machine-learning method to score each cell for what type of pattern it has, based on positive control images that show purely one pattern or another.
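As a rough sketch of what that downstream step could look like: train a reference profile per class from pure-pattern positive control cells, then express each cell as normalized similarities to those profiles, yielding fractions like "80% ER, 20% Golgi". This is a toy illustration with hypothetical feature values, not an endorsed or validated method:

```python
import math

# Hypothetical downstream analysis sketch (not CellProfiler code):
# given per-object measurements exported by CellProfiler, score each
# cell as a mixture of patterns by comparing it to class centroids
# learned from pure-pattern positive control cells.

def centroid(rows):
    """Mean feature vector of the training rows for one class."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def mixture_scores(cell, centroids):
    """Return per-class fractions that sum to 1, weighting each class
    by inverse distance to its centroid (closer class -> larger share)."""
    weights = {}
    for label, c in centroids.items():
        weights[label] = 1.0 / (math.dist(cell, c) + 1e-9)
    total = sum(weights.values())
    return {label: w / total for label, w in weights.items()}

# Toy positive controls: two made-up measurements per cell.
er_controls = [[0.9, 0.1], [0.8, 0.2]]
golgi_controls = [[0.1, 0.9], [0.2, 0.8]]
centroids = {"ER": centroid(er_controls),
             "Golgi": centroid(golgi_controls)}

scores = mixture_scores([0.7, 0.3], centroids)  # mostly ER-like cell
```

In practice you would use real per-object features from the CellProfiler database and a proper probabilistic classifier, but the shape of the output (per-class fractions summing to one) is the point.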

Our own machine-learning methods (available in CellProfiler Analyst) are designed to score a cell for one particular phenotype; they are not designed to score a potential mixture of phenotypes. It’s possible it would work if you first train CellProfiler Analyst to recognize ER patterns and score all cells for how ER-like they are, then train it to recognize Golgi patterns and score all cells for how Golgi-like they are, but here is the catch: you would need to feed it a variety of positive control cells, as follows. For ER, you would need to give it positive controls that are not just pure-ER alone but instead some pure-ER, some ER+nucleus, some ER+Golgi, some ER+membrane, etc. In this way, the computer learns that it should ignore other patterns and only look for the thing in common across all these images: the ER pattern. If it is difficult to produce this positive control set of images, or if you have more than a few patterns to recognize, it could become tedious (and note that we’ve never tried it - this is just a theoretical solution!).

You might also check with Bob Murphy’s group at Carnegie Mellon, because his group is working on machine-learning methods that specifically address the mixture-of-localizations problem. His algorithms are designed to take pure-Golgi and pure-ER positive controls to define each pattern. His group is, in my experience, quite generous about producing algorithms that others can use, so it would be worth contacting him.