ORO stain in C. elegans


Hi there, I'm using CellProfiler and CellProfiler Analyst to analyze ORO staining in C. elegans. I'm having trouble with both the pipeline and the Analyst. In the pipeline, I can't manage to get rid of the bubbles in the photos from my experiment (I'll add one). And in the Analyst I can't figure out what to plot; there are so many different options to plot or graph that I'm losing my mind trying them.
PS: I'm "playing" with CellProfiler Analyst using the example photos that are provided.
PS2: I'm using the pipeline explained in the ORO staining video tutorial.

ORO analysis.cpproj (560.1 KB)


Can you show an example of an image with a bubble in it, and what happens to the segmentation when one is present?

As for what to graph in Analyst, it depends on what you want to learn. Maybe you can give us a bit more info?


OK, as the tutorial video explains, to eliminate the bubbles and the edge of the well you identify them in the origRed image so that they can be removed later, when you mask the image. But this is what I got:

and when I continue masking the image with the origGray image, this is what I get:

The only thing I can say is that by playing with the threshold and with the typical diameter of objects, in this picture and in others, I was able to outline the worms specifically. But that isn't the point of this step, so honestly, I'm really lost.

And with the Analyst, my idea was to quantify the number of fat droplets or the signal intensity in each worm, and to compare between different strains.
Thanks for everything!


Did you figure out how to quantify the average number of fat droplets per worm? I'm trying to do the same thing right now. My pipeline is finally working, but I'm really clueless when it comes to data analysis with CPA.
Best wishes,


Hi Aleksandra. To be honest, I put this project on standby for a while. But when I was working on it and trying to analyze the data, I tried to do exactly that: number of fat droplets per worm. I didn't get anywhere, though, because my data was unusable.
I'll write again if I figure any of this out.
Good luck!


Hi Aleksandra,

If you put a RelateObjects module downstream of your worm and fat droplet identification modules (with droplets as "Children" and worms as "Parents"), I believe you should be able to get the number of droplets per worm (a Children_FatDroplets count will then be added to the measurements output per worm). I say "believe" because I'm not sure whether there's some incompatibility between overlapping worms and RelateObjects; there may be, but I honestly don't know. If there were a problem with RelateObjects, there are ways to estimate or back-calculate it, but let's not go there until we see whether the simplest solution works.
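Once that per-worm count is in the exported spreadsheet, summarizing it per strain is straightforward. Here's a minimal sketch, assuming column names like `Metadata_Strain` and `Children_FatDroplets_Count` (check your own ExportToSpreadsheet headers, since these are just illustrative):

```python
# Hypothetical sketch: averaging a RelateObjects child count per strain.
# The column names below are assumptions; match them to your own export.
from collections import defaultdict

def mean_droplets_per_strain(rows):
    """rows: list of dicts, one per worm, as read from the per-worm CSV."""
    totals = defaultdict(lambda: [0, 0])  # strain -> [sum of counts, n worms]
    for row in rows:
        strain = row["Metadata_Strain"]
        totals[strain][0] += int(row["Children_FatDroplets_Count"])
        totals[strain][1] += 1
    return {strain: s / n for strain, (s, n) in totals.items()}

# Toy data standing in for two strains:
worms = [
    {"Metadata_Strain": "N2",  "Children_FatDroplets_Count": "12"},
    {"Metadata_Strain": "N2",  "Children_FatDroplets_Count": "8"},
    {"Metadata_Strain": "mut", "Children_FatDroplets_Count": "4"},
]
print(mean_droplets_per_strain(worms))  # {'N2': 10.0, 'mut': 4.0}
```

In practice you'd feed this with `csv.DictReader` over the per-worm CSV instead of the toy list.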


Thank you Beth,
Will see what I can do. I’ll post here once everything is finished.


Could anyone confirm how accurate the software is at picking the fat objects? I have attached two images showing the locations of the fat mass picked by CellProfiler, and it looks a bit random. How can I verify whether the identification of the fat objects is accurate?


The person most able to judge whether your results are accurate is likely you! I'm assuming that since you're here asking the question, you feel they may not be. We have definitely used CP in the past to quantify ORO staining, so it certainly can be used for that, but since images vary from experimenter to experimenter, microscope to microscope, etc., how accurate the results end up being generally depends on how closely the pipeline parameters are tuned to the particular experiment.

When you run your pipeline in test mode and look at the output of the IdentifyPrimaryObjects module for the oil droplets, do you agree with what’s being called? Do you have a positive control in your experiment that you can check against a negative control to see if the quantification changes at all?


Hi Beth,
Thank you for the response. Yes, we ran the test with the negative control and there were visible changes in the quantification, consistent with our expectations. But as you said, I don't really know how to evaluate how closely the pipeline parameters are tuned to the particular experiment. For now we can certainly assess, as a percentage, how much the tested condition decreased fat mass in the worms compared to the control. What I'm not sure about is whether the total fat area counted by the software is the real fat area in the tested worms. We can only assess it by eye in the images above. Is there any way to evaluate that?
Best wishes,


Unless you went through some of the images manually to quantify the fat areas by hand (by annotating/coloring in each one, then measuring their size/intensity in pixels or something like that), I can't reassure you that CellProfiler is giving you a "real" answer; you need some sort of external ground truth to compare against in order to assess that. Like I said above, the two main ways I see people typically approach this question (and doing both is always best!) are:

  • Do CellProfiler's objects agree with the objects I would draw by hand? You'll almost never get CellProfiler to give you the exact shape/size/number of what you would draw, but on the whole, are they pretty close? When they differ from your sense of what they should be, how often does it happen, and are the errors consistent in one direction? A pipeline where you agree with half the objects, think 45% are too small, and think 5% are too big can likely be improved, whereas a pipeline where you agree with 90% of the objects, with 5% too small and 5% too big, is likely as good as it'll ever be.
    If you think the objects it's returning are consistently smaller than what you'd draw, maybe you should play with the size settings a bit. If you think there are too many or too few, maybe the threshold needs to be made stricter or more lenient. The good news is that you're already exporting images of the objects CP is drawing, so you can look at those and see whether, on the whole, you agree or whether there's room for improvement. For experiments with fewer than ~500 images, I personally recommend you ALWAYS do this first, before even looking at the quantification. Otherwise you may (and I have!) spend a lot of time chasing down an "interesting" quantification result that turns out to be a segmentation error when you go back to the images, and there's no feeling more frustrating than that.

  • Can CellProfiler detect changes between groups I know should be different? Even if the quantification you get from CellProfiler isn't perfect (and it never will be perfect on every image!), if your pipeline is well tuned it should be accurate enough to help you build a useful model. Generally you test this by checking whether it can differentiate between negative controls, treatments, and (if present) positive controls. You say you're seeing differences between treatments and controls, so that's great!
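For the second check, one way to put a number on "the groups differ" without assuming anything about the data's distribution is a permutation test on the per-worm measurements. A minimal sketch, with toy fat-area values standing in for your exported measurements:

```python
# Hypothetical sketch: permutation test on per-worm fat area, asking whether
# the control/treatment difference is larger than chance relabeling produces.
# The numbers below are toy values, not real measurements.
import random

def perm_test(group_a, group_b, n_iter=10000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        a = pooled[:len(group_a)]
        b = pooled[len(group_a):]
        # Count shuffles whose mean difference is at least as extreme
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_iter  # approximate two-sided p-value

control   = [120, 135, 128, 140, 132]
treatment = [80, 95, 88, 102, 90]
print(perm_test(control, treatment))  # small p-value -> groups really differ
```

This makes no normality assumption, which is handy when droplet counts or areas are skewed.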

Absent manually created ground truth, if CellProfiler’s “model” of what your objects are reflects (to within reasonable standards of accuracy) what you can visually see to be true and quantitatively recapitulates known differences, I think most people would agree that the pipeline is succeeding.
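If you do hand-annotate a few images as ground truth, the standard way to score agreement between your mask and CellProfiler's is pixel-wise intersection-over-union (IoU). A minimal sketch using toy 0/1 grids (in practice you'd load the exported label images as arrays):

```python
# Hypothetical sketch: pixel-wise IoU between a CellProfiler fat mask and a
# hand-drawn ground-truth mask. Masks are toy 0/1 grids for illustration.
def iou(mask_a, mask_b):
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a and b   # pixel marked in both masks
            union += a or b    # pixel marked in either mask
    return inter / union if union else 1.0

cp_mask   = [[0, 1, 1],
             [0, 1, 0]]
hand_mask = [[0, 1, 1],
             [1, 1, 0]]
print(iou(cp_mask, hand_mask))  # 3 shared pixels of 4 marked -> 0.75
```

An IoU near 1.0 on a handful of spot-checked images is usually considered good evidence the segmentation is trustworthy on the rest.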


Thank you! I think that basically explains everything! I will give it a try then!
Best wishes,