Identifying dead cells in DAPI stain


#1

Thank you very much for the free program CellProfiler! I have been working with CP for a couple of months and like it a lot!!

I am using DAPI staining to identify nuclei at 10x magnification. Since some features might differ between apoptotic/dying cells and healthy cells, I have been looking for a way to identify both cell types and make calculations for each subgroup.

Fortunately, CP offers many features… Dying cells in my hands are brighter than others, so one could obviously use “Measure Objects” with maximum intensity. This seems to be a sensitive, but not a very specific, marker.
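To make the sensitive-but-not-specific point concrete, here is a toy numpy sketch with made-up per-nucleus intensity values (not CellProfiler output): a single brightness threshold catches every dead cell but also flags some bright live cells, e.g. mitotic ones.

```python
import numpy as np

# Hypothetical per-nucleus maximum DAPI intensities on a 0-1 scale
dead = np.array([0.90, 0.85, 0.95, 0.88])          # dead cells: all bright
live = np.array([0.40, 0.55, 0.82, 0.87, 0.45])    # live cells: a few bright outliers

threshold = 0.8
sensitivity = (dead > threshold).mean()   # fraction of dead cells caught
specificity = (live <= threshold).mean()  # fraction of live cells correctly excluded
print(sensitivity, specificity)           # high sensitivity, lower specificity
```

With these illustrative numbers the threshold finds all dead nuclei (sensitivity 1.0) but misclassifies two of five live nuclei (specificity 0.6), which is exactly the pattern described above.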

Furthermore, the texture of these nuclei is much more heterogeneous. Which of the features of “Measure Texture” would be best?

Since distinguishing nuclei of dying and living cells should be a common question, maybe you could comment on whether you have a solution for this problem? Which of these texture features would work best?
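For intuition about what texture features capture: CellProfiler's texture measurements are based on gray-level co-occurrence statistics. Below is a minimal numpy sketch (not CellProfiler's implementation) of two such Haralick-style features, contrast and homogeneity, applied to a synthetic smooth patch versus a synthetic speckled patch standing in for heterogeneous chromatin.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1):
    """Tiny gray-level co-occurrence matrix for a horizontal pixel offset dx.
    Returns (contrast, homogeneity): heterogeneous texture gives high
    contrast and low homogeneity; smooth texture the opposite."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize to gray levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-dx].ravel(), q[:, dx:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = (glcm * (i - j) ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    return contrast, homogeneity

rng = np.random.default_rng(0)
smooth = np.clip(0.5 + rng.normal(0, 0.02, (32, 32)), 0, 0.999)  # healthy-looking
speckled = rng.uniform(0, 1, (32, 32))                           # heterogeneous
c_s, h_s = glcm_features(smooth)
c_x, h_x = glcm_features(speckled)
```

The speckled patch yields much higher contrast and lower homogeneity than the smooth one, which is why such features can separate heterogeneous apoptotic nuclei from smooth healthy ones. The offset `dx` corresponds to the texture scale mentioned in the reply below.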

Thank you very much,

Benjamin


#2

Sorry for the long delay! Your message seems to have slipped through the cracks.

There are two ways to address this challenge of separating two cell types:
(1) Completely within CellProfiler:
(a) Make two ‘fake’ images, in Photoshop or another program - one image should contain live cells and the other should contain dead cells. These cells should be cropped out of your regular images and pasted into your ‘fake’ images (preferably the cells should come from a variety of images within the experiment).
(b) Now, set up a CellProfiler pipeline for those two images that identifies the nuclei and as many compartments of the cells as you can think of. It sounds like you have only DNA labeled, so perhaps you can just identify nuclei and then use ExpandOrShrink to shrink them by a few pixels, and then use IdentifyTertiary to define the outermost ring of the nucleus, and innermost part of the nucleus.
(c) Importantly, you should add as many MeasureObject modules as you can think of to the pipeline to measure DNA within each compartment you defined (area, shape, intensity, texture, etc.). Be sure to measure a couple of different scales of texture in the MeasureTexture modules.
(d) Lastly, add the CalculateStatistics module to the end of the pipeline. See the help for that module about adding a LoadText module upstream, which will load a text file telling which of the two images is a positive and which is a negative control.
(e) Run the pipeline and export the data - in particular the “Experiment” data. You are interested in the Z’ factors that have been calculated for each measured feature. The Z’ factor is a measure of the ability of that feature to distinguish positive and negative controls. Find the measurements that have the highest Z’ factor and you will have found the features you want to use! You can then process your real images using the ClassifyObjects module to sort cells based on whatever feature you have determined is best. You can also use the FilterByObjectMeasurement to remove objects that don’t meet certain criteria. By the way, we are hoping to write a paper describing this process in more detail soon.
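The Z' factor mentioned in step (e) can be sketched in a few lines of numpy (the feature values below are made up for illustration, not real CellProfiler output). A Z' near 1 means the feature's positive- and negative-control distributions are well separated relative to their spreads.

```python
import numpy as np

def z_prime(positive, negative):
    """Z' factor = 1 - 3*(sigma_pos + sigma_neg) / |mu_pos - mu_neg|.
    Values approaching 1 indicate a feature that cleanly separates
    the positive and negative controls."""
    pos = np.asarray(positive, float)
    neg = np.asarray(negative, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Hypothetical values of one feature (e.g. max nuclear intensity) per cell
dead = [0.90, 0.85, 0.92, 0.88, 0.91]   # positive control image
live = [0.40, 0.45, 0.42, 0.38, 0.44]   # negative control image
score = z_prime(dead, live)
```

Ranking all measured features by this score is exactly the "find the measurements with the highest Z' factor" step: you would compute it once per feature and keep the top scorers for ClassifyObjects.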

(2) Collaborate with us to obtain our new tool, CellVisualizer (cellvisualizer.org). It is not yet released. CellVisualizer incorporates machine learning algorithms that allow you to classify cells based on a combination of measurements rather than just 1 or 2 measurements as I have described here. The combination of measurements allows you to score complex and subtle phenotypes.
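To illustrate the idea of classifying on a combination of measurements rather than a single one (this is a generic nearest-centroid toy in numpy, not CellVisualizer's actual algorithm, and the feature values are invented): each cell becomes a point in feature space, and a new cell is labeled by the closest class mean.

```python
import numpy as np

# Hypothetical training cells: each row is (max_intensity, texture_contrast)
dead = np.array([[0.90, 4.0], [0.85, 3.5], [0.92, 4.2]])
live = np.array([[0.50, 1.0], [0.55, 1.2], [0.48, 0.9]])

centroids = {"dead": dead.mean(axis=0), "live": live.mean(axis=0)}

def classify(cell, centroids):
    """Nearest-centroid rule: label a cell by the closest class mean
    in the combined feature space."""
    dists = [np.linalg.norm(cell - c) for c in centroids.values()]
    return list(centroids)[int(np.argmin(dists))]

label = classify(np.array([0.88, 3.8]), centroids)
```

A bright, heterogeneous test cell lands nearest the "dead" centroid. Real machine-learning classifiers use the same geometric intuition with many more features and more flexible decision boundaries, which is what lets them score subtle phenotypes that no single measurement captures.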

Cheers,
Anne