Analyzing Picrosirius Red polarized light images


Hi all,
I’m a very new CellProfiler user. I’ve spent some time reading through the manual and trying to decode existing pipelines our lab has used, and I have a basic grasp of some settings and the logic behind them, but I’m a little lost on others.

I am trying to create a pipeline that achieves the same measurement information as described in this paper:
where they used SigmaScan to analyze collagen content and fiber color, as well as spatial distribution of fibers in polarized light images of picrosirius red stained histological sections. This and other papers describe hue definition ranges for red, orange, yellow, and green collagen fibers in their methods and I’m a little confused as to how I can define these for my images using CellProfiler.

I’ve tried building a pipeline where I used custom definitions for my stain under “UnmixColors”, and used little snippets of a picture (where by eye I isolated a portion of collagen fibers that were strictly red, strictly orange, etc.) to use as my “reference photos” for the estimation. However, this (probably incorrect) method clearly grabs most of the photo using these parameters instead of just selecting the portions that each color is supposed to refer to, as evidenced when I step through test mode.

I think my biggest hurdle is trying to figure out how to ID just the color portions that I want. Any help would be greatly appreciated!

First attempt at pipeline: PicroSiriusRed_polarized_pipeline.cppipe (20.6 KB)




The values you input in UnmixColors are quite incorrect.

For example, if you want to pick Red fibers, it’s better to use just one stain per UnmixColors module (don’t add Orange, Yellow, and Green subsequently in the same module).
The values for custom absorbance would be then:

You can see the values for Red would be lowest, i.e. “please don’t absorb red color away”

Then you make another UnmixColors module just for Orange / Yellow / Green one at a time.

At the moment you have all values near 0.57 for all three channels, which reads as “please absorb each color about half way”…



Thank you for your response! I guess another follow-up question is how did you determine these absorbances? I know in ImageJ I can hover over a given pixel and get the RGB coordinates for a pixel but am not sure how this translates to absorbance values.

Secondly, I think I’m going to need to ID pixels that are within a range of values (e.g. all red pixels that fall between the hue definitions of 2-9 and 230-256 for an 8-bit image). Is this possible?



If you use ImageJ, you can crop a portion of the pure red region into a new image. Then click Image > Color > Color deconvolution
and tick the option “Show matrix”, then you can see the absorbance range.

I’m not quite certain if there’s a specific method for the second question. My workaround would be to first use RescaleIntensity to scale the image to match the 8-bit range. Then in IdentifyPrimaryObjects, in the box “Lower and upper bounds on threshold”, input a range from 2/256 to 9/256 (0.0078125 to 0.03515625) for 2-9, and so on.
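For intuition, here is the 8-bit-to-fraction conversion and range test sketched in numpy (my own illustration with toy values, not CellProfiler code; CellProfiler intensities live on a 0-1 scale):

```python
import numpy as np

# Toy grayscale pixel values on a 0-1 scale (hypothetical data)
hue = np.array([0.0, 2/256, 5/256, 9/256, 50/256, 1.0])

# The 8-bit range 2-9 expressed as fractions of 256
lo, hi = 2/256, 9/256
in_range = (hue >= lo) & (hue <= hi)

print(lo, hi)              # 0.0078125 0.03515625
print(in_range.tolist())   # [False, True, True, True, False, False]
```

These are the same bounds you would type into the “Lower and upper bounds on threshold” box.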


I guess another follow-up question is how did you determine these absorbances?

As @Minh stated, the first step is to crop down a region that contains ONLY one stain of interest- ImageJ/FIJI is usually best for this. You can then either use the trick he showed you, or use the “Custom->Estimate” tool in CP’s UnmixColors module.
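For a sense of how such an estimate relates to RGB pixel values: color-deconvolution absorbances typically come from the Beer-Lambert relation, optical density = -log10(transmitted/incident). A rough sketch of that idea (my own illustration, assuming a uint8 RGB crop; not the exact code of either tool):

```python
import numpy as np

def estimate_absorbance(patch_rgb):
    """Rough absorbance vector from a crop containing ONLY one stain.
    patch_rgb: uint8 array of shape (h, w, 3)."""
    mean_rgb = patch_rgb.reshape(-1, 3).mean(axis=0)
    # Beer-Lambert: optical density = -log10(transmitted / incident light);
    # clip avoids log(0) for fully dark pixels
    od = -np.log10(np.clip(mean_rgb, 1.0, 255.0) / 255.0)
    return od / np.linalg.norm(od)  # normalize to a unit vector

# A saturated red patch transmits red light but absorbs green and blue,
# so its absorbance vector comes out lowest in the red channel.
red_patch = np.full((4, 4, 3), (200, 30, 30), dtype=np.uint8)
vec = estimate_absorbance(red_patch)
print(vec.round(3))
```

This matches the earlier remark that the Red absorbance value should be lowest for red fibers: bright in a channel means low absorbance there.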

Secondly, I think I’m going to need to ID pixels that are within a range of values (e.g. all red pixels that fall between the hue definitions of 2-9 and 230-256 for an 8-bit image). Is this possible?

This is definitely possible, though it’ll take a couple of steps- if you DON’T want pixels higher than a given number (such as 9), you’ll have to mask them out in a first step, then take everything between 2-9. See here for a bit more detail.


Hi bcimini,

Thanks for your response and link to the other discussion. I’ve read through the other thread and I think I understand what the steps are that you’re outlining for determining a threshold range:

  1. IdentifyPrimaryObjects: Set my upper and lower bounds for Otsu thresholding at 2-9 (or more specifically, 2/256 to 9/256)
  2. MaskImage: Use the initial grayscale image that I thresholded in the first IPO as the input image for this step as well
  • “Select object for mask” is the output from the first IPO, which would ostensibly mask anything above 9 for this image (correct?)
  • Mask is inverted
  3. IdentifyPrimaryObjects: Set my upper and lower bounds for Otsu thresholding at 0-2, with the input image being my MaskImage output from Step 2, to ID any pixels in the 2-9 range

Does this make sense or am I missing something?

Thanks so much for your input!



This is how I’d do it:

  1. (Apply)Threshold- Use a manual threshold and set this to 10/256. This will give you a mask of anything with an intensity greater than 9.
  2. MaskImage- mask your original image, and set ‘Invert the mask’ to ‘Yes’. You’ll now have an image that shows you ONLY pixels with a value less than or equal to 9/256.
  3. IdentifyPrimaryObjects- again use a manual threshold, and set it to 2/256. Now you’ll be identifying objects only in the 2-9 range.
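Those three steps, sketched with toy numpy values (my own illustration; CellProfiler works on a 0-1 intensity scale, and masked-out pixels are simply zeroed here for simplicity):

```python
import numpy as np

# Toy grayscale pixels on CellProfiler's 0-1 scale (8-bit value / 256)
img = np.array([0.0, 1/256, 2/256, 5/256, 9/256, 10/256, 0.5])

# Step 1: manual threshold at 10/256 -> everything at or above 10/256
bright = img >= 10/256
# Step 2: MaskImage with "Invert the mask" = Yes -> keep only pixels <= 9/256
kept = np.where(~bright, img, 0.0)
# Step 3: manual threshold at 2/256 on the masked image
red_range = kept >= 2/256

print(red_range.tolist())  # only the 2/256-9/256 pixels come out True
```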

Does that make sense?


Hi Beth,
Thanks for responding! This is very helpful- I am such a newbie at all of this image analysis that having a step-by-step really helps. I have a pipeline written using slight modifications on my initial color range values, instead calling Red 1-13, Orange 14-25, Yellow 26-52, and Green 53-110.

I’ve done as you suggested with the steps. For instance, for my Red pixels, I’ve thresholded my grayscale image using 14/256, masked the grayscale image with my thresholded mask, then used IdentifyPrimaryObjects with a manual threshold of 1/256. I then used “ConvertObjectsToImage” to get a binary image for each color range for my area measurement in pixels.
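For reference, the per-color bookkeeping this works out to can be sketched like so (toy 8-bit values of my own; in the actual pipeline the masking is done with the Threshold/MaskImage modules rather than a single range test):

```python
import numpy as np

# The 8-bit ranges from the post, as (name, low, high)
ranges = [("Red", 1, 13), ("Orange", 14, 25),
          ("Yellow", 26, 52), ("Green", 53, 110)]

hue = np.array([0, 5, 13, 14, 40, 53, 110, 111, 200])  # toy pixels

areas = {}
for name, lo, hi in ranges:
    # equivalent of: mask out everything above hi, then identify >= lo
    areas[name] = int(np.count_nonzero((hue >= lo) & (hue <= hi)))

print(areas)  # {'Red': 2, 'Orange': 1, 'Yellow': 1, 'Green': 2}
```

One thing this makes visible: pixels at 0 or above 110 fall into no bucket at all, so the four areas need not sum to the image size.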

I’m just having some issues where I can clearly see green sections by eye in my original images, yet for every single image my return for green area is “0”. Therefore I’m questioning whether my other areas (Red, Yellow, Orange) are also giving me incorrect values.

Attached is the most current version of the pipeline. Any ideas on what may be happening?

30Nov17_PSR_pipeline.cppipe (20.3 KB)


Your threshold to make the green image is 0.04296, I think you wanted 0.4296.

FWIW if you’d like to simplify this a bit you don’t need to do the “ConvertObjectsToImage” step, MeasureImageAreaOccupied can measure the area inside objects.


Hi Beth,
Thanks for catching that! One question I have with my results- with the MeasureImageAreaOccupied function on my objects, what is the difference between the outputs “AreaOccupied_AreaOccupied_Red_Objects” versus “AreaOccupied_TotalArea_Red_Objects,” for example? I have labeled “Red_Objects” to be the outcome of “IdentifyPrimaryObjects” within the specified range after masking all pixels above this range.

The numbers for each differ from one another in my output file and I’m not sure why or what exactly they’re measuring. I’d ultimately like to find the % area for each color range by: % area covered by color = (total pixel area of color / total pixel area of tissue)*100



From the module help:

Measurements made by this module
AreaOccupied: The total area occupied by the input objects or binary image.
TotalImageArea: The total pixel area of the image that was subjected to measurement, excluding masked regions.

So TotalArea_Objects is the area of the image that you masked and fed to IdentifyPrimaryObjects; for example, TotalArea_Orange_Objects would be the total area of all pixels 0-25/256. AreaOccupied_Objects is the total area of all the objects found by IdentifyPrimaryObjects; again for Orange_Objects, that’s just the objects within your size range and your intensity range of 14-25/256.
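As a toy numeric illustration of how the two measurements relate (made-up numbers, not real CellProfiler output):

```python
# Hypothetical pixel counts for the Orange branch of the pipeline
total_area = 10_000    # AreaOccupied_TotalArea_Orange_Objects:
                       # unmasked pixels fed to IdentifyPrimaryObjects
area_occupied = 1_800  # AreaOccupied_AreaOccupied_Orange_Objects:
                       # pixels inside the identified objects

# Percent of the measured region covered by the objects; for percent of
# total tissue, divide by the tissue area from the separate BF pipeline
pct = area_occupied / total_area * 100
print(pct)  # 18.0
```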

You can make those % calculations after the fact in Excel etc or by using the CalculateMath module to create them within the pipeline.


Wonderful, thanks for clarifying.

I’m having a little issue upon closer examination of my pipeline and the outputs. My current pipeline is thus:
PSR_ColorCalculation_Pipeline.cppipe (18.8 KB)

I’m finding, particularly with the Red range (which I’ve set to be 1/256-13/256 after masking everything 14/256 and higher), that CP is calling everything in the background as within my range. My images are on a black background, and my results from MaskImage and IdentifyPrimaryObjects are below for a sample image:

As you can see in the IPO step, all of the interstitial space between the tissue is being called “Red.” Do I need to do some other processing to pull just the tissue out from the background before I run the pipeline as I have it set now? I’ve tried various methods (which are essentially shots in the dark for me, since I’m a newbie at understanding all this image processing), including trying to “unmix colors” to ID the background (which was problematic), inverting the image, excluding objects touching edges, etc. before IPO. All to no avail.

FWIW I have separate side-by-side BF images that I have a separate pipeline for to calculate total tissue area that I’m pretty happy with, but I’m just having issues with these particular polarized PSR images being on a black background.

Thanks in advance for your wisdom, and for all the help you’ve already given me!


I honestly don’t have the experience to know what “should” be being pulled out in that image- can you explain more what you want identified?


So in the original “RAW_Gray” image, the polarized collagen fibers are on the dark background but do not cover the entire tissue area. In the “Mask_Red” image, essentially all the dark parts are the tissue, and the black background has become gray since I did an “invert mask.” My issue is that since the original image is on a black background, inverting it in the masking step creates this gray background in the masked image, and CP calls all that gray background “red” since (I assume) it falls within the color range I’ve specified for my red pixels.

If you look at the “Red Objects Outlines” output, I’d essentially like ONLY the objects that are dark on the inside and outlined in green, but it is outlining the entire tissue slice, so it calls all the gray parts “Red_Objects.” What ends up happening is that the output for red objects is over 100% of the total tissue area, since it is calling Red Objects both in the tissue itself and in the slice background; I find the actual tissue area excluding the background using a different brightfield image and pipeline.

So I guess I need to figure out a way to pull JUST the tissue out to quantify the red area. For reference, the original, unmanipulated image is this:


Does that make sense?


I think you just need to set a higher lower bound if you want to exclude that background area- hover over the background grey pixels to see what their value is and set your bound higher than that. You say up-thread you’re defining “red” as anything 1-13 (of 255); many background pixels will indeed have a value of 1/255, so I think setting it to 2 or 3 (or more) to 13 of 255 is more likely to get you the results you want.
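The effect of raising the lower bound, with made-up pixel values (my own toy sketch, not real image data):

```python
import numpy as np

# Toy 8-bit pixels: near-black slide background sits at 0-1, while
# genuine red fibers are brighter but still within the 1-13 window.
pixels = np.array([0, 1, 1, 1, 4, 9, 13, 40])

loose = (pixels >= 1) & (pixels <= 13)  # 1-13: grabs the background too
tight = (pixels >= 3) & (pixels <= 13)  # 3-13: background excluded

print(int(loose.sum()), int(tight.sum()))  # 6 3
```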


I used to analyze picrosirius red histologies too, and I developed the following pipeline to do it:
Sirius_working.cpproj (435.5 KB)
As you can see, I tried to identify the red as objects, but I realized that measuring the pixel intensity on the “red channel” would be more accurate.
Maybe this can help you.