Image orientation changes my quantification


#1

Hi all,

I noticed that my quantification changes if the orientation of my imported images is different.
I am using the IdentifyPrimaryObjects module to identify objects above a certain threshold and in a certain size range. I thought the quantification should be the same for the same image regardless of orientation. However, I notice slight differences when the image is rotated 180 degrees and big differences in quantification when it is rotated by e.g. 20 or 45 degrees. Why is that? Isn’t the module just looking for above-threshold pixels, which should be the same regardless of orientation? What can I do to normalize for that effect?

Thanks for the help,
Larissa


#2

Hello,

My guess is:

When you rotate the image at an angle that’s not 180 or 360 degrees, in your case 20 or 45 degrees, the empty corner area is extra space with an intensity of 0.
This extra space changes the dynamic range of the given image.
E.g. without rotation, the image has only two shades of gray: background and foreground.
With rotation, the image now has three shades of gray: background, foreground, and the black empty space, which needs a different correction factor.
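
If it helps, here’s a quick way to see that effect (a sketch using NumPy/SciPy, not anything from the CellProfiler pipeline itself):

```python
import numpy as np
from scipy import ndimage

# A two-shade test image: background (50) and a foreground square (200).
img = np.full((100, 100), 50, dtype=np.uint8)
img[30:70, 30:70] = 200

# Rotating 45 degrees pads the new corners with 0 (order=0 uses nearest-neighbor,
# so we see only the padding effect, without interpolation).
rot = ndimage.rotate(img, 45, reshape=True, order=0, cval=0)

print(np.unique(img))  # [ 50 200] -> two shades of gray
print(np.unique(rot))  # [  0  50 200] -> the padding adds a third, darker shade
```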

Hope that explains it.


#3

Hi Minh,

thanks for the answer.
I see your point that the black area changes. But actually all my images contain black, since I define the ROI separately in ImageJ beforehand. That’s why I define the threshold based on the medianIntensity of the area different from 0. In my understanding the threshold should therefore not change when simply rotating the image, but it does.
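
To illustrate what I mean, here’s a toy sketch (hypothetical, not my actual pipeline): the threshold is computed only from the nonzero pixels, so the black ROI padding shouldn’t influence it.

```python
import numpy as np

def roi_median(img):
    # Median intensity of the ROI only; black (0) pixels outside it are ignored.
    return np.median(img[img > 0])
```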

Best, Larissa


#4

Interesting.
Can you please check the image info (e.g. in ImageJ) for both the non-rotated and the rotated version?
Are they the same bit depth, e.g. both 16-bit or 24-bit?
We’re curious about this issue too, so if you don’t mind, please upload the two versions.
Best,


#5

Hi Minh,

thanks for looking into it; maybe I missed something very trivial… All images are 8-bit.
Anyway, I attached my pipeline and six test images. For the first three, the area and medianIntensity come out the same, but a different number of cells is still recognized. For the last three, the area is for some reason bigger, and so is the medianIntensity, which makes no sense to me to begin with, and then three different cell counts get quantified.

Thanks again for the help, Larissa

GFP cells 03-20-17 .cppipe (14.5 KB)



#6


#7

In general, if you’re rotating images by anything other than 90 degrees (or a multiple thereof), whatever program you’re using to do the rotation has to do some averaging and/or guessing of certain pixel values - see this MicroscopyU tutorial. This means you’ll literally be changing the pixel values present in your image when you do a rotation, so it’s not surprising that you may get a different quantification. In general, I’d avoid rotating an image unless you have a very, very good reason to.
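
A quick way to convince yourself of this (a sketch using NumPy/SciPy; your rotation tool may behave slightly differently):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(50, 50)).astype(float)

# Rotate 20 degrees and back: interpolation makes the round trip lossy.
back = ndimage.rotate(ndimage.rotate(img, 20, reshape=False), -20, reshape=False)
print(np.allclose(img, back))  # False -> pixel values were altered

# Rotate 180 degrees: a pure re-indexing with no interpolation, fully reversible.
flip = np.rot90(img, 2)
print(np.array_equal(np.rot90(flip, 2), img))  # True -> values unchanged
```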


#8

Thanks bcimini, that makes sense.
I will definitely avoid rotating images at all :wink:
However, it does not explain why the image rotated 180 degrees still gives a different result.

Best, Larissa


#9

However, it does not explain why the image rotated 180 degrees still gives a different result.

Yeah, I agree that’s weird. Which of your 6 test images is the unrotated one and which is the 180-degree one?


#10

test1 is unrotated, test3 is rotated 180 degrees


#11

It looks like it stems primarily from two objects that are split in the 180-degree rotated image (bottom; rotated back by 180 degrees just so they’re easy to compare) but not in the original image (top), leading to a slightly different object count and average object size. If you look pixel by pixel, there are a couple of other minor differences, but that’s the majority of it.

My guess from looking at these (though I can’t say for certain without looking at the source code) is that there’s some sort of tiny bias in how CellProfiler does declumping via maxima suppression:

Imagine you had three points A, B, and C; A is 6 pixels away from B, which is 11 pixels away from C. You start from one end, then throw out any point that’s fewer than 10 pixels away from a point you’ve already kept. If you start with A, you’ll throw out B (which is only 6 pixels away) but keep C (which is 17 pixels away from A), leading to A and C being the ‘final set’. If you flip the whole thing 180 degrees and start from C, you’ll keep B (which is 11 pixels away) but then throw out A (6 pixels from B), leading to B and C being the ‘final set’.
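
In toy code, that order dependence looks something like this (a hypothetical 1-D sketch, not CellProfiler’s actual implementation):

```python
def suppress_maxima(points, min_dist=10):
    """Greedily keep points, discarding any that are closer than min_dist
    to an already-kept point. The result depends on the input order."""
    kept = []
    for p in points:
        if all(abs(p - q) >= min_dist for q in kept):
            kept.append(p)
    return kept

# A=0, B=6, C=17: B is 6 px from A and 11 px from C.
print(suppress_maxima([0, 6, 17]))  # [0, 17] -> keeps A and C
print(suppress_maxima([17, 6, 0]))  # [17, 6] -> keeps C and B
```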

Again, I’m not certain, but I expect it’s something along those lines. Does that make sense?


#12

Hi bcimini,
thanks for looking into that in so much detail :slight_smile:

Yeah, it makes sense, and I have actually already observed slight changes in declumping sometimes. Biologically speaking, slight changes should not make a big difference, but mathematically it would be nice if CellProfiler could account for these differences, e.g. by considering both directions ABC and CBA.

Anyway, thanks for all the help.
For my analysis I will just avoid doing any rotation, so as not to introduce errors that way.
Best, Larissa


#13

Biologically speaking, slight changes should not make a big difference, but mathematically it would be nice if CellProfiler could account for these differences, e.g. by considering both directions ABC and CBA

I agree, but my guess is that it’s a practical consideration: to be optimal you’d want to consider ALL paths, which would be computationally intensive, since there are easily many thousands or tens of thousands of local maxima in any given image. Plus you’d still have to make some sort of judgement call about which ‘path explorations’ to use at the end (the distribution that comes up the most ways? the one with the most nodes? the one with the fewest nodes?). At some point you have to throw your hands up and make SOME sort of implicit assumption so that the module runs at a reasonable speed. Again though, this is an assumption built on another assumption; if you want to discuss this in more detail with our software team, the best thing to do would probably be to open an issue on our GitHub, where you’ll get someone more knowledgeable about the code base than I am.
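
Just to give a rough sense of scale (back-of-the-envelope, not based on the actual implementation): the number of possible visit orders for n maxima is n factorial, which explodes almost immediately:

```python
import math

# Exhaustively checking every ordering of n maxima means n! possibilities.
print(math.factorial(10))                # 3628800 orderings for just 10 maxima
print(math.log10(math.factorial(1000)))  # ~2567.6 -> 1000! has 2568 digits
```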


#14

Yeah, good point, I hadn’t thought about that.

I am fine for now, thanks for your help. Pretty awesome to get feedback so quickly!