It looks like it stems primarily from two objects that are split in the 180°-rotated image (bottom, rotated back 180° so the two are easy to compare) but not in the original image (top), leading to a slightly different object count and average object size. Pixel by pixel there are a couple of other minor differences, but that accounts for most of it.
My guess from looking at these (though I can't say for certain without digging into the source code) is that there's some sort of tiny order bias in how CellProfiler does declumping via maxima suppression:
Imagine three collinear points A, B, and C, where A is 6 pixels from B and B is 11 pixels from C (so A is 17 pixels from C). You work through them from one end, throwing out any point that is fewer than 10 pixels from a point you've already kept. If you start with A, you throw out B (only 6 pixels away) but keep C (17 pixels from A), so A and C end up as the 'final set'. Flip the whole thing 180 degrees and start from C instead: you keep B (11 pixels away) but then throw out A (6 pixels from B), so B and C end up as the 'final set'.
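To make that concrete, here's a tiny sketch of that kind of greedy, order-dependent suppression (this is just an illustration of the idea, not CellProfiler's actual code; the function name and 1D setup are made up for the example):

```python
def suppress(points, min_dist=10):
    """Greedily keep each point only if it is at least min_dist
    pixels from every point already kept (order-dependent!)."""
    kept = []
    for p in points:
        if all(abs(p - k) >= min_dist for k in kept):
            kept.append(p)
    return kept

# Collinear points A=0, B=6, C=17 (A-B: 6 px, B-C: 11 px, A-C: 17 px)
print(suppress([0, 6, 17]))   # start from A -> keeps A and C: [0, 17]
print(suppress([17, 6, 0]))   # start from C -> keeps C and B: [17, 6]
```

Same points, same distance threshold, different survivors depending purely on which end you start from, which is exactly the kind of asymmetry a 180° rotation would flip.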
Again, I’m not certain, but I expect it’s something along those lines. Does that make sense?