Confidence of classification, output scores for each class per object?


#1

Hi, I’ve been working with CPA 2.0 for the past few weeks to classify colonies in competition experiments as belonging to one species or another. It has been working fairly well, but I was hoping to generate some sort of measure of confidence for the scoring. Since I have so many images, I’d really like to prioritize manual verification for images with lots of borderline cases, rather than for images with generally high confidence but a few outliers.

If I could get at the scores for each class for each object, then I could tell whether an object has only one high-scoring class (no manual verification needed) or whether it scored very closely in two classes (might be worth verifying if time permits).

Similar topics seemed to refer to CPA 1; is there a better way to do this in CPA 2.0? Thanks for all your hard work!
-Chris


#2

The current beta of CPA 2.2.1 (more information here) uses a confusion matrix as its default evaluation method. Here is an example of what it looks like for 23 classes:

Is that what you were looking for?

Best,
David


#3

Hi David,
That is useful and would be great to have, so I’ll definitely give the new version a try. It’s a good metric for knowing, in general, how well the classifier works across a whole group of images.

I’m more interested in the per-object scores, though, rather than the per-class scores. There must be some way the classifier decides which class to assign a colony to. My understanding is that it sums the values of each rule for each class and then reports the class with the highest score: positive correlations contribute large positive values in the rule statement, negative correlations contribute large negative values, and neutral correlations contribute values close to zero. So for classes A, B, and C, if the sums over all rules for a single object are A=0.3, B=0.8, and C=-0.2, the classifier would report “object assigned to B”. Is there a way to report “A=0.3, B=0.8, C=-0.2” instead (something like the sketch below)? If the scores are sufficiently different it shouldn’t be a problem, but if A were actually 0.75, that might be a borderline case I’d want to check.
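For illustration, here’s roughly the kind of check I have in mind, using made-up per-object scores (this isn’t anything CPA currently outputs):

```python
# Hypothetical per-object class scores, just to illustrate the borderline
# check I'd like to do; these numbers are made up, not CPA output.
scores = {"A": 0.3, "B": 0.8, "C": -0.2}

# Rank the class scores and look at the gap between the top two.
ranked = sorted(scores.values(), reverse=True)
margin = ranked[0] - ranked[1]

# Flag the object for manual review if the top two classes are close.
MARGIN_THRESHOLD = 0.1  # arbitrary cutoff, would need tuning
assigned = max(scores, key=scores.get)
print(assigned, margin, margin < MARGIN_THRESHOLD)
```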

I’m guessing it’s calculated somewhere in the back end, but never makes it to the front end or gets saved anywhere. Just checking if there’s something I was missing?
Thanks again,
Chris


#4

I don’t think the user has direct access to the summed classifier values. But they are set up as simple SQL queries, so you could easily calculate them yourself given the rules.

The rules are something like this:
IF (pHH3_Intensity_MeanIntensity_Origd4 > 0.119220003486, [1.00000059605, -1.00000059605], [-1.00000059605, 1.00000059605])
which means
if FEATURE_X for a particular object > some_value, then add the first bracketed pair to a running total.

But if FEATURE_X for a particular object <= some_value, then add the second bracketed pair to the same running total.

These pairs are then summed over all objects (or you can just look at the running total for a single object), and whichever class has the highest total wins. The classes are just the indices within the bracketed pair; you can of course use triplets, etc. if you have three or more classes. So it is here that you can get access to the individual running totals for each class, which I think is what you are looking for (see the sketch below).
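As a rough sketch of that logic (the single rule below just reuses the example values above; your exported rules and feature names will differ), the per-object totals could be computed like this:

```python
# Each rule is (feature_name, threshold, scores_if_greater, scores_if_not),
# where each score list has one entry per class. The single rule here reuses
# the example values from above; real rule sets will have many rules.
rules = [
    ("pHH3_Intensity_MeanIntensity_Origd4", 0.119220003486,
     [1.00000059605, -1.00000059605],
     [-1.00000059605, 1.00000059605]),
    # ... more rules ...
]

def score_object(measurements, rules):
    """Return the per-class running totals for a single object.

    `measurements` maps feature names to that object's measured values.
    """
    n_classes = len(rules[0][2])
    totals = [0.0] * n_classes
    for feature, threshold, if_greater, if_not in rules:
        contribution = if_greater if measurements[feature] > threshold else if_not
        for i, value in enumerate(contribution):
            totals[i] += value
    return totals  # the assigned class is the index of the largest total

# One made-up object measurement, just for illustration.
obj = {"pHH3_Intensity_MeanIntensity_Origd4": 0.2}
print(score_object(obj, rules))
```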

This is done in ScoreAll in classifier.py, and I believe that in the release version of CPA you can inspect the terminal output when you Score All and see the query it runs. Not sure, but I think there is enough info above to get you going.
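If you’d rather do it on the database side, a query along the following lines would return per-object, per-class totals. The table name, key columns, and database path are assumptions for illustration only; the query CPA actually builds (which should be what shows up in the terminal output) will differ and will contain one CASE term per rule for each class.

```python
import sqlite3

# Sketch of the SQL route: one CASE expression per rule per class, summed
# into a per-object score. Table/column names and the database path are
# placeholders, not necessarily what CPA uses.
query = """
SELECT
    ImageNumber,
    ObjectNumber,
    (CASE WHEN pHH3_Intensity_MeanIntensity_Origd4 > 0.119220003486
          THEN 1.00000059605 ELSE -1.00000059605 END
     /* + one CASE term per additional rule */) AS score_class_1,
    (CASE WHEN pHH3_Intensity_MeanIntensity_Origd4 > 0.119220003486
          THEN -1.00000059605 ELSE 1.00000059605 END
     /* + one CASE term per additional rule */) AS score_class_2
FROM per_object;
"""

conn = sqlite3.connect("my_experiment.db")  # placeholder path
for image_number, object_number, score_1, score_2 in conn.execute(query):
    print(image_number, object_number, score_1, score_2)
```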


#5

Hi David,
That’s what I was looking for. It’s not going to be as easy as I hoped, but it does seem very doable. Thanks very much for all the help!
-Chris


#6

Along these lines, when we ask CPA to load a new set of object images to curate as rule sets are being developed, does CPA try to select borderline cases for further validation? I know nothing of machine learning, but it seems intuitive that the algorithm would gain the most from supervised classification of objects that were initially categorized with lower confidence (i.e., with category scores that are near one another), as opposed to confirmation of cases that were initially categorized with high confidence.


#7

Hi @bbraun,

Great idea! For the new scikit-learn machine learning algorithms in CPA, we allow fetching ‘borderline cases’ for training. Our current thoughts about this are listed here. You are more than welcome to join the discussion and propose ideas.

David


#8

Hi,

Is there a way to classify objects in a continuous way rather than a discrete one (positive/negative)?
I would like to score objects on a continuous scale, say from 0 to 1, where ‘0’ is completely negative and ‘1’ is completely positive. My intention is to get a score for each cell, so that I can plot a histogram based on a morphological feature.

With regards,
Vimal


#9

There isn’t, no, sorry. The closest you could come to that would be to classify the objects and then plot them based on the measured features the classifier reports as having been important (if nuclear shape were the main determinant, for example, those might be things like Nuclei_AreaShape_FormFactor and Nuclei_AreaShape_Eccentricity); see the sketch below.

Now that’s on a per-object basis; on a per-well basis, for example, you will get an enrichment score from 0 to 1 that represents the enrichment of the different classes in each of your wells.
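For example, a minimal sketch of that per-object workaround, assuming you have exported the object measurements along with each object’s assigned class to a CSV (the file name and the “class” column are assumptions about your export; the feature column is just the example above):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Plot the distribution of one informative feature for each assigned class.
# "per_object_with_classes.csv" and the "class" column are placeholders for
# however you export your classified objects.
objects = pd.read_csv("per_object_with_classes.csv")

for label, group in objects.groupby("class"):
    plt.hist(group["Nuclei_AreaShape_FormFactor"], bins=50,
             alpha=0.5, label=str(label))

plt.xlabel("Nuclei_AreaShape_FormFactor")
plt.ylabel("Object count")
plt.legend()
plt.show()
```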