That is useful and would be great to have, so I'll definitely give the new version a try. It's a good metric for knowing, in general, how well the classifier works across a whole group of images.
I'm more interested in the per-object scores, though, rather than the per-class scores. There must be some way the classifier decides which class to assign a colony to. My understanding is that it sums the values of each rule for each class and then reports the class with the highest total: positive correlations contribute large positive values in the rule statement, negative correlations contribute large negative values, and neutral correlations contribute values close to zero. So for classes A, B and C, if a single object sums to A=0.3, B=0.8 and C=-0.2 over all rules, the classifier would report "object assigned to B". Is there a way to report "A=0.3, B=0.8 and C=-0.2" instead? When the scores are sufficiently different it shouldn't be a problem, but if A were actually 0.75, that would be a borderline case I'd want to check.
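To make the idea concrete, here is a minimal sketch of the scoring scheme I have in mind. The rules, feature names, thresholds and weights are all invented for illustration; I'm not claiming this is how the classifier actually stores its rules internally, only that the per-class totals must exist at some point before the argmax:

```python
def score_object(rules, measurements):
    """Sum each rule's per-class contributions for one object.

    rules: list of (feature, threshold, weights_if_above, weights_if_below),
    where each weights_* dict maps class name -> score contribution.
    (Hypothetical rule structure, for illustration only.)
    """
    totals = {}
    for feature, threshold, w_above, w_below in rules:
        weights = w_above if measurements[feature] > threshold else w_below
        for cls, w in weights.items():
            totals[cls] = totals.get(cls, 0.0) + w
    return totals

# Two made-up rules over made-up measurements.
rules = [
    ("Intensity", 0.5,
     {"A": 0.3, "B": 0.5, "C": -0.2},
     {"A": -0.3, "B": -0.5, "C": 0.2}),
    ("Area", 100,
     {"A": 0.0, "B": 0.3, "C": 0.0},
     {"A": 0.0, "B": -0.3, "C": 0.0}),
]
obj = {"Intensity": 0.7, "Area": 150}

scores = score_object(rules, obj)       # per-class totals, e.g. A, B, C
assigned = max(scores, key=scores.get)  # what gets reported today
```

What I'd like is access to `scores` itself (the full dict), not just `assigned`, so borderline objects with two close class totals can be flagged for manual review.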
I'm guessing these scores are calculated somewhere in the back end but never make it to the front end or get saved anywhere. Just checking whether there's something I'm missing?