Computer lockup


I have been trying to process some fairly large datasets with CP 2.2: 4 experiments with 7800 images each, and each experiment has ~3500 objects. I have made 3 different attempts to process this data.

Attempt 1: an external hard drive and a computer with a 4-core processor and 16 GB RAM
Attempt 2: an external hard drive and a computer with an 8-core processor and 32 GB RAM
Attempt 3: an internal hard drive and a computer with an 8-core processor and 32 GB RAM

Each attempt ends in a fatal freeze, i.e., the keyboard will not respond to Caps Lock or Num Lock (the lights on the keyboard do not change). My operating system is Windows 7.

The image is from my last attempt. This is what the computer freeze looks like, but of course I was unable to rearrange the windows, so we lost out on some troubleshooting information. I suspect I will need to learn how to batch process? And somehow this is different from the Groups module?

The computer works fine with a cropped dataset (~400 images).

Feedback is greatly appreciated. Lee


Hi Lee,

Does it work if you run just one or two experiments at a time rather than all 4? That’d be my first suggestion for something to try. The other would be to make sure that the CP output window isn’t blocked by the main CP window next time, so if there is an error it can be seen (which will give you/us a better place to start troubleshooting).

Batch processing is indeed different from Groups. Groups just says that a set of images should be processed together (e.g., all the frames in a timelapse movie should be analyzed together so that you can do tracking on them). If you’re typically going to be analyzing datasets of this size, you may need to start looking into a more powerful image analysis computer or some sort of cluster solution.


What would a more powerful computer possess?

I’ve tried with one experiment at a time and it works. Once I go to two experiments (7800 images, ~3000 objects each), I get this problem. I haven’t had this problem in the past when I have done more than 15,000 images, though (those experiments were shorter, and I had fewer objects).

The image is from when my computer locked up. I started around 6 pm and, as you can see, the computer froze at midnight.

My hypothesis was that the program couldn’t write more than a 3 GB file at a time, but that was disproved when I moved to my largest dataset (3.2 GB).


What would a more powerful computer possess?

I was thinking it might be a RAM issue, but given that you’ve said it’s a 32 GB RAM machine, and looking at the log you posted there, I doubt that’s it. It seems to finish all the measurements and then just horribly die when it’s time to actually start writing. I’d say that maybe you’re trying to write to a directory you don’t have write access to, but that wouldn’t explain why it fails on larger image sets and succeeds on smaller ones. Do you have enough space on the output drive?
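If it helps, here is a minimal Python 3 sketch for sanity-checking the output location before kicking off another long run; the output_dir path is just a placeholder for wherever your Default Output Folder actually points:

```python
import shutil
import tempfile

# Placeholder path -- point this at your CellProfiler Default Output Folder.
output_dir = r"D:\CellProfiler\output"

# Report how much free space is left on the drive holding the output folder.
total, used, free = shutil.disk_usage(output_dir)
print("Free space: %.1f GB of %.1f GB" % (free / 1e9, total / 1e9))

# Try to create (and immediately discard) a temporary file there, which
# catches missing write permissions before the 6-hour run does.
try:
    with tempfile.TemporaryFile(dir=output_dir):
        print("Write access to %s looks fine" % output_dir)
except OSError as err:
    print("Cannot write to %s: %s" % (output_dir, err))
```

If both the free space and the write test look fine, that at least rules those two out.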

It’d be helpful if you could post your pipeline, just to see if there’s something funny going on in your configuration of the ExportToSpreadsheet module that would explain this behavior (I doubt it, but there’s always a chance); I’ll make an official issue on GitHub once I have that. I couldn’t find any previous instance of these specific errors being reported in either the forum or on GitHub, but I did find a post from one of our previous software engineers with advice I think might be relevant:

ExportToSpreadsheet writes its measurements at the end of a run. For 1000 images and millions of cells, this is painfully slow. You should consider shorter runs or using ExportToDatabase with a sqlite database instead.

You could then use any of several free database tools to export a spreadsheet from the SQLite database if you still need the data in spreadsheet form.
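For example, here is a rough sketch using only the Python standard library, assuming Python 3 and placeholder database/table names (adjust them to whatever your ExportToDatabase settings actually write):

```python
import csv
import sqlite3

# Assumed names -- change these to match what ExportToDatabase created.
db_path = "DefaultDB.db"
table = "Per_Object"

conn = sqlite3.connect(db_path)
cursor = conn.execute("SELECT * FROM " + table)

# Write the column names first, then every row of the table.
with open(table + ".csv", "w", newline="") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow([col[0] for col in cursor.description])
    writer.writerows(cursor)

conn.close()
```

A dedicated SQLite browser would do the same thing through a GUI, but a small script like this is handy when the per-object table is too big to open comfortably.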


This problem persists. Attached is my pipeline.

It happens with a 5000-object experiment with 2000 images and two channels.

Please make an issue.

170206.fura2.analysis.cpproj (663.2 KB)