Memory Error Message



I am using CellProfiler version 2.2 on a Windows 10 x64-based system.
I'm running into a MemoryError while analyzing my images (at the ApplyThreshold step), and not even a single image can run through the pipeline, because the program stops before finishing. I set the maximum space allowed for Java (File > Preferences) to 1500 MB; otherwise I run into a Java heap space error (unless I can fix that another way?).

What could my problem be? I've attached my pipeline here.
Project_IHC_cell_counting_PV_488_3.cpproj (455.1 KB)

Here’s the error message:
Encountered error while processing.
Exception in CellProfiler core processing
Traceback (most recent call last):
File "matplotlib\backends\backend_wx.pyc", line 955, in _onPaint
File "matplotlib\backends\backend_wxagg.pyc", line 51, in draw
File "matplotlib\backends\backend_agg.pyc", line 474, in draw
File "matplotlib\artist.pyc", line 61, in draw_wrapper
File "matplotlib\figure.pyc", line 1133, in draw
File "matplotlib\artist.pyc", line 61, in draw_wrapper
File "matplotlib\axes\_base.pyc", line 2304, in draw
File "cellprofiler\gui\cpfigure.pyc", line 1984, in draw
File "cellprofiler\gui\cpfigure.pyc", line 1450, in normalize_image
File "matplotlib\cm.pyc", line 262, in to_rgba
File "matplotlib\colors.pyc", line 612, in __call__

Thank you very much.


Can you also share your image (or at minimum let us know how big it is)? How much memory is available on your machine?


This is just one of the 48 images that I'm trying to analyze; it's about 40 MB (JPEG). By the way, would TIFF files work better?

I don't know how to check the memory of my drive (did you mean the C drive?); I think it's around 85 GB. Would this be the problem? I also edited my original post to include the error message.

And would the images processed at each step of the pipeline create a bunch of temporary files?



I don’t know how to check the memory of my drive

If you go to the search function in Windows 10, type "System", open the System Information menu, and look at Installed Physical RAM, you should get that number. But if your files are only 40 MB, I doubt that's the issue, so don't worry about this for right now.
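If it's easier than clicking through menus, here's a minimal Python sketch that reports the same numbers (it assumes the third-party psutil package is installed via pip install psutil):

```python
# Minimal sketch: report physical RAM, assuming psutil is installed.
# Total RAM here matches the Installed Physical RAM figure in Windows.
import psutil

mem = psutil.virtual_memory()
print("Total RAM:     %.1f GB" % (mem.total / 1024 ** 3))
print("Available RAM: %.1f GB" % (mem.available / 1024 ** 3))
```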

By the way, would tiff. files work better?

I don't think TIF vs. JPG is causing this error, but that being said, we never recommend using JPGs for quantitative analysis, since they're a "lossy" format that can cause you to literally lose data.
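If you want to see that loss for yourself, here's a small sketch (assuming Pillow and NumPy are installed; the file names are placeholders) that round-trips the same pixels through both formats:

```python
# Round-trip an image through TIFF (lossless) and JPEG (lossy) and compare.
import numpy as np
from PIL import Image

original = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
img = Image.fromarray(original)

img.save("roundtrip.tif")               # lossless
img.save("roundtrip.jpg", quality=95)   # lossy, even at high quality

tif_back = np.asarray(Image.open("roundtrip.tif"))
jpg_back = np.asarray(Image.open("roundtrip.jpg"))

print("TIFF max pixel error:", np.abs(tif_back.astype(int) - original).max())  # 0
print("JPEG max pixel error:", np.abs(jpg_back.astype(int) - original).max())  # > 0
```

The TIFF comes back bit-identical; the JPEG does not, which is exactly the kind of silent change you don't want in your measurements.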

The error you've now added makes me think it's actually to do with (or exacerbated by) the fact that all of the "eyes" (display windows) are open in your pipeline. Can you close them all (via Windows->Hide All Open Windows On Run) and see if the pipeline runs? I think your computer is just freaking out about trying to create all the display windows. If that doesn't help (or if it does!), please let us know.
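If hiding the windows isn't enough, another option that sidesteps display windows entirely is running the pipeline headless from the command line. A sketch (the install path and project file name are placeholders for your own):

```python
# Run CellProfiler 2.x headless via its command-line flags:
#   -c  run without the GUI (so no display windows are created)
#   -r  run the pipeline on startup
#   -p  the pipeline/project file to run
import subprocess

subprocess.check_call([
    r"C:\Program Files\CellProfiler\CellProfiler.exe",  # placeholder path
    "-c",
    "-r",
    "-p", r"Project_IHC_cell_counting_PV_488_3.cpproj",
])
```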


Hi Beth:

  • Closing the windows definitely helped it get through the first cycle. However, I still get a MemoryError at the ApplyThreshold step, starting from the second cycle.

  • By the way, my files are each 40 MB; if I have a total of 48 of them, would that cause any problem?

  • And would the temporary files generated at each module take up memory too?

  • For the Installed RAM, I have 16 GB usable

  • Is there anything in my pipeline that seems to be causing the problem?

  • And here’s my error message:
    Error while processing ApplyThreshold
    (Worker) MemoryError:

Traceback (most recent call last):
File "cellprofiler\pipeline.pyc", line 1956, in run_image_set
File "cellprofiler\pipeline.pyc", line 2067, in run_module
File "cellprofiler\modules\applythreshold.pyc", line 139, in run
File "cellprofiler\modules\identify.pyc", line 864, in threshold_image
File "cellprofiler\modules\identify.pyc", line 1010, in get_threshold
File "centrosome\threshold.pyc", line 110, in get_threshold
File "centrosome\threshold.pyc", line 174, in get_global_threshold
File "centrosome\threshold.pyc", line 286, in get_otsu_threshold
File "centrosome\otsu.pyc", line 31, in otsu
File "centrosome\otsu.pyc", line 252, in running_variance
File "numpy\core\shape_base.pyc", line 275, in hstack

Thank you very much.


48 files of that size shouldn't be a problem, and temporary files should be flushed between images, so those shouldn't be an issue either.

There's nothing obviously wrong in the uploaded pipeline file, though I am slightly confused: the ApplyThreshold settings in the pipeline use the Automatic (MCT) method, but the error message you posted uses Otsu's method. Did you change the thresholding method since you sent it?
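For reference, the step that's failing in your traceback is a global Otsu threshold. Here's a rough equivalent using scikit-image standing in for CellProfiler's internal centrosome code (the image below is a placeholder); computing the threshold on a pixel subsample gives nearly the same value with a much smaller memory footprint, which shows why this step's cost scales with image size:

```python
# Sketch of a global Otsu threshold, the operation in the traceback above.
# scikit-image is used here in place of CellProfiler's centrosome module.
import numpy as np
from skimage.filters import threshold_otsu

image = np.random.rand(4000, 4000).astype(np.float32)  # placeholder image

t_full = threshold_otsu(image)               # threshold from all pixels
t_sub = threshold_otsu(image.ravel()[::16])  # every 16th pixel: ~same value
binary = image > t_full                      # the thresholded result

print("Otsu on full image: %.4f" % t_full)
print("Otsu on subsample:  %.4f" % t_sub)
```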

If your computer is truly choking due to memory on files that size, I’d try some combination of the following and see if it helps:

  • In File->Preferences, increase the Java memory to 2048 MB or more

  • In File->Preferences, decrease the number of workers to 1 or 2

  • Shut down everything else on that computer while you’re running your analysis.

If none of that solves the issue, you can try posting your image here and we can see if we can validate the issue.
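As a back-of-envelope check on whether memory is plausible as the culprit: a 40 MB JPEG is compressed on disk, but once decoded the pipeline holds it as a full floating-point array, and several working copies can exist at once. A sketch of the arithmetic (the dimensions and float64 storage are assumptions; substitute your actual image size):

```python
# Rough memory footprint of one decoded image inside the pipeline.
# Assumes a ~6000 x 6000 RGB image stored as float64; both are guesses
# here, so plug in your real numbers.
width, height, channels = 6000, 6000, 3
bytes_per_pixel = 8  # float64

one_copy_gb = width * height * channels * bytes_per_pixel / 1024 ** 3
print("One in-memory copy:  %.2f GB" % one_copy_gb)        # ~0.80 GB
print("Five working copies: %.2f GB" % (5 * one_copy_gb))  # ~4.02 GB
```

So even on a 16 GB machine, a few large intermediate images per worker can add up quickly, which is part of why dropping to 1-2 workers helps.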


Hi Beth:

Yes, I changed my thresholding strategy to Otsu's method; very sorry for the confusion. Could this thresholding method in my pipeline be an issue, though?

And I tried the combination of your suggestions, but I still have this issue, now on the 4th cycle. I changed the Java memory to 2200 MB and the number of workers to 2, and let the run sit without doing anything else on the computer. If there's no other possible way to resolve it, then I could probably live with it.

  • Also, in my current run it takes a long time to run each cycle; is there any way to make it faster?

Attached are two of the images I’m analyzing. Any suggestion would be appreciated.

Thank you very much!!




I’m having trouble accessing your images. Rather than sending them as email attachments, can you upload them to the forum directly? Thanks.