Background profile in time-lapse data


Hi Team,

I tried to calculate the illumination correction function and apply it to a data set of time-lapse images, but I am not sure the background was corrected.
The problem is that the background profile increases over time (probably due to secretion of fluorescence into the medium or other optical aberrations), whereas in the control FOV the background profile is stable. By calculating this function, I also want to eliminate all inhomogeneities in the camera or illumination over time.

The illumination correction workflow I used is:
The first pipeline runs CorrectIllumination_Calculate with the following parameters: Background intensities, a block size of 64 (the images are 1024x1024 pixels and the objects are 20-80 pixels in diameter), no rescaling, All, load images, and no smoothing.
In the CorrectIllumination_Apply module, I chose to load the .mat image calculated in the previous pipeline and subtract it from the original images without rescaling.

After correcting the images, I got the same increasing profile in the background.

Was the way I calculated and applied the CorrectIllumination modules right for this application?
How do you suggest eliminating this background?

Thanks in advance,



Hi Maya,

Since you’re using “All”, the algorithm finds the minimum pixel value in each 64-pixel neighborhood across ALL of your images, so the function it creates is not representative of the fluorescence changing over time. This sounds like a case where the “Each” option is more applicable: it will create a separate estimate of the background staining for each image. All of your other settings look good, but I would probably add a median filter to smooth out the function (selecting “Automatic” for the filter size is probably fine, but inspect the function to make sure it looks smooth and representative of the background; you can always increase the filter size for a smoother function).
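To make the idea concrete, here is a minimal single-image sketch of what the Background method roughly does. This is only an illustration, not CellProfiler's actual code: SciPy's `minimum_filter` stands in for the per-block minimum, and the function name and synthetic frame are my own inventions.

```python
import numpy as np
from scipy.ndimage import minimum_filter, median_filter

def estimate_background(image, block_size=64, smooth_size=None):
    # Take the minimum in each block-sized neighborhood; this is a
    # rough stand-in for the Background method on a single image.
    bg = minimum_filter(image, size=block_size)
    if smooth_size:
        # Optional median smoothing, as suggested above.
        bg = median_filter(bg, size=smooth_size)
    return bg

# Synthetic frame: flat background of 10 plus one bright 30x30 object.
frame = np.full((256, 256), 10.0)
frame[100:130, 100:130] = 200.0

bg = estimate_background(frame, block_size=64, smooth_size=9)
corrected = frame - bg
print(bg.max(), corrected.max())  # 10.0 190.0: background flattened to 0
```

With “Each”, an estimate like this is recomputed for every frame, so a background level that rises over time is tracked frame by frame; with “All”, a single function is pooled over the whole series.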



Hi team,

I tried to run the pipeline according to your suggestion, and it seems that the background profile (the increase in background signal) has not changed. In FOVs without cells we don’t see this background profile.

The module parameters: Background, 0, 64, no rescaling, Each, pipeline, median filter, automatic, do not use, do not use, do not use.
For the CorrectIllumination_Apply module: I use subtract and no rescaling.

Attached please find the figures of the background profile before and after correction.
Do you have any suggestion how this kind of background can be eliminated?

Thanks in advance,



Hi Maya,

I see that you have used a block size of 64 when your objects are 20-80 pixels in size. One possible issue is whether the objects are spread out enough, or your blocks big enough, for each block to capture some true background. For example, if your objects are small but densely distributed in the image, the minimum-value calculation may overestimate the background intensity. If your objects are densely packed, then using the Background method is unlikely to work well, regardless of block size.

So you may need to check whether the block size is appropriate; my guess is that it may be too small. If you increase the block size (doubling it, for example), do you see the same general background profile?
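The block-size check can be demonstrated on a toy frame (again using SciPy's `minimum_filter` as a rough stand-in for the per-block minimum; the synthetic data and sizes are assumptions, chosen to match the stated 20-80 pixel objects):

```python
import numpy as np
from scipy.ndimage import minimum_filter

# Flat background of 10 with one 80-pixel object, the top of the
# stated 20-80 pixel size range.
frame = np.full((256, 256), 10.0)
frame[80:160, 80:160] = 200.0

bg64 = minimum_filter(frame, size=64)    # block smaller than the object
bg128 = minimum_filter(frame, size=128)  # block larger than the object

# Windows that fall entirely inside the object never see the true
# background, so the 64-pixel estimate is badly inflated there.
print(bg64.max(), bg128.max())  # 200.0 10.0
```

Note that the larger block only helps when every window still reaches some genuine background; with a dense clump of objects, no block size may satisfy that.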



Hi Mark,

Thanks for your reply.
I tried to apply your suggestion, using a block size of 128, but I got the same increase in the background profile.
I have attached the profile and the images, so they may help you understand the problem.

I also tried the Regular method, but it seems to remove most of the signal.

Do you have any other suggestions for how I can solve this issue?

Many thanks ,



Hi Maya,

Thanks for the pictures. They confirm that the objects are fairly tightly clumped, so the Background method is unlikely to remove the trend on a per-image basis (i.e., “Each” instead of “All”). The Regular method will not help either, since it assumes a random distribution of foreground objects in the image, which is not true in your case.

One approach to take is to find the background via another method. Assuming that you are able to segment the cells adequately, you might do the following:

  • ConvertToImage: Apply this to the cell objects with the binary setting to obtain a binary image of the objects. Caution: if you’ve used IdentifyPrimAuto to identify the objects with “Discard objects touching border…” set to “Yes”, you will want to re-identify the objects (under a new name) with this set to “No”, so that border objects are not later counted as background.

  • ImageMath: Invert the binary object image to get a binary image of the background.

  • IdentifyPrimAuto: Identify the background as an object. Set discarding objects outside the diameter range and objects touching the border to “No”, the threshold method to “Other…” with a value of 0.5, “Do not use” for distinguishing clumped objects, and “No” for “Do you want to fill objects…”

  • MaskImage: Mask the fluorescent image with the Background object.

  • CorrectIllumination_Calculate: Use the masked image with the Regular method, in “Each” and “Pipeline” mode, with “Smooth to average” as the smoothing method.

What this should do is create a background object based on your foreground objects, and use it to create an averaged background correction.
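The core of the steps above can be sketched in NumPy/SciPy terms. This is only a rough illustration of the idea (mask out the foreground, fill it from the remaining background, smooth), not CellProfiler's implementation; the function name and synthetic data are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def masked_background_function(image, object_mask, smooth_size=65):
    # Drop the foreground pixels, fill their locations with the mean
    # of the remaining background, then smooth to get a slowly
    # varying background estimate (cf. "Smooth to average").
    fill_value = image[~object_mask].mean()
    filled = np.where(object_mask, fill_value, image)
    return uniform_filter(filled, size=smooth_size)

# Synthetic frame: background 10, one large clump of cells at 200.
frame = np.full((256, 256), 10.0)
object_mask = np.zeros(frame.shape, dtype=bool)
object_mask[60:200, 60:200] = True
frame[object_mask] = 200.0

fn = masked_background_function(frame, object_mask)
corrected = frame - fn
```

Because the clumped foreground is excluded before averaging, the large clump no longer inflates the background estimate the way the block-minimum did.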

Also, in terms of correcting for systematic microscope issues, you can use your original approach (i.e., the Background method with “All”) to come up with the illumination function, since it should be invariant across all images. It may also be worth creating a separate illumination function for each time point and correcting the images at each time point independently.
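On object-free toy data, the difference between one pooled function and per-time-point functions is easy to see. All the numbers here are made up for illustration; in real data each frame's background would of course have to be estimated, not read off directly:

```python
import numpy as np

# A fixed illumination gradient plus a background level that rises
# over time (2, 4, 6), mimicking secreted fluorescence.
gradient = np.tile(np.linspace(1.0, 2.0, 129), (129, 1))
frames = [gradient + level for level in (2.0, 4.0, 6.0)]

# Pooled ("All"-style) function: one background averaged over frames.
fn_all = np.mean(frames, axis=0)
# Per-time-point functions: each object-free frame IS its background.
fns_each = frames

trend_all = [round(float((f - fn_all).mean()), 2) for f in frames]
trend_each = [round(float((f - fn).mean()), 2)
              for f, fn in zip(frames, fns_each)]
print(trend_all)   # [-2.0, 0.0, 2.0]: trend survives one pooled function
print(trend_each)  # [0.0, 0.0, 0.0]: trend removed per time point
```

The pooled function still corrects the fixed gradient (the systematic part), which is why keeping it for the microscope-related correction while handling the temporal trend separately can make sense.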