The idea is the following:
* The user is in Real Time Activity mode tuning parameters; the simulation naturally runs in the background.
* Currently, when the user zooms in and continues to tune parameters, the simulation still calculates the whole image, while only part of it is shown.
* It is relatively easy to apply detector masking on the fly, depending on current zoom level.
* The simulation will then run much faster for the zoomed IntensityData.
This can be very convenient for the user (no need to reconfigure the detector to give it a fast try), and it is also a good selling point during presentations.
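The on-the-fly masking could be sketched as follows. This is a minimal illustration, not the actual simulation API: `zoom_mask` and all parameter names are hypothetical, and the idea is simply to build a boolean mask of detector bins inside the currently visible zoom rectangle, so that everything outside it is skipped.

```python
import numpy as np

def zoom_mask(nx, ny, x_range, y_range, zoom_x, zoom_y):
    """Boolean mask: True for detector bins inside the zoomed region.

    nx, ny           -- number of detector bins along x and y (hypothetical)
    x_range, y_range -- full detector axis ranges, e.g. (-1.0, 1.0)
    zoom_x, zoom_y   -- currently visible sub-ranges (the zoom rectangle)
    """
    x = np.linspace(*x_range, nx)
    y = np.linspace(*y_range, ny)
    xx, yy = np.meshgrid(x, y)
    return ((xx >= zoom_x[0]) & (xx <= zoom_x[1])
            & (yy >= zoom_y[0]) & (yy <= zoom_y[1]))

# Example: 100x100 detector, user zoomed into the upper-right quadrant;
# only ~25% of the bins remain active, so the simulation has ~4x less work.
mask = zoom_mask(100, 100, (-1.0, 1.0), (-1.0, 1.0), (0.0, 1.0), (0.0, 1.0))
```

Recomputing such a mask on every zoom change is cheap compared to the simulation itself, which is why applying it on the fly is feasible.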
There is only one problem:
* It is parameter tuning that triggers the simulation.
* How do we re-simulate when the user zooms out to see the original image?