Speeding up computations on samples with a large number of layers
|Status:|Sprint|
|Start date:|21 Oct 2019|
|Target version:|Sprint 43|
Despite recent speed-ups in computations on samples with a large number of layers,
the computational complexity is still O(N^2), where N is the number of layers in the sample.
The computation engine itself is capable of a worst-case complexity of O(N log N).
There are two reasons for the slow computations:
1. To build string paths, the engine counts the copies of same-type objects in the sample.
Each uniform object requires counting its siblings, so the total complexity of the computation
is O(N^2). This problem can be solved as in pull request 881,
which provides a 1.5x speed-up on a 1000-layer sample.
2. The second problem is string concatenation and reallocation: if optimized, it can provide another factor-of-2 speed-up on 1000 layers.
A screenshot of the kcachegrind output can be found in the same PR.
To measure the performance, one can use the dedicated performance test in Tests/Functional/Core/CoreSpecial/MultilayerPerformanceTest.(h,cpp)