Testing #1609

Feature #861: === Website & Examples ===

Clarify relation between PyPersistence tests and tutorial examples

Added by pospelov over 4 years ago. Updated over 3 years ago.

Status: Resolved
Priority: Normal
Assignee: pospelov
Start date: 27 Sep 2016
Due date:
% Done:

Target version: Sprint 34


At the very beginning of the project we had only a few examples explaining how to use BornAgain. Shortly after, they were extended into Python functional tests playing two roles at once: explaining how to use BornAgain, and providing a comparison against IsGisaxs. We then realised that it is difficult to combine both functions in one set, and we arrived at a separation: functional-test machinery on one side, and a user example section on the other.

In this item I would like to discuss whether it is necessary and possible to keep a clear distinction:

  • Functional tests provide validation and check reproducibility.
    • They should validate the software in all 3 domains.
    • They should cover as much functionality as possible.
    • They should be relatively fast, to encourage adding more tests (currently achieved by using a 25x25 detector).
  • User examples demonstrate how to use the software.
    • They should produce attractive-looking image(s) out of the box.
    • They should explain key simulation features (there is no need to cover the whole available API, or to try to substitute for unit testing).
    • They should contain clean, self-explanatory code.

The main question we are trying to solve now is how to provide automatic validation of user examples.
In my opinion, attempting to validate examples in functional-test style leads to the following disadvantages.

As an example, consider the RectangularDetector.py example turned into a PyPersistence test.

  • The user example now contains code related to functional testing.

Conceptually, the example is now organised to simplify testing rather than the user's education. There is logic that is non-obvious to an external observer: the example calls an external "plotting" function, which in turn calls back the example's "run_simulation" and "plot" functions.

  • The corresponding PyPersistence test runs for 50 seconds in debug mode, which is probably too much for comfortable use of the functional-test machinery in daily development.
  • The corresponding PyPersistence test does not test the plotting itself.

For example, the recent appearance of useLatex=True in the matplotlib routines leads to problems on many systems (e.g. no axes, ticks or labels, and lots of confusing LaTeX(!) related warning messages, at least under Windows and openSUSE). And this was not caught by the PyPersistence test.

  • The corresponding PyPersistence test creates 3 files on disk, 8 MB each, under git control. This is a huge difference compared to the 3 kB files needed for a normal single functional test.
  • The PyPersistence test fully duplicates checks already done in the standard functional tests.
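The "plotting wrapper" inversion criticised above can be sketched in a few lines of plain Python. All names here are illustrative stand-ins, not the actual BornAgain or test-machinery code:

```python
def run_simulation():
    """The part a reader actually cares about: build and run the simulation."""
    return {"intensity": [[1.0, 2.0], [3.0, 4.0]]}  # stand-in for real data


def plot(result):
    """Render the result; the real example would call matplotlib here."""
    rows = len(result["intensity"])
    cols = len(result["intensity"][0])
    return "image for %d x %d map" % (rows, cols)


def plotting(simulate, render):
    """Test-machinery helper: it, not the example, drives the calls.

    This is exactly the control-flow inversion that is non-obvious
    to an external reader of the example.
    """
    return render(simulate())


if __name__ == "__main__":
    # All the user sees at the bottom of the example: one opaque call.
    print(plotting(run_simulation, plot))
```

The reader has to trace through `plotting` to discover that their own `run_simulation` and `plot` are invoked as callbacks, which is what makes the example harder to learn from.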

If we agree on the two distinct roles, functional tests for testing and user examples for documentation,
then, when creating a new user example, we just have to make sure that its functionality is covered (more or less) by one or a few functional tests.

In that case, the validation of user examples will, in my opinion, only be required at the crashed/not-crashed level, to make sure the API stays up to date. This can be done by introducing a new cmake target, "make doctests" (so as not to steal time from "make check"), which will run all examples one by one, as they are.
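A crashed/not-crashed runner of this kind could be as small as the following sketch. The "make doctests" target and this helper are a proposal, not existing BornAgain code; only the exit code of a fresh interpreter is checked:

```python
import subprocess
import sys


def run_example(path):
    """Run one example script in a fresh interpreter.

    Returns True if the example did not crash (exit code 0),
    False otherwise. Output is discarded; only survival matters.
    """
    proc = subprocess.run(
        [sys.executable, path],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return proc.returncode == 0
```

A cmake target would then simply loop over all example files and fail if any `run_example` call returns False.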

I believe that creating new tests/examples and supporting them in the long run will be easier, and users will also be thankful for not having extra code in their examples.

Related issues

Related to BornAgain - Refactoring #1604: move expected inaccuracy (m_variability) out of class OutputData Resolved 09 Sep 2016


#1 Updated by wuttke over 4 years ago

  • Parent task set to #1606

#2 Updated by wuttke about 4 years ago

  • Subject changed from The role of PyPersistence tests to Clarify relation between PyPersistence tests and tutorial examples
  • Assignee set to wuttke
  • Target version set to Sprint 33

I'll try to work out a solution along Gennady's proposal of writing test wrappers around old-style tutorial examples.

#3 Updated by wuttke about 4 years ago

  • Parent task changed from #1606 to #861

#4 Updated by pospelov about 4 years ago

Then I will summarize my proposal here.

  • Move the persistence check from "make check" to "make stresscheck"

'stresscheck' (the name doesn't matter) will run the normal functional tests plus the persistence tests.

  • Revise all Python examples and introduce missing functional/unit tests into our testing machinery

but with much smaller detectors than those used for the persistence tests.
A few candidate examples to become new functional tests: Specular, OffSpecular, Roughness, Grating. The rest seem to be covered already.

  • Make the PyPersistence tests run via a wrapper
    • so as not to have testing code in the user examples
    • to test the plotting as well
    • something along the lines of dev-tools/check-examples/check_examples.py and its function generate_example_plot()
    • It would already be enough, I think, to check that no exception is thrown and that a non-zero image ends up on disk
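Such a wrapper check could look roughly like this. `generate_example_plot()` is mentioned above, but this helper's name and signature are my own illustration of the "no exception thrown, non-zero image on disk" criterion:

```python
import os


def check_example_produced_image(render, image_path):
    """Call the example's plotting entry point and verify its output.

    `render` is the example's plotting callable (an illustrative stand-in
    for whatever generate_example_plot() would invoke); it is expected to
    save a figure to `image_path`. The check passes only if the file
    exists and is non-empty. Any exception in `render` propagates and
    fails the test, which is the first half of the criterion.
    """
    render(image_path)
    return os.path.exists(image_path) and os.path.getsize(image_path) > 0
```

Running the example's plotting code through such a wrapper would also have caught regressions like the useLatex-related breakage mentioned earlier, since the figure is actually rendered.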

#5 Updated by herck about 4 years ago

May I add that this refactoring could also be used to get rid of the massive extra data in our repository and tarball (the tarball went from 12 MB to 78 MB between versions 1.6.1 and 1.7)?

#6 Updated by wuttke about 4 years ago

  • Related to Refactoring #1604: move expected inaccuracy (m_variability) out of class OutputData added

#7 Updated by wuttke about 4 years ago

  • Assignee deleted (wuttke)

#8 Updated by pospelov almost 4 years ago

  • Status changed from Rfc to Sprint
  • Assignee set to pospelov

#9 Updated by pospelov almost 4 years ago

  • Assignee deleted (pospelov)
  • Target version changed from Sprint 33 to Sprint 34

#10 Updated by pospelov over 3 years ago

  • Assignee set to pospelov

#11 Updated by pospelov over 3 years ago

  • Status changed from Sprint to Resolved

Summary of changes:

PyPersistence machinery refactored

The machinery now runs through the new example_template.py, which takes the original example, modifies its simulation to make it faster, and then injects the "minified" simulation back into the example.

  • The code of all Python examples, as well as plot_utils.py, has been cleaned of anything related to testing.
  • Reference files have been made smaller.
  • The size of the source tarball went from 80 MB back to the original 12 MB.
  • The functional-test machinery runs approximately two times faster.
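The "minified simulation" injection can be pictured as a source-level rewrite of the example before it is executed. The regex rule below is purely illustrative (example_template.py's real mechanism is not shown here), and `setDetectorParameters(nx, xmin, xmax, ny, ymin, ymax)` is assumed to be the detector-binning call used in the examples:

```python
import re


def minify(source, n=25):
    """Shrink both detector axis bin counts to n in the example source.

    Illustrative rule only: rewrite
        setDetectorParameters(NX, xmin, xmax, NY, ...)
    to use n bins per axis, leaving the angular ranges untouched.
    """
    pattern = r"setDetectorParameters\(\s*\d+\s*,([^,]+),([^,]+),\s*\d+\s*,"
    repl = lambda m: "setDetectorParameters(%d,%s,%s, %d," % (
        n, m.group(1), m.group(2), n)
    return re.sub(pattern, repl, source)


# A full-size detector line as it might appear in an example:
full = "sim.setDetectorParameters(200, -1.0, 1.0, 200, 0.0, 2.0)"
```

The modified source would then be executed in place of the original, so the persistence test runs the same example logic against a much smaller (and faster) detector.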

New PyExample machinery introduced

Thanks to check_functionality.py, a Python example is executed "as is", together with its graphics. The result of the test is an image on disk.
The test fails if the example is unable to create the image (meaning that there is an error in BornAgain's Python API or in the matplotlib routines).

To run all examples in the new machinery, use

make fullcheck

The idea is that the PyExample machinery is intended to check that an example is fully functioning. It takes care of all examples, including the fit examples.
It does not check against reference files; that is done by the CoreStandardTest machinery.

Important difference

# Run functional tests (PythonExample excluded)
make check
ctest -LE Examples

# Run functional tests (PyExample included)
make fullcheck

Suggestion for the future

Disable the PyPersistence machinery, since it duplicates all other functional tests.
