Status: Resolved
Start date: 15 Jan 2019
Target version: Sprint 41
Reflectometry functionality in BornAgain should be compared to well-known codes.
The following is suggested:
1. Compare simulation results for a simple sample model between BornAgain, GenX, Refnx, and Refl1D.
It would also be instructive to see how the resolution effects implemented in these codes compare to each other.
2. Try fitting the data provided by Alexandros with Refnx (or Motofit) and BornAgain. Do the same for the data
referenced in the recent paper about Refnx (see http://scripts.iucr.org/cgi-bin/paper?S1600576718017296 and https://refnx.readthedocs.io/en/latest/).
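As a neutral reference point for such a cross-code comparison, the specular reflectivity of a simple air/Ti/Ni/Si stack can be computed directly with the Parratt recursion in plain numpy. This is a sketch, not code from any of the packages above; the function name, the SLD unit convention (10⁻⁶ Å⁻²), and the layer values are my own choices (standard literature SLDs for Ti, Ni, and Si are used):

```python
import numpy as np

def parratt_reflectivity(q, slds, thicknesses):
    """Specular reflectivity |R|^2 via the Parratt recursion.

    q           : momentum transfer values in 1/A
    slds        : SLDs in 1e-6 / A^2 for [fronting, layer1, ..., backing]
    thicknesses : thicknesses in A for the interior layers only
    """
    q = np.asarray(q, dtype=complex)
    kz0 = q / 2.0
    # z-component of the wavevector in each medium, relative to the fronting
    kz = [np.sqrt(kz0**2 - 4e-6 * np.pi * (sld - slds[0])) for sld in slds]
    # recurse upward from the bottom (semi-infinite backing) interface
    r_tot = np.zeros_like(q)
    for j in range(len(slds) - 2, -1, -1):
        r_j = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])  # Fresnel coefficient
        if j == len(slds) - 2:
            phase = 1.0  # no propagation phase below the bottom interface
        else:
            # phase factor across interior layer j+1 (thickness thicknesses[j])
            phase = np.exp(2j * kz[j + 1] * thicknesses[j])
        r_tot = (r_j + r_tot * phase) / (1.0 + r_j * r_tot * phase)
    return np.abs(r_tot)**2

# example: 100 A Ti / 50 A Ni on a Si substrate, air fronting
q = np.linspace(0.005, 0.3, 500)
R = parratt_reflectivity(q, [0.0, -1.95, 9.4, 2.07], [100.0, 50.0])
```

Since every code under comparison should reproduce this curve (before resolution smearing), it gives a common baseline independent of any one package's conventions.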
#2 Updated by juan over 1 year ago
1. A simple Ti-Ni simulation comparison among different reflectometry codes, including BornAgain:
2. Multi-fitting in BornAgain. Out-of-the-box fits were not as successful as expected, so I had to introduce some tweaks. I proposed a new figure of merit, and the fits now look good. I also wrote some small wrappers to reduce boilerplate code and fit data in a way similar to scipy. You can view a Jupyter notebook dealing with the co-refinement issue here:
3. I ran further tests comparing BornAgain's performance against Refnx:
4. Looking ahead to a possible fix, I experimented with reusing the already well-performing Refnx engine and embedding it behind a Python API (which could just as well be a GUI, for that matter):
All of this work can be found in the following GitHub repository:
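The co-refinement idea from point 2 — a log-scale figure of merit combined with parameters shared across several datasets, driven through a scipy-style interface — can be sketched roughly as follows. This is an illustration with a toy exponential model, not the actual wrapper or figure of merit from the repository; all names here are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

def log_residuals(params, datasets, model, n_shared):
    """Stack log-space residuals from all datasets into one vector.

    params = [shared parameters..., one scale factor per dataset].
    Working in log10 weights all decades of a decaying curve evenly,
    which is the motivation for a log-scale figure of merit.
    """
    shared = params[:n_shared]
    res = []
    for i, (x, y) in enumerate(datasets):
        scale = params[n_shared + i]          # per-dataset intensity scale
        y_model = scale * model(x, *shared)   # shared physics, local scale
        res.append(np.log10(y_model) - np.log10(y))
    return np.concatenate(res)

# toy model with one shared parameter (a decay rate)
def model(x, rate):
    return np.exp(-rate * x)

# two noise-free datasets sharing the same rate but different scales
x = np.linspace(0.1, 5.0, 50)
data1 = (x, 2.0 * np.exp(-1.3 * x))
data2 = (x, 0.5 * np.exp(-1.3 * x))

fit = least_squares(log_residuals, x0=[1.0, 1.0, 1.0],
                    bounds=(1e-6, np.inf),
                    args=([data1, data2], model, 1))
# with noise-free data the fit recovers rate ~ 1.3, scales ~ 2.0 and 0.5
```

The design point is that both curves constrain the single shared parameter simultaneously, which is what co-refinement buys over fitting each dataset independently.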