Testing #2287

Reflectometry cross-validation

Added by dmitry almost 2 years ago. Updated over 1 year ago.

Status: Resolved
Priority: Normal
Assignee: juan
Start date: 15 Jan 2019
Due date:
% Done:


Target version:Sprint 41


Reflectometry functionality in BornAgain should be compared to well-known codes.

The following is suggested:
1. Compare simulation results for some simple sample model among BornAgain, GenX, Refnx and Refl1D.
It would also be useful to see how the resolution effects of the different codes compare.

2. Try fitting the data provided by Alexandros with Refnx (or Motofit) and BornAgain. Do the same for the data
referenced in the recent paper about Refnx (see http://scripts.iucr.org/cgi-bin/paper?S1600576718017296 and https://refnx.readthedocs.io/en/latest/).
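Regarding the resolution comparison in point 1: these codes typically model instrumental resolution by smearing the ideal curve with a Gaussian in Q whose width scales as dQ/Q. A minimal, self-contained sketch of such smearing (the kernel width, cutoff, and grid handling here are illustrative assumptions, not any particular code's implementation):

```python
import math

def gaussian_smear(q, r, dq_over_q=0.02, half_width=3.5):
    """Smear reflectivity r(q) with a Gaussian whose FWHM is dq_over_q * q.

    Point-by-point convolution on the given grid; each point is
    renormalized by the truncated kernel weight, so edges stay sane.
    """
    smeared = []
    for i, qi in enumerate(q):
        sigma = dq_over_q * qi / 2.3548  # convert FWHM to standard deviation
        if sigma == 0.0:
            smeared.append(r[i])
            continue
        total, weight = 0.0, 0.0
        for qj, rj in zip(q, r):
            if abs(qj - qi) > half_width * sigma:
                continue  # truncate the kernel tails
            w = math.exp(-0.5 * ((qj - qi) / sigma) ** 2)
            total += w * rj
            weight += w
        smeared.append(total / weight)
    return smeared
```

Comparing each code's smeared output for the same dQ/Q on the same grid isolates resolution handling from the underlying reflectivity calculation.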


#1 Updated by dmitry almost 2 years ago

  • Target version changed from Sprint 40 to Sprint 41

#2 Updated by juan over 1 year ago

1. Simple Ti-Ni simulation comparison among different reflectometry codes, including BornAgain:
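For reference, the kind of model being cross-validated here can be computed directly with the Parratt recursion. A self-contained sketch for an air / Ti / Ni / Si sample (the layer thicknesses and SLD values below are illustrative placeholders, not the exact parameters of the comparison):

```python
import cmath

def reflectivity(q_values, layers):
    """Specular reflectivity via the Parratt recursion.

    layers: list of (sld, thickness) from top (ambient) to bottom
    (substrate); SLD in 1e-6 A^-2, thickness in A. The thicknesses of
    the first and last entries are ignored (semi-infinite media).
    """
    refl = []
    for q in q_values:
        kz0 = q / 2.0
        # Wavevector in each layer: kz_n^2 = kz0^2 - 4*pi*(sld_n - sld_0)
        kz = [cmath.sqrt(kz0 ** 2 - 4e-6 * cmath.pi * (sld - layers[0][0]))
              for sld, _ in layers]
        # Start at the substrate interface and combine upward.
        r = 0.0 + 0.0j
        for n in range(len(layers) - 2, -1, -1):
            k1, k2 = kz[n], kz[n + 1]
            fresnel = (k1 - k2) / (k1 + k2)
            phase = cmath.exp(2j * k2 * layers[n + 1][1])
            r = (fresnel + r * phase) / (1.0 + fresnel * r * phase)
        refl.append(abs(r) ** 2)
    return refl

# Illustrative air / Ti / Ni / Si model (SLD in 1e-6 A^-2, thickness in A)
sample = [(0.0, 0.0), (-1.95, 30.0), (9.4, 70.0), (2.07, 0.0)]
```

Below the critical edge this returns total reflection (R = 1), and at large Q it falls off as roughly Q^-4, which gives two quick sanity checks when lining the codes up against each other.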


2. Multi-fitting in BornAgain. Out-of-the-box fits were not as successful as expected, so I had to introduce some tweaks. I proposed a new figure of merit, and the fits now look good. I also wrote some small wrappers to reduce boilerplate code and fit data in a scipy-like way. You can view a Jupyter notebook dealing with the co-refinement issue here:
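The new figure of merit itself is not recorded in this ticket. A common choice in reflectometry fitting, where R spans many decades, is a mean squared difference on log10 of the curves, so that low-intensity fringes weigh as much as the region near total reflection; a hypothetical sketch along those lines:

```python
import math

def log_fom(r_obs, r_sim):
    """Mean squared difference of log10 reflectivity.

    Balances the fit across the full dynamic range of R, unlike a
    plain chi-square, which is dominated by the plateau near R = 1.
    """
    n = len(r_obs)
    return sum((math.log10(o) - math.log10(s)) ** 2
               for o, s in zip(r_obs, r_sim)) / n
```

A scipy-style wrapper then only has to expose this as the cost function of `scipy.optimize.minimize` over the shared (co-refined) parameters.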


3. I ran further tests comparing BornAgain's performance against Refnx:
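The benchmark setup is not recorded here; a minimal, engine-agnostic timing harness of the kind usually used for such comparisons might look like this (the repeat count and best-of policy are assumptions):

```python
import time

def benchmark(func, *args, repeats=5):
    """Return the best wall-clock time of several runs of func(*args).

    Taking the minimum over repeats filters out scheduler and
    warm-up noise, which matters when comparing two engines.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        best = min(best, time.perf_counter() - start)
    return best
```

Running the same Q grid and sample model through each engine's simulate call under this harness gives directly comparable numbers.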


4. Looking ahead to a possible fix, I experimented with reusing the already well-performing Refnx engine and embedding it behind a Python API -- which could also back a GUI, for that matter:
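The embedding idea amounts to a thin facade that delegates the actual calculation to a pluggable backend, so the script-level API (or a GUI on top of it) stays engine-agnostic. A generic sketch with a hypothetical interface -- this is not Refnx's actual API, and `dummy_backend` is a stand-in:

```python
class ReflectometryFacade:
    """Thin front end delegating simulation to a pluggable backend.

    The backend only needs to be a callable simulate(q, parameters);
    swapping Refnx for another engine then touches no client code.
    """

    def __init__(self, backend):
        self._backend = backend
        self.parameters = {}

    def set_parameters(self, **params):
        self.parameters.update(params)

    def simulate(self, q):
        # Pass a copy so the backend cannot mutate our parameter state.
        return self._backend(q, dict(self.parameters))

# Trivial stand-in backend for illustration
def dummy_backend(q, params):
    scale = params.get("scale", 1.0)
    return [scale / (1.0 + qi) for qi in q]
```

A GUI would talk only to the facade, keeping the engine replaceable behind it.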


All of this work can be found in the following GitHub repository:


#3 Updated by juan over 1 year ago

  • Status changed from Sprint to Resolved
  • % Done changed from 0 to 100
