relax releases
This is a collection of all of the full release notes for each released relax version.
Version 5 of relax
relax 5.0 series
relax 5.0.0
Description
This is a major feature release that adds initial support for wxPython-Phoenix. It includes a large number of under-the-hood changes to support more modern Python versions and packages, significant polishing of the relax text output, improved test suite control, and improved and modernised Travis CI support for automatically checking the integrity of the software.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 5.0.0
(24 August 2020, from master)
https://sourceforge.net/p/nmr-relax/code/ci/5.0.0/tree/
Features
- Support for wxPython-Phoenix.
- Bmrblib: This Python package is now once again optional and relax can run without it.
- MS Windows builds are now 64-bit by default.
- Major improvements to relax text output.
Changes
- TestSuite: Skipped tests are no longer run when individual tests are supplied on the command line. The RelaxTestLoader.loadTestsFromNames() method has been implemented to gracefully handle the skipping of tests when only a single test is run.
- Travis CI config: Fixes for PyPI numpy no longer being compatible with Python 2.7. Older versions of numpy now need to be manually specified for the Python 2.7 job.
- Travis CI config: Attempt at making the MS Windows build job run again. The Travis CI infrastructure has changed yet again and the Windows job fails in the setup stage. These changes are just a guess to try to make this work again.
- Travis CI config: 2nd attempt at making the MS Windows build job run again. Chocolatey was automatically installing the new Python 3.8.0 but the paths pointed to the 3.7 version. Now the 3.7.4 Python version is explicitly specified.
- SCons: Change for the MS Windows build architecture from the default of 32-bit to 64-bit. Previously the default was 32-bit compilation on all Windows systems, via the WIN_TARGET_OVERRIDE flag, as official Python never used to release 64-bit builds for Windows systems. As this is no longer the case, the 32-bit override is now only set for the old Python 2 versions.
- Travis CI config: Creation of a job for testing relax on an arm64 CPU. The system Python and its packages are used to avoid timeouts on arm64. Installing the Python packages via pip prior to running causes a Travis CI time out, as most of the 50 minutes allowed are used up by the compilation of SciPy. Despite the successful installation of the wxPython site-package on the system Python3, the GUI tests are not activated as there is a problem with xvfb on the arm64 Travis CI jobs.
- N_state_model.test_populations system test: Loosened two of the checks to allow arm64 to pass.
- wxPython: Added the dep_check.old_wx flag for differentiating between Classic and Phoenix (see the compatibility sketch after this change list).
- wxPython-Phoenix: Fix for the wx.BoxSizer.AddSpacer() function calls. The old wxPython conversion of the size argument to (size, size) breaks the layout, so the tuple arguments are essential. However, tuple arguments are not allowed in wxPython-Phoenix. Therefore the dep_check.old_wx flag is used to differentiate the behaviour of the different wxPythons.
- wxPython-Phoenix: Fix for the old wx.Sizer.DeleteWindows() method. This method no longer exists, so the Clear() method with the deleteWindows argument (or delete_windows in Phoenix) is used instead.
- wxPython-Phoenix: Fix for the missing wx.SystemSettings_GetMetric() function. This has been switched to wx.SystemSettings.GetMetric(), which is present in both the original wxPython and Phoenix.
- wxPython-Phoenix: Fixes for the relax GUI About dialog. The wx.Frame.Center() function call only works if the window is shown (i.e. it is broken in the test suite), and the wx.DC.EndDrawing() function has been dropped in Phoenix.
- wxPython-Phoenix: Fixes for the GUI sequence and file input elements. The wx.Frame.Center() function call only works if the window is shown (i.e. it is broken in the test suite).
- wxPython-Phoenix: Support for the splash screen. The wx.SplashScreen class and associated variables have shifted into wx.adv.
- wxPython-Phoenix: Support for the relax icon. The wx.IconBundle.AddIconFromFile() function has been replaced by wx.IconBundle.AddIcon() in the current Phoenix.
- wxPython-Phoenix: Fix for the spin viewer window. The wx.Window.GetClientSizeTuple() function does not exist in Phoenix. However it can simply be replaced by wx.Window.GetClientSize() in the current code.
- Deletion hack: The wx.Bitmap.HasAlpha() function is missing in the current Phoenix.
- relax GUI: Fix for the window icons.
- wxPython-Phoenix: Switch away from the deprecated wx.Menu.AppendItem() function. Classic still requires the calls to this function, but Phoenix now uses wx.Menu.Append() instead.
- wxPython: Renamed the dep_check.old_wx flag to dep_check.wx_classic.
- wxPython-Phoenix: Prominent feedback warning the user about using unstable Phoenix versions. This includes both a RelaxWarning on start up and placing the warning text in red in the center of the blank relax GUI main window. Currently all Phoenix versions are labelled as unstable, however this can be changed in the future directly in the dep_check module.
- wxPython-Phoenix: Switch away from the deprecated wx.ToolBar.AddLabelTool() function. This is still used for "Classic". For Phoenix, the wx.ToolBar.AddTool() function is used instead.
- wxPython-Phoenix: Switch away from the deprecated wx.Window.SetToolTipString() function. Instead wx.Window.SetToolTip(wx.ToolTip(text)) is used for both "Classic" and Phoenix.
- wxPython-Phoenix: Switch from wx.NamedColour() to wx.Colour() in the relax controller. "Classic" still uses the old function.
- wxPython-Phoenix: Switch from the deprecated wx.Text.GetSizeTuple() to wx.Text.GetSize(). This seems to work on "Classic" as well.
- wxPython-Phoenix: Switch from the deprecated wx.TreeCtrl.GetItemPyData() function. "Classic" is still using this function, but Phoenix is now using wx.TreeCtrl.GetItemData().
- wxPython-Phoenix: Switch from the deprecated wx.TreeCtrl.SetDimensions() function. Instead SetSize() is now being used for Phoenix.
- wxPython-Phoenix: Switch from the deprecated wx.TreeCtrl.SetPyData() function. "Classic" is still using this function, but Phoenix is now using wx.TreeCtrl.SetItemData().
- wxPython-Phoenix: Switch from the deprecated wx.StockCursor() wrapper function. The overloaded wx.Cursor class can be used instead in Phoenix.
- wxPython-Phoenix: Switch from the deprecated wx.EmptyBitmap() wrapper function. Instead Phoenix versions can simply use the overloaded wx.Bitmap class with the same arguments.
- wxPython-Phoenix: Switch from wrapper to overloaded functions for the wx.ListCtrl elements.
- Python 3.8 support: The platform.linux_distribution() function no longer exists. It has been replaced by the distro site-package, and the lib.compat package deals with this difference.
- Model-free analysis: Obscure syntax error bug fix for an issue highlighted by Python 3.8. The error was in the set_xh_vect() function and is only encountered when reading an ancient relax 1.2 model-free results file.
- Travis CI config: Changes as suggested by the experimental Travis CI Build Config Explorer. The config text was pasted into https://config.travis-ci.com/explore and changed as suggested.
- Travis CI config: Shifted the OpenMPI required packages into an apt addons section.
- Travis CI config: Shifted the API doc build specific parts into the jobs matrix. This allows an environmental variable to be removed and a simplification of the script section.
- Travis CI config: Shifted the FSF copyright validation specific parts into the jobs matrix. This allows an environmental variable to be removed and a simplification of the script section.
- Travis CI config: Removal of the now unused TEST environmental variable.
- Travis CI config: Simplification of the single processor and OpenMPI execution. The MPIRUN and RELAX_ARGS arguments have been introduced. These are normally unset but, for the OpenMPI jobs, they are set to mpirun -np 2 and --multi=mpi4py respectively. This allows the duplicated entries for the information printout and test suite execution to be collapsed into one.
- Travis CI config: Removal of the pip upgraded package job. This job does not seem to be necessary for testing relax.
- Travis CI config: Conversion of the Ubuntu Xenial job to Ubuntu Bionic.
- Travis CI config: Removal of the language key in the jobs matrix when the value is python. This is a duplication as the language is set to Python outside of the matrix.
- Multi-processor: Shifted the processor type checking into the initial command line parsing. This allows a non-zero error code to be returned to the shell.
- Travis CI config: Shifted the echoing of environmental vars into a new before_script section. This allows the echoing to occur for all jobs.
- Scons: Improvements to the string formatting and the printout for the C module compilation. This includes showing the target architecture for MS Windows compilation.
- Scons: Document the environmental variables used.
- Information printout: Improved output for Python3 compiled C modules. The bytestream is now decoded.
- Scons: The MS Windows binary target architecture is now determined by the Python binary arch.
- Test suite: Implementation of a command line option for disabling IO capture. This was previously handled by using the debug command line option, which simply prevented IO capture. This type of output is very hard to parse by eye, as the tests are not well separated and the debugging output is very verbose. Now the --no-capt or --no-capture option has been implemented to disable the IO capture. The debug command line option no longer disables IO capture; rather it allows for finer control of the test suite in that verbose debugging output is now only shown for tests that do not pass. When IO capture is disabled, extra formatted output is used to provide clear separators, titles, descriptions and endings for each test.
- Test suite: Argument reordering and better docstring documentation in the relax test suite runners.
- Test suite: All adjustable widths are now set using the value of status.text_width. This includes the separators for the tests and the test suite summary lines at the end.
- Fix for Python 2.5 support.
- Command line processing: Switch from the deprecated optparse Python module to argparse. The argument parsing code and help text have also been improved (see the argparse sketch after this change list).
- Travis CI config: Added the relax --test and --version modes.
- Travis CI config: Alphabetical ordering of the environmental variable printouts.
- Help: Improvements to the help printout, including new descriptions for the argument groups.
- Status object: Improved logic for determining the ideal text width for relax.
- Travis CI config: Added testing of the relax --help mode.
- Test suite: Added text wrapping set to the relax text width for the test description. This is the description shown when running without IO capture.
- Information printout: Improved formatting for MS Windows. The repr() function results in \\ for path separators rather than \, causing the formatting to be misaligned.
- Test suite: Addition of a new command line option for listing all of the test names. The new --list-tests option will cause the names of the tests to be printed out without running any tests.
- Travis CI config: Try to force a Py2 compatible version of kiwisolver, as needed by matplotlib.
- Travis CI config: The virtual machines with Python2 now seem to require SCons be manually installed.
- Test suite: Fixes for the Palmer.test_palmer_omp system test. The modelfree4 binary type linux-x86_64-gcc seems to now produce slightly different results with newer system libraries. The checks in this test have been updated to reflect this.
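The many wxPython items above follow a single compatibility pattern: branch on a module-level flag that distinguishes Classic from Phoenix. The sketch below only illustrates that pattern; the add_spacer() and set_tooltip() helpers are hypothetical, and only the dep_check.wx_classic flag name and the AddSpacer()/SetToolTip() behaviour are taken from the items above.

    # A minimal sketch of the Classic/Phoenix branching described above. Only the
    # dep_check.wx_classic flag name comes from relax; the helpers are hypothetical.
    import wx

    try:
        from dep_check import wx_classic    # True for wxPython Classic, False for Phoenix.
    except ImportError:
        # Fallback guess when running outside of relax.
        wx_classic = "phoenix" not in wx.version().lower()

    def add_spacer(sizer, size):
        """Add a fixed spacer in a way that works for both Classic and Phoenix."""
        if wx_classic:
            sizer.AddSpacer((size, size))    # Classic needs the tuple argument to avoid layout breakage.
        else:
            sizer.AddSpacer(size)            # Phoenix rejects tuple arguments.

    def set_tooltip(window, text):
        """Set a tooltip via the API shared by Classic and Phoenix."""
        window.SetToolTip(wx.ToolTip(text))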
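The optparse to argparse switch and the new test suite options described above could look something like the following sketch. This is not the relax implementation - the option subset, grouping, defaults, and help strings are illustrative only.

    # A minimal argparse sketch of the command line changes described above
    # (illustrative option subset only, not the actual relax parser).
    import argparse

    parser = argparse.ArgumentParser(description="relax command line sketch")
    test_group = parser.add_argument_group("Test suite options")
    test_group.add_argument("--no-capture", "--no-capt", dest="capture", action="store_false",
                            help="Disable IO capture, printing a titled and separated block per test.")
    test_group.add_argument("--list-tests", action="store_true",
                            help="Print the names of all tests without running them.")
    info_group = parser.add_argument_group("Information options")
    info_group.add_argument("--version", action="version", version="relax 5.0.0")

    args = parser.parse_args(["--no-capture", "--list-tests"])
    print(args.capture, args.list_tests)    # False True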
Bugfixes
- GUI: Bug fix for the deletion of analysis tabs on Python 3. The value of None cannot be compared to an integer (see the sketch after this list). This bug appears to only be triggered by another bug - a GUI tearDown() or deletion failure on MS Windows with wxPython-Phoenix and Python 3.
- Bug fix: Restoration of the simple user function menus.
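The analysis tab deletion fix above comes down to a Python 2 versus Python 3 difference: ordering comparisons with None raise a TypeError on Python 3. A minimal illustration, using a hypothetical index variable rather than the relax code:

    # Python 2 silently allowed ordering comparisons with None; Python 3 raises TypeError.
    index = None
    try:
        print(index > 0)
    except TypeError:
        print("'>' is not supported between None and int on Python 3")

    # The safe pattern: test for None explicitly before comparing.
    if index is not None and index > 0:
        print("valid tab index")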
Links
For reference, the announcement for this release can also be found at the following links:
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
Version 4 of relax
relax 4.1 series
relax 4.1.3
Description
This is a minor bugfix release that re-enables the reading of Bruker Dynamics Center NOE data files.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 4.1.3
(14 June 2019, from master)
https://sourceforge.net/p/nmr-relax/code/ci/4.1.3/tree/
Features
N/A
Changes
- FSF Copyright Validation configuration: Blacklisted the PDF user manual. This allows the checking of relax tags to pass.
- Release checklist document: Describe the relax fork of latex2html.
- API manual: No longer raise errors when parsing the pystarlib docstrings.
- Release checklist document: Minor improvements to match the practical aspects of the release.
- User manual: Proper abbreviation of the "Quarterly Reviews of Biophysics" journal name.
- Test suite: New system test to catch the failure of reading newer Bruker DC NOE data files. The system test is Bruker.test_bug_15_NOE_read_fail and it catches bug #15. The test uses truncated data from Stefano Ciurli as attached to the bug report.
- Bruker DC: Silence the warnings about spin names already existing. The user does not need to see such warnings.
- Travis CI config: Explicitly set trusty as the distribution name for the default images. In the support request titled "Failure of GUI testing via xvfb", the Travis CI support staff suggested that we explicitly set dist: trusty.
- Bruker DC: A different way to silence the warnings about spin names already existing. The previous attempt at setting the force flag to True was causing failures in a number of system tests. Therefore a new flag, warn_flag, has been added to pipe_control.mol_res_spin.name_spin() to allow warnings to be explicitly silenced.
- Travis CI config: Use Xenial for running all tests on Linux and Python 2.7. This is from the support request titled "Failure of GUI testing via xvfb".
- Travis CI config: Manual support for old SciPy versions on Python 2.7. SciPy 1.3.0 now requires Python ≥ 3.5. Therefore the OLD_MATPLOTLIB variable has been renamed to OLD_PY2_PACKAGES and, when set, is now used to install old matplotlib and scipy versions when using Python 2.7.
- Travis CI config: Deactivate the Mac OS X updates to avoid timeouts. The brew update and brew upgrade python3 steps take up half of the build time for the Mac OS X target. This large amount of time sometimes causes this build to hit the Travis CI time limits.
Bugfixes
- Bruker DC: Support for handling newer versions of the NOE data file. This fixes bug #15, the failure to read newer versions of the Bruker DC NOE data files. This was simply a parsing issue, as the NOE column is now labelled NOE [ ] whereas previous DC versions used the text NOE or NOE [none] (see the sketch after this list).
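The bug #15 fix above is a column header parsing issue. The sketch below shows one tolerant way to match the header variants named above; the is_noe_column() helper is hypothetical and is not the relax parser.

    # Accept the NOE column header variants named above: "NOE", "NOE [none]" and "NOE [ ]".
    import re

    def is_noe_column(header):
        """Return True if the header labels the NOE column in any known DC variant."""
        return re.match(r"^NOE(\s*\[[^\]]*\])?$", header.strip()) is not None

    for header in ["NOE", "NOE [none]", "NOE [ ]", "NOE error"]:
        print(repr(header), is_noe_column(header))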
Links
For reference, the announcement for this release can also be found at the following links:
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 4.1.2
Description
This is a minor feature and bugfix release. It includes tooltip improvements in the GUI for the user function windows and wizards, the addition of the newly published primary reference for the frame order analysis, and improved formatting for the bibliography and index of the relax manual.
There have also been improvements for the automated testing of relax by Travis CI. This includes the naming of the build jobs, the execution of the software verification tests, the installation of wxPython to enable GUI testing and the running of the whole test suite, the reordering of the system tests back before the unit tests to avoid hiding some nasty relaxation dispersion bugs, a fix for matplotlib on Mac OS X so that the tests will finally run on this OS, a new build job for the API documentation, and a new build job for the Free Software Foundation copyright validation script.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 4.1.2
(25 April 2019, from master)
https://sourceforge.net/p/nmr-relax/code/ci/4.1.2/tree/
Features
- relax GUI: Improved tooltips for the buttons of the user function windows and wizards. This follows from the mailing list discussion.
- User manual: Addition of the newly published frame order reference.
- Formatting improvements for the user manual bibliography and index sections.
Changes
- Development scripts: Improvements to the Python detection in the Python module seeking script.
- Release checklist document: Updated the text to better match the new release process.
- HTML manual: CSS fix for newer LaTeX2HTML versions. The text width in the HTML appears to now be fixed to a maximum width matching the text dimensions in the PDF. This looks bad together with the wider images and code snippets.
- System tests: Added two tests to catch bug #12, the failure to catch the '#' character when setting the molecule name. The tests are Structure.test_bug_12_hash_in_mol_name_via_arg and Structure.test_bug_12_hash_in_mol_name_via_file. These cover the two ways a # character can enter a molecule name - via the file name or via the set_mol_name argument. Both the structure.read_pdb and structure.read_xyz user functions are checked.
- Test suite: 2 new system tests to catch the failure of reading newer Bruker DC files. The system tests are Bruker.test_bug_13_T1_read_fail and Bruker.test_bug_13_T2_read_fail and these catch bug #13.
- User function definitions: Clarifications for the bruker.read text.
- User manual: Clean up of the bibliography entry titles. Species names are properly italicised with genus names capitalised, nuclear isotopes are superscripted, R1ρ, R2, etc. are properly subscripted, the Perrin articles are translated into English, symbols are now symbols, and unnecessary capitalisation has been removed from the bibtex.
- User manual: Standardisation of the frame order indexing.
- User manual: Standardisation of the relaxation dispersion indexing.
- Travis CI config: Attempt at installing wxPython for Ubuntu and Python 2.7. This would allow for the whole test suite to be run on Travis CI on at least one OS. The instructions come from the stackoverflow response by dthor.
- FSF Copyright Validation script: The script now returns an exit status.
- Travis CI config: Avoid updating Conda. This seems to cause a breakage in installing matplotlib.
- Travis CI: matplotlib is now manually installed to allow for older versions on Python 2.7. The current pip default of 3.0.3 is incompatible with Python 2.7. It is not clear how the installation of Conda (for wxPython support) caused the 3.0.3 version to be installed instead of the 2.2.4 version. So now the version is manually set in the Travis CI script.
- Travis CI config: Enable xvfb to allow for wxPython and testing of the GUI.
- Test suite: Restored the original test suite order to reveal relaxation dispersion bugs. The system tests should come first. This allows the maximum amount of code that might accidentally change read-only variables to run prior to the unit tests, where such changes are often subsequently picked up.
- Test suite: The keyboard interrupt terminates the test suite once again.
- FSF Copyright Validation script: The return status now starts at 0 to allow for early returns.
- FSF Copyright Validation script: Support for saving and reading the committer information. This allows the committer information (file name, committer name, and copyright years) from older repositories to be saved and later read into the script. In this case, the old Subversion history has been read and the committer information placed into the fsfcv.svn_committer_info.bz2 file (in the devel_scripts/ directory). This compressed file is now specified in the fsfcv.conf.py configuration file. The result is that the fsfcv script can be run on the relax git repository without requiring a checkout of the old SVN repository.
- Travis CI config: Improvements to the comments and spacing.
- API manual: Scons compilation via epydoc now fails if a warning or error is found. This manually parses the epydoc output to skip the unavoidable wxPython warnings. Any error or warning will now cause an error to be raised. This results in a non-zero return code from scons to allow the api_manual_html target to be checked in scripts.
- Travis CI config: Named all of the jobs.
- Travis CI config: General clean up and execution of the software verification tests.
- API manual: Scons compilation via epydoc now fails if an import error occurs.
- Travis CI config: Alphabetical ordering of environmental variables and required Python packages.
- Travis CI config: Creation of an API documentation build job.
- Travis CI config: Fix for the Mac OS X build. This job passes, but the test suite fails with the following traceback message when trying to import matplotlib: "ImportError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are using (Ana)Conda please install python.app and replace the use of 'python' with 'pythonw'. See 'Working with Matplotlib on OSX' in the Matplotlib FAQ for more information.". The fix is simply to create $HOME/.matplotlib/matplotlibrc with the contents backend: TkAgg (see the sketch after this change list).
- API manual: Greater filtering of the file list passed to epydoc. Now only relax modules ending in *.py are processed. That means that all base directory scripts, including sconstruct, are excluded from the API documentation.
- API manual: More reliable parsing of the epydoc output to detect non-wxPython issues.
- FSF Copyright Validation configuration: Improvements to the repository configuration section. The different configurations can now be chosen via a variable, rather than requiring code to be uncommented.
- FSF Copyright Validation script: Sorted the years for the committer information output. This makes it easier to read the file and will help with compression.
- FSF Copyright Validation script: A new file with all committer information up to 2018. This is to allow for much faster execution of the FSFCV script, by only looking at the git log from the start of 2019.
- FSF Copyright Validation script: Support for skipping the first commit. This is for truncated history, where for example the git repository start date is set to a later date than the git repository migration or initial SVN commit, when the committer information up to a given date is read from a file.
- FSF Copyright Validation script: Fix for tracking renames when saved committer information is used.
- Travis CI config: Execution of the FSF copyright validation script as part of the testing. This is set to run only on a new Python 3.7 build job, simply to avoid unnecessary repetition. All of the git history needs to be fetched for the script to work, and the script requires the pytz Python module.
- FSF Copyright Validation script: Addition of a repository configuration printout. This is to help in debugging, as it is otherwise not clear where the source of the copyright information comes from.
- Release checklist document: Rewrote the 'preparation' instructions for Travis CI. All previous manual checking is now performed automatically by Travis CI for each push to the GitHub mirror repository.
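The Mac OS X matplotlib fix above amounts to writing a single configuration file. Below is a Python sketch of that Travis CI step; the file path and the backend value come from the item above, while the rest is an assumed equivalent of the shell commands used in the CI config.

    # Write $HOME/.matplotlib/matplotlibrc selecting the TkAgg backend, as described above.
    import os

    matplotlib_dir = os.path.join(os.path.expanduser("~"), ".matplotlib")
    os.makedirs(matplotlib_dir, exist_ok=True)
    with open(os.path.join(matplotlib_dir, "matplotlibrc"), "w") as file:
        file.write("backend: TkAgg\n")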
Bugfixes
- relax GUI: wxPython-Phoenix 4.x fix to allow relax to start again. In the later wxPython versions, relax would not be able to start either the GUI or any of the test suite due to a new error "wx._core.PyNoAppError: The wx.App object must be created first!". This was not present in wxPython-Phoenix 3. The Relax_icons class (a wx.IconBundle derived class) is no longer instantiated on import.
- Structure loading: Fix for bug #12, the acceptance of the invalid '#' character in molecule names. A simple check has now been added to the load_pdb() and load_xyz() functions of the internal structural object in lib.structure.internal.object. This ensures that the # character can never be set as the molecule name, regardless of whether it was taken from a file name or set via the set_mol_name arguments of the structure.read_pdb or structure.read_xyz user functions.
- Bruker DC: Complete redesign of the backend to support reading newer (or older) file versions. This fixes bug #13, the failure of reading newer Bruker DC files. The backend has been redesigned so that the relax library produces a complex Python object representation of the Bruker DC results file. This object now stores all of the data present within the Bruker DC file. The design is more flexible as precise column ordering no longer matters.
- Fix for bug #14, the freezing of user functions in the GUI. The user functions freeze if an error occurs that is not a RelaxError, with the mouse pointer stuck on the busy cursor. These non-RelaxErrors are now caught and manually dealt with by the GUI interpreter. Like all GUI freezing bugs, this was introduced with the huge GUI speed up in relax 4.1.0. This also only appears to be a freeze; it is actually the failure to update and show the relax controller combined with not turning off the busy mouse cursor.
- GUI bug fix: Avoidance of the numpy deprecation of == None. This deprecation causes the GUI to fail with recent numpy versions.
- Relaxation dispersion: Protection of all of the MODEL_PARAMS_* variables from modification. These are now only used with copy.deepcopy() (see the sketch after this list). This removes a number of bugs in which the lists, which should be read-only, are permanently modified by the addition of 'r1'. The system tests add 'r1' and then the unit tests subsequently fail. This would also be an issue if an experiment without the 'r1' parameter is analysed after one with that parameter, without restarting relax.
- Relaxation dispersion bug fix: The 'r1' parameter was missing from the nested parameter algorithm. This is the nesting_param() function of the specific_analyses.relax_disp.model module. The 'r1' parameter must be treated differently from the other model parameters, just as the 'r2*' parameters are.
- Dispersion auto-analysis: Bug fix for the plotting of the R1 parameter. The plotting relied on the insertion of the 'r1' parameter into the read-only MODEL_PARAMS_* variables of lib.dispersion.variables. Now the Model_class class from specific_analyses.relax_disp.model is being used to dynamically determine the parameters of the model.
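The read-only MODEL_PARAMS_* protection above follows a standard Python pattern: a module-level list must be copied before being modified for a given analysis. A minimal sketch with a hypothetical parameter list (not the real relax variables):

    # Hypothetical module-level parameter list, meant to be read-only.
    import copy

    MODEL_PARAMS_EXAMPLE = ['pA', 'kex', 'dw']

    def build_params_buggy(optimise_r1=True):
        params = MODEL_PARAMS_EXAMPLE          # Only a reference is taken...
        if optimise_r1:
            params.insert(0, 'r1')             # ...so the module-level list is permanently modified.
        return params

    def build_params_fixed(optimise_r1=True):
        params = copy.deepcopy(MODEL_PARAMS_EXAMPLE)    # Work on an independent copy instead.
        if optimise_r1:
            params.insert(0, 'r1')
        return params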
Links
For reference, the announcement for this release can also be found at the following links:
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 4.1.1
Description
This is a major bugfix release. The release fixes multiple issues with the relax GUI and with the relaxation dispersion analyses. Please see the notes below for details.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 4.1.1
(8 March 2019, from master)
https://sourceforge.net/p/nmr-relax/code/ci/4.1.1/tree/
Features
- N/A.
Changes
- Mac OS X distribution file: Fixes for the DMG file generation. The .git directories are no longer bundled (the check in setup.py was for .svn directories), and the sobol_test.py script contained a bug that blocked the image generation.
- Release Checklist: Rewrite for the shift to a git repository and to the SourceForge infrastructure.
- Test suite: Temporary file fix for the Bmrb system and GUI tests. The temporary files normally used by these tests were accidentally removed in a previous commit. The result was temporary files being placed in the current directory.
- log_converter.py development script: Conversion from SVN to git. A number of spacing bugs have also been removed, simplifying the release process.
- relax manual: The find_replicate_titles.py script can now handle the presence of latex2html. If latex2html had been set up via the docs/devel/latex2html/setup script, then find_replicate_titles.py would fail due to the presence of *.tex files outside of docs/latex/.
- Update from LaTeX2HTML 2008 to 2019. The instructions now point to the latex2html repository fork at SourceForge, with the relax manual specific branches.
- GUI tests: Addition of the User_functions.test_bug_2_structure_read_pdb_failure test. This is to catch bug #2, the failure of the structure.read_pdb user function in the GUI.
- GUI tests: Addition of the User_functions.test_bug_3_no_argument_validation test. This is to catch bug #3, the absence of user function argument validation within the GUI.
- Unit tests: Addition of two tests for specific_analyses.relax_disp.parameters.param_num(). This is to catch bug #6, the failure of the parameter counting for the 3-site relaxation dispersion models when spins are clustered. The two unit tests are Test_parameters.test_param_num_clustered_spins and Test_parameters.test_param_num_single_spin in the unit test module _specific_analyses._relax_disp.test_parameters.
- Unit tests: Addition of two tests for specific_analyses.relax_disp.parameters.loop_parameters(). The two unit tests are Test_parameters.test_loop_parameters_clustered_spins and Test_parameters.test_loop_parameters_single_spin in the unit test module _specific_analyses._relax_disp.test_parameters. These were added to try to catch the typo error at the end of the function, where the ΔωHAB parameter appears twice (the second should be ΔωHAC). However the typo was not caught in the tests as no currently implemented dispersion model contains the ΔωHAC parameter. Hence it is a latent bug. The tests do catch a minor error with the 'R2eff' model in which the I0 parameter is always returned. I0 should only be returned when exponential curve data is present. This bug has no apparent effect on the current operation of relax, so the parameter is probably handled correctly downstream.
- Module specific_analyses.relax_disp.parameters: Fix for loop_parameters() with the 'R2eff' model. This now only returns the I0 parameter when exponential curve data is present. This fix has no apparent effect on the operation of relax, so the I0 parameter is probably correctly handled in code that calls the loop_parameters() function.
- Dispersion: Shift of the model parameters from the parameter loop to lib.dispersion.variables. This removes all references to specific model parameters from the loop_parameters() function in the specific_analyses.relax_disp.parameters module into lib.dispersion.variables. This simplifies the loop_parameters() function and should minimise latent bugs.
- Unit tests: Addition of two tests for specific_analyses.relax_disp.parameters.linear_constraints(). The two unit tests are Test_parameters.test_linear_constraints_clustered_spins and Test_parameters.test_linear_constraints_single_spin in the unit test module _specific_analyses._relax_disp.test_parameters. These show that the linear constraints are correctly assembled for single and clustered spins for all models.
- Module specific_analyses.relax_disp.parameters: Docstring, whitespace, and comment fixes.
- Unit tests: Addition of tests for lib.dispersion.ns_mmq_3site and lib.dispersion.ns_r1rho_3site. These are to catch bug #9, and specifically test for when pA is 1.0 and the other probabilities are zero. Two new unit tests of the _lib._dispersion.test_ns_mmq_3site module include Test_ns_mmq_3site.test_ns_mmq_3site_mq and Test_ns_mmq_3site.test_ns_mmq_3site_sq_dq_zq, and a single new unit test of the _lib._dispersion.test_ns_r1rho_3site module is Test_ns_r1rho_3site.test_ns_r1rho_3site.
- Unit tests: Addition of two tests for specific_analyses.relax_disp.parameters.param_conversion(). The two unit tests are Test_parameters.test_param_conversion_clustered_spins and Test_parameters.test_param_conversion_single_spin in the unit test module _specific_analyses._relax_disp.test_parameters. These tests uncovered that the pC parameter for the 3-site R1ρ dispersion models 'NS R1rho 3-site' and 'NS R1rho 3-site linear' is not being calculated in the param_conversion() function. This is now reported as bug #11.
- Unit tests: Creation of the Test_parameters.test_param_conversion_clustered_spins_sim test. This is to check the specific_analyses.relax_disp.parameters.param_conversion() function for a cluster of 2 spins for Monte Carlo simulations. It was a failed attempt to catch bug #10. The problem probably lies in the Monte Carlo simulation setup functions in the specific analysis API rather than in the module specific_analyses.relax_disp.parameters.
- Unit tests: Test of the dispersion specific analysis API function sim_init_values(). This is an attempt at catching bug #10, the failure of the 3-site dispersion models when setting the pC parameter for Monte Carlo simulations. The failing test however shows that the sim_init_values() function probably needs a complete overhaul.
- Dispersion: Improved handling of deselected spins in the loop_parameters() function. This is from the specific_analyses.relax_disp.parameters module. The function can now handle the first spins in the cluster being deselected.
- FSFCV configuration: Skip some false positive copyrights in the docs/CHANGES file.
Bugfixes
- Fix for bug #2, the failure of the structure.read_pdb user function in the GUI. The problem was that the file selection argument was being set up incorrectly as two GUI elements - an inactive file selection element and a normal value setting GUI element. Only the second value input GUI element was active (due to the GUI elements being stored in a dictionary, with the first key value being overwritten by the second).
- Fix for bug #3, the absence of user function argument validation within the GUI. The code for the user function argument validation in the prompt/script UIs was simply copied and slightly modified to fit into the GUI user function window execution. All arguments are now passed into the new lib.arg_check.validate_arg() function and are checked based on their user function definitions.
- Fix for bug #4, the relax controller in the GUI not displaying text when required. Calls to the captured IO stream flush() methods are now being made in a number of places to allow the controller to show the text when required. This includes after printing out the intro text, after any captured and GUI handled errors, after clicking on the help→licence menu entry, after thread exceptions, and after a number of GUI message dialogs. The bug is only present in relax 4.1.0.
- Typo fix in the description of the 'atomic' argument for the structure.rmsd user function.
- Fix for bug #5, the incorrect numpy version check in the relaxation dispersion auto-analysis. The dep_check.version_comparison() function is now used for the version comparisons.
- Dispersion: Fix for bug #7, the model list containing 'No Rex' twice. The MODEL_LIST_FULL variable contained the model 'No Rex' twice. The only manifestation of the bug is a RelaxError message showing the full list of models, when a user selects a non-existent dispersion model.
- Dispersion: Fix for bug #6, the incorrect parameter counting for 3-site models with spin clustering. The issue was that the list of spin-specific parameters was incomplete. To resolve this, the parameter names have been shifted into the lib.dispersion.variables module lists PARAMS_R1, PARAMS_GLOBAL, and PARAMS_SPIN. By removing the parameter names from other parts of relax, the lib.dispersion.variables module will serve as a single point of failure and hence it will be much easier to maintain the relaxation dispersion code when new models with new parameters are added.
- Dispersion: Fix for bug #8, the accidental modification of the hardcoded variables. The MODEL_PARAMS lists in lib.dispersion.variables were accidentally being modified by the Model_class class in the specific_analyses.relax_disp.model module. The list for a given model was being set as the self.params list. This list would then have the 'r1' parameter pre-pended to it if that parameter is optimised for a model, and hence the lib.dispersion.variables list would be permanently modified. Now copy.deepcopy() is being used for all variables to avoid this issue. This bug was uncovered in the unit tests as the _specific_analyses._relax_disp.test_model tests were causing 'r1' to be added, and then the later _specific_analyses._relax_disp.test_parameters tests would fail as 'r1' should not be in those lists. This bug is highly unlikely to be encountered by users of relax. You would need to run two analyses, one after the other without closing relax, and the first analysis would need to have R1 optimised and the second not.
- Dispersion: Fix for bug #9, the failure of the 3-site dispersion models when pB and pC are zero. When both are zero, for example during a comprehensive grid search when model nesting is not utilised, a divide by zero error occurs. This is now caught and large values (1e100) are set for the rates instead (see the sketch after this list).
- Dispersion: Fix for bug #11, the missing pC calculation for the 3-site R1ρ models. The models 'NS R1rho 3-site' and 'NS R1rho 3-site linear' were simply missing from the list of models for the pC parameter.
- Dispersion: Fix for bug #10, the 3-site model failure of setting pC for Monte Carlo simulations. For this, the sim_init_values() function of the relaxation dispersion specific API has been completely rewritten. The specific_analyses.relax_disp.parameters.param_conversion() function is now called at the start to generate initial non-model parameters, and at the end to populate the simulation structures. The rest of the function has been stripped down and significantly simplified.
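The bug #9 fix above is a guard against a zero denominator when the pB and pC populations are both zero, substituting the large 1e100 rate. A generic sketch of that guard pattern follows; the function and its arguments are hypothetical, and the real 3-site equations are far more involved.

    # Generic guard pattern: avoid the divide-by-zero and return a very large rate instead.
    def guarded_rate(numerator, denominator, fallback=1e100):
        """Return numerator / denominator, or the large fallback when the denominator is zero."""
        if denominator == 0.0:
            return fallback
        return numerator / denominator

    print(guarded_rate(1.0, 0.0))    # 1e+100 instead of a ZeroDivisionError.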
Links
For reference, the announcement for this release can also be found at the following links:
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 4.1.0
Description
This is a major feature and bugfix release. This is also the first release after the permanent Gna! shutdown and the complete migration of relax's free software infrastructure to SourceForge, the first release after the complicated migration from the original Subversion version control repository to git for the relax source code and the relax website, and the first release after three years of development. In the meantime, a new demo repository has been created containing all the data and instructions required to perform and demonstrate different relax analyses.
Features of this release include the addition of a bash completion script, large speed improvements in the GUI and in the execution of many relax user functions, improved sample scripts, significant relax manual updates, support for newer NMRPipe SeriesTab files, improved Docker images, automated testing of relax via Travis-CI, the new user functions frame_order.decompose, structure.add_helix, and structure.add_sheet, and significant improvements for user function argument checking and user feedback via RelaxErrors.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 4.1.0
(14 February 2019, from master)
https://sourceforge.net/p/nmr-relax/code/ci/4.1.0/tree/
Features
- Greater wxPython-Phoenix support while maintaining compatibility with wxPython-Classic.
- Creation of a bash completion script for completing command line arguments with the tab key (docs/bash_completion.sh).
- A significantly more responsive relax GUI.
- Converted the steady-state NOE analysis sample script to use the auto-analysis.
- Standardisation of initial and final printouts in the auto-analyses, including the elapsed time.
- More of the GUI main menu entries are disabled during execution locking.
- Safe execution of all of the auto-analyses.
- Huge speed ups for many parts of relax with the addition of fast and temporary hash lookup tables and cross-referencing for the molecule, residue, spin and interatomic data containers.
- Many improvements and updates throughout the relax manual.
- Support for the new format of the NMRPipe SeriesTab files.
- Improvements for the Docker container scripts and documentation in devel_scripts/Docker/.
- Automated testing of relax via Travis-CI.
- New frame_order.decompose user function for a new representation of the frame order analysis results.
- Addition of the new user functions structure.add_helix and structure.add_sheet for manually defining secondary structure.
- Speed up of the 'fit to first' algorithm in the structure.superimpose user function.
- Significant improvements to the checking of arguments passed into user functions, and the resultant error messages for invalid arguments.
- Improvements and fixes for the RelaxError messages to better explain user errors.
- A large number of updates for the switch from the Subversion version control repository to git, and the move from the closed Gna! infrastructure to SourceForge.
Changes
- Removal of the Mac OS X taskbar icon functionality. This code has been disabled since its deletion back in Jun 2012, as it does not work with wxPython 2.8 or 2.9. However with wxPython Phoenix, the disabled code fails as there is no wx.TaskBarIcon.
- Keyword to positional argument conversion for the GUI wx.ListCtrl.SetStringItem() function calls. The keyword arguments for this function must exist for backwards compatibility with ancient wxPython versions. The current documentation lists them as positional arguments, and keyword arguments are not accepted by wxPython-Phoenix.
- Keyword to positional argument conversion for the GUI wx.ScrolledWindow.EnableScrolling() calls. These function calls were using keyword arguments, however the old wxPython and Phoenix documentation say that these are not keyword arguments (this must have been for backwards compatibility with very old wxPython versions).
- Keyword to positional argument conversion for a GUI wx.BoxSizer.Clear() call. This is for the spin containers in the spin viewer window. The keyword argument in wxPython classic is deleteWindows, however in Phoenix it is delete_windows.
- Decreased the precision of a check in the Rx.test_r1_analysis GUI test. This is to allow the test to pass on wxPython-Phoenix and Python 3.
- Keyword to positional argument conversion for the GUI wx.Font() calls. A number of these were being called with keyword arguments, however the old wxPython and Phoenix documentation say that these are not keyword arguments (this must have been for backwards compatibility with very old wxPython versions).
- Replacement of a wx.ListCtrl.DeleteAllColumns() function call from the spectrum GUI element. This function does not exist in wxPython-Phoenix. Instead, the columns are looped over and wx.ListCtrl.DeleteColumn() is called.
- Creation of an initial bash script for enabling bash completion.
- Improvements for the bash completion relax script. Directories and relax scripts are now much better handled.
- Fine tuning of the bash completion relax script. The option -o nospace for complete has been removed as spaces are not added for directories anyway. This means that a space is added after all options and scripts.
- More precision decreases in the Rx.test_r1_analysis GUI test. This is to allow the test to pass on wxPython-Phoenix and Python 3.
- Updates to the upload section of the release checklist document for sending files to SourceForge.
- Added release instructions for creating the README.rst files for the download area. This is for using the custom html2rest to automatically generate the reStructuredText file from the wiki release notes.
- Expanded the release checklist instructions for creating the README.rst files.
- Updates to many frame order test suite shared data relax scripts. These scripts are used for data generation and display, and are not part of the test suite. The updates are for the frame_order.pdb_model and pymol.frame_order user functions which no longer support the dist keyword argument (this functionality was shifted into the frame_order.simulate user function).
- First commit after the svn to git migration: Created a .gitignore file for the new git repository.
- Documented the svn to git repository migration. All of the scripts used and detailed instructions have been included.
- Standardisation of the section titles in a number of the documentation files.
- The files auto-generated during the PDF user manual compilation are now ignored by git.
- Git support for the repository version information. This is used in the relax introductory text, the manual compilation, and in the relax save states. The version.repo_revision variable has been renamed to version.repo_head to be repository type independent. For the repository URL, all of the git remotes are included.
- C module blacklisting of the Relax_disp.test_bug_24601_r2eff_missing_data system test. The test is skipped if the C modules are not compiled.
- Added .pyc and .so files to be ignored.
- Fix for dep_check when a package has an appended release candidate number, for example numpy 1.8.0rc1.
- Added a script to check for copyright notice compliance to the FSF standard.
- Support for multiple git and svn repositories in the FSF copyright notice compliance checking script.
- Collection of all commits to attribute to other authors. This is for the FSF copyright notice compliance checking script.
- Collection of all commits to exclude by the FSF copyright notice compliance checking script.
- FSF compliant copyright notices for all files in the documentation directory docs/devel/. This includes two README files with the copyright notices for all of the patches.
- FSF compliant copyright notices for all files in the documentation directory docs/latex/. This includes a README file with the copyright notices for the binary graphics.
- FSF compliant copyright notices for all files in the documentation directory docs/html/. This includes a README file with the copyright notices for the latex2html-2008 icons. The copyright notice script has been updated to handle false negatives (significant git commits without copyright ownership), and additional copyrights not present in the git log.
- FSF compliant copyright notices for all remaining files in the documentation directory.
- Added the original oxygen icon AUTHORS and COPYING files and standardised the README file titles. The AUTHORS and COPYING files from the original svn repository svn://anonsvn.kde.org/home/kde/trunk/kdesupport/oxygen-icon have been added to the repository for better documentation of the copyright. The README file had also been updated with the origin information.
- FSF compliant copyright notices for the entirety of the graphics/ directory.
- FSF compliant copyright notices for the extern/ directory. The packages within this directory are skipped in the devel_scripts/copyright_notices.py copyright compliance checking script.
- Update to FSF compliant copyright notices for all modules in the auto_analyses package.
- Update to FSF compliant copyright notices for all modules in the data_store package.
- FSF compliant copyright notices for the entirety of the devel_scripts/ directory.
- Update to FSF compliant copyright notices for all modules in the gui package.
- Update to FSF compliant copyright notices for all modules in the lib package.
- Update to FSF compliant copyright notices for all modules in the multi package.
- Update to FSF compliant copyright notices for all modules in the pipe_control package.
- Update to FSF compliant copyright notices for all modules in the prompt package.
- Update to FSF compliant copyright notices for all scripts in the sample_scripts/ directory.
- Update to FSF compliant copyright notices for all modules in the scons package.
- Update to FSF compliant copyright notices for all modules in the specific_analyses package.
- Update to FSF compliant copyright notices for all modules in the target_functions package.
- Update to FSF compliant copyright notices for all modules in the user_functions package.
- Update to FSF compliant copyright notices for all modules and files in the base relax directory.
- Update to FSF compliant copyright notices for all unit test modules.
- Module docstring standardisation for the system test scripts.
- Update to FSF compliant copyright notices for all system test modules and scripts.
- Update to FSF compliant copyright notices for all verification test modules.
- Update to FSF compliant copyright notices for all GUI test modules.
- Update to FSF compliant copyright notices for the base test suite modules.
- Support for automated copyright notice placement in README files. This is directly within the FSF copyright notice compliance checking script.
- Update to FSF compliant copyright notices for all scripts in the test_suite/shared_data/ directory.
- Self exclusion of the FSF compliant copyright notice commits.
- Cosmetic change for the test___all__() unit test base class method. The files are now sorted.
- Blacklisted missing files are now skipped in the test___all__() unit test base class method. This allows for the test_suite.unit_tests._target_functions.test___init__.Test___init__() unit test to pass when the relaxation curve-fitting C modules are not compiled.
- Changed the relax state file name for the state.save user function calls in the sample scripts. This is to make it clearer what the files are. The old *save.bz2 notation has been removed and the files are now generally called state.bz2.
- Update to FSF compliant copyright notices for the external Sobol package. An explicit README file has been added to clarify the copyright status of all files.
- Added a trivial relax script to help regenerate the pec_diag.eps diagram.
- Added the base Xmgrace data file for the generation of the NOE data plot. This is for regenerating graphics/screenshots/noe_analysis/grace.svg. The copyright notice checking script has been updated for this old 2004 file.
- Changed a number of references to "Linux" to "GNU/Linux".
- Replaced all references to "open source" in the manual with "free software".
- Removed the ancient CIA.vc references in the development chapter of the manual.
- Added a README file for the extern/numdifftools package. This is taken from the VC log and explains the origin, version, and licensing of the package.
- Added the base Xmgrace data file for the generation of the R2 peak intensity data plot. This is for regenerating graphics/screenshots/xmgrace_peak_intensities.svg. The copyright notice checking script has been updated for this old 2004 file.
- Copyright notice updates for the graphics/misc/relaxGUI_splash* files.
- Update to FSF compliant copyright notices for the external numdifftools package. Explicit README files have been added to clarify the copyright status of all files.
- Removal of the numdifftools extern package, as this can be easily installed in Python using pip.
- Support for Grace-formatted units in the specific analysis parameter object. This is currently used by the relaxation curve-fitting analysis for the Rx parameters.
- Created the Relax_fit.test_auto_analysis_pipe_name system test to catch a missing RelaxNoPipeError. This is to catch the error "NameError: global name 'RelaxNoPipeError' is not defined".
- Conversion of the relaxation curve-fitting sample script to use the auto-analysis.
- Improved documentation for the DIFF_MODEL variable in the dauvergne_protocol.py sample script. The fact that it can be supplied as a list is now mentioned in the script docstring, and the default value is now a list with all of the global models.
- Support for NMR proton pseudo-atom identification from PDB files in the internal structural object. The standard pseudo-atoms are now identified as being protons.
- Removed a duplicated proton frequency check in the relax_data.read user function. This resulted in duplicated RelaxWarnings being printed out.
- Huge improvement for the responsiveness of the relax GUI. The relax controller window log panel was being updated with a wx.CallAfter() call after every write to the IO streams. If a relax analysis was proceeding very quickly, which is the case in most analyses, this created a huge backlog of GUI updates. The result was that the GUI would freeze, running at 100% CPU usage in its own thread, with the analysis running at 100% on another thread. The fix was to shift the log panel write() call to be triggered by the Timer already being used by the gauges, rather than by the IO stream write() methods. The text was already placed on a Queue object, so this change is very simple. Another small change was made to the log panel write() method to avoid a number of unneeded wx calls. This should also have a significant impact on the GUI updating.
- Saved state file name change for the steady-state NOE and relaxation curve-fitting auto-analyses. The names are now simply state.bz2. This is so the file is easier to identify as being a relax state file that can be loaded with the state.load user function.
- The relaxation curve-fitting sample script now timestamps the data pipe bundle name.
- Redesign of Troels' grace2images.py script. The executable script creation has been shifted from the relaxation curve-fitting auto-analysis (auto_analyses.relax_fit) into the new function lib.plotting.grace.create_grace2images(). This is now also used by the steady-state NOE auto-analysis. The content of the script has also been shifted into the lib.plotting.grace.GRACE2IMAGES variable to allow for easier code editing. The grace2images.py script itself has been heavily modified: the script now uses Python3 by default; the deprecated optparse module has been changed to argparse; a copyright notice has been placed at the top of the script; the top comment has been converted into a docstring; the default format is now EPS rather than PNG, as PNG is often not supported as an output device; a bug has been fixed so that all formats can now be created (supplying JPG previously did nothing); and general code and comment cleanups have been made.
- The FSF copyright notice compliance checking script is no longer dependent on relax. The relevant lib.io relax module functions have been copied into the script, and modified with the assumptions of Python 3 only compatibility and less flexible input.
- The relax status singleton now stores the time it was created as the program starting time. This is to allow for elapsed time calculations, which will be used in the auto-analyses for more detailed printouts.
- Creation of the lib.timing.print_elapsed_time() function. This prints out an elapsed time value in day, hour, minute, and second format. A number of unit tests have been added to check the handling of different time values, including plurals.
- Standardisation of initial and final printouts in the auto-analyses, including the elapsed time. The main auto-analyses now use lib.sectioning.title() for marking the start and the end of the analysis. After the final title() printout, the lib.timing.print_elapsed_time() function is called to provide user feedback on how long relax has been running.
- Creation of the Relax_disp.test_bug_missing_replicates GUI test. This is to catch an AttributeError when the replicated spectra are specified via the spectrum list GUI element rather than the peak intensity loading wizard.
- More of the GUI main menu entries are disabled during execution locking. This includes all of the Tools menu entries, to block the free file format from changing mid-execution, the system information user function from being called, and the test suite from being run. The BMRB export menu entry is also disabled.
- Safe execution of all of the auto-analyses (those that acquire the execution lock). The whole of the __init__() code of the auto-analyses is now wrapped within a try-finally set of statements (see the sketch below). This is to be absolutely sure that the execution lock is released. This is not always the case, for example the Relax_fit.test_auto_analysis_pipe_name system test was not releasing the lock due to a RelaxError, and this was causing the later GUI tests to fail.
- Updated the Rx.test_r1_analysis GUI test for the changed state file name in the auto-analysis.
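A minimal illustration of the try-finally protection described in the 'Safe execution' entry above, with a plain threading.Lock standing in for relax's execution lock (an assumption for the sketch):
    import threading

    exec_lock = threading.Lock()   # stand-in for the relax execution lock

    class AutoAnalysis:
        def __init__(self):
            exec_lock.acquire()
            try:
                self.run()   # a RelaxError raised here no longer leaves the lock held
            finally:
                exec_lock.release()

        def run(self):
            print("analysis running")

    AutoAnalysis()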
- Fix for the FSF copyright notice compliance checking script for lib/plotting/grace.py. The copyright notices within the grace2images.py script in the module variable are now ignored. This additionally required removing duplicate copyright notices, as both the module and embedded script have "Copyright (C) 2013 Troels E. Linnet".
- Unique and temporary hash support in the spin containers. These private data structures will allow for fast SpinContainer to InteratomContainer and reverse lookups. The hash is temporary and only created when a SpinContainer is created. It is not stored, so it is regenerated between relax sessions.
- Unique and temporary hash support in the interatomic data containers. The interatomic data containers now have a unique and temporary private hash assigned to them, just as with the spin containers. They also now have the ability to store the unique spin container hashes. This is currently unused but will allow for fast SpinContainer to InteratomContainer and reverse lookups.
- The interatomic data containers now store the SpinContainer hashes.
- The InteratomContainer._hash value is now stored in the spin containers it refers to.
- Bmrb system test fixes for the new SpinContainer private hash data structures. These structures are now blacklisted in the data pipe comparisons.
- Speed up for the pipe_control.interatomic.define() function. The create_interatom() function will now accept the two spin containers as arguments. As the define() function already has these, they are now passed in to avoid two calls to the pipe_control.mol_res_spin.return_spin() function.
- Creation of the pipe_control.interatomic.hash_update() function. This is used when copying interatomic data containers (the pipe_control.interatomic.copy() function) to make sure that the spin hashes in the receiving data pipe are stored in the new interatomic data container.
- Converted all pipe_control.mol_res_spin.return_spin() function calls to use keyword arguments. This is in preparation for adding support for the temporary spin hashes. The pipe_control.mol_res_spin module return_spin_from_selection() and return_spin_from_index() function calls have also been updated, just in case.
- Support for a spin hash fast lookup table for the molecule, residue and spin data structures. The fast lookup table is stored as dp.mol._spin_hash_lookup. This matches the dp.mol._spin_id_lookup fast lookup table, but is a simpler table to maintain as there is only one hash ever per spin and that hash is unique. The table is maintained by the pipe_control.mol_res_spin module.
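The assumed shape of such a hash-based lookup table, shown only as an illustration and not the exact relax data structures:
    import uuid

    class SpinContainer:
        def __init__(self, name):
            self.name = name
            self._hash = uuid.uuid4().hex   # unique and temporary, regenerated each session

    spins = [SpinContainer("N"), SpinContainer("H")]
    _spin_hash_lookup = {spin._hash: spin for spin in spins}

    # O(1) lookup by hash, avoiding spin ID string parsing and matching.
    assert _spin_hash_lookup[spins[0]._hash] is spins[0]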
- Conversion of all return_spin() calls with interatom spin IDs to use the spin_hash argument instead. This should slightly speed up the spin lookups.
- Improved the formatting of the interatomic data container list to help with debugging. The data is now presented with the format_table() function of the lib.text.table module.
- Data container hash cross-reference recreation. This is used by the model_selection, pipe.copy, results.read and state.load user functions. The cross-reference recreation is for both spin containers and interatomic data containers. The old pipe_control.mol_res_spin.metadata_update() and new pipe_control.interatomic.metadata_update() functions are called after loading a results or state file, or after a data pipe copy, so that the data structures properly cross-reference each other's hashes.
- Huge speed up of the interatomic data container handling. The pipe_control.interatomic.create_interatom(), return_interatom(), and return_interatom_list() functions now operate with the unique spin hashes rather than spin IDs. This avoids the expensive calls to the now deleted pipe_control.interatomic.id_match() function.
- Fixes for the copying of spin or interatomic data containers. The data_store.prototype methods Prototype.__clone__() and Prototype.__deepcopy__() will now regenerate the unique hash if a _generate_hash() function is present. This function has been added to SpinContainer and InteratomContainer.
- Changed the spin ID printout for the rdc.read user function to be the unique ID rather than the file ID. This is to help with debugging.
- Bug fix for the N_state_model.test_CaM_IQ_tensor_fit system test. Some of the RDC data contained RDCs between two @N spins rather than an @N and @H spin. This bug was only uncovered by the switch to the spin and interatomic data container hashes for fast lookups.
- Fix for the data store _back_compat_hook() method when creating interatomic data containers. The pipe_control.interatomic module define() function has been renamed to define_dipole_pair() for clarity and it now accepts two spin containers as arguments, overriding the spin ID arguments. This fixes the State.test_old_state_loading GUI test that was failing after the conversion to spin and interatomic data container hashes for fast lookups.
- Printout fix for the check_read_results_1_3() method of the Mf system tests.
- The interatomic_loop() function now uses the spin hash fast lookup table rather than spin IDs.
- Redesign of the create_spin() function of the pipe_control.mol_res_spin module. This function is the backend of the spin.create user function and is also used throughout relax. Instead of creating a single residue or spin, if only a name and not number is supplied, now multiple spins are created. If the residue name is supplied but not the residue number, now all residues matching the given name will have new spins created. For example creating the spin with the name 'NE1' and only specifying the residue name 'TRP', then all tryptophans in all molecules will have NE1 indole side-chain spins created. This makes the operation of the spin.create user function more logical for the user.
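In a relax script, the behaviour described above would look something like the following, with the argument names assumed here purely for illustration:
    # Create an NE1 spin in every tryptophan residue of every molecule, since only the
    # residue name, and not the residue number, is supplied.
    spin.create(spin_name='NE1', res_name='TRP')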
- Support for catching segfaults and other errors from Modelfree4. This allows for non-silent exiting from the Popen() class, and all signals are now reported via RelaxErrors (see the sketch below).
- Added the text of the LGPLv3 licence to the extern.sobol package.
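A hedged sketch of the kind of signal detection described in the Modelfree4 entry above, using plain subprocess on a POSIX system rather than the actual relax code:
    import signal
    import subprocess

    # Simulate a child process killed by SIGSEGV.
    proc = subprocess.Popen(["sh", "-c", "kill -SEGV $$"])
    proc.wait()
    if proc.returncode < 0:
        name = signal.Signals(-proc.returncode).name   # e.g. 'SIGSEGV' (Python >= 3.5)
        print("Child process terminated by signal %s." % name)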
- Added FSF recommended LGPLv3 licence notices to the top of all of the extern.sobol files. Excluded is the auto-generated test output file.
- Renamed the LGPLv3 file in the extern.sobol package to COPYING.LESSER.
- Updated all of the minfx project links from Gna! to the SourceForge site.
- Updated all of the relax deployment scripts for the Gna! shutdown. These now use the SourceForge sites for relax, minfx, and bmrblib instead. The svn to git conversion is also taken into account, and git is used to pull in the latest relax code from the SourceForge mirror.
- Converted a large number of Gna! links to point to the equivalent Web Archive URL. Most of these links should have had a snapshot made in the Internet Archive Wayback Machine.
- Added some hyperlinks to the external programs listed in the intro chapter of the user manual.
- Added the relaxation dispersion software support to the intro interfacing section.
- The prompt UI is no longer referenced as the 'primary' interface in the intro chapter of the manual.
- Added relaxation dispersion to the GUI features in the intro chapter.
- Added relaxation dispersion to the list of all data pipe types in the intro chapter.
- Improvements to the script UI text in the intro chapter.
- Linked to the internal Gna! mailing list archives for the multi-processor announcement.
- Added new sections to the infrastructure and development chapters about the Gna! shutdown. This is to warn that the information in these chapters of the manual is out of date.
- Updated the NESSY link to point to the new SourceForge location for the project.
- Changed the relax PDF manual link from Gna! to SourceForge for the HTML manual footers. This is in the latex2html configuration file so that the automatically created HTML manual pages point to a valid location.
- Changed the relax PDF manual link from Gna! to SourceForge for the HTML manual headers. This is in the LaTeX header, so that the automatically created HTML manual pages point to a valid location.
- Converted Gna! mail archive links in the manual to point to the copies at http://www.nmr-relax.com.
- Rewrote the core design of relax development section of the relax manual. The code design figure has also been updated. All of the content was still written for the relax 1.3 releases.
- Removed the dead Freshmeat/Freecode and Gmane text from the development chapter of the manual.
- Copyright notice and FSF compliant copyright notice script updates.
- Renamed the FSF Copyright Validator script to the acronym fsfcv.
- Split the FSF Copyright Validation script into a configuration file and an executable script. The configuration part of the script has been retained, but with all data stripped so as to provide a blank template for a new configuration file. The new mimetypes section has been converted into a variable, rather than manipulating the mimetypes Python module, so that the configuration script requires no Python imports.
- Converted the whole FSF copyright notice validation script code into a class. This is in preparation for a number of major changes to the script.
- The FSF copyright notice validation script now uses the argparse Python module. This is for more powerful command line argument processing. The new --blank-config option will now print out the blank configuration file, and the DEBUG variable has been replaced with the -d or --debug command line option (see the sketch below).
- Improved the documentation of the fsfcv configuration file.
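A rough sketch of the argparse layout implied by the options above, with the details assumed:
    import argparse

    parser = argparse.ArgumentParser(description="FSF copyright notice validation.")
    parser.add_argument("--blank-config", action="store_true",
                        help="Print out a blank configuration file and exit.")
    parser.add_argument("-d", "--debug", action="store_true",
                        help="Activate debugging printouts.")
    args = parser.parse_args([])   # an empty list so the sketch runs without CLI input
    print(args.blank_config, args.debug)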
- Implementation of the configuration file parsing. This uses modern Python import mechanisms to load the blank config first for default values, followed by the user supplied configuration file.
- Implemented the verbosity argument so per-file messages are only printed when activated.
- The FSF Copyright Validation script will now add the current directory as a repository if none is supplied. This allows the script to be executed without a configuration file.
- New command line option for the FSF copyright validation script to only check for missing notices. This will only print out files with missing copyright notices. Files marked as valid may nevertheless have incorrect notices.
- The capitalisation of "Copyright (C)" no longer matters for the FSF Copyright Validation script. This is for the copyright notices within the file. The configuration file has been updated for the lower case copyright notices (false positives).
- Reactivated the user supplied binary mimetypes for the FSF Copyright Validation script.
- More robust reading of copyright notices from binary files in the FSF Copyright Validation script. The reading of the text file will now return an empty list if a UnicodeDecodeError occurs.
- Updated the fsfcv configuration file for the fsfcv script and configuration file itself.
- Fixes for the extern/numpy_future.py copyright notices.
- Support for multiple additional years in the FSF Copyright Validation script.
- Added a progress meter, a simple spinner, to the FSF Copyright Validation script. This is taken directly from lib.text.progress, and the output is sent to STDERR. All other script output is now sent to STDOUT. It is only active if the verbose flag is off.
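A tiny illustration of such a spinner-style progress meter writing to STDERR (assumed behaviour, heavily simplified):
    import itertools
    import sys

    def progress(total):
        chars = itertools.cycle("|/-\\")
        for i in range(total):
            sys.stderr.write("\r%s %i files checked." % (next(chars), i + 1))
            sys.stderr.flush()
        sys.stderr.write("\n")

    progress(100)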
- Separated the missing copyright notices from non-valid copyright notices in the fsfcv script. These are now counted separately and a different message printed out for the missing notice case.
- Support added to the fsfcv script for handling content not within a version control repository. The untracked and non-valid copyright counting is turned off in this case.
- Improved the feedback from the progress meter in the fsfcv script. This now says what the numbers are, using text such as "X files checked.".
- Activated the link option for the epydoc API documentation. This allows for the navigation link to point to "/" rather than "http://www.nmr-relax.com". This is for SSL and https:// preparations, so that the http://www.nmr-relax.com part of the URL is not present in the local links.
- Shifted the epydoc API documentation copyright notice insertion into the scons script. This notice was previously hardcoded into the devel_scripts/google_analytics.js script, as that is the GPLv3+ copyright notice of that script with the date of 2012. Instead, the copyright notice in the Google analytics script is now skipped, and the correct FDLv1.3 copyright notice with the current year is programmatically inserted via the scons/manuals.py script.
- Adding a new format NMRPipe SeriesTab file which gives errors.
- Added the Relax_disp.test_bug_seriestab_format system test to check for the new format of NMRPipe SeriesTab.
- Changes to lib.spectrum.nmrpipe to handle NMRPipe SeriesTab files when assignment has not been performed, auto-detecting the multiplier column.
- Fix to allow renaming of the SeriesTab spectrum ID.
- Fix for the help section in the grace2images.py file. It was unclear how to get different types of images.
- Extended the Relax_disp.test_bug_seriestab_format system test to include the reading of several SeriesTab files and selecting the intensity column.
- Modified read_seriestab() in lib.spectrum.nmrpipe to allow for selecting the intensity column.
- Allow int_col to be a list, so that a proper warning can be given.
- Initial attempt at running a Docker image with gedit. This is a first step towards running OpenDX later.
- Simplification of the Dockerfile.
- Removing the Dockerfile for gedit.
- Adding a Dockerfile which makes it easy to build an Ubuntu image and launch OpenDX. This is very useful on a Mac.
- If the current directory is mounted as the home directory, then dx.map files work.
- Improved the help for the XQuartz settings needed when running Docker on a Mac and accessing the OpenDX GUI.
- Renamed the extern.sobol.sobol_lib-not_tested module to sobol_lib_untested. This is in preparation for updating to the newest upstream code.
- Updated the extern.sobol package to the latest upstream code. This is the new MIT licensed code (which was previously LGPL licensed). The licence text has been modified to suit the licence change, and the LGPL copyright notices dropped from all files. The Python 3 updates to the relax version of the package have been transferred into the new code.
- Added the MIT licence with copyright notices to the top of all files. The origin of all code was traced back through the MATLAB sources, FORTRAN90 sources, and FORTRAN77 sources. The original f77 code did not contain any shared lines of code with the f90 code, so no copyright statements for Bennett Fox were added. Comments were added to each function to document the history of all of the code.
- Easier reading of the Dockerfile.
- Extended the help section on running a Docker container, so it is now also possible to run a bash session in the container.
- Fix for the relax deployment script for Ubuntu. The version variables were wrongly set.
- When running Docker with OpenDX, the current working directory is now mounted on $HOME/work instead of $HOME.
- Made the ultimate Dockerfile, which packages relax and OpenDX together. Everything can now be packed together, making it easy to ship a relax Docker image that runs 'everywhere'.
- Letting the default intensity column of SeriesTab be 'VOL'. This is the column SeriesTab uses. The 'HEIGHT' column is copied in from the nmrDraw test.tab file, and does not represent the measurement.
- Fixes to sconstruct when building with Python 3 and SCons. The current sconstruct caused a "SyntaxError: invalid syntax" when using the backquote character (`) in the file (see the example below).
- Fixes to sconstruct when cleaning with Python 3 and SCons. The fix is to print the representation of the list.
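For context, the backquote syntax was a Python 2 only shorthand for repr() and was removed in Python 3, hence the SyntaxError; for example:
    x = [1, 2, 3]
    print(repr(x))   # portable replacement for the Python 2 only `x` backquote form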
- Removed the Oxygen Icon directory from the skipped directory list of the fsfcv script.
- Added copyright notices for every Oxygen Icon.
- Small fix for the FSF Copyright Validation script (fsfcv).
- Capitalised the copyright symbol in the Sobol' external library copyright notices. This is for easier handling by the FSF Copyright Validation script.
- Fixes for the fsfcv script configuration for the Sobol' external package.
- The alternative committer names are now better handled in the fsfcv script. The committers' names in the VC logs are now also translated from the alternative to the standard name.
- Correct spelling of Troels Schwarz-Linnet in the copyright notices.
- Troels' name is now handled differently in the fsfcv script configuration file. The text "Troels E. Linnet" is now the alternative name, and "Troels Schwarz-Linnet" the standard name.
- MS Windows support for the FSF Copyright Validation script.
- Cut and paste error fix for the Oxygen Icon licensing text in the README files. As stated in the COPYING file, the licence is LGPLv3+, not GPLv3+.
- Updated the general relax copyright notice for 2018. This last copyright year is now stored as info.copyright_final_year.
- Clarified the GPLv3+ licensing in the relax introduction string.
- Manual: Addition of a GPLv3+ copyright notice to a second title page.
- Another Oxygen Icon licensing text fix in the README files.
- Improved the LGPLv3+ licensing text for the base directory of the Oxygen Icons.
- Manual: Added the LGPLv3+ copyright notice for the Oxygen Icons to the second title page.
- Documentation for the copyright and public domain notices for 3D structures. This explains why the strict format text files are not modified to include notices, hence the notices are placed in the README file, and it details the public domain nature of the Protein Data Bank repository.
- Updated the script for Docker images.
- Adding Dockerfile for Ubuntu 18.04 LTS and development on Windows.
- Fix for comparison of arrays to None. The use of x == None should be x is None.
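A short illustration of why the identity check is needed for numpy objects:
    import numpy

    x = numpy.zeros(3)
    print(x is None)    # False - the intended check, always a single Boolean
    # 'x == None' instead performs an element-wise comparison, returning an array
    # (and is deprecated by numpy), so it cannot be used as a truth value.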
- Initial commit of the Travis CI configuration.
- Setting sys.exit(1) in dep_check, to make Travis CI fail the build on error.
- Travis CI: Adding minfx to pip requirements file.
- Travis CI: Fixing path to minfx for pip to install.
- Travis CI: Adding PYTHON_INCLUDE_DIR.
- Travis CI: Fix for getting Python.h.
- Travis CI: Again trying to fix the export variable to find Python.h.
- Travis CI: Adding a debug echo of the path to Python.h.
- Travis CI: Moving the export to .travis.yml.
- Travis CI: Adding the unit tests to Travis.
- Travis CI: Fix for executing relax from current folder.
- Travis CI: Removing scons, since it should already be part of Compilers & Build toolchain in Trusty images.
- Travis CI: Adding print of relax information.
- Travis CI: Adding more packages to pip requirements.
- Travis CI: Better reading of tests performed.
- Travis CI config: Adding additional Python version to Travis and cleaning up.
- Travis CI config: Adding Python 2.6 and 3.5 to the test matrix.
- Travis CI config: Specific testing for Python 2.6.
- Travis CI config: Trying to get pip conf file.
- Travis CI config: Trying to add svwh.dl.sourceforge.net to trusted pip.
- Travis CI config: Adding importlib for Python 2.6.
- Travis CI config: Trying to add subprocess for Python 2.6.
- Travis CI: Removed matplotlib from Python 2.6.
- Travis CI: Remove test of Python 2.6.
- Renamed the README file to markdown.
- Added the Travis build shield to the README.
- Adding system tests to be executed with Travis.
- Creation of a large set of system tests for implementing the frame_order.decompose user function. The tests have been copied from Frame_order.test_distribute_* and include: Frame_order.test_decompose_free_rotor_z_axis, Frame_order.test_decompose_iso_cone_z_axis, Frame_order.test_decompose_iso_cone_xz_plane_tilt, Frame_order.test_decompose_iso_cone_free_rotor_z_axis, Frame_order.test_decompose_iso_cone_torsionless_z_axis, Frame_order.test_decompose_pseudo_ellipse_xz_plane_tilt, Frame_order.test_decompose_pseudo_ellipse_z_axis, Frame_order.test_decompose_pseudo_ellipse_free_rotor_z_axis, Frame_order.test_decompose_pseudo_ellipse_torsionless_z_axis, and Frame_order.test_decompose_rotor_z_axis.
- Creation of the frame_order.decompose user function front end.
- Implementation of the frame_order.decompose user function backend.
- SCons: Fixes for the manual compilation. The relax manual cannot be compiled if one of the sys.path values contains a docs/ directory. Instead of appending the relax docs/ path to sys.path, it is now prepended. The documentation Python module __all__ lists have also been filled out.
- Renamed the relax default repository version from "repository checkout" to "repository commit". This general text is more appropriate for a git repository.
- Manual: Removed a Gna! reference in the intro chapter.
- Manual: Alias creation for the relax mailing lists. This is to allow for a centralised place for changing the mailing list name, if any changes occur to the mailing list in the future.
- Manual, Ch. Infrastructure: Converted the Gna! shutdown note into a new 'History' section. A lot of the relax free software/open source infrastructure history is now documented.
- Manual, Ch. Infrastructure: Removed the Gna! information from the relax website section.
- Manual, Ch. Infrastructure: Updated the relax mailing list information from Gna! to SourceForge. This is now all through LaTeX aliases, so infrastructure changes should be easier to deal with in the future.
- Manual, Ch. Infrastructure: Abstracted the bug reporting section using aliasing. This removes all Gna! specific links from the chapter, shifting them to SourceForge links in the main relax.tex file.
- Manual, Ch. Infrastructure: Abstract the relax repository section and switch from svn to git. This removes all Gna! specific links from the chapter, shifting them to SourceForge links in the main relax.tex file.
- Manual, Ch. Infrastructure: Removal of the news section, as this is not supported on SourceForge.
- Manual, Ch. Infrastructure: Abstract the distribution archive section and switch from svn to git. This removes all Gna! specific links from the chapter, shifting them to SourceForge links in the main relax.tex file.
- Manual, Ch. Installation: Abstraction of the bug tracker links. This replaces the dead Gna! links with the current SourceForge links.
- Manual, Ch. N-state model: Abstraction of the relax-users mailing list.
- Manual, Ch. Dispersion: Dead link and mailing list fixes. The mailing lists are now abstracted using aliases, some old dead links have been removed, and some Gna! support request links have been converted to Internet Archive links.
- Manual, Ch. Development: Removal of the note about the Gna! shutdown. The chapter is about to be updated for the switch to SourceForge, so this note is no longer needed.
- Manual, Ch. Development: Aliases for the mailing lists and addition of a cross reference.
- Manual, Ch. Development: Converted the version control section from SVN to git.
- Manual, Ch. Development: Minor edits to the coding conventions section.
- Adding exit codes for the unit and system tests. This is for Travis to fail if these fail. In Windows these can be seen with: echo Exit Code is %errorlevel%
- Manual, Ch. Development: Removal of the section describing creating and submitting patches.
- Manual, Ch. Development: Section rearrangement in preparation for new text.
- Manual, Ch. Development: svn to git and infrastructure abstraction in the Committers section. All references to svn have been changed to git, and the Gna! infrastructure has been abstracted to aliases in the main relax.tex file so that future infrastructure changes are easier to deal with. In addition, many edits of the text have been made.
- Manual, Ch. Development: Expansion of the relax repository section.
- Manual, Ch. Development: Minor edits to the relax repository git mirror section.
- Manual, Ch. Development: Editing of the source code repository section.
- Manual, Ch. Development: Added links to the web interfaces for all relax mirror sites.
- Fixing the return value of execution of unit and system tests.
- Manual, Ch. Development: New subsection and editing of the relax repository section. An initial section describing git version control and listing all relax repositories has been added.
- Manual, Ch. Infrastructure: Updated the relax repository section to include the website and demo.
- Manual, Ch. Development: Complete rewrite of the 'Submitting changes to the relax project' section. This converts the Subversion instructions to git, and switches from Gna! to the aliased primary relax infrastructure.
- Manual, Ch. Development: Converted the SCons section from SVN to git, and removed Gna! references.
- Manual, Ch. Development: Major editing of the 'Core design of relax' section. This section is now significantly improved. There was a lot of old information, some dating back to the pre-relax 3.0 designs. And a lot of new information has been added to expand on all of the descriptions.
- Manual, Ch. Development: Minor editing of the tracker section.
- Manual, Ch. Development: Updated the very out of date links section. This was incredibly out of date. The links have been updated to include everything listed at http://www.nmr-relax.com/links.html.
- Manual, Index: Removed the no longer relevant svnmerge.py entry.
- Simplify the Travis file.
- Added Travis CI support for Python 3.7 and OSX. Adding notifications from builds att travis-ci.com to nmr-relax-devel att lists.sourceforge.net. This is after inspiration from https://github.com/WeblateOrg/translation-finder/blob/master/.travis.yml. Windows cannot be added due to an unknown compile error.
- Fixing a bug for running scons. This happens after a pip install -U numpy, where numpy is upgraded from 13.3 to 16.1.0. More to read here: https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.set_printoptions.html; https://stackoverflow.com/questions/1987694/how-to-print-the-full-numpy-array; https://github.com/numpy/numpy/pull/12353.
- Fix for building on Mac OSX with Python 3.7. A possible solution was found here: https://stackoverflow.com/questions/31019854/typeerror-cant-use-a-string-pattern-on-a-bytes-like-object-in-re-findall.
- Adding the sending of mails to nmr-relax-devel att lists.sourceforge.net. This introduces a spamming problem: everyone who forks this project and has Travis set up for their user will spam the development mailing list. To limit this, there are options in Travis: https://docs.travis-ci.com/user/notifications/; https://docs.travis-ci.com/user/conditional-builds-stages-jobs. Introducing a condition like "if: branch = master" seems not to be implemented yet: https://github.com/travis-ci/travis-ci/issues/1405. Travis has an internal ticket to track this feature request.
- SCons: Git support for the scons distribution targets. This was previously only set up for Subversion.
- FSF Copyright Validation script: Support for tracking files renamed in later repositories. In this case, a file rename in the current git repository would not allow the file to be found in the SVN archive repository. The history of the later repository is now used to find all file renames after the end of the earlier repository. False git history is also correctly handled.
- FSF Copyright Validation script: Bug fixes for recording the first VC commit as copyright ownership.
- FSF Copyright Validation configuration: Updates for recent files and the script bug fixes. A lot of false git history needed to be identified and blocked, and a lot of README files added for copyright identification needed to be manually included.
- Python multiversion test suite script: Added Python 3.6 and 3.7 to the list to test.
- Travis CI config: Minimise mailing list messages with successes only reported after fixing failures.
- Test suite: Fix for the running of multiple test suite categories. Now all test categories will be run and the execution will not be terminated at the end of the category containing the first error/failure.
- Activating MS Windows Python 3.7 32-bit for Travis (64-bit does not work). Adding a Travis option for upgrading pip packages in one of the builds. This is to try to have pip packages where the version numbers are normal/average and then where the packages have been upgraded to the newest. Adding a check for Python 3.6, since this is the standard version in Ubuntu 16.04 and 18.04.
- Added Python as overall language to travis.
- System tests: Relax_disp.test_paul_schanda_nov_2015 is now skipped when Scipy is missing.
- Devel scripts: Improved logic for finding Python.h in the manual C module building script.
- SCons: Improved logic for finding Python.h for building the C modules.
- Python multiversion test suite script: Removal of Python 2.3 and 2.4. These Python versions have not been supported since the first usage of "from __future__ import absolute_import" back in 2013.
- Test suite: Graceful failure of the GUI tests when the wx app cannot be set up. This currently occurs when using wxPython-Phoenix.
- Travis CI config: Adding Python 3.6 and adding a test of mpirun.
- .gitignore: Ignoring Windows C extensions.
- Travis CI config: Trying to add MPI for Windows. It does not seem to work.
- Travis CI config: Trying MPI on Windows does not work: "The processor type 'mpi4py' is not supported."
- GUI: Fix for a wxPython 2.9 issue found via the Relax_disp.test_bug_missing_replicates GUI test. The spectrum ID wx.ListCtrl element cannot be queried for item 0 when empty.
- Development scripts: Rewrote the python_seek.py script to report all import errors.
- Creation of a large set of system tests for expanding the frame_order.decompose user function. The tests have been copied from Frame_order.test_decompose_* and modified to include the new total, reverse, and mirror user function keywords. The tests include: Frame_order.test_decompose2_free_rotor_z_axis, Frame_order.test_decompose2_iso_cone_z_axis, Frame_order.test_decompose2_iso_cone_xz_plane_tilt, Frame_order.test_decompose2_iso_cone_free_rotor_z_axis, Frame_order.test_decompose2_iso_cone_torsionless_z_axis, Frame_order.test_decompose2_pseudo_ellipse_xz_plane_tilt, Frame_order.test_decompose2_pseudo_ellipse_z_axis, Frame_order.test_decompose2_pseudo_ellipse_free_rotor_z_axis, Frame_order.test_decompose2_pseudo_ellipse_torsionless_z_axis, and Frame_order.test_decompose2_rotor_z_axis.
- User function frame_order.decompose: Implementation of the total, reverse and mirror params. This allows a fixed number of structures to be generated over the distribution, for the model order to be reversed, and for the models to step from the negative angle to the positive angle and then return to the negative angle. The original code has been simplified by switching from numpy.arange() to numpy.linspace() for generating the range of angles, as linspace() is far more reliable than arange(), which has end point instability issues (see the example below).
- Creation of the Test_object.test_add_model unit test. This is within the _lib._structure._internal.test_object test module. The aim is to reveal issues with the model number accounting within the internal structural object.
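A small illustration of the arange() end point issue mentioned in the frame_order.decompose entry above, and why linspace() is preferred:
    import numpy

    # linspace() always returns exactly the requested number of points, including both ends.
    angles = numpy.linspace(-numpy.pi, numpy.pi, 5)
    print(len(angles), angles[0], angles[-1])    # 5 -3.141592653589793 3.141592653589793

    # arange() with a floating point step may or may not include the end point, depending
    # on rounding, so the number of generated angles is not guaranteed.
    print(numpy.arange(0.0, 1.0, 0.1))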
- System test: Addition of Structure.test_add_secondary_structure. This will be used to quickly implement the new structure.add_helix and structure.add_sheet user functions.
- User function: Implementation of structure.add_helix for defining alpha helices.
- User function: Implementation of structure.add_sheet for defining beta sheets.
- Library: Implementation of the lib.arg_check.is_bool_or_bool_list() function. This is to allow for either Boolean values or lists of Booleans.
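A hedged sketch of such a check, not the exact relax implementation:
    def is_bool_or_bool_list(arg):
        # Accept a single Boolean.
        if isinstance(arg, bool):
            return True
        # Or a non-empty list consisting solely of Booleans.
        return isinstance(arg, list) and len(arg) > 0 and all(isinstance(x, bool) for x in arg)

    print(is_bool_or_bool_list(True))            # True
    print(is_bool_or_bool_list([True, False]))   # True
    print(is_bool_or_bool_list([1, 0]))          # False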
- User functions: Registration of the bool_or_bool_list argument type.
- User function frame_order.decompose: The argument reverse can now be a list of Booleans. This allows different modes to be selectively reversed.
- User function structure.superimpose: Speed up of the 'fit to first' algorithm. The translation and rotation are now skipped for the first structure (as the translation is zero and the rotation matrix is the identity matrix).
- User function structure.superimpose: Improved the documentation of the models arg.
- RelaxErrors: Implementation of a number of new error types. This includes the RelaxBoolListBoolError, RelaxNoneBoolError, RelaxNoneBoolListBoolError, and RelaxNoneTupleNumError objects.
- Unit tests: Complete checking of the lib.arg_check module.
- lib.arg_check module: Missing RelaxError import for the new is_bool_or_bool_list() function. The lib.error import statement has also been spread across multiple lines and alphabetically sorted.
- lib.arg_check module: Protection of the functions against future numpy deprecations. The code arg == None will not be supported by numpy in the future, if the arg being checked is a numpy object. Instead the arg is None syntax must be used.
- RelaxErrors: Bug fix for the error message generation for list types. The simple_types and list_types variables are class rather than instance variables, but these were being unintentionally modified by the BaseArgError base class __init__() method.
- lib.compat module: Implementation of the Python version independent from_iterable() function. This will be used to avoid directly using itertools.chain.from_iterable(), which was only introduced in Python 2.6. For Python ≥ 2.6, the itertools.chain.from_iterable() function is used, otherwise the roughly equivalent lib.compat.from_iterable_pre_2_6() function is used (sketched below).
- lib.arg_check module: Redesign of the is_float_object() function to handle any data input. Previously the function could only handle max rank-2 Python lists (lists of lists) and max rank-2 numpy arrays, and only the first dimensionality was being checked. Now any rank of list or numpy array is correctly handled.
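A simplified stand-in for the version-independent from_iterable() described in the lib.compat entry above (the real code lives in lib.compat):
    import sys

    if sys.version_info >= (2, 6):
        from itertools import chain
        from_iterable = chain.from_iterable
    else:
        def from_iterable(iterables):
            # Rough pre-2.6 equivalent of itertools.chain.from_iterable().
            for iterable in iterables:
                for element in iterable:
                    yield element

    print(list(from_iterable([[1, 2], [3]])))   # [1, 2, 3]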
- lib.arg_check module: Addition of the can_be_none argument to the is_bool() function.
- lib.arg_check module: Documentation fixes for the is_*() functions.
- lib.arg_check module: Fix for the wrong RelaxErrors being used in the is_num_tuple() function.
- lib.arg_check module: Fix for missing RelaxError imports for the is_list() function.
- lib.arg_check module: Bug fix, Boolean or empty lists no longer evaluate as true in is_num_tuple().
- lib.arg_check module: Bug fix, Boolean or empty lists no longer evaluate as true in is_num_list().
- lib.arg_check module: Simplification of the is_list() function.
- lib.arg_check module: Fixes to and simplification of the is_int_list() function. Boolean lists no longer evaluate as true.
- RelaxErrors: Addition of more error objects in preparation for a new lib.arg_check function.
- RelaxErrors: Expansion of the functionality of the BaseArgError base class. The docstring now documents the arguments. The dim and rank arguments have been added to allow for more control over the reported message for array-type objects. And the can_be_none argument has been added to append ', or None' to the message, negating the need for the RelaxNone*Error objects. For formatting the lists used in the BaseArgError class, the new function human_readable_list() has been added to the lib.text.string module.
- lib.arg_check module: Creation of the generic validate_arg() function. A large number of associated unit tests have been added to test all combinations. The _lib.test_arg_check unit tests include: Test_arg_check.test_validate_arg_all_basic_types, Test_arg_check.test_validate_arg_all_basic_types_and_all_containers, Test_arg_check.test_validate_arg_all_containers, Test_arg_check.test_validate_arg_bool, Test_arg_check.test_validate_arg_bool_list, Test_arg_check.test_validate_arg_bool_list_rank2, Test_arg_check.test_validate_arg_bool_or_bool_list, Test_arg_check.test_validate_arg_float, Test_arg_check.test_validate_arg_float_list, Test_arg_check.test_validate_arg_float_list_rank2, Test_arg_check.test_validate_arg_float_or_float_list, Test_arg_check.test_validate_arg_func, Test_arg_check.test_validate_arg_int, Test_arg_check.test_validate_arg_int_list, Test_arg_check.test_validate_arg_int_list_rank2, Test_arg_check.test_validate_arg_int_or_int_list, Test_arg_check.test_validate_arg_list, Test_arg_check.test_validate_arg_list_or_numpy_array, Test_arg_check.test_validate_arg_number, Test_arg_check.test_validate_arg_number_array_rank1, Test_arg_check.test_validate_arg_number_array_rank2, Test_arg_check.test_validate_arg_number_array_rank3, Test_arg_check.test_validate_arg_number_list, Test_arg_check.test_validate_arg_number_list_rank2, Test_arg_check.test_validate_arg_number_list_rank3, Test_arg_check.test_validate_arg_number_numpy_array_rank1, Test_arg_check.test_validate_arg_number_numpy_array_rank2, Test_arg_check.test_validate_arg_number_numpy_array_rank3, Test_arg_check.test_validate_arg_number_or_number_tuple, Test_arg_check.test_validate_arg_number_tuple, Test_arg_check.test_validate_arg_number_tuple_rank2, Test_arg_check.test_validate_arg_number_tuple_rank3, Test_arg_check.test_validate_arg_numpy_float_array, Test_arg_check.test_validate_arg_numpy_float_matrix, Test_arg_check.test_validate_arg_numpy_float_rank3, Test_arg_check.test_validate_arg_numpy_int_array, Test_arg_check.test_validate_arg_numpy_int_matrix, Test_arg_check.test_validate_arg_numpy_int_rank3, Test_arg_check.test_validate_arg_str, Test_arg_check.test_validate_arg_str_list, Test_arg_check.test_validate_arg_str_list_rank2, Test_arg_check.test_validate_arg_str_or_file_object, Test_arg_check.test_validate_arg_str_or_str_list, and Test_arg_check.test_validate_arg_tuple.
- lib.arg_check module: Fixes for handling empty numpy arrays. This is for the is_float_array() and is_float_matrix() functions.
- lib.arg_check module: Removal of the is_list_val_or_list_of_list_val() function. This was never completely implemented, and was only used by the point argument of the dx.map user function. The user function py_type "list_val_or_list_of_list_val" value has been renamed to 'num_list_or_num_list_of_lists' and the call to is_list_val_or_list_of_list_val() replaced by a call to validate_arg(). The dim argument for the point argument of the dx.map user function has been modified to match the validate_arg() function syntax.
- User function definition redesign, increasing the argument setting flexibility. The py_type argument definition has been replaced by basic_types, container_types, and sometimes dim. This matches the new validate_arg() function in the lib.arg_check module and allows for far greater flexibility in defining a parameter together with more extensive parameter checking than previously possible.
- specific_analyses.consistency_tests.api module: Missing RelaxWarning import.
- User function definitions: Support for checking file lists (from arg_type='file sel multi'). The new RelaxStrFileListStrFileError object has been created for this check (and the RelaxStrListError also added for completeness).
- User function definitions: Overrides for arguments with arg_type set. The arg_type argument is now fully documented in the user_functions.objects module Uf_container.add_keyarg() function docstring. The value is now checked, and a few unimplemented values have been eliminated. Overrides for the dim, basic_types, and container_types are now set for almost all arguments with arg_type set. And checks that these are not set in the user function definition have been added.
- system.cd user function: Removal of the incorrect wiz_filesel_style argument in the definition.
- User function definitions: Split of the 'file sel' arg_type value into readable and writable. The arg_type value is now either 'file sel read' or 'file sel write'. The 'file sel multi' value has also been split into 'file sel multi read' and 'file sel multi write'. This is used for checking if file objects supplied to the user function are correctly readable or writable. And it is used in the GUI to automatically set the file selection dialog style. Hence the redundant wiz_filesel_style argument has been removed from the user function definitions. The is_filetype_readable(), is_filetype_rw(), and is_filetype_writable() functions have been added to the lib.check_types module to check the file objects from within the lib.arg_check module validate_arg() function.
argument has been removed from the user function definitions. The is_filetype_readable(), is_filetype_rw(), and is_filetype_writable() functions have been added to the lib.check_types module to check the file objects from within the lib.arg_check module validate_arg() function. - Test suite: Zero times reported on MS Windows with
--time
no longer have a negative sign. python_seek.py
development script: Added _tkinter to theall
list for checking the Python install.- Test suite: Unit test times displayed with
--time
are now in milliseconds. - Python
tempfile.mktemp()
: Converted all usage of the function totempfile.mkstemp()
. Thetempfile.mktemp()
function was depreciated in Python 2.3. According to the Python documentation: "A historical way to create temporary files was to first generate a file name with the mktemp() function and then create a file using this name. Unfortunately this is not secure, because a different process may create a file with this name in the time between the call to mktemp() and the subsequent attempt to create the file by the first process. The solution is to combine the two steps and create the file immediately. This approach is used by mkstemp() and the other functions described above.". The Travis CI testing system was sometimes failing on files created withmktemp()
, so hopefullymkstemp()
will alleviate the issue.
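A short example of the mkstemp() pattern, which creates and opens the temporary file atomically so no other process can race for the name:
    import os
    import tempfile

    fd, path = tempfile.mkstemp(suffix=".bz2")
    try:
        with os.fdopen(fd, "wb") as handle:
            handle.write(b"temporary data")
    finally:
        os.remove(path)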
Bugfixes
- Bug fix for the pcs.structural_noise user function. The user function now uses a real multivariate normal distribution for sampling atomic positions. The previous random unit vector + univariate Gaussian sampling does not correctly reproduce the multivariate normal distribution.
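For illustration, sampling an atomic position with a true multivariate normal distribution can be done as follows (a sketch, not the relax implementation):
    import numpy

    pos = numpy.array([1.0, 2.0, 3.0])       # mean atomic position
    cov = numpy.diag([0.01, 0.01, 0.01])     # isotropic positional variance
    samples = numpy.random.multivariate_normal(pos, cov, size=1000)
    print(samples.shape)                     # (1000, 3)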
- Python 3 bugfix for the Relax_disp.test_bug_24601_r2eff_missing_data system test. Tab characters rather than spaces made the system test script unloadable in Python 3.
- Python 3 fixes for the gui.misc module. This is for text formatting using the "x"*num logic. In Python 3, num is often a float, so this does not work and an explicit int() function call is required.
- Python 3 fix for the combo list sequence elements. Comparison of integers to values of None is not allowed.
- Bug fix for the Tools→System information GUI menu item. The user function has been renamed from sys_info to system.sys_info.
- Python ≥ 3.4 fix by removing an unused types.ListType import.
- Bug fix allowing for spaces in file paths in the GUI open_file() function. This is used by the file preview buttons and the results viewer window.
- Minor fixes for the relaxation curve-fitting sample script.
- Another small fix for trp indole 15N spins in the relaxation curve-fitting sample script.
- Fix for the relaxation curve-fitting auto-analysis for when the data pipe name is incorrect. This was simply a missing import.
- Bug fix for the relaxation dispersion GUI analysis when specifying replicated spectra. This is for the AttributeError when the replicated spectra are specified via the spectrum list GUI element rather than the peak intensity loading wizard. The GUI test Relax_disp.test_bug_missing_replicates now passes.
- Bug fix by redesigning the GUI pipe editor pop up menu. The menu now uses IDs to associate menu items with the correct method to call. Previously all menu entries were calling the method of the last menu entry, which was in most cases the pipe switching method. As the pipe deletion method is now properly exposed, the Question dialog was increased in size to be able to see all the text.
- MS Windows fixes for running relax from git and git-svn repositories. Multiple commands on MS Windows need to be separated by && and not ;.
- Bug fix: Removal of '\u' escape sequences from the latex_mf_table.py test suite script docstring. This fixes Bug #1 reported on the new SourceForge infrastructure, and allows the script to be used with Python 3.
- Bug fix for the model number tracking with the addition of new models. If a single model is present without a model number, this is now correctly renumbered.
Links
For reference, the announcement for this release can also be found at the following links:
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 4.0 series
relax 4.0.3
Description
This is a minor feature and bugfix release. The structure.rmsd user function can now calculate per-atom RMSDs, structure superimposition is now orders of magnitude faster, the relax deployment scripts have been improved and expanded to cover other GNU/Linux systems, OpenMPI system testing scripts have been added, and the relax information printout has been improved. Bugfixes include that the structure.rmsd user function now correctly calculates the RMSD value, and the inversion recovery relaxation curve-fitting equations are now correct.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 4.0.3
(28 October 2016, from /trunk)
http://svn.gna.org/svn/relax/tags/4.0.3
Features
- Per-atom RMSD calculation by the structure.rmsd user function.
- Much faster superimposition of structures.
- More relax deployment scripts for Google Cloud for different GNU/Linux distributions.
- Addition of OpenMPI testing scripts.
- Improved relax information printout.
Changes
- Addition of the atomic boolean argument to the structure.rmsd user function front end. This will be used to enable the calculation of per-atom RMSDs.
- Created the Structure.test_rmsd_spins system test for checking the per-atom RMSD calculation. This is for the new option in the structure.rmsd user function.
- Implemented the per-atom RMSD calculation for the structure.rmsd user function.
- Fixes for the Relax_fit.test_inversion_recovery system test. The wrong equation was used in the calc.py Python script used to calculate the peak intensities in the test_suite/shared_data/curve_fitting/inversion_recovery/*.list files. The script and Sparky files have been updated, and the I0 value in the script and system test has been changed from 30 to -30, so that the curves start as negative.
- Huge speed up for the superimposition of a large number of structures. The internal structural object validate_models() method was being called once for each structure via the selection() method prior to performing the translations, and once prior to performing the rotations, for creating the atomic selection object. This resulted in the _translate() internal structural object method, which converts all input data to formatted strings, being called hundreds of millions of times. Therefore the selection() method no longer calls validate_models(). This may speed up quite a number of internal structure object methods when large numbers of structures are present.
- Copying the Ubuntu deployment script to a Fedora version. This is a response to bug #25084.
- Moving from Fedora to Red Hat, as Google Cloud does not offer Fedora images.
- Adding a deploy script for RHEL 6.
- Added an initial script for testing OpenMPI.
- Making a Red Hat 6 deploy script which will upgrade Python from 2.6 to 2.7. The normal installation through yum will have Python 2.6 and only numpy 2.4. This is not good.
- Moved deploy scripts. There would probably have to be a deploy script for each system.
- Renamed the Ubuntu deploy script.
- Adding scripts to test OpenMPI installation and deploy in redhat.
- Change to pip install command, to source Python first.
- Adding installation of matplotlib to Redhat 6, Python 2.7.
- More changing to deploy scripts.
- Small change to deploy script to build wxPython.
- More changing to deployment scripts.
- Moving test script of OpenMPI to bash version.
- Made a copy of OpenMPI test script for tcsh shell.
- Again small changes to deployment scripts.
- Changed more to OpenMPI script.
- Altering test OpenMPI script to an alias function.
- Change to bash OpenMPI test script.
- Last changes to testing of OpenMPI.
- Small change to the bash OpenMPI test script.
- Back to a function in the bash script for OpenMPI.
- Made a deployment script for CentOS 6.
- SCons on CentOS finds python2.6 instead of python2.7.
- Trying to make the script for tcsh and OpenMPI work on all versions of tcsh.
- Added the MPI version information to the mpi4py information printout.
- Windows scons C module compilation now defaults to 32-bit. This is because the default Python downloads are 32-bit. And many libraries (e.g. numpy and scipy) are only pre-compiled as 32-bit. Hence a 64-bit relax build on Windows will require a lot of custom compilation that most users will never do.
- Added support in the information printout for Windows versions of the file program. This enables the C modules to be identified as 32 or 64-bit, if the file program is installed.
Bugfixes
- Fix for bug #24723. This is the bug that the mean RMSD from the structure.rmsd user function is incorrectly calculated - it should be a quadratic mean. The quadratic mean and quadratic standard deviation are now correctly calculated, and the structure.test_rmsd, structure.test_rmsd_molecules, and structure.test_rmsd_ubi system tests have been updated for the fix.
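For clarity, the quadratic mean (root mean square) of a set of RMSD values differs from the arithmetic mean, for example:
    import numpy

    rmsds = numpy.array([1.0, 2.0, 3.0])
    print(numpy.mean(rmsds))                    # 2.0 (arithmetic mean)
    print(numpy.sqrt(numpy.mean(rmsds**2)))     # ~2.16 (quadratic mean)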
- Bug fix for the inversion recovery equations sr #3345. The inversion recovery experiment was incorrectly implemented as I(t) = I∞ - I0·exp(-R1·t), whereas it should be I(t) = I∞ - (I∞ - I0)·exp(-R1·t).
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Mail Archive
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 4.0.2
Description
This is a minor feature and bugfix release. The new user functions system.cd and system.pwd have been added to allow the working directory to be changed and displayed. The time and sys_info user functions have been renamed to system.time and system.sys_info. The structure.delete_ss user function has been created to remove the helix and sheet information from the internal structural object. As for bugs, the R2eff dispersion model can now handle missing peaks in subsets of spectra, and the structure.read_pdb user function can now handle multiple structures and multiple models with the merge flag set.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 4.0.2
(13 May 2016, from /trunk)
http://svn.gna.org/svn/relax/tags/4.0.2
Features
- Addition of the new user functions system.cd and system.pwd to allow the working directory to be changed and displayed.
- Addition of the structure.delete_ss user function to remove the helix and sheet information from the internal structural object.
Changes
- Improved formatting for the \yes LaTeX command for the HTML manual. This now inputs the raw HTML character for a tick.
- The replicate title finding script now processes short titles as well. This shows that the Frame_order.html file will be conflicting and overwritten.
- Avoidance of a replicated title in the frame order chapter of the manual.
- Added some unicode characters for improved formatting of the CHANGES file.
- A number of updates for the release checklist document. This should make it easier to replicate the full release process.
- Update the release checklist document. The version number at http://wiki.nmr-relax.com/Template:Current_version_relax also needs to be updated for each release.
- Added a check for the total argument for the frame_order.distribute user function. The maximum value is 9999, as the PDB format cannot accept more models.
- Creation of the structure.delete_ss user function. This simply resets the helices and sheets data structures in the internal structural object to [].
- Updated the copyright notices for 2016.
- Created a short Info_box copyright string for displaying in the main GUI window. This shows the full range of copyright dates.
- Added the spin_num boolean argument to the structure.load_spins user function. Setting this flag to False will cause the spin number information to be ignored when creating the spin containers. This allows for better support of homologous structures but with different PDB atom numbering. The default flag value is True, preserving the old behaviour.
- Added support for concatenating atomic positions in the structure.load_spins user function. Together with the spin_num flag set to False, this allows for atomic positions to be read from multiple homologous structures with different PDB atomic numbering. The spin containers will be created from the first structure, in which the spin is defined, and the atomic position from subsequent structures will be appended to the list of current atomic positions.
- Fix for the Structure.test_read_pdb_internal3 system test. With the new atomic position concatenation support, when called sequentially the structure.load_spins user function should always use the same value for the ave_pos argument.
- In the GUI, the user functions sys_info and time are now grouped into a system subclass. This is to prepare for other system related functions.
- Added a new 16x16 icon for the oxygen folder-favorites icon.
- Adding a new file at lib.system. This file will contain different Python os and system related functions, for example changing directory or printing the working directory.
- In lib.__init__, adding the filename for system.py.
- Renaming the folder-favorites icon.
- Deleting the old folder-favorites icon.
- Adding a new graphics variable, WIZARD_OXYGEN_PATH, to use oxygen icons with a size of 200px.
- Adding the new user function system.cd. This is to change the current working directory.
- Adding a new 200px version of the oxygen folder-favorites icon. This is to be used in the wizard image.
- Adding a user function translation for the renamed sys_info and time user functions. This is to catch the new naming of these functions.
- Adding a new lib.system.pwd() function, to print and return the current working directory.
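A hedged sketch of what such a pwd() helper could look like, with the printout format assumed purely for illustration:
    import os

    def pwd(verbose=True):
        # Return the current working directory, optionally printing it.
        path = os.getcwd()
        if verbose:
            print("Current working directory: %s" % path)
        return path

    pwd()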
- Adding a new user function system.pwd to print/display the current working directory.
- Adding new 16x16 px and 200px of the oxygen icon folder-development. This icon is used for displaying the current working directory.
- Adding a relax GUI menu for changing the current working directory.
- Adding a menu item for changing the current working directory.
- Adding a verbose True/False argument for the lib.system.pwd() function.
- Storing the current working directory as a GUI variable.
- Adding a toolbar button for changing the current working directory.
- Adding a verbose flag to lib.system.pwd() function.
- Changing to a filedialog for the user function system.cd.
- Adding an observer for current working directory.
- Modifying the user function system.cd not to show the result to STDOUT.
- Letting the lib.system.cd function notify the observer, when changing directory.
- Letting the current working directory be printed in the statusbar in the bottom.
- Updating self.system_cwd_path when a directory change is observed.
- For the four auto-analysis methods, the default results directory is now the current working directory instead of the launch directory.
- Changing the keyboard shortcut for changing the working directory to Ctrl+W, since Ctrl+C is often used for copying (from the terminal).
- Fix for a GUI prompt bug, where ANSI escape characters should not be printed when the interpreter is inherited from wxPython.
- Added a newline character after printing the script.
- Optimising the width of the statusbar.
- When the script user function is called, a pipe_alteration notification is made. This forces the GUI to update and makes sure that it is up to date.
- Updated the frame order auto-analysis for the time → system.time user function change.
- Fix for the GUI status bar element widths. Fixed widths in pixels cause text truncation on many systems, depending on the width of the main relax window. Instead variable widths should be used to allow wxPython to more elegantly present the text while minimising truncation (see the wxPython sketch after this list).
- Created a system test for catching bug #24601, the failure of the optimisation of the R2eff dispersion model when peaks are missing from one spectrum, as reported by Petr Padrta. The test uses his data and script to trigger the bug.
- Simplified the Relax_disp.test_bug_24601_r2eff_missing_data system test. This allows the test for catching bug #24601 to complete in a reasonable time (2 seconds on one system).
- Fix for the independence of the relax library. As lib.system was using the status object, the library independence was broken. To work around this, the module has simply been shifted into the pipe_control package.
- Added some missing oxygen icons to allow the relax manual to compile. These are the 128x128 EPS versions of the places/folder-development.png and places/folder-favorites.png Oxygen icons recently introduced. For completeness, the 32x32, 48x48, and 128x128 PNG versions of the icons have also been added. To help create these EPS icons in the future, the graphics/README file has been added with a description of the *.eps.gz file creation.
- Some more details for the *.eps.gz icon creation process.
- Mac OS X fixes for the Structure.test_pca and Structure.test_pca_observers system tests. The eigenvectors on this OS are sometimes inverted. As the sign of the eigenvector is irrelevant, the vectors hardcoded into the system tests are now inverted as required.
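The spin_num flag and position concatenation described above can be used roughly as follows. This is only a minimal sketch of a relax script, with invented PDB file names, and it assumes the default values for the other structure.read_pdb and structure.load_spins arguments:

    # Minimal relax script sketch (invented file names): load two homologous
    # structures with different PDB atom numbering and attach all of their
    # atomic positions to a single set of spin containers.
    pipe.create(pipe_name='homologues', pipe_type='N-state')

    # Read both structures into the same data pipe.
    structure.read_pdb('structure_A.pdb')
    structure.read_pdb('structure_B.pdb')

    # Create the spins, ignoring the PDB atom numbers and keeping the
    # individual positions rather than averaging them.
    structure.load_spins(spin_id='@N', ave_pos=False, spin_num=False)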
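The lib.system module is only described in outline above, so the following is a small self-contained sketch (not the relax source) of how a pwd() function with a verbose flag and a cd() function that notifies observers could be put together:

    # Self-contained sketch (not the relax source) of the pwd()/cd() pair
    # described above, with a verbose flag and observer notification.
    import os

    _observers = []   # hypothetical callbacks interested in directory changes

    def pwd(verbose=True):
        """Return the current working directory, optionally printing it."""
        path = os.getcwd()
        if verbose:
            print("Current working directory: %s" % path)
        return path

    def cd(path, verbose=False):
        """Change the current working directory and notify all observers."""
        os.chdir(path)
        for callback in _observers:
            callback(os.getcwd())
        return pwd(verbose=verbose)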
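For the status bar width fix, variable field widths in wxPython are requested with negative values, which are treated as proportions of the available space. A minimal illustration, not the relax GUI code:

    # Minimal wxPython illustration (not the relax GUI code): negative values
    # give proportional, variable-width status bar fields, avoiding the text
    # truncation seen with fixed pixel widths.
    import wx

    app = wx.App(False)
    frame = wx.Frame(None, title="Status bar width demo")
    statusbar = frame.CreateStatusBar(3)
    statusbar.SetStatusWidths([-2, -1, -1])     # 2:1:1 split of the free space
    statusbar.SetStatusText("current working directory shown here", 0)
    frame.Show()
    app.MainLoop()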
Bugfixes
- Fix for bug #24218, the incorrect labelling of alignment tensors by the align_tensor.matrix_angles user function when a subset of tensors is specified. The logic for the labels was expanded from being only for all tensors to handling subsets.
- Bug fix for bug #24300 in the structure.read_pdb user function: when the merge flag is True and both multiple structures and multiple models are present, the user function would fail with a RelaxError. The problem was that the molecule index was simply not being updated correctly.
- Fix for bug #24601, the failure of the optimisation of the R2eff dispersion model when peaks are missing from one spectrum, as reported by Petr Padrta. To handle the missing data, the peak intensity keys are now checked for in the spin container peak_intensities data structure. This is for both the R2eff model optimisation and the data back-calculation. A warning is given when the key is missing (the checking pattern is sketched after this list). The relaxation dispersion base_data_loop() method has been modified to now yield the spin ID string, as this is used in the warnings. In addition, the Grace plotting code in the relax library was also modified. When peak intensity keys are missing, some of the Grace plots will have no data. The code will now generate a plot for that data set, but detect the missing data and allow an empty plot to be created.
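The key-checking pattern behind this fix can be sketched as follows. The names are hypothetical and the real dispersion code is more involved; this only illustrates the idea of skipping missing peak intensity keys with a warning instead of failing:

    # Hypothetical sketch of the guard described above: collect only the peak
    # intensity keys that exist for a spin, warning about the missing ones.
    import warnings

    def existing_intensities(peak_intensities, keys, spin_id):
        """Return the intensities present for the given keys, warning about gaps."""
        values = {}
        for key in keys:
            if key not in peak_intensities:
                warnings.warn("Missing peak intensity %r for spin %s, skipping "
                              "this data point." % (key, spin_id))
                continue
            values[key] = peak_intensities[key]
        return values

    # The second spectrum has no peak for this spin, so only one point is used.
    print(existing_intensities({'600_MHz_0.02s': 1.2e6},
                               ['600_MHz_0.02s', '600_MHz_0.04s'], ':2@N'))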
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- Mail Archive
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 4.0.1
Description
This is a major feature and bugfix release. Features include the new structure.pca user function for performing a principal component analysis (PCA) of a set of structures, handling of replicated R2,eff data points in the dispersion analysis, improvements in the handling of PDB structures, protection against numpy ≥ 1.9 FutureWarnings for a number of soon to change behaviours in numpy, and the addition of a deployment script for the Google Cloud Computing infrastructure. Bugfixes include an error when loading relaxation data, the CSA constant equation in the manual, missing information in the relax state and results files, the loading of certain state files in the GUI, running relax with no graphical display while using matplotlib, and a BMRB export failure when a spin container is missing data or parameters.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 4.0.1
(14 December 2015, from /trunk)
http://svn.gna.org/svn/relax/tags/4.0.1
Features
- Many improvements for the compilation of the HTML version of the relax manual.
- Updated relax to eliminate all FutureWarnings from numpy ≥ 1.9, to future-proof relax against upcoming numpy behaviour changes.
- Ability to handle replicated R2,eff data points by the relax_disp.r2eff_read user function, by adding 0.001 to the frequency value for the replicated point.
- A new sample script for loading a model-free results file and back-calculating relaxation data.
- Improvements for the handling of PDB structural data.
- Implementation of the structure.pca user function for performing principal component analyses (PCA) of an ensemble of structures.
- Addition of a script for rapid deployment on the Google Cloud Computing infrastructure.
Changes
- Fix for the rigid frame order model 2nd degree frame order matrix in the manual. The wrong symbol was being used.
- Removed the newparagraph and newsubparagraph definitions from the LaTeX manual. These were causing conflicts with latex2html, preventing the HTML version of the manual from being compiled. These definitions are unnecessary for the current set up of the sectioning in the manual.
- Modified the short captions in the new frame order models chapter of the manual. The runic ᛞ character has been replaced simply by 'Daeg'. This is due to incompatibilities with latex2html which prevent the HTML manual from being compiled.
- Removal of the definition of a fixed-width table column from the LaTeX manual preamble. This is required as the definition breaks latex2html compatibility, causing a corruption in the figure numbering resulting in the images in the HTML to be essentially randomised.
- Removal of the accents package to allow the HTML manual to be compiled. The accents LaTeX package is not compatible with latex2html, so the easiest fix is to eliminate the package.
- Manually rotated the frame order matrix element EPS manual figures, for latex2html compatibility. The '90 rotate' command has been deleted and the bounding box permuted as a b c d → b -c d -a. This allows the angle argument in the \includegraphics{} command to be dropped, as latex2html does not recognise this. It allows the figures to be visible in the HTML version of the manual.
- Redesign of the frame order parameter nesting table in the manual for latex2html compatibility. The table uses the tikz package, which is fatal for latex2html, even if not used. Therefore the table in the docs/latex/frame_order/parameter_nesting.tex file has been converted into a standalone LaTeX document to create a cropped postscript version of the tikz formatted table. A compilation script has been added as well. The resultant *.ps file is now included into the PCS numerical integration section, rather than this section creating the tikz table. All tikz preamble text has been removed to allow latex2html to run.
- Workaround for latex2html not being able to handle the allrunes package or associated font. In the preamble htmlonly environment, the frame order symbols are redefined using the text 'Daeg' instead of the runic character ᛞ.
- Fixes for sub and superscripts throughout the manual. This introduces {} around all sub and superscripted \textrm{} instances. This is not needed for the PDF version of the manual as the missing bracket problem is avoided, but it affects the HTML version of the manual compiled by latex2html, which requires the correct notation. The fixes are for both the new frame order chapter as well as the relaxation dispersion chapter.
- Editing and fixes for the relax 4.0.0 part of the CHANGES file.
- Updated and improved the wiki instructions in the relax release checklist document.
- One more wiki instruction about checking for dead links in the release checklist document.
- More minor changes to the 'Announcement' section of the release checklist document.
- Updated the shell script for finding duplicated titles in the LaTeX files of the manual.
- Converted the duplicate title finding shell script into a Python script. The Python script is far more advanced and uses a different logic to produce a table of replicated titles and their count. The script also returns a failed exit status when replicates exist.
- Converted the replicated title finding Python script to use a class structure. This allows the script to be imported as a module. The replicate finding has been shifted into a find() class method.
- Renamed the replicate title finding script.
- Removed the duplicate LaTeX title finding shell script. This is now handled by the far more advanced Python script.
- The Scons compilation of the PDF and HTML manuals now checks for replicated titles. A new replicate_title_check target has been added to the scons scripts. This calls the find() method of the replicate LaTeX title finding script to determine if any titles are replicated, and if so the scons target returns with a sys.exit(1) call. This target is set at the start of the user_manual_pdf, user_manual_pdf_nofetch, user_manual_html, and user_manual_html_nofetch scons targets. The result is that the manual cannot be compiled if replicate titles exist, forcing the titles to be changed. The HTML pages will then all be unique, as replicated titles result in only one HTML page being created for all the sections.
- Elimination of replicated titles in the LaTeX sources that the new frame order chapters introduced.
- Removal of an old replicated title in the LaTeX sources for the manual. This is the title 'Model-free analysis' which is used for the entire specific analysis chapter as well as for the model-free analysis section of the values, gradients, and Hessians for optimisation chapter.
- Fixes and improved printouts for the replicate_title_check scons target.
- Updated all of relax to protect against future changes occurring in the numpy Python package. From numpy version 1.9, the FutureWarning "__main__:1: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future." is seen in a large percentage of all of relax's user functions. This is caught and turned into a RelaxWarning with the same message. The issue is that the behaviour of the comparison operators == and != will change with future numpy versions. These have been replaced with is and is not throughout the relax code base (an example is given after this list). Changes have also been made to the minfx and bmrblib packages to match.
- More future protection against numpy changes. The FutureWarning is "`rank` is deprecated; use the `ndim` attribute or function instead. To find the rank of a matrix see `numpy.linalg.matrix_rank`." Therefore the N-state model target function method paramag_info() has been updated to use the .ndim attribute and no longer use the numpy.rank() function.
- Created the Mf.test_bug_23933_relax_data_read_ids system test. This is designed to catch bug #23933, the "NameError: global name 'ids' is not defined" problem when loading relaxation data. A truncated version of the PDB file and relaxation data, the full versions of which are attached to the bug report, consisting solely of residues 329, 330, and 331, have been added to the test suite shared data directories, and the system test written to catch the NameError.
- Updated the Mf.test_bug_23933_relax_data_read_ids system test to catch the RelaxMultiSpinIDError. This allows the system test to pass, as a RelaxMultiSpinIDError is expected.
- Updated the minfx and bmrblib versions in the release checklist document to 1.0.12 and 1.0.4. This is to remove the numpy FutureWarning messages about the == None and != None comparisons to numpy data structures, which in the future will change in behaviour.
- Increased the Gna! news item sectioning depth in the release checklist document.
- Expanded the description of the sequence.attach_protons user function. This follows from http://thread.gmane.org/gmane.science.nmr.relax.user/1849/focus=1855.
- Added initial test data from Paul Schanda. This will demonstrate that there are several possibilities for enhancing the handling of R2,eff data points.
- Added the Relax_disp.test_paul_schanda_nov_2015 system test. This will catch the loading of nan values.
- Made an additional check in the sequence reading that nan values are skipped.
- Making sure that the replicated 4000 Hz point for the 950 MHz experiment is not overwritten.
- In the Relax_disp.test_paul_schanda_nov_2015 system test, added a test counting the R2,eff values. This shows that the replicated R2,eff point at 950 MHz/4000 Hz is overwritten. A solution could be to change the dispersion frequency by a very small amount, to allow the addition of the data point.
- Added further tests to Relax_disp.test_paul_schanda_nov_2015. This will show that replicates of R2,eff values are not handled well.
- In the r2eff_read() function of the dispersion data module, added the possibility to read R2,eff values which are replicated. This is done by first checking if the dispersion key exists in the R2,eff dictionary. If it does, 0.001 is repeatedly added to the frequency until an unused key is found (see the sketch after this list). This should help handle multiple R2,eff points as separate values, rather than making any decision to average them.
- Added the expectation of raising a RelaxError if trying to plot when no model information is stored.
- Raising an error when plotting dispersion curves if no model is saved.
- Changed the example script for analysing the data.
- Extended the Relax_disp.test_paul_schanda_nov_2015 system test to include auto-analysis and clustered fits. This should show that the analysis is now possible.
- Added a temporary state file and a script for GUI setup to the Paul Schanda data.
- Added the Relax_disp.test_paul_schanda_nov_2015 GUI test. This will show that loading a state creates a problem: "Traceback (most recent call last): TypeError: int() argument must be a string or a number, not 'NoneType'".
- Added a sample script for back-calculating relaxation data from a model-free results file. This is useful when the results file is not for the final model, as these results files do not contain the back-calculated data. This is in response to Christina Möller's support request #3303.
- Using Gary's lib.float.isNaN() instead of math.isnan(), for backwards compatibility with Python 2.5.
- Fix for a spelling mistake and documentation of the new behaviour of relax_disp.r2eff_read when reading R2,eff points with the same frequency. If the spin container already contains R2,eff values with the same 'frequency of the CPMG pulse' or 'spin-lock field strength', the frequency will be changed by an infinitesimally small value of +0.001 Hz. This allows for two or more duplicates of the same frequency.
- Modified the internal structural object to be less influenced by the format of the PDB. The PDB serial number is now intelligently handled, in that it is reset to 1 when a new model is created. This information is still kept for supporting the logic of the reading of the CONECT records, and will be eliminated in the future. The chain ID information is now no longer stored in the internal structural object, as this information is recreated by the structure.write_pdb user function based on how the internal structural object has been created.
- Updates to the Noe and Structure system test classes for the internal structural object changes. The serial number can now be reset, and the chain ID information is no longer stored.
- Added a file to the test suite shared data to help implement the PCA structural analysis. This is the N-domain of the CaM-IQ complex used in a frame order analysis. It is the first 5 structures from a call to the frame_order.distribute user function, with the different rigid-bodies merged back together into a single molecule.
- Created the structure.pca user function front end. This is currently modelled on the structure.rmsd user function framework.
- Basic implementation of the structure.pca user function back end. This is the new pca() function of the pipe_control.structure.main module. It simply performs some checks, assembles the atomic coordinates, and then passes control to the relax library pca_analysis() function of the currently unimplemented lib.structure.pca module.
- Partial implementation of the PCA analysis in the relax library. This is for the new structure.pca user function. The lib.structure.pca module has been created, and the pca_analysis() function created to calculate the structure covariance matrix, via the calc_covariance_matrix() function, and then calculate the eigenvalues and eigenvectors of the covariance matrix, sorting them and truncating to the desired number of PCA modes (a plain numpy sketch of the approach is given after this list).
- Added the algorithm and num_modes arguments to the structure.pca user function. These are passed all the way into the relax library backend.
- Implemented the SVD algorithm for the PCA analysis in the relax library. This simply calls numpy.linalg.svd().
- The PCA analysis in the relax library now calculates the per structure projections along the PCs.
- The PCA analysis function in the relax library is now returning data. This includes the PCA values and vectors, and the per structure projections.
- The PCA values and vectors, and the per structure projections are now being stored. This is in the structure.pca user function backend in the pipe_control.structure.main module.
- Added the format and dir arguments to the structure.pca user function. This is for both the front and back ends.
- Modified the assemble_structural_coordinates() method to return more information. This is from the pipe_control.structure.main module. The lists boolean argument is now accepted, which will cause the function to additionally return the object ID list per molecule, the model number list per molecule, and the molecule name list per molecule.
- The structure.pca user function now creates graphs of the PC projections. This includes PC1 vs. PC2, PC2 vs. PC3, etc.
- Added the Gromacs PCA results for the distribution.pdb file. This includes a script used to execute all parts of Gromacs and all output files.
- Updated the Gromacs PCA results for the newest 5.1.1 Gromacs version.
- Created an initial Structure.test_pca system test. This executes the new structure.pca user function, and checks if data is stored in cdp.structure.
- Improved the graphs in the backend of the structure.pca user function. The graphs are now clustered so that different models of the same structure in the same data pipe are within one graph set. The graph header has also been improved.
- Expanded the Structure.test_pca system test checks to compare to the values from Gromacs.
- A weighted mean structure can now be calculated. This is for the calc_mean_structure() function of the relax library module lib.structure.statistics. Weights can now be supplied for each structure to allow for a weighted mean to be calculated and returned.
- Added support for observer structures in the structure.pca user function. This allows a subset of the structures used in the PC analysis to have zero weight so that these structures can be used for comparison purposes. The obs_pipes, obs_models, and obs_molecules arguments have been added to the user function front end. The backend uses these to create an array of weights for each structure, and the lib.structure.pca functions use the zero weights to remove the observer structures from the PC mode calculations.
- Created the Structure.test_pca_observers system test. This is for testing the new observer structures concept of the structure.pca user function.
- Improved the printouts from the relax library principal component analysis. This is in the pca_analysis() function of the lib.structure.pca module.
- Fixes and improvements for the graphs produced by the structure.pca user function. The different sets are now correctly created, and are now labelled in the plots.
- Adding a testing deploy script for rapid deployment on Google Cloud Computing. This is intended for installation on Ubuntu 14.04 LTS.
- Expanding the installation script.
- Putting the installation steps into functions in the deploy script.
- Splitting the deploy script into several small functions.
- Adding checking statements to the install script.
- When sourcing the scripts, the individual functions can be executed instead.
- Added spaces to the install script for better printing.
- Adding a tutorial script.
- Adding 2 tutorial scripts.
- Fix for a small spin ID error in the tutorial script.
- Created a system test for catching bug #24131, the BMRB export failure when the SpinContainer object has no S2 attribute, as reported by Martin Ballaschk.
- Modified the Mf.test_bug_24131_bmrb_deposition system test to check for the RelaxError. The test results in a RelaxError, as the results file contains no selected spins.
- Added the Mf.test_bug_24131_missing_interaction system test to catch another problem. This is part of bug #24131, the BMRB export failure with the SpinContainer object having no S2 value. However the previous fix of skipping deselected spins introduced a new problem of relax still searching for the interatomic interactions for that deselected spin.
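The numpy comparison change described in the list above amounts to the following. With a numpy array on the left-hand side, a == None or != None comparison triggers the FutureWarning (and will eventually perform an elementwise comparison), while the identity tests do not:

    # Illustration of the `== None` to `is None` change described above.
    import numpy

    data = numpy.zeros(3)

    # Old style, now removed from relax, minfx, and bmrblib:
    #     if data != None: ...        # raises the numpy FutureWarning
    # New style used throughout the code base:
    if data is not None:
        print("data is set:", data)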
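The replicated R2,eff point handling in relax_disp.r2eff_read, described above, nudges the dispersion frequency until an unused key is found. A rough sketch with invented names:

    # Rough sketch (invented names) of the replicated R2eff point handling:
    # shift the frequency by +0.001 Hz until the dictionary key is free, so
    # that replicates are stored as separate points rather than averaged.
    def store_r2eff(r2eff, frequency, value, increment=0.001):
        while frequency in r2eff:
            frequency += increment
        r2eff[frequency] = value
        return frequency

    r2eff = {}
    store_r2eff(r2eff, 4000.0, 21.3)    # original 950 MHz / 4000 Hz point
    store_r2eff(r2eff, 4000.0, 21.9)    # replicate, stored at 4000.001 Hz
    print(sorted(r2eff.items()))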
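The PCA machinery described above (weighted mean structure, covariance matrix, eigenvalue truncation, zero-weight observer structures, and per-structure projections) can be illustrated in plain numpy. This is a sketch of the general approach only, not the lib.structure.pca code:

    # Plain numpy sketch of the weighted structure PCA described above (not the
    # lib.structure.pca code).  coords has the shape (N structures, M atoms, 3).
    import numpy as np

    def structure_pca(coords, weights=None, num_modes=4):
        """Return the truncated PCA values, modes, and per-structure projections."""
        n = coords.shape[0]
        if weights is None:
            weights = np.ones(n)
        weights = weights / weights.sum()

        # Weighted mean structure, then flatten each structure to one vector.
        mean = (weights[:, None, None] * coords).sum(axis=0)
        dev = (coords - mean).reshape(n, -1)

        # Weighted covariance matrix and its eigendecomposition.  Structures
        # with zero weight (the observers) do not contribute to the modes.
        covar = (weights[:, None] * dev).T @ dev
        values, vectors = np.linalg.eigh(covar)
        order = np.argsort(values)[::-1][:num_modes]
        values, vectors = values[order], vectors[:, order]

        # All structures, observers included, are projected onto the modes.
        projections = dev @ vectors
        return values, vectors, projections

    np.random.seed(0)
    coords = np.random.normal(size=(5, 10, 3))
    weights = np.array([1.0, 1.0, 1.0, 1.0, 0.0])   # last structure is an observer
    values, modes, projections = structure_pca(coords, weights)
    print(values.shape, modes.shape, projections.shape)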
Bugfixes
- Replicated titles in the HTML version of the relax manual, and hence replicated HTML file names overwriting earlier sections, have been eliminated.
- Fix for bug #23933, the "NameError: global name 'ids' is not defined" problem when loading relaxation data. The bug was introduced back in November 2014, and is due to some incomplete error handling code. The problem is that the spin type that the relaxation data belongs to (@N vs. @H) has not been specified. Now the correct RelaxMultiSpinIDError is raised. The ids variable did not exist - it was code that was planned to be added, but never was and was forgotten.
- Fix for the CSA constant equation in the model-free chapter of the manual. This was spotted by Christina Möller and reported on the relax-users mailing list.
- Bug fix for the storage of the XML structural object in the state and results files. Previously any objects added to cdp.structure (or any structure object) would not be saved by the structural object to_xml() method unless the function is explicitly modified to store that object. Now all objects present will be converted to XML.
- Fix for the relaxation dispersion analysis in the GUI, as caught by the Relax_disp.test_paul_schanda_nov_2015 GUI test. When loading from a script state file, the value of None can be present. This is now set to the standard values.
- Fix for running relax on a server with no graphical display while using matplotlib. The error was found with the Relax_disp.test_repeat_cpmg system test, and the error generated was "QXcbConnection: Could not connect to display. Aborted (core dumped)". The backend of matplotlib has to be changed (see the sketch after this list). This is for example described in: http://stackoverflow.com/questions/2766149/possible-to-use-pyplot-without-display and http://stackoverflow.com/questions/8257385/automatic-detection-of-display-availability-with-matplotlib.
- Modified the behaviour of the bmrb.write user function backend for a model-free analysis (fix for bug #24131). This is in the bmrb_write() method of the model-free analysis API. Deselected spins are now skipped and a check has been added to be sure that spin data has been assembled.
- Another fix for bug #24131, the BMRB export failure when the SpinContainer object has no S2 attribute. Now no data is stored in the BMRB file if a model-free model has not been set up for the spin. This allows the test suite to pass.
- Bug fix to allow the Mf.test_bug_24131_missing_interaction system test to pass. This is part of bug #24131, the BMRB export failure with the SpinContainer object having no S2 value. The problem was when assembling the diffusion tensor data. The spin_loop() function was being called, as the diffusion tensor is reported for all residues. Therefore the skip_desel=True argument has been added to match the model-free part.
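The matplotlib fix follows the standard headless pattern of selecting the non-interactive Agg backend before pyplot is imported, for example:

    # Standard pattern for plotting without a display, as used for the headless
    # server fix described above: select the Agg backend before pyplot loads.
    import matplotlib
    matplotlib.use('Agg')
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [21.5, 22.0, 21.8])
    fig.savefig('dispersion_plot.png')   # written to file, no X display needed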
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- Mail Archive
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 4.0.0
Description
This is a major feature release for a new analysis type labelled 'frame order'. The frame order theory aims to unify all rotational molecular physics data sources via a single mechanical model. It is a bridging physics theory for rigid body motions based on the statistical mechanical ordering of reference frames. The previous analysis of the same name was an early iteration of this theory that was however rudimentary and non-functional. Its current implementation is for analysing RDC and PCS data from an internal alignment to interpret domain or other rigid body motions within a molecule or molecular complex.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 4.0.0
(7 October 2015, from /trunk)
http://svn.gna.org/svn/relax/tags/4.0.0
Features
- The final, complete, and correct implementation of the frame order theory for studying rigid body motions. This is currently for analysing RDC and PCS data from internally aligned systems.
Changes
- Deletion of the frame_order.average_position user function and all of the associated backend code. This user function allowed the user to specify five different types of displacement to the average moving domain position: a pure rotation, with no translation, about the pivot of the motion in the system; a rotation about the pivot of the motion of the system together with a translation; a pure translation with no rotation; a rotation about the centre of mass of the moving domain with no translation; a rotation about the centre of mass of the moving domain together with a translation. Now the last option will be the default and only option. This option is equivalent to applying the standard superimposition algorithm (the Kabsch algorithm) to a hypothetical structure at the real average position. The other four are due to the history of the development of the theory. These limit the usefulness of the theory and will only cause confusion.
- Clean up of the frame order target function code. This matches the previous change of the deletion of the frame_order.average_position user function. The changes include the removal of the translation optimisation flag as this is now always performed, and the removal of the flag which causes the average domain rotation pivot point to match the motional pivot point as these are now permanently decoupled.
- Alphabetical ordering of functions in the lib.frame_order.pseudo_ellipse module.
- Eliminated all of the 'line' frame order models, as they are not implemented yet. This is just frontend code - the backend does not exist.
- Updated the isotropic cone CaM frame order test model optimisation script. Due to all of the changes in the frame order analysis, the old script was no longer functional.
- Created a script for the CaM frame order test models for finding the average domain position. As the rotation about a fixed pivot has been eliminated, the shift from 1J7P_1st_NH_rot.pdb to 1J7P_1st_NH.pdb has to be converted into a translation and rotation about the CoM. This script will be used to replace the pivot rotation Euler angles with the translation vector and CoM rotation Euler angles. However the structure.superimpose user function will need to be modified to handle both the standard centroid superimposition as well as a CoM superimposition.
- Updated the CaM frame order test model superimposition script. The structure.superimpose user function is now correctly called. The output log file has been added to the repository as it contains the correct translation and Euler rotation information needed for the test models.
- Parameter update for the isotropic cone CaM frame order test model optimisation script. The Euler angles for the rotation about the motional pivot have been replaced by the translation vector and Euler angle CoM rotation parameters.
- Fix for a number of the frame order models which do not have parameter constraints. The linear_constraint() function was returning A, b = [], [] for these models, but these empty numpy arrays were causing the minfx library to fail. These values are now caught and the constraint algorithm turned off in the minimise() specific API method.
- Increased the precision of all the data in the CaM frame order test data generation base script. These have all been converted from float16 to float64 numpy types.
- Fix for the RDC error setting in the CaM frame order test data generation base script. The rdc_err data structure is located in the interatomic data containers, not the spin containers.
- Modification of the structure loading part of the CaM frame order data generation base script. The structures are now only loaded if the DIST_PDB flag is set, as they are only used for generating the 3D distribution of structures. This saves a lot of time and computer memory.
- Huge speedup of the CaM frame order test data generation base script. By using multidimensional numpy arrays to store the atomic positions and XH unit vectors of all spins, and performing the rotations on these structures using numpy.tensordot(), the calculations are now a factor of 10 times faster. The progress meter had to be changed to show every 1000 rather than 100 iterations. The rotations of the positions and vectors are now performed sequentially, accidentally fixing a bug with the double motion models (i.e. the 'double rotor' model).
- Modified the CaM frame order test data generation base script to conserve computer RAM. The XH vector and atomic position data structures for all N rotations are now of the numpy.float32 rather than numpy.float64 type. The main change is to calculate the averaged RDCs and averaged PCSs separately, deleting the N-sized data structures once the data files are written.
- Complete redesign of the CaM frame order data generation base script for speed and memory savings. Although the rotated XH bond vector and atomic position code was very fast, the amount of memory needed to store these in the spin containers and interatomic data containers was huge when N > 1e6. The subsequent rdc.back_calc and pcs.back_calc user function calls would also take far too long. Therefore the base script has been redesigned. The _create_distribution() method has been split into four: _calculate_pcs(), _calculate_rdc(), _create_distribution(), and _pipe_setup(). The _pipe_setup() method is called first to set up the data pipe with all required data. Then the _calculate_rdc() and _calculate_pcs() methods, and finally _create_distribution() if the DIST_PDB flag is set. The calls to the rdc.back_calc and pcs.back_calc user functions have been eliminated. Instead the _calculate_rdc() and _calculate_pcs() methods calculate the averaged RDC and PCS themselves as numpy array structures. Rather than storing the huge rotated vectors and atomic positions data structures, the RDCs and PCSs are summed. These are then divided by self.N at the end to average the values. Compared to the old code, when N is set to 20 million the RAM usage drops from ~20 GB to ~65 MB. The total run time is also decreased on one system from a few days to a few hours (an order or two of magnitude).
- Changed the progress meter updating for the CaM frame order test data generation base script. The spinner was far too fast, updating every 5 increments, and is now updated every 250. And the total number is now only printed every 10,000 increments.
- Improvements to the progress meter for the CaM frame order test data generation base script. Commas are now printed between the thousands and the numbers are now right justified.
- Large increase in accuracy of the RDC and PCS averaging. This is for the CaM frame order test data generation base script. By summing the RDCs and PCSs into 1D numpy.float128 arrays (for this, a 64-bit system is required), and then dividing by N at the end, the average value can be calculated with a much higher accuracy. As N becomes larger, the numerical averaging introduces greater and greater amounts of truncation artifacts. So this change alleviates this.
- Fix for the RDC and PCS averaging in the CaM frame order test data generation base script. For the double rotor model, or any multiple motional mode model, the averaging was incorrect. Instead of dividing by N, the values should be divided by NM, where M is the number of motional modes.
- Huge increase in precision for the CaM frame order free rotor model test data. The higher precision is because the number of structures in the distribution is now twenty million rather than one million, and the much higher precision numpy.float128 averaging of the updated data generation base script has been used. This data should allow for a much better estimate of the β and γ average domain position parameter values for the free rotor models which are affected by the collapse of the α parameter to zero.
- Huge increase in precision for the CaM frame order double rotor model test data. The higher precision is because the number of structures in the distribution is now over twenty million (4500²) rather than a quarter of a million (500²). And the much higher precision numpy.float128 averaging of the updated data generation base script has been used.
- Fix for the constraint deactivation in the frame order minimisation when no constraints are present.
- Huge increase in precision for the CaM frame order rotor model test data. The higher precision is because the number of structures in the distribution is now 20 million rather than 166,666, and the numpy.float128 data averaging has been used.
- Large increase in precision for the 2nd CaM frame order rotor model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1,000,001 and the numpy.float128 data averaging has been used.
- Parameter update for the 2nd rotor CaM frame order test model optimisation script. The Euler angles for the rotation about the motional pivot have been replaced by the translation vector and Euler angle CoM rotation parameters.
- Large increase in precision for the 2nd CaM frame order free rotor model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 999,999 and the numpy.float128 data averaging has been used.
- Updated the CaM frame order test model superimposition script. The Ca2+ atoms are now deleted from the structures before superimposition so that the centroid matches that used in the frame order analysis.
- The average domain rotation centroid is printed out when setting up the frame order target functions. This is to help the user understand what is happening in the analysis.
- Faster clearing of numpy arrays in the lib.frame_order modules. The x[:] = 0.0 notation is now used to set all elements to zero, rather than nested looping over all dimensions. This however has a negligible effect on the test suite timings.
- Large increase in precision for the CaM frame order pseudo-ellipse model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
- Improved the value setting in the optimisation() method of the CaM frame order system tests. This is in the base script used by all scripts in test_suite/system_tests/scripts/frame_order/cam/.
- Changed the average domain position parameter values in the CaM frame order system tests. This is in the base script used by all scripts in test_suite/system_tests/scripts/frame_order/cam/. The translation vector coordinates are now set, as well as the CoM Euler angle rotations. These come from the log file of the test_suite/shared_data/frame_order/cam/superimpose.py script, and are needed due to the simplification of the average domain position mechanics now mimicking the Kabsch superimposition algorithm.
- The CaM frame order system test mesg_opt_debug() method now prints out the translation vector. This is printed out at the end of all CaM frame order system tests to help with debugging when the test fails.
- Change for how the CaM frame order system test scripts handle the average domain position rotation. The trick of pre-rotating the 3D coordinates was used to solve the {α, β, γ} -> {0, β', γ'} angle conversion problem in the rotor models no longer works now that the average domain position mechanics has been simplified. Instead, high precision optimised β' and γ' values are now set, and the ave_pos_alpha value set to None. The high precision parameters were obtained with the frame_order.py script located in the directory test_suite/shared_data/frame_order/cam/free_rotor. The free rotor target function was modified so that the translation vector is hard-coded to [-20.859750185691549, -2.450606987447843, -2.191854570352916] and the axis θ and φ angles to 0.96007997859534299767 and 4.0322755062196229403. These parameters were then commented out for the model in the module specific_analyses.frame_order.parameters so only β' and γ' were optimised. Iterative optimisation was used with increasing precision, ending up with high precision using 10,000 Sobol' points.
- Updated a number of the CaM frame order system tests for the higher precision data. The new data results in chi-squared values at the real solution to be much closer to zero.
- Change for how the CaM frame order free-rotor pseudo-ellipse test script handles the average position.
- Added FIXME comments to the 2nd free-rotor CaM frame order model system test scripts. These explain the steps required to obtain the correct β' and γ' average domain position rotation angles.
- Large increase in precision for the CaM frame order isotropic cone model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
- Large increase in precision for the CaM frame order free-rotor, isotropic cone model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
- Updated the CaM frame order free-rotor model test data set for testing for missing data. This is the data in test_suite/shared_data/frame_order/cam/free_rotor_missing_data. To simplify the copying of data from test_suite/shared_data/frame_order/cam/free_rotor and then the deletion of data, the missing.py script was created to automate the process. The generate_distribution.py script and some of the files it creates were removed from the repository so it is clearer how the data has been created.
- Large increase in precision for the 2nd CaM frame order free-rotor, isotropic cone model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
- Large increase in precision for the CaM frame order free-rotor, pseudo-ellipse model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
- Large increase in precision for the CaM frame order pseudo-ellipse model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
- Updated a number of the CaM frame order system tests for the higher precision data. The new data results in chi-squared values at the real solution to be much closer to zero. The free-rotor pseudo-ellipse models might need investigation however as the chi-squared values have increased.
- Elimination of the error_flag variable from the frame order analysis. This flag is used to activate some old code paths which have now been deleted as they are never used.
- Optimisation of the average domain position for the CaM frame order free-rotor models. The log file that shows the optimisation of the average domain position for the free-rotor models has been added to the repository for reference. This is for the simple free-rotor model, but the optimised position holds for the isotropic cone and pseudo-ellipse model data too. To perform the optimisation, the axis_theta and axis_phi parameters were removed from the model and hardcoded into the target function. As the rotor axis is known, this allows the average domain position to be optimised in isolation. Visual inspection of the results confirmed the position to be correct.
- Fixes for the 2nd frame order free-rotor system tests. The average domain position parameters are now set to the correct values, matching those in the relax log file frame_order_ave_pos_opt.log in test_suite/shared_data/frame_order/cam/free_rotor2.
- Updated the 2nd CaM free-rotor frame order system tests for the correct average domain position. The chi-squared values are now significantly lower.
- Increased the precision of the chi-squared value testing in the CaM frame order system tests. The check_chi2 method has been modified so that the chi-squared value is no longer scaled, and the precision has been increased from 1 significant figure to 4. All of the tests have been updated to match.
- The minimisation verbosity flag now affects the frame order RelaxWarning about turning constraints off.
- Performed a frame order analysis on the 2nd CaM free-rotor model test data. This is to check that everything is operating as expected.
- Small speedup for the frame order target functions for most models. The rotation matrix corresponding to each Sobol' point for the numerical integration is now pre-calculated during target function initialisation rather than once for each function call.
- Updates for some of the frame order system tests for the rotation matrix pre-calculation change. As the rotation matrix is being pre-calculated, one consequence is that the Sobol' angles are now full 64-bit precision rather than 32-bit. Therefore this changes the chi-squared value a little, requiring updates to the tests.
- Performed a frame order analysis on the CaM free-rotor model test data set. This is to demonstrate that everything is operating correctly.
- Performed a frame order analysis on the CaM free-rotor model test data set with missing data. This is to demonstrate that everything is operating correctly.
- Attempt to speed up the pseudo-elliptic frame order models. The quasi-random numerical integration of the PCS for the pseudo-ellipse has been modified so that the torsion angle check for each Sobol' point is performed before the tmax_pseudo_ellipse() function call. A new check that the tilt angle is less than cone_theta_y, the larger of the two cone angles, has also been added to avoid tmax_pseudo_ellipse() when the θ tilt angle is outside of an isotropic cone defined by cone_theta_y.
- Performed a frame order analysis on a number of the CaM test data sets. This includes the rotor, isotropic cone, and pseudo-ellipse, and the analyses demonstrate a common bug between all these models.
- Performed a frame order analysis on the rigid CaM test data set. This is to demonstrate that everything is operating correctly.
- Optimisation of the rotor model to the rigid CaM frame order test data. The optimisation script and all results files have been added to the repository.
- Increased the grid search bounds for the frame order average domain translation. Instead of being a 10 Angstrom box centred at {0, 0, 0}, now the translation search has been increased to a 100 Angstrom box.
- Proper edge case handling and slight speedup of the frame order PCS integration functions. The case whereby no Sobol' points in the numerical integration lie within the motional distribution is now caught and the rotation matrix set to the motional eigenframe to simulate the rigid state. As the code for averaging the PCS was changed, it was also simplified by removing an unnecessary loop over all spins. This should speed up the PCS integration by a tiny amount.
- Created a new CaM frame order test data set. This is for the rotor model with a very small torsion angle of 1 degree, and will be used as a comparison to the rigid model and for testing the performance of the rotor model for an edge case.
- Updated the frame order representations in all of the frame_order.py scripts for the CaM test data. All PDB files are now gzipped to save space, the old pymol.cone_pdb user function calls replaced with pymol.frame_order, and an average domain PDB file for the exact solution is now created in all cases.
- The minimisation constraints are now turned on for all CaM test data frame_order.py optimisation scripts.
- Updated the rotor CaM test data frame_order.py script for the parameter reduction. The rotor axis {θ, φ} polar angles have been replaced by the single axis α angle. This now matches the script for the 2nd rotor model.
- Updated the parameters in all of the frame_order.py scripts for the CaM test data. The parameters are now specified at the top of the script as variables. All scripts now handle the change to the translation + CoM rotation for the average domain position rather than having a pure rotation about a fixed pivot, which is no longer supported.
- The frame_order.num_int_pts user function now throws a RelaxWarning if not enough points are used.
- Changed the creation of Sobol' points for numerical integration in the frame order target functions. The points are now all created at once using the i4_sobol_generate() rather than i4_sobol() function from the extern.sobol.sobol_lib module.
- Increased the number of integration points from 50 or 100 to 5000. This is for all CaM frame_order.py test data optimisation scripts. The higher number of points are essential for optimising the frame order models and hence for checking the relax implementation.
- Updated the frame_order.py optimisation script for the small angle CaM rotor frame order test data. This now has the correct rotor torsion angle of 1 degree, and the spherical coordinates are now converted to the axis α parameter.
- Expanded the capabilities of the pymol.frame_order user function. The isotropic and pseudo-elliptic cones are now represented as they used to be under the pymol.cone_pdb user function. To avoid code duplication, the new represent_cone_axis(), represent_cone_object() and represent_rotor_object() functions have been created to send the commands into PyMOL.
- Increased the precision of all of the CaM frame order system tests by 40 times. The number of Sobol' integration points have been significantly increased while only increasing the frame order system test timings by ~10%. This allows for checking for chi-squared values at the minima much closer to zero, and is much better for demonstrating bugs.
- Optimisation constraints are no longer turned off in the frame order auto-analysis. Constraints are now supported by all frame order models, or automatically turned off for those which do not have parameter constraints.
- Fix for the frame order visualisation script created by the auto-analysis. The call to pymol.frame_order is now correct for the current version of this user function.
- Removed a terrible hack for handling the frame order analysis without constraints. This is no longer needed as the log-barrier method is now used to constrain the optimisation, so that the torsion angle can no longer be negative.
- Constraints are now implemented in the frame order grid search. This is useful for the pseudo-elliptic models as the cone θx < θy constraint halves the optimisation space.
- Expanded the CaM rotor test data frame_order.py optimisation script. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher. And the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation are included.
- Expanded the CaM pseudo-ellipse test data frame_order.py optimisation script. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher. And the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation are included.
- Added one more iteration for the zooming optimisation of the frame order auto-analysis. This is to improve the speed of optimisation when all RDC and PCS data is being used. The previous iterations were with [100, 1000, 200000] Sobol' integration points and [1e-2, 1e-3, 1e-4] function tolerances. This has been increased to [100, 1000, 10000, 100000] and [1e-2, 1e-3, 5e-3, 1e-4]. The final number of points has been decreased as that level of accuracy does not appear to be necessary. These are also only default values that the user can change for themselves.
- Updated the CaM frame order data generation base script to print out more information. This is for the first axis system so that the same amount of information as the second system is printed.
- Expanded the CaM isotropic cone test data frame_order.py optimisation script and added the results. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher. And the accuracy of the initial grid search and the Monte Carlo simulations is now much lower.
- Important fix for the 2nd rotor model of the CaM frame order test data. The tilt angle was not set, and therefore the old data matched the non-tilted 1st rotor model. All PCS and RDC data has been regenerated to the highest quality using 20,000,000 structures.
- Updated the 3 Frame_order.test_cam_rotor2* system tests for the higher quality data.
- Expanded the 2nd CaM pseudo-ellipse test data frame_order.py optimisation script. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher. And the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation have been added to the repository.
- Expanded the CaM free-rotor isotropic cone test data frame_order.py optimisation script. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher. And the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation have been added to the repository.
- Expanded all remaining CaM test data frame_order.py optimisation scripts. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher. And the accuracy of the initial grid search and the Monte Carlo simulations is now much lower.
- Updated the CaM 2-site to rotor model frame_order.py optimisation script for the parameter reduction. The rotor frame order model axis spherical angles have now been converted to a single α angle.
- Fix for a number of the frame order models which do not have parameter constraints. This change to the grid_search() API method is similar to the previous fix for the minimise() method. The linear_constraint() function was returning A, b = [], [] for these models, but these empty numpy arrays were causing the dot product with A to fail in the grid_search() API method. These values are now caught and the constraint algorithm turned off.
- Converted the 'free rotor' frame order model to the new axis_alpha parameter system. The axis_theta and axis_phi spherical coordinates are converted to the new reduced parameter set defined by a random point in space (the CoM of all atoms), the pivot point, and a single angle α. The α parameter defines the rotor axis angle from the xy-plane.
- Parameter conversion for all of the CaM free rotor test data frame_order.py optimisation scripts. The rotor axis spherical angles have been replaced by the axis α angle defining the rotor with respect to the xy-plane.
- Modified the CaM frame order base system test script to catch a bug in the free rotor model. The axis spherical angles are no longer set for the rotor or free rotor models, as they use the α angle instead and the lack of the θ and φ parameters triggers the bug. The PDB representation of the frame order motions is also now tested for all frame order models, as it was turned off for the rigid, rotor and free rotor models and this is where the bug lies.
- Fix for the failure of the frame_order.pdb_model user function for the free rotor frame order model. This is due to the recent parameter conversion to the axis α angle.
- Eliminated the average position α Euler angle parameter from the free-rotor pseudo-ellipse model. As this frame order model is a free-rotor, the average domain position is therefore undefined and it can freely rotate about the rotor axis. One of the Euler angles for rotating to the average position can therefore be removed, just as in the free rotor and free rotor isotropic cone models.
- Eliminated the ave_pos_alpha parameter from the free rotor pseudo-ellipse model target function. The average domain position α Euler angle has already been removed from the specific analyses code and this change brings the target function into line with these changes.
- Added the full optimisation results for the 2nd rotor frame order model for the CaM test data. This is from the new frame_order.py optimisation script and the results demonstrate the stability of the rotor model.
- Added the full optimisation results for the small angle rotor CaM frame order test data. This is from the new frame_order.py optimisation script and the results demonstrate the stability of the rotor model, even when the rotor is as small as 1 degree.
- Fix for the free rotor PDB representation created by the frame_order.pdb_model user function. The simulation axes were being incorrectly generated from the θ and φ angles, which no longer exist as they have been replaced by the α angle.
- Added the full optimisation results for the free rotor pseudo-ellipse frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
- Added the full optimisation results for the rotor frame order model. This is for the 2-site CaM test data using the new frame_order.py optimisation script.
- The CaM frame order data generation base script now uses lib.compat.norm(). This is to allow the test suite to pass on systems with old numpy versions whereby the numpy.linalg.norm() function does not support the new axis argument.
- Modified the pymol.cone_pdb and pymol.frame_order user functions to use PyMOL IDs. The PyMOL IDs are used to select individual objects in PyMOL rather than all objects so that the subsequent PyMOL commands will only be applied to that object. This allows for multiple objects to be handled simultaneously.
- Added the full optimisation results for the free rotor frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
- Added the full optimisation results for the 2nd free rotor frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
- Added the full optimisation results for the free rotor frame order model with missing data. This is for the CaM test data using the new frame_order.py optimisation script.
- Added a script for recreating the frame order PDB representation and displaying it in PyMOL. This is for the optimised results.
- Fixes for the rotor object created by the frame_order.pdb_model user function. The rotor is now also shown for the free rotor pseudo-ellipse, despite it being a useless model, and the propeller blades are no longer staggered for all the free rotor models so that two circles are no longer produced.
- Updated the free rotor and 2nd free rotor PDB representations using the represent_frame_order.py script. This is for the CaM frame order test data.
- Reparameterisation of the double rotor frame order model. The two axes defined by spherical angles have been replaced by a full eigenframe and the second pivot has been replaced by a single displacement along the z-axis of the eigenframe.
- Removed the 2nd pivot point infrastructure from the frame order analysis. The 2nd pivot is now defined via the pivot_disp parameter.
- Added the 2nd rotor axis torsion angle to the list of frame order parameters. This is for the double rotor model.
- Comment fixes for the eigenframe reconstruction in the frame order target functions.
- Converted the double rotor frame order model target function to use the new parameterisation.
- Fix for the PDB representation generated by frame_order.pdb_model for the free rotor pseudo-ellipse.
- Fix for the Frame_order.test_rigid_data_to_free_rotor_model system test. As the free rotor has undergone a reparameterisation, the chi-squared value is now higher. The value is reasonable as the free rotor can never model the rigid system.
- Removed the structure loading and transformation from the CaM frame order system tests. This was mimicking the old behaviour of the auto-analysis. However as that behaviour has been shifted into the backend of the frame_order.pdb_model user function, which is called by these system tests as well, the code is now redundant and is wasting test suite time.
- Removed the setting of the second pivot point in the CaM frame order system tests. The second pivot point has been removed from the double rotor frame order model to eliminate parameter redundancy, so no models now have a conventional second pivot.
- Modified the CaM frame order system test base script to test alternative code paths. The pivot point was fixed in all tests, so the code in the target functions behind the pivot_opt flag was not being tested. Now, for those system tests where the calc rather than minimise user function is called, the pivot is no longer fixed so that this code is executed.
- Simplification and clean up of the RDC and PCS flags in the frame order target functions. The per-alignment flags have been removed and replaced by a global flag for all data. This accidentally fixes a bug when only RDCs are present, as the calc_vectors() method was being called when it should not have been.
- Speedup and simplifications for the vector calculations used for the PCS numerical integration. This has a minimal effect on the total speed as the target function calc_vectors() method is not the major bottleneck - the slowest part is the quasi-random numerical integration. However the changes may be useful for speeding up the integration later on. The 3D pivot point, average domain rotation pivot, and paramagnetic centre position arrays are now converted into rank-2 arrays in __init__() where the first dimension corresponds to the spin. Each element is a copy of the 3D array. These are then used for the calculation of the pivot to atom vectors, eliminating the looping over spins. The numpy add() and subtract() ufuncs are used together with the out argument for speed and to avoid temporary data structure creation and deletion. The end result is that the calculated vector structure is transposed, so that the first dimension corresponds to the spins. The changes required minor updates to a number of system tests. The target functions themselves had to be modified so that the pivot is converted to the larger structure when optimised, or aliased.
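  A minimal sketch of this vectorisation pattern, using illustrative array names rather than the actual relax target function attributes:

  ```python
  # Pre-expand the 3D pivot into a rank-2 array (one copy per spin), then
  # compute all pivot-to-atom vectors in a single ufunc call.  The out
  # argument avoids the creation and deletion of temporary arrays.
  from numpy import float64, subtract, tile, zeros

  N = 100                                  # number of spins
  atom_pos = zeros((N, 3), float64)        # atomic positions, one row per spin
  pivot = tile([1.0, 2.0, 3.0], (N, 1))    # rank-2 pivot array, one copy per spin
  vectors = zeros((N, 3), float64)         # pre-allocated pivot-to-atom vectors

  subtract(atom_pos, pivot, out=vectors)   # replaces the per-spin loop
  ```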
- Added a script for timing different ways to calculate PCSs and RDCs for multiple vectors. This uses the timeit module rather than profile to demonstrate the speed of 7 different ways to calculate the RDCs or PCSs for an array of vectors using numpy. In the frame order analysis, this is the bottleneck for the quasi-random numerical integration of the PCS. The log file shows a potential 1 order of magnitude speedup between the 1st technique, which is currently used in the frame order analysis, and the 7th and last technique. The first technique loops over each vector, calculating the PCS. The last expands the PCS/RDC equation of the projection of the vector into the alignment tensor, and calculates all PCSs simultaneously.
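  A hedged sketch of the two extremes compared by such a script - a per-vector loop versus projecting all vectors into the alignment tensor at once - using mock data (the actual 7 techniques and timings are in the script and log file mentioned above):

  ```python
  from timeit import timeit
  from numpy import array, dot, einsum
  from numpy.random import rand

  A = rand(3, 3)          # mock alignment tensor (not symmetrised or traceless here)
  vects = rand(200, 3)    # mock array of unit-like vectors

  def loop():
      # Technique 1 style: one projection per vector.
      return array([dot(v, dot(A, v)) for v in vects])

  def vectorised():
      # Fully expanded projection: all values calculated simultaneously.
      return einsum('ij,jk,ik->i', vects, A, vects)

  print("loop:       %.3f s" % timeit(loop, number=1000))
  print("vectorised: %.3f s" % timeit(vectorised, number=1000))
  ```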
- Added another timing script for RDC and PCS calculation timings. This time, the calculation for multiple alignments is now being timed. An additional set of methods for calculating the values via tensor projections has been added. For 5 alignments and 200 vectors, this demonstrates a potential 20x speedup for this part of the RDC/PCS calculation. Most of this speedup should be obtainable for the numerical PCS integration in the frame order models.
- Small speedup for all of the frame order models. The PCS averaging in the quasi-random numerical integration functions now uses the multiply() and divide() numpy methods to eliminate a loop over the alignments. For this, a new dimension over the spins was added to the PCS constant calculated in the target function __init__() method. In one test of the pseudo-ellipse, the time dropped from 191 seconds to 172.
- Added another timing script for helping with speeding up the frame order analysis. This is for the part where the rotation matrix for each Sobol' integration point is shifted into the eigenframe.
- Python 3 fix for the CaM frame order system test base script.
- Added the full optimisation results for the torsionless isotropic cone frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
- Small speedups for all of the frame order models in the quasi-random numerical PCS integration. These changes result in an ~10% speedup. Testing via the func_pseudo_ellipse() target function using the relax profiling flag, the time for one optimisation decreased from 158 to 146 seconds. The changes consist of pre-calculating all rotations of the rotation matrix into the motional eigenframe in one mathematical operation rather than one operation per Sobol' point rotation, unpacking the Sobol' points into the respective angles prior to looping over the points, and taking the absolute value of the torsion angle and testing if it is out of the bounds rather than checking both the negative and positive values.
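  An illustrative sketch of the batch frame-shift idea, assuming mock data and a simple eigenframe convention (the exact convention used in the relax target functions is not reproduced here):

  ```python
  from numpy import einsum, eye, tensordot
  from numpy.random import rand

  N = 1000
  R_sobol = rand(N, 3, 3)   # one rotation matrix per Sobol' point (mock data)
  frame = eye(3)            # motional eigenframe

  # All N frame shifts in one operation, equivalent to
  # frame.T @ R_sobol[i] @ frame for each i.
  R_eigen = einsum('ij,njk,kl->nil', frame.T, R_sobol, frame)

  # The post-multiplication step alone, via tensordot (contracting the last
  # axis of R_sobol with the first axis of frame).
  R_post = tensordot(R_sobol, frame, axes=1)
  ```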
- Attempt at speeding up the torsionless pseudo-ellipse frame order model. The check if the Sobol' point is outside of an isotropic cone defined by the largest angle θy is now performed to avoid many unnecessary calls to the tmax_pseudo_ellipse() function. This however reveals a problem with the test suite data for this model.
- Updated all of the CaM frame order system tests for the recent speedup. The speedup switched to the use of numpy.tensordot() for shifting each Sobol' rotation into the eigenframe rather than the previous numpy.dot(). Strangely this affects the precision and hence the chi-squared value calculated for each system test - both increasing and decreasing it randomly.
- The frame order target function calc_vectors() method arguments have all been converted to keywords. This is in preparation for handling a second pivot argument for the double rotor model.
- Updated the double rotor frame order model to be in a pseudo-functional state. Bugs in the target function method have been removed, the calc_vectors() target function method now accepts the pivot2 argument (but does nothing with it yet), and the lib.frame_order.double_rotor module has been updated to match the logic used in all other lib.frame_order modules.
- The frame_order.pdb_model user function no longer tries to create a cone object for the double rotor.
- Added a timeit script and log file for different ways of checking a binary numpy array.
- Modified the rigid_test.py system test script to really be the rigid case. This is used in all of the Frame_order.test_rigid_data_to_*_model system tests. Previously the parameters of the dynamics were set close to zero, to catch the cases where a few Sobol' PCS integration points were accepted. Now the case where no Sobol' points can be used is being tested. This checks a code path currently untested in the test suite, demonstrating many failures.
- Fix for the frame order matrix calculation for a pseudo-elliptic cone with angles of zero degrees. The lib.frame_order.pseudo_ellipse_torsionless.compile_2nd_matrix_pseudo_ellipse_torsionless() function has been changed to prevent a divide by zero failure. The surface area normalisation factor now defaults to 0.0.
- Fixes for the PCS numeric integration of all frame order models in the rigid case. The exact PCS values for the rigid state are now correctly calculated when no Sobol' points lie within the motional model. The identity matrix is used to set the rotation to zero, and the PCS values are now multiplied by the constant.
- Updates for the chi-squared value in all the Frame_order.test_rigid_data_to_*_model system tests. This is now much reduced as the true rigid state is now being tested for.
- The rigid frame order matrix for the pseudo-ellipse models is now correctly handled. This allows the rigid case RDCs to be correctly calculated for both the pseudo-ellipse and torsionless pseudo-ellipse models. The previous catch of the θx cone angle of zero was incorrectly recreating the frame order matrix, which really should be the identity matrix. However truncation artifacts due to the SciPy quadrature integration still cause the model to be ill-conditioned near the rigid case. The rigid case is correctly handled, but a tiny shift of the parameters off zero causes a discontinuity.
- Updates for the Frame_order.test_rigid_data_to_pseudo_ellipse*_model system tests. The chi-squared value now matches the rigid model.
- Large increase in precision for the CaM frame order torsionless pseudo-ellipse model test data set. In addition, the θx and θy angles have also been swapped so that the new constraint of 0 ≤ θx ≤ θy ≤ π built into the analysis is satisfied. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and numpy.float128 data averaging has been used. The algorithm for finding suitable random domain positions within the motional limits has been changed as well, by extracting the θ and φ tilt angles from the random rotation, dropping the torsion angle σ, and reconstructing the rotation from just the tilt angles. This increases the speed of the data generation script by at least 5 orders of magnitude.
- Changed the parameter values for the Frame_order.test_cam_pseudo_ellipse_torsionless* system tests. The θx and θy angles are now swapped. The chi-squared values are now also lower in the 3 system tests as the data is now of much higher precision.
- Speedup for the frame order analyses when only one domain is aligned. When only one domain is aligned, the reverse Ln3+ to spin vectors for the PCS are no longer calculated. For most analyses, this should significantly reduce the number of mathematical operations required for the quasi-random Sobol' point numerical integration.
- Support for the 3 vector system for double motions has been added to the frame order analysis. This is used for the quasi-random Sobol' numeric integration of the PCS. The lanthanide to atom vector is the sum of three parts: the 1st pivot to atom vector rotated by the 1st mode of motion; the 2nd pivot to 1st pivot vector rotated by the 2nd mode of motion (together with the rotated 1st pivot to atom vectors); and the lanthanide to second pivot vector. All these vectors are passed into the lib.frame_order.double_rotor.pcs_numeric_int_double_rotor() function, which passes them to the pcs_pivot_motion_double_rotor() function where they are rotated and reconstructed into the Ln3+ to atom vectors.
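  A hedged sketch of this three-vector sum for a single spin, with illustrative variable names and identity rotations standing in for the two modes of motion:

  ```python
  from numpy import array, dot, eye

  R1 = eye(3)                             # rotation for the 1st mode of motion
  R2 = eye(3)                             # rotation for the 2nd mode of motion
  r_piv1_atom = array([10.0, 0.0, 0.0])   # 1st pivot to atom vector
  r_piv2_piv1 = array([0.0, 0.0, 20.0])   # 2nd pivot to 1st pivot vector
  r_ln_piv2 = array([5.0, 5.0, 5.0])      # lanthanide to 2nd pivot vector

  # Rotate the pivot1-to-atom vector by the 1st mode, rotate the combined
  # vector by the 2nd mode, then add the lanthanide to pivot2 part.
  r_ln_atom = r_ln_piv2 + dot(R2, r_piv2_piv1 + dot(R1, r_piv1_atom))
  ```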
- Fully implemented the double rotor frame order model for PCS data. Sobol' quasi-random points for the numerical integration are now generated separately for both torsion angles, and two separate sets of rotation matrices for both angles for each Sobol' point are now pre-calculated in the create_sobol_data() target function method. The calc_vectors() target function method has also been modified as the lanthanide to pivot vector is to the second pivot in the double rotor model rather than the first. The target function itself has been fixed as the two pivots were mixed up - the 2nd pivot is optimised and the inter-pivot distance along the z-axis gives the position of the 1st pivot. For the lib.frame_order.double_rotor module, the second set of Sobol' point rotation matrices corresponding to sigma2, the rotation about the second pivot, is now passed into the pcs_numeric_int_double_rotor() function. These rotations are frame shifted into the eigenframe of the motion, and then correctly passed into pcs_pivot_motion_double_rotor(). The elimination of Sobol' points outside of the distribution has been fixed in the base pcs_numeric_int_double_rotor() function and now both torsion angles are being checked.
- Fix for the unpacking of the double rotor frame order parameters in the target function. This is for when the pivot point is being optimised.
- Created a new synthetic CaM data set for the double rotor frame order model. This is the same as the test_suite/shared_data/frame_order/cam/double_rotor data except that the angles have been increased from 11.5 and 10.5 degrees to 85.0 and 55.0 for the two torsion angles. This is to help in debugging the double rotor model as the original test data is too close to the rigid state to notice certain issues.
- Corrected the printout from the CaM frame order data generation base script. The number of states used in the distribution of domain positions is now correctly reported for the models with multiple modes of motion.
- Created a frame order optimisation script for the CaM double rotor test suite data. This is the script used for testing the implementation, it will not be used in the test suite.
- Created the Frame_order.test_rigid_data_to_double_rotor_model system test. This shows that the double rotor model works perfectly when the domains of the molecule are rigid.
- Fix for the frame order target functions for when no PCS data is present. In this case, the self.pivot structure was being created as an empty array rather than a rank-2 array with dimensions 1 and 3. This was causing the rotor models to fail, as this pivot is used to recreate the rotation axis.
- Fix for the CaM double rotor frame order system tests. The torsion angle cone_sigma_max is a half angle, therefore the full angles from the data generation script are now halved in the system test script.
- Created 3 frame order system tests for the new large angle double rotor CaM synthetic data. These are the Frame_order.test_cam_double_rotor_large_angle, Frame_order.test_cam_double_rotor_large_angle_rdc, and Frame_order.test_cam_double_rotor_large_angle_pcs system tests.
- Added the full optimisation results for the torsionless pseudo-ellipse frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
- Added the full optimisation results for the 2nd free rotor isotropic cone frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
- Small fix for the large angle CaM double rotor frame order model synthetic test data. The way the rotation angle was calculated was slightly out due to integer truncation. The integers are now converted to floats in the generate_distribution.py script and all of the PCS and RDC data averaged over ~20 million states has been recalculated.
- Added proper support for the double rotor frame order models to the system test scripts. This is for the CaM synthetic data. The base script can now handle the current parameterisation of the double rotor model with a single pivot, an eigenframe, and the second pivot defined by a displacement along the z-axis. The scripts for the double_rotor and double_rotor_large_angle data sets have been changed to use this parameterisation as well.
- Attempt at implementing the 2nd degree frame order matrix for the double rotor model. This is required for the RDC.
- The second torsion angle is now printed out for the frame order system tests. This is in the system test class mesg_opt_debug() method and allows for better debugging of the double rotor models.
- Fix for the Frame_order.test_cam_double_rotor_large_angle* system tests. The system test script was pointing to the wrong data directory.
- The double rotor frame order system tests are no longer blacklisted.
- Updated the chi-squared values being checked for the double rotor frame order system tests.
- Shifted the frame order geometric representation functions into their own module. This is the new specific_analyses.frame_order.geometric module.
- The frame order geometric representation functions are no longer PDB specific. Instead the format argument is accepted. This will allow different formats to be supported in the future. Because of this change, all specific_analyses.frame_order.geometric.pdb_*() functions have been renamed to create_*().
- Created an auxiliary function for automatically generating the pivots of the frame order analysis. This is the new specific_analyses.frame_order.data.generate_pivot() function. It will generate the 1st or 2nd pivot, hence supporting both the single motion models and the double motion double rotor model.
- Shifted the rotor generation for the frame order geometric representation into its own function. This is the specific_analyses.frame_order.geometric.add_rotors() function which adds the rotors as new structures to a given internal structural object. The code has been extended to add support for the double rotor model.
- Fix for the pivots created by the specific_analyses.frame_order.data.generate_pivot() function. This is for the double rotor model where the 1st mode of motion is about the 2nd pivot, and the 2nd mode of motion about the 1st pivot.
- Fixes for the cone geometric representation in the internal structural object. The representation can now be created if the given MoleculeContainer object is empty.
- Refactored the frame order geometric motional representation code. The code of the specific_analyses.frame_order.geometric.create_geometric_rep() function has been spun out into 3 new functions: add_rotors(), add_axes(), and add_cones(). This is to better isolate the various elements to allow for better control. Each function now adds the atoms for its geometric representation to a separate molecule called 'axes' or 'cones'. The add_rotors() function does not create a molecule as the lib.structure.represent.rotor.rotor_pdb() function creates its own. As part of the refactorisation, the neg_cone flag has been eliminated.
- Renamed the residues of the rotor geometric object representation. The rotor axis atoms now belong to the RTX residue and the propeller blades to the RTB residue. The 'RT' at the start represents the rotor and this will allow all the geometric objects to be better isolated.
- Improvements to the internal structural object _get_chemical_name() method. This now uses a translation table to convert the hetID or residue name into a description, for example as used in the PDB HETNAM records to give a human readable description of the residue inside the PDB file itself. The new rotor RTX and RTB residue names have been added to the table as well.
- Renaming of the residues of the cone geometric representation. The cone apex or centre is now the CNC residue, the cone axis is now CNX and the cone edge is now CNE. These used to be APX, AXE, and EDG respectively. The aim is to make these names 100% specific to the cone object so that they can be more easily selected for manipulating the representation and so that they are more easily identifiable. The internal structural object _get_chemical_name() function now returns a description for each of these. Note that the main cone object is still named CON.
- The motional pivots for the frame order models are now labelled in the geometric representation. The pivot points are now added as a new molecule called 'pivots' in the frame_order.pdb_model user function. The atoms all belong to the PIV residue. The pymol.frame_order user function now selects this residue, hides its atoms, and then shows the atom name 'Piv' as the label. For the double rotor model, the atom names 'Piv1' and 'Piv2' are used to differentiate the pivots.
- Renamed the lib.structure.represent.rotor.rotor_pdb() function to rotor(). This function is not PDB specific and it just creates a 3D structural representation of a rotor object.
- Added support for labels in the rotor geometric object for the internal structural object. The labels are created by the frame_order.pdb_model user function backend. For the double rotor model, these are 'x-ax' and 'y-ax'. For all other models, the label is 'z-ax'. The labels are then sent into the lib.structure.represent.rotor.rotor() function via the new label argument. This function adds two new atoms to the rotor molecule which are 2 Angstrom outside of the rotor span and lying on the rotor axis. These then have their atom name set to the label. The residue name is set to the new RTL name which has been added to the internal structural object _get_chemical_name() method to describe the residue in the PDB file for the user. Finally the pymol.frame_order user function selects these atoms, hides them and then labels them using the atom name (x-ax, y-ax, or z-ax).
- Modified the rotor representation generated by the pymol.frame_order user function. This is to make the object less bulky.
- Redesign of the axis geometric representation for the frame order motions. This is now much more model dependent to avoid clashes with the rotor objects and other representations: For the torsionless isotropic cone, a single z-axis is created; For the double rotor, a single z-axis is produced connecting the two pivots, from pivot2 to pivot1; For the pseudo-ellipse and free rotor pseudo-ellipse, the x and y-axes are created; For the torsionless pseudo-ellipse, all three x, y and z-axes are created; For all other models, no axis system is produced as this has been made redundant by the rotor objects.
- Fixes for the cone geometric object created by the frame_order.pdb_model user function. This was broken by the code refactoring and now works again for the pseudo-ellipse models.
- Fix for the pymol.frame_order user function. The representation function for the rotor objects was hiding all parts of the representation, hence the pivot labels were being hidden. To fix this, the hiding of the geometric object now occurs in the base frame_order_geometric() function prior to setting up the representations for the various objects.
- Started to redesign the frame_order.pdb_model user function. Instead of having the positive and negative representations in different PDB models, and the Monte Carlo simulations in different molecules, these will now all be shifted into separate files. For this to be possible, the file root rather than file names must now be supplied to the frame_order.pdb_model user function. To allow for different file compression, the compress_type argument is now used. The backend code correctly handles the file root change, but the multiple files are not created yet.
- Python 3 fixes using the 2to3 script. Fatal changes to the multi.processor module were reverted.
- Improvements to the lib.structure.represent.rotor.rotor() function for handling models. The 'rotor', 'rotor2', or 'rotor3' molecule name determination is now also model specific.
- The frame order generate_pivot() function can now return the pivots for Monte Carlo simulations. This is the specific_analyses.frame_order.data.generate_pivot() function. The sim_index argument has been added to the function which will allow the pivots from the Monte Carlo simulations to be returned. If the pivot was fixed, then the original pivot will be returned instead.
- Test suite fixes for the recent redesign of the frame_order.pdb_model user function.
- Fixes for the frame_order.pdb_model user function for the rotor and free rotor models.
- Redesign of the geometric object representation part of the frame_order.pdb_model user function. The positive and negative representations of the frame order motions have been separated out into two PDB files rather than being two models of one PDB file. This will help the user understand that there are two identical representations of the motions, as both will now be displayed rather than having to understand the model concept of PyMOL. The file root is taken, for example 'frame_order', and the files 'frame_order_pos.pdb' and 'frame_order_neg.pdb' are created. If no inverse representation exists for the model, the file 'frame_order.pdb' will be created instead. The Monte Carlo simulations are now also treated differently. Rather than showing multiple vectors in the axes representation component within one molecule in the same file as the frame order representation, these are now in their own file and each simulation is now a different model. If an inverse representation is present, then the positive representation will go into the file 'frame_order_sim_pos.pdb', for example, and the negative representation into the file 'frame_order_sim_neg.pdb'. Otherwise the file 'frame_order_sim.pdb' will be created.
- Clean up of the frame_order.pdb_model user function definitions. Some elements were no longer of use, and some descriptions have been updated.
- Redesign of the pymol.frame_order user function to match the redesign of frame_order.pdb_model. The file names are no longer given but rather the file root. Then all PDB files matching that file root in the given directory will be loaded into PyMOL.
- Updated all of the frame order scripts for the frame_order.pdb_model and pymol.frame_order changes. These are the scripts for the CaM frame order test data.
- Redesign of the average domain position part of the frame_order.pdb_model user function. The Monte Carlo simulations are now represented. If the file root is set to the default of 'ave_pos', then these will be placed in the file 'ave_pos.pdb', or a compressed version. Each simulation is in a different model, matching the geometric representation '*_sim.pdb' files. The original structure is copied for each model, and then rotated to the MC simulation average position.
- Change all of the domain user function calls in the frame order CaM test data scripts. The domains are now identified by the molecule name rather than the range of residues. This allows non-protein atoms, for example the Ca2+ atoms, to be rotated to the average domain position as well.
- The PyMOL disable command is now used by the pymol.frame_order user function. This is to first disable all PyMOL objects prior to loading anything, to hide the original structures and any previous frame order representations, and then to hide all of the Monte Carlo simulation representations. This is to simplify the picture initially presented to the user while still allowing all elements to be easily found.
- The pymol.frame_order user function now centers and zooms on all objects.
- Simplified the PyMOL view commands in all of the CaM test data optimisation scripts. The pymol.view user function is not necessary as the PyMOL GUI will be launched by the pymol.frame_order user function. And the pymol.command user function call for running the 'hide all' command is also now redundant.
- Removed all remaining uncompressed PDB files from the CaM test data directories. These were complicating the debugging of the pymol.frame_order user function, as they were being loaded on top of the compressed versions.
- Removed some rotation files from the CaM frame order test data directories. These files are no longer of any use and just take up large amounts of room for nothing.
- Added titles to the frame order geometric representation PDB files from frame_order.pdb_model. These are in the form of special Ti atoms placed 40 Angstrom away from the pivot along the z-axis of the system, or shifted 3 more Angstrom for the Monte Carlo simulations. These are used to label the alternative representations or the Monte Carlo simulation representations. The residue type is set to TLE and this has been registered in the internal structural object. The pymol.frame_order user function now calls the represent_titles() function to select these atoms, hide them, and then add a long descriptive title. The atom name is used to distinguish between different titles.
- Changed the alternative representation names for the frame order geometric objects. The aim is to put both representations on a more equal footing, as they are identical solutions. Hence the inverted representation might be the correct representation of the domain motions. So instead of calling these 'positive' and 'negative', the 'A' and 'B' notation will be used. This affects the names of the files produced by the frame_order.pdb_model user function as well as the internal titles. Instead of ending the files with "*_pos.*" and "*_neg.*", these have been changed to "*_A.*" and "*_B.*". The atoms used for the titles have also been renamed, and the pymol.frame_order user function now labels the titles using the 'A' and 'B' notation.
- Changes to the rotor object in the frame order geometric representations. For the isotropic and pseudo-elliptic cone models, the rotor is now halved. Instead of having two axes radiating from the central pivot and terminating in the propeller blades, now only the positive axis is shown lying in the centre of the cone.
- Fixes for the MC simulation rotor objects in the frame order geometric representation. The axes of the Monte Carlo simulation rotors objects were being set to the original values and not to the simulation values.
- Fixes for the titles in the frame order geometric representation from frame_order.pdb_model. There were a few bugs for a number of the frame order models preventing this code from working.
- Redesign of the geometric representation of the cone structural objects to allow for models. The old representation was not compatible with the PDB model concept whereby each model must have the same number of atoms. To handle this situation, the cone objects, specifically the cone cap, have been simplified. The old behaviour was to remove all points outside of the cone when creating the cone cap, and then to stitch the cap to the cone edge in a subsequent step. Now all points outside of the distribution are instead shifted to the cone edge. This avoids the need to stitch the cap to the edge and means that all cones with the same inc value will have the same number of atoms. The cones for the pseudo-ellipses are not as nice, as the latitudinal lines are not straight at the cone edge, but at least creating multiple models with different cone sizes is now possible.
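  An illustrative sketch of this clamping idea - grid points outside the cone are shifted onto the cone edge rather than removed, so every model keeps the same atom count. The theta_max() function below is only a stand-in for the model-specific cone edge, here a pseudo-elliptic opening:

  ```python
  from math import cos, pi, sin, sqrt

  def theta_max(phi, theta_x=0.5, theta_y=1.0):
      """Stand-in pseudo-elliptic cone opening for the azimuthal angle phi."""
      return 1.0 / sqrt((cos(phi) / theta_x)**2 + (sin(phi) / theta_y)**2)

  inc = 10
  points = []
  for i in range(inc):
      phi = 2.0 * pi * i / inc
      for j in range(inc + 1):
          theta = pi * j / inc
          # Shift points outside the distribution onto the cone edge.
          theta = min(theta, theta_max(phi))
          points.append((sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta)))
  ```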
- Bug fix for the y-axis rotation matrix for the double rotor Sobol' integration points. The matrix was inverted.
- Updated the frame order system test chi-squared values for the previous fix.
- Fixes for the double rotor frame order system tests for the CaM synthetic data. The torsion angles needed to be swapped and the pivot point changed from the C terminal domain CoM to the N domain CoM.
- More fixes for the double rotor frame order system tests for the CaM synthetic data. The eigensystem was inverted.
- Updated the χ2 check for the large angle double rotor frame order system tests. This is needed for the eigenframe fix.
- Updates for the frame order system tests for the float32 to float64 change. Some chi-squared values have slightly changed.
- The CaM frame order test data optimisation scripts now save more state files. The state of the true dynamics and the fixed pivot optimisation results are now stored as well. This might be useful for extracting these results without redoing the calculations.
- The script for representing the frame order dynamics for the CaM test data has been updated. The domains of the system are now defined.
- Changed the CaM frame order test data superimposition values. Because the domains are now defined via the molecule name rather than the residue numbers, the centroid of rotation set to the CoM has been shifted, as the Ca2+ ions are now included in the CoM calculation. Therefore the superimpose.py script has been updated to not delete the Ca atoms. All of the frame order optimisation scripts have been updated with the new rotation Euler angles and translation vector. To match this, the system test base script for the CaM frame order test data has also had its rotations and translations updated, and the domain user function call changed to use molecule names.
- Updated all of the CaM frame order system test chi-squared values. These have changed slightly due to the rotation and translation changes.
- Added support for the 'pivot_disp' frame order parameter to the grid search. This is required for the double rotor model.
- Changed some of the default values for the frame order auto-analysis. The number of Sobol' quasi-random integration points was far too low to obtain any reasonable results.
- Simplified the PyMOL visualisation relax script created by the frame order auto-analysis. This now consists of a single pymol.frame_order user function call. The other pymol user function calls were unnecessary.
- Added the full optimisation results for the large angle double rotor frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
- Added model support for the rotor geometric object. This is the structural object used in the frame order analysis to create PDB representations of rotor motions. The number of atoms created for the rotor is now constant, allowing for models whereby the atom number and connectivity must be preserved between all models.
- Changed the grid search pivot displacement frame order parameter. Instead of searching from 0 to 50 Angstroms, the search is now from 10 to 50. This is to avoid the edge case of pivot_disp = 0.0 from which the optimisation cannot escape.
- Speedup of the PCS component of the rigid frame order model. The lanthanide to atom vectors are now being calculated outside of the alignment tensor and spin loops, as well as the inverse vector lengths to the 5th power. This increases the speed by a factor of 1.216 (from 38.133 to 31.368 seconds for 23329 calls of the func_rigid() target function).
- Added the full optimisation results for the rigid frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
- Numpy ≤ 1.6 fixes for the frame order PCS code. The numpy.linalg.norm function does not have an axis argument in numpy 1.6, therefore the lib.compat.norm() function is now used instead. This function was created exactly for this axis argument problem.
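  A minimal sketch of the kind of compatibility wrapper described here (not the actual lib.compat code), falling back to a manual row-wise 2-norm when the installed numpy.linalg.norm() lacks the axis argument:

  ```python
  from numpy import sqrt, sum
  from numpy.linalg import norm as numpy_norm

  def norm(x, ord=None, axis=None):
      """2-norm with axis support, even on old numpy versions."""
      if axis is None:
          return numpy_norm(x, ord=ord)
      try:
          return numpy_norm(x, ord=ord, axis=axis)
      except TypeError:
          # Old numpy: calculate the 2-norm along the given axis by hand.
          return sqrt(sum(x**2, axis=axis))
  ```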
- Created the new specific_analyses.frame_order.variables module. This currently contains variables for all of the frame order model names, as well as various lists of these models. The rest of the frame order specific analysis code as well as the frame order user functions have been converted to use these model variables exclusively rather than having the model name strings hardcoded throughout the codebase.
- Added the full optimisation results for the double rotor test data. This is for the CaM frame order test data using the new frame_order.py optimisation script.
- Added a script for profiling the target function calls of the pseudo-ellipse frame order model.
- Added a timeit script and log file showing how numpy.cos() is 10 times slower than math.cos(). This is for single floats.
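  A quick way to reproduce this kind of comparison (exact ratios depend on the numpy version and hardware):

  ```python
  from timeit import timeit

  print("math.cos:  %.3f s" % timeit("cos(0.123)", setup="from math import cos", number=1000000))
  print("numpy.cos: %.3f s" % timeit("cos(0.123)", setup="from numpy import cos", number=1000000))
  ```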
- Shifted the calculation of the θmax cone opening for the pseudo-ellipse outside of all loops. This is an infrastructure change for potentially eliminating all of the looping for the PCS numeric integration in the future. It nevertheless slightly speeds up the pseudo-ellipse frame order model. Using 500 target function calls in the profiling_pseudo_ellipse.py script in test_suite/shared_data/frame_order/timings/, the time spent in the pcs_pivot_motion_full_qrint() function decreases from 20.849 to 20.719 seconds.
- Converted the torsionless pseudo-ellipse model to also use the tmax_pseudo_ellipse_array() function. This allows the calculation of the pseudo-elliptic cone opening θmax to be shifted outside of all loops.
- Created a profiling script and log file for the isotropic cone frame order model. This shows where the slow points of the model are, using 2000 target function calls.
- Increased the function call number to 500 in the pseudo-ellipse frame order model profiling script. The profiling log file has also been added to show where the slowness is - specifically that the numeric PCS integration takes almost the same amount of time as the RDC frame order matrix construction using the scipy.integrate.quad() function.
- Created the specific_analyses.frame_order.checks.check_pivot() function. This is to check that the pivot point has been set.
- The frame order grid search is now checking if the pivot point has been set.
- Added a profiling script and log file for the free rotor frame order model.
- Updated the frame order optimisation results for the CaM isotropic cone test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository and the old ones removed.
- Modified the script for recreating the frame order PDB representation and displaying it in PyMOL. The state loading, domain redefinition, and representation creation parts have all been removed, as these will soon all be redundant as the frame order analysis for all models is being redone. All that remains are the pymol.frame_order() function calls for displaying all the representations.
- The pivot point parameters in the frame order analysis are no longer scaled by 100. This is to match the average domain position translation which is also not scaled.
- The specific_analyses.frame_order.variables module is now used throughout the frame order code. The target function code, auto-analysis, and test suite now all use the variables defined in this module rather than having hardcoded strings. The MODEL_LIST_NONREDUNDANT variable has been created to exclude the redundant free rotor pseudo-ellipse which cannot be optimised, and this is used by the auto-analysis.
- Removal of many unused imports in the frame_order_cleanup branch. These were detected using the devel_scripts/find_unused_imports.py script which uses pylint to find all unused imports. The false positives also present in the trunk were ignored, and the unused imports in the dispersion code were left for clean up in the disp_spin_speed branch.
- Changed the minimisation in the frame order system tests where optimisation is activated. The number of iterations is now set to 1 for speed testing, and the constraints are turned on.
- Turned on the optimisation flag for the Frame_order.test_cam_free_rotor system test. This is to activate code paths currently not tested by the test suite.
- Constraints are now properly turned off in the minimise user function for the frame order analysis. The A and b matrices from linear_constraints() are now set to None if they are returned as empty arrays.
- Parallelised the frame order optimisation code to run on clusters or multi-core systems via OpenMPI. The optimisation code has been split into the three standard parts of the multi-processor: 1) Frame_order_memo is the new Memo object used to store data on the master for use when data is returned from the slaves. 2) Frame_order_minimise_command is the Slave_command which stores all required data for the optimisation, is pickled and sent to a slave, sets up the target function, and then performs the optimisation. 3) Frame_order_result_command is the Result_command initialised by the Slave_command on the slave for pickling and returning results to the master. To avoid pickling the target function class, which is not possible, the store_bc_data() and target_fn_setup() functions of the specific_analyses.frame_order.optimisation module have been redesigned to work with basic data structures rather than the target function class directly. The target_fn_setup() function no longer returns an initialised target function class, but rather all the data assembled prior to the initialisation. And the target function class was itself modified so that pcs_theta and rdc_theta are always defined, to allow the store_bc_data() function to be used successfully. This parallelisation currently only allows the Monte Carlo simulations to be run on slave processors.
- The frame order linear_constraints() function now returns None if no constraints are present. This allows the code using this to be simplified with respect to turning off the constraints.
- Improvements for the printout at the start of optimisation of the frame order models. This is in the target_fn_setup() frame order method. All the printouts are now in one place and they are now better formatted and better controlled.
- Parallelised the frame order grid search to run on clusters or multi-core systems via OpenMPI. This involved the creation of the Frame_order_grid_command class which is the multi-processor Slave_command for performing the grid search. This was created by duplicating the Frame_order_minimise_command class and then differentiating both classes. For the subdivision of the grid search, the new minfx grid.grid_split_array() function is used in the frame order grid() API method. The grid() method no longer calls the minimise() method but instead obtains the processor box itself and adds the subdivided grid slaves to the processor. The relax grid_search user function takes care of the rest.
- Fixes for the parallelised grid search for the frame order analysis. A chi-squared value check was added to the Frame_order_result_command.run() method to check if the value is lower than the current one when the result is returned to the master. Without this check, each grid subdivision result would be stored as it is returned, rather than only storing the result from the global minimum of the entire grid search.
- Added a script for testing out the parameter nesting abilities of the frame order auto-analysis. This script attempts to find the dynamics solution without knowing where the pivot is located. Hence this will be as in the auto-analysis, where this pivot point will be used as the base for all other models.
- Sent the verbosity argument to the minfx.grid.grid_split() function for the frame order analysis. This matches the relax trunk changes for the model-free analysis. The minfx function in the next release (1.0.8) will now be more verbose, so this will help with user feedback when running the model-free analysis on a cluster or multi-core system using MPI.
- Improvements for the parallelised grid search for the frame order analysis. As each grid point can take wildly different numbers of CPU cycles to calculate the chi-squared value for, the result of subdividing the grid search was that some subdivisions were incredibly quick while others required much larger amounts of time. To avoid this bad slave management, the grid points are now randomised, so that the subdivisions will require about the same amount of time to optimise.
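  A hedged sketch of this load-balancing idea - shuffling the grid points before splitting them into per-slave chunks - using plain Python rather than the actual minfx grid subdivision code:

  ```python
  from random import shuffle

  def split_grid(points, n_slaves):
      """Randomise the grid points and divide them evenly between the slaves."""
      points = list(points)
      shuffle(points)
      return [points[i::n_slaves] for i in range(n_slaves)]

  subdivisions = split_grid(range(10000), n_slaves=8)
  ```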
- Moved the setup of the target function data structures in the frame order analysis. This is for the grid_search and minimise user functions. The target function data setup function has been renamed to target_fn_data_setup(). This is now called before the Frame_order_grid_command and Frame_order_minimise_command multi-processor objects are initialised, and all of the data is now passed into these functions. Although the code is uglier, this has the benefit that the target_fn_data_setup() function will only be called once. This data setup requires a lot of time, so for a large cluster, this can be a large time saving for the grid search.
- Modified the frame_order_free_start.py script to better mimic the frame order auto-analysis.
- Updated the frame order optimisation results for the 2nd CaM free rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Updated the frame order optimisation results for the CaM free rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Updated the frame order optimisation results for the CaM missing data free rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Updated the frame order optimisation results for the CaM free rotor isotropic cone test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Updated the frame order optimisation results for the CaM small angle rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Updated the frame order optimisation results for the 2nd CaM free rotor isotropic cone test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Updated the frame order optimisation results for the CaM pseudo-ellipse test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Updated the frame order optimisation results for the CaM torsionless isotropic cone test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Updated the frame order optimisation results for the 2nd CaM pseudo-elliptic cone test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Some more fixes for the optimisation user function changes.
- Removed the parameter scaling for the pivot point frame order parameters. These were already removed from the frame_order_cleanup branch in the assemble_scaling_matrix() function, however they were reintroduced accidentally via the parameter object where this information is now defined. So this removes the scaling a second time.
- Fixes for the parameter scaling changes in the trunk. The scaling flag is no longer part of the specific analysis API optimisation methods. Instead the pre-assembled scaling matrices are passed into all three API optimisation methods.
- Implemented the frame order specific analysis API method print_model_title(). This is simply aliased from the API common method _print_model_title_global().
- Fix for the grid search in the frame order analysis. This is a recently introduced problem due to the changes of the zooming_grid_search branch.
- Turned on the optimisation in the Frame_order.test_cam_rigid system test. This is to catch a number of failures in the frame order grid search.
- Activated the grid search in the frame order system tests using the CaM synthetic data. This is set to one increment so that the tests can complete in a reasonable time.
- Fix for the specific_analyses.frame_order.optimisation.grid_row() function. This can now handle the case of a single grid increment. The change is similar to r163 in the minfx project.
- Converted the frame_order_free_start.py script to use the zooming grid search.
- Added lots of calls to the time user function to the frame_order_free_start.py. This will be used to fine tune the frame order analysis on a cluster.
- Increased the default grid bounds for the pivot parameters of the frame order models. The pivot point is now searched for in a 50 Angstrom box and the pivot displacement for the double motion models from 10 to 60 Angstroms. These were originally a 20 Angstrom box and 10 to 50 Angstroms. The larger grid is possible when combined with the new zooming grid search.
- Updated the frame order optimisation results for the 2-site CaM test data fitting to the rotor model. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Updated the frame order optimisation results for the CaM rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Updated the frame order optimisation results for the 2nd CaM rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Fixes for the CaM free-rotor pseudo-ellipse frame order model test data set. This is for the constraint 0 ≤ θx ≤ θy ≤ π, as the old data was created with θx > θy. The new data is also of high quality using 20 million structures and numpy.float128 data averaging.
- Created the lib.frame_order.rotor_axis.convert_axis_alpha_to_spherical() function. This will convert the axis α angle to the equivalent spherical angles θ and φ.
- Renamed the lib.frame_order.rotor_axis module to lib.frame_order.conversions. This module will be used for all sorts of frame order parameter conversions.
- Added the pipe_name argument to the specific_analyses.frame_order.data.generate_pivot() function. This allows the pivot from data pipes other than the current one to be assembled and returned.
- Updated the frame order optimisation results for the CaM free rotor, pseudo-ellipse test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Updated the frame order optimisation results for the CaM torsionless, pseudo-ellipse test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
- Fix for the Frame_order.test_cam_pseudo_ellipse_free_rotor system test. This is for the change of the X and Y cone opening angles.
- Redesign and expansion of the nested model parameter copying in the frame order auto-analysis. The nested parameter protocol, used to allow the analysis to complete in under 1,000,000 years, was no longer functional due to the switch to the axis α parameter to decrease parameter number and redundancy. The copying of the average domain position for the free rotor models was also incorrect, as the dropping of the α Euler angle caused the translation parameters and the β and γ angles to change drastically. The new protocol has been split into four methods for the average domain position, the pivot point, the motional eigenframe and the parameters of ordering. These use the fact that the free rotor and torsionless models are the two extrema of the models where the torsion angle is restricted. The pivot copying is a new addition.
- Created the Frame_order.test_auto_analysis system test. This will be an extremely quick run through of the frame order auto-analysis as this is not currently tested. 1 Sobol' quasi-random integration point will be used for all models for speed. The system test uses the rigid CaM test data to perform a full analysis.
- Alphabetical ordering of the imports in the frame order auto-analysis module.
- Fixes for the backend script of the Frame_order.test_auto_analysis system test. This includes a missing import and the removal of a long ago deleted user function.
- Fix for the frame order auto-analysis for the call to the grid search user function. This user function has been renamed to minimise.grid_search, however not all parts of the analysis had been converted to the new name.
- Created a method in the frame order auto-analysis to reorder the models. This is needed as the nested model parameter copying protocol requires the simpler models to be optimised first.
- The Frame_order.test_auto_analysis system test now writes all files to the directory of ds.tmpdir. This is to prevent the system test from dumping files in the current directory.
- Modified the specific_analyses.frame_order.parameters.update_model() function. This will no longer set all parameters to 0.0, excluding the pivot point.
- Modified the specific_analyses.frame_order.parameters.assemble_param_vector() function. This can now handle the case of no parameters being present. The corresponding elements of the numpy array will consist of NaN values.
- Better handling of unset parameters in the frame order optimisation functions. The specific_analyses.frame_order.optimisation.target_fn_data_setup() and specific_analyses.frame_order.parameters.assemble_param_vector() functions both now accept the unset_fail argument. This is set in both the calculate() and minimise() API methods. When set, a RelaxError will be raised in the assemble_param_vector() function when a parameter has not been set yet. This together with previous changes will prevent the frame order analysis from using 0.0 as a starting value for unset parameters.
- Fixes for all of the Frame_order.test_rigid_data_to_*_model system tests. The base script now sets all parameter values so that the minimise.calculate user function can operate. The two free rotor model chi-squared values have been updated as these are sensitive to the motional eigenframe parameter values - these models can never approximate a rigid state.
- Modified the optimisation of the rigid model in the frame order auto-analysis. The grid search is now implemented as a zooming grid search.
- Updates and fixes for the frame order auto-analysis. The custom grid setup now works for the new reduced parameter set models and the double rotor model is now also included. The cone axis α angle to spherical angle conversion has had a bug removed. And some of the printouts are now more detailed.
- Redesigned the Frame_order.test_auto_analysis system test. This now uses a hypothetical new Optimisation_settings object from the frame order auto-analysis module for holding all of the grid search, zooming grid search and minimisation settings. This will allow for far greater user control of the settings and hugely simplify the auto-analysis interface by decreasing the number of input arguments. It should also be less confusing.
- Implementation of the Optimisation_settings object in the frame order auto-analysis. This object holds all of the grid search, zooming grid search, and minimisation settings. It provides the add_grid() and add_min() methods to allow the user to add successive iterations of optimisation and settings to the object. The loop_grid() and loop_min() methods are used to loop over each iteration of each method. And the get_grid_inc(), get_grid_num_int_pts(), get_grid_zoom_level(), get_min_algor(), get_min_func_tol() and get_min_num_int_pts() methods are used to access the user defined settings. The auto-analysis has been redesigned around this new concept. All of the optimisation arguments have been replaced. Instead there are the opt_rigid, opt_subset, opt_full, and opt_mc arguments which are expected to be instances of the Optimisation_settings object. The optimisation in the auto-analysis is now more advanced in that more user optimisation settings are now available and active.
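A self-contained sketch in the spirit of the Optimisation_settings object described above; all class, method and argument names here are illustrative stand-ins rather than the real relax API:

```python
# A minimal sketch of a settings container holding successive grid search and
# minimisation iterations (names and arguments are illustrative only).
class OptimisationSettings:
    def __init__(self):
        self._grid = []   # one dict of settings per grid search iteration
        self._min = []    # one dict of settings per minimisation iteration

    def add_grid(self, inc=11, zoom=None, num_int_pts=1000):
        self._grid.append({'inc': inc, 'zoom': zoom,
                           'num_int_pts': num_int_pts})

    def add_min(self, min_algor='simplex', func_tol=1e-4, num_int_pts=1000):
        self._min.append({'min_algor': min_algor, 'func_tol': func_tol,
                          'num_int_pts': num_int_pts})

    def loop_grid(self):
        return range(len(self._grid))

    def loop_min(self):
        return range(len(self._min))

    def get_grid_inc(self, i):
        return self._grid[i]['inc']

    def get_min_algor(self, i):
        return self._min[i]['min_algor']


# Successive optimisation settings are appended and later looped over by the
# auto-analysis style protocol.
settings = OptimisationSettings()
settings.add_grid(inc=21, zoom=0)
settings.add_grid(inc=21, zoom=1)
settings.add_min(min_algor='simplex', func_tol=1e-4)
for i in settings.loop_grid():
    print("Grid search %i: %i increments" % (i, settings.get_grid_inc(i)))
```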
- Added linear constraints for the pivot and average domain translation frame order parameters. The pivot coordinates are constrained between -999 and 999 Angstrom and the translation between -500 and 500 Angstrom. This allows the frame_order.pdb_model user function to operate in the case of failed models - often the free rotors fitting to torsionally restricted data - by preventing the PDB coordinates from being out of the PDB format range. It should also speed up optimisation by stopping the optimisation of failed models earlier.
- The frame order auto-analysis Optimisation_settings object now handles the maximum iterations. The new max_iter argument has been added to the add_min() method, and the new get_min_max_iter() method added to fetch the value. This is used in the auto-analysis to set the maximum number of optimisation iterations in the minimise.execute user function calls. Limiting this will be of greatest benefit for the test suite.
- Speedup of the Frame_order.test_auto_analysis system test. This involves limiting the maximum number of optimisation steps to 20 for most parts (the rigid model excluded so the average domain position is correctly found), and using the PCS subset data for the full data set.
- Updated the full_analysis.py script for the CaM frame order test data. This is for the recent changes to the auto-analysis with the Optimisation_settings object and for the changes of this branch.
- Removed the RDC data checks from the frame order optimisation. This is in the minimise_setup_rdcs() and store_bc_data() functions of the specific_analyses.frame_order.optimisation module, called before and after all optimisation. The reason was identified by profiling - this check was adding significant amounts of time to the setup and results unpacking parts of the optimisation. Specifically the interatomic_loop() function was identified via profiling as the function requiring the greatest amount of cumulative time in the Frame_order.test_auto_analysis system test (17 seconds out of a total of ~60 seconds).
- Fixes for the removal of the RDC data checks from the frame order optimisation functions. The specific analysis API method overfit_deselect() has now been created to deselect spins which do not have PCS data and interatomic data containers which do not have RDC data. Deselected spins and interatomic data containers are now also correctly handled throughout the frame order specific code.
- Enabled pivot optimisation in the full_analysis.py script for the CaM frame order test data.
- The frame order auto-analysis now calls the time user function. This is used at the start of each model section, as well as at the very start and very end of the analysis. This feedback is needed for the user to be able to optimise the optimisation settings.
- Major bugfix for the frame order auto-analysis. The algorithm of using a PCS data subset of a few selected residues to find an initial parameter estimate followed by using all PCS data was badly implemented. The use of the PCS subset caused most spin systems to be deselected, however they remained deselected once all data was being used. So the result was that only the spin subset was ever being used in the analysis.
- Fix for the recent lib.period_table and lib.physical_constants module changes.
- Created the model_directory() method for the frame order auto-analysis. This is used to create the full path for saving model specific files. It replaces spaces with underscores in the path and removes all commas. The commas in the path appear to be fatal for certain PyMOL versions when viewing the frame order representation.
- The frame order auto-analysis results printout has been extended to include the pivot point.
- Change to the parameter nesting in the frame order auto-analysis. The pivot is now taken from the rotor model for all other models. Taking the pivot point from the isotropic cone model is not a good idea as there are situations where the pivot point optimisation catastrophically fails, sending the point many tens or hundreds of Angstrom away from the molecule.
- Copied a frame order results file for testing axis permutations. This is from the test_suite/shared_data/frame_order/cam/pseudo_ellipse/ directory. The optimisation results were identified to have failed, in that the alternative minimum was found. The pseudo-ellipse model has two minima in the space, and in this case the global minimum was missed.
- Created the Frame_order.test_axis_permutation system test. This is to test the operation of the yet-to-be implemented frame_order.permute_axes user function.
- Implemented the frame_order.permute_axes user function. This is used to switch between local minima in the pseudo-elliptic frame order models.
- Fix for the Frame_order.test_axis_permutation system test. The motional eigenframe in the old log file was not exactly correct and did not correspond exactly to the Euler angles in the cam_pseudo_ellipse.bz2 results file in test_suite/shared_data/frame_order/axis_permutations/.
- Extended the Frame_order.test_axis_permutation system test to check frame_order.permute_axes twice. This will check that two calls to the frame_order.permute_axes user function will restore the original parameter values.
- The frame_order.permute_axes user function can now handle the torsionless pseudo-ellipse. This model does not have the variable cdp.cone_sigma_max set.
- Added support for axis permutations in the frame order auto-analysis. This is done by copying the data pipe of the already optimised pseudo-elliptic models, permuting the axes, and performing another optimisation using all RDC and PCS data. This allows the second solution for these pseudo-elliptic models to be found. The 2nd pipe is included in the model selection step to allow the best solution for the model to be found.
- Fix for the reading of old results files in the frame order auto-analysis. The directory name is now processed by the model_directory() method. This will convert the spaces to '_' and remove commas. Without this the already created files could not be found, if the model name contains a space or comma.
- Made the pivot point in the frame order PDB representation fail-proof. If the pivot position was outside of the bounds [-1000, 1000], the PDB file creation would fail as the record would be too long. So now the pivot is shifted to be in these bounds.
- The axis permutation step in the frame order auto-analysis is now always performed. If an old results file was found, this step was accidentally skipped.
- Added extensive printouts to the frame_order.permute_axes user function.
- Redesigned the frame_order.permute_axes user function frontend. Previously only cyclic permutations were considered, however non-cyclic permutations are also allowed when accompanied by an axis inversion. Therefore 3 combinations exist with cone_theta_x ≤ cone_theta_y, or 2 when the current combination is excluded.
- Created 6 system tests for the frame_order.permute_axes user function. This covers the 3 starting conditions (x<y<z, x<z<y, z<x<y) and the two permutations ('A' and 'B') for each of these which do not include the starting permutation. They replace the original Frame_order.test_axis_permutation system test with the tests Frame_order.test_axis_perm_x_le_y_le_z_permA, Frame_order.test_axis_perm_x_le_y_le_z_permB, Frame_order.test_axis_perm_x_le_z_le_y_permA, Frame_order.test_axis_perm_x_le_z_le_y_permB, Frame_order.test_axis_perm_z_le_x_le_y_permA, and Frame_order.test_axis_perm_z_le_x_le_y_permB.
- Implemented the new frame_order.permute_axes backend. The 3 starting conditions x<y<z, x<z<y, and z<x<y and the two permutations 'A' and 'B' (for each of these which do not include the starting permutation) are now supported. For these 6 combinations, the axis and order parameter permutation and the z-axis inversion are selected and applied to the current system.
- Removed the second permutation from the 6 Frame_order.test_axis_perm_* system tests. A second identical permutation does not necessarily restore the original state.
- Fix for the frame_order.permute_axes for the torsionless pseudo-ellipse model. The data structure cdp.cone_sigma_max does not exist in this model as cone_sigma_max == 0.0.
- Modified the frame order auto-analysis axis permutation algorithm to handle both permutations. Instead of creating one additional data pipe for the permutations, two are now created for the permutations 'A' and 'B'. This allows all 3 solutions for the pseudo-elliptic models to be explored and included in the final model selection process.
- Fix for the Frame_order.test_axis_perm_x_le_z_le_y_permB system test. The permuted z-axis needs to be inverted in the test.
- Many fixes for the frame_order.permute_axes user function. The z-axis inversion is now encoded into a 3D numpy array as the index of the new z-axis position needs to be stored. The cone_theta_x, cone_theta_y and cone_sigma_max parameters are now permuted using the reverse of the 'perm' data structure by calling its index() method. And the cone_theta_x - cone_theta_y to y-axis - x-axis switch has been removed (this may need to be reintroduced later).
- Fix for the axis permutation protocol in the frame order auto-analysis. The pipe.copy user function does not switch pipes, therefore the pipe.switch user function is now being called so that the correct pipe is being permuted and optimised.
- Created some test data files for visualising the frame order axis permutation. This uses the CaM frame order synthetic data for the rotor model to visualise the pseudo-ellipse frame order model axis permutations. The initial conversion sets the pseudo-ellipse torsion angle cone_sigma_max to the rotor opening half-angle, and the pseudo-elliptic cone opening to close to zero. Then the axis permutations are performed. All three solutions are optimised. PDB representations before and after optimisation are included to illustrate any problems.
- Bug fix for the new frame_order.permute_axes user function. The cone and torsion angles were not being correctly permuted. Now the direct permutation array is being used. And the fact that cone_theta_x is a rotation along the y-axis and cone_theta_y along the x-axis is taken into account.
- Redesign of the axis permutation algorithm of the frame_order.permute_axes user function. Instead of tracking the fact that cone_theta_x is a rotation around the y-axis and cone_theta_y is about the x-axis, now two permutation arrays are created - one for the three angles and one for the axes. The permutation array values have also been completely changed as previously the incorrect inverse permutation was coded into the algorithm.
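To make the final two-array design concrete, here is a small numpy sketch; the permutation index values, the storage of the axes as matrix columns, and the starting angles are all illustrative assumptions, not the values used by relax:

```python
import numpy as np

# Illustrative starting values, not real analysis results.
cone_theta_x, cone_theta_y, cone_sigma_max = 0.2, 0.5, 1.0
frame = np.eye(3)                       # motional eigenframe, axes as columns

# Hypothetical permutation for one of the six combinations: one array for the
# three angles, one for the axes, plus an optional inversion of the new z-axis.
perm_angles = [2, 0, 1]
perm_axes = [1, 2, 0]
flip = np.array([1.0, 1.0, -1.0])

angles = np.array([cone_theta_x, cone_theta_y, cone_sigma_max])
new_angles = angles[perm_angles]        # permute the cone and torsion angles
new_frame = frame[:, perm_axes] * flip  # permute the axes, inverting the new z

print(new_angles)
print(new_frame)
```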
- Updated the frame order pseudo-ellipse motion permutation test data. This is for the CaM frame order rotor model synthetic data. The correct axis and cone angle permutations of the frame_order.permute_axes user function are now being used and optimised.
- Renamed the pseudo-ellipse permutation directory to perm_pseudo_ellipse_x_le_y_le_z. This is for the CaM frame order rotor model synthetic data.
- Fix for the frame_order.permute_axes user function. One of the 6 permutations had the x and y axes switched (the x ≤ z ≤ y condition, permutation A).
- Visualisation files for all of the pseudo-ellipse permutations by frame_order.permute_axes. This includes the x ≤ z ≤ y and z ≤ x ≤ y conditions (the previous files were for x ≤ y ≤ z). In all permutation combinations, optimisation has been performed to demonstrate that these are all local minima. These all approximate the rotor when using the CaM frame order rotor model synthetic data.
- Added support for the isotropic cone models to the frame_order.permute_axes user function. This is a simpler setup, but it uses the same permutation algorithm as derived for the pseudo-ellipse models. Instead of setting the x and y cone angles separately, they are instead averaged. And as the cone axis is undefined in the xy plane, the axis has been arbitrarily chosen as the axis perpendicular to both the z-axis and the reference frame x-axis.
- Created a set of files showing the axis permutation problem for the isotropic cone frame order model. This shows that there are two minima. However one has a chi-squared value of ~1, and the other a value of ~150. Nevertheless, the optimisation could be trapped in the non-global minimum, so the frame_order.permute_axes user function should be used for the isotropic cones as well, just in case.
- Created the other isotropic cone condition z ≤ x = y. As there are no constraints in this model, this condition should not result in any major differences, just the size of the cone being different and the optimisation having to decrease the cone angle significantly to mimic the rotor.
- Modified the frame order auto-analysis. The axis permutation algorithm is now performed on all isotropic cone and pseudo-ellipse models. This is just in case a non-global minimum was found in the original optimisation. The isotropic cone models possess two local minima whereas the pseudo-ellipse models possess three local minima.
- Simplified the optimisation in the axis permutation part of the frame order auto-analysis. Only the last, highest quality setting is used for optimisation.
- Fix for the axis permutation protocol in the frame order auto-analysis. This would fail if a results file for the permuted model already exists as the pipe.copy user function call was being performed too early.
- Created a set of files for the axis permutation of the torsionless isotropic cone frame order model.
- Created an initial Frame_order.test_frame_order_pdb_model_ensemble system test. This is to check the operation of the frame_order.pdb_model user function when an ensemble of structures is encountered. However as this uses a very minimal number of user functions to set up the system, a number of other minor bugs will probably be uncovered.
- Added printouts to the specific_analyses.frame_order.parameters.update_model() function. This is to make it easier to understand why certain things fail due to the system not being fully set up.
- Simplified the operation of the frame_order.select_model user function. This is by removing the check of PCS data from the specific_analyses.frame_order.data.pivot_fixed() function using the base_data_types() function call. This allows the model to be set up more easily.
- Modified the frame order check_pivot() function to operate on any data pipe. The function now accepts the pipe_name argument so that checks can happen on any data pipe.
- Missing imports in the specific_analyses.frame_order.checks module. This is from the recent pipe_name argument addition in the check_pivot() function.
- The frame order generate_pivot() function can now handle no pivot being present. At the start of this specific_analyses.frame_order.data module function, the check_pivot() function is being called to make sure that a pivot is present.
- Modified the Frame_order.test_frame_order_pdb_model_ensemble system test so it is set up correctly. The pivot point and moving domain are now specified.
- Added Monte Carlo simulations to the Frame_order.test_frame_order_pdb_model_ensemble system test. This is only setting up Monte Carlo simulation data structures via the monte_carlo.setup user function. This demonstrates a failure of the frame_order.pdb_model user function when an ensemble of structures is present with Monte Carlo simulations.
- Added support for the model argument for the frame_order.pdb_model user function. This argument is used to specify which of the models in an ensemble will be used to represent the average domain position Monte Carlo simulations, as each simulation is encoded as a model, as well as for the distribution of structures simulating the motion of the system. The argument is therefore passed into the create_ave_pos() and create_distribution() functions of the specific_analyses.frame_order.geometric module. To handle all models being used in the non Monte Carlo simulation PDB file and only the chosen model in the Monte Carlo simulation file, the internal structural object is copied twice. The second copy, used for the MC simulations, has all but the chosen model deleted out of it.
- Fix for the Frame_order.test_frame_order_pdb_model_ensemble system test. More needed to be done to set up the Monte Carlo simulations - the monte_carlo.initial_values user function call was required.
- Modified the frame order sim_init_values() API method to handle missing optimisation data. The monte_carlo.initial_values user function was failing if optimisation had not been performed. This is now caught and handled correctly.
- Created the Frame_order.test_frame_order_pdb_model_failed_pivot system test. This simply shows how the frame_order.pdb_model user function currently fails if the optimised pivot point is outside of the PDB coordinate limits of "%8.3f".
- The frame_order.pdb_model user function can now properly handle a failed pivot optimisation. This is when the pivot point optimises to a coordinate outside of the PDB limits. Now all calls to specific_analyses.frame_order.data.generate_pivot() from the module specific_analyses.frame_order.geometric set the pdb_limit flag to True. This allows all representation objects to be within the PDB limits. The algorithm in generate_pivot() has been extended to allow higher positive values, as the real PDB limits are [-999.999, 9999.999]. And a RelaxWarning is given when the pivot lies outside these limits, to inform the user.
- Modified the frame order auto-analysis to be more fail-safe. Almost all of the protocol is now within a try-finally block so that the execution lock will always be released.
- Fix for the specific_analyses.frame_order.data.pivot_fixed() function. This bug was recently introduced when the check for PCS data was removed from this function. To fix the problem, instead of calling base_data_types() to see if PCS data is present, the cdp.pcs_ids data structure is checked instead.
- Fix for the model argument for the frame_order.pdb_model user function. The deletion of structural models for the Monte Carlo simulations in the average domain position representation now only happens if more than one model exists.
- Modified the Frame_order.test_frame_order_pdb_model_failed_pivot system test. This is to show that the frame_order.pdb_model user function fails if the pivot is close to but still within the PDB coordinate limits.
- Modified the pivot position checking in specific_analyses.frame_order.data.generate_pivot(). Now the pivot is shifted to be within the limits shrunk by 100 Angstrom. This allows any PDB representation created by the frame_order.pdb_model user function to be within the PDB limits.
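A minimal sketch of this clamping behaviour, assuming the PDB "%8.3f" coordinate limits of [-999.999, 9999.999] shrunk by the 100 Angstrom buffer; the helper name and warning mechanism are illustrative, not the actual relax code:

```python
import warnings

# PDB "%8.3f" coordinate limits, shrunk by a 100 Angstrom buffer so that the
# geometric representation built around the pivot also stays within range.
PDB_MIN = -999.999 + 100.0
PDB_MAX = 9999.999 - 100.0

def clamp_pivot(pivot):
    """Shift an out-of-range pivot point back inside the PDB limits."""
    clamped = [min(max(x, PDB_MIN), PDB_MAX) for x in pivot]
    if clamped != list(pivot):
        warnings.warn("The pivot point %s lies outside of the PDB coordinate "
                      "limits and has been shifted to %s." % (pivot, clamped))
    return clamped

print(clamp_pivot([25000.0, 3.5, -1200.0]))   # the x and z values get clamped
```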
- Fix for the axis permutation protocol in the frame order auto-analysis. If a results file was found for one of the permutations, a return from the function would occur. The result is that the other permutations would not be loaded or optimised.
- Fix for the RelaxError raised by the frame_order.select_model user function. This is the error if the model name is incorrect.
- Created the Frame_order.test_pseudo_ellipse_zero_cone_angle system test. This is to catch a bug in optimisation when the cone_theta_x is set to zero in the pseudo-ellipse models.
- Bug fix for the lib.frame_order.pseudo_ellipse.tmax_pseudo_ellipse_array() function. The problem was that when θx or θy were zero, the floating point value of 0.0 would be returned. This is the incorrect behaviour as the returned value must be an array matching the dimensions of the φ angle array argument.
- Fix for the Pseudo_elliptic cone object for when the cone angles are zero. The Pseudo_elliptic.phi_max() method now avoids a divide by zero error.
- Updates for all of the Frame_order.test_axis_perm_* system tests. The axis permutations and angle permutations are now performed correctly within the tests themselves. This allows the tests to pass.
- Modified the Frame_order.test_pseudo_ellipse_zero_cone_angle system test to be quick. Now that the test passes, the optimisation needs to be short. So a maximum of two iterations are now set. Otherwise the test would take hours to complete.
- Small speedup of the Frame_order.test_auto_analysis system test.
- Alphabetical ordering of most of the Frame_order system tests.
- Created the very simple Frame_order.test_num_int_points system test. This simply creates a data pipe and calls the frame_order.num_int_pts user function to test its operation. This is to increase the test suite coverage of this user function.
- Created the Frame_order.test_num_int_pts2 system test. This checks the operation of the frame_order.num_int_pts user function when only the model has been chosen.
- Renamed the Frame_order.test_num_int_points system test to Frame_order.test_num_int_pts.
- Created the check_domain() function for the frame order analysis. This is in the specific_analyses.frame_order.checks module. The function checks that the reference domain has been specified.
- Created the check_model() function for the frame order analysis. This is in the specific_analyses.frame_order.checks module. The function checks that the frame order model has been selected via the frame_order.select_model user function.
- The frame_order.ref_domain user function backend now uses the check_domain() function.
- Created the check_parameters() function for the frame order analysis. This is in the specific_analyses.frame_order.checks module. The function checks that the frame order parameters have been set up and have values.
- Created the Frame_order.test_num_int_pts3 system test. This checks the operation of the frame_order.num_int_pts user function when the model has been and the frame order parameters have been set up.
- Created the Frame_order.test_count_sobol_points system test. This will test that the frame_order.num_int_pts user function can correctly count the number of Sobol' integration points used for the current set of parameter values. This frame_order.num_int_pts functionality does not exist yet.
- Implementation of the specific_analyses.frame_order.optimisation.count_sobol_points() function. This is used by the frame_order.num_int_pts user function to provide a printout of the number of Sobol' integration points used for the current parameter values. This is to provide user feedback, so that it is known whether enough Sobol' points have been used.
- Modified the Frame_order.test_count_sobol_points system test. The number of points has been massively decreased as generating Sobol' points takes a long time, and the check for the number of used Sobol' points has been set to the real value.
- Created the Frame_order.test_count_sobol_points2 system test. This checks the operation of the frame_order.count_sobol_points user function. As this user function has not been implemented yet, the test currently fails.
- Created the frame_order.count_sobol_points user function. This is simply a frontend to the new specific_analyses.frame_order.optimisation.count_sobol_points() function.
- Updated the Frame_order.test_count_sobol_points2 system test for the correct number of Sobol' points.
- Created the Frame_order.test_count_sobol_points_rigid system test. This is to demonstrate a failure of the frame_order.count_sobol_points user function when applied to the rigid frame order model.
- Fix for the frame_order.count_sobol_points user function for the rigid model. This model is now caught at the start, a message printed out, and the function exited.
- Fix for the Frame_order.test_count_sobol_points_rigid system test. This now checks that cdp.used_sobol_points does not exist for the rigid frame order model after a call to the frame_order.count_sobol_points user function.
- Created the Frame_order.test_count_sobol_points_rotor system test. This is to test the frame_order.count_sobol_points user function for the rotor model.
- Fix for the frame_order.count_sobol_points user function for the rotor model. The σ angles unpacking required a dimensionality collapse in the Sobol' angle data structure.
- Updated the number of points to allow the Frame_order.test_count_sobol_points_rotor system test to pass.
- The frame order count_sobol_points() function is now being called by all of the minimise user functions. This occurs at the end of the minimise.calculate, minimise.grid_search, and minimise.execute user function backends to provide more feedback to the user as to the quality of the optimisation. To avoid initialising the target function twice, the count_sobol_points() function now accepts the initialised target function as an optional argument.
- Created the Frame_order.test_count_sobol_points_free_rotor system test. This is to demonstrate that the frame_order.count_sobol_points user function currently fails for the free-rotor model.
- Fix for the frame_order.count_sobol_points user function for the free-rotor models. The torsion angle is now correctly handled as the 3 free-rotor models do not have cdp.cone_sigma_max set.
- Updated the number of points in the Frame_order.test_count_sobol_points_free_rotor system test. This is to allow the test to pass.
- Fix for the frame order count_sobol_points() function. The checks for the model, parameter and domain set up must come first, before cdp.model is accessed. Otherwise the frame_order.num_int_pts user function will often fail.
- Fix for the frame order count_sobol_points() function. The free-rotor isotropic cone model was incorrectly handled, as the cone parameter is 'cone_s1' and not 'cone_theta'. The order parameter is now converted to an angle before checking if the Sobol' point is outside of the cone or not.
- More fixes for the frame order count_sobol_points() function. The torsion angle for the torsionless models is no longer accessed, and the cone_theta parameter is only accessed for models with this parameter.
- Created the Frame_order.test_count_sobol_points_iso_cone_free_rotor system test. This is to test the frame_order.count_sobol_points user function for the free-rotor isotropic cone model.
- Fix for the frame order count_sobol_points() function. The torsion angle ranges from -π to π, so the absolute value needs to be checked, just as in the lib.frame_order modules.
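A much simplified, self-contained sketch of this counting logic for a single cone angle plus torsion angle; relax's real count_sobol_points() function handles each model's parameter set (including the cone_s1 order parameter conversion) separately:

```python
import numpy as np

def count_points_used(angles, cone_theta_max, sigma_max):
    """Count the torsion-tilt angle points falling within the cone limits.

    'angles' is an (N, 2) array of (theta, sigma) pairs, where theta is the
    cone tilt angle and sigma the torsion angle in the range [-pi, pi],
    hence the absolute value check on sigma.
    """
    theta, sigma = angles[:, 0], angles[:, 1]
    inside = (theta <= cone_theta_max) & (np.abs(sigma) <= sigma_max)
    return int(inside.sum())

# Random points stand in here for the real Sobol' sequence.
rng = np.random.default_rng(0)
points = np.column_stack([rng.uniform(0.0, np.pi, 1000),
                          rng.uniform(-np.pi, np.pi, 1000)])
print(count_points_used(points, cone_theta_max=0.5, sigma_max=1.0))
```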
- Updates for the number of Sobol' points in the Frame_order.test_count_sobol_points_* system tests. This is simply to allow all Frame_order system tests to pass.
- Redesigned the frame_order.num_int_pts user function frontend for the oversampling idea. The use of the quasi-random Sobol' sequence for numerical PCS integration will be modified to use the concept of oversampling. Instead of specifying the exact number of points in the Sobol' sequence and then removing points outside of the current parameter values, the algorithm will oversample as N * Ov * 10^M, where N is the maximum number of Sobol' points to be used for the integration, Ov is the oversampling factor, and M is the number of dimensions or torsion-tilt angles used in the system. The aim is to try to use the maximum number of points N for all frame order models and all ranges of dynamics.
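Read literally, the oversampling scheme above gives the following total point count (a sketch of the formula only, using the symbols from the entry above):

```python
def total_sobol_points(N, Ov, M):
    """Total number of oversampled Sobol' points: N * Ov * 10**M.

    N  - the maximum number of points to actually use in the integration,
    Ov - the oversampling factor,
    M  - the number of dimensions (torsion-tilt angles) of the model.
    """
    return N * Ov * 10**M

# e.g. 200 points, an oversampling factor of 1, and a 3-angle model:
print(total_sobol_points(200, 1, 3))   # 200000
```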
- Renamed the frame_order.num_int_pts user function to frame_order.sobol_setup. The user function no longer specifies the number of integration points. Instead it now specifies the maximum number of points N and oversampling factor Ov used to generate the oversampled Sobol' sequence.
- Implemented the Sobol' sequence oversampling in the frame order target function class.
- Converted all of the specific_analyses.frame_order package to the Sobol' point oversampling design. The correct values are now sent into the target function and all references to cdp.num_int_pts have been replaced with the cdp.sobol_max_points and cdp.sobol_oversample pair of variables. The frame_order.count_sobol_points user function backend has also been updated to show the total number of oversampling points and the number of points used.
- The frame_order.count_sobol_points user function now shows more information. The maximum number and oversampling factors are now also printed out for maximum user feedback.
- Improved the printout formatting for the count_sobol_points() frame order function.
- The frame order target function now passes the maximum number of Sobol' points to the relax library. The value is being passed into the lib.frame_order.*.pcs_numeric_int_*() functions, though it is not yet used.
- Fix for the percentage calculation for the frame order count_sobol_points() function.
- Changed the creation of the Sobol' points in the frame order target function. For increased accuracy of the numerical PCS integration, the first 1000 points of the Sobol' sequence are now skipped to avoid any bias. For speed, the axis order of the Sobol' torsion-tilt angles has been swapped so that the numpy.swapaxes() function call is no longer required in the lib.frame_order.*.pcs_numeric_int_*() functions.
- Updated the frame order count_sobol_points() function to handle the swapped axis order.
- Huge speedup for the generation of the Sobol' sequence data in the frame order target function. The new Sobol_data class has been created and is instantiated in the module namespace as target_functions.frame_order.sobol_data. This is used to store all of the Sobol' sequence associated data, including the torsion-tilt angles and all corresponding rotation matrices. When initialising the target function, if the Sobol_data container holds the data for the same model and same total number of Sobol' points, then the pre-existing data will be used rather than regenerating all the data. This can save a huge amount of time.
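The module-level caching pattern described here can be sketched as follows; the class, attribute and function names mirror the description but are otherwise illustrative:

```python
import numpy as np

class SobolData:
    """Module-level cache for the Sobol' sequence data (illustrative)."""
    def __init__(self):
        self.model = None
        self.total_num = None
        self.angles = None      # the torsion-tilt angles
        # the corresponding rotation matrices would also be stored here

# Instantiated once in the module namespace so the data survives between
# target function initialisations.
sobol_data = SobolData()

def init_sobol_data(model, total_num, generator):
    """Regenerate the Sobol' data only if the model or point count changed."""
    if sobol_data.model == model and sobol_data.total_num == total_num:
        return sobol_data                      # reuse the cached data
    sobol_data.model = model
    sobol_data.total_num = total_num
    sobol_data.angles = generator(total_num)   # the expensive step
    return sobol_data

# Random angles stand in here for the real Sobol' sequence generation.
gen = lambda n: np.random.default_rng(0).uniform(0.0, np.pi, (n, 3))
init_sobol_data('pseudo-ellipse', 1000, gen)   # generates the data
init_sobol_data('pseudo-ellipse', 1000, gen)   # reuses the cached data
```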
- Updated the frame order count_sobol_points() function to use the new Sobol_data container. The Sobol' sequence data generated by the target function is now located at target_functions.frame_order.sobol_data.
- Updated all the lib.frame_order.*.pcs_numeric_int_*() functions for the new Sobol' point algorithm. The functions now all accept the max_points argument and terminate the loop over the Sobol' points once the maximum number of points has been reached. The calls to numpy.swapaxes() have also been removed as this is now pre-performed by the target function initialisation.
- Changed the default oversampling factor from 100 to 1 in the frame_order.sobol_setup user function.
- Converted the frame order auto-analysis to use the new frame_order.sobol_setup user function design. The auto-analysis Optimisation_settings object has also been modified so that all num_int_pts arguments and internal structures have been split into the two new sobol_max_points and sobol_oversample names and objects.
- Fix for the rigid frame order model for the recent frame_order.sobol_setup user function changes. For this model, the number of Sobol' points normally does not exist. This is now correctly handled.
- Created the sobol_setup() method for the frame order auto-analysis. This is used to correctly handle the new design of the frame_order.sobol_setup user function consistently throughout the protocol.
- Updated the Frame_order.test_auto_analysis system test script. This now uses the new auto-analysis Optimisation_settings object design.
- Updated the Frame_order.test_count_sobol_points system test. The call to the frame_order.num_int_pts user function was changed to frame_order.sobol_setup.
- Fixes for the Frame_order.test_count_sobol_points2 system test. The test_suite/shared_data/frame_order/axis_permutations/cam_pseudo_ellipse.bz2 relax state file has been manually edited to change the num_int_pts data pipe structure to sobol_max_points and to add the new sobol_oversample variable.
- Added a backwards compatibility hook for state and results files for the Sobol' sequence changes. The data pipe num_int_pts variable is now renamed to sobol_max_points when present, and the sobol_oversample variable is created and set to 1.
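A rough sketch of what such a renaming hook could look like; the function and the stand-in data pipe class are illustrative, not the actual relax compatibility code:

```python
def upgrade_pipe(dp):
    """Rename the old num_int_pts variable on a data pipe object (sketch).

    Old state and results files carry num_int_pts; this becomes
    sobol_max_points, and sobol_oversample is created with a default of 1.
    """
    if hasattr(dp, 'num_int_pts'):
        dp.sobol_max_points = dp.num_int_pts
        dp.sobol_oversample = 1
        del dp.num_int_pts
    return dp

class Pipe:
    """Stand-in for a relax data pipe container."""

old = Pipe()
old.num_int_pts = 500
upgrade_pipe(old)
print(old.sobol_max_points, old.sobol_oversample)   # 500 1
```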
- Updates to all of the Frame_order.test_count_sobol_points_* system tests. The frame_order.sobol_setup user function is used to set a small maximum number of points (20) to allow the tests to be fast. The value of 20 is also checked for to allow the tests to pass.
- Renamed the cdp.used_sobol_points variable to sobol_points_used. This is created by the count_sobol_points() frame order function. The name change is to match the sobol_max_points and sobol_oversample variable names.
- Renamed all the Frame_order.test_num_int_pts* system tests to Frame_order.test_sobol_setup*. These system tests were for checking the operation of the old frame_order.num_int_pts user function, which is now the frame_order.sobol_setup user function.
- Fix for all of the Frame_order.test_rigid_data_to_*_model system tests. The frame_order.num_int_pts user function call was changed to frame_order.sobol_setup.
- Updated the χ2 check in the Frame_order.test_rigid_data_to_free_rotor_model system test. This value has changed due to the first 1000 points of the Sobol' sequence being skipped.
- Fixes for all of the lib.frame_order.*.pcs_numeric_int_*_qrint() functions. The loop over the Sobol' points was broken. As numpy.swapaxes() has been applied to the points argument already, the loop needs to be over the second dimension of the points data structure.
- Updates for all of the Frame_order.test_cam_* system tests. The NUM_INT_PTS variable in the system tests scripts is now passed into the frame_order.sobol_setup user function as the max_num argument. This number has also been changed so that the tests take a reasonable amount of time. All χ2 value checks were updated. These were validated by increasing the number of integration points and watching the χ2 value of the Frame_order.test_cam_*_pcs version of the system tests head to zero.
- Another update for the χ2 check in the Frame_order.test_rigid_data_to_free_rotor_model system test. The previous commit used an incorrect value for the χ2. This new value is now much closer to the original.
- Turned down the verbosity of the update_model() frame order function. The verbosity flag is now accepted and set to zero by the get_param_names() API method and specific_analyses.frame_order.parameters.param_num() function. This removes a lot of useless printouts from many different user functions.
- Introduced the verbosity argument to the count_sobol_points() frame order function. This is used to turn the printouts on or off. The optimisation code now calls this function with the verbosity argument sent into the minimise.grid_search and minimise.execute user functions. Hence the printouts are suppressed for Monte Carlo simulations.
- Removed the axis system printout from the frame_order.pdb_model user function. This is for the geometric representation of the frame order dynamics. The axis system is printed out as the rotation matrix used for the lib.structure.geometric.generate_vector_residues() function later on anyway. The change is to simplify the printouts.
- Editing of the docstring of the frame_order.sobol_setup user function.
- Fix for the frame order system test optimisation printouts. The cdp.num_int_pts variable is now called cdp.sobol_max_points.
- The starting time of the axis permutation model optimisations is now output. This is in the frame order auto-analysis. This call to the time user function occurred for the normal models, so extending it to the permuted axes models makes the output more consistent.
- Simplified the atomic position averaging warning in the frame order analysis. Instead of throwing a warning for each spin, one warning for all spins is now given. This should make the output a lot less verbose.
- The frame order minimise_setup_atomic_pos() function now accepts the verbosity argument. This is used to silence the warnings in user functions such as frame_order.sobol_setup.
- Improvements for the frame order overfit_deselect() API method. Three changes have been made: The print statements have been converted to RelaxWarnings; The spin IDs or spin ID pairs are now stored in a list and one RelaxWarning for the missing PCS data and one for the missing RDC data is now given; And the verbose flag is now used to determine if a RelaxWarning will be given.
- Change to the position averaging warning in the minimise_setup_atomic_pos() frame order function.
- Improvements for the printout from the update_model() frame order function. A list of updated parameters is now created and everything is printed on a single line at the end. The printout is therefore much more compact.
- Spun out part of the frame_order.pdb_model user function into the new frame_order.simulate user function. The new user function arguments required for properly creating the pseudo-Brownian dynamics simulation would have made the frame_order.pdb_model user function too complicated. Therefore this part has been spun out into the new frame_order.simulate user function. The frame_order.simulate frontend fully describes the algorithm that will be used to simulate the dynamic content of the PCS and RDC data, and warns that not all modes of motion are visible and present.
- Updated the frame order auto-analysis to call the new frame_order.simulate user function. Although not implemented yet, this allows the user function to create the simulation PDB file in the future.
- Small fix for the new frame_order.simulate user function backend.
- Updated the base script for the Frame_order.test_cam_* system tests. The frame_order.simulate user function is now called directly after the frame_order.pdb_model user function.
- Created the backend framework for the frame_order.simulate user function. The backend specific_analyses.frame_order.uf.simulate() function performs all data checks required, prepares the output file object, assembles the frame order parameter values and pivot point, and creates a copy of the structural object with the ensemble collapsed into a single model. All this data is then passed into the new lib.frame_order.simulation.brownian() function. This initialises all required data structures and the structural object. The main loop of the simulation is also implemented, taking snapshots at every fixed number of steps and terminating the loop once the total number of snapshots is reached. The snapshot consists of copying the original unrotated structural model and rotating it into the new position. The rotation is currently the identity matrix. The old specific_analyses.frame_order.geometric.create_distribution() stub function has been deleted.
- Decreased the time required for the Frame_order.test_cam_* system tests. The frame_order.simulate user function now only creates a total of 20 snapshots rather than 1000.
- Added new arguments to the frame order auto-analysis for the frame_order.simulate user function. These are the brownian_step_size, brownian_snapshot and brownian_total arguments which are passed directly into the frame_order.simulate user function. This gives the user more control, as well as allowing the test suite to speed up this part of the analysis.
- Huge speedup for the Frame_order.test_auto_analysis system test. The pseudo-Brownian dynamics simulation via the frame_order.simulate user function has been massively sped up to allow the test to be almost as fast as before.
- Spun out the code for shifting to the average frame order position into a new function. The old code of the create_ave_pos() function of the specific_analyses.frame_order.geometric module has been shifted into the new average_position() function. This will allow the code to be reused by other parts of relax to obtain the average frame order structures.
- Implemented the shifting to the average position for the frame_order.simulate user function backend. This simply sends the structural object into the new average_position() function of the specific_analyses.frame_order.geometric module.
- Improvements for the frame_order.simulate user function. The rigid model is now skipped, the PDB file closed, and some printouts for better user feedback have been added.
- Changed the default PDB file name for the frame_order.simulate user function to 'simulate.pdb'. The '*.bz2' extension has been dropped so that the file is quicker to create and does not need to be decompressed for loading into molecular viewers.
- Created the specific_analyses.frame_order.geometric.generate_axis_system() function. This is now used by most parts of the frame order analysis to generate the full 3D eigenframe of the motions. Previously this was implemented each time the frame or major axis was required. This replicated and highly inconsistent code has been eliminated.
- Fix for the new specific_analyses.frame_order.geometric.generate_axis_system() function. The rotor and free rotor models were not correctly handled and the returned eigenframe was the zero matrix.
- Implemented the pseudo-Brownian frame order dynamics simulation for the single motion models. This uses the same logic as in the test_suite/shared_data/frame_order/cam/*/generate_distribution.py scripts which were used to generate all of the test suite data. However rather than using a random rotation matrix, a random 3D vector is used to rotate by a fixed angle. And the rotation is used to rotate the current state to state i+1. The rotation for the state is decomposed into torsion-tilt angles once shifted into the motional eigenframe, any violations are checked for and the state is shifted back to the boundary, then the new state is reconstructed from the corrected torsion-tilt angles and shifted from the motional eigenframe back to the PDB frame.
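A minimal sketch of one such random step, using scipy's Rotation class for the axis-angle conversion rather than relax's own library functions; the step size and the helper name are illustrative:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def brownian_step(current_R, step_angle, rng):
    """One pseudo-Brownian step: rotate the current state by a fixed angle
    about a uniformly random 3D axis (a sketch of the idea only)."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)                  # random unit vector
    step = Rotation.from_rotvec(step_angle * axis).as_matrix()
    # In relax, the new state would then be shifted into the motional
    # eigenframe, decomposed into torsion-tilt angles, any boundary
    # violations corrected, and shifted back to the PDB frame.
    return step @ current_R

rng = np.random.default_rng(0)
R = np.eye(3)
for _ in range(5):
    R = brownian_step(R, step_angle=np.radians(2.0), rng=rng)
print(R)
```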
- Shifted the specific_analyses.frame_order.variables module into the lib.frame_order package. This is both to minimise circular dependencies, as previously the specific_analyses.frame_order modules import from target_functions.frame_order and vice-versa, and to allow the relax library functions to have access to these variables.
- Implemented the frame_order.simulate user function backend for the double rotor frame order model. This involved extending the algorithm to loop over N states, where N=2 for the double rotor and N=1 for all other models. To handle the rotations being about the x and y-axes, an axis permutation algorithm is used to shift these axes to z prior to decomposing to the torsion-tilt angles. The reverse permutation is used to shift the axes back after correcting for being outside of the allowed angles.
- Fixes for the specific_analyses.frame_order.geometric.average_position() function. The recent trunk changes with the structural object Internal_selection class required a change in this function.
- Updated the lib.frame_order.simulation.brownian() function. This now uses the internal structural object selection object logic - the selection() method is called to obtain the Internal_selection object, and this is then passed into the rotation() method.
- The quad_int argument for the frame order target function class now defaults to False. This is so that quasi-random Sobol' numerical integration will be used by default.
- The cdp.quad_int flag is now passed into the target function for the frame order calculate() method. This is for the minimise.calculate user function backend.
- Fixes for the missing cdp.quad_int flag. If the cdp.quad_int flag is missing, this is now set to False before setting up the target function class. The previous behaviour was that the frame_order.quad_int user function must be called prior to optimisation. Now it is optional for turning this flag on and off.
- The RDC only optimisation now defaults to the *_qrint() frame order target functions. This restores the earlier behaviour prior to the restoration of the SciPy quadratic integration.
- Clean up for the frame order target function aliasing. The SciPy quadratic integration and the quasi-random Sobol' integration target functions are now aliased using the getattr() Python function to programmatically choose one or the other. The rigid model has been removed from the list as it is not a numeric model, and the func_double_rotor() target function has been renamed to func_double_rotor_qrint() to make it consistent with the naming of the other target functions.
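The aliasing pattern can be sketched as follows, with stand-in class and method names rather than the real target function class:

```python
class TargetFunctions:
    """Stand-in target function class illustrating the getattr() aliasing."""

    def func_rotor_qr_int(self, params):
        return 1.0   # quasi-random Sobol' integration variant (dummy value)

    def func_rotor_quad_int(self, params):
        return 2.0   # SciPy quadratic integration variant (dummy value)


def alias_target(target, model, quad_int=False):
    """Programmatically pick the integration variant for the given model."""
    tag = 'quad_int' if quad_int else 'qr_int'
    return getattr(target, 'func_%s_%s' % (model, tag))


func = alias_target(TargetFunctions(), 'rotor', quad_int=True)
print(func(None))   # prints 2.0
```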
- Renaming of all the frame order target functions and PCS integration functions. For consistency, all quasi-random Sobol' integration functions now use the 'qr_int' tag whereas the SciPy quadratic integration functions use the 'quad_int' tag. This is not only in the target function names but also the PCS integration functions in lib.frame_order.
- Duplicated all Frame_order.test_cam_* system tests for testing the SciPy quadratic integration. The Frame_order.test_cam_* system tests have all been renamed to Frame_order.test_cam_qr_int_*. These have been duplicated and renamed to Frame_order.test_cam_quad_int_*. The flag() system test method has been extended to include the quad_int flag which is then stored in the status object and used in the base CaM frame order system test script to activate the frame_order.quad_int user function.
- Activated the quad_int flag for a number of the Frame_order.test_cam_quad_int_* system tests. The quad_int argument for the flags() test suite method had been missed for a few of these tests.
- Updated the χ2 check in the Frame_order.test_cam_qr_int_pseudo_ellipse_free_rotor_rdc system test. This test is not normally run as it is blacklisted and duplicates the coverage of other tests. However its chi-squared value check had not been updated for a while and hence the test fails when explicitly run.
- The Sobol' point counting is now turned off for the frame order optimisation functions if no points exist. If the cdp.quad_int flag is set, then there will be no Sobol' points to count. The count_sobol_points() user feedback function will therefore not be called by the minimise.calculate, minimise.grid_search and minimise.execute user functions.
- Turned off optimisation for all of the Frame_order.test_cam_quad_int_* system tests. The SciPy quadratic integration is far too slow to be used in the test suite. The simple call to the minimise.calculate user function is sufficient for checking these target functions.
- Updated all of the SciPy quadratic integration frame order target functions. A number of the data structures in the target function class have been redesigned since these target functions were deleted. All of the func_*_quad_int*() target functions have been updated for these changes.
- Updated all of the χ2 value checks for the Frame_order.test_cam_quad_int_* system tests. This is only for those tests which use PCS data - the RDC only test χ2 values are the same as in the Frame_order.test_cam_qr_int_* system tests. In all cases, the χ2 value is lower for the more accurate SciPy quadratic integration as compared to the quasi-random Sobol' integration, as expected.
- Implemented the SciPy quadratic integration target function for the double rotor frame order model. This simply follows from what all the other quadratic integration target functions and lib.frame_order module functions do.
- Changed the χ2 value checks in the Frame_order.test_cam_quad_int_double_rotor* system tests. These were the values for the quasi-random Sobol' integration and needed updating for the SciPy quadratic integration.
- Removed the skip_tests argument for the Frame_order system tests __init__() method. This argument, which was used to manually turn on or off the blacklisted tests, is no longer needed due to the new --no-skip relax command line flag which will enable all blacklisted tests.
- The frame order auto-analysis Optimisation_settings object (http://www.nmr-relax.com/api/4.0/auto_analyses.frame_order-module.html) now supports the quad_int flag. This is for activating the SciPy quadratic integration. It is accepted as an argument for the add_grid() and add_min() methods, and is returned by the new get_grid_quad_int() and get_min_quad_int() methods.
- Added the ability to specify a pre-run directory in the frame order auto-analysis. This will be used for refinement purposes. If the new pre_run_dir argument, modelled on the relaxation dispersion auto-analysis, is supplied then results files will be loaded from this directory and the base data pipe copying and PCS subset optimisation steps will be skipped. The model nesting algorithm is also deactivated.
- Activated the SciPy quadratic integration in the frame order auto-analysis. If the Optimisation_settings object has been set up with the quad_int flag, then the auto-analysis will skip the sobol_setup() method and instead directly call the frame_order.quad_int user function. Optimisation will then use the SciPy quadratic integration rather than the quasi-random Sobol' integration.
- Improvements for the usage of the frame_order.quad_int user function in the auto-analysis. The frame_order.quad_int user function is now called even when the Optimisation_settings object quad_int flag is False. This allows for switching between the SciPy quadratic integration and the quasi-random Sobol' integration, as the SciPy quadratic integration can now be turned off.
- Additions to the frame order auto-analysis documentation.
- Incorporated the contents of the summarise.py script into the frame order auto-analysis module. This has been converted into the summarise() function which will generate a results summary table while the analysis is still running.
- Improved logic in the auto_analyses.frame_order.summarise() function. The model names, directories and titles are now being auto-generated from the full list of frame order models in lib.frame_order.variables.MODEL_LIST. To create a common mechanism for determining the model directory name, the Frame_order_analysis.model_directory() method has been converted into a module function.
- The frame order auto-analysis now calls the summarise() function at the end to create a summary table.
- Shifted the final state saving in the frame order auto-analysis to be within the safety of the try block.
- Turned off the final state saving in the Frame_order.test_auto_analysis system test. This almost halves the time required for the test. A private class variable _final_state has been added to the auto_analyses.frame_order.Frame_order_analysis class which when False will cause the state saving step to be skipped.
- The summarise() function call is now after saving the final state in the frame order auto-analysis. This is needed because the summarise() function will create a new set of data pipes, loading the results which already exist under a different pipe name in the relax data store. Otherwise the final state file is twice as big as it should be.
- Incorporated the contents of count_sobol_points.py into the frame order auto-analysis module. The analysis script has been converted into the count_sobol_points() function which will generate a summary table of the number of quasi-random Sobol' points used for the PCS numerical integration.
- The frame order auto-analysis now calls the count_sobol_points() function at the end. This is to automatically create the Sobol' point summary table.
- Fixes for the auto_analyses.frame_order.summarise() function. If the count_sobol_points() function is called followed by summarise(), a RelaxError will be raised as the data pipe already exists. The summarise() function has been modified to switch to the data pipe if it already exists.
- Expanded the frame order auto-analysis documentation. This adds a description for the summarise() and count_sobol_points() functions.
- Elimination of most of the Frame_order.fixme_test_* system tests and associated data. These tests are from a very early stage of the development of the frame order theory, back when the base data was the full and reduced alignment tensors for each domain calculated from the RDC data. They do not fit into the current analysis where the base data is the RDCs and PCSs for the moving domain. There is no point upgrading the tests as it would be far too much effort and it would only duplicate the coverage of the Frame_order.test_cam_* system tests.
- Renamed the Frame_order.fixme_test_opendx_map system test to Frame_order.test_opendx_map to activate it.
- Upgraded the Frame_order.test_opendx_map system test. To upgrade from the ancient design to the current design so that the test is functional and relevant, this now uses the same setup as the Frame_order.test_cam_qr_int_rigid system test. Instead of performing optimisation, the test calls the dx.map user function.
- Fix for the frame order specific API calculate() method. This was caught by the Frame_order.test_opendx_map system test. The scaling matrix was not being specified by the dx.map user function backend and this was causing the method to fail. Instead of passing the non-existent scaling matrix into the target function, the argument is simply ignored. The scaling matrix has no effect on the minimise.calculate user function so it is not necessary.
- The verbosity flag is now being respected by the frame order specific API calculate() method. This silences the method when executing the dx.map user function. The χ2 value printout is suppressed and the verbosity argument is being sent into the frame order count_sobol_points() function.
- Added a section printout to the frame order auto-analysis when summary tables are created.
- The frame_order.simulate user function now defaults to creating a gzipped PDB file. This is to save room, and because most molecular viewers will automatically read gzipped PDB files.
- Fix for the change of the pipe_control.pipes.test() function to check_pipe().
- Small change in the title of the summary table of the frame order auto-analysis. 'Order parameters' has been replaced by 'Cone half angles' to clarify what the values really are.
- Fix for the frame order optimisation target setup printouts. The 'Numerical integration: ' printout was fixed to 'Quasi-random Sobol' sequence'. This now changes to 'SciPy quadratic integration' if cdp.quad_int is set. The text 'PCS' has also been added for clarification.
- Removed the call to the frame_order.simulate user function for the rigid model in the auto-analysis. There is no motion to simulate in the rigid model, so the frame_order.simulate user function has no use.
- Improvements, fixes, and expansion of the results and data visualisation file creation. This is for the frame order auto-analysis. The visualisation() method has been renamed to results_output() and its scope expanded. The method previously only called the frame_order.pdb_model and frame_order.simulate user functions for creating PDB representations of the frame order motions and performing a pseudo-Brownian frame order dynamics simulation. This has been extended to also call the results.write user function for outputting results files, and the rdc.corr_plot and pcs.corr_plot user functions for generating correlation plots of the measured vs. back-calculated data. All parts of the auto-analysis where output files are required now call this method. This ensures that all output files are always created, and are placed into the correct directories.
- Improvements for the sectioning printouts for the frame order auto-analysis. The sections now use the lib.text.formatting subtitle() and subsubtitle() functions to distinguish them from the output of all the user functions, which use the section(), subsection() and subsubsection() functions. New sectioning printouts have been added for clarity.
- Possible fixes for the frame order auto-analysis. This is just in case a user decides to not perform the optimisation starting with a PCS subset. In this case, the analysis will now execute correctly.
- Improvements to the summary table for the frame order auto-analysis. The rotor and free rotor model motional eigenframe parameter axis_alpha is now being converted into spherical angles and reported in the table. This allows the motional eigenframe of all models to be easily compared in the table.
- Created a directory and base PDB system for testing out the PCS information content. The base PDB system consists of Ad Bax's CaM domain structures superimposed onto the open CaM structure, the N-domain CoM shifted to the origin, and the C-domain CoM shifted to the z-axis.
- Modified the PCS content testing base system. The paramagnetic centre is now shifted to the origin, as this is the real centre of the PCS physics.
- Intermediate optimisation results are now stored by the frame order auto-analysis. The results from each minimise.grid_search and minimise.execute user function call are now stored in specially named directories located in the 'intermediate_results' directory, which itself is located in the auto-analysis results_dir directory. This allows intermediate results to be more easily analysed later on, which can be useful for optimising the optimisation steps. These directories can also be used for the pre_run_dir auto-analysis argument for subsequent refinements from earlier steps in the optimisation. The results stored include everything from the results_output() method and the count_sobol_points() and summarise() functions. To allow this to work, the auto-analysis functions count_sobol_points() and summarise() required modification. Results files are now always loaded into a temporary data pipe, rather than switching to the corresponding pipe, and the temporary data pipe is deleted after the data has been extracted. The original data pipe name is also stored and a switch back to that pipe occurs at the end of each function (a sketch of this pattern is given at the end of this Changes list).
- The simulation is now turned off for intermediate results in the frame order auto-analysis. The intermediate results are only for checking, so for these the full pseudo-Brownian dynamics simulations are not required. The simulation flag has been introduced into the results_output() method of the auto-analysis to control this.
- The splitting of the rigid model grid search into rotation and translation parts is now optional. In the frame order auto-analysis, the rigid_grid_split argument has been introduced. The alternating algorithm of performing a grid search over the rotational space followed by translation is now optional and turned off by default. This is because the global minimum is sometimes missed with this shortcut algorithm.
- Speedup of the Frame_order.test_auto_analysis system test. The splitting of the rigid model grid search into rotation and translation parts has been reactivated.
- Created the Optimisation.has_grid() method for the frame order auto-analysis. This is used to test if the optimisation settings object has a grid search defined.
- The grid search can now be skipped for the rigid model in the frame order auto-analysis. If the input 3D structures are close to the real solution, the grid search over the translational and rotational parameters of the rigid model can be skipped. This speeds up the analysis and can help find the real solution in problematic cases.
- The intermediate results storing can now be turned off in the frame order auto-analysis. The new store_intermediate Boolean argument has been added to the analysis to allow the storage of these results to be turned on or off.
- The intermediate results are no longer stored in the Frame_order.test_auto_analysis system test. This drops the test timing on one system from ~190 seconds to ~50 seconds.
- The compression level for results files can now be set in the frame order auto-analysis. This is via the new argument results_compress_type, which is used to set the compress_type argument of the results.write user function. The results reading parts of the auto-analysis have been updated to allow uncompressed, bzip2 compressed, and gzip compressed files to be handled.
- Added a printout of the frame order model in the target function setup function. This is printed out when the minimise.calculate, minimise.grid_search, or minimise.execute user functions are called, and is for better feedback, especially in the auto-analysis where the repetitive optimisations can be confusing.
- Updated the frame order analysis for the structure.load_spins user function changes. The minimise_setup_atomic_pos() function of the specific_analyses.frame_order.optimisation module now handles the mixed type spin.pos variable correctly.
- The data pipe containing a PCS subset is now optional in the frame order auto-analysis. This is for systems which have so little data that a subset makes no sense.
- Redesigned the optimisation steps for the frame order auto-analysis. The code has been significantly simplified as the optimisation for the PCS subset and full data set was the same. The code duplication has been eliminated by combining it into the new optimisation() method. The check for the PCS subset has also been expanded so that it is skipped if the subset data pipe is not supplied, even if an optimisation object for the subset has been supplied (this should prevent strange errors when the auto-analysis is incorrectly used). A side effect of this code merger is that the zooming grid search has now been activated for the full PCS data set. This is of great benefit when a PCS subset is not being used.
- The minimise.execute user function skip_preset flag is now False in the frame order auto-analysis. This is for the main model optimisation. Without this flag set, the grid search for the pivot point position for the rotor model was being skipped at the first zoom level.
- The pivot point can now be excluded from the grid search in the frame order auto-analysis. If the initial pivot point is known to be reasonable, then it may be possible to skip it in the grid search for the rotor frame order model. This can lead to a speedup of the analysis and can help with stability. The pivot_search argument has been added to the auto-analysis Optimisation.add_grid() method to enable this. The get_grid_pivot_search() method has also been added to allow the auto-analysis to query this and turn it off if desired.
- Updated the description of the frame_order.permute_axes user function. This now includes the isotropic cone.
- Replaced the table in the frame_order.permute_axes user function. The original table was an old and incorrect version. This has been replaced by the correct permutation table.
- Added some old relax scripts for both simulating and predicting the frame order matrix elements. These were used for the initial implementation of the pseudo-ellipse frame order model back in July 2010. The scripts will be extended for all frame order models. The simulated values could then be used in unit tests of the frame order matrix code in lib.frame_order.
- Updated the frame_order_simulate.py script for simulating frame order matrix elements. The MODEL variable has been added in preparation for supporting all model types, and this is now added to the file name. The Grace header is now also being automatically generated.
- Improvements for the Grace files produced by the frame_order_simulate.py script. The model name is now set as a variable and is used for the subheading.
- Updated the frame_order_solution.py script for directly calculating the frame order matrix elements. The MODEL variable has been added in preparation for supporting all model types, and this is now added to the file name. The Grace header is now also being automatically generated and this matches that of the frame_order_simulate.py script.
- Zero values can now be handled in the pseudo-ellipse 1st degree frame order matrix function. This is in lib.frame_order.pseudo_ellipse.compile_1st_matrix_pseudo_ellipse().
- Removed some unused code in the pseudo-ellipse 2nd degree frame order matrix function. This is the compile_2nd_matrix_pseudo_ellipse() function in the lib.frame_order.pseudo_ellipse module. The change should make the RDC part of the frame order analysis for the pseudo-ellipse model slightly faster.
- Modified the rotate_daeg() function as this is independent of the degree of the frame order matrix. This is the lib.frame_order.matrix_ops.rotate_daeg() function.
- Fix for the compile_1st_matrix_pseudo_ellipse() function. This function of the lib.frame_order.pseudo_ellipse module now can rotate the 1st degree frame order matrix out of its eigenframe and into the PDB frame.
- Created an executable Python script for mass converting the frame order matrix Grace graphs. The script converts the *.agr files to EPS and PNG files.
- Modified the frame order matrix Grace graph to EPS/PNG format conversion script. The binary being called is now 'grace' rather than 'xmgrace'. This allows different Grace versions to be used.
- Modified the frame order matrix Grace graph to EPS/PNG format conversion script. Grace is now used to create a PostScript file and then the ps2eps program is called to convert to EPS. This produces much better EPS files for inclusion into LaTeX documents.
- Redesign of the frame_order_solution.py script for calculating the frame order matrix elements. This script now loops over all models, all motional frame orientations, and all order parameters to generate the Grace graphs of all 1st and 2nd degree frame order matrix elements. Therefore the script only needs to be executed once. The script also now calculates a point at zero (slightly shifted to 0.01 to avoid artifacts).
- Added all of the Grace graphs produced by the frame_order_solution.py script. These are the graphs of the 1st and 2nd degree frame order matrix elements, calculated using the functions in lib.frame_order.
- Updated frame_order_simulate.py to be much faster in simulating the frame order matrix elements. The script also matches the Grace file output of the frame_order_solution.py script. The inside() method has been renamed for the pseudo-ellipse and the infrastructure for adding support for the other frame order models has been added. By shifting calculations outside of the loops, the script is now many orders of magnitude faster.
- Implemented the compile_1st_matrix_rotor() function. This is for the lib.frame_order.rotor module. The function will calculate the 1st degree in-frame frame order matrix for the rotor model.
- Created the Grace graphs for the rotor model 1st degree frame order matrix elements. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
- Implemented the compile_1st_matrix_free_rotor() function. This is for the lib.frame_order.free_rotor module. The function will calculate the 1st degree in-frame frame order matrix for the free rotor model.
- Created the Grace graphs for the free rotor model 1st degree frame order matrix elements. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
- Implemented the compile_1st_matrix_iso_cone() function. This is for the lib.frame_order.iso_cone module. The function will calculate the 1st degree in-frame frame order matrix for the isotropic cone model.
- Created the Grace graphs for the isotropic cone model 1st degree frame order matrix elements. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
- Implemented the compile_1st_matrix_iso_cone_torsionless() function. This is for the lib.frame_order.iso_cone_torsionless module. The function will calculate the 1st degree in-frame frame order matrix for the torsionless isotropic cone model.
- Created the Grace graphs for the torsionless isotropic cone 1st degree frame order matrix elements. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
- Implemented the compile_1st_matrix_iso_cone_free_rotor() function. This is for the lib.frame_order.iso_cone_free_rotor module. The function will calculate the 1st degree in-frame frame order matrix for the free rotor isotropic cone model.
- Created the Grace graphs for the free rotor isotropic cone 1st degree frame order matrix elements. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
- Docstring fixes for the new compile_1st_matrix_iso_cone() function.
- A minor speedup for the frame_order_simulate.py script. The angles are now being calculated at the very start prior to the main loop, removing repetitive calculations.
- The frame_order_simulate.py script now uses lib.text.progress.progress_meter(). This script for simulating the frame order matrix elements now uses the standard relax progress meter to simplify the script. This should also speed up the script, as the previous progress printouts were slowing down the calculations.
- Simulation of the pseudo-ellipse frame order matrix elements. This is for a simulation of 1,000,000 states for each angle increment, and includes both the in-frame and out-of-frame cases, varying each of the θ X, Y, and Z angles. The resultant Grace graphs have been added to the repository.
- The frame order matrix element simulation script now uses the Kronecker outer product. This allows the frame order matrix to be in the same notation as that used internally in relax. It will also cause the colours of the Sijkl_* curves to match between the simulation and solution scripts (a sketch of this accumulation is given at the end of this Changes list).
- Added the rotor model to the frame order matrix element simulation script. The generated in-frame and out-of-frame Grace graphs containing the matrix values for 1,000,000 simulation values have been added to the repository. The script was modified so that the rotation is generated by special rotation_*() methods which are aliased depending on the model.
- Added the free rotor model to the frame order matrix element simulation script. The generated in-frame and out-of-frame Grace graphs containing the matrix values for 1,000,000 simulation values have been added to the repository. The inside_free_rotor() method has been added to always return True for the rotation generated by rotation_z_axis().
- Simplifications and fixes for the 1st degree frame order matrix calculation for the pseudo-ellipse. The compile_1st_matrix_pseudo_ellipse() function of the lib.frame_order.pseudo_ellipse module has been significantly simplified by shifting a lot of maths outside of the quadratic integration (this pattern is sketched at the end of this Changes list).
- Updated all the calculated 1st degree frame order matrix graphs for the pseudo-ellipse. The changes are due to the fixes in the lib.frame_order.pseudo_ellipse module.
- Simplifications for all of the torsionless pseudo-ellipse frame order matrix equations.
- Implemented the compile_1st_matrix_pseudo_ellipse_torsionless() function. This is for the lib.frame_order.pseudo_ellipse_torsionless module. The function will calculate the 1st degree in-frame frame order matrix for the torsionless pseudo-ellipse model.
- Created the Grace graphs for the torsionless pseudo-ellipse model 1st degree frame order matrix. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
- Added the isotropic cone model to the frame order matrix element simulation script. The generated in-frame and out-of-frame Grace graphs for the torsion angle cone_sigma_max, containing the matrix values for 1,000,000 simulated states, have been added to the repository. The inside_iso_cone() method has been created to check for the θx and θz angle violations from the rotation_hypersphere() method.
- Simplifications for the inside_*() methods of the frame order matrix element simulation script. The limit() method is now called only once outside of these methods and the maximum cone half-angles passed into the inside_*() methods. Although only slightly faster, this is mainly to simplify the code.
- Alphabetical ordering of methods in the frame order matrix element simulation script.
- Simplification of some of the pseudo-ellipse 2nd degree frame order matrix equations.
- More simplifications of the pseudo-ellipse 2nd degree frame order matrix equations.
- Integer to float conversions in part_int_daeg2_pseudo_ellipse_13(). This avoids integer to float conversions during execution, saving a little time for the pseudo-ellipse 2nd degree frame order matrix compilation.
- Removal of many repetitive calculations in the pseudo-ellipse 2nd degree frame order matrix equations.
- Simplifications of the pseudo-ellipse 1st degree frame order matrix functions. The xx, yy, and zz indices have been renamed to 00, 11, and 22 for consistency, and all sigma_max arguments have been dropped as they are not used.
- Small numerical changes for the pseudo-ellipse 2nd degree frame order matrix graphs. These are only for the first point close to zero and the changes are minimal, caused by the recent simplifications of the code.
- Created the Grace graphs for the free rotor pseudo-ellipse model 1st degree frame order matrix. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
- Implemented the compile_1st_matrix_pseudo_ellipse_free_rotor() function. This is for the lib.frame_order.pseudo_ellipse_free_rotor module. The function will calculate the 1st degree in-frame frame order matrix for the free rotor pseudo-ellipse model.
- Speedups and simplifications of the free rotor pseudo-ellipse 2nd degree frame order matrix equations.
- Added the torsionless isotropic cone model to the frame order matrix element simulation script.
- Implemented the compile_1st_matrix_double_rotor() function. This is for the lib.frame_order.double_rotor module. The function will calculate the 1st degree frame order matrix for the double rotor model.
- Created the Grace graphs for the double rotor model 1st degree frame order matrix. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
- Recreated all of the simulated pseudo-ellipse frame order matrix element graphs. These are now in the Kronecker product notation so that they will match the graphs calculated using the relax lib.frame_order.pseudo_ellipse module.
- Fix for the 22 element of the pseudo-ellipse 1st degree frame order matrix.
- Updated all of the pseudo-ellipse 1st degree frame order matrix graphs for the recent fix.
- Converted the Sobol' rotation matrices to float32 in the frame order target function. This is to conserve huge amounts of memory and allow more Sobol' points to be used. For example, for the models which use 3D Sobol' points (isotropic cone and pseudo-ellipse), a maximum of 50000 Sobol' points requires 50,000,000 rotation matrices to be created, using about 15 GB of RAM (the memory saving is sketched at the end of this Changes list).
- A few Frame_order system test updates for the float64 to float32 memory saving changes. The chi-squared values of 3 tests were slightly different.
- Bug fix for the activation of quadratic integration in the frame order auto-analysis. The calls to the frame_order.quad_int user function in the optimisation() method did not supply an argument so the user function was defaulting to False rather than the True value required.
- The frame order auto-analysis summary functions are now more robust. If the data pipe already exists for some reason, it is deleted prior to the new one being created.
- Changed the frame_order.quad_int user function argument default to True. This means that calling the user function without arguments will activate the quadratic integration rather than turning it off.
- Added the isotropic cone model frame order matrix simulation graphs for the cone opening angle θx.
- Created and added all of the torsionless isotropic cone simulated frame order matrix element graphs.
- Added the free rotor isotropic cone model to the frame order matrix element simulation script. The generated Grace graphs containing the matrix values for 1,000,000 simulation values have been added to the repository. The self.torsion_check variable has been created to allow the inside_iso_cone() method to skip the torsion angle check when its value is False.
- Added the torsionless pseudo-ellipse model to the frame order matrix element simulation script. The generated Grace graphs containing the matrix values for 1,000,000 simulation values have been added to the repository. The rotations are generated by the rotation_hypersphere_torsionless() method and the angle violations checked using the inside_pseudo_ellipse() method.
- Bug fix for the torsionless pseudo-ellipse 1st degree frame order matrix. The 11 element was of the wrong sign.
- Fixes for the torsionless pseudo-ellipse 1st degree frame order matrix element graphs.
- Added the free rotor pseudo-ellipse model to the frame order matrix element simulation script. This only required the self.torsion_check variable to be set to False. The model uses the inside_pseudo_ellipse() and rotation_hypersphere() methods.
- Fixes for free rotor isotropic cone 1st degree frame order matrix graphs calculated using relax. The 1st degree function accepts the cone opening angle θ rather than the order parameter S.
- Added the frame order matrix element graphs for the in-frame free rotor pseudo-ellipse model.
- Added the frame order matrix element graphs for the out-of-frame free rotor pseudo-ellipse model.
- Added support for the double rotor model to the frame order matrix element simulation script. The double rotation is constructed in the new rotation_double_xy_axes() method, and the checks for the violation of the two torsion angles are performed in the inside_double_rotor() method. In the main loop, the θ and φ angles correspond to sigma1 and sigma2, and the σ angle is unused.
- Fixes for all of the calculated double rotor model frame order matrix graphs. The X and Y angles were mixed up. The first torsion half-angle sigma1 corresponds to a y-axis rotation and the second, sigma2, corresponds to an x-axis rotation.
- Added the frame order matrix element graphs for the double rotor model.
- A divide by zero fix for the torsionless pseudo-ellipse. This is in the compile_2nd_matrix_pseudo_ellipse_torsionless() relax library function.
- A divide by zero fix for the free rotor pseudo-ellipse. This is in the compile_2nd_matrix_pseudo_ellipse_free_rotor() relax library function.
- The 1st angle for the calculated frame order matrix graphs is 0 for all non pseudo-ellipse models. This is for the frame_order_solution.py script. Only the pseudo-ellipse models where numerical integration is required fail for the angle of 0.0. Therefore the changing of the first angle from 0.0 to 0.01 only occurs for the pseudo-ellipse models. All graphs have been updated.
- The 1st pseudo-ellipse torsion angle value in the frame order matrix graphs is now 0.0. Only the cone opening angles set to 0.0 cause a failure in the pseudo-ellipse models, so the torsion angle is now allowed to start at exactly zero.
- Clean up of the frame order matrix element simulation script.
- Redesign of the free rotor isotropic cone frame order model - the order parameter has been replaced. From the frame order matrix element graphs in test_suite/shared_data/frame_order/sim_vs_pred_matrix, specifically Sijkl_iso_cone_free_rotor_in_frame_theta_x_calc.agr, Sijkl_iso_cone_free_rotor_axis2_1_3_theta_x_calc.agr, and Sijkl_iso_cone_free_rotor_out_of_frame_theta_x_calc.agr, it is clear that the symmetry of the order parameter after 120 degrees causes the 2nd degree frame order matrix to be incorrectly estimated. Therefore the S1 order parameter has been replaced with the original cone opening angle cone_theta. All parts of relax have been updated for this large conversion.
- Updated the frame order matrix element graphs for the free rotor isotropic cone fixes. The cone S1 parameter has been converted back to the original cone θ opening half-angle, allowing the 2nd degree frame order matrix elements to be properly calculated for all motions.
- Eliminated the lib.frame_order.iso_cone.populate_*() functions. The populate_1st_eigenframe_iso_cone() function was unused and incorrect, so it was deleted. The contents of the populate_2nd_eigenframe_iso_cone() function have been shifted into compile_2nd_matrix_iso_cone(), as a separate function is unnecessary. This now matches all the other lib.frame_order modules.
- Bug fix for the frame_order.simulate user function. The incorrect model number was being specified and hence the simulation was not starting from the optimised average domain position but rather the arbitrary position of the original structure.
- Manual Python 3 fixes for the dict.keys() method, which returns a list in Python 2 but a view object in Python 3. This matches r26519 in trunk.
- Python 3 fixes via 2to3 - the "while 1" construct has been replaced with "while True". The command used was: 2to3 -j 4 -w -f idioms .
- Python 3 fixes via 2to3 - the spacing around commas has been fixed. The command used was: 2to3 -j 4 -w -f ws_comma .
- Python 3 fixes via 2to3 - the xrange() function has been replaced by range(). The command used was: 2to3 -j 4 -w -f xrange . These Python 2 and 3 compatibility idioms are summarised in a sketch at the end of this Changes list.
- Started to create the Frame_order.test_pdb_model_rotor system test. This will be used to check that the PDB representations of the frame order motions are correct.
- Modified the frame_order.pdb_model user function backend to handle missing structural data. The create_ave_pos() function of the specific_analyses.frame_order.geometric module now checks that cdp.structure exists, and if not a warning is given and the PDB file creation is skipped.
- Fixes for the frame_order.pdb_model user function backend for when no data is present. The pipe_centre_of_mass() function of pipe_control.structure.mass module is now called with the missing_error flag set to False so that the PDB generation can continue with the CoM set to [0, 0, 0].
- The geometric representation part of the frame_order.pdb_model user function now checks parameters. This calls the specific_analyses.frame_order.checks.check_parameters Check object to make sure that all necessary parameters for the model exist.
- Completed the Frame_order.test_pdb_model_rotor system test. This now sets the rotor axis to the z-axis (with a printout to be sure), sets the torsion angle to zero for simplicity, creates a new data pipe and loads the PDB representation file, then checks all of the key atom coordinates.
- Fixes for the unit tests of the lib.frame_order.matrix_ops module for the free rotor isotropic cone. The S1 order parameter has been eliminated due to angles > π/2 causing the frame order matrix to be incorrectly predicted. Therefore all unit tests have been converted to use the cone opening angle θ instead. In addition, the test_compile_2nd_matrix_iso_cone_free_rotor_disorder test had previously been modified to pass with the incorrect frame order matrix by comparing to the half cone frame order matrix rather than the identity frame order matrix.
- Fix for inverted axes in the new Frame_order.test_pdb_model_rotor system test.
- Huge bug fix for the frame_order.pdb_model user function - the single axis direction was incorrect. In the PDB representation of the frame order motion for the rotor and isotropic cone models (rotor, free rotor, isotropic cone, free rotor isotropic cone, and torsionless isotropic cone), the X and Z axes were swapped. This is because the eigenframe of the motion was being incorrectly constructed via the lib.geometry.rotations.two_vect_to_R() function. For better control, the specific_analyses.frame_order.geometric.frame_from_axis() function has been created. This constructs a full motional eigenframe from the Z-axis (a generic version of this construction is sketched at the end of this Changes list). The problem was detected via the new Frame_order.test_pdb_model_rotor system test.
- Size fix for the rotor representation from the frame_order.pdb_model user function. The size problem was detected via the Frame_order.test_pdb_model_rotor system test. The rotors in the PDB representation were all fixed in size, and ignored the 'size' argument of the frame_order.pdb_model user function. The size argument is now passed into the add_rotors() function of the specific_analyses.frame_order.geometric module and passed on to the rotor() function of the lib.structure.represent.rotor module.
- Created the Frame_order.test_pdb_model_rotor2 system test to check for an offset pivot. The pivot is set to [1, 0, 1] so that the rotor axis is tilted -45 degrees in the xz-plane. And the size of the geometric object is set to 100 Angstrom for better testing of the sizes of the elements.
- Simplification of the Frame_order.test_pdb_model_rotor system test. The size is now programmatically handled.
- Created the Frame_order.test_pdb_model_iso_cone system test. This is for checking the PDB representation of the isotropic cone frame order model created by the frame_order.pdb_model user function. It checks both A and B representations.
- Fix for the cone size created by the frame_order.pdb_model user function. The 'size' argument was not being used at all for the cone size. It is now passed into the lib.structure.represent.cone.cone() function as the 'scale' argument.
- Small fix for the Frame_order.test_pdb_model_iso_cone system test for the 'B' representation.
- Fix for the representation label positions created by the frame_order.pdb_model user function. The 'size' argument was not being used at all for the representation title atoms. It is now passed into the add_titles() function as the displacement argument + 10 Angstrom.
- Printout fix for the axis in the Frame_order.test_pdb_model_iso_cone system test.
- Created the Frame_order.test_pdb_model_iso_cone_xz_plane_tilt system test. This checks the PDB file from the frame_order.pdb_model user function for the isotropic cone model with an xz-plane tilt.
- Renamed all of the Frame_order.test_pdb_model_* system tests to be more descriptive.
- Improvements for all of the Frame_order.test_pdb_model_* system tests. The rotate_from_Z() method has been introduced to simplify the determination of the 3D coordinates expected for the PDB file. This will allow for more advanced testing of the PDB for the cone models.
- Fixes for the printouts from the Frame_order.test_pdb_model_rotor_* system tests.
- Alphabetical ordering of the Frame_order system test methods.
- Fixes for all of the Frame_order system tests - the temporary directories are now being deleted. The system test base class tearDown() method is now being called to properly clean up after the tests.
- Created the Frame_order.test_pdb_model_pseudo_ellipse_z_axis system test. This demonstrates the correct atom coordinates in the PDB file created by the frame_order.pdb_model user function for the pseudo-ellipse model along the z-axis.
- Fixes for the checks in the Frame_order.test_pdb_model_* system tests. Atomic positions are now checked with self.assertAlmostEqual() to 3 places, and the residue and atom names and numbers are checked with self.assertEqual().
- Created the Frame_order.test_pdb_model_pseudo_ellipse_xz_plane_tilt system test. This checks the PDB file created by the frame_order.pdb_model user function for the pseudo-ellipse model with an xz-plane tilt. To properly construct the coordinates, the rotate_from_Z() method was modified to accept a rotation matrix argument to allow the geometric shape to be rotated.
- Modified the Frame_order.test_pdb_model_iso_cone_xz_plane_tilt system test to have a cone angle. The cone opening half-angle was previously 0.0. The test now checks the geometric object in the PDB file for a cone opening half-angle of 2.0.
- Modified the Frame_order.test_pdb_model_iso_cone_z_axis system test to have a cone angle. The cone opening half-angle was previously 0.0. The test now checks the geometric object in the PDB file for a cone opening half-angle of 2.0.
- Created two new system tests for the free rotor PDB representation file. This is the file from the frame_order.pdb_model user function. The two new system tests are Frame_order.test_pdb_model_free_rotor_z_axis and Frame_order.test_pdb_model_free_rotor_xz_plane_tilt.
- Created two new frame order system tests for the free rotor isotropic cone PDB representation file. These are the two PDB files from the frame_order.pdb_model user function. The two new system tests are Frame_order.test_pdb_model_iso_cone_free_rotor_z_axis and Frame_order.test_pdb_model_iso_cone_free_rotor_xz_plane_tilt.
- Created two new frame order system tests for the torsionless isotropic cone PDB representation file. These are the two PDB files from the frame_order.pdb_model user function. The two new system tests are Frame_order.test_pdb_model_iso_cone_torsionless_z_axis and Frame_order.test_pdb_model_iso_cone_torsionless_xz_plane_tilt.
- Created two new frame order system tests for the free rotor pseudo-ellipse PDB representation file. These are the two PDB files from the frame_order.pdb_model user function. The two new system tests are Frame_order.test_pdb_model_pseudo_ellipse_free_rotor_z_axis and Frame_order.test_pdb_model_pseudo_ellipse_free_rotor_xz_plane_tilt.
- Created two new frame order system tests for the torsionless pseudo-ellipse PDB representation file. These are the two PDB files from the frame_order.pdb_model user function. The two new system tests are Frame_order.test_pdb_model_pseudo_ellipse_torsionless_z_axis and Frame_order.test_pdb_model_pseudo_ellipse_torsionless_xz_plane_tilt.
- Created two new frame order system tests for the double rotor PDB representation file. These are the two PDB files from the frame_order.pdb_model user function. The two new system tests are Frame_order.test_pdb_model_double_rotor_z_axis and Frame_order.test_pdb_model_double_rotor_xz_plane_tilt.
- Added relax scripts and PDB files which match the Frame_order.test_pdb_model_* system tests. These were used to construct and visually check the tests in a molecular viewer. These could be a useful reference, so have been added to the repository.
- Simplified all of the Frame_order.test_pdb_model_* system tests. The atom, residue and 3D coordinate checking in all these methods has been shifted into the common check_pdb_model_representation() method. This dramatically decreases the amount of code in the system test file.
- Simplification for all of the Frame_order.test_pdb_model_* system tests. The model setup in all of these tests has been merged into the common setup_model() method. This not only removes a large quantity of repetitive code, but the new method can also be used for constructing future tests, for example for checking the frame_order.simulate user function.
- Created an initial version of the Frame_order.test_simulate_rotor_z_axis system test. This is to check the frame_order.simulate user function rotor model along the z-axis. It currently fails due to a bug in the user function.
- Fixes for the Frame_order.test_simulate_rotor_z_axis system test. Now 6 atoms are being created at X, -X, Y, -Y, Z, and -Z, 100 Angstrom from the origin. This is required so that the CoM is at the origin, to allow the CoM-pivot vector to be unchanged at [1, 0, 0] so that the axis α angle of π/2 creates an axis parallel to Z. The origin to atom distance check has also been loosened due to the PDB truncation artifact.
- Fix for the Frame_order.test_pdb_model_free_rotor_xz_plane_tilt system test. This was broken while implementing the Frame_order.test_simulate_rotor_z_axis system test. Instead of shifting the 6 atom structure so its CoM is the pivot of the motion when creating the atoms, now the Frame_order.test_simulate_rotor_z_axis system test sets the average domain translation vector to the pivot to achieve the same result. This preserves the z-axis orientation of the rotor models.
- Created the Frame_order.test_simulate_free_rotor_z_axis system test. This is to check the frame_order.simulate user function for the free rotor model along the z-axis.
- Created the Frame_order.test_simulate_iso_cone_z_axis system test. This is to check the frame_order.simulate user function for the isotropic cone model along the z-axis.
- Created the Frame_order.test_simulate_iso_cone_free_rotor_z_axis system test. This is to check the frame_order.simulate user function for the free rotor isotropic cone model along the z-axis.
- Created the Frame_order.test_simulate_iso_cone_torsionless_z_axis system test. This is to check the frame_order.simulate user function for the torsionless isotropic cone model along the z-axis.
- Created the Frame_order.test_simulate_pseudo_ellipse_z_axis system test. This is to check the frame_order.simulate user function for the pseudo-ellipse model along the z-axis.
- Created the Frame_order.test_simulate_iso_cone_xz_plane_tilt system test. This is to check the frame_order.simulate user function for the isotropic cone model with an xz-plane tilt.
- Created the Frame_order.test_simulate_pseudo_ellipse_free_rotor_z_axis system test. This is to check the frame_order.simulate user function for the free rotor pseudo-ellipse model along the z-axis.
- Created the Frame_order.test_simulate_pseudo_ellipse_xy_plane_tilt system test. This is to check the frame_order.simulate user function for the pseudo-ellipse model with an xz-plane tilt.
- Created the Frame_order.test_simulate_pseudo_ellipse_torsionless_z_axis system test. This is to check the frame_order.simulate user function for the torsionless pseudo-ellipse model along the z-axis.
- Fix for the Frame_order.test_simulate_pseudo_ellipse_xz_plane_tilt system test name. This was mislabelled as Frame_order.test_simulate_pseudo_ellipse_xy_plane_tilt.
- Redesign of the pymol.frame_order user function. This user function was still following the old design in the relax trunk. It has been updated for the frame_order_cleanup branch, whereby the frame_order.pdb_model user function has been split up and the positional distribution has been replaced by the Brownian simulation user function frame_order.simulate.
- Better checking for the non-moving domain setup. The frame_order.pdb_model user function will now raise a RelaxError if the frame_order.ref_domain user function has not been called to set up the non-moving domain.
- Updated the frame_order.ref_domain user function for the current branch design. This user function was quite out of date. The alignment tensor checks have been removed, to allow this to be used in the absence of base data. And the user function description has been updated.
- Updated all frame order system tests for the frame_order.ref_domain user function requirement.
- Expanded all of the Frame_order.test_simulate_* system tests. Two atoms have been added to the origin [0, 0, 0], one in the moving domain, the other in the reference non-moving domain. The positions of these atoms are checked to make sure that the domain systems are correctly handled.
- Expanded the double rotor model description in the frame_order.select_model user function.
- Added the pipe_name argument to the frame order check_model() function. This is for the specific_analyses.frame_order.checks module.
- Converted the specific_analyses.frame_order.checks module to the new Check object design. This follows from http://wiki.nmr-relax.com/Relax_source_design#The_check_.2A.28.29_functions and the changes significantly simplify the checking objects.
- Improved checking for the frame order generate_pivot() function. The check_model() checking object is now called to make sure the frame order model has been specified, as this is essential for this function.
- Created two system tests for the frame_order.simulate user function for the double rotor model. These are Frame_order.test_simulate_double_rotor_mode1_z_axis and Frame_order.test_simulate_double_rotor_mode2_z_axis.
- Created two system tests for the frame_order.simulate user function for the double rotor model. These are Frame_order.test_simulate_double_rotor_mode1_xz_plane_tilt and Frame_order.test_simulate_double_rotor_mode2_xz_plane_tilt.
- Added relax scripts which match the Frame_order.test_simulate_* system tests. These are the tests of the frame_order.simulate user function. These were used to construct and visually check the Brownian simulation and PDB model representation in a molecular viewer. These could be a useful reference, so have been added to the repository.
- Fix for the frame order auto-analysis when only the 'rigid' model is optimised. The final summary table printout for the number of Sobol' points used was failing as there were no models in the table. The table is now only printed out if non-rigid models are present in the model list.
- Introduced the nested_params_ave_dom_pos argument to the frame order auto-analysis. This allows the average domain position to be set to no rotations and translations rather than taking the average position from the rotor or free-rotor model. This can be useful when large motions are present causing the rigid model to have unreasonable domain positions.
- Fix for the frame_order.permute_axes user function description to allow the manual to be compiled. The table caption containing the user function name was causing the LaTeX compilation to fail. Therefore the captions have been rewritten to avoid the user function name.
- Modified the frame order system test check_chi2() method to test the statistics.model user function. This causes all of the Frame_order.test_cam_* system tests to fail, as the user function backend is not implemented for the frame order analysis.
- Implemented the frame order analysis backend for the statistics.model and statistics.aic user functions. This simply required aliasing the specific analysis API common _get_model_container_cdp() method to get_model_container().
- Bug fix for the frame order specific analysis API base_data_loop() method. This was looping over non-existent PCS and RDC data. Now the alignment ID is checked for in the interatomic data container 'rdc' data structure and the spin container 'pcs' data structure, as well as values of None, before yielding the data.
- Created a large set of system tests for implementing the frame_order.distribute user function. This user function will be similar to frame_order.simulate. However instead of creating a PDB file with models from a pseudo-Brownian simulation, the frame_order.distribute user function will generate a PDB file of models forming a uniform distribution of structures covering the full frame order motional space. The new system tests are: Frame_order.test_distribute_double_rotor_mode1_xz_plane_tilt, Frame_order.test_distribute_double_rotor_mode1_z_axis, Frame_order.test_distribute_double_rotor_mode2_xz_plane_tilt, Frame_order.test_distribute_double_rotor_mode2_z_axis, Frame_order.test_distribute_free_rotor_z_axis, Frame_order.test_distribute_iso_cone_z_axis, Frame_order.test_distribute_iso_cone_xz_plane_tilt, Frame_order.test_distribute_iso_cone_torsionless_z_axis, Frame_order.test_distribute_pseudo_ellipse_xz_plane_tilt, Frame_order.test_distribute_pseudo_ellipse_z_axis, Frame_order.test_distribute_pseudo_ellipse_free_rotor_z_axis, Frame_order.test_distribute_pseudo_ellipse_torsionless_z_axis, and Frame_order.test_distribute_rotor_z_axis. These are aliases for the equivalent Frame_order.test_simulate_* system tests, which have had the 'type' keyword argument added, defaulting to 'sim', which allows switching between the frame_order.simulate and frame_order.distribute user functions. The concept behind these system tests is the same for both user functions, so the code is shared.
- Created the front-end of the frame_order.distribute user function. This is a copy and modification of the frame_order.simulate user function, as the concepts are similar.
- Small modification of the frame_order.simulate user function. The GUI file opening dialog wildcard selectors are now set to all PDB file types (plain text, bzip2 compressed, and gzip compressed).
- Added the frame_order.distribute user function to the auto-analysis results output. This will allow both the pseudo-Brownian simulation and uniform distribution PDB files to be available to the user in all results directories (excluding the intermediate results for speed).
- Implemented the back-end of the frame_order.distribute user function. This follows the design of the pseudo-Brownian simulation frame_order.simulate user function. The specific_analyses.frame_order.uf.distribute() function has been created as a modified copy of the simulate() function of the same module. This simply performs checks and assembles the data, passing it into the new lib.frame_order.simulate.uniform_distribution() function, which itself is a modified copy of the brownian() function in the same module.
- Introduced the max_rotations argument into the frame_order.distribute user function. This is used to prevent the user function from running forever. This happens whenever a cone opening angle or torsion angle is zero, and hence the random sampling of the rotational space will never find rotations within the motional distribution (the sampling cap is sketched at the end of this Changes list).
- Improved control of the frame_order.distribute user function in the frame order auto-analysis. The maximum number of rotations can now be set, and the argument for the total states for the distribution has been shortened.
- Speedup of the Frame_order.test_auto_analysis system test. After the introduction of the frame_order.distribute user function into the auto-analysis, the test was taking far too long to complete. Now the distribution arguments are set to low values to allow the test to pass in under a minute.
- Changed the default relax results compression type to bzip2 in the frame order auto-analysis. This was set to no compression for speeding up some system tests, however the system tests can set this for themselves.
- The Frame_order.test_auto_analysis system test now sets the results file compression type to bzip2.
- Changed the default max_rotations argument value to 100,000 in the frame_order.distribute user function. This decrease from one million is so that the user function completes in a reasonable amount of time.
- The frame_order.distribute user function now warns when the maximum number of rotations is reached.
- Deleted a number of Frame_order.test_distribute* system tests. These are the four double rotor model tests. The frame_order.distribute user function cannot operate on these test cases as one of the two torsion angles is set to zero in the tests.
- Fix to allow Monte Carlo simulations to be repeated in the frame order analysis. The code for checking for pre-existing Monte Carlo simulation data structures and raising a RelaxError if anything is found has been deleted.
- Fix for a fatal bug preventing the frame order analysis from being run on a multi-processor system. The multi-processor code was calling the count_sobol_points() function of the specific_analyses.frame_order.optimisation module to give feedback when calling the minimise.execute or minimise.calculate user functions. However this was run in the slave command run() method, hence it would be executed on the slave. The problem is that count_sobol_points() performs a number of checks on the current data pipe, however the slaves do not have any data pipes set up.
- Added the new 'atom_id' argument to the frame_order.distribute user function. This uses the new inverse selection functionality recently introduced into the trunk to delete all structural data not matching the atom_id from the copy of the loaded structural data string prior to generating the distribution of structures.
- Bug fix for the frame order target function (introduced recently). The copy.deepcopy() function is now used for all numpy input data to avoid the data being modified between function calls. This is important for missing RDC and PCS data, which is sent in as NaN values. In the target function __init__() method, the NaN values are replaced by 0.0 after the self.missing_rdc and self.missing_pcs structures have been created by checking for NaN values. However the recent specific_analyses.frame_order.optimisation change in the Frame_order_minimise_command slave command to print out the number of integration points resulted in the target function being initialised twice, causing all NaN values to be 0.0 in the second initialisation. Hence all missing data was being treated as real data with values of 0.0 (this pattern is sketched at the end of this Changes list).
- Created a new skeleton chapter in the relax manual for the frame order analysis.
- Added a theory section to the new frame order chapter. This is taken from an in-preparation supplement.
- Rearrangement of the frame order chapter in the manual. The theory section has been spun out into its own frame_order_theory.tex LaTeX file for better organisation.
- Added two more sections to the frame order chapter of the manual. This includes a frame order modelling section and PCS numerical integration section. Both are from a supplement from an in-preparation manuscript.
- Added a DOI and ISBN number to the bibliography.
- Moved the frame_order_theory.tex LaTeX file into the frame_order directory.
- Shifted the frame order model derivations into their own 'Advanced topics' chapter.
- Added the frame order sample scripts used in the CaM-IQ analysis.
- Added an introduction for the frame order chapter of the manual.
- Added a 'Data analysis' section to the frame order chapter of the manual. This includes the N-state and frame order analysis scripts required to perform a full analysis.
- Editing of the data analysis section of the frame order chapter of the manual. A PCS structural error figure has been added, all the text improved, and the scripts made to match those in sample_scripts/frame_order/.
- Added a section to the end of the frame order chapter about the long computation times.
- The 'scons clean' target now removes all LaTeX *.aux files. The docs/latex/frame_order/ directory is now also being checked for *.aux files.
- Removed many unnecessary references to relax.
- Removed lots of useless comments about book references.
- Added some images missing from the frame order chapter of the manual.
- Avoided a doubly defined label in the manual.
- Removed some duplicated text in the frame order models chapter of the manual. This is duplicated from the frame order analysis chapter.
- Indentation fix for allowing the API documentation to be properly compiled.
- Added a patch file for fixing Epydoc version 3.0.1. This is needed to allow the dot graph file names to be unique (by no longer truncating to 30 characters), and to allow Epydoc to handle newer Graphviz versions.
- Improvements for the release checklist document. The backporting of the CHANGES file to trunk is now more obvious, and instructions for fixing Epydoc have been added.
- Clean up of some of the release instructions (for using vim).
- Added error catching to the find_unused_imports.py developer script.
- Fix for the error catching in the find_unused_imports.py developer script. The numerous pylint warnings are also sent to STDERR.
- Removed the printout of pylint STDERR messages in the find_unused_imports.py developer script.
- Elimination of a number of wildcard imports from some frame order timing scripts. This is to avoid excessive function imports.
- Removal of an unused import from the user_functions.frame_order module.
- Removal of unused imports from the test_suite/shared_data/frame_order/simulation scripts.
- Updated some unused frame order scripts to use the new minimise user function design.
- Unused import clean up in the test_suite/shared_data/curve_fitting/numeric_topology directory. All the scripts in this directory have been cleaned up to remove unused imports. In one case, commented out code was replaced with an 'if 0:' statement to silence the unused import warnings from the devel_scripts/find_unused_imports.py script.
- Unused import clean up in the test_suite/shared_data/curve_fitting/profiling directory. The scripts in this directory have been cleaned up to remove unused imports.
- Added an exception system to the find_unused_imports.py developer script. Sometimes pylint will give an "Unused import" warning for imports that are needed by the module. Therefore an exception list of the file name and module has been created to skip these warnings. The list covers the dep_check module and all of the profiling_*.py scripts in the directory test_suite/shared_data/dispersion/profiling/.
- Added a copyright notice to the find_unused_imports.py development script. This is mainly to indicate how out of date the script will be in the future.
- A directory can now be supplied on the command line for the find_unused_imports.py devel script.
- Changed the imports in the test_monte_carlo_mean.py script. This inconsequential change is to avoid false positives from the find_unused_imports.py devel script.
- Modifications of the test suite script for calculating synthetic CPMG data. The imports in cpmg_synthetic.py are now all used, rather than being commented out. This allows the find_unused_imports.py devel script to pass.
- Unused import cleanup of all scripts in the test_suite/shared_data/dispersion/ directories. This either removes unused imports or uncomments temporarily unused code while keeping it deactivated.
- Removed unused imports from the scripts in the test_suite/shared_data/frame_order subdirectories.
- Removed unused imports from the spectrum system test base module.
- Removed unused imports from the relax_disp system test base module.
- Clean up of all unused imports in the system test scripts.
- Removed unused imports from the structure system test base module.
- Changed how the import of lib.regex in the test_regex unit tests is used. The module is no longer stored in the TestCase class namespace, but is rather called directly within the unit test.
- Changed the import of pipe_control.state in the test_state unit test module.
- Removed unused imports from the unit tests.
- Added another exception to the find_unused_imports.py devel script. This is for the test_suite.unit_tests._lib._geometry.test_rotations module which programmatically obtains the imports using globals().
- Added a workaround or hack for exceptions for circular imports in the find_unused_imports.py script. This is currently for the test_suite.unit_tests._lib.test___init__ and test_suite.unit_tests._lib._geometry.test___init__ modules.
- Removal of unused imports from the GUI test modules.
- Removed all unused imports from the pipe_control package.
- Added import exceptions for the lib.compat module in the find_unused_imports.py devel script.
- Added import exceptions for the lib.xml module in the find_unused_imports.py devel script. These are needed because of eval() function calls on XML stored Python data structures.
- Removed all unused imports from the relax library package.
- Removed all unused imports from the target_functions package.
- Removed unused imports from the developer scripts.
- Removed all unused imports from the specific_analyses package.
- Removed all unused imports from the auto_analyses package.
- Removed all unused imports from the numdifftools extern package.
- Removal of the last unused import from the target_functions package.
- Fix for the PCS system tests on old Python versions. The self.assertAlmostEqual() function cannot compare None values in earlier Python versions.
- MS Windows fix for the Frame_order.test_generate_rotor2_distribution system test. The locale.setlocale() function call for correctly setting up a spinning progress meter was failing on MS Windows. The error is now caught and the locale setting skipped.
- Added Python 3.5 to the manual C module compilation script.
- Added Python 3.5 to the Python multiversion test suite script.
- Changes to the introduction of the frame order theory chapter of the manual.
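The following minimal sketches illustrate a few of the more technical changes listed above. They are illustrative only and are not verbatim copies of the relax code; any names not mentioned in the items above are assumptions.
The temporary data pipe pattern used when reading the intermediate results can be sketched as a relax script fragment, assuming the standard pipe.create, results.read, pipe.delete, and pipe.switch user functions (the pipe name, pipe type, and file paths are illustrative):

    # Remember the current data pipe so it can be restored afterwards.
    from pipe_control.pipes import cdp_name
    original_pipe = cdp_name()

    # Load the results file into a throw-away data pipe.
    pipe.create(pipe_name='temp results', pipe_type='frame order')
    results.read(file='results', dir='intermediate_results/grid_rotor')

    # ... extract the required values from the current data pipe ...

    # Clean up and restore the original pipe.
    pipe.delete(pipe_name='temp results')
    pipe.switch(pipe_name=original_pipe)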
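The Kronecker outer product accumulation used in the frame order matrix element simulation script follows this pattern (numpy only; the random rotation generator below is a stand-in for the script's rotation_*() methods and is not exactly uniform):

    from numpy import kron, zeros
    from numpy.linalg import det, qr
    from numpy.random import normal

    N = 10000                 # illustrative number of simulated states
    daeg = zeros((9, 9))      # accumulator for the 2nd degree frame order matrix

    for i in range(N):
        # Stand-in random rotation: orthogonalise a Gaussian matrix and
        # flip one column if needed to obtain a proper rotation.
        R, _ = qr(normal(size=(3, 3)))
        if det(R) < 0:
            R[:, 0] = -R[:, 0]

        # Accumulate the Kronecker outer product R (x) R.
        daeg += kron(R, R)

    # The averaged matrix <R (x) R> in the 9x9 notation used internally by relax.
    daeg /= N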
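The "shifting maths outside of the integration" speedups for the pseudo-ellipse functions reduce to the standard pattern below (scipy.integrate.quad and the integrand are purely illustrative):

    from math import cos, pi, sin
    from scipy.integrate import quad

    theta_x, theta_y = 0.5, 1.0    # illustrative cone half-angles

    # Slow: the constant prefactor is recomputed on every integrand evaluation.
    def integrand_slow(phi):
        return cos(theta_x) * cos(theta_y) * sin(phi)**2

    # Faster: factor the constant part out and multiply the result once.
    const = cos(theta_x) * cos(theta_y)
    def integrand_fast(phi):
        return sin(phi)**2

    value = const * quad(integrand_fast, 0.0, 2.0*pi)[0]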
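The float32 conversion of the precomputed Sobol' rotation matrices halves their memory footprint, as the following sketch shows (the array shape is illustrative):

    from numpy import float32, float64, zeros

    n = 1000000                                  # illustrative number of precomputed rotations
    R_sobol = zeros((n, 3, 3), dtype=float64)    # 72 bytes per 3x3 matrix
    print(R_sobol.nbytes / 1e6)                  # 72.0 MB

    R_sobol = R_sobol.astype(float32)            # single precision storage
    print(R_sobol.nbytes / 1e6)                  # 36.0 MB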
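The manual and 2to3 based Python 3 fixes boil down to idioms such as (the dictionary contents are illustrative):

    # dict.keys() returns a list in Python 2 but a view object in Python 3,
    # so force a list wherever indexing or repeated use is required.
    params = {'tm': 1e-8, 's2': 0.8}
    keys = list(params.keys())

    # xrange() does not exist in Python 3, so range() is used throughout.
    for i in range(10):
        pass

    # The "while 1" construct has been replaced by the clearer "while True".
    while True:
        break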
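The frame_from_axis() idea of constructing a full motional eigenframe from a single Z-axis can be sketched generically as follows (this is not a copy of the relax function):

    from numpy import array, cross, transpose
    from numpy.linalg import norm

    def frame_from_z_axis(axis):
        """Return a right-handed orthonormal frame whose third column is the given axis."""
        z = axis / norm(axis)

        # Choose a reference vector that is not parallel to the axis.
        ref = array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else array([0.0, 1.0, 0.0])

        # Build the perpendicular x and y axes.
        x = cross(ref, z)
        x = x / norm(x)
        y = cross(z, x)

        # The frame columns are the x, y, and z axes.
        return transpose(array([x, y, z]))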
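The max_rotations safety cap of the frame_order.distribute user function works along the following lines (a simplified stand-in using a single cone test rather than the full set of frame order models):

    from numpy import arccos, clip
    from numpy.linalg import det, qr
    from numpy.random import normal

    def uniform_cone_states(theta_max, total=1000, max_rotations=100000):
        """Collect random rotations whose tilt of the z-axis stays within theta_max (rad)."""
        states = []
        for attempt in range(max_rotations):
            # Illustrative random rotation generator.
            R, _ = qr(normal(size=(3, 3)))
            if det(R) < 0:
                R[:, 0] = -R[:, 0]

            # Accept the state if the rotated z-axis lies within the cone.
            tilt = arccos(clip(R[2, 2], -1.0, 1.0))
            if tilt <= theta_max:
                states.append(R)
            if len(states) == total:
                break
        else:
            print("Maximum number of rotations reached with only %i states." % len(states))
        return states

If the cone half-angle is zero, almost no random rotation is ever accepted, so without the cap the sampling loop would run forever.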
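The deepcopy fix for the NaN-flagged missing data follows this pattern (the arrays and function are illustrative, not the actual target function code):

    from copy import deepcopy
    from numpy import array, isnan, nan

    def setup_pcs(pcs):
        """Record which PCS values are missing, then zero them in a private copy."""
        pcs = deepcopy(pcs)      # never modify the caller's data in place
        missing = isnan(pcs)     # flag the missing values first
        pcs[missing] = 0.0       # safe to zero once the mask is stored
        return pcs, missing

    data = array([0.5, nan, -1.2])
    values, missing = setup_pcs(data)

    # The caller's array still carries the NaN flags, so a second
    # initialisation of the target function sees the missing data correctly.
    assert isnan(data[1])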
Bugfixes
- Fix for the alignment tensor MC simulation objects in the data store for Python 3.1. The sim_indices object was sometimes created with the range() function, however the returned object does not possess an index() method in Python 3.1. Therefore it is now converted to a standard list (see the sketch at the end of this Bugfixes list).
- Cosmetic bug fix for the running of the test suite in the GUI. The list of skipped tests in the status object was not being reinitialised for each run of the test suite. This only affects the GUI where the tests can be run multiple times. The result was that the list of skipped tests was always being printed out, even if no tests were skipped.
- Fix for the numpy version number checking in the dep_check module. The version_comparison() function is now being used to compare numbers, replacing the previous hack.
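The sim_indices fix reduces to the following pattern (the length and index are illustrative):

    # In Python 3.1 the object returned by range() has no index() method,
    # so a plain list is stored in the data store instead.
    sim_indices = list(range(500))
    print(sim_indices.index(250))    # works on all supported Python versions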
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
Version 3 of relax
relax 3.3 series
relax 3.3.9
Description
This is a minor feature release with improvements to the automatic relaxation dispersion protocol for repeated CPMG data, support for Monte Carlo or Bootstrap simulation of RDC and PCS Q factors, a huge speedup of Monte Carlo simulations in the N-state model analysis, and geometric mean and standard deviation functions added to the relax library.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.3.9
(30 September 2015, from /trunk)
http://svn.gna.org/svn/relax/tags/3.3.9
Features
- Improvements to the automatic relaxation dispersion protocol for repeated CPMG data.
- Support for Monte Carlo or Bootstrap simulation of the RDC and PCS Q factors.
- Huge speedup of Monte Carlo simulations in the N-state model analysis.
- Geometric mean and standard deviation functions added to the relax library.
Changes
- Wrote a method to store parameter data and dispersion curves, for the protocol of repeated CPMG analysis. This is to prepare for analysis in other programs. The method loops through the data pipes and writes the data out. It then writes a bash script that will concatenate the data in a matrix array style, for reading and processing in other programs. Task #7826: Write an Python class for the repeated analysis of dispersion data.
- Added the writing out of a collection script for χ2 and rate parameters. Task #7826: Write an Python class for the repeated analysis of dispersion data.
- The collection bash script now removes spins which have not been fitted. Task #7826: Write an Python class for the repeated analysis of dispersion data.
- Fix for the use of " instead of ' in the bash script. Task #7826: Write an Python class for the repeated analysis of dispersion data.
- Added an option to the minimise class function to perform Monte Carlo error analysis. Task #7826: Write an Python class for the repeated analysis of dispersion data.
- Added a printout when minimising Monte Carlo simulations. Task #7826: Write an Python class for the repeated analysis of dispersion data.
- Added additional test to system test Relax_disp.test_bug_23186_cluster_error_calc_dw() to prove that Bug #23619 is invalid. Bug #23619: Stored chi2 sim values from Monte Carlo simulations does not equal normal chi2 values.
- Small fix for the shell script for collecting data files, so that the program "column" is no longer used at the end. The line width becomes too large for column to handle. Task #7826: Write an Python class for the repeated analysis of dispersion data.
- Added a unit test that triggers the bug. The test was added as test_delete_spin_all, and can be run with: relax -u _pipe_control.test_spin. Bug #23642: When deleting all spins for a residue, an empty placeholder is where select=True.
- Added sample data and an analysis script that will eventually show that there is not much difference in the sample statistics used for comparing the output of two very similar datasets. This is a multiple comparison test with many t-tests at once, where the familywise error is controlled by the Holm method. Even if the values are close to equal, and within the standard deviation, this procedure will reject up to 20% of the null hypotheses. This is not deemed a suitable method. Bug #23644: monte_carlo.error_analysis() does not update the mean value/expectation value from simulations.
- Added Monte Carlo simulations to the N_state_model.test_absolute_T system test. This is to demonstrate a failure of the simulations in certain N-state model setups.
- Added a missing call to monte_carlo.initial_values in the N_state_model.test_absolute_T system test. This fixes the N_state_model.test_absolute_T system test, showing that there is not a problem with the Monte Carlo simulations.
- Added Monte Carlo and Bootstrap simulation support for the RDC and PCS Q factor calculations. The pipe_control.rdc.q_factors() and pipe_control.pcs.q_factors() functions have been modified to support Monte Carlo and Bootstrap simulations. The sim_index argument has been added to allow the Q factor for the given simulation number to be calculated. All of the Q factor data structures in the base data pipe now have *_sim equivalents for permanently storing the simulation values (this storage pattern is sketched at the end of this list). For the simulation values, all the warnings have been silenced.
- Added simulation support for the RDC and PCS Q factors in the N-state model analysis. This is for both Monte Carlo and Bootstrap simulation. The simulation RDC and PCS values, as well as the simulation back calculated values are now stored via the minimise_bc_data() function of specific_analyses.n_state_model.optimisation in the respective spin or interatomic data containers. The analysis specific API methods now send the sim_index value into minimise_bc_data(), as well as the pipe_control.rdc.q_factors() and pipe_control.pcs.q_factors() functions.
- Silenced a warning in the N-state model optimisation if the verbosity is set to zero. This removes a repetitive warning from the Monte Carlo or Bootstrap simulations.
- Huge speed up for the Monte Carlo simulations in the N-state model analyses. This speed up is also for Bootstrap simulations and the frame order analysis. The change affects the monte_carlo.initial_values user function. The alignment tensor _update_object() method was very inefficient when updating the Monte Carlo simulation data structures. For each simulation, each of the alignment tensor data structures was being updated for all simulations. Now only the current simulation is updated. This speeds up the user function by many orders of magnitude.
- Added functions for calculating the geometric mean and standard deviation to the relax library. These are the geometric_mean() and geometric_std() functions of the lib.statistics module. The implementation is designed to be fast, using numpy array arithmetic rather than Python loops (see the numpy sketch at the end of this list).
- Created a simple unit test for the new lib.statistics.geometric_mean() function.
- Added a unit test for the new lib.statistics.geometric_std() function.
- Made a summarize function to compare results. Task #7826: Write an Python class for the repeated analysis of dispersion data.
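The permanent *_sim storage pattern described for the Q factor simulation support can be sketched as follows; the object and attribute names here are invented for illustration and are not the exact relax data pipe structures:

```python
# Illustrative per-simulation Q factor storage with one slot per Monte Carlo
# or Bootstrap simulation, filled as each sim_index is calculated.
class Pipe:
    """A stand-in for the relax base data pipe (hypothetical)."""

pipe = Pipe()
n_sim = 500
pipe.q_rdc = 0.12                    # the normal Q factor
pipe.q_rdc_sim = [None] * n_sim      # permanent storage for the simulations
sim_index = 42
pipe.q_rdc_sim[sim_index] = 0.135    # Q factor for simulation number 42
```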
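A rough numpy sketch of the kind of array arithmetic described for the new lib.statistics functions is shown below; the exact normalisation used by the real geometric_std() is not stated above, so treat this purely as an illustration rather than the actual implementation:

```python
# Geometric mean and standard deviation computed through the logarithms,
# so that only numpy array operations and no Python loops are needed.
from numpy import array, exp, log, mean, std

def geometric_mean(values):
    """Geometric mean as the exponential of the mean of the logs."""
    return exp(mean(log(values)))

def geometric_std(values):
    """Geometric standard deviation as the exponential of the std of the logs."""
    return exp(std(log(values), ddof=1))

values = array([1.0, 2.0, 4.0])
print(geometric_mean(values))   # 2.0
print(geometric_std(values))    # 2.0 for this example
```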
Bugfixes
- Fix committed so that an empty spin placeholder has the select flag set to False. Bug #23642: When deleting all spins for a residue, an empty placeholder is where select=True.
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.3.8
Description
This is a minor bugfix release which allows the relax GUI to be used on screens with the low resolution of 1024x768 pixels.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.3.8
(2 April 2015, from /trunk)
http://svn.gna.org/svn/relax/tags/3.3.8
Features
N/A
Changes
- Fix for the pipe_control.reset.reset() function when resetting the GUI in non-standard contexts. This is mainly for debugging scripts when simulating a GUI and hence the GUI reset() method does not exist.
- Created a GUI memory management debugging script for the align_tensor.init user function. This repetitively calls the reset, pipe.create and align_tensor.init user functions, and opens the GUI element for setting alignment tensor elements (the Sequence window). The pympler muppy_log file shows no memory leaks for these user functions on Linux systems.
Bugfixes
- Resized all fixed-sized GUI wizards to fit on 1024x768 pixel wide displays. The problem was reported by Lora Picton in the thread starting at http://thread.gmane.org/gmane.science.nmr.relax.user/1813. Both the spin loading wizard of the spin viewer window and the relaxation data loading wizard used currently in the model-free analysis tab and BMRB export page were fixed. These both had the y-dimension set to 800 pixels, hence parts of the window would be out of view.
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.3.7
Description
This is a major feature and bugfix release. New features include the statistics.aic and statistics.model user functions, plotting API advancements, huge speed ups for the assembly of atomic coordinates from a large number of structures, the sorting of sequence data in the internal structural object for better structural consistency, conversion of the structure.mean user function to the new pipe/model/molecule/atom_id design, and improvements to the rdc.copy and pcs.copy user functions. Bugs fixed include the incorrect pre-scanning of old scripts identifying the minimise.calculate user function as the old minimise user function, Python 3 fixes, and the failure in reading CSV files in the sequence.read user function. Many more features and bugfixes are listed below.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.3.7
(13 March 2015, from /trunk)
http://svn.gna.org/svn/relax/tags/3.3.7
Features
- Creation of the statistics.aic and statistics.model user functions for calculating and printing out different statistics.
- Addition of new infrastructure for future support for plotting data using Veusz.
- Huge speed up for the assembly of atomic coordinates from a large number of structures.
- Sequence data in the internal structural object can now be sorted for better structural consistency.
- The structure.read_pdb user function now skips water molecules, avoiding the creation of hundreds of new molecules when reading X-ray structures.
- Conversion of the structure.mean user function to the new pipes/models/molecules/atom_id design and the addition of the set_mol_name and set_model_num arguments to allow the mean structure to be stored alongside the other molecules.
- The monte_carlo.setup user function now raises a RelaxError if the number of simulations is less than 3, avoiding subsequent errors.
- Expanded the functionality of the rdc.copy and pcs.copy user functions, allowing for the operation on two data pipes with different spin sequences, skipping deselected spins and interatomic data containers, printing out all copied data for better feedback, and copying all alignment metadata.
- The sequence.attach_protons user function now lists all the newly created spins.
- Clarification of the RDC and PCS Q factors, with the printouts and XML file variable names modified to indicate whether the normalisation is via the tensor size (2Da^2(4 + 3R)/5) or via the sum of data squared, allowing for clearer RDC vs. PCS comparisons.
- Expansion of the align_tensor.copy user function to allow all tensors to be copied between different data pipes.
- Huge speed up for loading results and state files with Monte Carlo simulation alignment tensors.
- Improvements for the rdc.weight and pcs.weight user functions. The spin_id argument can now be set to None to allow all spins or interatomic data containers to be set.
- Improvements for the pcs.structural_noise user function. The check for the presence of PCS data for points to skip now includes checking for PCS values of None. And the output Grace file now also includes the spin ID string as a string or comment value which can be displayed in the plot when desired.
Changes
- Created the N_state_model.test_statistics system test. This system test will be used to implement the new statistics user function class consisting of the statistics.model and statistics.aic user functions for calculating and storing the [chi2, n, k] parameters and Akaike's Information Criterion statistic respectively.
- Added the structure.align user function to the renaming translation table. This is so that relax identifies structure.align user function calls in scripts and raises an error saying that the structure.superimpose user function should be used instead.
- Added the office-chart-pie set of Oxygen icons for use in the new statistics user function class.
- Created the empty statistics user function class. This adds the infrastructure for creating the statistics user functions.
- Small fix for the structure.add_model user function description.
- Created the frontend for the statistics.model user function.
- Created a wizard graphic for the statistics user functions. This is based on a number of Oxygen icons, as labelled in the SVG layer names.
- The statistics.model user function now uses the new statistics wizard graphic.
- Created the empty pipe_control.statistics module. This will be used for the backend of all of the statistics user functions.
- Fixes for the EPS versions of some Oxygen icons used in the relax manual. This is the actions.document-preview-archive and actions.office-chart-pie Oxygen icons used for the user function icons. The files were not created correctly in the Gimp. The export to EPS requires the width and height to be both set to 6 mm, and the X and Y offsets to zero. This allows the icon bounding boxes and sizes to match the other EPS icons.
- Implemented the backend of the statistics.model user function. The implementation heavily uses the specific analysis API, calling the calculate(), model_loop(), print_model_title(), model_statistics() and get_model_container() methods to do all of the work. The last of these API methods is yet to be implemented.
- Fix for the statistics.model user function backend. The API methods are now called with the model_info argument set to a keyword argument so that it is always passed in as the correct argument.
- Fix for the specific analysis API _print_model_title_global() common method. This method was horribly broken, as it was never used. The new statistics.model user function together with the N-state model uncovers this breakage.
- Defined the get_model_container() specific analysis API method. This base method raises a RelaxImplementError, therefore each analysis type must implement its own method (or use an API common method).
- Implemented the specific analysis API _get_model_container_cdp() common method. This is to be used as the get_model_container() method for returning the current data pipe object as the model container. This is for the global models where the model information is stored in the pipe object rather than in spin containers.
- The N-state model now uses the _get_model_container_cdp() method. This is aliased as the get_model_container() specific analysis API method.
- Fix for the N_state_model.test_statistics system test - the probabilities were missing from k.
- Expanded the printouts from the statistics.model user function to include the statistics.
- Updated the N-state model num_data_points() function to use more modern integer incrementation.
- Fix for the N_state_model.test_statistics system test. The deselected spins and interatomic data containers are now taken into account for the RDC and PCS data point counts.
- Implementation of the statistics.aic user function. This is very similar to the statistics.model user function - the code was copied and only slightly modified. The new user function will calculate the current chi-squared value per model, obtain the model statistics, calculate the AIC value per model, and store the AIC value, chi-squared value and number of parameters in the appropriate location for the model in the relax data store (a worked AIC example is given at the end of this list).
- Created the empty lib.plotting.veusz module for graphing using Veusz.
- Shifted the lib.software.grace module to lib.plotting.grace. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/7532 and http://thread.gmane.org/gmane.science.nmr.relax.devel/7536.
- Created XY-data functions for the plotting API of the relax library. These are currently copies of the heads of the lib.plotting.grace functions write_xy_data() and write_xy_header(). These lib.plotting.api functions (write_xy_data() and write_xy_header()) are set up to use the grace functions.
- Converted all of the Grace plotting in relax to use the plotting API of the relax library.
- Shifted the pipe_control.grace.write() function. This is now the format independent pipe_control.plotting.write_xy() function. The format argument has been added and this defaults to 'grace'. The grace.write user function has been updated to use the new backend.
- Updated the pcs.structural_noise user function to use the relax library plotting API.
- Fixes for the new pipe_control.plotting.write_xy() function. This includes missing imports which should have moved from pipe_control.grace, as well as shifting the axis_setup() function from the pipe_control.grace module into the pipe_control.plotting module.
- The rdc.corr_plot user function backend now uses the relax library plotting API. The write_xy_data() and write_xy_header() functions from lib.plotting.api are now used instead of the equivalent pipe_control.grace functions which no longer exist.
- More import fixes for the new pipe_control.plotting.write_xy() function.
- Fix for the backend of the relax_disp.plot_disp_curves user function. The lib.plotting.api functions write_xy_data() and write_xy_header() require the format argument.
- Updated the relative stereochemistry auto-analysis to use the relax library plotting API.
- Huge speed up for the assembly of atomic coordinates from a large number of structures. The internal structural object validate_models() method was being called once for each structure when assembling the atomic coordinates. This resulted in the _translate() internal structural object method, which converts all input data to formatted strings, being called hundreds of millions of times. The problem was in lib.structure.internal.coordinates.assemble_atomic_coordinates(), in that the one_letter_codes() method, which calls validate_models(), was called for each molecule encountered. The solution was not to validate models in one_letter_codes().
- Huge speed up of the internal structural object validate_models() method. The string formatting to create pseudo-PDB records and the large number of calls to the _translate() method for atomic information string formatting have been shifted to only occur when the atomic information does not match. Instead the structural information is directly compared within a large if-else statement.
- Created the Structure.test_atomic_fluctuations_no_match system test. This demonstrates a failure in the operation of the structure.atomic_fluctuations user function when the supplied atom ID matches no atoms.
- Fix for the Structure.test_atomic_fluctuations_no_match system test. The structure.atomic_fluctuations user function will now raise a RelaxError when no data corresponding to the atom ID can be found, so the test now checks for this.
- Created the unit test infrastructure for the lib.structure.internal.object module.
- Created the Test_object.test_add_atom_sort unit test. This is from the _lib._structure._internal.test_object unit test module. The test will be used to implement the sorting of input data by residue number in the add_atom() internal structural object method. This will mean that added atoms will be placed in residue sequence order, so that output PDB files are correctly ordered.
- Implementation of methods for sorting sequence data in the internal structural object. The information is sorted at the molecule container level using the new MolContainer._sort() private method. This uses the _sort_key() helper method which determines what the new order should be. This is used as the 'key' argument for the Python sort() method. Instead of list shuffling, new lists in the correct order are created. Although not memory efficient, this might be faster than shuffling.
- The loading of structural data now sorts the data if the merge flag is True. The pack_structs() method will now call the new MolContainer._sort() method if the data is being merged. This is to ensure that the final structural data is correctly ordered.
- Fixes for a number of Structure system tests for the sorted structural data changes.
- Modified the structure.read_pdb user function backend to skip water molecules. All residues with the name 'HOH' are now skipped when loading PDB files. This is implemented in the MolContainer.fill_object_from_pdb() method, and a RelaxWarning is printed listing the residue numbers of all skipped waters.
- Modified the Structure.test_read_pdb_1UBQ system test for the new water skipping feature. As the structure.read_pdb user function will now skip waters, the last atom in the structural object will now be the last ubiquitin atom and not the last water atom.
- Modified the Test_object.test_add_atom_sort unit test to check atom connectivities. This is from the _lib._structure._internal.test_object unit test module. The problem is that the MolContainer._sort() method for sorting the structural data currently does not correctly update the bonded data structure.
- Completed the implementation of the sorting of structural data in the internal structural object. The MolContainer._sort() private method now changes the connect atom indices in the bonded data structure to the new sorted indices.
- Created new system tests for implementing new functionality for the structure.mean user function. This includes the Structure.test_mean_models and Structure.test_mean_molecules. The idea is to convert the user function to the new pipes/models/molecules/atom_id design. This will allow molecules with non-identical sequences and atomic compositions to be averaged. The set_mol_name and set_model_num arguments from the structure.read_pdb, structure.read_gaussian, and structure.read_xyz user functions will also be implemented to allow the mean structure to be stored alongside the other molecules.
- Some fixes for the checks in the Structure.test_mean_molecules system test.
- Fix for the structure.mean user function call in the Structure.test_mean_models system test.
- Expanded the checking in all the Structure.test_mean* system tests to cover all atomic information. This includes the Structure.test_mean, Structure.test_mean_models, and Structure.test_mean_molecules system tests. All structural data is now carefully checked to make sure that the structure.mean user function operates correctly.
- Converted the structure.mean user function to the new pipe/model/molecule/atom_id design. This allows the average structure calculation to work on atomic coordinates from different data pipes, different structural models, and different molecules. The user function backend uses the new pipe_control.structure.main.assemble_structural_coordinates() function to assemble the common atom coordinates, molecule names, residue names, residue numbers, atom names and elements. All this information is then used to construct a new molecule container for storing the average structure in the internal structural object. To allow for the averaged structural data to be stored, the internal structural object method add_coordinates() has been created. This is modelled on the PDB, Gaussian, and XYZ format loading methods. The internal structural object mean() method is no longer used, but remains for anyone who might have interest in the future (though as it is untested, bit-rot will be a problem).
- Small correction for the structure.read_pdb user function description.
- Created the Structure.test_read_merge_simultaneous system test. This is to demonstrate a failure in the structure.read_pdb user function when merging multiple molecules from one file into one molecule simultaneously with a single user function call.
- Added some error checking for the monte_carlo.setup user function. A RelaxError is now raised if the number of simulations is less than 3. This prevents Python errors when later calling the monte_carlo.error_analysis user function.
- Test suite fixes for the error checking in the monte_carlo.setup user function. The number of simulations has been increased from either 1 or 2 in all tests to the minimal number of simulations (3).
- Created the Structure.test_bug_23293_missing_hetatm system test. This is to catch bug #23293, the PDB HETATM loading error whereby the last HETATM record is sometimes not read from the PDB file.
- Small fix for the chain IDs in the Structure.test_bug_23293_missing_hetatm system test.
- Created the Structure.test_multi_model_and_multi_molecule system test. This is used to check the loading and writing of a multi-model and multi-molecule PDB file. The test shows that this functions correctly.
- Modified the Structure.test_multi_model_and_multi_molecule test to check for model consistency. This is just for better test suite coverage of the handling of PDB structural data.
- Created the Structure.test_bug_23294_multi_mol_automerge system test. This is used to catch bug #23294, the automatic merging of PDB molecules resulting in an IndexError. It reads in the 'in.pdb' PDB file attached to the bug report, now named 'bug_23294_multi_mol_automerge.pdb', to show the IndexError. The test also checks the structure.write_pdb user function to make sure that the output PDB file contains a single merged molecule.
- Added the PDB file to the repository for the Structure.test_bug_23294_multi_mol_automerge system test.
- Fix for the Structure.test_bug_23294_multi_mol_automerge system test. The MASTER PDB record has been added to the data to check for, as this will be produced by the structure.write_pdb user function.
- Improved the RelaxWarning for missing atom numbers in the PDB CONECT records. This is for the structure.read_pdb user function. Now only one warning is given for the entire PDB file listing all of the missing atom numbers rather than one warning per missing atom. This can significantly compact the warnings, removing a lot of repetition.
- Improved the quality of the printouts from the structure.read_pdb user function. This also affects the structure.read_gaussian and structure.read_xyz user functions. The messages about adding new molecules or merging with existing molecules have been significantly improved. The text with the model information is now only printed if the model number is present in the PDB file or has been supplied by the user.
- Fixes for all of the PDB documentation HTML links in the docstrings. The PDB have shifted their documentation from http://www.wwpdb.org/documentation/format33/v3.3.html to http://www.wwpdb.org/documentation/file-format/format33/v3.3.html, stupidly without redirects. This will create dead links in the relax API documentation at http://www.nmr-relax.com/api/3.3/, as well as the older API documentation (http://www.nmr-relax.com/api/2.2/, http://www.nmr-relax.com/api/3.0/, http://www.nmr-relax.com/api/3.1/, http://www.nmr-relax.com/api/3.2/).
- Created the Structure.test_bug_23295_ss_metadata_merge system test. This is to catch bug #23295, the PDB secondary structure HELIX and SHEET records not updated when merging molecules. This uses the '2BE6_secondary_structure.pdb' structure file and 'test.py' relax script contents as the test, checking the HELIX and SHEET records.
- Added one more check to the Structure.test_bug_23295_ss_metadata_merge system test. The test would pass if no HELIX or SHEET records were to be written to the PDB file.
- Fix for the Structure.test_bug_23295_ss_metadata_merge system test and additional printouts.
- Fix for the Structure.test_pdb_combined_secondary_structure system test. The SHEET PDB record check was incorrect and was checking for the improperly formatted atom name field, which has now been fixed in relax.
- Large speed up of the structure.web_of_motion user function. With the introduction of the _sort() internal structural object method and it being called by the add_atom(), the structure.web_of_motion user function was now painfully slow. As sorting the structural data is unnecessary for the backend of this user function, the add_atom() boolean argument 'sort' has been added to turn the sorting on and off, and the structure.web_of_motion backend now sets this to False.
- Fix for the internal structural object unit test Test_object.test_add_atom_sort. This test of the _lib._structure._internal.test_object unit test module now requires the sort argument set to True when calling the add_atom() method.
- Improvement for a RelaxError message when assembling structural data but no coordinates can be found.
- Created a series of unit tests for implementing a new internal structural object feature. These tests check a new 'inv' argument for the selection() structural object method for allowing all atoms not matching the atom ID string to be selected.
- Implemented the new 'inv' argument for the selection() structural object method. This allows for all atoms not matching the atom ID string to be selected. The unit tests for this argument now all pass, validating the implementation.
- Improvement for the structure.mean user function. This can now be used to store an averaged structure in an empty data pipe. Previously structural data needed to be present in the current data pipe for the user function to work.
- Created a system test to show a limitation of the rdc.copy user function. Currently, it cannot work when spin systems in two data pipes are different. The system test will be used to implement the support.
- Simplification of the new Rdc.test_rdc_copy_different_spins system test. This no longer tests the deletion of interatomic data containers by the spin.delete user function, something which is not implemented.
- Some more fixes for the Rdc.test_rdc_copy_different_spins system test. The residue.delete and not spin.delete user function is required to delete the sequence data.
- Another small fix for the new Rdc.test_rdc_copy_different_spins system test. The rdc.copy user function requires the pipe_to argument to be supplied in this case.
- Expansion of the Rdc.test_rdc_copy_different_spins system test. The interatomic data containers are now defined via the interatom.define user function, which requires the spin.element user function to set up the element information. A printout has also been added to demonstrate a failure in the pipe_control.interatomic.interatomic_loop() function in handling the correct data pipe.
- Some more modifications for the Rdc.test_rdc_copy_different_spins system test. One of the interatomic data containers does not have RDC data, as it is not present in the original data pipe, hence this is checked for. And the printouts have more formatting.
- Expanded the functionality of the rdc.copy user function. The user function will now operate on two data pipes with different spin sequences. If the interatomic data container is missing from the target data pipe, a warning is given. And if the interatomic data container is not present in the source data pipe, nothing will be copied.
- Modified the rdc.copy user function to print out all copied RDC values and errors.
- Created the Rdc.test_rdc_copy_back_calc system test. This will be used to implement the back_calc Boolean argument for the rdc.copy user function to allow not only measured, but also back-calculated RDC values to be copied.
- Modified the rdc.copy printout of RDCs to occur for each alignment ID.
- Implemented the back_calc argument for the rdc.copy user function. This allows the back-calculated RDCs to be additionally copied together with the real value and error.
- Small formatting change for the rdc.copy user function printouts.
- Created the Pcs.test_pcs_copy_different_spins system test. This will be used to show a limitation of the pcs.copy user function in that it cannot copy data between two data pipes with different molecule, residue, and spin sequences.
- Added a printout of the alignment ID for the pcs.copy user function. This is to match the rdc.copy user function.
- Created the Pcs.test_pcs_copy_back_calc system test. This will be used to implement the back_calc Boolean argument for the pcs.copy user function to allow not only measured, but also back-calculated PCS values to be copied. It matches the equivalent Rdc.test_rdc_copy_back_calc system test.
- Implemented the back_calc argument for the pcs.copy user function. This allows the back-calculated PCSs to be additionally copied together with the real value and error. The implementation simply copies that of the rdc.copy user function.
- Added full per-alignment data printouts to the pcs.copy user function to match rdc.copy. The feedback is important to know what was actually copied.
- Modified the pcs.copy user function to handle different spin sequence between data pipes.
- Fixes for the Pcs.test_pcs_copy_different_spins and Pcs.test_pcs_copy_back_calc system tests.
- Fix for the pcs.copy user function for a recently introduced problem. The data pipe for the spin_loop() function must be supplied.
- The pcs.copy user function now skips deselected spins.
- Modified the N_state_model.test_data_copying system test to skip deselected spins.
- Added more checks to the three Pcs.test_pcs_copy* system tests.
- Added more checks to the three Rdc.test_rdc_copy* system tests.
- Created the Rdc.test_calc_q_factors_no_tensor system test. This is to demonstrate a failure in the rdc.calc_q_factors user function when no alignment tensor is present. In addition, the test is also triggering an earlier problem of spin isotope information being missing. However the isotope is not required if the tensor is absent.
- The Rdc.test_rdc_copy_* system tests now check for the 'rdc_data_types' data structure. This is in the Rdc.test_rdc_copy_different_spins and Rdc.test_rdc_copy_back_calc system tests and shows that the rdc.copy user function fails to duplicate this information.
- The Rdc.test_rdc_copy_* system tests now check for the 'absolute_rdc' data structure. This is in the Rdc.test_rdc_copy_different_spins and Rdc.test_rdc_copy_back_calc system tests and shows that the rdc.copy user function fails to duplicate this information as well.
- Expanded the rdc.copy user function to copy the RDC data type and absolute RDC flag information.
- Created the Rdc.test_corr_plot system test to check the rdc.corr_plot user function. This shows that this poorly tested function works correctly.
- Created the Pcs.test_corr_plot system test to check the pcs.corr_plot user function. This user function is poorly tested, and this test triggers a series of bugs.
- Added the 'title' and 'subtitle' arguments to the pcs.corr_plot user function. This problem was detected by the new Pcs.test_corr_plot system test. The pcs.corr_plot user function now matches the rdc.corr_plot user function in terms of arguments.
- Completed the Pcs.test_corr_plot system test. The file contents are now known and have been carefully checked in Grace.
- Clarification of the RDC and PCS Q factors. This affects the rdc.calc_q_factors and pcs.calc_q_factors user functions, as well as all other operations involving the calculation of Q factors. The printouts have been modified to clarify whether the normalisation is via the tensor size (2Da^2(4 + 3R)/5) or via the sum of data squared, and the separation of the two is now clearer. This allows for better RDC vs. PCS comparisons. In addition, the data pipe variable names have been updated to reflect the normalisation, so it is instantly known when looking at the XML contents of results or save files which normalisation was used. The backwards compatibility hooks have been modified to support the data pipe variable name changes.
- The align_tensor.copy user function 'tensor_from' argument can now be None. This is to enable the copying of all alignment tensors from one data pipe to another.
- Created the Align_tensor.test_copy_pipes system test. This is to show a problem in the align_tensor.copy user function when copying all tensors between data pipes.
- Modified the pipe_control.align_tensor.align_data_exists() function to handle no tensor IDs. If no tensor ID is supplied, this will then return True if any alignment data exists.
- Improvement for the align_tensor.copy user function. The user function has been modified to allow all alignment tensors to be copied between two data pipes. This allows the Align_tensor.test_copy_pipes system test to pass.
- Fixes for the align_tensor.copy user function argument unit tests. The tensor_from and tensor_to arguments can now be None.
- Created the Align_tensor.test_copy_pipes_sims system test. This demonstrates a failure of the align_tensor.copy user function when Monte Carlo simulated tensors are present.
- Deleted the data_store.align_tensor.AlignTensorSimList.append() method. This replacement list method was proving fatal when copy.deepcopy() is called on the alignment tensor object. The change allows the Align_tensor.test_copy_pipes_sims system test to pass.
- Huge speed up for loading results and state files with Monte Carlo simulation alignment tensors. The reading of the alignment tensor component of XML formatted results and state files has been modified. Previously the data_store.align_tensor.AlignTensorData._update_object() method for updating the alignment tensor object (for values, errors, simulations) was being called once for each Monte Carlo simulation. Now it is called only once for all simulations. In one test, the reading of the save file with 500 simulations dropped from 253.7 to 10.0 seconds.
- Added an extra check for the assembly of RDC data. This is in the pipe_control.rdc.return_rdc_data() function and the check is for any unit vectors set to None, which is a fatal condition.
- Improved the RelaxError message from the RDC assembly function when unit vectors are None.
- Added a new warning to the interatom.unit_vectors user function if data is missing. This is to aid in detecting problems earlier before unit vectors of None are encountered by other parts of relax.
- Modified the rdc.corr_plot user function to skip deselected interatomic data containers. This would normally happen as no back-calculated data is normally present. However, if data has been copied from elsewhere, this may not always be the case.
- Created the Sequence.test_bug_23372_read_csv system test. This is to catch bug #23372, the sequence.read failure with CSV files. It uses a truncated version of the CSV data file attached to sr #3219.
- Converted the lib.sequence.validate_sequence() function to the checking function design. This is the checking function design documented at Relax_source_design#The_check_.2A.28.29_functions. The validate_sequence() function has been renamed to check_sequence_func() and the checking object is called check_sequence. It removes the string processing hack to convert RelaxErrors to RelaxWarnings in the lib.sequence.read_spin_data() function, avoiding strange messages such as "RelaxWarning: ror: The sequence data in the line..." as seen in the Sequence.test_bug_23372_read_csv system test.
- Small typo fix for the Sequence.test_bug_23372_read_csv system test.
- Added the raise_flag argument to the lib.sequence.read_spin_data() function. This is to allow the missing data RelaxError to be deactivated.
- Modified the spectrum.read_intensities user function backend to be more robust. This affects the generic formatted peak lists, via the lib.spectrum.peak_list.intensity_generic() function. The peak list reading will now continue reading the file after corrupted lines have been encountered.
- Python 3 improvement for the rdc.corr_plot and pcs.corr_plot user functions. The world view is now set in floating point numbers. In Python 2, the math.ceil() and math.floor() functions return floats, whereas in Python 3 these functions return integers. The behaviour is now consistent in both Python versions, fixing a few system tests (illustrated at the end of this list).
- Modified the internal formatting of the data section of the Grace 2D graph files. This affects the lib.plotting.grace.write_xy_data() function. The formatting is now more consistent, with the X value now set to a fixed number of decimal places, and hence will no longer change between Python 2 and 3. The data is now all right justified as well, for easier reading. All affected system tests have been updated for the new format.
- Epydoc documentation fix for the lib.structure.pdb_write._handle_atom_name() function.
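As a worked example of the value stored by the new statistics.aic user function, assuming the standard chi-squared form of Akaike's Information Criterion, AIC = chi2 + 2k (the numbers below are invented):

```python
# Hypothetical statistics for a single model, purely for illustration.
chi2 = 12.5          # chi-squared value of the optimised model
k = 3                # number of model parameters
aic = chi2 + 2 * k   # Akaike's Information Criterion
print(aic)           # 18.5
```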
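The Python 2 versus Python 3 difference behind the rdc.corr_plot and pcs.corr_plot world view change can be seen directly; this sketch shows the language behaviour being worked around rather than the relax code itself:

```python
# math.ceil() and math.floor() return floats on Python 2 but integers on
# Python 3, so the values are explicitly cast to float for consistency.
from math import ceil, floor

world_min = float(floor(2.3))   # 2.0 on both Python 2 and Python 3
world_max = float(ceil(7.8))    # 8.0 on both Python 2 and Python 3
print(world_min, world_max)
```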
Bugfixes
- Big bug fix for the N-state model num_data_points() function. This is from the specific_analyses.n_state_model.data module. This code was very much out of date. It was expecting an ancient behaviour where the spin container 'pcs' variable and interatomic data container 'rdc' variable were lists of floats. However these were converted many years ago to dictionaries with keys set to the alignment IDs. The result was that no RDCs nor PCSs were counted as base data points, so the function would in most cases return a value of zero.
- Fixes for the printout from the pipe_control.pcs.return_pcs_data() function. The number of PCSs printed out was including values of None when data was missing for one alignment. These values of None are no longer counted.
- Fixes for the printout from the pipe_control.rdc.return_rdc_data() function. The number of RDCs printed out was including values of None when data was missing for one alignment. These values of None are no longer counted.
- More fixes for the RDC and PCS count printouts from the corresponding data assembly functions. Sometimes the RDC or PCS value could be present as None. This is now detected and the count is not incremented.
- More fixes for the PCS count printout from the pipe_control.pcs.return_pcs_data() function. The check for None values was incorrect.
- Fixes for the N-state model num_data_points() function. The deselected interatomic data containers are no longer used for counting RDC data. And the skipping of deselected spin containers for the PCS is now via the spin_loop() skip_desel argument.
- Fix for bug #23259, the broken user functions in the prompt UI with the RelaxError: The user function 'X' has been renamed to 'Y'. The problem was that only the first part of the user function name, for example 'minimise' from 'minimise.calculate', was being checked in the user function name translation table. As the minimise user function has been renamed to minimise.execute, 'minimise' is in the translation table and hence minimise.calculate was being identified as the minimise user function. Now the full user function name is reconstructed before checking the translation table.
- Fixes for the lib.structure.internal.coordinates.assemble_coord_array() function. The problem was uncovered by the Structure.test_atomic_fluctuations_no_match system test. The function can now handle no data being passed in.
- Fixes for the pipe_control.structure.main.assemble_structural_coordinates() function. The function will now raise a RelaxError if no structural data matching the atom ID can be found. The problem was uncovered by the Structure.test_atomic_fluctuations_no_match system test. The fix affects the structure.atomic_fluctuations, structure.displacement, structure.find_pivot, structure.rmsd, structure.superimpose, and structure.web_of_motion user functions.
- Fix for bug #23265, the failure of the edit buttons in the user function GUI windows. The problem was that the column titles of the window opened by the edit button were being incorrectly handled if the dimensions of the window were not supplied.
- Fix for bug #23288, the failure of the structure.read_pdb user function when simultaneously merging multiple molecules from one file. The set_mol_name and set_model_num arguments are now converted to lists equal to the length of the read_mol and read_model arguments simultaneously, if supplied.
- Small fix for the structure.write_pdb user function for handling old relax state and results files.
- Fix for bug #23293, the PDB HETATM loading error whereby the last HETATM record is sometimes not read from the PDB file. The problem was two-fold. Firstly the internal structural object _parse_mols_pdb() method for separating a PDB file into distinct molecules was terminating too early when a new molecule is found, so that the last PDB record is not appended to the records list for the molecule. Secondly the write_pdb() method was not handling the PDB sequential serial number correctly.
- Fix for bug #23294, the automatic merging of PDB molecules resulting in an IndexError. Now if only a single molecule name is supplied, this will be used for all molecules in the PDB file. The result is that the structural data will all be automatically merged into a single molecule. This merging is communicated to the user via the current printouts.
- Bug fix for the SHEET PDB records created by the structure.write_pdb user function. The current and previous atom parts of the record were not being correctly formatted. This was simply using the %4s formatting string. However the PDB atom format is rather more complicated. To handle this, the new _handle_atom_name() helper function has been added to the lib.structure.pdb_write module. This is now used in the atom() and sheet() functions for consistently formatting the atom name field.
- Fix for bug #23295, the PDB secondary structure HELIX and SHEET records not updating when merging molecules. The problem was that the algorithm for changing the molecule numbers for the helix and sheet metadata when calling the structure.read_pdb user function was far too simplistic. Therefore the logic has been completely rewritten. Now the helix and sheet metadata are stored in temporary data structures in the _parse_pdb_ss() method. As the molecules are being read from the PDB records, new data structures containing the original molecule numbers and new molecule numbers are created. The helix and sheet metadata is then stored in the internal structural object via the pack_structs() method, and the molecule indices of the metadata changed based on the two molecule number remapping data structures.
- Python 3 fix for the new internal structural object MolContainer._sort() method. The list() builtin function is required to convert the output of the range() function into a true list in Python 3, so that the list.sort() method can be accessed (this and the related Python 3 fixes below are sketched after this list).
- Python 3 fix for the Test_msa.test_central_star unit test. This is from the _lib._sequence_alignment.test_msa unit test module. The logic of range() + range() does not work in Python 3, so the range function calls are now wrapped in list() function calls to convert to the correct data structure type.
- Python 3 fix for the internal structural object MolContainer._sort_key() method. This method is used as the key for the sort() function. However in Python 3, the key cannot be None. So now if the residue number is None, the value of 0 is returned instead.
- Python 3 fix for the pipe_control.structure.main.assemble_structural_coordinates() function. This affects most of the structure user functions. This was another case of requiring the list() built in function to create a list object from an iterator.
- Another Python 3 list() fix for the structure user functions. This time the problem was in the pipe_control.structure.main.sequence_alignment() function.
- Fix for a RelaxError message from the internal structural object when validating models.
- Bug fix for the results.write user function when loading relax state files. The results.write user function can load not only the results file consisting of a single data pipe, but also relax state files if only a single pipe is present. However this was causing the current data pipe and other pipe-independent data (sequence alignments and the GUI) to be overwritten, just as when loading a state file. Now only the data from the data pipe will be loaded and the pipe independent data in the state file will be ignored.
- Fix for the rdc.write user function. The check for the missing rdc_data_types variable in the interatomic containers is now more comprehensive and checks for the presence of the alignment ID.
- Big bug fix for the pipe_control.interatomic.interatomic_loop() function. This was identified in the Rdc.test_rdc_copy_different_spins system test. The problem was that the pipe argument was being ignored when looking up the spin containers. Hence if the pipe being worked on was not the current data pipe, and the spin sequences were not identical, the function would fail. This mainly affects the rdc.copy user function.
- Fix for the pcs.read user function. The problem was caught by the new Pcs.test_pcs_copy_different_spins system test. If the spin system does not exist in the current data pipe, but data for it is present in the PCS file, the pcs.read user function would terminate in a TypeError.
- Fixes for the rdc.calc_q_factors user function for when no alignment tensor is present. This was caught by the Rdc.test_calc_q_factors_no_tensor system test. Now if no tensor is present, a warning is given and the 2Da^2(4 + 3R)/5 normalised Q factor is skipped. Also, if a tensor is present but the spin isotope information is missing, then RelaxSpinTypeError errors are raised.
- Fix for the pcs.corr_plot user function when the spin containers have no element information.
- Fix for bug #23372, the sequence.read failure with CSV files. The problem was that the sep argument was not being passed all the way to the backend lib.io.extract_data() function.
- Fix for the lib.sequence.check_sequence checking object. Although rarely used, the check for the spin number was incorrect and half of the checks were instead for the residue number. This is a classic copy and paste error where the residue name and number checks were copied but not completely converted to spin name and numbers.
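The cluster of Python 3 fixes above, the list() conversions and the sort key handling of None, follow a simple pattern which is sketched here with illustrative names rather than the actual structural object code:

```python
# range() is a lazy object in Python 3, so build a real list before sorting,
# and map a residue number of None to 0 so that the sort key is comparable.
indices = list(range(5))
indices.sort(reverse=True)

def sort_key(res_num):
    """Return a comparable residue number, treating None as 0."""
    return 0 if res_num is None else res_num

residue_numbers = [3, None, 1, 2]
print(sorted(residue_numbers, key=sort_key))   # [None, 1, 2, 3]
```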
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.3.6
Description
This is a minor feature and bugfix release. It includes the addition of the new structure.sequence_alignment user function which can use the 'Central Star' multiple sequence alignment algorithm or align based on residue numbers, saving the results in the relax data store. The assembly of structural coordinates used by the structure.align, structure.atomic_fluctuations, structure.com, structure.displacement, structure.find_pivot, structure.mean, structure.rmsd, structure.superimpose and structure.web_of_motion user functions has been redesigned around this new user function. It will use any pre-existing sequence alignments for the molecules of interest, use no sequence alignment if only structural models are selected, and default to a residue number based alignment if the structure.sequence_alignment user function has not been used. Bugs fixed include a system test failure on Mac OS X, and I∞ parameter text files and Grace graphs now being produced by the relaxation curve-fitting auto-analysis for the inversion recovery and saturation recovery experiment types. Many more details are given below.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.3.6
(4 February 2015, from /trunk)
http://svn.gna.org/svn/relax/tags/3.3.6
Features
- The Needleman-Wunsch sequence alignment algorithm now calculates an alignment score.
- Implementation of the central star multiple sequence alignment (MSA) algorithm.
- Implementation of a residue number based multiple sequence alignment (MSA) algorithm.
- Large speed up of the molecule, residue, and spin selection object, affecting all parts of relax.
- Sequence alignments are now saved in the relax data store.
- Important formatting improvement for the description in the GUI user function windows, removing excess empty lines after lists.
- Creation of the structure.sequence_alignment user function. The MSA algorithm can be set to either 'Central Star' or 'residue number', the pairwise sequence alignment algorithm to 'NW70' for the Needleman-Wunsch algorithm, and the substitution matrix to one of 'BLOSUM62', 'PAM250', or 'NUC 4.4'.
- More advanced support for different numpy number types in the lib.xml relax library module. This allows numpy int16, int32, float32, and float64 objects to be saved in the relax data store and retrieved from relax XML save and results files.
- Merger of structure.align into the structure.superimpose user function.
- The assembly of common atomic coordinates by the structure user functions now takes sequence alignments into account. The logic is to first use a sequence alignment from the relax data store if present, use no sequence alignment if coordinates only come from structural models, or fall back to a residue number based alignment. This affects the structure.align, structure.atomic_fluctuations, structure.com, structure.displacement, structure.find_pivot, structure.mean, structure.rmsd, structure.superimpose and structure.web_of_motion user functions.
- Large improvements in the memory management for all parts of the GUI.
Changes
- Spelling fixes for the CHANGES document.
- Created the Structure.test_align_molecules2 system test. This is to demonstrate a failure condition in the structure.align user function.
- Large simplification of the atomic coordinate assembly code in the internal structural object. This is in the lib.structure.internal.coordinates.assemble_coord_array() function. The logic of the function has recently changed due to the introduction of the pairwise sequence alignments. This caused a lot of code to now be redundant, and also incorrect in certain cases. This simplification fixes the problem caught by the Structure.test_align_molecules2 system test.
- Fix for the Structure.test_displacement system test - the molecule IDs needed updating.
- Created the Structure.test_align_molecules_end_truncation system test. This is to demonstrate a failure of the common residue detection algorithm using multiple pairwise alignments in the backend of the structure.align and other multiple structure based user functions.
- Created empty unit test infrastructure for testing the lib.structure.internal.coordinates module.
- Created the Test_coordinates.test_common_residues unit test. This is from the _lib._structure._internal.test_coordinates unit test module. The test shows that the lib.structure.internal.coordinates.common_residues() function is working correctly. However the printout, which is not caught by the test, is incorrect.
- Modified the lib.structure.internal.coordinates.common_residues() function. It now accepts the seq argument which will cause the gapped sequence strings to be returned. This is to allow for checking in the unit tests.
- Created the Test_align_protein.test_align_multiple_from_pairwise unit test. This is in the _lib._sequence_alignment.test_align_protein unit test module. This test checks the operation of the lib.sequence_alignment.align_protein.align_multiple_from_pairwise() function, which does not yet exist.
- Simplified the Test_coordinates.test_common_residues unit test by removing many residues. This is from the _lib._structure._internal.test_coordinates unit test module.
- Expanded the docstring of the Test_align_protein.test_align_multiple_from_pairwise unit test. This is from the _lib._sequence_alignment.test_align_protein unit test module.
- Attempt at fixing the lib.structure.internal.coordinates.common_residues() function. This function still does not work correctly.
- Renamed the Test_align_protein.test_align_multiple_from_pairwise unit test. This is now the Test_msa.test_central_star unit test of the _lib._sequence_alignment.test_msa unit test module (it was originally in the _lib._sequence_alignment.test_align_protein unit test module). This is in preparation for converting the lib.sequence_alignment.align_protein.align_multiple_from_pairwise() function into the lib.sequence_alignment.msa.central_star() function.
- Added the lib.sequence_alignment.align_protein.align_multiple_from_pairwise() function. This should have been committed earlier. The function is only partly implemented.
- Initial lib.sequence_alignment.msa.central_star() function. This was moved from lib.sequence_alignment.align_protein.align_multiple_from_pairwise().
- Import fix for the _lib._sequence_alignment.test_align_protein unit test module.
- Added the verbosity argument to lib.sequence_alignment.align_protein.align_pairwise(). If set to zero, all printouts are suppressed.
- The Needleman-Wunsch sequence alignment algorithm now calculates and returns an alignment score. This is in the lib.sequence_alignment.needleman_wunsch.needleman_wunsch_align() function. The score is calculated as the sum of the Needleman-Wunsch matrix elements along the traceback path.
- The protein pairwise sequence alignment function now returns the alignment score. This is in the lib.sequence_alignment.align_protein.align_pairwise() function. The score from the Needleman-Wunsch sequence alignment algorithm is simply passed along.
- Fix for the Test_msa.test_central_star unit test. This is from the _lib._sequence_alignment.test_msa unit test module. Some of the real gap matrix indices were incorrect.
- Complete implementation of the central star multiple sequence alignment algorithm. This includes all four major steps - pairwise alignment between all sequence pairs, finding the central sequence, iteratively aligning the sequences to the gapped central sequence, and introducing gaps in previous alignments during the iterative alignment. The correctness of the implementation is verified by the Test_msa.test_central_star unit test of the _lib._sequence_alignment.test_msa module (a simplified sketch of the central sequence selection step is given at the end of this Changes list).
- Fixes for the unit tests of the _lib._sequence_alignment.test_align_protein module. The Test_align_protein.test_align_pairwise_PAM250 unit test was accidentally duplicated due to a copy and paste error. And the lib.sequence_alignment.align_protein.align_pairwise() function now also returns the alignment score.
- Fixes for the unit tests of the _lib._sequence_alignment.test_needleman_wunsch module. The lib.sequence_alignment.needleman_wunsch.needleman_wunsch_align() function now returns the alignment score.
- The assemble_coord_array() function is now using the central star multiple sequence alignment. This is the function from the lib.structure.internal.coordinates module used to assemble common atomic coordinate information, used by the structure.align, structure.atomic_fluctuations, structure.com, structure.displacement, structure.find_pivot, structure.mean, structure.rmsd, structure.superimpose and structure.web_of_motion user functions. The non-functional lib.structure.internal.coordinates.common_residues() function has been removed as the lib.sequence_alignment.msa.central_star() function performs this functionality correctly.
- Deleted the Test_coordinates.test_common_residues unit test. This is from the _lib._structure._internal.test_coordinates unit test module. The lib.structure.internal.coordinates.common_residues() function no longer exists.
- Alphabetical ordering of all Structure system tests.
- Better printout spacing in lib.sequence_alignment.msa.central_star().
- Fixes for the Structure.test_align_molecules_end_truncation system test. This system test had only been partly converted from the old Structure.test_align_molecules2 system test it had been copied from.
- Created the Internal_selection.count_atoms() internal structural object selection method. This counts the number of atoms in the current selection.
- Added final printouts to the structure.rotate and structure.translate user function backends. This is to give feedback to the user as to how many atoms were translated or rotated, to aid in solving problems with the structure user functions. These backend functions are also used by the structure.align and structure.superimpose user functions.
- Corrections for the Structure.test_align_CaM_BLOSUM62 system test. The CaM N and C domains can not be aligned together in a global MSA as they would align very well to themselves, causing the atomic coordinate assembly function to fail.
- Improvement for the lib.sequence_alignment.msa.central_star() function. The strings and gap matrix returned by the function have been reordered to match the input sequences.
- Modified the Structure.test_align_molecules_end_truncation system test. The calmodulin bound calciums are now deleted prior to the structure.align user function call. This prevents these being labelled as '*' residues and aligning with real amino acids via the central star multiple sequence alignment (MSA) algorithm.
- Large speed up of the mol-res-spin selection object. The Selection.contains_mol(), Selection.contains_res() and Selection.contains_spin() methods of the lib.selection module have been redesigned for speed. Instead of setting a number of flags and performing bit operations at the end of the method to return the correct Boolean value, each of the multiple checks now simply returns a Boolean value, avoiding all subsequent checks. The check list order has also been rearranged so that the least expensive checks are at the top and the most time intensive checks are last (see the generic sketch at the end of this Changes list).
- Created the new relax data store object for saving sequence alignments. This is in the new data_store.seq_align module via the Sequence_alignments object, subclassed from RelaxListType, for holding all alignments, and the Alignment object, subclassed from Element, for holding each individual alignment. The objects are currently unused.
- Added the seq_align module to the data_store package __all__ list.
- Created the Test_seq_align.test_alignment_addition unit test. This is in the _data_store.test_seq_align unit test module. This tests the setup of the sequence alignment object via the data_store.seq_align.Sequence_alignment.add() method.
- Fixes for the data_store.seq_align.Alignment.generate_id() method. These problems were identified by the _data_store.test_seq_align module Test_seq_align.test_alignment_addition unit test.
- Added the Test_seq_align.test_find_alignment and Test_seq_align.test_find_missing_alignment unit tests. These are in the _data_store.test_seq_align unit test module. They check the functionality of the currently unimplemented Sequence_alignment.find_alignment() method which will be used to return pre-existing alignments.
- Code rearrangement in the _data_store.test_seq_align unit test module. The ID generation has been shifted into the generate_ids() method to be used by multiple tests.
- Implemented the data_store.seq_align.Sequence_alignments.find_alignment() method. This will only return an alignment if all alignment input data and alignment settings match exactly.
- Shifted the data_store.seq_align.Alignment.generate_id() method into the relax library. It has been converted into the lib.structure.internal.coordinates.generate_id() function to allow for greater reuse.
- Created the Sequence.test_align_molecules system test. This will be used to implement the sequence.align user function which will be used for performing sequence alignments on structural data within the relax data store and storing the data in the data pipe independent sequence_alignments data store object (which will be an instance of data_store.seq_align.Sequence_alignments). The system test also checks the XML saving and loading of the ds.sequence_alignments data structure.
- Renamed the Sequence.test_align_molecules system test to Structure.test_sequence_alignment_molecules. As the sequence alignment is dependent on the structural data in the relax data store, the user function for sequence alignment would be better named as structure.sequence_alignment. The sequence.align user function is not appropriate as all other sequence user functions relate to the molecule, residue, and spin data structure of each data pipe rather than to the structural data.
- Modified the Structure.test_sequence_alignment_molecules system test. Changed and expanded the arguments to the yet to be implemented structure.sequence_alignment user function.
- Important formatting improvement for the description in the GUI user function windows. Previously lists, item lists, and prompt items were spaced with one empty line at the top and two at the bottom. The two empty lines at the bottom were an accident caused by how the list text elements were built up. Now the final newline character is stripped so that the top and bottom of the lists only consist of one empty line. The change can give a lot more room in the GUI window.
- Created the frontend for the structure.sequence_alignment user function. This is based on the structure.align user function with the 3D superimposition arguments removed and new arguments added for selecting the MSA algorithm and the pairwise alignment algorithm (despite only NW70 being currently implemented).
- Modified the assemble_coordinates() function of the pipe_control.structure.main module. The function has been renamed to assemble_structural_objects(). The call to the lib.structure.internal.coordinates.assemble_coord_array() function has also been shifted out of assemble_structural_objects() to simplify the logic and decrease the number of arguments passed around.
- Spun out the atomic assembly code of the assemble_coord_array() function. The code from the lib.structure.internal.coordinates.assemble_coord_array() function has been shifted to the new assemble_atomic_coordinates(). This is to simplify assemble_coord_array() as well as to isolate the individual functionality for reuse.
- Implemented the backend of the structure.sequence_alignment user function. This checks some of the input parameters, assembles the structural objects then the atomic coordinate information, performs the multiple sequence alignment, and then stores the results.
- Fixes for the sequence alignment objects for the relax data store. The Sequence_alignments(RelaxListType) and Alignment(Element) classes were not being set up correctly. The container names and descriptions were missing.
- The data store ds.sequence_alignment object is now being treated as special and is blacklisted. The object is now explicitly recreated in the data store from_xml() method.
- Fixes for handling the sequence_alignments data store object.
- Implemented the data store Sequence_alignments.from_xml() method. This method is required for being able to read RelaxListType objects from the XML file.
- Modified the data returned by lib.structure.internal.coordinates.assemble_atomic_coordinates(). The function will now assemble simple lists of object IDs, model numbers and molecule names with each list element corresponding to a different structural model. This will be very useful for converting from the complicated pipes, models, and molecules user function arguments into relax data store independent flat lists.
- Updates for the structure.sequence_alignment user function. This is for the changes to the lib.structure.internal.coordinates.assemble_atomic_coordinates() function return values. The new object ID, model, and molecule flat lists are used directly for storing the alignment results in the relax data store.
- Updates for the Structure.test_sequence_alignment_molecules system test. This is required due to the changes in the backend of the structure.sequence_alignment user function.
- Merger of the structure.align and structure.superimpose user functions. The final user function is called structure.superimpose. As the sequence alignment component of the structure.align user function has been shifted into the new structure.sequence_alignment user function and the information is now stored in the ds.sequence_alignments relax data store object, the functionality of structure.align and structure.superimpose is now essentially the same. The sequence alignment arguments and documentation have also been eliminated, and the documentation has been updated to say that sequence alignments from structure.sequence_alignment will be used for superimposing the structures.
- Updated the Structure system tests for the structure.align and structure.superimpose user function merger.
- Fix for the structure.sequence_alignment user function. The alignment data should be stored in ds.sequence_alignments rather than ds.sequence_alignment.
- Sequence alignments can now be retrieved without supplying the algorithm settings. This is in the data_store.seq_align.Sequence_alignments.find_alignment() method. The change allows for the retrieval of pre-existing sequence alignments at any stage.
- Added a function for assembling the common atomic coordinates, taking sequence alignments into account. This is the new pipe_control.structure.main.assemble_structural_coordinates() function. It takes the sequence alignment logic out of the lib.structure.internal.coordinates.assemble_coord_array() function so that sequence alignments in the relax data store can be used. The logic has also been redefined as: 1, use a sequence alignment from the relax data store if present; 2, use no sequence alignment if coordinates only come from structural models; 3, fall back to a residue number based alignment. The residue number based alignment is yet to be implemented. As a consequence, the lib.structure.internal.coordinates.assemble_coord_array() function has been greatly simplified. It no longer handles sequence alignments, but instead expects the residue skipping data structure, built from the alignment, as an argument. The seq_info_flag argument has also been eliminated from this function as well as from the pipe_control.structure.main module.
- Updated the structure.displacement user function for the changed atomic assembly logic. This now uses the assemble_structural_coordinates() function of the pipe_control.structure.main module to obtain the common coordinates based on pre-existing sequence alignments, no-alignment, or the default of a residue number based alignment.
- Updated the structure.find_pivot user function for the changed atomic assembly logic. This now uses the assemble_structural_coordinates() function of the pipe_control.structure.main module to obtain the common coordinates based on pre-existing sequence alignments, no-alignment, or the default of a residue number based alignment.
- Updated the structure.atomic_fluctuations user function for the changed atomic assembly logic. This now uses the assemble_structural_coordinates() function of the pipe_control.structure.main module to obtain the common coordinates based on pre-existing sequence alignments, no-alignment, or the default of a residue number based alignment.
- Updated the structure.rmsd user function for the changed atomic assembly logic. This now uses the assemble_structural_coordinates() function of the pipe_control.structure.main module to obtain the common coordinates based on pre-existing sequence alignments, no-alignment, or the default of a residue number based alignment.
- Updated the structure.web_of_motion user function for the changed atomic assembly logic. This now uses the assemble_structural_coordinates() function of the pipe_control.structure.main module to obtain the common coordinates based on pre-existing sequence alignments, no-alignment, or the default of a residue number based alignment.
- Fix for the structure.superimpose user function if no data pipes are supplied. This reintroduces the pipes list construction.
- Fix for the new pipe_control.structure.main.assemble_structural_coordinates() function. The atom_id argument is now passed into the assemble_atomic_coordinates() function of the lib.structure.internal.coordinates module so that atom subsets are once again recognised.
- Another fix for the new pipe_control.structure.main.assemble_structural_coordinates() function. The logic for determining if only models will be superimposed was incorrect.
- Implemented the residue number based alignment in the atomic assembly function. This is in the new pipe_control.structure.main.assemble_structural_coordinates() function. The code for creating the residue skipping data structure is now shared between the three sequence alignment options.
- Implemented the multiple sequence alignment method based on residue numbers. This is the new msa_residue_numbers() function in the lib.sequence_alignment.msa module. The logic is rather basic in that the alignment is based on a residue number range from the lowest residue number to the highest - i.e. it does not take into account gaps in common between all input sequences (a simplified sketch of this idea is given at the end of this Changes list).
- The residue number based sequence alignment is now executed when assembling atomic coordinates. This is in the assemble_structural_coordinates() function of the pipe_control.structure.main module.
- Modified the internal structural object one_letter_codes() method. This now validates the models to make sure all models match, and the method requires the selection object so that residue subsets can be handled.
- The assemble_atomic_coordinates() function now calls one_letter_codes() with the selection object. This is the lib.structure.internal.coordinates module function.
- Fix for the residue number based sequence alignment when assembling structural coordinates. This is in the assemble_structural_coordinates() function of the pipe_control.structure.main module. The sequences of the different molecules can be of different lengths.
- Shifted the residue skipping data structure construction into the relax library. The code was originally in pipe_control.structure.main.assemble_structural_coordinates() but has been shifted into the new lib.sequence_alignment.msa.msa_residue_skipping() function. This also allows for greater code reuse. The lib.sequence_alignment.msa module is also a better location for such functionality.
- Renamed the Structure.test_sequence_alignment_molecules system test. The new name is Structure.test_sequence_alignment_central_star_nw70_blosum62, to better reflect what the test is doing.
- Modified the Structure.test_sequence_alignment_central_star_nw70_blosum62 system test. Some residues are now deleted so that the sequences are not identical.
- Created the Structure.test_sequence_alignment_residue_number system test. This will be used to test the structure.sequence_alignment user function together with the 'residue number' MSA algorithm. This is simply a copy of the Structure.test_sequence_alignment_central_star_nw70_blosum62 system test with a few small changes.
- Corrections and simplifications for the Structure.test_sequence_alignment_residue_number system test.
- Modified the structure.sequence_alignment user function arguments. The pairwise_algorithm and matrix arguments can now be None, and they default to None.
- Updated the Structure.test_align_CaM_BLOSUM62 system test script. The MSA algorithm and pairwise alignment algorithms are now specified in the structure.sequence_alignment user function calls.
- Creation of the lib.sequence_alignment.msa.msa_general() function. This consists of code from the structure.sequence_alignment user function backend function pipe_control.structure.main.sequence_alignment() for selecting between the different sequence alignment methods.
- The structure.sequence_alignment user function now sets some arguments to None before storage. This is for all arguments not used in the sequence alignment. For example the residue number based alignment does not use the gap penalties, pairwise alignment algorithm or the substitution matrices.
- Fix for the lib.sequence_alignment.msa.msa_residue_skipping() function. The sequences argument for passing in the one letter codes has been removed. The per molecule loop should be over the alignment strings rather than one letter codes, otherwise the loop will be too short.
- Fix for the internal structural object atomic coordinate assembly function. This is the pipe_control.structure.main.assemble_structural_coordinates() function. The case of no sequence alignment being required as only models are being handled is now functional. The strings and gaps data structures passed into the lib.sequence_alignment.msa.msa_residue_skipping() function for generating the residue skipping data structure are now set to the one letter codes and an empty structure of zeros respectively.
- Test data directory renaming. The test_suite/shared_data/diffusion_tensor/spheroid directory has been renamed to spheroid_prolate. This is in preparation for creating oblate spheroid diffusion relaxation data.
- Creation of oblate spheroid diffusion relaxation data. This will be used in the Structure.test_create_diff_tensor_pdb_oblate system test.
- Fix for the oblate spheroid diffusion relaxation data. The diffusion parameters are constrained as Dx ≤ Dy ≤ Dz.
- More fixes for the Structure.test_create_diff_tensor_pdb_oblate system test. The initial Diso value is now set to the real final Diso, and the PDB file contents have been updated for the fixed oblate spheroidal diffusion relaxation data.
- Updates for many of the Diffusion_tensor system tests. This is due to the changed directory names in test_suite/shared_data/diffusion_tensor/. The ds.diff_dir variable has been introduced to point to the correct data directory.
- Large improvement for the GUI test tearDown() clean up method, fixing the tests on wxPython 2.8. The user function window destruction has been shifted into a new clean_up_windows() method which is executed via wx.CallAfter() to avoid race conditions. In addition, the spin viewer window is destroyed between tests. The spin viewer window change allows the GUI tests to pass on wxPython 2.8 again. This also allows the GUI tests to progress much further on Mac OS X systems before they crash again for some other reason. This could simply be hiding a problem in the spin viewer window. However it is more likely a race condition only triggered by the very high speed of the GUI tests; a normal user would never be able to operate the GUI on the millisecond timescale and hence may never see it.
- Reverted the wxPython 2.8 warning printout when starting relax, introduced in relax 3.3.5.
- Reverted the skipping of the GUI tests on wxPython 2.8, introduced in relax 3.3.5.
- Reverted the General.test_bug_23187_residue_delete_gui GUI test disabling, introduced in relax 3.3.5. The 'Bus Error' on Mac OS X due to this test is no longer an issue, as the spin viewer window is now destroyed after each GUI test.
- Created a special Destroy() method for the spin viewer window. This is for greater control of the spin viewer window destruction. First the methods registered with the observer objects are unregistered, then the children of the spin viewer window are destroyed, and finally the main spin viewer window is destroyed. This change saves a lot of GUI resources in the GUI tests (there is a large reduction in 'User Objects' and 'GDI Objects' used on MS Windows systems, hence an equivalent resource reduction on other operating systems).
- Fix for the GUI test clean_up_windows() method called from tearDown(). The user function window (Wiz_window) must be closed before the user function page (Uf_page), so that the Wiz_window._handler_close() can still operate the methods of the Uf_page. This avoids a huge quantity of these errors: Traceback (most recent call last): __getattr__ wx._core.PyDeadObjectError: The C++ part of the Uf_page object has been deleted, attribute access no longer allowed.
- Simplification of the Dead_uf_pages.test_mol_create GUI test. The RelaxError cannot be caught from the GUI user function window, therefore the try statement has been eliminated.
- More memory saving improvements for the GUI test suite tearDown() method. The clean_up_windows() method now loops through all top level windows (frames, dialogs, panels, etc.) and calls their Destroy() method.
- Created the gui.uf_objects.Uf_object.Destroy() method. This will be used to cleanly destroy the user function object.
- Modified the GUI test suite _execute_uf() method. This user function execution method now calls the user function GUI object Destroy() method to clean up all GUI objects. This should save memory for GUI objects in the GUI test suite.
- Modified the GUI test suite tearDown() method. The clean_up_windows() method called by tearDown() now prints out a list of all of the living windows instead of trying to destroy them (which, when the GUI tests are run from within the GUI, causes the GUI itself to be destroyed). The printouts will be used for debugging purposes.
- Fixes for the custom Wiz_window.Destroy() method. This will now first close the wizard window via the Close() method to make sure all of the wizard pages are properly updated. In the end the wizard DestroyChildren() method is called to clean up all child wx objects, and finally Destroy() is called to eliminate the wizard GUI object.
- The GUI test suite tearDown() method now calls the user function GUI wizard Destroy() method. This is for better handling of user function elimination.
- Fixes for the user function GUI object Destroy() method. This matches the code just deleted in the GUI test suite tearDown() method for handling the user function page object.
- More fixes for the user function GUI object Destroy() method. The page GUI object is destroyed by the wizard window Destroy() method, so destroying it again causes wxPython runtime errors.
- Spacing printout for the list of still open GUI window elements. This is for the GUI test tearDown() method.
- Shifted the printouts from the GUI tests suite clean_up_windows() method to the tearDown() method. This change means that the printouts are not within a wx.CallAfter() call, but rather at the end of the tearDown() method just prior to starting the next test.
- Simplification of the GUI analysis post_reset() method. This now uses the delete_all() and hence delete_analysis() methods to clean up the GUI. The reset argument has been added to skip the manipulation of relax data store data, as the data store is empty after a reset. However calling the delete_analysis() method ensures that the analysis specific delete() method is now called so that the GUI elements can be properly destroyed.
- Proper destruction of the peak analysis wizard of the NOE GUI analysis. The peak wizard's Destroy() method is now called and the self.peak_wizard object deleted in the NOE GUI analysis delete() method.
- Improved memory management in the NOE GUI analysis peak_wizard_launch() method. This method was just overwriting the self.peak_wizard object with a new object. However this does not destroy the wxPython window. Now if a peak wizard is detected, its Destroy() method is called before overwriting the object.
- Improved GUI clean up when terminating GUI tests. The clean_up_windows() method, called from tearDown(), now also destroys the pipe editor window, the results viewer window, and the prompt window. This ensures that all of the major relax windows are destroyed between GUI tests.
- Improved memory management in the relaxation curve-fitting GUI analysis. The peak intensity loading wizard is now properly destroyed. This is both via the delete() function for terminating the analysis calling the wizard Delete() method, and in the peak_wizard_launch() method calling the wizard Delete() method prior to overwriting the self.peak_wizard object with a new GUI wizard.
- Improved memory management in the model-free GUI analysis. The dipole-dipole interaction wizard is now properly destroyed. This is both via the delete() function for terminating the analysis calling the wizard Delete() method, and in the setup_dipole_pair() method calling the wizard Delete() method prior to overwriting the self.dipole_wizard object with a new GUI wizard.
- Improved memory management in the model-free GUI analysis. The analysis mode selection window (a wx.Dialog) is now being destroyed in the analysis delete() method. This appears to work on Linux, Windows, and Mac systems.
- Improved memory management in the model-free GUI analysis. The local tm and model-free model windows are now destroyed in the GUI analysis delete() method.
- Improved termination of the GUI tests. The clean_up_windows() method now calls the results viewer and pipe editor window handler_close() methods. This ensures that all observer objects are cleared out so that the methods of the dead windows can no longer be called.
- Fix for the previous commit: calls to wx.Yield() are required to flush the calls on the observer objects after unregistering them and deleting the results and pipe editor windows.
- Improved memory management in the relaxation dispersion GUI analysis. The peak intensity loading wizard is now properly destroyed. This is both via the delete() function for terminating the analysis calling the wizard Delete() method, and in the peak_wizard_launch() method calling the wizard Delete() method prior to overwriting the self.peak_wizard object with a new GUI wizard.
- Created custom Destroy() methods for the pipe editor and results viewer GUI windows.
- Improved memory management in the relaxation dispersion GUI analysis. The dispersion model list window is now destroyed in the GUI analysis delete() method.
- Fixes for the custom Destroy() methods for the pipe editor and results viewer GUI windows. The event argument is now a keyword argument which defaults to None. This allows the Destroy() methods to be called without arguments.
- Temporary disablement of the results viewer window destruction in the GUI tests. This currently, for some unknown reason, causes segfault crashes of the GUI tests on Linux systems.
- Changes for how the main GUI windows are destroyed by the GUI test tearDown() method. These changes revert some of the code of previous commits. The recently introduced pipe editor and results viewer windows Delete() methods have been deleted. Instead the Close() methods are called in the tearDown() method to unregister the windows from the observer objects, followed by a wx.Yield() call to flush the wx events, and then the clean_up_windows() GUI test base method is called within a wx.CallAfter() call. This avoids the race-condition induced segfaults in the GUI tests.
- Improved memory management in the spin viewer window. The spin loading wizard is now destroyed in the Destroy() method as well as before reinitialising the wizard in the load_spins_wizard() method.
- The GUI tests tearDown() method now prints out the title of any Wizard window that has not been destroyed.
- The Wizard window title is now being stored as a class instance variable.
- Improved memory management in the relaxation data list GUI element, as well as the base list object. The relaxation data loading wizard, or any other wizard for that matter, is now destroyed in the Base_list.delete() method. In addition, the relaxation data loading wizard is destroyed before reinitialising the wizard in the wizard_exec() method.
- Better memory management for the missing data dialog in the GUI analyses. The dialog is now stored as the class variable missing_data, and then is destroyed in the analysis delete() method. Without this, the wxPython dialog would remain in memory for the lifetime of the program.
- Improved memory management for the Sequence and Sequence_2D input GUI elements. These are mainly used in the user function GUI windows. The dialogs are now destroyed before a second is opened.
- Improved memory management for the GUI user function windows. The Destroy() method will now destroy any Sequence or Sequence_2D windows used for the user function arguments.
- The relax prompt window is now being destroyed by the GUI test suite tearDown() method. The window is first closed in the tearDown() method and then destroyed in the clean_up_windows() method.
- Added memory management checking to the GUI test suite tearDown() method. If any top level windows are present, excluding the main GUI window and the relax controller, then a RelaxError will be raised. Such a check will significantly help in future GUI coding, as now there will be feedback if not all windows are properly destroyed.
- Popup menus are now properly destroyed in the GUI tests. In many instances, the wx.Menu.Destroy() method was only being called when the GUI is shown. This causes memory leaking in the GUI tests.
- Changed the title for the user function GUI windows. To better help identify what the window is, the title is now the user function name together with text saying that it is a user function.
- Removed the wx.CallAfter() call in the GUI tests tearDown() method. This was used to call the clean_up_windows() method. However the value of wx.Thread_IsMain() shows that the tearDown() method executes in the main GUI thread. Therefore the wx.CallAfter() call for avoiding race conditions is not needed.
- Fix for the GUI tests clean_up_windows() method called from tearDown(). After destroying all of the main GUI windows, a wx.Yield() call is made to flush the wxPython event queue. This seems to help with the memory management.
- Temporary disabling of the memory management check in the GUI tests tearDown() method. For some reason, it appears as if it is not possible to destroy wx Windows on MS Windows.
- Created the relax GUI prompt Destroy() method. This is used to cleanly destroy the GUI prompt by first unregistering with the observer objects, destroying then deleting the wx.py.shell.Shell instance, and finally destroying the window.
- Modified the manual_c_module.py developer script so that the path can be supplied on the command line.
- Removed some unused imports, as found by devel_scripts/find_unused_imports.py.
- Added a copyright notice to the memory_leak_test_relax_fit.py development script. This is to know how old the script is, to see how out of date it is in the future.
- Created the memory_leak_test_GUI_uf.py development script. This is to help in tracking down memory leaks in the relax GUI user functions. Instead of using a debugging Python version and guppy (wxPython doesn't seem to work with these), the pympler Python package and its muppy module is used to produce a memory usage printout.
- Clean up of the memory_leak_test_GUI_uf.py development script.
- Created the new devel_scripts/memory_management/ directory. This will be used for holding all of the memory C module leak detection, GUI object leak detection, memory management, etc. development scripts.
- Shifted the memory_leak_test_GUI_uf.py script to devel_scripts/memory_management/GUI_uf_minimise_execute.py.
- Created a base class for the memory management scripts for the GUI user functions. The core of the GUI_uf_minimise_execute.py script has been converted into the GUI_base.py base class module. This will allow for new GUI user function testing scripts to be created.
- Removal of unused imports from the GUI user function memory testing scripts.
- Created a script for testing the memory management when calling the time GUI user function.
- Large memory management improvement for the relax GUI wizards and GUI user functions. The pympler.muppy based memory management scripts in devel_scripts/memory_management for testing the GUI user function windows were showing that for each GUI user function call, 28 wx._core.BoxSizer elements were remaining in memory. This was traced back to the gui.wizards.wiz_objects.Wiz_window class, specifically the self._page_sizers and self._button_sizers lists storing wx.BoxSizer instances. The problem was that 16 page sizers and 16 button sizers were initialised each time for later use, however the add_page() method only added a small subset of these to the self._main_sizer wx.BoxSizer object. But the Destroy() method was only capable of destroying the wx.BoxSizer instances associated with another wxPython object. The fix was to add all page and button sizers to the self._main_sizer object upon initialisation. This will solve many memory issues in the GUI, especially in the GUI tests on Mac OS X systems causing 'memory error' or 'bus error' messages and on MS Windows due to 'USER Object' and 'GDI object' limitations.
- The maximum number of pages in the GUI wizard is no longer hardcoded. The max_pages argument has been added to allow this value to be changed.
- Fix for GUI wizards and GUI user functions. The recent memory management changes caused the wizard windows to have an incorrect layout so that the wizard pages were not visible. Reperforming a layout of the GUI elements did not help. The solution is to not initialise sets of max_pages of wx.BoxSizer elements in the wizard __init__() method, but to generate and append these dynamically via the add_page() method. The change now means that there are no longer multiple unused wx.BoxSizer instances generated for each wizard window created.
- Fix for the GUI wizard _go_next() method. The way to determine if there are no more pages needs to be changed, as there are now no empty list elements at the end of the wizard storage objects.
- Another fix for the now variable sized wizard page list. This time the fix is in the GUI user function __call__() method.
- Created the Relax_fit.test_bug_23244_Iinf_graph system test. This is to catch bug #23244.
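For the central star multiple sequence alignment entry above, the following self-contained Python sketch illustrates only the 'find the central sequence' step, using the standard criterion of the highest summed pairwise alignment score. The pairwise_score() helper is a toy stand-in (simple identity counting) for a real Needleman-Wunsch scorer and is not the relax library code.

    # Minimal sketch of the central sequence selection step of a central star MSA.
    # The scoring function is a toy stand-in for a real pairwise aligner.

    def pairwise_score(seq1, seq2):
        """Toy stand-in for a Needleman-Wunsch alignment score (identity count)."""
        return sum(1 for a, b in zip(seq1, seq2) if a == b)

    def find_central_sequence(sequences):
        """Return the index of the sequence with the highest summed pairwise score."""
        n = len(sequences)
        totals = [0.0] * n
        for i in range(n):
            for j in range(i + 1, n):
                score = pairwise_score(sequences[i], sequences[j])
                totals[i] += score
                totals[j] += score
        return totals.index(max(totals))

    print(find_central_sequence(['MKTAYIAK', 'MKTAYLAK', 'MQTAYIAR']))    # 0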
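The mol-res-spin selection speed up above follows a common pattern that can be illustrated generically: instead of accumulating flags and combining them with bit operations at the end, each check returns a Boolean immediately, and the cheapest checks are placed first. The class below is a generic, self-contained illustration of that pattern only and is not the relax lib.selection code.

    # Generic early-return pattern: cheap checks first, immediate Boolean returns.
    import re

    class MolSelection:
        """Toy molecule selection object illustrating the early-return pattern."""

        def __init__(self, names=None, name_pattern=None):
            self.names = set(names or [])       # Cheap: set membership test.
            self.name_pattern = name_pattern    # Expensive: regular expression.

        def contains_mol(self, mol_name):
            # Cheapest check first: an empty selection matches everything.
            if not self.names and self.name_pattern is None:
                return True
            # Next cheapest: exact name membership.
            if mol_name in self.names:
                return True
            # Most time intensive check last: regular expression matching.
            if self.name_pattern is not None and re.search(self.name_pattern, mol_name):
                return True
            return False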
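For the residue number based alignment entry above, this self-contained sketch shows the basic idea: one alignment column per residue number in the range from the lowest to the highest residue number, with a gap wherever a molecule has no residue at that number. The data layout (a dict of residue number to one-letter code per molecule) is an assumption made for the example and does not reflect the internal relax data structures.

    # Minimal sketch of a residue number based 'alignment': one column per residue
    # number in the common range, with a gap wherever a molecule lacks that residue.

    def msa_by_residue_number(molecules):
        """molecules: list of dicts mapping residue number -> one-letter code.

        Returns (strings, gaps): gapped one-letter sequences and a 0/1 gap matrix.
        """
        res_min = min(min(mol) for mol in molecules)
        res_max = max(max(mol) for mol in molecules)
        strings, gaps = [], []
        for mol in molecules:
            seq, gap_row = [], []
            for res_num in range(res_min, res_max + 1):
                seq.append(mol.get(res_num, '-'))
                gap_row.append(0 if res_num in mol else 1)
            strings.append(''.join(seq))
            gaps.append(gap_row)
        return strings, gaps

    strings, gaps = msa_by_residue_number([{2: 'A', 3: 'C', 4: 'D', 5: 'E'}, {1: 'M', 2: 'A', 4: 'D', 5: 'E', 6: 'F'}])
    print(strings)    # ['-ACDE-', 'MA-DEF']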
Bugfixes
- Bug fix for the structure.align user function. The addition of the molecule name to the displacement ID is now correctly performed.
- Fix for the new Internal_selection.count_atoms() internal structural object selection method. The method was previously returning the total number of molecules, not the total number of atoms in the selection.
- Printout fix for the backend of the structure.translate and structure.rotate user functions. Model numbers of zero were not correctly identified. This also affects the structure.align and structure.superimpose user functions which use this backend code.
- Another fix for the Internal_selection.count_atoms() internal structural object selection method.
- Small fix for the lib.structure.internal.coordinates.assemble_coord_array() function. The termination condition for determining the residues in common between all structures was incorrect.
- The Structure.test_create_diff_tensor_pdb_oblate system test now uses oblate diffusion relaxation data. This fixes bug #23232, the failure of this system test on Mac OS X. The problem was that the system test was previously using relaxation data for prolate spheroidal diffusion and fitting an oblate tensor to that data. This caused the solution to be slightly different on different CPUs, operating systems, Python versions, etc. and hence the PDB file representation of the diffusion would be slightly different.
- Big bug fix for the GUI tests on MS Windows systems. On MS Windows systems, the GUI tests were unable to complete without crashing. This is because each GUI element requires one 'User object', and MS Windows has a maximum limit of 10,000 of these objects. The GUI tests were taking more than 10,000 and then Windows would say - relax, you die now. The solution is that after each GUI test, all user function windows are destroyed. The user function page is a wx.Panel object, so this requires a Destroy() call. But the window is a Uf_page instance which inherits from Wiz_page which inherits from wx.Dialog. Calling Destroy() on MS Windows and Linux works fine, but is fatal on Mac OS X systems. So the solution is to call Close() instead (a simplified sketch of this clean up is given at the end of this Bugfixes list).
- Fix for the default grid_inc argument for the relaxation curve-fitting auto-analysis. This needs to be an integer.
- Fix for bug #23244. The relaxation curve-fitting auto-analysis now outputs text files and Grace graphs for the I0 parameter and the I∞ parameter if it exists.
- Fixes for the package checking unit tests on MS Windows for the target_functions package. The compiled relaxation curve-fitting file is called target_functions\relax_fit.pyd on MS Windows. The package checking was only taking into account *.so compiled files and not *.pyd files.
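For the MS Windows GUI test bugfix above, the platform dependent window clean up can be sketched as follows. This is only an illustration of the approach described in that entry (Destroy() on GNU/Linux and MS Windows, Close() on Mac OS X); the function name and structure are invented and do not correspond to the actual relax test suite code.

    # Sketch of the per-test user function window clean up: Destroy() frees the
    # MS Windows 'User object' for each window, but is fatal on Mac OS X where
    # Close() is used instead.
    import sys

    def clean_up_uf_windows(windows):
        """Destroy or close a list of wx.Dialog based user function windows."""
        for window in windows:
            if sys.platform == 'darwin':
                window.Close()       # Destroy() crashes on Mac OS X, so only close.
            else:
                window.Destroy()     # Frees the window's GUI resources.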
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.3.5
Description
This is a major feature and bugfix release. It fixes an important bug in the Monte Carlo simulation error analysis in the relaxation dispersion analysis. The new features include: improvements to the NMR spectral noise error analysis; expansion of the grace.write user function to handle both first and last point normalisation for reasonable R1 curves in saturation recovery experiments; the implementation of the Needleman-Wunsch pairwise sequence alignment algorithm using the BLOSUM62, PAM250 and NUC 4.4 substitution matrices for more advanced 3D structural alignments via the structure.align and structure.superimpose user functions, as well as any of the other structure user functions dealing with multiple molecules; conversion of the structure.displacement, structure.find_pivot, structure.rmsd, structure.superimpose and structure.web_of_motion user functions to a new pipes/models/molecules/atom_id design to allow the user functions to operate on different data pipes, different structural models and different molecules; addition of the displace_id argument to the structure.align and structure.superimpose user functions to allow finer control over which atoms are translated and rotated by the algorithm; a large improvement for the PDB molecule identification code affecting the structure.read_pdb user function; creation of the lib.plotting package for assembling all of the data plotting capabilities of relax; implementation of the new structure.atomic_fluctuations user function for creating text output or Gnuplot graphs of the correlation matrix of interatomic distance, angle or parallax shift fluctuations; the implementation of ordinary least squares fitting; and improvements for the pcs.corr_plot and rdc.corr_plot user functions. Many more features and bugfixes are listed below.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.3.5
(27 January 2015, from /trunk)
http://svn.gna.org/svn/relax/tags/3.3.5
Features
- Improvements to the NMR spectral noise error analysis.
- Addition of the new spectrum.error_analysis_per_field user function to quickly perform a per-NMR field spectrum error analysis.
- Added spectrum.sn_ratio user function to calculate the signal to noise ratio for all spins, and introduced the per-spin sn_ratio parameter for the NOE, relaxation curve-fitting and relaxation dispersion analyses (see the sketch at the end of this feature list).
- Added the new select.sn_ratio and deselect.sn_ratio user functions to change the selection status of spins according to their signal to noise ratio.
- Expansion of the grace.write user function to handle both first and last point normalisation for reasonable R1 curves in saturation recovery experiments.
- Conversion of the structure.align, structure.displacement, structure.find_pivot, structure.rmsd, structure.superimpose and structure.web_of_motion user functions to a standardised pipes/models/molecules/atom_id argument design to allow the user functions to operate on different data pipes, different structural models and different molecules simultaneously and to restrict operation to a subset of all spins. This is also used by the new structure.atomic_fluctuations user function.
- Addition of the displace_id argument to the structure.align and structure.superimpose user functions to allow finer control over which atoms are translated and rotated by the algorithm independently of the align_id atom ID for selecting atoms used in the superimposition.
- Large improvement for the PDB molecule identification code affecting the structure.read_pdb user function allowing discontinuous ATOM and HETATM records with the same chain ID to be loaded as the same molecule.
- Creation of the lib.plotting package for assembling all of the data plotting capabilities of relax into a unified software independent API.
- Implementation of the new structure.atomic_fluctuations user function for creating text output or Gnuplot graphs of the correlation matrix of interatomic distance, angle or parallax shift fluctuations, measured as sample standard deviations, between different molecules.
- The implementation of ordinary least squares fitting.
- Improvements for the pcs.corr_plot and rdc.corr_plot user functions.
- The implementation of the Needleman-Wunsch pairwise sequence alignment algorithm using the BLOSUM62, PAM250 and NUC 4.4 substitution matrices for more advanced 3D structural alignments via the structure.align user function. The Needleman-Wunsch algorithm is implemented as in the EMBOSS software to allow for gap opening and extension penalties as well as end penalties. This is also used in all the other structure user functions dealing with multiple molecules - structure.atomic_fluctuations, structure.displacement, structure.find_pivot, structure.rmsd, structure.superimpose, structure.web_of_motion.
- Improved support for PDB secondary structure metadata for the structure.read_pdb and structure.write_pdb user functions.
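For the signal to noise features above, the selection logic can be illustrated with a small self-contained sketch. The per-spin data layout (dictionaries with 'intensities', 'errors' and 'select' keys) and the function names are invented for the example; the real relax backend described in the changes below operates on the spin containers of the data store.

    # Illustrative signal to noise deselection: a spin is deselected when its
    # S/N ratios fail the threshold test, using 'all' or 'any' logic.

    def sn_ratios(intensities, errors):
        """Per-spectrum signal to noise ratio: peak intensity over the noise error."""
        return [float(i) / float(e) for i, e in zip(intensities, errors)]

    def deselect_low_sn(spins, ratio=10.0, all_sn=True):
        """Deselect spins whose S/N ratios fall below the given ratio.

        With all_sn=True, a spin stays selected only if all of its ratios pass the
        test, so a single ratio below the threshold causes deselection.
        """
        for spin in spins:
            passed = [r >= ratio for r in sn_ratios(spin['intensities'], spin['errors'])]
            if not (all(passed) if all_sn else any(passed)):
                spin['select'] = False

    spins = [
        {'intensities': [2000.0, 1500.0], 'errors': [50.0, 50.0], 'select': True},
        {'intensities': [400.0, 1500.0], 'errors': [50.0, 50.0], 'select': True},
    ]
    deselect_low_sn(spins)
    print([spin['select'] for spin in spins])    # [True, False]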
Changes
- Added a sentence to the start of the citation chapter about http://www.nmr-relax.com links. This is to convince people to more freely use this URL. In that way, the relax search engine ranking should be significantly increased. And it will be easier for new users to get into relax.
- Removing the automatic function for error analysis per field in the relaxation dispersion auto-analysis. This function is moved into pipe_control/spectrum.py.
- Added the function pipe_control.spectrum.error_analysis_per_field(), as an automatic way of submitting subset IDs per field for error analysis.
- For the pipe_control.spectrum.error_analysis_per_field(), added additional printout of subset IDs used for error analysis.
- In the auto_analysis.relax_disp module, used the new spectrum.error_analysis_per_field user function to calculate the peak intensity errors.
- Reinserted the error_analysis() function in the auto class of relaxation dispersion. This function only checks if the error analysis has not been performed before, and then decides to call the spectrum.error_analysis_per_field user function. The implementation can be tested with the Relax_disp.test_estimate_r2eff_err_auto system test.
- In pipe_control.spectrum.error_analysis_per_field() removed the checks which would stop the calculation of the errors. This function will now always run, which will make it possible for the user to try different error calculations.
- Copy of the system test script peak_lists.py to spectrum.py. This is for the implementation of calculation of signal to noise ratio, selection and deselection.
- Initialised the first test in the Spectrum system test class. This simply loads some intensity data and checks it. The system test Spectrum.test_signal_noise_ratio will be expanded to test the calculation of the signal to noise ratio.
- Added the Spectrum system test class to the init file, so these system tests can be executed.
- Added the pipe_control.spectrum.signal_noise_ratio() backend function, for calculation of the signal to noise ratio per spin.
- Added system test Spectrum.test_grace_int, to test plotting the intensity per residue. This is to prepare for a grace plotting of the signal to noise level per residue. Also added additional tests for signal to noise ratio calculation in the system test Spectrum.test_signal_noise_ratio.
- Added system test Spectrum.test_grace_sn_ratio to help implement plotting the signal to noise ratio per residue.
- Added the common API Parameter structure 'sn_ratio' in parameter_object.
- For the specific analysis of "noe", "relax_disp", and "relax_fit", initialised the sn_ratio parameter structure.
- Added float() conversion around values in the signal_noise_ratio() function.
- Made the spectrum.sn_ratio user function smaller.
- Added two new system tests Spectrum.test_deselect_sn_ratio_all and Spectrum.test_deselect_sn_ratio_any. These test the deselect.sn_ratio user function, to deselect spins with a signal to noise ratio lower than the specified ratio.
- Added the pipe_control.spectrum.sn_ratio_deselection() function to deselect spins according to the signal to noise ratio. The function is flexible, since it is possible to use different comparison operators, and it can be switched so that a selection is made instead.
- Added the new deselect.sn_ratio user function to deselect spins according to their signal to noise ratio.
- Added the new backend function pipe_control.spectrum.sn_ratio_selection(). This is to select spins with a signal to noise ratio higher or lower than the specified ratio.
- Added two new system tests Spectrum.test_select_sn_ratio_all and Spectrum.test_select_sn_ratio_any. These test the select.sn_ratio user function.
- Added the new select.sn_ratio user function to select spins with a signal to noise ratio above a specified ratio. The default ratio for signal to noise selection is 10.0, but it should probably be 50-100 instead. The default of 'all_sn' is True, meaning that all signal to noise ratios for the spins need to pass the test.
- Small fix for standard values in deselect.sn_ratio user function. The standard values will deselect spins which have at least one signal to noise ratio which is lower than 10.0.
- Small fix for the backend of spectrum sn_ratio_selection() and sn_ratio_deselection(). The standard values have been changed.
- Fix for the window size in dx.map user function. The size of the windows was not compatible with the latest change.
- Documentation fix in the manual for the lower and upper bounds for parameters in the grid search.
- Documentation fix in the manual for the lower and upper bounds for parameters in the minimisation.
- Documentation fix in the manual for the scaling values of parameters in the minimisation. The scaling helps the minimisers to make the same step size for all parameters when moving in the χ2 space.
- Added a devel script which can quickly convert oxygen icons to the desired sizes.
- Extended the devel script image size converter.
- Adding new oxygen icon in all needed sizes.
- Comment fix in user function select.sn_ratio and deselect.sn_ratio.
- Important fix for the spectrum.error_analysis_per_field user function. This is for the compilation of the user manual. The possessive apostrophe should not be used in the text "spectrum ID's". This grammar error triggers an unfortunate bug in the docstring fetching script docs/latex/fetch_docstrings.py whereby the script thinks that ' is the start of a quote.
- Added a compressed EPS version of the 128x128/actions/document-preview-archive Oxygen icon. The EPS bounding box was manually changed to 0 0 18 18 in a text editor. The scanline translation parameters were also fixed by changing them all to 18 as well. This allows the icon to be used in the relax manual.
- Fix for the blacklist objects in data_store.data_classes.Element.to_xml(). The class blacklist variable was not being taken into account.
- Added the norm_type argument to the grace.write user function. This is in response to http://thread.gmane.org/gmane.science.nmr.relax.devel/7392/focus=7438. This norm_type argument can either be 'first' or 'last' to allow different points of the plot to be the normalisation factor. The default of 'first' preserves the old behaviour of first point normalisation.
- The relax_fit_saturation_recovery.py system test script now sets the norm_type argument. This is for testing out this new option for the grace.write user function.
- The new grace.write user function norm_type argument has been activated. The argument is now passed from pipe_control.grace.write into the write_xy_data() function of the lib.software.grace module, and is used to select which point to use for the normalisation.
- The relaxation exponential curve-fitting auto-analysis now sets the normalisation type. This is for the new grace.write user function. If the model for all spins is set to 'sat', then the norm_type will be set to 'last'. This allows for reasonable normalised curves for the saturation recovery R1 experiment types.
- Change for norm_type variable in the relaxation exponential curve-fitting auto-analysis. This is now set to 'last', not only for the saturation recovery, but now also for the inversion recovery experiment types. This ensures that the normalisation point is the steady state magnetisation peak intensity.
- Cleared the list of blacklisted objects for the cdp.exp_info data structure. The data_store.exp_info.ExpInfo class blacklist variable had previously not been used. But after recent changes, the list was now active. As all the contents of the container were blacklisted, the container was being initialised as being empty when reading the XML formatted state or results files. Therefore the blacklist is now set to an empty list.
- Improvements for all of the tables of the relaxation dispersion chapter of the manual. The captions are now the full width (or height for rotated tables) of the page in the PDF version of the manual. The \latex{} command from the latex2html package has been used to improve the HTML versions of the tables by deactivating the landscape environment, the cmidrule command, and the caption width commands. This results in properly HTML formatted tables, rather than creating a PNG image for the whole table. These should significantly improve the tables in the webpages http://www.nmr-relax.com/manual/Comparison_of_dispersion_analysis_software.html, http://www.nmr-relax.com/manual/The_relaxation_dispersion_auto_analysis.html, and http://www.nmr-relax.com/manual/Dispersion_model_summary.html.
- Created the Structure.test_align_molecules system test. This will be used to extend the functionality of the structure.align user function to be able to align different molecules in the same data pipe, rather than requiring either models or identically named structures in different data pipes.
- Modified the Structure.test_align_molecules system test. This now simultaneously checks both the pipes and molecules arguments to the structure.align user function.
- More changes for the new Structure.test_align_molecules system test.
- Some more fixes for the Structure.test_align_molecules system test.
- Change to the Structure.test_align system test. The molecules argument for the structure.align user function has been changed to match the models argument, in that it now needs to be a list of lists with the first dimension matching the pipes argument. This change is to help with the implementation of the new structure.align functionality.
- Implemented the new molecules argument for the structure.align user function. In addition to accepting the new argument, the user function backend has been redesigned for flexibility. The assembly of coordinates and final rotations and translations now consist of three loops over desired data pipes, all models, and all molecules. If the models or molecules arguments are supplied, then the models or molecules in the loop which do not match are skipped. This logic simplifies and cleans up the backend.
- Created the Structure.test_rmsd_molecules system test. This will be used to implement a new molecules argument for the structure.rmsd user function so that the RMSD between different molecules rather than different models can be calculated.
- Implemented the new molecules argument for the structure.rmsd user function. This allows the RMSD between different molecules rather than different models to be calculated, extending the functionality of this user function.
- Created the Structure.test_displacement_molecules system test. This will be used to implement the new molecules argument for the structure.displacement user function.
- Implemented the molecules argument for the structure.displacement user function. This allows the displacements (translations and rotations) to be calculated between different molecules rather than different models. This information is stored in the dictionaries of the cdp.structure.displacement object with the keys set to the molecule list indices.
- Created the Structure.test_find_pivot system test. This is to check the structure.find_pivot user function as this algorithm is currently not being checked in the test suite.
- Created the Structure.test_find_pivot_molecules system test. This will be used to implement support for a molecules argument in the structure.find_pivot user function so that different molecules rather than different models can be used in the analysis.
- Increased the precision of pivot optimisation in the Structure.test_find_pivot_molecules system test.
- Implemented the molecules argument for the structure.find_pivot user function. This allows the motional pivot optimisation between different molecules rather than different models.
- Shifted the atomic assembly code from the structure.align user function into its own function. The new function assemble_coordinates() of the pipe_control.structure.main module will be used to standardise the process of assembling atomic coordinates for all of the structure user functions. This will improve the support for comparing different molecules rather than different models, as missing atoms or divergent primary sequences are properly handled, and it has multi-pipe support.
- Changed the argument order for the structure.align user function. The standardised order will now be pipes, models, molecules, atom_id, etc.
- Converted the structure.find_pivot user function to the new pipes/models/molecules/atom_id design. This allows the motional pivot algorithm to work on atomic coordinates from different data pipes, different structural models, and different molecules. The change allows the Structure.test_find_pivot_molecules system test to now pass, as missing atomic data is now correctly handled. The user function backend uses the new pipe_control.structure.main.assemble_coordinates() function. The Structure.test_find_pivot and Structure.test_find_pivot_molecules system tests have been updated for the user function argument changes.
- Shift of the atomic coordinate assembly code into the relax library. Most of the pipe_control.structure.main.assemble_coordinates() function has been shifted into the assemble_coord_array() function of the new lib.structure.internal.coordinates module. The pipe_control function now only checks the arguments and assembles the structural objects from the relax data store, and then calls assemble_coord_array() to do all of the work. This code abstraction increases the usefulness of the atomic coordinate assembly and allows it to be significantly expanded in the future, for example by being able to take sequence alignments into consideration.
- Tooltip standardisation for the structure.align and structure.find_pivot user functions.
- The coordinate assembly function now returns a list of unique IDs, one for each structural object, model and molecule.
- Changed the structure ID strings returned by the assemble_coord_array() function. This is from the lib.structure.internal.coordinates module. The structural object name is only included if more than one structural object has been supplied.
- More improvements for the structure ID strings returned by the assemble_coord_array() function.
- Converted the internal structural displacement object to use unique IDs rather than model numbers. This allows the object to be much more flexible in what types of structures it can handle. This is in preparation for a change in the structure.displacement user function.
- Converted the structure.displacement user function to the new pipes/models/molecules/atom_id design. This allows the displacements to be calculated between atomic coordinates from different data pipes, different structural models, and different molecules. The user function backend has been hugely simplified as it now uses the new pipe_control.structure.main.assemble_coordinates() function. The Structure.test_displacement system test has been updated for the user function argument changes.
- Another refinement for the structure ID strings returned by the assemble_coord_array() function.
- Updated the Structure.test_displacement_molecules system test. This is for the changes to the structure.displacement user function.
- Docstring spelling fixes for the steady-state NOE and relaxation curve-fitting auto-analyses.
- Converted the structure.rmsd user function to the new pipes/models/molecules/atom_id design. This allows the RMSD calculation to work on atomic coordinates from different data pipes, different structural models, and different molecules. The user function backend uses the new pipe_control.structure.main.assemble_coordinates() function. The Structure.test_rmsd_molecules system test has been updated for the user function argument changes.
- Created the internal structural object model_list() method. This is to simplify the assembly of a list of all current models in the structural object.
- Converted the structure.superimpose user function to the new pipes/models/molecules/atom_id design. The user function arguments have not changed, however the backend now uses the new pipe_control.structure.main.assemble_coordinates() function. This is to simply decrease the number of failure points possible in the structure user functions. The change has no effect on the user function use or results.
- Documentation fix for the assemble_coord_array() function. The return values for lib.structure.internal.coordinates.assemble_coord_array() were incorrectly documented.
- Modified the Structure.test_bug_22070_structure_superimpose_after_deletion system test. This now calls the structure.align user function after calling the structure.superimpose user function to better test a condition that can trigger bugs.
- Fixes for the structure.superimpose and structure.align user functions. The fit_to_mean() and fit_to_first() functions of lib.structure.superimpose were being incorrectly called, in that they expect a list of elements and not lists of lists.
- Code refactorisation for the structure.align user function backend. The looping over data pipes, model numbers, and molecule names, skipping those that don't match the function arguments, has been shifted into the new structure_loop() generator function of the pipe_control.structure.main module. This function assembles the data from the data store and then calls the new loop_coord_structures() generator function of the lib.structure.internal.coordinates module which does all of the work.
- Some docstring expansions for the pipe_control.structure.main module functions.
- Refactored the descriptions of a number of structure user functions. This includes the structure.align, structure.displacement, structure.find_pivot, structure.rmsd and structure.superimpose user functions. The paragraph_multi_struct and paragraph_atom_id module strings have been created and are shared as two paragraphs for each of these user function descriptions. This standardises the pipe/model/molecule/atom_id descriptions. The user function wizard page sizes have been updated for these changes.
- Changed the design of the lib.structure.internal.coordinates.assemble_coord_array() function. The elements_flag argument has been renamed to seq_info_flag. If this is set, then in addition to the atomic elements, the molecule name, residue name, residue number, and atom name are now assembled and returned. This information is now the common information between the structures, hence the return values for the elements are a list of str rather than a list of lists. All of the code in pipe_control.structure.main has been updated for the change.
- Fix for the structure.align user function if no data pipes are supplied. The pipes list was no longer being created as it was shifted to the assemble_coordinates() function, however it is required for the translation and rotation function calls.
- Converted the structure.web_of_motion user function to the new pipe/model/molecule/atom_id design. This allows the web of motion representation to work on atomic coordinates from different data pipes, different structural models, and different molecules. The user function backend uses the new pipe_control.structure.main.assemble_coordinates() function to assemble the common atom coordinates, molecule names, residue names, residue numbers, atom names and elements. All this information is then used to construct the new web of motion PDB file. Therefore the entire backend has been rewritten. The Structure.test_web_of_motion_12, Structure.test_web_of_motion_13, and Structure.test_web_of_motion_all system tests have all been updated for the changed structure.web_of_motion user function arguments. In addition, the system tests Structure.test_web_of_motion_12_molecules, Structure.test_web_of_motion_13_molecules and Structure.test_web_of_motion_all_molecules have been created as a copy of the other tests but with the 3 structures loaded as different molecules.
- Fix for the IDs returned by lib.structure.internal.coordinates.assemble_coord_array(). The list of unique structure IDs was being incorrectly constructed if multiple molecules are present but the molecules argument was not supplied. It would be of a different size to the coordinate data structure.
- Fix for the Structure.test_displacement system test for the assemble_coord_array() function bugfix.
- Modified the Structure.test_align system test to show a failure of the structure.align user function. The alignment causes all atoms in the structural object to be translated and rotated, whereas it should only operate on the atoms of the atom_id argument.
- Modified the Structure.test_superimpose_fit_to_mean system test. This is also to demonstrate a bug, this time in the structure.superimpose user function, in which the algorithm causes a translation and rotation of all atoms rather than just those selected by the atom_id argument.
- Modified some system tests of the structure.align and structure.superimpose user functions. The displace_id argument has been introduced for both of these user functions for finer control over which atoms are translated and rotated by the algorithm. This allows, for example, structures to be aligned based on a set of backbone heavy atoms while the protons and side chains are still displaced by default, or, if a single domain is used for the alignment, just that domain to be displaced.
- Added the displace_id argument to the structure.align and structure.superimpose user functions. This gives both of these user functions finer control over which atoms are translated and rotated by the algorithm. This allows, for example, structures to be aligned based on a set of backbone heavy atoms while the protons and side chains are still displaced by default, or, if a single domain is used for the alignment, just that domain to be displaced.
- Fixes for the Structure.test_superimpose_fit_to_mean system test for the displace_id argument.
- Modified the Structure.test_align_molecules system test to catch a bug. This is the failure of the displace_id argument of the structure.align user function when the molecules argument is supplied - all atoms are being displaced instead of a subset.
- Fix for the displace_id and molecules arguments of the structure.align user function. The atom ID used for the translations and rotations is now properly constructed from the molecule names in the molecules list and the displace_id string.
- Changes for water in the PDB file created by the structure.write_pdb user function. The waters with the residue name 'HOH' are no longer output to HET records.
- Improvement for the structure.read_pdb user function. The helix and sheet secondary structure reading now takes the real_mol argument into account to avoid reading in too much information.
- Improvement for the merge argument of the structure.read_pdb user function. This argument is now overridden if the molecule to merge to does not exist. This allows the merge flag to be used together with read_mol and set_mol_name set to lists.
- Fix for the selective secondary structure reading of the structure.read_pdb user function. The molecule index needs to be incremented by 1 to be the molecule number.
- Large improvement for the PDB molecule identification code. This affects the structure.read_pdb user function. Now the chain ID code, if present in the PDB file, is being used to determine which ATOM and HETATM records belong to which molecule. All of the records for each molecule are stored until the end, when they are all yielded. This allows for discontinuous chain IDs throughout the PDB file, something which occurs often with the HETATM records.
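A minimal sketch of the buffering idea described in the entry above - grouping ATOM and HETATM records by chain ID and only yielding each group once the whole file has been read, so that discontinuous chain IDs are tolerated (the parsing shown is illustrative, not the relax PDB reader):

```python
def molecule_records(pdb_lines):
    """Group ATOM/HETATM records by chain ID, yielding each molecule's records at the end."""
    molecules = {}
    order = []
    for line in pdb_lines:
        if not line.startswith(('ATOM', 'HETATM')):
            continue
        chain_id = line[21]    # Column 22 of the PDB format holds the chain ID.
        if chain_id not in molecules:
            molecules[chain_id] = []
            order.append(chain_id)
        molecules[chain_id].append(line)
    # All records have now been stored, so yield one molecule at a time.
    for chain_id in order:
        yield chain_id, molecules[chain_id]
```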
- Expanded the displace_id argument for the structure.align user function. This can now be a list of atom IDs, so that any atoms can be rotated together with the structure being aligned. This is useful if the molecules argument is supplied.
- Fix for the Noe.test_bug_21562_noe_replicate_fail system test. This is for the changed behaviour of the structure.read_pdb user function. The problem is that the PDB file read in this test has the chain ID set to X. This broken PDB causes molecule numbering problems.
- Expanded the description of the structure.rmsd user function.
- Changed the paragraph ordering in the documentation of a number of the structure user functions. This includes the structure.align, structure.displacement, and structure.find_pivot user functions.
- Fix for the prompt examples documentation for the structure.align user function.
- Improved the sizing layout of the structure.align user function GUI dialog.
- Improved the sizing layout of the structure.superimpose user function GUI dialog.
- Created the Structure.test_atomic_fluctuations system test. This will be used to implement the idea of the structure.atomic_fluctuations user function.
- Implemented the structure.atomic_fluctuations user function. This is loosely based on the structure.web_of_motion user function and is related to it. The user function will write to file a correlation matrix of interatomic distance fluctuations.
- Created 4 unit tests for the lib.io.swap_extension function. This is in preparation for implementing the function.
- Implemented the lib.io.swap_extension() function. This is confirmed to be fully functional by its four unit tests.
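The exact signature of lib.io.swap_extension() is not reproduced here; a minimal stand-alone sketch of the idea, assuming a file name and a new extension as arguments, could be:

```python
import os.path

def swap_extension(file=None, ext=None):
    """Return the file name with its extension swapped for ext (illustrative sketch only)."""
    root, _old = os.path.splitext(file)
    if ext is None:
        return root
    return root + '.' + ext

print(swap_extension(file='matrix.txt', ext='gnu'))    # matrix.gnu
```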
- Created the empty lib.plotting package. This follows from this thread. The package will be used for assembling all of the data plotting capabilities of relax. It will make support for different plotting software - Grace, OpenDX, matplotlib, gnuplot, etc. - more coherent. This will be used to create a software independent API for plotting in relax. That is, the plotting software is chosen by the user, the data output by the user function is passed into the lib.plotting API, and the API then hands it on to the software dependent backend within lib.plotting.
- Created the Structure.test_atomic_fluctuations_gnuplot system test. This checks the operation of the structure.atomic_fluctuations user function when the output format is set to 'gnuplot'. This will be used to implement this option. The current gnuplot script expected by this test is just a very basic starting script for now.
- Created the lib.plotting API function correlation_matrix(). This is the lib.plotting.api.correlation_matrix() function. It will be used for the visualisation of rank-2 correlation matrices. The current basic API design here uses a dictionary of backend functions (currently empty) for calling the backend.
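A minimal sketch of the dictionary-dispatch design described in the entry above (the backend name, signatures and error handling are assumptions for illustration only):

```python
# Hypothetical backend for the 'gnuplot' format.
def _gnuplot_correlation_matrix(matrix=None, file=None):
    print("The gnuplot backend would write the script '%s' here." % file)

# The plotting API maps the chosen software onto its backend function.
_correlation_matrix_backends = {
    'gnuplot': _gnuplot_correlation_matrix,
}

def correlation_matrix(format='gnuplot', matrix=None, file=None):
    """Visualise a rank-2 correlation matrix using the chosen plotting software."""
    if format not in _correlation_matrix_backends:
        raise ValueError("The plotting format '%s' is not supported." % format)
    _correlation_matrix_backends[format](matrix=matrix, file=file)

correlation_matrix(format='gnuplot', matrix=[[1.0, 0.2], [0.2, 1.0]], file='matrix.gnu')
```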
- Implemented a very basic gnuplot backend for the correlation_matrix() plotting API function. This is in the new lib.plotting.gnuplot module. It creates an incredibly basic gnuplot script for visualising the correlation matrix, assuming a text file has already been created.
- Enabled the gnuplot format for the structure.atomic_fluctuations user function. This uses the plotting API correlation_matrix() function for visualisation. The change allows the Structure.test_atomic_fluctuations_gnuplot system test to pass.
- Shifted the matrix output of the structure.atomic_fluctuations user function into lib.plotting.text. The new lib.plotting.text module will be used by the relax library plotting API to output data into plain text format. The current correlation_matrix() function, which has been added to the API correlation_matrix() function dictionary, simply has the file writing code of the structure.atomic_fluctuations user function. This significantly simplifies the user function.
- More simplifications for the structure.atomic_fluctuations user function backend.
- Fix for the structure.atomic_fluctuations user function backend. The pipe_control.structure.main.atomic_fluctuations() function no longer opens the output file.
- The gnuplot correlation_matrix() plotting API function now creates a text file of the data. The lib.plotting.gnuplot.correlation_matrix() function now calls the lib.plotting.text.correlation_matrix() function prior to creating the gnuplot script.
- Significantly expanded the gnuplot script created via the correlation_matrix() plotting API function. This is for the structure.atomic_fluctuations user function. The output terminal is now set to EPS, the colour map has been changed from the default to a blue-red map, labels have been added, the plot is now square, and comments are now included throughout the script to help a user hand modify it after creation.
- Improvement in the comments from the gnuplot correlation_matrix() plotting API function.
- Updated the Structure.test_atomic_fluctuations_gnuplot system test. This is for the gnuplot correlation_matrix() plotting API changes which affect the structure.atomic_fluctuations user function.
- Docstring fixes for the Structure.test_atomic_fluctuations_gnuplot system test. This was pointing to the structure.rmsd user function instead of structure.atomic_fluctuations.
- Fixes and improvements for the gnuplot correlation_matrix() plotting API function. This is for the structure.atomic_fluctuations user function. The "pm3d map" plot type is incorrect for this data type, so 'plot' is now used instead of 'splot'. The resultant EPS file is now much smaller. The colour map has also been changed to one of the inbuilt ones for higher contrast.
- Forced the gnuplot correlation_matrix plot to be square. This is for the correlation_matrix() plotting API function used by the new structure.atomic_fluctuations user function.
- Updated the Structure.test_atomic_fluctuations_gnuplot system test. This is for the changes of the gnuplot correlation_matrix() plotting API function used by the structure.atomic_fluctuations user function.
- Docstring fix for the Structure.test_atomic_fluctuations system test.
- Another docstring fix for the Structure.test_atomic_fluctuations system test.
- Created the Structure.test_atomic_fluctuations_angle system test. This will be used to implement the mapping of inter-atomic vector angular fluctuations between structures via a new 'measure' keyword argument for the structure.atomic_fluctuations user function.
- Implemented angular fluctuations for the structure.atomic_fluctuations user function. This adds the measure argument to the user function to allow either the default of 'distance' or the 'angle' setting to be chosen. The implementation is confirmed by the Structure.test_atomic_fluctuations_angle system test which now passes.
- Clean ups and speed ups of the structure.atomic_fluctuations user function. Duplicate calculations are now avoided, as the SD matrix is symmetric.
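A minimal numpy sketch of the distance-fluctuation measure, filling only the upper triangle and mirroring it since the SD matrix is symmetric (the array layout and the use of the sample standard deviation are assumptions, not the relax implementation):

```python
import numpy as np

def distance_fluctuations(coords):
    """SD of each interatomic distance over the structures; coords has shape (n_structures, n_atoms, 3)."""
    coords = np.asarray(coords, dtype=float)
    n_atoms = coords.shape[1]
    matrix = np.zeros((n_atoms, n_atoms))
    for i in range(n_atoms):
        for j in range(i + 1, n_atoms):
            # Distance between atoms i and j in each structure.
            dists = np.linalg.norm(coords[:, i] - coords[:, j], axis=1)
            # The matrix is symmetric, so the value is stored in both triangles.
            matrix[i, j] = matrix[j, i] = dists.std(ddof=1)
    return matrix
```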
- Description improvements and GUI layout fixes for the structure.atomic_fluctuations user function.
- Added the 'parallax shift' measure to the structure.atomic_fluctuations user function. The parallax shift is defined as the length of the average vector minus the interatomic vector. It is similar to the angle measure however, importantly, it is independent of the distance between the two atoms.
- Updated the gnuplot scripts to be executable. These are the scripts created by the gnuplot specific correlation_matrix() plotting API function. The file is made executable and the script now starts with "#!/usr/bin/env gnuplot".
- Created the Structure.test_atomic_fluctuations_parallax system test. This is to demonstrate that the parallax shift fluctuations are not implemented correctly.
- Fix for the Structure.test_atomic_fluctuations_parallax system test. The distance shifts need to be numbers, not vectors.
- Proper implementation of the 'parallax shift' for the structure.atomic_fluctuations user function.
- Improved the structure.atomic_fluctuations user function documentation. The fluctuation categories are now better explained. And the 'parallax shift' option is now available in the GUI.
- Fix for the parallax shift description in the structure.atomic_fluctuations user function. The parallax shift is not quite orthogonal to the distance fluctuations.
- Implemented the ordinary_least_squares function for the repeated auto-analysis. Inspection of statistics books shows that several authors do not recommend using regression through the origin (RTO). From Joseph G. Eisenhauer, Regression through the Origin: RTO residuals will usually have a nonzero mean, because forcing the regression line through the origin is generally inconsistent with the best fit; the R squared measure for RTO gives the proportion of the variability in the dependent variable "about the origin" explained by regression, and this cannot be compared to R squared for models which include an intercept. From "Experimental design and data analysis for biologists", G. P. Quinn and M. J. Keough: the minimum observed xi rarely extends to zero, and forcing the regression line through the origin not only involves extrapolating the regression line outside the data range but also assuming the relationship is linear outside this range (Cade & Terrell 1997, Neter et al. 1996); they recommend a model that fits the observed data well over one that goes through the origin but provides a worse fit; residuals from the no-intercept model no longer sum to zero; and the usual partition of SSTotal into SSRegression and SSResidual does not work.
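As a small numerical illustration of the residuals point (not relax code), ordinary least squares with an intercept can be compared against regression through the origin as follows:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

# Ordinary least squares: y = a + b*x.
b_ols, a_ols = np.polyfit(x, y, 1)

# Regression through the origin: y = b*x, with b = sum(x*y)/sum(x*x).
b_rto = np.sum(x * y) / np.sum(x * x)

# The OLS residuals sum to zero, whereas the RTO residuals generally do not.
print(np.sum(y - (a_ols + b_ols * x)))    # ~0.
print(np.sum(y - b_rto * x))              # Generally non-zero.
```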
- Added save state for test of bug 23186. Bug #23186: Error calculation of individual parameter δω from Monte-Carlo, is based on first spin.
- Added the system test Relax_disp.test_bug_23186_cluster_error_calc_dw which shows the failure of Monte Carlo simulations error calculations. Bug #23186: Error calculation of individual parameter δω from Monte-Carlo, is based on first spin.
- Added additional test for the r2a parameter. Bug #23186: Error calculation of individual parameter δω from Monte-Carlo, is based on first spin.
- Attempt to implement the GUI test General.test_bug_23187_residue_delete_gui. This will NOT catch the error. Bug #23187: Deleting residue in GUI, and then open spin viewer crashes relax.
- Added test for spin independent error of kAB. Bug #23186: Error calculation of individual parameter δω from Monte-Carlo, is based on first spin.
- Fix for the showing of the spin viewer window in the GUI tests. The show_tree() method of the main GUI window class was not calling the custom self.spin_viewer.Show() method, as required to set up the observer objects required to keep the spin viewer window updated. The value of status.show_gui was blocking this. Instead the show argument of this Show() method is being set to status.show_gui to allow the method to always be executed.
- Updated the main relax copyright notices for 2015.
- The copyright notice in the GUI now uses the info box object. This is for the status bar at the bottom of the GUI window. This removes one place where copyright notices needs to be updated each year. This status text will then be updated whenever the info.py file has been updated.
- Updated the copyright notice for 2015 in the GUI splash screen graphic.
- Race condition fixes for the General.test_bug_23187_residue_delete_gui GUI test. Some GUI interpreter flush() calls have been added to avoid races in the GUI. The GUI tests are so quick that the asynchronous user function call would otherwise be processed at the same time as the spin viewer window is being created, causing fatal segmentation faults in the test suite.
- More robustness for the spin viewer GUI window prune_*() methods. When no spin data exists, the self.tree.GetItemPyData(key) call can return None. This is now being checked for and such None values are being skipped in the prune_mol(), prune_res() and prune_spin() methods. The problem was found in the Mf.test_bug_20479_gui_final_pipe system test when running the command: for i in {1..10}; do ./relax --gui-tests --time -d &>> gui_tests.log; done
- More robustness for the spin viewer GUI window update_*() methods. When no spin data exists, the self.tree.GetItemPyData(key) call can return None. This is now being checked for and such None values are being skipped in the update_mol(), update_res() and update_spin() methods. The problem was found in the Mf.test_bug_20479_gui_final_pipe system test when running the command: for i in {1..10}; do ./relax --gui-tests --time -d &>> gui_tests.log; done
- More robustness for the spin viewer GUI window prune_*() methods. The data returned from the self.tree.GetItemPyData(key) call can, in rare racing cases, not contain the 'id' key. This is now being checked for and such entries are skipped in the prune_mol(), prune_res() and prune_spin() methods. The problem was found in the Mf.test_bug_20479_gui_final_pipe system test when running the command: for i in {1..10}; do ./relax --gui-tests --time -d &>> gui_tests.log; done
- More robustness for the spin viewer GUI window update_*() methods. The data returned from the self.tree.GetItemPyData(key) call can, in rare racing cases, not contain the 'id' key. This is now being checked for and such entries are skipped in the update_mol(), update_res() and update_spin() methods. The problem was found in the Mf.test_bug_20479_gui_final_pipe system test when running the command: for i in {1..10}; do ./relax --gui-tests --time -d &>> gui_tests.log; done
- Created a development document for catching segfaults and other errors in the GUI tests. This is needed as not all wxPython errors can be caught in the Python unittest framework.
- Small whitespace formatting fix for the titles printed by the align_tensor.display user function.
- Improvements for the plots created by the pcs.corr_plot user function. The axes now have labels, and have the range and number of ticks set to reasonable values.
- Improvements for the pcs.corr_plot user function - the plot range is now determined by the data.
- Improvements for the rdc.corr_plot user function - the plot range is now determined by the data.
- Added save state for testing implementation of error analysis. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Simplification of system test Relax_disp.test_task_7882_monte_carlo_std_residual, to just test the creation of Monte-Carlo data where errors are drawn from the reduced χ2 distribution. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Extension of the monte_carlo.create_data user function to draw errors from the reduced χ2 Gauss distribution as found by best fit. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Added to the pipe_control.error_analysis backend the modification of the data points with errors drawn from the reduced χ2 Gauss distribution. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Added an empty API method to return errors from the reduced χ2 distribution. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Added an API function in the relaxation dispersion analysis to return the error structure from the reduced χ2 distribution. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Temporary test of making a confidence interval as described in the fitting guide. This is the system test Relax_disp.x_test_task_7882_kex_conf, which is not activated by default. Running the test interestingly shows that there is a possibility of a lower global kex, but the value only differs from kex=1826 to kex=1813. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Change to the system test Relax_disp.x_test_task_7882_kex_conf(). This is just a temporary system test to check for local minima. The method is from the GraphPad regression book, http://www.graphpad.com/faq/file/Prism4RegressionBook.pdf, pages 109-111. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- An error is now raised if the R2eff model is used when drawing errors from the fit. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Added to the Relax_disp.test_task_7882_monte_carlo_std_residual() system test a check that an error is raised if the R2eff model is selected. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Added a test of the "distribution" argument in pipe_control.error_analysis.monte_carlo_create_data(). This is to make sure that an invalid argument is not passed into the function. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Extended the monte_carlo.create_data user function to allow the definition of the standard deviation to use in the Gauss distribution. This is for the creation of Monte-Carlo simulations where information about the expected errors of the data points has been gained from elsewhere rather than measured. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Added the 'fixed_error' argument to the pipe_control.error_analysis.monte_carlo_create_data() backend to allow a fixed error to be fed into the Gauss distribution, and inserted a range of checks to make sure the function behaves as expected. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Added to pipe_control.error_analysis.monte_carlo_create_data() the creation of data points for a fixed distribution. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Added tests to the Relax_disp.test_task_7882_monte_carlo_std_residual() system test for the creation of Monte-Carlo data by different methods. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- In pipe_control.error_analysis.monte_carlo_create_data(), if the data is a list or ndarray and the distribution is set to 'fixed', the data points are now modified according to the fixed error. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
- Expanded the STD acronym to its meaning of standard deviation. This is in the monte_carlo.create_data user function. Task #7882: Implement Monte-Carlo simulation whereby errors are generated with width of standard deviation or residuals.
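A minimal sketch of the Monte-Carlo data creation described in the entries above, drawing one randomised data set from a Gauss distribution whose width comes from either the measured errors, the spread of the best-fit residuals, or a fixed user-supplied error (the names and the exact width used for the residual-based option are assumptions, not the relax backend):

```python
import numpy as np

def mc_create_data(back_calc, errors=None, residuals=None, distribution='measured', fixed_error=None):
    """Return one randomised Monte-Carlo data set (illustrative sketch only)."""
    back_calc = np.asarray(back_calc, dtype=float)
    if distribution == 'measured':
        sigma = np.asarray(errors, dtype=float)          # The measured per-point errors.
    elif distribution == 'red_chi2':
        sigma = np.std(np.asarray(residuals), ddof=1)    # Width from the best-fit residuals.
    elif distribution == 'fixed':
        sigma = float(fixed_error)                       # A fixed user-supplied error.
    else:
        raise ValueError("The distribution '%s' is not supported." % distribution)
    return np.random.normal(loc=back_calc, scale=sigma)
```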
- Added a RelaxWarning printout to the dep_check module if wxPython 2.8 or less is encountered. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/7502. The warning text is simply written to STDERR as relax starts.
- Updated the wxPython version in the relax manual to be 2.9 or higher. This is in the section http://www.nmr-relax.com/manual/Dependencies.html.
- The GUI tests are now skipped for wxPython version <= 2.8 due to bugs causing fatal segfaults. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/7502. These wxPython versions are simply too buggy.
- Fix for the Relax_disp.test_bug_23186_cluster_error_calc_dw system test on 32-bit and Python <= 2.5 systems.
- Better error handling in the structure.align user function. If no common atoms can be found between the structures, a RelaxError is now raised for better user feedback.
- Created an empty lib.sequence_alignment relax library package. This may be used in the future for implementing more advanced structural alignments (the current method is simply to skip missing atoms, sequence numbering changes are not handled).
- Added the sequence_alignment package to the lib package __all__ list.
- Added the unit testing infrastructure for the new lib.sequence_alignment package.
- Implementation of the Needleman-Wunsch sequence alignment algorithm. This is located in the lib.sequence_alignment.needleman_wunsch module. This is implemented as described in the Wikipedia article https://en.wikipedia.org/wiki/Needleman%E2%80%93Wunsch_algorithm.
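For reference, a compact textbook version of the algorithm with a simple linear gap penalty, in the spirit of the Wikipedia description - the relax implementation in lib.sequence_alignment.needleman_wunsch later grew substitution matrix, gap opening/extension and end penalty support, which this sketch does not include:

```python
import numpy as np

def needleman_wunsch(seq1, seq2, match=1, mismatch=-1, gap=-1):
    """Global alignment of two sequences with a linear gap penalty."""
    n, m = len(seq1), len(seq2)
    F = np.zeros((n + 1, m + 1))
    F[:, 0] = gap * np.arange(n + 1)
    F[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score = match if seq1[i-1] == seq2[j-1] else mismatch
            F[i, j] = max(F[i-1, j-1] + score, F[i-1, j] + gap, F[i, j-1] + gap)

    # Traceback from the bottom-right corner to build the aligned strings.
    align1, align2 = '', ''
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and F[i, j] == F[i-1, j-1] + (match if seq1[i-1] == seq2[j-1] else mismatch):
            align1, align2 = seq1[i-1] + align1, seq2[j-1] + align2
            i, j = i - 1, j - 1
        elif i > 0 and F[i, j] == F[i-1, j] + gap:
            align1, align2 = seq1[i-1] + align1, '-' + align2
            i -= 1
        else:
            align1, align2 = '-' + align1, seq2[j-1] + align2
            j -= 1
    return align1, align2

print(needleman_wunsch('GCATGCU', 'GATTACA'))
```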
- Created a unit test for checking the Needleman-Wunsch sequence alignment algorithm. This uses the DNA data from the example in the Wikipedia article at https://en.wikipedia.org/wiki/Needleman%E2%80%93Wunsch_algorithm. The test shows that the implementation of the lib.sequence_alignment.needleman_wunsch.needleman_wunsch_align() function is correct.
- Created the lib.sequence_alignment.substitution_matrices module. This is for storing substitution matrices for use in sequence alignment. The module currently only includes the BLOSSUM62 matrix.
- Corrected the spelling of the BLOSUM62 matrix in lib.sequence_alignment.substitution_matrices.
- Fix for the lib.sequence_alignment.substitution_matrices.BLOSUM62_SEQ string.
- Modification of the Needleman-Wunsch sequence alignment algorithm implementation. This is in the lib.sequence_alignment.needleman_wunsch functions. Scoring matrices are now supported, as well as a user supplied non-integer gap penalty. A bug in the algorithm for walking through the traceback matrix under certain conditions has also been fixed.
- Created the lib.sequence_alignment.align_protein module for the sequence alignment of proteins. This general module currently implements the align_pairwise() function for the pairwise alignment of protein sequences. It provides the infrastructure for specifying gap starting and extension penalties, choosing the alignment algorithm (currently only the Needleman-Wunsch sequence alignment algorithm as 'NW70'), and choosing the substitution matrix (currently only BLOSUM62). The function provides lots of printouts for user feedback.
- Created a unit test for lib.sequence_alignment.align_protein.align_pairwise(). This is to test the pairwise alignment of two protein sequences using the Needleman-Wunsch sequence alignment algorithm, BLOSUM62 substitution matrix, and gap penalty of 10.0.
- Added more printouts to the Test_align_protein.test_align_pairwise unit test. This is the test of the module _lib._sequence_alignment.test_align_protein.
- Fix for the Needleman-Wunsch sequence alignment algorithm when the substitution matrix is absent.
- The lib.sequence_alignment.align_protein.align_pairwise() function now returns data. This includes both alignment strings as well as the gap matrix.
- Annotated the BLOSUM62 substitution matrix with the amino acid codes for easy reading.
- Updated the gap penalties in the Test_align_protein.test_align_pairwise unit test. This is from the unit test module _lib._sequence_alignment.test_align_protein.
- Modified the Needleman-Wunsch sequence alignment algorithm. The previous attempt was buggy. The algorithm has been modified to match the logic of the GPL licensed EMBOSS software to allow for gap opening and extension penalties, as well as end penalties. No code was copied; rather, only the algorithm for creating the scoring, penalty and traceback matrices was followed.
- Added a DNA similarity matrix to lib.sequence_alignment.substitution_matrices.
- Added sanity checks to the Needleman-Wunsch sequence alignment algorithm. The residues of both sequences are now checked in needleman_wunsch_align() to make sure that they are present in the substitution matrix.
- Added the NUC 4.4 nucleotide substitution matrix from ftp://ftp.ncbi.nih.gov/blast/matrices/. Uracil was added to the table as a copy of T.
- Added the header from ftp://ftp.ncbi.nih.gov/blast/matrices/BLOSUM62. This is to document the BLOSUM62 substitution matrix.
- Added the PAM 250 amino acid substitution matrix. This was taken from ftp://ftp.ncbi.nih.gov/blast/matrices/PAM250 and added to lib.sequence_alignment.substitution_matrices.PAM250.
- Modified the Test_needleman_wunsch.test_needleman_wunsch_align_DNA unit test to pass. This is from the unit test module _lib._sequence_alignment.test_needleman_wunsch. The DNA sequences were simplified so that the behaviour can be better predicted.
- Created the Test_needleman_wunsch.test_needleman_wunsch_align_NUC_4_4 unit test. This is in the unit test module _lib._sequence_alignment.test_needleman_wunsch. This tests the Needleman-Wunsch sequence alignment for two DNA sequences using the NUC 4.4 matrix.
- Created a unit test for demonstrating a failure in the Needleman-Wunsch sequence alignment algorithm. The test is Test_needleman_wunsch.test_needleman_wunsch_align_NUC_4_4b from the _lib._sequence_alignment.test_needleman_wunsch module. The problem is that the start of the alignment is truncated if any gaps are present.
- Fix for the Needleman-Wunsch sequence alignment algorithm. The start of the sequences are no longer truncated when starting gaps are encountered.
- The needleman_wunsch_align() function now accepts the end gap penalty arguments. These are passed onto the needleman_wunsch_matrix() function.
- Added the end gap penalty arguments to lib.sequence_alignment.align_protein.align_pairwise().
- Created the Structure.test_align_CaM_BLOSUM62 system test. This will be used for expanding the functionality of the structure.align user function to perform true sequence alignment via the new lib.sequence_alignment package. The test aligns 3 calmodulin (CaM) structures from different organisms, hence the sequence numbering is different and the current structure.align user function design fails. The structure.align user function has been expanded in the test to include a number of arguments for advanced sequence alignment.
- Added support for the PAM250 substitution matrix to the protein pairwise sequence alignment function. This is the function lib.sequence_alignment.align_protein.align_pairwise().
- Bug fix for the Needleman-Wunsch sequence alignment algorithm. Part of the scoring system was functioning incorrectly when the gap penalty scores were non-integer, as some scores were being stored in an integer array. Now the array is a float array.
- Created the Test_align_protein.test_align_pairwise_PAM250 unit test. This is in the unit test module _lib._sequence_alignment.test_align_protein. It checks the protein alignment function lib.sequence_alignment.align_protein.align_pairwise() together with the PAM250 substitution matrix.
- Small docstring expansion for lib.sequence_alignment.align_protein.align_pairwise().
- Added the sequence alignment arguments to the structure.align user function front end. This includes the 'matrix', 'gap_open_penalty', 'gap_extend_penalty', 'end_gap_open_penalty', and 'end_gap_extend_penalty' arguments. The 'algorithm' argument has not been added to save room, as there is only one choice of 'NW70'. A paragraph has been added to the user function description to explain the sequence alignment part of the user function.
- Added the sequence alignment arguments to the back end of the structure.align user function. This is to keep the code in trunk functional until the sequence alignment prior to superimposition has been implemented.
- Removed the 'algorithm' argument from the Structure.test_align_CaM_BLOSUM62 system test script. This is for the structure.align user function. The argument has not been implemented to save room in the GUI, and as 'NW70' is currently the only choice.
- The sequence alignment arguments are now passed all the way to the internal structural object backend. These are the arguments of the structure.align user function.
- Created the lib.sequence.aa_codes_three_to_one() function. The lib.sequence module now contains the AA_CODES dictionary which is a translation table for the 3 letter amino acid codes to the one letter codes. The new aa_codes_three_to_one() function performs the conversion.
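A minimal sketch of such a translation table and conversion function (only the twenty standard amino acids and a fallback for unknown residues are shown; the actual AA_CODES contents in lib.sequence may differ):

```python
# Translation table from the 3-letter amino acid codes to the 1-letter codes.
AA_CODES = {
    'ALA': 'A', 'ARG': 'R', 'ASN': 'N', 'ASP': 'D', 'CYS': 'C',
    'GLN': 'Q', 'GLU': 'E', 'GLY': 'G', 'HIS': 'H', 'ILE': 'I',
    'LEU': 'L', 'LYS': 'K', 'MET': 'M', 'PHE': 'F', 'PRO': 'P',
    'SER': 'S', 'THR': 'T', 'TRP': 'W', 'TYR': 'Y', 'VAL': 'V',
}

def aa_codes_three_to_one(code):
    """Convert a 3-letter amino acid code to the 1-letter code, with a fallback for non-standard residues."""
    return AA_CODES.get(code.upper(), '*')

print(aa_codes_three_to_one('Leu'))    # L
print(aa_codes_three_to_one('HOH'))    # *
```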
- Implemented the internal structural object MolContainer.loop_residues() method. This generator method is used to quickly loop over all residues of the molecule.
- Implemented the internal structural object one_letter_codes() method. This will create a string of one letter residue codes for the given molecule. Only proteins are currently supported. This method uses the new lib.sequence.aa_codes_three_to_one() relax library function.
- Sequence alignment is now performed in lib.structure.internal.coordinates.assemble_coord_array(). This is a pairwise alignment to the first molecule of the list. The alignments are not yet used for anything. The assemble_coord_array() function is used by the structure.align user function, as well as a few other structure user functions.
- Fix for the lib.sequence.aa_codes_three_to_one() function. Non-standard residues are now converted to the '*' code. The value of 'X' prevents any type of alignment of a stretch of X residues as X to X in both the BLOSUM62 and PAM250 substitution matrices are set to -1.
- Modified the gap penalty arguments for the structure.align user function. These now must always be supplied, as None is not handled by the backend lib.sequence_alignment.needleman_wunsch module. The previous defaults of None are now set to 0.0.
- Updated the artificial diffusion tensor test suite data. This is the data in test_suite/shared_data/diffusion_tensor. The residues in the PDB files are now proper amino acids, so the HETATM records are now ATOM records, and the CONECT records have been eliminated.
- Another update for the artificial diffusion tensor test suite data. The number of increments on the sphere has been increased from 5 to 6, to make the vector distribution truly uniform. All PDB files and relaxation data have been updated.
- Changed the synthetic PDB for the artificial diffusion tensor test suite data. The nitrogen and proton positions are now shifted 10 Angstrom along the distribution vectors. This is to avoid having all nitrogens positioned at the origin which causes the internal structural object algorithm for determining which atoms are connected to fail.
- Reintroduced the CONECT PDB records into the artificial diffusion tensor test suite data. The uniform vector distributions have overlapping vectors. This causes the internal structural object atom connection determining algorithm to fail, as this is distance-based rather than using the PDB amino acid definitions for now.
- Updates for the Structure.test_create_diff_tensor_pdb_sphere system test. The test now uses the sphere synthetic relaxation data rather than the ellipsoid data, and the PDB checking has been updated for the new data.
- Updates for the Structure.test_create_diff_tensor_pdb_prolate system test. The test now uses the spheroid synthetic relaxation data rather than the ellipsoid data, and the PDB checking has been updated for the new data.
- Updates for the Structure.test_create_diff_tensor_pdb_oblate system test. The test now uses the spheroid synthetic relaxation data rather than the ellipsoid data, and the PDB checking has been updated for the new data. The oblate tensor is now forced in the system test script.
- Updates for the Structure.test_create_diff_tensor_pdb_ellipsoid system test. The PDB checking has been updated for the new data.
- Updated the Structure.test_delete_atom system test for the changed PDB structures. The test_suite/shared_data/diffusion_tensor/spheroid/uniform.pdb file now has more residues, and the atomic positions are different.
- Updated the Structure.test_align system test for the changed PDB structures. The test_suite/shared_data/diffusion_tensor/spheroid/uniform.pdb file now has more residues, and the atomic positions are different.
- Updated the Structure.test_align_molecules system test for the changed PDB structures. The test_suite/shared_data/diffusion_tensor/spheroid/uniform.pdb file now has more residues, and the atomic positions are different.
- Python 3 fix for the lib.sequence module. The string.upper() function no longer exists.
- Python 3 fix for the lib.sequence_alignment.align_protein module. The string.upper() function no longer exists.
- Modified the generate_data.py diffusion tensor to relaxation data creation script. The NH vectors are no longer truncated to match the PDB.
- Python 3 fix for the generate_data.py diffusion tensor to relaxation data creation script. The string.upper() function no longer exists.
- Reintroduced the simulated PDB truncation into the artificial diffusion tensor test suite data. This is different to the previous implementation which was deleted recently. It now simulates the truncation of both the N and H positions in the PDB and reconstructs the expected vector.
- Updates for some of the Structure.test_create_diff_tensor_pdb_* system tests. This includes Structure.test_create_diff_tensor_pdb_ellipsoid, Structure.test_create_diff_tensor_pdb_oblate, and Structure.test_create_diff_tensor_pdb_prolate. The new simulated PDB truncation in the test data causes the PDB files created in these tests to be slightly different.
- The pairwise sequence alignment is now active in the structure.align user function. This is implemented in the lib.structure.internal.coordinates.assemble_coord_array() function for assembling atomic coordinates. It will also automatically be used by many of the structure user functions which operate on multiple structures. The atomic coordinate assembly logic has been completely changed. Instead of grouping atomic information by the molecule, it is now grouped per residue. This allows the residue based sequence alignments to find matching coordinate information. The assemble_coord_array() function will also handle the algorithm argument set to None and assume that the residue sequences are identical between the structures, but this should be avoided. A new function, common_residues() has been created as a work-around for not having a multiple sequence alignment implementation. It will take the pairwise sequence alignment information and construct a special data structure specifying which residues are present in all structures. The logic for skipping missing atoms remains in place, but it now operates on the residue rather than molecule level and simply uses the atom name rather than atom ID to identify common atoms.
- Changed the gap opening penalty to 10 in the N-state model structure_align.py system test script.
- Docstring update for the pipe_control.structure.main.assemble_coordinates() function. This is for the algorithm argument which can now be set to None.
- Fix for the sequence alignment for assembling atomic coordinates. This caused the Structure.test_superimpose_fit_to_mean system test to fail. The problem was in the new logic of the lib.structure.internal.coordinates.assemble_coord_array() function. The coordinate assembly now terminates when either the end of the first molecule or the current molecule is reached.
- Bug fixes for the new lib.structure.internal.coordinates.common_residues() function. This function for determining the common residues between multiple sets of pairwise alignments was failing in quite a number of cases. The logic has been updated to handle these.
- Another fix for the lib.structure.internal.coordinates.common_residues() function. The wrong index was being used to skip residues in the second sequence.
- Created the Structure.test_pdb_combined_secondary_structure system test. This is used to demonstrate a problem in the handling of secondary structure metadata when combining multiple PDB structures. It appears as if the chain ID is preserved as the original ID and is not updated to match the new IDs in the output PDB.
- Updated the Structure.test_metadata_xml system test for the changed PDB metadata handling. The helix and sheets IDs are now molecule indices.
- Disabled the General.test_bug_23187_residue_delete_gui GUI test. This is essential as a wxPython bug in Mac OS X systems causes this test to trigger a 'Bus Error' every time the GUI tests are run, killing relax.
Bugfixes
- Bug fix for the lib.arg_check.is_int_list() function for checking a list of lists. This is used to check user function arguments, but was causing a RelaxError to be raised for all integer list of lists user function arguments when a valid value is supplied. The function has been updated to match the is_str_list() function which does not suffer from this bug.
- Fix in the dispersion API to set the error value for clustered values. Bug #23186: Error calculation of individual parameter δω from Monte-Carlo, is based on first spin.
- Fix for bug #23187, the problem whereby opening the spin viewer window, deleting a residue, and then reopening the spin viewer crashes relax. This change completes the spin viewer update_*() functions. The prune_list variable was initialised but not used. Now it is used to store the keys of the items to delete, and then the items are deleted at the end in a new loop so that the loop over the dictionary keys is not corrupted.
- Fix for the rdc.corr_plot user function. The Y-axis is now set to the measured RDC, as the RDC errors are plotted as dY errors. This matches the behaviour of the pcs.corr_plot user function.
- Bug fix for the printouts from the relax_data.read user function. This problem was introduced in the last relax release. The problem is that the spin ID in the loaded relaxation data printout is the same for all data, being the spin ID of the first spin. This has no effect on how relax runs, it is only incorrect feedback.
- Bug fix for the PDB secondary structure handling when combining multiple PDB structures. The helix and sheet metadata now converts the original chain IDs into molecule indices, shifted to new values based on the currently loaded data, when the structure.read_pdb user function is executed. When the structure.write_pdb user function is executed, the molecule indices are converted into new chain IDs. This allows the chain IDs in the HELIX and SHEET records to match those of the ATOM and HETATM records.
- Bug fix for the structure.read_pdb user function parsing of CONECT records. CONECT records pointing to ATOM records were not being read by the user function. As ATOM records should not require CONECT records by their definition, this is only a minor problem affecting synthetic edge cases.
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.3.4
Description
This is a major feature and bugfix release, finally adding support for the saturation recovery and inversion recovery R1 experiments and including a major bug fix for storing multi-dimensional numpy data structures as IEEE 754 byte arrays in the XML output of the relax state and results files.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.3.4
(3 December 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.3.4
Features
- Numerous improvements for the relax_fit.select_model user function.
- Support for the saturation recovery experiment in the relaxation exponential curve-fitting analysis.
- Support for the inversion recovery experiment in the relaxation exponential curve-fitting analysis.
- Added a section to the start of the relaxation curve-fitting chapter of the manual to include descriptions of all supported models.
- Addition of a button to the R1 and R2 GUI analyses for selecting the desired exponential curve model via the relax_fit.select_model user function.
Changes
- Small updates for the wiki section of the release checklist document.
- Fixes for the links at the bottom of all HTML manual pages. This is for the automatically generated documentation, created using latex2html. The links all require double quotes, and some a trailing '/'. The links fixed are [1], [2] and [3].
- Removed the repository backup file text from the relax manual. The gzipped repository dump file has not been created by Gna! for many, many years. The problem was identified by the W3C link checker.
- Updated all of the [4] links in the lib.dispersion package. This is for all of the individual model pages in the HTML manual.
- Improved the description for the relax_fit.select_model user function.
- A small code rearrangement to create the new target_functions.relax_fit_wrapper module. This follows from the idea at https://web.archive.org/web/gna.org/task/?7415#comment6. The *func_wrapper() functions of the specific_analyses.relax_fit.optimisation module have been shifted out and converted to class methods to create the target_functions.relax_fit_wrapper module. This will be used to abstract away all of the C code, and will form the infrastructure to allow new exponential curves to be quickly supported. The modules of the specific_analyses.relax_fit and specific_analyses.relax_disp packages now import the target_functions.relax_fit_wrapper.Relax_fit_opt target function class and use that instead.
- Shifted the C code Jacobian functions into the new target_functions.relax_fit_wrapper module. This shifts all of the relaxation curve-fitting C code access into the target_functions.relax_fit_wrapper module so that the rest of relax does not need to handle the C code. This will allow for new models to be very easily supported, as they would all be set up in this target function module.
- Updated the formula in the description of the relax_fit.select_model user function.
- Modified the printouts from the structure.write_pdb user function if models are present. Instead of printing out 'MODEL', 'ATOM, HETATM, TER' and 'ENDMDL' for each model, the header 'MODEL records' is printed followed by a single '.' character for each model. For structures with many models, this results in a huge speed up of the user function which is strongly limited by how fast the terminal can display text.
- Added the synthetic saturation-recovery data in the form of Sparky peak lists to the repository. These files were created by Andras Boeszoermenyi. They are attached to Task #7415 as the Relax_sym.tar.gz file. They were created for the formula I0(1 - e^(-R1.t)) where I0 = 1000000000000000.00 and R1 = 0.5. These files and the associated relax_sim.py script (which needs to be updated for the latest relax version) could form the basis of a basic system test. This system test could then be used to implement the saturation-recovery experiment equations in relax.
- Updated the target_functions package __all__ list to include the relax_fit* modules.
- Modified the package __all__ list checking unit test to accept *.so C modules.
- Removal of an unused import in the relax_fit_zooming_grid.py system test script.
- Added a system test script for testing the saturation-recovery R1 experiment. This was created by Andras Boeszoermenyi. The file was taken from the saturation_recovery.tar.gaz file attached to Task #7415. The only difference with the original script is that the grace.view user function calls have been removed, as these cannot be used in a system test.
- Modified the relax_fit_saturation_recovery.py script to work as a system test. This is the script from Andras Boeszoermenyi. The change follows from the discussion of [5]. The status.install_path variable is now used to point to the location of the files. The relax data store ds.tmpdir variable is used for outputting all files. And commented out user functions have been deleted.
- Added a copyright notice for Andras Boeszoermenyi for the newly added saturation-recovery R1 script. This change follows the discussion in the message [6].
- Created the Relax_fit.test_saturation_recovery system test. This follows from the discussion of [7].
- Added the saturation recovery experiment to the relax_fit.select_model user function. This simply adds a new option and sets up a different parameter set [Rx, I∞].
- Modified the Relax_fit.test_saturation_recovery system test script. The relax_fit.select_model user function call now selects the 'sat' model.
- Fix for the relax_fit.select_model user function backend for the 'sat' model.
- The exponential model name is now being passed into the target function class. The model as specified by the relax_fit.select_model user function is now finally being sent into the target function, in this case the Relax_fit_opt class of the target_functions.relax_fit_wrapper module.
- Small fix for the relax_fit.select_model user function.
- Renamed all of the relaxation curve-fitting target functions. All of the model-specific C functions have been renamed by appending '_exp' to the current names, giving func_exp, dfunc_exp, d2func_exp, jacobian_exp, and jacobian_chi2_exp. All of the *_wrapper() methods of the Relax_fit_opt target function class in target_functions.relax_fit_wrapper have likewise been renamed to *_exp(). The target function class now only aliases the *_exp() methods when the model is set to 'exp'.
- Alphabetical ordering of the C function imports in the target_functions.relax_fit_wrapper module.
- Modified the Relax_fit.test_saturation_recovery system test to check for I∞ instead of I0.
- Added support for the saturation recovery experiment to the parameter disassembly function. This is in the disassemble_param_vector() function of the specific_analyses.relax_fit.parameters module. This function requires each experiment to be handled separately.
- Implemented the target functions for the saturation recovery exponential curve. In the Relax_fit_opt Python target function class of the target_functions.relax_fit_wrapper module, the new func_sat(), dfunc_sat() and d2func_sat() methods have been created as wrappers for the new C functions. These are aliased to func(), dfunc() and d2func() in the __init__() method. In the target_functions/exponential.c C file, the functions exponential_sat(), exponential_sat_dIinf(), exponential_sat_dR(), exponential_sat_dIinf2(), exponential_sat_dR_dIinf() and exponential_sat_dR2() have been created to implement the function, gradient, and Hessian for the equation I(t) = I∞(1 - e^(-R.t)). In the target_functions/relax_fit.c file, the functions func_sat(), dfunc_sat(), d2func_sat(), jacobian_sat() and jacobian_chi2_sat() have been created as duplications of the *_exp() functions, but pointing to the exponential_sat*() functions and using I∞ instead of I0.
- Split the saturation recovery exponential equations and partial derivatives into their own C file.
- Expansion and improvements for the relax_fit.select_model user function documentation and printouts.
- The relax_fit.relax_time and relax_fit.select_model user functions now have wizard graphics. The R1 graphic from graphics/analyses/r1_200x200.png is now being used.
- Added support for the inversion recovery experiment to the parameter disassembly function. This matches the change for the saturation recovery experiment. This is in the disassemble_param_vector() function of the specific_analyses.relax_fit.parameters module. This function requires each experiment to be handled separately.
- Expanded the relax_fit_saturation_recovery.py system test script. This now calls the error_analysis.covariance_matrix user function to test that code path.
- Updated the relaxation curve-fitting covariance_matrix() API method to handle all models. The check for the 'exp' model type has been eliminated, and the parameter vector is assembled using the flexible assemble_param_vector() function rather than manually constructing the vector.
- The errors in the Relax_fit.test_saturation_recovery system test are now reasonable. They have been set to 5% of I∞ so that the chi-squared value during optimisation is more realistic.
- Updated the relaxation curve-fitting get_param_names() API method to handle all models. This now simply returns the spin container 'params' list, allowing all models to be properly supported.
- Big bug fix for the error_analysis.covariance_matrix user function. The model_info structure is now being passed into the get_param_names() API method, as required by the API.
- Another change for the relaxation curve-fitting covariance_matrix() API method to handle all models. The diagonal of 1.0 values used to construct the scaling matrix now has the same number of elements as there are parameters.
- Implemented the target functions for the inversion recovery exponential curve. In the Relax_fit_opt Python target function class of the target_functions.relax_fit_wrapper module, the new func_inv(), dfunc_inv() and d2func_inv() methods have been created as wrappers for the new C functions. These are aliased to func(), dfunc() and d2func() in the __init__() method. The target_functions/exponential_inv.c C file has been created with the functions exponential_inv(), exponential_inv_d0(), exponential_inv_dIinf(), exponential_inv_dR(), exponential_inv_dI02(), exponential_inv_dIinf2(), exponential_inv_dI0_dIinf(), exponential_inv_dR_dI0(), exponential_inv_dR_dIinf() and exponential_inv_dR2(), which implement the function, gradient, and Hessian for the equation I(t) = I∞ - I0.e^(-R.t). In the target_functions/relax_fit.c file, the functions func_inv(), dfunc_inv(), d2func_inv(), jacobian_inv() and jacobian_chi2_inv() have been created as duplications of the *_exp() functions, but pointing to the exponential_inv*() functions and adding the I∞ dimension. A minimal Python sketch of the three exponential curve models is given at the end of this Changes list.
- More editing of the relax_fit.select_model user function. The IR and SR abbreviations have been added, and a lot of text cleaned up.
- Improvement for the relax_fit.select_model user function in the GUI. Unicode text is now being used to display the parameters as R_x and I_0 and to show an infinity symbol in the I∞ parameter. The Rx and I∞ parameters have been added to lib.text.gui to allow this.
- Expanded the relaxation curve-fitting chapter of the manual to include descriptions of the models. A new section at the start of this chapter has been added to explain the different models and their equations. This was taken from the script mode section and expanded to include the new saturation recovery experiment.
- Removed the relax_fit.select_model user function call from the relax_fit auto-analysis. This is to allow the user in a script, or in the GUI, to choose the model themselves.
- Added a button to the R1 and R2 GUI analyses for executing the relax_fit.select_model user function. This is just after the peak list GUI element and before the optimisation settings. It allows different curve types to be selected for the analysis.
- Created the new specific_analyses.relax_fit.checks module. This creates the check_model_setup Check object, following the check_*() function design at [8]. This will be used to make sure that the exponential curve model is set prior to executing certain user functions.
- Improved the checking in the relaxation curve-fitting analysis. The new specific_analyses.relax_fit.checks.check_model_setup() function is now called prior to minimisation and in the get_param_names() API method to prevent Python errors from occurring due to missing data structures. In addition, the pipe_control.mol_res_spin module function exists_mol_res_spin_data() has been replaced with check_mol_res_spin_data().
- Fix for the recently broken Relax_fit.test_curve_fitting_height_estimate_error system test. The relax_fit.select_model user function is now called as this is no longer performed in the auto-analysis.
- Removed the text that the inversion recovery experiment is not implemented yet. This is in the documentation for the relax_fit.select_model user function and is in preparation for completing this.
- Added the checks module to the specific_analyses.relax_fit package __all__ list.
- Fixes for the relaxation dispersion analysis for the recent relaxation curve-fitting analysis changes. The Relax_fit_opt target function class of the target_functions.relax_fit_wrapper module now requires the model argument to be supplied in order to be correctly set up.
- Fixes for the unit tests of the target_functions.relax_fit C module. This is for the recent renaming of all the C functions based on the model type.
- Fix for the Rx.test_r1_analysis GUI test. A click on the relax_fit.select_model user function button is now being simulated.
- Created a directory for holding synthetic inversion recovery R1 data.
- Copied synthetic inversion recovery Sparky peak lists from Sébastien Morin's inversion-recovery branch.
- Created a system test script for the inversion-recovery function. This is based on a copy of the script 'relax_fit_exp_2param_neg.py'.
- The 3-parameter curve fitting test script now uses the corresponding peak lists.
- Prepared the "exp_3param" test for inclusion of artificial data.
- Added missing delays in the list. The duplicates had been omitted...
- Manually fixed the script based on changes made during the branch update. This is as discussed by Edward d'Auvergne in a post at [9].
- Updated Séb's relax_fit_exp_3param_inv_neg.py system test script to work with the current relax design.
- Added a script for calculating the expected peak intensities for an inversion recovery curve. This is based on the values used by Sébastien Morin in his inversion-recovery branch, as found in the check_curve_fitting_exp_3param_inv_neg() function of the test_suite/system_tests/relax_fit.py file.
- Increased the precision of the printout from the calc.py script of the last commit.
- Changed the peak intensities for Gly 4 in the synthetic inversion recovery Sparky lists. The values have been changed to match that determined from the calc.py script. The replicate spectra intensities are simply the calculated intensity +/-1, to preserve the average.
- Created the Relax_fit.test_inversion_recovery system test. This simply calls Sébastien Morin's relax_fit_exp_3param_inv_neg.py system test script, ported from the inversion-recovery branch, and then checks the parameter values for the single optimised spin.
- Updated the manual_c_module.py C module compilation development script for the recent changes. The exponential_inv.c and exponential_sat.c files need to be compiled as well.
- Python 3 fix for the relax_fit_exp_3param_inv_neg.py system test script. The xrange() function does not exist in Python 3, so was replaced by range().
- Updated the memory_leak_test_relax_fit.py development script for the C module changes. This is only the docstring description which changed.
- Epydoc docstring fixes for the lib.io module - keyword arguments were not correctly identified. These were identified by Troels in the post at [10].
- Created the State.test_bug_23017_ieee_754_multidim_numpy_arrays system test. This is to catch bug #23017, the multidimensional numpy arrays are not being stored as IEEE 754 arrays in the XML state and results files. This test checks a rank-2 float64 numpy array stored in the current data pipe against what the IEEE 754 int list should be for it.
- Grammar fix for a warning from the pymol.frame_order user function.
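As a companion to the curve model entries above, the following is a minimal NumPy sketch of the three exponential curve models now selectable via relax_fit.select_model, together with the model-based aliasing idea used by the target function class. The function names, the chi2() helper and the parameter ordering are illustrative assumptions only and do not reproduce the relax C or Python code.

    from numpy import exp

    def exp_2param(t, R, I0):
        # Two-parameter exponential decay: I(t) = I0 * exp(-R*t).
        return I0 * exp(-R * t)

    def exp_inv(t, R, I0, Iinf):
        # Inversion recovery: I(t) = Iinf - I0 * exp(-R*t).
        return Iinf - I0 * exp(-R * t)

    def exp_sat(t, R, Iinf):
        # Saturation recovery: I(t) = Iinf * (1 - exp(-R*t)).
        return Iinf * (1.0 - exp(-R * t))

    # Illustrative model switching, mimicking the aliasing of func()/dfunc()/d2func()
    # described above: the chosen curve is simply looked up by the model name.
    MODELS = {'exp': exp_2param, 'inv': exp_inv, 'sat': exp_sat}

    def chi2(params, model, times, intensities, errors):
        # Chi-squared between the measured and back-calculated intensities
        # (all three data arguments are assumed to be NumPy arrays).
        back_calc = MODELS[model](times, *params)
        return (((intensities - back_calc) / errors) ** 2).sum()

For example, chi2([0.5, 1e15], 'sat', times, intensities, errors) would correspond to the [Rx, I∞] parameter set of the 'sat' model, with values matching the synthetic saturation-recovery data described above.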
Bugfixes
- Bug fix for the pymol.view user function for when no PDB file exists. The user function would fail with an AttributeError when the currently loaded data does not exist as a PDB file. This is now caught and the non-existent PDB is no longer displayed. A better solution might be to dump all the current structural data into a temporary file and load that, all within a try-finally statement to be sure to delete the temporary file. This solution may not be what the user is interested in anyway.
- Simple fix for bug #23017, the multidimensional numpy arrays are not being stored as IEEE 754 arrays in the XML state and results files. The problem was a relatively recent regression caused by a change to the is_float_matrix() function of the lib.arg_check module. It was simply that the default dims keyword argument value was changed from None to (3, 3). Therefore any call to the function without supplying the dims argument would fail if the matrix was not of the (3, 3) shape.
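To illustrate the default-argument regression behind the second bugfix above, here is a small, purely illustrative sketch of a matrix shape check; it is not the actual lib.arg_check.is_float_matrix() code, and the function and variable names are assumptions.

    from numpy import array, float64

    def is_float_matrix_sketch(matrix, dims=None):
        # Return True if 'matrix' is a rank-2 float64 array, optionally of shape 'dims'.
        m = array(matrix)
        if m.ndim != 2 or m.dtype != float64:
            return False
        # If the default here were (3, 3) instead of None, every caller not
        # supplying dims would silently reject matrices of any other shape.
        if dims is not None and m.shape != dims:
            return False
        return True

    matrix = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]]
    print(is_float_matrix_sketch(matrix))               # True with the None default.
    print(is_float_matrix_sketch(matrix, dims=(3, 3)))  # False, as in the regression.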
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.3.3
Description
This is a major feature and bugfix release. It fixes a failure when loading relaxation data and adds Python 3 support for using the NMRPipe showApod software. Features include a large expansion for the align_tensor.matrix_angles and align_tensor.svd user functions to support the standard inter-matrix angles, the unitary 9D vector notation {Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz}, and the irreducible spherical tensor 5D basis set of {A-2, A-1, A0, A1, A2} for correctly calculating the inter-tensor angles, singular values and condition numbers.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.3.3
(24 November 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.3.3
Features
- Implemented the lib.geometry.vectors.vector_angle_atan2() relax library function. This is for calculating the inter-vector angle using the more numerically stable atan2() formula.
- Implemented the lib.geometry.vectors.vector_angle_acos() relax library function. This is used to calculate the inter-vector angle using the arccos of the dot product formula. The function has been introduced into the relax library as the calculation is repeated throughout relax.
- Expanded the basis sets for the align_tensor.matrix_angles user function to allow the correct inter-tensor angles to be calculated. This includes the standard inter-matrix angles via the arccos of the Euclidean inner product of the alignment matrices in rank-2, 3D form divided by the Frobenius norms of the matrices, the irreducible spherical tensor 5D basis set {A-2, A-1, A0, A1, A2}, and the unitary 9D basis set {Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz} (all of which produce the same result). A minimal numerical sketch of the standard inter-matrix angle is given after this Features list.
- Expanded the basis sets for the align_tensor.svd user function to allow the correct singular values and condition number to be calculated. This includes the irreducible spherical tensor 5D basis set {A-2, A-1, A0, A1, A2} and the unitary 9D basis set {Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz} (both of which produce the same result).
- Added the angle_units and precision arguments to the align_tensor.matrix_angles user function to allow either degrees or radians to be output and the number of decimal points to be specified.
- Added the precision argument to the align_tensor.svd user function to allow the number of decimal points for the singular values and condition number to be specified.
- Updated the align_tensor.display user function to output the irreducible spherical harmonic weights. This is the alignment tensor in the {A-2, A-1, A0, A1, A2} notation.
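The standard inter-matrix angle mentioned in the align_tensor.matrix_angles feature above can be illustrated with the short NumPy sketch below. The matrix_angle() function and the two example matrices are illustrative assumptions, not the relax backend code.

    from numpy import arccos, array, tensordot
    from numpy.linalg import norm

    def matrix_angle(A, B):
        # Inter-matrix angle: arccos(<A, B> / (||A||_F * ||B||_F)).
        inner = tensordot(A, B)  # Euclidean (Frobenius) inner product, sum_ij A_ij*B_ij.
        return arccos(inner / (norm(A) * norm(B)))

    # Two traceless, symmetric 3x3 matrices standing in for alignment tensors.
    A = array([[ 1.0,  0.5,  0.0],
               [ 0.5, -0.5,  0.2],
               [ 0.0,  0.2, -0.5]])
    B = array([[-0.2,  0.1,  0.3],
               [ 0.1,  0.7,  0.0],
               [ 0.3,  0.0, -0.5]])
    print(matrix_angle(A, B))    # The inter-tensor angle in radians.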
Changes
- Basic Epydoc fix for the data_store.exp_info module.
- Epydoc fix for the name_pipe() method of the relaxation dispersion auto-analysis for repeated data.
- Fixes for the HTML user manual compilation. The index.html file was not being created as the main page has changed from 'relax_user_manual.html' to 'The_relax_user_manual.html'.
- Added a line to the release checklist document about updating the wiki release links. These are for the combined release notes pages at Relax releases, Relax release descriptions, Relax release metadata, Relax release features, Relax release changes, Relax release bugfixes, Relax release links.
- Updates for the release announcement section of the release checklist document.
- Created a system test to catch a rare relaxation data loading problem.
- Created the Mf.test_dauvergne_protocol_sphere system test. This catches bug #22963: Using '@N*' to define the interatomic interactions for a model-free analysis fails when using non-backbone 15N spins.
- Set more reasonable default values for the lib.structure.pdb_write functions atom() and hetatm(). The occupancy now defaults to 1.0 and the temperature factor to 0.0, rather than to a non-numeric value. This avoids painful errors when using these functions, as these arguments must be floating point numbers at all times, hence the previous non-numeric default caused a TypeError.
- Updated the PDB file in the test_suite/shared_data/model_free/sphere/ directory. The relax library is now being used to create the PDB file. Additional TER and CONECT records are now being created so the result is a more correct PDB file.
- Converted all ATOM records to HETATM in the sphere.pdb file.
- Renamed vector_angle() to vector_angle_normal() in the lib.geometry.vectors module. This is to standardise the naming, as the standard vector angle formulas are now implemented as the vector_angle_acos() and vector_angle_atan2() functions. A sketch of these two formulas is given at the end of this Changes list.
- Added 6 unit tests for the lib.geometry.vectors.vector_angle_acos() function. These are similar to those of the vector_angle_normal() function but unsigned angles are checked for.
- Created 6 unit tests for the lib.geometry.vectors.vector_angle_atan2() function.
- Created a script and log file to demonstrate differences between alignment tensor basis sets. This shows that the inter-tensor angles and condition numbers are dependent on the basis set used.
- Improved the printouts from the align_tensor.svd user function by including the basis set text.
- Updated the log file for comparing different alignment tensor basis sets for align_tensor.svd changes.
- Implemented a new default basis set for the align_tensor.matrix_angles user function. This uses the standard definition of the inter-matrix angle, based on the Euclidean inner product of the two matrices divided by the product of the Frobenius norm of each matrix. As this is a linear map, it should produce the correct definition of inter-tensor angles.
- Improvements to the description of the align_tensor.matrix_angles user function.
- Updated the test_matrix_angles_identity() unit test for pipe_control.align_tensor.matrix_angles(). This is the test in the _prompt.test_align_tensor.Test_align_tensor module. The basis set has been set back to the now non-default value of 0, and the value checks have been converted from assertEqual() to assertAlmostEqual() to allow for small truncation errors.
- Conversion of the basis_set argument for the align_tensor.matrix_angles user function. The argument is now a string that accepts the values of 'matrix', 'unitary 5D', and 'geometric 5D' to select between the different matrix angles techniques. This has been updated in the test suite as well.
- Added a check for the values of the basis_set argument. This is to the align_tensor.matrix_angles user function backend.
- Printout improvements clarifying the align_tensor.matrix_angles user function.
- Conversion of the basis_set argument for the align_tensor.svd user function. The argument is now a string that accepts the values of 'unitary 9D', 'unitary 5D', and 'geometric 5D' to select between the different SVD matrices. This has been updated in the test suite as well.
- Expanded the N_state_model.test_5_state_xz system test. This now covers the new 'unitary 9D' basis set for the align_tensor.svd user function and the new 'matrix' basis set for the align_tensor.matrix_angles user function.
- Expansion of the align_tensor.matrix_angles user function. The new basis set 'unitary 9D' has been introduced. This creates vectors as {Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz} and computes the inter-vector angles. These match the 'matrix' basis set whereby the Euclidean inner product divided by the Frobenius norms is used to calculate the inter-tensor angles. In addition, the user function documentation and printouts have been improved. And the backend code has been simplified.
- Updated the script and log file for demonstrating differences between alignment tensor basis sets. This now handles the changes to the basis_set arguments used in the align_tensor.matrix_angles and align_tensor.svd user functions, and includes the new basis sets.
- Added the irreducible tensor notation of {A-2, A-1, A0, A1, A2} to the alignment tensor object. This follows from the definition of Sass et al, J. Am. Chem. Soc. 1999, 121, 2047-2055, DOI: 10.1021/ja983887w. The equations of (2) were converted using Gaussian elimination to obtain a reduced row echelon form, so that the equations in terms of {A-2, A-1, A0, A1, A2} were derived. These have been coded into the alignment tensor object calc_Am2, calc_Am1, calc_A0, calc_A1 and calc_A2 methods respectively, and the values can be obtained by accessing the Am2, Am1, A0, A1, and A2 objects. To check that the implementation is correct, a unit test has been created to compare the calculated values with those determined using Pales.
- Expanded the unit test of the alignment tensor {A-2, A-1, A0, A1, A2} parameters to cover all values.
- Created functions in the relax library for calculating the inter-vector angle for complex vectors. This is in the lib.geometry.vectors module. The vector_angle_complex_conjugate() function has been created to calculate the angle between two complex vectors. This uses the new auxiliary function complex_inner_product() to calculate <v1|v2>.
- Added the 'irreducible 5D' basis set option to the align_tensor.matrix_angles user function. This is for the inter-tensor vector angle for the irreducible 5D basis set {S-2, S-1, S0, S1, S2}. Its results match that of the standard tensor angle as well as the 'unitary 9D' basis sets.
- Added the 'irreducible 5D' basis set option to the align_tensor.svd user function. This is for the inter-tensor vector angle for the irreducible 5D basis set {A-2, A-1, A0, A1, A2}. Its results match that of the 'unitary 9D' basis set.
- Editing of the description for the 'irreducible 5D' alignment tensor basis set. This is for the align_tensor.matrix_angles and align_tensor.svd user functions. All Sm elements have been converted to Am.
- Editing of the description for the align_tensor.matrix_angles user function.
- Editing of the align_tensor.svd user function description.
- Updated the script and log file for demonstrating differences between alignment tensor basis sets. The 'irreducible 5D' basis set is now used for both the align_tensor.matrix_angles and align_tensor.svd user functions.
- Fix for a spelling mistake in the align_tensor.matrix_angles user function printouts.
- Small fix for the align_tensor.matrix_angles user function documentation.
- Expanded the N_state_model.test_5_state_xz system test for more alignment tensor basis sets. The align_tensor.matrix_angles and align_tensor.svd user functions are now being called with the additional 'irreducible 5D', and 'unitary 9D' basis sets, to make sure these work correctly.
- Created the Align_tensor.test_align_tensor_matrix_angles system test. This is to check the angles calculated by the align_tensor.matrix_angles user function. As there are no external references, this essentially fixes the angles to the currently calculated values to catch any accidental changes in the future.
- Created the Align_tensor.test_align_tensor_svd system test. This is to check the singular values and condition numbers calculated by the align_tensor.svd user function. As there are no external references, this essentially fixes the singular values and condition numbers to the currently calculated values to catch any accidental changes in the future.
- Fixes for the proportions of the align_tensor.matrix_angles user function GUI wizard.
- Expanded the 'irreducible 5D' text in the align_tensor.matrix_angles and align_tensor.svd user functions. This now explains that these are the coefficients for the spherical harmonic decomposition.
- Improved the text for the irreducible tensor notation in the align_tensor.display user function.
- Formatting fix for the magnetic susceptibility tensor part of the align_tensor.display user function.
- More improvements for the align_tensor.matrix_angles user function description.
- Epydoc docstring fixes and expansion for the lib.io.sort_filenames() function.
- Epydoc docstring fixes for the lib.spectrum.nmrpipe module. This is for the API documentation. The show_apod_rmsd_to_file() and show_apod_rmsd_dir_to_files() function docstrings have both been modified.
- Epydoc docstring fixes for the pipe_control.opendx.map() function. The fixes include whitespace and textwrapping changes.
- Python 2.5 fix for the align_tensor.display user function. The new irreducible spherical tensor coefficient printout was failing as the float.real variable was introduced from Python 2.6 onwards.
- Shifted the structure checks into their own module. This shifts the special check_structure Check object from pipe_control.structure.main into the new checks module. It allows the check to be performed by other modules in the pipe_control.structure package.
- Added the missing_error keyword argument to the pipe_centre_of_mass() function. This is from the pipe_control.structure.mass module. The new keyword controls what happens with the absence of structural data. The pipe_control.structure.checks.check_structure() function is now being used to either throw a warning and return [0, 0, 0] or to raise a RelaxError.
- Fix for the new unit tests - Python 2.5 floats do not have a 'real' property.
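The two vector angle formulas referred to in the entries above can be summarised as follows. The function names mirror those of the lib.geometry.vectors module, but the code is only a sketch of the standard arccos and atan2 forms, not the relax implementations.

    from numpy import arccos, arctan2, array, cross, dot
    from numpy.linalg import norm

    def vector_angle_acos(u, v):
        # Unsigned inter-vector angle via the arccos of the normalised dot product.
        return arccos(dot(u, v) / (norm(u) * norm(v)))

    def vector_angle_atan2(u, v):
        # The same angle via atan2(|u x v|, u.v), which is numerically more stable
        # for nearly parallel or nearly anti-parallel vectors.
        return arctan2(norm(cross(u, v)), dot(u, v))

    u = array([1.0, 0.0, 0.0])
    v = array([1.0, 1e-8, 0.0])
    # The arccos form loses the tiny angle to rounding, the atan2 form does not.
    print(vector_angle_acos(u, v), vector_angle_atan2(u, v))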
Bugfixes
- Fix for bug #22961, the failure of relaxation data loading with the message "IndexError: list index out of range". The bug was found by Julien Orts. It is triggered by loading relaxation data from a file containing spin name information and supplying the spin ID using the spin name to restrict data loading to a spin subset. To solve the problem, the pipe_control.relax_data.pack_data() function has been redesigned. Now the selection union concept of Chris MacRaild's selection object is being used by joining the spin ID constructed from the data file and the user supplied spin ID with '&', and using this to isolate the correct spin system.
- Big Python 3 bug fix for the dep_check module for the detection of the NMRPipe showApod software. The showApod program was falsely detected as always not being present when using Python 3. This is because the output of the program was being tested using string comparisons. However in Python 3 the output of programs obtained via the subprocess module is no longer a string but rather a byte array. Therefore the byte array is now being converted to text if Python 3 is being used, allowing the showApod software to be detected. A minimal sketch of this conversion is given after this list of bugfixes.
- Python 3 bug fix for the lib.spectrum.nmrpipe.show_apod_extract() function. The subprocess module output from the showApod program, or any software, is a byte array in Python 3 rather than text. This is now detected and the byte array converted to text before any processing.
- Bug fix for the lib.structure.angles.angles_*() functions for odd increments. This affects the PDB representations of the diffusion tensor and frame order when the number of increments in the respective user functions is set to an odd number. It really only affects the frame_order.pdb_model user function, as the number of increments cannot be set in any of the other user functions (structure.create_diff_tensor_pdb, structure.create_rotor_pdb, structure.create_vector_dist, n_state_model.cone_pdb).
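Both Python 3 showApod bugfixes above come down to the subprocess module returning byte arrays rather than strings under Python 3. A minimal sketch of the kind of conversion described, with illustrative names and not the actual dep_check or lib.spectrum.nmrpipe code, is:

    from subprocess import PIPE, Popen

    def run_and_capture(cmd):
        # Run a command and always return its output as text, on both Python 2 and 3.
        pipe = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
        output = pipe.stdout.read()
        # Under Python 3 the output is a byte array, so convert it to text first.
        if not isinstance(output, str):
            output = output.decode()
        return output

    # String comparisons, as used for the showApod detection, now work on both versions.
    print('hello' in run_and_capture('echo hello'))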
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.3.2
Description
This is a minor feature and bugfix release. It includes improvements to the readability of the HTML version of the manual, improved printouts throughout the program, numerous GUI enhancements, and far greater Python 3 support. Please see below for a full listing of all the new features and bugfixes.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.3.2
(13 November 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.3.2
Features
- Many improvements for the HTML version of the manual.
- Improved sectioning printouts in the model-free dauvergne_protocol auto-analysis.
- Significant improvements for the relax controller window.
- All wizards and user functions in the relax GUI now have focus so that the keyboard is active without requiring a mouse click.
- The ESC key will now close the relax controller window and all user function windows; a minimal sketch of the accelerator table pattern used for this is given after this Features list.
- The structure.load_spins user function can now load spins from multiple non-identical molecules and merge them into one molecule, allowing missing atoms and differential atom numbering to be handled.
- Improvements to the printouts for many user functions.
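The ESC key feature above is implemented via a wxPython accelerator table attached to the whole window, as described in more detail in the Changes list below. A minimal self-contained sketch of the pattern, with illustrative class and method names, is:

    import wx

    class Wizard(wx.Dialog):
        # A dialog that closes when the ESC key is pressed anywhere in the window.

        def __init__(self, parent=None):
            super(Wizard, self).__init__(parent, title="Wizard sketch")

            # Catch the ESC key for the entire window via an accelerator table.
            esc_id = wx.NewId()
            self.Bind(wx.EVT_MENU, self._handler_escape, id=esc_id)
            table = wx.AcceleratorTable([(wx.ACCEL_NORMAL, wx.WXK_ESCAPE, esc_id)])
            self.SetAcceleratorTable(table)

        def _handler_escape(self, event):
            # Close the window, just as the close button would.
            self.Close()

    if __name__ == '__main__':
        app = wx.App()
        Wizard().ShowModal()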
Changes
- Updated the minfx version in the release checklist document to version 1.0.11.
- Updated the relax version in the release checklist document to be more modern.
- Spelling fixes for the CHANGES file.
- Updates for the release checklist document. This is mainly because the main release notes are now on the relax wiki, for example for the current version at http://wiki.nmr-relax.com/Relax_3.3.1.
- Spelling fixed throughout the CHANGES document.
- Removed a few triple spaces in the CHANGES document.
- Added periods to the end of all items in the CHANGES document.
- Fix for an 'N/A' in the CHANGES document.
- Converted a number of single spaces between sentences to double spaces in the CHANGES document.
- More updates for the announcement section of the release checklist document.
- The HTML version of the manual is now compiled with Unicode character support. It allows Greek symbols, for example, to be represented as text rather than LaTeX generated PNG images. This fixes titles and massively decreases the number of images required by the HTML pages.
- Removal of many dual LaTeX and latex2html section titles in the manual. As the HTML manual is now compiled with Unicode support, the Greek characters in the titles are now supported. Therefore in the model-free and the values, gradients, and Hessians chapters, the dual LaTeX and latex2html section titles could be collapsed to the standard LaTeX section title. This will result in better formatting of the manual and its links.
- Added instructions and a build script for creating a useful version of latex2html. This version is essential for building the HTML version of the manual. The build script downloads the Debian latex2html-2008 sources as well as all Debian patches for latex2html. It then applies a number of patches for fixing and improving the relax documentation. The program is then compiled and can be installed as the root user into /usr/local/.
- Extended the number of words used in the HTML webpage file names. This is to hopefully prevent files from being overwritten by multiple files having the same name.
- Added the writing out of parameters and χ2 values when creating a dx map. Task #7860: When dx_map is issued, create a parameter file which maps parameters to χ2 value.
- Created the system test Relax_disp.test_dx_map_clustered_create_par_file, which shows that relax is not able to find the global minimum under clustered conditions. When creating the map, the map contains χ2 values which are lower than the clustered fitted values; this should not be the case. Running a larger map with wider bounds and more increments should show that there exists a minimum in the minimisation space with a lower χ2 value. Bug #22754: The minimise.calculate() does not calculate χ2 value for clustered residues. Task #7860: When dx_map is issued, create a parameter file which maps parameters to χ2 value.
- Renamed test scripts and files for producing surface χ2 plots.
- Renamed sample scripts making surface maps.
- Added scripts to make surface plots of the spin-independent parameters δω and Ra2.
- Added example surface χ2 values for plots. Task #7826: Write an python class for the repeated analysis of dispersion data.
- Added example save state for more surface plotting.
- Added a boolean argument to the dx.map user function to specify the creation of a file of parameters and associated χ2 values. For some very special situations, the creation of this file is not desired.
- Modified the structure of points in dx.map so that it is always a list of numpy arrays with 3 values.
- When the dx.map user function is issued with points, the writing out of a parameter file with the associated calculated χ2 values has been implemented.
- Improved the feedback in the User_functions.test_structure_add_atom GUI test. It is now clearer what the input and output data is.
- The devel_scripts/python_multiversion_test_suite.py script now runs relax with the --time flag. This is for quicker identification of failure points. It will also force the sys.stdout buffer to be flushed more often on Python 2.5 so that it does not appear as if the tests have frozen.
- Added a check to the system test Relax_disp.test_cpmg_synthetic_dx_map_points for the creation of a matplotlib surface command plot file.
- Added the writing out of a matplotlib command file to plot surfaces of a dx map. The minimum χ2 value in the map space is used to define the surfaces. X,Y; X,Z and Y,Z maps are created, where the values in the missing dimension have been cut at the minimum χ2 value. For each map, a projected 3D surface of the parameters and the χ2 value is created, together with a heat map of the contours. The minimum χ2 value, the 4 smallest χ2 values, and any points in the point file are also added as scatter points. The mapping of the points from the file to map points is performed by finding the shortest Euclidean distance in the space from the points to any map point.
- Fix for testing the raising of expected errors in system tests. This check is skipped if the Python version is below 2.7. Bug #22801: Failure of the relax test suite on Python 2.5.
- Inserted a z-axis limit for the plotting of 2D surfaces in matplotlib.
- Added better figure control of the χ2 values on the z-axis for the surface plots.
- Narrowed the dx map in the system test Relax_disp.test_dx_map_clustered_create_par_file. This is to illustrate the failure of relax to find the global minimum. It seems there is a shallow barrier which relax failed to climb over in order to find the minimum value.
- Added the verbosity argument to the pipe_control.minimise.reset_min_stats() function. All of the minimisation code which calls this now send in their verbosity arguments. This allows the text "Resetting the minimisation statistics." to be suppressed.
- Added the verbosity argument to the pipe_control.value.set() function. This is passed into the pipe_control.minimise.reset_min_stats() function so its printouts can be silenced.
- The pipe_control.opendx space mapping code now calls the value.set() function with verbosity=0. This is to silence the very repetitive statistics resetting messages when executing the dx.map user function.
- Added more checks to the determine_rnd() of the dauvergne_protocol model-free auto-analysis. This is to try to catch bizarre situations such as bug #22730, model-free auto-analysis - relax stops and quits at the polate step. The following additional fatal conditions are now checked for: A file with the same name as the base model directory already exists; The base model directory is not readable; The base model directory is not writable. The last two could be caused by file system corruptions. In addition, the presence of the base model directory is checked for using os.path.isdir() rather than catching errors coming out of the os.listdir() function. These changes should make the analysis more robust in the presence of 'strangeness'.
- Added an additional check to determine_rnd() of the dauvergne_protocol model-free auto-analysis. This is to try to catch bizarre situations such as bug #22730, model-free auto-analysis - relax stops and quits at the polate step. The additional check is that if the base model directory is not executable, a RelaxError is raised.
- Added printouts to the determine_rnd() function of the dauvergne_protocol model-free auto-analysis. This is for better user feedback in the log files as to what is happening. It may help in debugging bug #22730: Model-free auto-analysis - relax stops and quits at the polate step.
- Alphabetical ordering of imports in the dauvergne_protocol model-free auto-analysis.
- Changed the model-free single spin optimisation title printouts. The specific_analyses.model_free.optimisation.spin_print() function has been deleted. It has instead been replaced by a call to lib.text.sectioning.subtitle(). This is to match the grid search setup title printouts and to differentiate these titles from those printed out by minfx being underlined by '~' characters.
- Added extensive sectioning printouts to the dauvergne_protocol model-free auto-analysis. The lib.text.sectioning functions title() and subtitle() are now used to mark out all parts of the auto-analysis. This will allow for a much better understanding of the log files produced by this auto-analysis.
- Complete redesign of the following of text in the relax controller window in the GUI. The current design for some reason no longer worked very often, and there would be many situations where the scrolling to follow the text output would stop and could never be recovered. Therefore this feature has been redesigned. In the LogCtrl element of the relax controller, which displays the relax output messages, the at_end class boolean variable has been introduced. It defaults to True. The following events will turn it off: Arrow keys, Home key, End key, Ctrl-Home key, Mouse button clicks, Mouse wheel scrolling, Window thumbtrack scrolling (the side scrollbar), finding text, the pop up menu 'Go to start', and Select all (menu or Ctrl-A). It will only be turned on in two cases: The pop up menu 'Go to end', and if the caret is on the final line (caused by Ctrl-End, Mouse wheel scrolling, Page Down, Down arrow, Window thumbtrack scrolling, etc.). Three new methods have been introduced to handle certain events: capture_mouse() for mouse button clicks, capture_mouse_wheel() for mouse wheel scrolling, and capture_scroll for window thumbtrack scrolling.
- Improvements for selecting all text in the relax controller window. Selecting text using the pop up menu or [Ctrl-A] now shifts the caret to line 1 before selecting all text. This deactivates the following of the end of text, if active, as the text following feature would otherwise cause the text selection to be lost.
- Modified the behaviour of the relax controller window so that pressing escape closes the window. This involves setting the initial focus on the LogCtrl, and catching the ESC key press in the LogCtrl as well as all relax controller read only wx.Field elements and calling the parent controller handle_close() method.
- Replaced the hardcoded integer keycodes in the relax controller with the wx variables. This is for the LogCtrl.capture_keys() handler method for dealing with key presses.
- Improvement for all wizards and user functions in the relax GUI. The focus is now set on the currently displayed page of the wizard. This allows the keyboard to be active without requiring a mouse click. Now text can be instantly input into the first text control and the tab key can jump between elements. As the GUI user functions are wizards with a single page, this is a significant usability improvement for the GUI.
- The ESC character now closes all wizards and user functions in the relax GUI. By using an accelerator table set to the entire wizard window to catch the ESC keyboard event, the ESC key will cause the _handler_escape() method to be called which then calls the windows Close() method to close the window.
- Changed the logic for how the new analysis wizard in the GUI is destroyed. This relates to bug #22818, the GUI test suite failures in MS Windows - PyAssertionError: C++ assertion "Assert failure". The Destroy() method has been added to the Analysis_wizard class to properly close all elements of the wizard. This is now called from the menu_new() method of the Analysis_controller class, which is the target of the menu item and toolbar button. To allow the test suite to use this, the menu_new() method now accepts the destroy boolean argument. The test suite can set this to False and then access the GUI elements after calling the method (however the Destroy() method must be called by the test suite).
- Redesign of how the new analysis wizard is handled in the GUI tests. This relates to bug #22818, the GUI test suite failures in MS Windows - PyAssertionError: C++ assertion "Assert failure". The GUI test base class method new_analysis_wizard() has been created to simplify the process. When a new analysis is desired, this method should be called. It will return the analysis page GUI element for use in the test. The method standardises the execution of the new analysis wizard and sets up the analysis in the GUI. It also properly destroys the wizard to avoid memory leaking issues such as bug #22818. All GUI tests have been converted to use new_analysis_wizard(). This allows the GUI tests to pass on MS Windows. However there are still significant sources of memory leaks (the USER Objects count) visible in the Windows Task Manager.
- Fix for the gui.fonts module to allow it to be used outside of the GUI.
- Updated all of the scripts in devel_scripts/gui/. These have been non-functional since the merger of the relax bieri_gui branch back in January 2011.
- The gui.misc.bitmap_setup() function can now be used outside of the GUI.
- Fix for the GUI test base class new_analysis_wizard() method for relaxation dispersion analyses.
- Modified the pipe_control.pipes.get_bundle() function to operate when no pipe is supplied. In this case, the pipe bundle that the current data pipe belongs to will be returned.
- Created the Periodic_table.has_element() method for the lib.periodic_table module. This is used to simply check if a given symbol exists as an atom in the periodic table.
- Added 4 unit tests to the _lib.test_periodic_table module for the Periodic_table.has_element() method.
- Modified the internal structural object backend for the structure.read_pdb user function. The MolContainer._det_pdb_element() method for handling PDB files with missing element information has been updated to use the Periodic_table.has_element() method to check if the PDB atom name corresponds to any atoms in the periodic table. This allows for far greater support for HETATOMS and all of the metals.
- Created the Structure.test_load_spins_multi_mol system test. This is to test yet-to-be-implemented functionality of the structure.load_spins user function: the loading of spin information from similar, but not necessarily identical, molecules all loaded into the same structural model. For this, the from_mols argument will be added.
- Fixes for the Structure.test_load_spins_multi_mol system test. The call to the structure.load_spins user function has also been modified so that all 3 spins are loaded at the same time.
- Implemented the multiple molecule merging functionality of the structure.load_spins user function. The from_mols argument has been added to the user function frontend together with a description of this new functionality. In the backend, the pipe_control.structure.main.load_spins() function will now call the load_spins_multi_mol() function if from_mols is supplied. This alternative function is required to handle missing atoms and differential atom numbering.
- Modified the N_state_model.test_populations system test to test the grid search code paths. This performs a grid search of one increment after minimisation, then switches to the 'fixed' N-state model and performs a second grid search of one increment. This now tests currently untested code paths in the grid_search() API method behind the minimise.grid_search user function. The test demonstrates a bug in the N-state model which was not uncovered in the test suite.
- Created the N_state_model.test_CaM_IQ_tensor_fit system test. This is for catching bug #22849, the failure of the N-state model analysis when optimising only alignment tensors using RDCs and/or PCSs. This new test checks code paths unchecked in the rest of the test suite, and is therefore of high value.
- Modified the atomic position handling in pipe_control.structure.main.load_spins_multi_mol(). The multiple molecule merging functionality of the structure.load_spins user function now handles missing atomic positions differently. The aim is that the length of the spin container position variable is fixed for all spins to the number of structures, as the N-state model analysis assumes this equal length for all spins. When data is missing, the atomic position for that structure is now set to None. This will require other modifications in relax to support this new design.
- Modified the interatom.unit_vectors user function backend to handle missing atomic positions. This is to match the structure.load_spins user function change whereby missing atomic positions are now set to the value of None.
- Fix for the atomic position handling in pipe_control.structure.main.load_spins_multi_mol(). The dimensionality of the position structure returned by the structural object atom_loop() method needed to be reduced.
- The structure.load_spins user function now stores the number of states in cdp.N. This is to help the specific analyses which handle ensembles of structures. With the introduction of the from_mols argument to the structure.load_spins user function, the number of states is now not equal to the number of structural models, as the states can now come from different structures of the same model. Therefore the user function will now explicitly set cdp.N to the number of states depending on how the spins were loaded.
- Clean up and speed up of the N_state_model.test_CaM_IQ_tensor_fit system test. All output files are now set to 'devnull' so that the system test no longer creates any files within the relax source directories. And the optimisation settings have been decreased to hugely speed up the system test.
- Expanded the lib.arg_check.is_float_matrix() function by adding the none_elements argument. This matches a number of the other module functions, and allows for entire rows of the matrix to be None.
- Lists of lists containing rows of None are now better supported by the lib.xml functions. The object_to_xml() function will now convert the float parts to IEEE-754 byte arrays, and the None parts will be stored as None in the <ieee_754_byte_array> list node. The matching xml_to_object() function has also been modified to read in this new node format. This affects the results.write and state.save user functions (as well as the results.read and state.load user functions).
- Added spacing after the minimise.grid_search user function setup printouts. This is for better spacing for the next messages from the specific analysis.
- Speed up of the N_state_model.test_CaM_IQ_tensor_fit system test. This test is however still far too slow.
- Added printouts to pipe_control.pcs.return_pcs_data() and pipe_control.rdc.return_rdc_data(). These functions now accept the verbosity argument which, if greater than 0, will activate printouts of how many RDCs or PCSs have been assembled for each alignment. This will be useful for user feedback as the spin versus interatomic data container selections can be difficult to understand.
- The verbosity argument for the N-state model optimisation is now propagated for more printouts. The argument for the calculate() and minimise() API methods is now sent into specific_analyses.n_state_model.optimisation.target_fn_setup(), and from there into the pipe_control.pcs.return_pcs_data() and pipe_control.rdc.return_rdc_data() functions. That way the number of RDCs and PCSs used in the N-state model is reported back to the user for better feedback.
- Updated the N_state_model.test_CaM_IQ_tensor_fit system test so it operates correctly as a GUI test. All user functions are now executed through the special self._execute_uf() method to allow either the prompt interpreter or the GUI to execute the user function.
- Modified the N_state_model.test_CaM_IQ_tensor_fit system/GUI test for implementing a new feature. The 'spin_selection' argument has been added to the interatom.define user function. This will be used to carry the spin selections over into the interatomic data containers.
- Implemented the spin_selection Boolean argument for the interatom.define user function. This has been added to the frontend with a description, and to the backend. When set, it allows the spin selections to define the interatomic data container selection.
- Changed the spin_selection argument default in the interatom.define user function backend. This now defaults to False to allow other parts of relax which call this function to operate as previously. The default for the interatom.define user function is however still True.
- Modified the Structure.test_load_spins_multi_mol system test for the spin.pos variable changes. The atomic position for an ensemble of structures is now set to None rather than being missing, so the system test has been updated to check for this.
- The align_tensor.display user function now has more consistent section formatting. The section() and subsection() functions of the lib.text.sectioning module are now being used to standardise these custom printouts with the rest of relax.
- Modifications to the new N_state_model.test_CaM_IQ_tensor_fit system test. The system test now checks all of the optimised values to make sure the correct values have been found. That will block any future regressions in this N-state model code path. The system test is now also faster. And the pcs.structural_noise user function RMSD value has been set to 0.0 so that the test no longer has a random component affecting the final optimised values.
- Added printouts for the rdc.calc_q_factors and pcs.calc_q_factors user functions. These are activated by the new verbosity user function argument which defaults to 1. If the value is greater than 0, then the backend will print out all the calculated Q factors.
- The verbosity argument of the RDC and PCS q_factors() functions now defaults to 1. This causes the Q factors to be printed out at the end of all N-state model optimisations.
- Created the Structure.test_bug_22860_CoM_after_deletion system test. This is to catch bug #22860, the failure of the structure.com user function after calling structure.delete.
- Fix for the checks in the new Structure.test_load_spins_multi_mol system test. A spin index was incorrect.
- Fix for the structure.load_spins user function when the from_mols argument is used. The load_spins_multi_mol() function of the pipe_control.structure.main module was incorrectly handling the atomic position returned by the internal structural object atom_loop() method. This position is a list of lists when multiple models are present. But when only a single model is present, it returns a simple list.
- Modified the Structure.test_bug_22860_CoM_after_deletion system test to expect a RelaxNoPdbError. This tests that the structure.com user function raises RelaxNoPdbError after deleting all of the structural information from the current data pipe.
- The mol_name argument is now exposed in the structure.add_atom user function. This has been added as the first argument of the user function to allow new molecules to be created or to allow the atom to be placed into a specific molecule container. The functionality was already implemented in the backend, so it has been exposed by simply adding a new argument definition to the user function.
- Created the Structure.test_bug_22861_PDB_writing_chainID_fail system test. This is to catch bug #22861, the chain IDs in the structure.write_pdb user function PDB files are incorrect after calling structure.delete.
- Small modification of the Structure.test_bug_22861_PDB_writing_chainID_fail system test. File metadata is now being set to demonstrate that the structure.delete user function does not remove this once there is no more data left for the molecule.
- Small indexing fixes for the dispersion chapter of the relax manual.
- Fix for system test Relax_disp.test_cpmg_synthetic_dx_map_points. Another import line was written to the matplotlib script.
- Speedup and fix for the system test Relax_disp.test_dx_map_clustered_create_par_file. One of the checks was taken out, since this is a particularly interesting case: there exists a double minimum where relax has not found the global minimum. This is due to not grid searching for Ra2, but using the minimum value instead.
- Removed debugging code from the N_state_model.test_CaM_IQ_tensor_fit system test. This was an accidentally introduced state.save user function used to catch the system test state. It would result in the 'x.bz2' file being dumped in the current directory.
- Loosened the checks in the Relax_disp.test_baldwin_synthetic_full system test. This is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system.
- Fix for the Relax_disp.test_cpmg_synthetic_dx_map_points system test for certain systems. This change is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system. This may be related to 32-bit numpy 1.6.2 versus later numpy versions causing precision differences.
- Fixes for the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test for certain systems. The optimisation precision has been increased, and the value checking precision has been decreased. This change is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system. This may be related to 32-bit numpy 1.6.2 versus later numpy versions causing precision differences.
- Converted all the extern.numdifftools modules using the dos2unix program.
- Updated the Python 2 to Python 3 migration document to be more current.
- Small edit of the docs/devel/2to3_checklist document.
- Expanded the Python 2 to 3 conversion document to list the 2to3 command individually.
- The ImportErrors in unit tests are now correctly handled by the relax test suite. If an ImportError occurred, this was previously killing the entire test suite.
- The target_function.relax_fit module unit tests are now skipped if the C module is not compiled.
- Expanded the Python 2 to 3 conversion document.
- Small update to the 2to3_checklist document - the print statement conversion has been added.
- The lib.errors module is now importing lib.compat.pickle for better Python 2 and 3 support. This shifts the compatibility code from lib.errors into lib.compat so that the 2to3 program will not touch the lib.errors module.
- Better Python 3 compatibility in some test suite shared data profiling scripts. These changes invert the logic, importing the Python 3 builtins module and aliasing xrange() to range(), and passing if an ImportError occurs. The code will now no longer be modified by the 2to3 program.
- Unicode fixes for the "\u" string in "\usepackage" in the module docstring. This requires escaping as "\\usepackage" to avoid the unicode character '\u'.
- The lib.check_types module now imports io.IOBase from the lib.compat module. This is to shift more Python 2 vs. 3 compatibility into lib.compat and out of all other modules.
- Python 3 improvements - changed how the Python 3 absent builtins.unicode() function is handled. The aliased builtins.str() function is now referenced as lib.compat.unicode(). The Python 2 __builtin__.unicode() function is also aliased to lib.compat.unicode(). The GUI using this function now import it from lib.compat.
- Removed the writable base directory check in the dauvergne_protocol auto-analysis. This check was causing the system test to fail if the user does not have write access to the installed relax directory.
- Expanded the Mac_framework_build_3way document to include matplotlib.
- Important bug fix for a race condition causing the GUI to freeze. This is really only seen in the GUI tests on MS Windows systems, as a user could never be fast enough with the mouse. The GUI interpreter flush() method for ensuring that all user functions in the queue have been cleared now calls wx.Yield() to force all wxPython events to also be flushed. This change will avoid random freezing of the relax test suite.
- Bug fix for the Mf.test_bug_21615_incomplete_setup_failure GUI test on MS Windows systems. The GUI interpreter flush() method needs to be called between the two structure.load_spins user function calls. Without this, the test will freeze on MS Windows. The freezing behaviour is however not 100% reproducible and is dependent on the Windows version and wxPython version.
- Shifted a number of wx.NewId() calls to the module namespace to conserve IDs. These are for the menus in the main window and in the spin view window.
- Shifted the wx.NewId() calls for the spectrum list GUI element to the module namespace. These IDs are used for the pop up menus. The change avoids repetitive calls to wx.NewId() every time a right click occurs, conserving wx IDs so that they are not exhausted when running the test suite or running the GUI for a long time.
- More shifting of wx.NewId() calls for popup menus to module namespaces to conserve IDs.
- Converted all of the GUI wizard button IDs to -1, as they are currently unused. This should conserve wx IDs, especially in the test suite.
- Shifted the main GUI window toolbar button wx IDs to the module namespace. This has no effect apart from better organising the code.
- Shifted the relax controller window popup menu wx IDs to the module namespace. This is simply to better organise the code to match the other GUI module changes.
- Menus created by the gui.components.menu.build_menu_item() now default to the wx ID of -1. This is to conserve wx IDs. If the calling code does not provide the ID, there is no need to grab one from the small pool of IDs.
- Shifted the spin viewer GUI window toolbar button wx IDs to the module namespace. This should conserve wx IDs as the window is created and destroyed, as only 2 IDs will be taken from the small pool for the entire lifetime of the program.
- Shifted all of the wx.NewId() calls for the new analysis wizard into the module namespace. This will hugely reduce the number of wx IDs used by the GUI, especially in the test suite. Instead of grabbing 8 IDs from the small pool every time the new analysis wizard is created, only 8 IDs for the lifetime of the program will be used (a short sketch of this ID-conserving pattern is given after this list).
- Another large wx ID saving change. The ID associated with the special accelerator table that allows the ESC button to close relax wizards is now initialised once in the module namespace, and not each time a wizard is created.
- A small wx ID conserving change - the 'Execute' button in the analysis tabs now uses the ID of -1. A unique ID is not necessary and is unused.
- The user function class menus no longer have unique wx IDs, as these are unnecessary. This conserves the small pool of unique wx IDs, as the spin viewer window is created and destroyed.
- Bug fix for the structure.load_spins user function new from_mols argument. This was incorrectly using the pipe_control.pipes.pipe_names() function to obtain its default values in the GUI (although this is not currently used). The result was a non-fatal error message on Mac OS X systems of "Python[1065:1d03] *** __NSAutoreleaseNoPool(): Object 0x3a3944c of class NSCFString autoreleased with no pool in place - just leaking".
- Added a debugging Python version check to the devel_scripts/memory_leak_test_relax_fit.py script. This prevents the script from being executed with a normal Python binary.
- Created the blacklisted Noe.test_noe_analysis_memory_leaks GUI test. This long test can be manually run to help chase down memory leaks. This can be monitored using the MS Windows task manager, once the 'USER Objects' column is shown. If the USER Objects count reaches 10,000 in Windows, then no more GUI elements can be created and the user will see errors.
- Added a printout to the Noe.test_noe_analysis_memory_leaks GUI test to help with debugging.
- Improved debugging printouts for the Noe.test_noe_analysis_memory_leaks GUI test.
- Small fix for the GUI analysis deletion method to prevent racing in the GUI tests.
- Redesigned how wizards are destroyed in the GUI. The relax wizard Destroy() method is now overridden. This allows the buttons in the wizard to be properly destroyed, as well as all wizard pages. This should remove a lot of GUI memory leaks.
- Created the General.test_new_analysis_wizard_memory_leak blacklisted GUI test. This will be used to check for memory leaks in the new analysis wizard.
- Removed an unused dictionary from the GUI wizard object.
- Added a wx.Yield() before destroying the new analysis wizard via menu_new(). This is to avoid racing which can be triggered in the test suite.
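Several of the wx ID changes above follow a single pattern: allocate the ID once at import time in the module namespace rather than every time a menu, toolbar, or wizard is built, and fall back to wx.ID_ANY (-1) wherever a unique ID is not actually needed. The following stand-alone sketch only illustrates that pattern; the frame, menu entries, and handler names are hypothetical and are not taken from the relax GUI code.

```
# Illustrative sketch of the wx ID conservation pattern (not relax code).
import wx

# Allocated once, at import time, for the lifetime of the program.
MENU_ID_ADD = wx.NewId()
MENU_ID_DELETE = wx.NewId()


class ExampleFrame(wx.Frame):
    """Hypothetical frame with a right-click popup menu."""

    def __init__(self):
        super(ExampleFrame, self).__init__(None, title="wx ID sketch")
        self.Bind(wx.EVT_RIGHT_DOWN, self.on_right_click)

    def on_right_click(self, event):
        # Reuse the module-level IDs instead of calling wx.NewId() on every
        # right click, and use wx.ID_ANY where no unique ID is required.
        menu = wx.Menu()
        menu.Append(MENU_ID_ADD, "Add")
        menu.Append(MENU_ID_DELETE, "Delete")
        menu.Append(wx.ID_ANY, "Help")
        self.Bind(wx.EVT_MENU, self.on_add, id=MENU_ID_ADD)
        self.Bind(wx.EVT_MENU, self.on_delete, id=MENU_ID_DELETE)
        self.PopupMenu(menu)
        menu.Destroy()

    def on_add(self, event):
        print("Add selected")

    def on_delete(self, event):
        print("Delete selected")


if __name__ == '__main__':
    app = wx.App(False)
    ExampleFrame().Show()
    app.MainLoop()
```

This way the small pool of wx IDs is only drawn from once per ID for the lifetime of the program, no matter how many times the GUI element is created and destroyed.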
Bugfixes
- Fix for the latex2html tags in the model-free chapter of the relax manual. This bug may affect the compilation of both the PDF and HTML version (http://www.nmr-relax.com/manual/) of the manual.
- Formatting improvements for the user function chapter of the HTML manual. This will hopefully fix the horrible formatting whereby all text is wrapped in the HTML tags <SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="FOOTNOTESIZE"><SMALL CLASS="SCRIPTSIZE">text</SMALL></SMALL></SMALL></SMALL></SMALL></SMALL></SMALL></SMALL></SMALL></SMALL>.
- Big bug fix for the text size formatting of the HTML manual. The previous fix for the user function chapter of the HTML manual (http://www.nmr-relax.com/manual/Alphabetical_listing_user_functions.html) did not fix the problem. The issue was with the {exampleenv} defined using a \newenvironment command in the preamble. The command \footnotesize was being used in the start, but nothing was changing the font size at the end. In LaTeX, the ending of the environment appears to reset the font size, whereas in latex2html it does not. Therefore all text after this environment is prepended by <SMALL CLASS="FOOTNOTESIZE"> in the HTML manual and this keeps adding to the text after each new exampleenv environment.
- Fix for the poorly written User_functions.test_structure_add_atom GUI test. This fixes part 1 of 2 of bug #22772, the modelfree4 binary issue and the User_functions GUI tests with wxPython 2.9 failures of the test suite. The problem was that a list element was being set in the GUI test, but that element did not exist yet. Somehow this worked in wxPython 2.8, but the bad code failed on wxPython 2.9.
- Updated the Palmer.test_palmer_omp system test for the 64-bit Linux Modelfree 4.20 GCC binary file. This fixes the second and last part of bug #22772, the modelfree4 binary issue and the User_functions GUI tests with wxPython 2.9 failures of the test suite. The problem is that the 64-bit GNU/Linux GCC compiled binary of Modelfree 4.20 produces different results than previous versions. These are now caught by the system test and correctly checked.
- Removal of the use of OrderedDict(). OrderedDict is only available from Python 2.7 onwards, and is not essential functionality. The functionality is replaced with looping over a list of dictionary keys instead, which is picked up during the analysis. Bug #22798: Failure of relax to start due to an OrderedDict ImportError on Python 2.6 and earlier.
- Fix for the find next bug in the relax controller window. This is bug #22815, the failure of find next using F3 (or Ctrl-G on Mac OS X) in the relax controller window if search text has already been set. The fix was simple, as the required flags are in the self.find_data class object (an instance of wx.FindReplaceData).
- Fix for find dialog in the relax controller window. This is for bug #22816, the find functionality of the relax controller window does not find text when using wxPython >= 2.9. The find wxPython events are now bound to the find dialog rather than the relax controller window LogCtrl element for displaying the relax messages. This works on all wxPython versions.
- Bug fix for the structure.align user function for when no data pipes are supplied.
- Bug fix for the N-state model grid search when only alignment tensor parameters are optimised. The algorithm for splitting up the grid search to optimise each tensor separately, hence massively collapsing the dimensionality of the problem, was being performed incorrectly. The grid_search() API method inc, lower, and upper arguments are lists of lists, but were only being treated as lists.
- Final fix for bug #22849, the failure of the N-state model analysis when optimising only alignment tensors using RDCs and/or PCSs. The alignment tensor is no longer initialised to zero values. This is to allow the skip_preset argument for the minimise.grid_search user function to be operational for the N-state model, a feature introduced with the zooming grid search. The solution was to check for the uninitialised tensor in the minimise_setup_fixed_tensors() method of the specific_analyses.n_state_model.optimisation module.
- Bug fix for the lib.arg_check.is_float_matrix() function. The check for a numpy.ndarray data structure type was incorrect so that lists of numpy arrays were failing in this function. Rank-2 arrays were not affected.
- Fix for the structure.com user function. This fixes bug #22860, the failure of the structure.com user function after calling structure.delete. The number of models in cdp.structure is now counted and if set to zero, RelaxNoPdbError will be raised.
- The structure.write_pdb user function can now handle empty molecules. This fixes bug #22861, the chain IDs in the structure.write_pdb user function PDB files are incorrect after calling structure.delete. To handle this consistently, the internal structural object ModelContainer.mol_loop() generator method has been created. This loops over the molecules, yielding those that are not empty. The MolContainer.is_empty() method has been fixed by not checking for the molecule name, as that remains after the structure.delete user function call while all other information has been removed. And finally the write_pdb() structural object method has been modified to use the mol_loop() method rather than performing the loop itself.
- Fix for the structure.delete user function for molecule metadata once no more data exists. This relates to bug #22861, the chain IDs in the structure.write_pdb user function PDB files are incorrect after calling structure.delete. The metadata, when it exists, is now deleted for the molecule once no more data is present.
- Fix for system test Relax_disp.test_bug_atul_srivastava. The call to the expected RelaxError needed to be performed differently for Python versions earlier than 2.7.
- Fix for bug #22937, the failure of the Relax_disp.test_estimate_r2eff_err_auto system test on Python 2.5. The test_suite/shared_data/dispersion/Kjaergaard_et_al_2013/1_setup_r1rho_GUI.py simply required a newline character at the end of the file so that it can be executed in Python 2.5.
- Fix for bug #22938, the failure of the test suite in the relax GUI. The problem was that the status.skip_blacklisted_tests variable did not exist - it was only initialised if relax is started in test suite mode. Now the value is always set from within the status module and defaults to True.
- Python 3 fixes for the relax codebase. These changes were made using the command: 2to3 -j 4 -w -f buffer -f idioms -f set_literal -f ws_comma -x except -x import -x imports -x long -x numliterals -x xrange .
- Python 3 fixes throughout relax, as identified by the 2to3 script. The command used was: 2to3 -j 4 -w -f except -f import -f imports -f long -f numliterals -f xrange .
- Python 3 fixes - eliminated all usage of the dictionary iteritems() calls as this no longer exists.
- Python 3 fixes using 2to3 for the extern.numdifftools package (mainly spacing fixes). The command used was: 2to3 -j 4 -w -f buffer -f idioms -f set_literal -f ws_comma -x except -x import -x imports -x long -x numliterals -x xrange .
- Python 3 fixes using 2to3 for the extern.numdifftools package. The command used was: 2to3 -j 4 -w -f except -f import -f imports -f long -f numliterals -f xrange .
- Python 3 fixes for all print statements in the extern.numdifftools package. The print statements have been manually converted into print() functions.
- Python 3 fixes via 2to3 - elimination of all map and lambda usage in relax. The command used was: 2to3 -j 4 -w -f map .
- Python 3 fixes via 2to3 - replacement of all `x` with repr(x). The command used was: 2to3 -j 4 -w -f repr .
- Manual Python 3 fixes for the dict.keys() function, which returns a list in Python 2 or an iterator in Python 3. This involves a number of changes. The biggest is the conversion of the "x in y.keys()" statements to "x in y". For code which requires a list of keys, the function calls "list(y.keys())" or preferably "sorted(y.keys())" are used throughout (sorted() ensures that the list will be of the same order on all operating systems and Python implementations). A number of "x in list(y.keys())" statements were simplified to "x in y", some list() calls changed to sorted(), and some unnecessary list() calls were removed (these recurring patterns are summarised in the sketch after this list).
- Python 3 fixes via 2to3 - elimination of all apply() calls. This only affects the GUI, which cannot run in Python 3 yet as wxPython is not yet Python 3 compatible. The command used was: 2to3 -j 4 -w -f apply .
- Python 3 fixes via 2to3 - proper handling of the dict.items() and dict.values() functions. These are now all wrapped in list() function calls to ensure that the Python 3 iterators are converted to list objects before they are accessed. The command used was: 2to3 -j 4 -w -f dict .
- Python 3 fixes via 2to3 - the execfile() function does not exist in Python 3. The command used was: 2to3 -j 4 -w -f execfile .
- Python 3 fixes via 2to3 - the filter() function in Python 3 now returns an iterator. The command used was: 2to3 -j 4 -w -f filter .
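Most of the dictionary-related Python 3 fixes above reduce to a handful of recurring rewrites. The snippet below is a purely illustrative summary of those patterns; the example dictionary is made up and the code is not taken from the relax sources.

```
# Recurring Python 2 to 3 dictionary rewrites (illustrative only).
data = {'N': 1.0, 'CA': 2.0, 'C': 3.0}

# "x in y.keys()" becomes the simpler "x in y".
if 'CA' in data:
    print("Found CA")

# Where a real list of keys is needed, list() or preferably sorted() is used,
# as sorted() gives the same order on all operating systems and Python
# implementations.
for key in sorted(data.keys()):
    print(key, data[key])

# dict.items() and dict.values() return iterators in Python 3, so they are
# wrapped in list() calls whenever a list object is required.
pairs = list(data.items())
values = list(data.values())

# dict.iteritems() no longer exists in Python 3, so plain items() is used.
for key, value in data.items():
    print(key, value)
```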
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.3.1
Description
This is a minor feature and bugfix release. It includes the addition of the error_analysis.covariance_matrix, structure.align, and structure.mean user functions and expanded functionality for the structure.com and structure.delete user functions. Many operations involving the internal structural object are now orders of magnitude faster, with the interatom.define user function showing the greatest speed ups. There are also improvements for helping to upgrade relax scripts to newer relax versions. The numdifftools package is now bundled with relax for allowing numerical gradient, Hessian and Jacobian matrices to be calculated. And the release includes the start of a new protocol for iteratively analysing repetitive relaxation dispersion experiments.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.3.1
(9 October 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.3.1
Features
- Initial auto-analysis support for a highly repetitive protocol for analysing relaxation dispersion data.
- Addition of the docs/user_function_changes.txt file which documents all user function changes from relax 1.0.1 to 3.3.1 to help with upgrading scripts to newer relax versions.
- Updated the translation table used to identify no longer existing user functions and explain what the new user function is called for all relax versions from 1.3.1 to 3.3.1.
- The structure.delete user function can now delete individual models as well as select atoms in individual models.
- Addition of the error_analysis.covariance_matrix user function for determining parameter errors via the covariance matrix. This is currently only implemented for the relaxation curve-fitting analysis.
- Bundling of the Numdifftools 0.6.0 package with relax for numerically testing implementations of gradients, Hessians, and Jacobians.
- Implementation of the internal structural object collapse_ensemble() method to allow for all but one model to be deleted.
- Massive speed up of the internal structural object by pre-processing the atom ID string into a special atom selection object. This speeds up the interatom.define, structure.delete, structure.rotate, structure.translate and many other user functions which loop over structural data.
- Many orders of magnitude speed up of the structure.add_model user function.
- Implementation of the structure.mean user function to calculate the mean structure from the atomic coordinates of all loaded models.
- Implementation of the structure.align user function for aligning and superimposing different but related structures. This is similar to the structure.superimpose user function but allows for missing atomic information or small sequence changes. Only atoms with the same residue name and number and atom name are used in the superimposition.
- Expanded the structure.com user function to accept the atom_id argument to allow the centre of mass of a subset of atoms to be determined (a short script sketch covering these new structure user functions is given after this list).
- Improvements for the running of the relax test suite.
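As a rough indication of how some of the new structure-related functionality above could appear in a relax script, a minimal sketch follows. This is only an illustration: the PDB file name is made up, the argument lists are abbreviated, and the exact keyword arguments should be checked against the user function help.

```
# Hypothetical relax script fragment for the new 3.3.1 structure features.

# Load an ensemble of structural models.
structure.read_pdb('ensemble.pdb')

# Delete one entire model, or only selected atoms within a model, via the new
# model argument of structure.delete.
structure.delete(model=2)
structure.delete(atom_id='@H*', model=1)

# Superimpose related structures from different data pipes with the new
# structure.align user function.
structure.align(pipes=['pipe A', 'pipe B'], method='fit to first')

# Calculate the mean structure from the atomic coordinates of all loaded
# models.
structure.mean()

# Centre of mass of a subset of atoms via the new atom_id argument.
structure.com(atom_id=':1-50')
```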
Changes
- Epydoc docstring fix for the dep_check.version_comparison() function.
- Removed ZZ and HD exchange from the dispersion chapter of the relax manual. These would probably require completely new analysis types added to relax to analyse such data.
- Updated the 'Announcement' section of the release checklist document. This now includes details about initially composing the message using the relax wiki, and then how that text and the CHANGES file are used for the email announcement and the Gna! news item.
- Small changes for the Gna! news item in the release checklist document.
- Modified the announcement section of the release checklist document. Text about removing wiki markup has been added.
- More expansion of the release checklist document. Added text about creating internal and external links for the wiki release notes.
- Modified the system test Relax_disp.test_show_apod_extract, which tests output from showApod. The output can differ according to the NMRPipe version, but the 'Noise Std Dev' is the same.
- Fix for the comments of the showApod dependency check.
- Fix for raising an error when calling showApod when the subprocess module is not available.
- Fix for the dependency check for showApod in system tests.
- Further extended the protocol for repeated dispersion analysis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Extended the system test for the protocol for repeated dispersion analysis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Added a relaxation dispersion model profiling log file for relax version 3.3.0 vs. 3.2.3. This is the output from the dispersion model profiling master script. These numbers will be used for the relax 3.3.0 release notes.
- Fixes for the relax 3.3.0 vs. 3.2.3 dispersion model profiling log file. The numeric model numbers were incorrectly scaled and were a factor of 10 too high.
- Fixes for the scaling factors in the dispersion model super profiling script.
- Editing of the relax 3.3.0 features section of the CHANGES file. This will be used for the release notes.
- Added more test data for the repeated analysis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Updated the Baldwin 2014 reference in the relax manual. The pybliographic software was used to format this BibTeX entry. This was updated as volume and page number information is now available.
- Updated the Morin et al, 2014 paper (the relax relaxation dispersion paper) reference in the manual. The paper now has volume and page information.
- Added some more user function renamings to the translation table. These were identified while preparing the release notes on the wiki (http://wiki.nmr-relax.com/Category:Release_Notes, http://wiki.nmr-relax.com/Release_notes).
- Stored a frequency dependent dictionary with spectrum IDs and repeated CPMG frequencies in the setup pipe. This information will progress out through children pipes. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Further extended methods in the class for repeated analysis of dispersion data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Updated the release checklist document, including adding a section about cross-linking. The cross-linking is important for search engine indexing.
- Created a simple script for printing out the names of all user functions.
- Added listings of all user functions from relax version 2.0.0 all the way to relax 3.3.0. This will be used to look at how the user function names have changed with time.
- Added a script and log file for comparing relax user function differences between versions.
- Created a document for relax users which follows the changes to the user function names.
- For the spin.display user function, added the printout of the spin ID and selection status. This is to help with showing the spin ID string for selection and the current selection status. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- To the back end of displaying pipes, added functionality to sort the pipe names before printing. Also added the return of the list of pipes, with the associated information about the pipe type and pipe bundle. This is to help with getting a better overview of multiple pipes in the data store. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Passed the force flag from the front end of value.set to the back end. Bug #22598: The back end of value.set does not respect the force=False flag.
- Broke the optimisation function into smaller functions. This is to help with selecting spins, performing a particular grid search, and minimising. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Modified system test to follow the new functions in the auto analysis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Shifted the user function listing script into the test suite directory where the results are.
- Created a script for printing out relax 1.3 user functions.
- Stripped out all of the relax intro and script printouts from the user function listing files. This allows the diff.py script to be simplified.
- Updated the relax 1.3 user function printout script and added many printouts. The printouts are for relax versions 1.3.5 to 1.3.16. The earlier relax versions used the relax 1.2 user function setup.
- Created a script for printing out all user functions for relax 1.2 versions. This also includes the relax 1.3.0 to relax 1.3.4 versions.
- Added the relax 1.3.0 to relax 1.3.4 user function printouts.
- Changed the behaviour of the script for showing user function difference between relax versions. The relax versions are now reversed so the oldest version is at the bottom of the difference printout.
- Added the relax 1.0.1 to relax 1.2.15 user function printouts. The diff.log file has also been updated with all of these versions.
- Updated the user_function_changes.txt document. This now lists all changes in the user function naming from relax version 1.0.1 all the way to relax 3.3.0.
- Added all remaining user function renamings since relax 2.0.0 to the translation table. These were taken directly from the docs/user_function_changes.txt document.
- Added all user function renamings since relax 1.3.1 to the translation table. These were taken directly from the docs/user_function_changes.txt document. Earlier relax versions are far too different, so this will be the earliest relax version for this translation table. The relax 1.2 and earlier (and 1.3.0) versions used the run argument throughout and the scripting was so different that telling the user how to upgrade to new user functions is pointless. And the release date of relax 1.2.15, the last of these old designs, was November 2008.
- Changed the order of the two relax versions being compared for user function changes. This is in the diff.py script and log file and the user_function_changes.txt document.
- Changed the organisation of the files in the docs/ directory. A new docs/devel directory has been created and the 2to3_checklist, Mac_framework_build_3way, package_layout, and prompt_screenshot.txt documents shifted into it. This is to hide or abstract away the development documents so that relax users do not see them when looking into docs/. This should make the directory less intimidating.
- Shifted the Release_Checklist document into docs/devel/ to hide it from users.
- Correction for the noe.read to spectrum.read_intensities user function change. This is for the translation table used to catch old user function calls.
- Initial try to implement plotting in the repeated auto analysis protocol. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Small improvement of the matplotlib plotting of data in the repeated analysis protocol. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Fix for calling correct folder with test intensities. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- For the class of repeated analysis, implemented a method to collect peak intensities, and a function to plot the correlation. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Added the system test Relax_disp.test_repeat_cpmg to the tests to be skipped if the matplotlib module does not exist. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Added the Gimp XCF file for the logo of the relax wiki.
- Added system test Relax_fit.test_curve_fitting_height_estimate_error() for the manual and automated analysis of exponential fit. This is to prepare for new methods in the auto analysis protocol.
- In the auto analysis of exponential fitting, changed the minimisation method from simplex to Newton to speed up the fitting. This is for the master Monte Carlo simulations.
- In the system test Relax_fit.test_curve_fitting_height_estimate_error(), moved the auto-detection of replicated spectra into the manual method. This is to prepare for the automated detection of replicates.
- Implemented a method to automatically find duplicate spectra in the exponential fit. This is to ease the user intervention for error analysis, if this has been forgotten.
- Implemented the writing out of a "grace2images.py" script file, when performing auto analysis of exponential fits.
- Created the Structure.test_delete_model system test. This is in preparation for extending the structure.delete user function to be able to delete individual structural models. The test will only pass once this functionality is in place.
- Expanded the wiki instructions in the release checklist document. This includes a number of steps for significantly improving the release notes: External links to the Gna! trackers with full descriptions, external links to the HTML user manual for all user functions, internal links to release notes of other relax versions, internal links to wiki pages for all models from all theories, and HTML formatting of all symbols/parameters/etc.
- Introduction of the model argument to the structure.delete user function. This argument is passed all the way into the internal structural object, but is not used yet.
- The model argument in the structure.delete user function is now operational. In the internal object, it has two functions. When the atom_id argument is None, then the new ModelList.delete_model() function is called to remove the entire model from the list of structural models. When the atom_id argument is supplied, then only the corresponding atoms in the given model will be deleted.
- Expanded the checking in the Structure.test_delete_model system test. Now a number of structural model loading and deletion scenarios are tested.
- Implemented a back-end function to estimate the Rx and I0 errors from the Jacobian matrix. This is to prepare for a user function in relax_fit to estimate errors.
- Implemented the user function relax_fit.rx_err_estimate in relax_fit to estimate the Rx and I0 errors from the Jacobian covariance matrix.
- Extended the system test Relax_fit.test_curve_fitting_height_estimate_error() to test the error estimation method from the covariance matrix. The results seem very similar when increasing to 2000 Monte Carlo simulations.
- Renamed the pipe_control.monte_carlo module to pipe_control.error_analysis. This is in preparation for the module to handle all error analysis techniques: Monte Carlo simulations, covariance matrix, Jackknife simulations, Bootstrapping (which is currently via the Monte Carlo functions), etc. All current functions are now prepended with 'monte_carlo_*()'.
- Fix for the old relax 1.2 model-free results file reading. This is due to the pipe_control.monte_carlo to pipe_control.error_analysis module renaming.
- Implemented the pipe_control.error_analysis.covariance_matrix() function. This follows from http://thread.gmane.org/gmane.science.nmr.relax.scm/23526/focus=7096. It will be used by a new error_analysis.covariance_matrix user function. And it calls the specific API methods model_loop(), covariance_matrix(), and set_error() and the relax library lib.statistics.multifit_covar() function to do most of the work.
- Modified the Relax_fit.test_curve_fitting_height_estimate_error system test. The call to relax_fit.rx_err_estimate has been replaced by the yet-to-be implemented error_analysis.covariance_matrix user function.
- Creation of the error_analysis.covariance_matrix user function. This is simply a code rearrangement. The relax_fit user function module was duplicated and relax_fit.rx_err_estimate renamed to error_analysis.covariance_matrix. References to the specific analysis have been removed.
- Created the specific analysis base API method covariance_matrix(). This defines the arguments required and what is returned by the method. It raises the RelaxImplementError for all analyses which do not implement this method.
- Modified pipe_control.error_analysis.covariance_matrix(). The call to the API covariance_matrix() method now has the model_info argument passed into it. For the relaxation curve-fitting, this allows the loop over spin systems to be skipped.
- Shifted the contents of the specific_analysis.relax_fit.estimate_rx_err module into the API. The estimate_rx_err() function is now the covariance_matrix() method of the specific API. The code for calculating the covariance matrix and errors is now in the function pipe_control.error_analysis.covariance_matrix(), so this has been removed. And the error setting is performed by the set_errors() API method, so that code has been deleted as well.
- Removed the specific_analyses.relax_fit.estimate_rx_err module import. The module has been merged into the specific API module.
- Fix for the pipe_control.error_analysis.covariance_matrix() function. The set_error() API method is parameter specific, so a loop over the parameters using the get_param_names() API method has been added.
- Removed the estimate_rx_err module from the specific_analyses.relax_fit.__all__ list. This module was deleted after merger into the api module.
- Improved the plotting of the correlation plot for intensities. Now the intensity to error ratio is plotted, which is the correct measure of this data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Implemented a correlation plot of R2eff values for different pipes. This has R2eff/σR2eff plotted, which is the best way to represent this data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Further improved the plotting of data in repeated analysis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Added the Relax_disp.test_show_apod_rmsd_dir_to_files system test to the blacklist. This is for when the showApod program is not installed on the machine, and allows the test suite to pass.
- Extended the printout for the skipped tests in the test suite. As tests using the NMRPipe showApod software are skipped and listed in this table, the text now includes 'software' in the list.
- Shifted the checks for the Dasha and Modelfree4 software into the system test __init__() method. This is to bring this into the same design as the relaxation dispersion tests which require the NMRPipe showApod software. Now the test suite will list either Dasha or Modelfree4 in the skipped test table if they are not installed.
- Adding another statistics method to plot for multiple data sets. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- More adding of matplotlib snippets for plotting intermediate data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Changing the range of plotting for statistics. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- More changes to plotting for statistics. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Fix for axis limits when plotting stats. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Fix for globbing, to prevent accidentally taking the wrong intensity file. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Correction to figure limits. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Implemented writing out of statistics to file. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Adding writing out of PNG files from matplotlib, when looking at statistics. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Another math domain check: if the reference intensity is set to 0.0, then the points are skipped rather than raising an error. This can happen for extremely bad dispersion data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Trying to implement flexibility for when expected data is missing. This can be due to failures in processing the data, where a whole run of data is randomly skipped. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Better check for math domain error in intensity proportionality. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Removal of the initialisation of a dictionary before the existence of data has been checked. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Small fix for the correct check of missing data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Imported the Numdifftools 0.6.0 package into the relax source tree. This package is extremely useful for testing the implementation of gradients, Hessians, and Jacobians for all relax target functions. The numerical values from numdifftools can be compared to the directly calculated values. And for analysis types where the partial derivatives with respect to each model parameter are too complicated to calculate, or the derivatives are very complicated and hence slow, numdifftools can be used to provide a numerical estimate for direct use in the optimisation. The Numdifftools package is from https://pypi.python.org/pypi/Numdifftools and https://code.google.com/p/numdifftools/. The current version 0.6.0 has been placed into extern/numdifftools. Only the numdifftools package from within the official distribution files has been included; the Python package setup.py file and associated files and directories have not. The package uses the New BSD licence (the revised licence with no advertising clause) which is compatible with the GPL v3 licence.
- Reordered functions in repeated analysis protocol. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Added more checks of methods to the system test Relax_disp.test_repeat_cpmg(). This actually shows that the user function relax_disp.r20_from_min_r2eff may be broken. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Fix for testing whether the method has finished when called. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Turned on minimisation in system test Relax_disp.test_repeat_cpmg(). Task #7826: Write a Python class for the repeated analysis of dispersion data.
- The lib.spectrum.nmrpipe module has been made independent of the relax source code. This was discussed at http://thread.gmane.org/gmane.science.nmr.relax.scm/23357/focus=7103. The change allows the software verification tests to pass. The dep_check module cannot be used in the relax lib package. Only modules from within lib are allowed to be imported into modules of lib. The fix now allows the full test suite to pass and hence new relax releases are once again possible.
- Created a document which explains how missing copyrights can be found.
- Even more improvements to the shell command for finding missing copyrights.
- Updated the copyright notice for 2014 for all files changed by Edward d'Auvergne. These were identified using the command in the find_missing_copyrights document.
- Added numdifftools to the extern package __all__ list.
- Updated the find_missing_copyrights document. The matching is now more precise and skips all svnmerge operations.
- Added the 2014 copyright notice for Troels Linnet to many relax source files. These were identified as being edited by Troels using the command listed in the find_missing_copyrights document. The changes include adding "Copyright 2014 Troels E. Linnet" to many files not containing Troels' copyright notice, and extending the 2013 copyright to 2014.
- Implemented correlation plot of minimisation values. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Changed the missing package/module/software table in the test suite. This is to allow all names to fit and to update the column titles for software packages.
- Decreased the accuracy of a check in the Relax_disp.test_estimate_r2eff_err_auto system test. This is to allow the test to pass on my Windows 7 VM.
- Added Troels E. Linnet to the COMMITTERS file, which has not been updated in almost 3 years.
- Created the Structure.test_get_model system test. This demonstrates that the internal structural object get_model() method is not working as it should.
- Added a few more checks to the Structure.test_get_model system test.
- Created the Structure.test_collapse_ensemble system test. This is used to test a planned feature of the internal structural object. The collapse_ensemble() method will be created to remove all but one model in the structural ensemble.
- Modified the Structure.test_collapse_ensemble system test to check the initial values. This is for sanity reasons as the test coverage of the structure.add_atom user function is poor.
- Implemented the internal structural object collapse_ensemble() method. This allows the Structure.test_collapse_ensemble system test to pass.
- Created a basic text based progress meter in the new lib.text.progress module. This is taken from the script test_suite/shared_data/frame_order/cam/generate_base.py.
- Modifications to the User_functions.test_structure_add_atom GUI test. As lists of lists are now accepted by the structure.add_atom user function, the operation in the GUI is now significantly different. Therefore many checks have been removed from the GUI test.
- Updated the minimum minfx dependency version number from 1.0.9 to 1.0.11 in the dep_check module. This newest version handles infinite target function values, preventing optimisation from continuing forever. The 1.0.10 version is also useful as there is full support for gradients and Hessians in the log-barrier constraint algorithm.
- Shifted the specific_analyses.relax_disp.variables module into lib.dispersion. This is both to minimise circular dependencies, as previously the specific_analyses.relax_disp modules import from target_functions.relax_disp and vice-versa, and to allow the relax library functions to have access to these variables. This follows from a similar change to the frame order analysis in the frame_order_cleanup branch.
- Dependency fix for the auto_analyses.relax_disp_repeat_cpmg module. This was causing relax to fail. SciPy is an optional dependency for relax, but this module caused relax to not start if SciPy was not installed. This was detected by testing relax with PyPy.
- Implemented writing out of particular correlation plots to file. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Created a special internal structural object selection object. This will be used for massively speeding up the internal structural object. The use of the lib.selection module by the internal structural object is currently very slow as a huge number of calls to re.search() are required. The idea is to avoid this by using lib.selection once to populate this new selection object, and then reusing this object to loop over molecules and atoms.
- Added the selection() method to the internal structural object. This parses the atom ID string using the lib.selection module, loops over the molecules and atoms, performs matches using re.search() via lib.selection, and populates and returns the new Internal_selection object. This can be used to pre-process the atom ID string to save huge amounts of time.
- The internal structural object validate_models() method now accepts the verbosity argument. This is used to silence printouts.
- Fixes for the new structural object Internal_selection object. The atom indices are now stored via the molecule index.
- Converted the rotate() and translate() structural object methods to use the new selection object. The atom_id arguments have been replaced with selection arguments. Therefore all parts of relax which call these methods must first call selection() to obtain the Internal_selection instance.
- Created the structural object Internal_selection.mol_loop() method. This simply allows quick looping over all molecule indices of the selection object.
- Converted all structural object methods to use the selection object rather than atom ID strings. This should have a significant impact on the speed of certain operations within relax. The most obvious effect will be a huge speed up of the interatom.define user function. There should be speed ups with a number of other user functions relating to structural information. All parts of relax have been updated for the change.
- Implemented the sampling sparseness instead of NI on the graph axis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Massive speed up of the internal structural object add_model() method. This speeds up the structure.add_model user function, as well as many internal relax operations on the structural object. Instead of using the copy.deepcopy() function to duplicate an already existing structural model, now new molecule container objects are created and then the individual elements of the original molecule container data lists are copied one by one. This avoids copying a lot of internal Python junk and hence the copying operation is now orders of magnitude faster.
- Created the new --no-skip relax command line option. This is a debugging option specifically designed for relax developers. It allows all blacklisted tests to be executed, i.e. all blacklists are ignored. These tests would normally be skipped, however this option enables them.
- Fix for the test suite summary printout function for the new --no-skip option. The relax status object was clashing with a variable of the same name.
- Reactivated the Relax_disp.test_m61b_data_to_m61b system test, but blacklisted it. This will allow the test to be executed if the --no-skip command line option is used.
- Created the Bmrb.test_bug_22703_display_empty system and GUI test. This system test catches bug #22703, the failure of the bmrb.display user function with an AttributeError when no data is present. It is simultaneously a system and GUI test, as the GUI test class inherits directly from the system test class.
- Created the pipe_control.spectrometer.check_setup() function. This follows the design on the wiki page http://wiki.nmr-relax.com/Relax_source_design. This is for checking if spectrometer information has been set up.
- Created the RelaxNoFrqWarning warning class for warning that no spectrometer information is present.
- Renamed the pipe_control.spectrometer.check_setup() function to check_spectrometer_setup(). This is so it can be used without confusion outside of the module.
- Fix for a broken elif block in the new pipe_control.spectrometer.check_spectrometer_setup() function.
- The model-free bmrb_write() API method now checks for spectrometer information. This is via a call to the pipe_control.spectrometer.check_spectrometer_setup() function.
- Modified the Bmrb.test_bug_22703_display_empty system/GUI test to catch the RelaxNoFrqError.
- Created a special Check class based on the strategy design pattern. This is in the new lib.checks module. The class will be used to simplify and unify all of the check_*() functions in the pipe_control and specific_analyses packages.
- Converted the pipe_control.spectrometer.check_*() functions to the strategy design pattern. These are now passed into the lib.checks.Check object, and the original functions are now instances of this class.
- Alphabetical ordering of all functions in the pipe_control.pipes module.
- Changed the design of the Check object in lib.checks. The design of the checking function to call has been modified - it should now return either None if the check passes or an instantiated RelaxError object if not. This is then used to determine if the __call__() method should return True (when None is received). Otherwise if escalate is set to 1, the text from the RelaxError object is sent into a RelaxWarning and False is returned. And if escalate is set to 2, then the error object is simply raised.
- Updated the pipe_control.spectrometer.check_*_func() functions to use the new design.
- Implemented the writing out of parameter values between comparison of NI level. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Fixes for the lib.checks.Check object. The __call__() method keyword arguments **kargs need to be processed inside the method to strip out the escalate argument.
- The default value of the escalate argument of the Check.__call__() method is now 2. This will cause the calls to the check_*() function/objects to default to raising RelaxErrors.
- Changed the behaviour of the lib.checks.Check object again. This time the registered function is stored rather than converted into a class instance method. That way the check_*() function-like objects do not need to accept the unused 'self' argument.
- The data pipe testing function has been converted to the strategy design pattern of the Check object. The function pipe_control.pipes.test() has also been renamed to check_pipe().
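The last few items describe the new lib.checks.Check object. A heavily simplified sketch of the described design is given below; the error and warning machinery is stubbed out, the check_pipe_func() example is hypothetical, and this is not a copy of the actual relax implementation.

```
# Simplified sketch of the lib.checks.Check strategy design (not relax code).
import warnings


class RelaxError(Exception):
    """Stub standing in for the relax error classes."""


class Check:
    """Store a checking function and turn its result into pass/warn/raise."""

    def __init__(self, function):
        # The registered function is stored rather than bound as a method, so
        # it does not need to accept an unused 'self' argument.
        self.function = function

    def __call__(self, *args, **kargs):
        # Strip out the escalate argument, defaulting to raising RelaxErrors.
        escalate = kargs.pop('escalate', 2)

        # The checking function returns None if the check passes, otherwise
        # an instantiated RelaxError object.
        error = self.function(*args, **kargs)

        # The check passed.
        if error is None:
            return True

        # Send the error text into a warning and return False.
        if escalate == 1:
            warnings.warn(str(error))
            return False

        # Raise the error object itself.
        raise error


def check_pipe_func(pipe_name=None):
    """Hypothetical check: return None on success, a RelaxError otherwise."""
    if pipe_name is None:
        return RelaxError("No data pipe has been supplied.")
    return None


# The function-like checking object, mirroring the check_pipe() idea above.
check_pipe = Check(check_pipe_func)
```

Calling check_pipe() with no pipe name would then raise the error by default, while check_pipe(escalate=1) would only issue a warning and return False.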
- Created the Bmrb.test_bug_22704_corrupted_state_file system test. This is to catch bug #22704, the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions.
- Implemented getting the statistics for parameters and comparing to init NI. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Implemented writing and plotting of statistics for individual and clustered fitting, comparing to full NI. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Added checks to the Bmrb.test_bug_22704_corrupted_state_file system test. This is to see if the cdp.exp_info data structure has been correctly restored from the save file.
- Uncommented some checks in the Bmrb.test_bug_22704_corrupted_state_file system test.
- For relaxation dispersion, modified the grid search and linear constraints so that the parameter kAB is between 0 and 100. The parameter is only used in the TSMFK01 model. The kAB parameter is only for very slow forward exchange rates. The expected values should be according to the reference paper [Tollinger et al., 2001]. The paper concerns values of kAB in the region 0.1 to 5.0. If the exchange rate is any higher than this, then another model should be used for the analysis.
- Set the default insignificance value to 0.0 instead of 1.0. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Modified the grid search limits for the parameter kAB to be between 0.1 and 20.0 rad.s-1. This is for the TSMFK01 model, where values much above 10-20 are not expected.
- Implemented the counting of outliers for the statistics. This is to get a better feeling for why some statistics differ greatly between NI levels. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Created the Structure.test_mean system test. This is to test the functionality of a planned new feature, the structure.mean user function. This is an analysis aid that will calculate the mean structure from all loaded models.
- Implemented the structure.mean user function frontend. The backend is currently just a stub function.
- Fixes and simplifications for the pipe_control.pipes.check_pipe() checking object. One of the RelaxError classes was not initialised and the docstring was incorrect.
- Created the pipe_control.structure.main.check_structure() checking object. This will be used for providing much more detailed feedback for when structural information is missing.
- Converted all of the pipe_control.structure.main functions to use the check_structure() object. This standardises and improves all of the checks.
- Some fixes and additional checks for the Structure.test_mean system test.
- Implemented the backend of the structure.mean user function. This primarily occurs within the internal structural object in the new mean() method. The pipe_control.structure.main.mean() function simply checks if the current data pipe is correctly set up and then calls the structural object mean() method.
- Created the Structure.test_align system test. This will be used to test the yet to be implemented structure.align user function. This user function will be similar to the structure.superimpose user function but will be designed so that structures with different primary and atomic sequences can be superimposed.
- Created the frontend of the structure.align user function. This is almost the same as that of the structure.superimpose user function except that the pipes argument has been added and the titles and description changed to indicate the differences.
- Registered the new user function argument type 'int_list_of_lists' in the prompt UI. This is to allow for lists of lists of integers, as used for the model argument in the new structure.align user function.
- Modified the lib.arg_check.is_int_list() function to accept the list_of_lists Boolean argument. This updates the function to have the same functionality as is_str_list(), allowing lists of lists of integers to be checked.
- Extended the Structure.test_align system test to thoroughly check the structural data. This includes changing the structure.align user function call to use 'fit to first' and carefully checking the new atomic coordinates.
- Modified the Structure.test_align system test so that translations and rotations match the algorithm. This allows the output of the structure.align user function to be checked to see if the rotation matrix and translation vector found match that used to shift the original structures.
- Implemented the structure.align user function backend. This is similar to the structure.superimpose user function, however the coordinate data structure only contains atoms which are in common to all structures.
- The pipe_control.structure.main functions translate() and rotate() now accept the pipe_name argument. This is used to translate and rotate structures in different data pipes, as required by the structure.align user function.
- The pipe_control.structure.main.check_structure() checking object now accepts the pipe_name argument. This allows structural data to be checked for in different data pipes without having to switch to them.
- Modified the Structure.test_align system test to call the structure.write_pdb user function. This sets the file name to sys.stdout so that the original structure and the final aligned structures are output to STDOUT for debugging purposes.
- Created the Structure.test_delete_atom system test. This is used to test the deletion of a single atom using the structure.delete user function.
- Expanded the Structure.test_delete_atom system test. This is to show that the structure.write_pdb user function fails after a call to the structure.delete user function to delete individual atoms.
- Fix for the new structure.align user function. The translation and rotation of the structures at the end to the aligned positions was being incorrectly performed.
- Loosened some checks in the Structure.test_align system test to allow it to pass. Some self.assertEqual() checks for the atomic coordinates have been replaced by self.assertAlmostEqual() to allow for small machine precision differences.
- Modified the lib.arg_check.is_str_or_inst() function to handle cStringIO objects. This allows sys.stdout to be used as a file object in the relax test suite.
- Modified the lib.arg_check.is_str_or_inst() function to work with Python 3. Instead of checking for cStringIO.OutputType, which does not exist in Python 3, the argument is simply checked to see if it has a write() method.
- Printout of the number of all R2eff points, if they differ between analyses. This can become an issue if a single intensity point has slipped into the noise, due to low quality of the spectrum reconstruction. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Implemented statistics for R2eff values. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Added data checks and printouts to the structure.align user function. The data checks are to prevent the user from attempting an alignment with differently named molecules, as this will not work.
- Implemented writing out intensity and error correlations plot. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Implemented writing out of intensity statistics. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Expanded the structure.com user function to accept the atom_id argument. This allows the centre of mass (CoM) calculation to be restricted to a certain subset of atoms. The backend already had support for this feature, but now it is exposed in the frontend. The user function docstring has been slightly modified as well.
- Skipping of the intensity calculation if the intensity pipe does not exist. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Added example CPMG data, which could possibly be sent for BMRB submission. The data is un-published CPMG data, related to the paper: Webb H, Tynan-Connolly BM, Lee GM, Farrell D, O'Meara F, Soendergaard CR, Teilum K, Hewage C, McIntosh LP, Nielsen JE (2011). Remeasuring HEWL pK(a) values by NMR spectroscopy: methods, analysis, accuracy, and implications for theoretical pK(a) calculations. Proteins: Struct., Funct., Bioinf. 79(3), 685-702, DOI 10.1002/prot.22886. Task #7858: Make it possible to submit CPMG experiments for BMRB.
- Added system test Relax_disp.test_bmrb_sub_cpmg() to try calling the bmrb functions in relax. Task #7858: Make it possible to submit CPMG experiments for BMRB.
- Implemented the initial part of the API, to collect data for BMRB submission. Task #7858: Make it possible to submit CPMG experiments for BMRB.
- Inserted a "RelaxImplementError" when trying to call bmrb_write from a relaxation dispersion analysis. To implement the function, it would require a re-write of the relax_data bmrb_write(star) function, and proper handling of cdp.ri_ids. It was also not readily possible to find examples of submitted CPMG data in the BMRB database. This makes it hard to develop, and even ensure that BMRB would accept the format. Task #7858: Make it possible to submit CPMG experiments for BMRB.
- Removed the system test Relax_disp.test_bmrb_sub_cpmg() from the test suite. This test will not be implemented, as it would require a large re-write of the data structures. Task #7858: Make it possible to submit CPMG experiments for BMRB.
- Removed the showing of Matplotlib figures in the test suite. Task #7826: Write a Python class for the repeated analysis of dispersion data.
- Implemented system test Relax_disp.test_dx_map_clustered to catch the missing creation of a point file. Bug #22753: dx.map does not work when only 1 point is used.
- Inserted a check in the system test Relax_disp.test_dx_map_clustered that a call to minimise.calculate gives the same χ2 value as that stored in the clustered χ2 file. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Made initial preparation to loop over clustered spins and IDs for the minimise.calculate user function call. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Implemented looping over spin-clusters when issuing a minimise.calculate. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Made back_calc_r2eff() in the optimisation module use the spin and spin ID lists instead. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Fix for the graph plotting functionality to send the spins as a list containing a single spin. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Fix for calling back_calc_r2eff() with the new argument keywords, and for using lists of spins and spin IDs. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Fix for the synthetic data script calling back_calc_r2eff() with the old arguments; it now uses lists of spin containers and spin IDs. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Inserted a final test in test_dx_map_clustered to check that the written χ2 values are as expected. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Moved the looping over cluster spin IDs into its own function in the API. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Added the selection string for all the cluster IDs so that it is passed back as well. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Made the value setting function set the value for all spins if the parameter is a global parameter. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Moved the skipping of protons out of the looping function. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Inserted some testing lines for creating an OpenDX map, either globally clustered or as a free spin. There is a big difference in which OpenDX map is obtained, beautifully illustrating the effect of clustering. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Added a BMRB NMR-STAR formatted deposition file for the OMP model-free data for reference. This is because there are no other NMR-STAR formatted files in the relax sources.
- In the dispersion API calculate(), used the API function model_loop() to loop over the clusters instead. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Removed the function loop_cluster_ids() from the dispersion API. This should be implemented elsewhere. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Updated the API set_param_values() function to use model_loop() to get the spin_ids from the cluster. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Initial try to fix unit test test_value_set_r1_rit(). The problem is that no spin ID can be generated since the spins are created manually. "AttributeError: 'MoleculeContainer' object has no attribute '_res_name_count' ". Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Removed the checking of MODEL_LIST_MMQ, and spin.isotope from optimisation.back_calc_r2eff(), since this check is already covered. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Fix for references to "spin" in optimisation.back_calc_r2eff(). Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Fix for looping performed twice in relax_disp API model_loop(). Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Removed an unused proton reference in the relax_disp API calculate() method. There are, however, some problems with these tests (F 1.93 s for Relax_disp.test_korzhnev_2005_15n_dq_data, F 2.01 s for Relax_disp.test_korzhnev_2005_1h_mq_data, F 1.93 s for Relax_disp.test_korzhnev_2005_1h_sq_data). It is unclear where these failures come from. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Fix for epydoc in system test Relax_disp.test_dx_map_clustered.
- Updated all of the Relax_disp.test_korzhnev_2005_*_data system tests. These now have slightly changed parameter values due to the fix of bug #22563, the NS MMQ 2-site dispersion model running at 32-bit precision and not 64-bit as it should be.
- Epydoc change for DOI reference in system tests. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Added some test PyMOL scripts to create OpenDX maps and χ2 surface plots. These will go to the wiki: http://wiki.nmr-relax.com/Chi2_surface_plot.
- Big improvement for running the relax unit tests via the relax command line options. The unit test module path is now accepted as a command line option. This brings more capabilities of Gary Thompson's test_suite/unit_tests/unit_test_runner.py script into the relax command line. The _pipe_control/test_value unit test module path can be specified as, for example, one of 'test_suite.unit_tests._pipe_control.test_value', 'test_suite/unit_tests/_pipe_control/test_value', '_pipe_control.test_value', '_pipe_control/test_value'. This allows individual modules of tests to be run, rather than having to execute all unit tests, which is very useful for debugging.
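As an illustration of how the four accepted path forms map to a single Python module path, here is a minimal sketch; the helper name and exact normalisation rules are assumptions for illustration only, not the relax implementation:

```python
# Hedged sketch: normalise the accepted unit test path formats to a module path.
def normalise_unit_test_path(path):
    """Convert any of the accepted forms to 'test_suite.unit_tests.*' notation."""
    # Allow file-system style separators.
    module = path.replace('/', '.')
    # Strip a trailing '.py' if a file name was given.
    if module.endswith('.py'):
        module = module[:-3]
    # Allow the short form relative to the unit test directory.
    if not module.startswith('test_suite.unit_tests'):
        module = 'test_suite.unit_tests.' + module
    return module

for p in ['test_suite.unit_tests._pipe_control.test_value',
          'test_suite/unit_tests/_pipe_control/test_value',
          '_pipe_control.test_value',
          '_pipe_control/test_value']:
    print(normalise_unit_test_path(p))
# All four print: test_suite.unit_tests._pipe_control.test_value
```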
- Modified the printouts for the unit tests when running with the --time command line option. The test name is now being processed. The leading 'test_suite.unit_tests.' text is now stripped out. And the remaining text is split into the module name and the test name. This is to allow the unit test module name to be more easily identifiable, so it can then be used as a command line option to allow only a subset of tests to be performed.
- Modified the help strings for the test suite options shown when 'relax -h' is run. The ability to specify individual tests (or modules of tests for the unit tests) is now documented. The '--time' option help string has also been edited.
- Fix for the Bmrb.test_bug_22704_corrupted_state_file GUI test. This was failing because the setUp() method in the inherited Bmrb system test module was being overwritten by the default Unittest.setUp() method. Therefore the system test setUp() method has been copied into the GUI test class.
- Fix for the Test_value.test_value_set_r1_rit test of the _pipe_control.test_value unit test module. This is a general fix for all unit test modules which use the test_suite.unit_tests.value_testing_base.Value_testing_base base class. After the molecules, residues and spins are manually created, the pipe_control.mol_res_spin.metadata_update() function is called to make sure that all of the private and volatile metadata have been correctly created, so that the other pipe_control.mol_res_spin module functions can operate correctly.
- Removal of repetitive code in the relaxation dispersion model_loop() API method. The spin loop does not need to be called twice, instead the if statements have been modified to better direct the code execution.
- Added a script to simulate dispersion profiles at different settings. This shows that something is wrong: the back-calculated values in the graphs are not equal to the interpolated values. The list of χ2 values, judging from the dispersion graphs, simply cannot be correct.
- Changed the bounds for the sample scripts which create the 3D iso-surface plot, the surface plot, and the simulated dispersion curves.
- Minor changes to the Python matplotlib script for producing the surface plot. Also added the new data for the plotting.
- Modified the example data after the issue with the parameters was fixed.
Bugfixes
- Fix for two-point calculation of exponential curve with corrupted data. The two-point calculation is now also skipped, if the measured intensity is 0. This can happen for corrupted intensity files.
- Fix for the internal structural object get_model() method - it now actually returns the model.
- Fixes for the structure.add_atom user function to allow for list of lists for the atomic position. This allows different coordinates to be supplied for each model.
- Added safety checks for NaN values to the lib.structure.pdb_write module. This is within the _record_validate() function. The check prevents the creation of invalid PDB files.
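A minimal, hedged sketch of this kind of NaN safety check (illustrative only; the real check lives inside the _record_validate() function):

```python
# Hedged sketch of a NaN safety check before writing a PDB record field.
import math

def validate_coordinates(x, y, z):
    """Raise an error if any coordinate is NaN, so no invalid PDB file is written."""
    for label, value in (('x', x), ('y', y), ('z', z)):
        if isinstance(value, float) and math.isnan(value):
            raise ValueError("The %s coordinate is NaN, refusing to write an invalid PDB record." % label)

validate_coordinates(1.0, 2.0, 3.0)               # Passes silently.
# validate_coordinates(1.0, float('nan'), 3.0)    # Would raise ValueError.
```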
- Fix for the experimental information data pipe object when converting to XML state and results files. This is a partial fix for bug #22704, the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions. The names and descriptions for the software, citation and script list objects were incorrectly set. These have been fixed so that the name of the data structure and the real description is present in the XML state or results file instead of <relax_list desc='relax list container'>.
- Fix for the cdp.exp_info.software data structure setup. This is a partial fix for bug #22704, the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions. The Element data container name was being replaced by the software name, making it impossible to restore from the XML.
- Implemented the cdp.exp_info.from_xml() method to correctly restore the experimental info structure. This fixes bug #22704, the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions. This custom ExpInfo.from_xml() method is required to properly recreate the software, script and citation list data structures of the cdp.exp_info data structure, as these are special RelaxListType objects populated by Element objects (both from data_store.data_classes).
- Bug fix for the structure.delete user function. When individual atoms are deleted, the bonded atom data structure is now correctly updated to remove the now non-existent atom.
- Another bug fix for the structure.delete user function when deleting individual atoms. The bonded atom data structure consists of indices, so all indices after the deleted atom must be decremented by 1 (see the sketch below).
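A minimal, hedged sketch of the index bookkeeping described in the two structure.delete fixes above (the data layout and helper are hypothetical, not the internal structural object):

```python
# Hedged sketch: update a bonded-atom index structure after deleting one atom.
def delete_atom(positions, bonded, index):
    """Remove atom 'index' and fix up the list-of-lists of bonded atom indices."""
    del positions[index]
    del bonded[index]
    for connections in bonded:
        # Remove references to the deleted atom.
        while index in connections:
            connections.remove(index)
        # Decrement all indices which pointed past the deleted atom.
        for i, atom_index in enumerate(connections):
            if atom_index > index:
                connections[i] = atom_index - 1

positions = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]]
bonded = [[1], [0, 2], [1]]
delete_atom(positions, bonded, 1)
print(bonded)   # [[], []] - the two remaining atoms are no longer bonded to anything.
```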
- Bug fix for the CONECT records created by the structure.write_pdb user function. The atom numbers inside the structural object were being used for the CONECT records rather than the atom numbers used within the PDB file.
- Fix for writing out point files, when only one point is used. The code was testing for > 1 points to be present, before writing out point files. Bug #22753: dx.map does not work when only 1 point is used.
- Fix for bug #22563, the NS MMQ 2-site dispersion model running at 32-bit precision and not 64-bit as it should be. The numpy.complex64 32-bit types have been replaced by numpy.complex128 in the lib.dispersion.ns_mmq_2site module.
- Critical fix for kAB not belonging to the list of global parameters. kAB was only changed for the spin of interest, and not for the rest of the cluster. When the parameter vector is assembled via assemble_param_vector(spins=spins), the global parameter is taken from spin 0. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
- Improvements for PDB creation in the relax library for out of bounds structural coordinates. The lib.structure.pdb_write module atom() and hetatm() functions will now more gracefully handle atomic coordinates which are outside of the PDB limits of [-999.999, 9999.999]. When such coordinates are encountered, instead of producing a too long PDB line which does not pass the validation step, the functions will set the coordinates to the boundary value. This will at least allow a valid PDB file to be created, despite the warping of the coordinates.
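A minimal, hedged sketch of the clamping behaviour described above (not the actual atom()/hetatm() code):

```python
# Hedged sketch: clamp out-of-bounds coordinates to the fixed-width PDB limits.
PDB_COORD_MIN = -999.999
PDB_COORD_MAX = 9999.999

def clamp_coordinate(value):
    """Force a coordinate into the 8.3 fixed-width PDB column range."""
    return min(max(value, PDB_COORD_MIN), PDB_COORD_MAX)

print(clamp_coordinate(12345.6))    # 9999.999
print(clamp_coordinate(-2000.0))    # -999.999
print(clamp_coordinate(12.345))     # 12.345
```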
- Expanded the list of global dispersion parameters in the set_param_values() API method. This is a quick expansion of Troels' fix for the kAB parameter to allow for the release of relax 3.3.1. This is a small part of the discussion at http://thread.gmane.org/gmane.science.nmr.relax.scm/23948/focus=7188.
Links
For reference, the announcement for this release can also be found at following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.3.0
Description
This is a major feature release which includes a huge number of changes, as can be seen below. The most important change is an incredible speed up of all relaxation dispersion models. See the tables below for a comparison to the previous relax 3.2.3 release. Maximum possible advantage is taken of linear algebra operations to eliminate all of the slow Python looping and to obtain the fastest possible algorithms. As this is implemented using NumPy, conversion to C or FORTRAN would not result in any significant further speed advantage. With these huge speed ups, relax should now be one of the fastest software packages for analysing relaxation dispersion phenomena.
Other important features include the implementation of a zooming grid search algorithm for use in all analysis types, expanded plotting capabilities for R1ρ values in the relaxation dispersion analysis, the ability to optimise the R1 parameter in all off-resonance dispersion models, proper minimisation statistics resetting by the minimisation user functions, and a large expansion of the periodic table information for all elements in the relax library for correctly estimating molecular masses. Additional features are that there is better tab completion support in the prompt UI for Mac OS X, the addition of the time user function for printing the current date and time, the value.copy user function accepting a force argument for overwriting values, model nesting in the dispersion auto-analysis has been extended, the spin-lock offset is now shown in the dispersion analysis in the GUI, the relax_disp.r2eff_estimate user function has been added for fast R2eff and I0 parameter value and error estimation, and gradient and Hessian functions have been added to the exponential curve-fitting C module allowing for more advanced optimisation in the relaxation curve-fitting and dispersion analyses.
Note that this new 3.3 relax series breaks compatibility with old relax scripts. The important change, which is the main reason for starting the relax 3.3.x line, is the renaming of the calc, grid_search and minimise user functions to minimise.calculate, minimise.grid_search and minimise.execute respectively. Please update your scripts appropriately. A new relax feature is that old user function calls are detected in the prompt and script UIs and a RelaxError raised explaining what to rename the user function to.
Important bugfixes in this release include that relax can run on MS Windows systems again, numerous Python 3 fixes, the ability to load Bruker DC files when the file format has corrupted whitespace, that the GUI "close all analyses" feature works and no longer raises an error, that the structure.create_diff_tensor_pdb user function now works when no structural data is present, that the geometric prolate diffusion 3D PDB representation in a model-free analysis now aligns with the axis in the PDB as it was previously rotated by 90 degrees, and that the Monte Carlo simulations of the exponential curve-fitting in the relaxation dispersion analysis for the R2eff/R1ρ parameter errors are now correct and no longer underestimate the errors by half. For more details about the new features and the bug fixes, please see below. For fully formatted and easy to navigate release notes, please see http://wiki.nmr-relax.com/Relax_3.3.0.
To demonstrate the huge speed ups in the relaxation dispersion analysis, the following tables compare the speed of the dispersion models in relax 3.2.3 and the new 3.3.0 version, first for a single spin analysis and then for a clustered analysis:
Dispersion model | relax 3.2.3 timings | relax 3.3.0 timings | Speed change |
---|---|---|---|
No Rex | 0.824±0.017 | 0.269±0.016 | 3.068x faster. |
LM63 | 1.616±0.017 | 0.749±0.008 | 2.157x faster. |
LM63 3-site | 3.218±0.039 | 0.996±0.013 | 3.230x faster. |
CR72 | 2.639±0.042 | 1.536±0.019 | 1.718x faster. |
CR72 full | 2.808±0.027 | 1.689±0.075 | 1.663x faster. |
IT99 | 1.838±0.032 | 0.868±0.011 | 2.118x faster. |
TSMFK01 | 1.643±0.033 | 0.718±0.011 | 2.289x faster. |
B14 | 5.841±0.050 | 3.747±0.044 | 1.559x faster. |
B14 full | 5.942±0.053 | 3.841±0.044 | 1.547x faster. |
NS CPMG 2-site expanded | 8.309±0.066 | 4.070±0.073 | 2.041x faster. |
NS CPMG 2-site 3D | 245.180±2.162 | 45.410±0.399 | 5.399x faster. |
NS CPMG 2-site 3D full | 237.217±2.582 | 45.177±0.415 | 5.251x faster. |
NS CPMG 2-site star | 183.423±1.966 | 36.542±0.451 | 5.020x faster. |
NS CPMG 2-site star full | 183.622±1.326 | 36.788±0.343 | 4.991x faster. |
MMQ CR72 | 5.920±0.105 | 4.078±0.105 | 1.452x faster. |
NS MMQ 2-site | 363.659±2.610 | 82.588±1.197 | 4.403x faster. |
NS MMQ 3-site linear | 386.798±4.480 | 92.060±0.754 | 4.202x faster. |
NS MMQ 3-site | 391.195±3.442 | 93.025±0.829 | 4.205x faster. |
M61 | 1.576±0.022 | 0.862±0.009 | 1.828x faster. |
DPL94 | 22.794±0.517 | 1.101±0.008 | 20.705x faster. |
TP02 | 19.892±0.363 | 1.232±0.007 | 16.152x faster. |
TAP03 | 31.701±0.378 | 1.936±0.017 | 16.377x faster. |
MP05 | 24.918±0.572 | 1.428±0.015 | 17.454x faster. |
NS R1rho 2-site | 244.604±2.493 | 35.125±0.202 | 6.964x faster. |
NS R1rho 3-site linear | 287.181±2.939 | 68.245±0.536 | 4.208x faster. |
NS R1rho 3-site | 290.486±3.614 | 70.449±0.686 | 4.123x faster. |
Dispersion model | relax 3.2.3 timings | relax 3.3.0 timings | Speed change |
---|---|---|---|
No Rex | 0.818±0.016 | 0.008±0.001 | 97.333x faster. |
LM63 | 1.593±0.018 | 0.037±0.000 | 43.401x faster. |
LM63 3-site | 3.134±0.039 | 0.067±0.001 | 47.128x faster. |
CR72 | 2.610±0.047 | 0.115±0.001 | 22.732x faster. |
CR72 full | 2.679±0.034 | 0.122±0.005 | 22.017x faster. |
IT99 | 1.807±0.025 | 0.063±0.001 | 28.687x faster. |
TSMFK01 | 1.636±0.036 | 0.039±0.001 | 42.170x faster. |
B14 | 5.799±0.054 | 0.488±0.010 | 11.879x faster. |
B14 full | 5.803±0.043 | 0.484±0.006 | 11.990x faster. |
NS CPMG 2-site expanded | 8.326±0.081 | 0.685±0.012 | 12.160x faster. |
NS CPMG 2-site 3D | 244.869±2.382 | 41.217±0.467 | 5.941x faster. |
NS CPMG 2-site 3D full | 236.760±2.575 | 41.001±0.466 | 5.775x faster. |
NS CPMG 2-site star | 183.786±2.089 | 30.896±0.417 | 5.948x faster. |
NS CPMG 2-site star full | 183.243±1.615 | 30.898±0.343 | 5.931x faster. |
MMQ CR72 | 5.978±0.094 | 0.847±0.007 | 7.061x faster. |
NS MMQ 2-site | 363.138±3.041 | 75.906±0.845 | 4.784x faster. |
NS MMQ 3-site linear | 384.978±5.402 | 83.703±0.773 | 4.599x faster. |
NS MMQ 3-site | 388.557±3.261 | 84.702±0.762 | 4.587x faster. |
M61 | 1.555±0.021 | 0.034±0.001 | 45.335x faster. |
DPL94 | 22.837±0.494 | 0.140±0.002 | 163.004x faster. |
TP02 | 19.958±0.407 | 0.167±0.002 | 119.222x faster. |
TAP03 | 31.698±0.424 | 0.287±0.003 | 110.484x faster. |
MP05 | 25.009±0.683 | 0.187±0.007 | 133.953x faster. |
NS R1rho 2-site | 242.096±1.483 | 32.043±0.157 | 7.555x faster. |
NS R1rho 3-site linear | 280.778±2.589 | 62.866±0.616 | 4.466x faster. |
NS R1rho 3-site | 282.192±5.195 | 63.174±0.816 | 4.467x faster. |
Full details of this comparison can be seen in the test_suite/shared_data/dispersion/profiling directory. For information about each of these models, please see the links: http://wiki.nmr-relax.com/No_Rex, http://wiki.nmr-relax.com/LM63, http://wiki.nmr-relax.com/LM63_3-site, http://wiki.nmr-relax.com/CR72, http://wiki.nmr-relax.com/CR72_full, http://wiki.nmr-relax.com/IT99, http://wiki.nmr-relax.com/TSMFK01, http://wiki.nmr-relax.com/B14, http://wiki.nmr-relax.com/B14_full, http://wiki.nmr-relax.com/NS_CPMG_2-site_expanded, http://wiki.nmr-relax.com/NS_CPMG_2-site_3D, http://wiki.nmr-relax.com/NS_CPMG_2-site_3D_full, http://wiki.nmr-relax.com/NS_CPMG_2-site_star, http://wiki.nmr-relax.com/NS_CPMG_2-site_star_full, http://wiki.nmr-relax.com/MMQ_CR72, http://wiki.nmr-relax.com/NS_MMQ_2-site, http://wiki.nmr-relax.com/NS_MMQ_3-site_linear, http://wiki.nmr-relax.com/NS_MMQ_3-site, http://wiki.nmr-relax.com/M61, http://wiki.nmr-relax.com/DPL94, http://wiki.nmr-relax.com/TP02, http://wiki.nmr-relax.com/TAP03, http://wiki.nmr-relax.com/MP05, http://wiki.nmr-relax.com/NS_R1rho_2-site, http://wiki.nmr-relax.com/NS_R1rho_3-site_linear, http://wiki.nmr-relax.com/NS_R1rho_3-site.
For the CPMG statistics: 3 fields, each with 20 CPMG points, giving a total of 60 dispersion points per spin.
For the R1ρ experiments: 3 fields, each with 10 spin-lock offsets, and each offset measured at 5 different spin-lock field strengths. Per field there are 50 dispersion points, giving a total of 150 dispersion points per spin.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.3.0
(3 September 2014, from /trunk)
https://sourceforge.net/p/nmr-relax/code/ci/3.3.0/tree/
Features
- Huge speed ups for all of the relaxation dispersion models ranging from 1.452x to 163.004x times faster. The speed ups for the clustered spin analysis are far greater than for the single spin analysis.
- Implementation of a zooming grid search algorithm for optimisation in all analyses. This includes the addition of the minimise.grid_zoom user function to set the zoom level. The grid width will be divided by 2^zoom_level and centred at the current parameter values. If the new grid is outside of the bounds of the original grid, the entire grid will be translated so that it lies entirely within the original.
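The zooming logic can be sketched for a single parameter as follows (an illustrative sketch only, not the minimise.grid_zoom backend):

```python
# Hedged sketch of the zooming grid bound calculation for a single parameter.
def zoomed_bounds(lower, upper, current, zoom_level):
    """Return new (lower, upper) grid bounds for one parameter."""
    width = (upper - lower) / 2**zoom_level
    new_lower = current - width / 2.0
    new_upper = current + width / 2.0
    # Translate the grid so that it lies entirely within the original bounds.
    if new_lower < lower:
        new_upper += lower - new_lower
        new_lower = lower
    elif new_upper > upper:
        new_lower -= new_upper - upper
        new_upper = upper
    return new_lower, new_upper

print(zoomed_bounds(0.0, 100.0, 90.0, zoom_level=2))   # (75.0, 100.0)
```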
- Increased the amount of user feedback for the minimise.grid_search user function. A comment for each parameter is now included in the printed grid search setup table. This indicates whether the lower or upper bounds, or both, have been supplied and whether a preset value has been used instead.
- Expanded support for R1ρ 2D graph plotting in the relax_disp.plot_disp_curves user function, as the X-axis can now be the ν1 value, the effective field ωeff, or the rotating frame tilt angle θ. The plots are also interpolated over the spin-lock offset.
- Ability to optimise the R1 relaxation rate parameter in the off-resonance relaxation dispersion models.
- Creation of the relax_disp.r1_fit user function for activating and deactivating R1 fitting in the dispersion analysis.
- Better tab completion support in the prompt UI for Mac OS X users. For some Python versions, the Mac supplied libedit library is used rather than GNU readline. But this library uses a completely different language and hence tab completion was non-functional on these systems. The library difference is now detected and the correct language sent into libedit to activate tab completion.
- Created the time user function. This is just a shortcut for printing out the output of the time.asctime() function.
- The value.copy user function now accepts the force flag to allow destination values to be overwritten.
- Expanded model nesting capabilities in the relaxation dispersion auto-analysis to speed up the protocol.
- The spin-lock offset is now included in the spectra list GUI element for the relaxation dispersion analysis.
- Creation of the relax_disp.r2eff_estimate user function for the fast estimation of R2eff/R1ρ values and errors when full exponential curves have been collected. This experimental feature uses linearisation to estimate the R2eff and I0 parameters and the covariance matrix to estimate parameter errors.
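The linearisation idea behind such an estimate can be sketched as follows; this is a hedged illustration of fitting I(t) = I0·exp(-R2eff·t) by taking logs, not the relax_disp.r2eff_estimate implementation:

```python
# Hedged sketch: estimate R2eff and I0 by linear least squares on log intensities.
import numpy as np

def estimate_r2eff(times, intensities):
    """Linearised fit of I(t) = I0 * exp(-R2eff * t)."""
    t = np.asarray(times, dtype=np.float64)
    log_i = np.log(np.asarray(intensities, dtype=np.float64))
    # Design matrix for ln(I) = ln(I0) - R2eff * t.
    A = np.vstack([np.ones_like(t), -t]).T
    (log_i0, r2eff), *_ = np.linalg.lstsq(A, log_i, rcond=None)
    return np.exp(log_i0), r2eff

times = [0.0, 0.02, 0.04, 0.08, 0.16]
i0_true, r2_true = 1.0e6, 12.0
intensities = [i0_true * np.exp(-r2_true * t) for t in times]
print(estimate_r2eff(times, intensities))   # ~ (1000000.0, 12.0)
```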
- Gradients and Hessians are implemented for the exponential curve-fitting, hence all optimisation algorithms and constraint algorithms are now available for this analysis type. Using Newton optimisation instead of Nelder-Mead Simplex can save over an order of magnitude in computation time. This is also available in the relaxation dispersion analysis.
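As a hedged illustration of why analytic gradients help here, the χ2 gradient for the two-parameter exponential I(t) = I0·exp(-R·t) can be written down directly (this is the standard textbook result, not the C module code):

```python
# Hedged sketch: chi2 and its analytic gradient for I(t) = I0 * exp(-R * t).
import numpy as np

def chi2_and_grad(params, times, values, errors):
    i0, r = params
    t = np.asarray(times)
    back_calc = i0 * np.exp(-r * t)
    diff = (values - back_calc) / errors**2
    chi2 = np.sum((values - back_calc)**2 / errors**2)
    # Partial derivatives of chi2 with respect to I0 and R.
    dchi2_di0 = -2.0 * np.sum(diff * np.exp(-r * t))
    dchi2_dr = 2.0 * np.sum(diff * i0 * t * np.exp(-r * t))
    return chi2, np.array([dchi2_di0, dchi2_dr])

t = np.array([0.0, 0.1, 0.2, 0.4])
vals = 100.0 * np.exp(-5.0 * t)
errs = np.ones_like(t)
print(chi2_and_grad([100.0, 5.0], t, vals, errs))   # chi2 = 0, gradient = [0, 0]
```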
- The minimisation statistics are now being reset for all analysis types. The minimise.calculate, minimise.grid_search, and minimise.execute user functions now all reset the minimisation statistics for either the model or the Monte Carlo simulations prior to performing any optimisation. This is required for both parallelised grid searches and repetitive optimisation schemes to allow the result to overwrite an old result in all situations, as sometimes the original chi-squared value is lower and the new result hence is rejected.
- Large expansion of the periodic table information in the relax library to include all elements, the IUPAC 2011 standard atomic weights for all elements, mass numbers and atomic masses for all stable isotopes, and gyromagnetic ratios.
- Significant improvements to the structure centre of mass calculations by using the new periodic table information - all elements are now supported and exact masses are now used.
- Added a button to the spectra list GUI element for the spectrum.error_analysis user function. This is placed after the 'Add' and 'Delete' buttons and is used in the NOE, R1 and R2 curve-fitting and relaxation dispersion analyses.
- RelaxErrors are now raised in the prompt or script UI if an old user function is called, printing out the names of the old and new user functions. This is for help in upgrading old scripts and is currently for the calc(), grid_search(), and minimise() user function calls.
Changes
- Improved model handling for the internal structural object. The set_model() method has been added to allow either a model number to be set to the first unnumbered model (in preparation for adding new models) or to allow models to be renumbered. The logic of the add_model() has also been changed. Rather than looping over all atoms of the first model and copying them, which does not work due to the model validity checks, the entire MolList (molecule list) data structure is copied using copy.deepcopy() to make a perfect copy of the structural data. The ModelList.add_item() method has also been modified to return the newly added or numbered model. This is used by the add_model() structural object method to obtain the model object.
- Updated the Mac OS X framework set-up instruction document. New sections have been added for the nose and matplotlib Python packages, as nose is needed for the numpy and scipy testing frameworks and matplotlib might be a useful optional dependency in the future. The mpi4py section has been updated to avoid the non-framework fink version of mpicc, which cannot produce universal binaries. A few other parts also have small edits.
- Removed the Freecode section from the release checklist as Freecode has been permanently shut down. The old relax links are still there (http://freecode.com/projects/nmr-relax), but Freecode is dead (http://freecode.com/about).
- Fix for the internal structural object MolContainer.last_residue() method. This can now operate when no structural information is present, returning 0 instead of resulting in an IndexError.
- Updated the script for finding unused imports in the relax source code. Now the file name is only printed for Python files which have unused imports.
- Completely removed all mentions of Freecode from the release document. The old relax links are still there, but Freecode is dead.
- Updated the minfx version in the release checklist document to 1.0.8. This version has not been released yet, but it will include important fixes and additions for constrained parallelised grid searches.
- Fix for a broken link in the development chapter of the relax manual.
- Fixes for dead hyperlinks in the relaxation dispersion chapter of the relax manual. The B14 model links to http://www.nmr-relax.com/api/3.2/lib.dispersion.b14-module.html were broken as the B in B14 was capitalised.
- Sent in the verbosity argument value to the minfx.grid.grid_split() function. The minfx function in the next release (1.0.8) will now be more verbose, so this will help with user feedback when running the model-free analysis on a cluster or multi-core system using MPI.
- The time user function now uses the chronometer Oxygen icon in the GUI.
- Removed the line wrapping in the epydoc parameter section of the optimisation function docstrings. This is for the pipe_control.minimise module.
- More docstring line wrapping removal from pipe_control.minimise.
- Bug fix for the parameter units descriptions. This only affects a few rare parameters. The specific analysis API parameter object units() method was incorrectly checking if the units value is a function - it was checking the parameter conversion factor instead.
- Modified the align_tensor.init user function so that the parameters are now optional. This allows alignment tensors to be initialised without specifying the parameter values for that tensor.
- Modified the profiling script to have a different number of NCYC points per frequency. This is to complicate the data so that any erroneous reshaping of the data is discovered. It is expected that experiments can have a different number of NCYC points per spectrometer frequency. Task #7807: Speed-up of dispersion models for clustered analysis.
- Initial try at altering the target function calc_CR72_chi2. This is the first test of restructuring the arrays to allow for higher dimensional computation. All numpy arrays have to have the same shape to be multiplied together. The dimensions should be [ei][si][mi][oi][di], i.e. [experiment][spin][spectrometer frequency][offset][dispersion point], which is complicated by the number of dispersion points changing per spectrometer frequency (see the sketch of the data layout below). Task #7807: Speed-up of dispersion models for clustered analysis. This implementation brings a high overhead: the first test shows no gain in time, as the creation of the arrays takes all of the time.
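A hedged sketch of the kind of rank-5 structure referred to throughout these entries (the shapes and variable names are illustrative assumptions, not the relax target function):

```python
# Hedged sketch: pack dispersion data into a [ei][si][mi][oi][di] numpy structure.
import numpy as np

NE, NS, NM, NO, ND = 1, 3, 2, 1, 20   # experiments, spins, fields, offsets, max disp points
values = np.zeros((NE, NS, NM, NO, ND), dtype=np.float64)
errors = np.ones((NE, NS, NM, NO, ND), dtype=np.float64)
back_calc = np.zeros((NE, NS, NM, NO, ND), dtype=np.float64)

# Because every array shares the same shape, the chi-squared sum becomes a single
# vectorised expression instead of five nested Python loops.
chi2 = np.sum((values - back_calc)**2 / errors**2)
print(values.shape, chi2)   # (1, 3, 2, 1, 20) 0.0
```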
- Temporarily changed the lib/dispersion/cr72.py function to an unsafe state. This change turns off all the safety measures, since they have to be re-implemented for the higher dimensional structures.
- Altered the profiling script to report cumulative timings and to save them to temporary files. Task #7807: Speed-up of dispersion models for clustered analysis. This indeed shows that the efficiency has gone down.
- Added a printout of the χ2 value to the profiling script. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the creation of the special numpy structures outside of the target function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified the profiling script to calculate correct values when setting up the R2eff values. This is to test that the returned χ2 value is zero. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the looping over the experiment and offset indices in calc_chi2, as they are always 0 anyway. This brings a small speed up. Task #7807: Speed-up of dispersion models for clustered analysis.
- In the profiling script, moved the calculation of the values up one level. This is to better see the output of the profiling iterations for cr72.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for the calculation of the Larmor frequency per spin in the profiling script. The frequency loop should also be shifted up, as the value was being extracted as 0.0. Task #7807: Speed-up of dispersion models for clustered analysis.
- Re-inserted the safety checks in the lib/dispersion/cr72.py file. These are re-inserted for the rank-1 cases, which makes the unit tests pass again. Task #7807: Speed-up of dispersion models for clustered analysis.
- Important fix for extracting the correct shape when creating new arrays. If using just one field, or having the same number of dispersion points, the shape would extend to the dispersion number, reporting [ei][si][mi][oi][di] when calling ndarray.shape. The shape always has to be reported as [ei][si][mi][oi]. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made it easier to switch between single and cluster reporting in profiling script. Task #7807: Speed-up of dispersion models for clustered analysis.
- Important fix for the creation of the multi dimensional pA numpy array. It should be created with numpy.zeros() instead of numpy.ones() for the [ei][si][mi][oi] shape. This allows for rapid testing of all dimensions with np.allclose(pA, numpy.ones(dw.shape)), as pA can have unfilled values when the number of dispersion points differs per spectrometer frequency. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added unit tests demonstrating edge case 'no Rex' failures of the CR72 full model for a clustered multi dimensional calculation. This is implemented for one field. This is to allow the catching of math domain errors to be implemented before they occur. These tests cover all parameter value combinations which result in no exchange. Task #7807: Speed-up of dispersion models for clustered analysis.
- Re-implemented the safety checks in lib/dispersion/cr72.py. These are now implemented for both the rank-1 float array and higher dimensions, making the unit tests pass for multi dimensional computing. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added unit tests demonstrating edge case 'no Rex' failures of the CR72 full model for a clustered multi dimensional calculation. This is implemented for three fields. This is to allow the catching of math domain errors to be implemented before they occur. These tests cover all parameter value combinations which result in no exchange. Task #7807: Speed-up of dispersion models for clustered analysis.
- The special numpy structure is now also created for CR72. This makes most system tests pass. Task #7807: Speed-up of dispersion models for clustered analysis.
- Critical fix for the slicing of values in the target function. This makes the system test Relax_disp.test_sod1wt_t25_to_cr72 pass. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added the self.has_missing keyword in the initialisation of the Dispersion class. This is to test once per spin or cluster, saving a loop over the dispersion points when collecting the data. Task #7807: Speed-up of dispersion models for clustered analysis.
- Created multi dimensional error and value numpy arrays. This is to calculate the χ2 sum much faster. Reordered the loop over missing data points so that it is only initiated if missing points are detected. Task #7807: Speed-up of dispersion models for clustered analysis.
- Switched the looping from spin->frq to frq->spin. Since the number of dispersion points is the same for all spins, this allows the calculation of the pA and kex arrays to be moved one level up, saving a lot of computation. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed all the creation of special numpy arrays to be of float64 type. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the data filling of the special error and value numpy arrays to the initialisation of the Dispersion class. These values do not change and can safely be stored outside. Task #7807: Speed-up of dispersion models for clustered analysis.
- A tiny bit more speed, gained by removing the temporary storage of the χ2 calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made copies of the numpy arrays instead of creating them anew. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added a self.frqs_a as a multidimensional numpy array. Task #7807: Speed-up of dispersion models for clustered analysis.
- Small fix for the indices to the errors and values numpy array. Task #7807: Speed-up of dispersion models for clustered analysis.
- Lowered the number of iterations to the profiling scripts. This is to use the profiling script as bug finder. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the calculation of dw_frq out of the spin and spectrometer loops. This is done by having a special 1/0 spin numpy array which turns the values on or off in the numpy array multiplication. The multiplication needs to first axis expand Δω and then tile the arrays according to the numpy structure (see the sketch below). Task #7807: Speed-up of dispersion models for clustered analysis.
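The axis expansion and tiling pattern used in this and the surrounding entries can be sketched as follows (illustrative shapes only):

```python
# Hedged sketch: expand a per-spin parameter and tile it out to the full structure.
import numpy as np

NE, NS, NM, NO, ND = 1, 3, 2, 1, 20
dw = np.array([1.0, 2.0, 3.0])               # One chemical shift difference per spin.
frqs = np.ones((NE, NS, NM, NO, ND))          # Per-frequency conversion structure.

# Axis-expand dw from shape (NS,) to (1, NS, 1, 1, 1), then tile it to full rank.
dw_expanded = dw[None, :, None, None, None]
dw_full = np.tile(dw_expanded, (NE, 1, NM, NO, ND))
dw_frq = dw_full * frqs                       # Δω for every dispersion point.
print(dw_frq.shape)                           # (1, 3, 2, 1, 20)
```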
- Moved the calculation of pA and kex out of all loops. This was done by having two special 1/0 spin structure arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the dw_frq_a numpy array, as it was not necessary. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed all looping over spins and spectrometer frequencies. This was the last loop! Task #7807: Speed-up of dispersion models for clustered analysis.
- Reordered arrays for beauty of code. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made the back_calc array be initiated as copy of the values array. Task #7807: Speed-up of dispersion models for clustered analysis.
- Small edit to profiling script, to help bug finding. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fixed the arrays so that they are correctly initialised with one or zero values. Task #7807: Speed-up of dispersion models for clustered analysis.
- Very important fix for only replacing the part of the data array which has NaN values. Before, all values were replaced, which was wrong. Task #7807: Speed-up of dispersion models for clustered analysis.
- Needed to increase the relative tolerance when testing if the pA array is 1. Now the system test Relax_disp.test_hansen_cpmg_data_missing_auto_analysis passes. Also added some comment lines to prepare for the masked replacement of values, for example if only some of the etapos values should be replaced. Task #7807: Speed-up of dispersion models for clustered analysis.
- Restored profiling script to normal. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made the logic and comments much clearer about how to reshape, expand axis, and tile numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented a masked array search for where the "missing" array is equal to 1. This makes it possible to replace all values with this mask from the value array, eliminating the last loops over the missing values (see the sketch below). It took over 4 hours to figure out that the mask should be accessed via mask.mask to return the full boolean structure. Task #7807: Speed-up of dispersion models for clustered analysis.
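A hedged sketch of the masked replacement trick, including the use of the .mask attribute mentioned above (the variable names are illustrative):

```python
# Hedged sketch: replace back-calculated values at missing data points using a mask.
import numpy as np
from numpy import ma

values = np.array([[10.0, 12.0, 0.0], [9.0, 0.0, 11.0]])
missing = np.array([[0, 0, 1], [0, 1, 0]])     # 1 marks a missing dispersion point.
back_calc = np.array([[10.5, 11.5, 13.0], [8.5, 9.5, 10.5]])

# Build a masked array where missing == 1, then use its .mask attribute (the full
# boolean structure) to copy the measured values over the back-calculated ones,
# so missing points contribute zero to the chi-squared sum.
mask = ma.masked_where(missing == 1, missing)
back_calc[mask.mask] = values[mask.mask]
print(back_calc)
# [[10.5 11.5  0. ]
#  [ 8.5  0.  10.5]]
```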
- Yet another small improvement for the profiling script. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the multi dimensional structure of pA. pA is not multi-dimensional, and can just be multiplied with numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for testing of pA in lib function, when pA is just float. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified unit tests, so pA is sent to target function as float instead of array. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the multi dimensional structure of kex. kex is not multi-dimensional, and can just be multiplied with numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for testing of kex in lib function, when kex is just float. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified unit tests, so kex is sent to target function as float instead of array. Task #7807: Speed-up of dispersion models for clustered analysis.
- Important fix for replacing values if eta_pos > 700 is violated. This fixes the system test Relax_disp.test_sod1wt_t25_to_cr72, which failed after converting kex to a numpy float. The trick is to make a numpy mask which stores the positions where the values should be replaced, and then to replace the values just before the last return. This makes sure that not all values are changed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Increased the kex speed to 1e7 in the clustered unit test cases. This is to demonstrate where there will be no exchange. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added a multi-dimensional numpy array χ2 value calculation function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Called the newly created χ2 function to calculate for multi dimensional numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
- Renamed chi2_ND to chi2_rankN. This is a better name for representing multiple axis calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made special ei, si, mi, and oi numpy structure array. This is for rapid speed-up of numpy array creation in target function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced self.spins_a with self.disp_struct. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made initialisation structures for Δω. Task #7807: Speed-up of dispersion models for clustered analysis.
- Initial try to reshape Δω faster. Task #7807: Speed-up of dispersion models for clustered analysis.
- Switched to use self.ei, self.si, self.mi, self.oi, self.di. This is for better reading of code. Task #7807: Speed-up of dispersion models for clustered analysis.
- Commented out the sys.exit() call which would otherwise make the code fail for an incorrect calculation of Δω. Task #7807: Speed-up of dispersion models for clustered analysis.
- Copied profiling script for CPMG model CR72 to R1ρ DPL94 model. The framework of the script will be the same, but the data a little different. Task #7807: Speed-up of dispersion models for clustered analysis.
- Started converting profiling script to DPL94. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced self.(ei,si,mi,oi,di) with self.(NE,NS,NM,NO,ND). These numbers represent the maximum size of each dimension, instead of an index. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added the ei index, when creating the first dw_mask. Task #7807: Speed-up of dispersion models for clustered analysis.
- Reordered how the Δω init structures are created. Task #7807: Speed-up of dispersion models for clustered analysis.
- Clearing the dw_struct before calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
- Started using the new way of constructing Δω. This is for running system tests. Note, somewhere in the Δω array, the frequencies will be different between the two implementations. But apparently, this does not matter. Task #7807: Speed-up of dispersion models for clustered analysis.
- Inserted temporary method to switch for profiling. Task #7807: Speed-up of dispersion models for clustered analysis.
- First try to speed-up the old Δω structure calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
- Simplified calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
- Yet another try to implement a fast Δω structure method. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented the fastest way to calculate the Δω structure. This uses the numpy ufunc multiply.outer function to create the outer array, and then multiply with the frqs_structure. Task #7807: Speed-up of dispersion models for clustered analysis.
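A hedged sketch of the multiply.outer construction (illustrative shapes and names, not the relax code):

```python
# Hedged sketch: build the Δω structure with a numpy outer product, then scale by frqs.
import numpy as np

NE, NS, NM, NO, ND = 1, 3, 2, 1, 20
dw = np.array([1.0, 2.0, 3.0])                          # Shape (NS,).
ones_struct = np.ones((NE, NM, NO, ND))                 # The remaining dimensions.
frqs_struct = np.full((NE, NS, NM, NO, ND), 2.0 * np.pi * 100e6)

# multiply.outer gives shape (NS, NE, NM, NO, ND); move the spin axis into place.
dw_struct = np.moveaxis(np.multiply.outer(dw, ones_struct), 0, 1)
dw_frq = dw_struct * frqs_struct
print(dw_struct.shape, dw_frq.shape)                    # (1, 3, 2, 1, 20) twice
```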
- Renamed Δω temporary structure to generic structure. Task #7807: Speed-up of dispersion models for clustered analysis.
- Restructured the calculation of R2A0 and R2B0 to the most efficient way. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made the lib/dispersion/CR72.py to a numpy multi dimensional numpy array calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the catching of Δω being zero to use a masked array. Implemented backwards compatibility with the unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
- Bugfix for the testing of whether kex is zero. It was instead being tested whether kex was equal to 1.0. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented masked replacement if fact is less than 1.0. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced isnan mask with function that catches all invalid values.
- Removed the masked replacement if fact is less than 1.0. This is very strange, but otherwise the system test Relax_disp.test_hansen_cpmg_data_missing_auto_analysis would fail. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the slow allclose() function used to test if R2A0 and R2B0 are equal. It is much faster to simply subtract them and check that the sum is not 0.0. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced the temporary variable R2eff with back_calc, and used numpy subtract to speed things up. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made the lib function into a pure numpy array calculation. This requires that R2A0, R2B0 and Δω have the same dimensions as the dispersion points. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changes to the unit tests so that the data is sent to the target function in numpy array format. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the creation of an unnecessary structure by using numpy multiply. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the mask which finds where to replace values into the __init__ function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Copied profiling script for CR72 to B14 model. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified profiling script for the B14 model. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified the B14 model lib file to the faster numpy multidimensional mode. The implementation comes almost directly from the CR72 model file. Task #7807: Speed-up of dispersion models for clustered analysis.
- Reverted the use of the mask "mask_set_blank". It did not work, and many system tests started failing. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the target function to handle the B14 model for faster numpy computation. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed unit test for B14 to match numpy input requirement. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added additional tests in B14, when math errors can occur. This is very easy with a conditional masked search in arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
- Comment fix for finding when E0 is above 700 in lib function of B14. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed use of "asarray", since the variables are already arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the target function for the CR72 model. The original R2A0, R2B0 and Δω parameters are now also passed to CR72. Δω is tested for zero in order to return flat lines, and it is faster to search in the smaller numpy array than in the 5-dimensional Δω array. R2A0 and R2B0 are also subtracted to see if the full model should be used, and in the same way it is faster to subtract the smaller arrays. These small tricks are expected to give a 5-10% speed up. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made the lib function of CR72 accept the original R2A0, R2B0 and Δω arrays. This is for speed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the unit tests to send in the original R2A0, R2B0 and dw_orig for the testing of the CR72 lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the profiling script to send R2A0, R2B0 and Δω as the original parameters to the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the target function for the B14 model. The original Δω parameter is now also passed to B14. Δω is tested for zero in order to return flat lines, and it is faster to search in the smaller numpy array than in the 5-dimensional Δω array. These small tricks are expected to give a 5-10% speed up. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made the lib function of B14 accept the original Δω array. This is for speed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the unit tests to send in the original dw_orig for the testing of the B14 lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the profiling script to send Δω as the original parameter to the B14 lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Copied profiling script for CR72 model to TSMFK01 model. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified profiling script to be used for model TSMFK01. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified target function for model TSMFK01, to send in Δω as original parameter. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified the lib function for the TSMFK01 model to accept dw_orig as input, and replaced the functions for finding math domain errors with masked replacements. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made the unit tests for the TSMFK01 model send in R2A0 and Δω as numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
- Large increase in speed for the TSMFK01 model by changing the target function to use multidimensional numpy arrays in the calculation. This is done by restructuring the data into multidimensional arrays of dimension [NE][NS][NM][NO][ND], i.e. the number of experiments, spins, magnetic field strengths, offsets, and the maximum number of dispersion points. The speed comes from using numpy ufunc operations. The new version is 2.4x as fast for a per-spin calculation, and 54x as fast for a clustered analysis.
- Replacing math domain checking in model DPL94, with masked array replacement. Task #7807: Speed-up of dispersion models for clustered analysis.
- First try to speed up model DPL94. This has not succeeded, since system test: Relax_disp.test_dpl94_data_to_dpl94 still fails. Task #7807: Speed-up of dispersion models for clustered analysis.
- Trying to move some of the structures into their own part. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for forgetting to raise frqs to the power of 2. This was found by inspecting all of the printouts before and after the implementation. The new implementation of DPL94 now passes all system and unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the expansion of the R1 structure out of the for loops. This is to speed up the __init__() method of the target function class. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the packing of errors and values out of the for loop in the __init__() method of the target function class. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the multi dimensional expansion of inv_relax_times out of the for loop. This can be done for all structures which do not have missing points. Task #7807: Speed-up of dispersion models for clustered analysis.
- For inv_relax_times, expanded one axis and tiled up to the number of spins before reshaping and expanding to the full structure. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the expansion of frqs out of for loops. Task #7807: Speed-up of dispersion models for clustered analysis.
- Documentation fix for description of input arrays to lib functions. Task #7807: Speed-up of dispersion models for clustered analysis.
- Converted TAP03 model to use multi dimensional numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made Δω in the unit tests of TAP03 a numpy array. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced the loop structure in target function of TAP03 with numpy arrays. This makes the model faster. Task #7807: Speed-up of dispersion models for clustered analysis.
- Reordered the initialization structure of the special numpy arrays. This was done in the init part of the target function of relaxation dispersion. Task #7807: Speed-up of dispersion models for clustered analysis.
- The MODEL_TSMFK01 model now also has self.tau_cpmg calculated in the init part. Task #7807: Speed-up of dispersion models for clustered analysis.
- The methods for replacing math domain errors in the TP02 model have been replaced with numpy masks. The documentation has also been fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for sending in Δω as a numpy array in the unit tests of the TP02 model. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced the target function for the TP02 model to use higher dimensional numpy array structures. This makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for adding the TP02 model to the part of the init class which initialises the preparation of the higher dimensional numpy structures. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made the NOREX model a faster numpy array calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed an unnecessary frq_struct in init of target function. frqs can just be expanded, and back_calc is cleaned afterwards with disp_struct. Task #7807: Speed-up of dispersion models for clustered analysis.
- The methods for replacing math domain errors in the M61 model have been replaced with numpy masks. The documentation has also been fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for sending in r1rho_prime and phi_ex_scaled as numpy array in unit tests of model M61. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced target function for model M61, to use higher dimensional numpy array structures. That makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
- The methods for replacing math domain errors in the M61b model have been replaced with numpy masks. The documentation has also been fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for sending in r1rho_prime and Δω as numpy array in unit tests of model M61b. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced target function for model M61b, to use higher dimensional numpy array structures. That makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the number of points sent to the lib function of model TSMFK01. It is not used anymore. Also removed in the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the number of points and pB sent to the lib function of model TP02. The number of points is not used anymore, and pB is calculated in the lib function instead. Also removed in the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the number of points and pB sent to the lib function of model TP02. pB is calculated in the lib function instead. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the number of points, pB, kAB and kBA sent to the lib function of model B14. The number of points is not used anymore. pB is calculated in the lib function instead, as are kAB and kBA. Fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for sending number of points in target function of TSMFK01. This was removed in lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the number of points and pB sent to the lib function of model TAP03. The number of points is not used anymore, and pB is calculated in the lib function instead. Fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the number of points sent to the lib function of model CR72. It is not used anymore. Fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the number of points sent to the lib function of model DPL94. It is not used anymore. Fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the number of points sent to the lib function of model M61. It is not used anymore. Fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the number of points sent to the lib function of model M61b. It is not used anymore. Fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
- The methods for handling math domain errors in the MP05 model have been replaced with numpy masks. The number of points has been removed, as the masks replace this. The calculation of pB has been moved to the lib function for simplicity. The documentation is also fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for sending in Δω as numpy array in unit tests of model MP05. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced target function for model MP05, to use higher dimensional numpy array structures. That makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
- The methods for handling math domain errors in the LM63 model have been replaced with numpy masks. The number of points has been removed, as the masks replace this. The documentation is also fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for sending in the number of points in the unit tests of model LM63. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced target function for model LM63, to use higher dimensional numpy array structures. That makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for the replacement of values with the mask when φex is zero. This can be spin specific. The system test Relax_disp.test_hansen_cpmg_data_to_lm63 had started to fail. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for sending in R20 and φex as numpy array in unit tests of LM63. This is after using masks as replacement. Task #7807: Speed-up of dispersion models for clustered analysis.
- Lowered a parameter check by 1 digit in the system test Relax_disp.test_hansen_cpmg_data_to_lm63. It is unknown why this has occurred. Task #7807: Speed-up of dispersion models for clustered analysis.
- The methods for handling math domain errors in the IT99 model have been replaced with numpy masks. The number of points has been removed, as the masks replace this. pB is now calculated inside the lib function, making it simpler. The documentation is also fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for sending in R20 and Δω as numpy array in unit tests of IT99. This is after using masks as replacement. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced target function for model IT99, to use higher dimensional numpy array structures. That makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
- The methods for handling math domain errors in the ns_cpmg_2site_expanded model have been replaced with numpy masks. The number of points has been removed, as the masks replace this. pB is now calculated inside the lib function, making it simpler. kAB and kBA are also now calculated here. The documentation is also fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for sending in R20 and Δω as numpy array in unit tests of ns_cpmg_2site_expanded. This is after using masks as replacement. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced target function for model ns_cpmg_2site_expanded, to use higher dimensional numpy array structures. That makes the model much faster. I cannot get system test: Relax_disp.test_cpmg_synthetic_dx_map_points to pass. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for the system test Relax_disp.test_cpmg_synthetic_dx_map_points. Simply copying self.back_calc_a to self.back_calc solved the problem. In specific_analysis.relax_disp.optimisation, the back_calc_r2eff() function gets the last values stored in the class. This is in "class Disp_result_command(Result_command)" with self.back_calc = back_calc, and back_calc_r2eff() returns model.back_calc. Task #7807: Speed-up of dispersion models for clustered analysis.
- The methods for handling math domain errors in the ns_cpmg_2site_3d model have been replaced with numpy masks. The number of points has been removed, as the masks replace this. pB is now calculated inside the lib function, making it simpler. kAB and kBA are also now calculated here. The magnetisation vector is also now filled in the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for unit tests of model NS CPMG 2-site 3D to the reduced input to the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Change to the target function of the model NS CPMG 2-site 3D to use the reduced input to the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed linked matrix/vector inner products into chained dot expressions. Task #7807: Speed-up of dispersion models for clustered analysis.
- Set up the essential dot matrix so that it is initialised earlier. Task #7807: Speed-up of dispersion models for clustered analysis.
- Lowered the number of dot iterations by pre-preparing the dot matrix another round. Task #7807: Speed-up of dispersion models for clustered analysis.
- Turned the Mint vector into a (7, 1) matrix so that the dimensions fit the evolution matrix. Task #7807: Speed-up of dispersion models for clustered analysis.
- Lowered the number of dot operations by pre-preparing the evolution matrix another round. In the system tests, the power is always even. The trick to removing this for loop would be to make a general multi dot function (see the sketch below). Task #7807: Speed-up of dispersion models for clustered analysis.
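A minimal sketch of what pre-preparing the evolution matrix "another round" means (generic numpy with placeholder values, not the relax code itself): because the power is even, squaring the propagator once halves the number of dot calls in the inner loop.

    # Sketch only: halve the dot count when the CPMG power is even.
    import numpy as np
    evolution = np.eye(7)                        # placeholder 7x7 evolution matrix
    Mint = np.ones((7, 1))                       # placeholder magnetisation column vector
    power = 8                                    # even number of CPMG blocks
    evolution_sq = np.dot(evolution, evolution)  # pre-prepare "another round"
    for _ in range(power // 2):                  # half as many loop iterations
        Mint = np.dot(evolution_sq, Mint)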
- Moved the bulk operation of model NS CPMG 2-site 3D into the lib file. This is to keep the API clean. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the unit test of NS CPMG 2-site 3D, after the input to the function has changed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the target function for NS CPMG 2-site 3D. This reflects the new API layout. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the lib function of NS CPMG 2-site star, to get input of Δω and R2A0+R2B0 of higher dimensional type. This is to move the main operations from the target function to the lib function, and make the API code clean and consistent. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the target function of NS CPMG 2-site star, to reflect the input to the function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made the dot evolution structure faster for NS CPMG 2-site 3D. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented the BLAS method of dot product, which should be faster. I cannot get the "out" argument to work. Task #7807: Speed-up of dispersion models for clustered analysis.
- Small fix for the dot method. But the out argument does not work. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented the dot method via BLAS. This needs an array with one more axis. Task #7807: Speed-up of dispersion models for clustered analysis.
- Last try to use the out argument. In the last dotting loop, the out argument won't work, no matter what I do. Task #7807: Speed-up of dispersion models for clustered analysis.
- Inner product fix in model NS CPMG 2-site 3D. Fix for the system tests Relax_disp.test_cpmg_synthetic_ns3d_to_b14, Relax_disp.test_cpmg_synthetic_ns3d_to_CR72, and Relax_disp.test_cpmg_synthetic_ns3d_to_CR72_noise_cluster. The number of dot operations with Mint should correspond to the power. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced the temporary structure self.frqs_a with self.frqs, which works for all target functions. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced the temporary structure self.cpmg_frqs_a with self.cpmg_frqs, which works for all target functions. Task #7807: Speed-up of dispersion models for clustered analysis.
- Restructured all data structures into higher dimension in target function. Fix for the input to the different models. Restructured how to detect the number of offset and dispersion points. Task #7807: Speed-up of dispersion models for clustered analysis.
- Various index fixes, after the data structures have been reordered. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for unit test, where the dimension of points has to be one lower. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for plotting, since back_calc can now hold more data points than CPMG frequencies. This is because the numpy array has been expanded to the maximum number of points. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented a frqs_squared calculation in the init of target function. This is to speed up the calculations. Task #7807: Speed-up of dispersion models for clustered analysis.
- Restructured frqs_H to higher dimension in target function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the calculation of Δω and ΔωH out of for loops for model MMQ CR72. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed looping over spin and frequencies for model MMQ CR72. Task #7807: Speed-up of dispersion models for clustered analysis.
- Temporarily removed the check for Δω = 0.0 in MMQ CR72. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the number of points passed to model MMQ CR72. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the power passed to MMQ CR72, since it is not used. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed MMQ CR72 to use multidimensional data. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the unit test of MMQ CR72 to pass. Δω needs to be a numpy structure. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the calculation of Δω out of for loops for model NS MMQ 2-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified lib function for NS MMQ 2-site, to have looping over spins and frequencies inside lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fixed the use of higher dimensional data in NS MMQ 2-site SQ DQ ZQ. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for documentation in NS MMQ 2-site/SQ/DQ/ZQ/MQ. Now explains which dimension data should be in. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the reshaping of Δω and ΔωH, since it is not dependent on experiment. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changed the calculation of inner product in model NS CPMG 2-site 3D. The out argument of numpy.dot is buggy, and should not be used. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added missing instances of cleaning the data. Task #7807: Speed-up of dispersion models for clustered analysis.
- Bug fix for model LM63 3-site. The index si has to be used to extract data to lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Temporarily added the system test test_korzhnev_2005_all_data_disp_speed_bug. This performs a minimisation with 1 iteration, and so gives the χ2 value at the preset parameter values. This χ2 value should be 162.5, but is 74.7104. Task #7807: Speed-up of dispersion models for clustered analysis.
- Updated the documentation on the dimensionality of the numpy array num_points. It is of dimension [NE][NS][NM][NO], where the value at offset index oi gives the number of dispersion points at that offset. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for system test: test_korzhnev_2005_all_data. The masking for replacing values was wrong. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the cleaning of data points and the replacing of values out of the loop for model NS MMQ 2-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for structure cleaning and value replacing for model MMQ CR72. System test: test_korzhnev_2005_all_data revealed how this should be done properly. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for system test test_korzhnev_2005_all_data_disp_speed_bug. The precision is lowered, and now matches the original system test. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced chained numpy array indexing of the form [0][si][mi][oi] with single tuple indexing, [0, si, mi, oi] (see the example below). Task #7807: Speed-up of dispersion models for clustered analysis.
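A small illustration of the difference (generic numpy, not relax code): both forms read the same element, but the tuple form is a single indexing operation and avoids creating intermediate views.

    # Sketch only: chained vs. tuple indexing on a 4D numpy array.
    import numpy as np
    a = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
    si, mi, oi = 1, 2, 3
    assert a[0][si][mi][oi] == a[0, si, mi, oi]   # same value, but the tuple form indexes once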
- More replacing of numpy index. Task #7807: Speed-up of dispersion models for clustered analysis.
- Documentation fix, where a double bracket "[[" has been copied into all lib functions. Task #7807: Speed-up of dispersion models for clustered analysis.
- More fixes for numpy index in lib functions. Task #7807: Speed-up of dispersion models for clustered analysis.
- Restructured target function for NS MMQ 3-site to the new API structure of higher dimensional data. Task #7807: Speed-up of dispersion models for clustered analysis.
- Reordered the lib function for NS MMQ 3-site to use higher dimensional data. Task #7807: Speed-up of dispersion models for clustered analysis.
- Documentation fix for the dimensionality of the number of points structure. Task #7807: Speed-up of dispersion models for clustered analysis.
- Documentation fix for the initial data structure of cpmg_frqs, spin_lock_nu1, r1. They were incorrect. Task #7807: Speed-up of dispersion models for clustered analysis.
- First attempt to implement target function for NS R1rho 2-site. But it does not work yet. Task #7807: Speed-up of dispersion models for clustered analysis.
- First attempt to implement lib function for NS R1rho 2-site. But it does not work yet. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fatal fix for calling inv_relax_time from relax_time variable. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removal of the temporary offset argument. Task #7807: Speed-up of dispersion models for clustered analysis.
- Documentation fix for the dimensionality of the input arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented the target function for NS R1rho 3-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented the lib function for NS R1rho 3-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented target function for LM63 3-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented the lib function for LM63 3-site, for higher dimensional data. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the number of dispersion points in the target function for LM63 3-site, since it is no longer used but has been replaced with mask replacements. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented a class function in the target class to return the back_calc values as a list of lists. This is the back and forth conversion between the data structures used when gathering the data and the higher dimensional data sent to the library functions (see the sketch below). Task #7807: Speed-up of dispersion models for clustered analysis.
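A minimal sketch of such a conversion (hypothetical shapes and names, not the actual relax method): the padded rank-5 back_calc structure is sliced back into per-offset lists using the stored number of valid points.

    # Sketch only: convert a padded numpy back_calc structure into lists of valid points.
    import numpy as np
    NE, NS, NM, NO, ND = 1, 2, 1, 1, 4          # hypothetical dimensions
    back_calc = np.zeros((NE, NS, NM, NO, ND))  # padded to the maximum number of points
    num_points = np.array([[[[3]], [[4]]]])     # valid points per [NE][NS][NM][NO]
    def get_back_calc():
        """Return the back calculated values, truncated to the real number of points."""
        result = []
        for ei in range(NE):
            for si in range(NS):
                for mi in range(NM):
                    for oi in range(NO):
                        nd = num_points[ei, si, mi, oi]
                        result.append(back_calc[ei, si, mi, oi, :nd].tolist())
        return result
    print(get_back_calc())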
- Used the new class function: get_back_calc(), to get the data in the right structures when interpolating for graphs. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed superfluous check, after the returned data is now in right structure. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made changes to the dir argument of system test Relax_disp.test_r1rho_kjaergaard. This is to prepare for: sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff and sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1. This is also to test an expected bug, if R1 is not loaded. Task #7807: Speed-up of dispersion models for clustered analysis.
- The relaxation dispersion target function can now be set up when the optional frqs_H argument is None. This allows the profiling scripts to run.
- More stability fixes for the relaxation dispersion target function initialisation. The target function can now be initialised when the r1 and chemical_shift arguments are None.
- Split the system test test_r1rho_kjaergaard into a setup function, and a test function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Renamed system test test_r1rho_kjaergaard to test_r1rho_kjaergaard_auto. This corresponds to the use of the automatic analysis method. Task #7807: Speed-up of dispersion models for clustered analysis.
- Split system test test_r1rho_kjaergaard into test_r1rho_kjaergaard_auto and test_r1rho_kjaergaard_man. This is to test use of the manual way to analyse. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified all of Troels' dispersion profiling scripts to work with older relax versions. This is in preparation for obtaining some powerful timing statistics. The calls to the r2eff_*() functions are unnecessary and are the only failure point in the scripts between the current code in the disp_spin_speed branch and trunk or older versions of relax. So these function calls have been eliminated.
- Implemented system test test_r1rho_kjaergaard_missing_r1, for safety check if R1 data is not loaded. The system test passes, so target function is safe. Task #7807: Speed-up of dispersion models for clustered analysis.
- Python 3 support for the dispersion profiling scripts. The xrange() builtin function does not exist in Python 3, so this is now aliased to range() which is the same thing.
- Replaced double or triple hash-tags "##" with single hash-tags "#". Task #7807: Speed-up of dispersion models for clustered analysis.
- Copyright fixes for all the models where Troels E. Linnet has made changes to make them functional for higher dimensional data. Task #7807: Speed-up of dispersion models for clustered analysis.
- Copyright fix for model TSMFK01. Sebastien Morin did not take part in implementing the TSMFK01 model. Task #7807: Speed-up of dispersion models for clustered analysis.
- Created a super script for profiling the relaxation dispersion models. This script will execute all of the current profiling scripts in the directory test_suite/shared_data/dispersion/profiling for both the current version of relax and any other specified version (currently set to the 3.2.2 relax tag). It will run the scripts and relax versions interleaved N=10 times and extract the func_*() target function call profile timings. This interleaving makes the numbers much more consistent (a sketch of the idea is shown below). Averages and standard deviations are then calculated, as well as the speed up between the two relax versions. The results are printed out in a format suitable for the relax release messages.
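A minimal sketch of the interleaved timing and statistics idea (generic Python, not the disp_profile_all.py script itself; run_version() is a placeholder):

    # Sketch only: interleave timings of two versions and compute statistics.
    from timeit import timeit
    from statistics import mean, stdev

    def run_version(label):
        """Placeholder for executing one profiling script with one relax version."""
        return timeit("sum(range(1000))", number=100)

    timings = {"new": [], "old": []}
    for i in range(10):                      # interleave the runs, N=10
        timings["new"].append(run_version("new"))
        timings["old"].append(run_version("old"))
    for label in ("new", "old"):
        print("%s: %.4f +/- %.4f s" % (label, mean(timings[label]), stdev(timings[label])))
    print("Speed up: %.2fx" % (mean(timings["old"]) / mean(timings["new"])))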
- Increased the number of iterations to 1000 in all of the profiling scripts. This is for better statistics in the disp_profile_all.py script, and makes the number consistent between the different models.
- Added a log file for comparing the speed of the disp_speed_branch to relax 3.2.2. This is from the disp_profile_all.py statistics generating script.
- Made the processor.return_object get the back_calc structure in the expected order. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fixed the ordering of the relax versions in the dispersion super profiling script disp_profile_all.py. This has also been fixed in the disp_spin_speed branch to relax 3.2.2 comparison log.
- Added a log file for comparing the speed of the disp_speed_branch to relax 3.2.1. This is from the disp_profile_all.py statistics generating script.
- Added a profiling script for the NS CPMG 2-site expanded dispersion model. The script was copied from that of the CR72 model, and it only needed to be changed in a few places. This is the first numeric model profiling script.
- Updated the profiling super script and log for the NS CPMG 2-site expanded model. This shows that the single spin calculation is 1.8 times faster, and the cluster of 100 spins 11.7 times faster, when compared to relax 3.2.2.
- Modified all of the dispersion model profiling scripts. The single() function for timing the single spin target function speed has been modified to include a second outer loop over 100 'spins'. This means that the timing numbers are equivalent to the cluster timings, as both are then over 100 spins. This now allows not only relax version differences and model differences to be compared, but also the non-clustered and clustered analysis speeds.
- Added a script for profiling the NS CPMG 2-site 3D relaxation dispersion model. Again this only involved copying one of the other scripts and modifying a few variable and function names.
- Added the NS CPMG 2-site 3D model to the dispersion super profiling script. To handle the fact that this script has nr_iter set to 100 rather than 1000 (as otherwise it is too slow), a list of scaling factors has been created to scale all timing numbers to equivalent values.
- Added DPL94 profiling script. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified the profiling script for TSMFK01 to use the correct parameters kAB and R2A0. Otherwise the lib function would just be calculating with zeros. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changes to profiling script of NS CPMG 2-site expanded. The model does not have R2A0 and R2B0, but only R2. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made changes to the profiling script of NS CPMG 2-site 3D. The full model needs to be used when r2a and r2b are specified. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changes to profiling script of NS CPMG 2-site expanded. The unpacking can be removed. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for the profiling script of NS CPMG 2-site 3D. The model should also be specified to full. Task #7807: Speed-up of dispersion models for clustered analysis.
- The disp_profile_all.py super script now prints out the current relax version information. This is so that the log files contain information about the repository revision and path.
- Copied profiling script of DPL94 to NS R1rho 2-site.
- Improved the final printout from the disp_profile_all.py dispersion model super profiling script.
- Added profiling script for NS R1rho 2-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- The disp_profile_all.py dispersion model super profiling script is now executable.
- Decreased all nr_iter values by 10 and added more dispersion models to the super profiling script. This is for the dispersion model profiling scripts in test_suite/shared_data/dispersion/profiling/, all controlled by the disp_profile_all.py super profiling script for generating statistics using all of the other profiling scripts. The number of iterations needed to be decreased as otherwise it would now take almost 1 day to generate the statistics table.
- Moved the parameter conversion in LM63 3-site into the lib function. This cleans up the target API function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Copied profiling script for DPL94 to TAP03.
- Copied profiling script for DPL94 to TP02.
- Copied profiling script for DPL94 to MP05.
- Copied profiling script for DPL94 to M61.
- Modified profiling script for TAP03 to be used. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified profiling script for TP02, to be used. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified profiling script for MP05. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified profiling script for M61. This is the last one. Task #7807: Speed-up of dispersion models for clustered analysis.
- Expansion of the disp_profile_all.py dispersion model super profiling scripts. The newly added profiling scripts for models M61, TP02, TAP03, and MP05 are now included in the super script to generate statistics for all of these as well. The nr_iter variable has also been changed to match the other analytic models, so that the standard deviations are lowered and the statistics are better.
- Moved the parameter conversion of MMQ CR72 into lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the parameter conversions of kAB, kBA and pB into lib function of NS MMQ 2-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the parameter conversion from target function to lib function for NS R1rho 2-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Updated the dispersion model speed statistics for the disp_spin_speed branch vs. relax-3.2.2. This now includes the NS CPMG 2-site 3D, DPL94, and NS R1rho 2-site dispersion models. The timings for the single spin analyses are now comparable to the clustered analysis, as the equivalent of 100 single spins is being used. The final printout is also in a better format to present for the relax release messages. These new results show the insane 160x speed up of the DPL94 model.
- Alignment improvements for the final printout from the dispersion model super profiling script. The log file has been updated with what the new formatting will look like.
- Updated the model names in the dispersion model super profiling script. The CR72, B14 and NS CPMG 2-site 3D models are the full, slower versions rather than the faster models with R20 = R2A0 = R2B0. The log file has been updated to match.
- Moved the parameter conversion for NS MMQ 3-site into lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Updated the dispersion model profiling comparison of the disp_spin_speed branch vs. relax-3.2.2. The M61, TP02, TAP03, and MP05 models are now included. The final printout has been manually updated to reflect the newest version of the disp_profile_all.py super profiling script.
- Moved the parameter conversion for NS R1rho 3-site into lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Copied profiling script for CR72, so there is now a normal and a full version.
- Copied profiling for B14 to normal and full model.
- Created a text file suitable for use as part of the relax release notes. This contains the statistically averaged profiling information of the speed of the dispersion models in the disp_spin_speed branch vs. relax-3.2.2. This file has been created so that it can be used as part of the release notes for the version of relax that contains the insane speed ups of this branch. This file will be updated as new models are profiled and if any more speed ups magically appear.
- Copied profiling script for NS CPMG 2-site 3D.
- Copied profiling script for NS CPMG 2-site star.
- Copied profiling script for No Rex.
- Modified profiling script for B14, to R2A0=R2B0. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented profiling script for NS CPMG 2-site 3D. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented profiling script for NS CPMG 2-site star and star full. Task #7807: Speed-up of dispersion models for clustered analysis.
- Copied profiling script to be used for LM63.
- Copied profiling script to model IT99.
- Added profiling script for IT99. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented profiling script for LM63. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the "eta_scale = 2.0**(-3.0/2.0)" out of lib function for MMQ CR72, since this is only needs to be computed once. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for spaces aroung "=" outside functions. Task #7807: Speed-up of dispersion models for clustered analysis.
- Critical fix for wrong space inserted in NS MMQ 3-site MQ. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fixed the input for unit test of MMQ CR72. The number of input parameters has been lowered. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added additional math domain checking in B14. This is when v1c is less than 1.0. Task #7807: Speed-up of dispersion models for clustered analysis.
- Comment fixing, for explaining the masking and replacing when Δω is zero. Task #7807: Speed-up of dispersion models for clustered analysis.
- Copied profiling script to be used for profiling the use of higher dimensional data for the numpy eig function.
- Implemented the collection of the 3D exchange matrix, for rank [NE][NS][NM][NO][ND][7][7]. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented test, to see if 3D exchange matrices are the same. This can be tested while running system test test_hansen_cpmg_data_to_ns_cpmg_2site_3D. Task #7807: Speed-up of dispersion models for clustered analysis.
- Shifted the computation of Rexpo two loops up. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added intermediate step with for loops. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added another intermediate step. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added a function to compute the matrix exponential for higher dimensional data of shape [NE][NS][NM][NO][ND][7][7]. This is done by using numpy.einsum to take the dot product over the last two axes (see the sketch below). Task #7807: Speed-up of dispersion models for clustered analysis.
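A minimal sketch of the rank-N matrix exponential idea (generic numpy with hypothetical shapes; the real matrix_exponential_rankN() in relax differs in detail): the eigendecomposition expm(A) = V . diag(exp(w)) . V^-1 is computed over the last two axes, with einsum ellipsis broadcasting handling the outer dimensions.

    # Sketch only: matrix exponential over the last two axes of a rank-N array.
    import numpy as np
    A = np.random.rand(2, 3, 7, 7)                     # hypothetical stack of 7x7 matrices
    w, V = np.linalg.eig(A)                            # eig works on the last two axes
    W_exp_diag = np.eye(7) * np.exp(w)[..., None, :]   # diagonal matrices of exp(eigenvalues)
    dot_V_W = np.einsum('...ij,...jk', V, W_exp_diag)  # ellipsis broadcasting over outer axes
    eA = np.einsum('...ij,...jk', dot_V_W, np.linalg.inv(V)).real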
- Inserted intermediate step, to check if the matrix propagator to evolve the magnetization is equal when done for lower dimensional data of shape [7][7] and higher dimensional data of shape [NE][NS][NM][NO][ND][7][7]. A short example is shown at the wiki: http://wiki.nmr-relax.com/Numpy_linalg#Ellipsis_broadcasting_in_numpy.einsum. Task #7807: Speed-up of dispersion models for clustered analysis.
- Doubled the speed of model NS CPMG 2-site 3D. This is done by moving the costly calculation of the matrix exponential out of the for loops. The trick was to find a method to do the dot product in higher dimensions. This was done with numpy.einsum, example at: http://wiki.nmr-relax.com/Numpy_linalg#Ellipsis_broadcasting_in_numpy.einsum. Example: dot_V_W = einsum('...ij,...jk', V, W_exp_diag), where V and W_exp_diag have the shape [NE][NS][NM][NO][ND][7][7]. The profiling script shows a 2X speed up.
- Made notation consistent for variables, using "_i" to clarify extracted data from matrix. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the calculation of the evolution matrix out of the for loops. The trick is that numpy.einsum allows for dot products in higher dimensions. The essential evolution matrix is a dot product of the outer [7][7] matrices of the Rexpo_mat and r180x_mat matrices, which have the shape [NE][NS][NM][NO][ND][7][7]. This can be achieved with numpy.einsum, where the ellipsis notation broadcasts over the outer axes.
- Implemented system test: test_cpmg_synthetic_b14_to_ns3d_cluster. This is to catch failures of the model, when data is clustered. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed unused variables in NS CPMG 2-site 3D, to clean up the code. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added the NS matrices, rr1rho_3d_rankN, to collect the multi dimensional 3D exchange matrix, of rank [NE][NS][NM][NO][ND][6][6]. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added a check in lib/dispersion/ns_r1rho_2site.py to see if the newly created multidimensional NS matrix of rank [NE][NS][NM][NO][ND][6][6] is equal to the previous [6][6] matrix. It is. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added relax_time to the rr1rho_3d_rankN matrix collection. This is to pre-multiply all elements with the time. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added a check that the relax_time pre-multiplied multidimensional array equals the previous one. It does, to a summed difference of 1.0e-13 (see the sketch below). Task #7807: Speed-up of dispersion models for clustered analysis.
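A minimal sketch of this kind of consistency check (generic numpy with hypothetical parameters): the structure built in one broadcast operation is compared element-wise against the version built with the old per-point loops.

    # Sketch only: check a matrix populated in one numpy operation against the loop version.
    import numpy as np
    k = np.random.rand(2, 3)                     # hypothetical rate parameter per [NE][NM]
    rank_n = -k[..., None, None] * np.eye(6)     # built in one broadcast operation
    looped = np.empty((2, 3, 6, 6))
    for ei in range(2):                          # the old, per-point construction
        for mi in range(3):
            looped[ei, mi] = -k[ei, mi] * np.eye(6)
    print(np.sum(np.abs(rank_n - looped)))       # summed difference, expected ~0.0
    assert np.allclose(rank_n, looped)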
- Made the function use the new multidimensional R_mat matrix. System test: test_tp02_data_to_ns_r1rho_2site still passes. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix to the matrix_exponential_rankN, to return the exact exponential for any higher dimensional square matrix of shape [NE][NS][NM][NO][ND][X][X]. The fix was to the eye(X), to make the shape the same as the input shape. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the costly calculation of the matrix exponential out of the for loops. It was the numpy eig and inv functions which were consuming the time. This speeds up the NS R1rho 2-site model by a factor of 4.
- Made the returned multidimensional rr1rho_3d_rankN matrix be of float64 type. Task #7807: Speed-up of dispersion models for clustered analysis.
- Cleaned up the code of NS R1rho 2-site, and removed the matrix argument to the function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the parsing of a matrix to the lib function of NS R1rho 2-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added the function "rcpmg_star_rankN" for the collection of the multidimensional relaxation matrix for model NS CPMG 2-site star. Task #7807: Speed-up of dispersion models for clustered analysis.
- Inserted a check that the newly created multidimensional matrix is the same as the old one. It is, but only to the fifth digit. Task #7807: Speed-up of dispersion models for clustered analysis.
- Started using the newly created multidimensional matrix (system test test_hansen_cpmg_data_to_ns_cpmg_2site_star). Task #7807: Speed-up of dispersion models for clustered analysis.
- Added the system test test_cpmg_synthetic_b14_to_ns_star_cluster to check that the model still works after the change. Task #7807: Speed-up of dispersion models for clustered analysis.
- Started using the newly created cR2 variable, extracted from the higher dimensional data. This should be okay, but the system test test_hansen_cpmg_data_to_ns_cpmg_2site_star now fails.
- Changes of values to system test test_hansen_cpmg_data_to_ns_cpmg_2site_star. The values are changed, since χ2 is lower than before. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the costly finding of matrix exponential out of for loops for eR_tcp. Task #7807: Speed-up of dispersion models for clustered analysis.
- Rearranged the code, to properly show the nested matrix exponentials in dot functions. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the costly matrix_exponential of cR2 out of for loops. Task #7807: Speed-up of dispersion models for clustered analysis.
- Rearranged the dot code, for better view. Task #7807: Speed-up of dispersion models for clustered analysis.
- Cleaned up the code in model NS CPMG 2-site star. Task #7807: Speed-up of dispersion models for clustered analysis.
- Simplified model NS CPMG 2-site 3D. The expansion of matrices to higher dimensionality is not necessary. Task #7807: Speed-up of dispersion models for clustered analysis.
- Further cleaned up the code in NS CPMG 2-site star. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the input of the matrices Rr, Rex, RCS and R to model NS CPMG 2-site star. These matrices are now extracted from the NS matrix function rcpmg_star_rankN. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented the collection of the multidimensional matrices m1 and m2 in model NS MMQ 2-site. Also inserted a check that the newly computed matrices are equal to the old ones. They are, to the 6th digit. Task #7807: Speed-up of dispersion models for clustered analysis.
- Started using the newly created multidimensional m1 and m2 matrices. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the costly calculation of matrix_exponential of M1 and M2 out of for loop, in model ns_mmq_2site_mq. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made the function matrix_exponential_rankN also find the exponential if the experiments indices are missing. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for an extra axis inserted in eye function, when dimensionality is only [NS][NM][NO][ND]. This also fixes the index in the lib function of ns_mmq_2site_mq. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented the same functionality in mmq_2site_sq_dq_zq. Problem: the following system tests fail: test_korzhnev_2005_15n_dq_data, test_korzhnev_2005_15n_mq_data, test_korzhnev_2005_15n_sq_data, test_korzhnev_2005_1h_mq_data, test_korzhnev_2005_1h_sq_data, test_korzhnev_2005_all_data, test_korzhnev_2005_all_data_disp_speed_bug. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the grid search and lowered the number of iterations for the system tests test_cpmg_synthetic_b14_to_ns3d_cluster and test_cpmg_synthetic_b14_to_ns_star_cluster. This is to speed them up, since they previously took 30 seconds. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for ns_mmq_2site_mq. A variable was wrongly called. There seems to be a more serious problem with MQ.
- Reinserted old code. This fixes: test_korzhnev_2005_15n_mq_data. Task #7807: Speed-up of dispersion models for clustered analysis.
- Forcing the dtype to be complex64 instead of complex128. This solves a range of system tests. The ones that still fail are: test_korzhnev_2005_15n_zq_data, test_korzhnev_2005_1h_mq_data, test_korzhnev_2005_1h_sq_data. Task #7807: Speed-up of dispersion models for clustered analysis.
- Forced complex64 instead of complex128 in ns_mmq_2site_sq_dq_zq. This fixes the system tests test_korzhnev_2005_15n_zq_data and test_korzhnev_2005_1h_sq_data. Task #7807: Speed-up of dispersion models for clustered analysis.
- Forced complex64 in ns_mmq_2site_mq. This solves all system tests. Forcing complex64 does not seem like a long-term solution, since complex128 should be possible. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for using the old matrix_exponential for m1. One system test, test_korzhnev_2005_15n_sq_data, is still failing. It still uses matrix_exponential_rankN. There seems to be a problem with matrix_exponential_rankN when handling complex numbers. Maybe the dtype has to be fixed? Use it as an input argument? It must be the einsum. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added the "dtype" argument to the function matrix_exponential_rankN. This is to force the conversion of the dtype if the input is of another type, for example from complex128 to complex64. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fixed the bug "M2_i = M1_mat" which was causing the problems getting the system tests to pass. Removed the specification of the dtype with which the initial matrices are created; they can instead be converted later by passing the dtype to matrix_exponential_rankN(). All system tests now pass. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the Bloch-McConnell matrix for 2-site exchange into lib/dispersion/ns_matrices.py. This is for consistency with the other code. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the matrices for Bloch-McConnell from lib ns_mmq_2site, since they are now defined in ns_matrices.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the Bloch-McConnell matrix for 3-site exchange, into the lib/dispersion/ns_matrices.py. This is to standardize the code. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed m1 and m2 to be sent to lib function of NS MMQ 2-site, since they are now populated inside the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented the Bloch-McConnell matrix for 3-site exchange, for multidimensional data. Task #7807: Speed-up of dispersion models for clustered analysis.
- Inserted a check that the new higher dimensional m1 and m2 matrices are equal to before. They are, to the 5th digit. Task #7807: Speed-up of dispersion models for clustered analysis.
- Started using the newly created higher dimensional Bloch-McConnell matrix for 3-site exchange. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the calculation of the matrix exponential out of for loops for NS MMQ 3-site MQ. Task #7807: Speed-up of dispersion models for clustered analysis.
- Converted NS MMQ 3-site/SQ/DQ/ZQ to calculate the matrix exponential out of the for loops. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the complex64 to be used as dtype in matrix exponential. Fix for missing "_i" in variable. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed m1 and m2 to be sent to target function of ns_mmq_3site_chi2. They are now populated inside the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Documentation and input fix for NS MMQ 2-site. The m1 and m2 matrices are populated inside the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Renamed some numerical matrices, to get consistency in naming. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented multidimensional NS R1rho 3-site exchange matrix. Task #7807: Speed-up of dispersion models for clustered analysis.
- Inserted a check that the newly created multidimensional matrix is equal to the old one. It is, to the 13th digit. Task #7807: Speed-up of dispersion models for clustered analysis.
- Started using the newly created multidimensional 3D exchange matrix. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved the calculation of the matrix exponential out of the for loops for NS R1rho 3-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed the parameter "matrix" to be send to lib function of NS R1rho 3-site, since it is now populated inside the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved parameter conversion for NS R1rho 3-site inside lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Cleaned up the Dispersion class target function, removing the creation of matrices which are now populated inside the lib functions instead. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed pA and pB from the matrix population function rcpmg_star_rankN, since they are not used. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed pA and pB from the matrix population function rr1rho_3d_2site_rankN, since they are not used. Task #7807: Speed-up of dispersion models for clustered analysis.
- Documentation fix for the dimensionality for model NS R1rho 2-site. The data is lined up to be of form [NE][NS][NM][NO][ND]. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed pA, pB and pC from the matrix population function rr1rho_3d_3site_rankN, since they are not used. Task #7807: Speed-up of dispersion models for clustered analysis.
- Deleted the eig function profiling script. This was never implemented. Task #7807: Speed-up of dispersion models for clustered analysis.
- For all profiling scripts, added conversion to numpy array for CPMG frqs and spin_lock, since some models complained in 3.2.2, that they were of list types. Also fixed IT99 to only have 1 spin, since clustering is broken in 3.2.2. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified the super profiling script to accept as input the location of an alternative relax version. Collected the variables in a list of lists for a better overview. Added a printout comment to IT99 as a reminder of the bug. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added comment field to super profiling script. Task #7807: Speed-up of dispersion models for clustered analysis.
- Math domain fix for NS CPMG 2-site expanded. When t108 or t112 is zero in the multidimensional array, a division error occurs. The elements are first set to 1.0 to allow the computation, then later replaced with 1e100, and lastly, if the elements are not part of the "True" dispersion point structure, they are cleaned out. Task #7807: Speed-up of dispersion models for clustered analysis.
- Precision lowering of the system tests test_korzhnev_2005_15n_sq_data and test_korzhnev_2005_1h_sq_data. The system tests do not fail on a Linux 64-bit system, but only on a Mac 32-bit system. This is due to floating point deviations. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added log files for super profiling against tags 3.2.1 and 3.2.2. Task #7807: Speed-up of dispersion models for clustered analysis.
- Copied lib.linear_algebra.matrix_exponential to lib.dispersion.matrix_exponential. The matrix exponential of higher dimensional data is only used in the dispersion part of relax.
- Added to __init__, the new lib.dispersion.matrix_exponential module. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added to unit_tests/_lib/_dispersion/__init__.py, the new unit test file: test_matrix_exponential.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added numpy array save files. These are the numpy array structures which are sent in from the system test Relax_disp.test_hansen_cpmg_data_to_ns_cpmg_2site_3D. These numpy array structures are used in the unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added unit test unit_tests/_lib/_dispersion/test_matrix_exponential.py to test the matrix exponential from higher dimensional data. lib.dispersion.matrix_exponential.matrix_exponential_rankN will match against lib.linear_algebra.matrix_exponential. Data which is used for comparison, comes from system test: Relax_disp.test_hansen_cpmg_data_to_ns_cpmg_2site_3D. Task #7807: Speed-up of dispersion models for clustered analysis.
- Renamed function to return data in unit test _lib/_dispersion/test_matrix_exponential.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix to lib/dispersion/matrix_exponential.py, since the svn copy command was used on non-updated version of the file. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added unit test for doing the matrix exponential for complex data. This test shows, that the dtype=complex64, should be removed from lib/dispersion/ns_mmq_2site.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added data for unit test for the testing of the matrix_exponential_rankN. Task #7807: Speed-up of dispersion models for clustered analysis.
- Expanded the dispersion profiling master script to handle any two relax versions. To compare two relax versions, for example 3.2.2 to 3.2.1, either the path1 and path2 variables or two command line arguments can be supplied. The first path should be for the newest version (a sketch of the argument handling is shown below). This will allow the speed differences between multiple relax versions to be compared in the future.
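A minimal sketch of how such dual-path handling could look (hypothetical paths and variable names, not the actual disp_profile_all.py code):

    # Sketch only: take the two relax paths from the command line, falling back to defaults.
    import sys
    path_new = "/opt/relax/relax-trunk"      # hypothetical default for the newest version
    path_old = "/opt/relax/relax-3.2.2"      # hypothetical default for the older version
    if len(sys.argv) == 3:
        path_new, path_old = sys.argv[1], sys.argv[2]
    print("Comparing %s (new) against %s (old)." % (path_new, path_old))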
- Split matrix_exponential_rankN into matrix_exponential_rank_NE_NS_NM_NO_ND_x_x and matrix_exponential_rank_NS_NM_NO_ND_x_x. Task #7807: Speed-up of dispersion models for clustered analysis.
- Moved numerical solution matrices into the corresponding lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Copied profiling scripts, to be used for 3-site models and MMQ models.
- Implemented profiling script for LM63 3-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Improved the relax version printouts for the dispersion model master profiling script. This now reports both relax versions.
- Removed a tonne of unused imports from the dispersion model profiling scripts. This is to allow most of the scripts to run on the relax 3.1.x versions, as well as to clean up the scripts. The unused imports were found using the command: pylint test_suite/shared_data/dispersion/profiling/*.py --disable=all --enable=unused-import.
- Added a relaxation dispersion model profiling log file for relax version 3.2.1 vs. 3.2.0. This is the output from the dispersion model profiling master script. It shows a 2.2 times increase in speed for the B14 and B14 full models, with all other models remaining at the same speed. This matches the changes for relax 3.2.1, the main feature of which is a major bugfix for the B14 models.
- The 'relax -v' command is now used for the dispersion model profiling script initial printout. This is to show the two different relax versions being compared.
- Modifications to the dispersion model profiling master script. The info.print_sys_info() function of the current relax version is being called at the start to show all information about the current system. This is useful to know the speed of the machine, the OS, the Python version and numpy version. The numpy version is important as future versions might optimise certain functions that are currently very slow, hence that could be a cause of model speed differences. In addition, the path variables path1 and path2 have been renamed to path_new and path_old to make it clearer which is which. And the individual profiling scripts are no longer copied to the base directory of the relax versions being compared, and are run in place.
- Added a relaxation dispersion model profiling log file for relax version 3.2.2 vs. 3.2.1. This is the output from the dispersion model profiling master script. It shows that the relax 3.2.2 release did not in fact feature "a large speed up of all analytic relaxation dispersion models" as described in the release notes at https://web.archive.org/web/. For the CPMG models there is a 1 to 2 times increase in speed. But for the R1ρ models, there is a 1 to 2 times decrease in speed.
- Added a relaxation dispersion model profiling log file for relax version 3.2.0 vs. 3.1.7. This is the output from the dispersion model profiling master script. It shows that there are no speed differences.
- Added a relaxation dispersion model profiling log file for relax version 3.1.7 vs. 3.1.6. This is the output from the dispersion model profiling master script. It shows that there are no speed differences.
- Modified profiling script for NS R1rho 3-site, to be functional. Task #7807: Speed-up of dispersion models for clustered analysis.
- Modified profiling script for NS R1rho 3-site linear to be functional. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added a relaxation dispersion model profiling log file for relax version 3.1.3 vs. 3.1.2 vs. 3.1.1. This is the output from the dispersion model profiling master script. It shows that there are no major speed differences between these relax versions.
- Added the system information printout to the dispersion model profiling master script output. This is for the log files comparing one version of relax to the previous version.
- Added a profiling script for model MMQ CR72. Task #7807: Speed-up of dispersion models for clustered analysis.
- Fix for the replacement value for invalid values in model MMQ CR72. The value was set to use R20, but should instead be 1e100. Task #7807: Speed-up of dispersion models for clustered analysis.
- Copied profiling script from MMQ CR72, to NS MMQ 2-site and 3-site.
- Copied profiling script to NS MMQ 3-site linear.
- Implemented profiling script for NS MMQ 2-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented profiling script for NS MMQ 3-site and 3-site linear. Task #7807: Speed-up of dispersion models for clustered analysis.
- Speeded up model NS CPMG 2-site star by moving the forming of the propagator matrix out of the for loops and pre-forming it. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added a relaxation dispersion model profiling log file for relax version 3.1.4 vs. 3.1.3. This is the output from the dispersion model profiling master script. It shows that there are no speed differences.
- Speeded up NS MMQ 2-site by moving the forming of the evolution matrix out of the for loops and pre-forming it. Task #7807: Speed-up of dispersion models for clustered analysis.
- Speeded up NS MMQ 3-site by moving the forming of the evolution matrix out of the for loops and pre-forming it. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added a relaxation dispersion model profiling log file for relax version 3.1.5 vs. 3.1.4. This is the output from the dispersion model profiling master script. It shows that there are no speed differences.
- Speeded up NS R1rho 2-site by pre-forming the evolution matrices and the M0 matrix in the init part of the target function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Speeded up NS R1rho 3-site by pre-forming the evolution matrices and the M0 matrix in the init part of the target function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Expanded the dispersion model profiling master script to cover all the new profiling scripts. This includes all 3-site and MMQ models. The list is now complete and covers all models. The only model not included is M61 skew, which has redundant parameters and is not optimisable anyway.
- Added a relaxation dispersion model profiling log file for relax version 3.1.6 vs. 3.1.5. This is the output from the dispersion model profiling master script. It shows that there are almost no speed differences, except for a slight decrease in speed in the CR72 full model for single spins.
- Split system test test_tp02_data_to_ns_r1rho_2site into a setup and test part. Task #7807: Speed-up of dispersion models for clustered analysis.
- Implemented a clustered version of system test test_tp02_data_to_ns_r1rho_2site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Inserted an extremely interesting development in NS R1rho 2-site. If one do a transpose of M0, one can calculate all the matrix evolutions in the start via numpy einsum. Since M0 is in higher a dimensions, one should not do a numpy transpose, but swap/roll the outer M0 6x1 axis. Task #7807: Speed-up of dispersion models for clustered analysis.
- Shortened the code dramatically for NS R1rho 2-site. It is possible to calculate everything in "one" go once the transposed/rolled-back M0 magnetization is available. Task #7807: Speed-up of dispersion models for clustered analysis.
- Sped up the code of NS R1rho 2-site. This was essentially achieved with numpy einsum, performing the dot operations in multiple dimensions. It was, however, necessary to realise that for the proper dot product operations the outer two axes of M0 should be swapped, by rolling the outer axis back by one. Task #7807: Speed-up of dispersion models for clustered analysis.
- Sped up the code of NS R1rho 3-site. This was essentially achieved with numpy einsum, performing the dot operations in multiple dimensions. It was, however, necessary to realise that for the proper dot product operations the outer two axes of M0 should be swapped, by rolling the outer axis back by one. Task #7807: Speed-up of dispersion models for clustered analysis.
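  A minimal numpy sketch of this idea (illustrative dimensions and variable names only, not the relax source): with the evolution matrices stacked in the outer dimensions, a single einsum call performs all of the dot products at once, provided the outer axes of M0 are rolled so that the matrix product axes line up.

      import numpy as np

      # Illustrative stack sizes: NS spins and ND dispersion points, each with
      # a 6x6 evolution matrix and a 6x1 magnetisation vector M0.
      NS, ND = 4, 10
      evol = np.random.rand(NS, ND, 6, 6)   # stacked evolution matrices
      M0 = np.random.rand(NS, ND, 6, 1)     # stacked magnetisation vectors

      # Roll the outer 6x1 axes of M0 so it becomes a stack of 1x6 row vectors,
      # i.e. the "transposed" form discussed above.
      M0_T = np.swapaxes(M0, -2, -1)        # shape (NS, ND, 1, 6)

      # All dot products in one go: (1x6) . (6x6) -> (1x6) for every spin and
      # dispersion point.
      Mint = np.einsum('...ik,...kj->...ij', M0_T, evol)

      # Reference result from an explicit double loop.
      ref = np.array([[np.dot(M0_T[s, d], evol[s, d]) for d in range(ND)] for s in range(NS)])
      print(np.allclose(Mint, ref))         # True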
- For model NS CPMG 2-site 3D, the M0 matrix was precomputed with higher dimensionality in the init of the target function. The transposed/rolled-axis versions were also initialised. Task #7807: Speed-up of dispersion models for clustered analysis.
- Swapped the dot product position when propagating the magnetisation in model NS CPMG 2-site 3D. This is to align with the same method as in NS R1rho 2-site. Task #7807: Speed-up of dispersion models for clustered analysis.
- Reduced the looping in NS CPMG 2-site 3D by precomputing the initial dot product. Task #7807: Speed-up of dispersion models for clustered analysis.
- Sped up NS CPMG 2-site 3D by precomputing the magnetisation. Task #7807: Speed-up of dispersion models for clustered analysis.
- Got rid of the inner evolution of the magnetisation. If the looping over the number of CPMG elements is given by the index l, and the initial magnetisation has been formed, then the number of magnetisation propagations is l = power_si_mi_di - 1. If the magnetisation matrix "Mint" has the indices Mint_(i,k) and the evolution matrix the indices Evol_(k,j), with i=1, k=7, j=7, then one propagation is the dot product: Sum over k of Mint_(1,k) * Evol_(k,j) = D_(1,j). The numpy einsum formula for this is: einsum('ik,kj -> ij', Mint, Evol). The next evolution is then D . Evol = Mint . Evol . Evol, so after l steps the result is Mint . Evol^l. The evolution matrix can therefore simply be raised to the power l, Evol_P = Evol^l, giving: einsum('ik,kj -> ij', Mint, Evol_P). The implementation is: determine the power, l = power_si_mi_di - 1; raise the square evolution matrix to that power, evolution_matrix_T_pwer_i = matrix_power(evolution_matrix_T_i, l); then Mint_T_i = dot(Mint_T_i, evolution_matrix_T_pwer_i), or equivalently Mint_T_i = einsum('ik,kj -> ij', Mint_T_i, evolution_matrix_T_pwer_i). Task #7807: Speed-up of dispersion models for clustered analysis.
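  A minimal sketch of the matrix power trick (illustrative values, not the relax code): instead of propagating the magnetisation through l repeated dot products with the evolution matrix, the evolution matrix is raised to the power l once and a single dot product is performed.

      import numpy as np
      from numpy.linalg import matrix_power

      # Illustrative 1x7 magnetisation row vector and 7x7 evolution matrix.
      Mint = np.random.rand(1, 7)
      evol = np.random.rand(7, 7) * 0.1   # scaled to keep the powers finite

      l = 5                               # e.g. l = power_si_mi_di - 1

      # Old scheme: l successive evolutions of the magnetisation.
      M_loop = Mint.copy()
      for _ in range(l):
          M_loop = np.dot(M_loop, evol)

      # New scheme: raise the evolution matrix to the power l, then one dot product.
      M_pow = np.dot(Mint, matrix_power(evol, l))
      # Equivalently: np.einsum('ik,kj -> ij', Mint, matrix_power(evol, l))

      print(np.allclose(M_loop, M_pow))   # True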
- Tried using lib.linear_algebra.matrix_power.square_matrix_power instead of matrix_power from numpy in NS CPMG 2-site 3D. Strangely, the system test test_hansen_cpmg_data_to_ns_cpmg_2site_3D_full then starts to fail. Task #7807: Speed-up of dispersion models for clustered analysis.
- Changes to the unit test of NS CPMG 2-site 3D. This follows the newly initialised M0 matrix in the init of the target function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Doubled the speed of NS CPMG 2-site star by using numpy.linalg.matrix_power instead of the lib version in relax. Task #7807: Speed-up of dispersion models for clustered analysis.
- Tripled the speed of NS MMQ 2-site by using numpy.linalg.matrix_power instead of the lib version in relax. Task #7807: Speed-up of dispersion models for clustered analysis.
- Small fix to ensure that the power is an integer in NS MMQ 2-site. The following system tests were failing: Relax_disp.test_korzhnev_2005_15n_dq_data, Relax_disp.test_korzhnev_2005_15n_sq_data, Relax_disp.test_korzhnev_2005_15n_zq_data, Relax_disp.test_korzhnev_2005_1h_sq_data, Relax_disp.test_korzhnev_2005_all_data, Relax_disp.test_korzhnev_2005_all_data_disp_speed_bug. The values should already have been integers, but this is now solved. Task #7807: Speed-up of dispersion models for clustered analysis.
- Comment and spell fixing in NS CPMG 2-site 3D. Task #7807: Speed-up of dispersion models for clustered analysis.
- Tripled the speed of NS MMQ 3-site by using numpy.linalg.matrix_power instead of the lib version in relax. Task #7807: Speed-up of dispersion models for clustered analysis.
- Updated the dispersion model profiling comparison of the disp_spin_speed branch vs. relax-3.2.2. This now includes all dispersion models. This shows the large speed increases in the numeric and MMQ models recently obtained. Note that something went wrong with the NS CPMG 2-site 3D full model for the clustered analysis, most times were around 24 seconds except for the first which was strangely 292 seconds.
- Updated the relaxation dispersion model profiling log file for relax version 3.2.2 vs. 3.2.1. This adds the MMQ and 3-site models to the log file. The new information shows that there was a 4.2 times speed up for the MMQ CR72 model between these two relax versions, both for single spins and clustered spins, a much greater improvement than any other of the models.
- Removed the now redundant disp_profile_all_3.2.2.table.txt dispersion model profiling table. As the dispersion model profiling master script now covers all dispersion models, the output from this script produces this table exactly. Therefore the end of the log files saved from running this script contains this table.
- Initiated a lengthy profiling script which shows that applying the square numpy matrix_power on strided data can speed up the calculation by a factor of 1.5. The profiling script can quickly be turned into a unit test, and it includes small helper functions for calculating how to stride through the data. Task #7807: Speed-up of dispersion models for clustered analysis.
- First try at implementing a function that calculates the matrix exponential by striding through the data. Interestingly, it does not work. The following system tests fail: test_hansen_cpmg_data_to_ns_cpmg_2site_3D, test_hansen_cpmg_data_to_ns_cpmg_2site_3D_full. Task #7807: Speed-up of dispersion models for clustered analysis.
- Added matrix_power to the init file in lib/dispersion. Task #7807: Speed-up of dispersion models for clustered analysis.
- Deleted the printout in dep_check. The printouts are only used for the essential packages before calling sys.exit(). Task #7807: Speed-up of dispersion models for clustered analysis.
- Added the missing "self.num_exp" to the target function. Testing on an older system, this was failing the system tests. It is surprising that these lines in __init__ could execute without it.
- Fix to allow the unit test to pass on old numpy systems. The error was: ValueError: setting an array element with a sequence. Task #7807: Speed-up of dispersion models for clustered analysis.
- Expanded the dispersion target function class documentation. The NE, NS, NM, NO, and ND notation is now explained.
- Added Ti and NT to the dispersion target function class documentation.
- Slight speed up of the B14 and B14 full dispersion models by minimising repetitive maths.
- Initial try at writing up a 2x2 matrix in closed form. Task #7807: Speed-up of dispersion models for clustered analysis.
- Made the validation check in profiling of matrix_power check all values. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced all self.spins with self.NS in target function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced all self.num_exp with self.NE in target function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Replaced all self.num_frq with self.NM in target function. Task #7807: Speed-up of dispersion models for clustered analysis.
- A very small speed up to the CR72 dispersion models by minimising repetitive maths operations. The kBA and kAB rates are used to simplify the Psi calculation, dropping from 3 to 2 multiplications and removing a squaring operation. The Dpos and Dneg value calculations have been simplified to drop one multiplication operation. And the calculation of eta_scale / cpmg_frqs now only occurs once rather than twice.
- Removal of a tonne of unused imports in the lib.dispersion package. These were identified using the command "pylint * --disable=all --enable=unused-import".
- A very small speed up to the MMQ CR72 dispersion model by minimising repetitive maths operations. This matches the recent change for the CR72 model, though the Psi calculation was already using the fast form.
- Created a master profiling script for comparing the speed between different dispersion models. This is similar to the disp_profile_all.py script except it only operates on a single relax version. The output is then simply the timings, with statistics, of the calculation time for 100 function calls for 100 spins (either 10,000 function calls for single spins or 100 function calls for the cluster of 100 spins). The output of the script for the current disp_spin_speed branch code has also been added.
- Critical fix for the recalculation of tau cpmg when plotting for numerical models. The interpolated dispersion points with tau_cpmg were calculated from frq instead of cpmg_frq. Task #7807: Speed-up of dispersion models for clustered analysis.
- The new dispersion model profiling master script now includes links to the relax wiki. The models are no longer presented by name but rather by the relax wiki links for each model (see Category:Relaxation dispersion analysis for all these links). This is to improve the Google rank of the relax wiki, as these links may appear in a number of locations.
- Removal of many unused imports in the disp_spin_speed branch. These were detected using the devel_scripts/find_unused_imports.py script which uses pylint to find all unused imports. The false positives also present in the trunk were ignored.
- Code validation of lib/dispersion/b14.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/cr72.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/dpl94.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/lm63_3site.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/lm63.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/m61b.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/m61.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/matrix_exponential. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/mp05.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/ns_cpmg_2site_expanded.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/ns_cpmg_2site_star.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/ns_mmq_2site.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/ns_mmq_3site.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/ns_r1rho_2site.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/ns_r1rho_3site.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/tap03.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/tp02.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of lib/dispersion/two_point.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Code validation of target_functions/relax_disp.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- For model NS MMQ 3-site, moved the parameter conversion of ΔωAB from target function to lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed chi sum initialisation in func_ns_mmq_2site() as this is not used. Task #7807: Speed-up of dispersion models for clustered analysis.
- Documentation fix for the get_back_calc() function in target_functions/relax_disp.py. Task #7807: Speed-up of dispersion models for clustered analysis.
- Removed unnecessary repetitive calculation of kex2 in model DPL94. Task #7807: Speed-up of dispersion models for clustered analysis.
- API documentation fixes for lines where a "\" is the last character. A space " " should follow this character. Task #7807: Speed-up of dispersion models for clustered analysis.
- Updated the minfx version number to 1.0.9 in the release checklist document. This as of yet unreleased version contains an important fix for parallelised grid searches when the number of increments is set to one (i.e. a preset parameter).
- Fix for the _prompt.test_align_tensor.Test_align_tensor.test_init_argfail_params unit test. As the alignment tensor can now be initialised as None, the None value can be accepted and a different RelaxError is raised when the params argument is incorrectly supplied.
- Added a new set of icons for use with the minimisation user functions. These are of the Rosenbrock function and are much better suited for small icons than the current OpenDX 3D isosurface plots. The matplotlib figure originates from public domain code at http://commons.wikimedia.org/wiki/File:Rosenbrock_function.svg.
- Redesign of the optimisation user functions calculate, grid_search, and minimise. In preparation for expanding the number of optimisation user functions, these three current user functions have been shifted into the new minimise user function class. The calculate user function is now accessed as minimise.calculate, the grid search as minimise.grid_search, and minimisation is via the minimise.execute user function. The icon used for the new user function class is the Rosenbrock function or the banana optimisation problem. As this is such a radical change, a huge number of changes in the relax source code, the sample scripts, the user manual, and the test suite were required.
- Created the new minimise.grid_zoom user function. This allows the grid zoom level to be set. The value is stored in the current data pipe and will be used later by the minimise.grid_search user function.
- The minimise.grid_zoom user function now uses the zoom-in Oxygen icon.
- Created the Relax_fit.test_zooming_grid_search system test. This will be used to test the implementation of the zooming grid search. The relaxation curve-fitting analysis should be one of the fastest for testing this.
- Added the print_model_title() method to the specific analysis base API class. This will be used to format and print out the information returned by the model_info() API method.
- Implemented the print_model_title() specific analysis API method for the dispersion analysis.
- Modified the specific analysis API _model_loop_spin() common method. This now additionally returns the spin ID string to allow the corresponding spin container to be identified.
- Implemented the specific analysis API common method _print_model_title_spin(). This is for the corresponding _model_loop_spin() method. It can be aliased in the specific analyses to provide the print_model_title() API method.
- Aliased the _print_model_title_spin() specific analysis API common method in a few analyses. This provides the print_model_title() API method for the J(ω) mapping, consistency testing, and relaxation curve fitting analyses.
- Updated all the specific analysis methods affected by the _model_info_spin() API method change. This is for the change whereby the common API method now returns the spin ID string as well.
- Implemented get_param_names() and get_param_values() for the relaxation curve-fitting analysis. These are part of the specific analysis API.
- Created the specific analysis API return_parameter_object() function. This is used by the non-specific analysis code to obtain the parameter object (a singleton object). It will allow for more direct access to the parameter information.
- Created the parameter object infrastructure for adding the grid search lower and upper bounds. The _add() method now accepts the grid_lower and grid_upper keyword arguments, which can be either values or functions. These are then stored in the _grid_lower and _grid_upper class dictionaries. The public methods grid_lower() and grid_upper() have been added to return the value corresponding to the given parameter.
- Modified the specific analysis parameter object grid_lower() and grid_upper() methods. These now accept the model information from the model_loop() API method and pass it into the grid lower and upper bound functions. These functions will require the information to pull out the correct spin, spin cluster, or other information from the current data pipe to determine what the bounds should be.
- Implemented infrastructure in the grid_search user function in preparation for the zooming grid. The grid search backend now calls the new grid_bounds() function. This takes the lower and upper bounds as arguments, uses the specific API to determine the per-model parameter grid search bounds, and then returns a per model list of lower and upper bounds. The specific API get_param_names() and get_param_values() are called to obtain the current model parameter names and values, and then the parameter names and model info are used in the new parameter object grid_lower() and grid_upper() methods to obtain the bounds. This shifts all of the grid search bounds logic out of the specific analyses and into the grid search backend, so it should allow the specific analysis code to be simplified.
- More modifications of the minimise.grid_search user function backend. The grid_bounds() function has been renamed to grid_setup(), and it now accepts and processes the inc user function argument. The error checking code of the relaxation curve-fitting grid_search_setup() optimisation function has been shifted into this analysis independent grid_setup() function to shift the minimise.grid_search user function error checking out of the specific analyses. The function now scales the parameter bounds, using the yet-to-be implemented scaling() method of the parameter object. And the grid search increments are converted into a per-model list of lists.
- Created the parameter object infrastructure for registering parameter scalings. The _add() method now accepts the scaling keyword argument, which can be either a value or function. This is then stored in the _scaling dictionary. The public method scaling() has been added to return the scaling factor corresponding to the given parameter.
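  The general registration pattern is roughly the following (a minimal sketch with hypothetical names, not relax's actual parameter object): grid bounds and scaling factors are stored per parameter as either fixed values or callables, and the public lookup methods evaluate the callables on demand.

      # Minimal sketch of the registration pattern, hypothetical names only.
      class ParamObject:
          def __init__(self):
              self._grid_lower = {}
              self._grid_upper = {}
              self._scaling = {}

          def _add(self, name, grid_lower=None, grid_upper=None, scaling=1.0):
              # Each entry can be a plain value or a function evaluated later.
              self._grid_lower[name] = grid_lower
              self._grid_upper[name] = grid_upper
              self._scaling[name] = scaling

          def _lookup(self, table, name, **kwargs):
              value = table[name]
              return value(**kwargs) if callable(value) else value

          def grid_lower(self, name, **kwargs):
              return self._lookup(self._grid_lower, name, **kwargs)

          def grid_upper(self, name, **kwargs):
              return self._lookup(self._grid_upper, name, **kwargs)

          def scaling(self, name, **kwargs):
              return self._lookup(self._scaling, name, **kwargs)

      # Usage: one fixed bound and one bound computed from the model information.
      params = ParamObject()
      params._add('pA', grid_lower=0.5, grid_upper=1.0, scaling=1.0)
      params._add('i0', grid_lower=0.0,
                  grid_upper=lambda model_info=None: 2.0 * max(model_info),
                  scaling=lambda model_info=None: max(model_info))
      print(params.grid_upper('i0', model_info=[1000.0, 2500.0]))   # 5000.0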
- Modified the analysis specific API optimisation method. The base calculate(), grid_search() and minimise() methods now all accept the scaling_matrix argument, and the minimise() scaling argument has been removed. This scaling_matrix argument should be a per-model list of scaling matrices. To handle the change, the pipe_control.minimise.assemble_scaling_matrix() function has been created. This uses the new parameter object scaling values to create the list of scaling matrices. This will in the end replace all of the analysis specific assemble_scaling_matrix() functions and simplify their optimisation code paths.
- Changed the order of operations in the minimisation user function backends. The specific analysis API overfit_deselect() method needs to be called before any grid bounds, increments, or the scaling matrices are assembled. This is for the cases when the grid bounds or scaling factors are functions rather than values.
- Converted the relaxation curve-fitting analysis to the new grid bounds and scaling factor design. The parameter object now registers the grid bounds and scaling factors for all of the curve-fitting parameters. This includes the three functions i_scaling(), i0() and iinf() in the specific_analyses.relax_fit.parameter_object module for calculating some of these values. The specific_analyses.relax_fit.parameters.assemble_scaling_matrix() function has been deleted as this is now provided by the upstream code in pipe_control.minimise. And the API methods grid_search() and minimise() have been modified to accept the list of scaling matrices. As the grid bounds and increments are now handled by the upstream pipe_control.minimise.grid_setup() function, the specific_analyses.relax_fit.optimisation.grid_search_setup() function was redundant and was deleted.
- Created the consistency testing specific API method get_param_names(). This is now required for the minimise.calculate user function, specifically for the analysis independent assemble_scaling_matrix() function. The get_param_names() method simply returns the fixed list of parameter names.
- Standardisation of the specific analysis API with respect to model_loop() and base_data_loop(). The model information arguments for the data returned by model_loop(), and the data arguments for the data returned by base_data_loop(), have been standardised throughout the API.
- Epydoc parameter order rearrangement in the specific analysis API base class.
- Updated the specific analysis API common methods for the recent model_info argument changes.
- Updated all of the specific API calculate() methods to accept the scaling_matrix argument. The list of per-model scaling matrices is now created independently of the analysis type by the pipe_control.minimise methods for the minimise.calculate, minimise.grid_search and minimise.execute user functions and sent into the specific analysis backend.
- Updated all of the specific API grid_search() methods to accept the scaling_matrix argument. The list of per-model scaling matrices is now created independently of the analysis type by the pipe_control.minimise methods for the minimise.calculate, minimise.grid_search and minimise.execute user functions and sent into the specific analysis backend. The argument is also passed into the minimise() API method from the grid_search() method when that is used.
- Updated all of the specific API minimise() methods to accept the scaling_matrix argument. The list of per-model scaling matrices is now created independently of the analysis type by the pipe_control.minimise methods for the minimise.calculate, minimise.grid_search and minimise.execute user functions and sent into the specific analysis backend.
- Fix for the Monte Carlo simulations for the model_info argument changes in the specific API.
- Fixes for the consistency testing and J(ω) mapping calculate() methods. This is for the changes to the data_init() specific analysis API method.
- More fixes for the Monte Carlo simulations for the model_info argument changes in the specific API.
- Updated all of the data_init() specific API calls where the spin ID is expected.
- Fixes for the _data_init_spin() specific API common method. The data returned from _base_data_loop_spin() is just the spin ID, the spin container is not included.
- Updated the eliminate user function backend to work with the model_info argument changes in the specific API.
- The new pipe_control.minimise module functions can now handle models with no parameters. The new assemble_scaling_matrix() and grid_setup() functions will now handle models with no parameters, as this is required for the relaxation dispersion analysis.
- More fixes for the eliminate user function backend. This is for the model_info argument changes in the specific API.
- Fixes for the grid search backend for a recent breakage and expansion of its capabilities. The user supplied lower and upper bounds for the grid search were no longer being scaled via the scaling matrix. In addition, the code has been refactored to be simpler and more flexible. The user can now supply just the lower or just the upper bounds and the grid search will work.
- The grid search setup function now prints out the grid search bounds to be used. This is in the pipe_control.minimise.grid_setup() function, hence it is analysis independent. This is useful feedback for the user to know what the grid search is doing. And it will be even more useful for the zooming grid search to understand what is happening.
- The grid search setup printout now also includes the number of increments for each parameter.
- Modified the new print_model_title() specific analysis API method. This now accepts the prefix argument for creating different titles independently of the specific analysis.
- The grid search setup function now uses the prefix argument to the print_model_title() API function. This is simply set to 'Grid search setup:'.
- The relaxation dispersion API now uses the MODEL_R2EFF variable for identifying the R2eff model.
- Changes to the minimise.grid_search user function frontend. The Boolean constraints argument has been shifted to the end, and empty lines have been removed.
- Epydoc docstring fixes for the keyword arguments of the pipe_control.minimise module.
- Shifted the constraints Boolean argument to the end of the grid_search() function argument list.
- Major change to the grid_search user function. The minimise.grid_search user function now accepts the skip_preset flag. When True, the grid search will skip any parameters with a preset value. This allows the user to set parameters via the value.set user function and then have these parameters skipped in the grid search. The new skip_preset argument is passed into the pipe_control.minimise.grid_setup() function in the backend. This then sets both the grid lower and upper bounds to the preset parameter value and sets the number of increments to 1 for that parameter so that it is essentially skipped in the grid search.
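  A minimal sketch of the skipping idea (hypothetical names, not the exact grid_setup() code): a parameter with a preset value has its lower and upper bounds collapsed to that value and its increment count set to 1, so the grid search never varies it.

      import numpy as np

      def setup_grid(values, lower, upper, inc, skip_preset=True):
          """Collapse the grid dimensions of preset parameters to a single point."""
          lower, upper, inc = list(lower), list(upper), list(inc)
          for i, value in enumerate(values):
              # A parameter counts as preset if it has a real value (not None/NaN).
              if skip_preset and value is not None and not np.isnan(value):
                  lower[i] = upper[i] = value
                  inc[i] = 1
          return lower, upper, inc

      # Usage: the second parameter was preset via value.set, so it is skipped.
      values = [np.nan, 5.0, np.nan]
      print(setup_grid(values, lower=[0, 0, 0], upper=[10, 10, 10], inc=[11, 11, 11]))
      # ([0, 5.0, 0], [10, 5.0, 10], [11, 1, 11])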
- Small change to the table printed out during the minimise.grid_search setup.
- Fix for the skipping of preset parameters in the grid search. Dictionary and list type parameters are now handled correctly.
- Converted the relaxation dispersion analysis to the new grid bounds and scaling factor design. The parameter object now registers the grid bounds and scaling factors for all of the dispersion parameters. This includes the functions dw_lower(), dwH_lower(), pA_lower() and i0_upper() in the specific_analyses.relax_disp.parameter_object module for calculating some of these values. The specific_analyses.relax_disp.parameters.assemble_scaling_matrix() function has been deleted as this is now provided by the upstream code in pipe_control.minimise. And the API methods grid_search() and minimise() have been modified to accept the list of scaling matrices. As the grid bounds and increments are now handled by the upstream pipe_control.minimise.grid_setup() function, the specific_analyses.relax_disp.optimisation.grid_search_setup() function was redundant and was deleted. The specific_analyses.relax_disp.parameters.get_param_names() function was also modified with the full argument added, to allow either the base parameter names or an augmented form with the dictionary key to be returned for presentation to the user. Importantly, to allow the changes to be operational, the model_loop() API method was redesigned so that, for the R2eff base model, the individual spins rather than spin clusters will be looped over. This allows the specific_analyses.relax_disp.optimisation.minimise_r2eff() function to continue to operate correctly.
- Implemented the J(ω) mapping analysis get_param_names() API method. This simply returns the hardcoded list of 3 parameters of the model, and allows the minimise.calculate user function to operate.
- Updated the _print_model_title_spin() specific API common method. This now accepts the prefix argument and adds this to the title.
- The minimise.grid_search user function can now properly handle preset values of NaN. This occurs when the parameter vector contains values of None due to the parameter not being set and then the Python list being converted to a numpy array. The value of NaN is now caught and the parameter is no longer identified as being preset.
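  This behaviour is easy to see in a small numpy example (illustrative only, not relax code): None elements become NaN when a Python list is cast to a float array, and NaN then has to be caught explicitly.

      import numpy as np

      # A parameter vector where the second parameter was never set.
      param_vector = [1.5, None, 0.2]

      # Casting to a float array silently turns None into NaN.
      vector = np.array(param_vector, dtype=np.float64)
      print(vector)                     # [1.5 nan 0.2]

      # NaN compares unequal to everything, including itself, so it must be
      # caught explicitly before treating the value as a preset parameter.
      preset = [not np.isnan(v) for v in vector]
      print(preset)                     # [True, False, True]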
- Fixes for the relaxation curve-fitting grid search. The parameters which are not set are no longer defaulting to 0.0. This means that the parameter vector will sometimes contain NaN values, but this is important for the correct operation of the new minimise.grid_search user function backend.
- Updated the NOE analysis to handle the changes for the minimise.calculate user function. This now requires the model_loop() and get_param_names() API methods to be implemented. The first is provided by the API common _model_loop_spin() method and the second simply returns a list of the single 'noe' parameter.
- Created the _print_model_title_global() specific analysis API method. This is to be paired with the _model_loop_single_global() API method and it simply prints out the prefix as the title.
- Created the specific analysis parameter object _add_align_tensor() method. When called by a specific analysis, this will add the [Axx, Ayy, Axy, Axz, Ayz] parameters to the corresponding parameter object.
- Deleted the lib.optimisation module. The checks in the single test_grid_ops() function are implemented in the pipe_control.minimise.grid_setup() function and are now redundant.
- Removed the import of the now deleted lib.optimisation module from the model-free analysis.
- Converted the N-state model analysis to the new grid bounds and scaling factor design. The parameter object now registers the grid bounds and scaling factors for all of the N-state model parameters. The specific_analyses.n_state_model.parameters.assemble_scaling_matrix() function has been deleted as this is now provided by the upstream code in pipe_control.minimise. And the API methods grid_search() and minimise() have been modified to accept the list of scaling matrices. In addition, all of the lower bounds defined in the grid_search() API method have been deleted as this is now in the parameter object. The new API function print_model_title() has been aliased from _print_model_title_global(). And the get_param_names() and get_param_values() API methods have been implemented.
- The grid search upper and lower bound functions must now accept the incs argument. For a few analyses, the number of grid search increments is used to remove the end point of the grid, eliminating duplicate points due to the circular nature of angles. Therefore the parameter object grid_lower() and grid_upper() methods now send the grid increment number for each parameter into all grid bound determining functions. The relaxation dispersion and curve-fitting analyses have been updated for the change.
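  As an illustration of why the increment count matters for circular parameters (a sketch with assumed conventions, not the relax bound functions): if the grid includes both endpoints, the upper bound of an angle must be pulled in by one grid step so that 2π does not duplicate 0.

      import numpy as np

      def angle_upper(incs, lower=0.0, full=2.0 * np.pi):
          """Upper grid bound for a circular angle, excluding the duplicate endpoint."""
          if incs is None or incs == 1:
              return full
          # With incs points including both ends, shrink the range by one step
          # so that the last point does not coincide with the first (0 == 2*pi).
          return full - (full - lower) / incs

      incs = 6
      grid = np.linspace(0.0, angle_upper(incs), incs)
      print(np.round(grid, 3))   # 6 evenly spaced angles, 2*pi itself excluded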
- Converted the frame order analysis to the new grid bounds and scaling factor design. The parameter object now registers the grid bounds and scaling factors for all of the frame order parameters. This includes the functions angle_upper_excluding_bound(), axis_alpha_upper(), cone_angle_lower(), cone_angle_upper(), pivot_grid_bound(), pivot_x_lower(), pivot_x_upper(), pivot_y_lower(), pivot_y_upper(), pivot_z_lower(), and pivot_z_upper() in the specific_analyses.frame_order.parameter_object module for calculating some of these values. The specific_analyses.frame_order.parameters.assemble_scaling_matrix() function has been deleted as this is now provided by the upstream code in pipe_control.minimise. And the API methods grid_search() and minimise() have been modified to accept the list of scaling matrices. As the grid bounds and increments are now handled by the upstream pipe_control.minimise.grid_setup() function, the setup and error checking code in the grid_search() API method was redundant and was deleted.
- Modified all calls to the parameter object _add() base class method. These are now all spread across multiple lines, with each argument on a separate line. This is for easier maintenance of the specific analysis parameters, as the code is now much cleaner and argument changes will only have diffs for that argument. It is also visually easier to see all the settings for each parameter.
- Fix for the grid search setup printout when parameters are preset.
- Changes to the diffusion tensor initialisation in the model-free auto-analysis. The values of the tensor are now initialised to None. This is to allow for the new grid search preset flag, which defaults to True; setting the values to None indicates that a grid search should be performed.
- The diffusion_tensor.init user function can now set initial tensor parameter values of None. This is to allow for the new grid search preset flag. Allowing the values to be None therefore enables a grid search to be performed by default.
- Created two new model-free system tests. These are Mf.test_m0_grid_with_grid_search and Mf.test_m0_grid_vs_m1_with_grid_search. Their aim is to better test the grid search in a model-free analysis when parameters are preset.
- Converted the model-free analysis to the new grid bounds and scaling factor design. The parameter object now registers the grid bounds and scaling factors for all of the model-free parameters. This includes the functions rex_scaling() and rex_upper() in the specific_analyses.model_free.parameter_object module for calculating some of these values. The base parameter object has also been updated, as that is where the diffusion parameters are defined. Here the da_lower() and da_upper() functions have been defined to handle the different Da value constraints. The specific_analyses.model_free.parameters.assemble_scaling_matrix() function has been deleted as this is now provided by the upstream code in pipe_control.minimise. And the API methods grid_search() and minimise() have been modified to accept the list of scaling matrices. As the grid bounds and increments are now handled by the upstream pipe_control.minimise.grid_setup() function, the grid_search_config(), grid_search_diff_bounds() and grid_search_spin_bounds() functions in the specific_analyses.model_free.optimisation module were redundant and were deleted. The new API function print_model_title() has been implemented to handle the grid search setup printouts.
- Modified the pipe_control.minimise.grid_setup() function for when no parameters are present. For the current version of minfx to function correctly (1.0.8), the lower, upper and inc values should be set to [] rather than None.
- Fix for lib.arg_check.is_num_or_num_tuple(). When the can_be_none flag is set to True, a tuple of None values is now considered valid. This enables the diffusion_tensor.init user function to accept the spheroid tensor values of (None, None, None, None), and the ellipsoid tensor values as a tuple of six None values.
- Fix for the _prompt.test_diffusion_tensor.Test_diffusion_tensor.test_init_argfail_params unit test. As the diffusion tensor can now be initialised as None, the None value can be accepted and a different RelaxError is raised when the params argument is incorrectly supplied.
- Modified the behaviour of the parameter object units() method. If the unit is set to the default of None, this method will now return an empty string instead of None.
- The rx parameter of the relaxation curve-fitting analysis now has 'rad.s^-1' units defined.
- Implemented the zooming grid search. If the zoom level is set to any value other than 0, then the grid width will be divided by 2^zoom_level and centred at the current parameter values. If the new grid is outside of the bounds of the original grid, the entire grid will be translated so that it lies entirely within the original.
- Modified the zooming grid search algorithm. If the zoom level is negative, hence the grid will be larger than the original, the checks that the grid is within the original are no longer active.
- Changed the minimise.grid_zoom user function. The zoom level can now be any floating point number or integer, including negative values. The user function docstring has been significantly expanded to explain the entire zooming grid search concept.
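  A minimal single-parameter sketch of the zooming algorithm described above (illustrative, not the relax backend): the grid width is divided by 2^zoom_level, centred at the current value, and translated back inside the original bounds when needed; for negative zoom levels the translation checks are skipped.

      def zoom_bounds(lower, upper, centre, zoom_level):
          """Return the zoomed grid bounds for one parameter."""
          width = (upper - lower) / 2.0**zoom_level
          new_lower = centre - width / 2.0
          new_upper = centre + width / 2.0

          # For positive zoom levels, translate the grid back inside the original.
          if zoom_level > 0:
              if new_lower < lower:
                  new_lower, new_upper = lower, lower + width
              elif new_upper > upper:
                  new_lower, new_upper = upper - width, upper
          return new_lower, new_upper

      # Usage: zoom level 2 shrinks the width by 4 and recentres on the current value.
      print(zoom_bounds(0.0, 100.0, centre=90.0, zoom_level=2))    # (75.0, 100.0)
      print(zoom_bounds(0.0, 100.0, centre=50.0, zoom_level=-1))   # (-50.0, 150.0)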
- Alphabetical ordering of the minimisation user functions in the user_functions.minimisation module.
- Large expansion of the minimise.grid_search user function documentation. The previous documentation was essentially non-existent.
- Expanded the minimise.grid_zoom user function documentation. A few sentences about the limitations of the algorithm have been added.
- Completed the Relax_fit.test_zooming_grid_search system test. Now only a single spin is optimised. The zooming levels increase in integer increments from 0 to 50 so that the final zoomed grid is insanely small (as the curve-fitting C modules are incredibly fast, this test is nevertheless relatively quick). The final zooming grid search parameter values are checked to see if they are the same as those optimised in the Relax_fit.test_curve_fitting_height system test to demonstrate the success of the algorithm.
- Modified the grid search upper bounds functions for the relaxation curve-fitting. This is for both the exponential relaxation curve-fitting analysis and the same fitting in the dispersion analysis. The intensity values are doubled and then rounded to the next order. This ensures that I0 and I∞ will be within the grid bounds. Hence the zooming grid search can be used for these curves.
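  A small sketch of one way to read "doubled and then rounded to the next order" (an assumption about the exact rounding rule, not a quote of the relax code):

      from math import ceil, log10

      def intensity_upper(i_max):
          """Upper grid bound: double the maximum intensity, round up to the next power of 10."""
          return 10.0 ** ceil(log10(2.0 * i_max))

      print(intensity_upper(3200.0))    # 10000.0
      print(intensity_upper(60000.0))   # 1000000.0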
- Expanded the documentation for the minimise.calculate user function. This now explains the dual operations of the user function.
- Fixes for some relaxation dispersion system tests not converted to the new optimisation user functions. Minimisation is now via the minimise.execute user function, which used to be the minimise user function.
- Added a 128x128 pixel version of the zoom-in Oxygen icon. This icon size is not available in the repository located at svn://anonsvn.kde.org/home/kde/trunk/kdesupport/oxygen-icons. Therefore the scalable/actions/small/48x48/zoom-in.svgz file was copied and then exported into a 128x128 PNG, and finally converted to a Gzipped EPS file for the relax manual.
- The frame order grid search bound functions can now handle increment values of None or 1. These cases are now caught and the full lower or upper bound is now returned.
- More even spacing for the frame order grid search. This is for the parameters which exclude end points in the grid search, as these excluded points do not decrease the number of increments searched over.
- Even more even spacing for the frame order grid search. This is for the parameters which exclude end points in the grid search, as these excluded points do not decrease the number of increments searched over. This fixes the algorithm for all of the bounds.
- Improved the logic for skipping parameters in the grid search. The logic is also fully explained in the user function documentation.
- Removal of all unused imports. These were identified using the devel_scripts/find_unused_imports.py script.
- Reverted the deletion of the Relax_disp.test_hansen_cpmg_data_to_lm63_3site system test which occurred in relax 3.2.3. See the thread at http://thread.gmane.org/gmane.science.nmr.relax.scm/21774/focus=6300 for the request that this deletion be reverted. This is the only system test for the LM63 3-site dispersion model using real data. Having this test allows for better coverage of the code.
- Updated the Relax_disp.test_hansen_cpmg_data_to_lm63_3site system test. This is for the changes to the optimisation user functions.
- Updated the checks in the Relax_disp.test_hansen_cpmg_data_to_lm63_3site system test. The values were incorrect due to a bug in relax and a non-optimal minfx setting (https://gna.org/bugs/?22210 and https://gna.org/bugs/?22211).
- Fix for a fatal bug for the prompt UI on MS Windows. The improvements in the tab completion support for the prompt UI on Mac OS X systems was fatal for certain Python readline modules on MS Windows, as readline.__doc__ can be None. This is now correctly handled.
- Decreased the precision of the Relax_disp.test_hansen_cpmg_data_to_lm63_3site system test. This is to allow the test to pass on Mac OS X systems.
- Unit test fix for Mac OS X. This is for the test_ns_mmq_2site_korzhnev_2005_15n_dq_data_complex128 test of test_suite.unit_tests._lib._dispersion.test_matrix_exponential.Test_matrix_exponential. The tests no longer check for exact values, but use the assertAlmostEqual() calls instead.
- Deleted the ancient optimisation_testing.py development script, as this no longer works and is of no use.
- Implemented the pipe_control.mol_res_spin.format_info_full() function. This follows from http://thread.gmane.org/gmane.science.nmr.relax.scm/22522/focus=6534. This is a verbose representation of the spin information which can be used for presenting to the user. Functions for shorter string versions will also be of great use, for example as described by Troels at http://thread.gmane.org/gmane.science.nmr.relax.scm/22522/focus=6535.
- Created a unit test for the pipe_control.mol_res_spin.format_info_full() function. This comprehensive test covers all input argument combinations.
- Changed the behaviour of the pipe_control.structure.mass.pipe_centre_of_mass() function. This function returns the CoM and optionally the mass of the structural data loaded into the current data pipe. However it was matching the structural data to the molecule-residue-spin data structure and skipping spins that were deselected. This illogical deselection part has been eliminated, as spins can be deselected for various analysis purposes and this should not change the CoM. The deletion also significantly speeds up the function.
- Added Andy Baldwin's 2013 R1ρ relaxation dispersion model (BK13) to the manual. The model has been added to the table of dispersion models and to the dispersion software comparison table of the dispersion chapter of the manual. The citation has also been added to the bibliography.
- The BK13 dispersion model is now properly added to the software comparison table.
- Added the 'BK13' and 'BK13 full' dispersion models to the to do section of the manual.
- Standardisation of the author names in the bibliography of the relax manual.
- Added links for the BK13 model to https://gna.org/support/?3155 in the manual.
- Expansion of the 'to do' section of the dispersion chapter of the manual.
- Editing of the 'to do' section of the dispersion chapter of the manual.
- Split out the interpolation in specific_analyses.relax_disp.data.plot_disp_curves() into a separate function. This is to prepare for an interpolation function for the spin-lock offset rather than the spin-lock field strength for R1ρ models. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Split out the looping over frequency and offset into its own function in specific_analyses.relax_disp.data.plot_disp_curves(). Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Split out the writing of dispersion graph files in specific_analyses.relax_disp.data.plot_disp_curves(). This is to prepare for a stand-alone function to plot R1ρ graphs, interpolating θ through spin-lock offset rather than spin-lock field strength. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Fix for function calling and default values of None in sub-plotting functions. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Large extension of the specific_analyses.relax_disp.data module by adding several helper plotting functions. This is to prepare for plotting R1ρ/R2 as a function of the effective field in the rotating frame, ωeff. R2 = R1ρ / sin²(θ) - R1 / tan²(θ) = (R1ρ - R1·cos²(θ)) / sin²(θ). Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Extended specific_analyses.relax_disp.optimisation.back_calc_r2eff() to handle interpolated spin-lock offset values. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Removed the incorrect addition of an empty offset dimension in the get_back_calc() function of target_functions.relax_disp. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added the back calculated R2 as a function of the effective field in the rotating frame, ωeff. R1ρ/R2 is defined as: R2 = R1ρ / sin²(θ) - R1 / tan²(θ) = (R1ρ - R1·cos²(θ)) / sin²(θ). This is described more at: http://wiki.nmr-relax.com/DPL94#Equation_-_re-writed_forms. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
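  As a small worked example of the equation above (illustrative values only), the two forms give identical results:

      from math import cos, sin, tan

      def r2_from_r1rho(r1rho, r1, theta):
          """Back calculate R2 from R1rho, R1 and the rotating frame tilt angle theta (rad)."""
          return (r1rho - r1 * cos(theta)**2) / sin(theta)**2

      r1rho, r1, theta = 12.0, 1.5, 1.0
      print(r2_from_r1rho(r1rho, r1, theta))
      print(r1rho / sin(theta)**2 - r1 / tan(theta)**2)   # same value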
- Added an intermediate attempt to show the back calculated data in the graph of R1ρ/R2 as a function of the effective field in the rotating frame, ωeff. The graph aims for the representation of Figure 2 in Kjaergaard et al. 2013 (http://dx.doi.org/10.1021/bi4001062). The figure can be seen at https://gna.org/support/download.php?file_id=20208. It becomes clear that it is not necessary to interpolate through the spin-lock offset; it is sufficient to interpolate through the spin-lock field strengths. The necessary step was the extraction of the effective field in the rotating frame, ωeff. An earlier attempt is shown at http://wiki.nmr-relax.com/File:Matplotlib_52_N_R1_rho_R2eff_w_eff.png. This, however, shows lines for 6 offset values, and the question is how to show a single line of interpolation. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added an NMR library function to convert a given frequency from rad/s to ppm units. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Hard-coded restriction of the extra R1ρ plotting to the models DPL94, TP02, TAP03, MP05, and NS R1rho 2-site. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Fix for sending the correct data structures to the target function, and a fix for the spin index which was always zero in graph production. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Moved the file_name creation out of the interpolate function, to make it a general function for interpolating. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Made both interpolation functions, for the spin-lock field strength and the spin-lock offset, return the offset data. This makes it possible to switch between the interpolation functions when plotting R1ρ graphs. This is necessary to produce R2 as a function of the effective field in the rotating frame ωeff, and to produce R1ρ as a function of θ when ramping the spin-lock offset. These graphs can be seen at: http://wiki.nmr-relax.com/Matplotlib_DPL94_R1rho_R2eff. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Changed the interpolation function from offset to spin-lock field strength, to plot R1ρ/R2 as a function of the effective field. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Improved axis label for plotting R1ρ/R2 as function of effective field ωeff. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added sub-title to the plot of R1ρ/R2 as function of effective field. This is to add information about how the effective field has been interpolated. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added functionality to plot R1ρ/R2 as a function of the effective field ωeff for the R2eff model. Also renamed a function to better reflect its functionality. The hard-coding of which models to plot has been removed; if the experiment type is R1ρ, the plotting will commence. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added the spin specific residue name and spin_id to the title of the dispersion plots. This is handy, since it is often of interest to have this information at hand, when looking through many graphs. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Renamed the interpolation function for dispersion values and improved its epydoc documentation. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Renamed the interpolating function for offset, and improved the epydoc information. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Corrected the epydoc information for the return_offset_data() function in specific_analyses.relax_disp.data. The function has been extended to return more data. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Renamed the function to reflect that it returns data in the correct xmgrace form. Also improved the epydoc information for the returned values. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Renamed the other function to reflect that it returns data in the correct xmgrace form. Also improved the epydoc information for the returned values. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added a function for calculating the rotating frame parameters to lib/nmr.py. This function is called several times by the plotting functions in specific_analyses/relax_disp/data.py. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Replaced the repeated calculation of the rotating frame parameters with the new function in lib/nmr.py. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
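  A minimal sketch of the standard rotating frame relations assumed here (illustrative only, not necessarily the exact lib/nmr.py signature): the effective field ωeff and the tilt angle θ follow from the offset Ω and the spin-lock field strength ω1.

      from math import atan2, sqrt, pi

      def rotating_frame_params(offset, omega1):
          """Return (theta, w_eff) from the offset Omega and spin-lock field omega1 (both in rad/s)."""
          theta = atan2(omega1, offset)           # tilt angle of the effective field
          w_eff = sqrt(offset**2 + omega1**2)     # effective field in the rotating frame
          return theta, w_eff

      # Usage: a 1500 Hz spin-lock applied 500 Hz off resonance.
      offset = 2.0 * pi * 500.0
      omega1 = 2.0 * pi * 1500.0
      theta, w_eff = rotating_frame_params(offset, omega1)
      print(theta, w_eff / (2.0 * pi))            # tilt angle in rad, effective field in Hz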
- Skip the production of the R1ρ/R2 versus effective field ωeff plots when spin.isotope is not present. This can happen for 'exp_fit' model curve fitting. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added functionality to the function to accept how the first part of the filename is formed. This is in preparation for reusing the same plotting function. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Renamed the plotting sub-function and made it accept different file name and X-axis arguments. This allows the sub-plot function to be reused to plot against different X-axes. Added a plot of R1ρ as a function of θ, interpolated against the spin-lock field strength. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Prepared a flag to tell which data type to interpolate through. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Preparation to interpolate through the offset to plot R1ρ as function of θ, interpolated through spin-lock offset. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added graph functionality, to plot R1ρ as function of θ, when spin-lock offset is interpolated. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Ensured the production of the R1ρ/R2 versus effective field ωeff plots when spin.isotope is not present, with the offset in radians instead set to 0.0. This can happen for 'exp_fit' model curve fitting. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added function to return spin info, and a function to return a spin string for graphs. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Replaced the spin info string in the graph titles with the new function. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added keywords to be used by the backend function of plot_disp_curves. The keyword 'y_axis' determines which y data are plotted on this axis. The keyword 'x_axis' determines which x data are plotted on this axis. The keyword 'extend_hz' determines how far to extend the interpolated CPMG frequency or spin-lock field strength. The keyword 'extend_ppm' determines how far to extend the interpolated spin-lock offsets. The keyword 'interpolate' determines whether to interpolate the dispersion points or the offset. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Made the plotting function uniform in accepting both CPMG and R1ρ data. Also added a function to return the data depending on whether it is measured data, back calculated, interpolated, or residual. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Renamed the return grace data function to a better, shorter name. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Renamed the other return grace data function to a better, shorter name. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Hardcoded the xmgrace colour_order, and added a function to return the data label and data plot settings depending on the data type. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Renamed the plotting function to reflect that it writes to file. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Removed unused plotting function. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Streamlined the plotting functions to have similar input. Reordered the output from return_offset_data(), interpolate_disp() and interpolate_offset() to reflect the order of the data type. Made the input to return_grace_data_vs_disp() and return_grace_data_vs_offset() the same. Added the interpolate flag to return_grace_data_vs_disp() and return_grace_data_vs_offset() to help return the correct X value. Added the interpolate flag to return_x_y_point() to help determine whether the "disp point" or "offset point" should be returned. Added the "offset point" to the return_x_y_point() function to make it possible to plot against the offset. Cleaned up the return_grace_data_vs_offset() function to use the newly created return_x_y_labels() function. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Fix for catching the output after the reordering of the return_offset_data() function. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Removed unused return_grace_data() function. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Made uniform function for returning x_axis and y_axis labels for xmgrace plotting. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Renamed the return functions to reflect that they are specific to xmgrace plotting. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Extended relax_disp auto_analyses to plot special R1ρ graphs. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Modified front-end user function relax_disp.plot_disp_curves to send new arguments to back-end function. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added new Unicode symbols to be used by the GUI drop-down menu. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Modified back-end of plot_disp_curves() to reflect changes to the front-end function. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Extended system test Relax_disp.test_r1rho_kjaergaard_auto(), to check that the expected graphs exist. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added model No Rex to system test Relax_disp.test_r1rho_kjaergaard_auto(), to check all graphs are produced. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added function to return the initial part of the file name for grace plotting. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Modified system test to use the new function to return initial part of file name for grace plotting. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Started testing all possible combinations of graphs for R1ρ analysis. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added more printout, to detect which graphs are not working. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added forgotten "interpolate" type to function which return X,Y point to xmgrace graphs. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Fix for interpolation graph when plotting R1ρ/R2 as function of offset (ppm). Missed to extract the offset value from list. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Forcing overwrite of special R1ρ graphs in auto analyses in relax_disp. The other graphs are also auto forced. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added system test Relax_disp.test_r1rho_kjaergaard_auto_check_graphs, to check that the contents of all combinations of graphs are consistent. The system test actual show that the error is changing per run-through. This is a bug, which should be corrected. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added graphs to check against in system test test_r1rho_kjaergaard_auto_check_graphs. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Code validation of system test file for Relax_disp. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Fix to system test Relax_disp.test_r1rho_kjaergaard_auto_check_graphs by only comparing X,Y values, and skipping the error. This is a hack until the error difference bug gets corrected. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added system test Relax_disp.test_kteilum_fmpoulsen_makke_check_graphs() to check all possible combinations of dispersion plotting. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Fix for the GUI text on MS Windows, since the subscript 1 and the Greek θ symbol are not working in this Unicode system. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Fix for desc_short in the user function relax_disp.plot_disp_curves. The text "The " is already added by the formatting. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Fix for system test, after moving graphs to check against. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Moved graph files up one level in system test. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Fix for forgotten removal of counter. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added a check function for relax_disp. This function checks whether interpolation against the offset is requested for non-R1ρ experiment types, and raises an error if so. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added check function to plot_disp_curves, to check that CPMG exp types are not interpolated against offset, which is not implemented. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Fix for forgotten "1" in lib text GUI. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Improved description in GUI text for user function relax_disp.plot_disp_curves. The improved description now explains the new features. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Extended graph labelling, file naming and return of data for multiple CPMG graphs types. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Added the system test Relax_disp.test_kteilum_fmpoulsen_makke_check_graphs to check all CPMG graph combinations of: y_axis_types = [Y_AXIS_R2_EFF, Y_AXIS_R2_R1RHO]; x_axis_types = [X_AXIS_DISP, X_AXIS_THETA, X_AXIS_W_EFF]; interpolate_types = [INTERPOLATE_DISP]. This is a total of 6 graphs. The graphs will in most cases be identical, since the θ angle is calculated to be 90 degrees and R1 is returned as 0.0, so that R2 = (R1ρ - R1·cos²(θ)) / sin²(θ) = R1ρ = R2eff for the CPMG models. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
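For reference, the relation used in the entry above and its CPMG limit can be written as:

```latex
% R2 back calculated from R1rho; with theta = 90 degrees and R1 = 0
% (the CPMG case above) this reduces to R2eff:
R_2 = \frac{R_{1\rho} - R_1 \cos^2\theta}{\sin^2\theta}
    \quad\xrightarrow{\ \theta = 90^\circ,\ R_1 = 0\ }\quad
    R_2 = R_{1\rho} = R_{2,\mathrm{eff}}
```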
- Added graphs to check against for system test: Relax_disp.test_kteilum_fmpoulsen_makke_check_graphs. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Expanded "ex." to "example" in the help text for the function. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Fix for a unit test where the return values of the return_offset() function have been expanded and reordered. Sr #3124: Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Sr #3138: Interpolating θ through spin-lock offset Ω, rather than spin-lock field strength ω1.
- Created the Bruker.test_bug_22411_T1_read_fail system test. This is to catch bug #22411 as reported by Olena Dobrovolska.
- Fix for system test Relax_disp.test_kteilum_fmpoulsen_makke_check_graphs where minimise has been extended with execute.
- Changed graphs after new minimisation algorithm has been implemented. The values are now slightly different.
- Implemented a first attempt at striding through the data when computing eig() of higher dimensional data. The system test test_cpmg_synthetic_b14_to_ns3d_cluster survived this transformation, but its runtime goes from about 11 seconds to 22 seconds.
- Implemented a second attempt at striding through the data when computing eig() of higher dimensional data. This is for data of the form: NS, NM, NO, ND, Row, Col. The system test test_sprangers_data_to_ns_mmq_2site survived this transformation, but its runtime goes from about 2 seconds to 4 seconds.
- Created a function to create the helper index numpy array, to help figure out the indices for storing data in the exchange data matrix. This is for striding through the data and storing it correctly in the data matrix. This is for the special situation where the numpy version is < 1.8, where the numpy.linalg.eig() function can only be performed on square matrices and not on higher dimensional data. For this situation it is necessary to stride through the data.
- Created the numpy array self.index in the target function, which contains the indices for storing the data. This is for situations where the numpy version is under 1.8.
- Added function to get the data view via striding through a higher dimensional column numpy array.
- Extracted the data view of the index numpy array in the target function.
- Profiling showed that it was not faster to perform the index view.
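The striding described in the entries above works around the fact that, for numpy < 1.8, numpy.linalg.eig() only accepts a single square matrix. A minimal sketch of the idea, with an illustrative array shape and helper name (not the relax code itself):

```python
import numpy as np

def eig_stride(matrices):
    """Diagonalise a stack of square matrices one at a time, for numpy
    versions where numpy.linalg.eig() cannot handle higher dimensional
    input (numpy < 1.8).  The shape (NS, NM, NO, ND, Row, Col) mirrors
    the data layout mentioned above; the function name is illustrative.
    """
    NS, NM, NO, ND, rows, cols = matrices.shape
    evals = np.empty((NS, NM, NO, ND, rows), dtype=np.complex128)
    evecs = np.empty((NS, NM, NO, ND, rows, cols), dtype=np.complex128)

    # Stride through the leading dimensions, diagonalising each
    # square (Row x Col) matrix individually.
    for si in range(NS):
        for mi in range(NM):
            for oi in range(NO):
                for di in range(ND):
                    w, v = np.linalg.eig(matrices[si, mi, oi, di])
                    evals[si, mi, oi, di] = w
                    evecs[si, mi, oi, di] = v
    return evals, evecs
```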
- Made new general stride helper function and matrix_exponential function.
- Changed to the matrix_exponential function for NS R1rho 2-site.
- Removed all unused helper functions, and matrix exponential functions. They are now condensed to the fewest possible functions.
- Fix for eye matrix being formed incorrectly.
- Replaced all matrix_exponential functions in numerical models to use the new general matrix_exponential function.
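The general matrix_exponential mentioned here is, in essence, an eigendecomposition based matrix exponential applied across the stacked relaxation matrices. A simplified, self-contained sketch of that technique (not the relax implementation itself):

```python
import numpy as np

def matrix_exponential(A, dt=1.0):
    """Return exp(A*dt) for a stack of square matrices of shape
    (..., N, N), via exp(A) = V diag(exp(w)) V^-1.  With numpy >= 1.8
    the eig(), inv() and matmul calls all broadcast over the leading
    dimensions, so no explicit striding is needed.  Sketch only.
    """
    w, V = np.linalg.eig(A)                   # eigenvalues and eigenvectors
    D = np.zeros(A.shape, dtype=np.complex128)
    idx = np.arange(A.shape[-1])
    D[..., idx, idx] = np.exp(w * dt)         # diag(exp(w*dt)) per matrix
    # Reassemble and drop the numerically negligible imaginary parts.
    return np.real(V @ D @ np.linalg.inv(V))
```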
- Added a warning message to the auto-analysis of relax_disp if numpy is below version 1.8 and a numerical model is used. This will make the analysis 5-6 times slower.
- Fixes for numpy version under 1.8, when striding through data.
- Fix to unit tests, after changing the name of matrix_exponential function.
- Added graphs and results for run with MC=2000, for system test Relax_disp.test_r1rho_kjaergaard_auto(). This is to be able to extend graph testing for interpolated R1ρ graphs, and to add figures to the latex manual.
- Added list of R1ρ models, which use R1 in their equations. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models. Bug #21788: Only Warning is raised for missing loading R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Modified the warning and error messages raised when calling return_r1_data(). A warning is now raised if no R1 data is available, and an error is raised if the R1ρ model is expected to have R1 data and it is not available. This makes the system test Relax_disp.test_r1rho_kjaergaard_missing_r1() fail, which is the expected behaviour. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models. Bug #21788: Only Warning is raised for missing loading R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added the model "MODEL_DPL94_FIT_R1", to the full list of models. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- In system test Relax_disp.test_r1rho_kjaergaard_missing_r1(), started using the new model MODEL_DPL94_FIT_R1. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added "r1_fit" as a parameter object. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added functionality to return r1_fit parameter in loop_parameters() function. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added model variable MODEL_DPL94_FIT_R1, to relax_disp target function. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Split the target function of model DPL94 into a func_DPL94 and calc_DPL94. This is to prepare for a target function func_DPL94_fit_r1. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added initial target function for model DPL94_fit_r1. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added list of R1ρ models, which can fit R1 in their equations. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Modified the return_r1_data() function to return a numpy array of None values if the model is in the MODEL_LIST_R1RHO_FIT_R1 list. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Modified the target function func_DPL94_fit_r1() to unpack the fitted parameters correctly. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added linear constraints for the parameter "r1_fit". Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added warning message when using function return_r1_data(), and model is in list MODEL_LIST_R1RHO_FIT_R1. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added model DPL94_FIT_R1 to the list of MODEL_LIST_R1RHO and MODEL_LIST_R1RHO_FULL. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Modified return_r1_data(), to be dependent on fitting model. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Made function return_r1_err_data() be dependent on model type. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Replaced instances of "['r2', 'r2a', 'r2b']" with variable PARAMS_R20. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Implemented a list of models which use the parameter of inverted relax delay times. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Implemented a list of models which use the R2B0 parameter. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Fix in target function for relax_disp, where model IT99 does not belong to model list with several chemical shift correlated parameters. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added a new variable for models which have mixed Δω parameters with two such variables, for example both Δω and ΔωH, or ΔωAB and ΔωBC, or φexB and φexC. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added the models MODEL_NS_R1RHO_3SITE and MODEL_NS_R1RHO_3SITE_LINEAR to the list of models which have two Δω parameters. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added a new variable for models which have mixed Δω parameters with four such variables. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added to relax_disp auto_analyses, that R1_fit should be plotted and written out. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added 2 new models, MODEL_NOREX_R1RHO and MODEL_NOREX_R1RHO_FIT_R1. The "NOREX" model does not cover the R1ρ models. The target function for "NOREX" is calculated as: back_calc = R20. For R1ρ models, R20 is equivalent to R1ρ prime (R1ρ'), which for example in the DPL94 model would mean: R1ρ = R1ρ'. But for the "NOREX" case, the return should be R1ρ = R1·cos²(θ) + (R1ρ' + 0)·sin²(θ). This affects all off-resonance model calculations. These two target functions will be implemented. Bug #22440: The "NOREX" model is not covering R1ρ models. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
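In equation form, the off-resonance "no exchange" return described above is:

```latex
% No chemical exchange for off-resonance R1rho data, where R1rho'
% is the exchange-free rate:
R_{1\rho} = R_1 \cos^2\theta + R_{1\rho}' \sin^2\theta
```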
- Made the model MODEL_NOREX_R1RHO_FIT_R1, be tested in system test Relax_disp.test_r1rho_kjaergaard_missing_r1(). Bug #22440: The "NOREX" model is not covering R1ρ models. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added models MODEL_NOREX_R1RHO and MODEL_NOREX_R1RHO_FIT_R1 to MODEL_LIST_FULL. Bug #22440: The "NOREX" model is not covering R1ρ models. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Implemented target and calculation function for MODEL_NOREX_R1RHO, MODEL_NOREX_R1RHO_FIT_R1. Bug #22440: The "NOREX" model is not covering R1ρ models. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Made the model "MODEL_NOREX_R1RHO", be testes in system test Relax_disp.test_r1rho_kjaergaard_auto. This is for system test where R1 has been loaded from earlier results, which was not analysed in relax. Bug #22440: The "NOREX" model is not covering R1ρ models. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Made list of models which fit pA or pA and pB. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added a function to the auto-analysis to test whether it is meaningful to write out and plot a given parameter. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Made the writing out of the parameter pC be tested with the new function. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Implemented models list with φex, φexB, and φexC, and added to test in auto_analyses of relax_disp. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Made use of the dictionary MODEL_PARAMS, to determine if parameter is present. This makes the list of models belonging to parameter lists superfluous. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Removed unnecessary list of models which support a parameter. This functionality already exists with the dictionary MODEL_PARAMS. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Rearranged the writing out of parameters in auto_analysis of relax_disp. This is to prevent writing out all possible parameters in the final round, if any of those parameters have not been tested. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Extended the writing and checking of parameters, to use different file name, than the parameter name. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Moved the auto-analyses writing out of ωeff and θ into check for has_r1rho_exp_type(). Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Re-arranged all the model variable lists to be able to re-use earlier lists. This is to prevent user errors when setting up the lists, and to re-use the lists throughout the code. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Fix for MODEL_NOREX_R1RHO_FIT_R1 not being part of list: MODEL_LIST_DISP. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Moved the auto_analyses plot of special R1ρ graphs into the check of has_r1rho_exp_type(). Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Moved the auto-analyses plotting and writing of R2, r2a and r2b for CPMG models into test of has_cpmg_exp_type(). Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Replaced in auto analysis all instances of No Rex and R2eff with its equivalent defined variables. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Re-arranged plotting and writing in auto-analyses of relax disp, when model is R2eff. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Removed unused variables in auto-analyses of relax_disp. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Implemented the list "MODEL_LIST_NEST", which define which model are used for nesting. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Fix for the nested copying of R2, if using a nested list. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added the equivalent R1 fit models for: TP02, TAP03, MP05 and NS R1rho 2-site. The R1 fit models will not be implemented for the 3-site models, because there would be too many variables. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added the new R1 fit models to system test Relax_disp.test_r1rho_kjaergaard_missing_r1(). Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Split target function of model TP02, into a calc and two func_TP02* variants. One target function will use measured R1 values, while one target function will use the fitted R1 values. They will use the same calculation function. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Fix for error checking covering R1ρ off resonance models in target function. This is for checking presence of chemical shifts and R1. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Split target function of model TAP03, into a calc and two func_TAP03* variants. One target function will use measured R1 values, while one target function will use the fitted R1 values. They will use the same calculation function. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Split target function of model MP05, into a calc and two func_MP05* variants. One target function will use measured R1 values, while one target function will use the fitted R1 values. They will use the same calculation function. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Fix for the system test Relax_disp.test_r1rho_kjaergaard_auto_check_graphs(), where the special R1ρ graphs are no longer produced for the R2eff models. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Split target function of model ns_r1rho_2site, into a calc and two func_ns_r1rho_2site* variants. One target function will use measured R1 values, while one target function will use the fitted R1 values. They will use the same calculation function. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Removed num_points from the target and lib functions of the ns_r1rho_2site model. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added a variable describing the model year for all relaxation dispersion models. This could be used when trying to write an intelligent detect-and-select nesting function. Such a function needs some meta data describing the models, in order to sort self.models before the calculations and to select a proper nested model pipe. Other meta data could be: the accepted experiment types, whether a CPMG model is the full or normal version, whether R1 is fitted or loaded for R1ρ, and whether the model is of analytic, silico or numeric type. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added an initial unit test class for testing specific_analysis.relax_disp.variables. More tests will be added once a nesting selection function has been written here. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added dictionary for returning year, when using model as key. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added unit test for the dictionary of model years. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Reused EXP_TYPE_LIST_CPMG and EXP_TYPE_LIST_R1RHO to build EXP_TYPE_LIST, the list of all dispersion experiment types. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added a common EXP_TYPE_CPMG MMQ description for models which handle MMQ data. This is part of adding meta data for each model, making it possible to devise a sensible nesting selection function. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added meta data about EXP_TYPE per model, and made a dictionary for it. Added unit test for the new dictionary. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added three new EXP_TYPE variables: EXP_TYPE_R2EFF = 'R2eff/R1rho', EXP_TYPE_NOREX = 'No Rex', EXP_TYPE_NOREX_R1RHO = 'No Rex: R1rho off res'. These are used to add meta-data information to each model, making it possible to make a nesting function, determining which model to nest from. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added model meta information about number of chemical exchange sites. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added meta information about equation type. The models are divided into: analytic, silico or numeric. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
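A simplified picture of the per-model meta data built up in the preceding entries, using the CR72 model as the only example and illustrative dictionary names (relax keeps these in its own MODEL_* structures):

```python
# Illustrative meta data tables, keyed by model name.
MODEL_YEAR = {'CR72': 1972}            # publication year of the model
MODEL_EXP_TYPE = {'CR72': ['CPMG']}    # supported experiment types
MODEL_SITES = {'CR72': 2}              # number of chemical exchange sites
MODEL_EQ = {'CR72': 'analytic'}        # analytic, silico or numeric

def model_info(model):
    """Collect the meta data for a single model into one dictionary."""
    return {
        'year': MODEL_YEAR[model],
        'exp_type': MODEL_EXP_TYPE[model],
        'sites': MODEL_SITES[model],
        'eq': MODEL_EQ[model],
    }

# Example: model_info('CR72') ->
# {'year': 1972, 'exp_type': ['CPMG'], 'sites': 2, 'eq': 'analytic'}
```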
- Implemented nesting function, which will determine which model to nest from. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- In auto analysis of relax_disp, started implementing the new nesting function. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Modified nesting function to return all model info for the current model, and the comparable model. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Fix for nesting kex, when model is CR72, and analysed models is LM63. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Fix for nesting kex, when model is CR72, and analysed models is IT99. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added a return from the nesting function if all else fails. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Replaced the test for whether the model info is acquired for a numerical model from an analytical model. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Improved the printing when nesting parameters from equivalent models. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Removed unused import of models in auto-analyses of relax_disp. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Improved the printing of system test Relax_disp.test_r1rho_kjaergaard_missing_r1. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Improved the printing of system test Relax_disp.test_r1rho_kjaergaard_missing_r1. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added a function to sort the models before the auto-analysis. They are sorted in the order: experiment type (EXP_TYPE_R2EFF, EXP_TYPE_NOREX, EXP_TYPE_NOREX_R1RHO, EXP_TYPE_CPMG_SQ, EXP_TYPE_CPMG_MMQ, EXP_TYPE_R1RHO); equation type (EQ_SILICO, EQ_ANALYTIC, EQ_NUMERIC); number of chemical sites (2 or 3); year (newest models first); number of parameters. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
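A minimal sketch of this kind of multi-key sort (ordering lists, dictionary layout and helper names are all illustrative, not the relax function itself):

```python
# Hypothetical ordering tables mirroring the sort order listed above.
EXP_ORDER = ['R2eff', 'No Rex', 'No Rex R1rho off res',
             'CPMG SQ', 'CPMG MMQ', 'R1rho']
EQ_ORDER = ['silico', 'analytic', 'numeric']

def sort_models(models, info):
    """Sort models for the auto-analysis.

    'info' maps a model name to a dict with the keys 'exp_type' (a
    single string here), 'eq', 'sites', 'year' and 'params' (assumed
    layout).  Newer models come first within the same experiment type,
    equation type and number of sites."""
    def key(model):
        m = info[model]
        return (EXP_ORDER.index(m['exp_type']),
                EQ_ORDER.index(m['eq']),
                m['sites'],
                -m['year'],            # newest first
                len(m['params']))
    return sorted(models, key=key)
```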
- Added unit test, to test the expected sorting of models for auto-analyses. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added more models to be tested in system test Relax_disp.test_r1rho_kjaergaard_auto. Bug #22461: NS R1rho 2-site_fit_r1 has extremely high χ2 value in system test Relax_disp.test_r1rho_kjaergaard_missing_r1. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Implemented the sorting of models, for auto-analyses. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Implemented partial reading of the results file. Before reading a results file, it is first determined whether the file exists. This makes it possible to read a directory with partial results from a previous analysis. This can be handy when reading R2eff values in R1ρ experiments for which the error estimation has been prepared with a high number of Monte Carlo simulations. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added a keyword to the relax_disp auto-analysis specifying whether the R2eff values should be optimised. Here optimisation means minimisation and Monte Carlo simulation of the error. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Modified system test Relax_disp.test_r1rho_kjaergaard_missing_r1 to load previous R2eff values, and not optimise them. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Fix in the back end of relax_disp.parameter_copy, where r2a and r2b should be skipped since they have already been copied. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Locked values in system test Relax_disp.test_r1rho_kjaergaard_missing_r1. This is possible after locking the R2eff values and errors from a previous run. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Removed MODEL_NS_CPMG_2SITE_EXPANDED from being analysed in the system test test_hansen_cpmg_data_missing_auto_analysis. The new ordering of models would make MODEL_NS_CPMG_2SITE_EXPANDED be analysed first, with its results copied to the CR72 model. This would interfere with the old results. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Lowered the precision of Δω for the model NS CPMG 2-site expanded in the system test test_hansen_cpmg_data_auto_analysis_numeric. The model NS CPMG 2-site expanded is now analysed before MODEL_CR72, which alters the values a bit. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added GUI text for parameter r1_fit. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added front-end description of the 6 new R1 fit R1ρ models for relax_disp.select_model. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added a paragraph for the no chemical exchange model in help text description for selecting models. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added the new R1ρ models where R1 is fitted, to the GUI model selection. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Deleted the system test test_r1rho_kjaergaard_man, since it was not necessary. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Deleted unused script files in data folder for Kjaergaard_et_al_2013. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Fix for the linear constraints where parameter r1_fit was written as R1_fit. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Fix for the number of parameters not being counted correctly. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Triggering an error in test_r1rho_kjaergaard_missing_r1. There is a bug fetching the standard value of parameter 'r1_fit'. AttributeError: 'float' object has no attribute 'keys'. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Removed unused scripts in folder of Kjaergaard et al., 2013. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Added new analysis scripts in folder of Kjaergaard et al., 2013. Sr #3135: Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models.
- Removed triggering an error in test_r1rho_kjaergaard_missing_r1. There is a bug fetching the standard value of parameter 'r1_fit'.
- Fix for system test Relax_disp.test_r1rho_kjaergaard_missing_r1, running on 64 bit system.
- Inserted LaTeX bibliography for reference to linear constraints of the exchange rate.
- Equation fix for $kex$ in manual.
- Added system test Relax_disp.test_bug_22477_grace_write_k_AB_mixed_analysis and data. This is for bug #22477: Not possible to perform grace.write() on kAB parameter for mixed CPMG analysis.
- Added more printout to system test Relax_disp.test_bug_22477_grace_write_k_AB_mixed_analysis and data.
- Fix for system test Relax_disp.test_bug_22477_grace_write_k_AB_mixed_analysis. Bug #22477: Not possible to perform grace.write() on kAB parameter for mixed CPMG analysis.
- Set the default value of r1_fit to 5.0.
- Added the relax_disp pipe type to be setup for unit tests of value function.
- Setup a unit test for the value.set functionality for param r1_fit. Bug #22470: value.set does not work for parameter r1_fit.
- Fix in relax_disp API, how to handle the r1_fit parameter type. Bug #22470: value.set does not work for parameter r1_fit.
- Modified system test Relax_disp.test_r1rho_kjaergaard_missing_r1 to use GRID_INC=None, and thereby speeding up the analysis. Bug #22470: value.set does not work for parameter r1_fit.
- Added to system test a count of number of headers and values, when issuing a value.write(). Sr #3121: Support request for replacing space in header files for the value.write functions.
- Fix for replacing spaces " " with "_" in header files. Sr #3121: Support request for replacing space in header files for the value.write functions.
- Fix for comment, which mentions R2 parameter, when it relates to R1 fit.
- Replaced variable name: MODEL_PARAM_INV_RELAX_TIMES with MODEL_LIST_INV_RELAX_TIMES, to match all of the other MODEL_LIST_* variables. Also added a newline to end of file.
- Replaced remaining variable names: MODEL_PARAM_* with MODEL_LIST_*, to match all of the other MODEL_LIST_* variables.
- Renamed the R1ρ off-resonance models where R1 is fitted, removing the underscore. This is a better representation to present to the user, for example in the GUI model selection list or via the relax_disp.select_model user function, in all UIs.
- Renamed the parameter "r1_fit" to "r1". This naming fits better to all other parameters.
- Split the unit test of specific_analyses.relax_disp.checks.get_times() into its own unit test file.
- Added a "check" function, what will determine if R1 data is missing for a model to analyse. Also added corresponding unit tests, to test the functionality.
- Modified in documentation, that the No Rex model have one chemical exchange site, namely itself.
- Copied variables.py to model.py. There should not exist any functions in variables.py. It should only consist of hardcoded variables, and the functions related to model sorting and nesting is split into its own file.
- Parted the file of variables.py into model.py.
- Removed the unit test regarding model.py in test_variables.py.
- Added unit tests regarding model.py and its functions.
- Removed the auto-sorting of models when performing the relaxation dispersion auto-analysis. This was discussed in: http://thread.gmane.org/gmane.science.nmr.relax.scm/22733, http://thread.gmane.org/gmane.science.nmr.relax.scm/22734, http://thread.gmane.org/gmane.science.nmr.relax.scm/22737. Through this discussion it became apparent that the order in which models are sorted for analysis, and hence the possibility of nesting, is a complicated matter. It should instead be possible to manually specify the order of analysis in the auto-analysis. Implementing such a feature was not within the scope of sr #3135 (Optimisation of the R1 relaxation rate for the off-resonance R1ρ relaxation dispersion models). Such a feature could be implemented for the next version of relax, for example as a function to "suggest" an order in the GUI, but this functionality will have to wait.
- Removed the tex->kex conversion and the conversion of φex from Δω and pA. This solution was not a proper implementation; these parameters should rather be found by the grid search.
- Re-inserted "MODEL_NS_CPMG_2SITE_EXPANDED" to be tested in system test test_hansen_cpmg_data_missing_auto_analysis.
- Removed the special cases for nesting. The following order is now used. First the completed models are sorted into: EQ_NUMERIC, EQ_SILICO, EQ_ANALYTIC; then by year, with the newest first; then by the number of chemical sites, which reflects the number of parameters. The completed models are then looped over: if the experiment types are the same and a completed model has the same parameters, nest from it; otherwise, if a completed model has all parameters other than the R20 parameters, nest from it. Special cases are taken care of by: MODEL_LM63_3SITE from MODEL_LM63; MODEL_NS_MMQ_3SITE and MODEL_NS_MMQ_3SITE_LINEAR from MODEL_NS_MMQ_2SITE; MODEL_NS_R1RHO_3SITE and MODEL_NS_R1RHO_3SITE_LINEAR from MODEL_NS_R1RHO_2SITE; MODEL_MMQ_CR72 from MODEL_CR72. This functionality reproduces the hard-coding of the previous implementation.
- Moved the nesting lists down in variables file.
- Small verb tense fix for the descriptions of the '* R1 fit' relaxation dispersion models.
- Added a definition and dictionary for each model to determine which model it nests from. This is better to hardcode, since it makes it possible to produce a table with an overview and to accurately determine which model is nested from. This is discussed in the thread http://thread.gmane.org/gmane.science.nmr.relax.devel/6684.
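A hedged sketch of what such a hardcoded nesting dictionary looks like, using only the relationships named in these entries (the real relax variable names and coverage differ):

```python
# Which model(s) a given model may copy its starting parameters from.
# Only the nestings explicitly mentioned in the entries above are
# listed; the full relax dictionary covers every dispersion model.
MODEL_NEST = {
    'LM63 3-site': ['LM63'],
    'MMQ CR72': ['CR72'],
    'NS MMQ 3-site': ['NS MMQ 2-site'],
    'NS MMQ 3-site linear': ['NS MMQ 2-site'],
    'NS R1rho 3-site': ['NS R1rho 2-site'],
    'NS R1rho 3-site linear': ['NS R1rho 2-site'],
}

def nest_from(model, completed):
    """Return the first completed model that 'model' can nest from,
    or None if no nesting source is available."""
    for candidate in MODEL_NEST.get(model, []):
        if candidate in completed:
            return candidate
    return None
```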
- Moved the lookup in dictionaries for model information, into the class of model info.
- Division of the unit tests of specific_analyses/relax_disp/model.py into different functions. Also added more print information to each test.
- Added the ordered list of nest models to the class of model information.
- Modified the defined list of nesting, according to thread: http://thread.gmane.org/gmane.science.nmr.relax.devel/6684. More specific, thread: http://thread.gmane.org/gmane.science.nmr.relax.devel/6694.
- Modified unit test regarding finding correct nested model. This was discussed in: http://thread.gmane.org/gmane.science.nmr.relax.devel/6684. More specific, thread: http://thread.gmane.org/gmane.science.nmr.relax.devel/6694.
- Modified nesting function, to pull list of possible models from dictionary, and check if these models are available in the completed models.
- Added initial Python script, to help print each model and its corresponding nested models. It can be executed by: relax test_suite/shared_data/dispersion/print_model_info/print_model_info.py.
- Added a relaxation dispersion example to show how certain literature statements are just utter crap. This follows from http://thread.gmane.org/gmane.science.nmr.relax.scm/22774/focus=6693, and the change http://thread.gmane.org/gmane.science.nmr.relax.scm/22774 which implements such dangerous literature conjectures. To see how a real minimum is excluded from the optimisation space, here for residue :2, execute the R1rho_analysis.py script in relax. This is synthetic data generated with kex = 1e5 assuming the model TP02. For the case of residue :2, this still produces an optimisable minimum in the space and dispersion curves. However the change blocks optimisation from reaching the minimum.
- Added a function to determine how to nest/copy the parameters when nesting from another model. It takes the list of parameters from the current model and the list of parameters available in the nested model, and returns a dictionary of parameter conversions for the current model parameters.
- Added a unit test for the new function which determines how parameters are copied from a nested model.
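A minimal sketch of the parameter-translation idea described in the two entries above (illustrative only; the special cases mirror the conversions listed in the later '1 - pA' entries):

```python
def param_conversion(current_params, nested_params):
    """Return a dict mapping each parameter of the current model to the
    parameter of the nested model it should be copied from."""
    conversion = {}
    for param in current_params:
        if param in nested_params:
            # Shared parameter, copy it directly.
            conversion[param] = param
        elif param.startswith('dw_') and 'dw' in nested_params:
            conversion[param] = 'dw'
        elif param.startswith('kex_') and 'kex' in nested_params:
            conversion[param] = 'kex'
        elif param == 'pB' and 'pA' in nested_params:
            conversion[param] = '1 - pA'   # special conversion flag
        else:
            conversion[param] = None       # no source, leave to the grid search
    return conversion
```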
- Added a table for dispersion model nesting in the auto-analysis to the manual. This adds the ideas discussed in the thread http://thread.gmane.org/gmane.science.nmr.relax.devel/6684.
- Added the string 'me' for Methods in Enzymology to the bibtex file for the manual.
- Set the average value of R1 to 2.0 instead of 5.0. This is "normally" a better guess for R1.
- Implemented the function, which translates how parameters are copied from a nested model in the auto_analyses for relax_disp. This makes it possible to test the translating code, and makes logic clearer in the auto_analyses.
- Modified the r1rho_off_res_tp02_high_kex dispersion test data. The kex value is now set to 2e5.
- Removed the 'DPL94 R1 fit' model nesting from the table in the dispersion chapter of the manual. This was identified using the dispersion test suite data script print_model_info/print_model_info.py (http://thread.gmane.org/gmane.science.nmr.relax.scm/22823).
- Added a check to the system test Relax_disp.test_r1rho_kjaergaard_missing_r1 that values are not None when writing the .out files. This is related to sr #3121: Support request for replacing space in header files for the value.write functions. The fix for that bug broke the retrieval of the values.
- Fix for an earlier bug fix which destroyed functionality. Altering the data keys too early meant that the data was not fetched correctly. This is related to: sr #3121: Support request for replacing space in header files for the value.write functions.
- Moved the unit test of specific_analyses.relax_disp.checks.check_missing_r1() from a unit test to a system test. This is because the unit test involved several functions of relax.
- Inserted a dictionary that will convert an R1ρ off-resonance model without R1 to the corresponding model which fits R1.
- Added to the check of missing R1 that MODEL_NOREX_R1RHO also depends on R1.
- Implemented a function that determines if any model in the list of all models should be replaced or inserted as the correct No Rex model. It also translates the R1ρ off-resonance models to the corresponding 'R1 fit' models if R1 is not loaded.
- Inserted the system test Relax_disp.test_convert_no_rex_fit_r1, which tests the return values of the function that determines whether the models in self.models in relax_disp should be translated/corrected.
- Fix for unit test, where the standard value of R1 was lowered from 5.0 to 2.0. Also fixed an import error in another unit test.
- Minimised the dependencies of the version module. This no longer relies on the dep_check module.
- Inserted the return of True/False flags from the function which converts the models. The flags tell whether: the No Rex model for R1ρ off-resonance was translated; the No Rex model for R1ρ off-resonance was inserted; the R1ρ off-resonance models were translated to 'R1 fit' models because no R1 data was found.
- Changes to the system test after the number of return values from the function was altered.
- Inserted the conversion of the input models into the relax_disp auto-analysis. This will convert/insert the correct No Rex model to the corresponding No Rex model for R1ρ off-resonance models. It will also translate to the corresponding 'R1 fit' model if no R1 data has been loaded with the relax_data.read() function.
- Lowered the precision in the system test Relax_disp.test_r1rho_kjaergaard_missing_r1(). This is due to 64/32-bit differences between analysing on a Linux computer and testing on a Mac computer.
- Made the GUI selection of models for relaxation dispersion simpler. After the implementation of a function which translates the models, the No Rex model will be converted to the No Rex model for R1ρ off-resonance. Also the corresponding 'R1 fit' model will be chosen instead if R1 data has not been loaded. This makes the model selection easier in the GUI interface.
- Bugfix for Relax_disp.test_bug_21715_clustered_indexerror, where only R2eff, No Rex is analysed. This special case was not tested in the translating function.
- Shortened the text in the auto-analysis, and a warning is now raised if R1 data has not been loaded.
- Rewrote the logic of the keyword 'optimise_r2eff' in the auto-analysis of relax_disp. If an R2eff results file exists in the 'pre_run_dir', it is loaded. If the results contain both values and errors, then no optimisation is performed for the R2eff model, unless the 'optimise_r2eff' flag is set, which is not the default.
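The decision logic described here can be summarised in a small sketch (the helper name and results file path are illustrative, not the auto-analysis code itself):

```python
from os import path

def must_optimise_r2eff(pre_run_dir, optimise_r2eff=False):
    """Decide whether the R2eff model needs optimisation.

    If a previous R2eff results file exists and contains both values
    and errors, optimisation is skipped unless the user explicitly
    sets optimise_r2eff=True.  The file name is purely illustrative."""
    results_file = path.join(pre_run_dir or '', 'r2eff', 'results.bz2')
    if pre_run_dir and path.isfile(results_file):
        # Previous values and errors would be loaded here; only
        # re-optimise if the user asked for it.
        return optimise_r2eff
    # No previous results - the R2eff model must be optimised.
    return True
```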
- Fixes for the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test on MS Windows. This is for 32-bit MS Windows systems where the parameter checks need to be loosened.
- Fixes for the Relax_disp.test_r1rho_kjaergaard_missing_r1 system test on MS Windows. This is for 32-bit MS Windows systems where the parameter checks need to be loosened.
- Created the specific_analyses.relax_disp.data.is_r1_optimsed() function. This follows from an idea for handling R1 optimisation in the thread at http://thread.gmane.org/gmane.science.nmr.relax.scm/22850/focus=6736. This relaxation dispersion function can be used anywhere that requires the knowledge of whether R1 values should be fitted during optimisation or whether loaded values should be used.
- Implemented the very basic relax_disp.r1_fit user function. This is as discussed at http://thread.gmane.org/gmane.science.nmr.relax.scm/22850/focus=6737.
- Implemented the specific_analyses.relax_disp.parameters.r1_setup() function. This matches the description at http://thread.gmane.org/gmane.science.nmr.relax.scm/22850/focus=6739 and http://thread.gmane.org/gmane.science.nmr.relax.scm/22850/focus=6736.
- Spelling fix for the is_r1_optimised() function name.
- Implemented a "force" flag in the back end of value.copy to overwrite the destination value.
- Implemented a "force" flag in the front end of value.copy to overwrite the destination value.
- Copying the R2eff value from the reading of R2eff results in the auto-analyses of relax_disp.
- Fix for the misspelled is_r1_optimised() function.
- The r1_fit flag is stored in the Disp_minimise_command class and passed into the target function. This matches the details at http://thread.gmane.org/gmane.science.nmr.relax.scm/22850/focus=6736. The specific_analyses.relax_disp.optimisation.Disp_minimise_command class calls is_r1_optimised() and stores the result. This is then passed into the relaxation dispersion target function class.
- Modified the nesting so that NS CPMG 2-site expanded is preferred over NS CPMG 2-site 3D and NS CPMG 2-site star.
- Modified the nesting for NS MMQ 3-site linear. NS MMQ 3-site linear should be able to nest from NS MMQ 3-site.
- Fix to unit test, after MODEL_NS_CPMG_2SITE_EXPANDED has been preferred over other numerical CPMG models.
- Modified the unit test so that when the model MODEL_PARAMS_NS_R1RHO_3SITE nests from MODEL_PARAMS_NS_R1RHO_2SITE, the conversion should be: 'r2' from 'r2', 'pA' from 'pA', 'dw_AB' from 'dw', 'kex_AB' from 'kex', 'pB' from '1 - pA', 'dw_BC' from 'dw', 'kex_BC' from 'kex', 'kex_AC' from 'kex'. Here '1 - pA' is a special conversion flag.
- Implemented the special flag '1 - pA', when nesting parameters from models with fewer chemical sites.
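Applying the special flag then looks roughly like this (a sketch only; the value layout and names are illustrative):

```python
def copy_nested_value(flag, nested_values):
    """Return the starting value for a parameter given its conversion
    flag and a dict of parameter values from the nested model.  The
    special flag '1 - pA' sets, for example, pB of a 3-site model from
    the pA of the nested 2-site model."""
    if flag == '1 - pA':
        return 1.0 - nested_values['pA']
    return nested_values[flag]

# Usage: copy_nested_value('kex', {'kex': 1000.0, 'pA': 0.95}) gives 1000.0,
# while copy_nested_value('1 - pA', {'pA': 0.95}) gives 1 - 0.95.
```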
- Implemented the nesting of parameters from a model with fewer chemical sites when nesting for NS R1rho 3-site.
- Inserted the system test Relax_disp.test_model_nesting_and_param(), which will go through all models and then through all their nested models, testing that all parameters have a conversion.
- Modified system test Relax_disp.test_model_nesting_and_param() to only print, when the converted parameter is different from the original parameter.
- Fix for parameter conversion when model is: MODEL_PARAMS_NS_R1RHO_3SITE or MODEL_PARAMS_NS_R1RHO_3SITE_LINEAR.
- Fix for parameter conversion for: MODEL_PARAMS_NS_MMQ_3SITE and MODEL_PARAMS_NS_MMQ_3SITE_LINEAR.
- Fixes to unit tests, after parameter conversion have been corrected.
- Spaces in the folder names used for writing out results are now replaced with underscores "_". This is for the dispersion auto-analysis.
- Fixes for the relaxation dispersion loop_parameters() function. The R1, R20, R2A0, R2B0 (and R1rho_prime, R1rho_primeA, R1rho_primeB) parameters are now checked for in each spin container rather than just the first of the cluster. This should make no difference as all spins should have the same model and parameters, but it might be a source of bugs in the future.
- The r1_fit flag is now used to switch between dispersion target functions. This is as described in http://thread.gmane.org/gmane.science.nmr.relax.scm/22850/focus=6736. The change makes the '* R1 fit' models now redundant.
- Removed all of the '* R1 fit' models out of the relax_disp.select_model user function frontend. These models are now redundant as the question of R1 fitting is now determined internally in relax.
- Removed all of the MODEL_*_FIT_R1 dependencies from the specific_analyses.relax_disp package. These models are now redundant as the question of R1 fitting is now determined internally in relax.
- Fix for the specific_analyses.relax_disp.data.is_r1_optimised() function for on-resonance R1ρ data. This function needs to specifically catch these models.
- Fix for the MODEL_LIST_R1RHO variable. Recent changes causes this to not include the on-resonance R1ρ dispersion models.
- Import fix for the Relax_disp.test_model_nesting_and_param system test. Somehow the import of the convert_no_rex() function was lost.
- Modified the MODEL_LIST_R1RHO_OFF_RES list to include MODEL_NOREX_R1RHO.
- The specific_analyses.relax_disp.parameters.r1_setup() function is now being called. This happens before the R1 data is returned in the Disp_minimise_command class.
- The dispersion auto-analysis now handles the optional R1 parameter correctly. The value.set user function was no longer setting the R1 parameter to the default value when the grid search was deactivated, as it is no longer in MODEL_PARAMS. So instead the new is_r1_optimised() function is being used to decide if the value.set user function should set the 'r1' parameter value.
- The dispersion loop_parameter() function now calls r1_setup() to handle R1 parameters correctly. This allows the R1 parameter to be removed or added to the parameter list prior to looping over the parameters of the model. The change is required to allow for the dynamic handling of R1 parameters.
- The dispersion back_calc_r2eff() function can now handle the dynamic R1 parameter. This required a call to r1_setup() to add or remove the parameter, and is_r1_optimised() to obtain the r1_fit flag to be sent into the target function class.
- Updated the specific_analyses.relax_disp.model.Model_class class to handle the dynamic R1 parameter. The class variable self.params now has the 'r1' parameter prepended to the list if is_r1_optimised() returns True.
- More changes for specific_analyses.relax_disp.model.Model_class for the dynamic R1 parameter. The 'r1' parameter is only prepended to self.params if it is not already in the list.
- Created the MODEL_LIST_FIT_R1 variable to keep track of dispersion models with R1 fitting support.
- The is_r1_optimised() function now checks MODEL_LIST_FIT_R1. If the model is not in MODEL_LIST_FIT_R1, i.e. R1 optimisation is not supported, then the function will return False.
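Put together, the behaviour described in the last few entries amounts to something like the following sketch (the model list contents, argument names and final condition are assumptions of this sketch, not the relax source):

```python
# Illustrative stand-in for the relax MODEL_LIST_FIT_R1 variable.
MODEL_LIST_FIT_R1 = ['DPL94', 'TP02', 'TAP03', 'MP05', 'NS R1rho 2-site']

def is_r1_optimised(model, r1_fit_flag=False, r1_data_loaded=True):
    """Return True if R1 should be optimised for the given model.

    R1 can only be optimised for models that support R1 fitting; for
    all other models, loaded R1 values are always used."""
    if model not in MODEL_LIST_FIT_R1:
        return False
    # Fit R1 when explicitly requested, or when no R1 data has been
    # loaded (assumed behaviour for this sketch).
    return r1_fit_flag or not r1_data_loaded
```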
- Fix for the test_nesting_param_5 unit test. The 'r1' parameter is now dynamic and hence will not be present in the initial list.
- One final fix for the Model_class.params list with 'r1'. The is_r1_optimised() function is now called with the model name argument, as required.
- Updated the relax_disp.r1_fit user function docstring. This now includes information about which models support R1 parameter optimisation.
- Removed results files to allow the Relax_disp.test_r1rho_kjaergaard_missing_r1 system test to pass. These are the test_suite/shared_data/dispersion/Kjaergaard_et_al_2013/check_graphs/mc_2000/ results files for the No Rex and DPL94 models, as well as the final run. This commit is to allow the test to temporarily pass. It can be reverted once a better solution is discussed and decided upon.
- Altered the number of Monte Carlo simulations in the test script to 2000.
- Merger of the No Rex and 'No Rex R1rho off res' models in the specific_analyses.relax_disp package. In the 'variables' module, all *_NOREX_R1RHO variables have simply been deleted and the MODEL_LIST_* structures updated. For the 'data' module, the is_r1_optimised() function was modified to catch the No Rex model and to then use the cdp.exp_type_list structure to determine if the experiment type is EXP_TYPE_R1RHO. This will be modified in the future by using a function for determining if the current experiment is on or off-resonance. The return_r1_data() and return_r1_err_data() functions have also been modified to check if R1 values are fit rather than if the model is in MODEL_LIST_R1RHO_OFF_RES. In the 'model' module, in addition to deleting all *_NOREX_R1RHO variables, the convert_no_rex() function has also been deleted as it no longer serves a purpose. In the 'checks' module, all 'No Rex R1rho off res' model references have been replaced with No Rex.
- Updated the dispersion auto-analysis for the universal No Rex model. The 'No Rex R1rho off res' references have all been deleted. The model conversion logic is also no longer needed and has been deleted.
- Converted the relaxation dispersion GUI interface to the unified No Rex model. All of the MODEL_NOREX_R1RHO references have simply been deleted.
- Converted the relaxation dispersion target function class to the unified No Rex model design. On top of removing all references to MODEL_NOREX_R1RHO, the aliasing of self.func now checks the experiment type list to determine which target function to use. This is not an ideal solution and will not handle mixed CPMG and R1ρ experiments; however, neither do the target functions yet. The creation of the off-resonance data structures has also been modified so that they are now R1ρ independent. This allows the structures to be properly created while at the same time enabling this code to be compatible with off-resonance CPMG data in the future.
- Removed the 'No Rex R1rho off res' model from the relax_disp.select_model user function frontend.
- Removed all references to the 'No Rex R1rho off res' model in the system tests. In addition, the Relax_disp.test_convert_no_rex system test has been deleted as it no longer has a purpose. For the Relax_disp.test_model_nesting_and_param system test, to allow this to work the cdp.exp_type_list list is set to EXP_TYPE_LIST.
- Removed all references to the 'No Rex R1rho off res' model in the unit tests.
- Updated the No Rex dispersion model description in the relax manual. The universal nature of the model is now described, including the addition of the off-resonance CPMG and R1ρ equations for the absence of chemical exchange. The R1 parameter optimisation is also briefly covered.
- Added a subsection to the dispersion chapter of the manual about R1 parameter optimisation.
- Added the R1 parameter fitting GUI element to the dispersion GUI tab. This is a simple Boolean toggle element that allows the R1 optimisation to be turned on. The value is passed into the auto-analysis.
- Added the r1_fit argument to the relaxation dispersion auto-analysis. When this is True, the relax_disp.r1_fit user function will be called to turn R1 parameter fitting on.
- Added the relax_disp.spin_lock_offset user function to the dispersion GUI. This has been added to the pop up menu in the spectrum list GUI element, when the relax_disp_flag has been set. It simply mimics the relax_disp.spin_lock_field functionality already present. This follows from Task #7820.
- Fix for the relax_disp.spin_lock_offset user function in the dispersion GUI tab. This is in the spectrum list element popup menu.
- Added the offset column to the spectrum list GUI element for the dispersion analysis. This is to complete Task #7820. The spectrum list GUI element add_offset() method has been added to insert the offset column when the relax_disp_flag is set. This is called by the update_data() method to fill and update the GUI element.
- Implemented the GUI test Relax_disp.test_bug_22501_close_all_analyse to catch bug #22501, 'Close all analyses' raises error.
- Inserted an intermediate system test to profile the R2eff calculation for R1ρ: Relax_disp.test_bug_9999_slow_r1rho_r2eff_error_with_mc. This system test actually fails if one tries to do a grid search. This is related to the R2eff values being stored as a dictionary, which causes pipe_control.minimise.grid_setup() to fail - the 'isNaN(values[i])' function cannot handle a dictionary.
- Modified the intermediate system test Relax_disp.test_bug_9999_slow_r1rho_r2eff_error_with_mc to see if the initial grid search for the I0 and R2eff estimation can be skipped. This is done by converting the exponential curve to a linear one and calculating the best parameters via a least-squares line of best fit. This seems like a promising method for an initial estimate of I0 and R2eff. For 500 to 2000 Monte Carlo simulations, this could have a significant influence on the timings, as all grid searches could be skipped.
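The linearisation idea mentioned in the entry above can be sketched as follows. This is only an illustration of the approach, not relax code; the function name and data are invented:

```python
# Sketch of the linearised initial estimate of R2eff and I0 described above.
# Taking the logarithm of I(t) = I0*exp(-R2eff*t) gives
# ln(I) = ln(I0) - R2eff*t, so a least-squares line of best fit of ln(I)
# against t yields starting values without any grid search.
import numpy as np

def estimate_r2eff_i0(times, intensities):
    """Return (R2eff, I0) from a straight-line fit of ln(I) versus t."""
    times = np.asarray(times, dtype=float)
    log_i = np.log(np.asarray(intensities, dtype=float))
    slope, intercept = np.polyfit(times, log_i, 1)
    return -slope, np.exp(intercept)

# Noise-free synthetic example: recovers R2eff ~ 8.0 and I0 ~ 1000.0.
t = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
print(estimate_r2eff_i0(t, 1000.0 * np.exp(-8.0 * t)))
```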
- Modified system test test_bug_9999_slow_r1rho_r2eff_error_with_mc to save data arrays. This is to prepare a profiling script.
- Added start script with basic data for profiling the relax curve fit.
- Created the Structure.test_create_diff_tensor_pdb system test. This is to show the failure of the structure.create_diff_tensor_pdb user function when no structural data is present.
- Created the Structure.test_create_diff_tensor_pdb2 system test. This is to catch another situation leading to bug #22505, the failure of the structure.create_diff_tensor_pdb user function when no structural data is present.
- Added an optimisation script for the test_suite/shared_data/diffusion_tensor/ellipsoid relaxation data. This is to help catch bug #22502, the geometric prolate diffusion representation does not align with axis in PDB, as reported by Martin Ballaschk. The PDB files of the optimised tensor demonstrate exactly the same problem as seen in the files attached to the bug report. The oblate and spherical diffusion tensor representations match that of the ellipsoid. But the prolate axis and tensor orientation are both different from the ellipsoid as well as themselves.
- Updated the diffusion tensor PDB representation files. This replaces the broken prolate representation with the corrected representation.
- Deleted the duplicated Structure.test_create_diff_tensor_pdb system test.
- Created a number of system tests to check the diffusion tensor PDB representation. This is to prevent bugs such as #22502 from ever reappearing. The PDB file contents are hardcoded into the tests and checked. The tests include Structure.test_create_diff_tensor_pdb_ellipsoid, Structure.test_create_diff_tensor_pdb_oblate, Structure.test_create_diff_tensor_pdb_prolate, and Structure.test_create_diff_tensor_pdb_sphere.
- Improved data checking for all of the Structure system tests. Before looping over the structural data, the number of lines in the real file and the newly generated file are compared. This avoids the situation whereby an empty file is produced, accidentally allowing the test to pass.
- Modified the following functions (time points are now saved at the [ei][mi][oi][di] index level, where all time points for the R2eff point are stored): interpolate_disp(), where, to interpolate the time points, all time points across the original dispersion points di are collected and then made unique - this time list can potentially be the largest of all time lists; interpolate_offset(), where, to interpolate the time points, all time points across the original offset points and then the dispersion points di are collected and then made unique - this time list can potentially be the largest of all time lists; plot_disp_curves_to_file(), to acquire the original relax_times points; return_r2eff_arrays(), to save all time points at the [ei][mi][oi][di] index level. At this index level, the structure is a numpy array list with all time values used for fitting. Bug #22461: NS R1rho 2-site_fit_r1 has extremely high χ2 value in system test Relax_disp.test_r1rho_kjaergaard_missing_r1.
- Modified back_calc_r2eff() to accept interpolated time points. Bug #22461: NS R1rho 2-site_fit_r1 has extremely high χ2 value in system test Relax_disp.test_r1rho_kjaergaard_missing_r1.
- Modified target function of relax dispersion, to use the new list of time points, which are of higher dimension. Bug #22461: NS R1rho 2-site_fit_r1 has extremely high χ2 value in system test Relax_disp.test_r1rho_kjaergaard_missing_r1.
- Fix for the Relax_disp.test_r1rho_kjaergaard_missing_r1() system test. After the relaxation times have been fixed, this model now returns reasonable χ2 values. The reported parameters are, though, quite different from all other models, so it seems something may still be wrong. Bug #22461: NS R1rho 2-site_fit_r1 has extremely high χ2 value in system test Relax_disp.test_r1rho_kjaergaard_missing_r1.
- Fix for the Relax_disp.test_exp_fit() system test, where the spin.isotope was not set. The new call to return_r2eff_arrays() when producing graphs raises a RelaxSpinTypeError if no isotope is set. Bug #22461: NS R1rho 2-site_fit_r1 has extremely high χ2 value in system test Relax_disp.test_r1rho_kjaergaard_missing_r1.
- Modified the Relax_disp.test_r1rho_kjaergaard_missing_r1 system test to pass on 64-bit Linux systems. The accuracy of the checks of the optimised values has been decreased.
- Moved the storing of the relaxation time up before the check for missing data in return_r2eff_arrays(). Bug #22461: NS R1rho 2-site_fit_r1 has extremely high χ2 value in system test Relax_disp.test_r1rho_kjaergaard_missing_r1.
- Fix for system test not adding spin.isotope to setup information. Bug #22461: NS R1rho 2-site_fit_r1 has extremely high χ2 value in system test Relax_disp.test_r1rho_kjaergaard_missing_r1.
- Fix for looping over data indices, where tilt_angles has the si index. Bug #22461: NS R1rho 2-site_fit_r1 has extremely high χ2 value in system test Relax_disp.test_r1rho_kjaergaard_missing_r1.
- Added Nikolai's original Matlab code to the lib.dispersion.ns_r1rho_2site module docstring. This is the code taken directly from the original funNumrho.m file, which was the origin of the code in this module.
- Further extended the profiling script for curve fitting. Profiling is now in place for the C code method implemented in relax. Similar code should now be devised for a numpy array implementation for comparison. This profiling shows that setting constraints=True slows the procedure down by a factor of 10.
- Further improved the profiling of the relax curve fit. This profiling shows that the Python code is about twice as slow as the implemented C code. But it also shows that optimising with scipy.optimize.leastsq is 20 times faster, and that it gives reasonable error values. Combining a function for a linear fit to guess the initial values with scipy optimisation would be an extreme time win for estimating R2eff values quickly. A further test would be to run relax Monte Carlo simulations for say 1000-2000 iterations and compare to the errors extracted from the estimated covariance.
- Added a verification script which shows that using scipy.optimize.leastsq reaches the exact same parameters as minfx for exponential curve fitting. The profiling shows that scipy.optimize.leastsq is 10 times as fast as using minfx (with no linear constraints). scipy.optimize.leastsq is a wrapper around MINPACK's lmdif and lmder algorithms. MINPACK is a FORTRAN90 library which solves systems of nonlinear equations, or carries out the least squares minimisation of the residual of a set of linear or nonlinear equations. The verification script also shows that a very heavy and time consuming Monte Carlo simulation of 2000 steps reaches the same errors as those reported by scipy.optimize.leastsq. The return from scipy.optimize.leastsq gives the estimated covariance. Taking the square root of the covariance corresponds to twice the error reported by minfx. This could be an extremely time saving step when performing model fitting in R1ρ, where the errors of the R2eff values are estimated by Monte Carlo simulations. The following setup illustrates the problem. This was analysed on a MacBook Pro, 13-inch, Late 2011 with no multi-core setup. The script run is test_suite/shared_data/dispersion/Kjaergaard_et_al_2013/2_pre_run_r2eff.py, which analyses just the R2eff values for 15 residues and estimates the errors of R2eff based on 2000 Monte Carlo simulations. For each residue, there are 14 exponential graphs. The script was stopped after 35 simulations, which took 20 minutes, so 500 simulations would take about 4.8 hours. The R2eff values and errors can instead be calculated by scipy.optimize.leastsq in about 15 residues * 0.02 seconds = 0.3 seconds.
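As a rough illustration of the approach described in the entry above (invented data and names, not the verification script itself), scipy.optimize.leastsq can fit the exponential from the weighted residuals and return a covariance matrix whose scaled diagonal gives the parameter variances:

```python
# Illustrative exponential curve fit with scipy.optimize.leastsq, with
# parameter errors taken from the returned covariance matrix.
import numpy as np
from scipy.optimize import leastsq

times = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
errors = np.full_like(times, 20.0)
rng = np.random.default_rng(0)
values = 1000.0 * np.exp(-8.0 * times) + rng.normal(scale=errors)

def residuals(params, t, y, sd):
    """Weighted residuals; leastsq minimises the sum of their squares."""
    r2eff, i0 = params
    return (i0 * np.exp(-r2eff * t) - y) / sd

popt, pcov, infodict, msg, ier = leastsq(
    residuals, x0=[10.0, 500.0], args=(times, values, errors), full_output=True)

# leastsq returns cov_x, which must be multiplied by the residual variance
# (chi2/dof) to give the covariance estimate.
dof = len(times) - len(popt)
chi2 = np.sum(residuals(popt, times, values, errors)**2)
pcov = pcov * chi2 / dof
print("R2eff = %.3f +/- %.3f" % (popt[0], np.sqrt(pcov[0, 0])))
print("I0    = %.1f +/- %.1f" % (popt[1], np.sqrt(pcov[1, 1])))
```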
- Moved the target function for minimisation of exponential fit into the target functions folder. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Implemented initial system test Relax_disp.test_estimate_r2eff for setting up the new user function to estimate R2eff and errors by scipy. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added front end user function relax_disp.r2eff_estimate to estimate R2eff and errors by exponential curve fitting in scipy.optimize.leastsq. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified the model check to accept the model as input for error printing. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Implemented back end for estimating R2eff and errors by exponential curve fitting with scipy.optimize.leastsq. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Documentation fix for the new exponential target function. Also added a new function to estimate the R2eff and I0 parameters before minimisation. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Small changes to the verification scripts to use the χ2 function and to use the scaling matrix correctly. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Split the test_r1rho_kjaergaard_missing_r1 system test up into a verification part. This is to test the new R2eff estimation, which should reach the same parameter values as the 2000-step Monte Carlo simulation. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified the Relax_disp.test_estimate_r2eff system test. This is to compare against errors simulated with 2000 MC. The parameters are comparable, but not equal. Mostly, it seems that the errors from scipy.optimize.leastsq are twice as high as those from the Monte Carlo simulations. This affects the model fitting and the calculated χ2 value.
- Added the Relax_disp.test_estimate_r2eff_error() system test. This is to gain insight into the error difference between 2000 Monte Carlo simulations and scipy.optimize.leastsq.
- Added a dependency check for scipy.optimize.leastsq. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Lowered the precision in the Relax_disp.test_r1rho_kjaergaard_missing_r1 system test. This is R1 estimation with MODEL_NS_R1RHO_2SITE. The lowering of precision is due to differing system precision.
- Reused the dependency check "scipy_module", since leastsq() has been part of Scipy since 2003.
- Moved target function for curve fitting with scipy into specific_analyses.relax_disp.estimate_r2eff. This will later include the backend specific_analyses.relax_disp.optimisation.estimate_r2eff() function and the code in the target_functions package. The code in target_functions.relax_disp_curve_fit is a lot more than just a target function, so it doesn't really belong in this package. This is also to isolate this experimental feature.
- Isolated all code related to user function relax_disp.r2eff_estimate into independent module file. All has been isolated to: specific_analyses.relax_disp.estimate_r2eff.
- Split function to minimise with scipy.optimize.leastsq out in estimate_r2eff module. This is to prepare for implementing with minfx.
- Implemented first try to minimise with minfx in estimate_r2eff() function. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Implementation of the target_functions.relax_fit.jacobian() function. This follows from the discussions at http://thread.gmane.org/gmane.science.nmr.relax.devel/6807. The function will calculate the Jacobian matrix for the exponential curve-fitting module. The Jacobian can be used to directly calculate the covariance matrix, for example as described at https://www.gnu.org/software/gsl/manual/html_node/Computing-the-covariance-matrix-of-best-fit-parameters.html. The Jacobian is calculated using the help of the new exponential_dI() and exponential_dR() functions in the target_functions/exponential.c file. These calculate the partial derivatives of the exponential curve with respect to each model parameter separately. The implementation still needs testing and debugging.
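The partial derivatives involved are straightforward. A minimal Python sketch of the same maths as the exponential_dI() and exponential_dR() C functions mentioned above (illustrative only; the parameter ordering used here is an assumption):

```python
# Python sketch of the exponential partial derivatives used to build the
# Jacobian for I(t) = I0*exp(-R*t).  This is not the C module itself.
import numpy as np

def exponential_dI0(R, t):
    """dI/dI0 = exp(-R*t)."""
    return np.exp(-R * t)

def exponential_dR(I0, R, t):
    """dI/dR = -I0 * t * exp(-R*t)."""
    return -I0 * t * np.exp(-R * t)

def jacobian(R, I0, times):
    """One row per parameter (here ordered [R, I0]), one column per time point."""
    times = np.asarray(times, dtype=float)
    return np.array([exponential_dR(I0, R, times), exponential_dI0(R, times)])
```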
- Fixes for the new target_functions.relax_fit.jacobian() function. The Python list of lists is now correctly created and returned.
- Turned off the optimisation constraints for the R2eff model in the dispersion auto-analysis. This follows from http://thread.gmane.org/gmane.science.nmr.relax.scm/22977/focus=6829. This model does not require constraints at all, and the constraints only cause the optimisation to take 10x longer to complete. Therefore the constraint flag has been set to False for the model.
- Initial attempt to form the Jacobian and Hessian matrices for exponential decay. This can be tried with the system test: relax -s Relax_disp.test_estimate_r2eff_error. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Intermediate step in estimate R2eff module. It seems that minfx is minimising in a quadratic space because of the power of χ2, while the general input to scipy.optimize does not do this. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Cleaned up the target function for leastsq, since the arguments to the function can be extracted from the class. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Tried to implement minimisation with scipy.optimize.fmin_ncg and scipy.optimize.fmin_cg, but could not get them to work. The matrices are not aligned correctly. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Implemented the chi-squared gradient as a C module for the target functions. This simply translates the Python code into C to allow any target function C modules to build its own gradient function.
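For reference, the chi-squared gradient being translated is, in numpy form (a sketch of the maths only; not the relax Python or C implementation):

```python
# Minimal numpy sketch of the chi-squared gradient translated into C above:
#     dchi2/dtheta_j = -2 * sum_i [ (y_i - f_i) / sigma_i^2 ] * df_i/dtheta_j
import numpy as np

def dchi2(values, back_calc, variances, dfunc_theta):
    """values, back_calc, variances: 1D arrays over the data points i.
    dfunc_theta: rank-2 array of dI_i/dtheta_j values, one row per parameter j."""
    weights = (values - back_calc) / variances
    return -2.0 * np.sum(weights * dfunc_theta, axis=1)
```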
- Implemented the target_functions.relax_fit.dfunc() gradient function. This uses the Python/C interface to provide a Python function for calculating and returning the chi-squared gradient for the exponential curves.
- Implementation of the specific_analyses.relax_fit.optimisation.dfunc_wrapper() function. This interfaces with the target_functions.relax_fit C module and converts the gradient from a Python list to a numpy array.
- The exponential curve-fitting gradient is now scaled by the scaling matrix.
- Clean up of the end of the target_functions.relax_fit.dfunc() function.
- Fixes for the target function chi-squared gradient C function.
- Fixes for all of the exponential functions in target_functions/exponential.c. The condition whereby Rx is zero is now setting the value correctly - the exponential will be 1, not zero, hence the intensity and gradient values should not be zero.
- Clean up of the target function C files (spacing fixes and removal of unused code).
- Changed the argument and variable names in the C code chi-squared gradient function.
- Modified all of the exponential curve functions and gradients in the C target function module. Instead of passing in the parameter vector, now the I0 and R values are passed in separately. This is for greater code flexibility as the parameter order does not need to be known.
- The parameter index is now passed into exponential_dI0() and exponential_dR(). This is for the relaxation curve-fitting C module so that the indices are not hardcoded.
- The I0 and R parameter indices are now defined in the target_function/relax_fit.h header file. This is to abstract the exponential curve parameter indices even more.
- Big cleanup of the estimate R2eff module. This is to make the documentation easier to read and understand. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Created 2 unit tests for the target_functions.relax_fit relax C module. These check the func() and dfunc() Python methods exposed by the module.
- The relax_fit C module unit tests now check if the parameter scaling is functional.
- Added several comments to the R2eff estimate module. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added a script and log file for calculating the numerical gradient for an exponential curve. This uses the data at http://thread.gmane.org/gmane.science.nmr.relax.devel/6807/focus=6840 and calculates the gradient using the scipy.misc.derivative() function both at the minimum and at a point away from the minimum. The values will be used to construct a unit test to check the C module implementation.
- Created a unit test to check the dfunc() function of the relax_fit C module off the minimum.
- Fixes for the target_functions.relax_fit C module unit tests. All values are now set to floats to avoid integer division issues.
- Activated parameter scaling of the gradient in the test_dfunc_off_minimum() unit test. This is the test class test_suite.unit_tests._target_functions.test_relax_fit.Test_relax_fit.
- The exponential curve numeric gradient script now uses only floating point numbers. This is to avoid integer truncation problems.
- Fix for the script for calculating the numerical gradient for an exponential curve. The off-minimum derivative was not correctly calculated.
- Increased the printouts for the script for calculating the numerical gradient for an exponential curve.
- Bug fix for the chi-squared gradient calculation in the C module. The definition of the square() function needed extra brackets so that the 1/error2 calculation would be 1/(error*error) rather than the incorrect 1/error*error form.
- Fix for the test_dfunc_off_minimum() unit test. This is the test class test_suite.unit_tests._target_functions.test_relax_fit.Test_relax_fit. The wrong gradient was being scaled.
- Switched the optimisation algorithm in test_suite/system_tests/scripts/relax_fit.py. This script, used by the Relax_fit.test_curve_fitting_height and Relax_fit.test_curve_fitting_volume system tests, now uses the BFGS optimisation. This is to demonstrate that the exponential curve gradient function dfunc() is implemented correctly and that more advanced optimisation algorithms can be used (excluding those that require the full Hessian d2func() function).
- Got the method of 'Steepest descent' to work properly by specifying the Jacobian correctly. The Jacobian was derived according to the χ2 function. The key point was to evaluate the two derivative arrays for all time points and then sum each of the two arrays together before constructing the Jacobian. This clearly shows the difference between minfx and scipy.optimize.leastsq (see the sketch below). scipy.optimize.leastsq takes as input a function F(x0) which should return the array of weighted differences between the function values and the measured values: "1. / self.errors * (self.calc_exp(self.times, *params) - self.values)". This is an array with one element per data point. scipy.optimize.leastsq then internally evaluates the sum of squares, sum[(O - E)^2], and minimises this. This is the χ2. Minfx instead requires the function to be minimised directly, so 'func' should be χ2 itself. The dfunc and d2func should then be the derivatives of χ2, with all elements in the array summed together. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
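The difference between the two interfaces can be sketched as follows (illustrative code for an exponential model; the function names are invented and this is not the relax target function class):

```python
# scipy.optimize.leastsq is given the weighted residual vector and squares
# and sums it internally; minfx is given the already-summed chi-squared
# value, so its gradient must be the summed chi-squared gradient.
import numpy as np

def calc_exp(times, r2eff, i0):
    return i0 * np.exp(-r2eff * times)

def residuals(params, times, values, errors):
    """What scipy.optimize.leastsq minimises the sum of squares of."""
    return (calc_exp(times, *params) - values) / errors

def chi2(params, times, values, errors):
    """What minfx minimises directly: the sum of the squared weighted residuals."""
    return np.sum(residuals(params, times, values, errors)**2)

def dchi2(params, times, values, errors):
    """Chi-squared gradient for minfx: each element summed over all time points."""
    r2eff, i0 = params
    diff = calc_exp(times, r2eff, i0) - values
    d_dr2eff = -i0 * times * np.exp(-r2eff * times)
    d_di0 = np.exp(-r2eff * times)
    return np.array([
        2.0 * np.sum(diff / errors**2 * d_dr2eff),
        2.0 * np.sum(diff / errors**2 * d_di0),
    ])
```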
- Got the quasi-Newton BFGS algorithm to work. This uses only the gradient and gets the same results as 2000 Monte Carlo simulations with simplex and as scipy.optimize.leastsq. Error estimation is still not provided. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Removed all code regarding scipy.optimize fmin_cg and fmin_ncg. This problem should soon be able to be solved with minfx. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added initial documentation for multifit_covar. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified profiling script to use the new estimate R2eff module. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified verify error script, to use new estimate R2eff module. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Removed all unnecessary code from estimate R2eff module. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- More removal of code. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Changed the array declarations in the target_functions/exponential C file and header. Instead of using the pointer format of *xyz, the array format of xyz[] is now being used. These are equivalent, and the latter makes it more obvious that this is an array.
- Changed the array declarations in the target_functions/c_chi2 C file and header. Instead of using the pointer format of *xyz, the array format of xyz[] is now being used. These are equivalent, and the latter makes it more obvious that this is an array.
- Shifted all of the parameter Python lists to C arrays into the new param_to_c() function. This is for the target_functions.relax_fit C module to avoid much duplicated code.
- Removed the comment and docstring saying that the exponential curve-fitting gradient is unimplemented.
- Updated the copyright notices in the C files of the target_functions directory.
- Implemented the C version of the chi-squared Hessian. This is a direct translation of the Python code.
- Changed the internal variables of the chi-squared gradient C code to match the Python code.
- Standardisation of the array dimensionality in the target function C code. The new target_functions/dimensions.h header file defines MAX_PARAMS and MAX_DATA and is then included in the header files of all the other C files. All array declarations now explicitly specify the length of each dimension. The values of MAX_PARAMS and MAX_DATA have increased from 3 and 50 to 20 and 5000. This is to allow for models with more parameters and to allow a much larger number of input data points to be supported before memory corruptions happen. The data structures now take up more memory, but as the functions do not loop up to the maximum but only over the number of parameters and points specified, this will not make the code slower.
- All of the C code chi-squared functions now have the array argument dimensions explicitly declared.
- The target_functions/exponential.c file no longer includes exponential.h. This is not needed as exponential.h only contains the function definitions of the exponential.c file.
- Clean up of the header and includes of the target_functions/c_chi2.c file. The square() function macro has been shifted to the header file and the stdio.h and math.h standard library headers are no longer included as they are not used.
- Partly implemented the front end target_functions.relax_fit.d2func() C module Python function. This is not fully implemented as the exponential curve double partial derivatives are not implemented.
- Implemented the exponential curve second partial derivative C functions. These are declared in the exponential.h header file and are now used by the Python function target_functions.relax_fit.d2func().
- The square() function macro is now defined for the target_functions/exponential.c file. This is defined in the header, and now the exponential.h is included in the C file to access it.
- Implemented the specific_analyses.relax_fit.optimisation.d2func_wrapper() function. This converts the numpy parameter array into a Python list, calls the target_functions.relax_fit.d2func() function with this list, converts the Hessian output list of lists into a numpy rank-2 array, and returns it. This will allow Newton optimisation to be used for the relaxation curve-fitting analysis.
- Switched the optimisation algorithm in test_suite/system_tests/scripts/relax_fit.py. This script, used by the Relax_fit.test_curve_fitting_height and Relax_fit.test_curve_fitting_volume system tests, now uses Newton optimisation. This is to demonstrate that the exponential curve gradient function dfunc() and Hessian function d2func() are implemented correctly.
- Added a script and log file for calculating the numerical Hessian for an exponential curve. This uses the data at http://thread.gmane.org/gmane.science.nmr.relax.devel/6807/focus=6840 and calculates the Hessian using the numdifftools.Hessian object to construct and obtain the matrix, both at the minimum and at a point away from the minimum. The values will be used to construct a unit test to check the C module implementation.
- Fixes for the Hessian.py script for numerical integrating the Hessian for an exponential curve.
- Implemented two unit tests to check the Hessian of the target_functions.relax_fit.d2func() function. This compares the calculated Hessian to the numerically integrated values from the test_suite/shared_data/curve_fitting/numeric_gradient/Hessian.py script, showing that the d2func() function is implemented correctly.
- Modified profiling script, but it seems that the dfunc from target_functions.relax_fit does not work. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified estimate R2eff module, to use C code. But system test Relax_disp.test_estimate_r2eff_error shows that the Jacobian is not correctly implemented to be called in minfx.
- Created an initial test suite data directory for a mixed R1ρ + CPMG dispersion analysis. The generate.py script will be extended in the future to generate both synthetic R1ρ and CPMG data for a common exchange process. Such a data combination should show some minor flaws in the current design of the dispersion analysis and will help to solve these.
- Improvements to the pipe_control.minimise.reset_min_stats() function. The minimise statistics resetting is now more elegantly implemented. And the sim_index keyword argument is accepted by the function and individual Monte Carlo simulation elements can now be reset.
- Modified the wrapper function for curve_fit to only convert to a list type if the type is an ndarray. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- The model-free reset_min_stats() function has been replaced with the pipe_control.minimise version. The specific_analyses.model_free.optimisation.reset_min_stats() function has been deleted and instead the pipe_control.minimise version is being used.
- Implemented the first attempt to compute the variance of R2eff and I0 via the covariance. This uses the Jacobian matrix. The calculated errors are, though, far too small compared to 2000 Monte Carlo simulations. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Tried to implement the Jacobian from the C code. This, though, also reports errors which are too small. Perhaps some scaling is wrong. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified the profiling script to calculate timings. The timings for the C code are: Simplex, with constraints = 2.192; Simplex, without constraints = 0.216; BFGS, without constraints = 0.079; Newton, without constraints = 0.031. This is very fast. Some verification of the calculations should now also be added to this profiling script.
- Tried to verify solution to profiling script. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Set constraints=False when doing Monte Carlo simulations for R2eff. This speeds up the Monte Carlo simulations by a factor of 10 when estimating the error for R2eff. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Implemented the use of "Newton" as minimisation algorithm for R2eff curve fitting instead of simplex. Running the test script: test_suite/shared_data/dispersion/Kjaergaard_et_al_2013/2_pre_run_r2eff.py. For 50 Monte Carlo simulations, the time drop from: 3 minutes and 13 s, to 1 min an 5 seconds. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Changed the relax_fit.py sample script to use Newton rather than Simplex optimisation. This can lead to significantly faster optimisation times, as shown in the commit message http://article.gmane.org/gmane.science.nmr.relax.scm/23081.
- Changed the optimisation description in the relaxation curve-fitting chapter of the manual. The script example has been converted to match the sample script, replacing the Nelder-Mead simplex algorithm with Newton optimisation, and removing the argument turning diagonal scaling off. All the text about only the simplex algorithm being supported due to the missing gradients and Hessians in the C module has been deleted. The text that linear constraints are not supported has also been removed - this was fixed when the logarithmic barrier constraint algorithm was added to minfx.
- By using minfx and the reported Jacobian, it is now possible to get the exact same error estimation as scipy.optimize.leastsq. The fatal error was setting the diagonal elements of the weighting matrix to the errors; the weights are 1/errors^2. There are, though, some unanswered questions left. The Jacobian used is the direct derivative of the function, not the χ2 derivative Jacobian (see the sketch below). Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
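A hedged sketch of the covariance-based error estimate described in this and the surrounding entries, in the spirit of GSL's multifit_covar (illustrative only; not the relax multifit_covar() function):

```python
# Sketch of estimating parameter errors from the Jacobian and the weights
# W = 1/errors^2, giving covar = (J^T W J)^-1.
import numpy as np

def covariance_from_jacobian(jacobian, errors):
    """jacobian: rank-2 array, one row per data point, one column per parameter.
    errors: 1D array of standard deviations for the data points."""
    weights = 1.0 / np.asarray(errors, dtype=float)**2
    jtwj = jacobian.T @ (weights[:, None] * jacobian)
    return np.linalg.inv(jtwj)

# Parameter errors are then the square roots of the diagonal:
#     param_errors = np.sqrt(np.diag(covariance_from_jacobian(J, errors)))
```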
- Fixed the naming of functions in the R2eff estimation module to better represent what they do. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Implemented the Jacobian of the exponential function in Python code. This now also gives the same errors as leastsq and the C code. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Tried to implement a safety test for linearly-dependent columns in the covariance matrix. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Fixes for the relax_disp.r2eff_estimate user function documentation. This is to allow the relax manual to compile again as the original documentation was causing LaTeX failures.
- Clean up of the declarations in the target_functions.relax_fit C module. The Python list objects are now declared at the start of the functions, and then PyList_New() is called later on. This allows the code to compile on certain Windows systems.
- Removed the user function to estimate the R2eff values and errors with scipy.optimize.leastsq. With the newly implemented Jacobian and Hessian of the exponential decay function, the front-end to scipy.optimize.leastsq does not serve a purpose. This is because minfx is now as fast as scipy.optimize.leastsq, and can estimate the errors from the Jacobian to the exact same numbers as scipy.optimize.leastsq. In addition, the covariance can be calculated by QR decomposition, which adds an additional check for a singular matrix. The back-end will still be kept in place for the time being, but could be removed later. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added the front-end to the new user function relax_disp.r2eff_err_estimate, which will estimate the R2eff errors from a pipe and spins with optimised values of R2eff and I0. The covariance matrix can be calculated from the optimised parameters and the Jacobian. Great care should be taken not to directly trust these results, since the errors are quite different compared to the Monte Carlo simulations. This implementation reaches the exact same error estimation as scipy.optimize.leastsq, but with much better control over the data and insight into the calculations. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added method to automatically perform error analysis on peak heights. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified system test Relax_disp.test_estimate_r2eff() to first do a grid search, then minimise and then estimate the errors for R2eff and I0. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added back-end to estimate R2eff errors. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Fix to system test test_estimate_r2eff_error(), to first delete the old error estimations. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added several tests to test_estimate_r2eff_error to compare the output from the different algorithms. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Cleaned up code in R2eff error module. Also removed a non working Hessian matrix. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Moved code around, and made function multifit_covar() independent of class object. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Inserted checks that the C module is available into the module for estimating the R2eff error. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Removed unnecessary call to experimental Exp class. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Renamed the system tests that test the user function for estimating the R2eff error: test_estimate_r2eff_err tests the user function, while test_estimate_r2eff_err_methods tests the different methods for obtaining the error. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added the Relax_disp.test_estimate_r2eff_err_auto system test and extended the functionality of the auto-analysis protocol. If "exp_mc_sim_num" is set to -1 and sent to the auto-analysis, the errors of R2eff will be estimated from the covariance matrix. These errors are highly likely to be wrong, but can be used in an initial test phase to rapidly produce data for plotting. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added script, to be used in GUI test. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added GUI test Relax_disp.test_r2eff_err_estimate, to test the setting of MC sim to -1 for exponential R2eff error estimation. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added keyword "exp_mc_sim_num", to the auto-analyses in the GUI. This sets the number of Monte Carlo simulations for R2eff error estimation in exponential curve fitting. When setting to -1, the errors are estimated from the covariance matrix. These errors are highly likely to be wrong, but can be used in Rapid testing of data and plotting. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Tried to click the "fit_r1" button in the GUI test, but receives an error: relax --gui-tests Relax_disp.test_r2eff_err_estimate, "AttributeError: 'SpinContainer' object has no attribute 'r1'". Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Moved the mc_sim_num GUI element up in the analysis tab, as it is executed first. Also modified the tooltip. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added a warning to the auto-analyses about error estimation from the covariance. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Removed yet another comma from GUI tooltip. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Formatting changes for the lib.periodic_table module. This is in preparation for extending the information content of this module.
- Modified the 'test_estimate_r2eff_err_auto' system test to use the GUI script. It seems to work perfectly. This is to test against the GUI test script test_r2eff_err_estimate. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified test_estimate_r2eff_err_auto to set r1_fit to False. This still makes the system test pass and fit R1, so the R1 fit button is not functioning properly. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Fix for warning message in the auto-analyses in the GUI. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Tried to improve docstring for API documentation. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added all of the IUPAC 2011 atomic weights to the lib.periodic_table module. These will be useful for correctly calculating the centre of mass of a molecule.
- The lib.periodic_table method for adding elements is now private.
- Created the unit test infrastructure for the lib.periodic_table module. This includes one unit test of the lib.periodic_table.periodic_table.atomic_weight() function which has not been implemented yet.
- Implemented the lib.periodic_table.periodic_table.atomic_weight() method. This returns the standard atomic weight of the atom as a float.
- Yet another attempt to make the API documentation work. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Implemented the Relax_disp.verify_estimate_r2eff_err_compare_mc system test for testing the R2eff error as a function of the number of Monte Carlo simulations. Note that since the name starts with "verify" rather than "test", this test will not be run as part of the system test suite. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Converted the periodic table in lib.periodic_table into a dictionary type object. The new Element container has been added for storing the information about each element in the table. The Periodic_table object uses the atomic symbol as the key for each Element instance.
- Modified the Relax_disp.test_estimate_r2eff_err_methods() system test to show the difference between using the direct function Jacobian and the χ2 function Jacobian. Also added the functionality to the estimate R2eff module to switch between the different Jacobians. The results show that R2eff can be estimated better.
- Added isotope information to the lib.periodic_table module including mass number and atomic mass. A new Isotope data container has been added to store this information. The Periodic_table._add() method now returns the initialised Element container. This container has the _add_isotope() method which is used to initialise Isotope data containers with the mass number and atomic mass and append it to the list.
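The layered container design described in the two entries above could look roughly like this (a hedged sketch of the idea only; the attribute and method details of the real lib.periodic_table module may differ):

```python
# Rough sketch of the layered lookup structure described above: a
# dictionary-like Periodic_table keyed on the atomic symbol, holding Element
# containers, each of which holds a list of Isotope containers.
class Isotope:
    def __init__(self, mass_number, atomic_mass):
        self.mass_number = mass_number
        self.atomic_mass = atomic_mass

class Element:
    def __init__(self, atomic_number, name, atomic_weight):
        self.atomic_number = atomic_number
        self.name = name
        self.atomic_weight = atomic_weight
        self.isotopes = []

    def _add_isotope(self, mass_number, atomic_mass):
        self.isotopes.append(Isotope(mass_number, atomic_mass))

class Periodic_table(dict):
    def _add(self, symbol, atomic_number, name, atomic_weight):
        self[symbol] = Element(atomic_number, name, atomic_weight)
        return self[symbol]   # allows the isotopes to be chained on afterwards

    def atomic_weight(self, symbol):
        return self[symbol].atomic_weight

# Example usage:
table = Periodic_table()
table._add('H', 1, 'Hydrogen', 1.008)._add_isotope(1, 1.0078250319)
print(table.atomic_weight('H'))
```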
- Created a unit test for the Periodic_table.atomic_mass() method. This method is not implemented yet.
- Changed the method call in the new Test_periodic_table.test_get_atomic_mass unit test.
- Fix for the Test_periodic_table.test_get_atomic_mass unit test - the method calls were incorrect.
- Implemented the lib.periodic_table module Periodic_table.atomic_mass() method. This method will return either the atomic mass of an isotope or the standard atomic weight.
- Changed the operation of the lib.structure.mass.centre_of_mass() function. Instead of using the lib.physical_constants.return_atomic_mass() function, the centre_of_mass() function now uses the lib.periodic_table.periodic_table.atomic_mass() method. This is a huge improvement in that the exact masses of all elements are taken into account.
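A minimal sketch of a mass-weighted centre-of-mass calculation of the kind described above (illustrative; not the relax lib.structure.mass code):

```python
# Illustrative centre-of-mass calculation using per-element masses.
import numpy as np

def centre_of_mass(positions, masses):
    """positions: (N, 3) array of coordinates; masses: length-N array."""
    positions = np.asarray(positions, dtype=float)
    masses = np.asarray(masses, dtype=float)
    return (masses[:, None] * positions).sum(axis=0) / masses.sum()

# Example: a C-H pair one angstrom apart along x.
print(centre_of_mass([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], [12.011, 1.008]))
```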
- Deletion of the lib.physical_constants.return_atomic_mass() function and all relative atomic masses. These were inaccurate and only included a tiny subset of all standard atomic weights and isotope masses. The functionality has been replaced by the complete and 100% accurate Periodic_table object in the lib.periodic_table module.
- Fix for the MolContainer.fill_object_from_gaussian() method. This is in the lib.structure.internal.molecules module. The Periodic_table.lookup_z_to_symbol() method in the lib.periodic_table module has been renamed to lookup_symbol().
- Fix for the Periodic_table.lookup_symbol() method. The __init__() method of the Periodic_table has been reintroduced to initialise a fast atomic symbol lookup table. The _add() method then updates this table. And the lookup_symbol() method now uses this lookup table to correctly return the symbol.
- Tiny fix for the Diffusion_tensor.test_create_diff_tensor_pdb_ellipsoid system test. The switch to using the lib.periodic_table module for atomic masses has caused the centre of mass of the ellipsoid to shift just enough that one ATOM coordinate in the PDB file has changed its last significant digit.
- Created the lib.periodic_table.process_symbol() function. This will take an atomic symbol and return a copy of it with an uppercase first letter and lowercase second letter. This is used by the Periodic_table methods atomic_mass() and atomic_weight() to allow for non-standard symbol input, for example if the element name comes directly from the all uppercase PDB file format without translation.
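A minimal sketch of the symbol normalisation described above (assuming only the behaviour stated in the entry; not the actual relax function):

```python
# Normalise an atomic symbol: upper-case the first letter and lower-case the
# rest, so e.g. 'FE' from an all-uppercase PDB file becomes 'Fe'.
def process_symbol(symbol):
    if len(symbol) == 1:
        return symbol.upper()
    return symbol[0].upper() + symbol[1:].lower()

print(process_symbol('FE'))   # 'Fe'
print(process_symbol('c'))    # 'C'
```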
- Tried to scale the covariance matrix as explained here: http://www.orbitals.com/self/least/least.htm. This does not work better. Also replaced "errors" with "weights" in multifit_covar() to better control the calculations. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added all gyromagnetic ratio information from lib.physical_constants to lib.periodic_table. The Periodic_table.gyromagnetic_ratio() method has been added to allow this value to be easily returned.
- Added the ability to the back-end of the R2eff estimate module to switch between the function Jacobian and the χ2 Jacobian. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified the user function relax_disp.r2eff_err_estimate to be able to switch between the Jacobians. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified the Relax_disp.verify_estimate_r2eff_err_compare_mc system test to test the difference between the Jacobians. The printout shows the estimated R2eff error as a function of the covariance estimation and the number of Monte Carlo simulations. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Deleted the gyromagnetic ratio values and return_gyromagnetic_ratio() function from lib.physical_constants.
- Shifted all of relax to use the lib.periodic_table module for gyromagnetic ratios. The values and value returning function have been removed from lib.physical_constants and replaced by the Periodic_table.gyromagnetic_ratio() method in the lib.periodic_table module.
- Started making the functions in the R2eff estimate module independent of the information stored in the class. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Cleaned up the code in the R2eff estimate module by making each function independent of the class. This is to give a better overview of how the different functions connect together. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Made the user function, which estimates the R2eff errors, use the Jacobian derived from χ2 function. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified system test verify_estimate_r2eff_err_compare_mc() to first use the direct function Jacobian, and then the χ2 derived Jacobian. This shows the result better. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added digit to printout in R2eff estimate module. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Locked values for system test test_estimate_r2eff_err, to estimate how the R2eff error estimation reflects on fitted parameters. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- More locking of values, when trying to use different methods for estimating R2eff err values. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- More locking of values. This actually shows that the errors should be estimated from the direct Jacobian, not the χ2 Jacobian. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Reverted the logic that the χ2 Jacobian should be used; the direct exponential Jacobian is used instead. When fitting with the errors estimated from the direct Jacobian, the results are much better and comparable to 2000 Monte Carlo simulations. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Various precision fixes for different machine precisions. This is in verify_r1rho_kjaergaard_missing_r1. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- First attempt at properly implementing the target_functions.relax_fit.jacobian() function. This is now the Jacobian of the chi-squared function. A new jacobian_matrix data structure has been created for holding the matrix data prior to converting it into a Python list of lists. The equation used was simply the chi-squared gradient whereby the sum over i has been dropped and the i elements are stored in the second dimension of the matrix.
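The construction described above, i.e. the chi-squared gradient with the sum over the data points dropped, can be written in numpy form as (a sketch of the maths only; not the C module):

```python
# Sketch of the chi-squared Jacobian described above: one row per parameter
# j and one column per data point i, with
#     J[j][i] = -2 * (values[i] - back_calc[i]) / variances[i] * dfunc_theta[j][i]
import numpy as np

def jacobian_chi2(values, back_calc, variances, dfunc_theta):
    """dfunc_theta: rank-2 array of dI_i/dtheta_j values, one row per parameter j."""
    weights = (values - back_calc) / variances
    return -2.0 * weights * dfunc_theta
```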
- Speed up of the target_functions.relax_fit C module. The variances are now pre-calculated in the setup() function from the errors, so that the use of the square() function is minimised. The chi-squared equation, gradient, and Hessian functions now accept the variance rather than standard deviation argument and hence the squaring of errors has been removed. This avoids a lot of duplicated maths operations.
- Alphabetical ordering of global variable declarations in the target_functions.relax_fit header file.
- Added a RelaxError if fewer than 2 time points are used for exponential curve fitting in R2eff. This follows http://thread.gmane.org/gmane.science.nmr.relax.user/1718 and http://thread.gmane.org/gmane.science.nmr.relax.user/1735. Specifically, data was attached here: http://thread.gmane.org/gmane.science.nmr.relax.user/1735/focus=1736.
- Added the Relax_disp.test_bug_atul_srivastava() system test to catch a bug where a RelaxError is not raised, as the setup points to a situation where the data requires exponential fitting but only one time point is added per file. This follows http://thread.gmane.org/gmane.science.nmr.relax.user/1718 and http://thread.gmane.org/gmane.science.nmr.relax.user/1735. Specifically, data was attached here: http://thread.gmane.org/gmane.science.nmr.relax.user/1735/focus=1736.
- Lowered the parameter precision for Relax_disp.test_estimate_r2eff_err_auto(). This is due to changes to the C code. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Created the select.display user function. This simply displays the current spin selections of all spins. In the future it can be extended to display the interatomic data container selections, domain selections, etc.
- Fix for system test: test_estimate_r2eff_err_auto(). The Jacobian to estimate the errors has been changed from the direct function Jacobian, to the Jacobian of the χ2 function. This changes the R2eff error predictions, and hence parameter fitting. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Implemented the direct Jacobian in Python, to be independent of C code in development phase. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Activated all of the method attempts in the Relax_disp.test_estimate_r2eff_err_methods system test. This is to quickly estimate errors from all of the different methods. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Fix to system test: test_estimate_r2eff_err_auto, which now checks the values for the direct Jacobian. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Increased the number of time points for exponential curve fitting to 3.
- Fix to weight properly according to whether minimising with the direct Jacobian or the χ2 Jacobian. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Fix to system test test_estimate_r2eff_err_methods, after modification of weighting. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Switched in estimate_r2eff_err() to use the χ2 Jacobian from C code, and Jacobian from Python code. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Removed all references to test values which were obtained with the wrong weighting. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Better error checking in the relaxation dispersion overfit_deselect() API method. The model must be set for this procedure to work, and the method now checks that this is the case.
- Better error checking for the specific_analyses.relax_disp.average_intensity() function. This function would fail with a traceback if a peak intensity error analysis had not yet been performed. Now it fails instead with a clean RelaxError so that the user knows what is wrong.
- Tried implementing the calculation of the χ2 gradient using target_function.chi2.dchi2(). The output seems equal. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Replaced the way the χ2 Jacobian is calculated for the exponential fit in minfx. This is only for the test class, but reuses library code. This should make it much easier in the future to implement χ2 gradient functions for minfx, since it is only necessary to implement the direct gradient of the function and then pass it to the χ2 library, which turns it into the χ2 gradient function that minfx uses. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Moved unnecessary function in R2eff error estimate module into experimental class. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Implemented the test_bug_negative_intensities_cpmg system test to show the lack of an error message to the user. Perhaps these spins should be deselected, or at least a better warning shown. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- An attempt at documenting the Monte Carlo simulation verses covariance matrix error estimates. This is for the R2eff and I0 parameters of the exponential curves. For the Monte Carlo errors, 10000 simulations were preformed. This means that these errors can perform as a gold standard by which to judge the covariance matrix technique. Currently it can be seen that the relax_disp.r2eff_err_estimate user function with the chi2_jacobian flag set to True performs extremely poorly.
- Reintroduced the original target_functions.relax_fit.jacobian() function. The new function for the Jacobian of the chi-squared function has been renamed to target_functions.relax_fit.jacobian_chi2() so that both Python functions are accessible within the C module.
- Epydoc fixes for the pipe_control.mol_res_spin.format_info_full() function.
- Epydoc docstring fixes for many methods in the relaxation dispersion auto-analysis module.
- If math domain errors are found when calculating the two-point R2eff values, the point is now skipped, as sketched below. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
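A minimal sketch of the skip, assuming the standard two-point relation R2eff = -ln(I/I_ref)/T; this is not the relax implementation itself:

    from math import log

    def two_point_r2eff(i_ref, intensity, relax_time):
        """Two-point R2eff value, or None when a math domain error would occur."""
        if i_ref <= 0.0 or intensity <= 0.0:
            # A zero or negative intensity would cause a log() math domain
            # error, so the point is skipped.
            return None
        return -log(intensity / i_ref) / relax_time

    print(two_point_r2eff(1000.0, 800.0, 0.02))   # ~11.16 s^-1
    print(two_point_r2eff(1000.0, -50.0, 0.02))   # None, the point is skipped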
- Moved the negative intensity value from the reference to a CPMG point.
- Modified system test test_bug_negative_intensities_cpmg, to prepare for testing number of R2eff points. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Comparison of 10,000 Monte Carlo simulations to a different covariance matrix error estimate. The covariance_matrix.py script has been duplicated and the chi2_jacobian argument of the relax_disp.r2eff_err_estimate user function has been changed from True to False. As can be seen in the 2D Grace plots, this error estimate is incredibly different. The R2eff errors are overestimated by a factor of 1.9555, which indicates that the Jacobian or the covariance matrix formula is not yet correct.
- The target_functions.relax_fit C module Python function jacobian_chi2() is now exposed. This was previously not visible from within Python.
- Added a script and log file for calculating the numerical Jacobian for an exponential curve. This uses the data at http://thread.gmane.org/gmane.science.nmr.relax.devel/6807/focus=6840 and calculates the Jacobian using the numdifftools.Jacobian object to construct and obtain the matrix, both at the minimum and at a point away from the minimum. The values will be used to construct a unit test to check the C module implementation. A usage sketch is given below.
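Roughly how the numdifftools.Jacobian construct is used (the time points and parameter values below are illustrative, not the data from the linked thread):

    import numpy as np
    import numdifftools as nd

    times = np.array([0.0, 0.1, 0.2, 0.4, 0.8])   # illustrative time points

    def exp_curve(params):
        """Exponential curve I(t) = I0 * exp(-R*t) evaluated at all time points."""
        r, i0 = params
        return i0 * np.exp(-r * times)

    # Numerical Jacobian with respect to (R, I0), here at an arbitrary point.
    jacobian = nd.Jacobian(exp_curve)
    print(jacobian([10.0, 1000.0]))               # shape (len(times), 2)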
- Created two unit tests showing the target_functions.relax_fit.jacobian() function is correct. This compares the calculated Jacobian to the numerically integrated values from the test_suite/shared_data/curve_fitting/numeric_gradient/jacobian.py script.
- Renamed the test_data/shared_data/curve_fitting/numeric_gradient/ directory to numeric_topology. This is to better reflect that it contains numeric approximations to the gradient, Hessian, and Jacobian.
- Added a script and log for calculating the numerical chi-squared Jacobian for an exponential curve. This uses the data at http://thread.gmane.org/gmane.science.nmr.relax.devel/6807/focus=6840 and calculates the chi-squared Jacobian using the numdifftools.Jacobian object to construct and obtain the matrix, both at the minimum and at a point away from the minimum. The values will be used to construct a unit test to check the C module implementation.
- Fix for the chi-squared Jacobian numerical approximation script. The function was modified so that a list of chi-squared elements is returned, i.e. the sum part of the chi-squared equation has been removed.
- Created two unit tests showing the target_functions.relax_fit.jacobian_chi2() function is correct. This compares the calculated chi-squared Jacobian to the numerically integrated values from the test_suite/shared_data/curve_fitting/numeric_topology/jacobian_chi2.py script.
- Added a script and log for calculating the numerical covariance matrix for an exponential curve. This uses the data at http://thread.gmane.org/gmane.science.nmr.relax.devel/6807/focus=6840 and calculates the covariance matrix via the Jacobian, obtained using the numdifftools.Jacobian object, both at the minimum and at a point away from the minimum. The covariance is calculated as inv(J^T.W.J), as sketched below.
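A small numpy sketch of the inv(J^T.W.J) calculation with W = diag(1/σ²); the Jacobian values below are made up for illustration:

    import numpy as np

    def covariance_matrix(jacobian, errors):
        """Covariance matrix estimate Q_xx = (J^T.W.J)^-1, W = diag(1/sigma^2)."""
        weights = np.diag(1.0 / errors**2)
        return np.linalg.inv(jacobian.T @ weights @ jacobian)

    # Illustrative 2-parameter Jacobian for 5 data points with constant errors.
    jac = np.array([[   0.0, 1.00],
                    [-100.0, 0.90],
                    [-180.0, 0.80],
                    [-290.0, 0.67],
                    [-450.0, 0.45]])
    errors = np.full(5, 10.0)
    cov = covariance_matrix(jac, errors)
    print(np.sqrt(np.diag(cov)))                  # parameter error estimates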
- Added a script and log for calculating the exponential curve parameter errors via bootstrapping. This uses the data at http://thread.gmane.org/gmane.science.nmr.relax.devel/6807/focus=6840 and calculates the parameter errors via bootstrapping. As the parameters at the minimum are the exact parameter values, bootstrapping and Monte Carlo simulation converge and hence this is a true error estimate. 200,000 simulations were used, so the parameter errors are extremely accurate.
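A schematic of the bootstrapping error estimate at the exact minimum, assuming scipy is available; only 500 iterations are used here, whereas the log above used 200,000:

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    times = np.array([0.0, 0.1, 0.2, 0.4, 0.8])   # illustrative time points
    r_true, i0_true, sigma = 10.0, 1000.0, 10.0

    def exp_curve(t, r, i0):
        return i0 * np.exp(-r * t)

    # At the minimum the back-calculated curve equals the "true" curve, so
    # bootstrapping and Monte Carlo simulation coincide.
    base = exp_curve(times, r_true, i0_true)

    params = []
    for _ in range(500):
        synth = base + rng.normal(0.0, sigma, size=times.size)
        popt, _ = curve_fit(exp_curve, times, synth, p0=[r_true, i0_true])
        params.append(popt)
    print(np.std(params, axis=0))                 # bootstrapped errors for R and I0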
- Modified module to estimate R2eff errors, to use the C code Jacobian. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified system test test_estimate_r2eff_err_methods, to check all Jacobian methods are correctly implemented. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added more printout information when log(I / I_ref) is negative and errors are raised. This helps the user trace the source of the error more easily.
- Improved the system test test_bug_negative_intensities_cpmg by counting the number of R2eff points. Spin 4, which has one negative intensity, is expected to have one less R2eff point. This makes sure that all CPMG data sets can be loaded and analysed, even if some peaks are very weak and fluctuate at the error level.
- Fix for also storing 'r1_fit' to cdp even though it is set to False. Bug #22541: The R1 fit flag does not work in the GUI.
- Cleanup in GUI test Relax_disp.test_r2eff_err_estimate. This now passes after previous commit. Bug #22541: The R1 fit flag does not work in the GUI.
- Added model DPL94, to be tested in GUI test Relax_disp.test_r2eff_err_estimate. This shows that the bug is still there. Bug #22541: The R1 fit flag does not work in the GUI.
- Fix for the system tests test_estimate_r2eff_err and test_r1rho_kjaergaard_missing_r1, where r1_fit=True needed to be sent to the auto-analysis. Bug #22541: The R1 fit flag does not work in the GUI.
- API documentation fixes.
- Moved multifit_covar into lib.statistics, since it is an independent module. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Moved "func_exp_grad" into experimental class for different minimisation methods. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Improved the documentation of the relax_disp.r2eff_err_estimate user function, and removed the possibility to use the χ2 Jacobian, as this performs very poorly. The back-end still retains this possibility should one desire to try it. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Moved the 'chi2_jacobian' argument to be the last argument of estimate_r2eff_err(). This argument is highly unlikely to be used, but is kept for future testing purposes. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Fix for the experimental class for fitting with different methods. After moving the function into the class, 'self' needed to be added to the function. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Fix for the system test test_estimate_r2eff_err, after removing the possibility to use the χ2 Jacobian. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Fix for the system test test_estimate_r2eff_err_methods. The function was called incorrectly in the experimental class. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Initial attempt at writing comments on how to generalise the scaling of the covariance matrix according to the reduced χ2 value (see the sketch below). Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
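A hedged sketch of the scaling idea (illustrative names, not relax code): the covariance matrix can be multiplied by the reduced χ2 value when the per-point errors are unknown or unreliable:

    import numpy as np

    def scale_covariance(covariance, chi2, n_points, n_params):
        """Scale a covariance matrix by the reduced chi-squared value.

        The reduced chi2 is chi2 / (n_points - n_params); values above 1
        inflate the parameter errors, values below 1 shrink them.
        """
        dof = n_points - n_params
        if dof <= 0:
            raise ValueError("more parameters than data points")
        return covariance * chi2 / dof

    cov = np.array([[4.0e-2, 0.0],
                    [0.0,   25.0]])
    scaled = scale_covariance(cov, chi2=12.0, n_points=8, n_params=2)
    print(np.sqrt(np.diag(scaled)))               # scaled parameter errors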
- First attempt at a test script for assessing how well the R2eff error calculations perform. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Set the number of simulations to 10,000 in the test script, and varied the random number of time points per simulation between 3 and 10. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- In the module for estimating R2eff errors, removed 'values, errors' from being sent to the gradient function, since they are not used. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added the Jacobian to the test script, which now correctly performs the simulations per R2eff point. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Improved the analysis test script with plotting. It appears that the R2eff error estimation always gives the same result. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added initial dataset for test analysis. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Deleted test data set. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified the data generator script to handle both a situation with a fixed 5 time points and a situation with a variable number of time points, and also modified the analysis script. It appears that this influences how the error estimation performs. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added simulations that show there is perfect agreement between Monte Carlo simulations and the covariance estimation. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Inserted extra tests into the system test Relax_disp.test_estimate_r2eff_err_methods to check that all values of R and I0 are positive and that the standard deviations from the Monte Carlo simulations are equal. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Inserted the system test Relax_disp.test_finite_value to illustrate the return of inf from the C code exponential when R is negative. This can be an issue if minfx takes a wrong step when no constraints are implemented. Bug #22552: χ2 value returned from C code curve-fitting return 0.0 for wrong parameters -> Expected influence on Monte Carlo sim.
- Inserted the possibility of bootstrapping into the system test Relax_disp.test_estimate_r2eff_err_methods. This shows that the bootstrapping method gives the SAME estimate of the R2eff errors as the estimate_r2eff_err() function. This must mean that either the old Monte Carlo simulation was corrupted, or the creation of data in the Monte Carlo simulations is corrupted.
- Modified the system test Relax_disp.verify_estimate_r2eff_err_compare_mc to include the bootstrapping method. This shows there is excellent agreement between bootstrapping and the estimation of errors from the covariance matrix, while the relax Monte Carlo simulations are very different. Bootstrapping is the "-2" method. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added functionality to create peak lists for virtual data. This is to compare the distributions of R2eff values made by bootstrapping and Monte Carlo simulations. The rest of the analysis will be performed in relax. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added initial peak lists to be analysed in relax for test purposes. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added a relax analysis script to profile the distribution of errors drawn in relax and by the Python module "random". It appears that relax draws a much narrower distribution of intensities with errors than the Python "random" module does (see the sketch below). This influences the estimated parameter errors and is a potentially huge error in relax, a possible example of a catastrophic implementation of Monte Carlo simulations. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
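A quick self-contained check of the expected width, using only the Python standard library: drawing Gaussian noise of width σ around a base intensity should return a sample standard deviation close to σ, and anything markedly narrower points to a problem in the simulation setup.

    import random
    from statistics import stdev

    i0, sigma = 1000.0, 10.0

    # 10,000 synthetic intensities drawn the way a Monte Carlo simulation
    # should draw them: base value plus Gaussian noise of width sigma.
    draws = [random.gauss(i0, sigma) for _ in range(10000)]

    # The sample standard deviation should come back close to sigma (10.0).
    print(stdev(draws))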
- Added a PNG image showing that the distribution which relax creates is too narrow. This is a potentially huge flaw in the implementation of the Monte Carlo simulations. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Modified the analysis script to also make a histogram of the intensities. This shows that the created intensities are totally off from the true intensity. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Comment fix for the system test Relax_disp.test_estimate_r2eff_err_methods, after finding the bug in relax. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting. Bug #22554: The distribution of intensity with errors in Monte Carlo simulations are markedly more narrow than expected.
- Cleaned up the user function for estimating R2eff errors. Extensive tests have shown that there is very good agreement between the covariance estimation and Monte Carlo simulations. This is indeed a very positive implementation. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting. Bug #22554: The distribution of intensity with errors in Monte Carlo simulations are markedly more narrow than expected.
- Removed all junk comments from the module for R2eff error estimation. The module now runs perfectly. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting. Bug #22554: The distribution of intensity with errors in Monte Carlo simulations are markedly more narrow than expected.
- Fix for inf values being returned from the C code exponential function. These values are now converted to large finite values. This fixes the system test Relax_disp.test_finite_value. Example: x = np.array([np.inf, -np.inf, np.nan, -128, 128]); np.nan_to_num(x) gives array([ 1.79769313e+308, -1.79769313e+308, 0.00000000e+000, -1.28000000e+002, 1.28000000e+002]). Bug #22552: χ2 value returned from C code curve-fitting return 0.0 for wrong parameters -> Expected influence on Monte Carlo sim. Ref: http://docs.scipy.org/doc/numpy/reference/generated/numpy.nan_to_num.html.
- Initial attempt at reaching the constrained methods in minfx through relax. This is in the system test Relax_disp.verify_estimate_r2eff_err_compare_mc(). This does not appear to be supported though.
- Allow the R2eff model to reach the constrained methods in minfx through relax. This is in the system test Relax_disp.verify_estimate_r2eff_err_compare_mc(). This does not appear to be supported though.
- Modified specific_analyses.relax_disp.parameters.r1_setup() to initialise the 'r1' variable. This relates to bug #22541, the R1 fit flag does not work in the GUI. This is a hack, as all of the dispersion analysis code assumes that all parameters are initialised. This is a dangerous assumption that will have to be eliminated in the future.
- The dispersion get_param_values() API method now calls the r1_setup() function. This relates to bug #22541, the R1 fit flag does not work in the GUI. This is to make sure that the parameters are correctly set up prior to obtaining all parameter values. The R1 parameter is dynamic hence r1_setup() needs to be called at any point model parameters are accessed, as the R1 parameter can be turned on or off at any time with the relax_disp.r1_fit user function.
- Yet another attempt at implementing the constrained method in verify_estimate_r2eff_err_compare_mc().
- Another attempt at reaching the constrained methods in minfx through relax. This would require specifying: l, the lower bound constraint vector (l ≤ x ≤ u); u, the upper bound constraint vector (l ≤ x ≤ u); c, the user supplied constraint function; and dc, the user supplied constraint gradient function.
- Added a derivation of the R2eff/R1ρ error estimate for the two-point measurement to the manual. This is from http://thread.gmane.org/gmane.science.nmr.relax.devel/6929/focus=6993 and is for the rate uncertainty of a 2-parameter exponential curve when only two data points have been collected. The derivation has been added to the dispersion chapter of the manual.
- Equation fixes for the two-point exponential error derivation in the dispersion chapter of the manual.
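For reference, a hedged summary of the standard error propagation result behind such a two-point estimate, assuming independent errors on the two intensities (the derivation in the manual is the authoritative version):

    R_{\mathrm{2eff}} = -\frac{1}{T}\ln\!\left(\frac{I}{I_{\mathrm{ref}}}\right),
    \qquad
    \sigma_{R_{\mathrm{2eff}}} = \frac{1}{T}
        \sqrt{\left(\frac{\sigma_I}{I}\right)^2
            + \left(\frac{\sigma_{I_{\mathrm{ref}}}}{I_{\mathrm{ref}}}\right)^2}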
- Updated the minfx version numbers in the release checklist document. The version is now 1.0.10, which has not been released yet but will contain the implementation of the log-barrier constraint algorithm gradient and Hessian.
- Fix for the minfx version checking logic in the dep_check module. Now newer versions of minfx will be handled.
- Fixes for the Relax_disp.test_estimate_r2eff_err system test. The kex parameter value checks have all been scaled by 1e-5 to allow for a meaningful floating point number comparison. The number of significant figures have also been scaled. This allows the test to now pass on one 64-bit GNU/Linux system.
- Another fix for the minfx version checking in the dep_check module. The version_comparison() function has been created to perform a proper version number comparison by stripping trailing zeros, converting the two version numbers to lists of int and comparing the lists using the Python cmp() function. This will return -1 when the version number is too low, 0 when the versions are equal, and 1 when the version is higher than the minimum.
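A minimal sketch of the comparison logic described above, using the Python 3 compatible (v1 > v2) - (v1 < v2) idiom in place of cmp() (see the Python 3 fix below); this is illustrative and not the dep_check code itself:

    def version_comparison(current, minimum):
        """Compare two dotted version strings.

        Returns -1 if current < minimum, 0 if they are equal, and 1 if
        current > minimum. Trailing zeros are stripped first so that,
        for example, '1.0' and '1.0.0' compare as equal.
        """
        v1 = [int(x) for x in current.split('.')]
        v2 = [int(x) for x in minimum.split('.')]
        while v1 and v1[-1] == 0:
            v1.pop()
        while v2 and v2[-1] == 0:
            v2.pop()
        # Python 3 compatible replacement for cmp(v1, v2).
        return (v1 > v2) - (v1 < v2)

    print(version_comparison('1.0.10', '1.0.7'))   # 1, new enough
    print(version_comparison('1.0.5', '1.0.7'))    # -1, too old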
- Added a button for the spectrum.error_analysis user function to the spectra list GUI element. This is placed after the 'Add' and 'Delete' buttons. The functionality could be improved by presetting the spectrum ID argument to anything the user has selected in the spectrum list.
- Modified the behaviour of the spectrum.error_analysis button in the spectrum list GUI element. Now the subset argument of this user function will be pre-set to any spectra selected in the list.
- Improvements for the spectrum.error_analysis button in the spectrum list GUI element. The user function is now launched as being modal so that the rest of the GUI freezes, and after the user function is executed the relax controller window is shown and scrolled to the bottom.
- Added the Relax_disp system tests to the blacklist if they depend on the compiled C module.
- Improvements for the spectrum.error_analysis button in the spectrum list GUI element. The subset argument is set to None if no spectra are selected.
- Loosened a value check to allow the Relax_disp.test_r1rho_kjaergaard_missing_r1 system test to pass. This test fails on MS Windows systems.
- Fix for the Relax_disp.test_estimate_r2eff_err_auto system test on MS Windows systems. One of the value checks has been loosened.
- Python 2 vs. 3 compatibility fix for the pickle module. This is for the estimate_errors*.py scripts in the directory test_suite/shared_data/curve_fitting/numeric_topology/. The lib.compat.pickle module is now used to allow both Python versions to run relax.
- Python 3 fix, the cmp(v1, v2) notation in the dep_check.version_comparison() function has been replaced with (v1 > v2) - (v1 < v2). This allows relax to run on Python 3.
- Python 3 fix for the lib.periodic_table module, the Python string module does not exist in Python 3.
- Created the user_functions.uf_translation_table list. The elements of this list are the names of user functions before and after a renaming. The list is provided for backwards compatibility for relax scripts, though it is not used yet.
- Converted the user_functions.uf_translation_table object to a dictionary. This is for faster access which does not require looping.
- The prompt UI now uses the user_functions.uf_translation_table dictionary. The modified runcode() function will now check if the command typed by the user is a function or method call and then will raise a RelaxError if the command name is in the user_functions.uf_translation_table dictionary, telling the user that the user function has been renamed to the new name in the translation table. This appears to have no effect in the script UI however.
- Hack in the script UI for handling user functions that are missing due to being renamed. The script UI requires a different solution than the prompt UI. The script is executed via the runpy Python module and there appears to be no clean way of catching each command before it is executed. So instead, prior to executing the script, the contents of the script are read and the old user functions are searched for using re.search(). The old user function name has "(" appended to it in the search so that it is certain to be a user function call, and the old function name must be preceded by a space or newline character. A rough sketch of this scan is given below.
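A rough sketch of that pre-execution scan; the translation table entry is only an illustrative guess based on the relax_disp.r20_from_min_r2eff renaming mentioned further below, and the exception type is for the sketch only (relax raises a RelaxError):

    import re

    # Illustrative translation table mapping old user function names to new ones.
    uf_translation_table = {
        'relax_disp.set_grid_r20_from_min_r2eff': 'relax_disp.r20_from_min_r2eff',
    }

    def check_script_for_renames(script_text):
        """Raise an error if the script calls a user function by its old name."""
        for old, new in uf_translation_table.items():
            # A '(' is appended so that only real calls match, and the name
            # must be preceded by whitespace or start the script.
            if re.search(r'(^|\s)' + re.escape(old) + r'\(', script_text):
                # relax raises a RelaxError here; NameError keeps this sketch
                # free of relax imports.
                raise NameError("The user function '%s' has been renamed to '%s'."
                                % (old, new))

    check_script_for_renames("relax_disp.r20_from_min_r2eff()\n")   # passes silently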
Bugfixes
- MS Windows fixes to allow relax to run again. The code for eliminating the GNU readline ^[[?1034h escape code emission on Linux systems fails on Windows as the 'TERM' environmental variable does not exist in os.environ.
- Fix for the relaxation dispersion analysis Monte Carlo simulation printouts on clusters. The multi-processor code was calling the print() function from the Slave_command.run() method, however this runs on the slave processor. This has been shifted to the Results_command.run() method which runs on the master once the results have been returned via the Results_command. Now the printout of the simulation number and cluster ID will be visible when running via OpenMPI on a cluster.
- Bug fix for the lib.arg_check.is_num_tuple() function. There was a typo in two of the RelaxError objects so that non-existent errors were being raised.
- Grace string fixes for the alignment tensor parameters defined in the base parameter_object module. This is essentially for allowing relax to run using Python 3. All Grace '\' characters need to be escaped as '\\' in Python strings.
- Another Python 3 fix - the string.split function no longer exists, it is now only a string method.
- Fix for replacing reduce function. This is a Python 3 fix, where this function has been removed. This was reported as a necessity in thread: http://thread.gmane.org/gmane.science.nmr.relax.devel/6544
- Fix for bug #22411, the failure in loading a Bruker DC T1 data file. The problem was that there was an empty line with spaces. The logic for skipping empty lines could not handle lines with just whitespace. This has now been fixed.
- Fix for bug #22501, "Close all analyses" raises error in the GUI. The problem was general for all analysis types. This used to work, but as it was not tested in the test suite, a regression occurred.
- Fix for the return_r2eff_arrays() dispersion function for exponential curves. This is a partial solution for bug #22461. For the Relax_disp.test_r1rho_kjaergaard_missing_r1 system test, there are multiple relaxation times for each data set. For example, printing out the exp_type, frq, offset, point, ei, mi, oi, di, and relax_times data gives: R1rho 799777399.1 118.078 431.0 0 0 0 0 [0.0, 0.04, 0.1, 0.2]; R1rho 799777399.1 118.078 651.2 0 0 0 1 [0.0, 0.04, 0.1, 0.2, 0.4]. Instead of taking the first relaxation time of 0.0, now the maximum time is taken.
- Fix for bug #22505, the failure of the structure.create_diff_tensor_pdb user function when no structural data is present. The solution was simple - the CoM of the representation is set to the origin when no structural data is present, and the check for the presence of data removed.
- Another fix for bug #22505, the failure of the structure.create_diff_tensor_pdb user function when no structural data is present. Now the cdp.structure data structure is checked, when present, if it contains any data using its own empty() method.
- Fix for bug #22502, the problem whereby the geometric prolate diffusion representation does not align with the axis in the PDB, as reported by Martin Ballaschk. The problem was not with the main prolate tensor axis, but that the geometric object needed to be rotated 90 degrees about the z-axis to bring the object and axis into the same frame.
- Fix for the relaxation time not being extracted for CPMG experiments in the target function. Bug #22461: NS R1rho 2-site_fit_r1 has extremely high χ2 value in system test Relax_disp.test_r1rho_kjaergaard_missing_r1.
- Fix for interpolating time points, when producing xmgrace files for CPMG experiments. Bug #22461: NS R1rho 2-site_fit_r1 has extremely high χ2 value in system test Relax_disp.test_r1rho_kjaergaard_missing_r1.
- Correction for the catastrophic implementation of Monte Carlo simulations for exponential curve-fitting of R2eff values in the dispersion analysis. A wrongly implemented "else if" statement would add the simulated intensity to the original intensity. This means that all intensity values sent to minimisation would be twice as high as usual (if spectra were not replicated). This was discovered for Monte Carlo simulations of the R2eff errors in the exponential fit, and it affects all analyses using full relaxation exponential curves until now. By pure luck, it appears that the effect is that the R2eff errors are half the value they should be. A further investigation showed that, for the selected data set, this had minimal influence on the fitted parameters, because the χ2 value would simply be scaled up by a factor of 4. Bug #22554: The distribution of intensity with errors in Monte Carlo simulations are markedly more narrow than expected. Task #7822: Implement user function to estimate R2eff and associated errors for exponential curve fitting.
- Added a minfx minimum version check to the dep_check module. This is to avoid problems such as that reported at bug #22408.
Links
For reference, the announcement for this release can also be found at following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.2 series
relax 3.2.3
Description
This is a major bugfix release and the first requiring numpy ≥ 1.6 to allow for faster calculations for certain analyses. There have been improvements to the GUI user functions, the ^[[?1034h escape code is finally suppressed on Linux systems, and the structure.com user function has been added. Bugfixes include the proper handling of the R2A0 and R2B0 parameters in the relaxation dispersion models, the incorrect handling of the IT99 dispersion model tex parameter, a fatal mistake in the equations of the LM63 3-site dispersion model, the correct handling of files with multiple extensions (for example *.pdb.gz), and the freezing of the GUI on Mac OS X systems when closing the free file format window. Full details can be found below.
For this release, the Mac OS X framework used to build the universal 3-way (ppc, i386, x86_64) binaries for the stand-alone relax application has been updated. The relax application now bundles Python 2.7.8, numpy 1.8.1, scipy 0.14.0, nose 1.3.3, wxPython 2.9.3.1 osx-cocoa (classic), matplotlib 1.3.1, epydoc 3.0.1, mpi4py 1.3.1 and py2app 0.8.1. This should result in better formatted relax state and results files and give access to more advanced packages for power users to take advantage of.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.2.3
(1 July 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.2.3
Features
- Improvements for a number of GUI elements used in the user function windows.
- The ^[[?1034h escape code should now no longer be emitted by GNU readline on Linux systems.
- Created the very basic structure.com user function for calculating the centre of mass. This is to simply allow an easy interface to the pipe_control.structure.mass.pipe_centre_of_mass() function.
- Expansion of the REMARK section of the PDB file created for the internal structural object. This is visible when using the structure.write_pdb user function, as well as the many other user functions which create PDB files. The relax version as well as the file creation date are now recorded in the PDB file. This extra information should be very useful. Empty lines in the REMARK section improve the formatting.
Changes
- Added proper sectioning to the release checklist document.
- Added the upload script to the release checklist document.
- Modified the Sequence GUI input element used for the user function list arguments. The first column is now of fixed width when titles are supplied. Previously when supplying titles, the width would be tiny and no text would be visible.
- Added titles for all 3D coordinate user function arguments. This is for the Sequence GUI input element, and affects the frame_order.average_position, n_state_model.CoM and paramag.centre user functions.
- The compilation of the C modules now respects the user defined environment. This is the patch from Justin attached to bug #22145. It has been modified to include a comment and remove a double empty line.
- Bug fix for the compilation of the C modules respecting the user defined environment. The problem was that on Mac OS X (as well as other systems) these environmental variables were not defined and hence the scons commands would all fail with a KeyError and traceback. Now the keys are searched for in the os.environ dictionary before they are set.
- Fix for the wxPython link in the installation chapter of the manual. This was pointing to the scipy website for some reason.
- Changed the Python readline link for MS Windows in the installation chapter of the manual. This now points to https://pypi.python.org/pypi/pyreadline as the iPython link is broken.
- Implemented system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster. This is to catch the wrong unpacking of R2A0 and R2B0 when performing a clustered full dispersion model analysis. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
- Extended system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster for B14 full model. This is to catch the wrong unpacking of R2A0 and R2B0 when performing a clustered full dispersion model analysis. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
- Extended system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster for NS CPMG 2-site 3D full model. This is to catch the wrong unpacking of R2A0 and R2B0 when performing a clustered full dispersion model analysis. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
- Extended system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster for NS CPMG 2-site star full model. This is to catch the wrong unpacking of R2A0 and R2B0 when performing a clustered full dispersion model analysis. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
- Added synthetic data generator script which created the data to test against. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
- Split the system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster up into different tests: a setup function setup_bug_22146_unpacking_r2a_r2b_cluster(self, folder=None, model_analyse=None), and the tests test_bug_22146_unpacking_r2a_r2b_cluster_B14, test_bug_22146_unpacking_r2a_r2b_cluster_CR72, test_bug_22146_unpacking_r2a_r2b_cluster_NS_3D and test_bug_22146_unpacking_r2a_r2b_cluster_NS_STAR. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
- Modified the profiling script to get closer to the implementation in relax. An additional test function is set up to figure out how to reshape the numpy arrays in the target function. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
- Updated profiling text for CR72 model. Now it is tested for 3 fields. This is related to: Task #7807: Speed-up of dispersion models for Clustered analysis.
- Added searching for the environment variable PYTHON_INCLUDE_DIR if Python.h is not found in the standard Python library location. This can be very handy if one has a Python virtual environment for multiple users. This relates to the wiki page http://wiki.nmr-relax.com/Epd_canopy. A rough sketch of the fallback is given below.
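Roughly the kind of fallback described above (not the actual sconstruct code; the printout is illustrative):

    import os
    from os import path
    import sysconfig

    # Standard header location for the running interpreter, typically
    # PYTHON_PREFIX/include/pythonX.Y/.
    include_dir = sysconfig.get_paths()['include']

    if not path.exists(path.join(include_dir, 'Python.h')):
        # Fall back to a user supplied location, e.g. for a shared virtual
        # environment where the headers live elsewhere.
        include_dir = os.environ.get('PYTHON_INCLUDE_DIR', include_dir)

    print("Using Python headers from: %s" % include_dir)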
- The lib.compat.norm() replacement function for numpy.linalg.norm() now handles no axis argument. This is to allow the function to be used in all cases where numpy.linalg.norm() is used, while providing compatibility with the axis argument and all numpy versions.
- Fix for the scons target for compiling the relax manual when using a repository checkout copy. The method for compiling the relax manual was calling the version.revision() function, however this has been replaced a while ago by the version.repo_revision variable.
- Created two unit tests for the lib.io.file_root() function. The second of the tests demonstrate a failure of the function if multiple file extensions are present.
- Lowered the χ2 value check in the system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster_NS_STAR. This is due to the data being produced on a 32-bit machine and tested on 64-bit machines. The error was: AssertionError: 2.4659455670347743e-05 != 0.0 within 7 places. The difference is due to truncation artifacts.
- Fix for wrong path testing of Python.h. Python.h would be in PYTHON_PREFIX/include/pythonX.Y/Python.h and not in PYTHON_PREFIX/include/Python.h.
- Better handling of the control-C keyboard interrupt signal in the relax test suite. This includes two changes. The Python 2.7 and higher unittest.installHandler() function is now called, when present, to terminate all tests using the unittest module control-C handler. The second change is that the keyboard interrupt signal is caught in a try-except statement, a message printed out, and the tests terminated. This should be an improvement for all systems.
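A self-contained sketch of the two-layer approach described above, assuming the tests are discovered from the current directory (the real test suite wiring differs):

    import sys
    import unittest

    # First layer: the unittest control-C handler (Python >= 2.7), so that a
    # single interrupt cleanly terminates the currently running tests.
    if hasattr(unittest, 'installHandler'):
        unittest.installHandler()

    suite = unittest.TestLoader().discover('.')   # illustrative discovery path
    try:
        unittest.TextTestRunner().run(suite)
    except KeyboardInterrupt:
        # Second layer: catch the interrupt, print a message, and terminate.
        sys.stderr.write("\nKeyboard interrupt - terminating the tests.\n")
        sys.exit(1)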
- Adding last profiling information for model CR72.
- Added a system test for the LM63 3-site model, based on the results folder test_suite/shared_data/dispersion/Hansen/relax_results/LM63 3-site. This should pass, but it does not.
- Created an initial Relax_disp.test_lm63_3site_synthetic system test. This should have been set up a long time ago. It uses the synthetic noise-free data in the test_suite/shared_data/dispersion/lm63_3site directory which was created for a system test but never converted into one. The test still needs modifications to allow it to pass.
- Modifications for the Relax_disp.test_lm63_3site_synthetic system test. The r2eff_values.bz2 saved state file has been updated, as it was too old to use in the test. The test has also had a typo bug fixed and the data pipe name updated. The test now also checks all of the optimised values.
- Removed system test test_hansen_cpmg_data_to_lm63_3site. This was a temporary implementation and has been replaced with system test Relax_disp.test_lm63_3site_synthetic.
- Fixes for all of the relaxation dispersion system tests which were failing with the new minfx code. Due to the tuning of the log barrier constraint algorithm in minfx in the commit at http://article.gmane.org/gmane.science.mathematics.minfx.scm/25, many system tests needed to be slightly adjusted. Two of the Relax_disp.test_tp02_data_to_* system tests were also failing as the optimisation can no longer move out of the minimum at pA = 0.5 for one spin (due to the low quality grid search in the auto-analysis).
- Updated the release checklist document for the new 1.0.7 release of minfx.
- Fixes for the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test. The pA parameter is no longer tested for one spin as it moves to random values on different operating systems and 32 vs. 64-bit systems. This is because this spin experiences no exchange, both Δω and kex are zero.
- Decreased the value checking precision in the Relax_disp.test_hansen_cpmg_data_to_lm63 system test. This is to allow the test to pass on certain operating systems and 32-bit systems.
- Modified the precision of the output from the relax_disp.sherekhan_input user function. This is simply to allow the Relax_disp.test_sod1wt_t25_to_sherekhan_input system test to pass on certain 32-bit systems, as the float output to 15 decimal places is not always the same. This system test has been updated for the change.
- Modified the Relax_disp.test_sprangers_data_to_mmq_cr72 system test to pass on certain systems. This test fails on 32-bit Linux (and probably other systems as well). To fix the test, the kex values are all divided by 100 before checking them to 4 decimal places of accuracy.
- Improved how the relax installation path is determined in the status object. If the path cannot be found, the current working directory is then checked if it is where relax is installed. This is needed when importing modules outside of relax.
- Hack to permanently eliminate the ^[[?1034h escape code being produced on Linux systems. This is produced by importing the readline module. The escape code will be sent to STDOUT every time relax is executed, so it will be present in all log files. The problem is the TERM environmental variable being set to 'xterm'. The hack simply sets TERM to an empty string.
- More hacks for permanently eliminating the ^[[?1034h escape code being produced on Linux systems. This is a nasty feature of the GNU readline library. It is now also turned off in the dep_check module, suppressing ^[[?1034h in Python scripts which import only parts of relax.
- Numpy version 1.6 or higher is now required to be able to run relax. This follows from the series of messages: http://www.mail-archive.com/relax-devel@gna.org/msg06288.html, http://www.mail-archive.com/relax-devel@gna.org/msg06289.html, http://www.mail-archive.com/relax-devel@gna.org/msg06327.html, and http://www.mail-archive.com/relax-devel@gna.org/msg06335.html. If too many users complain, maybe this change can be reverted later. This minimal numpy version is needed for many of the speed-ups going into the relaxation dispersion and frame order analyses. It is required for the numpy ufunc out arguments and for the numpy.einsum() function. These will likely be used in other analyses in the future for improving the speed of relax, so it might affect users of other analyses later on.
- Updated the numpy minimal dependency in the installation chapter of the manual to version 1.6.
- Added better epydoc sectioning to the lib.dispersion.ns_cpmg_2site_expanded module docstring. This is to better separate the original scripts used to document the code evolution.
- Empty lines are now handled by the lib.structure.pdb_write.remark() function. By supplying the remark as None, empty lines can now be created in the REMARK section of a PDB file. This can be used for nicer formatting.
- Fixes for the Diffusion_tensor system tests due to the recent PDB file changes. Prior to the comparison of the generated PDB files, all REMARK PDB lines are now stripped out.
- Fixes for all system tests failing due to the expanded and improved PDB REMARK section. The system tests now remove all REMARK records prior to comparing file contents. The special strip_remarks() system test method has been created to simplify the stripping process.
- Fix for the software verification tests. The recent expansion and improvements of the REMARK records created by the internal structural object PDB writing method imported the relax version to place this information into the PDB files. However this breaks the relax library design, as shown by the verification tests. Instead the relax version information is being taken from the lib.structure.internal.object.RELAX_VERSION variable. This defaults to None, however the version module now sets this variable directly when it is imported so that it is always set to the current relax version when running relax.
- General Python 3 fixes via the 2to3 script.
- Removed the lib.compat.sorted() function which was providing Python2.3 compatibility. For a while now, relax has been unable to run on Python versions less than 2.5. Therefore there is no use for having this replacement function for Python ≤ 2.3 which was being placed into the builtins module.
- Python 3 fixes for the entire codebase using the 2to3 script. The command used was: 2to3 -j 4 -w -f xrange .
- The internal structural object add_molecule() and has_molecule() methods are now model specific. This allows for finer control of the structural object.
- Created the new lib.structure.files module. This currently contains the single find_pdb_files() function which will be used to find all *.pdb, *.pdb.gz and *.pdb.bz2 versions of the PDB file in a given path.
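A minimal glob-based sketch of what such a function might do; the signature is a guess and this is not the relax implementation:

    from glob import glob
    from os import path

    def find_pdb_files(directory, file_root='*'):
        """Return all *.pdb, *.pdb.gz and *.pdb.bz2 files matching a file root."""
        files = []
        for ext in ('.pdb', '.pdb.gz', '.pdb.bz2'):
            files.extend(glob(path.join(directory, file_root + ext)))
        return sorted(files)

    print(find_pdb_files('.'))   # all PDB file variants in the current directory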
- Fix for the breakage of the relax help system. This was reported at http://thread.gmane.org/gmane.science.nmr.relax.devel/6481. The problem was that the TERM environmental variable was turned off to avoid the GNU readline library on Linux systems emitting the ^[[?1034h escape code. See the message at http://thread.gmane.org/gmane.science.nmr.relax.devel/6481/focus=6489 for more details. However the Python help system obviously requires this environmental variable. Now the TERM variable is only reset if it is set to 'xterm', and it is reset to 'linux' instead of the blank string "". This does not affect any relax releases. A rough sketch of this logic is given below.
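A minimal sketch of this TERM logic, assuming the escape code comes from importing readline as described above:

    import os

    # Only reset TERM when it is set to 'xterm', and reset it to 'linux'
    # rather than an empty string so that the Python help() pager still works.
    if os.environ.get('TERM', '') == 'xterm':
        os.environ['TERM'] = 'linux'

    # Importing readline after this point should no longer emit the ^[[?1034h
    # escape code on Linux systems.
    try:
        import readline
    except ImportError:
        readline = None   # e.g. MS Windows, where the problem does not exist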
Bugfixes
- Fix for the wrong unpacking of R2A0 and R2B0 in model CR72 full. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion model
- Fix for the wrong unpacking of R2A0 and R2B0 in model B14 full. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
- Fix for the wrong unpacking of R2A0 and R2B0 in model NS CPMG 2-site 3D full model. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
- Fix for the wrong unpacking of R2A0 and R2B0 in model NS CPMG 2-site star full model. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
- Bug fix for the lib.io.file_root() function for multiple file extensions. The function will now strip off all file extensions.
- Fix for bug #22210, the failure of the LM63 3-site dispersion model. The problem is described in the bug report - the multiplication in the tanh() function is a mistake, it must be a division.
- Fix for the Library.test_library_independence verification test on MS Windows. The tearDown() method now uses the error handling test_suite.clean_up.deletion() function to remove the copied version of the relax library.
- Fixed the unpacking of the tex parameter for the global analysis in the IT99 model. Bug #22220: Unpacking of parameters for global analysis in model IT99, is performed wrong.
- Fix for bug #22257, the freezing of the GUI after using the free file format window on Mac OS X. This is a recurring problem in Mac OS X as it cannot be tested in the relax test suite. The problem is with wxPython. The modal dialogs, such as the free file format window, cannot be destroyed on Mac OS X using wx.Dialog.Destroy() - this kills wxPython and hence kills relax. The problem does not exist on any other operating system. To fix this, all wx.Dialog.Destroy() calls have been replaced with wx.Dialog.Close().
Links
For reference, the announcement for this release can also be found at following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.2.2
Description
This is a major feature and bugfix release. It includes a large speed up of all analytic relaxation dispersion models, the correct handling of edge case failures in all models of the dispersion analysis, a number of fixes for the handling of list-type data in the GUI user function windows including the fatal GUI crashes on Mac OS X systems, and many other bug fixes. Please see below for a full list of features, changes and bugfixes. All users of the dispersion analysis, the relax GUI, or Mac OS X systems are recommended to upgrade to this newest version.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.2.2
(5 June 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.2.2
Features
- Large speedups of all analytical relaxation dispersion models by converting the R2eff calculations and value error checking from single values to numpy arrays.
- Edge cases where function failures occur are now properly handled for all analytical relaxation dispersion models.
- Completion of the frame_order.pdb_model user function backend for the frame order PDB representation.
- relax will now detect when files with *.gz or *.bz2 file extensions are being created and automatically gzip or bzip compress the file.
Changes
- Small speed up for all the isotropic cone and pseudo-elliptic cone frame order models. The vector length calculation for the numeric PCS integration has been simplified and shifted outside of a loop to take advantage of the speed of numpy.
- All three file arguments for the pymol.frame_order user function are now optional.
- Updated all the API documentation links in the dispersion chapter of the manual. These were pointing to http://www.nmr-relax.com/api/3.1/ whereas they should now point to http://www.nmr-relax.com/api/3.2/.
- Modified a printout in the 'devel_scripts/code_validator' script. This is to clarify that the first method of a class does not need two preceding empty lines.
- Shifted some functions from lib.structure.geometric into their own modules. The angles_regular() and angles_uniform() functions are now in the lib.structure.angles module, and the get_proton_names() function is now in lib.structure.conversion.
- Deletion of the pipe_control.structure.main.create_cone_pdb() function. This is only used in the frame order analysis, but has been made redundant by the lib.structure.represent.cone.cone() function.
- Completed the frame_order.pdb_model user function backend for the frame order PDB representation. Most of this backend, including the axes and cone representations, had been broken for quite a while and were being skipped with an early return statement. This has now been made functional and a few fixes have been made. For the 'rotor' and 'free rotor' model, the neg_cone argument is now ignored so that only one model is produced in the final frame order PDB representation file. For all other models, the rotor representation is no longer centred to the point on axis closest to the centre of mass, as the pivot is unambiguously defined. The rotor representation has also been made larger in these models so that it is outside of the cone, and the propeller blades are now staggered.
- Modified py_type from "list" to "float_array" in uf_object type in user function dx.map. Bug #22035 The dx.map user function is broken in the GUI.
- Added py_type "list_val_or_list_of_list_val" to be handled in GUI uf_objects. Bug #22035 The dx.map user function is broken in the GUI.
- Modified the frame order constraints so that coneθx ≤ coneθy. The linear_constraints() function docstring has been updated to include this constraint.
- Set dim=4 when setting chi surface level in user function dx.map.
- Fix for the n_state_model.cone_pdb user function for the recent internal structural object changes. The cone arguments should now be called cone_obj.
- Renamed the relax_disp.set_grid_r20_from_min_r2eff user function to relax_disp.r20_from_min_r2eff. This follows from the proposal at http://thread.gmane.org/gmane.science.nmr.relax.devel/5957.
- Modification to the Sequence_2D GUI element used for some user function windows. The selection_win_show() method has been redefined, as the parent method from the Sequence element is specific for the 1D sequence module. The open_dialog() method has also been modified to use the new selection_win_show(), as well as the parent Sequence class selection_win_data() method.
- Created the User_functions.test_structure_rotate GUI tests. This is to catch bug #22100, the rotation argument for the structure.rotate user function cannot be changed in the GUI, as an AttributeError is raised.
- Moved py_type "list_val_or_list_of_list_val" to 2D sequence types.
- Added dim dimensions to match the {x, y, z} positions for GUI input in user function dx.map.
- Modified the User_functions.test_structure_rotate GUI test to change and check the rotation matrix.
- Some more fixes for the User_functions.test_structure_rotate GUI test. The open_dialog() method cannot be used, as it deletes the window at the end. Instead the selection_win_show() and selection_win_data() method combination is used.
- Expanded the User_functions.test_structure_rotate GUI test. This is to more extensively check the 'float_matrix' user function argument type in the GUI.
- Modified the dim dimensions to (None, 3) to allow the user to change number of points in the GUI. This is for the user function dx.map.
- Simplified the User_functions GUI tests. The exec_uf_pipe_create() method has been created to simplify the data pipe creation in the tests.
- Expanded the User_functions.test_structure_rotate GUI test. The rotation matrix argument checks for the Sequence_2D GUI element have been expanded to check that setting nothing (blank element) returns nothing (None). The other checks have also been slightly modified.
- Expanded the User_functions.test_structure_rotate GUI test to catch more problems. Now the rotation matrix value in the user function window is set to a series of invalid values to test if the Sequence_2D GUI element will handle the rubbish input. This is to mimic user errors.
- Created the is_list() and is_list_of_lists() functions for the lib.check_types module.
- Clean up of the User_functions.test_structure_rotate GUI test. The invalid value check is simpler and the Sequence_2D GUI object return value is now checked to be None.
- Expanded the User_functions.test_structure_rotate GUI test once more. This time the setting of invalid values in the Sequence_2D element itself is now checked. For example for the rotation matrix of the structure.rotate user function, if a matrix element is set to a string, a NameError is raised.
- Created the User_functions.test_dx_map GUI test. This extensively checks the 'point' argument for the dx.map user function GUI window. This is to catch bug #22102, the point argument of the dx.map user function being incorrect in the GUI.
- Modified the User_functions.test_dx_map GUI test to catch another problem with the Sequence_2D element.
- Fixes for the frame order PDB presentation in the frame_order.pdb_model user function backend.
- Expanded the User_functions.test_dx_map GUI test once again. The new test is to set 2 valid points in the wizard, open and close the Sequence_2D window (twice), and check that the points come back.
- Increased the width of the first column of the Sequence_2D GUI element for variable lists. This is so the column title "Number" will fit.
- Added list titles for the dx.map user function point argument. This is so that the Sequence_2D GUI element will have column titles of 'X coordinate', 'Y coordinate', and 'Z coordinate'.
- The self.variable_length flag is now used throughout the Sequence GUI element.
- The self.variable_length flag is used in one more spot in the Sequence_2D GUI element.
- Created the User_functions.test_structure_add_atom GUI test. This is used to check the operation of the Sequence GUI element via the 'pos' argument of the structure.add_atom user function. This is a list fixed to 3 elements.
- Titles are now handled and set in the Sequence GUI element. The titles will replace the numbering of 1 onwards in the first column of the GUI element.
- Small fix for switched indices in the new User_functions.test_structure_add_atom GUI test.
- Modified the 'pos' argument of the structure.add_atom user function. The argument is now a list of fixed length of 3, and it has the titles 'X coordinate', 'Y coordinate', and 'Z coordinate' which are shown in the GUI.
- Created the User_functions.test_spectrum_read_intensities GUI test to catch bug #22105. The problem is that a single file name is split up into many files when the file selection button is clicked, one for each character of the file name.
- Fix for the User_functions.test_spectrum_read_intensities GUI test. A valid value was being checked as invalid.
- Shifted all wildcards used in GUI file selection dialogs into the new user_functions.wildcard module. These have now all been standardised, and expanded to include more capitalisation combinations and to include more *.* options.
- Created a file selection wildcard for use in the GUI for selecting peak lists. This is used in the four user functions which read peak lists.
- Changed all *.* GUI file selection wildcards to *.
- Huge speedup for the CR72 model. Task #7793 Speedup of dispersion models. The system test Relax_disp.test_cpmg_synthetic_cr72_full_noise_cluster changes from 7 seconds to 4.5 seconds. This is achieved by not checking single values in the R2eff array for math domain errors, but instead calculating all steps and performing one single check for finite values at the end. If just one non-finite value is found, the whole array is returned with a large penalty of 1e100. This allows all calculations to use the fastest numpy array operations (see the sketch below).
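The general pattern behind these speedups, sketched with numpy; the back-calculation below is a placeholder and not the CR72 or any other real dispersion equation:

    import numpy as np

    def back_calc(params, cpmg_frqs):
        """Placeholder back-calculation standing in for a real dispersion model."""
        r20, rex, kex = params
        # Not a real dispersion equation - the sqrt() is only here so that
        # unphysical parameters produce non-finite (nan) values.
        return r20 + rex * np.sqrt(kex - cpmg_frqs) / kex

    def chi2_with_penalty(params, cpmg_frqs, values, errors):
        # Calculate the whole array in one go, silencing the numpy warnings
        # that per-point error checking used to deal with.
        with np.errstate(invalid='ignore', divide='ignore'):
            r2eff = back_calc(params, cpmg_frqs)
        # One single check for finite values over the whole array.
        if not np.isfinite(r2eff).all():
            return 1e100
        return np.sum(((values - r2eff) / errors)**2)

    cpmg_frqs = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
    values = np.array([14.0, 13.5, 13.0, 12.5, 12.2])
    errors = np.full(5, 0.5)
    print(chi2_with_penalty([10.0, 2.0, 2000.0], cpmg_frqs, values, errors))  # finite
    print(chi2_with_penalty([10.0, 2.0, 100.0], cpmg_frqs, values, errors))   # 1e+100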
- Fix for system test test_cpmg_synthetic_dx_map_points. Task #7793 Speedup of dispersion models.
- Critical fixes for the system test Relax_disp.test_hansen_cpmg_data_missing_auto_analysis. Task #7793 Speedup of dispersion models. It is suspected that when relax touched boundary values which caused math domain errors, the error catching created local minima or interfered with the simplex search algorithm.
- Speedup of the TSMFK01 model. Task #7793 Speedup of dispersion models. This is achieved by not checking single values in the R2eff array for math domain errors, but instead calculating all steps and performing one single check for finite values at the end. If just one non-finite value is found, the whole array is returned with a large penalty of 1e100. This allows all calculations to use the fastest numpy array operations.
- Huge speedup of the B14 model. Task #7793 Speedup of dispersion models. Time changes for the system tests: test_baldwin_synthetic 2.626s -> 1.990s, test_baldwin_synthetic_full 18.326s -> 13.742s. This is achieved by the same approach: a single finite-value check over the whole R2eff numpy array, returning a large 1e100 penalty if any value is non-finite.
- Speedup of the TP02 model. Task #7793 Speedup of dispersion models. The changes for the system tests are: test_curve_type_r1rho_fixed_time 0.057s -> 0.049s, test_tp02_data_to_ns_r1rho_2site 10.539s -> 10.456s, test_tp02_data_to_tp02 8.608s -> 5.727s. This is achieved by the same approach: a single finite-value check over the whole R1ρ numpy array, returning a large 1e100 penalty if any value is non-finite.
- Huge speedup for the TAP03 model. Task #7793 Speedup of dispersion models. The change for the system test is: test_tp02_data_to_tap03 13.869s -> 7.263s. This is achieved by the same approach: a single finite-value check over the whole R1ρ numpy array, returning a large 1e100 penalty if any value is non-finite.
- Speedup of model MP05. Task #7793 Speedup of dispersion models. The change in system test is: test_tp02_data_to_mp05 10.750s -> 6.644s.
- Speedup of model MMQ CR72. Task #7793 Speedup of dispersion models. Change in system test: test_sprangers_data_to_mmq_CR72 9.892s -> 4.121s.
- Speedup for model M61. Task #7793 Speedup of dispersion models. Change in speed is: test_m61_data_to_m61 6.692s -> 3.480s.
- Speedup of model LM63. Task #7793 Speedup of dispersion models. Change in system test was: test_hansen_cpmg_data_auto_analysis 13.731s -> 9.971s, test_hansen_cpmg_data_auto_analysis_r2eff 13.370s -> 9.510s, test_hansen_cpmg_data_to_lm63 3.254s -> 2.080s.
- Speedup of model IT99. Task #7793 Speedup of dispersion models. Change in speed is: test_hansen_cpmg_data_auto_analysis 9.74s -> 8.330s, test_hansen_cpmg_data_to_it99 4.928s -> 3.138s.
- Speedup of model DPL94. Task #7793 Speedup of dispersion models. Change in speed is: test_dpl94_data_to_dpl94 19.412s -> 4.427s.
- Math-domain catching for model B14. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Math-domain catching for model CR72. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Math-domain catching for model NS CPMG 2-site expanded. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Math-domain catching for model CR72. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. Skipping the test when num_points > 0 is a bad implementation; if such a case arises, it is best to catch the wrong input with a check before running the calculations.
- Math-domain catching for model TSMFK01. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Math-domain catching for model TP02. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Math-domain catching for model TAP03. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Math-domain catching for model DPL94. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Math-domain catching for model TAP03. Another check for division by zero has been inserted.
- Math-domain catching for model MP05. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Math-domain catching for model IT99. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Removed the per time point updating of the class object "back_calc" for model LM63. Task #7793 Speedup of dispersion models. To keep the code clean, "back_calc" is instead updated in one go in the relax_disp target function.
- Math-domain catching for model M61. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Math-domain catching for model MMQ CR72. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Align the math-domain catching for model CR72 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. The catching of errors has to be more careful.
- Align the math-domain catching for model DPL94 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. The catching of errors has to be more careful.
- Align the math-domain catching for model IT99 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. The catching of errors has to be more careful.
- Align the math-domain catching for model LM63 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. The catching of errors has to be more careful.
- Align the math-domain catching for model M61 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. The catching of errors has to be more careful.
- Align the math-domain catching for model MP05 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. The catching of errors has to be more careful.
- Align the math-domain catching for model TAP03 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. The catching of errors has to be more careful.
- Align the math-domain catching for model TP02 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. The catching of errors has to be more careful.
- Align the math-domain catching for model TSMFK01 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. The catching of errors has to be more careful.
- Removing unnecessary math-domain catching for model IT99. Task #7793 Speedup of dispersion models. The denominator is always positive.
- Align the math-domain catching for model NS CPMG 2-site expanded with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. The catching of errors has to be more careful.
- Modified unit tests demonstrating edge case 'no Rex' failures of the model NS CPMG 2-site expanded. This is to align with the current return of data in the disp_speed branch. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0.
- Added 7 unit tests demonstrating edge case 'no Rex' failures of the model DPL94. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0.
- Unit test _lib/test_ns_cpmg_2site_expanded.py copied to _/test_lm63.py. They are both of CPMG type.
- Added 7 unit tests demonstrating edge case 'no Rex' failures of the model LM63. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0.
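The style of these edge case tests can be illustrated with a self-contained, hedged sketch. The lm63_like() model and all values below are invented for this example and do not correspond to the actual relax test code under test_suite/unit_tests/_lib/_dispersion/.

```python
# Hypothetical 'no Rex' edge case test in the spirit of the entries above.
from unittest import TestCase, main
from numpy import allclose, array, ones, tanh

def lm63_like(r20, phi_ex, kex, cpmg_frqs):
    """Simplified LM63-style dispersion: R2eff = R20 + phi_ex/kex * (1 - 4*nu/kex * tanh(kex/(4*nu)))."""
    return r20 + phi_ex / kex * (1.0 - 4.0 * cpmg_frqs / kex * tanh(kex / (4.0 * cpmg_frqs)))

class Test_no_rex(TestCase):
    def test_no_rex_when_phi_ex_is_zero(self):
        """With no exchange (phi_ex = 0.0), R2eff must collapse to the flat R20 baseline."""
        r20 = 2.0
        cpmg_frqs = array([50.0, 100.0, 200.0, 400.0])
        back_calc = lm63_like(r20=r20, phi_ex=0.0, kex=1000.0, cpmg_frqs=cpmg_frqs)
        self.assertTrue(allclose(back_calc, r20 * ones(len(cpmg_frqs))))

if __name__ == '__main__':
    main()
```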
- Unit test _lib/_dispersion/test_ns_cpmg_2site_expanded.py copied to _lib/_dispersion/b14.py. They are both of CPMG type, and can be re-used.
- Added 7 unit tests demonstrating edge case 'no Rex' failures of the model B14. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0.
- Removed unnecessary math domain checking in model B14. These checks were slowing down the code. There is now protection for the edge cases and a final check before returning values, which should be sufficient.
- Unit test _lib/_dispersion/test_b14.py copied to _lib/_dispersion/test_CR72.py. They are both of CPMG type, and can be re-used.
- Copied unit test _lib/_dispersion/* to be reused for other models.
- Added 8 unit tests demonstrating edge case 'no Rex' failures of the model CR72. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e5.
- Added the 8th unit test demonstrating edge case 'no Rex' failures of the model B14. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e5.
- Added the 8th unit test demonstrating edge case 'no Rex' failures of the model LM63. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
- Small fix for 8 unit tests demonstrating edge case 'no Rex' failures of the model 'ns cpmg_2site_expanded'. The comparison of R2eff is now divided into a special case for kex having large values.
- Deleted unit test case for lm63 3site.
- Added 8 unit tests demonstrating edge case 'no Rex' failures of the model M61. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
- Added the 8th unit test demonstrating edge case 'no Rex' failures of the model DPL94. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. These tests cover all parameter value combinations which result in no exchange.
- Added 8 unit tests demonstrating edge case 'no Rex' failures of the model M61b. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
- Math-domain catching for model M61b. Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. Such errors can be found via the --numpy-raise option when running the system tests. To keep the code clean, the class object "back_calc" is no longer updated per time point but is instead updated in one go in the relax_disp target function.
- Modified script to be able to run system test Relax_disp.xxx_test_m61b_data_to_m61b.
- Added 8 unit tests demonstrating edge case 'no Rex' failures of the model IT99. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e19.
- Added 9 unit tests demonstrating edge case 'no Rex' failures of the model MMQ CR72. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e5; ΔωH = 0.0.
- Added 8 unit tests demonstrating edge case 'no Rex' failures of the model MP05. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
- Added 8 unit tests demonstrating edge case 'no Rex' failures of the model TAP03. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
- Added 8 unit tests demonstrating edge case 'no Rex' failures of the model TP02. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
- Added 7 unit tests demonstrating edge case 'no Rex' failures of the model TSMFK01. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0.
- Copied unit test test_b14.py to test_ns_cpmg_2site_3d.py.
- Added 8 unit tests demonstrating edge case 'no Rex' failures of the model NS CPMG 2-site 3D. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e7.
- Modified the unit tests demonstrating edge case 'no Rex' failures of the model TP02. The catching of errors for off-resonance R1ρ models was implemented incorrectly, as pointed out in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5938. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e5.
- Critical fix for the math domain catching of model TP02. The catching of errors for off-resonance R1ρ models was implemented incorrectly, as pointed out in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5938.
- Modified the unit tests demonstrating edge case 'no Rex' failures of the model DPL94. This was pointed out in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5938.
- Modified the unit tests demonstrating edge case 'no Rex' failures of the model MP05. The catching of errors for off-resonance R1ρ models was implemented incorrectly, as pointed out in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5938. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur.
- Critical fix for the math domain catching of model MP05. The catching of errors for off-resonance R1ρ models was implemented incorrectly, as pointed out in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5938.
- Modified the unit tests demonstrating edge case 'no Rex' failures of the model TAP03. The catching of errors for off-resonance R1ρ models was implemented incorrectly, as pointed out in the posts http://article.gmane.org/gmane.science.nmr.relax.devel/5938 and http://article.gmane.org/gmane.science.nmr.relax.devel/5944. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models.
- Critical fix for the math domain catching of model TAP03. The catching of errors for off-resonance R1ρ models was implemented incorrectly, as pointed out in the posts http://article.gmane.org/gmane.science.nmr.relax.devel/5938 and http://article.gmane.org/gmane.science.nmr.relax.devel/5944.
- Modified the unit tests demonstrating edge case 'no Rex' failures of the model MMQ CR72. This was pointed out in the posts http://article.gmane.org/gmane.science.nmr.relax.devel/5940 and http://article.gmane.org/gmane.science.nmr.relax.devel/5946. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models.
- Small fix for the math domain catching of model MMQ CR72. This was pointed out in the posts http://article.gmane.org/gmane.science.nmr.relax.devel/5940 and http://article.gmane.org/gmane.science.nmr.relax.devel/5946.
- Various spacing fixes in the unit test files in _lib/_dispersion. This is in preparation for merging the disp_speed branch back into trunk. This follows the post http://article.gmane.org/gmane.science.nmr.relax.devel/5948. The issues were found using the code validator script './devel_scripts/code_validator'.
- Modified the unit tests which have different r20a and r20b values to check that the correct one is returned. This is in preparation for merging the disp_speed branch back into trunk. This follows the post http://article.gmane.org/gmane.science.nmr.relax.devel/5948.
- Modified a unit test to use the standard population of pA = 0.95 and a correct conversion of Δω from ppm to rad/s. This is related to Task #7793 Speedup of dispersion models.
- Small fix in parameter calculation in unit test _dispersion/test_ns_cpmg_2site_expanded.
- Increased the maximum kex to the value 1e18 for the unit test of lib/ns_cpmg_2site_expanded.py.
- Increased the maximum kex to the value 1e20 for the unit test of lib/ns_cpmg_2site_3d.py.
- Fix for looking for negative values when all values were already converted to positive in the matrix in ns_cpmg_2site_3d.py. This is to implement the catching of math domain errors before they occur. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models.
- Removed nested looping of returning back_calc in lib/ns_cpmg_2site_3d.
- Removed the 8th unit test for model NS CPMG 2-site 3D. This was the catching of errors when kex = 1e20. The model cannot handle this situation, and we need to let it fail.
- Removed the 8th unit test for model NS CPMG 2-site expanded. This was the catching of errors when kex has high values. The model cannot handle this situation, and we need to let it fail.
- Fix for system test differences relative to trunk. These were found with the command: diff -bur disp_speed/test_suite/ relax_trunk/test_suite/ | grep -v "Binary files" > diff.txt.
- Converting back to having back_calc as a function argument to model B14. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model CR72. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model DPL94. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model IT99. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model LM63. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model M61. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model M61b. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model MMQ CR72. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model MP05. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model NS CPMG 2-site expanded. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model TAP03. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model TP02. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Converting back to having back_calc as a function argument to model TSMFK01. This is to clean up the API, as there can be no partial measures/implementations in the relax trunk. The problem is that many numerical models cannot be optimised further, since they evolve the spin magnetisation in a matrix, and that spin evolution cannot be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
- Created the lib.compat.norm() compatibility function for numpy.linalg.norm(). For numpy 1.8 and higher, the numpy.linalg.norm() function has introduced the 'axis' argument. This is an incredibly fast way of determining the norm of an array of vectors. This is used by the frame order analysis. However for older numpy versions, this causes the frame order analysis, and many corresponding system and GUI tests to fail. Therefore this new lib.compat.norm() function has been designed to default to numpy.linalg.norm() if the axis argument is supported, or to switch to the much slower numpy.apply_along_axis(numpy.linalg.norm, axis, x) call which is supported by older numpy.
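The compatibility idea can be sketched as follows, assuming a simple probe for axis support; the real lib.compat module differs in detail.

```python
# Sketch of a norm() compatibility wrapper (illustrative; the real code is in lib/compat.py).
from numpy import apply_along_axis
from numpy.linalg import norm as numpy_norm

try:
    # numpy >= 1.8 accepts the axis argument - this is the fast path.
    numpy_norm([[1.0, 0.0]], axis=1)
    AXIS_SUPPORTED = True
except TypeError:
    AXIS_SUPPORTED = False

def norm(x, ord=None, axis=None):
    """Vector norms, falling back to apply_along_axis() on numpy older than 1.8."""
    if axis is None:
        return numpy_norm(x, ord=ord)
    if AXIS_SUPPORTED:
        return numpy_norm(x, ord=ord, axis=axis)
    # Much slower fallback for old numpy, as described in the entry above.
    return apply_along_axis(numpy_norm, axis, x)
```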
- The frame order analysis now uses the lib.compat.norm() replacement for numpy.linalg.norm(). This allows the analysis to run on numpy versions older than 1.8, which lack the axis argument, though these older versions will result in slower optimisation of the frame order models.
- The built in Python range() function is no longer being replaced by xrange(). Replacing builtin.range() with builtin.xrange() on Python 2 was causing problems with Python site-packages which were not Python 3 compliant, including old numpy versions. The original overwriting of range() with xrange() was for both speed and memory conservation. However, profiling of the system tests showed that the total time for all tests did not change significantly. This change may cause problems in certain places in relax on memory constrained computer systems, so it may need to be reverted in the future.
- The lib.io.open_write_file() function now automatically determines the compression type. This is used by many user functions which create files. The end result for a user is that if they supply a '.gz' or '.bz2' file extension, a gzipped or bzipped file will be produced.
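The behaviour can be pictured with a minimal, extension-based sketch; the real lib.io.open_write_file() function is more general than this standalone version.

```python
# Extension-based compression selection (standalone sketch, Python 3).
import bz2
import gzip

def open_write_file(file_name):
    """Open a text file for writing, picking the compression from the extension."""
    if file_name.endswith('.gz'):
        return gzip.open(file_name, 'wt')    # gzipped output
    if file_name.endswith('.bz2'):
        return bz2.open(file_name, 'wt')     # bzip2 output
    return open(file_name, 'w')              # plain text
```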
- Removal of the docstring text wrapping in the lib.io module.
- Expanded and improved the docstring for the relax_disp.r20_from_min_r2eff user function. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/5957. The documentation now covers a number of the uses for this user function. The text has also been lightly edited. To fit all the text into the GUI user function window, the size of the dialog and the text height settings have been changed.
- Large improvements for the detection of cross-compilation on Mac OS X systems. The tests for different architecture support now follows the ideas discussed in the post http://thread.gmane.org/gmane.science.nmr.relax.devel/5785/focus=5820. In summary, for each architecture a simple C file is created, compiled with 'gcc -arch xyz', and the resultant binary file tested. To support 64-bit compilation on 32-bit systems, all previously successful architectures are also included in the gcc command. The change allows the 'ppc64' architecture to be reintroduced.
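The probing can be sketched as follows. This standalone Python version is only an assumption of the approach (a trivial C file, compilation with 'gcc -arch', then executing the result); it is not the sconstruct code itself and assumes a Mac OS X system with gcc available.

```python
# Hedged sketch of Mac OS X architecture probing via trial compilation.
import os
import subprocess
import tempfile

def _runs(cmd):
    """Return True if the command can be executed and exits with status 0."""
    try:
        return subprocess.call(cmd) == 0
    except OSError:
        return False

def archs_supported(candidates=('i386', 'x86_64', 'ppc', 'ppc64')):
    """Mac OS X architectures for which a trivial C file both compiles and runs."""
    supported = []
    build_dir = tempfile.mkdtemp()
    c_file = os.path.join(build_dir, 'test.c')
    with open(c_file, 'w') as f:
        f.write('int main(void) { return 0; }\n')
    for arch in candidates:
        binary = os.path.join(build_dir, 'test_' + arch)
        # All previously successful architectures are included so that 64-bit
        # fat binaries can still be produced on 32-bit systems.
        flags = []
        for a in supported + [arch]:
            flags += ['-arch', a]
        if _runs(['gcc'] + flags + [c_file, '-o', binary]) and _runs([binary]):
            supported.append(arch)
    return supported
```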
- Fixed the docstring for the det_arch() method of the sconstruct script. This is for the true cross-compilation detection on Mac OS X.
Bugfixes
- Fix for the lib.geometry.lines.closest_point_ax() function for when the two points are the same. If the point on the line and point in the 3D space are the same, then this function used to return an array of NaN values. This situation is now caught and the point in the 3D space is returned.
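As an illustration of the degenerate case, here is a hedged sketch; the function name and formulation below are simplified stand-ins for lib.geometry.lines.closest_point_ax().

```python
# Hedged stand-in for the closest point calculation with the degenerate-case guard.
from numpy import allclose, array, dot
from numpy.linalg import norm

def closest_point_on_axis(line_pt, axis, point):
    """Point on the line through line_pt along 'axis' that is closest to 'point'."""
    line_pt, axis, point = array(line_pt, float), array(axis, float), array(point, float)
    # Degenerate case: the query point sits exactly on line_pt.  Normalising the
    # zero-length connecting vector below would give 0/0 = NaN, so return the point itself.
    if allclose(point, line_pt):
        return point
    unit_axis = axis / norm(axis)
    hypo = point - line_pt
    hypo_len = norm(hypo)
    unit_hypo = hypo / hypo_len
    # Project the connecting vector onto the axis direction.
    return line_pt + dot(unit_hypo, unit_axis) * hypo_len * unit_axis
```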
- Fix for the heterogen section of the internal structural object write_pdb() method. A number of checks were performed to see if the PDB heterogen records were the same for all structures, but this is meaningless as the structures can of course be different.
- Fixes for the lib.structure.represent.cone module. The function arguments named 'cone' have been renamed to 'cone_obj' so that they do not clash with the cone() function in the module namespace.
- Fix for the lib.structure.geometric.generate_vector_residues() function. The atom numbers are no longer read from the internal structural object, as these are not reliable. If another geometric representation exists in the object, then the atom numbers could be None. Or loading structures from multiple PDB files can cause the numbering to be repeated or out of order.
- Fix for the frame_order.pdb_model user function for the rotor models. The rotor axis is no longer defined by spherical angles and therefore needs to be recreated using the create_rotor_axis_alpha() rather than create_rotor_axis_spherical() function from lib.frame_order.rotor_axis.
- Partial fix for bug #22100, the rotation argument for the structure.rotate user function cannot be changed in the GUI, as an AttributeError is raised. The append_row() method call has been replaced by the correct add_element() call.
- Bug fix for the Sequence_2D GUI element. This is used for the user function windows in the GUI for setting lists of lists or matrices. The GUI element GetValue() method will now return None if nothing is set. This prevents a list of lists of None being added to the main user function window.
- Fixes for the Sequence and Sequence_2D GUI elements for handling invalid input data. These elements used by the user function windows previously raised all sorts of errors if the data was not what they expected (lists or lists of lists respectively). These situations are now caught and the input data is ignored, so blank Sequence and Sequence_2D elements are presented to the user.
- Bug fix for the Sequence_2D GUI element. This is used for handling list of lists user function arguments in the user function GUI windows. The setting of invalid values directly in the Sequence_2D GUI element is now detected. These values are now replaced with None.
- Fix for bug #22102, the point argument of the dx.map user function failing in the GUI. The Sequence_2D GUI element used for all list of lists arguments in the user function GUI windows now correctly handles variable length lists. The first column which shows a count of the elements is now properly taken into account in the SetValue(), GetValue() and add_item() methods, via a new self.offset variable. The self.variable_length variable has also been fixed so it is not overwritten by the parent Sequence GUI element.
- Bug fixes for the Sequence GUI element used for lists in the user function windows. Invalid values input into the Sequence GUI window are now ignored rather than raising different types of error. And invalid input lists for fixed dimension arguments are also ignored. This allows the User_functions.test_structure_add_atom GUI test to pass.
- Bug fix for the lib.arg_check.is_float_object() function. The dim argument can sometimes be an integer rather than a tuple, but this was not handled by the function. Now integer dim arguments are pre-converted to lists before performing all the checks.
- Fix for bug #22105, the failure of the spectrum.read_intensities GUI user function whereby a file name is turned into a list of characters. A few changes were made to allow the Selector_file_multiple GUI element to operate correctly.
- Critical fix for the math domain catching of model LM63. This was discovered with the added 7 unit tests demonstrating edge case 'no Rex' failures. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur.
- Critical fix for the math domain catching of model B14. This was discovered with the added 7 unit tests demonstrating edge case 'no Rex' failures. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur.
- Critical fix for the math domain catching of model CR72. This was discovered with the added 8 unit tests demonstrating edge case 'no Rex' failures. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. When kex is large, e.g. kex = 1e5, the values of etapos = eta_scale * sqrt(Psi + sqrt_psi2_zeta2) / cpmg_frqs will exceed possible numerical representation. The catching of these occurrences needed to be re-written.
- Critical fix for the math domain catching of model B14. This was discovered with the added 8 unit tests demonstrating edge case 'no Rex' failures. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement the catching of math domain errors before they occur. When kex is large, e.g. kex = 1e5, "nan" values were produced, which were replaced with 1e100. The catching of these occurrences needed to be re-written.
- Critical fix for the math domain catching of model CR72. Removed the test for kex ≥ 1e5. This catching should rather be performed on the math functions instead.
- Critical fix for the math domain catching of model B14. Removed the test for kex ≥ 1e5. This catching should rather be performed on the math functions instead. In this case it is sinh(), which must not be evaluated for arguments above about 710.
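A generic illustration of this kind of guard (not the actual B14 code) could look like the following.

```python
# Generic overflow guard for sinh(), which overflows to inf for arguments above ~710.
from numpy import array, errstate, isfinite, sinh

def safe_sinh(x):
    """Elementwise sinh() with overflowing points replaced by the 1e100 penalty."""
    x = array(x, float)
    with errstate(over='ignore'):
        result = sinh(x)
    result[~isfinite(result)] = 1e100
    return result
```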
- Critical fix for the math domain catching of model MMQ CR72. This was discovered with the added 9 unit tests demonstrating edge case 'no Rex' failures. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur.
- Critical fix for the math domain catching of model IT99. This was discovered with the added 8 unit tests demonstrating edge case 'no Rex' failures. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. The catching of these occurrences needed to be re-written.
- Critical fix for the math domain catching of model TAP03. This was discovered with the added 8 unit tests demonstrating edge case 'no Rex' failures. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur.
- Critical fix for the math domain catching of model TP02. This was discovered with the added 8 unit tests demonstrating edge case 'no Rex' failures. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur.
- Critical fix for the math domain catching of model TSMFK01. This was discovered with the added 8 unit tests demonstrating edge case 'no Rex' failures. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. The catching of these occurrences needed to be re-written.
- Critical fix for the math domain catching of model NS CPMG 2-site 3D. This was discovered with the added 8 unit tests demonstrating edge case 'no Rex' failures. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur.
- Fix for bug #22112, the GUI failure when setting list values via the sequence windows launched from user function windows on Mac OS X. The problem was twofold. Firstly, the Sequence and Sequence_2D windows derived from wx.Dialog should not be terminated via the Destroy() method, as wx.Dialog.Destroy() appears to be horribly broken on Macs.
- Another fix for bug #22112, the GUI failure when setting list values via the sequence windows launched from user function windows on Mac OS X. This change is for the multiple file selection window and matches the previous change by replacing the Mac OS X fatal wx.Dialog.Destroy() call with wx.Dialog.Close().
- Fix for the relax start up detection of missing Python packages. The dep_check module is now imported first, as it used to be. This is required to check if all required modules are installed and to present understandable messages to the user rather than cryptic ImportError messages with tracebacks.
- Fix for bug #22033, the inability to use other optimisation algorithms in the dispersion analysis. As mentioned in comment #2, the solution is to raise a RelaxError explaining that only 'simplex' optimisation is possible for the dispersion analysis as the gradients are not derived and implemented in relax.
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.2.1
Description
This is a major bugfix release. The equations for the B14 and B14 full relaxation dispersion models [Baldwin 2014], introduced with relax version 3.2.0, are now calculated correctly; the NS CPMG 2-site expanded model correctly handles edge cases where no exchange is expected; and the structure.delete user function now operates correctly when multiple models are loaded into the data store.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.2.1
(23 May 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.2.1
Features
- N/A
Changes
- Punctuation fixes throughout the CHANGES document.
- Modified the system test Relax_disp.test_cpmg_synthetic_ns3d_to_cr72 to catch bug #22017: LinAlgError, for all numerical CPMG models. The system test was renamed from test_cpmg_synthetic_cr72 to test_cpmg_synthetic_ns3d_to_cr72 to reflect which model creates the data and which model fits the data.
- Modified cpmg_synthetic script to first create all time structures before doing back-calculation. Bug #22017: LinAlgError, for all numerical CPMG models. The numerical models need all time points which are defined in setup to be present when calculating.
- Renamed the system test to test_cpmg_synthetic_ns3d_to_cr72_noise_cluster. The model that creates the data has been changed to a numerical model. Bug #22017: LinAlgError, for all numerical CPMG models.
- Implemented system test Relax_disp.test_cpmg_synthetic_ns3d_to_b14. Bug #22021: model B14 shows bad fitting to data. This is to catch model B14 showing bad fitting behaviour.
- Parameter precision increase for the system test Relax_disp.test_baldwin_synthetic. The correct implementation of the trigonometric functions allows for higher precision. Bug #22021: model B14 shows bad fitting to data. Duplicated lines of code were also removed.
- Code cleanup in system test Relax_disp.test_baldwin_synthetic_full. Bug #22021: model B14 shows bad fitting to data. The precision could also be increased by 1 digit.
- Code cleanup in system test Relax_disp.test_baldwin_synthetic. Bug #22021: model B14 shows bad fitting to data. Removing many unnecessary lines of code.
- Added 7 unit tests demonstrating edge case 'no Rex' failures of the NS CPMG 2-site expanded model. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0. Such tests should be replicated for all dispersion models.
- Created the Structure.test_bug_22069_structure_delete_helix_attribute system test. This is to catch bug #22069, the failure of the structure.delete user function with "AttributeError: Internal instance has no attribute 'helices'".
- Created the Structure.test_bug_22070_structure_superimpose_after_deletion system test. This is to catch bug #22070, the failure of the structure.superimpose user function after deleting atoms with structure.delete.
- Added some checks to the Structure.test_bug_22070_structure_superimpose_after_deletion system test. These tests reveal the real problem - that the atoms of the second model have not been removed by the structure.delete user function.
- Added git-svn support for the relax version information module. This allows the subversion revision number and repository URL to be displayed on program startup, so that it is stored in log files. This is very useful for debugging purposes.
- Improvements for the git-svn support in the relax version module. Python 3 is now correctly handled and the URL is properly extracted from the git repository.
- Improvement for the unit test printouts when run with the --time command line option. The full unit test name is now printed out, reverting to the old behaviour. However the shortened test names are preserved for the other test suite categories.
- Created the test_ns_cpmg_2site_expanded_no_rex8() relaxation dispersion unit test. This is a demonstration, showing the NS CPMG 2-site expanded model with no exchange when kex = 1e5. I.e. when the motion is too fast for exchange to be observed. This test should be used for all dispersion models to make sure that they model this edge case correctly as well. This follows from http://article.gmane.org/gmane.science.nmr.relax.devel/5906.
- Attempt at fixing bug #22071, the relax unit test and system test not functioning. The fix here is that the git commands to show the current subversion revision number only work when run from the relax base directory or one of its subdirectories. This should now be fixed, as the pipe running the command will first 'cd' to the relax base directory.
- Another attempt at fixing bug #22071, the relax unit test and system test not functioning. This time the complicated shell command "cd %s; git svn find-rev $(git rev-parse HEAD)" has been replaced with "cd %s; git svn info".
- Changed most default dispersion parameter values to avoid edge cases where there is no exchange. The Δω parameters were all 0.0 and kex 1e5, both of which result in no exchange. If this is ever used as an optimisation starting point - which it never should be, apart from development, test suite, and debugging purposes - then the optimisation algorithm will have a very hard time recovering. The pA parameter has been changed to 0.90 to set it to a reasonable value while still staying far away from the no exchange condition of pA = 1.0. This follows from http://article.gmane.org/gmane.science.nmr.relax.devel/5917.
- Fixes for 3 dispersion system tests for the change in default parameter values. The default values are used in the auto-analysis in the test suite to avoid the grid search. The changed values affected the optimisation of two spins from Flemming Hansen's data located at test_suite/shared_data/dispersion/Hansen/, residue 4 used as an example of no exchange and residue 70 used as an example where data is only available at one field. The system test Relax_disp.test_set_grid_r20_from_min_r2eff_cpmg was also modified as it was directly checking these default values.
- Fix for the Relax_disp.test_cpmg_synthetic_dx_map_points system test. This uses the default parameter values to start the optimisation, therefore the recent change away from edge case 'no Rex' values allows the parameter values stored in ds.dx_clust_val to be correctly optimised.
- Speed up for the version module when using a repository copy of the code. The repository revision and URL are now stored as module variables, so that the 'svn info' and 'git svn info' commands are only run twice, once for the revision() function and once for the url() function.
- Large speed up for the relax start up times for svn and git-svn copies of the relax repository. The 'svn info' and 'git svn info' commands are now only executed once, when the version module is first imported. The revision() and url() functions have been merged into the repo_info() function and this is called when the module is imported. This repo_info() function stores the repository revision and URL as the version.repo_revision and version.repo_url module variables. It also catches if these variables are already set, so that multiple imports of the module do not cause the repository information to be looked up each time. Previously the revision() and url() functions were called every time a relax state or result file was created, hence for repository copies the 'svn info' or 'git svn info' commands were being called each time. The functions were also called for each interpreter object instantiated, and for each import of the version module.
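The caching scheme can be sketched like this, assuming the svn-style output of 'git svn info'; it is an illustrative reconstruction rather than the actual relax version module.

```python
# Module-level caching of the repository information (illustrative sketch).
import subprocess

repo_revision = None
repo_url = None

def repo_info():
    """Look up the repository revision and URL once, storing them as module variables."""
    global repo_revision, repo_url
    # Already determined by an earlier import or call - nothing to do.
    if repo_revision is not None:
        return
    try:
        out = subprocess.check_output(['git', 'svn', 'info'], universal_newlines=True)
    except (OSError, subprocess.CalledProcessError):
        return
    for line in out.splitlines():
        if line.startswith('Revision:'):
            repo_revision = line.split(':', 1)[1].strip()
        elif line.startswith('URL:'):
            repo_url = line.split(':', 1)[1].strip()

# The lookup happens only when the module is first imported.
repo_info()
```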
Bugfixes
- Extremely important fix for the model B14. This was discovered by the author Andrew Baldwin by inspecting his code in relax. Bug #22021: model B14 shows bad fitting to data. The calculation of g3 and g4 was implemented incorrectly; these should be computed with trigonometric functions. The B14 model was previously non-functional and now shows excellent performance.
- Fix for bug #22069 by only deleting helix and sheet data with structure.delete when it exists. This is bug #22069, the failure of the structure.delete user function with "AttributeError: Internal instance has no attribute 'helices'".
- Fix for all edge case 'no Rex' failures of the NS CPMG 2-site expanded model. This uses the no exchange checking idea, modified to function in the relax trunk, from http://article.gmane.org/gmane.science.nmr.relax.devel/5847. Importantly, this check is placed on line 1 of the function. The recently introduced set of 7 unit tests comprehensively demonstrating these failures now all pass.
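The kind of check described can be pictured as follows; the function name and signature are invented for this sketch and do not match the actual lib/dispersion code.

```python
# Hypothetical no-exchange early return, placed on line 1 of the model function.
from numpy import ones

def r2eff_ns_cpmg_2site_like(r20a, dw, pA, kex, num_points):
    """Return a flat dispersion curve when any no-exchange condition holds."""
    # Catch dw = 0.0, pA = 1.0 or kex = 0.0 before any matrix work, so no math
    # domain problems can arise and the curve is simply the R20 baseline.
    if dw == 0.0 or pA == 1.0 or kex == 0.0:
        return r20a * ones(num_points)
    # ... the full numerical spin evolution of the real model would follow here ...
    raise NotImplementedError("only the no-exchange branch is sketched")
```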
- Important bug fix for the structure.delete user function when multiple models are present. This is to fix bug #22070, the failure of the structure.superimpose user function after deleting atoms with structure.delete. The problem is that structure.delete was removing the atoms from the first model but none of the others. This is because it was using the structural object atom_loop() method to find the atoms to be deleted, but this method operates on the first model. So when the second model is reached, the atoms are already gone.
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.2.0
Description
This is a major feature release. It includes the addition of the new B14 and B14 full relaxation dispersion models [Baldwin 2014], a complete rearrangement of the module layout of the specific analyses packages, a number of new user functions, documentation improvements including the addition of a new chapter to the manual for the N-state model or ensemble analysis, and numerous other features. This is also a major bugfix release, so all users are recommended to upgrade. This is essential if you are using the new relaxation dispersion analysis in relax, as a severe bug in the error calculation has been corrected. See below for a comprehensive list of new features, the rather large number of changes, and the long list of all bugs fixed.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.2.0
(20 May 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.2.0
Features
- Addition of the vector_angle() relax library function for calculating the signed or directional angle between two vectors.
- Huge speed up of the interatom.define user function.
- For improved feedback, a busy cursor is shown in the GUI when executing user functions.
- The steady-state NOE auto-analysis now produces a 2D Grace plot of the reference and saturated spectra peak intensity values.
- Complete redesign of the specific analyses backend, simplifying and cleaning up this internal API and making it easier for users to add completely new analysis types to relax.
- Parametric reduction of the rotor frame order model, eliminating one redundant parameter hence simplifying optimisation.
- Large improvement for the lib.software.grace module. The '*_err' and '*_bc' parameter names for the parameter error and back-calculated parameters respectively are now supported, allowing these values to be easily plotted.
- Expansion of the value.set user function to handle parameters which consist of lists of values. The index argument has been added to allow the index of the list to be specified, and this is then propagated into the specific analysis API.
- Improvements for the parameter definitions in all analysis types. This allows for better output in 2D plots and text files.
- Implemented linear constraints for the frame order analysis. This uses the log-barrier constraint algorithm in the minfx library to provide constraints without requiring gradients.
- Improved and expanded the relax command line options for debugging.
- Full independence of the relax library so that it can be used outside of relax.
- The addition of a relaxation dispersion user function for setting the R20 values to the minimum R2eff value.
- Expanded capabilities for the relax_disp.sherekhan_input user function.
- Implementation of the B14 and B14 full relaxation dispersion CPMG models for 2-site exchange for all time scales (from the new paper [Baldwin 2014] at http://dx.doi.org/10.1016/j.jmr.2014.02.023).
- Large improvements to the relax HTML manual including fixes for URLs, bibliography entries, links, and tables.
- Support for multiple point creation for the OpenDX chi-squared space mapping user function.
- Automatic determination of reasonable initial contour levels for the OpenDX mapping user function.
- Addition of a new chapter to the manual for the N-state model or ensemble analysis.
- Creation of the new pymol.frame_order user function for visualising results.
- Expansion of the Grace 2D data plotting capabilities.
Changes
- Shifted two functions from pipe_control.angles into the new lib.geometry.angles module. This is the fold_spherical_angles() and wrap_angles() functions which are not related to the relax data store and hence can be made independent.
- Replaced the atan(ω1 / Δω) function call with atan2(ω1, Δω) to make sure the returned θ values lie between 0 and π. This was done in the return_offset_data() function of the specific_analyses.relax_disp package. This was discussed in: http://thread.gmane.org/gmane.science.nmr.relax.devel/5210.
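  As a rough illustration of why atan2() is preferred here (the numbers below are arbitrary example values, not taken from the relax code):

      from math import atan, atan2

      omega1, delta_omega = 2000.0, -500.0   # example spin-lock field and offset (rad/s)

      theta_atan = atan(omega1 / delta_omega)    # approx. -1.326 rad, i.e. a negative angle
      theta_atan2 = atan2(omega1, delta_omega)   # approx. 1.816 rad, always within [0, pi] for omega1 >= 0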
- Changed a unit test and a system test, as the change from the atan() to the atan2() function for calculating θ can give differences at the 15th decimal place.
- The global analysis now takes the median of the results from a previous run rather than the average. This prevents extreme outliers from distorting the combined value. This was discussed in: https://mail.gna.org/public/relax-devel/2013-10/msg00009.html.
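  A minimal sketch of the difference, using made-up parameter values from hypothetical previous runs:

      import numpy as np

      # Results for one parameter from previous runs, including one extreme outlier.
      values = np.array([1.02, 0.98, 1.05, 0.99, 250.0])

      print(np.mean(values))    # 50.808 - dragged far away by the outlier
      print(np.median(values))  # 1.02   - a robust summary of the previous runs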
- Modified system test Relax_disp.test_r1rho_kjaergaard to use input guess values of φex in units of ppm2 instead of rad2/s2.
- Small change to system test Relax_disp.test_r1rho_kjaergaard. The kex value from the system test lies in the range 4000-5000, while the expected value is 13000. A deeper analysis of the input is needed to judge what the correct value is.
- Created the Structure.test_bug_21814_pdb_no_80_space_padding system test. This is for catching bug #21814, the PDB reading failure when the PDB records are not padded to 80 spaces. The PDB file used for the test is the same file as attached to the bug report.
- The verbosity flag is now used in the centre of mass calculations. The pipe_control.structure.mass.pipe_centre_of_mass() function now passes the verbosity argument into the lib.structure.mass.centre_of_mass() function.
- Created the new vector_angle() library function. This is located in the lib.geometry.vectors module. The function will calculate the angle between two vectors with sign or direction using the atan2() function.
- Addition of a number of unit tests for the new lib.geometry.vectors.vector_angle() function.
- Changes to the lib.geometry.vectors.vector_angle() function. This now expects the normal of the plane in which the angle is defined. The original logic was not functional, therefore the angle is forced to be negative if the cross product between the two vectors points in the opposite direction as the normal.
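  The following is a minimal sketch of the signed angle idea described above; it is not the exact lib.geometry.vectors.vector_angle() implementation, just the same geometry expressed with numpy:

      import numpy as np

      def signed_vector_angle(v1, v2, normal):
          """Angle from v1 to v2, with the sign taken relative to the plane normal."""
          v1 = v1 / np.linalg.norm(v1)
          v2 = v2 / np.linalg.norm(v2)
          angle = np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))
          # Negate the angle if the cross product points away from the normal.
          if np.dot(np.cross(v1, v2), normal) < 0.0:
              angle = -angle
          return angle

      # A 90 degree rotation about z gives +pi/2; swapping the two vectors gives -pi/2.
      print(signed_vector_angle(np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])))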
- Improvements for the log converter script. This is for the script used to convert SVN messages into a format for the relax release announcement. The script now handles spacing better. Multi-line messages are now concatenated into a single line using a double space between separate sentences and a single space in all other cases.
- Improvements for the pipe_control.mol_res_spin.generate_spin_id_unique() function. The unique spin ID now takes into account if the molecule is named or not (for single molecules). This allows the function to be used when dealing with the structural data rather than the molecule, residue, and spin data structure.
- Removed the full stop from the printout of the test names in the test suite. This allows for quicker copying and pasting of the test name when running with the --time option and then selecting a single test to run.
- Modified the Noe.test_noe_analysis system test to catch bug #21863, the failure to create the ref and sat Grace 2D plots in the NOE analysis.
- Improved the user feedback when executing a user function in the GUI. The busy cursor is now turned on at the start of the user function wizard page method _apply() and turned off again at the end. This avoids users thinking that the program has frozen (as was the case in bug #21862).
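  A minimal sketch of this pattern using the standard wxPython busy cursor calls (the wrapper function and its arguments are illustrative, not the actual _apply() method):

      import wx

      def run_user_function(backend, *args, **kwargs):
          """Show a busy cursor while a possibly slow user function back end runs."""
          wx.BeginBusyCursor()
          try:
              return backend(*args, **kwargs)
          finally:
              # Always restore the cursor, even if the user function raises an error.
              if wx.IsBusy():
                  wx.EndBusyCursor()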
- Changed the Noe.test_noe_analysis system test to handle the peak intensities correctly. This relates to bug #21863, the grace.write user function not being able to write ref/sat plots as described in sample script noe.py. Instead of trying to produce the 'ref.agr' and 'sat.agr' files for the non-existent 'ref' and 'sat' parameters, instead the 'intensities.agr' file is being produced for the peak intensity parameter. The reference and saturated intensities will appear as two graph sets within that plot.
- Changed the Noe analysis sample script to properly handle the peak intensity Grace plots. This relates to bug #21863, the grace.write user function not being able to write ref/sat plots as described in sample script noe.py. Instead of trying to produce the 'ref.agr' and 'sat.agr' files for the non-existent 'ref' and 'sat' parameters, instead the 'intensities.agr' file is being produced for the peak intensity parameter. The reference and saturated intensities will appear as two graph sets within that plot. These changes match those of the test suite.
- Fix for the line numbering for the NOE analysis sample script in the user manual. The line numbering for the code snippets did not match that of the full sample script shown at the start of that section of the NOE chapter.
- Added a simple shell script to quickly grep the entire source tree while excluding .svn directories. This will only work on POSIX systems (Linux and Macs).
- Fix for the Noe.test_bug_21562_noe_replicate_fail system test. The 'ref' and 'sat' parameters do not exist. Therefore the grace.write user function calls in the system test script have been modified to output the 'intensities' parameter instead.
- Complete rearrangement of the specific analysis code for the steady-state NOE. This brings the code into line with the recent specific analysis code rearrangements, specifically in the specific_analyses.relax_disp package.
- Changed the Noe auto-analysis to properly handle the peak intensity Grace plots. This relates to bug #21863, the grace.write user function not being able to write ref/sat plots as described in sample script noe.py. Instead of trying to produce the 'ref.agr' and 'sat.agr' files for the non-existent 'ref' and 'sat' parameters, instead the 'intensities.agr' file is being produced for the peak intensity parameter. The reference and saturated intensities will appear as two graph sets within that plot.
- Redesign and standardisation of the peak intensity data structure throughout all analyses in relax. The various structures 'intensities', 'intensity_err', 'intensity_sim', 'sim_intensity', and 'intensity_bc' have all been renamed. The new structures are called 'peak_intensity', 'peak_intensity_err', 'peak_intensity_sim' and 'peak_intensity_bc'. This allows the structure to be processed as a standard parameter in the specific analysis API. One very visible consequence is that plots of peak intensities, as well as value files, will now have peak intensity errors. For backwards compatibility, the relax data store method _back_compat_hook() has been modified to catch all previous peak intensity object variants and to standardise and rename these to the new object names. As the parameter is now called 'peak_intensity' rather than 'intensities', all calls to the grace.write and value.write for this parameter have been changed in the auto-analyses, the sample scripts, the test suite and the manual.
- Fix for the Noe.test_noe_analysis system test. The grace plots of the peak intensities now have error bars.
- The legends in Grace plots are now turned on by default, if the legend flags are not specified. The Noe.test_noe_analysis system test has been updated for the change.
- Added matplotlib detection to the dep_check module. This follows step 1 from the planning document at http://thread.gmane.org/gmane.science.nmr.relax.devel/5278.
- Added matplotlib to the info module. This follows step 1 from the planning document at http://thread.gmane.org/gmane.science.nmr.relax.devel/5278.
- Modified the python_multiversion_test_suite.py script to run the relax information printout. This is to test out the info module on multiple Python versions and to have a record of the setup of each Python version.
- Python 3 fixes for the info module. The new processor_name() function was not compatible with Python 3 as the text read from STDOUT needs to be 'decoded'.
- The variables in the relax_fit.h file are now all static.
- Added the new exp_mc_sim_num argument to the relaxation dispersion auto-analysis. This is in preparation for fixing bug #21869. This argument allows for a different number of Monte Carlo simulations for the R2eff model when exponential curves are fit. It will mainly be useful in the test suite to improve the accuracy of the R2eff errors, while still running a low number of simulations for the other models to allow optimisation to be quick.
- Modifications to the Relax_disp.test_m61_exp_data_to_m61 system test. This is to fix bug #21869, the failure of this system test. The number of Monte Carlo simulations for the R2eff model has been increased from 3 to 25 using the new exp_mc_sim_num argument to the dispersion auto-analysis. To keep the test fast, only a single spin is optimised.
- Redesign and major clean up of the specific_analyses.jw_mapping package. The code has been broken up into separate modules.
- Fix for the default value table documentation in the specific_analyses.jw_mapping package. This was broken in the last commit.
- Updates for the rest of relax for the redesign of the specific_analyses.jw_mapping package.
- Redesign and major clean up of the specific_analyses.consistency_tests package. The code has been broken up into separate modules. This matches the similar specific_analyses.jw_mapping package.
- Redesign and major clean up of the specific_analyses.relax_fit package. The code has been broken up into separate modules. The rest of relax has been updated to handle the changes.
- Removed the empty documentation strings from the specific analysis API base class. These are being gradually shifted into the specific_analyses.*.uf modules, so do not belong in the API object.
- The specific analysis API classes are now all singletons. This change will reduce the amount of memory used, as these classes are initialised multiple times throughout relax, especially in the test suite. The API objects are not used for local storage so the multiple instance versus singleton design change will make no difference. The singleton design pattern code has been added to the base class specific_analyses.api_base.Api_base so that all classes inherit the __new__() method which implements the singleton.
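  A minimal sketch of the singleton pattern described above (simplified, not the real Api_base code):

      class ApiBase(object):
          """Base class providing singleton behaviour to all specific analysis API classes."""

          def __new__(cls, *args, **kwargs):
              # One instance per class (not per base class), so each analysis type keeps
              # its own API object; later instantiations return the same object.
              if 'instance' not in cls.__dict__:
                  cls.instance = object.__new__(cls)
              return cls.instance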
- Shifted the NOE analysis specific user function documentation from the API object to the uf module.
- More code refactorisation of the specific_analyses.n_state_model package into new modules. The API object is now in the 'api' module, the remaining private methods have been shifted into the 'optimisation' and 'uf' modules, and the user function documentation moved to the 'uf' module.
- Added units tests for package consistency testing for all of the remaining specific analyses.
- Fixes for the unit tests for the N-state model specific analysis package changes.
- Updated the package __all__ lists for a number of the specific analyses.
- Shifted all of the private methods for optimisation from the relaxation dispersion API object. These are now functions of the specific_analyses.relax_disp.optimisation module.
- Created the new specific_analyses.relax_disp.uf module. This consists of the private methods of the relaxation dispersion API object which act as the back end for the user functions, as well as the user function documentation, all shifted from the 'api' module.
- Shifted the model-free analysis specific API object to the specific_analyses.model_free.api module.
- Fixes for the new specific_analyses.model_free.api module.
- Created the new specific_analyses.model_free.uf module. This consists of the private methods from the 'main' module which act as the back end for the user functions as well as the user function documentation.
- Created the new specific_analyses.model_free.parameters module. This consists of the private methods from the 'main' and 'mf_minimise' modules. All class methods have been converted into functions.
- Created the new specific_analyses.model_free.optimisation module. This consists of the merger of the 'mf_minimise' and 'multi_processor_commands' modules. All the private class methods have been converted into functions.
- Shifted all of the model-free specific analysis API methods into specific_analyses.model_free.api.
- Clean up and refactoring of the specific_analyses.model_free.bmrb module. The class methods have all been converted into functions, and the now deleted class is no longer a base class for the specific analysis API class.
- The read_columnar_results() method has been removed from the specific analysis API. This is only for backwards compatibility with ancient relax 1.2 and earlier model-free results files, so will never be used by any other analysis.
- Converted all of the class methods in specific_analyses.model_free.results to functions. This class has been removed from the API as well.
- Renamed specific_analyses.model_free.results to back_compat. This is to make the purpose of the module clearer, to avoid developer confusion.
- Shifted the model-free classic_style_doc user function documentation to the 'uf' module.
- Shifted the last private method out of the model-free specific analysis API class. It has been converted into a function of the new 'data' module, for lack of a better name.
- Shifted some of the specific_analyses.model_free.parameters functions into the new 'model' module.
- Removed the test_grid_ops() method from the specific analysis API. This has been shifted into the new lib.optimisation module and converted into a function, breaking a number of circular import kludges.
- Fixes for the specific analysis API unit tests. The 'instance' variable used for the singleton design pattern is skipped in the method and object checks.
- Redesign of the specific analysis API. All parts of relax using this API now work with the API objects directly. The specific_analyses.setup module has been renamed to specific_analyses.api and the get_specific_fn() function has been eliminated. Instead of calling this, the different parts of relax now obtain the API object by calling the new return_api() function. This results in a large cleanup of the API - method names are no longer aliased to different names.
- Fix for the new singleton design of the specific analysis API objects. The use of the class namespace as a storage space has been eliminated. This was causing test suite failures when checking the API objects. For some reason, some of the target function objects were being placed in 'self'.
- Created a directory for holding the CaM double rotor frame order synthetic data.
- Capitalised the pivot and CoM variables in the base frame order distribution generation script.
- Reintroduced the distribution PDB file creation to the frame order test data generation script. This is the generate_base.py script in the test_suite/shared_data/frame_order/cam directory. The ability to create the distribution.pdb file was long lost in this script, and can now be activated using the DIST_PDB class variable.
- The Frame Order test data generation base script now loads all structures before rotating them. This allows the progress printout to function correctly by not having any user function printouts as the rotations are occurring.
- Created a simple double rotor geometric system to be used for this frame order test data generation. The system_create.py script creates the geometric system based on the CoMs of both domains in the parent directory and two perpendicular rotation axes passing through both CoMs. A PDB file of the representation is created by the script.
- Improvements and expansion of the frame order test data generation base script. More of the class variables are now defined in this base class and dummy methods are provided to allow certain operations to be skipped (print_axis_system(), axes_to_pdb() and build_axes()). Importantly, the script can now handle multiple modes of motion with the introduction of the key _multi_system() and _state_loop() methods.
- Fixes for the calculation of the frame order matrix in the test data generation base script. The matrix generation now handles multiple modes of motion correctly. The total rotation matrix is constructed by looping over the modes and taking the dot product of each individual mode rotation with the running total, and this is then used to create the outer product, summed over all states.
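  As a rough sketch of the summation described above, assuming each state of the distribution is characterised by a total 3x3 rotation matrix built from the per-mode rotations (this is illustrative, not the script code itself):

      import numpy as np

      def frame_order_matrix(rotations):
          """Average of the outer (Kronecker) products of the per-state rotation matrices."""
          daeg = np.zeros((9, 9))
          for R in rotations:
              daeg += np.kron(R, R)   # outer product of the total rotation with itself
          return daeg / len(rotations)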
- The frame order test data generation base script now outputs the frame order matrix to 8 places.
- Introduced the ROT_FILE flag to the frame order test data generation base script. This allows the 'rotations.bz2' file creation to be skipped, if set to False. This file takes time to create and is of limited use.
- Removed a duplicated state.save call in the frame order test data generation base script.
- The save file created by the frame order test data generation base script can now be bypassed. When loading 1,000,000 PDB structures as models into the relax data store, the RAM usage can go over 10 GB. When trying to save this into a relax state file with the state.save user function, the time required can be over a day. Therefore the SAVE_STATE class variable has been introduced to allow the state.save call to be bypassed.
- Created the RDC and PCS back calculated test data for the CaM double rotor frame order model. This consists of a uniform distribution over both rotors, the first centred in the C-domain and the second in the N-domain, and the two axes being perpendicular to each other along the CoM-CoM axis. The distribution consists of 250000 rotated structures. The frame order matrix for this model is also given.
- Created a distribution of structures for the CaM double rotor frame order model. This is only for a distribution of 100 structures, to keep the file size reasonable. A PyMOL *.pse file is also included to show the distribution together with the rotor system and the domain positions.
- Updated the rotation() method of all the CaM frame order test data generation scripts. The motion_index argument is now accepted by all of the methods to allow the base script to execute correctly.
- Fix for the Frame_order.test_generate_rotor2_distribution system test. The rotation() method now must accept the motion_index keyword argument.
- Created the Frame_order.test_cam_double_rotor system test for the CaM synthetic data. This will be used to implement the frame order double rotor model.
- The CaM double rotor frame order test RDC data now has single quotes around the spin IDs. This allows the data to be loaded in the Frame_order.test_cam_double_rotor system test.
- Created subsets of the CaM double rotor frame order test PCS data. This consists of data for only 5 spins, and matches those of the other CaM frame order test data.
- Large refactorisation of the frame order package. The private methods of the frame order package specific_analyses.frame_order have now all been shifted into modules. This is to simplify the package by not having huge quantities of code in the __init__ module. Now the code resides in the api, checks, data, optimisation, parameters, and user_functions modules.
- Added the double rotor frame order model to the frame_order.select_model user function.
- Better support for the parameters of the double rotor frame order model.
- Initial implementation of the double rotor frame order model target function. The target function func_double_rotor() has been created as a copy of the func_rotor_qrint() method, modified for the double rotor model. Modifications will likely be needed as the compile_2nd_matrix_double_rotor() and pcs_numeric_int_double_rotor() functions are implemented.
- Initial implementation of the lib.frame_order.double_rotor module. This module implements the functions needed to solve the frame order analysis for the RDC (via the frame order matrix) and PCS (numerically). The interfaces have been updated for the double rotor but most of the code still implements the basic rotor model from which it derives.
- Fix for the double rotor frame order model when only RDCs are used. The target function was not being aliased when no PCS data was present.
- Changed the precision of the deactivated Frame_order.test_cam_double_rotor_pcs system test. This test will run with the command "relax -s Frame_order.test_cam_double_rotor_pcs" and, because of the small angle of the test model, the chi-squared value differences for just the PCS were too small for the previous precision of 1 decimal place.
- The double rotor system is now truly perpendicular. This is for the CaM frame order synthetic test data. The two axes were not perpendicular whereas for the model they should be.
- Updated the double rotor distribution PDB file for the perpendicular axes. This is for the CaM frame order double rotor synthetic data. The number of structures in this distribution is set to 100 (10 per motional mode). The PyMOL *.pse file has also been updated.
- Updated the CaM frame order double rotor synthetic test data for the perpendicular axes. The RDC and PCS data has been recalculated for 250,000 structures, this time with the axes being truly perpendicular.
- Added a simple script for analysing the eigensystem of the CaM frame order double rotor test model.
- Capitalised the class variables of all of the CaM frame order system test scripts.
- Class variable cleanup for the CaM frame order system test scripts. The variables are now all defined in the base script and only overwritten when needed by the individual tests.
- Changed the handling of the pivot point in the CaM frame order system tests. The pivot point is now a class variable, rather than being hardcoded into a function. The handling of a second pivot has also been added.
- Updated the CaM frame order double rotor system test script to have the correct two pivots.
- Changes to the frame_order.pivot user function. The 'order' argument has been added to allow for multiple pivots to be present. The user function backend will store these as cdp.pivot, cdp.pivot2, cdp.pivot3, etc. The 'fix' argument now defaults to False to make sure it is always boolean.
- The second pivot is now being passed into the frame order target function class.
- Simplified the CaM frame order system test base script. The class variables are now always defined, so checking for their existence is pointless. The CONE_S1 is now also defined in the base script as a class variable.
- Added support for the new axis_alpha frame order parameter to the specific_analyses.frame_order package.
- Implemented the new frame order rotor model parameters in the target function. The parameters {axis_theta, axis_phi} have been replaced by the single axis_alpha. To support the new model construct, the CoM of the entire system is now passed into the target function.
- The AXIS_ALPHA parameter is now initialised in the CaM frame order system test base script. The base script was broken a while back due to AXIS_ALPHA not being defined but being checked for.
- Improvements for the centre of mass calculation for the frame order model optimisation. This is now only calculated for the rotor models. The CoM is also printed out for better user feedback.
- The CaM frame order system tests for the rotor models are converted to the new axis_alpha parameter. The axis_theta and axis_phi spherical coordinates are converted to the new reduced parameter set defined by a random point in space (the CoM of all atoms), the pivot point, and a single angle α.
- The CaM frame order system test base script is now using lib.geometry.vectors.vector_angle(). This is for correctly calculating the α axis angle for the rotor models.
- Fixes for the rotor axis α angle conversion in the CaM frame order system test base script. The pivot point is now the point on the rotor axis closest to the reference point (the CoM). Therefore the closest point is now calculated from the pivot point on the axis and the axis vector. This closest point is needed for defining the new minimal parameter set for the rotor models.
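  The closest point calculation is a simple projection; the following is a sketch of the geometry described above (not the exact test script code), assuming numpy arrays and a unit axis vector:

      import numpy as np

      def closest_point_on_axis(pivot, axis, point):
          """Return the point on the rotation axis closest to the reference point (the CoM)."""
          axis = axis / np.linalg.norm(axis)
          # Project the pivot-to-point vector onto the axis and step along it from the pivot.
          return pivot + np.dot(point - pivot, axis) * axis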
- Changes for the convert_rotor() method of the CaM frame order system test base script. The method now sets the α angle rather than returning it. The method now also resets the pivot point to the point on the rotation axis closest to the CoM.
- Fixes for the rotor axis reconstruction in the func_rotor_qrint() frame order target function. This is for the rotor model. The axis α angle is now correctly converted into the rotor axis using the CoM and pivot point.
- Optimisation is now turned on for the Frame_order.test_cam_rotor2 system test. This is to reveal deficiencies in the handling of the new axis α parameter.
- The frame order optimisation results unpacking function now supports the axis α parameter. This is in the function specific_analyses.frame_order.optimisation.unpack_opt_results().
- Updated the χ2 value check in some of the CaM frame order system tests for the rotor model. The χ2 value is slightly different due to truncation and conversion artifacts of the parameter set reduction.
- Shifted all of the code for calculating the frame order rotor axis into lib.frame_order.rotor_axis. The new frame_order.rotor_axis module consists of three functions for creating a unit vector for the rotor axis using either the axis α angle, the two spherical angles, or the three Euler angles.
- Renamed the specific_analyses.frame_order.user_functions module to uf.
- Fix for the optimised chi-squared value check in the Frame_order.test_cam_rotor system test. The reduced parameter set results in a slightly different χ2 value.
- Shifted the frame order average domain position info check from the 'optimisation' to 'checks' module.
- Fix for the CaM frame order system tests. The axis α angle and pivot shifting to the closest point to the CoM in the base system test script now only happens for the 'rotor' and 'free rotor' models. This allows the tests for the isotropic cone models to pass again.
- Fixes for all of the CaM frame order optimisation scripts in the test data directories. The frame_order.average_position user function is now essential.
- The centre of mass printout in the frame order target function setup now uses the verbosity argument. This means that the printout is not shown for the Monte Carlo simulation optimisation.
- Correction for the νCPMG to τCPMG conversion formula in the dispersion chapter of the manual. In relax, the conversion νCPMG = 0.25 / τCPMG is used and not νCPMG = 0.5 / τCPMG.
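  For example, under this convention a τCPMG delay of 2.5 ms corresponds to νCPMG = 0.25 / 0.0025 = 100 Hz, rather than the 200 Hz that the 0.5 factor would give.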
- Merged the Van't Hoff and Arrhenius lines of the dispersion software comparison table of the manual.
- Renamed the specific_analyses.relax_disp.disp_data module to specific_analyses.relax_disp.data. This is to match the module naming convention used in the other specific analyses, and as detailed in the post http://thread.gmane.org/gmane.science.nmr.relax.devel/5294.
- Updated all of the frame order optimisation scripts for the CaM test suite data. These simple testing scripts were very much out of date and non-functional for the current code. Half of the code in these scripts is now implemented in the concise frame_order.pdb_model user function.
- Updated the ancient test_suite/shared_data/frame_order/cam/rotor2/pcs_only/frame_order.py script. This now matches the script of its parent directory.
- Simplified all of the CaM frame order optimisation testing scripts. These are for the synthetic test suite data. The unnecessary class structure of the scripts has been eliminated.
- The frame_order.pivot user function can now be used to turn the pivot optimisation on and off. If the pivot point is not given, this user function will now just set the fixed flag and nothing else, allowing the optimisation status of a pre-set pivot to be changed.
- Added the axis α angle to the frame order return_units() API method.
- The frame order axis α angle is now defined in the grid search from -π to π.
- Replaced the 'elif' statements with 'if' in the frame order update_model() function. This is to avoid possible future bugs as the logic was not consistent.
- Renamed the specific_analyses.api_objects module to parameter_object to better reflect what it is. This contains a single object for the parameter list object and therefore does not need to be generalised for additional types of objects for the specific analysis API.
- Updated the module docstring of specific_analyses.parameter_object to match its purpose.
- Created a parameter list object for the relaxation dispersion analysis. Instead of using the parameter list object from the base class, the dispersion analysis now defines its own.
- Docstring improvements for the add_min_data() parameter list base class method.
- Created two new parameter list methods from the relaxation dispersion code. The add_model_info() and add_peak_intensity() base class methods have been created from the relaxation dispersion code. These are just aliases for setting up a number of parameters via add() in a standard way.
- The frame order specific analysis API object is now truly a singleton. This should help eliminate some bugs.
- Standardised all of the parameter list objects for the specific analyses. All of the specific_analyses packages now contain a parameter_object module which defines all of the parameters. The ordering of these is now consistent between the analyses, the result of which will be more consistent ordering of elements in the relax XML state files. The new Param_list.add_csa() base class method has been added to standardise the CSA parameter. And the capabilities of the add_model_info() method have been expanded.
- The parameter list objects in the specific analysis API are now singletons. This has no immediate benefit as these classes are only instantiated once. But it will allow for efficient reuse of these objects in the future and for expansions of the specific analysis API.
- The frame order pivot points are now stored differently in the current data pipe. Instead of being stored as a list or array of numbers in cdp.pivot, the point is now stored as cdp.pivot_x, cdp.pivot_y and cdp.pivot_z. The second pivot cdp.pivot2 is now stored as cdp.pivot_x_2, cdp.pivot_y_2 and cdp.pivot_z_2. This is to simplify the automated handling of optimisation parameters. Rather than having to convert the pivot_x, pivot_y, and pivot_z parameters to and from a list, the same mechanisms can now be used for all of the optimised frame order parameters. This will be used to hugely simplify many of the functions in the specific_analyses.frame_order.parameters module and eliminate a large source of bugs.
- Temporary deactivation of the Frame_order.test_cam_double_rotor system test.
- Huge simplification of the specific_analyses.frame_order.parameters.assemble_param_vector() function. The parameters are now assembled in a generic way by looping over cdp.params. The simpler code should also be faster.
- Huge simplification of specific_analyses.frame_order.parameters.param_num(). This now simply calls update_model() and then returns the length of cdp.params.
- Clean up of the specific_analyses.frame_order.parameters module. The unused and terribly designed assemble_limit_arrays() function has been deleted. And unused imports have been removed.
- Simplification of the specific_analyses.frame_order.optimisation.unpack_opt_results() function. Looping over the cdp.params parameter list is now used to minimise the amount of replicated code.
- The frame order analysis is now using the special frame order parameter object.
- Elimination of the specific_analyses.frame_order.checks.check_rdcs() function. This function duplicates the functionality of pipe_control.rdc.check_rdcs() while not being as comprehensive. Switching to the pipe_control.rdc version minimises the amount of code in the frame order analysis, decreasing the potential for bugs.
- Simplified the assemble_scaling_matrix() frame order function. The data_type argument no longer does anything, so has been eliminated.
- Clean up of the base_data_types() frame order function and its use. The propagation of the data type list around the frame order code is now greatly reduced. And the alignment tensors and NOESY data have been removed from the base_data_types() function - these are not used.
- Docstring fixes for the frame order specific analysis API object. A number of the methods were referring to the base data as being alignment tensors, but this changed to RDCs and PCSs a long time ago.
- Clean up and fixes for the frame order model_statistics() API method. The number of data points 'n' was incorrectly calculated using the original alignment tensor base data.
- Eliminated the specific analysis API object base __init__() method. This is no longer needed as the parameter list object is now analysis specific and set up by each analysis type separately. The calls to this method from the derived classes have therefore also been deleted.
- Shifted the frame order specific API deselect() method into the Api_common base class. The method has been renamed to _deselect_global() and extended to handle Monte Carlo simulations.
- The specific API PARAMS object is now private. Apart from fixing a number of unit tests, these aliased singletons should not be accessed by the rest of relax.
- Created the specific analysis API common method _is_spin_param_false(). This simply returns False. The is_spin_param() frame order method has been deleted and this common method is now used instead.
- Replaced the frame order specific API model_loop() method. The base method _model_loop_single_global() is used instead.
- Replaced the frame order API model_type() method with the base _model_type_global() method. The two methods were identical anyway.
- Fixes for the units of the ave_pos_x, ave_pos_y, and ave_pos_z frame order parameters.
- Removed the frame order API return_units() method. This method has been superseded by the parameter list object.
- Added the PCS and RDC as parameters for the frame order and N-state model analyses. These are now defined in the respective parameter list objects. The base method add_align_data() has been created to avoid code duplication.
- Eliminated the last of the specific analysis API return_units() methods. This functionality is now provided by the parameter list object.
- Eliminated the model-free specific analysis API data_type() method. This functionality is now provided by the parameter list object.
- Converted the N_state_model.test_5_state_xz system test to use a new way to set parameters. Instead of using pseudo-parameters for the value.set user function such as 'p0', 'p1', etc. for the probabilities, which are then converted into the 'probs' parameter with the index taken from the name, the index is now directly given. The value.set user function will need to be modified to handle this. The aim is to standardise the parameter list object for the N-state model analysis.
- Converted the remaining N-state model system tests to use the new value.set index argument.
- Converted all of the N-state model parameters to use the parameter list object. The default_value(), return_data_name() and return_grace_string() API methods have also been deleted as these have been superseded by the parameter list object.
- Clean up of the N-state model user function docstrings. The pseudo-parameter names such as 'p0', 'p1', etc. no longer exist.
- The new index argument for the value.set user function now defaults to zero. This is for backwards compatibility as the default value of None was not handled by the user function backend.
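  As a rough illustration of the change in a relax script (the parameter values below are arbitrary, and any keyword names beyond those mentioned above are assumptions):

      # Old style: the index was encoded in a pseudo-parameter name.
      value.set(val=0.1, param='p2')

      # New style: the real parameter name plus the explicit index argument.
      value.set(val=0.1, param='probs', index=2)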
- Fix for one of the value.set user function unit tests. The 'alpha2' N-state model parameter no longer exists, and the '2' is now supplied as index=2.
- Shifted the core of the data_names() specific analysis API method into the parameter list object.
- Renamed many of the specific analysis parameter list object methods to match the API names. This is to prepare for a decoupling of the parameter list object from the API object.
- Updated the relaxation dispersion citation for relax as this is now officially published.
- All of the add*() methods of the parameter list object have been made private. This is to prepare the object to be accessible to the rest of relax, so that it can be decoupled from the specific API object.
- Shifted the minimisation Grace strings and units into the parameter list object. The return_grace_string() and return_units() functions of the pipe_control.minimise module have been deleted and their contents shifted into the specific analysis parameter list object.
- Simplified the pipe_control.minimise.minimise() function. The specific API object only needs to be fetched once.
- Eliminated the return_data_name() concept from the specific analysis API. The parameter names can now only have one value, i.e. 's2' is different from 'S2'. In addition, a number of related functions have been deleted from the pipe_control.minimise module as these are now handled by the parameter list object. The deleted functions are return_conversion_factor() and return_value().
- Shifted the Brownian rotational diffusion tensor parameters into the parameter list object. This only affects the model-free analysis. The pipe_control.diffusion_tensor module functions default_value(), return_conversion_factor(), return_data_name() and return_units() have been eliminated. These functions have been merged together with the diffusion parameter self._add() calls in the model-free specific parameter list into the new parameter list base class _add_diffusion_params() method. To allow the model-free analysis code to differentiate between diffusion and model-free parameters, the new scope() parameter list base class method has been created. Diffusion parameters return 'global' whereas model-free parameters return 'spin'. The model-free specific API methods default_value() and is_spin_param() have been deleted as these are now provided fully by the parameter list object. The is_spin_param() method has been newly implemented in the parameter list object to check the parameter scope.
- Eliminated a lot of unused code from the pipe_control.align_tensor module. This includes the functions data_names(), default_value(), map_bounds(), map_labels(), return_conversion_factor(), return_data_name() and return_units() as well as the unused and very old user function documentation __default_value_prompt_doc__, __return_data_name_prompt_doc__ and __set_prompt_doc__. These are all remnants from the origin of the module - the copying of the diffusion_tensor module. But they have never been used.
- Changed the values of the set argument for the parameter list object. The 'generic' value has been renamed to 'fixed' and is now for all permanently fixed parameters of the model - for example the CSA value in a number of analyses. The default set argument value of the _add() method has been changed to 'all' so that any parameters registered via that method are not placed in a special set (unless specified otherwise).
- Clean up and fixes for the parameter definitions in the consistency testing analysis. The fixed and calculated parameters are now defined in the correct sets, and the description for tc has been expanded and improved.
- Added the ability to automatically create the parameter tables for the user function documentation. These are the tables used in many of the user function docstrings. This has been added to the parameter list base class. The section title is pre-specified by the new _set_uf_title() method, and the table caption and LaTeX label by the _set_uf_table() method. The documentation is generated by calling the uf_doc() method. This uses the new type_string() method to add a compact parameter type string representation to the table. The aim is to eliminate all of the hard-coded tables in the specific analyses which are always very quickly outdated. By automatically creating the tables, this simplifies the codebase and simplifies the addition of new analysis types.
- The parameter tables are now properly initialised in all of the specific analyses. This will allow the tables to now be used in the user function documentation.
- The auto-generated user function tables can now display the base units.
- Spacing fix for one of the diffusion tensor parameter descriptions.
- The label and caption for the parameter list user function documentation is now supplied to uf_doc(). This allows different types of tables to be generated, for example the default value is useful for the value.set user function but not value.write, while allowing tables to still be shared.
- Expanded the steady-state NOE parameter description.
- Expanded the reduced spectral density mapping parameter descriptions.
- The default scope for the parameter list object uf_doc() method is now 'spin'. Most of the parameter tables are for setting spin parameters, so this will minimise code.
- Fixes for the specific analysis parameter list singleton object. These objects are now really singleton objects and are only initialised once.
- Fixes for some of the NOE system tests - the NOE parameter description is now different.
- Shifted the user function documentation creation into the parameter list objects. The uf_doc() method will now return the pre-created documentation object, and the original base class method for creating the documentation has been renamed to _uf_param_table().
- Shifted all of the user functions to use the auto-generated parameter tables. All of the specific analysis default_value_doc and return_data_name_doc documentation objects have been deleted and replaced with the auto-generated ones. This results in a big code clean up and removes synchronisation issues with the user function documentation quickly becoming out of date when parameters change.
- Expansion of the dx.map user function documentation. This now includes tables of the N-state model and relaxation dispersion parameters. A new auto-generated model-free parameter table including the diffusion parameters has been created and is now used instead of the separate diffusion tensor and model-free parameter tables.
- Deleted the diffusion tensor __return_data_name_doc__ documentation object. This is no longer used, being redundant with the new parameter list objects.
- The frame order parameter tables for the user function documentation are now created.
- The dx.map user function documentation now includes the frame order parameters.
- Different parameter sets can now be specified when creating the parameter tables. This is for the analysis specific parameter list objects and the auto-generated user function documentation.
- Clean up of the grace.write user function documentation. The minimisation statistics table has been removed and instead the minimisation statistics are included in the parameter tables of the specific analyses, when appropriate.
- Code clean up - deleted the return_data_name_doc and set_doc pipe_control.minimise objects. These user function documentation objects are no longer used. They were also extremely out of date.
- Created the parameter list object base class _uf_doc_loop() method. This will be used to loop over all or subsets of the user function documentation parameter tables.
- The model-free parameter setting documentation has been shifted into the parameter list object. As the text was quite out of date, it has been updated to the current relax design.
- The J(w) mapping parameter setting documentation has been shifted into the parameter list object. This has also been updated to reflect the current design of relax.
- The consistency testing parameter setting documentation has been shifted into the parameter list object. This has also been updated to reflect the current design of relax.
- The relaxation dispersion parameter setting documentation has been shifted into the parameter list object. The documentation has also been rewritten as it originates from Sebastien Morin back in 2009 and is now very much out of date.
- Deleted the relaxation curve-fitting parameter setting documentation. This really didn't say anything.
- The N-state model parameter setting documentation has been shifted into the parameter list object.
- Updated the two_domain.py N-state model sample script. The value setting for the N-state model is now handled differently.
- More updates of the N-state model sample scripts for the value.set user function changes.
- Deleted the diffusion tensor parameter setting documentation from the value.set user function. These values have not been able to be set by the value.set user function for over half a decade. Therefore this documentation can only lead to user confusion.
- Deletion of the user function documentation in the pipe_control.diffusion_tensor module. The __default_value_doc__ and __set_doc__ documentation objects are out of date and no longer used anywhere in relax, so they have been eliminated.
- Shifted the model-free parameter writing documentation to the parameter list object.
- Rearranged the parameter table ordering in the value user functions. The order now better matches that of the chapters of the user manual and is consistent between the functions.
- More reordering of the parameter tables for the value user functions.
- Removed all of the prompt.doc_string.regexp_doc documentation objects from the user functions. This is the regular expression documentation which no longer has a purpose. It was for specifying multiple parameters simultaneously in user functions such as value.set, but this functionality has been removed.
- Created parameter tables with no additional trailing text. This is used in a few of the user functions.
- Updated the value.copy user function documentation for the frame order theory. The value.copy title has been changed to not be spin specific and a table of the frame order parameters has been added.
- Improvements for the value.display user function documentation. The N-state model parameter table has been removed as these parameters are not spin specific and cannot be used. And the title has been modified.
- Improvements for the value.read user function documentation. The N-state model parameter table has been removed as these parameters are not spin specific and cannot be read from a file. And the title has been modified.
- Updated the value.set user function documentation for the frame order theory. The value.set title has been changed to not be spin specific and a table of the frame order parameters has been added. The spin ID documentation has also been rewritten.
- Changed the title in the value.write user function documentation.
- Changed the title for the value user function class.
- Docstring update for the relaxation dispersion linear_constraints() function.
- The pivot point x, y and z coordinates are now registered as parameters of the frame order analysis. These are stored as the parameters pivot_x, pivot_y and pivot_z.
- Docstring fix in the relaxation curve-fitting linear_constraints() function.
- Converted the status.escalate variable into module variables for lib.errors and lib.warnings. This variable is set by the command line flag '-e' or '--escalate'. By converting it into a module variable, the lib.errors and lib.warnings modules are now independent of relax.
- Created the lib.warnings.TRACEBACK variable. When True, this will cause a stack traceback to be printed out with the warning. This is to decouple the traceback printout from the warning to error escalation.
- Created the new '-r' or '--traceback' relax command line option. If supplied, stack tracebacks will be shown for all RelaxWarnings and RelaxErrors. This allows for finer debugging control.
- Clean up of the debugging command line options. The debug flag now will cause stack tracebacks to be printed on all RelaxWarnings.
- Created the '--error-state' command line option for saving a pickled state when a RelaxError is raised. This gives greater control of a powerful feature added to relax by Chris MacRaild. The pickled state can then be attached to bug reports or can be used to quickly load the state prior to failure when in the scripting UI mode.
- Reordered the debugging command line options and removed the '-r' shortcut.
- The lib.errors module is now really independent of relax - the compat module is no longer used. Instead, the Python 2 and 3 versions of the pickle module are imported using try statements.
- Decreased the amount of newlines around the printout from the '--error-state' command line option.
- Improved support for printing stack tracebacks with RelaxWarnings. The '--traceback' command line option will now show a full traceback. A replacement warnings.showwarning() function has been written to write out the traceback before the warnings.formatwarning() replacement function is called.
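  A minimal sketch of the showwarning() replacement idea, using only the standard library (this is not the exact relax code; the flag and function names are illustrative):

      import sys
      import traceback
      import warnings

      TRACEBACK = False   # module-level flag, mirroring the lib.warnings.TRACEBACK idea

      def showwarning_tb(message, category, filename, lineno, file=None, line=None):
          """Optionally print a stack traceback before the formatted warning text."""
          stream = file if file is not None else sys.stderr
          if TRACEBACK:
              traceback.print_stack(file=stream)
          stream.write(warnings.formatwarning(message, category, filename, lineno, line))

      warnings.showwarning = showwarning_tb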
- Divide by zero avoidance fix for the rotor frame order model module lib.frame_order.rotor.
- Removed the dependency on the relax dep_check module from the relax library. This is to further decouple the library from relax.
- Added a script for testing the independence of the relax library.
- Removal of all unused imports in the lib.dispersion package.
- Improvements to the test_library_independence.py development script. This script for checking the independence of the relax library will now recursively import all packages and modules in the library and report at the end a list of all failures.
- Shifted the Python 2 and 3 compatibility module 'compat' into the relax library.
- Shifted the pipe_control.sequence.validate_sequence() function into the relax library. For this, the new lib.sequence module has been created. This change is for better library independence. A circular import with lib.io and lib.arg_check has also been resolved.
- Shifted the read_spin_data() and write_spin_data() functions from lib.io to lib.sequence.
- Removed the dependence on the relax 'dep_check' module from the lib.frame_order package. This is for more independence of the relax library.
- Added the missing Bioinformatics journal to the bibtex file journal name aliases.
- Huge clean up of all unused imports in relax. These were found using the find_unused_imports.py development script. A number of these changes significantly decrease the possibilities of circular imports appearing in the future. And this also makes the relax library more independent from the rest of relax.
- Shifted the data_store.relax_xml module into the relax library as lib.xml. This module contains a couple of functions which are used for converting Python objects into an XML representation and back again. These are used not only by the relax data store, but also a number of the structural objects in the relax library (which are themselves placed in the relax data store). This makes the relax library more independent from the rest of relax.
- Shifted many of the pipe_control.structure.geometric functions into the relax library. All but two functions from the pipe_control.structure.geometric module are independent of the relax data store. These have been shifted into the new lib.structure.geometric module. This removes most of the remaining relax dependencies in the relax library.
- Removed the automatic axis labelling in the lib.software.grace.write_xy_header() function. This is the automatic labelling based on the parameter Grace strings and units of the specific analysis type. This is now performed by the specific analyses themselves so the automatic code is not needed or used. This allows the dependence on the pipe_control.pipes module and the specific analysis API to be removed, making the relax library now 100% independent from the rest of relax.
- Editing of the relax command line option descriptions.
- Created a new test category for the relax test suite - the software verification tests. This is part of the full test suite or can be run by itself using the new --verification-tests command line option. Such tests are best described by https://en.wikipedia.org/wiki/Verification_and_validation_%28software%29. These tests will be used to make sure that the design aims of the relax source code are satisfied. For example that the relax library is independent from the rest of relax. Or that the package __all__ lists actually contain all modules and sub-packages (these tests are currently part of the unit tests).
- Shifted the relax library independence developer script into the software verification tests. The functionality of the devel_scripts/test_library_independence.py script is now within the Library.test_library_independence software verification test. Therefore the script has been deleted.
- Updated the text for the software verification tests in the test suite.
- The relaxation dispersion auto-analysis now outputs the R20, R2A0, R2B0, and R1ρ0 parameters. This includes both text files and 2D Grace plots.
- Python 3 fix for one of the old scripts in the test suite directories.
- Improved error messages for when the GUI is launched but the wxPython installation is broken. Now the case of a broken wxPython installation is handled rather than just a missing installation. The dep_check module will store the import error message, and relax will now report that back to the user.
- Python 2.5 and 2.6 compatibility for the Library.test_library_independence verification test. The importlib package is not available in these Python versions, but the code in the Python 2.7 library file importlib/__init__.py is compatible all the way back to Python 2.3. Therefore the importlib functions have been copied directly into the system test script and the importlib dependency removed.
- Updated one of the OMP model-free results files for the different ordering of parameters in the XML. The changes to the parameter setup for the model-free analysis means that the XML files are now ordered differently.
- Fix for the Test_data.test_count_relax_times_r1rho relaxation dispersion unit test for Python 3.2+. The '%s' representation of floating point numbers is different on these Python versions - the number of decimal places used is different by default.
- Added Python 3.4 support to some of the development scripts.
- Added 4 more relaxation dispersion system tests to the blacklist for when C modules are missing. This allows these 4 tests to be skipped in the system test rather than failing with "RelaxError: The exponential curve-fitting C module cannot be found.".
- Changed the backward compatibility hook for old state files for the spectrometer frequency. The behaviour has been changed so that the data pipe structure spectrometer_frq_list is now sorted. This simply allows a number of tests to pass on Python 3.3+, a user would not notice any differences.
- Fixes for two relaxation curve-fitting system tests for Python 3.3+. These are the Relax_fit.test_curve_fitting_height and Relax_fit.test_curve_fitting_volume system tests which fail due to accuracy differences and a bad call to the UnitTest method assertEqual() which should have been an assertAlmostEqual() call.
- Added the matplotlib module to the Python binary and module seeking script.
- Added a dataset for a new CPMG system test: SOD1-WT at 25°C from Kaare Teilum, Melanie H. Smith, Eike Schulz, Lea C. Christensen, Gleb Solomentseva, Mikael Oliveberg, and Mikael Akkea, 2009 (http://dx.doi.org/10.1073/pnas.0907387106).
- Added a system test to analyse this data. This is for the same SOD1-WT at 25°C CPMG dataset (http://dx.doi.org/10.1073/pnas.0907387106).
- The debugging command line option no longer turns on RelaxWarning tracebacks. These tracebacks can be separately turned on with the --traceback command line option.
- Shortened the name of the system test and moved the data to a folder with a shorter name. Regarding bug #21953, weird performance of grid search.
- Modified system test for cleaner implementation of tests. Regarding bug #21953, weird performance of grid search.
- Lowered the grid search range for kex by a factor of 10, to now be between 1 and 10000. Regarding bug #21953, weird performance of grid search.
- Changes to system test Relax_disp.test_hansen_cpmg_data_auto_analysis. Regarding bug #21953, weird performance of grid search. The number of grid search increments needed to be increased by 1, and the precision of some of the checked results lowered by one digit.
- Lowering of precision in Relax_disp.test_hansen_cpmg_data_auto_analysis_numeric. Regarding bug #21953, weird performance of grid search.
- Changes to system test Relax_disp.test_hansen_cpmg_data_auto_analysis_r2eff. Regarding bug #21953, weird performance of grid search. The number of grid search increments needed to be increased by 1, and the precision of some of the checked results lowered by one digit.
- Changes to system test Relax_disp.test_hansen_cpmg_data_missing_auto_analysis. Regarding bug #21953, weird performance of grid search. The number of grid search increments needed to be increased by 1, and some of the checked results changed.
- Modified system test Relax_disp.test_sod1wt_t25_to_cr72. Regarding bug #21953, weird performance of grid search.
- Modified Δω and kex in system test Relax_disp.test_tp02_data_to_tap03. Regarding bug #21953, weird performance of grid search.
- Split the system test Relax_disp.sod1wt_t25_to_cr72 into a setup part and a test part. Regarding bug #21953, weird performance of grid search.
- Started work on the error analysis bug. Regarding bug #21954, Order of spectrum.error_analysis is important.
- Small edit of the relax command line option descriptions.
- Undid the modification of Δω and kex in system test Relax_disp.test_tp02_data_to_tap03. Regarding bug #21953, weird performance of grid search. The number of iterations needed to be increased from 2000 to 2500 to allow the values to be found.
- Further extended system test Relax_disp.test_sod1wt_t25_bug_21954_order_error_analysis. Regarding bug #21954, order of spectrum.error_analysis is important.
- Extended the GUI test Relax_disp.test_hansen_trunc_data (run via --gui-tests) to catch errors in this dataset. Regarding bug #21954, order of spectrum.error_analysis is important.
- This time the blacklisted relaxation dispersion system tests have been correctly reduced. Only one blacklisted test did not require the C modules to be compiled.
- Shifted all of the dispersion model descriptions and parameter lists to the variables module. The descriptions and parameter lists which were part of the relax_disp.select_model user function backend have been shifted into the specific_analyses.relax_disp.variables module as MODEL_DESC_* and MODEL_PARAMS_* variables. The descriptions have also all been standardised. The MODEL_DESC and MODEL_PARAMS dictionaries have also been created to hold all of the descriptions and parameters in one place.
- The General.test_bug_21720_pipe_switching_with_tab_closure GUI test now works without compiled C modules.
- Updated the release checklist for the new minfx version 1.0.6 release. See https://web.archive.org/web/gna.org/forum/forum.php?forum_id=2456 and https://freecode.com/projects/minfx.
- Fixes for the specific API _set_param_values_spin() method for lists and dictionaries. This is for the value.set user function to allow it to handle parameters of different types. For example the R2 parameter in the relaxation dispersion analysis. This API common method now sets all dictionary elements, list elements, or the variable to the given value.
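  A rough illustration of the behaviour described above, using hypothetical names rather than the actual API method:

      def set_all(container, value):
          # Set every dictionary element, every list element, or the plain
          # variable to the given value, as described above.
          if isinstance(container, dict):
              for key in container:
                  container[key] = value
              return container
          if isinstance(container, list):
              return [value] * len(container)
          return value

      # Hypothetical examples: an R2 dictionary keyed by spectrometer frequency,
      # versus a simple scalar parameter.
      print(set_all({'599.9': 12.0, '800.1': 14.5}, 10.0))
      print(set_all(3.0, 10.0))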
- Created the new specific_analyses.relax_disp.variables.PARAMS_R20 list. This variable is a list of all R20 parameters of the dispersion models. It has been shifted out of the parameters module.
- Created a relaxation dispersion specific API set_param_values() method. This originates from the base _set_param_values_spin() method from the api_common module. The method has been extended to handle the R20 parameter types - generating the current dictionary keys as needed.
- Expanded the relaxation dispersion auto-analysis to allow the grid search to be turned off. By setting the grid_inc argument to None, the grid search will be turned off. As a replacement, the value.set user function is used for all model parameters to set them to their default values prior to minimisation. This design is for speed as optimisation from the defaults is often - though not always - good enough. It can be used, for example, in the test suite to make the system tests much faster.
- Changed the default R20 relaxation rate from 15 to 10 rad.s-1. This is probably closer to the average rate expected for molecules studied by NMR.
- The R2eff dispersion parameter now also defaults to 10 rad.s-1.
- Expanded the dispersion specific API set_param_values() method for the R2eff and I0 parameters. This now sets these parameter values correctly if the value sent into the method is not composed of dictionaries.
- Large speed up of the relaxation dispersion system tests by about 20%. This was achieved by turning the grid search off in a number of the system tests. Some of the optimisation values are slightly different, or completely different for the one example of the CR72 model fitted to no exchange, and these have been updated in the tests.
- Changed the bounds for the R20 parameters in the default grid search. The range of 1 to 40 rad.s-1 was previously used. This has been narrowed to 5 to 20.
- Added a function to find the minimum R2eff value and set it as the R20 value before the grid search. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Added system test for setting R20 from minimum R2eff before grid search. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value. System test: -s Relax_disp.test_sod1wt_t25_set_grid_r20_from_min_r2eff.
- Extended the value.set API to use an index when setting values. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value. The index used is expected to match the spectrometer frequency.
- Added user function relax_disp.set_grid_r20_from_min_r2eff. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
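  The idea behind the user function, as a hedged sketch (the data layout and names here are hypothetical, not the relax internals):

      def r20_from_min_r2eff(r2eff):
          # For each spectrometer frequency, use the smallest measured R2eff
          # value as the R20 starting value, so the default grid search over
          # R20 can be narrowed or skipped.
          return dict((frq, min(values)) for frq, values in r2eff.items())

      # Hypothetical R2eff values (rad.s-1) at two fields:
      r2eff = {'600 MHz': [12.3, 11.8, 10.9, 10.5], '800 MHz': [15.2, 14.1, 13.0]}
      print(r20_from_min_r2eff(r2eff))  # {'600 MHz': 10.5, '800 MHz': 13.0}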
- Modified system test to use user function instead. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Added relax_disp.set_grid_r20_from_min_r2eff functionality to the relax dispersion auto_analyses. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Fix for skipping grid search, when set parameter values are of dict() type. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Extended the system test Relax_disp.test_sod1wt_t25_set_grid_r20_from_min_r2eff. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value. The system test now tests the creation of the correct values, the running of the grid search, and the auto-analysis.
- Added True/False button to activate relax_disp.set_grid_r20_from_min_r2eff in auto-analyses. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Set the verbosity=1, since the output is minimal. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Fix for non-existing dictionary keys causing errors. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Fix for setting index=None, when setting default values for parameters. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Parameter values pre-set to 0.0 are now skipped in the grid search. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Renamed system test to reflect what it is testing. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Better formatting of text in user function. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Python 3 fix. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Better wording of experimental feature in GUI tooltip. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- An additional warning paragraph has been added to the user function. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value.
- Fix for the system test Relax_disp.test_set_grid_r20_from_min_r2eff_cpmg. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value. Fixed values for testing were added.
- Improved the error reporting from the Library.test_library_independence verification test.
- Fix for setting the pre-set values in the grid search. Support request #3151, user function to set the R20 parameters in the default grid search using the minimum R2eff value. The code now checks that values of dictionary type have more than 0 entries.
- Modified system test Relax_disp.test_sod1wt_t25_to_cr72.
- Added a paragraph to the clustering section of the dispersion chapter covering parameter copying. This explains the purpose of the relax_disp.parameter_copy user function.
- Created the system test Relax_disp.test_sod1wt_t25_to_sherekhan_input to catch the error. Bug #21995: creating sherekhan input files, with data for several fields and different time_T2.
- Removed the necessity that len(cdp.relax_time_list) = 1 when issuing the sherekhan input user function. Bug #21995: creating sherekhan input files, with data for several fields and different time_T2.
- Added testing of the output files to the system test Relax_disp.test_sod1wt_t25_to_sherekhan_input. Bug #21995: creating sherekhan input files, with data for several fields and different time_T2. Warning: the sherekhan user function will write to the current directory.
- Added "dir" as input to the user function relax_disp.sherekhan_input in system test. Bug #21995: creating sherekhan input files, with data for several fields and different time_T2.
- Modified the relax_disp.sherekhan_input to accept dir as input. Bug #21995: creating sherekhan input files, with data for several fields and different time_T2.
- Turned off local directory writing in the system test and set the correct time_T2. Bug #21995: creating sherekhan input files, with data for several fields and different time_T2.
- Fix to let the ShereKhan user function write time_T2 correctly. Bug #21995: creating sherekhan input files, with data for several fields and different time_T2.
- Fix for correct looping over time points when creating ShereKhan files. Bug #21995: creating sherekhan input files, with data for several fields and different time_T2.
- Added a check for whether the number of time points is 1. Bug #21995: creating sherekhan input files, with data for several fields and different time_T2.
- Added model MODEL_B14 to system test Relax_disp.test_hansen_cpmg_data_auto_analysis_r2eff. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales.
- Added MODEL_B14 to specific_analyses.relax_disp.variables. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales.
- Further added info for MODEL_B14 to specific_analyses.relax_disp.variables. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales.
- Added B14 description to the relax_disp.select_model user function front end. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales.
- Added the model B14 so that it can be found as a target function. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales.
- Added empty b14.py to relax library lib/dispersion/b14.py. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales.
- Extended docstring in b14.py file. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales.
- Implemented the initial system test for model B14. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. The system test is Relax_disp.test_baldwin_synthetic.
- Added Baldwin model B14 test data. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales.
- Removed MODEL_B14 from being tested in the normal setup. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This is to allow the system test to pass.
- Removed the standard transparent "on" setting in the Grace images script file. This was more of an annoyance than a help.
- Made a generic script to generate R2eff data for a CPMG model based on spin parameters, and to fit the data. A noise method still needs to be implemented. 1) The idea is to generate R2eff data with a numerical model with some extreme parameters. 2) Then add noise to the R2eff data. 3) Then fit with an analytical model and evaluate the performance of the analytical model. This follows the idea of the paper: http://dx.doi.org/10.1016/j.jmr.2014.02.023 "An exact solution for R2,eff in CPMG experiments in the case of two site chemical exchange", Andrew J. Baldwin, Journal of Magnetic Resonance, 2014. The script can be extended to also include global fitting, to test this out. The script is also ideal when implementing a new model, since test data is then ready at hand.
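  A self-contained sketch of the three-step scheme (generate, add noise, fit), using a simple stand-in dispersion curve rather than a real relax model:

      import random
      from scipy.optimize import curve_fit

      def toy_dispersion(nu_cpmg, r20, rex, kex):
          # Stand-in dispersion curve (not a relax model): a baseline R20 plus an
          # exchange contribution that is quenched at high CPMG frequencies.
          return [r20 + rex * kex / (kex + 4.0 * nu) for nu in nu_cpmg]

      # Step 1: back-calculate R2eff from chosen spin parameters.
      nu_cpmg = [50.0 * i for i in range(1, 21)]
      r2eff = toy_dispersion(nu_cpmg, r20=10.0, rex=8.0, kex=1500.0)

      # Step 2: add Gaussian noise.
      random.seed(1)
      noisy = [r2 + random.gauss(0.0, 0.5) for r2 in r2eff]

      # Step 3: fit the analytical (here the same toy) model to the noisy data.
      popt, pcov = curve_fit(toy_dispersion, nu_cpmg, noisy, p0=[15.0, 5.0, 1000.0])
      print(popt)  # should be close to [10.0, 8.0, 1500.0]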
- Docstring update for the test suite runner class - the verification tests are now listed.
- Added the software verification tests to the relax GUI. The verification tests can now be selected via the "Tools->Test suite->Verification tests" menu entry. Running the full test suite via the menus now also includes the verification tests.
- Reordered the B14 model according to release year. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
- Redid ordering of model B14 according to release year. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
- Reordered the model B14 according to release year. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
- Reordered model B14 in target functions. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
- Python API documentation corrections for the model B14. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
- Replaced copyright notice for the Baldwin.py script. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
- Changed the compression back to 9 when creating Grace PNG files. It doesn't change the quality, just the time taken to create the file and the size of the file. PNG is lossless, so compression levels 1 to 9 are all pixel-perfect.
- Added a check for the existence of data pipes to the return_api() specific analysis function.
- Added a README file to the sample_scripts directory to help users understand what these scripts are for. It also explains how these scripts should be used.
- Implemented synthetic CPMG system test. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
- Small changes to synthetic script data generator. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
- Made synthetic CPMG script accept R2eff noise values as input.
- Added array with zero R2eff error to system test Relax_disp.test_cpmg_synthetic.
- Added a system test which proves that small Δω values of 1 make the minimisation go wrong. This is for synthetic data with R2eff errors of +/- 0.05, which is to be expected for real data.
- Added a row to the dispersion software comparison table for TROSY-type data. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/5414/focus=5501.
- Added a row to the dispersion software comparison table for the support of scalar coupling effects. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/5414/focus=5501.
- Added model B14 to the MODEL_LIST_NUMERIC_CPMG list. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax. Model B14 uses the number of ncyc/CPMG blocks in its analytical equation. To pass this information correctly and calculate the ncyc power, the model should be in this list.
- Set the error to 0.1 in the system test for B14. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax. This is purely cosmetic, to make the dispersion graph look better.
- Implemented model B14 in the relax library. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_library. The code is a raw implementation with no optimisation. This is merely to test that the spin parameters which created the R2eff data can be found again after the grid search and minimisation.
- Correctly implemented the target function for model B14. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_target_function.
- Implemented system test "relax -s Relax_disp.test_baldwin_synthetic -d" for model B14. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging. This proves that the model is correctly implemented, and return same data which the Baldwin script created.
- Renamed the relax_disp.cpmg_frq user function to relax_disp.cpmg_setup and added some new options. This follows from the thread http://thread.gmane.org/gmane.science.nmr.relax.devel/5511/focus=5520. The ncyc_even option has been added so the user can specify if the pulse sequence requires an even number of CPMG blocks. This is for use in the interpolated dispersion curves, but could have other uses in the future.
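  A hedged usage sketch for a relax script, assuming the keyword names spectrum_id, cpmg_frq and ncyc_even (check the user function help for the exact arguments):

      # Declare the CPMG frequency for a spectrum and flag that the pulse
      # sequence requires an even number of CPMG blocks.
      relax_disp.cpmg_setup(spectrum_id='500_66.67', cpmg_frq=66.67, ncyc_even=True)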
- Removed model B14 from the MODEL_LIST_NUMERIC_CPMG list. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax. This was not essential anyway.
- Changed the default value of pA, the population of state A, to 0.95. When doing a grid search in the auto-analysis, one can set the increments to None. This will then use the default values specified for the parameters, instead of a grid search. It is better to start pA at 0.95 than at 0.5.
- Extended the system test Relax_disp.test_baldwin_synthetic to also include an N15 synthetic dataset. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Added the synthetic N15 data for system test Relax_disp.test_baldwin_synthetic. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Modified system_tests/scripts/relax_disp/cpmg_synthetic.py and the corresponding system tests. Relax_disp.test_cpmg_synthetic_cr72. Relax_disp.test_cpmg_synthetic_cr72_full_noise_cluster. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Attempt to implement function map_bounds in API for relax_disp. Bug #22012: dx.map not implemented for pipe type relax_disp.
- Expanded the CR72 full dispersion model description in the manual to explain its origin. This was discussed at http://thread.gmane.org/gmane.science.nmr.relax.devel/5410. The equations used the Davis et al., 1994 simplified form, and this is now explained.
- Changed float powers of 2.0 to integer powers of 2, to speed up the calculations. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This change did not make a large difference in speed, but is more correct. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Changed all instances of "r2e" with "r20b", to be consistent with relax nomenclature. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Changed all instances of R2g with r20a. This is to be consistent with the relax nomenclature. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Implemented g3 and g4 as square root functions instead of using atan2. atan2 always returns values between -π and π (https://docs.python.org/2/library/math.html). The next step is to convert g1 to -g1, which will truly follow the CR72 nomenclature. For this, the atan2 function is a blocker. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
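  The underlying identity is presumably the standard decomposition of a complex square root into real and imaginary parts, which avoids the polar form (and hence atan2) altogether. For ζ ≠ 0, with r = sqrt(Ψ² + ζ²):

      \sqrt{\Psi + i\zeta}
          = \sqrt{\tfrac{1}{2}\left(r + \Psi\right)}
          + i\,\operatorname{sign}(\zeta)\,\sqrt{\tfrac{1}{2}\left(r - \Psi\right)},
      \qquad r = \sqrt{\Psi^2 + \zeta^2},

  so quantities like g3 and g4 can be written directly as square roots of r ± Ψ rather than via r and atan2(ζ, Ψ).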
- Had to lower the precision of system test Relax_disp.test_baldwin_synthetic. This was after changing g3, and g4 from atan2 functions, to square root functions. The model is still very precise though. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Altered the sign of g1, to follow CR72 Nomenclature. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Had to lower the precision of system test Relax_disp.test_baldwin_synthetic after sign change of g1. The model is still precise, finding the parameters which generated the data. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Made g2 use the CR72 parameter convention. No change detected, since the change will be erased by going to order2. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Made sign change of δR2, to use parameter convention of CR72. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Implemented the α minus shorthand from CR72. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Prettied up the code, adding spaces around all multiplications "*". Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Pretty up the code, making space between "=". Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Pretty up the code, making space between all "-". Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Pretty up the code, making space between all "+". Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- More code clean up. Make it look pretty. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Pretty up code, by moving comments up on line. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Pretty up code. Remove trailing spaces. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced expression with -alpha_. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced numpy.XX functions, with just the function. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced "power" with ncyc and made use of numpy power. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Pretty up code, removing multiple "(" and ")". Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced Trel with relax_time, to use relax parameter conventions. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced pb and pa with relax parameter pA. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced keg with kBA, following the normal relax parameter usage. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced kge with kAB, which is relax convention. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Removed a place where kAB was subtracted from kAB. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup of code, replacing repetitive calculations of Δω². Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup, by removing repetitive calculations of g3². Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup, by removing repetitive calculations of g4². Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Removed the specific API method aliasing in the pipe_control.opendx module. The API object is now instead aliased as self.api.
- Docstring fixes for some model_statistics() API methods, including the base class method.
- Rewrite of the rotor2 CaM test data optimisation script. This now handles the new rotor frame order model parameterisation. Two functions have been added for converting between the old and new parameters - alpha_angle() to calculate the new α parameter and shift_pivot() for shifting the pivot to the closest point to the CoM on the rotor axis.
- Changed how the rotor axis is calculated in the func_rotor() frame order target function. A new set of notations is now being used to try to solve a nasty α angle parameterisation bug.
- Updated the rotor2 CaM frame order test data optimisation script for the changed notation. A new set of notations is now being used to try to solve a nasty α angle parameterisation bug.
- Fixed the average position Euler angles for the rotor2 CaM frame order test data optimisation script. The angles needed to be reversed.
- Removed a duplicated χ² printout in the rotor2 CaM frame order test data optimisation script.
- Speedup - made B14 use the pre-calculated inverse time, instead of calculating the inverse time inside the function. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup - moved the repetitive calculations of pB, kBA and kAB out of the library function. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup - moved the calculation of δR2 and alpha_m out of library function. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Pretty-up code. Re-ordered logic of R20 parameters, and exchange parameters in function call. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Moved Carver and Richards (1972) ζ and Ψ notation outside library function. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. Not sure, if this speeds the calculation up. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup - made variables for the repetitive calculations of ζ² and Ψ². Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup - made "1" and "2" integers to float, to prevent Python conversion. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup - repetitive calculations of 2.0 * tcp. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup - moved g_factor: g = 1/sqrt(2) outside library function to be calculated once. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup - repetitive calculations of sqrt_zeta2_Psi2 = sqrt(zeta2 + Psi2). Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup - converted expressions of complex(x, y) to (x + y*1j). Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
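  A two-line check of the equivalence (the 1j form avoids a constructor call per evaluation, and also works element-wise on numpy arrays):

      x, y = 1.5, -0.25
      assert complex(x, y) == x + y * 1j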
- Split func_B14 into a full version with a calc function. This is to prepare for splitting B14 into a full model, where R2A0 != R2B0, and a "normal" model, where R2A0 = R2B0. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Documentation fix for CR72 calc function.
- Renamed system test Relax_disp.test_baldwin_synthetic to Relax_disp.test_baldwin_synthetic_full. And changed model from B14 to B14 full. This is to help find where modifications now have to be changed. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Added B14_FULL to the lists of the specific_analyses.relax_disp.variables module. The model name is stored in a special variable which will be used throughout relax. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
- Added B14_FULL to the relax_disp.select_model user function front end. Added the model, its description, the equations for the analytic models, and all references to the relax_disp.select_model user function front end. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax##The_relax_disp.select_model_user_function_front_end.
- Added B14_FULL to the target function. The system test Relax_disp.test_baldwin_synthetic_full is now back and running. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_target_function.
- Implemented system test Relax_disp.test_baldwin_synthetic for the model B14, whereby the simplification R2A0 = R2B0 is assumed. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Added target function for the Baldwin (2014) 2-site exact solution model for all time scales, whereby the simplification R2A0 = R2B0 is assumed. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_target_function.
- Finished system test Relax_disp.test_baldwin_synthetic. This proves that model B14 whereby the simplification R2A0 = R2B0 is assumed is successfully implemented. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Added the synthetic data for B14 model whereby the simplification R2A0 = R2B0 is assumed. This is used in system test Relax_disp.test_baldwin_synthetic. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Added B14 and B14_FULL to the relax GUI. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_GUI
- Added the latex bibliography reference for the model B14. This is the reference for Baldwin (2014) B14 model - 2-site exact solution model for all time scales. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Added model B14 description in the manual. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Updated the references in the b14.py library file to point to the wiki and to the future API and HTML documentation. The links to the API and HTML documentation are to be updated once these have been compiled. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Reinserted into the b14.py library function the calculations: δR2 = R2A0 - R2B0; alpha_m = δR2 + kAB - kBA; ζ = 2 * Δω * alpha_m; and Ψ = alpha_m² + 4 * kBA * kAB - Δω². Also put g_fact = 1/sqrt(2) back inside the library function. It made no sense to put these calculations outside the library, since no loop is skipped. It actually makes much better sense to keep these calculations in the library function, to preserve the possibility of importing this module in other software. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
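  The quantities listed above, written out as a small Python sketch (an illustration of the formulas only, not the b14.py source; the standard relations kAB = pB*kex and kBA = pA*kex are assumed):

      from math import sqrt

      def b14_intermediates(r20a, r20b, pA, kex, dw):
          """Intermediate B14 quantities; dw is the shift difference in rad.s-1."""
          pB = 1.0 - pA
          kAB, kBA = pB * kex, pA * kex
          deltaR2 = r20a - r20b
          alpha_m = deltaR2 + kAB - kBA
          zeta = 2.0 * dw * alpha_m
          Psi = alpha_m**2 + 4.0 * kBA * kAB - dw**2
          g_fact = 1.0 / sqrt(2.0)
          return zeta, Psi, g_fact

      print(b14_intermediates(r20a=10.0, r20b=12.0, pA=0.95, kex=1500.0, dw=2000.0))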
- Removed the pre-calculation of "zeta2 = zeta**2" and "Psi2 = Psi**2", since it did not speed things up. The squaring of ζ and Ψ is only performed once. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Used LaTeX subequations instead, and used the R2eff parameter commands defined in relax.tex: \Rtwoeff, \RtwozeroA, \RtwozeroB, \kAB, \kBA, and \kex. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Fixes for all URLs in the HTML version of the relax manual. This fix will appear later at http://www.nmr-relax.com/manual/index.html once the next version of relax is released. The trick was to translate the \url{} LaTeX commands which are not recognised by latex2html into \htmladdnormallink{#1}{#1} commands using a htmlonly environment in the headers.
- The \bibitem command is no longer ignored when building the HTML version of the relax manual. This will allow the bibliography at http://www.nmr-relax.com/manual/Bibliography.html to be formatted in a reasonable way. Citations will also have proper links to the entries in this file, rather than the current behaviour of linking to themselves and hence not going anywhere.
- Apostrophe fix in the LaTeX bibliography file. This will fix my name at http://www.nmr-relax.com/manual/Bibliography.html so that it is displayed correctly as d'Auvergne.
- Better latex2html support for the relax manual. The hyperlink command \href{}{} and inline bibliographic reference command \bibentry{} are now supported in the HTML version of the relax manual. These are translated into \htmladdnormallink{#2}{#1} and \citet{#1} command respectively, both of which are supported by latex2html. This will significantly improve the documentation at http://www.nmr-relax.com/manual/index.html.
- Made better notation of equation. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Changed the manual to follow the recipe in Appendix 1. This was changed at the wish of the author. Discussed in: http://thread.gmane.org/gmane.science.nmr.relax.devel/5632. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Changed taucpmg to be 1 / (4*nucpmg) and not 1 / nucpmg. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Added model B14 to the list of dispersion models. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Relax manual fix for model TSMFK01. Added that the model is slow exchange.
- Fix for equation alignment for model B14. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Elimination of minus in library function b14.py. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced f0 with F0, to follow paper and relax manual. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced "ex0b" with "v1c" to follow paper and manual. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced "ex0c" with "v1s" to follow manual and paper. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced f2 with F2, to follow manual and paper. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Sqrt fix in manual for model B14. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Fix for ordering in calculation, to make it look prettier. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced "v2pPdN" with v5, to follow paper and manual. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaved "oGt2" with "v4" to follow manual and paper. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Took inv_tcpmg outside parenthesis to follow manual. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Sign fix in the manual. The 1/taucpmg factor had been taken outside the parentheses incorrectly. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced "t2" with "F1b" to follow paper. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Replaced "t1pt2" with "F1a_plus_b" for better reading. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Reorder of lines to follow appendix 1 in paper. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Rewrote lines to follow appendix 1 in paper. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Made expression according to appendix 1 in manual. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Replaced T_{\textrm{rel}} with \taucpmg. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Very small speed-up. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Speedup by removing integer to float conversion part.
- Better latex2html support for the relax manual, specifically the dispersion software comparison table. The \yes and \no commands are now better processed as HTML, and the rotating package 'rotate' environment is replaced by nothing. This will improve the dispersion software comparison table at http://www.nmr-relax.com/manual/Comparison_dispersion_analysis_software.html.
- Fix for catastrophic parameter index error for model B14. The model B14 would get the same parameter index as B14 full, and would hence optimise wrong parameters. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Fix for model B14 making ugly graphs. The power of ncyc has to be an integer. Bug #22018: Model B14 creates ugly graphs ! Sig saw all over the place.
- Fix for model B14 full making ugly graphs. The power of ncyc has to be an integer. Bug #22018: Model B14 creates ugly graphs ! Sig saw all over the place.
- Fixes for the HTML version of the relax manual. The renewal of the \theequation command in the model-free and relaxation dispersion chapters was causing all equation numbers in latex2html to be broken. By placing these in a latexonly environment, the problem is avoided in the HTML version at www.nmr-relax.com/manual/.
- Changed the script for synthetic CPMG data. This is to test the fitting of CR72 and B14 when creating R2eff data with the numerical model MODEL_NS_CPMG_2SITE_EXPANDED. The script is ideal for test cases. One can readily define the experiment settings sfrq_X, time_T2_X and ncycs_X for simulating one or more spectrometer experiments. Spins can readily be set up to have different dynamics, e.g. R2, R2A0, R2B0, kex, pA and Δω. The script can test clustering, can convert to ShereKhan input, and can make a hyper-dimensional dx map to test the χ² hypersurface for the parameter settings. It is also ideal for stress-testing relax, to see if its minimisation algorithm performs well. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Small improvement for generic CPMG data script file. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Added functionality for visualising the spin dynamics point which generated the data. This is for the script which can visualise the synthetic CPMG data. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Fix for the script for visualising the spin dynamics point which generated the data. This is for the script which can visualise the synthetic CPMG data. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Fix for the file name writing of the point file. Bug #22023: relax dx.map produce .net files which makes error.
- Created the system test Relax_disp.test_cpmg_synthetic_dx_map_points to start the testing. The sample CPMG script was also modified to allow for this. Task #7791, the dx.map should accept a list of list with values for points.
- Modified user function dx.map to accept list of lists with values. Task #7791, the dx.map should accept a list of list with values for points.
- Added is_list_val_or_list_of_list_val to lib/arg_check.py. This function is not yet complete. Task #7791, the dx.map should accept a list of list with values for points.
- Added list_of_lists to user_functions/objects.py. Task #7791, the dx.map should accept a list of list with values for points.
- Added list_of_lists to uf_objects. Task #7791, the dx.map should accept a list of list with values for points.
- Multiple point files can now be written. Task #7791, the dx.map should accept a list of list with values for points.
- Added B14 to the dispersion software comparison table in the manual (docs/latex/dispersion_software.tex). Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Added B14 to the dispersion auto-analysis. The B14 models will not create output files until this is done. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_GUI.
- Completely removed the list_of_lists argument. Task #7791, the dx.map should accept a list of list with values for points.
- Modified the description of making x,y,z points in the χ2 space for the user function dx.map. Task #7791, the dx.map should accept a list of list with values for points.
- Fixed math domain errors by preventing the logarithm of negative values and division by zero (a minimal guard of this type is sketched below). This however slows the implementation down: the system test Relax_disp.test_baldwin_synthetic_full went from about 6 seconds to 8-9 seconds. Sr #3154, implementation of Baldwin (2014) B14 model - 2-site exact solution model for all time scales. This follows the tutorial for adding relaxation dispersion models at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
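The guard is conceptually simple. As a minimal, illustrative Python sketch (the function names and clamping values here are made up for the example and are not the actual B14 code):

    from math import log

    def safe_log(value, floor=1e-100):
        """Return log(value), clamping non-positive arguments to a tiny positive floor."""
        if value <= 0.0:
            value = floor
        return log(value)

    def safe_div(numerator, denominator, fallback=1e100):
        """Return numerator/denominator, substituting a large value on division by zero."""
        if denominator == 0.0:
            return fallback
        return numerator / denominator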
- Changed an 'align' environment to 'equation' as there was only one equation.
- Removed invisible equations from the B14 dispersion model section of the relax manual. The trailing "\\" were causing equation numbers to be produced on empty lines after the block of equations.
- The f00 equation in the B14 dispersion model section of the manual is a single equation. The 'subequations' and 'align' environments are therefore superfluous.
- Modified dx.map to accept more than one point. Task #7791, the dx.map should accept a list of list with values for points.
- Added system test for the production of dx map. Relax_disp.test_cpmg_synthetic_dx_map_points. Task #7791, the dx.map should accept a list of list with values for points.
- Added proper punctuation to the B14 dispersion model equations in the manual. Equations should be readable as English sentences and they follow standard punctuation rules. All of the equations in the B14 model section of the dispersion chapter have been updated to follow this.
- Fixes for quotation marks in the B14 dispersion model section of the manual. LaTeX requires `' for quotes rather than straight quotation marks.
- Standardised the CR72 R2eff factor in the B14 dispersion model section of the manual. This is now defined in the preamble of the LaTeX manuscript.
- Converted all complex numbers 'i' in the B14 dispersion model section of the manual to \imath.
- Removed some unnecessary {} brackets from the user manual. This is for the B14 model section of the dispersion chapter.
- The ncyc variable is now defined in the LaTeX preamble. This is for the B14 model section of the dispersion chapter.
- Fixes for some of the maths in the B14 model section of the dispersion chapter.
- Fix for the arccosh operator in the B14 section of the manual. This is for the B14 model section of the dispersion chapter.
- Switched to using the LaTeX math symbol for real numbers \Re. This is for the B14 model section of the dispersion chapter.
- The Ncyc definition in the manual now uses a capital N.
- The \arccosh LaTeX maths operator is now defined in the preamble of the manual. This is used by the B14 model section of the dispersion chapter.
- Improved brackets for the B14 model section of the dispersion chapter. The \left( and \right) commands are used to produce brackets that scale to the size of the maths within them. One set of unneeded brackets was also removed.
- Grammar fixes for the B14 model section of the dispersion chapter.
- Added some text explaining why the B14 equations do not look like those of the paper. This is for the B14 model section of the dispersion chapter.
- Small edits to the text of the B14 dispersion model section of the manual.
- Replaced 'get' and 'got' with alternatives, as this verb is not to be used in formal English. This is for the B14 model section of the dispersion chapter of the manual.
- Clean ups of the Carver and Richards descriptions. This is for the B14 model section of the dispersion chapter of the manual.
- More basic editing of the text of the B14 dispersion model section of the manual.
- The T_relax symbol is now defined in the preamble of the manual. This is to standardise its usage in the dispersion chapter.
- Major fix for the R2eff equations for the B14 dispersion model in the manual. Here τCPMG, the time for one CPMG block, was mixed up with Trelax, the total time of all CPMG blocks.
- Switched some 'v' symbols to '\nu' in the B14 dispersion model section of the manual.
- Standardised the spacing in the equations for the B14 dispersion model in the manual.
- Clean ups for the end of the B14 dispersion model section of the manual. Here a number of 'v' were changed to \nu and the standard \kAB, \pA, and \pB are now used.
- Some more τCPMG versus Trelax fixes for the B14 dispersion model equations in the manual.
- Added some symmetry to the T equation in the B14 dispersion model section of the manual.
- Latex2html fixes for the HTML version of the relax manual. This is for the documentation at http://www.nmr-relax.com/manual/index.html. Latex2html has problems determining if the contents of environments should be added to the sub- or superscript. For example $1^\textrm{st}$ is not recognised and must be changed to $1^{\textrm{st}}$ for latex2html to function correctly. These problems have therefore been fixed throughout the manual, and the number of errors printed out by latex2html is now significantly lower.
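In the LaTeX source the change is simply the addition of grouping braces around the script contents, for example (the subscript case is an illustrative analog, not a quote from the manual):

    % Not handled correctly by latex2html:
    $1^\textrm{st}$  and  $R_\textrm{ex}$
    % Handled correctly by both LaTeX and latex2html:
    $1^{\textrm{st}}$  and  $R_{\textrm{ex}}$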
- Shifted the model-free model equations for the HTML manual to the subequations environment. This is for the relax manual at http://www.nmr-relax.com/manual/index.html. This is to preserve the equation numbering so that the HTML and PDF equation numbers match as closely as possible.
- Fixes for the equation number in the HTML version of the manual. This is for the relax manual at http://www.nmr-relax.com/manual/index.html.
- The minimum, maximum and median χ2 values are now collected when creating the χ2 map. Task #7792, make the dx.map write suggest chi surface values.
- The minimum, maximum and median χ2 values are now parsed and used to define the χ2 hypersurfaces when writing the dx .net program file. Task #7792, make the dx.map write suggest chi surface values.
- Updated the HTML version produced by latex2html to 4.1. This is for the relax manual at http://www.nmr-relax.com/manual/index.html.
- Removed the "remap" keyword in the dx.map function, since this is not in use. Task #7792, make the dx.map write suggest chi surface values.
- Removed the "remap" keyword from the backend function, as it was not used. Task #7792, make the dx.map write suggest chi surface values.
- Added the keyword "chi_surface" to the front-end dx.map function. To set the χ2 surface level for the innermost, inner, middle and outer isosurface. Task #7792, make the dx.map write suggest chi surface values.
- Added the chi_surface=None argument to the backend function. When None, reasonable χ2 values will be determined automatically to define the surface levels for the innermost, inner, middle and outer isosurfaces. Task #7792, make the dx.map write suggest chi surface values.
- All χ2 values are now saved, to better determine reasonable χ2 levels for the innermost, inner, middle and outer isosurfaces. Task #7792, make the dx.map write suggest chi surface values.
- The default χ2 surface levels are now set to the 10th, 20th, 50th and 90th percentiles of all χ2 values (sketched below). Task #7792, make the dx.map write suggest chi surface values.
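As a rough sketch of the idea, using numpy.percentile for brevity (the actual code relies on the bundled percentile implementation mentioned in the entries below, and the function name here is illustrative):

    import numpy as np

    def default_chi_surfaces(chi2_values):
        """Suggest isosurface levels from the distribution of chi-squared values."""
        chi2_values = np.asarray(chi2_values, dtype=float)
        # Innermost, inner, middle and outer isosurface levels.
        return [np.percentile(chi2_values, p) for p in (10, 20, 50, 90)]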
- Increased the precision of many of the Frame_order.test_rigid_data_to_*_model system tests. This is to fix a test which was failing due to the recent re-parameterisation of the rotor frame order model to eliminate one parameter. The precision of the numeric Sobol' sequence integration has been increased by shifting the fixed parameter values even closer to zero. As a consequence, the chi-squared value of five of these tests is now lower.
- Fix for the system test Relax_disp.test_cpmg_synthetic_dx_map_points. The "remap" keyword was removed, as it is no longer in use. Task #7792, make the dx.map write suggest chi surface values.
- Changed the import of percentile from lib.mathematics to lib.numpy_future. Task #7792, make the dx.map write suggest chi surface values.
- Changed the percentage values passed to percentile(), as the numpy_future implementation handles these differently. Task #7792, make the dx.map write suggest chi surface values.
- Added lib/numpy_future.py. This module implements numpy functions from newer numpy versions, as the relax dependencies listed on the relax download page (http://www.nmr-relax.com/download.html#Source_code_release) currently only require numpy >= 1.0.4. Task #7792, make the dx.map write suggest chi surface values.
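Such a module typically follows the standard compatibility-shim pattern. A minimal, hypothetical sketch rather than the actual module contents:

    # Prefer the real numpy implementation when it is new enough,
    # otherwise fall back to a simple pure-Python equivalent.
    try:
        from numpy import percentile
    except ImportError:
        def percentile(a, q):
            """Simplistic stand-in: the q-th percentile by the nearest-rank method."""
            data = sorted(a)
            index = int(round(q / 100.0 * (len(data) - 1)))
            return data[index]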
- Added "numpy_future" to the __init__.py file in lib directory. Task #7792, make the dx.map write suggest chi surface values.
- Moved numpy_future from lib to extern. The extern package is a special package for external software or code that is bundled with relax. Task #7792, make the dx.map write suggest chi surface values.
- Updated the synthetic CPMG data script for analysing complex dispersion data. Related to bug #22021, model B14 shows bad fitting to data, and bug #22024, the minimisation space for CR72 is catastrophic. The χ2 surface over Δω and pA is bounded.
- Added 4 unit tests for the lib.geometry.lines.closest_point_ax() function. This relax library function was previously not being checked in the test suite.
- Modified the rotor2 CaM frame order test data optimisation script to optimise the pivot. Print statements have been added for comparing the optimised pivot to the original. The rotation axis is now also only created once, as it is now used in three places in the script, hence the two functions for converting parameters to the new parameterisation have been updated.
- Modified the rotor2 CaM frame order test data optimisation script to compare the rotor axes. The optimised rotor axis is recreated with the lib.frame_order.rotor_axis.create_rotor_axis_alpha() function, and then the original and optimised axes are compared. The state.save user function has been shifted forwards in the script to avoid a later RelaxError. The log file, average position PDB file, and state file from running the script have been added to the repository or updated.
- Removed the domain rotation code from the pymol.cone_pdb user function backend. This should only display the cone PDB object. The domain rotation is now performed by the far more powerful frame_order.pdb_model user function.
- Created the Status_object.test_install_path system verification test. This is to catch bug #22037, the failure to load graphics in the GUI due to the relax installation path not being set up correctly.
- Started to create a chapter for the N-state model or ensemble analysis in the manual. This simply consists of a few introductory sections and the phthalic acid graphic.
- Spacing improvements in the stereochem_analysis.py N-state model sample script.
- Docstring improvements for the stereochem_analysis.py N-state model sample script. The paragraphs are now all on one line and 'Q-factor' has been changed to 'Q factor'.
- Replaced 'Q-factor' with 'Q factor' throughout the relax codebase. This change to the correct notation covers code, comments, and docstrings.
- Added a new section for the stereochemistry analysis to the N-state model chapter of the manual. This is just an initial introduction and an inclusion of the sample script.
- Editing of the auto_analyses.stereochem_analysis module docstring. The line wrapping to 100 characters has been removed.
- Expanded the stereochemistry analysis section of the N-state model chapter of the manual.
- Advances to the Grace 2D plotting abilities in the lib.software.grace relax library module. The write_xy_header() function now accepts the new 'world', 'tick_major_spacing', and 'tick_minor_count' arguments. These allow the world view to be preset, and allow the ticks on the X and Y-axes to be changed programmatically. The write_xy_data() function has also been modified so that the autoscaling can be turned off, as this Grace command would otherwise overwrite the world view and tick setup.
- Improvements for the 2D Grace plots created by the rdc.corr_plot user function. The autoscaling is now turned off, as the data set representing the diagonal (with points [-100, -100] and [100, 100]) causes the world view to be set to between -100 and 100 or -200 and 200. The world view is instead set to between -50 and 50 Hz, so that all RDCs should be visible. The plot ticks have also been set so that the minor ticks are at every Hz increment.
- The units are now included in the Grace axis labels created by the rdc.corr_plot user function.
- Added the 'title' and 'subtitle' arguments to the rdc.corr_plot user function. This allows the defaults to be overridden with user supplied titles and subtitles.
- The rdc.corr_plot and pcs.corr_plot user functions now use the Grace icon in the GUI.
- Created the new pymol.frame_order user function. This user function pairs with the frame_order.pdb_model user function, taking the three PDB files created and displaying them nicely. Neither user function is complete; however, the rotor representation of certain frame order models is handled correctly.
- Created a script for finding all dead http://www.nmr-relax.com links in files of a directory tree.
- Created the Structure.test_bug_22041_atom_numbering system test to catch bug #22041. The problem is that the structure.write_pdb user function does not create the correct atom serial numbers.
- Modified the frame_order.pdb_model user function so that the three PDB files are optional. This allows only certain components of the frame order theory to be represented in PDB format.
- Improvements for the rotor PDB representation shown by the pymol.frame_order user function. The stick radius width change is now only for the rotor PDB object, and not everything in PyMOL.
- Modified the 2nd rotor model of CaM frame order optimisation script. The frame_order.pdb_model user function is now used to create a PDB representation of the rotor motions for the real, expected parameters and for the optimisation results when the pivot point is fixed. In addition, the pymol.cone_pdb user function has been replaced by the pymol.frame_order user function. All new files have been added to the repository.
- Added a relax script for creating a PDB representation of the original pivot point. This is for the 2nd rotor model of CaM frame order in the test suite. The resultant PDB file has been added to the repository.
- Modified the pivot point PDB representation script to include the shifted pivot. This is for the 2nd rotor model of CaM frame order in the test suite.
- Added the 'centre_type' argument to the structure.superimpose user function. This allows the default 'centroid' superimposition to be replaced by a centre of mass (CoM) superimposition instead. As the CoM and centroid position do not match, the translation vector and Euler rotation angles will be different.
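The difference between the two centre types can be summarised as follows. This is an illustrative sketch only, with coords and masses as plain arrays rather than relax's internal structural objects:

    import numpy as np

    def centre(coords, masses=None, centre_type="centroid"):
        """Return the centroid or the centre of mass (CoM) of a set of coordinates."""
        coords = np.asarray(coords, dtype=float)
        if centre_type == "centroid" or masses is None:
            # Unweighted mean of the coordinates.
            return coords.mean(axis=0)
        # Mass-weighted mean of the coordinates.
        masses = np.asarray(masses, dtype=float)
        return (coords * masses[:, None]).sum(axis=0) / masses.sum()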
- Exposed the backend verbosity flag of the structure.read_* user functions. This allows the user to silence these user functions, which can be very useful when loading many 3D structures in the scripting UI mode. This change is for the structure.read_gaussian, structure.read_pdb, and structure.read_xyz user functions.
- Expanded the structure.delete user function to add the 'verbosity' and 'spin_info' arguments. The verbosity argument, when set to zero, allows all output to be suppressed. The spin_info flag allows the deletion of spin and interatomic data to now be turned off, so that only 3D data is deleted.
- The new structure.delete 'verbosity' argument is now propagated into the structural object. This allows the printouts to now be completely suppressed.
- The structure.read_* user function 'verbosity' argument is now passed into the structural object. This allows another printout to be silenced.
- The structure.read_* user function 'verbosity' argument is now passed into lib.io.open_read_file(). This allows all printouts from these three user functions to be suppressed.
- Converted the Mf.test_opendx_s2_te_rex system test into a GUI test. This is to demonstrate bug #22035, the dx.map user function being broken in the GUI.
- Python 3 fixes for the extern.numpy_future module. These changes are necessary to allow relax to even run.
- Python 3 fixes for all of the relax code base. The lib.compat and multi.processor module changes were fatal and not useful for Python 3, and were hence reverted.
- Python >= 3.2 fix for the Relax_disp.test_sod1wt_t25_to_sherekhan_input system test. The B0 field value of the ShereKhan input files created by the relax_disp.sherekhan_input user function was formatted as "%s". However, in Python >= 3.2 floats are converted with up to 14 decimal places, whereas previous Python versions only used 10 places. The user function backend now forces only 10 decimal places to be written to the input files.
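As an illustration of the general idea (the value below is made up and the exact format string used in relax is not shown here):

    b0 = 81.0505704  # Illustrative field value only.
    line_old = "%s" % b0      # The number of digits can differ between Python versions.
    line_new = "%.10f" % b0   # Pinning the precision keeps the input files stable.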
Bugfixes
- Fix for bug #21814, the PDB reading failure when the PDB records are not padded to 80 spaces. The fix is simple: all PDB records are pre-validated (sketched below). This includes removing all newline characters and padding each PDB record to 80 spaces when needed. This adds an overhead cost, as the internal PDB reader will now be slower. However, corrupted PDB files not padded to 80 spaces, as produced by MODELLER for example, will now be better supported.
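The pre-validation amounts to something along these lines (an illustrative sketch, not the actual internal reader code):

    def validate_pdb_record(record):
        """Strip newline characters and pad a PDB record out to the full 80 columns."""
        record = record.rstrip("\r\n")
        if len(record) < 80:
            record = record.ljust(80)
        return record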
- Bug fix for all of the R1ρ relaxation dispersion models. The atan2() function is now being used rather than atan() for determining the rotating frame tilt angle. This allows the angle to be in the correct quadrant, i.e. to have a sign or direction (sketched below).
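The tilt angle of the effective field in the rotating frame is θ = arctan(ω1/Ω), and atan2() preserves the quadrant when the offset Ω is negative. A minimal sketch with illustrative variable names, not the actual relax library code:

    from math import atan2

    def tilt_angle(omega1, offset):
        """Rotating frame tilt angle theta, with the correct sign and quadrant."""
        # atan2(y, x) handles offset <= 0, unlike atan(omega1 / offset).
        return atan2(omega1, offset)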
- Huge speed up of the interatom.define user function. This is to fix bug #21862, the freezing up of relax when using the dipolar relaxation button in the model-free auto-analysis in the GUI. This involves a number of changes. The algorithm for the backend of the interatom.define user function has been broken into two separate parts. The first part is new and uses the internal structural object atom_loop() twice for each spin ID string. This then calls the new are_bonded_index() structural object method which uses atom indices to find if two atoms are bonded, as the atom indices are returned from the atom_loop(). The are_bonded_index() method is orders of magnitude faster than are_bonded() as selection objects are not used and the bonded data structure can be directly accessed. The are_bonded() method has also been slightly sped up by improving its logic. The second part is to perform the original algorithm of two nested spin loops over each spin ID, using the are_bonded() structural method. This second part only happens if the first part finds nothing. The structural object atom_loop() method has been modified to be able to return the molecule index, as these indices are needed for the new are_bonded_index() method. When running relax with the profile flag turned on, a simple script which loads the 'Ubiquitin2.bz2' saved state and then runs the "interatom.define(spin_id1='@N', spin_id2='@H', direct_bond=True)" user function decreases from a total time of 143 seconds to 3.8 seconds. However there are no speed changes detectable in the relax test suite - on one computer the system, unit and GUI tests only vary by a fraction of a second.
- Fix for the NOE analysis for the peak intensity parameters. This relates to bug #21863, the grace.write user function not being able to write ref/sat plots as described in the sample script noe.py. The 'ref' and 'sat' parameters were replaced by the 'intensities' dictionary data structure a long time ago. They have therefore been eliminated and replaced by the 'intensities' definition.
- Fixes for the definitions of the N-state model analysis parameters. This analysis does not use the CSA value, and the paramagnetic centre is a list, not a float.
- Fixes for the definitions of the 'theta' and 'w_eff' relaxation dispersion parameters. These should not be manually changed by the user and they are not optimised parameters. Therefore they have been shifted from the set 'params' to the set 'all', to avoid listing them in the parameter tables (for example in the value.set user function).
- Fixes for the frame_order.pivot user function - the model parameters were not being updated. The update_model() function is now called to make sure that the pivot point is either added or removed from the list of model parameters.
- Fix for bug #21924, the failure to output 2D Grace plots for the R20, R2A0, R2B0, and R1ρ0 relaxation dispersion parameters. A simple test for missing data fixed the problem.
- Fix for the Relax_disp.test_korzhnev_2005_15n_sq_data system test for certain MS Windows systems. This was reported as sr #3142. The problem was simply the lower precision of this system.
- Fix for the cpmg_analysis.py relaxation dispersion sample script. This was reported as sr #3142. The problem was that one of the paths was in the Linux/Unix format, hence if the path is not changed by the user the script will not work on MS Windows or Mac OS X.
- More path fixes for the sample scripts to allow them to run on MS Windows and Mac OS X. This is for the relaxation dispersion R1rho_analysis.py script and the N-state model conformation_analysis_rdc+pcs.py script. These scripts should be modified by the user for their own data, so they should not encounter this problem when using the scripts normally.
- Python 3 fixes throughout the codebase.
- Python 3 fix for the Library.test_library_independence software verification test.
- Python 3 fix for the Relax_disp.test_kteilum_mhsmith_eschulz_lcchristensen_gsolomentsev_moliveberg_makke_sod1wt_t25_to_cr72 system test. The xrange builtin function does not exist in Python 3.
- Python 3 fix for the Relax_disp.setup_sod1wt_t25 system test. The xrange builtin function does not exist in Python 3.
- Updated the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test for the recent changes. This is for fixing bug #21960. The chi-square values are different due to the fix for bug #21954, the peak intensity error analysis bug, and the CR72 model results are different due to the fix for bug #21953, the change of the kex values used in the grid search.
- Fix for the relax_disp.select_model user function for when the C modules are not compiled. This was checking for the presence of the compiled C modules whenever the R2eff model was specified. However this behaviour is incorrect. It should only check for the C modules if exponential curves are to be fit.
- Bug fix: the variances used to calculate the standard deviation should only be taken from those defined in the subset. Regarding bug #21954, the order of spectrum.error_analysis is important.
- Fix for the system test Relax_disp.test_hansen_cpmgfit_input. Bug #21989, relax_disp.cpmgfit_input does not work for model CR72. The looping was performed over the file lines instead of the defined fixed lines, so the output files were truncated and did not contain the desired data.
- Fix for "offset" and "point" swapped in looping. Bug #21989, relax_disp.cpmgfit_input does not work for model CR72.
- Fix for bug #21990, the --log and --tee options not functioning with the test suite.
- Fix for bug #21970. The Mac OS X dmg file required one of the test_suite/shared_data directories to be included in the 'include' list to properly bundle all relax modules inside the Mac App framework. This was achieved by creating a whitelist structure and adding the directory to it.
- Fix for bug #21984, the numpy.float16 error. The numpy.float16 type is not defined for all numpy versions. Therefore the lib.check_types.float16 type is used instead as this defaults to numpy.float32 when numpy.float16 is missing.
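A minimal sketch of the fallback pattern described above (not the actual lib.check_types code):

    # numpy.float16 only exists in newer numpy versions, so degrade gracefully.
    try:
        from numpy import float16
    except ImportError:
        from numpy import float32 as float16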
- Fix for the relax_disp.parameter_copy function. The median of the values was not computed properly, since 0.0 was already in the initial list of values.
- Fix for the relax_disp.parameter_copy user function description. The parameters are not averaged but instead the median value from all spins is taken.
- Fix for bug #22001, the execution of script changing the current working directory. The changing of the current working directory (CWD) was added to allow for nested scripting. However this is no longer needed as the script import mechanism has changed from the exec() function call to the runpy Python module.
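The runpy-based mechanism executes a script in its own namespace without any need to change directory. A rough sketch of the idea (the script name below is made up):

    import runpy

    # Run a user script as if it were the main module, leaving the CWD untouched.
    runpy.run_path("my_analysis_script.py", run_name="__main__")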
- Fix for bug #22002, the failure of the Library.test_library_independence software verification test. The fix was simply to use the relax installation path in the status singleton object to make sure that the relax 'lib' directory can be found independently of what the current working directory is.
- Fix for path to sample data in sample script: sample_scripts/relax_disp/cpmg_analysis.py.
- Fix for path to sample data in sample script: sample_scripts/relax_disp/R1rho_analysis.py.
- Fix for bug #22004, the conformation_analysis_rdc+pcs.py N-state model sample script not working. The problem was that the return_api() function call needed to be after the creation of the data pipe.
- Fix for the local_min_search.py N-state model sample script. The return_api() function call was performed too early. Instead of placing it after the data pipe creation, the specific analysis type is now directly specified.
- Fix for a typo in the eta scale of the CR72 model. The calculation in relax was correct, but the scale of eta was wrong in the documentation. This was discussed in: http://thread.gmane.org/gmane.science.nmr.relax.devel/5506.
- Bug fix for taking the median only if there are values in the list. Bug #22010: relax_disp.parameter_copy return a list of pA, if copying only for one spin.
- Fix for the dispersion model_statistics() specific API method. The spin ID argument should override the model_info argument. The method now correctly implements this.
- Fixed the rotor axis direction in the lib.frame_order.rotor_axis module. The normalisation code has also been simplified.
- Fix for tab spacing and a forgotten "\n" when writing the dx map configuration files. Bug #22023: relax dx.map produce .net files which makes error.
- Added another sentence telling the user that multiple field relaxation data is essential. This is for the model-free dauvergne_protocol auto-analysis[d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b] section of the relax manual and relates to bug #21799.
- Another sentence about multiple field relaxation data added to the manual. This is for the model-free dauvergne_protocol GUI section of the relax manual and relates to bug #21799.
- Documentation fix for IT99. Changed kex to tex. This still needs to be changed on the website: http://www.nmr-relax.com/analyses/relaxation_dispersion.html#IT99. Bug #22019: the IT99 model is listed with parameter kex instead of tex.
- Documentation fix for IT99. Changed kex to tex in user function. Bug #22019: the IT99 model is listed with parameter kex instead of tex.
- Added a small extra explanation to the auto-analysis. Bug #21799: Insufficient recommendations/warning message for the execution of dauvergne protocol with 1 field is incomplete.
- Big bug fix for the relax installation path determination. This is to fix bug #22037, the failure to load graphics in the GUI due to the relax installation path not being set up correctly. The problem is that the status module was looking for the compat.py file to determine where the base directory is, but this file has been moved into the lib/ package. Now the dep_check.py file is being searched for.
- Fixed the description of the N-state model in the pipe.create user function. This has nothing to do with domain motions - it is the treatment of ensembles of structures.
- Fix for the axis labels in the rdc.corr_plot user function when T data is converted to D.
- Fix for bug #22039, the printing out of Dpar twice by the diffusion_tensor.display user function. The solution is given in the original bug report.
- Fix for bug #22041, the PDB atom serial number error from the structure.write_pdb user function. The problem is that the structure.write_pdb user function preserves the atom numbering from the original structure and uses that for the atom serial number. However the atom serial number must be replaced with sequential values to produce a valid PDB file. This is fatal for any CONECT records.
- Fix for the chain-reaction failures in the test suite. This fixes bug #22055, the processor.run_queue() not cleaning up in uni_processor - chain-reaction failures in the test suite. The fix was insanely simple, just implementing what was mentioned in Gary Thompson's FIXME comment in the run_queue() method of the uni-processor object. The queue execution code has been placed inside a 'try' statement and the queue clean-up code in a 'finally' statement (sketched below). This closes a painfully difficult to find bug that has been in relax since 2006, though it only affected relax developers.
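The pattern is the classic try/finally clean-up. As an illustrative sketch rather than the actual uni-processor code (the argument names are made up):

    def run_queue(command_queue, processor):
        """Execute all queued commands, always cleaning up the queue afterwards."""
        try:
            for command in command_queue:
                command.run(processor, completed=False)
        finally:
            # The clean-up now runs even if a command raises an exception, so a
            # single failing test can no longer poison the queue for later tests.
            del command_queue[:]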
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.1 series
relax 3.1.7
Description
This is a minor feature and bugfix release which includes improvements to the relaxation dispersion chapter of the manual and the addition of new infrastructure for R1ρ data handling in the dispersion analysis. More details are given below.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.1.7
(17 March 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.1.7
Features
- Large amounts of new infrastructure for the R1ρ relaxation dispersion analysis type.
- More hardware information printed out when using the '--info' command line option.
- The user function relax_disp.write_disp_curves now produces text files of R2eff versus the rotating frame tilt angle θ.
- Small improvements for the relaxation dispersion GUI tutorial and citation chapter of the relax manual.
Changes
- Added a text file with the article's reference values from the global fit in Relax_disp.test_r1rho_kjaergaard. Regarding bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths. This is from optimisation of the Kjaergaard et al., 2013 Off-resonance R1ρ relaxation dispersion experiments using the 'DPL' model. This uses the data from Kjaergaard's paper at DOI: http://dx.doi.org/10.1021/bi4001062.
- Replaced "_" with "-" in text file with global fit residues. Regarding bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Sorted the reference values in residue order. Regarding bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Added reference data and guess data for a global fit R1ρ analysis. Regarding bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths. This system test is set up for comparison with the paper values, and will be turned off later to prevent long running times.
- Redid dict() keys for unit test of find_intensity_keys(), to pass on Python 3.2 and 3.3. Work in progress for bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths. This is a response to message: http://thread.gmane.org/gmane.science.nmr.relax.devel/5132.
- Added ":" to dictionary keys to match return from spin_loop in system test Relax_disp.test_r1rho_kjaergaard. Regarding bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Removed model No Rex to be tested in system test Relax_disp.test_r1rho_kjaergaard. Regarding bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Aliased spins in system test Relax_disp.test_r1rho_kjaergaard. Regarding bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Set opt_func_tol = 1e-15 and opt_max_iterations = 100000 to run system test Relax_disp.test_r1rho_kjaergaard faster. Regarding bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Re-ordered code lines in system test Relax_disp.test_r1rho_kjaergaard. Regarding bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Assigned guess values for system test Relax_disp.test_r1rho_kjaergaard. Regarding bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Added a section at the start of the dispersion GUI analysis tutorial about 'computation time'. This is for the dispersion chapter of the manual.
- Removed alias of spins in system test Relax_disp.test_r1rho_kjaergaard. Work in progress for bug #21344, handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Added fitted R1 values from paper to system test Relax_disp.test_r1rho_kjaergaard. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. This will be used to test the output of calculating Ωeff, as stated in http://article.gmane.org/gmane.science.nmr.relax.devel/5148.
- Added reading of R1 values in system test Relax_disp.test_r1rho_kjaergaard. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added full manual steps of analysis for system test Relax_disp.test_r1rho_kjaergaard. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Modified the directory separator from "/" to os.sep in system test Relax_disp.test_r1rho_kjaergaard. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Commented out the commands for writing of text files and state files to speed up system test Relax_disp.test_r1rho_kjaergaard. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added the testing of writing out θ values in system test Relax_disp.test_r1rho_kjaergaard. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added the parameter "theta" to specific_analyses/relax_disp/api.py. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Modified parameter py_type to dict() for the θ value. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added parameter "theta" do description tables. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Made an assertion that spin contains attribute "theta" in system test Relax_disp.test_r1rho_kjaergaard. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added unit test for return_offset_data. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. This is development according to thread http://thread.gmane.org/gmane.science.nmr.relax.devel/5157.
- Commented out the expectation of the attribute "theta" to exist in system test Relax_disp.test_r1rho_kjaergaard. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Imported return_param_key_from_data to be used in unit test return_offset_data. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- The relax information printout, from "relax -i" for example, now includes detailed CPU information. This uses operating system specific commands to obtain this information which is not available from the platform Python module.
- Removed the dependence on subprocess.check_output() as this is only for Python 2.7 and higher. This is for the relax information printout about the CPU info recently introduced.
- The RAM in the relax information printout is now displayed for Mac OS X. The 'sysctl' command is now being used to retrieve the RAM size and total memory, and the swap is calculated as the difference.
- Added the OMP relaxation rates and compressed PDB file to the repository. This is to allow users to have a full data set to perform a test model-free analysis with.
- Added a sample_script to generate θ values for R1ρ data. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. The script is explained at the wiki: http://wiki.nmr-relax.com/Sample_scripts.relax_disp.return_offset_data.
- Created a new citation for relax[d'Auvergne and Gooley, 2008c] which concatenates both the d'Auvergne and Gooley 2008 papers[d'Auvergne and Gooley, 2008a][d'Auvergne and Gooley, 2008b]. This is to show to those who are unaware of back-to-back paper concatenation rules how to cite both papers using one reference, saving a lot of space.
- Added lib.rotating_frame module containing functions related to rotating frame NMR calculations. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added rotating_frame to lib.__init__.py. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added unit test file _lib.test_rotating_frame.py to __init__.py. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added the unit test file _lib.test_rotating_frame(). Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. These unit tests will be used to check the calculation and return of dictionaries of tilt_angles, Delta_omega and omega_eff, i.e. some of the R1ρ data mentioned at: http://www.nmr-relax.com/manual/Dispersion_model_summary.html.
- Added a link to the manual section on calculating NMR parameters in the docstring of lib.rotating_frame.py. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Renamed function "calc_tilt_angle" to "calc_rotating_frame_params" in lib.rotating_frame. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. This is to reflect, that the function should return and store spin values of both tilt_angles, Delta_omega and omega_eff.
- Replaced the old function name with calc_rotating_frame_params() in the unit test, to reflect the renaming. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Improved docstring in lib.rotating_frame. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Rearranged the citations in the citations chapter. The references for relax are now far more prominent.
- Implemented the return of Delta_omega = "average resonance offset in the rotating frame" in specific_analysis.relax_disp.return_offset_data. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Fixed unpacking of return from function calls of return_offset_data. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Made specific_analysis.relax_disp.return_offset_data return "w_eff" - the effective field in rotating frame in rad/s. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Fixed unpacking of return from function calls of return_offset_data, since ωeff is now also returned. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Moved calc_rotating_frame_params() to specific_analysis.relax_disp.disp_data. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. This is in a response to message: http://www.mail-archive.com/relax-devel@gna.org/msg05080.html.
- Started unit test for _specific_analysis._relax_disp.test_disp_data.test_calc_rotating_frame_params. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. This is in response to message: http://www.mail-archive.com/relax-devel@gna.org/msg05080.html.
- Removed lib.test_rotating_frame.py and the associated unit test. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. This is in response to http://www.mail-archive.com/relax-devel@gna.org/msg05080.html.
- Made calc_rotating_frame_params take spin "The spin system specific data container" as input. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Modified calc_rotating_frame_params() to operate on the level of spin container and ID. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Modified unit test test_calc_rotating_frame_params to use spin container and ID in test. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Epydoc docstring fix for the Structure.test_bug_21522_master_record_atom_count system test. This is for the API documentation at http://www.nmr-relax.com/api/3.1/index.html.
- Epydoc docstring fix for the SetValue() method of the File input GUI element. This is for the API documentation at http://www.nmr-relax.com/api/3.1/index.html.
- Removed the test_suite/shared_data directory from the API documentation scanning. This is to avoid trying to import the frame order relax scripts which cannot be imported into Python.
- Added epydoc information about dimensions for w_e in function return_offset_data. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added verbosity flag to calc_rotating_frame_params() to allow switching of print information. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added unit test for use of value.write to write θ values calculated from calc_rotating_frame_params(). Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Extended the unit test for the use of value.write to write an intensities file. This is to test that changes to the API function retain its behaviour. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Replaced API function in specific_analysis.relax_disp.api to calculate and return values for parameter θ when this is requested. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Implemented according to http://www.mail-archive.com/relax-devel@gna.org/msg05082.html.
- Extended API function in specific_analysis.relax_disp.api to calculate and return values for parameter θ when this is requested. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. Implemented according to http://www.mail-archive.com/relax-devel@gna.org/msg05082.html.
- Variable renaming and closing of files in unit test test_value_write_calc_rotating_frame_params(). Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Extended unit test test_value_write_calc_rotating_frame_params() to also test writing of ωeff values. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Fixed typo and removed grace string for parameter description of θ. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added parameter 'w_eff', the effective field in rotating frame calculation to dispersion API. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added handling of calculating ωeff in dispersion API. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added unit to parameter description of θ and ωeff. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Fix for field count and added check for R1ρ type in calc_rotating_frame_params(). Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added grace string to parameter description of θ and ωeff. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Fixed code duplication in relax_disp API, for calculation of θ and ωeff. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Removed unused lines of code in unit test test_return_offset_data(). Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Moved unit test of value writing of calc_rotating_frame_params() into separate system tests. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added system test Relax_disp.test_value_write_calc_rotating_frame_params_auto_analysis(). Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. This is to test the auto_analysis value.write function to write θ and ωeff values for an R1ρ setup.
- Added writing of parameters θ and ωeff for an auto-analysis if model in MODEL_LIST_R1RHO_FULL. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added return of None values for function calc_rotating_frame_params() if spin is not selected. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Extended system test Relax_disp.test_value_write_calc_rotating_frame_params_auto_analysis() to test the writing of θ values. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Hardcoded the writing of R2eff as a function of the tilt angle θ when using the relax_disp.write_disp_curves user function. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. θ values per spin will be written if the spin.model is in the list MODEL_LIST_R1RHO_FULL.
- Fix for the return of a None tuple in calc_rotating_frame_params(). Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Fix for correct use of assertNotEqual or assertEqual in Relax_disp system tests. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Hardcoded contents of writing of parameters θ and ωeff in system test Relax_disp.test_value_write_calc_rotating_frame_params_auto_analysis(). Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. This is to better support testing of key ordering and different architectures, etc.
- Fixed code duplication in specific_analysis.relax_disp.disp_data.write_disp_curves(). Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Modified system test of hardcoded values of θ and ωeff to match precision to 14 digits. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Fix for handling the writing of theta.out and w_eff.out in relax_disp auto_analysis, when model is not of R1ρ type. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. This was discovered using the system tests.
- Fix for calculating the θ angle describing the tilted rotating frame relative to the laboratory frame, when omega1 / Delta_omega is negative (sketched below). Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. This follows discussion in: http://thread.gmane.org/gmane.science.nmr.relax.devel/5205.
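The standard rotating frame relations make the quadrant issue clear. A hedged sketch with illustrative variable names, not the actual relax library code:

    from math import atan2, sqrt

    def rotating_frame_params(delta_omega, omega1):
        """Tilt angle theta and effective field omega_eff in the rotating frame.

        Here delta_omega is the average resonance offset Omega (rad/s) and
        omega1 is the spin-lock field strength (rad/s).
        """
        theta = atan2(omega1, delta_omega)            # Correct quadrant for negative offsets.
        omega_eff = sqrt(delta_omega**2 + omega1**2)  # Effective field strength.
        return theta, omega_eff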
- Modified unit and system test to reflect new calculation of rotating frame tilt angle θ. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff. This was discussed in thread: http://thread.gmane.org/gmane.science.nmr.relax.devel/5205.
- Added interpolation calculation of θ and ωeff, when dispersion points are interpolated. Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
- Added printout of Ω, the average resonance offset, in calc_rotating_frame_params(). Regarding sr #3124, Grace graphs production for R1ρ analysis with R2eff as function of Ωeff.
Bugfixes
- Typo fix for text in the model-free GUI auto-analysis. The maximum iterations for the protocol was misspelled in the GUI. This was spotted by Hessam Nasrollah.
- Fix for one of the frame order system tests when using the Mac OS X application binary. The problem was that the 'shared_data' directory could not be found as it was not included in the test_suite package __all__ list.
- Fix for bug #21754, the failure of the grace.view user function on MS Windows in opening a file in qtgrace when the path includes spaces, as reported by Mengjun Xue (mengjun dott xue att mailbox dott tu-berlin dott de). The fix was to run the program with the file name in double quotes.
- Fix for bug #21763, the problem of the chi2 value not being visible in the parameter list of the grace.write and value.write user functions in the GUI. The problem was that when asking for the parameter name list, the minimisation parameters were not being asked for.
Links
For reference, the announcement for this release can also be found at the following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.1.6
Description
This is a major feature and bugfix release. A comprehensive tutorial has been added to the relaxation dispersion chapter of the manual which shows, step-by-step, the dispersion analysis in the GUI using screenshots. Other changes include improved PDB chain ID support, a new mode for running a relax script and then entering the prompt UI mode, multiple file reading by the spectrum.read_intensities user function, and improvements to the relaxation dispersion analysis. A number of major bugs in the dispersion analysis concerning different relaxation delay times for different experiments and for improved handling of the offset have also been fixed. A number of important GUI bugs have also been fixed. All users are recommended to upgrade to this version of relax.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.1.6
(28 February 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.1.6
Features
- Full support for PDB chain IDs in the internal structural object.
- Improvements for the devel_scripts/python_seek.py for finding all installed Python versions and modules. Individual modules can now be specified on the command line.
- The pedantic command line option -p, --pedantic has been renamed to -e, --escalate.
- The new prompt command line option -p, --prompt causes the relax prompt mode to launch after running a script to allow relax to be inspected interactively.
- Better organisation of the relax command line options into groups, as shown by running 'relax -h'.
- A tutorial for using the relaxation dispersion analysis in the GUI has been added to the manual. This includes step-by-step instructions with many screenshots.
- Improvements to the manual including better and consistent line breaking for the GUI menu item text, user functions, file and directory paths, and Python module paths.
- The spectrum.read_intensities user function can now load multiple files simultaneously, allowing for simplified use in the GUI.
- Addition of a new GUI window element for loading multiple files.
- Improvements to the sequence data input GUI window including the item count being displayed and a 'Delete' button to remove the last element.
- Improvement for the relaxation dispersion auto-analysis - the names of the automatically created data pipes are now unique by appending the name of the data pipe bundle to the end. This allows multiple dispersion auto-analyses to exist simultaneously in the GUI or within one relax state file.
- The relaxation dispersion analysis now handles deselected spins.
- Improved colour coding of relax log messages in the relax manual.
- The relaxation dispersion auto-analysis now creates the chi2.out text file. This is for more easily comparing the chi-squared values between analyses.
Changes
- Converted the chain ID list in the internal structural object to the CHAIN_ID_LIST module variable.
- The internal structure object method _pdb_chain_id_to_mol_index() now uses the CHAIN_ID_LIST string. This allows for the full PDB chain ID range to be supported.
- Small improvement for the devel_scripts/python_seek.py script. The list of detected Python binary files is now sorted prior to determining the installed modules.
- Updated the N_state_model.5_state_xz system test to allow it to complete on i586 Linux systems. The optimisation would continue for a huge amount of time on a test system (Mageia 4 i586 VM) and would make it appear as though the test suite has hung. By limiting the maximum number of iterations in the optimisation to 1000, the test will complete successfully and the parameters optimised to the same precision.
- Loosened the checks for the Relax_disp.test_korzhnev_2005_15n_zq_data system test. This is to allow the test to pass on a 32-bit test system (Mageia 4 i586 VM).
- Decreased the accuracy of the Relax_disp.test_korzhnev_2005_15n_dq_data system test. This is to allow the test to pass on a 32-bit test system (Mageia 4 i586 VM).
- Decreased the precision of the Relax_disp.test_hansen_cpmg_data_auto_analysis system test. This is to allow the test to pass on a 32-bit test system using Python 2.5 and Python 3.1 (Mageia 4 i586 VM).
- Decreased the precision of the Relax_disp.test_hansen_cpmg_data_auto_analysis_r2eff system test. This is to allow the test to pass on a 32-bit test system using Python 2.5 and Python 3.1 (Mageia 4 i586 VM).
- Decreased the precision of the Relax_disp.test_hansen_cpmg_data_to_cr72 system test. This is to allow the test to pass on a 32-bit test system using Python 2.5 and Python 3.1 (Mageia 4 i586 VM).
- Fix for the test_suite/system_tests/scripts/noe/bug_21562_noe_replicate_fail.py system test script. There was some invisible binary junk at the start of the file which was causing the Noe.test_bug_21562_noe_replicate_fail system test to fail, as the script could not load. This was only affecting one 32-bit test system using Python 3.1 and Python 3.2 (Mageia 4 i586 VM).
- Fixes for the unit tests of the package __all__ lists for Python 3. When Python 3 generates byte-compiled *.pyc files, these are stored in __pycache__ directories. These directories are now skipped for the package content unit tests, allowing the test to pass.
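The skip amounts to a simple filter when walking the package directories. An illustrative sketch, not the actual test suite code:

    import os

    def list_package_dirs(path):
        """Return the package subdirectories, ignoring Python 3 byte-code caches."""
        names = []
        for name in sorted(os.listdir(path)):
            if name == "__pycache__":
                continue  # Byte-code cache directory, not a real package.
            if os.path.isdir(os.path.join(path, name)):
                names.append(name)
        return names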
- Loosened the checks of some of the Relax_fit system tests. These are the Relax_fit.test_curve_fitting_height and Relax_fit.test_curve_fitting_volume system tests. The minor change is required to allow the tests to pass on a 32-bit system with Python 3.3.3.
- The python_seek.py development script now allows the modules to be specified on the command line. This speeds up the script and allows individual modules to be checked and the version displayed.
- Added a copyright notice to the python_seek.py script. The descriptive text has also been converted into a docstring. The copyright is simply to show who wrote the script and how old it is.
- The python_seek.py script can now check for the ancient Numeric module.
- The python_seek.py script can now check for the ancient Scientific module.
- The python_seek.py script now lists the Python version again (this was broken in the last few commits).
- The python_seek.py script now accepts the 'all' argument to display all modules supported by the script.
- Output formatting improvements for the python_seek.py development script.
- Changed the module ordering in the python_seek.py development script.
- Epydoc docstring fix for the pipe_control.structure.main.load_spins() function.
- Created the Mf.test_bug_21615_incomplete_setup_failure GUI tests. This is designed to catch bug #21615 as reported by Ivan Leung (ivanhoe dott leung att chem dott ox dot ac dot uk). Included are the data files Ivan attached to the bug report truncated to two residues. The GUI test follows exactly the steps outlined by Ivan.
- Deleted the ancient, unused 'quit' argument of the relax interpreter object. This code was identified in a post by Troels Linnet at http://thread.gmane.org/gmane.science.nmr.relax.devel/5000/focus=5003. This argument never worked correctly and has not been used for many, many years. Many code paths in relax needed to be updated to remove the argument.
- Shifted the pedantic flag to the escalate flag, so that the -p option can instead be used for the --prompt option. Fix for sr #3117 - Functionality to inspect interactively after running script - The equivalence to python -i.
- Added the -p and --prompt options for running a relax script and then inspecting interactively. Fix for sr #3117 - Functionality to inspect interactively after running script - The equivalence to python -i.
- Modified the help text to explain that -p will launch relax in prompt mode after running any optionally supplied scripts. Fix for sr #3117 - Functionality to inspect interactively after running script - The equivalence to python -i. The -p and --prompt options can now also be given without a script, so that a user trying to start in prompt mode with the --prompt flag is not confused.
- Finished implementing the functionality for interacting with variables after executing a script. Fix for sr #3117 - Functionality to inspect interactively after running script - The equivalence to python -i. To access variables after executing a script, they should be saved under cdp.X, where X defines a container. The namespace issue is discussed in: http://thread.gmane.org/gmane.science.nmr.relax.devel/5012.
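As an illustration of the python -i style behaviour described above, a minimal sketch using the standard library code module (this is not relax's actual implementation; the function and namespace names are made up):
```python
import code

def run_script_then_prompt(script_path):
    """Execute a script file and then drop into an interactive prompt sharing its namespace."""
    namespace = {}
    with open(script_path) as handle:
        exec(compile(handle.read(), script_path, 'exec'), namespace)
    # The script's variables remain accessible at the interactive prompt.
    code.interact(local=namespace)
```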
- Organisation of the relax command line options into distinct groups. This follows from the message at http://thread.gmane.org/gmane.science.nmr.relax.devel/5024. The optparse.OptionGroup object is now used to cluster the arguments. This cleans up the output of 'relax -h' and explains the options to the user in a clearer way.
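A minimal optparse.OptionGroup sketch of this grouping idea (the group title and options below are illustrative, not relax's exact command line definitions):
```python
from optparse import OptionParser, OptionGroup

parser = OptionParser(usage="usage: %prog [options] [script]")

# Cluster related options into a titled group so that the -h output is structured.
group = OptionGroup(parser, "UI options")
group.add_option("-p", "--prompt", action="store_true", dest="prompt",
                 help="launch the prompt after running any supplied script")
group.add_option("-g", "--gui", action="store_true", dest="gui",
                 help="launch the graphical user interface")
parser.add_option_group(group)

(options, args) = parser.parse_args()
```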
- Fix for the user function intro flag. Fix for sr #3117 - Functionality to inspect interactively after running script - The equivalence to python -i. The flag should be turned on for the script so that the "relax>" messages are seen, and then turned off again for the prompt so that the user function text and "relax>" prompt are not printed out twice.
- Updated the copyright statement shown in the GUI for 2014.
- Save state added for bug #21665. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Added system test to catch bug: relax -s Relax_disp.test_bug_21665_cpmg_two_fields_two_delaytimes_fail. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Added a system test for using both calc() and a system test for relax_disp auto analysis. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Added some initial screenshots of the dispersion GUI analysis. These will be used to create a tutorial for using the relaxation dispersion analysis in the GUI to be added to the dispersion chapter of the relax manual.
- Started to create the tutorial in the manual for using the dispersion GUI. This is at the end of the relaxation dispersion chapter of the manual and covers the basic setup of spin systems. It includes the recently added screenshots.
- Improvement to the formatting of the GUI menu item text in the manual. The text can now have a line break between the items, just after the arrows. This significantly improves the paragraph layout in the manual.
- Created two new LaTeX commands for improving the layout of the relax manual. These are \ossep and \osus which will be used to format the file and directory separator character and the underscore character respectively. They will be used in the \file{} and \directory{} commands to add the '/\linebreak[0]' and '\_\linebreak[0]' text to allow for better line breaking.
- Converted all LaTeX files of the manual to use the new \ossep and \osus commands. This will result in better formatting of the manual by making the line breaking after the '/' and '_' characters consistent and universal.
- Created two new LaTeX commands for improving the layout of user functions in the relax manual. These are \ufsep and \ufus which will be used to format the user function separator character and the underscore character respectively. They are used in the \uf{} commands to add the '.\linebreak[0]' and '\_\linebreak[0]' text to allow for improved and consistent line breaking.
- Added the unit test infrastructure for testing the specific_analyses.relax_disp package. This currently includes the package __all__ list unit test.
- Updated the specific_analyses.relax_disp package __all__ list. This was identified in the previously committed unit test.
- Added the infrastructure for the unit tests of the specific_analyses.relax_disp.disp_data module. This is in response to the post http://thread.gmane.org/gmane.science.nmr.relax.scm/19963/focus=5046 by Troels, and is described in my response at http://thread.gmane.org/gmane.science.nmr.relax.scm/19963/focus=5048.
- Created two new LaTeX commands for improving the layout of Python code in the relax manual. These are \pysep and \pyus which will be used to format the Python module separator character and the Python underscore character respectively. They are used in the \module{}, \pycode{}, etc. commands to add the '.\linebreak[0]' and '\_\linebreak[0]' text to allow for improved and consistent line breaking.
- Complete reformatting of the base LaTeX files. The paragraph structure has been changed so that each sentence now starts on a new line. This is for better tracking of changes (via 'svn diff' for example), for better searchability of certain text elements using command line tools such as 'grep', and for easier use of the 'sed' tool. The change tracking is most important as it allows for finer granularity - a small change will now only be shown as a change in one sentence rather than the whole paragraph, allowing the change to be identified more easily. It also allows for easier commit maintenance.
- Reformatting of all of the LaTeX code for the figures in the relax manual. The aim is to have as many parts as possible on separate lines to allow for better control of changes in the subversion repository and for improved usage of command line tools.
- Reformatting of all of the LaTeX code for the itemize and description lists in the relax manual. This is to regularise the LaTeX code throughout the *.tex files of the manual. All items are now indented for easier viewing. And leading empty lines before the lists have all been removed.
- The docstring fetching script for the manual now creates lists in the new, cleaner format.
- Implemented unit test for catching the correct return of loop_exp_frq_offset_point_time. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Grammar corrections - changed the 'eg.' abbreviation to 'e.g.' in a couple of places.
- Modified the unit test name for testing the correct return of the relaxation time periods. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Added more to the dispersion GUI analysis tutorial. This includes a screenshot showing the use of the 'Spin isotope' button in the GUI. Descriptions for all five 'metadata' buttons have been added as well.
- Expanded the relaxation dispersion GUI tutorial in the manual. This now includes the first steps for loading the peak intensity data.
- Added the relaxation time period to be used when returning CPMG frequencies. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Added test for skipping non-matching time points. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Added the time point to be sent into the return function of cpmg frequencies. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Improved the unit test for catching both the time and dispersion point when looping over experiment and time points. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Modified the spectrum.read_intensities user function frontend to load multiple files. This follows from the thread http://thread.gmane.org/gmane.science.nmr.relax.devel/5057/focus=5062.
- Implemented the GUI element for loading multiple files. This follows from the thread http://thread.gmane.org/gmane.science.nmr.relax.devel/5057/focus=5062. This is via the new user function argument type "file sel multi", now used by the spectrum.read_intensities user function. The file selection element consists of two parts. The GUI element embedded in the user function wizard page is similar to the "file sel" element, except that the preview button is not present. The file selection button behaviour is also different in that it launches the new multiple file selection window. The multiple file selection window is based on the 'sequence' data window, as used in the spectrum ID argument for the spectrum.read_intensities user function. However the ListCtrl element has been replaced by a custom scrolled panel. The 'Add' button adds a new file selection GUI element consisting of a TextCtrl for displaying and manual editing of the file name, the file selection button for launching the relax file selection dialog, and the preview button lost in the parent GUI element. The scrolled panel allows more elements in the panel than can fit in the window. The 'Delete all' and 'OK' buttons from the 'sequence' data window are also present and function as expected.
- Modification of the new multiple file selection GUI element. The multiple file selection window now shows the index (plus one) of each file selection element at the front of that element. This way the user can easily see how many file elements there are and can match file names to numbers. This will help in making sure that the file names and spectrum ID elements correspond to each other.
- Added a 'Delete' button to the new multiple file selection GUI window. This simply deletes the last item in the list. This will be useful if the user clicks on the 'Add' button too many times - instead of clicking 'Delete all' and having to re-select all files, now the last element can be removed.
- Improved the behaviour of the multiple file selection GUI window. The RelaxFileDialog GUI element is now initialised when the file selection button is clicked rather than in the __init__() method. The result of this change is that the current working directory is dynamically changed in the RelaxFileDialog, hence if the directory is changed in one file selection element, it will look like it is changed in all.
- Renamed the test_loop_exp_time() dispersion unit test to test_loop_exp_frq_offset_point_time(). This is for the specific_analyses.relax_disp.disp_data.Test_disp_data.test_loop_exp_frq_offset_point_time() unit test. The name better reflects the function being tested.
- Created the test_loop_exp_frq() dispersion unit test. This checks the operation of the loop_exp_frq() function from the module specific_analyses.relax_disp.disp_data. It uses the data attached to the bug report at https://gna.org/bugs/?21665.
- Fixes for the unit tests of the spectrum.read_intensities user function. A number of checks were not correctly set up, and the recent changes caused others to now fail.
- Modified the GUI window for inputting sequence data to include item numbers. A non-editable initial column with the number of each item has been added. This is to help the user when, for example, the items of one sequence element should match another (for example in the spectrum.read_intensities user function where multiple file names should match multiple spectrum IDs).
- Added a 'Delete' button to the sequence input GUI window. This is to match the multiple file selection GUI window. The button allows the user to delete the last item from the list. So if 'Add' has been clicked too many times, the user does not have to start again from scratch by clicking on 'Delete all'.
- More modifications to the sequence input GUI window to match the multiple file selection element. The window now starts with a single element rather than nothing.
- Continued expanding the tutorial for performing a relaxation dispersion analysis in the GUI. This is for the dispersion chapter of the manual.
- Created the Peak_lists.test_read_peak_list_sparky_double system test. This is used to test the loading of multiple files simultaneously by the spectrum.read_intensities user function.
- Expanded the Peak_lists.test_read_peak_list_sparky_double system test to check all intensities. This now checks all of the peak heights read by the spectrum.read_intensities user function.
- Expanded the capabilities of the spectrum.read_intensities user function. Now multiple files can be loaded simultaneously.
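As a hedged usage illustration only, a call loading two peak lists in one go might look like the following relax prompt/script snippet (the file names and spectrum IDs are made up, and the argument names are assumed from the user function frontend):
```python
# The file and spectrum ID lists are expected to have matching lengths.
spectrum.read_intensities(file=['ref_1.list', 'sat_1.list'],
                          spectrum_id=['ref_1', 'sat_1'],
                          int_method='height')
```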
- Fix for the multiple file selection GUI element. The GUI element now returns single values rather than lists from the GetValue() function when only a single file is selected. This allows the spectrum.read_intensities user function to operate normally again in the GUI.
- Minor fix for the Relax_disp.test_bug_21076_multi_col_peak_list system test. The spectrum ID argument ['auto'] has been changed to the single value of 'auto'. This argument should not be a list.
- Expansion of the tutorial for running the relaxation dispersion analysis in the GUI. The tutorial is now close to complete. The peak intensity loading wizard section is complete, as are the sections on the model selection window, the optimisation settings, and the relax execution.
- More additions for the tutorial on using the dispersion analysis in the GUI. This is for the relaxation dispersion chapter of the manual. The tutorial is almost complete with descriptions and screenshots for completing the non-clustered analysis and conducting the clustered analysis all the way to execution.
- Created the State.test_bug_21716_no_cdp_state_save system test. This is for catching bug #21716, the failure to save the relax state just after deleting the current data pipe, even if other data pipes exist.
- Created the General.test_bug_21720_pipe_switching_with_tab_closure GUI test. This is to catch bug #21720, the failure to set the current data pipe in the GUI when the current and non-last analysis tab is closed. The test replicates the steps as outlined in the bug report.
- Added unit test for looping over: exp frq offset point. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times. This follows recommendation in thread: http://thread.gmane.org/gmane.science.nmr.relax.devel/5070.
- Changes for the relaxation dispersion auto-analysis. The final data pipe name now includes the data pipe bundle name. This is so the pipe name is unique, allowing multiple analyses to be executed in one relax state.
- Fixes for all of the Relax_disp system tests for the changes to the dispersion auto-analysis. The automatically created pipe names now include the pipe bundle name to make them unique, so the system tests have been updated to match this behaviour.
- Increased the grid search size in the r1rho_on_res_m61.py dispersion system test script. This is to allow the Relax_disp.test_m61_exp_data_to_m61 system test to pass more often. The increase does not cause a large increase in computation time as less time is spent in the optimisation and Monte Carlo simulation steps.
- Renamed unit test, to follow previous namings of unit tests. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Fix for the relaxation dispersion auto-analysis to improve its behaviour in the test suite. The problem is that the auto-analysis acquires the execution lock (status.exec_lock) but if the analysis cannot complete due to a bug, the lock is never released. This causes nasty problems for many subsequent tests, resulting in a cascade of test failures. This is especially problematic in the GUI tests where the execution lock controls many aspects of the interface. The solution was simply to run the auto-analysis run() method within a try-finally statement. The release of the lock occurs in the 'finally' clause, guaranteeing its release.
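The pattern described above, sketched with a plain threading lock (relax's status.exec_lock object has its own interface, so the names here are only illustrative):
```python
import threading

exec_lock = threading.Lock()

def run_auto_analysis(run):
    """Run an analysis while guaranteeing that the execution lock is released."""
    exec_lock.acquire()
    try:
        run()                 # may raise if the analysis hits a bug
    finally:
        exec_lock.release()   # always executed, so later tests are not blocked
```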
- Improvement for GUI test base tearDown() clean up method. A wx.Yield() call has been added to allow all GUI operations after a relax reset to complete prior to the next test starting. This should avoid certain racing conditions which can cause a cascade of tests to fail.
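A minimal sketch of the tearDown() idea, assuming a running wx.App (the base class and reset() method are placeholders, not the real GUI test base code):
```python
import wx

class GuiTestBase(object):
    def reset(self):
        pass          # placeholder for the relax reset of the data store and GUI

    def tearDown(self):
        self.reset()
        wx.Yield()    # let all pending GUI events finish before the next test starts
```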
- Added a unit test for looping over: exp, frq, offset, point, time. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times. This unit test will fail, since the last loop over the time points makes the weak assumption of simply looping over all time points, instead of checking for the existence of each time point. This unit test follows the recommendation in the thread: http://thread.gmane.org/gmane.science.nmr.relax.devel/5070.
- Expanded the loop_time function to optionally take the spectrometer frequency as input for restricting the looping. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Replaced print commands so that they are compatible with Python 3.x. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- More fixes for the relaxation dispersion auto-analysis for the pipe names now including the bundle name.
- Added some missing RelaxError imports to the dispersion auto-analysis.
- Created the Relax_disp.test_bug_21715_clustered_indexerror system test. This is to catch bug #21715, the failure of the relaxation dispersion auto-analysis when running a clustered analysis due to an IndexError during minimisation.
- Modified unit test to pass. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times. Implemented as suggested in: https://mail.gna.org/public/relax-devel/2014-02/msg00142.html.
- Expanded the loop_time function to optionally take the offset and dispersion point as input for restricting the looping. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times. This is implemented as suggested in: https://mail.gna.org/public/relax-devel/2014-02/msg00143.html.
- Added a system test for loop_time. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times. This system test can later be extended to test the restriction of the looping.
- Replaced the looping over time points via cdp.relax_time_list with loop_time(frq=frq). Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times. loop_time has been modified to accept the spectrometer frequency as input to restrict the looping.
- Complete support for deselected spins has been added to the relaxation dispersion analysis. This fixes bug #21715, the failure of the relaxation dispersion auto-analysis when running a clustered analysis due to an IndexError during minimisation.
- Added exp_type, frq, offset and point to the loop_time() function. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times. Implemented as suggested in: http://www.mail-archive.com/relax-devel@gna.org/msg04993.html. In all these cases, that information is available, so it should be used. If one is analysing a combination of data types simultaneously (SQ CPMG, DQ CPMG, R1ρ), one will not have the same relaxation time for each. For different spin-lock or 180 degree pulse offsets, and even different dispersion points, the time may also be different.
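To make the restricted looping idea concrete, here is a self-contained sketch of a generator with optional filtering keywords (an illustration with invented data only, not the relax loop_time() implementation):
```python
# Each record is (exp_type, frq, offset, point, time); the values are invented.
RECORDS = [
    ('SQ CPMG', 500e6, 0.0, 66.67, 0.04),
    ('SQ CPMG', 500e6, 0.0, 66.67, 0.06),
    ('SQ CPMG', 800e6, 0.0, 133.33, 0.04),
]

def loop_time(exp_type=None, frq=None, offset=None, point=None):
    """Yield the unique time points, restricted by any keyword argument supplied."""
    seen = set()
    for e, f, o, p, t in RECORDS:
        if exp_type is not None and e != exp_type:
            continue
        if frq is not None and f != frq:
            continue
        if offset is not None and o != offset:
            continue
        if point is not None and p != point:
            continue
        if t not in seen:
            seen.add(t)
            yield t

print(list(loop_time(frq=800e6)))  # [0.04]
```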
- Made count_relax_times() take the optional arguments exp_type, frq, offset and point. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times. This prepares for restricting the looping over time points in the loop_time() function. This is implemented as suggested in: http://www.mail-archive.com/relax-devel@gna.org/msg04993.html.
- Modified the code to pass exp_type, frq, offset or point to loop_time() where such information is available. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Added unit test for count_relax_times. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times. This follows the suggestion in: http://www.mail-archive.com/relax-devel@gna.org/msg04993.html.
- Added test for return of get_curve_type(), to match 'fixed time'. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Added check for return of has_exponential_exp_type to be False. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Added test for the return of get_times(). Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Re-ordered unit tests for test of get_curve_type() and has_exponential_exp_type(). Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Added the extraction of exp_type and frq from cdp, to be sent into count_relax_times. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Modified check_exp_type_fixed_time to loop over the IDs and use count_relax_times for each ID. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- The fetch_docstrings.py script now creates a new LaTeX listing language for relax log messages. This is in the script_definitions() method which creates the script_definition.tex file. The idea is to avoid colouring relax/Python keywords such as 'as', 'from', etc. in the log messages.
- Moved the unit test get_times() to its own test. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Moved the unit test of has_exponential_exp_type() to its own test. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Moved the unit test get_curve_type() to its own test. Regarding bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Added a save state for bug #21344. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Completed the tutorial for using the dispersion analysis in the GUI. This is for the relaxation dispersion chapter of the manual.
- Some edits for the tutorial on using the dispersion GUI analysis. The results of the relax_disp.insignificance user function are now shown to demonstrate what this does.
- Fixes for some incorrectly reported results in the dispersion GUI tutorial in the manual. The non-clustered results had been incorrectly copied from the log messages.
- More incorrect value fixes for the dispersion GUI tutorial in the manual. The pA and kex values were also somehow incorrect.
- Added a system test for bug #21344. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths. This test will fail with: No intensity data could be found corresponding to the spectrometer frequency of 799.7773991 MHz, dispersion point of 431.0 and relaxation time of 0.14 s. Data for a dispersion point of 431.0 and a time of 0.14 s does not exist, so some of the looping when collecting data for the calculation must be wrong. This behaviour, and probably its solution, is related to bug #21665, "Running a CPMG analysis with two fields at two delay times" (https://gna.org/bugs/?21665).
- Renamed previous disp_data unit tests, to reflect they were from a CPMG setup. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Added a unit test for count_relax_times() for an R1ρ setup. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Fixes for the Grace kex plot for the tutorial for dispersion GUI analysis. The values for the Grace plot were not correct.
- Added unit test for loop_time() for R1ρ setup. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Renamed system test. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths. The previous test name was rubbish.
- Editing of the dispersion GUI analysis tutorial in the manual. The whole section has been proofed and improved.
- A concluding statement has been added to the dispersion GUI analysis tutorial in the manual.
- Added spacing in front of all lstlisting environments in the relaxation dispersion chapter of the manual.
- Spelling fix for the spectrometer frequency checks of the spectrometer.frequency user function.
- Spell checking of the entire relaxation dispersion chapter of the manual.
- Correction for some text in the dispersion chapter of the manual. The text 'are differentially defined' has been changed to 'are dually defined', as the word differentially was incorrect.
- Fixes for the spacing after e.g. and i.e. in the relax manual. The character '\' needs to be used after the final dot to indicate that this is not a sentence stop, hence the double spacing normally used between sentences should not be used.
- Extended the system test to count the number of settings iterations and match it against len(cdp.exp_type.keys()). Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths. There is something wrong, since len(cdp.exp_type.keys()) does not match.
- Fix for the use of wrong index slicing. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Fixes for the incorrect reading of the settings file and extraction of parameters. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Expanded unit test for test_loop_time() in R1ρ. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Fix for the loop_time function to include point filtering for R1ρ experiments. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Fix for wrong values of "1341.11" in unit test. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths. Replaced 1341.10, and 1341.10 with 1341.11.
- Added truncated SeriesTab intensity file for only 5 spins. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Modified system test for setting up R1ρ analysis to use truncated spin list with 5 spins. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Added 5 spins truncated state file for bug #21344. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Modified unit and system test to use 5 spins truncated state file. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Added unit test for find_intensity_keys() in R1ρ analysis. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Manually reverted the temporary change of r22349 and r22348. The command used was: svn merge -r22349:r22347. Reference: http://www.mail-archive.com/relax-devel@gna.org/msg05012.html.
- Modified unit test for find_intensity_keys() to simulate method in sim_pack_data(). Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Re-created the testing dictionary to make it easier to convert to collections.OrderedDict(), if this can be supported by all relax Python versions. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Replaced the dictionary keys in the unit test, to more easily access the original data. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Added experiment ID to dictionary, where dict() keys are offset_point_time. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Fixed the code to send the offset into find_intensity_keys(), which allows the system test to pass. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths. This is the first fix allowing the system test to pass: relax -s Relax_disp.test_bug_21344_sparse_time_spinlock_acquired_r1rho_fail_relax_disp. A better solution, which will be implemented, is described in: http://thread.gmane.org/gmane.science.nmr.relax.devel/5107.
- Added text about '~' on MS Windows to the dispersion GUI tutorial in the manual. The home directory ~ on MS Windows will not work, so this is now explained.
- Passed the offset to find_intensity_keys() where such information is available. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Added unit test for return_intensity() for a R1ρ setup. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Fix for the wrong use of the variable name 'key' and the list returned from find_intensity_keys(). Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Added offset to be sent to return_intensity() function. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Extended the return_intensity() unit test to also test the ref=True flag, which returns the reference intensity instead. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Added offset to be sent to loop_spectrum_ids() function. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Fix for the wrong variable spectrometer_frq being used instead of frq. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Removed the functional return of the reference intensity for R1ρ, since this does not exist. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
- Removed the return_intensity() function, as it is no longer in use. Regarding bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths. Reference: http://www.mail-archive.com/relax-devel@gna.org/msg05020.html.
Bugfixes
- Minor bugfix for the internal structural object add_model() method. The internal structural object was being called with self as an argument, which would default to the chain_id keyword argument. The result would be relax state files with multiple copies of the internal structural object embedded in the structural XML section.
- Fix for bug #21605, the failure of the Frame_order.test_generate_rotor2_distribution system test. The bug is due to the fact that numpy.float16 is not defined on all systems. Older numpy versions do not have this. Therefore the float16 value is now imported from lib.check_types where it is aliased to float32 when not defined.
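The aliasing pattern described above, as a standalone sketch (relax performs the equivalent inside lib.check_types; this is only an illustration):
```python
try:
    from numpy import float16
except ImportError:
    # Older numpy versions lack float16, so fall back to float32 under the same name.
    from numpy import float32 as float16
```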
- Fix for bug #21615, the missing data dialog failure when executing the GUI model-free analysis, as reported by Ivan Leung (ivanhoe dott leung att chem dott ox dot ac dot uk). The problem is that the spin container's "isotope" variable was being accessed directly after a test showing that this variable does not exist. This is now fixed so that the missing data dialog is presented, explaining that the spin isotope information is not set.
- Fix for bug #21704, the failure of the GUI analyses when the home directory '~' character is used. The problem is located in many parts of the program, and other problematic areas may still be present. In all cases where the directory or file is accessed, the os.path.expanduser() function must be called.
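The standard library call in question, shown on an invented path for illustration:
```python
import os

# '~' is replaced by the user's home directory, e.g. '/home/user/analysis' on
# GNU/Linux or 'C:\\Users\\user\\analysis' on MS Windows.
path = os.path.expanduser('~/analysis')
```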
- Fix for bug #21716, the failure to save the relax state just after deleting the current data pipe, even if other data pipes exist. The problem was that the specific analysis functions data_names() and return_data_desc() were being retrieved using the current data pipe rather than the actual data pipe that the data structures belong to. So if the current data pipe is None, then these fail. Now the data pipe type is passed through all of the to_xml() methods so that the correct data_names() and return_data_desc() methods are retrieved.
- Fix for bug #21720, the faulty pipe switching behaviour when a non-last analysis tab is deleted in the GUI. Now the correct data pipe is always switched to when closing an analysis tab.
- Fix for bug #21695, the failure of the relaxation dispersion system tests on a 64-bit MS Windows system due to lower precision of the platform. Two of the errors have already been found on a 64-bit Windows Vista virtual machine and fixed. The last test should now also pass.
- Fix for bug #21665 - Running a CPMG analysis with two fields at two delay times.
- Fix for bug #21344 - Handling of in sparse acquired R1ρ dataset with missing combinations of time and spin-lock field strengths.
Links
For reference, the announcement for this release can also be found at following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.1.5
Description
This is a major bugfix release which fixes the complete failure of the NOE analysis for most users, a bug introduced in the last relax release. All users of relax 3.1.4 should upgrade to this version.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.1.5
(4 February 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.1.5
Features
N/A
Changes
- Updated the interatom.unit_vectors user function description to add the text '3D structure'. This is in response to the http://thread.gmane.org/gmane.science.nmr.relax.user/1547 relax-users mailing list message and the change is to clarify the usage of the user function.
- Created the Noe.test_bug_21591_noe_calculation_fail system test. This is to catch bug #21591 submitted by Martin Ballaschk. This is the complete failure of the NOE analysis. The peak lists attached to the bug report have been included in the test suite to create the system test.
- Improvements for the steady-state NOE analysis overfit_deselect() method. The spin deselection which occurs at the start of the calc user function call, used to calculate the NOE, is now clearer. Each deselection condition is now explained in detail and the text is now far more informative. In addition, the special condition of all spins being deselected is now caught. If this happens, a RelaxError is raised to prevent the user from going forwards. This should remove confusion as to why the output file is empty.
Bugfixes
- Fix for bug #21591, the complete failure of the NOE analysis. This bug was reported by Martin Ballaschk. The issue was introduced in the fix for bug #21562. The problem is that the overfit_deselect() method was deselecting all spins with two data points or less rather than one or less.
Links
For reference, the announcement for this release can also be found at following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChives (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.1.4
Description
This is a minor feature and bugfix release which has improvements for the handling of structural data involving multiple molecules or models and improved support in the NOE analysis for replicated spectra. Included are fixes for the failure of the structure.create_diff_tensor_pdb user function for non-spherical diffusion tensors when no Monte Carlo simulations are present and for the failure of the rdc.write user function for back calculated RDC data. Full details are given below.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.1.4
(31 January 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.1.4
Features
- The structure.write_pdb user function now supports multiple molecules being present.
- Large speed optimisations for the internal structural object when multiple models are present.
- Improved support for replicated spectra in the NOE analysis.
Changes
- Created the Frame_order.test_generate_rotor2_distribution system test. This is to test the Frame Order distribution generating base script, used for creating the synthetic Frame Order test data, and to demonstrate a failure in handling back-calculated RDC data. To implement this, the test_suite/shared_data/frame_order/cam/ path has been converted into a Python package (with the addition of the __init__.py files). The base data generation script test_suite/shared_data/frame_order/cam/generate_base.py has also been modified to use the absolute path for the data files and its run() method now accepts the save_path argument to allow the files to be saved into a temporary directory.
- Fixes for the Frame_order.test_generate_rotor2_distribution system test. The test_suite/shared_data/frame_order/cam/generate_base.py script now saves the program state files into the self.save_path directory, preventing the system test from attempting to save files into the relax test suite directories.
- Another fix for the Frame_order.test_generate_rotor2_distribution system test. The test_suite/shared_data/frame_order/cam/generate_base.py script no longer prints its progress indicator to sys.__stderr__ but to sys.stderr instead. This avoids the progress text from appearing during the relax test suite execution.
- Created the Structure.test_bug_21522_master_record_atom_count system test. This is designed to catch bug #21522, the structure.write_pdb user function creating an incorrect MASTER record. This hence also catches bug #21520, the failure of the structure.write_pdb user function when creating the MASTER record due to too many ATOM and HETATM records being present. The test simply creates two structural models, adds one atom, and writes out a PDB file, checking its contents.
- The structure.write_pdb user function can now handle a file instance for the file argument. This is for the Structure.test_bug_21522_master_record_atom_count system test, to allow a dummy file object to be used. This can also be useful for power users.
- Created the lib.geometry.vectors.unit_vector_from_2point() function. This is used to quickly calculate the unit vector between two points.
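A minimal numpy sketch of what such a function typically computes (this is not the exact code of lib.geometry.vectors):
```python
from numpy import array
from numpy.linalg import norm

def unit_vector_from_2point(point1, point2):
    """Return the unit vector pointing from point1 to point2."""
    vector = array(point2, float) - array(point1, float)
    return vector / norm(vector)

# The unit vector along [0, 3, 4] is [0.0, 0.6, 0.8].
print(unit_vector_from_2point([0, 0, 0], [0, 3, 4]))
```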
- The lib.structure.represent.rotor.rotor_pdb() function can now handle multiple rotors. Previously this function would fail if called twice with the same structural object.
- Added the has_molecule() method to the relax internal structural object. This is used to quickly check if a molecule name already exists in the structural object.
- More improvements for handling multiple rotors in the lib.structure.represent.rotor.rotor_pdb() function. The atom numbering is now better handled.
- Better support for the writing out of multiple molecules by the structure.write_pdb user function. This is for the internal structural object write_pdb() method. Now each molecule is assigned a different chain ID in the PDB file, and the chain IDs loaded into the structural object are ignored. The chain IDs should however be preserved when using structure.read_pdb followed by structure.write_pdb, without storing the ID. A number of the Structure system tests had to be updated, as now the relax generated PDB files will always write out a chain ID.
- Large speed up for the internal structural object for when many models are present. The new ModelList.current_models object keeps track of all the models already present in the structural object. This simplifies the checks of the pack_structs() internal structural object method by removing expensive looping. This allows the loading of PDB files to continue to be fast even with many tens or hundreds of thousands of models already loaded.
- More speed ups for the internal structural object when huge numbers of models are present. Another loop over the structural_data object has been eliminated from the PDB reading load_pdb() method.
- Another optimisation for the internal structural object for large numbers of models. The ModelList.add_item() method no longer loops over all models to check if a model is already present, instead using the new current_models list.
- Yet more optimisation for handling large quantities of models in the internal structural model. Now when adding new models to the object, the model_indices and model_list objects are no longer created. This saves much time as the large model_list is now not sorted. A number of structural object methods have been updated to handle the change by switching to the model_loop() method for looping over the models, rather than using the model_indices and model_list objects.
- The frame order matrix printing function can now output the matrix to any precision. The lib.frame_order.format.print_frame_order_2nd_degree() function now accepts the 'places' argument which allows for higher precision printouts.
- The behaviour of the rdc.write user function has been changed to output spin ID strings in single quotes. This is to avoid problems with the '#' molecule identifier and the '#' comment character.
- Fix for the diffusion_tensor.init user function reference in the intro chapter of the manual. This was using a very old and now non-functional syntax.
- Created the Diffusion_tensor.test_bug_21561_tensor_pdb_failure system test. This is to catch bug #21561, failure of the structure.create_diff_tensor_pdb user function for non-spherical diffusion tensors when no Monte Carlo simulations are present, as reported by Martin Ballaschk.
- Added the truncated data for creating a system test to catch bug #21562, the failure of the NOE analysis when spectra are replicated. This bug was reported by Dhanas Muthu. This consists of the Sparky peak lists attached to the bug report and the modified 2AT7 PDB file. The data has been truncated to only include residues :12, :13, and :14.
- Shifted the NOE system test script into the new 'noe' directory.
- Created the Noe.test_bug_21562_noe_replicate_fail system test. This is to catch bug #21562, the failure of the NOE analysis when spectra are replicated, reported by Dhanas Muthu. This uses the truncated data taken from the files attached to the bug report. The NOE output file is checked to see if the contents are correct.
- Better support for replicated spectra in the NOE analysis. The saturated and reference peak intensities and errors are now properly averaged. Previously averaging was not used, as the number of replicates N cancels in the ratios used for the NOE and error calculation. However this fails when the number of replicates for the saturated spectrum does not match the number of replicates for the reference spectrum. Now any data combination is possible.
- Another fix for the NOE analysis for when replicated spectra have been collected. Variance averaging rather than error averaging is now used for the peak intensity errors. This is important if the errors for each replicated spectrum are different - a case which is rarely encountered, as the replicates are almost always used to determine one error for all the replicates.
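The difference between the two averaging schemes, as a standalone sketch with invented numbers (not the relax NOE code itself):
```python
from math import sqrt

errors = [12000.0, 15000.0]                    # errors of two replicate spectra

# Simple error averaging gives the plain mean.
error_avg = sum(errors) / len(errors)          # 13500.0

# Variance averaging averages the squared errors before taking the square root.
variance_avg = sqrt(sum(e**2 for e in errors) / len(errors))   # ~13583.1
```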
Bugfixes
- Fix for bug #21499, the failure of the rdc.write user function. The rdc.write user function fails for back-calculated RDC data. The fix was to handle the missing interatom.rdc_data_types variable.
- Fix for bug #21522, the structure.write_pdb user function creating an incorrect MASTER record and bug #21520, the failure of the structure.write_pdb user function when creating the MASTER record due to too many ATOM and HETATM records being present. The counts for the ATOM, HETATM, and TER records are now only for a single model, rather than being the sum for all models together.
- Fix for bug #21561, the structure.create_diff_tensor_pdb user function failure with no simulations. This was reported by Martin Ballaschk. The problem was that the simulation axes of the tensor PDB file were not being initialised correctly when no Monte Carlo simulations had been run.
- Fix for bug #21562, the failure of the NOE analysis when spectra are replicated. This bug was reported by Dhanas Muthu. The problem was that the NOE overfit_deselect() method was deselecting all spins which do not have exactly 2 intensity values. This is incompatible with replicated spectra as the number will be greater than two. The check has been modified to deselect spins only when the number of intensity values is zero or one.
Links
For reference, the announcement for this release can also be found at following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChive (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.1.3
Description
This is a minor documentation release which includes small improvements to the documentation of the relaxation dispersion analysis in the manual as well as the API documentation for the lib.dispersion package. As the manual is available from http://download.gna.org/relax/manual/relax.pdf, installing this newer version of relax is not necessary.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.1.3
(16 January 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.1.3
Features
N/A
Changes
- Fix for the parameters listed for the IT99 dispersion model in the manual.
- Improvements and addition of many links to the lib.dispersion.cr72 API documentation.
- Spacing fix for the lib.dispersion.cr72 module docstring.
- Improvements and addition of many links to the lib.dispersion.dpl94 API documentation.
- Improvements and addition of many links to the lib.dispersion.it99 API documentation.
- Improvements and addition of many links to the lib.dispersion.lm63_3site API documentation.
- Improvements and addition of many links to the lib.dispersion.lm63 API documentation.
- Improvements and addition of many links to the lib.dispersion.m61b API documentation.
- Improvements and addition of many links to the lib.dispersion.m61 API documentation.
- Improvements and addition of many links to the lib.dispersion.mmq_cr72 API documentation.
- Improvements and addition of many links to the lib.dispersion.mp05 API documentation.
- Improvements and addition of many links to the lib.dispersion.ns_cpmg_2site_3d API documentation.
- Epydoc URL simplifications.
- Improvements and addition of many links to the lib.dispersion.ns_cpmg_2site_expanded API documentation.
- Improvements and addition of many links to the lib.dispersion.ns_cpmg_2site_star API documentation.
- Added the NS CPMG 2-site 3D full model to the lib.dispersion.ns_cpmg_2site_3d module docstring.
- Improvements and addition of many links to the lib.dispersion.ns_mmq_2site API documentation.
- Improvements and addition of many links to the lib.dispersion.ns_mmq_3site API documentation.
- Improvements and addition of many links to the lib.dispersion.ns_r1rho_2site API documentation.
- Improvements and addition of many links to the lib.dispersion.ns_r1rho_3site API documentation.
- Small docstring edit for the lib.dispersion.mp05 module.
- Improvements and addition of many links to the lib.dispersion.tap03 API documentation.
- Improvements and addition of many links to the lib.dispersion.tp02 API documentation.
- Epydoc URL simplifications in the lib.dispersion.mp05 module.
- Epydoc docstring edit in the lib.dispersion.mmq_cr72 module.
- Improvements and addition of many links to the lib.dispersion.tsmfk01 API documentation.
- Copyright notice updates for the lib.dispersion modules changed today.
- Added links to the relax wiki, API documentation, and relax website to all dispersion models in the manual. This is to make it easier to find additional information about each of the models.
- Updated the author list for the submitted paper for the relaxation dispersion analysis.
- Added the primary reference for relaxation dispersion in relax [Morin et al., 2014] to the dispersion chapter of the manual. This is the paper which is not published yet.
- Removed the single quantum R1ρ-type data reference in the introduction of the dispersion chapter of the manual. This is redundant as R1ρ data is always single quantum.
Bugfixes
N/A
Links
For reference, the announcement for this release can also be found at following links:
- Official release notes on the relax wiki.
- Gna! news item.
- Gmane mailing list archive.
- The Mail Archive.
- Local archives.
- Mailing list ARChive (MARC).
Softpedia also has information about the newest relax releases:
- Softpedia page for relax on GNU/Linux.
- Softpedia page for relax on MS Windows.
- Softpedia page for relax on Mac OS X.
relax 3.1.2
Description
This relax version is a minor bugfix release which repairs a number of icons on newer operating systems and solves a problem caused by accidentally setting an incorrect spectrometer frequency.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.1.2
(13 January 2014, from /trunk)
http://svn.gna.org/svn/relax/tags/3.1.2
Features
N/A
Changes
- The average_intensity() dispersion function now accepts the offset argument. This is for better support of combined offset and spin-lock varied R1ρ-type data. The argument is then passed into the find_intensity_keys() function.
- Improved the DPL94 dispersion model description in the manual.
- Copied a Sparky peak list, to be modified into a Sparky file without an intensity column.
- Modified the Sparky file to have no columns with intensity values.
- Implemented the reading of spins from a Sparky list when no intensity column is present. Addition for Support Request #3044 - load spins from a Sparky list.
- Created the Relax_disp.test_bug_21460_disp_cluster_fail system test. This is to catch bug #21460 reported by Min-Kyu Cho. The save file added to the repository consists solely of the data for the first residue.
- Speed ups for the Relax_disp.test_bug_21460_disp_cluster_fail system test. The optimisation precision is not important for demonstrating this bug.
- Updated the main copyright notice for 2014.
- Fix for the main copyright notice.
- Updated the copyright notice visible to the user to 2014.
- Updated the copyright for the relax GUI splash screen for 2014.
- Improvement for the relax test suite printout with the --time command line argument flag. The tests printed out now have the package and module names removed, so that only the test name remains. This removes a large amount of text, simplifying the printout.
Bugfixes
- Partial fix for bug #21338 - the bad sRGB profile in some PNGs. This is only partial as some files are still to be converted (the original Bruker logo, and the 16x16, 22x22 and 32x32 sized Bruker icons).
- Fix for bug #21460, the failure of relaxation dispersion due to incorrect spectrometer information, as reported by Min-Kyu Cho. There was only one place in the dispersion analysis which failed due to a spectrometer frequency not containing any relaxation data - in the insignificance testing in the auto-analysis.
- Loosened the chi2 check in the Relax_disp.test_korzhnev_2005_15n_mq_data system test. This is to allow the test to pass on a 32-bit Linux (Mageia 1) test system.
Links
For reference, the following links are also part of the announcement for this release:
relax 3.1.1
Description
This is a major feature and bugfix release which adds support for reading 3D structures of organic molecules from Gaussian log files, the new lib.periodic_table and lib.nmr modules, the NS MMQ 3-site linear, NS MMQ 3-site, NS R1rho 3-site linear, and NS R1rho 3-site relaxation dispersion models, R1ρ dispersion data sets where multiple offsets and multiple spin-lock fields have been collected for each spin, the loading of spins directly from peak lists, and the reading of NMRPipe seriesTab files. Due to the improvements and the bugs fixed in the relaxation dispersion analysis, all users are recommended to upgrade to this version.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.1.1
(10 December 2013, from /trunk)
http://svn.gna.org/svn/relax/tags/3.1.1
Features
- Support for reading 3D structures of organic molecules from Gaussian log files using the new structure.read_gaussian user function.
- Addition of the lib.periodic_table module for storing information about the periodic table.
- Addition of the lib.nmr module for basic NMR related functions. It currently has functions for converting between ppm, Hz, and rad.s-1 units (see the conversion sketch after this list).
- Many improvements to the relaxation dispersion chapter of the user manual.
- The NS MMQ 3-site linear numeric model - the model for 3-site exchange using 3D magnetisation vectors linearised with kAC = kCA = 0 with the parameters {R20, ..., pA, pB, ΔωAB, ΔωBC, ΔωHAB, ΔωHBC, kexAB, kexBC}.
- The NS MMQ 3-site numeric model - the model for 3-site exchange using 3D magnetisation vectors with the parameters {R20, ..., pA, pB, ΔωAB, ΔωBC, ΔωHAB, ΔωHBC, kexAB, kexBC, kexAC}.
- The NS R1rho 3-site linear numeric model - the model for 3-site exchange using 3D magnetisation vectors linearised with kAC = kCA = 0 with the parameters {R1ρ', ..., pA, pB, ΔωAB, ΔωBC, kexAB, kexBC}.
- The NS R1rho 3-site numeric model - the model for 3-site exchange using 3D magnetisation vectors with the parameters {R1ρ', ..., pA, pB, ΔωAB, ΔωBC, kexAB, kexBC, kexAC}.
- More model nesting in the relaxation dispersion auto-analysis (CR72 and MMQ CR72, LM63 and LM63 3-site).
- Large speed up of the TP02 and NS R1rho 2-site dispersion models by minimising repetitive calculations.
- Support for the loading of spins directly from peak lists.
- Support for the reading of peak intensities from NMRPipe seriesTab formatted files (*.ser).
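To make the ppm, Hz and rad.s-1 conversions of the new lib.nmr module concrete, a minimal sketch is given below, assuming the Larmor frequency is supplied in Hz. The function names and signatures are illustrative assumptions and may not match the actual lib.nmr API exactly.

    from math import pi

    def frequency_to_Hz(frq=None, B0=None):
        """Convert a frequency in ppm to Hz, given the Larmor frequency B0 in Hz."""
        return frq * B0 * 1e-6

    def frequency_to_rad_per_s(frq=None, B0=None):
        """Convert a frequency in ppm to rad.s^-1, given the Larmor frequency B0 in Hz."""
        return frq * B0 * 1e-6 * 2.0 * pi

    # Example: a 2 ppm offset at a 600 MHz proton Larmor frequency.
    print(frequency_to_Hz(frq=2.0, B0=600e6))          # 1200.0 Hz
    print(frequency_to_rad_per_s(frq=2.0, B0=600e6))   # ~7539.8 rad/s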
Changes
- Small improvement for the devel_scripts/log_converter.py script for detecting commit boundaries.
- Added many small details to the release checklist document. This is for the formatting and editing of the CHANGES file, which is used for the release announcements. Some additional details about the API documentation at http://www.nmr-relax.com/api have been added too.
- Added sectioning printouts for the relaxation dispersion auto-analysis. This simply tells the user which part of the protocol is currently being performed.
- Setup for testing the sample_scripts/relax_disp/R1rho_analysis.py sample script. The script was copied into the test_suite/shared_data/dispersion/r1rho_off_res_tp02/ data directory where it will be tested on real data. The 'fake_sequence.in' and 'unresolved' files have been created to allow the script to run. And the script itself has been heavily debugged.
- All of the relaxation dispersion auto-analysis options are now exposed by the sample scripts. This included the pre_run_dir argument for specifying a directory of results from a non-clustered analysis and the flag for running MC simulations for all models.
- Added the DATA_PATH variable to the cpmg_analysis.py dispersion sample script. This allows the user to more easily specify a different directory for the files.
- Docstring improvement for the test_suite/shared_data/dispersion/r1rho_off_res_tp02/R1rho_analysis.py script.
- Synchronised the test_suite/shared_data/dispersion/Hansen/relax_disp.py with the sample script. This script now matches very closely with the sample_scripts/relax_disp/cpmg_analysis.py sample script. This is for sample script debugging purposes.
- Created a base data pipe for Flemming Hansen's truncated CPMG data for testing out missing data. The :4 spin is missing just a few data points, whereas the :71 spin is missing all 800 MHz data.
- Created the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test. This is used to demonstrate a failure in the R2eff model when some data is missing.
- Expansion and fixes for the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test. The parameters for spin :4 are now being checked, and all the checks updated for the changed data. The parameter values are slightly different as data is now missing and because only 3 spins are used for the error analysis whereas in all other Hansen CPMG data sets the more accurate errors are from all spins.
- The lib.dispersion.cr72.r2eff_CR72() function is now more robust. Values less than 1.0 are now caught to avoid passing them into the numpy.arccosh() function (a sketch follows this list). This avoids many warning messages on Mac OS X.
- Added a Gaussian DFT optimisation log file to the shared data directories. This will be used to test the reading of structural data from Gaussian files.
- Modified the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test to catch another failure. This is the failure of all numeric models when all data from one magnetic field strength is missing for a spin.
- Created data for a NS MMQ 3-site (branched) model using cpmg_fit from Dmitry Korzhnev.
- The relax_disp.r2eff_read_spin user function now really strips comments and empty lines from the file.
- A big change to the usage of the relax_disp.r2eff_read_spin user function. Now the nu_CPMG frequency or the spin-lock field strength must be set prior to calling this user function. This allows for more flexibility as often the experiment IDs and frequency values in the files do not match to the same number of decimal places. The frequency is no longer read from the file but must be preset.
- Created a relax script for back calculating R2eff values for the same parameters as cpmg_fit. This is for the NS MMQ 3-site (branched) CPMG dispersion model. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_test_suite.
- Created the Relax_disp.test_ns_mmq_3site_branched system test. This is for the NS MMQ 3-site (branched) CPMG dispersion model. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_test_suite.
- Added the NS MMQ 3-site models to the dispersion variables. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
- Added another Gaussian log file of strychnine, this time with DFT structure optimisation. The file is bzip2 compressed to save space.
- Created the Structure.test_read_gaussian_strychnine system test. This will be used for implementing and testing the structure.read_gaussian user function.
- Created the lib.periodic_table module for storing information about the periodic table. This is via the periodic_table object which will have different methods for obtaining different information about an element.
- Implemented the structure.read_gaussian user function. This will read the final structural data out of a Gaussian log file.
- Improved the checking of the Structure.test_read_gaussian_strychnine system test. This now checks all the atomic information loaded.
- Simple fix for the Relax_disp.test_korzhnev_2005_*_data system tests. The CPMG frequencies are now being set up in the setup_korzhnev_2005_data() method.
- Added support for the NS MMQ 3-site model parameters to the lib.text.gui module. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax.
- Added the NS MMQ 3-site models to the relax_disp.select_model user function frontend. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_disp.select_model_user_function_front_end.
- Added support for the NS MMQ 3-site models to the relax_disp.select_model user function back end. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_disp.select_model_user_function_back_end.
- Added support for the new 3-site exchange dispersion parameters. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_support_for_the_parameters.
- Removed the brackets from the NS MMQ 3-site (linear) dispersion model name.
- Renamed the Relax_disp.test_ns_mmq_3site_branched system test to Relax_disp.test_ns_mmq_3site.
- Fixes for the loop_parameters() dispersion function for the new NS MMQ 3-site model parameters. The new parameters were not being handled by this function.
- Created the target functions for the NS MMQ 3-site models. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_target_function.
- Added the R2eff calculating functions for the NS MMQ 3-site models to the relax library. This is for the NS MMQ 3-site and NS MMQ 3-site linear CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_library.
- Added the NS MMQ 3-site models to the dispersion auto-analysis. This is for the NS MMQ 3-site and NS MMQ 3-site linear CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_auto-analysis.
- Added the NS MMQ 3-site models to the GUI model list. This is for the NS MMQ 3-site and NS MMQ 3-site linear CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_GUI.
- Updated the MMQ 2-site model description in the manual. The R2_DQ = R2_ZQ = R20 assumption is now explained.
- Added the NS MMQ 3-site models to the relax user manual. This is for the NS MMQ 3-site and NS MMQ 3-site linear CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Completed the MMQ 2-site documentation in the manual. The equations for the numeric evolution of SQ, ZQ and DQ data was missing.
- Huge speed ups of the relaxation dispersion analysis. This is due to the removal of huge inefficiencies in the loop_point(), return_cpmg_frqs() and return_spin_lock_nu1() functions of the specific_analyses.relax_disp.disp_data module. Two new functions return_cpmg_frqs_single() and return_spin_lock_nu1_single() have been introduced to pull out the nu_CPMG and spin-lock field strengths for a given experiment and spectrometer frequency. This avoids calling the loop_exp() and loop_frq() functions from within loop_point(), which itself is often called inside a loop_exp() and loop_frq() sequence.
- Added the results of cpmg_fit minimisation of the cpmg_fit synthetic data for the NS MMQ 3-site model.
- Fixes for the NS MMQ 3-site dispersion models - the evolution matrix is now correctly constructed.
- Another fix for the NS MMQ 3-site dispersion models. The creation of the Z-matrix had a copy and paste error in that the heteronuclear chemical shift sign was negated when it should be positive. This was only in one of the two chemical shift numbers.
- Loosened the chi-squared check of the Relax_disp.test_ns_mmq_3site system test to allow it to pass.
- Speed up of the Relax_disp.test_ns_mmq_3site system test. The relax_disp.plot_disp_curves user function call is now skipped as it takes too long.
- Renamed the 'ns_mmq_3site_branched' dispersion test data directory to 'ns_mmq_3site'.
- Created the Relax_disp.test_ns_mmq_3site_linear system test and modified Relax_disp.test_ns_mmq_3site. The Relax_disp.test_ns_mmq_3site_linear system test uses the old data from the directory test_suite/shared_data/dispersion/ns_mmq_3site/, as this had kAC = 0, now copied into the ns_mmq_3site_linear/ directory. This system test uses the NS MMQ 3-site linear model. The base data generated by cpmg_fit for the Relax_disp.test_ns_mmq_3site system test was modified so that kAC is no longer 0, but set to 1000. This should properly test the NS MMQ 3-site model.
- Renamed the MMQ 2-site model to NS MMQ 2-site. This is so that the name matches those of the NS MMQ 3-site linear and NS MMQ 3-site models.
- Renamed all remaining instances of MMQ 2-site to NS MMQ 2-site. This is simply changing variable, method and module names.
- Removed the MMQ 3-site branched and MMQ 3-site linear models from the to do list in the manual. These two dispersion models are now implemented.
- Renamed the MQ CR72 dispersion model to MMQ CR72. The model was designed by Korzhnev et al., 2004 for proton-heteronuclear SQ, ZQ, DQ, and MQ data (or MMQ data), so the change is logical as the model is not just for MQ data.
- Clean up of the NS R1rho 3-site model names in the manual. The word 'branched' has been removed and the notation now matches the NS MMQ 3-site models.
- Clean up of the parameter lists in the dispersion model table of the manual.
- The pC parameter constraints are now implemented for the 3-site dispersion models. The new constraints are 0 ≤ pC ≤ pB (see the constraint sketch after this list).
- Editing of the introduction section of the dispersion chapter of the manual.
- Added the NS MMQ 3-site parameters to the optimisation section of the dispersion chapter of the manual.
- Added some R1ρ data from Dmitry Korzhnev's Fyn SH3 domain. This originates from the cpmg_fit software and is published data.
- Small fix for the documentation of the relax_disp.r2eff_read* user functions. This is for both relax_disp.r2eff_read and relax_disp.r2eff_read_spin.
- Created the new lib.nmr relax library module. This currently has a few simple functions for converting between ppm units and Hertz or rad/s units.
- The relax_disp.spin_lock_offset user function now uses the lib.nmr module. This is for converting between ppm and rad/s units.
- The relax_disp.r2eff_read_spin user function now can handle offset data in the file. If the new offset_col argument is set and disp_point_col is not, then the file being read can contain the spin-lock offset information rather than the spin-lock field strength values. This is only for R1ρ-type data.
- Implemented a GUI test which catches bug #21076 - when loading a multi-spectra NMRPipe seriesTab file through the GUI, several error messages occur.
- Large redesign of the R2eff/R1ρ data structures. The five indices {Ei, Si, Mi, Oi, Di} for the experiment type, the spins of the cluster, the magnetic field strengths, the pulse offsets, and the dispersion points (nu_CPMG or nu1) respectively are now much better defined. The Oi dimension is new and allows for support of R1ρ-type data whereby both different offsets and different spin-lock field strengths have been collected. Previously only one or the other was supported, but not both together. The offset information is now included as part of the spin R2eff/R1ρ key, even if not set. To support this, the specific_analyses.relax_disp.disp_data module now has the new functions loop_exp_frq_offset(), loop_exp_frq_offset_point(), loop_exp_frq_offset_point_time(), loop_frq_offset(), loop_frq_offset_point_key(), loop_offset(), and loop_offset_point(). All of the {Ei, Si, Mi, Oi, Di} dispersion indices throughout the source tree have been changed to ei, si, mi, oi, and di respectively. And the time index ti has also been introduced. These changes hugely simplify the code (a conceptual sketch of the new dimensionality follows this list).
- The relax_disp.plot_disp_curves user function can now support 150 sets per Grace graph.
- The relax_disp.plot_disp_curves user function can now support 3000 sets per Grace graph.
- System test for sequence read expanded to include assertions of correct data. Work in progress for Support Request #3044 - load spins from Sparky list.
- Added some more files for the Fyn SH3 R1ρ test data. This includes the cpmg_fit input and output files, R1 data files for relax as R1 cannot be optimised yet, and a relax script.
- Added system test for reading spins from a Sparky list. Work in progress for Support Request #3044 - load spins from Sparky list.
- Added interpreter.spectrum.read_spins function. Work in progress for Support Request #3044 - load spins from Sparky list.
- Created the back end function for the read_spins function. Work in progress for Support Request #3044 - load spins from Sparky list.
- Fix for system test. Work in progress for Support Request #3044 - load spins from Sparky list.
- Extended reading of Sparky files to include residue names. Work in progress for Support Request #3044 - load spins from Sparky list.
- Expanded system test and made it pass for user function spectrum.read_spins. Work in progress for Support Request #3044 - load spins from Sparky list.
- Updated the GUI test to check for first ID in list. Fix for bug #21076 - When loading a multi-spectra NMRPipe seriesTab file through the GUI, several Error messages occur.
- Added the keyword dim to the frontend function for spectrum.read_spins(). Work in progress for Support Request #3044 - load spins from Sparky list. This is to associate data with the spins for up to two dimensions.
- Implemented system test for reading spins from NMRPipe SeriesTab formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Extended reading of spin residue names from NMRPipe SeriesTab formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Modified NMRPipe SeriesTab to read residue numbers and name for two-dimensional list. Work in progress for Support Request #3044 - load spins from Sparky list.
- Inserted a check for whether the spin already exists before creating it. Work in progress for Support Request #3044 - load spins from Sparky list.
- Issuing a warning instead of error when loading spins from Sparky list where residue names are not present. Work in progress for Support Request #3044 - load spins from Sparky list.
- Issued a warning instead of error when loading spin residue names from a NMRPipe SeriesTab formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Changed to use return_spin for testing presence of spin. Work in progress for Support Request #3044 - load spins from Sparky list.
- Implemented another system test for reading NMRPipe SeriesTab files. Work in progress for Support Request #3044 - load spins from Sparky list.
- Fix for issuing a warning in reading spins from a NMRPipe SeriesTab formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Fix for issuing a warning when reading spins from a Sparky formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Implemented system test for reading spin IDs from NMRView formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Made reading of NMRView formatted file return the residue number as integer instead of string. Work in progress for Support Request #3044 - load spins from Sparky list.
- Fix for calling the warn() function. Work in progress for Support Request #3044 - load spins from Sparky list.
- Extended the error description for reading NMRView files. Work in progress for Support Request #3044 - load spins from Sparky list.
- Implemented a system test for reading spins from an NMRPipe SeriesTab formatted file whereby the assignment for the second dimension is missing. This would be a typical export from Sparky, converted to NMRPipe format, and processed with SeriesTab. Work in progress for Support Request #3044 - load spins from Sparky list.
- Fix for reading spins from an NMRPipe SeriesTab formatted file where dimension 2 is missing the residue number and residue name. Work in progress for Support Request #3044 - load spins from Sparky list.
- Expanded the warning message for a system test. Work in progress for Support Request #3044 - load spins from Sparky list.
- Modified system test for reading an assignment whereby the second dimension is missing. Work in progress for Support Request #3044 - load spins from Sparky list.
- If dimension 2 in a SeriesTab formatted file does not contain the residue number and name, these default to those of dimension 1. Work in progress for Support Request #3044 - load spins from Sparky list.
- Implemented system test for reading spins from an XEasy file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Modified XEasy reading function to pass residue names back. Work in progress for Support Request #3044 - load spins from Sparky list.
- Copied a SeriesTab file for the implementation of double assignments in Sparky files.
- Redesign of the CPMG frequency and spin-lock field strength data structures. These now have an extra dimension for the offset so that the values are now experiment, magnetic field strength and offset dependent. If many offsets are present but are variable for each dispersion point, then this saves a lot of calculation time. This mainly affects R1ρ-type data. To better handle this, all of the specific_analyses.relax_disp.disp_data.loop_*() functions have been modified to accept data values rather than indices.
- Improved the printout of the relax_disp.r2eff_read_spin user function for the R2eff keys.
- Extended the system test for reading spins from Sparky files with empty residue name+number second dimension assignment. Work in progress for Support Request #3044 - load spins from Sparky list.
- Modified the Sparky peak list for two dimensional assignment example. This will typically be the export from CcpNmr Analysis. Work in progress for Support Request #3044 - load spins from Sparky list.
- Implemented a system test for using double assignments in Sparky formatted files. Work in progress for Support Request #3044 - load spins from Sparky list.
- Extended reading of spins from Sparky files for up to two dimensional assignments. Work in progress for Support Request #3044 - load spins from Sparky list.
- Added example of CcpNmr analysis exported Sparky file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Added system test for reading CcpNmr Analysis exported Sparky file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Modified the reading of Sparky files when exported from CcpNmr Analysis. The keyword 'Data' is not present here. Work in progress for Support Request #3044 - load spins from Sparky list.
- Added a system test for using generic file for reading spins. Work in progress for Support Request #3044 - load spins from Sparky list.
- Modified the generic list to also return spin information when intensity is not present. Work in progress for Support Request #3044 - load spins from Sparky list.
- Added another system test for returning spins from a generic file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Added residue 4 to the R2eff files for the truncated CPMG data from Flemming Hansen.
- Added cpmg_fit results to the software comparison table for Flemming Hansen's CPMG data. The cpmg_fit input and log files have been added as well.
- Shifted the software comparison down a directory so it can be used for all the different data.
- Added system test for reading chemical shift from NMRPipe SeriesTab file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Implemented reading of chemical shifts from NMRPipe SeriesTab formatted files. Work in progress for Support Request #3044 - load spins from Sparky list.
- Additional chemical shift reading test for SeriesTab formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Improvements for the find_intensity_keys() dispersion analysis function. This now handles the reference point None being converted to NaN in numpy arrays and the logic is now clearer.
- Changed some warnings in the dispersion analysis so they only show if R1ρ data is loaded. This is for missing chemical shifts and R1 data.
- Increased the size of the grid search in the Relax_disp.test_m61_exp_data_to_m61 system test. This should increase the stability of this test.
- Introduced the eliminate argument for the dispersion auto-analysis. This flag allows model and Monte Carlo simulation elimination to be deactivated.
- Updated two dispersion scripts in the test data directories to work with the current design.
- Updated more test suite scripts to call the relax_disp.cpmg_frq user function.
- The CR72 and MMQ CR72 models are now classified as nested in the dispersion auto-analysis. The grid search for the MMQ CR72 model will therefore be skipped and the parameters taken from the CR72 model. This will however rarely, if ever, be used.
- Fix for the relax_disp.plot_disp_curves user function. The interpolated curves now have all invalid points of 1e100 removed from the graph. This allows for reasonable graph scaling.
- The LM63 and LM63 3-site models are now classified as nested in the dispersion auto-analysis. The grid search for the LM63 3-site model is therefore skipped and the starting parameters for optimisation are set to those of the optimised LM63 model.
- Updated the relax results for the truncated CPMG data from Flemming Hansen. This includes the new results for the MMQ CR72 model. The analysis uses more model nesting. And the Grace plots now include the interpolation graphs (hence the plots are now bzip2 compressed).
- Updated the NESSY results for the truncated CPMG data from Flemming Hansen. This now uses the data from all residues to allow for a proper error analysis so the results are comparable to all the other software.
- Updated and reformatted the dispersion software comparison document.
- Made a system test pass on Mac OS 10.9.
- Complete reworking of the NS R1rho 2-site dispersion model. The original code of Nikolai Skrynnikov and Martin Tollinger has been modified to match the behaviour of Dmitry Korzhnev's cpmg_fit software. The equations from Korzhnev et al., JACS 2005 (http://dx.doi.org/10.1021/ja0446855) have been used for the initial magnetisation and the R1ρ' calculation. All equations have been added to the manual to clarify the model.
- Both relax and cpmg_fit input and output files for the Fyn SH3 R1ρ data have been added. This is for the TP02 model and NS R1rho 2-site models. The cpmg_fit results include source code modifications to show the differences between the various 'corrections'. The dispersion software comparison file has been updated to include this data and to show the cpmg_fit verses relax differences.
- Updated the Relax_disp.test_tp02_data_to_ns_r1rho_2site system test. This is for the fixes of the NS R1rho 2-site dispersion model.
- Added the Korzhnev 2005 R1ρ constant time correction to the 'To do' section of the dispersion chapter of the user manual.
- Removed the CR72 model for cpmg_fit from the dispersion software comparison table in the dispersion chapter of the user manual.
- Removed the CR72 model for GUARDD from the dispersion software comparison table in the dispersion chapter of the user manual. This software, like cpmg_fit, only supports the MMQ CR72 model which gives slightly different results to the original CR72 model when using only SQ CPMG-type data. Hence supporting MMQ CR72 does not automatically mean that the CR72 model can be optimised.
- Updated the ShereKhan error estimation technique in the dispersion software comparison table. This is for the dispersion chapter of the user manual. Adam Mazur communicated that errors are estimated using the covariance matrix in a private mail.
- Large rearrangements in the dispersion chapter of the user manual. The MMQ CPMG-type experiments now follow from the SQ CPMG-type experiments, hence the R1ρ models are now listed last.
- Added a to do entry for the 3-site and N-site analytic R1ρ models listed in Palmer and Massi 2006. This is for the 'To do' section of the dispersion chapter of the user manual.
- Updated the lib.dispersion.ns_r1rho_2site module docstring to explain the origin of the equations. This includes the Korzhnev 2005 reference where the modifications come from.
- Created some synthetic data for the NS R1rho 3-site linear dispersion model using cpmg_fit.
- Added cpmg_fit results for the Fyn SH3 R1ρ test suite data using the 3-site numeric solution.
- Created the Relax_disp.test_ns_r1rho_3site_linear system test. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_test_suite.
- Added the NS R1rho 3-site models to the dispersion variables. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
- Added the NS R1rho 3-site models to the relax_disp.select_model user function frontend. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_disp.select_model_user_function_front_end.
- Changed the order of the experiment types in the relax_disp.select_model user function frontend. The R1ρ-type models have been shifted to the end so that the MMQ CPMG-type models are just after the SQ CPMG-type models.
- Changed the 'CPMG-type' to 'SQ CPMG-type' in the relax_disp.select_model user function frontend.
- Added support for the NS R1rho 3-site models to the relax_disp.select_model user function back end. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_disp.select_model_user_function_back_end.
- Decreased the amount of synthetic data in the ns_r1rho_3site_linear test suite shared data directory. The number of offsets for this NS R1rho 3-site linear model synthetic data has been decreased from 81 points to 21. This is because the large quantities of data slow the test suite down too much.
- Added a GUI test for reading spins from a spectrum formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Added the GUI key 'new spectrum' to point to 'spectrum.read_spins'. Work in progress for Support Request #3044 - load spins from Sparky list.
- Added the spectrum.read_spins GUI page for reading spins from a spectrum formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Added radio button for reading spins from a spectrum formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Further added to the GUI test for reading spins from spectrum formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Speed up of the Relax_disp.test_ns_r1rho_3site_linear system test. Half of the data has been commented out, as too much data was being loaded for the test.
- Created the target functions for the NS R1rho 3-site models. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_target_function.
- Added the R2eff calculating functions for the NS R1rho 3-site models to the relax library. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_library.
- Fix for GUI text string for the select radio button for reading spins from a spectrum formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
- Bug fix for the new NS R1rho 3-site dispersion models - the Y and Z initial magnetisations were switched. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
- Added cpmg_fit results for the program modified to turn off the PEAK_SHIFT flag. These are the results which should most closely match the relax results. This is for the simulated R1ρ data for the NS R1rho 3-site linear model.
- Fix for the MODEL_NS_R1RHO_3SITE_LINEAR dispersion variable. The model name was not correct.
- Turned off the Δω dispersion parameter constraints for the NS R1rho 3-site models.
- Added the NS R1rho 3-site models to the dispersion auto-analysis. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_auto-analysis.
- Added the NS R1rho 3-site models to the GUI model list. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_GUI.
- Removed the pC ≤ pB constraint from the 3-site dispersion models. This is important for the linear models where a violation of this constraint is reasonable. This has been replaced by the pC ≤ pA constraint.
- Added the NS R1rho 3-site models to the relax user manual. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
- Transposed some of the NS R1rho 3-site model evolution matrix elements. These now match the NS R1rho 2-site model.
- Last fixes for the NS R1rho 3-site dispersion models. These now behave identically to the cpmg_fit program with the PEAK_SHIFT flag disabled. The tilt angle for the initial magnetisation is no longer that for the average offset but that of state A.
- Fixes for swapped indices in the relaxation evolution matrix for the NS R1rho 3-site dispersion models.
- Docstring fix for the lib.dispersion.ns_r1rho_3site module.
- Added the ΩA, ΩB and ΩC resonance offset parameter definitions to the dispersion chapter of the manual.
- Updated the relax results for the synthetic data of the NS R1rho 3-site linear dispersion model.
- Modified the NS R1rho 2-site dispersion model to match the NS R1rho 3-site models. The 6D evolution matrix indices have been rearranged to match the 9D matrix indices. The tilt angle for the initial magnetisation is no longer that for the average offset but that of state A, as was changed for the NS R1rho 3-site models earlier. The system test was therefore updated for the slightly different behaviour.
- Updated the relax results for the Fyn SH3 R1ρ dispersion data. This is for the recent changes to the NS R1rho 2-site dispersion model.
- Updated the Relax_disp.test_ns_r1rho_3site_linear system test so it now passes. The chi-squared value is not exactly zero as there are numerical differences between relax and cpmg_fit due to different approaches being used.
- Added the RMSD determined via showApod for the 69 experiments. Work in progress for Support Request #3083 - Addition of Data-set for R1ρ analysis.
- Added system test for the analysis of optimisation of the Kjaergaard et al., 2013 Off-resonance R1ρ relaxation dispersion experiments using the DPL model. Work in progress for Support Request #3083 - Addition of Data-set for R1ρ analysis.
- Modified analysis script for example data of R1ρ. Work in progress for Support Request #3083 - Addition of Data-set for R1ρ analysis.
- Created synthetic R1ρ dispersion data for the NS R1rho 3-site model. This is a simple modification of the data for the NS R1rho 3-site linear model. The k_AC parameter was simply changed from 0 to 1000. The cpmg_fit software was used to create the data. Both cpmg_fit and relax results have been updated to the new model.
- Created the new Relax_disp.test_ns_r1rho_3site system test. This was copied from the Relax_disp.test_ns_r1rho_3site_linear test and modified to use the new NS R1rho 3-site model synthetic data.
- Fix for wrong use of relax_fit.relax_time instead of relax_disp.relax_time. Work in progress for Support Request #3044 - load spins from Sparky list.
- Added the ns_r1rho_3site module to the lib.dispersion package __all__ list. This allows the unit tests to pass.
- Turned off a system test until the release of relax 3.1.1 is over. Work in progress for Support Request #3044 - load spins from Sparky list.
- Fix for the Relax_disp.test_bug_21076_multi_col_peak_list GUI test. The peak intensity wizard is now closed at the end of the test so that subsequent tests can cleanly operate. Without closing this wizard, launching it a second time in another test will always fail.
- Capitalised 'Python' in the IO redirection messages.
- Epydoc docstring fix for the lib.dispersion.ns_mmq_3site.r2eff_ns_mmq_3site_sq_dq_zq() function. This allows the API to be compiled correctly.
- Bug fix for the dispersion grid_search_setup() optimisation function. This function was not updated for the recent addition of the spin-lock or hard pulse offset dimension in the specific_analyses.relax_disp.disp_data module (and hence all structures used by the dispersion target functions). The loop_exp_frq_point() function call has been replaced by a loop_exp_frq_offset_point() function call to allow the R2eff model parameters to be looped over. For more details, see the thread http://thread.gmane.org/gmane.science.nmr.relax.scm/19685. This solution was mentioned at http://thread.gmane.org/gmane.science.nmr.relax.scm/19685/focus=4859.
- Removed a printout from the Relax_disp.test_r1rho_kjaergaard GUI test as this is fatal for Python 3.
- Python 3 fixes for the relax_disp.r2eff_read_spin user function. The check for the dispersion point column now only runs if that argument is set. In addition, the offset column is now also being checked.
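Relating to the lib.dispersion.cr72.r2eff_CR72() robustness entry above, the following is a minimal sketch of a numpy.arccosh() domain guard, assuming a simple clamp to the [1, inf) domain; the actual fix in relax may differ in detail.

    import numpy as np

    def safe_arccosh(values):
        """Evaluate arccosh after clamping the argument to its [1, inf) domain.

        Rounding errors can push the argument marginally below 1.0, which makes
        numpy.arccosh() emit warnings (seen on Mac OS X) and return NaN.
        """
        return np.arccosh(np.maximum(values, 1.0))

    print(safe_arccosh(np.array([0.999999999, 1.0, 2.5])))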
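Relating to the 0 ≤ pC ≤ pB constraint entry above, here is a hedged sketch of how such population constraints can be written in the linear A.x ≥ b form commonly used by constrained optimisation algorithms; the parameter ordering is an assumption for illustration.

    import numpy as np

    # Assumed parameter vector ordering: x = [pA, pB, pC].
    # 0 <= pC    ->   pC >= 0
    # pC <= pB   ->   pB - pC >= 0
    A = np.array([[0.0, 0.0,  1.0],
                  [0.0, 1.0, -1.0]])
    b = np.array([0.0, 0.0])

    x = np.array([0.85, 0.10, 0.05])
    print(np.all(np.dot(A, x) >= b))   # True - the populations satisfy 0 <= pC <= pB.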
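Relating to the R2eff/R1ρ data structure redesign entry above, the following conceptual sketch shows how the new offset dimension Oi slots in between the field strength Mi and the dispersion point Di. The container layout and the return_param_key() helper shown here are hypothetical and only illustrate the dimensionality, not the real disp_data module.

    # Hypothetical nested containers indexed as [ei][mi][oi][di].
    exp_types = ['R1rho']
    frqs = [600e6, 800e6]
    offsets = [[[110.0, 118.0], [115.0]]]                  # offsets[ei][mi] -> list of Oi values.
    points = [[[[1000.0, 1500.0], [2000.0]], [[1500.0]]]]  # points[ei][mi][oi] -> list of Di values.

    def return_param_key(exp_type, frq, offset, point):
        """Assumed key format combining the non-spin dimensions."""
        return "%s - %.3f MHz, %.3f ppm, %.3f Hz" % (exp_type, frq / 1e6, offset, point)

    for ei, exp_type in enumerate(exp_types):                    # Ei: experiment type.
        for mi, frq in enumerate(frqs):                          # Mi: spectrometer field.
            for oi, offset in enumerate(offsets[ei][mi]):        # Oi: offset (the new dimension).
                for di, point in enumerate(points[ei][mi][oi]):  # Di: nu_CPMG or nu1.
                    print(return_param_key(exp_type, frq, offset, point))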
Bugfixes
- Fix for the sample_scripts/relax_disp/R1rho_analysis.py sample script. This was identified by Justin Lecher <jlec att gentoo doot org> in the post http://article.gmane.org/gmane.science.nmr.relax.devel/4748 (Message-ID:<52984043.3030808@gentoo.org>), or the threaded view http://thread.gmane.org/gmane.science.nmr.relax.announce/46/focus=4748. The problem was some extra commas which should not have been there.
- Bug fixes for the non-functional R1rho_analysis.py relaxation dispersion sample script. This script was horribly broken, but it should now work. It can even be executed from the base relax directory or from within the sample_scripts/relax_disp/ directory and perform the full analysis (assuming write access to the relax source directory).
- Fix for a number of PNG files for NESSY and Bruker icons for broken IDAT entries. This problem was identified by Justin Lecher <jlec att gentoo doot org> in the post http://article.gmane.org/gmane.science.nmr.relax.devel/4750 (Message-ID:<5298572C.5010409@gentoo.org>), or in the threaded view http://thread.gmane.org/gmane.science.nmr.relax.announce/46/focus=4750. As a result those icons are missing in the GUI. This was fixed using the pngcrush tool.
- Fix for a typo in a model name in the cpmg_analysis.py relaxation dispersion sample script.
- Fix for bug #21309, the R2eff dispersion model failure when peak intensity data is missing. The problem was that the check for missing data in the _calculate_r2eff() private API method was accidentally deleted in the relax_disp branch. See the commit at http://article.gmane.org/gmane.science.nmr.relax.scm/19261 and the accidental deletion at http://svn.gna.org/viewcvs/relax/branches/relax_disp/specific_analyses/relax_disp/api.py?view=diff&r1=21504&r2=21505&pathrev=21505.
- Another fix for bug #21309, the R2eff dispersion model failure when peak intensity data is missing. This second problem is only for the numeric CPMG models when all data at one magnetic field strength is missing. When the relaxation dispersion target function is being set up, the creation of the self.power data structure holding the number of CPMG blocks fails. The problem is that the relaxation time for the missing field strength is set to NaN. This is now caught using lib.float.isNaN() (see the sketch after this list).
- Loosened a check in the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test. This is to allow this test to pass on certain Mac OS X machines. It was reported by Troels in the post http://thread.gmane.org/gmane.science.nmr.relax.devel/4773/focus=4774.
- Basic fix for the Relax_disp.test_r2eff_read_spin system test - the CPMG frequencies are now set. This was identified in the post http://thread.gmane.org/gmane.science.nmr.relax.devel/4773/focus=4774.
- Fixes for the parameters in the Relax_disp.test_ns_mmq_3site system test script.
- Fix for optimisation of Dr. Flemming Hansen's CPMG data to the NS CPMG 2-site star dispersion model. Fix for bug #21322 - 5x Test suite fail for version 3.1.0, reported for system CentOS 2.6.32-358.18.1.el6.x86_64. Adjusted pA, Δω, kex, χ2.
- Fix for optimisation of the Korzhnev et al., 2005 15N DQ CPMG data using the MMQ 2-site model. Fix for bug #21322 - 5x Test suite fail for version 3.1.0, reported for system CentOS 2.6.32-358.18.1.el6.x86_64.
- Fix for optimisation of the Korzhnev et al., 2005 15N MQ CPMG data using the MMQ 2-site model. Fix for bug #21322 - 5x Test suite fail for version 3.1.0, reported for system CentOS 2.6.32-358.18.1.el6.x86_64.
- Fix for optimisation of the Korzhnev et al., 2005 15N ZQ CPMG data using the MMQ 2-site model. Fix for bug #21322 - 5x Test suite fail for version 3.1.0, reported for system CentOS 2.6.32-358.18.1.el6.x86_64.
- Fix for optimisation of all the Korzhnev et al., 2005 CPMG data using the MMQ 2-site model. Fix for bug #21322 - 5x Test suite fail for version 3.1.0, reported for system CentOS 2.6.32-358.18.1.el6.x86_64.
- Fix for optimisation of Dr. Flemming Hansen's CPMG data to the NS CPMG 2-site star dispersion model. Changed so assertAlmostEqual matches 2 digits. Fix for bug #21322 - 5x Test suite fail for version 3.1.0, reported for system CentOS 2.6.32-358.18.1.el6.x86_64.
- Bug fixes for the dispersion analysis when certain data sets are completely missing.
- Fix for loading seriesTab formatted intensities and for getting the IDs for the following GUI elements. Fix for bug #21076 - when loading a multi-spectra NMRPipe seriesTab file through the GUI, several error messages occur.
- Fix for the relax_disp.r2eff_read_spin user function. The offsets are now converted to ppm prior to finding the R2eff/R1ρ key.
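Relating to the second bug #21309 fix above, the following minimal sketch shows the kind of NaN guard involved, using Python's math.isnan() in place of relax's lib.float.isNaN(); the contents of the self.power structure (CPMG block counts per relaxation period) are an assumption for illustration.

    from math import isnan

    nu_cpmg = 100.0                       # CPMG frequency in Hz.
    relax_times = [0.04, float('nan')]    # NaN marks a completely missing field strength.

    power = []
    for time in relax_times:
        # A NaN never compares equal to itself, so an explicit isnan() check is needed.
        if isnan(time):
            power.append(None)            # No CPMG block count for missing data.
            continue
        power.append(int(round(time * nu_cpmg)))

    print(power)   # [4, None]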
Links
For reference, the following links are also part of the announcement for this release:
relax 3.1.0
Description
After four years of development by numerous NMR spectroscopists, the relaxation dispersion analysis in relax is finally ready for release [Morin et al., 2014]! This support is complete and includes almost all analytic and numeric dispersion models in existence. These have been labelled as R2eff, No Rex, LM63 [Luz and Meiboom 1963], LM63 3-site [Luz and Meiboom 1963], CR72 [Carver and Richards 1972], IT99 [Ishima and Torchia 1999], TSMFK01 [Tollinger et al., 2001], NS CPMG 2-site expanded, NS CPMG 2-site 3D, NS CPMG 2-site star, M61 [Meiboom 1961], DPL94 [Davis et al., 1994], TP02 [Trott and Palmer 2002], TAP03 [Trott et al., 2003], MP05 [Miloushev and Palmer 2005], NS R1rho 2-site, MQ CR72, and MMQ 2-site, mainly named after the authors and publication date. It includes support for single, zero, double, and multiple quantum CPMG data, including combined proton-heteronuclear data, and off-resonance R1ρ data. An automated protocol has been developed to simplify the analysis and a GUI has been designed around this auto-analysis. Calculations have been parallelised at the spin cluster and Monte Carlo simulation level for speed.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.1.0
(28 November 2013, from /trunk)
http://svn.gna.org/svn/relax/tags/3.1.0
Features
- Full support for the analysis of relaxation dispersion data in the prompt, scripting, and graphical user interfaces.
- Support for single quantum (SQ), zero quantum (ZQ), double quantum (DQ), and multiple quantum (MQ) CPMG-type data.
- Support for R1ρ-type data.
- Support for combined proton-heteronuclear SQ, ZQ, DQ, and MQ CPMG-type data (multiple-MQ or MMQ data).
- The R2eff model - used to determine the R2eff or R1ρ values and errors required as the base data for all other models.
- The No Rex model - the model for no chemical exchange being present.
- The LM63 SQ CPMG-type analytic model - the original Luz and Meiboom 1963 2-site fast exchange equation with parameters {R20, …, φex, kex} [Luz and Meiboom 1963].
- The LM63 3-site SQ CPMG-type analytic model - the original Luz and Meiboom 1963 3-site fast exchange equation with parameters {R20, …, φex,B, kB, φex,C, kC} [Luz and Meiboom 1963].
- The CR72 SQ CPMG-type analytic model - the reduced Carver and Richards 1972 2-site equation for most time scales whereby the simplification R2A0 = R2B0 is assumed with the parameters {R20, …, pA, δω, kex} [Carver and Richards 1972].
- The CR72 full SQ CPMG-type analytic model - the full Carver and Richards 1972 2-site equation for most time scales with parameters {R2A0, R2B0, …, pA, δω, kex} [Carver and Richards 1972].
- The IT99 SQ CPMG-type analytic model - the Ishima and Torchia 1999 2-site model for all time scales with pA ≫ pB and with parameters {R20, …, φex, pA.δω², kex} [Ishima and Torchia 1999].
- The TSMFK01 SQ CPMG-type analytic model - the Tollinger et al., 2001 2-site very-slow exchange model for time scales within the microsecond to second range, with the parameters {R2A0, …, δω, kAB} [Tollinger et al., 2001].
- The NS CPMG 2-site expanded SQ CPMG-type numeric model - A model for 2-site exchange expanded using Maple by Nikolai Skrynnikov (Tollinger et al., 2001) with the parameters {R20, …, pA, δω, kex}.
- The NS CPMG 2-site 3D SQ CPMG-type numeric model - the reduced model for 2-site exchange using 3D magnetisation vectors whereby the simplification R2A0 = R2B0 is assumed with the parameters {R20, …, pA, δω, kex}.
- The NS CPMG 2-site 3D full SQ CPMG-type numeric model - the full model for 2-site exchange using 3D magnetisation vectors with parameters {R2A0, R2B0, …, pA, δω, kex}.
- The NS CPMG 2-site star SQ CPMG-type numeric model - the reduced model for 2-site exchange using complex conjugate matrices whereby the simplification R2A0 = R2B0 is assumed with the parameters {R20, …, pA, δω, kex}.
- The NS CPMG 2-site star full SQ CPMG-type numeric model - the full model for 2-site exchange using complex conjugate matrices with parameters {R2A0, R2B0, …, pA, δω, kex}.
- The M61 R1ρ-type analytic model - the Meiboom 1961 2-site fast exchange equation for on-resonance data with parameters {R1ρ', …, φex, kex} [Meiboom 1961].
- The M61 skew R1ρ-type analytic model - the Meiboom 1961 2-site equation for all time scales with pA ≫ pB and with parameters {R1ρ', …, pA, δω, kex} [Meiboom 1961].
- The DPL94 R1ρ-type analytic model - the Davis et al., 1994 2-site fast exchange equation extending the M61 model for off-resonance data with parameters {R1ρ', …, φex, kex} [Davis et al., 1994].
- The TP02 R1ρ-type analytic model - the Trott and Palmer 2002 2-site equation for all time scales with pA ≫ pB and with parameters {R1ρ', …, pA, δω, kex} [Trott and Palmer 2002].
- The TAP03 R1ρ-type analytic model - the Trott et al., 2003 off-resonance 2-site equation for all time scales with the weak condition pA ≫ pB and with parameters {R1ρ', …, pA, δω, kex} [Trott et al., 2003].
- The MP05 R1ρ-type analytic model - the Miloushev and Palmer 2005 off-resonance 2-site equation for all time scales with parameters {R1ρ', …, pA, δω, kex} [Miloushev and Palmer 2005].
- The NS R1rho 2-site R1ρ numeric model - the model for 2-site exchange using 3D magnetisation vectors with the parameters {R1ρ', …, pA, δω, kex}.
- The MQ CR72 MMQ-type analytic model - the Carver and Richards 1972 2-site model for most time scales expanded for MMQ CPMG data by Korzhnev et al., 2004 with the parameters {R20, …, pA, δω, δωH, kex}.
- The MMQ 2-site MMQ-type numeric model - the model for 2-site exchange whereby the simplification R2A0 = R2B0 is assumed with the parameters {R20, …, pA, δω, δωH, kex}.
- An automated protocol for relaxation dispersion which includes sequential optimisation of the models, fixed model elimination rules to remove failed models and failed MC simulations increasing both parameter reliability and accuracy [d'Auvergne and Gooley 2006], and a final run whereby AIC model selection is used to judge statistical significance.
- Additional methods to speed up the auto-analysis by skipping the grid search: Model nesting, where the more complex model starts with the optimised parameters of the simpler model; Model equivalence, when two models have the same parameters; And spin clustering, where the analysis starts with the averaged parameter values from a completed non-clustered analysis. A sketch of the model nesting idea follows this list.
- Parallelisation of the dispersion analysis at the level of the spin cluster and Monte Carlo simulation for fast optimisation on computer clusters using OpenMPI.
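To illustrate the grid search skipping via model nesting listed above, a small hedged sketch follows; the nesting map and function names are assumptions for illustration rather than the auto-analysis code itself.

    # Hypothetical map from a complex model to the simpler model nested within it.
    NESTED_MODEL = {
        'LM63 3-site': 'LM63',
        'CR72 full': 'CR72',
    }

    def starting_parameters(model, optimised):
        """Return starting parameter values, skipping the grid search when possible."""
        simpler = NESTED_MODEL.get(model)
        if simpler is not None and simpler in optimised:
            # Model nesting: start from the simpler model's optimised parameters.
            return dict(optimised[simpler])
        # Otherwise signal that a full grid search is required.
        return None

    print(starting_parameters('LM63 3-site', {'LM63': {'phi_ex': 0.5, 'kex': 1500.0}}))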
Changes
Bugfixes
- Bug fixes for a number of broken Oxygen icon lookups in the GUI.
- Bug fixes for the molecule.delete, residue.delete and spin.delete user functions. The molecule, residue, and spin metadata in the relax data store was not being updated correctly after these user function calls so that any subsequent operations on this data was failing. This metadata problem was not noticed before as it disappears if the state is saved and reloaded into relax after a restart.
Links
For reference, the following links are also part of the announcement for this release:
relax 3.0 series
relax 3.0.2
Description
This version is a minor feature and bugfix release which includes better pseudo-atom support, support of the value.write user function to allow model information to be written to file, improvements to the 2D Grace plots, and fixes for missing log messages when running on a cluster using OpenMPI.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.0.2
(26 November 2013, from /trunk)
http://svn.gna.org/svn/relax/tags/3.0.2
Features
- Much better pseudo-atom support, including not requiring tetrahedral geometry.
- The value.write user function can now create files with non-numeric data, such as the models for each spin.
- Improvements to the 2D Grace plotting from the grace.write user function including full support for multiple graphs and the setting of the axes to the zero point.
Changes
- Updated the Release Checklist document rsync instructions to allow resumed uploads. This is needed if the internet connection has been cut, as uploading can take a long time.
- The test_suite.clean_up.deletion() function can now handle the case of missing files and directories. This problem was occurring in the relax_disp branch for some of the system tests.
- Created the is_int() and is_num() functions for the lib.check_types module.
- The value.write user function can now properly handle non-numeric data types. This allows the spin specific model name to be written to file, or any other string defined in the specific analysis PARAMS data object.
- The multi-processor section of the manual is now labelled in the correct position.
- Created a special GUI analysis element for floating point numbers. This allows for user input of floating point numbers into one of the GUI analysis tabs. If the input is not a number, the original value will be restored.
- Created the new pipe_control.spectrum.add_spectrum_id() function. This is used to handle the creation of spectrum ID strings in the data store. This way new spectrum IDs can be created from different parts of relax in a controlled way.
- Created the pipe_control.spectrometer.check_frequency() function to standardise this check.
- Created the pipe_control.spectrometer.get_frequency() function for returning the frequency for a given ID.
- The pipe_control.spectrum.add_spectrum_id() function now returns silently if the ID already exists.
- Improvements to the pymol.view and molmol.view user functions for finding the PDB files. Now the possibility that this is being run from a results subdirectory is taken into consideration. If the file cannot be found, the os.pardir parent directory is added to the start of the relative path and the file checked for.
- The rdc.read user function will now skip all lines of the RDC file starting with '#'. Including molecule identifiers at the start of a line now requires quotation marks.
- Shifted the RDC and PCS assembly methods from the main class to the data module for the N-state analysis.
- Created the pipe_control.mol_res_spin.is_pseudoatom() function to simplify pseudo-atom handling.
- Created the pipe_control.mol_res_spin.pseudoatom_loop() function. This is used to loop over the spin containers corresponding to a given pseudo-atom.
- Added a PDB file and RDC values (and absolute J+D and J) for propylene carbonate. This will be used for testing of pseudo-atoms in the N-state model analysis.
- Renamed the propylene carbonate files to the correct name of pyrotartaric anhydride.
- Created two new system tests based on the new pyrotartaric anhydride long range (1J, 2J & 3J) RDC data. The first (N_state_model.test_pyrotartaric_anhydride_rdcs) optimises an alignment tensor using long range signed RDC data. The second (N_state_model.test_pyrotartaric_anhydride_absT) optimises an alignment tensor using long range absolute T (J+D) data. Both test long range data together with methyl group pseudo-atom data.
- Added all of the pyrotartaric anhydride RDC generation scripts and files. This is simply for reference and reproducibility.
- Modifications for the pyrotartaric anhydride system test script. The grid search now is much quicker, and the RDC correlation plots are now sent to DEVNULL.
- Added the return_id argument to the pipe_control.mol_res_spin.pseudoatom_loop() function. This will then yield both the spin container and spin ID string. This mimics the spin_loop() function.
- Added proper pseudo-atom support for the RDCs in the N-state model analysis. This involves a number of changes. The pseudo-atom specific functions ave_rdc_tensor_pseudoatom() and ave_rdc_tensor_pseudoatom_dDij_dAmn() have been added to the lib.alignment.rdc module. These simply average the values from the equivalent non-pseudo-atom functions. The return_rdc_data() function in the specific_analyses.n_state_model.data module has been modified to assemble the RDC constants and unit vectors for all members of the pseudo-atom and add these to the returned structures, as well as a new list of flags specifying if the interatom pair contains pseudo-atoms. The N-state model target function and gradient have been updated to send the pseudo-atom data to the new lib.alignment.rdc module functions.
- J couplings for the N-state analysis are now properly handled for pseudo-atoms. The measured J couplings for the members of the pseudo-atom should not be used, but rather that of the pseudo-atom spin itself (as the former does not exist).
- Eliminated the old pseudo-atom handling in the N-state model specific return_rdc_data() function. This was multiplying the RDCs by -3 to handle the tetrahedral geometry of the 1J methyl RDCs. However this approach is not valid for non-methyl pseudo-atoms or for 2J, 3J, etc. data.
- A RelaxError is now raised for the N-state model optimisation with gradients when T = J+D data is used. The gradients for this data type are not implemented yet, so it is better to prevent the user from using this.
- The N_state_model.test_pyrotartaric_anhydride_absT system test now uses simplex optimisation to pass. The Newton algorithm cannot be used as the gradients for T = J+D type data have not been implemented.
- An RDC error of 0.0 will now deselect the corresponding interatomic data container. This can be used for simpler pseudo-atom handling.
- Updated the menthol long range RDC data file to include pseudo-atom member distances.
- Renamed the interatomic_loop() function 'selected' argument to 'skip_desel'. This is to match the spin_loop() function arguments.
- The interatom.unit_vectors user function now calculates the unit vectors for deselected containers. This is useful for pseudo-atom handling where the interatomic containers to the pseudo-atom members have already been deselected.
- Updated the value checking for the N_state_model.test_absolute_rdc_menthol system test. The pseudo-atoms are now properly handled so the result is now much better.
- The stereochemistry auto-analysis can now accept a file of interatomic distances. This is for better pseudo-atom support.
- The N-state model specific check_rdcs() function now properly handles pseudo-atoms.
- The pipe_control.rdc.q_factors() function now properly handles pseudo-atoms. If pseudo-atoms are present, then the 2Da²(4 + 3R)/5 normalised Q factor is skipped.
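For reference, the two Q factor normalisations referred to here can be sketched as follows. These use the commonly quoted definitions and the normalisation factor named in the entry above, and are not necessarily identical to the pipe_control.rdc.q_factors() implementation.

```python
import numpy as np

def q_factor_rms(D_meas, D_calc):
    """Q factor normalised by the measured RDCs themselves."""
    D_meas, D_calc = np.asarray(D_meas), np.asarray(D_calc)
    return np.sqrt(np.sum((D_meas - D_calc)**2) / np.sum(D_meas**2))

def q_factor_tensor(D_meas, D_calc, Da, R):
    """Q factor normalised by 2Da^2(4 + 3R)/5, skipped when pseudo-atoms are present."""
    D_meas, D_calc = np.asarray(D_meas), np.asarray(D_calc)
    norm = len(D_meas) * 2.0 * Da**2 * (4.0 + 3.0*R) / 5.0
    return np.sqrt(np.sum((D_meas - D_calc)**2) / norm)
```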
- Created the N_state_model.test_pyrotartaric_anhydride_mix system test. This is used to demonstrate a bug in the N-state analysis using mixed RDC and long range absolute J+D data.
- Movement of N-state model specific code to the analysis neutral pipe_control package. Many of the functions of the specific_analyses.n_state_model.data module relating to alignment tensors, RDC data and PCS data have been shifted into the pipe_control package modules align_tensor, rdc, and pcs respectively. This allows these functions to be made more general, allows the code to be shared with the frame order analysis or any future analysis using such data, and hence removes some code duplication.
- Created two new warnings, RelaxNucleusWarning and RelaxSpinTypeWarning, to match the equivalent errors.
- Added some RDC data checks to the N_state_model.test_pyrotartaric_anhydride_rdcs system test. This is to demonstrate a problem with the data assembly function pipe_control.rdc.return_rdc_data().
- Clean ups and improvements for the pipe_control.rdc.check_rdcs() function. Pseudo-atoms are now handled much better and correctly in all cases. And many RelaxErrors have been converted to RelaxWarnings followed by a 'return False' statement.
- Created the pipe_control.rdc.setup_pseudoatom_rdcs() function. This is used to make sure that the pseudo-atom interatomic systems (the containers from heteronucleus to pseudo-atom and heteronucleus to pseudo-atom members) are properly set up. It will deselect the interatomic containers if they are incorrectly set up or if they are not part of the main pair.
- Added quotation marks around a number of spin IDs with molecule names in some RDC data files. This is for the N-state model population model data used in the test suite.
- The rdc.read and j_coupling.read user functions now ignore all lines starting with the # character. This is to remove all comment lines silently. Therefore if spin IDs are used which contain the molecule name, then they should be wrapped in quotation marks.
- Updated a number of RDC test suite data files to have quotation marks around the spin IDs. This is to allow the molecule identifier to be present while not being mistaken for a comment line.
- Updated some of the RDC data files used in the frame order system tests. The spin IDs are now in quotation marks as the molecule name is included. This is to prevent the line being removed as a comment.
- Changes to the setup_pseudoatom_rdcs() function, which has been renamed to setup_pseudoatom_rdc(). The interatomic loop is now within the function to make sure that all is completed before the containers are accessed.
- Started to add better pseudo-atom support for the PCS. The new pipe_control.pcs.setup_pseudoatom_pcs() function has been added to deselect the spins which are members of a pseudo-atom. The return_pcs_data() function in the same module now calls this function and builds a list of pseudo-atom flags for use in the target function (though it is still unused).
- Finally eliminated the gui.paths module, replacing it with graphics.fetch_icon() calls. The GUI was using a mix of the old gui.paths module and the fetch_icon() function.
- Created the pipe_control.sequence.return_attached_protons() function. This is used to return a list of proton spin containers attached to the given spin.
- Improved Grace graph scaling and arrangement when multiple graphs are present. The lib.software.grace.write_xy_data() function now executes the 'autoscale' command for each graph and executes the 'arrange' command to lay out the graphs automatically.
- The Grace plotting (via lib.software.grace) now fully supports the plotting of multiple graphs.
- Improvements to the lib.software.grace module. The set colours are now applied to all set objects. And the axis label and tick sizes are now much smaller.
- Created the --numpy-raise command line option. When this is set, all numpy warnings will be converted to errors. This is to aid in debugging to locate where the warning messages are coming from. These appear as RelaxWarnings, but there is no indication as to where the problem is.
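The effect of such a flag can be reproduced directly with numpy's own error handling, for example:

```python
import numpy as np

# Turn numpy floating point warnings into exceptions so the traceback shows
# exactly where the problematic operation occurs.
np.seterr(all='raise')
try:
    np.array([1.0]) / np.array([0.0])
except FloatingPointError as exc:
    print("Caught:", exc)
```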
- The lib.software.grace module now supports setting the X and Y axes at zero.
- Modified the model list GUI window. This can now be resized and it uses a scrolled panel to allow the contents of the window to be bigger than the window size.
Bugfixes
- Fix for bug #21233 - the missing mpi4py multi-processor messages. When multiple commands were being sent to one slave, the captured IO was being overwritten by each executed command. Therefore the slave would only return the printouts from the last command.
- Fix for a fatal bug in the rarely used structure.add_atom user function. The position argument in the user function definitions was incorrectly defined causing the user function to be non-functional. The 'float_object' argument type is now supported in the GUI.
- Fix for the N-state model _target_fn_setup() method for when no PCS data is present.
- Bug fix for the lib.structure.mass.centre_of_mass() function warning when the element is not known. This warning was buggy and resulted in tracebacks.
Links
For reference, the following links are also part of the announcement for this release:
relax 3.0.1
Description
This version is a minor feature and bugfix release. The handling of peak lists has been enhanced and chemical shifts can now be read into relax, there are a number of improvements throughout the GUI, and a number of minor bugs have been solved. If these changes affect you, please upgrade to this latest version.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.0.1
(17 October 2013, from /trunk)
http://svn.gna.org/svn/relax/tags/3.0.1
Features
- Improved handling of peak lists.
- Simplification of the user function GUI elements for those associated with the free file format.
- Support for the reading of chemical shifts into the relax data store with the new chemical_shift.read user function.
- Improvements to the appearance of the GUI by using more unicode.
- Redesign of the model list GUI element used in the model-free analysis.
Changes
- The font size is no longer set for the latex2html compiled user manual.
- A number of updates and improvements to the document explaining how to setup a Mac OS X framework. This Framework Python setup is used to build the binary distribution files.
- Updated the Mac Framework testing script to handle 4-way binaries (ppc74 included).
- Better support for 4-way binaries in the Mac OS X Framework detection script.
- Added support for the 'current ar archive random library' file type in the Mac OS X Framework testing script.
- Added py2app to the Mac OS X Framework setup instructions.
- Shifted code from pipe_control.spectrum to the new lib.spectrum.peak_list relax library module. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/3972/focus=4347.
- Added a special script for locating all Python versions and printing out the installed modules.
- Large change to the free file format GUI element for the user functions. The GUI element used in the user function wizard windows has been modified to have both a 'default' form, which is the previous design, and a 'mini' form which is now used for the user functions. This mini form only uses 1 row, rather than the default of 6 or 8. It is a read only text element with a button that launches the free file format window. The amount of space saved is huge.
- Improved the text for the mini free file format GUI element.
- Updated all of the user function GUI window sizes for the 'mini' free file format GUI element. This allows much more text of the description to be displayed.
- Updated the Mac Framework setup document to help with scipy compilation problems.
- Improved the Python seeking and module version print out script for symlinks. This should now be much more capable of finding all Python versions on a system.
- Added support for the Mac OS X Modelfree4 binary results to the Palmer.* system tests. The Mac OS X Modelfree 4.20 binary produces different results than the Linux binaries, mainly due to a compilation problem. In the Linux binaries, the results are written out to 4 decimal places. In the Mac binaries, the results are instead written out to 4 significant figures. Therefore the number of decimal places is much smaller than in the Linux results.
- Syntax error fix for one of the unused scripts in the relax test suite shared data directories. This problem was encountered by Jack Howarth <howarth att bromo dott med dott uc dott edu> and communicated in a private message. The issue was found by fink. This script is never used and will never be used again - it is only there for reference.
- Modification of the spectrum.read_intensities user function front end. The heteronuc and proton arguments have been eliminated. Instead the new dim argument is used to associate the data with the spins of any dimension in the peak list.
- Replaced the 'heteronuc' and 'proton' arguments of the spectrum.read_intensities user function backend with 'dim'.
- Created the new lib.spectrum.objects module. This will hold temporary data structures for representing peak lists and other spectral data. The module currently contains the Peak_list class which is used to hold peak list data.
- Started to shift the spectrum.read_intensities user function backend to use lib.spectrum.peak_list.
- The pipe_control.spectrum.read_intensities() function now works with the Peak_list object.
- The Peak_list object is now used by the lib.spectrum.peak_list.read_peak_list() function.
- The lib.software.sparky.read_list_intensity() function now operates on the Peak_list object.
- Changed the spectrum.read_intensities dim argument default to ω2 and improved the long description.
- Fix for the assignment handling in the lib.software.sparky.read_peak_list() function. The first element is usually the indirect dimension or ω2.
- Fix for many of the Peak_list system tests for the user function argument changes. The heteronuc and proton arguments have been replaced by the dim argument.
- The lib.software.xeasy.read_list_intensity() function now operates on the Peak_list object.
- The lib.software.nmrview.read_list_intensity() function now operates on the Peak_list object.
- The lib.spectrum.peak_list.intensity_generic() function now operates on the Peak_list object.
- Fixes for the pipe_control.spectrum.read() function. An error was referencing a now non-existent variable and the docstring has been fixed.
- The Peak_list object can now store peak intensity names. This is for peak lists such as those from NMRPipe seriesTab files where the peak list covers multiple spectra.
- The NMRPipe seriesTab peak lists are now supported through the Peak_list object.
- Unit test fixes for the spectrum.read_intensities user function argument changes.
- Fixes for a few system tests for the spectrum.read_intensities user function argument changes.
- Fixes for a few GUI tests for the spectrum.read_intensities user function argument changes.
- Changes for the spectrum.read_intensities user function dim argument. The default is now ω1, the indirect dimension in a 2D experiment. The description has also been fixed.
- Fixes for all of the peak intensity reading functions - the ω1 and ω2 dimensions were swapped.
- Updates to the sample scripts for the spectrum.read_intensities user function argument changes.
- Updates to the user manual for the spectrum.read_intensities user function argument changes.
- Created the Chemical_shift.test_read_sparky system test for the reading of chemical shifts. This is for the reading of shifts from a Sparky peak list. It tests the currently non-existent chemical_shift.read user function.
- Created some incredibly basic icons for the chemical shift user functions. These are simply an ω symbol and will need to be replaced by something better in the future.
- Created the chemical_shift.read user function. This includes both the front and back end code.
- Shifted all of the peak list related modules from lib.software to lib.spectrum. This is for a more logical organisation, as these modules are solely used by the lib.spectrum.peak_list module.
- Renamed all of the read_*() functions of the lib.spectrum modules for consistency. These functions will now be used to read all types of data from a peak list, from the assignments to chemical shifts to peak intensities, and everything in between.
- Modified the peak list object. The peak list dimensionality variable is no longer private, and many values of None are now converted to lists of None so that the peak list data is easier to handle.
- Fix for the proton name in the new Chemical_shift.test_read_sparky system test.
- Expanded the functionality of the lib.spectrum.sparky.read_list() function. Now the dimensionality of the peak list is automatically determined, and all peak lists from 1D to 4D are supported. The chemical shifts are also automatically detected and extracted from the list and placed into the peak list object. The peak intensity data is also automatically detected, therefore the int_col argument is no longer used.
- The lib.spectrum.sparky.read_list() function can now auto-detect the peak volume column and use it for intensities.
- Created the Chemical_shift.test_read_xeasy system test. This is for checking the reading of chemical shifts from a 2D XEasy peak list.
- Implemented the reading of chemical shifts in the lib.spectrum.xeasy.read_list() function.
- Created the Chemical_shift.test_read_nmrview system test. This, if not obvious from the name, is for checking the reading of chemical shifts from an NMRView peak list.
- Implemented the reading of chemical shifts in the lib.spectrum.nmrview.read_list() function.
- Assignments can now contain lowercase letters in Sparky and NMRPipe seriesTab peak lists.
- Fix for the unit test for the reading of intensities from Sparky peak lists.
- Updated the nmrPipe processing script in the relax user manual. This is in response to the post by Troels Linnet at http://thread.gmane.org/gmane.science.nmr.relax.user/1520. The text has also been expanded to better explain spectral processing.
- Improvements for the description of the NMRPipe processing script in the R1/R2 chapter of the user manual.
- LaTeX fix for the curvefit chapter of the user manual.
- The isInf() and isNan() functions of lib.float can now handle values of None. If None is encountered, the functions simply return False.
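A minimal sketch of this None-tolerant behaviour (the real functions live in lib.float and may differ in detail):

```python
from math import isinf, isnan

def isInf(value):
    """Return True if the value is infinite, False for None or finite values."""
    return False if value is None else isinf(value)

def isNan(value):
    """Return True if the value is NaN, False for None or normal values."""
    return False if value is None else isnan(value)

print(isInf(None), isInf(float('inf')), isNan(float('nan')))   # False True True
```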
- The model-free optimisation code now handles minfx returning nothing. This is due to the fix of bug #21001 in relax, which is really a fix for an upstream minfx bug #21090.
- Created the Mf.test_bug_21079_local_tm_global_selection system test. This is to catch bug #21079.
- Extended the Mf.bug_21079_local_tm_global_selection system test for all Monte Carlo simulation steps.
- The model_free.select_model user function GUI element now uses unicode for the model parameters. The τ character is now used for the tm, te, tf, and ts parameters. And a superscript 2 is used for the order parameters.
- The model lists in the model-free GUI auto-analysis now use unicode for the S2 parameters.
- The peak intensity wizard in the GUI is now more robust. The wizard_update_ids() method can now better handle missing data. This is encountered if a user skips the first elements of the wizard.
- Created Wiz_window.setup_page() for user function wizard pages to allow for simpler GUI tests. This method can be used to setup any user function wizard page with all its arguments set. It accepts all keyword arguments and sets these for the wizard page, translating to GUI strings as needed. This should save a lot of lines in the GUI tests.
- Simplified the Noe.test_noe_analysis GUI test by using the new Wiz_window.setup_page() method.
- Python 3 fixes for all of the unicode strings in relax. Instead of using the u"xyz" notation, now unicode("xyz") is being used. This works as the relax compat module sets the builtin unicode() function to str() for Python 3, since all strings in Python 3 are unicode and hence neither the Python 2 u"xyz" notation nor the unicode() builtin exist in Python 3.
- Defined two new functions called u() in the compat module for better unicode string support. The two functions are defined differently for Python2 and Python3. The Python3 function simply returns the text unmodified, as all strings are unicode. The Python2 function converts the str type to a unicode type.
- The new compat.u() function is now being used for all unicode strings.
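The version-dependent definition described here can be sketched as follows (the real code lives in the relax compat module and may differ in detail):

```python
import sys

if sys.version_info[0] >= 3:
    def u(text):
        """Python 3: all strings are already unicode, so return the text unmodified."""
        return text
else:
    def u(text):
        """Python 2: convert a byte string (str) into a unicode string."""
        return unicode(text, 'utf-8')
```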
- All "local tm" text in the GUI now uses a subscript m unicode character as well as the τ character.
- Created the pipe_control.spectrum.test_spectrum_id() function for checking if a spectrum ID exists.
- Renamed pipe_control.spectrum.test_spectrum_id() to check_spectrum_id(). A bug in the function was also removed, and the other code in the module now uses this function.
- Created the pipe_control.mol_res_spin.check_mol_res_spin_data() function. This will check for the existence of molecule, residue and spin data and raise a RelaxError if none exists.
- Simplification of the data checks in the pipe_control.spectrum module. This is using the new pipe_control.*.check*() functions.
- Huge speed up of the GUI tests by the removal of the N_state_model.test_populations test. This problem was identified by running the GUI tests with the '--time' flag. On one test machine, this single test took ~142 seconds to complete when the entire GUI tests took ~242 seconds (i.e. this one test took up to 60% of the whole test suite). This test comes directly from a system test, but the equivalent system test only takes about 6 seconds to complete. The difference is due to the slow generation of the user function GUI pages.
- Created the new RelaxNoPeakIntensityError error object.
- The compat.SYSTEM variable is now set to 'Windows' when 'Microsoft' is detected. This is for easier identification of MS Windows systems, as either string could be used.
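A minimal sketch of this normalisation, assuming platform.system() is the source of the string:

```python
import platform

# Normalise the platform string so that MS Windows is always identified as
# 'Windows', even when platform.system() reports 'Microsoft'.
SYSTEM = platform.system()
if SYSTEM == 'Microsoft':
    SYSTEM = 'Windows'
```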
- Created the new gui.text module for holding all of the unicode text for the GUI. This module contains unicode strings for the various analysis types, which are then all defined in one location. This is for consistency.
- Converted the model-free user function definitions to use the new gui.text module strings.
- Shifted the gui.text module to lib.text.gui to avoid a fatal circular import in the GUI.
- MS Windows fixes for the GUI for missing unicode font glyphs.
- Added some Mac OS X GUI string fixes for missing unicode characters to lib.text.gui.
- The size of the model list GUI window can now be changed.
- Redesign of the model list GUI element. The wx.ListCtrl element has been replaced by a wx.FlexGridSizer combined with wx.CheckBox and wx.StaticText. The result is a much nicer formatting of the element. The checkboxes in the old element displayed slight rendering problems on all operating systems and did not look neat. The new design is also more flexible in that models of None are now treated as separators in the window.
- The model list GUI element can now display an optional model description column.
- Added model descriptions and adjusted the size of the model-free model list GUI elements.
- Refinements for the model list GUI window. The font for all text elements is now set. And the elements of the wx.FlexGridSizer are now vertically centred so that the text of the checkboxes and text elements line up perfectly.
- The size of the model list GUI window is now automatically set to the best fit.
- The model list GUI element is now centred after the autosizing.
- The titles in the model list GUI window now use a smaller font size.
- Update of the description of the interatom.define user function.
- Added multi-processor support for Monte Carlo simulations. This simply involves accessing the multi-processor box singleton and running the processor.run_queue() method within the pipe_control.minimise.minimise() function. This currently does nothing as the processor queue is always empty. But if the code in the specific_analyses package is modified to add slave commands to the processor but not execute the run_queue() method, then the Monte Carlo simulations will be automatically parallelised.
- Updated the spectrum.error_analysis user function backend to use the lib.statistics.std() function. This simplifies the code. It affects only the peak intensity error analysis when spectra have been replicated.
- Created the Structure.test_bug_21187_corrupted_pdb system test to catch bug #21187. The bug was reported by Martin Ballaschk.
- Bug fix for the specific analysis API _data_init_spin() method. This is used for the API init_spin() method. This is a latent bug which does not affect any of the current analyses in relax. It was discovered in the relaxation dispersion branch.
- Added a new is_queued() method to the Processor object of the multi package. This allows the Processor object for the uni and mpi4py multi-processor to be queried to see if any slave commands have been queued.
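The idea behind the check can be sketched with a stand-in class; the real Processor objects in the multi package store their queued slave commands differently.

```python
class Processor:
    """Stand-in illustrating the is_queued() check, not the real multi.Processor."""

    def __init__(self):
        self._command_queue = []          # assumed internal queue of slave commands

    def add_to_queue(self, command):
        self._command_queue.append(command)

    def is_queued(self):
        """Return True if any slave commands are waiting to be executed."""
        return len(self._command_queue) > 0
```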
- Created a unit test for the lib.linear_algebra.matrix_exponential module. This module does not exist yet, but it will be used to replace the scipy.linalg.expm() function use in the relaxation dispersion branch.
- Loosened the lib.linear_algebra.matrix_exponential.matrix_exponential() unit test checks.
- Implemented the lib.linear_algebra.matrix_exponential.matrix_exponential() function. This handles square matrices in either complex or real form.
- Created the lib.check_types.is_complex() function. This is used to determine if a number is a Python or numpy complex type.
- The lib.linear_algebra.matrix_exponential.matrix_exponential() function now uses lib.check_types.is_complex(). This fixes the function for complex matrices.
- Created a new unit test for lib.linear_algebra.matrix_exponential.matrix_exponential() for complex matrices.
- Fix for the new lib.linear_algebra.matrix_exponential.matrix_exponential() function. This function now returns a numpy array type rather than matrix type.
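For reference, a hedged sketch of an eigendecomposition-based matrix exponential that handles square real or complex matrices and returns a numpy array, in the spirit of the function described above (the real lib.linear_algebra implementation may differ, and this sketch assumes the matrix is diagonalisable):

```python
import numpy as np

def matrix_exponential(A):
    """Return e^A for a square real or complex matrix as a numpy array."""
    A = np.asarray(A)
    W, V = np.linalg.eig(A)                               # eigenvalues W, eigenvectors V
    eA = np.dot(V, np.dot(np.diag(np.exp(W)), np.linalg.inv(V)))
    # The exponential of a real matrix is real, so discard numerical noise.
    if not np.iscomplexobj(A):
        eA = eA.real
    return np.asarray(eA)

print(matrix_exponential(np.zeros((2, 2))))               # the identity matrix
```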
Bugfixes
- Bibtex fixes required for proper latex2html compilation.
- Fix for the Palmer.test_palmer_omp for the different Modelfree4 binaries. The gcc and pdf binaries are now properly detected and the slightly different results are now correctly checked for.
- The graphics.fetch_icon() function can now return either the absolute or relative path to the icon. This is a partial solution for bug #21042.
- Fix for bug #21042. The docs/latex/fetch_docstrings.py now asks the graphics.fetch_icon() function for the relative path to the icon rather than the absolute path.
- The fetch_docstrings.py script now asks for the Unix '/' separator through graphics.fetch_icon(). This is a final fix for bug 21042 (https://gna.org/bugs/?21042). The graphics.fetch_icon() function now accepts the 'sep' argument. This defaults to os.sep. But the docs/latex/fetch_docstrings.py script uses the Unix '/' separator to obtain a LaTeX correct path on MS Windows.
- Modified the create_mc_data() method to partly fix bug #21079. Some spins with local tm models remain selected despite not containing any data. These are handled explicitly. Instead of a RelaxNoModelError being raised, the method returns None to indicate that something went wrong.
- Final fix for bug #21079, the failure of the dauvergne_protocol auto-analysis when the "local tm" global model is selected. The Monte Carlo create_data() method now skips data from the base_data_loop() if the create_mc_data() method returns None.
- Fix for bug #21097. This was a simple typo. It has not been encountered before because it is in a rarely encountered RelaxError.
- Fix for bibtex warning 'Warning--string name "mb" is undefined'. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax.
- Fix for latex bibtex string 'cp' instead of 'cj'. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax.
- Another fix for bibtex string 'cp' instead of 'cj'. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax.
- Fix for bug #21187 - the corrupted PDB issue with protons atom numbers of zero. The bug was reported by Martin Ballaschk. The fix was to allow for spin containers in the relax data store to have the same atom number, as long as the atom names are different.
- Modified the Monte Carlo simulation printout behaviour for the minimisation related user functions. This is to help in fixing bug #21190. This includes the calculate, grid_search, and minimise user functions. The new multi-processor is_queued() method is used to determine if the optimisation code of the specific analysis has queued rather than run the calculations. If queued, the 'Simulation X' text will not be printed out. This avoids printing out all the text at the start before anything has happened. The specific multi-processor optimisation code must provide its own printouts when each calculation is complete.
Links
For reference, the following links are also part of the announcement for this release:
relax 3.0.0
Description
This is the first release of the new relax 3 series. This release marks a major shift of relax towards becoming a scientific computing environment specialised for the study of molecular dynamics using experimental biophysical data. It is designed to be a replacement for numerical computational environments such as GNU Octave, MATLAB, Mathematica, Maple, etc. From the perspective of a user, however, not much has changed. There are only a few modifications to the prompt, script, or graphical user interfaces. Most changes are for the power user as they are rather in the backend. The infrastructure changes are comprehensive and include the reorganisation of most of the relax code base, a large expansion of the relax library, and general improvements and fixes to the user manual, the GUI, and the whole code base. The huge number of changes can be seen below.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 3.0.0
(6 August 2013, from /trunk)
http://svn.gna.org/svn/relax/tags/3.0.0
Features
- Huge amounts of code throughout the relax codebase has been shifted into independent functions in the relax library.
- Many new functions added to the relax library.
- Complete rearrangement of the relax package and module layout.
- Clean up and improvements to the relaxation curve-fitting C module including the removal of a severe memory leak eating up all the RAM when lots of spins are analysed simultaneously.
- Complete redesign of the 2D graphing code for improved data visualisation and to allow expansion to software other than Grace.
- Polishing of the GUI - many bug fixes and improvements throughout the GUI.
- Addition of the --time command line option for the relax test suite.
- Large speed ups of the relax test suite.
- Merger of the dipole_pair and interatomic user function classes into the new interatom user function class.
- Added support for J couplings.
- Import cleanups throughout relax, avoiding potential future bugs and making the code much cleaner.
- Addition of many new scripts for use by the relax developers.
- Support for the NMRPipe SeriesTab format in the spectrum.read_intensities user function.
- Improvements for all code examples in the relax user manual including much better fonts, formatting, line wrapping, line numbering, and colouring using the lstlisting LaTeX environment.
- Created the relax language definition for the lstlisting LaTeX environment for better colouring of relax scripts in the user manual.
- Converted the Citations chapter of the relax user manual into a preface chapter.
- Overhaul of the indexing in the relax user manual.
- Higher level structuring of the user manual into parts.
- Creation of the optimisation chapter of the relax user manual.
- General improvements throughout the user manual.
Changes
- Some small clarifications and reordering of the release checklist document.
- Shifted the pipe_control.structure.superimpose module to lib.structure.superimpose.
- Shifted the pipe_control.structure.statistics module to lib.structure.statistics.
- Created the unit test infrastructure for the lib.structure package.
- Shifted the pipe_control.structure.pdb_read and pipe_control.structure.pdb_write modules to lib.structure.
- Shifted the pipe_control.structure.cones module to lib.structure.cones.
- Split the pipe_control.structure.mass module into two with the CoM code going to lib.structure.mass.
- Removed the data pipe checks from the internal structural object. This decoupling from the relax data store is in preparation for moving into the lib.structure package.
- More decoupling of the internal structural object from the relax data store. Removed the ability of the internal structural object to determine if two atoms are connected by consulting the relax data store.
- Created the empty lib.structure.internal package for holding the internal structural object.
- Shifted part of the internal structural object into the lib.structure.internal.models module. This contains the two classes ModelList and ModelContainer from the pipe_control.structure.api_base module.
- Shifted part of the internal structural object into the lib.structure.internal.molecules module. This contains the class MolList from the pipe_control.structure.api_base module.
- Shifted the MolContainer class from pipe_control.structure.internal into lib.structure.internal.molecules. This is in preparation for shifting the internal structural object to lib.structure.internal and for the elimination of the unused and no longer useful ScientificPython structural object.
- Created the empty lib.structure.represent package. This will be used to hold modules which generate 3D structures as geometric representations of abstract ideas such as tensors, cones, frames, etc.
- Shifted the lib.structure.rotor module to lib.structure.represent.rotor.
- Total elimination of the ScientificPython PDB object. Maintaining this reader was too much effort and the internal structural object has now surpassed the capabilities of the ScientificPython PDB object (for example the internal object is not PDB specific). And ScientificPython is very much a dead project, largely replaced by the more successful scipy.
- Merged the structural API base module api_base into pipe_control.structure.internal. The API base class is no longer needed as the ScientificPython PDB reader has been eliminated.
- Deleted the unit tests of the structural API base class.
- Moved the residual pipe_control.structure.api_base module to lib.structure.internal.displacements. This is because the base module still contained the Displacements class.
- Docstring consistency in the internal structural object.
- Shifted the pipe_control.structure.internal module to lib.structure.internal.object. This is the new location of the internal structural object.
- Shifted the selection object out of pipe_control.mol_res_spin and into the new lib.selection module. The dependence on the MoleculeContainer, ResidueContainer and SpinContainer objects has been removed, as these are part of the relax data store. Therefore all of the private methods (__contains__, __contains_mol_res_spin_containers, and __contains_spin_id) have been deleted. The contains_*() methods will need to be used instead.
- The pipe_control.mol_res_spin functions no longer use the selection object __contains__() method. All functions now use the contains_*() methods of the lib.selection.Selection object.
- Shifted parse_token() and tokenise() from pipe_control.mol_res_spin to lib.selection.
- The lib.selection.parse_token() function is using the new Python way of splitting strings. This is via the string's split() method.
- Removed the no longer used parser argument for reading PDB files from some unit tests.
- Removed the unit test of the parser argument of the structure.read_pdb user function. The argument no longer exists.
- Shifted the cone geometric object representation functions to lib.structure.represent.cone. The structure.create_cone_pdb user function first calls pipe_control.structure.main.create_cone_pdb() which then calls lib.structure.represent.cone.cone(). This allows the pipe_control function to write out the file and add it to the data pipe's results file list.
- Fixed some name clashes in the namespace of pipe_control.structure.mass.
- Shifted the diffusion tensor structural object code to lib.structure.represent.diffusion_tensor. The user function routes to pipe_control.structure.main.create_diff_tensor_pdb(), which pulls the tensor info out of the data store, and then calls the diffusion_tensor() function of lib.structure.represent.diffusion_tensor to create the representation, writes out a PDB file, and finally adds the file to the data pipe's results file list.
- More removals of the now dead parser argument for the structure.read_pdb user function.
- Removed the parser argument from structure.read_pdb in the stereochemistry auto-analysis.
- Restored the selection object __contains_spin_id() method as contains_spin_id(). This will allow for faster checks for matches to spin ID strings.
- Speed ups for the interatom_loop() by restoring some of the code previously deleted. This spin ID lookup table is being used again, as this is much faster than the string parsing of spin IDs.
- The frame order analysis is now using the correct centre of mass function.
- Shifted calc_chi_tensor() and kappa() from pipe_control.align_tensor to lib.alignment.alignment_tensor.
- Shifted some of the pipe_control.diffusion_tensor functions to lib.diffusion.main.
- Created the empty lib.software package. This will be for functions which create input, read output, or control external programs.
- Shifted and decoupled some of the grace code into lib.software.grace. This includes most of the write_xy_header() and write_xy_data() functions. The data store specific part of write_xy_header() has been shifted into pipe_control.grace.axis_setup().
- Missing import fix for the lib.alignment.alignment_tensor module.
- Shifted the lib.opendx package to lib.software.opendx.
- Shifted the lib.xplor module into the lib.software package.
- Shifted the Bruker Dynamics Centre parsing code into the new lib.software.bruker_dc module.
- Deleted the completely unused pipe_control.spectrum.Bruker_import class. This was added by Michael Bieri in Oct 2011, but the code has never been used. Other, simpler code has replaced its functionality.
- Created the Ct.test_bug_20674_ct_analysis_failure system test for catching bug #20674. This was reported by Mengjun Xue <mengjun dott xue att mailbox dott tu-berlin dot de> at https://gna.org/bugs/?20674.
- Decreased the number of Monte Carlo simulations in the Ct.test_bug_20674_ct_analysis_failure system test.
- Created the Jw.bug_20674_jw_mapping system test. This is a modification of the Ct.test_bug_20674_ct_analysis_failure system test for catching bug #20674. The test script was duplicated and the small modifications made to convert it into the J(ω) mapping analysis. This now reveals the same bug but for the J(ω) mapping analysis.
- System test speed ups - decreased the number of Monte Carlo simulations in many tests. Running 500 simulation optimisations in a system test is a total waste of time!
- Converted the bug_20674_jw_mapping.py system test script to use the self._execute_uf() interface. This allows the script to be used in the GUI.
- Created the Mf.test_bug_20683_bdc_inf_values system test. This is for catching bug #20683 reported by Mengjun Xue <mengjun dott xue att mailbox dott tu-berlin dot de>. The problem is due to infinite and NaN values in the Bruker Dynamics Centre file.
- Ported the changes of r19302 to the consistency testing and J(ω) mapping analyses. This is the code for checking for infinite relaxation rates imported from Bruker Dynamics Centre files.
- Missing imports of the lib.float.isInf() function.
- Modified the bug_20674_ct_analysis_failure.py system test script to use self._execute_uf(). This allows the test to operate as a GUI test, which was failing.
- Created the specific API common method _data_init_spin(). This will be used as a general method for aliasing to data_init() for initialising spin parameters.
- Added printouts for the select.read and deselect.read user functions to identify the spins affected.
- Created the new lib.list module with the function count_unique_elements(). This function will be used to determine the unique number of elements in a list.
- Shifted the Sparky peak intensity reading code to lib.software.sparky.read_list_intensity(). This new function comes from the old pipe_control.spectrum.intensity_sparky() function, but with the spin ID code removed.
- Shifted the XEasy peak intensity reading code to lib.software.xeasy.read_list_intensity(). This new function comes from the old pipe_control.spectrum.intensity_xeasy() function, but with the spin ID code removed.
- Docstring fix for the lib.software.xeasy.read_list_intensity() function.
- Shifted the NMRView peak intensity reading code to lib.software.nmrview.read_list_intensity(). This new function comes from the old pipe_control.spectrum.intensity_nmrview() function, but with the spin ID code removed.
- Created the lib.software.sparky.write_list() function and associated unit test. This will be used to create simple Sparky .list files.
- The relaxation curve-fitting analysis parameters are now all lowercase. This is to match the other analysis types so that the parameter names are identical to the corresponding variable name. This is assumed by some of the specific analysis API methods.
- Removal of junk code in the _assemble_scaling_matrix() relaxation curve-fitting method.
- Parameter scaling is now functional in the target_function.relax_fit.c code. Previously the scaling was not being used and the Python to C conversion was broken.
- The scaling matrix is now converted into a usable list of diagonal elements for the relax_fit C module.
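The conversion amounts to extracting the diagonal before handing the values to C, for example:

```python
import numpy as np

# The (diagonal) scaling matrix is reduced to a plain list of its diagonal
# elements before being passed to the relax_fit C module.  The example values
# are illustrative only.
scaling_matrix = np.diag([1.0, 1000.0])          # e.g. rate and I0 scaling
scaling_list = list(np.diag(scaling_matrix))     # [1.0, 1000.0]
print(scaling_list)
```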
- Simplified the code of the relax_fit C module.
- The common spin methods of the specific analysis API now ignore parameters not in the model. This affects the _data_init_spin(), _sim_init_values_spin(), and _sim_return_param_spin() methods. The result is that the spin containers no longer hold parameter variables set to None for non-model parameters.
- Created the pipe_control.plotting module. This will be used as a base for the plotting of all types of data. This includes the current OpenDX and Grace modules, as well as future modules. The determine_functions() function has been added and is used to simplify the pipe_control.grace.get_data() function.
- The grace.write user function data type argument sequence values have changed. Instead of 'spin', this can now be 'res_num' or 'spin_num' to specify that either the residue number or spin number should be plotted on the desired axis. The x_data_type now defaults to 'res_num'.
- Created the pipe_control.mol_res_spin.count_max_spins_per_residue() function. This will be used by the plotting module to determine if more than one spin per residue exists.
- Fixes for the change of the grace.write user function data type 'spin' to 'res_num'.
- Updated the pipe_control.plotting.determine_functions() function.
- Added the skip_desel flag to the important pipe_control.mol_res_spin.spin_loop() generator function. This is used to skip deselected spins within the loop. As most of the code in relax using the spin_loop() does this anyway, this can be used to simplify many of the spin looping elements in relax.
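A self-contained sketch of the skip_desel behaviour, using toy spin objects rather than the real relax data store:

```python
class Spin:
    """Toy spin container with only a name and the 'select' flag."""
    def __init__(self, name, select=True):
        self.name, self.select = name, select

def spin_loop(spins, skip_desel=False):
    """Loop over the spins, optionally skipping deselected ones."""
    for spin in spins:
        if skip_desel and not spin.select:
            continue
        yield spin

spins = [Spin('N', True), Spin('C', False), Spin('H', True)]
print([s.name for s in spin_loop(spins, skip_desel=True)])   # ['N', 'H']
```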
- Expanded the relax_fit system test script to produce all types of currently supported Grace graphs. This is to more extensively test the grace.write user function.
- Large redesign of the 2D graphing code in relax. This currently affects only the grace.write user function, but the new infrastructure will make it much easier to expand the graphing abilities and to support other 2D graphing software. The plotting code has also been significantly simplified. The pipe_control.grace.get_data() function has been shifted into the pipe_control.plotting module. It has been split up into the base assemble_data() function with the data assembly shifted to assemble_data_scatter(), assemble_data_seq_value() and assemble_data_series_series(). This split massively simplifies the code by not packing all different graph and set combinations into one. In addition the auxiliary functions classify_graph_2D(), fetch_1D_data(), get_functions(), and get_data_type() have been created to maximise code sharing between the different assemble_*() functions.
- Modified the relax_fit system test script to generate a new type of graph. This is the residue number sequence versus the peak intensity series data (and vice versa) via the grace.write user function. This is to help in the implementation of this graph type.
- Created the pipe_control.plotting.assemble_data_seq_series() function. This is to allow the residue or spin numbering to be plotted against any series type data (lists or dictionaries), or vice versa.
- Added a link to the PDF user manual from the HTML user manual. This will affect all pages at http://www.nmr-relax.com/manual/ by adding an icon to the navigation bar pointing to the PDF manual at http://download.gna.org/relax/manual/relax.pdf.
- The plotting of residue or spin numbers versus values now handles multiple spin types properly. This is in the pipe_control.plotting.assemble_data_seq_value() function. The spin name is being used to identify different spin types for the graph sets.
- The pipe_control.mol_res_spin.count_max_spins_per_residue() function now accepts a spin ID argument. This can be used to restrict the spins to count.
- The spin ID string is now being used by the plotting functions. The spin ID was not being passed into the assemble_data_*() functions.
- Changed how pipe_control.plotting.assemble_data_seq_value() determines the number of graph sets. Instead of counting the maximum number of spins per residue, different spin names are now checked across the sequence. This is needed as a single residue can contain different spin types. This was caught by the Mf.test_dauvergne_protocol system test.
- Modified pipe_control.plotting.assemble_data_series_series() to handle dictionaries with keys as values. This will be useful in, for example, relaxation dispersion for plotting the dispersion curves. In this case, the R2eff values are in a dictionary where the keys are the values to plot against. This is different from the current case where the X and Y data dictionaries are required to have the same keys. These changes expand the capabilities of the grace.write user function.
- Formatting change for the auto_analyses __all__ package list.
- Removed the import of the auto-analysis modules into the auto_analyses package __init__ module. This import is not needed.
- The N-state model system test module now imports the auto-analysis to fix an import order error.
- Added a warning for the spectrum.read user function if a peak intensity of zero is encountered. This value can cause singular matrix failures in certain optimisation algorithms.
- The spectrum.error_analysis user function can now be performed on a subset of all spectra. The subset argument has been added to allow the error analysis to be restricted to a subset of all loaded spectral data.
- Created the lib.list.unique_elements() function for returning a list with duplicates removed.
- Shifted the standard deviation code from the Monte Carlo error_analysis() function to the lib package. The new function lib.stats.std() is now used to simplify the error_analysis() function and allow the code to be reused. This will be useful for expanding the pipe_control.monte_carlo.error_analysis() function to handle parameters which are dictionaries, for example as in the relax_disp branch.
- The Monte Carlo error_analysis() function now handles dictionary type parameters.
- Renamed the new lib.stats module to lib.statistics.
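A minimal sketch of such a standard deviation helper (the real lib.statistics.std() signature may differ):

```python
from math import sqrt

def std(values, dof=1):
    """Standard deviation of a list of floats, with selectable degrees of freedom."""
    n = len(values)
    mean = sum(values) / float(n)
    return sqrt(sum((x - mean)**2 for x in values) / (n - dof))

print(std([10.1, 9.9, 10.3, 10.0]))
```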
- Spun out the model list GUI element from the model-free auto-analysis into its own module. This GUI element is now the gui.analyses.model_list.Model_list class. This code has been spun out as the GUI element will be used by the relaxation dispersion branch.
- The gui.analyses.model_list.Model_list GUI element now can have tooltips via the tooltip class variable.
- Rearrangements of the gui.analyses package. The new gui.analyses.elements package has been created and the model list and text and spin GUI elements have all been shifted into the package.
- Spun out the Spin_ctrl analysis GUI element into its own module in gui.analyses.elements.
- The relaxation time part of the spectra list GUI element can now be turned on or off.
- The execution of the user function GUI pages can now be delayed. The create_page() execute flag has been added to disable execution. This can be later forced with the new on_execute() force_execute flag.
- Modified the GUI new analysis wizard to return a list of user function on_execute methods. This will be used in the relax_disp branch and in the future for when a special user function page is added to the new analysis wizard. This allows the use of user function pages with execution delayed until the analysis __init__() method is being run.
- Standardisation of the text of the GUI elements of the analysis frames and expansion of the tooltips. All the text parts of the Spin_ctrl and Text_ctrl elements now end in a colon. Tooltips are now present on all elements and have been expanded and improved.
- The Text_ctrl analysis frame GUI elements now have separate tooltips for the buttons. This is to give a hint to the user as to what the button does.
- The model selection GUI analysis element can now have a different tooltip for the button.
- Added tooltips to the model-free model list GUI elements in the model-free analysis frame.
- Created the gui.wizards package for holding all of the relax wizards. The gui.wizard module is now called gui.wizards.wiz_objects.
- Shifted and merged the NOE and Rx peak intensity wizards into a new module. The wizards were separate and a part of the analysis frame class objects. The two wizards have been merged into the gui.wizards.peak_intensity.Peak_intensity_wizard class as most of the code is shared. This one wizard class will be useful for reusing in the relaxation dispersion branch.
- The peak intensity wizard class now inherits from Wiz_window. This allows the class to be a wizard window instead of launching a wizard window from within the class.
- Small rearrangements in the gui.wizards.peak_intensity module.
- Alphabetical ordering of the methods in the gui.wizards.peak_intensity module.
- Simplified all of the peak analysis wizard wizard_update_*() methods. They now all defer to the wizard_update_ids() method which updates the spectrum ID fields.
- Simplified the wizard_update_noe_spectrum_type() method as in the previous commit.
- Fixes for the frq.set user function in the GUI. The ID list is now set to the spectrum IDs, and the frequency units are no longer all fused into one string.
- Unicode is now used for the tau symbol in the model-free model parameter lists in the GUI. This is only when modifying the models to optimise, which shouldn't be changed anyway.
- Removed the 'string' from 'Spectrum ID string' in the spectrum list GUI element. This is a GUI - the word string is meaningless here!
- The delay times column string now specifies seconds in the spectrum list GUI element.
- Formatting improvements for the relaxation data list GUI element. The data type column entries are now descriptive and use subscript.
- More unicode strings are used for the GUI for subscripts and Greek letters.
- Fixes for the R1 and R2 GUI analyses for the recently introduced unicode subscript characters. There is now self.label for a pure string version and self.gui_label for the fancier unicode version.
- The frq.set user function 'id' argument is no longer read only - this was causing test suite failures.
- Removed a nasty kludge for releasing the execution lock on failure. This kludge, after the bug fix for https://gna.org/bugs/?20756, was causing failures in the test suite.
- Changed the 'Execute relax' button in all analyses in the GUI to 'Execute analysis'. It makes no sense to execute relax as relax has been executing during the analysis initialisation and the user setting up all the data for the analysis. This is a remnant of ancient design of Michael Bieri's GUI being a separate program to relax, which would execute relax with the click of this button.
- Restored the Py_INCREF() function call in the relaxation curve-fitting C module. This was deleted at r12632 along with Py_XDECREF() and Py_DECREF() calls. The absence of a Py_INCREF() function call causes the module to crash the Python interpreter under certain conditions. The problem was found in the relax_disp branch.
- Clean up of unused headers and declarations in the exponential curve C module.
- The relax_fit C module setup() function now uses the Py_RETURN_NONE macro to terminate. This macro does exactly what the old code does anyway.
- Removed an unused declaration in the relax_fit C module setup() function.
- Increased the maximum number of relaxation times for the relax_fit C module to 50.
- Shifted the C array creation to the relax_fit C module header. The params, values, sd, relax_times, and scaling_matrix C arrays are now declared and allocated in the header file rather than using malloc() calls in the setup() function. This is to attempt to remove a memory leak. The arrays are now of fixed length and reused for each setup() call. These, as well as the other variables declared in the header, are no longer declared in the functions.
- Improved the Python and C documentation of the functions of the relax_fit C module.
- Converted the Py_BuildValue() calls to PyFloat_FromDouble() in the relax_fit C module. This doesn't change much.
- Documentation improvements for the back_calc_I() function of the relax_fit C module.
- The exponential C file now uses the exp() function from math.h rather than Python.h. This file is independent from Python.
- The numpy include is no longer used for the compilation of the C modules. Numpy is not used at all in the C modules, so this just adds an annoying dependency for those who need to compile the module themselves.
- Removed some bad calls to status.exec_lock.release(). This commit may have to be reverted in the future. The problem is that the execution lock is not being held when these calls are made. The calls were added as a kludge to handle certain situations where the execution lock was not released. There may be cases where this behaviour is still needed.
- Added a developer script for testing of memory leaks in the relax_fit C modules.
- Removed the numpy dependence from the manual C module compilation script.
- Created the lib.mathematics relax library module. This currently contains two functions, order_of_magnitude() and round_to_next_order().
- Added unit tests for the lib.mathematics module.
- The relax_fit analysis now uses lib.mathematics.round_to_next_order() for the scaling matrix. This allows the optimised I0 value to be better understandable in the printouts.
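Hedged sketches of the two helpers named here (the real lib.mathematics functions may be defined slightly differently):

```python
from math import ceil, log10

def order_of_magnitude(value):
    """Base-10 order of magnitude of a positive value."""
    return ceil(log10(abs(value)))

def round_to_next_order(value):
    """Round a positive value up to the next power of ten."""
    return 10 ** ceil(log10(abs(value)))

print(round_to_next_order(23400.0))   # 100000, the next power of ten
```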
- Created the new Value system test class with the first test Value.test_value_copy. This test demonstrates some pretty large bugs in the value.copy user function.
- Modified the Value.test_value_copy system test to check the copying of errors as well.
- Added the error flag argument to all of the specific analysis API set_param_values() methods. This will allow parameter errors as well as values to be set.
- The Value.test_value_copy system test now checks all of the values and errors.
- Added the error flag argument to the value.set user function. This will allow for parameter errors to be set by the user.
- The specific analysis API _return_value_general() method now returns errors even when values are missing.
- The internal structural object PDB file creation now writes out http://www.nmr-relax.com. Previously the link http://nmr-relax.com was written out.
- Diffusion tensor PDB file fixes for the internal structural object changes. This is because the relax website link is now written into the PDB file as http://www.nmr-relax.com rather than http://nmr-relax.com. This fixes the diffusion tensor system tests.
- Converted all of the specific analysis modules into packages. The model-free and steady-state NOE analyses were already packages, and this now brings all other analyses in line with the package design of specific_analyses. The only change is that the files specific_analyses/x.py have been shifted to specific_analyses/x/__init__.py and the __all__ package variable added.
- Epydoc docstring fixes for the compat module.
- The peak intensity wizard can now function remotely when the spins are not named. This will be needed for the GUI tests, allowing the Question() call to be bypassed while still adding the spin.name user function as the first page of the wizard. The key for the spin.name page has also been fixed so that the page can be accessed.
- The timing of individual tests in the relax test suite can now be printed out. The new command line argument --time has been added which, when supplied with one of the test suite arguments, will cause the time required to complete each individual test to be displayed. Instead of just printing the characters '.', 'F', and 'E' for each test, now these characters are postfixed with the time in seconds, the name of the test and ending in a newline character.
- Big speed up of the test suite by skipping a large number of redundant Frame Order system tests. These are tests of using only PCS or only RDC data. These tests are still active for the pseudo-ellipse just to make sure that a whole missing data type can be handled.
- Suppressed the reporting of skipped tests in the test suite if the module is set to None.
- The preview button in the file selection elements of the user function windows can now be disabled. This is via the new wiz_filesel_preview argument being set to False.
- Merged the frq.set and temperature user functions into the new spectrometer user function class. The frq.set user function is now called spectrometer.frequency and temperature is now spectrometer.temperature. To match these changes, the cdp.frq variable is now called cdp.spectrometer_frq.
- Modified the spectrometer.frequency user function so that a frequency list and count is stored. These are the new cdp.spectrometer_frq_list and cdp.spectrometer_frq_count variables. This will allow various parts of relax which assemble frequency information to be simplified and made more consistent.
- Created basic SVG and PNG graphics for the spectrometer user function class. The spectrometer is black to avoid offending Bruker, Varian, or Jeol users by using a colour associated with one of these companies.
- The pipe_control.spectrometer.get_frequencies() function can now return MHz or Tesla units.
- Renamed the functions of the pipe_control.spectrometer module. The frequency() and temperature() functions are now called set_frequency() and set_temperature().
- Added backwards compatibility support for the spectrometer frequency list and count. This is needed for old relax state files.
- A whitelist is now being used to limit the number of frame order GUI tests to 1.
- Shifted all frequency data handling associated with relaxation data to pipe_control.spectrometer. This includes the deletion of the relax_data.frq user function as this replicates the behaviour of spectrometer.frequency. A number of functions from the pipe_control.relax_data module have changed: frq() has been deleted as it is replaced by pipe_control.spectrometer.set_frequencies(); frq_checks() has been shifted to pipe_control.spectrometer.frequency_checks(); frq_loop() has been shifted to pipe_control.spectrometer.loop_frequencies(); num_frq() has been deleted as the new variable cdp.spectrometer_frq_count contains this info. Two new functions in the pipe_control.spectrometer module have been added to remove the functionality from pipe_control.relax_data. These are copy_frequencies() and delete_frequencies().
- The molmol.macro_run user function file argument now has a description.
- Huge speed up of the system tests for the loading and creation of model-free saved states. The OMP files used for the system test have been truncated from 134 to 7 spins, changing the timing of 6 system tests from 11-13 seconds to less than 0.5 seconds each.
- All of the binary file arguments for the user functions now are file selection GUI elements. The GUI user function wizard pages now have file selection buttons for selecting the executable to run. These all have the preview button disabled. The results.read and state.load GUI elements also have the preview button disabled.
- The user function 'prompt' description elements are now displayed in the GUI wizard page.
- The monte_carlo.error_analysis user function can now handle parameters which are lists.
- Added the ability for specific analyses to override the optimisation constraint algorithm. The default is still the 'Method of Multipliers', but if the constraint_algorithm() method returns a different string, then that will be used to select the algorithm. This allows the 'Log Barrier' method in minfx to be used.
- The value.display and value.write user functions can now handle list and dictionary type parameters.
- Added two methods to the specific analysis common API class. These are the _model_type_global() and _model_type_local() methods for always specifying that the model type is global (i.e. at the level of the data pipe) or local (i.e. there can be multiple clusters of models).
- Added some more functions to the lib.statistics module. These include the bucket() function for creating a discrete distribution from a list of floating point numbers, and the gaussian() function for calculating the probability of a point on a Gaussian distribution (a sketch follows this list).
- Added a directory and files for testing the white noise in relaxation data. This includes scripts and graphs.
- The initial parameters are now the real parameters rather than the optimised ones. This is for the script for testing white noise in relaxation data.
- The spectrum.peak_intensities user function is now more robust when reading a generic formatted file. First there is a check that the intensity column number has been supplied, and then a check that all relevant data could be extracted from each row of the file. This replaces traceback errors with RelaxErrors explaining the problem if the user inputs bad data or forgets the intensity column argument.
- Changed the "Execute analysis" button text back to the original "Execute" text of the old relax GUI.
- Added the 'test.seq' file from bug report #20873. This is from Troels E. Linnet. The bug report and link to http://thread.gmane.org/gmane.science.nmr.relax.user/1452 explains the contents. The file will be used to construct a system test to catch the bug.
- Created the Peak_lists.test_bug_20873_peak_lists system test to catch bug #20873. This was reported by Troels E. Linnet. The test has been created by copying the user function calls from the original bug report and slightly modifying them to suit a 'relax_fit' analysis type.
- Fix for the Peak_lists.test_bug_20873_peak_lists system test. The spectrum IDs are now strings.
- Added checks of the peak intensities to the Peak_lists.test_bug_20873_peak_lists system test.
- The spectrum.integration_points page in the peak analysis GUI wizard has been fixed. It is only shown when volume integration is selected with no replicated spectra.
- Removed a debugging printout which is killing the relax unit tests in Python 3.
- Added an EPS version of the 128x128 pixel spectrometer icon. This is for use in the relax manual.
- Added a README file for the relax 128x128 icons describing how the EPS files should be created.
- Updated the spectrometer 128x128 icon to be of the correct size and colour.
- Added a README file to the graphic/analyses directory describing how to create the EPS files.
- Merger of the dipole_pair and interatomic user function classes. The functionality of these two classes overlaps significantly. And the dipole_pair user functions are not related to magnetic dipole-dipole interactions. Therefore all the user functions from both classes were shifted into the new interatom user function class. This change will affect almost all relax scripts but, as this will form part of the relax 3 release, script breakage should be expected anyway.
- Removed the pipe_control.dipole_pair module as its contents is now in pipe_control.interatomic.
- Removed the dipole_pair module from the pipe_control package __all__ list.
- Merged the interatom.create user function into interatom.define. These user functions had overlapping functionality which would be confusing for a user.
- Added polish to all of the interatom user function docstrings.
- Improved the functionality of the interatom.read_dist user function. The file data is now stripped using lib.io.strip to remove comments and blank lines. And now if the interatomic data container cannot be found, it is created instead of raising a RelaxError.
- Improvements to the RelaxZeroVectorWarning - the warning message was terribly out of date.
- Polish for the rdc.read user function. Comment lines and blank lines are now removed to suppress useless warning messages about these lines containing no valid data.
- Added some basic initial relax icons for J couplings.
- Created some basic initial GUI wizard graphics for J couplings.
- Modified the titles of all the auto-analysis GUI elements. The text 'Setup for' has been removed as it is meaningless.
- Added more emphasis on the titles of the auto-analysis GUI elements. There is now more space below the title, and a different font (16pt roman italic) is being used.
- Removed some now irrelevant information from the rdc.read user function docstring.
- Removed a false prompt example from the rdc.read user function docstring.
- Created an entire new user function class for handling J couplings in the relax data store. This derives from the RDC user function modules. The following functions have been created: j_coupling.copy, j_coupling.delete, j_coupling.display, j_coupling.read, and j_coupling.write.
- Added a check for the RDC data type to the rdc.read user function.
- The rdc.read user function can now handle T = J+D type data. Support for this in the specific analyses is yet to be added.
- Fixes for the rdc.read, j_coupling.read and interatom.read_dist user functions. Comment lines are no longer removed, as it is impossible to tell a comment line from a spin ID string.
- Split up the specific_analyses.n_state_model package into modules. The new data and parameter modules have been created by shifting methods out of the __init__ module and converting them into functions of the two new modules. This is to simplify the package.
- Shifted another method from the N_state_model class to the specific_analyses.n_state_model.data module.
- Added support for the T = J+D RDC data type to the N-state model target function. The J couplings are sent into the target function class when the 'T' RDC data type is encountered. These measured values are then added to the back-calculated RDC values to produce T(theta), which is then compared to T via the chi-squared function (a sketch follows this list).
- Fix for the new specific_analyses.n_state_model.data.opt_uses_j_couplings() function. The cdp.rdc_data_types structure appears not to have all alignment IDs within it.
- Removed the check for Numeric Python in the dep_check module. This Python module has not been used within relax for the better part of a decade, so the check is no longer needed.
- Added the j_coupling module to the pipe_control __all__ list.
- Fix for the pipe_control.rdc.q_factors() for T = J+D type data. The Q factor normalisation was incorrect, as the J coupling should be subtracted from T first.
- Unit test fixes for the N-state model. This is needed due to the recent package rearrangements.
- Removed the absolute argument for all of the lib.alignment.rdc functions. This should be performed at the level of the target function, as mathematical operations may be required prior to taking the absolute value.
- Fixes for the N-state model target functions for the lib.alignment.rdc changes. The absolute value is now calculated within the target function rather than when back calculating the RDCs.
- Errors are now handled correctly for the N-state model when T = J+D values are used for the RDCs. The error is the square root of the average variance of the RDC error and J coupling error.
- The RDC back-calculation function now supports T = J+D values.
- Created the N_state_model.test_absolute_T system test. This is for checking the optimisation of absolute T=J+D values to find alignment tensors.
- Epydoc docstring fix for the RelaxTestResult.write_time() method.
- Created a script to look through the entire relax source tree for unused imports (a sketch of the idea follows this list).
- Removed a large amount of unused imports throughout the relax code base. These were identified by the new ./devel_scripts/find_unused_imports.py script together with pylint.
- Fixes for the pipe_control.rdc module for when the structure cdp.rdc_data_types is missing.
- Improvements to the devel_scripts/find_unused_imports.py script.
- More cleanups of unused imports throughout relax.
- Fixes for how the devel_scripts/find_unused_imports.py script runs pylint.
- More cleanups of unused imports throughout relax.
- Fixes and expansion of the test_suite.unit_tests._lib package __all__ list.
- Fixes and improvements to Gary Thompson's unit_test_runner.py script. The printouts have been improved and the script can now handle more than 3 levels of directories for a package.
- The unit_test_runner.py script now defaults to verbose mode.
- More cleanups of the unit_test_runner.py script.
- Added a printout to the unit_test_runner.py if the TestCase class cannot be found. Previously the test loading continued silently, without any warning that the TestCase class name was missing or incorrect.
- Added a missing import to the unit test module for the lib.frame_order.matrix_ops module.
- Shifted the spin_id_to_data_list() function from pipe_control.selection to lib.selection. This is because the selection object requires this function, and the function has nothing to do with the relax data store.
- Lots of import cleanups including removal of '*' imports, missing imports, and unused imports.
- Small change to the find_unused_imports.py printouts.
- Large removal of unused imports throughout relax found using the devel_scripts/find_unused_imports.py script.
- Clean up of all the imports in the relax code base. This is mainly alphabetical reordering of the imports required due to the huge layout changes in the trunk.
- Shifted the user function initialisation from the import of the user_functions package to the package initialise() function. This is for saner import dependencies in the relax sources.
- The lib.io.open_write_file() function now catches file names of None and raises a RelaxError. This is useful for the GUI if the user forgets to select a file name.
- The rdc.corr_plot user function can now handle T=J+D type data.
- The N-state model analysis can now handle RDC data of mixed D and T=J+D.
- Added support for mixed RDC data types per alignment. This is to allow, for example, one bond RDC values of the 'D' data type and two bond RDC values of the T = J+D data type to be loaded for the same alignment ID. This is now handled in the N-state or ensemble analysis by handling a different RDC data type per RDC value.
- The Peak_lists.test_bug_20873_peak_lists system test is now skipped if the C modules are not compiled. This test requires the presence of the C modules.
- Added a completely empty PNG image to use in the new analysis GUI wizard for blank buttons. This will be used in the relax_disp branch to eliminate a Mac OS X only bug.
- Added the scripts for backing up the relax SVN repository and mailing lists to the repository. This is to make it easier for others to set up the backups on their systems.
- Added comments to the backup scripts to make it easier to use them.
- Added the listings package to the relax user manual LaTeX file. This will be used to improve the formatting and look of relax scripts in the manual.
- Started to convert the relax user manual to use the lstlisting environment for scripts. This is to prettify the scripts in the manual.
- Improvements to the script UI section of the NOE chapter of the user manual. The lstlisting environments now have the correct numbering to match the script at the start, comments have been copied into the split-up script elements, and a few comments have been improved.
- The NMRPipe script in the relaxation curve-fitting chapter of the manual now uses lstlisting. The language has been explicitly set to csh to override the global default of Python.
- Converted all of the relaxation curve-fitting chapter of the user manual to the lstlisting environment. This is for all parts of the script UI section of the chapter.
- Converted all of the model-free chapter of the user manual to the lstlisting environment. This is for all parts of the script UI section of the chapter.
- Converted all of the J(ω) mapping chapter of the user manual to the lstlisting environment. This is for all parts of the script UI section of the chapter.
- Converted all of the Consistency testing chapter of the user manual to the lstlisting environment. This is for all parts of the script UI section of the chapter.
- Created a new listings language definition for relax for the user manual. This is for better highlighting of relax scripts and code in the relax manual.
- Added an EPS version of the 128x128 J coupling icon for use in the relax user manual.
- Removed some junk text from the relax script text in section 6.3.8 of the user manual.
- The relax language definition is now auto-generated by the fetch_docstrings.py script. This is for use in the relax user manual using the listings package. The fetch_docstrings.py script now creates the docs/latex/script_definition.tex file. This is used by the relax.tex file via an \include{} statement. This setup allows all of the relax user functions to be dynamically set as keywords for the relax language definition.
- Converted all of the Development chapter of the user manual to use the listing package. This is for all of the code examples, which are now much more colourful.
- Small typo fix for the relaxation curve-fitting chapter of the user manual.
- Fixed some out of date script code for the relaxation curve-fitting chapter of the user manual.
- Added a section label to the relaxation curve-fitting chapter of the user manual.
- Adding a test data file in NMRPipe SeriesTab format. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. A file in NMRPipe SeriesTab format is added to the test-suite for further development.
- Test function for NMRPipe SeriesTab format implemented. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. An assertEqual test is implemented for the reading of NMRPipe SeriesTab format. The standalone call is: relax -s Peak_lists.test_read_peak_list_NMRPipe_seriesTab.
- Adding an NMRPipe function file in the folder lib/software/nmrpipe.py. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. Initial file for lib/software/nmrpipe.py. This file will hold the function calls handling the NMRPipe SeriesTab format.
- Fix for commit (http://article.gmane.org/gmane.science.nmr.relax.scm/18004). The spin naming was wrong. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. The 'spin_id' keyword should be supplied differently, e.g. spin.name(name='NE1', spin_id=':62').
- Format auto-detection implemented for the NMRPipe SeriesTab format. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. A file is identified as NMRPipe SeriesTab if the first two words of the first line are 'REMARK SeriesTab' (a sketch follows this list).
- Update of the rotation matrix example in the intro chapter of the user manual. The function is now in lib.geometry.rotations.euler_to_R_zyz(). The example has also been converted to the lstlisting environment for better formatting (a numpy sketch of the z-y-z convention follows this list).
- The relax prompt strings and help system are now keywords for the relax listings package definition. The prompt strings "relax>" and "relax|" are now recognised as keywords and are coloured blue. The help system has been added as a normal Python keyword for highlighting.
- Converted all relax prompt examples in the intro chapter of the manual to the lstlisting environment. This is simply for a more colourful representation.
- The prompt examples in the user function chapter of the manual now use the listing environment. This is via the fetch_docstrings.py script and results in much better formatting of these subsections.
- Added the function destination for the auto-detected NMRPipe SeriesTab format. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. Auto-detected NMRPipe SeriesTab files now result in a call to the nmrpipe.read_list_intensity_seriestab() function in lib/software/nmrpipe.py.
- Imported the missing lib.software.nmrpipe module into pipe_control.spectrum. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. The modules expected for use in lib/software/nmrpipe.py are now imported.
- Release checklist update - the minfx and bmrblib versions have been updated to the newest versions.
- Spacing fix in an import statement (found using the 2to3 conversion program).
- Added the relax wiki backup script for dumping the MySQL database contents locally. This is from http://article.gmane.org/gmane.science.nmr.relax.devel/4163.
- Added the script from Troels Linnet for backing up the relax wiki via FTP. This is from the post http://article.gmane.org/gmane.science.nmr.relax.devel/4168.
- Added a link to Troels' relax-devel mailing list post to the relax wiki FTP backup script. The link is http://article.gmane.org/gmane.science.nmr.relax.devel/4168.
- The relax info printout now works in the absence of the bmrblib module.
- Added some Oxygen icons for a boolean GUI input element. The media-record-relax-green.png files are the media-record.png files with the hue set to 117.
- Created a boolean input element for the auto-analyses of the GUI. This is a simple on/off toggle.
- The boolean GUI auto-analysis input element now has a SetValue() method.
- Completed the NMRPipe SeriesTab reader. Progress sr #3043 - support for NMRPipe seriesTab format *.ser. The reader now handles assignments according to the SPARKY format. Changes implemented according to http://article.gmane.org/gmane.science.nmr.relax.devel/4120.
- Extraction of NMRPipe SeriesTab data changed. Progress sr #3043 - support for NMRPipe seriesTab format *.ser. The extraction of NMRPipe SeriesTab data has been changed in the read() function of pipe_control/spectrum.py.
- Added a flag for single or multiple spectrum extraction. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Flag change added to reading of NMRPipe SeriesTab. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Some small edits to the intro chapter of the relax user manual.
- Many improvements to the indexing in the relax user manual.
- Removed the flag for single_spectrum. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Fixed wrong reference to Sparky format. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Modified the intensity list to handle intensities for all spectra per spin. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Fixed the extraction of NMRPipe seriestab data in pipe_control.spectrum.read(). Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Fix for handling the reading of spins of the type heteronuc='NE1' and proton='HE1'. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Adding an NMR seriesTab data file for a multiple column / multiple spectrum formatted file. This file is from https://gna.org/support/download.php?file_id=18618, attached to the support request https://gna.org/support/?3043 by Troels Linnet. This is the output of the command "seriesTab -in ../../peaks.dat -out seriesTab_multi.ser -list nmrfiles.list -sum -dx 1 -dy 1", where nmrfiles.list contains file references to 10 .ft2 files.
- Fix for unit test of nmrpipe. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Replaced a pointer-reference structure with the creation of an empty list of lists. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- The spin IDs in seriesTab_multi.ser were not correctly formatted according to the SPARKY format. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Added a system test for reading a multi-column formatted NMRPipe seriesTab file. Progress sr #3043 - support for NMRPipe seriesTab format *.ser. The reference data for the system test was generated in Excel. The spectrum IDs are auto-generated by supplying the keyword spectrum_id='auto'. The first few tests matched against integers rather than floats, so '.0' has been added to the end of each number. Spaces have also been added after the commas in the self.assertAlmostEqual() calls, an issue highlighted by the 2to3 conversion program (for Python 2 to Python 3 conversion).
- Added a check that the number of supplied spectrum IDs matches the number of returned intensity columns. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Made it possible to auto-generate spectrum IDs when spectrum_id='auto' is supplied (a script sketch follows this list). Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Removed entries from the data list where empty lists start. These are created where spins are skipped for the ID '?-?'. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Moved the checks for matching lengths of the spectrum ID and intensity column lists. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Moved the function for adding the spectrum ID (and ncproc) to the relax data store. Progress sr #3043 - support for NMRPipe seriesTab format *.ser. Shifting this to a later point prevents the cdp.spectrum_ids list from being populated when the user calls the user function incorrectly.
- Added epydoc documentation to pipe_control.spectrum.read() for the case of supplying the keyword 'auto'. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Added a GUI description for supplying 'auto' as the spectrum_id. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Added a stub GUI description in the File formats section for NMRPipe seriesTab. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
- Fix to ensure two spaces are used after a period in the documentation. Progress sr #3043 - support for NMRPipe seriesTab format *.ser. relax uses the double space to make it easier for the eye to pick up the sentence structure.
- The relax user manual is now broken into parts. The higher level LaTeX part command is now used to group related chapters. This should make it easier for users to navigate this huge thing.
- Creation of the optimisation chapter of the relax user manual. The main text of this chapter originates as part of the model-free chapter. As most of this text was not model-free specific, it has been spun out as its own chapter. Text has also been taken from the "Optimisation of relaxation data – values, gradients, and Hessians" chapter. The indexing for the optimisation topics has also been improved.
- Changed the chapter layout of the relax user manual. The development chapter has been moved forwards.
- Fix for the spectrum.read_intensities user function docstring. Grammatically, the text "spectrum ID's" should be "spectrum IDs". The problem though was that this text was strangely causing the user manual compilation to fail.
- Added subsubindexing for the optimisation algorithm index entries.
- Added extensive cross-referencing to the index of the relax user manual.
- Added some hyphenation rules for better formatting in the user manual. For this, the external hyphenation.tex has been created.
- Better indexing in the relax user manual. The imakeidx LaTeX package is now used instead of makeidx, and the hyphenation has been improved.
- Lots of spelling fixes for the relax user manual.
- Updated the minimum Python version from 2.3 to 2.5 in the user manual.
- Epydoc docstring fix for the pipe_control.spectrum.read() function. The text "Z_A{i}" causes problems when compiling the API documentation, so it has been changed to "Z_Ai".
- Python 3 fix for the new test_suite.clean_up module. The exceptions Python module does not exist in Python 3, so instead the relax compat.builtins object is being used to store the WindowsError variable of None.
- Added a paragraph to the installation chapter of the manual about not supporting the EPD.
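The following is a rough sketch of the new lib.statistics functions listed above. The names bucket() and gaussian() come from the entry, but the argument lists and return types shown here are assumptions, not the actual relax signatures.

    from math import exp, pi, sqrt

    def gaussian(x, mu=0.0, sigma=1.0):
        # Probability density of the point x on a Gaussian distribution (assumed signature).
        return exp(-(x - mu)**2 / (2.0 * sigma**2)) / (sigma * sqrt(2.0 * pi))

    def bucket(values, bin_size=0.1):
        # Create a discrete distribution from a list of floating point numbers (assumed signature).
        counts = {}
        for value in values:
            key = round(value / bin_size) * bin_size
            counts[key] = counts.get(key, 0) + 1
        total = float(len(values))
        return [(key, counts[key] / total) for key in sorted(counts)]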
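For the T = J+D support described above, this is a minimal sketch of how the back-calculated value, its error, and a single chi-squared term could be assembled. The standalone helper functions and the numbers are hypothetical; the real code lives in the N-state model target function.

    from math import sqrt

    def back_calc_T(J, D_back_calc):
        # T(theta) is the measured J coupling plus the back-calculated RDC.
        return J + D_back_calc

    def T_error(D_err, J_err):
        # The square root of the average variance of the RDC error and J coupling error.
        return sqrt((D_err**2 + J_err**2) / 2.0)

    # A single chi-squared contribution for one measured T value (made-up numbers).
    T_measured, J, D_back, D_err, J_err = 102.3, 94.0, 8.1, 0.5, 0.3
    chi2_term = (T_measured - back_calc_T(J, D_back))**2 / T_error(D_err, J_err)**2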
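The unused import search mentioned above could in principle be sketched with the standard ast module, as below. The real devel_scripts/find_unused_imports.py works together with pylint, so this simplified stand-alone version is only an illustration of the general idea.

    import ast, sys

    def unused_imports(path):
        # Return the imported names of a Python file which never appear as a name in the source.
        with open(path) as file:
            tree = ast.parse(file.read())
        imported = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imported.update(alias.asname or alias.name.split('.')[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom):
                imported.update(alias.asname or alias.name for alias in node.names)
        used = set(node.id for node in ast.walk(tree) if isinstance(node, ast.Name))
        return sorted(imported - used)

    if __name__ == '__main__':
        for file_name in sys.argv[1:]:
            print("%s: %s" % (file_name, unused_imports(file_name)))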
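The SeriesTab auto-detection rule from the entry above is simple enough to sketch directly. The helper name is hypothetical and not the actual relax code.

    def is_nmrpipe_seriestab(file_path):
        # A file is identified as NMRPipe SeriesTab if the first line starts with 'REMARK SeriesTab'.
        with open(file_path) as file:
            first_line = file.readline()
        return first_line.split()[:2] == ['REMARK', 'SeriesTab']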
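For the rotation matrix example mentioned above, one common form of the z-y-z Euler angle convention can be written out with numpy as below. This standalone sketch does not reproduce the exact signature or rotation ordering of lib.geometry.rotations.euler_to_R_zyz(), which should be checked in the relax manual.

    from numpy import array, cos, sin, dot

    def R_zyz(alpha, beta, gamma):
        # One common composition for z-y-z Euler angles: R = Rz(gamma) . Ry(beta) . Rz(alpha).
        Rz_a = array([[cos(alpha), -sin(alpha), 0.0], [sin(alpha), cos(alpha), 0.0], [0.0, 0.0, 1.0]])
        Ry_b = array([[cos(beta), 0.0, sin(beta)], [0.0, 1.0, 0.0], [-sin(beta), 0.0, cos(beta)]])
        Rz_g = array([[cos(gamma), -sin(gamma), 0.0], [sin(gamma), cos(gamma), 0.0], [0.0, 0.0, 1.0]])
        return dot(Rz_g, dot(Ry_b, Rz_a))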
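A hedged relax script fragment for the auto-generated spectrum IDs described above. The directory is hypothetical, and the fragment must be run through relax itself rather than plain Python.

    # Read a multi-column NMRPipe seriesTab peak list, auto-generating one spectrum ID per intensity column.
    spectrum.read_intensities(file='seriesTab_multi.ser', dir='peak_lists', spectrum_id='auto')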
Bugfixes
- Fix for bug #20674 - the failure of the consistency testing analysis. This was reported by Mengjun Xue <mengjun dott xue att mailbox dott tu-berlin dot de>. The problem was that the first residue did not have a single proton 'H' in the PDB file, and therefore the dipolar relaxation interaction was not set up. The overfit_deselect() method of the consistency testing specific API was not checking for this. The method is now much more like that of the model-free specific analysis.
- Fix for the model-free analysis specific overfit_deselect() method. The tests for the presence of dipolar relaxation were not correct and were non-functional.
- Fix for the J(ω) mapping analysis matching that for the consistency testing. The overfit_deselect() method is now identical to that of the consistency testing analysis.
- Fix for bug #20683 - the infinite and NaN data in Bruker DC files. This was reported by Mengjun Xue <mengjun dott xue att mailbox dott tu-berlin dot de>. The model-free specific overfit_deselect() method now checks for infinite relaxation data and deselects the spin if such data is encountered.
- Fix for the analysis specific API common method _data_init_spin(). The data types are now correctly checked - they are not strings but types.
- Fix for the relaxation curve-fitting _assemble_scaling_matrix() method. The intensity scaling was never activated before due to a lower vs. uppercase parameter name mismatch. This scaling is now correctly set up as the previous code assumed cdp.relax_times was a list whereas it has been a dictionary since the early 1.3 releases.
- The grid search bounds for the relaxation curve-fitting are no longer affected by scaling. The parameter scaling recently activated revealed a bug in the lower and upper data structures for the grid search in that these were continuously scaled down.
- Fix for the target_functions.relax_fit C module - the scaling was incorrectly performed.
- Fix for the relaxation curve-fitting _back_calc() method for the changes to the C module. The setup() method requires that the scaling matrix is converted to a list of the diagonal elements.
- Fix for the analysis specific API common _return_value_general() method. The value of None is now handled properly when a simulation value is asked for.
- Restored the default behaviour of the spin_loop(). The skip_desel flag is now functional and defaults to False.
- Fix for the relax_times and intensities parameter definitions for specific_analyses.relax_fit. These are dictionaries, not lists.
- Fix for the spectrum.error_analysis user function for replicated spectra and subsets. A second call to spectrum.error_analysis was removing the results from the first call. This is now avoided.
- Bug fix for the right click popup menu in the spectra list GUI element. This affects the NOE, R1, and R2 analyses. The actions of the menu items were all mixed up.
- Fix for the nasty bug #20756. The problem was that the global execution lock was not always released by a relax script when certain errors are raised during the script execution. This does not occur for all types of error though. Now the release of the lock has been shifted into the 'finally' statement to absolutely force lock release.
- Big bug fix for a memory leak in the relaxation curve-fitting C module. Proper reference counting is used for the temporary 'element' Python objects used in the conversion between Python and C objects. The use of the Py_CLEAR() macro removes the memory leak. However the number of references as seen by sys.gettotalrefcount() in a debugging Python version keeps rising and might be a problem in the future.
- Big bug fix for the value.copy user function - it is now functional again.
- Bug fix for the value.copy user function. The user function can now handle parameter errors, and the values are set in the correct data pipe.
- Bug fix for an incorrect print statement in the N_state_model.test_paramag_centre_fit system test. This is in the script, and was uncovered using WinPython by Troels E. Linnet via the relax system tests at http://thread.gmane.org/gmane.science.nmr.relax.devel/3863. The Python bug was detailed at http://thread.gmane.org/gmane.science.nmr.relax.devel/3863/focus=3867.
- Fix for the package checking as part of the unit tests. This was identified from the bug report #20820 submitted by Troels E. Linnet. The problem was that on some systems, the full path is required for checking the presence of the directories which are the sub-packages of the main package being checked. The result was that checking for the package in the __all__ list was skipped. Note that this change does not fix the bug reported.
- Fixes for the Jw.test_calc system test - the spectral density value comparison is now significant.
- Bug fix for the pipe_control.spectrometer.get_frequencies() function. The units argument was incorrectly referenced.
- Fix for bug #20820. Solution found - 'software' was not mentioned in __init__.py, and failed at import.
- Partial fix for bug #20873. The spectrum_id argument for the spectrum.read_intensities user function can now be both a string and a list of strings.
- Fix for bug #20873. This was reported by Troels E. Linnet. The ability to load multiple peak intensities from a single generic formatted file has been correctly implemented. This involves added checks to make sure that the user supplies reasonable arguments and to then loop over the intensity column argument.
- Python 3 fixes via the 2to3 program.
- Bug fix for the value.write user function for list or dictionary type data. This is for the case where the variable of one spin is set to None rather than a list or dictionary type.
- Bug fix for the Sequence GUI input element. This completes the removal of bug #20873. The problem was that the gui_to_str() conversion of the string into a string list was not failing as expected, so the list was treated as a single string. Now the first character is checked for '[' or '(' to identify lists or tuples, instead of relying on the conversion to trigger an error.
- Fixes for the value.write user function for simple parameter values of None. This is a recently introduced bug which causes a complete failure of the user function if the parameter for any spin is None.
- Fix for bug #20888, the autoscaling of Grace graphs. This solution was mentioned in the post at http://thread.gmane.org/gmane.science.nmr.relax.devel/3920/focus=3930. Instead of using minimum and maximum values for the axes in the Grace graphs produced by the lib.software.grace module, which was the old solution for having the graphs scaled to reasonable values, instead the '@autoscale' command is appended to the end of all graphs. This is performed by the write_xy_data() function.
- Bug fix for the running of the test suite in the relax GUI. The fix of r19727 was extended to apply to the GUI as well. Too many arguments were being sent into TextTestRunner Python class on certain Python versions (3.1 and ≤ 2.6).
- Big bug fix - the relax execution lock now truly supports nesting. This fixes bug #20891 reported by Troels Linnet. Scripts can now be executed from the GUI. Note that this is a very dangerous fix.
- Completed the fix for bug #20889. The problem was that the spectrum.read_intensities user function was incorrectly updating the cdp.spectrum_ids list when the spectrum_id argument is set to a list. The list of IDs was being set as a single element of cdp.spectrum_ids, causing problems with the GUI when updating the ComboBox choices and then subsequent setting of the spectrum IDs. This bug and fix is independent of the relax_disp branch, despite being uncovered there and being caught by the Relax_disp.test_bug_20889_multi_col_peak_list GUI test in that branch.
- Bug fix for the GUI element for the interatom.define user function. The special spin ID GUI elements can not be set to the get_spin_ids() function as then SetValue can no longer work for IDs not in the list.
- Fixes for the TestCase class names for a number of lib package modules. As the test class name was incorrect, previously the test suite was skipping these silently. This was dangerous.
- Fixes for the unit tests of the lib.selection module. The contains_*() methods should now be used. And the test_Selection_ful_spin_id() unit test has been completely deleted as this way of checking the selection object is no longer valid.
- Fix for bug #20910 - the broken grace.write GUI interface. The problem was that the Value GUI input element was not detecting list-type data returned by the wiz_combo_iter method.
- Fix for bug #20915 (https://gna.org/bugs/?20915) - failure of Grace opening in MS Windows. Troels E. Linnet provided this patch, which was discovered during work on a Windows 7 system (telinnet aaattt bio_dot_ku_dot_dk). This is a small fix for a wrong call to "raise RelaxMissingBinaryError(binary)" when issuing an external call to xmgrace. The "path_sep" value would equal [\/], so the RE search would not find the full path specified for the xmgrace file. The check has now been shifted to Python's os.path.isfile() (http://docs.python.org/2/library/os.path.html). Another fix is that the command "xmgrace" is provided as the standard. This works fine through the Windows cmd shell, but the true name of the program on Windows is "xmgrace.exe", so an additional search with ".exe" appended is also performed (a sketch follows this list).
- Fix for the N_state_model.test_absolute_T system test for Mac OS X. The precision of the check needed to be decreased.
- Fix for bug #20918, the hanging of the data pipe editor. This was reported by Troels Linnet and is an MS Windows only problem. The problem is in the wxMSW part of wxPython, and it may be fixed in newer wxPython versions. The issue is nevertheless now avoided by calling the GUI user function store objects with the arguments wx_wizard_sync=True and wx_wizard_modal=True. This appears to solve the problem.
- Decreased the precision of the check in the Frame_order.test_rigid_data_to_rigid_model system test. This is to allow the test to pass on a MS Windows 7 test machine.
- More MS Windows fixes, this time a nasty kludge, for the relax system tests. This is strangely needed for the relax_disp branch and not the trunk for a 64-bit MS Windows 7 test system. The reason why this WindowsError is triggered by the base tearDown() method in the relax_disp branch and not trunk is a total mystery. Actually why Windows refuses to complete the file close() operations of the results.write and state.save user functions before calling the tearDown() method is the greater mystery.
- Bug fix for the batch file permissions for executing Art Palmer's Modelfree4 program. This was identified in the post http://thread.gmane.org/gmane.science.nmr.relax.devel/3953/focus=4000. The file was set to be executable, but on Unix systems it would end up with the permissions "---x------".
- Small comment fix in the sample_scripts/consistency_tests.py script.
- Fix for the scons fetch_docstrings target. The user functions need to be explicitly initialised in the fetch_docstrings script as this is not running through relax.
- Fix for bug #20921, the GUI tests freezing in MS Windows. The problem was that the dipolar interaction wizard in the model-free auto-analysis GUI element was calling its user functions asynchronously. This can lead to racing conditions. The commit r80084 (http://article.gmane.org/gmane.science.nmr.relax.scm/17840) somehow randomly triggers this racing on MS Windows systems only together with the Mf.test_mf_auto_analysis GUI test. Now all user functions are called synchronously.
- Fix for the relax GUI splash screen. On certain systems, the GUI was failing due to the splash screen. It is now shown after the main wxPython window has been created.
- Fix for the new analysis wizard when running the GUI tests. If the create_button() method is called without a function argument, the wizard is still created. This is triggered in the relax_disp branch on certain systems.
- Bug fix for the spin parameter array always being converted to lowercase. This is in the data_store.mol_res_spin.SpinContainer._back_compat_hook() method. It always calls the _back_compat_hook_mf_data() method which converts the spin 'params' list entirely to lowercase. Now the _back_compat_hook() method first checks that the data pipe is that of a model-free analysis.
- Proper bug fix for the spin parameter array always being converted to lowercase. The previous fix was causing failures in certain cases - one system test and one GUI test were failing. Now the spin container is checked for the presence of the 'equation' variable to determine if this is a model-free data pipe.
- Fix for the relax version file for the relax user manual construction. This was causing 'scons user_manual_pdf' and related targets to fail when a local git repository is used (via git-svn).
- Bug fix for the page numbers in the index - these were often out by a few pages. The makeindex command was being run too early in the repetitive LaTeX compilation chain, causing the page numbers to be incorrect. It is now run twice to fix the problem.
- Fix for the spectrum.read_intensities user function in the GUI. The menu string was truncated to spectrum.read.
- Python 3 fix for the lib.software.nmrpipe.read_list_intensity_seriestab() function. The inbuilt Python filter() function does not return a list in Python 3, as previously, but rather a filter object. Therefore a call to list() is required to properly convert the data.
- An attempt at better handling MS Windows not releasing the file handle on time in the test suite. The system and unit test tearDown() methods should now be resilient to the strange MS Windows behaviour of not releasing the relax state files. The tearDown() method should now complete even when this error occurs. A delay of 3 seconds has been added when the WindowsError occurs to give the OS some time before attempting to delete the file again. If this fails, then the file deletion operation is skipped (a sketch of the idea follows this list).
- Better handling of temporary file and directory removal in the relax test suite. The new test_suite.clean_up.deletion() function has been created from the recent method of the same name. This is used by the tearDown() method of the system, unit, and GUI tests. It should prevent rare MS Windows errors from appearing due to the OS not releasing a temporary file after a close() call.
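A simplified sketch of the kind of binary check described in the xmgrace bug fix above. This is a stand-in illustration with a hypothetical helper name, not the actual relax code path.

    import os

    def find_executable(binary, candidate_dirs):
        # Return the full path of the binary, also trying the '.exe' suffix for MS Windows.
        for directory in candidate_dirs:
            for name in (binary, binary + '.exe'):
                path = os.path.join(directory, name)
                # os.path.isfile() replaces the old regular expression based path search.
                if os.path.isfile(path):
                    return path
        return None

    # Example usage - the PATH environment variable supplies the candidate directories.
    xmgrace_path = find_executable('xmgrace', os.environ.get('PATH', '').split(os.pathsep))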
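A minimal sketch of the delayed-retry deletion described in the last two entries above. The structure is assumed rather than copied from the real test_suite.clean_up.deletion() function. OSError is caught here because WindowsError is a subclass of it and does not exist on other platforms.

    import shutil, time

    def deletion(path):
        # Delete a temporary directory, tolerating MS Windows holding on to file handles.
        try:
            shutil.rmtree(path)
        except OSError:
            # Give the OS 3 seconds to release the file handles, then try once more.
            time.sleep(3)
            try:
                shutil.rmtree(path)
            except OSError:
                # If the deletion still fails, simply skip it.
                pass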
Links
For reference, the following links are also part of the announcement for this release:
Version 2 of relax
relax 2.2 series
relax 2.2.5
Description
This is a minor feature release. Improvements include the creation of Rex value files scaled to all spectrometer frequencies for the model-free auto-analysis [d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b] and some new capabilities in the structural API. Feel free to upgrade if you wish to use these new features.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 2.2.5
(24 March 2013, from /trunk)
http://svn.gna.org/svn/relax/tags/2.2.5
Features
- The files created by the value.write user function now include a header describing the parameter.
- The relax internal structural object now supports the merger of molecules. This can be useful if different domains of one system are in two PDB files or are split in the one file.
- The structure.delete user function can now be used to delete parts of molecules down to the level of individual atoms.
- Helix and sheet information from PDB files are now stored in the internal structural object as metadata. The structure.read_pdb and structure.write_pdb user functions will preserve this information.
- The numbers output by the value.display and value.write user functions can now be scaled.
- The model-free auto-analysis now generates field strength dependent Rex files for each field present.
Changes
- Added a comment to the output from value.display and value.write to describe the parameter. This idea is discussed at http://thread.gmane.org/gmane.science.nmr.relax.user/1428. The idea is to take the parameter description from the specific analysis API and add it to the top of the file or output. This is to help understand what the Rex values are. For example for the Rex parameter the first line would be: "# Parameter description: Chemical exchange relaxation (sigma_ex = Rex / omega**2)."
- Created the Structure.test_read_merge system test to test a new concept - the merging of structures. The idea is to add the merge argument to the structure.read_pdb user function to allow two different structures in two PDB files to be merged. This is useful if structures of individual domains have been solved separately and are located in two PDB files. Then with the merge flag, you will not need to use an external program or hand edit PDB files to join them.
- Added the merge flag to the structure.read_pdb user function. This currently does nothing.
- The merge flag for the structure.read_pdb user function is now propagated to the pack_structs() method. This structure API method calls the ModelList.merge_item() method which is yet to be implemented.
- The MolList.add_item() structural API method now returns the added molecule container. This is used by the pack_structs() method to alias the molecule, and will be required when structure merging is implemented.
- Whitespace fixes - replaced many instances of the tab character '\t' with 4 spaces.
- Implemented the merging of structural objects. This allows the merge flag of the structure.read_pdb user function to work.
- The printouts from the structure.read_pdb user function are now different with the merge flag set. The text now says that the molecules are being merged rather than added.
- Sections of molecules can now be deleted using the structure.delete user function. The atom ID argument has been added and this is now propagated into the internal structural object. This ID string can be used to delete subsets of the 3D structural data in the relax data store.
- Created the Structure.test_read_write_pdb_1UBQ system test. This is for checking the use of the structure.delete user function with the atom ID argument.
- The Structure.test_read_write_pdb_1UBQ system test now checks for HELIX and SHEET records. This is not implemented yet, but the idea is that the structure.read_pdb and structure.write_pdb should preserve the helix and sheet information present in the original PDB and that the internal structural object should store this information.
- Created the internal structural object _pdb_chain_id_to_mol_index() method. This will be used to convert PDB chain IDs, which are used to indicate different molecules in the PDB, into molecule indices for the internal structural object.
- HELIX PDB records are now read, stored, and written out by the internal structural object. This affects the structure.read_pdb and structure.write_pdb user functions. The helix is stored as a metadata type object - its elements do not correspond to the atoms in the structural object.
- SHEET PDB records are now read, stored, and written out by the internal structural object. This affects the structure.read_pdb and structure.write_pdb user functions. The sheet is stored as a metadata type object - its elements do not correspond to the atoms in the structural object.
- Created 13 unit tests of the Internal._trim_helix() internal structural object method.
- Added the index_flag argument to all structural API atom_loop() methods.
- Implemented the internal structural object _trim_helix() method. This is used when the structure.delete user function is called to trim and remove the helix metadata. For this to work, the additional method _residue_data() was written to create a dictionary with residue numbers as keys and residue names as values. This dictionary is used by _trim_helix() to change the residue names in the helix metadata.
- Created 13 unit tests of the Internal._trim_sheet() internal structural object method. These mirror the 13 unit tests of Internal._trim_helix().
- Implemented the Internal._trim_sheet() internal structural object method. This is also now used by the structure.delete user function to remove sheet metadata for residues which no longer exist.
- Modified the ScientificPython structural object atom_loop() method to match the internal object. If only one element is returned from the atom_loop(), then this is returned as a single item rather than a tuple of length 1.
- Lots of fixes for the change to the structural API atom_loop() method. This method when returning a single item now returns a single item rather than a tuple of length 1.
- The index_flag argument to the ScientificPython structural object atom_loop() method is now used.
- Created the Structure.test_metadata_xml system test. This is used to check that the structural metadata (currently helices and sheets) are stored in the relax XML save files and then can be read back into relax again.
- The helix and sheet metadata is now stored in and read from relax XML state files.
- Added the scaling argument to the value.display and value.write user functions. The idea comes from a suggestion by Angelo Figueiredo <am dott figueiredo att fct dott unl dott pt> and was discussed at http://thread.gmane.org/gmane.science.nmr.relax.user/1428/focus=1430. This allows the user to scale parameters to any value, for example scaling the Rex value to the field strength dependent value.
- The model-free auto-analysis (the dauvergne_protocol [d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b]) now generates field strength dependent Rex files. The idea comes from a suggestion by Angelo Figueiredo <am dott figueiredo att fct dott unl dott pt> and was discussed at http://thread.gmane.org/gmane.science.nmr.relax.user/1428/focus=1430. One file per field strength is generated and named 'rex_600' for 600 MHz, for example. The new scaling argument of the value.write user function is used to scale the tiny field strength independent value used internally in relax to the Rex value in rad.s-1 that you would see in an R2 data set (a sketch follows this list).
- Added the new 'comment' argument to the value.write user function. This is used to add user comments to the top of the file.
- The model-free auto-analysis (the dauvergne_protocol module [d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b]) now adds comments to the Rex files. This is through the new comment argument of the value.write user function. These comments explain that the Rex values are scaled to the stated field strength.
- Modified the Mf.test_dauvergne_protocol system test to check for all the files and directories created.
- Created the new lib.text.sectioning module for formatting titles, subtitles and other sectioning text. The two functions title() and subtitle() have been implemented (a sketch of the idea follows this list).
- Created unit tests for the title() and subtitle() functions of the lib.text.sectioning module.
- Expansion of the lib.text.sectioning module. The following new functions have been added: box(), section(), subsection(), subsubsection(), subtitle(), subsubtitle(), underline().
- Expanded the unit testing of the lib.text.sectioning module to cover all title and section functions.
- Added prespace and postspace arguments to the *section() and *title() functions of lib.text.sectioning. Through these arguments, the amount of spacing above and below the section text can be controlled.
- Split the generic_fns.structure.geometric.create_rotor_pdb() function. The non-relax specific code has been shifted into the rotor_pdb() function.
- Initialised the lib.structure package - this is currently empty.
- Shifted the rotor creation components from generic_fns.structure.geometric to lib.structure.rotor. The create_rotor_pdb() function remains in place as this is the user function backend which checks for data pipes and updates the status object, but the rotor_pdb() and create_rotor_propellers() functions have been moved into the relax library. The create_rotor_propellers() function has been renamed to lib.structure.rotor.rotor_propellers().
- Converted links in all docstrings to use the Epydoc hyperlink notation. This will allow links to be clickable for the API documentation.
- Added Epydoc hyperlink markup for the bug tracker in the system test docstring where missing. This is for a better API documentation.
- The lib.structure.rotor.rotor_pdb() rotor_angle argument should now be in radians. This does not affect the structure.create_rotor_pdb user function as the generic_fns.structure.geometric.create_rotor_pdb() function converts the value to radians prior to calling the rotor_pdb() function.
- The lib.structure.rotor.rotor_pdb() function can now handle structural models. The model number argument has been added to allow the rotor structure to be added to a single model, or to all models if not supplied.
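As described for the field strength dependent Rex files above, the internally stored field strength independent value (sigma_ex = Rex / omega**2) can be scaled back to rad.s-1 for a given proton frequency. This relax script sketch assumes the parameter name 'rex' and the shown value.write arguments, so check the user function help before copying it.

    from math import pi

    # The proton frequency of the spectrometer in Hz.
    frq = 600e6

    # Rex = sigma_ex * omega**2 with omega = 2*pi*frq, so the scaling factor is:
    scaling = (2.0 * pi * frq)**2

    # Write out the scaled Rex values (the file name matches the auto-analysis convention).
    value.write(param='rex', file='rex_600', scaling=scaling, comment='Rex values scaled to 600 MHz.')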
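A rough sketch of what the new sectioning helpers could look like. The behaviour, argument names and formatting are assumptions; the real lib.text.sectioning functions may space and decorate the text differently.

    import sys

    def title(file=sys.stdout, text='', prespace=3, postspace=2):
        # Format a title as overlined and underlined text with controllable blank lines around it.
        file.write('\n' * prespace)
        file.write('=' * (len(text) + 4) + '\n')
        file.write('= ' + text + ' =\n')
        file.write('=' * (len(text) + 4) + '\n')
        file.write('\n' * postspace)

    def section(file=sys.stdout, text='', prespace=2, postspace=1):
        # Format a section heading as simple underlined text.
        file.write('\n' * prespace)
        file.write(text + '\n')
        file.write('-' * len(text) + '\n')
        file.write('\n' * postspace)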
Bugfixes
- Fix for a copy and paste error in the Structure.test_read_merge system test.
- Fixes for all the Ap4Aase truncated PDB files. The atom numbers are now sequential, as defined by the PDB standard.
- Bug fix for the structural data consistency test in the pack_structs() structural API method. The index was not correct causing failures in certain rare cases.
- Python 3 fix for an import into the generic_fns.structure.internal module.
- Python 3 fixes for the relax version information for code checked out from the relax repository. The subversion version.revision() and version.url() functions now handle the Python 3 issue of Popen working with byte arrays instead of normal strings.
Links
For reference, the following links are also part of the announcement for this release:
relax 2.2.4
Description
This is a major bugfix release. System and unit test bugs in the Mac OS X application have been eliminated, the RMSD related functions for systems with old Numpy versions installed have been fixed, the system information printout when the relax path contains spaces now works, Python 3 fixes have been made throughout, problems with the last steps of the model-free auto-analysis under certain conditions have been resolved, and the value.write and value.display user functions no longer present a list of zero values when very small number are encountered (for example the field-strength independent Rex values from a model-free analysis). Upgrading is recommended.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 2.2.4
(17 March 2013, from /trunk)
http://svn.gna.org/svn/relax/tags/2.2.4
Features
- Creation of the structure.create_rotor_pdb user function for creating representations of the rotor frame order model.
Changes
- Updated the release checklist document to include the correct instructions for minfx and bmrblib. These are the packages bundled with relax (https://sourceforge.net/projects/minfx/ and https://sourceforge.net/projects/bmrblib/).
- Improvements for Python 2 and 3 compatibility. Much of the Python 2 versus 3 compatibility code, as well as the code handling different Python 2 versions and different Python 3 versions, has been shifted into the compat module. The different parts of relax now import from the compat module for modules/packages with different import semantics for different Python versions. In addition, the different handling of the bz2 and gzip modules for reading and writing files has been shifted from 'relax_io' into 'compat'.
- Updated the 2to3 checklist document to include multiple threads for faster operation.
- Eliminated the os.devnull import flag dep_check.devnull_import. This is not needed as the compat relax module defines os.devnull for Python ≤ 2.3. The devnull module is no longer part of the relax information printout.
- Added a more informative error message if the platform module is missing. This is for Python ≤ 2.2. The file from http://hg.python.org/cpython/file/2.3/Lib/platform.py can simply be copied into the lib/pythonX.X/ directory to fix this.
- Slight change to the message printed if the platform module is missing.
- Modified the script for running the relax test-suite on multiple Python versions. The pre-2.2 Python versions are now commented out as well as the abortive Python 3.0.
- Created the Mf.test_bug_20613_auto_mf_diff_tensor_pdb system test to catch bug #20613. This was reported by Angelo Miguel Figueiredo <am dott figueiredo att fct dot unl dot pt>. This test is a direct copy of the Mf.test_bug_20563_missing_ri_error system test. The only change is that the local tm global model results file (in the local_tm/aic/ directory) has been modified. These results were read into relax, the file test_suite/shared_data/structures/Ap4Aase_res1-12.pdb loaded into the data pipe, and the results saved again. This triggers the bug as the problem is the presence of structural data with the local tm global model being selected in the auto-analysis.
- Shifted all of the model-free specific analysis class documentation variables to the top. This is simply for better organisation of the code.
- Created the model-free write_doc class variable talking about the field strength independent Rex value. This has been added to the value.display and value.write user functions to explain that Rex values are very small and that the user needs to scale them up.
- Shifted all of the documentation variables to the top of the specific API_base class. This is for better organisation.
- Added the write_doc class variable to the specific analysis API class as an empty string. This is to fix the unit tests.
- Created the front end for the new structure.create_rotor_pdb user function. This will be used to create a PDB representation of a rotor motional model.
- Added file, directory and overwrite force arguments to the structure.create_rotor_pdb user function.
- Started to implement the backend of the structure.create_rotor_pdb user function.
- The internal structural object MolContainer.add_atom() method now returns the index of the new atom.
- Created the internal structural object MolContainer.last_residue() method.
- Fully implemented the structure.create_rotor_pdb user function. For this, the generic_fns.structure.geometric.create_rotor_propellers() function was created.
Bugfixes
- Fix for the system tests in the Mac OS X application binary. The Mf.test_bug_20563_missing_ri_error system test fails in the Mac OS X application binary. The problem is that the py2app extension used to build the Mac application decides that empty directories are not to be included in the app, so naturally the test fails when checking for these. Now empty results files have been added to these directories to trick py2app into including them.
- Fixes for the unit test package __all__ list checking. Now only *.py files and directories are checked. In some cases other files could be present in the packages, for example the object files when compiling the C modules. These would cause the unit tests to fail unnecessarily.
- Fixes for the unit test __all__ list checking for the lib package for the Mac OS X application. For some reason the py2app extension which creates the app merges the Python installation directory Resources/lib/python2.7 and the relax lib package into Resources/lib. Now 'python2.7' is blacklisted when checking the lib package so that the parasitic Python install location is ignored.
- Bug fix for the structure RMSD function for when old numpy versions are present. Older numpy versions do not have the ddof argument for the std() standard deviation function, therefore relax now catches this, calculates the biased standard deviation, and then multiplies the value by a correction factor to obtain the non-biased estimator (a sketch of this relation is given after this list).
- Bug fix for the relax system information module (info) for when spaces are present in the relax path. If relax is placed into a directory containing spaces, then the determination of the architecture of the compiled C modules fails.
- Python 3 fixes for the model-free analysis specific code. This was causing errors "AttributeError: 'dict_values' object has no attribute 'sort'".
- Python 3 updates and fixes using the 2to3 program.
- Bug fix for the external Scientific Python Geometry package. This is a strange Python 3 issue only triggered when the epydoc Python package is installed.
- Fix for bug #20613, the failure of the diffusion tensor PDB creation. This was reported by Angelo Miguel Figueiredo <am dott figueiredo att fct dot unl dot pt>. The problem was that the diffusion tensor PDB representation structure.create_diff_tensor_pdb user function was being called even when the local tm global model was selected. This naturally failed as there is no global diffusion tensor. Now this user function is avoided for the local tm global model.
- Fix for the value.write user function for very small parameter values (Rex for example). This was reported by Martin Ballaschk <ballaschk att fmp-berlin dott de> in the thread http://thread.gmane.org/gmane.science.nmr.relax.user/1397/focus=1402 and by Angelo Miguel Figueiredo <am dott figueiredo att fct dot unl dot pt> in the unrelated bug report at https://gna.org/bugs/?20613. The formatting string "20.15f" has been changed to "20.15g" to allow Python to decide if the normal decimal or exponential form of the number should be printed.
- Fix for a strange and extremely rare typo bug in the model-free specific analysis code. This was identified by Manish Chaubey <manish dott chaubey att tuebingen dott mpg dott de> in the message at http://thread.gmane.org/gmane.science.nmr.relax.user/1422. This only occurs if a relaxation data error of zero is encountered and is a bug in the RelaxError message explaining the problem with the data.
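As a side note on the structure RMSD bugfix above, the correction factor for converting the biased standard deviation into the unbiased estimator is the standard sqrt(N / (N - 1)) relation. The snippet below is a minimal numpy sketch of that relation and is not the actual relax code.

```python
import numpy as np

# Arbitrary example data for the demonstration.
data = np.array([1.2, 1.9, 2.4, 3.1, 2.7])
n = len(data)

# Biased estimator (population formula), available on all numpy versions.
std_biased = data.std()           # equivalent to ddof=0

# Unbiased (sample) estimator, obtained either directly via ddof=1 or by
# scaling the biased value with the correction factor sqrt(N / (N - 1)).
std_unbiased = std_biased * np.sqrt(n / (n - 1.0))

assert np.isclose(std_unbiased, data.std(ddof=1))
```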
Links
For reference, the following links are also part of the announcement for this release:
relax 2.2.3
Description
This relax version is a major feature and bugfix release. It adds the new structure.add_model, structure.rmsd and structure.web_of_motion user functions, enhances the structure.load_spins and structure.find_pivot functions, and improves and updates PDB support for the internal structural object. The new 'lib' package is introduced which will, in the future, be an extensive collection of functions and special objects for all types of molecular dynamics analyses. The relax controller in the relax GUI has been improved with line wrapping to allow all messages to be seen. Some major bugs affecting the model-free auto-analysis and PDB file creation have also been fixed. All users are recommended to upgrade.
Download
The new relax versions can be downloaded from http://www.nmr-relax.com/download.html. If binary distributions are not yet available for your platform and you manage to compile the binary modules, please consider contributing these to the relax project (described in section 3.6 of the relax manual, http://www.nmr-relax.com/manual/relax_distribution_archives.html).
CHANGES file
Version 2.2.3
(11 March 2013, from /trunk)
http://svn.gna.org/svn/relax/tags/2.2.3
Features
- Added the mol_name_target argument to the structure.load_spins user function. This allows spins from different molecules to be placed together in the same molecule container in the relax data store.
- Addition of two new user functions - structure.add_model and structure.rmsd.
- Created the structure.web_of_motion user function. This is used to create a special PDB file which represents the atomic motions between different structural models. Identical atoms of the selected models are concatenated into one model, within a temporary internal structural object, and linked together using PDB CONECT records.
- Better PDB support in the internal structural object: improvements and fixes in reading/writing, an update of the format to version 3.30, and faster PDB parsing.
- Creation of two new modules for better PDB support - generic_fns.structure.pdb_read and generic_fns.structure.pdb_write.
- Improvements to the structure.find_pivot user function, including the addition of the func_tol argument to better control the simplex optimisation and the use of the logarithmic barrier function to prevent the pivot from heading to infinity when the solution is a line (a sketch of the barrier idea is given after this list).
- Initialised a new package called 'lib' - this will in the future be an extensive collection of functions, methods, classes, objects, etc. useful for the study of all types of molecular dynamics.
- Line wrapping has been turned on in the relax controller in the GUI so that all text is visible.
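To illustrate the logarithmic barrier idea used by structure.find_pivot (see the feature item above), the sketch below adds a log-barrier penalty to a stand-in target function so that a simplex optimisation stays within a +/- 1000 Angstrom box. The target function, the scale factor and the use of scipy's simplex here are assumptions for illustration only - this is not the minfx or relax implementation.

```python
import numpy as np
from scipy.optimize import fmin    # Nelder-Mead simplex, for illustration.

BOX_LIMIT = 1000.0    # Constrain the pivot to +/- 1000 Angstrom, as in the text.

def target(pivot):
    """A stand-in target function - the real one measures the pivot quality."""
    return np.sum((pivot - np.array([10.0, -5.0, 2.0]))**2)

def target_with_barrier(pivot, scale=1e-5):
    """Add a logarithmic barrier that grows to infinity at the box edges."""
    if np.any(np.abs(pivot) >= BOX_LIMIT):
        return np.inf
    barrier = -np.sum(np.log(BOX_LIMIT - pivot) + np.log(pivot + BOX_LIMIT))
    return target(pivot) + scale * barrier

# The simplex optimisation now cannot leave the box.
pivot = fmin(target_with_barrier, x0=np.zeros(3), ftol=1e-10, disp=False)
```

With the barrier in place the simplex cannot wander off towards infinity along a rotation axis, which is the behaviour described for the pivot search when the solution is a line rather than a point.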
Changes
- The relax intro text now includes the repository URL for checked out code. This is for preserving better debugging and logging information, so that it is clear where the code comes from.
- Created the Structure.test_load_spins_mol_cat system test. This will be used to test a new 'mol_name_target' argument to the structure.load_spins user function.
- Created the Structure.test_delete_multi_pipe system test. This is to check that the structure.delete user function is operating on a single data pipe.
- Updated the Freecode instructions in the release checklist document.
- Created the simple Structure.test_delete_empty system test. This is to demonstrate a failure of the structure.delete user function when no structural data is present.
- Added a printout to structure.delete for when no structures are present.
- Created the Structure.test_rmsd system test. This test checks the currently unimplemented structure.add_model and structure.rmsd user functions.
- The structural API num_molecules() method can now handle no data being present.
- Implemented the structure.add_model user function.
- Added some more checks to the Structure.test_rmsd system test.
- Modified the structure.add_model calls in the Structure.test_rmsd system test to include model numbers.
- Added the 'model_num' argument to the structure.add_model user function.
- Modified the structure.add_atom user function to allow the position argument to be a rank-2 array. This allows a different coordinate for each model to be specified.
- Spun out the atomic_rmsd() and calc_mean_structure() functions into their own module. They were previously in the generic_fns.structure.superimpose module but are now in the new generic_fns.structure.statistics module.
- Added checks for the atomic information to the Structure.test_rmsd system test. This demonstrates a failure of structure.add_atom user function when specifying different positions for the different models.
- Docstring addition for the generic_fns.structure.statistics.atomic_rmsd() function.
- Implemented the structure.rmsd user function.
- Fixes for the Structure.test_rmsd system test - it now passes.
- Created a new float_object argument type which is used by the 'pos' argument of structure.add_atom. A new arg_check.float_object() function has been created to handle any float object greater than rank-0.
- Created the Structure.test_rmsd_ubi system test to better check the structure.rmsd user function. This uses the truncated ubiquitin ensemble in the test suite shared data directories. The RMSD matches the VMD 1.9.1 output.
- Added a new module generic_fns.structure.pdb_write for generating the PDB records. This decouples the formatting code from the internal structural object. The PDB format has been updated to version 3.30. There is one function for each PDB record, allowing this to be easily extended and kept up to date (a minimal sketch of the one-function-per-record idea is given after this list).
- Created the generic_fns.structure.pdb_read module. This replaces the internal structural object _parse_pdb_record() method which was handling both ATOM+HETATM and CONECT records. It should allow greater flexibility in reading data out of other PDB records in the future. There is one function per PDB record type in this module.
- Added the full 1UBQ PDB structure to the relax test-suite shared data directories. This is a small, very quick to read structure which will be used for validating the reading and writing of different PDB record types.
- Changes to the internal structural object. The _parse_models_pdb() method has been renamed to _parse_pdb_coord() and the opening of the PDB file shifted into the base load_pdb() method. This is in preparation for better parsing of PDB files to match the main sections of the PDB format, see http://www.wwpdb.org/documentation/format33/v3.3.html.
- Created the Structure.test_read_pdb_1UBQ to check the complete parsing of the complex PDB file. The test is currently quite basic and needs to check more of the internal structural object.
- Better checks for the atomic data in the Structure.test_read_pdb_1UBQ system test.
- Added a series of _parse_pdb_*() methods to the internal structural object. These correspond to each section of the PDB format version 3.30 http://www.wwpdb.org/documentation/format33/v3.3.html. These currently loop over the records of their section, returning the remaining PDB records. The aim is fast parsing and the breaking of the file into sections.
- Faster PDB parsing by the removal of the use of the re.search() function. Now line slices are directly compared instead.
- Added some more unit tests for the generic_fns.structure.pdb_read module. These tests are not yet complete, as it is unknown what these unimplemented functions will return.
- Completed the unit test of the generic_fns.structure.pdb_read.helix() function.
- Implemented the generic_fns.structure.pdb_read.helix() function.
- Created the Mf.test_bug_20531_molmol_macro_write_relaxfault system test. This is an attempt at catching bug #20531. It creates all of the m0-m9 and tm0-tm9 models, sets some parameter values, and then attempts to create all of the Molmol macros, PyMOL macros, Grace plots and parameter text files as present in the auto_analysis.dauvergne_protocol module [d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b].
- The spectrometer frequency is now set in the Mf.test_bug_20531_molmol_macro_write_relaxfault system test. This is needed for the Rex scaling.
- The spin name, element and isotope are now set in Mf.test_bug_20531_molmol_macro_write_relaxfault. This is required in this system test so that the macro creation is not skipped.
- Added some work-arounds for the model-free specific code for when no relaxation data is present. This is needed for the Rex scaling, as the ID of the first relaxation data set was being used to select the first frequency. As caught by the Mf.test_bug_20531_molmol_macro_write_relaxfault system test, this fails if no relaxation data is present.
- Expanded the unit test of the generic_fns.structure.pdb_read.sheet() function.
- Implemented the PDB SHEET record parsing function generic_fns.structure.pdb_read.sheet().
- Extended the PDB ATOM record reading unit test to be of 80 characters in length, as per the PDB definition.
- Created unit tests for the generic_fns.structure.pdb_write module. This currently covers the atom(), helix() and sheet() functions (the last 2 are not yet implemented).
- Implemented the PDB HELIX record writing function generic_fns.structure.pdb_write.helix().
- Improved PDB writing capabilities. The functions of the generic_fns.structure.pdb_write module now all use the _handle_none() function to prevent the text "None" from appearing in the PDB file, and _record_validate() to ensure the record has not been corrupted by bad input making it either shorter or longer than 80 characters.
- The Mf.test_bug_20531_molmol_macro_write_relaxfault system test now catches bug #20531. This now uses the results file attached to the bug report.
- Implemented the PDB SHEET record writing function generic_fns.structure.pdb_write.sheet().
- Created a unit test for the generic_fns.structure.pdb_write.het() function.
- Created the generic_fns.structure.pdb_write._handle_text() function. This private function is used to convert text into PDB suitable format (uppercase and values of None converted to empty strings).
- The diffusion tensor PDB files now conform better to the PDB standard. The HET records are now correct, only capitalised text is present in the files, and trailing whitespace to character 80 has been added.
- Epydoc docstring formatting for the generic_fns.structure.pdb_write module. These large changes improve the API documentation at http://www.nmr-relax.com/api/.
- Created a unit test for the generic_fns.structure.pdb_write.model() function.
- Added a new PDB file with 3 models and a few atoms for testing of the structure.web_of_motion user function.
- Created the Structure.test_web_of_motion_all system test. This is to check the new structure.web_of_motion user function.
- The structure.web_of_motion user function can now handle file objects as well as file names as input.
- Small fixes for the Structure.test_web_of_motion_all system test.
- Created the Structure.test_web_of_motion_12 system test to show how model sets are currently ignored.
- Implemented the models argument for the structure.web_of_motion user function. This was previously not being used and was caught by the Structure.test_web_of_motion_12 system test.
- Created the Structure.test_web_of_motion_13 system test. This was just to be sure that the models argument was correctly handled by the structure.web_of_motion user function.
- The structure.find_pivot user function now accepts the func_tol argument. This is used to terminate the simplex optimisation when this function tolerance value is reached.
- Shifted the ensemble pivot finding target function into the maths_fns package.
- Added a sentence to the README file about the sample_scripts directory.
- Added a document detailing the possible future layout of relax's packages.
- The structure.find_pivot user function now uses the logarithmic barrier function. This is for constrained optimisation and requires the newest minfx code. The pivot position is constrained within a box of +/- 1000 Angstroms from zero. This is needed for when the solution is an infinite line - i.e. a rotation axis and not a pivot point. Previously the simplex optimisation would head toward + or - infinity. But now with a logarithmic barrier, the simplex algorithm can stabilise and find a point on the axis very quickly, long before reaching the edges of the box.
- The structure.find_pivot user function now accepts the func_tol and box_limit arguments. This allows the function tolerance for the simplex optimisation to be specified, as well as the size of the box to constrain the pivot to be within.
- Initialised the lib.geometry package. This will be a library of all mathematics functions relating to geometry.
- Added empty packages to the unit tests for the lib and lib.geometry packages.
- Updated the maths_fns package __all__ list.
- Updated the test_suite.unit_tests package __all__ list to be more modern.
- The n_state_model.number_of_states user function no longer requires the N-state model to be defined. This was only needed to update the model information, and is skipped if not set.
- The generic_fns.structure.superimpose.find_centroid() function now prints out Euler angles as well.
- Large improvements to the checking for all the rdc and pcs user functions. The new check_pipe_setup() methods have been added to replace all other checking. This standardises all error checking and provides much better coverage. The result is that you will be much less likely to encounter a Python traceback when something is forgotten, and will instead be told via a RelaxError what is missing.
- The rdc.back_calc and pcs.back_calc user functions now warn if no data was calculated. This is to inform the user about problems at the place that they occur instead of later on with, for example, the creation of empty data files.
- Updated the float module to handle numpy floats. This makes the floatToBinaryString() function compatible with the numpy.float16 type.
- Removed the prune parameter from the backend of the monte_carlo.error_analysis user function. This was a dangerous parameter used to mimic the 'Trim' parameter from the Modelfree4 program. The result is bad statistics. The probable reason for the 'Trim' parameter was the failure of model-free models in the simulations, but this issue was solved using model elimination (see http://www.nmr-relax.com/refs.html#dAuvergneGooley06).
- Created the Structure.test_read_xyz_strychnine system test to demonstrate a bug in the XYZ parser. This is for the reading of XYZ structure files.
- Created the lib.text package for text manipulation. The first module will be the text formatting of tables.
- Created the lib.geometry.lines module for performing geometric operations with lines. This has one stub of a function lib.geometry.lines.closest_point() which will be used to find the closest point on a line to a given point.
- Added the package checking unit tests for the lib package.
- Improved the base class unit test for the package __all__ list. Subpackages are now also checked.
- Blacklisted a number of files in the maths_fns package for the package __all__ list unit test.
- Added a unit test for the lib.geometry package __all__ list.
- Created a unit test for the lib.geometry.lines.closest_point() function.
- Created the lib.text.table module. This originates from the prompt.uf_docstring module as most of that module is functions for creating formatted text tables.
- Updated the lib package __all__ list for the lib.text package.
- Implemented the closest_point() and closest_point_ax() functions of lib.geometry.lines. These two functions do the same thing - find the closest point on a line to any given point - but take different arguments to define the line.
- Improved the package __all__ list base unit test by skipping all hidden files and directories.
- Refactored the lib.text.table module. The create_table() function is now called format_table() and the table_line() function has been made private. All references to the user function tables and the relax status object have been removed and replaced by arguments to format_table().
- The prompt.uf_docstring module now uses lib.text.table.format_table(). This significantly simplifies the module.
- Removed a number of unused imports in prompt.uf_docstring.
- Deleted prompt.uf_docstring.table_line() as this is now a private function of lib.text.table.
- Fix for lib.text.table.format_table() as table_line() is now private.
- Added the spacing argument to lib.text.table.format_table(). This removes the reference to the user function table spacing variable from this function and shifts it to the prompt.uf_docstring.create_table() function.
- Created the framework for the unit tests of the lib.text package.
- Created two unit tests for the lib.text.table.format_table() function.
- Updates to the unit tests of the lib.text.table.format_table() function.
- Many improvements to the lib.text.table module. The format_table() function now accepts arguments for text to prefix and postfix to each line, the text padding to the left and right inside the table, and the text used to separate the columns. The _blank() and _rule() private functions have been added to create distinct table elements.
- Created the lib.text.table.MULTI_COL constant for defining cells spanning multiple columns. This is not used yet.
- Modified the Mf.test_mf_auto_analysis GUI test to catch bug #20603.
- Created a unit test for the lib.text.table.format_table() function to test multiple column support. Support for content spanning multiple cells is yet to be implemented.
- Implemented multi-column support in lib.text.table.format_table().
- Spacing between heading rows is now functional in lib.text.table.format_table().
- Created a new unit test of lib.text.table.format_table() to check for non-string type data.
- The table contents are now all converted to strings in lib.text.table.format_table(). This uses the _convert_to_string() private function.
- Converted the test_format_table4() unit test of lib.text.table.format_table() to check justification. The right justification of cells with numbers will be implemented to match these changes.
- Numbers are now right justified in cells in the lib.text.table.format_table() function.
- Modified the test_format_table4() unit test of lib.text.table.format_table(). This change is to test the currently unimplemented custom_format argument. This will be used to allow special formatting in the table. For example using '%.3f' for a float.
- Implemented the custom_format argument for lib.text.table.format_table(). This allows cell contents to be formatted as the user asks. It defaults to standard string conversion if the custom conversion fails.
- Rounding error fix for the test_format_table4() unit test of lib.text.table.format_table().
- Python 3 fix for the test_format_table4() unit test of lib.text.table.format_table(). The string representation of the builtin list object is different in Python 2 vs. 3.
- Created the test_format_table5() unit test for lib.text.table.format_table(). This test checks what happens if no header is given to format_table(). This currently fails.
- The lib.text.table.format_table() function can now create a table without headers.
- Added column number checks for the data input into lib.text.table.format_table().
- Created the test_format_table6() unit test for lib.text.table.format_table(). This test shows a problem when more than one multi-column cell is defined per row, as well as problems when a multi-column cell is wider than the sum of the widths of the columns it spans.
- Fix for lib.text.table.format_table() when more than one multi-column cell per row is encountered. The algorithm for determining the total width of the multi-column cell in _table_line() was not checking if the end of the span was being reached.
- The lib.text.table.format_table() function now handles overfull multi-column cells. The _determine_widths() private function has been created to better handle the determination of the table column widths. It will now extend the width of the last column to allow overfull multi-column cells to fit.
- Modified the test_format_table5() unit test of lib.text.table.format_table() to check bool types.
- The lib.text.table.format_table() function now handles boolean types.
- Booleans are not numbers, so do not right justify them in lib.text.table.format_table().
- The minfx.__version__ value is now read for the version in the relax information printout.
- The bmrblib.__version__ value is now read for the version in the relax information printout.
- All of the specific API data and error returning common methods can now handle missing data/errors. This affects the _return_data_relax_data() and _return_value_general() methods.
- Updated the release checklist to include information about updating the FSF directory.
- Modified the release checklist document to use the stable release tags of minfx and bmrblib. This is instead of the code in trunk which may not always be in a stable state.
- Redesign of the generic_fns.mol_res_spin.generate_spin_id() function. The function now tries to generate a unique ID based on the spin information in the specified data pipe. This is to attempt to fix a bug uncovered by the Structure.test_read_xyz_internal2 system test. Defaulting in all cases to the spin name rather than spin number will often fail for a small organic molecule, as the name in XYZ files is the atomic symbol and hence will almost never be unique.
- Created the generic_fns.mol_res_spin.return_molecule_by_name() function. This will be used in the future as it is much faster than generic_fns.mol_res_spin.return_molecule() if the molecule name is already known.
- Missing import affecting the generic_fns.interatomic.create_interatom() function.
- Reverted the last revision (r18737) as it was not correct and RelaxErrors should be used instead. The command used was: svn merge -r18737:18736 .
- Fix for the generic_fns.interatomic.create_interatom() function. RelaxNoSpinWarning has been replaced with RelaxNoSpinError.
- Fixes for the metadata update of the residue and spin name and number counts.
- Created the generic_fns.mol_res_spin.generate_spin_id_unique() function. This will return a truly unique spin ID string based on the current molecule, residue, and spin data structure.
- The spin_loop() function now uses generate_spin_id_unique() when the return_id flag is set. This ensures that the caller receives a unique spin ID which can be used to retrieve the corresponding spin container.
- Improved the generic_fns.mol_res_spin.generate_spin_id_unique() function. This can now work with molecule, residue, and spin names and numbers alternatively to the containers supplied as arguments. For this to work, the return_molecule_by_name() function has been improved and the functions return_residue_by_info() and return_spin_by_info() have been added.
- The pcs.read user function backend now uses generic_fns.mol_res_spin.generate_spin_id_unique(). This allows the matching spin container to always be returned for storing the data.
- Large speed ups of the Bmrb system tests by the deletion of most of the residues. On one system, this cuts the time for all 3 Bmrb tests from 70 to ~12 seconds.
- Added the profile flag keyword argument to the relax startup script for Unix-like systems. This is to simplify the switching on of profiling.
- Large cleanup and bugfixes for the molecule, residue, and spin data structure metadata maintenance. The bugs fixed are important for non-protein molecules. For example, if the spin name is not unique per residue, or per molecule if no residues are defined, many parts of relax would fail. All of the metadata_*() and spin_id_variants*() functions have been redesigned. It was also identified that metadata_prune() was being used by different parts of relax for two different purposes - the removal or pruning of metadata prior to the deletion of a data structure, and the cleanup of no longer valid metadata. These two goals conflicted, resulting in unpredictable behaviour. Therefore the new metadata_cleanup() and spin_id_variants_cleanup() functions have been created and the two behaviours separated.
- Fix for the bmrb.read user function for the recent molecule, residue and spin metadata improvements. The generic_fns.bmrb.generate_sequence() function now calls generic_fns.mol_res_spin.metadata_clean() to be sure that the metadata is correct. The problem is the structure of the BMRB file with no spin information in the entity record, hence the residues are created first and the spins much later in generate_sequence().
- Removed unused imports in the generic_fns.rdc module.
- The generic_fns.mol_res_spin.generate_spin_id_unique() function now handles missing spin containers. Previously if this function was used to generate a spin ID string of a spin not in the data store, it would fail. Now it generates an ID by defaulting to generate_spin_id().
- Converted many calls to generic_fns.mol_res_spin.generate_spin_id() to generate_spin_id_unique(). This will allow many future bugs to be avoided, as the spin ID string is most often used to retrieve spin containers. By using the generate_spin_id_unique() function, the returning of spin containers will always be correct.
- Created the Mf.test_bug_20563_missing_ri_error system test to catch bug #20563. The data added to the test suite is a highly truncated data set of an analysis completed using the data attached to the bug report.
- Modified the dauvergne_protocol model-free auto-analysis [d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b] to aid in debugging. The write_results_dir argument has been added to allow the test suite to read input from the test suite shared data directories and redirect output to a temporary directory.
- The files from the Mf.test_bug_20563_missing_ri_error system test are now placed in a temporary directory. This is essential for the test suite to prevent files from going everywhere.
- The frq.set user function units argument is no longer read-only. This is needed for some of the GUI tests in the frame_order_testing branch.
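As a rough illustration of the one-function-per-record approach described for the generic_fns.structure.pdb_write module above, the sketch below formats a single fixed-column ATOM record of the PDB v3.30 format and validates its 80-character length. The function name, argument list and validation are simplified assumptions and do not reproduce the actual relax code or its signatures.

```python
def write_atom_record(out_file, serial, name, res_name, chain_id, res_seq,
                      x, y, z, occupancy=1.0, temp_factor=0.0, element=''):
    """Format a single PDB v3.3 ATOM record (80 characters) and write it out."""
    record = "ATOM  %5d %-4s%1s%3s %1s%4d%1s   %8.3f%8.3f%8.3f%6.2f%6.2f          %2s%2s" % (
        serial, name, '', res_name, chain_id, res_seq, '',
        x, y, z, occupancy, temp_factor, element, '')

    # Validate the record length, mirroring the idea of a record sanity check.
    if len(record) != 80:
        raise ValueError("Corrupted ATOM record of length %i." % len(record))
    out_file.write(record + "\n")

# Usage with an in-memory file object.
from io import StringIO
pdb_file = StringIO()
write_atom_record(pdb_file, serial=1, name='N', res_name='MET', chain_id='A',
                  res_seq=1, x=27.340, y=24.430, z=2.614, element='N')
```

Keeping one such function per record type makes it straightforward to add further record types or to adjust the formatting when the PDB standard changes.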
Bugfixes
Links
relax 2.2.2
Description
Download
CHANGES file
Version 2.2.2
(12 February 2013, from /trunk)
http://svn.gna.org/svn/relax/tags/2.2.2
Features
- Improvements to the relax API documentation.
Changes
Bugfixes
Links
relax 2.2.1
Description
Download
CHANGES file
Features
N/A
Changes
Bugfixes
Links
relax 2.2.0
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 2.1 series
relax 2.1.2
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 2.1.1
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 2.1.0
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 2.0 series
relax 2.0.0
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
Version 1 of relax
relax 1.3 series
relax 1.3.16
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.15
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.14
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.13
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.12
Description
Download
CHANGES file
Features
Changes
Too many to list.
Bugfixes
Links
relax 1.3.11
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.10
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.9
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.8
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.7
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.6
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.5
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.4
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.3
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.2
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.1
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.3.0
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2 series
relax 1.2.15
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.14
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.13
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.12
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.11
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.10
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.9
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.8
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.7
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.6
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.5
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.4
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.3
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.2
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.1
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.2.0
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.0 series
relax 1.0.10
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.0.9
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.0.8
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.0.7
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.0.6
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.0.5
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.0.4
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.0.3
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.0.2
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.0.1
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links
relax 1.0.0
Description
Download
CHANGES file
Features
Changes
Bugfixes
Links