Relax release changes

Version 4 of relax

relax 4.0 series

relax 4.0.0

  • Deletion of the frame_order.average_position user function and all of the associated backend code. This user function allowed the user to specify five different types of displacement to the average moving domain position: a pure rotation, with no translation, about the pivot of the motion of the system; a rotation about the pivot of the motion of the system together with a translation; a pure translation with no rotation; a rotation about the centre of mass of the moving domain with no translation; and a rotation about the centre of mass of the moving domain together with a translation. The last option is now the default and only option. It is equivalent to applying the standard superimposition algorithm (the Kabsch algorithm) to a hypothetical structure at the real average position. The other four options are relics of the development of the theory - they limit its usefulness and would only cause confusion.
  • Clean up of the frame order target function code. This matches the previous change of the deletion of the frame_order.average_position user function. The changes include the removal of the translation optimisation flag as this is now always performed, and the removal of the flag which causes the average domain rotation pivot point to match the motional pivot point as these are now permanently decoupled.
  • Alphabetical ordering of functions in the lib.frame_order.pseudo_ellipse module.
  • Eliminated all of the 'line' frame order models, as they are not implemented yet. Only frontend code existed for these models - the backend does not exist.
  • Updated the isotropic cone CaM frame order test model optimisation script. Due to all of the changes in the frame order analysis, the old script was no longer functional.
  • Created a script for the CaM frame order test models for finding the average domain position. As the rotation about a fixed pivot has been eliminated, the shift from 1J7P_1st_NH_rot.pdb to 1J7P_1st_NH.pdb has to be converted into a translation and rotation about the CoM. This script will be used to replace the pivot rotation Euler angles with the translation vector and CoM rotation Euler angles. However, the structure.superimpose user function will need to be modified to handle both the standard centroid superimposition and a CoM superimposition.
  • Updated the CaM frame order test model superimposition script. The structure.superimpose user function is now correctly called. The output log file has been added to the repository as it contains the correct translation and Euler rotation information needed for the test models.
  • Parameter update for the isotropic cone CaM frame order test model optimisation script. The Euler angles for the rotation about the motional pivot have been replaced by the translation vector and Euler angle CoM rotation parameters.
  • Fix for a number of the frame order models which do not have parameter constraints. The linear_constraint() function was returning A, b = [], [] for these models, but these empty numpy arrays were causing the minfx library to fail. These values are now caught and the constraint algorithm turned off in the minimise() specific API method.
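    As a rough illustration of the pattern (the function and variable names here are generic, not the actual minimise() API code), the empty constraint matrices can be detected and the constraint algorithm switched off before the optimiser is called:

        import numpy as np

        def check_constraints(A, b):
            """Return (A, b, use_constraints), switching constraints off when empty.

            A and b are the linear constraint matrix and vector, as produced by a
            linear_constraint()-style function; empty arrays mean no constraints.
            """
            if len(A) == 0 or len(b) == 0:
                # Nothing to constrain - avoid passing empty arrays to the optimiser.
                return None, None, False
            return np.asarray(A), np.asarray(b), True

        # Example: a model without parameter constraints.
        A, b, use_constraints = check_constraints([], [])
        print(use_constraints)    # False - the constraint algorithm is turned off
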
  • Increased the precision of all the data in the CaM frame order test data generation base script. These have all been converted from float16 to float64 numpy types.
  • Fix for the RDC error setting in the CaM frame order test data generation base script. The rdc_err data structure is located in the interatomic data containers, not the spin containers.
  • Modification of the structure loading part of the CaM frame order data generation base script. The structures are now only loaded if the DIST_PDB flag is set, as they are only used for generating the 3D distribution of structures. This saves a lot of time and computer memory.
  • Huge speedup of the CaM frame order test data generation base script. By using multidimensional numpy arrays to store the atomic positions and XH unit vectors of all spins, and performing the rotations on these structures using numpy.tensordot(), the calculations are now a factor of 10 times faster. The progress meter had to be changed to show every 1000 rather than 100 iterations. The rotations of the positions and vectors are now performed sequentially, accidentally fixing a bug with the double motion models (i.e. the 'double rotor' model).
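    As a minimal sketch of the vectorised rotation (array shapes and names are illustrative, not the script's actual variables), storing all positions in one (M, 3) array lets a single numpy.tensordot() call replace a per-spin loop:

        import numpy as np

        def rotate_all(positions, R):
            """Rotate every row of an (M, 3) position array by the 3x3 matrix R.

            tensordot contracts the coordinate axis of the positions with the
            column axis of R, equivalent to applying R to each position vector.
            """
            return np.tensordot(positions, R, axes=([1], [1]))

        # Example: 1000 positions rotated by 90 degrees about the z-axis.
        positions = np.random.rand(1000, 3)
        R = np.array([[0.0, -1.0, 0.0],
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 1.0]])
        print(rotate_all(positions, R).shape)    # (1000, 3)
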
  • Modified the CaM frame order test data generation base script to conserve computer RAM. The XH vector and atomic position data structures for all N rotations are now of the numpy.float32 rather than numpy.float64 type. The main change is to calculate the averaged RDCs and averaged PCSs separately, deleting the N-sized data structures once the data files are written.
  • Complete redesign of the CaM frame order data generation base script for speed and memory savings. Although the rotated XH bond vector and atomic position code was very fast, the amount of memory needed to store these in the spin containers and interatomic data containers was huge when N > 1e6. The subsequent rdc.back_calc and pcs.back_calc user function calls would also take far too long. Therefore the base script has been redesigned. The _create_distribution() method has been split into four: _calculate_pcs(), _calculate_rdc(), _create_distribution(), and _pipe_setup(). The _pipe_setup() method is called first to set up the data pipe with all required data. Then the _calculate_rdc() and _calculate_pcs() methods are called, and finally _create_distribution() if the DIST_PDB flag is set. The calls to the rdc.back_calc and pcs.back_calc user functions have been eliminated. Instead the _calculate_rdc() and _calculate_pcs() methods calculate the averaged RDCs and PCSs themselves as numpy array structures. Rather than storing the huge rotated vectors and atomic positions data structures, the RDCs and PCSs are summed, and then divided by self.N at the end to average the values. Compared to the old code, when N is set to 20 million the RAM usage drops from ~20 GB to ~65 MB. The total run time on one system also decreases from a few days to a few hours (one to two orders of magnitude).
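    The memory saving comes from never storing the N rotated copies - only a running sum of the back-calculated values is kept and divided by N at the end. A simplified sketch of the pattern (the back-calculation function is a placeholder, not the relax code):

        import numpy as np

        def average_pcs(n_states, n_spins, back_calc_one_state):
            """Average PCS values over n_states states without storing them all.

            back_calc_one_state(i) is a placeholder returning an (n_spins,) array
            of PCS values for state i; only the running sum is held in memory.
            """
            pcs_sum = np.zeros(n_spins, dtype=np.float64)
            for i in range(n_states):
                pcs_sum += back_calc_one_state(i)
            return pcs_sum / n_states    # divide by N only at the very end

        # Example with a dummy back-calculation.
        print(average_pcs(1000, 5, lambda i: np.full(5, float(i))))
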
  • Changed the progress meter updating for the CaM frame order test data generation base script. The spinner was far too fast, updating every 5 increments, and is now updated every 250. And the total number is now only printed every 10,000 increments.
  • Improvements to the progress meter for the CaM frame order test data generation base script. Commas are now printed between the thousands and the numbers are now right justified.
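    For example, Python's string formatting gives both the thousands separators and the right justification in one step (a generic illustration, not the script's exact printout):

        # Right-justified counter with commas between the thousands.
        for count in (250, 10000, 1250000):
            print(format(count, ",d").rjust(12))
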
  • Large increase in accuracy of the RDC and PCS averaging. This is for the CaM frame order test data generation base script. By summing the RDCs and PCSs into 1D numpy.float128 arrays (for this, a 64-bit system is required), and then dividing by N at the end, the average value can be calculated with much higher accuracy. As N becomes larger, the numerical averaging introduces greater and greater amounts of truncation artifacts, so this change alleviates the problem.
  • Fix for the RDC and PCS averaging in the CaM frame order test data generation base script. For the double rotor model, or any model with multiple motional modes, the averaging was incorrect. Instead of dividing by N, the values should be divided by N^M, where M is the number of motional modes.
  • Huge increase in precision for the CaM frame order free rotor model test data. The higher precision is because the number of structures in the distribution is now twenty million rather than one million, and the much higher precision numpy.float128 averaging of the updated data generation base script has been used. This data should allow for a much better estimate of the β and γ average domain position parameter values for the free rotor models, which are affected by the collapse of the α parameter to zero.
  • Huge increase in precision for the CaM frame order double rotor model test data. The higher precision is because the number of structures in the distribution is now over twenty million (4500²) rather than a quarter of a million (500²), and the much higher precision numpy.float128 averaging of the updated data generation base script has been used.
  • Fix for the constraint deactivation in the frame order minimisation when no constraints are present.
  • Huge increase in precision for the CaM frame order rotor model test data. The higher precision is because the number of structures in the distribution is now 20 million rather than 166,666, and the numpy.float128 data averaging has been used.
  • Large increase in precision for the 2nd CaM frame order rotor model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1,000,001 and the numpy.float128 data averaging has been used.
  • Parameter update for the 2nd rotor CaM frame order test model optimisation script. The Euler angles for the rotation about the motional pivot have been replaced by the translation vector and Euler angle CoM rotation parameters.
  • Large increase in precision for the 2nd CaM frame order free rotor model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 999,999 and the numpy.float128 data averaging has been used.
  • Updated the CaM frame order test model superimposition script. The Ca2+ atoms are now deleted from the structures before superimposition so that the centroid matches that used in the frame order analysis.
  • The average domain rotation centroid is printed out when setting up the frame order target functions. This is to help the user understand what is happening in the analysis.
  • Faster clearing of numpy arrays in the lib.frame_order modules. The x[:] = 0.0 notation is now used to set all elements to zero, rather than nested looping over all dimensions. This however has a negligible effect on the test suite timings.
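    The two approaches, side by side (purely illustrative):

        import numpy as np

        matrix = np.ones((3, 3, 3))

        # Old approach: nested loops over every dimension.
        for i in range(3):
            for j in range(3):
                for k in range(3):
                    matrix[i, j, k] = 0.0

        # New approach: one slice assignment clears every element in place.
        matrix[:] = 0.0
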
  • Large increase in precision for the CaM frame order pseudo-ellipse model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
  • Improved the value setting in the optimisation() method of the CaM frame order system tests. This is in the base script used by all scripts in test_suite/system_tests/scripts/frame_order/cam/.
  • Changed the average domain position parameter values in the CaM frame order system tests. This is in the base script used by all scripts in test_suite/system_tests/scripts/frame_order/cam/. The translation vector coordinates are now set, as well as the CoM Euler angle rotations. These come from the log file of the test_suite/shared_data/frame_order/cam/superimpose.py script, and are needed due to the simplification of the average domain position mechanics now mimicking the Kabsch superimposition algorithm.
  • The CaM frame order system test mesg_opt_debug() method now prints out the translation vector. This is printed out at the end of all CaM frame order system tests to help with debugging when the test fails.
  • Change for how the CaM frame order system test scripts handle the average domain position rotation. The trick of pre-rotating the 3D coordinates, previously used to solve the {α, β, γ} → {0, β', γ'} angle conversion problem in the rotor models, no longer works now that the average domain position mechanics have been simplified. Instead, high precision optimised β' and γ' values are now set, and the ave_pos_alpha value set to None. The high precision parameters were obtained with the frame_order.py script located in the directory test_suite/shared_data/frame_order/cam/free_rotor. The free rotor target function was modified so that the translation vector is hard-coded to [-20.859750185691549, -2.450606987447843, -2.191854570352916] and the axis θ and φ angles to 0.96007997859534299767 and 4.0322755062196229403. These parameters were then commented out for the model in the module specific_analyses.frame_order.parameters so that only β' and γ' were optimised. Iterative optimisation was used with increasing precision, ending with a high precision run using 10,000 Sobol' points.
  • Updated a number of the CaM frame order system tests for the higher precision data. The new data results in chi-squared values at the real solution that are much closer to zero.
  • Change for how the CaM frame order free-rotor pseudo-ellipse test script handles the average position.
  • Added FIXME comments to the 2nd free-rotor CaM frame order model system test scripts. These explain the steps required to obtain the correct β' and γ' average domain position rotation angles.
  • Large increase in precision for the CaM frame order isotropic cone model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
  • Large increase in precision for the CaM frame order free-rotor, isotropic cone model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
  • Updated the CaM frame order free-rotor model test data set for testing for missing data. This is the data in test_suite/shared_data/frame_order/cam/free_rotor_missing_data. To simplify the copying of data from test_suite/shared_data/frame_order/cam/free_rotor and then the deletion of data, the missing.py script was created to automate the process. The generate_distribution.py script and some of the files it creates were removed from the repository so it is clearer how the data has been created.
  • Large increase in precision for the 2nd CaM frame order free-rotor, isotropic cone model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
  • Large increase in precision for the CaM frame order free-rotor, pseudo-ellipse model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
  • Large increase in precision for the CaM frame order pseudo-ellipse model test data set. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used.
  • Updated a number of the CaM frame order system tests for the higher precision data. The new data results in chi-squared values at the real solution that are much closer to zero. The free-rotor pseudo-ellipse models might need investigation, however, as the chi-squared values have increased.
  • Elimination of the error_flag variable from the frame order analysis. This flag was used to activate some old code paths which have now been deleted as they were never used.
  • Optimisation of the average domain position for the CaM frame order free-rotor models. The log file that shows the optimisation of the average domain position for the free-rotor models has been added to the repository for reference. This is for the simple free-rotor model, but the optimised position holds for the isotropic cone and pseudo-ellipse model data too. To perform the optimisation, the axis_theta and axis_phi parameters were removed from the model and hardcoded into the target function. As the rotor axis is known, this allows the average domain position to be optimised in isolation. Visual inspection of the results confirmed the position to be correct.
  • Fixes for the 2nd frame order free-rotor system tests. The average domain position parameters are now set to the correct values, matching those in the relax log file frame_order_ave_pos_opt.log in test_suite/shared_data/frame_order/cam/free_rotor2.
  • Updated the 2nd CaM free-rotor frame order system tests for the correct average domain position. The chi-squared values are now significantly lower.
  • Increased the precision of the chi-squared value testing in the CaM frame order system tests. The check_chi2 method has been modified so that the chi-squared value is no longer scaled, and the precision has been increased from 1 significant figure to 4. All of the tests have been updated to match.
  • The minimisation verbosity flag now affects the frame order RelaxWarning about turning constraints off.
  • Performed a frame order analysis on the 2nd CaM free-rotor model test data. This is to check that everything is operating as expected.
  • Small speedup for the frame order target functions for most models. The rotation matrix corresponding to each Sobol' point for the numerical integration is now pre-calculated during target function initialisation rather than once for each function call.
  • Updates for some of the frame order system tests for the rotation matrix pre-calculation change. As the rotation matrix is being pre-calculated, one consequence is that the Sobol' angles are now full 64-bit precision rather than 32-bit. Therefore this changes the chi-squared value a little, requiring updates to the tests.
  • Performed a frame order analysis on the CaM free-rotor model test data set. This is to demonstrate that everything is operating correctly.
  • Performed a frame order analysis on the CaM free-rotor model test data set with missing data. This is to demonstrate that everything is operating correctly.
  • Attempt to speed up the pseudo-elliptic frame order models. The quasi-random numerical integration of the PCS for the pseudo-ellipse has been modified so that the torsion angle check for each Sobol' point is performed before the tmax_pseudo_ellipse() function call. A new check that the tilt angle is less than cone_theta_y, the larger of the two cone angles, has also been added to avoid calling tmax_pseudo_ellipse() when the θ tilt angle is outside an isotropic cone defined by cone_theta_y.
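    A rough sketch of the screening order, assuming the torsion angle σ and the tilt angles θ and φ have already been unpacked from the Sobol' point (the tmax_pseudo_ellipse() shown here is only a stand-in using the usual elliptic cone form, so the sketch runs on its own - the real function lives in lib.frame_order.pseudo_ellipse):

        from math import cos, sin, sqrt

        def tmax_pseudo_ellipse(phi, theta_x, theta_y):
            """Stand-in for the pseudo-elliptic cone opening angle at azimuth phi."""
            return 1.0 / sqrt(cos(phi)**2 / theta_x**2 + sin(phi)**2 / theta_y**2)

        def point_in_cone(theta, phi, sigma, theta_x, theta_y, sigma_max):
            """Cheap checks first, the expensive boundary function last."""
            if abs(sigma) > sigma_max:
                return False    # torsion angle outside the cone
            if theta > theta_y:
                return False    # outside even the widest isotropic cone
            return theta <= tmax_pseudo_ellipse(phi, theta_x, theta_y)

        print(point_in_cone(0.25, 0.1, 0.0, 0.3, 1.0, 0.2))    # True
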
  • Performed a frame order analysis on a number of the CaM test data sets. This includes the rotor, isotropic cone, and pseudo-ellipse models, and the analyses demonstrate a common bug between all these models.
  • Performed a frame order analysis on the rigid CaM test data set. This is to demonstrate that everything is operating correctly.
  • Optimisation of the rotor model to the rigid CaM frame order test data. The optimisation script and all results files have been added to the repository.
  • Increased the grid search bounds for the frame order average domain translation. The translation search has been increased from a 10 Angstrom box centred at {0, 0, 0} to a 100 Angstrom box.
  • Proper edge case handling and slight speedup of the frame order PCS integration functions. The case whereby no Sobol' points in the numerical integration lie within the motional distribution is now caught and the rotation matrix set to the motional eigenframe to simulate the rigid state. As the code for averaging the PCS was changed, it was also simplified by removing an unnecessary loop over all spins. This should speed up the PCS integration by a tiny amount.
  • Created a new CaM frame order test data set. This is for the rotor model with a very small torsion angle of 1 degree, and will be used as a comparison to the rigid model and for testing the performance of the rotor model for an edge case.
  • Updated the frame order representations in all of the frame_order.py scripts for the CaM test data. All PDB files are now gzipped to save space, the old pymol.cone_pdb user function calls have been replaced with pymol.frame_order, and an average domain PDB file for the exact solution is now created in all cases.
  • The minimisation constraints are now turned on for all CaM test data frame_order.py optimisation scripts.
  • Updated the rotor CaM test data frame_order.py script for the parameter reduction. The rotor axis {θ, φ} polar angles have been replaced by the single axis α angle. This now matches the script for the 2nd rotor model.
  • Updated the parameters in all of the frame_order.py scripts for the CaM test data. The parameters are now specified at the top of the script as variables. All scripts now handle the change to the translation + CoM rotation for the average domain position rather than having a pure rotation about a fixed pivot, which is no longer supported.
  • The frame_order.num_int_pts user function now throws a RelaxWarning if not enough points are used.
  • Changed the creation of Sobol' points for numerical integration in the frame order target functions. The points are now all created at once using the i4_sobol_generate() rather than i4_sobol() function from the extern.sobol.sobol_lib module.
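    The idea in outline, assuming the usual sobol_lib signatures (i4_sobol(dims, seed) returning a point and the next seed, and i4_sobol_generate(dims, n, skip) returning a dims x n array) - this only runs inside the relax source tree where extern.sobol is available:

        from extern.sobol.sobol_lib import i4_sobol, i4_sobol_generate

        dims, n = 3, 1000

        # Old pattern: one library call per Sobol' point.
        points, seed = [], 1
        for i in range(n):
            point, seed = i4_sobol(dims, seed)
            points.append(point)

        # New pattern: the whole set is created up front in a single call.
        points = i4_sobol_generate(dims, n, 1)
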
  • Increased the number of integration points from 50 or 100 to 5000. This is for all CaM frame_order.py test data optimisation scripts. The higher number of points is essential for optimising the frame order models and hence for checking the relax implementation.
  • Updated the frame_order.py optimisation script for the small angle CaM rotor frame order test data. This now has the correct rotor torsion angle of 1 degree, and the spherical coordinates are now converted to the axis α parameter.
  • Expanded the capabilities of the pymol.frame_order user function. The isotropic and pseudo-elliptic cones are now represented as they used to be under the pymol.cone_pdb user function. To avoid code duplication, the new represent_cone_axis(), represent_cone_object() and represent_rotor_object() functions have been created to send the commands into PyMOL.
  • Increased the precision of all of the CaM frame order system tests by 40 times. The number of Sobol' integration points has been significantly increased while only increasing the frame order system test timings by ~10%. This allows the chi-squared values at the minima to be checked much closer to zero, and is much better for demonstrating bugs.
  • Optimisation constraints are no longer turned off in the frame order auto-analysis. Constraints are now supported by all frame order models, or automatically turned off for those which do not have parameter constraints.
  • Fix for the frame order visualisation script created by the auto-analysis. The call to pymol.frame_order is now correct for the current version of this user function.
  • Removed a terrible hack for handling the frame order analysis without constraints. This is no longer needed as the log-barrier method is now used to constrain the optimisation, so that the torsion angle can no longer be negative.
  • Constraints are now implemented in the frame order grid search. This is useful for the pseudo-elliptic models as the cone θx < θy constraint halves the optimisation space.
  • Expanded the CaM rotor test data frame_order.py optimisation script. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, and the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation are included.
  • Expanded the CaM pseudo-ellipse test data frame_order.py optimisation script. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, and the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation are included.
  • Added one more iteration for the zooming optimisation of the frame order auto-analysis. This is to improve the speed of optimisation when all RDC and PCS data is being used. The previous iterations were with [100, 1000, 200000] Sobol' integration points and [1e-2, 1e-3, 1e-4] function tolerances. This has been changed to [100, 1000, 10000, 100000] and [1e-2, 1e-3, 5e-3, 1e-4]. The final number of points has been decreased as that level of accuracy does not appear to be necessary. These are only default values that the user can change for themselves.
  • Updated the CaM frame order data generation base script to print out more information. This is for the first axis system so that the same amount of information as the second system is printed.
  • Expanded the CaM isotropic cone test data frame_order.py optimisation script and added the results. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, and the accuracy of the initial grid search and the Monte Carlo simulations is now much lower.
  • Important fix for the 2nd rotor model of the CaM frame order test data. The tilt angle was not set, and therefore the old data matched the non-tilted 1st rotor model. All PCS and RDC data has been regenerated to the highest quality using 20,000,000 structures.
  • Updated the 3 Frame_order.test_cam_rotor2* system tests for the higher quality data.
  • Expanded the 2nd CaM pseudo-ellipse test data frame_order.py optimisation script. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, and the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation have been added to the repository.
  • Expanded the CaM free-rotor isotropic cone test data frame_order.py optimisation script. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, and the accuracy of the initial grid search and the Monte Carlo simulations is now much lower. The results of the new optimisation have been added to the repository.
  • Expanded all remaining CaM test data frame_order.py optimisation scripts. The optimisation is now implemented as in the auto-analysis, with an iterative increase in accuracy of the quasi-random numerical integration together with a decrease of the function tolerance cutoff for optimisation. The accuracy of the initial chi-squared calculation is now much higher, and the accuracy of the initial grid search and the Monte Carlo simulations is now much lower.
  • Updated the CaM 2-site to rotor model frame_order.py optimisation script for the parameter reduction. The rotor frame order model axis spherical angles have now been converted to a single α angle.
  • Fix for a number of the frame order models which do not have parameter constraints. This change to the grid_search() API method is similar to the previous fix for the minimise() method. The linear_constraint() function was returning A, b = [], [] for these models, but these empty numpy arrays were causing the dot product with A to fail in the grid_search() API method. These values are now caught and the constraint algorithm turned off.
  • Converted the 'free rotor' frame order model to the new axis_alpha parameter system. The axis_theta and axis_phi spherical coordinates are converted to the new reduced parameter set defined by a random point in space (the CoM of all atoms), the pivot point, and a single angle α. The α parameter defines the rotor axis angle from the xy-plane.
  • Parameter conversion for all of the CaM free rotor test data frame_order.py optimisation scripts. The rotor axis spherical angles have been replaced by the axis α angle defining the rotor with respect to the xy-plane.
  • Modified the CaM frame order base system test script to catch a bug in the free rotor model. The axis spherical angles are no longer set for the rotor or free rotor models, as they use the α angle instead and the lack of the θ and φ parameters triggers the bug. The PDB representation of the frame order motions is also now tested for all frame order models, as it was turned off for the rigid, rotor and free rotor models and this is where the bug lies.
  • Fix for the failure of the frame_order.pdb_model user function for the free rotor frame order model. This is due to the recent parameter conversion to the axis α angle.
  • Eliminated the average position α Euler angle parameter from the free-rotor pseudo-ellipse model. As this frame order model is a free rotor, the domain can rotate freely about the rotor axis and the average domain position is undefined with respect to that rotation. One of the Euler angles for rotating to the average position can therefore be removed, just as in the free rotor and free rotor isotropic cone models.
  • Eliminated the ave_pos_alpha parameter from the free rotor pseudo-ellipse model target function. The average domain position α Euler angle has already been removed from the specific analyses code and this change brings the target function into line with those changes.
  • Added the full optimisation results for the 2nd rotor frame order model for the CaM test data. This is from the new frame_order.py optimisation script and the results demonstrate the stability of the rotor model.
  • Added the full optimisation results for the small angle rotor CaM frame order test data. This is from the new frame_order.py optimisation script and the results demonstrate the stability of the rotor model, even when the rotor is as small as 1 degree.
  • Fix for the free rotor PDB representation created by the frame_order.pdb_model user function. The simulation axes were being incorrectly generated from the θ and φ angles, which no longer exist as they have been replaced by the α angle.
  • Added the full optimisation results for the free rotor pseudo-ellipse frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
  • Added the full optimisation results for the rotor frame order model. This is for the 2-site CaM test data using the new frame_order.py optimisation script.
  • The CaM frame order data generation base script now uses lib.compat.norm(). This is to allow the test suite to pass on systems with old numpy versions whereby the numpy.linalg.norm() function does not support the new axis argument.
  • Modified the pymol.cone_pdb and pymol.frame_order user functions to use PyMOL IDs. The PyMOL IDs are used to select individual objects in PyMOL rather than all objects so that the subsequent PyMOL commands will only be applied to that object. This allows for multiple objects to be handled simultaneously.
  • Added the full optimisation results for the free rotor frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
  • Added the full optimisation results for the 2nd free rotor frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
  • Added the full optimisation results for the free rotor frame order model with missing data. This is for the CaM test data using the new frame_order.py optimisation script.
  • Added a script for recreating the frame order PDB representation and displaying it in PyMOL. This is for the optimised results.
  • Fixes for the rotor object created by the frame_order.pdb_model user function. The rotor is now also shown for the free rotor pseudo-ellipse, despite it being a useless model, and the propeller blades are no longer staggered for all the free rotor models so that two circles are no longer produced.
  • Updated the free rotor and 2nd free rotor PDB representations using the represent_frame_order.py script. This is for the CaM frame order test data.
  • Reparameterisation of the double rotor frame order model. The two axes defined by spherical angles have been replaced by a full eigenframe and the second pivot has been replaced by a single displacement along the z-axis of the eigenframe.
  • Removed the 2nd pivot point infrastructure from the frame order analysis. The 2nd pivot is now defined via the pivot_disp parameter.
  • Added the 2nd rotor axis torsion angle to the list of frame order parameters. This is for the double rotor model.
  • Comment fixes for the eigenframe reconstruction in the frame order target functions.
  • Converted the double rotor frame order model target function to use the new parameterisation.
  • Fix for the PDB representation generated by frame_order.pdb_model for the free rotor pseudo-ellipse.
  • Fix for the Frame_order.test_rigid_data_to_free_rotor_model system test. As the free rotor has undergone a reparameterisation, the chi-squared value is now higher. The value is reasonable as the free rotor can never model the rigid system.
  • Removed the structure loading and transformation from the CaM frame order system tests. This was mimicking the old behaviour of the auto-analysis. However as that behaviour has been shifted into the backend of the frame_order.pdb_model user function, which is called by these system tests as well, the code is now redundant and is wasting test suite time.
  • Removed the setting of the second pivot point in the CaM frame order system tests. The second pivot point has been removed from the double rotor frame order model to eliminate parameter redundancy, so no models now have a conventional second pivot.
  • Modified the CaM frame order system test base script to test alternative code paths. The pivot point was fixed in all tests, so the code in the target functions behind the pivot_opt flag was not being tested. Now, for those system tests where the calc rather than minimise user function is called, the pivot is no longer fixed so that this code is executed.
  • Simplification and clean up of the RDC and PCS flags in the frame order target functions. The per-alignment flags have been removed and replaced by a global flag for all data. This accidentally fixes a bug when only RDCs are present, as the calc_vectors() method was being called when it should not have been.
  • Speedup and simplifications for the vector calculations used for the PCS numerical integration. This has a minimal effect on the total speed as the target function calc_vectors() method is not the major bottleneck - the slowest part is the quasi-random numerical integration. However the changes may be useful for speeding up the integration later on. The 3D pivot point, average domain rotation pivot, and paramagnetic centre position arrays are now converted into rank-2 arrays in __init__(), where the first dimension corresponds to the spins and each element is a copy of the 3D array. These are then used for the calculation of the pivot to atom vectors, eliminating the looping over spins. The numpy add() and subtract() ufuncs are used together with the out argument for speed and to avoid temporary data structure creation and deletion. The end result is that the calculated vector structure is transposed, so the first dimension corresponds to the spins. The changes required minor updates to a number of system tests. The target functions themselves had to be modified so that the pivot is converted to the larger structure when optimised, or aliased.
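    The core of the change in schematic form (shapes and names are illustrative): tiling the 3D pivot into an (N, 3) array lets one subtract() call build all pivot to atom vectors, with the out argument avoiding temporary arrays:

        import numpy as np

        n_spins = 4
        atom_pos = np.random.rand(n_spins, 3)    # one row per spin
        pivot = np.array([1.0, 2.0, 3.0])

        # Expand the pivot into a rank-2 array, one copy per spin (done once in __init__()).
        pivot_array = np.tile(pivot, (n_spins, 1))

        # Pre-allocated output; subtract() fills it in place, with no per-spin
        # loop and no temporary arrays.
        vectors = np.empty((n_spins, 3))
        np.subtract(atom_pos, pivot_array, out=vectors)
        print(vectors.shape)    # (4, 3)
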
  • Added a script for timing different ways to calculate PCSs and RDCs for multiple vectors. This uses the timeit module rather than profile to demonstrate the speed of 7 different ways to calculate the RDCs or PCSs for an array of vectors using numpy. In the frame order analysis, this is the bottleneck for the quasi-random numerical integration of the PCS. The log file shows a potential 1 order of magnitude speedup between the 1st technique, which is currently used in the frame order analysis, and the 7th and last technique. The first technique loops over each vector, calculating the PCS. The last expands the PCS/RDC equation of the projection of the vector into the alignment tensor, and calculates all PCSs simultaneously.
  • Added another timing script for RDC and PCS calculation timings. This time the calculation for multiple alignments is being timed. An additional set of methods for calculating the values via tensor projections has been added. For 5 alignments and 200 vectors, this demonstrates a potential 20x speedup for this part of the RDC/PCS calculation. Most of this speedup should be obtainable for the numerical PCS integration in the frame order models.
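    In the same spirit, a small timeit comparison of a per-vector Python loop against a fully vectorised calculation of the v·A·v projections that dominate the RDC/PCS cost (written here with numpy.einsum; the scripts in the repository test a number of other variants):

        import timeit
        import numpy as np

        A = np.random.rand(3, 3)           # alignment tensor (illustrative values)
        vectors = np.random.rand(200, 3)   # 200 bond vectors

        def projections_loop():
            """One dot product per vector - the original pattern."""
            return np.array([np.dot(v, np.dot(A, v)) for v in vectors])

        def projections_vectorised():
            """All projections at once via a single einsum contraction."""
            return np.einsum('ni,ij,nj->n', vectors, A, vectors)

        assert np.allclose(projections_loop(), projections_vectorised())
        print(timeit.timeit(projections_loop, number=1000))
        print(timeit.timeit(projections_vectorised, number=1000))
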
  • Small speedup for all of the frame order models. The PCS averaging in the quasi-random numerical integration functions now uses the multiply() and divide() numpy methods to eliminate a loop over the alignments. For this, a new dimension over the spins was added to the PCS constant calculated in the target function __init__() method. In one test of the pseudo-ellipse, the time dropped from 191 seconds to 172.
  • Added another timing script for helping with speeding up the frame order analysis. This is for the part where the rotation matrix for each Sobol' integration point is shifted into the eigenframe.
  • Python 3 fix for the CaM frame order system test base script.
  • Added the full optimisation results for the torsionless isotropic cone frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
  • Small speedups for all of the frame order models in the quasi-random numerical PCS integration. These changes result in an ~10% speedup. Testing via the func_pseudo_ellipse() target function using the relax profiling flag, the time for one optimisation decreased from 158 to 146 seconds. The changes consist of pre-calculating the shift of every Sobol' point rotation matrix into the motional eigenframe in one mathematical operation rather than one operation per Sobol' point, unpacking the Sobol' points into the respective angles prior to looping over the points, and taking the absolute value of the torsion angle and testing if it is out of bounds rather than checking both the negative and positive values.
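    The batch frame shift can be written as a single contraction over the whole stack of Sobol' rotation matrices. A sketch using numpy.einsum for readability (the relax code performs the equivalent operation with numpy.tensordot, and the eigenframe here is just a placeholder):

        import numpy as np

        n_points = 1000
        R_sobol = np.random.rand(n_points, 3, 3)   # one rotation matrix per Sobol' point
        frame = np.array([[0.0, -1.0, 0.0],
                          [1.0,  0.0, 0.0],
                          [0.0,  0.0, 1.0]])       # illustrative motional eigenframe

        # All N frame shifts frame.T * R_i * frame in one operation.
        R_shifted = np.einsum('ji,njk,kl->nil', frame, R_sobol, frame)

        # Equivalent per-point loop, for comparison.
        R_loop = np.array([np.dot(frame.T, np.dot(R, frame)) for R in R_sobol])
        assert np.allclose(R_shifted, R_loop)
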
  • Attempt at speeding up the torsionless pseudo-ellipse frame order model. The check if the Sobol' point is outside of an isotropic cone defined by the largest angle θy is now performed to avoid many unnecessary calls to the tmax_pseudo_ellipse() function. This however reveals a problem with the test suite data for this model.
  • Updated all of the CaM frame order system tests for the recent speedup. The speedup switched to the use of numpy.tensordot() for shifting each Sobol' rotation into the eigenframe rather than the previous numpy.dot(). Strangely this affects the precision and hence the chi-squared value calculated for each system test - both increasing and decreasing it randomly.
  • The frame order target function calc_vectors() method arguments have all been converted to keywords. This is in preparation for handling a second pivot argument for the double rotor model.
  • Updated the double rotor frame order model to be in a pseudo-functional state. Bugs in the target function method have been removed, the calc_vectors() target function method now accepts the pivot2 argument (but does nothing with it yet), and the lib.frame_order.double_rotor module has been updated to match the logic used in all other lib.frame_order modules.
  • The frame_order.pdb_model user function no longer tries to create a cone object for the double rotor.
  • Added a timeit script and log file for different ways of checking a binary numpy array.
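    For instance, a few candidate idioms for testing whether any element of a 0/1 array is set (purely illustrative - the log file in the repository records the actual comparisons and timings):

        import timeit
        import numpy as np

        mask = np.zeros(10000, dtype=np.int8)    # a binary (0/1) numpy array

        candidates = {
            'sum': lambda: mask.sum() != 0,
            'any': lambda: mask.any(),
            'max': lambda: mask.max() == 1,
        }
        for name, func in candidates.items():
            print(name, timeit.timeit(func, number=1000))
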
  • Modified the rigid_test.py system test script to really be the rigid case. This is used in all of the Frame_order.test_rigid_data_to_*_model system tests. Previously the parameters of the dynamics were set to values close to zero, to catch the cases where a few Sobol' PCS integration points were accepted. Now the case where no Sobol' points can be used is being tested. This checks a code path currently untested in the test suite, demonstrating many failures.
  • Fix for the frame order matrix calculation for a pseudo-elliptic cone with angles of zero degrees. The lib.frame_order.pseudo_ellipse_torsionless.compile_2nd_matrix_pseudo_ellipse_torsionless() function has been changed to prevent a divide by zero failure. The surface area normalisation factor now defaults to 0.0.
  • Fixes for the PCS numeric integration for all frame order models in the rigid case. The exact PCS values for the rigid state are now correctly calculated when no Sobol' points lie within the motional model. The identity matrix is used to set the rotation to zero, and the PCS values are now multiplied by the constant.
  • Updates for the chi-squared value in all the Frame_order.test_rigid_data_to_*_model system tests. This is now much reduced as the true rigid state is being tested for.
  • The rigid frame order matrix for the pseudo-ellipse models is now correctly handled. This allows the rigid case RDCs to be correctly calculated for both the pseudo-ellipse and torsionless pseudo-ellipse models. The previous catch of the θx cone angle of zero was incorrectly recreating the frame order matrix, which really should be the identity matrix. However truncation artifacts due to the quadratic SciPy integration still cause the model to be ill-conditioned near the rigid case. The rigid case is correctly handled, but a tiny shift of the parameters off zero causes a discontinuity.
  • Updates for the Frame_order.test_rigid_data_to_pseudo_ellipse*_model system tests. The chi-squared value now matches the rigid model.
  • Large increase in precision for the CaM frame order torsionless pseudo-ellipse model test data set. In addition, the θx and θy angles have been swapped so that the new constraint of 0 ≤ θx ≤ θy ≤ π built into the analysis is satisfied. The higher precision is because the number of structures in the distribution is now 20 million rather than 1 million and the numpy.float128 data averaging has been used. The algorithm for finding suitable random domain positions within the motional limits has also been changed, by extracting the θ and φ tilt angles from the random rotation, dropping the torsion angle σ, and reconstructing the rotation from just the tilt angles. This increases the speed of the data generation script by a minimum of 5 orders of magnitude.
  • Changed the parameter values for the Frame_order.test_cam_pseudo_ellipse_torsionless* system tests. The θx and θy angles are now swapped. The chi-squared values are now also lower in the 3 system tests as the data is now of much higher precision.
  • Speedup for the frame order analyses when only one domain is aligned. When only one domain is aligned, the reverse Ln3+ to spin vectors for the PCS are no longer calculated. For most analyses, this should significantly reduce the number of mathematical operations required for the quasi-random Sobol' point numerical integration.
  • Support for the 3 vector system for double motions has been added to the frame order analysis. This is used for the quasi-random Sobol' numeric integration of the PCS. The lanthanide to atom vector is the sum of three parts: the 1st pivot to atom vector rotated by the 1st mode of motion; the 2nd pivot to 1st pivot vector rotated by the 2nd mode of motion (together with the rotated 1st pivot to atom vectors); and the lanthanide to second pivot vector. All these vectors are passed into the lib.frame_order.double_rotor.pcs_numeric_int_double_rotor() function, which passes them to the pcs_pivot_motion_double_rotor() function where they are rotated and reconstructed into the Ln3+ to atom vectors.
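    Written out for a single state, the summation is (a schematic only, with R1 and R2 the rotations of the two modes of motion):

        import numpy as np

        def ln_to_atom(R1, R2, piv1_to_atom, piv2_to_piv1, ln_to_piv2):
            """Reconstruct the Ln3+ to atom vector for one double-motion state.

            The 1st pivot to atom vector is rotated by the 1st mode, the result
            plus the 2nd pivot to 1st pivot vector is rotated by the 2nd mode,
            and the static lanthanide to 2nd pivot vector is added last.
            """
            part1 = np.dot(R1, piv1_to_atom)
            part2 = np.dot(R2, piv2_to_piv1 + part1)
            return ln_to_piv2 + part2

        # With identity rotations the result is just the straight sum of the three vectors.
        I = np.eye(3)
        print(ln_to_atom(I, I, np.array([1., 0., 0.]),
                         np.array([0., 1., 0.]),
                         np.array([0., 0., 1.])))    # [1. 1. 1.]
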
  • Fully implemented the double rotor frame order model for PCS data. Sobol' quasi-random points for the numerical integration are now generated separately for both torsion angles, and two separate sets of rotation matrices for both angles for each Sobol' point are now pre-calculated in the create_sobol_data() target function method. The calc_vectors() target function method has also been modified as the lanthanide to pivot vector is to the second pivot in the double rotor model rather than the first. The target function itself has been fixed as the two pivots were mixed up - the 2nd pivot is optimised and the inter-pivot distance along the z-axis gives the position of the 1st pivot. For the lib.frame_order.double_rotor module, the second set of Sobol' point rotation matrices corresponding to sigma2, the rotation about the second pivot, is now passed into the pcs_numeric_int_double_rotor() function. These rotations are frame shifted into the eigenframe of the motion, and then correctly passed into pcs_pivot_motion_double_rotor(). The elimination of Sobol' points outside of the distribution has been fixed in the base pcs_numeric_int_double_rotor() function and now both torsion angles are being checked.
  • Fix for the unpacking of the double rotor frame order parameters in the target function. This is for when the pivot point is being optimised.
  • Created a new synthetic CaM data set for the double rotor frame order model. This is the same as the test_suite/shared_data/frame_order/cam/double_rotor data except that the angles have been increased from 11.5 and 10.5 degrees to 85.0 and 55.0 for the two torsion angles. This is to help in debugging the double rotor model as the original test data is too close to the rigid state to notice certain issues.
  • Corrected the printout from the CaM frame order data generation base script. The number of states used in the distribution of domain positions is now correctly reported for the models with multiple modes of motion.
  • Created a frame order optimisation script for the CaM double rotor test suite data. This is the script used for testing the implementation; it will not be used in the test suite.
  • Created the Frame_order.test_rigid_data_to_double_rotor_model system test. This shows that the double rotor model works perfectly when the domains of the molecule are rigid.
  • Fix for the frame order target functions for when no PCS data is present. In this case, the self.pivot structure was being created as an empty array rather than a rank-2 array with dimensions 1 and 3. This was causing the rotor models to fail, as this pivot is used to recreate the rotation axis.
  • Fix for the CaM double rotor frame order system tests. The torsion angle cone_sigma_max is a half angle, therefore the full angles from the data generation script are now halved in the system test script.
  • Created 3 frame order system tests for the new large angle double rotor CaM synthetic data. These are the Frame_order.test_cam_double_rotor_large_angle, Frame_order.test_cam_double_rotor_large_angle_rdc, and Frame_order.test_cam_double_rotor_large_angle_pcs system tests.
  • Added the full optimisation results for the torsionless pseudo-ellipse frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
  • Added the full optimisation results for the 2nd free rotor isotropic cone frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
  • Small fix for the large angle CaM double rotor frame order model synthetic test data. The way the rotation angle was calculated was slightly out due to integer truncation. The integers are now converted to floats in the generate_distribution.py script and all of the PCS and RDC data averaged over ~20 million states has been recalculated.
  • Added proper support for the double rotor frame order models to the system test scripts. This is for the CaM synthetic data. The base script can now handle the current parameterisation of the double rotor model with a single pivot, an eigenframe, and the second pivot defined by a displacement along the z-axis. The scripts for the double_rotor and double_rotor_large_angle data sets have been changed to use this parameterisation as well.
  • Attempt at implementing the 2nd degree frame order matrix for the double rotor model. This is required for the RDC.
  • The second torsion angle is now printed out for the frame order system tests. This is in the system test class mesg_opt_debug() method and allows for better debugging of the double rotor models.
  • Fix for the Frame_order.test_cam_double_rotor_large_angle* system tests. The system test script was pointing to the wrong data directory.
  • The double rotor frame order system tests are no longer blacklisted.
  • Updated the chi-squared values being checked for the double rotor frame order system tests.
  • Shifted the frame order geometric representation functions into their own module. This is the new specific_analyses.frame_order.geometric module.
  • The frame order geometric representation functions are no longer PDB specific. Instead a format argument is now accepted. This will allow different formats to be supported in the future. Because of this change, all specific_analyses.frame_order.geometric.pdb_*() functions have been renamed to create_*().
  • Created an auxiliary function for automatically generating the pivots of the frame order analysis. This is the new specific_analyses.frame_order.data.generate_pivot() function. It will generate the 1st or 2nd pivot, hence supporting both the single motion models and the double motion double rotor model.
  • Shifted the rotor generation for the frame order geometric representation into its own function. This is the specific_analyses.frame_order.geometric.add_rotors() function, which adds the rotors as new structures to a given internal structural object. The code has been extended to add support for the double rotor model.
  • Fix for the pivots created by the specific_analyses.frame_order.data.generate_pivot() function. This is for the double rotor model where the 1st mode of motion is about the 2nd pivot, and the 2nd mode of motion about the 1st pivot.
  • Fixes for the cone geometric representation in the internal structural object. The representation can now be created if the given MoleculeContainer object is empty.
  • Refactored the frame order geometric motional representation code. The code of the specific_analyses.frame_order.geometric.create_geometric_rep() function has been spun out into 3 new functions: add_rotors(), add_axes(), and add_cones(). This is to better isolate the various elements and allow for better control. Each function now adds the atoms for its geometric representation to a separate molecule called 'axes' or 'cones'. The add_rotors() function does not create a molecule, as the lib.structure.represent.rotor.rotor_pdb() function creates its own. As part of the refactorisation, the neg_cone flag has been eliminated.
  • Renamed the residues of the rotor geometric object representation. The rotor axis atoms now belong to the RTX residue and the propeller blades to the RTB residue. The 'RT' at the start represents the rotor and this will allow all the geometric objects to be better isolated.
  • Improvements to the internal structural object _get_chemical_name() method. This now uses a translation table to convert the hetID or residue name into a description, for example as used in the PDB HETNAM records to give a human readable description of the residue inside the PDB file itself. The new rotor RTX and RTB residue names have been added to the table as well.
  • Renaming of the residues of the cone geometric representation. The cone apex or centre is now the CNC residue, the cone axis is now CNX and the cone edge is now CNE. These used to be APX, AXE, and EDG respectively. The aim is to make these names 100% specific to the cone object so that they can be more easily selected for manipulating the representation and so that they are more easily identifiable. The internal structural object _get_chemical_name() function now returns a description for each of these. Note that the main cone object is still named CON.
  • The motional pivots for the frame order models are now labelled in the geometric representation. The pivot points are now added as a new molecule called 'pivots' in the frame_order.pdb_model user function. The atoms all belong to the PIV residue. The pymol.frame_order user function now selects this residue, hides its atoms, and then shows the atom name 'Piv' as the label. For the double rotor model, the atom names 'Piv1' and 'Piv2' are used to differentiate the pivots.
  • Renamed the lib.structure.represent.rotor.rotor_pdb() function to rotor(). This function is not PDB specific and it just creates a 3D structural representation of a rotor object.
  • Added support for labels in the rotor geometric object for the internal structural object. The labels are created by the frame_order.pdb_model user function backend. For the double rotor model, these are 'x-ax' and 'y-ax'. For all other models, the label is 'z-ax'. The labels are then sent into the lib.structure.represent.rotor.rotor() function via the new label argument. This function adds two new atoms to the rotor molecule which are 2 Angstrom outside of the rotor span and lying on the rotor axis. These then have their atom name set to the label. The residue name is set to the new RTL name which has been added to the internal structural object _get_chemical_name() method to describe the residue in the PDB file for the user. Finally the pymol.frame_order user function selects these atoms, hides them and then labels them using the atom name (x-ax, y-ax, or z-ax).
  • Modified the rotor representation generated by the pymol.frame_order user function. This is to make the object less bulky.
  • Redesign of the axis geometric representation for the frame order motions. This is now much more model dependent to avoid clashes with the rotor objects and other representations: For the torsionless isotropic cone, a single z-axis is created; For the double rotor, a single z-axis is produced connecting the two pivots, from pivot2 to pivot1; For the pseudo-ellipse and free rotor pseudo-ellipse, the x and y-axes are created; For the torsionless pseudo-ellipse, all three x, y and z-axes are created; For all other models, no axis system is produced as this has been made redundant by the rotor objects.
  • Fixes for the cone geometric object created by the frame_order.pdb_model user function. This was broken by the code refactoring and now works again for the pseudo-ellipse models.
  • Fix for the pymol.frame_order user function. The representation function for the rotor objects was hiding all parts of the representation, hence the pivot labels were being hidden. To fix this, the hiding of the geometric object now occurs in the base frame_order_geometric() function prior to setting up the representations for the various objects.
  • Started to redesign the frame_order.pdb_model user function. Instead of having the positive and negative representations in different PDB models, and the Monte Carlo simulations in different molecules, these will now all be shifted into separate files. For this to be possible, the file root rather than file names must now be supplied to the frame_order.pdb_model user function. To allow for different file compression, the compress_type argument is now used. The backend code correctly handles the file root change, but the multiple files are not created yet.
  • Python 3 fixes using the 2to3 script. Fatal changes to the multi.processor module were reverted.
  • Improvements to the lib.structure.represent.rotor.rotor() function for handling models. The 'rotor', 'rotor2', or 'rotor3' molecule name determination is now also model specific.
  • The frame order generate_pivot() function can now return the pivots for Monte Carlo simulations. This is the specific_analyses.frame_order.data.generate_pivot() function. The sim_index argument has been added to the function which will allow the pivots from the Monte Carlo simulations to be returned. If the pivot was fixed, then the original pivot will be returned instead.
  • Test suite fixes for the recent redesign of the frame_order.pdb_model user function.
  • Fixes for the frame_order.pdb_model user function for the rotor and free rotor models.
  • Redesign of the geometric object representation part of the frame_order.pdb_model user function. The positive and negative representations of the frame order motions have been separated out into two PDB files rather than being two models of one PDB file. This will help the user understand that there are two identical representations of the motions, as both will now be displayed rather than the user having to understand the model concept of PyMOL. The file root is taken, for example 'frame_order', and the files 'frame_order_pos.pdb' and 'frame_order_neg.pdb' are created. If no inverse representation exists for the model, the file 'frame_order.pdb' will be created instead. The Monte Carlo simulations are now also treated differently. Rather than showing multiple vectors in the axes representation component within one molecule in the same file as the frame order representation, these are now in their own file and each simulation is a different model. If an inverse representation is present, then the positive representation will go into the file 'frame_order_sim_pos.pdb', for example, and the negative representation into the file 'frame_order_sim_neg.pdb'. Otherwise the file 'frame_order_sim.pdb' will be created.
  • Clean up of the frame_order.pdb_model user function definitions. Some elements were no longer of use, and some descriptions have been updated.
  • Redesign of the pymol.frame_order user function to match the redesign of frame_order.pdb_model. The file names are no longer given but rather the file root. Then all PDB files matching that file root in the given directory will be loaded into PyMOL.
  • Updated all of the frame order scripts for the frame_order.pdb_model and pymol.frame_order changes. These are the scripts for the CaM frame order test data.
  • Redesign of the average domain position part of the frame_order.pdb_model user function. The Monte Carlo simulations are now represented. If the file root is set to the default of 'ave_pos', then these will be placed in the file 'ave_pos.pdb', or a compressed version. Each simulation is in a different model, matching the geometric representation '*_sim.pdb' files. The original structure is copied for each model, and then rotated to the MC simulation average position.
  • Change all of the domain user function calls in the frame order CaM test data scripts. The domains are now identified by the molecule name rather than the range of residues. This allows non-protein atoms, for example the Ca2+ atoms, to be rotated to the average domain position as well.
  • The PyMOL disable command is now used by the pymol.frame_order user function. This is to first disable all PyMOL objects prior to loading anything, to hide the original structures and any previous frame order representations, and then to hide all of the Monte Carlo simulation representations. This is to simplify the picture initially presented to the user while still allowing all elements to be easily found.
  • The pymol.frame_order user function now centers and zooms on all objects.
  • Simplified the PyMOL view commands in all of the CaM test data optimisation scripts. The pymol.view user function is not necessary as the PyMOL GUI will be launched by the pymol.frame_order user function. And the pymol.command user function call for running the 'hide all' command is also now redundant.
  • Removed all remaining uncompressed PDB files from the CaM test data directories. These were complicating the debugging of the pymol.frame_order user function, as they were being loaded on top of the compressed versions.
  • Removed some rotation files from the CaM frame order test data directories. These files are no longer of any use and just take up large amounts of room for nothing.
  • Added titles to the frame order geometric representation PDB files from frame_order.pdb_model. These are in the form of special Ti atoms placed 40 Angstrom away from the pivot along the z-axis of the system, or shifted 3 more Angstrom for the Monte Carlo simulations. These are used to label the alternative representations or the Monte Carlo simulation representations. The residue type is set to TLE and this has been registered in the internal structural object. The pymol.frame_order user function now calls the represent_titles() function to select these atoms, hide them, and then add a long descriptive title. The atom name is used to distinguish between different titles.
  • Changed the alternative representation names for the frame order geometric objects. The aim is to put both representations on a more equal footing, as they are identical solutions. Hence the inverted representation might be the correct representation of the domain motions. So instead of calling these 'positive' and 'negative', the 'A' and 'B' notation will be used. This affects the names of the files produced by the frame_order.pdb_model user function as well as the internal titles. Instead of ending the files with "*_pos.*" and "*_neg.*", these have been changed to "*_A.*" and "*_B.*". The atoms used for the titles have also been renamed, and the pymol.frame_order user function now labels the titles using the 'A' and 'B' notation.
  • Changes to the rotor object in the frame order geometric representations. For the isotropic and pseudo-elliptic cone models, the rotor is now halved. Instead of having two axes radiating from the central pivot and terminating in the propeller blades, now only the positive axis is shown lying in the centre of the cone.
  • Fixes for the MC simulation rotor objects in the frame order geometric representation. The axes of the Monte Carlo simulation rotor objects were being set to the original values and not to the simulation values.
  • Fixes for the titles in the frame order geometric representation from frame_order.pdb_model. There were a few bugs for a number of the frame order models preventing this code from working.
  • Redesign of the geometric representation of the cone structural objects to allow for models. The old representation was not compatible with the PDB model concept whereby each model must have the same number of atoms. To handle this situation, the cone objects have been simplified, specifically the cone cap. The old behaviour was to remove all points outside of the cone when creating the cone cap, and then to stitch the cap to the cone edge in a subsequent step. Now the behaviour is that all points outside of the distribution are shifted to the cone edge. This avoids the need to stitch the cap to the edge. This behaviour means that all cones with the same inc value will have the same number of atoms. The cones for the pseudo-ellipses are not as nice as the latitudinal lines are not straight at the cone edge, but at least creating multiple models with different cone sizes is now possible.
  • Bug fix for the y-axis rotation matrix for the double rotor Sobol' integration points. The matrix was inverted.
  • Updated the frame order system test chi-squared values for the previous fix.
  • Fixes for the double rotor frame order system tests for the CaM synthetic data. The torsion angles needed to be swapped and the pivot point changed from the C terminal domain CoM to the N domain CoM.
  • More fixes for the double rotor frame order system tests for the CaM synthetic data. The eigensystem was inverted.
  • Updated the χ2 check for the large angle double rotor frame order system tests. This is needed for the eigenframe fix.
  • Updates for the frame order system tests for the float32 to float64 change. Some chi-squared values have slightly changed.
  • The CaM frame order test data optimisation scripts now save more state files. The state of the true dynamics and the fixed pivot optimisation results are now stored as well. This might be useful for extracting these results without redoing the calculations.
  • The script for representing the frame order dynamics for the CaM test data has been updated. The domains of the system are now defined.
  • Changed the CaM frame order test data superimposition values. Because the domains are now defined via the molecule name rather than the residue numbers, the centroid of rotation set to the CoM has been shifted as the Ca2+ ions are now included in the CoM calculation. Therefore the superimpose.py script has been updated to not delete the Ca atoms. All of the frame order optimisation scripts have been updated with the new rotation Euler angles and translation vector. To match this, the system test base script for the CaM frame order test data has also had its rotations and translations updated, and the domain user function call changed to use molecule names.
  • Updated all of the CaM frame order system test chi-squared values. These have changed slightly due to the rotation and translation changes.
  • Added support for the 'pivot_disp' frame order parameter to the grid search. This is required for the double rotor model.
  • Changed some of the default values for the frame order auto-analysis. The number of Sobol' quasi-random integration points was far too low to obtain any reasonable results.
  • Simplified the PyMOL visualisation relax script created by the frame order auto-analysis. This now consists of a single pymol.frame_order user function call. The other pymol user function calls were unnecessary.
  • Added the full optimisation results for the large angle double rotor frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
  • Added model support for the rotor geometric object. This is the structural object used in the frame order analysis to create PDB representations of rotor motions. The number of atoms created for the rotor is now constant, allowing for models whereby the atom number and connectivity must be preserved between all models.
  • Changed the grid search pivot displacement frame order parameter. Instead of searching from 0 to 50 Angstroms, the search is now from 10 to 50. This is to avoid the edge case of pivot_disp = 0.0 from which the optimisation cannot escape.
  • Speedup of the PCS component of the rigid frame order model. The lanthanide to atom vectors are now being calculated outside of the alignment tensor and spin loops, as well as the inverse vector lengths to the 5th power. This increases the speed by a factor of 1.216 (from 38.133 to 31.368 seconds for 23329 calls of the func_rigid() target function).
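    A minimal numpy sketch of the loop-hoisting idea behind this speedup. The array and function names are illustrative, not relax's internal structures; only the pattern of computing the lanthanide-atom geometry once, outside the alignment tensor and spin loops, is shown.

      # Hypothetical names; only the hoisting pattern matters here.
      from numpy import sqrt, sum

      def pcs_geometry(atom_pos, ln_pos):
          """Precompute the lanthanide-atom unit vectors and r^-5 terms once.

          atom_pos is an Nx3 array of positions, ln_pos the lanthanide position.
          """
          vect = atom_pos - ln_pos                  # Lanthanide to atom vectors.
          r = sqrt(sum(vect**2, axis=1))            # Vector lengths.
          return vect / r[:, None], 1.0 / r**5      # Unit vectors and inverse 5th powers.

      # The alignment tensor and spin loops of the target function then reuse
      # these two arrays instead of recomputing the geometry for every tensor.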
  • Added the full optimisation results for the rigid frame order model. This is for the CaM test data using the new frame_order.py optimisation script.
  • Numpy ≤ 1.6 fixes for the frame order PCS code. The numpy.linalg.norm function does not have an axis argument in numpy 1.6, therefore the lib.compat.norm() function is now used instead. This function was created exactly for this axis argument problem.
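    A sketch of the axis-argument workaround in the spirit of the lib.compat.norm() function mentioned above (the real implementation may differ):

      from numpy import sqrt, sum
      from numpy.linalg import norm as numpy_norm

      def norm(x, axis=None):
          """Return 2-norms, falling back to a manual sum for numpy <= 1.6."""
          if axis is None:
              return numpy_norm(x)
          # numpy 1.6's norm() has no axis argument, so compute the norm manually.
          return sqrt(sum(x**2, axis=axis))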
  • Created the new specific_analyses.frame_order.variables module. This currently contains variables for all of the frame order model names, as well as various lists of these models. The rest of the frame order specific analysis code as well as the frame order user functions have been converted to use these model variables exclusively rather than having the model name strings hardcoded throughout the codebase.
  • Added the full optimisation results for the double rotor test data. This is for the CaM frame order test data using the new frame_order.py optimisation script.
  • Added a script for profiling the target function calls of the pseudo-ellipse frame order model.
  • Added a timeit script and log file showing how numpy.cos() is 10 times slower than math.cos(). This is for single floats.
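    The comparison can be reproduced with a few lines of timeit (the exact ratio depends on the machine and numpy version):

      import timeit

      t_math = timeit.timeit("cos(1.2345)", setup="from math import cos", number=1000000)
      t_numpy = timeit.timeit("cos(1.2345)", setup="from numpy import cos", number=1000000)
      print("math.cos():  %.3f s" % t_math)
      print("numpy.cos(): %.3f s (%.1fx slower)" % (t_numpy, t_numpy / t_math))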
  • Shifted the calculation of the θmax cone opening for the pseudo-ellipse outside of all loops. This is an infrastructure change for potentially eliminating all of the looping for the PCS numeric integration in the future. It nevertheless slightly speeds up the pseudo-ellipse frame order model. Using 500 target function calls in the profiling_pseudo_ellipse.py script in test_suite/shared_data/frame_order/timings/, the time spent in the pcs_pivot_motion_full_qrint() function decreases from 20.849 to 20.719 seconds.
  • Converted the torsionless pseudo-ellipse model to also use the tmax_pseudo_ellipse_array() function. This allows the calculation of the pseudo-elliptic cone opening θmax to be shifted outside of all loops.
  • Created a profiling script and log file for the isotropic cone frame order model. This shows where the slow points of the model are, using 2000 target function calls.
  • Increased the function call number to 500 in the pseudo-ellipse frame order model profiling script. The profiling log file has also been added to show where the slowness is - specifically that the numeric PCS integration takes almost the same amount of time as the RDC frame order matrix construction using the scipy.integrate.quad() function.
  • Created the specific_analyses.frame_order.checks.check_pivot() function. This is to check that the pivot point has been set.
  • The frame order grid search is now checking if the pivot point has been set.
  • Added a profiling script and log file for the free rotor frame order model.
  • Updated the frame order optimisation results for the CaM isotropic cone test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository and the old ones removed.
  • Modified the script for recreating the frame order PDB representation and displaying it in PyMOL. The state loading, domain redefinition, and representation creation parts have all been removed, as these will soon all be redundant as the frame order analysis for all models is being redone. All that remains are the pymol.frame_order() function calls for displaying all the representations.
  • The pivot point parameters in the frame order analysis are no longer scaled by 100. This is to match the average domain position translation which is also not scaled.
  • The specific_analyses.frame_order.variables module is now used throughout the frame order code. The target function code, auto-analysis, and test suite now all use the variables defined in this module rather than having hardcoded strings. The MODEL_LIST_NONREDUNDANT variable has been created to exclude the redundant free rotor pseudo-ellipse which cannot be optimised, and this is used by the auto-analysis.
  • Removal of many unused imports in the frame_order_cleanup branch. These were detected using the devel_scripts/find_unused_imports.py script which uses pylint to find all unused imports. The false positives also present in the trunk were ignored, and the unused imports in the dispersion code were left to be cleaned up in the disp_spin_speed branch.
  • Changed the minimisation in the frame order system tests where optimisation is activated. The number of iterations is now set to 1 for speed testing, and the constraints are turned on.
  • Turned on the optimisation flag for the Frame_order.test_cam_free_rotor system test. This is to activate code paths currently not tested by the test suite.
  • Constraints are now properly turned off in the minimise user function for the frame order analysis. The A and b matrices from linear_constraints() are now set to None if they are returned as empty arrays.
  • Parallelised the frame order optimisation code to run on clusters or multi-core systems via OpenMPI. The optimisation code has been split into the three standard parts of the multi-processor: 1) Frame_order_memo is the new Memo object used to store data on the master for use when data is returned from the slaves. 2) Frame_order_minimise_command is the Slave_command which stored all required data for the optimisation, is pickled and sent to a slave, sets up the target function, and then performs optimisation. 3) Frame_order_result_command is the Result_command initialised by the Slave_command on the slave for pickling and returning results to the master. To avoid pickling the target function class, which is not possible, the store_bc_data() and target_fn_setup() functions of the specific_analyses.frame_order.optimisation module have been redesigned to work with basic data structures rather than the target function class directly. The target_fn_setup() function no longer returns an initialised target function class, but rather all the data assembled prior to the initialisation. And the target function class was itself modified so that pcs_theta and rdc_theta are always defined to allow the store_bc_data() function to be used successfully. This parallelisation currently only allows the Monte Carlo simulations to be run on slave processors.
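    The key design point is that only basic, picklable data structures travel between the master and the slaves, with the target function class rebuilt on the slave side. The sketch below illustrates that split using the standard multiprocessing module as a stand-in for relax's multi-processor framework; the class and argument names are hypothetical.

      from multiprocessing import Pool

      class Target_fn:
          """Stand-in for the unpicklable frame order target function class."""
          def __init__(self, rdcs, pcss, params):
              self.rdcs, self.pcss, self.params = rdcs, pcss, params
          def func(self, params):
              return sum((p - 1.0)**2 for p in params)    # Dummy chi-squared.

      def slave_optimise(data):
          """Slave side: rebuild the target function from basic data, then optimise."""
          target = Target_fn(data['rdcs'], data['pcss'], data['params'])
          chi2 = target.func(data['params'])              # Real code would minimise here.
          return {'chi2': chi2, 'params': data['params']}

      if __name__ == '__main__':
          # Master side: one plain dictionary per Monte Carlo simulation.
          sims = [{'rdcs': [0.1], 'pcss': [0.2], 'params': [0.0, 0.5]} for i in range(4)]
          with Pool(2) as pool:
              results = pool.map(slave_optimise, sims)
          print(results)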
  • The frame order linear_constraints() function now returns None if no constraints are present. This allows the code using this to be simplified with respect to turning off the constraints.
  • Improvements for the printout at the start of optimisation of the frame order models. This is in the target_fn_setup() frame order method. All the printouts are now in one place and they are now better formatted and better controlled.
  • Parallelised the frame order grid search to run on clusters or multi-core systems via OpenMPI. This involved the creation of the Frame_order_grid_command class which is the multi-processor Slave_command for performing the grid search. This was created by duplicating the Frame_order_minimise_command class and then differentiating both classes. For the subdivision of the grid search, the new minfx grid.grid_split_array() function is used in the frame order grid() API method. The grid() method no longer calls the minimise() method but instead obtains the processor box itself and adds the subdivided grid slaves to the processor. The relax grid_search user function takes care of the rest.
  • Fixes for the parallelised grid search for the frame order analysis. A chi-squared value check was added to the Frame_order_result_command.run() method to check if the value is lower than the current when the result is returned to the master. Without this check, each grid subdivision result will be stored as they are returned rather than storing the results from the global minimum of the entire grid search.
  • Added a script for testing out the parameter nesting abilities of the frame order auto-analysis. This script attempts to find the dynamics solution without knowing where the pivot is located. Hence this is as in the auto-analysis, where this pivot point will be used as the base for all other models.
  • Sent the verbosity argument to the minfx.grid.grid_split() function for the frame order analysis. This matches the relax trunk changes for the model-free analysis. The minfx function in the next release (1.0.8) will now be more verbose, so this will help with user feedback when running the model-free analysis on a cluster or multi-core system using MPI.
  • Improvements for the parallelised grid search for the frame order analysis. As each grid point can take wildly different numbers of CPU cycles to calculate the chi-squared value for, the result of subdividing the grid search was that some subdivisions were incredibly quick while others required much larger amounts of time. To avoid this poor slave load balancing, the grid points are now randomised. This means that the subdivisions will require about the same amount of time to optimise.
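    A minimal sketch of the load-balancing trick, shuffling the grid points before they are split so that each subdivision receives a similar mix of cheap and expensive points (the splitting itself is handled by minfx in relax; numpy.array_split() is used here purely for illustration):

      from numpy import arange, array_split
      from numpy.random import shuffle

      points = arange(20)                     # Stand-in for the full list of grid points.
      shuffle(points)                         # Randomise so no subdivision is systematically slow.
      subdivisions = array_split(points, 4)   # One chunk per slave processor.
      print(subdivisions)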
  • Moved the setup of the target function data structures in the frame order analysis. This is for the grid_search and minimise user functions. The target function data setup function has been renamed to target_fn_data_setup(). This is now called before the Frame_order_grid_command and Frame_order_minimise_command multi-processor objects are initialised, and all of the data is now passed into these functions. Although the code is uglier, this has the benefit that the target_fn_data_setup() function will only be called once. This data setup requires a lot of time, so for a large cluster, this can be a large time saving for the grid search.
  • Modified the frame_order_free_start.py script to better mimic the frame order auto-analysis.
  • Updated the frame order optimisation results for the 2nd CaM free rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Updated the frame order optimisation results for the CaM free rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Updated the frame order optimisation results for the CaM missing data free rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Updated the frame order optimisation results for the CaM free rotor isotropic cone test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Updated the frame order optimisation results for the CaM small angle rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Updated the frame order optimisation results for the 2nd CaM free rotor isotropic cone test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Updated the frame order optimisation results for the CaM pseudo-ellipse test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Updated the frame order optimisation results for the CaM torsionless isotropic cone test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Updated the frame order optimisation results for the 2nd CaM pseudo-elliptic cone test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Some more fixes for the optimisation user function changes.
  • Removed the parameter scaling for the pivot point frame order parameters. These were already removed from the frame_order_cleanup branch in the assemble_scaling_matrix() function, however they were reintroduced accidentally via the parameter object where this information is now defined. So this removes the scaling a second time.
  • Fixes for the parameter scaling changes in the trunk. The scaling flag is no longer part of the specific analysis API optimisation methods. Instead the pre-assembled scaling matrices are passed into all three API optimisation methods.
  • Implemented the frame order specific analysis API method print_model_title(). This is simply aliased from the API common method _print_model_title_global().
  • Fix for the grid search in the frame order analysis. This is a recently introduced problem due to the changes of the zooming_grid_search branch.
  • Turned on the optimisation in the Frame_order.test_cam_rigid system test. This is to catch a number of failures in the frame order grid search.
  • Activated the grid search in the frame order system tests using the CaM synthetic data. This is set to one increment so that the tests can complete in a reasonable time.
  • Fix for the specific_analyses.frame_order.optimisation.grid_row() function. This can now handle the case of a single grid increment. The change is similar to r163 in the minfx project.
  • Converted the frame_order_free_start.py script to use the zooming grid search.
  • Added lots of calls to the time user function to the frame_order_free_start.py script. This will be used to fine tune the frame order analysis on a cluster.
  • Increased the default grid bounds for the pivot parameters of the frame order models. The pivot point is now searched for in a 50 Angstrom box and the pivot displacement for the double motion models from 10 to 60 Angstroms. These were originally a 20 Angstrom box and 10 to 50 Angstroms. The larger grid is possible when combined with the new zooming grid search.
  • Updated the frame order optimisation results for the 2-site CaM test data fitting to the rotor model. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Updated the frame order optimisation results for the CaM rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Updated the frame order optimisation results for the 2nd CaM rotor test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Fixes for the CaM free-rotor pseudo-ellipse frame order model test data set. This is for the constraint 0 ≤ θx ≤ θy ≤ π, as the old data was created with θx > θy. The new data is also of high quality using 20 million structures and numpy.float128 data averaging.
  • Created the lib.frame_order.rotor_axis.convert_axis_alpha_to_spherical() function. This will convert the axis α angle to the equivalent spherical angles θ and φ.
  • Renamed the lib.frame_order.rotor_axis module to lib.frame_order.conversions. This module will be used for all sorts of frame order parameter conversions.
  • Added the pipe_name argument to the specific_analyses.frame_order.data.generate_pivot() function. This allows the pivot from data pipes other than the current one to be assembled and returned.
  • Updated the frame order optimisation results for the CaM free rotor, pseudo-ellipse test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Updated the frame order optimisation results for the CaM torsionless, pseudo-ellipse test data. The optimisation in the frame_order.py is now of higher precision with the number of Sobol' numeric integration points significantly increased, especially for the Monte Carlo simulations. The new frame order representation files have been added to the repository, as well as the intermediate state files.
  • Fix for the Frame_order.test_cam_pseudo_ellipse_free_rotor system test. This is for the change of the X and Y cone opening angles.
  • Redesign and expansion of the nested model parameter copying in the frame order auto-analysis. The nested parameter protocol used to allow the analysis to complete in under 1,000,000 years was no longer functional due to the switch to the axis α parameter to decrease the parameter number and redundancy. The copying of the average domain position for the free rotor models was also incorrect as the dropping of the α Euler angle caused the translation parameters and the β and γ angles to change drastically. The new protocol has been split into four methods for the average domain position, the pivot point, the motional eigenframe and the parameters of ordering. These use the fact that the free rotor and torsionless models are the two extrema of the models where the torsion angle is restricted. The pivot copying is a new addition.
  • Created the Frame_order.test_auto_analysis system test. This will be an extremely quick run through of the frame order auto-analysis as this is not currently tested. 1 Sobol' quasi-random integration point will be used for all models for speed. The system test uses the rigid CaM test data to perform a full analysis.
  • Alphabetical ordering of the imports in the frame order auto-analysis module.
  • Fixes for the backend script of the Frame_order.test_auto_analysis system test. This includes a missing import and the removal of a long ago deleted user function.
  • Fix for the frame order auto-analysis for the call to the grid search user function. This user function has been renamed to minimise.grid_search, however not all parts of the analysis had been converted to the new name.
  • Created a method in the frame order auto-analysis to reorder the models. This is needed as the nested model parameter copying protocol requires the simpler models to be optimised first.
  • The Frame_order.test_auto_analysis system test now writes all files to the directory of ds.tmpdir. This is to prevent the system test from dumping files in the current directory.
  • Modified the specific_analyses.frame_order.parameters.update_model() function. This will no longer set all parameters to 0.0, excluding the pivot point.
  • Modified the specific_analyses.frame_order.parameters.assemble_param_vector() function. This can now handle the case of no parameters being present. The corresponding elements of the numpy array will consist of NaN values.
  • Better handling of unset parameters in the frame order optimisation functions. The specific_analyses.frame_order.optimisation.target_fn_data_setup() and specific_analyses.frame_order.parameters.assemble_param_vector() functions both now accept the unset_fail argument. This is set in both the calculate() and minimise() API methods. When set, a RelaxError will be raised in the assemble_param_vector() function when a parameter has not been set yet. This together with previous changes will prevent the frame order analysis from using 0.0 as a starting value for unset parameters.
  • Fixes for all of the Frame_order.test_rigid_data_to_*_model system tests. The base script now sets all parameter values so that the minimise.calculate user function can operate. The two free rotor model chi-squared values have been updated as these are sensitive to the motional eigenframe parameter values - these models can never approximate a rigid state.
  • Modified the optimisation of the rigid model in the frame order auto-analysis. The grid search is now implemented as a zooming grid search.
  • Updates and fixes for the frame order auto-analysis. The custom grid setup now works for the new reduced parameter set models and the double rotor model is now also included. The cone axis α angle to spherical angle conversion has had a bug removed. And some of the printouts are now more detailed.
  • Redesigned the Frame_order.test_auto_analysis system test. This now uses the new, yet-to-be-implemented Optimisation_settings object from the frame order auto-analysis module for holding all of the grid search, zooming grid search and minimisation settings. This will allow for far greater user control of the settings and hugely simplify the auto-analysis interface by decreasing the number of input arguments. It should also be less confusing.
  • Implementation of the Optimisation_settings object in the frame order auto-analysis. This object holds all of the grid search, zooming grid search, and minimisation settings. It provides the add_grid() and add_min() methods to allow the user to add successive iterations of optimisation and settings to the object. The loop_grid() and loop_min() methods are used to loop over each iteration of each method. And the get_grid_inc(), get_grid_num_int_pts(), get_grid_zoom_level(), get_min_algor(), get_min_func_tol() and get_min_num_int_pts() methods are used to access the user defined settings. The auto-analysis has been redesigned around this new concept. All of the optimisation arguments have been replaced. Instead there are the opt_rigid, opt_subset, opt_full, and opt_mc arguments which are expected to be instances of the Optimisation_settings object. The optimisation in the auto-analysis is now more advanced in that more user optimisation settings are now available and active.
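    A sketch of how the Optimisation_settings object might be filled in a relax script, using only the method names listed above. The keyword argument names (inc, num_int_pts, zoom, min_algor, func_tol) are inferred from the getter names and may not match the real interface exactly, and the other required auto-analysis arguments are omitted.

      from auto_analyses.frame_order import Optimisation_settings

      opt = Optimisation_settings()
      opt.add_grid(inc=11, num_int_pts=100, zoom=0)        # Coarse initial grid search.
      opt.add_grid(inc=11, num_int_pts=100, zoom=1)        # First zooming grid level.
      opt.add_min(min_algor='simplex', func_tol=1e-2, num_int_pts=1000)
      opt.add_min(min_algor='simplex', func_tol=1e-4, num_int_pts=10000)

      # The auto-analysis would then receive one such object per stage, e.g.
      # Frame_order_analysis(..., opt_rigid=opt, opt_subset=opt, opt_full=opt, opt_mc=opt)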
  • Added linear constraints for the pivot and average domain translation frame order parameters. The pivot coordinates are constrained between -999 and 999 Angstrom and the translation between -500 and 500 Angstrom. This allows the frame_order.pdb_model user function to operate in the case of failed models - often the free rotors fitting to torsionally restricted data - by preventing the PDB coordinates from being out of the PDB format range. It should also speed up optimisation by stopping the optimisation of failed models earlier.
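    These limits can be written as box constraints in the A·x ≥ b form used for linear constraints, as in this sketch (the parameter ordering x = [pivot_x, pivot_y, pivot_z, ave_pos_x, ave_pos_y, ave_pos_z] is hypothetical):

      from numpy import float64, zeros

      n = 6                                # 3 pivot + 3 translation parameters.
      limits = [999.0]*3 + [500.0]*3       # Pivot: +/-999 A; translation: +/-500 A.
      A = zeros((2*n, n), float64)
      b = zeros(2*n, float64)
      for i in range(n):
          A[2*i, i] = 1.0                  #  x_i >= -limit.
          b[2*i] = -limits[i]
          A[2*i+1, i] = -1.0               # -x_i >= -limit, i.e. x_i <= +limit.
          b[2*i+1] = -limits[i]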
  • The frame order auto-analysis Optimisation_settings object now handles the maximum iterations. The new max_iter argument has been added to the add_min() method, and the new get_min_max_iter() method added to fetch the value. This is used in the auto-analysis to set the maximum number of optimisation iterations in the minimise.execute user function calls. Limiting this will be of greatest benefit for the test suite.
  • Speedup of the Frame_order.test_auto_analysis system test. This involves limiting the maximum number of optimisation steps to 20 for most parts (the rigid model excluded so the average domain position is correctly found), and using the PCS subset data for the full data set.
  • Updated the full_analysis.py script for the CaM frame order test data. This is for the recent changes to the auto-analysis with the Optimisation_settings object and for the changes of this branch.
  • Removed the RDC data checks from the frame order optimisation. This is in the minimise_setup_rdcs() and store_bc_data() functions of the specific_analyses.frame_order.optimisation module, called before and after all optimisation. The reason was identified by profiling - this check was adding significant amounts of time to the setup and results unpacking parts of the optimisation. Specifically the interatomic_loop() function was identified via profiling as the function requiring the most amount of cumulative time in the Frame_order.test_auto_analysis system test (17 seconds out of a total of ~60 seconds).
  • Fixes for the removal of the RDC data checks from the frame order optimisation functions. The specific analysis API method overfit_deselect() has now been created to deselect spins which do not have PCS data or interatomic data containers missing RDC data. The handling of deselected spins and interatomic data containers is now also correctly handled throughout the frame order specific code.
  • Enabled pivot optimisation in the full_analysis.py script for the CaM frame order test data.
  • The frame order auto-analysis now calls the time user function. This is used at the start of each model section, as well as at the very start and very end of the analysis. This feedback is needed for the user to be able to optimise the optimisation settings.
  • Major bugfix for the frame order auto-analysis. The algorithm of using a PCS data subset of a few selected residues to find an initial parameter estimate followed by using all PCS data was badly implemented. The use of the PCS subset caused most spin systems to be deselected, however they remained deselected once all data was being used. So the result was that only the spin subset was ever being used in the analysis.
  • Fix for the recent lib.periodic_table and lib.physical_constants module changes.
  • Created the model_directory() method for the frame order auto-analysis. This is used to create the full path for saving model specific files. It replaces spaces with underscores in the path and removes all commas. The commas in the path appear to be fatal for certain PyMOL versions when viewing the frame order representation.
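    A minimal sketch of the path sanitisation described here; the function layout is illustrative rather than relax's exact code:

      from os import path

      def model_directory(model, base_dir):
          """Return a PyMOL-safe results directory for the given model name."""
          return path.join(base_dir, model.replace(' ', '_').replace(',', ''))

      print(model_directory("pseudo-ellipse, torsionless", "results"))
      # -> results/pseudo-ellipse_torsionless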
  • The frame order auto-analysis results printout has been extended to include the pivot point.
  • Change to the parameter nesting in the frame order auto-analysis. The pivot is now taken from the rotor model for all other models. Taking the pivot point from the isotropic cone model is not a good idea as there are situations where the pivot point optimisation catastrophically fails, sending the point many tens or hundreds of Angstrom away from the molecule.
  • Copied a frame order results file for testing axis permutations. This is from the test_suite/shared_data/frame_order/cam/pseudo_ellipse/ directory. The optimisation results were identified to have failed, in that the alternative minimum was found. The pseudo-ellipse model has two minima in the space, and in this case the global minimum was missed.
  • Created the Frame_order.test_axis_permutation system test. This is to test the operation of the yet-to-be implemented frame_order.permute_axes user function.
  • Implemented the frame_order.permute_axes user function. This is used to switch between local minima in the pseudo-elliptic frame order models.
  • Fix for the Frame_order.test_axis_permutation system test. The motional eigenframe in the old log file was not exactly correct and did not correspond exactly to the Euler angles in the cam_pseudo_ellipse.bz2 results file in test_suite/shared_data/frame_order/axis_permutations/.
  • Extended the Frame_order.test_axis_permutation system test to check frame_order.permute_axes twice. This will check that two calls to the frame_order.permute_axes user function will restore the original parameter values.
  • The frame_order.permute_axes user function can now handle the torsionless pseudo-ellipse. This model does not have the variable cdp.cone_sigma_max set.
  • Added support for axis permutations in the frame order auto-analysis. This is done by copying the data pipe of the already optimised pseudo-elliptic models, permuting the axes, and performing another optimisation using all RDC and PCS data. This allows the second solution for these pseudo-elliptic models to be found. The 2nd pipe is included in the model selection step to allow the best solution for the model to be found.
  • Fix for the reading of old results files in the frame order auto-analysis. The directory name is now processed by the model_directory() method. This will convert the spaces to '_' and remove commas. Without this the already created files could not be found, if the model name contains a space or comma.
  • Made the pivot point in the frame order PDB representation fail-proof. If the pivot position was outside of the bounds [-1000, 1000], the PDB file creation would fail as the record would be too long. So now the pivot is shifted to be in these bounds.
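    The idea is simply to clip the pivot back into a range that fits the fixed-width PDB coordinate fields, for example:

      from numpy import array, clip

      pivot = array([15000.0, -3.2, 40.1])          # A catastrophically failed pivot.
      safe_pivot = clip(pivot, -1000.0, 1000.0)     # Bounds used in this change.
      print(safe_pivot)                             # [1000.   -3.2   40.1]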
  • The axis permutation step in the frame order auto-analysis is now always performed. If an old results file was found, this step was accidentally skipped.
  • Added extensive printouts to the frame_order.permute_axes user function.
  • Redesigned the frame_order.permute_axes user function frontend. Previously only cyclic permutations were considered, however non-cyclic permutations are also allowed when accompanied by an axis inversion. Therefore 3 combinations exist with cone_theta_x ≤ cone_theta_y, or 2 when the current combination is excluded.
  • Created 6 system tests for the frame_order.permute_axes user function. This covers the 3 starting conditions (x<y<z, x<z<y, z<x<y) and the two permutations ('A' and 'B') for each of these which do not include the starting permutation. They replace the original Frame_order.test_axis_permutation system test with the tests Frame_order.test_axis_perm_x_le_y_le_z_permA, Frame_order.test_axis_perm_x_le_y_le_z_permB, Frame_order.test_axis_perm_x_le_z_le_y_permA, Frame_order.test_axis_perm_x_le_z_le_y_permB, Frame_order.test_axis_perm_z_le_x_le_y_permA, and Frame_order.test_axis_perm_z_le_x_le_y_permB.
  • Implemented the new frame_order.permute_axes backend. The 3 starting conditions x<y<z, x<z<y, and z<x<y and the two permutations 'A' and 'B' (for each of these which do not include the starting permutation) are now supported. For these 6 combinations, the axis and order parameter permutation and the z-axis inversion are selected and applied to the current system.
  • Removed the second permutation from the 6 Frame_order.test_axis_perm_* system tests. A second identical permutation does not necessarily restore the original state.
  • Fix for the frame_order.permute_axes for the torsionless pseudo-ellipse model. The data structure cdp.cone_sigma_max does not exist in this model as cone_sigma_max == 0.0.
  • Modified the frame order auto-analysis axis permutation algorithm to handle both permutations. Instead of creating one additional data pipe for the permutations, two are now created for the permutations 'A' and 'B'. This allows all 3 solutions for the pseudo-elliptic models to be explored and included in the final model selection process.
  • Fix for the Frame_order.test_axis_perm_x_le_z_le_y_permB system test. The permuted z-axis needs to be inverted in the test.
  • Many fixes for the frame_order.permute_axes user function. The z-axis inversion is now encoded into a 3D numpy array as the index of the new z-axis position needs to be stored. The cone_theta_x, cone_theta_y and cone_sigma_max parameters are now permuted in reverse via the 'perm' data structure by calling its index() method. And the cone_theta_x - cone_theta_y to y-axis - x-axis switch has been removed (this may need to be reintroduced later).
  • Fix for the axis permutation protocol in the frame order auto-analysis. The pipe.copy user function does not switch pipes, therefore the pipe.switch user function is now being called so that the correct pipe is being permuted and optimised.
  • Created some test data files for visualising the frame order axis permutation. This uses the CaM frame order synthetic data for the rotor model to visualise the pseudo-ellipse frame order model axis permutations. The initial conversion sets the pseudo-ellipse torsion angle cone_sigma_max to the rotor opening half-angle, and the pseudo-elliptic cone opening to close to zero. Then the axis permutations are performed. All three solutions are optimised. PDB representations before and after optimisation are included to illustrate any problems.
  • Bug fix for the new frame_order.permute_axes user function. The cone and torsion angles were not being correctly permuted. Now the direct permutation array is being used. And the fact that cone_theta_x is a rotation along the y-axis and cone_theta_y along the x-axis is taken into account.
  • Redesign of the axis permutation algorithm of the frame_order.permute_axes user function. Instead of tracking the fact that cone_theta_x is a rotation around the y-axis and cone_theta_y is about the x-axis, now two permutation arrays are created - one for the three angles and one for the axes. The permutation array values have also been completely changed as previously the incorrect inverse permutation was coded into the algorithm.
  • Updated the frame order pseudo-ellipse motion permutation test data. This is for the CaM frame order rotor model synthetic data. The correct axis and cone angle permutations of the frame_order.permute_axes user function are now being used and optimised.
  • Renamed the pseudo-ellipse permutation directory to perm_pseudo_ellipse_x_le_y_le_z. This is for the CaM frame order rotor model synthetic data.
  • Fix for the frame_order.permute_axes user function. One of the 6 permutations had the x and y axes switched (the x ≤ z ≤ y condition, permutation A).
  • Visualisation files for all of the pseudo-ellipse permutations by frame_order.permute_axes. This includes the x ≤ z ≤ y and z ≤ x ≤ y conditions (the previous files were for x ≤ y ≤ z). In all permutation combinations, optimisation has been performed to demonstrate that these are all local minima. These all approximate the rotor when using the CaM frame order rotor model synthetic data.
  • Added support for the isotropic cone models to the frame_order.permute_axes user function. This is a simpler setup, but it uses the same permutation algorithm as derived for the pseudo-ellipse models. Instead of setting the x and y cone angles separately, they are instead averaged. And as the cone axis is undefined in the xy plane, the axis has been arbitrarily chosen as the axis perpendicular to both the z-axis and the reference frame x-axis.
  • Created a set of files showing the axis permutation problem for the isotropic cone frame order model. This shows that there are two minima. However one has a chi-squared value of ~1, and the other a value of ~150. Nevertheless, the optimisation could be trapped in the non-global minimum so the frame_order.permute_axes user function should be used for the isotropic cones as well, just in case.
  • Created the other isotropic cone condition z ≤ x = y. As there are no constraints in this model, this condition should not result in any major differences, just the size of the cone being different and the optimisation having to decrease the cone angle significantly to mimic the rotor.
  • Modified the frame order auto-analysis. The axis permutation algorithm is now performed on all isotropic cone and pseudo-ellipse models. This is just in case the non-global minimum was found in the original optimisation. The isotropic cone models possess two local minima whereas the pseudo-ellipse models possess three local minima.
  • Simplified the optimisation in the axis permutation part of the frame order auto-analysis. Only the last, highest quality setting is used for optimisation.
  • Fix for the axis permutation protocol in the frame order auto-analysis. This would fail if a results file for the permuted model already exists as the pipe.copy user function call was being performed too early.
  • Created a set of files for the axis permutation of the torsionless isotropic cone frame order model.
  • Created an initial Frame_order.test_frame_order_pdb_model_ensemble system test. This is to check the operation of the frame_order.pdb_model user function when an ensemble of structures is encountered. However as this uses a very minimal number of user functions to set up the system, a number of other minor bugs will probably be uncovered.
  • Added printouts to the specific_analyses.frame_order.parameters.update_model() function. This is to make it easier to understand why certain things fail due to the system not being fully set up.
  • Simplified the operation of the frame_order.select_model user function. This is by removing the check of PCS data from the specific_analyses.frame_order.data.pivot_fixed() function using the base_data_types() function call. This allows the model to be set up more easily.
  • Modified the frame order check_pivot() function to operate on any data pipe. The function now accepts the pipe_name argument so that checks can happen on any data pipe.
  • Missing imports in the specific_analyses.frame_order.checks module. This is from the recent pipe_name argument addition in the check_pivot() function.
  • The frame order generate_pivot() function can now handle no pivot being present. At the start of this specific_analyses.frame_order.data module function, the check_pivot() function is being called to make sure that a pivot is present.
  • Modified the Frame_order.test_frame_order_pdb_model_ensemble system test so it is set up correctly. The pivot point and moving domain are now specified.
  • Added Monte Carlo simulations to the Frame_order.test_frame_order_pdb_model_ensemble system test. This is only setting up Monte Carlo simulation data structures via the monte_carlo.setup user function. This demonstrates a failure of the frame_order.pdb_model user function when an ensemble of structures is present with Monte Carlo simulations.
  • Added support for the model argument for the frame_order.pdb_model user function. This argument is used to specify which of the models in an ensemble will be used to represent the average domain position Monte Carlo simulations, as each simulation is encoded as a model, as well as for the distribution of structures simulating the motion of the system. The argument is therefore passed into the create_ave_pos() and create_distribution() functions of the specific_analyses.frame_order.geometric module. To handle all models being used in the non-simulation PDB file but only the chosen model in the Monte Carlo simulation file, the internal structural object is copied twice. The second copy, used for the MC simulations, has all but the chosen model deleted out of it.
  • Fix for the Frame_order.test_frame_order_pdb_model_ensemble system test. More needed to be done to set up the Monte Carlo simulations - the monte_carlo.initial_values user function call was required.
  • Modified the frame order sim_init_values() API method to handle missing optimisation data. The monte_carlo.initial_values user function was failing if optimisation had not been performed. This is now caught and handled correctly.
  • Created the Frame_order.test_frame_order_pdb_model_failed_pivot system test. This simply shows how the frame_order.pdb_model user function currently fails if the optimised pivot point is outside of the PDB coordinate limits of "%8.3f".
  • The frame_order.pdb_model user function can now properly handle a failed pivot optimisation. This is when the pivot point optimises to a coordinate outside of the PDB limits. Now all calls to specific_analyses.frame_order.data.generate_pivot() from the module specific_analyses.frame_order.geometric set the pdb_limit flag to True. This allows all representation objects to be within the PDB limits. The algorithm in generate_pivot() has been extended to allow higher positive values, as the real PDB limits are [-999.999, 9999.999]. And a RelaxWarning is raised when the pivot is outside these limits to tell the user about it.
  • Modified the frame order auto-analysis to be more fail-safe. Almost all of the protocol is now within a try-finally block so that the execution lock will always be released.
  • Fix for the specific_analyses.frame_order.data.pivot_fixed() function. The problem was recently introduced when the check for PCS data was removed from this function. To fix the problem, instead of calling base_data_types() to see if PCS data is present, the cdp.pcs_ids data structure is checked instead.
  • Fix for the model argument for the frame_order.pdb_model user function. The deletion of structural models for the Monte Carlo simulations in the average domain position representation now only happens if more than one model exists.
  • Modified the Frame_order.test_frame_order_pdb_model_failed_pivot system test. This is to show that the frame_order.pdb_model user function fails if the pivot is close to but still within the PDB coordinate limits.
  • Modified the pivot position checking in specific_analyses.frame_order.data.generate_pivot(). Now the pivot is shifted to be within the limits shrunk by 100 Angstrom. This allows any PDB representation created by the frame_order.pdb_model user function to be within the PDB limits.
  • Fix for the axis permutation protocol in the frame order auto-analysis. If a results file was found for one of the permutations, a return from the function would occur. The result is that the other permutations would not be loaded or optimised.
  • Fix for the RelaxError raised by the frame_order.select_model user function. This is the error if the model name is incorrect.
  • Created the Frame_order.test_pseudo_ellipse_zero_cone_angle system test. This is to catch a bug in optimisation when the cone_theta_x is set to zero in the pseudo-ellipse models.
  • Bug fix for the lib.frame_order.pseudo_ellipse.tmax_pseudo_ellipse_array() function. The problem was that when θx or θy were zero, the floating point value of 0.0 would be returned. This is the incorrect behaviour as the returned value must be an array matching the dimensions of the φ angle array argument.
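    The essence of the fix is the return type, as sketched below. The elliptic cone opening formula shown is only illustrative of the idea; see lib.frame_order.pseudo_ellipse for relax's exact expression.

      from numpy import cos, sin, sqrt, zeros_like, linspace, pi

      def tmax_pseudo_ellipse_array(phi, theta_x, theta_y):
          """Return the cone opening angle theta_max for each phi value."""
          if theta_x == 0.0 or theta_y == 0.0:
              return zeros_like(phi)        # An array of zeros, never the float 0.0.
          return theta_x*theta_y / sqrt((theta_y*cos(phi))**2 + (theta_x*sin(phi))**2)

      phi = linspace(0.0, 2*pi, 5)
      print(tmax_pseudo_ellipse_array(phi, 0.0, 0.5))    # [0. 0. 0. 0. 0.]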
  • Fix for the Pseudo_elliptic cone object for when the cone angles are zero. The Pseudo_elliptic.phi_max() method now avoids a divide by zero error.
  • Updates for all of the Frame_order.test_axis_perm_* system tests. The axis permutations and angle permutations are now performed correctly within the tests themselves. This allows the tests to pass.
  • Modified the Frame_order.test_pseudo_ellipse_zero_cone_angle system test to be quick. Now that the test passes, the optimisation needs to be short. So a maximum of two iterations are now set. Otherwise the test would take hours to complete.
  • Small speedup of the Frame_order.test_auto_analysis system test.
  • Alphabetical ordering of most of the Frame_order system tests.
  • Created the very simple Frame_order.test_num_int_points system test. This simply creates a data pipe and calls the frame_order.num_int_pts user function to test its operation. This is to increase the test suite coverage of this user function.
  • Created the Frame_order.test_num_int_pts2 system test. This checks the operation of the frame_order.num_int_pts user function when only the model has been chosen.
  • Renamed the Frame_order.test_num_int_points system test to Frame_order.test_num_int_pts.
  • Created the check_domain() function for the frame order analysis. This is in the specific_analyses.frame_order.checks module. The function checks that the reference domain has been specified.
  • Created the check_model() function for the frame order analysis. This is in the specific_analyses.frame_order.checks module. The function checks that the frame order model has been selected via the frame_order.select_model user function.
  • The frame_order.ref_domain user function backend now uses the check_domain() function.
  • Created the check_parameters() function for the frame order analysis. This is in the specific_analyses.frame_order.checks module. The function checks that the frame order parameters have been set up and have values.
  • Created the Frame_order.test_num_int_pts3 system test. This checks the operation of the frame_order.num_int_pts user function when the model has been selected and the frame order parameters have been set up.
  • Created the Frame_order.test_count_sobol_points system test. This will test that the frame_order.num_int_pts user function can correctly count the number of Sobol' integration points used for the current set of parameter values. This frame_order.num_int_pts functionality does not exist yet.
  • Implementation of the specific_analyses.frame_order.optimisation.count_sobol_points() function. This is used by the frame_order.num_int_pts user function to provide a printout of the number of Sobol' integration points used for the current parameter values. This is to provide user feedback so that it is known if enough Sobol' points have been used.
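    A rough sketch of the counting idea, using hypothetical angle names: of all the Sobol' points generated over the torsion-tilt angles, only those lying inside the current cone opening and torsion half-angle contribute to the PCS integration.

      from numpy import abs, pi
      from numpy.random import uniform

      # Stand-in for the Sobol' sequence of torsion-tilt angles.
      theta = uniform(0.0, pi, 1000)          # Tilt angles.
      sigma = uniform(-pi, pi, 1000)          # Torsion angles.

      cone_theta = 0.5                        # Current cone opening half-angle (rad).
      cone_sigma_max = 0.3                    # Current torsion half-angle (rad).

      used = ((theta <= cone_theta) & (abs(sigma) <= cone_sigma_max)).sum()
      print("Number of Sobol' points used: %i of %i" % (used, len(theta)))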
  • Modified the Frame_order.test_count_sobol_points system test. The number of points has been massively decreased as generating Sobol' points takes a long time, and the check for the number of used Sobol' points has been set to the real value.
  • Created the Frame_order.test_count_sobol_points2 system test. This checks the operation of the frame_order.count_sobol_points user function. As this user function has not been implemented yet, the test currently fails.
  • Created the frame_order.count_sobol_points user function. This is simply a frontend to the new specific_analyses.frame_order.optimisation.count_sobol_points() function.
  • Updated the Frame_order.test_count_sobol_points2 system test for the correct number of Sobol' points.
  • Created the Frame_order.test_count_sobol_points_rigid system test. This is to demonstrate a failure of the frame_order.count_sobol_points user function when applied to the rigid frame order model.
  • Fix for the frame_order.count_sobol_points user function for the rigid model. This model is now caught at the start, a message printed out, and the function exited.
  • Fix for the Frame_order.test_count_sobol_points_rigid system test. This now checks that cdp.used_sobol_points does not exist for the rigid frame order model after a call to the frame_order.count_sobol_points user function.
  • Created the Frame_order.test_count_sobol_points_rotor system test. This is to test the frame_order.count_sobol_points user function for the rotor model.
  • Fix for the frame_order.count_sobol_points user function for the rotor model. The σ angles unpacking required a dimensionality collapse in the Sobol' angle data structure.
  • Updated the number of points to allow the Frame_order.test_count_sobol_points_rotor system test to pass.
  • The frame order count_sobol_points() function is now being called by all of the minimise user functions. This occurs at the end of the minimise.calculate, minimise.grid_search, and minimise.execute user function backends to provide more feedback to the user as to the quality of the optimisation. To avoid initialising the target function twice, the count_sobol_points() function now accepts the initialised target function as an optional argument.
  • Created the Frame_order.test_count_sobol_points_free_rotor system test. This is to demonstrate that the frame_order.count_sobol_points user function currently fails for the free-rotor model.
  • Fix for the frame_order.count_sobol_points user function for the free-rotor models. The torsion angle is now correctly handled as the 3 free-rotor models do not have cdp.cone_sigma_max set.
  • Updated the number of points in the Frame_order.test_count_sobol_points_free_rotor system test. This is to allow the test to pass.
  • Fix for the frame order count_sobol_points() function. The checks for the model, parameter and domain set up must come first, before cdp.model is accessed. Otherwise the frame_order.num_int_pts user function will often fail.
  • Fix for the frame order count_sobol_points() function. The free-rotor isotropic cone model was incorrectly handled, as the cone parameter is 'cone_s1' and not 'cone_theta'. The order parameter is now converted to an angle before checking if the Sobol' point is outside of the cone or not.
  • More fixes for the frame order count_sobol_points() function. The torsion angle for the torsionless models is no longer accessed, and the cone_theta parameter is only accessed for models with this parameter.
  • Created the Frame_order.test_count_sobol_points_iso_cone_free_rotor system test. This is to test the frame_order.count_sobol_points user function for the free-rotor isotropic cone model.
  • Fix for the frame order count_sobol_points() function. The torsion angle ranges from -π to π, so the absolute value needs to be checked, just as in the lib.frame_order modules.
  • Updates for the number of Sobol' points in the Frame_order.test_count_sobol_points_* system tests. This is simply to allow all Frame_order system tests to pass.
  • Redesigned the frame_order.num_int_pts user function frontend for the oversampling idea. The use of the quasi-random Sobol' sequence for numerical PCS integration will be modified to use the concept of oversampling. Instead of specifying the exact number of points in the Sobol' sequence and then removing points outside of the current parameter values, the algorithm will oversample as N * Ov * 10^M, where N is the maximum number of Sobol' points to be used for the integration, Ov is the oversampling factor, and M is the number of dimensions, i.e. the number of torsion-tilt angles used in the system. The aim is to try to use the maximum number of points N for all frame order models and all ranges of dynamics.
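A minimal sketch of the oversampling count described above (the function and variable names are hypothetical, not relax code):

<source lang="python">
# Total number of Sobol' points generated under the oversampling scheme: N * Ov * 10^M.
def total_sobol_points(max_points, oversample, dims):
    return max_points * oversample * 10**dims

# A 3D model (isotropic cone or pseudo-ellipse) with N = 50000 and Ov = 1:
print(total_sobol_points(50000, 1, 3))    # 50000000 points generated in total.
</source>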
  • Renamed the frame_order.num_int_pts user function to frame_order.sobol_setup. The user function no longer specifies the number of integration points. Instead it now specifies the maximum number of points N and oversampling factor Ov used to generate the oversampled Sobol' sequence.
  • Implemented the Sobol' sequence oversampling in the frame order target function class.
  • Converted all of the specific_analyses.frame_order package to the Sobol' point oversampling design. The correct values are now sent into the target function and all references to cdp.num_int_pts have been replaced with the cdp.sobol_max_points and cdp.sobol_oversample pair of variables. The frame_order.count_sobol_points user function backend has also been updated to show the total number of oversampling points and the number of points used.
  • The frame_order.count_sobol_points user function now shows more information. The maximum number of points and the oversampling factor are now also printed out for better user feedback.
  • Improved the printout formatting for the count_sobol_points() frame order function.
  • The frame order target function now passes the maximum number of Sobol' points to the relax library. The value is being passed into the lib.frame_order.*.pcs_numeric_int_*() functions, though it is not yet used.
  • Fix for the percentage calculation for the frame order count_sobol_points() function.
  • Changed the creation of the Sobol' points in the frame order target function. For increased accuracy of the numerical PCS integration, the first 1000 points of the Sobol' sequence are now skipped to avoid any bias. For speed, the axis order of the Sobol' torsion-tilt angles has been swapped so that the numpy.swapaxes() function call is no longer required in the lib.frame_order.*.pcs_numeric_int_*() functions.
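For illustration only, the two ideas above (skipping an initial burn-in of the sequence, and storing the angles with the points along the first axis so that no swapaxes() call is needed later) could be sketched as follows; relax has its own Sobol' implementation, so the SciPy generator and the angle mapping below are simply assumptions for the sketch:

<source lang="python">
# Illustration of burn-in skipping and point layout, using SciPy's Sobol' generator.
import numpy as np
from scipy.stats import qmc

def generate_angles(total_points, dims=3, burn_in=1000):
    gen = qmc.Sobol(d=dims, scramble=False)
    gen.fast_forward(burn_in)              # Skip the first points of the sequence to avoid bias.
    u = gen.random(total_points)           # Shape (total_points, dims): points first, no swapaxes() needed.
    theta = np.arccos(2.0*u[:, 0] - 1.0)   # Hypothetical mapping onto a tilt angle in [0, pi].
    phi = 2.0*np.pi*u[:, 1] - np.pi        # Tilt direction in [-pi, pi).
    sigma = 2.0*np.pi*u[:, 2] - np.pi      # Torsion angle in [-pi, pi).
    return np.stack([theta, phi, sigma], axis=1)
</source>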
  • Updated the frame order count_sobol_points() function to handle the swapped axis order.
  • Huge speedup for the generation of the Sobol' sequence data in the frame order target function. The new Sobol_data class has been created and is instantiated in the module namespace as target_functions.frame_order.sobol_data. This is used to store all of the Sobol' sequence associated data, including the torsion-tilt angles and all corresponding rotation matrices. When initialising the target function, if the Sobol_data container holds the data for the same model and same total number of Sobol' points, then the pre-existing data will be used rather than regenerating all the data. This can save a huge amount of time.
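A simplified sketch of this module-level caching pattern (the attribute names and the random stand-in data are assumptions; the real container also stores the rotation matrices):

<source lang="python">
# Simplified module-level cache for the Sobol' sequence data.
import numpy as np

class Sobol_data:
    model = None
    total_num = None
    angles = None

sobol_data = Sobol_data()    # Single instance living in the module namespace.

def init_sobol_data(model, total_num, dims=3):
    """Reuse the cached angles if the model and total point count are unchanged."""
    if sobol_data.model == model and sobol_data.total_num == total_num:
        return sobol_data.angles                  # Cache hit - nothing is regenerated.
    sobol_data.model = model
    sobol_data.total_num = total_num
    sobol_data.angles = np.random.uniform(-np.pi, np.pi, (total_num, dims))   # Stand-in data.
    return sobol_data.angles
</source>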
  • Updated the frame order count_sobol_points() function to use the new Sobol_data container. The Sobol' sequence data generated by the target function is now located at target_functions.frame_order.sobol_data.
  • Updated all the lib.frame_order.*.pcs_numeric_int_*() functions for the new Sobol' point algorithm. The functions now all accept the max_points argument and terminate the loop over the Sobol' points once the maximum number of points has been reached. The calls to numpy.swapaxes() have also been removed as this is now pre-performed by the target function initialisation.
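In outline, the capped loop and the point counting used by frame_order.count_sobol_points might look like this (a hypothetical, heavily simplified version with a single cone half-angle and torsion check):

<source lang="python">
# Capping the numerical PCS integration at max_points usable Sobol' points.
def count_usable_points(angles, cone_theta, sigma_max, max_points):
    used = 0
    for theta, phi, sigma in angles:       # angles has the shape (total_num, 3).
        if theta > cone_theta:             # Outside the cone opening half-angle.
            continue
        if abs(sigma) > sigma_max:         # Outside the allowed torsion range (note the abs()).
            continue
        used += 1                          # This point would contribute to the PCS sum.
        if used >= max_points:             # Terminate once the maximum number of points is reached.
            break
    return used
</source>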
  • Changed the default oversampling factor from 100 to 1 in the frame_order.sobol_setup user function.
  • Converted the frame order auto-analysis to use the new frame_order.sobol_setup user function design. The auto-analysis Optimisation_settings object has also been modified so that all num_int_pts arguments and internal structures have been split into the two new sobol_max_points and sobol_oversample names and objects.
  • Fix for the rigid frame order model for the recent frame_order.sobol_setup user function changes. For this model, the number of Sobol' points normally does not exist. This is now correctly handled.
  • Created the sobol_setup() method for the frame order auto-analysis. This is used to correctly handle the new design of the frame_order.sobol_setup user function consistently throughout the protocol.
  • Updated the Frame_order.test_auto_analysis system test script. This now uses the new auto-analysis Optimisation_settings object design.
  • Updated the Frame_order.test_count_sobol_points system test. The call to the frame_order.num_int_pts user function was changed to frame_order.sobol_setup.
  • Fixes for the Frame_order.test_count_sobol_points2 system test. The test_suite/shared_data/frame_order/axis_permutations/cam_pseudo_ellipse.bz2 relax state file has been manually edited to change the num_int_pts data pipe structure to sobol_max_points and to add the new sobol_oversample variable.
  • Added a backwards compatibility hook for state and results files for the Sobol' sequence changes. The data pipe num_int_pts variable is now renamed to sobol_max_points when present, and the sobol_oversample variable is created and set to 1.
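A minimal sketch of such a backwards compatibility hook (the variable names come from the text above; the function itself is hypothetical):

<source lang="python">
# Hypothetical data pipe upgrade for old state and results files.
def upgrade_sobol_variables(dp):
    """Rename num_int_pts to sobol_max_points and add the sobol_oversample default."""
    if hasattr(dp, 'num_int_pts'):
        dp.sobol_max_points = dp.num_int_pts
        del dp.num_int_pts
    if not hasattr(dp, 'sobol_oversample'):
        dp.sobol_oversample = 1
</source>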
  • Updates to all of the Frame_order.test_count_sobol_points_* system tests. The frame_order.sobol_setup user function is used to set a small maximum number of points (20) to allow the tests to be fast. The value of 20 is also checked for to allow the tests to pass.
  • Renamed the cdp.used_sobol_points variable to sobol_points_used. This is created by the count_sobol_points() frame order function. The name change is to match the sobol_max_points and sobol_oversample variable names.
  • Renamed all the Frame_order.test_num_int_pts* system tests to Frame_order.test_sobol_setup*. These system tests were for checking the operation of the old frame_order.num_int_pts user function, which is now the frame_order.sobol_setup user function.
  • Fix for all of the Frame_order.test_rigid_data_to_*_model system tests. The frame_order.num_int_pts user function call was changed to frame_order.sobol_setup.
  • Updated the χ2 check in the Frame_order.test_rigid_data_to_free_rotor_model system test. This value has changed due to the first 1000 points of the Sobol' sequence being skipped.
  • Fixes for all of the lib.frame_order.*.pcs_numeric_int_*_qrint() functions. The loop over the Sobol' points was broken. As numpy.swapaxes() has been applied to the points argument already, the loop needs to be over the second dimension of the points data structure.
  • Updates for all of the Frame_order.test_cam_* system tests. The NUM_INT_PTS variable in the system test scripts is now passed into the frame_order.sobol_setup user function as the max_num argument. This number has also been changed so that the tests take a reasonable amount of time. All χ2 value checks were updated. These were validated by increasing the number of integration points and watching the χ2 value of the Frame_order.test_cam_*_pcs version of the system tests head to zero.
  • Another update for the χ2 check in the Frame_order.test_rigid_data_to_free_rotor_model system test. The previous commit used an incorrect value for the χ2. This new value is now much closer to the original.
  • Turned down the verbosity of the update_model() frame order function. The verbosity flag is now accepted and set to zero by the get_param_names() API method and specific_analyses.frame_order.parameters.param_num() function. This removes a lot of useless printouts from many different user functions.
  • Introduced the verbosity argument to the count_sobol_points() frame order function. This is used to turn the printouts on or off. The optimisation code now calls this function with the verbosity argument sent into the minimise.grid_search and minimise.execute user functions. Hence the printouts are suppressed for Monte Carlo simulations.
  • Removed the axis system printout from the frame_order.pdb_model user function. This is for the geometric representation of the frame order dynamics. The axis system is printed out as the rotation matrix used for the lib.structure.geometric.generate_vector_residues() function later on anyway. The change is to simplify the printouts.
  • Editing of the docstring of the frame_order.sobol_setup user function.
  • Fix for the frame order system test optimisation printouts. The cdp.num_int_pts variable is now called cdp.sobol_max_points.
  • The starting time of the axis permutation model optimisations is now output. This is in the frame order auto-analysis. This call to the time user function occurred for the normal models, so extending it to the permuted axes models makes the output more consistent.
  • Simplified the atomic position averaging warning in the frame order analysis. Instead of throwing a warning for each spin, one warning for all spins is now given. This should make the output a lot less verbose.
  • The frame order minimise_setup_atomic_pos() function now accepts the verbosity argument. This is used to silence the warnings in user functions such as frame_order.sobol_setup.
  • Improvements for the frame order overfit_deselect() API method. Three changes have been made: the print statements have been converted to RelaxWarnings; the spin IDs or spin ID pairs are now stored in a list, and one RelaxWarning for the missing PCS data and one for the missing RDC data are now given; and the verbose flag is now used to determine if a RelaxWarning will be given.
  • Change to the position averaging warning in the minimise_setup_atomic_pos() frame order function.
  • Improvements for the printout from the update_model() frame order function. A list of updated parameters is now created and everything is printed on a single line at the end. The printout is therefore much more compact.
  • Spun out part of the frame_order.pdb_model user function into the new frame_order.simulate user function. The new user function arguments required for properly creating the pseudo-Brownian dynamics simulation would have made the frame_order.pdb_model user function too complicated. Therefore this part has been spun out into the new frame_order.simulate user function. The frame_order.simulate frontend fully describes the algorithm that will be used to simulate the dynamic content of the PCS and RDC data, and warns that not all modes of motion are visible and present.
  • Updated the frame order auto-analysis to call the new frame_order.simulate user function. Although not implemented yet, this allows the user function to create the simulation PDB file in the future.
  • Small fix for the new frame_order.simulate user function backend.
  • Updated the base script for the Frame_order.test_cam_* system tests. The frame_order.simulate user function is now called directly after the frame_order.pdb_model user function.
  • Created the backend framework for the frame_order.simulate user function. The backend specific_analyses.frame_order.uf.simulate() function performs all data checks required, prepares the output file object, assembles the frame order parameter values and pivot point, and creates a copy of the structural object with the ensemble collapsed into a single model. All this data is then passed into the new lib.frame_order.simulation.brownian() function. This initialises all required data structures and the structural object. The main loop of the simulation is also implemented, taking snapshots at every fixed number of steps and terminating the loop once the total number of snapshots is reached. The snapshot consists of copying the original unrotated structural model and rotating it into the new position. The rotation is currently the identity matrix. The old specific_analyses.frame_order.geometric.create_distribution() stub function has been deleted.
  • Decreased the time required for the Frame_order.test_cam_* system tests. The frame_order.simulate user function now only creates a total of 20 snapshots rather than 1000.
  • Added new arguments to the frame order auto-analysis for the frame_order.simulate user function. These are the brownian_step_size, brownian_snapshot and brownian_total arguments which are passed directly into the frame_order.simulate user function. This gives the user more control, as well as allowing the test suite to speed up this part of the analysis.
  • Huge speedup for the Frame_order.test_auto_analysis system test. The pseudo-Brownian dynamics simulation via the frame_order.simulate user function has been massively sped up to allow the test to be almost as fast as before.
  • Spun out the code for shifting to the average frame order position into a new function. The old code of the create_ave_pos() function of the specific_analyses.frame_order.geometric module has been shifted into the new average_position() function. This will allow the code to be reused by other parts of relax to obtain the average frame order structures.
  • Implemented the shifting to the average position for the frame_order.simulate user function backend. This simply sends the structural object into the new average_position() function of the specific_analyses.frame_order.geometric module.
  • Improvements for the frame_order.simulate user function. The rigid model is now skipped, the PDB file closed, and some printouts for better user feedback have been added.
  • Changed the default PDB file name for the frame_order.simulate user function to 'simulate.pdb'. The '*.bz2' extension has been dropped so that the file is quicker to create and does not need to be decompressed for loading into molecular viewers.
  • Created the specific_analyses.frame_order.geometric.generate_axis_system() function. This is now used by most parts of the frame order analysis to generate the full 3D eigenframe of the motions. Previously this was implemented each time the frame or major axis was required. This replicated and highly inconsistent code has been eliminated.
  • Fix for the new specific_analyses.frame_order.geometric.generate_axis_system() function. The rotor and free rotor models were not correctly handled and the returned eigenframe was the zero matrix.
  • Implemented the pseudo-Brownian frame order dynamics simulation for the single motion models. This uses the same logic as in the test_suite/shared_data/frame_order/cam/*/generate_distribution.py scripts which were used to generate all of the test suite data. However rather than using a random rotation matrix, a random 3D vector is used as the axis of a rotation by a fixed angle, and this rotation takes the current state to state i+1. The rotation for the state is decomposed into torsion-tilt angles once shifted into the motional eigenframe, any violations are corrected by shifting the state to the boundary, the new state is then reconstructed from the corrected torsion-tilt angles, and finally it is shifted from the motional eigenframe back to the PDB frame.
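To make the random walk idea concrete, here is a heavily simplified sketch reduced to a single rotor axis (fixed-angle random steps, violations shifted back to the torsion boundary, periodic snapshots); the real backend also handles the tilt angles, the motional eigenframe and the structural models:

<source lang="python">
# Heavily simplified pseudo-Brownian walk for a single rotor axis (illustration only).
import numpy as np

def brownian_rotor(sigma_max, step_size=np.radians(2.0), snapshot=10, total=20):
    sigma = 0.0
    snapshots = []
    step = 0
    while len(snapshots) < total:
        step += 1
        sigma += step_size * np.random.choice([-1.0, 1.0])   # Fixed-angle step in a random direction.
        sigma = np.clip(sigma, -sigma_max, sigma_max)         # Shift any violation back to the boundary.
        if step % snapshot == 0:
            # In relax the unrotated structure would be rotated by Rz(sigma) about the
            # pivot here and appended to the PDB file as the next model.
            snapshots.append(sigma)
    return snapshots

print(brownian_rotor(sigma_max=np.radians(30.0)))
</source>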
  • Shifted the specific_analyses.frame_order.variables module into the lib.frame_order package. This is both to minimise circular dependencies, as previously the specific_analyses.frame_order modules import from target_functions.frame_order and vice-versa, and to allow the relax library functions to have access to these variables.
  • Implemented the frame_order.simulate user function backend for the double rotor frame order model. This involved extending the algorithm to loop over N states, where N=2 for the double rotor and N=1 for all other models. To handle the rotations being about the x and y-axes, an axis permutation algorithm is used to shift these axes to z prior to decomposing to the torsion-tilt angles. The reverse permutation is used to shift the axes back after correcting for being outside of the allowed angles.
  • Fixes for the specific_analyses.frame_order.geometric.average_position() function. The recent trunk changes with the structural object Internal_selection class required a change in this function.
  • Updated the lib.frame_order.simulation.brownian() function. This now uses the internal structural object selection object logic - the selection() method is called to obtain the Internal_selection object, and this is then passed into the rotation() method.
  • The quad_int argument for the frame order target function class now defaults to False. This is so that quasi-random Sobol' numerical integration will be used by default.
  • The cdp.quad_int flag is now passed into the target function for the frame order calculate() method. This is for the minimise.calculate user function backend.
  • Fixes for the missing cdp.quad_int flag. If the cdp.quad_int flag is missing, this is now set to False before setting up the target function class. The previous behaviour was that the frame_order.quad_int user function must be called prior to optimisation. Now it is optional for turning this flag on and off.
  • The RDC only optimisation now defaults to the *_qrint() frame order target functions. This restores the earlier behaviour prior to the restoration of the SciPy quadratic integration.
  • Clean up for the frame order target function aliasing. The SciPy quadratic integration and the quasi-random Sobol' integration target functions are now aliased using the Python getattr() built-in function to programmatically choose one or the other. The rigid model has been removed from the list as it is not a numeric model, and the func_double_rotor() target function has been renamed to func_double_rotor_qrint() to make it consistent with the naming of the other target functions.
  • Renaming of all the frame order target functions and PCS integration functions. For consistency, all quasi-random Sobol' integration functions now use the 'qr_int' tag whereas the SciPy quadratic integration functions use the 'quad_int' tag. This is not only in the target function names but also the PCS integration functions in lib.frame_order.
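The aliasing pattern referred to above can be sketched with a toy class (this is not the relax target function code):

<source lang="python">
# Toy sketch of aliasing a model-specific target function via getattr().
class Frame_order_target:
    def __init__(self, model='rotor', quad_int=False):
        tag = 'quad_int' if quad_int else 'qr_int'
        # Picks e.g. func_rotor_qr_int() or func_rotor_quad_int() programmatically.
        self.func = getattr(self, "func_%s_%s" % (model, tag))

    def func_rotor_qr_int(self, params=None):
        return "quasi-random Sobol' integration for the rotor model"

    def func_rotor_quad_int(self, params=None):
        return "SciPy quadratic integration for the rotor model"

print(Frame_order_target(model='rotor', quad_int=True).func())
</source>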
  • Duplicated all Frame_order.test_cam_* system tests for testing the SciPy quadratic integration. The Frame_order.test_cam_* system tests have all been renamed to Frame_order.test_cam_qr_int_*. These have been duplicated and renamed to Frame_order.test_cam_quad_int_*. The flags() system test method has been extended to include the quad_int flag which is then stored in the status object and used in the base CaM frame order system test script to activate the frame_order.quad_int user function.
  • Activated the quad_int flag for a number of the Frame_order.test_cam_quad_int_* system tests. The quad_int argument for the flags() test suite method had been missed for a few of these tests.
  • Updated the χ2 check in the Frame_order.test_cam_qr_int_pseudo_ellipse_free_rotor_rdc system test. This test is not normally run as it is blacklisted and duplicates the coverage of other tests. However its χ2 value check had not been updated for a while and hence the test fails when explicitly run.
  • The Sobol' point counting is now turned off for the frame order optimisation functions if no Sobol' points exist. If the cdp.quad_int flag is set, then there will be no Sobol' points to count. The count_sobol_points() user feedback function will therefore not be called by the minimise.calculate, minimise.grid_search and minimise.execute user functions.
  • Turned off optimisation for all of the Frame_order.test_cam_quad_int_* system tests. The SciPy quadratic integration is far too slow to be used in the test suite. The simple call to the minimise.calculate user function is sufficient for checking these target functions.
  • Updated all of the SciPy quadratic integration frame order target functions. A number of the data structures in the target function class have been redesigned since these target functions were deleted. All of the func_*_quad_int*() target functions have been updated for these changes.
  • Updated all of the χ2 value checks for the Frame_order.test_cam_quad_int_* system tests. This is only for those tests which use PCS data - the RDC only test χ2 values are the same as in the Frame_order.test_cam_qr_int_* system tests. In all cases, the χ2 value is lower for the more accurate SciPy quadratic integration as compared to the quasi-random Sobol' integration, as expected.
  • Implemented the SciPy quadratic integration target function for the double rotor frame order model. This simply follows from what all the other quadratic integration target functions and lib.frame_order module functions do.
  • Changed the χ2 value checks in the Frame_order.test_cam_quad_int_double_rotor* system tests. These were the values for the quasi-random Sobol' integration and needed updating for the SciPy quadratic integration.
  • Removed the skip_tests argument for the Frame_order system tests __init__() method. This argument, which was used to manually turn on or off the blacklisted tests, is no longer needed due to the new --no-skip relax command line flag which will enable all blacklisted tests.
  • The [http://www.nmr-relax.com/api/4.0/auto_analyses.frame_order-module.html frame order auto-analysis] Optimisation_settings object now supports the quad_int flag. This is for activating the SciPy quadratic integration. It is accepted as an argument for the add_grid() and add_min() methods, and is returned by the new get_grid_quad_int() and get_min_quad_int() methods.
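A usage sketch based only on the methods and flags named in these notes; the remaining argument names (inc, min_algor, func_tol) and the index argument of the getter methods are assumptions and may not match the actual API:

<source lang="python">
# Hypothetical usage of the auto-analysis optimisation settings with the quad_int flag.
from auto_analyses.frame_order import Optimisation_settings

opt = Optimisation_settings()
# Grid search stage using the quasi-random Sobol' integration.
opt.add_grid(inc=11, sobol_max_points=200, sobol_oversample=1, quad_int=False)
# Minimisation stage switched over to the SciPy quadratic integration.
opt.add_min(min_algor='simplex', func_tol=1e-4, sobol_max_points=200, sobol_oversample=1, quad_int=True)
# The auto-analysis queries the flag back for each optimisation step, e.g.:
print(opt.get_grid_quad_int(0), opt.get_min_quad_int(0))
</source>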
  • Added the ability to specify a pre-run directory in the frame order auto-analysis. This will be used for refinement purposes. If the new pre_run_dir argument, modelled on the relaxation dispersion auto-analysis, is supplied then results files will be loaded from this directory and the base data pipe copying and PCS subset optimisation steps will be skipped. The model nesting algorithm is also deactivated.
  • Activated the SciPy quadratic integration in the frame order auto-analysis. If the Optimisation_settings object has been set up with the quad_int flag, then the auto-analysis will skip the sobol_setup() method and instead directly call the frame_order.quad_int user function. Optimisation will then use the SciPy quadratic integration rather than the quasi-random Sobol' integration.
  • Improvements for the usage of the frame_order.quad_int user function in the auto-analysis. The frame_order.quad_int user function is now called even when the Optimisation_settings object quad_int flag is False. This allows for switching between the SciPy quadratic integration and the quasi-random Sobol' integration, as the SciPy quadratic integration can now be turned off.
  • Additions to the frame order auto-analysis documentation.
  • Incorporated the contents of the summarise.py script into the frame order auto-analysis module. This has been converted into the summarise() function which will generate a results summary table as the analysis is still running.
  • Improved logic in the auto_analyses.frame_order.summarise() function. The model names, directories and titles are now being auto-generated from the full list of frame order models in lib.frame_order.variables.MODEL_LIST. To create a common mechanism for determining the model directory name, the Frame_order_analysis.model_directory() method has been converted into a module function.
  • The frame order auto-analysis now calls the summarise() function at the end to create a summary table.
  • Shifted the final state saving in the frame order auto-analysis to be within the safety of the try block.
  • Turned off the final state saving in the Frame_order.test_auto_analysis system test. This almost halves the time required for the test. A private class variable _final_state has been added to the auto_analyses.frame_order.Frame_order_analysis class which when False will cause the state saving step to be skipped.
  • The summarise() function call is now after saving the final state in the frame order auto-analysis. This is needed because the summarise() function will create a new set of data pipes, loading the results which already exist under a different pipe name in the relax data store. Otherwise the final state file is twice as big as it should be.
  • Incorporated the contents of count_sobol_points.py into the frame order auto-analysis module. The analysis script has been converted into the count_sobol_points() function which will generate a summary table of the number of quasi-random Sobol' points used for the PCS numerical integration.
  • The frame order auto-analysis now calls the count_sobol_points() function at the end. This is to automatically create the Sobol' point summary table.
  • Fixes for the auto_analyses.frame_order.summarise() function. If the count_sobol_points() function is called followed by summarise(), a RelaxError will be raised as the data pipe already exists. The summarise() function has been modified to switch to the data pipe if it already exists.
  • Expanded the frame order auto-analysis documentation. This adds a description for the summarise() and count_sobol_points() functions.
  • Elimination of most of the Frame_order.fixme_test_* system tests and associated data. These tests are from a very early stage of the development of the frame order theory back when the base data was the full and reduced alignment tensors for each domain calculated from the RDC data. They do not fit into the current analysis where the base data is the RDCs and PCSs for the moving domain. There is no point upgrading the tests as it would be far too much effort and would only duplicate the coverage of the Frame_order.test_cam_* system tests.
  • Renamed the Frame_order.fixme_test_opendx_map system test to Frame_order.test_opendx_map to activate it.
  • Upgraded the Frame_order.test_opendx_map system test. To upgrade from the ancient design to the current design so that the test is functional and relevant, this now uses the same setup as the Frame_order.test_cam_qr_int_rigid system test. Instead of performing optimisation, the test calls the dx.map user function.
  • Fix for the frame order specific API calculate() method. This was caught by the Frame_order.test_opendx_map system test. The scaling matrix was not being specified by the dx.map user function backend and this was causing the method to fail. Instead of passing the non-existent scaling matrix into the target function, the argument is simply ignored. The scaling matrix has no effect on the minimise.calculate user function so it is not necessary.
  • The verbosity flag is now being respected by the frame order specific API calculate() method. This silences the method when executing the dx.map user function. The χ2 value printout is suppressed and the verbosity argument is being sent into the frame order count_sobol_points() function.
  • Added a section printout to the frame order auto-analysis when summary tables are created.
  • The frame_order.simulate user function now defaults to creating a gzipped PDB file. This is to save room, and because most molecular viewers will automatically read gzipped PDB files.
  • Fix for the change of the pipe_control.pipes.test() function to check_pipe().
  • Small change in the title of the summary table of the frame order auto-analysis. 'Order parameters' has been replaced by 'Cone half angles' to clarify what the values really are.
  • Fix for the frame order optimisation target setup printouts. The 'Numerical integration: ' printout was fixed to 'Quasi-random Sobol' sequence'. This now changes to 'SciPy quadratic integration' if cdp.quad_int is set. The text 'PCS' has also been added for clarification.
  • Removed the call to the frame_order.simulate user function for the rigid model in the auto-analysis. There is no motion to simulate in the rigid model, so the frame_order.simulate user function has no use.
  • Improvements, fixes, and expansion of the results and data visualisation file creation. This is for the frame order auto-analysis. The visualisation() method has been renamed to results_output() and its scope expanded. The method previously only called the frame_order.pdb_model and frame_order.simulate user functions for creating PDB representations of the frame order motions and performing a pseudo-Brownian frame order dynamics simulation. This has been extended to also call the results.write user function for outputting results files and the rdc.corr_plot and pcs.corr_plot user functions for generating correlation plots of the measured vs. back-calculated data. All parts of the auto-analysis where output files are required now call this method. This ensures that all output files are always created, and are placed into the correct directories.
  • Improvements for the sectioning printouts for the frame order auto-analysis. The sections now use the lib.text.formatting subtitle() and subsubtitle() functions to distinguish them from the output of all the user functions, which use the section(), subsection() and subsubsection() functions. New sectioning printouts have been added for clarity.
  • Possible fixes for the frame order auto-analysis. This is just in case a user decides to not perform the optimisation starting with a PCS subset. In this case, the analysis will now execute correctly.
  • Improvements to the summary table for the frame order auto-analysis. The rotor and free rotor model motional eigenframe parameter axis_alpha is now being converted into spherical angles and reported in the table. This allows the motional eigenframe of all models to be easily compared in the table.
  • Created a directory and base PDB system for testing out the PCS information content. The base PDB system consists of Ad Bax's CaM domain structures superimposed onto the open CaM structure, the N-domain CoM shifted to the origin, and the C-domain CoM shifted to the z-axis.
  • Modified the PCS content testing base system. The paramagnetic centre is now shifted to the origin, as this is the real centre of the PCS physics.
  • Intermediate optimisation results are now stored by the frame order auto-analysis. The results from each minimise.grid_search and minimise.execute user function call are now stored in specially named directories located in the 'intermediate_results' directory, which itself is located in the auto-analysis results_dir directory. This allows intermediate results to be more easily analysed later on, which can be useful for fine-tuning the optimisation steps. These directories can also be used for the pre_run_dir auto-analysis argument for subsequent refinements from earlier steps in the optimisation. The results stored include everything from the results_output() method and the count_sobol_points() and summarise() functions. To allow this to work, the auto-analysis functions count_sobol_points() and summarise() required modification. Results files are now always loaded into a temporary data pipe, rather than switching to the corresponding pipe, and the temporary data pipe is deleted after the data has been extracted. The original data pipe name is also stored and a switch back to that pipe occurs at the end of each function.
  • The simulation is now turned off for intermediate results in the frame order auto-analysis. The intermediate results are only for checking, so for these the full pseudo-Brownian dynamics simulations are not required. The simulation flag has been introduced into the results_output() method of the auto-analysis to control this.
  • The splitting of the rigid model grid search into rotation and translation parts is now optional. In the frame order auto-analysis, the rigid_grid_split argument has been introduced. The alternating algorithm of performing a grid search over the rotational space followed by translation is now optional and turned off by default. The reason is that the global minimum is sometimes missed with this shortcut algorithm.
  • Speedup of the Frame_order.test_auto_analysis system test. The splitting of the rigid model grid search into rotation and translation parts has been reactivated.
  • Created the Optimisation.has_grid() method for the frame order auto-analysis. This is used to test if the optimisation settings object has a grid search defined.
  • The grid search can now be skipped for the rigid model in the frame order auto-analysis. If the input 3D structures are close to the real solution, the grid search over the translational and rotation parameters of the rigid model could be skipped. This speeds up the analysis and can help find the real solution in problematic cases.
  • The intermediate results storing can now be turned off in the frame order auto-analysis. The new store_intermediate Boolean argument has been added to the analysis to allow the storage of these results to be turned on or off.
  • The intermediate results are no longer stored in the Frame_order.test_auto_analysis system test. This drops the test timing on one system from ~190 seconds to ~50 seconds.
  • The compression level for results files can now be set in the frame order auto-analysis. This is via the new argument results_compress_type, which is used to set the compress_type argument of the results.write user function. The results reading parts of the auto-analysis have been updated to allow uncompressed, bzip2 compressed, and gzip compressed files to be handled.
  • Added a printout of the frame order model in the target function setup function. This is printed out when the minimise.calculate, minimise.grid_search, or minimise.execute user functions are called, and is for better feedback, especially in the auto-analysis where the repetitive optimisations can be confusing.
  • Updated the frame order analysis for the structure.load_spins user function changes. The minimise_setup_atomic_pos() function of the specific_analyses.frame_order.optimisation module now handles the mixed type spin.pos variable correctly.
  • The data pipe containing a PCS subset is now optional in the frame order auto-analysis. This is for systems which have so little data that a subset makes no sense.
  • Redesigned the optimisation steps for the frame order auto-analysis. The code has been significantly simplified as the optimisation for the PCS subset and full data set was the same. The code duplication has been eliminated by combining it into the new optimisation() method. The check for the PCS subset has also been expanded so that it is skipped if the subset data pipe is not supplied, even if an optimisation object for the subset has been supplied (this should prevent strange errors when the auto-analysis is incorrectly used). A side effect of this code merger is that the zooming grid search has now been activated for the full PCS data set. This is of great benefit when a PCS subset is not being used.
  • The minimise.execute user function skip_preset flag is now False in the frame order auto-analysis. This is for the main model optimisation. Without this flag set, the grid search for the pivot point position for the rotor model was being skipped at the first zoom level.
  • The pivot point can now be excluded from the grid search in the frame order auto-analysis. If the initial pivot point is known to be reasonable, then it may be possible to skip it in the grid search for the rotor frame order model. This can lead to a speedup of the analysis and can help with stability. The pivot_search argument has been added to the auto-analysis Optimisation.add_grid() method to enable this. The get_grid_pivot_search() method has also been added to allow the auto-analysis to query this and turn it off if desired.
  • Updated the description of the frame_order.permute_axes user function. This now includes the isotropic cone.
  • Replaced the table in the frame_order.permute_axes user function. The original table was an old and incorrect version. This has been replaced by the correct permutation table.
  • Added some old relax scripts for both simulating and predicting the frame order matrix elements. These were used for the initial implementation of the pseudo-ellipse frame order model back in July 2010. The scripts will be extended for all frame order models. The simulated values could then be used in unit tests of the frame order matrix code in lib.frame_order.
  • Updated the frame_order_simulate.py script for simulating frame order matrix elements. The MODEL variable has been added in preparation for supporting all model types, and this is now added to the file name. The Grace header is now also being automatically generated.
  • Improvements for the Grace files produced by the frame_order_simulate.py script. The model name is now set as a variable and is used for the subheading.
  • Updated the frame_order_solution.py script for directly calculating the frame order matrix elements. The MODEL variable has been added in preparation for supporting all model types, and this is now added to the file name. The Grace header is now also being automatically generated and matches that of the frame_order_simulate.py script.
  • Zero values can now be handled in the pseudo-ellipse 1st degree frame order matrix function. This is in lib.frame_order.pseudo_ellipse.compile_1st_matrix_pseudo_ellipse().
  • Removed some unused code in the pseudo-ellipse 2nd degree frame order matrix function. This is the compile_2nd_matrix_pseudo_ellipse() function in the lib.frame_order.pseudo_ellipse module. The change should make the RDC part of the frame order analysis for the pseudo-ellipse model slightly faster.
  • Modified the rotate_daeg() function as this is independent of the degree of the frame order matrix. This is the lib.frame_order.matrix_ops.rotate_daeg() function.
  • Fix for the compile_1st_matrix_pseudo_ellipse() function. This function of the lib.frame_order.pseudo_ellipse module now can rotate the 1st degree frame order matrix out of its eigenframe and into the PDB frame.
  • Created an executable Python script for mass converting the frame order matrix Grace graphs. The script converts the *.agr files to EPS and PNG files.
  • Modified the frame order matrix Grace graph to EPS/PNG format conversion script. The binary being called is now 'grace' rather than 'xmgrace'. This allows different Grace versions to be used.
  • Modified the frame order matrix Grace graph to EPS/PNG format conversion script. Grace is now used to create a PostScript file and then the ps2eps program is called to convert to EPS. This produces much better EPS files for inclusion into LaTeX documents.
  • Redesign of the frame_order_solution.py script for calculating the frame order matrix elements. This script now loops over all models, all motional frame orientations, and all order parameters to generate the Grace graphs of all 1st and 2nd degree frame order matrix elements. Therefore the script only needs to be executed once. The script also now calculates a point at zero (slightly shifted to 0.01 to avoid artifacts).
  • Added all of the Grace graphs produced by the frame_order_solution.py script. These are the graphs of the 1st and 2nd degree frame order matrix elements, calculated using the functions in lib.frame_order.
  • Updated frame_order_simulate.py to be much faster in simulating the frame order matrix elements. The script also matches the Grace file output of the frame_order_solution.py script. The inside() method has been renamed for the pseudo-ellipse and the infrastructure for adding support for the other frame order models has been added. By shifting calculations outside of the loops, the script is now many orders of magnitude faster.
  • Implemented the compile_1st_matrix_rotor() function. This is for the lib.frame_order.rotor module. The function will calculate the 1st degree in-frame frame order matrix for the rotor model.
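For the rotor, the 1st degree matrix is simply the rotation about the rotor axis averaged over a torsion angle uniform in [-σmax, σmax], which has a closed form; a sketch of that calculation (not necessarily identical to the relax function) is:

<source lang="python">
# Sketch: 1st degree in-frame frame order matrix for the rotor model.
# Averaging Rz(sigma) over sigma uniform in [-sigma_max, sigma_max] gives
# <cos(sigma)> = sin(sigma_max)/sigma_max and <sin(sigma)> = 0.
from numpy import diag, pi, sinc

def first_matrix_rotor(sigma_max):
    s = sinc(sigma_max/pi)    # numpy.sinc(x) = sin(pi*x)/(pi*x), i.e. sin(sigma_max)/sigma_max here.
    return diag([s, s, 1.0])

print(first_matrix_rotor(pi/4))
</source>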
  • Created the Grace graphs for the rotor model 1st degree frame order matrix elements. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
  • Implemented the compile_1st_matrix_free_rotor() function. This is for the lib.frame_order.free_rotor module. The function will calculate the 1st degree in-frame frame order matrix for the free rotor model.
  • Created the Grace graphs for the free rotor model 1st degree frame order matrix elements. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
  • Implemented the compile_1st_matrix_iso_cone() function. This is for the lib.frame_order.iso_cone module. The function will calculate the 1st degree in-frame frame order matrix for the isotropic cone model.
  • Created the Grace graphs for the isotropic cone model 1st degree frame order matrix elements. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
  • Implemented the compile_1st_matrix_iso_cone_torsionless() function. This is for the lib.frame_order.iso_cone_torsionless module. The function will calculate the 1st degree in-frame frame order matrix for the torsionless isotropic cone model.
  • Created the Grace graphs for the torsionless isotropic cone 1st degree frame order matrix elements. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
  • Implemented the compile_1st_matrix_iso_cone_free_rotor() function. This is for the lib.frame_order.iso_cone_free_rotor module. The function will calculate the 1st degree in-frame frame order matrix for the free rotor isotropic cone model.
  • Created the Grace graphs for the free rotor isotropic cone 1st degree frame order matrix elements. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
  • Docstring fixes for the new compile_1st_matrix_iso_cone() function.
  • A minor speedup for the frame_order_simulate.py script. The angles are now being calculated at the very start prior to the main loop, removing repetitive calculations.
  • The frame_order_simulate.py script now uses lib.text.progress.progress_meter(). This script for simulating the frame order matrix elements now uses the standard progress meter in relax to simplify the script. This should also speed up the calculations as the progress printouts were slowing down the calculations.
  • Simulation of the pseudo-ellipse frame order matrix elements. This is for a simulation of 1,000,000 states for each angle increment, and includes the in-frame and out-of-frame analyses with the θx, θy, and θz angles varied. The resultant Grace graphs have been added to the repository.
  • The frame order matrix element simulation script now uses the Kronecker outer product. This allows the frame order matrix to be in the same notation as that used internally in relax. It will cause the colours of the Sijkl_* curves to match between the simulation and solution scripts.
  • Added the rotor model to the frame order matrix element simulation script. The generated in-frame and out-of-frame Grace graphs containing the matrix values for 1,000,000 simulation values have been added to the repository. The script was modified so that the rotation is generated by special rotation_*() methods which are aliased depending on the model.
  • Added the free rotor model to the frame order matrix element simulation script. The generated in-frame and out-of-frame Grace graphs containing the matrix values for 1,000,000 simulation values have been added to the repository. The inside_free_rotor() method has been added to always return True for the rotation generated by rotation_z_axis().
  • Simplifications and fixes for the 1st degree frame order matrix calculation for the pseudo-ellipse. The compile_1st_matrix_pseudo_ellipse() function of the lib.frame_order.pseudo_ellipse module has been significantly simplified by shifting a lot of maths outside of the quadratic integration.
  • Updated all the calculated 1st degree frame order matrix graphs for the pseudo-ellipse. The changes are due to the fixes in the lib.frame_order.pseudo_ellipse module.
  • Simplifications for all of the torsionless pseudo-ellipse frame order matrix equations.
  • Implemented the compile_1st_matrix_pseudo_ellipse_torsionless() function. This is for the lib.frame_order.pseudo_ellipse_torsionless module. The function will calculate the 1st degree in-frame frame order matrix for the torsionless pseudo-ellipse model.
  • Created the Grace graphs for the torsionless pseudo-ellipse model 1st degree frame order matrix. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
  • Added the isotropic cone model to the frame order matrix element simulation script. The generated in-frame and out-of-frame Grace graphs for the torsion angle cone_sigma_max, containing the matrix values for 1,000,000 simulation values have been added to the repository. The inside_iso_cone() method has been created to check for the θx and θz angle violations from the rotation_hypersphere() method.
  • Simplifications for the inside_*() methods of the frame order matrix element simulation script. The limit() method is now called only once outside of these methods and the maximum cone half-angles passed into the inside_*() methods. Although only slightly faster, this is mainly to simplify the code.
  • Alphabetical ordering of methods in the frame order matrix element simulation script.
  • Simplification of some of the pseudo-ellipse 2nd degree frame order matrix equations.
  • More simplifications of the pseudo-ellipse 2nd degree frame order matrix equations.
  • Integer to float conversions in part_int_daeg2_pseudo_ellipse_13(). This avoids integer to float conversion during execution, saving a little time for the pseudo-ellipse 2nd degree frame order matrix compilation.
  • Removal of many repetitive calculations in the pseudo-ellipse 2nd degree frame order matrix equations.
  • Simplifications of the pseudo-ellipse 1st degree frame order matrix functions. The xx, yy, and zz indices have been renamed to 00, 11, and 22 for consistency, and all sigma_max arguments have been dropped as they are not used.
  • Small numerical changes for the pseudo-ellipse 2nd degree frame order matrix graphs. These are only for the first point close to zero and the changes are minimal, caused by the recent simplifications of the code.
  • Created the Grace graphs for the free rotor pseudo-ellipse model 1st degree frame order matrix. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
  • Implemented the compile_1st_matrix_pseudo_ellipse_free_rotor() function. This is for the lib.frame_order.pseudo_ellipse_free_rotor module. The function will calculate the 1st degree in-frame frame order matrix for the free_rotor pseudo-ellipse model.
  • Speedups and simplifications of the free rotor pseudo-ellipse 2nd degree frame order matrix equations.
  • Added the torsionless isotropic cone model to the frame order matrix element simulation script.
  • Implemented the compile_1st_matrix_double_rotor() function. This is for the lib.frame_order.double_rotor module. The function will calculate the 1st degree frame order matrix for the double_rotor model.
  • Created the Grace graphs for the double rotor model 1st degree frame order matrix. These are the values calculated directly from the lib.frame_order modules. The graphs were previously all zeros.
  • Recreated all of the simulated pseudo-ellipse frame order matrix element graphs. These are now in the Kronecker product notation so that they will match the graphs calculated using the relax lib.frame_order.pseudo_ellipse module.
  • Fix for the pseudo-ellipse 1st degree frame order matrix ᛞ22 element.
  • Updated all of the pseudo-ellipse 1st degree frame order matrix graphs for the recent fix.
  • Converted the Sobol' rotation matrices to float32 in the frame order target function. This is to conserve huge amounts of memory and to allow more Sobol' points to be used. For example, for the models which use 3D Sobol' points (isotropic cone and pseudo-ellipse), a maximum of 50,000 Sobol' points requires 50,000,000 points to be created, using about 15 GB of RAM.
  • A few Frame_order system test updates for the float64 to float32 memory saving changes. The χ2 values of three tests were slightly different.
  • Bug fix for the activation of quadratic integration in the frame order auto-analysis. The calls to the frame_order.quad_int user function in the optimisation() method did not supply an argument so the user function was defaulting to False rather than the True value required.
  • The frame order auto-analysis summary functions are now more robust. If the data pipe already exists for some reason, it is deleted prior to the new one being created.
  • Changed the frame_order.quad_int user function argument default to True. This means that calling the user function without arguments will activate the quadratic integration rather than turning it off.
  • Added the isotropic cone model frame order matrix simulation graphs for the cone opening angle θx.
  • Created and added all of the torsionless isotropic cone simulated frame order matrix element graphs.
  • Added the free rotor isotropic cone model to the frame order matrix element simulation script. The generated Grace graphs containing the matrix values for 1,000,000 simulation values have been added to the repository. The self.torsion_check variable has been created to allow the inside_iso_cone() method to skip the torsion angle check when its value is False.
  • Added the torsionless pseudo-ellipse model to the frame order matrix element simulation script. The generated Grace graphs containing the matrix values for 1,000,000 simulation values have been added to the repository. The rotations are generated by the rotation_hypersphere_torsionless() method and the angle violations checked using the inside_pseudo_ellipse() method.
  • Bug fix for the torsionless pseudo-ellipse 1st degree frame order matrix. The 11 element was of the wrong sign.
  • Fixes for the torsionless pseudo-ellipse 1st degree frame order matrix element graphs.
  • Added the free rotor pseudo-ellipse model to the frame order matrix element simulation script. This only required the self.torsion_check variable to be set to False. The model uses the inside_pseudo_ellipse() and rotation_hypersphere() methods.
  • Fixes for free rotor isotropic cone 1st degree frame order matrix graphs calculated using relax. The 1st degree function accepts the cone opening angle θ rather than the order parameter S.
  • Added the frame order matrix element graphs for the in-frame free rotor pseudo-ellipse model.
  • Added the frame order matrix element graphs for the out-of-frame free rotor pseudo-ellipse model.
  • Added support for the double rotor model to the frame order matrix element simulation script. The double rotation is constructed in the new rotation_double_xy_axes() method, and the checks for the violation of the two torsion angles are in the inside_double_rotor() method. In the main loop, the θ and φ angles correspond to sigma1 and sigma2, while the σ angle is unused.
  • Fixes for all of the calculated double rotor model frame order matrix graphs. The X and Y angles were mixed up. The first torsion half-angle sigma1 corresponds to a y-axis rotation and the second sigma2 corresponds to an x-axis rotation.
  • Added the frame order matrix element graphs for the double rotor model.
  • A divide by zero fix for the torsionless pseudo-ellipse. This is in the compile_2nd_matrix_pseudo_ellipse_torsionless() relax library function.
  • A divide by zero fix for the free rotor pseudo-ellipse. This is in the compile_2nd_matrix_pseudo_ellipse_free_rotor() relax library function.
  • The 1st angle for the calculated frame order matrix graphs is 0 for all non pseudo-ellipse models. This is for the frame_order_solution.py script. Only the pseudo-ellipse models where numerical integration is required fail for the angle of 0.0. Therefore the changing of the first angle from 0.0 to 0.01 only occurs for the pseudo-ellipse models. All graphs have been updated.
  • The 1st pseudo-ellipse torsion angle value in the frame order matrix graphs is now 0.0. Only the cone opening angles set to 0.0 cause a failure in the pseudo-ellipse models, so the torsion angle is now allowed to start at exactly zero.
  • Clean up of the frame order matrix element simulation script.
  • Redesign of the free rotor isotropic cone frame order model - the order parameter has been replaced. From the frame order matrix element graphs in test_suite/shared_data/frame_order/sim_vs_pred_matrix, specifically Sijkl_iso_cone_free_rotor_in_frame_theta_x_calc.agr, Sijkl_iso_cone_free_rotor_axis2_1_3_theta_x_calc.agr, and Sijkl_iso_cone_free_rotor_out_of_frame_theta_x_calc.agr, it is clear that the symmetry of the order parameter after 120 degrees causes the 2nd degree frame order matrix to be incorrectly estimated. Therefore the S1 order parameter has been replaced with the original cone opening angle cone_theta. All parts of relax have been updated for this large conversion.
  • Updated the frame order matrix element graphs for the free rotor isotropic cone fixes. The cone S1 parameter has been converted back to the original cone θ opening half-angle, allowing the 2nd degree frame order matrix elements to be properly calculated for all motions.
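For reference, the order parameter replaced in the two items above is, in the standard wobbling-in-a-cone form (assumed here to be the S1 definition used), <math>S = \tfrac{1}{2}\cos\theta\,(1 + \cos\theta)</math>. This function has its turning point at θ = 120°, so cone opening half-angles beyond 120° map back onto S values already covered by smaller angles, consistent with the symmetry problem described above and removed by parameterising directly in the cone angle θ.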
  • Eliminated the lib.frame_order.iso_cone.populate_*() functions. The populate_1st_eigenframe_iso_cone() function was unused and incorrect, so it was deleted. The contents of the populate_2nd_eigenframe_iso_cone() function have been shifted into compile_2nd_matrix_iso_cone(), as a separate function is unnecessary. This now matches all the other lib.frame_order modules.
  • Bug fix for the frame_order.simulate user function. The incorrect model number was being specified and hence the simulation was not starting from the optimised average domain position but rather the arbitrary position of the original structure.
  • Manual Python 3 fixes for the dict.keys() method, which returns a list in Python 2 but an iterable view in Python 3. This matches r26519 in trunk.
  • Python 3 fixes via 2to3 - the "while 1" construct has been replaces with "while True". The command used was: 2to3 -j 4 -w -f idioms .
  • Python 3 fixes via 2to3 - the spacing around commas has been fixed. The command used was: 2to3 -j 4 -w -f ws_comma .
  • Python 3 fixes via 2to3 - the xrange() function has been replaced by range(). The command used was: 2to3 -j 4 -w -f xrange .
  • Started to create the Frame_order.test_pdb_model_rotor system test. This will be used to check that the PDB representations of the frame order motions are correct.
  • Modified the frame_order.pdb_model user function backend to handle missing structural data. The create_ave_pos() function of the specific_analyses.frame_order.geometric module now checks that cdp.structure exists, and if not a warning is given and the PDB file creation is skipped.
  • Fixes for the frame_order.pdb_model user function backend for when no data is present. The pipe_centre_of_mass() function of pipe_control.structure.mass module is now called with the missing_error flag set to False so that the PDB generation can continue with the CoM set to [0, 0, 0].
  • The geometric representation part of the frame_order.pdb_model user function now checks parameters. This calls the specific_analyses.frame_order.checks.check_parameters Check object to make sure that all necessary parameters for the model exist.
  • Completed the Frame_order.test_pdb_model_rotor system test. This now sets the rotor axis to the z-axis (with a printout to be sure), sets the torsion angle to zero for simplicity, creates a new data pipe and loads the PDB representation file, then checks all of the key atom coordinates.
  • Fixes for the unit tests of the lib.frame_order.matrix_ops module for the free rotor isotropic cone. The S1 order parameter has been eliminated due to angles > π/2.0 causing the frame order matrix to be incorrectly predicted. Therefore all unit tests have been converted to use the cone opening angle θ instead. In addition, the test_compile_2nd_matrix_iso_cone_free_rotor_disorder had been modified to pass with the incorrect frame order matrix by comparing to the half cone frame order matrix rather than the identity frame order matrix.
  • Fix for inverted axes in the new Frame_order.test_pdb_model_rotor system test.
  • Huge bug fix for the frame_order.pdb_model user function - the single axis direction was incorrect. In the PDB representation of the frame order motion for the rotor and isotropic cone models (rotor, free rotor, isotropic cone, free rotor isotropic cone, and torsionless isotropic cone), the X and Z axes were swapped. This is because the eigenframe of the motion was being incorrectly constructed via the lib.geometry.rotations.two_vect_to_R() function. For better control, the specific_analyses.frame_order.geometric.frame_from_axis() function has been created. This constructs a full motional eigenframe from the Z-axis. The problem was detected via the new Frame_order.test_pdb_model_rotor system test. A generic sketch of building an orthonormal frame from a single z-axis is given after this list.
  • Size fix for the rotor representation from the frame_order.pdb_model user function. The size problem was detected via the Frame_order.test_pdb_model_rotor system test. The rotors in the PDB representation were all fixed in size, and ignored the 'size' argument of the frame_order.pdb_model user function. The size argument is now passed into the add_rotors() function of the specific_analyses.frame_order.geometric module and passed on to the rotor() function of the lib.structure.represent.rotor module.
  • Created the Frame_order.test_pdb_model_rotor2 system test to check for an offset pivot. The pivot is set to [1, 0, 1] so that the rotor axis is tilted -45 degrees in the xz-plane. And the size of the geometric object is set to 100 Angstrom for better testing of the sizes of the elements.
  • Simplification of the Frame_order.test_pdb_model_rotor system test. The size is now programmatically handled.
  • Created the Frame_order.test_pdb_model_iso_cone system test. This is for checking the PDB representation of the isotropic cone frame order model created by the frame_order.pdb_model user function. It checks both A and B representations.
  • Fix for the cone size created by the frame_order.pdb_model user function. The 'size' argument was not being used at all for the cone size. It is now passed into the lib.structure.represent.cone.cone() function as the 'scale' argument.
  • Small fix for the Frame_order.test_pdb_model_iso_cone system test for the 'B' representation.
  • Fix for the representation label positions created by the frame_order.pdb_model user function. The 'size' argument was not being used at all for the representation title atoms. It is now passed into the add_titles() function as the displacement argument + 10 Angstrom.
  • Printout fix for the axis in the Frame_order.test_pdb_model_iso_cone system test.
  • Created the Frame_order.test_pdb_model_iso_cone_xz_plane_tilt system test. This checks the PDB file from the frame_order.pdb_model user function for the isotropic cone model with an xz-plane tilt.
  • Renamed all of the Frame_order.test_pdb_model_* system tests to be more descriptive.
  • Improvements for all of the Frame_order.test_pdb_model_* system tests. The rotate_from_Z() method has been introduced to simplify the determination of the 3D coordinates expected for the PDB file. This will allow for more advanced testing of the PDB for the cone models.
  • Fixes for the printouts from the Frame_order.test_pdb_model_rotor_* system tests.
  • Alphabetical ordering of the Frame_order system test methods.
  • Fixes for all of the Frame_order system tests - the temporary directories are now being deleted. The system test base class tearDown() method is now being called to properly clean up after the tests.
  • Created the Frame_order.test_pdb_model_pseudo_ellipse_z_axis system test. This demonstrates the correct atom coordinates in the PDB file created by the frame_order.pdb_model user function for the pseudo-ellipse model along the z-axis.
  • Fixes for the checks in the Frame_order.test_pdb_model_* system tests. Atomic positions are now checked with self.assertAlmostEqual() to 3 places, and the residue and atom names and numbers are checked with self.assertEqual().
  • Created the Frame_order.test_pdb_model_pseudo_ellipse_xz_plane_tilt system test. This checks the PDB file created by the frame_order.pdb_model user function for the pseudo-ellipse model with an xz-plane tilt. To properly construct the coordinates, the rotate_from_Z() method was modified to accept a rotation matrix argument to allow the geometric shape to be rotated.
  • Modified the Frame_order.test_pdb_model_iso_cone_xz_plane_tilt system test to have a cone angle. The cone opening half-angle was previously 0.0. The test now checks the geometric object in the PDB file for a cone opening half-angle of 2.0.
  • Modified the Frame_order.test_pdb_model_iso_cone_z_axis system test to have a cone angle. The cone opening half-angle was previously 0.0. The test now checks the geometric object in the PDB file for a cone opening half-angle of 2.0.
  • Created two new system tests for the free rotor PDB representation file. This is the file from the frame_order.pdb_model user function. The two new unit tests are Frame_order.test_pdb_model_free_rotor_z_axis and Frame_order.test_pdb_model_free_rotor_xz_plane_tilt.
  • Created two new frame order system tests for the free rotor isotropic cone PDB representation file. This is the two PDB files from the frame_order.pdb_model user function. The two new system tests are Frame_order.test_pdb_model_iso_cone_free_rotor_z_axis and Frame_order.test_pdb_model_iso_cone_free_rotor_xz_plane_tilt.
  • Created two new frame order system tests for the torsionless isotropic cone PDB representation file. This is the two PDB files from the frame_order.pdb_model user function. The two new system tests are Frame_order.test_pdb_model_iso_cone_torsionless_z_axis and Frame_order.test_pdb_model_iso_cone_torsionless_xz_plane_tilt.
  • Created two new frame order system tests for the free rotor pseudo-ellipse PDB representation file. This is the two PDB files from the frame_order.pdb_model user function. The two new system tests are Frame_order.test_pdb_model_pseudo_ellipse_free_rotor_z_axis and Frame_order.test_pdb_model_pseudo_ellipse_free_rotor_xz_plane_tilt.
  • Created two new frame order system tests for the torsionless pseudo-ellipse PDB representation file. This is the two PDB files from the frame_order.pdb_model user function. The two new system tests are Frame_order.test_pdb_model_pseudo_ellipse_torsionless_z_axis and Frame_order.test_pdb_model_pseudo_ellipse_torsionless_xz_plane_tilt.
  • Created two new frame order system tests for the double rotor PDB representation file. This is the two PDB files from the frame_order.pdb_model user function. The two new system tests are Frame_order.test_pdb_model_double_rotor_z_axis and Frame_order.test_pdb_model_double_rotor_xz_plane_tilt.
  • Added relax scripts and PDB files which match the Frame_order.test_pdb_model_* system tests. These were used to construct and visually check the tests in a molecular viewer. These could be a useful reference, so have been added to the repository.
  • Simplified all of the Frame_order.test_pdb_model_* system tests. The atom, residue and 3D coordinate checking in all these methods has been shifted into the common check_pdb_model_representation() method. This dramatically decreases the amount of code in the system test file.
  • Simplification for all of the Frame_order.test_pdb_model_* system tests. The model setup in all of these tests has been merged into the common setup_model() method. This not only removes a large quantity of repetitive code, but the new method can also be used for constructing future tests, for example for checking the frame_order.simulate user function.
  • Created an initial version of the Frame_order.test_simulate_rotor_z_axis system test. This is to check the frame_order.simulate user function rotor model along the z-axis. It currently fails due to a bug in the user function.
  • Fixes for the Frame_order.test_simulate_rotor_z_axis system test. Now 6 atoms are being created at X, -X, Y, -Y, Z, and -Z, 100 Angstrom from the origin. This is required so that the CoM is at the origin, to allow the CoM-pivot vector to be unchanged at [1, 0, 0] so that the axis α angle of π/2 creates an axis parallel to Z. The origin to atom distance check has also been loosened due to the PDB truncation artifact.
  • Fix for the Frame_order.test_pdb_model_free_rotor_xz_plane_tilt system test. This was broken while implementing the Frame_order.test_simulate_rotor_z_axis system test. Instead of shifting the 6 atom structure so its CoM is the pivot of the motion when creating the atoms, now the Frame_order.test_simulate_rotor_z_axis system test sets the average domain translation vector to the pivot to achieve the same result. This preserves the z-axis orientation of the rotor models.
  • Created the Frame_order.test_simulate_free_rotor_z_axis system test. This is to check the frame_order.simulate user function for the free rotor model along the z-axis.
  • Created the Frame_order.test_simulate_iso_cone_z_axis system test. This is to check the frame_order.simulate user function for the isotropic cone model along the z-axis.
  • Created the Frame_order.test_simulate_iso_cone_free_rotor_z_axis system test. This is to check the frame_order.simulate user function for the free rotor isotropic cone model along the z-axis.
  • Created the Frame_order.test_simulate_iso_cone_torsionless_z_axis system test. This is to check the frame_order.simulate user function for the torsionless isotropic cone model along the z-axis.
  • Created the Frame_order.test_simulate_pseudo_ellipse_z_axis system test. This is to check the frame_order.simulate user function for the pseudo-ellipse model along the z-axis.
  • Created the Frame_order.test_simulate_iso_cone_xz_plane_tilt system test. This is to check the frame_order.simulate user function for the isotropic cone model with an xz-plane tilt.
  • Created the Frame_order.test_simulate_pseudo_ellipse_free_rotor_z_axis system test. This is to check the frame_order.simulate user function for the free rotor pseudo-ellipse model along the z-axis.
  • Created the Frame_order.test_simulate_pseudo_ellipse_xy_plane_tilt system test. This is to check the frame_order.simulate user function for the pseudo-ellipse model with an xz-plane tilt.
  • Created the Frame_order.test_simulate_pseudo_ellipse_torsionless_z_axis system test. This is to check the frame_order.simulate user function for the torsionless pseudo-ellipse model along the z-axis.
  • Fix for the Frame_order.test_simulate_pseudo_ellipse_xz_plane_tilt system test name. This was mislabelled as Frame_order.test_simulate_pseudo_ellipse_xy_plane_tilt.
  • Redesign of the pymol.frame_order user function. This user function still followed the old design in the relax trunk. It has been updated for the frame_order_cleanup branch, whereby the frame_order.pdb_model user function has been split up and the positional distribution has been replaced by the Brownian simulation user function frame_order.simulate.
  • Better checking for the non-moving domain setup. The frame_order.pdb_model user function will now raise a RelaxError if the frame_order.ref_domain user function has not been called to set up the non-moving domain.
  • Updated the frame_order.ref_domain user function for the current branch design. This user function was quite out of date. The alignment tensor checks have been removed, to allow this to be used in the absence of base data. And the user function description has been updated.
  • Updated all frame order system tests for the frame_order.ref_domain user function requirement.
  • Expanded all of the Frame_order.test_simulate_* system tests. Two atoms have been added at the origin [0, 0, 0], one in the moving domain, the other in the reference non-moving domain. The positions of these atoms are checked to make sure that the domain systems are correctly handled.
  • Expanded the double rotor model description in the frame_order.select_model user function.
  • Added the pipe_name argument to the frame order check_model() function. This is for the specific_analyses.frame_order.checks module.
  • Converted the specific_analyses.frame_order.checks module to the new Check object design. This follows from http://wiki.nmr-relax.com/Relax_source_design#The_check_.2A.28.29_functions and the changes significantly simplify the checking objects.
  • Improved checking for the frame order generate_pivot() function. The check_model() checking object is now called to make sure the frame order model has been specified, as this is essential for this function.
  • Created two system tests for the frame_order.simulate user function for the double rotor model. These are Frame_order.test_simulate_double_rotor_mode1_z_axis and Frame_order.test_simulate_double_rotor_mode2_z_axis.
  • Created two system tests for the frame_order.simulate user function for the double rotor model. These are Frame_order.test_simulate_double_rotor_mode1_xz_plane_tilt and Frame_order.test_simulate_double_rotor_mode2_xz_plane_tilt.
  • Added relax scripts which match the Frame_order.test_simulate_* system tests. These are the tests of the frame_order.simulate user function. These were used to construct and visually check the Brownian simulation and PDB model representation in a molecular viewer. These could be a useful reference, so have been added to the repository.
  • Fix for the frame order auto-analysis when only the 'rigid' model is optimised. The final summary table printout for the number of Sobol' points used was failing as there were no models in the table. The table is now only printed out if non-rigid models are present in the model list.
  • Introduced the nested_params_ave_dom_pos argument to the frame order auto-analysis. This allows the average domain position to be set to no rotations and translations rather than taking the average position from the rotor or free-rotor model. This can be useful when large motions are present causing the rigid model to have unreasonable domain positions.
  • Fix for the frame_order.permute_axes user function description to allow the manual to be compiled. The table caption containing the user function name was causing the LaTeX compilation to fail. Therefore the captions have been rewritten to avoid the user function name.
  • Modified the frame order system test check_chi2() method to test the statistics.model user function. This causes all of the Frame_order.test_cam_* system tests to fail, as the user function backend is not implemented for the frame order analysis.
  • Implemented the frame order analysis backend for the statistics.model and statistics.aic user functions. This simply required aliasing the specific analysis API common _get_model_container_cdp() method to get_model_container().
  • Bug fix for the frame order specific analysis API base_data_loop() method. This was looping over non-existent PCS and RDC data. Now the alignment ID is checked for in the interatomic data container 'rdc' data structure and the spin container 'pcs' data structure, as well as values of None, before yielding the data.
  • Created a large set of system tests for implementing the frame_order.distribute user function. This user function will be similar to frame_order.simulate. However, instead of creating a PDB file with models from a pseudo-Brownian simulation, the frame_order.distribute user function will generate a PDB file of models forming a uniform distribution of structures covering the full frame order motional space. The new system tests are: Frame_order.test_distribute_double_rotor_mode1_xz_plane_tilt, Frame_order.test_distribute_double_rotor_mode1_z_axis, Frame_order.test_distribute_double_rotor_mode2_xz_plane_tilt, Frame_order.test_distribute_double_rotor_mode2_z_axis, Frame_order.test_distribute_free_rotor_z_axis, Frame_order.test_distribute_iso_cone_z_axis, Frame_order.test_distribute_iso_cone_xz_plane_tilt, Frame_order.test_distribute_iso_cone_torsionless_z_axis, Frame_order.test_distribute_pseudo_ellipse_xz_plane_tilt, Frame_order.test_distribute_pseudo_ellipse_z_axis, Frame_order.test_distribute_pseudo_ellipse_free_rotor_z_axis, Frame_order.test_distribute_pseudo_ellipse_torsionless_z_axis, Frame_order.test_distribute_rotor_z_axis. These are aliases for the equivalent Frame_order.test_simulate_* system tests which have had the 'type' keyword argument added, defaulting to 'sim', which allows switching between the frame_order.simulate and frame_order.distribute user functions. The concept behind these system tests is the same for both user functions, so the code is shared.
  • Created the front-end of the frame_order.distribute user function. This is a copy and modification of the frame_order.simulate user function, as the concepts are similar.
  • Small modification of the frame_order.simulate user function. The GUI file opening dialog wildcard selectors are now set to all PDB file types (plain text, bzip2 compressed, and gzip compressed).
  • Added the frame_order.distribute user function to the auto-analysis results output. This will allow both the pseudo-Brownian simulation and uniform distribution PDB files to be available to the user in all results directories (excluding the intermediate results for speed).
  • Implemented the back-end of the frame_order.distribute user function. This follows the design of the pseudo-Brownian simulation frame_order.simulate user function. The specific_analyses.frame_order.uf.distribute() function has been created as a modified copy of the simulate() function of the same module. This simply performs checks and assembles the data, passing into the new lib.frame_order.simulate.uniform_distribution() function, which itself is a modified copy of the brownian() function in the same module.
  • Introduced the max_rotations argument into the frame_order.distribute user function. This is used to prevent the user function from running forever. This happens whenever a cone opening angle or torsion angle is zero, as the random sampling of the rotational space will then never find rotations within the motional distribution. A toy sketch of the capped sampling loop is given after this list.
  • Improved control of the frame_order.distribute user function in the frame order auto-analysis. The maximum number of rotations can now be set, and the argument for the total states for the distribution has been shortened.
  • Speedup of the Frame_order.test_auto_analysis system test. After the introduction of the frame_order.distribute user function into the auto-analysis, the test was taking far too long to complete. Now the distribution arguments are set to low values to allow the test to pass in under a minute.
  • Changed the default relax results compression type to bzip2 in the frame order auto-analysis. This was set to no compression for speeding up some system tests, however the system tests can set this for themselves.
  • The Frame_order.test_auto_analysis system test now sets the results file compression type to bzip2.
  • Changed the default max_rotations argument value to 100,000 in the frame_order.distribute user function. This decrease from one million is so the user function completes in a reasonable amount of time.
  • The frame_order.distribute user function now warns when the maximum rotations are reached.
  • Deleted a number of Frame_order.test_distribute* system tests. These are the four double rotor model tests. The frame_order.distribute user function cannot operate on these test cases as one of the two torsion angles is set to zero in the tests.
  • Fix to allow Monte Carlo simulations to be repeated in the frame order analysis. The code for checking for pre-existing Monte Carlo simulation data structures and raising a RelaxError if anything is found has been deleted.
  • Fix of a fatal bug preventing the frame order analysis from being run on a multi-processor system. The multi-processor code was calling the count_sobol_points() function of the specific_analyses.frame_order.optimisation module to give feedback when calling the minimise.execute or minimise.calculate user functions. However this was run in the slave command run() method, hence it would be executed on the slave. The problem is that count_sobol_points() performs a number of checks on the current data pipe, however the slaves do not have any data pipes set up.
  • Added the new 'atom_id' argument to the frame_order.distribute user function. This uses the new inverse selection functionality recently introduced into the trunk to delete all structural data not matching the atom_id from the copy of the loaded structural data string prior to generating the distribution of structures.
  • Bug fix for the frame order target function (introduced recently). The copy.deepcopy() function is now used for all numpy input data to avoid the data being modified between function calls. This is important for missing RDC and PCS data which is sent in as NaN values. In the target function __init__() method, the NaN values are replaced by 0.0 after the self.missing_rdc and self.missing_pcs structures have been created by checking for NaN values. However the recent specific_analyses.frame_order.optimisation change in the Frame_order_minimise_command slave command to print out the number of integration points resulted in the target function being initialised twice, causing all NaN values to be 0.0 in the second initialisation. Hence all missing data was being treated as real data with values of 0.0. A minimal sketch of the deep copy and NaN masking pattern is given after this list.
  • Created a new skeleton chapter in the relax manual for the frame order analysis.
  • Added a theory section to the new frame order chapter. This is taken from an in-preparation supplement.
  • Rearrangement of the frame order chapter in the manual. The theory section has been spun out into its own frame_order_theory.tex LaTeX file for better organisation.
  • Added two more sections to the frame order chapter of the manual. This includes a frame order modelling section and PCS numerical integration section. Both are from a supplement from an in-preparation manuscript.
  • Added a DOI and ISBN number to the bibliography.
  • Moved the frame_order_theory.tex LaTeX file into the frame_order directory.
  • Shifted the frame order model derivations into their own 'Advanced topics' chapter.
  • Added the frame order sample scripts used in the CaM-IQ analysis.
  • Added an introduction for the frame order chapter of the manual.
  • Added a 'Data analysis' section to the frame order chapter of the manual. This includes the N-state and frame order analysis scripts required to perform a full analysis.
  • Editing of the data analysis section of the frame order chapter of the manual. A PCS structural error figure has been added, all the text improved, and the scripts made to match those in sample_scripts/frame_order/.
  • Added a section to the end of the frame order chapter about the long computation times.
  • The 'scons clean' target now removes all LaTeX *.aux files. The docs/latex/frame_order/ directory is now also being checked for *.aux files.
  • Removed many unnecessary references to relax.
  • Removed lots of useless comments about book references.
  • Added some images missing from the frame order chapter of the manual.
  • Avoided a doubly defined label in the manual.
  • Removed some duplicated text in the frame order models chapter of the manual. This is duplicated from the frame order analysis chapter.
  • Indentation fix for allowing the API documentation to be properly compiled.
  • Added a patch file for fixing Epydoc version 3.0.1. This is needed to allow the dot graph file names to be unique (by no longer truncating to 30 characters), and to allow Epydoc to handle newer Graphviz versions.
  • Improvements for the release checklist document. The backporting of the CHANGES file to trunk is now more obvious, and instructions for fixing Epydoc have been added.
  • Clean up of some of the release instructions (for using vim).
  • Added error catching to the find_unused_imports.py developer script.
  • Fix for the error catching in the find_unused_imports.py developer script. The numerous pylint warnings are also sent to STDERR.
  • Removed the printout of pylint STDERR messages in the find_unused_imports.py developer script.
  • Elimination of a number of wildcard imports from some frame order timing scripts. This is to avoid excessive function imports.
  • Removal of an unused import from the user_functions.frame_order module.
  • Removal of unused imports from the test_suite/shared_data/frame_order/simulation scripts.
  • Updated some unused frame order scripts to use the new minimise user function design.
  • Unused import clean up in the test_suite/shared_data/curve_fitting/numeric_topology directory. All the scripts in this directory have been cleaned up to remove unused imports. In one case, commented out code was replaced with an 'if 0:' statement to silence the unused import warnings from the devel_scripts/find_unused_imports.py script.
  • Unused import clean up in the test_suite/shared_data/curve_fitting/profiling directory. The scripts in this directory have been cleaned up to remove unused imports.
  • Added an exception system to the find_unused_imports.py developer script. Sometimes pylint will give an "Unused import" warning for imports that are needed by the module. Therefore an exception list of the file name and module has been created to skip these warnings. The list covers the dep_check module and all of the profiling_*.py scripts in the directory test_suite/shared_data/dispersion/profiling/.
  • Added a copyright notice to the find_unused_imports.py development script. This is mainly to indicate how out of date the script will be in the future.
  • A directory can now be supplied on the command line for the find_unused_imports.py devel script.
  • Changed the imports in the test_monte_carlo_mean.py script. This inconsequential change is to avoid false positives from the find_unused_imports.py devel script.
  • Modifications of the test suite script for calculating synthetic CPMG data. The imports in cpmg_synthetic.py are now all used, rather than being commented out. This allows the find_unused_imports.py devel script to pass.
  • Unused import cleanup of all scripts in the test_suite/shared_data/dispersion/ directories. This either removes the unused imports or uncomments, but deactivates, temporarily unused code.
  • Removed unused imports from the scripts in the test_suite/shared_data/frame_order subdirectories.
  • Removed unused imports from the spectrum system test base module.
  • Removed unused imports from the relax_disp system test base module.
  • Clean up of all unused imports in the system test scripts.
  • Removed unused imports from the structure system test base module.
  • Changed how the import of lib.regex in the test_regex unit tests is used. The module is no longer stored in the TestCase class namespace, but is rather called directly within the unit test.
  • Changed the import of pipe_control.state in the test_state unit test module.
  • Removed unused imports from the unit tests.
  • Added another exception to the find_unused_imports.py devel script. This is for the test_suite.unit_tests._lib._geometry.test_rotations module which programatically obtains the imports using globals().
  • Added a workaround or hack for exceptions for circular imports in the find_unused_imports.py script. This is currently for the test_suite.unit_tests._lib.test___init__ and test_suite.unit_tests._lib._geometry.test___init__ modules.
  • Removal of unused imports from the GUI test modules.
  • Removed all unused imports from the pipe_control package.
  • Added import exceptions for the lib.compat module in the find_unused_imports.py devel script.
  • Added import exceptions for the lib.xml module in the find_unused_imports.py devel script. These are needed because of eval() function calls on XML stored Python data structures.
  • Removed all unused imports from the relax library package.
  • Removed all unused imports from the target_functions package.
  • Removed unused imports from the developer scripts.
  • Removed all unused imports from the specific_analyses package.
  • Removed all unused imports from the auto_analyses package.
  • Removed all unused imports from the numdifftools extern package.
  • Removal of the last unused import from the target_functions package.
  • Fix for the PCS system tests on old Python versions. The self.assertAlmostEqual() function cannot compare None values in earlier Python versions.
  • MS Windows fix for the Frame_order.test_generate_rotor2_distribution system test. The locale.setlocale() function call for correctly setting up a spinning progress meter was failing on MS Windows. The error is now caught and the locale setting skipped.
  • Added Python 3.5 to the manual C module compilation script.
  • Added Python 3.5 to the Python multiversion test suite script.
  • Changes to the introduction of the frame order theory chapter of the manual.
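
The free rotor isotropic cone redesign above replaced the S1 order parameter with the cone opening half-angle because the mapping between the two turns around at a cone angle of 120 degrees and so cannot uniquely parameterise larger cones. The following is a minimal sketch of one common form of this mapping; the exact convention used by relax should be taken from its lib.order module, so treat the formula as illustrative only.

    from math import acos, cos, sqrt

    def iso_cone_theta_to_S(theta):
        """Illustrative cone half-angle to order parameter mapping (not necessarily relax's convention)."""
        return cos(theta) * (1.0 + cos(theta)) / 2.0

    def iso_cone_S_to_theta(S):
        """Invert the mapping on its monotonic branch (theta <= 120 degrees)."""
        return acos((sqrt(1.0 + 8.0 * S) - 1.0) / 2.0)

    # dS/dtheta = 0 at theta = 120 degrees, so S alone cannot distinguish
    # cone angles on either side of this point.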
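
The dict.keys() incompatibility mentioned for the manual Python 3 fixes is easy to illustrate. The toy example below is not relax code; it simply shows why indexing the result of keys() works on Python 2 but fails on Python 3, and one portable fix.

    d = {'N': 1.02, 'CA': 1.53}

    # Python 2: d.keys() returns a list, so d.keys()[0] works.
    # Python 3: d.keys() returns a view object, so d.keys()[0] raises TypeError.

    # A form that behaves identically on both Python versions:
    keys = sorted(d.keys())
    print(keys[0])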
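
The frame_from_axis() fix constructs a full motional eigenframe from a single z-axis. The helper below is a generic, hypothetical sketch of that idea (it is not the relax implementation): it builds a right-handed orthonormal frame whose third column is the supplied axis.

    import numpy as np

    def frame_from_z(z_axis, ref=np.array([1.0, 0.0, 0.0])):
        """Hypothetical helper: build a right-handed orthonormal frame from a z-axis."""
        z = z_axis / np.linalg.norm(z_axis)
        # Pick a different reference vector if it is (nearly) parallel to z.
        if abs(np.dot(ref, z)) > 0.99:
            ref = np.array([0.0, 1.0, 0.0])
        # Gram-Schmidt style construction of the x-axis, then y from the cross product.
        x = ref - np.dot(ref, z) * z
        x /= np.linalg.norm(x)
        y = np.cross(z, x)
        return np.column_stack((x, y, z))    # columns are the x, y and z axes

    # The frame is orthonormal and its last column is the input axis.
    R = frame_from_z(np.array([0.0, 0.0, 1.0]))
    assert np.allclose(R, np.eye(3))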
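
The max_rotations argument guards the uniform distribution generation against never terminating when a cone or torsion angle is zero. The loop below is only a toy sketch of such capped rejection sampling; the function and argument names are hypothetical and the orientation sampling is deliberately crude.

    import numpy as np

    def uniform_distribution(inside, n_states=1000, max_rotations=100000):
        """Toy capped rejection sampling of orientations (hypothetical, not the relax backend)."""
        accepted = []
        for _ in range(max_rotations):    # hard cap so zero-angle models cannot loop forever
            angles = np.random.uniform(-np.pi, np.pi, size=3)    # crude random orientation
            if inside(angles):
                accepted.append(angles)
                if len(accepted) == n_states:
                    break
        else:
            print("Warning: the maximum number of rotations (%i) was reached." % max_rotations)
        return accepted

    # Example: a rotor-like model restricted to a +/- 30 degree torsion angle.
    states = uniform_distribution(lambda a: abs(a[2]) <= np.pi / 6.0, n_states=10)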
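
The target function bug fix relies on deep copying the numpy input data before the NaN markers for missing RDCs and PCSs are replaced by 0.0, so that a second initialisation still sees which values are missing. A minimal sketch of this pattern, with hypothetical names, is:

    from copy import deepcopy
    import numpy as np

    def init_target_data(pcs):
        """Hypothetical illustration of the deep copy plus NaN masking pattern."""
        pcs = deepcopy(pcs)          # protect the caller's array between initialisations
        missing = np.isnan(pcs)      # record which values are absent
        pcs[missing] = 0.0           # safe placeholder for the chi-squared sum
        return pcs, missing

    data = np.array([1.2, np.nan, -0.4])
    values, missing = init_target_data(data)
    print(values, missing)           # the caller's 'data' array still contains the NaN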


Version 3 of relax

relax 3.3 series

relax 3.3.9


relax 3.3.8


relax 3.3.7


relax 3.3.6


relax 3.3.5


relax 3.3.4


relax 3.3.3


relax 3.3.2

  • Updated the minfx version in the release checklist document to version 1.0.11.
  • Updated the relax version in the release checklist document to be more modern.
  • Spelling fixes for the CHANGES file.
  • Updates for the release checklist document. This is mainly because the main release notes are now on the relax wiki, for example for the current version at http://wiki.nmr-relax.com/Relax_3.3.1.
  • Spelling fixed throughout the CHANGES document.
  • Removed a few triple spaces in the CHANGES document.
  • Added periods to the end of all items in the CHANGES document.
  • Fix for an 'N/A' in the CHANGES document.
  • Converted a number of single spaces between sentences to double spaces in the CHANGES document.
  • More updates for the announcement section of the release checklist document.
  • The HTML version of the manual is now compiled with Unicode character support. It allows Greek symbols, for example, to be represented as text rather than LaTeX generated PNG images. This fixes titles and massively decreases the number of images required by the HTML pages.
  • Removal of many dual LaTeX and latex2html section titles in the manual. As the HTML manual is now compiled with Unicode support, the Greek characters in the titles are now supported. Therefore in the model-free and the values, gradients, and Hessians chapters, the dual LaTeX and latex2html section titles could be collapsed to the standard LaTeX section title. This will result in better formatting of the manual and its links.
  • Added instructions and a build script for creating a useful version of latex2html. This version is essential for building the HTML version of the manual. The build script downloads the Debian latex2html-2008 sources as well as all Debian patches for latex2html. It then applies a number of patches for fixing and improving the relax documentation. The program is then compiled and can be installed as the root user into /usr/local/.
  • Extended the number of words used in the HTML webpage file names. This is to hopefully prevent files from being overwritten by multiple files having the same name.
  • Added the writing out of parameters and χ2 values when creating a dx map. Task #7860: When dx_map is issued, create a parameter file which maps parameters to χ2 value.
  • Created the Relax_disp.test_dx_map_clustered_create_par_file system test, which demonstrates that relax is not able to find the local minimum under clustered conditions. When creating the map, the map contains χ2 values which are lower than the clustered fitted values; this should not be the case. Running a larger map with larger bounds and more increments should show that there exists a minimum in the minimisation space with a lower χ2 value. Bug #22754: The minimise.calculate() does not calculate χ2 value for clustered residues. Task #7860: When dx_map is issued, create a parameter file which maps parameters to χ2 value.
  • Renamed test scripts and files for producing surface χ2 plots.
  • Renamed sample scripts making surface maps.
  • Added scripts to make surface plots of the spin-independent parameters δω and Ra2.
  • Added example surface χ2 values for plots. Task #7826: Write an python class for the repeated analysis of dispersion data.
  • Added example save state for more surface plotting.
  • Added a boolean argument to the dx.map user function to specify the creation of a file of parameters and associated χ2 values. In some special situations, the creation of this file is not desired.
  • Modified the structure of points in dx.map so that it is always a list of numpy arrays with 3 values.
  • When issuing the dx.map user function with points, implemented the writing out of a parameter file with the associated calculated χ2 values.
  • Improved the feedback in the User_functions.test_structure_add_atom GUI test. It is now clearer what the input and output data is.
  • The devel_scripts/python_multiversion_test_suite.py script now runs relax with the --time flag. This is for quicker identification of failure points. It will also force the sys.stdout buffer to be flushed more often on Python 2.5 so that it does not appear as if the tests have frozen.
  • Added check to system test Relax_disp.test_cpmg_synthetic_dx_map_points for the creation of a matplotlib surface command plot file.
  • Added the writing out of a matplotlib command file to plot surfaces of a dx map. It uses the minimum χ2 value in the map space to define the surfaces. It creates X,Y; X,Z; and Y,Z maps, where the values in the missing dimension have been cut at the minimum χ2 value. For each map, it creates a projected 3D map of the parameters and the χ2 value, and a heat map for the contours. It also scatters the minimum χ2 value, the 4 smallest χ2 values, and maps any points in the point file to scatter points. Mapping the points from the file to map points is done by finding the shortest Euclidean distance in the space from the points to any map points. A toy illustration of this kind of χ2 heat map and surface plot is sketched below.
  • Fix for testing the raising of expected errors in system tests. The check is skipped if the Python version is below 2.7. Bug #22801: Failure of the relax test suite on Python 2.5.
  • Inserted a z_axis limit for the plotting of 2D surfaces in matplotlib.
  • Added better figure control of χ2 values on z-axis for surface plots.
  • Narrowed the dx map in the Relax_disp.test_dx_map_clustered_create_par_file system test. This is to illustrate the failure of relax to find the global minimum. It seems there is a shallow barrier which relax failed to climb over in order to find the minimum value.
  • Added the verbosity argument to the pipe_control.minimise.reset_min_stats() function. All of the minimisation code which calls this now send in their verbosity arguments. This allows the text "Resetting the minimisation statistics." to be suppressed.
  • Added the verbosity argument to the pipe_control.value.set() function. This is passed into the pipe_control.minimise.reset_min_stats() function so its printouts can be silenced.
  • The pipe_control.opendx space mapping code now calls the value.set() function with verbosity=0. This is to silence the very repetitive statistics resetting messages when executing the dx.map user function.
  • Added more checks to the determine_rnd() of the dauvergne_protocol model-free auto-analysis. This is to try to catch bizarre situations such as bug #22730, model-free auto-analysis - relax stops and quits at the polate step. The following additional fatal conditions are now checked for: A file with the same name as the base model directory already exists; The base model directory is not readable; The base model directory is not writable. The last two could be caused by file system corruptions. In addition, the presence of the base model directory is checked for using os.path.isdir() rather than catching errors coming out of the os.listdir() function. These changes should make the analysis more robust in the presence of 'strangeness'.
  • Added an additional check to determine_rnd() of the dauvergne_protocol model-free auto-analysis. This is to try to catch bizarre situations such as bug #22730, model-free auto-analysis - relax stops and quits at the polate step. The additional check is that if the base model directory is not executable, a RelaxError is raised.
  • Added printouts to the determine_rnd() function of the dauvergne_protocol model-free auto-analysis. This is for better user feedback in the log files as to what is happening. It may help in debugging bug #22730: Model-free auto-analysis - relax stops and quits at the polate step.
  • Alphabetical ordering of imports in the dauvergne_protocol model-free auto-analysis.
  • Changed the model-free single spin optimisation title printouts. The specific_analyses.model_free.optimisation.spin_print() function has been deleted. It has instead been replaced by a call to lib.text.sectioning.subtitle(). This is to match the grid search setup title printouts and to differentiate these titles from those printed out by minfx being underlined by '~' characters.
  • Added extensive sectioning printouts to the dauvergne_protocol model-free auto-analysis. The lib.text.sectioning functions title() and subtitle() are now used to mark out all parts of the auto-analysis. This will allow for a much better understanding of the log files produced by this auto-analysis.
  • Complete redesign of the following of text in the relax controller window in the GUI. The current design for some reason no longer worked very often, and there would be many situations where the scrolling to follow the text output would stop and could never be recovered. Therefore this feature has been redesigned. In the LogCtrl element of the relax controller, which displays the relax output messages, the at_end class boolean variable has been introduced. It defaults to True. The following events will turn it off: Arrow keys, Home key, End key, Ctrl-Home key, Mouse button clicks, Mouse wheel scrolling, Window thumbtrack scrolling (the side scrollbar), finding text, the pop up menu 'Go to start', and Select all (menu or Ctrl-A). It will only be turned on in two cases: The pop up menu 'Go to end', and if the caret is on the final line (caused by Ctrl-End, Mouse wheel scrolling, Page Down, Down arrow, Window thumbtrack scrolling, etc.). Three new methods have been introduced to handle certain events: capture_mouse() for mouse button clicks, capture_mouse_wheel() for mouse wheel scrolling, and capture_scroll for window thumbtrack scrolling.
  • Improvements for selecting all text in the relax controller window. Selecting text using the pop up menu or [Ctrl-A] now shifts the caret to line 1 before selecting all text. This deactivates the following of the end of text, if active, as the text following feature causes the text selection to be lost.
  • Modified the behaviour of the relax controller window so that pressing escape closes the window. This involves setting the initial focus on the LogCtrl, and catching the ESC key press in the LogCtrl as well as all relax controller read only wx.Field elements and calling the parent controller handle_close() method.
  • Replaced the hardcoded integer keycodes in the relax controller with the wx variables. This is for the LogCtrl.capture_keys() handler method for dealing with key presses.
  • Improvement for all wizards and user functions in the relax GUI. The focus is now set on the currently displayed page of the wizard. This allows the keyboard to be active without requiring a mouse click. Now text can be instantly input into the first text control and the tab key can jump between elements. As the GUI user functions are wizards with a single page, this is a significant usability improvement for the GUI.
  • The ESC character now closes all wizards and user functions in the relax GUI. By using an accelerator table set to the entire wizard window to catch the ESC keyboard event, the ESC key will cause the _handler_escape() method to be called which then calls the windows Close() method to close the window.
  • Changed the logic for how the new analysis wizard in the GUI is destroyed. This relates to bug #22818, the GUI test suite failures in MS Windows - PyAssertionError: C++ assertion "Assert failure". The Destroy() method has been added to the Analysis_wizard class to properly close all elements of the wizard. This is now called from the menu_new() method of the Analysis_controller class, which is the target of the menu item and toolbar button. To allow the test suite to use this, the menu_new() method now accepts the destroy boolean argument. The test suite can set this to False and then access the GUI elements after calling the method (however the Destroy() method must be called by the test suite).
  • Redesign of how the new analysis wizard is handled in the GUI tests. This relates to bug #22818, the GUI test suite failures in MS Windows - PyAssertionError: C++ assertion "Assert failure". The GUI test base class method new_analysis_wizard() has been created to simplify the process. When a new analysis is desired, this method should be called. It will return the analysis page GUI element for use in the test. The method standardises the execution of the new analysis wizard and sets up the analysis in the GUI. It also properly destroys the wizard to avoid memory leaking issues such as bug #22818. All GUI tests have been converted to use new_analysis_wizard(). This allows the GUI tests to pass on MS Windows. However there are still significant sources of memory leaks (the USER Objects count) visible in the Windows Task Manager.
  • Fix for the gui.fonts module to allow it to be used outside of the GUI.
  • Updated all of the scripts in devel_scripts/gui/. These have been non-functional since the merger of the relax bieri_gui branch back in January 2011.
  • The gui.misc.bitmap_setup() function can now be used outside of the GUI.
  • Fix for the GUI test base class new_analysis_wizard() method for relaxation dispersion analyses.
  • Modified the pipe_control.pipes.get_bundle() function to operate when no pipe is supplied. In this case, the pipe bundle that the current data pipe belongs to will be returned.
  • Created the Periodic_table.has_element() method for the lib.periodic_table module. This is used to simply check if a given symbol exists as an atom in the periodic table.
  • Added 4 unit tests to the _lib.test_periodic_table module for the Periodic_table.has_element() method.
  • Modified the internal structural object backend for the structure.read_pdb user function. The MolContainer._det_pdb_element() method for handling PDB files with missing element information has been updated to use the Periodic_table.has_element() method to check if the PDB atom name corresponds to any atoms in the periodic table. This allows for far greater support for HETATM records and all of the metals. A toy version of this element lookup idea is sketched below.
  • Created the Structure.test_load_spins_multi_mol system test. This is to test yet to be implemented functionality of the structure.load_spins user function. This is the loading of spin information from similar, but not necessarily identical, molecules all loaded into the same structural model. For this, the from_mols argument will be added.
  • Fixes for the Structure.test_load_spins_multi_mol system test. The call to the structure.load_spins user function has also been modified so that all 3 spins are loaded at the same time.
  • Implemented the multiple molecule merging functionality of the structure.load_spins user function. The from_mols argument has been added to the user function frontend and a description added for this new functionality. In the backend, the pipe_control.structure.main.load_spins() function will now call the load_spins_multi_mol() function if from_mols is supplied. This alternative function is required to handle missing atoms and differential atom numbering.
  • Modified the N_state_model.test_populations system test to test the grid search code paths. This performs a grid search of one increment after minimisation, then switches to the 'fixed' N-state model and performs a second grid search of one increment. This now tests currently untested code paths in the grid_search() API method behind the minimise.grid_search user function. The test demonstrates a bug in the N-state model which was not uncovered in the test suite.
  • Created the N_state_model.test_CaM_IQ_tensor_fit system test. This is for catching bug #22849, the failure of the N-state model analysis when optimising only alignment tensors using RDCs and/or PCSs. This new test checks code paths unchecked in the rest of the test suite, and is therefore of high value.
  • Modified the atomic position handling in pipe_control.structure.main.load_spins_multi_mol(). The multiple molecule merging functionality of the structure.load_spins user function now handles missing atomic positions differently. The aim is that the length of the spin container position variable is fixed for all spins to the number of structures, as the N-state model analysis assumes this equal length for all spins. When data is missing, the atomic position for that structure is now set to None. This will require other modifications in relax to support this new design.
  • Modified the interatom.unit_vectors user function backend to handle missing atomic positions. This is to match the structure.load_spins user function change whereby missing atomic positions are now set to the value of None.
  • Fix for the atomic position handling in pipe_control.structure.main.load_spins_multi_mol(). The dimensionality of the position structure returned by the structural object atom_loop() method needed to be reduced.
  • The structure.load_spins user function now stores the number of states in cdp.N. This is to help the specific analyses which handle ensembles of structures. With the introduction of the from_mols argument to the structure.load_spins user function, the number of states is now not equal to the number of structural models, as the states can now come from different structures of the same model. Therefore the user function will now explicitly set cdp.N to the number of states depending on how the spins were loaded.
  • Clean up and speed up of the N_state_model.test_CaM_IQ_tensor_fit system test. All output files are now set to 'devnull' so that the system test no longer creates any files within the relax source directories. And the optimisation settings have been decreased to hugely speed up the system test.
  • Expanded the lib.arg_check.is_float_matrix() function by adding the none_elements argument. This matches a number of the other module functions, and allows for entire rows of the matrix to be None.
  • Lists of lists containing rows of None are now better supported by the lib.xml functions. The object_to_xml() function will now convert the float parts to IEEE-754 byte arrays, and the None parts will be stored as None in the <ieee_754_byte_array> list node. The matching xml_to_object() function has also been modified to read in this new node format. This affects the results.write and state.save user functions (as well as the results.read and state.load user functions). A small sketch of the float-to-byte-array round trip idea is given below.
  • Added spacing after the minimise.grid_search user function setup printouts. This is for better spacing for the next messages from the specific analysis.
  • Speed up of the N_state_model.test_CaM_IQ_tensor_fit system test. This test is however still far too slow.
  • Added printouts to pipe_control.pcs.return_pcs_data() and pipe_control.rdc.return_rdc_data(). These functions now accept the verbosity argument which if greater than 0 will activate printouts of how many RDCs or PCSs have been assembled for each alignment. This will be useful for user feedback as the spin verses interatomic data container selections can be difficult to understand.
  • The verbosity argument for the N-state model optimisation is now propagated for more printouts. The argument for the calculate() and minimise() API methods is now sent into specific_analyses.n_state_model.optimisation.target_fn_setup(), and from there into the pipe_control.pcs.return_pcs_data() and pipe_control.rdc.return_rdc_data() functions. That way the number of RDCs and PCSs used in the N-state model is reported back to the user for better feedback.
  • Updated the N_state_model.test_CaM_IQ_tensor_fit system test so it operates correctly as a GUI test. All user functions are now executed through the special self._execute_uf() method to allow either the prompt interpreter or the GUI to execute the user function.
  • Modified the N_state_model.test_CaM_IQ_tensor_fit system/GUI test for implementing a new feature. The 'spin_selection' argument has been added to the interatom.define user function. This will be used to carry the spin selections over into the interatomic data containers.
  • Implemented the spin_selection Boolean argument for the interatom.define user function. This has been added to the frontend with a description, and to the backend. When set, it allows the spin selections to define the interatomic data container selection.
  • Changed the spin_selection argument default in the interatom.define user function backend. This now defaults to False to allow other parts of relax which call this function to operate as previously. The default for the interatom.define user function is however still True.
  • Modified the Structure.test_load_spins_multi_mol system test for the spin.pos variable changes. The atomic position for an ensemble of structures is now set to None rather than being missing, so the system test has been updated to check for this.
  • The align_tensor.display user function now has more consistent section formatting. The section() and subsection() functions of the lib.text.sectioning module are now being used to standardise these custom printouts with the rest of relax.
  • Modifications to the new N_state_model.test_CaM_IQ_tensor_fit system test. The system test now checks all of the optimised values to make sure the correct values have been found. That will block any future regressions in this N-state model code path. The system test is now also faster. And the pcs.structural_noise user function RMSD value has been set to 0.0 so that the test no longer has a random component affecting the final optimised values.
  • Added printouts for the rdc.calc_q_factors and pcs.calc_q_factors user functions. These are activated by the new verbosity user function argument which defaults to 1. If the value is greater than 0, then the backend will print out all the calculated Q factors.
  • The verbosity argument of the RDC and PCS q_factors() functions now defaults to 1. This causes the Q factors to be printed out at the end of all N-state model optimisations.
  • Created the Structure.test_bug_22860_CoM_after_deletion system test. This is to catch bug #22860, the failure of the structure.com user function after calling structure.delete.
  • Fix for the checks in the new Structure.test_load_spins_multi_mol system test. A spin index was incorrect.
  • Fix for the structure.load_spins user function when the from_mols argument is used. The load_spins_multi_mol() function of the pipe_control.structure.main module was incorrectly handling the atomic position returned by the internal structural object atom_loop() method. This position is a list of lists when multiple models are present. But when only a single model is present, it returns a simple list.
  • Modified the Structure.test_bug_22860_CoM_after_deletion system test to expect a RelaxNoPdbError. This tests that the structure.com user function raises RelaxNoPdbError after deleting all of the structural information from the current data pipe.
  • The mol_name argument is now exposed in the structure.add_atom user function. This has been added as the first argument of the user function to allow new molecules to be created or to allow the atom to be placed into a specific molecule container. The functionality was already implemented in the backend, so it has been exposed by simply adding a new argument definition to the user function.
  • Created the Structure.test_bug_22861_PDB_writing_chainID_fail system test. This is to catch bug #22861, the chain IDs in the structure.write_pdb user function PDB files are incorrect after calling structure.delete.
  • Small modification of the Structure.test_bug_22861_PDB_writing_chainID_fail system test. File metadata is now being set to demonstrate that the structure.delete user function does not remove this once there is no more data left for the molecule.
  • Small indexing fixes for the dispersion chapter of the relax manual.
  • Fix for system test Relax_disp.test_cpmg_synthetic_dx_map_points. Another import line was written to the matplotlib script.
  • Speedup and fix for the Relax_disp.test_dx_map_clustered_create_par_file system test. One check was taken out, since this is a particularly interesting case: there exists a double minimum, and relax has not found the global minimum. This is due to not grid searching for Ra2, but using the minimum value.
  • Removed debugging code from the N_state_model.test_CaM_IQ_tensor_fit system test. This was an accidentally introduced state.save user function call used to catch the system test state. It would result in the 'x.bz2' file being dumped in the current directory.
  • Loosened the checks in the Relax_disp.test_baldwin_synthetic_full system test. This is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system.
  • Fix for the Relax_disp.test_cpmg_synthetic_dx_map_points system test for certain systems. This change is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system. This may be related to 32-bit numpy 1.6.2 versus later numpy versions causing precision differences.
  • Fixes for the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test for certain systems. The optimisation precision has been increased, and the value checking precision has been decreased. This change is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system. This may be related to 32-bit numpy 1.6.2 versus later numpy versions causing precision differences.
  • Converted all the extern.numdifftools modules using the dos2unix program.
  • Updated the Python 2 to Python 3 migration document to be more current.
  • Small edit of the docs/devel/2to3_checklist document.
  • Expanded the Python 2 to 3 conversion document to list the 2to3 command individually.
  • The ImportErrors in unit tests are now correctly handled by the relax test suite. If an ImportError occurred, this was previously killing the entire test suite.
  • The target_function.relax_fit module unit tests are now skipped if the C module is not compiled.
  • Expanded the Python 2 to 3 conversion document.
  • Small update to the 2to3_checklist document - the print statement conversion has been added.
  • The lib.errors module is now importing lib.compat.pickle for better Python 2 and 3 support. This shifts the compatibility code from lib.errors into lib.compat so that the 2to3 program will not touch the lib.errors module.
  • Better Python 3 compatibility in some test suite shared data profiling scripts. These changes invert the logic, importing the Python 3 builtins module and aliasing xrange() to range(), and passing if an ImportError occurs. The code will now no longer be modified by the 2to3 program. A sketch of this pattern is given after this list.
  • Unicode fixes for the "\u" string in "\usepackage" in the module docstring. This requires escaping as "\\usepackage" to avoid the unicode character '\u'.
  • The lib.check_types module now imports io.IOBase from the lib.compat module. This is to shift more Python 2 vs. 3 compatibility into lib.compat and out of all other modules.
  • Python 3 improvements - changed how the builtins.unicode() function, absent in Python 3, is handled. The aliased builtins.str() function is now referenced as lib.compat.unicode(). The Python 2 __builtin__.unicode() function is also aliased to lib.compat.unicode(). The GUI code using this function now imports it from lib.compat. A sketch of this aliasing is given after this list.
  • Removed the writable base directory check in the dauvergne_protocol auto-analysis. This check was causing the system test to fail if the user does not have write access to the installed relax directory.
  • Expanded the Mac_framework_build_3way document to include matplotlib.
  • Important bug fix for racing causing the GUI to freeze. This is really only seen in the GUI tests on MS Windows systems, as a user could never be fast enough with the mouse. The GUI interpreter flush() method for ensuring that all user functions in the queue have been cleared now calls wx.Yield() to force all wxPython events to also be flushed. This change will avoid random freezing of the relax test suite.
  • Bug fix for the Mf.test_bug_21615_incomplete_setup_failure GUI test on MS Windows systems. The GUI interpreter flush() method needs to be called between the two structure.load_spins user function calls. Without this, the test will freeze on MS Windows. The freezing behaviour is however not 100% reproducible and is dependent on the Windows version and wxPython version.
  • Shifted a number of wx.NewId() calls to the module namespace to conserve IDs. These are for the menus in the main window and in the spin view window.
  • Shifted the wx.NewId() calls for the spectrum list GUI element to the module namespace. These IDs are used for the pop up menus. The change avoids repetitive calls to wx.NewId() every time a right click occurs, conserving wx IDs so that they are not exhausted when running the test suite or running the GUI for a long time.
  • More shifting of wx.NewId() calls for popup menus to module namespaces to conserve IDs.
  • Converted all of the GUI wizard button IDs to -1, as they are currently unused. This should conserve wx IDs, especially in the test suite.
  • Shifted the main GUI window toolbar button wx IDs to the module namespace. This has no effect apart from better organising the code.
  • Shifted the relax controller window popup menu wx IDs to the module namespace. This is simply to better organise the code to match the other GUI module changes.
  • Menus created by the gui.components.menu.build_menu_item() now default to the wx ID of -1. This is to conserve wx IDs. If the calling code does not provide the ID, there is no need to grab one from the small pool of IDs.
  • Shifted the spin viewer GUI window toolbar button wx IDs to the module namespace. This should conserve wx IDs as the window is created and destroyed, as only 2 IDs will be taken from the small pool for the entire lifetime of the program.
  • Shifted all of the wx.NewId() calls for the new analysis wizard into the module namespace. This greatly reduces the number of wx IDs used by the GUI, especially in the test suite. Instead of grabbing 8 IDs from the small pool every time the new analysis wizard is created, only 8 IDs will be used for the lifetime of the program. A sketch of this module-level ID pattern is given after this list.
  • Another large wx ID saving change. The ID associated with the special accelerator table that allows the ESC button to close relax wizards is now initialised once in the module namespace, and not each time a wizard is created.
  • A small wx ID conserving change - the 'Execute' button in the analysis tabs now uses the ID of -1. A unique ID is not necessary and is unused.
  • The user function class menus no longer have unique wx IDs, as these are unnecessary. This conserves the small pool of unique wx IDs, as the spin viewer window is created and destroyed.
  • Bug fix for the structure.load_spins user function new from_mols argument. This was incorrectly using the pipe_control.pipes.pipe_names() function to obtain its default values in the GUI (although this is not currently used). The result was a non-fatal error message on Mac OS X systems of "Python[1065:1d03] *** __NSAutoreleaseNoPool(): Object 0x3a3944c of class NSCFString autoreleased with no pool in place - just leaking".
  • Added a debugging Python version check to the devel_scripts/memory_leak_test_relax_fit.py script. This prevents the script from being executed with a normal Python binary.
  • Created the blacklisted Noe.test_noe_analysis_memory_leaks GUI test. This long test can be manually run to help chase down memory leaks. This can be monitored using the MS Windows task manager, once the 'USER Objects' column is shown. If the USER Objects count reaches 10,000 in Windows, then no more GUI elements can be created and the user will see errors.
  • Added a printout to the Noe.test_noe_analysis_memory_leaks GUI test to help with debugging.
  • Improved debugging printouts for the Noe.test_noe_analysis_memory_leaks GUI test.
  • Small fix for the GUI analysis deletion method to prevent racing in the GUI tests.
  • Redesigned how wizards are destroyed in the GUI. The relax wizard Destroy() method is now overridden. This allows the buttons in the wizard to be properly destroyed, as well as all wizard pages. This should remove a lot of GUI memory leaks.
  • Created the General.test_new_analysis_wizard_memory_leak blacklisted GUI test. This will be used to check for memory leaks in the new analysis wizard.
  • Removed an unused dictionary from the GUI wizard object.
  • Added a wx.Yield() before destroying the new analysis wizard via menu_new(). This is to avoid racing which can be triggered in the test suite.
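The single- versus multi-model position handling mentioned in the structure.load_spins entry above can be illustrated with a small helper. This is only a minimal sketch of the idea; the helper name is hypothetical and is not relax code.

    def normalise_position(pos):
        """Return one [x, y, z] position per model.

        atom_loop() returns a list of lists when multiple models are present,
        but a flat [x, y, z] list for a single model.
        """
        if isinstance(pos[0], (list, tuple)):
            return [list(p) for p in pos]
        return [list(pos)]

    print(normalise_position([1.0, 2.0, 3.0]))                     # [[1.0, 2.0, 3.0]]
    print(normalise_position([[1.0, 2.0, 3.0], [1.5, 2.5, 3.5]]))  # two models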
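The inverted Python 2/3 logic described in the profiling script entry above follows a common pattern. A minimal sketch of the assumed pattern, not the exact relax code:

    try:
        import builtins            # Python 3
        xrange = builtins.range    # alias so that xrange() exists everywhere
    except ImportError:
        pass                       # Python 2 - the native xrange() built-in is used

    print(sum(i for i in xrange(10)))   # 45 on both Python 2 and 3

Because no Python 2-only syntax remains, the 2to3 program leaves such a script untouched.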
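The unicode() aliasing described in the lib.compat.unicode() entry above can be sketched as follows (an assumed simplification of the compatibility approach):

    try:
        from __builtin__ import unicode   # Python 2: the real unicode() function
    except ImportError:
        unicode = str                     # Python 3: alias str() instead

    print(unicode(42))   # '42' on both Python versions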
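The wx ID conservation described in the entries above amounts to allocating IDs once at module import time and reusing them for every menu or wizard instance. A hedged sketch of the pattern with illustrative names, not relax code:

    import wx

    # Allocated once for the lifetime of the program, not on every right click.
    MENU_ID_OPEN = wx.NewId()
    MENU_ID_CLOSE = wx.NewId()

    def build_popup_menu(parent, on_open, on_close):
        """Build a popup menu reusing the module-level wx IDs."""
        menu = wx.Menu()
        menu.Append(MENU_ID_OPEN, "Open")
        menu.Append(MENU_ID_CLOSE, "Close")
        parent.Bind(wx.EVT_MENU, on_open, id=MENU_ID_OPEN)
        parent.Bind(wx.EVT_MENU, on_close, id=MENU_ID_CLOSE)
        return menu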


relax 3.3.1


relax 3.3.0


relax 3.2 series

relax 3.2.3

  • Added proper sectioning to the release checklist document.
  • Added the upload script to the release checklist document.
  • Modified the Sequence GUI input element used for the user function list arguments. The first column is now of fixed width when titles are supplied. Previously when supplying titles, the width would be tiny and no text would be visible.
  • Added titles for all 3D coordinate user function arguments. This is for the Sequence GUI input element, and affects the frame_order.average_position, n_state_model.CoM and paramag.centre user functions.
  • The compilation of the C modules now respects the user defined environment. This is the patch from Justin attached to bug #22145. It has been modified to include a comment and remove a double empty line.
  • Bug fix for the previous change making the compilation of the C modules respect the user defined environment. The problem was that on Mac OS X (as well as other systems) these environmental variables were not defined, hence the scons commands would all fail with a KeyError and traceback. Now the keys are checked for in the os.environ dictionary before they are used. A sketch of this check is given after this list.
  • Fix for the wxPython link in the installation chapter of the manual. This was pointing to the scipy website for some reason.
  • Changed the Python readline link for MS Windows in the installation chapter of the manual. This now points to https://pypi.python.org/pypi/pyreadline as the iPython link is broken.
  • Implemented system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster. This is to catch the wrong unpacking of R2A0 and R2B0 when performing a clustered full dispersion model analysis. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
  • Extended system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster for B14 full model. This is to catch the wrong unpacking of R2A0 and R2B0 when performing a clustered full dispersion model analysis. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
  • Extended system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster for NS CPMG 2-site 3D full model. This is to catch the wrong unpacking of R2A0 and R2B0 when performing a clustered full dispersion model analysis. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
  • Extended system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster for NS CPMG 2-site star full model. This is to catch the wrong unpacking of R2A0 and R2B0 when performing a clustered full dispersion model analysis. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
  • Added synthetic data generator script which created the data to test against. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
  • Split the system test Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster up into separate tests. A setup function setup_bug_22146_unpacking_r2a_r2b_cluster(self, folder=None, model_analyse=None) is now shared by the tests test_bug_22146_unpacking_r2a_r2b_cluster_B14, test_bug_22146_unpacking_r2a_r2b_cluster_CR72, test_bug_22146_unpacking_r2a_r2b_cluster_NS_3D and test_bug_22146_unpacking_r2a_r2b_cluster_NS_STAR. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
  • Modified the profiling script to get closer to the implementation in relax. An additional test function is set up to figure out how to reshape the numpy arrays in the target function. Bug #22146 Unpacking of R2A0 and R2B0 is performed wrong for clustered "full" dispersion models.
  • Updated profiling text for CR72 model. Now it is tested for 3 fields. This is related to: Task #7807: Speed-up of dispersion models for Clustered analysis.
  • Added searching for the PYTHON_INCLUDE_DIR environment variable if Python.h is not found in the standard Python include directory. This can be very handy if one has a Python virtual environment for multiple users. This relates to the wiki page: http://wiki.nmr-relax.com/Epd_canopy. A sketch of this search is given after this list.
  • The lib.compat.norm() replacement function for numpy.linalg.norm() now handles no axis argument. This is to allow the function to be used in all cases where numpy.linalg.norm() is used, while providing compatibility with the axis argument and all numpy versions.
  • Fix for the scons target for compiling the relax manual when using a repository checkout copy. The method for compiling the relax manual was calling the version.revision() function, however this was replaced some time ago by the version.repo_revision variable.
  • Created two unit tests for the lib.io.file_root() function. The second of the tests demonstrates a failure of the function if multiple file extensions are present.
  • Lowered the χ2 value check in the Relax_disp.test_bug_22146_unpacking_r2a_r2b_cluster_NS_STAR system test. This is due to the data being produced on a 32-bit machine and tested on 64-bit machines. The error was: AssertionError: 2.4659455670347743e-05 != 0.0 within 7 places. The reason for this is truncation artifacts.
  • Fix for wrong path testing of Python.h. Python.h would be in PYTHON_PREFIX/include/pythonX.Y/Python.h and not in PYTHON_PREFIX/include/Python.h.
  • Better handling of the control-C keyboard interrupt signal in the relax test suite. This includes two changes. Firstly, the Python 2.7 and higher unittest.installHandler() function is now called, when present, to terminate all tests using the unittest module control-C handler. Secondly, the keyboard interrupt signal is caught in a try-except statement, a message is printed out, and the tests are terminated. This should be an improvement for all systems. A sketch of this handling is given after this list.
  • Adding last profiling information for model CR72.
  • Added a system test for the LM63 3-site model, based on the results folder test_suite/shared_data/dispersion/Hansen/relax_results/LM63 3-site. This should pass, but it does not.
  • Created an initial Relax_disp.test_lm63_3site_synthetic system test. This should have been set up a long time ago. It uses the synthetic noise-free data in the test_suite/shared_data/dispersion/lm63_3site directory which was created for a system test but never converted into one. The test still needs modifications to allow it to pass.
  • Modifications for the Relax_disp.test_lm63_3site_synthetic system test. The r2eff_values.bz2 saved state file has been updated, as it was too old to use in the test. The test has also had a typo bug fixed and the data pipe name updated. The test now also checks all of the optimised values.
  • Removed system test test_hansen_cpmg_data_to_lm63_3site. This was a temporary implementation and has been replaced with system test Relax_disp.test_lm63_3site_synthetic.
  • Fixes for all of the relaxation dispersion system tests which were failing with the new minfx code. Due to the tuning of the log barrier constraint algorithm in minfx in the commit at http://article.gmane.org/gmane.science.mathematics.minfx.scm/25, many system tests needed to be slightly adjusted. Two of the Relax_disp.test_tp02_data_to_* system tests were also failing as the optimisation can no longer move out of the minimum at pA = 0.5 for one spin (due to the low quality grid search in the auto-analysis).
  • Updated the release checklist document for the new 1.0.7 release of minfx.
  • Fixes for the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test. The pA parameter is no longer tested for one spin as it moves to random values on different operating systems and 32 vs. 64-bit systems. This is because this spin experiences no exchange, as both Δω and kex are zero.
  • Decreased the value checking precision in the Relax_disp.test_hansen_cpmg_data_to_lm63 system test. This is to allow the test to pass on certain operating systems and 32-bit systems.
  • Modified the precision of the output from the relax_disp.sherekhan_input user function. This is simply to allow the Relax_disp.test_sod1wt_t25_to_sherekhan_input system test to pass on certain 32-bit systems, as the float output to 15 decimal places is not always the same. This system test has been updated for the change.
  • Modified the Relax_disp.test_sprangers_data_to_mmq_cr72 system test to pass on certain systems. This test fails on 32-bit Linux (and probably other systems as well). To fix the test, the kex values are all divided by 100 before checking them to 4 decimal places of accuracy.
  • Improved how the relax installation path is determined in the status object. If the path cannot be found, the current working directory is then checked if it is where relax is installed. This is needed when importing modules outside of relax.
  • Hack to permanently eliminate the ^[[?1034h escape code being produced on Linux systems. This is produced by importing the readline module. The escape code will be sent to STDOUT every time relax is executed, so it will be present in all log files. The problem is the TERM environmental variable being set to 'xterm'. The hack simply sets TERM to an empty string (a sketch of the refined form of this hack is given after this list).
  • More hacks for permanently eliminating the ^[[?1034h escape code being produced on Linux systems. This is a nasty feature of the GNU readline library. It is now also turned off in the dep_check module, suppressing ^[[?1034h in Python scripts which import only parts of relax.
  • Numpy version 1.6 or higher is now required to be able to run relax. This follows from the series of messages: http://www.mail-archive.com/relax-devel@gna.org/msg06288.html, http://www.mail-archive.com/relax-devel@gna.org/msg06289.html, http://www.mail-archive.com/relax-devel@gna.org/msg06327.html, and http://www.mail-archive.com/relax-devel@gna.org/msg06335.html. If too many users complain, maybe this change can be reverted later. This minimal numpy version is needed for many of the speed ups going in the relaxation dispersion and frame order analyses. It is required for the numpy ufunc out arguments and for the numpy.einsum() function. These will likely be used in other analyses in the future for improving the speed of relax, so it might affect users of other analyses later on.
  • Updated the numpy minimal dependency in the installation chapter of the manual to version 1.6.
  • Added better epydoc sectioning to the lib.dispersion.ns_cpmg_2site_expanded module docstring. This is to better separate the original scripts used to document the code evolution.
  • Empty lines are now handled by the lib.structure.pdb_write.remark() function. By supplying the remark as None, empty lines can now be created in the REMARK section of a PDB file. This can be used for nicer formatting.
  • Fixes for the Diffusion_tensor system tests due to the recent PDB file changes. Prior to the comparison of the generated PDB files, all REMARK PDB lines are now stripped out.
  • Fixes for all system tests failing due to the expanded and improved PDB REMARK section. The system tests now remove all REMARK records prior to comparing file contents. The special strip_remarks() system test method has been created to simplify the stripping process.
  • Fix for the software verification tests. The recent expansion and improvements of the REMARK records created by the internal structural object PDB writing method imported the relax version to place this information into the PDB files. However this breaks the relax library design, as shown by the verification tests. Instead the relax version information is being taken from the lib.structure.internal.object.RELAX_VERSION variable. This defaults to None, however the version module now sets this variable directly when it is imported so that it is always set to the current relax version when running relax.
  • General Python 3 fixes via the 2to3 script.
  • Removed the lib.compat.sorted() function which was providing Python 2.3 compatibility. For a while now, relax has been unable to run on Python versions less than 2.5. Therefore there is no use for having this replacement function for Python ≤ 2.3 which was being placed into the builtins module.
  • Python 3 fixes for the entire codebase using the 2to3 script. The command used was: 2to3 -j 4 -w -f xrange .
  • The internal structural object add_molecule() and has_molecule() methods are now model specific. This allows for finer control of the structural object.
  • Created the new lib.structure.files module. This currently contains the single find_pdb_files() function which will be used to find all *.pdb, *.pdb.gz and *.pdb.bz2 versions of the PDB file in a given path.
  • Fix for the breakage of the relax help system. This was reported at http://thread.gmane.org/gmane.science.nmr.relax.devel/6481. The problem was that the TERM environmental variable was turned off to avoid the GNU readline library on Linux systems emitting the ^[[?1034h escape code. See the message at http://thread.gmane.org/gmane.science.nmr.relax.devel/6481/focus=6489 for more details. However the Python help system obviously requires this environmental variable. Now only if the TERM variable is set to 'xterm' will it be reset, and to 'linux' instead of the blank string "". This does not affect any relax releases.
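The environment variable fix described in the C module compilation entry above amounts to checking os.environ before copying keys. A minimal sketch under that assumption, not the actual relax scons code:

    import os

    def copy_user_environment(env, keys=('CC', 'CFLAGS', 'LDFLAGS')):
        """Copy compiler variables into the build environment only if they are defined."""
        for key in keys:
            if key in os.environ:
                env[key] = os.environ[key]
        return env

    print(copy_user_environment({}))   # {} if none of the variables are set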
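The Python.h search described in the PYTHON_INCLUDE_DIR entry above can be sketched as follows. This is an assumed simplification; the real scons logic may differ.

    import os
    from distutils import sysconfig

    def find_python_h():
        """Return the directory containing Python.h, or None if it cannot be found."""
        candidates = [sysconfig.get_python_inc()]
        if 'PYTHON_INCLUDE_DIR' in os.environ:
            candidates.append(os.environ['PYTHON_INCLUDE_DIR'])
        for path in candidates:
            if os.path.isfile(os.path.join(path, 'Python.h')):
                return path
        return None

    print(find_python_h())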
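The control-C handling described above combines the unittest handler with a fallback try-except. A hedged sketch with illustrative names, not the exact relax test suite code:

    import sys
    import unittest

    def run_suite(suite):
        """Run a test suite with graceful control-C handling."""
        if hasattr(unittest, 'installHandler'):   # Python 2.7 and higher
            unittest.installHandler()
        try:
            return unittest.TextTestRunner().run(suite)
        except KeyboardInterrupt:
            sys.stderr.write("Test suite terminated by the user.\n")
            sys.exit(1)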
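The ^[[?1034h hack described above, in the refined form from the help system fix, only rewrites TERM when it is set to 'xterm'. A minimal sketch:

    import os

    # The escape code is emitted by GNU readline when TERM is 'xterm', while the
    # Python help system still needs TERM to be set to something sensible.
    if os.environ.get('TERM', '') == 'xterm':
        os.environ['TERM'] = 'linux'

    import readline   # no ^[[?1034h escape code is emitted now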


relax 3.2.2

  • Small speed up for all the isotropic cone and pseudo-elliptic cone frame order models. The vector length calculation for the numeric PCS integration has been simplified and shifted outside of a loop to take advantage of the speed of numpy.
  • All three file arguments for the pymol.frame_order user function are now optional.
  • Updated all the API documentation links in the dispersion chapter of the manual. These were pointing to http://www.nmr-relax.com/api/3.1/ whereas they should now point to http://www.nmr-relax.com/api/3.2/.
  • Modified a printout in the 'devel_scripts/code_validator' script. This is to clarify that the first method of a class does not need two preceding empty lines.
  • Shifted some functions from lib.structure.geometric into their own modules. The angles_regular() and angles_uniform() functions are now in the lib.structure.angles module, and get_proton_names() is now in lib.structure.conversion.
  • Deletion of the pipe_control.structure.main.create_cone_pdb() function. This is only used in the frame order analysis, but has been made redundant by the lib.structure.represent.cone.cone() function.
  • Completed the frame_order.pdb_model user function backend for the frame order PDB representation. Most of this backend, including the axes and cone representations, had been broken for quite a while and were being skipped with an early return statement. This has now been made functional and a few fixes have been made. For the 'rotor' and 'free rotor' model, the neg_cone argument is now ignored so that only one model is produced in the final frame order PDB representation file. For all other models, the rotor representation is no longer centred to the point on axis closest to the centre of mass, as the pivot is unambiguously defined. The rotor representation has also been made larger in these models so that it is outside of the cone, and the propeller blades are now staggered.
  • Modified py_type from "list" to "float_array" in uf_object type in user function dx.map. Bug #22035 The dx.map user function is broken in the GUI.
  • Added py_type "list_val_or_list_of_list_val" to be handled in GUI uf_objects. Bug #22035 The dx.map user function is broken in the GUI.
  • Modified the frame order constraints so that coneθx ≤ coneθy. The linear_constraints() function docstring has been updated to include this constraint.
  • Set dim=4 when setting chi surface level in user function dx.map.
  • Fix for the n_state_model.cone_pdb user function for the recent internal structural object changes. The cone arguments should now be called cone_obj.
  • Renamed the relax_disp.set_grid_r20_from_min_r2eff user function to relax_disp.r20_from_min_r2eff. This follows from the proposal at http://thread.gmane.org/gmane.science.nmr.relax.devel/5957.
  • Modification to the Sequence_2D GUI element used for some user function windows. The selection_win_show() method has been redefined, as the parent method from the Sequence element is specific for the 1D sequence module. The open_dialog() method has also been modified to use the new selection_win_show(), as well as the parent Sequence class selection_win_data() method.
  • Created the User_functions.test_structure_rotate GUI tests. This is to catch bug #22100, the rotation argument for the structure.rotate user function cannot be changed in the GUI, as an AttributeError is raised.
  • Moved py_type "list_val_or_list_of_list_val" to 2D sequence types.
  • Added dim dimensions to match the {x, y, z} positions for GUI input in user function dx.map.
  • Modified the User_functions.test_structure_rotate GUI test to change and check the rotation matrix.
  • Some more fixes for the User_functions.test_structure_rotate GUI test. The open_dialog() method cannot be used, as it deletes the window at the end. Instead the selection_win_show() and selection_win_data() method combination is used.
  • Expanded the User_functions.test_structure_rotate GUI test. This is to more extensively check the 'float_matrix' user function argument type in the GUI.
  • Modified the dim dimensions to (None, 3) to allow the user to change number of points in the GUI. This is for the user function dx.map.
  • Simplified the User_functions GUI tests. The exec_uf_pipe_create() method has been created to simplify the data pipe creation in the tests.
  • Expanded the User_functions.test_structure_rotate GUI test. The rotation matrix argument checks for the Sequence_2D GUI element have been expanded to check that setting nothing (blank element) returns nothing (None). The other checks have also been slightly modified.
  • Expanded the User_functions.test_structure_rotate GUI test to catch more problems. Now the rotation matrix value in the user function window is set to a series of invalid values to test if the Sequence_2D GUI element will handle the rubbish input. This is to mimic user errors.
  • Created the is_list() and is_list_of_lists() functions for the lib.check_types module.
  • Clean up of the User_functions.test_structure_rotate GUI test. The invalid value check is simpler and the Sequence_2D GUI object return value is now checked to be None.
  • Expanded the User_functions.test_structure_rotate GUI test once more. This time the setting of invalid values in the Sequence_2D element itself is now checked. For example, for the rotation matrix of the structure.rotate user function, if a matrix element is set to a string, a NameError is raised.
  • Created the User_functions.test_dx_map GUI test. This extensively checks the 'point' argument for the dx.map user function GUI window. This is to catch bug #22102, the point argument of the dx.map user function being incorrect in the GUI.
  • Modified the User_functions.test_dx_map GUI test to catch another problem with the Sequence_2D element.
  • Fixes for the frame order PDB presentation in the frame_order.pdb_model user function backend.
  • Expanded the User_functions.test_dx_map GUI test once again. The new test is to set 2 valid points in the wizard, open and close the Sequence_2D window (twice), and check that the points come back.
  • Increased the width of the first column of the Sequence_2D GUI element for variable lists. This is so the column title "Number" will fit.
  • Added list titles for the dx.map user function point argument. This is so that the Sequence_2D GUI element will have column titles of 'X coordinate', 'Y coordinate', and 'Z coordinate'.
  • The self.variable_length flag is now used throughout the Sequence GUI element.
  • The self.variable_length flag is used in one more spot in the Sequence_2D GUI element.
  • Created the User_functions.test_structure_add_atom GUI test. This is used to check the operation of the Sequence GUI element via the 'pos' argument of the structure.add_atom user function. This is a list fixed to 3 elements.
  • Titles are now handled and set in the Sequence GUI element. The titles will replace the numbering of 1 onwards in the first column of the GUI element.
  • Small fix for switched indices in the new User_functions.test_structure_add_atom GUI test.
  • Modified the 'pos' argument of the structure.add_atom user function. The argument is now a list of fixed length of 3, and it has the titles 'X coordinate', 'Y coordinate', and 'Z coordinate' which are shown in the GUI.
  • Created the User_functions.test_spectrum_read_intensities GUI test to catch bug #22105. The problem is that a single file name is split up into many files when the file selection button is clicked, one for each character of the file name.
  • Fix for the User_functions.test_spectrum_read_intensities GUI test. A valid value was being checked as invalid.
  • Shifted all wildcards used in GUI file selection dialogs into the new user_functions.wildcard module. These have now all been standardised, and expanded to include more capitalisation combinations and to include more *.* options.
  • Created a file selection wildcard for use in the GUI for selecting peak lists. This is used in the four user functions which read peak lists.
  • Changed all *.* GUI file selection wildcards to *.
  • Huge speedup for the CR72 model. Task #7793 Speedup of dispersion models. The system test Relax_disp.test_cpmg_synthetic_cr72_full_noise_cluster drops from 7 seconds to 4.5 seconds. The speedup is achieved by not checking single values in the R2eff array for math domain errors, but calculating all steps and then performing a single check for finite values. If just one non-finite value is found, the whole array is returned with a large penalty of 1e100. This keeps all calculations in fast numpy array operations (a sketch of this pattern is given at the end of this list).
  • Fix for system test test_cpmg_synthetic_dx_map_points. Task #7793 Speedup of dispersion models.
  • Critical fixes for the system test Relax_disp.test_hansen_cpmg_data_missing_auto_analysis. Task #7793 Speedup of dispersion models. It is suspected that when relax touched boundary values which caused math domain errors, the error catching created local minima or interfered with the simplex search algorithm.
  • Speedup of the TSMFK01 model. Task #7793 Speedup of dispersion models. The speedup is achieved by not checking single values in the R2eff array for math domain errors, but calculating all steps and then performing a single check for finite values. If just one non-finite value is found, the whole array is returned with a large penalty of 1e100. This keeps all calculations in fast numpy array operations.
  • Huge speedup of the B14 model. Task #7793 Speedup of dispersion models. Timings for the system tests: test_baldwin_synthetic 2.626s -> 1.990s, test_baldwin_synthetic_full 18.326s -> 13.742s. The speedup is achieved by not checking single values in the R2eff array for math domain errors, but calculating all steps and then performing a single check for finite values. If just one non-finite value is found, the whole array is returned with a large penalty of 1e100. This keeps all calculations in fast numpy array operations.
  • Speedup of the TP02 model. Task #7793 Speedup of dispersion models. The changes in system test timings are: test_curve_type_r1rho_fixed_time 0.057s -> 0.049s, test_tp02_data_to_ns_r1rho_2site 10.539s -> 10.456s, test_tp02_data_to_tp02 8.608s -> 5.727s. The speedup is achieved by not checking single values in the R1ρ array for math domain errors, but calculating all steps and then performing a single check for finite values. If just one non-finite value is found, the whole array is returned with a large penalty of 1e100. This keeps all calculations in fast numpy array operations.
  • Huge speedup for the TAP03 model. Task #7793 Speedup of dispersion models. The change in system test timing is: test_tp02_data_to_tap03 13.869s -> 7.263s. The speedup is achieved by not checking single values in the R1ρ array for math domain errors, but calculating all steps and then performing a single check for finite values. If just one non-finite value is found, the whole array is returned with a large penalty of 1e100. This keeps all calculations in fast numpy array operations.
  • Speedup of model MP05. Task #7793 Speedup of dispersion models. The change in system test is: test_tp02_data_to_mp05 10.750s -> 6.644s.
  • Speedup of model MMQ CR72. Task #7793 Speedup of dispersion models. Change in system test: test_sprangers_data_to_mmq_CR72 9.892s -> 4.121s.
  • Speedup for model M61. Task #7793 Speedup of dispersion models. Change in speed is: test_m61_data_to_m61 6.692s -> 3.480s.
  • Speedup of model LM63. Task #7793 Speedup of dispersion models. Change in system test was: test_hansen_cpmg_data_auto_analysis 13.731s -> 9.971s, test_hansen_cpmg_data_auto_analysis_r2eff 13.370s -> 9.510s, test_hansen_cpmg_data_to_lm63 3.254s -> 2.080s.
  • Speedup of model IT99. Task #7793 Speedup of dispersion models. Change in speed is: test_hansen_cpmg_data_auto_analysis 9.74s -> 8.330s, test_hansen_cpmg_data_to_it99 4.928s -> 3.138s.
  • Speedup of model DPL94. Task #7793 Speedup of dispersion models. Change in speed is: test_dpl94_data_to_dpl94 19.412s -> 4.427s.
  • Math-domain catching for model B14. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Math-domain catching for model CR72. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Math-domain catching for model: NS CPMG 2-site expanded. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Math-domain catching for model CR72. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. The skipping of test when num_points > 0, is a bad implementation. If such a case should show, it is best to catch the wrong input for the calculations. This is best done with a check before running the calculations.
  • Math-domain catching for model TSMFK01. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Math-domain catching for model TP02. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Math-domain catching for model TAP03. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Math-domain catching for model DPL94. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Math-domain catching for model TAP03. Another check for division by zero inserted.
  • Math-domain catching for model MP05. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Math-domain catching for model IT99. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Removed class object "back_calc" being updated per time point for model LM63. Task #7793 Speedup of dispersion models. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Math-domain catching for model M61. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Math-domain catching for model MMQ CR72. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Align math-domain catching for model CR72 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors before they occur. The catching of errors has to be more careful.
  • Align math-domain catching for model DPL94 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors before they occur. The catching of errors has to be more careful.
  • Align math-domain catching for model IT99 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors before they occur. The catching of errors has to be more careful.
  • Align math-domain catching for model LM63 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors before they occur. The catching of errors has to be more careful.
  • Align math-domain catching for model M61 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors before they occur. The catching of errors has to be more careful.
  • Align math-domain catching for model MP05 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors before they occur. The catching of errors has to be more careful.
  • Align math-domain catching for model TAP03 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors before they occur. The catching of errors has to be more careful.
  • Align math-domain catching for model TP02 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors before they occur. The catching of errors has to be more careful.
  • Align math-domain catching for model TSMFK01 with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors before they occur. The catching of errors has to be more careful.
  • Removing unnecessary math-domain catching for model IT99. Task #7793 Speedup of dispersion models. The denominator is always positive.
  • Align math-domain catching for model NS CPMG 2-site expanded with the trunk implementation. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors before they occur. The catching of errors has to be more careful.
  • Modified unit tests demonstrating edge case 'no Rex' failures of the model NS CPMG 2-site expanded. This is to align with the current return of data in the disp_speed branch. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0.
  • Added 7 unit tests demonstrating edge case 'no Rex' failures of the model DPL94. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0.
  • Unit test _lib/test_ns_cpmg_2site_expanded.py copied to _/test_lm63.py. They are both of CPMG type.
  • Added 7 unit tests demonstrating edge case 'no Rex' failures of the model LM63. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0.
  • Unit test _lib/_dispersion/test_ns_cpmg_2site_expanded.py copied to _lib/_dispersion/b14.py. They are both of CPMG type, and can be re-used.
  • Added 7 unit tests demonstrating edge case 'no Rex' failures of the model B14. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0.
  • Removed unnecessary math domain checking in model B14. The checks were slowing down the code. There is now protection for the edge cases, and a final check before returning values. That should be sufficient.
  • Unit test _lib/_dispersion/test_b14.py copied to _lib/_dispersion/test_CR72.py. They are both of CPMG type, and can be re-used.
  • Copied unit test _lib/_dispersion/* to be reused for other models.
  • Added 8 unit tests demonstrating edge case 'no Rex' failures of the model CR72. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e5.
  • Added the 8th unit test demonstrating edge case 'no Rex' failures of the model B14. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e5.
  • Added the 8th unit test demonstrating edge case 'no Rex' failures of the model LM63. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
  • Small fix for 8 unit tests demonstrating edge case 'no Rex' failures of the model 'ns cpmg_2site_expanded'. The comparison of R2eff is now divided into a special case for kex having large values.
  • Deleted unit test case for lm63 3site.
  • Added 8 unit tests demonstrating edge case 'no Rex' failures of the model M61. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
  • Added the 8th unit test demonstrating edge case 'no Rex' failures of the model DPL94. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange.
  • Added 8 unit tests demonstrating edge case 'no Rex' failures of the model M61b. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
  • Math-domain catching for model M61b. Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These can be found via the --numpy-raise function to the system tests. To make the code look clean, the class object "back_calc" is no longer being updated per time point, but is updated in the relax_disp target function in one go.
  • Modified script to be able to run system test Relax_disp.xxx_test_m61b_data_to_m61b.
  • Added 8 unit tests demonstrating edge case 'no Rex' failures of the model IT99. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e19.
  • Added 9 unit tests demonstrating edge case 'no Rex' failures of the model MMQ CR72. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e5; ΔωH = 0.0.
  • Added 8 unit tests demonstrating edge case 'no Rex' failures of the model MP05. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
  • Added 8 unit tests demonstrating edge case 'no Rex' failures of the model TAP03. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
  • Added 8 unit tests demonstrating edge case 'no Rex' failures of the model TP02. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e20.
  • Added 7 unit tests demonstrating edge case 'no Rex' failures of the model TSMFK01. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0.
  • Copied unit test test_b14.py to test_ns_cpmg_2site_3d.py.
  • Added 8 unit tests demonstrating edge case 'no Rex' failures of the model NS CPMG 2-site 3D. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors, before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e7.
  • Modified the unit tests demonstrating edge case 'no Rex' failures of the model TP02. The catching of errors for the off-resonance R1ρ models was implemented wrong. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This was pointed out in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5938. This is to implement catching of math domain errors before they occur. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0; kex = 1e5.
  • Critical fix for the math domain catching of model TP02. The catching of errors for the off-resonance R1ρ models was implemented wrong. This was pointed out in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5938.
  • Modified unit tests demonstrating edge cases 'no Rex' failures of the model DPL94. This was pointed out in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5938.
  • Modified the unit tests demonstrating edge case 'no Rex' failures of the model MP05. The catching of errors for the off-resonance R1ρ models was implemented wrong. This was pointed out in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5938. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models. This is to implement catching of math domain errors before they occur.
  • Critical fix for the math domain catching of model MP05. The catching of errors for the off-resonance R1ρ models was implemented wrong. This was pointed out in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5938.
  • Modified the unit tests demonstrating edge case 'no Rex' failures of the model TAP03. The catching of errors for the off-resonance R1ρ models was implemented wrong. This was pointed out in the posts http://article.gmane.org/gmane.science.nmr.relax.devel/5938 and http://article.gmane.org/gmane.science.nmr.relax.devel/5944. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models.
  • Critical fix for the math domain catching of model TAP03. The catching of errors for the off-resonance R1ρ models was implemented wrong. This was pointed out in the posts http://article.gmane.org/gmane.science.nmr.relax.devel/5938 and http://article.gmane.org/gmane.science.nmr.relax.devel/5944.
  • Modified unit tests demonstrating edge cases 'no Rex' failures of the model MMQ CR72. This was pointed out in post http://article.gmane.org/gmane.science.nmr.relax.devel/5940. And in post http://article.gmane.org/gmane.science.nmr.relax.devel/5946. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models.
  • Small fix for the math domain catching of model MMQ CR72. This was pointed out in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5940. And in post http://article.gmane.org/gmane.science.nmr.relax.devel/5946.
  • Various spacing fixes in the unit test files under _lib/_dispersion. This is the preparation for merging the disp_speed branch back into trunk. This follows the post http://article.gmane.org/gmane.science.nmr.relax.devel/5948, using the code validator script './devel_scripts/code_validator'.
  • Modified the unit tests with different r20a and r20b values so that they check that the correct value is returned. This is the preparation for merging the disp_speed branch back into trunk. This follows the post http://article.gmane.org/gmane.science.nmr.relax.devel/5948.
  • Modified the unit tests to use the standard population of pA = 0.95, and to correctly convert Δω from ppm to rad/s. This is related to Task #7793 Speedup of dispersion models.
  • Small fix in parameter calculation in unit test _dispersion/test_ns_cpmg_2site_expanded.
  • Increased the maximum kex value to 1e18 in the unit test of lib/ns_cpmg_2site_expanded.py.
  • Increased the maximum kex value to 1e20 in the unit test of lib/ns_cpmg_2site_3d.py.
  • Fix for checking for negative values when all values in the matrix in ns_cpmg_2site_3d.py were converted to positive. This is to implement catching of math domain errors before they occur. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. This is related to Task #7793 Speedup of dispersion models.
  • Removed nested looping of returning back_calc in lib/ns_cpmg_2site_3d.
  • Removed the 8th unit test for model NS CPMG 2-site 3D. This was the catching of errors when kex = 1e20. The model cannot handle these situations, and we need to let it fail.
  • Removed the 8th unit test for model NS CPMG 2-site expanded. This was the catching of errors when kex has high values. The model cannot handle these situations, and we need to let it fail.
  • Fix for differences in system tests which are different from trunk. These were found with the command: diff -bur disp_speed/test_suite/ relax_trunk/test_suite/ | grep -v "Binary files" > diff.txt.
  • Converting back to having back_calc as a function argument to model B14. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models. A sketch of this back_calc API pattern is given at the end of this list.
  • Converting back to having back_calc as a function argument to model CR72. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Converting back to having back_calc as a function argument to model DPL94. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Converting back to having back_calc as a function argument to model IT99. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Converting back to having back_calc as a function argument to model LM63. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Converting back to having back_calc as a function argument to model M61. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Converting back to having back_calc as a function argument to model M61b. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Converting back to having back_calc as a function argument to model MMQ CR72. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Converting back to having back_calc as a function argument to model MP05. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Converting back to having back_calc as a function argument to model NS CPMG 2-site expanded. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Converting back to having back_calc as a function argument to model TAP03. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Converting back to having back_calc as a function argument to model TP02. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Converting back to having back_calc as a function argument to model TSMFK01. This is to clean up the API. There can be no partial measures/implementations in the relax trunk. The problem is that many numerical models can't be optimised further, since they evolve the spin-magnetisation in a matrix. That spin evolution can't be put into a larger numpy array. This is related to Task #7793 Speedup of dispersion models.
  • Created the lib.compat.norm() compatibility function for numpy.linalg.norm(). For numpy 1.8 and higher, the numpy.linalg.norm() function has introduced the 'axis' argument. This is an incredibly fast way of determining the norms of an array of vectors, and it is used by the frame order analysis. However for older numpy versions this causes the frame order analysis, and many corresponding system and GUI tests, to fail. Therefore the new lib.compat.norm() function defaults to numpy.linalg.norm() if the axis argument is supported, or switches to the much slower numpy.apply_along_axis(numpy.linalg.norm, axis, x) call supported by older numpy versions. A sketch of this fallback is given after this release's list.
  • The frame order analysis now uses the lib.compat.norm() replacement for numpy.linalg.norm(). This is to allow for the axis argument on numpy versions before version 1.8, though these older versions will result in slower optimisation of the frame order models.
  • The built in Python range() function is no longer being replaced by xrange(). Replacing builtin.range() with builtin.xrange() on Python 2 was causing problems with Python site-packages which were not Python 3 compliant, including old numpy versions. The original overwriting of range() with xrange() was for both speed and memory conservation. However, profiling of the system tests showed that the total test time did not change significantly. This change may cause problems in certain places in relax on memory constrained computer systems, so it may need to be reverted in the future.
  • The lib.io.open_write_file() function now automatically determines the compression type. This is used by many user functions which create files. The end result for a user is that if they supply a '.gz' or '.bz2' file extension, a gzipped or bzipped file will be produced (see the sketch after this release's list).
  • Removal of the docstring text wrapping in the lib.io module.
  • Expanded and improved the docstring for the relax_disp.r20_from_min_r2eff user function. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/5957. The documentation now covers a number of the uses for this user function. The text has also been lightly edited. To fit all the text into the GUI user function window, the size of the dialog and the text height settings have been changed.
  • Large improvements for the detection of cross-compilation on Mac OS X systems. The tests for different architecture support now follow the ideas discussed in the post http://thread.gmane.org/gmane.science.nmr.relax.devel/5785/focus=5820. In summary, for each architecture a simple C file is created, compiled with 'gcc -arch xyz', and the resultant binary file tested (a rough sketch follows this release's list). To support 64-bit compilation on 32-bit systems, all previously successful architectures are also included in the gcc command. The change allows the 'ppc64' architecture to be reintroduced.
  • Fixed the docstring for the det_arch() method of the sconstruct script. This is for the true cross-compilation detection on Mac OS X.
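
The back_calc entries above describe an output-argument convention: the caller pre-allocates a numpy array and the library function fills it in place, keeping one API for both the analytic and numerical models. The following is a minimal sketch of that calling convention only; the function name, its arguments and the toy dispersion expression are illustrative and are not the relax API.

  # Minimal sketch of the back_calc output-argument convention (hypothetical
  # toy model, not the relax API).
  from numpy import array, exp, zeros

  def r2eff_toy_model(r20=None, kex=None, cpmg_frqs=None, back_calc=None):
      """Fill the pre-allocated back_calc array in place with R2eff values."""
      back_calc[:] = r20 + exp(-cpmg_frqs / kex)

  # The caller allocates the storage once and reuses it for every function call.
  cpmg_frqs = array([66.7, 133.3, 266.7])
  back_calc = zeros(3)
  r2eff_toy_model(r20=10.0, kex=1000.0, cpmg_frqs=cpmg_frqs, back_calc=back_calc)
  print(back_calc)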
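The lib.compat.norm() entry above describes a feature-detection fallback for the numpy.linalg.norm() 'axis' argument. Below is a simplified sketch of that logic; it mirrors the description only and is not the relax lib.compat source.

  # Sketch of the axis-argument fallback: detect support once, then dispatch.
  from numpy import apply_along_axis, array
  from numpy.linalg import norm

  try:
      norm(array([[1.0, 0.0], [0.0, 1.0]]), axis=1)
      AXIS_SUPPORT = True
  except TypeError:
      AXIS_SUPPORT = False

  def compat_norm(x, axis=-1):
      """Return the norms of an array of vectors, working on old numpy too."""
      if AXIS_SUPPORT:
          return norm(x, axis=axis)
      # Much slower path for numpy versions before 1.8.
      return apply_along_axis(norm, axis, x)

  print(compat_norm(array([[3.0, 4.0], [1.0, 0.0]])))  # [5. 1.]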
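The lib.io.open_write_file() entry above describes choosing the compression from the file extension. A standalone sketch of that behaviour, using a hypothetical helper rather than the relax implementation:

  # Sketch of extension-based compression selection.
  import bz2
  import gzip

  def open_write_file_demo(file_name):
      """Open a file for writing, compressing based on the file extension."""
      if file_name.endswith('.gz'):
          return gzip.open(file_name, 'wt')
      if file_name.endswith('.bz2'):
          return bz2.open(file_name, 'wt')
      return open(file_name, 'w')

  # A '.bz2' extension produces a bzip2 file, '.gz' a gzipped file.
  with open_write_file_demo('results.bz2') as file:
      file.write('R2eff data\n')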
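The Mac OS X cross-compilation entry above describes compiling a small C file per architecture and testing the resulting binary. The rough, standalone sketch below illustrates that idea with subprocess calls; it is not the sconstruct det_arch() code and the architecture list is only an example.

  # Rough sketch of per-architecture compilation testing on Mac OS X.
  import os
  import subprocess
  import tempfile

  def supported_archs(candidates=('i386', 'x86_64', 'ppc', 'ppc64')):
      """Return the architectures for which 'gcc -arch' produces a working binary."""
      archs = []
      with tempfile.TemporaryDirectory() as tmp:
          source = os.path.join(tmp, 'test.c')
          with open(source, 'w') as file:
              file.write('int main(void) { return 0; }\n')
          for arch in candidates:
              binary = os.path.join(tmp, 'test_' + arch)
              # Include all previously successful architectures as well, so that
              # 64-bit targets can also be built on 32-bit systems.
              cmd = ['gcc']
              for name in archs + [arch]:
                  cmd += ['-arch', name]
              cmd += [source, '-o', binary]
              compiled = subprocess.call(cmd, stderr=subprocess.DEVNULL) == 0
              if compiled and subprocess.call([binary]) == 0:
                  archs.append(arch)
      return archs

  print(supported_archs())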


relax 3.2.1

  • Punctuation fixes throughout the CHANGES document.
  • Modified the system test Relax_disp.test_cpmg_synthetic_ns3d_to_cr72 to catch bug #22017: LinAlgError, for all numerical CPMG models. The system test was renamed from test_cpmg_synthetic_cr72 to test_cpmg_synthetic_ns3d_to_cr72, to reflect which model creates the data and which model fits the data.
  • Modified the cpmg_synthetic script to first create all time structures before doing the back-calculation. Bug #22017: LinAlgError, for all numerical CPMG models. The numerical models need all time points defined in the setup to be present when calculating.
  • Renamed a system test to test_cpmg_synthetic_ns3d_to_cr72_noise_cluster. The model that creates the data has been changed to a numerical model. Bug #22017: LinAlgError, for all numerical CPMG models.
  • Implemented system test Relax_disp.test_cpmg_synthetic_ns3d_to_b14. Bug #22021: model B14 shows bad fitting to data. This is to catch model B14 showing bad fitting behaviour.
  • Parameter precision increase for the system test Relax_disp.test_baldwin_synthetic. The correct implementation of the trigonometric functions allows for higher precision. Bug #22021: model B14 shows bad fitting to data. Duplicate lines of code were also removed.
  • Code cleanup in system test Relax_disp.test_baldwin_synthetic_full. Bug #22021: model B14 shows bad fitting to data. The precision could also be increased by 1 digit.
  • Code cleanup in system test Relax_disp.test_baldwin_synthetic. Bug #22021: model B14 shows bad fitting to data. Removing many unnecessary lines of code.
  • Added 7 unit tests demonstrating edge case 'no Rex' failures of the NS CPMG 2-site expanded model. This follows from the ideas in the post http://article.gmane.org/gmane.science.nmr.relax.devel/5858. These tests cover all parameter value combinations which result in no exchange: Δω = 0.0; pA = 1.0; kex = 0.0; Δω = 0.0 and pA = 1.0; Δω = 0.0 and kex = 0.0; pA = 1.0 and kex = 0.0; Δω = 0.0, pA = 1.0, and kex = 0.0. Such tests should be replicated for all dispersion models (a sketch of the pattern is given after this release's list).
  • Created the Structure.test_bug_22069_structure_delete_helix_attribute system test. This is to catch bug #22069, the failure of the structure.delete user function with "AttributeError: Internal instance has no attribute 'helices'".
  • Created the Structure.test_bug_22070_structure_superimpose_after_deletion system test. This is to catch bug #22070, the failure of the structure.superimpose user function after deleting atoms with structure.delete.
  • Added some checks to the Structure.test_bug_22070_structure_superimpose_after_deletion system test. These tests reveal the real problem - that the atoms of the second model have not been removed by the structure.delete user function.
  • Added git-svn support for the relax version information module. This allows the subversion revision number and repository URL to be displayed on program startup, so that it is stored in log files. This is very useful for debugging purposes.
  • Improvements for the git-svn support in the relax version module. Python 3 is now correctly handled and the URL is properly extracted from the git repository.
  • Improvement for the unit test printouts when run with the --time command line option. The full unit test name is now printed out, reverting to the old behaviour. However the shortened test names are preserved for the other test suite categories.
  • Created the test_ns_cpmg_2site_expanded_no_rex8() relaxation dispersion unit test. This is a demonstration showing the NS CPMG 2-site expanded model with no exchange when kex = 1e5, i.e. when the motion is too fast for exchange to be observed. This test should be used for all dispersion models to make sure that they handle this edge case correctly as well. This follows from http://article.gmane.org/gmane.science.nmr.relax.devel/5906.
  • Attempt at fixing bug #22071, the relax unit test and system test not functioning. The fix here is that the git commands for showing the current subversion revision number only work when run from the relax base directory or one of its subdirectories. This should now be fixed, as the pipe running the command will first 'cd' to the relax base directory.
  • Another attempt at fixing bug #22071, the relax unit test and system test not functioning. This time the complicated shell command "cd %s; git svn find-rev $(git rev-parse HEAD)" has been replaced with "cd %s; git svn info".
  • Changed most default dispersion parameter values to avoid edge cases where there is no exchange. The Δω parameters were all 0.0 and kex 1e5, both of which result in no exchange. If this is ever used as an optimisation starting point - which it never should be, apart from development, test suite, and debugging purposes - then the optimisation algorithm will have a very hard time recovering. The pA parameter has been changed to 0.90 to set it to a reasonable value while still staying far away from the no exchange condition of pA = 1.0. This follows from http://article.gmane.org/gmane.science.nmr.relax.devel/5917.
  • Fixes for 3 dispersion system tests for the change in default parameter values. The default values are used in the auto-analysis in the test suite to avoid the grid search. The changed values affected the optimisation of two spins from Flemming Hansen's data located at test_suite/shared_data/dispersion/Hansen/, residue 4 used as an example of no exchange and residue 70 used as an example where data is only available at one field. The system test Relax_disp.test_set_grid_r20_from_min_r2eff_cpmg was also modified as it was directly checking these default values.
  • Fix for the Relax_disp.test_cpmg_synthetic_dx_map_points system test. This uses the default parameter values to start the optimisation, therefore the recent change away from edge case 'no Rex' values allows the parameter values stored in ds.dx_clust_val to be correctly optimised.
  • Speed up for the version module when using a repository copy of the code. The repository revision and URL are now stored as module variables, so that the 'svn info' and 'git svn info' commands are only run twice, once for the revision() function and once for the url() function.
  • Large speed up for the relax start up times for svn and git-svn copies of the relax repository. The 'svn info' and 'git svn info' commands are now only executed once, when the version module is first imported. The revision() and url() functions have been merged into the repo_info() function and this is called when the module is imported. This repo_info() function stores the repository revision and URL as the version.repo_revision and version.repo_url module variables. It also catches the case where these variables are already set, so that multiple imports of the module do not cause the repository information to be looked up each time (a sketch of this caching pattern follows this release's list). Previously the revision() and url() functions were called every time a relax state or result file was created, hence for repository copies the 'svn info' or 'git svn info' commands were being called each time. The functions were also called for each interpreter object instantiated, and for each import of the version module.
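
The 'no Rex' unit test entries above list the parameter combinations which must produce a flat dispersion profile. Below is a sketch of how such checks could be replicated for a model, using a toy model function and a representative subset of the combinations; none of this is the relax test suite code.

  # Sketch of 'no Rex' edge case checks with a toy dispersion model.
  from numpy import allclose, zeros

  def toy_model(r20=None, dw=None, pA=None, kex=None, cpmg_frqs=None, back_calc=None):
      """A toy model whose exchange term vanishes if dw = 0, pA = 1 or kex = 0."""
      pB = 1.0 - pA
      back_calc[:] = r20 + pA * pB * dw**2 * kex / (kex**2 + cpmg_frqs**2)

  # Parameter combinations which must give a flat R2eff = R20 profile.
  no_rex_cases = [
      {'dw': 0.0, 'pA': 0.95, 'kex': 1000.0},   # dw = 0
      {'dw': 3.0, 'pA': 1.0,  'kex': 1000.0},   # pA = 1
      {'dw': 3.0, 'pA': 0.95, 'kex': 0.0},      # kex = 0
      {'dw': 0.0, 'pA': 1.0,  'kex': 0.0},      # all three combined
  ]
  cpmg_frqs = zeros(4) + 100.0
  back_calc = zeros(4)
  for params in no_rex_cases:
      toy_model(r20=10.0, cpmg_frqs=cpmg_frqs, back_calc=back_calc, **params)
      assert allclose(back_calc, 10.0)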
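The version module entries above describe caching the repository details as module variables so that 'svn info' is only ever run once. A minimal, standalone sketch of that caching pattern (illustrative only; not the relax version module):

  # Sketch of module-level caching of repository information.
  from subprocess import PIPE, Popen

  repo_revision = None
  repo_url = None

  def repo_info():
      """Look up the repository revision and URL once, then reuse the cached values."""
      global repo_revision, repo_url
      # If the module variables are already set, do not call 'svn info' again.
      if repo_revision is not None:
          return
      pipe = Popen('svn info', shell=True, stdout=PIPE, stderr=PIPE)
      for line in pipe.stdout.readlines():
          text = line.decode()
          if text.startswith('Revision:'):
              repo_revision = text.split()[1]
          if text.startswith('URL:'):
              repo_url = text.split()[1]

  # Called once at import time; later calls and imports are essentially free.
  repo_info()
  print(repo_revision, repo_url)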


relax 3.2.0


relax 3.1 series

relax 3.1.7


relax 3.1.6


relax 3.1.5

  • Updated the interatom.unit_vectors user function description to add the text '3D structure'. This is in response to the http://thread.gmane.org/gmane.science.nmr.relax.user/1547 relax-users mailing list message and the change is to clarify the usage of the user function.
  • Created the Noe.test_bug_21591_noe_calculation_fail system test. This is to catch bug #21591 submitted by Martin Ballaschk. This is the complete failure of the NOE analysis. The peak lists attached to the bug report have been included in the test suite to create the system test.
  • Improvements for the steady-state NOE analysis overfit_deselect() method. The spin deselection which occurs at the start of the calc user function call, used to calculate the NOE, is now clearer. Each deselection condition is now explained in detail and the text is now far more informative. In addition, the special condition of all spins being deselected is now caught. If this happens, a RelaxError is raised to prevent the user from going forwards. This should remove confusion as to why the output file is empty.


relax 3.1.4

  • Created the Frame_order.test_generate_rotor2_distribution system test. This is to test the Frame Order distribution generating base script, used for creating the synthetic Frame Order test data, and to demonstrate a failure in handling back-calculated RDC data. To implement this, the test_suite/shared_data/frame_order/cam/ path has been converted into a Python package (with the addition of the __init__.py files). The base data generation script test_suite/shared_data/frame_order/cam/generate_base.py has also been modified to use the absolute path for the data files and its run() method now accepts the save_path argument to allow the files to be saved into a temporary directory.
  • Fixes for the Frame_order.test_generate_rotor2_distribution system test. The test_suite/shared_data/frame_order/cam/generate_base.py script now saves the program state files into the self.save_path directory, preventing the system test from attempting to save files into the relax test suite directories.
  • Another fix for the Frame_order.test_generate_rotor2_distribution system test. The test_suite/shared_data/frame_order/cam/generate_base.py script no longer prints its progress indicator to sys.__stderr__ but to sys.stderr instead. This avoids the progress text from appearing during the relax test suite execution.
  • Created the Structure.test_bug_21522_master_record_atom_count system test. This is designed to catch bug #21522, the structure.write_pdb user function creating an incorrect MASTER record. This hence also catches bug #21520, the failure of the structure.write_pdb user function when creating the MASTER record due to too many ATOM and HETATM records being present. The test simply creates two structural models, adds one atom, and writes out a PDB file, checking its contents.
  • The structure.write_pdb user function can now handle a file instance for the file argument. This is for the Structure.test_bug_21522_master_record_atom_count system test, to allow a dummy file object to be used. This can also be useful for power users.
  • Created the lib.geometry.vectors.unit_vector_from_2point() function. This is used to quickly calculate the unit vector between two points (see the sketch after this release's list).
  • The lib.structure.represent.rotor.rotor_pdb() function can now handle multiple rotors. Previously this function would fail if called twice with the same structural object.
  • Added the has_molecule() method to the relax internal structural object. This is used to quickly check if a molecule name already exists in the structural object.
  • More improvements for handling multiple rotors in the lib.structure.represent.rotor.rotor_pdb() function. The atom numbering is now better handled.
  • Better support for the writing out of multiple molecules by the structure.write_pdb user function. This is for the internal structural object write_pdb() method. Now each molecule is assigned a different chain ID in the PDB file, and the chain IDs loaded into the structural object are ignored. The chain IDs should however be preserved when using structure.read_pdb followed by structure.write_pdb, without storing the ID. A number of the Structure system tests had to be updated, as now the relax generated PDB files will always write out a chain ID.
  • Large speed up for the internal structural object for when many models are present. The new ModelList.current_models object keeps track of all the models already present in the structural object. This simplifies the checks of the pack_structs() internal structural object method by removing expensive looping. This allows the loading of PDB files to continue to be fast even with many tens or hundreds of thousands of models already loaded.
  • More speed ups for the internal structural object when huge numbers of models are present. Another loop over the structural_data object has been eliminated from the PDB reading load_pdb() method.
  • Another optimisation for the internal structural object for large numbers of models. The ModelList.add_item() method no longer loops over all models to check if a model is already present, instead using the new current_models list.
  • Yet more optimisation for handling large quantities of models in the internal structural object. Now when adding new models to the object, the model_indices and model_list objects are no longer created. This saves much time as the large model_list is now not sorted. A number of structural object methods have been updated to handle the change by switching to the model_loop() method for looping over the models, rather than using the model_indices and model_list objects.
  • The frame order matrix printing function can now output the matrix to any precision. The lib.frame_order.format.print_frame_order_2nd_degree() function now accepts the 'places' argument which allows for higher precision printouts.
  • The behaviour of the rdc.write user function has been changed to output spin ID strings in single quotes. This is to avoid problems with the '#' molecule identifier and the '#' comment character.
  • Fix for the diffusion_tensor.init user function reference in the intro chapter of the manual. This was using a very old and now non-functional syntax.
  • Created the Diffusion_tensor.test_bug_21561_tensor_pdb_failure system test. This is to catch bug #21561, failure of the structure.create_diff_tensor_pdb user function for non-spherical diffusion tensors when no Monte Carlo simulations are present, as reported by Martin Ballaschk.
  • Added the truncated data for creating a system test to catch bug #21562, the failure of the NOE analysis when spectra are replicated. This bug was reported by Dhanas Muthu. This consists of the Sparky peak lists attached to the bug report and the modified 2AT7 PDB file. The data has been truncated to only include residues :12, :13, and :14.
  • Shifted the NOE system test script into the new 'noe' directory.
  • Created the Noe.test_bug_21562_noe_replicate_fail system test. This is to catch bug #21562, the failure of the NOE analysis when spectra are replicated, reported by Dhanas Muthu. This uses the truncated data taken from the files attached to the bug report. The NOE output file is checked to see if the contents are correct.
  • Better support for replicated spectra in the NOE analysis. The saturated and reference peak intensities and errors are now properly averaged. Previously averaging was not used as the number of replicates N cancels in the ratios used for the NOE and error calculation. However this fails when the number of replicates for the saturated spectrum does not match the number of replicates for the reference spectrum. Now any data combination is possible (a sketch of the averaging is given after this release's list).
  • Another fix for the NOE analysis for when replicated spectra have been collected. Variance averaging rather than error averaging is now used for the peak intensity errors. This is important if the errors for each replicated spectrum are different - a case which is rarely encountered as the replicates are almost always used to determine one error for all the replicates.
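
The unit_vector_from_2point() entry above describes a simple vector operation. A sketch of the calculation (an illustration of the operation, not the lib.geometry.vectors source):

  # Sketch of the unit vector between two points.
  from numpy import array
  from numpy.linalg import norm

  def unit_vector_demo(point_a, point_b):
      """Return the unit vector pointing from point_a to point_b."""
      vector = point_b - point_a
      return vector / norm(vector)

  print(unit_vector_demo(array([0.0, 0.0, 0.0]), array([0.0, 3.0, 4.0])))  # [0.  0.6 0.8]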
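The NOE replicate entries above describe intensity averaging combined with variance averaging of the errors, followed by the usual ratio error propagation. Below is a sketch of that handling, assuming the standard ratio error propagation formula; the numbers and helper function are illustrative and this is not the relax NOE backend code.

  # Sketch of replicate averaging for the NOE analysis.
  from math import sqrt

  def average_replicates(intensities, errors):
      """Average replicate intensities, using variance averaging for the errors."""
      intensity = sum(intensities) / len(intensities)
      # Variance averaging: average the squared errors, then take the square root.
      error = sqrt(sum(err**2 for err in errors) / len(errors))
      return intensity, error

  # Different numbers of replicates for the saturated and reference spectra.
  sat, sat_err = average_replicates([1.02e6, 0.98e6], [2.0e4, 3.0e4])
  ref, ref_err = average_replicates([2.00e6], [2.5e4])

  noe = sat / ref
  noe_err = noe * sqrt((sat_err / sat)**2 + (ref_err / ref)**2)
  print(noe, noe_err)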


relax 3.1.3


relax 3.1.2

  • The average_intensity() dispersion function now accepts the offset argument. This is for better support of combined offset and spin-lock varied R1ρ-type data. The argument is then passed into the find_intensity_keys() function.
  • Improved the DPL94 dispersion model description in the manual.
  • Copied a Sparky peak list, to be modified into a Sparky file without an intensity column.
  • Modified the Sparky file to have no columns with intensity values.
  • Implemented the reading of spins from a Sparky list when no intensity column is present. This is an addition for Support Request #3044 - load spins from Sparky list.
  • Created the Relax_disp.test_bug_21460_disp_cluster_fail system test. This is to catch bug #21460 reported by Min-Kyu Cho. The save file added to the repository consists solely of the data for the first residue.
  • Speed ups for the Relax_disp.test_bug_21460_disp_cluster_fail system test. The optimisation precision is not important for demonstrating this bug.
  • Updated the main copyright notice for 2014.
  • Fix for the main copyright notice.
  • Updated the copyright notice visible to the user to 2014.
  • Updated the copyright for the relax GUI splash screen for 2014.
  • Improvement for the relax test suite printout with the --time command line argument flag. The tests printed out now have the package and module names removed, so that only the test name remains. This removes a large amount of text, simplifying the printout.


relax 3.1.1

  • Small improvement for the devel_scripts/log_converter.py script for detecting commit boundaries.
  • Added many small details to the release checklist document. This is for the formatting and editing of the CHANGES file, which is used for the release announcements. Some additional details about the API documentation at http://www.nmr-relax.com/api have been added too.
  • Added sectioning printouts for the relaxation dispersion auto-analysis. This simply tells the user which part of the protocol is currently being performed.
  • Setup for testing the sample_scripts/relax_disp/R1rho_analysis.py sample script. The script was copied into the test_suite/shared_data/dispersion/r1rho_off_res_tp02/ data directory where it will be tested on real data. The 'fake_sequence.in' and 'unresolved' files have been created to allow the script to run. And the script itself has been heavily debugged.
  • All of the relaxation dispersion auto-analysis options are now exposed by the sample scripts. This included the pre_run_dir argument for specifying a directory of results from a non-clustered analysis and the flag for running MC simulations for all models.
  • Added the DATA_PATH variable to the cpmg_analysis.py dispersion sample script. This allows the user to more easily specify a different directory for the files.
  • Docstring improvement for the test_suite/shared_data/dispersion/r1rho_off_res_tp02/R1rho_analysis.py script.
  • Synchronised the test_suite/shared_data/dispersion/Hansen/relax_disp.py with the sample script. This script now matches very closely with the sample_scripts/relax_disp/cpmg_analysis.py sample script. This is for sample script debugging purposes.
  • Created a base data pipe for Flemming Hansen's truncated CPMG data for testing out missing data. The :4 spin is missing just a few data points, whereas the :71 spin is missing all 800 MHz data.
  • Created the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test. This is used to demonstrate a failure in the R2eff model when some data is missing.
  • Expansion and fixes for the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test. The parameters for spin :4 are now being checked, and all the checks updated for the changed data. The parameter values are slightly different as data is now missing and because only 3 spins are used for the error analysis whereas in all other Hansen CPMG data sets the more accurate errors are from all spins.
  • The lib.dispersion.cr72.r2eff_CR72() function is now more robust. Values less than 1.0 are now caught to avoid passing them into the numpy.arccosh() function. This avoids many warning messages on Mac OS X (a sketch of the guard is given after this release's list).
  • Added a Gaussian DFT optimisation log file to the shared data directories. This will be used to test the reading of structural data from Gaussian files.
  • Modified the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test to catch another failure. This is the failure of all numeric models when all data from one magnetic field strength is missing for a spin.
  • Created data for a NS MMQ 3-site (branched) model using cpmg_fit from Dmitry Korzhnev.
  • The relax_disp.r2eff_read_spin user function now really strips comments and empty lines from the file.
  • A big change to the usage of the relax_disp.r2eff_read_spin user function. Now the nu_CPMG frequency or the spin-lock field strength must be set prior to calling this user function. This allows for more flexibility as often the experiment IDs and frequency values in the files do not match to the same number of decimal places. The frequency is no longer read from the file but must be preset.
  • Created a relax script for back calculating R2eff values for the same parameters as cpmg_fit. This is for the NS MMQ 3-site (branched) CPMG dispersion model. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_test_suite.
  • Created the Relax_disp.test_ns_mmq_3site_branched system test. This is for the NS MMQ 3-site (branched) CPMG dispersion model. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_test_suite.
  • Added the NS MMQ 3-site models to the dispersion variables. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
  • Added another Gaussian log file of strychnine, this time with DFT structure optimisation. The file is bzip2 compressed to save space.
  • Created the Structure.test_read_gaussian_strychnine system test. This will be used for implementing and testing the structure.read_gaussian user function.
  • Created the lib.periodic_table module for storing information about the periodic table. This is via the periodic_table object which will have different methods for obtaining different information about an element.
  • Implemented the structure.read_gaussian user function. This will read the final structural data out of a Gaussian log file.
  • Improved the checking of the Structure.test_read_gaussian_strychnine system test. This now checks all the atomic information loaded.
  • Simple fix for the Relax_disp.test_korzhnev_2005_*_data system tests. The CPMG frequencies are now being set up in the setup_korzhnev_2005_data() method.
  • Added support for the NS MMQ 3-site model parameters to the lib.text.gui module. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax.
  • Added the NS MMQ 3-site models to the relax_disp.select_model user function frontend. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_disp.select_model_user_function_front_end.
  • Added support for the NS MMQ 3-site models to the relax_disp.select_model user function back end. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_disp.select_model_user_function_back_end.
  • Added support for the new 3-site exchange dispersion parameters. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_support_for_the_parameters.
  • Removed the brackets from the NS MMQ 3-site (linear) dispersion model name.
  • Renamed the Relax_disp.test_ns_mmq_3site_branched system test to Relax_disp.test_ns_mmq_3site.
  • Fixes for the loop_parameters() dispersion function for the new NS MMQ 3-site model parameters. The new parameters were not being handled by this function.
  • Created the target functions for the NS MMQ 3-site models. This is for the NS MMQ 3-site and NS MMQ 3-site (linear) CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_target_function.
  • Added the R2eff calculating functions for the NS MMQ 3-site models to the relax library. This is for the NS MMQ 3-site and NS MMQ 3-site linear CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_library.
  • Added the NS MMQ 3-site models to the dispersion auto-analysis. This is for the NS MMQ 3-site and NS MMQ 3-site linear CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_auto-analysis.
  • Added the NS MMQ 3-site models to the GUI model list. This is for the NS MMQ 3-site and NS MMQ 3-site linear CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_GUI.
  • Updated the MMQ 2-site model description in the manual. The R2_DQ = R2_ZQ = R20 assumption is now explained.
  • Added the NS MMQ 3-site models to the relax user manual. This is for the NS MMQ 3-site and NS MMQ 3-site linear CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
  • Completed the MMQ 2-site documentation in the manual. The equations for the numeric evolution of SQ, ZQ and DQ data were missing.
  • Huge speed ups of the relaxation dispersion analysis. This is due to the removal of huge inefficiencies in the loop_point(), return_cpmg_frqs() and return_spin_lock_nu1() functions of the specific_analyses.relax_disp.disp_data module. Two new functions, return_cpmg_frqs_single() and return_spin_lock_nu1_single(), have been introduced to pull out the nu_CPMG and spin-lock field strengths for a given experiment and spectrometer frequency. This avoids calling the loop_exp() and loop_frq() functions from within loop_point(), which itself is often called inside a loop_exp() and loop_frq() sequence.
  • Added the results of cpmg_fit minimisation of the cpmg_fit synthetic data for the NS MMQ 3-site model.
  • Fixes for the NS MMQ 3-site dispersion models - the evolution matrix is now correctly constructed.
  • Another fix for the NS MMQ 3-site dispersion models. The creation of the Z-matrix had a copy and paste error in that the heteronuclear chemical shift sign was negated when it should be positive. This was only in one of the two chemical shift numbers.
  • Loosened the chi-squared check of the Relax_disp.test_ns_mmq_3site system test to allow it to pass.
  • Speed up of the Relax_disp.test_ns_mmq_3site system test. The relax_disp.plot_disp_curves user function call is now skipped as it takes too long.
  • Renamed the 'ns_mmq_3site_branched' dispersion test data directory to 'ns_mmq_3site'.
  • Created the Relax_disp.test_ns_mmq_3site_linear system test and modified Relax_disp.test_ns_mmq_3site. The Relax_disp.test_ns_mmq_3site_linear system test uses the old data from the directory test_suite/shared_data/dispersion/ns_mmq_3site/, as this had kAC = 0, now copied into the ns_mmq_3site_linear/ directory. This system test uses the NS MMQ 3-site linear model. The base data generated by cpmg_fit for the Relax_disp.test_ns_mmq_3site system test was modified so that kAC is no longer 0, but set to 1000. This should properly test the NS MMQ 3-site model.
  • Renamed the MMQ 2-site model to NS MMQ 2-site. This is so that the name matches those of the NS MMQ 3-site linear and NS MMQ 3-site models.
  • Renamed all remaining instances of MMQ 2-site to NS MMQ 2-site. This is simply changing variable, method and module names.
  • Removed the MMQ 3-site branched and MMQ 3-site linear models from the to do list in the manual. These two dispersion models are now implemented.
  • Renamed the MQ CR72 dispersion model to MMQ CR72. The model was designed by Korzhnev et al., 2004 for proton-heteronuclear SQ, ZQ, DQ, and MQ data (or MMQ data), so the change is logical as the model is not just for MQ data.
  • Clean up of the NS R1rho 3-site model names in the manual. The word 'branched' has been removed and the notation now matches the NS MMQ 3-site models.
  • Clean up of the parameter lists in the dispersion model table of the manual.
  • The pC parameter constraints are now implemented for the 3-site dispersion models. The new constraints are 0 ≤ pC ≤ pB (a sketch of the constraint matrix form is given after this release's list).
  • Editing of the introduction section of the dispersion chapter of the manual.
  • Added the NS MMQ 3-site parameters to the optimisation section of the dispersion chapter of the manual.
  • Added some R1ρ data from Dmitry Korzhnev's Fyn SH3 domain. This originates from the cpmg_fit software and is published data.
  • Small fix for the documentation of the relax_disp.r2eff_read* user functions. This is for both relax_disp.r2eff_read and relax_disp.r2eff_read_spin.
  • Created the new lib.nmr relax library module. This currently has a few simple functions for converting between ppm units and Hertz or rad/s units (a conversion sketch is given after this release's list).
  • The relax_disp.spin_lock_offset user function now uses the lib.nmr module. This is for converting between ppm and rad/s units.
  • The relax_disp.r2eff_read_spin user function can now handle offset data in the file. If the new offset_col argument is set and disp_point_col is not, then the file being read can contain the spin-lock offset information rather than the spin-lock field strength values. This is only for R1ρ-type data.
  • Implemented a GUI test which catches bug #21076 - when loading a multi-spectra NMRPipe seriesTab file through the GUI, several Error messages occur.
  • Large redesign of the R2eff/R1ρ data structures. The five indices {Ei, Si, Mi, Oi, Di} for the experiment type, the spins of the cluster, the magnetic field strengths, the pulse offsets, and the dispersion points (nu_CPMG or nu1) respectively are now much better defined. The Oi dimension is new and allows for support of R1ρ-type data whereby both different offsets and different spin-lock field strengths have been collected. Previously only one or the other was supported, but not both together. The offset information is now included as part of the spin R2eff/R1ρ key, even if not set. To support this, the specific_analyses.relax_disp.disp_data module now has the new functions loop_exp_frq_offset(), loop_exp_frq_offset_point(), loop_exp_frq_offset_point_time(), loop_frq_offset(), loop_frq_offset_point_key(), loop_offset(), and loop_offset_point(). All of the {Ei, Si, Mi, Oi, Di} dispersion indices throughout the source tree have been changed to ei, si, mi, oi, and di respectively. And the time index ti has also been introduced. These changes hugely simplify the code.
  • The relax_disp.plot_disp_curves user function can now support 150 sets per Grace graph.
  • The relax_disp.plot_disp_curves user function can now support 3000 sets per Grace graph.
  • System test for sequence read expanded to include assertions of correct data. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Added some more files for the Fyn SH3 R1ρ test data. This includes the cpmg_fit input and output files, R1 data files for relax as R1 cannot be optimised yet, and a relax script.
  • Added system test for reading spins from a Sparky list. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Added the interpreter.spectrum.read_spins function. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Created the back end function for the read_spins function. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Fix for a system test. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Extended reading of Sparky files to include residue names. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Expanded system test and made it pass for user function spectrum.read_spins. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Updated the GUI test to check for first ID in list. Fix for bug #21076 - When loading a multi-spectra NMRPipe seriesTab file through the GUI, several Error messages occur.
  • Added the dim keyword to the frontend function for spectrum.read_spins(). Work in progress for Support Request #3044 - load spins from Sparky list. This is to associate data with the spins for up to two dimensions.
  • Implemented system test for reading spins from NMRPipe SeriesTab formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Extended reading of spin residue names from NMRPipe SeriesTab formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Modified the NMRPipe SeriesTab reading to handle residue numbers and names for a two-dimensional list. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Inserted a check for whether a spin already exists before creating it. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Issuing a warning instead of an error when loading spins from a Sparky list where residue names are not present. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Issued a warning instead of an error when loading spin residue names from an NMRPipe SeriesTab formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Changed to use return_spin() to test for the presence of a spin. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Implemented another system test for reading NMRPipe SeriesTab files. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Fix for issuing a warning in reading spins from a NMRPipe SeriesTab formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Fix for issuing a warning when reading spins from a Sparky formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Implemented a system test for reading spin IDs from an NMRView formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Made the reading of an NMRView formatted file return the residue number as an integer instead of a string. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Fix for calling the warn() function. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Extended the error description for reading NMRView files. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Implemented a system test for reading spins from an NMRPipe SeriesTab formatted file whereby the assignment for the second dimension is missing. This would typically be an export from Sparky, converted to NMRPipe format, and processed with SeriesTab. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Fix for reading spins from an NMRPipe SeriesTab formatted file where dimension 2 is missing the residue number and residue name. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Expanded the warning message for a system test. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Modified system test for reading an assignment whereby the second dimension is missing. Work in progress for Support Request #3044 - load spins from Sparky list.
  • If dimension 2 in a SeriesTab formatted file does not contain a residue number and name, it now defaults to those of dimension 1. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Implemented system test for reading spins from an XEasy file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Modified XEasy reading function to pass residue names back. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Copied a SeriesTab file for the implementation of double assignments in Sparky files.
  • Redesign of the CPMG frequency and spin-lock field strength data structures. These now have an extra dimension for the offset so that the values are now experiment, magnetic field strength and offset dependent. If many offsets are present but are variable for each dispersion point, then this saves a lot of calculation time. This mainly affects R1ρ-type data. To better handle this, all of the specific_analyses.relax_disp.disp_data.loop_*() functions have been modified to accept data values rather than indices.
  • Improved the printout of the relax_disp.r2eff_read_spin user function for the R2eff keys.
  • Extended the system test for reading spins from Sparky files with empty residue name+number second dimension assignment. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Modified the Sparky peak list for two dimensional assignment example. This will typically be the export from CcpNmr Analysis. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Implemented a system test for using double assignments in Sparky formatted files. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Extended reading of spins from Sparky files for up to two dimensional assignments. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Added example of CcpNmr analysis exported Sparky file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Added system test for reading CcpNmr Analysis exported Sparky file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Modified the reading of Sparky files when exported from CcpNmr Analysis. The keyword 'Data' is not present here. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Added a system test for using generic file for reading spins. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Modified the generic list to also return spin information when intensity is not present. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Added another system test for returning spins from a generic file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Added residue 4 to the R2eff files for the truncated CPMG data from Flemming Hansen.
  • Added cpmg_fit results to the software comparison table for Flemming Hansen's CPMG data. The cpmg_fit input and log files have been added as well.
  • Shifted the software comparison down a directory so it can be used for all the different data.
  • Added system test for reading chemical shift from NMRPipe SeriesTab file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Implemented reading of chemical shifts from NMRPipe SeriesTab formatted files. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Additional chemical shift reading test for SeriesTab formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Improvements for the find_intensity_keys() dispersion analysis function. This now handles the reference point None being converted to NaN in numpy arrays and the logic is now clearer.
  • Changed some warnings in the dispersion analysis so they only show if R1ρ data is loaded. This is for missing chemical shifts and R1 data.
  • Increased the size of the grid search in the Relax_disp.test_m61_exp_data_to_m61 system test. This should increase the stability of this test.
  • Introduced the eliminate argument for the dispersion auto-analysis. This flag allows model and Monte Carlo simulation elimination to be deactivated.
  • Updated two dispersion scripts in the test data directories to work with the current design.
  • Updated more test suite scripts to call the relax_disp.cpmg_frq user function.
  • The CR72 and MMQ CR72 models are now classified as nested in the dispersion auto-analysis. The grid search for the MMQ CR72 model will therefore be skipped and the parameters taken from the CR72 model (a sketch of the nesting idea is given after this release's list). This will however rarely, if ever, be used.
  • Fix for the relax_disp.plot_disp_curves user function. The interpolated curves now have all invalid points of 1e100 removed from the graph. This allows for reasonable graph scaling.
  • The LM63 and LM63 3-site models are now classified as nested in the dispersion auto-analysis. The grid search for the LM63 3-site model is therefore skipped and the starting parameters for optimisation are set to those of the optimised LM63 model.
  • Updated the relax results for the truncated CPMG data from Flemming Hansen. This includes the new results for the MMQ CR72 model. The analysis uses more model nesting. And the Grace plots now include the interpolation graphs (hence the plots are now bzip2 compressed).
  • Updated the NESSY results for the truncated CPMG data from Flemming Hansen. This now uses the data from all residues to allow for a proper error analysis so the results are comparable to all the other software.
  • Updated and reformatted the dispersion software comparison document.
  • Made a system test pass on Mac OS X 10.9.
  • Complete reworking of the NS R1rho 2-site dispersion model. The original code of Nikolai Skrynnikov and Martin Tollinger has been modified to match the behaviour of Dmitry Korzhnev's cpmg_fit software. The equations from Korzhnev et al., JACS 2005 (http://dx.doi.org/10.1021/ja0446855) have been used for the initial magnetisation and the R' calculation. All equations have been added to the manual to clarify the model.
  • Both relax and cpmg_fit input and output files for the Fyn SH3 R1ρ data have been added. This is for the TP02 and NS R1rho 2-site models. The cpmg_fit results include source code modifications to show the differences between the various 'corrections'. The dispersion software comparison file has been updated to include this data and to show the cpmg_fit versus relax differences.
  • Updated the Relax_disp.test_tp02_data_to_ns_r1rho_2site system test. This is for the fixes of the NS R1rho 2-site dispersion model.
  • Added the Korzhnev 2005 R1ρ constant time correction to the 'To do' section of the dispersion chapter of the user manual.
  • Removed the CR72 model for cpmg_fit from the dispersion software comparison table in the dispersion chapter of the user manual.
  • Removed the CR72 model for GUARDD from the dispersion software comparison table in the dispersion chapter of the user manual. This software, like cpmg_fit, only supports the MMQ CR72 model which gives slightly different results to the original CR72 model when using only SQ CPMG-type data. Hence supporting MMQ CR72 does not automatically mean that the CR72 model can be optimised.
  • Updated the ShereKhan error estimation technique in the dispersion software comparison table. This is for the dispersion chapter of the user manual. Adam Mazur communicated that errors are estimated using the covariance matrix in a private mail.
  • Large rearrangements in the dispersion chapter of the user manual. The MMQ CPMG-type experiments now follow from the SQ CPMG-type experiments, hence the R1ρ models are now listed last.
  • Added a to do entry for the 3-site and N-site analytic R1ρ models listed in Palmer and Massi 2006. This is for the 'To do' section of the dispersion chapter of the user manual.
  • Updated the lib.dispersion.ns_r1rho_2site module docstring to explain the origin of the equations. This includes the Korzhnev 2005 reference where the modifications come from.
  • Created some synthetic data for the NS R1rho 3-site linear dispersion model using cpmg_fit.
  • Added cpmg_fit results for the Fyn SH3 R test suite data using the 3-site numeric solution.
  • Created the Relax_disp.test_ns_r1rho_3site_linear system test. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_test_suite.
  • Added the NS R1rho 3-site models to the dispersion variables. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
  • Added the NS R1rho 3-site models to the relax_disp.select_model user function frontend. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_disp.select_model_user_function_front_end.
  • Changed the order of the experiment types in the relax_disp.select_model user function frontend. The R-type models have been shifted to the end so that the MMQ CPMG-type models are just after the SQ CPMG-type models.
  • Changed the 'CPMG-type' to 'SQ CPMG-type' in the relax_disp.select_model user function frontend.
  • Added support for the NS R1rho 3-site models to the relax_disp.select_model user function back end. This is for the NS R1rho 3-site and NS R1rho 3-site linear CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_disp.select_model_user_function_back_end.
  • Decreased the amount of synthetic data in the ns_r1rho_3site_linear test suite shared data directory. The number of offsets for this NS R1rho 3-site linear model synthetic data has been decreased from 81 points to 21. This is because the large quantities of data slow the test suite down too much.
  • Added a GUI test for reading spins from a spectrum formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Added the GUI key 'new spectrum' to point to 'spectrum.read_spins'. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Added the spectrum.read_spins GUI page for reading spins from a spectrum formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Added radio button for reading spins from a spectrum formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Further added to the GUI test for reading spins from spectrum formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Speed up of the Relax_disp.test_ns_r1rho_3site_linear system test. Half of the data has been commented out, as too much data was being loaded for the test.
  • Created the target functions for the NS R1rho 3-site models. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_target_function.
  • Added the R2eff calculating functions for the NS R1rho 3-site models to the relax library. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_library.
  • Fix for the GUI text string of the radio button for reading spins from a spectrum formatted file. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Bug fix for the new NS R1rho 3-site dispersion models - the Y and Z initial magnetisations were switched. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Debugging.
  • Added cpmg_fit results for the program modified to turn off the PEAK_SHIFT flag. These are the results which should most closely match the relax results. This is for the simulated R1ρ data for the NS R1rho 3-site linear model.
  • Fix for the MODEL_NS_R1RHO_3SITE_LINEAR dispersion variable. The model name was not correct.
  • Turned off the Δω dispersion parameter constraints for the NS R1rho 3-site models.
  • Added the NS R1rho 3-site models to the dispersion auto-analysis. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_auto-analysis.
  • Added the NS R1rho 3-site models to the GUI model list. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_GUI.
  • Removed the pC ≤ pB constraint from the 3-site dispersion models. This is important for the linear models where a violation of this constraint is reasonable. This has been replaced by the pC ≤ pA constraint.
  • Added the NS R1rho 3-site models to the relax user manual. This is for the NS R1rho 3-site and NS R1rho 3-site linear dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
  • Transposed some of the NS R1rho 3-site model evolution matrix elements. These now match the NS R1rho 2-site model.
  • Last fixes for the NS R1rho 3-site dispersion models. These now behave identically to the cpmg_fit program with the PEAK_SHIFT flag disabled. The tilt angle for the initial magnetisation is no longer that for the average offset but that of state A.
  • Fixes for swapped indices in the relaxation evolution matrix for the NS R1rho 3-site dispersion models.
  • Docstring fix for the lib.dispersion.ns_r1rho_3site module.
  • Added the Omega_A,B,C resonance offset parameter definitions to the dispersion chapter of the manual.
  • Updated the relax results for the synthetic data of the NS R1rho 3-site linear dispersion model.
  • Modified the NS R1rho 2-site dispersion model to match the NS R1rho 3-site models. The 6D evolution matrix indices have been rearranged to match the 9D matrix indices. The tilt angle for the initial magnetisation is no longer that for the average offset but that of state A, as was changed for the NS R1rho 3-site models earlier. The system test was therefore updated for the slightly different behaviour.
  • Updated the relax results for the Fyn SH3 R1ρ dispersion data. This is for the recent changes to the NS R1rho 2-site dispersion model.
  • Updated the Relax_disp.test_ns_r1rho_3site_linear system test so it now passes. The chi-squared value is not exactly zero as there are numerical differences between relax and cpmg_fit due to different approaches being used.
  • Added the RMSD determined via showApod for the 69 experiments. Work in progress for Support Request #3083 - Addition of Data-set for R1ρ analysis.
  • Added a system test for the optimisation of the Kjaergaard et al., 2013 off-resonance R1ρ relaxation dispersion experiments using the DPL94 model. Work in progress for Support Request #3083 - Addition of Data-set for R1ρ analysis.
  • Modified the analysis script for the example R1ρ data. Work in progress for Support Request #3083 - Addition of Data-set for R1ρ analysis.
  • Created synthetic R dispersion data for the NS R1rho 3-site model. This is a simple modification of the data for the NS R1rho 3-site linear model. The k_AC parameter was simply changed from 0 to 1000. The cpmg_fit software was used to create the data. Both cpmg_fit and relax results have been updated to the new model.
  • Created the new Relax_disp.test_ns_r1rho_3site system test. This was copied from the Relax_disp.test_ns_r1rho_3site_linear test and modified to use the new NS R1rho 3-site model synthetic data.
  • Fix for wrong use of relax_fit.relax_time instead of relax_disp.relax_time. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Added the ns_r1rho_3site module to the lib.dispersion package __all__ list. This allows the unit tests to pass.
  • Turned off a system test until the release of relax 3.1.1 is over. Work in progress for Support Request #3044 - load spins from Sparky list.
  • Fix for the Relax_disp.test_bug_21076_multi_col_peak_list GUI test. The peak intensity wizard is now closed at the end of the test so that subsequent tests can cleanly operate. Without closing this wizard, launching it a second time in another test will always fail.
  • Capitalised 'Python' in the IO redirection messages.
  • Epydoc docstring fix for the lib.dispersion.ns_mmq_3site.r2eff_ns_mmq_3site_sq_dq_zq() function. This allows the API to be compiled correctly.
  • Bug fix for the dispersion grid_search_setup() optimisation function. This function was not updated for the recent addition of the spin-lock or hard pulse offset dimension in the specific_analyses.relax_disp.disp_data module (and hence all structures used by the dispersion target functions). The loop_exp_frq_point() function call has been replaced by a loop_exp_frq_offset_point() function call to allow the R2eff model parameters to be looped over. For more details, see the thread http://thread.gmane.org/gmane.science.nmr.relax.scm/19685. This solution was mentioned at http://thread.gmane.org/gmane.science.nmr.relax.scm/19685/focus=4859.
  • Removed a printout from the Relax_disp.test_r1rho_kjaergaard GUI test as this is fatal for Python 3.
  • Python 3 fixes for the relax_disp.r2eff_read_spin user function. The check for the dispersion point column now only runs if that argument is set. In addition, the offset column is now also being checked.


relax 3.1.0

  • Started to implement the framework for relaxation dispersion system tests.
  • Copied 'test_suite/system_tests/relax_fit.py' for relaxation dispersion.
  • Started to implement relaxation dispersion system tests.
  • Created the user_functions.relax_disp module by copying user_functions.relax_fit. This file now needs to be modified to suit the needs of relaxation dispersion.
  • Manually created the relax_disp user functions. This is equivalent to Seb's commit for the prompt.relax_disp module. The equivalent changes to the user_functions.relax_disp were hand edited. Added functions to select the experiment type and mathematical model used. These functions allow the user to select the experiment type (cpmg or r1rho) as well as the mathematical model to fit the data (fast or slow).
  • Copied the 'relax_fit.py' script to 'relax_disp.py'. This file, obviously, will need to be modified to suit the needs of the relaxation dispersion code.
  • Modified the script so it will test for fast-exchange curve fitting from CPMG data. Data and functions to treat it are still missing.
  • Added a test for CPMG data in slow-exchange and changed the name of the test for fast-exchange.
  • Copied the 'relax_fit.py' specific functions to 'relax_disp.py'. The code will now need many many many changes to suit the needs of relaxation dispersion.
  • Made a few changes towards a functional relaxation dispersion code. This includes several modifications as well as the addition of the exp_type() function.
  • Moved the relax_time() function to cpmg_frq() and made other small changes. Still much (!) work is needed for this code to be complete.
  • Renamed 'cdp.frq' to 'cdp.cpmg_frqs' so it is not confusing with the spectrometer frequency. Indeed, 'cdp.cpmg_frqs' points to the CPMG pulse train frequency (nu_cpmg).
  • Changed all instances of 'relax_times' to 'cpmg_frqs' and made other small changes.
  • Changed 'relax_time' instances to 'cpmg_frq'.
  • Changed the index name and description. The description might change later to be more appropriate when the code is more mature…
  • Included the setting of the spectrometer frequency and uncommented a few lines of code. Of course, this won't work until the sample data has been introduced and the right names for the different files are input in the system test script…
  • Fixed many formatting errors and made the 'relax_disp' code accessible (pipes, interpreter, etc.). These changes also include reverting to the C code 'math_fns/relax_fit.py', since there is still no such code associated with relaxation dispersion. This allows work on the code to continue without relax crashing and complaining about the lack of a C module named 'relax_disp.py'.
  • Added the user function cpmg_delayT() which allows setting the CPMG constant time delay T used for the analysed dataset. This follows a post at https://mail.gna.org/public/relax-devel/2009-01/msg00027.html.
  • Made a few changes so the cpmg_delayT() function now works.
  • Added the user function cpmg_frq() and added examples to the user function cpmg_delayT().
  • Corrected remaining frq instances to cpmg_frq when appropriate to avoid confusion and corrected a few related things in the system test script.
  • Made the cpmg_frq() function accept only None for the reference spectrum and corrected a typo.
  • Added the parameters for the slow- and fast-exchange regimes.
  • Added the parameters for the slow- and fast-exchange regimes to the data_names() function.
  • Corrected a few formatting issues and added more parameters for the slow- and fast-exchange regimes.
  • Corrected a few more formatting issues and added further parameters for the slow- and fast-exchange regimes. The corrected formatting issues were spotted by Ed in a post at https://mail.gna.org/public/relax-devel/2009-01/msg00045.html.
  • A few more changes to introduce parameters for CPMG relaxation dispersion.
  • Introduced CPMG parameters into the function return_grace_string() and corrected formatting issues.
  • Introduced relaxation dispersion parameters in the function return_data_name().
  • Changed the default cpmg_frq value in cpmg_frq() from 0 to None.
  • Added a relaxation dispersion dataset to the system-test. This was kindly provided by Dr Flemming Hansen (flemming AT pound DOT med DOT utoronto DOT ca) and was previously published in Hansen, Vallurupalli & Kay (2008) J. Phys. Chem. B, 112, 5898-5904. The original format was different and was modified to better suit the way relax handles datasets. Finally, this information was written into a 'readme' file placed in the same directory as the dataset itself to allow referencing and acknowledgments.
  • Added 'Sparky' formatted files to the system-test so the files can be input and development of the branch continued.
  • Changed the format of the CPMG frequency and corrected the names of some input files.
  • Added an unresolved file to meet the script requirements.
  • Copied the script for the fast-exchange regime to the slow-exchange regime.
  • Modified the newly copied script so it is effectively for the slow-exchange regime.
  • Added details to the readme file and changed the directory name where the sample data is located. The directory is now named 'dataset_1-a'. This contains data recorded at 500 MHz. Data recorded at 800 MHz will be put in a directory called 'dataset_1-b'.
  • Created a directory for the data recorded at 800 MHz and put a readme file explaining its origin.
  • Added the relaxation dispersion dataset recorded at 800 MHz in the system-test. This was kindly provided by Dr Flemming Hansen (flemming AT pound DOT med DOT utoronto DOT ca) and was previously published in Hansen, Vallurupalli & Kay (2008) J. Phys. Chem. B, 112, 5898-5904. The original format was different and two formats were made ('generic' and 'sparky'), as for the dataset recorded at 500 MHz.
  • Renamed the directories containing the sample datasets provided by Flemming Hansen. The names are now more obvious as to their content… This was proposed by Ed in a post at https://mail.gna.org/public/relax-devel/2009-01/msg00056.html.
  • Added an 'unresolved' file to the 800 MHz data and moved (and modified) some files (sequence and readme) so there is only one copy for the 500 and 800 MHz data. This prevents duplicated files.
  • Changed the object names so they are lower case as they should be, based on the rest of the code. Made the equivalent change in the function assemble_param_vector() to allow the system-test to go further. This was spotted by Ed in a post at https://mail.gna.org/public/relax-devel/2009-01/msg00058.html.
  • Corrected capitalisation issues for param names. These were spotted by Ed in a thread starting at https://mail.gna.org/public/relax-devel/2009-01/msg00059.html.
  • Rearranged commands in the scripts. The experiment type and exchange regime will have to be input before the CPMG pulse train delay T.
  • Introduced a RelaxError when choosing 'r1rho' as the experiment type as this won't be implemented now. Efforts will be concentrated on the CPMG code first, then on the R1ρ code.
  • Added tests, print statements and other code to the relaxation dispersion specific functions. Tests were proposed by Ed in a post at https://mail.gna.org/public/relax-devel/2009-01/msg00065.html.
  • Started to implement a function for calculating the effective transverse relaxation rate (R2eff). This follows a thread at https://mail.gna.org/public/relax-devel/2009-01/msg00067.html.
  • Converted the function linear_constraints() for relaxation dispersion needs.
  • Started to implement the scaling matrix for scaling the 'R2eff' values. This might change in the future as other possible curve fitting parameters ('R2', 'Rex', 'kex', 'R2A0', 'kA', 'δω') might need some scaling.
  • Completed the scaling matrix code. This follows a thread at https://mail.gna.org/public/relax-devel/2009-01/msg00073.html.
  • Imported relaxation dispersion in grace user functions.
  • Added a missing quote which prevented the user manual from being built with SCons. This was discussed in a thread starting at https://mail.gna.org/public/relax-devel/2009-01/msg00082.html.
  • Started to implement a function for reading 'R2eff' values directly. This is as proposed in a post at https://mail.gna.org/public/relax-devel/2009-01/msg00020.html. The function does not contain code yet.
  • Started to put equations and references in the user function docstrings and also corrected a small typo. This was proposed by Ed in a post at https://mail.gna.org/public/relax-devel/2009-01/msg00028.html.
  • Corrected the way the scaling matrix is assembled. This is as proposed by Ed in a post at https://mail.gna.org/public/relax-devel/2009-01/msg00079.html. The scaling values are now based on the default values for the different parameters which were slightly modified. The only parameter for which the average is still used (as for intensities in the 'relax_fit.py' code) is 'R2eff'.
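    A minimal sketch of the diagonal scaling idea described above (illustrative only; the parameter names and default values below are hypothetical and not relax's actual definitions):

        import numpy as np

        # Hypothetical default parameter values used as scaling factors.
        defaults = {'r2': 15.0, 'rex': 5.0, 'kex': 10000.0}
        names = ('r2', 'rex', 'kex')

        # Diagonal scaling matrix with one entry per optimised parameter.
        scaling_matrix = np.diag([defaults[name] for name in names])

        # The optimiser sees parameters divided by their scaling factors, and the
        # real values are recovered by multiplying back with the matrix.
        params = np.array([12.3, 2.5, 8000.0])
        scaled = params / np.diag(scaling_matrix)
        restored = np.dot(scaling_matrix, scaled)
        print(scaled, restored)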
  • Continued to implement the user function calc_r2eff(). This follows a discussion at https://mail.gna.org/public/relax-devel/2009-01/msg00067.html.
  • Copied 'test_relax_fit.py' to 'test_relax_disp.py'. This will allow the design of a few unit tests for the relaxation dispersion code.
  • Added two unit tests for the relaxation dispersion code and fixed errors in the corresponding code. More unit tests will be added soon to help debugging and developing.
  • Added two more unit tests.
  • One more unit test.
  • One more unit test for the relaxation dispersion code.
  • Added more unit tests and tried to debug what was uncovered by these tests. Still more work is needed for debugging…
  • A few fixes based on the unit tests problems.
  • Changed the default value for 'int_cpmg' to avoid an impossible mathematical situation: ln(0).
  • Fixed a bug where the 'id' argument was not set. This was proposed by Ed in a post at https://mail.gna.org/public/relax-devel/2009-01/msg00127.html.
  • Started to make changes for multiple field relaxation dispersion analysis. This seems necessary, so maybe single field analysis of relaxation dispersion should not be supported at all (see Kovrigin et al. (2006) J. Magn. Reson., 180, 93-104). The changes made here are only a first draft and may not work. In particular, the spectrum.read_intensities(), relax_disp.cpmg_frq(), spectrum.replicated(), spectrum.error_analysis(), and deselect.read() functions may need to know the magnetic field to which the particular dataset is associated… In fact, the different datasets should be input first and their R2eff values calculated independently. In a second step, the actual relaxation dispersion curve fitting should be performed with all of the data.
  • Fixed a bug which prevented the manual PDF from being compiled. The problem was caused by a ':' character in the references (after the volume number, as usual). This was changed to a '.' character. The equations were fine. Moreover, better formatting was achieved by adding ':' characters after the word 'are' before enumerations.
  • Fixed the unit tests. This is as proposed by Ed in a post at https://mail.gna.org/public/relax-devel/2009-01/msg00132.html.
  • Started to implement the reading of 'r2eff' by relax_data.read() by first writing a system test.
  • Updated a few docstrings and tried to improve the system test.
  • Removed the obsolete function 'relax_disp.r2eff_read()'. R2eff values will be read directly by 'relax_data.read()'.
  • Reordered a few functions alphabetically.
  • A small fix to the system test. However, is this fix the solution or is there something wrong with the reading of data (such as 'R2eff') by relax_data.read()? Shouldn't the data, for example 'R2eff', be available in 'cdp.mol[0].res[0].spin[0].R2eff_val[0]' or 'cdp.mol[0].res[0].spin[0].R2eff[0]' for the 1st spin of the 1st residue in the 1st molecule?
  • Fixed an import (as well as a few comments). This however introduces an error concerning the 'chi2' being undefined in the C module for relaxation dispersion…
  • Solved an issue created during the merge process concerning the 'return_data_name_doc' call. The solution is based on the code in 'specific_fns/relax_fit.py'.
  • Brought the relaxation dispersion branch into sync with the 1.3 line. There were many design changes within the 1.3 line that required that the old relaxation dispersion code be updated.
  • Fixes for the relaxation dispersion system tests. The install path is now in the status object, and not in __main__.
  • GPLv3 license updates for all files not found in the trunk.
  • Import fixes for the specific_analyses.relax_disp due to the recent trunk package layout redesign.
  • Made the non-API methods of the specific_analyses.relax_disp.Relax_disp class private.
  • Improvements for the GUI representation of the relax_disp user functions.
  • More import fixes for the new package layout.
  • Fix for the relax_data.read user function call in the Relax_disp.test_read_r2eff system test. The column numbers must be supplied.
  • Some more fixes to make the Relax_disp.test_read_r2eff system test pass. These are again changes needed due to the trunk now being very different.
  • The cpmg_frq argument of the relax_disp.cpmg_frq user function can now be None.
  • The cpmg_frq argument of the relax_disp.cpmg_frq user function can now be an integer as well as a float.
  • Updates for the relaxation dispersion system test scripts for the newer design of relax. A number of changes were required as the code was quite old.
  • Created the lib.dispersion.equations module. This is a translation of Sebastien Morin's C code in the old relax_disp branch.
  • Created a very basic initial target function class for relaxation dispersion. This code is a translation of Sebastien Morin's C code in the old relax_disp branch.
  • The relaxation dispersion specific analysis code now uses the Python target function rather than the C code.
  • Fix for the Relax_disp.test_curve_fitting_cpmg_fast system test variable names.
  • Added the model argument to the dispersion target function class to select between different equations.
  • The relaxation dispersion target function class now imports the equations from lib.dispersion.equations.
  • The relaxation dispersion target function class raises a RelaxError when the model is not implemented.
  • Modified all the relaxation dispersion test data Sparky files at 800 MHz. The last three lines of the files were not properly formatted.
  • Converted all of the raising of RelaxErrors in the specific_analyses.relax_disp to the new standard. This is for Python 3 support.
  • Converted all print statements in specific_analyses.relax_disp to function calls. This is for Python 3 compatibility.
  • Converted the prompt unit tests for relaxation dispersion to the current relax design.
  • Updated the target_functions package __all__ list for the relax_disp module.
  • Another fix for the prompt argument unit tests of the relax_disp user functions.
  • Big changes to the front end of the relax_disp.select_model user function. The model strings have been changed and are now programmatically added to the user function documentation. The main text has also been redesigned. And the new model 'exp_fit' has been added which allows just the exponential curves to be fit.
  • Python 3 import fix for the specific_analyses.relax_disp module.
  • Updated the documentation in specific_analyses.relax_disp to use the user_functions package design. The user_functions.objects.Desc_container and user_functions.data.Uf_tables objects are now used to construct the relaxation dispersion documentation.
  • The relax_disp.select_model backend now handles the 'exp_fit' model.
  • Removed all aliasing of the current data pipe in specific_analyses.relax_disp as this is in __builtin__.
  • The specific_analyses.relax_disp module now uses the parameter list object to define parameters. This allows the now unused methods data_names(), default_value(), return_data_name(), and return_grace_string() to be deleted and their contents copied into the parameter definitions in the class __init__() method.
  • Alphabetical arrangement of methods in the specific_analyses.relax_disp module.
  • Docstring cleanups for the specific_analyses.relax_disp module.
  • The relaxation dispersion specific analysis now aliases API base methods for a number of methods.
  • Import cleanup in the specific_analyses.relax_disp module.
  • The relaxation dispersion specific analysis module is now using the base _data_init_spin() method. This is aliased to data_init() and replaces the old non-functional method.
  • Created the relax_disp.spin_lock_field user function. This is used to set the spin-lock field strength of a given R1ρ spectrum.
  • Created the relax_disp.relax_time user function. This is almost a direct copy of the relax_fit.relax_time user function, but has been modernised.
  • Fix for the printout from the relax_disp.relax_time user function - the time is no longer divided by 1k.
  • Expanded the dispersion model parameters to include the exponential curve parameters.
  • Clean up of some of the old relax_disp user functions - many argument types are now numbers rather than floats.
  • Unit test fixes for the prompt relax_disp user function argument checks.
  • Added the specific_analyses.relax_disp module to the unit test checking of the specific API.
  • Big cleanup of the relaxation dispersion code to match the analysis specific API. All methods not belonging to the API have been made private. The arguments and keyword arguments for the API methods now match the API.
  • Completely redesigned the minimisation parts of the specific_analyses.relax_disp module. Instead of dealing with the optimisation of individual spins, groups of spins are now optimised together. This allows for the clustering analysis of relaxation dispersion. The method _block_loop() has been created to loop over spin blocks, but it currently only returns individual spins. However with the rest of the code designed to handle this loop, only this function needs to be modified to enable clustering. The method _param_num() has also been added to determine the total parameter number per spin block. The data structures sent into the Dispersion target function class have also been redesigned to handle spin blocks instead of individual spins.
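    A rough sketch of the block looping described above (hypothetical data structures, not the relax API; the parameter count naively assumes every parameter is spin-specific):

        def block_loop(spins):
            """Yield blocks of spins; for now each block holds a single spin."""
            for spin in spins:
                yield [spin]

        def param_num(block, params_per_spin=3):
            """Total number of parameters optimised for one spin block."""
            return params_per_spin * len(block)

        # Placeholder spin identifiers for illustration.
        for block in block_loop([':70@N', ':71@N']):
            print(block, param_num(block))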
  • Modified the relax_disp.cpmg_frq user function to match relax_disp.spin_lock_field. Both the front and back ends now use the same logic as the relax_disp.spin_lock_field user function and will allow some sanity to the analysis specific code.
  • The relax_disp.cpmg_frq and relax_disp.spin_lock_field user functions now create cdp.curve_count. This is an integer which indicates the number of exponential curves which are to be optimised per spin block.
  • The relaxation dispersion analysis specific _param_num() method now takes the number of curves into account.
  • Better setup checking for the relaxation dispersion specific analysis minimise() method.
  • Renamed cdp.curve_type to cdp.model to better explain the variable.
  • Fixes for the dispersion specific analysis separating R2eff from R2. There is one R2eff parameter per exponential curve, but only one R2 per model. The code now better handles this.
  • The dispersion specific methods now handle one R2eff and I0 parameter per exponential curve.
  • Better management of the global relaxation dispersion data. The user functions relax_disp.cpmg_frq, relax_disp.spin_lock_nu1, and relax_disp.relax_time now maintain data structures in the current data pipe of the unique frequencies, fields, and times (sorted) as well as the number of frequencies, fields, and times. This data is used by the minimise user function back end to set up the target function, and will be required by many other parts of the analysis.
  • The dispersion specific _assemble_param_vector() method now handles multiple R2eff and I0 values. These spin structures are dictionaries holding multiple values.
  • Created the dispersion specific _exp_curve_loop() method for looping over each exponential curve. This yields the index and key for each curve, simplifying the handling of this data.
  • Expanded the relax_disp.select_model user function documentation to cover R2eff and I0. These parameters and how they are optimised are now better described.
  • Updated the relaxation dispersion target function class to handle the recent changes.
  • First attempt at a target function for fitting exponential curves for relaxation dispersion.
  • Added some synthetic data to test the 'exp_fit' relaxation dispersion model fitting. These are just basic synthetic exponential curves. R2eff and I0 should be very easy to find.
  • The lib.software.sparky.read_list_intensity() function can now handle lowercase in the residue names.
  • Created the Relax_disp.test_exp_fit system test for checking the relaxation dispersion 'exp_fit' model.
  • The specific_analyses.relax_disp module is now using minfx correctly. The minfx grid search is no longer part of generic_minimise() and must be called separately.
  • The relax_disp function _grid_search_setup() now operates in the same way as the relax_fit code. This function originates from the 'relax_fit' specific analysis code, but that code has since evolved. The 'relax_disp' code now mimics the new code, returning lists of grid search increments and upper and lower limits.
  • The scaling flag is now initialised in the relaxation dispersion target function class.
  • Created the lib.curve_fit package and associated unit tests. This will be used for holding modules such as for exponential curve-fitting required for the relaxation dispersion analysis.
  • Created the new lib.curve_fit.exponential module for exponential curve-fitting. This contains the single exponential_2param_neg() function which will be used for the relaxation dispersion target functions. This is based on Sebastien Morin's function exp_2param_neg in maths_fns.exponential.c in his dormant inversion-recovery branch.
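    The two-parameter negative exponential is presumably of the form I(t) = I0*exp(-R*t); a minimal NumPy sketch of such a function (not the actual relax library code):

        import numpy as np

        def exponential_2param_neg(rate, i0, relax_times):
            """Two-parameter decaying exponential: I(t) = I0 * exp(-rate * t)."""
            return i0 * np.exp(-rate * np.asarray(relax_times))

        # Back-calculate peak intensities for a 20 s^-1 decay at three time points.
        print(exponential_2param_neg(rate=20.0, i0=1.0e6, relax_times=[0.0, 0.05, 0.1]))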
  • Typo fix for the new lib.curve_fit.exponential.exponential_2param_neg() function.
  • The relaxation dispersion func_exp_fit() target function now uses exponential_2param_neg(). This is from the lib.curve_fit.exponential module.
  • Fix for the relaxation dispersion specific _assemble_scaling_matrix() method. The values were all inverted - the matrix should hold values on the same order as the parameter values.
  • Fix for the func_exp_fit() relaxation dispersion target function. The parameter index was not correctly calculated.
  • The 'exp_fit' relaxation dispersion model now uses the minfx.grid sparseness argument. This is used to skip all parts of the grid search belonging to a different exponential curve or different spin. If the number of curves is N and the number of spins M, the grid size decreases from inc^(2*N*M) to (inc^2)*N*M. For lots of spins and curves, this is a huge decrease.
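    A quick check of the grid size arithmetic above (plain Python, no relax or minfx code):

        # Grid increments per dimension, number of exponential curves and spins.
        inc, n_curves, n_spins = 11, 3, 50

        full_grid = inc ** (2 * n_curves * n_spins)    # all R2eff and I0 combinations
        sparse_grid = (inc ** 2) * n_curves * n_spins  # each (R2eff, I0) pair alone

        print(full_grid, sparse_grid)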
  • The relaxation dispersion specific _disassemble_param_vector() method is now functional. This should allow the minimise user function to complete.
  • Fixes for the dispersion specific _assemble_param_vector() method. The R2eff and I0 spin dictionary structures are now checked for their keys before pulling the value out.
  • Fix for the relaxation dispersion grid search. The lower and upper bounds are no longer continually scaled with each optimisation.
  • Increased the speed of the Relax_disp.test_exp_fit system test by using a smaller grid search.
  • The relaxation dispersion target function class back_calc variable now matches the values variable. Instead of being a temporary structure which is overwritten for each spin and each exponential curve, the structure now matches the dimensions of the values variable and hence is persistent per function call. This allows external code to access the structure - for example for data back calculation in the relaxation dispersion specific analysis module.
  • Fixes for the dispersion specific _back_calc() method. This method still has a long way to go before it is of any use.
  • Created a custom base_data_loop() method for the relaxation dispersion analysis. This defines the base data as the peak intensities of a single exponential curve and yields the spin container and exponential curve key identifying the individual curves.
  • Activated Monte Carlo simulations for the relaxation dispersion analysis. This required a bit of work. The key parts were renaming _block_loop() to the API method model_loop(), as that is exactly what the model_loop() method is supposed to do, converting a number of common spin-based API methods to handle dispersion clustering, and modifying existing methods from Seb's original branch to handle the base_data_loop() method. The following methods have been added or modified: _back_calc() has been modified to handle clustering and the return of peak intensities from only one exponential curve; _exp_curve_index_from_key() is a new method used to convert an exponential curve key into the corresponding index; _intensity_key() is a new method for converting an exponential curve key and relaxation time into the corresponding intensity key; create_mc_data() is now functional and handles the data from the base_data_loop() method; return_error() now handles the data from the base_data_loop() method; set_selected_sim() is a new method adapted from the common _set_selected_sim_spin() method for the model_loop() method; sim_pack_data() now handles the data from the base_data_loop() method; sim_return_param() is a new method adapted from the common _sim_return_param_spin() method to suit the model_loop() method; and sim_return_selected() is a new method adapted from the common _sim_return_selected_spin() method, again to suit the model_loop() method.
  • Modified the Relax_disp.test_exp_fit system test to be faster and not create plots which it cannot.
  • The Relax_disp.test_exp_fit system test now checks some of the final results.
  • The relaxation dispersion parameter errors from Monte Carlo simulations are now stored. Previously MC simulations could run, but the errors were not being calculated and stored. The sim_return_param() method was empty. This method is now complete. In addition the set_error() method has been created for setting the parameter errors. And the _exp_curve_key_from_index() and _param_index_to_param_info() auxiliary methods added to facilitate data access.
  • Expanded the checking in the Relax_disp.test_exp_fit system test.
  • Converted all relaxation dispersion parameters to lowercase. This is so the variable names match the parameter names identically, avoiding problems with some of the shared methods of the specific analysis API.
  • The spin parameters are now set up last by the relax_disp.select_model user function back end.
  • Added 'spin_lock_nu1' as a dictionary type parameter of the relaxation dispersion specific analysis.
  • Rearrangements of the 2 system tests of Flemming Hansen's CPMG data. The system tests are now called Relax_disp.test_hansen_cpmg_data_fast_2site and Relax_disp.test_hansen_cpmg_data_slow_2site, and the system test scripts are now all in test_suite/system_tests/scripts/relax_disp/.
  • Created a basic initial auto-analysis script for relaxation dispersion. This currently only supports the optimisation of the 'exp_fit' dispersion model, but has all of the infrastructure set up to make it easy to add other models.
  • Added the relaxation dispersion module to the auto_analyses package __all__ list.
  • The relaxation dispersion system test class now imports the auto-analysis. This fixes an import order error.
  • The Relax_disp.test_exp_fit system test now uses the auto_analyses.relax_disp analysis.
  • Fix for the relaxation dispersion auto-analysis. The exponential fit model is now selected prior to optimisation.
  • Removed the relax_disp.select_model user function call from the exp_fit dispersion system test script. This is performed by the auto-analysis and not during setup.
  • Added testing for spin clustering to the Relax_disp.test_exp_fit system test. This includes calls to the new relax_disp.cluster user function and the checking of pipe variables holding the clustering information.
  • Fix for the spin ID string for the relax_disp.cluster user function. This is for the exp_fit.py relaxation dispersion system test script.
  • Implemented the relax_disp.cluster user function. This is for clustering spins together for a dispersion analysis.
  • Clustering is now enabled for relaxation dispersion. The model_loop() analysis specific API method now loops over the spin clusterings, allowing a cluster of spins to be optimised simultaneously to one set of model parameters.
  • Fixes for the spin clustering for relaxation dispersion. Both optimisation and Monte Carlo simulations were affected by these bugs.
  • Speed up of the Relax_disp.test_exp_fit system test by cutting the grid size down to 3 increments.
  • Expanded the write_results() method of the relaxation dispersion auto-analysis. More Grace graphs are now being produced, and the Rex file creation is now model dependent.
  • Fix for the relax_disp.cluster user function. The 'free spins' category is now not deleted when empty.
  • Created an icon set for relaxation dispersion.
  • Renamed the relaxation dispersion test suite data directory to 'dispersion'.
  • Changed the relax_disp.cpmg_frq user function id argument to spectrum_id. All the relax_disp user functions now operate with the spectrum IDs instead of experiment IDs.
  • The relax_disp.cpmg_delayT user function backend now uses the spectrum ID rather than experiment ID.
  • Expanded the relax_disp.exp_type user function to include the fixed period CPMG experiments.
  • The relax_disp.cpmg_delayT backend can now handle the 'cpmg fixed' experiment type.
  • The relax_disp.cpmg_frq user function can now handle values of None. The float function is no longer used if the value of None is encountered.
  • Updated the dispersion system test script for Flemming Hansen's data. This script should now be close to the final form for a relaxation dispersion analysis of CPMG data with a fixed relaxation time period.
  • Combined all the system test scripts of Flemming Hansen's fixed time period CPMG data. For details of this data, see http://thread.gmane.org/gmane.science.nmr.relax.devel/3790/focus=3827.
  • Fixes for the renaming of the relaxation dispersion test suite shared data directory.
  • Started to redesign how R2eff is handled in the relaxation dispersion analysis. Instead of being part of the optimisation of the dispersion model, it will itself be the model named R2eff (converted from the 'exp_fit' model). This model will either use the calc user function to determine R2eff when the fixed relaxation period experiment is selected, or fit exponential curves using the relax_fit C module for the variable relaxation period experiments. The R2eff values will then be copied over for each dispersion model in the auto-analysis using the value.copy user function.
  • Created the relax_disp.plot_exp_curves user function. This is to be used to create 2D graphs of the exponential curves (relaxation time versus peak intensity) as the grace.write user function plots are not very nice - the curves from each spectrometer field strength and dispersion point are mixed into one curve.
  • The relaxation dispersion auto-analysis now creates plots of the exponential curves.
  • The R2eff model now works for the variable time relaxation period and exponential curve-fitting.
  • The relax_disp.select_model user function now checks for the compiled C module when required.
  • Expanded the new analysis wizard in the GUI to accommodate the relaxation dispersion auto-analysis. Now the buttons are a matrix of 4x2 with the NOE, R1, R2, and model-free analyses at the top and two new blank buttons have been added to the bottom. One will be used for the dispersion analysis.
  • Created some basic graphics for the relaxation dispersion analysis for use in the GUI.
  • Added the correct sized graphic for the relaxation dispersion button in the new analysis wizard.
  • Created the relaxation dispersion button in the new analysis wizard.
  • Created the initial version of the relaxation dispersion auto-analysis GUI frame. This is built from copying lots of code from the NOE, R1, and R2 analyses. The dispersion specific parts will be added later.
  • The relaxation dispersion GUI analysis now has an element for selecting the models to be optimised.
  • Removed some unneeded checks in the relax_disp.exp_type user function.
  • Added a GUI element to the relaxation dispersion auto-analysis for selecting the experiment type.
  • The relax_disp.exp_type user function has been shifted to the new analysis wizard. Instead of being one of the elements on the relaxation dispersion analysis frame, it is now placed between the analysis selection page and the data pipe page of the new analysis wizard. The user function execution is delayed until the set up of the frame, just after the execution of the pipe.create user function. This will allow the frame to be set up differently for each experiment type.
  • Extended the tooltip for the experiment type GUI element in the relaxation dispersion frame.
  • Improvements to the tooltips in the relaxation dispersion analysis frame of the GUI.
  • Changed the peak intensity wizard for the relaxation dispersion frame to match the other analyses.
  • Unused import removal from the gui.analyses.auto_relax_disp module.
  • Missing import in the gui.analyses.auto_relax_disp module.
  • Added support for all the relaxation dispersion user functions in the Peak_intensity_wizard object.
  • Modified how the fixed time period is specified in the Flemming Hansen data system test. Instead of using the relax_disp.cpmg_delayT user function, the relax_disp.relax_time user function will be used. The functionality is duplicated and relax_disp.cpmg_delayT is not needed.
  • Modified the Spectra_list GUI element to handle relaxation dispersion data.
  • The relaxation dispersion GUI analysis now uses the dispersion parts of the peak intensity elements. This includes activating the dispersion parts of the Spectra_list GUI element for displaying the spectrum ID with associated metadata and the dispersion parts of the Peak_intensity_wizard for loading the data.
  • The relaxation dispersion auto-analysis is now correctly executed from the GUI. The GUI data gathering is also now complete in the assemble_data() method.
  • Added some more module variables to specific_analyses.relax_disp for the experiment types.
  • The relaxation dispersion auto-analysis now performs the peak intensity error analysis. This is essential for when the GUI is used.
  • More Unicode characters are now used in the relaxation dispersion GUI analysis frame. The model parameter lists have also been improved.
  • Removed the spectrum.error_analysis user function call in the exp_fit.py dispersion system test script. This is now performed by the auto-analysis.
  • Fix for the error_analysis() method of the relaxation dispersion auto-analysis. The method can now handle missing spectrometer field strength data.
  • More fixes for the peak intensity error analysis method of the relaxation dispersion auto-analysis. The fixed relaxation time period type experiments can now be handled.
  • Elimination of the relax_disp.cpmg_delayT user function. This user function is not necessary as the relax_disp.relax_time user function serves the same purpose. The use of relax_disp.relax_time instead allows for code sharing between the fixed and variable time period relaxation dispersion experiment types.
  • Elimination of the relax_disp.calc_r2eff user function. This user function, which is non-functional anyway, is not needed. The calculation of the R2eff values will occur with the optimisation of the R2eff model (with a call to the calc user function for the fixed time period experiment types) so direct calculation through a specific user function is not needed.
  • Improvements to the GUI text subscripting in the relaxation dispersion analysis frame.
  • Removed the temporary relaxation dispersion SVG graphic for the GUI analysis.
  • Redesign of the graphic for the relaxation dispersion analysis. This is a modification of the r1.svg graphic to show roughly the graphic as in "Protein NMR Spectroscopy, Principles and Practice" by Cavanagh, Fairbrother, Palmer and Skelton.
  • Editing of the relaxation dispersion analysis graphic.
  • Added the relaxation dispersion graphic to all of the dispersion GUI user functions missing a graphic.
  • Redesign of the relaxation dispersion models in the relax_disp.select_model user function front-end. The models have been renamed and better defined based on the experiment type (CPMG or R1ρ).
  • The relaxation dispersion scaling matrix assembly now uses lib.mathematics.round_to_next_order(). This makes the I0 values printed out during the optimisation of the exponential curves easier to scale back to their real values.
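    The rounding is presumably up to the next power of ten; a hypothetical stand-in for the idea (not the actual lib.mathematics code):

        from math import ceil, log10

        def round_to_next_order(value):
            """Round a positive value up to the next power of ten, e.g. 1.2e6 -> 1e7."""
            return 10 ** ceil(log10(value))

        # A scaling factor of 1e7 makes a printed, scaled I0 value easy to interpret.
        print(round_to_next_order(1.2e6))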
  • The Relax_disp.test_hansen_cpmg_data_fast_2site system test now uses the R2eff model. The equivalent slow exchange system test also uses the model. This model will be used to find the R2eff values from the fixed relaxation time period data.
  • Fix for some RelaxError printouts in the relaxation dispersion specific code.
  • The relaxation dispersion class variables for the experiment types are now used for all comparisons. This should avoid random bugs.
  • Fix for the calculation part of the relaxation dispersion auto-analysis. This is for the fixed relaxation period data types.
  • The 2D Grace plots of the exponential curves are now skipped for the fixed relaxation period data types. This is in the relaxation dispersion auto-analysis.
  • Started to implement the relaxation dispersion analysis specific calculate() method. This will be used to calculate the R2eff/R1ρ values for the fixed relaxation time period data types and is equivalent to Sebastien Morin's relax_disp.calc_r2eff user function which was deleted (see http://thread.gmane.org/gmane.science.nmr.relax.scm/17336).
  • Converted the specific_analyses.relax_disp module into its own package. This is to allow the code to be broken up into separate modules to simplify the analysis.
  • Shifted out all of the variables and dispersion data specific code into separate modules. The dispersion data private methods have been converted into functions of the specific_analyses.relax_disp.disp_data module. The package variables have also been shifted into the specific_analyses.relax_disp.variables module to avoid circular imports.
  • Alphabetical ordering of the functions of the specific_analyses.relax_disp.disp_data module.
  • Created the specific_analyses.relax_disp.disp_data.loop_all_data() function. This is to loop over all possible base relaxation dispersion data (spectrometer frequencies then dispersion points).
  • Updates for the dispersion user functions for the changes in specific_fns.relax_disp.
  • Typo fix in the new loop_all_data() function.
  • Created the lib.dispersion.calc_two_point_r2eff() function. This is for calculating the R2eff/R1ρ value for the fixed relaxation time data.
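    For fixed relaxation time data the two-point calculation is presumably the standard relation R2eff = -ln(I / I_ref) / T; a minimal sketch under that assumption (not the relax library function itself):

        from math import log

        def calc_two_point_r2eff(relax_time, i_ref, i):
            """R2eff from the reference and relaxed peak intensities:
            R2eff = -ln(I / I_ref) / T."""
            return -log(i / i_ref) / relax_time

        # Example: a 40 ms constant-time CPMG period with 50% signal loss.
        print(calc_two_point_r2eff(relax_time=0.04, i_ref=1.0e6, i=0.5e6))  # ~17.3 s^-1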
  • Improvements to the specific_analysis.relax_disp.disp_data module. The function loop_all_data() has been expanded to include the relaxation time period into the loop. The functions return_intensity() and return_key() have been added to return peak intensities and the key corresponding to the data returned by loop_all_data().
  • Fixes for some latent bugs in the specific_analyses.relax_disp.disp_data module. The checks for the CPMG-data type in a number of functions now uses the CPMG_EXP list instead of fixed strings.
  • Completed the relaxation dispersion calculate() method. This allows the R2eff/R1ρ values to be calculated for the fixed relaxation time period experiments through the calc user function.
  • Created a script for running a full relaxation dispersion analysis on Flemming Hansen's data. This is located in the shared data directories and is not part of the test suite as a full analysis will take far too long.
  • Updated the models in the script for the full relaxation dispersion analysis of Hansen's data.
  • Updated the backend of the relax_disp.select_model to handle the new model names.
  • Spun out a number of dispersion methods into the new specific_analyses.relax_disp.parameters module. This is a module of functions relating to the parameters of the relaxation dispersion models.
  • More spacing before the sectioning printouts in the relaxation dispersion auto-analysis.
  • Modified the printouts of the relax_disp.select_model user function.
  • Fix for the relaxation dispersion auto-analysis. The data pipes created for each model optimised are now switched to prior to any operations on the pipe.
  • Changed the φex parameter in the LM63 model back to rex.
  • Changed the Grace string for the rex parameter to be φex.
  • Converted all of the specific_analyses.relax_disp.parameters module to handle different models. The R2eff and I0 parameters are now only part of the R2eff model and all other standard parameters belong to all of the other models.
  • Shifted all of the constant relaxation dispersion variables into the specific module. All of the dispersion code now uses the variables of specific_analyses.relax_disp.variables.
  • Renamed the lib.dispersion.equations.fast_2site() function to r2eff_LM63(). The number of relaxation times has also been replaced by the number of dispersion points.
  • Added the return_cpmg_frqs() and return_spin_lock_nu1() functions to specific_analyses.relax_disp.disp_data.
  • Updates to the relaxation dispersion auto-analysis. The Grace plots created are now more dependent on the current model, so that dispersion curves are only created for the R2eff model. The specific_analyses.relax_disp.variables module is now also being used.
  • Started to redesign the relaxation dispersion target function class. The input data is now expected to be R2eff/R1ρ data and all mentions of exponential curves have been eliminated. The func_exp_fit() target function has been deleted as it is not used - the _minimise_r2eff() method in the dispersion specific analysis class is now used instead. And the func_fast_2site() target function has been renamed to func_LM63().
  • Redesigned the optimisation code of the dispersion analysis specific class for the new target functions. This includes the assembling of R2eff/R1ρ values instead of peak heights, and a number of small fixes.
  • Shifted the LM63 dispersion model functions into the new lib.dispersion.lm63 module.
  • The reference spectrum is now not included when counting the number of dispersion points.
  • Fix for the lib.dispersion.lm63 module and parameters of zero are now gracefully handled.
  • Fixes for the func_LM63() dispersion target function.
  • Shifted the R2eff/R1ρ value and error assembly into specific_analyses.relax_disp.disp_data. This is in the new return_r2eff_arrays() function. The code has also been debugged and made functional.
  • Added support for handling missing data in the relaxation dispersion analysis. This support was mentioned in the post http://thread.gmane.org/gmane.science.nmr.relax.devel/3835.
  • Added a FIXME to a comment about the hardcoded Bootstrap number for relaxation dispersion.
  • Started to add support for Monte Carlo simulations for the relaxation dispersion models. This is for all models except R2eff. The changes are extensive but incomplete. The new functions disp_point_key_from_index() and disp_point_index_from_key() have been added to the specific_analyses.relax_disp.disp_data module, but the disp_point_index_from_key() function still needs work. The _back_calc() method of the specific_analyses.relax_disp.Relax_disp class has been redesigned, as well as base_data_loop() method and all methods which depend on it.
  • Updated the relaxation dispersion system tests of the Hansen CPMG data for the new models. The models are now LM63 and CR72, and the tests have been renamed to Relax_disp.test_hansen_cpmg_data_LM63 and Relax_disp.test_hansen_cpmg_data_CR72.
  • Update of the specific_analyses.relax_disp package docstring.
  • Fix for the linear constraints setup of the R2eff relaxation dispersion model. There are no constraints, so the specific_analyses.relax_disp.parameters.linear_constraints() function now returns A and b values of None.
  • Basic fix for the _back_calc_r2eff() relaxation dispersion method. A variable was misnamed.
  • Major redesign of the relaxation dispersion data model in the relax data store. The data model now revolves around the three concepts of the spectrometer frequency, the dispersion points, and the relaxation times. Peak intensity data is now handled through averaging using the new specific_analyses.relax_disp.disp_data.average_intensity() function. R2eff/R1ρ values are now referenced by a key generated from the spectrometer frequency and νCPMG frequency or ν1 spin-lock field strength. All of the specific_analyses.relax_disp package has been standardised around these concepts. This simplifies all of the modules of the package and removes a large number of latent bugs.
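    A toy illustration of keying dispersion data by spectrometer frequency and dispersion point (the key format here is invented for the example and is not the real relax key format):

        def return_key(frq, point):
            """Build a dictionary key from the spectrometer frequency (Hz) and the
            nu_CPMG or spin-lock nu1 value (Hz)."""
            return "%.1f_%.1f" % (frq, point)

        r2eff = {}
        r2eff[return_key(599.89e6, 66.7)] = 10.5    # hypothetical R2eff values
        r2eff[return_key(599.89e6, 1000.0)] = 7.2
        print(sorted(r2eff))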
  • A number of fixes to partly enable Monte Carlo simulations for the non R2eff dispersion models.
  • Finally, Monte Carlo simulations for the relaxation dispersion models now work. This was a simple fix for the specific_analyses.relax_disp.parameters.param_index_to_param_info() function.
  • Created truncated data files of the Hansen CPMG data. This consists of residues 70 and 71 and will be used to massively speed up the system tests.
  • The truncated Hansen CPMG data is now in the form of Sparky peak lists.
  • Now all of the Hansen CPMG data is present as truncated Sparky peak lists.
  • Speedup for the relaxation dispersion system tests which use Flemming Hansen's CPMG data. The system test script now reads the truncated data files (of only residues 70 and 71) to minimise the time required to read the data and store it in the relax data store.
  • Added a script to the test suite shared data for analysing the truncated Hansen CPMG data.
  • Fixes for the LM63 dispersion CPMG model. The 'r2' model parameter is now an array as there is one R2 value per magnetic field strength. And the 'rex' parameter has been renamed to 'φex' and is scaled quadratically with the field strength within the optimisation target function.
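    The LM63 (Luz and Meiboom, 1963) fast-exchange equation referred to here is presumably R2eff(νCPMG) = R20 + (φex/kex)*(1 - (4*νCPMG/kex)*tanh(kex/(4*νCPMG))), with φex = pA*pB*δω² scaling quadratically with the field strength; a hedged sketch of the curve (not the relax lib.dispersion.lm63 module itself):

        import numpy as np

        def r2eff_LM63(r20, phi_ex, kex, cpmg_frqs):
            """Fast-exchange LM63 dispersion curve (sketch of the equation only)."""
            nu = np.asarray(cpmg_frqs, dtype=float)
            return r20 + (phi_ex / kex) * (1.0 - (4.0 * nu / kex) * np.tanh(kex / (4.0 * nu)))

        # phi_ex = pA*pB*delta_omega^2, so e.g. phi_ex at 800 MHz would be
        # phi_ex at 500 MHz times (800.0/500.0)**2 (illustrative numbers only).
        print(r2eff_LM63(r20=10.0, phi_ex=20000.0, kex=1500.0, cpmg_frqs=[50.0, 200.0, 1000.0]))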
  • Fix for the setup of the relaxation dispersion GUI analysis. The base method add_execute_relax() has been renamed to add_execute_analysis().
  • Added support for interfacing with Art Palmer's CPMGFit program. The two new user functions relax_disp.cpmgfit_input and relax_disp.cpmgfit_execute have been created to interface with CPMGFit. The first creates the per spin system CPMGFit input files as well as a batch script for executing CPMGFit with all the input files. The second bypasses the batch script and allows CPMGFit to be executed from within relax. This mimics the palmer and dasha user functions. The back end code is in the new specific_analyses.relax_disp.cpmgfit module.
  • Created the Relax_disp.test_hansen_cpmgfit_input system test. This is for checking the operation of the relax_disp.cpmgfit_input user function conversion of Flemming Hansen's CPMG R2eff values into input files for CPMGFit. A relax state file containing the results of an analysis of an R2eff model analysis of the truncated data has been added to the test suite data to speed up the test and to check the loading of dispersion state files.
  • Created a directory for the results of the CPMGFit program using Hansen's truncated CPMG data. The script 'cpmgfit.py' has been added to create the input files for CPMGFit and execute the program. The input and batch files have been added to the repository as well.
  • Added the results from NESSY of the analysis of Flemming Hansen's truncated CPMG data. This is only for the truncated data of residues 70 and 71. All files, except for the PNG graphics, have been added to the repository. The 'summary' file has been created to hold the data from NESSY's summary tab, as this is not stored in the NESSY saved state and is permanently lost after closing NESSY.
  • A dispersion saved state from the prompt or script UI can now be associated with a GUI analysis.
  • Created the Relax_disp.test_hansen_trunc_data GUI test for checking the GUI dispersion auto-analysis. This checks the full operation of the relaxation dispersion GUI analysis, without checking the final results (to be added later).
  • Fixes for the change to the new spectrometer.frequency user function and associated data structures.
  • Removed the preview button from the file selection GUI element of the CPMGFit user functions. These are the relax_disp.cpmgfit_execute and relax_disp.cpmgfit_input user functions.
  • The relaxation dispersion specific code now uses the changes of the spectrometer.frequency user function. This simplifies the handling of magnetic field strength data.
  • More fixes to the relax_disp branch for the changes of the spectrometer.frequency user function.
  • Changes to the CPMGFit input files due to the new spectrometer.frequency user function.
  • The relax_disp.cpmgfit_execute user function now correctly calls CPMGFit. The -grid command line option has been added and the output for each spin is sent to a special output file.
  • Updated the input files and added the output files for the CPMGFit program with Hansen's CPMG data. This is for the data truncated to residues 70 and 71.
  • Fixes for the relax_disp branch for the spectrometer.frequency user function changes.
  • Fix for the Relax_disp.test_hansen_cpmgfit_input system test. This is for the recent spectrometer.frequency user function changes.
  • The specific_analyses.relax_disp.disp_data.loop_frq() function can now handle missing data. This allows the loop to yield a single value of None when the spectrometer information has not been loaded and enables R1ρ analyses at a single field strength.
  • Fix for the LM63 dispersion model target function - the scaled φex value is now used for the R2.
  • Fixes for the relaxation dispersion auto-analysis for the LM63 model. The Rex parameter is now the φex parameter.
  • Added printouts of the optimised parameters to the Relax_disp.test_hansen_cpmg_data_LM63 system test. This includes the conversion to the equivalent CPMGFit parameters.
  • Massively increased the precision of the R2eff error analysis. The hard-coded simulation number variable is now set to 100000. This appears to be necessary for reliably reproducing results in the subsequent dispersion models.
  • Created the specific_analyses.relax_disp.disp_data.spin_has_frq_data() function. This is for determining if a spin has peak intensity for the given spectrometer field strength.
  • Updated some scripts for the spectrometer.frequency user function change.
  • Created a script to calculate the R2eff rate errors extremely precisely for Hansen's CPMG data. This uses 1 million Bootstrap simulations for calculating the errors. The 'r2eff_values.bz2' file is saved after deleting the spin specific r2eff_sim structures so that it drops in size from 388 MB to 7.3 kB.
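    The Bootstrap-style error propagation for a two-point R2eff value presumably amounts to resampling both intensities within their errors and taking the standard deviation of the recalculated rates; a compact sketch under that assumption:

        import numpy as np

        def r2eff_error_bootstrap(relax_time, i_ref, i, sd_ref, sd, n_sim=100000, seed=0):
            """Estimate the R2eff error by resampling the two peak intensities with
            Gaussian noise and recomputing R2eff for each simulation."""
            rng = np.random.default_rng(seed)
            i_ref_sim = rng.normal(i_ref, sd_ref, n_sim)
            i_sim = rng.normal(i, sd, n_sim)
            r2eff_sim = -np.log(i_sim / i_ref_sim) / relax_time
            return np.std(r2eff_sim, ddof=1)

        print(r2eff_error_bootstrap(0.04, 1.0e6, 0.5e6, 2.0e4, 2.0e4))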
  • The CPMGFit script for Hansen's CPMG data now starts with the high precision error r2eff_values.bz2 file. This ensures consistency between comparisons between relax, NESSY, CPMGFit, etc.
  • Removed the '_trunc' part of the nessy_results directory from the NESSY final save file.
  • The relaxation dispersion loop_point() function can now return the reference point. This is enabled via the skip_ref argument.
  • Created the relax_disp.nessy_input user function front and backends. This user function takes the data in the relax data store and creates a NESSY save file to be opened within NESSY. The backend is the new specific_analyses.relax_disp.nessy module. For the GUI frontend, graphics for icons and the wizard have been taken from the NESSY repository file pics/nessy_new.png@r1088 in the trunk.
  • A script has been added to create the NESSY input for Flemming Hansen's CPMG data.
  • Updated the NESSY results for Flemming Hansen's CPMG data for the R2eff values with high precision errors. A file containing the log or printouts from NESSY has been added for reference.
  • Updated the NESSY log from the Hansen CPMG data of residue 70 to remove the NESSY errors. These were removed with the commit r1090 to the NESSY trunk.
  • Split up the r2eff_values.bz2 save file into the results files for each data pipe. This is for Flemming Hansen's CPMG data truncated to residues 70 and 71. This is to simplify the system tests which use this data.
  • Large simplification of the Relax_disp system tests using Hansen's CPMG data. Instead of calculating the R2eff values in the test, these are read from the high error precision results files in test_suite/shared_data/dispersion/Hansen. This allows the model parameters to be consistently found and to be identical between different runs of the test.
  • Added a file which compares the results for the LM63 model with Hansen's CPMG data between all programs. This currently includes relax, NESSY and CPMGFit.
  • Added a printout to the specific_analyses.relax_disp.cpmgfit.translate_model() function.
  • The dispersion system test script for Hansen's CPMG data can now run stand-alone.
  • The log barrier constraint algorithm is now used for the relaxation dispersion optimisation. This is to allow constraints in the absence of gradient target functions. The constraints have been turned on by default in the auto-analysis.
  • Changed the dispersion GUI tab to use the model names from specific_analyses.relax_disp.variables.
  • The spectrum wizard now uses the spectrometer.frequency user function rather than frq.set. The frq.set user function is now called spectrometer.frequency.
  • An upper limit of 200 rad/s has been added to the linear constraints for the R2 dispersion parameters.
  • Fixes for the checking in the Relax_disp.test_hansen_cpmgfit_input system test.
  • The relaxation dispersion auto-analysis now calls the relax_disp.plot_disp_curves user function. This user function is not implemented yet, but will be used to create plots of the dispersion curves.
  • Implemented a basic graph for the relax_disp.plot_disp_curves user function. This simply plots out the νCPMG value or spin-lock field versus the R2eff/R1ρ values from the experiment. The graph of the back calculated R2eff/R1ρ values from the model fit is still to be added.
  • Fix for the linear constraints for the R2eff model. The A and b matrices are no longer set to None, as this kills the auto-analysis or any analysis when constraints are turned on. Now the constraints 0 ≤ R2eff ≤ 200 and I0 ≥ 0 are used.
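  The constraint form used in this fix can be made concrete with a small sketch. The following is a minimal numpy illustration (not the relax implementation) of how the 0 ≤ R2eff ≤ 200 and I0 ≥ 0 constraints can be written in the A·θ ≥ b matrix form expected by constraint-handling optimisers such as minfx, assuming a hypothetical parameter ordering of [R2eff, I0]:

      # Minimal sketch (not the relax code): the R2eff model constraints
      # 0 <= R2eff <= 200 and I0 >= 0 in the A.theta >= b form.
      from numpy import array, float64

      def r2eff_constraints():
          """Return A and b for a hypothetical parameter vector theta = [R2eff, I0]."""
          A = array([
              [ 1.0, 0.0],   #  R2eff >=    0
              [-1.0, 0.0],   # -R2eff >= -200  (i.e. R2eff <= 200)
              [ 0.0, 1.0],   #  I0    >=    0
          ], float64)
          b = array([0.0, -200.0, 0.0], float64)
          return A, b

      if __name__ == "__main__":
          A, b = r2eff_constraints()
          print(A)
          print(b)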
  • Fixes for the peak intensity loading wizard for the frq.set to spectrometer.frequency user function change.
  • Fixes for the backend of the relax_disp.plot_exp_curves user function. This code needed to be updated for the major changes in the relax_disp branch.
  • Fixes for the checks in the Relax_disp.test_exp_fit system test. The r2eff and i0 spin data structure keys are now strings.
  • Two class variables have been added to the dispersion auto-analysis for fast optimisation. This includes variables for the function tolerance and maximum number of iterations, and matches those of the model-free auto-analysis of the dauvergne_protocol module [d'Auvergne and Gooley, 2007] [d'Auvergne and Gooley, 2008b]. These will be used to speed up the test suite.
  • Speed up of the system tests using Flemming Hansen's CPMG data. The grid search increments, function tolerance and maximum number of iterations have all been made looser.
  • Added parameter checks to the Relax_disp.test_hansen_cpmg_data_LM63 system test.
  • Fix for the φex dispersion parameter scaling - the scaling was in the wrong direction.
  • Added a directory of relax results for the truncated high-precision Hansen CPMG R2eff values. This uses the base_pipe.bz2 and r2eff_pipe.bz2 files with the high-precision R2eff errors, and hence can be compared to the NESSY and CPMGFit results.
  • Added the residue :71 results to the lm63_comparison file. This is a summary of the optimisation results using the high-precision R2eff error results for the different dispersion software.
  • Changes to the dispersion auto-analysis write_results() method. This is to output more of the dispersion parameters to text files and 2D grace plots.
  • Created a directory and script in preparation for the relax_disp.sherekhan_input user function.
  • Created the relax_disp.sherekhan_input user function. This includes an icon for the GUI, and the full front and backends.
  • Added a wizard graphic for the relax_disp.sherekhan_input user function.
  • Shifted the core of the model_loop() dispersion method into its own function. The new function specific_analyses.relax_disp.disp_data.loop_cluster() can now be used by other parts of relax. The model_loop() method now yields the data that loop_cluster() yields.
  • Redesign of the relax_disp.sherekhan_input user function to handle spin clustering.
  • Added the ShereKhan results for the high-precision R2eff data for Hansen's CPMG data.
  • Converted the readme file for Flemming Hansen's CPMG data directory to uppercase.
  • Updated the LM63 model comparison table.
  • Modified the dispersion calculate() method for the R2eff values to use the analytic equation. For the R2eff/R1ρ values calculated for the fixed time period dispersion experiments via the calc user function, the very slow and tedious bootstrapping approach has been replaced by the very quick direct error calculation. The two techniques produce the same results as the bootstrap simulation number approaches infinity.
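  For reference, the analytic two-point equation referred to here is the standard fixed relaxation time period expression R2eff = -ln(I1/Iref)/Trelax. A minimal sketch, not the relax library code itself:

      # Two-point R2eff calculation for a fixed relaxation time period experiment:
      # R2eff = -ln(I1/Iref) / Trelax, where Iref is the reference peak intensity,
      # I1 the intensity at the given dispersion point, and Trelax the fixed
      # relaxation time period.
      from math import log

      def calc_two_point_r2eff(relax_time, i_ref, i1):
          return -1.0 / relax_time * log(i1 / i_ref)

      print(calc_two_point_r2eff(relax_time=0.04, i_ref=1.0e6, i1=5.0e5))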
  • Bug fix for the peak intensity error analysis in the dispersion auto-analysis. Now the error analysis is split based on the magnetic field strength. Previously the analysis was a mess with the split often being individual spectra.
  • The proper experiment type is now set for the Relax_disp.test_hansen_trunc_data GUI test.
  • Updated the relax_disp.exp_type user function to be more specific and include more experiment types.
  • Updated the specific_analyses.relax_disp.variables module for the relax_disp.exp_type changes.
  • The relax_disp.relax_time page is now always shown in the peak intensity wizard for the dispersion GUI. This number is needed for the fixed time period experiments as well, to calculate the R2eff/R1ρ values and errors.
  • Fix for the dispersion auto-analysis write_results() method. The i0 parameter text file and 2D Grace file are now only produced for the R2eff model with the exponential curve base data types.
  • Simplified the Relax_disp.test_hansen_trunc_data GUI test. The CR72 model is now deactivated and the grid search size decreased from the default of 21 to 4.
  • Big speed ups of the Relax_disp.test_hansen_trunc_data GUI test. The optimisation function tolerance and maximum number of iterations are now set to the same low precision as the system tests. This involves adding hidden variables to the dispersion GUI analysis.
  • Removed the data pipe name check from the Relax_disp.test_hansen_trunc_data GUI test. This makes no sense as this analysis generates a data pipe for each model (similar to the model-free analysis).
  • Fix for the relax_disp.exp_type call in the Relax_disp.test_exp_fit system test script.
  • Better formatting of the references for the dispersion analytic model equations.
  • Updated the relax_disp.select_model user function frontend for the CR72 dispersion model. This includes fixing the parameter list and the equations presented to the user.
  • Removed the commented out junk model code from the relax_disp.select_model user function frontend.
  • Added the CR72 model equations to the relax library. This is for the Carver and Richards 1972 2-site exchange model covering all time scales.
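  As a rough illustration of the model added above, the following sketches the simplified Carver and Richards 1972 expression for the case R2A0 = R2B0 = R20, with δω in rad/s and νCPMG in Hz. This is an illustrative, unvectorised sketch under those assumptions and not the relax lib.dispersion.cr72 code:

      # Illustrative sketch of the simplified (R2A0 = R2B0 = R20) Carver & Richards
      # 1972 equation for a single dispersion point.
      from math import sqrt, cos, cosh, acosh

      def r2eff_cr72_simplified(r20, pA, dw, kex, nu_cpmg):
          pB = 1.0 - pA
          psi = kex**2 - dw**2
          zeta = -2.0 * dw * kex * (pA - pB)   # only zeta**2 is used below
          root = sqrt(psi**2 + zeta**2)
          D_pos = 0.5 * ( 1.0 + (psi + 2.0*dw**2) / root)
          D_neg = 0.5 * (-1.0 + (psi + 2.0*dw**2) / root)
          eta_pos = sqrt( psi + root) / (2.0 * sqrt(2.0) * nu_cpmg)
          eta_neg = sqrt(-psi + root) / (2.0 * sqrt(2.0) * nu_cpmg)
          return r20 + kex/2.0 - nu_cpmg * acosh(D_pos*cosh(eta_pos) - D_neg*cos(eta_neg))

      # Example: one dispersion point at nu_cpmg = 100 Hz.
      print(r2eff_cr72_simplified(r20=10.0, pA=0.9, dw=2000.0, kex=1500.0, nu_cpmg=100.0))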
  • Initial implementation of the CR72 target function.
  • Import fix for the lib.dispersion.cr72 module.
  • Fixes to the specific_analyses.relax_disp modules to add support for the CR72 dispersion model. The parameters for the CR72 model are now both correct and correctly handled.
  • Fix for the spin container list of parameters for the CR72 model.
  • The CR72 dispersion model equations are now more robust against math domain errors. This is for the trigonometric functions which cannot handle certain input values.
  • Renamed the file for comparing different dispersion software with Flemming Hansen's CPMG data.
  • Added the initial results of the CR72 model in relax for Flemming Hansen's truncated CPMG data.
  • Simplified the pA ≥ pB constraint in the dispersion linear_constraints() function.
  • Fixes for the dispersion linear_constraints() function. The indices were being incorrectly handled - the i and k index should be one and the same parameter index.
  • Added support for the CR72 or 'Full_CPMG' model to the relax_disp.cpmgfit_input user function.
  • Added the results for the CR72 model optimisation in CPMGFit using Flemming Hansen's truncated CPMG data.
  • Added the CR72 model results to the software comparison document for Hansen's CPMG data.
  • Improvements for the φex and δω relaxation dispersion model parameters. These are now stored with the units of ppm² and ppm respectively. The conversion to (rad/s)² and rad/s units respectively is now spin specific, allowing mixed spin types (¹H, ¹³C, ¹⁵N, etc.) to be analysed simultaneously.
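  A hedged sketch of the spin-specific unit conversion described above, assuming the proton spectrometer frequency is stored in Hz and using approximate gyromagnetic ratios; the exact constants and storage details in relax may differ:

      # Spin-specific ppm -> rad/s conversion factor for the X nucleus.
      from math import pi

      def ppm_to_rad_per_sec_factor(proton_frq_hz, gamma_x, gamma_h):
          """Factor converting a value in ppm to rad/s for the X nucleus."""
          return 2.0 * pi * proton_frq_hz * abs(gamma_x / gamma_h) * 1e-6

      # Example for 15N on a 600 MHz (1H) spectrometer (approximate gamma values).
      factor = ppm_to_rad_per_sec_factor(600.0e6, gamma_x=-2.7126e7, gamma_h=2.6752e8)
      dw_rad_s = 3.0 * factor          # a delta_omega of 3 ppm in rad/s
      phi_ex_rad_s2 = 0.5 * factor**2  # a phi_ex of 0.5 ppm^2 in (rad/s)^2
      print(dw_rad_s, phi_ex_rad_s2)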
  • Updated the relax results for Hansen's CPMG data for the recent φex and δω changes.
  • Fixes for the CPMGFit results in the software comparison table for Hansen's CPMG data.
  • Fix for the grid search setup for the pA dispersion parameter. As pA > pB, the region from 0.0 to 0.5 does not need to be searched.
  • The back calculated R2eff values are now stored for dispersion analysis after minimisation.
  • Fix for the CR72 model equation in lib.dispersion.cr72.r2eff_CR72(). The eta scaling factor was incorrect.
  • Updated the relax results for the truncated CPMG data from Flemming Hansen. This is for the recent fixes of the CR72 model equations. Now relax produces identical results to ShereKhan for the LM63 and CR72 models.
  • Created a directory for holding relaxation dispersion sample scripts.
  • Added the model for no chemical exchange relaxation to the dispersion analysis.
  • Updated the NESSY log file for its improved printouts. These printouts allow the R20 values to be accessed.
  • Another update of the NESSY log for the improved and more detailed printouts.
  • And again, another update of the NESSY log.
  • Added the relax results for the No Rex model.
  • Updated the software comparison tables for the model of no exchange. This is for Flemming Hansen's truncated CPMG data.
  • Fix for the Relax_disp system tests using Flemming Hansen's truncated CPMG data. The nuclear isotope is now being set.
  • Increased the grid size for the hansen_data.py system test script. This is needed to allow the parameters to be reliably found.
  • Fixes for the checks and printouts of the Relax_disp.test_hansen_cpmg_data_LM63 system test.
  • Updated some NESSY results in the software comparison document.
  • Fix for the CPMGFit batch file creation. The command line options are now correct and output is redirected to output files.
  • Updated the CPMGFit batch file.
  • Created the Relax_disp.test_hansen_cpmg_data_auto_analysis system test. This is designed to fully test the dispersion auto-analysis for CPMG-type data.
  • Fix for the Relax_disp.test_hansen_cpmgfit_input system test. This is for the recent CPMGFit batch file changes.
  • Better checking of optimisation in the Relax_disp system tests. This affects the Relax_disp.test_hansen_cpmg_data_LM63 and Relax_disp.test_hansen_cpmg_data_CR72 system tests. Instead of using the auto-analysis, these tests now set the initial parameters close to the minimum, skip the grid search, and perform a low precision optimisation to reach the minimum. This is important because the low quality grid search and optimisation cannot always find the real minimum.
  • Created the lib.dispersion.equations.calc_two_point_r2eff_err() function. This complements the lib.dispersion.equations.calc_two_point_r2eff() function and is used by the dispersion calculate() method to abstract the mathematics.
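  The direct error formula paired with the two-point calculation is the standard propagation of the peak intensity errors through R2eff = -ln(I1/Iref)/Trelax. A minimal sketch, not necessarily identical to the relax function of the same name:

      # Error propagation for the two-point R2eff value:
      # sigma_R2eff = sqrt((sigma_ref/Iref)^2 + (sigma_1/I1)^2) / Trelax.
      from math import sqrt

      def calc_two_point_r2eff_err(relax_time, i_ref, i1, i_ref_err, i1_err):
          return sqrt((i_ref_err / i_ref)**2 + (i1_err / i1)**2) / relax_time

      print(calc_two_point_r2eff_err(0.04, 1.0e6, 5.0e5, 1.0e4, 1.0e4))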
  • Updated the relax_disp.select_model user function docstring for the R2eff error analysis. This properly describes how the R2eff/R errors are calculated for the fixed relaxation time period experiments.
  • Docstring fixes for the lib.dispersion.equations module.
  • Expanded the number of model list variables in specific_analyses.relax_disp.variables. This is to include lists which are specific to CPMG-type and R1ρ-type experiments.
  • Added the new M61 model to the specific_analyses.relax_disp.variables module. This is the Meiboom 1961 model for 2-site fast exchange for R1ρ-type experiments.
  • Added the M61 model to the relax_disp.select_model user function frontend. This is the Meiboom 1961 model for 2-site fast exchange for R1ρ-type experiments.
  • Added the M61 model equations to the relax library. This is for the Meiboom 1961 2-site fast exchange model for R1ρ-type experiments.
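  The M61 expression added here is the on-resonance fast exchange form R1ρ = R1ρ' + φex·kex/(kex² + ω1²), with φex = pA·pB·δω² and ω1 the spin-lock field strength in rad/s. A one-line sketch for a single dispersion point, illustrative rather than the relax library code:

      # Meiboom 1961 on-resonance 2-site fast exchange expression.
      def r1rho_m61(r1rho_prime, phi_ex, kex, omega1):
          return r1rho_prime + phi_ex * kex / (kex**2 + omega1**2)

      print(r1rho_m61(r1rho_prime=5.0, phi_ex=50000.0, kex=1500.0, omega1=1000.0))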
  • Created the M61 2-site fast exchange dispersion model target function. This is for the Meiboom 1961 model for 2-site fast exchange for R1ρ-type experiments. The code for the func_M61() method was copied without modification from the func_LM63() method.
  • Added support for the R1ρ-type experiments to the relaxation dispersion analysis in the GUI. This involves using a different model list for these experiments compared to the CPMG-type experiments.
  • Updated the relaxation dispersion GUI to handle the current set of experiment types.
  • Fix for the Relax_disp.test_hansen_cpmg_data_auto_analysis system test. The correct model list is now being used - the R1ρ-type experiments should not be included.
  • Fix for the missing import of the lib.dispersion.equations.calc_two_point_r2eff_err() function.
  • Added support for the M61 model to the relax_disp.select_model user function back end. This is for the Meiboom 1961 2-site fast exchange model for R1ρ-type experiments.
  • Another fix for the Relax_disp.test_hansen_cpmg_data_auto_analysis system test. The correct model list is finally being used.
  • Fix for the lib.dispersion.equations.calc_two_point_r2eff_err() function. The variables were incorrectly named.
  • Added support for setting the spin isotope information in the dispersion GUI. A new Text_ctrl element has been added just after the spin system GUI element. This displays a list of all the spin isotopes currently defined and is updated after every GUI user function call. The button of the element launches the spin.isotope user function. The spin isotope information is now checked for prior to executing the GUI analysis and added to the missing list to present to the user when blocking the execution of the analysis. The dispersion GUI test has been updated to use this new element.
  • Added support for model selection to the relaxation dispersion specific analysis package. This involved redesigning the model_loop() method. Instead of yielding both the spin containers and the spin IDs, now only spin IDs are yielded. This is important as the model loop is used independently of the data pipes. Hence the spin containers cannot be yielded as multiple pipes are compared within the model loop. The auxiliary method _spin_ids_to_containers(spin_ids) has been added to obtain the list of spin containers from the list of spin IDs. To support model selection, the methods duplicate_data(), model_desc() and model_statistics() have been added, and model_type() aliased to the common _model_type_local() method.
  • Expanded the relaxation dispersion auto-analysis. A final step of model selection has been added to select between the different models for each spin cluster. This is stored in the 'final' data pipe, and its results output via the write_results() method.
  • The model selection technique can now be changed in the dispersion auto-analysis.
  • The error when selecting a non-existent model using relax_disp.select_model is now more informative.
  • Model selection in the dispersion auto-analysis is only performed if 2 or more models are present. Excluding the R2eff model, if only 0 or 1 models are optimised, then model selection is skipped and a warning is given. This avoids tracebacks in the model_selection user function.
  • Added some synthetic on-resonance R1ρ data to the test suite. This is in the form of Sparky peak list files containing two spin systems.
  • Expanded the synthetic on-resonance R1ρ test suite data. The data now consists of a full set of dispersion curves for the M61 model.
  • Added a reference to the synthetic on-resonance R1ρ test suite data. The first ncyc1 data point now has a relaxation time period of zero, hence it can be used as the reference for a fixed time period experiment.
  • The reference spectra can now be set in the relax_disp.spin_lock_field user function. By setting the field to None, the reference spectrum for a fixed relaxation time period experiment type can now be specified. This mimics the behaviour of the relax_disp.cpmg_frq user function.
  • Added some error checking to the specific_analyses.relax_disp.disp_data.average_intensity() function. This is for better feedback to the user in case they have not set up their data correctly.
  • The relax_disp.select_model user function now operates without the spectrometer frequency being set. The special loop_frq() function is now used as this can handle missing spectrometer frequency information.
  • The find_intensity_keys() function can now handle the reference spectrum. This function in the specific_analyses.relax_disp.disp_data module was failing if the relaxation time period for the reference spectrum was missing. Time information shouldn't be needed for the reference, so is no longer checked.
  • The dispersion specific optimisation methods can now handle missing spectrometer information.
  • The return_index_from_frq() now handles missing frequency information. This is in the specific_analyses.relax_disp.disp_data module.
  • Better support for missing frequency information in the specific_analyses.relax_disp.disp_data module. This is in the return_index_from_frq() function which now returns an index of 0, and in return_r2eff_arrays() which skips calculating the frequency information.
  • The dispersion disassemble_param_vector() function now handles missing spectrometer information. The loop_frq() function replaces direct looping over cdp.spectrometer_frq_count.
  • Variable renaming in the lib.dispersion.m61 module. The variable names are now more suited to R1ρ-type data, rather than CPMG-type data.
  • Fix for the M61 model target function. The spin-lock fields need to be used, not the CPMG frequencies.
  • Created the Relax_disp.test_r1rho_on_res_fixed_time_m61 system test. This checks the R1ρ-type experiment with a fixed relaxation time period using the R2eff and M61 models. It uses the auto-analysis for this, and the 'r1rho_on_res' synthetic relaxation data.
  • Created the Relax_disp.test_r1rho_on_res_exponential_m61 system test. This is identical to the Relax_disp.test_r1rho_on_res_fixed_time_m61 system test except that the full exponential curves are used rather than the 2-point fixed time approach.
  • Python 3 fixes for the relaxation dispersion parameter Grace strings.
  • Python 3 fixes for the modules of the specific_analyses.relax_disp package.
  • Fix for a bug preventing the optimisation of the dispersion models.
  • Fixes for the file permission setting on the CPMGFit batch script. The correct file mode is now set for Unix-based systems.
  • Python 3 fixes for the relax_disp.cpmg_frq and relax_disp.spin_lock_field user functions. The sorting of lists with None is not supported by Python 3, so this has to be carefully handled.
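  One general way to handle such None-containing lists under Python 3 (which refuses to order None against numbers) is sketched below; this illustrates the problem rather than reproducing the relax code:

      # Sort the numeric field values and place the reference (None) entry first,
      # which works under both Python 2 and Python 3.
      values = [800.1, None, 600.5, 750.2]
      sorted_values = sorted(v for v in values if v is not None)
      if None in values:
          sorted_values.insert(0, None)
      print(sorted_values)   # [None, 600.5, 750.2, 800.1]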
  • Removed the grid search size check from the dispersion _grid_search_setup() method. This is performed by minfx anyway, and the code was incompatible with Python 3.
  • Fix for the Relax_disp.test_hansen_cpmgfit_input system test. The frequencies for the CPMGFit input files now are only written to 10 places. This is for Python 2 vs. 3 consistency.
  • Python 3 fix for the relax_disp.cluster user function.
  • Fix for the Grace plots created by the relax_disp.plot_disp_curves user function. The data set from each frequency is now a separate set in the G0 graph.
  • Improvements to the relax_disp.plot_disp_curves user function. The back-calculated R2eff/R1ρ values are now included in the plot as separate sets. In addition, the residuals have also been added to allow for a visual statistical comparison.
  • More improvements to the relax_disp.plot_disp_curves user function. The data sets now have labels, and the residuals have errors set to those of the R2eff/R1ρ data.
  • More improvements to the relax_disp.plot_disp_curves user function. The graph axes maximum is now set to a reasonable value for the given data.
  • Added the No Rex model to the relax script for optimising Flemming Hansen's CPMG data.
  • The isotope type is now set in the relax script for optimising Flemming Hansen's CPMG data.
  • Shifted the _spin_ids_to_containers() method to the disp_data.spin_ids_to_containers() function.
  • Fix for the relax_disp.sherekhan_input user function. The loop_cluster() function no longer returns spin containers.
  • Fix for the r2eff_calc.py script for calculating R2eff values from Flemming Hansen's CPMG data.
  • Added a check to the dispersion specific minimise() function for the spectrometer field strength. This is essential in all dispersion models to convert between ppm and rad/s units, or ppm² and (rad/s)² for the φex parameter.
  • The r1rho_on_res_m61.py dispersion system test script now sets the spectrometer frequency information.
  • Removed cdp.model as this makes no sense - a different model can be used per spin cluster. Now the variable cdp.model_type is used to identify the R2eff model. For all other dispersion models this variable is set to 'Disp'.
  • Added a log file for the data generation script for the r1rho_on_res dispersion data.
  • Fixes for the parameter checks in the system tests for the r1rho_on_res synthetic data. This includes both the Relax_disp.test_r1rho_on_res_fixed_time_m61 and Relax_disp.test_r1rho_on_res_exponential_m61 tests.
  • Fixes for the lib.dispersion.m61.r2eff_M61() function.
  • Increased the precision of the Sparky peak lists for the r1rho_on_res dispersion test data. All peak intensities are now 1000 times bigger. As the values are integers in the Sparky files, the previous values were too truncated for the system tests to properly optimise and find the original parameters.
  • Speed up of the r1rho_on_res_m61.py system test script. The optimisation precision is now much lower. And the peak intensity errors now have been scaled by 1000 just as the base data was in the previous commit.
  • Improvements for the parameter checks in the system tests for the r1rho_on_res synthetic data. This includes both the Relax_disp.test_r1rho_on_res_fixed_time_m61 and Relax_disp.test_r1rho_on_res_exponential_m61 tests.
  • Clustering was accidentally turned off in the r1rho_on_res_m61.py system test script.
  • Created the specific_analyses.relax_disp.disp_data.count_frq() function. This is for determining the number of spectrometer frequencies present, even if no data has been defined.
  • Loosened the checks for the Relax_disp.test_hansen_cpmg_data_CR72 system test.
  • Completely redesigned how parameters are handled in the relaxation dispersion analyses. The key concept is that everything revolves around the new loop_parameter() function. This is a generator function which loops over the parameters of a given cluster, yielding all the information required to access the parameter. The other functions of the parameters module use loop_parameter() to sequentially handle each parameter. This allows for huge simplifications of these functions.
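  The generator idea can be illustrated with a purely hypothetical sketch; the function name matches the changelog item, but the yielded fields and the split between per-spin and cluster-wide parameters below are illustrative only and do not reproduce the relax API:

      # Hypothetical sketch of the generator-function concept: loop once over the
      # parameters of a spin cluster and let all other parameter-handling code
      # consume the yielded information.
      def loop_parameter(model, spin_ids, frq_count):
          """Yield (name, param_index, spin_index, frq_index) for a spin cluster."""
          index = 0
          # Per-spin, per-field R20 rates (the real grouping is model dependent).
          for spin_index in range(len(spin_ids)):
              for frq_index in range(frq_count):
                  yield 'r2', index, spin_index, frq_index
                  index += 1
          # Cluster-wide parameters of the chosen model (illustrative lists only).
          global_params = {'LM63': ['phi_ex', 'kex'], 'CR72': ['pA', 'dw', 'kex']}
          for name in global_params.get(model, []):
              yield name, index, None, None
              index += 1

      for info in loop_parameter('CR72', spin_ids=[':70@N', ':71@N'], frq_count=2):
          print(info)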
  • Fixes for the Relax_disp.test_hansen_cpmg_data_auto_analysis system test. One of the models for one spin now optimises completely and the checks have all been loosened.
  • Fixes for the dispersion specific model_statistics() method. This now handles spin clustering correctly.
  • Updated the results of relax's analysis of the truncated CPMG data from Flemming Hansen.
  • Updates for the model variable docstrings.
  • Added the M61 skew model to the specific_analyses.relax_disp.variables module. This is the Meiboom 1961 model for skewed populations (pA ≫ pB). This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Added the M61 skew model to the relax_disp.select_model user function frontend. This is the Meiboom 1961 model for skewed populations (pA ≫ pB). This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Fixes for the spacing after some equations in the relax_disp.select_model docstring.
  • Simplifications and comment fixes in the lib.dispersion.m61.r2eff_m61() function.
  • Renamed the lib.dispersion.m61.r2eff_M61() function to r1rho_M61().
  • Added the M61 skew model equations to the relax library. This is the Meiboom 1961 on-resonance 2-site model for skewed populations (pA ≫ pB). This commit follows step 3 of the relaxation dispersion model addition tutorial.
  • Created the M61 skew model target function. This is the Meiboom 1961 on-resonance 2-site model for skewed populations (pA ≫ pB). This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Added support for the skewed condition (pA ≫ pB) to the specific_analyses.relax_disp.parameters module. This is currently done by constraining pA to be greater than 0.85.
  • Added support for the M61 skew model to the relax_disp.select_model user function back end. This is the Meiboom 1961 on-resonance 2-site model for skewed populations (pA ≫ pB). This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • Big speed ups of the lib.dispersion modules. Many replicated calculations have been shifted outside of the dispersion point loop, as these only need to be calculated once per function call. Some if statements have consequently been simplified.
  • Renamed the lib.dispersion.equations module to lib.dispersion.two_point.
  • Renamed the r1rho_on_res dispersion test suite data directory to r1rho_on_res_m61.
  • Created test data for the M61 skew R1ρ model. This is the Meiboom 1961 on-resonance 2-site model for skewed populations (pA ≫ pB). This commit follows step 7 of the relaxation dispersion model addition tutorial.
  • Created the Relax_disp.test_r1rho_on_res_fixed_time_m61b system test. This is for the Meiboom 1961 on-resonance 2-site model for skewed populations (pA ≫ pB). This commit follows step 7 of the relaxation dispersion model addition tutorial.
  • Small simplification of the lib.dispersion.m61b module.
  • Fix for the specific_analyses.relax_disp.disp_data.return_value_from_frq_index() function. The cdp.spectrometer_frq_list list structure should be used rather than the cdp.spectrometer_frq dictionary.
  • Added a printout at the end of the optimisation of the final dispersion parameter values.
  • Modified the optimisation printout for better formatting.
  • Increased the precision of the hansen_data.py relaxation dispersion system test script. This actually speeds up the test, as the Monte Carlo simulations are significantly sped up when the CR72 model optimises to the solution.
  • Updates for the pA dispersion parameter optimisation constraints. The parameter is now limited to be between pB and 1. In the case of the limit pA ≫ pB, then instead the constraint is between 0.85 and 1.
  • Updated the Relax_disp system tests. This is for the recent precision change and constraint changes.
  • Fixes for the grid search for the M61 skew dispersion model. The pA parameter search is now between 0.85 and 1.
  • Fixes for the func_M61b() dispersion target function. This is the Meiboom 1961 on-resonance 2-site model for skewed populations (pA ≫ pB).
  • Small changes to the r1rho_on_res_m61b dispersion test data. One R20 rate has been increased.
  • Completed the lib.dispersion.m61.r1rho_M61() function. Now the R1 relaxation rate and rotating frame tilt angle are correctly handled. This is not used in the target functions as support for the R1 and offset is not yet implemented.
  • Added the DPL94 model to the specific_analyses.relax_disp.variables module. This is the David, Perlman and London 1994 R1ρ 2-site fast exchange model. This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Added the DPL94 model to the relax_disp.select_model user function frontend. This is the David, Perlman and London 1994 R1ρ 2-site fast exchange model. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Updates to the Relax_disp.test_r1rho_on_res_fixed_time_m61b system test.
  • Added the DPL94 model equations to the relax library. This is the David, Perlman and London 1994 R1ρ 2-site fast exchange model. This commit follows step 3 of the relaxation dispersion model addition tutorial.
  • Created the DPL94 model target function. This is the David, Perlman and London 1994 R1ρ 2-site fast exchange model. This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Added support for the DPL94 model to the relax_disp.select_model user function back end. This is the David, Perlman and London 1994 R1ρ 2-site fast exchange model. This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • Created the Relax_disp.test_r1rho_on_res_fixed_time_dpl94 system test. This is the David, Perlman and London 1994 R1ρ 2-site fast exchange model. This commit follows step 7 of the relaxation dispersion model addition tutorial.
  • Added the IT99 model to the specific_analyses.relax_disp.variables module. This is the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB. This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Added the IT99 model to the relax_disp.select_model user function frontend. This is the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Updated the model lists of the dispersion analyses GUI element. This adds the IT99 CPMG-type model and the DPL94 and M61b R1ρ-type models.
  • Fixes for the IT99 model description in the relax_disp.select_model user function. This is the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Added the IT99 model equations to the relax library. This is the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB. This commit follows step 3 of the relaxation dispersion model addition tutorial.
  • Added the it99 module to the lib.dispersion package __all__ list. This is the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB. This commit follows step 3 of the relaxation dispersion model addition tutorial.
  • Created the IT99 model target function. This is the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB. This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Fix for the lib.dispersion.it99 module. This is the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB. This commit follows step 3 of the relaxation dispersion model addition tutorial.
  • Added the support for the pA·δω² parameter 'padw2' to the dispersion specific analysis. This is needed for the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB. This commit follows step 5 of the relaxation dispersion model addition tutorial.
  • Added support for the IT99 model to the relax_disp.select_model user function back end. This is the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB. This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • Added the support for the tex parameter (tex = 1/(2kex)) to the dispersion specific analysis. This is needed for the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB. This commit follows step 5 of the relaxation dispersion model addition tutorial.
  • Added support for the IT99 model to the relax_disp.cpmgfit_input user function. This is the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB.
  • Fix for the relax_disp.cpmgfit_input user function for when no directory is given. This was causing tracebacks.
  • Fix for the LM63 model for the relax_disp.cpmgfit_input user function. The grid search was incorrectly set up - the parameter is Tau, not tex.
  • Fixes for the IT99 model for the relax_disp.cpmgfit_input user function.
  • Changed the CPMGFit grid search options for the IT99 model in the relax_disp.cpmgfit_input user function.
  • Fix for the setup of the relaxation dispersion target functions for the IT99 model.
  • Added the relax optimisation results for the IT99 model applied to Flemming Hansen's CPMG data.
  • Removed an unnecessary calculation from the lib.dispersion.it99.r2eff_IT99() function.
  • Added the IT99 model to the software comparison table for Hansen's CPMG data. This includes only the results from relax (and possibly not fully debugged results at that).
  • Replaced '-' with 'N/A' if the software is missing the model. This is for the software comparison table using Flemming Hansen's CPMG data.
  • Updated the CPMGFit results for the IT99 model using Flemming Hansen's CPMG data.
  • Fixed the scaling of the parameter tex.
  • Fixes for the lib.dispersion.it99 module. This is mainly because the omega_1eff parameter was not being correctly converted from the nu_cpmg values.
  • Updated the relax results for Flemming Hansen's CPMG data for the IT99 model fixes.
  • Fixes for the relax_disp.cpmgfit_input user function for the IT99 model grid search options.
  • Updated the CPMGFit results for the IT99 grid search fixes of the last commit.
  • Basic fix for the checks of the Relax_disp.test_hansen_cpmgfit_input user function. The 'tex' parameter is now set as 'Tau'.
  • Disabled the Relax_disp.test_r1rho_on_res_fixed_time_m61b system test as the M61b model is rubbish. The model cannot be properly optimised as the parameters are not independent of each other.
  • Fixes for the dispersion specific code. The Grace graph code of lib.software.grace no longer accepts the axis min and max arguments.
  • Created the Relax_disp.test_bug_20889_multi_col_peak_list system test to catch bug #20889.
  • Fixes for the Relax_disp.test_bug_20889_multi_col_peak_list GUI test.
  • Fixes for the checks of the Relax_disp.test_bug_20889_multi_col_peak_list GUI test. Intensity errors will not have been calculated yet, and the structure is called baseplane_rmsd anyway.
  • Fix for the Relax_disp.test_bug_20889_multi_col_peak_list GUI test. The peak intensity wizard _ok() method is now called to terminate the wizard. Otherwise this causes the subsequent GUI test which tries to access the peak intensity wizard to fail.
  • Created the Relax_disp.test_hansen_cpmg_data_IT99 system test. This is for testing the Ishima and Torchia 1999 2-site model for all timescales with pA ≫ pB. This commit follows step 7 of the relaxation dispersion model addition tutorial.
  • Initialised the relaxation dispersion chapter in the relax manual.
  • Added 600x600 pixel version of the relaxation dispersion analysis graphic. This is for use in the relax manual.
  • Fix for the definition of the \Ronerho LaTeX command for the relax manual.
  • Added EPS versions of the nessy and relax_disp 128x128 icons for the relax manual.
  • Added icons of all the sizes for ShereKhan.
  • Updated the relaxation dispersion 128x128 EPS icons to be the correct size and colour.
  • Updated the relaxation dispersion analysis EPS graphic to be the correct size and colour.
  • Copied the tutorial for adding dispersion modes to relax into the manual. This was copied from http://article.gmane.org/gmane.science.nmr.relax.devel/3907.
  • Editing of the tutorial for adding dispersion models in the relax manual.
  • Edits of the relax_disp.select_model user function docstring.
  • Added all of the contents of the relax_disp.select_model user function docstring to the manual.
  • The relaxation dispersion parameters are now defined in the main manual LaTeX file.
  • Added a couple of sentences about bit rot to the dispersion chapter of the relax manual. This is to the test suite part of the tutorial on adding new dispersion models.
  • The dispersion auto-analysis now saves the final program state before terminating.
  • Shifted the dispersion specific Grace plotting code into specific_analyses.relax_disp.disp_data. The private _plot_disp_curves() and _plot_exp_curves() methods of the analysis specific object are now public functions of the specific_analyses.relax_disp.disp_data module.
  • Removed the state.save user function calls from the relax scripts for Hansen's CPMG data.
  • Updated the model lists for the relax scripts for Flemming Hansen's CPMG data.
  • Added a sample script for the relaxation dispersion analysis of CPMG-type data.
  • Added a preliminary icon set for spin clustering.
  • The relax_disp.cluster user function GUI menu entry now uses the cluster icon.
  • Created a very basic GUI element for the dispersion analysis for clustering. This is simply to make this feature more obvious. The button just launches the relax_disp.cluster user function.
  • Modified the experiment type descriptions in the dispersion GUI.
  • Shifted the spin cluster GUI element to be just after the spin system GUI element. This is simply a more logical placement.
  • Modified the title of the dispersion auto-analysis GUI element, removing the 'Setup for' text.
  • Removed some unused imports from the CPMG dispersion analysis sample script.
  • Added the CPMG dispersion analysis sample script to the relax manual.
  • Epydoc docstring fixes for all of the modules of the lib.dispersion package.
  • Alphabetical ordering of imports.
  • Shifted the core of the relaxation dispersion API object into its own api module. This is to simplify the relax import cascade - by removing the code from the specific_analyses/relax_disp/__init__.py file, the import of the package no longer results in the imports of other relax modules and packages.
  • Expanded the modelling of dispersion data section of the relax user manual.
  • Expansion of the modelling of dispersion data section of the relax user manual.
  • The relaxation dispersion auto-analysis now outputs text and Grace files for all parameters. This is in response to bug #20917 submitted by Troels Linnet.
  • The Monte Carlo simulations now generate parameter errors for the relaxation dispersion analysis. The simulation index was being ignored, hence the input data was never the randomised data and all errors were zero.
  • Removed many decimal points from the MHz value in the Grace plots from relax_disp.plot_disp_curves.
  • Added support for converting between kex and tex, and pA and pB for the dispersion analysis. This is performed by the new specific_analyses.relax_disp.parameters.param_conversion() function. For this, most of the code from the assemble_param_vector() function has been shifted into get_value(), and most of disassemble_param_vector() into set_value(). The dispersion analysis now also has a custom sim_init_values() method to handle these parameters.
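  The conversions themselves are simple, assuming the definitions tex = 1/(2·kex) and pA + pB = 1 used elsewhere in this changelog. A minimal sketch, not the relax param_conversion() code itself:

      # Auxiliary parameter conversions for the dispersion analysis.
      def param_conversion(pA=None, kex=None):
          pB = 1.0 - pA if pA is not None else None
          tex = 1.0 / (2.0 * kex) if kex else None
          return pB, tex

      print(param_conversion(pA=0.9, kex=1000.0))   # (0.1, 0.0005)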
  • Added support for calculating auxiliary parameter errors for the dispersion analysis. This is via the monte_carlo.error_analysis user function. The errors for the parameter pairs kex-tex and pA-pB for the non-model parameter are now calculated as well.
  • Fix for the dispersion auto-analysis - pA and pB parameters are no longer output for the IT99 model. These are not parameters of this model.
  • Updated the relax results for Flemming Hansen's truncated CPMG data for all the recent changes.
  • Fix for bug #2091 - Suggestion for Python script for PNG/EPS/SVG conversion of grace files. Troels Linnet provided this patch, which was developed during work on a Windows 7 system. The patch provides a grace2images.py file in each folder that specific_analyses/relax_disp/disp_data.py writes Grace files to. It is created in plot_disp_curves(dir=None, force=None), which calls the lib.software.grace.script_grace2images() function. The conversion script can be executed on Linux and Windows, provided the PATH to xmgrace has been set. It looks in a folder for Grace files ending in *.agr and by default converts them to PNG; EPS and SVG conversion is also possible, and more options such as PDF could be added. The conversion depends on the xmgrace compilation, so PNG conversion is intended for fast inspection of the graphs in a folder and EPS for further external conversion to PDF, etc. The patch, the output file, and a small test script were attached. Still missing is making the file executable from within relax, so that the script can be run directly on Linux.
  • Mac OS X bug fix for the new analysis GUI wizard. The blank button is now using the blank_150x150.png file instead of no image, preventing nasty wxPython bugs from appearing on that system.
  • Fix for bug #20917. The problem is that the Grace files for each spin system are not created by the relax_disp.plot_disp_curves user function as the ':' character cannot be placed in a file name in MS Windows. All of the file name from the ':' onwards is lost. The solution is to replace each of the characters '#:@' in the spin ID string with '_'.
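  A minimal sketch of the file name sanitisation described in this fix:

      # Replace the '#', ':' and '@' characters in the spin ID with '_' so that
      # the per-spin Grace file name is valid on MS Windows.
      def sanitise_spin_id(spin_id):
          for char in '#:@':
              spin_id = spin_id.replace(char, '_')
          return spin_id

      print(sanitise_spin_id(':70@N'))   # '_70_N'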
  • Another update of the relax results for Flemming Hansen's truncated CPMG data. This includes the grace2images.py script creation contributed by Troels Linnet and the change of the file name of the per-spin dispersion curves.
  • The value checks in the Relax_disp.test_hansen_cpmg_data_auto_analysis system test are now less precise. This is to allow the tests to pass on certain MS Windows systems.
  • Fix for the setting of the execute permissions on the grace2images.py scripts. The problem was identified in the post at http://thread.gmane.org/gmane.science.nmr.relax.devel/3953/focus=4000. This is within the relax_disp.plot_disp_curves user function after the grace2images.py script has been created. The commit matches the changes from trunk for the Modelfree4 batch script.
  • Shifted from argparse to optparse in the grace2images.py scripts from relax_disp.plot_disp_curves. This is associated with bug #20916 and the change suggested in the post http://thread.gmane.org/gmane.science.nmr.relax.devel/3953/focus=4000. The argparse module is only available from Python 2.7.3 (the version with many Python 3 features backported) and Python ≥ 3.2. The module has been replaced with the similar optparse module, as used by relax, which is available in all Python versions supported by relax.
  • Updated the grace2images.py scripts created by the relax_disp.plot_disp_curves user function. This was discussed in bug #20916 and the change suggested in the post http://thread.gmane.org/gmane.science.nmr.relax.devel/3953/focus=4000. The image type can now be given on the command line in either lower or upper case.
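  In the spirit of these grace2images.py scripts, the following optparse-based sketch converts all *.agr files in the current directory via the xmgrace hardcopy options, assuming xmgrace is on the PATH; it is illustrative only and not the relax-generated script:

      # Convert all Grace *.agr files in the current directory to images.
      from glob import glob
      from optparse import OptionParser
      from subprocess import call

      parser = OptionParser()
      parser.add_option('-t', '--type', default='PNG', help="image type: PNG, EPS or SVG")
      options, args = parser.parse_args()
      img_type = options.type.upper()   # accept both lower and upper case

      for agr_file in glob('*.agr'):
          out_file = agr_file[:-4] + '.' + img_type.lower()
          call(['xmgrace', '-hdevice', img_type, '-hardcopy', '-printfile', out_file, agr_file])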
  • Converted the relaxation dispersion chapter of the user manual to the lstlisting environment. This matches the changes occurring within the trunk.
  • Added an EPS version of the 128x128 cluster icon for the user manual.
  • Renamed the LaTeX file for the relaxation dispersion chapter of the user manual.
  • Completed the script UI section of the relaxation dispersion chapter of the user manual. The sample script is now fully explained.
  • Added a demonstration of why the Ishima and Torchia 2005 error formula is incorrect. The script test_suite/shared_data/dispersion/error_testing/simulation.py has been added to simulate the fixed relaxation time period error propagation. This produces the test_suite/shared_data/dispersion/error_testing/error_plot.agr Grace graph. The formula, graph and a description explaining everything have been added to the relax manual.
  • Fix of the two-point dispersion error formula in the docs. This includes the relax_disp.select_model user function docstring and the relax manual.
  • Loosened a parameter check in the Relax_disp.test_hansen_cpmg_data_IT99 system test to pass on certain Linux systems.
  • Small edit of the legend of the relaxation dispersion figure showing that the Ishima & Torchia 2005 error formula is wrong.
  • Added Paul Schanda's code for the numerical solution to the Bloch-McConnell equations for 2-sites. This is specifically code which uses complex conjugate matrices. The code was submitted at http://thread.gmane.org/gmane.science.nmr.relax.devel/4132.
  • Made the lib.dispersion.ns_2site_star module importable in the absence of Scipy.
  • Polished the lib.dispersion.ns_2site_star module docstring.
  • Added some code missing from the lib.dispersion.ns_2site_star module. This code was accidentally not copied from http://thread.gmane.org/gmane.science.nmr.relax.devel/4132.
  • Significant speed ups of the lib.dispersion.ns_2site_star.r2eff_ns_2site_star() function. Replicated calculations have been minimised.
  • Added the NS 2-site star model to the specific_analyses.relax_disp.variables module. This is the model of the numerical solution for the 2-site Bloch-McConnell equations using complex conjugate matrices. This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Added the lib.dispersion.ns_2site_star module name to the package __all__ list.
  • Updated the lib.dispersion.ns_2site_star module with additional information from Paul Schanda. The details come from http://thread.gmane.org/gmane.science.nmr.relax.devel/4132/focus=4135. The exchange-free R2 value parameter names have been changed to match the convention of the other lib.dispersion modules.
  • Added the NS 2-site star model to the relax_disp.select_model user function frontend. This is the model of the numerical solution for the 2-site Bloch-McConnell equations using complex conjugate matrices. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Fix for the non-ASCII character '\xe2' in the lib.dispersion.ns_2site_star module.
  • Created the NS 2-site star model target function. This is the model of the numerical solution for the 2-site Bloch-McConnell equations using complex conjugate matrices. This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Added support for the R2B0 parameter as required by the NS 2-site star model. This is the model of the numerical solution for the 2-site Bloch-McConnell equations using complex conjugate matrices. This commit follows step 5 of the relaxation dispersion model addition tutorial.
  • Added support for the NS 2-site star model to the relax_disp.select_model user function back end. This is the model of the numerical solution for the 2-site Bloch-McConnell equations using complex conjugate matrices. This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • Better support for the R2A0 and R2B0 relaxation rate parameters in the relaxation dispersion analysis. This includes a number of fixes to allow these two parameters to be handled correctly.
  • Added parameter conversions to go from pA and kex to kge and keg. This is for the NS 2-site star numerical model. The conversions have been added to the start of the target function to minimise mathematical operations to speed up the code.
  • Added the missing mpower() function as lib.linear_algebra.matrix_power.square_matrix_power(). This is needed by the lib.dispersion.ns_2site_star module. The function comes from the 'fitting_main_kex.py' file attached to comment 3 of task #7712. The mpower() function was copied and modified to suite relax's coding conventions.
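  A minimal sketch of a square matrix power by repeated dot products, equivalent in spirit to the function named above (and to numpy.linalg.matrix_power(), which was later used instead for speed):

      # Raise a square matrix to a non-negative integer power by repeated dot products.
      from numpy import array, dot, eye, float64

      def square_matrix_power(x, n):
          result = eye(x.shape[0], dtype=float64)
          for _ in range(n):
              result = dot(result, x)
          return result

      x = array([[0.9, 0.1], [0.2, 0.8]], float64)
      print(square_matrix_power(x, 3))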
  • Added a module docstring to lib.linear_algebra.matrix_power.
  • Created the lib.dispersion.ns_matrices module. This module contains a collection of functions for generating the relaxation matrices for the numerical solutions to the Bloch-McConnell equations for relaxation dispersion. The code comes from the 'fitting_main_kex.py' file attached to https://web.archive.org/web/gna.org/task/?7712#comment3.
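  The common building block of such relaxation matrices is the 2-site exchange skeleton with k_AB = pB·kex and k_BA = pA·kex; the full matrices in lib.dispersion.ns_matrices additionally include the relaxation decay and chemical shift evolution terms. A minimal sketch of the exchange part only, not the relax module itself:

      # The 2-site chemical exchange skeleton of the Bloch-McConnell matrices.
      from numpy import array, float64

      def exchange_matrix(pA, kex):
          """Return the 2x2 exchange matrix with k_AB = pB*kex and k_BA = pA*kex."""
          pB = 1.0 - pA
          k_AB = pB * kex   # rate of A -> B
          k_BA = pA * kex   # rate of B -> A
          return array([[-k_AB,  k_BA],
                        [ k_AB, -k_BA]], float64)

      print(exchange_matrix(pA=0.9, kex=1000.0))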
  • Docstring fix for the lib.dispersion.ns_matrices.rcpmg_2d() function.
  • Added the functions for creating the X-axis pi-pulse rotation matrices in lib.dispersion.ns_matrices. The code comes from the 'fitting_main_kex.py' file attached to https://web.archive.org/web/gna.org/task/?7712#comment3.
  • Huge amounts of documentation added to the lib.dispersion.ns_2site_star module. This comes from Paul Schanda's post at http://thread.gmane.org/gmane.science.nmr.relax.devel/4132/focus=4152
  • Spacing fixes for the lib.dispersion.ns_2site_star module as determined by the 2to3 program. This is the Python 2 to 3 conversion program.
  • Docstring fix for the lib.dispersion.ns_2site_star.r2eff_ns_2site_star() function.
  • Comment updates in the lib.dispersion.ns_2site_star module.
  • Completed the conversion of the ground and excited states (G, E) to the A and B states. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4132/focus=4154.
  • Fixes for the construction of the complex conjugate matrix in lib.dispersion.ns_2site_star.
  • The chemical shift difference is now passed into lib.dispersion.ns_2site_star. This is currently set to the fA parameter, though it is not yet clear if this is correct.
  • Basic fix for the lib.linear_algebra.matrix_power.square_matrix_power() function.
  • The fixed relaxation time period is now sent into the NS 2-site star dispersion model.
  • Fix for the state G+E to A+B conversion in lib.dispersion.ns_2site_star.
  • The NS 2-site star model is now more robust against math domain failures. This includes the failure of the logarithm of zero matrices.
  • Speed ups of the NS 2-site star dispersion model optimisation. The relaxation and magnetisation data structures are now initialised with the target function initialisation, rather than being created at each target function call. The Rex and M0 matrices are now updated at the base of the target function rather than in the lib.dispersion.ns_2site_star module to minimise the number of mathematical operations per target function call. And the M0 matrix has changed shape and a dot product is used in lib.dispersion.ns_2site_star to create Moft instead.
  • Shifted to using the faster numpy.linalg.matrix_power() function in lib.dispersion.ns_2site_star. This was originally using the lib.linear_algebra.matrix_power.square_matrix_power() function, however the numpy equivalent is faster.
  • More speed ups of the NS 2-site star dispersion model. A number of calculations have been shifted to the target function initialisation code, avoiding unnecessary repetitive mathematical operations.
  • Improvement of the error handling in the NS 2-site star model. The fA and pB parameters are no longer being checked. Instead a Mgx value of 0.0 is being checked for. This catches additional problems. And now instead of the R2eff value being set to zero, it is set to 1e99. This is because log of zero is -inf, and then multiplied by a negative constant gives positive inf.
  • Docstring completion for lib.dispersion.ns_2site_star.r2eff_ns_2site_star(). Epydoc text was missing for some of the function arguments.
  • Changed 'numerical integration' to 'numerical solutions' in the dispersion chapter of the manual.
  • Reworked the dispersion chapter of the manual for the recent support of numerical models. This includes better sectioning and section labelling and referencing, and the addition of the NS 2-site star numerical model. The model and parameter tables have been updated as well.
  • Added the NS 2-site star red model to the specific_analyses.relax_disp.variables module. This is the model of the numerical solution for the 2-site Bloch-McConnell equations using complex conjugate matrices, whereby the simplification R2A0 = R2B0 is assumed. This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Rewrote the relax_disp.select_model user function documentation. All of the detailed model information has been removed as it is now in the relax user manual. The model lists have been modified to match the analytic-numeric sectioning of the manual.
  • Added the NS 2-site star red model to the relax_disp.select_model user function frontend. This is the model of the numerical solution for the 2-site Bloch-McConnell equations using complex conjugate matrices, whereby the simplification R2A0 = R2B0 is assumed. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Docstring fix for the lib.dispersion.ns_2site_star.r2eff_ns_2site_star() function.
  • Created the NS 2-site star red model target function. This is the model of the numerical solution for the 2-site Bloch-McConnell equations using complex conjugate matrices, whereby the simplification R2A0 = R2B0 is assumed. The code in common with the NS 2-site star model has been shifted into the new calc_ns_2site_star_chi2() method. This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Added support for the NS 2-site star red model to the relax_disp.select_model user function back end. This is the model of the numerical solution for the 2-site Bloch-McConnell equations using complex conjugate matrices, whereby the simplification R2A0 = R2B0 is assumed. This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • Added the NS 2-site star red model to the relax user manual. This is the model of the numerical solution for the 2-site Bloch-McConnell equations using complex conjugate matrices, whereby the simplification R2A0 = R2B0 is assumed. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Fix for the Monte Carlo simulations for the numeric dispersion models. The back-calculation method was not correctly initialising the target function class.
  • Added the CR72 red model to the specific_analyses.relax_disp.variables module. This is the Carver and Richards 1972 analytic model with the simplification R2A0 = R2B0. The current CR72 makes the same assumption, but that model will be expanded to support R2A0 and R2B0 later. This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Added the CR72 red model to the relax_disp.select_model user function frontend. This is the Carver and Richards 1972 analytic model with the simplification R2A0 = R2B0. The current CR72 makes the same assumption, but that model will be expanded to support R2A0 and R2B0 later. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Created the CR72 red model target function. This is the Carver and Richards 1972 analytic model with the simplification R2A0 = R2B0. The current CR72 makes the same assumption, but that model will be expanded to support R2A0 and R2B0 later. The code in common with the CR72 model has been shifted into the new calc_CR72_chi2() method. This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Added support for the CR72 red model to the relax_disp.select_model user function back end. This is the Carver and Richards 1972 analytic model with the simplification R2A0 = R2B0. The current CR72 makes the same assumption, but that model will be expanded to support R2A0 and R2B0 later. This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • Added the CR72 red model to the relax user manual. This is the Carver and Richards 1972 analytic model with the simplification R2A0 = R2B0. The current CR72 makes the same assumption, but that model will be expanded to support R2A0 and R2B0 later. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • LaTeX improvements for the NS 2-site star red model section of the dispersion chapter of the manual.
  • Expanded the CR72 model to handle both the R2A0 and R2B0 parameters. The CR72 red model now functions as the old CR72 model where R2A0 = R2B0 = R20. All parts of the code have been modified. The lib.dispersion.cr72.r2eff_CR72() function has been expanded to support the full Carver and Richards 1972 equations, dropping back to the simplified form if R2A0 = R2B0.
  • Fix for the dispersion specific loop_parameters() method for the R2A0 and R2B0 parameters. The frequency index is now correctly returned for these and the R20 parameter.
  • Better printouts of the R2A0 and R2B0 parameters at the end of minimisation.
  • Documentation fix for the lib.dispersion.cr72 module.
  • Small speed up for the lib.dispersion.cr72 module for the R2A0 != R2B0 case. Replicated calculations have been minimised.
  • Added support for model nesting in the relaxation dispersion auto-analysis. This involves copying the parameters from the simpler nested model rather than performing a full grid search. This is currently used to handle all models with R2A0 and R2B0 parameters where a simpler model with the single R20 parameter is optimised first.
  • Improvements for the write_results() method of the dispersion auto-analysis. The parameter value and Grace files are now correctly created for all the recent models.
  • Fix for the Relax_disp.test_hansen_cpmg_data_auto_analysis system test for model name change. This is for the change from the CR72 model to CR72 red model.
  • Added the NS 2-site model to the specific_analyses.relax_disp.variables module. This is the model of the numerical solution for the 2-site Bloch-McConnell equations. It originates as optimization function number 1 from the fitting_main_kex.py script from Mathilde Lescanne, Paul Schanda, and Dominique Marion (see http://thread.gmane.org/gmane.science.nmr.relax.devel/4138, https://web.archive.org/web/gna.org/task/?7712#comment2 and https://gna.org/support/download.php?file_id=18262). This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Added the NS 2-site model to the relax_disp.select_model user function frontend. This is the model of the numerical solution for the 2-site Bloch-McConnell equations. It originates as optimization function number 1 from the fitting_main_kex.py script from Mathilde Lescanne, Paul Schanda, and Dominique Marion (see http://thread.gmane.org/gmane.science.nmr.relax.devel/4138, https://web.archive.org/web/gna.org/task/?7712#comment2 and https://gna.org/support/download.php?file_id=18262). This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Added the NS 2-site R2eff calculating function to the relax library. This is the model of the numerical solution for the 2-site Bloch-McConnell equations. It originates as optimization function number 1 from the fitting_main_kex.py script from Mathilde Lescanne, Paul Schanda, and Dominique Marion (see http://thread.gmane.org/gmane.science.nmr.relax.devel/4138, https://web.archive.org/web/gna.org/task/?7712#comment2 and https://gna.org/support/download.php?file_id=18262). This commit follows step 3 of the relaxation dispersion model addition tutorial.
  • Updates and fixes for the lib.dispersion.ns_2site module. The function has been renamed, and the R1 arguments default to 0.0. The flip angle of the pulse has also been updated.
  • Created the NS 2-site model target function. This is the model of the numerical solution for the 2-site Bloch-McConnell equations. It originates as optimization function number 1 from the fitting_main_kex.py script from Mathilde Lescanne, Paul Schanda, and Dominique Marion (see http://thread.gmane.org/gmane.science.nmr.relax.devel/4138, https://web.archive.org/web/gna.org/task/?7712#comment2 and https://gna.org/support/download.php?file_id=18262). This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Converted the NS 2-site model to NS 2-site 3D to be more specific. This might change again in the future.
  • Added support for the NS 2-site 3D model to the relax_disp.select_model user function back end. This is the model of the numerical solution for the 2-site Bloch-McConnell equations. It originates as optimization function number 1 from the fitting_main_kex.py script from Mathilde Lescanne, Paul Schanda, and Dominique Marion (see http://thread.gmane.org/gmane.science.nmr.relax.devel/4138, https://web.archive.org/web/gna.org/task/?7712#comment2 and https://gna.org/support/download.php?file_id=18262). This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • Lots of fixes for the relaxation dispersion target function module for the NS 2-site 3D model.
  • Improvements to the nesting() method of the relaxation dispersion auto-analysis. The use of the parameters of the simpler model in a nested pair now only works if the simpler model is in the model list.
  • Converted the pi-pulse propagator matrices to numpy array format. This is to enable the use of the much faster numpy.dot() function for performing the dot products.
  • Speed ups for the NS 2-site 3D model. The pi-pulse propagator is created only once upon target function initialisation rather than for each function call, each spin cluster, each magnetic field strength, each dispersion point, and each CPMG block.
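    The speed up amounts to hoisting a constant matrix out of the inner loops, roughly as in this generic numpy sketch (not the actual relax propagator code):
        import numpy as np

        class CpmgBlockSketch:
            def __init__(self, r180x):
                # The pi-pulse propagator is constant, so store it once at initialisation.
                self.r180x = np.asarray(r180x, dtype=np.float64)

            def evolve(self, relax_prop, magnetisation):
                # One CPMG block: free evolution, pi pulse, free evolution, all via
                # the fast numpy.dot() matrix products.
                m = np.dot(relax_prop, magnetisation)
                m = np.dot(self.r180x, m)
                return np.dot(relax_prop, m)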
  • Modified the df, fA, and fB parameters to match the relax omega conventions of δω, wA, and wB. This follows from Paul Schanda's confirmation at http://thread.gmane.org/gmane.science.nmr.relax.devel/4132/focus=4159.
  • Speed up for the lib.dispersion.ns_matrices.rcpmg_3d() function. The pA and pB parameters are now sent into the function rather than being recreated by the function.
  • More changes to the numerical solution dispersion code to match relax's conventions. This includes the changes of df->δω, fA->wA, fB->wB, and Mgx->Mx.
  • Added the NS 2-site 3D red model to the specific_analyses.relax_disp.variables module. This is the NS 2-site 3D model with R2A0 = R2B0 = R20. This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Added the NS 2-site 3D red model to the relax_disp.select_model user function frontend. This is the NS 2-site 3D model with R2A0 = R2B0 = R20. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Created the NS 2-site 3D red model target function. This is the NS 2-site 3D model with R2A0 = R2B0 = R20. This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Added support for the NS 2-site 3D red model to the relax_disp.select_model user function back end. This is the NS 2-site 3D model with R2A0 = R2B0 = R20. This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • Updated all of the numerical model sections of the dispersion chapter of the manual. This includes additions for the NS 2-site 3D and NS 2-site 3D red models.
  • Updated the NS 2-site 3D and NS 2-site 3D red models in the dispersion chapter of the relax manual. The models are now included in the tables and in the introduction.
  • Added support for nesting to the relaxation dispersion auto-analysis for the 'NS 2-site 3D*' models.
  • Added the NS 2-site expanded model to the specific_analyses.relax_disp.variables module. This is the numerical model for the 2-site Bloch-McConnell equations expanded using Maple by Nikolai Skrynnikov. This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Added the NS 2-site expanded model to the relax_disp.select_model user function frontend. This is the numerical model for the 2-site Bloch-McConnell equations expanded using Maple by Nikolai Skrynnikov. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Added the NS 2-site expanded R2eff calculating function to the relax library. This is the numerical model for the 2-site Bloch-McConnell equations expanded using Maple by Nikolai Skrynnikov. It originates as optimization function number 5 from the fitting_main_kex.py script from Mathilde Lescanne, Paul Schanda, and Dominique Marion (see http://thread.gmane.org/gmane.science.nmr.relax.devel/4138, https://web.archive.org/web/gna.org/task/?7712#comment2 and https://gna.org/support/download.php?file_id=18262). This commit follows step 3 of the relaxation dispersion model addition tutorial.
  • Fix for the lib.dispersion.ns_2site_expanded.r2eff_ns_2site_expanded() function. The pg variable should have been pA and it needs to be sent into the function.
  • Created the NS 2-site expanded model target function. This is the numerical model for the 2-site Bloch-McConnell equations expanded using Maple by Nikolai Skrynnikov. It originates as optimization function number 5 from the fitting_main_kex.py script from Mathilde Lescanne, Paul Schanda, and Dominique Marion (see http://thread.gmane.org/gmane.science.nmr.relax.devel/4138, https://web.archive.org/web/gna.org/task/?7712#comment2 and https://gna.org/support/download.php?file_id=18262). This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Added support for the NS 2-site expanded model to the relax_disp.select_model user function back end. This is the numerical model for the 2-site Bloch-McConnell equations expanded using Maple by Nikolai Skrynnikov. It originates as optimization function number 5 from the fitting_main_kex.py script from Mathilde Lescanne, Paul Schanda, and Dominique Marion (see http://thread.gmane.org/gmane.science.nmr.relax.devel/4138, https://web.archive.org/web/gna.org/task/?7712#comment2 and https://gna.org/support/download.php?file_id=18262). This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • Fix for the relax_disp.select_model user function for the NS 2-site expanded model. There is only one R20 parameter as R2A0 = R2B0 in this model.
  • Added the NS 2-site expanded model to the relax user manual. This is the numerical model for the 2-site Bloch-McConnell equations expanded using Maple by Nikolai Skrynnikov. It originates as optimization function number 5 from the fitting_main_kex.py script from Mathilde Lescanne, Paul Schanda, and Dominique Marion (see http://thread.gmane.org/gmane.science.nmr.relax.devel/4138, https://web.archive.org/web/gna.org/task/?7712#comment2 and https://gna.org/support/download.php?file_id=18262). This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Large renaming of the relaxation dispersion models. This includes both the analytic and numerical models. All of the models with separate R2A0 and R2B0 parameters now have ' full' added to the end of the model name. And all of the corresponding reduced models whereby R2A0 = R2B0 = R20 have had the trailing ' red' removed. All descriptions and variable names have been updated to match.
  • Updated the dispersion auto-analysis write_results() method for the recent model changes.
  • Import fix for the NS 2-site expanded dispersion model target function.
  • Fix for the lib.dispersion.ns_2site_expanded module for the missing sqrt() function import.
  • Simplified the test_hansen_cpmg_data_*() system tests by shifting most shared code into setup_hansen_cpmg_data().
  • Created the Relax_disp.test_hansen_cpmg_data_CR72_full system test for checking the CR72 full model.
  • Expanded the dispersion target function class docstring to include all current dispersion models.
  • Updated the parameter checks in the Relax_disp.test_hansen_cpmg_data_CR72_full system test.
  • Fixes for all of the definitions of the kAB and kBA exchange parameters. These were inverted in all parts of relax. The changes only affect the numerical dispersion models.
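    For reference, the standard 2-site relations assumed here are kex = kAB + kBA together with the detailed balance condition pA·kAB = pB·kBA, giving:
        def exchange_rates(pA, kex):
            """Forward and reverse rates for 2-site exchange (standard relations)."""
            pB = 1.0 - pA
            kAB = pB * kex    # rate of the A -> B transition
            kBA = pA * kex    # rate of the B -> A transition
            return kAB, kBA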
  • Created the Relax_disp.test_hansen_cpmg_data_ns_2site_3D system test. This checks the NS 2-site 3D numerical dispersion model against some truncated CPMG data from Flemming Hansen.
  • Created the Relax_disp.test_hansen_cpmg_data_ns_2site_3D_full system test. This checks the NS 2-site 3D full numerical dispersion model against some truncated CPMG data from Flemming Hansen. The parameter checks have not been updated as there appears to be a bug.
  • Created system tests for the rest of the numerical dispersion models. These include Relax_disp.test_hansen_cpmg_data_ns_2site_expanded, Relax_disp.test_hansen_cpmg_data_ns_2site_star and Relax_disp.test_hansen_cpmg_data_ns_2site_star_full. These check the NS 2-site expanded, NS 2-site star, and NS 2-site star full numerical dispersion models against some truncated CPMG data from Flemming Hansen. The parameter checks have not been updated for the NS 2-site expanded and NS 2-site star full models as there appear to be bugs.
  • Fixes for the Relax_disp.test_hansen_cpmg_data_auto_analysis system test. The checks for the CR72 red model are now against the CR72 model. And the optimised models now only include No Rex, LM63, CR72, and IT99, massively speeding up the test.
  • Fixes for the lib.dispersion.ns_2site_expanded module. These problems were identified using the Relax_disp.test_hansen_cpmg_data_ns_2site_expanded system test. They correspond to the issues with the original fitting_main_kex.py program identified by Mathilde Lescanne in her post at http://thread.gmane.org/gmane.science.nmr.relax.devel/4144.
  • The Relax_disp.test_hansen_cpmg_data_ns_2site_expanded system test now passes. The test has been set up to match Relax_disp.test_hansen_cpmg_data_CR72. This allows the efficiency of each method to be compared by running the tests with the --time flag.
  • Fix for the model nesting method of the relaxation dispersion auto-analysis for deselected spins.
  • Added an upper constraint of 2e6 rad/s for the kex dispersion parameter. This is to prevent slow optimisation towards values on the order of 1e20 for models which fail.
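    In the A.x >= b linear constraint notation used by the dispersion linear_constraint() setup, such an upper limit is a single extra row, roughly as sketched here with a hypothetical parameter index:
        import numpy as np

        def kex_upper_limit_row(n_params, kex_index, limit=2e6):
            """One A.x >= b constraint row encoding kex <= limit, i.e. -kex >= -limit (sketch)."""
            A_row = np.zeros(n_params, dtype=np.float64)
            A_row[kex_index] = -1.0
            return A_row, -limit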
  • Updated the model lists for the relax scripts for optimising Flemming Hansen's CPMG data. The model lists now include the numeric models and the CR72 full model.
  • The lib.software.grace.write_xy_header() can now set the legend fill pattern and font size.
  • The relax_disp.plot_disp_curves user function backend now sets clear legend boxes with smaller text.
  • Fix for the Grace string for the δω dispersion parameter.
  • Updated the parameter value checks in the Relax_disp.test_hansen_cpmg_data_CR72 system test. The low precision parameter values are slightly different because of the new upper constraint on kex: optimisation is simply terminated earlier, rather than converging to different results.
  • Updated the rest of the dispersion system tests to check the correct parameter values. It is currently assumed that the 'full' dispersion models are correct, as there is no way of testing if they are not. So the Relax_disp.test_hansen_cpmg_data_ns_2site_3D_full and Relax_disp.test_hansen_cpmg_data_ns_2site_star_full system tests have been updated to pass.
  • Modified how the relaxation dispersion auto-analysis handles Monte Carlo simulations. Now there is a flag which allows per-model simulations to be enabled. By default, simulations are now only performed at the end. This is to allow for massive speed ups in the auto-analysis.
  • Modified the dispersion GUI analysis to not include all dispersion models.
  • Added Mathilde Lescanne to the copyright notices of the numeric dispersion code in the relax library. The dates must still be checked and updated correctly.
  • Added support for the mc_sim_all_models flag for the dispersion auto-analysis in the GUI. The new boolean auto-analysis GUI input element is being used for this purpose.
  • All of the numeric dispersion models are now much more robust. The real part of the magnetization vector for the A state could, for some parameter combinations, be either negative or NaN. These situations are now caught, and the R2eff value set to a very large number.
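    The guard is essentially the usual pattern of catching unphysical magnetisation values before the logarithm is taken, for example (variable names are illustrative):
        from math import isnan, log

        def safe_back_calc_r2eff(mx, relax_time, big=1e99):
            """Back-calculate R2eff from the A-state magnetisation, returning a huge
            value (and hence a huge chi-squared) when mx is negative or NaN (sketch)."""
            if mx <= 0.0 or isnan(mx):
                return big
            return -log(mx) / relax_time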
  • Sectioning improvements for the relaxation dispersion chapter of the relax user manual.
  • Added DOI numbers to a number of bibliography entries for quick links in the relax user manual.
  • Added the LM63 3-site model to the specific_analyses.relax_disp.variables module. This is the Luz and Meiboom 1963 analytic model for three exchanging sites. This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Added the LM63 3-site model to the relax_disp.select_model user function frontend. This is the Luz and Meiboom 1963 analytic model for three exchanging sites. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Added the LM63 3-site model to the relaxation dispersion chapter of the relax user manual. This is the Luz and Meiboom 1963 analytic model for three exchanging sites. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Added the LM63 3-site R2eff calculating function to the relax library. This is the Luz and Meiboom 1963 analytic model for three exchanging sites. This commit follows step 3 of the relaxation dispersion model addition tutorial.
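    Assuming the usual Luz and Meiboom fast-exchange form for each exchange process, the 3-site dispersion is simply the sum of two such contributions; a rough sketch (not the library function itself):
        from math import tanh

        def r2eff_lm63_3site_sketch(r20, phi_ex_B, kB, phi_ex_C, kC, nu_cpmg):
            """Sum of two LM63 fast-exchange terms, one for the A-B and one for the A-C process."""
            def rex(phi_ex, kex):
                return phi_ex / kex * (1.0 - 4.0 * nu_cpmg / kex * tanh(kex / (4.0 * nu_cpmg)))
            return r20 + rex(phi_ex_B, kB) + rex(phi_ex_C, kC)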
  • Fixes for the LM63 3-site model equations in the relaxation dispersion chapter of the user manual.
  • Created the LM63 3-site model target function. This is the Luz and Meiboom 1963 analytic model for three exchanging sites. This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Added support to the relaxation dispersion analysis for the LM63 3-site model parameters. This is the Luz and Meiboom 1963 analytic model for three exchanging sites. This commit follows step 3 of the relaxation dispersion model addition tutorial.
  • Added support for the LM63 3-site model to the relax_disp.select_model user function back end. This is the Luz and Meiboom 1963 analytic model for three exchanging sites. This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • Added support for the LM63 3-site parameters to the specific_analyses.relax_disp.parameters module. This is the Luz and Meiboom 1963 analytic model for three exchanging sites. This commit follows step 5 of the relaxation dispersion model addition tutorial.
  • Added the LM63 3-site model to the dispersion scripts for Flemming Hansen's CPMG data.
  • The relaxation dispersion auto-analysis can now resume if it has been interrupted.
  • Some fixes for the LM63 3-site dispersion model. The library code was not accepting the correct arguments and it was referencing a non-existent parameter, and the grid search setup was failing.
  • Added support for optimising the LM63 3-site dispersion model with Art Palmer's CPMGFit. This is for the relax_disp.cpmgfit_input user function. This model in CPMGFit is called '3-site_CPMG'.
  • Python 3 fixes for the specific_analyses.relax_disp.disp_data module.
  • Fixes for the relax_disp.cpmgfit_execute user function backend. This would hang if CPMGFit produced error messages. Hopefully this is now fixed.
  • Updated the CPMGFit results for the LM63 3-site model. This is for the truncated CPMG data from Flemming Hansen.
  • Speed ups for the LM63 3-site target function. Some mathematical operations have been shifted from the library code into the target function so that they are only calculated once per function call.
  • Updated the relax results for Flemming Hansen's truncated CPMG data for the LM63 3-site model.
  • The dispersion auto-analysis now outputs files for the kB, kC, φex,B, and φex,C parameters. This is in the write_results() method and is for creating text files and Grace plots for the LM63 3-site model.
  • Created some synthetic test data for the LM63 3-site dispersion model. This will be used to test CPMGFit and relax's implementations.
  • Updated the LM63 3-site dispersion model test data. The CPMG frequencies are now more realistic.
  • Removed the old Sparky peak lists for the LM63 3-site dispersion model test data.
  • Added the new Sparky peak lists for the LM63 3-site dispersion model test data.
  • Updated the reference Sparky peak lists for the LM63 3-site dispersion model test data.
  • Missing import of the specific_analyses.relax_disp.variables.MODEL_LM63_3SITE variable.
  • Added scripts for calculating the R2eff values for the LM63 3-site dispersion model test data.
  • Created the Relax_disp.test_r2eff_fit_fixed_time system test to show a failure in the auto-analysis. This shows a failure of the R2eff fitting in the dispersion auto-analysis due to Monte Carlo simulations being run when the calc() function should be called.
  • Removed some parts of the Relax_disp.test_r2eff_fit_fixed_time system test. The last lines were non-functional.
  • Fix for the dispersion auto-analysis if not enough models have been input for a final run. The final model selection, Monte Carlo simulation, and results writing stage of the auto-analysis now only occurs when enough models are present for model selection.
  • Fix for the dispersion auto-analysis for when only the single R2eff model is optimised. This is for the case of exponential curve fitting, and allows Monte Carlo simulations to proceed even when the mc_sim_all_models flag is False.
  • Removed some unused parts of the r2eff_calc.py script and added the results file.
  • Made the LM63 3-site dispersion model test data more realistic. Previously all the rates were within a few decimal places of the R20 values. Now the dispersion is much more significant.
  • Modified the LM63 3-site dispersion model test data again. This time the data has been changed to be that of two residues rather than two spins.
  • Another update of the LM63 3-site dispersion model test data. This data now makes CPMGFit happy.
  • Added the CPMGFit results for the LM63 3-site dispersion model test data.
  • Added the relax results for the LM63 3-site dispersion model test data.
  • Added a warning not to use the LM63 3-site model to the dispersion chapter of the user manual.
  • Added the LM63 3-site dispersion model to the model list in the GUI. It is not selected by default.
  • Updated the Noe.test_noe_analysis system test. This is due to the changes to the lib.software.grace.write_xy_header() function.
  • Fix for the model equivalence setup in the nesting() method of the dispersion auto-analysis. This is the use of the analytic CR72 model parameters for the numeric models to avoid the grid search.
  • Removed a double full stop in the relax_disp.select_model docstring.
  • Updates for the test suite data script for optimising Flemming Hansen's CPMG data. The model list has been shortened to the useful models, and the grid search size is now reasonable.
  • Updated the software_comparison file for the numeric model results from relax. This is the file comparing the results for residues 70 and 71 from Flemming Hansen's CPMG data.
  • Updated the numeric model results for the software_comparison file.
  • Updated the relax results for Flemming Hansen's truncated CPMG data. This includes the CR72 full model and all the numeric models (excluding the *full models).
  • Added Dominique Marion to the copyright notices of all the lib/dispersion/ns_*.py files. This is in response to Paul Schanda's message at http://thread.gmane.org/gmane.science.nmr.relax.devel/4225/focus=4226.
  • Small fix for the relax_disp.cluster documentation.
  • Added the new pre_run_dir argument to the relaxation dispersion auto-analysis. This is to enable clustered optimisation. This specifies a directory containing a completed analysis. The parameters from this previous run will be used as the starting point for optimisation of the clustered analysis.
  • Fix for the Monte Carlo simulations for the dispersion auto-analysis failing under certain conditions. The wrong variable was being checked to see if more than two models were being optimised.
  • The dispersion minimisation() method now skips deselected spin clusters. This is defined as all spins of the cluster being deselected.
  • Implemented the new relax_disp.parameter_copy user function. This is used to copy relaxation dispersion parameters from one data pipe to another, taking cluster averaging into account. It is used by the dispersion auto-analysis to handle clustering.
  • Added an element to the dispersion GUI analysis for specifying the directory of previous results. This is used for the pre_run_dir argument for the dispersion auto-analysis.
  • Reactivated Monte Carlo simulations for the R2eff model for exponential data curves. This is within the optimise() method of the dispersion auto-analysis.
  • Updated the intro chapter of the user manual for the now supported dispersion analysis. This is no longer listed as a future to be implemented feature.
  • Updated the screenshot of the analysis selection wizard to include the dispersion analysis. This new figure has been updated in the intro chapter of the relax user manual as well.
  • Renamed an instance of 'numerical simulation' in the dispersion chapter of the manual.
  • Fix for the final data pipe in the dispersion auto-analysis. The final data pipe is now placed in the data pipe bundle. This is needed to allow the final state file to be opened in the GUI with an associated GUI analysis tab.
  • Fixes for the clustering display in the GUI. This is for the relaxation dispersion GUI analysis tab.
  • Updated the README file for Dr. Flemming Hansen's CPMG data in the test suite.
  • Added Martin Tollinger to the copyright of the lib.dispersion.ns_2site_expanded module. This follows Martin's post at http://article.gmane.org/gmane.science.nmr.relax.devel/4276.
  • Added links to all of the copyright licensing agreements for the lib.dispersion.ns_2site_expanded module.
  • Added Nikolai Skrynnikov to the copyright notice of the lib.dispersion.ns_2site_expanded module.
  • Added the TP02 model to the specific_analyses.relax_disp.variables module. This is the Trott and Palmer 2002 R1ρ analytic model for 2-site exchange. This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Added the TP02 model to the relax_disp.select_model user function frontend. This is the Trott and Palmer 2002 R1ρ analytic model for 2-site exchange. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Added the TP02 model to the relaxation dispersion chapter of the relax user manual. This is the Trott and Palmer 2002 R1ρ analytic model for 2-site exchange. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Added the TP02 R1ρ calculating function to the relax library. This is the Trott and Palmer 2002 R1ρ analytic model for 2-site exchange. This commit follows step 3 of the relaxation dispersion model addition tutorial. The Matlab code from Skrynnikov and Tollinger has not been converted to Python code yet. This is to allow the Matlab->Python conversion to be followed.
  • Fix for the M61 skew dispersion model indexing in the user manual.
  • Added the NS 2-site expanded model to the CPMG dispersion sample script.
  • Added the NS 2-site expanded and TP02 models to the script in the manual. This is in the script section of the dispersion chapter of the user manual.
  • Converted the lib.dispersion.tp02 module from Matlab code to Python. The code has also been made fail-safe and repetitive calculations have been shifted outside of the loop to speed things up.
  • Fixes for the TP02 model section of the dispersion chapter of the manual.
  • Created the TP02 model target function. This is the Trott and Palmer 2002 R1ρ analytic model for 2-site exchange. This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Added support for the TP02 model to the relax_disp.select_model user function back end. This is the Trott and Palmer 2002 R1ρ analytic model for 2-site exchange. This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • The spectrometer frequency information is now checked for the TP02 model.
  • Started to create a script to create synthetic data for the TP02 dispersion model. This still needs a lot of work.
  • Added the tp02 module to the lib.dispersion package __all__ list.
  • Created the synthetic data for the TP02 dispersion model. The Sparky peak lists have been created and added to the repository.
  • Modified the synthetic data for the TP02 dispersion model. The data now more closely mimics that from the paper, and should be in the slow exchange regime.
  • Updated the M61 R1ρ model conditions in the table in the user manual.
  • Updated the TP02 R1ρ model conditions in the table in the manual. This cannot be fast exchange.
  • Started to create the Relax_disp.test_r1rho_off_res_fixed_time_tp02 system test.
  • A fix for older numpy versions, as the numpy.add() function argument 'out' is relatively new.
  • Added the new TSMFK01 model to the specific_analyses.relax_disp.variables module. This is the Tollinger/Kay 2-site very slow exchange model for CPMG-type experiments, covering the microsecond to second time scale. Paper by M. Tollinger, N.R. Skrynnikov, F.A.A. Mulder, J.D. Forman-Kay and L.E. Kay (2001). Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • lib/dispersion/lm63.py is copied to tsmfk01.py as part of the implementation of the TSMFK01 model equation. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at Tutorial for adding relaxation dispersion models to relax.
  • Added the TSMFK01 model to the user_functions/relax_disp.py select_model user function frontend. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Created the TSMFK01 model target function for 2-site very-slow exchange model, range of microsecond to second time scale. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added the TSMFK01 model equations to the relax library lib/dispersion/tsmfk01.py. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • The chemical shift of each spin is now taken into account for the synthetic data for the TP02 dispersion model. The data now properly reflects the spin lock offset.
  • Updated all of the dispersion system tests for the spectrum.read_intensities user function changes. The arguments heteronuc and proton have been replaced with 'dim'.
  • Improved the error message for when peak intensity data cannot be found in a dispersion analysis. This is to better aid the user to track down what they did wrong.
  • More error message improvements for when peak intensity data cannot be found in a dispersion analysis.
  • Created the relax script for the Relax_disp.test_r1rho_off_res_fixed_time_tp02 system test.
  • Changed the synthetic data file names for the TP02 dispersion model.
  • Updated the Relax_disp.test_r1rho_off_res_fixed_time_tp02 system test script for the new file names.
  • Added a new user function to the Relax_disp.test_r1rho_off_res_fixed_time_tp02 system test. This is the chemical_shift.read user function which currently does not exist. Chemical shifts are needed to interpret off-resonance R1ρ data.
  • Copyright of Sebastien Morin and Edward d'Auvergne re-inserted, since tsmfk01 is an alteration of lm63.py and m61.py in the same directory. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fix for converting δω from ppm to rad/s. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
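    The conversion itself is the standard one of multiplying by the Larmor frequency of the observed nucleus; a generic sketch:
        from math import pi

        def ppm_to_rad_per_s(value_ppm, larmor_frq_hz):
            """Convert a chemical shift difference from ppm to rad/s, given the Larmor
            frequency of the nucleus (in Hz) at the current field strength (sketch)."""
            return value_ppm * 1e-6 * larmor_frq_hz * 2.0 * pi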
  • Added support for the TSMFK01 model to the relax_disp.select_model user function back end. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fix for the reading of chemical shifts in the Relax_disp.test_r1rho_off_res_fixed_time_tp02 system test.
  • Added the Trott and Palmer, 2002 bibtex reference for the user manual.
  • Added preliminary support for chemical shifts to the dispersion target functions.
  • Modified the GUI behaviour for a dispersion analysis when the C modules are not compiled. Previously a user was blocked from performing any dispersion analysis in the GUI if the modules were not compiled. Now instead, although an error is still thrown, the analysis will be initialised.
  • A new check blocking exponential curve fitting in the dispersion analysis when the C modules are not compiled.
  • Changed how chemical shifts are handled in the dispersion target function class. The chemical shifts in ppm are accepted and they are converted to rad/s inside the __init__() method. A structure for rotating frame tilt angles is now also accepted.
  • Added a relax_disp.spin_lock_offset user function call to the Relax_disp.test_r1rho_off_res_tp02 system test. The user function does not exist yet.
  • Implemented the relax_disp.spin_lock_offset user function. This includes both the front end and the back end specific_analyses.relax_disp.disp_data.spin_lock_offset() function.
  • The offset is now set for all spectra in the Relax_disp.test_r1rho_off_res_fixed_time_tp02 system test. Previously only the reference was set.
  • Fixed a typo in the dispersion chapter of the user manual. This was identified by Troels Linnet at http://thread.gmane.org/gmane.science.nmr.relax.devel/4410.
  • Fix for the relax_disp.spin_lock_offset user function. The cdp.dispersion_points structure was being updated when it should not be touched - a remnant of the relax_disp.spin_lock_field backend which this code was copied from.
  • Added some more arguments to the dispersion target function class for the off-resonance R1ρ models. These are the structures for the spin-lock offsets and the tilt angles of each spin.
  • Fix for the Relax_disp.test_r1rho_off_res_fixed_time_tp02 system test. The correct spectrum ID is now used for the relax_disp.spin_lock_offset user function calls in the script.
  • The dispersion specific optimisation code is now assembling chemical shift related data. The specific_analyses.relax_disp.disp_data.return_offset_data() function has been written to return structures for the chemical shifts, offsets, and tilt angles. These are then used by the optimisation functions by passing them into the target function code.
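    The tilt angle follows from the offset of the spin from the spin-lock carrier and the spin-lock field strength; a generic sketch with illustrative argument names (not the return_offset_data() signature):
        from math import atan2, pi

        def rotating_frame_tilt_angle(shift_ppm, carrier_ppm, larmor_frq_hz, nu1_hz):
            """Rotating frame tilt angle theta for one spin and one spin-lock offset (sketch)."""
            omega = shift_ppm * 1e-6 * larmor_frq_hz * 2.0 * pi        # spin frequency (rad/s)
            omega_rf = carrier_ppm * 1e-6 * larmor_frq_hz * 2.0 * pi   # spin-lock carrier (rad/s)
            omega1 = 2.0 * pi * nu1_hz                                 # spin-lock field strength (rad/s)
            return atan2(omega1, omega - omega_rf)                     # 90 degrees on resonance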
  • The TP02 model code is now in a semi functional state. The lib.dispersion code has been fixed to properly handle the data it receives and the target function code has been updated to pass in the correct data.
  • The TP02 model R1ρ off-resonance test data creation script now creates files of the R1 relaxation data. These files are needed for the system tests, as R1 data needs to be read.
  • The dispersion target function class now handles R1 relaxation data. This data is essential for the off-resonance R1ρ models.
  • The Relax_disp.test_r1rho_off_res_fixed_time_tp02 system test script now reads the R1 relaxation data. This data is essential for the model optimisation.
  • The dispersion specific code is now assembling the R1 data and passing it to the target function. The new specific_analyses.relax_disp.disp_data.return_r1_data() function creates a data structure holding all the R1 data. This is used in the off-resonance R1ρ experiments.
  • Added checks to the specific_analyses.relax_disp.disp_data.return_r1_data() function. This is to help indicate to the user when data is missing.
  • Fix for the Relax_disp.test_bug_20889_multi_col_peak_list GUI test. The spectrum.read_intensities user function no longer has 'heteronuc' and 'proton' arguments.
  • Fix to allow R1 data to be randomised for Monte Carlo simulations for off-resonance R1ρ data. This is a temporary kludge for the dispersion analysis and needs to be replaced by a cleaner solution via the base_data_loop() method.
  • Fix for the synthetic data for the TP02 dispersion model. The nitrogen chemical shift was not converted from ppm to rad/s before being used to calculate the offsets.
  • Fixes for the parameter checks in the Relax_disp.test_r1rho_off_res_fixed_time_tp02 system test. The parameter values had not been updated from when the test was copied from one of the other tests.
  • Turned off clustering in the Relax_disp.test_r1rho_off_res_fixed_time_tp02 system test. This speeds the test up by more than half.
  • Fix for the TP02 dispersion model. The rotating frame tilt angle for this model is calculated from the population averaged chemical shift and not the equal weighted average.
  • Attempt at fixing bug #21080. This was reported by Troels Linnet. The problem is a standard GUI problem. The text from a wxPython GUI is a Unicode string. But relax requires standard strings. Therefore the gui.string_conv.gui_to_str() function needs to be used on the return value of the GUI element GetValue() method, but this call was forgotten.
  • Fix for the synthetic data for the TP02 dispersion model. The chemical shift is now set to that of state A, the major species, rather than the non-weighted chemical shift average. It could also have been set to the population weighted average.
  • The TP02 dispersion model now assumes the chemical shift is that of the major population A. Previously the linear chemical shift average was assumed.
  • Increased the grid search size in the Relax_disp.test_r1rho_off_res_fixed_time_tp02 system test.
  • The self.field_pre_run_dir GUI element is now deactivated with the execution lock.
  • Many fixes and improvements for all of the R1ρ dispersion models in the user manual. The equations are now correct and the parameter table has been updated with new parameters and equations.
  • Removed the unused theta and R1 arguments for the lib.dispersion.m61.r1rho_M61() function. These off-resonance parameters are not used in the on-resonance model.
  • Updated the r1rho_on_res_m61 dispersion on-resonance data for off-resonance models. The chemical shifts are now the same for all spins, to force perfect on-resonance, and the two spins are now different residues.
  • Added an R1 data file to r1rho_on_res_m61, by copying from the r1rho_off_res_tp02 test suite data.
  • Updated the Relax_disp.test_r1rho_on_res_fixed_time_dpl94 system test for off-resonance data. The offsets, R1 data, and chemical shifts are now setup or read by the script.
  • Fixes for the DPL94 model to make it truly off-resonance. The tilt angles and R1 data are now used by the target function.
  • Fixes for the r1rho_on_res_m61.py system test script. The spins are now different residues. This fixes two system tests.
  • Renamed all of the current numeric dispersion models in relax to be specific to CPMG-type data. This is in preparation for adding R1ρ numeric models. It was proposed at http://thread.gmane.org/gmane.science.nmr.relax.devel/4461.
  • Added the NS R1rho 2-site model to the specific_analyses.relax_disp.variables module. This is the numerical model for the 2-site Bloch-McConnell equations for R1ρ data. This commit follows step 1 of the relaxation dispersion model addition tutorial.
  • Added the NS R1rho 2-site model to the relax_disp.select_model user function frontend. This is the numerical model for the 2-site Bloch-McConnell equations for R1ρ data. This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Added the NS R1rho 2-site R1ρ calculating function to the relax library. This is the numerical model for the 2-site Bloch-McConnell equations for R1ρ data. This code originates from the Skrynnikov & Tollinger code (the sim_all.tar file https://gna.org/support/download.php?file_id=18404 attached to https://web.archive.org/web/gna.org/task/?7712#comment5), specifically the funNumrho.m file. This commit follows step 3 of the relaxation dispersion model addition tutorial.
  • Fix for the function name in the lib.dispersion.ns_r1rho_2site module and removed misplaced copyrights.
  • Created the NS R1rho 2-site model target function. This is the numerical model for the 2-site Bloch-McConnell equations for R1ρ data. The code originates from the funNumrho.m file from the Skrynnikov & Tollinger code (the sim_all.tar file https://gna.org/support/download.php?file_id=18404 attached to https://web.archive.org/web/gna.org/task/?7712#comment5). This commit follows step 4 of the relaxation dispersion model addition tutorial.
  • Added support for the NS R1rho 2-site model to the relax_disp.select_model user function back end. This is the numerical model for the 2-site Bloch-McConnell equations for R1ρ data. The code originates from the funNumrho.m file from the Skrynnikov & Tollinger code (the sim_all.tar file https://gna.org/support/download.php?file_id=18404 attached to https://web.archive.org/web/gna.org/task/?7712#comment5). This commit follows step 6 of the relaxation dispersion model addition tutorial.
  • Added the NS R1rho 2-site model to the relax user manual. This is the numerical model for the 2-site Bloch-McConnell equations for R1ρ data. The code originates from the funNumrho.m file from the Skrynnikov & Tollinger code (the sim_all.tar file https://gna.org/support/download.php?file_id=18404 attached to https://web.archive.org/web/gna.org/task/?7712#comment5). This commit follows step 2 of the relaxation dispersion model addition tutorial.
  • Rearrangement of the model sections in the dispersion chapter of the user manual. These are now better separated into different categories.
  • Created a save file for the r1rho_off_res_tp02 dispersion data optimised to the R2eff model. This will be used for faster system tests.
  • Created the Relax_disp.test_r1rho_ns_r1rho_2site_to_tp02 system test for the new NS R1rho 2-site model. This tests the NS R1rho 2-site model against the R1ρ off-resonance test data from the TP02 model.
  • A number of fixes for the NS R1rho 2-site dispersion model. The model should now be fully functional. The chemical shift and R1 related data are now assembled for this model, and the data correctly passed from the target function to the lib.dispersion module.
  • The Relax_disp.test_r1rho_ns_r1rho_2site_to_tp02 system test now passes. The optimised values have been hard-coded into the system test. They do not match the TP02 results, but are close.
  • Renamed many of the Relax_disp system tests to bring some order to the naming.
  • Alphabetical ordering of all of the Relax_disp system tests.
  • Created a system test to catch bug #21081. This uses a truncated version of Troels Linnet's save state attached to the bug report (the data pipes not used in the model selection have been manually deleted, as well as all but the first 3 spins in the remaining 2 data pipes).
  • Fix for bug #21081 - the failure of a dispersion cluster analysis. The problem was that the specific_analyses.relax_disp.disp_data.loop_cluster() generator method was not taking the spin.select flag into account. Now all deselected spins are excluded from the spin clusters and the free spins.
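    The fix amounts to checking the select flag inside the cluster loop, along the lines of this simplified stand-in for the relax data model:
        def loop_cluster_sketch(clusters):
            """Yield the selected spins of each cluster, skipping deselected spins entirely.

            Here clusters is assumed to be a list of lists of objects carrying a boolean
            'select' attribute - a simplification of the real relax spin containers.
            """
            for cluster in clusters:
                spins = [spin for spin in cluster if spin.select]
                if spins:
                    yield spins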
  • Better support for off-resonance R1ρ data in the dispersion GUI. A new row of buttons has been added to the dispersion GUI, just above the Peak list GUI element. The first button is for the spin.isotope user function and replaces the old GUI element. Two new buttons for loading R1 data and chemical shifts have also been added, as required for off-resonance R1ρ data.
  • Changed the chemical shift icon to that of the chemical shift in ppm units - the δ symbol.
  • The chemical shift icon now has a transparent background.
  • Small changes to the tooltips of the R1 and chemical shift buttons.
  • Used far more Unicode for superscript, subscript and Greek letters for the model parameters. This is for the model list elements in the dispersion GUI tab.
  • Added the TP02 and NS R1rho 2-site models to the R1ρ model list in the dispersion GUI. These models were missing from the list.
  • Fix for the NS R1rho 2-site model description in the relax_disp.select_model user function.
  • The relax_disp.select_model GUI wizard combo element now uses Unicode for the dispersion parameters. This is for all the models. The LM63 3-site model parameter list has also been fixed to match the current set.
  • The CPMGFit input and output file names for relaxation dispersion are now MS Windows compatible. This is needed to allow the files in the test suite to exist on Windows systems, as the '#:@' symbols cause problems. The same logic as used for the relax_disp.plot_disp_curves user function is used to replace these characters with underscores.
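    The substitution is a simple character replacement, for example:
        def windows_safe_file_name(name):
            """Replace the '#', ':' and '@' characters, which are problematic in MS
            Windows file names, with underscores (a sketch of the renaming logic)."""
            for char in '#:@':
                name = name.replace(char, '_')
            return name    # e.g. 'a#b:c@d' becomes 'a_b_c_d'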
  • CPMGFit file name fixes for MS Windows. The '#:@' characters have all been replaced by underscores.
  • Fix for the Relax_disp.test_hansen_cpmgfit_input system test. The '#:@' characters are no longer used in the file names.
  • Updates to the Relax_disp system tests for the lower precision of MS Windows. These fixes allow the tests to pass on MS Windows.
  • Renamed ka parameter to kA, to be consistent with naming conventions. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fix for the r20 parameter, which should be called r20a. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fix for unpacking the parameters correctly. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added the kAB and kBA parameters to the table of all dispersion parameters.
  • Proper ordering of all the dispersion models. See the thread at http://thread.gmane.org/gmane.science.nmr.relax.devel/4498 for details.
  • Added the NS R1rho 2-site model to the dispersion auto-analysis.
  • Added the TP02 model to the dispersion auto-analysis.
  • The tutorial for adding dispersion models in the user manual has been simplified. Most of the text from the dispersion model addition tutorial in the dispersion chapter of the manual has been removed. Instead a link to the tutorial on the wiki is given as this is a much better place for such information (Tutorial for adding relaxation dispersion models to relax).
  • Moved the ordering of the TSMFK01 model. The ordering conventions are mentioned in this post: http://article.gmane.org/gmane.science.nmr.relax.devel/4500. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added the relax_disp.spin_lock_offset user function to the peak intensity wizard of the GUI. This is only for R1ρ-type data and allows off-resonance data to be analysed in the GUI.
  • Data provided for the implementation of the slow-exchange analytic model of Tollinger/Kay (2001). This model was used for fitting in the paper http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1458987.
  • Created the Relax_disp.test_tp02_data_to_tp02 GUI test. This is based on the system test of the same name. This GUI test checks that an off-resonance R analysis is functional in the GUI.
  • Python 3 space fixes for the lib.software.grace.script_grace2images() function. As the script is encoded by strings, the 2to3 program cannot fix this script. Therefore the changes were made by hand.
  • Unicode strings in the dispersion GUI elements are now set up with the compat.u() function.
  • Fix for the y-axis in the per spin dispersion curve plots. This fix follows from the thread http://thread.gmane.org/gmane.science.nmr.relax.devel/4512. The test for CPMG-type data was incorrect and should use the CPMG_EXP variable.
  • Added setup function for the system test of KTeilum_FMPoulsen_MAkke_2006 data. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fixed a spelling mistake in a number of file names. This is for the test suite data located at test_suite/shared_data/dispersion/KTeilum_FMPoulsen_MAkke_2006.
  • Fixes for the units in the dispersion parameter table in the user manual. The units for δω are rad.s⁻¹ when used in the equations, but the value is stored internally in ppm.
  • Truncated the dataset to only one residue, L61. The truncated dataset will be expanded later. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Modified the script file for saving of a truncated base_pipe state file. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added script files for generating a saved state file with R2eff values. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added the first system test for model CR72 for the kteilum_fmpoulsen_makke_cpmg_data. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fix trailing spaces. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fix epydoc HTML markup code. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added some MQ dispersion data taken from the GUARDD program.
  • Modified the Relax_disp.test_dpl94_data_to_dpl94 system test for a relax_disp.exp_type change. This is so that the relax_disp.exp_type user function associates the experiment types with a spectrum ID. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530.
  • Clean up and expansion of the dispersion experiment type variables.
  • Another change to the dispersion experiment type variables.
  • Fixes for the changes to the dispersion experiment type variables throughout the dispersion code.
  • Redesigned the relax_disp.exp_type user function to be associated with spectrum IDs. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously. The user function backend has been moved from specific_analyses.relax_disp.api to specific_analyses.relax_disp.disp_data. A few temporary code additions have been made to keep the user function functional with the current dispersion code.
  • Fixes for the relaxation dispersion system tests for the relax_disp.exp_type changes.
  • The relaxation dispersion system tests requiring the compiled C modules are now skipped when not compiled.
  • Created the specific_analyses.relax_disp.disp_data.loop_exp*() functions. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously. The methods added are loop_exp(), loop_exp_frq(), loop_exp_frq_point() and loop_exp_frq_point_time().
  • Removed the relax_disp.exp_type user function page from the new analysis wizard.
  • Modified the dispersion GUI analysis to handle the relax_disp.exp_type user function changes. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously. The experiment type GUI element has been removed, the CPMG and R1ρ model lists merged into one, and much code for the experiment type removed. The peak analysis wizard will need to be heavily modified for the changes.
  • Added the relax_disp.exp_type user function to the peak intensity loading wizard.
  • Added the experiment type to the spectrum list GUI element. This is only activated if the exp_type_flag argument is True.
  • The CPMG frequency and spin-lock field strength columns are merged in the spectrum list GUI element. The column is now for the dispersion point data, and allows different experiment types to be mixed.
  • The spectrum list GUI element in the dispersion auto-analysis now shows all columns.
  • Removed the temporary FIXMEs from the relax_disp.exp_type user function backend. This is needed to enable the mixed experiment type code to be developed further, but means that the relax_disp branch will be broken for a while.
  • The specific_analyses.relax_disp.disp_data.loop_point() function now requires the exp_type argument. The exp_type can no longer be determined within the loop_point() function. Therefore it must be specified using a function argument. The rest of the module has been updated for this change.
  • Updated specific_analyses.relax_disp.parameters.param_num() for the relax_disp.exp_type changes.
  • Fix for the Relax_disp.test_dpl94_data_to_dpl94 system test. The experiment type is now set for the reference spectrum.
  • Created the new specific_analyses.relax_disp.checks module. This contains many check_*() functions for raising RelaxErrors to tell the user when something is wrong. This will be used to simplify, make more consistent, and fix cdp.exp_type errors in the dispersion code.
  • Added a number of auxiliary functions to specific_analyses.relax_disp.disp_data. These are get_curve_type(), has_exponential_exp_type(), and has_fixed_time_exp_type() and will be used to simplify the dispersion code.
  • Fixes for the specific_analyses.relax_disp.api module for the relax_disp.exp_type change. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously. The loop_exp*() functions are now being used throughout the module. The specific_analyses.relax_disp.checks.check_*() functions are also used to simplify the code and fix changes to cdp.exp_type. And some auxiliary functions from specific_analyses.relax_disp.disp_data are being used as well.
  • Added some functions to specific_analyses.relax_disp.disp_data for checking if certain experiments exist. These are the has_cpmg_exp_type() and has_r1rho_exp_type() functions.
  • The dispersion auto-analysis no longer references cdp.exp_type. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously.
  • Fix for the new loop_exp_frq() dispersion function.
  • A few fixes for the relax_disp.exp_type user function changes. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously.
  • Fixes for the specific_analyses.relax_disp.disp_data.find_intensity_keys() function. This is for the cdp.exp_type changes.
  • Fixes for the relax_disp.plot_disp_curves user function backend for the cdp.exp_type changes.
  • A number of fixes for the relax_disp.exp_type user function changes. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously.
  • Updated the Hansen CPMG data relax save files for the cdp.exp_type changes.
  • Fix for the Relax_disp.test_hansen_cpmgfit_input system test for a new set of errors. The Hansen R2eff values have been recalculated and the errors are now slightly different.
  • More fixes due to the cdp.exp_type change. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously.
  • Updated the r1rho_off_res_tp02 dispersion system test data for the cdp.exp_type changes.
  • Some more fixes for the cdp.exp_type now being dependent on the spectrum ID. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously.
  • Changes so that the target function will handle multiple experiment types. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously. The data structures from return_r2eff_arrays() now have an additional dimension. The new first dimension is that of the experiment type. This affects the values, errors, and missing data structures. This dimension is stripped in the dispersion target function class for the single experiment type models, but will be preserved for the combined models to be added in the future.
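    For the single experiment type models, the new leading dimension is simply indexed away, in the spirit of:
        def strip_experiment_dimension(values):
            """Drop the leading experiment-type dimension when only one experiment type
            is present (a sketch of the idea, not the target function code)."""
            if len(values) == 1:
                return values[0]
            return values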
  • The spectrum list GUI element is now more robust to missing data. The cdp.spectrum_ids data structure no longer needs to exist.
  • The peak intensity wizard requires more than 10 pages. The hardcoded limit of a maximum of 10 wizard pages has been increased to 15. Due to the non-linear ordering of the wizard pages, not all pages are shown, but many are required.
  • The spectrum list GUI element can now handle the cdp.exp_type data structure not existing.
  • Fixes for all of the specific_analyses.relax_disp.disp_data.has_*_exp_type() functions. They now operate when no experiment types have been specified.
  • Redesigned the peak intensity loading GUI wizard for handling multiple experiment types. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously. The logic for the page ordering needed to be changed to be more dynamic. The CPMG and R1ρ pages need to be shown only if the corresponding experiment type exists in the current data pipe. Hence the has_cpmg_exp_type() and has_r1rho_exp_type() dispersion functions are now used by the new methods wizard_page_after_relax_time() and wizard_page_after_cpmg_frq(). A number of now useless flags have also been removed.
  • Added some sanity checks to the dispersion target function class. R1ρ models cannot be used with CPMG-type experiments, and CPMG models cannot be used with R1ρ-type experiments.
  • Fixes for all of the GUI dispersion tests for the changes to cdp.exp_type. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously.
  • Large speed up of the Relax_disp.test_tp02_data_to_tp02 GUI test. By minimising the number of times the peak intensity wizard pages are regenerated, the test time decreased on one system from ~32s to ~12s.
  • Simplified the code of the Relax_disp.test_hansen_trunc_data GUI test.
  • The GUI now warns if the user chooses inappropriate models. When clicking on 'Execute', an error message appears if R1rho models are selected for CPMG data and vice versa. This is simply for more intuitive user feedback.
  • Fix for the relax_disp.exp_type pop up menu entry in the spectrum list GUI element. This was calling relax_fit.exp_type rather than relax_disp.exp_type.
  • Fix for the relax_disp.cpmg_frq pop up menu entry in the spectrum list GUI element. The method associated with the menu entry action_relax_disp_cpmg_frq() was buggy.
  • Fix for the relax_disp.spin_lock_field pop up menu entry in the spectrum list GUI element. Another action method bug - the same as in the last commit.
  • Added two functions for determining if a spectrum ID corresponds to a CPMG or R1rho experiment. These are functions in specific_analyses.relax_disp.disp_data and they are called is_cpmg_exp_type() and is_r1rho_exp_type().
  • Big redesign of the spectrum list GUI element for the dispersion analysis. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/4530, the thread about supporting multiple data types such as SQ+MQ data simultaneously. The popup menu must be generated on the fly, as the CPMG frequency and spin-lock field strength menu entries should only be shown for the appropriate experiment type. Therefore the new generate_popup_menu() method has been added to the gui.components.base_list GUI element. The flags sent into the spectrum list GUI element have also been completely changed to now indicate the analysis type directly.
  • Expanded the Relax_disp.test_hansen_trunc_data GUI test to check the spectrum list GUI element. The popup menu is now tested with the Fake_right_click() trick. And the actions of a number of the menu items, the action*() methods, are tested to see if the user functions are correctly called.
  • Modified many of the spectrum list GUI element action_*() methods for the GUI tests. These now take the 'item' keyword argument which overrides the ListCtrl.GetFirstSelected() call. This ListCtrl call cannot be reliably simulated on all operating systems, so the item keyword argument can be used to explicitly select list items.
  • Fix for setting the relaxation time in the spectrum list GUI element for the dispersion analysis. The popup menu item was calling the relax_fit.relax_time user function and not relax_disp.relax_time.
  • Fix for the action_relax_disp_cpmg_frq() method of the spectrum list GUI element. The relax_disp.cpmg_frq user function was being incorrectly called. This was identified via the Relax_disp.test_hansen_trunc_data system test.
  • Modified the Relax_disp.test_tp02_data_to_tp02 GUI test to check the spectrum list GUI element. The popup menu is now tested in the same way as in the Relax_disp.test_hansen_trunc_data GUI test.
  • Modified the spectrum list GUI element action_relax_disp_spin_lock_field() method for the GUI tests. This now accepts the optional 'item' keyword argument like the other action_*() methods.
  • Bug fix for the spectrum list GUI element popup menu relax_disp.spin_lock_field entry. This was calling the relax_disp.spin_lock_field user function incorrectly. The bug was identified by the Relax_disp.test_tp02_data_to_tp02 system test.
  • Fix for the Mf.test_auto_analysis GUI test due to the spectrum list GUI element changes. The Fake_right_click() class now needs a GetPosition() method.
  • Moved the experiment type setting into the per-spectrum settings. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fixed typo. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added the truncated test data for system test: relax -s Relax_disp.test_kteilum_fmpoulsen_makke_cpmg_data_to_cr72. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Created a relax save file with just R2eff values for the r1rho_on_res_m61 dispersion system tests data.
  • Created 3 new dispersion system tests. These are for checking a new function that doesn't exist yet. The get_curve_type() function will be used to determine if the experiment corresponding to the given ID consists of exponential curves or of fixed time data.
  • Fixed values for system test: relax -s Relax_disp.test_kteilum_fmpoulsen_makke_cpmg_data_to_cr72. The test now passes. The values are compared to a relax run with 500 Monte Carlo simulations. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added "CR72 full" test suite for kteilum_fmpoulsen_makke_cpmg_data. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added a file which sets up a truncated spin system. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Changed the initialization script to use the truncated spin system. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Changed the saved states to the truncated spin system. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fix for the residue index in the test suite when using the truncated spin system. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • The R2eff result save file for the r1rho_on_res_m61 dispersion data now contains the full data set. The previous file did not contain the full exponential curves.
  • The get_curve_type() function now works with the spectrum ID. This specific_analyses.relax_disp.disp_data.get_curve_type() function already existed but it operated on all the loaded data. Now it can handle a single spectrum ID. The count_relax_times() function has been added to aid get_curve_type().
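  A rough sketch of the idea only - the real get_curve_type() and count_relax_times() functions read the relaxation times from the current data pipe, and their logic may differ in detail:

      def count_relax_times(times_by_id, spectrum_ids):
          """Count the distinct relaxation times over the given spectrum IDs (hypothetical arguments)."""
          return len({times_by_id[spectrum_id] for spectrum_id in spectrum_ids})

      def get_curve_type(times_by_id, spectrum_ids):
          """Guess 'exponential' if several relaxation times exist, otherwise 'fixed time'."""
          if count_relax_times(times_by_id, spectrum_ids) > 1:
              return 'exponential'
          return 'fixed time'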
  • The get_curve_type() function is now imported into the dispersion system test module.
  • Modified the Relax_disp.test_dpl94_data_to_dpl94 system test. This is in preparation for another relax_disp.exp_type change - the fixed and exponential parts will be dropped as this can be determined automatically by relax.
  • Changed the relax_disp.exp_type user function front end. The supported types will now be 'CPMG' and 'R1rho', as the fixed time versus full exponential curve distinction can be automatically determined by relax from what the user inputs.
  • Started a system test for model TSMFK01. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Modified the script for the full analysis of all models of CPMG type. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Simplified the dispersion experiment type classes. The supported types will now be 'CPMG' and 'R1rho', as the fixed time versus full exponential curve distinction can be automatically determined by relax from what the user inputs. The EXP_TYPE_* dispersion variables have all changed and many have now been lost. To support the changes, the new specific_analyses.relax_disp.disp_data.loop_spectrum_ids() function has been created. This is a loop over all spectrum IDs whereby the experiment type, magnetic field strength, dispersion point, or relaxation time can be specified to isolate ID subsets (a sketch of this looping logic is given below). Many of the specific_analyses.relax_disp.checks.check_*() functions have also been modified as their logic no longer works. The auxiliary get_times() function has been added to create a per-experiment dictionary of relaxation times so that the checks can be independent of the other dispersion modules.
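  The looping logic mentioned above, as an illustrative generator only - the real loop_spectrum_ids() reads its data from the current data pipe rather than from arguments:

      def loop_spectrum_ids(spectra, exp_type=None, frq=None, point=None, time=None):
          """Yield the spectrum IDs matching the given experiment type, field strength, dispersion point and relaxation time."""
          for spectrum_id, info in spectra.items():
              if exp_type is not None and info['exp_type'] != exp_type:
                  continue
              if frq is not None and info['frq'] != frq:
                  continue
              if point is not None and info['point'] != point:
                  continue
              if time is not None and info['time'] != time:
                  continue
              yield spectrum_id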
  • Updated much of the dispersion test data. The experiment type has been changed in all the scripts and the relax save files updated.
  • Fixed epydoc formatting. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Created some more specific_analyses.relax_disp.checks.check_*() functions. This is for better checking of the dispersion data. The check_spectra_id_setup() function is useful for checking that all of the spectrum information is set up.
  • The checks prior to minimisation of the dispersion models are now more comprehensive.
  • Bug fixes for the specific_analyses.relax_disp.checks.get_times() function. The function is now more tolerant if certain data has not been set up yet.
  • Fixes for some of the R1rho dispersion system test scripts. The relaxation time must be set for the reference spectrum.
  • Fixes for the Relax_disp.test_exp_fit system test - the spectrometer frequency is now set. This information is now compulsory.
  • Converted references of ka and kA to kAB. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Updated the CPMG dispersion analysis sample script for the relax_disp.exp_type user function changes.
  • Updated the user manual for the relax_disp.exp_type user function changes. The script in the prompt/script UI section of the dispersion chapter needed to be updated.
  • Referencing fixes for the dispersion chapter of the user manual.
  • Updated the scripts and save files for the KTeilum_FMPoulsen_MAkke_2006 dispersion data. This is for the recent relax_disp.exp_type user function changes and this allows the tests to pass. Information on how to run the scripts and tee the output to logs has been added, and the logs added to the repository.
  • Added kAB to parameters. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • More fixes for the relax_disp.exp_type user function changes.
  • The relaxation dispersion GUI elements now use the lib.text.gui module for Unicode strings.
  • Some Unicode text fixes in the dispersion GUI analysis for older MS Windows versions.
  • Updated the Grace string for the kAB parameter - it was being shown as kA.
  • Fix for the model list in the GUI - the TSMFK01 model entry was broken.
  • Increased the size of the dispersion model list GUI window so that all models fit without scrolling.
  • Refinement of the dispersion model list in the GUI. Descriptions have been added and the fixed window size adjusted to the best fit.
  • Modified system test after inclusion of 1M GuHCl dataset. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Moved the files into a folder specific to the experiment. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Changed scripts after moving data. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Re-run of data after movement of scripts. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added dataset experiment in 1.01 M GuHCl (guanidine hydrochloride). Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added to the README file for the 1.01 M GuHCl experiment. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Modified doc string for the script analysing all models for residue L61. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added the output from relax after analysis of all models. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Removed the size variable from the dispersion model list GUI window as it is no longer used.
  • Added the kAB and kBA conversion equations to the dispersion parameter table in the user manual.
  • Changed reference to Tollinger et al. instead of Tollinger/Kay. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fixes for the dispersion GUI tests for the changes to the model list GUI element.
  • Added the button for the interatom.define user function in preparation for the MQ dispersion data. This is in the dispersion tab of the GUI.
  • The return_cpmg_frqs() and return_spin_lock_nu1() functions now return numpy arrays. These are functions from specific_analyses.relax_disp.disp_data.
  • Speed ups for the optimisation of all of the R1rho dispersion models. The spin-lock field strength data structure is now converted from Hz to rad.s-1 in the dispersion target function initialisation. Previously the conversion was happening multiple times per target function call. This has a noticeable effect on the test suite timings.
  • Some small speed ups for the TP02 R1rho dispersion model optimisation. Some unneeded calculations and aliases were removed.
  • Added the write-out of 'δω' and 'kAB' for model TSMFK01, when performing auto-analysis. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added the calculation of the tau_cpmg times when the model is TSMFK01. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Optimised the target function for the TSMFK01 model. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added the conversion to kAB from kex and pA. kAB = kex * (1.0 - pA). Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Some more speed ups for the R1rho dispersion models. For many models, the square of the spin-lock field strength is a part of the equations, so this is now pre-calculated when the target function is initialised (see the sketch below).
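  The kind of pre-calculation meant here, as a minimal sketch with a hypothetical class name (the real dispersion target function class stores far more state):

      from math import pi

      class R1rhoTargetFunction:
          """Hypothetical target function holding pre-computed field strengths."""

          def __init__(self, spin_lock_nu1):
              # Hz -> rad.s-1, done once at initialisation rather than per call.
              self.omega1 = [2.0 * pi * nu1 for nu1 in spin_lock_nu1]
              # The square appears in many R1rho equations, so pre-compute it too.
              self.omega1_sq = [w1 ** 2 for w1 in self.omega1]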
  • Added the relaxation dispersion documentation to all of the value user function documentation.
  • Fix for the CPMG dispersion sample script - the numeric solution model name was not correct.
  • Fix for the dispersion model list in the GUI - the R1rho models were mixed up.
  • Added a sample script for an off-resonance R1rho dispersion analysis.
  • Created the empty specific_analyses.relax_disp.optimisation module. This will contain functions and other objects relating to the optimisation of the dispersion models.
  • Fixed a bug where the kex to kAB conversion was not possible if the model does not contain the pA parameter. The conversion is now skipped. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added the conversion to kBA from kex and pA. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
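  The two conversions written out, for a 2-site A-B exchange where kex = kAB + kBA and pA is the population of state A (kAB = kex*(1.0 - pA) is stated above; kBA then follows from the sum):

      def kex_to_k_ab_k_ba(kex, pA):
          """Convert the kex and pA parameters into the forward and reverse rates."""
          kAB = kex * (1.0 - pA)   # A -> B
          kBA = kex * pA           # B -> A, since kex = kAB + kBA
          return kAB, kBA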
  • Added system test for testing conversion to kBA from kex and pA. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fix to allow the system test to pass on Windows with 32-bit Python. The precision was lowered by 2 decimal places. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added system tests for the conversion of kex to kAB/kBA for the models where kex and pA are present. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Modified the headers of the scripts which produce the analysis for the full or truncated data. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Created the dispersion default_value_doc object. This table is needed for the value.set user function.
  • Huge speed win for the relaxation dispersion analysis - optimisation now uses the multi-processor. The relaxation dispersion optimisation has been parallelised at the level of the spin clustering, using Gary Thompson's multi-processor framework. This allows the code to run on multi-core, multi-processor systems, clusters, grids, and anywhere the OpenMPI protocol is available. Because the parallelisation is at the cluster level, there are some situations where running on multiple slaves will be slower rather than faster. This is the case when all spins being studied are clustered into a small number of clusters. It is also likely to be slower for the minimise user function when no clustering is defined, due to the overhead costs of data transfer (although for the numeric models there will be a clear win even in this case). The two situations with a huge performance win are the grid_search user function when no clustering is defined and the Monte Carlo simulations for error analysis. The cluster-level parallelisation idea is sketched below.
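  The idea illustrated with plain Python multiprocessing rather than the multi framework actually used by relax - each spin cluster is an independent optimisation problem, so clusters can be farmed out to separate slaves:

      from multiprocessing import Pool

      def optimise_cluster(cluster):
          """Hypothetical per-cluster optimisation returning (spin IDs, chi-squared)."""
          spin_ids, data = cluster
          chi2 = 0.0   # placeholder for the real minimisation of this cluster
          return spin_ids, chi2

      def optimise_all_clusters(clusters, processes=4):
          """Optimise each spin cluster in a separate process."""
          pool = Pool(processes)
          try:
              return pool.map(optimise_cluster, clusters)
          finally:
              pool.close()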
  • Decreased the number of grid increments in the dispersion sample scripts from 21 to 11. This is a much easier optimisation problem than the other analyses in relax, so 21 increments is overkill. It also takes far too long for some of the models due to the high number of parameters.
  • Removed a tonne of unused imports from the modules of the specific_analyses.relax_disp package.
  • Deselected most of the default dispersion models from the dispersion GUI model list. Now only one analytic and one numeric model are selected per experiment type. This is to hint to the user that maybe they shouldn't just use all models.
  • Added a description item for the TSMFK01 model. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added TSMFK01 to model overview table. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added subsection with TSMFK01 model. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fix for adding TSMFK01 to sample scripts. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Updated the relax_disp_trunc.py script for Flemming Hansen's CPMG test data.
  • Started to create the relax_disp.catia_input user function. The frontend has been written and a stub of a function for the backend. The new specific_analyses.relax_disp.catia module has been created for this.
  • Created the Relax_disp.test_hansen_catia_input system test. This is to check the output of the relax_disp.catia_input user function.
  • The relax_disp.catia_input user function now creates the main CATIA input file and all R2eff data files.
  • Created a script for converting Flemming Hansen's data into CATIA input files. This is for checking the relax_disp.catia_input and relax_disp.catia_execute user functions.
  • Fix for the CATIA main execution file created by relax_disp.catia_input. The CATIA DataDirectory needs a '/' at the end.
  • Improvements to the relax_disp.catia_input user function. On top of general improvements, the global parameter and parameter set files are now created.
  • More improvements for the relax_disp.catia_input user function. The output directory for CATIA results is now an argument for the main backend function. This directory is now also created, as required by CATIA.
  • Implemented the relax_disp.catia_execute user function. This is modelled on the palmer.execute user function.
  • The relax_disp.catia_input user function now has a GUI icon associated with it.
  • Added the CATIA input files generated by relax for Flemming Hansen's truncated CPMG data set.
  • Rearranged the numeric CPMG models in the dispersion model list in the GUI.
  • The main CATIA input file requires the chemical shifts and R1 values to be fixed, even when missing. This is for the relax_disp.catia_input user function.
  • Added Tollinger reference. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Added Tollinger model TSMFK01 to sample scripts. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Fixed several typos of "Is it selected"->"It is selected". A copy-paste error which had spread. Progress sr #3071 - Implementation of Tollinger/Kay dispersion model (2001). Following the guide at: Tutorial for adding relaxation dispersion models to relax.
  • Spacing fixes for the Tollinger01 BibTeX entry for the author initials.
  • Removed some strange characters from the Tollinger01 BibTeX abstract.
  • Fix for some inline references in the dispersion chapter of the user manual.
  • Enabled the parallelisation of Monte Carlo simulations for the relaxation dispersion analysis.
  • Created a set of scripts for testing out the multi-processor abilities of the dispersion analysis.
  • Added Remco Sprangers' truncated ClpP data to test_suite/shared_data/dispersion/Spranger_ClpP. This is the data attached to https://web.archive.org/web/gna.org/task/?7712#comment6, and it will be used for testing the implementation of the MQ NS 2-site model, when added to relax.
  • Concatenated the peak intensity files.
  • Created a relax script for analysing Remco Sprangers' ClpP data with the MQ NS 2-site model. This currently does not work, as the model is absent.
  • Modified the dispersion auto-analysis to check if peak intensity errors have been pre-calculated. This allows the user to perform custom analyses and the auto-analysis will then not overwrite these values.
  • Bug fixes for the averaging of peak intensity errors in the dispersion analysis. This is in the specific_analyses.relax_disp.disp_data.average_intensity() function.
  • Fix for the docstring formula in lib.dispersion.two_point.calc_two_point_r2eff_err().
  • Updated the relax script for analysing Remco Sprangers' ClpP data with the MQ NS 2-site model. The error analysis has been removed as it is identical to what the auto-analysis does.
  • Renamed the directory of Remco Sprangers' CPMG dispersion data to correctly spell his name.
  • Updated the script for Remco Sprangers' MQ CPMG data.
  • Created the Relax_disp.test_sprangers_cpmg_data_auto_analysis system test. This checks the MQ NS 2-site model against Remco Sprangers' MQ CPMG data using the auto-analysis.
  • Fixes for the checks of the new Relax_disp.test_sprangers_cpmg_data_auto_analysis system test. The MQ NS 2-site model checks were still set to those of the Relax_disp.test_hansen_cpmg_data_auto_analysis system test.
  • Added the MQ NS CPMG 2-site model to the dispersion variables. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax. The new dispersion variable MODEL_MQ_NS_CPMG_2SITE has been added. As this is a new data type, multi-quantum CPMG, the new MODEL_LIST_MQ_CPMG and MODEL_LIST_MQ_CPMG_FULL lists have been created.
  • Rearranged the documentation for the relax_disp.select_model user function to simplify the text.
  • Created the lib.text.gui.dwH Unicode string for use with the MQ NS 2-site dispersion model.
  • Added the MQ NS CPMG 2-site model to the relax_disp.select_model user function frontend. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax disp.select model user function front end. Two new sections were added to the user function docstring for the MQ CPMG and MQ R1rho experiment types.
  • Added support for the MQ NS CPMG 2-site model to the relax_disp.select_model user function back end. This is the numeric solution for 2-site exchange for multi-quantum CPMG-type data. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax disp.select model user function back end.
  • Added support for the new 'δωH' dispersion parameter. This is needed for the MQ NS CPMG 2-site model support. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Adding support for the parameters.
  • Improved the RelaxError from the relax_disp.exp_type user function when an invalid experiment type is set.
  • Added the multi-quantum CPMG and R1rho experiment types to the dispersion variables. This is needed for the MQ NS CPMG 2-site model. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Creating a new experiment type.
  • Added relaxation dispersion model lists for the MQ R1rho models. These are stubs as no MQ R1rho models are yet supported by relax.
  • Added support for the MQ dispersion data type to the specific_analyses.relax_disp.disp_data module. This is needed for the MQ NS CPMG 2-site model. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Creating a new experiment type.
  • Added support for the MQ dispersion data types to the rest of relax. This is needed for the MQ NS CPMG 2-site model, and the changes affect the dispersion data checks and the dispersion target functions. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Creating a new experiment type.
  • Updated the Relax_disp.test_sprangers_cpmg_data_auto_analysis system test for 'MQ CPMG' data. This also affects the script in the shared_data test suite directory. The relax_disp.exp_type user function exp_type argument has been changed from 'CPMG' to 'MQ CPMG'.
  • Updated the relax_disp.exp_type user function for the new 'MQ CPMG' and 'MQ R1rho' experiment types. This is needed for the MQ NS CPMG 2-site model. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Creating a new experiment type.
  • Created the MQ NS CPMG 2-site model target function. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The target function.
  • Added the MQ NS CPMG 2-site R2eff calculating function to the relax library. This is the 2-site numeric solution for multi-quantum CPMG-type data. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax library.
  • Updated the MQ NS CPMG 2-site model target function to match the function in the relax library.
  • Decreased the grid increments in the Relax_disp.test_sprangers_cpmg_data_auto_analysis system test. This is to speed up this test.
  • Some small changes for the script for optimising Sprangers' ClpP MQ CPMG data.
  • Added the MQ NS CPMG 2-site model to the dispersion auto-analysis. This is the 2-site numeric solution for multi-quantum CPMG-type data. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The auto-analysis.
  • Added the MQ NS CPMG 2-site model to the GUI model list. This is the 2-site numeric solution for multi-quantum CPMG-type data. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The GUI.
  • Rearranged the 'Implemented models' subsection of the dispersion chapter of the manual.
  • Fixed the MQ NS CPMG 2-site model description in the relax_disp.select_model user function. The magnetisation vector is 2D, not 3D.
  • Added a latex definition for the δωH dispersion parameter and added the 'MQ' abbreviation.
  • Added the MQ NS CPMG 2-site model to the relax user manual. This is the 2-site numeric solution for multi-quantum CPMG-type data. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax manual.
  • Simplified the MQ NS CPMG 2-site model code in lib.dispersion.
  • Fixes for the MQ NS CPMG 2-site model equations in the user manual.
  • Rearrangements of the tables in the dispersion chapter of the user manual. The tables have been shifted out into their own LaTeX files, and all dispersion model tables have been concatenated into one.
  • Edited the MQ abbreviation in the user manual.
  • Fixed some bad referencing in the dispersion chapter of the manual.
  • Docstring fix for the lib.dispersion.mq_ns_cpmg_2site.populate_matrix() function.
  • Fix for a bug in the specific_analyses.relax_disp.disp_data.loop_point() function introduced at r21060.
  • Speed ups for the Relax_disp.test_sprangers_cpmg_data_auto_analysis system test. This test does not pass yet, but this should allow the test to complete in under an hour.
  • Added some value.set user function calls to the script for Sprangers' ClpP data.
  • Added some value.set calls to the Relax_disp.test_sprangers_cpmg_data_auto_analysis system test. These user function calls will be used to test a new concept of fixing parameters in the grid search. The δω and δωH parameters are fixed to the experimental values, as described in the README file from Remco Sprangers (in test_suite/shared_data/dispersion/Sprangers_ClpP).
  • Changed the operation of the grid search for the relaxation dispersion analysis. If a parameter is a simple floating point number and it already has a value, then the grid search over that dimension is fixed. The grid increments are set to 1, and the upper and lower bounds are set to the parameter value. This allows parameters to be pre-set, if known from experiment (a sketch of the idea is given below). They will nevertheless be optimised via the minimise user function.
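  A sketch of the collapsed grid dimension, with hypothetical names - the real grid search setup also handles scaling, constraints and non-float parameter types:

      def grid_dimension(lower, upper, increments, current_value=None):
          """Return the (lower, upper, increments) for a single grid search dimension."""
          if isinstance(current_value, float):
              # Pre-set parameter: fix the dimension to its current value.
              return current_value, current_value, 1
          return lower, upper, increments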
  • Added a printout for the pre-set dispersion parameter skipping in grid search.
  • Updated the dispersion grid search function to user the loop_parameters() function. This is an important fix as the specific_analyses.relax_disp.optimisation.grid_search_setup() function was not matching the rest of the dispersion code, hence the parameters of the grid increments and bounds were not matching the parameter vector, scaling matrix, target function parameter depacking, etc.
  • A bit of help for some of the R1rho dispersion model system tests. These now fail after a fundamental fix. The problem is only due to the very coarse grid search size - a finer grid search allows the solution to be correctly found. However, as this is far too slow, the kex parameter is instead set to be close to the solution to skip a grid search dimension.
  • Some basic fixes for the Relax_disp.test_hansen_catia_input system test. The relax_disp.catia_input user function is not complete, but this allows the Relax_disp system tests to pass.
  • The dispersion multi-processor optimisation code now prints out its own simulation messages. This is to fix bug #21190. The memo object is now fed the spin IDs of the cluster and stores them as the cluster_name variable. This is used by the results object run() method, which is run on the master at the end, to print out a message along the lines of "Simulation X, cluster yyy". Therefore the message is only printed out once the calculation of that slave command is complete and returned to the master.
  • Replaced all usage of scipy.linalg.expm() with lib.linear_algebra.matrix_exponential.matrix_exponential(). This is for the functions of the lib.dispersion package used for the relaxation dispersion numeric solution models. The change eliminates a bug in the scipy function, whose Pade approximation fails horribly for the complex part of the matrix. The real part looks good, but the complex part has nasty truncation artefacts which are propagated and amplified through the Bloch-McConnell equations. A generic eigendecomposition-based matrix exponential is sketched below.
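  One common eigendecomposition-based way of computing a matrix exponential, given here only as background - the actual relax library function may be implemented differently:

      import numpy as np

      def matrix_exponential(A):
          """Return e^A for a diagonalisable (possibly complex) square matrix A."""
          W, V = np.linalg.eig(A)
          return np.dot(V, np.dot(np.diag(np.exp(W)), np.linalg.inv(V)))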
  • Modified the Relax_disp.test_sprangers_cpmg_data_auto_analysis system test so the models are programatically changed.
  • Changes to the Sprangers ClpP data analysis script.
  • Simplified the Relax_disp.test_sprangers_cpmg_data_auto_analysis system test script. The pA and kex parameters are now also pre-set to speed things up.
  • Added a script and results files for the base R2eff model for Remco Sprangers' ClpP data.
  • Fixes for the R2eff data files for Sprangers ClpP data.
  • Artificially increased the errors in Sprangers ClpP data to match the publication. The R2eff errors are simply multiplied by 5, as the errors from the paper cannot be replicated.
  • Converted the Relax_disp.test_sprangers_cpmg_data_auto_analysis system test to not use the auto-analysis. The test has been renamed to Relax_disp.test_sprangers_data_to_mq_ns_cpmg_2site. The optimisation is now for the cluster and has been severely cut back. The MQ NS CPMG 2-site model appears to be rubbish anyway - it looks to be indeterminate with multiple solutions, and possibly infinite lines of solutions. The test now passes, and quickly.
  • Created the Relax_disp.test_sprangers_data_to_mq_cr72 system test. This was copied from the Relax_disp.test_sprangers_data_to_mq_ns_cpmg_2site system test and the model changed to MQ CR72. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The test suite.
  • Added the MQ CR72 model to the dispersion variables. This is the Carver and Richards (1972) 2-site model expanded for MQ CPMG data by Korzhnev et al., 2004. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Adding the model to the list.
  • Added the MQ CR72 model to the relax_disp.select_model user function frontend. This is the Carver and Richards (1972) 2-site model expanded for MQ CPMG data by Korzhnev et al., 2004. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax disp.select model user function front end.
  • Added support for the MQ CR72 model to the relax_disp.select_model user function back end. This is the Carver and Richards (1972) 2-site model expanded for MQ CPMG data by Korzhnev et al., 2004. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax disp.select model user function back end.
  • Created the MQ CR72 model target function. This is the Carver and Richards (1972) 2-site model expanded for MQ CPMG data by Korzhnev et al., 2004. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The target function.
  • Added the MQ CR72 R2eff calculating function to the relax library. This is the Carver and Richards (1972) 2-site model expanded for MQ CPMG data by Korzhnev et al., 2004. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax library. The corresponding target function was updated to input the correct arguments.
  • Added the MQ CR72 model to the dispersion auto-analysis. This is the Carver and Richards (1972) 2-site model expanded for MQ CPMG data by Korzhnev et al., 2004. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The auto-analysis.
  • Added the MQ CR72 model to the GUI model list. This is the Carver and Richards (1972) 2-site model expanded for MQ CPMG data by Korzhnev et al., 2004. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The GUI.
  • A number of fixes for the lib.dispersion.mq_cr72 module.
  • The lib.dispersion.mq_cr72 module now more closely resembles the cr72 module in syntax.
  • Added a relax script for the MQ CR72 model optimised using Flemming Hansen's CPMG data. This is to demonstrate, though not exactly successfully, that the MQ CR72 model can collapse to the CR72 model. The imperfection might be due to truncation artefacts in the sin ratio in the mD and mZ factors. The results files and output log file from the script have been added to the repository as well.
  • Updates for the script and results for the MQ CR72 model optimised using Flemming Hansen's CPMG data.
  • Added a script and results files for optimising Sprangers' ClpP MQ CPMG data to the MQ CR72 model.
  • Bug fix for the dispersion specific loop_parameters() function for the multiple quantum models. The δω and δωH parameters were being interleaved rather than all δω for all spins first and then all δωH. The result was that these parameters were being mixed up in the MQ model target functions when clustering was activated, causing total failure of the MQ models.
  • Added a script and results files for optimising Sprangers' ClpP MQ CPMG data to the MQ CR72 model. This is with all spins clustered. It complements the files which are used for the pre-run results of the auto-analysis.
  • Better spacing in the model table of the relaxation dispersion chapter of the relax manual.
  • Added the Tollinger et al., 2001 reference for the NS CPMG 2-site expanded model. This reference was communicated in a private email.
  • Improvements for the LaTeX maths commands used in the dispersion chapter of the user manual.
  • Added Skrynnikov and Tollinger to the copyright notice in lib/dispersion/ns_cpmg_2site_star.py. I can now see that the code derives from the funNumcpmg.m of the sim_all.tar file (https://gna.org/support/download.php?file_id=18404) attached to https://web.archive.org/web/gna.org/task/?7712#comment5. This sim_all.tar file is the original code of Nikolai and Martin.
  • Modified the relaxation dispersion auto-analysis to handle the nesting of the MQ models. This is specifically the nesting of the analytic MQ CR72 model and the MQ NS CPMG 2-site models. The analytic solution is now used as the optimisation starting point for the numeric model.
  • Used the \imath LaTeX symbol for complex numbers in the dispersion chapter of the manual.
  • Added scripts and results for optimising Sprangers' ClpP MQ CPMG data to the MQ NS CPMG 2-site model. This includes two scripts for non-clustered followed by clustered analysis using the MQ CR72 model in the auto-analysis so its parameters will be used as the optimisation starting point for the MQ NS CPMG 2-site model. The results files for both scripts have been added to the repository.
  • Added the MQ CR72 model to the relax user manual. This is the Carver and Richards (1972) 2-site model expanded for MQ CPMG data by Korzhnev et al., 2004. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax manual.
  • Modified the δωH symbol in the relax user manual.
  • Created a 'TODO' section in the dispersion chapter of the relax user manual. This lists all of the features and models currently missing from the dispersion analysis in relax.
  • Added the original Maple script to the lib.dispersion.ns_cpmg_2site_expanded module docstring for reference. This was sent by Nikolai in a private communication.
  • More expansion of the lib.dispersion.ns_cpmg_2site_expanded module docstring. The link https://web.archive.org/web/gna.org/task/?7712#comment8 to the p3.analytical script in the Gna! tasks has been added, and the contents of the funNikolai.m file from sim_all.tar have been copied into the docstring as well.
  • Epydoc docstring fixes to allow the API documentation to be properly compiled.
  • Python 3 fix for the lib.dispersion.mq_ns_cpmg_2site module. The numpy.linalg.matrix_power requires an integer power, but Python 3 was creating a floating point number for the 'fact' variable.
  • Updated the Relax_disp.test_sprangers_data_to_mq_cr72 system test so it passes. The parameters found in the analysis located in the directory test_suite/shared_data/dispersion/Sprangers_ClpP/mq_cr72_analysis_clustered have been used as the starting point.
  • A number of Python 3 fixes.
  • Python 3 fixes for the dispersion data key generation and the data assembly. The specific_analyses.relax_disp.disp_data.return_param_key_from_data() function was generating different keys for Python 2 and 3. This has been fixed. The return_r2eff_arrays() function has also been modified to correctly check for these keys.
  • Removed an insanely large log file from the Flemming Hansen dispersion data directories. This is the log file for the CPMGFit analysis.
  • A large number of fixes for the relaxation dispersion system tests needed for the fix which changed the format of the keys by which the R2eff/R1rho data is accessed.
  • Updated the Relax_disp.test_sprangers_data_to_mq_ns_cpmg_2site system test to allow it to pass.
  • Created the new Relax_disp.test_hansen_cpmg_data_auto_analysis_numeric system test. This will be used to test a new feature whereby pure numeric models will be used in the auto-analysis.
  • Added the model_class variable to the relaxation dispersion auto-analysis class.
  • Changed the new dispersion auto-analysis class variable model_class to the numeric_only flag.
  • Created list variables of all analytic and numeric dispersion models. These are the MODEL_LIST_ANALYTIC and MODEL_LIST_NUMERIC lists in the module specific_analyses.relax_disp.variables.
  • Fix for the hansen_data.py dispersion auto-analysis script used for a number of system tests. The numeric_only flag was not being handled correctly.
  • Implemented the numeric only option for the dispersion auto-analysis. If the numeric_only flag is set to True, then no analytic models will be used in the final model selection.
  • Completed the Relax_disp.test_hansen_cpmg_data_auto_analysis_numeric system test. This now checks all the optimised parameter values and makes sure that no CR72 model was selected.
  • Added a new button to the button bar in the relaxation dispersion GUI analysis tab. This is a button used to launch the value.set user function to allow the user to pre-set certain parameters so that they are not used in the grid search.
  • Created a GUI element for the numeric_only flag of the auto-analysis for the dispersion GUI tab. This defaults to false to allow all model types to be used.
  • Loosened the Relax_disp.test_sprangers_data_to_mq_ns_cpmg_2site system test to allow it to pass on Mac OS X.
  • Fixes to allow the Mf.test_mf_auto_analysis system test to pass on Mac OS X. The simulated event.GetPosition() method in the Fake_right_click class in the file test_suite/gui_tests/model_free.py must return a wx.Point object and not a Python tuple. The gui.components.base_list.Base_list.on_right_click() method has also been modified with a wx.Yield() call to allow the test to pass.
  • Loosened some of the relaxation dispersion system tests to allow them to pass on MS Windows.
  • Commented out some checks of the Relax_disp.test_hansen_cpmg_data_auto_analysis_numeric system test. This is to allow this test to pass on 32-bit GNU/Linux systems. The numeric model optimisation is incomplete but different between the 32-bit and 64-bit systems.
  • Fix for the relaxation dispersion system test tearDown() method. The rmtree function is no longer used; rather, the test_suite.clean_up.deletion() function is being used to handle the issue of MS Windows not releasing the files in time.
  • Fix for the test_suite.clean_up.deletion() method for another MS Windows problem. Sometimes the failed rmtree() call actually deletes the files and throws the WindowsError error. Therefore the second rmtree() call will throw another WindowsError for the missing files. This is now caught.
  • Elimination of the relaxation dispersion system test tearDown() method. The functionality is fully covered by the base system test method.
  • Shifted all of the numerical dispersion code to use the internal matrix power function. The numpy.linalg.matrix_power() function has been replaced by the relax lib.linear_algebra.square_matrix_power() function (a generic sketch of such a matrix power function is given below). This allows the code to run on many older systems, as the numpy function is relatively new.
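  A generic exponentiation-by-squaring sketch of a square matrix power function - the real relax library code may differ, but this is the kind of function that replaces numpy.linalg.matrix_power():

      import numpy as np

      def square_matrix_power(A, n):
          """Return A to the power n for a square matrix A and a non-negative integer n."""
          result = np.eye(A.shape[0], dtype=A.dtype)
          base = A.copy()
          while n > 0:
              if n % 2:
                  result = np.dot(result, base)
              base = np.dot(base, base)
              n //= 2
          return result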
  • Updated the Relax_disp.test_hansen_cpmg_data_to_ns_cpmg_2site_star_full system test.
  • Fix for the lib.dispersion.cr72 module for early Python versions. For Python 2.5 and earlier, the math.acosh() function does not exist. Therefore the numpy equivalents are now being used.
  • Loosened the checks for the Relax_disp.test_hansen_cpmg_data_to_ns_cpmg_2site_star_full system test. This is to allow the test to pass on 32-bit Linux systems.
  • Caught a divide by zero in the specific_analyses.relax_disp.disp_data.return_offset_data() function. This was identified by turning all numpy warnings to errors.
  • More loosening of the Relax_disp.test_hansen_cpmg_data_to_ns_cpmg_2site_star_full system test. This is now for 64-bit Mac OS X to pass.
  • The dispersion GUI analysis cluster_update() method is now thread safe. This removes many error messages when running the dispersion analysis in the GUI, especially for Mac OS X systems.
  • The dispersion data return_cpmg_frqs() and return_spin_lock_nu1() functions are now safer. These specific_analyses.relax_disp.disp_data module functions can now be called when no data is present.
  • Fixes for the calc user function for the dispersion analysis. This now does something logical for the non-R2eff models. The chi-squared value is now being calculated and stored. Previously this was only calculating the R2eff/R values for fixed relaxation time period data for the R2eff model and failing for all others. Now the pre-existing _back_calc_r2eff() method is used to back-calculate and store the chi-squared value.
  • Redesigned the Relax_disp.test_hansen_cpmg_data_to_ns_cpmg_2site_star_full system test. The R2A0 and R2B0 rates cannot be distinguished for this data, therefore there was no unique solution. This resulted in too much variability between 32 and 64-bit systems as well as different operating systems. Instead a single calc user function call is used to determine the chi-squared value for a fixed set of parameters.
  • Loosened the test_hansen_cpmg_data_to_ns_cpmg_2site_star_full system test for Mac OS X. Even the calc user function does not help, as the results are quite different between systems.
  • The specific API calculate_r2eff() method for the dispersion analysis is now private. This is not part of the API, so it must be made private for the test suite to pass.
  • Fix for the Mf.test_mf_auto_analysis system test on MS Windows. The Fake_right_click.GetPosition() method now returns a valid position. This is the original (10, 10) position.
  • Fix for a bug introduced earlier - the call to calculate_r2eff() must also use the new private name.
  • Fixes for 2 Relax_disp GUI tests to match the previous model-free fixes. The Fake_right_click.GetPosition() method now returns a wx.Point object.
  • Added test data where the spin-lock time, the spin-lock offset and the spin-lock field are all varied. The data is published in "Kjaergaard, M., Andersen, L., Nielsen, L.D. & Teilum, K. (2013). A Folded Excited State of Ligand-Free Nuclear Coactivator Binding Domain (NCBD) Underlies Plasticity in Ligand Recognition. Biochemistry, 52, 1686-1693", with the experimental conditions that "off-resonance R1rho relaxation dispersion experiments on 15N were recorded at 18.8 T and 31 °C" and "using the pulse sequence of Mulder et al. with spin-lock field strengths from 431 to 1649 Hz and offsets ranging from 0 to 10000 Hz".
  • Shifted the NS CPMG 2-site expanded model to the top of the CPMG numerical solutions in the manual. This is because this is the default model which should be used in most cases.
  • A 20-25% speed increase for the NS CPMG 2-site expanded dispersion model. Many repetitive mathematical operations have been eliminated and the equations have been changed to optimise the calculation speed.
  • Modified the settings script for the R1rho test dataset.
  • Fix for the amsmath LaTeX package in the user manual. It needs to be after the hyperref package, as hyperref clobbers a number of amsmath features.
  • Added all of the equations for the NS CPMG 2-site expanded dispersion model to the relax manual. These are essentially the source code modified to look good in LaTeX.
  • Fix for the NS CPMG 2-site expanded model equations in the manual.
  • Better section spacing in the dispersion chapter of the manual. Each model section is now on a new page.
  • Fix for the display of the spin-lock ν1 values in the dispersion GUI tab. This was reported by Troels at http://thread.gmane.org/gmane.science.nmr.relax.devel/4708. The GUI spectrum element at gui.components.spectrum was at fault, the add_disp_point() method was buggy.
  • Fix for the right click pop up menu entry "Set the spin-lock field" in the dispersion GUI tab. This is for the spectra list relax_disp.spin_lock_field user function call. The reference spectra are now detected and the field value set to None. This fix has been propagated to the relax_disp.cpmg_frq user function menu entry as well.
  • Corrected the R1rho settings script for the right calculation of the spin-lock offset, omega_rf, in ppm when the offset values are provided in Hz.
  • Added ZQ and DQ data to the TODO list in the dispersion chapter of the manual.
  • Fix for the relaxation dispersion specific private _cluster_ids() method. This was identified at http://thread.gmane.org/gmane.science.nmr.relax.devel/4716. The cluster data structure was not being referenced correctly.
  • Added some lines to the end of the script UI section of the dispersion chapter about custom protocols.
  • Added a new section to the dispersion chapter of the manual for comparing different dispersion software. This is an expansion of the table in the paper draft.
  • Updates for the dispersion software comparison section of the user manual.
  • Bug fix for the MQ NS CPMG 2-site model. This was found with the aid of private feedback from Dmitry Korzhnev and him emailing his cpmg_fitd9 program. The problem is that he defines the 'n' parameter as half of a CPMG block. The code was however assuming that 'n' is a full CPMG block.
  • Added ZZ exchange as a missing feature to the dispersion chapter of the manual.
  • Added Dmitry Korzhnev's Fyn SH3 domain data for Asp 9 to the repository. This is from Dmitry M. Korzhnev, Philipp Neudecker, Anthony Mittermaier, Vladislav Yu. Orekhov, and Lewis E. Kay (2005) Multiple-site exchange in proteins studied with a suite of six NMR relaxation dispersion experiments: An application to the folding of a Fyn SH3 domain mutant. J. Am. Chem. Soc., 127, 15602-15611 (doi: http://dx.doi.org/10.1021/ja054550e). It consists of the 1H SQ, 15N SQ, ZQ, DQ, 1H MQ and 15N MQ data for residue Asp 9 of the Fyn SH3 domain mutant.
  • Added the results from Korzhnev's cpmg_fit program for the Asp9 Fyn SH3 dispersion data.
  • Created a relax state for the R2eff SQ data of Korzhnev et al., 2005.
  • Added printouts for the overfit_deselect() specific API method for the dispersion analysis. This is to inform the user whenever spins are deselected and why. This is to help avoid user confusion.
  • Started to add some preliminary dispersion results for the Korzhnev data.
  • Started the conversion of the MQ NS CPMG 2-site model to MMQ 2-site. This follows from the post at http://article.gmane.org/gmane.science.nmr.relax.devel/4734.
  • Renamed all of the MQ NS CPMG 2-site modules and functions for the change to MMQ 2-site. This follows from the post at http://article.gmane.org/gmane.science.nmr.relax.devel/4734.
  • Added the ZQ and DQ CPMG experiment types to the dispersion variables. This is needed for the MQ NS CPMG 2-site model change to MMQ 2-site and follows from the post at http://article.gmane.org/gmane.science.nmr.relax.devel/4734. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Creating a new experiment type.
  • Created two new dispersion variables - EXP_TYPE_LIST_CPMG and EXP_TYPE_LIST_R1RHO. These will be used to simplify identifying CPMG vs. R1rho data types.
  • Added support for the ZQ and DQ CPMG data type to the specific_analyses.relax_disp.disp_data module. This is needed for the MQ NS CPMG 2-site model change to MMQ 2-site and follows from the post at http://article.gmane.org/gmane.science.nmr.relax.devel/4734. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Creating a new experiment type.
  • Completed the support for ZQ and DQ CPMG experiment types in relax. This is needed for the MQ NS CPMG 2-site model change to MMQ 2-site and follows from the post at http://article.gmane.org/gmane.science.nmr.relax.devel/4734. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Creating a new experiment type.
  • The MMQ 2-site model target function can now handle multiple CPMG data types.
  • Added dispersion curve plotting to the relax script for Korzhnev et al., 2005 MMQ data.
  • Turned off the δω, δωH > 0 constraint for the MMQ 2-site model.
  • Added a page reference back to the intro chapter in the scripting section of the dispersion chapter. This is to help the user work out how to run a relax script.
  • Fix for the sqrt() function in the dispersion parameter table.
  • Added a section to the dispersion chapter about spin clustering.
  • Removed most of the \clearpage commands in the dispersion chapter of the manual. There was far too much whitespace.
  • Added a 600x600 pixel graphic for the spin cluster for use in the user manual.
  • Added the cluster graphic to the cluster section of the dispersion chapter and improved the text.
  • Proper handling of the back-calculated dispersion data for the new MMQ 2-site model.
  • Shifted the optimisation printouts for the dispersion analysis out of the memo. This improved the ordering of the printed out messages when running on a cluster. Instead of having multiple optimisation printouts followed by a list of the corresponding optimised values, now they are interleaved as they should be.
  • Changed the definition of tex thanks to feedback from Nikolai Skrynnikov. This was previously defined as tex = 1/(2kex) to be compatible with CPMGFit, but has now been changed to tex = 1/kex.
  • Converted the IT99 dispersion model parameters to pA and δω. This is thanks to feedback from Nikolai Skrynnikov. I have no idea why the φex and pA.δω2 parameters were being used in the first place. The model results after the change are identical.
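  For reference, the usual relationship between the old composite parameters and the new pA and δω parameters (assuming the standard definitions; included as background only, not as relax code):

      def phi_ex(pA, dw):
          """The old phi_ex composite parameter in terms of pA and delta omega (pB = 1 - pA)."""
          return pA * (1.0 - pA) * dw ** 2

      def padw2(pA, dw):
          """The old pA.dw^2 composite parameter, as its name suggests."""
          return pA * dw ** 2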
  • Fix for the optimised parameter printout - the parameters are now scaled. This problem was only recently introduced.
  • The dispersion sample scripts now have the NUMERIC_ONLY boolean variable defined.
  • Decreased the number of models presented to the user in the dispersion sample scripts.
  • The model type is now being written to file for the final run of the dispersion auto-analysis.
  • Added the model_type spin variable to the dispersion analysis specific PARAMS data object.
  • Updated the text and Grace files output for the IT99 model in the dispersion auto-analysis.
  • Fixes for the output of the selected model in the dispersion auto-analysis. The correct variable is now used.
  • Proper fix for the printout of the optimised dispersion parameters. The loop_parameters() dispersion function is no longer used, avoiding all requirements on the current data pipe existing. This allows for proper printouts on a MPI cluster.
  • Added a page reference to the multi-processor section in the script section of the dispersion chapter.
  • Added residue 4 to the truncated CPMG data from Flemming Hansen for another test system.
  • Updated the CPMGFit results for Flemming Hansen's CPMG data truncated to 3 spins.
  • Updated the README file explaining how to convert the CPMGFit parameters to those of other software.
  • Updated the relax results for Flemming Hansen's CPMG data for the recent changes.
  • Updated the NESSY results for Flemming Hansen's data. A number of improvements have been added to NESSY including being able to optimise residues with missing data sets. A number of bugs have also been eliminated.
  • Updated the NESSY log for the bug fix of r1105 (in the NESSY repository).
  • Updated the ShereKhan results to include residue :4 and the ShereKhan numeric results. The numeric model in ShereKhan was previously buggy and did not return results. This has been fixed after I sent feedback to the authors.
  • Updated the software comparison document for a subset of Flemming Hansen's CPMG data. This now includes residue 4, the changes in results for all software, new NESSY results due to fixes I made in NESSY, and the new results for the numeric model in ShereKhan.
  • Added all of the new NESSY plots for the truncated Hansen CPMG data.
  • Fixes for all of the system tests using Flemming Hansen's CPMG data subset. The errors are now different and the new residue 4 has to be deselected and ignored.
  • Created the new relax_disp.insignificance user function. This will be used to deselect all spins for which the maximum difference across all of their dispersion curves is below a given cutoff.
  • Improvements for the relax_disp.insignificance user function. Text is now printed out when a spin is deselected, and all spins set to the R2eff model are skipped.
  • The relaxation dispersion auto-analysis now accepts the 'insignificance' argument. This is then used in the relax_disp.insignificance user function prior to the optimisation of each model, so that spins with insignificant dispersion curves are not optimised. The R2eff and No Rex models are skipped for obvious reasons.
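    A minimal sketch of the deselection rule described above, using a hypothetical data structure rather than the real relax internals:

        # r2eff_curves is a hypothetical mapping from a dispersion curve ID to
        # its list of R2eff values for one spin.
        def is_insignificant(r2eff_curves, cutoff=1.0):
            max_diff = 0.0
            for values in r2eff_curves.values():
                max_diff = max(max_diff, max(values) - min(values))
            return max_diff < cutoff

        curves = {'SQ CPMG - 600 MHz': [15.2, 15.4, 15.1], 'SQ CPMG - 800 MHz': [15.8, 15.5, 15.9]}
        print(is_insignificant(curves))    # True, so this spin would be deselected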
  • Created an INSIGNIFICANCE variable for the relaxation dispersion sample scripts. This is to allow the user to eliminate insignificant models.
  • Added the insignificance dispersion auto-analysis argument to the Hansen CPMG data optimisation script.
  • Updated script UI section of the dispersion chapter of the user manual. This is for the recent changes to the sample scripts including the addition of the RESULTS_DIR and INSIGNIFICANCE variables.
  • Added the No Rex model to the R1rho_analysis.py sample script.
  • A number of fixes for the script UI section of the dispersion chapter of the manual. The NUMERIC_ONLY variable is now explained and the R1ρ MODELS list has been changed to a set of reasonable models.
  • A GUI element for the insignificance level for the dispersion auto-analysis has been added. This defaults to 1.0. The user can input any number they wish. Checks were added for non-numerical input.
  • Updated the insignificance argument docstring for the dispersion auto-analysis.
  • The dispersion analysis GUI element now uses the float GUI element for the insignificance level. This makes sure that the user can only enter a number.
  • Created the Relax_disp.test_r2eff_read and Relax_disp.test_r2eff_read_spin system tests. These check the operation of the currently non-existent relax_disp.r2eff_read and relax_disp.r2eff_read_spin user functions.
  • Modified the Relax_disp.test_r2eff_read system test. A new disp_frq argument has been added for the relax_disp.r2eff_read user function.
  • Renamed specific_analyses.relax_disp.disp_data.exp_type() to set_exp_type(). This is to avoid clashes with the 'exp_type' function arguments.
  • Small fix for the printout from specific_analyses.relax_disp.disp_data.set_exp_type().
  • Improved printout from the specific_analyses.relax_disp.disp_data.set_exp_type() function.
  • Improved printout for the relax_disp.cpmg_frq user function.
  • Improved printout for the relax_disp.spin_lock_field user function.
  • Implemented the relax_disp.r2eff_read user function. Both the frontend and backend have been implemented and are functional.
  • Created the Relax_disp.test_hansen_cpmg_data_auto_analysis_r2eff system test. This is to test the full dispersion auto-analysis on Flemming Hansen's CPMG data using the original R2eff data rather than the derived peak heights.
  • Changes for the Relax_disp.test_hansen_cpmg_data_auto_analysis_r2eff system test. The file paths have been changed.
  • Created files of R2eff values and errors for Flemming Hansen's CPMG data.
  • File path fixes for the script of the Relax_disp.test_hansen_cpmg_data_auto_analysis_r2eff system test.
  • The error analysis is now skipped in the dispersion auto-analysis if the R2eff model is not given. It is then assumed that R2eff/R1ρ data has already been loaded into the base data pipe and hence the error analysis is not needed. This avoids fatal errors.
  • The specific_analyses.relax_disp.disp_data.loop_time() function can now handle no relaxation times being set.
  • The relax_disp.r2eff_read user function now prints out all the data which has been read. This feedback is useful for the user to know what has or has not been read into relax.
  • Fix for the dispersion auto-analysis if R2eff data already exists. The data is no longer copied from the non-existent 'R2eff' data pipe.
  • Fixes for the dispersion specific overfit_deselect() method for when R2eff data is read. This now no longer checks for intensity data but rather R2eff data, as intensity data will not be present if R2eff data is directly read rather than peak intensities.
  • Fixes for the Relax_disp.test_hansen_cpmg_data_auto_analysis system test. The setup of the auto-analysis could be simplified as the base data pipe can now contain R2eff data. The R2eff data in the 'R2eff' data pipe was no longer being read.
  • Some small fixes to allow the optimisation of dispersion models when no peak intensity data has been read. This is for when R2eff data has been read instead.
  • The relax_disp.insignificance user function can now handle selected spins with no R2eff/R1ρ data.
  • Fixes for the Monte Carlo simulations in the dispersion analysis when R2eff data has been read. As peak intensity data has not been read, the relaxation time period will not have been set. The _back_calc_r2eff() method can now handle this.
  • Improved the R2eff errors for Flemming Hansen's CPMG data. The errors are now calculated using the data from all spins rather than a truncated subset. The errors will therefore be much more accurate.
  • Fix for return_index_from_disp_point() for when R2eff/R1ρ data is loaded rather than intensities. This specific_analyses.relax_disp.disp_data.return_index_from_disp_point() function was always subtracting 1 from the dispersion point index to take the reference spectrum into account. This however fails if R2eff/R1ρ data is loaded instead.
  • Fixes for the Relax_disp.test_hansen_cpmg_data_auto_analysis* system tests. The Relax_disp.test_hansen_cpmg_data_auto_analysis system test needed updating due to the more accurate R2eff errors. The Relax_disp.test_hansen_cpmg_data_auto_analysis_r2eff system test also needed this change. It also no longer has a spin system for residue 4.
  • Fixes for all of the Relax_disp system tests which use Flemming Hansen's CPMG data. These are needed due to the improved error estimates in the data files.
  • Fix for a duplicated line typo in the Relax_disp.test_hansen_cpmgfit_input system test.
  • Fixed a typo in the user function name in the Relax_disp.test_r2eff_read_spin system test.
  • Fixes for the Relax_disp.test_r2eff_read_spin system test.
  • Implemented the relax_disp.r2eff_read_spin user function. This allows R2eff/R1ρ files for each spin to be read.
  • Fixed a docstring talking about RDC data in the dispersion analysis.
  • Fix for the Relax_disp.test_hansen_cpmg_data_auto_analysis_numeric system test for 32-bit Linux. The NS CPMG 2-site expanded model checks have been turned off again for residue 71 as these results are far too variable.
  • Another fix for the Relax_disp.test_hansen_cpmg_data_auto_analysis_numeric system test. The selected model is no longer checked for residue 71.
  • Loosened the checks for a number of Relax_disp system tests to allow them to pass on 32-bit Linux.
  • Loosened a check for the Relax_disp.test_hansen_cpmg_data_to_ns_cpmg_2site_star system test for Mac OS X.
  • Loosened a check for the test_hansen_cpmg_data_to_ns_cpmg_2site_star system test for MS Windows.
  • Added some polish to the relax_disp.exp_type user function frontend.
  • Created the MODEL_LIST_CPMG_NUM dispersion list variable. This is for defining in one place the list of models which require the number of CPMG blocks.
  • The dispersion optimisation code now checks for the relaxation time period being set for certain models. This is for the models which require the number of CPMG blocks, calculated via the relaxation time and νCPMG.
  • The dispersion target function setup now uses the new MODEL_LIST_CPMG_NUM variable.
  • The dispersion specific check_exp_type() function now accepts the id argument to check individual IDs.
  • Redesigned the relax_disp.r2eff_read and relax_disp.r2eff_read_spin user functions. These now no longer set the metadata (spectrometer frequency and experiment type) themselves. Instead an experiment ID string must be supplied. The spectrometer.frequency and relax_disp.exp_type user functions will therefore need to be called before these R2eff functions.
  • Fixes for the Relax_disp.test_hansen_cpmg_data_auto_analysis_r2eff system test. This is for the changes in the relax_disp.r2eff_read user function.
  • Fixes and completion of the Relax_disp.test_r2eff_read and Relax_disp.test_r2eff_read_spin system tests. These now handle the new user function design and now also check all of the global and spin data.
  • A number of fixes for the dispersion analysis for all the recent changes.
  • Better MMQ data support for the dispersion specific loop_cluster() function. For the models using proton-heteronuclear multi-multiple quantum data, proton spin containers are now skipped as all the data will be analysed from the perspective of the heteronucleus.
  • Conversion of the format of the relaxation dispersion R2eff/R1ρ data structures. These are now lists of lists of lists of numpy arrays instead of pure numpy rank-4 arrays. This only affects a number of related data structures in the dispersion target function class. The main purpose is to prepare to have a different number of dispersion points per experiment, per spin, and per spectrometer frequency.
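    The new layout can be pictured as nested lists indexed by experiment type, spin, and spectrometer frequency, with a numpy array of dispersion points at the innermost level. The index names ei, si and mi below are only assumptions for this sketch:

        from numpy import array

        # values[ei][si][mi] -> numpy array of dispersion points, so the number
        # of points can differ per experiment type (ei), spin (si) and field (mi).
        values = [                                        # ei
            [                                             # si
                [                                         # mi
                    array([15.2, 15.4, 15.1, 14.9]),      # 4 points at the first field
                    array([15.8, 15.5]),                  # only 2 points at the second
                ],
            ],
        ]
        print(values[0][0][1])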
  • The return_cpmg_frqs() and return_spin_lock_nu1() dispersion functions now return lists of lists of arrays. The dispersion data structures are now experiment and spectrometer frequency dependent. Therefore the number of dispersion points can now be different for each.
  • The dispersion target function num_disp_points structure is now variable. The number of points can now be different for each experiment type and each magnetic field strength.
  • Added a header comment to the grace2images.py script to explain its dependence on Grace. This is thanks to feedback from Nikolai Skrynnikov.
  • Better organisation of the models by data type in the dispersion software comparison table in the manual.
  • Added Dmitry Korzhnev's cpmg_fit software to the dispersion chapter of the manual. This is in the last section of that chapter and in the software comparison table.
  • Added the chemex software to the dispersion chapter of the user manual.
  • Updated the GLOVE details in the dispersion software comparison table in the manual.
  • Updates for the TODO section of the dispersion chapter of the user manual. Some of the entries were rubbish.
  • Readded the accidentally deleted \clearpage command to keep the dispersion software table nicely formatted.
  • Added the scripting interface for cpmg_fit to the dispersion software comparison table in the manual.
  • Added constrained optimisation and Monte Carlo simulations to the dispersion software comparison table. This is for the user manual.
  • Added a section on open source licensing to the dispersion software comparison table. This is for the dispersion chapter of the user manual.
  • Updated the GUARDD details in the dispersion software comparison table of the manual.
  • Added a section about programming language to the dispersion software comparison table of the manual.
  • Added a page break for better formatting of the dispersion software comparison table of the manual.
  • Removed a now unneeded midrule from the dispersion software comparison table.
  • Editing and expansion of the dispersion software comparison table in the manual. The optimisation algorithms are now listed, where known. A number of entries and sections have also been rearranged.
  • More updates for the dispersion software comparison in the manual.
  • Updates for the grid search and GLOVE in the dispersion software comparison table in the manual.
  • More updates for the dispersion software comparison table in the manual.
  • Updated the dispersion software comparison table in the manual for GUARDD. This is based on feedback from Ian Kleckner.
  • A bit more editing of the dispersion software comparison table of the manual.
  • Expanded the abbreviations of the user manual for many relaxation dispersion terms.
  • Update for NESSY in the dispersion software comparison table of the manual.
  • Added more R1ρ model references to the bibliography file for the manual. This includes the Trott and Palmer 2004 N-site and the Miloushev and Palmer 2005 2-site models. The Trott and Palmer 2002 R1ρ model reference has been expanded to include all details.
  • Added the TP02 and MP05 R1ρ dispersion models to the manual. These are not implemented in relax, or any of the software in the software comparison section, but are included for completeness. This was pointed out by Art Palmer.
  • Added the Korzhnev et al., 2005 reference to the bibliography file for the manual.
  • Fixes for a number of page numbers in the bibtex file for the user manual.
  • Expanded the numeric dispersion models to include the linear and branched 3-site models in the manual.
  • Removed a typo from the dispersion model table.
  • Rearranged the sections of the dispersion chapter of the manual.
  • Improvements for the supported dispersion model table in the manual. Footnotes have been added to indicate which models are not implemented yet.
  • Updated the TODO section of the dispersion chapter of the manual for the newly listed models.
  • Fix for the figure labelling in the dispersion chapter of the manual.
  • Small LaTeX layout changes to the dispersion chapter file.
  • Updated the dispersion software comparison table for the optimisation in GUARDD. I have added the 'MATLAB interior-point black magic' algorithm as MATLAB is not kind enough to explain what algorithm it is really using.
  • The Arrhenius analysis is also performed by cpmg_fit. This is for the dispersion software comparison table in the manual.
  • Added the TAP03 model to the dispersion chapter of the user manual.
  • Updated some ShereKhan language details in the dispersion software comparison table of the manual.
  • The dispersion GUI analysis now uses graphics.fetch_icon() for all icons. The gui.paths module no longer exists.
  • Created the Relax_disp.test_tp02_data_to_mp05 system test. This was copied from the Relax_disp.test_tp02_data_to_tp02 system test. The r1rho_off_res_tp02.py system test script was modified to handle both tests by allowing the list of models to optimise to be set via the ds.models variable. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The test suite.
  • Added the MP05 model to the dispersion variables. This is the Miloushev and Palmer 2005 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Adding the model to the list.
  • Added the MP05 model to the relax_disp.select_model user function frontend. This is the Miloushev and Palmer 2005 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax disp.select model user function front end.
  • Added support for the MP05 model to the relax_disp.select_model user function back end. This is the Miloushev and Palmer 2005 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax disp.select model user function back end.
  • Created the MP05 model target function. This is the Miloushev and Palmer 2005 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The target function.
  • Added the MP05 R2eff calculating function to the relax library. This is the Miloushev and Palmer 2005 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax library. Just in case git-svn does not preserve the file copying history, the lib/dispersion/mp05.py file was copied from the tp02.py file.
  • Debugging of the MP05 dispersion model - optimisation is now set up correctly. This is the Miloushev and Palmer 2005 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Debugging.
  • Fixes and improvements to the Relax_disp.test_tp02_data_to_mp05 system test. The MP05 model values, which are almost the same as the TP02 model parameters, are now being checked. The optimised parameters are now being printed out to aid in debugging. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Debugging.
  • Speed ups of the Relax_disp.test_tp02_data_to_tp02 and Relax_disp.test_tp02_data_to_mp05 system tests. The optimisation precision and number of Monte Carlo simulations have both been dropped.
  • Added the MP05 model to the GUI model list. This is the Miloushev and Palmer 2005 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The GUI.
  • Added the MP05 model to the dispersion auto-analysis. This is the Miloushev and Palmer 2005 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The auto-analysis.
  • Added the MP05 model to the relax user manual. This is the Miloushev and Palmer 2005 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax manual. The MP05 model was already partly in the manual, however it was listed as unimplemented. All of the tables and the dispersion chapter text have been updated for the model.
  • Modified the R1rho_analysis.py sample script to use the MP05 model. This is the Miloushev and Palmer 2005 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The sample scripts.
  • The limitations of the 'TAP03' dispersion model are now listed in the user manual.
  • The MP05 and NS R1rho 2-site models are now nested in the dispersion auto-analysis. As the MP05 model is valid across all time scales and does not require skewed populations, its optimised parameters can be used as the starting point of optimisation of the NS R1rho 2-site numeric model. This results in huge speed ups of the numeric model as previously a grid search was being performed.
  • Removed all remnants of the MQ R1ρ data type. This data type does not exist and was mostly removed, but some small bits remained.
  • Created the Relax_disp.test_tp02_data_to_tap03 system test. This is the Trott et al., 2003 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The test suite. This was copied from the Relax_disp.test_tp02_data_to_mp05 system test.
  • Added the TAP03 model to the dispersion variables. This is the Trott, Abergel and Palmer 2003 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Adding the model to the list.
  • Added the TAP03 model to the relax_disp.select_model user function frontend. This is the Trott, Abergel and Palmer 2003 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax disp.select model user function front end.
  • Added support for the TAP03 model to the relax_disp.select_model user function back end. This is the Trott, Abergel and Palmer 2003 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax disp.select model user function back end.
  • Created the TAP03 model target function. This is the Trott, Abergel and Palmer 2003 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The target function.
  • Added the TAP03 R2eff calculating function to the relax library. This is the Trott, Abergel and Palmer 2003 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax library.
  • Debugging of the TAP03 dispersion model - optimisation is now set up correctly. This is the Trott, Abergel and Palmer 2003 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Debugging.
  • Debugging of the TAP03 dispersion model. Removed a Unicode character from the lib.dispersion.tap03 module docstring to allow it to be used in Python 2. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#Debugging.
  • The lib.dispersion.tap03 module can now handle negative γ values. This avoids fatal errors during optimisation.
  • Many fixes for the lib.dispersion.tap03 module to match the original equations. The TAP03 model solution is now similar to those of TP02 and MP05.
  • Updated the Relax_disp.test_tp02_data_to_tap03 system test numbers to match the optimised values. These were so close to the MP05 model values that the test was passing anyway.
  • Added the TAP03 model to the GUI model list. This is the Trott, Abergel and Palmer 2003 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The GUI.
  • Added the TAP03 model to the dispersion auto-analysis. This is the Trott, Abergel and Palmer 2003 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The auto-analysis.
  • Added the TAP03 model to the relax user manual. This is the Trott, Abergel and Palmer 2003 R1ρ analytic model for 2-site off-resonance exchange. This follows the tutorial for adding relaxation dispersion models at Tutorial for adding relaxation dispersion models to relax#The relax manual. The TAP03 model was already partly in the manual, however it was listed as unimplemented. All of the tables and the dispersion chapter text have been updated for the model.
  • Added the TAP03 and MP05 models to the abbreviations in the user manual.
  • Improvements to all of the R1ρ model descriptions in the dispersion chapter of the manual.
  • Added a placeholder for the relaxation dispersion citation to the citation chapter of the manual. The bibtex entry for this will need to be updated later once the citation is published.
  • Added support for 1H SQ CPMG data for the MMQ-type dispersion models. The key is to skip the protons in the spin cluster loops and to instead find the proton spin containers attached to the heteronuclei of the spins of the cluster. The EXP_TYPE_PROTON_SQ_CPMG and EXP_TYPE_PROTON_MQ_CPMG experiment type variables have been created to aid this. The MODEL_LIST_MMQ list variable has also been created to more consistently identify the MMQ-type dispersion models. The has_disp_data() function has been created to simplify the finding of dispersion data for a given cluster, experiment type, spectrometer frequency and dispersion point. The has_proton_sq_cpmg() and has_proton_mq_cpmg() functions are used to determine if there is proton dispersion data for the given heteronucleus. The loop_exp() function has been modified to yield the proton SQ and MQ data if present. Similarly the num_exp_types() and return_index_from_exp_type() functions exhibit different behaviour if this data is present. The return_r2eff_arrays() function now assembles all of the proton data on top of the heteronuclear data by fetching the protons attached to the heteronuclei and aliasing the correct spin for the given experiment type.
  • Updated the relaxation dispersion target functions. The input data structures have changed type.
  • Implemented the MMQ 2-site CPMG model equations [Korzhnev et al., 2005a]. The original code from Mathilde Lescanne and Dominique Marion has only been slightly modified for this change as the MQ data treatment in the Korzhnev et al., 2004 reference is the same as in the 2005 reference, but using a different notation. This has been renamed to r2eff_mmq_2site_mq(). The new r2eff_mmq_2site_sq_dq_zq() function has been added to the lib.dispersion.mmq_2site module to allow the SQ, DQ, and ZQ R2eff data to be calculated. This function follows the notation of the 2005 paper. The populate_matrix() function has been modified to only accept one combined chemical shift difference value. It can now also accept different values for R2A0 and R2B0, though the mmq_2site module defaults to R2A0=R2B0.
  • The r2eff_mmq_*() functions of lib.dispersion.mmq_2site now accept different R2A0 and R2B0 arguments. These are set to the same thing within the dispersion target function.
  • Converted the spin specific 'r2', 'r2a', and 'r2b' dispersion parameters from lists to dictionaries. The new parameter keys are based on the experiment type and the spectrometer frequency. These keys are supported by the generate_r20_key() and decompose_r20_key() pair of functions in the specific_analyses.relax_disp.disp_data module. This enables support for different R20 parameters for each experiment type - a key piece of infrastructure for the MMQ models. The relax_disp.select_model user function backend was modified so the parameter list only contains one instance for each of the 'r2', 'r2a', or 'r2b' strings. The specific_analyses.relax_disp.parameters.loop_parameters() function was modified so that the R20 key rather than the frequency index is returned for the R20 parameters. Many other code changes were required.
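    To picture the dictionary-based R20 storage, here is a hypothetical sketch - the key format and values are illustrative and not necessarily those produced by the real generate_r20_key() function:

        def generate_r20_key(exp_type, frq):
            """Build a parameter key from the experiment type and frequency (Hz)."""
            return "%s - %.8f MHz" % (exp_type, frq / 1e6)

        spin_r2 = {}
        spin_r2[generate_r20_key('SQ CPMG', 600.0e6)] = 8.0
        spin_r2[generate_r20_key('SQ CPMG', 800.0e6)] = 9.5
        print(spin_r2)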
  • The R20 values are now correctly handled in the dispersion target function for MMQ-type data.
  • Simplified the MMQ 2-site dispersion model target function. The r2eff_mmq_2site_sq_dq_zq() and r2eff_mmq_2site_mq() functions from lib.dispersion.mmq_2site are now aliased by the experiment_type_setup() target function method. Both functions now have matching arguments.
  • Change of the base relaxation dispersion experiment types. The base CPMG-type experiment has been changed from "CPMG" to "SQ CPMG". This is for better combined proton-heteronuclear SQ, ZQ, DQ, and MQ (MMQ) data support. The relax_disp.exp_type user function now also has the proton SQ and MQ CPMG-types available to select from, rather than the previous behaviour of relax automatically determining the type from the spin type. All of the CPMG experiment type variables in specific_analyses.relax_disp.variables have been renamed for better ordering. Many changes were therefore required.
  • Fix for the specific_analyses.relax_disp.disp_data.num_exp_types() function. This needed updating after the change in the relaxation dispersion experiment type variables.
  • Different relaxation time periods for each experiment are now taken into account in the dispersion code. Previously only the first relaxation time period was being used. This was fine for single data type models, but was preventing the MMQ-type models from working. Now the return_r2eff_data() function of the specific_analyses.relax_disp.disp_data module assembles and returns the relax_times data structure which has two dimensions - the experiment type and the magnetic field strength.
  • Added a relax script to optimise just the 15N SQ CPMG data from Korzhnev et al., 2005. The corresponding log file has also been added to the repository.
  • Created the Relax_disp.test_korzhnev_2005_15N_sq_data system test. This is used to check the optimisation of the 15N SQ CPMG data using the MMQ 2-site model.
  • Fixes for the dispersion optimisation Disp_result_command.run() method. The dispersion point loop is fixed over all dispersion points, but the 'missing' data structure has a variable length.
  • Big redesign of the dispersion point returning and loop_*() functions. These are the functions in the specific_analyses.relax_disp.disp_data module. The return_cpmg_frqs() and return_spin_lock_nu1() functions now no longer take the spins and spin_ids arguments. Instead they determine if a dispersion point exists for the given experiment and spectrometer frequency using the intensity keys and data in the base of the data pipe. The specific_analyses.relax_disp.disp_data.loop_*() functions now accept the return_indices argument which if True will cause all of the relevant experiment type, spectrometer frequency, dispersion point, and relaxation time indices to be returned. The behaviour of the loop_point() method is now different. Instead of looping over all possible dispersion points, it only loops over those points present for the given experiment and spectrometer frequency. This change allows for many simplifications and latent bug fixes in the dispersion analysis.
  • Added cpmg_fit input and results files for the 15N SQ CPMG data from Korzhnev et al., 2005.
  • Added cpmg_fit input and results files for all single CPMG data combinations from Korzhnev et al., 2005.
  • Updated the cpmg_fit results for the Korzhnev et al., 2005 single data sets. The starting point for optimisation is now the solution for using all data together. This allows a much better solution to be found for each script.
  • Created 5 more system tests for checking the optimisation of single sets of Korzhnev et al., 2005 data. These are Relax_disp.test_korzhnev_2005_15n_dq_data, Relax_disp.test_korzhnev_2005_15n_mq_data, Relax_disp.test_korzhnev_2005_15n_zq_data, Relax_disp.test_korzhnev_2005_1h_mq_data, and Relax_disp.test_korzhnev_2005_1h_sq_data. These should individually test out all parts of the MMQ 2-site dispersion model.
  • The cpmg_fit script for the Korzhnev et al., 2005 15N ZQ CPMG data now starts at the relax solution. This is to try to find better solutions for δω and δωH, though it was not so successful.
  • Updated the Relax_disp.test_korzhnev_2005_15n_zq_data system test. It now starts at the relax solution and the test passes as it seems to reasonably match the cpmg_fit results.
  • Reintroduced the F vector into r2eff_mmq_2site_mq() to calculate the magnetisation.
  • Added the cpmg_fit results for the optimisation of all of the Korzhnev et al., 2005 CPMG data. This is for the 2-site model. It includes all proton-nitrogen SQ, ZQ, DQ, and MQ data.
  • Updated the cpmg_fit results for all Korzhnev et al., 2005 data.
  • Shifted the relax results for the 15N SQ Korzhnev 2005 CPMG data to its own directory. The relax save state and grace curve have been added to the repository as well.
  • Created a Grace plot of the failed cpmg_fit results. This is for the Korzhnev et al., 2005 data, using all data sets.
  • Fixes for the cpmg_fit results for all of the data from Korzhnev et al., 2005. The δωH value must start negative, otherwise optimisation will fail to find the correct minimum.
  • Created a Grace graph for the 1H SQ data fitting of cpmg_fit.
  • The dispersion specific overfit_deselect() method now handles the MMQ-type models better.
  • The MMQ 2-site dispersion model can now be optimised if no heteronuclear R2eff data is loaded.
  • Many more fixes for the MMQ-type dispersion models for the proton spin data.
  • Added many new relax results for the CPMG data from Korzhnev et al., 2005.
  • The R2eff data key has been changed in the dispersion analysis. The experiment type has been added to the key so that R2eff data is not mixed up when data from multiple experiments is present.
  • Updated the synthetic TP02 model data for the recent changes.
  • Fix for the dispersion base_data_loop() method for deselected spins. A recent change broke this function when spins were deselected.
  • Updated the truncated CPMG data set from Flemming Hansen to include residue :4. This is deselected in the test suite, but allows the comparison in the shared_data directory to use all three spins (:4, :70, :71).
  • Changed the current data pipe in the relax saved states for Flemming Hansen's truncated CPMG data.
  • Another change of the base relax files of the truncated CPMG data.
  • Bug fix for the relax_disp.cpmgfit_input user function. The νCPMG values need to be doubled and then divided by 1e3 to obtain the 1/τCPMG values with τCPMG in ms.
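    A small sketch of this conversion, assuming the τCPMG = 1/(2νCPMG) relation implied by the doubling and using arbitrary example frequencies:

        nu_cpmg = [50.0, 100.0, 200.0, 400.0]                   # Hz
        inv_tau_cpmg_ms = [2.0 * nu / 1e3 for nu in nu_cpmg]    # 1/tau_cpmg with tau_cpmg in ms
        print(inv_tau_cpmg_ms)                                  # [0.1, 0.2, 0.4, 0.8]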
  • Fix for the relax_disp.sherekhan_input user function for the recent changes.
  • Updated all of the results for the truncated CPMG data from Flemming Hansen in the test suite. The results are now different as the errors are now much more precise as they come from all spin systems rather than just the truncated set of :4, :70, and :71.
  • Bug fix for the Ishima and Torchia 1999 dispersion model. Their value of omega_1eff is defined in terms of νCPMG, hence it is missing the radian unit. This is clearly a mistake, but is probably compensated by their stated rather than derived definitions.
  • Updated all of the relax results for the IT99 model fix.
  • Added the new relax IT99 model results to the software_comparison file. This is for the truncated CPMG data from Flemming Hansen.
  • Fix for the LM63 dispersion model equation in the manual.
  • The CR72 dispersion model descriptions now emphasise the fact that it is not accurate on all time scales. This is for the dispersion chapter of the user manual.
  • Modified the relax_disp.select_model user function CR72 model descriptions. Instead of saying all time scales, the CR72, CR72 full, and MQ CR72 model descriptions instead now say most time scales.
  • Minor equation improvement in the dispersion chapter of the manual.
  • Fix for the relax_disp.plot_disp_curves user function in the GUI. The directory argument was incorrectly set to the 'dir' type rather than 'dir sel' type so it was not shown in the GUI.
  • Created the relax_disp.write_disp_curves user function. This is based on feedback from Nikolai Skrynnikov. The user function will generate one file per spin system and dump all of the R2eff values (measured, back calculated, and errors) into the file.
  • The relax_disp.write_disp_curves user function is now called from the dispersion auto-analysis.
  • Another bug fix for the IT99 model. Nikolai Skrynnikov pointed out that the omega_1eff definition was incorrect and that it should instead be omega_1eff = 4 * sqrt(3) * νCPMG.
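    The corrected effective field can be evaluated directly; the νCPMG value below is only an example:

        from math import sqrt

        nu_cpmg = 100.0                           # Hz, example value
        omega_1eff = 4.0 * sqrt(3.0) * nu_cpmg    # corrected IT99 effective field
        print(omega_1eff)                         # ~692.8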
  • Updated the Relax_disp.test_hansen_cpmg_data_to_it99 system test for the IT99 model fixes.
  • Updated the relax results for the truncated CPMG data from Flemming Hansen. This is needed as the IT99 model has been fixed and the new relax_disp.write_disp_curves user function introduced.
  • Fix for the relax_disp.write_disp_curves user function. The spectrometer frequency in the output files is now in MHz.
  • A small output formatting change for the relax_disp.write_disp_curves user function.
  • The relax_disp.write_disp_curves user function is now more robust for when data is missing.
  • Fix for the setup of the Relax_disp.test_korzhnev_2005_1h_mq_data system test.
  • Fixes for the Relax_disp.test_hansen_cpmgfit_input system test. These are needed as the relax_disp.cpmgfit_input user function has been fixed resulting in different files being produced.
  • Bug fix for the relaxation dispersion model selection. Deselected spins in the current pipe were being skipped, so for model selection between different data pipes this resulted in spins not being used when they should have been.
  • Bug fix for the dispersion specific model_information() method. This can now handle deselected spins with no data.
  • Bug fixes so that the model_loop() method no longer skips deselected spins. This is needed for model selection when the spins from all data pipes are deselected.
  • One last fix for the dispersion analysis for the changes to the model_loop() method.
  • Updated the relax script for optimising Flemming Hansen's CPMG data.
  • Better support for the MMQ-type data dispersion models for the end of the optimisation. The back calculated R2eff values are now handled correctly for the attached proton in the spin system.
  • Updated the Relax_disp.test_korzhnev_2005_15n_dq_data system test so it passes. The optimised values are very similar to that from cpmg_fit, so the code must be functioning correctly.
  • Improvement for the file names in the relax_disp.plot_disp_curves user function. The '_' character is now used between the experiment name and the rest of the file name.
  • Bug fix for the specific_analyses.relax_disp.disp_data.find_intensity_keys() function. This function was not handling multiple experiment types correctly.
  • Created the Relax_disp.test_korzhnev_2005_all_data system test for checking the MMQ 2-site model. This checks against all six data types, 1H SQ, 15N SQ, DQ, ZQ, 1H MQ, and 15N MQ. This is currently set to the values found by cpmg_fit. As this is the true solution, relax should find similar parameter values.
  • Created a Grace plot of the 15N MQ CPMG data fitting from cpmg_fit.
  • Bug fix for the multiple quantum relaxation dispersion models. These require both the heteronuclear and proton chemical shift differences. But the proton difference was being scaled by the heteronuclear Larmor frequency and not the proton frequency.
  • The relaxation dispersion calculate user function now stores the back calculated R2eff values. A number of changes were required for this. The code from the end of the Disp_result_command.run() method was converted to the function specific_analyses.relax_disp.disp_data.pack_back_calc_r2eff(). This allows the back calculation R2eff unpacking code to be shared. The new has_proton_mmq_cpmg() function has also been created to simplify the code.
  • Bug fix for the dispersion calculate user function.
  • Created a script to compare the cpmg_fit and relax solutions for the MMQ 2-site dispersion model.
  • Clean ups and speed ups of the 1H MMQ flag calls.
  • Large improvements to the relax_disp.plot_disp_curves user function including MMQ model support. This user function now handles multiple dispersion data sets better by placing each into a new graph. All graphs have also been improved by matching the colours of the sets for each field strength and using different symbols and line styles to emphasize the data.
  • Fixes for the relax_disp.plot_exp_curves user function for the lib.software.grace changes.
  • The relax_disp.plot_disp_curves user function now shows the experiment type as part of the Y-axis label. This is to allow for easy identification of the experiment when more than one is present.
  • Bug fix for the MMQ 2-site dispersion model target function. The relaxation time was being taken as that of the first experiment for all experiments. This is a relic from the code being copied from a single experiment type model.
  • Converted the MQ CR72 dispersion model to handle MMQ data. This model can now handle proton-heteronuclear SQ, ZQ, DQ, and MQ CPMG-type data. Some debugging might still be required.
  • Fix for the MQ CR72 model for MQ-type data. The check to prevent acos of a number less than 1 has been changed to switch the sign rather than to set the back calculated R2eff to 1e99.
  • Another bug fix for the MQ CR72 dispersion model. The νCPMG value rather than the relaxation time was being used to calculate the R2eff values as the division by 'n' was missing.
  • The relax_disp.plot_disp_curves user function can now handle values of NaN. These are simply replaced by 0.0 to allow Grace to open the file.
  • Fixes for the MQ CR72 dispersion model target function.
  • Removed a latent bug in the MMQ 2-site dispersion model. This was not being seen but might have caused problems in the future.
  • Fix for the MQ CR72 dispersion model target function. The correct R20 values are now extracted from the parameter vector.
  • Improvements for the CR72 and MQ CR72 dispersion model R2eff calculating functions. The numpy.arccosh() function can handle all input values when they are complex, so the checks for the real part being above 1 are not necessary.
  • General improvement for the optimisation of many target functions. For those models which use the τCPMG value, this is now recalculated. This means that if a user inputs truncated νCPMG values, these are corrected when calculating τCPMG so that full precision values will be used for the optimisation.
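    One way to picture this correction, under the assumptions (for this sketch only) that νCPMG is an integer number of CPMG blocks divided by the relaxation period and that τCPMG = 1/(4νCPMG):

        relax_time = 0.03                  # s, total CPMG relaxation period (example)
        nu_cpmg_truncated = 66.667         # Hz, as typed by a user
        n_blocks = int(round(nu_cpmg_truncated * relax_time))   # integer number of CPMG blocks
        nu_cpmg_exact = n_blocks / relax_time                   # 66.666... Hz recovered
        tau_cpmg = 1.0 / (4.0 * nu_cpmg_exact)                  # full precision value
        print(n_blocks, nu_cpmg_exact, tau_cpmg)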
  • Changed the sign of the δω frequency for the ZQ data in the CR72 and MQ CR72 models.
  • Last fix for the MQ CR72 dispersion model. The wrong value was being subtracted from the first eigenvalue - the value of log(Q)/relax_T should not be divided by the number of CPMG blocks.
  • Simplified the first MQ CR72 dispersion model formula in the manual.
  • Created a relax script to compare the MQ CR72 dispersion model results to cpmg_fit. The cpmg_fit solution is used as the input parameters for relax, and then a calc user function call is used to back calculate the R2eff values. These values are then plotted to show the perfect match.
  • Bug fixes for the MMQ 2-site dispersion model. The matrix power factor must be found with the Python math.floor() function and not int() as the latter will sometimes round up.
  • Updated all of the relax vs. cpmg_fit comparison files in the shared data directory. These now show the perfect match between the programs. The cpmg_fit source code was modified to improve the accuracy of the gyromagnetic ratio values.
  • Updated the cpmg_fit results for the CPMG data of Korzhnev et al., 2005. This is using a modified binary wherein the gyromagnetic ratio and optimisation tolerances and maximum number of iterations are far more accurate (to the same level as relax). The cpmg_fit output has also been made more accurate by writing out the values to much higher precision.
  • Fixes for the relaxation dispersion system tests for the changed behaviour of the CR72 model. The optimisation is slightly different as values are now always passed into the numpy.arccosh() function.
  • Eliminated the MODEL_LIST_CPMG_NUM variable. This was far too specific and its misuse caused a bug in the target function of a number of dispersion models.
  • Fixes for a number of dispersion system tests due to the higher accuracy of the τCPMG values. This is required as the τCPMG values have been corrected to eliminate user input truncation artifacts.
  • The Relax_disp.test_korzhnev_2005_all_data system test no longer dumps files in the current directory.
  • Updated all of the cpmg_fit results to use the numeric 2-site CPMG model. This also uses the modified cpmg_fit binary with higher accuracy.
  • Updated the Relax_disp.test_sprangers_data_to_mq_cr72 system test to pass. The MQ CR72 model is now much more accurate due to a number of recent bug fixes.
  • Fixes for all of the Relax_disp.test_korzhnev_2005_*_data system tests. These now start optimising at the solution found by cpmg_fit. All tests now pass.
  • Fix for the legends in the Grace graphs produced by the relax_disp.plot_disp_curves user function.
  • The grid search for the MMQ-type models now looks for negative chemical shift differences.
  • Converted the dispersion API method _back_calc_r2eff() into a function of the optimisation module.
  • Updated the spin-lock field strength data structures to be experiment and field specific. This allows different spin-locks to be used at different field strengths or in different experiments. It brings the structures in line with those for CPMG-type experiments.
  • Updates for the dispersion auto-analysis system tests using Flemming Hansen's data. The grid search increments have been increased by one to make sure the solution is always found.
  • Increased the range of chemical shift differences in the grid search for the dispersion models. The range was too narrow.
  • Fix for the Relax_disp.test_hansen_cpmg_data_auto_analysis system test. The kex value check needed to be scaled back.
  • The relax_disp.plot_disp_curves user function now produces interpolated dispersion curves. For this the new 'num_points' and 'extend' arguments have been added to the user function to give the user better control of this plotting. The interpolated curve is disabled for the numeric CPMG models, as these do not support interpolation, and for the R2eff model, as interpolation is not needed. To support this, the specific_analyses.relax_disp.optimisation.back_calc_r2eff() function has been extended to support the CPMG frequencies or spin-lock field strengths being supplied instead of retrieved. This allows a set of custom dispersion points to be used in the back calculation. The dispersion target function setup was modified to prevent the recalculation of τCPMG values when asked, as interpolation is not compatible with this.
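    The interpolation idea can be sketched as generating a dense νCPMG axis on which R2eff would then be back calculated; the numbers and names below are illustrative only:

        from numpy import linspace

        nu_cpmg_measured = [50.0, 100.0, 200.0, 400.0, 800.0]   # Hz, example points
        extend = 500.0         # extend the axis 500 Hz past the last measured point
        num_points = 1000
        nu_cpmg_interp = linspace(0.0, max(nu_cpmg_measured) + extend, num_points)
        print(len(nu_cpmg_interp), nu_cpmg_interp[-1])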
  • The relax_disp.plot_disp_curves user function now places the X-axis at zero. This is for better visualisation of the residuals.
  • Interpolated curves are now produced for the numeric CPMG-type models. This is for the relax_disp.plot_disp_curves user function. The resolution of these is limited to the frequency of a single CPMG block in the relaxation time period. Therefore the plots are produced slightly differently. To enable this functionality, the new count_exp() and return_relax_times() functions have been added to the specific_analyses.relax_disp.disp_data module.
  • Improved the text for the relax_disp.plot_disp_curves user function.
  • Fix for the interpolation for the numeric CPMG-type models in relax_disp.plot_disp_curves.
  • Updated the relax results files for the CPMG data from Korzhnev et al., 2005.
  • Improvements to the data-type labelling in the dispersion chapter of the user manual.
  • The dispersion model GUI window is now set to a reasonable size for most screens. The scrolled panel now allows all contents to be shown while having the window smaller than its contents. The height of 750 pixels should be visible on the majority of computer monitors. According to Google Analytics, ~13% of visits to http://www.nmr-relax.com have screen resolutions of 1366x768, therefore the dispersion model list window should now not be bigger than their screens.
  • Merged the MQ CR72 dispersion model into the MMQ data type sections in the tables of the manual.
  • Implemented model elimination for the relaxation dispersion analysis. This currently uses the pA limits of 0.501 < pA < 0.999 to determine if a model has failed. To implement this, the dispersion API methods deselect(), eliminate(), get_param_names() and get_param_values() were written. These were copied from the model-free analysis and modified as needed.
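    A minimal sketch of the pA elimination rule stated above (not the actual relax API):

        def model_failed(pA):
            """Flag a dispersion model as failed if pA lies outside 0.501 < pA < 0.999."""
            return not (0.501 < pA < 0.999)

        print(model_failed(0.95))      # False - the model is kept
        print(model_failed(0.9999))    # True - the model is eliminated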
  • Model elimination is now activated in the dispersion auto-analysis.
  • The relaxation dispersion target function class can now handle cpmg_frqs arguments of None. This is useful for R1ρ models.
  • Bug fix for the recently added dispersion API eliminate method. This was accidentally always eliminating the model.
  • Created a new section in the dispersion chapter of the manual covering optimisation. This describes the auto-analysis, the chi-squared function, the grid search values, how optimisation is implemented, the linear constraints used, the diagonal scaling, model elimination, and the use of OpenMPI. It absorbs the clustering section.
  • Improvements for the dispersion API eliminate() method.
  • Added text about the relax_disp.insignificance user function to the dispersion chapter of the manual.
  • Updates for the MMQ 2-site model equations in the manual.
  • Added the tex > 1.0 model elimination rule for the dispersion analysis.
  • Updated the description of the dispersion auto-analysis in the manual.
  • Added a MC simulation elimination section to the dispersion chapter of the manual.
  • Fix for the new analysis GUI wizard - two model-free analysis buttons were present. This is due to an imperfect merge of the relax_disp branch back to trunk.
  • Fixes for the lib.software.grace for an imperfect merger of the relax_disp branch.
  • Fix for the Noe.test_noe_analysis system test. The old Grace file was turning the legend first off and then on, but now this is fixed.
  • Fixes for the Relax_disp.test_tp02_data_to_tp02 GUI test. This should have been fixed in the relax_disp branch.
  • Fix for the Wiz_window.setup_page() method. The user function SetValue() methods are no longer called but instead the Uf_page.SetValue() method is used to set up user function arguments. This is important as this later method can properly handle the free file format arguments and other special arguments whereas the former cannot.
  • Attempts at fixing and improving the Relax_disp.test_hansen_trunc_data GUI test. These changes have uncovered a spin ID updating problem in the relax data store after calling the residue.delete user function.
  • Fix for two system tests to prevent relax save files from being dumped in the installation directory. This would have been fatal for the test suite on systems with relax installed as root.
  • Fix for the GUI tests for a wxPython 2.9 ListCtrl.HitTest() bug. This only affects the relax test suite. The suite should now pass on all systems.
  • Shifted the dispersion chapter of the user manual to its correct position. Somehow during the relax_disp branch merger, this chapter was shifted into the "Advanced Topics" partition of the manual.
  • API documentation fix for test_suite.system_tests.relax_disp.Relax_disp.setup_korzhnev_2005_data().
  • Limited the optimisation time in the N_state_model.test_populations system test. This test can take a huge amount of time on Mac OS X and MS Windows (~6 seconds on Linux, ~360 seconds on Mac OS X, and ~120 seconds on MS Windows, all on similar hardware). Now the minimise user function max_iter argument is set to 2000 to speed the test up.
  • Increased the speed of the N_state_model.test_populations system test again. The maximum number of iterations for the minimise user function is now set to 500.
  • Fix for the N_state_model.test_populations system test on Mac OS X. The optimisation on Macs is not as precise as on Linux, so the test was loosened.
  • Fix for the Relax_disp.test_m61_exp_data_to_m61 system test on 32-bit Mac OS X. The optimisation precision was not great enough to find the minimum, so the grid search increments have been increased from 3 to 4.
  • Loosened all of the Relax_disp.test_korzhnev_2005_*_data system tests to pass on Mac OS X. This should hopefully fix the tests for MS Windows as well.
  • More loosening of the Relax_disp.test_korzhnev_2005_*_data system tests to pass on Mac OS X. These problems were identified on a different test machine.
  • Loosened all checks of the N_state_model.test_populations system test. This is needed for the woeful optimisation capabilities of Mac OS X (and partly MS Windows as well).
  • Avoided some wxPython 2.9.4.1 bugs in the Relax_disp GUI tests. wxPython is quite buggy, so certain checks and tests cannot be performed.
  • Fix for the specific_analyses.relax_disp.optimisation.back_calc_r2eff() function. The R2eff error data structure when the cpmg_frqs or spin_lock_nu1 argument is supplied was all zeros, whereas it should all be ones. This was causing many divide by zero numpy warnings to appear on certain operating systems (Mac OS X).
  • The relax system test base tearDown() method should now be fail proof. Most code is now wrapped in a 'try: except: pass' block to catch all failures.
  • Improvements in the test_suite.clean_up.deletion() function. It is now more fail safe on Python 3 by completely avoiding the WindowsError checking.


relax 3.0 series

relax 3.0.2

  • Updated the Release Checklist document rsync instructions to allow resumed uploads. This is needed if the internet connection has been cut, as uploading can take a long time.
  • The test_suite.clean_up.deletion() function can now handle the case of missing files and directories. This problem was occurring in the relax_disp branch for some of the system tests.
  • Created the is_int() and is_num() functions for the lib.check_types module.
  • The value.write user function can now properly handle non-numeric data types. This allows the spin specific model name to be written to file, or any other string defined in the specific analysis PARAMS data object.
  • The multi-processor section of the manual is now labelled in the correct position.
  • Created a special GUI analysis element for floating point numbers. This allows for user input of floating point numbers into one of the GUI analysis tabs. If the input is not a number, the original value will be restored.
  • Created the new pipe_control.spectrum.add_spectrum_id() function. This is used to handle the creation of spectrum ID strings in the data store. This way new spectrum IDs can be created from different parts of relax in a controlled way.
  • Created the pipe_control.spectrometer.check_frequency() function to standardise this check.
  • Created the pipe_control.spectrometer.get_frequency() function for returning the frequency for a given ID.
  • The pipe_control.spectrum.add_spectrum_id() function now returns silently if the ID already exists.
  • Improvements to the pymol.view and molmol.view user functions for finding the PDB files. Now the possibility that this is being run from a results subdirectory is taken into consideration. If the file cannot be found, the os.pardir parent directory is added to the start of the relative path and the file checked for.
  • The rdc.read user function will now skip all lines of the RDC file starting with '#'. To include molecule identifiers at the start of the line will now require quotation marks.
  • Shifted the RDC and PCS assembly methods from the main class to the data module for the N-state analysis.
  • Created the pipe_control.mol_res_spin.is_pseudoatom() function to simplify pseudo-atom handling.
  • Created the pipe_control.mol_res_spin.pseudoatom_loop() function. This is used to loop over the spin containers corresponding to a given pseudo-atom.
  • Added a PDB file and RDC values (and absolute J+D and J) for propylene carbonate. This will be used for testing of pseudo-atoms in the N-state model analysis.
  • Renamed the propylene carbonate files to the correct name of pyrotartaric anhydride.
  • Created two new system tests based on the new pyrotartaric anhydride long range (1J, 2J & 3J) RDC data. The first (N_state_model.test_pyrotartaric_anhydride_rdcs) optimises an alignment tensor using long range signed RDC data. The second (N_state_model.test_pyrotartaric_anhydride_absT) optimises an alignment tensor using long range absolute T (J+D) data. Both test long range data together with methyl group pseudo-atom data.
  • Added all of the pyrotartaric anhydride RDC generation scripts and files. This is simply for reference and reproducibility.
  • Modifications for the pyrotartaric anhydride system test script. The grid search is now much quicker, and the RDC correlation plots are now sent to DEVNULL.
  • Added the return_id argument to the pipe_control.mol_res_spin.pseudoatom_loop() function. This will then yield both the spin container and spin ID string. This mimics the spin_loop() function.
  • Added proper pseudo-atom support for the RDCs in the N-state model analysis (a short averaging sketch follows this list). This involves a number of changes. The pseudo-atom specific functions ave_rdc_tensor_pseudoatom() and ave_rdc_tensor_pseudoatom_dDij_dAmn() have been added to the lib.alignment.rdc module. These simply average the values from the equivalent non-pseudo-atom functions. The return_rdc_data() function in the specific_analyses.n_state_model.data module has been modified to assemble the RDC constants and unit vectors for all members of the pseudo-atom and add these to the returned structures, as well as a new list of flags specifying if the interatom pair contains pseudo-atoms. The N-state model target function and gradient have been updated to send the pseudo-atom data to the new lib.alignment.rdc module functions.
  • J couplings for the N-state analysis are now properly handled for pseudo-atoms. The measured J couplings for the members of the pseudo-atom should not be used, but rather that of the pseudo-atom spin itself (as the former does not exist).
  • Eliminated the old pseudo-atom handling in the N-state model specific return_rdc_data() function. This was multiplying the RDCs by -3 to handle the tetrahedral geometry of the 1J methyl RDCs. However this approach is not valid for non-methyl pseudo-atoms or for 2J, 3J, etc. data.
  • A RelaxError is now raised for the N-state model optimisation with gradients when T = J+D data is used. The gradients for this data type are not implemented yet, so it is better to prevent the user from using this.
  • The N_state_model.test_pyrotartaric_anhydride_absT system test now uses simplex optimisation to pass. The Newton algorithm cannot be used as the gradients for T = J+D type data have not been implemented.
  • An RDC error of 0.0 will now deselect the corresponding interatomic data container. This can be used for simpler pseudo-atom handling.
  • Updated the menthol long range RDC data file to include pseudo-atom member distances.
  • Renamed the interatomic_loop() function 'selected' argument to 'skip_desel'. This is to match the spin_loop() function arguments.
  • The interatom.unit_vectors user function now calculates the unit vectors for deselected containers. This is useful for pseudo-atom handling where the interatomic containers to the pseudo-atom members have already been deselected.
  • Updated the value checking for the N_state_model.test_absolute_rdc_menthol system test. The pseudo-atoms are now properly handled so the result is now much better.
  • The stereochemistry auto-analysis can now accept a file of interatomic distances. This is for better pseudo-atom support.
  • The N-state model specific check_rdcs() function now properly handles pseudo-atoms.
  • The pipe_control.rdc.q_factors() function now properly handles pseudo-atoms. If pseudo-atoms are present, then the 2Da²(4 + 3R)/5 normalised Q factor is skipped (a hedged sketch of this normalisation follows this list).
  • Created the N_state_model.test_pyrotartaric_anhydride_mix system test. This is used to demonstrate a bug in the N-state analysis using mixed RDC and long range absolute J+D data.
  • Movement of N-state model specific code to the analysis neutral pipe_control package. Many of the functions of the specific_analyses.n_state_model.data module relating to alignment tensors, RDC data and PCS data have been shifted into the pipe_control package modules align_tensor, rdc, and pcs respectively. This allows these functions to be made more general and the code to be shared with the frame order analysis or any future analysis using such data, removing some code duplication.
  • Created two new warnings, RelaxNucleusWarning and RelaxSpinTypeWarning, to match the equivalent errors.
  • Added some RDC data checks to the N_state_model.test_pyrotartaric_anhydride_rdcs system test. This is to demonstrate a problem with the data assembly function pipe_control.rdc.return_rdc_data().
  • Clean ups and improvements for the pipe_control.rdc.check_rdcs() function. Pseudo-atoms are now handled much better and correctly in all cases. And many RelaxErrors have been converted to RelaxWarnings followed by a 'return False' statement.
  • Created the pipe_control.rdc.setup_pseudoatom_rdcs() function. This is used to make sure that the pseudo-atom interatomic systems (the containers from heteronucleus to pseudo-atom and from heteronucleus to pseudo-atom members) are properly set up. It will deselect the interatomic containers if incorrectly set up or if they are not part of the main pair.
  • Added quotation marks around a number of spin IDs with molecule names in some RDC data files. This is for the N-state model population model data used in the test suite.
  • The rdc.read and j_coupling.read user functions now ignore all lines starting with the # character. This is to remove all comment lines silently. Therefore if spin IDs are used which contain the molecule name, then they should be wrapped in quotation marks.
  • Updated a number of RDC test suite data files to have quotation marks around the spin IDs. This is to allow the molecule identifier to be present while not being mistaken for a comment line.
  • Updated some of the RDC data files used in the frame order system tests. The spin IDs are now in quotation marks as the molecule name is included. This is to prevent the line being removed as a comment.
  • Changes to the setup_pseudoatom_rdcs() function and renamed it to setup_pseudoatom_rdc(). The interatomic loop is now within the function to make sure that all is completed before the containers are accessed.
  • Started to add better pseudo-atom support for the PCS. The new pipe_control.pcs.setup_pseudoatom_pcs() function has been added to deselect the spins which are members of a pseudo-atom. The return_pcs_data() function in the same module now calls this function and builds a list of pseudo-atom flags for use in the target function (though it is still unused).
  • Finally eliminated the gui.paths module, replacing it with graphics.fetch_icon() calls. The GUI was using a mix of the old gui.paths module and the fetch_icon() function.
  • Created the pipe_control.sequence.return_attached_protons() function. This is used to return a list of proton spin containers attached to the given spin.
  • Improved Grace graph scaling and arrangement when multiple graphs are present. The lib.software.grace.write_xy_data() function now executes the 'autoscale' command for each graph and executes the 'arrange' command to lay out the graphs automatically.
  • The Grace plotting (via lib.software.grace) now fully supports the plotting of multiple graphs.
  • Improvements to the lib.software.grace module. The set colours are now applied to all set objects. And the axis label and tick sizes are now much smaller.
  • Created the --numpy-raise command line option. When this is set, all numpy warnings will be converted to errors (see the sketch after this list). This is to aid debugging by locating where the warning messages are coming from, as they otherwise appear as RelaxWarnings with no indication of where the problem lies.
  • The lib.software.grace module now supports setting the X and Y axes at zero.
  • Modified the model list GUI window. This can now be resized and it uses a scrolled panel to allow the contents of the window to be bigger than the window size.
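
For the pseudo-atom RDC averaging described above, a minimal sketch is given below. The function names rdc_tensor() and ave_rdc_pseudoatom() are illustrative only and are not the real lib.alignment.rdc signatures; numpy arrays are assumed:

    # Minimal sketch only: the pseudo-atom RDC is taken as the average of the
    # RDCs back-calculated for each member spin from its own unit vector and
    # dipolar constant.  Function names here are hypothetical.
    from numpy import dot

    def rdc_tensor(dj, vector, A):
        """Back-calculate one RDC from the 3x3 alignment tensor A."""
        return dj * dot(vector, dot(A, vector))

    def ave_rdc_pseudoatom(dj_list, vectors, A):
        """Average the member RDCs to obtain the pseudo-atom RDC."""
        rdcs = [rdc_tensor(dj, vect, A) for dj, vect in zip(dj_list, vectors)]
        return sum(rdcs) / len(rdcs)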
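
For the 2Da²(4 + 3R)/5 normalised Q factor mentioned above, a hedged sketch of the usual definition is given below; Da and R are assumed to be the alignment tensor anisotropy and rhombicity, and the exact expression used by pipe_control.rdc.q_factors() may differ:

    # Hedged sketch of the normalised RDC Q factor.  With pseudo-atoms present
    # this normalisation is skipped, as the per-interaction dipolar constant is
    # no longer well defined.
    from math import sqrt

    def q_factor_norm(D_meas, D_calc, Da, R):
        """Q factor with the 2Da^2(4 + 3R)/5 normalisation (assumed form)."""
        sse = sum((m - c)**2 for m, c in zip(D_meas, D_calc))
        norm = len(D_meas) * 2.0 * Da**2 * (4.0 + 3.0*R) / 5.0
        return sqrt(sse / norm)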
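
The effect of the --numpy-raise option can be sketched with standard numpy calls as below; the actual command line handling in relax is not reproduced here:

    # Turn numpy floating point warnings (divide-by-zero, overflow, invalid
    # values) into exceptions, so a traceback shows where they originate.
    import numpy

    def numpy_raise(flag=True):
        numpy.seterr(all='raise' if flag else 'warn')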


relax 3.0.1

  • The font size is no longer set for the latex2html compiled user manual.
  • A number of updates and improvements to the document explaining how to set up a Mac OS X framework. This Framework Python setup is used to build the binary distribution files.
  • Updated the Mac Framework testing script to handle 4-way binaries (ppc64 included).
  • Better support for 4-way binaries in the Mac OS X Framework detection script.
  • Added support for the 'current ar archive random library' file type in the Mac OS X Framework testing script.
  • Added py2app to the Mac OS X Framework setup instructions.
  • Shifted code from pipe_control.spectrum to the new lib.spectrum.peak_list relax library module. This follows from http://thread.gmane.org/gmane.science.nmr.relax.devel/3972/focus=4347.
  • Added a special script for locating all Python versions and printing out the installed modules.
  • Large change to the free file format GUI element for the user functions. The GUI element used in the user function wizard windows has been modified to have both a 'default' form, which is the previous design, and a 'mini' form which is now used for the user functions. This mini form only uses 1 row, rather than the default of 6 or 8. It is a read only text element with a button that launches the free file format window. The amount of space saved is huge.
  • Improved the text for the mini free file format GUI element.
  • Updated all of the user function GUI window sizes for the 'mini' free file format GUI element. This allows much more text of the description to be displayed.
  • Updated the Mac Framework setup document to help with scipy compilation problems.
  • Improved the Python seeking and module version print out script for symlinks. This should now be much more capable of finding all Python versions on a system.
  • Added support for the Mac OS X Modelfree4 binary results to the Palmer.* system tests. The Mac OS X Modelfree 4.20 binary produces different results than the Linux binaries, mainly due to a compilation problem. In the Linux binaries, the results are written out to 4 decimal places. In the Mac binaries, the results are instead written out to 4 significant figures. Therefore the number of decimal places is much lower than in the Linux results.
  • Syntax error fix for one of the unused scripts in the relax test suite shared data directories. This problem was encountered by Jack Howarth <howarth att bromo dott med dott uc dott edu> and communicated in a private message. The issue was found by fink. This script is never used and will never be used again - it is only there for reference.
  • Modification of the spectrum.read_intensities user function front end. The heteronuc and proton arguments have been eliminated. Instead the new dim argument is used to associate the data with the spins of any dimension in the peak list.
  • Replaced the 'heteronuc' and 'proton' arguments of the spectrum.read_intensities user function backend with 'dim'.
  • Created the new lib.spectrum.objects module. This will hold temporary data structures for representing peak lists and other spectral data. The module currently contains the Peak_list class which is used to hold peak list data.
  • Started to shift the spectrum.read_intensities user function backend to use lib.spectrum.peak_list.
  • The pipe_control.spectrum.read_intensities() function now works with the Peak_list object.
  • The Peak_list object is now used by the lib.spectrum.peak_list.read_peak_list() function.
  • The lib.software.sparky.read_list_intensity() function now operates on the Peak_list object.
  • Changed the spectrum.read_intensities dim argument default to ω2 and improved the long description.
  • Fix for the assignment handling in the lib.software.sparky.read_peak_list() function. The first element is usually the indirect dimension or ω2.
  • Fix for many of the Peak_list system tests for the user function argument changes. The heteronuc and proton arguments have been replaced by the dim argument.
  • The lib.software.xeasy.read_list_intensity() function now operates on the Peak_list object.
  • The lib.software.nmrview.read_list_intensity() function now operates on the Peak_list object.
  • The lib.spectrum.peak_list.intensity_generic() function now operates on the Peak_list object.
  • Fixes for the pipe_control.spectrum.read() function. An error was referencing a now non-existent variable and the docstring has been fixed.
  • The Peak_list object can now store peak intensity names. This is for peak lists such as from NMRPipe seriesTab files where the peak list covers multiple spectra.
  • The NMRPipe seriesTab peak lists are now supported through the Peak_list object.
  • Unit test fixes for the spectrum.read_intensities user function argument changes.
  • Fixes for a few system tests for the spectrum.read_intensities user function argument changes.
  • Fixes for a few GUI tests for the spectrum.read_intensities user function argument changes.
  • Changes for the spectrum.read_intensities user function dim argument. The default is now ω1, the indirect dimension in a 2D experiment. The description has also been fixed.
  • Fixes for all of the peak intensity reading functions - the ω1 and ω2 dimensions were swapped.
  • Updates to the sample scripts for the spectrum.read_intensities user function argument changes.
  • Updates to the user manual for the spectrum.read_intensities user function argument changes.
  • Created the Chemical_shift.test_read_sparky system test for the reading of chemical shifts. This is for the reading of shifts from a Sparky peak list. It tests the currently non-existent chemical_shift.read user function.
  • Created some incredibly basic icons for the chemical shift user functions. These are simply an ω symbol and will need to be replaced by something better in the future.
  • Created the chemical_shift.read user function. This includes both the front and back end code.
  • Shifted all the modules from lib.software to do with peak lists to lib.spectrum. This is for a more logical organisation, as these modules are solely used by the lib.spectrum.peak_list module.
  • Renamed all of read_*() functions of the lib.spectrum modules for consistency. These functions will now be used to read all types of data from a peak list, from the assignments to chemical shifts to peak intensities, and everything in between.
  • Modified the peak list object. The peak list dimensionality variable is no longer private, and many values of None are now converted to lists of None so that the peak list data is easier to handle.
  • Fix for the proton name in the new Chemical_shift.test_read_sparky system test.
  • Expanded the functionality of the lib.spectrum.sparky.read_list() function. Now the dimensionality of the peak list is automatically determined, and all peak lists from 1D to 4D are supported. The chemical shifts are also automatically detected and extracted from the list and placed into the peak list object. The peak intensity data is also automatically detected, therefore the int_col argument is no longer used.
  • The lib.spectrum.sparky.read_list() function can now auto-detect the peak volume column and use it for intensities.
  • Created the Chemical_shift.test_read_xeasy system test. This is for checking the reading of chemical shifts from a 2D XEasy peak list.
  • Implemented the reading of chemical shifts in the lib.spectrum.xeasy.read_list() function.
  • Created the Chemical_shift.test_read_nmrview system test. This, if not obvious from the name, is for checking the reading of chemical shifts from an NMRView peak list.
  • Implemented the reading of chemical shifts in the lib.spectrum.nmrview.read_list() function.
  • Assignments can now contain lowercase letters in Sparky and NMRPipe seriesTab peak lists.
  • Fix for the unit test for the reading of intensities from Sparky peak lists.
  • Updated the nmrPipe processing script in the relax user manual. This is in response to the post by Troels Linnet at http://thread.gmane.org/gmane.science.nmr.relax.user/1520. The text has also been expanded to better explain spectral processing.
  • Improvements for the description of the NMRPipe processing script in the R1/R2 chapter of the user manual.
  • LaTeX fix for the curvefit chapter of the user manual.
  • The isInf() and isNan() functions of lib.float can now handle values of None. If None is encountered, the functions simply return False (a minimal sketch follows this list).
  • The model-free optimisation code now handles minfx returning nothing. This is due to the fix of bug #21001 in relax, which is really a fix for an upstream minfx bug #21090.
  • Created the Mf.test_bug_21079_local_tm_global_selection system test. This is to catch bug #21079.
  • Extended the Mf.test_bug_21079_local_tm_global_selection system test for all Monte Carlo simulation steps.
  • The model_free.select_model user function GUI element now uses unicode for the model parameters. The τ character is now used for the tm, te, tf, and ts parameters. And a superscript 2 is used for the order parameters.
  • The model lists in the model-free GUI auto-analysis now use unicode for the S2 parameters.
  • The peak intensity wizard in the GUI is now more robust. The wizard_update_ids() method can now better handle missing data. This is encountered if a user skips the first elements of the wizard.
  • Created Wiz_window.setup_page() for user function wizard pages to allow for simpler GUI tests. This method can be used to setup any user function wizard page with all its arguments set. It accepts all keyword arguments and sets these for the wizard page, translating to GUI strings as needed. This should save a lot of lines in the GUI tests.
  • Simplified the Noe.test_noe_analysis GUI test by using the new Wiz_window.setup_page() method.
  • Python 3 fixes for all of the unicode strings in relax. Instead of using the u"xyz" notation, unicode("xyz") is now being used. This works because the relax compat module aliases the builtin unicode() function to str() for Python 3, as all strings in Python 3 are unicode and hence both the Python 2 u"xyz" notation and the unicode() builtin are undefined there.
  • Defined two new functions called u() in the compat module for better unicode string support (sketched after this list). The two functions are defined differently for Python 2 and Python 3. The Python 3 function simply returns the text unmodified, as all strings are unicode. The Python 2 function converts the str type to a unicode type.
  • The new compat.u() function is now being used for all unicode strings.
  • All "local tm" text in the GUI now uses a subscript m unicode character as well as the τ character.
  • Created the pipe_control.spectrum.test_spectrum_id() function for checking if a spectrum ID exists.
  • Renamed pipe_control.spectrum.test_spectrum_id() to check_spectrum_id(). A bug in the function was also removed, and the other code in the module now uses this function.
  • Created the pipe_control.mol_res_spin.check_mol_res_spin_data() function. This will check for the existence of molecule, residue and spin data and raise a RelaxError if none exists.
  • Simplification of the data checks in the pipe_control.spectrum module. This is using the new pipe_control.*.check*() functions.
  • Huge speed up of the GUI tests by the removal of the N_state_model.test_populations test. This problem was identified by running the GUI tests with the '--time' flag. On one test machine, this single test took ~142 seconds to complete when the entire GUI tests took ~242 seconds (i.e. this one test took up to 60% of the whole test suite). This test comes directly from a system test, but the equivalent system test only takes about 6 seconds to complete. The difference is due to the slow generation of the user function GUI pages.
  • Created the new RelaxNoPeakIntensityError error object.
  • The compat.SYSTEM variable is now set to 'Windows' when 'Microsoft' is detected (see the sketch after this list). This is for easier identification of MS Windows systems, as either string could be used.
  • Created the new gui.text module for holding all of the unicode text for the GUI. This module contains unicode strings for the various analysis types, which are then all defined in one location. This is for consistency.
  • Converted the model-free user function definitions to use the new gui.text module strings.
  • Shifted the gui.text module to lib.text.gui to avoid a fatal circular import in the GUI.
  • MS Windows fixes for the GUI for missing unicode font glyphs.
  • Added some Mac OS X GUI string fixes for missing unicode characters to lib.text.gui.
  • The size of the model list GUI window can now be changed.
  • Redesign of the model list GUI element. The wx.ListCtrl element has been replaced by a wx.FlexGridSizer combined with wx.CheckBox and wx.StaticText. The result is a much nicer formatting of the element. The checkboxes in the old element displayed slight rendering problems on all operating systems and did not look neat. The new design is also more flexible in that models of None are now treated as separators in the window.
  • The model list GUI element can now display an optional model description column.
  • Added model descriptions and adjusted the size of the model-free model list GUI elements.
  • Refinements for the model list GUI window. The font for all text elements is now set. And the elements of the wx.FlexGridSizer are now vertically centred so that the text of the checkboxes and text elements line up perfectly.
  • The size of the model list GUI window is now automatically set to the best fit.
  • The model list GUI element is now centred after the autosizing.
  • The titles in the model list GUI window now use a smaller font size.
  • Update of the description of the interatom.define user function.
  • Added multi-processor support for Monte Carlo simulations. This simply involves accessing the multi-processor box singleton and running the processor.run_queue() method within the pipe_control.minimise.minimise() function. This currently does nothing as the processor queue is always empty. But if the code in the specific_analyses package is modified to add slave commands to the processor but not execute the run_queue() method, then the Monte Carlo simulations will be automatically parallelised.
  • Updated the spectrum.error_analysis user function backend to use the lib.statistics.std() function. This simplifies the code. It affects only the peak intensity error analysis when spectra have been replicated.
  • Created the Structure.test_bug_21187_corrupted_pdb system test to catch bug #21187. The bug was reported by Martin Ballaschk.
  • Bug fix for the specific analysis API _data_init_spin() method. This is used for the API init_spin() method. This is a latent bug which does not affect any of the current analyses in relax. It was discovered in the relaxation dispersion branch.
  • Added a new is_queued() method to the Processor object of the multi package. This allows the Processor object for the uni and mpi4py multi-processors to be queried to see if any slave commands have been queued.
  • Created a unit test for the lib.linear_algebra.matrix_exponential module. This module does not exist yet, but it will be used to replace the scipy.linalg.expm() function use in the relaxation dispersion branch.
  • Loosened the lib.linear_algebra.matrix_exponential.matrix_exponential() unit test checks.
  • Implemented the lib.linear_algebra.matrix_exponential.matrix_exponential() function. This handles square matrices in either complex or real form (a rough sketch follows this list).
  • Created the lib.check_types.is_complex() function. This is used to determine if a number is a Python or numpy complex type.
  • The lib.linear_algebra.matrix_exponential.matrix_exponential() function now uses lib.check_types.is_complex(). This fixes the function for complex matrices.
  • Created a new unit test for lib.linear_algebra.matrix_exponential.matrix_exponential() for complex matrices.
  • Fix for the new lib.linear_algebra.matrix_exponential.matrix_exponential() function. This function now returns a numpy array type rather than matrix type.
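
A minimal sketch of the None handling described for the lib.float isInf() and isNan() functions is shown below, assuming simple float-only checks; the real module is more elaborate:

    def isInf(value):
        """Check for infinity, returning False for a value of None."""
        if value is None:
            return False
        return value in (float('inf'), float('-inf'))

    def isNan(value):
        """Check for NaN, returning False for a value of None."""
        if value is None:
            return False
        return value != value    # NaN is the only float not equal to itself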
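
The dual compat.u() definitions can be sketched as below, assuming the simplest possible conversion; the real compat module may handle encodings explicitly:

    import sys

    if sys.version_info[0] >= 3:
        def u(text):
            """Python 3: all strings are already unicode, return unchanged."""
            return text
    else:
        def u(text):
            """Python 2: convert the str type to a unicode type."""
            return unicode(text)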
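
The compat.SYSTEM normalisation can be sketched with the standard platform module as below; only the variable name and the 'Microsoft' to 'Windows' mapping come from the entry above, the rest is illustrative:

    # On MS Windows, the system string can be reported as either 'Windows' or
    # 'Microsoft' depending on the environment, so both map to a single value.
    from platform import uname

    SYSTEM = uname()[0]
    if SYSTEM == 'Microsoft':
        SYSTEM = 'Windows'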
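
For the new matrix_exponential() function, a rough eigendecomposition-based sketch is given below, assuming a square, diagonalisable numpy array; the real lib.linear_algebra implementation (and the scipy.linalg.expm() function it replaces) may use a different algorithm:

    from numpy import array, diag, dot, exp, iscomplexobj
    from numpy.linalg import eig, inv

    def matrix_exponential(A):
        """Return e^A for a square, diagonalisable numpy array A."""
        W, V = eig(A)                            # eigenvalues and eigenvectors
        eA = dot(V, dot(diag(exp(W)), inv(V)))
        # For real input, discard the negligible imaginary parts.
        if not iscomplexobj(A):
            eA = eA.real
        return array(eA)                         # an array, not a matrix, type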


relax 3.0.0

  • Some small clarifications and reordering of the release checklist document.
  • Shifted the pipe_control.structure.superimpose module to lib.structure.superimpose.
  • Shifted the pipe_control.structure.statistics module to lib.structure.statistics.
  • Created the unit test infrastructure for the lib.structure package.
  • Shifted the pipe_control.structure.pdb_read and pipe_control.structure.pdb_write modules to lib.structure.
  • Shifted the pipe_control.structure.cones module to lib.structure.cones.
  • Split the pipe_control.structure.mass module into two with the CoM code going to lib.structure.mass.
  • Removed the data pipe checks from the internal structural object. This decoupling from the relax data store is in preparation for moving into the lib.structure package.
  • More decoupling of the internal structural object from the relax data store. Removed the ability of the internal structural object to determine if two atoms are connected by consulting the relax data store.
  • Created the empty lib.structure.internal package for holding the internal structural object.
  • Shifted part of the internal structural object into the lib.structure.internal.models module. This contains the two classes ModelList and ModelContainer from the pipe_control.structure.api_base module.
  • Shifted part of the internal structural object into the lib.structure.internal.molecules module. This contains the class MolList from the pipe_control.structure.api_base module.
  • Shifted the MolContainer class from pipe_control.structure.internal into lib.structure.internal.molecules. This is in preparation for shifting the internal structural object to lib.structure.internal and for the elimination of the unused and no longer useful ScientificPython structural object.
  • Created the empty lib.structure.represent package. This will be used to hold modules which generate 3D structures as geometric representations of abstract ideas such as tensors, cones, frames, etc.
  • Shifted the lib.structure.rotor module to lib.structure.represent.rotor.
  • Total elimination of the ScientificPython PDB object. Maintaining this reader was too much effort and the internal structural object has now surpassed the capabilities of the ScientificPython PDB object (for example the internal object is not PDB specific). And ScientificPython is very much a dead project, largely replaced by the more successful scipy.
  • Merged the structural API base module api_base into pipe_control.structure.internal. The API base class is no longer needed as the ScientificPython PDB reader has been eliminated.
  • Deleted the unit tests of the structural API base class.
  • Moved the residual pipe_control.structure.api_base module to lib.structure.internal.displacements. This is because the base module still contained the Displacements class.
  • Docstring consistency in the internal structural object.
  • Shifted the pipe_control.structure.internal module to lib.structure.internal.object. This is the new location of the internal structural object.
  • Shifted the selection object out of pipe_control.mol_res_spin and into the new lib.selection module. The dependence on the MoleculeContainer, ResidueContainer and SpinContainer objects has been removed, as this is part of the relax data store. Therefore all of the private methods (__contains__, __contains_mol_res_spin_containers, and __contains_spin_id) have been deleted. The contains_*() methods will need to be used instead.
  • The pipe_control.mol_res_spin functions no longer use the selection object __contains__() method. All functions now use the contains_*() methods of the lib.selection.Selection object.
  • Shifted parse_token() and tokenise() from pipe_control.mol_res_spin to lib.selection.
  • The lib.selection.parse_token() function is using the new Python way of splitting strings. This is via the string's split() method.
  • Removed the no longer used parser argument for reading PDB files from some unit tests.
  • Removed the unit test of the parser argument of the structure.read_pdb user function. The argument no longer exists.
  • Shifted the cone geometric object representation functions to lib.structure.represent.cone. The structure.create_cone_pdb user function first calls pipe_control.structure.main.create_cone_pdb() which then calls lib.structure.represent.cone.cone(). This allows the pipe_control function to write out the file and add it to the data pipe's results file list.
  • Fixed some name clashes in the namespace of pipe_control.structure.mass.
  • Shifted the diffusion tensor structural object code to lib.structure.represent.diffusion_tensor. The user function routes to pipe_control.structure.main.create_diff_tensor_pdb(), which pulls the tensor info out of the data store, and then calls the diffusion_tensor() function of lib.structure.represent.diffusion_tensor to create the representation, writes out a PDB file, and finally adds the file to the data pipe's results file list.
  • More removals of the now dead parser argument for the structure.read_pdb user function.
  • Removed the parser argument from structure.read_pdb in the stereochemistry auto-analysis.
  • Restored the selection object __contains_spin_id() method as contains_spin_id(). This will allow for faster checks for matches to spin ID strings.
  • Speed ups for the interatom_loop() by restoring some of the code previously deleted. This spin ID lookup table is being used again, as this is much faster than the string parsing of spin IDs.
  • The frame order analysis is now using the correct centre of mass function.
  • Shifted calc_chi_tensor() and kappa() from pipe_control.align_tensor to lib.alignment.alignment_tensor.
  • Shifted some of the pipe_control.diffusion_tensor functions to lib.diffusion.main.
  • Created the empty lib.software package. This will be for functions which create input, read output, or control external programs.
  • Shifted and decoupled some of the grace code into lib.software.grace. This includes most of the write_xy_header() and write_xy_data() functions. The data store specific part of write_xy_header() has been shifted into pipe_control.grace.axis_setup().
  • Missing import fix for the lib.alignment.alignment_tensor module.
  • Shifted the lib.opendx package to lib.software.opendx.
  • Shifted the lib.xplor module into the lib.software package.
  • Shifted the Bruker Dynamics Centre parsing code into the new lib.software.bruker_dc module.
  • Deleted the completely unused pipe_control.spectrum.Bruker_import class. This was added by Michael Bieri in Oct 2011, but the code has never been used. Other, simpler code has replaced its functionality.
  • Created the Ct.test_bug_20674_ct_analysis_failure system test for catching bug #20674. This was reported by Mengjun Xue <mengjun dott xue att mailbox dott tu-berlin dot de> at https://gna.org/bugs/?20674.
  • Decreased the number of Monte Carlo simulations in the Ct.test_bug_20674_ct_analysis_failure system test.
  • Created the Jw.bug_20674_jw_mapping system test. This is a modification of the Ct.test_bug_20674_ct_analysis_failure system test for catching bug #20674. The test script was duplicated and the small modifications made to convert it into the J(ω) mapping analysis. This now reveals the same bug but for the J(ω) mapping analysis.
  • System test speed ups - decreased the number of Monte Carlo simulations in many tests. Running 500 simulation optimisations in a system test is a total waste of time!
  • Converted the bug_20674_jw_mapping.py system test script to use the self._execute_uf() interface. This allows the script to be used in the GUI.
  • Created the Mf.test_bug_20683_bdc_inf_values system test. This is for catching bug #20683 reported by Mengjun Xue <mengjun dott xue att mailbox dott tu-berlin dot de>. The problem is due to infinite and NaN values in the Bruker Dynamics Centre file.
  • Ported the changes of r19302 to the consistency testing and J(ω) mapping analyses. This is the code for checking for infinite relaxation rates imported from Bruker Dynamics Centre files.
  • Missing imports of the lib.float.isInf() function.
  • Modified the bug_20674_ct_analysis_failure.py system test script to use self._execute_uf(). This allows the test to operate as a GUI test, which was failing.
  • Created the specific API common method _data_init_spin(). This will be used as a general method for aliasing to data_init() for initialising spin parameters.
  • Added printouts for the select.read and deselect.read user functions to identify the spins affected.
  • Created the new lib.list module with the function count_unique_elements(). This function will be used to determine the unique number of elements in a list.
  • Shifted the Sparky peak intensity reading code to lib.software.sparky.read_list_intensity(). This new function comes from the old pipe_control.spectrum.intensity_sparky() function, but with the spin ID code removed.
  • Shifted the XEasy peak intensity reading code to lib.software.xeasy.read_list_intensity(). This new function comes from the old pipe_control.spectrum.intensity_xeasy() function, but with the spin ID code removed.
  • Docstring fix for the lib.software.xeasy.read_list_intensity() function.
  • Shifted the NMRView peak intensity reading code to lib.software.nmrview.read_list_intensity(). This new function comes from the old pipe_control.spectrum.intensity_nmrview() function, but with the spin ID code removed.
  • Created the lib.software.sparky.write_list() function and associated unit test. This will be used to create simple Sparky .list files.
  • The relaxation curve-fitting analysis parameters are now all lowercase. This is to match the other analysis types so that the parameter names are identical to the corresponding variable name. This is assumed by some of the specific analysis API methods.
  • Removal of junk code in the _assemble_scaling_matrix() relaxation curve-fitting method.
  • Parameter scaling is now functional in the target_function.relax_fit.c code. Previously the scaling was not being used and the Python to C conversion was broken.
  • The scaling matrix is now converted into a usable list of diagonal elements for the relax_fit C module.
  • Simplified the code of the relax_fit C module.
  • The common spin methods of the specific analysis API now ignore parameters not in the model. This affects the _data_init_spin(), _sim_init_values_spin(), and _sim_return_param_spin() methods. The result is that the spin containers no longer hold parameter variables set to None for non-model parameters.
  • Created the pipe_control.plotting module. This will be used as a base for the plotting of all types of data. This includes the current OpenDX and Grace modules, as well as future modules. The determine_functions() function has been added and is used to simplify the pipe_control.grace.get_data() function.
  • The grace.write user function data type argument sequence values have changed. Instead of 'spin', this can now be 'res_num' or 'spin_num' to specify that either the residue number or spin number should be plotted on the desired axis. The x_data_type now defaults to 'res_num'.
  • Created the pipe_control.mol_res_spin.count_max_spins_per_residue() function. This will be used by the plotting module to determine if more than one spin per residue exists.
  • Fixes for the change of the grace.write user function data type 'spin' to 'res_num'.
  • Updated the pipe_control.plotting.determine_functions() function.
  • Added the skip_desel flag to the important pipe_control.mol_res_spin.spin_loop() generator function. This is used to skip deselected spins within the loop. As most of the code in relax using the spin_loop() does this anyway, this can be used to simplify many of the spin looping elements in relax (see the usage sketch after this list).
  • Expanded the relax_fit system test script to produce all types of currently supported Grace graphs. This is to more extensively test the grace.write user function.
  • Large redesign of the 2D graphing code in relax. This currently affects only the grace.write user function, but the new infrastructure will make it much easier to expand the graphing abilities and to support other 2D graphing software. The plotting code has also been significantly simplified. The pipe_control.grace.get_data() function has been shifted into the pipe_control.plotting module. It has been split up into the base assemble_data() function with the data assembly shifted to assemble_data_scatter(), assemble_data_seq_value() and assemble_data_series_series(). This split massively simplifies the code by not packing all different graph and set combinations into one. In addition the auxiliary functions classify_graph_2D(), fetch_1D_data(), get_functions(), and get_data_type() have been created to maximise code sharing between the different assemble_*() functions.
  • Modified the relax_fit system test script to generate a new type of graph. This is the residue number sequence versus the peak intensity series data (and vice versa) via the grace.write user function. This is to help in the implementation of this graph type.
  • Created the pipe_control.plotting.assemble_data_seq_series() function. This is to allow the residue or spin numbering to be plotted against any series type data (lists or dictionaries), or vice versa.
  • Added a link to the PDF user manual from the HTML user manual. This will affect all pages at http://www.nmr-relax.com/manual/ by adding an icon to the navigation bar pointing to the PDF manual at http://download.gna.org/relax/manual/relax.pdf.
  • The plotting of residue or spin numbers versus values now handles multiple spin types properly. This is in the pipe_control.plotting.assemble_data_seq_value() function. The spin name is being used to identify different spin types for the graph sets.
  • The pipe_control.mol_res_spin.count_max_spins_per_residue() function now accepts a spin ID argument. This can be used to restrict the spins to count.
  • The spin ID string is now being used by the plotting functions. The spin ID was not being passed into the assemble_data_*() functions.
  • Changed how pipe_control.plotting.assemble_data_seq_value() determines the number of graph sets. Instead of counting the maximum number of spins per residue, different spin names are now checked across the sequence. This is needed as a single residue could contain more than one spin type. This was caught by the Mf.test_dauvergne_protocol system test.
  • Modified pipe_control.plotting.assemble_data_series_series() to handle dictionaries with keys as values. This will be useful in, for example, relaxation dispersion for plotting the dispersion curves. In this case, the R2eff values are in a dictionary where the keys are the values to plot against. This is different from the current case where the X and Y data dictionaries are required to have the same keys. These changes expand the capabilities of the grace.write user function.
  • Formatting change for the auto_analyses __all__ package list.
  • Removed the import of the auto-analysis modules into the auto_analyses package __init__ module. This import is not needed.
  • The N-state model system test module now imports the auto-analysis to fix an import order error.
  • Added a warning for the spectrum.read user function if a peak intensity of zero is encountered. This value can cause singular matrix failures in certain optimisation algorithms.
  • The spectrum.error_analysis user function can now be performed on a subset of all spectra. The subset argument has been added to allow the error analysis to be restricted to a subset of all loaded spectral data.
  • Created the lib.list.unique_elements() function for returning a list with duplicates removed.
  • Shifted the standard deviation code from the Monte Carlo error_analysis() function to the lib package. The new function lib.stats.std() is now used to simplify the error_analysis() function and allow the code to be reused (a minimal sketch follows this list). This will be useful for expanding the pipe_control.monte_carlo.error_analysis() function to handle parameters which are dictionaries, for example as in the relax_disp branch.
  • The Monte Carlo error_analysis() function now handles dictionary type parameters.
  • Renamed the new lib.stats module to lib.statistics.
  • Spun out the model list GUI element from the model-free auto-analysis into its own module. This GUI element is now the gui.analyses.model_list.Model_list class. This code has been spun out as the GUI element will be used by the relaxation dispersion branch.
  • The gui.analyses.model_list.Model_list GUI element can now have tooltips via the tooltip class variable.
  • Rearrangements of the gui.analyses package. The new gui.analyses.elements package has been created and the model list and text and spin GUI elements have all been shifted into the package.
  • Spun out the Spin_ctrl analysis GUI element into its own module in gui.analyses.elements.
  • The relaxation time part of the spectra list GUI element can now be turned on or off.
  • The execution of the user function GUI pages can now be delayed. The create_page() execute flag has been added to disable execution. This can be later forced with the new on_execute() force_execute flag.
  • Modified the GUI new analysis wizard to return a list of user function on_execute methods. This will be used in the relax_disp branch and in the future for when a special user function page is added to the new analysis wizard. This allows the use of user function pages with execution delayed until the analysis __init__() method is being run.
  • Standardisation of the text of the GUI elements of the analysis frames and expansion of the tooltips. All the text parts of the Spin_ctrl and Text_ctrl elements now end in a colon. Tooltips are now present on all elements and have been expanded and improved.
  • The Text_ctrl analysis frame GUI elements now have separate tooltips for the buttons. This is to give a hint to the user as to what the button does.
  • The model selection GUI analysis element can now have a different tooltip for the button.
  • Added tooltips to the model-free model list GUI elements in the model-free analysis frame.
  • Created the gui.wizards package for holding all of the relax wizards. The gui.wizard module is now called gui.wizards.wiz_objects.
  • Shifted and merged the NOE and Rx peak intensity wizards into a new module. The wizards were separate and a part of the analysis frame class objects. The two wizards have been merged into the gui.wizards.peak_intensity.Peak_intensity_wizard class as most of the code is shared. This one wizard class will be useful for reusing in the relaxation dispersion branch.
  • The peak intensity wizard class now inherits from Wiz_window. This allows the class to be a wizard window instead of launching a wizard window from within the class.
  • Small rearrangements in the gui.wizards.peak_intensity module.
  • Alphabetical ordering of the methods in the gui.wizards.peak_intensity module.
  • Simplified all of the peak analysis wizard wizard_update_*() methods. They now all defer to the wizard_update_ids() method which updates the spectrum ID fields.
  • Simplified the wizard_update_noe_spectrum_type() method as in the previous commit.
  • Fixes for the frq.set user function in the GUI. The ID list is now set to the spectrum IDs, and the frequency units are no longer all fused into one string.
  • Unicode is now used for the tau symbol in the model-free model parameter lists in the GUI. This is only when modifying the models to optimise, which shouldn't be changed anyway.
  • Removed the 'string' from 'Spectrum ID string' in the spectrum list GUI element. This is a GUI - the word string is meaningless here!
  • The delay times column string now specifies seconds in the spectrum list GUI element.
  • Formatting improvements for the relaxation data list GUI element. The data type column entries are now descriptive and use subscript.
  • More unicode strings are used for the GUI for subscripts and Greek letters.
  • Fixes for the R1 and R2 GUI analyses for the recently introduced unicode subscript characters. There is now self.label for a pure string version and self.gui_label for the fancier unicode version.
  • The frq.set user function 'id' argument is no longer read only - this was causing test suite failures.
  • Removed a nasty kludge for releasing the execution lock on failure. This kludge, after the bug fix for https://gna.org/bugs/?20756, was causing failures in the test suite.
  • Changed the 'Execute relax' button in all analyses in the GUI to 'Execute analysis'. It makes no sense to execute relax as relax has been executing during the analysis initialisation and the user setting up all the data for the analysis. This is a remnant of ancient design of Michael Bieri's GUI being a separate program to relax, which would execute relax with the click of this button.
  • Restored the Py_INCREF() function call in the relaxation curve-fitting C module. This was deleted at r12632 along with Py_XDECREF() and Py_DECREF() calls. The absence of a Py_INCREF() function call causes the module to crash the Python interpreter under certain conditions. The problem was found in the relax_disp branch.
  • Clean up of unused headers and declarations in the exponential curve C module.
  • The relax_fit C module setup() function now uses the Py_RETURN_NONE macro to terminate. This macro does exactly what the old code does anyway.
  • Removed an unused declaration in the relax_fit C module setup() function.
  • Increased the maximum number of relaxation times for the relax_fit C module to 50.
  • Shifted the C array creation to the relax_fit C module header. The params, values, sd, relax_times, and scaling_matrix C arrays are now declared and allocated in the header file rather than using malloc() calls in the setup() function. This is to attempt to remove a memory leak. The arrays are now of fixed length and reused for each setup() call. These, as well as the other variables declared in the header, are no longer declared in the functions.
  • Improved the Python and C documentation of the functions of the relax_fit C module.
  • Converted the Py_BuildValue() calls to PyFloat_FromDouble() in the relax_fit C module. This doesn't change much.
  • Documentation improvements for the back_calc_I() function of the relax_fit C module.
  • The exponential C file now uses the exp() function from math.h rather than Python.h. This file is independent from Python.
  • The numpy include is no longer used for the compilation of the C modules. Numpy is not used at all in the C modules, so this just adds an annoying dependency for those who need to compile the module themselves.
  • Removed some bad calls to status.exec_lock.release(). This commit may have to be reverted in the future. The problem is that the execution lock is not being held when these calls are made. The calls were added as a kludge to handle certain situations where the execution lock was not released. There may be cases where this behaviour is still needed.
  • Added a developer script for testing of memory leaks in the relax_fit C modules.
  • Removed the numpy dependence from the manual C module compilation script.
  • Created the lib.mathematics relax library module. This currently contains two functions, order_of_magnitude() and round_to_next_order() (a plausible sketch follows this list).
  • Added unit tests for the lib.mathematics module.
  • The relax_fit analysis now uses lib.mathematics.round_to_next_order() for the scaling matrix. This allows the optimised I0 value to be more easily understood in the printouts.
  • Created the new Value system test class with the first test Value.test_value_copy. This test demonstrates some pretty large bugs in the value.copy user function.
  • Modified the Value.test_value_copy system test to check the copying of errors as well.
  • Added the error flag argument to all of the specific analysis API set_param_values() methods. This will allow parameter errors as well as values to be set.
  • The Value.test_value_copy system test now checks all of the values and errors.
  • Added the error flag argument to the value.set user function. This will allow for parameter errors to be set by the user.
  • The specific analysis API _return_value_general() method now returns errors even when values are missing.
  • The internal structural object PDB file creation now writes out http://www.nmr-relax.com. Previously the link http://nmr-relax.com was written out.
  • Diffusion tensor PDB file fixes for the internal structural object changes. This is because the relax website link is now written into the PDB file as http://www.nmr-relax.com rather than http://nmr-relax.com. This fixes the diffusion tensor system tests.
  • Converted all of the specific analysis modules into packages. The model-free and steady-state NOE analyses were already packages, and this now brings all other analyses in line with the package design of specific_analyses. The only change is that the files specific_analyses/x.py have been shifted to specific_analyses/x/__init__.py and the __all__ package variable added.
  • Epydoc docstring fixes for the compat module.
  • The peak intensity wizard can now function remotely when the spins are not named. This will be needed for the GUI tests to allow the Question() call to be bypassed while still adding the spin.name user function as the first page of the wizard. The key for the spin.name page has also been fixed so that the page can be accessed.
  • The timing of individual tests in the relax test suite can now be printed out. The new command line argument --time has been added which, when supplied with one of the test suite arguments, will cause the time required to complete each individual test to be displayed. Instead of just printing the characters '.', 'F', and 'E' for each test, now these characters are postfixed with the time in seconds, the name of the test and ending in a newline character.
  • Big speed up of the test suite by skipping a large number of redundant Frame Order system tests. These are tests of using only PCS or only RDC data. These tests are still active for the pseudo-ellipse just to make sure that a whole missing data type can be handled.
  • Suppressed the reporting of skipped tests in the test suite if the module is set to None.
  • The preview button in the file selection elements of the user function windows can now be disabled. This is via the new wiz_filesel_preview argument being set to False.
  • Merged the frq.set and temperature user functions into the new spectrometer user function class. The frq.set user function is now called spectrometer.frequency and temperature is now spectrometer.temperature. To match these changes, the cdp.frq variable is now called cdp.spectrometer_frq.
  • Modified the spectrometer.frequency user function so that a frequency list and count is stored. These are the new cdp.spectrometer_frq_list and cdp.spectrometer_frq_count variables. This will allow various parts of relax which assemble frequency information to be simplified and made more consistent.
  • Created basic SVG and PNG graphics for the spectrometer user function class. The spectrometer is black so as not to offend Bruker, Varian, or Jeol users by avoiding a colour from one of these companies.
  • The pipe_control.spectrometer.get_frequencies() function can now return MHz or Tesla units.
  • Renamed the functions of the pipe_control.spectrometer module. The frequency() and temperature() functions are now called set_frequency() and set_temperature().
  • Added backwards compatibility support for the spectrometer frequency list and count. This is needed for old relax state files.
  • A whitelist is now being used to limit the number of frame order GUI tests to 1.
  • Shifted all frequency data handling associated with relaxation data to pipe_control.spectrometer. This includes the deletion of the relax_data.frq user function as this replicates the behaviour of spectrometer.frequency. A number of functions from the pipe_control.relax_data module have changed: frq() has been deleted as it is replaced by pipe_control.spectrometer.set_frequencies(); frq_checks() has been shifted to pipe_control.spectrometer.frequency_checks(); frq_loop() has been shifted to pipe_control.spectrometer.loop_frequencies(); num_frq() has been deleted as the new variable cdp.spectrometer_frq_count contains this info. Two new functions in the pipe_control.spectrometer module have been added to remove the functionality from pipe_control.relax_data. These are copy_frequencies() and delete_frequencies().
  • The molmol.macro_run user function file argument now has a description.
  • Huge speed up of the system tests for the loading and creation of model-free saved states. The OMP files used for the system test have been truncated from 134 to 7 spins, changing the timing of 6 system tests from 11-13 seconds to less than 0.5 seconds each.
  • All of the binary file arguments for the user functions are now file selection GUI elements. The GUI user function wizard pages now have file selection buttons for selecting the executable to run. These all have the preview button disabled. The results.read and state.load GUI elements also have the preview button disabled.
  • The user function 'prompt' description elements are now displayed in the GUI wizard page.
  • The monte_carlo.error_analysis user function can now handle parameters which are lists.
  • Added the ability for specific analyses to override the optimisation constraint algorithm. The default is still the 'Method of Multipliers', but if the constraint_algorithm() method returns a different string, then that will be used to select the algorithm. This allows the 'Log Barrier' method in minfx to be used.
  • The value.display and value.write user functions can now handle list and dictionary type parameters.
  • Added two methods to the specific analysis common API class. These are the _model_type_global() and _model_type_local() methods for always specifying that the model type is global (i.e. at the level of the data pipe) or local (i.e. there can be multiple clusters of models).
  • Added some more functions to the lib.statistics module. These include the bucket() function for creating a discrete distribution from a list of floating point numbers, and the gaussian() function for calculating the probability of a point on a Gaussian distribution (see the statistics sketch after this list).
  • Added a directory and files for testing the white noise in relaxation data. This includes scripts and graphs.
  • The initial parameters are now the real parameters rather than the optimised ones. This is for the script for testing white noise in relaxation data.
  • The spectrum.peak_intensities reading is now more robust when reading in a generic formatted file. Firstly there is a check that the intensity column number has been supplied, and then a check that all relevant data could be extracted from each row of the file. This replaces traceback errors with RelaxErrors explaining the problem if the user inputs bad data or forgets the intensity column argument.
  • Changed the "Execute analysis" button text back to the original "Execute" text of the old relax GUI.
  • Added the 'test.seq' file from bug report #20873. This is from Troels E. Linnet. The bug report and link to http://thread.gmane.org/gmane.science.nmr.relax.user/1452 explains the contents. The file will be used to construct a system test to catch the bug.
  • Created the Peak_lists.test_bug_20873_peak_lists system test to catch bug #20873. This was reported by Troels E. Linnet. The test has been created by copying the user function calls from the original bug report and slightly modifying them to suit a 'relax_fit' analysis type.
  • Fix for the Peak_lists.test_bug_20873_peak_lists system test. The spectrum IDs are now strings.
  • Added checks of the peak intensities to the Peak_lists.test_bug_20873_peak_lists system test.
  • The spectrum.integration_points page in the peak analysis GUI wizard has been fixed. It is only shown when volume integration is selected with no replicated spectra.
  • Removed a debugging printout which is killing the relax unit tests in Python 3.
  • Added an EPS version of the 128x128 pixel spectrometer icon. This is for use in the relax manual.
  • Added a README file for the relax 128x128 icons describing how the EPS files should be created.
  • Updated the spectrometer 128x128 icon to be of the correct size and colour.
  • Added a README file to the graphic/analyses directory describing how to create the EPS files.
  • Merger of the dipole_pair and interatomic user function classes. The functionality of these two classes overlaps significantly. And the dipole_pair user functions are not related to magnetic dipole-dipole interactions. Therefore all the user functions from both classes were shifted into the new interatom user function class. This change will affect almost all relax scripts but, as this will form part of the relax 3 release, script breakage should be expected anyway.
  • Removed the pipe_control.dipole_pair module as its contents are now in pipe_control.interatomic.
  • Removed the dipole_pair module from the pipe_control package __all__ list.
  • Merged the interatom.create user function into interatom.define. These user functions had overlapping functionality which would be confusing for a user.
  • Added polish to all of the interatom user function docstrings.
  • Improved the functionality of the interatom.read_dist user function. The file data is now stripped using lib.io.strip to remove comments and blank lines. And now if the interatomic data container cannot be found, it is created instead of raising a RelaxError.
  • Improvements to the RelaxZeroVectorWarning - the warning message was terribly out of date.
  • Polish for the rdc.read user function. Comment lines and blank lines are now removed to suppress useless warning messages about these lines containing no valid data.
  • Added some basic initial relax icons for J couplings.
  • Created some basic initial GUI wizard graphics for J couplings.
  • Modified the titles of all the auto-analysis GUI elements. The text 'Setup for' has been removed as it is meaningless.
  • Added more emphasis on the titles of the auto-analysis GUI elements. There is now more space below the title, and a different font (16pt roman italic) is being used.
  • Removed some now irrelevant information from the rdc.read user function docstring.
  • Removed a false prompt example from the rdc.read user function docstring.
  • Created an entire new user function class for handling J couplings in the relax data store. This derives from the RDC user function modules. The following functions have been created: j_coupling.copy, j_coupling.delete, j_coupling.display, j_coupling.read, and j_coupling.write.
  • Added a check for the RDC data type to the rdc.read user function.
  • The rdc.read user function can now handle T = J+D type data. Support for this in the specific analyses is yet to be added.
  • Fixes for the rdc.read, j_coupling.read and interatomic.read_dist user functions. Comment lines are no longer removed, as it is impossible to tell a comment line from a spin ID string.
  • Split up the specific_analyses.n_state_model package into modules. The new data and parameter modules have been created by shifting methods out of the __init__ module and converting them into functions of the two new modules. This is to simplify the package.
  • Shifted another method from the N_state_model class to the specific_analyses.n_state_model.data module.
  • Added support for the T = J+D RDC data type to the N-state model target function. The J couplings are sent into the target function class when the 'T' RDC data type is encountered. These measured values are then added to the back-calculated RDC values to produce T(theta), which is then compared to T via the chi-squared function (see the sketch at the end of this list).
  • Fix for the new specific_analyses.n_state_model.data.opt_uses_j_couplings() function. The cdp.rdc_data_types structure appears not to have all alignment IDs within it.
  • Removed the check for Numeric Python in the dep_check module. This Python module has not been used within relax for the better part of a decade, so the check is not needed.
  • Added the j_coupling module to the pipe_control __all__ list.
  • Fix for the pipe_control.rdc.q_factors() for T = J+D type data. The Q factor normalisation was incorrect, as the J coupling should be subtracted from T first.
  • Unit test fixes for the N-state model. This is needed due to the recent package rearrangements.
  • Removed the absolute argument for all of the lib.alignment.rdc functions. This should be performed at the level of the target function, as mathematical operations may be required prior to taking the absolute value.
  • Fixes for the N-state model target functions for the lib.alignment.rdc changes. The absolute value is now calculated within the target function rather than when back calculating the RDCs.
  • Errors are now handled correctly for the N-state model when T = J+D values are used for the RDCs. The error is the square root of the average variance of the RDC error and J coupling error.
  • The RDC back-calculation function now supports T = J+D values.
  • Created the N_state_model.test_absolute_T system test. This is for checking the optimisation of absolute T=J+D values to find alignment tensors.
  • Epydoc docstring fix for the RelaxTestResult.write_time() method.
  • Created a script to look through the entire relax source tree for unused imports.
  • Removed a large amount of unused imports throughout the relax code base. These were identified by the new ./devel_scripts/find_unused_imports.py script together with pylint.
  • Fixes for the pipe_control.rdc module for when the structure cdp.rdc_data_types is missing.
  • Improvements to the devel_scripts/find_unused_imports.py script.
  • More cleanups of unused imports throughout relax.
  • Fixes for how the devel_scripts/find_unused_imports.py script runs pylint.
  • More cleanups of unused imports throughout relax.
  • Fixes and expansion of the test_suite.unit_tests._lib package __all__ list.
  • Fixes and improvements to Gary Thompson's unit_test_runner.py script. The printouts have been improved and the script can now handle more than 3 levels of directories for a package.
  • The unit_test_runner.py script now defaults to verbose mode.
  • More cleanups of the unit_test_runner.py script.
  • Added a printout to the unit_test_runner.py if the TestCase class cannot be found. Previously the test loading continued silently without warning that the TestCase class name is missing or incorrect.
  • Missing import in the unit test module for the lib.frame_order.matrix_ops module.
  • Shifted the spin_id_to_data_list() function from pipe_control.selection to lib.selection. This is because the selection object requires this function, and the function has nothing to do with the relax data store.
  • Lots of import cleanups including removal of '*' imports, missing imports, and unused imports.
  • Small change to the find_unused_imports.py printouts.
  • Large removal of unused imports throughout relax found using the devel_scripts/find_unused_imports.py script.
  • Clean up of all the imports in the relax code base. This is mainly alphabetical reordering of the imports required due to the huge layout changes in the trunk.
  • Shifted the user function initialisation from the import of the user_functions package to the package initialise() function. This is for saner import dependencies in the relax sources.
  • The lib.io.open_write_file() function now catches file names of None and raises a RelaxError. This is useful for the GUI if the user forgets to select a file name.
  • The rdc.corr_plot user function can now handle T=J+D type data.
  • The N-state model analysis can now handle RDC data of mixed D and T=J+D.
  • Added support for mixed RDC data types per alignment. This is to allow, for example, one bond RDC values of the 'D' data type and two bond RDC values of the T = J+D data type to be loaded for the same alignment ID. This is now handled in the N-state or ensemble analysis by allowing a different RDC data type per RDC value.
  • The Peak_lists.test_bug_20873_peak_lists system test is now skipped if the C modules are not compiled. This test requires the presence of the C modules.
  • Added a completely empty PNG image to use in the new analysis GUI wizard for blank buttons. This will be used in the relax_disp branch to eliminate a Mac OS X only bug.
  • Added the scripts for backing up the relax SVN repository and mailing lists to the repository. This is to make it easier for others to set up the backups on their systems.
  • Added comments to the backup scripts to make it easier to use them.
  • Added the listings package to the relax user manual LaTeX file. This will be used to improve the formatting and look of relax scripts in the manual.
  • Started to convert the relax user manual to use the lstlisting environment for scripts. This is to prettify the scripts in the manual.
  • Improvements to the script UI section of the NOE chapter of the user manual. The lstlisting environments now have the correct numbering to match the script at the start, comments have been copied into the split up script elements, and a few comments improved.
  • The NMRPipe script in the relaxation curve-fitting chapter of the manual now uses lstlisting. The language has been explicitly set to csh to override the global default of Python.
  • Converted all of the relaxation curve-fitting chapter of the user manual to the lstlisting environment. This is for all parts of the script UI section of the chapter.
  • Converted all of the model-free chapter of the user manual to the lstlisting environment. This is for all parts of the script UI section of the chapter.
  • Converted all of the J(ω) mapping chapter of the user manual to the lstlisting environment. This is for all parts of the script UI section of the chapter.
  • Converted all of the Consistency testing chapter of the user manual to the lstlisting environment. This is for all parts of the script UI section of the chapter.
  • Created a new listings language definition for relax for the user manual. This is for better highlighting of relax scripts and code in the relax manual.
  • Added an EPS version of the 128x128 J coupling icon for use in the relax user manual.
  • Removed some junk text from the relax script text in section 6.3.8 of the user manual.
  • The relax language definition is now auto-generated by the fetch_docstrings.py script. This is for use in the relax user manual using the listings package. The fetch_docstrings.py script now creates the docs/latex/script_definition.tex file. This is used by the relax.tex file via an \include{} statement. This setup allows all of the relax user functions to be dynamically set as keywords for the relax language definition.
  • Converted all of the Development chapter of the user manual to use the listing package. This is for all of the code examples, which are now much more colourful.
  • Small typo fix for the relaxation curve-fitting chapter of the user manual.
  • Fixed some out of date script code for the relaxation curve-fitting chapter of the user manual.
  • Added a section label to the relaxation curve-fitting chapter of the user manual.
  • Adding a test data file in NMRPipe SeriesTab format. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. A file in NMRPipe SeriesTab format is added to the test-suite for further development.
  • Test function for NMRPipe SeriesTab format implemented. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. An assertEqual test is implemented for the reading of NMRPipe SeriesTab format. The standalone call is: relax -s Peak_lists.test_read_peak_list_NMRPipe_seriesTab.
  • Adding a NMRPipe function file in the folder lib/software/nmrpipe.py. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. Initial file for: lib/software/nmrpipe.py. This file will hold the function calls handling NMRPipe SeriesTab format.
  • Fix for commit (http://article.gmane.org/gmane.science.nmr.relax.scm/18004). The spin naming was wrong. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. The 'spin_id' keyword should be supplied differently, for example: spin.name(name='NE1', spin_id=':62').
  • Auto-detection of the NMRPipe SeriesTab format implemented. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. The file is identified as NMRPipe SeriesTab if the first two words of the first line are: REMARK SeriesTab (see the sketch at the end of this list).
  • Update of the rotation matrix example in the intro chapter of the user manual. The function is now in lib.geometry.rotations.euler_to_R_zyz(). The example has also been converted to the lstlisting environment for better formatting.
  • The relax prompt strings and help system are now keywords for the relax listings package definition. The prompt strings "relax>" and "relax|" are now recognised as keywords and are coloured blue. The help system has been added as a normal Python keyword for highlighting.
  • Converted all relax prompt examples in the intro chapter of the manual to the lstlisting environment. This is simply for a more colourful representation.
  • The prompt examples in the user function chapter of the manual now use the listing environment. This is via the fetch_docstrings.py script and results in much better formatting of these subsections.
  • Added the function destination for the auto-detected NMRPipe SeriesTab format. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. The auto-detected NMRPipe SeriesTab format now results in a call to the nmrpipe.read_list_intensity_seriestab() function in the file lib/software/nmrpipe.py.
  • Imported the missing lib.software.nmrpipe module into pipe_control.spectrum. Progress sr #3043 - Support for NMRPipe seriesTab format *.ser. The modules expected for use in lib/software/nmrpipe.py are imported.
  • Updated the minfx and bmrblib versions in the release checklist to the newest versions.
  • Spacing fix in an import statement (found using the 2to3 conversion program).
  • Added the relax wiki backup script for dumping the MySQL database contents locally. This is from http://article.gmane.org/gmane.science.nmr.relax.devel/4163.
  • Added the script from Troels Linnet for backing up the relax wiki via FTP. This is from the post http://article.gmane.org/gmane.science.nmr.relax.devel/4168.
  • Added a link to Troels' post to the relax-devel mailing list to the relax wiki FTP backup script. The link is http://article.gmane.org/gmane.science.nmr.relax.devel/4168
  • The relax info printout now works in the absence of the bmrblib module.
  • Added some Oxygen icons for a boolean GUI input element. The media-record-relax-green.png files are the media-record.png files with the hue set to 117.
  • Created a boolean input element for the auto-analyses of the GUI. This simply turns on and off.
  • The boolean GUI auto-analysis input element now has a SetValue() method.
  • Completed NMRPipe SeriesTab reader. Progress sr #3043 - support for NMRPipe seriesTab format *.ser. Completed NMRPipe SeriesTab reader for assignment according to SPARKY format. Changes implemented according to: http://article.gmane.org/gmane.science.nmr.relax.devel/4120.
  • Extraction of NMRPipe SeriesTab changed. Progress sr #3043 - support for NMRPipe seriesTab format *.ser. The Extraction of NMRPipe SeriesTab data is changed in pipe_control/spectrum.py in the read() function.
  • Added flag for single or multiple extraction of spectrum. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Flag change added to reading of NMRPipe SeriesTab. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Some small edits to the intro chapter of the relax user manual.
  • Many improvements to the indexing in the relax user manual.
  • Removed the flag for single_spectrum. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Fixed wrong reference to Sparky format. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Modified the intensity list to handle intensities for all spectra per spin. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Fixed the extraction of NMRPipe seriestab data in pipe_control.spectrum.read(). Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Fix for handling reading spin of type heteronuc='NE1' and proton='HE1'. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Adding an NMRPipe seriesTab data file for a multiple column / multiple spectrum formatted file. This file is from https://gna.org/support/download.php?file_id=18618 attached to the support request https://gna.org/support/?3043 by Troels Linnet. This is the output of the command "seriesTab -in ../../peaks.dat -out seriesTab_multi.ser -list nmrfiles.list -sum -dx 1 -dy 1", where nmrfiles.list contains file references to 10 .ft2 files.
  • Fix for unit test of nmrpipe. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Replacing a pointer-reference structure with the creation of an empty list of lists. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • The ID of spins in seriesTab_multi.ser was not formatted correctly to SPARKY format. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Added a system test for the reading of a multi column formatted NMRPipe seriesTab file. Progress sr #3043 - support for NMRPipe seriesTab format *.ser. The reference data for the system test was generated in Excel. The spectrum IDs are auto-generated by supplying the keyword spectrum_id='auto'. The first few tests were matched against integers rather than floats, so '.0' has been added to the end of each number. Spaces have been added after the commas in the self.assertAlmostEqual() calls. The 2to3 conversion program (for Python 2 to Python 3 conversion) highlights this issue.
  • Added a check of the number of supplied spectrum IDs against the number of returned intensity columns. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Made it possible to auto-generate spectrum IDs if spectrum_id='auto' is supplied. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Removed entries from the data list where an empty list starts. These are created when spins are skipped for the ID '?-?'. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Moved the checks for matching lengths of the spectrum IDs and intensity columns. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Moved the adding of the spectrum ID (and ncproc) to the relax data store. Progress sr #3043 - support for NMRPipe seriesTab format *.ser. Shifting this to later prevents the cdp.spectrum_ids list from being populated when the user calls the user function incorrectly.
  • Added epydoc documentation to pipe_control.spectrum.read() for when the keyword 'auto' is supplied. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Added GUI description for when supplying 'auto' to the spectrum_id. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Added a stub GUI description in the File formats section for NMRPipe seriesTab. Progress sr #3043 - support for NMRPipe seriesTab format *.ser.
  • Fix so that two spaces are used after a period in the documentation. Progress sr #3043 - support for NMRPipe seriesTab format *.ser. relax uses the double space to make it easier for the eyes to pick up the sentence structure.
  • The relax user manual is now broken into parts. The higher level LaTeX part command is now used to group related chapters. This should make it easier for users to navigate this huge thing.
  • Creation of the optimisation chapter of the relax user manual. The main text of this chapter originates as part of the model-free chapter. As most of this text was not model-free specific, it has been spun out as its own chapter. Text has also been taken from the "Optimisation of relaxation data – values, gradients, and Hessians" chapter. The indexing for the optimisation topics has also been improved.
  • Changed the chapter layout of the relax user manual. The development chapter has been moved forwards.
  • Fix for the spectrum.read_intensities user function docstring. Grammatically, the text "spectrum ID's" should be "spectrum IDs". The problem though was that this text was strangely causing the user manual compilation to fail.
  • Added subsubindexing for the optimisation algorithm index entries.
  • Added extensive cross-referencing to the index of the relax user manual.
  • Added some hyphenation rules for better formatting in the user manual. For this, the external hyphenation.tex has been created.
  • Better indexing in the relax user manual. The imakeidx LaTeX package is now used instead of makeidx, and the hyphenation has been improved.
  • Lots of spelling fixes for the relax user manual.
  • Updated the minimum Python version from 2.3 to 2.5 in the user manual.
  • Epydoc docstring fix for the pipe_control.spectrum.read() function. The text "Z_A{i}" causes problems when compiling the API documentation, so it has been changed to "Z_Ai".
  • Python 3 fix for the new test_suite.clean_up module. The exceptions Python module does not exist in Python 3, so instead the relax compat.builtins object is being used to store the WindowsError variable of None.
  • Added a paragraph to the installation chapter of the manual about not supporting the EPD.
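
The constraint algorithm override described in the list above can be pictured with the following minimal sketch. The class names and the printout are hypothetical illustrations only, not the real relax specific analysis API.

    # A minimal sketch, assuming a hypothetical specific analysis API layout,
    # of overriding the optimisation constraint algorithm.
    class Common_API:
        def constraint_algorithm(self):
            """Return the default constraint handling algorithm for minfx."""
            return 'Method of Multipliers'

    class Hypothetical_analysis(Common_API):
        def constraint_algorithm(self):
            """Switch this analysis over to the minfx 'Log Barrier' method."""
            return 'Log Barrier'

    # The minimisation code would then select the algorithm via the API method.
    api = Hypothetical_analysis()
    print("Constraint algorithm: %s" % api.constraint_algorithm())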
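
A rough sketch of the new lib.statistics functions mentioned above is given below. The gaussian() formula is the standard normal probability density; the bucket() binning logic and both signatures are assumptions for illustration, not the verified relax code.

    from math import exp, pi, sqrt

    def gaussian(x, mu=0.0, sigma=1.0):
        """Probability of the point x on a Gaussian distribution."""
        return exp(-(x - mu)**2 / (2.0 * sigma**2)) / (sigma * sqrt(2.0 * pi))

    def bucket(values, bin_size=0.1):
        """Create a discrete distribution from a list of floating point numbers."""
        counts = {}
        for value in values:
            # Assign each value to the lower edge of its bin.
            edge = (value // bin_size) * bin_size
            counts[edge] = counts.get(edge, 0) + 1
        # Return the sorted (bin edge, fraction) pairs.
        total = float(len(values))
        return [(edge, counts[edge] / total) for edge in sorted(counts)]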
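
For the T = J+D entries above, the following numpy sketch with invented numbers (not relax source code) shows the three pieces together: T(theta) is built by adding the measured J couplings to the back-calculated RDCs, the error is the square root of the average variance of the RDC and J coupling errors, and the Q factor normalisation first subtracts J from T.

    from numpy import array, sqrt, sum

    T = array([10.2, -3.1, 7.8])         # Measured T = J+D values (Hz).
    J = array([92.0, 93.5, 91.8])        # Measured J couplings (Hz).
    D_bc = array([-81.5, -96.4, -84.3])  # Back-calculated RDCs D(theta) (Hz).
    T_err = array([0.5, 0.5, 0.5])       # T errors (Hz).
    J_err = array([0.3, 0.3, 0.3])       # J coupling errors (Hz).

    # Back-calculated T(theta) and the combined error.
    T_bc = J + D_bc
    sigma = sqrt((T_err**2 + J_err**2) / 2.0)

    # Chi-squared comparison of T and T(theta).
    chi2 = sum(((T - T_bc) / sigma)**2)

    # Q factor normalisation - the J coupling is subtracted from T first.
    D = T - J
    Q = sqrt(sum((D - D_bc)**2) / sum(D**2))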
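
The SeriesTab auto-detection rule quoted above reduces to a very small check. This sketch uses an assumed function name and is not the relax implementation.

    def looks_like_seriestab(lines):
        """Return True if a peak list appears to be NMRPipe SeriesTab output."""
        if not lines:
            return False
        words = lines[0].split()
        # The file is taken to be SeriesTab if the first line starts with the
        # two words "REMARK SeriesTab".
        return len(words) >= 2 and words[0] == 'REMARK' and words[1] == 'SeriesTab'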


Version 2 of relax

relax 2.2 series

relax 2.2.5

  • Added a comment to the output from value.display and value.write to describe the parameter. This idea is discussed at http://thread.gmane.org/gmane.science.nmr.relax.user/1428. The idea is to take the parameter description from the specific analysis API and add it to the top of the file or output. This is to help understand what the Rex values are. For example for the Rex parameter the first line would be: "# Parameter description: Chemical exchange relaxation (sigma_ex = Rex / omega**2)."
  • Created the Structure.test_read_merge system test to test a new concept - merging of structures. The idea is to add the merge argument to the structure.read_pdb user function to allow two different structures in two PDB files to be merged. This is useful if structures of individual domains have been solved separately and are located in two PDB files. Then with the merge flag, you will not need to use an external program or hand edit PDB files to join them.
  • Added the merge flag to the structure.read_pdb user function. This currently does nothing.
  • The merge flag for the structure.read_pdb user function is now propagated to the pack_structs() method. This structure API method calls the ModelList.merge_item() method which is yet to be implemented.
  • The MolList.add_item() structural API method now returns the added molecule container. This is used by the pack_structs() method to alias the molecule, and will be required when structure merging is implemented.
  • Whitespace fixes - replaced many instances of the tab character '\t' with 4 spaces.
  • Implemented the merging of structural objects. This allows the merge flag of the structure.read_pdb user function to work.
  • The printouts from the structure.read_pdb user function are now different with the merge flag set. The text now says that the molecules are being merged rather than added.
  • Sections of molecules can now be deleted using the structure.delete user function. The atom ID argument has been added and this is now propagated into the internal structural object. This ID string can be used to delete subsets of the 3D structural data in the relax data store.
  • Created the Structure.test_read_write_pdb_1UBQ system test. This is for checking the use of the structure.delete user function with the atom ID argument.
  • The Structure.test_read_write_pdb_1UBQ system test now checks for HELIX and SHEET records. This is not implemented yet, but the idea is that the structure.read_pdb and structure.write_pdb should preserve the helix and sheet information present in the original PDB and that the internal structural object should store this information.
  • Created the internal structural object _pdb_chain_id_to_mol_index() method. This will be used to convert PDB chain IDs, which are used to indicate different molecules in the PDB, into molecule indices for the internal structural object.
  • HELIX PDB records are now read, stored, and written out by the internal structural object. This affects the structure.read_pdb and structure.write_pdb user functions. The helix is stored as a metadata type object - its elements do not correspond to the atoms in the structural object.
  • SHEET PDB records are now read, stored, and written out by the internal structural object. This affects the structure.read_pdb and structure.write_pdb user functions. The sheet is stored as a metadata type object - its elements do not correspond to the atoms in the structural object.
  • Created 13 unit tests of the Internal._trim_helix() internal structural object method.
  • Added the index_flag argument to all structural API atom_loop() methods.
  • Implemented the internal structural object _trim_helix() method. This is used when the structure.delete user function is called to trim and remove the helix metadata. For this to work, the additional method _residue_data() was written to create a dictionary with residue numbers as keys and residue names as values. This dictionary is used by _trim_helix() to change the residue names in the helix metadata.
  • Created 13 unit tests of the Internal._trim_sheet() internal structural object method. These mirror the 13 unit tests of Internal._trim_helix().
  • Implemented the Internal._trim_sheet() internal structural object method. This is also now used by the structure.delete user function to remove sheet metadata for residues which no longer exist.
  • Modified the ScientificPython structural object atom_loop() method to match the internal object. If only one element is returned from the atom_loop(), then this is returned as a single item rather than a tuple of length 1.
  • Lots of fixes for the change to the structural API atom_loop() method. This method when returning a single item now returns a single item rather than a tuple of length 1.
  • The index_flag argument to the ScientificPython structural object atom_loop() method is now used.
  • Created the Structure.test_metadata_xml system test. This is used to check that the structural metadata (currently helices and sheets) are stored in the relax XML save files and then can be read back into relax again.
  • The helix and sheet metadata is now stored in and read from relax XML state files.
  • Added the scaling argument to the value.display and value.write user functions. The idea comes from a suggestion by Angelo Figueiredo <am dott figueiredo att fct dott unl dott pt> and was discussed at http://thread.gmane.org/gmane.science.nmr.relax.user/1428/focus=1430. This allows the user to scale parameters to any value, for example scaling the Rex value to the field strength dependent value.
  • The model-free auto-analysis (the dauvergne_protocol [d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b]) now generates field strength dependent Rex files. The idea comes from a suggestion by Angelo Figueiredo <am dott figueiredo att fct dott unl dott pt> and was discussed at http://thread.gmane.org/gmane.science.nmr.relax.user/1428/focus=1430. One file per field strength is generated and named 'rex_600' for 600 MHz, for example. The new scaling argument of the value.write user function is being used to scale the tiny field strength independent value used internally in relax to the Rex value in rad.s-1 that you would see in an R2 data set (a worked example is given at the end of this list).
  • Added the new 'comment' argument to the value.write user function. This is used to add user comments to the top of the file.
  • The model-free auto-analysis (the dauvergne_protocol module [d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b]) now adds comments to the Rex files. This is through the new comment argument of the value.write user function. These comments explain that the Rex values are scaled to the stated field strength.
  • Modified the Mf.test_dauvergne_protocol system test to check for all the files and directories created.
  • Created the new lib.text.sectioning module for formatting titles, subtitles and other sectioning text. The two functions title() and subtitle() have been implemented (a rough sketch of the idea is given at the end of this list).
  • Created unit tests for the title() and subtitle() functions of the lib.text.sectioning module.
  • Expansion of the lib.text.sectioning module. The following new functions have been added: box(), section(), subsection(), subsubsection(), subtitle(), subsubtitle(), underline().
  • Expanded the unit testing of the lib.text.sectioning module to cover all title and section functions.
  • Added prespace and postspace arguments to the *section() and *title() functions of lib.text.sectioning. Through these arguments, the amount of spacing above and below the section text can be controlled.
  • Split the generic_fns.structure.geometric.create_rotor_pdb() function. The non-relax specific code has been shifted into the rotor_pdb() function.
  • Initialised the lib.structure package - this is currently empty.
  • Shifted the rotor creation components from generic_fns.structure.geometric to lib.structure.rotor. The create_rotor_pdb() function remains in place as this is the user function backend which checks for data pipes and updates the status object, but the rotor_pdb() and create_rotor_propellers() functions have been moved into the relax library. The create_rotor_propellers() function has been renamed to lib.structure.rotor.rotor_propellers().
  • Converted links in all docstrings to use the Epydoc hyperlink notation. This will allow links to be clickable for the API documentation.
  • Added Epydoc hyperlink markup for the bug tracker in the system test docstring where missing. This is for a better API documentation.
  • The lib.structure.rotor.rotor_pdb() rotor_angle argument should now be in radians. This does not affect the structure.create_rotor_pdb user function as the generic_fns.structure.geometric.create_rotor_pdb() function converts the value to radians prior to calling the rotor_pdb() function.
  • The lib.structure.rotor.rotor_pdb() function can now handle structural models. The model number argument has been added to allow the rotor structure to be added to a single model, or to all models if not supplied.
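
The Rex scaling mentioned above can be illustrated with a small worked example. Using the relation from the value.write comment, sigma_ex = Rex / omega**2, and assuming omega is the proton angular frequency 2*pi*frq, the conversion at 600 MHz would look as follows. The sigma_ex number and the value.write argument names in the final comment are illustrative assumptions only.

    from math import pi

    frq = 600e6                      # Proton frequency in Hz.
    scaling = (2.0 * pi * frq)**2    # omega**2, the factor passed as the scaling argument.

    sigma_ex = 2.0e-19               # An illustrative field strength independent value.
    rex_600 = sigma_ex * scaling     # Approximately 2.8 rad.s^-1 at 600 MHz.
    print("Rex at 600 MHz: %.2f rad.s^-1" % rex_600)

    # In a relax script the conversion might then be requested with something like:
    # value.write(param='rex', file='rex_600', scaling=scaling, force=True)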
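
The new lib.text.sectioning functions can be pictured with the sketch below. The underline characters, spacing defaults and signatures are assumptions chosen for illustration; the real relax module may format the text differently.

    import sys

    def title(file=sys.stdout, text='', prespace=2, postspace=1):
        """Write a boxed title with controllable spacing above and below."""
        file.write('\n' * prespace)
        file.write('#' * (len(text) + 4) + '\n')
        file.write('# %s #\n' % text)
        file.write('#' * (len(text) + 4) + '\n')
        file.write('\n' * postspace)

    def section(file=sys.stdout, text='', prespace=1, postspace=1):
        """Write an underlined section heading."""
        file.write('\n' * prespace)
        file.write('%s\n' % text)
        file.write('=' * len(text) + '\n')
        file.write('\n' * postspace)

    title(text='Model-free analysis')
    section(text='Optimisation results')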


relax 2.2.4

  • Updated the release checklist document to include the correct instructions for minfx and bmrblib. These are the packages bundled with relax (https://sourceforge.net/projects/minfx/ and https://sourceforge.net/projects/bmrblib/)
  • Improvements for Python 2 and 3 compatibility. Much of the Python 2 versus 3 compatibility code, as well as the compatibility code for different Python 2 versions and for different Python 3 versions, has been shifted into the compat module. The different parts of relax now import from the compat module for modules/packages with different import semantics for different Python versions. In addition the different handling of the bz2 and gzip modules for reading and writing files has been shifted from 'relax_io' into 'compat'.
  • Updated the 2to3 checklist document to include multiple threads for faster operation.
  • Eliminated the os.devnull import flag dep_check.devnull_import. This is not needed as the relax compat module defines os.devnull for Python ≤ 2.3. The devnull module is no longer part of the relax information printout.
  • Added a more informative error message if the platform module is missing. This is for Python ≤ 2.2. The file from http://hg.python.org/cpython/file/2.3/Lib/platform.py can simply be copied into the lib/pythonX.X/ directory to fix this.
  • Slight change to the message printed if the platform module is missing.
  • Modified the script for running the relax test-suite on multiple Python versions. The pre-2.2 Python versions are now commented out as well as the abortive Python 3.0.
  • Created the Mf.test_bug_20613_auto_mf_diff_tensor_pdb system test to catch bug #20613. This was reported by Angelo Miguel Figueiredo <am dott figueiredo att fct dot unl dot pt>. This test is a direct copy of the Mf.test_bug_20563_missing_ri_error system test. The only change is that the local tm global model results file (in the local_tm/aic/ directory) has been modified. These results were read into relax, the file test_suite/shared_data/structures/Ap4Aase_res1-12.pdb loaded into the data pipe, and the results saved again. This triggers the bug, as the problem is the presence of structural data with the local tm global model being selected in the auto-analysis.
  • Shifted all of the model-free specific analysis class documentation variables to the top. This is simply for better organisation of the code.
  • Created the model-free write_doc class variable talking about the field strength independent Rex value. This has been added to the value.display and value.write user functions to explain that Rex values are very small and that the user needs to scale them up.
  • Shifted all of the documentation variables to the top of the specific API_base class. This is for better organisation.
  • Added the write_doc class variable to the specific analysis API class as an empty string. This is to fix the unit tests.
  • Created the front end for the new structure.create_rotor_pdb user function. This will be used to create a PDB representation of a rotor motional model.
  • Added file, directory and overwrite force arguments to the structure.create_rotor_pdb user function.
  • Started to implement the backend of the structure.create_rotor_pdb user function.
  • The internal structural object MolContainer.add_atom() method now returns the index of the new atom.
  • Created the internal structural object MolContainer.last_residue() method.
  • Fully implemented the structure.create_rotor_pdb user function. For this, the generic_fns.structure.geometric.create_rotor_propellers() function was created.


relax 2.2.3

  • The relax intro text now includes the repository URL for checked out code. This is for preserving better debugging and logging information, so that it is clear where the code comes from.
  • Created the Structure.test_load_spins_mol_cat system test. This will be used to test a new 'mol_name_target' argument to the structure.load_spins user function.
  • Created the Structure.test_delete_multi_pipe system test. This is to check that the structure.delete user function is operating on a single data pipe.
  • Updated the Freecode instructions in the release checklist document.
  • Created the simple Structure.test_delete_empty system test. This is to demonstrate a failure of the structure.delete user function when no structural data is present.
  • Added a printout to structure.delete for when no structures are present.
  • Created the Structure.test_rmsd system test. This test checks the currently unimplemented structure.add_model and structure.rmsd user functions.
  • The structural API num_molecules() method can now handle no data being present.
  • Implemented the structure.add_model user function.
  • Added some more checks to the Structure.test_rmsd system test.
  • Modified the structure.add_model calls in the Structure.test_rmsd system test to include model nums.
  • Added the 'model_num' argument to the structure.add_model user function.
  • Modified the structure.add_atom user function to allow the position argument to be a rank-2 array. This allows a different coordinate for each model to be specified.
  • Spun out the atomic_rmsd() and calc_mean_structure() functions into their own module. They were previously in the generic_fns.structure.superimpose module but are now in the new generic_fns.structure.statistics module.
  • Added checks for the atomic information to the Structure.test_rmsd system test. This demonstrates a failure of structure.add_atom user function when specifying different positions for the different models.
  • Docstring addition for the generic_fns.structure.statistics.atomic_rmsd() function.
  • Implemented the structure.rmsd user function.
  • Fixes for the Structure.test_rmsd system test - it now passes.
  • Created a new float_object argument type which is used by the 'pos' argument of structure.add_atom. A new arg_check.float_object() function has been created to handle any float object greater than rank-0.
  • Created the Structure.test_rmsd_ubi system test to better check the structure.rmsd user function. This uses the truncated ubiquitin ensemble in the test suite shared data directories. The RMSD matches the VMD 1.9.1 output.
  • Added a new module generic_fns.structure.pdb_write for generating the PDB records. This decouples the formatting code from the internal structural object. The PDB format has been updated to version 3.30. There is one function for each PDB record, allowing this to be easily extended and kept up to date.
  • Created the generic_fns.structure.pdb_read module. This replaces the internal structural object _parse_pdb_record() method which was handling both ATOM+HETATM and CONECT records. It should allow greater flexibility in reading data out of other PDB records in the future. There is one function per PDB record type in this module.
  • Added the full 1UBQ PDB structure to the relax test-suite shared data directories. This is a small, very quick to read structure which will be used for validating the reading and writing of different PDB record types.
  • Changes to the internal structural object. The _parse_models_pdb() method has been renamed to _parse_pdb_coord() and the opening of the PDB file shifted into the base load_pdb() method. This is in preparation for better parsing of PDB files to match the main sections of the PDB format, see http://www.wwpdb.org/documentation/format33/v3.3.html.
  • Created the Structure.test_read_pdb_1UBQ to check the complete parsing of the complex PDB file. The test is currently quite basic and needs to check more of the internal structural object.
  • Better checks for the atomic data in the Structure.test_read_pdb_1UBQ system test.
  • Added a series of _parse_pdb_*() methods to the internal structural object. These correspond to each section of the PDB format version 3.30 http://www.wwpdb.org/documentation/format33/v3.3.html. These currently loop over the records of their section, returning the remaining PDB records. The aim is for fast parsing and breaking into sections.
  • Faster PDB parsing by the removal of the use of the re.search() function. Line slices are now directly compared instead (see the illustration at the end of this list).
  • Added some more unit tests for the generic_fns.structure.pdb_read module. These tests are not yet complete, as it is unknown what these unimplemented functions will return.
  • Completed the unit test of the generic_fns.structure.pdb_read.helix() function.
  • Implemented the generic_fns.structure.pdb_read.helix() function.
  • Created the Mf.test_bug_20531_molmol_macro_write_relaxfault system test. This is an attempt at catching bug #20531. It creates all of the m0-m9 and tm0-tm9 models, sets some parameter values, and then attempts to create all of the Molmol macros, PyMOL macros, Grace plots and parameter text files as present in the auto_analysis.dauvergne_protocol module [d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b].
  • The spectrometer frequency is now set in the Mf.test_bug_20531_molmol_macro_write_relaxfault system test. This is needed for the Rex scaling.
  • The spin name, element and isotope are now set in Mf.test_bug_20531_molmol_macro_write_relaxfault. This is required in this system test so that the macro creation is not skipped.
  • Added some work-arounds for the model-free specific code for when no relaxation data is present. This is needed for the Rex scaling, as the ID of the first relaxation data set was being used to select the first frequency. As caught by the Mf.test_bug_20531_molmol_macro_write_relaxfault system test, this fails if no relaxation data is present.
  • Expanded the unit test of the generic_fns.structure.pdb_read.sheet() function.
  • Implemented the PDB SHEET record parsing function generic_fns.structure.pdb_read.sheet().
  • Extended the PDB ATOM record reading unit test to be of 80 characters in length, as per the PDB definition.
  • Created unit tests for the generic_fns.structure.pdb_write module. This currently covers the atom(), helix() and sheet() functions (the last 2 are not yet implemented).
  • Implemented the PDB HELIX record writing function generic_fns.structure.pdb_write.helix().
  • Improved PDB writing capabilities. The functions of the generic_fns.structure.pdb_write module now all use the _handle_none() function to avoid the text "None" from appearing in the PDB file and _record_validate() to be sure the record has not been corrupted by bad input causing it to be either shorter or longer than 80 characters.
  • The Mf.test_bug_20531_molmol_macro_write_relaxfault system test now catches bug #20531. This now uses the results file attached to the bug report.
  • Implemented the PDB SHEET record writing function generic_fns.structure.pdb_write.sheet().
  • Created a unit test for the generic_fns.structure.pdb_write.het() function.
  • Created the generic_fns.structure.pdb_write._handle_text() function. This private function is used to convert text into PDB suitable format (uppercase and values of None converted to empty strings).
  • The diffusion tensor PDB files now conform better to the PDB standard. The HET records are now correct, only capitalised text is present in the files, and trailing whitespace to character 80 has been added.
  • Epydoc docstring formatting for the generic_fns.structure.pdb_write module. These large changes improve the API documentation at http://www.nmr-relax.com/api/.
  • Created a unit test for the generic_fns.structure.pdb_write.model() function.
  • Added a new PDB file with 3 models and a few atoms for testing of the structure.web_of_motion user function.
  • Created the Structure.test_web_of_motion_all system test. This is to check the new structure.web_of_motion user function.
  • The structure.web_of_motion user function can now handle file objects as well as file names as input.
  • Small fixes for the Structure.test_web_of_motion_all system test.
  • Created the Structure.test_web_of_motion_12 system test to show how model sets are currently ignored.
  • Implemented the models argument for the structure.web_of_motion user function. This was previously not being used and was caught by the Structure.test_web_of_motion_12 system test.
  • Created the Structure.test_web_of_motion_13 system test. This was just to be sure that the models argument was correctly handled by the structure.web_of_motion user function.
  • The structure.find_pivot user function now accepts the func_tol argument. This is used to terminate the simplex optimisation when this function tolerance value is reached.
  • Shifted the ensemble pivot finding target function into the maths_fns package.
  • Added a sentence to the README file about the sample_scripts directory.
  • Added a document detailing the possible future layout of relax's packages.
  • The structure.find_pivot user function now uses the logarithmic barrier function. This is for constrained optimisation and requires the newest minfx code. The pivot position is constrained within a box of +/- 1000 Angstroms from zero. This is needed for when the solution is an infinite line - i.e. a rotation axis and not a pivot point. Previously the simplex optimisation would head toward + or - infinity. But now with a logarithmic barrier, the simplex algorithm can stabilise and find a point on the axis very quickly, long before reaching the edges of the box.
  • The structure.find_pivot user function now accepts the func_tol and box_limit arguments. This allows the function tolerance for the simplex optimisation to be specified, as well as the size of the box to constrain the pivot to be within.
  • Initialised the lib.geometry package. This will be a library of all mathematics functions relating to geometry.
  • Added empty packages to the unit tests for the lib and lib.geometry packages.
  • Updated the maths_fns package __all__ list.
  • Updated the test_suite.unit_tests package __all__ list to be more modern.
  • The n_state_model.number_of_states user function no longer requires the N-state model to be defined. This was only needed to update the model information, and is skipped if not set.
  • The generic_fns.structure.superimpose.find_centroid() function now prints out Euler angles as well.
  • Large improvements to the checking for all of the rdc and pcs user functions. The new check_pipe_setup() methods have been added to replace all other checking. This standardises all error checking and provides much better coverage. The result is that you will be much less likely to encounter a Python traceback when something is forgotten, and will instead be told via a RelaxError what is missing.
  • The rdc.back_calc and pcs.back_calc user functions now warn if no data was calculated. This is to inform the user about problems at the place that they occur instead of later on with, for example, the creation of empty data files.
  • Updated the float module to handle numpy floats. This makes the floatToBinaryString() function compatible with the numpy.float16 type.
  • Removed the prune parameter from the backend of the monte_carlo.error_analysis user function. This was a dangerous parameter used to mimic the 'Trim' parameter from the Modelfree4 program. The result is bad statistics. The probable reason for the 'Trim' parameter was the failure of model-free models in the simulations, but this issue was solved using model elimination (see http://www.nmr-relax.com/refs.html#dAuvergneGooley06).
  • Created the Structure.test_read_xyz_strychnine system test to demonstrate a bug in the XYZ parser. This is for the reading of XYZ structure files.
  • Created the lib.text package for text manipulation. The first module will be the text formatting of tables.
  • Created the lib.geometry.lines module for performing geometric operations with lines. This has one stub of a function lib.geometry.lines.closest_point() which will be used to find the closest point on a line to a given point.
  • Added the package checking unit tests for the lib package.
  • Improved the base class unit test for the package __all__ list. Subpackages are now also checked.
  • Blacklisted a number of files in the maths_fns package for the package __all__ list unit test.
  • Added a unit test for the lib.geometry package __all__ list.
  • Created a unit test for the lib.geometry.lines.closest_point() function.
  • Created the lib.text.table module. This originates from the prompt.uf_docstring module, as most of that module consists of functions for creating formatted text tables.
  • Updated the lib package __all__ list for the lib.text package.
  • Implemented the closest_point() and closest_point_ax() functions of lib.geometry.lines. These two functions do the same thing - find the closest point on a line to any given point - but take different arguments to define the line (see the sketch at the end of this list).
  • Improved the package __all__ list base unit test by skipping all hidden files and directories.
  • Refactored the lib.text.table module. The create_table() function is now called format_table() and the table_line() function has been made private. All references to the user function tables and the relax status object have been removed and replaced by arguments to format_table().
  • The prompt.uf_docstring module now uses lib.text.table.format_table(). This significantly simplifies the module.
  • Removed a number of unused imports in prompt.uf_docstring.
  • Deleted prompt.uf_docstring.table_line() as this is now a private function of lib.text.table.
  • Fix for lib.text.table.format_table() as table_line() is now private.
  • Added the spacing argument to lib.text.table.format_table(). This removes the reference to the user function table spacing variable from this function and shifts it to the prompt.uf_docstring.create_table() function.
  • Created the framework for the unit tests of the lib.text package.
  • Created two unit tests for the lib.text.table.format_table() function.
  • Updates to the unit tests of the lib.text.table.format_table() function.
  • Many improvements to the lib.text.table module. The format_table() function now accepts arguments for text to prefix and postfix to each line, the text padding to the left and right inside the table, and the text used to separate the columns. The _blank() and _rule() private functions have been added to create distinct table elements (a usage sketch is given at the end of this list).
  • Created the lib.text.table.MULTI_COL constant for defining cells spanning multiple columns. This is not used yet.
  • Modified the Mf.test_mf_auto_analysis GUI test to catch bug #20603.
  • Created a unit test for the lib.text.table.format_table() function to test multiple column support. Support for content spanning multiple cells is yet to be implemented.
  • Implemented multi-column support in lib.text.table.format_table().
  • Spacing between heading rows is now functional in lib.text.table.format_table().
  • Created a new unit test of lib.text.table.format_table() to check for non-string type data.
  • The table contents are now all converted to strings in lib.text.table.format_table(). This uses the _convert_to_string() private function.
  • Converted the test_format_table4() unit test of lib.text.table.format_table() to check justification. The right justification of cells with numbers will be implemented to match these changes.
  • Numbers are now right justified in cells in the lib.text.table.format_table() function.
  • Modified the test_format_table4() unit test of lib.text.table.format_table(). This change is to test the currently unimplemented custom_format argument. This will be used to allow special formatting in the table. For example using '%.3f' for a float.
  • Implemented the custom_format argument for lib.text.table.format_table(). This allows cell contents to be formatted as the user asks. It defaults to standard string conversion if the custom conversion fails.
  • Rounding error fix for the test_format_table4() unit test of lib.text.table.format_table().
  • Python 3 fix for the test_format_table4() unit test of lib.text.table.format_table(). The string representation of the builtin list object is different in Python 2 vs. 3.
  • Created the test_format_table5() unit test for lib.text.table.format_table(). This test checks what happens if no header is given to format_table(). This currently fails.
  • The lib.text.table.format_table() function can now create a table without headers.
  • Added column number checks for the data input into lib.text.table.format_table().
  • Created the test_format_table6() unit test for lib.text.table.format_table(). This test shows a problem when more than one multi-column cell is defined, as well as problems when a multi-column cell is wider than the sum of the widths of the columns it spans.
  • Fix for lib.text.table.format_table() when more than one multi-column cell per row is encountered. The algorithm for determining the total width of the multi-column cell in _table_line() was not checking if the end of the span was being reached.
  • The lib.text.table.format_table() function now handles overfull multi-column cells. The _determine_widths() private function has been created to better handle the determination of the table column widths. It will now extend the width of the last column to allow overfull multi-column cells to fit.
  • Modified the test_format_table5() unit test of lib.text.table.format_table() to check bool types.
  • The lib.text.table.format_table() function now handles boolean types.
  • Booleans are not numbers, so do not right justify them in lib.text.table.format_table().
  • The minfx.__version__ value is now read for the version in the relax information printout.
  • The bmrblib.__version__ value is now read for the version in the relax information printout.
  • All of the specific API data and error returning common methods can now handle missing data/errors. This affects the _return_data_relax_data() and _return_value_general() methods.
  • Updated the release checklist to include information about updating the FSF directory.
  • Modified the release checklist document to use the stable release tags of minfx and bmrblib. This is instead of the code in trunk which may not always be in a stable state.
  • Redesign of the generic_fns.mol_res_spin.generate_spin_id() function. The function now tries to generate a unique ID based on the spin information in the specified data pipe. This is to attempt to fix a bug uncovered by the Structure.test_read_xyz_internal2 system test. Defaulting in all cases to the spin name rather than spin number will often fail for a small organic molecule, as the name in XYZ files is the atomic symbol and hence will almost never be unique.
  • Created the generic_fns.mol_res_spin.return_molecule_by_name() function. This will be used in the future as it is much faster than generic_fns.mol_res_spin.return_molecule() if the molecule name is already known.
  • Missing import affecting the generic_fns.interatomic.create_interatom() function.
  • Reverted the last revision (r18737) as it was not correct and RelaxErrors should be used instead. The command used was: svn merge -r18737:18736 .
  • Fix for the generic_fns.interatomic.create_interatom() function. RelaxNoSpinWarning has been replaced with RelaxNoSpinError.
  • Fixes for the metadata update of the residue and spin name and number counts.
  • Created the generic_fns.mol_res_spin.generate_spin_id_unique() function. This will return a truly unique spin ID string based on the current molecule, residue, and spin data structure.
  • The spin_loop() function now uses generate_spin_id_unique() when the return_id flag is set. This ensures that the caller receives a unique spin ID which can be used to retrieve the corresponding spin container.
  • Improved the generic_fns.mol_res_spin.generate_spin_id_unique() function. This can now work with molecule, residue, and spin names and numbers alternatively to the containers supplied as arguments. For this to work, the return_molecule_by_name() function has been improved and the functions return_residue_by_info() and return_spin_by_info() have been added.
  • The pcs.read user function backend now uses generic_fns.mol_res_spin.generate_spin_id_unique(). This allows the matching spin container to always be returned for storing the data.
  • Large speed ups of the Bmrb system tests by the deletion of most of the residues. On one system, this cuts the time for all 3 Bmrb tests from 70 to ~12 seconds.
  • Added the profile flag keyword argument to the relax startup script for Unix-like systems. This is to simplify the switching on of profiling.
  • Large cleanup and bugfixes for the molecule, residue, and spin data structure metadata maintenance. The bugs fixed are important for non-protein molecules. For example, if the spin name is not unique per residue, or per molecule if no residues are defined, many parts of relax would fail. All of the metadata_*() and spin_id_variants*() functions have been redesigned. It was also identified that metadata_prune() was being used by different parts of relax for two different purposes - the removal or pruning of metadata prior to the deletion of a data structure, and the clean up of no longer valid metadata. These two goals conflicted, resulting in unpredictable behaviour. Therefore the new metadata_cleanup() and spin_id_variants_cleanup() functions have been created and the two behaviours separated.
  • Fix for the bmrb.read user function for the recent molecule, residue and spin metadata improvements. The generic_fns.bmrb.generate_sequence() function now calls generic_fns.mol_res_spin.metadata_clean() to be sure that the metadata is correct. The problem is the structure of the BMRB file with no spin information in the entity record, hence the residues are created first and the spins much later in generate_sequence().
  • Removed unused imports in the generic_fns.rdc module.
  • The generic_fns.mol_res_spin.generate_spin_id_unique() function now handles missing spin containers. Previously if this function was used to generate a spin ID string of a spin not in the data store, it would fail. Now it generates an ID by defaulting to generate_spin_id().
  • Converted many calls to generic_fns.mol_res_spin.generate_spin_id() to generate_spin_id_unique(). This will allow many future bugs to be avoided, as the spin ID string is most often used to retrieve spin containers. By using the generate_spin_id_unique() function, the returning of spin containers will always be correct.
  • Created the Mf.test_bug_20563_missing_ri_error system test to catch bug #20563. The data added to the test suite is a highly truncated data set from an analysis completed using the data attached to the bug report.
  • Modified the dauvergne_protocol model-free auto-analysis [d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b] to aid in debugging. The write_results_dir argument has been added to allow the test suite to read input from the shared test suite data directories while redirecting all output to a temporary directory.
  • The files from the Mf.test_bug_20563_missing_ri_error system test are now placed in a temporary directory. This is essential for the test suite to prevent files from going everywhere.
  • The frq.set user function units argument is no longer read-only. This is needed for some of the GUI tests in the frame_order_testing branch.
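A minimal sketch of the unique spin ID behaviour described in the entries above, using a simple dictionary as a stand-in for the real metadata lookup table. This is illustrative only - the real functions live in generic_fns.mol_res_spin, take the molecule, residue and spin containers themselves as arguments, and maintain the lookup table internally - but it shows the idea of preferring an ID string which resolves to a single spin container and defaulting to the plain generate_spin_id() form when the spin is not yet in the data store.

    # Illustrative only - not the relax implementation.
    def generate_spin_id(mol_name=None, res_num=None, spin_name=None, spin_num=None):
        """Assemble a '#molecule:residue@spin' style ID from the supplied parts."""
        spin_id = ''
        if mol_name is not None:
            spin_id += '#%s' % mol_name
        if res_num is not None:
            spin_id += ':%s' % res_num
        if spin_name is not None:
            spin_id += '@%s' % spin_name
        elif spin_num is not None:
            spin_id += '@%s' % spin_num
        return spin_id

    def generate_spin_id_unique(lookup, mol_name=None, res_num=None, spin_name=None, spin_num=None):
        """Return an ID which maps back to exactly one spin container.

        The 'lookup' dictionary maps candidate ID strings to the number of matching spins.
        """
        # The name based ID string.
        spin_id = generate_spin_id(mol_name, res_num, spin_name=spin_name)

        # The ID is already unique within the data store, so it is safe to use.
        if lookup.get(spin_id, 0) == 1:
            return spin_id

        # The name is ambiguous, so switch to the spin number instead.
        if lookup.get(spin_id, 0) > 1 and spin_num is not None:
            return generate_spin_id(mol_name, res_num, spin_num=spin_num)

        # The spin is not in the data store at all - default to the plain ID.
        return spin_id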


relax 2.2.2

  • Updated the release checklist to include the devel_scripts/log_converter.py script usage.
  • Modified the relax manual subtitle as this is no longer only for relaxation analysis.
  • Docstring fix for the maths_fns.vectors.random_unit_vector() function (this is for the API documentation at http://www.nmr-relax.com/api/).
  • Epydoc docstring fix for the dfunc_standard() N-state model target function (this is for the API documentation at http://www.nmr-relax.com/api/).
  • Epydoc docstring fixes for the diffusion tensor objects of the relax data store.
  • Added and edited a number of module docstrings.
  • Module and package docstrings additions/improvements for the SCons scripts.
  • Lots of module and package docstring updates for the analysis specific code.
  • Module docstring additions and improvements for the relax data store modules.
  • Addition of the generic_fns package docstring.
  • Added a module docstring for the main relax module.
  • Created the State.test_bug_20480 system test to catch bug #20480, the failure to load a saved relax model-free state in the GUI. This bug was reported by Stanislava Panova.
  • Created the Mf.test_bug_20479_gui_final_pipe GUI test to catch bug #20479, the model-free analysis failure in the GUI after setting the protocol mode to local τm. This bug was reported by Stanislava Panova.
  • Added a printout to the GUI test case base check_exceptions() method to explain what is happening.
  • Large expansion of the Mf.test_bug_20479_gui_final_pipe GUI test. Instead of loading the bug #20480 state file, now the entire GUI analysis starting from scratch and using the relaxation data files is performed remotely by the test. This is necessary because the result of the bug is present in the state file.
  • Added spherical diffusion to the optimisation in the Mf.test_bug_20479_gui_final_pipe GUI test.
  • Added a global model print out to the Mf.test_bug_20479_gui_final_pipe GUI test. This is to help identify where failures occur.
  • Proper handling of the dipole interaction wizard in the Mf.test_bug_20479_gui_final_pipe GUI test. This was causing the subsequent GUI tests to fail as the observer objects from the wizard were not all being unregistered.
  • Added skips for some GUI tests when wxPython version '2.9.4.1 gtk2 (classic)' is used. There are a number of bugs in this version which cannot be worked around in certain GUI tests, so they must be skipped.
  • More wxPython version '2.9.4.1 gtk2 (classic)' bug avoidances in the GUI tests. Now the auto-analyses do not check the gauges in the relax controller at the end of the auto NOE, Rx, and model-free analyses, as reading gauge values is faulty in this version. The Rx test is no longer skipped for this wxPython version.
  • Improved the printout from the align_tensor.matrix_angles user function. The relax_io.write_data() function is being used and the tensors are now identified by name rather than index.
  • Improved the printouts from the align_tensor.svd user function.
  • The relax program introduction now includes the revision number for Subversion checked-out copies. This allows for better identification of the code base being used.
  • Fixes for the Pcs.test_structural_noise system test. As this test is based on random functions, it would sometimes, though rarely, fail. Now the simulation accuracy has been increased and the checks are less rigorous.
  • Spacing fixes as identified by the Python 2to3 conversion program.


relax 2.2.1

  • Replaced a reference to freshmeat with Freecode in the Release checklist document. Freshmeat no longer exists and is now called Freecode (http://freecode.com/projects/nmr-relax).
  • Created the Mf.test_bug_20464_missing_ri_data system test to catch bug #20464. The data comes from the bug report submitted by Stanislava Panova (stpanova att gmail dot com).
  • Created the Structure.test_bug_sr_2998_broken_conect_records system test. This is to catch the bug reported as service request #2998 and is for corrupted PDB files with broken CONECT records.
  • Created the Structure.test_bug_20469_scientific_parser_xray_records system test. This is to catch bug #20469.
  • Created the Structure.test_bug_20470_alternate_location_indicator system test to catch bug #20470.
  • Created the Structure.test_alt_loc_missing system test. This is to test that the internal relax PDB reader raises an error when a PDB file is encountered with alternate location indicators but the alt_loc argument has not been specified.
  • Created the Bmrb.test_bug_20471_structure_present to catch bug #20471.
  • Modified the bmrb.read documentation to make it clearer that the data pipe must be empty.


relax 2.2.0

  • The relax HTML user manual footer has been modified to remove the name of the person who compiled it. This is for http://www.nmr-relax.com/manual/index.html, and now contains links for relax (http://www.nmr-relax.com), the manual (http://www.nmr-relax.com/manual) and the PDF version of the manual (http://download.gna.org/relax/manual/relax.pdf).
  • Small syntax fix in the release checklist document.
  • Added the MARC archive links to the development chapter of the relax user manual. These links are: http://marc.info/?l=relax-announce&r=1&w=2, http://marc.info/?l=relax-users&r=1&w=2, http://marc.info/?l=relax-devel&r=1&w=2, and http://marc.info/?l=relax-commits&r=1&w=2.
  • The model-free overfit deselection algorithm now fails with a RelaxError when no spins are selected. This is to avoid situations such as bug #20277.
  • The pipe.display user function now uses relax_io.write_data() for better output formatting.
  • Created the N_state_model.test_data_copying system test for the rdc.copy and pcs.copy user functions. These user functions do not exist yet, but this test will be used to implement them.
  • Reactivated the rdc.copy and pcs.copy user function front-ends. The backends are missing, so relax is currently broken.
  • Created the RelaxNoAlignError error class for use by rdc.copy and pcs.copy.
  • Created the RelaxAlignError error class for use by the rdc.copy and pcs.copy user functions.
  • Implemented the rdc.copy and pcs.copy user function backends. This code is copied from the relax_data.copy user function and has been tailored to the different data types.
  • Modified the RDC and PCS data copying system test script to check overwriting. The rdc.copy and pcs.copy user functions should support the overwriting of existing values.
  • The rdc.copy and pcs.copy user functions now support overwriting pre-existing data.
  • Removed some debugging printouts.
  • The N_state_model.test_data_copying system test now checks the spin RDC and PCS data.
  • The model_selection user function is now using relax_io.write_data() for its printouts. This allows for clean formatting when data pipes have long names.
  • The rdc.write and pcs.write user functions now skip deselected spins.
  • The axis for the PDB geometric cone can now be turned off in the create_cone_pdb() function. The axis_flag keyword argument is now accepted and, if False, will cause the axis to be excluded. This is useful for the frame order cones, for example, as they create their own {x,y,z}-axis system.
  • Many docstring fixes for the functions of the generic_fns.structure.geometric module.
  • Created the N_state_model.test_absolute_rdc_menthol system test to demonstrate a pseudo-atom failure. This is a test of the long range, absolute RDCs for menthol.
  • Added a check for the second Q factor in the N_state_model.test_absolute_rdc_menthol system test.
  • Modified the N_state_model.test_populations system test to catch bug #20335. This simply adds calls to the rdc.delete and pcs.delete user functions, and then reloads the RDC and PCS data.
  • Modified the temperature user function - the value can be set twice if it is the same value.
  • Modified the frq.set user function - the value can be set twice if it is the same value.
  • The rdc.back_calc user function now handles absolute RDCs.
  • Created the Align_tensor.test_copy system test to catch bug #20338.
  • The spin.create_pseudo user function 'members' argument is no longer read only in the GUI. This allows the user to type in shorter spin IDs rather than selecting them from the list.
  • Shifted and renamed the arg_check.check_float() function to check_types.is_float().
  • The relax_io.write_spin_data() function now formats floating point numbers better. This affects the printouts of many data loading user functions.
  • Better printouts from the rdc.read user function - the numbers are now formatted.
  • Created the interatomic.copy and interatomic.create user functions. These are simply new user front-ends for the functions of generic_fns.interatomic.
  • The generic_fns.interatomic.copy() function now accepts spin IDs as arguments to partially copy the data.
  • Expanded the RelaxNoSpinError class to accept the data pipe name for the error printout.
  • Created the Interatomic.test_copy system test to check the interatomic.copy user function.
  • Expanded the Interatomic.test_copy system test to check interatomic.copy without spin IDs.
  • Added a test for the presence of target sequence data in generic_fns.interatomic.copy().
  • Spun out code from generic_fns.pipes.create() into the new check_type() function. This code will be reused in a new pipe user function.
  • Created the Pipes.test_change_type system test to check the non-existent pipe.change_type user function.
  • Implemented the pipe.change_type user function front and back ends.
  • Created the Align_tensor.test_fix() system test to check the operation of align_tensor.fix.
  • Added some synthetic paramagnetically aligned RDC and PCS data to the test suite. This will be used in later system tests.
  • Fixes for the PCS values of the paramagnetic alignment test suite data. The data generation script output and results file have been added to the repository as well.
  • Created the N_state_model.test_paramag_align_fit system test to check the paramagnetic data. This test checks the alignment tensor optimisation of the RDC and PCS data in test_suite/shared_data/align_data/paramagnetic/, loading both alignment data sets but only optimising one tensor.
  • The RelaxErrors when calling user functions in the prompt/script interface are now more informative. The user function is now stated. This is to better help the user work out where the problem is.
  • Created the Rdc.test_rdc_copy system test to demonstrate the failure of the rdc.copy user function.
  • Created the Pcs.test_pcs_load and Pcs.test_pcs_copy system tests to check some of the PCS user functions. The Pcs system test class is new, and these tests check untested areas of relax.
  • Created RelaxInteratomInconsistentError for when the data is inconsistent between two data pipes.
  • Created the generic_fns.interatomic.consistent_interatomic_data() function for checking data consistency (a sketch of this check is given after this list).
  • The rdc.copy user function now uses the new consistent_interatomic_data() function prior to copying. To copy the RDC data, the interatomic data containers must be identical between the two data pipes.
  • Fix for the N_state_model.test_data_copying system test. The interatomic data is now copied prior to copying the RDC data.
  • Created 4 unit tests to demonstrate the failure of the selection object with spin IDs.
  • The molecule, residue and spin selection object now works with spin IDs.
  • Docstring consistency editing for all parts of the generic_fns.mol_res_spin module.
  • Created the Selection system test class. This currently has the test Selection.test_deselect_all for checking the deselect.all user function. The number of tests will be expanded in the future to cover interatomic data containers and the operation of all the select and deselect user functions.
  • Shifted the boolean selection operations of the generic_fns.selection module into two new functions. These are the boolean_select() and boolean_deselect() functions. The change removes much duplicated code which could be a source of bugs in the future.
  • The frq.set user function now warns if the frequency is lower than 100 MHz or higher than 2 GHz (a sketch of this check is given after this list).
  • Updated the diffusion tensor minimisation sample script as the code is very old and useless.
  • Created the State.test_align_tensor_with_mc_sims system test to catch bug #20414.
  • Modified the align_tensor_mc.bz2 save file to catch a strange and rare bug. This is caught by the State.test_align_tensor_with_mc_sims system test.
  • Spun out the maths_fns.rotation_matrix.random_rot_axis() function into its own module. The function is now called maths_fns.vectors.random_unit_vector().
  • Added a second data pipe with data to the 'align_tensor_mc.bz2' saved state to catch a bug. This bug was recently introduced.
  • Added checks for the RDC data in the State.test_align_tensor_with_mc_sims system test. This is to be sure that the data is properly converted from the old design.
  • Added the 'empty' flag to the sequence.copy user function to allow all the spin contents to be copied. The user function was only copying the basic empty molecule, residue and spin containers, in contrast to the interatomic.copy user function which copies all of the container contents as well. This new flag is for backwards compatibility - it allows old scripts to operate as before while enabling the new functionality.
  • Removed the check for the 3D structural data in the paramag.centre user function. This check is not needed.
  • Created the Pcs.test_structural_noise system test for the new pcs.structural_noise user function.
  • Created the N_state_model.test_mc_sim_failure system test to demonstrate a bug in the N-state model. This appears to be a problem with Monte Carlo simulations when data is missing.
  • Modified the N_state_model.test_mc_sim_failure system test to include missing PCS data. This is to catch another bug.
  • Modified the missing data system test script to include Monte Carlo simulations. This is to cover untested code paths.
  • Added calls to rdc.set_errors and pcs.set_errors in the missing data N-state model system test script. These user functions currently do not exist, but are needed as the data files contain no errors.
  • Modified all generic_fns.mol_res_spin.get_*() functions to handle no data pipes being present. These functions were previously raising RelaxErrors as no pipes were present. They now return empty lists instead. This allows many of the GUI user functions to open in the GUI when no data is present, allowing better debugging and less confusion for the user (see the sketch after this list).
  • The Pipes.test_change_type system test is skipped if the required scipy module is not installed.
  • Python 3 fix for the new pcs.structural_noise user function. There was a string/unicode problem in the Grace plot creation code.
  • Created the Pcs.test_load_multi_col_data system test to demonstrate a failure of PCS data loading. This is a problem when 15N data is in one column and 1H data is in another, and the spin_id argument is used to specify which is which.
  • Added some printouts to the Pcs.test_load_multi_col_data system test.
  • Created the Pcs.test_grace_plot system test to check the pcs.corr_plot user function.
  • Created the Pcs.test_load_multi_col_data2 system test to catch a bug with the molecule name. This is the same as the Pcs.test_load_multi_col_data system test but the spins have the molecule name set.
  • Created the Mol_res_spin.test_prune_metadata system test to catch a bug in the spin ID lookup table. Spin IDs appear not to be correctly removed from the lookup table.
  • Added some more checks to the Mol_res_spin.test_prune_metadata system test to demonstrate more bugs.
  • Activated the Monte Carlo simulations in the metal_pos_opt.py system test script. This is to test the combination of Monte Carlo simulations and paramagnetic centre position optimisation.
  • Added Monte Carlo simulations to the N_state_model.test_paramag_centre_fit system test. This is to better test the code paths.
  • Modified the N_state_model.test_mc_sim_failure to demonstrate a failure in paramagnetic centre code. The failure is for the combination of paramagnetic centre optimisation and Monte Carlo simulations.
  • Modified the paramag.centre user function printouts for the 'fix' flag.
  • The alignment tensor objects in the relax data store now support sequential Monte Carlo analyses. The AlignTensorData.set_sim_num() method was preventing a second Monte Carlo error analysis from being performed by throwing a RelaxError. The check for previous simulations has been killed.
  • Added checks to the N-state model for the paramagnetic centre optimisation. Only simplex optimisation without constraints is allowed for the paramagnetic centre position as the PCS gradients and Hessians are not yet implemented for the coordinate parameters.
  • Improved the RDC and PCS Q factor calculation warnings to be more informative. These warnings sometimes appear at the end of the N-state model optimisation, but it is not clear where they come from.
  • Clean up of some of the logic in N-state model analysis specific code. The following methods have been added: _opt_tensor(), _opt_uses_align_data(), _opt_uses_pcs(), and _opt_uses_rdc(). These are used throughout the class to determine what is needed for or used during optimisation, making a lot of the checking code more consistent (hence removing latent bugs).
  • Added some more checks to the metal_pos_opt.py N-state model system test script.
  • First attempt at implementing the paramagnetic centre position gradient in the N-state model. This will be used for faster optimisation of the lanthanide position. Two new functions have been added: maths_fns.pcs.ave_pcs_tensor_ddeltaij_dc() and maths_fns.pcs.pcs_constant_grad(). These are used by the dfunc_*() methods of the N-state model target function class.
  • Major code simplification of the N-state model target functions. The func_tensor_opt(), dfunc_tensor_opt(), and d2func_tensor_opt() methods have been merged with the func_population(), dfunc_population(), and d2func_population() methods into the new func_standard(), dfunc_standard(), and d2func_standard() methods. This halves the amount of code required to be maintained and debugged. For the merger, the new probs_fixed class instance variable has been created to determine when the probabilities need to be unpacked from the parameter vector.
  • Removed the unused parameter scaling in the N-state model gradient and Hessian target functions.
  • Added a RelaxError to the N-state model Hessian for the optimisation of the paramagnetic position. This is because these equations are not derived or coded yet.
  • Expanded the N-state model target function func_standard() docstring to include the xi derivative. This is the partial derivative with respect to the paramagnetic centre position.
  • Comment fixes in the ave_pcs_tensor_ddeltaij_dc() and pcs_constant_grad() functions.
  • Modified the N-state model metal_pos_opt.py system test script. This is to test optimisation with the new paramagnetic position gradients.
  • BFGS optimisation is now being used for the N_state_model.test_mc_sim_failure system test. This is to have better test coverage of the paramagnetic centre position optimisation gradient code paths.
  • Simplified the parameter unpacking in the func_standard() N-state model target function.
  • Improved the comments in the _disassemble_param_vector() N-state model method.
  • Modified the populations.py N-state model system test script to better test optimisation. The probability of the 2nd state has been slightly shifted to make sure the original value can be found.
  • Modified the metal_pos_opt.py N-state model system test script to demonstrate some failures.
  • Improved the checks of the metal_pos_opt.py N-state model system test script.
  • Modified the metal_pos_opt.py N-state model system test script to catch yet another bug.
  • Added Monte Carlo simulations to the align_fit.py N-state model system test script. This is to increase the very low coverage of Monte Carlo simulation testing for the N-state model.
  • Modified the metal_pos_opt.py N-state model system test script to test the bootstrapping code path. This converts the Monte Carlo simulations into bootstrapping to make sure this method also functions correctly.
  • Implemented the N-state model specific return_data() method. This is needed for bootstrapping.
  • Fixes for the N-state model return_data() method.
  • Modified the RelaxNoRDCError and RelaxNoPCSError to accept no alignment ID. This is then used to indicate the complete absence of data.
  • Modified the initial testing of the rdc.set_errors and pcs.set_errors user functions. This is to better indicate to the user what the problem is and why the user function cannot operate.
  • Fixes for the align_fit.py N-state model system test script. The recently introduced Monte Carlo simulations and associated RDC and PCS error setting was failing when RDC or PCS data was missing. The script now checks the mode of operation and only sets errors if the corresponding data is present.
  • The N_state_model.test_align_fit system test now checks the simulation PCS values.
  • Fix for the metal_pos_opt.py N-state model system test script. The moving interatomic data containers are now also deselected.
  • Added extensive data checks to the N_state_model.test_metal_pos_opt system test script.
  • Added new checks in the N_state_model.test_metal_pos_opt system test. These are for structures which should not be in the deselected spins and interatomic containers.
  • The N-state model _check_rdcs() method now skips deselected interatomic data containers. A FIXME comment has also been added to highlight a possible future problem.
  • Added some consistency to the specific analysis API base class. The return_data() method argument has been changed from 'spin' to 'data_id', as the data from the base_data_loop() methods are often not spin containers.
  • Made the χ2 value check less stringent in the N_state_model.test_metal_pos_opt system test. For some bizarre reason, the calc() call in the GUI is less precise.
  • The N_state_model.test_populations system test has been made less stringent to allow MS Windows to pass.
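For the consistent_interatomic_data() check introduced in the list above, the following is a minimal sketch. The spin_id1/spin_id2 attribute names and the use of ValueError in place of RelaxInteratomInconsistentError are assumptions for illustration, not the exact relax API.

    def consistent_interatomic_data(pipe_from, pipe_to):
        """Raise an error if the two data pipes define different interatomic pairs."""
        pairs_from = [(i.spin_id1, i.spin_id2) for i in pipe_from.interatomic]
        pairs_to = [(i.spin_id1, i.spin_id2) for i in pipe_to.interatomic]
        if pairs_from != pairs_to:
            # relax raises RelaxInteratomInconsistentError at this point.
            raise ValueError("The interatomic data is inconsistent between the two data pipes.")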
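The frq.set sanity check mentioned above amounts to a simple range test on the spectrometer frequency. A minimal sketch, assuming the frequency is supplied in Hz and using the standard warnings module in place of relax's RelaxWarning class:

    import warnings

    def check_frequency(frq):
        """Warn about spectrometer frequencies outside the 100 MHz to 2 GHz range."""
        if frq < 1e8:
            warnings.warn("The frequency of %.5g Hz appears to be too low." % frq)
        elif frq > 2e9:
            warnings.warn("The frequency of %.5g Hz appears to be too high." % frq)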
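The change to the generic_fns.mol_res_spin.get_*() functions follows the pattern sketched below. The data_store and current_pipe names are hypothetical stand-ins for the relax data store objects; the point is simply that an empty list is returned instead of a RelaxError being raised when no data pipe exists, so GUI elements can still be built.

    def get_molecule_names(data_store):
        """Return all molecule names, or an empty list if no data pipe exists yet."""
        # No data pipes are present - previously a RelaxError was raised here.
        if data_store is None or data_store.current_pipe is None:
            return []
        return [mol.name for mol in data_store.current_pipe.mol]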


relax 2.1 series

relax 2.1.2

  • The scons 'clean' target now removes the Python 3 __pycache__ directories.
  • Small edit to the installation chapter of the user manual.
  • Decreased the Python version dependency from 2.5 to 2.3 in the installation chapter of the user manual.
  • More error checking for the associate_auto() method of the data pipe editor window.
  • Added data pipe bundle error checking for the GUI pipe editor window associate_auto() method.
  • Added some error checking for the data pipe bundle in the auto model-free analysis GUI code.
  • Added some special RelaxErrors for data pipe bundles.
  • Added some bug catching code for the observer objects. In some rare cases a registered method's key was set to None. This is now caught and a RelaxError thrown to prevent later indecipherable errors.
  • The setup.py application building script now complains if the Python setuptools are not installed.
  • Updated the relax prompt mode figure in the intro chapter of the user manual to the more modern prompt.
  • Improvements to the API documentation compilation. The excluded files and directories, as well as hidden ones, are no longer included in the list of files/directories to add to the documentation.
  • Added a file with the relax user functions used for the prompt screenshots. This is for the manual and the website.
  • Added the public domain LaTeX nth.sty style file for the user manual. Some LaTeX distributions do not have this style file and, as it is public domain, it can be legally distributed with relax allowing the PDF manual to compile on more systems.
  • Fixes for weird print statements with double brackets generated by the 2to3 Python conversion script.
  • Removed a debugging printout.
  • Python 3 fixes for one of the test data scripts - print statement with function call replacements.
  • Python 3 fixes for non-used Python code - converted print statements to function calls.
  • Python 3 fixes for the script for generating plots of magnetic field lines.
  • Another print statement to function Python 3 fix for the user manual.
  • The Python print statements in the user manual are now function calls to be Python 3 compatible.
  • Python 3 fix for the generic_fns.structure.geometric.angles_regular() function. Integer divisions no longer produce integers.
  • Better formatting of the test suite summary.
  • The relax controller log text is no longer cleared when a reset occurs. This allows the test suite results to still be presented in GUI mode.
  • Even cleaner exiting of the GUI - the interpreter thread is terminated by the exit_gui() method.
  • The GUI is now cleanly exited with a call to wx.App.ExitMainLoop rather than wx.Exit.
  • Python 3 fix for the compat module - the Queue2 object needs to always be defined.
  • Added support for Python 2.2 and earlier for the compilation of the C modules.
  • Removed an unused import of the Queue module from the multi-processor.
  • Python 3 fix for the ScientificPython PDB reader unit tests. The order of the keys returned by a dictionary's keys() method changes randomly in Python 3, so now they are sorted prior to comparison.
  • Redesigned the reset user function backend. This now not only clears out the relax data store, but it also resets the GUI if present. Some of the reset code comes from the tearDown() method of the GUI tests. All windows but the main GUI window are closed and the relax controller gauges are set to zero and the log window text cleared. These changes should allow GUI tests after an error or failure to pass, something which is currently problematic.
  • Disabled the initial relax intro printout from the GUI when running the test suite. This prevents the intro text from appearing in the first failed test.
  • Fix for the Mf.test_read_results_1_3_v2_broken() system test for Python 3.2. The object comparison method no longer converts dictionaries to strings for the comparison, as the string version is different in different Python versions.
  • Fix for the Mf.test_write_results() system test for the Python 3 versions. The logic for determining Python 3 versions was broken and the incorrect file was used for Python 3.1.
  • Better Python 2.3 support. The compat module is now imported at the very start to allow the builtins to be set before any other imports. The sorted() builtin method is now mimicked and the os.devnull string set for Python 2.3 and earlier.
  • Fix for the Mf.test_write_results() system test for Python 3.1. The XML version in Python 3.1 is the old style. Therefore the old results file is being used to check this Python 3.1 result.
  • Small improvements to the multiple Python version test suite testing script.
  • Reactivated support for Python 2.3. This mainly skips the missing 'subprocess' module. This however decreases relax's functionality a little.
  • Created a special script for testing out relax with Python versions 1.0 all the way to 3.3. This builds the C modules for each Python version in ~/bin and then runs the test suite, outputting everything to log files.
  • The Results system tests are no longer dependent on the relaxation curve-fitting C modules. This allows these tests to run when the module cannot be imported.
  • Python 2.5 and lower fix for the test_write_protein_sequence() unit test. The byte array is wrapped in an eval() statement to allow Python 2.5 and lower to parse the code without failing, and the byte array comparison is now only used for Python 3+.
  • All system and GUI tests reliant on the relax-fit C modules are deactivated if import fails. This removes a pile of useless error messages from the test suite and presents a table of skipped tests at the end.
  • More Python 3 fixes for the use of now non-existent string module functions.
  • Python 3 fix for the model-free BMRB export - many string module methods no longer exist.
  • Mass conversion of the alignment tensor data structures to the same new design as the diffusion tensor. This large set of changes matches all of those revisions for the diffusion tensor already committed. The alignment tensor data structures are now read only, and can only be modified via the set() method. This is a much simplified design which works on all Python versions.
  • Small clean ups of the diffusion tensor data structure code.
  • Deleted the now unused _update_sim_set() method of the diffusion tensor data structure.
  • Removed the now unused _update_sim_append() method from the diffusion tensor data structure.
  • Cleaned up the docstring of the diffusion tensor data object __setattr__() method.
  • Updated all of the diffusion tensor unit tests to the new design.
  • Fix for the reading of model-free results files from relax 1.2 when simulation data is missing.
  • Fix for the reading of relax 1.2 model-free results files for the diffusion tensor structure redesign.
  • Another fix for the fold_angles() diffusion tensor function - again an incomplete design conversion.
  • Fix for the setting of the diffusion tensor parameter errors in the model-free specific analysis code.
  • Fix for the setup of the model-free Monte Carlo simulations for the new diffusion tensor design.
  • Another fix for the diffusion_tensor.init user function - it was not completely converted.
  • Fix for the fixing of parameters in the model-free analyses. The diffusion tensor set_fixed() method is now used.
  • Fix for the XML output of the diffusion tensor - only the modifiable parameters are output. This was the previous behaviour and is needed for the test suite to pass.
  • Converted the palmer.extract user function to use the new diffusion tensor design.
  • The diffusion tensor bmrb_read() function now uses the set_fixed() method instead of fixed().
  • The fix user function now uses the diffusion tensor set_fixed() method.
  • Renamed the diffusion tensor fixed() method to set_fixed() to avoid clashing with the 'fixed' object.
  • Fix for the model-free specific analysis duplicate_data() method for the new design. The diffusion tensor __mod_attr__ object is now called _mod_attr.
  • Fix for the diffusion tensor to_xml() method for the new design. For some reason the methods of the Element class are no longer blacklisted.
  • Converted the diffusion tensor data structure from_xml() method to the new tensor design.
  • Fix for the Diffusion_tensor.test_copy system test - the simulation parameters are now read-only. Instead, the diffusion tensor set() method needs to be called.
  • The setting of list values for the DiffTensorSimList object now works correctly. The private _set() method now works correctly by calling the base class method, and the normal setting of diffusion tensor simulation values produces a RelaxError.
  • Fix for the diffusion tensor __deepcopy__() replacement method for the new design.
  • The model-free specific analysis _disassemble_param_vector() method now uses the new diffusion tensor design.
  • Modified the setUp() method for the diffusion tensor system tests to use the new design.
  • Redesigned how diffusion tensor simulation structures are handled. The design is now much cleaner and works with all Python versions.
  • Removed all the unused imports from specific_fns.model_free.main.
  • A number of private diffusion tensor objects and methods have switched to the single leading '_' format.
  • Improvements to the diffusion tensor set() method. The parameters, errors and simulations are now properly differentiated and stored.
  • Converted the old diffusion tensor __setattr__() method into the set() method. This is the only way in which diffusion tensor parameters, errors and simulations can be set.
  • Renamed the diffusion tensor data structure type() method to set_type(). This is because the type is stored as the 'type' object, clashing with the method name.
  • Created the diffusion tensor data structure type() method for setting the tensor type. This is to remove the "cdp.diff_tensor.type = 'x'" code from the core of relax, as the structure is now read only.
  • The new diffusion tensor fixed() method has been created to allow the fixed flag to be changed.
  • Fix for the initialisation of the diffusion tensor data structure, now that it is read-only.
  • The diffusion tensor data structure has been completely converted into a read-only structure. The __setattr__() method will now always raise a RelaxError, and the diffusion tensor simulation data structure objects' __setitem__() method will raise the same error (see the read-only structure sketch after this list).
  • Updated the relax version numbers and 'trunk' usage in the relax user manual. For example, the information about checking out the main development line was still talking about 1.3 rather than the trunk.
  • Python 3 fix for the setting of diffusion and alignment tensor simulation values. The previous code somehow worked in Python 2 but was not formally correct and broke in Python 3.
  • Python 3 fix for the model-free results file reading tests. The ordering of dictionaries is different in Python 3, so now these are properly converted from strings to dictionaries before comparison. This was not happening because of the XML changes from Python 2.7.3 onwards.
  • The relaxation curve-fitting system tests are now skipped if the module is missing or broken. This improves the printouts from the test suite and shows a summary of skipped tests rather than a pile of traceback messages and errors.
  • The message about skipping the GUI tests due to wxPython being missing is now more specific. This was being shown for all runs of the test suite when it only needs to appear if GUI tests have been run.
  • Added a Python 3 version of the truncated OMP model-free results file. This was created with trunk.
  • Removed the Python 3 byte array hack which should have been removed earlier.
  • The OMP model-free results file generation script now outputs for any relax version.
  • Python 3 fix for the Mf.test_latex_table system test. The latex_mf_table.py model-free system test script docstring contains backslashes, so the raw string format r"""Text""" is now used.
  • Python 3 support for Modelfree4 and DASHA. The subprocess.Popen class works with byte arrays rather than strings in Python 3+. The Python objects are now interconverted when the Python 3 encode() and decode() methods are detected (a sketch of this interconversion is given after this list).
  • Removed the pickle format information and arguments from the state user function definitions.
  • Eliminated the State.test_state_pickle() system test as pickled states are no longer supported.
  • Removed the ability to save and restore states using the pickle module. A pickled state is of no use to relax anymore. Its removal is needed for Python 3 support. So now everything defaults to the XML formatted output.
  • Python 3 fix - removed the use of the string module from generic_fns.spectrum.
  • Python 3 fix for the relax_io.open_write_file() function. This now matches the behaviour of open_read_file() in that there are three different ways of opening bz2 and gz files for writing, depending on the Python version (one for Python 2, one for Python 3.0 to 3.2, and one for Python 3.3+). All byte streams have been eliminated as open_write_file() is for creating text files.
  • Python 3 fix for the Noe.test_noe_analysis() system test for the grace.write precision changes.
  • For consistency between Python 2 and 3, the grace.write user function outputs to 15 decimal places. This increased precision will only be of use in the relax test suite.
  • Python 3 fix for the Pipes.test_pipe_bundle() system test. The order of bundle names returned by generic_fns.pipes.bundle_names() is not guaranteed in Python 3.
  • The C module compilation testing script now accepts the Python version as a first argument.
  • The relax_io.open_read_file() now supports all Python versions over 2.4. This required some really nasty hacks for Python 3.0, 3.1 and 3.2 with the Bzip2Fixed and GzipFixed classes overriding the incomplete and buggy bz2.BZ2File and gzip.GzipFile modules, and being wrapped around io.TextIOWrapper().
  • Added the IO module to the relax information printout and dependency checks.
  • The manual C module compilation script is now executable.
  • Renamed the 'scripts' directory to 'devel_scripts' so that users are less likely to ask about the scripts.
  • Finished off the C module compilation testing script.
  • Added a script for testing out the C module compilation on multiple Python targets.
  • The relax_fit specific analysis module now supports both Python 2 and 3.
  • The relaxation curve-fitting C module now supports compilation on both Python 2 and 3.
  • Created the simple Sequence.test_sequence_copy() system test to catch bug #20213.
  • The Mf.test_bug_20213_asn_sidechain() system test now uses a temporary directory for output.
  • Added the Mf.test_bug_20213_asn_sidechain() system test to catch bug #20213. The data and script comes from the files 'sh3-47.2.zip' and 'run.py' attached to the bug report https://gna.org/bugs/?20213. The PDB now only contains Asp47, the optimisation parameters have been made almost insignificant, and all models but 'tm0' have been removed from the analysis.
  • The Python 3 dictionary values() method no longer returns a list, so a list() call is needed.
  • Python 3 bug fix for the geometric structure module - another integer division to float problem.
  • The Mf.test_write_results system test can now select the correct file to compare against in Python 3. The algorithm for determining if the 'final_results_trunc_1.3_v2' or 'final_results_trunc_1.3_pre_py2.7.3_v2' file should be used could not handle Python 3.
  • Python 3 fix for the format detection of results and save files.
  • Python 3 import fixes for the generic_fns.structure package using relative paths.
  • Python 3 fix - removed the use of the string.lower() function in the OpenDX mapping code.
  • Python 3 fix for the frame order system tests. As float to string conversions behave differently, the %.1f formatting is used to force only a single decimal place float.
  • Python 3 fix for the frame order system tests - float to string conversions behave differently. Now the explicit %.1f formatting is used to force only a single decimal place float.
  • Python 3 integer division to float fix for the frame order analysis.
  • Python 3 bug fix for the frame order analysis - another int division problem.
  • Python 3 fixes - eliminated all usage of the dictionary has_key() calls as they are no longer present.
  • Python 2 and 3 support in the generic_fns.relax_data module using 2to3. One print call was fixed after running 2to3.
  • Python 3 bug fix for the Structure.test_read_pdb_mol_2_model_scientific system tests. This is again an integer division problem returning a float.
  • Python 3 fix for the test_write_protein_sequence() unit test. This is again a string verses byte verses unicode problem.
  • Python 3 fix for the user function docstring creation in the prompt UI mode. Again this is the problem of a division now returning a float rather than an int.
  • Python 3 bug fix for the N-state model target function setup. The num_tensors variable needs to be an integer, but the Python 3 division will create a float type.
  • Python 3 fix for the results.read user function matching that of state.load.
  • Python 3 bug fix for the relax_io.read_spin_data() function. The built-in max() function cannot handle the value of None, therefore the filter() function is used to remove all instances of None from the list (a short illustration is given after this list).
  • Python 3 bug fix for the state.load user function. The header line of pickled states (rather than the standard XML states) is of the b byte format. This is now converted to a string, and the search expression is comparing it to the raw string r"<\?xml".
  • Better support for both Python 2 and 3 in the relax data store. The 2to3 script was used on all of the files in the data package.
  • Python 3 preparation - the relax data store (the data package) now supports both Python 2 and 3.
  • Python 3 fix - the relax_errors.AllRelaxErrors object is now a proper tuple. Due to bad coding, it was previously a nested tuple. This nested tuple worked in Python 2, but is fatal for Python 3.
  • Python 3 fixes - the character '\' is now properly escaped as '\\' in the stereochemistry auto-analysis.
  • Fix for the test suite summary for Python 3. The test suite now runs, but fails miserably, under Python 3.
  • Fix for the running of the test suite under Python 3. The zip() function used in the loadTestsFromTestCase() function is now an iterator, so it needs to be passed through the list() function to generate a list.
  • Fix for the test_parse_token_multi_element_name() unit test, as parse_token() no longer sorts.
  • Python 3 fix for the generic_fns.mol_res_spin.parse_token() function. Mixed lists for int and string can no longer be sorted. This sort call is not needed anyway.
  • Automatically converted the generic_fns.mol_res_spin module to support both Python 2 and 3.
  • For running relax with Python 2, the __builtin__.range() function has been replaced with xrange (see the sketch after this list). This causes large speed ups (speed that was lost with the earlier xrange() to range() conversions) and memory decreases. For example, on one system the system test time decreased from 513.029s to 487.586s.
  • The compat module now has the py_version variable specifying if this is Python 2 or 3.
  • Import fix for the OpenDX mapping package, recently broken with the relative import for Python 3 change.
  • More usage of the is_unicode() function in the generic_fns.mol_res_spin module.
  • Created the check_types.is_unicode() function for Python 2+3 compatibility. This is used in the generic_fns.mol_res_spin module.
  • Another raise() function call to statement change for 2to3 preparations.
  • Converted some raise() function calls to raise statements in preparation for the 2to3 conversion.
  • Converted the ScientificPython PDB reader to support both Python 2 and 3. The __repr__() method was manually modified due to the 'list' variable clashing with the 'list' type.
  • Created a new module for simultaneous Python 2 and 3 support in relax called 'compat'.
  • Python 3 fixes - the list() function is now used in combination with range() to generate the ordered list. range() in Python 3 is an iterator object (just as xrange was), so now the list() function has to be called.
  • Python 3 preparations - mass conversion of all xrange() calls to range().
  • Created the special check_types.is_filetype() function for checking for files in all Python versions.
  • Python 3 - eliminated an unneeded xrange call.
  • Python 3 - eliminated the use of the map() function, as this behaves differently in Python 3.
  • Python 3 - removed the use of the string.lower() function as it is missing in Python 3.
  • Python 3 fix for the relax information printout. The try blocks seem to now operate slightly differently in Python 3.
  • Python 2 fixes - the Python 3 fixes for the ScientificPython module imports broke Python 2.
  • Python 3 - the cStringIO.StringIO import switches to io.StringIO if missing.
  • Python 3 - relative module paths are now used for the test suite runner.
  • Python 3 fixes for the version module for catching empty lists.
  • Python 3 fix - ensure an integer is actually an integer (division now converts ints to floats).
  • Python 3 - fixes for the renaming of the Queue package.
  • Python 3 - converted the last of the except error catching statements to be Python 2.4+ compatible.
  • Python 3 - removal of the use of the string.atoi and string.atof functions. These have been deprecated since Python 2.0! They have been replaced by the int and float functions.
  • Python 3 - a number of fixes for running the ScientificPython modules in relax on Python 2 and 3. This includes relative imports, converting raise statements to function calls, removal of the use of many string module functions which do not exist in Python 3, etc.
  • Python 3 - modified some except statements to be Python 2.4+ compatible in a ScientificPython module.
  • Python 3 - converted some print statements to function calls in the ScientificPython modules.
  • Python 3 - fix for an os.chmod() call by using the stat module constants rather than a hard-coded octal number, as old-style octal literals such as 0755 are no longer valid syntax in Python 3 (see the sketch after this list).
  • Python 3 - a pile of relative path fixes for many relax modules.
  • Python 3 - removed the use of the types module from generic_fns.sequence. The relax arg_check module is now being used instead.
  • Python 3 preparations - removed all of the string module functions which no longer exist in Python 3. These functions are part of the strings themselves now.
  • Improvements for the relax test suite synopsis for when the wxPython module is missing or broken. This is simply a printout improvement.
  • Python 3 preparations - removal of some unneeded xrange() calls.
  • Python 3 preparations - the data package now really does use the absolute path for its module imports.
  • Python 3 preparations - the data package now uses absolute imports for all its modules.
  • Python 3 preparations - eliminated the use of the types.ListType object.
  • Python 3 preparations - absolute module path fixes.
  • Python 3 preparations - support for both Python 2 __builtin__ and Python 3 builtins modules.
  • Python 3 preparations - absolute module path fix.
  • Python 3 preparations - more exception handling updates for all Python 2.4+ versions.
  • Python 3 preparation - all raising of RelaxErrors is now Python 2.4+ compatible.
  • Python 3 preparations - error handling is now Python 2 and 3 compatible in the relax_io module.
  • Python 3 preparations - converted the relax prompt/script interpreter to be Python 2 and 3 compatible.
  • Python 3 preparations - removed the use of the types.ClassType object.
  • Python 3 preparations - compatibility for both the Python 2 cPickle and Python 3 pickle modules.
  • Python 3 preparations - all usage of string.split() and string.strip() has been eliminated.
  • Removed the completely unused gui.components.conversion module.
  • Removed an unused import (which was breaking relax in Python 3).
  • Python 3 preparations - all os.popen3() instances in relax have been replaced by the subprocess module.
  • Python 3 preparations - eliminated the use of the os.popen3 function from the info module.
  • More exception handling changes to be Python 2.4+ compatible.
  • Python 3 preparations - exception handling fix to be Python 2.4+ compatible.
  • Python 3 conversions using 2to3.
  • Updated the Python 2 to 3 checklist document for the shifting of the 'relax' file to 'relax.py'.
  • Python 3 preparations - removed all usage of the xrange() in the generic_fns package as none are needed.
  • Python 3 preparation - eliminated the unneeded use of xrange().
  • Python 3 preparation - the use of an absolute module path for import.
  • Python 3 preparations - the auto_analyses package is now fully Python 2 and 3 compatible.
  • Python 3 preparation - the auto_analyses package now uses absolute paths for the module imports.
  • Python 3 preparations - the use of the queue module in the status module is now compatible with 2 and 3.
  • Python 3 preparations - the GUI tests are now fully Python 2 and 3 compatible.
  • Python 3 preparations - the queue modules for both Python versions are now supported in the GUI tests.
  • Python 3 preparations - the test_suite.gui_tests package now uses absolute module path imports.
  • Python 3 preparations - the unit tests are now fully Python 2 and 3 compatible.
  • Python 3 preparation - all of the _generic_fns unit tests now use absolute module imports.
  • Python 3 preparations - all the _prompt unit tests now use absolute module imports.
  • Python 3 preparation - removed all xrange() calls from the unit tests, these are not needed.
  • Last Python 3 compatibility update for the system tests - they are now both Python 2 and 3 compatible.
  • Python 3 preparation - the test_suite.system_tests package now uses the absolute module path for imports.
  • Python 3 preparation - changed the import of SystemTestCase to use the absolute module path.
  • Removed all of the xrange() calls from the system tests as these are not necessary. This is in preparation for Python 3.
  • Some changes in preparation for Python 3.
  • Removed the 'force flag' text from the RelaxWarning messages output by the bruker.read user function. The force flag arguments of the generic_fns.mol_res_spin.name_spin() and generic_fns.mol_res_spin.set_spin_isotope() functions can now be set to None to suppress the text.
  • Fixes for the checks in the Mf.test_mf_auto_analysis() GUI test for the recent test suite data changes.
  • The CSA setting in the model-free auto GUI analysis now defaults to the '@N*' spin ID. Previously no spin ID was being used, so that the protons were also having their CSA values set to that of the nitrogens. Now the execution checking code skips the proton CSA check.
  • Added star versions of the standard spin IDs to the spin ID GUI element (e.g. '@N*', '@H*').
  • Fix for the comment on the 'Export' button in the BMRB export window.
  • Lots of editing of the model-free GUI section of the user manual.
  • Fix for the Relax_data.test_delete system test for the changes to the relax_data.read user function.
  • Fix for the Relax_data.test_read unit tests for the relax_data.read user function changes.
  • Fix to the DASHA system test needed for the changes to the relax_data.read user function.
  • Fix for the N_state_model.test_monte_carlo_sims due to the changed sphere.pdb test suite file.
  • Relaxation data is no longer loaded by relax_data.read if the values and errors are both None.
  • Modified the Mf.test_dauvergne_protocol system test to catch bug #20197. The sphere test data NE1 and HE1 data is now being used in this system test, triggering the bug.
  • Small change to the sphere model model-free test suite data. The tryptophan indole data is now merged into the last residue (a glycine) to catch bug #20197.
  • The overfit_deselect() printouts for all specific analyses are now regularised and match the model-free printouts.
  • All overfit_deselect() methods now accept and use the verbose argument.
  • Printouts for the over-fitting deselection of spins are suppressed for the back-calculation of relaxation data. This affects the model-free Monte Carlo simulations, improving the output.
  • More improvements to the model-free over-fitting deselection printouts.
  • Improved the model-free overfitting deselection printouts prior to optimisation. Only a single message per spin is now given when the spin is deselected, minimising the amount of output.
  • Added a tryptophan NE1 data set to the sphere model-free model test data. This is in preparation to catch bug #20197. The scripts have also been updated for the newer relax designs.
  • Added the data_check Boolean argument to all of the specific analysis overfit_deselect() methods. This allows the unit tests to pass.
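The read-only diffusion tensor design described above (direct attribute assignment blocked and all modifications routed through set()) follows a standard Python pattern. The sketch below is illustrative only - relax raises a RelaxError and organises the parameter, error and Monte Carlo simulation storage differently - but it shows the core mechanism.

    class ReadOnlyTensor(object):
        """Tensor-like structure whose values can only be changed via set()."""

        def __init__(self):
            # Bypass the blocked __setattr__() for internal initialisation.
            object.__setattr__(self, '_params', {})

        def __setattr__(self, name, value):
            # relax raises a RelaxError here; AttributeError keeps the sketch self-contained.
            raise AttributeError("The %s object is read-only - use the set() method." % name)

        def __getattr__(self, name):
            try:
                return self._params[name]
            except KeyError:
                raise AttributeError(name)

        def set(self, param=None, value=None, category='val'):
            """The single entry point for storing parameters, errors or simulation values."""
            if category == 'val':
                key = param
            else:
                key = '%s_%s' % (param, category)
            self._params[key] = value

    # Example usage.
    tensor = ReadOnlyTensor()
    tensor.set(param='tm', value=8.5e-9)
    print(tensor.tm)        # 8.5e-09
    # tensor.tm = 1e-9      # would raise, as the structure is read-only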
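The Python 3 byte handling for Modelfree4 and DASHA mentioned above boils down to encoding the command text before writing it to the subprocess pipe and decoding the captured output afterwards. The sketch below is a generic illustration and not the relax code: relax detects the Python 3 encode()/decode() methods, whereas an isinstance() test is used here, and the command line is simply whatever external program is being driven.

    from subprocess import PIPE, Popen

    def run_external(command_line, input_text):
        """Feed text to an external program and return its text output."""
        pipe = Popen(command_line, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)

        # Python 3: the pipes carry bytes, so encode the text commands first.
        if not isinstance(input_text, bytes):
            input_text = input_text.encode()
        stdout, stderr = pipe.communicate(input_text)

        # Python 3: convert the byte output back into an ordinary string.
        if isinstance(stdout, bytes) and not isinstance(stdout, str):
            stdout = stdout.decode()
        return stdout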
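The relax_io.read_spin_data() fix above is a small but reusable pattern: max() cannot compare against None under Python 3, so the None entries are removed first. The relax code uses filter(); the list comprehension below achieves the same thing.

    # Column indices, where None marks an unused column.
    columns = [2, None, 5, None, 3]

    # Strip the None values before finding the highest column index.
    max_col = max([col for col in columns if col is not None])    # -> 5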
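The __builtin__.range() replacement described above can be sketched as follows. This is a simplified picture of the idea rather than the relax compat module itself: under Python 2 the built-in range() is rebound to the lazy xrange(), so the rest of the code base can simply call range() everywhere, while under Python 3 nothing needs to be done as range() is already an iterator-style object.

    import sys

    # The major Python version, as exposed by the compat module's py_version variable.
    py_version = sys.version_info[0]

    if py_version == 2:
        # Rebind the built-in range() to the memory-efficient xrange().
        import __builtin__
        __builtin__.range = xrange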
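The os.chmod() fix above replaces a hard-coded octal literal with the stat module constants. A minimal sketch giving the owner read/write/execute and everyone else read/execute permissions (the exact mode used by relax may differ):

    import os
    import stat

    def make_executable(path):
        """Set rwxr-xr-x permissions on the given file."""
        os.chmod(path, stat.S_IRWXU
                       | stat.S_IRGRP | stat.S_IXGRP
                       | stat.S_IROTH | stat.S_IXOTH)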


relax 2.1.1

  • Modified the model-free optimisation final printout to be more multi-processor friendly. The message saying that the optimised χ2 is an improvement or not now includes the spin ID string if present. This is more informative for the multi-processor mpi4py printouts.
  • Added the use of the program 'nice' to the model-free GUI tutorial in the user manual.
  • Removed the out of date and useless README file for the HTML version of the user manual.
  • Added a BMRB section to the end of the model-free chapter of the user manual.
  • Massive expansion of the model-free chapter of the user manual including script and GUI tutorials. The model-free chapter now has step-by-step tutorials for both the prompt/script mode and GUI mode for the new automated model-free protocol (the d'Auvergne protocol)[d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b]. This includes a large set of screenshots for the GUI mode.
  • Created the User_functions.test_value_set GUI test demonstrating the failure of the value.set user function.
  • Modified the dauvergne_protocol sample script[d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b] to handle tryptophan indole NE1 data.
  • The graphics.fetch_icon() function argument 'format' can now be set to None. This will return the file path without the extension.
  • Improvements to the duplicate user manual title finding script.
  • Created a simple shell script to find duplicate titles in the relax user manual. This is important for the HTML version of the manual as duplicated titles cause HTML pages to be overwritten. For example, all chapters, sections and subsections titled "Introduction" will load the introduction.html file, which will contain the text of the last section with that title!
  • Additions to the scripting section of the relaxation curve-fitting chapter of the user manual.
  • A small edit to the intro chapter for the multi-processor operation and logging.
  • Added some labelling to the infrastructure chapter of the user manual for referencing.
  • A number of updates and edits to the intro chapter of the user manual. The model-free GUI screenshot has been shifted to the intro chapter in preparation for a full tutorial with screenshots in the model-free chapter.
  • Updated the data model chapter of the user manual to cover the handling of protons. This change includes the modification of the PDB reading screenshot to demonstrate the reading of a specific model and the naming of the molecule.
  • All of the GUI strings and text are now formatted with a small sans serif font in the user manual. This is because in the GUI a sans serif font is almost always used by default.
  • Modified the User_functions.test_structure_pdb_read() GUI test to catch another bug. This is a bug recently introduced with the fixes to the other sequence editor GUI window problems.
  • Created the User_functions.test_structure_pdb_read GUI test for checking the sequence editor window. This new user function GUI testing class is to be used for testing out the special GUI elements not invoked within the unit testing. The test_structure_pdb_read() test specifically shows a number of failures of the sequence editor window.
  • Modified the operation of the sequence GUI element to have access to the sequence editor window. This is to allow this GUI element to be blasted within the test suite.
  • Improvements to the descriptions of the structure.read_xyz user function arguments.
  • Improvements to the descriptions of the structure.read_pdb user function arguments.
  • Added @HE1 to the spin ID list of the structure.load_spins user function. This is only seen in the GUI.
  • Created the new generic_fns.result_files module for standardising the handling of results files. This fixes the bug where results files are repetitively added to the list. All of the code touching cdp.result_files now uses this module instead (a sketch of the duplicate handling is given after this list).
  • Updated the scripting section of the intro chapter of the user manual for non-technical users.
  • Expanded the spin ID list for the structure.load_spins user function. This now includes the spins "@N", "@NE1", "@C", "@H", "@O", "@P", ":A@C2", ":A@C8", ":G@N1", ":G@C8", ":C@C5", ":C@C5", ":U@N3", ":U@C5", ":U@C6".
  • Changed the RelaxError for missing relaxation times in the relaxation curve-fitting analyses.
  • Modified the test_bug_20152_read_dc_file() GUI test to catch the RelaxError. This error is because of the old PDC format.
  • Created the test_bug_20152_read_dc_file() GUI test for catching bug #20152. This includes truncated data taken from the bug report (with data for only the first 3 residues).
  • Set up the Bruker Dynamics Center system tests as GUI tests. This is in preparation for catching bug #20152.
  • Re-added Dominique Marion's solvent suppression to the NMRPipe script in the curve-fitting chapter.
  • A few small edits of the relaxation curve-fitting chapter. This is to reinforce the exact time of the relaxation time period.
  • Added some text to explain why only J(0) is discussed whereas the script also calculates FR2 and Fη. This was suggested by Edward d'Auvergne in a post at https://mail.gna.org/public/relax-devel/2012-09/msg00044.html.
  • Big clean up of the Bibtex bibliography file for the relax user manual.
  • Small edits of the consistency testing figure caption in the relax user manual.
  • Editing and a number of fixes/cleanups for the consistency testing chapter of the user manual.
  • Editing of the "Values, gradients, and Hessians" chapter of the user manual to make it fit better. The context of this chapter has been specified by changing the title to "Optimisation of relaxation data -- values, gradients, and Hessians" and the intro text has been updated. As this chapter is no longer straight after the model-free chapter, this is needed.
  • Made a small correction to a reference such that a superscript is correctly displayed.
  • Added the bounding box and a centerline command to the code for the figure for consistency testing. This follows two remarks by Edward d'Auvergne at https://mail.gna.org/public/relax-devel/2012-09/msg00030.html and https://mail.gna.org/public/relax-devel/2012-09/msg00032.html.
  • Added more text to describe the consistency testing approach. Also includes a very basic point by point protocol for consistency testing. This was proposed by Edward d'Auvergne at https://mail.gna.org/public/relax-devel/2012-09/msg00028.html. This also follows a discussion started by Edward d'Auvergne at https://mail.gna.org/public/relax-devel/2012-09/msg00019.html.
  • Added some text to describe the consistency testing example figure. This follows a discussion started by Edward d'Auvergne at https://mail.gna.org/public/relax-devel/2012-09/msg00019.html.
  • Added a modified version of Figure 1 from Morin and Gagne (JBNMR, 2009; http://dx.doi.org/10.1007/s10858-009-9381-4). The file formats are .agr (xmgrace), eps (gzipped), and png. This follows a discussion started by Edward d'Auvergne at https://mail.gna.org/public/relax-devel/2012-09/msg00019.html.
  • Added a directory for placing consistency testing graphics. This follows a discussion started by Edward d'Auvergne at https://mail.gna.org/public/relax-devel/2012-09/msg00019.html.
  • Corrected the bibliography entries which were still in plain text and not LaTeX \cite calls. Also renamed the MorinGagne09 entry to MorinGagne09a as there is now also a MorinGagne09b. This was proposed by Edward d'Auvergne in a post at https://mail.gna.org/public/relax-devel/2012-09/msg00025.html.
  • Added the DOI to reference Morin11 and fixed indentation (10.1016/j.pnmrs.2010.12.003). This follows a comment by Edward d'Auvergne at https://mail.gna.org/public/relax-devel/2012-09/msg00022.html.
  • Deletion of the relax version LaTeX file - this is automatically created anyway.
  • Added text to detail the usage of the consistency testing script. This text was modified from the corresponding text for jw_mapping. This follows a discussion started by Edward d'Auvergne at https://mail.gna.org/public/relax-devel/2012-09/msg00019.html.
  • Added some text and a reference to the consistency testing chapter. This follows a discussion started by Edward d'Auvergne at https://mail.gna.org/public/relax-devel/2012-09/msg00019.html.
  • Editing of the nmrPipe script in the Rx curve-fitting chapter of the manual.
  • Some editing of the NOE chapter of the relax user manual.
  • The old R1 and R2 analysis screenshots have been shifted to the intro chapter.
  • Editing of the relax data model chapter of the user manual.
  • Large expansion and lots of editing of the relaxation curve-fitting chapter of the user manual. The GUI section has been added, which includes step-by-step instructions on how to use relax, illustrated with screenshots at each step. There has been general editing of the whole of the chapter as well.
  • Added a tonne of GUI screenshots of an R1 analysis. These will be used in the relaxation curve-fitting chapter of the user manual.
  • Added some Grace plots from an NOE analysis for use in the user manual.
  • Small edits of the relax data model chapter of the user manual.
  • Editing of the relaxation curve-fitting and NOE chapters of the user manual. This is to synchronise the format of the two chapters, and includes the swapping of text between them.
  • Added trp indole NH loading into the relaxation curve-fitting sample script.
  • Large edits of the consistency testing chapter of the user manual.
  • Activated the consistency testing chapter of the user manual.
  • Added a LaTeX label to the J(ω) mapping chapter.
  • Added the other consistency testing references to the citation chapter of the manual.
  • Added the Fushman et al., 1998 reference.
  • Fix for the Farrow et al., 1995 DOI number.
  • Changed the order of the Rx curve-fitting and NOE chapters in the relax manual. This is because the NOE chapter references passages from the Rx curve-fitting chapter, so it's more logical to have the Rx curve-fitting chapter first.
  • Clean up of a paragraph of the data model chapter of the user manual.
  • Improved the consistency in the user manual by using the new LaTeX commands. These changes are throughout the manual and affect all the text of user functions, menu items, prompt examples, GUI elements, files, directories, etc.
  • Removed some '()' text from the end of the user functions in the user function documentation.
  • Added some more LaTeX functions for formatting consistency.
  • Defined a new set of LaTeX commands for prompt/script/GUI strings and elements for the user manual. These will be used to regularize the text throughout the manual, as this is currently quite mixed up.
  • More rearrangements of data model and NOE chapters of the relax manual. The GUI spin deselection part of the NOE chapter has been shifted into the data model chapter. And the GUI loading of spins from a sequence file section has been completed.
  • Added screenshots of the spin viewer spin loading wizard sequence.read page.
  • Redesign of the data model chapter of the user manual. This includes the moving of all of the spin viewer window text and screenshots from the NOE chapter.
  • Shifted the spin viewer screenshots into their own directory.
  • Changed the "View->Spin view" menu item to "View->Spin viewer".
  • Created a directory for screenshots of the spin viewer window operation.
  • The NOE auto-analysis GUI test now checks the support for Trp indole N data as well.
  • The spectrum.read_intensities user function now prints out a list of the intensities read in. This is for better user feedback as to what the user function has actually done.
  • Created the GUI wizard _apply() method for executing the current page's _apply() method. This is for the GUI tests to simulate a click on the 'Apply' button.
  • Removed a debugging print out.
  • Modified the NOE system test to catch bug #20120.
  • Lots of editing of the NOE chapter of the user manual.
  • Significant update of the NOE chapter of the user manual. The sample script used in this chapter was incredibly out of date.
  • Modified the NOE system test to test the usage of Trp indole 15N data. This is to catch bug #20119.
  • Added some Trp peak data (backbone and indole N) to the Sparky steady-state NOE peak lists. This is in preparation for the modification of the NOE system test to catch bug #20119.
  • Modified the NOE sample script to include Trp indole NH data.
  • Added a step-by-step tutorial for the GUI NOE auto-analysis to the user manual. This includes 22 screenshots of all the steps.
  • Added a section label.
  • Added some Sparky info to the Rx curve-fitting chapter of the user manual.
  • Allowed the raggedbottom LaTeX setting as this is better for the screenshot layout in the user manual.
  • Added the nth package for the user manual LaTeX compilation.
  • The NOE chapter now points to the recommendations in the Rx fitting chapter.
  • Added a new section called 'From spectra to peak intensities' to the Rx fitting chapter of the manual. This adds a number of recommendations for high quality relaxation rates.
  • Added the Viles et al., 2001 reference.
  • Small description edit for the relax_data.temp_control user function.
  • Added a LaTeX label to the NOE chapter of the user manual.
  • Added a paragraph to the model-free chapter of the user manual explaining the J(ω) equation forms.
  • Added a label to the data model chapter of the user manual.
  • Created an initial rough version of the RSDM chapter of the user manual.
  • Better figure layout in the NOE chapter of the user manual.
  • The relax data model chapter of the user manual now uses the higher quality graphics.
  • Some more high quality graphics.
  • Added more high resolution graphics for use in the relax user manual.
  • Expanded the size of the specific analysis graphics - mainly for use in the relax user manual.
  • Added the specific analysis graphics to the start of each chapter of the relax user manual.
  • Small edit of the 'Citations' chapter of the relax user manual.
  • Added EPS versions of the specific analysis graphics for use in the user manual.
  • Added the Fushman et al., 1999 reference for consistency testing to the intro chapter of the user manual.
  • More chapter cross referencing in the relax user manual.
  • Added the Horne 2007 paper to the 'Citations' chapter of the user manual. Whitespace has also been cleaned up, and a chapter label added.
  • Added the Horne 2007 paper to the 'Supported NMR theories' subsection of the user manual intro.
  • Shortened the Literature subsection of the intro chapter to point to the citations chapter. This part of the user manual is now redundant.
  • Small edits of the relax user manual.
  • Edits to the abbreviations chapter of the relax user manual.
  • Added another abbreviation.
  • Expansion of the abbreviations chapter of the relax user manual.
  • Added a tonne of DOI numbers to the relax user manual bibliography. This will simplify accessing these references for the user.
  • Added and fixed DOI numbers for many bibliographic entries.
  • The LaTeX bibliography style for PhD theses now includes the DOI hyperlink. This is for the user manual.
  • Slight modification of the DOI hyperlink formatting bibliography style for the user manual.
  • Modified the relax LaTeX bibliography style file relax.bst to convert DOI numbers to hyperlinks. This is to add links to the references within the relax user manual.
  • Created the new 'Citations' chapter of the relax user manual. This is to clearly outline to the user the citations required for the various components of relax.
  • Added the Fushman 1999 reference and a few formatting fixes in other references.
  • Improvements to the 'Supported NMR theories' section of the user manual introduction. This includes the addition of the Morin and Gagne 2009 reference.
  • Added the Morin and Gagne 2009 reference for the consistency testing.
  • Added some more abbreviations to the relax user manual.
  • Created a new chapter for the relax user manual titled 'The relax data model'.
  • Fix for the pipe editor window screenshot width in the relax manual.
  • Converted some more wizard graphics to the EPS format for the user manual.
  • Updates and small expansion of the intro chapter of the relax user manual.
  • The user manual now specifies the repository revision if a non-tagged version is built. This enables easier tracking and editing of the manual.
  • Updates to the generic_fns.mol_res_spin.id_string_doc documentation structure.
  • Updated the screenshot of the pipe editor window.
  • Created EPS versions of a number of wizard graphics for use in the user manual.
  • Removed some now useless whitespace from the top of each user function subsection of the manual.
  • Redesigned the formatting of the user function chapter of the relax manual. The fetch_docstrings.py now forces each user function to start in a new column. This increases the size of the manual, but makes the reading of the user function documentation much easier. The user function class and function icons (128x128 format) are now placed between the top bar and the subsection title and are left and right justified. This prettification simply allows the user functions to be more quickly identified.
  • Large expansion of the relax icon set. All 128x128 versions of the icons used by the user functions have been added as both PNG and gzipped EPS files. A few gzipped SVG and non-sized icons have been added as well.
  • Large expansion of the Oxygen icons within relax. All 128x128 versions of the icons used by the user functions have been added as both PNG and gzipped EPS files. A number of gzipped SVG icons has been added as well.
  • Modified the graphics.fetch_icon function to return different file formats. This will be used for the relax manual where eps.gz files are required.
  • Improvements to the final section of the relaxation curve-fitting chapter. The Xmgrace screenshot and page references for the user functions have been added.
  • Added screenshots of Xmgrace displaying relaxation curves.
  • The fetch_docstrings.py now adds LaTeX labels to each user function section. This has the form of 'uf: ' followed by the user function name, and is for referencing purposes within the main text.
  • Rewrote the relaxation curve-fitting chapter of the relax manual. This chapter was quite out of date and was of no use to modern relax versions.
  • Redesign of how the GPL license is presented to the user. The old prompt.gpl module with version 2 of the license has been deleted. Now the text from the docs/COPYING file is passed through pydoc.pager for the './relax --licence' option and the prompt mode GPL object, and is simply printed to STDOUT for the GUI help system (a rough sketch of the pager call is given after this list).
  • Import clean ups for the N-state model specific module.
  • Added the 'unit' argument to the dipole_pair.read_dist and dipole_pair.set_dist user functions. This is to allow distances in Angstroms to be read into relax and converted to meters.
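The pager-based licence display described above can be pictured with the following minimal sketch. It assumes a docs/COPYING text file and a hypothetical show_licence() helper; it is not the actual relax implementation.

    # Minimal sketch of paging a licence text, as in the entry above.
    # The helper name and default path are illustrative assumptions.
    import pydoc
    import sys

    def show_licence(path="docs/COPYING", gui=False):
        """Read the licence text and either page it or print it to STDOUT."""
        with open(path) as file:
            text = file.read()
        if gui:
            # The GUI help system simply prints the text.
            sys.stdout.write(text)
        else:
            # Prompt mode and './relax --licence' pass the text through a pager.
            pydoc.pager(text)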


relax 2.1.0

  • Updated the journal reference for the published lactose conformational search scripts.
  • Shifted some of the sample scripts into the analysis specific sub-directories.
  • Increased the size of the test suite warning dialog for MS Windows.
  • Improvements to the skipped test printout from the test suite. Now all test categories (system, unit and GUI) are printed if a module/package is missing. This allows for better debugging.
  • Shifted all of the observer registration and unregistration to observer_setup() in the pipe editor.
  • Improved debugging print outs for the observer objects. The method name is now stored and included in all the observer 'debug>' printouts.
  • Removed the notebook tab deletion from the GUI tests tearDown() method. This should be performed when a relax reset happens, so it is not needed in the tearDown() method just before the reset() call.
  • Merger of the absolute RDC branch (absolute_rdc).
  • Added support for the absolute or signless RDCs to the N-state model. This simply propagates the absolute flags into the maths_fns.rdc module functions whereby the absolute RDC values and gradients can be returned.
  • The rdc.read user function backend is now storing the absolute value flag in interatom.absolute_rdc.
  • Converted the N-state model absolute_rdcs.py system test script to the interatomic data design.
  • Created an absolute value version of the synthetic CaM RDC file for the test suite.
  • The rdc.read user function backend now accepts the 'absolute' argument. This is used to signal that the RDCs are signless absolute values.
  • Created an initial system (and GUI) test for the absolute RDC concept.
  • Added the 'absolute' keyword arg to the rdc.read user function definition. This will be used to mark RDCs as being unsigned.
  • Merger of the interatomic data container branch (interatomic).
  • Added a new screenshot of the GUI model-free auto-analysis, as it is now quite different.
  • Added a wizard graphic (SVG form) for the 13C-1H dipole-dipole pair.
  • A number of the generic_fns.mol_res_spin functions now accept the pipe argument. These include name_spin(), set_spin_element(), and set_spin_isotope() and this allows the functions to operate on any data pipe.
  • The dipole_pair.define user function backend now can handle the pipe argument. This allows it to operate on an alternative data pipe.
  • The State.test_old_state_loading() GUI test now checks the loaded data to a small extent.
  • Better backwards compatibility of old relax results and state XML files for the interatomic design. The MoleculeContainer._back_compat_hook() method has been shifted into the Relax_data_store._back_compat_hook() method. This allows the spin containers with attached protons to be converted (with new spin containers for the attached protons added) after loading of the XML state.
  • A small speed up for the model-free duplicate_data() function. This is used in model selection.
  • The generic_fns.mol_res_spin.index_molecule() function now handles no molecule name given. If no name is given and only a single molecule is in the current data pipe, then the index of 0 will be returned.
  • The sequence.attach_protons user function now ignores spins with pre-existing attached protons.
  • Improvement for the generic_fns.interatomic.return_interatom_list() function. The spin ID matching is now through the id_match() function, allowing unique but different spin IDs to be used. This now matches the return_interatom() behaviour.
  • The relax_io.read_spin_data() function can now handle spin IDs in quotes.
  • The Scientific Python structural object's are_bonded() method now uses 2.0 Angstrom as the cutoff radius.
  • Added the are_bonded() and get_molecule() methods for the Scientific Python PDB reader. This is now needed for defining interatomic vectors in the interatomic data design.
  • Expanded the Interatomic.test_manipulation system test to demonstrate a few current failures.
  • Some changes to the Interatomic.test_manipulation system test to better test selection/deselection.
  • The interatomic test suite scripts are now GUI tests as well.
  • Created the new Interatomic system test class for testing out the interatomic data containers.
  • The rdc.read user function can now handle spin IDs in quotation marks.
  • Interatomic data containers can now be selected and deselected. The user functions select.interatom and deselect.interatom have been created mimicking the equivalent select.spin and deselect.spin functions. Each interatomic data container now has a select flag.
  • Modified the interatomic_loop() function so that spin IDs can be used to restrict the looping.
  • Modified InteratomContainer.id_match() to handle a single spin ID and to match to all unique IDs. This uses the spin container _spin_ids list private metadata structure.
  • Split the return_interatom() function into two. The new return_interatom() function is used for returning single interatomic data containers for perfect matches, whereas the return_interatom_list() function is used to return a list of containers matching a given spin. This simplifies the behaviour of the module.
  • The RelaxNucleusError and RelaxSpinTypeError can now have the spin ID supplied.
  • Bug fix for the dipole_pair.unit_vectors user function positional checking. The arg_check.is_float() function needs the raise_error flag turned off.
  • dipole_pair.unit_vectors now raises a RelaxNoInteratomError if no interatomic data is present.
  • The specific API base skip_function() method now returns False. This was previously raising a RelaxImplementError, but as Monte Carlo simulations now require this function, returning False by default means that all analyses are automatically supported.
  • Eliminated all of the bond length and heteronucleus type value.set unit tests. These are no longer specific analysis parameters.
  • Removed all of the unit tests of the deleted structure.vectors user function.
  • The model-free data_init() method now sets boolean parameters to the default of False. This excludes the selection flag which is set to True. The data_init() method no longer uses the data_names() API method but the self.PARAMS.loop() method for returning the parameter names.
  • Improvements for the reading of old 1.2 relax results files for the attached proton spin containers.
  • Improvements and fixes for the generic_fns.relax_data.pack_data() function. This affects all the relaxation data reading user functions.
  • The structure.get_pos user function now prints out all data and fails if nothing was extracted. This is to prevent the user from going too far without realising that something is wrong.
  • More print outs and better data loading checks in the dipole_pair user functions.
  • The relax_data.read user function now prints out all of the data read in. This is to better inform the user that something has happened.
  • The return_spin_from_selection() function now lists all matching spins in RelaxMultiSpinIDError.
  • The return_spin() and return_spin_from_selection() functions can now handle multiple spins. If the 'multi' flag is supplied, then lists of spins (and associated data) will be returned, rather than a RelaxMultiSpinIDError being raised.
  • Created the list_to_text() RelaxError system function for prettifying the output of RelaxMultiSpinIDError (a minimal sketch of such a helper is given after this list).
  • Expanded RelaxMultiSpinIDError to be able to print out a list of all the matching spin IDs.
  • Improvements for the MoleculeContainer backwards compatibility hook for the creation of proton spins. The proton element and isotope type is now set to 'H' and '1H' respectively. This now means that the old XML files require less work by the user to convert to the new interatomic data design.
  • Eliminated the RelaxProtonTypeError error and changed the RelaxSpinTypeError message.
  • The sequence.attach_protons user function now sets the proton element and isotope types. This reduces the amount of work required from the user.
  • Rearranged the spin.element user function arguments.
  • Created the sequence.attach_protons user function. This will be useful for analyses which are missing structural data.
  • The dipole_pair user functions now fail if nothing could be done. This is for the dipole_pair.define, dipole_pair.read_dist, and dipole_pair.set_dist user functions.
  • The Monte Carlo select_all_sims() function is now using the specific skip_function(). This is needed for recreating model-free simulations as deselected proton spin containers now exist.
  • The MoleculeContainer XML backwards compatibility hook now deletes the spin 'r_err' and 'r_sim' vars.
  • Added a backwards compatibility hook for converting old XML files to the interatomic data design. This will convert the variable names, deleting the old, and create proton spins and interatomic data containers populating them with the old spin parameters.
  • Added a verbose flag to the generic_fns.dipole_pair.define() function.
  • Added a check to the model-free overfit_deselect() to see if a relaxation mechanism is present.
  • Expanded the functionality of the generic_fns.interatomic module. The copy() and exists_data() functions have been added to copy all interatomic data from one data pipe to another and to check if interatomic data exists within a data pipe respectively. The create_interatom() function now also accepts a 'pipe' argument so that non-current pipes can be used.
  • Created the RelaxInteratomError and RelaxNoInteratomError classes for interatomic data errors.
  • The interatomic data container now has the dipole_pair flag initialised to False.
  • Expanded the return_interatom() function to handle a single spin ID. This function now returns a list of matching interatomic data containers.
  • Modified the check_args() method of the dauvergne_protocol model-free auto-analysis [d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b] for the new data. The heteronuc_type and proton_type are now in the spin's isotope variable, and the bond length 'r' is now the interatomic distance variable 'r'. All spin containers and interatomic data containers are being checked.
  • Expanded the RelaxNoValueError to handle one or two spin IDs as arguments. This is to better identify which spins or interatomic data containers are deficient.
  • The nuclear isotope is now defined via spin.isotope.
  • Eliminated a number of the specific API parameters relating to dipole-dipole interactions. These are now provided by the spin.isotope user function and the dipole_pair user functions. The eliminated parameters are: 'r' - replaced by dipole_pair.set_dist or dipole_pair.read_dist, 'xh_vect' - replaced by dipole_pair.unit_vectors, 'heteronuc_type' - replaced by spin.isotope, 'proton_type' - replaced by spin.isotope, and 'attached_proton' - replaced by dipole_pair.define.
  • Created the spin.isotope user function. This is designed to be a permanent replacement for the specific analysis API 'heteronuc_type' and 'proton_type' parameters.
  • Added the nuclear symbol as a wizard graphic.
  • Added a set of icons for nuclear or isotope related usage.
  • Deletion of the structure.vectors user function as it has been superseded by dipole_pair.unit_vectors. Only the user function definition has been deleted - the backend code will remain so that it can be used internally.
  • Implemented the dipole_pair.unit_vectors user function backend. This code originates from the generic_fns.structure.main.vectors() function (the structure.vectors user function backend). The dipole_pair.unit_vectors user function is designed to replace structure.vectors.
  • Created the user function definition.
  • Created the backend of the dipole_pair.read_dist user function.
  • Created the dipole_pair.read_dist user function definitions. This new user function is for simplifying the loading of many different interatomic distances into relax.
  • Created a set of icons for the dipole_pair user functions.
  • Shifted the relax_data.dipole_pair user function into the new dipole_pair user function class. This has also been split into two new user functions: dipole_pair.define used to set up the magnetic dipole-dipole interactions, and dipole_pair.set_dist used to set up the r-3 averaged interatomic distances.
  • The relax_data.dipole_pair backend now uses the direct_bond flag.
  • Added CONECT records to the sphere.pdb file to allow connectivities to be more easily determined. This is for the internal reader, as the current algorithm for finding attached atoms is distance based, and as all N atoms of all residues are at [0, 0, 0], this algorithm fails.
  • Implemented the are_bonded() structural API method for the internal structural object.
  • Created the structural API base are_bonded() method - this is for determining if 2 atoms are bonded. This is a method stub which raises a RelaxImplementError.
  • Started to add the backend of the relax_data.dipole_pair user function.
  • Shifted the dipole-dipole graphics to the Wizard directory, as this is a wizard graphic.
  • Created graphics for the magnetic dipole-dipole interaction.
  • Removed the bond length from the model-free parameter list.
  • The bond length setting via value.set has now been merged into relax_data.dipole_pair. This averaged length is the dipole-dipole distance and does not need to be a model-free parameter.
  • Added the definition for the new relax_data.dipole_pair user function.
  • Started to change the structure.vectors backend to handle two spin IDs.
  • Expanded the description of the negative gyromagnetic ratio flag for the rdc.read user function.
  • Created the generic_fns.interatomic.interatomic_loop() generator function.
  • The generic_fns.interatomic.create_interatom() function now returns the created container. This comes from the InteratomicContainer.add_item() method which now also returns the container.
  • Renamed the interatomic function return_container() to return_interatom(). This is to make the name more unique.
  • Shifted some code from InteratomList.add_item() to generic_fns.interatomic.create_interatom(). This is to break a circular import problem.
  • The rdc.read user function backend is now adding the RDCs to the interatomic data containers.
  • Created the generic_fns.interatomic.create_interatom() function for creating interatomic data containers.
  • Added checks to the InteratomList.add_item() method to make sure that the spin IDs already exist.
  • Created the generic_fns.interatomic module and added the return_container() method.
  • Created the InteratomContainer.id_match() method for checking the spin IDs in both directions for a match.
  • Created the interatomic data list and containers, and added these to the data pipe structures. This is modelled on the molecule/residue/spin structures. The new containers have is_empty(), from_xml() and to_xml() methods and should be fully functional with the relax infrastructure.
  • Improvement for the RelaxNoVectorsError class - the data pipe name is now optional. The print outs have been improved as well.
  • The spin container hidden objects are now replicated when the object is copied. The old Prototype.__deepcopy__() method was skipping all hidden objects, but now only objects starting with '__' are skipped.
  • Created a system test to replicate Romel Bobby's bug #19887.
  • Modified the operation of the n_state_model.elim_no_prob backend. This user function is not functional anyway and is not tested by the relax test suite, but will remain as it might be useful in the future.
  • Added the consistency testing documentation to the grace.write and value.* user functions.
  • Converted the consistency testing documentation strings to the Uf_tables and Desc_container design. This is needed to use the consistency testing documentation within the user function help system.
  • The sequence.read user function now fails with a RelaxError if no sequence data was loaded. This is for better user feedback.
  • Creation of a fast molecule, residue and spin data lookup framework using private metadata. This consists of two elements: the already existing private lookup table, now at cdp.mol._spin_id_lookup, which is a dictionary with spin IDs as keys and a list of molecule, residue and spin indices as values; and a set of private variables within the molecule, residue and spin containers which identify the parent container names, numbers and indices. As all data is private, it will not be visible to the user or be saved in the XML results and save files, and should be considered volatile. All this private metadata is kept up to date via the two new generic_fns.mol_res_spin functions metadata_prune() and metadata_update(). For fast operation, these methods can update specific container subsets via the mol_index, res_index and spin_index arguments. All parts of relax which modify the data pipe's molecule, residue and spin data structure (the generic_fns.mol_res_spin functions and the test suite) call these two functions as needed. Two auxiliary functions spin_id_variants() and spin_id_variants_elim() have been added to create all possible matching spin ID strings for a given spin (the second creates ID strings which should no longer exist). The speed ups from this change are significant. On one system, the system and unit tests decrease from 492.8s/26.4s to 434.3s/25.1s. On another the decrease is from 330.7s/17.4s to 258.9s/15.4s. In addition, the pipe argument has been added to the generic_fns.mol_res_spin functions create_molecule(), create_residue(), create_pseudo_spin() and create_spin(). Also, the molecule name will now always be a string; previously this was allowed to be an integer. This is needed for the private metadata functions to operate correctly. A number of unit tests have been updated for the changes. A rough sketch of the lookup table idea is given after this list.
  • Removed a hack from the generic_fns.relax_data.pack_data() function for the BMRB support. This calls the generic_fns.bmrb.generate_sequence() function. As non-BMRB code paths access the pack_data() function, this is a nasty hack which would have caused problems in the future.
  • Removal of a hack from the generic_fns.bmrb.generate_sequence() function. This hack was for naming unnamed spins. But this is not needed as the generic_fns.mol_res_spin.create_spin() function already does this but with many more safety checks.
  • The spin ID lookup table has been made private so that it is not included in the save files.
  • Update and clean up of the model-free LaTeX table generation script.
  • Created the generic_fns.mol_res_spin.return_spin_indices() function to return the index triplet. This allows a spin ID to be converted into the molecule, residue and spin indices.
  • The generate_spin_id() function now chooses to use the spin name instead of the number by default.
  • Renamed return_spin_from_id() to return_spin(), and return_spin() to return_spin_from_selection(). This shaves off a number of seconds from the system test - the look up table speed ups will come with support in the other mol_res_spin module functions.
  • return_spin_from_id() now defaults to return_spin() when the spin ID is not in the lookup table. The slower return_spin() method will allow return_spin_from_id() to always be functional.
  • Added the 'pipe' argument to generic_fns.mol_res_spin.return_spin_from_id(). This is to mimic the return_spin() function.
  • Created generic_fns.mol_res_spin.return_spin_from_id() for returning spin containers from spin IDs.
  • generic_fns.mol_res_spin.create_pseudo_spin() is now adding data to the spin ID look up table. To support this, the return_residue() method now takes the 'indices' argument and returns the molecule and residue indices.
  • Started to fill up the spin ID look up table. The index_molecule() and index_residue() functions have been added to determine the MoleculeList and ResidueList indices of given molecules and residues. These are used by the create_spin for efficiency and to allow the indices (together with the spin index and spin ID string) to be assembled into the look up table. This table is not used anywhere yet.
  • Initialised a look up table in the cdp.mol structure for faster spin access. This look up table will be slowly transitioned to, and should significantly speed up certain operations.
  • Created the gui.misc.bitmap_setup() function for handling bitmap alpha correctly across operating systems. This function is required to handle alpha in bitmaps on MS Windows so that regions with partial transparency are not blended into the default dark grey colour of Windows' windows.
  • Added the status/weather-snow-scattered-night Oxygen icon as a wizard graphic for the temperature uf.
  • The about model-free dialog no longer has grey at the bottom in MS Windows. The wx.ScrolledWindow.GetScrollPixelsPerUnit() function is now used to determine how many pixels the y scrolling is, and rounds up the virtual size based on that.
  • Improved the debugging drawing for the about GUI elements.
  • Hack for the relax_fit C module compilation to detect supported CPUs for Mac OS X cross compilation.
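The list_to_text() prettifier mentioned in the RelaxMultiSpinIDError entries above can be pictured with the following minimal sketch. This is an illustration only, not the actual relax function.

    # Minimal sketch of a helper converting a list into the text "x, y and z".
    # Illustrative only - not the actual relax code.
    def list_to_text(data):
        """Convert a list of items into a readable, comma separated string."""
        strings = [repr(item) for item in data]
        if not strings:
            return ""
        if len(strings) == 1:
            return strings[0]
        return ", ".join(strings[:-1]) + " and " + strings[-1]

    # A multiple spin ID match could then be reported as:
    #   "The spin ID matches multiple spins: ':1@N', ':1@H' and ':2@N'."
    print(list_to_text([":1@N", ":1@H", ":2@N"]))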
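The private metadata lookup framework described in the long entry above is, at its core, a dictionary mapping spin ID strings to molecule, residue and spin index triplets. The sketch below shows the idea only; the class and attribute usage here are simplifications, not the real relax data store.

    # Rough sketch of the spin ID lookup table idea - a dictionary mapping
    # spin ID strings to [mol_index, res_index, spin_index] triplets.
    # This class is a simplification, not the real relax data store.
    class MoleculeList:
        def __init__(self):
            # Private lookup table, not saved to XML and considered volatile.
            self._spin_id_lookup = {}

        def metadata_update(self, spin_id, mol_index, res_index, spin_index):
            """Register or refresh the indices for a given spin ID."""
            self._spin_id_lookup[spin_id] = [mol_index, res_index, spin_index]

        def metadata_prune(self, spin_id):
            """Remove a spin ID which should no longer exist."""
            self._spin_id_lookup.pop(spin_id, None)

        def return_spin_indices(self, spin_id):
            """Fast conversion of a spin ID into its index triplet."""
            return self._spin_id_lookup.get(spin_id)

    # Usage: an ID such as '#protein:1@N' maps straight to its indices.
    mols = MoleculeList()
    mols.metadata_update('#protein:1@N', 0, 0, 0)
    print(mols.return_spin_indices('#protein:1@N'))    # [0, 0, 0]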


relax 2.0 series

relax 2.0.0



Version 1 of relax

relax 1.3 series

relax 1.3.16


relax 1.3.15

  • Changed all of the maths in the HTML user manual page titles: the latexonly and htmlonly environments are now used to produce different section titles with and without maths respectively.
  • The latex2html configuration script now allows for more maths in the HTML user manual with the HTML_VERSION math extension.
  • The section numbers are now removed from the HTML user manual pages so that the user function webpages remain static and do not disappear as new user functions are added.
  • The title page of the HTML user manual has been renamed to "The relax user manual".
  • Updated the ancient COMMITTERS file which has not been changed in over 4 years!
  • The pipe editor window is no longer centred, now matching the behaviour all other windows.
  • All open relax windows are now closed prior to running the test suite within the GUI.
  • Exiting the GUI now only warns about data loss if there is data to lose.
  • The relax controller can no longer be closed while the test suite is running.
  • During the GUI tests from the GUI, the relax controller is now modal, preventing users from interfering with the tests.
  • The relax controller now stays on top of all windows when the GUI tests are being run, improving the running of the tests on Mac OS X and MS Windows.
  • The GUI tests now work in the GUI thanks to a lot of GUI black magic. The tests' tearDown() method now carefully deconstructs the GUI element prior to the next test being run. In the normal 'relax --gui-test' mode, the GUI object is destroyed and recreated for each test; however, when run from the GUI, the GUI object is always there and must remain intact. The deconstruction includes deletion of each analysis tab and selective destruction of all non-main windows (excluding the controller which shows the test suite progress). The relax data store GUI object is also reconstructed in the tearDown() method, and all wx events are flushed at the very end to prevent clashes with the next GUI test.
  • The relax mode (i.e. prompt, script, GUI, test suite, etc.) is now stored in the status object - this is used to activate and deactivate certain parts of the GUI tests within the GUI and normal test suite modes.
  • The ds.relax_gui GUI data object is now a permanent feature of the relax data store.
  • The 'Tools->Test suite' menu item has been converted into a sub-menu with entries for running all tests or the individual test categories.
  • Created the _det_install_path() status singleton method for better determining the install path - this is used for the Mac OS X applications whereby the current logic of using sys.path[0] fails miserably!
  • Prepared the multi-processor package for the import mechanisms of Python 3 - this new mechanism is present in Python 2.7 now, and the code falls back to the old method when not present.
  • Complete redesign of the py2app setup.py script for building Mac OS X applications. The script has been converted into a class called Setup which performs all the actions. All files, source or otherwise, are now stated as data files to be included in relax.app/Contents/Resources. All relax modules are specified by the py2app 'includes' option so that they are forced to all be included within the relax.app/Contents/Resources/lib/python2.X/site-packages.zip file as *.pyc files.
  • The py2app part of the setup.py script now throws a RelaxError if the setuptools module is missing.
  • Added the relax prompt icon to the main GUI window toolbar.
  • Added the larger sized application-x-executable-script Oxygen icons.
  • Created the 'ansi' module containing the terminal colouring ANSI escape sequences.
  • The test suite is now only imported in the test modes of operation - this should speed up program initialisation.
  • The import of the gui package now only occurs in GUI mode - this will speed up the program start up.
  • The script print out in scripting mode is now in cyan if sys.stdout is a TTY.
  • ANSI escape characters are now turned off forcibly when in GUI mode.
  • The sys.std*.isatty() methods are now used to determine if text output should be coloured (a rough sketch of this check and the ANSI colouring is given after this list).
  • All RelaxWarnings are now coloured yellow when printed to a TTY.
  • All RelaxErrors are now coloured red when printed to a TTY.
  • The relax prompts will be coloured blue when printed to a TTY.
  • The GUI analyses delete_all() method now unregisters all observer methods prior to deletion.
  • Created observer_register() for all GUI analyses for method registration and unregistration - this method allows for external calls to observer_register() to pre-remove the methods from the observer objects.
  • Added debugging printouts to the delete_all() analysis method.
  • More advanced debugging printouts for the delete_analysis() method.
  • Added some heavy debugging code to the GUI analysis delete_analysis() method.
  • Increased the size of the model-free model change warning dialog for wxPython 2.9 on GNU/Linux.
  • The file selection wizard GUI element now has the preview button turned on by default.
  • Double clicking on a file in the results view window now opens it.
  • Added a file preview button for the spectrum.read_intensities user function GUI page.
  • Added a file preview button to the file selection GUI element of the wizards.
  • Increased the size of the incomplete set up dialogs for wxPython 2.9 on GNU/Linux.
  • Added the document-preview.png Oxygen icons.
  • Increased the loading state warning dialog size - this is to accommodate larger text on wxPython 2.9 on GNU/Linux with GTK.
  • Improved the spin data deletion messages from the spin viewer window.
  • Increased the dialog heights for the deletion of spin data via the spin viewer window.
  • Improved the user feedback during a state save by just sleeping a little to show the busy cursor.
  • Modified the spin loading wizard so that preloaded structures are the default.
  • The maths_fns.relax_fit module is now stored in the dep_check module for the info print out.
  • Added the structure.read_xyz user function to the menus.
  • Created the Tools->System Information menu entry, which is simply the sys_info user function front end.
  • Created the GUI front end to the structure.read_xyz user function.
  • The relax controller now accepts Ctrl-A to select all text.
  • The relax controller now shows the relax intro text to mimic the prompt/scripting modes.
  • Introduced the empty() method into the structure API to check if structural data is loaded - this will be used in the spin loading wizard of the spin viewer window.
  • Converted the structure.read_xyz user function front end to the new design.
  • Improved details of relax and the compiled C modules from the info print out.
  • Created a dictionary object containing wxPython version info within the status singleton object - this is being used to construct the Mac dock icon, when the Carbon and Cocoa builds and not GTK are being used.
  • Updated the multi-processor package __all__ list to allow the relax unit tests to pass.
  • Added a document describing how to build a 3-way (i386, pcc, x86_64) Mac OS X Python framework.
  • Added a script which is used to validate the binary architecture of Mac OS X Frameworks.
  • Improved the relax info print out for the installed python packages - this now shows more information for the wxPython version, and formats the output based on maximum widths to handle different situations.
  • Removed the ppc64 build target for the relax C modules on Mac OS X - this architecture is not supported by the recent Xcode frameworks, so it has been dropped.
  • The scons binary_dist target on Mac OS X can now overwrite a pre-existing DMG file.
  • Added some epydoc @attention fields to the multi-processor API.
  • Created the fetch_data_store() multi-processor API function - this simply returns the data store of the same processor as the calling code.
  • The 2nd test implementation's slave command now uses the fetch_data() API function - this is to obtain the invariant data pre-sent by the master to the slaves.
  • Renamed the multi-processor API data_fetch() function to fetch_data(), and implemented it.
  • Renamed data_upload() to send_data_to_slaves() and made it more specific.
  • The multi-processor data_fetch() API function is now used to obtain the total_length variable.
  • Shifted the self.threaded_result_processing flag into the base Processor class where it belongs.
  • Clean up and completion of the TODO for the Processor.assert_on_master() method. The Processor.assert_on_master() method has been created and calls raise_unimplemented(). The Multi_processor.assert_on_master() method has been shifted to Mpi4py_processor.assert_on_master(), as that method's error message is MPI specific. The empty Uni_processor.assert_on_master() method has been added to allow that fabric to work.
  • Spun out all of the results queue objects into their own module. This completes another set of TODOs by removing these queue objects from any fabric level. They can now be imported and used by any fabric level (Processor, Multi_processor, Mpi4py_processor, Uni_processor, etc.).
  • Shifted the run_command_queue() and run_queue() methods from the Multi_processor to Processor class.
  • The multi/test_implementation2.py script can now be run in uni-processor mode.
  • The multi/test_implementation2.py script now properly uses pre-send data in the slave calculations.
  • Partially implemented the Processor.data_update() method.
  • Created the special command object Slave_storage_command for transferring data to slaves - this command currently has two special methods: add(), used by the master processor to add data to the command for transfer; and clear(), used by the slave (via run()) or the master to clear out all data (a minimal sketch is given after this list).
  • Split the multi.commands module into two - the slave commands and result commands.
  • Removed the Mpi_processor.data_upload() method as this will be performed at the Processor level.
  • Shifted all of the processor command objects into the multi.commands module. The other multi.api module objects have been shifted into the multi.misc module.
  • The multi-processor package now allows sys.exit() calls within the master processor.
  • Removed a number of sys.exit() calls from different relax modes. The return call is used rather than sys.exit() to exit the main run() method. These were not needed and it allows the 'version', 'info', and 'gui' modes to play better with the multi-processor package when using mpi4py.
  • Shifted the mpi4py processor module functions broadcast_command() and ditch_all_results() into the class - these have been turned into private methods.
  • Redesigned how the multi-processor package terminates program execution - the Processor.exit() method has been introduced to perform this action.
  • Spelling fix for a number of the processor method names.
  • Fully documented the Processor.run() method via comments.
  • Eliminated the unused Set_processor_property_command multi-processor class.
  • Eliminated the unused Get_name_command multi-processor class.
  • Shifted the Multi_processor.run() method up a level to Processor.run() - this completes one of the TODOs, and will be needed to avoid code duplication for handling the new data_upload() and data_fetch() API methods.
  • Eliminated the completely unused create_slaves() Processor method.
  • The processor instances now have a data storage container - this will be used by the data_upload() and data_fetch() API methods.
  • Implemented the mpi4py processor fabric data_upload() method.
  • Updated the second multi-processor test implementation to use the new data_upload() API function.
  • The multi.data_upload() API function now forwards the call to the Processor classes.
  • Clean up of the Multi_processor.run_command_queue() method.
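The TTY detection and ANSI colouring entries above can be pictured with the following minimal sketch; the helper name and the choice of stream are assumptions for illustration, while the escape codes are the standard ANSI sequences.

    # Minimal sketch of colouring output only when writing to a terminal,
    # as in the ANSI and isatty() entries above.
    import sys

    ANSI_RED = "\033[31m"       # RelaxErrors
    ANSI_YELLOW = "\033[33m"    # RelaxWarnings
    ANSI_END = "\033[0m"

    def coloured(text, code, stream=sys.stderr):
        """Wrap the text in ANSI codes only if the stream is a real TTY."""
        if hasattr(stream, "isatty") and stream.isatty():
            return code + text + ANSI_END
        return text

    sys.stderr.write(coloured("RelaxError: something went wrong.\n", ANSI_RED))
    sys.stderr.write(coloured("RelaxWarning: check your data.\n", ANSI_YELLOW))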
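The Slave_storage_command object described above, with its add() and clear() methods, can be sketched roughly as follows. The internal storage layout is an assumption for illustration; the real command also has a run() method, not shown here, which unpacks the data into the slave's data store.

    # Rough sketch of a command for transferring data to the slaves, following
    # the Slave_storage_command entry above.  The internal storage is an
    # illustrative assumption, not the real multi package code.
    class Slave_storage_command:
        def __init__(self):
            self.names = []
            self.values = []

        def add(self, name, value):
            """Used by the master processor to queue named data for transfer."""
            self.names.append(name)
            self.values.append(value)

        def clear(self):
            """Used by the slave (via run()) or the master to wipe all data."""
            self.names = []
            self.values = []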


relax 1.3.14

  • The MS Windows text wrapping width has been changed from 80 to 79 - this now fits the cmd prompt.
  • The program intro print out and prompt UI help system is using the new Status.text_width variable to wrap text.
  • The model-free constraint matrix A is now of numpy.int8 type - this is to decrease virtual memory usage and increase the scaling efficiency on clusters.
  • Changed all relax website links to http://www.nmr-relax.com for consistency.
  • The setup.py script can now be imported for the epydoc API documentation system.
  • The extern and minfx.scipy_subset relax packages are excluded from the API documentation.
  • Added the ability to skip scripts in the package __all__ list unit test checks.
  • Added some code to detect the bit version of MS Windows in the information print out (to better distinguish 32 vs. 64-bit versions).
  • Added some documentation about master processors on Linux 2.6 eating 100% of one CPU core (in the mpi4py multi-processor fabric).
  • Added a reference implementation to the multi-processor package - this is to demonstrate to users of the package how an implementation is created via the public API.
  • Elimination of all relax dependencies from the multi-processor package.
  • Added support for the memory size on MS Windows to the relax info print out.
  • Updated the value.set user function unknown parameter error message to list the known ones.
  • For Unix and GNU/Linux systems, the relax info printout now shows the ram and swap size.
  • Expansion and improvement of the information printed out by 'relax --info'.
  • Expansion of the multi-processor API documentation.
  • Expansion of the multi-processor package documentation with a step by step usage guide - this should increase the usability of the package by clarifying how one should use it.
  • Expansion and improvements to the multi-processor package and module docstrings.
  • The finish button in the new analysis wizard has been renamed to "Start".
  • Created a special Verbosity singleton for controlling the multi-processor package print outs.
  • Future proofed the relax codebase by replacing all ''' with """ in the docstrings.
  • Removed the unnecessary try statement in the model-free Slave_command.run() method as exception handling is correctly performed on the slave and master.
  • Shifted the Memo object into its own module (multi package).
  • Simplification and abstraction of the Slave_command.run() method to shift all exception handling into the package. Therefore program code no longer needs to handle the multi-processor specific errors.
  • Created a new module 'multi.misc' for holding miscellaneous functions used throughout the multi package.
  • Created a public API for the multi-processor package, available via multi.__init__.
  • The load_multiprocessor() function is no longer a static method of the Processor base class. This function loads the correct Processor class, so doesn't need to be a method of the base class and operates cleanly and more clearly as a stand alone function.
  • Clean up of the processor IO module (multi package).
  • Eliminated all usage of sys.__stdout__ and sys.__stderr__ in the multi-processor package - this returns full control of IO streams to the parent program.
  • The float arg checks now check against all the numpy float types (float16, float32, float64, float128); a rough sketch of such a check is given after this list.
  • Added value.write user function calls to the J(ω) mapping system test script.
  • Added some value.write user functions to the J(ω) mapping sample script.
  • Modified the J(ω) mapping test data to include relaxation values of None to trigger bug #19329.
  • Speed up for the generic_fns.relax_re.search() function.
  • More speed ups for the Selection.__contains_mol_res_spin_containers() method.
  • Reordered the checks in Selection.__contains_mol_res_spin_containers() - this cuts the number of function calls down by avoiding relax_re.search() calls if residue or spin numbers match.
  • Simplified the generic_fns.relax_re.search() function - this is to minimise the number of isinstance() calls when dealing with the relax mol-res-spin sequence data.
  • Updates for Python 3.0 using 2to3.
  • Removal of a number of debugging print out statements.
  • Significant speed ups of the return_spin() and return_residue() functions.
  • Added a print out for the diffusion_tensor.init user function to inform the user of an angle unit change - this is in response to bug #19323 to make it clearer that a parameter conversion has occurred.
  • Created a special specific API object called SPIN_PARAMS - this will be used to handle all operations to do with model parameters. The Param_list object has methods for parameter initialisation and handling (where all information such as the Grace string, units, and default value is specified) and for determining if a parameter exists.
  • Mass conversion to the new GLOBAL_PARAMS and SPIN_PARAMS specific API data structures. The parameters are now all lowercase, for example ['S2', 'te', 'Rex'] is now ['s2', 'te', 'rex']. The following parameters are now converted throughout relax: 'bond_length' to 'r', 'CSA' to 'csa', 'heteronucleus' to 'heteronuc_type', and 'proton' to 'proton_type'.
  • Created a new algorithm for finding the pivot of motion between different structural models - this is available through the structure.find_pivot user function.
  • Added the validate_models() method to the structural API - this is used to check that the models are 100% consistent.
  • Added the centroid argument to the structure.superimpose user function - this allows for the superimposition of structures assuming a pivoted motion.
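The float argument check mentioned above can be sketched as follows. Since float128 is not available on every platform it is looked up defensively here, and this is an illustration rather than the actual arg_check code.

    # Rough sketch of accepting Python floats and the numpy float types,
    # as in the float arg check entry above.  Illustrative only.
    import numpy

    # Build the tuple of allowed types, skipping float128 where unsupported.
    FLOAT_TYPES = [float, numpy.float16, numpy.float32, numpy.float64]
    if hasattr(numpy, "float128"):
        FLOAT_TYPES.append(numpy.float128)
    FLOAT_TYPES = tuple(FLOAT_TYPES)

    def is_float(arg):
        """Return True if the argument is any supported floating point type."""
        return isinstance(arg, FLOAT_TYPES)

    print(is_float(1.0))                    # True
    print(is_float(numpy.float32(1.0)))     # True
    print(is_float(1))                      # False - integers are rejected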


relax 1.3.13


relax 1.3.12

Too many to list.


relax 1.3.11

  • Removed the Numeric module from the --info print out as it is completely unused now.
  • Added the Bruker PDC software info to the exp_info module.
  • The pdc.read user function back end is now reading the PDC version information.
  • Added a catch for the "worst case per peak scenario" option in the PDC. This will now throw a RelaxError, telling the user to go back to the PDC and use the other option.
  • Converted the pdc.read user function back-end to use the tab (\t) delimitation of the PDC file for parsing (a rough parsing sketch is given at the end of this list).
  • The R1 value and error are now being read directly from the PDC file.
  • Added wrapper methods to the relaxation curve-fitting specific code for the new C modules. These allow the parameter numpy array from minfx to be converted into a Python list prior to sending it into the C module.
  • Added a check for the runpy Python module.
  • Scripts can now be run again under Python versions 2.4 or lower (by avoiding the runpy module).
  • The auto_analyses package modules are now imported by __init__ to force their existence.
  • The relax_data.back_calc user function arguments no longer need to be supplied.
  • Shifted all of the model-free sample scripts into the new subdirectory sample_scripts/model_free.
  • Added copyright headers to all of the sample scripts, and updated the introduction text. This allows users to determine the age of the scripts.
  • Updated the OpenDX mapping sample scripts from the relax 1.2 to the 1.3 design.
  • Added a sample script for creating plots of experimental verses back calculated relaxation data.
  • Deleted the incomplete and useless N_state_model.py sample script.
  • Created a sample script subdirectory called n_state_model.
  • Added a sample script for the unsuccessful two domain N-state model optimisation.
  • Deleted the ancient TODO file, as it is no longer relevant.
  • Deleted the SRLS analysis type - there is no incentive to develop this part of relax.
  • Removed 'relax_disp' from the VALID_TYPES data pipe type array. This is supported in the relaxation dispersion 'relax_disp' branch and not in the main line.
  • The RDC and PCS Q factor user functions now do nothing when data is missing rather than failing. A warning is given and the function now simply returns rather than raising an error.
  • The grace file created by the pcs.corr_plot user function now separates each element into its own graph.
  • Converted the summary from the final_data_extraction.py sample script into comma separated file (.csv).
  • Added the PDB reading parts of Scientific python to relax so that Scientific python is no longer a relax dependency.
  • The Scientific python PDB reading tests are no longer skipped if the module is not installed.
  • Removed Scientific python and Numeric from the blacklisted modules. This is for the docstring fetching script to generate the user function section of the user manual.
  • The relax icon is no longer shown in the window title when running on a Mac.
  • Included the relax development team in the copyright statement in the main window of the GUI.
  • Peak lists and relaxation times are collected in data grids within the GUI.
  • Updated the Bieri et al., 2011 reference as the relax GUI paper is now published (http://dx.doi.org/10.1007/s10858-011-9509-1).
  • Changed the "Help->Contact relaxGUI" to "Help->Mailing list contact".
  • Renamed the "Relaxation time [s]" column to "Relaxation delay [s]" in the GUI as this is a more correct description.
  • Deleted [Unknown developer Sw! Edward's] ancient Melbourne Uni email address from the MEDLINE info.
  • Normalisation of the text in the auto model-free analysis GUI tab. The font sizes are now all the same, the text is no longer right aligned (an issue not seen under Linux but very ugly in Windows), and semicolon usage and capitalisation are normalised.
  • Regularisation of the font formatting in the steady-state NOE GUI tab.
  • Shifted the title and subtitle creation for the NOE frame into a base class for all frames to use.
  • Regularisation of the fonts, titles, and subtitles in the Rx auto-analysis frames.
  • Created the add_subsubtitle() base class method for creating a sub-sub-title in the GUI.
  • Regularised the text in the results tab.
  • Regularised the "Single delay cycle time [s]" text in the model-free frame.
  • Attempted to regularise the text in the text controls.
  • Regularised all of the steady-state NOE frame text controls.
  • Created the add_static_text() base class method for all the GUI frames to use.
  • Added the add_button_open() base class method for regularising the buttons across GUI frames.
  • Regularised all of the auto NOE analysis frame buttons.
  • Added the add_text_sel_element() base class method for creating one of the basic frame elements.
  • Better layout for the base class static text, text control and button addition (for the GUI).
  • Added a stretchable spacer to the NOE window so the execute relax button is always at the bottom right.
  • Completely refactored the model-free model buttons code.
  • Shifted the model-free add_execute_relax() method into the base class for all analysis tabs to use.
  • The peak intensity GUI element is now enclosed within a StaticBox.
  • Shifted the main GUI window layout to the start of the __init__() method.
  • Created the function gui.misc.add_border() for adding borders to generic GUI elements.
  • Added an internal border to the relaxation peak list selection GUI element.
  • Created the analysis tab base class methods add_spin_control() and add_spin_element(). These are similar to the text control and text selection elements respectively, but use a SpinCtrl instead.
  • The model-free analysis tab now uses the base class add_spin_element() method for the max iterations.
  • Shifted the model-free list box in the results tab into the new add_list_box() method.
  • Shifted the 'Execute relax' text to be inside the button.
  • Added an icon to the 'Change' buttons throughout the GUI.
  • Added the oxygen icons for opening folders.
  • The open folder icons are now used for the 'Results directory' change buttons.
  • The settings windows are now derived from wx.Dialog rather than wx.Frame. This is for better operation under MS Windows.
  • Added the document-close Oxygen icons.
  • Added support for old save files in the Peak_intensity.sync_ds() method (for the GUI).
  • The change_all argument for deselecting residues in the d'Auvergne protocol[d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b] must now be a Boolean.
  • The relax_fit overfit_deselect() method now skips deselected spins. This prevents unnecessary warnings about missing data for deselected spins.
  • All of the overfit_deselect() methods now skip deselected spins, avoiding confusing warnings.
  • Created a script for byte compiling the Python source files.
  • Added the data_type() method to the specific functions API. This will be used to determine the type that a given parameter from data_names() should be (see the second sketch following this list).
  • Implemented the model-free version of the data_type() API method.
  • The heteronucleus and proton type parameters can now be specified by their parameter names as well.
  • The create_molecule, create_residue, and create_spin functions now return the created container.
  • Modified the create_spin() function to overwrite the first spin if empty.
  • Redesign of the structure.load_spins user function back end for XYZ file reading support.
  • Redesign of the main relax module. The module has been renamed to 'relax.py', and the original file 'relax' is now a very basic python script which simply loads the module and runs the new start() function for launching relax.
  • The relax mode normally specified by the command line can now be overridden.
  • Removed the executable svn property on the info module.
  • Shifted the pedantic flag into the status object.
  • RelaxWarnings now only show a traceback when the pedantic flag is True.
  • The relax state is now saved on a RelaxError when the pedantic rather than the debugging flag is turned on. As both flags can be given, this allows the state saving to be activated or deactivated.
  • The relax_io.read_spin_data() function no longer skips short lines, so that a warning can be given for them.
  • generic_fns.sequence.validate_sequence() now also checks for the spin ID, data and error columns. This is now used by relax_io.read_spin_data().
  • Files created by the grace.write user function are now put into a new list, cdp.result_files.
  • The value.write user function now also adds to the cdp.result_files structure.
  • Modified the execution lock to have a mode. This allows for greater control and avoids string comparisons of names to determine if a script or auto-analysis acquired the lock.
  • The dauvergne_protocol is now more robust if the program is interrupted and restarted later[d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b]. The opt/results.bz2 file is now searched for, and if it is not found the round is assumed to be incomplete.
  • The pymol.tensor_pdb user function now shows the diffusion tensor using PyMOL sticks.
  • Creation of the new structure.read_xyz user function.
  • Addition of two new system tests Structure.test_read_xyz_internal() and Structure.test_read_xyz_internal2().
  • Changing the description in the user functions structure.load_spins and structure.read_xyz.
  • Code for extracting a vector between specified spins in an XYZ file has been added to the generic_fns.structure.internal module.
  • Debugging of the generic_fns.structure.main.load_spin() function so that spins are properly loaded from XYZ files.
  • For the print out of modified spins, a check has been added for whether the spin_name exists.
  • Added the pymol_macro() method to the analysis specific API.
  • Clean up of parameters in the _create_mc_relax_data() specific API common method.
  • A number of epydoc docstring fixes.
  • The sconstruct script is only executed when it is the main loaded module or if launched by scons.
  • Improved the list of modules and packages used in the epydoc documentation. The files and directories which are not python modules or packages are now properly skipped.
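
The entry above on the relaxation curve-fitting C module wrappers can be illustrated with a minimal sketch. This is not the relax source code: the func_wrapper() name and the commented-out relax_fit call are purely illustrative, and only the conversion idea is shown.

    # Minimal sketch only - the names below are illustrative, not the relax API.
    from numpy import array, float64

    def func_wrapper(params):
        """Convert the minfx numpy parameter array into a plain Python list.

        The C module expects Python floats, so the numpy float64 values are
        converted prior to the call.
        """
        param_list = [float(value) for value in params]
        # A real wrapper would now pass param_list into the C module, e.g.:
        # chi2 = relax_fit.func(param_list)
        return param_list

    # Example: a hypothetical rate and initial intensity parameter vector from minfx.
    print(func_wrapper(array([1.2, 1.0e5], float64)))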
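
Similarly, the data_type() entry above can be sketched in a hypothetical form. The Specific_api_sketch class and the parameter names and types in its table are illustrative only and are not taken from the relax source; the sketch simply shows how data_names() and data_type() could relate.

    # Hypothetical sketch of a data_type() style lookup - not the relax implementation.
    class Specific_api_sketch:
        """Illustrative stand-in for an analysis specific code class."""

        # Illustrative mapping from parameter name to the expected Python type.
        _param_types = {
            's2': float,      # an order parameter
            'te': float,      # a correlation time
            'select': bool,   # a spin selection flag
        }

        def data_names(self):
            """Return the parameter names handled by this analysis (illustrative)."""
            return list(self._param_types.keys())

        def data_type(self, param):
            """Return the Python type that the given parameter should be."""
            return self._param_types[param]

    # Usage sketch.
    api = Specific_api_sketch()
    for name in api.data_names():
        print("%s -> %s" % (name, api.data_type(name).__name__))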


relax 1.3.10

  • Updates for 2 model-free system tests for a new system which again shows optimisation differences.


relax 1.3.9


relax 1.3.8

  • Loosened the te check in value_check() so that the test_opt_constr_bfgs_back_S2_0_970_te_2048_Rex_0_149() model-free system test would pass on 32-bit Linux.
  • Renamed the model-free results LaTeX table generation script.
  • Created a test-suite module to aid in the back-calculation of relaxation data from model-free parameters independent of relax.
  • Updates for Python 3.0 using 2to3.


relax 1.3.7

  • Speed up of the relaxation curve-fitting system tests.
  • The sample_scripts/full_analysis.py script has been renamed to sample_scripts/dauvergne_protocol.py[d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b].
  • Improvements to the test suite final print out with a table of skipped tests.
  • The data pipe system test now avoids the frame order data pipe, allowing the test to pass when scipy is not installed.
  • The alignment tensor system test now uses the N-state model rather than frame order so it is always tested.
  • The frame order system and unit tests are skipped if scipy is not installed.
  • The frame order analysis is disabled if scipy is not installed.
  • The data pipe generic code now uses dep_check to see if the relaxation curve-fitting is available.
  • The system tests now use the RelaxTestLoader to allow tests to be skipped.
  • The unit vector system tests using Scientific python are now skipped if the package is not installed.
  • The structure system tests involving scientific python are now skipped if the package is not installed.
  • The RelaxTestLoader has been added as a replacement for unittest.TestLoader to handle the skipping of tests when optional Python packages are not installed (see the sketch following this list).
  • Shifted the status singleton instantiation to the import level in all modules, saving execution time.
  • Big code cleanups in unit_test_runner.py.
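
The general technique behind skipping tests for missing optional packages, as mentioned in the RelaxTestLoader entry above, can be sketched as follows. This is not the RelaxTestLoader implementation, only an illustration of the idea using the standard unittest skipping support; scipy is used purely as an example of an optional dependency.

    # Minimal sketch of skipping tests when an optional package is missing.
    import unittest

    # Flag for the optional dependency (scipy is used here only as an example).
    try:
        import scipy
        SCIPY_PRESENT = True
    except ImportError:
        SCIPY_PRESENT = False

    class Frame_order_sketch(unittest.TestCase):
        """Illustrative test case with an optional scipy dependency."""

        @unittest.skipUnless(SCIPY_PRESENT, "scipy is not installed")
        def test_needs_scipy(self):
            """Skipped, rather than failing, when scipy is absent."""
            self.assertTrue(SCIPY_PRESENT)

    if __name__ == '__main__':
        unittest.main()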


relax 1.3.6

  • API documentation improvements with epydoc docstring fixes.
  • Numerous new system tests to catch bugs and prevent bugs appearing in the future.
  • Peak intensity data is now internally handled differently to improve its flexibility. This will be beneficial for handling Bruker PDC (Protein Dynamic Center) files, relaxation dispersion data, and adding new types of relaxation data.
  • The 'scons clean' target now removes temporary relax save files.
  • The molecule type can now be specified.
  • Improved the diffusion tensor print out.
  • pipe.delete without args will now delete all data pipes.
  • Added some checks to the dauvergne_protocol model-free analysis[d'Auvergne and Gooley, 2007][d'Auvergne and Gooley, 2008b] for the required previously optimised results.
  • Shifted the steady-state NOE specific analysis code into its own package.
  • Shifted the debug and pedantic flags into the __main__ namespace for better access from other modules.
  • Modified the specific code API to remove a number of references to spin_id, as not all analyses use spins.
  • Renamed the results_folder arg to results_dir in the NOE auto-analysis, and rearranged the args.
  • Citations now include the status if not 'published'.
  • Created a new directory 'graphics' for all relax artwork.
  • The structure.load_spins user function now gives a RelaxWarning if no data could be found.
  • The negative cones and z-axes are now not created for the pseudo-ellipses in frame_order.cone_pdb.
  • Added some transparency to the cone in pymol.cone_pdb.
  • Modified the pymol.cone_pdb user function to handle x, y, and z-axes.
  • Modified create_cone_pdb() to accept a pre-made structural object and to create a file only when asked.
  • Switched the names of the Pseudo_elliptic and Pseudo_elliptic2 classes.
  • Created an API common specific code set_selected_sim() method for a single global model.
  • Created an API common specific code model_loop() method for a single global model.
  • All objects placed into the relax data store structure are now stored in the XML save file.
  • Parameters can be fixed to the original values during the frame order grid search.
  • The user function argument checker arg_check.is_int_or_int_list() can now allow for None list elements.
  • The frame order model can be overwritten by frame_order.select_model.
  • Shifted to using numpy.sinc() for the frame order equations.
  • Switched the theta and phi angles in cartesian_to_spherical() to match the rest of relax.
  • Created a new module for performing coordinate transformations (maths_fns.coord_transform).
  • The pipe.display user function now places quotation marks around the pipe names and shows which is the current data pipe.
  • The align_tensor.display user function now prints out the generalized degree of order (GDO) value.
  • The back-calculated alignment tensors are now being stored in the current data pipe.
  • Removed the docstring length check from the code validator script.
  • The loading of RDCs and PCSs for non-existent spins now only throws a RelaxWarning.
  • The select.read and deselect.read user functions can now accept file handles or dummy file objects.
  • Limit arrays are now sent into the minfx generic interface for limiting simulated annealing.
  • The align_tensor.delete user function can now be used to remove all tensors simultaneously.
  • Made a RelaxError less stringent so that the paramagnetic centre can be unfixed.
  • Initialising an alignment tensor now adds the ID to the alignment ID list.
  • Changes to the NOE auto-analysis for the GUI: the output file name can be specified; the folder where the results files will be placed can be specified; the heteronucleus and proton labels of the peak lists and PDB file can be selected; the sequence is read from either a sequence file or a PDB file; white space and the progress output have been removed.
  • Changed the alignment tensor parameter scaling back to 1 as this was slowing down the optimisation.
  • The rdc.back_calc user function without an ID arg will back-calculate RDCs for all alignments.
  • Renamed the pcs.centre user function to paramag.centre to abstract for the PRE.
  • Better support for RDC and PCS correlation plots with and without errors.
  • Inverted the x and y axes in the RDC and PCS correlation plots.
  • Better support for tensor-less N-state model optimisation.
  • The align_tensor.copy tensor_to arg can now be None; this is useful for copying between data pipes.
  • Added a function for the pseudo-elliptical cosine function; this is a numerical approximation generated by series expansion.
  • Added a method for translating pymol.cmd.do() commands into specific pymol.cmd functions. This prevents problems with commands being executed asynchronously, for example images being saved before ray-tracing was complete (see the sketch following this list).
  • The RDC and PCS correlation plots now only show selected spins.
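
The pymol.cmd.do() translation entry above can be illustrated with a small hypothetical helper. The exec_cmd() function and its command table are not the relax PyMOL interface code; the sketch only shows why direct pymol.cmd calls, which block until complete, avoid the asynchronous execution problem.

    # Hypothetical sketch - not the relax PyMOL interface code.
    def exec_cmd(cmd, command):
        """Translate a few command strings into direct pymol.cmd calls.

        The cmd argument is the pymol.cmd module.  A direct call such as
        cmd.ray() blocks until ray-tracing has finished, so a following
        cmd.png() call cannot save the image too early.
        """
        if command == "ray":
            cmd.ray()
        elif command.startswith("png "):
            cmd.png(command[4:].strip())
        else:
            # Anything else falls back to the asynchronous command interpreter.
            cmd.do(command)

    # Usage sketch (requires PyMOL and its Python API):
    # from pymol import cmd
    # exec_cmd(cmd, "ray")
    # exec_cmd(cmd, "png snapshot.png")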


relax 1.3.5

#lst:relax 1.3.5

relax 1.3.4

#lst:relax 1.3.4

relax 1.3.3

  • Internal relax cleanups by the addition of the specific model_loop() method - this should eliminate a series of potential hidden bugs.
  • The results.write and results.display user functions now only support XML output.
  • More information is now extracted from the Modelfree mfout files.
  • The version of the Modelfree program is checked and if it is an old, buggy version, relax will refuse to execute it.
  • The system tests can now handle the sometimes large differences in Modelfree results between the GNU gcc and Portland C compiler versions.
  • Fixes and improvements to much of the API documentation.


relax 1.3.2

  • Internal abstractions to the relax data store by using the generic_fns.pipes API.
  • Added more literature references to the 'full_analysis.py' script.
  • Monte Carlo simulations which have been eliminated are now better identified during user function execution.


relax 1.3.1

#lst:relax 1.3.1

relax 1.3.0

#lst:relax 1.3.0


relax 1.2 series

relax 1.2.15

N/A


relax 1.2.14

  • Improvements to the relax manual.
  • Copyright statements added to the sample scripts for identifying the author and date of the script.


relax 1.2.13

  • Better support for MS Windows Vista in the scons build system and the relax introduction.
  • A file listing the unresolved residues is no longer necessary for running the 'full_analysis.py' script.
  • A few small documentation additions and fixes.


relax 1.2.12

N/A


relax 1.2.11

#lst:relax 1.2.11

relax 1.2.10

#lst:relax 1.2.10

relax 1.2.9

#lst:relax 1.2.9

relax 1.2.8

#lst:relax 1.2.8

relax 1.2.7

#lst:relax 1.2.7

relax 1.2.6

#lst:relax 1.2.6

relax 1.2.5

#lst:relax 1.2.5

relax 1.2.4

#lst:relax 1.2.4


See also

Relax release see also