Tutorial for model free SBiNLab
To get inspiration from example script files and '''see''' how the protocol is performed, have a look here:
* nmr-relax-code/test_suite/system_tests/scripts/model_free/dauvergne_protocol.py
* [https://github.com/nmr-relax/relax/blob/master/auto_analyses/dauvergne_protocol.py nmr-relax-code/auto_analyses/dauvergne_protocol.py]
  
 
For references, see the [http://www.nmr-relax.com/refs.shtml relax references]:

* [[Model-free_analysis_single_field#Protocol|See this description of the protocol by Edward]] and the image [http://www.nmr-relax.com/manual/The_diffusion_seeded_paradigm.html The diffusion seeded paradigm]
* [http://www.nmr-relax.com/manual/Model_free_analysis.html Link to the manual]
* [http://www.nmr-relax.com/manual/The_model_free_models.html Summary of the model-free models]
* [http://www.nmr-relax.com/manual/molmol_macro_apply.html#SECTION081284600000000000000 Summary of parameter meanings and values for PyMOL visualisation]
* d'Auvergne, E. J. and Gooley, P. R. (2008). [http://dx.doi.org/10.1007/s10858-007-9214-2 Optimisation of NMR dynamic models I. Minimisation algorithms and their performance within the model-free and Brownian rotational diffusion spaces. J. Biomol. NMR, 40(2), 107-119.]
* d'Auvergne, E. J. and Gooley, P. R. (2008). [http://dx.doi.org/10.1007/s10858-007-9213-3 Optimisation of NMR dynamic models II. A new methodology for the dual optimisation of the model-free parameters and the Brownian rotational diffusion tensor. J. Biomol. NMR, 40(2), 121-133.]
  
= Script inspiration =

== model-free: Script inspiration for setup and analysis ==

The relax distribution includes the folder '''sample_scripts/model_free''', which contains scripts for model-free analysis.

It can be seen here: https://github.com/nmr-relax/relax/tree/master/sample_scripts/model_free

Here is the current list:
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/back_calculate.py back_calculate.py] Back-calculate and save relaxation data starting from a saved model-free results file.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/bmrb_deposition.py bmrb_deposition.py] Script for creating an NMR-STAR 3.1 formatted file for BMRB deposition of model-free results.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/cv.py cv.py] Script for model-free analysis using cross-validation model selection.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/dasha.py dasha.py] Script for model-free analysis using the program Dasha.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/dauvergne_protocol.py dauvergne_protocol.py] Script for black-box model-free analysis.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/diff_min.py diff_min.py] Demonstration script for diffusion tensor optimisation in a model-free analysis.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/final_data_extraction.py final_data_extraction.py] Script for extracting the final data to a table.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/generate_ri.py generate_ri.py] Script for back-calculating the relaxation data.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/grace_S2_vs_te.py grace_S2_vs_te.py] Script for creating a Grace plot of the simulated order parameters vs. the simulated correlation times.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/grace_ri_data_correlation.py grace_ri_data_correlation.py] Script for creating correlation plots of experimental versus back-calculated relaxation data.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/map.py map.py] Script for mapping the model-free space for OpenDX visualisation.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/mf_multimodel.py mf_multimodel.py] This script performs a model-free analysis for the models 'm0' to 'm9' (or 'tm0' to 'tm9').
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/modsel.py modsel.py] Script for model-free model selection.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/molmol_plot.py molmol_plot.py] Script for generating Molmol macros for highlighting model-free motions.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/palmer.py palmer.py] Script for model-free analysis using Art Palmer's program 'Modelfree4'. Download from http://comdnmr.nysbc.org/comd-nmr-dissem/comd-nmr-software
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/remap.py remap.py] Script for mapping the model-free space.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/single_model.py single_model.py] This script performs a model-free analysis for the single model 'm4'.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/table_csv.py table_csv.py] Script for converting the model-free results into a CSV table.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/model_free/table_latex.py table_latex.py] Script for converting the model-free results into a LaTeX table.
  
For a similar tutorial, have a look at: [[Tutorial_for_model-free_analysis_sam_mahdi|Tutorial for model-free analysis sam mahdi]]

== Other script inspiration for checking ==

The relax distribution also includes the parent folder '''sample_scripts/''', which contains further analysis scripts.

It can be seen here: https://github.com/nmr-relax/relax/tree/master/sample_scripts

'''R1 / R2 calculation'''
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/relax_fit.py relax_fit.py] Script for relaxation curve fitting.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/relax_curve_diff.py relax_curve_diff.py] Script for creating a Grace plot of peak intensity differences.

The resultant plot is useful for finding bad points or bad spectra when fitting exponential curves to determine the R1 and R2 relaxation rates. If the averages deviate systematically from zero, bias in the spectra or in the fitting will be clearly revealed. To use this script, R1 or R2 exponential curve fitting must previously have been carried out and the program state saved to the file 'rx.save' (either with or without the .gz or .bz2 extension). The file name of the saved state can be changed at the top of the script.

'''NOE calculation'''
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/noe.py noe.py] Script for calculating NOEs.

'''Test data'''
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/jw_mapping.py jw_mapping.py] Script for reduced spectral density mapping.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/consistency_tests.py consistency_tests.py] Script for consistency testing.

Severe artifacts can be introduced if a model-free analysis is performed on inconsistent multiple magnetic field datasets. The use of simple tests as validation tools for the consistency assessment can help avoid such problems and extract more reliable information from spin relaxation experiments. In particular, these tests are useful for detecting inconsistencies arising from R2 data. Since such inconsistencies can yield artifactual Rex parameters within a model-free analysis, these tests should be used routinely prior to any analysis such as model-free calculations.

This script allows one to calculate values for the three consistency tests J(0), F_eta and F_R2. Once this is done, a qualitative analysis can be performed by comparing the values obtained at different magnetic fields. Correlation plots and histograms are useful tools for such comparisons, as presented in Morin & Gagne (2009a) J. Biomol. NMR, 45: 361-372.
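As a quick qualitative check, the J(0) values from the two fields can be compared in a correlation plot. Below is a minimal sketch, assuming the J(0) values have already been written out as two whitespace-separated files with a residue number column and a value column; the file names J0_600.dat and J0_750.dat are hypothetical placeholders.

<source lang="python">
# Minimal sketch: correlation plot of J(0) obtained at two magnetic fields.
# The input files are hypothetical: two columns, residue number and J(0) value.
import numpy as np
import matplotlib.pyplot as plt

j0_600 = np.loadtxt("J0_600.dat")
j0_750 = np.loadtxt("J0_750.dat")

plt.scatter(j0_600[:, 1], j0_750[:, 1], s=10)

# A y = x line makes systematic deviations between the two fields easy to spot.
lims = [min(j0_600[:, 1].min(), j0_750[:, 1].min()),
        max(j0_600[:, 1].max(), j0_750[:, 1].max())]
plt.plot(lims, lims, "k--", label="y = x")

plt.xlabel("J(0) at 600 MHz")
plt.ylabel("J(0) at 750 MHz")
plt.legend()
plt.savefig("J0_correlation.png")
</source>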
  
'''Other representations'''
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/angles.py angles.py] Script for calculating the protein NH bond vector angles with respect to the diffusion tensor.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/xh_vector_dist.py xh_vector_dist.py] Script for creating a PDB representation of the distribution of XH bond vectors.
* [https://github.com/nmr-relax/relax/blob/master/sample_scripts/diff_tensor_pdb.py diff_tensor_pdb.py] Script for creating a PDB representation of the Brownian rotational diffusion tensor.

= Scripts =

To get the protocol to work, we need to:

* Load a PDB structure
* Assign the "data structure" in relax through spin assignments
* Assign the necessary "information", such as the isotope information, to each spin assignment
* Read the "R1, R2 and NOE" data for the different magnetic field strengths
* Calculate some properties
* Check the data
* Run the protocol

To work most efficiently, it is important to perform each step one by one, and to closely inspect the log for any errors.

== 01_read_pdb.py - Test load of PDB ==

First we just want to test reading the PDB file.

'''01_read_pdb.py'''
{| class="mw-collapsible mw-collapsed wikitable"
! See file content
|-
|
<source lang="python">
# Python module imports.
from time import asctime, localtime
import os

# relax module imports.
from auto_analyses.dauvergne_protocol import dAuvergne_protocol

# Set up the data pipe.
#######################

# The following sequence of user function calls can be changed as needed.

# Create the data pipe.
bundle_name = "mf (%s)" % asctime(localtime())
name = "origin"
pipe.create(name, 'mf', bundle=bundle_name)

# Load the PDB file.
structure.read_pdb('energy_1.pdb', set_mol_name='TEMP', read_model=1)

# Set up the 15N and 1H spins (both backbone and Trp indole sidechains).
structure.load_spins('@N', ave_pos=True)
structure.load_spins('@NE1', ave_pos=True)
structure.load_spins('@H', ave_pos=True)
structure.load_spins('@HE1', ave_pos=True)

# Assign isotopes
spin.isotope('15N', spin_id='@N*')
spin.isotope('1H', spin_id='@H*')
</source>
|}

Run with
<source lang="bash">
relax 01_read_pdb.py -t 01_read_pdb.log
</source>
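When running these test scripts, it is important to closely inspect the log for any errors. A small helper like the one below can flag the most common problems quickly. This is a minimal sketch and not part of relax; it simply scans the saved log file (whatever name was passed to the -t option) for relax warning and error lines.

<source lang="python">
# check_log.py - minimal sketch: scan a relax log file for warnings and errors.
import sys

# Default to the log written above; pass another log file as the first argument.
LOG_FILE = sys.argv[1] if len(sys.argv) > 1 else "01_read_pdb.log"

with open(LOG_FILE) as handle:
    for line_number, line in enumerate(handle, start=1):
        if "RelaxWarning" in line or "RelaxError" in line or "Traceback" in line:
            print("%s:%i: %s" % (LOG_FILE, line_number, line.rstrip()))
</source>

Run it as, for example, python check_log.py 01_read_pdb.log.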
= Scripts - Part 2 =

We now try to set things up a little more efficiently.

relax is able to read previous results files, so let us divide the task up into:

* 1: Load the data and save it as a state file. Inspect it in the GUI before running.
* 2: Run Model 1: local_tm.
* 3: Here make 4 scripts (see the sketch after this list). Each of them only depends on Model 1:
** Model 2: sphere
** Model 3: prolate
** Model 4: oblate
** Model 5: ellipsoid
* 4: Make an intermediate 'final' model script. This will automatically detect the files from above.
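For each of the scripts in point 3, the call to the automated analysis only differs in the diffusion model. A minimal sketch of one such script is given below; it is run through relax like the other scripts in this tutorial. The keyword names follow the bundled sample script sample_scripts/model_free/dauvergne_protocol.py, so check the dAuvergne_protocol documentation of your relax version and adjust the values to your own data before use.

<source lang="python">
# Minimal sketch of one per-diffusion-model script, here for the sphere model.
# Run it after the Model 1 (local_tm) round has finished, so its results can be found.
from time import asctime, localtime

# relax module imports.
from auto_analyses.dauvergne_protocol import dAuvergne_protocol

# The diffusion model for this script: change to 'prolate', 'oblate' or 'ellipsoid' in the other three scripts.
DIFF_MODEL = 'sphere'

# Analysis variables, as defined elsewhere in this tutorial.
MF_MODELS = ['m0', 'm1', 'm2', 'm3', 'm4', 'm5', 'm6', 'm7', 'm8', 'm9']
LOCAL_TM_MODELS = ['tm0', 'tm1', 'tm2', 'tm3', 'tm4', 'tm5', 'tm6', 'tm7', 'tm8', 'tm9']
GRID_INC = 11
MIN_ALGOR = 'newton'
MC_NUM = 20

# Create the data pipe.
bundle_name = "mf (%s)" % asctime(localtime())
name = "origin"
pipe.create(name, 'mf', bundle=bundle_name)

# ... load the structure, spins, relaxation data, interatomic interactions and CSA here,
# exactly as in the 01_read_pdb.py and 02_read_data.py test scripts of this tutorial ...

# Execute the automated dauvergne_protocol analysis for this diffusion model only.
dAuvergne_protocol(pipe_name=name, pipe_bundle=bundle_name, diff_model=DIFF_MODEL, mf_models=MF_MODELS, local_tm_models=LOCAL_TM_MODELS, grid_inc=GRID_INC, min_algor=MIN_ALGOR, mc_sim_num=MC_NUM)
</source>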
== Prepare data ==

We make a new folder and try.

{| class="mw-collapsible mw-collapsed wikitable"
! See commands
|-
|
<source lang="bash">
mkdir 20171010_model_free_2_HADDOCK
cp 20171010_model_free/*.dat 20171010_model_free_2_HADDOCK
cp 20171010_model_free/*.pdb 20171010_model_free_2_HADDOCK

# Get scripts
cd 20171010_model_free_2_HADDOCK
git init
git remote add origin git@github.com:tlinnet/relax_modelfree_scripts.git
git fetch
git checkout -t origin/master
</source>
|}
  
And a new one, changing the NOE error.
 
{| class="mw-collapsible mw-collapsed wikitable"
 
{| class="mw-collapsible mw-collapsed wikitable"
! Output from logfile
+
! See commands
 
|-
 
|-
 
|
 
|
 
<source lang="bash">
 
<source lang="bash">
script = '01_read_pdb.py'
+
mkdir 20171010_model_free_3_HADDOCK
----------------------------------------------------------------------------------------------------
+
cp 20171010_model_free/*.dat 20171010_model_free_3_HADDOCK
# Python module imports.
+
cp 20171010_model_free/*.pdb 20171010_model_free_3_HADDOCK
from time import asctime, localtime
 
import os
 
 
 
# relax module imports.
 
from auto_analyses.dauvergne_protocol import dAuvergne_protocol
 
 
 
# Set up the data pipe.
 
#######################
 
 
 
# The following sequence of user function calls can be changed as needed.
 
 
 
# Create the data pipe.
 
bundle_name = "mf (%s)" % asctime(localtime())
 
name = "origin"
 
pipe.create(name, 'mf', bundle=bundle_name)
 
 
 
# Load the PDB file.
 
structure.read_pdb('energy_1.pdb', set_mol_name='TEMP', read_model=1)
 
 
 
# Set up the 15N and 1H spins (both backbone and Trp indole sidechains).
 
structure.load_spins('@N', ave_pos=True)
 
structure.load_spins('@NE1', ave_pos=True)
 
structure.load_spins('@H', ave_pos=True)
 
structure.load_spins('@HE1', ave_pos=True)
 
 
 
# Assign isotopes
 
spin.isotope('15N', spin_id='@N*')
 
spin.isotope('1H', spin_id='@H*')
 
 
 
----------------------------------------------------------------------------------------------------
 
 
 
relax> pipe.create(pipe_name='origin', pipe_type='mf', bundle='mf (Fri Oct 13 17:44:18 2017)')
 
 
 
relax> structure.read_pdb(file='energy_1.pdb', dir=None, read_mol=None, set_mol_name='TEMP', read_model=1, set_model_num=None, alt_loc=None, verbosity=1, merge=False)
 
 
 
Internal relax PDB parser.
 
Opening the file 'energy_1.pdb' for reading.
 
RelaxWarning: Cannot determine the element associated with atom 'X'.
 
RelaxWarning: Cannot determine the element associated with atom 'Z'.
 
RelaxWarning: Cannot determine the element associated with atom 'OO'.
 
RelaxWarning: Cannot determine the element associated with atom 'OO2'.
 
Adding molecule 'TEMP' to model 1 (from the original molecule number 1 of model 1).
 
 
 
relax> structure.load_spins(spin_id='@N', from_mols=None, mol_name_target=None, ave_pos=True, spin_num=True)
 
Adding the following spins to the relax data store.
 
 
 
# mol_name    res_num    res_name    spin_num    spin_name   
 
REMOVED FROM DISPLAY
 
 
 
relax> structure.load_spins(spin_id='@NE1', from_mols=None, mol_name_target=None, ave_pos=True, spin_num=True)
 
Adding the following spins to the relax data store.
 
 
 
# mol_name    res_num    res_name    spin_num    spin_name   
 
REMOVED FROM DISPLAY
 
 
 
relax> structure.load_spins(spin_id='@H', from_mols=None, mol_name_target=None, ave_pos=True, spin_num=True)
 
Adding the following spins to the relax data store.
 
 
 
# mol_name    res_num    res_name    spin_num    spin_name   
 
REMOVED FROM DISPLAY
 
  
relax> structure.load_spins(spin_id='@HE1', from_mols=None, mol_name_target=None, ave_pos=True, spin_num=True)
+
# Get scripts
Adding the following spins to the relax data store.
+
cd 20171010_model_free_3_HADDOCK
 
+
git init
# mol_name    res_num    res_name    spin_num    spin_name   
+
git remote add origin git@github.com:tlinnet/relax_modelfree_scripts.git
REMOVED FROM DISPLAY
+
git fetch
 
+
git checkout -t origin/master
relax> spin.isotope(isotope='15N', spin_id='@N*', force=False)
 
 
 
relax> spin.isotope(isotope='1H', spin_id='@H*', force=False)
 
  
 +
# Change NOE error
 +
sed -i 's/0.1*$/0.05/' NOE_600MHz_new.dat
 +
sed -i 's/0.1*$/0.05/' NOE_750MHz.dat
 
</source>
|}
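The sed one-liners above overwrite the NOE error column in place. The same edit can be done with a small Python sketch, assuming the whitespace-separated column layout used for the relaxation data files throughout this tutorial, with the error in the last column; the value 0.05 mirrors the sed commands.

<source lang="python">
# Minimal sketch: set the NOE error (the last column) to 0.05 in place.
for file_name in ["NOE_600MHz_new.dat", "NOE_750MHz.dat"]:
    new_lines = []
    with open(file_name) as handle:
        for line in handle:
            fields = line.split()
            # Keep comment and blank lines untouched.
            if not fields or line.startswith("#"):
                new_lines.append(line)
                continue
            fields[-1] = "0.05"
            new_lines.append(" ".join(fields) + "\n")
    with open(file_name, "w") as handle:
        handle.writelines(new_lines)
</source>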
  
== 02_read_data.py - Test load of data ==
That looked to go fine, so let us now try to load the data.

Copy '''01_read_pdb.py''' to '''02_read_data.py''' and add:

And a new one, changing the NOE error, and deselecting the N-terminal.<br>
A consistency test found that this stretch contained outliers.
 
 
{| class="mw-collapsible mw-collapsed wikitable"
 
{| class="mw-collapsible mw-collapsed wikitable"
! See file content
+
! See commands
 
|-
 
|-
 
|
 
|
<source lang="python">
+
<source lang="bash">
# Load the relaxation data.
+
mkdir 20171010_model_free_4_HADDOCK
relax_data.read(ri_id='R1_600',  ri_type='R1',  frq=600.17*1e6, file='R1_600MHz_new_model_free.dat',  mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
+
cp 20171010_model_free/*.dat 20171010_model_free_4_HADDOCK
relax_data.read(ri_id='R2_600',  ri_type='R2',  frq=600.17*1e6, file='R2_600MHz_new_model_free.dat',  mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
+
cp 20171010_model_free/*.pdb 20171010_model_free_4_HADDOCK
relax_data.read(ri_id='NOE_600',  ri_type='NOE',  frq=600.17*1e6, file='NOE_600MHz_new.dat',  mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
+
 
relax_data.read(ri_id='R1_750',  ri_type='R1',  frq=750.06*1e6, file='R1_750MHz_model_free.dat',  mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
+
# Get scripts
relax_data.read(ri_id='R2_750',  ri_type='R2',  frq=750.06*1e6, file='R2_750MHz_model_free.dat',  mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
+
cd 20171010_model_free_4_HADDOCK
relax_data.read(ri_id='NOE_750', ri_type='NOE', frq=750.06*1e6, file='NOE_750MHz.dat', mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
+
git init
 +
git remote add origin git@github.com:tlinnet/relax_modelfree_scripts.git
 +
git fetch
 +
git checkout -t origin/master
  
# Define the magnetic dipole-dipole relaxation interaction.
+
# Change NOE error
interatom.define(spin_id1='@N', spin_id2='@H', direct_bond=True)
+
sed -i 's/0.1*$/0.05/' NOE_600MHz_new.dat
interatom.define(spin_id1='@NE1', spin_id2='@HE1', direct_bond=True)
+
sed -i 's/0.1*$/0.05/' NOE_750MHz.dat
interatom.set_dist(spin_id1='@N*', spin_id2='@H*', ave_dist=1.02 * 1e-10)
 
interatom.unit_vectors()
 
  
# Define the chemical shift relaxation interaction.
+
# Make deselection
value.set(-172 * 1e-6, 'csa', spin_id='@N*')
+
echo "#" > deselect.txt
 +
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t151" >> deselect.txt
 +
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t152" >> deselect.txt
 +
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t153" >> deselect.txt
 +
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t154" >> deselect.txt
 +
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t155" >> deselect.txt
 +
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t156" >> deselect.txt
 +
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t157" >> deselect.txt
 +
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t158" >> deselect.txt
 +
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t159" >> deselect.txt
 
</source>
 
</source>
 
|}
 
|}
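Before reading the relaxation data, it can pay off to check that every data file really has the seven whitespace-separated columns that the relax_data.read calls above expect (mol_name, res_num, res_name, spin_num, spin_name, value, error). A minimal sketch of such a check:

<source lang="python">
# Minimal sketch: verify that each relaxation data file has the expected 7 columns.
import glob

EXPECTED_COLUMNS = 7  # mol_name, res_num, res_name, spin_num, spin_name, value, error

for file_name in sorted(glob.glob("*.dat")):
    with open(file_name) as handle:
        for line_number, line in enumerate(handle, start=1):
            fields = line.split()
            # Skip comment and blank lines.
            if not fields or line.startswith("#"):
                continue
            if len(fields) != EXPECTED_COLUMNS:
                print("%s:%i: expected %i columns, found %i" % (file_name, line_number, EXPECTED_COLUMNS, len(fields)))
</source>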
  
Run with
<source lang="bash">
relax 02_read_data.py -t 02_read_data.log
</source>

And a new one, changing the NOE error, and deselecting the spins found from the consistency test.<br>
 
 
 
 
{| class="mw-collapsible mw-collapsed wikitable"
 
{| class="mw-collapsible mw-collapsed wikitable"
! Output from logfile
+
! See commands
 
|-
 
|-
 
|
 
|
 
<source lang="bash">
 
<source lang="bash">
script = '02_read_data.py'
+
mkdir 20171010_model_free_5_HADDOCK
----------------------------------------------------------------------------------------------------
+
cp 20171010_model_free/*.dat 20171010_model_free_5_HADDOCK
# Python module imports.
+
cp 20171010_model_free/*.pdb 20171010_model_free_5_HADDOCK
from time import asctime, localtime
 
import os
 
 
 
# relax module imports.
 
from auto_analyses.dauvergne_protocol import dAuvergne_protocol
 
 
 
# Set up the data pipe.
 
#######################
 
 
 
# The following sequence of user function calls can be changed as needed.
 
 
 
# Create the data pipe.
 
bundle_name = "mf (%s)" % asctime(localtime())
 
name = "origin"
 
pipe.create(name, 'mf', bundle=bundle_name)
 
 
 
# Load the PDB file.
 
structure.read_pdb('energy_1.pdb', set_mol_name='TEMP', read_model=1)
 
 
 
# Set up the 15N and 1H spins (both backbone and Trp indole sidechains).
 
structure.load_spins('@N', ave_pos=True)
 
structure.load_spins('@NE1', ave_pos=True)
 
structure.load_spins('@H', ave_pos=True)
 
structure.load_spins('@HE1', ave_pos=True)
 
 
 
# Assign isotopes
 
spin.isotope('15N', spin_id='@N*')
 
spin.isotope('1H', spin_id='@H*')
 
 
 
# Load the relaxation data.
 
relax_data.read(ri_id='R1_600',  ri_type='R1',  frq=600.17*1e6, file='R1_600MHz_new_model_free.dat',  mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
 
relax_data.read(ri_id='R2_600',  ri_type='R2',  frq=600.17*1e6, file='R2_600MHz_new_model_free.dat',  mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
 
relax_data.read(ri_id='NOE_600',  ri_type='NOE',  frq=600.17*1e6, file='NOE_600MHz_new.dat',  mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
 
relax_data.read(ri_id='R1_750',  ri_type='R1',  frq=750.06*1e6, file='R1_750MHz_model_free.dat',  mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
 
relax_data.read(ri_id='R2_750',  ri_type='R2',  frq=750.06*1e6, file='R2_750MHz_model_free.dat',  mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
 
relax_data.read(ri_id='NOE_750', ri_type='NOE', frq=750.06*1e6, file='NOE_750MHz.dat', mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7)
 
  
# Define the magnetic dipole-dipole relaxation interaction.
+
# Get scripts
interatom.define(spin_id1='@N', spin_id2='@H', direct_bond=True)
+
cd 20171010_model_free_5_HADDOCK
interatom.define(spin_id1='@NE1', spin_id2='@HE1', direct_bond=True)
+
git init
interatom.set_dist(spin_id1='@N*', spin_id2='@H*', ave_dist=1.02 * 1e-10)
+
git remote add origin git@github.com:tlinnet/relax_modelfree_scripts.git
interatom.unit_vectors()
+
git fetch
 +
git checkout -t origin/master
  
# Define the chemical shift relaxation interaction.
+
# Change NOE error
value.set(-172 * 1e-6, 'csa', spin_id='@N*')
+
sed -i 's/0.1*$/0.05/' NOE_600MHz_new.dat
 +
sed -i 's/0.1*$/0.05/' NOE_750MHz.dat
  
----------------------------------------------------------------------------------------------------
+
# Make deselection
 
+
echo "#" > deselect.txt
relax> pipe.create(pipe_name='origin', pipe_type='mf', bundle='mf (Fri Oct 13 17:51:28 2017)')
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t158" >> deselect.txt
 
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t157" >> deselect.txt
relax> structure.read_pdb(file='energy_1.pdb', dir=None, read_mol=None, set_mol_name='TEMP', read_model=1, set_model_num=None, alt_loc=None, verbosity=1, merge=False)
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t17" >> deselect.txt
 
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t159" >> deselect.txt
Internal relax PDB parser.
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t120" >> deselect.txt
Opening the file 'energy_1.pdb' for reading.
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t59" >> deselect.txt
RelaxWarning: Cannot determine the element associated with atom 'X'.
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t98" >> deselect.txt
RelaxWarning: Cannot determine the element associated with atom 'Z'.
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t49" >> deselect.txt
RelaxWarning: Cannot determine the element associated with atom 'OO'.
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t76" >> deselect.txt
RelaxWarning: Cannot determine the element associated with atom 'OO2'.
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t155" >> deselect.txt
Adding molecule 'TEMP' to model 1 (from the original molecule number 1 of model 1).
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t156" >> deselect.txt
 
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t48" >> deselect.txt
relax> structure.load_spins(spin_id='@N', from_mols=None, mol_name_target=None, ave_pos=True, spin_num=True)
+
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t154" >> deselect.txt
Adding the following spins to the relax data store.
 
 
 
# mol_name    res_num    res_name    spin_num    spin_name   
 
REMOVED FROM DISPLAY
 
 
 
relax> structure.load_spins(spin_id='@NE1', from_mols=None, mol_name_target=None, ave_pos=True, spin_num=True)
 
Adding the following spins to the relax data store.
 
 
 
# mol_name    res_num    res_name    spin_num    spin_name   
 
REMOVED FROM DISPLAY
 
 
 
relax> structure.load_spins(spin_id='@H', from_mols=None, mol_name_target=None, ave_pos=True, spin_num=True)
 
Adding the following spins to the relax data store.
 
 
 
# mol_name    res_num    res_name    spin_num    spin_name   
 
REMOVED FROM DISPLAY
 
 
 
relax> structure.load_spins(spin_id='@HE1', from_mols=None, mol_name_target=None, ave_pos=True, spin_num=True)
 
Adding the following spins to the relax data store.
 
 
 
# mol_name    res_num    res_name    spin_num    spin_name   
 
REMOVED FROM DISPLAY
 
 
 
relax> spin.isotope(isotope='15N', spin_id='@N*', force=False)
 
 
 
relax> spin.isotope(isotope='1H', spin_id='@H*', force=False)
 
 
 
relax> relax_data.read(ri_id='R1_600', ri_type='R1', frq=600170000.0, file='R1_600MHz_new_model_free.dat', dir=None, spin_id_col=None, mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7, sep=None, spin_id=None)
 
Opening the file 'R1_600MHz_new_model_free.dat' for reading.
 
 
 
The following 600.17 MHz R1 relaxation data with the ID 'R1_600' has been loaded into the relax data store:
 
 
 
# Spin_ID          Value      Error     
 
REMOVED FROM DISPLAY   
 
 
 
relax> relax_data.read(ri_id='R2_600', ri_type='R2', frq=600170000.0, file='R2_600MHz_new_model_free.dat', dir=None, spin_id_col=None, mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7, sep=None, spin_id=None)
 
Opening the file 'R2_600MHz_new_model_free.dat' for reading.
 
 
 
The following 600.17 MHz R2 relaxation data with the ID 'R2_600' has been loaded into the relax data store:
 
 
 
# Spin_ID          Value        Error     
 
REMOVED FROM DISPLAY 
 
 
 
relax> relax_data.read(ri_id='NOE_600', ri_type='NOE', frq=600170000.0, file='NOE_600MHz_new.dat', dir=None, spin_id_col=None, mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7, sep=None, spin_id=None)
 
Opening the file 'NOE_600MHz_new.dat' for reading.
 
 
 
The following 600.17 MHz NOE relaxation data with the ID 'NOE_600' has been loaded into the relax data store:
 
 
 
# Spin_ID          Value        Error   
 
REMOVED FROM DISPLAY 
 
 
 
relax> relax_data.read(ri_id='R1_750', ri_type='R1', frq=750060000.0, file='R1_750MHz_model_free.dat', dir=None, spin_id_col=None, mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7, sep=None, spin_id=None)
 
Opening the file 'R1_750MHz_model_free.dat' for reading.
 
 
 
The following 750.06 MHz R1 relaxation data with the ID 'R1_750' has been loaded into the relax data store:
 
 
 
# Spin_ID          Value      Error     
 
REMOVED FROM DISPLAY 
 
 
 
relax> relax_data.read(ri_id='R2_750', ri_type='R2', frq=750060000.0, file='R2_750MHz_model_free.dat', dir=None, spin_id_col=None, mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7, sep=None, spin_id=None)
 
Opening the file 'R2_750MHz_model_free.dat' for reading.
 
 
 
The following 750.06 MHz R2 relaxation data with the ID 'R2_750' has been loaded into the relax data store:
 
 
 
# Spin_ID          Value        Error     
 
REMOVED FROM DISPLAY   
 
 
 
relax> relax_data.read(ri_id='NOE_750', ri_type='NOE', frq=750060000.0, file='NOE_750MHz.dat', dir=None, spin_id_col=None, mol_name_col=1, res_num_col=2, res_name_col=3, spin_num_col=4, spin_name_col=5, data_col=6, error_col=7, sep=None, spin_id=None)
 
Opening the file 'NOE_750MHz.dat' for reading.
 
 
 
The following 750.06 MHz NOE relaxation data with the ID 'NOE_750' has been loaded into the relax data store:
 
 
 
# Spin_ID          Value        Error   
 
REMOVED FROM DISPLAY   
 
 
 
relax> interatom.define(spin_id1='@N', spin_id2='@H', direct_bond=True, spin_selection=True, pipe=None)
 
Interatomic interactions are now defined for the following spins:
 
 
 
# Spin_ID_1        Spin_ID_2         
 
'#TEMP:3@N'      '#TEMP:3@H'     
 
'#TEMP:4@N'      '#TEMP:4@H'     
 
'#TEMP:5@N'      '#TEMP:5@H'     
 
'#TEMP:6@N'      '#TEMP:6@H'     
 
'#TEMP:7@N'      '#TEMP:7@H'     
 
'#TEMP:8@N'      '#TEMP:8@H'     
 
'#TEMP:9@N'      '#TEMP:9@H'     
 
'#TEMP:10@N'    '#TEMP:10@H'   
 
'#TEMP:11@N'    '#TEMP:11@H'   
 
'#TEMP:13@N'    '#TEMP:13@H'   
 
'#TEMP:14@N'    '#TEMP:14@H'   
 
'#TEMP:15@N'    '#TEMP:15@H'   
 
'#TEMP:16@N'    '#TEMP:16@H'   
 
'#TEMP:17@N'    '#TEMP:17@H'   
 
'#TEMP:18@N'    '#TEMP:18@H'   
 
'#TEMP:19@N'    '#TEMP:19@H'   
 
'#TEMP:20@N'    '#TEMP:20@H'   
 
'#TEMP:21@N'    '#TEMP:21@H'   
 
'#TEMP:22@N'    '#TEMP:22@H'   
 
'#TEMP:23@N'    '#TEMP:23@H'   
 
'#TEMP:24@N'    '#TEMP:24@H'   
 
'#TEMP:25@N'    '#TEMP:25@H'   
 
'#TEMP:26@N'    '#TEMP:26@H'   
 
'#TEMP:27@N'    '#TEMP:27@H'   
 
'#TEMP:28@N'    '#TEMP:28@H'   
 
'#TEMP:29@N'    '#TEMP:29@H'   
 
'#TEMP:30@N'    '#TEMP:30@H'   
 
'#TEMP:31@N'    '#TEMP:31@H'   
 
'#TEMP:32@N'    '#TEMP:32@H'   
 
'#TEMP:33@N'    '#TEMP:33@H'   
 
'#TEMP:34@N'    '#TEMP:34@H'   
 
'#TEMP:35@N'    '#TEMP:35@H'   
 
'#TEMP:36@N'    '#TEMP:36@H'   
 
'#TEMP:37@N'    '#TEMP:37@H'   
 
'#TEMP:38@N'    '#TEMP:38@H'   
 
'#TEMP:39@N'    '#TEMP:39@H'   
 
'#TEMP:40@N'    '#TEMP:40@H'   
 
'#TEMP:41@N'    '#TEMP:41@H'   
 
'#TEMP:42@N'    '#TEMP:42@H'   
 
'#TEMP:43@N'    '#TEMP:43@H'   
 
'#TEMP:45@N'    '#TEMP:45@H'   
 
'#TEMP:46@N'    '#TEMP:46@H'   
 
'#TEMP:47@N'    '#TEMP:47@H'   
 
'#TEMP:48@N'    '#TEMP:48@H'   
 
'#TEMP:49@N'    '#TEMP:49@H'   
 
'#TEMP:50@N'    '#TEMP:50@H'   
 
'#TEMP:51@N'    '#TEMP:51@H'   
 
'#TEMP:52@N'    '#TEMP:52@H'   
 
'#TEMP:53@N'    '#TEMP:53@H'   
 
'#TEMP:54@N'    '#TEMP:54@H'   
 
'#TEMP:55@N'    '#TEMP:55@H'   
 
'#TEMP:56@N'    '#TEMP:56@H'   
 
'#TEMP:57@N'    '#TEMP:57@H'   
 
'#TEMP:58@N'    '#TEMP:58@H'   
 
'#TEMP:59@N'    '#TEMP:59@H'   
 
'#TEMP:60@N'    '#TEMP:60@H'   
 
'#TEMP:61@N'    '#TEMP:61@H'   
 
'#TEMP:62@N'    '#TEMP:62@H'   
 
'#TEMP:63@N'    '#TEMP:63@H'   
 
'#TEMP:64@N'    '#TEMP:64@H'   
 
'#TEMP:65@N'    '#TEMP:65@H'   
 
'#TEMP:66@N'    '#TEMP:66@H'   
 
'#TEMP:67@N'    '#TEMP:67@H'   
 
'#TEMP:68@N'    '#TEMP:68@H'   
 
'#TEMP:69@N'    '#TEMP:69@H'   
 
'#TEMP:70@N'    '#TEMP:70@H'   
 
'#TEMP:71@N'    '#TEMP:71@H'   
 
'#TEMP:72@N'    '#TEMP:72@H'   
 
'#TEMP:73@N'    '#TEMP:73@H'   
 
'#TEMP:74@N'    '#TEMP:74@H'   
 
'#TEMP:75@N'    '#TEMP:75@H'   
 
'#TEMP:76@N'    '#TEMP:76@H'   
 
'#TEMP:77@N'    '#TEMP:77@H'   
 
'#TEMP:78@N'    '#TEMP:78@H'   
 
'#TEMP:79@N'    '#TEMP:79@H'   
 
'#TEMP:80@N'    '#TEMP:80@H'   
 
'#TEMP:81@N'    '#TEMP:81@H'   
 
'#TEMP:82@N'    '#TEMP:82@H'   
 
'#TEMP:83@N'    '#TEMP:83@H'   
 
'#TEMP:84@N'    '#TEMP:84@H'   
 
'#TEMP:85@N'    '#TEMP:85@H'   
 
'#TEMP:87@N'    '#TEMP:87@H'   
 
'#TEMP:88@N'    '#TEMP:88@H'   
 
'#TEMP:89@N'    '#TEMP:89@H'   
 
'#TEMP:90@N'    '#TEMP:90@H'   
 
'#TEMP:91@N'    '#TEMP:91@H'   
 
'#TEMP:93@N'    '#TEMP:93@H'   
 
'#TEMP:94@N'    '#TEMP:94@H'   
 
'#TEMP:95@N'    '#TEMP:95@H'   
 
'#TEMP:96@N'    '#TEMP:96@H'   
 
'#TEMP:97@N'    '#TEMP:97@H'   
 
'#TEMP:98@N'    '#TEMP:98@H'   
 
'#TEMP:99@N'    '#TEMP:99@H'   
 
'#TEMP:100@N'    '#TEMP:100@H'   
 
'#TEMP:101@N'    '#TEMP:101@H'   
 
'#TEMP:102@N'    '#TEMP:102@H'   
 
'#TEMP:103@N'    '#TEMP:103@H'   
 
'#TEMP:104@N'    '#TEMP:104@H'   
 
'#TEMP:105@N'    '#TEMP:105@H'   
 
'#TEMP:106@N'    '#TEMP:106@H'   
 
'#TEMP:107@N'    '#TEMP:107@H'   
 
'#TEMP:108@N'    '#TEMP:108@H'   
 
'#TEMP:109@N'    '#TEMP:109@H'   
 
'#TEMP:110@N'    '#TEMP:110@H'   
 
'#TEMP:111@N'    '#TEMP:111@H'   
 
'#TEMP:112@N'    '#TEMP:112@H'   
 
'#TEMP:113@N'    '#TEMP:113@H'   
 
'#TEMP:114@N'    '#TEMP:114@H'   
 
'#TEMP:115@N'    '#TEMP:115@H'   
 
'#TEMP:116@N'    '#TEMP:116@H'   
 
'#TEMP:117@N'    '#TEMP:117@H'   
 
'#TEMP:118@N'    '#TEMP:118@H'   
 
'#TEMP:119@N'    '#TEMP:119@H'   
 
'#TEMP:120@N'    '#TEMP:120@H'   
 
'#TEMP:121@N'    '#TEMP:121@H'   
 
'#TEMP:122@N'    '#TEMP:122@H'   
 
'#TEMP:123@N'    '#TEMP:123@H'   
 
'#TEMP:124@N'    '#TEMP:124@H'   
 
'#TEMP:125@N'    '#TEMP:125@H'   
 
'#TEMP:127@N'    '#TEMP:127@H'   
 
'#TEMP:128@N'    '#TEMP:128@H'   
 
'#TEMP:129@N'    '#TEMP:129@H'   
 
'#TEMP:130@N'    '#TEMP:130@H'   
 
'#TEMP:131@N'    '#TEMP:131@H'   
 
'#TEMP:132@N'    '#TEMP:132@H'   
 
'#TEMP:133@N'    '#TEMP:133@H'   
 
'#TEMP:134@N'    '#TEMP:134@H'   
 
'#TEMP:136@N'    '#TEMP:136@H'   
 
'#TEMP:138@N'    '#TEMP:138@H'   
 
'#TEMP:139@N'    '#TEMP:139@H'   
 
'#TEMP:140@N'    '#TEMP:140@H'   
 
'#TEMP:141@N'    '#TEMP:141@H'   
 
'#TEMP:142@N'    '#TEMP:142@H'   
 
'#TEMP:143@N'    '#TEMP:143@H'   
 
'#TEMP:144@N'    '#TEMP:144@H'   
 
'#TEMP:145@N'    '#TEMP:145@H'   
 
'#TEMP:146@N'    '#TEMP:146@H'   
 
'#TEMP:147@N'    '#TEMP:147@H'   
 
'#TEMP:148@N'    '#TEMP:148@H'   
 
'#TEMP:149@N'    '#TEMP:149@H'   
 
'#TEMP:150@N'    '#TEMP:150@H'   
 
'#TEMP:151@N'    '#TEMP:151@H'   
 
'#TEMP:152@N'    '#TEMP:152@H'   
 
'#TEMP:153@N'    '#TEMP:153@H'   
 
'#TEMP:154@N'    '#TEMP:154@H'   
 
'#TEMP:155@N'    '#TEMP:155@H'   
 
'#TEMP:156@N'    '#TEMP:156@H'   
 
'#TEMP:157@N'    '#TEMP:157@H'   
 
'#TEMP:158@N'    '#TEMP:158@H'   
 
'#TEMP:159@N'    '#TEMP:159@H'   
 
 
 
relax> interatom.define(spin_id1='@NE1', spin_id2='@HE1', direct_bond=True, spin_selection=True, pipe=None)
 
Interatomic interactions are now defined for the following spins:
 
 
 
# Spin_ID_1          Spin_ID_2           
 
'#TEMP:33@NE1'    '#TEMP:33@HE1'   
 
'#TEMP:48@NE1'    '#TEMP:48@HE1'   
 
'#TEMP:49@NE1'    '#TEMP:49@HE1'   
 
'#TEMP:59@NE1'    '#TEMP:59@HE1'   
 
'#TEMP:98@NE1'    '#TEMP:98@HE1'   
 
 
 
relax> interatom.set_dist(spin_id1='@N*', spin_id2='@H*', ave_dist=1.0200000000000001e-10, unit='meter')
 
The following averaged distances have been set:
 
 
 
# Spin_ID_1          Spin_ID_2            Ave_distance(meters)     
 
'#TEMP:3@N'      '#TEMP:3@H'      1.0200000000000001e-10   
 
'#TEMP:4@N'      '#TEMP:4@H'      1.0200000000000001e-10   
 
'#TEMP:5@N'      '#TEMP:5@H'      1.0200000000000001e-10   
 
'#TEMP:6@N'      '#TEMP:6@H'      1.0200000000000001e-10   
 
'#TEMP:7@N'      '#TEMP:7@H'      1.0200000000000001e-10   
 
'#TEMP:8@N'      '#TEMP:8@H'      1.0200000000000001e-10   
 
'#TEMP:9@N'      '#TEMP:9@H'      1.0200000000000001e-10   
 
'#TEMP:10@N'      '#TEMP:10@H'      1.0200000000000001e-10   
 
'#TEMP:11@N'      '#TEMP:11@H'      1.0200000000000001e-10   
 
'#TEMP:13@N'      '#TEMP:13@H'      1.0200000000000001e-10   
 
'#TEMP:14@N'      '#TEMP:14@H'      1.0200000000000001e-10   
 
'#TEMP:15@N'      '#TEMP:15@H'      1.0200000000000001e-10   
 
'#TEMP:16@N'      '#TEMP:16@H'      1.0200000000000001e-10   
 
'#TEMP:17@N'      '#TEMP:17@H'      1.0200000000000001e-10   
 
'#TEMP:18@N'      '#TEMP:18@H'      1.0200000000000001e-10   
 
'#TEMP:19@N'      '#TEMP:19@H'      1.0200000000000001e-10   
 
'#TEMP:20@N'      '#TEMP:20@H'      1.0200000000000001e-10   
 
'#TEMP:21@N'      '#TEMP:21@H'      1.0200000000000001e-10   
 
'#TEMP:22@N'      '#TEMP:22@H'      1.0200000000000001e-10   
 
'#TEMP:23@N'      '#TEMP:23@H'      1.0200000000000001e-10   
 
'#TEMP:24@N'      '#TEMP:24@H'      1.0200000000000001e-10   
 
'#TEMP:25@N'      '#TEMP:25@H'      1.0200000000000001e-10   
 
'#TEMP:26@N'      '#TEMP:26@H'      1.0200000000000001e-10   
 
'#TEMP:27@N'      '#TEMP:27@H'      1.0200000000000001e-10   
 
'#TEMP:28@N'      '#TEMP:28@H'      1.0200000000000001e-10   
 
'#TEMP:29@N'      '#TEMP:29@H'      1.0200000000000001e-10   
 
'#TEMP:30@N'      '#TEMP:30@H'      1.0200000000000001e-10   
 
'#TEMP:31@N'      '#TEMP:31@H'      1.0200000000000001e-10   
 
'#TEMP:32@N'      '#TEMP:32@H'      1.0200000000000001e-10   
 
'#TEMP:33@N'      '#TEMP:33@H'      1.0200000000000001e-10   
 
'#TEMP:34@N'      '#TEMP:34@H'      1.0200000000000001e-10   
 
'#TEMP:35@N'      '#TEMP:35@H'      1.0200000000000001e-10   
 
'#TEMP:36@N'      '#TEMP:36@H'      1.0200000000000001e-10   
 
'#TEMP:37@N'      '#TEMP:37@H'      1.0200000000000001e-10   
 
'#TEMP:38@N'      '#TEMP:38@H'      1.0200000000000001e-10   
 
'#TEMP:39@N'      '#TEMP:39@H'      1.0200000000000001e-10   
 
'#TEMP:40@N'      '#TEMP:40@H'      1.0200000000000001e-10   
 
'#TEMP:41@N'      '#TEMP:41@H'      1.0200000000000001e-10   
 
'#TEMP:42@N'      '#TEMP:42@H'      1.0200000000000001e-10   
 
'#TEMP:43@N'      '#TEMP:43@H'      1.0200000000000001e-10   
 
'#TEMP:45@N'      '#TEMP:45@H'      1.0200000000000001e-10   
 
'#TEMP:46@N'      '#TEMP:46@H'      1.0200000000000001e-10   
 
'#TEMP:47@N'      '#TEMP:47@H'      1.0200000000000001e-10   
 
'#TEMP:48@N'      '#TEMP:48@H'      1.0200000000000001e-10   
 
'#TEMP:49@N'      '#TEMP:49@H'      1.0200000000000001e-10   
 
'#TEMP:50@N'      '#TEMP:50@H'      1.0200000000000001e-10   
 
'#TEMP:51@N'      '#TEMP:51@H'      1.0200000000000001e-10   
 
'#TEMP:52@N'      '#TEMP:52@H'      1.0200000000000001e-10   
 
'#TEMP:53@N'      '#TEMP:53@H'      1.0200000000000001e-10   
 
'#TEMP:54@N'      '#TEMP:54@H'      1.0200000000000001e-10   
 
'#TEMP:55@N'      '#TEMP:55@H'      1.0200000000000001e-10   
 
'#TEMP:56@N'      '#TEMP:56@H'      1.0200000000000001e-10   
 
'#TEMP:57@N'      '#TEMP:57@H'      1.0200000000000001e-10   
 
'#TEMP:58@N'      '#TEMP:58@H'      1.0200000000000001e-10   
 
'#TEMP:59@N'      '#TEMP:59@H'      1.0200000000000001e-10   
 
'#TEMP:60@N'      '#TEMP:60@H'      1.0200000000000001e-10   
 
'#TEMP:61@N'      '#TEMP:61@H'      1.0200000000000001e-10   
 
'#TEMP:62@N'      '#TEMP:62@H'      1.0200000000000001e-10   
 
'#TEMP:63@N'      '#TEMP:63@H'      1.0200000000000001e-10   
 
'#TEMP:64@N'      '#TEMP:64@H'      1.0200000000000001e-10   
 
'#TEMP:65@N'      '#TEMP:65@H'      1.0200000000000001e-10   
 
'#TEMP:66@N'      '#TEMP:66@H'      1.0200000000000001e-10   
 
'#TEMP:67@N'      '#TEMP:67@H'      1.0200000000000001e-10   
 
'#TEMP:68@N'      '#TEMP:68@H'      1.0200000000000001e-10   
 
'#TEMP:69@N'      '#TEMP:69@H'      1.0200000000000001e-10   
 
'#TEMP:70@N'      '#TEMP:70@H'      1.0200000000000001e-10   
 
'#TEMP:71@N'      '#TEMP:71@H'      1.0200000000000001e-10   
 
'#TEMP:72@N'      '#TEMP:72@H'      1.0200000000000001e-10   
 
'#TEMP:73@N'      '#TEMP:73@H'      1.0200000000000001e-10   
 
'#TEMP:74@N'      '#TEMP:74@H'      1.0200000000000001e-10   
 
'#TEMP:75@N'      '#TEMP:75@H'      1.0200000000000001e-10   
 
'#TEMP:76@N'      '#TEMP:76@H'      1.0200000000000001e-10   
 
'#TEMP:77@N'      '#TEMP:77@H'      1.0200000000000001e-10   
 
'#TEMP:78@N'      '#TEMP:78@H'      1.0200000000000001e-10   
 
'#TEMP:79@N'      '#TEMP:79@H'      1.0200000000000001e-10   
 
'#TEMP:80@N'      '#TEMP:80@H'      1.0200000000000001e-10   
 
'#TEMP:81@N'      '#TEMP:81@H'      1.0200000000000001e-10   
 
'#TEMP:82@N'      '#TEMP:82@H'      1.0200000000000001e-10   
 
'#TEMP:83@N'      '#TEMP:83@H'      1.0200000000000001e-10   
 
'#TEMP:84@N'      '#TEMP:84@H'      1.0200000000000001e-10   
 
'#TEMP:85@N'      '#TEMP:85@H'      1.0200000000000001e-10   
 
'#TEMP:87@N'      '#TEMP:87@H'      1.0200000000000001e-10   
 
'#TEMP:88@N'      '#TEMP:88@H'      1.0200000000000001e-10   
 
'#TEMP:89@N'      '#TEMP:89@H'      1.0200000000000001e-10   
 
'#TEMP:90@N'      '#TEMP:90@H'      1.0200000000000001e-10   
 
'#TEMP:91@N'      '#TEMP:91@H'      1.0200000000000001e-10   
 
'#TEMP:93@N'      '#TEMP:93@H'      1.0200000000000001e-10   
 
'#TEMP:94@N'      '#TEMP:94@H'      1.0200000000000001e-10   
 
'#TEMP:95@N'      '#TEMP:95@H'      1.0200000000000001e-10   
 
'#TEMP:96@N'      '#TEMP:96@H'      1.0200000000000001e-10   
 
'#TEMP:97@N'      '#TEMP:97@H'      1.0200000000000001e-10   
 
'#TEMP:98@N'      '#TEMP:98@H'      1.0200000000000001e-10   
 
'#TEMP:99@N'      '#TEMP:99@H'      1.0200000000000001e-10   
 
'#TEMP:100@N'    '#TEMP:100@H'    1.0200000000000001e-10   
 
'#TEMP:101@N'    '#TEMP:101@H'    1.0200000000000001e-10   
 
'#TEMP:102@N'    '#TEMP:102@H'    1.0200000000000001e-10   
 
'#TEMP:103@N'    '#TEMP:103@H'    1.0200000000000001e-10   
 
'#TEMP:104@N'    '#TEMP:104@H'    1.0200000000000001e-10   
 
'#TEMP:105@N'    '#TEMP:105@H'    1.0200000000000001e-10   
 
'#TEMP:106@N'    '#TEMP:106@H'    1.0200000000000001e-10   
 
'#TEMP:107@N'    '#TEMP:107@H'    1.0200000000000001e-10   
 
'#TEMP:108@N'    '#TEMP:108@H'    1.0200000000000001e-10   
 
'#TEMP:109@N'    '#TEMP:109@H'    1.0200000000000001e-10   
 
'#TEMP:110@N'    '#TEMP:110@H'    1.0200000000000001e-10   
 
'#TEMP:111@N'    '#TEMP:111@H'    1.0200000000000001e-10   
 
'#TEMP:112@N'    '#TEMP:112@H'    1.0200000000000001e-10   
 
'#TEMP:113@N'    '#TEMP:113@H'    1.0200000000000001e-10   
 
'#TEMP:114@N'    '#TEMP:114@H'    1.0200000000000001e-10   
 
'#TEMP:115@N'    '#TEMP:115@H'    1.0200000000000001e-10   
 
'#TEMP:116@N'    '#TEMP:116@H'    1.0200000000000001e-10   
 
'#TEMP:117@N'    '#TEMP:117@H'    1.0200000000000001e-10   
 
'#TEMP:118@N'    '#TEMP:118@H'    1.0200000000000001e-10   
 
'#TEMP:119@N'    '#TEMP:119@H'    1.0200000000000001e-10   
 
'#TEMP:120@N'    '#TEMP:120@H'    1.0200000000000001e-10   
 
'#TEMP:121@N'    '#TEMP:121@H'    1.0200000000000001e-10   
 
'#TEMP:122@N'    '#TEMP:122@H'    1.0200000000000001e-10   
 
'#TEMP:123@N'    '#TEMP:123@H'    1.0200000000000001e-10   
 
'#TEMP:124@N'    '#TEMP:124@H'    1.0200000000000001e-10   
 
'#TEMP:125@N'    '#TEMP:125@H'    1.0200000000000001e-10   
 
'#TEMP:127@N'    '#TEMP:127@H'    1.0200000000000001e-10   
 
'#TEMP:128@N'    '#TEMP:128@H'    1.0200000000000001e-10   
 
'#TEMP:129@N'    '#TEMP:129@H'    1.0200000000000001e-10   
 
'#TEMP:130@N'    '#TEMP:130@H'    1.0200000000000001e-10   
 
'#TEMP:131@N'    '#TEMP:131@H'    1.0200000000000001e-10   
 
'#TEMP:132@N'    '#TEMP:132@H'    1.0200000000000001e-10   
 
'#TEMP:133@N'    '#TEMP:133@H'    1.0200000000000001e-10   
 
'#TEMP:134@N'    '#TEMP:134@H'    1.0200000000000001e-10   
 
'#TEMP:136@N'    '#TEMP:136@H'    1.0200000000000001e-10   
 
'#TEMP:138@N'    '#TEMP:138@H'    1.0200000000000001e-10   
 
'#TEMP:139@N'    '#TEMP:139@H'    1.0200000000000001e-10   
 
'#TEMP:140@N'    '#TEMP:140@H'    1.0200000000000001e-10   
 
'#TEMP:141@N'    '#TEMP:141@H'    1.0200000000000001e-10   
 
'#TEMP:142@N'    '#TEMP:142@H'    1.0200000000000001e-10   
 
'#TEMP:143@N'    '#TEMP:143@H'    1.0200000000000001e-10   
 
'#TEMP:144@N'    '#TEMP:144@H'    1.0200000000000001e-10   
 
'#TEMP:145@N'    '#TEMP:145@H'    1.0200000000000001e-10   
 
'#TEMP:146@N'    '#TEMP:146@H'    1.0200000000000001e-10   
 
'#TEMP:147@N'    '#TEMP:147@H'    1.0200000000000001e-10   
 
'#TEMP:148@N'    '#TEMP:148@H'    1.0200000000000001e-10   
 
'#TEMP:149@N'    '#TEMP:149@H'    1.0200000000000001e-10   
 
'#TEMP:150@N'    '#TEMP:150@H'    1.0200000000000001e-10   
 
'#TEMP:151@N'    '#TEMP:151@H'    1.0200000000000001e-10   
 
'#TEMP:152@N'    '#TEMP:152@H'    1.0200000000000001e-10   
 
'#TEMP:153@N'    '#TEMP:153@H'    1.0200000000000001e-10   
 
'#TEMP:154@N'    '#TEMP:154@H'    1.0200000000000001e-10   
 
'#TEMP:155@N'    '#TEMP:155@H'    1.0200000000000001e-10   
 
'#TEMP:156@N'    '#TEMP:156@H'    1.0200000000000001e-10   
 
'#TEMP:157@N'    '#TEMP:157@H'    1.0200000000000001e-10   
 
'#TEMP:158@N'    '#TEMP:158@H'    1.0200000000000001e-10   
 
'#TEMP:159@N'    '#TEMP:159@H'    1.0200000000000001e-10   
 
'#TEMP:33@NE1'    '#TEMP:33@HE1'    1.0200000000000001e-10   
 
'#TEMP:48@NE1'    '#TEMP:48@HE1'    1.0200000000000001e-10   
 
'#TEMP:49@NE1'    '#TEMP:49@HE1'    1.0200000000000001e-10   
 
'#TEMP:59@NE1'    '#TEMP:59@HE1'    1.0200000000000001e-10   
 
'#TEMP:98@NE1'    '#TEMP:98@HE1'    1.0200000000000001e-10   
 
 
 
relax> interatom.unit_vectors(ave=True)
 
Averaging all vectors.
 
Calculated 1 N-H unit vector between the spins '#TEMP:3@N' and '#TEMP:3@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:4@N' and '#TEMP:4@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:5@N' and '#TEMP:5@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:6@N' and '#TEMP:6@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:7@N' and '#TEMP:7@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:8@N' and '#TEMP:8@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:9@N' and '#TEMP:9@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:10@N' and '#TEMP:10@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:11@N' and '#TEMP:11@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:13@N' and '#TEMP:13@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:14@N' and '#TEMP:14@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:15@N' and '#TEMP:15@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:16@N' and '#TEMP:16@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:17@N' and '#TEMP:17@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:18@N' and '#TEMP:18@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:19@N' and '#TEMP:19@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:20@N' and '#TEMP:20@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:21@N' and '#TEMP:21@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:22@N' and '#TEMP:22@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:23@N' and '#TEMP:23@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:24@N' and '#TEMP:24@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:25@N' and '#TEMP:25@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:26@N' and '#TEMP:26@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:27@N' and '#TEMP:27@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:28@N' and '#TEMP:28@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:29@N' and '#TEMP:29@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:30@N' and '#TEMP:30@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:31@N' and '#TEMP:31@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:32@N' and '#TEMP:32@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:33@N' and '#TEMP:33@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:34@N' and '#TEMP:34@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:35@N' and '#TEMP:35@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:36@N' and '#TEMP:36@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:37@N' and '#TEMP:37@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:38@N' and '#TEMP:38@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:39@N' and '#TEMP:39@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:40@N' and '#TEMP:40@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:41@N' and '#TEMP:41@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:42@N' and '#TEMP:42@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:43@N' and '#TEMP:43@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:45@N' and '#TEMP:45@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:46@N' and '#TEMP:46@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:47@N' and '#TEMP:47@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:48@N' and '#TEMP:48@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:49@N' and '#TEMP:49@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:50@N' and '#TEMP:50@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:51@N' and '#TEMP:51@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:52@N' and '#TEMP:52@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:53@N' and '#TEMP:53@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:54@N' and '#TEMP:54@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:55@N' and '#TEMP:55@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:56@N' and '#TEMP:56@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:57@N' and '#TEMP:57@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:58@N' and '#TEMP:58@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:59@N' and '#TEMP:59@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:60@N' and '#TEMP:60@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:61@N' and '#TEMP:61@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:62@N' and '#TEMP:62@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:63@N' and '#TEMP:63@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:64@N' and '#TEMP:64@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:65@N' and '#TEMP:65@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:66@N' and '#TEMP:66@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:67@N' and '#TEMP:67@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:68@N' and '#TEMP:68@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:69@N' and '#TEMP:69@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:70@N' and '#TEMP:70@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:71@N' and '#TEMP:71@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:72@N' and '#TEMP:72@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:73@N' and '#TEMP:73@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:74@N' and '#TEMP:74@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:75@N' and '#TEMP:75@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:76@N' and '#TEMP:76@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:77@N' and '#TEMP:77@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:78@N' and '#TEMP:78@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:79@N' and '#TEMP:79@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:80@N' and '#TEMP:80@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:81@N' and '#TEMP:81@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:82@N' and '#TEMP:82@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:83@N' and '#TEMP:83@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:84@N' and '#TEMP:84@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:85@N' and '#TEMP:85@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:87@N' and '#TEMP:87@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:88@N' and '#TEMP:88@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:89@N' and '#TEMP:89@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:90@N' and '#TEMP:90@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:91@N' and '#TEMP:91@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:93@N' and '#TEMP:93@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:94@N' and '#TEMP:94@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:95@N' and '#TEMP:95@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:96@N' and '#TEMP:96@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:97@N' and '#TEMP:97@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:98@N' and '#TEMP:98@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:99@N' and '#TEMP:99@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:100@N' and '#TEMP:100@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:101@N' and '#TEMP:101@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:102@N' and '#TEMP:102@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:103@N' and '#TEMP:103@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:104@N' and '#TEMP:104@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:105@N' and '#TEMP:105@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:106@N' and '#TEMP:106@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:107@N' and '#TEMP:107@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:108@N' and '#TEMP:108@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:109@N' and '#TEMP:109@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:110@N' and '#TEMP:110@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:111@N' and '#TEMP:111@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:112@N' and '#TEMP:112@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:113@N' and '#TEMP:113@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:114@N' and '#TEMP:114@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:115@N' and '#TEMP:115@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:116@N' and '#TEMP:116@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:117@N' and '#TEMP:117@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:118@N' and '#TEMP:118@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:119@N' and '#TEMP:119@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:120@N' and '#TEMP:120@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:121@N' and '#TEMP:121@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:122@N' and '#TEMP:122@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:123@N' and '#TEMP:123@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:124@N' and '#TEMP:124@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:125@N' and '#TEMP:125@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:127@N' and '#TEMP:127@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:128@N' and '#TEMP:128@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:129@N' and '#TEMP:129@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:130@N' and '#TEMP:130@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:131@N' and '#TEMP:131@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:132@N' and '#TEMP:132@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:133@N' and '#TEMP:133@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:134@N' and '#TEMP:134@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:136@N' and '#TEMP:136@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:138@N' and '#TEMP:138@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:139@N' and '#TEMP:139@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:140@N' and '#TEMP:140@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:141@N' and '#TEMP:141@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:142@N' and '#TEMP:142@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:143@N' and '#TEMP:143@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:144@N' and '#TEMP:144@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:145@N' and '#TEMP:145@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:146@N' and '#TEMP:146@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:147@N' and '#TEMP:147@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:148@N' and '#TEMP:148@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:149@N' and '#TEMP:149@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:150@N' and '#TEMP:150@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:151@N' and '#TEMP:151@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:152@N' and '#TEMP:152@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:153@N' and '#TEMP:153@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:154@N' and '#TEMP:154@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:155@N' and '#TEMP:155@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:156@N' and '#TEMP:156@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:157@N' and '#TEMP:157@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:158@N' and '#TEMP:158@H'.
 
Calculated 1 N-H unit vector between the spins '#TEMP:159@N' and '#TEMP:159@H'.
 
Calculated 1 NE1-HE1 unit vector between the spins '#TEMP:33@NE1' and '#TEMP:33@HE1'.
 
Calculated 1 NE1-HE1 unit vector between the spins '#TEMP:48@NE1' and '#TEMP:48@HE1'.
 
Calculated 1 NE1-HE1 unit vector between the spins '#TEMP:49@NE1' and '#TEMP:49@HE1'.
 
Calculated 1 NE1-HE1 unit vector between the spins '#TEMP:59@NE1' and '#TEMP:59@HE1'.
 
Calculated 1 NE1-HE1 unit vector between the spins '#TEMP:98@NE1' and '#TEMP:98@HE1'.
 
 
 
relax> value.set(val=-0.00017199999999999998, param='csa', index=0, spin_id='@N*', error=False, force=True)
 
 
</source>

|}
  
== 03_save_state_inspect_GUI.py - Inspect data in GUI ==
The GUI can be a good place to inspect the setup and files.
 
 
 
Copy '''02_read_data.py''' to '''03_save_state_inspect_GUI.py''' and add:
 
 
{| class="mw-collapsible mw-collapsed wikitable"
! See file content
|-
|
<source lang="python">
# Analysis variables.
#####################
# The model-free models.  Do not change these unless absolutely necessary, the protocol is likely to fail if these are changed.
MF_MODELS = ['m0', 'm1', 'm2', 'm3', 'm4', 'm5', 'm6', 'm7', 'm8', 'm9']
#MF_MODELS = ['m1', 'm2']
LOCAL_TM_MODELS = ['tm0', 'tm1', 'tm2', 'tm3', 'tm4', 'tm5', 'tm6', 'tm7', 'tm8', 'tm9']

# The grid search size (the number of increments per dimension).
GRID_INC = 11

# The optimisation technique. Standard is: min_algor='newton' : and cannot be changed in the GUI.
MIN_ALGOR = 'newton'

# The number of Monte Carlo simulations to be used for error analysis at the end of the analysis.
#MC_NUM = 500
MC_NUM = 20

# The diffusion model. Standard is 'Fully automated', which means: DIFF_MODEL=['local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', 'final']
# 'local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', or 'final'
#DIFF_MODEL = 'local_tm'
DIFF_MODEL = ['local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', 'final']

# The maximum number of iterations for the global iteration. Set to None, then the algorithm iterates until convergence.
MAX_ITER = None

# Automatic looping over all rounds until convergence (must be a boolean value of True or False). Standard is: conv_loop=True : and cannot be changed in the GUI.
CONV_LOOP = True

# Change some minimise opt params.
# This goes into: minimise.execute(self.min_algor, func_tol=self.opt_func_tol, max_iter=self.opt_max_iterations)
#####################
#dAuvergne_protocol.opt_func_tol = 1e-5 # Standard:  opt_func_tol = 1e-25
#dAuvergne_protocol.opt_max_iterations = 1000 # Standard: opt_max_iterations = int(1e7)
dAuvergne_protocol.opt_func_tol = 1e-10 # Standard:  opt_func_tol = 1e-25
dAuvergne_protocol.opt_max_iterations = int(1e5) # Standard: opt_max_iterations = int(1e7)

#####################################
# The results dir.
 
var = 'result_03'
 
results_dir = os.getcwd() + os.sep + var
 
 
 
# Save the state before running. Open and check in GUI!
 
state.save(state=var+'_ini.bz2', dir=results_dir, force=True)
 
 
 
# To check in GUI
 
# relax -g
 
# File -> Open relax state
 
# In folder "result_03" open "result_03_ini.bz2"
 
# View -> Data pipe editor
 
# Right click on pipe, and select "Associate with a new auto-analysis"
 
</source>
 
|}
 
  
 
Run with
<source lang="bash">
relax 03_save_state_inspect_GUI.py -t 03_save_state_inspect_GUI.log
</source>
 
* relax -g
* File -> Open relax state
* In folder "result_03" open "result_03_ini.bz2"
* View -> Data pipe editor
* Right click on pipe, and select "Associate with a new auto-analysis"
  
== 04_run_default_with_tolerance_lim.py - Try fast run ==
Now we try a fast run, to see if everything is set up correctly.
 
 
Copy '''03_save_state_inspect_GUI.py''' to '''04_run_default_with_tolerance_lim.py''' and modify last lines:
 
 
 
{| class="mw-collapsible mw-collapsed wikitable"
 
! See file content
 
|-
 
|
 
<source lang="python">
 
# The results dir.
 
var = 'result_04'
 
results_dir = os.getcwd() + os.sep + var
 
 
 
# Save the state before running. Open and check in GUI!
 
state.save(state=var+'_ini.bz2', dir=results_dir, force=True)
 
 
 
# To check in GUI
 
# relax -g
 
# File -> Open relax state
 
# In folder "result_04" open "result_04_ini.bz2"
 
# View -> Data pipe editor
 
# Right click on pipe, and select "Associate with a new auto-analysis"
 
 
 
dAuvergne_protocol(pipe_name=name, pipe_bundle=bundle_name, results_dir=results_dir, diff_model=DIFF_MODEL, mf_models=MF_MODELS, local_tm_models=LOCAL_TM_MODELS, grid_inc=GRID_INC, min_algor=MIN_ALGOR, mc_sim_num=MC_NUM, max_iter=MAX_ITER, conv_loop=CONV_LOOP)
 
</source>
 
|}
 
 
 
Before running, it is worth noting which values are NOT set to their default values in the GUI.
 
* dAuvergne_protocol.opt_func_tol = 1e-10 # Standard:  opt_func_tol = 1e-25 
 
* dAuvergne_protocol.opt_max_iterations = int(1e5) # Standard: opt_max_iterations = int(1e7)
 
  
These 2 values are used in the '''minfx''' python package, and are an instruction to the minimiser function to continue changing parameter values
UNTIL either the difference in chi2 values between "2 steps" is less than 1e-10, OR the total number of steps exceeds 10^5.
It is an instruction not to be too pedantic here in the exploration phase. When finalising for publication, these values
should be set to their standard values.
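To make the stopping rule concrete, here is a small toy sketch of the kind of loop that such a tolerance and iteration cap control (plain Python for illustration only, this is not the actual minfx code):
<source lang="python">
# Toy illustration (NOT the minfx code) of how func_tol and max_iterations
# act as stopping criteria for an iterative minimiser.
def minimise_1d(chi2, x0, step=0.01, func_tol=1e-10, max_iterations=int(1e5)):
    x, chi2_old = x0, chi2(x0)
    for i in range(max_iterations):
        # Crude numerical gradient step (a stand-in for the real Newton step).
        grad = (chi2(x + 1e-6) - chi2(x - 1e-6)) / 2e-6
        x -= step * grad
        chi2_new = chi2(x)
        # Stop when the chi2 change between two steps drops below func_tol.
        if abs(chi2_old - chi2_new) < func_tol:
            break
        chi2_old = chi2_new
    return x, chi2_new, i + 1

# The looser the tolerance, the earlier the minimiser stops.
print(minimise_1d(lambda x: (x - 3.0)**2, x0=0.0, func_tol=1e-10))
print(minimise_1d(lambda x: (x - 3.0)**2, x0=0.0, func_tol=1e-25))
</source>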
 
  
* MC_NUM = 20
Number of Monte-Carlo simulations. The protocol will still find optimum parameter values, but error
estimation will not be very reliable. Standard is 500.
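As a reminder of what the Monte Carlo number buys you: error estimation works by re-fitting many noisy copies of the data, so with only 20 simulations the error estimate is itself noisy. A generic toy sketch of the idea (plain Python/numpy, not relax's implementation):
<source lang="python">
# Generic illustration of Monte Carlo error estimation (NOT relax's implementation).
import numpy as np

def fit_rate(times, intensities):
    # Linearised fit of ln(I) = ln(I0) - R*t, returns R (illustration only).
    slope, _ = np.polyfit(times, np.log(intensities), 1)
    return -slope

rng = np.random.default_rng(0)
times = np.array([0.01, 0.05, 0.1, 0.2, 0.4, 0.8])
data = 100.0 * np.exp(-2.0 * times)   # perfect decay with R = 2.0 s^-1
error = 1.0                           # the measured uncertainty per point

R_fit = fit_rate(times, data)

# Monte Carlo error estimate: refit MC_NUM synthetic noisy copies of the data.
MC_NUM = 20
R_sims = [fit_rate(times, data + rng.normal(0.0, error, size=data.size)) for _ in range(MC_NUM)]

# With only 20 simulations the standard deviation (the error estimate) is itself noisy.
print(R_fit, np.std(R_sims))
</source>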
  
We use [http://www.dayid.org/comp/tm.html tmux] to make a terminal session we can get back to
if our own terminal connection gets closed.
 
 
* start a new session: '''tmux'''
 
* re-attach a detached session: '''tmux attach'''
 
 
 
Run with
 
 
<source lang="bash">
# Make terminal-session
tmux

relax 04_run_default_with_tolerance_lim.py -t 04_run_default_with_tolerance_lim.log
</source>
  
You can then in another terminal follow the logfile by
+
== 12_Model_1_I_local_tm.py - Only run local_tm ==
<source lang="bash">
+
Now we only run '''Model 1'''.
less +F 04_run_default_with_tolerance_lim.log
 
</source>
 
  
* To scroll up and down, use keyboard: '''Ctrl+c'''
+
* DIFF_MODEL = ['local_tm']
* To return to follow mode, use keyboard: '''Shift+f'''
+
* GRID_INC = 11 # This is the standard
* To exit, use keyboard: '''Ctrl+c'''    and then: '''q'''
+
* MC_NUM = 0 # This has no influence in Model 1-5
 +
* MAX_ITER = 20 # Stop if it has not converged in 20 rounds
  
== 05_run_def_MC20.py - Try normal run with MC 20 ==
+
Normally between 8 to 15 multiple rounds of optimisation of the are required for the proper execution of this script.<br>
The inspection of the log of the previous run, it seems the '''prolate'''
+
This is can also be see here in Figure 2.
cannot converge. It jumps between 2 chi2 values. <br>
+
* d'Auvergne, E. J. and Gooley, P. R. (2008). [http://dx.doi.org/10.1007/s10858-007-9213-3 Optimisation of NMR dynamic models II. A new methodology for the dual optimisation of the model-free parameters and the Brownian rotational diffusion tensor. J. Biomol. NMR, 40(2), 121-133.]
Maybe it is because of the NOT default values of optimization, to let us set
 
it back to default.
 
  
We have 4 CPU on our lab computers.<br>
+
Relax should stop calculation, if a model does not converge.
So let us assign 1 to a run normal settings, and only MC=20.
 
  
Copy '''04_run_default_with_tolerance_lim.py''' to '''05_run_def_MC20.py'''
+
See content of:
<source lang="bash">
+
[https://github.com/tlinnet/relax_modelfree_scripts/blob/master/12_Model_1_I_local_tm.py 12_Model_1_I_local_tm.py]
cp 04_run_default_with_tolerance_lim.py 05_run_def_MC20.py
 
</source>
 
 
 
and modify last lines:
 
{| class="mw-collapsible mw-collapsed wikitable"
 
! See file content
 
|-
 
|
 
<source lang="python">
 
# The number of Monte Carlo simulations to be used for error analysis at the end of the analysis.
 
#MC_NUM = 500
 
MC_NUM = 20
 
 
 
# The diffusion model. Standard is 'Fully automated', which means: DIFF_MODEL=['local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', 'final']
 
# 'local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', or 'final'
 
#DIFF_MODEL = 'local_tm'
 
DIFF_MODEL = ['local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', 'final']
 
 
 
# The maximum number of iterations for the global iteration.  Set to None, then the algorithm iterates until convergence.
 
MAX_ITER = None
 
 
 
# Automatic looping over all rounds until convergence (must be a boolean value of True or False). Standard is: conv_loop=True : and cannot be changed in the GUI.
 
CONV_LOOP = True
 
 
 
# Change some minimise opt params.
 
# This goes into: minimise.execute(self.min_algor, func_tol=self.opt_func_tol, max_iter=self.opt_max_iterations)
 
#####################
 
#dAuvergne_protocol.opt_func_tol = 1e-5 # Standard:  opt_func_tol = 1e-25 
 
#dAuvergne_protocol.opt_max_iterations = 1000 # Standard: opt_max_iterations = int(1e7)
 
#dAuvergne_protocol.opt_func_tol = 1e-10 # Standard:  opt_func_tol = 1e-25 
 
#dAuvergne_protocol.opt_max_iterations = int(1e5) # Standard: opt_max_iterations = int(1e7)
 
 
 
#####################################
 
 
 
# The results dir.
 
var = 'result_05'
 
results_dir = os.getcwd() + os.sep + var
 
 
 
# Save the state before running. Open and check in GUI!
 
state.save(state=var+'_ini.bz2', dir=results_dir, force=True)
 
 
 
# To check in GUI
 
# relax -g
 
# File -> Open relax state
 
# In folder "result_05" open "result_05_ini.bz2"
 
# View -> Data pipe editor
 
# Right click on pipe, and select "Associate with a new auto-analysis"
 
 
 
dAuvergne_protocol(pipe_name=name, pipe_bundle=bundle_name, results_dir=results_dir, diff_model=DIFF_MODEL, mf_models=MF_MODELS, local_tm_models=LOCAL_TM_MODELS, grid_inc=GRID_INC, min_algor=MIN_ALGOR, mc_sim_num=MC_NUM, max_iter=MAX_ITER, conv_loop=CONV_LOOP)
 
</source>
 
|}
 
 
 
* MC_NUM = 20
 
Number of Monte-Carlo simulations. The protocol will still find optimum parameter values, but error
estimation will not be very reliable. Standard is 500.
 
  
 
We use [http://www.dayid.org/comp/tm.html tmux] to make a terminal session we can get back to
if our own terminal connection gets closed.
 
* start a new session: '''tmux'''
 
* re-attach a detached session: '''tmux attach'''
 
  
 
Run with
<source lang="bash">
# Make terminal-session
tmux

relax 05_run_def_MC20.py -t 05_run_def_MC20.log
</source>
  
 
You can then in another terminal follow the logfile by
<source lang="bash">
less +F 05_run_def_MC20.log
</source>

* To exit, use keyboard: '''Ctrl+c'''    and then: '''q'''
  
== 06_run_def_MC20_MAX_ITER20.py - Try normal run with MC 20 and MAX_ITER 20 ==
It looks like the '''prolate''' has problems converging. <br>
So let us try a run where a maximum of '''20 rounds of convergence''' is accepted. <br>

Normally between 8 and 15 rounds of optimisation are required for the proper execution of this script.<br>
This can also be seen in Figure 2 of:
* d'Auvergne, E. J. and Gooley, P. R. (2008). [http://dx.doi.org/10.1007/s10858-007-9213-3 Optimisation of NMR dynamic models II. A new methodology for the dual optimisation of the model-free parameters and the Brownian rotational diffusion tensor. J. Biomol. NMR, 40(2), 121-133.]
 
 
Then hopefully, relax should continue to the other models, if '''prolate''' does not converge.
 
 
 
We have 4 CPUs on our lab computers.<br>
Let us assign another one to a run with normal settings, only MC=20 and MAX_ITER=20.
 
 
 
Copy '''05_run_def_MC20.py''' to '''06_run_def_MC20_MAX_ITER20.py'''
<source lang="bash">
cp 05_run_def_MC20.py 06_run_def_MC20_MAX_ITER20.py
</source>

and modify last lines:
{| class="mw-collapsible mw-collapsed wikitable"
! See file content
|-
|
<source lang="python">
 
# The number of Monte Carlo simulations to be used for error analysis at the end of the analysis.
 
#MC_NUM = 500
 
MC_NUM = 20
 
  
# The diffusion model. Standard is 'Fully automated', which means: DIFF_MODEL=['local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', 'final']
# 'local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', or 'final'
 
#DIFF_MODEL = 'local_tm'
 
DIFF_MODEL = ['local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', 'final']
 
 
 
# The maximum number of iterations for the global iteration.  Set to None, then the algorithm iterates until convergence.
 
MAX_ITER = 20
 
 
 
# Automatic looping over all rounds until convergence (must be a boolean value of True or False). Standard is: conv_loop=True : and cannot be changed in the GUI.
 
CONV_LOOP = True
 
 
 
# Change some minimise opt params.
 
# This goes into: minimise.execute(self.min_algor, func_tol=self.opt_func_tol, max_iter=self.opt_max_iterations)
 
#####################
 
#dAuvergne_protocol.opt_func_tol = 1e-5 # Standard:  opt_func_tol = 1e-25 
 
#dAuvergne_protocol.opt_max_iterations = 1000 # Standard: opt_max_iterations = int(1e7)
 
#dAuvergne_protocol.opt_func_tol = 1e-10 # Standard:  opt_func_tol = 1e-25 
 
#dAuvergne_protocol.opt_max_iterations = int(1e5) # Standard: opt_max_iterations = int(1e7)
 
 
 
#####################################
 
 
 
# The results dir.
 
var = 'result_06'
 
results_dir = os.getcwd() + os.sep + var
 
 
 
# Save the state before running. Open and check in GUI!
 
state.save(state=var+'_ini.bz2', dir=results_dir, force=True)
 
 
 
# To check in GUI
 
# relax -g
 
# File -> Open relax state
 
# In folder "result_06" open "result_06_ini.bz2"
 
# View -> Data pipe editor
 
# Right click on pipe, and select "Associate with a new auto-analysis"
 
 
 
dAuvergne_protocol(pipe_name=name, pipe_bundle=bundle_name, results_dir=results_dir, diff_model=DIFF_MODEL, mf_models=MF_MODELS, local_tm_models=LOCAL_TM_MODELS, grid_inc=GRID_INC, min_algor=MIN_ALGOR, mc_sim_num=MC_NUM, max_iter=MAX_ITER, conv_loop=CONV_LOOP)
 
 
</source>
 
|}
 
  
We use [http://www.dayid.org/comp/tm.html tmux] to make a terminal session we can get back to
if our own terminal connection gets closed.
 
 
 
* start a new session: '''tmux new -s relax06'''
 
* re-attach a detached session: '''tmux a -t relax06'''
 
 
 
Run with
 
 
<source lang="bash">
# Make terminal-session
tmux new -s relax06

relax 06_run_def_MC20_MAX_ITER20.py -t 06_run_def_MC20_MAX_ITER20.log
</source>
  
=== 06_check_intermediate.py - Inspection of 06 run ===
After running for around 12 hours, it is in round '''14''' of the '''prolate'''.

Let us try '''finalize''' on just the currently available data!

Make a '''06_check_intermediate.py''' file, with this content. We just want to finish and see some results, therefore the number of Monte-Carlo simulations is also set to a minimum.
 
 
MC_NUM = 5
 
{| class="mw-collapsible mw-collapsed wikitable"
 
! See file content
 
|-
 
|
 
<source lang="python">
 
# Python module imports.
 
import os, stat
 
 
# relax module imports.
 
from auto_analyses.dauvergne_protocol import dAuvergne_protocol
 
from pipe_control import pipes
 
import lib.io
 
import lib.plotting.grace
 
 
# Analysis variables.
 
#####################
 
# The number of Monte Carlo simulations to be used for error analysis at the end of the analysis.
 
MC_NUM = 5
 
# The diffusion model. Standard is 'Fully automated', which means: DIFF_MODEL=['local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', 'final']
 
# 'local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', or 'final'
 
#DIFF_MODEL = ['local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid', 'final']
 
DIFF_MODEL = ['final']
 
 
# Read the state with the setup
 
# The results dir.
 
var = 'result_06'
 
results_dir = os.getcwd() + os.sep + var
 
# Load the state with setup data.
 
state.load(state=var+'_ini.bz2', dir=results_dir, force=True)
 
 
# Define write out
 
out = 'result_06_check_intermediate'
 
write_results_dir = os.getcwd() + os.sep + out
 
 
# Read the pipe info
 
pipe.display()
 
pipe_name = pipes.cdp_name()
 
pipe_bundle = pipes.get_bundle(pipe_name)
 
 
# Run protocol
 
dAuvergne_protocol(pipe_name=pipe_name, pipe_bundle=pipe_bundle,
 
  results_dir=results_dir,
 
  write_results_dir=write_results_dir,
 
  diff_model=DIFF_MODEL,
 
  mc_sim_num=MC_NUM)
 
 
 
# Write a python "grace to PNG/EPS/SVG..." conversion script.
 
# Open the file for writing.
 
file_name = "grace2images.py"
 
write_results_dir_grace = write_results_dir + os.sep + 'final' + os.sep + 'grace'
 
file_path = lib.io.get_file_path(file_name, write_results_dir_grace)
 
file = lib.io.open_write_file(file_path, force=True)
 
# Write the file.
 
lib.plotting.grace.script_grace2images(file=file)
 
file.close()
 
os.chmod(file_path, stat.S_IRWXU|stat.S_IRGRP|stat.S_IROTH)
 
 
</source>
 
|}
 
  
Run with the following. This should take 20-30 min on 1 CPU.
 
<source lang="bash">
# Make terminal-session
tmux new -s relax06_check

# First delete old data
rm -rf result_06_check_intermediate
relax 06_check_intermediate.py -t 06_check_intermediate.log
</source>
  
=== 06_check_intermediate_spin_info.py - Spin info ===
We would like to extract more info from the spin_containers in the final run.

Make a '''06_check_intermediate_spin_info.py''' file, with this content.
 
 
{| class="mw-collapsible mw-collapsed wikitable"
 
! See file content
 
|-
 
|
 
<source lang="python">
 
# Python module imports.
 
import os
 
  
# relax module imports.
from pipe_control import pipes
import lib.io
from pipe_control.mol_res_spin import spin_loop

# Read the state with the setup
var = 'result_06_check_intermediate'
 
results_dir = os.getcwd() + os.sep + var + os.sep + 'final'
 
# Load the state with setup data.
 
state.load(state='results.bz2', dir=results_dir, force=True)
 
 
 
# Show pipes
 
pipe.display()
 
pipe_name = pipes.cdp_name()
 
pipe_bundle = pipes.get_bundle(pipe_name)
 
 
 
# Get model
 
value.write(param='model', file='model.txt', dir=results_dir, force=True)
 
# Get equation
 
value.write(param='equation', file='equation.txt', dir=results_dir, force=True)
 
 
 
# Inspect manually
 
out_results = []
 
i=0
 
for c_s, c_s_mol, c_s_resi, c_s_resn, c_s_id in spin_loop(full_info=True, return_id=True, skip_desel=True):
 
    # See what we can extract from the spin container
 
    if i == 0:
 
        print dir(c_s)
 
 
 
    # First convert to string
 
    c_s_resi = str(c_s_resi)
 
    # Append
 
    out_results.append([c_s_mol, c_s_resi, c_s_resn, c_s.element, c_s_id, c_s.model, c_s.equation])
 
    # Print
 
    print("mol: %s, resi: %s, resn: %s, element: %s, id: %s, model: %s, equation: %s" % tuple(out_results[-1]) )
 
    i += 1
 
 
 
# Write file
 
file_name = "results_collected_spin_info.txt"
 
file_path = lib.io.get_file_path(file_name, results_dir)
 
file = lib.io.open_write_file(file_path, force=True)
 
 
 
# Write the file.
 
headings = ["mol", "resi", "resn", "element", "id", "model", "equation"]
 
lib.io.write_data(out=file, headings=headings, data=out_results)
 
file.close()
 
 
</source>
 
|}
 
 
Run with relax
 
<source lang="bash">
 
relax 06_check_intermediate_spin_info.py
 
</source>
 
 
=== 06_check_intermediate_iteration_chi2.py - Per iteration get chi2 ===
 
Specifically, since we have problems with convergence, we would like to see the chi2
value per iteration for the different models. This is not so easy to get, so we have
to make a script that loads each result file per '''round''' folder and extracts the chi2 value.

This will also get '''k''', the global number of parameters, and '''n''', the global number of data sets.
 
 
Make a '''06_check_intermediate_iteration_chi2.py ''' file, with this content.
 
 
{| class="mw-collapsible mw-collapsed wikitable"
 
! See file content
 
|-
 
|
 
<source lang="python">
 
# Python module imports.
 
import os
 
 
# relax module imports.
 
from pipe_control import pipes
 
import lib.io
 
from specific_analyses.api import return_api
 
 
# Read the state with the setup
 
var = 'result_06_check_intermediate'
 
results_dir = os.getcwd() + os.sep + var + os.sep + 'final'
 
# Load the state with setup data.
 
state.load(state='results.bz2', dir=results_dir, force=True)
 
 
# Show pipes
 
pipe.display()
 
pipe_name = pipes.cdp_name()
 
pipe_bundle = pipes.get_bundle(pipe_name)
 
 
# Define write out
 
write_out = results_dir + os.sep + 'grace'
 
 
# chi2 per iteration? But does not work?
 
grace.write(x_data_type='iter', y_data_type='chi2',  file='iter_chi2.agr', dir=write_out, force=True)
 
 
#############
 
 
# This does not do what we want. So let us try manually.
 
var_ori = 'result_06'
 
results_dir_ori = os.getcwd() + os.sep + var_ori
 
 
dir_list = os.listdir(results_dir_ori)
 
 
all_models = ['local_tm', 'sphere', 'prolate', 'oblate', 'ellipsoid']
 
opt_models = []
 
for model in all_models:
 
    if model in dir_list:
 
        opt_models.append(model)
 
 
# Loop over models MII to MV.
 
out_results = []
 
for model in ['sphere', 'prolate', 'oblate', 'ellipsoid']:
 
    # Skip missing models.
 
    if model not in opt_models:
 
        continue
 
    # Make the model dir
 
    mdir = results_dir_ori + os.sep + model
 
    rdir = [ name for name in os.listdir(mdir) if os.path.isdir(os.path.join(mdir, name)) ]
 
    rdirs = lib.io.sort_filenames(rdir)
 
 
    # Loop over rounds
 
    for rd in rdirs:
 
        if "round_" in rd:
 
            dir_model_round = mdir + os.sep + rd + os.sep + 'opt'
 
            if os.path.isdir(dir_model_round):
 
                # Create pipe to read data
 
                pipe_name_rnd = "%s_%s" % (model, rd)
 
                pipe.create(pipe_name_rnd, 'mf', bundle="temp")
 
                results.read(file='results', dir=dir_model_round)
 
 
                # Get info
 
                round_i = rd.split("_")[-1]
 
                cdp_iter = str(cdp.iter)
 
                chi2 = str(cdp.chi2)
 
                tm = str(cdp.diff_tensor.tm)
 
 
                # Get the api to get number of parameters
 
                api = return_api(pipe_name=pipe_name)
 
                model_loop = api.model_loop
 
                model_desc = api.model_desc
 
                model_statistics = api.model_statistics
 
 
                for model_info in model_loop():
 
                    desc = model_desc(model_info)
 
                    # Num_params_(k)
 
                    # Num_data_sets_(n)
 
                    k_glob, n_glob, chi2_glob = model_statistics(model_info, global_stats=True)
 
                    break
 
 
                k_glob = str(k_glob)
 
                n_glob = str(n_glob)
 
                chi2_glob = str(chi2_glob)
 
 
                # Append to results
 
                out_results.append([pipe_name_rnd, model, round_i, cdp_iter, chi2, tm, k_glob, n_glob, chi2_glob])
 
                print("\n# Data:")
 
                print(out_results[-1])
 
 
# Change back to original pipe
 
pipe.switch(pipe_name)
 
cdp.out_results = out_results
 
  
#print result
for res in out_results:
    print res

# Write file
file_name = "results_collected.txt"
 
file_path = lib.io.get_file_path(file_name, results_dir)
 
file = lib.io.open_write_file(file_path, force=True)
 
  
# Write the file.
headings = ["pipe_name", "model", "round_i", "cdp_iter", "chi2", "tm", "k_glob_Num_params", "n_glob_Num_data_sets", "chi2_glob"]
 
lib.io.write_data(out=file, headings=headings, data=out_results)
 
file.close()
 
 
 
# Save the state
 
state.save(state='results_collected.bz2', dir=results_dir, force=True)
 
</source>
 
|}
 
 
 
Run with relax
 
 
<source lang="bash">
relax 06_check_intermediate_iteration_chi2.py
</source>
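For a quick visual check of convergence, the results_collected.txt table written by the script above can then be plotted directly. A minimal sketch with matplotlib (the file path and the column order follow the script above, adjust if needed):
<source lang="python">
# Plot chi2 per round for each diffusion model from results_collected.txt.
import matplotlib.pyplot as plt

rows = []
with open('result_06_check_intermediate/final/results_collected.txt') as f:
    for line in f:
        parts = line.split()
        if len(parts) < 9:
            continue
        try:
            int(parts[2]); float(parts[4])
        except ValueError:
            continue  # skip the heading line
        rows.append(parts)

# Column order as written above: pipe_name, model, round_i, cdp_iter, chi2, tm, k, n, chi2_glob.
for model in ['sphere', 'prolate', 'oblate', 'ellipsoid']:
    rounds = [int(p[2]) for p in rows if p[1] == model]
    chi2s = [float(p[4]) for p in rows if p[1] == model]
    if rounds:
        plt.plot(rounds, chi2s, 'o-', label=model)

plt.xlabel('round')
plt.ylabel('chi2')
plt.legend()
plt.savefig('chi2_per_round.png')
</source>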
  
=== 06_check_intermediate_pymol.pml - Use pymol commands from inspection of 06 run ===
From the above run of check_intermediate, we can inspect grace images.

We also get some pymol files.<br>
Let us try to use these, to get a feeling for the data.

Make a '''06_check_intermediate_pymol.pml''' file, with this content.
 
 
{| class="mw-collapsible mw-collapsed wikitable"
 
! See file content
 
|-
 
|
 
<source lang="python">
 
# Start settings
 
reinitialize
 
bg_color white
 
set scene_buttons, 1
 
 
 
# Load protein and set name
 
load energy_1.pdb
 
prot='prot'
 
cmd.set_name("energy_1", prot)
 
 
 
# Load tensor pdb
 
load ./result_06_check_intermediate/final/tensor.pdb
 
 
 
#################################
 
# Scene 1 :  Make default view
 
#################################
 
hide everything, prot
 
show_as cartoon, prot
 
zoom prot and polymer
 
 
 
scene F1, store, load of data, view=1
 
 
 
################################
 
# Scenes: We will go through the order like this
 
# 's2', 's2f', 's2s', 'amp_fast', 'amp_slow', 'te', 'tf', 'ts', 'time_fast', 'time_slow', 'rex'
 
# s2: S2, the model-free generalised order parameter (S2 = S2f.S2s).
 
# s2f: S2f, the faster motion model-free generalised order parameter.
 
# s2s: S2s, the slower motion model-free generalised order parameter.
 
# amp_fast:
 
# amp_slow:
 
# te: Single motion effective internal correlation time (seconds).
 
# tf: Faster motion effective internal correlation time (seconds).
 
# ts: Slower motion effective internal correlation time (seconds).
 
# time_fast:
 
# time_slow:
 
# rex: Chemical exchange relaxation (sigma_ex = Rex / omega**2).
 
 
 
#modes = ['s2']
 
#modes = ['s2', 's2f']
 
modes = ['s2', 's2f', 's2s', 'amp_fast', 'amp_slow', 'te', 'tf', 'ts', 'time_fast', 'time_slow', 'rex']
 
fdir = "./result_06_check_intermediate/final/pymol"
 
 
 
python
 
# File placement
 
if True:
 
    for i, mode in enumerate(modes):
 
        # Make name
 
        protn = '%s_%s' % (prot, mode)
 
 
 
        # Loop over file lines
 
        fname = fdir + "/%s.pml"%mode
 
        fname_out = fdir + "/0_mod_%s.pml"%mode
 
        f_out = open(fname_out, "w")
 
        with open(fname) as f:
 
            for line in f:
 
                line_cmd = ""
 
                # Add to end of line, depending on command
 
                if line[0] == "\n":
 
                    line_add = ""
 
                elif line[0:4] == "hide":
 
                    line_add = " %s"%protn
 
 
 
                # All not changed
 
                elif line[0:8] == "bg_color":
 
                    line_add = ""
 
                elif line[0:9] == "set_color":
 
                    line_add = ""
 
                elif line[0:6] == "delete":
 
                    line_add = ""
 
 
 
                else:
 
                    line_add =  " and %s"%protn
 
                # Modify line
 
                line_cmd = line.strip() + line_add + "\n"
 
 
 
                # Write the line
 
                f_out.write(line_cmd)
 
            f_out.close()
 
python end
 
 
 
# Make pymol objects
 
python
 
for i, mode in enumerate(modes):
 
    protn = '%s_%s' % (prot, mode)
 
    cmd.copy(protn, prot)
 
   
 
    cmd.scene("F1")
 
    cmd.disable(prot)
 
    cmd.enable(protn)
 
    cmd.scene("F%i"%(i+2), "store", mode, view=0)
 
python end
 
 
 
#################################
 
# Scenes
 
# #modes = ['s2', 's2f', 's2s', 'amp_fast', 'amp_slow', 'te', 'tf', 'ts', 'time_fast', 'time_slow', 'rex']
 
 
 
scene F2
 
@./result_06_check_intermediate/final/pymol/0_mod_s2.pml
 
scene F2, store, s2: the model-free generalised order parameter (S2 = S2f.S2s), view=0
 
 
 
scene F3
 
@./result_06_check_intermediate/final/pymol/0_mod_s2f.pml
 
scene F3, store, s2f: the faster motion model-free generalised order parameter, view=0
 
 
 
scene F4
 
@./result_06_check_intermediate/final/pymol/0_mod_s2s.pml
 
scene F4, store, s2s: the slower motion model-free generalised order parameter, view=0
 
 
 
scene F5
 
@./result_06_check_intermediate/final/pymol/0_mod_amp_fast.pml
 
scene F5, store, amp_fast, view=0
 
 
 
scene F6
 
@./result_06_check_intermediate/final/pymol/0_mod_amp_slow.pml
 
scene F6, store, amp_slow, view=0
 
 
 
scene F7
 
@./result_06_check_intermediate/final/pymol/0_mod_te.pml
 
scene F7, store, te: Single motion effective internal correlation time (seconds), view=0
 
 
 
scene F8
 
@./result_06_check_intermediate/final/pymol/0_mod_tf.pml
 
scene F8, store, tf: Faster motion effective internal correlation time (seconds), view=0
 
 
 
scene F9
 
@./result_06_check_intermediate/final/pymol/0_mod_ts.pml
 
scene F9, store, ts: Slower motion effective internal correlation time (seconds), view=0
 
 
 
scene F10
 
@./result_06_check_intermediate/final/pymol/0_mod_time_fast.pml
 
scene F10, store, time_fast, view=0
 
 
 
scene F11
 
@./result_06_check_intermediate/final/pymol/0_mod_time_slow.pml
 
scene F11, store, time_slow, view=0
 
 
 
scene F12
 
@./result_06_check_intermediate/final/pymol/0_mod_rex.pml
 
scene F12, store, rex: Chemical exchange relaxation (sigma_ex = Rex / omega**2), view=0
 
</source>
 
|}
 
  
Run with pymol.
<source lang="bash">
pymol 06_check_intermediate_pymol.pml
 
 
# To bug test
 
pymol -c 06_check_intermediate_pymol.pml
 
 
</source>
 
  
=== 06_check_intermediate_convert.py - Create input for other programs ===
Relax can create input files for other programs, to help verify the results. <br>
This is mentioned here:
* d'Auvergne, E. J. and Gooley, P. R. (2008). [http://dx.doi.org/10.1007/s10858-007-9214-2 Optimisation of NMR dynamic models I. Minimisation algorithms and their performance within the model-free and Brownian rotational diffusion spaces. J. Biomol. NMR, 40(2), 107-119.]

There exist several model-free programs for analysis:
* Modelfree (Palmer et al. 1991; Mandel et al. 1995) - the most commonly used program in the literature
* Dasha (Orekhov et al. 1995a) - two local optimisation algorithms are available.
 
* DYNAMICS (Fushman et al. 1997)
 
* Tensor 2 (Blackledge et al. 1998; Cordier et al. 1998; Dosset et al. 2000; Tsan et al. 2000).
 
  
Relax can export output to
* Modelfree4 : User function: palmer.create()
 
* dasha : User function: dasha.create()
 
 
 
Make a '''06_check_intermediate_convert.py ''' file, with this content.
 
{| class="mw-collapsible mw-collapsed wikitable"
 
! See file content
 
|-
 
|
 
<source lang="python">
 
# Python module imports.
 
import os
 
 
 
# relax module imports.
 
 
 
# Read the state with the setup
 
var = 'result_06_check_intermediate'
 
results_dir = os.getcwd() + os.sep + var + os.sep + 'final'
 
# Load the state with setup data.
 
state.load(state='results.bz2', dir=results_dir, force=True)
 
 
 
######
 
#Create the Modelfree4 input files.
 
#####
 
 
 
#Defaults
 
# dir:  The directory to place the files.
 
# force:  A flag which if set to True will cause the results file to be overwritten if it already exists.
 
# binary:  The name of the executable Modelfree program file.
 
# diff_search:  See the Modelfree4 manual for 'diffusion_search'.
 
# sims:  The number of Monte Carlo simulations.
 
# sim_type:  See the Modelfree4 manual.
 
# trim:  See the Modelfree4 manual.
 
# steps:  See the Modelfree4 manual.
 
# constraints:  A flag specifying whether the parameters should be constrained.  The default is to turn constraints on (constraints=True).
 
# heteronuc_type:  A three letter string describing the heteronucleus type, ie '15N', '13C', etc.
 
# atom1:  The symbol of the X heteronucleus in the PDB file.
 
# atom2:  The symbol of the H nucleus in the PDB file.
 
# spin_id:  The spin identification string.
 
 
 
# The following files are created
 
# - 'dir/mfin'
 
# - 'dir/mfdata'
 
# - 'dir/mfpar'
 
# - 'dir/mfmodel'
 
# - 'dir/run.sh'
 
 
 
# The file 'dir/run.sh' contains the single command,
 
# 'modelfree4 -i mfin -d mfdata -p mfpar -m mfmodel -o mfout -e out',
 
 
 
# which can be used to execute modelfree4.
 
# If you would like to use a different Modelfree executable file, change the binary name to the
 
# appropriate file name.  If the file is not located within the environment's path, include the full
 
# path in front of the binary file name.
 
 
 
#palmer.create(dir=None, force=False,
 
#    binary='modelfree4', diff_search='none', sims=0,
 
#    sim_type='pred', trim=0, steps=20,
 
#    constraints=True, heteronuc_type='15N', atom1='N', atom2='H',
 
#    spin_id=None)
 
 
 
# Define write out
 
write_modelfree = os.getcwd() + os.sep + var + os.sep + "Modelfree4"
 
# Fix bug
 
cdp.structure.structural_data[0].mol[0].file_path = '.'
 
 
 
outdir = os.getcwd()
 
palmer.create(dir=write_modelfree, force=True,
 
    binary='modelfree4', diff_search='none', sims=0,
 
    sim_type='pred', trim=0, steps=20,
 
    constraints=True, heteronuc_type='15N', atom1='N', atom2='H',
 
    spin_id=None)
 
   
 
######
 
#Create the Dasha script
 
#####
 
 
 
#Defaults
 
# algor:  The minimisation algorithm.
 
# dir:  The directory to place the files.
 
# force:  A flag which if set to True will cause the results file to be overwritten if it already exists.
 
 
 
# Optimisation algorithms
 
#The two minimisation algorithms within Dasha are accessible through the algorithm which can be set to:
 
# 'LM':  The Levenberg-Marquardt algorithm,
 
# 'NR':  Newton-Raphson algorithm.
 
# For Levenberg-Marquardt minimisation, the function 'lmin' will be called, while for Newton-Raphson,
 
# the function 'min' will be executed.
 
 
 
# dasha.create(algor='LM', dir=None, force=False)
 
 
 
# Define write out
 
out = 'result_06_check_intermediate'
 
write_dasha = os.getcwd() + os.sep + out + os.sep + "Dasha"
 
#dasha.create(algor='LM', dir=write_dasha, force=True)
 
</source>
 
|}
 
 
 
Run with:
 
 
<source lang="bash">
relax 06_check_intermediate_convert.py
</source>
  


Script inspiration

model-free : Script inspiration for setup and analysis

The distribution of relax includes a folder sample_scripts/model_free which contain a folder with scripts for analysis.

It can be seen here: https://github.com/nmr-relax/relax/tree/master/sample_scripts/model_free

Here is the current list

  • back_calculate.py. Back-calculate and save relaxation data starting from a saved model-free results file.
  • bmrb_deposition.py Script for creating a NMR-STAR 3.1 formatted file for BMRB deposition of model-free results.
  • cv.py Script for model-free analysis using cross-validation model selection.
  • dasha.py Script for model-free analysis using the program Dasha.
  • dauvergne_protocol.py Script for black-box model-free analysis.
  • diff_min.py Demonstration script for diffusion tensor optimisation in a model-free analysis.
  • final_data_extraction.py Extract Data to Table
  • generate_ri.py Script for back-calculating the relaxation data.
  • grace_S2_vs_te.py Script for creating a grace plot of the simulated order parameters vs. simulated correlation times.
  • grace_ri_data_correlation.py Script for creating correlations plots of experimental verses back calculated relaxation data.
  • map.py Script for mapping the model-free space for OpenDX visualisation.
  • mf_multimodel.py This script performs a model-free analysis for the models 'm0' to 'm9' (or 'tm0' to 'tm9').
  • modsel.py Script for model-free model selection.
  • molmol_plot.py Script for generating Molmol macros for highlighting model-free motions
  • palmer.py Script for model-free analysis using Art Palmer's program 'Modelfree4'. Download from http://comdnmr.nysbc.org/comd-nmr-dissem/comd-nmr-software
  • remap.py Script for mapping the model-free space.
  • single_model.py This script performs a model-free analysis for the single model 'm4'.
  • table_csv.py Script for converting the model-free results into a CSV table.
  • table_latex.py Script for converting the model-free results into a LaTeX table.

Other script inspiration for checking

The distribution of relax includes a folder sample_scripts/ which contains further scripts for analysis.

It can be seen here: https://github.com/nmr-relax/relax/tree/master/sample_scripts

R1 / R2 Calculation

The resultant plot is useful for finding bad points or bad spectra when fitting exponential curves to determine the R1 and R2 relaxation rates. If the averages deviate systematically from zero, bias in the spectra or in the fitting will be clearly revealed. To use this script, R1 or R2 exponential curve fitting must previously have been carried out and the program state saved to the file 'rx.save' (either with or without the .gz or .bz2 extension). The file name of the saved state can be changed at the top of this script.
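For orientation, the underlying fit in these scripts is a two-parameter exponential decay per spin. A generic illustration with scipy (not the relax code itself; the numbers are made up):

# fit_rx_sketch.py - generic two-parameter exponential fit (not the relax code).
import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, rx):
    return i0 * np.exp(-rx * t)

# Relaxation delays (s) and peak intensities for one spin (made-up numbers).
times = np.array([0.01, 0.05, 0.1, 0.2, 0.4, 0.8])
intensities = np.array([98.0, 90.5, 81.1, 66.0, 44.2, 19.8])

popt, pcov = curve_fit(decay, times, intensities, p0=[100.0, 1.0])
i0, rx = popt
rx_err = np.sqrt(np.diag(pcov))[1]
print("R = %.3f +/- %.3f s^-1" % (rx, rx_err))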

NOE calculation

  • noe.py Script for calculating NOEs.

Test data

Severe artifacts can be introduced if model-free analysis is performed from inconsistent multiple magnetic field datasets. The use of simple tests as validation tools for the consistency assessment can help avoid such problems in order to extract more reliable information from spin relaxation experiments. In particular, these tests are useful for detecting inconsistencies arising from R2 data. Since such inconsistencies can yield artifactual Rex parameters within model-free analysis, these tests should be used routinely prior to any analysis such as model-free calculations. This script will allow one to calculate values for the three consistency tests J(0), F_eta and F_R2. Once this is done, qualitative analysis can be performed by comparing values obtained at different magnetic fields. Correlation plots and histograms are useful tools for such comparisons, as presented in Morin & Gagne (2009a) J. Biomol. NMR, 45: 361-372.
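As a concrete picture of the comparison step, once J(0) values have been written out for each field they can be compared residue by residue. A minimal sketch with numpy/matplotlib (the file names and the two-column format are assumptions, adjust to the actual output files of the consistency-test script):

# j0_compare_sketch.py - compare J(0) values between two fields (file names/format assumed).
import numpy as np
import matplotlib.pyplot as plt

# Assumed format: residue number and J(0) value per line, one file per field.
res, j0_600 = np.loadtxt('j0_600MHz.txt', usecols=(0, 1), unpack=True)
_, j0_750 = np.loadtxt('j0_750MHz.txt', usecols=(0, 1), unpack=True)

# For perfectly consistent data the points should fall on the y = x line.
plt.scatter(j0_600, j0_750, s=10)
lim = [min(j0_600.min(), j0_750.min()), max(j0_600.max(), j0_750.max())]
plt.plot(lim, lim, 'k--')
plt.xlabel('J(0) at 600 MHz')
plt.ylabel('J(0) at 750 MHz')
plt.savefig('J0_consistency.png')

# Residues whose ratio deviates strongly are candidates for deselection.
ratio = j0_750 / j0_600
for r, val in zip(res, ratio):
    if abs(val - np.median(ratio)) > 3 * np.std(ratio):
        print(int(r), val)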

Other representations

  • angles.py Script for calculating the protein NH bond vector angles with respect to the diffusion tensor.
  • xh_vector_dist.py Script for creating a PDB representation of the distribution of XH bond vectors.
  • diff_tensor_pdb.py Script for creating a PDB representation of the Brownian rotational diffusion tensor.

Scripts - Part 2

We now try to set things up a little more efficiently.

Relax is able to read previous results files, so let us divide the task up into:

  • 1: Load the data and save as state file. Inspect in GUI before running.
  • 2: Run the Model 1: local_tm.
  • 3: Here make 4 scripts. Each of them only depends on Model 1:
    • Model 2: sphere
    • Model 3: prolate
    • Model 4: oblate
    • Model 5: ellipsoid
  • 4: Make an intermediate 'final' model script. This will automatically detect files from above.

Prepare data

We make a new folder and try.

See commands
mkdir 20171010_model_free_2_HADDOCK
cp 20171010_model_free/*.dat 20171010_model_free_2_HADDOCK
cp 20171010_model_free/*.pdb 20171010_model_free_2_HADDOCK

# Get scripts
cd 20171010_model_free_2_HADDOCK
git init
git remote add origin git@github.com:tlinnet/relax_modelfree_scripts.git
git fetch
git checkout -t origin/master

And a new one, changing the NOE error

See commands
mkdir 20171010_model_free_3_HADDOCK
cp 20171010_model_free/*.dat 20171010_model_free_3_HADDOCK
cp 20171010_model_free/*.pdb 20171010_model_free_3_HADDOCK

# Get scripts
cd 20171010_model_free_3_HADDOCK
git init
git remote add origin git@github.com:tlinnet/relax_modelfree_scripts.git
git fetch
git checkout -t origin/master

# Change NOE error
sed -i 's/0.1*$/0.05/' NOE_600MHz_new.dat
sed -i 's/0.1*$/0.05/' NOE_750MHz.dat

And a new one, changing the NOE error, and deselecting the N-terminal stretch;
the consistency test found that this stretch contained outliers.

See commands
mkdir 20171010_model_free_4_HADDOCK
cp 20171010_model_free/*.dat 20171010_model_free_4_HADDOCK
cp 20171010_model_free/*.pdb 20171010_model_free_4_HADDOCK

# Get scripts
cd 20171010_model_free_4_HADDOCK
git init
git remote add origin git@github.com:tlinnet/relax_modelfree_scripts.git
git fetch
git checkout -t origin/master

# Change NOE error
sed -i 's/0.1*$/0.05/' NOE_600MHz_new.dat
sed -i 's/0.1*$/0.05/' NOE_750MHz.dat

# Make deselection
echo "#" > deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t151" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t152" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t153" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t154" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t155" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t156" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t157" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t158" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t159" >> deselect.txt
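To double-check that the grep commands picked up exactly the intended residues, the deselection file can be inspected quickly, for example with this small sketch (it assumes the residue number is the second whitespace-separated column, as in the grep patterns above):

# check_deselect_sketch.py - list the residue numbers collected in deselect.txt.
residues = []
with open('deselect.txt') as f:
    for line in f:
        parts = line.split()
        if len(parts) >= 2 and parts[0] == 'ArcCALD':
            residues.append(int(parts[1]))
print(sorted(residues))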

And a new one, changing the NOE error, and deselecting spins found from consistency test.

See commands
mkdir 20171010_model_free_5_HADDOCK
cp 20171010_model_free/*.dat 20171010_model_free_5_HADDOCK
cp 20171010_model_free/*.pdb 20171010_model_free_5_HADDOCK

# Get scripts
cd 20171010_model_free_5_HADDOCK
git init
git remote add origin git@github.com:tlinnet/relax_modelfree_scripts.git
git fetch
git checkout -t origin/master

# Change NOE error
sed -i 's/0.1*$/0.05/' NOE_600MHz_new.dat
sed -i 's/0.1*$/0.05/' NOE_750MHz.dat

# Make deselection
echo "#" > deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t158" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t157" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t17" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t159" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t120" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t59" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t98" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t49" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t76" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t155" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t156" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t48" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t154" >> deselect.txt

And a new one, without changing the NOE error, and deselecting spins found from consistency test.

See commands
mkdir 20171010_model_free_6_HADDOCK
cp 20171010_model_free/*.dat 20171010_model_free_6_HADDOCK
cp 20171010_model_free/*.pdb 20171010_model_free_6_HADDOCK

# Get scripts
cd 20171010_model_free_6_HADDOCK
git init
git remote add origin git@github.com:tlinnet/relax_modelfree_scripts.git
git fetch
git checkout -t origin/master

# Make deselection
echo "#" > deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t158" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t157" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t17" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t159" >> deselect.txt

cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t59" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t98" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t76" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t155" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t156" >> deselect.txt 
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t120" >> deselect.txt

cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t49" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t48" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t154" >> deselect.txt

cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t33" >> deselect.txt
cat R1_600MHz_new_model_free.dat | grep -P "ArcCALD\t67" >> deselect.txt

11_read_data_GUI_inspect.py - Read data GUI inspect

This will read the data and save as a state.

The GUI can be a good place to inspect the setup and files.

See content of: 11_read_data_GUI_inspect.py

Run with

relax 11_read_data_GUI_inspect.py -t 11_read_data_GUI_inspect.log

To check in GUI

  • relax -g
  • File -> Open relax state
  • In folder "result_10" open "result_10_ini.bz2"
  • View -> Data pipe editor
  • Right click on pipe, and select "Associate with a new auto-analysis"

relax 11_test_consistency.py - Consistency test of our data

Before running the analysis, it is wise to run a script for consistency testing.

See here:

  • Morin & Gagne (2009a) Simple tests for the validation of multiple field spin relaxation data. J. Biomol. NMR, 45: 361-372. (http://dx.doi.org/10.1007/s10858-009-9381-4)
Highlights:

  • Comparing results obtained at different magnetic fields should, in the case of perfect consistency and assuming the absence of conformational exchange, yield equal values independently of the magnetic field.
  • avoid the potential extraction of erroneous information as well as the waste of time associated to dissecting inconsistent datasets using numerous long model-free minimisations with different subsets of data.
  • The authors prefer the use of the spectral density at zero frequency J(0) alone since it does not rely on an estimation of the global correlation time tc/tm, neither on a measure of theta, the angle between the 15N–1H vector and the principal axis of the 15N chemical shift tensor. Hence, J(0) is less likely to be affected by incorrect parameterisation of input parameters.

See content of: 11_test_consistency.py

relax 11_test_consistency.py -t 11_test_consistency.py.log

#Afterwards, go into the folder at plot data.
python plot_txt_files.py
./grace2images.py

12_Model_1_I_local_tm.py - Only run local_tm

Now we only run Model 1.

  • DIFF_MODEL = ['local_tm']
  • GRID_INC = 11 # This is the standard
  • MC_NUM = 0 # This has no influence in Model 1-5
  • MAX_ITER = 20 # Stop if it has not converged in 20 rounds
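The real script is linked below; as a sketch, it presumably follows the same pattern as the 04-06 scripts earlier in this tutorial, i.e. load the saved state and call the protocol with only the local_tm diffusion model. It is run through relax like the other scripts, and the state file and directory names here are guesses:

# Sketch only - see the linked 12_Model_1_I_local_tm.py for the real script.
# The state file name and the directory names below are guesses.
import os
from auto_analyses.dauvergne_protocol import dAuvergne_protocol
from pipe_control import pipes

MF_MODELS = ['m0', 'm1', 'm2', 'm3', 'm4', 'm5', 'm6', 'm7', 'm8', 'm9']
LOCAL_TM_MODELS = ['tm0', 'tm1', 'tm2', 'tm3', 'tm4', 'tm5', 'tm6', 'tm7', 'tm8', 'tm9']
GRID_INC = 11
MIN_ALGOR = 'newton'
MC_NUM = 0
DIFF_MODEL = ['local_tm']
MAX_ITER = 20
CONV_LOOP = True

# Load the state saved by the setup script (11_read_data_GUI_inspect.py).
state.load(state='result_10_ini.bz2', dir=os.getcwd() + os.sep + 'result_10')
pipe_name = pipes.cdp_name()
pipe_bundle = pipes.get_bundle(pipe_name)

# Run only the local_tm part of the protocol, writing to its own results dir.
results_dir = os.getcwd() + os.sep + 'result_12'
dAuvergne_protocol(pipe_name=pipe_name, pipe_bundle=pipe_bundle, results_dir=results_dir,
    diff_model=DIFF_MODEL, mf_models=MF_MODELS, local_tm_models=LOCAL_TM_MODELS,
    grid_inc=GRID_INC, min_algor=MIN_ALGOR, mc_sim_num=MC_NUM, max_iter=MAX_ITER, conv_loop=CONV_LOOP)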

Normally between 8 and 15 rounds of optimisation are required for the proper execution of this script.
This can also be seen in Figure 2 of:

  • d'Auvergne, E. J. and Gooley, P. R. (2008). Optimisation of NMR dynamic models II. A new methodology for the dual optimisation of the model-free parameters and the Brownian rotational diffusion tensor. J. Biomol. NMR, 40(2), 121-133. (http://dx.doi.org/10.1007/s10858-007-9213-3)
Relax should stop calculation, if a model does not converge.

See content of: 12_Model_1_I_local_tm.py

We use tmux to make a terminal session we can get back to if our own terminal connection gets closed.

Run with

# Make terminal-session
tmux new -s m1

relax 12_Model_1_I_local_tm.py -t 12_Model_1_I_local_tm.log

# or
tmux new -s m1
mpirun -np 22 relax --multi='mpi4py' 12_Model_1_I_local_tm.py -t 12_Model_1_I_local_tm.log

You can then in another terminal follow the logfile by

less +F 12_Model_1_I_local_tm.log
  • To scroll up and down, use keyboard: Ctrl+c
  • To return to follow mode, use keyboard: Shift+f
  • To exit, use keyboard: Ctrl+c and then: q

13_Model_2-5 - Run Model 2 to 5

When Model 1 is completed, then make 4 terminal windows and run them at the same time.

These scripts do:

  • Read the state file from before with setup
  • Change DIFF_MODEL accordingly

13_Model_2_II_sphere.py

tmux new -s m2
relax 13_Model_2_II_sphere.py -t 13_Model_2_II_sphere.log
# Or
mpirun -np 5 relax --multi='mpi4py' 13_Model_2_II_sphere.py -t 13_Model_2_II_sphere.log

# When relax is running, push: Ctrl+b and then d, to disconnect without exit

13_Model_3_III_prolate.py

tmux new -s m3
relax 13_Model_3_III_prolate.py -t 13_Model_3_III_prolate.log
# Or
mpirun -np 5 relax --multi='mpi4py' 13_Model_3_III_prolate.py -t 13_Model_3_III_prolate.log

13_Model_4_IV_oblate.py

tmux new -s m4
relax 13_Model_4_IV_oblate.py -t 13_Model_4_IV_oblate.log
# Or
mpirun -np 5 relax --multi='mpi4py' 13_Model_4_IV_oblate.py -t 13_Model_4_IV_oblate.log

13_Model_5_V_ellipsoid.py

tmux new -s m5
relax 13_Model_5_V_ellipsoid.py -t 13_Model_5_V_ellipsoid.log
# Or
mpirun -np 5 relax --multi='mpi4py' 13_Model_5_V_ellipsoid.py -t 13_Model_5_V_ellipsoid.log

To join session

# List
tmux list-s

# Join either
tmux a -t m1
tmux a -t m2
tmux a -t m3
tmux a -t m4
tmux a -t m5

14_intermediate_final.py - Inspection during model optimization

While models 2-5 are running, the current results can be inspected with this nifty script.

The script will ask for input of MC numbers. So just run it.

14_intermediate_final.py

tmux new -s final
relax 14_intermediate_final.py -t 14_intermediate_final.log

This does:

  • Option: Collect current best result from Model 2-5, make MC simulations, and finalize to get current results files
    • Make an analysis script for Palmer's Modelfree4 (http://comdnmr.nysbc.org/comd-nmr-dissem/comd-nmr-software)
    • Get more spin information
  • Make a pymol file that collects all of the relax pymol command files into 1 pymol session
  • Option: Collect all chi2 and number of params k, for each iteration per model
    • Make a python plot file for plotting these results

Per iteration get: chi2, k, tm

Afterwards, plot the data.

python results_collected.py

Pymol macro

You also get a pymol folder.

See here for info on how the macros are applied:

  • Summary of parameter meaning and value to pymol visualization (http://www.nmr-relax.com/manual/molmol_macro_apply.html#SECTION081284600000000000000)

Run with

pymol 0_0_apply_all_pymol_commands.pml

To run on Haddock

Have a look here for how to get a standalone Anaconda Python on Linux. Also have a look here for OpenMPI.

# SSH in
ssh haddock

# Test with shell
mpirun -np 6 echo "hello world"

# Test with python
mpirun -np 6 python -m mpi4py helloworld

# Test with relax
mpirun -np 6 relax --multi='mpi4py'
# Look for: Processor fabric:  MPI 2.2 running via mpi4py with 5 slave processors & 1 master.  Using MPICH2 1.4.1.
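If the "python -m mpi4py helloworld" invocation is not available in your mpi4py version, an equivalent minimal test script can be used instead (a sketch; save it e.g. as hello_mpi.py and run it with mpirun):

# hello_mpi.py - minimal mpi4py test, run with: mpirun -np 6 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
print("hello from rank %d of %d" % (comm.Get_rank(), comm.Get_size()))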

Now we run 04_run_default_with_tolerance_lim.py with more power!
We use tmux to make a terminal session we can get back to if our own terminal connection gets closed.

  • start a new session: tmux
  • re-attach a detached session: tmux attach
# Make terminal-session
tmux

# Start relax
mpirun -np 20 relax --multi='mpi4py' 04_run_default_with_tolerance_lim.py -t 04_run_default_with_tolerance_lim.log

Useful commands for the log file

While the analysis is running, these commands could be used to check the logfile for errors

### Check convergence 
# For chi2
cat 04_run_default_with_tolerance_lim.log | grep -A 10 "Chi-squared test:"

# For other tests
cat 04_run_default_with_tolerance_lim.log | grep -A 10 "Identical "
cat 04_run_default_with_tolerance_lim.log | grep -A 10 "Identical model-free models test:"
cat 04_run_default_with_tolerance_lim.log | grep -A 10 "Identical diffusion tensor parameter test:"
cat 04_run_default_with_tolerance_lim.log | grep -A 10 "Identical model-free parameter test:"

# To look for not converged errors
# For chi2
cat 04_run_default_with_tolerance_lim.log | grep -B 7 "The chi-squared value has not converged."

# For other tests
cat 04_run_default_with_tolerance_lim.log | grep -B 7 " have not converged."
cat 04_run_default_with_tolerance_lim.log | grep -B 7 "The model-free models have not converged."
cat 04_run_default_with_tolerance_lim.log | grep -B 7 "The diffusion parameters have not converged."
cat 04_run_default_with_tolerance_lim.log | grep -B 7 "The model-free parameters have not converged."

You can then inspect the logfile with less (see: 10 tips for less)

less 04_run_default_with_tolerance_lim.log

To find a pattern, we have to escape special characters like ( ) [ ] with \

# Search forward
/Value \(iter 14\)
/The chi-squared value has not converged

n or N – jump to the next match (forward) or the previous match (backward)

  • To enter follow mode (as with less +F), use keyboard: Shift+f
  • To exit, use keyboard: Ctrl+c and then: q

rsync files

rsync files after completion to Sauron

When a run has completed, sync the files to the Sauron file server.

Make an rsync_to_sbinlab.sh file with this content

See file content
#!/bin/bash

read -p "Username on sauron :" -r

RUSER=$REPLY
SAURON=10.61.4.60
PROJ=`basename "$PWD"`

FROM=${PWD}
TO=${RUSER}@${SAURON}:/data/sbinlab2/${RUSER}/Downloads

# -a: "archive"- archive mode; equals -rlptgoD (no -H,-A,-X). syncs recursively and preserves symbolic links, special and device files, modification times, group, owner, and permissions.
# We want to remove the -o and -g options:
# -o, --owner                 preserve owner (super-user only)
# -g, --group                 preserve group
# -rlptD : used instead of -a --no-o --no-g
# -z: Compression over network
# -P: It combines the flags --progress and --partial. The first of these gives you a progress bar for the transfers and the second allows you to resume interrupted transfers:
# -h, Output numbers in a more human-readable format.

# -n : dry run. Always double-check your arguments before executing an rsync command.

echo "I will now do a DRY RUN, which does not move files"
read -p "Are you sure? y/n :" -n 1 -r
echo ""

if [[ $REPLY =~ ^[Yy]$ ]]; then
  rsync -rlptDPzh -n ${FROM} ${TO} 
else
  echo "Not doing DRY RUN"
fi

echo ""

echo "I will now do the sync of files"
read -p "Are you sure? y/n :" -n 1 -r
echo ""

if [[ $REPLY =~ ^[Yy]$ ]]; then
  rsync -rlptDPzh ${FROM} ${TO}
else
  echo "Not doing anything"
fi

Make it executable and run

chmod +x rsync_to_sbinlab.sh

#run
./rsync_to_sbinlab.sh

rsync files from BIO to home mac

To inspect the results from a home mac.

Make an rsync_from_bio_to_home.sh file with this content

See file content
#!/bin/bash
 
read -p "Username on bio:" -r
 
RUSER=$REPLY
BIO=ssh-bio.science.ku.dk

#PROJ=Desktop/kaare_relax
PROJ=Desktop/kaare_relax/20171010_model_free_HADDOCK
PROJDIR=`basename "$PROJ"`

FROM=${RUSER}@${BIO}:/home/${RUSER}/${PROJ} 
TO=${PWD}/${PROJDIR}

# -a: "archive"- archive mode; equals -rlptgoD (no -H,-A,-X). syncs recursively and preserves symbolic links, special and device files, modification times, group, owner, and permissions.
# We want to remove the -o and -g options:
# -o, --owner                 preserve owner (super-user only)
# -g, --group                 preserve group
# -rlptD : used instead of -a --no-o --no-g
# -z: Compression over network
# -P: It combines the flags --progress and --partial. The first of these gives you a progress bar for the transfers and the second allows you to resume interrupted transfers:
# -h, Output numbers in a more human-readable format.
 
# -n : dry run. Always double-check your arguments before executing an rsync command.
 
echo "I will now do a DRY RUN, which does not move files"
read -p "Are you sure? y/n :" -n 1 -r
echo ""
 
if [[ $REPLY =~ ^[Yy]$ ]]; then
  rsync -rlptDPzh -n ${FROM} ${TO} 
else
  echo "Not doing DRY RUN"
fi
 
echo ""
 
echo "I will now do the sync of files"
read -p "Are you sure? y/n :" -n 1 -r
echo ""
 
if [[ $REPLY =~ ^[Yy]$ ]]; then
  rsync -rlptDPzh ${FROM} ${TO}
else
  echo "Not doing anything"
fi

Make it executable and run

chmod +x rsync_from_bio_to_home.sh

#run
./rsync_from_bio_to_home.sh

About the protocol

Model I - 'local_tm'
This will optimise the diffusion model whereby all spins of the molecule have a local tm value, i.e. there is no global diffusion tensor. This model needs to be optimised prior to optimising any of the other diffusion models. Each spin is fitted to the multiple model-free models separately, where the parameter tm is included in each model.

Model II - 'sphere'
This will optimise the isotropic diffusion model. Multiple steps are required: an initial optimisation of the diffusion tensor, followed by a repetitive optimisation until convergence of the diffusion tensor. In the relax script UI each of these steps requires this script to be rerun, unless the conv_loop flag is True. In the GUI (graphical user interface), the procedure is repeated automatically until convergence. For the initial optimisation, which will be placed in the directory './sphere/init/', the following steps are used:

  • The model-free models and parameter values for each spin are set to those of diffusion model MI.
  • The local tm parameter is removed from the models.
  • The model-free parameters are fixed and a global spherical diffusion tensor is minimised.

For the repetitive optimisation, each minimisation is named from 'round_1' onwards. The initial 'round_1' optimisation will extract the diffusion tensor from the results file in './sphere/init/', and the results will be placed in the directory './sphere/round_1/'. Each successive round will take the diffusion tensor from the previous round. The following steps are used:

  • The global diffusion tensor is fixed and the multiple model-free models are fitted to each spin.
  • AIC model selection is used to select the models for each spin.
  • All model-free and diffusion parameters are allowed to vary and a global optimisation of all parameters is carried out.

Model III - 'prolate'
The methods used are identical to those of diffusion model MII, except that an axially symmetric diffusion tensor with Da >= 0 is used. The base directory containing all the results is './prolate/'.

Model IV - 'oblate'
The methods used are identical to those of diffusion model MII, except that an axially symmetric diffusion tensor with Da <= 0 is used. The base directory containing all the results is './oblate/'.

Model V - 'ellipsoid'
The methods used are identical to those of diffusion model MII, except that a fully anisotropic diffusion tensor is used (also known as rhombic or asymmetric diffusion). The base directory is './ellipsoid/'.

'final'
Once all the diffusion models have converged, the final run can be executed. This is done by setting the variable diff_model to 'final'. This consists of two steps: diffusion tensor model selection and Monte Carlo simulations. Firstly, AIC model selection is used to select between the diffusion tensor models. Monte Carlo simulations are then run solely on this selected diffusion model. Minimisation of the model is bypassed as it is assumed that the model is already fully optimised (if this is not the case, the final run is not yet appropriate). The final black-box model-free results will be placed in the file 'final/results'.
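
To pull numbers out of the final results afterwards, something along the lines of the sketch below can be run as a small relax script. This is a sketch only: the script name inspect_final.py and the output file name are made up, and it assumes the standard 'final/results' location described above.

# Sketch only (hypothetical file name inspect_final.py); run with: relax inspect_final.py
# Load the final model-free results into a fresh data pipe and write out S2.

# Create an empty model-free data pipe to hold the results.
pipe.create(pipe_name='final_check', pipe_type='mf')

# Read the results file produced by the 'final' run.
results.read(file='results', dir='final')

# Write the S2 order parameters to a text file (other parameters include 'te', 'rex', 'local_tm').
value.write(param='s2', file='s2.txt', force=True)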

See also