__TOC__
 
= Intro =
This tutorial presently covers the [http://svn.gna.org/svn/relax/branches/ relax_disp branch].<br>
This branch is under development. To test it out, you need to use the source code; see [[Installation_linux#Checking_out_a_relax_branch]].
 
This tutorial is based on the analysis of NMR data from the paper:
<blockquote>
== Spectral processing ==
# Now we need to spectral process the spectra.
# Process one of the files normally; the next script will copy the processing script to the other folders (a minimal sketch of such a copy script is given at the end of the next section).
# [m] -> Right-Click Process 2D -> Basic 2D
# Save -> Execute -> Done; then Right-Click File -> Select File -> test.ft2 -> Read/draw -> Done
# If your spectra look reversed (i.e. your peaks do not seem to match your reference spectrum), this can be solved by pushing [m] and adding '| nmrPipe -fn FT -neg \' to the script as the third lowest line.
# Save -> Execute -> Done. Then push [r] to refresh.
# Press [h], find P0 and P1, push [m], change the parameters and update the script.
# The changes to '| nmrPipe -fn PS xxx \' should be made to the FIRST line with PS (the proton dimension).
# Save/execute, push [r] (read) and then [e] (erase settings) to see the result in NMRDraw.
# Then run the next CPMG script.

As suggested in the [[Manual | relax manual]], section '''5.2.2 Spectral processing''', the spectral processing script could look like the one below.<br>
'''NOTE''' only put '''EXT''' in AFTER you are done with phasing, or you will get problems phasing.

File: '''nmrproc.com'''
<source lang="bash">
#!/bin/csh

nmrPipe -in test.fid \
| nmrPipe  -fn SOL \
| nmrPipe  -fn GM -g1 5 -g2 10 -c 1.0 \
| nmrPipe  -fn ZF -auto -size 8000 \
| nmrPipe  -fn FT -auto \
| nmrPipe  -fn PS -p0 214.00 -p1 -21.00 -di -verb \
| nmrPipe  -fn TP \
| nmrPipe  -fn SP -off 0.5 -end 0.98 -pow 2 -c 0.5 \
| nmrPipe  -fn ZF -auto -size 8000 \
| nmrPipe  -fn FT -neg \
| nmrPipe  -fn PS -p0 0.00 -p1 0.00 -di -verb \
| nmrPipe  -fn TP \
| nmrPipe  -fn POLY -auto \
| nmrPipe  -fn EXT -left -sw \
  -ov -out test.ft2
</source>

=== Understand spectral processing ===
To understand the NMRPipe functions, you can look them up in the manual page: http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/ <br>
See also the [http://www.nmr-relax.com/manual/Spectral_processing.html relax online manual for spectral processing].
A good book to look up in is '''Keeler, Understanding NMR Spectroscopy, Second edition'''.

{| class="wikitable sortable" border="1"
|-
! nmrPipe
! Desc.
! Comments
|-
| nmrPipe -fn [http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/sol.html SOL]
| Solvent Filter
|
|-
| nmrPipe -fn [http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/gm.html GM] -g1 5 -g2 10 -c 1.0
| Lorentz-to-Gauss Window, here for the measured direct dimension.
| '''-c 1.0''': the constant c is set to '''1.0''' since the '''P1''' phase correction is different from 0.0 (here '''-p1 -21.00'''); if '''-p1 0.0''', then use '''-c 0.5'''.
|-
| nmrPipe -fn [http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/zf.html ZF] -auto -size 8000
| Zero Fill, here for the measured direct dimension.
| '''-auto''' rounds the final size to a power of 2, so here it is equivalent to '''-size 8192'''.
|-
| nmrPipe -fn [http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/ft.html FT] -auto
| Complex Fourier Transform, here for the measured direct dimension.
|
|-
| nmrPipe -fn [http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/ps.html PS] -p0 214.00 -p1 -21.00 -di -verb
| Phase Correction, here for the measured direct dimension.
|
|-
| nmrPipe -fn TP
| 2D Transpose XY->YX (YTP)
| Transpose the matrix to work in the indirect dimension.
|-
| nmrPipe -fn [http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/sp.html SP] -off 0.5 -end 0.98 -pow 2 -c 0.5
| Adjustable Sine Bell Window. '''-pow 2''' means a sine^2 function. See Keeler p. 93 and p. 98 for the sine window description.
| '''-end 0.98''' means that 2% of the data is cut. '''-c 0.5''' is used since the P1 phasing is 0.0 in the indirect dimension.
|-
| nmrPipe -fn [http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/zf.html ZF] -auto -size 8000
| Zero Fill, here for the indirect dimension.
| '''-auto''' rounds the final size to a power of 2, so here it is equivalent to '''-size 8192'''.
|-
| nmrPipe -fn [http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/ft.html FT] -neg
| Complex Fourier Transform, here for the indirect dimension.
| Negative Fourier transform, since the CPMG element in the pulse sequence makes the magnetization end up negative.
|-
| nmrPipe -fn [http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/ps.html PS] -p0 0.00 -p1 0.00 -di -verb
| Phase Correction, here for the indirect dimension.
| No phase correction needed.
|-
| nmrPipe -fn TP
| 2D Transpose XY->YX (YTP)
| Transpose the matrix back to work in the direct dimension.
|-
| nmrPipe -fn [http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/poly.html POLY] -auto
| Polynomial Subtract for time-domain solvent correction and frequency-domain baseline correction.
|
|-
| nmrPipe -fn [http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/ext.html EXT] -left -sw
| Extract Region. '''NOTE''' only put this in AFTER you are done with phasing, or you will get problems phasing.
| '''-left''' extracts the left half of the sweep width, which has been centered on water.
|}
== Fourier transform all spectra ==
As stated in the [[manual | relax manual]] section '''5.2.1 Temperature control and calibration''', the pulse sequence can put a lot of power into the sample. <br>
You could read these sections in the relax manual: <br>[http://www.nmr-relax.com/manual/Temperature_control_calibration.html Importance of temperature control and calibration]<br>[http://www.nmr-relax.com/manual/relax_data_temp_control.html Temperature control]<br>[http://www.nmr-relax.com/manual/relax_data_temp_calibration.html Temperature calibration]<br>
It is therefore also good practice to inspect for peak movements by overlaying all spectra:
Open all the files, and overlay them with SPARKY command '''ol'''.
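To produce the .ft2 files for all of the planes in the first place, the '''nmrproc.com''' script from above can be copied into each fid directory and executed there. Below is only a minimal sketch of one way to do this; the listing file '''fid_files.ls''' (one test.fid path per line) is an assumption, mirroring the '''ft2_files.ls''' file used in the showApod section below.
<source lang="python">
#!/usr/bin/env python
# Minimal sketch: copy the processing script into each fid directory and run it.
# The listing file 'fid_files.ls' and the script name 'nmrproc.com' are
# assumptions; adapt them to your own directory layout.
import os
import shutil
import subprocess

cwd = os.getcwd()
for line in open('fid_files.ls'):
    fid = line.strip()
    if not fid:
        continue
    dirn = os.path.dirname(fid) or '.'
    # Copy the master processing script and execute it in that directory.
    shutil.copy(os.path.join(cwd, 'nmrproc.com'), dirn)
    subprocess.call(['csh', 'nmrproc.com'], cwd=dirn)
</source>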
== Extract the spectra settings from Varian procpar file ==
Now we want to make a settings file that we can read into relax.
<source lang="bash">
end
cat ncyc.txt
</source>
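One way to pull these values out of the Varian '''procpar''' file is sketched below. In procpar, each parameter is stored as a name line followed by a line holding the number of values and then the values themselves. The parameter names '''ncyc''', '''time_T2''' and '''sfrq''' are assumptions that depend on the pulse sequence, and the RMSD column is appended later, as described in the following sections.
<source lang="python">
#!/usr/bin/env python
# Minimal sketch: pull ncyc, time_T2 and sfrq out of a Varian procpar file and
# write the first four columns of ncyc.txt.  The parameter names are assumptions.

def get_values(procpar, name):
    """Return the list of values for one parameter in a Varian procpar file."""
    lines = open(procpar).readlines()
    for i, line in enumerate(lines):
        if line.split() and line.split()[0] == name:
            # The next line starts with the number of values, then the values.
            return lines[i + 1].split()[1:]
    return []

ncycs = get_values('procpar', 'ncyc')
time_T2 = float(get_values('procpar', 'time_T2')[0])
sfrq = float(get_values('procpar', 'sfrq')[0])

out = open('ncyc.txt', 'w')
for ncyc in ncycs:
    # The CPMG frequency is the number of cycles divided by the constant time T.
    cpmg_frq = float(ncyc) / time_T2 if float(ncyc) > 0 else 0
    out.write("%s %s %s %s\n" % (ncyc, time_T2, cpmg_frq, sfrq))
out.close()
</source>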
 
= Analyse in relax =
== Making a spin file from a SPARKY list ==
relax does not yet have the possibility to read spins from a SPARKY file ([https://gna.org/support/?3044 see support request]).
 
So we create one.
 
<source lang="bash">
# Extract the assignment column (e.g. K4N-HN) from the peak list, skipping the header lines.
set ATOMS=`tail -n+4 peaks_list.tab | awk '{print $7}'`
set SCRIPT=relax_2_spins.py
 
foreach I (`seq 1 ${#ATOMS}`)
# Strip the "N-HN" atom part, then split into residue name and residue number.
set ATOM=${ATOMS[$I]}; set SPIN=`echo $ATOM | sed -e "s/N-HN//g"`; set RESN=`echo $SPIN | sed -e "s/[0-9]*//g"`; set RESI=`echo $SPIN | sed -e "s/[A-Za-z]//g"`
echo $ATOM $SPIN $RESN $RESI
echo "spin.create(spin_name='N', spin_num=$I, res_name='$RESN', res_num=$RESI, mol_name=None)" >> $SCRIPT
end
 
cat $SCRIPT
</source>
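If all goes well, the generated '''relax_2_spins.py''' will contain one spin.create line per peak, along the lines of the following (the residue names and numbers here are purely hypothetical):
<source lang="python">
spin.create(spin_name='N', spin_num=1, res_name='G', res_num=3, mol_name=None)
spin.create(spin_name='N', spin_num=2, res_name='K', res_num=4, mol_name=None)
</source>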
== Measure the background noise "RMSD" in each of the .ft2 files ==
=== RMSD via Sparky ===
There are two ways to get the background RMSD noise: via Sparky, or via NMRPipe's showApod (described in the next subsection).
Repeat for all spectra.
 
We will use '''ncyc.txt''' to make the spectra settings later, and it should look like this.<br>
'''ncyc time_T2 CPMG_frq sfrq RMSD'''
<source lang="text">
28 0.06 466.66666666666666666666 599.8908622 2.47e+03
0 0.06 0 599.8908622 2.34e+03
4 0.06 66.66666666666666666666 599.8908622 2.41e+03
32 0.06 533.33333333333333333333 599.8908622 2.42e+03
60 0.06 1000.00000000000000000000 599.8908622 2.45e+03
2 0.06 33.33333333333333333333 599.8908622 2.42e+03
10 0.06 166.66666666666666666666 599.8908622 2.42e+03
16 0.06 266.66666666666666666666 599.8908622 2.44e+03
8 0.06 133.33333333333333333333 599.8908622 2.39e+03
20 0.06 333.33333333333333333333 599.8908622 2.4e+03
50 0.06 833.33333333333333333333 599.8908622 2.42e+03
18 0.06 300.00000000000000000000 599.8908622 2.46e+03
40 0.06 666.66666666666666666666 599.8908622 2.41e+03
6 0.06 100.00000000000000000000 599.8908622 2.45e+03
12 0.06 200.00000000000000000000 599.8908622 2.45e+03
0 0.06 0 599.8908622 2.39e+03
24 0.06 400.00000000000000000000 599.8908622 2.45e+03
</source>
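Note that the '''CPMG_frq''' column is simply '''ncyc''' divided by the constant time '''time_T2''' (e.g. 28 / 0.06 s ≈ 466.67 Hz), with the reference spectra ('''ncyc''' = 0) given the value 0.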
 
=== RMSD via NMRPipe showApod ===
We can also use the RMSD reported by showApod.
<source lang="bash">
set FIDS=`cat ft2_files.ls`
set OUT=${PWD}/apod_rmsd.txt
set CWD=$PWD
rm $OUT
 
foreach I (`seq 1 ${#FIDS}`)
set FID=${FIDS[$I]}; set DIRN=`dirname $FID`
cd $DIRN
# Grab the noise estimate reported by showApod ("Automated Noise Std Dev in Processed Data").
set apodrmsd=`showApod *.ft2 | grep "REMARK Automated Noise Std Dev in Processed Data:" | awk '{print $9}'`
echo $apodrmsd $DIRN >> $OUT
cd $CWD
end
cat $OUT
mv ncyc.txt ncyc_or.txt
paste ncyc_or.txt $OUT > ncyc.txt
</source>
== Prepare directory for relax run ==
<source lang="bash">
cp ncyc.txt ../relax
cp peaks_list* ../relax
cp relax_2_spins.py ../relax
cd ../relax
</source>
== relax script for setting experiment settings to spectra ==
Add the following relax python script file to the relax directory. It can be modified as wanted.<br>This saves time on the tedious work of setting the experimental conditions for each spectrum.
'''relax_3_spectra_settings.py'''
<source lang="python">
# NOTE: the file-reading preamble is reconstructed; the original only showed the loop body.
ncycfile = open('ncyc.txt', 'r')
i = 0
for line in ncycfile:
    ncyc = line.split()[0]
    time_T2 = float(line.split()[1])
    vcpmg = line.split()[2]
    set_sfrq = float(line.split()[3]) * 1e6
    rmsd_err = float(line.split()[4])
    # The reference spectra (ncyc = 0) have no CPMG frequency: pass None to relax.
    if float(vcpmg) == 0.0:
        vcpmg = None
    else:
        vcpmg = float(vcpmg)
    # Set the current spectrum id
    current_id = "Z_A%s"%(i)
    # Set the current experiment type.
    relax_disp.exp_type(spectrum_id=current_id, exp_type='SQ CPMG')
    # Set the peak intensity errors, as defined as the baseplane RMSD.
    spectrum.baseplane_rmsd(error=rmsd_err, spectrum_id=current_id)
    # Set the NMR field strength of the spectrum (converted to Hz above).
    spectrometer.frequency(id=current_id, frq=set_sfrq, units='Hz')
    # Relaxation dispersion CPMG constant time delay T (in s).
    relax_disp.relax_time(spectrum_id=current_id, time=time_T2)
    # Set the relaxation dispersion CPMG frequencies.
    relax_disp.cpmg_setup(spectrum_id=current_id, cpmg_frq=vcpmg)
    i += 1
ncycfile.close()
</source>
== Analyse ==
'''Notes about speeding up the model selection:'''<br>
To speed up the model selection, see [[:Category:Time_of_running]].<br>
'''Monte Carlo simulations:'''<br>
We will set the number of Monte Carlo simulations to '''10''', for inspection.<br>
This will not affect model selection. <br>
If errors are not important for specific cases, set the number of MC sims to 3-10, and the analysis will perform much more rapidly. <br>
The result is that the error estimates of the parameters are horrible, but in some cases, excluding publication, that is not such a problem.
 
'''Which models to choose for running:''' This text is based on [http://article.gmane.org/gmane.science.nmr.relax.devel/4426 this email thread discussion]: <br>
When you are analysing data, you would probably limit the number of models to 2-3. <br>
For example if you know that all residues are experiencing '''slow exchange''', the '''LM63''' '''fast exchange''' model does not need to be used. <br>
It is interesting to see that sometimes the analytic models are selected and sometimes the numeric models. <br>
But this is an academic curiosity, it is probably not a practical question anyone analysing real dispersion data is interested in. <br>
The way an analysis would normally be performed is to first decide if the analytic or numeric approach is to be used. <br>
 
For the '''analytic approach''' with slow exchange, you only need the '''No Rex''' and '''CR72''' models. <br>
You could add the '''IT99''' model if you can see that pA >> pB in the spectra, i.e. the pB peak is tiny.
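For example, the '''MODELS''' list in the scripts below would then reduce to something like:
<source lang="python">
MODELS = ['R2eff', 'No Rex', 'CR72']    # optionally add 'IT99' if pA >> pB
</source>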
 
If you take the '''numeric approach''', then the 'No Rex' and 'NS 2-site expanded' models can be used. <br>
 
Once you perform an initial analysis of all residues separately, you can then look at the dynamics parameter values and judge which spins to
cluster together to have the same model of dynamics, then re-perform the analysis.
=== Analyse in GUI ===
Start relax in GUI mode
<source lang="python">
relax_disp -g -l logfilet log_relax_4_model_sel.txtlog
</source>
# Ctrl+n for new analysis
# Select '''Relaxation dispersion analysis''' button -> Next
# CPMG, fixed time -> Next
# Starting pipe: '''base pipe'''
# Pipe bundle: '''relax_disp''' -> Start
# The file name: '''peaks_list_max_standard.ser'''
# The spectrum ID string: auto
# Leave the rest of the fields as they are, they are not used.
# Push "Apply" and then "'''Cancel'''".
# We want to change the spectra properties by a script.
# Go to "User functions (n-z) -> script"
# Select file_name: '''relax_3_spectra_settings.py''' -> OK
# Before executing, it would be a good idea to save the state, to save the current setup.
# This '''state''' file will also be used for loading, before a later cluster/global fit analysis.
# Shift+Ctrl+s OR File-> Save as... '''ini_setup.bz2'''
# Make a directory for the output of the results, f.ex: '''model_sel_analyt'''.
# Point '''Results directory''' to '''model_sel_analyt'''.
# Set Monte-Carlo Simulations to '''10'''
# Select models: let's take '''"R2eff", "No Rex", "TSMFK01", "LM63", "CR72", "CR72 full", "IT99"'''
# Save the state again, so the current settings for the models, Monte Carlo simulations and results directory are preserved.
# Shift+Ctrl+s OR File-> Save as... '''ini_run.bz2''' in the '''model_sel_analyt''' directory.
# Now push "Execute"
The analysis will probably take between 4-10 hours.<br>
=== Analyse via script ===
Add the following two relax python script files to the relax directory.

'''relax_1_ini.py'''
<source lang="python">
# Taken from the relax disp manual, section 10.6.1 Dispersion script mode - the sample script
# Python module imports.
from os import sep
 
# relax module imports.
from auto_analyses.relax_disp import Relax_disp
 
# Analysis variables.
#####################
 
# The dispersion models.
MODELS = ['R2eff', 'No Rex', 'LM63', 'CR72', 'IT99', 'NS 2-site expanded']
 
# The grid search size (the number of increments per dimension).
GRID_INC = 21
 
# The number of Monte Carlo simulations to be used for error analysis at the end of the analysis.
MC_NUM = 10
 
# The model selection technique to use.
MODSEL = 'AIC'
 
# Experiment settings
#set_dir = "spectrometer_data_processed"
set_dir = None
 
# Set up the data pipe.
#######################
 
# Create the data pipe.
pipe_name = 'base pipe'
pipe_bundle = 'relax_disp'
pipe.create(pipe_name=pipe_name, bundle=pipe_bundle, pipe_type='relax_disp')
 
# Set the relaxation dispersion experiment type.
relax_disp.exp_type('cpmg fixed')
# Create the spins
spectrum.read_spins(file="peaks_list_max_standard.ser", dir=None)
# Name the isotope for field strength scaling.
spin.isotope(isotope='15N')
# Read the spectrum from NMRSeriesTab file. The "auto" will generate spectrum name of form: Z_A{i}
spectrum.read_intensities(file="peaks_list_max_standard.ser", dir=None, spectrum_id='auto', int_method='height')
# Set the spectra experimental properties/settings.
script(file='relax_3_spectra_settings.py', dir=None)
# Save the program state. This state file will also be used for loading, before a later cluster/global fit analysis.
state.save('ini_setup', force=True)
</source>
'''relax_4_model_sel.py'''
<source lang="python">
import os
from auto_analyses.relax_disp import Relax_disp

# Load the initial state setup.
state.load(state='ini_setup.bz2')

# Set settings for run.
results_directory = os.path.join(os.getcwd(), "model_sel_analyt")
pipe_name = 'base pipe'; pipe_bundle = 'relax_disp'
MODELS = ['R2eff', 'No Rex', 'TSMFK01', 'LM63', 'CR72', 'CR72 full', 'IT99']
GRID_INC = 21; MC_NUM = 10; MODSEL = 'AIC'

# Execute.
Relax_disp(pipe_name=pipe_name, pipe_bundle=pipe_bundle, results_dir=results_directory, models=MODELS, grid_inc=GRID_INC, mc_sim_num=MC_NUM, modsel=MODSEL)
</source>
And then just start relax with:
<source lang="bash">
relax_disp relax_1_ini.py -t log_relax_1_ini.log
relax_disp relax_4_model_sel.py -t log_relax_4_model_sel.log
</source>
The analysis will probably take between 4-10 hours.<br>
== Rerun from the "ini_setup.bz2" file ==
If something goes wrong, you can open the '''ini_setup.bz2''' file again (or the '''ini_run.bz2''' saved in the '''model_sel_analyt''' directory).
Just start relax:
<source lang="bash">
relax_disp -g -t log_relax_4_model_sel.log
</source>
and open the '''ini_setup.bz2''' from File -> "Open relax state".<br>
It should jump to the analysis window; make corrections, and you can then click "Execute".<br>
In script mode you would load the state with [http://www.nmr-relax.com/manual/state_load.html state.load]; just follow section [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved#Analyse_via_script | Analyse_via_script]].
= Inspecting results from the relax analysis =
After the analysis, several folders should be available, with data for each fitted model.
<source lang="bash">
R2eff/
No Rex/
TSMFK01/
LM63/
CR72/
CR72 full/
IT99/
final/
</source>
You can convert all to PNG images, by:
<source lang="bash">
cd "R2eff"; ./grace2images.py; cd .. ;
cd "No Rex"; ./grace2images.py; cd .. ;
cd "TSMFK01"; ./grace2images.py; cd .. ;
cd "LM63"; ./grace2images.py; cd .. ;
cd "CR72"; ./grace2images.py; cd .. ;
cd "CR72 full"; ./grace2images.py; cd .. ;
cd "IT99"; ./grace2images.py; cd .. ;
cd "final"; ./grace2images.py; cd .. ;
cd "IT99"; ./grace2images.py; cd .. ;cd "No Rex"; ./grace2images.py; cd .. ;cd "NS 2-site expanded"; ./grace2images.py; cd .. ;cd "R2eff"; ./grace2images.py; cd .. ;
find . -type f -name "*.png"
# And if you want to delete them.
find . -type f -name "*.png" -exec rm -f {} \;
</source>
See [[Grep_log_file]] for this.
== Compare values ==
For the '''TSMFK01''' and, for example, the '''CR72''' model, the '''k_AB''' values can be compared:
<source lang="bash">
cd model_sel_analyt
paste "TSMFK01/k_AB.out" "CR72/k_AB.out" | awk '{print $2, $3, $6, $13}'
</source>

== Inspect model selection for residues ==
=== With Grep AIC selection from logfile ===
If you have a log file, you can grep the AIC model selection results from it.
<source lang="bash">
set IN=log_relax_4_model_sel.log ; set OUT=log_relax_4_model_sel_chosen_models.txt ;
set FROM=`grep -n "AIC model selection" $IN | cut -d":" -f1` ;
</source>
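The snippet above locates the start line of the AIC model selection table. A minimal Python sketch that writes the table out to a file could look like the following; the assumption that 200 lines after the header cover the whole table depends on the layout of your log file, so adjust as needed.
<source lang="python">
# Minimal sketch: copy the "AIC model selection" table from the log to a file.
# The number of lines kept after the header (200) is an assumption about the
# log layout and may need adjusting.
infile = 'log_relax_4_model_sel.log'
outfile = 'log_relax_4_model_sel_chosen_models.txt'

lines = open(infile).readlines()
start = None
for i, line in enumerate(lines):
    if 'AIC model selection' in line:
        start = i
        break

if start is not None:
    open(outfile, 'w').writelines(lines[start:start + 200])
</source>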
=== In relax script get spin.model ===
See [[:Category:List_objects]] to get inspiration how to loop through the data class containers.
<source lang="bash">relax_disp -l logfile_clusterYou should open the '''final_state.bz2''' in the result directory.log</source>
<source lang="python">
pipe.display()

# Print the spin model. First import spin_loop.
from pipe_control.mol_res_spin import spin_loop

print("%20s %20s" % ("# Spin ID", "Model"))
for spin, spin_id in spin_loop(return_id=True, skip_desel=True):
    print("%20s %20s" % (repr(spin_id), spin.model))
</source>
You can also see this in the GUI in the '''Spin Viewer window''' ("View -> Spin Viewer (Ctrl+t)").<br>Select a spin, and look for the Variable '''model'''.
= Execute a clustering analysis =
'''Notes about how to select residues for clustering, based on [http://article.gmane.org/gmane.science.nmr.relax.devel/4442 this email thread]:'''<br>
Clustering is a manual operation and it should not be automated.<br>
It is based on human logic and is highly subjective.<br>
For example it could be decided that one analysis is performed whereby one motional process is assumed, i.e. one kex value for all exchanging spins.<br>
Or it could be decided that there are two motional processes, so two clusters are created, each having their own kex.<br>
Some spins with bizarre dynamics may be left out as 'free spins' and not used in the cluster.<br>
If you just want all spins with '''Rex''' to be in one cluster, you could just use all spins where '''spin.model''' is not set to '''No Rex'''.

'''Notes about how clustering is performed in relax:'''<br>
All spins of one cluster ID will be optimised as one model.<br>
Several cluster IDs will result in those groups of spins being optimised separately, with all spins of a cluster optimised together.<br>
See the '''relax_disp.cluster''' user function and the function '''specific_analyses.relax_disp.disp_data.loop_cluster''' which it uses.
=== Inspect residues for clustering ===
Let us select residues based on the model to which the highest number of residues have been fitted. Open the '''final_state.bz2''' in the relax GUI.<br> You can see the selected model for each residue in the '''Spin viewer''' (View -> Spin viewer (Ctrl+T)). Look for the '''Variable''' '''model'''.
Open the relax prompt with '''Ctrl+p''' if you are in the GUI.<br>
<source lang="python">
from pipe_control.mol_res_spin import spin_loop
 
# Open file for writing
cluster_file = "cluster_residues.txt"
f = open(cluster_file, 'w')
# Make a list to count number of models
resi_models = []
for spin, mol_name, res_num, res_name, spin_id in spin_loop(full_info=True, return_id=True, skip_desel=True):
    # Write models to file
    f.write( str(spin_id) + " ; " + str(spin.model) + " ; " + str(mol_name) + " ; " + str(res_num) + " ; " + str(res_name) + "\n" )
    # Append models to list
    resi_models.append(spin.model)

# Count the models
c_resi_models = dict((i, resi_models.count(i)) for i in resi_models)
print c_resi_models

# Write count result to file
for key, val in c_resi_models.items():
    f.write( "# ; " + str(key) + " ; " + str(val) + "\n" )
#Close the file
f.close()
</source>
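The resulting '''cluster_residues.txt''' might then look roughly like this (the spin IDs, models and counts below are purely hypothetical illustrations of the format):
<source lang="text">
:3@N ; CR72 ; None ; 3 ; G
:4@N ; No Rex ; None ; 4 ; K
# ; CR72 ; 1
# ; No Rex ; 1
</source>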
Copy '''cluster_residues.txt''' to the initial directory.

=== Create new analysis for clustering ===
For the clustered analysis, you need to start a new analysis. <br>
You should not load the results from the final pipe, since this will likely be fatal for the clustered analysis. <br>
Each results file will be loaded into a temporary data pipe and the initial parameter values copied from that. <br>
So close relax, and start again with a new log file.<br>

=== Do clustering analysis in GUI ===
Start relax in GUI mode:
<source lang="bash">
relax_disp -g -t log_relax_5_cluster.log
</source>
# Open the '''ini_setup.bz2''' from File -> "Open relax state".
# Open the '''relax prompt''' with '''Ctrl+p''', and paste this in:
<source lang="python">
# Needed for the selection check below.
from pipe_control.mol_res_spin import spin_loop

# Cluster residues.
cluster_file = "cluster_residues.txt"
f = open(cluster_file, 'r')
for line in f:
    if line[0] == "#":
        continue
    else:
        spinid = line.split(";")[0].strip()
        spinmodel = line.split(";")[1].strip()

        # Deselect those spins not showing exchange for further analysis.
        if spinmodel == "No Rex":
            deselect.spin(spin_id=spinid, change_all=False)
        else:
            relax_disp.cluster('model_cluster', spinid)
f.close()

# Check which are clustered.
print cdp.clustering

# Check for selected/deselected spins.
for spin, spin_id in spin_loop(return_id=True, skip_desel=False):
    print spin_id, spin.select
</source>
# Before executing, it would be a good idea to save the state after clustering.
# Shift+Ctrl+s OR File -> Save as... '''ini_setup_cluster.bz2'''
# Ctrl+d, right click "base pipe" and select "Associate with a new auto-analysis"
# Close the pipe viewer
# Make a directory for the output of the results, f.ex: '''model_clustering_analyt'''
# Point '''Results directory''' to '''model_clustering_analyt'''.
# Point '''Previous run directory''' to the previous results directory, where all the models have their folders. Values will be read from here: '''model_sel_analyt'''.
# Set Monte-Carlo Simulations to '''50'''
# Select models: let's take '''"R2eff", "No Rex", "TSMFK01"'''
# Now push "Execute"
# The grid search size (=== Do clustering Analysis in script ===Add the number of increments per dimension)following python relax script file to the relax directory.GRID_INC = 21
'''relax_5_cluster.py'''
<source lang="python">
"""Taken from the relax disp manual, section 10.6.1 Dispersion script mode - the sample script.

To run the script, simply type:

$ ../../../../../relax relax_5_cluster.py --tee relax_5_cluster.log
"""

import os
from auto_analyses.relax_disp import Relax_disp
from pipe_control.mol_res_spin import spin_loop

# Set settings for run.
pre_run_directory = os.path.join(os.getcwd(), "model_sel_analyt")
results_directory = os.path.join(os.getcwd(), "model_clustering_analyt")
cluster_file = "cluster_residues.txt"

# Load the previous final state with results.
state.load(state='final_state.bz2', dir=pre_run_directory, force=False)

# Open file for writing
f = open(cluster_file, 'w')

# Make a list to count number of models
resi_models = []

for spin, mol_name, res_num, res_name, spin_id in spin_loop(full_info=True, return_id=True, skip_desel=True):
    # Write models to file
    f.write( str(spin_id) + " ; " + str(spin.model) + " ; " + str(mol_name) + " ; " + str(res_num) + " ; " + str(res_name) + "\n" )
    # Append models to list
    resi_models.append(spin.model)

# Count resi_models
c_resi_models = dict((i, resi_models.count(i)) for i in resi_models)
print c_resi_models

# Write count result to file
for key, val in c_resi_models.items():
    f.write( "# ; " + str(key) + " ; " + str(val) + "\n" )

# Close the file
f.close()

#################
# Cluster file for selection of residues.
#################

# Load the initial state setup.
state.load(state='ini_setup.bz2', force=True)

# Cluster residues.
f = open(cluster_file, 'r')
for line in f:
    if line[0] == "#":
        continue
    else:
        spinid = line.split(";")[0].strip()
        spinmodel = line.split(";")[1].strip()

        # Deselect those spins not showing exchange for further analysis.
        if spinmodel == "No Rex":
            deselect.spin(spin_id=spinid, change_all=False)
        else:
            relax_disp.cluster('model_cluster', spinid)
f.close()

# Check which are clustered.
print cdp.clustering

# Check for selected/deselected spins.
for spin, spin_id in spin_loop(return_id=True, skip_desel=False):
    print spin_id, spin.select

# Save the program state before the run.
state.save('ini_setup_cluster', force=True)

#################
# Run cluster analysis
#################

# Set settings for run.
pipe_name = 'base pipe'; pipe_bundle = 'relax_disp'
MODELS = ['R2eff', 'No Rex', 'TSMFK01']
GRID_INC = 21; MC_NUM = 50; MODSEL = 'AIC'

# Execute.
Relax_disp(pipe_name=pipe_name, pipe_bundle=pipe_bundle, results_dir=results_directory, models=MODELS, grid_inc=GRID_INC, mc_sim_num=MC_NUM, modsel=MODSEL, pre_run_dir=pre_run_directory)
</source>
And then just start relax with:
<source lang="bash">
relax_disp relax_5_cluster.py -t log_relax_5_cluster.log
</source>
= See also =
[[Category:Relaxation dispersion analysis]]
[[Category:Tutorials]]