Tutorial for Relaxation dispersion analysis cpmg fixed time recorded on varian as fid interleaved
Contents
- 1 Intro
- 2 Preparation
- 3 Get the process helper scripts
- 4 Extract interleaved spectra, process to NMRPipe and do spectral processing
- 5 Analyse in relax
- 5.1 making a spin file from SPARKY list
- 5.2 Extract the spectra settings from Varian procpar file
- 5.3 Measure the background noise "RMSD" in each of the .ft2 files
- 5.4 Prepare directory for relax run
- 5.5 relax script for setting experiment settings to spectra
- 5.6 Analyse
- 5.7 Rerun from a "ini_setup.bz2" file
- 6 Inspecting results from the relax analysis
- 7 Execute a clustering analysis
- 8 See also
Intro
This tutorial presently covers the relax_disp branch.
This branch is under development; to test it out, you need to use the source code. See Installation_linux#Checking_out_a_relax_branch.
This tutorial is based on the analysis of NMR data from the paper:
The inverted chevron plot measured by NMR relaxation reveals a native-like unfolding intermediate in acyl-CoA binding protein.
Kaare Teilum, Flemming M Poulsen, Mikael Akke.
Proceedings of the National Academy of Sciences of the United States of America (2006).
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1458987
The data is recorded as FID interleaved.
Preparation
You want to make a working directory with the following folders:
peak_lists
spectrometer_data
scripts
You can create the folders by
mkdir peak_lists spectrometer_data scripts
The folder spectrometer_data should contain the files fid and procpar, the output from recording FID-interleaved data on the Varian spectrometer.
The folder peak_lists should contain a peak list in SPARKY list format.
In the folder scripts we put the scripts which help us process the files.
Get the process helper scripts
Go into the scripts directory and download these scripts there.
- convert_all.com
- fft_all.com
- CPMG_1_sort_pseudo3D_initialize_files.sh
- CPMG_2_convert_and_process.sh
- CPMG_3_fft_all.sh
- NMRPipe_to_Sparky.sh
- sparky_add.sh
- stPeakList.pl
Then make them executable, and add to PATH.
cd scripts
# Change shell
tcsh
# Make them executable
chmod +x *.sh *.com *.pl
# Add scripts to PATH
setenv PATH ${PWD}:${PATH}
# Go back to previous directory
cd ..
Extract interleaved spectra, process to NMRPipe and do spectral processing
Extract interleaved and change format to NMRPipe
Sort out the interleaved fid with the script CPMG_1_sort_pseudo3D_initialize_files.sh.
# Copy data
cp -r spectrometer_data spectrometer_data_processed
# sort_pseudo3D and initialize files
cd spectrometer_data_processed
CPMG_1_sort_pseudo3D_initialize_files.sh
Now we make a conversion script to convert from the Varian binary format to NMRPipe format (this is normally done with the NMRPipe conversion GUI).
- Click 'read parameters' and check 'Rance-Kay'
- Remember to set the Y-'Observe Freq MHz' to N15
- Click 'Save script' to make the 'fid.com' file, click 'Quit', and run the next CPMG script
Now it is time to convert all the fids from Varian format to NMRPipe format with the script CPMG_2_convert_and_process.sh.
CPMG_2_convert_and_process.sh
Spectral processing
- Now we need to do the spectral processing of the spectra.
- Process one of the files normally; the next script will copy the processing script to the other folders.
- [m]->Right-Click Process 2D->Basic 2D
- Save->Execute->Done; then RClick File->Select File->test.ft2->Read/draw->Done
- If your spectra look reversed (i.e. if your peaks do not seem to match your reference spectrum), it might be solved by adding
- [m] '| nmrPipe -fn FT -neg \' as the third lowest line of the script.
- Save->Execute->Done. Then push [r] to refresh.
- Press [h], find P0 and P1, then push [m], change the parameters and update the script.
- The change to '| nmrPipe -fn PS xxx \' should be made to the FIRST line with PS (the proton dimension).
- Save/Execute, push [r] (read) and then [e] (erase settings) to see the result in NMRDraw.
- And then run the next CPMG script
As suggested in the relax manual, section 5.2.2 Spectral processing, the spectral processing script could look like the one below.
NOTE: only put EXT in AFTER you are done with phasing, or you will have problems phasing.
NOTE: for GM you will have to play with the constants -g1 5 -g2 10. The NMRPipe manual on GM suggests trying as a start: -g1 20 -g2 35. -g2 specifies the Gaussian to apply in terms of a line broadening in Hz. It is usually adjusted to be larger (x 1.25 - 4.0) than the line sharpening specified by the -g1 option.
File: nmrproc.com
#!/bin/csh
nmrPipe -in test.fid \
| nmrPipe -fn SOL \
| nmrPipe -fn GM -g1 5 -g2 10 -c 1.0 \
| nmrPipe -fn ZF -auto -size 8000 \
| nmrPipe -fn FT -auto \
| nmrPipe -fn PS -p0 214.00 -p1 -21.00 -di -verb \
| nmrPipe -fn TP \
| nmrPipe -fn SP -off 0.5 -end 0.98 -pow 2 -c 0.5 \
| nmrPipe -fn ZF -auto -size 8000 \
| nmrPipe -fn FT -neg \
| nmrPipe -fn PS -p0 0.00 -p1 0.00 -di -verb \
| nmrPipe -fn TP \
| nmrPipe -fn POLY -auto \
| nmrPipe -fn EXT -left -sw \
-ov -out test.ft2
Understand spectral processing
To understand the NMRPipe functions, you can look them up in the manual page: http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/
See also the relax online manual for spectral processing.
A good book to look things up in is Keeler, Understanding NMR Spectroscopy, second edition.
nmrPipe | Desc. | Comments |
---|---|---|
nmrPipe -fn SOL | Solvent Filter | |
nmrPipe -fn GM -g1 5 -g2 10 -c 1.0 | Lorentz-to-Gauss Window, here for the measured direct dimension. | The constant -c is set to 1.0 since the P1 phase correction differs from 0.0 (here -p1 -21.00); if -p1 is 0.0, use -c 0.5. |
nmrPipe -fn ZF -auto -size 8000 | Zero Fill, here for the measured direct dimension. | The -auto flag rounds the final size up to a power of 2, so here it is equivalent to: nmrPipe -fn ZF -size 8192 |
nmrPipe -fn FT -auto | Complex Fourier Transform, here for the measured direct dimension. | Do the Fourier Transform. |
nmrPipe -fn PS -p0 214.00 -p1 -21.00 -di -verb | Phase Correction, here for the measured direct dimension. | |
nmrPipe -fn TP | 2D Transpose XY->YX (YTP) | Transpose the matrix to work in the indirect dimension. |
nmrPipe -fn SP -off 0.5 -end 0.98 -pow 2 -c 0.5 | Adjustable Sine Bell Window. The -pow 2 means a sine-squared (sin^2) function. See Keeler p. 93 and p. 98 for the sine window description. | The -end 0.98 means that 2% of the data is cut. -c 0.5 is used since the P1 phasing is 0.0 in the indirect dimension. |
nmrPipe -fn ZF -auto -size 8000 | Zero Fill, here for the indirect dimension. | The -auto flag rounds the final size up to a power of 2, so here it is equivalent to: nmrPipe -fn ZF -size 8192 |
nmrPipe -fn FT -neg | Complex Fourier Transform, here for the indirect dimension. | Do the Fourier Transform, but with the -neg flag, since the CPMG element in the pulse sequence makes the magnetization end up negative. |
nmrPipe -fn PS -p0 0.00 -p1 0.00 -di -verb | Phase Correction, here for the indirect dimension. | No phase correction needed. |
nmrPipe -fn TP | 2D Transpose XY->YX (YTP) | Transpose the matrix back to work in the direct dimension. |
nmrPipe -fn POLY -auto | Polynomial Subtract for Time-Domain Solvent Correction and Frequency-Domain Baseline Correction. | |
nmrPipe -fn EXT -left -sw | Extract Region. NOTE: only put this in AFTER you are done with phasing, or you will have problems phasing. | -left extracts the left half of the sweep width, which has been centered on water. |
Fourier transform all spectra
Now it is time to Fourier Transform all spectra with the script CPMG_3_fft_all.sh.
CPMG_3_fft_all.sh
Convert all *.ft2 files to ucsf format, so they can be opened in SPARKY
Done by the script NMRPipe_to_Sparky.sh
NMRPipe_to_Sparky.sh
Check the peak list matches
sparky 0.fid/test.ucsf
SPARKY GUI
The keyboard shortcuts are listed in the manual [1]
First make window bigger.
zo for zoom out. zi Zoom in.
ct for setting the contour level.
Set level 6 for positive and negative.
Add 1 to the exponent. Ex: xxxe+03 -> xxxe+04 in both positive and negative.
Ok
rp for read peaks. Find your peak file, which should be in format SPARKY_list
../peak_lists/peaks.list
Click Create peaks, Close.
Shift all peaks
Select one peak, and center it.
lt (LT) to show a list of peaks for a spectrum.
Double click on peak "A3N" in the list. Zoom in "zi".
Now you want to align your peaks, since they can be off-shifted.
First note down the current value of PPM in w1 and w2.
A3N-HN 121.828 8.513
Push F1 for select mode, drag it with the mouse or "pc" for auto "peak center".
Then click "Update" in the peak list, and note down the new values.
A3N-HN 121.681 8.514
We need to shift the nitrogen peaks (121.681 - 121.828)=-0.147 ppm, and proton peaks (8.514 - 8.513)=0.001 ppm.
Exit SPARKY
Go to the peak_list folder.
cd ../peak_lists/
We can add values to a column by using the script sparky_add.sh
Correct Nitrogen
sparky_add.sh peaks.list '$2' -0.147 peaks_corr_N15.list
Correct Proton
sparky_add.sh peaks_corr_N15.list '$3' 0.001 peaks_corr_N15_1H.list
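If you prefer to do this shift in Python rather than with sparky_add.sh, a minimal sketch could look like the one below. This is only a suggestion: it applies both offsets in one pass, uses the file names and offsets from above, assumes a plain three-column SPARKY list, and copies header or malformed lines through unchanged.
# Minimal sketch: apply the ppm offsets calculated above to a SPARKY peak list.
n15_offset = -0.147
h1_offset = 0.001

out = open('peaks_corr_N15_1H.list', 'w')
for line in open('peaks.list'):
    cols = line.split()
    try:
        w1 = float(cols[1]) + n15_offset
        w2 = float(cols[2]) + h1_offset
    except (IndexError, ValueError):
        # Header or malformed line: copy through unchanged.
        out.write(line)
        continue
    out.write("%s %10.3f %10.3f\n" % (cols[0], w1, w2))
out.close()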
Check and auto center peaks
Now go into Sparky again, and read peak list.
cd ..
cd spectrometer_data_processed
sparky 0.fid/test.ucsf
rp Choose ../peak_lists/peaks_corr_N15_1H.list
Create peaks, close.
zo zoom out. ct set contour.
lt and go through peaks, and auto center with pc.
Problematic peaks:
- H30N-HN: not possible to auto-center in the middle; it lies next to L47 and E4.
- A57N-HN / D68N-HN: in the original peak list A57N-HN 121.526 7.944 and D68N-HN 121.511 7.922, but both were centered to 121.409 7.933.
Manually alter peaks
Save the file to ../peak_lists/peaks_corr_peak_center.list and then alter the values manually.
cp ../peak_lists/peaks_corr_peak_center.list ../peak_lists/peaks_corr_final.list
gedit ../peak_lists/peaks_corr_final.list &
Then alter to:
H30N-HN 117.794 8.045
A57N-HN 121.417 7.944
D68N-HN 121.402 7.922
Then check again in sparky.
Check for peak movement
As stated in the relax manual section 5.2.1 Temperature control and calibration, the pulse sequence can put a lot of power into the sample.
You could read these sections in the relax manual:
Importance of Temperature control and calibration
Temperature control
Temperature calibration
It is therefore also good practice to inspect for peak movement by overlaying all spectra:
Open all the files, and overlay them with SPARKY command ol.
sparky 0.fid/test.ucsf 1.fid/test.ucsf 2.fid/test.ucsf 3.fid/test.ucsf 4.fid/test.ucsf
Change the colours for the different spectra in the contour dialog ct.
Then overlay with "ol". Make sure no peaks move around.
Measuring peak heights
We will use the NMRPipe program seriesTab to measure the intensities.
seriesTab needs an input file where the ppm values from a SPARKY list have been converted to spectral points.
The spectral point values depend on the spectral processing parameters.
Generate spectral point file
Create a file with spectral point information with script stPeakList.pl .
stPeakList.pl 0.fid/test.ft2 ../peak_lists/peaks_corr_final.list > peaks_list.tab
cat peaks_list.tab
Make file with paths to .ft2 files
Then we make a file listing the paths to the .ft2 files.
ls -v -d -1 */*.ft2 > ft2_files.ls
cat ft2_files.ls
Measure the height or sum in a spectral point box
seriesTab -in peaks_list.tab -out peaks_list_max_standard.ser -list ft2_files.ls -max
seriesTab -in peaks_list.tab -out peaks_list_max_dx1_dy1.ser -list ft2_files.ls -max -dx 1 -dy 1
OR make the sum in a box:
seriesTab -in peaks_list.tab -out peaks_list_sum_dx1_dy1.ser -list ft2_files.ls -sum -dx 1 -dy 1
Analyse in relax
making a spin file from SPARKY list
relax does not yet have the ability to read spins from a SPARKY file. See the support request.
So we create one.
set ATOMS=`tail -n+4 peaks_list.tab | awk '{print $7}'`
set SCRIPT=relax_2_spins.py
foreach I (`seq 1 ${#ATOMS}`)
set ATOM=${ATOMS[$I]}; set SPIN=`echo $ATOM | sed -e "s/N-HN//g"`; set RESN=`echo $SPIN | sed -e "s/[0-9]*//g"`; set RESI=`echo $SPIN | sed -e "s/[A-Za-z]//g"`
echo $ATOM $SPIN $RESN $RESI
echo "spin.create(spin_name='N', spin_num=$I, res_name='$RESN', res_num=$RESI, mol_name=None)" >> $SCRIPT
end
cat $SCRIPT
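As a check, the start of the generated relax_2_spins.py should look something like the lines below. The residue names and numbers here are only illustrative (taken from peaks mentioned earlier in this tutorial); the real order and numbering follow your peak list.
spin.create(spin_name='N', spin_num=1, res_name='A', res_num=3, mol_name=None)
spin.create(spin_name='N', spin_num=2, res_name='E', res_num=4, mol_name=None)
spin.create(spin_name='N', spin_num=3, res_name='H', res_num=30, mol_name=None)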
Extract the spectra settings from Varian procpar file
Now we want to make a settings file we can read in relax.
set NCYCLIST=`awk '/^ncyc /{f=1;next}f{print $0;exit}' procpar`; echo $NCYCLIST
set TIMET2=`awk '/^time_T2 /{f=1;next}f{print $2;exit}' procpar`; echo $TIMET2
set SFRQ=`awk '/^sfrq /{f=1;next}f{print $2;exit}' procpar`; echo $SFRQ
foreach I (`seq 2 ${#NCYCLIST}`)
set NCYC=${NCYCLIST[$I]}; set FRQ=`echo ${NCYC}/${TIMET2} | bc -l`; echo $NCYC $TIMET2 $FRQ $SFRQ >> ncyc.txt
end
cat ncyc.txt
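The CPMG frequency is simply ncyc divided by the constant relaxation time, e.g. 28 / 0.06 s = 466.67 Hz. If you prefer Python over the bc call above, a minimal sketch could look like the following, assuming you have already pulled the ncyc array, time_T2 and sfrq out of procpar (the values below are the ones from this dataset).
# Constant relaxation time T2 (s) and spectrometer frequency (MHz), read from procpar.
time_T2 = 0.06
sfrq = 599.8908622

# ncyc values from the procpar ncyc array (the reference spectra have ncyc = 0).
ncycs = [28, 0, 4, 32, 60, 2, 10, 16, 8, 20, 50, 18, 40, 6, 12, 0, 24]

out = open('ncyc.txt', 'w')
for ncyc in ncycs:
    # nu_CPMG = ncyc / T2.
    frq = ncyc / time_T2
    out.write("%i %s %s %s\n" % (ncyc, time_T2, frq, sfrq))
out.close()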
Measure the background noise "RMSD" in each of the .ft2 files
There are two ways to get the background RMSD noise:
- For whole spectrum: http://www.cgl.ucsf.edu/home/sparky/manual/views.html#Noise
- For a region: http://www.cgl.ucsf.edu/home/sparky/manual/extensions.html#RegionRMSD
We take the full background noise, to save time.
sparky 0.fid/test.ucsf
Then st and recompute for 10000 points.
It should give a value of the order of 2.47e+03 or similar.
Add the values to ncyc.txt in next column.
Repeat for all spectra.
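If you would rather script this step than click through SPARKY, one possibility (only a suggestion, not part of this tutorial) is to use the nmrglue Python library and estimate the noise from a peak-free region of each plane:
# Rough noise estimate per spectrum, assuming nmrglue is installed and that the
# chosen corner of each plane is free of peaks (check this in SPARKY first).
import glob
import nmrglue as ng

for ft2 in sorted(glob.glob('*.fid/test.ft2')):
    dic, data = ng.pipe.read(ft2)
    # The standard deviation over a peak-free region approximates the baseplane RMSD.
    noise = data[:64, :64].std()
    print ft2, "%.3g" % noise
Note that this is only an approximation; the SPARKY st estimate is what is used in the rest of this tutorial.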
We will use ncyc.txt to set the spectra settings later; it should look like this:
ncyc time_T2 CPMG_frq sfrq RMSD
28 0.06 466.66666666666666666666 599.8908622 2.47e+03
0 0.06 0 599.8908622 2.34e+03
4 0.06 66.66666666666666666666 599.8908622 2.41e+03
32 0.06 533.33333333333333333333 599.8908622 2.42e+03
60 0.06 1000.00000000000000000000 599.8908622 2.45e+03
2 0.06 33.33333333333333333333 599.8908622 2.42e+03
10 0.06 166.66666666666666666666 599.8908622 2.42e+03
16 0.06 266.66666666666666666666 599.8908622 2.44e+03
8 0.06 133.33333333333333333333 599.8908622 2.39e+03
20 0.06 333.33333333333333333333 599.8908622 2.4e+03
50 0.06 833.33333333333333333333 599.8908622 2.42e+03
18 0.06 300.00000000000000000000 599.8908622 2.46e+03
40 0.06 666.66666666666666666666 599.8908622 2.41e+03
6 0.06 100.00000000000000000000 599.8908622 2.45e+03
12 0.06 200.00000000000000000000 599.8908622 2.45e+03
0 0.06 0 599.8908622 2.39e+03
24 0.06 400.00000000000000000000 599.8908622 2.45e+03
Prepare directory for relax run
Then we make a directory ready for relax
mkdir ../relax
cp ncyc.txt ../relax
cp peaks_list* ../relax
cp relax_2_spins.py ../relax
cd ../relax
relax script for setting experiment settings to spectra
Add the following python relax script file to the relax directory.
This can be modified as wanted.
This saves time on the tedious work of setting the experimental conditions for each spectrum.
relax_3_spectra_settings.py
# Loop over the spectra settings.
ncycfile=open('ncyc.txt','r')

# Make empty ncyclist
ncyclist = []

i = 0
for line in ncycfile:
    ncyc = line.split()[0]
    time_T2 = float(line.split()[1])
    vcpmg = line.split()[2]
    set_sfrq = float(line.split()[3])
    rmsd_err = float(line.split()[4])
    print ncyc, time_T2, vcpmg

    # Test if the spectrum is a reference
    if float(vcpmg) == 0.0:
        vcpmg = None
    else:
        vcpmg = round(float(vcpmg), 3)

    # Add ncyc to the list
    ncyclist.append(int(ncyc))

    # Set the current spectrum id
    current_id = "Z_A%s"%(i)

    # Set the peak intensity errors, as defined by the baseplane RMSD.
    spectrum.baseplane_rmsd(error=rmsd_err, spectrum_id=current_id)

    # Set the NMR field strength of the spectrum.
    spectrometer.frequency(id=current_id, frq=set_sfrq, units='MHz')

    # Relaxation dispersion CPMG constant time delay T (in s).
    relax_disp.relax_time(spectrum_id=current_id, time=time_T2)

    # Set the relaxation dispersion CPMG frequencies.
    relax_disp.cpmg_frq(spectrum_id=current_id, cpmg_frq=vcpmg)

    i += 1

# Specify the duplicated spectra.
#spectrum.replicated(spectrum_ids=['Z_A1', 'Z_A15'])

# The automatic way
duplicates = map(lambda val: (val, [i for i in xrange(len(ncyclist)) if ncyclist[i] == val]), ncyclist)
for dup in duplicates:
    ncyc, list_index_occur = dup
    if len(list_index_occur) > 1:
        id_list = []
        for list_index in list_index_occur:
            id_list.append('Z_A%s'%list_index)
        # We don't set up replications, since we have RMSD values from the background noise.
        print id_list
        #spectrum.replicated(spectrum_ids=id_list)

# Delete the replicate spectrum.
spectrum.delete('Z_A15')
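The map/lambda line above is a little cryptic. Here is a minimal standalone sketch, with a made-up ncyclist, showing what it produces and why only the repeated ncyc values (the reference spectra) end up with more than one index:
# Made-up example: the reference (ncyc = 0) appears twice.
ncyclist = [28, 0, 4, 0, 24]
duplicates = map(lambda val: (val, [i for i in xrange(len(ncyclist)) if ncyclist[i] == val]), ncyclist)
print duplicates
# [(28, [0]), (0, [1, 3]), (4, [2]), (0, [1, 3]), (24, [4])]
# Only entries whose index list is longer than 1 are treated as replicated spectra.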
Analyse
NOTES about speeding up the model selection:
To speed up the model selection, see Category:Time_of_running.
Monte-Carlo simulations:
We will set the number of Monte-Carlo simulations to 10, for inspection.
This will not affect model selection.
For initial analyses where errors are not so important, the number of simulations can be dropped massively to speed things up.
If errors are not important for specific cases, set the number of MC sims to 3-10, and the analysis will perform much more rapidly.
The result is that the error estimates of the parameters are poor, but in some cases, excluding publication, that is not such a problem.
Which models to choose for running: this text is based on an email thread discussion.
When you are analysing data, you would probably limit the number of models to 2-3.
For example if you know that all residues are experiencing slow exchange, the LM63 fast exchange model does not need to be used.
It is interesting to see that sometimes the analytic models are selected and sometimes the numeric models.
But this is an academic curiosity; it is probably not a practical question that anyone analysing real dispersion data is interested in.
The way an analysis would normally be performed is to first decide if the analytic or numeric approach is to be used.
For the analytic approach with slow exchange, you only need the No Rex and CR72 models.
You could add the IT99 model if you can see that pA >> pB in the spectra, i.e. the pB peak is tiny.
If you take the numeric approach, then the 'No Rex' and 'NS 2-site expanded' models can be used.
Once you perform an initial analysis of all residues separately, you can then look at the dynamics parameter values and judge which spins to cluster together to have the same model of dynamics, then re-perform the analysis.
Analyse in GUI
Start relax in GUI mode
relax_disp -g -t log_relax_4_model_sel.log
- Ctrl+n for new analysis
- Select Relaxation dispersion analysis button -> Next
- Select CPMG, fixed time -> Next
- Starting pipe: base pipe
- Pipe bundle: relax_disp -> Start
- We want to load the spins manually, so in the next window go to "User functions (n-z) -> script"
- Select file_name: relax_2_spins.py -> OK
- Then click Spin Isotopes button:
- The nuclear isotope name: 15N
- The spin ID string: @N* -> OK
- To load the spectra, select the "Add" button under the spectra list:
- The file name: peaks_list_max_standard.ser
- The spectrum ID string: auto
- Push "Apply" and then Cancel
- We want to change the spectra properties by a script.
- Go to "User functions (n-z) -> script"
- Select file_name: relax_3_spectra_settings.py -> OK
- Before executing, it would be a good idea to save the state, to save the current setup.
- This state file will also be used for loading, before a later cluster/global fit analysis.
- Shift+Ctrl+s OR File-> Save as... ini_setup.bz2
- Make a directory for the output of the results, e.g. model_sel_analyt.
- Point Results directory to model_sel_analyt.
- Set Monte-Carlo Simulations to 10
- Select models: let's take "R2eff", "No Rex", "LM63", "CR72", "CR72 full", "IT99"
- Save the state again, so the settings for the models, the Monte-Carlo settings and the results directory are preserved.
- Shift+Ctrl+s OR File-> Save as... ini_run.bz2 in the model_sel_analyt directory.
- Now push "Execute"
Analyse via script
Add the following python relax script file to the relax directory
relax_1_ini.py
# Taken from the relax disp manual, section 10.6.1 Dispersion script mode - the sample script
# Create the data pipe.
pipe_name = 'base pipe'
pipe_bundle = 'relax_disp'
pipe.create(pipe_name=pipe_name, bundle=pipe_bundle, pipe_type='relax_disp')
# Set the relaxation dispersion experiment type.
relax_disp.exp_type('cpmg fixed')
# Create the spins
script(file='relax_2_spins.py', dir=None)
# Name the isotope for field strength scaling.
spin.isotope(isotope='15N')
# Read the spectrum from NMRSeriesTab file. The "auto" will generate spectrum name of form: Z_A{i}
spectrum.read_intensities(file="peaks_list_max_standard.ser", dir=None, spectrum_id='auto', int_method='height')
# Set the spectra experimental properties/settings.
script(file='relax_3_spectra_settings.py', dir=None)
# Save the program state before run.
# This state file will also be used for loading, before a later cluster/global fit analysis.
state.save('ini_setup', force=True)
relax_4_model_sel.py
import os
from auto_analyses.relax_disp import Relax_disp
# Load the initial state setup
state.load(state='ini_setup.bz2')
# Set settings for run.
results_directory = os.path.join(os.getcwd(),"model_sel_analyt")
pipe_name = 'base pipe'; pipe_bundle = 'relax_disp'
MODELS = ['R2eff', 'No Rex', 'LM63', 'CR72', 'CR72 full', 'IT99']
GRID_INC = 21; MC_NUM = 10; MODSEL = 'AIC'
# Execute
Relax_disp(pipe_name=pipe_name, pipe_bundle=pipe_bundle, results_dir=results_directory, models=MODELS, grid_inc=GRID_INC, mc_sim_num=MC_NUM, modsel=MODSEL)
And then just start relax with:
relax_disp relax_1_ini.py -t log_relax_1_ini.log
relax_disp relax_4_model_sel.py -t log_relax_4_model_sel.log
Rerun from a "ini_setup.bz2" file
If something goes wrong, you can open the ini_setup.bz2 in the model_sel_analyt directory.
Just start relax:
relax_disp -g -t log_relax_4_model_sel.log
and open the ini_setup.bz2 from File->"Open relax state".
It should jump to the analysis window; make any corrections, and then click "Execute".
For the script route, just follow the section Analyse via script.
Inspecting results from the relax analysis
In the main directory, there should be an auto-saved final_state.bz2 which can be opened to inspect the results.
After the analysis, several folders should be available, with data for each fitted model.
R2eff/
No Rex/
LM63/
CR72/
CR72 full/
IT99/
final/
In each of these folders, there is a grace2images.py Python file, which by default converts the Grace script files to PNG files.
Inspect graphs
You can convert all to PNG images, by:
cd "R2eff"; ./grace2images.py; cd .. ;
cd "No Rex"; ./grace2images.py; cd .. ;
cd "LM63"; ./grace2images.py; cd .. ;
cd "CR72"; ./grace2images.py; cd .. ;
cd "CR72 full"; ./grace2images.py; cd .. ;
cd "IT99"; ./grace2images.py; cd .. ;
cd "final"; ./grace2images.py; cd .. ;
find . -type f -name "*.png"
# And if you want to delete them.
find . -type f -name "*.png" -exec rm -f {} \;
You can then quickly go through the fitted graphs for the models.
Convert log file to relax script
If you made a log file, you can convert it to the full relax script.
See Grep_log_file for this.
Inspect model selection for residues
With logfile
If you have a log file.
set IN=log_relax_4_model_sel.log ;
set OUT=log_relax_4_model_sel_chosen_models.log ;
set FROM=`grep -n "AIC model selection" $IN | cut -d":" -f1` ;
set TO=`grep -n "monte_carlo.setup(" $IN | cut -d":" -f1` ;
sed -n ${FROM},${TO}p $IN > $OUT ;
cat $OUT ;
In relax prompt
See Category:List_objects to get inspiration how to loop through the data class containers.
You should open the final_state.bz2 in the result directory.
state.load(state='final_state.bz2')
# See which data is in the pipe
pipe.display()
# print the spin model, first import spin_loop
from pipe_control.mol_res_spin import spin_loop
print("%20s %20s" % ("# Spin ID", "Model"))
for spin, spin_id in spin_loop(return_id=True, skip_desel=True):
print("%20s %20s" % (repr(spin_id), spin.model))
You can also see this in the GUI in the Spin Viewer window under "View -> Spin Viewer (Ctrl+t)".
Select a spin, and look for the Variable model.
Execute a clustering analysis
Notes about how to select residues for clustering, based on an email thread discussion:
Clustering is a manual operation and it should not be automated.
It is based on human logic and is highly subjective.
For example it could be decided that one analysis is performed whereby one motional process is assumed, i.e. one kex value for all exchanging spins.
Or it could be decided that there are two motional processes, so two clusters are created, each having their own kex.
Some spins with bizarre dynamics may be left out as 'free spins' and not used in the cluster.
If you just want all spins with Rex to be in one cluster, you could just use all spins where spin.model is not set to No Rex.
Notes about how clustering is performed in relax.
All spins of one cluster ID will be optimised as one model.
Several cluster IDs will result in each cluster being optimised separately, but with all the spins within a given cluster optimised together.
Any spins not in a cluster ID (labelled as free spins) will be optimised individually.
Have a look at the model_loop() method of the specific_analyses.relax_disp.api module,
and the function specific_analyses.relax_disp.disp_data.loop_cluster which it uses.
Inspect residues for clustering
Let us select residues based on the criterion that the largest number of residues has been fitted to the same model.
Open the final_state.bz2 in relax.
You can see the selected model for each residue in the Spin viewer (View -> Spin viewer (Ctrl+T)). Look for the Variable model.
Open the relax prompt with Ctrl+p if you are in the GUI.
Tip: You can copy the lines, and in the relax prompt, select "Paste Plus".
from pipe_control.mol_res_spin import spin_loop

# Open file for writing
f = open('cluster_residues.txt', 'w')

# Make a list to count number of models
resi_models = []

for spin, spin_id in spin_loop(return_id=True, skip_desel=True):
    # Write models to file
    f.write( str(spin_id) + " ; " + str(spin.model) + " ; " + str(spin._res_name) + "\n" )
    # Append models to list
    resi_models.append(spin.model)

# Count resi_models
c_resi_models = dict((i, resi_models.count(i)) for i in resi_models)
print c_resi_models

# Write count result to file
for key, val in c_resi_models.items():
    f.write( "# ; " + str(key) + " ; " + str(val) + "\n" )

f.close()
Copy cluster_residues.txt to the initial directory.
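For reference, cluster_residues.txt could end up looking something like the lines below. The spin IDs, models and counts are purely illustrative, and the exact spin ID format depends on how the spins were created; the "#" lines at the end are the model counts written by the loop above.
:3@N ; CR72 ; A
:4@N ; No Rex ; E
:30@N ; CR72 ; H
# ; CR72 ; 2
# ; No Rex ; 1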
Create new analysis clustering
For the clustered analysis, you need to start a new analysis.
You should not load the results from the final pipe, since this will likely be fatal for the clustered analysis.
The auto-analysis is designed to take the pre-run directory name and load the results files for each model itself (not the state file).
Each results file will be loaded into a temporary data pipe and the initial parameter values copied from that.
So close relax, and then add these files.
Do clustering Analysis in GUI
Start relax in GUI mode
relax_disp -g -t log_relax_6_cluster.log
- Open the ini_setup.bz2 from File->"Open relax state".
- Open the relax prompt with Ctrl+p, and paste this in.
# Cluster file for selection residues.
cluster_file = "cluster_residues.txt"
# Cluster residues
f = open(cluster_file, 'r')
for line in f:
    if line[0] == "#":
        continue
    else:
        spinid = line.split(";")[0].strip()
        spinmodel = line.split(";")[1].strip()

        # Deselect those spins not showing exchange for further analysis.
        if spinmodel == "No Rex":
            deselect.spin(spin_id=spinid, change_all=False)
        else:
            relax_disp.cluster('model_cluster', spinid)
f.close()
# Check which are clustered
print cdp.clustering
# Check for selected/deselected spins.
from pipe_control.mol_res_spin import spin_loop
for spin, spin_id in spin_loop(return_id=True, skip_desel=False):
    print spin_id, spin.select
- Before executing, it would be a good idea to save the state after clustering.
- Shift+Ctrl+s OR File-> Save as... ini_setup_cluster.bz2
- Ctrl+d , right click "base pipe" and "Associate with a new auto-analysis"
- Close pipe viewer
- Make a directory for the output of the results, e.g. model_clustering_analyt
- Point Results directory to model_clustering_analyt.
- Point the Previous run directory to the previous results directory, where all the models had their folders (model_sel_analyt). Parameter values will be read from here.
- Set Monte-Carlo Simulations to 10
- Select models: let's take "R2eff", "No Rex", "CR72", "IT99"
- Save the state again, so the settings for the models, the Monte-Carlo settings and the results directory are preserved.
- Shift+Ctrl+s OR File-> Save as... ini_run_cluster.bz2 in the model_clustering_analyt directory.
- Now push "Execute"
Do clustering Analysis in script
Add the following python relax script file to the relax directory.
relax_5_ini_cluster.py
# Load the initial state setup
state.load(state='ini_setup.bz2')
# Cluster file for selection residues.
cluster_file = "cluster_residues.txt"
# Cluster residues
f = open(cluster_file, 'r')
for line in f:
    if line[0] == "#":
        continue
    else:
        spinid = line.split(";")[0].strip()
        spinmodel = line.split(";")[1].strip()

        # Deselect those spins not showing exchange for further analysis.
        if spinmodel == "No Rex":
            deselect.spin(spin_id=spinid, change_all=False)
        else:
            relax_disp.cluster('model_cluster', spinid)
f.close()
# Check which are clustered
print cdp.clustering
# Check for selected/deselected spins.
from pipe_control.mol_res_spin import spin_loop
for spin, spin_id in spin_loop(return_id=True, skip_desel=False):
    print spin_id, spin.select
# Save the program state before run.
state.save('ini_setup_cluster.bz2', force=True)
relax_6_cluster.py
import os
from auto_analyses.relax_disp import Relax_disp
# Load the initial state setup
state.load(state='ini_setup_cluster.bz2')
# Set settings for run.
results_directory = os.path.join(os.getcwd(),"model_clustering_analyt")
pre_run_directory = os.path.join(os.getcwd(),"model_sel_analyt")
pipe_name = 'base pipe'; pipe_bundle = 'relax_disp'
MODELS = ['R2eff', 'No Rex', 'CR72']
GRID_INC = 21; MC_NUM = 10; MODSEL = 'AIC'
# Execute
Relax_disp(pipe_name=pipe_name, pipe_bundle=pipe_bundle, results_dir=results_directory, models=MODELS, grid_inc=GRID_INC, mc_sim_num=MC_NUM, modsel=MODSEL, pre_run_dir=pre_run_directory)
And then just start relax with:
relax_disp relax_5_ini_cluster.py -t log_relax_5_ini_cluster.log
relax_disp relax_6_cluster.py -t log_relax_6_cluster.log