__TOC__
 
= Intro =
This tutorial is based on the analysis of NMR data from the paper:
 
<blockquote>
The inverted chevron plot measured by NMR relaxation reveals a native-like unfolding intermediate in acyl-CoA binding protein. <br>
</blockquote>
The data are recorded as interleaved FIDs.
= Preparation =
== Get the process helper scripts ==
Go into the '''scripts''' directory and download the following scripts into it.
 
# [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#convert_all.com | convert_all.com]]
# [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#fft_all.com | fft_all.com]]
# [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#CPMG_1_sort_pseudo3D_initialize_files.sh | CPMG_1_sort_pseudo3D_initialize_files.sh]]
# [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#CPMG_2_convert_and_process.sh | CPMG_2_convert_and_process.sh ]]
# [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#CPMG_3_fft_all.sh | CPMG_3_fft_all.sh]]
# [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#NMRPipe_to_Sparky.sh | NMRPipe_to_Sparky.sh]]
# [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#sparky_add.sh | sparky_add.sh]]
# [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#stPeakList.pl | stPeakList.pl]]
 
Then make them executable, and add them to the PATH.
<source lang="bash">
cd scripts
# Change shell
tcsh
# Set array of scripts to download
set SCRIPTS="CPMG_1_sort_pseudo3D_initialize_files.sh CPMG_2_convert_and_process.sh CPMG_3_fft_all.sh convert_all.com fft_all.com sparky_add.sh stPeakList.pl NMRPipe_to_Sparky.sh"
 
# Download scripts
foreach SCRIPT ( ${SCRIPTS} )
curl https://raw.github.com/nmr-relax/relax_scripts/master/shell_scripts/$SCRIPT -o $SCRIPT
end
# Make them executable
chmod +x *.sh *.com *.pl
 
# Add scripts to PATH
setenv PATH ${PWD}:${PATH}
# Go back to previous directory
cd ..
</source>
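If you prefer Python to tcsh, a minimal sketch of the same download step is shown below (same raw GitHub URLs and script names as above; this is just an unofficial alternative to the curl loop, written in the Python 2 style used elsewhere in this tutorial):
<source lang="python">
# Unofficial Python 2 alternative to the tcsh/curl loop above.
import os, stat, urllib2

SCRIPTS = ["CPMG_1_sort_pseudo3D_initialize_files.sh", "CPMG_2_convert_and_process.sh",
           "CPMG_3_fft_all.sh", "convert_all.com", "fft_all.com", "sparky_add.sh",
           "stPeakList.pl", "NMRPipe_to_Sparky.sh"]
BASE = "https://raw.github.com/nmr-relax/relax_scripts/master/shell_scripts/%s"

for script in SCRIPTS:
    # Download the script.
    data = urllib2.urlopen(BASE % script).read()
    fh = open(script, 'w')
    fh.write(data)
    fh.close()
    # Make it executable (the chmod +x step).
    os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
</source>
Adding the scripts directory to the PATH still has to be done in the shell, as in the tcsh block above.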
= Extract interleaved spectra, process to NMRPipe and do spectral processing =
== Extract interleaved and change format to NMRPipe ==
Sort out the interleaved FID with the script [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#CPMG_1_sort_pseudo3D_initialize_files.sh | CPMG_1_sort_pseudo3D_initialize_files.sh]].
<source lang="bash">
# Copy data
# Click 'Save script' to make 'fid.com' file, and 'Quit', and run the next CPMG script
### Now it is time to convert all the fid from varian formatto NMRPipe with the script [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#CPMG_2_convert_and_process.sh | CPMG_2_convert_and_process.sh]]
<source lang="bash">
CPMG_2_convert_and_process.sh
== Spectral processing ==
This step can be done by following the wiki page [[Spectral_processing]].

Process one spectrum first to determine the phasing: in '''nmrDraw''', use File -> Select File to read and draw the converted spectrum, then open Process 2D -> Basic 2D. Press '''[h]''', find P0 and P1, and push '''[m]''', then change the parameters and update the processing script. The change to the '| nmrPipe -fn PS xxx \' line should be made for the proton dimension. Then Save and Execute; push '''[r]''' to refresh and '''[e]''' (erase settings) to see the result in nmrDraw. If your spectra look reversed, or if your peaks do not seem to match your reference, it might be solved by changing '| nmrPipe -fn FT -neg'. And then run the next CPMG script.

The spectral processing script could look like this:
<source lang="bash">
#!/bin/csh

nmrPipe -in test.fid \
| nmrPipe -fn SOL \
| nmrPipe -fn GM -g1 5 -g2 10 -c 0.5 \
| nmrPipe -fn ZF -auto -size 8000 \
| nmrPipe -fn FT -auto \
| nmrPipe -fn PS -p0 214.00 -p1 -21.00 -di -verb \
| nmrPipe -fn TP \
| nmrPipe -fn SP -off 0.5 -end 0.98 -pow 2 -c 0.5 \
| nmrPipe -fn ZF -auto -size 8000 \
| nmrPipe -fn FT -neg \
| nmrPipe -fn PS -p0 0.00 -p1 0.00 -di -verb \
| nmrPipe -fn TP \
| nmrPipe -fn POLY -auto \
| nmrPipe -fn EXT -left -sw \
  -ov -out test.ft2
</source>

== Fourier transform all spectra ==
Now it is time to Fourier transform all spectra with the script [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#CPMG_3_fft_all.sh | CPMG_3_fft_all.sh]].
<source lang="bash">
CPMG_3_fft_all.sh
</source>

== Convert all *.ft2 files to ucsf format, so they can be opened in SPARKY ==
Done by the script [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#NMRPipe_to_Sparky.sh | NMRPipe_to_Sparky.sh]].
<source lang="bash">
NMRPipe_to_Sparky.sh
</source>

== Check the peak list matches ==
<source lang="bash">
sparky 0.fid/test.ucsf
</source>

=== SPARKY GUI ===
The keyboard shortcuts are listed in the manual [http://www.cgl.ucsf.edu/home/sparky/manual/indx.html].

First make the window bigger.<br>
'''zo''' for zoom out. '''zi''' for zoom in. '''ct''' for setting the contour level. <br>
Set level 6 for positive and negative.<br>
Add 1 to the e+0x exponent, ex: xxxe+03 -> xxxe+04, in positive and negative. OK.<br>
'''rp''' for read peaks. Find your peak file, which should be in [[SPARKY_list]] format: <br>
 ../peak_lists/peaks.list
Click Create peaks, Close.

=== Shift all peaks ===
Select one peak, and center it.<br>
'''lt''' (LT) to show a list of peaks for a spectrum.<br>
Double click on peak "A3N" in the list. Zoom in with "zi". Now you want to align your peaks, since they can be off-shifted.<br>
First note down the current values of PPM in w1 and w2: <br>
 A3N-HN 121.828 8.513
Push '''F1''' for select mode, drag the peak with the mouse, or use "pc" for auto "peak center". <br>
Then click "Update" in the peak list, and note down the new values: <br>
 A3N-HN 121.681 8.514

We need to shift the nitrogen peaks by (121.681 - 121.828) = -0.147 ppm, and the proton peaks by (8.514 - 8.513) = 0.001 ppm.

Exit SPARKY and go to the peak_lists folder.
<source lang="bash">
cd ../peak_lists/
</source>
We can add values to a column by using the script [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#sparky_add.sh | sparky_add.sh]].

Correct nitrogen:
<source lang="bash">
sparky_add.sh peaks.list '$2' -0.147 peaks_corr_N15.list
</source>
Correct proton:
<source lang="bash">
sparky_add.sh peaks_corr_N15.list '$3' 0.001 peaks_corr_N15_1H.list
</source>

=== Check and auto center peaks ===
Now go into SPARKY again, and read the peak list.
<source lang="bash">
cd ..
cd spectrometer_data_processed
sparky 0.fid/test.ucsf
</source>
'''rp''' and choose '''../peak_lists/peaks_corr_N15_1H.list'''. <br>
Create peaks, Close.<br>
'''zo''' zoom out. '''ct''' set contour. <br>
'''lt''' and go through the peaks, and auto center with '''pc'''.<br>

Problematic peaks:<br>
 H30N-HN, not possible to auto center in the middle. Next to L47 and E4.
 A57N-HN / D68N-HN. In the original peak list: A57N-HN 121.526 7.944 / D68N-HN 121.511 7.922; both centered to 121.409 7.933.

=== Manually alter peaks ===
Save the file to '''../peak_lists/peaks_corr_peak_center.list''' and then alter the values manually.
<source lang="bash">
cp ../peak_lists/peaks_corr_peak_center.list ../peak_lists/peaks_corr_final.list
gedit ../peak_lists/peaks_corr_final.list &
</source>
Then alter to:
 H30N-HN 117.794 8.045
 A57N-HN 121.417 7.944
 D68N-HN 121.402 7.922
Then check again in SPARKY.
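If you would rather apply these ppm corrections with Python than with the awk-based '''sparky_add.sh''', a minimal sketch is shown below. It assumes a plain whitespace-separated SPARKY list (assignment, w1, w2) and is only an unofficial alternative to the helper script:
<source lang="python">
# Minimal alternative to sparky_add.sh: add a constant offset to one column of a
# SPARKY peak list. Data lines are assumed to look like "A3N-HN  121.828  8.513";
# anything that does not parse (e.g. the header) is copied through unchanged.
def shift_column(infile, outfile, column, offset):
    fin = open(infile)
    fout = open(outfile, 'w')
    for line in fin:
        fields = line.split()
        try:
            fields[column] = "%.3f" % (float(fields[column]) + offset)
        except (IndexError, ValueError):
            fout.write(line)
            continue
        fout.write("  ".join(fields) + "\n")
    fin.close()
    fout.close()

# The same corrections as above: shift w1 (15N) by -0.147 ppm, then w2 (1H) by +0.001 ppm.
shift_column('peaks.list', 'peaks_corr_N15.list', 1, -0.147)
shift_column('peaks_corr_N15.list', 'peaks_corr_N15_1H.list', 2, 0.001)
</source>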
== Check for peak movement ==
As stated in the [[manual | relax manual]] section '''5.2.1 Temperature control and calibration''', the pulse sequence can put a lot of power into the sample. <br>
You could read these sections in the relax manual: <br>
[http://www.nmr-relax.com/manual/Temperature_control_calibration.html Importance of Temperature control and calibration]<br>
[http://www.nmr-relax.com/manual/relax_data_temp_control.html Temperature control]<br>
[http://www.nmr-relax.com/manual/relax_data_temp_calibration.html Temperature calibration]<br>

It is therefore also good practice to inspect for peak movements by overlaying all spectra. Open all the files, and overlay them with the SPARKY command '''ol'''.
<source lang="bash">
sparky 0.fid/test.ucsf 1.fid/test.ucsf 2.fid/test.ucsf 3.fid/test.ucsf 4.fid/test.ucsf
</source>
Change the colours for the different spectra in the contour dialog '''ct'''.<br>
Then overlay with "ol". Make sure no peaks move around.

== Measuring peak heights ==
We will use the program [[NMRPipe_seriesTab | NMRPipe seriesTab]] to measure the intensities.

'''seriesTab''' needs an input file, where the ppm values from a [[SPARKY_list | SPARKY list]] have been converted to spectral points.<br>
The spectral point values depend on the spectral processing parameters.

=== Generate spectral point file ===
Create a file with spectral point information with the script [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved_scripts#stPeakList.pl | stPeakList.pl]].
<source lang="bash">
stPeakList.pl 0.fid/test.ft2 ../peak_lists/peaks_corr_final.list > peaks_list.tab
cat peaks_list.tab
</source>

=== Make file with paths to .ft2 files ===
Then we make a file with the list of file paths to the .ft2 files.
<source lang="bash">
ls -v -d -1 */*.ft2 > ft2_files.ls
cat ft2_files.ls
</source>

=== Measure the height or sum in a spectral point box ===
<source lang="bash">
seriesTab -in peaks_list.tab -out peaks_list_max_standard.ser -list ft2_files.ls -max
seriesTab -in peaks_list.tab -out peaks_list_max_dx1_dy1.ser -list ft2_files.ls -max -dx 1 -dy 1
</source>
OR make the sum in a box:
<source lang="bash">
seriesTab -in peaks_list.tab -out peaks_list_sum_dx1_dy1.ser -list ft2_files.ls -sum -dx 1 -dy 1
</source>

= Analyse in relax =
== Extract the settings from Varian procpar file ==
Now we want to make a settings file we can read in relax.
<source lang="bash">
set NCYCLIST=`awk '/^ncyc /{f=1;next}f{print $0;exit}' procpar`; echo $NCYCLIST
set TIMET2=`awk '/^time_T2 /{f=1;next}f{print $2;exit}' procpar`; echo $TIMET2
set SFRQ=`awk '/^sfrq /{f=1;next}f{print $2;exit}' procpar`; echo $SFRQ

foreach I (`seq 2 ${#NCYCLIST}`)
set NCYC=${NCYCLIST[$I]}; set FRQ=`echo ${NCYC}/${TIMET2} | bc -l`; echo $NCYC $TIMET2 $FRQ $SFRQ >> ncyc.txt
end
cat ncyc.txt
</source>

== Measure the background noise "RMSD" in each of the .ft2 files ==
=== RMSD via sparky ===
There exist two ways to get the background RMSD noise:
# For the whole spectrum: http://www.cgl.ucsf.edu/home/sparky/manual/views.html#Noise
# For a region: http://www.cgl.ucsf.edu/home/sparky/manual/extensions.html#RegionRMSD

We take the full background noise, to save time. <br>
<source lang="bash">
sparky 0.fid/test.ucsf
</source>
Then '''st''' and recompute for '''10000''' points.<br>
It should give a value of the order of 2.47e+03 or similar. <br>
Add the values to ncyc.txt in the next column. Repeat for all spectra.
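Before building on '''ncyc.txt''', you can optionally sanity check it: the CPMG frequency column should simply be ncyc / time_T2, exactly as computed by the awk/bc loop above. A small, unofficial Python check could be:
<source lang="python">
# Optional sanity check of ncyc.txt: column 3 (CPMG frequency) should equal
# column 1 (ncyc) divided by column 2 (time_T2); column 4 is the spectrometer
# frequency and column 5 the RMSD added from SPARKY.
fh = open('ncyc.txt')
for line in fh:
    fields = line.split()
    if not fields:
        continue
    ncyc, time_T2, nu_cpmg = int(fields[0]), float(fields[1]), float(fields[2])
    ok = abs(nu_cpmg - ncyc / time_T2) < 1e-3
    print fields[0], fields[1], fields[2], "OK" if ok else "MISMATCH"
fh.close()
</source>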
We will use '''ncyc.txt''' to make the spectra settings later, and it should look like this:<br>
'''ncyc time_T2 CPMG_frq sfrq RMSD'''
<source lang="text">
28 0.06 466.66666666666666666666 599.8908622 2.47e+03
0 0.06 0 599.8908622 2.34e+03
4 0.06 66.66666666666666666666 599.8908622 2.41e+03
32 0.06 533.33333333333333333333 599.8908622 2.42e+03
60 0.06 1000.00000000000000000000 599.8908622 2.45e+03
2 0.06 33.33333333333333333333 599.8908622 2.42e+03
10 0.06 166.66666666666666666666 599.8908622 2.42e+03
16 0.06 266.66666666666666666666 599.8908622 2.44e+03
8 0.06 133.33333333333333333333 599.8908622 2.39e+03
20 0.06 333.33333333333333333333 599.8908622 2.4e+03
50 0.06 833.33333333333333333333 599.8908622 2.42e+03
18 0.06 300.00000000000000000000 599.8908622 2.46e+03
40 0.06 666.66666666666666666666 599.8908622 2.41e+03
6 0.06 100.00000000000000000000 599.8908622 2.45e+03
12 0.06 200.00000000000000000000 599.8908622 2.45e+03
0 0.06 0 599.8908622 2.39e+03
24 0.06 400.00000000000000000000 599.8908622 2.45e+03
</source>

=== RMSD via nmrpipe showApod ===
We can also use the showApod RMSD.
<source lang="bash">
set FIDS=`cat ft2_files.ls`
set OUT=${PWD}/apod_rmsd.txt
set CWD=$PWD
rm $OUT

foreach I (`seq 1 ${#FIDS}`)
set FID=${FIDS[$I]}; set DIRN=`dirname $FID`
cd $DIRN
set apodrmsd=`showApod *.ft2 | grep "REMARK Automated Noise Std Dev in Processed Data:" | awk '{print $9}'`
echo $apodrmsd $DIRN >> $OUT
cd $CWD
end
cat $OUT
mv ncyc.txt ncyc_or.txt
paste ncyc_or.txt $OUT > ncyc.txt
</source>

== Prepare directory for relax run ==
Then we make a directory ready for relax.
<source lang="bash">
mkdir ../relax
cp ncyc.txt ../relax
cp peaks_list* ../relax
cd ../relax
</source>

== relax script for setting experiment settings to spectra ==
Add the following python relax script file to the relax directory.

This can be modified as wanted.<br>
This is to save "time" on the tedious work of setting the experimental conditions for each spectrum.

'''relax_3_spectra_settings.py'''
<source lang="python">
# Loop over the spectra settings.
ncycfile=open('ncyc.txt', 'r')

# Make empty ncyclist
ncyclist = []

i = 0
for line in ncycfile:
    ncyc = line.split()[0]
    time_T2 = float(line.split()[1])
    vcpmg = line.split()[2]
    set_sfrq = float(line.split()[3])
    rmsd_err = float(line.split()[4])

    print ncyc, time_T2, vcpmg

    # Test if spectrum is a reference
    if float(vcpmg) == 0.0:
        vcpmg = None
    else:
        vcpmg = round(float(vcpmg), 3)

    # Add ncyc to list
    ncyclist.append(int(ncyc))

    # Set the current spectrum id
    current_id = "Z_A%s"%(i)

    # Set the current experiment type.
    relax_disp.exp_type(spectrum_id=current_id, exp_type='SQ CPMG')

    # Set the peak intensity errors, as defined as the baseplane RMSD.
    spectrum.baseplane_rmsd(error=rmsd_err, spectrum_id=current_id)

    # Set the NMR field strength of the spectrum.
    spectrometer.frequency(id=current_id, frq=set_sfrq, units='MHz')

    # Relaxation dispersion CPMG constant time delay T (in s).
    relax_disp.relax_time(spectrum_id=current_id, time=time_T2)

    # Set the relaxation dispersion CPMG frequencies.
    relax_disp.cpmg_setup(spectrum_id=current_id, cpmg_frq=vcpmg)

    i += 1

# Specify the duplicated spectra.
#spectrum.replicated(spectrum_ids=['Z_A1', 'Z_A15'])

# The automatic way
dublicates = map(lambda val: (val, [i for i in xrange(len(ncyclist)) if ncyclist[i] == val]), ncyclist)
for dub in dublicates:
    ncyc, list_index_occur = dub
    if len(list_index_occur) > 1:
        id_list = []
        for list_index in list_index_occur:
            id_list.append('Z_A%s'%list_index)
        # We don't setup replications, since we have RMSD values from background noise
        print id_list
        #spectrum.replicated(spectrum_ids=id_list)

# Delete replicate spectrum
spectrum.delete('Z_A15')
</source>

== Analyse ==

'''NOTES about speed up of model selection:'''<br>
To speed up the model selection, see [[:Category:Time_of_running]].<br>

'''Monte-Carlo simulations:'''<br>
We will set the number of Monte-Carlo simulations to '''10''', for inspection.<br>
This will not affect model selection. <br>
For initial analyses where errors are not so important, the number of simulations can be dropped massively to speed things up. <br>
If errors are not important for specific cases, set the number of MC sims to 3-10, and the analysis will perform much more rapidly. <br>
The result is that the error estimates of the parameters are horrible, but in some cases, excluding publication, that is not such a problem.

'''Which models to choose for running:'''<br>
This text is based on [http://article.gmane.org/gmane.science.nmr.relax.devel/4426 this email thread discussion]: <br>
When you are analysing data, you would probably limit the number of models to 2-3. <br>
For example, if you know that all residues are experiencing '''slow exchange''', the '''LM63''' '''fast exchange''' model does not need to be used. <br>
It is interesting to see that sometimes the analytic models are selected and sometimes the numeric models. <br>
But this is an academic curiosity, it is probably not a practical question anyone analysing real dispersion data is interested in. <br>
The way an analysis would normally be performed is to first decide if the analytic or numeric approach is to be used. <br>
For the '''analytic approach''' with slow exchange, you only need the '''No Rex''' and '''CR72''' models. <br>
You could add the '''IT99''' model if you can see that pA >> pB in the spectra, i.e. the pB peak is tiny. If you take the '''numeric approach''', then the 'No Rex' and 'NS 2-site expanded' models can be used. <br>
Once you perform an initial analysis of all residues separately, you can then look at the dynamics parameter values and judge which spins to cluster together to have the same model of dynamics, then re-perform the analysis.

=== Analyse in GUI ===
Start relax in GUI mode
<source lang="bash">
relax_disp -g -t log_relax_4_model_sel.log
</source>

# Ctrl+n for new analysis
# Select the '''Relaxation dispersion analysis''' button -> Next
# Starting pipe: '''base pipe'''
# Pipe bundle: '''relax_disp''' -> Start
# We want to load the spins manually, so in the next window, go to "User functions (n-z) -> script"
# Select file_name: '''relax_2_spins.py''' -> OK
# Then click the Spin Isotopes button:
# The nuclear isotope name: 15N
# The spin ID string: @N* -> OK
# Then load the spectra: Select the "Add" button under the spectra list:
# The file name: '''peaks_list_max_standard.ser'''
# The spectrum ID string: auto
# Leave the rest of the fields as they are, they are not used.
# Push "Apply" and then '''Cancel'''
# We want to change the spectra properties by a script.
# Go to "User functions (n-z) -> script"
# Select file_name: '''relax_3_spectra_settings.py''' -> OK
# Before executing, it would be a good idea to save the state, to save the current setup.
# This '''state''' file will also be used for loading, before a later cluster/global fit analysis.
# Shift+Ctrl+s OR File -> Save as... '''ini_setup.bz2'''
# Make a directory for the output of the results, f.ex: '''model_sel_analyt'''.
# Point '''Results directory''' to '''model_sel_analyt'''.
# Set Monte-Carlo Simulations to '''10'''
# Select models: Lets take '''"R2eff", "No Rex", "TSMFK01", "LM63", "CR72", "CR72 full", "IT99"'''
# Save the state again, so the settings for models, monte-carlo settings and result directory are preserved.
# Shift+Ctrl+s OR File -> Save as... '''ini_run.bz2''' in the '''model_sel_analyt''' directory.
# Now push "Execute"

The analysis will probably take between 4-10 hours.<br>

=== Analyse via script ===
Add the following python relax script files to the relax directory.

'''relax_1_ini.py'''
<source lang="python">
# Taken from the relax disp manual, section 10.6.1 Dispersion script mode - the sample script.

# Create the data pipe.
pipe_name = 'base pipe'
pipe_bundle = 'relax_disp'
pipe.create(pipe_name=pipe_name, bundle=pipe_bundle, pipe_type='relax_disp')

# Create the spins
spectrum.read_spins(file="peaks_list_max_standard.ser", dir=None)

# Name the isotope for field strength scaling.
spin.isotope(isotope='15N')

# Read the spectra from the NMR seriesTab file. The "auto" will generate spectrum names of the form: Z_A{i}
spectrum.read_intensities(file="peaks_list_max_standard.ser", dir=None, spectrum_id='auto', int_method='height')

# Set the spectra experimental properties/settings.
script(file='relax_3_spectra_settings.py', dir=None)

# Save the program state before the run.
# This state file will also be used for loading, before a later cluster/global fit analysis.
state.save('ini_setup', force=True)
</source>

'''relax_4_model_sel.py'''
<source lang="python">
import os
from auto_analyses.relax_disp import Relax_disp

# Load the initial state setup
state.load(state='ini_setup.bz2')

# Set settings for the run.
results_directory = os.path.join(os.getcwd(), "model_sel_analyt")
pipe_name = 'base pipe'; pipe_bundle = 'relax_disp'
MODELS = ['R2eff', 'No Rex', 'TSMFK01', 'LM63', 'CR72', 'CR72 full', 'IT99']
GRID_INC = 21; MC_NUM = 10; MODSEL = 'AIC'

# Execute
Relax_disp(pipe_name=pipe_name, pipe_bundle=pipe_bundle, results_dir=results_directory, models=MODELS, grid_inc=GRID_INC, mc_sim_num=MC_NUM, modsel=MODSEL)
</source>

And then just start relax with
<source lang="bash">
relax_disp relax_1_ini.py -t log_relax_1_ini.log
relax_disp relax_4_model_sel.py -t log_relax_4_model_sel.log
</source>
The analysis will probably take between 4-10 hours.<br>

== Rerun from a "ini_setup.bz2" file ==
If something goes wrong, you can open the '''ini_setup.bz2''' in the '''model_sel_analyt''' directory.

Just start relax:
<source lang="bash">
relax_disp -g -t log_relax_4_model_sel.log
</source>
and open the '''ini_setup.bz2''' from File -> "Open relax state".<br>
It should jump to the analysis window, where you can make corrections and then click "Execute".

In script mode, just follow section [[Tutorial_for_Relaxation_dispersion_analysis_cpmg_fixed_time_recorded_on_varian_as_fid_interleaved#Analyse_via_script | Analyse_via_script]].

= Inspecting results from the relax analysis =
In the main directory, there should be an auto-saved '''final_state.bz2''' which can be opened to inspect the results.

After the analysis, several folders should be available, with data for each fitted model.
<source lang="bash">
R2eff/
No Rex/
TSMFK01/
LM63/
CR72/
CR72 full/
IT99/
final/
</source>

In each of these folders, there is a [[grace2images.py]] python file, which will as standard convert the grace script files to PNG files.

== Inspect graphs ==
You can convert all to PNG images, by:
<source lang="bash">
cd "R2eff"; ./grace2images.py; cd .. ;cd "No Rex"; ./grace2images.py; cd .. ;cd "TSMFK01"; ./grace2images.py; cd .. ;cd "LM63"; ./grace2images.py; cd .. ;cd "CR72"; ./grace2images.py; cd .. ;cd "CR72 full"; ./grace2images.py; cd .. ;cd "IT99"; ./grace2images.py; cd .. ;cd "final"; ./grace2images.py; cd .. ; find . -type f -name "*.png" #!And if you want to delete them.find . -type f -name "*.png" -exec rm -f {} \;</source> You can then quickly go through the fitted graphs for the models. == Convert log file to relax script ==If you made a logfile, then you can do convert it to the full relax script.<br>See [[Grep_log_file]] for this. == Compare values ==For the '''TSMFK01''' and for example the '''CR72''', the '''k_AB''' value can be compare <source lang="bash">cd model_sel_analytpaste "TSMFK01/k_AB.out" "CR72/k_AB.out" | awk '{print $2, $3, $6, $13}'</source> == Inspect model selection for residues == === Grep AIC selection from logfile ===If you have a log file.<source lang="bash">set IN=log_relax_4_model_sel.log ;set OUT=log_relax_4_model_sel_chosen_models.txt ; set FROM=`grep -n "AIC model selection" $IN | cut -d":" -f1` ;set TO=`grep -n "monte_carlo.setup(" $IN | cut -d":" -f1` ;sed -n ${FROM},${TO}p $IN > $OUT ;cat $OUT ;</source> === get spin.model ===See [[:Category:List_objects]] to get inspiration how to loop through the data class containers. You should open the '''final_state.bz2''' in the result directory. <source lang="python">state.load(state='final_state.bz2') # See which data is in the pipepipe.display() # print the spin model, first import spin_loopfrom pipe_control.mol_res_spin import spin_loop print("%20s %20s" % ("# Spin ID", "Model"))for spin, spin_id in spin_loop(return_id=True, skip_desel=True): print("%20s %20s" % (repr(spin_id), spin.model))</source> You can also in the GUI see this in the '''Spin Viewer window''' under "View -> Spin Viewer (Ctrl+t)". <br>Select a spin, and look for the Variable '''model'''. = Execute a clustering analysis ='''Notes about how to select residues for clustering. Based on [http://article.gmane.org/bingmane.science.nmr.relax.devel/csh4442 this email thread:] '''<br>Clustering is a manual operation and it should not be automated. <br>It is based on human logic and is highly subjective. <br>For example it could be decided that one analysis is performed whereby one motional process is assumed, i.e. one kex value for all exchangingspins. <br>Or it could be decided that there are two motional processes, so two clusters are created, each having their own kex. <br>Some spins with bizarre dynamics may be left out as 'free spins' and not used in the cluster. <br>If you just want all spins with '''Rex''' to be in one cluster, you could just use all spins where '''spin.model''' is not set to '''No Rex'''.
'''Notes about how clustering is performed in relax:'''<br>
All spins of one cluster ID will be optimised as one model. <br>
Several cluster IDs will result in those clusters being optimised separately, but again with all spins of each cluster together. <br>
Any spins not in a cluster ID (labelled as '''free spins''') will be optimised individually. <br>
Have a look at the '''model_loop()''' method of the '''specific_analyses.relax_disp.api''' module, <br>
and the function '''specific_analyses.relax_disp.disp_data.loop_cluster''' which it uses.

== Inspect residues for clustering ==
Let us select residues based on a criterion where the highest number of residues have been fitted to the same model.

Open the '''final_state.bz2''' in the relax GUI. <br>
You can see the selected model for each residue in the '''Spin viewer''' (View -> Spin viewer (Ctrl+T)). Look for the '''Variable''' '''model'''.

Open the relax prompt with '''Ctrl+p''' if you are in the GUI.<br>
Tip: You can copy the lines, and in the relax prompt, select "Paste Plus".
<source lang="python">
from pipe_control.mol_res_spin import spin_loop

# Open file for writing
cluster_file = "cluster_residues.txt"
f = open(cluster_file, 'w')

# Make a list to count the number of models
resi_models = []

for spin, mol_name, res_num, res_name, spin_id in spin_loop(full_info=True, return_id=True, skip_desel=True):
    # Write models to file
    f.write( str(spin_id) + " ; " + str(spin.model) + " ; " + str(mol_name) + " ; " + str(res_num) + " ; " + str(res_name) + "\n" )

    # Append models to list
    resi_models.append(spin.model)

# Count resi_models
c_resi_models = dict((i, resi_models.count(i)) for i in resi_models)
print c_resi_models

# Write count result to file
for key, val in c_resi_models.items():
    f.write( "# ; " + str(key) + " ; " + str(val) + "\n" )

# Close the file
f.close()
</source>

Copy '''cluster_residues.txt''' to the initial directory.

== Create new analysis for clustering ==
For the clustered analysis, you need to start a new analysis. <br>
You should not load the results from the final pipe, since this will likely be fatal for the clustered analysis. <br>
The auto-analysis is designed to take the pre-run directory name and load the results files for each model itself (not the state file). <br>
Each results file will be loaded into a temporary data pipe and the initial parameter values copied from that. <br>

So close relax, and then add these files.

=== Do clustering Analysis in GUI ===
Start relax in GUI mode
<source lang="bash">
relax_disp -g -t log_relax_5_cluster.log
</source>

# Open the '''ini_setup.bz2''' from File -> "Open relax state".
# Open the '''relax prompt''' with '''Ctrl+p''', and paste this in.
<source lang="python">
# Cluster residues
cluster_file = "cluster_residues.txt"
f = open(cluster_file, 'r')
for line in f:
    if line[0] == "#":
        continue
    else:
        spinid = line.split(";")[0].strip()
        spinmodel = line.split(";")[1].strip()

        # Deselect those spins not showing exchange for further analysis.
        if spinmodel == "No Rex":
            deselect.spin(spin_id=spinid, change_all=False)
        else:
            relax_disp.cluster('model_cluster', spinid)

f.close()

# Check which spins are clustered
print cdp.clustering

# Check for selected/deselected spins.
for spin, spin_id in spin_loop(return_id=True, skip_desel=False):
    print spin_id, spin.select
</source>
# Before executing, it would be a good idea to save the state after clustering.
# Shift+Ctrl+s OR File -> Save as... '''ini_setup_cluster.bz2'''
# Ctrl+d, right click "base pipe" and "Associate with a new auto analysis"
# Close the pipe viewer
# Make a directory for the output of the results, f.ex: '''model_clustering_analyt'''
# Point '''Results directory''' to '''model_clustering_analyt'''.
# Point '''Previous run directory''' to the previous result directory, where all the models had their folders. Values will be read from here: '''model_sel_analyt'''
# Set Monte-Carlo Simulations to '''50'''
# Select models: Lets take '''"R2eff", "No Rex", "TSMFK01"'''
# Now push "Execute"

=== Do clustering Analysis in script ===
Add the following python relax script file to the relax directory.

'''relax_5_cluster.py'''
<source lang="python">
"""Taken from the relax disp manual, section 10.6.1 Dispersion script mode - the sample script.

To run the script, simply type:

$ ../../../../../relax relax_5_cluster.py -tee relax_5_cluster.log
"""

import os
from auto_analyses.relax_disp import Relax_disp
from pipe_control.mol_res_spin import spin_loop

# Set settings for the run.
pre_run_directory = os.path.join(os.getcwd(), "model_sel_analyt")
results_directory = os.path.join(os.getcwd(), "model_clustering_analyt")
cluster_file = "cluster_residues.txt"

# Load the previous final state with results.
state.load(state='final_state.bz2', dir=pre_run_directory, force=False)

# Open file for writing
f = open(cluster_file, 'w')

# Make a list to count the number of models
resi_models = []

for spin, mol_name, res_num, res_name, spin_id in spin_loop(full_info=True, return_id=True, skip_desel=True):
    # Write models to file
    f.write( str(spin_id) + " ; " + str(spin.model) + " ; " + str(mol_name) + " ; " + str(res_num) + " ; " + str(res_name) + "\n" )

    # Append models to list
    resi_models.append(spin.model)

# Count resi_models
c_resi_models = dict((i, resi_models.count(i)) for i in resi_models)
print c_resi_models

# Write count result to file
for key, val in c_resi_models.items():
    f.write( "# ; " + str(key) + " ; " + str(val) + "\n" )

# Close the file
f.close()

##################
# Cluster file for selecting residues.
##################

# Load the initial state setup
state.load(state='ini_setup.bz2', force=True)

# Cluster residues
f = open(cluster_file, 'r')
for line in f:
    if line[0] == "#":
        continue
    else:
        spinid = line.split(";")[0].strip()
        spinmodel = line.split(";")[1].strip()

        # Deselect those spins not showing exchange for further analysis.
        if spinmodel == "No Rex":
            deselect.spin(spin_id=spinid, change_all=False)
        else:
            relax_disp.cluster('model_cluster', spinid)

f.close()

# Check which spins are clustered
print cdp.clustering

# Check for selected/deselected spins.
for spin, spin_id in spin_loop(return_id=True, skip_desel=False):
    print spin_id, spin.select

# Save the program state before the run.
state.save('ini_setup_cluster.bz2', force=True)

##################
# Run cluster analysis
##################

# Set settings for the run.
pipe_name = 'base pipe'; pipe_bundle = 'relax_disp'
MODELS = ['R2eff', 'No Rex', 'TSMFK01']
GRID_INC = 21; MC_NUM = 50; MODSEL = 'AIC'

# Execute
Relax_disp(pipe_name=pipe_name, pipe_bundle=pipe_bundle, results_dir=results_directory, models=MODELS, grid_inc=GRID_INC, mc_sim_num=MC_NUM, modsel=MODSEL, pre_run_dir=pre_run_directory)
</source>

And then just start relax with
<source lang="bash">
relax_disp relax_5_cluster.py -t log_relax_5_cluster.log
</source>
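For intuition about what the '''AIC''' model selection (MODSEL = 'AIC') does with the fitted models, the sketch below illustrates the criterion on invented chi-squared values; it is only an illustration of the idea, not relax's internal code:
<source lang="python">
# Illustration of AIC model selection: AIC = chi2 + 2*k, where chi2 is the
# optimised chi-squared of the fit and k the number of model parameters; the
# model with the lowest AIC is selected. The chi2 numbers are made up.
fits = {
    'No Rex':  {'chi2': 250.0, 'k': 1},   # R20 only
    'TSMFK01': {'chi2':  60.0, 'k': 3},   # R20A, dw, k_AB
    'CR72':    {'chi2':  55.0, 'k': 4},   # R20, pA, dw, kex
}

aic = dict((name, fit['chi2'] + 2*fit['k']) for name, fit in fits.items())
best = min(aic, key=aic.get)
print aic
print "Selected model:", best   # CR72: 63.0 beats TSMFK01: 66.0 and 'No Rex': 252.0
</source>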
= See also =
[[Category:Relaxation dispersion analysis]]
[[Category:Tutorials]]