Tutorial for Relaxation dispersion analysis cpmg fixed time recorded on varian as fid interleaved

Intro

This tutorial presently covers the relax_disp branch.
This branch is under development; to test it out, you need to use the source code. See Installation_linux#Checking_out_a_relax_branch.

This tutorial is based on the analysis of NMR data from the paper:

The inverted chevron plot measured by NMR relaxation reveals a native-like unfolding intermediate in acyl-CoA binding protein.
Kaare Teilum, Flemming M Poulsen, Mikael Akke.
Proceedings of the National Academy of Sciences of the United States of America (2006).
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1458987

The data is recorded as FID interleaved.

Preparation

You want to make a working directory containing the following folders:

peak_lists
spectrometer_data
scripts

You can create the folders with:

mkdir peak_lists spectrometer_data scripts

The folder spectrometer_data should contain the files fid and procpar, as output from recording the interleaved FID on the Varian spectrometer.
The folder peak_lists should contain the peak list in SPARKY list format.
In the folder scripts we put the scripts which help us process the files.

Get the process helper scripts

Go into the scripts directory and download these scripts there.

  1. convert_all.com
  2. fft_all.com
  3. CPMG_1_sort_pseudo3D_initialize_files.sh
  4. CPMG_2_convert_and_process.sh
  5. CPMG_3_fft_all.sh
  6. NMRPipe_to_Sparky.sh
  7. sparky_add.sh
  8. stPeakList.pl

Then make them executable and add them to the PATH.

cd scripts
# Change shell
tcsh

# Make them executable
chmod +x *.sh *.com *.pl

# Add scripts to PATH
setenv PATH ${PWD}:${PATH}

# Go back to previous directory
cd ..

Extract interleaved spectra, process to NMRPipe and do spectral processing

Extract interleaved and change format to NMRPipe

Sort out the interleaved FID with the script CPMG_1_sort_pseudo3D_initialize_files.sh.

# Copy data
cp -r spectrometer_data spectrometer_data_processed

# sort_pseudo3D and initialize files
cd spectrometer_data_processed
CPMG_1_sort_pseudo3D_initialize_files.sh

Now we make a conversion file to convert from the Varian binary format to NMRPipe.

  1. Click 'Read Parameters' and check 'Rance-Kay'.
  2. Remember to set the Y 'Observe Freq MHz' to N15.
  3. Click 'Save Script' to make the 'fid.com' file, then 'Quit', and run the next CPMG script.

Now it is time to convert all the FIDs from Varian format to NMRPipe with the script CPMG_2_convert_and_process.sh.

CPMG_2_convert_and_process.sh

Spectral processing

  1. Now we need to do the spectral processing of the spectra.
  2. Process one of the files normally; the next script will copy the processing script to the other folders.
  3. [m] -> Right-click Process 2D -> Basic 2D
  4. Save -> Execute -> Done; then Right-click File -> Select File -> test.ft2 -> Read/Draw -> Done
  5. If your spectra look reversed (i.e. your peaks do not seem to match your reference spectrum), it might be solved by changing
  6. the third lowest line of the script ([m]) to '| nmrPipe -fn FT -neg \'.
  7. Save -> Execute -> Done. Then push [r] to refresh.
  8. Press [h] to find P0 and P1, then push [m], change the parameters and update the script.
  9. The changes to '| nmrPipe -fn PS xxx \' should be made to the FIRST line with PS (the proton dimension).
  10. Save/Execute, then push [r] (read) and [e] (erase settings) to see the result in NMRDraw.
  11. Then run the next CPMG script.

As suggested in the relax manual, section 5.2.2 Spectral processing, the spectral processing script could look like:

File: nmrproc.com

#!/bin/csh

nmrPipe -in test.fid \
| nmrPipe  -fn SOL                                    \
| nmrPipe  -fn GM  -g1 5 -g2 10 -c 1.0                \
| nmrPipe  -fn ZF -auto -size 8000                    \
| nmrPipe  -fn FT -auto                               \
| nmrPipe  -fn PS -p0 214.00 -p1 -21.00 -di -verb     \
| nmrPipe  -fn TP                                     \
| nmrPipe  -fn SP -off 0.5 -end 0.98 -pow 2 -c 0.5    \
| nmrPipe  -fn ZF -auto -size 8000                    \
| nmrPipe  -fn FT -neg                                \
| nmrPipe  -fn PS -p0 0.00 -p1 0.00 -di -verb         \
| nmrPipe  -fn TP                                     \
| nmrPipe  -fn POLY -auto                             \
| nmrPipe  -fn EXT -left -sw                          \
   -ov -out test.ft2

Understand spectral processing

To understand the NMRPipe functions, you can look them up in the manual page: http://spin.niddk.nih.gov/NMRPipe/ref/nmrpipe/

See also the relax online manual for spectral processing.

A good book to look things up in is Keeler, Understanding NMR Spectroscopy, Second Edition.

nmrPipe function | Description | Comments
nmrPipe -fn SOL | Solvent Filter |
nmrPipe -fn GM -g1 5 -g2 10 -c 1.0 | Lorentz-to-Gauss Window, here for the measured direct dimension | The constant -c is set to 1.0, since the P1 phase correction is different from 0.0 (here -p1 -21.00); if -p1 were 0.0, then -c 0.5 would be used.
nmrPipe -fn ZF -auto -size 8000 | Zero Fill, here for the direct dimension | The -auto flag rounds the final size up to a power of 2, so here it is equivalent to: nmrPipe -fn ZF -size 8192.
nmrPipe -fn FT -auto | Complex Fourier Transform, here for the direct dimension |
nmrPipe -fn PS -p0 214.00 -p1 -21.00 -di -verb | Phase Correction, here for the direct dimension |
nmrPipe -fn TP | 2D Transpose XY->YX (YTP) | Transpose the matrix to work on the indirect dimension.
nmrPipe -fn SP -off 0.5 -end 0.98 -pow 2 -c 0.5 | Adjustable Sine Bell Window | The -pow 2 gives a sine-squared function; see Keeler p. 93 and p. 98 for the sine window description. The -end 0.98 means that 2% of the data is cut. -c is set to 0.5 since the P1 phase correction is 0.0 in the indirect dimension.
nmrPipe -fn ZF -auto -size 8000 | Zero Fill, here for the indirect dimension | The -auto flag rounds the final size up to a power of 2, so here it is equivalent to: nmrPipe -fn ZF -size 8192.
nmrPipe -fn FT -neg | Complex Fourier Transform, here for the indirect dimension | Negative, since the CPMG element in the pulse sequence makes the magnetization end up negative.
nmrPipe -fn PS -p0 0.00 -p1 0.00 -di -verb | Phase Correction, here for the indirect dimension | No phase correction needed.
nmrPipe -fn TP | 2D Transpose XY->YX (YTP) | Transpose the matrix back to work on the direct dimension.
nmrPipe -fn POLY -auto | Polynomial Subtract | For time-domain solvent correction and frequency-domain baseline correction.
nmrPipe -fn EXT -left -sw | Extract Region | -left extracts the left half of the sweep width, which has been centered on water.

Fourier transform all spectra

Now it is time to Fourier Transform all spectra with the script CPMG_3_fft_all.sh.

CPMG_3_fft_all.sh

Convert all *.ft2 files to ucsf format, so they can be opened in SPARKY

This is done by the script NMRPipe_to_Sparky.sh.

NMRPipe_to_Sparky.sh

Check that the peak list matches

sparky 0.fid/test.ucsf

SPARKY GUI

The keyboard shortcuts are listed in the manual [1]

First make the window bigger.
zo zooms out, zi zooms in.

ct sets the contour level.
Set the levels to 6 for both positive and negative.
Add 1 to the exponent, e.g. xxxe+03 -> xxxe+04, for both positive and negative.
Ok

rp reads peaks. Find your peak file, which should be in SPARKY list format:

../peak_lists/peaks.list

Click Create peaks, Close.

Shift all peaks

Select a peak and center it.
lt (LT) shows a list of peaks for a spectrum.
Double-click on the peak "A3N" in the list and zoom in with "zi".

Now you want to align your peaks, since they can be off-shifted.
First note down the current ppm values in w1 and w2.

A3N-HN	121.828	8.513

Push F1 for select mode, then drag the peak with the mouse or use "pc" for automatic peak centering.
Then click "Update" in the peak list, and note down the new values.

A3N-HN	121.681	8.514
We need to shift the nitrogen peaks by
(121.681 - 121.828) = -0.147 ppm,
and the proton peaks by
(8.514 - 8.513) = 0.001 ppm.

Exit SPARKY

Go to the peak_lists folder.

cd ../peak_lists/

We can add a value to a column using the script sparky_add.sh.

Correct Nitrogen

sparky_add.sh peaks.list '$2' -0.147 peaks_corr_N15.list

Correct Proton

sparky_add.sh peaks_corr_N15.list '$3' 0.001 peaks_corr_N15_1H.list
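The sparky_add.sh script itself is not reproduced here. As a rough idea of what such a column shift does, here is a minimal Python sketch, assuming a plain whitespace-separated SPARKY list (assignment, w1, w2) and using the offsets measured above; the function name and output formatting are only illustrative:

# sparky_shift.py: a sketch of the column correction done by sparky_add.sh.
def shift_column(infile, outfile, column, offset):
    """Add a constant ppm offset to one column of a SPARKY peak list."""
    with open(infile) as fin, open(outfile, 'w') as fout:
        for line in fin:
            fields = line.split()
            try:
                fields[column] = '%.3f' % (float(fields[column]) + offset)
            except (IndexError, ValueError):
                # Header or blank line: copy unchanged.
                fout.write(line)
                continue
            fout.write('%-14s %9s %9s\n' % (fields[0], fields[1], fields[2]))

# Shift the nitrogen (w1) and proton (w2) columns by the offsets found above.
shift_column('peaks.list', 'peaks_corr_N15.list', 1, -0.147)
shift_column('peaks_corr_N15.list', 'peaks_corr_N15_1H.list', 2, 0.001)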

Check and auto center peaks

Now go into Sparky again and read the peak list.

cd ..
cd spectrometer_data_processed
sparky 0.fid/test.ucsf

rp Choose ../peak_lists/peaks_corr_N15_1H.list
Create peaks, close.
zo zoom out. ct set contour.
lt and go through peaks, and auto center with pc.

Problematic peaks:

H30N-HN: not possible to auto center it in the middle; it lies next to L47 and E4.
A57N-HN / D68N-HN: in the original peak list A57N-HN is 121.526 7.944 and D68N-HN is 121.511 7.922, but both are centered to 121.409 7.933.

Manually alter peaks

Save the file to ../peak_lists/peaks_corr_peak_center.list and then alter the values manually.

cp ../peak_lists/peaks_corr_peak_center.list ../peak_lists/peaks_corr_final.list 
gedit ../peak_lists/peaks_corr_final.list &

Then alter to:

H30N-HN 117.794 8.045
A57N-HN	121.417 7.944
D68N-HN	121.402 7.922

Then check again in sparky.

Check for peak movement

As stated in the relax manual section 5.2.1 Temperature control and calibration, the pulse sequence can put a lot of power into the sample.

It is therefore good practice to inspect for peak movements, by overlaying all spectra:

Open all the files, and overlay them with SPARKY command ol.

sparky 0.fid/test.ucsf 1.fid/test.ucsf 2.fid/test.ucsf 3.fid/test.ucsf 4.fid/test.ucsf

Change the colours for the different spectra with the contour command ct.
Then overlay with "ol" and make sure no peaks move around.

Measuring peak heights

We will use the NMRPipe program seriesTab to measure the intensities.

seriesTab needs an input file where the ppm values from a SPARKY list have been converted to spectral points.
The spectral point values depend on the spectral processing parameters.

Generate spectral point file

Create a file with spectral point information using the script stPeakList.pl.

stPeakList.pl 0.fid/test.ft2 ../peak_lists/peaks_corr_final.list > peaks_list.tab
cat peaks_list.tab

Make file with paths to .ft2 files

Then we make a file listing the paths to the .ft2 files.

ls -v -d -1 */*.ft2 > ft2_files.ls
cat ft2_files.ls
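The -v flag gives a natural (numeric) sort, so that for example 10.fid comes after 9.fid and the spectrum order matches the ncyc list. If you prefer Python, a small sketch doing the same thing could look like this (the *.fid/test.ft2 layout is assumed from the processing above):

# list_ft2.py: write the .ft2 file paths in numeric order, like 'ls -v'.
import glob
import re

ft2_files = sorted(glob.glob('*.fid/test.ft2'),
                   key=lambda path: int(re.match(r'(\d+)\.fid', path).group(1)))

with open('ft2_files.ls', 'w') as fh:
    fh.write('\n'.join(ft2_files) + '\n')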

Measure the height or sum in a spectral point box

seriesTab -in peaks_list.tab -out peaks_list_max_standard.ser -list ft2_files.ls -max
seriesTab -in peaks_list.tab -out peaks_list_max_dx1_dy1.ser -list ft2_files.ls -max -dx 1 -dy 1

OR make the sum in a box:

seriesTab -in peaks_list.tab -out peaks_list_sum_dx1_dy1.ser -list ft2_files.ls -sum -dx 1 -dy 1

Extract the spectra settings from Varian procpar file

Now we want to make a settings file we can read in relax.

set NCYCLIST=`awk '/^ncyc /{f=1;next}f{print $0;exit}' procpar`; echo $NCYCLIST
set TIMET2=`awk '/^time_T2 /{f=1;next}f{print $2;exit}' procpar`; echo $TIMET2
set SFRQ=`awk '/^sfrq /{f=1;next}f{print $2;exit}' procpar`; echo $SFRQ

foreach I (`seq 2 ${#NCYCLIST}`)
set NCYC=${NCYCLIST[$I]}; set FRQ=`echo ${NCYC}/${TIMET2} | bc -l`; echo $NCYC $TIMET2 $FRQ $SFRQ >> ncyc.txt
end
cat ncyc.txt
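If you would rather do this extraction in Python than in tcsh/awk, a minimal sketch could look like the following. It assumes the usual procpar layout, where the values line directly follows the parameter name line and starts with a count of the values; the helper name is only illustrative:

# extract_procpar.py: write ncyc.txt with columns ncyc, time_T2, nu_CPMG, sfrq.
def procpar_values(filename, parameter):
    """Return the list of values for a parameter in a Varian procpar file."""
    with open(filename) as fh:
        lines = fh.readlines()
    for i, line in enumerate(lines):
        if line.startswith(parameter + ' '):
            # The next line holds the value count followed by the values.
            return lines[i + 1].split()[1:]
    raise ValueError("parameter '%s' not found in %s" % (parameter, filename))

ncyc_list = [int(v) for v in procpar_values('procpar', 'ncyc')]
time_t2 = float(procpar_values('procpar', 'time_T2')[0])
sfrq = float(procpar_values('procpar', 'sfrq')[0])

with open('ncyc.txt', 'w') as fh:
    for ncyc in ncyc_list:
        # The CPMG frequency is ncyc/time_T2; the reference (ncyc=0) gives 0.
        fh.write('%i %s %s %s\n' % (ncyc, time_t2, ncyc / time_t2, sfrq))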

Analyse in relax

Making a spin file from the SPARKY list

relax does not yet have the possibility to read spins from a SPARKY file. See the support request.

So we create one.

set ATOMS=`tail -n+4 peaks_list.tab | awk '{print $7}'`
set SCRIPT=relax_2_spins.py

foreach I (`seq 1 ${#ATOMS}`)
set ATOM=${ATOMS[$I]}; set SPIN=`echo $ATOM | sed -e "s/N-HN//g"`; set RESN=`echo $SPIN | sed -e "s/[0-9]*//g"`; set RESI=`echo $SPIN | sed -e "s/[A-Za-z]//g"`
echo $ATOM $SPIN $RESN $RESI
echo "spin.create(spin_name='N', spin_num=$I, res_name='$RESN', res_num=$RESI, mol_name=None)" >> $SCRIPT
end

cat $SCRIPT
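The same spin file can be generated with a short Python sketch instead of the tcsh loop, under the same assumptions about peaks_list.tab (three header lines, with the assignment such as A3N-HN in column 7):

# make_spins.py: write relax_2_spins.py with one spin.create() call per peak.
import re

with open('peaks_list.tab') as fh, open('relax_2_spins.py', 'w') as script:
    spin_num = 0
    for line in fh.readlines()[3:]:
        fields = line.split()
        if len(fields) < 7:
            continue
        match = re.match(r'([A-Za-z]+)(\d+)N-HN', fields[6])   # e.g. 'A3N-HN'
        if not match:
            continue
        spin_num += 1
        res_name, res_num = match.group(1), int(match.group(2))
        script.write("spin.create(spin_name='N', spin_num=%i, res_name='%s', "
                     "res_num=%i, mol_name=None)\n" % (spin_num, res_name, res_num))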

Measure the background noise "RMSD" in each of the .ft2 files

There are two ways to get the background RMSD noise:

  1. For whole spectrum: http://www.cgl.ucsf.edu/home/sparky/manual/views.html#Noise
  2. For a region: http://www.cgl.ucsf.edu/home/sparky/manual/extensions.html#RegionRMSD

We take the background noise of the whole spectrum, to save time.

sparky 0.fid/test.ucsf

Then use st and recompute for 10000 points.
It should give a value of the order of 2.47e+03 or similar.
Add the values to ncyc.txt as the next column.

Repeat for all spectra.

Prepare directory for relax run

Then we make a directory ready for relax

mkdir ../relax
cp ncyc.txt ../relax
cp peaks_list* ../relax
cp relax_2_spins.py ../relax
cd ../relax

relax script for setting the experiment settings of the spectra

Add the following Python relax script file to the relax directory:

relax_3_spectra_settings.py

# Loop over the spectra settings.
ncycfile=open('ncyc.txt','r')

# Make empty ncyclist
ncyclist = []

i = 0
for line in ncycfile:
    ncyc = line.split()[0]
    time_T2 = float(line.split()[1])
    vcpmg = line.split()[2]
    set_sfrq = float(line.split()[3]) * 1e6
    rmsd_err = float(line.split()[4])

    print ncyc, time_T2, vcpmg

    # Test if spectrum is a reference
    if float(vcpmg) == 0.0:
        vcpmg = None
    else:
        vcpmg = round(float(vcpmg),3)

    # Add ncyc to list
    ncyclist.append(int(ncyc))

    # Set the current spectrum id
    current_id = "Z_A%s"%(i)

    # Set the peak intensity errors, as defined as the baseplane RMSD.
    spectrum.baseplane_rmsd(error=rmsd_err, spectrum_id=current_id)

    # Set the NMR field strength of the spectrum.
    spectrometer.frequency(id=current_id, frq=set_sfrq)

    # Relaxation dispersion CPMG constant time delay T (in s).
    relax_disp.relax_time(spectrum_id=current_id, time=time_T2)

    # Set the relaxation dispersion CPMG frequencies.
    relax_disp.cpmg_frq(spectrum_id=current_id, cpmg_frq=vcpmg)

    i += 1

# Specify the duplicated spectra.
#spectrum.replicated(spectrum_ids=['Z_A1', 'Z_A15'])

# The automatic way
dublicates = map(lambda val: (val, [i for i in xrange(len(ncyclist)) if ncyclist[i] == val]), ncyclist)
for dub in dublicates:
    ncyc, list_index_occur = dub
    if len(list_index_occur) > 1:
        id_list = []
        for list_index in list_index_occur:
            id_list.append('Z_A%s'%list_index)
        # We don't setup replications, since we have RMSD values from background noise
        print id_list
        #spectrum.replicated(spectrum_ids=id_list)

# Delete replicate spectrum
spectrum.delete('Z_A15')

Analyse

NOTE: to speed up the fitting (see Category:Time_of_running), we set the number of Monte Carlo simulations to 10, for inspection.

This will not affect model selection.
For initial analyses where errors are not so important, the number of simulations can be dropped massively to speed things up.
If errors are not important for a specific case, set the number of MC simulations to 3-10 and the analysis will run much faster.
The result is that the error estimates of the parameters are horrible, but in some cases, excluding publication, that is not such a problem.

Analyse in GUI

Start relax in GUI mode

relax_disp -g -l logfile.txt
  1. Ctrl+n for new analysis
  2. Select Relaxation dispersion analysis button -> Next
  3. CPMG, fixed time -> Next
  4. Starting pipe: base pipe
  5. Pipe bundle: relax_disp -> Start
  6. We want to load the spins manually, so in the next window go to "User functions (n-z) -> script"
  7. Select file_name: relax_2_spins.py -> OK
  8. Then click the Spin Isotopes button:
  9. The nuclear isotope name: 15N
  10. The spin ID string: @N* -> OK
  11. To load the spectra, select the "Add" button under the spectra list:
  12. The file name: peaks_list_max_standard.ser
  13. The spectrum ID string: auto
  14. Push "Apply" and then "Cancel"
  15. We want to change the spectra properties by a script.
  16. Go to "User functions (n-z) -> script"
  17. Select file_name: relax_3_spectra_settings.py -> OK
  18. Set Monte-Carlo Simulations to 10
  19. Before executing, it would be a good idea to save the state, to keep the current setup.
  20. Shift+Ctrl+s OR File-> Save as... prerun.bz2
  21. Now push "Execute"

Analyse via script

Add the following Python relax script file to the relax directory:

relax_1_ini.py

# Taken from the relax disp manual, section 10.6.1 Dispersion script mode - the sample script
# Python module imports.
from os import sep

# relax module imports.
from auto_analyses.relax_disp import Relax_disp

# Analysis variables.
#####################

# The dispersion models.
MODELS = ['R2eff', 'No Rex', 'LM63', 'CR72', 'IT99', 'NS 2-site expanded']

# The grid search size (the number of increments per dimension).
GRID_INC = 21

# The number of Monte Carlo simulations to be used for error analysis at the end of the analysis.
MC_NUM = 10

# The model selection technique to use.
MODSEL = 'AIC'

# Experiment settings
#set_dir = "spectrometer_data_processed"
set_dir = None

# Set up the data pipe.
#######################

# Create the data pipe.
pipe_name = 'base pipe'
pipe_bundle = 'relax_disp'
pipe.create(pipe_name=pipe_name, bundle=pipe_bundle, pipe_type='relax_disp')

# Set the relaxation dispersion experiment type.
relax_disp.exp_type('cpmg fixed')

# Create the spins
script(file='relax_2_spins.py', dir=set_dir)

# Name the isotope for field strength scaling.
spin.isotope(isotope='15N')

# Read the spectrum from NMRSeriesTab file. The "auto" will generate spectrum name of form: Z_A{i}
spectrum.read_intensities(file="peaks_list_max_standard.ser", dir=set_dir, spectrum_id='auto', int_method='height')

# Set the spectra experimental properties/settings.
script(file='relax_3_spectra_settings.py', dir=set_dir)

# Auto-analysis execution.
##########################

# Save the program state before run.
state.save('pre_run', force=True)

# Do not change!
Relax_disp(pipe_name=pipe_name, pipe_bundle=pipe_bundle, models=MODELS, grid_inc=GRID_INC, mc_sim_num=MC_NUM, modsel=MODSEL)

And then just start relax with:

relax_disp relax_1_ini.py
# Or with logfile
relax_disp relax_1_ini.py -l logfile.txt

Rerun from a "pre_run.bz2" file

If something goes wrong, you can open the pre_run.bz2.

Just start relax:

relax_disp -g -l logfile.log

and open the pre_run.bz2 from File->"Open relax state".
It should jump to the analysis window, and you can click "Execute"

In a script it would be as follows (see the manual):

state.load(state='pre_run.bz2')
from auto_analyses.relax_disp import Relax_disp
import pipe_control

print pipe.display()
pipe_name = 'base pipe'; pipe_bundle = 'relax_disp'

MODELS = ['R2eff', 'No Rex', 'LM63', 'CR72', 'IT99', 'NS 2-site expanded']
GRID_INC = 21; MC_NUM = 10; MODSEL = 'AIC'


print pipe_control.spectrum.get_ids()

Relax_disp(pipe_name=pipe_name, pipe_bundle=pipe_bundle, models=MODELS, grid_inc=GRID_INC, mc_sim_num=MC_NUM, modsel=MODSEL)

Inspecting results from the relax analysis

In the main directory, there should be an auto-saved final_state.bz2, which can be opened to inspect the results.

After the analysis, several folders should be available, with data for each fitted model.

CR72/
final/
IT99/
LM63/
No Rex/
NS 2-site expanded/
R2eff/

In each of these folders there is a grace2images.py Python script, which by default converts the Grace script files to PNG files.

Inspect graphs

You can convert them all to PNG images by:

cd "CR72"; ./grace2images.py; cd .. ;
cd "final"; ./grace2images.py; cd .. ;
cd "IT99"; ./grace2images.py; cd .. ;
cd "No Rex"; ./grace2images.py; cd .. ;
cd "NS 2-site expanded"; ./grace2images.py; cd .. ;
cd "R2eff"; ./grace2images.py; cd .. ;
find . -type f -name "*.png"

# And to delete them
find . -type f -name "*.png" -exec rm -f {} \;

You can then quickly go through the fitted graphs for the models.

Convert log file to relax script

If you made a logfile, then you can convert it to the full relax script.
See Grep_log_file for this.

Inspect model selection

With logfile

If you have a log file:

set IN=logfile.txt ;
set OUT=grep_log_to_model_sel.txt ;

set FROM=`grep -n "AIC model selection" $IN | cut -d":" -f1` ;
set TO=`grep -n "monte_carlo.setup(" $IN | cut -d":" -f1` ;
sed -n ${FROM},${TO}p $IN > $OUT ;
cat $OUT ;
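The same extraction can be done with a few lines of Python, as a sketch assuming a single model selection block in the log file:

# grep_model_sel.py: copy the log section from the AIC model selection
# down to the first monte_carlo.setup() call.
in_block = False
with open('logfile.txt') as fin, open('grep_log_to_model_sel.txt', 'w') as fout:
    for line in fin:
        if 'AIC model selection' in line:
            in_block = True
        if in_block:
            fout.write(line)
        if in_block and 'monte_carlo.setup(' in line:
            break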

In relax script

See Category:List_objects to get inspiration on how to loop through the data class containers.

state.load(state='final_state.bz2')

# See which data is in the pipe
pipe.display()
for p in dir(cdp): print p

# 
print cdp.mol
for resiob in cdp.mol[0].res: print resiob.name, resiob.num, resiob.spin[0].model

In relax GUI

  1. Open relax state (Ctrl+o) : final_state.bz2
  2. View -> Spin Viewer (Ctrl+t)

Select a spin, and look for the variable "model".

Or open the relax prompt (Ctrl+p) and write

for resiob in cdp.mol[0].res: print resiob.name, resiob.num, resiob.spin[0].model

Select residues for clustering

Let us select residues based on the criterion that the same model has been fitted to them.

resi_models = []
for resiob in cdp.mol[0].res:
    resi_models.append(resiob.spin[0].model)
print resi_models
# Count resi_models
dublicates = map(lambda val: (val, [i for i in xrange(len(resi_models)) if resi_models[i] == val]), resi_models)
for dub in dublicates:
    print dub[0], len(dub[1])
# We see that "NS 2-site expanded" is most represented
# We now cluster these residues
sel_residues = []
for resiob in cdp.mol[0].res:
   if resiob.spin[0].model == 'NS 2-site expanded':
       sel_residues.append( [resiob.spin[0].num, resiob.spin[0]._res_index, resiob.spin[0]._res_name, resiob.spin[0]._res_num, resiob.spin[0].model ] )

for p in sel_residues: print p

Now we want to cluster the residues.
Use User functions (n-z) -> relax_disp -> cluster.
Set the cluster ID to NS2_cluster and the spin ID string to :5@N for the nitrogen of residue number 5.
Continue for the residues you want to cluster, keeping the same cluster ID.

You can inspect which residues you have clustered in the prompt.

cdp.clustering

Since we have our selected list from above, it is faster to do:

for spin_nu, res_ind, resn, resi, model in sel_residues:
   relax_disp.cluster('NS2_cluster', ":%s@N"%resi)
cdp.clustering

Execute a clustering analysis

Put these lines into a file: cluster.py

The commands can also be performed in the GUI:

  1. pipe.copy -> User functions (n-z) -> pipe -> copy
  2. pipe.switch -> User functions (n-z) -> pipe -> switch
  3. relax_disp.select_model -> User functions (n-z) -> relax_disp -> select_model
  4. minimise -> User functions (a-m) -> minimise
  5. and so forth...

set_id="cluster NS 2-site expanded"

pipe.copy(pipe_from='NS 2-site expanded', pipe_to=set_id, bundle_to='relax_disp')
pipe.switch(pipe_name='cluster NS 2-site expanded')
relax_disp.select_model(model='NS 2-site expanded')

# Is this necessary since it should be contained in the previous pipe??
#value.copy(pipe_from='NS 2-site expanded', pipe_to='cluster NS 2-site expanded', param='pA')
#value.copy(pipe_from='NS 2-site expanded', pipe_to='cluster NS 2-site expanded', param='pB')
#value.copy(pipe_from='NS 2-site expanded', pipe_to='cluster NS 2-site expanded', param='dw')
#value.copy(pipe_from='NS 2-site expanded', pipe_to='cluster NS 2-site expanded', param='kex')
#value.copy(pipe_from='NS 2-site expanded', pipe_to='cluster NS 2-site expanded', param='tex')

minimise(min_algor='simplex', line_search=None, hessian_mod=None, hessian_type=None, func_tol=1e-25, grad_tol=None, max_iter=10000000, constraints=True, scaling=True, verbosity=1)

relax_disp.plot_disp_curves(dir=set_id, force=True)
value.write(param='pA', file='pA.out', dir=set_id, scaling=1.0, comment=None, bc=False, force=True)
value.write(param='pB', file='pB.out', dir=set_id, scaling=1.0, comment=None, bc=False, force=True)
grace.write(x_data_type='res_num', y_data_type='pA', spin_id=None, plot_data='value', file='pA.agr', dir=set_id, force=True, norm=False)
grace.write(x_data_type='res_num', y_data_type='pB', spin_id=None, plot_data='value', file='pB.agr', dir=set_id, force=True, norm=False)
value.write(param='dw', file='dw.out', dir=set_id, scaling=1.0, comment=None, bc=False, force=True)
grace.write(x_data_type='res_num', y_data_type='dw', spin_id=None, plot_data='value', file='dw.agr', dir=set_id, force=True, norm=False)
value.write(param='kex', file='kex.out', dir=set_id, scaling=1.0, comment=None, bc=False, force=True)
value.write(param='tex', file='tex.out', dir=set_id, scaling=1.0, comment=None, bc=False, force=True)
grace.write(x_data_type='res_num', y_data_type='kex', spin_id=None, plot_data='value', file='kex.agr', dir=set_id, force=True, norm=False)
grace.write(x_data_type='res_num', y_data_type='tex', spin_id=None, plot_data='value', file='tex.agr', dir=set_id, force=True, norm=False)
grace.write(x_data_type='res_num', y_data_type='chi2', spin_id=None, plot_data='value', file='chi2.agr', dir=set_id, force=True, norm=False)

results.write(file='results', dir=set_id, compress_type=1, force=True)

and then run it.

script(file='cluster.py', dir=None)

See also