Open MPI is an implementation of the Message Passing Interface (MPI) standard, used in relax for parallelised calculations.

== mpi4py OpenMPI ==
This package provides Python bindings for the Message Passing Interface (MPI) standard. It is implemented on top of the MPI-1/2/3 specifications and its API is based on the standard MPI-2 C++ bindings.
Gary has achieved near-perfect scaling efficiency:
https://mail.gna.org/public/relax-devel/2007-05/msg00000.html
=== Dependencies ===
# Python 2.4 to 2.7 or 3.0 to 3.4, or a recent PyPy release.
# A functional MPI 1.x/2.x/3.x implementation like MPICH or Open MPI built with shared/dynamic libraries.
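A quick way to check that these dependencies are in place (a sketch; the exact paths depend on the local installation):
<source lang="bash">
# Check the Python version (2.4-2.7 or 3.0-3.4 for this mpi4py release).
python --version
# Check that an MPI implementation is installed, and which one it is.
mpirun --version
which mpicc
</source>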
=== How does it work? ===
If mpirun is started with "-np 4", relax will get 4 "rank" processes. <br>
relax uses these processes to, for example:
# Simultaneously run 10 Monte Carlo simulations.
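For example, to start relax itself in multi-processor mode with one master and three slave processes (a minimal sketch, assuming relax is on the PATH):
<source lang="bash">
# Start relax in multi-processor mode: 1 master + 3 slave processes.
mpirun -np 4 relax --multi='mpi4py'
</source>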
=== Install OpenMPI on Linux and set up the environment ===
See https://www10.informatik.uni-erlangen.de/Cluster/
<source lang="bash">
# Show what loading does
module show openmpi-x86_64
# or
module show openmpi-1.10-x86_64
# See if anything is loaded
module list
# Load
module load openmpi-x86_64
# Or
module load openmpi-1.10-x86_64
# See list
module list
# For a 64-bit computer, symlink mpicc into the PATH.
sudo ln -s /usr/lib64/openmpi/bin/mpicc /usr/bin/mpicc
# or
sudo ln -s /usr/lib64/openmpi-1.10/bin/mpicc /usr/bin/mpicc
</source>
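After loading the module or creating the symlink, it can be verified that the MPI wrapper compiler and launcher are found:
<source lang="bash">
# Check that the wrapper compiler and launcher are on the PATH.
which mpicc mpirun
# Show which Open MPI version the launcher belongs to.
mpirun --version
</source>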
== Install mpi4py ==
=== Linux and Mac ===
Remember to check if there are newer versions of [https://bitbucket.org/mpi4py/mpi4py/downloads/ mpi4py]. <br>
The [https://bitbucket.org/mpi4py/mpi4py mpi4py] library can be installed on all UNIX systems by typing:
<source lang="bash">
# In the bash shell:
v=1.3.1
# Or, if in the tcsh shell:
set v=1.3.1
pip install https://bitbucket.org/mpi4py/mpi4py/downloads/mpi4py-$v.tar.gz
pip install https://bitbucket.org/mpi4py/mpi4py/downloads/mpi4py-$v.tar.gz --upgrade
# Or, to build with a non-default Python interpreter:
wget https://bitbucket.org/mpi4py/mpi4py/downloads/mpi4py-$v.tar.gz
tar -xzf mpi4py-$v.tar.gz
rm mpi4py-$v.tar.gz
cd mpi4py-$v
# Use the path to the python to build with
python setup.py build
python setup.py install
cd ..
rm -rf mpi4py-$v
|lang="bash"
}}
Then test the installation:
<source lang="python">
python
import mpi4py
mpi4py.__file__
</source>
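To also confirm which mpi4py version was picked up:
<source lang="bash">
# Print the installed mpi4py version.
python -c "import mpi4py; print(mpi4py.__version__)"
</source>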
== relax in multi-processor mode ==
'''How many processors should I start?''' <br>
A good starting point is the number of CPU cores plus one, since the master process does little computation itself (the helper scripts below use NPROC + 1 for this reason). You can keep trying different values until you get good results:
<source lang="bash">
# With the shell
mpirun -np 4 echo "hello world"
# With python
mpirun -np 4 python -m mpi4py helloworld
# With newer versions of mpirun, the --report-bindings option works
mpirun --report-bindings -np 11 echo "hello world"
mpirun --report-bindings -np 12 echo "hello world"
# This is too much
</source>
This code runs in the GUI, the script UI and the prompt UI, i.e. everywhere.
=== Helper start scripts ===
If you have several versions or development branches of relax installed, you may find some of these scripts useful; put them somewhere in your PATH.
==== Script for forcing relax to run on a server computer ====
This script exemplifies a setup where the above installation requirements are met on one server computer, ''haddock'', and where satellite computers are forced to run relax on this server.
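A minimal sketch of such a wrapper (the server name, process count and shared working directory are assumptions to adapt):
<source lang="bash">
#!/bin/bash
# Hypothetical wrapper: force relax to run on the server 'haddock', even when
# started from a satellite machine.  Assumes passwordless ssh, a working
# directory shared between the machines (e.g. NFS), and relax, OpenMPI and
# mpi4py installed on the server.
SERVER=haddock
NP=4    # 1 master + 3 slave processes; match the server's core count.

if [ "$(hostname -s)" = "$SERVER" ]; then
    mpirun -np "$NP" relax --multi='mpi4py' "$@"
else
    # Re-run the same command on the server, in the same directory.
    exec ssh "$SERVER" "cd '$PWD' && mpirun -np $NP relax --multi='mpi4py' $*"
fi
</source>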
==== Script for running relax with the maximum number of processors available ====
This script exemplifies a setup for running relax with the maximum number of processors available.
<source lang="bash">
#!/bin/tcsh
# Set the number of available CPUs.
set NPROC=`nproc`
# Use one process more than CPUs: 1 master + NPROC slaves.
set NP=`echo $NPROC + 1 | bc `
echo "Running relax with NP=$NP in multi-processor mode"
mpirun -np $NP relax --multi='mpi4py' $argv
</source>
==== Script for forcing relax to run on a server computer with OpenMPI ====
<source lang="bash">
#!/bin/tcsh
#set NPROC=`nproc`
set NPROC=10
set NP=`echo $NPROC + 1 | bc `
# Run relax in multi-processor mode.
mpirun -np $NP relax --multi='mpi4py' $argv
</source>
== Setting up relax on the super computer Beagle2 ==
Please see the post from Lora Picton: [[relax_on_Beagle2]]

Message: http://thread.gmane.org/gmane.science.nmr.relax.user/1821

== Commands and FAQ about mpirun ==
See Oracle's page on mpirun and the Open MPI manual:
# https://docs.oracle.com/cd/E19923-01/820-6793-10/ExecutingPrograms.html
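The locally installed documentation can also be queried directly:
<source lang="bash">
# Built-in documentation for the locally installed Open MPI.
mpirun --help
man mpirun
# Show how the local Open MPI was configured and built.
ompi_info | less
</source>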
=== Find the number of sockets, cores and threads ===
See http://blogs.cisco.com/performance/open-mpi-v1-5-processor-affinity-options
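On Linux, the layout can be read with the lscpu command, for example:
<source lang="bash">
# Sockets, cores per socket, threads per core and total CPUs.
lscpu | grep -E '^(Socket|Core|Thread|CPU\(s\))'
# Total number of processing units available.
nproc
</source>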
=== Test binding to socket ===
<source lang="bash">
module load openmpi-x86_64
</source>
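A minimal binding test, assuming a newer mpirun that understands <code>--map-by</code> and <code>--bind-to</code> (older 1.5/1.6 releases use <code>--bysocket --bind-to-socket</code> instead):
<source lang="bash">
# Bind one process per socket and print the resulting bindings.
mpirun --report-bindings --map-by socket --bind-to socket -np 2 echo "hello world"
</source>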
== Use mpirun with an ssh hostfile ==
'''NOTE:''' {{caution|This is a test only. It appears not to function well!}}
See
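A minimal sketch of a hostfile-based launch, assuming passwordless ssh between the machines and the hypothetical host names ''node1'' and ''node2'':
<source lang="bash">
# Hostfile: one line per machine, with the number of slots (cores) to use.
cat > hostfile <<EOF
node1 slots=4
node2 slots=4
EOF
# Open MPI starts the remote processes over ssh by default.
mpirun --hostfile hostfile -np 8 echo "hello world"
</source>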
== Running Parallel Jobs with a queue system ==
See:
# https://docs.oracle.com/cd/E19923-01/820-6793-10/ExecutingBatchPrograms.html
=== Running Parallel Jobs in the Sun Grid Engine Environment ===
See
# https://www.open-mpi.org/faq/?category=building#build-rte-sge
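A minimal sketch of an SGE submission script; the parallel environment name (here ''orte'') and the slot count are site-specific assumptions:
<source lang="bash">
#!/bin/bash
#$ -S /bin/bash
#$ -N relax_mpi
#$ -cwd
#$ -pe orte 12
# With SGE tight integration, mpirun picks up the allocated slots automatically.
mpirun relax --multi='mpi4py'
</source>
Submit the script with <code>qsub</code>.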
=== Running Parallel Jobs in PBS/Torque ===
See
# https://www.open-mpi.org/faq/?category=building#build-rte-tm
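A minimal sketch of a PBS/Torque submission script; the resource request is an assumption to adapt to the local cluster:
<source lang="bash">
#!/bin/bash
#PBS -N relax_mpi
#PBS -l nodes=1:ppn=12
cd "$PBS_O_WORKDIR"
# With Torque (tm) support built in, mpirun uses the allocated nodes automatically.
mpirun relax --multi='mpi4py'
</source>
Submit the script with <code>qsub</code>.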
=== Running Parallel Jobs in SLURM ===
See
# https://www.open-mpi.org/faq/?category=building#build-rte-sge
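A minimal sketch of a SLURM submission script; the task count is an assumption:
<source lang="bash">
#!/bin/bash
#SBATCH --job-name=relax_mpi
#SBATCH --ntasks=12
# With SLURM support built in, mpirun uses the allocated tasks automatically.
mpirun relax --multi='mpi4py'
</source>
Submit the script with <code>sbatch</code>.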
== Updates ==
=== Update 2013/09/11 ===
See [http://thread.gmane.org/gmane.science.nmr.relax.scm Commit]
The multi-processor code is now used both when '''no clustering''' is defined and for the Monte Carlo simulations in the error analysis.
=== Test of speed ===
==== Performed tests ====
===== A - Relax_disp systemtest =====
'''Relax_disp_systemtest'''
<source lang="bash">
</source>
===== B - Relax full analysis performed on dataset =====
====== First initialize data ======
<source lang="bash">
set CPU1=tomat ;
relax_single $TDATA/$CPU2/$MODE2/relax_1_ini.py ;
</source>
====== Relax_full_analysis_performed_on_dataset ======
<source lang="bash">
#!/bin/tcsh -e
cat $LOG ;
</source>
===== C - Relax full analysis performed on dataset with clustering =====
'''Relax_full_analysis_performed_on_dataset_cluster'''
==== Setup of test ====
===== List of computers - the 'lscpu' command =====
CPU 1
<source lang="text">
</source>
===== Execution scripts =====
'''relax_single'''
<source lang="bash">
</source>
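A hypothetical sketch of such a wrapper, running a single relax script and logging its output (this is not the original script; the relax call and log naming are assumptions):
<source lang="bash">
#!/bin/bash
# Hypothetical 'relax_single' wrapper: run one relax script in single-processor
# mode and save the output next to the script.
SCRIPT="$1"
LOG="${SCRIPT%.py}.log"
relax "$SCRIPT" | tee "$LOG"
</source>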
=== Results ===
{| class="wikitable sortable" border="1"
# MODEL_ALL = ['R2eff', 'No Rex', 'TSMFK01', 'LM63', 'LM63 3-site', 'CR72', 'CR72 full', 'IT99', 'NS CPMG 2-site 3D', 'NS CPMG 2-site expanded', 'NS CPMG 2-site star']
== See also ==
[[Category:Installation]]
[[Category:Development]]