Parallel Jobs
Revision as of 13:02, 20 October 2008
parallel queue
The parallel queue is for running two types of parallel job: OpenMP and OpenMPI.
OpenMP
Shared memory, single node. This is parallelism across the cores of a single node, and can often be achieved simply by compiling your code with OpenMP flags. However, better performance can be achieved if your programs are written with OpenMP in mind.
Compiling
Enabling OpenMP while compiling:
GNU: gfortran -fopenmp -o <exec> <src>
Intel: ifort -openmp -o <exec> <src>
Running OpenMP job
The environment variable OMP_NUM_THREADS must be set to define the number of threads the program should use. Use as many threads as you have processors, e.g. 8.
Example submission script
#!/bin/bash
#$ -cwd -V
#$ -pe smp 8
export OMP_NUM_THREADS=8
myOpenMPapp
OpenMPI
Distributed memory, multiple nodes. This method of parallelism requires you to write your programs to work with OpenMPI. Using OpenMPI, the nodes will communicate with each other via the InfiniBand network.
- SGE produces a list of hosts $PE_HOSTFILE
- SGE executes a "start" script for the PE
- SGE runs the user's job script
- On termination a "stop" script is executed
Scripts are in
/usr/local/sge6.0/streamline/mpi/
ompi_start.sh
script here
Compiling OpenMPI
Set up the environment, using modules to load the OpenMPI libraries you want to use. This is likely to be a choice between GNU and Intel.
Compile via the wrapper scripts for the compiler:
mpif77, mpif90, mpicc, mpic++
Run using the batch scheduler:
ompisub N <app> (where N is the number of cores)
Submitting OpenMPI
ompisub 8 <prog> <args>
ompisub 4x3 <prog> <args>