Parallel Jobs

Parallel queue

The parallel queue is for running two types of parallel job: OpenMP and OpenMPI.

OpenMP

Shared memory, single node. This parallelises the work across the cores of a single node and can often be achieved just by compiling your code with OpenMP flags. However, better performance can be achieved if your programs are written with OpenMP in mind.
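
For illustration, a minimal OpenMP program in C might look like the sketch below (the file name omp_example.c and the loop are purely illustrative, not part of the cluster documentation). It can be built with the same OpenMP flag shown in the Compiling section, e.g. gcc -fopenmp for the GNU C compiler.

/* omp_example.c - illustrative only: parallel sum with OpenMP */
#include <stdio.h>
#include <omp.h>

int main(void) {
    long sum = 0;
    long i;

    /* Loop iterations are divided between OMP_NUM_THREADS threads;
       the reduction clause combines the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < 1000000; i++) {
        sum += i;
    }

    printf("sum = %ld using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}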

Compiling

Enabling OpenMP while compiling:

GNU:   gfortran -fopenmp -o <exec> <src>
Intel: ifort -openmp -o <exec> <src>

Running OpenMP job

The environment variable OMP_NUM_THREADS must be set to define the number of threads the program should use. Use as many threads as the node has cores, e.g. 8.

Example submission script

#!/bin/bash
# Run in the current working directory and export the environment
#$ -cwd -V
# Request 8 slots in the smp parallel environment
#$ -pe smp 8
# Use one thread per slot
export OMP_NUM_THREADS=8
myOpenMPapp

OpenMPI

Distributed memory, multiple nodes. This method of parallelism requires you to write your programs to use MPI (here the OpenMPI implementation). Using OpenMPI, the nodes communicate with each other over the InfiniBand network.
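
For illustration, a minimal MPI program in C might look like the sketch below (the file name mpi_example.c is purely illustrative). It can be built with the mpicc wrapper listed under Compiling OpenMPI; one copy runs per core requested, and the copies exchange data through MPI calls.

/* mpi_example.c - illustrative only: each MPI process reports in */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(name, &len);     /* which node we are running on */

    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}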

Process that a parallel job goes through

  • SGE produces a list of hosts in $PE_HOSTFILE
  • SGE executes a "start" script for the PE
  • SGE runs the user's job script
  • On termination, a "stop" script is executed

Scripts that are automatically used are in

/usr/local/sge6.0/streamline/mpi/

The ompi_start.sh script

#!/bin/sh
# Local info
# This is executed on the front end server
# SERVER=`hostname -s`
# Count the cores on this host: take the highest processor index
# in /proc/cpuinfo and add one
function ncpus() {
 n=`cat /proc/cpuinfo | grep processor | tail -1 | cut -f 2- -d :`
 echo $((n+1))
 return
}
SMP=${SMP:-`ncpus`}
. /etc/profile
#
pe_hostfile=$1
echo $pe_hostfile
cat $pe_hostfile
job_id=$2
user=`basename ${HOME}`
# Prefer /users/<user> if it exists, otherwise fall back to $HOME
if [ -d /users/$user ]; then
  user_dir=/users/$user
else
  user_dir=${HOME}
fi
mpich_dir=${user_dir}/.mpich
mkdir -p $mpich_dir

# Write a sorted list of the hosts allocated to this job
cat $pe_hostfile | cut -f 1 -d " " | sort > $mpich_dir/mpich_hosts.$job_id

Compiling OpenMPI

Set up the environment, using modules to load the OpenMPI libraries you want to use. This is likely to be a choice between GNU and Intel.
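
For example (the module names on the system may differ; module avail lists what is installed):

module avail
module load <openmpi module>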

Compile via the wrapper scripts around the compiler:

mpif77, mpif90, mpicc, mpic++
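
For example, a C source file could be compiled with (assuming the OpenMPI module is loaded):

mpicc -o <exec> <src>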

Run using the batch scheduler:

ompisub N <app>    (N = number of cores)

Submitting OpenMPI

ompisub 8 <prog> <args>
ompisub 4x3 <prog> <args>