About the Gecko Cluster
Revision as of 19:48, 16 October 2008
A quick overview:

- 30 compute nodes, each with 2x 2.5GHz quad-core Intel Xeons and 8GB of memory (i.e. 8 jobs with 2GB of memory each per node)
- InfiniBand network for parallel computation
- Open MPI (Intel and GCC builds) for parallel computation across the InfiniBand network
- Sun Grid Engine 6 for job submission and queue management
- Intel C++/Fortran and GNU compilers; Java 1.4, 1.5, and 1.6; and R
- Stata 10 SE
- MPICH2 and SCORE are also available for parallel computation; however, these rely on Gigabit Ethernet, which is much slower than the Open MPI/InfiniBand option
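A typical workflow on a setup like this is to submit a parallel job to Sun Grid Engine, which then launches the program over the InfiniBand network with Open MPI. The script below is only a sketch: the parallel environment name (`orte`), the memory resource name (`h_vmem`), and the program name (`my_parallel_program`) are assumptions and will vary by site, so check `qconf -spl` and your local documentation for the real names.

```shell
#!/bin/bash
# Example SGE job script -- submit with: qsub mpi_job.sh
# NOTE: "orte", "h_vmem", and "my_parallel_program" are placeholders;
# the actual parallel environment and resource names are site-specific.

#$ -N my_mpi_job          # job name shown in qstat
#$ -cwd                   # run the job from the submission directory
#$ -pe orte 16            # request 16 slots across the cluster
#$ -l h_vmem=2G           # ~2GB per slot, matching the 8GB / 8-core nodes

# With SGE integration, Open MPI's mpirun reads the allocated host list
# from the environment, so only the slot count needs to be passed.
mpirun -np $NSLOTS ./my_parallel_program
```

Requesting 2GB per slot keeps each node fully usable: eight 2GB slots exactly fill a node's 8GB of memory, which is why the overview above describes the nodes as "8 jobs with 2GB of memory each".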