...
software | location of source code or executable |
---|---|
NAMD 2.7b2 | /share/apps/NAMD_2.7b2_Source/Linux-x86_64-OpenMPI |
Charm++ 6.2 | /share/apps/charm-6.2.0/mpi-linux-x86_64-mpicxx |
VMD 1.8.7 | /share/apps/vmd-1.8.7/LINUXAMD64 |
AmberTools 1.4 | /share/apps/amber11 |
CHARMM c35b2 | not yet installed, source in /share/apps/c35b2 |
GNUPLOT 4.0 | /usr/bin/gnuplot |
Grace 5.1 | /usr/bin/xmgrace |
Octave 3.0.5 | /usr/bin/octave |
GROMACS-MPI 4.0.7 | /usr/bin/g_mdrun_mpi |
The OpenMPI, Mvapich, and Mvapich2 libraries all use the InfiniBand interface.
libraries | location |
---|---|
OpenMPI | /usr/mpi/gcc/openmpi-1.4.1 |
Mvapich | /usr/mpi/gcc/mvapich-1.2.0 |
Mvapich2 | /usr/mpi/gcc/mvapich2-1.4.1 |
Platform MPI, aka HP-MPI | /opt/hpmpi |
Note that the mpicc, mpicxx, mpif77, mpif90, etc., wrapper commands all refer to the OpenMPI versions when the default path is used.
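To use one of the other MPI stacks instead of the OpenMPI default, a common approach is to put that stack's bin directory first on the PATH. A minimal sketch, assuming the Mvapich2 install location from the table above and the conventional bin/lib layout under it:

```shell
# Select the Mvapich2 stack by prepending its directories (hypothetical
# layout: bin/ and lib/ under the install prefix listed in the table).
MPI_HOME=/usr/mpi/gcc/mvapich2-1.4.1
export PATH="$MPI_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$MPI_HOME/lib:${LD_LIBRARY_PATH:-}"
# mpicc, mpicxx, etc. now resolve to the Mvapich2 wrappers rather than
# the OpenMPI defaults; confirm the first PATH entry:
echo "$PATH" | cut -d: -f1
```

Reverting to the defaults is just a matter of starting a fresh shell (or restoring the original PATH).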
Performance
Using the 92,000-atom [ApoA1 benchmark](http://www.ks.uiuc.edu/Research/namd/performance.html) with NAMD 2.7, performance appears to compare very favorably with previously benchmarked clusters: its (processor cores × time per step) values (corresponding to the ordinate of the [figure](http://www.ks.uiuc.edu/Research/namd/apoa1_bench.png) on the linked page) range from 1.8 s (8 CPUs) to 2.1 s (128 CPUs). ![apoa_bench_scan.png](apoa_bench_scan.png)
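For reference, the scaling metric quoted above is the product of core count and wall-clock time per step, so a flat curve indicates near-linear scaling. A quick check of what the two endpoints imply for the underlying time per step:

```shell
# (cores × seconds/step) values from the text: 1.8 s at 8 cores,
# 2.1 s at 128 cores. Dividing out the core count recovers s/step.
awk 'BEGIN { printf "%.4f %.4f\n", 1.8/8, 2.1/128 }'
```

That is, roughly 0.225 s/step on 8 cores versus about 0.016 s/step on 128 cores, a ~14× speedup for a 16× increase in cores.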
...