Darius is a 32-node cluster with InfiniBand QDR interconnections.  It runs [CentOS 5.4|http://wiki.centos.org/Manuals/ReleaseNotes/CentOS5.4] (Linux kernel version 2.6), which is closely related to Red Hat Enterprise Linux.

Its hostname is darius1.csbi.mit.edu.  The CSBi firewall is configured to accept connections only from MIT IP addresses, so off-campus users should first connect to MITnet through the [MIT VPN|http://ist.mit.edu/services/network/vpn].
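
For example, to log in from an MIT address (or through the VPN), where {{username}} is just a placeholder for your cluster account:
{code}
ssh username@darius1.csbi.mit.edu
{code}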

h4.  Installed software
||software|| location of source code or executable ||
|NAMD 2.7b2 | /share/apps/NAMD_2.7b2_Source/Linux-x86_64-OpenMPI |
|Charm++ 6.2 | /share/apps/charm-6.2.0/mpi-linux-x86_64-mpicxx |
|VMD 1.8.7 | /share/apps/vmd-1.8.7/LINUXAMD64 |
|AmberTools 1.4 | /share/apps/amber11 |
|CHARMM c35b2 | not yet installed, source in /share/apps/c35b2 |
|GNUPLOT 4.0 | /usr/bin/gnuplot |
|Grace 5.1 | /usr/bin/xmgrace |
|Octave 3.0.5 | /usr/bin/octave |
|GROMACS-MPI 4.0.7 | /usr/bin/g_mdrun_mpi |


The OpenMPI, Mvapich, and Mvapich2 libraries all use the InfiniBand interface.

||libraries|| location ||
| OpenMPI | /usr/mpi/gcc/openmpi-1.4.1 |
| Mvapich | /usr/mpi/gcc/mvapich-1.2.0 |
| Mvapich2 | /usr/mpi/gcc/mvapic2-1.4.1 |
| Platform MPI, aka HP-MPI | /opt/hpmpi |
Note that {{mpicc}}, {{mpicxx}}, {{mpif77}}, {{mpif90}}, etc., all refer to the OpenMPI versions when using the default path.
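
To see which wrappers are active, and as a sketch of how one could switch to a different MPI by prepending its directory to the path (this assumes the usual {{bin/}} layout under each location listed above):
{code}
which mpicc    # expected to resolve under /usr/mpi/gcc/openmpi-1.4.1 by default
# Illustrative only: build against Mvapich2 instead by putting it first in PATH
export PATH=/usr/mpi/gcc/mvapic2-1.4.1/bin:$PATH
which mpicc
{code}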

h4.  Performance
Using the 92K-atom [ApoA1 benchmark|http://www.ks.uiuc.edu/Research/namd/performance.html] with NAMD 2.7, performance appears to compare very favorably with previously benchmarked clusters: its \[processor cores × time per step\] values (corresponding to the ordinate on the [figure|http://www.ks.uiuc.edu/Research/namd/apoa1_bench.png] on the linked page) vary from 1.8 sec (8 CPUs) to 2.1 sec (128 CPUs), i.e. roughly 0.22 s per step on 8 cores down to about 0.016 s per step on 128 cores.
!apoa_bench_scan.png|thumbnail!

h4.  Using the queue system
All programs run on Darius must use the queue system.  Users can check what jobs are running using {{qstat}}; XWindows users can try {{xpbs}}, although load times are long.  Users can submit their jobs using the {{qsub}} command:
{code}
qsub jobscript.sh
{code}
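
To check on a submitted job, the standard Torque/PBS forms of {{qstat}} can be used, for example:
{code}
qstat           # all queued and running jobs
qstat -u $USER  # only your own jobs
{code}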

Here is an example of a simple PBS script, which is like an ordinary bash script but with the addition of special \#PBS directives:

{code}
#!/bin/bash
# -k o keeps the job's stdout file; -N names the job; -l requests 1 node
# with 4 processors and a 12-hour walltime; -j oe merges stderr into stdout.
#PBS -k o
#PBS -N apoa
#PBS -l nodes=1:ppn=4,walltime=12:00:00
#PBS -j oe
cd /home/musolino/namd/apoa1
/usr/mpi/gcc/openmpi-1.4.1/bin/mpirun /share/apps/NAMD_2.7b2_Source/Linux-x86_64-OpenMPI/namd2 apoa1.namd
{code}
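
For jobs spanning more than one node, here is a minimal sketch of the same run; the explicit {{-np}} and {{-machinefile}} arguments are an assumption, since it has not been verified whether this OpenMPI build picks up the Torque node list automatically:
{code}
#!/bin/bash
#PBS -N apoa-2nodes
#PBS -l nodes=2:ppn=4,walltime=12:00:00
#PBS -j oe
cd $PBS_O_WORKDIR                      # directory the job was submitted from
NPROCS=$(wc -l < $PBS_NODEFILE)        # total processors assigned by Torque
/usr/mpi/gcc/openmpi-1.4.1/bin/mpirun -np $NPROCS -machinefile $PBS_NODEFILE \
    /share/apps/NAMD_2.7b2_Source/Linux-x86_64-OpenMPI/namd2 apoa1.namd
{code}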

h4.  Notes on compiling NAMD 2.6

# Run {{./config Linux-amd64-MPI.<postfix>}} to create an installation directory; {{<postfix>}} is any label you wish to append to the NAMD build directory name.
# The new Charm++ and NAMD 2.7 use different platform names than they did in the past (e.g. {{Linux-x86_64-OpenMPI}} for NAMD 2.7 and Charm++ 6.2, but {{Linux-amd64-MPI}} for NAMD 2.6).  This causes problems when NAMD looks for the Charm++ installation, so edit {{Make.charm}} in the NAMD directory:
{code}
CHARMBASE = /share/apps/charm-6.2.0/mpi-linux-x86_64-mpicxx
{code}
# Then edit {{Makearch}} in the {{Linux-amd64-MPI.<postfix>}} directory, changing the {{CHARM=...}} line so that the file reads as follows:
{code}
include .rootdir/Make.charm
include .rootdir/arch/Linux-amd64-MPI.arch
CHARM = $(CHARMBASE)
NAMD_PLATFORM = $(NAMD_ARCH)-MPI
include .rootdir/arch/$(NAMD_ARCH)$(NAMD_SUBARCH).base
{code}
# Try {{make}} in the installation directory (a summary sketch of the whole sequence follows below).
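
Putting the steps together, a minimal sketch of the sequence (the {{.openmpi}} postfix and the source directory are illustrative, not the actual paths on Darius):
{code}
cd ~/NAMD_2.6_Source                  # wherever the NAMD 2.6 source tree was unpacked
./config Linux-amd64-MPI.openmpi      # step 1: create the build directory
# steps 2-3: edit Make.charm and Linux-amd64-MPI.openmpi/Makearch as described above
cd Linux-amd64-MPI.openmpi
make
{code}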

h4.  Other notes

Using the Mac OS 10.6 X11 client, XWindows is working, with small utilities like xcalc and xpbs, and with VMD 1.8.7.  VMD performance seems sluggish, however.  _Note:_ when connecting with "ssh -X", starting VMD consistently leads to a crash of the X11 client.  Connecting with "ssh -Y" (which disables X11 security protocols, for trusted connections) works instead.  This problem has occurred with other computers in the past, and it's not clear whether this is a bug in VMD, the X11 installation on the cluster, or the Mac OS 10.6 X11 client.