
Computer Clusters

Our three clusters are Darius, Cyrus1, and Quantum2. See below for additional information about each cluster, and for information about using Gaussian at MIT.

Policies

To make efficient use of our computational resources, we ask all group members to follow the guidelines below when planning and running simulations:

  • Please run jobs on the computational nodes ("slave nodes"), rather than the head node, of each cluster. In the past, head node crashes – a great inconvenience to everyone – have occurred when all eight of its processors were engaged in computationally intensive work.
  • Please do not run jobs "on top of" those of other users. If a node is fully occupied and you need to run something, please contact the other user, rather than simply starting your job there.
  • For the fastest disk read/write speeds, write to the local /scratch/username/ directory on each node, rather than to your home directory. Home directories, which are accessible from every node, are physically located on the head node, so reading and writing to them may be limited by network transmission rates. (See the example after this list.)
  • The new cluster, with its fast interconnection hardware, is very well suited to large-scale simulations that benefit from the use of many processors. A queuing system will be used to manage jobs on this cluster, and no jobs should run outside this queue system (see the sample job script below).
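
For example, the scratch-directory policy above might be followed like this (a minimal sketch; my_code and my_job.inp are placeholders for your own executable and input file):

Code Block
    # on a compute node: create your personal scratch directory and work there
    mkdir -p /scratch/$USER
    cd /scratch/$USER
    # copy input over from your home directory and run locally
    cp ~/my_job.inp .
    ./my_code my_job.inp >& my_job.out
    # copy results back to your home directory when the run completes
    cp my_job.out ~/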

Please note that we attempted to implement the OpenPBS queue system on Cyrus1 and Quantum2 in December 2009; these systems appeared to be working in testing, but did not perform as desired when multiple jobs were submitted. The use of these queuing systems on those clusters has been suspended until further notice.
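
On the new cluster, jobs would be submitted through the queue rather than started by hand. The sketch below shows a PBS-style submission script, since OpenPBS is the queue software mentioned above; the job name, resource request, and executable are placeholders, as the actual queue configuration is not documented here:

Code Block
    #!/bin/bash
    #PBS -N my_job              # job name (placeholder)
    #PBS -l nodes=2:ppn=8       # request 2 nodes with 8 processors each (adjust as needed)
    #PBS -j oe                  # merge stdout and stderr into a single output file
    cd $PBS_O_WORKDIR           # run from the directory the job was submitted from
    mpirun -n 16 ./my_code my_job.inp >& my_job.out

Such a script would be submitted with qsub and monitored with qstat.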

Darius

Hostname: darius1.csbi.mit.edu
Darius is our newest cluster, a 35-node machine installed in April 2010, with 8 virtual processors per node. Information about Darius can be found on its dedicated page.

Cyrus1

...

Code Block
    Usage:  /home/gpw501/software/gamess/rungms JOB VERNO NCPUS >& JOB.log &
            JOB is the name of the JOB.inp file to be executed
            VERNO is the current version of GAMESS (01 at the time of writing)
            NCPUS is the number of CPUs
      You must make a scratch directory that matches your user name on the
      node you are running from, i.e. /scratch/$USER
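
For example, to run a hypothetical input file water.inp on 8 CPUs (water is a placeholder job name; 01 is the version number noted above):

Code Block
    # create your personal scratch directory on this node first (see note above)
    mkdir -p /scratch/$USER
    # run the water job on 8 CPUs, logging to water.log in the background
    /home/gpw501/software/gamess/rungms water 01 8 >& water.log &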

...

Code Block
    Usage:  mpirun -n NCPUS cpmd.x JOB PATH-TO-PPs >& JOB.out &
            mpirun should be set to /opt/openmpi/tcp-gnu/bin/mpirun in your .bashrc
            NCPUS is the number of CPUs
            JOB is the name of the job file to be executed
            PATH-TO-PPs is the path to a pseudopotential library (see e.g. /home/gpw501/software/cpmd/pseudos/)
            JOB.out is the name of the output file
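
For example, to run a hypothetical job file si8.inp on 8 CPUs with the pseudopotential library mentioned above:

Code Block
    # assumes mpirun resolves to /opt/openmpi/tcp-gnu/bin/mpirun, per the note above
    mpirun -n 8 cpmd.x si8.inp /home/gpw501/software/cpmd/pseudos/ >& si8.out &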

...