
...

The apoa.sh model script is a simple 8-processor NAMD job. The multiple_images.sh script runs 8 single-processor NAMD jobs simultaneously, while the sequential-jobs-separately-submitted.sh script submits a series of jobs that run one after another, as would be done in aimless shooting, for example.

When files must be copied to scratch directories, the copy must be done within the PBS script, since which compute node will be used is not known a priori. The example script movetoscratch_and_run.sh does this, albeit with some additional complexity: it starts a new NAMD simulation from an older one.
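A minimal sketch of the copy-to-scratch pattern (the scratch path, input name, and the stand-in command are placeholder assumptions, not the actual contents of movetoscratch_and_run.sh):

```shell
#!/bin/bash
#PBS -q short
#PBS -l nodes=1:ppn=1
# Sketch only: scratch location, input name, and the stand-in command
# below are placeholders, not the real movetoscratch_and_run.sh.
: "${PBS_O_WORKDIR:=$PWD}"               # set by PBS on the cluster
SCRATCH="/tmp/${USER:-demo}_scratch.$$"  # node-local scratch on the cluster
mkdir -p "$SCRATCH"
echo "demo input" > "$PBS_O_WORKDIR/input.dat"   # demo input for this sketch
cp "$PBS_O_WORKDIR"/input.dat "$SCRATCH"/
cd "$SCRATCH"
# The real script would run NAMD here, e.g.: namd2 run.conf > run.log
tr a-z A-Z < input.dat > output.log              # stand-in computation
cp output.log "$PBS_O_WORKDIR"/                  # copy results back
cd "$PBS_O_WORKDIR"
rm -rf "$SCRATCH"                                # clean up node-local space
```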

Small analysis jobs (with VMD, CHARMM, custom C code) should not be run outside the queue system. To run many small jobs, you can write a script that generates bash scripts, each of which can then be qsub'ed. This approach is used in generate_many_child_scripts.sh, and it could be applied quite generally.
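A minimal sketch of that generate-and-submit pattern (the task count, queue name, and myprogram are placeholder assumptions, not the actual contents of generate_many_child_scripts.sh):

```shell
#!/bin/bash
# Sketch: write one small PBS script per task, then submit each with qsub.
# Queue name, task count, and myprogram are placeholder assumptions.
mkdir -p child_scripts
for i in 1 2 3; do
    cat > "child_scripts/job$i.sh" <<EOF
#!/bin/bash
#PBS -q short
#PBS -l nodes=1:ppn=1
cd \$PBS_O_WORKDIR
./myprogram $i > out$i
EOF
done
# On the cluster, submit them all (left commented here):
# for f in child_scripts/job*.sh; do qsub "$f"; done
```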

grid_and_dock.sh is another script illustrating that successive program executions can be combined in one simple shell script; it runs AutoGrid and AutoDock.

The qnodes_status script gives a summary of the state of all nodes in group0, based on the output of the qnodes command (as written, it considers only nodes with the property "group0"). Remove the ":group" argument to qnodes to use this on Quantum2.


Submitting many 1 cpu jobs on clusters using parallel perl scripts

We have found that submitting many 1-cpu jobs to the clusters can create efficiency problems when these jobs compete for resources with jobs requesting whole nodes or multiples of nodes. That is, the 1-cpu jobs dominate the resources, making jobs requesting multiples of nodes wait longer than they should under the fair-share policy. To overcome this difficulty, we suggest submitting your 1-cpu jobs in batches sized to match the node. To support this we have installed a parallel job manager using the perl programming language; to use it, you write a perl script that manages your 1-cpu jobs. Below we give examples of what your scripts might look like. Example 1 is a normal 1-cpu PBS script executing your program without any perl parallelization. Examples 2a and 2b show the PBS script and perl script you would need to write in order to parallelize Example 1.

Example 1: A pbs script running a 1 cpu job executing a program called myprogram

#!/bin/bash

###### Queue name

#PBS -q short

###### Resources request

#PBS -l  nodes=1:ppn=1

cd $PBS_O_WORKDIR

i=1

./myprogram $i > out$i

Example 2a: A pbs script running eight instances of a 1-cpu job executing a program called myprogram, using a perl script called parallel.perl, all on one node:

#!/bin/bash

###### Queue name

#PBS -q short

###### Resources request

#PBS -l  nodes=1:ppn=8

cd $PBS_O_WORKDIR

start=0

ncpus=8

./parallel.perl $start $ncpus

 

Example 2b: The contents of the perl script will look something like this:

#!/usr/bin/perl

use strict;
use warnings;

use Parallel::ForkManager;  ### this is the parallel perl package

my $start = $ARGV[0];  #### the loop starting point

my $ncpus = $ARGV[1];  #### the number of cpus

my $pm = Parallel::ForkManager->new($ncpus);  ### start a parallel session with ncpus

     ### a for loop that runs ncpus copies of your program in parallel:

     my $end = $start + $ncpus;

     for (my $j = $start; $j < $end; $j++) {

       $pm->start and next;               ### fork a child; the parent moves on to the next $j

       system("./myprogram $j > out$j");  ### the same program call as before

       $pm->finish;                       ### the child exits here

     }

     ### wait until all jobs running in the background have stopped

     $pm->wait_all_children;
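If Parallel::ForkManager is not available, the same batching idea can be sketched in plain bash using background jobs and wait (a sketch for comparison only; the echo line stands in for ./myprogram):

```shell
#!/bin/bash
# Plain-bash analogue of Example 2b: launch ncpus tasks in the background,
# then block until all of them finish. The echo stands in for ./myprogram.
start=0
ncpus=8
end=$((start + ncpus))
for ((j = start; j < end; j++)); do
    echo "task $j" > "out$j" &   # stand-in for: ./myprogram $j > out$j &
done
wait                             # do not exit until every background job is done
```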

   

MD Analysis scripts

Basic analysis for NAMD trajectories

...

Note that this requires runavg.awk (assumed to be in /home/$(whoami)/tools), and note that the formula for calculating density uses only the simulation cell lengths: volume = a*b*c. Run with the "--usage" or "--help" option for more details:

Code Block
Usage: basic_analysis.sh --name SIMNAME [ --logfile FILE ] [ --psf PSFFILE ] [ --outputdir ODR ] [ --dcdfreq N ] [ --rmsdsel SELTEXT ]

Performs simple analysis tasks for a simulation with output SIMNAME.
OPTIONS:
	--logfile FILE : Use FILE as NAMD log file.  Default value is SIMNAME.log
 	--psf PSFFILE  : Use PSFFILE (either CHARMM or AMBER format) as system topology file.
			 Should have .psf or .parm7 extension for filetype recognition by VMD.
			 Default value: SIMNAME.psf
	--outputdir ODR: Write analysis output to directory ODR. Figures will go to ODR/figures/.
			 Default value: $(dirname SIMNAME)/analysis_output/
	--dcdfreq N    : Frequency at which frames were written to DCD file SIMNAME.dcd. Default 100.
	--rmsdsel SELTEXT: VMD selection text for RMSD calculation. Must be enclosed in quotes.
			 Default value: "all".
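The runavg.awk helper referenced above is not shown here; a minimal sketch of what such a running-average filter might do (the window size, two-column layout, and file names are assumptions):

```shell
# Sketch of a runavg.awk-style running average (assumptions: two-column
# input, odd window of W points, result written to runavg.out).
printf '1 2.0\n2 4.0\n3 6.0\n4 8.0\n5 10.0\n' > series.dat   # demo data
awk -v W=3 '
    { x[NR] = $1; y[NR] = $2 }                # read the whole series
    END {
        h = int(W / 2)
        for (i = 1 + h; i <= NR - h; i++) {   # slide the window
            s = 0
            for (j = i - h; j <= i + h; j++) s += y[j]
            printf "%s %.4f\n", x[i], s / W   # center point, window mean
        }
    }
' series.dat > runavg.out
```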

...

Code Block
vmd -dispdev text -psf system.psf -dcd sample_run.dcd -args seltext "not water" outfile zinfo_timeseries.txt histfile denprofile.txt first 100000 last 200000 freq 100 

Note that the -psf PSFfile topology option could be replaced with other filetype/filename options, e.g. -parm AMBERparmfile. Bin width and number are specified in the script, although the script could be modified to accept them on the command line.

...

When uploading scripts, please remember that comments are essential to helping others understand your work. All uploaded files should include author information, a date or version number, a description of the file's purpose and methodology, a description of the expected input (if applicable), and a description of the output format.

To upload: Choose "Attachment" under the "Add" menu at right. Then, link to the attached file in this page using the wiki syntax [^filename.sh], followed by a description of the file.

To use: download the file. Note that some browsers will append unwanted extensions to downloaded files. Remember that every user is responsible for understanding, at a detailed level, all the tools that they use!