
ssh pdsf.nersc.gov

STAR disks:

0.5 TB pdsf.nersc.gov:/eliza17/star/pwg/starspin/balewski/

1.5 TB /eliza14/star/pwgscr
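
A quick way to confirm these areas are mounted and see how full they are (standard df usage; the paths are the ones listed above):

Code Block

ssh pdsf.nersc.gov
df -h /eliza17/star/pwg/starspin/balewski /eliza14/star/pwgscr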

STAR software-specific instructions for Eucalyptus

From Doug Olson

Matt started a problem-tracking page:

https://docs.google.com/document/d/180dwYTO3iBB42DbmD7zcTKUfU_CZpY903TfJM7udTIc/edit?hl=en&authkey=CMvVt8AK&pli=1#

e-mail to NERSC: Consult <consult@nersc.gov>

Head node disks

Code Block

the headnode is running at 128.55.56.51.
It has a copy of the /common and /home fs from the other image
taken last night.
# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vda1             10321208   4363536   5433384  45% /
/dev/vda2              9322300        52   8848700   1% /mnt
/dev/vdb              51606140  10893984  38090716  23% /common
/dev/vdc              30963708    189028  29201816   1% /home
/dev/vdd             103212320    192252  97777188   1% /data

The idea is to share /common, /home and /data across the
worker nodes. So if you want to set up some things for the
cluster this will be the one to use.
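
A minimal sketch of how such sharing is typically wired up with NFS; the export options, the worker subnet, and the exportfs step are assumptions, not taken from the actual head node configuration:

Code Block

# /etc/exports on the head node (hypothetical subnet and options)
/common  10.0.0.0/24(rw,sync,no_root_squash)
/home    10.0.0.0/24(rw,sync,no_root_squash)
/data    10.0.0.0/24(rw,sync,no_root_squash)
# after editing, reload the export table:  exportfs -ra

# on each worker node, mount from the head node (128.55.56.51 above)
mount -t nfs 128.55.56.51:/common /common
mount -t nfs 128.55.56.51:/home   /home
mount -t nfs 128.55.56.51:/data   /data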

Quota on /project disk, from Eric

Code Block

You can use prjquota to check the quotas on NGF (/project) like this:
pdsf4 88% prjquota star
           ------ Space (GB) -------     ----------- Inode -----------
 Project    Usage    Quota   InDoubt      Usage      Quota     InDoubt
--------   -------  -------  -------     -------    -------    -------
    star      1011     1536        1      842365    1000000       1075
pdsf4 89%

So STAR has a quota of 1.5TB now.

...


Alternatives to scp

Code Block

STAR users have access to  /project/projectdirs/star and this area
is visible from all NERSC systems (both carver and PDSF and the data
transfer nodes). Best way to transfer data from BNL to the project area
would be  through the data transfer nodes:
http://www.nersc.gov/nusers/systems/datatran/
but there are lots of options.
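
For example, a transfer from BNL into the project area through a data transfer node might look like the sketch below; the host name dtn01.nersc.gov and the destination path are assumptions, so check the datatran page above for the current node names:

Code Block

# run from an RCF/BNL machine; dtn01.nersc.gov is a guessed node name
scp /star/data/myfile.daq \
    dtn01.nersc.gov:/project/projectdirs/star/myarea/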

From Matt - how to add cron tab job

Code Block

Here are some commands for inserting things into cron.
There is a directory called /etc/cron.daily where you can put scripts that will run once per day.
 For instance, you can put a controller script doResetDB.sh in that directory that runs resetDB.sh
with the right arguments. Make sure to make it executable.

You can control when the script will run by editing the line in /etc/crontab that 
says "run-parts /etc/cron.daily". I didn't look in any of your machines, but the ones I have are 
set up to run at 4:02 every morning.

Controlling this behavior is as easy as moving the script in and out of the /etc/cron.daily directory.
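
A minimal sketch of such a controller script, using the doResetDB.sh/resetDB.sh names from the note above; the resetDB.sh path, its arguments, and the log file are hypothetical:

Code Block

#!/bin/sh
# /etc/cron.daily/doResetDB.sh - wrapper run once a day by run-parts
# (hypothetical path and arguments for resetDB.sh)
/root/resetDB.sh --force >> /var/log/resetDB.log 2>&1

Remember to make the wrapper executable (chmod +x /etc/cron.daily/doResetDB.sh); the 4:02 run time comes from the run-parts line in /etc/crontab, not from the script itself.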

...

Transport of DB-snapshot file from RCF to carver

Code Block

Hi Jan,
In my home directory on carver, there is a script called doDownloads.sh. This script is what needs to 
be run. Right now it runs on cvrsvc03. It sleeps for 24 hours, then runs a script called 
downloadSnapshot.sh in the same location.

Best,
Matt
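
The script itself was not quoted, so the following is only a hedged reconstruction of what a doDownloads.sh-style loop might look like, based on the description above:

Code Block

#!/bin/sh
# doDownloads.sh (hypothetical): sleep 24 hours, then fetch the next
# DB snapshot with downloadSnapshot.sh from the same directory, forever
while true; do
    sleep 86400
    "$(dirname "$0")"/downloadSnapshot.sh
done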

...

Direct upload of DB snapshot from RCF

Code Block

assuming you have curl installed, here is a safe one-liner to
download snapshots :

curl -s --retry 60 --retry-delay 60 http://www.star.bnl.gov/dsfactory/ \
     --output snapshot.tgz

It will automatically retry 503 errors 60 times, waiting 60 seconds
between attempts. Should be safe enough to add it to VMs directly..

"-s" means silent => no output at all. You may want to remove this
switch while you test..
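
A slightly more defensive variant, wrapping the same one-liner in a script that checks the curl exit code and that the file really is a readable tarball; the checks are additions, not part of the original recipe:

Code Block

#!/bin/sh
# fetch the snapshot, retrying 503s as above
curl -s --retry 60 --retry-delay 60 \
     http://www.star.bnl.gov/dsfactory/ --output snapshot.tgz \
  || { echo "download failed" >&2; exit 1; }

# verify the archive before anything consumes it
tar -tzf snapshot.tgz > /dev/null \
  || { echo "snapshot.tgz is corrupt" >&2; exit 1; }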

...

Share data with STAR members

Code Block

setfacl -m g:rhstar:x  /global/scratch/sd/balewski    # traversal of the top directory only - this is dangerous
setfacl -R -m g:rhstar:rX  /global/scratch/sd/balewski/2011w    # recursive read; X adds execute on directories only
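
For completeness, standard getfacl/setfacl usage to inspect and later revoke these grants (not part of the original note):

Code Block

# inspect what was granted
getfacl /global/scratch/sd/balewski
getfacl /global/scratch/sd/balewski/2011w

# revoke the group entries if needed
setfacl -x g:rhstar /global/scratch/sd/balewski
setfacl -R -x g:rhstar /global/scratch/sd/balewski/2011w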

...

How to start VM with STAR environment

Code Block

1) copy my setup code from carver.

...

  • SVN @ NERSC

http://www.nersc.gov/nusers/systems/servers/cvs.php

...

  • Running Interactive Jobs on Carver
    You can run an xterm in batch by doing a "qsub -I -q regular ...."

http://www.nersc.gov/nusers/systems/carver/running_jobs/interactive.php
Note that interactive jobs do not have to go to the interactive queue.

E.g. inside screen, run this command (1 node, 1 core):

qsub -I -V -q interactive -l nodes=1:ppn=1 

This brings you into another shell that is treated as a batch job but is interactive.
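
A typical session combining the two, assuming the usual screen workflow (the session name is arbitrary):

Code Block

screen -S carver-job                           # named screen session
qsub -I -V -q interactive -l nodes=1:ppn=1     # interactive batch shell
# ... work; detach with Ctrl-a d ...
screen -r carver-job                           # reattach later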

Misc info about resources at NERSC/PDSF
https://newweb.nersc.gov/users/computational-systems/pdsf/using-the-sge-batch-system/i-o-resources/
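
That page describes requesting disk-vault I/O as a consumable SGE resource at submit time; a hedged example follows, where the exact resource name is an assumption, so check the page for the current list:

Code Block

# request I/O bandwidth to an eliza vault (resource name is a guess)
qsub -l eliza14io=1 myjob.csh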

...

  • Reboot VM
    From inside: shutdown -r now
    From outside: euca-reboot-instances i-446F07EE
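
To find the instance id for euca-reboot-instances, the standard euca2ools listing command can be run first (the id below is the example one from above):

Code Block

euca-describe-instances            # lists instance ids and states
euca-reboot-instances i-446F07EE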