
STAR software specific instructions for Eucalyptus

Matt started a problem-tracking page:

https://docs.google.com/document/d/180dwYTO3iBB42DbmD7zcTKUfU_CZpY903TfJM7udTIc/edit?hl=en&authkey=CMvVt8AK&pli=1#

E-mail for NERSC support: Consult <consult@nersc.gov>

From Doug Olson

The head node is running at 128.55.56.51.
It has a copy of the /common and /home filesystems from the other
image, taken last night.
# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vda1             10321208   4363536   5433384  45% /
/dev/vda2              9322300        52   8848700   1% /mnt
/dev/vdb              51606140  10893984  38090716  23% /common
/dev/vdc              30963708    189028  29201816   1% /home
/dev/vdd             103212320    192252  97777188   1% /data

The idea is to share /common, /home, and /data across the
worker nodes, so if you want to set something up for the whole
cluster, this is the machine to use.
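One way to share those filesystems with the worker nodes is plain NFS. A minimal sketch of the head node's /etc/exports follows; the 10.0.0.0/24 worker-node subnet and the export options are assumptions, not taken from the actual setup:

```
# /etc/exports on the head node -- subnet and options are assumptions
/common  10.0.0.0/24(rw,sync,no_root_squash)
/home    10.0.0.0/24(rw,sync,no_root_squash)
/data    10.0.0.0/24(rw,sync,no_root_squash)
```

After editing the file, `exportfs -ra` makes the NFS server re-read it.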

Quota on /project disk, from Eric

You can use prjquota to check the quotas on NGF (/project) like this:
pdsf4 88% prjquota star
           ------ Space (GB) -------     ----------- Inode -----------
 Project    Usage    Quota   InDoubt      Usage      Quota     InDoubt
--------   -------  -------  -------     -------    -------    -------
    star      1011     1536        1      842365    1000000       1075
pdsf4 89%

So STAR has a quota of 1.5 TB (1536 GB) now.
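If you want to watch the quota from a script, a small helper can pull the numbers out of prjquota's output. The column positions are inferred from the sample above, so this is a sketch, not a supported interface:

```shell
# Print "used/quota GB" for one project, reading prjquota-style
# output on stdin.  Column layout is assumed from the sample above.
prjquota_space() {
    awk -v proj="$1" '$1 == proj { print $2 "/" $3 " GB" }'
}

# Example, fed with the sample line from above:
prjquota_space star <<'EOF'
    star      1011     1536        1      842365    1000000       1075
EOF
# prints: 1011/1536 GB
```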

Alternatives to scp

STAR users have access to /project/projectdirs/star, and this area
is visible from all NERSC systems (Carver, PDSF, and the data
transfer nodes). The best way to transfer data from BNL to the project
area is through the data transfer nodes:
http://www.nersc.gov/nusers/systems/datatran/
but there are many options.
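For example, a push from BNL through a data transfer node might look like the line below. The host name dtn01.nersc.gov, the user name, and the source path are all assumptions for illustration; check the page above for the current host names.

```shell
# Illustrative only: host name, user, and paths are assumptions.
scp /star/data/db-snapshot.tgz \
    username@dtn01.nersc.gov:/project/projectdirs/star/
```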

From Matt - how to add cron tab job

Here are some commands for inserting things into cron.
There is a directory called /etc/cron.daily where you can put scripts that will run once per day.
For instance, you can put a controller script doResetDB.sh in that directory that runs resetDB.sh
with the right arguments. Make sure to make it executable.

You can control when the scripts run by editing the line in /etc/crontab that
says "run-parts /etc/cron.daily". I didn't look at any of your machines, but the ones I have are
set up to run at 4:02 every morning.

Enabling or disabling the job is as easy as moving the script into or out of the /etc/cron.daily directory.
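Concretely, the wrapper can be as thin as the sketch below; the resetDB.sh path and its argument are assumptions, so adjust them for your installation:

```shell
#!/bin/sh
# /etc/cron.daily/doResetDB.sh -- a wrapper that run-parts executes
# once per day; remember to chmod +x it.  The resetDB.sh path and
# argument below are assumptions.
/usr/local/bin/resetDB.sh --daily >> /var/log/resetDB.log 2>&1
```

On a stock system the matching /etc/crontab entry looks like `02 4 * * * root run-parts /etc/cron.daily`, which is what fires at 04:02 every morning.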

Transport of DB-snapshot file from RCF to carver

Hi Jan,
In my home directory on carver, there is a script called doDownloads.sh. This script is what needs to 
be run. Right now it runs on cvrsvc03. It sleeps for 24 hours, then runs a script called 
downloadSnapshot.sh in the same location.

Best,
Matt
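The email does not show the script itself, but from the description a minimal doDownloads.sh-style loop might look like this; the infinite loop and the assumption that downloadSnapshot.sh sits alongside the script are both guesses:

```shell
#!/bin/sh
# Sleep 24 hours, then fetch the latest snapshot -- repeated forever.
# downloadSnapshot.sh is assumed to live in the same directory.
cd "$(dirname "$0")" || exit 1
while true; do
    sleep 86400        # 24 hours
    ./downloadSnapshot.sh
done
```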


Direct upload of DB snapshot from RCF


Assuming you have curl installed, here is a safe one-liner to
download snapshots:

curl -s --retry 60 --retry-delay 60 http://www.star.bnl.gov/dsfactory/ \
     --output snapshot.tgz

It will automatically retry 60 times on transient errors (including
503 responses), waiting 60 seconds between attempts. It should be safe
enough to add to the VMs directly.

"-s" means silent => no output at all. You may want to remove this
switch while you test.
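A slightly more defensive variant is to download to a temporary name and let tar sanity-check the archive before swapping it in, so a truncated download never clobbers a good snapshot. The .part naming is just a convention, and -f (fail on HTTP errors) is an addition to the one-liner above:

```shell
# Fetch to a temporary name; -f makes curl fail on HTTP errors.
curl -sf --retry 60 --retry-delay 60 \
     http://www.star.bnl.gov/dsfactory/ --output snapshot.tgz.part \
  && tar tzf snapshot.tgz.part > /dev/null \
  && mv snapshot.tgz.part snapshot.tgz \
  || echo "snapshot download failed; keeping previous copy" >&2
```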