ssh pdsf.nersc.gov
STAR disks:
- 0.5 TB pdsf.nersc.gov:/eliza17/star/pwg/starspin/balewski/
- 1.5 TB /eliza14/star/pwgscr
STAR software-specific instructions for Eucalyptus
...
E-mail to NERSC: Consult <consult@nersc.gov>

From Doug Olson:
Code Block |
---|
The head node is running at 128.55.56.51.
It has a copy of the /common and /home filesystems from the
other image, taken last night.
# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 10321208 4363536 5433384 45% /
/dev/vda2 9322300 52 8848700 1% /mnt
/dev/vdb 51606140 10893984 38090716 23% /common
/dev/vdc 30963708 189028 29201816 1% /home
/dev/vdd 103212320 192252 97777188 1% /data
The idea is to share /common, /home, and /data across the
worker nodes, so if you want to set up some things for the
cluster, this is the machine to use.
|
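Sharing /common, /home, and /data from the head node could be done with NFS; a minimal /etc/exports sketch (the worker subnet and options below are assumptions, not taken from the notes):

```
# /etc/exports on the head node -- worker subnet is a placeholder
/common  10.0.0.0/24(rw,sync,no_subtree_check)
/home    10.0.0.0/24(rw,sync,no_subtree_check)
/data    10.0.0.0/24(rw,sync,no_subtree_check)
```

After editing, `exportfs -ra` on the head node re-reads the table, and the workers mount the three paths.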
Quota on /project disk, from Eric
...
Code Block |
---|
setfacl -m g:rhstar:x /global/scratch/sd/balewski     # this is dangerous
setfacl -R -m g:rhstar:rX /global/scratch/sd/balewski/2011w
|
...
How to start a VM with the STAR environment
Code Block |
---|
1) Copy my setup code from Carver.
|
...
- SVN @ NERSC
http://www.nersc.gov/nusers/systems/servers/cvs.php
...
- Running Interactive Jobs on Carver
You can run an xterm in batch by doing a "qsub -I -q regular ...."
http://www.nersc.gov/nusers/systems/carver/running_jobs/interactive.php
Note that interactive jobs do not have to go to the interactive queue.
E.g., inside a screen session run this command (1 node, 1 core):
qsub -I -V -q interactive -l nodes=1:ppn=1
This brings you into another shell that is treated as batch but is interactive.
Misc info about resources at NERSC/PDSF
https://newweb.nersc.gov/users/computational-systems/pdsf/using-the-sge-batch-system/i-o-resources/
...
- Reboot VM
From inside: shutdown -r now
From outside: euca-reboot-instances i-446F07EE
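The two reboot paths above can be wrapped in a tiny helper that just prints the command to run; this is a convenience sketch, not an existing tool, and the instance id is the one from the notes:

```shell
# Print the appropriate reboot command for a VM (sketch only).
reboot_cmd() {
  if [ "$1" = "--inside" ]; then
    echo "shutdown -r now"            # run on the VM itself
  else
    echo "euca-reboot-instances $2"   # run from a euca2ools client
  fi
}
reboot_cmd --inside
reboot_cmd --outside i-446F07EE
```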