*Advanced tasks at Eucalyptus*
----
[list of ec2 commands|http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/index.html?ApiReference-cmd-AttachVolume.html]
----
*Virtual cluster* scripts; documentation is here: $VIRTUALCLUSTER_HOME/docs/README (skip the sections *on other systems* and *Setting credentials*), where {color:#0000ff}echo $VIRTUALCLUSTER_HOME{color} returns /global/common/carver/tig/virtualcluster/0.2.2
# Log into *carver*.nersc.gov; change your shell to {color:#ff0000}bash{color} or python will crash
# Load the Eucalyptus tools + virtual cluster tools (the order of loading matters): {color:#ff0000}module load tig virtualcluster euca2ools{color} *python/2.7.1 screen*
# Setup system variables & your credentials stored in your 'eucarc' script: {color:#ff0000}source \~/key-euca2-balewski-x509/eucarc{color} {color:#333333}(for bash){color}; make sure EUCA_KEY_DIR is set properly
# Create and format an EBS volume: {color:#0000ff}ec2-create-volume \--size 5 \--availability-zone euca{color} \\ VOLUME vol-82DB0796 5 euca creating 2010-12-17T20:24:45+0000
## check the volume is created: {color:#0000ff}euca-describe-volumes{color} \\ VOLUME vol-82DB0796 5 euca available 2010-12-17T20:24:45+0000
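The `VOLUME` lines above follow a fixed column order: id, size, zone, state, timestamp. A small helper can pull out the state so a script can wait for a volume instead of re-running `euca-describe-volumes` by hand. This is a sketch: `volume_state` is a made-up name, and the column layout is assumed from the sample output shown above.

```shell
# Hypothetical helper, assuming the column layout shown above:
#   VOLUME <vol-id> <size> <zone> <state> <timestamp>
volume_state() {    # usage: euca-describe-volumes | volume_state <vol-id>
    awk -v id="$1" '$1 == "VOLUME" && $2 == id { print $5 }'
}

# Illustrative polling loop (commented out: needs a live Eucalyptus session):
# while [ "$(euca-describe-volumes | volume_state vol-82DB0796)" != "available" ]; do
#     sleep 10
# done
```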
## Create an instance: {color:#0000ff}euca-run-instances \-k balewski-euca emi-39FA160F{color} (Ubuntu) {color:#333333}and check it runs:{color} {color:#0000ff}euca-describe-instances \| sort \-k 4 \| grep run{color}
## *STAR VM w/ SL10k*: {color:#ff0000}euca-run-instances \-k balewski-euca \-t c1.xlarge emi-48080D8D{color}
## *STAR VM w/ SL11a*: {color:#ff0000}euca-run-instances \-k balewski-euca \-t c1.xlarge emi-6F2A0E46{color}
## *STAR VM w/ SL11b*: {color:#ff0000}euca-run-instances \-k balewski-euca \-t c1.xlarge emi-FA4D10D5{color}
## *STAR VM w/ SL11c*: {color:#ff0000}euca-run-instances \-k balewski-euca \-t c1.xlarge emi-6E5B0E5C \--addressing private{color}
## small instance: {color:#ff0000}euca-run-instances \-k balewski-euca emi-1CF115B4{color} content: 1 core, 360 MB of disk space
## Attach the EBS volume to this instance: {color:#0000ff}euca-attach-volume \-i i-508B097C \-d /dev/vdb vol-82DB0796{color}; in general: {color:#0000ff}euca-attach-volume \-i <instance-id> \-d /dev/vdb <volume-id>{color}
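The attach step is easy to get wrong (swapped arguments, a volume id where an instance id belongs). A small wrapper can validate the id prefixes and print the exact command before you run it. `attach_ebs` is a made-up name and this is a dry-run sketch, not part of euca2ools:

```shell
# Sketch of a guard around euca-attach-volume; attach_ebs is a made-up name.
attach_ebs() {   # usage: attach_ebs <instance-id> <volume-id> [device]
    dev=${3:-/dev/vdb}
    case "$1" in i-*)   ;; *) echo "bad instance id: $1" >&2; return 1 ;; esac
    case "$2" in vol-*) ;; *) echo "bad volume id: $2"   >&2; return 1 ;; esac
    # Prints the command (dry run); drop the echo to execute it for real.
    echo euca-attach-volume -i "$1" -d "$dev" "$2"
}
```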
## check attachment worked out: {color:#ff0000}euca-describe-volumes{color} \\ VOLUME vol-830F07A0 145 euca in-use 2011-03-16T19:09:49.738Z \\ ATTACHMENT vol-830F07A0 i-46740817 /dev/vdb 2011-03-16T19:21:21.379Z
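The `ATTACHMENT` lines follow the pattern `ATTACHMENT <vol-id> <instance-id> <device> <timestamp>`, so a one-liner can confirm which device a volume actually landed on. This is a hypothetical helper assuming that layout, based only on the sample output above:

```shell
# Hypothetical: print the device a given volume is attached to,
# assuming: ATTACHMENT <vol-id> <instance-id> <device> <timestamp>
attached_device() {   # usage: euca-describe-volumes | attached_device <vol-id>
    awk -v id="$1" '$1 == "ATTACHMENT" && $2 == id { print $4 }'
}
```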
## ssh to this instance and format the EBS volume: {color:#0000ff}ssh \-i \~/key-euca2-balewski-x509/balewski-euca.private root@128.55.70.203{color}
## {color:#0000ff}yes \| mkfs \-t ext3 /dev/vdb{color} then {color:#0000ff}mkdir /apps{color} and {color:#0000ff}mount /dev/vdb /apps{color}
## *terminate this instance:* {color:#0000ff}euca-terminate-instances i-508B097C{color}
## to terminate all instances: {color:#0000ff}euca-terminate-instances $(euca-describe-instances \| grep i- \| cut \-f 2){color}
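The terminate-all one-liner above relies on the instance id sitting in column 2 of `euca-describe-instances`, and `cut -f 2` assumes tab-separated output. An awk variant is more tolerant of whitespace; the `INSTANCE` line layout below is an assumption based on typical describe-instances output:

```shell
# Sketch: collect instance ids from euca-describe-instances output.
# Assumes INSTANCE lines of the form: INSTANCE <i-id> <emi-id> <ip> ...
instance_ids() {
    awk '$1 == "INSTANCE" && $2 ~ /^i-/ { print $2 }'
}

# Illustrative use (needs a live session):
# euca-terminate-instances $(euca-describe-instances | instance_ids)
```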
# re-mount an already formatted EBS disk to a single node:
## start a VM, attach the volume
## ssh to the VM, do: {color:#0000ff}mkdir /apps; mount /dev/vdb /apps{color}
# to mount a 2nd EBS volume on the same machine, first format it as above, then mount it under a different device, e.g. {color:#0000ff}/dev/vdc{color}, with its own mount point such as /someName
# setup & *deploy a VM cluster* using a common EBS volume
## Create a .conf on the local machine and edit it appropriately, following $VIRTUALCLUSTER_HOME/docs/*sample-user.conf*: {color:#0000ff}cp ...sample-user.conf /global/u2/b/balewski/.cloud/nersc/user.conf{color}, then set properly: {color:#0000ff}EBS_VOLUME_ID=vol-82DB0796{color}
## {color:#ff0000}export CLUSTER_CONF=/global/u2/b/balewski/.cloud/nersc/user.conf{color}
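Since vc-launcher reads the file pointed to by CLUSTER_CONF, a quick pre-flight check avoids launching with a missing or unreadable config. This is a sketch; only EBS_VOLUME_ID is known (from this page) to live in user.conf, so the grep is just a warning, not a hard failure:

```shell
# Sketch: sanity-check CLUSTER_CONF before running vc-launcher.
check_cluster_conf() {   # usage: check_cluster_conf "$CLUSTER_CONF"
    if [ -z "$1" ]; then
        echo "CLUSTER_CONF is not set" >&2; return 1
    elif [ ! -r "$1" ]; then
        echo "cannot read $1" >&2; return 1
    elif ! grep -q '^EBS_VOLUME_ID=' "$1"; then
        echo "warning: no EBS_VOLUME_ID in $1" >&2
    fi
    return 0
}
```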
## launch your 3-node *cluster*, it will be *named 'balewski-cent'*; do: {color:#ff0000}vc-launcher newCluster 3{color}
## after many minutes check that the number of launched instances matches; only the head node will have a good IP; do: {color:#0000ff}euca-describe-instances{color}
## ssh to the head node, do: {color:#0000ff}ssh \-i \~/key-euca2-balewski-x509/balewski-euca.private root@128.55.56.49{color}
## ssh to a worker node from the head node, do: {color:#0000ff}ssh root@192.168.2.132{color}
## verify the EBS disk is visible, do: {color:#0000ff}cd /apps/; ls \-l{color}
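Once the cluster is up, the launcher's machineFile under \~/.cloud/nersc/ can be used to confirm that all requested nodes registered. The sketch below counts unique entries; it assumes one IP or hostname per line (possibly repeated), which is what the page's `cat \~/.cloud/nersc/machineFile | sort -u` usage suggests. `node_count` is a made-up name:

```shell
# Sketch: count unique node entries in the virtual-cluster machineFile.
# Assumes one IP or hostname per line, possibly with duplicates.
node_count() {   # usage: node_count ~/.cloud/nersc/machineFile
    sort -u "$1" | grep -c .
}
```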
# add nodes to an existing cluster: {color:#0000ff}vc-launcher addNodes 4{color}
# terminate the cluster, do: {color:#0000ff}vc-launcher terminateCluster balewski-cent{color}
# list of local IPs: {color:#0000ff}cat \~/.cloud/nersc/machineFile \| sort \-u{color}; global head IP: {color:#0000ff}cat \~/.cloud/nersc/.vc-private-balewski-cent{color}
# Change the type of VMs added to the cluster:
## copy the full config: {color:#0000ff}cp /global/common/carver/tig/virtualcluster/0.2.2/conf/cluster/cluster.centos.conf /global/homes/b/balewski/.cloud/nersc{color}
## redefine {color:#0000ff}INSTANCE_TYPE{color}=c1.xlarge, {color:#0000ff}IMAGE_ID{color}=emi-5B7B12EE
## in your user.conf:
### {color:#ff0000}remove{color} the {color:#0000ff}CLUSTER_TYPE{color} line
### {color:#ff0000}add{color} {color:#0000ff}CLUSTER_CONF{color}=/global/homes/b/balewski/.cloud/nersc/cluster.centos.conf
## {color:#0000ff}vc-launcher addNodes 4{color}
----
Troubleshooting:
# the version of python should be at least 2.5; to test it, type: {color:#0000ff}which python{color} and {color:#0000ff}/usr/bin/env python{color}
----
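The python >= 2.5 requirement can be checked without eyeballing the interpreter banner. The sketch below parses the output of `python -V`; `python_ok` is a made-up helper, and note that python 2.x prints its version banner to stderr, hence the `2>&1`:

```shell
# Sketch: return success iff the reported python is at least 2.5.
# Python 2.x prints its version banner to stderr, hence 2>&1 in the usage.
python_ok() {   # usage: python_ok "$(python -V 2>&1)"
    # Turn "Python 2.7.1" into positional args: 2 7 1
    set -- $(printf '%s\n' "$1" | sed 's/^Python //; s/\./ /g')
    [ "$1" -gt 2 ] 2>/dev/null || { [ "$1" -eq 2 ] && [ "$2" -ge 5 ]; } 2>/dev/null
}
```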
Not tested instruction how to setup NFS on a worker node
{code}
For doing this manually, you can use the standard Linux distribution instructions
on how to do that. Here are some high-level instructions based on how the
virtual cluster scripts do it for CentOS.

On the master:
a) Add an entry to /etc/exports for each partition you expect to serve to
   the worker nodes (you should already see an entry for /apps).
b) Then restart nfs:
   $ service nfs restart

On each worker:
a) mkdir /data
b) mount <master IP>:/data /data
{code}
----
Web page monitoring deployed VMs, updated every 15 minutes: [http://portal.nersc.gov/project/magellan/eucalyptus/instances.txt]
Can you explain the meaning of every line?
{code}
node=c0501          node name; eucalyptus runs on nodes c0501 through c0540
mem=24404/24148     memory total/free (in MB)
disk=182198/181170  space available/free - will check tomorrow what units, but I need to check what space it is actually listing here
cores=8/6           cores total/free
{code}
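The instances.txt fields above are key=value pairs like `cores=8/ 6`, sometimes with stray spaces around the slash. Splitting one field into its total/free parts is mechanical; the sketch below assumes only the format visible in the sample above, and `field_total_free` is a made-up name:

```shell
# Sketch: split a "key=total/free" field from instances.txt into two numbers.
# Format assumed from the sample above (e.g. "cores=8/ 6", "mem=24404 /24148");
# stray spaces around the slash are tolerated.
field_total_free() {   # usage: field_total_free "cores=8/ 6"  ->  "8 6"
    printf '%s\n' "$1" | sed 's/^[^=]*=//; s/[[:space:]]//g; s|/| |'
}
```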