Advanced tasks at Eucalyptus

Virtual cluster scripts: the documentation is here: $VIRTUALCLUSTER_HOME/docs/README
(skip the sections *on other systems* and *Setting credentials*), where:
echo $VIRTUALCLUSTER_HOME
/global/common/carver/tig/virtualcluster/0.2.2
- Log into carver.nersc.gov, change shell to bash or python will crash
- Load the Eucalyptus tools + virtual cluster tools, order of loading matters
module load tig virtualcluster euca2ools python/2.7.1
- screen
- setup system variables & your credentials stored in your 'eucarc' script:
source ~/key-euca2-balewski-x509/eucarc   (for bash)
make sure EUCA_KEY_DIR is set properly
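For convenience, the whole session setup can be done in one go; a minimal sketch, assuming the module names, key directory and eucarc path shown above:

# on carver.nersc.gov, in a bash shell
module load tig virtualcluster euca2ools python/2.7.1   # order of loading matters
screen                                                  # optional: keep the session alive
source ~/key-euca2-balewski-x509/eucarc                 # sets the Eucalyptus credentials
echo $EUCA_KEY_DIR                                      # verify the key directory is set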
- create and format EBS volume:
ec2-create-volume --size 5 --availability-zone euca
VOLUME  vol-82DB0796  5  euca  creating  2010-12-17T20:24:45+0000
- check the volume is created:
euca-describe-volumes
VOLUME  vol-82DB0796  5  euca  available  2010-12-17T20:24:45+0000
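A new volume stays in the "creating" state for a short while; if you script this step, a small wait loop helps. A sketch, using the example volume id from above (normally you would capture it from the ec2-create-volume output):

VOL=vol-82DB0796                          # example volume id
while ! euca-describe-volumes $VOL | grep -q available ; do
  echo "waiting for $VOL ..." ; sleep 10
done
echo "$VOL is available"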
- Create an instance: euca-run-instances -k balewski-euca emi-39FA160F   (Ubuntu)
- and check it runs: euca-describe-instances | sort -k 4 | grep run
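To pull out just the running instances, or their ids, from that listing, something like the lines below works; a sketch, reusing the field-2 convention used later for termination (the exact column layout of euca-describe-instances output can differ between euca2ools versions):

euca-describe-instances | grep ^INSTANCE | grep running            # one line per running instance
euca-describe-instances | grep ^INSTANCE | grep running | cut -f 2 # instance ids only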
- STAR VM w/ SL10k: euca-run-instances -k balewski-euca -t c1.xlarge emi-48080D8D
- STAR VM w/ SL11a: euca-run-instances -k balewski-euca -t c1.xlarge emi-6F2A0E46
- STAR VM w/ SL11b: euca-run-instances -k balewski-euca -t c1.xlarge emi-FA4D10D5
- STAR VM w/ SL11c: euca-run-instances -k balewski-euca -t c1.xlarge emi-6E5B0E5C --addressing private
- small instance: euca-run-instances -k balewski-euca emi-1CF115B4   (content: 1 core, 360 MB of disk space)
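If you need to look up which emi-... images are registered (for example after a new STAR image is added), euca-describe-images lists them; a sketch, where the grep pattern is only an example and depends on how the image manifests were named:

euca-describe-images | grep -i sl11      # filter the registered image list by name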
- Attach EBS volume to this instance: euca-attach-volume -i i-508B097C -d /dev/vdb vol-82DB0796
euca-attach-volume -i <instance-id> -d /dev/vdb <volume-id>
- check attachment worked out: euca-describe-volumes vol-830F07A0
VOLUME      vol-830F07A0  145  euca  in-use  2011-03-16T19:09:49.738Z
ATTACHMENT  vol-830F07A0  i-46740817  /dev/vdb  2011-03-16T19:21:21.379Z
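The attachment is not instantaneous; a short wait loop can be used before trying to ssh in and format the disk. A sketch, using the example volume id from above:

VOL=vol-82DB0796
while ! euca-describe-volumes $VOL | grep -q ATTACHMENT ; do
  echo "waiting for $VOL to attach ..." ; sleep 5
done
euca-describe-volumes $VOL     # should now show the volume "in-use" with an ATTACHMENT line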
- ssh to this instance and format the EBS volume:
ssh -i ~/key-euca2-balewski-x509/balewski-euca.private root@128.55.70.203
yes | mkfs -t ext3 /dev/vdb
mkdir /apps
mount /dev/vdb /apps
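The same formatting and mounting can also be done non-interactively from carver; a sketch, assuming the example key path and IP above, and that /dev/vdb really is the freshly attached empty volume (mkfs erases it):

KEY=~/key-euca2-balewski-x509/balewski-euca.private
IP=128.55.70.203
ssh -i $KEY root@$IP 'yes | mkfs -t ext3 /dev/vdb && mkdir -p /apps && mount /dev/vdb /apps && df -h /apps'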
- terminate this instance: euca-terminate-instances i-508B097C
- to terminate all instances: euca-terminate-instances $(euca-describe-instances | grep i- | cut -f 2)
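Note the one-liner above terminates every instance your credentials can see, whatever its state; a slightly narrower variant (a sketch, reusing the field-2 convention from above) limits it to instances currently running:

euca-terminate-instances $(euca-describe-instances | grep ^INSTANCE | grep running | cut -f 2)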
- re-mount an already formatted EBS disk to a single node: ## start the VM, attach the volume ## ssh to the VM, do: mkdir /apps; mount /dev/vdb /apps
- to mount a 2nd EBS volume on the same machine you need to format it first as above, then attach it with a different device (/dev/vdc) & mount it as /someName
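A sketch of the second-volume case, with placeholder ids and the /someName mount point from above:

euca-attach-volume -i <instance-id> -d /dev/vdc <second-volume-id>
# then, inside the VM:
yes | mkfs -t ext3 /dev/vdc        # only the first time - this erases the volume
mkdir /someName
mount /dev/vdc /someName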
- setup & deploy VM cluster using a common EBS volume
- Create a .conf on the local machine and edit it appropriately, following $VIRTUALCLUSTER_HOME/docs/sample-user.conf
cp ...sample-user.conf /global/u2/b/balewski/.cloud/nersc/user.conf
set properly: EBS_VOLUME_ID=vol-82DB0796
export CLUSTER_CONF=/global/u2/b/balewski/.cloud/nersc/user.conf
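So the setup splits into a file and an environment variable; roughly (a sketch - user.conf has more required settings, see sample-user.conf):

# in /global/u2/b/balewski/.cloud/nersc/user.conf
EBS_VOLUME_ID=vol-82DB0796                 # the common EBS volume created earlier
# in your shell, point the cluster tools at that file
export CLUSTER_CONF=/global/u2/b/balewski/.cloud/nersc/user.conf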
- launch your 3-node cluster, it will be named 'balewski-cent', do: vc-launcher newCluster 3
- after many minutes check if the # of launched instances matches; only the head node will have a good IP, do: euca-describe-instances
- ssh to the head node, do: ssh -i ~/key-euca2-balewski-x509/balewski-euca.private root@128.55.56.49
- ssh to a worker node from the head node, do: ssh root@192.168.2.132
- verify the EBS disk is visible, do: cd /apps/; ls -l
- add nodes to existing cluster: vc-launcher addNodes 4
- terminate cluster, do: vc-launcher terminateCluster balewski-cent
- List of local IPs: cat ~/.cloud/nersc/machineFile | sort -u ; global head IP: cat ~/.cloud/nersc/.vc-private-balewski-cent
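The machineFile can also drive a quick check of every worker; a sketch, assuming the file (one private IP per line) is visible from wherever you run the loop - the private 192.168.x addresses are only reachable from the head node:

for ip in $(sort -u ~/.cloud/nersc/machineFile) ; do
  echo "=== $ip" ; ssh root@$ip 'df -h /apps'      # e.g. verify the common EBS mount on each node
done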
- Change the type of VMs added to the cluster: ## copy the full config: cp /global/common/carver/tig/virtualcluster/0.2.2/conf/cluster/cluster.centos.conf /global/homes/b/balewski/.cloud/nersc ## redefine INSTANCE_TYPE=c1.xlarge, IMAGE_ID=emi-5B7B12EE in your user.conf
- remove the CLUSTER_TYPE conf line
- add CLUSTER_CONF=/global/homes/b/balewski/.cloud/nersc/cluster.centos.conf
- vc-launcher addNodes 4
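Put together, the VM-type change looks roughly like this; a sketch, using the example image id and paths from this page, with the config edits described as comments rather than scripted:

cp /global/common/carver/tig/virtualcluster/0.2.2/conf/cluster/cluster.centos.conf /global/homes/b/balewski/.cloud/nersc/
# in your user.conf: redefine INSTANCE_TYPE=c1.xlarge and IMAGE_ID=emi-5B7B12EE,
#   remove the CLUSTER_TYPE line, and add
#   CLUSTER_CONF=/global/homes/b/balewski/.cloud/nersc/cluster.centos.conf
vc-launcher addNodes 4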
Troubleshooting:
- the version of python should be at least 2.5; to test it, type:
which python
/usr/bin/env python
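A quick way to see which interpreter is picked up and its actual version (a sketch):

which python
python -V        # should report 2.5 or newer (2.7.1 after the module load above)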
Not tested: instructions for how to set up NFS on a worker node
For doing this manually, you can use the standard Linux distribution instructions. Here are some high-level instructions based on how the virtual cluster scripts do it for CentOS.
On the master:
a) You will need an entry for each slave partition you expect to serve to the worker nodes (you should see entries for /apps if you ...)
b) Then restart nfs: $ service nfs restart
On each worker:
a) mkdir /data
b) mount <master IP>:/data /data
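A minimal sketch of the master-side export, assuming the /apps partition from earlier is the one being served and that the workers sit on a 192.168.x.x private network (the export options are an assumption, not necessarily what the virtual cluster scripts use):

# on the master
echo '/apps 192.168.0.0/255.255.0.0(rw,no_root_squash,sync)' >> /etc/exports
service nfs restart                 # or: exportfs -ra to re-read /etc/exports
# on each worker
mkdir -p /apps
mount <master IP>:/apps /apps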
Web page monitoring deployed VMs, updated every 15 minutes: http://portal.nersc.gov/project/magellan/eucalyptus/instances.txt

Can you explain the meaning of every line?
node=c0501            node name; eucalyptus runs on nodes c0501 through c0540
mem=24404/24148       memory total/free (in MB)
disk=182198/181170    space available/free (units to be checked; also need to confirm what space is actually listed here)
cores=8/6             cores total/free
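Assuming each node is reported on one line in that node=.../mem=.../disk=.../cores=... format, a rough one-liner to total the free cores (a sketch; the real file layout may differ):

curl -s http://portal.nersc.gov/project/magellan/eucalyptus/instances.txt | \
  awk -F'cores=' '/node=/ {n++; split($2,a,"/"); tot+=a[1]; free+=a[2]} END {print n" nodes, cores total="tot", free="free}'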