
Advanced tasks at Eucalyptus

List of ec2 commands

Virtual cluster scripts documentation is here: $VIRTUALCLUSTER_HOME/docs/README

Note: on other systems skip "Setting credentials"; on Carver, echo $VIRTUALCLUSTER_HOME gives /global/common/carver/tig/virtualcluster/0.2.2

  1. Log into carver.nersc.gov
  2. Change your shell to bash, or python will crash.
  3. Load the Eucalyptus tools + the virtual cluster tools (the order of loading matters; a sanity check is sketched below):
     module load tig virtualcluster euca2ools python/2.7.1
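A quick way to confirm the modules took effect (a minimal sketch; none of these commands are required by the tools themselves):

{code}
# Verify the environment after 'module load':
module list                  # should list tig, virtualcluster, euca2ools, python/2.7.1
which euca-describe-volumes  # euca2ools commands should now be on PATH
python -V                    # should report 2.7.1 (at least 2.5, see Troubleshooting)
{code}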

  4. Run screen.
  5. Set up system variables & your credentials stored in your 'eucarc' script:
     source ~/key-euca2-balewski-x509/eucarc     (for bash)
     Make sure EUCA_KEY_DIR is set properly.
  6. Create and format an EBS volume:
     ec2-create-volume --size 5 --availability-zone euca
     VOLUME  vol-82DB0796  5  euca  creating  2010-12-17T20:24:45+0000

  7. Check the volume is created (a polling sketch follows):
     euca-describe-volumes
     VOLUME  vol-82DB0796  5  euca  available  2010-12-17T20:24:45+0000
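The volume starts out as 'creating'; when scripting this step you can poll euca-describe-volumes until it reports 'available'. A minimal sketch, using the volume id created above:

{code}
# Wait until the new EBS volume becomes available.
VOL=vol-82DB0796
while ! euca-describe-volumes $VOL | grep -q available ; do
    echo "waiting for $VOL ..." ; sleep 10
done
echo "$VOL is available"
{code}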

  8. Create an instance:
     euca-run-instances -k balewski-euca emi-39FA160F     (Ubuntu)
     and check it runs: euca-describe-instances | sort -k 4 | grep run
     Other images (see the sketch after this list for capturing the new instance id):
     * STAR VM w/ SL10k:  euca-run-instances -k balewski-euca -t c1.xlarge emi-48080D8D
     * STAR VM w/ SL11a:  euca-run-instances -k balewski-euca -t c1.xlarge emi-6F2A0E46
     * STAR VM w/ SL11b:  euca-run-instances -k balewski-euca -t c1.xlarge emi-FA4D10D5
     * STAR VM w/ SL11c:  euca-run-instances -k balewski-euca -t c1.xlarge emi-6E5B0E5C --addressing private
     * small instance:    euca-run-instances -k balewski-euca emi-1CF115B4     (1 core, 360 MB of disk space)
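euca-run-instances prints an INSTANCE line whose second field is the new instance id (i-...), and that id is needed again by euca-attach-volume and euca-terminate-instances below. A capture sketch, assuming the same tab-separated output that the terminate-all recipe below relies on:

{code}
# Capture the instance id (field 2 of the INSTANCE line) for the later steps.
IID=$(euca-run-instances -k balewski-euca emi-39FA160F | grep INSTANCE | cut -f 2)
echo "started instance $IID"
{code}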
  9. Attach the EBS volume to this instance: euca-attach-volume -i i-508B097C -d /dev/vdb vol-82DB0796
     (general form: euca-attach-volume -i <instance-id> -d /dev/vdb <volume-id>)

  10. Check the attachment worked out: euca-describe-volumes vol-830F07A0
      VOLUME      vol-830F07A0  145  euca  in-use  2011-03-16T19:09:49.738Z
      ATTACHMENT  vol-830F07A0  i-46740817  /dev/vdb  2011-03-16T19:21:21.379Z

  11. ssh to this instance and format the EBS volume (a mount check is sketched below):
      ssh -i ~/key-euca2-balewski-x509/balewski-euca.private  root@128.55.70.203
      yes | mkfs -t ext3 /dev/vdb
      mkdir /apps
      mount /dev/vdb /apps
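Once mounted, it is worth confirming that /apps really sits on the attached EBS device and not on the VM's root disk, e.g.:

{code}
# On the VM: verify the mount.
df -h /apps          # the device column should show /dev/vdb
mount | grep vdb     # should report: /dev/vdb on /apps type ext3
{code}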

  12. Terminate this instance: euca-terminate-instances i-508B097C
      To terminate all instances: euca-terminate-instances $(euca-describe-instances | grep i- | cut -f 2)

  13. Re-mount an already formatted EBS disk to a single node:
      ## start the VM, attach the volume
      ## ssh to the VM, do: mkdir /apps; mount /dev/vdb /apps
      To mount a 2nd EBS volume on the same machine, first format it as above, then attach it under a different device (/dev/vdc) and mount it as /someName (see the sketch below).
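Putting the 2nd-volume remark together (vol-XXXXXXXX stands for your second, already-created volume id):

{code}
# Attach a second volume under a different device name, then format it
# (first time only) and mount it at its own mount point.
euca-attach-volume -i i-508B097C -d /dev/vdc vol-XXXXXXXX
# then, on the VM:
yes | mkfs -t ext3 /dev/vdc     # only if the volume was never formatted
mkdir /someName
mount /dev/vdc /someName
{code}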

  14. Set up & deploy a VM cluster using a common EBS volume:
    1. Create a user.conf on the local machine and edit it appropriately, following $VIRTUALCLUSTER_HOME/docs/sample-user.conf:
       cp  ...sample-user.conf  /global/u2/b/balewski/.cloud/nersc/user.conf
       and set properly: EBS_VOLUME_ID=vol-82DB0796
    2. export CLUSTER_CONF=/global/u2/b/balewski/.cloud/nersc/user.conf
    3. Launch your 3-node cluster; it will be named 'balewski-cent'; do: vc-launcher newCluster 3
    4. After many minutes check that the number of launched instances matches; only the head node will have a good IP; do: euca-describe-instances

    5. ssh to the head node, do: ssh -i ~/key-euca2-balewski-x509/balewski-euca.private  root@128.55.56.49
    6. ssh to a worker node from the head node, do: ssh root@192.168.2.132
    7. Verify the EBS disk is visible, do: cd /apps/; ls -l

  15. Add nodes to the existing cluster: vc-launcher addNodes 4
  16. Terminate the cluster, do: vc-launcher terminateCluster balewski-cent
  17. List of local IPs: cat ~/.cloud/nersc/machineFile | sort -u
      Global head IP: cat ~/.cloud/nersc/.vc-private-balewski-cent
      (a loop over machineFile is sketched below)
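Since machineFile holds one local IP per line, it can drive a loop that runs the same command on every node from the head node. A sketch, assuming the root ssh access shown above:

{code}
# Run a command on every cluster node listed in machineFile.
for ip in $(sort -u ~/.cloud/nersc/machineFile) ; do
    ssh root@$ip 'hostname; df -h /apps'
done
{code}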
  18. Change the type of VMs added to the cluster (a config sketch follows):
      ## copy the full config: cp /global/common/carver/tig/virtualcluster/0.2.2/conf/cluster/cluster.centos.conf /global/homes/b/balewski/.cloud/nersc
      ## redefine INSTANCE_TYPE=c1.xlarge, IMAGE_ID=emi-5B7B12EE
      ## in your user.conf:
        1. remove the CLUSTER_TYPE line
        2. add CLUSTER_CONF=/global/homes/b/balewski/.cloud/nersc/cluster.centos.conf
      ## vc-launcher addNodes 4
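For reference, the configuration lines touched by this recipe end up looking roughly like this (a sketch only; $VIRTUALCLUSTER_HOME/docs/sample-user.conf is the authoritative template):

{code}
# ~/.cloud/nersc/user.conf (fragment)
EBS_VOLUME_ID=vol-82DB0796
# CLUSTER_TYPE line removed; point at the copied cluster config instead:
CLUSTER_CONF=/global/homes/b/balewski/.cloud/nersc/cluster.centos.conf

# /global/homes/b/balewski/.cloud/nersc/cluster.centos.conf (edited lines)
INSTANCE_TYPE=c1.xlarge
IMAGE_ID=emi-5B7B12EE
{code}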

Trouble shooting:

  1. The version of python should be at least 2.5; to test it type:
     which python
     /usr/bin/env python
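which python only shows where the interpreter lives; the version itself can be checked directly, e.g.:

{code}
# Check the python version (must be at least 2.5):
python -V
python -c 'import sys; print sys.version'
{code}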

Not tested instruction how to setup NFS on a worker node

{code}
For doing this manually, you can use the standard Linux distribution instructions
on how to do that. Here are some high-level instructions based on how the
virtual cluster scripts do it for CentOS.

On the master
a) You will need an entry (in /etc/exports) for each slave partition you expect
   to serve to the worker nodes (you should already see an entry for /apps)
b) Then restart nfs

$  service nfs restart

On each worker
a) mkdir /data
b) mount <master IP>:/data /data
{code}
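As a concrete example of step (a) on the master, the /etc/exports entries could look like the following; the subnet mirrors the worker IP seen earlier and is only an assumption, so use your cluster's private range:

{code}
# /etc/exports on the master (hypothetical subnet)
/apps  192.168.2.0/255.255.255.0(rw,no_root_squash,sync)
/data  192.168.2.0/255.255.255.0(rw,no_root_squash,sync)

# re-export and restart after editing:
exportfs -ra
service nfs restart
{code}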
----
Web page monitoring deployed VMs, updated every 15 minutes: http://portal.nersc.gov/project/magellan/eucalyptus/instances.txt
Can you explain the meaning of every line?

{code}
node=c0501            node name; eucalyptus runs on nodes c0501 through c0540
mem=24404/24148       memory total/free (in MB)
disk=182198/181170    disk space total/free (units and exact meaning still to be checked)
cores=8/6             cores total/free
{code}
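Given that line format, the free capacity across the c0501-c0540 nodes can be tallied straight off the page. A sketch, assuming the total/free layout shown above stays stable:

{code}
# Sum free cores and free memory (MB) over all nodes in instances.txt.
curl -s http://portal.nersc.gov/project/magellan/eucalyptus/instances.txt |
awk -F'[=/ ]+' '
    /^mem=/   { free_mem   += $3 }
    /^cores=/ { free_cores += $3 }
    END { print "free cores:", free_cores, "  free mem (MB):", free_mem }
'
{code}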