
Hadoop - my first MapReduce (M/R) code
  1. install VMware on the local machine
  2. import the Hadoop training virtual machine from http://www.cloudera.com/hadoop-training-virtual-machine
  3. fire VM up
  4. To use Streaming, add an environment variable:
    export SJAR=/usr/lib/hadoop/contrib/streaming/hadoop-0.20.1+133-streaming.jar
    Now you can write M/R jobs in any language - a sketch of the Streaming contract follows.
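    Streaming runs any executable as the mapper or reducer: it feeds input lines on stdin and expects "<key> <value>" lines on stdout. A tab is Streaming's default key/value separator; the scripts below emit a space instead, which also works because the whole line is then treated as the key. A minimal sketch of the contract (the file name kv_sketch.py is made up for illustration):
    #!/usr/bin/env python
    # kv_sketch.py - minimal Streaming mapper sketch (hypothetical name).
    # Reads raw text lines from stdin, emits one <key>TAB<value> pair per line.
    import sys
    for line in sys.stdin:
        line = line.strip()
        if line:
            print "%s\t%d" % (line, 1)  # key = the whole input line, value = 1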
  5. upload Shakespeare text to HDFS (Hadoop Distributed File System)
    cd ~/git/data
    tar vzxf shakespeare.tar.gz
    check nothing is in HDFS yet
    hadoop fs -ls /user/training
    add the unpacked text file to HDFS and check again
    hadoop fs -put input /user/training/inputShak
  6. the arguments to -put are (source) (target); verify the upload by listing the target directory:
    hadoop fs -ls /user/training
  7. Execute an M/R job using 'cat' as the mapper and 'wc' as the reducer:
    hadoop jar $SJAR \
    -mapper cat \
    -reducer wc \
    -input inputShak \
    -output outputShak 
    
    1. inspect the output:
      hadoop fs -cat outputShak/p*
      175376 948516 5398106
      (wc's totals over the whole corpus: lines, words, characters)
  8. Task: write your own Map & Reduce counting word frequencies:
    mapper : read text data from stdin
    write "<key> <value>" to stdout (<key>=word, <value>=1)
    example:
    $ echo "foo foo quux labs foo bar quux" | ./mapper.py
    1. Python Mapper
      mapp1.py
      #!/usr/bin/env python
      # my 1st mapper: writes "<word> 1" for every word read from stdin
      import sys
      data = sys.stdin.readlines()
      for ln in data:
          L = ln.split()
          for key in L:
              if len(key) > 1:  # skip single-character tokens
                  print key, 1
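      The mapper can be tested locally with the example input from above; expected output, one "<word> 1" pair per line:
      $ echo "foo foo quux labs foo bar quux" | python mapp1.py
      foo 1
      foo 1
      quux 1
      labs 1
      foo 1
      bar 1
      quux 1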
      
      
      reducer : read a stream of "<word> 1" from stdin
      write "<word> <count>" to stdout
      redu1.py
      #!/usr/bin/env python
      # my 1st reducer: reads "<word> <value>" pairs, sums the values over runs of the same consecutive key, writes "<word> <sum>"
      import sys
      data = sys.stdin.readlines()
      myKey=""
      myVal=0
      for ln in data:
          L = ln.split()
          nw = len(L) / 2  # each input line may carry several <word> <count> pairs
          for i in range(nw):
              key = L[2 * i]
              val = int(L[2 * i + 1])
              if myKey == key:  # same key as the previous pair: accumulate
                  myVal = myVal + val
              else:  # new key: flush the previous one first
                  if len(myKey) > 0:
                      print myKey, myVal,
                  myKey = key
                  myVal = val

      if len(myKey) > 0:  # flush the last key
          print myKey, myVal,
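      The pair can be checked end-to-end outside Hadoop, with sort standing in for the shuffle phase; because the reducer's print statements end with a comma, all the counts come out on a single line:
      $ echo "foo foo quux labs foo bar quux" | python mapp1.py | sort | python redu1.py
      bar 1 foo 3 labs 1 quux 2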
      
    2. Execute:
      hadoop jar $SJAR \
      -mapper $(pwd)/mapp1.py \
      -reducer $(pwd)/redu1.py \
      -input inputShak \
      -output outputShak3
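      As with the cat/wc job, inspect the result with:
      hadoop fs -cat outputShak3/p*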
      
Cloudera - my 1st EC2 cluster deployed
  1. follow the instructions at http://archive.cloudera.com/docs/ec2.html
    1. Item 2.1: I uploaded 3 tar files: the 'client script', 'boto', and 'simplejson'.
      • Un-tarred all 3.
      • executed, in both 'boto' and 'simplejson':
        sudo python setup.py install
      • exported the environment variables by hand (a boto sanity check follows the launch log below):
        AWS_ACCESS_KEY_ID - Your AWS Access Key ID
        AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key
      • created a directory called .hadoop-ec2 with the file ec2-clusters.cfg, containing (note: private_key must point to the key file matching key_name):
        [my-hadoop-cluster]
        ami=ami-6159bf08
        instance_type=c1.medium
        key_name=janAmazonKey
        availability_zone=us-east-1c
        private_key=/home/training/
        ssh_options=-i %(private_key)s -o StrictHostKeyChecking=no
        
      • fire up a cluster of 1 master + 2 slave nodes
        cd ~/Desktop/cloudera-for-hadoop-on-ec2-py-0.3.0-beta
        ./hadoop-ec2 launch-cluster my-hadoop-cluster 2

1+2 machines are up and running:

Waiting for master to start (Reservation:r-d0d424b8)
...............................................
master	i-1cbe3974	ami-6159bf08	ec2-67-202-49-216.compute-1.amazonaws.com	ip-10-244-182-63.ec2.internal	running	janAmazonKey	c1.medium	2009-10-26T02:31:17.000Z	us-east-1c
Waiting for slaves to start
.............................................
slave	i-f2be399a	ami-6159bf08	ec2-67-202-14-97.compute-1.amazonaws.com	ip-10-244-182-127.ec2.internal	running	janAmazonKey	c1.medium	2009-10-26T02:32:19.000Z	us-east-1c
slave	i-f4be399c	ami-6159bf08	ec2-75-101-201-30.compute-1.amazonaws.com	ip-10-242-19-81.ec2.internal	running	janAmazonKey	c1.medium	2009-10-26T02:32:19.000Z	us-east-1c
Waiting for jobtracker to start

Waiting for 2 tasktrackers to start
.................................................1.............2.
Browse the cluster at http://ec2-67-202-49-216.compute-1.amazonaws.com/
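To confirm the boto install and the exported AWS credentials actually work, here is a minimal sketch (not part of the Cloudera instructions; boto reads the keys from the environment variables exported above):

#!/usr/bin/env python
# List all EC2 instances via the boto library installed earlier.
# Assumes AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are exported.
import boto

conn = boto.connect_ec2()  # credentials come from the environment
for res in conn.get_all_instances():  # one Reservation per launch request
    for inst in res.instances:
        print inst.id, inst.state, inst.dns_name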

To terminate the cluster execute:
./hadoop-ec2 terminate-cluster my-hadoop-cluster
and say 'yes' !!

To actually browse the cluster pages I need to:

  1. execute ./hadoop-ec2 proxy my-hadoop-cluster
    Resulting in:
    export HADOOP_EC2_PROXY_PID=20873;
    echo Proxy pid 20873;
  2. and add the proxy to Firefox (presumably as a SOCKS proxy pointing at the tunnel the proxy command opens) - so far it has not worked.
More advanced instructions on Running Hadoop on Amazon EC2:

http://wiki.apache.org/hadoop/AmazonEC2
