AWS Setup

Setting up your AWS account

Note: Amazon will ask you for your credit card information during the setup process. This is normal.

  1. Go to http://aws.amazon.com/ and sign up:
    1. You may sign in using your existing Amazon account or you can create a new account by selecting "I am a new user."
    2. Enter your contact information and confirm your acceptance of the AWS Customer Agreement.
    3. Once you have created an Amazon Web Services account, check your email for the confirmation message. You need Access Identifiers to make valid web service requests.
  2. Go to http://aws.amazon.com/ and sign in. Double-check that your account is signed up for three of their services: Simple Storage Service (S3), Elastic Compute Cloud (EC2), and Amazon Elastic MapReduce. Under "Manage Your Account" you should see them listed under "Services You're Signed Up For".
  3. You should have received your AWS credit code by email. Armed with this code, go to http://aws.amazon.com/awscredits/ to redeem it; this gives you $100 of credit towards AWS. Be aware that if you exceed it, Amazon will charge your credit card without warning. Normally, this credit is more than enough for this homework assignment (if you are interested in their charges, see AWS charges: currently, AWS charges about 10 cents/node/hour for the default "small" node size). However, you must remember to manually terminate the AWS cluster (called a Job Flow) when you are done: if you just close the browser, the job flows continue to run, and Amazon will continue to charge you for days and weeks, exhausting your credit and then charging a huge amount to your credit card. Remember to terminate the AWS cluster.

Setting up an EC2 key pair

To connect to an Amazon EC2 node, such as the master nodes for the Hadoop clusters you will be creating, you need an SSH key pair. To create and install one, do the following:

  1. After setting up your account, follow Amazon's instructions to create a key pair. Follow the instructions in section "Having AWS create the key pair for you," subsection "AWS Management Console." (Don't do this in Internet Explorer, or you might not be able to download the .pem private key file.)
  2. Download and save the .pem private key file to disk. We will reference the .pem file as </path/to/saved/keypair/file.pem> in the following instructions.
  3. Make sure only you can access the .pem file. If you do not change the permissions, you will get an error message later:
    $ chmod 600 </path/to/saved/keypair/file.pem>
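  4. You will use this key pair to connect to the Hadoop master node of your job flows. As a sanity check of the key file and its permissions, the connection will look roughly like this (a sketch: the master node's public DNS name appears in the Management Console once a job flow is running, and the login user on the Hadoop master is hadoop, as in the SSH tunnel command later in these instructions):
    $ ssh -i </path/to/saved/keypair/file.pem> hadoop@<master.public-dns-name.amazonaws.com>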

Starting an AWS Cluster and Running Pig Interactively

To run a Pig job on AWS, you need to start up an AWS cluster using the web Management Console and connect to the Hadoop master node. Do this by following the Pig tutorial but note that things change quickly in the cloud, so the screenshots and other details will be a bit outdated:

  1. Complete Section 1 (you already did this), Section 2, and Section 3.1 (only) in Amazon's interactive Pig tutorial. Some extra notes to help you along the way:

  2. Once you have completed Sections 1, 2, and 3.1, you will have a Pig prompt:
      grunt>
    
    This is the interactive mode, where you type in Pig queries. Here you will cut and paste example.pig, but only after you read "Managing the results of your Pig queries" below. In this homework we will use Pig only interactively. (The alternative is to have Pig read the program from a file; see the note below.)

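  3. If you prefer not to run Pig interactively, Pig can also execute a script from a file. A minimal sketch, run from a shell on the master node, assuming you have copied example.pig there (for example with scp, as in section 1.C below):

    % pig example.pig

    (In this homework the interactive mode is all you need.)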

Monitoring Hadoop jobs

You are required in this homework to monitor the running Hadoop jobs on your AWS cluster using the master node's job tracker web UI. There are two ways to do this: using lynx or using your own browser with a SOCKS proxy.

  1. Using LYNX. Very easy, you don't need to download anything. Open a separate ssh connection to the AWS master node and type:

    % lynx http://localhost:9100/

    Lynx is a text browser. Navigate as follows: up/down arrows = move through the links (the current link is highlighted); enter = follow a link; left arrow = return to the previous page.

    Examine the webpage carefully while your Pig program is running. You should find information about the map tasks and the reduce tasks; you should be able to drill down into each map task (for example, to monitor its progress) and to look at the log files of the map tasks (if there are runtime errors, you will see them only in these log files).

  2. Using a SOCKS proxy and your own browser. This requires more work, but the nicer interface makes it worth the effort.
    1. Set up your browser to use a proxy when connecting to the master node. Note: if the instructions fail for one browser, try the other. In particular, people seem to have problems with Chrome, whereas Firefox (especially when following Amazon's instructions) works well.
      • Firefox:
        1. Install the FoxyProxy extension for Firefox.
        2. Copy the foxyproxy.xml configuration file from the hw6/ folder into your Firefox profile folder.
        3. If the previous step doesn't work for you, try deleting the foxyproxy.xml you copied into your profile, and using Amazon's instructions to set up FoxyProxy manually. If you use Amazon's instructions, be careful to use port 8888 instead of the port in the instructions.
      • Chrome:
        1. Install the ProxySwitch! extension for Chrome.
        2. Click the Tools icon in the upper-right corner (don't confuse it with Developer Tools), go to Tools, then Extensions. You will see ProxySwitch! in the list: click on Options.
        3. Create a new Proxy Profile: Manual Configuration, Profile name = Amazon Elastic MapReduce (any name you want), SOCKS Host = localhost, Port = 8888 (you can choose any port you want; another favorite is 8157), SOCKS v5. If you don't see "SOCKS", de-select the option to "Use the same proxy server for all protocols".
        4. Create two new switch rules (give them any names, say AWS1 and AWS2). Rule 1: pattern=*.amazonaws.com:*/*, Rule 2: pattern=*.ec2.internal:*/*. For both, Type=wildcard, Proxy profile=[the profile you created in the previous step].
    2. Open a new local terminal window and create the SSH SOCKS tunnel to the master node using the following:
      $ ssh -o "ServerAliveInterval 10" -i </path/to/saved/keypair/file.pem> -ND 8888 hadoop@<master.public-dns-name.amazonaws.com>
      (The -N option tells ssh not to start a shell, and the -D 8888 option tells ssh to start the proxy and have it listen on port 8888.)

      The resulting SSH window will appear to hang, without any output; this is normal as SSH has not started a shell on the master node, but just created the tunnel over which proxied traffic will run.

      Keep this window running in the background (minimize it) until you are finished with the proxy, then close the window to shut the proxy down.
    3. Open your browser, and type one of the following URLs:
      • For the job tracker: http://<master.public-dns-name.amazonaws.com>:9100/
      • For HDFS management: http://<master.public-dns-name.amazonaws.com>:9101/

    The job tracker enables you to see what MapReduce jobs are executing in your cluster and the details on the number of maps and reduces that are running or already completed.

    Note that, at this point in the instructions, you will not see any MapReduce jobs running but you should see that your cluster has the capacity to run a couple of maps and reducers on your one instance.

    The HDFS manager gives you more low-level details about your cluster and all the log files for your jobs.

 

Killing a Hadoop Job

Later, in the assignment, we will show you how to launch MapReduce jobs through Pig. You will basically write Pig Latin scripts that will be translated into MapReduce jobs (see lecture notes). Some of these jobs can take a long time to run. If you decide that you need to interrupt a job before it completes, here is the way to do it:

If you want to kill Pig, first type CTRL-C, which kills Pig only. Next, kill the Hadoop job as follows: from the job tracker interface, find the Hadoop job_id, then type:

% hadoop job -kill job_id
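
You can also find the job id from the command line on the master node; a sketch, using the same hadoop job tool (the job id below is made up for illustration; use the one reported for your job):

% hadoop job -list                          # lists running jobs and their ids
% hadoop job -kill job_201403311234_0002    # example id only; substitute yours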

You do not need to kill any jobs at this point.

However, you can now exit pig (just type "quit") and exit your ssh session. You can also kill the SSH SOCKS tunnel to the master node.

 

Terminating an AWS cluster

When you are done running Pig scripts, make sure to ALSO terminate your job flow. This is a step that you need to do in addition to stopping pig and Hadoop (if necessary) above.

This step shuts down your AWS cluster:

  1. Go to the Management Console.
  2. Select the job in the list.
  3. Click the Terminate button (it should be right below "Your Elastic MapReduce Job Flows").
  4. Wait for a while (may take minutes) and recheck until the job state becomes TERMINATED.

Pay attention to this step. If you fail to terminate your job and only close the browser or log off AWS, your cluster will continue to run, and AWS will continue to charge you: for hours, days, weeks, and when your credit is exhausted, it charges your credit card. Make sure you don't leave the console until you have confirmation that the job is terminated.

Once the state shows TERMINATED, your cluster has shut down and you can safely close the console.

Checking your Balance

Please check your balance regularly!!!

  1. Go to the Management Console.
  2. Click on your name in the top right corner and select "Account Activity".
  3. Now click on "detail" to see itemized charges, including those under $1.

To avoid unnecessary charges, terminate your job flows when you are not using them.

 

Managing the results of your Pig queries

For the next step, you need to start a new cluster, repeating the steps above. Hopefully, it will go very quickly this time.

We will now get into more details about running Pig scripts.

Your pig program stores the results in several files in a directory. You have two options: (1) store these files in the Hadoop File System, or (2) store these files in S3. In both cases you need to copy them to your local machine.

1. Storing Files in the Hadoop File System

This is done through the following pig command (used in example.pig):

	store count_by_object_ordered into '/user/hadoop/example-results' using PigStorage();

Before you run the Pig query, you need to (A) create the /user/hadoop directory. After you run the query, you need to (B) copy the results directory from HDFS to the local filesystem of the AWS master node, then (C) copy it from the AWS master node to your local machine.

1.A. Create the "/user/hadoop" Directory in the Hadoop Filesystem

You will need to do this for each new job flow that you create.

To create a /user/hadoop directory on the AWS cluster's HDFS file system run this from the AWS master node:

% hadoop dfs -mkdir /user/hadoop

Check that the directory was created by listing it with this command:

% hadoop dfs -ls /user/hadoop

You may see some output from either command, but you should not see any errors.

You can also do this directly from grunt with the following command.

grunt> fs -mkdir /user/hadoop 

Now you are ready to run your first sample program. Take a look at the starter code that we provided in hw6.tar.gz, and copy and paste the contents of example.pig into the grunt prompt. (We give more details about this program in hw6.html.)

Note: The program may appear to hang at 0% completion... go check the job tracker. Scroll down. You should see a MapReduce job running with some non-zero progress.

Note 2: Once the first MapReduce job gets to 100%... if your grunt terminal still appears to be suspended... go back to the job tracker and make sure that the reduce phase is also 100% complete. It can take some time for the reducers to start making any progress.

Note 3: The example generates more than 1 MapReduce job... so be patient.

 

1.B. Copying files from the Hadoop Filesystem

The result of a pig script is stored in the hadoop directory specified by the store command. That is, for example.pig, the output will be stored at /user/hadoop/example-results, as specified in the script. HDFS is separate from the master node's file system, so before you can copy this to your local machine, you must copy the directory from HDFS to the master node's Linux file system:

% hadoop dfs -copyToLocal /user/hadoop/example-results example-results

This will create a directory example-results with part-* files in it, which you can copy to your local machine with scp. You can then concatenate all the part-* files to get a single results file, perhaps sorting the results if you like.
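
For example, once the part-* files are on your local machine, standard shell tools can merge and sort them; a sketch (the output file name is arbitrary):

% cat example-results/part-* | sort > example-results.txt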

An easier option may be to use

% hadoop fs -getmerge /user/hadoop/example-results example-results

This command takes a source HDFS directory and a destination local file as input, and concatenates the files in the source directory into the destination local file.


Use hadoop dfs -help or see the hadoop dfs guide to learn how to manipulate HDFS. (Note that hadoop fs is the same as hadoop dfs.)

1.C. Copying files to or from the AWS master node
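
You can move files in either direction with scp and the same key pair you use for ssh. A sketch, run from your local machine (substitute your key file and your master node's public DNS name; myscript.pig is just an illustrative local file name):

% scp -r -i </path/to/saved/keypair/file.pem> hadoop@<master.public-dns-name.amazonaws.com>:example-results .
% scp -i </path/to/saved/keypair/file.pem> myscript.pig hadoop@<master.public-dns-name.amazonaws.com>:

The first command copies the example-results directory from the master node into your current local directory; the second copies a local file into the hadoop user's home directory on the master node.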

2. Storing Files in S3

This seems much easier to use. Go to your AWS Management Console, click on Create Bucket, and create a new bucket (= directory). Give it a name that you don't mind being public; let's say you call it superman-hw6. Click on Actions, Properties, Permissions, and make sure you have all the permissions.

Modify the store command of example.pig to:

	store count_by_object_ordered into 's3n://superman-hw6/example-results';

Run your Pig program. When it terminates, you should see the new directory example-results in your S3 console. Click on individual files to download them. The number of files depends on the number of reduce tasks and may vary from one to a few dozen. The only disadvantage of using S3 is that you have to click on each file separately to download it.
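
If you prefer to check the output without leaving the master node, the Hadoop shell commands also accept s3n:// URIs (on Elastic MapReduce job flows the S3 credentials are typically configured for you); a sketch, using the bucket name from the example above:

% hadoop fs -ls s3n://superman-hw6/example-results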

Note that S3 is permanent storage, and you are charged for it. You can safely store all your query answers for several weeks without exceeding your credit; at some point in the future remember to delete them.