How to Install Hadoop (2.7.3) on Ubuntu (16.04 LTS)



As I am planning to learn Hadoop, I wanted to install Hadoop (2.7.3) on my Ubuntu (16.04 LTS) machine, and I followed the steps mentioned in the documentation on the Apache Hadoop website. I encountered a few problems, which are mentioned below, and spent some time finding solutions to them.

Below are the steps I followed. The description of the error is at the end of this post, along with what I missed and what caused it.


Step 1. Download the Hadoop installation file. For Hadoop 2.7.3 I used the following link (make sure you pick the binary tarball hadoop-2.7.3.tar.gz, not the source tarball, since the steps below rely on the prebuilt bin/ and sbin/ scripts):

http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
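If you prefer to download from the terminal, the same release is also available on the Apache archive (the URL below assumes the standard archive layout):

$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz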
Step 2. Extract the archive (assuming you have downloaded the file into the /home/<username>/Downloads folder):

$ tar zxf hadoop-2.7.3.tar.gz

This will extract the files into a folder "hadoop-2.7.3".

Step 3. I moved the folder to /home/<username> (many suggest moving it into /usr/local, but I prefer to keep it here; maybe once I learn more about Linux I will get into that, as of now I am fine with my current setup).

$ mv hadoop-2.7.3 /home/<username>/
Step 4. Install ssh and rsync
$ sudo apt-get install ssh
$ sudo apt-get install rsync
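Hadoop 2.7.x also needs Java (version 7 or later). If you do not have it yet, on Ubuntu 16.04 you can install OpenJDK 8 with:

$ sudo apt-get install openjdk-8-jdk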
Step 5. Edit the hadoop-env.sh file located at

            /home/<username>/hadoop-2.7.3/etc/hadoop/hadoop-env.sh

Find this line: export JAVA_HOME=${JAVA_HOME}
Replace ${JAVA_HOME} with the location of Java. To find where Java is located, execute the command "whereis java" and you will get the path. For me it was located at

       /usr/bin/java

so I replaced ${JAVA_HOME} with "/usr/" (JAVA_HOME should point to the directory that contains bin/java, not to the java binary itself).
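Since /usr/bin/java is usually just a chain of symlinks, you can also resolve the real JDK location (on Ubuntu 16.04 with OpenJDK 8 this typically points under /usr/lib/jvm/java-8-openjdk-amd64, but your path may differ):

$ readlink -f $(which java)

If it prints, say, /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java, then /usr/lib/jvm/java-8-openjdk-amd64 is a reasonable value for JAVA_HOME.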

Step 6. Prepare to start the Hadoop cluster

Assuming you are within the hadoop-2.7.3 directory, type the following command:
$ bin/hadoop
This will display the usage documentation for the hadoop script. This means you are on the right path :)

Step 7. Configuration

a. Edit the file hadoop-2.7.3/etc/hadoop/core-site.xml, paste the following between <configuration></configuration> and save:
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
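For reference, after this edit the complete core-site.xml (header comments omitted) should look like this; the remaining three files below follow the same pattern:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>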
b. Edit the file hadoop-2.7.3/etc/hadoop/hdfs-site.xml, paste the following between <configuration></configuration> and save:

    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
c. Edit the file hadoop-2.7.3/etc/hadoop/mapred-site.xml, paste the following between <configuration></configuration> and save:
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
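Note: the 2.7.3 release ships only a template for this file, so if mapred-site.xml does not exist yet, create it from the template first:

$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml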

d. Edit the file hadoop-2.7.3/etc/hadoop/yarn-site.xml, paste the following between <configuration></configuration> and save:
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
Step 8. Set up SSH

Check if you can ssh to localhost without a passphrase by executing the following command
$ ssh localhost
If you cannot, then execute the following (note: recent OpenSSH versions, including the one shipped with Ubuntu 16.04, disable DSA keys by default, so use an RSA key):
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
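Then try ssh localhost again; it should log you in without asking for a passphrase. Type exit to leave the nested shell:

$ ssh localhost
$ exit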


Step 9. Execution and testing the setup

a. Format the HDFS (assuming you are in the hadoop-2.7.3 folder)
 $ bin/hdfs namenode -format
b. Start the NameNode and DataNode daemons by the following command
 $ sbin/start-dfs.sh
Now you can browse the NameNode web interface at http://localhost:50070/

c. Create the HDFS folders required to run MapReduce jobs by the following commands
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
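At this point you can run a quick smoke test. The Apache single-node setup guide (linked at the end of this post) runs one of the bundled example jobs; from within the hadoop-2.7.3 folder:

$ bin/hdfs dfs -put etc/hadoop input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'
$ bin/hdfs dfs -cat output/*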
d. You can stop the daemons by the following command
 $ sbin/stop-dfs.sh
e. You can start the ResourceManager and NodeManager daemons by the following command
$ sbin/start-yarn.sh
Now you can browse the ResourceManager web interface at http://localhost:8088/
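A quick way to confirm that everything is up is the jps tool that ships with the JDK. With both start scripts run, it should list NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager (plus Jps itself):

$ jps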

f. You can stop the daemons by the following command
 $ sbin/stop-yarn.sh


Background story
I missed the configuration step (Step 7) and encountered the following error

475 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.

java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.

If you encounter this error, check whether your configuration files are correct. The default value of fs.defaultFS is file:///, which has no authority part, so this error usually means the core-site.xml change from Step 7 was never picked up.

You can also refer to the Apache Hadoop documentation for installation, executing the test jobs, and further explanation: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

Comments

Joel said…
Explained it clearly for beginners. Great Job!!! :)
Rahul said…
Thx. Very concise and clear.
