Month: December 2012

TBF wishes you all a happy coding and troubleshooting New Year!

Celebrate! In 2012 we celebrated our 3rd anniversary.

The year 2012 was all about Oracle migrations and troubleshooting, playing with RavenDB and dusting off our old PHP skills.

We welcomed 20,000 viewers in 2012.

The busiest day of the year was November 27th with 141 views. The most popular post that day was Recover your corrupt datafiles in oracle – ora-00376.

These are the posts that got the most views in 2012

More numbers and figures: annual report 2012

It was a busy year: all work and no play. But we will make up for that next year. We are currently obsessed with big data,
so expect lots of big data posts coming your way in 2013.

For now, stay safe, enjoy and have a great ‘Oud en Nieuw’ (New Year’s Eve) as we Dutch girls say 😉

Hadoop on Debian Wheezy


We at the ButtonFactory care deeply about Data. And the latest hype seems to be that Data must be Big. So let’s get our hands dirty and take a plunge into Hadoop.

This is how you can install Hadoop on a Debian Wheezy virtual machine in VirtualBox.
My laptop is running Windows 8 Enterprise.

Java Development Kit

Install Debian Wheezy from here.
When choosing packages, select only the base system and the SSH server.
We need to install Java 6 (see wiki).

The JDK .bin file is a self-extracting archive, so make it executable and run it:

$ chmod u+x jdk-6u38-linux-x64.bin
$ ./jdk-6u38-linux-x64.bin

We will move the JDK to /opt like this:

$mkdir /opt/jvm
$mv jdk1.6.0_38/ /opt/jvm/jdk1.6.0_38/

$update-alternatives --install /usr/bin/java java /opt/jvm/jdk1.6.0_38/jre/bin/java 3
$update-alternatives --config java

Now we can check the version:

$ java -version
java version "1.6.0_38"
Java(TM) SE Runtime Environment (build 1.6.0_38-b05)
Java HotSpot(TM) 64-Bit Server VM (build 20.13-b02, mixed mode)

The Hadoop user

I followed Michael Noll’s howto for Ubuntu.

We need to create a hadoop group and an hduser, and we’ll put hduser in the hadoop group:

addgroup hadoop
adduser --ingroup hadoop hduser
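
Hadoop’s start and stop scripts use SSH to reach localhost, so hduser needs passwordless SSH access to its own machine (Michael Noll’s howto covers this step as well). A minimal sketch, assuming the SSH server selected during the Debian install is running:

su - hduser
ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost

The first ssh localhost will ask you to accept the host key; after that it should log you in without a password.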

We need to disable IPv6, so let’s add the following lines to the end of /etc/sysctl.conf:

#disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
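
The new settings only take effect after a reboot or after reloading them by hand. A quick check:

sysctl -p
cat /proc/sys/net/ipv6/conf/all/disable_ipv6

If the cat prints 1, IPv6 is disabled.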

Next we need to add Java and Hadoop to the path.

Add Java and Hadoop to the path

We need to set the $JAVA_HOME and $HADOOP_HOME variables and add the Java and Hadoop bin directories to our PATH.
Edit ~/.bashrc:

# Set Hadoop-related environment variables
export HADOOP_HOME=/opt/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/opt/jvm/jdk1.6.0_38

# Add Hadoop bin/ and JAVA bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin:$JAVA_HOME/bin
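
To pick up these variables in your current shell (instead of logging in again), reload .bashrc and check that they point to the right places:

source ~/.bashrc
echo $JAVA_HOME
echo $HADOOP_HOME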

Next we can install Hadoop, also under /opt. I prefer /opt, but feel free to use /usr or something else.

Installing Hadoop

I’m using Hadoop 1.0.4, because it is stable. We might want to test with 2.0 later on.
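
The steps below assume the hadoop-1.0.4.tar.gz tarball is already in /opt. If you still need it, you can fetch it from an Apache mirror or the Apache archive; the exact URL below is an assumption based on the usual archive layout, so double-check it:

wget http://archive.apache.org/dist/hadoop/core/hadoop-1.0.4/hadoop-1.0.4.tar.gz -P /opt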

cd /opt
tar xzf hadoop-1.0.4.tar.gz
mv hadoop-1.0.4 hadoop
chown -R hduser:hadoop hadoop

We need to edit some of Hadoop’s config files now.

Open /opt/hadoop/conf/hadoop-env.sh and set the JAVA_HOME environment variable to the Sun JDK/JRE 6 directory.

# The java implementation to use.  Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
export JAVA_HOME=/opt/jvm/jdk1.6.0_38


vim core-site.xml

This is where you tell Hadoop where to store its data. Two properties matter here: hadoop.tmp.dir, a base for other temporary directories, and fs.default.name, the name of the default file system. fs.default.name is a URI whose scheme and authority determine the FileSystem implementation: the URI’s scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class, and the URI’s authority is used to determine the host, port, etc. for the filesystem.
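
A minimal core-site.xml for a single-node setup could look like the snippet below. The hadoop.tmp.dir value matches the /app/hadoop/tmp directory we create next; the hdfs://localhost:54310 URI is the value from Michael Noll’s howto and is an assumption here, so treat the port as such:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system.</description>
  </property>
</configuration>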


We need to create this directory and set ownership correctly:

mkdir -p /app/hadoop/tmp
chown hduser:hadoop /app/hadoop/tmp
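
Michael Noll’s howto also tightens the permissions on this directory, which is a sensible extra step:

chmod 750 /app/hadoop/tmp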


vim mapred-site.xml

The property to set here is mapred.job.tracker: the host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
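
For this single-node setup a minimal mapred-site.xml could look like this; localhost:54311 is again the value from Michael Noll’s howto, so treat the port as an assumption:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs at.</description>
  </property>
</configuration>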


vim hdfs-site.xml

The property to set here is dfs.replication: the default block replication.
The actual number of replications can be specified when the file is created;
the default is used if replication is not specified at create time. With only one DataNode in this VM, it should be set to 1.
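
A minimal hdfs-site.xml for this setup, assuming a replication factor of 1 because there is only one DataNode:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.</description>
  </property>
</configuration>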

Starting Hadoop

su to hduser, because Hadoop runs as the hduser:

su - hduser

First format the namenode:

/opt/hadoop/bin/hadoop namenode -format

Now start Hadoop:

/opt/hadoop/bin/start-all.sh

You can check whether everything is running with the Java jps command:

hduser@wheezy:$ jps
2764 JobTracker
3374 Jps
2554 DataNode
2667 SecondaryNameNode
2879 TaskTracker
2449 NameNode

If something went wrong and you don’t see the DataNode running, stop Hadoop, remove all the files in /app/hadoop/tmp, format the namenode again and restart:

cd /opt/hadoop/
bin/stop-all.sh
cd /app/hadoop/tmp/
rm -rf *
cd /opt/hadoop/
bin/hadoop namenode -format
bin/start-all.sh

Now you should be able to browse to:
http://localhost:50070/ – web UI of the NameNode daemon
http://localhost:50030/ – web UI of the JobTracker daemon
http://localhost:50060/ – web UI of the TaskTracker daemon

If you visit these pages from the host machine, replace ‘localhost’ with the IP address of your virtual Debian machine.

Run a MapReduce Task

You can run one of the examples like this:

hduser@wheezy:/opt/hadoop$ bin/hadoop jar hadoop-examples-1.0.4.jar pi 10 1000000

When all goes well the output should be something like this:

12/12/16 16:48:57 INFO mapred.JobClient:     Combine output records=0
12/12/16 16:48:57 INFO mapred.JobClient:     Physical memory (bytes) snapshot=1655828480
12/12/16 16:48:57 INFO mapred.JobClient:     Reduce output records=0
12/12/16 16:48:57 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=5379981312
12/12/16 16:48:57 INFO mapred.JobClient:     Map output records=20
Job Finished in 71.721 seconds
Estimated value of Pi is 3.14158440000000000000

But chances are it doesn’t work right away. I had to double-check my file permissions, because I once ran Hadoop as root, which made root the owner of the log directory; hduser is then not allowed to write to it.

So check for errors like:

WARN mapred.JobClient: Error reading task outputhttp://wheezy:50060/tasklog?plaintext=true&taskid=attempt_201001181020_0002_m_000014_0&filter=stdout
10/01/18 10:52:48 WARN mapred.JobClient: Error reading task outputhttp://wheezy:50060/tasklog?plaintext=true&taskid=attempt_201001181020_0002_m_000014_0&filter=stderr

And give hduser ownership of /opt/hadoop/logs:

chown -R hduser:hadoop /opt/hadoop/logs

Hooray! It’s our 3rd birthday

8-12-2012 It’s our 3rd birthday today and we are still alive and kicking!