Category: ITInfrastructure

Create a lab on Azure with Vagrant and Powershell

If you want to spin up a lab quickly to test things in a Windows environment, you can use an Azure trial account. Since you can create trial accounts over and over again, it will cost you nothing. So, let’s go.

For this scenario I am assuming you are on Windows. By the way, I did the same on a MacBook, but instead of Powershell I used the Azure CLI for Mac (which runs on Node.js). Check this.

Step 1. Create an Azure trial account

Create a trial account on Azure here.
You will need to supply your credit card info, and you should use an email address that has not been used for a trial before. I am on Google Apps, so I can create as many email addresses as I like.

Step 2. Install Azure Powershell

You’ll need Azure Powershell to query the available images.
Install the Azure Powershell with the msi (or Web Platform Installer).
I’ve been trying to install the SDK with OneGet, but it doesn’t seem to be available.

This gives you a brand new shell.
I’m not happy with it, because it doesn’t have a cursor. Let’s fix that:

[Console]::CursorSize = 25

Step 3. Add your Azure credentials



Run Add-AzureAccount and enter your credentials.


Next, get the publishsettings with Get-AzurePublishSettingsFile.



Save your publishsettings (e.g. on c:\temp) and import them:

Import-AzurePublishSettingsFile c:\temp\%your trial account%-credentials.publishsettings

Step 4. Generate certificates

I would advise using Cmder with msysgit integration, if you don’t already. Cmder is my go-to terminal emulator: I use it for Powershell, Git Bash and ordinary DOS. So install Cmder with Chocolatey.

  • First, create a pem certificate, which is conveniently valid for 10 years. It contains a public key and a private key.
  • Then create a pfx certificate based on this pem certificate.
  • From the pfx, generate a cer file to upload to Azure.

openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout azurecert.pem -out azurecert.pem

openssl pkcs12 -export -out azurecert.pfx -in azurecert.pem -name "Vagrant Azure Cert"

openssl x509 -inform pem -in azurecert.pem -outform der -out azurecert.cer
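To sanity-check the result before uploading, you can inspect the pem with openssl. A quick sketch; the -subj value is just a placeholder I added to skip the interactive prompts:

```shell
# Generate the pem non-interactively (same command as above, plus -subj),
# then verify it contains both a key and a certificate.
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout azurecert.pem -out azurecert.pem -subj "/CN=vagrant-azure-lab"

# The pem should hold two blocks: a private key and a certificate
grep -c "BEGIN" azurecert.pem

# Show the validity window (should be roughly 10 years from now)
openssl x509 -in azurecert.pem -noout -dates
```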

Thanks to this article.

Step 5. Upload the cer file to Azure

I can’t figure out how to do this with Powershell, so log on to your subscription in the portal and add the .cer file:


First go to settings, then to Management Certificates and upload your .cer file.


Step 6. Install the Vagrant Plugin for Azure

vagrant plugin install vagrant-azure
vagrant box add azure

Now take a look at the Vagrantfile for this box, located at C:\Users\yourname\.vagrant.d\boxes\azure\0\azure\Vagrantfile.
In this file you can define some defaults that will be applied to every Azure box you create. I’ve changed azure.vm_size from ‘Small’ to ‘Medium’ and added azure.vm_location = ‘West Europe’.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = '2'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation.

  config.vm.provider :azure do |azure|
    azure.vm_size = 'Medium'
    azure.vm_location = 'West Europe' # e.g., West US
  end
end

Step 7. Create a new Vagrant file for your Azure box

Now it’s time to create the Azure Vagrant box. Without much further ado, this is my Vagrantfile:

# --
Vagrant.configure('2') do |config|
  config.vm.box = 'azure'

  config.vm.provider :azure do |azure, override|
    azure.mgmt_certificate = 'insert path to your pem certificate'
    azure.mgmt_endpoint = ''
    azure.subscription_id = 'insert your Azure subscription ID'
    azure.vm_image = ''
    azure.vm_name = 'box01' # max 15 characters; letters, numbers and hyphens; must start with a letter and end with a letter or number

    azure.vm_password = 'Vagrant!' # min 8 characters. should contain a lower case letter, an uppercase letter, a number and a special character

    azure.storage_acct_name = 'azureboxesstorage2015' # optional. A new one will be generated if not provided.
    azure.cloud_service_name = 'azureboxes' # same as vm_name. leave blank to auto-generate
    azure.vm_location = 'West Europe' # e.g., West US

    azure.tcp_endpoints = '3389:53390' # maps the internal Remote Desktop port 3389 to public port 53390. Without this, you cannot RDP to a Windows VM.
  end
end
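The naming and password constraints noted in the comments can be checked up front. This is a hypothetical pre-flight sketch (not part of vagrant-azure); the patterns simply encode the rules as stated in the comments:

```shell
vm_name='box01'
vm_password='Vagrant!'

# vm_name: max 15 chars, letters/numbers/hyphens, starts with a letter,
# ends with a letter or number
if echo "$vm_name" | grep -Eq '^[A-Za-z]([A-Za-z0-9-]{0,13}[A-Za-z0-9])?$'; then
  echo "vm_name ok"
fi

# vm_password: min 8 chars with a lower case letter, an uppercase letter,
# a number and a special character
if [ "${#vm_password}" -ge 8 ] \
    && echo "$vm_password" | grep -q '[a-z]' \
    && echo "$vm_password" | grep -q '[A-Z]' \
    && echo "$vm_password" | grep -q '[0-9]' \
    && echo "$vm_password" | grep -q '[^A-Za-z0-9]'; then
  echo "vm_password ok"
else
  echo "vm_password fails the stated rule"  # 'Vagrant!' contains no number
fi
```

Interestingly, the example password ‘Vagrant!’ does not satisfy the rule in its own comment: it contains no number.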

This Vagrantfile is based on the example Vagrantfile supplied by the vagrant-azure plugin.

You can get a list of available Azure VM images by logging on to your Azure subscription with Powershell and issue the following command:

Get-AzureVMImage | where-object { $_.Label -like "Windows Server 2012 R2 *" }| select imagename,imagefamily

Step 8. Vagrant up

Now it’s time to issue a Vagrant up.

This will generate some error messages, because the VM needs to initialize (I assume).


Just issue a vagrant up again until it says: The machine is already created.

Then you can go ahead and RDP into your new VM:


So there you go, now you are all set to deploy Azure images until the cloud bursts.


I use Ubuntu as a development workstation (but it doesn’t matter!)

I use Ubuntu as my development machine and I like to evangelize about it. But actually, it doesn’t matter at all: it’s the functionality I run that is important, and when that’s the case, the underlying OS becomes irrelevant. That’s why I tend to choose the OS with the smallest footprint, which would be a Linux based OS.

So here is why, and how, I use Ubuntu.


This picture is Ubuntu running in Parallels, which looks great in high res on the MacBook Pro Retina screen.

Some Linux advantages over another OS

There are some advantages of running Ubuntu (or another Linux distro):

  • system requirements are low, so you can happily use older hardware
  • the software is open source and free (as in ‘costs nothing’), although I donate to my favourite open source projects like LibreOffice and Ubuntu itself
  • installation is easy (although installing Windows is easy too)
  • installation is fast, because Ubuntu has a smaller footprint than Windows (8 GB vs 20 GB, and even then Ubuntu is considered large compared with e.g. Puppy Linux)
  • installation of software is a delight because of the package managers (apt, yum, rpm, pacman and so forth); with a package manager you do not need to browse to websites to grab a copy
  • updating is just as simple: apt-get update && apt-get upgrade
  • if you prefer to work with the keyboard and in the terminal, Linux is your best friend: just choose your terminal, your favourite shell and your favourite editor, and you’re good to go for any kind of task

So how do I use Ubuntu?

  • I am a keyboard user, and Ubuntu is very friendly for keyboard users! Especially the Dash is very handy.
  • As IDE I use Subtext and Vim. In Vim I use NERDTree. Vim deserves a dedicated post. It’s an extremely versatile editor that lives in the terminal and it is very small (6 MB). It has a steep learning curve, but once you get the hang of it you’ll notice how powerful it is. And Vim is ubiquitous: it’s everywhere (as Vi on every Linux machine). Once you know vi, you can deal with every Linux machine out there.
  • I use Robomongo to browse Mongo databases.
  • The Gimp is a great Photoshop replacement, especially now that you can enable single-window mode!
  • Chrome is my main browser. I use the apps a lot, so I have access to them on every machine.
  • Last but not least: I use XMind for mindmapping. It is multiplatform, and I love it. It too deserves a dedicated post.


So I use Ubuntu

And yes, I can do all of the above on my Mac and Windows machines as well, but going the Ubuntu way the footprint is the smallest.


Hadoop on Debian Wheezy


We at The ButtonFactory care deeply about Data. And the latest hype seems to be that the Data must be Big. So let’s get our hands dirty already and take a plunge into Hadoop.

This is how you can install Hadoop on a Debian Wheezy virtual machine in VirtualBox.
My laptop is running Windows 8 Enterprise.

Java Development Kit

Install Debian Wheezy from here.
When choosing packages, select only base system and SSH server.
We need to install Java 6 (see wiki).

$ chmod u+x jdk-6u38-linux-x64.bin
$ ./jdk-6u38-linux-x64.bin

We will move the JDK to /opt like this:

$mkdir /opt/jvm
$mv jdk1.6.0_38/ /opt/jvm/jdk1.6.0_38/

$update-alternatives --install /usr/bin/java java /opt/jvm/jdk1.6.0_38/jre/bin/java 3
$update-alternatives --config java

Now we can check the version:

$ java -version
java version "1.6.0_38"
Java(TM) SE Runtime Environment (build 1.6.0_38-b05)
Java HotSpot(TM) 64-Bit Server VM (build 20.13-b02, mixed mode)

The Hadoop user

I followed Michael Noll‘s howto for Ubuntu.

We need to create a hadoop group and an hduser, and we’ll put the hduser in the hadoop group:

addgroup hadoop
adduser --ingroup hadoop hduser

We need to disable IPv6, so let’s add the following lines to the end of /etc/sysctl.conf:

#disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

Next we need to add Java and Hadoop to the path.

Add Java to the path

We need to set the $JAVA_HOME variable and add JAVA to our path.
Edit ~/.bashrc:

# Set Hadoop-related environment variables
export HADOOP_HOME=/opt/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/opt/jvm/jdk1.6.0_38

# Add Hadoop bin/ and JAVA bin/ directories to PATH
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
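You can verify the result in a new shell. A small sketch, with the paths taken from the exports above:

```shell
# Reproduce the exports from ~/.bashrc
export HADOOP_HOME=/opt/hadoop
export JAVA_HOME=/opt/jvm/jdk1.6.0_38
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

# List the PATH entries and confirm the new ones are present
echo "$PATH" | tr ':' '\n' | grep -E 'jdk1\.6\.0_38|hadoop'
```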

Next we can install Hadoop, also in the opt directory. I prefer to use the opt directory, but feel free to use /usr or something else.

Installing Hadoop

I’m using Hadoop 1.0.4, because it is stable. We might want to test with 2.0 later on.

cd /opt
tar xzf hadoop-1.0.4.tar.gz
mv hadoop-1.0.4 hadoop
chown -R hduser:hadoop hadoop

We need to edit some of Hadoop’s config files now.

Open /opt/hadoop/conf/hadoop-env.sh and set the JAVA_HOME environment variable to the Sun JDK/JRE 6 directory.

# The java implementation to use.  Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
export JAVA_HOME=/opt/jvm/jdk1.6.0_38


vim core-site.xml

This is where Hadoop stores its data. The file describes the relevant properties like this:

  A base for other temporary directories.
  The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.

We need to create this directory and set ownership correctly:

mkdir -p /app/hadoop/tmp
chown hduser:hadoop /app/hadoop/tmp


vim mapred-site.xml

  The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
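The corresponding mapred-site.xml entry, again per Michael Noll’s guide (localhost:54311 is the conventional choice and an assumption here; adjust as needed):

```xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at.  If "local", then jobs are run in-process as a single map
    and reduce task.</description>
  </property>
</configuration>
```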


vim hdfs-site.xml

  Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
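And hdfs-site.xml; a replication factor of 1 is a sensible assumption for a single-node test setup:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.</description>
  </property>
</configuration>
```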

Starting Hadoop

su to hduser, because Hadoop runs under the hduser account.

First format the namenode:

/opt/hadoop/bin/hadoop namenode -format

Now start Hadoop:

/opt/hadoop/bin/start-all.sh

You can check if it all runs by running the Java JPS command:

hduser@wheezy:$ jps
2764 JobTracker
3374 Jps
2554 DataNode
2667 SecondaryNameNode
2879 TaskTracker
2449 NameNode

If something went wrong and you don’t see the DataNode running, stop Hadoop, remove all files in /app/hadoop/tmp, format the namenode and start again.

cd /app/hadoop/tmp/
rm -rf *
cd /opt/hadoop/
bin/hadoop namenode -format

Now you should be able to browse to:
http://localhost:50070/ – web UI of the NameNode daemon
http://localhost:50030/ – web UI of the JobTracker daemon
http://localhost:50060/ – web UI of the TaskTracker daemon

If you visit these pages from the host machine, replace ‘localhost’ with the IP of your virtual Debian machine.

Run a MapReduce Task

You can run one of the examples like this:

hduser@wheezy:/opt/hadoop$ bin/hadoop jar hadoop-examples-1.0.4.jar pi 10 1000000

When all goes well the output should be something like this:

12/12/16 16:48:57 INFO mapred.JobClient:     Combine output records=0
12/12/16 16:48:57 INFO mapred.JobClient:     Physical memory (bytes) snapshot=1655828480
12/12/16 16:48:57 INFO mapred.JobClient:     Reduce output records=0
12/12/16 16:48:57 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=5379981312
12/12/16 16:48:57 INFO mapred.JobClient:     Map output records=20
Job Finished in 71.721 seconds
Estimated value of Pi is 3.14158440000000000000
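As a quick sanity check, you can compute how far that estimate is from the real value of pi with a one-line awk sketch:

```shell
# Absolute error of the reported estimate versus pi
echo "3.14158440000000000000" | \
  awk '{ d = $1 - 3.14159265358979; if (d < 0) d = -d; printf "error: %.8f\n", d }'
# prints: error: 0.00000825
```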

But chances are it doesn’t work right away. I had to double-check my file permissions, because I once ran Hadoop as root, which made root the owner of the log directory, so the hduser was not allowed to write to it.

So check for errors like “WARN mapred.JobClient: Error reading task outputhttp://wheezy:50060/tasklog?plaintext=true&taskid=attempt_201001181020_0002_m_000014_0&filter=stdout 10/01/18 10:52:48 WARN mapred.JobClient: Error reading task outputhttp://wheezy:50060/tasklog?plaintext=true&taskid=attempt_201001181020_0002_m_000014_0&filter=stderr”

And give hduser ownership of /opt/hadoop/logs:

chown -R hduser:hadoop logs

Analyzing a Server 2008 R2 dmp crash dump file

Yesterday a resource on our four-node file cluster crashed with a blue screen and was moved to another node. I wanted to analyze the crash dump file (C:\Windows\Minidump\070711-36473-01.dmp), so I copied it to my W7 workstation and tried to open it, but Visual Studio could not help me out here.

Reading a crash dump file is far from intuitive and I spent a great deal of the morning learning about debugging. So here is what I did to read the dump file.

First, you need to install the debugging tools from here. Choose the version that corresponds to your architecture. The install can take a long time, depending on your network speed. Make sure you include WinDbg.exe, because that is the tool we will be using.

Next, you need to download the symbol files. You can also use Microsoft’s symbol server, but it is faster to have a copy of the symbol files on your hard drive. Download them here. Just download them all; this will also take a long time, because the symbol files are huge.

Next! Open C:\Program Files\Debugging Tools for Windows (x86)\WinDbg.exe.
Choose File -> Symbol File Path

Type SRV*C:\Symbols* as the symbol file path, like this:

Now press CTRL+D to open the .dmp file! Very exciting.

Now enter !analyze -v, and you’ll get more information about the crash. In my case:

8: kd> !analyze -v
*******************************************************************************
*                            Bugcheck Analysis                                *
*******************************************************************************

One or more critical user mode components failed to satisfy a health check.
Hardware mechanisms such as watchdog timers can detect that basic kernel
services are not executing. However, resource starvation issues, including
memory leaks, lock contention, and scheduling priority misconfiguration,
may block critical user mode components without blocking DPCs or
draining the nonpaged pool.
Kernel components can extend watchdog timer functionality to user mode
by periodically monitoring critical applications. This bugcheck indicates
that a user mode health check failed in a manner such that graceful
shutdown is unlikely to succeed. It restores critical services by
rebooting and/or allowing application failover to other servers.
Arg1: fffffa8038f3ab30, Process that failed to satisfy a health check within the
configured timeout
Arg2: 00000000000004b0, Health monitoring timeout (seconds)
Arg3: 0000000000000000
Arg4: 0000000000000000

Debugging Details:

PROCESS_OBJECT: fffffa8038f3ab30






LAST_CONTROL_TRANSFER: from fffff880030b76a5 to fffff80001a98d00

fffff880`0253d518 fffff880`030b76a5 : 00000000`0000009e fffffa80`38f3ab30 00000000`000004b0 00000000`00000000 : nt!KeBugCheckEx
fffff880`0253d520 fffff800`01aa4652 : fffff880`0253d600 00000000`00000000 00000000`40800088 00000000`00000001 : netft!NetftWatchdogTimerDpc+0xb9
fffff880`0253d570 fffff800`01aa44f6 : fffff880`030c4100 00000000`03023940 00000000`00000000 00000000`00000000 : nt!KiProcessTimerDpcTable+0x66
fffff880`0253d5e0 fffff800`01aa43de : 00000729`6e09a2ce fffff880`0253dc58 00000000`03023940 fffff880`02517d88 : nt!KiProcessExpiredTimerList+0xc6
fffff880`0253dc30 fffff800`01aa41c7 : 000001c5`99d9f3c1 000001c5`03023940 000001c5`99d9f3fd 00000000`00000040 : nt!KiTimerExpiration+0x1be
fffff880`0253dcd0 fffff800`01a90a2a : fffff880`02515180 fffff880`025202c0 00000000`00000000 fffff880`01368420 : nt!KiRetireDpcList+0x277
fffff880`0253dd80 00000000`00000000 : fffff880`0253e000 fffff880`02538000 fffff880`0253dd40 00000000`00000000 : nt!KiIdleLoop+0x5a


fffff880`030b76a5 cc int 3


SYMBOL_NAME: netft!NetftWatchdogTimerDpc+b9



IMAGE_NAME: netft.sys


FAILURE_BUCKET_ID: X64_0x9E_netft!NetftWatchdogTimerDpc+b9

BUCKET_ID: X64_0x9E_netft!NetftWatchdogTimerDpc+b9

Followup: MachineOwner

Explanation: USER_MODE_HEALTH_MONITOR (9e) is the bug check code I need to investigate. For a complete list of bugcheck codes look here:

And now all that is left for me to say is: ‘happy debugging’.
