
Golang and that strange asterisk and ampersand

When learning Golang and coming from other (scripting) languages, that asterisk and ampersand may strike you as a bit odd.

This is a beginner Golang post!

Let's start with a basic Golang program outline. In the main func, let's create a variable named a and assign it the value "Three-toed sloth".

package main
import (
    "fmt"
)
func main() {
  a := "Three-toed sloth"
  fmt.Println(a)
}

Notice the : before the = and that no type was specified. The Go compiler is able to infer the type from the literal value of the variable, so it knows a is a string.

I could also have declared the variable like this:

var a string = "Three-toed sloth"

Ok, so the program will return:

Three-toed sloth

So you see that a is assigned the value Three-toed sloth. Go has stored that value in memory, and we can get the memory address by prefixing the variable with & (the ampersand).

Like this:

package main
import (
    "fmt"
)
func main() {
  a := "Three-toed sloth"
  fmt.Println(a)
  fmt.Println(&a)
}

This will return:

Three-toed sloth
0xc0000861c0

We can get the types like so:

fmt.Printf("%T\n",a)
fmt.Printf("%T\n",&a)

Adding this to our func will return:

string
*string

We can also get the value stored at that memory address, like so (with the asterisk!):

  b := &a
  fmt.Println(*b)

This will return:

Three-toed sloth

And finally we can do:

    fmt.Println(*&b)

Now what would that return?

Exactly:

0xc0000861c0

That's because &b takes the address of b and the * immediately dereferences it again, so *&b is simply b: the address of a.

Anyways, here is the complete code of this very useful exercise:

package main
import (
    "fmt"
)

func main() {
    a := "Three-toed sloth"
    fmt.Println(a)
    fmt.Println(&a)
    fmt.Printf("%T\n",a)
    fmt.Printf("%T\n",&a)
    b := &a
    fmt.Println(*b)
    fmt.Println(*&b)
}

So the gist of this post comes down to:

  1. The & (ampersand) will give you the memory address of a variable, i.e. a pointer to it.
  2. The asterisk (*) will give you the value stored at that address; it dereferences the pointer to the memory location where the value of the variable is stored.
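To recap both symbols in one tiny sketch (the variable p is just an illustration, not part of the examples above):

package main

import (
    "fmt"
)

func main() {
    a := "Three-toed sloth"
    p := &a         // p has type *string: it holds the memory address of a
    fmt.Println(p)  // the address, e.g. 0xc0000861c0
    fmt.Println(*p) // the value stored at that address: Three-toed sloth
}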

Now consider this piece of code without any pointers:

package main
import (
    "fmt"
)

func main() {
    x := 100
    blah(x)
    fmt.Println(x)
}

func blah(y int) {
    fmt.Println(y)
    y = 12
    fmt.Println(y)
}

This will return:

100 # from blah
12 # from blah
100

But when we change the signature of blah so that it takes in a memory address (a *int) instead of an actual int:

package main
import (
    "fmt"
)

func main() {
    x := 100
    blah(&x)
    fmt.Println("From main: ", x)
}

func blah(y *int) {
    fmt.Println("1. From blah:", y)
    *y = 12
    fmt.Println("2. From blah:", y)
}

This will return:

1. From blah: 0xc00001c0c8
2. From blah: 0xc00001c0c8
From main:  12

Back in main, x will never again be 100, because the blah function wrote the value 12 to the memory address it received.

Finally, a tip for a great Golang mentor: https://twitter.com/Todd_McLeod


Polybase configuration on SQL Server 2017 Part II

Image: Microsoft

Nowadays your precious data can be stored everywhere, not just on several servers with different SQL versions; your data is probably also spread across the cloud. It's also a good idea to store data in the cloud with Stretch Database, to relieve your local disks of excessive data while still being able to query it, use it in your SSIS and BI environment, and keep acceptable ETL times. With Microsoft's PolyBase you can access, import and export any data, whether structured, semi-structured or unstructured, on the Hadoop platform and Azure Blob Storage using the T-SQL language.

The best business knowledge comes from the data you collect, so it might be a good idea to put that data to some good use. Businesses collect lots of data, but in most cases that is also where it ends. Those who have read my posts before know I am all about combining various sources with linked servers, and since SQL Server 2014 lots of new features have become available for using all your data on business intelligence platforms.

In my last post, we had a first look at a PolyBase installation and its troubleshooting. This time we are going to configure and use PolyBase in SQL Server 2017. I'm going to use Blob Storage on Azure to demonstrate how you can implement this solution in your (local) SQL database.

First things first: now that your PolyBase installation is ready, check that the PolyBase services exist and are running.

Services: ‘SQL Server PolyBase Data Movement’ and ‘SQL Server PolyBase Engine’
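If you prefer checking from a query window, this quick sanity check (it does not replace checking the two services) returns 1 when the PolyBase feature is installed:

SELECT SERVERPROPERTY('IsPolyBaseInstalled') AS IsPolyBaseInstalled;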

You need to configure PolyBase in order to start using it. Fire up SSMS, open a new query window, and run:

sp_configure 'hadoop connectivity', 4;

RECONFIGURE;

Option 4 is Azure Blob Storage (WASB[S]). For more info on the available values, take a look at the PolyBase Connectivity Configuration documentation. Run the query and make sure you restart both PolyBase services on the machine to finish the configuration.
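A quick way to verify the setting after the restart (value_in_use should read 4):

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'hadoop connectivity';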

In order to start using Blob Storage, make sure you have an Azure storage account. If you don't have an Azure account yet, create one first.

Log in to Azure and, on the left-hand side, select and create a new storage account.

Give it some time. Once the storage account is created, you also need to create a container in it.

To connect your local database to Azure storage, you need to get the storage key from the Azure storage account you just created and put it in the configuration file of your SQL Server installation.
Look for the core-site.xml file in the installation path of SQL Server.
The path looks similar to this: "C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Binn\Polybase\Hadoop\conf"
This directory contains the core-site.xml file.

Open the file in Notepad and add the following property before the block of code mentioning Kerberos.

Fill in the storage account name (in my case polybasedemo) and the storage key, and save the file. The property looks like this, with your own values filled in:

<property>
  <name>fs.azure.account.key.<your storage account name>.blob.core.windows.net</name>
  <value><your storage account key></value>
</property>

Now we have to create an external data source in SSMS. Replace containername@storagename with the names you created on Azure.

CREATE EXTERNAL DATA SOURCE PolyBaseDemo WITH
( TYPE = HADOOP,
  LOCATION = 'wasbs://containername@storagename.blob.core.windows.net/' );

Next up, we create the external file format that describes the external data on Azure Blob Storage; this needs to be done before we can create the external table:

CREATE EXTERNAL FILE FORMAT PolybaseFormat 
WITH ( FORMAT_TYPE = DELIMITEDTEXT , FORMAT_OPTIONS ( FIELD_TERMINATOR = ',' ) );

This creates two new server objects in SSMS, and now all that's left is to create the external table itself. In this demo I use an Excel sheet (exported as CSV) with some irrelevant data, just to have test data available.

USE [DemoPolybase]

CREATE EXTERNAL TABLE [dbo].[Customers] (
    [Name] VARCHAR(255) NULL,
    [adres] VARCHAR(255) NULL,
    [postalcode] VARCHAR(6) NULL
)
WITH (
    LOCATION = N'/Customer_Export.csv',
    DATA_SOURCE = PolyBaseDemo,
    FILE_FORMAT = PolybaseFormat,
    REJECT_TYPE = VALUE,
    REJECT_VALUE = 10
)
GO

And to see if this worked, just query the data 🙂

SELECT * FROM [Customers]

Now put this knowledge into action yourself with some real data!


Setting up Ubuntu 17.10 for .NET Core Development (including SQL Server, Visual Studio Code, PowerShell and SQL Operations Studio)

In this post I will show you how to set up an Ubuntu desktop that you can use for .NET Core Development.

Why?

Maybe because you want to do Microsoft development on older hardware or because you think Windows is too bloated.

Install the OS

In this example I use the latest Ubuntu version. Mind you, in April another LTS version will be out, but I can't wait and will probably update this article then.

Update and essentials

Once installed, open up your terminal and enter:

sudo apt-get update -y
sudo apt-get upgrade -y

sudo apt-get install -y wget curl git gitk \
    vim build-essential linux-headers-$(uname -r) zsh

These are the essentials you will want: build tools and zsh.

Oh My Zsh

I prefer zsh and Oh My Zsh over Bash because of the auto-complete features and the eye candy.

wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh

And the Powerline fonts for a lovely prompt:

cd ~/Downloads
git clone https://github.com/powerline/fonts.git
cd fonts
./install.sh

Set the default shell to zsh:

chsh -s `which zsh`

Now log out and log back in.
You can then set your favorite theme by editing ~/.zshrc.
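For example, a theme that makes use of the Powerline fonts installed above (the theme name is just one option, pick whatever you like):

# in ~/.zshrc
ZSH_THEME="agnoster"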

Node.js

Next we will install Node.js and fix npm so we can run it without sudo:

curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
sudo apt-get install -y nodejs
mkdir ~/npm-global -p
sudo chown -R $USER:$USER ~/npm-global
npm config set prefix '~/npm-global'

#add to path:
echo "export PATH=~/npm-global/bin:$PATH" >> ~/.zshrc

Trust the Microsoft sources

curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg

Add the repos for the .NET Core SDK, PowerShell, Visual Studio Code and SQL Server all at once:


sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-artful-prod artful main" > /etc/apt/sources.list.d/dotnetdev.list'

curl https://packages.microsoft.com/config/ubuntu/17.04/prod.list | sudo tee /etc/apt/sources.list.d/microsoft.list

sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list'

sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/16.04/mssql-server-2017.list)"

sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/16.04/prod.list)"

Now install the software:

sudo apt-get update 
sudo apt-get install dotnet-sdk-2.1.3 powershell code mssql-server mssql-tools unixodbc-dev
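A quick sanity check that everything landed (the exact version numbers will differ over time):

dotnet --version
pwsh --version
code --version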

Configure Sql Server

You will need to run setup to choose the correct SQL Server edition and to set the SA password.

sudo /opt/mssql/bin/mssql-conf setup
# Verify
systemctl status mssql-server

Install SQL Operations Studio

sqlcmd is already included in the mssql-tools package we installed above. Next, we install SQL Operations Studio:

wget http://download.microsoft.com/download/B/2/0/B20D3C84-C82A-4638-8E8C-164E09B96F71/sqlops-linux-0.25.4.deb
sudo dpkg -i sqlops-linux-0.25.4.deb

# Now log in with the SA password you set earlier:
sqlcmd -S localhost -U SA -P '<YourSAPassword>'
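If your shell can't find sqlcmd, that's because the mssql-tools binaries live in /opt/mssql-tools/bin, which is not on your PATH by default. Add it like this:

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.zshrc
source ~/.zshrc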

And we're done

We now have an Ubuntu desktop for .NET Core development, including SQL Server. Now go ahead and run:

dotnet new webapi
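Then, from the folder where you scaffolded the project, a quick smoke test (the SDK restores packages automatically on run):

dotnet run
# browse to the URL it prints, e.g. http://localhost:5000/api/values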

Next, install the Entity Framework Core package, create a full-fledged backend and enjoy the fact that Microsoft has gone out of its way to make this all possible!

Script

For your convenience: here is a Gist that contains this script.

Links

https://github.com/PowerShell/PowerShell/blob/master/docs/installation/linux.md
https://code.visualstudio.com/docs/setup/linux
https://docs.microsoft.com/en-us/sql/sql-operations-studio/download
https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-ubuntu


Polybase installation on SQL Server 2017 Part I - Oracle JRE 7 Update 51 (64-bit) or higher is required

A fresh new year, so a good time to check out the newest SQL Server! So far the installation process of SQL Server 2017 brings no big new surprises. Just like with SQL Server 2016, you optionally download and install SSMS via the Microsoft website; the link is provided once the installation has finished.

SQL Server 2017

Next, the install and configuration starts. I'll highlight the one pain in the ass I encountered this time.

I already talked about the PolyBase feature in a podcast early in 2016, but this time it's an install and setup walkthrough, plus a warning for all the people bravely installing Oracle's newest version of Java.

When you select PolyBase to be installed and you paid close attention, or already used it in the 2016 edition, you know that you need the Oracle Java SE Runtime Environment.

If this is not already installed on your computer, the installation will fail, resulting in this message:

---------------------------
 Rule Check Result
 ---------------------------
 Rule "Oracle JRE 7 Update 51 (64-bit) or higher is required for Polybase" failed.

This computer does not have the Oracle Java SE Runtime Environment Version 7 Update 51 (64-bit) or higher installed. The Oracle Java SE Runtime Environment is software provided by a third party. Microsoft grants you no rights for such third-party software. You are responsible for and must separately locate, read and accept applicable third-party license terms. To continue, download the Oracle SE Java Runtime Environment from https://go.microsoft.com/fwlink/?LinkId=526030.
 ---------------------------
 OK
 ---------------------------

You need to head over to oracle.com and install version 7 Update 51 or higher. At the time of writing, 9.0.1 is the highest available, so it seems legit to install that one.

Java install

Once you have downloaded the correct product (in my case I chose the Windows Offline installer), run the Java installer and return to your SQL Server setup for a re-run.

Wait, what? Same message! "Requires JRE 7 Update 51 or higher". I had just installed the latest JRE version, did a restart, and Java was up and running.

So, this is the moment you ask yourself: do I really, really want the PolyBase feature that badly? The answer is yes! To start the troubleshooting, I decided to try some backward compatibility. The oldest version available from the site without using my Oracle client registration is 8u151, and guess what... this did the trick!

So stay away from the newest version 9 for as long as possible.

The next post will cover the setup and configuration of PolyBase.


How to use Fusion Tables in your Android app

This tutorial explains how to make a database available in an Android app with Google Fusion Tables.

Fusion Tables is an experimental data visualization web application to gather, visualize, and share data tables.

Create a new project in the Google Developers Console. Now search for "Fusion Tables" in the list of APIs, click on it and enable the API.

It asks for credentials; the next step is to create a service account associated with the project.

This service account is for troubleshooting and editing data by multiple users.

Now that the service account is set up, you can retrieve the key you need in order to use the tables in your project; in my case I'm using App Inventor. From the API Manager Credentials page, select the service account and click 'Create Credentials'.

Now we need to connect the Fusion Tables to our project.

Look up the Fusion Tables API.

Now you can either create a Fusion Table from scratch, use available public data, or import a data file, and the Fusion Table is ready to be used.

My next step is to integrate the Fusion Tables into an Android app. As I said before, I'm using App Inventor for this project, since it's very user-friendly.

With the Inventor blocks, I retrieve the Fusion Table data in my app with an initialize and a get statement.