Golang and that strange asterisk and ampersand

When learning Golang and coming from other (scripting) languages, the asterisk and ampersand may strike you as a bit odd.

This is a beginner Golang post!

Let's start with a basic Golang program outline: in the main func, we create a variable named a and assign it the value "Three-toed sloth".

package main
import (
    "fmt"
)

func main() {
    a := "Three-toed sloth"
    fmt.Println(a)
}

Notice the : before the = and that no type was specified. The Go compiler infers the type from the literal value, so it knows a is a string.

I could also have declared the variable like this:

var a string = "Three-toed sloth"

Ok, so the program will return:

Three-toed sloth

So you see that a is assigned the value Three-toed sloth, which is stored in memory. And we can get the memory address by prefixing the variable with & (the ampersand).

Like this:

package main
import (
    "fmt"
)

func main() {
    a := "Three-toed sloth"
    fmt.Println(a)
    fmt.Println(&a)
}

This will return:

Three-toed sloth
0xc0000861c0

We can get the types like so:

fmt.Printf("%T\n",a)
fmt.Printf("%T\n",&a)

Adding this to our func will return:

string
*string

We can also get the value stored at that memory address, like so (with the asterisk!):

  b := &a
  fmt.Println(*b)

This will return:

Three-toed sloth

And finally we can do:

    fmt.Println(*&b)

Now what would that return?

Exactly:

0xc0000861c0

The & takes the address of b, and the * immediately dereferences that address again, so *&b is simply b: the memory address of a.

Anyway, here is the complete code for this exercise:

package main
import (
    "fmt"
)

func main() {
    a := "Three-toed sloth"
    fmt.Println(a)
    fmt.Println(&a)
    fmt.Printf("%T\n",a)
    fmt.Printf("%T\n",&a)
    b := &a
    fmt.Println(*b)
    fmt.Println(*&b)
}

So the gist of this post comes down to:

  1. The ampersand (&) gives you the memory address of a variable, i.e. a pointer to it.
  2. The asterisk (*) dereferences a pointer: it gives you the value stored at that memory address.
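To see both operators working together, here is a minimal sketch of my own (not part of the original exercise) that swaps two values through pointers:

package main

import (
    "fmt"
)

// swap receives the addresses of two strings and exchanges
// the values stored at those addresses.
func swap(p, q *string) {
    *p, *q = *q, *p
}

func main() {
    a := "Three-toed sloth"
    b := "Two-toed sloth"
    swap(&a, &b)      // pass the addresses of a and b
    fmt.Println(a, b) // Two-toed sloth Three-toed sloth
}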

Now consider this piece of code without any pointers:

package main
import (
    "fmt"
)

func main() {
    x := 100
    blah(x)
    fmt.Println(x)
}

func blah(y int) {
    fmt.Println(y)
    y = 12
    fmt.Println(y)
}

This will return:

100 # from blah
12 # from blah
100 # from main

But watch what happens when we change the signature of blah so that it takes a memory address (*int) instead of an actual int:

package main
import (
    "fmt"
)

func main() {
    x := 100
    blah(&x)
    fmt.Println("From main: ", x)
}

func blah(y *int) {
    fmt.Println("1. From blah:", y)
    *y = 12
    fmt.Println("2. From blah:", y)
}

This will return:

1. From blah: 0xc00001c0c8
2. From blah: 0xc00001c0c8
From main:  12

The value printed from main will never be 100 again, because blah writes the value 12 to the memory address it received.
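The same mechanics apply to structs. As a small sketch of my own (the sloth type is made up for illustration), a function can mutate the caller's struct through a pointer:

package main

import (
    "fmt"
)

type sloth struct {
    name string
    toes int
}

// rename mutates the struct that s points to.
func rename(s *sloth, name string) {
    s.name = name // shorthand for (*s).name = name
}

func main() {
    s := sloth{name: "Three-toed sloth", toes: 3}
    rename(&s, "Two-toed sloth")
    fmt.Println(s.name) // Two-toed sloth
}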

Finally, a tip for a great Golang mentor: https://twitter.com/Todd_McLeod




IBM Cloud platform Watson API – CLI Tools Error: No CF API endpoint set

IBM Watson is a really good AI platform.
But since development of the Watson platform moves so quickly, IBM keeps pushing new updates and workspaces.
If you are a developer, this can be quite time-consuming, since you need to keep rebuilding the former workspaces (now called Skills) and API configurations.
The last update, to V2, and the deprecation of the Bluemix environment gave me quite a few headaches.

To save you the hassle, here is an example of how you can rebuild a Watson Assistant and Watson Discovery API configuration with the IBM Cloud CLI tool. First, the error you are likely to run into:

c:\Program Files\IBM\Cloud\bin>ibmcloud cf push
FAILED
No CF API endpoint set.
Use 'ibmcloud target --cf-api ENDPOINT [-o ORG] [-s SPACE]' to target Cloud Foundry, or 'ibmcloud target --cf' to target it interactively.

It's because you are still pointing to the old Bluemix endpoint:
c:\Program Files\IBM\Cloud\bin>ibmcloud api https://api.eu-gb.bluemix.net
Setting api endpoint...
API endpoint https://api.eu-gb.bluemix.net is going to be deprecated. Use https://cloud.ibm.com.

Here is what you do:
c:\Program Files\IBM\Cloud\bin>ibmcloud api https://cloud.ibm.com
c:\Program Files\IBM\Cloud\bin>ibmcloud login
Now your endpoint is set to cloud.ibm.com.
Now set the right Cloud Foundry environment for Discovery:
c:\Program Files\IBM\Cloud\bin>ibmcloud target --cf-api api.eu-gb.cf.cloud.ibm.com

Now you can (re)do all your CF functions.
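Putting it all together, the fix boils down to these commands (the endpoints are the ones shown above; your region may differ):

ibmcloud api https://cloud.ibm.com
ibmcloud login
ibmcloud target --cf-api api.eu-gb.cf.cloud.ibm.com
ibmcloud cf push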


Deploy linked Azure Resource Manager templates with a SAS token

ARM templates tend to get huge when your deployments get more complex.
With linking you can call an ARM template from another template and create a hierarchy of your templates, making it easier to adjust and reuse the templates. You can pass parameters from the master template to the linked template.

Linked templates are not very intuitive to use, however. In this blog post I will walk you through an example where I deploy a storage account with a linked template. I will also show you how to use the template in a CI/CD pipeline in Visual Studio Team Services.


A complete example is on my Github repository.

 

The linked storage template

Let's start with a regular template for storage, but without the variables: a linked template only has parameters.
These parameters are populated by the master template, where they can be hardcoded, filled from variables, or declared in a separate parameters file.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageAccountType": {
            "type": "string",
            "defaultValue": "Standard_LRS",
            "allowedValues": [
                "Standard_LRS",
                "Standard_GRS",
                "Standard_ZRS",
                "Premium_LRS"
            ]
        },
        "storageAccountTier": {
            "type": "string",
            "defaultValue": "Standard",
            "allowedValues": [
                "Standard",
                "Premium"
            ]
        }
    },
    "resources": [
        {
            "apiVersion": "2017-10-01",
            "name": "[concat('disk', uniqueString(resourceGroup().id))]",
            "type": "Microsoft.Storage/storageAccounts",
            "sku": {
                "name": "[parameters('storageAccountType')]",
                "tier": "[parameters('storageAccountTier')]"
            },
            "kind": "Storage",
            "location": "[resourceGroup().location]",
            "tags": {}
        }
    ]
}

Let's call this template storage.json.
Now we are going to call this template from a master template that I will name template.json.

 

The master template

Let's create a folder structure like this, with the linked template in a nestedtemplates subfolder:

template.json
nestedtemplates/
    storage.json

In template.json I need to make a reference to storage.json. I could put my ARM Templates on Github or GitLab and reference the public URI of storage.json. But what if you are in an enterprise and you need to keep your templates private? What if you want to run the templates from a private storage account?
Then you will want to protect them with a SAS Token. How that works will be described in the last part of this article.

This is what the template.json file will look like:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "artifactsLocationSasToken": {
            "type": "string"
        },
        "artifactsLocationStorageAccount": {
            "type": "string"
        }
    },
    "variables": {
        "storageAccountType": "Standard_LRS",
        "storageAccountTier": "Standard",
        "nestedTemplates": {
            "storageTemplateUrl": "storageTemplateUrl": "[uri(deployment().properties.templateLink.uri, 'nestedtemplates/storage.json' )]"
        }
    },
    "resources": [
        {
            "name": "storageDeployment",
            "type": "Microsoft.Resources/deployments",
            "apiVersion": "2017-05-10",
            "dependsOn": [],
            "properties": {
                "mode": "Incremental",
                "templateLink": {
                    "uri": "[concat(variables('nestedTemplates').storageTemplateUrl, parameters('artifactsLocationSasToken'))]",
                    "contentVersion": "1.0.0.0"
                },
                "parameters": {
                    "storageAccountType": {
                        "value": "[variables('storageAccountType')]"
                    },
                    "storageAccountTier": {
                        "value": "[variables('storageAccountTier')]"
                    }
                }
            }
        }
    ],
    "outputs": {
    }
}

Some explanation: according to the Microsoft docs you can use deployment() to get the base URL for the current template, and use that to get the URL for other templates in the same location. The templateLink property is only returned when linking to a remote template with a URL. If you're using a local template, that property isn't available.

So we need to combine deployment().properties.templateLink.uri with the relative path nestedtemplates/storage.json, using the uri() function. That looks like this:

"nestedTemplates": {
"storageTemplateUrl": "storageTemplateUrl": "[uri(deployment().properties.templateLink.uri, 'nestedtemplates/storage.json' )]"
}

And append the SAS token, parameters('artifactsLocationSasToken'), to that URL in the templateLink of our resource section:

"templateLink": {
    "uri": "[concat(variables('nestedTemplates').storageTemplateUrl, parameters('artifactsLocationSasToken'))]",
    "contentVersion": "1.0.0.0"
}

 

Pass the parameters

As already mentioned, you can pass parameters:

  • Hardcoded in the nested template (not recommended)
  • Hardcoded in the master template in parameters or variables (semi-recommended)
  • In a separate parameters file (recommended)

I would recommend using the parameters file to set values that are unique to your deployment. You can then use the concat() function to build other resource names in variables. A minimal example follows below.
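For illustration, a parameters file for the master template above could look like this (the SAS token value is left empty here, because the deployment script injects it later):

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "artifactsLocationSasToken": {
            "value": ""
        },
        "artifactsLocationStorageAccount": {
            "value": "mybeautifulartifacts"
        }
    }
}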

 

Nested templates and dependencies

Other resources in the master template can depend on the nested deployment by referencing it in their dependsOn array:

"dependsOn": [
    "Microsoft.Resources/deployments/storageDeployment"
]
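For example, a second nested deployment (a hypothetical vm.json, following the same pattern as storage.json) that must wait for the storage deployment could look like this:

{
    "name": "vmDeployment",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2017-05-10",
    "dependsOn": [
        "Microsoft.Resources/deployments/storageDeployment"
    ],
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(uri(deployment().properties.templateLink.uri, 'nestedtemplates/vm.json'), parameters('artifactsLocationSasToken'))]",
            "contentVersion": "1.0.0.0"
        }
    }
}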

 

Deployment

Finally, the deployment. If you are in an enterprise and you need to keep your templates private you will want to run the templates from a private storage account. You can achieve this with a SAS Token.

The steps are as follows:

  • Create separate resource group with a storage account
  • Create a container in blob storage
  • Upload all templates and scripts to this container
  • Create a SAS Token for this container with a valid time of 2 hrs
  • Inject the SAS Token to your parameters.json file
  • Append the SAS Token to the nested template URI

Basically, this is what the PowerShell script does when you create an ARM template project in Visual Studio! However, I think it's good to know what it actually does under the hood; a sketch of those steps follows below.
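To make that concrete, here is a minimal sketch of those steps, assuming the AzureRM and Azure.Storage modules and that you are already logged in; the container name 'templates' is hypothetical, and the resource names are taken from the script parameters shown below:

# Sketch only: upload the templates, create the SAS token, and deploy.
$ArtifactsResourceGroup = 'my-artificats'
$ArtifactsLocationStorageAccount = 'mybeautifulartifacts'
$ContainerName = 'templates' # hypothetical container name

# Get a storage account key and build a storage context.
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $ArtifactsResourceGroup `
    -Name $ArtifactsLocationStorageAccount)[0].Value
$ctx = New-AzureStorageContext -StorageAccountName $ArtifactsLocationStorageAccount `
    -StorageAccountKey $key

# Upload the master and nested templates to the container.
Set-AzureStorageBlobContent -File '.\template.json' -Container $ContainerName `
    -Blob 'template.json' -Context $ctx -Force
Set-AzureStorageBlobContent -File '.\nestedtemplates\storage.json' -Container $ContainerName `
    -Blob 'nestedtemplates/storage.json' -Context $ctx -Force

# Create a read-only SAS token for the container, valid for 2 hours.
$sasToken = New-AzureStorageContainerSASToken -Name $ContainerName -Context $ctx `
    -Permission r -ExpiryTime (Get-Date).AddHours(2)

# Deploy the master template, appending the SAS token to its URI and
# passing it on as the artifactsLocationSasToken parameter.
New-AzureRmResourceGroupDeployment -ResourceGroupName 'azure-vm-poc' `
    -TemplateUri "$($ctx.BlobEndPoint)$ContainerName/template.json$sasToken" `
    -artifactsLocationSasToken $sasToken `
    -artifactsLocationStorageAccount $ArtifactsLocationStorageAccount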

I would suggest creating a service principal. Here is how.
We need the ClientId, Secret, TenantId, and SubscriptionId from the principal.

You can find the complete script here.

Then run the script:

$vars = @{
    ClientId = ""
    Secret = ""
    TenantId = ""
    SubscriptionId = ""
    ResourceGroupName = "azure-vm-poc"
    ArtifactsResourceGroup = 'my-artificats'
    ArtifactsLocationStorageAccount = 'mybeautifulartifacts'
}

# modify path if needed
.\New-AzureDeploy.ps1 @vars -Verbose

 

Add the script to a build or release pipeline with VSTS

Simply add an Azure PowerShell task and call the script. Define the variables in VSTS.

Troubleshoot

Sometimes the error messages in the PowerShell console are a bit cryptic. With this command you get more verbose error messages:

(Get-AzureRmLog -Status "Failed" | Select-Object -First 1) | Format-List

Artificial Intelligence – Chatbot Back to Basics, part 3

We experimented a little with machine learning, but now we get to the part where it really gets interesting: creating a chatbot.

In this example, I use a simple tool called QnA Maker and the Azure QnA Maker resource. When you connect both services, you can integrate the bot on social media like Skype or Messenger, as a Cortana service, or in a (mobile) app. Sign up at https://qnamaker.ai and create a new knowledge base.

I added some of my car data to the knowledge base and named it 'license check'. Based on a license plate we send to the bot, it will respond with the matching car brand. We need to train the bot and publish it, as you can see in this example: save and train your Q&A with the data, test it, and when you like the results, publish it.

When you publish the Q&A it will open a new screen with the URL details, you will need the details to associate your Q&A data with the Azure bot:

Go to the Azure Bot Service in the Azure portal and open the Application Settings page. In the App Settings section, set the QnAKnowledgebaseId, QnAAuthKey, and QnAEndpointHostName values from the publish page and save the settings to Azure.
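For reference, the publish page shows a sample HTTP request along these lines (the placeholders correspond to the three app settings; the license plate in the body is just an example):

POST /knowledgebases/<QnAKnowledgebaseId>/generateAnswer HTTP/1.1
Host: <QnAEndpointHostName>
Authorization: EndpointKey <QnAAuthKey>
Content-Type: application/json

{"question": "AB-123-C"}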

Now try out the new bot in the Azure webchat.

This is still very basic, but we can teach our bot new skills, for instance to be more social. Since we are human, we always greet someone when we meet. So, what if someone says hello? The bot will not recognize this as a license plate and will respond with an error. But we can teach the bot that whenever someone says Hi, Hello or Hoi, it responds with a friendly Hello!, as you can see in this clip. It even understands bad sentences and grammar:

This is just the very basics of a chatbot. You can also add LUIS (Language Understanding Intelligent Service) to your bot. LUIS is a language processor. You need to create a LUIS service in the Azure portal: add a new resource and browse for LUIS. Then sign up at https://www.luis.ai and create an intent to associate LUIS with your Q&A content. This makes your bot interesting to publish under Cortana: you can just tell your computer what license plate you are looking for.

Read my previous post on creating the car data for Machine learning.