
PowerBI fix: The import List.ConformToPageReader matches no exports. Did you miss a module reference?

Did you wake up this morning to find PowerBI failure messages in your inbox?

Then you are probably not the only one: your gateway is out of date and has left you with errors on your datasets in PowerBI.


Some datasets are not able to refresh because a module is not found, yet when you test your connections everything seems to be working fine... or does it? PowerBI does not clearly indicate that something is wrong with your gateway; it just gives you a friendly suggestion to update, while all connections report A-OK!

What you need to do is install the recently released (my wild guess: last Sunday?) gateway, version 3000.66.4 (November 2020).

Once you update your (on-premises) gateway server and give it a restart, you will be able to refresh all your reports.

Thank you Microsoft for keeping us busy today!


Deploy linked Azure Resource Manager templates with a SAS token

ARM templates tend to get huge when your deployments get more complex.
With linking you can call an ARM template from another template and create a hierarchy of your templates, making it easier to adjust and reuse the templates. You can pass parameters from the master template to the linked template.

Linked templates are not very intuitive to use, however. In this blog post I will walk you through an example where I deploy a storage account with a linked template. I will also show you how to use the template in a CI/CD pipeline in Visual Studio Team Services.


A complete example is in my GitHub repository.

 

The linked storage template

Let's start with a regular template for storage, but without the variables: a linked template only has parameters.
These parameters will be populated by the master template, where they can be hardcoded, filled from variables or declared in a separate parameters file.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageAccountType": {
            "type": "string",
            "defaultValue": "Standard_LRS",
            "allowedValues": [
                "Standard_LRS",
                "Standard_GRS",
                "Standard_ZRS",
                "Premium_LRS"
            ]
        },
        "storageAccountTier": {
            "type": "string",
            "defaultValue": "Standard",
            "allowedValues": [
                "Standard",
                "Premium"
            ]
        }
    },
    "resources": [
        {
            "apiVersion": "2017-10-01",
            "name": "[concat('disk', uniqueString(resourceGroup().id))]",
            "type": "Microsoft.Storage/storageAccounts",
            "sku": {
                "name": "[parameters('storageAccountType')]",
                "tier": "[parameters('storageAccountTier')]"
            },
            "kind": "Storage",
            "location": "[resourceGroup().location]",
            "tags": {}
        }
    ]
}

Let's call this template storage.json.
Now we are going to call this template from a master template that I will name template.json.

 

The master template

Let's create a folder structure like this:
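The master template sits in the root and the linked template in a nestedtemplates subfolder, matching the relative path that template.json uses below:

template.json
nestedtemplates/
    storage.json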

In template.json I need to make a reference to storage.json. I could put my ARM Templates on GitHub or GitLab and reference the public URI of storage.json. But what if you are in an enterprise and you need to keep your templates private? What if you want to run the templates from a private storage account?
Then you will want to protect them with a SAS Token. How that works will be described in the last part of this article.

This is what template.json will look like:

 {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "artifactsLocationSasToken": {
            "type": "string"
        },
        "artifactsLocationStorageAccount": {
            "type": "string"
        }
    },
    "variables": {
        "storageAccountType": "Standard_LRS",
        "storageAccountTier": "Standard",
        "nestedTemplates": {
            "storageTemplateUrl": "storageTemplateUrl": "[uri(deployment().properties.templateLink.uri, 'nestedtemplates/storage.json' )]"
        }
    },
    "resources": [
        {
            "name": "storageDeployment",
            "type": "Microsoft.Resources/deployments",
            "apiVersion": "2017-05-10",
            "dependsOn": [],
            "properties": {
                "mode": "Incremental",
                "templateLink": {
                    "uri": "[concat(variables('nestedTemplates').storageTemplateUrl, parameters('artifactsLocationSasToken'))]",
                    "contentVersion": "1.0.0.0"
                },
                "parameters": {
                    "storageAccountType": {
                        "value": "[variables('storageAccountType')]"
                    },
                    "storageAccountTier": {
                        "value": "[variables('storageAccountTier')]"
                    }
                }
            }
        }
    ],
    "outputs": {
    }
}

Some explanation: according to the Microsoft docs you can use deployment() to get the base URL for the current template, and use that to get the URL for other templates in the same location. The templateLink property is only returned when linking to a remote template with a URL. If you're using a local template, that property isn't available.

So we use the uri() function to combine deployment().properties.templateLink.uri with 'nestedtemplates/storage.json'. That looks like this:

"nestedTemplates": {
"storageTemplateUrl": "storageTemplateUrl": "[uri(deployment().properties.templateLink.uri, 'nestedtemplates/storage.json' )]"
}

And append the SAS token, parameters('artifactsLocationSasToken'), to it in the resources section:

"nestedTemplates": {
"templateLink": {
"uri": "[concat(variables('nestedTemplates').storageTemplateUrl, parameters('artifactsLocationSasToken'))]",
"contentVersion": "1.0.0.0"
},

 

Pass the parameters

As already mentioned, you can pass parameters:

  • Hardcoded in the nested template (not recommended)
  • Hardcoded in the master template in parameters or variables (semi-recommended)
  • In a separate parameters file (recommended)

I would recommend using a parameters file to set the values that are unique to your deployment. You can then use the concat() function to build the other resource names in variables.
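For example (a sketch, not the repository script), assuming a parameters file named template.parameters.json next to the master template and the $templateUri and $sasToken values produced by the deployment script further down, the two options look like this with the AzureRM cmdlets:

# Option 1: deployment-specific values in a separate parameters file (recommended);
# the deployment script below injects the generated SAS token into that file.
New-AzureRmResourceGroupDeployment -ResourceGroupName 'azure-vm-poc' `
    -TemplateUri $templateUri `
    -TemplateParameterFile '.\template.parameters.json'

# Option 2: the same values passed inline as dynamic parameters on the cmdlet.
New-AzureRmResourceGroupDeployment -ResourceGroupName 'azure-vm-poc' `
    -TemplateUri $templateUri `
    -artifactsLocationSasToken $sasToken `
    -artifactsLocationStorageAccount 'mybeautifulartifacts'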

 

Nested templates and dependencies

You can make other resources or nested deployments depend on this deployment like this:

"nestedTemplates":
"dependsOn": [
"Microsoft.Resources/deployments/storageDeployment"
]

 

Deployment

Finally, the deployment. If you are in an enterprise and you need to keep your templates private, you will want to run the templates from a private storage account. You can achieve this with a SAS Token.

The steps are as follows:

  • Create separate resource group with a storage account
  • Create a container in blob storage
  • Upload all templates and scripts to this container
  • Create a SAS Token for this container that is valid for 2 hours
  • Inject the SAS Token to your parameters.json file
  • Append the SAS Token to the nested template URI

Basically, this is what the PowerShell script does when you create an ARM Template in Visual Studio! However, I think it's good to know what it actually does under the hood.
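As a rough sketch, the core of those steps looks like this with the AzureRM cmdlets (the container name, location and file layout are example values; the full script in the repository also handles the service principal login):

# Minimal sketch of the artifact-staging steps; assumes the AzureRM and Azure.Storage
# modules and an already authenticated session. Names are example values.
$artifactsResourceGroup = 'my-artifacts'
$artifactsStorageAccount = 'mybeautifulartifacts'
$containerName = 'templates'

# 1. Separate resource group with a storage account for the artifacts
New-AzureRmResourceGroup -Name $artifactsResourceGroup -Location 'westeurope' -Force
$storage = New-AzureRmStorageAccount -ResourceGroupName $artifactsResourceGroup `
    -Name $artifactsStorageAccount -SkuName Standard_LRS -Location 'westeurope'
$context = $storage.Context

# 2. Container in blob storage
New-AzureStorageContainer -Name $containerName -Context $context -Permission Off

# 3. Upload all templates, keeping the nestedtemplates/ folder structure
Get-ChildItem -Path . -Filter '*.json' -Recurse | ForEach-Object {
    $blobName = $_.FullName.Substring($PWD.Path.Length + 1).Replace('\', '/')
    Set-AzureStorageBlobContent -File $_.FullName -Container $containerName `
        -Blob $blobName -Context $context -Force
}

# 4. SAS token for the container, valid for 2 hours
$sasToken = New-AzureStorageContainerSASToken -Name $containerName -Context $context `
    -Permission r -ExpiryTime (Get-Date).AddHours(2)

# 5/6. Deploy the master template from the container and pass the SAS token along,
# so the nested template URI can be resolved (assumes the target resource group exists).
$templateUri = "$($context.BlobEndPoint)$containerName/template.json$sasToken"
New-AzureRmResourceGroupDeployment -ResourceGroupName 'azure-vm-poc' `
    -TemplateUri $templateUri `
    -artifactsLocationSasToken $sasToken `
    -artifactsLocationStorageAccount $artifactsStorageAccount `
    -Verbose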

I would suggest creating a service principal; here is how.
We need the ClientId, Secret, TenantId and SubscriptionId of the principal.

You can find the complete script here.

Then run the script:

$vars = @{
ClientId = ""
Secret = ""
TenantId = ""
SubscriptionId = ""
ResourceGroupName = "azure-vm-poc"
ArtifactsResourceGroup = 'my-artificats'
ArtifactsLocationStorageAccount = 'mybeautifulartifacts'
}

# modify path if needed
.\New-AzureDeploy.ps1 @vars -Verbose

 

Add the script to a build or release pipeline with VSTS

Simply add an Azure PowerShell task and call the script. Define the variables in VSTS.
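A hypothetical "Script Arguments" value for that task, mapping VSTS pipeline variables to the parameters of the script from the previous section, could look like this:

-ClientId $(ClientId) -Secret $(Secret) -TenantId $(TenantId) -SubscriptionId $(SubscriptionId) -ResourceGroupName $(ResourceGroupName) -ArtifactsResourceGroup $(ArtifactsResourceGroup) -ArtifactsLocationStorageAccount $(ArtifactsLocationStorageAccount) -Verbose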

Troubleshoot

Sometimes the error messages in the PowerShell console are a bit cryptic. With this command you will get more verbose error messages:

(Get-AzureRmLog -Status "Failed" | Select-Object -First 1) | Format-List

PaaS taking over the world! Are DBAs a dying breed?

PaaS vs IaaS

 

It's Easter weekend, or as the Dutch say, 'PaaS weekend'. So it might be a good idea to talk a bit about PaaS. What is it, and what does it mean for you as a database administrator? Are DBAs a dying breed? Will they just shift over to more complex or broader tasks, or are they here to stay? It all depends on the company and the software it is running.


What is #IaaS? In short, it's a VM: your database is running in a data center. As a DBA you have the same job requirements as when running a database on-premises: updates, backups, patches, tuning, security and account control.

But #PaaS is a different story: it runs the database as a service, so there is no need for a DBA. At least, that's what they tell you. But does PaaS solve all your performance and tuning needs? Is that faulty query suddenly fixed once it is moved to PaaS? Nope. Machine learning is doing a great job so far, but it isn't the magical quick fix yet.

In all honesty, most companies don't care about tuning a database. Not all applications have complex queries and tasks running on their SQL Server; most are fine running an Express edition. They don't even bother having a DBA: the database is taken care of by a system engineer, if it is looked at at all.

Where does this put you as a DBA? Don't sob, we still need you! The market is still flooded with old-school, high-maintenance MS SQL-driven apps. How nice it would be if these could be taken care of with PaaS. The only good thing coming out of these high-maintenance, splintered databases is the data itself. Do you really want to spend your time updating, patching, granting rights to users and saying no to SA account requests? No, you don't!

Second, in the real world companies don't evolve as fast as the IT world itself. Most applications, outdated or not, are not replaced overnight, and not all vendors are quite ready to run their applications in PaaS environments. So you just have to decide which side you want to be on, fast IT or slow IT. There is still a big playground available for both in the coming years.


SQL2016 Stretch database. Yes, it’s another linked server

Your manager always wants to keep all the data. Now, with big data being a thing and algorithms being used more efficiently (IoT), people want to hoard data even more, but it needs to be online to be valuable: if you have to restore the data first, it is not very valuable. All this extra data also slows down your database performance and gives DBAs extra challenges... but not anymore! Well, at least if your boss has a big wallet and doesn't mind spending it on the SQL budget.

In SQL Server 2016 Microsoft introduced the Stretch Database, which gives you live access to 'archived' data and makes it feel like it's on-premises. And it's secretly just a linked server! A linked server between MSSQL and Azure.

How does this work, and is it safe?

Yes! Since it works on secure linked server technology and stores an encryption key on the on-premises SQL Server, it's suitable for all your cold data. Not hot data, because it does slow down query speed. But it will not clog your network, and your data will be available at all times: even while the data is being stretched over to Azure, it is still fully queryable.

First ask yourself: do you really need to keep this data? If yes, how do you create a stretch database? Well, this is easy!

Hesitations? Use this link for info on the MS Stretch Database Advisor.

I use a simple test database, AdventureWorks2016CTP3.

Before we can start, check whether the remote data archive function is enabled; if not (0), enable it.

-- Check status of remote data archive
SELECT @@VERSION AS 'SQL Server Version';
EXEC sp_configure 'remote data archive';
GO
-- Activate remote data archive
EXEC sp_configure 'remote data archive', '1';
GO
RECONFIGURE;
GO

Next we can create a master key to encrypt all your data locally; this way, the data can't be read in Azure.
Note: you can also create a master key from the Stretch DB wizard, it's up to you. I'll show both options.

USE AdventureWorks2016CTP3; 
GO 

CREATE MASTER KEY ENCRYPTION BY PASSWORD='stretchdb2016'; 
GO

Next I will create a simple table and populate it with some data, to keep the demo quick and simple:

CREATE TABLE dbo.Stretchtable
(
FirstName VARCHAR (50),
LastName VARCHAR (50)
);
GO


Use [AdventureWorks2016CTP3]
GO
INSERT INTO dbo.Stretchtable (FirstName, lastName)
VALUES ('Reed', 'Richards'), ('Benjamin', 'Grimm'), ('Sue', 'Storm'), 
 ('Johnny', 'Storm'), ('Victor', 'Von Doom'), ('Willie','Lumpkin');
 GO

Keep your Azure account ready! I use the SQL logical server resource; you need to create this resource before you can stretch the database.

Azure resource

 

Select the table you will be stretching to Azure. Now enable the table for stretching and the wizard will open up.

Enable stretch database

It's a hybrid archive: you can stretch the whole table or filter out the (c)old data from a massive table.

 

Next you create a master key if you didn't make one before; if you already made it, the wizard will ask you to fill in the master password.

Fill in the IP range for a firewall rule.


Hey Ho, let's go, that was easy!

Now the table will start to stretch. If you want to see whether it has stretched yet, try a SELECT query and check the execution plan.

Want your data back? No problem, you can bring the data back to your on-premises server if you don't want to use Azure. Because when you delete your Azure account, the data will be gone too!


 

Note: data storage can be expensive on Azure, but this feature gives us a positive outlook on ever-growing databases and the possibilities they bring.


Linked server to Microsoft Azure #Error 40515

When Azure was launched, we missed one important feature: the linked server. It seemed impossible to do cross-database queries from your local database to the Azure database, but Microsoft listened and made it possible to run queries that span multiple databases across platforms.

There are plenty of hits when you google these errors, but none of them are complete! I will write down the whole process of linking your server to Azure.

When I first created a linked server to Azure, I got the error "reference to database and/or server name is not supported in this version" (error 40515). When trying to add a linked server from SSMS, you are not given the option to set a default database, so I used T-SQL instead, because without a specific catalog your linked server will not work.

Azure doesn't allow you to alter the master database, so you have to connect directly to the database you are going to use.

What people never tell linked server newbies, leaving you to google the error "Msg 2812, Level 16, State 62, Line 1 Could not find stored procedure 'sp_addlinkedserver'" and read tons of blogs and tutorials that all provide different queries but still don't work: just follow these steps.

The Tutorial, the Solution:

Open up ODBC, or simply run odbcad32.exe, and start by making a new System DSN with the Native Client.


In the next step, you provide your data source name and the complete server name as you can find it on the Azure server.


Also provide the database you want to connect to, don't skip this one!


 

Test the connection and see it succeed! If not, go through the screenshots above; you probably missed a step.


Now we can create the linked server by running a query in SSMS; it should look similar to this:

EXEC master.dbo.sp_addlinkedserver
    @server = N'Azure_server',
    @srvproduct = N'',
    @provider = N'sqlncli',
    @datasrc = N't20ko02v18.database.windows.net',
    @catalog = N'Teletraan IV';

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'Azure_server',
    @useself = N'False',
    @locallogin = NULL,
    @rmtuser = N'skyscream',
    @rmtpassword = 'password';

To test if the linked server has been added, you could run EXEC sp_helpserver or just test the connection.

These are all the steps to a Linked Azure server. Congratulations, you are now connected to the Azure mothership of data.

Data manipulation

The next step is to insert data from the Azure linked server into our local database. By the way, this query works both ways; you could also insert data into the Azure database.

Create an INSERT statement

BEGIN TRAN
INSERT INTO [Azure_demo].[dbo].[Demo]
    (Initials, Prefix, LastName, Gender, Street, HouseNumber,
     Zipcode, City, PhoneNumber, MobileNumber, EmailAddress)
SELECT Initials, Prefix, LastName, Gender, Street, HouseNumber,
       Zipcode, City, PhoneNumber, MobileNumber, EmailAddress
FROM [Azure_server].[Teletraan IV].[dbo].[AzureData]
WHERE InsertDate BETWEEN '2008-08-22 00:00:00.000' AND '2009-02-06 00:00:00.000'
ROLLBACK TRAN
--COMMIT TRAN


Now all rows have been added through our linked server into our local database.
If this blog saved your ass or just made your day, please let me know 🙂