
Author: Melanie Koorevaar

Artificial Intelligence – Chatbot: Back to Basics, Part 3

We experimented a little with machine learning, but now we get to the part where it really gets interesting: creating a chatbot.

In this example, I use a simple tool called QnA Maker and the Azure QnA Maker resource. When you connect both services, you can integrate the bot with channels like Skype and Messenger, run it as a Cortana service, or build it into a (mobile) app. Sign up at https://qnamaker.ai and create a new knowledge base.

I added some of my car data to the knowledge base and named it ‘license check’. Based on a license plate we send to the bot, it will respond with the matching car brand. We need to train the bot and publish it, as you can see in this example: save and train your Q&A with the data, test it, and when you like the results, publish it.
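The knowledge base itself is nothing more than question-and-answer pairs; mine look roughly like this (the plates and brands below are made up, just to show the shape):

Question      Answer
12-ABC-3      Volkswagen
45-DEF-6      Volvo
78-GHI-9      Koenigsegg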

When you publish the Q&A, a new screen opens with the URL details; you will need these details to associate your Q&A data with the Azure bot:
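The publish screen shows a sample HTTP request along these lines (the knowledge base ID, key and host name below are placeholders; yours come from your own publish screen, and the question is just an example plate):

POST /knowledgebases/<knowledgebase-id>/generateAnswer
Host: https://<your-qna-service>.azurewebsites.net/qnamaker
Authorization: EndpointKey <endpoint-key>
Content-Type: application/json

{"question":"12-ABC-3"}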

Go to the Azure Bot Service in the Azure portal and open the Application Settings page. In the App Settings section, set the QnAKnowledgebaseId, QnAAuthKey and QnAEndpointHostName values from the publish page and save the keys to Azure.
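As a rough guide, the three settings map onto the pieces of that sample request (placeholder values again):

QnAKnowledgebaseId = the GUID in the POST path (<knowledgebase-id>)
QnAAuthKey = the key after "EndpointKey" in the Authorization header
QnAEndpointHostName = the Host value, e.g. https://<your-qna-service>.azurewebsites.net/qnamaker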

Now try out the new bot in the Azure Web Chat.

This is still very basic, but we can ‘teach’ our bot new skills, for instance to be more social. Since we are humans, we always greet someone when we meet. So, what if someone says hello? The bot will not recognize this as a license plate and will respond with an error, but we can teach the bot that whenever someone says Hi, Hello or Hoi, it responds with a friendly Hello!, as you can see in this clip. It even understands bad sentences and grammar:

This is just the very basics of a chatbot. You can also add LUIS (Language Understanding Intelligent Service) to your bot. LUIS is a language processor. You need to create a LUIS service in the Azure portal: add a new resource and browse for LUIS. Then sign up at https://www.luis.ai and create an intent to associate LUIS with your Q&A content. This makes your bot interesting to publish under Cortana: you can just tell your computer what license plate you are looking for.

Read my previous post on creating the car data for Machine learning.


Artificial Intelligence: Prepping Data – Back to Basics, Part 1

What did street-legal cars teach me about machine learning? It’s all about having the right data available, and the RDW data can’t be trusted!

I impulsively signed up for an Artificial Intelligence certification track two months ago, so I’ve been experimenting with Artificial Intelligence for a while now, and in the beginning the course was one tough cookie! Those formulas for interpreting the predictability of the data really freaked me out!

But once I got past the formulas and saw how much the workspace resembles Microsoft products like SSIS and BI, I saw endless possibilities. This takes the data to a whole new level.

Preparing the data:

I ran a test on all cars currently on the road in the Netherlands and combined that with performance data. I wanted to find the fastest street-legal car. I guess I just wanted to find out what kind of cars I should fancy these days according to the performance stats.

I used an open data set from the Dutch RDW (the Netherlands Vehicle Authority). It contained 14 million rows and was 7 GB in size, so I had to prepare the data to keep the experiment basic and performance high. I imported it into my SQL Server and filtered out the station wagons, campers, scooters and trailers, leaving a data set of about 900,000 rows.
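As a sketch of that filtering step, assuming the RDW table was imported as dbo.Voertuigen with its original Dutch column names (the exact body-type values differ per data set version, so treat these as placeholders):

-- Keep only passenger cars; drop station wagons, campers, scooters and trailers
SELECT Kenteken, Merk, Handelsbenaming, Cilinderinhoud, [Massa ledig voertuig]
INTO dbo.CarsFiltered
FROM dbo.Voertuigen
WHERE Voertuigsoort = 'Personenauto'
  AND Inrichting NOT IN ('stationwagen', 'kampeerwagen');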

I used SQL Server 2017 and Microsoft Azure Machine Learning Studio to create a new experiment.

To make a prediction, I needed to combine the brand data with the engine displacement data, because horsepower data was not available, and see which models are high performance based on engine capacity. Sadly, this means the smaller supercharged engines are not correctly represented in the prediction.
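A minimal sketch of that displacement ranking, reusing the assumed names from the filtering step above:

-- Rank models by engine displacement, as a stand-in for the missing horsepower data
SELECT TOP 3 Merk, Handelsbenaming, MAX(Cilinderinhoud) AS MaxDisplacement
FROM dbo.CarsFiltered
GROUP BY Merk, Handelsbenaming
ORDER BY MaxDisplacement DESC;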

The calculation based on the rules above took a local SQL Server on an i5 laptop about 15 minutes. I needed more data preparation.

Based on engine displacement, a top 3 came up, but I didn’t like the results at all. Sure, the engine displacement was high, but the cars are heavy and their performance isn’t the best. Superchargers, turbos and gearing make all the difference, but they aren’t properly represented in this data.

I had to filter out a lot of data. Next up, I added the weight of the car, but that wasn’t trustworthy either. Then I found a data set that contained the kW and top speed of the cars, joined it with my current results, and added a calculation in SQL on the kW column (* 1.362) to calculate the hp of each car. The hp outcome looks pretty accurate. After 4 hours of combining data and filtering the queries I gave up: based on this data there is no way you can truly point out the fastest cars. I had to change my plans. Too many uncertain variables to make a decent prediction, and not even close to the start of an AI project 🙁
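The kW-to-hp conversion itself is a one-liner in SQL. A sketch with assumed names, where dbo.Performance is a hypothetical table holding the kW and top speed data:

-- Metric horsepower ≈ kW * 1.362 (the factor used above)
SELECT c.Kenteken, c.Merk, c.Handelsbenaming,
       p.Kw, p.Kw * 1.362 AS Hp, p.TopSpeed
FROM dbo.CarsFiltered AS c
JOIN dbo.Performance AS p ON p.Kenteken = c.Kenteken;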

Lots of NULL data
This No. 1 car can’t be trusted!

After more data crunching, the results are still not really worth displaying. So here is a TOP 21 of “fastest” cars… based on… well, the obvious hp and weight sorting:
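The ranking itself is nothing fancier than a TOP 21 sort; dbo.CarsWithHp is a hypothetical name for the joined result of the previous step:

SELECT TOP 21 Kenteken, Merk, Handelsbenaming, Hp, [Massa ledig voertuig] AS Weight
FROM dbo.CarsWithHp
ORDER BY Hp DESC, [Massa ledig voertuig] ASC;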

By the way, did you know there is only one Koenigsegg on the Dutch roads?

Ok, I got a little bit carried away with data prepping.

Now let’s import it into an AI experiment. First you need to create a resource in the Azure portal for your workspace. I won’t get into details; we did this before!

Verify that you created the following new resources: a Machine Learning workspace, a Machine Learning plan and a storage account.

Browse to the Machine Learning workspace you created and launch Machine Learning Studio. This opens a new browser page.

Go to Experiments and, in the bottom-left corner, click NEW.

Rename the experiment and add a dataset. To upload a new dataset: Datasets –> NEW –> Select data to upload. Now that you have the dataset ready, you can drag it into your experiment and start running tests and variables on the data.

In my next post we will dive deeper into Artificial Intelligence.

PaaS taking over the world! Are DBAs a dying breed?

PaaS vs IaaS

It’s Easter weekend, or as the Dutch say, ‘PaaS weekend’. So it might be a good idea to talk a bit about PaaS. What is it, and what does it mean for you as a database administrator? Are DBAs a dying breed? Will they just shift to more complex or broader tasks, or are they here to stay? It all depends on the company and the software it is running.

What is #IaaS? In short, it’s a VM: your database is running in a data center. As a DBA you have the same job requirements as when running a database on-premises: updates, backups, patches, tuning, security and account control.

But #PaaS is a different story: it runs the database as a service, so there is no need for a DBA. At least, that’s what they tell you. But does PaaS solve all your performance and tuning needs? Is that faulty query suddenly fixed when it’s moved to PaaS? Nope. Machine learning is doing a great job so far, but it isn’t the magical quick fix, yet.

In all honesty, most companies don’t care about tuning a database. Not all applications have complex queries and tasks running on their SQL Server; most are fine running an Express edition. They don’t even bother having a DBA: the database is taken care of by a system engineer, if it’s being looked at at all.

Where does this put you as a DBA? Don’t sob, we still need you! The market is still flooded with old-school, high-maintenance MS SQL driven apps; it would be a big relief if these could be taken care of with PaaS. The only good thing coming out of these high-maintenance, splintered databases is the data itself. Do you really want to spend your time updating, patching and granting rights to users, and saying no to SA account requests? No, you don’t!

And second, in the real world companies don’t evolve as fast as the IT world itself. Most applications, outdated or not, are not replaced overnight, and not all vendors are quite ready to run their applications in PaaS environments. So you just have to decide which side you want to be on, fast IT or slow IT. There is still a big playground available for both in the coming years.


PolyBase configuration on SQL Server 2017 – Part II

Image: Microsoft

Nowadays your precious data can be stored everywhere: not just on several servers with different SQL versions, but probably spread wide across the cloud. It’s also a good idea to store data in the cloud with Stretch Database, to relieve your local disks of excessive data while still being able to query it, use it in your SSIS and BI environment, and keep an acceptable ETL. With Microsoft’s PolyBase you can access, import and export any data, structured, semi-structured or unstructured, on the Hadoop platform and Azure Blob Storage, using the T-SQL language.

The best business knowledge comes from the data you collect, so it might be a good idea to put that data to some good use. Businesses collect lots of data, but in most cases that is also where it ends. Those who have read my posts before know I am all about combining various sources with linked servers, and since SQL 2014 lots of new features have become available for using all your data on business intelligence platforms.

In my last post, we had a first look at a PolyBase installation and its troubleshooting. This time we are going to configure and use PolyBase in SQL Server 2017. I’m going to use Blob Storage on Azure to demonstrate how you can implement this solution in your (local) SQL database.

First things first: now that your PolyBase installation is ready, check that the PolyBase services exist and are running.

Services: ‘SQL Server PolyBase Data Movement’ and ‘SQL Server PolyBase Engine’
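You can also confirm from SSMS that the feature is present; this documented server property returns 1 when PolyBase is installed:

SELECT SERVERPROPERTY('IsPolyBaseInstalled') AS IsPolyBaseInstalled;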

You need to configure PolyBase before you can use it. Fire up SSMS, open a new query window, and run:

EXEC sp_configure 'hadoop connectivity', 4;

RECONFIGURE;

Option 4 is Azure Blob Storage (WASB[S]). For more info on the available PolyBase connectivity configurations, take a look here. Run the query and make sure you restart both PolyBase services on the machine to finish the configuration.
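After the restart, you can read the setting back to verify it took effect; the run_value column should show 4:

EXEC sp_configure 'hadoop connectivity';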

To start using Blob Storage, make sure you have an Azure storage account; if you don’t have an Azure account yet, create one here.

Log in to Azure and, on the left side, select and create a new storage account.

Give it some time; once the storage account is created, you also need to create a container in it.

To connect your local DB to Azure Storage, you need to take the storage key from the Azure storage account you just created and put it in the configuration file of your SQL installation.
Look for the core-site.xml file in the installation path of SQL Server.
The path looks similar to this: “C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Binn\Polybase\Hadoop\conf”
That config directory contains the core-site.xml file.

Open the file in Notepad and add a property block before the block of code mentioning Kerberos.

Fill in the storage name, in my case polybasedemo, and the storage key, then save the file. The property looks like this (the value is your own storage account access key):

<property>
  <name>fs.azure.account.key.polybasedemo.blob.core.windows.net</name>
  <value>YourStorageAccountKeyHere</value>
</property>

Now we have to create an external data source in SSMS. Replace containername@storagename with the names you created on Azure.

CREATE EXTERNAL DATA SOURCE PolyBaseDemo WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://containername@storagename.blob.core.windows.net/'
);

Next up, we create the external file format that defines the external data on Azure Blob Storage; this needs to be done before we can create the external table:

CREATE EXTERNAL FILE FORMAT PolybaseFormat
WITH ( FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS ( FIELD_TERMINATOR = ',' ) );

This creates two new server objects in SSMS, and now all that is left is to create the external table itself. In this demo I use an Excel sheet, exported to CSV, with some irrelevant data so I have test data available.

USE [DemoPolybase]
GO
CREATE EXTERNAL TABLE [dbo].[Customers] (
    [Name] VARCHAR(255) NULL,
    [adres] VARCHAR(255) NULL,
    [postalcode] VARCHAR(6) NULL
)
WITH (
    LOCATION = N'/Customer_Export.csv',   -- path of the file inside the container
    DATA_SOURCE = PolyBaseDemo,
    FILE_FORMAT = PolybaseFormat,
    REJECT_TYPE = VALUE,                  -- tolerate up to 10 unparsable rows before failing
    REJECT_VALUE = 10
)
GO

And to see if this worked, just query the data 🙂

SELECT * FROM [Customers]

Now put this knowledge into action yourself with some real data!


PolyBase installation on SQL Server 2017, Part I – Oracle JRE 7 Update 51 (64-bit) or higher is required

Fresh new year, so a good time to check out the newest SQL Server! So far, the installation process in SQL Server 2017 brings no big new surprises. Just like with SQL Server 2016, you optionally download and install SSMS via the Microsoft website; the link is provided once the installation has finished.

SQL Server 2017

Next, the install and configuration starts. I’ll highlight the one pain in the ass I encountered this time.

I already talked about the PolyBase feature in a podcast in early 2016, but this time it’s an install and setup walkthrough, plus a warning for all the people bravely installing Oracle’s newest version of Java.

When you select PolyBase to be installed, and you paid close attention or already used it in the 2016 edition, you know that you need the Oracle SE Java Runtime Environment.

Polybase Oracle JRE

If this is not already installed on your computer, the installation will fail, resulting in this message:

---------------------------
Rule Check Result
---------------------------
Rule "Oracle JRE 7 Update 51 (64-bit) or higher is required for Polybase" failed.

This computer does not have the Oracle Java SE Runtime Environment Version 7 Update 51 (64-bit) or higher installed. The Oracle Java SE Runtime Environment is software provided by a third party. Microsoft grants you no rights for such third-party software. You are responsible for and must separately locate, read and accept applicable third-party license terms. To continue, download the Oracle SE Java Runtime Environment from https://go.microsoft.com/fwlink/?LinkId=526030.
---------------------------
OK
---------------------------

You need to head over to oracle.com and install version 7 Update 51 or higher; currently 9.0.1 is the highest, so it seems legit to install that one.

Java install

Once you have downloaded the correct product (in my case I chose the Windows Offline installer), run the Java install and return to your SQL Server setup for a re-run.

Wait, what? Same message! “Requires JRE 7 Update 51 or higher”. I just installed the latest JRE version, did a restart, and Java is up and running.

So, this is the moment you ask yourself: do I really, really want the PolyBase feature that badly? The answer is yes! To start the troubleshooting, I decided to try some backward compatibility. The oldest version available from the site without using my Oracle client registration is 8.151, and guess what… this did the trick!

So stay away from the newest version 9 for as long as possible.

The next post will cover the setup and configuration of PolyBase.