
Microsoft’s Language Understanding Service now speaks Dutch

Over the weekend, Microsoft’s Language Understanding Service (LUIS), which is part of Cognitive Services and the Cortana Intelligence Suite, was updated to support the Dutch (Nederlands) language! Great news for anyone who needs to understand Dutch in their applications. Dutch is the 12th language on the list 🙂

Implementing incremental data load using Azure Data Factory

Azure Data Factory is a fully managed data processing solution offered in Azure. It connects to many sources, both in the cloud and on-premises. One of the basic tasks it can do is copy data from one source to another – for example, from a table in Azure Table Storage to an Azure SQL Database table. To get the best performance and avoid unwanted duplicates in the target table, we need to load the data incrementally (deltas only). We can also build mechanisms to avoid unwanted duplicates when a data pipeline is restarted.

In this post I will explain how to cover both scenarios using a pipeline that takes data from Azure Table Storage, copies it into Azure SQL and finally brings a subset of the columns over to another Azure SQL table. The result looks like this:

The components are as follows:

  • MyAzureTable: the source table in Azure Table Storage
  • CopyFromAzureTableToSQL: the pipeline copying data over into the first SQL table
  • Orders: the first SQL Azure database table
  • CopyFromAzureSQLOrdersToAzureSQLOrders2: the pipeline copying data from the first SQL table to the second – leaving behind certain columns
  • Orders2: the second and last SQL Azure database table

Setting up the basics is relatively easy. The devil is in the details, however.

  1. The linked services

Every data pipeline in Azure Data Factory begins with setting up linked services. In this case we need two: one to Azure Table Storage and one to SQL Azure. The definition of the linked service to Azure Table Storage is as follows:
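A minimal sketch of such a linked service definition, in Data Factory (v1) JSON; the storage account name and key are placeholders:

```json
{
  "name": "StorageLinkedService",
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
    }
  }
}
```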

The SQL Azure linked service definition looks like this:
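Along the same lines, a sketch of the SQL Azure linked service (server, database and credentials are placeholders):

```json
{
  "name": "AzureSqlLinkedService",
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": "Server=tcp:<server>.database.windows.net,1433;Database=<database>;User ID=<user>@<server>;Password=<password>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30"
    }
  }
}
```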

Note the name property – we will need to refer to it later.

  2. The datasets

Datasets define tables or queries that return data that we will process in the pipeline. The first dataset we need to define is the source dataset (called MyAzureTable). The definition is as follows:
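A sketch of that dataset definition, showing the properties discussed below (linkedServiceName, tableName, external and availability); the table name is a placeholder:

```json
{
  "name": "MyAzureTable",
  "properties": {
    "type": "AzureTable",
    "linkedServiceName": "StorageLinkedService",
    "typeProperties": {
      "tableName": "<your Azure Table name>"
    },
    "external": true,
    "availability": {
      "frequency": "Hour",
      "interval": 1
    }
  }
}
```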

Note that, again, this item has a name; we will use it in the pipeline later. You will need to specify the name of your Azure Table in the "tableName" property. The "linkedServiceName" property is set to the name of the linked service we defined earlier, so Azure Data Factory knows where to find the table. The "availability" property specifies the slices Azure Data Factory uses to process the data: ADF waits for the specified period to pass before processing it. The settings above specify hourly slices, which means that data will be processed every hour. We will later set up the pipeline in such a way that ADF processes only the data that was added or changed in that hour, not all data available (which is the default behavior). The minimum slice size is currently 15 minutes. Also note that the dataset is specified as being external ("external": true). This means that ADF will not try to coordinate tasks for this table, as it assumes the data is written by something outside ADF (your application, for example) and will be ready for pickup when the slice period has passed.

The target dataset in SQL Azure follows the same definition:
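A sketch of the Orders dataset; apart from SalesAmount, OrderTimestamp and ColumnForADuseOnly, which are mentioned in this post, the columns in the structure are assumptions (ColumnForADuseOnly is the slice identifier column that ADF writes to, hence the Byte[] type here):

```json
{
  "name": "Orders",
  "properties": {
    "type": "AzureSqlTable",
    "linkedServiceName": "AzureSqlLinkedService",
    "structure": [
      { "name": "OrderId", "type": "String" },
      { "name": "SalesAmount", "type": "Decimal" },
      { "name": "OrderTimestamp", "type": "Datetime" },
      { "name": "ColumnForADuseOnly", "type": "Byte[]" }
    ],
    "typeProperties": {
      "tableName": "Orders"
    },
    "availability": {
      "frequency": "Hour",
      "interval": 1
    }
  }
}
```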

It is important to note that we defined the structure explicitly. It is not required for the first pipeline to work, but it is for the second, which will use this same table as its source. Also note the presence of the column ‘ColumnForADuseOnly’ in the table. This column is later used by ADF to make sure data that has already been processed is not appended to the target table again. Of course, the SQL table itself will need to have (at least) the same columns, with matching data types:
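For example, the table could be created with something like this; column names and types other than those mentioned above are assumptions, and the slice identifier column is given binary(32) here, as the ADF documentation calls for:

```sql
CREATE TABLE [dbo].[Orders] (
    [OrderId]            NVARCHAR(50)   NOT NULL,
    [SalesAmount]        DECIMAL(18, 2) NULL,
    [OrderTimestamp]     DATETIME       NOT NULL,
    [ColumnForADuseOnly] BINARY(32)     NULL   -- written by ADF (sliceIdentifierColumnName), not by your application
);
```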

  3. The first pipeline (from Azure Table to SQL)

The first pipeline takes the order data in the Azure table and copies it into the Orders table in SQL Azure. It does that incrementally and with repeatability, which means that a) each slice will only process a specific subset of the data and b) if a slice is restarted, the same data will not be copied over twice. This results in fast processing without duplication in the target table: data is copied over once, regardless of the number of restarts. Note that by default ADF copies all data over to the target, so you would end up with as many rows in the table as there are orders in the Azure Table multiplied by the number of slices that ran (each slice bringing over the full Azure table). The definition is as follows:
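A sketch of the pipeline definition; the start and end dates are placeholders, and the source query uses the ADF $$Text.Format syntax to inject SliceStart and SliceEnd:

```json
{
  "name": "CopyFromAzureTableToSQL",
  "properties": {
    "description": "Incrementally copy orders from Azure Table Storage to Azure SQL",
    "activities": [
      {
        "name": "CopyOrders",
        "type": "Copy",
        "inputs": [ { "name": "MyAzureTable" } ],
        "outputs": [ { "name": "Orders" } ],
        "typeProperties": {
          "source": {
            "type": "AzureTableSource",
            "azureTableSourceQuery": "$$Text.Format('OrderTimestamp ge datetime\\'{0:yyyy-MM-ddTHH:mm:ssZ}\\' and OrderTimestamp lt datetime\\'{1:yyyy-MM-ddTHH:mm:ssZ}\\'', SliceStart, SliceEnd)"
          },
          "sink": {
            "type": "SqlSink",
            "sliceIdentifierColumnName": "ColumnForADuseOnly"
          }
        },
        "scheduler": {
          "frequency": "Hour",
          "interval": 1
        },
        "policy": {
          "concurrency": 10
        }
      }
    ],
    "start": "2017-03-20T09:00:00",
    "end": "2017-12-31T00:00:00"
  }
}
```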

Note that the pipeline consists of a single activity, a Copy activity. I could have specified more activities in the same pipeline; I have not done so for simplicity. The Copy activity takes the Azure Table (MyAzureTable) as input and outputs into the SQL Azure table “Orders”. The source query is very important, as this is what selects just the data we want: we use the column ‘OrderTimestamp’ and select only the orders from MyAzureTable where the OrderTimestamp is greater than or equal to the start time of the slice and less than the end time of the slice. A sample query against the Azure Table executed in this way looks like this:

OrderTimestamp ge datetime'2017-03-20T13:00:00Z' and OrderTimestamp lt datetime'2017-03-20T15:00:00Z'

Also, look at the “sliceIdentifierColumnName” property on the target (sink): this column exists in the target SQL Azure table and is used by ADF to keep track of which data has already been copied over, so that if the slice is restarted the same data is not copied over twice.

This pipeline will run each hour (“scheduler” property), starting at 09:00:00 local time (specified by the “start” property), and can run 10 slices in parallel (specified by the “concurrency” property).

  4. The second pipeline (from SQL to SQL)

The second pipeline is there to demonstrate mapping specific columns to others, as well as to show how to do an incremental load from SQL Azure to another target. Note that I use the same linked service, so this exercise is not really useful in itself – the same effect could be achieved by creating a view. The definition is as follows:
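A sketch of the second pipeline; it assumes an Orders2 dataset defined along the same lines as Orders, and uses the ADF TabularTranslator for the column mapping:

```json
{
  "name": "CopyFromAzureSQLOrdersToAzureSQLOrders2",
  "properties": {
    "description": "Copy a subset of columns from Orders to Orders2",
    "activities": [
      {
        "name": "CopyOrdersSubset",
        "type": "Copy",
        "inputs": [ { "name": "Orders" } ],
        "outputs": [ { "name": "Orders2" } ],
        "typeProperties": {
          "source": {
            "type": "SqlSource",
            "sqlReaderQuery": "$$Text.Format('SELECT SalesAmount, OrderTimestamp FROM Orders WHERE OrderTimestamp >= \\'{0:yyyy-MM-dd HH:mm:ss}\\' AND OrderTimestamp < \\'{1:yyyy-MM-dd HH:mm:ss}\\'', WindowStart, WindowEnd)"
          },
          "sink": {
            "type": "SqlSink"
          },
          "translator": {
            "type": "TabularTranslator",
            "columnMappings": "SalesAmount: SalesAmount, OrderTimestamp: OrderTimestamp"
          }
        },
        "scheduler": {
          "frequency": "Hour",
          "interval": 1
        },
        "policy": {
          "concurrency": 10
        }
      }
    ],
    "start": "2017-03-20T09:00:00",
    "end": "2017-12-31T00:00:00"
  }
}
```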

Note that we specify a “sqlReaderQuery” this time, which selects the right subset of data for the slice. We use WindowStart and WindowEnd here instead of the SliceStart and SliceEnd used earlier. At this point it does not matter, as ADF requires both to be the same: WindowStart and WindowEnd refer to the start and end of the activity window, while SliceStart and SliceEnd refer to the start and end of the data slice. Using the “translator” property we specify which columns to map – note that we copy over only SalesAmount and OrderTimestamp.

There you have it – a fully incremental, repeatable data pipeline in Azure Data Factory, thanks to setting up a smart source query and using the “sliceIdentifierColumnName” property. The full source code is available on Github. More info on how this works is available in the official documentation.

Questions? Remarks? Let me know!

SQL Server news from Connect starts to resonate

Hi all,

A month ago, at the Connect event in New York, Microsoft made several announcements about SQL Server vNext and the Azure Data Services.
It’s good to see the traction it’s getting!

I work with a lot of partners on implementing Microsoft technology, and the partners are very keen on getting the latest info on new capabilities.
They also enroll in new online and classroom courses on the new features. So let’s take a quick look:

  • Azure Data Lake Store and Analytics went live in November. This is a no-limits data lake with on-demand analytics that instantly scales to your needs, with petabyte-size files, trillions of objects and parallel processing!
  • R Server for Azure HDInsight: the largest R-compatible parallel analytics and machine learning library, with terabyte-scale machine learning!
  • The SQL Server on Linux public preview has been released. Check it out on a Linux box!
  • Selected Enterprise Edition capabilities, such as In-Memory OLTP and PolyBase, are moving to SQL Server Standard.

And last but not least: connectivity to all top programming languages!

 

Passing filters to Power BI reports using the URL / query string

It was only recently that I discovered this: you can pass filters to Power BI reports by deep linking to them and adding a filter to the URL (also called the query string). I am not even sure this is an official feature, since it only seems to work for ‘equals’ and does not work on dashboards in Power BI.

First of all, we need to understand deep linking in Power BI. Most artifacts in Power BI (for example, reports and dashboards) have their own URL. This means we can open them directly when we know the URL. The URL has the following format:

https://[tenant].powerbi.com/groups/[workspace]/[artifact type]/[artifact id]/[optional part to be opened]

Thus, the URL may look like this:

https://app.powerbi.com/groups/me/reports/dc121d4b-9aad-4494-b1de-8037f53d8355/ReportSection3

This would open page ReportSection3 of the report with id dc121d4b-9aad-4494-b1de-8037f53d8355 in my personal workspace on the ‘app’ tenant (app.powerbi.com).

If you know anything about web development, you know that you can pass things to the application through the URL. This is normally done by adding a question mark to the end of the URL and specifying whatever you want to pass. Multiple items can be added by separating them with ampersands, like this:

https://myurl.com/?firstname=Jeroen&Lastname=ter%20Heerdt

(notice the %20, which is a URL encoded space).

Combining these two things, you can pass parameters to Power BI reports if you have the report URL (simply open the report to get that). Once you have it, add the following:

?filter=[tablename]/[columnname] eq '[filter value]'

So, if I wanted to filter the activity column in my workitem table so it only shows items where the activity is blogging, I would add the following:

?filter=workitem/activity eq 'blogging'

(eq is short for equals here).

This, however, needs to be URL-encoded. You can easily find tools online to do that for you, or, if you know a little bit about what you are doing, you can do it yourself. The addition above becomes:

?filter=workitem%252Factivity%20eq%20%27blogging%27

(/ becomes %252F, space becomes %20 and ‘ becomes %27)
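If you want to script this, here is a small PowerShell sketch that applies exactly the replacements described above; the report URL, table, column and value are just the examples from this post:

```powershell
# Build a deep link to a Power BI report page with an 'eq' filter appended.
$reportUrl = "https://app.powerbi.com/groups/me/reports/dc121d4b-9aad-4494-b1de-8037f53d8355/ReportSection3"

$filterClause = "workitem/activity eq 'blogging'"

# Encode the clause as described above: / -> %252F, space -> %20, ' -> %27
$encoded = $filterClause -replace "/", "%252F" -replace " ", "%20" -replace "'", "%27"

# Append it to the report URL as the filter query string parameter
$reportUrl + "?filter=" + $encoded
```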

This would open page ReportSection3 of the report with id dc121d4b-9aad-4494-b1de-8037f53d8355 in my personal workspace, with a filter on the workitem table’s activity column set to ‘blogging’:

https://app.powerbi.com/groups/me/reports/dc121d4b-9aad-4494-b1de-8037f53d8355/ReportSection3?filter=workitem%252Factivity%20eq%20%27blogging%27

By the way, you would probably only want to use this in very specific scenarios. It is much better to look at Power BI Embedded to integrate Power BI into your applications. Note that Power BI Embedded is targeted at organizations providing software to others (hosted or not); it is not meant for internal applications.

First look: Esri ArcGIS Maps in Power BI

Esri is a leader in the GIS industry and ArcGIS is a very popular product for building great maps. Now you can use ArcGIS maps in Power BI (in preview). See the official information here: https://powerbi.microsoft.com/en-us/blog/announcing-arcgis-maps-for-power-bi-by-esri-preview/. This is really cool; I know a lot of you have been asking for this for a long time!

You will find the option to enable this preview in PowerBI.com, not in the Power BI Desktop. Log in to PowerBI and open the settings. You can find the ArcGIS preview there and enable it by simply selecting the checkbox:

With that enabled, create a report with some geographical information (or edit an existing one). I used the Google Analytics data that keeps track of my blog. Google Analytics data can be loaded into Power BI simply by using the content pack. In edit mode in the report you will find the ArcGIS component in the Visualizations list:

Click it and create your map as you would with the normal map. I noticed it needs some time to build the map (probably due to the preview) but once it is done it is fully interactive with the other items on your report as you would expect:

You can change a lot of the ArcGIS options, such as switching out maps, changing symbol styles, adding reference layers, etc.

I love this – the awesome power of ArcGIS and Power BI combined! I cannot wait to see what you will create with this.

Adding sequence numbers using R in Azure ML

When going through data preparation sometimes sequence numbers need to be added. If you are like me, you probably spent some time looking for a component in Azure ML to do this. I never found it.

Turns out it is really easy to do this in R and as a result also very easy to do in Azure ML.

In your experiment, add an Execute R Script component and connect it to the data flow.

Edit the script and add a column to the dataset that equals a sequence from 1 to the number of rows.

See my code example:
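A minimal sketch of the Execute R Script body, using the Azure ML (classic) maml.* port functions; the column is called time and is added on the third line, matching the note below:

```r
# Read the dataset coming in on the first input port of the Execute R Script module
dataset1 <- maml.mapInputPort(1)
dataset1$time <- seq.int(nrow(dataset1))   # add a sequence number column: 1, 2, ..., number of rows
# Return the extended dataset to the experiment
maml.mapOutputPort("dataset1")
```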

 

On the third line the column is added and defined as a sequence number. The resulting dataset indeed has an extra column (called time) that looks like this:

The small histogram at the top and the details on the right confirm that it has only unique values and starts at 1; our sequence column has been added!

Power BI Refresh scenarios against data storage solutions explained

A recurring theme with customers and partners is Power BI data refresh. More specifically, there is some confusion about which refresh scenarios require a Pro license and which can be done with the Free version of Power BI. I made the diagram below to help explain this. It shows the refresh scenarios against data storage solutions, such as SQL Azure, SQL Server in a virtual machine in Azure, or SQL Server on-premises. I used these as examples; there are other options as well, but I think the overall picture carries over to other data storage solutions.

The diagram shows the refresh scenarios that can be done with a Power BI Free account as orange lines and the refresh scenarios that need Power BI Pro as green lines. As shown in the diagram, if you want to refresh against on-premises sources or a database running in a VM in Azure, you will need a gateway and Power BI Pro. This applies not only to the creator of the report and refresh schedule, but also to every consumer. If you use PaaS solutions for data storage in Azure, such as SQL Azure, it becomes a bit more nuanced and really depends on the type of refresh required. If you need a refresh cycle of more than once a day (either up to 8 times per 24 hours or live), you will need Power BI Pro. If you just want to refresh against a source such as SQL Azure and once a day is enough, you can do that with Power BI Free. Again, the license requirement carries over from author to viewer: if the author of the report requires Pro, then the viewers also need Pro.

Power BI Refresh scenarios against data storage solutions

Hope this helps. If you have any questions or feedback, please comment below!

Azure Machine Learning pricing explained

Many customers have asked me questions about Azure Machine Learning (Microsoft’s fully managed machine learning and data mining solution) and, more specifically, about its pricing. In this post I will try to explain how the pricing works and which components you need to be aware of.

Azure Machine Learning is offered in two tiers: Free and Standard. The Free tier is, obviously, free. It is, however, as you would expect, limited compared to Standard. The differences are mostly in performance (multiple nodes for execution in Standard vs. just one node in Free) and storage (10 GB in Free, unlimited in Standard). There is no SLA for the Free tier, you cannot set up a production web API to automate experiments, and the staging web API is throttled.

For the standard tier, the following items need to be taken into consideration:

  • Seat: Azure ML has a monthly fee per seat, which translates to a user (mostly your data scientist) using the Azure ML Studio web interface to develop and tune experiments. This price is per seat per month per subscription.
  • Studio usage: an hourly price for running experiments. You pay this according to the number of hours your experiments run and thus claim compute resources.
  • API usage: Azure ML allows you to bring an experiment online through RESTful web services. This means you can automate scoring and training and have applications, websites, etc. use the experiment without human intervention. With this you could do automated credit scoring, recommendations or churn prediction directly from your app or website. To make this work you will need to create a web service (also called an API) in Azure ML. Azure ML charges per hour of compute used by a production API, so you pay for every hour the web service is online and usable. On top of that, you pay per 1,000 transactions. Transactions in this case are interactions with the API, such as one recommendation, one churn prediction or one credit score. A back-of-the-envelope cost sketch follows after this list.
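To make the structure concrete, here is a small sketch in R. All volumes and rates below are made-up placeholders; take the real numbers from the official pricing page:

```r
# Hypothetical monthly usage for the Standard tier
seats        <- 2        # data scientists with access to ML Studio
studio_hours <- 100      # hours of experiment compute per month
api_hours    <- 730      # hours the production web service (API) is online
transactions <- 500000   # scoring calls against the API per month

# Placeholder rates -- NOT real prices, check the official pricing page
seat_rate        <- 10.00   # per seat per month
studio_hour_rate <- 1.00    # per experiment compute hour
api_hour_rate    <- 2.00    # per production API compute hour
rate_per_1000_tx <- 0.50    # per 1,000 transactions

monthly_cost <- seats * seat_rate +
  studio_hours * studio_hour_rate +
  api_hours * api_hour_rate +
  (transactions / 1000) * rate_per_1000_tx

monthly_cost
```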

 

Hope this clarifies things a bit. Please refer to the official Azure Machine Learning pricing page for the full details and the current prices.

 

Automatically building a Microsoft BI machine using PowerShell – Final post (post #14)

This post is #14 in the series to automatically build a Microsoft BI machine using PowerShell – see the start of series.

In this series:

Start of series – introduction and layout of subjects
Post #2 – Preparation: install files using Azure disk
Post #3 – Preparation: install files using Azure File Service
Post #4 – Preparation: logging infrastructure
Post #5 – Master script
Post #6 – Disabling Internet Explorer Enhanced Security Configuration
Post #7 – Active Directory setup
Post #8 – Configuring Password policy
Post #9 – Installing System Center Endpoint Protection
Post #10 – Installing SQL Server
Post #11 – Installing SharePoint Server
Post #12 – Installing PowerPivot for SharePoint
Post #13 – Configuring PowerPivot for SharePoint

Wow. This has been a long and wild ride. But you and I made it together. We now have the full recipe to automatically configure a Microsoft BI demo machine with PowerShell. Of course there is more to be done, such as configuring other Service Accounts and deploying demo content; this script however saves me a lot of time every time I need to stand up a new demo machine.

You can download the script on Github. Please note (again) that the code is provided as-is and you should use it at your own risk. It is probably still buggy but should give you a good starting point to adapt it to your needs.

I enjoyed the ride with you; I hope I made your life a bit easier over the course of this series. Enjoy!

Automatically building a Microsoft BI machine using PowerShell – Configuring PowerPivot (post #13)

This post is #13 in the series to automatically build a Microsoft BI machine using PowerShell – see the start of series.

In this series so far:

Start of series – introduction and layout of subjects
Post #2 – Preparation: install files using Azure disk
Post #3 – Preparation: install files using Azure File Service
Post #4 – Preparation: logging infrastructure
Post #5 – Master script
Post #6 – Disabling Internet Explorer Enhanced Security Configuration
Post #7 – Active Directory setup
Post #8 – Configuring Password policy
Post #9 – Installing System Center Endpoint Protection
Post #10 – Installing SQL Server
Post #11 – Installing SharePoint Server
Post #12 – Installing PowerPivot for SharePoint

Now that PowerPivot for SharePoint has been installed, we need to configure it. I split the configuration into two parts since we need a reboot in between and used MSDN for reference: http://msdn.microsoft.com/en-us/library/hh230903.aspx.

Step A: configuring SharePoint and deploying PowerPivot features

In Post #11 we talked about installing SharePoint, but the actual SharePoint provisioning was not done then. We will do it here in one go, together with installing the PowerPivot features.
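As a rough sketch, the SharePoint provisioning part of Step A boils down to something like this (the full script is on GitHub; database names, passphrase and accounts below are placeholders, and the PowerPivot solution is assumed to have been installed in post #12):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$passphrase = ConvertTo-SecureString "<farm passphrase>" -AsPlainText -Force
$farmCred   = Get-Credential "CONTOSO\sp_farm"

# Provision the farm -- the SharePoint configuration that post #11 deliberately skipped
New-SPConfigurationDatabase -DatabaseName "SP_Config" -DatabaseServer "localhost" `
    -AdministrationContentDatabaseName "SP_AdminContent" -Passphrase $passphrase -FarmCredentials $farmCred
Install-SPHelpCollection -All
Initialize-SPResourceSecurity
Install-SPService
Install-SPFeature -AllExistingFeatures       # deploys the installed features, including the PowerPivot ones
New-SPCentralAdministration -Port 2013 -WindowsAuthProvider NTLM
Install-SPApplicationContent
```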

 

Step B: updating farm credentials and starting service applications

After the PowerPivot features have been deployed, we need to configure the service applications to get PowerPivot to work.
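A sketch of what Step B involves; account names are placeholders, and the PowerPivot service application itself is created with the PowerPivot for SharePoint cmdlets, which I will not reproduce here:

```powershell
# Register the service account as a managed account so service applications can run under it
$svcCred = Get-Credential "CONTOSO\sp_services"
New-SPManagedAccount -Credential $svcCred

# Start the service instances that PowerPivot for SharePoint depends on (Excel Services, Secure Store)
Get-SPServiceInstance |
    Where-Object { $_.TypeName -like "Excel*" -or $_.TypeName -like "Secure Store*" } |
    Start-SPServiceInstance
```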

 

Now we have seen all the steps required to build a Microsoft BI demo machine! The next post will serve as a wrap up and present a download for the full script.
