
Automatically building a Microsoft BI machine using PowerShell – preparation: install files using disk (post #2)

This post is #2 in the series on automatically building a Microsoft BI machine using PowerShell – see the start of the series.

The first step in our preparation is making the install files available. I see two options for this: using a VHD / disk, or using the Azure File Service. In this post we will walk through how to make the install files available using a VHD / disk in Azure.

The way this works is that you create a new disk to store the installer files on. After you have created the virtual machine that you would like to build automatically, you attach that disk so the installer files are available to the machine and thus to the automated installer script.

To create a new disk, log in to the Azure portal (either the production or the preview portal) and navigate to Virtual Machines. Select a machine you have created. This could be the one you will be setting up or any other VM; the disk you will create is re-usable across machines. Once you have a machine selected, click ‘Attach’ and select ‘Attach empty disk’ to create a new empty disk:

Enter a file name for the new disk and set the size in GB you expect to need. There is no need to change the host cache settings here. When done, click the button at the bottom right:
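Since this whole series is about automation, you could also script this step instead of clicking through the portal. Below is a minimal sketch using the classic Azure Service Management PowerShell cmdlets; the cloud service name, VM name, disk size and label are assumptions you would replace with your own values.

```powershell
# Attach a new, empty data disk to an existing VM (classic Azure PowerShell cmdlets).
# ServiceName, Name, size and label are placeholders for your own environment.
Add-AzureAccount
Get-AzureVM -ServiceName "MyCloudService" -Name "MyBiVm" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 120 -DiskLabel "InstallFiles" -LUN 0 |
    Update-AzureVM
```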

After the disk has been created, log in to the machine you attached the disk to using Remote Desktop. You can then download the install files and save them to the disk you have just attached. In the VM, start ‘Disk Management’ (for example by right-clicking the Windows button and selecting it from the list). You will see a notification to initialize the disk. Accept the defaults and click OK:

Once the initialization is done, we need to create a partition on the disk. In Disk Management, right-click the new disk and choose ‘New Simple Volume’. Follow the steps of the wizard, taking note of the drive letter that gets assigned. Also make sure to set a volume label and wait for the format to finish.
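If you prefer to script the initialization and formatting instead of using the Disk Management GUI, a sketch using the Storage cmdlets that ship with Windows Server 2012 R2 could look like this (the drive letter and volume label are assumptions):

```powershell
# Initialize, partition and format the newly attached raw disk in one pipeline.
# Drive letter F: and the volume label are examples only.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle MBR -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'InstallFiles' -Confirm:$false
```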

Once the formatting is done, you can download and store your install files on the disk. I created a folder in the root of the disk named ‘Resources’ with a sub-folder per software item required, and saved the install files in these folders. The scripts we will create will point to these install files; I also store the scripts on the same install disk.

When you are done downloading the files (and, later, creating the scripts) you can detach the disk from the VM and re-attach it to another machine. In the portal, select the VM that currently has the install disk attached, choose ‘Detach disk’ and pick the disk to detach. You can then re-attach the disk to another VM.
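This step, too, can be scripted. Here is a rough sketch with the classic cmdlets, assuming the disk sits at LUN 0 and was registered under the name 'InstallFiles' (check the actual name with Get-AzureDisk):

```powershell
# Detach the install disk from the current VM...
Get-AzureVM -ServiceName "MyCloudService" -Name "MyBiVm" |
    Remove-AzureDataDisk -LUN 0 |
    Update-AzureVM

# ...and re-attach the existing, already populated disk to another VM.
Get-AzureVM -ServiceName "MyCloudService" -Name "MyOtherVm" |
    Add-AzureDataDisk -Import -DiskName "InstallFiles" -LUN 0 |
    Update-AzureVM
```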

This is an OK way to work with the install files. In the next post we will explore an alternative way using Azure File Service.


Automatically building a Microsoft BI machine using PowerShell – Start of Series

I used to spend quite some time on building and re-building Microsoft BI demo machines. As you can imagine this manual process takes a lot of time and effort. Therefore (and also for my own education on PowerShell) I decided to look into automating the whole process. I will explain this in this series of posts.

The goal

In the end, we want a virtual machine that is configured as follows: Windows Server 2012 R2 with the Active Directory Domain Controller role. Additionally, SQL Server 2014 and SharePoint 2013 are installed and configured. Finally, the BI tools such as Power Pivot and Power View are configured.

Ok, but how do we build such a machine?

Here are the steps to take. I always do them in this order, partly because there are some dependencies and partly because it stops me from going insane.

  0. Install Windows (doh). I will skip this step (therefore it is number 0), since I use Azure and a VM in Azure comes with Windows Server pre-installed. I happen to use Windows Server 2012 R2, by the way.
  1. Disable Internet Explorer Enhanced Security Configuration. Although it is a great idea (see http://technet.microsoft.com/en-us/library/dd883248(v=WS.10).aspx for more info on this) it is hard to give a good demo on a machine with this thing on, so the first step is disabling it (see the sketch after this list).
  2. Set up Active Directory; AD is required for the PowerPivot service.
  3. After AD has been set up we need to promote the server to Domain Controller.
  4. After promotion we configure a very unrestrictive password policy; remember, this is just a demo machine!
  5. Virus protection is important, even for a demo machine; therefore set up System Center Endpoint Protection.
  6. Install SQL Server 2014.
  7. Install SharePoint.
  8. Install the PowerPivot Service.
  9. Configure the PowerPivot Service.
  10. Configure the last parts of the PowerPivot Service.
  11. Configure Master Data Services.
  12. Configure Data Quality Services.
  13. Configure other SharePoint Service Applications.
  14. Activate SharePoint site features.
  15. Add favorites in Internet Explorer pointing to the MDS and SharePoint sites.
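To give a small taste of what is coming, here is a sketch of what step 1 (disabling IE Enhanced Security Configuration) looks like in PowerShell. It uses the registry keys commonly associated with the IE ESC components; treat it as an illustration rather than the final script from this series.

```powershell
# Disable IE Enhanced Security Configuration for administrators and users.
# The GUIDs below are the commonly used IE ESC component keys.
$adminKey = 'HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}'
$userKey  = 'HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}'
Set-ItemProperty -Path $adminKey -Name 'IsInstalled' -Value 0
Set-ItemProperty -Path $userKey  -Name 'IsInstalled' -Value 0
# Restart Explorer so the change is picked up.
Stop-Process -Name explorer -ErrorAction SilentlyContinue
```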

In this blog series I will share my PowerShell code to accomplish this. Please note that I am not a developer, so things can probably be done a lot smarter :)

The next step is preparation: the install files.

Azure SQL Database almost on par with the on-premises version

Hi blog readers :)

I started this blog post with the title: “Azure SQL Database almost on par with the on-premises version”.

And looking at it from a T-SQL perspective, it is.
Our latest version of Azure SQL Database (let’s call it SQL Azure for now) is about 95% compatible with SQL Server 2014.
So not a bad title!

But there is more to it…

There are two perspectives to take into account here:

  1. Microsoft is not only working on T-SQL compatibility, but is working on all fronts to enhance functionality.
    So actually… the engine powering SQL Azure is already beyond SQL 2014 and more towards SQL 2016.
  2. SQL Azure is a PAAS offering in Azure.
    So… you don’t only get the database, but also maintenance services around it, such as:
    1. Automatic replication of your data three times (and three more times if geo-redundant)
    2. Point-in-time restore up to 14 days
    3. Patching and security updates

Thinking of SQL Azure as being on par with SQL 2014 is kind of old-school thinking.
Thinking about SQL Azure should really be thinking about using a relational database as a service.
And those services will be enhanced over time.

So no more thinking about versions, but thinking about functions.
This changes the development paradigm big time!
You will have to develop in a much purer way,
keeping your application architecture clean and crisp.

For example: no more business logic in database triggers, but logic in web services, with the calls to the database kept as close to plain ANSI SQL as possible.
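As an illustration (not production code), a plain, parameterized call fired from the application layer could look like the sketch below; the connection string, table and columns are invented for the example.

```powershell
# A plain, parameterized SQL call from the application layer - no trigger logic involved.
# Server, database, credentials, table and columns are placeholders.
$connectionString = 'Server=tcp:yourserver.database.windows.net,1433;Database=DemoDb;User ID=demouser;Password=YourPassword;Encrypt=True'
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = $connection.CreateCommand()
$command.CommandText = 'INSERT INTO dbo.Orders (CustomerId, OrderDate) VALUES (@CustomerId, @OrderDate)'
[void]$command.Parameters.AddWithValue('@CustomerId', 42)
[void]$command.Parameters.AddWithValue('@OrderDate', (Get-Date))
$connection.Open()
[void]$command.ExecuteNonQuery()
$connection.Close()
```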
Code examples can be found at: https://msdn.microsoft.com/en-us/library/azure/ee621787.aspx
And to keep up to speed on the latest developments in SQL Azure, keep checking the blog: http://azure.microsoft.com/blog/

Happy programming!

Platform as a Service (PAAS) moving at rocket speed!

As an Enterprise Architect in an organization, life has always been dynamic, to say the least! It is your responsibility to keep up with the latest developments in ICT, both in technology and in architecture. In the old days of on-premises only, that was a big challenge. But with the Cloud as an integral part of your information systems it has become even more complex.

But still… the Cloud mostly meant moving VMs to Amazon or Microsoft. So architecturally not that complex. Identity & access of course, but that’s about it. Then came Platform as a Service (PAAS). That was something completely different! Not moving VMs to the Cloud, but moving complete technical workloads to the Cloud: an ESB in the Cloud, Media Services, federated identity, Storage, etc.

This does impact your architecture!

A blazing 78 new PAAS services were introduced within Azure in 2014. So it’s moving rocket fast! And to be fair: it’s not only Microsoft, other Cloud vendors are also moving into the PAAS area with new services.

What is the impact for you as Enterprise Architect?

In your normal day-to-day work you make choices based on software you can purchase and implement in your own data center. But now you should at least ask yourself, for every choice you have to make: do I want to do this myself, or shall I take this as a service from one of the Cloud vendors?

An example: your organization wants to use Cloud services from multiple Cloud vendors, but you want a single sign-on experience for your users. You can buy a federated identity server, research how to connect to each Cloud vendor and then build the connections yourself. But you can also use federation through Windows Azure Active Directory from Microsoft, which comes with over 2600 Cloud applications already pre-integrated.

Second example: you have a new web application that you need to deploy. Again, you can buy a few servers, install IIS, SQL Server and the application, and schedule things like backup, patch management, storage, etc. But you can also take a web role to host the web application and an Azure SQL Database to host your data, and let Microsoft worry about backups, the three replicas for DR, patching the servers, etc.

So my message to all you Enterprise Architects out there: examine the PAAS offerings from the Cloud vendors carefully before making expensive buy decisions. My recommendations to check out:

Azure Service Bus, Azure Machine Learning, WAAS, BizTalk Services and Azure SQL Database. In my next blog post I will dig deeper into Azure SQL Database.

In-memory technology in SQL Server

Everybody has noticed the increase in technology using in-memory techniques. SAP is going all-in on HANA, and Oracle just started last year with its in-memory database. At Microsoft we started in 2012 with in-memory analytics and added in-memory OLTP in April 2014. The buzz is high, with big marketing events, lots of whitepapers and broad press coverage.

But what about real-life practice?

When I visit my customers (top-50 companies in The Netherlands) I rarely see in-memory databases being used. So I always ask why they don’t make use of them. This resulted in the following reasons:

  1. I didn’t know I could run my databases in memory.
    The marketing engine could be hitting the wrong people. Lots of database administrators are not up to speed.
  2. We don’t do it because it must be very difficult.
    True – if you use SAP, it’s common knowledge that implementing SAP HANA is not very easy, and you have to rewrite some of your programs. False – if you use Microsoft SQL Server. To start using in-memory you can switch it on for certain tables (or parts of tables) and it will work without any change to the application (see the sketch after this list).
  3. The power of our servers is high enough. We don’t need the power.
    This is of course a compliment that our SQL Servers run so smoothly (-:
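As promised in reason 2 above, here is a sketch of what ‘switching it on for certain tables’ amounts to in SQL Server 2014, run from PowerShell via Invoke-Sqlcmd; the database name, file path and table are invented for the example.

```powershell
# Add a memory-optimized filegroup to an existing database and create one
# memory-optimized table. Names and paths are examples only.
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'DemoDb' -Query @"
ALTER DATABASE DemoDb ADD FILEGROUP DemoDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE DemoDb ADD FILE (NAME = 'DemoDb_mod', FILENAME = 'C:\Data\DemoDb_mod')
    TO FILEGROUP DemoDb_mod;

CREATE TABLE dbo.ShoppingCart
(
    CartId      INT IDENTITY(1,1) NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
"@
```

Existing disk-based tables can then be migrated table by table, which is exactly why you can start small and measure the gains.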

But still I think that by using in-memory technology you can achieve the following:

  • Prevent a hardware refresh. If servers run out of performance, moving to in-memory increases speed again by 5x – 10x. Thus the servers can remain the same.
  • Run more VMs on a host. By using in-memory technology fewer cores are needed because of the more economical processing in memory. Thus more cores are left for new VMs on the same host.
  • Increase processing speed to reduce wait time for your users.

So my advice: Start experimenting with the technology and look for those business cases.

My ask: if you have experience with a practical implementation, please reply with your real-life experience!

Maiden blog post

Hi all,

Today I joined my colleague Jeroen on this blog site about data.
Where Jeroen is the expert in BI, Big Data & data warehousing, I tend to focus on applications, databases, development & integration. And of course with a main focus on the Cloud with Azure.

I will try to keep you posted on all new developments, with my personal view on them combined with my 25+ years of experience and day-to-day work at my customers.
So stay tuned for my new posts!

Greets, Harry

Master Data Services 2014 Add-in for Excel published

Just a bit over a week ago a new version of the Master Data Services Add-in for Excel was released. It is the 2014 release, which will also work with SQL Server 2012. The main benefit is up to a 4x performance improvement without any server configuration changes. If you need extra performance, you can gain another 2x by enabling Dynamic Content Compression in IIS on your MDS server.
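If you want to script that IIS change on the MDS server, a minimal sketch (assuming you have the WebAdministration module available and permission to install features) could be:

```powershell
# Install the dynamic compression feature and switch it on server-wide.
Install-WindowsFeature Web-Dyn-Compression

Import-Module WebAdministration
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.webServer/urlCompression' `
    -Name 'doDynamicCompression' -Value $true
```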

Here is the download page: http://www.microsoft.com/en-us/download/details.aspx?id=42298.

First Look: Azure Stream Analytics

The First Look series focuses on new products, recent announcements, previews, or things I have not yet had the time to explore in depth, and serves as an introduction to the subject. First Look posts are fairly short and high level.

Today in First Look: Azure Stream Analytics. This service is currently in preview and provides low-latency, highly available and scalable complex event processing in the cloud. For complex event processing SQL Server has StreamInsight; with Azure Stream Analytics we now provide a CEP engine in the cloud.

Azure Stream Analytics is built to read data from a source, let’s say a device sensor, and collect, process and take action on it in the blink of an eye. A simple example would be to read temperatures from a sensor, aggregate them to calculate an average temperature per 5 seconds and store that data in SQL Server. A somewhat more complex example would be to take the output of a video camera that reads license plates, run each license plate through a list of stolen cars and immediately send a message to a police car nearby.
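To make the temperature example a bit more concrete, the query below shows the kind of statement you would hand to Stream Analytics: a 5-second tumbling-window average. The input, output and column names are made up for illustration, and the query is simply kept in a PowerShell variable so it could later be supplied to a job definition.

```powershell
# The Stream Analytics query for the temperature example. Names are placeholders.
$temperatureQuery = @"
SELECT
    DeviceId,
    AVG(Temperature) AS AvgTemperature
INTO SqlOutput
FROM SensorInput TIMESTAMP BY EventTime
GROUP BY DeviceId, TumblingWindow(second, 5)
"@
```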

Because this solution is cloud based it is easy to get started. You can be up and running in literally minutes.

More info is available on http://azure.microsoft.com/en-us/documentation/articles/stream-analytics-introduction/

More to come!

First Look: Azure Data Factory

This is the first post of my new First Look series. This series focuses on new products, recent announcements, previews, or things I have not yet had the time to explore in depth, and serves as an introduction to the subject.
First Look posts are fairly short and high level.

Today in First Look: Azure Data Factory. This service was only recently announced and is available to all Azure customers in preview mode. To get hold of it, make sure you open the Azure preview portal. In your classic Azure portal click on your email address in the top right and choose ‘Switch to new portal’, or go directly to https://portal.azure.com.

So what is Azure Data Factory? I may be downplaying it a bit, but essentially Data Factory gives you ETL in the cloud. It connects to both on-premises and cloud data stores and enables you to read data from those stores, do transformations and publish data to stores, while at the same time providing rich analytics on how the data flow is doing. The paradigm here is a factory floor: pieces of data enter the factory floor as raw materials, undergo some treatment (transformations) and go out the door at the other end of the floor as finished product. The canvas of Data Factory closely resembles this floor and shows an assembly line for data. Here is a very simple example, which retrieves data in hourly batches from a table in Blob Storage and stores it in a SQL Azure table:

 

More info is available on http://azure.microsoft.com/en-us/documentation/articles/data-factory-introduction/.

More to come!

 

Power BI pro tip: using Access Online for data entry

With powerful self-service BI tools such as Power BI comes the need for business user data entry; data may not exist in source systems, may need to be enhanced / enriched before going into the report, or the business user just wants to change the way the data is organized. In those cases (which occur more often than not) we need to give the business user an easy way to do data entry while keeping it robust: i.e. not a tool the user could easily make mistakes in and hurt the reporting process. You could use Excel, but you would have to secure it so no mistakes can be made. SharePoint lists are also a good option if you have fewer than 5,000 data rows (that’s the hard limit in SharePoint Online). If you need to store a lot of data and need a robust solution, Access Services (Access Online) is a great tool for the job, and the best part is that it works perfectly with Power BI.

Perhaps the biggest change in Access 2013 is that it now stores its data in SQL Server databases rather than Access files. In this post I will show you how to build a sample application concerning reports on KPIs for production plants around the world. The data is entered by the business user using a web form generated by Access, and the dashboard is created using Power BI. So here we go.

The first step is to get the data. For that I created a simple Access 2013 application that I published on my SharePoint Online site. The Access application consists of three tables – KPIs, Periods and Plants – plus of course the actual facts: the KPI Values. On top of this sits a very basic data entry screen that enables the user to enter new actuals and targets for a KPI for a period for a given plant:

I entered some test data and saved the app. Imagine your business user just entering their data in here.

The next step is to get the data out of the SQL database that Access Services stores it in and build a report / dashboard on top of it. For this, you will need to go to the Info pane of the File menu in Access. Look for the ‘Manage’ button next to Connections:

If you click it you get a big flyout presenting you with a lot of options. You will need to select the following:

- From My location or From Any location. I chose From Any.

- Enable Read-Only connections.

See this screenshot:

Now, click on ‘View Read-Only Connection Information’ and leave it open for now. You will need it later.

The next step is to start Excel, go to Power Query, and select From Database → SQL Server (and not Access, since data is stored in SQL Server by default in Access 2013).

Copy and paste the server and database names from the connection information screen in Access and choose OK. In the next screen enter your credentials and password (again copy/paste from the connection information screen in Access). After a while you can select the tables you are interested in and load the data into PowerPivot. I loaded my Plants, Periods and Values tables (I skipped KPIs since it only contains the KPI label):

The next step is to create relationships between the tables in PowerPivot, hide some columns and add a KPI definition. I ended up with this model:

Now, with Power View I created the following basic report (I did not give myself time to work on the layout, this is just quick and dirty):

 

This concludes this Power BI Pro Tip!
