Creating a large data warehouse in Azure

Microsoft Azure provides two options for hosting your SQL Server-based data warehouse: Microsoft Azure SQL Database and SQL Server in an Azure Virtual Machine.  Which one is appropriate based on the size of the data warehouse?  And what hardware features can you choose from for an Azure VM hosting a large data warehouse?

Let’s look at each option.

Microsoft Azure SQL Database is a Platform-as-a-Service (PaaS), or more specifically a relational database-as-a-service.  It is built on standardized hardware and software that is owned, hosted, and maintained by Microsoft.  You can develop directly on the service using built-in features and functionality.  When using Azure SQL Database, you pay as you go, with options to scale up or out for greater power.  Azure SQL Database has a max database size of 500GB (see Azure SQL Database Service Tiers and Performance Levels).  Also, not all SQL Server 2014 Transact-SQL statements are supported in Azure SQL Database (see Azure SQL Database Transact-SQL Reference), and it does not support SQL Server instance-level features (such as SQL Server Agent, Analysis Services, Integration Services, or Reporting Services).  On the other hand, some features land in Azure SQL Database before on-prem SQL Server, such as Row-Level Security and Dynamic Data Masking.  Note that Azure SQL Database has built-in fault-tolerance infrastructure capabilities that enable high availability, as well as business continuity options (see Azure SQL Database Business Continuity).  So this option is only appropriate if you have a relatively small data warehouse that does not require full SQL Server support.
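To make the decision rule above concrete, here is a minimal Python sketch based only on the two constraints just mentioned (the 500GB size cap and the missing instance-level features); the function name and thresholds-as-code are mine, not an official sizing tool:

```python
# Rough decision rule for picking a hosting option, based only on the
# constraints described above: Azure SQL Database caps out at 500GB
# and lacks instance-level features like SQL Server Agent or SSIS.
def choose_hosting(db_size_gb, needs_instance_features=False):
    """Suggest a hosting option for a SQL Server-based data warehouse."""
    if db_size_gb <= 500 and not needs_instance_features:
        return "Azure SQL Database"
    return "SQL Server in an Azure VM"

print(choose_hosting(200))   # small DW with no instance-level feature needs
print(choose_hosting(2000))  # a 2TB DW exceeds the 500GB cap
```

Of course, real sizing decisions also weigh cost, T-SQL surface area, and HA/DR requirements, but the size cap alone rules Azure SQL Database out for most large data warehouses.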

The other option, which will almost always be the correct choice for a large data warehouse, is to create an Azure VM that has SQL Server 2014 installed, resulting in Infrastructure-as-a-Service (IaaS).  This allows you to run SQL Server inside a virtual machine in the cloud.  Similar to Azure SQL Database, it is built on standardized hardware that is owned, hosted, and maintained by Microsoft.  When using SQL Server in a VM, you can either bring your own SQL Server license to Azure (by uploading a Windows Server VHD to Azure) or use one of the preconfigured SQL Server images in the Azure portal.  If going with a preconfigured image, you should choose “SQL Server 2014 Enterprise Optimized for Data Warehousing on Windows Server 2012 R2” (see VM Images Optimized for Transactional and DW workloads in Azure VM Gallery), which will attach 15 data disks (12 disks for a 12TB data pool and 3 disks for a 3TB log pool).  You will also need to choose the virtual machine size for your VM.  Note you can set up high availability and disaster recovery solutions (see High Availability and Disaster Recovery for SQL Server in Azure Virtual Machines).  The resulting VM will be very similar to an on-prem SQL Server solution, except for the various hardware configurations you have to choose from for your virtual machine size.

If you look at the Azure Virtual Machines Pricing for SQL Server, here are the options you would want to consider for your virtual machine size:

  • A7, 8 cores, 56GB memory, 605GB max disk size, 16 persistent 1TB data disks max, $3/hr SQL Enterprise
  • A9, 16 cores, 112GB memory, 382GB max disk size, 40Gbit/s InfiniBand with RDMA, 16 persistent 1TB data disks max, $6/hr SQL Enterprise
  • D14, 16 cores, 112GB memory, 800GB max disk size (SSD), 32 persistent 1TB data disks max, CPU 60% faster than A-series, $6/hr SQL Enterprise
  • G5, 32 cores, 448GB memory, 6596GB max disk size (SSD), 64 persistent 1TB data disks max, Xeon E5 v3 CPU, $12/hr SQL Enterprise.  See Largest VM in the Cloud
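For a quick comparison, here is a back-of-the-envelope Python sketch computing the monthly SQL Enterprise cost (assuming a 744-hour, 31-day month of 24x7 uptime) and the maximum attached storage, using the per-hour rates and 1TB disk limits quoted above:

```python
# Monthly cost and max attached storage for the VM sizes listed above,
# using the quoted $/hr SQL Enterprise rates and 1TB data disk limits.
HOURS_PER_MONTH = 744  # 31-day month, VM running around the clock

vm_sizes = {
    # name: (usd_per_hour, max_1tb_data_disks)
    "A7":  (3, 16),
    "A9":  (6, 16),
    "D14": (6, 32),
    "G5":  (12, 64),
}

monthly_cost = {name: rate * HOURS_PER_MONTH for name, (rate, _) in vm_sizes.items()}
max_storage_tb = {name: disks for name, (_, disks) in vm_sizes.items()}

for name in vm_sizes:
    print(f"{name}: ~${monthly_cost[name]:,}/month, up to {max_storage_tb[name]}TB attached")
```

So a G5 running SQL Enterprise full-time lands in the neighborhood of $9,000 a month before storage and bandwidth, which is worth knowing before you size up.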

The “Persistent 1TB Data Disks” refers to externally attached drives, but be aware that network bandwidth can be a bottleneck.  From Virtual Machine and Cloud Service Sizes for Azure you can see that for a G5 you can attach 64 1TB data disks, yielding a potential total volume size of up to 64TB.  See How to Attach a Data Disk to a Windows Virtual Machine to attach a drive and give it a new drive letter, or attach drives to use as a storage space, which is a way to make multiple disks appear as one (see Windows Server 2012 Storage Virtualization Explained).

If this still is not enough disk space for your data warehouse, you will need to use an on-prem SQL Server solution or an MPP solution such as Microsoft’s Analytics Platform System.

There is a script you can download (see Deploy a SQL Server Data Warehouse in Windows Azure Virtual Machines) that allows a user to create a *Data Warehousing* optimized VM on Azure running SQL Server 2012 or SQL Server 2014 and will also attach empty disks to the VM to be used for SQL Server data and log files.


For info on how to connect to an Azure VM using SSMS, click here.  To connect SSMS to an Azure SQL database, click here.

If you are concerned about data throughput from on-prem to Azure, check out the ExpressRoute service.  ExpressRoute enables dedicated, private, high-throughput network connectivity between Azure datacenters and your on-premises IT environments.  Using ExpressRoute, you can connect your existing datacenters to Azure without flowing any traffic over the public Internet, with guaranteed network quality-of-service and the ability to use Azure as a natural extension of an existing private network or datacenter.

More info:

Introduction to Automating Deployment of SQL Server in Azure IaaS Virtual Machines

Understanding Azure SQL Database and SQL Server in Azure VMs

Inside Microsoft Azure SQL Database

Posted in Azure, Data warehouse, SQLServerPedia Syndication | 4 Comments

Operational Data Store (ODS) Defined

I see a lot of confusion on what exactly an Operational Data Store (ODS) is.  While it can mean different things to different people, I’ll explain what I see as the most common definition.  First let me mention that an ODS is not a data warehouse or data mart.  A data warehouse is where you store data from multiple data sources to be used for historical and trend analysis reporting.  It acts as a central repository for many subject areas and contains the “single version of the truth”.  A data mart serves the same purpose but comprises only one subject area.  Think of a data warehouse as containing multiple data marts.  See my other blogs that discuss this in more detail: Data Warehouse vs Data Mart, Building an Effective Data Warehouse Architecture, and The Modern Data Warehouse.

The purpose of an ODS is to integrate corporate data from different heterogeneous data sources in order to facilitate operational reporting in real-time or near real-time.  Usually data in the ODS will be structured similarly to the source systems, although during integration the data can be cleaned and denormalized, and business rules can be applied to ensure data integrity.  This integration happens at the lowest granular level and occurs quite frequently throughout the day.  Normally an ODS will not be optimized for historical and trend analysis, as this is left to the data warehouse.  And an ODS is frequently used as a data source for the data warehouse.

To summarize the differences between an ODS and a data warehouse:

  • An ODS is targeted for the lowest granular queries whereas a data warehouse is usually used for complex queries against summary-level or aggregated data
  • An ODS is meant for operational reporting and supports current or near real-time reporting requirements whereas a data warehouse is meant for historical and trend analysis reporting usually on a large volume of data
  • An ODS contains only a short window of data, while a data warehouse contains the entire history of data
  • An ODS provides information for operational and tactical decisions on current or near real-time data while a data warehouse delivers feedback for strategic decisions leading to overall system improvements
  • In an ODS the frequency of data load could be every few minutes or hourly whereas in a data warehouse the frequency of data loads could be daily, weekly, monthly or quarterly
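As a toy illustration of how an ODS load differs from a data warehouse load, the Python sketch below applies a watermark-based incremental merge at the lowest granular level, the kind of micro-batch that might run every few minutes (the row schema and column names are hypothetical):

```python
# Toy watermark-based incremental load for an ODS: each cycle pulls only
# source rows changed since the last load, at row-level granularity.
def incremental_load(source_rows, ods_rows, watermark):
    """Merge rows modified after `watermark` into the ODS; return (ods, new watermark).

    Each row is a dict with a unique 'id' and a 'modified' timestamp
    (a hypothetical schema, for illustration only).
    """
    changed = [r for r in source_rows if r["modified"] > watermark]
    by_id = {r["id"]: r for r in ods_rows}
    for r in changed:
        by_id[r["id"]] = r  # insert new rows, overwrite updated ones
    new_watermark = max([watermark] + [r["modified"] for r in changed])
    return list(by_id.values()), new_watermark

# One load cycle: order 2 was updated at t=5, order 3 is new at t=6.
source = [
    {"id": 1, "modified": 1, "status": "shipped"},
    {"id": 2, "modified": 5, "status": "cancelled"},
    {"id": 3, "modified": 6, "status": "new"},
]
ods = [
    {"id": 1, "modified": 1, "status": "shipped"},
    {"id": 2, "modified": 2, "status": "open"},
]
ods, watermark = incremental_load(source, ods, watermark=2)
```

A data warehouse load, by contrast, would typically run nightly and reshape this same data into summary or dimensional form rather than mirroring the source rows.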

Major reasons for implementing an ODS include:

  • The limited reporting in the source systems
  • The desire to use a better and more powerful reporting tool than what the source systems offer
  • Only a few people have security access to the source systems, and you want to allow others to generate reports
  • A company owns many retail stores, each of which tracks orders in its own database, and you want to consolidate the databases to get real-time inventory levels throughout the day
  • You need to gather data from various source systems to get a true picture of a customer, so you have the latest info if the customer calls customer service: customer info, support history, call logs, and order info.  Or medical data to get a true picture of a patient so the doctor has the latest info throughout the day: outpatient department records, hospitalization records, diagnostic records, and pharmaceutical purchase records

More info:

Comparing Data Warehouse Design Methodologies for Microsoft SQL Server

Operational Data Stores (ODS)

The Operational Data Store

Defining the Purpose of the Operational Data Store

Operational data store – Implementation and best practices

Posted in Data warehouse, SQLServerPedia Syndication | 5 Comments

Presentation slides for “Building a Big Data Solution”

Thanks to everyone who attended my session “Building a Big Data Solution” (Building an Effective Data Warehouse Architecture with Hadoop, Cloud and MPP) for Pragmatic Works today.  The abstract for my session is below and the recording will be available here tomorrow.  I hope you enjoyed it!

Here is the PowerPoint presentation: Building a Big Data Solution

Building a Big Data Solution

As a follow-on to the presentation “Building an Effective Data Warehouse Architecture”, this presentation will explain exactly what Big Data is and its benefits, including use cases.  We will discuss how Hadoop, the cloud, and massively parallel processing (MPP) are changing the way data warehouses are being built.  We will talk about hybrid architectures that combine on-premise data with data in the cloud, as well as relational data and non-relational (unstructured) data.  We will look at the benefits of MPP over SMP and how to integrate data from Internet of Things (IoT) devices.  You will learn what a modern data warehouse should look like and how the roles of a Data Lake and Hadoop fit in.  In the end you will have guidance on the best solution for your data warehouse going forward.

Posted in Data warehouse, Presentation, Session, SQLServerPedia Syndication | Leave a comment

Power BI Made Simple

In an effort to understand Power BI and all the products it encompasses, I have made this slide deck to hopefully make things easy for you: Power BI Made Simple.

It is a presentation that covers all the products under the Power BI umbrella.  I give an overview of each product and how they relate to one another.  Microsoft has a lot of pieces to the puzzle, and I try to show how they all fit together!

Posted in Power BI, Power Map, Power Pivot, Power Query, Power View/Project Crescent, SQLServerPedia Syndication | 2 Comments

What is Advanced Analytics?

Advanced Analytics, or Business Analytics, refers to future-oriented analysis that can be used to help drive changes and improvements in business practices.  It is made up of four phases:

Descriptive Analytics: Generally referred to as “business intelligence”, this is the phase where a lot of digital information is captured.  This big data is then condensed into smaller, more useful nuggets of information, and an understanding of the correlations between those nuggets is built to find out why something is happening (“Diagnostic Analytics”).  In short, you are providing insight into what has happened to uncover trends and patterns.  An example is Netflix using historical sales and customer data to improve their recommendation engine.

Predictive analytics: Utilizes a variety of statistical, modeling, data mining, and machine learning techniques to study recent and historical data, thereby allowing analysts to make predictions, or forecasts, about the future.  In short, it helps model and forecast what might happen.  For example, taking sales data, social media data, and weather data to forecast the product demand for a certain region and to adjust production.  Or you can use predictive analytics to determine outcomes such as whether a customer will “leave or stay” or “buy or not buy.”

Prescriptive analytics: Goes beyond predicting future outcomes by also suggesting actions to benefit from the predictions and showing the decision maker the implications of each decision option.  Prescriptive analytics not only anticipates what will happen and when it will happen, but also why it will happen.  The output is a decision arrived at using simulation and optimization.  In short, it seeks to determine the best solution or preferred course of action among various choices.  For example, airlines sift through millions of flight itineraries to set an optimal price at any given time based on supply and demand.  Also, prescriptive analytics in healthcare can be used to guide clinician actions by making treatment recommendations based on models that use relevant historical intervention and outcome data.

To summarize all four phases:

Descriptive: “What happened?”, Diagnostic: “Why did it happen?”, Predictive: “What will happen?”, Prescriptive: “What is the best outcome and how can we make it happen?”
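These phases can be illustrated with a deliberately trivial numerical example in Python (the sales figures are made up, and real predictive and prescriptive models would be far more sophisticated):

```python
# Toy illustration of the analytics phases on a made-up monthly sales series.
sales = [100, 110, 120, 130]  # units sold per month (fabricated data)

# Descriptive ("What happened?"): summarize the history.
average = sum(sales) / len(sales)

# Predictive ("What will happen?"): naively extend the recent trend.
trend = sales[-1] - sales[-2]
forecast = sales[-1] + trend

# Prescriptive ("What is the best outcome?"): pick the production plan,
# among a few options, that best matches the forecasted demand.
options = [120, 140, 160]
production = min(options, key=lambda units: abs(units - forecast))
```

The point is the progression: each phase consumes the previous one's output, ending in a recommended action rather than just a number.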


More info:

Defining Advanced Analytics

Predictive, Descriptive, Prescriptive Analytics

Big Data Analytics: Descriptive Vs. Predictive Vs. Prescriptive

Understanding Your Business With Descriptive, Predictive and Prescriptive Analytics

Descriptive, Predictive, and Prescriptive Analytics Explained

Business Analytics: Moving From Descriptive To Predictive Analytics

From insight to action: why prescriptive analytics is the next big step for big data

Prescriptive Analytics

Forecasting and Predictive Analytics

Business Analytics 101

Posted in SQLServerPedia Syndication | Leave a comment

Microsoft product roadmap now public

Ever wonder about Microsoft’s product roadmap?  With Microsoft rapidly releasing products and services, they realized the need to provide better transparency.  Well, wonder no more!  They have released the Cloud Platform Roadmap Site (visit here).

Roadmaps are given for: Microsoft Azure, Intune, Power BI, and Visual Studio Online; server offerings such as Windows Server, System Center, SQL Server and Visual Studio; and converged system appliance offerings such as Cloud Platform System, Analytics Platform System and StorSimple.

It’s a great way to see where Microsoft is focusing some of their development efforts and what technology is currently in development and coming within the next few months.  Check it out!

Here is the official announcement: Increasing Visibility for Our Customers: Announcing the Cloud Platform Roadmap Site.

Posted in Microsoft, SQLServerPedia Syndication | 1 Comment

Power BI Designer and Power BI Dashboard

Long has the question been asked “Which Microsoft tool do I use for dashboards?”.  SSRS, Excel, Power View, Report Builder, and PerformancePoint are all candidates.  But that has all changed, and the future of dashboarding is here: Power BI Designer.

The Power BI Designer is a new companion application for Power BI.  It is a standalone Windows desktop application that can be downloaded from the Power BI site or here.

Note that Power BI Designer is an on-prem tool, with a companion called Power BI Dashboard that is a web version, available via an “Introducing Power BI Dashboards (Preview) – Try it now” link on the front of your Power BI site.  Power BI Dashboard can use files created with Power BI Designer.  Think of Power BI Designer as the tool to create the reports, and Power BI Dashboard as the tool to create the dashboard that contains those reports.  However, Power BI Dashboard can create a limited number of reports on its own against other products (GitHub, Marketo, Microsoft Dynamics CRM, Salesforce, SendGrid, Zendesk), as well as against on-prem SSAS (see Using Power BI to access on-premise data), and finally it can connect to an Excel workbook (where data can be automatically refreshed from OneDrive).

Power BI Designer combines Power Query, Power View (a slimmed-down version), and the Power Pivot Data Model into a seamless experience that allows you to build your reports in an offline fashion and then upload them to your Power BI site via Power BI Dashboard.  When you finish building your report, you save it in a new Power BI Designer file format called PBIX.


Note that Power BI Designer reports can’t be refreshed after being uploaded – they must be refreshed manually in Power BI Designer and uploaded again (for now).  But you can edit the design of the report after it has been uploaded.

The Power BI add-ins (Power Query, Power Pivot, Power View, Power Map) for Excel 2013 are still available, and you can continue to use them to model your data and build reports.  The Power BI Designer will be another option, and it also allows customers with an older version of Office to create reports.

Within the Power BI Designer is a true SSAS tabular data model.  An SSAS tabular instance runs as a child process in local mode.

Users can create personalized dashboards to monitor their most important data.  A dashboard combines on-premises and cloud-born data in a single pane of glass, providing a consolidated view across the organization regardless of where the data lives.

Users can easily explore all their data using intuitive, natural language capabilities and receive answers in the form of charts and graphs. They can also explore data through detailed reports that target specific aspects of their business.  Visuals from these reports can also be pinned to their dashboards for continuous monitoring.  As part of this experience new visualizations have been added including combo charts, filled maps, gauges, tree maps, and funnel charts.

Power BI Dashboard provides “out of the box” connectivity to a number of popular SaaS applications.  In addition to the existing connection with Microsoft Dynamics CRM Online, customers can also connect to their data in Salesforce, Zendesk, Marketo, SendGrid, and GitHub with many more to come in the months ahead.  With an existing subscription to one of these services, customers can login from Power BI.  In addition to establishing a data connection, Power BI provides pre-built dashboards and reports for each of these applications.


There is also a new Power BI connector for SQL Server Analysis Services that allows customers to realize the benefits of a cloud-based BI solution without having to move their data to the cloud.  Customers can now create a secure connection to an “on-premises” SQL Server Analysis Services server from Power BI in the cloud.  When users view and explore dashboards and reports, Power BI will query the on-premise model using the user’s credentials.

There is still a way to go before Power BI Designer is the solution for any and all dashboards you want to create.  It will need to add all the features that are in PerformancePoint, such as KPIs.  And hopefully there will be a version that will allow dashboards to be deployed to on-prem SharePoint.  But this is a great start, and I’m excited to see what is next!

More info:

Power BI Designer Preview

Videos Power BI Designer

New Power BI features available for preview

Getting Started with Power BI Designer

Video Power BI Designer Preview

Unwrapping the Power BI vNext Public Preview

Getting Started with the Power BI Dashboards Public Preview

Posted in Power BI, SQLServerPedia Syndication | 1 Comment

Using Power BI to access on-premise data

The following blog describes how to use certain cloud-based Power BI for Office 365 products (Power View and Q&A) on a Power BI Site that will reference data on-premises.  To do this you will use certain on-premise Power BI products (Power Pivot and Power Query) to gather the data.  Note that you cannot currently upload Power Map reports to a Power BI site (instead use maps in Power View, see Maps in Power View).

This blog applies to on-premise data that resides in SQL Server that is on an Azure VM or an Analytics Platform System (APS), as well as a normal server.  I’ll make note of any differences.

Download and install the Data Management Gateway (DMG) on the server that contains the on-premise data that resides in SQL Server.  If you are using APS, this will have already been done for you on a VM installed with AU3.  This VM will already have the DMG installed which allows a gateway to APS.  Next create a Power BI site if you have not already done so (see Add Power BI sites to your Office 365 site).  On your Power BI site, create a data gateway (see Data Management Gateway Introduction) and note the generated gateway key and then enter that key in the DMG.  This will link your DMG to the Power BI site (a DMG can only be linked to one Power BI site).  Then create a SQL Server data source on your Power BI site that uses the data gateway (see Data Sources via Power BI), enter the connection info to your server, and specify the SQL Server tables to expose. Take note of the created OData data feed URL.

Next you will open an on-premise Excel workbook, go to the Power Query tab, sign in to your Power BI site, and use either “Online Search” or “From Other Sources -> From OData Feed” and choose the table(s) to load (both will create an OData feed connection).  If loading the data into Excel, once loaded select the data and choose “Add to Data Model” on the PowerPivot tab.  Or, load the table directly to a data model.  Then create a Power View report.  Save the Excel workbook and upload it to your Power BI site.  You will now be able to view the Power View report on the Power BI site.  Then use the “Add to Q&A” option on the workbook on the Power BI site, and you will then be able to use Q&A.

Note that you are using data in the Power Pivot model in the workbook you uploaded to Power BI, and not the data directly in SQL Server (so the Power View report is hitting Power Pivot, and if data changes in SQL Server, that won’t be reflected in Power View unless you set up the DMG to refresh the data from SQL Server as I explain below).  To refresh the data in the workbook in Power BI from SQL Server, use the “Schedule Data Refresh” option on the workbook in Power BI and create a daily refresh (there is no option to update more than once a day).  To refresh the data more frequently, you must do it manually by going to the settings tab on the “Schedule Data Refresh” page and clicking the button “refresh report now” (the status of the connection must be OK to see this button).  See Schedule data refresh for workbooks in Power BI for Office 365.

Before you use data refresh, be aware that the OData feed connection info that was in the Excel workbook when it was uploaded will not work for a data refresh.  This is because any data sources that are retrieved via Power Query using an OData feed are not supported for data refresh.  So what you need to do is get a proper connection string by importing data from SQL Server using an SQL Server connection: In on-premise Excel, go to the Power Query tab, click “From Database -> From SQL Server Database”.  Enter the connection info (server name) and then Excel will populate the Navigator with names of all the tables.  Then load one of the tables into the data model.  Now go to the Data tab, click Connections, select the properties of the Power Query connection, and on the Definition tab copy the connection string.  Next go to your Power BI site and configure a Power Query data source (see Scheduled Data Refresh for Power Query) where you will paste that connection string.  After adding the Power Query data source, any Excel workbooks that you want to refresh should use a connection to SQL Server (“From Database -> From SQL Server Database”) and not an OData feed (“Online Search” or “From Other Sources -> From OData Feed”).  Once uploaded to Power BI the workbooks can be refreshed as Power BI knows to discover your gateway and data sources.  You won’t need to create another Power Query data source as long as each workbook uses the same raw connection info (server name).
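To see why the distinction matters, here is a small Python sketch contrasting the two connection styles a workbook can use (the server, database, provider string, and feed URL are hypothetical placeholders, not values from this post):

```python
# The two ways a workbook can reach the same on-prem table (all names
# here are hypothetical).  Only workbooks built on a raw SQL Server
# connection can be scheduled for refresh on the Power BI site;
# OData-feed connections cannot.
server, database, table = "SQLDW01", "SalesDW", "FactOrders"

# Raw SQL Server connection string (refreshable, once a matching
# Power Query data source with this connection info exists on the site):
sql_connection = (
    f"Provider=SQLNCLI11;Data Source={server};"
    f"Initial Catalog={database};Integrated Security=SSPI"
)

# OData feed URL exposed by the Data Management Gateway (searchable,
# but workbooks built on it cannot be refreshed once uploaded):
odata_feed = f"https://gateway.example.com/feeds/{database}/{table}"
```

The takeaway: before uploading a workbook you intend to refresh, check its connection in Excel's Data tab and make sure it is the raw SQL Server style, not the OData style.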

Now in beta for Power BI is a way to access data on on-premise SQL Server from the Power BI site in real-time.  This is accomplished by having Power BI connect to an SSAS tabular model that is on-premise and uses DirectQuery mode.  That SSAS tabular model is connected to SQL Server, so a Power View report or Q&A on the Power BI site would be hitting SQL Server in real-time.  Power BI is able to “find” the on-premise SSAS via an installation piece similar to the DMG, called the Power BI Analysis Services Connector.  Note that installing the Power BI Analysis Services Connector and the Data Management Gateway on the same computer is not supported.  Once the Power BI Analysis Services Connector is installed and connected to an SSAS server, the tabular model databases for the server instance you configured will appear when users select the “Get Data” option in Power BI Designer Preview (this is the only place the tabular models can be used).  The connector currently supports Analysis Services 2012 and 2014.  In the future, Analysis Services Multidimensional, SQL Server, and other data sources will be added.

Some notes:

The DMG can be used to publish raw data sources (connecting to data sources directly using the server name) or to expose them as OData feeds (using OData to find the data source saves users from having to know the server name, but with the disadvantage that workbooks that use an OData feed cannot be refreshed on a Power BI site).  You will want to create both a raw data source connection and an OData feed to the same data source so users can use both features: searching for data without needing connection info, as well as the ability to refresh workbooks on the Power BI site.

When using the OData feed URL in Excel, you will be asked for the method you want to use to access it.  Select the Organizational account and enter your Office 365 credentials.  Also make sure you are signed into Excel with the same account (see Power BI – Data Management Gateway).

If you are using an Azure VM for your SQL Server, when creating a data source in Power BI, setting credentials does not work (you get the message “Failed to verify gateway ‘InstanceName1’ status.  Unable to connect to the remote server”).  As a workaround, log in to the VM itself and configure the data source.  See Using Power BI Data Management Gateway on Non-Domain Azure VM.

If you are using an Azure VM for your SQL Server you will want to change the computer name of the Azure VM (see Using Power BI DMG on Non-Domain Azure VMs – August 2014 Update) and add endpoints in Azure Management Portal and firewall rules exceptions on your Azure VM (see Using Odata feed on Azure client whose Data Source is configured against DMG on Azure VM and How to Set Up Endpoints to a Virtual Machine).

More info:

How to set Power BI Schedule Data refresh with simple excel file Office 365

Power BI first impressions

Power BI Data Management Gateway 1.2 Changes the Game

Limitations for Power Query OData Feeds in Power BI

When is a Data Management Gateway Needed in Power BI?

Video Deep Dive on the Data Management Gateway in Power BI for connectivity to on premise and hybrid scenarios

Tip! How To Configure Power BI Preview Analysis Services

Posted in PDW/APS, Power BI, SQLServerPedia Syndication | 1 Comment

Analytics Platform System (APS) AU3 released

The Analytics Platform System (APS), which is a renaming of the Parallel Data Warehouse (PDW), has just released an appliance update (AU3), which is sort of like a service pack, except that it includes many new features.  These appliance updates are made available frequently, about every 3-4 months.  Below is what is new in this release:

T-SQL compatibility improvements to reduce migration friction from SQL Server SMP

  • Implements TRY/CATCH block and THROW/RAISERROR/PRINT statements
  • Implements error diagnostic intrinsic functions ERROR_NUMBER, ERROR_MESSAGE, ERROR_SEVERITY, ERROR_STATE, and ERROR_PROCEDURE
  • Implements global variable @@ERROR
  • Implements the transaction-state function XACT_STATE
  • Implements the system message store sys.messages
  • Implements stored procedures sp_addmessage, sp_altermessage, and sp_dropmessage
  • Implements support for INTERSECT and EXCEPT queries

Integrated Azure Data Management Gateway enables Query from Cloud to On Prem through APS

  • Enables Azure users to access on-premises data sources via APS by exposing data as secure OData feeds.
  • Enables Power BI (including Power Query, Power Map, and Power Q&A) to use on-prem data from PDW and external Hadoop tables.
  • Enables Azure cloud service data mashups to be scaled out via PolyBase by querying Hadoop on-premises.
  • Creates a scalable, enterprise-class Data Management Gateway that scales out as queries access more on-prem data sources.

PolyBase Recursive Directory Traversal and ORCFile support

  • Enables users to retrieve the content of all subfolders by pointing one single external table at the parent folder, removing the burden of creating external tables for each subfolder.
  • Enables all PolyBase scenarios to run against the ORCFile file format

Install, Upgrade, and Servicing Improvements

  • Improves operational uptime in maintenance windows by improving our Active Directory reliability model.
  • Reduces end-to-end install time from 9 hours to less than 8 hours.
  • Improves resiliency by reducing issues related to AD VM corruptions, 2-way forest trusts, and CSV/cluster failures.
  • Improves upgrade and setup stability and reliability.

Replatformed to Windows Server 2012 R2 as the core OS for all appliances nodes

  • Enhances stability with fixes and improvements to core fabric systems such as Active Directory, Hyper-V and Storage Spaces, including:
    • Automatic rebuilding of storage spaces
    • Safer virtualization of Active Directory domain controllers.

Replatformed to SQL Server 2014

  • Improves engineering efficiency by moving APS and SQL Server to a common servicing branch, reducing latency in getting bug fixes and support.

More info:

Relational Data Warehouse + Big Data Analytics: Analytics Platform System (APS) Appliance Update 3

Analytics Platform System Appliance Update 3 Documentation and Client Tools

Posted in PDW/APS, SQLServerPedia Syndication | Leave a comment

The Modern Data Warehouse

The traditional data warehouse has served us well for many years, but new trends are causing it to break in four different ways: data growth, fast query expectations from users, non-relational/unstructured data, and cloud-born data.  How can you prevent this from happening?  Enter the modern data warehouse, which is designed to support these new trends.  It handles relational data as well as data in Hadoop, provides a way to easily interface with all these types of data through one query model, and can handle “big data” while providing very fast queries (via MPP).

So if you are currently running an SMP solution and are running into these new trends, or think you will in the near future, you should learn all you can about the modern data warehouse.  So read on as I try to explain it in more detail:


I like to think of “big data” as “all data”.  It’s not so much the size of the data, but the fact that you want to bring in data from all sources, whether that be traditional relational data from sources such as CRM or ERP, or non-relational data from things like web logs or Twitter data.  Having diverse big data can result in diverse processing, which may require multiple platforms.  To simplify things, IT should manage big data on as few data platforms as possible in order to minimize data movement and avoid data synchronization issues, as well as avoid having lone silos of data.  Scattered data works against the “single version of the truth”, so a goal should be to consolidate all data onto one platform.

There will be exceptions to the one platform approach.  As you expand into multiple types of analytics that have multiple big data structures, you will eventually create many types of data workloads.  Because there is no single platform that runs all workloads equally well, most data warehouse and analytic systems are trending toward a multi-platform environment so that diverse data can find the best home based on storage, processing, and budget.

A result of the workload-centric approach is a move away from the single platform monolith of the enterprise data warehouse (EDW) toward a physically distributed data warehouse environment (DWE), also called the modern data warehouse.  A modern data warehouse consists of multiple data platform types, ranging from the traditional relational and multidimensional warehouse (and its satellite systems for data marts and ODSs) to new platforms such as data warehouse appliances, columnar RDBMSs, NoSQL databases, MapReduce tools, and HDFS.  So users’ portfolios of tools for BI/DW and related disciplines are fast-growing.  While a multi-platform approach adds more complexity to the data warehouse environment, BI/DW professionals have always managed complex technology stacks successfully, and end-users love the high performance and solid information outcomes they get from workload-tuned platforms.

A unified data warehouse architecture helps IT cope with the growing complexity of their multi-platform environments.  Some organizations are simplifying the data warehouse environment by acquiring vendor-built data platforms that have a unifying architecture that is easily deployed and has expandable appliance configurations, such as the Microsoft modern data warehouse appliance (see Parallel Data Warehouse (PDW) benefits made simple).

An integrated RDBMS/HDFS combo is an emerging architecture for the modern data warehouse.  The trick is integrating RDBMS and HDFS so they work together optimally.  For example, an emerging best practice among data warehouse professionals with Hadoop experience is to manage non-relational data in HDFS (i.e. creating a “data lake“) but process it and move the results (via queries, ETL, or PolyBase) to RDBMSs (elsewhere in the data warehouse architecture) that are more conducive to SQL-based analytics, providing ease-of-access and speed (since Hadoop is batch oriented and not real-time).  So HDFS serves as a massive data staging area for the data warehouse (see Hadoop and Data Warehouses).  This exposes the big benefit of Hadoop: it allows you to store and explore raw data where the actionable insights are not yet discovered, and it is not practical to do up-front data modeling.

This requires new interfaces and interoperability between HDFS and RDBMSs, and it requires integration at the semantic layer, in which all data—even multi-structured, file-based data in Hadoop—looks relational.  The secret sauce that unifies the RDBMS/HDFS architecture is a single query model which enables distributed queries based on standard SQL to simultaneously access data in the warehouse, HDFS, and elsewhere without preprocessing data to remodel or relocate it.  Ideally it should also push down processing of the queries to the remote HDFS clusters.  This is exactly the problem that newer technologies such as PolyBase solve.
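To make the single query model concrete, here is a minimal T-SQL sketch of how PolyBase exposes HDFS data as an external table that can be joined to warehouse tables in one standard SQL query.  The cluster address, file paths, and table/column names are all hypothetical, and the exact DDL syntax varies by version (APS vs. SQL Server):

```sql
-- Hypothetical Hadoop cluster location, for illustration only.
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://10.0.0.1:8020');

-- Describe how the raw HDFS files are delimited.
CREATE EXTERNAL FILE FORMAT TextDelimited
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = '|'));

-- Overlay a relational schema on files in HDFS; no data is moved.
CREATE EXTERNAL TABLE dbo.WebLogs (
    LogDate  DATETIME2,
    UserId   INT,
    Url      VARCHAR(500)
)
WITH (LOCATION = '/weblogs/',
      DATA_SOURCE = HadoopCluster,
      FILE_FORMAT = TextDelimited);

-- One distributed query spans the warehouse and HDFS; PolyBase can
-- push processing down to the Hadoop cluster where appropriate.
SELECT c.CustomerName, COUNT(*) AS PageViews
FROM dbo.Customers AS c      -- relational table in the warehouse
JOIN dbo.WebLogs   AS w      -- external table backed by HDFS
  ON w.UserId = c.CustomerId
GROUP BY c.CustomerName;
```

The point is that the multi-structured data in Hadoop “looks relational” to the query author: no preprocessing, remodeling, or relocation of the data is required before querying it alongside warehouse tables.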


A data warehouse appliance will give you the pre-integration and optimization of the components that make up the multi-platform data warehouse.  An appliance includes hardware, software, storage, and networking components, pre-integrated and optimized for warehousing and analytics.  Appliances have always been designed and optimized for complex queries against very large data sets; now they must also be optimized for the access and query of diverse types of big data.  As mentioned before, Microsoft has such an appliance (see Parallel Data Warehouse (PDW) benefits made simple) that has the added benefit of being a MPP scale-out technology.  The performance of an MPP appliance allows you to use one appliance for queries, as opposed to creating “work-arounds” to get acceptable performance: multiple copies of data on multiple servers, OLAP servers, aggregate tables, data marts, temp tables, etc.

Clouds are emerging as platforms and architectural components for modern data warehouses.  One way of simplifying the modern data warehouse environment is to outsource some or all of it, typically to a cloud-based DBMS, data warehouse, or analytics platform.  User organizations are adopting a mix of cloud types (both public and private) and freely mixing them with traditional on-premises platforms.  For many, the cloud is an important data management strategy due to its fluid allocation and reapportionment of virtualized system resources, which can immediately enhance the performance and scalability of a data warehouse (see Should you move your data to the cloud?).  However, a cloud can also be an enhancement strategy that uses a hybrid architecture to future-proof data warehouse capabilities.  To pursue this strategy, look for cloud-ready, on-premises data warehouse platforms that can integrate with cloud-based data and analytic functionality to extend data warehouse capabilities incrementally over time.

More info:

THE MODERN DATA WAREHOUSE: What Enterprises Must Have Today and What They’ll Need in the Future

Evolving Data Warehouse Architectures: From EDW to DWE

Modern Data Warehousing

A Modern Data Warehouse Architecture: Part 1 – Add a Data Lake

Hadoop and a Modern Data Architecture

Video: Hadoop: Beyond the Hype

Top Five Differences between Data Lakes and Data Warehouses

Posted in Big Data, Data warehouse, Hadoop, PDW/APS, SQLServerPedia Syndication