Hadoop and Data Warehouses

I see a lot of confusion when it comes to Hadoop and its role in a data warehouse solution.  Hadoop should not be a replacement for a data warehouse, but rather should augment and complement one.  Hadoop and a data warehouse will often work together in a single information supply chain: Hadoop excels at handling raw, unstructured, and complex data with vast programming flexibility, while data warehouses manage structured data, integrating subject areas and providing interactive performance through BI tools.  As an example in manufacturing, let’s look at using Hadoop and a data warehouse on a Parallel Data Warehouse (PDW):

[Figure: manufacturing example combining Hadoop and a PDW-based data warehouse]

There are three main use cases for Hadoop with a data warehouse, with the above picture an example of use case 3:

  1. Archiving data warehouse data to Hadoop (move)
    Hadoop as cold storage/long-term raw data archiving:
    - Avoids having to buy a bigger PDW, SAN, or tape library
  2. Exporting relational data to Hadoop (copy)
    Hadoop as backup/DR, analysis, cloud use:
    - Export conformed dimensions to compare incoming raw data with what is already in PDW
    - Can use the dimensions against older fact tables
    - Sending validated relational data to Hadoop
    - Hadoop data to WASB and have that used by other tools/products (i.e. Cloud ML Studio)
    - Incremental Hadoop load / report
  3. Importing Hadoop data into data warehouse (copy)
    Hadoop as staging area:
    - Great for real-time data, social networks, sensor data, log data, automated data, RFID data (ambient data)
    - Where you can capture the data and only pass the relevant data to PDW
    - Can do processing of the data as it sits in Hadoop (clean it, aggregate it, transform it)
    - Some processing is better done on Hadoop instead of SSIS
    - Way to keep staging data
    - Long-term raw data archiving on cheap storage that is online all the time (instead of tape) – great if need to keep the data for legal reasons
    - Others can do analysis on it and later pull it into data warehouse if find something useful
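As a rough sketch of use case 3 (Hadoop as a staging area), here is what the flow can look like in T-SQL with PolyBase.  The data source, file format, and table names below are hypothetical, and the exact DDL varies by appliance update, so treat this as a sketch rather than a recipe:

```sql
-- Hypothetical names; assumes PolyBase is configured against the Hadoop cluster
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://hadoop-head:8020');

CREATE EXTERNAL FILE FORMAT PipeDelimited
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = '|'));

-- External table over the raw sensor files sitting in HDFS
CREATE EXTERNAL TABLE ext.SensorReadings (
    SensorId    INT,
    ReadingTime DATETIME2,
    Reading     FLOAT)
WITH (LOCATION = '/raw/sensors/',
      DATA_SOURCE = HadoopCluster,
      FILE_FORMAT = PipeDelimited);

-- Pass only the relevant, cleaned rows into the warehouse
INSERT INTO dbo.FactSensorReadings
SELECT SensorId, ReadingTime, Reading
FROM ext.SensorReadings
WHERE Reading IS NOT NULL;
```

The raw files stay in Hadoop for long-term archiving; only the filtered, relevant rows land in PDW.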

Note there will still be a need for some Hadoop skills: loading data into Hadoop (unless it is all done through PolyBase), maintenance, cleaning up files, managing data, etc.  But people who need access to the data don’t need any Hadoop skills or special connections – they can use all the skills they have today.

Here are some of the reasons why it is not a good idea to have only Hadoop as your data warehouse:

  • Hadoop is slow for reading queries.  HDP 2.0 today will not perform anywhere near PDW for interactive querying.  This is why PolyBase is so important, as it bridges the gap between the two technologies so customers can take advantage of the unique features of Hadoop and still realize the benefits of an EDW.  Truth be told, users won’t want to wait 20+ seconds for a MapReduce job to start up just to execute a Hive query
  • Hadoop is not relational; all the data is in files in HDFS, so there is always a conversion process to get the data into a relational format
  • Hadoop is not a database management system.  It does not have functionality such as update/delete of data, referential integrity, statistics, ACID compliance, data security, and the plethora of tools and facilities needed to govern corporate data assets
  • Restricted SQL support; for example, certain aggregate functions are missing
  • There is no metadata stored in HDFS, so another tool needs to be used to store that, adding complexity and slowing performance
  • Finding expertise in Hadoop is very difficult: the small number of people who understand Hadoop and all its various versions and products pales next to the large number of people who know SQL
  • Super complex, with lots of integration across multiple technologies needed to make it work
  • Many tools/technologies/versions/vendors, no standards
  • Some reporting tools, as well as OLAP tools, don’t work against Hadoop
  • The new Hadoop solutions (Tez, X, Spark, etc.) are still figuring themselves out.  Customers should not take the risk of investing in one of these solutions (like MapReduce) only to have it become obsolete
  • It might not save you much in costs: you still have to pay for hardware, support, licenses, training, and migration
  • If you need to combine relational data with Hadoop, you will need to move that relational data to Hadoop since there is no PolyBase-like technology
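To make the PolyBase “bridge” concrete: once an external table has been defined over Hadoop data (say, a hypothetical ext.SensorReadings), combining it with relational data is just a join – no data movement code required.  A sketch, with hypothetical table and column names:

```sql
-- Join a conformed dimension in PDW with raw Hadoop data via PolyBase
SELECT d.ProductName,
       AVG(s.Reading) AS AvgReading
FROM dbo.DimProduct AS d
JOIN ext.SensorReadings AS s
    ON s.SensorId = d.SensorKey
GROUP BY d.ProductName;
```

Without PolyBase (or a PolyBase-like technology), the dimension table would first have to be exported into Hadoop before any such comparison could run there.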

I also wanted to mention that “unstructured” data is a bit of a misnomer.  Just about all data has at least some structure, so it is better to call it “semi-structured”.  I like to think of data in a text file as semi-structured until someone adds structure to it, say by importing it into a SQL Server table.  Or think of structured data as relational and unstructured data as non-relational.

More info:

Using Hadoop to Augment Data Warehouses with Big Data Capabilities

WEBINAR Replay: Hadoop and the Data Warehouse: When to Use Which

Hadoop and the Data Warehouse: When to Use Which

Video Big Data Use Case #5 – Data Warehouse Augmentation

How Hadoop works with a Data Warehouse

Design Tip #165 The Hyper-Granular Active Archive

No, Hadoop Isn’t Going To Replace Your Data Warehouse

Posted in Data warehouse, Hadoop, PDW, SQLServerPedia Syndication

Presentation Slides for Modern Data Warehousing

Thanks to everyone who attended my session “Modern Data Warehousing” at the PASS SQLSaturday Business Analytics edition in Dallas.  The abstract is below.  Great turnout for the last session of the day!

Here is the PowerPoint presentation: Modern Data Warehousing

Modern Data Warehousing

The traditional data warehouse has served us well for many years, but new trends are causing it to break in four different ways: data growth, fast query expectations from users, non-relational/unstructured data, and cloud-born data.  How can you prevent this from happening?  Enter the modern data warehouse, which is able to handle and excel at these new trends.  It handles all types of data (Hadoop), provides a way to easily interface with all these types of data (PolyBase), and can handle “big data” while providing fast queries.  Is there one appliance that can support this modern data warehouse?  Yes!  It is the Parallel Data Warehouse (PDW) from Microsoft, a Massively Parallel Processing (MPP) appliance that has recently been updated (v2 AU1).  In this session I will dig into the details of the modern data warehouse and PDW.  I will give an overview of the PDW hardware and software architecture, identify what makes PDW different, and demonstrate the increased performance.  In addition, I will discuss how Hadoop, HDInsight, and PolyBase fit into this new modern data warehouse.

Posted in Data warehouse, Presentation, Session, SQLServerPedia Syndication

Modern Data Warehousing Presentation

I will be presenting the session “Modern Data Warehousing” on Saturday at the PASS SQLSaturday Business Analytics edition in Dallas at 4:30pm CST (info).  The abstract for my session is below.  I hope you can make it!

Modern Data Warehousing

The traditional data warehouse has served us well for many years, but new trends are causing it to break in four different ways: data growth, fast query expectations from users, non-relational/unstructured data, and cloud-born data.  How can you prevent this from happening?  Enter the modern data warehouse, which is able to handle and excel at these new trends.  It handles all types of data (Hadoop), provides a way to easily interface with all these types of data (PolyBase), and can handle “big data” while providing fast queries.  Is there one appliance that can support this modern data warehouse?  Yes!  It is the Parallel Data Warehouse (PDW) from Microsoft, a Massively Parallel Processing (MPP) appliance that has recently been updated (v2 AU1).  In this session I will dig into the details of the modern data warehouse and PDW.  I will give an overview of the PDW hardware and software architecture, identify what makes PDW different, and demonstrate the increased performance.  In addition, I will discuss how Hadoop, HDInsight, and PolyBase fit into this new modern data warehouse.

Posted in Data warehouse, Presentation, Session, SQLServerPedia Syndication

What is the Microsoft Analytics Platform System (APS)?

Analytics Platform System (APS) is simply a renaming of the Parallel Data Warehouse (PDW).  It is not really a new product, but rather a name change due to a new feature in Appliance Update 1 (AU1) of PDW.  That new feature is the ability to have a HDInsight region (a Hadoop cluster) inside the appliance.

So APS combines SQL Server and Hadoop into a single offering that Microsoft is touting as providing “big data in a box.”

Think of APS as the “evolution” of Microsoft’s current SQL Server Parallel Data Warehouse product.  Using PolyBase, it now supports the ability to query data using SQL across the traditional data warehouse plus data stored in a Hadoop region, whether in the appliance or in a separate Hadoop cluster.

More info:

The data platform for a new era

Posted in PDW, SQLServerPedia Syndication

Parallel Data Warehouse (PDW) AU1 released

Microsoft’s PDW has seen a big boost in visibility and sales over the past year, and part of the reason is due to frequent upgrades to the hardware and software.  About every six months there is an appliance update, which is sort of like a service pack, except that in addition to new features it usually also includes upgrades in hardware (since the PDW is a hardware and software appliance).  Just released is Appliance Update 1 (AU1) to version 2 of PDW.  Details are below, including improvements to HDInsight, Hadoop, and PolyBase.

First on-premises HDInsight region

  • Enables customers to load, query and analyze structured and unstructured data within a single appliance.
  • T-SQL compatibility over Hadoop via PolyBase.
  • Windows failover clustering and full hardware redundancy.
  • Unified management and monitoring experience across HDInsight and PDW.

PolyBase Enhancements

  • Better PolyBase performance
    • Compute push-down enables a 2x improvement.
    • Ability to create statistics over external tables via fullscan or sampling.
  • PolyBase T-SQL semantics – Close alignment with T-SQL semantics for data types & conversions.
  • RCFile format support.
  • Compressed and uncompressed Hadoop data support.
  • Better manageability and user experience with new catalog views, DMVs, EXPLAIN, and SSDT support.
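For example, the new statistics capability means you can give the query optimizer cardinality information about Hadoop data using familiar T-SQL.  A sketch, with a hypothetical external table and column:

```sql
-- Statistics over an external table, created via fullscan (sampling also works)
CREATE STATISTICS stat_SensorId
ON ext.SensorReadings (SensorId)
WITH FULLSCAN;
```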

Hybrid Cloud Support

  • Import and export to Azure Blob storage – WASB/ASV.
  • Seamless query across WASB and HDInsight region.
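A sketch of what the Azure Blob storage integration can look like.  The account and container names are hypothetical, and the wasb:// addressing shown here may differ slightly by appliance update (earlier releases used the asv:// prefix):

```sql
-- External data source over an Azure Blob storage (WASB) container
CREATE EXTERNAL DATA SOURCE AzureBlob
WITH (TYPE = HADOOP,
      LOCATION = 'wasb://mycontainer@myaccount.blob.core.windows.net');
```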

PDW Performance Improvements

  • Parallel data load enables 2-8x better load performance.
  • Native Type Conversion for Loader improves load time of single load by 75%, with multiple concurrent loads achieving 2x improvements, validated up to 9.4TB/Hr.

Integrated Authentication (PDW)

  • Enables single sign-on for simplified user management and easier authentication.
  • Support for Kerberos and NTLM.
  • Windows authentication supported through SqlClient, ODBC, and OLE-DB.

Transparent Data Encryption (PDW)

  • Protects data at rest by encrypting data pages on disk, transactional logs, and database backups.

Add Capacity (PDW)

  • Enables customers to expand to any supported topology.
  • All operations except data redistribution are online operations, reducing downtime.

Add Region (HDInsight)

  • Enables customers to add HDI region to existing PDW appliance.

Better upgrade experience

  • Upgrades can now be run remotely.
  • Reduces time to upgrade appliances by simplifying and minimizing manual steps.

Posted in PDW, SQLServerPedia Syndication

PASS Summit 2012 session videos online

All PASS Summit 2012 recordings are now available for free to all PASS members (201 of them!).  To view, log on to your myPASS account, and watch the sessions from myRecordings.  While there, check out mine: Building an Effective Data Warehouse Architecture

Posted in PASS, SQLServerPedia Syndication, Videos

Book: Reporting with Microsoft SQL Server 2012

I am happy to say I have published my first book!  It is called Reporting with Microsoft SQL Server 2012 (order).  Much thanks goes to my co-author and friend Bill Anton.  Below is a brief overview, or check out the listing on the Packt Publishing site for a sample chapter and the table of contents.


Reporting with Microsoft SQL Server 2012 will cover all the features of SSRS and Power View and will provide a step-by-step lab activity to help you develop reports very quickly.

Starting with the difference between standard and self-service reporting, this book covers the main features and functionality offered in SQL Server Reporting Services 2012, including a breakdown of the report components, development experience, extensibility, and security. You will also learn to set up and use Power View within Excel and SharePoint and connect to a tabular model as well as a multidimensional model. The book provides real-life reporting scenarios that clarify when standard reporting is called for, in which case SSRS is the best choice, and when self-service reporting is, in which case Power View is the best choice.

This book will have you creating reports in SSRS and Power View in no time!

Posted in Power View/Project Crescent, SQLServerPedia Syndication, SSRS

SQL Server 2014 released April 1st!

SQL Server 2014 has been released to manufacturing (RTM) and will be generally available April 1.  Here is the official announcement.  Here are my blog posts about some of the new features.

More info:

SQL Server 2014 RTM Announced for April 1 Release

SQL Server 2014 releases April 1

Posted in SQL Server 2014, SQLServerPedia Syndication

Parallel Data Warehouse (PDW) benefits made simple

I have heard people say that the Parallel Data Warehouse (PDW) is Microsoft’s best kept secret.  So let me give a 10,000-foot overview of what PDW is and its benefits.  Keep in mind that PDW is intended for data warehouses, not OLTP systems.

As opposed to a symmetric multiprocessor (SMP) system, which is one server where each CPU shares the same memory and disk, PDW is a massively parallel processing (MPP) solution, meaning data is distributed among many independent servers running in parallel in a shared-nothing architecture, where each server operates self-sufficiently and controls its own memory and disk.  A query sent by a user will, behind the scenes, be sent to each server, executed, and the results combined and returned to the user.
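The shared-nothing layout shows up directly in the DDL: large fact tables are hash-distributed across the compute nodes, while small dimensions are replicated to every node so joins can run locally on each server.  A minimal sketch, with hypothetical table names:

```sql
-- Fact table hash-distributed across all compute nodes
CREATE TABLE dbo.FactSales (
    SaleId     BIGINT,
    ProductKey INT,
    Amount     MONEY)
WITH (DISTRIBUTION = HASH(ProductKey),
      CLUSTERED COLUMNSTORE INDEX);

-- Small dimension replicated to every node so joins stay local
CREATE TABLE dbo.DimProduct (
    ProductKey  INT,
    ProductName NVARCHAR(100))
WITH (DISTRIBUTION = REPLICATE);
```

Choosing a good distribution column (one with high cardinality that is commonly joined on) is what lets each node answer its slice of a query without shuffling data.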

With most data warehouses on SMPs, the bottleneck is disk I/O and not the CPU.  With PDW, the appliance is optimized so the CPUs are fed data from the disks as fast as they can accept it, in large part thanks to Direct Attached Storage (DAS), which is much faster for data warehouse applications (see Performance Tuning SQL Server: Hardware Architectures Throughput Analysis).  While SANs can be great for OLTP applications, they are less optimal for data warehouses; in addition, they are costly and their performance is hard to predict.  A CPU can consume around 250MB/sec per core, but a SAN typically cannot feed each core anywhere near that rate.

That is a quick explanation of just one benefit of a PDW.  For more details on this benefit, read What MPP means to SQL Server Parallel Data Warehouse.  Here is a list of the many other benefits provided by a PDW over a SMP solution (where the underlined benefits are the additional benefits not found in a SMP/SQL Server 2014 solution):

  • Query performance: Expect a 10x-100x increase, which is so important because nowadays users expect fast queries.  You can also expect a reasonably linear increase when adding more servers to your PDW.  PDW is not just an appliance for “big data”; it can be very useful for small sets of data that need performance.
  • Data loading performance: 10-40x faster due to parallel loading of data.  Data loading speed is 250GB/hr per compute node (a half rack of 4 compute nodes gives 1TB/hour, with minimal query performance impact)
  • Scalability (data growth): Start with only a quarter rack (2 compute servers, 32 cores, 15TB of uncompressed capacity) and grow as needed, up to 7 racks (56 compute servers, 896 cores, 1.2PB of uncompressed capacity).  Using a conservative 5:1 compression ratio, data capacity ranges from 75TB to 6PB.  And there is no “forklifting” when you upgrade (backing up and restoring from the old server to the new server).  Instead, you add the new servers and the data is automatically redistributed
  • Built-in high availability and failover:  One fault-tolerant cluster across the whole appliance.  Virtualized architecture and no dependency of SAN technologies.  Automatic VM migration on host failure.  All appliance components are fully redundant (disks, networking, etc).
  • PolyBase:  Combine relational with non-relational data (Hadoop) using SQL.  Hides all the complexity of using Hadoop so most business users do not need to know anything about Hadoop.  See PolyBase explained for more details.  PolyBase also has the ability to push down portions of the query processing to the Hadoop cluster and allows you to move data faster between the Hadoop and SQL world because of parallel data transfers
  • Integration with cloud-born data (Windows Azure HDInsight, Windows Azure blob storage).  See What is HDInsight? for more info
  • HDInsight integration into the PDW rack
  • Improved concurrency because of how quickly queries execute
  • Mixed workload support (i.e. no performance issues with queries when a data load is happening)
  • Less DBA maintenance: You don’t need to create indexes besides a clustered columnstore index, don’t need to archive/delete data to save space, and get management simplicity (monitor hardware and software from System Center).  You also don’t need to worry about much of the normal monitoring/maintenance that happens with an SMP system (blocking, logs, query hints, wait states, IO tuning, query optimization/tuning, index reorgs/rebuilds, managing filegroups, shrinking/expanding databases, managing physical servers, patching servers).  DBAs can spend more of their time as architects and not babysitters
  • Limited training needed: If you are already a Microsoft shop, using a PDW is not much different from using a SMP solution
  • Use familiar BI tools: If you are already a Microsoft shop, all your familiar tools (e.g. SSRS, PowerPivot, Excel, Power View) work fine against a PDW.  The only thing you do differently is enter the IP address and port number of the PDW in the connection string.  So you will not have to rewrite and re-implement the many SSRS reports you have created over the years.  Plus, you can expand your report filters because performance is no longer a problem (e.g. increase the number of years).
  • Improved data compression: 3x-15x more than an SMP system, with 5x being a conservative number.  Unique compression thanks to data distribution across compute nodes
  • Ease of deployment in appliance vs build-your-own: You can deploy in hours, not weeks, thanks to PDW being a turnkey solution complete with hardware and software.  It is pre-tested and tuned for your data warehouse
  • Data warehouse consolidation: With all the disk space and performance you get with a PDW, you can make it a true enterprise data warehouse by bringing in all the sources, data marts, and other data warehouses into one place.  A true “single version of the truth”
  • Easy support model: With a PDW you get an integrated support plan with a single Microsoft contact.  Whether it’s a problem with the hardware or the software, you just call Microsoft and they will work with the vendor if it’s a hardware issue
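On the connection-string point above: pointing a BI tool at PDW really is just a matter of supplying the control node’s IP address and port.  The values below are placeholders, and the port shown is the one PDW appliances commonly use, so check your own appliance’s configuration:

```
Data Source=10.10.10.100,17001;Initial Catalog=EDW;Integrated Security=SSPI;
```

Everything else in the tool – report design, refresh, filters – works the same as against any SQL Server source.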

If you answer “Yes” to a few of the below questions, a PDW may be right for you:

  • Is your data volume growth becoming unmanageable using currently implemented DW technologies? (>20-30% annually)
  • Is there a specific Big Data business need (e.g. social media analysis, fraud detection) in a high-priority industry (Retail, Financial, Pub Sec)?
  • Is your DW or storage spend consuming a disproportionate and increasing amount of your IT budget?
  • Do your business users need to find, combine, and refine structured and unstructured data? Internal and external sources?
  • In the near future, do you expect to need both on-premises and cloud-based BI capabilities?
  • Do you have a need to capture and analyze streaming data?  At what scale and velocity?
  • Do you currently (or plan to) collect, store, and analyze multiple forms of unstructured data (XML, JSON, CSV, etc.)?
  • Are you unable to serve your business users’ analytics provisioning and data requests in a timely manner?
  • Are you experiencing data management issues such as security or compliance due to business owners (“shadow” IT) creating their own unmanaged data stores?
  • Are you trying to build, grow, and manage your next-generation DW without adding new headcount or talent (data scientists, external consultants, etc.)?

There are three vendors that sell PDW: HP, Dell, and Quanta.  It comes with an integrated support plan with a single Microsoft contact.

Interested in finding out more about PDW, maybe a demo?  If so, shoot me an email!

More info:

Parallel Data Warehouse (PDW) Version 2

Microsoft SQL Server Parallel Data Warehouse (PDW) Explained

Appliance: Parallel Data Warehouse (PDW)

Parallel Data Warehouse Solution Brief

Introduction to PDW (Parallel Data Warehouse)

Introduction to SQL Server 2012 Parallel Data Warehouse

Transitioning from SMP to MPP, the why and the how

Posted in PDW, SQLServerPedia Syndication

Real-time query access with PDW

The Parallel Data Warehouse (PDW) officially supports Analysis Services as a data source, both the Multidimensional model (ROLAP and MOLAP modes) and the Tabular model (In-Memory and DirectQuery modes).  The big benefit of using ROLAP or DirectQuery is you get real-time query access to the relational data source in PDW (as opposed to the data only up to the last time the cube was processed) and don’t have to process the cube (just make sure to use clustered columnstore indexes on the PDW tables to improve performance).  You create MDX queries when using ROLAP, which get translated to SQL when hitting PDW, and you create DAX queries when using DirectQuery, which also get translated to SQL when hitting PDW.

Keep in mind that the PDW is so fast when using clustered columnstore indexes, that if you have a properly defined star schema you might not even need to use a cube because the results will be returned to the user quickly.  But there are other reasons besides performance as to why you might still want to use a cube (see Why use a SSAS cube?).

An SSAS cube that uses PDW as a data source is just like one using any other data source.  Performance is usually fast because of the clustered columnstore indexes, with the only caveat being that sometimes the SQL generated by DirectQuery to pull data from PDW is not that great (the SQL generated by ROLAP is usually pretty good).

The other thing to note about DirectQuery, which applies to any data source, is that you can’t use PerformancePoint or Excel PivotTables with it.  This is because MDX queries are not supported for a tabular model in DirectQuery mode, only DAX, so you need to use a DAX client like Power View (PerformancePoint and Excel PivotTables generate MDX queries behind the scenes).  The other limitations with DirectQuery are that it does not cache results like ROLAP and there are some unsupported data types (geometry, xml, and nvarchar(max)).  Finally, some DAX functions might return different results in DirectQuery mode (see Formula Compatibility in DirectQuery Mode), and two DAX functions are not supported at all (EXACT and REPLACE).  So it seems that ROLAP is the better choice over DirectQuery for many situations.  But if you do go with a tabular model, you may want to look into using a hybrid mode (see Tabular query modes: DirectQuery vs In-Memory and Partitions and DirectQuery Mode).  Definitely go with a DirectQuery tabular model over an in-memory model if your database is 1TB or more.

One limit of ROLAP to note is that it does not support parent-child hierarchies (these generate recursive CTE queries, which PDW does not yet support), so you will have to convert your parent-child hierarchies into level-based (flattened) hierarchies in the cube via this tool.  One improvement is that Distinct Count performance for ROLAP queries is faster if you enable an optimization.  Some other ROLAP limitations against PDW:

  • Auto-cube refresh is not supported
  • Materialized views, also called Indexed views, are not supported
  • Proactive caching is supported only if you use the polling mechanisms provided by Analysis Services
  • Writeback is not supported

Some things I have learned when using ROLAP against a PDW:

  • Sometimes it is better to have your fact tables as ROLAP, but keep the dimensions as MOLAP
  • Think about using MOLAP for your historical partitions and ROLAP for just your current partition
  • Make sure the measures are in BIGINT in the fact tables or MDX aggregates might not work (MDX aggregates use INT by default unless the source is BIGINT)
  • PDW supports hundreds of concurrent users, but if you have thousands of concurrent users hitting the cube it may be better to move the data from the PDW to an SMP data mart and create the cube there

More info:

Comparing DirectQuery and ROLAP for real-time access

Tabular model: Not ready for prime time?

Parallel Data Warehouse (PDW) and ROLAP

Analysis Services ROLAP for SQL Server Data Warehouses

Columnstore vs. SSAS

Posted in PDW, SQLServerPedia Syndication, SSAS