SQL Server 2016 is here!

Today is the day: SQL Server 2016 is available for download!  You can download any edition of SQL Server 2016 (Enterprise, Standard, Web, Express with Advanced Services, Express, Developer) right now if you have an MSDN subscription, and you can also create an Azure VM right now that includes SQL Server pre-installed with one of the editions (Enterprise, Standard, Web, Express).  Lastly, you can experience the full feature set through the free Evaluation edition (180 days) or the Developer edition (you have to sign in to Visual Studio Dev Essentials, a free developer program, before you can download it).

Here is a quick overview of the tons of new features, broken out by edition (click for larger view):

sql2016-a

and here is another view on the features available for each edition (click for larger view):

sql2016-b

More info:

SQL Server 2016 is generally available today

SQL Server 2016 e-book


Azure Storage pricing

Azure storage is a great, inexpensive solution for storing your data in the cloud.  There are many Azure storage options, but I wanted to list here the pricing of the most common type: block blob.

The pricing below is based on the East US region (most other regions have very similar pricing) and comes from the Azure pricing calculator.  The data redundancy options LRS, ZRS, GRS, and RA-GRS are explained at Redundancy Options in Azure Blob Storage, but here is a recap:

  • Locally redundant storage (LRS) – makes multiple synchronous copies of your data within a single datacenter
  • Zone redundant storage (ZRS) – stores three copies of data across multiple datacenters within or across regions; for block blobs only
  • Geographically redundant storage (GRS) – same as LRS, plus multiple asynchronous copies to a second datacenter hundreds of miles away
  • Read-access geographically redundant storage (RA-GRS) – same as GRS, plus read access to the secondary datacenter

Here is the pricing:

Storage, LRS, 1GB = $0.02/month
Storage, LRS, 10GB = $0.24/month
Storage, LRS, 100GB = $2.40/month
Storage, LRS, 1TB = $24/month
Storage, LRS, 10TB = $242/month
Storage, LRS, 100TB = $2,396/month
Storage, LRS, 1PB = $23,572/month

Storage, ZRS, 1GB = $0.03/month
Storage, ZRS, 10GB = $0.30/month
Storage, ZRS, 100GB = $3.00/month
Storage, ZRS, 1TB = $30/month
Storage, ZRS, 10TB = $302/month
Storage, ZRS, 100TB = $2,995/month
Storage, ZRS, 1PB = $29,466/month

Storage, GRS, 1GB = $0.05/month
Storage, GRS, 10GB = $0.48/month
Storage, GRS, 100GB = $4.80/month
Storage, GRS, 1TB = $49/month
Storage, GRS, 10TB = $484/month
Storage, GRS, 100TB = $4,793/month
Storage, GRS, 1PB = $47,145/month

Storage, RA-GRS, 1GB = $0.06/month
Storage, RA-GRS, 10GB = $0.61/month
Storage, RA-GRS, 100GB = $6.10/month
Storage, RA-GRS, 1TB = $62/month
Storage, RA-GRS, 10TB = $614/month
Storage, RA-GRS, 100TB = $6,083/month
Storage, RA-GRS, 1PB = $59,852/month

The other cost to note is that you are charged for the number of storage transactions, but these are extremely cheap (transactions include both read and write operations to storage):

100K transactions = $0.01/month
1M transactions = $0.04/month
10M transactions = $0.36/month
100M transactions = $3.60/month

Every REST call to Windows Azure Blobs counts as one transaction (see Understanding Windows Azure Storage Billing – Bandwidth, Transactions, and Capacity).

Data going into Azure storage (“Inbound data transfers”) is free.  Data going out of Azure storage (“Outbound data transfers”) is free if it stays within the same Azure data center (“region”); otherwise there is a small cost (see Data Transfers Pricing Details):

OUTBOUND DATA TRANSFERS: ZONE 1 / ZONE 2 / ZONE 3
  • First 5 GB/month: Free / Free / Free
  • 5 GB – 10 TB/month: $0.087 per GB / $0.138 per GB / $0.181 per GB
  • Next 40 TB (10 – 50 TB)/month: $0.083 per GB / $0.135 per GB / $0.175 per GB
  • Next 100 TB (50 – 150 TB)/month: $0.07 per GB / $0.13 per GB / $0.17 per GB
  • Next 350 TB (150 – 500 TB)/month: $0.05 per GB / $0.12 per GB / $0.16 per GB
  • Zone 1: US West, US East, US North Central, US South Central, US East 2, US Central, Europe West, Europe North
  • Zone 2: Asia Pacific East, Asia Pacific Southeast, Japan East, Japan West, Australia East, Australia Southeast
  • Zone 3: Brazil South

So in summary, your total cost depends on how much you store, the volume and type of storage transactions and outbound data transfers, and which data redundancy option you choose.
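To make that math concrete, here is a minimal PowerShell sketch that estimates a monthly bill from those three components.  The per-GB and per-transaction rates are approximations derived from the East US lists above; rates change over time, so treat the Azure pricing calculator as the authority:

```powershell
# Rough monthly cost estimate for block blob storage, using the East US
# rates listed above (approximate; confirm with the Azure pricing calculator).
$ratePerGB = @{ 'LRS' = 0.024; 'ZRS' = 0.030; 'GRS' = 0.048; 'RA-GRS' = 0.061 }

function Get-BlobCostEstimate {
    param(
        [double]$SizeGB,
        [string]$Redundancy = 'LRS',
        [double]$TransactionsMillions = 0,
        [double]$CrossRegionEgressGB = 0   # Zone 1 outbound rate assumed
    )
    $storage      = $SizeGB * $ratePerGB[$Redundancy]
    $transactions = $TransactionsMillions * 0.036                     # ~$0.36 per 10M
    $egress       = [Math]::Max($CrossRegionEgressGB - 5, 0) * 0.087  # first 5 GB free
    [pscustomobject]@{
        StorageCost     = [Math]::Round($storage, 2)
        TransactionCost = [Math]::Round($transactions, 2)
        EgressCost      = [Math]::Round($egress, 2)
        TotalPerMonth   = [Math]::Round($storage + $transactions + $egress, 2)
    }
}

# Example: 1 TB of GRS blobs, 10M transactions, 100 GB leaving the region
Get-BlobCostEstimate -SizeGB 1024 -Redundancy 'GRS' -TransactionsMillions 10 -CrossRegionEgressGB 100
```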

Note that there is a new feature for Azure blob storage: a hot access tier and a cool access tier (see New Azure storage account type).  The hot access tier pricing is as listed above.  The cool access tier has greatly reduced storage pricing (ZRS is not yet supported).  For example:

Storage, LRS, 100GB = $2.40/month (hot), $1.00/month (cool)
Storage, GRS, 100GB = $4.80/month (hot), $2.00/month (cool)
Storage, RA-GRS, 100GB = $6.10/month (hot), $2.50/month (cool)

Finally, there is an Azure speed testing site that you can use to determine the data center closest to your location, which will provide the lowest network latency.

More info:

Azure Storage Pricing

Windows Azure Storage Transaction | Unveiling the Unforeseen Cost and Tips to Cost Effective Usage

Understand your bill for Microsoft Azure


Migrate from on-prem SQL Server to Azure VM IaaS

Hopefully you went through my presentations Should I move my database to the cloud?, Introducing Azure SQL Database, and Implement SQL Server on an Azure VM and are convinced to move your databases to the cloud.  If you are going to migrate to Azure SQL Database and want more info on how to copy your SQL Server databases from on-prem to Azure SQL Database (PaaS), check out Migrate from on-prem SQL server to Azure SQL Database.  If you want to copy data from on-prem to SQL Server in an Azure VM (IaaS), check out the migration methods below:

  • Use the Deploy a SQL Server Database to a Microsoft Azure VM wizard.  This is the recommended method for migrating an on-premises user database when the compressed database backup file is less than 1 TB.  It is the fastest and simplest method, so use it whenever possible to migrate to a new or existing SQL Server instance in an Azure virtual machine.  Use on SQL Server 2005 or greater to SQL Server 2014 or greater.  Note that this method supports only the classic deployment model
  • Perform on-premises backup using compression and manually copy the backup file into the Azure virtual machine and then do a restore (only if you cannot use the above wizard or the database backup size is larger than 1 TB).  Use on SQL Server 2005 or greater to SQL Server 2005 or greater
  • Perform a backup to URL and restore into the Azure virtual machine from the URL (see the sketch after this list).  Use on SQL Server 2012 SP1 CU2 or greater to SQL Server 2012 SP1 CU2 or greater
  • Detach and then copy the data and log files to Azure blob storage and then attach to SQL Server in Azure VM from URL.  Use on SQL Server 2005 or greater to SQL Server 2014 or greater
  • Convert on-premises physical machine to Hyper-V VHD, upload to Azure Blob storage, and then deploy as new VM using uploaded VHD.  Use when bringing your own SQL Server license, when migrating a database that you will run on an older version of SQL Server, or when migrating system and user databases together as part of the migration of database dependent on other user databases and/or system databases.  Use on SQL Server 2005 or greater to SQL Server 2005 or greater
  • Ship hard drive using Windows Import/Export Service.  Use when manual copy method is too slow, such as with very large databases.  Use on SQL Server 2005 or greater to SQL Server 2005 or greater
  • If you have an AlwaysOn deployment on-premises and want to minimize downtime, use the Add Azure Replica Wizard to create a replica in Azure and then failover, pointing users to the Azure database instance.  Use on SQL Server 2012 or greater to SQL Server 2012 or greater
  • If you do not have an AlwaysOn deployment on-premises and want to minimize downtime, use SQL Server transactional replication to configure the Azure SQL Server instance as a subscriber and then disable replication, pointing users to the Azure database instance.  Use on SQL Server 2005 or greater to SQL Server 2005 or greater
  • Others: Data-tier Application, T-SQL scripts, SQL Server Import and Export Wizard, SSIS, Copy Database Wizard
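To illustrate the backup-to-URL method from the list above, here is a minimal sketch; the storage account, container, credential, server, and database names are all hypothetical, and it assumes SQL Server 2012 SP1 CU2 or greater on both sides:

```powershell
# Sketch of the backup-to-URL migration method (SQL Server 2012 SP1 CU2+).
# All names below are example values.
$backupSql = @"
CREATE CREDENTIAL MyAzureCredential
WITH IDENTITY = 'mystorageaccount',          -- storage account name
     SECRET   = '<storage-account-access-key>';

BACKUP DATABASE MyUserDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyUserDB.bak'
WITH CREDENTIAL = 'MyAzureCredential', COMPRESSION, STATS = 10;
"@
Invoke-Sqlcmd -ServerInstance 'OnPremServer' -Query $backupSql

# Then, on the SQL Server instance in the Azure VM, restore from the same URL
# (create the same credential there first):
$restoreSql = @"
RESTORE DATABASE MyUserDB
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyUserDB.bak'
WITH CREDENTIAL = 'MyAzureCredential';
"@
Invoke-Sqlcmd -ServerInstance 'AzureVM' -Query $restoreSql
```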

More info:

Migrate a SQL Server database to SQL Server in an Azure VM

SQL Server 2014 Hybrid Cloud Scenarios: Migrating On-Premises SQL Server to Windows Azure Virtual Machines

How To Move or Migrate SQL Server Workload to Azure SQL Database Cloud Services or Azure VM – All version of SQL Server – Step-By-Step

Free ebook: Microsoft Azure Essentials Migrating SQL Server Databases to Azure


Azure Speed Testing

Have you ever wondered what Azure region would be the fastest to host your applications when those applications are accessed from your office?  Obviously the closest data center would probably be the best choice (each region has multiple data centers), but how do you know what data center is the closest?  And how can you easily confirm the closest data center is the fastest?  Well, I recently discovered a site that can help you with that.  It’s called AzureSpeed, and it has a number of tests to help you:

  • Azure Storage Latency Test – Test network latency to Azure Storage in worldwide data centers from your location to determine the best region for your application and users
  • Azure Storage Blob Upload Speed Test – Test upload speeds to Azure Blob Storage in worldwide data centers.  A small predetermined file is uploaded
  • Azure Storage Large File Upload Speed Test – Test uploading a large file to Azure Blob Storage in worldwide data centers, with different upload options (block size, thread count).  You choose a file on your computer to upload
  • Azure Storage Blob Download Speed Test – Test downloading of a 100MB file from any data center

There is also a Cloud Region Finder that enables you to quickly look up the cloud and region information of an application deployment by entering its URL.  For example, enter “microsoft.com” and it will return “Azure – North Europe”.


New Azure storage account type

In Azure, there has previously been only one kind of storage account, now called a “General purpose” storage account.  Just introduced is a new kind, the “Blob storage” account (announced under the name “Cool Blob Storage”).  When creating a storage account, the “Account kind” field will now list two options: the existing type “General purpose” as well as the new type “Blob storage”.

“General purpose” storage accounts provide storage for blobs, files, tables, queues, and Azure virtual machine disks under a single account.  “Blob storage” accounts are specialized for storing blob data and support choosing an access tier, which lets you specify how frequently the data in the account is accessed.  Choose an access tier that matches your storage needs and optimizes costs.

The access tiers available for Blob storage accounts are “hot” and “cool”.  In general, hot data is data that is accessed very frequently and needs to be highly durable and available.  Cool data, on the other hand, is data that is infrequently accessed and long-lived.  Cool data can tolerate slightly lower availability, but it still requires high durability and time-to-access and throughput characteristics similar to hot data.  For cool data, a slightly lower availability SLA and higher access costs are acceptable tradeoffs for much lower storage costs.  Azure Blob storage now addresses this need for differentiated storage tiers for data with different access patterns and pricing models: you can choose the cool access tier to store less frequently accessed data at a lower storage cost, and the hot access tier to store more frequently accessed data at a lower access cost.  The access tier attribute (hot or cool) is set at the account level and applies to all objects in that account, so if you want both a hot tier and a cool tier you will need two accounts.  If the usage pattern of your data changes, you can switch between these access tiers at any time.
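As an illustration, here is a minimal PowerShell (AzureRM) sketch that creates a Blob storage account with the cool tier and later flips it to hot; the resource group, account name, and location are example values:

```powershell
# Sketch: create a Blob storage account with the cool access tier (AzureRM).
# Resource group, account name, and location are example values.
New-AzureRmStorageAccount -ResourceGroupName 'myRG' `
    -Name 'mycoolblobstore' -Location 'eastus2' `
    -SkuName 'Standard_GRS' -Kind 'BlobStorage' -AccessTier 'Cool'

# The access tier is an account-level attribute, so switching it later is a
# single call (note that a tier change can incur per-GB charges):
Set-AzureRmStorageAccount -ResourceGroupName 'myRG' `
    -Name 'mycoolblobstore' -AccessTier 'Hot'
```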

Hot access tier vs. cool access tier:

  • Availability: 99.9% (hot) / 99% (cool)
  • Availability (RA-GRS reads): 99.99% (hot) / 99.9% (cool)
  • Usage charges: higher storage costs with lower access and transaction costs (hot) / lower storage costs with higher access and transaction costs (cool)
  • Minimum object size: N/A for both
  • Minimum storage duration: N/A for both
  • Latency (time to first byte): milliseconds for both
  • Performance and scalability: same as general purpose storage accounts for both

Some common scenarios with the Cool tier include: backup data, disaster recovery, media content, social media photos and videos, scientific data, and so on.

microsoftazureblobstorage770x224

Going forward, Blob storage accounts are the recommended way to store blobs, as future capabilities such as hierarchical storage and tiering will be introduced on this account type.  Note that Blob storage accounts do not yet support page blobs or zone-redundant storage (ZRS).

More info:

Introducing Azure Cool Blob Storage

Azure Blob Storage: Cool and Hot tiers

About Azure storage accounts

Microsoft launches ‘cool blob’ Azure storage – at 1¢ per GB monthly

Microsoft Unveils Cheaper Azure Cool Blob Storage Option


SQL Server 2016 available June 1st!

Woo hoo!  Microsoft has announced that SQL Server 2016 will be generally available on June 1st.  On that date all four editions of SQL Server 2016 will be available to all users, including brand new ones, MSDN subscribers, and existing customers.

Here is a quick overview of the tons of new features, broken out by edition (click for larger view):

sql2016-a

and here is another view on the features available for each edition (click for larger view):

sql2016-b

In addition to the on-premises release, Microsoft will also have a virtual machine available on June 1st through its Azure cloud platform to make it easy for companies to deploy SQL Server 2016 in the cloud.  So start planning today!

More info:

Get ready, SQL Server 2016 coming on June 1st

Microsoft SQL Server 2016 will be generally available June 1


Big Data architectures

Over the last few years I have been involved in reviewing the architectures of various companies that are building or have built big data solutions.  When I say “big data”, I’m referring to the incorporation of semi-structured data such as sensor data, device data, web logs, and social media.  The architectures generally fall into four scenarios:

Enterprise data warehouse augmentation

This scenario uses an enterprise data warehouse (EDW) built on an RDBMS, but extracts data from the EDW and loads it into a big data hub along with data from other sources that are deemed not cost-effective to move into the EDW (usually high-volume or cold data).  Some data enrichment is usually done in the data hub.  This data hub can then be queried, but primary analytics remain with the EDW.  The data hub is usually built on Hadoop or NoSQL.  This can save costs, since storage on Hadoop or NoSQL is much cheaper than in an EDW.  Plus, this can speed up the development of reports, since the data in Hadoop or NoSQL can be used right away instead of waiting for an IT person to write the ETL and create the schemas to ingest the data into the EDW.  Another benefit is that it can support data growth faster, as it is easier to expand storage in a Hadoop/NoSQL solution than on a SAN with an EDW solution.  Finally, it can help by reducing the number of queries on the EDW.

This scenario is most common when an EDW has been in existence for a while and users are requesting data that the EDW cannot handle because of space, performance, and data loading times.

The challenges to this approach are that you might not be able to use your existing tools to query the data hub, and that the data in the hub can be difficult to understand and join and may not be completely clean.

Big Data Architectures

Data hub plus EDW

The data hub is used as a data staging and extreme-scale data transformation platform, but long-term persistence and analytics is performed in the EDW.  Hadoop or NoSQL is used to refine the data in the data hub.  Once refined, the data is copied to the EDW and then deleted from the data hub.

This will lower the cost of data capture, provide scalable data refinement, and provide fast queries via the EDW.  It also offloads the data refinement from the EDW.

Big Data Architectures 2

All-in-one

A distributed data system is implemented for long-term, high-detail big data persistence in the data hub and analytics without employing an EDW.  Low-level code is written, or big data packages are added, that integrate directly with the distributed data store for extreme-scale operations and analytics.

The distributed data hub is usually created with Hadoop, HBase, Cassandra, or MongoDB.  BI tools specifically integrated with or designed for distributed data access and manipulation are needed.  Data operations either use BI tools that provide NoSQL capability or require low-level code (e.g., MapReduce or Pig scripts).

The disadvantages of this scenario are that reports and queries can have longer latency, that new reporting tools require training (which could lead to lower adoption), and that it is difficult to provide governance and structure on top of a non-RDBMS solution.

Big Data Architectures-3

Modern Data Warehouse

An evolution of the three previous scenarios that provides multiple options for the various technologies.  Data may be harmonized and analyzed in the data lake, or moved out to an EDW when more quality and performance are needed or when users simply want control.  ELT is usually used instead of ETL (see Difference between ETL and ELT).  The goal of this scenario is to support any future data needs, no matter the variety, volume, or velocity of the data.

Hub-and-spoke should be your ultimate goal.  See Why use a data lake? for more details on the various tools and technologies that can be used for the modern data warehouse.

Big Data Architectures4

More info:

Forrester’s Patterns of Big Data

Most Excellent Big Data Questions: Top Down or Bottom Up Use Cases?


Azure SQL Database pricing

Pricing Azure SQL Database is difficult because various database service tier options such as database transaction units (DTUs), max database size, disaster recovery options, and backup retention days are used to determine pricing, instead of hardware (CPU/RAM/disk).  Generally I recommend starting out with a low service tier and scaling as your needs increase, as it only takes a few minutes to scale with no downtime (see Change the service tier and performance level (pricing tier) of a SQL database).
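For example, scaling a database from one tier to another is a single call; here is a sketch with AzureRM PowerShell and the equivalent T-SQL (server, database, resource group, and credentials are hypothetical):

```powershell
# Sketch: scale an Azure SQL Database to a new service tier (AzureRM).
# Server, database, resource group, and credentials are example values.
Set-AzureRmSqlDatabase -ResourceGroupName 'myRG' -ServerName 'myserver' `
    -DatabaseName 'mydb' -Edition 'Standard' -RequestedServiceObjectiveName 'S2'

# The same change via T-SQL, connected to the logical server's master database:
Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' -Database 'master' `
    -Username 'serveradmin' -Password '<password>' `
    -Query "ALTER DATABASE mydb MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');"
```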

DTUs are explained here.  To help, there is an Azure SQL Database DTU Calculator.  This calculator will help you determine the number of DTUs being used by your existing on-prem SQL Server database(s), as well as give a recommendation of the minimum performance level and service tier that you need before you migrate to Azure SQL Database.  It does this by using performance monitor counters.

After you use a SQL Database for a while, you can use the pricing tier recommendation tool to determine the best service tier to switch to.  It does this by assessing historical resource usage for the database.

Note that this pricing is per database, so if you have many databases on each on-prem SQL Server you will have to price each one.  But there are many built-in features of SQL Database, such as high availability and disaster recovery, that you don't have to build or manage yourself in the cloud, saving you costs and administration time.

You pay for each database based on uptime.  You don't pay for storage (but there is a database size limit for each tier).

Below is the pricing for all database service tiers, based in the East US region, for the single database model (pricing is different for elastic database pools, which are an option for sharing resources).  The pricing comes from the Azure pricing calculator:

Basic, SQL Database, 5 DTU, 2GB DB, $5/month
Standard, S0, SQL Database, 10 DTU, 250GB database, $15/month
Standard, S1, SQL Database, 20 DTU, 250GB database, $30/month
Standard, S2, SQL Database, 50 DTU, 250GB database, $75/month
Standard, S3, SQL Database, 100 DTU, 250GB database, $150/month
Premium, P1, SQL Database, 125 DTU, 500GB database, $465/month
Premium, P2, SQL Database, 250 DTU, 500GB database, $930/month
Premium, P4, SQL Database, 500 DTU, 500GB database, $1,860/month
Premium, P6, SQL Database, 1000 DTU, 500GB database, $3,720/month
Premium, P11, SQL Database, 1750 DTU, 1TB database, $7,001/month
Premium, P15, SQL Database, 4000 DTU, 1TB database, $16,003/month

sql-database-service-tiers-table

For more details on pricing, see SQL Database Pricing.  Data going into a SQL Database is free (“Inbound data transfers”).  Data going out of a SQL Database is free (“Outbound data transfers”) if within the same Azure data center (“region”), otherwise there is a small cost – see Data Transfers Pricing Details.

An option to get around the 1TB database size limit is to split the data into multiple databases and use elastic database queries (see the sketch below).
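Here is a sketch of what such an elastic database query setup looks like; all server, database, credential, and table names are hypothetical:

```powershell
# Sketch: query a table that lives in another Azure SQL Database (elastic query).
# All names below are example values.
$elasticSql = @"
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong-password>';

CREATE DATABASE SCOPED CREDENTIAL RemoteCred
WITH IDENTITY = 'remoteuser', SECRET = '<remote-password>';

CREATE EXTERNAL DATA SOURCE RemoteDB WITH (
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',
    DATABASE_NAME = 'OrdersDB2',
    CREDENTIAL = RemoteCred
);

-- The external table mirrors the remote table's schema:
CREATE EXTERNAL TABLE dbo.Orders (
    OrderID INT, CustomerID INT, Amount MONEY
) WITH (DATA_SOURCE = RemoteDB);

-- Queries against dbo.Orders are transparently sent to OrdersDB2:
SELECT TOP 10 * FROM dbo.Orders;
"@
Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' -Database 'OrdersDB1' `
    -Username 'serveradmin' -Password '<password>' -Query $elasticSql
```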

More info:

Understanding Windows Azure Storage Billing – Bandwidth, Transactions, and Capacity

Understand your bill for Microsoft Azure

Azure SQL Database – How to choose the right service tier


Analytics Platform System (APS) AU5 released

The Analytics Platform System (APS), which is a renaming of the Parallel Data Warehouse (PDW), has just received an appliance update (AU5), which is sort of like a service pack except that it includes many new features.  Below is what's new in this release:

The AU5 release offers customers greater Transact-SQL (T-SQL) compatibility to aid in migrations from SQL Server and other platforms, as well as improved connectivity and integration with Hadoop.  The AU5 release also includes support for faster data loading scenarios through both first-party Microsoft and third-party tools.  These features continue to provide greater alignment with SQL Server and bring significant value to customers.

APS hardware partners Dell, HPE and Quanta Cloud Technology will ship APS with AU5 starting this month.  Specific shipping dates will vary depending on the hardware partner’s factory process.

This update delivers several new capabilities and features, including:

PolyBase/Hadoop Enhancements

  • Support for Hortonworks (HDP 2.3) and Cloudera (CDH 5.5)
  • String predicate pushdown to Hadoop for improved performance
  • Support for National/Government cloud storage and public Azure data sets
  • Apache Parquet file format support

Data Loading

  • BCP support for additional data loading scenarios
    • Supports the bcp.exe command line interface for simple data import/export scenarios (see the sketch after this list)
    • .NET SqlBulkCopy class support for custom application integration
  • Support for SQL Server Native Client OLE DB and the Bulk Copy API, unlocking access by many 3rd-party ETL, reporting, and analytic tools
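As a quick illustration of the new bcp support, a sketch; the appliance address, port, credentials, database, table, and file names are all example values (APS appliances typically expose SQL on port 17001):

```powershell
# Sketch: bulk load a pipe-delimited flat file into an APS table with bcp.exe.
# Appliance address, port, credentials, and object names are example values.
bcp MyDW.dbo.StagingSales in C:\loads\sales.txt `
    -S '10.0.0.1,17001' -U loaduser -P '<password>' -c -t '|'
```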

T-SQL compatibility improvements to reduce migration friction from SQL SMP

  • sp_prepexec – A common dynamic query preprocessor model that allows customers to simplify migrations
  • SET NOCOUNT and SET PARSEONLY set statements, used across a variety of customers' scenarios
  • IS_MEMBER() and IS_ROLEMEMBER() in support of Windows Authentication
  • CREATE TABLE as HEAP option, which allows customers to explicitly define heap tables, in addition to clustered index or clustered columnstore index tables, aligning the DDL with the SQL Data Warehouse service (see the sketch below)
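Here is a sketch of the new explicit HEAP option; the table, column, and appliance names are example values:

```powershell
# Sketch: the new explicit HEAP option in APS AU5 DDL.  Names are example values.
$ddl = @"
CREATE TABLE dbo.StagingSales (
    SaleID  BIGINT,
    StoreID INT,
    Amount  MONEY
)
WITH (DISTRIBUTION = HASH(SaleID), HEAP);
"@
Invoke-Sqlcmd -ServerInstance '10.0.0.1,17001' -Query $ddl
```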

Based on early testing and feedback, there is a performance improvement of up to 30% for short-running queries.  In addition to the above, Microsoft is also offering an early preview of Adaptive Query Processing, which can automatically re-optimize query execution mid-flight.  Please note that Adaptive Query Processing is currently in beta; customers wishing to participate should contact their support representative.

More info:

Microsoft releases the latest update to Analytics Platform System (APS)


Scaling Azure VMs

There are so many benefits to the cloud, but one of the major features is the ease of scaling a virtual machine (VM).  A common scenario is when you are building an application that needs SQL Server.  Simply create a VM in the Azure portal that has SQL Server already installed (or choose an OS-only VM and install SQL Server yourself if you will be bringing your own SQL Server license).  When choosing the initial VM, choose a smaller VM size to save costs.  Then, as your application goes live, scale the VM up a bit to handle more users.  Then monitor SQL Server's performance.  If you need more resources, scale the VM up again.  If you scale too much and the VM is underutilized, just scale it back down.

All this scaling can be done in a few mouse clicks, with the resizing taking just a few minutes (or even just a few seconds!).  Compare this to scaling on-prem: review hardware, order hardware, wait for delivery, rack and stack it, install the OS, install SQL Server, then hope you did not order too much or too little hardware.  It can take weeks or months to get up and running!  Then think of the pain if you have to upgrade the hardware: repeat the same process above, then back up the databases, logins, SQL Agent jobs, etc., restore them on the new server, and repoint all the users to the new server.  Ugh!

Let me quickly cover the process of scaling a VM in Azure to show you how easy it is.  First you select your VM in the Azure portal and choose “Size” under Settings:

Picture1

Under “Choose a size” will be a list of all the available VM sizes you can scale to.  Some VMs may not appear in the list if you are in a region that does not support them, so keep this in mind when choosing the region for your initial VM:

Picture5

Some of the VMs in the “Choose a size” list will be “active”, meaning you can select them, and resizing requires just a VM reboot.  Which VMs are active depends on whether the current VM size is in the same family (see list below), or whether the Azure hardware cluster that the current VM resides in supports the new VM size (which you are not able to tell ahead of time – click here for more info):

Picture2

If you see VMs in the “Choose a size” list that are grayed out and not selectable, it means the VM size is not in the same family and the hardware cluster does not support it.  No problem!  If you are using the Azure Resource Manager (ARM) deployment model you can still resize to any VM size; you just need to stop your VM first.  Then go back to the “Choose a size” list and you will see that all the VMs are now active and selectable.  Just remember to restart the VM when the scaling is complete.

Resizing a VM deployed using the classic (ASM) deployment model is more difficult if the new size is not supported by the hardware cluster where the VM is currently deployed.  Unlike with VMs deployed through the ARM deployment model, it is not possible to resize the VM while it is in a stopped state.  So for VMs using the ASM deployment model, you should delete the virtual machine (selecting the option to keep the attached storage, i.e., the OS and data disks), create a new virtual machine in the new size, and reattach the disks from the old virtual machine.  To simplify this process, there is a PowerShell script to aid in the delete and redeployment process.
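The same resize can be scripted; here is a minimal AzureRM PowerShell sketch for an ARM-deployed VM (resource group, VM name, and target sizes are example values):

```powershell
# Sketch: resize an ARM-deployed Azure VM (AzureRM PowerShell).
# Resource group, VM name, and target sizes are example values.

# List the sizes available to this VM on its current hardware cluster:
Get-AzureRmVMSize -ResourceGroupName 'myRG' -VMName 'mySqlVm'

# If the target size is listed, resize in place (the VM will reboot):
$vm = Get-AzureRmVM -ResourceGroupName 'myRG' -Name 'mySqlVm'
$vm.HardwareProfile.VmSize = 'Standard_DS3_v2'
Update-AzureRmVM -ResourceGroupName 'myRG' -VM $vm

# If it is not listed, stop (deallocate) the VM first, then resize and restart:
Stop-AzureRmVM -ResourceGroupName 'myRG' -Name 'mySqlVm' -Force
$vm.HardwareProfile.VmSize = 'Standard_DS13_v2'
Update-AzureRmVM -ResourceGroupName 'myRG' -VM $vm
Start-AzureRmVM -ResourceGroupName 'myRG' -Name 'mySqlVm'
```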

So once you choose the VM to scale to, you will see:

Picture3

and in a few minutes, or even seconds if the VM is stopped, you will see:

Picture4

If you needed to stop your VM, the next step is to restart it.  If you did not need to stop it, you are ready to go!

More info:

Anatomy of a Microsoft Azure Virtual Machine
