Scaling Azure VMs

There are many benefits to the cloud, but one of the major features is the ease of scaling a virtual machine (VM). A common scenario is when you are building an application that needs SQL Server. Simply create a VM on the Azure portal that has SQL Server already installed (or choose an OS-only VM and install SQL Server yourself if you will be bringing your own SQL Server license). When creating the initial VM, choose a smaller VM size to save costs. Then as your application goes live, scale the VM up a bit to handle more users, and watch the performance of SQL Server. If you need more resources, scale the VM up again. If you scale too much and the VM is underutilized, just scale it back down.

All this scaling can be done in a few mouse clicks, with the resizing taking just a few minutes (or even just a few seconds!). Compare this to scaling on-prem: review hardware, order hardware, wait for delivery, rack and stack it, install the OS, install SQL Server, then hope you did not order too much or too little hardware. It can take weeks or months to get up and running! Then think of the pain if you have to upgrade the hardware: repeat the same process, then back up the databases, logins, SQL Agent jobs, etc., restore them all on the new server, and repoint all the users to the new server. Ugh!

Let me quickly cover the process of scaling a VM in Azure to show you how easy it is.  First you select your VM in the Azure portal and choose “Size” under Settings:

Picture1

Under “Choose a size” will be a list of all the available VM sizes you can scale to. Some sizes may not appear in the list if the region your VM is in does not support them, so keep this in mind when choosing the region for your initial VM:

Picture5

Some of the VMs in the “Choose a size” list will be “active,” meaning you can select them, and resizing requires just a VM reboot. Which sizes are active depends on whether the new size is in the same family as the current VM size (see list below), or whether the Azure hardware cluster the current VM resides in supports the new size (which you are not able to tell ahead of time – click here for more info):

Picture2

If you see VMs in the “Choose a size” list that are grayed out and not selectable, it means the new size is not in the same family and the hardware cluster does not support it. No problem! If you are using the Azure Resource Manager (ARM) deployment model you can still resize to any VM size; you just need to first stop your VM. Then go back to the “Choose a size” list and you will see that all the sizes are now active and selectable. Just remember to restart the VM when the scaling is complete.
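If you prefer to script the resize, the same stop/resize/restart flow can be sketched with the AzureRM PowerShell cmdlets. This is a rough sketch only; the resource group name, VM name, and target size below are placeholder values:

# Stop the VM first if the target size is grayed out in the portal
# (skip this step if the size is selectable while the VM is running)
Stop-AzureRmVM -ResourceGroupName "MyResourceGroup" -Name "MyVM" -Force

# Fetch the VM, change its size, and apply the update
$vm = Get-AzureRmVM -ResourceGroupName "MyResourceGroup" -Name "MyVM"
$vm.HardwareProfile.VmSize = "Standard_DS3_v2"
Update-AzureRmVM -ResourceGroupName "MyResourceGroup" -VM $vm

# Restart the VM if you had to stop it
Start-AzureRmVM -ResourceGroupName "MyResourceGroup" -Name "MyVM"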

Resizing a VM deployed using the Classic (ASM) deployment model is more difficult if the new size is not supported by the hardware cluster where the VM is currently deployed. Unlike VMs deployed through the ARM deployment model, it is not possible to resize the VM while it is in a stopped state. So for VMs using the ASM deployment model you should delete the virtual machine, selecting the option to keep the attached storage (OS and data disks), and then create a new virtual machine in the new size and reattach the disks from the old virtual machine. To simplify this, there is a PowerShell script to aid in the delete and redeployment process.

So once you choose the VM size to scale to, you will see:

Picture3

and in a few minutes, or even seconds if the VM is stopped, you will see:

Picture4

If you needed to stop your VM, the next step is to restart it.  If you did not need to stop it, you are ready to go!

More info:

Anatomy of a Microsoft Azure Virtual Machine


My latest presentations

I frequently present at user groups, and I always try to create a brand-new presentation to keep things interesting. We all know technology changes so quickly that there is no shortage of topics! There is a list of all my presentations, with slide decks and videos in some cases. Here are the new presentations I created over the past few months:

Implement SQL Server on an Azure VM

This presentation is for those of you who are interested in moving your on-prem SQL Server databases and servers to Azure virtual machines (VMs) in the cloud so you can take advantage of all the benefits of being in the cloud. This is commonly referred to as a “lift and shift” as part of an infrastructure-as-a-service (IaaS) solution. I will discuss the various Azure VM sizes and options, migration strategies, storage options, high availability (HA) and disaster recovery (DR) solutions, and best practices. (slides)

Relational databases vs Non-relational databases

There is a lot of confusion about the place and purpose of the many recent non-relational database solutions (“NoSQL databases”) compared to the relational database solutions that have been around for so many years.  In this presentation I will first clarify what exactly these database solutions are, how they compare to Hadoop, and discuss the best use cases for each.  I’ll discuss topics involving OLTP, scaling, data warehousing, polyglot persistence, and the CAP theorem.  We will even touch on a new type of database solution called NewSQL.  If you are building a new solution it is important to understand all your options so you take the right path to success. (slides)

Big Data: It’s all about the Use Cases

Big Data, IoT, data lake, unstructured data, Hadoop, cloud, and massively parallel processing (MPP) are all just fancy words unless you can find use cases for all this technology. Join me as I talk about the many use cases I have seen, from streaming data to advanced analytics, broken down by industry. I’ll show you how all this technology fits together by discussing various architectures and the most common approaches to solving data problems, and hopefully set off light bulbs in your head on how big data can help your organization make better business decisions. (slides) (video)

Cortana Analytics Suite

Cortana Analytics Suite is a fully managed big data and advanced analytics suite that transforms your data into intelligent action.  It is comprised of data storage, information management, machine learning, and business intelligence software in a single convenient monthly subscription.  This presentation will cover all the products involved, how they work together, and use cases. (slides)


Data loading into Azure SQL Data Warehouse

Azure SQL Data Warehouse (SQL DW) is a new platform-as-a-service (PaaS) offering that distributes workloads across multiple compute resources, an approach called massively parallel processing (MPP). Loading data into an MPP data warehouse requires a different approach, or mindset, than traditional methods of loading data into an SMP data warehouse.

To help you understand how best to load data into SQL DW, Microsoft has released an excellent white paper by Martin Lee, John Hoang, and Joe Sack. It describes the SQL DW architecture and explores several loading techniques to help you reach maximum data-loading throughput and identify the scenarios that best suit each of these techniques.

Check it out: Azure SQL Data Warehouse loading patterns and strategies
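To give a flavor of the MPP approach, the fastest loading path into SQL DW is PolyBase, which reads files from Azure Blob Storage in parallel across all compute nodes. Below is a rough sketch only; the storage account, container, and table names are made up for illustration:

--External data source pointing at Azure Blob Storage (public container assumed; a secured one would also need a CREDENTIAL)
CREATE EXTERNAL DATA SOURCE AzureBlob WITH
       (TYPE = HADOOP,
        LOCATION = 'wasbs://mycontainer@mystorageaccount.blob.core.windows.net');

--Describe the flat-file layout
CREATE EXTERNAL FILE FORMAT CsvFormat WITH
       (FORMAT_TYPE = DELIMITEDTEXT,
        FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

--External table over the files in the /sales/ folder
CREATE EXTERNAL TABLE dbo.SalesExternal
       ([SaleID] int NOT NULL,
        [SaleAmount] money NOT NULL,
        [SaleDate] date NOT NULL)
WITH (LOCATION = '/sales/',
      DATA_SOURCE = AzureBlob,
      FILE_FORMAT = CsvFormat);

--Load into a distributed internal table using CTAS (CREATE TABLE AS SELECT), which runs in parallel on every node
CREATE TABLE dbo.Sales
WITH (DISTRIBUTION = HASH([SaleID]))
AS SELECT * FROM dbo.SalesExternal;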

More info:

Using SSIS to Load Data Into Azure SQL Data Warehouse


Microsoft Azure Government

I’m sure you are aware of Microsoft Azure, but are you aware there is a special version of Azure for U.S. governments?

Microsoft Azure Government is a cloud computing service for federal, state, local, and tribal U.S. governments. It became generally available in December 2014 after a year in preview. To see the Azure services available for the government, see the services available by region.

By default, Azure Government ensures that all data stays within the U.S., within data centers and networks that are physically isolated from the rest of Microsoft’s cloud computing solution and operated by screened U.S. persons. It is in compliance with FedRAMP, a mandatory government-wide program that prescribes a standardized way to carry out security assessments for cloud services. It also supports a wide range of other compliance standards, including the Health Insurance Portability and Accountability Act (HIPAA), the Department of Defense Enterprise Cloud Service Broker (ECSB), and the FBI Criminal Justice Information Services (CJIS) standard, which is meant to keep safe fingerprint and background-check data that has to be shared with other agencies.

Microsoft also offers government versions of Office 365, which is hosted in a dedicated “cloud community” reserved only for government customers.  There is also a Microsoft Dynamics CRM Online Government.

Also just announced: two new physically isolated regions, which will become available later this year, are part of Azure Government and are meant to host Department of Defense (DoD) data. These regions will meet the Pentagon’s Defense Information Systems Agency (DISA) Impact Level 5 restrictions and are, according to Microsoft, “architected to meet stringent DoD security controls and compliance requirements.”

Level 5 data includes controlled unclassified information.  Classified information (up to ‘secret’) can only be stored on systems that fall under the level 6 classification.  To gain level 5 authorization, cloud providers have to ensure that all workloads run (and all data is stored) on dedicated hardware that is physically separated from non-DoD users.

In addition to its new work with the DoD, Microsoft is also expanding its support for FedRAMP, the standard that governs which cloud services federal agencies are able to use.  The company today announced that Azure Government has been selected to participate in a new pilot that will allow agencies to process high-impact data — that is, data that could have a negative impact on organizational operations, assets or individuals.  Until now, FedRAMP only authorized the use of moderate impact workloads.  Microsoft says it expects all the necessary papers for this higher authorization will be in place by the end of this month.

Azure Government is also on track to receive DISA Level 4 authorization soon.

More info:

Microsoft Cloud for Government


SQL Server on Linux!

Looks outside: pigs are flying!

Microsoft announced yesterday that SQL Server will be made available on Linux. The private preview of SQL Server on Linux is available now, and Microsoft is targeting general availability in mid-2017. Microsoft will offer both on-premises and cloud versions of the product (via Linux VMs). It will include the Stretch Database capabilities that Microsoft is building into SQL Server 2016. Right now, SQL Server on Linux is available on Ubuntu or as a Docker image, and Microsoft intends to support Red Hat Enterprise Linux as well as other platforms over time. The private preview is based on SQL Server 2016.

Considering how anti-Linux Microsoft was a few years ago, this is very surprising, but not so surprising if you have followed the changes over the past two years as Microsoft has come to embrace Linux and other open source technologies and tools (see Microsoft Loves Linux).

To find out more about SQL Server on Linux, you can sign up to get regular updates and provide input to the team, as well as apply to the private preview.

More info:

Microsoft is porting SQL Server to Linux

8 no-bull reasons why SQL Server on Linux is huge for Microsoft


Cross-database queries in Azure SQL Database

A limitation of Azure SQL Database has been its inability to do cross-database SQL queries. This has changed with the introduction of elastic database queries, now in preview. However, it’s not as easy as on-prem SQL Server, where you can just use the three-part name syntax DatabaseName.SchemaName.TableName. Instead, you have to define remote tables (tables outside your current database), which works similarly to PolyBase for those of you familiar with it.

Here is sample code that, from within database AdventureWorksDB, selects data from table Customers in database Northwind:

--Within database AdventureWorksDB, will select data from table Customers in database Northwind

--Create database scoped master key and credentials

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';

--Use a username and password that has access to the remote SQL database

CREATE DATABASE SCOPED CREDENTIAL jscredential WITH IDENTITY = '<username>', SECRET = '<password>';

--Define external data source

CREATE EXTERNAL DATA SOURCE RemoteNorthwindDB WITH 
           (TYPE = RDBMS,
            LOCATION = '<servername>.database.windows.net',
            DATABASE_NAME = 'Northwind',  
            CREDENTIAL = jscredential 
            );

--Show created external data sources

select * from sys.external_data_sources; 

--Create external (remote) table.  The schema provided in your external table definition needs to match the schema of the tables in the remote database where the actual data is stored. 

CREATE EXTERNAL TABLE [NorthwindCustomers]( --what we want to call this table locally
	[CustomerID] [nchar](5) NOT NULL,
	[CompanyName] [nvarchar](40) NOT NULL,
	[ContactName] [nvarchar](30) NULL,
	[ContactTitle] [nvarchar](30) NULL,
	[Address] [nvarchar](60) NULL,
	[City] [nvarchar](15) NULL,
	[Region] [nvarchar](15) NULL,
	[PostalCode] [nvarchar](10) NULL,
	[Country] [nvarchar](15) NULL,
	[Phone] [nvarchar](24) NULL,
	[Fax] [nvarchar](24) NULL
)    
WITH
(
  DATA_SOURCE = RemoteNorthwindDB,
  SCHEMA_NAME = 'dbo', --schema name of remote table
  OBJECT_NAME = 'Customers' --table name of remote table
);

--Show created external tables

select * from sys.external_tables; 

--You can now select data from this external/remote table, including joining it to local tables

select * from NorthwindCustomers;
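
--For example, join the external table to a local table (dbo.Orders is a hypothetical local table, shown for illustration only)
select c.CompanyName, o.OrderID
from NorthwindCustomers c
join dbo.Orders o on o.CustomerID = c.CustomerID;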

--Cleanup

DROP EXTERNAL TABLE NorthwindCustomers;

DROP EXTERNAL DATA SOURCE RemoteNorthwindDB;

DROP DATABASE SCOPED CREDENTIAL jscredential;  

DROP MASTER KEY;  

More info:

Elastic database query for cross-database queries (vertical partitioning)


HP Superdome X for high-end OLTP/DW

No, Superdome X is not the name of the stadium where they played the last Super Bowl. Rather, Superdome X is HP’s top-of-the-line server running Windows Server 2012 R2 and SQL Server 2014. It can handle up to 288 cores and 24TB of memory! It uses the HPE 3PAR StoreServ 7440c storage array, which consists of 224 SSD drives (480GB per drive) for a total of 107TB of disk space.

It set the highest TPC-H metric for 10TB SQL Server 2014 workloads (see HPE Integrity Superdome X achieves two world records on TPC-H benchmark). It is perfect for high-end OLTP and medium-sized mixed workloads. It is a mixed-workload system, meaning it supports running two independent workloads (OLTP and data warehouse) concurrently on the same platform in two physical partitions.

The Superdome X platform is ideally suited for solving the scalability and performance needs of these mixed workload environments, allowing a single hardware platform to be logically partitioned to support multiple environments and workloads with dynamic adjustments to the processor, memory, and storage needs of each environment over time.

Its bladed form factor allows you to start small and grow as your business demands increase. As your databases grow, you need to support new applications, or your application usage increases, you can efficiently scale up your environment by adding blades. You can start as small as a 2-socket configuration and scale up all the way to 16 sockets.

More info:

HPE Reference Architecture for Microsoft SQL Server 2014 mixed workloads on HPE Integrity Superdome X with HPE 3PAR StoreServ 7440c Storage Array

Video When To Run Mission Critical Applications on Superdome X


Azure SQL Database security

Life would be so much easier if we could just trust everyone, but since we can’t, we need solid security for our databases. Azure SQL Database has many security features to help you sleep well at night.

More info:

Securing your SQL Database

Security Center for SQL Server Database Engine and Azure SQL Database

Security and Azure SQL Database technical white paper

Azure SQL Database security guidelines and limitations

Microsoft Azure SQL Database provides unparalleled data security in the cloud with Always Encrypted

 

Posted in Azure SQL Database, SQLServerPedia Syndication | 2 Comments

Azure SQL Database monitoring

Even though an Azure SQL Database stores all data in the Azure cloud, that does not mean your options for managing and monitoring the databases are limited compared to on-prem databases. In fact, the options available are very similar to on-prem, including 3rd-party products that support Azure SQL databases.

Posted in Azure SQL Database, SQLServerPedia Syndication | Leave a comment

Scaling Azure SQL Database

One of the advantages Azure SQL Database has over on-prem SQL Server is the ease with which it can scale. I’ll discuss the various options for horizontal scaling, vertical scaling, and other similar features.

Horizontal scaling refers to adding or removing databases in order to adjust capacity or overall performance. This is also called “scaling out”.  Sharding, in which data is partitioned across a collection of identically structured databases, is a common way to implement horizontal scaling.

Vertical scaling refers to increasing or decreasing the performance level of an individual database—this is also known as “scaling up.”

Elastic Database features enable you to use the virtually unlimited database resources of Azure SQL Database to create solutions for transactional workloads, and especially software-as-a-service (SaaS) applications.

You can change the service tier and performance level of your SQL database with the Azure portal, PowerShell (using the Set-AzureSqlDatabase cmdlet), the Service Management REST API (using the Update Database command), or Transact-SQL (via the ALTER DATABASE statement).  You can use DMVs to monitor the progress of the upgrade operation for a database.  This allows you to easily scale up or down a database, and it will remain online and available during the entire operation with no downtime.  This is vertical scaling.  See Change the service tier and performance level (pricing tier) of a SQL database.
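For example, the Transact-SQL route takes a single statement (the database name below is a placeholder):

--Scale the database to the Standard tier at the S2 performance level
ALTER DATABASE MyDatabase MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');

--Monitor the progress of the scaling operation (run from the master database)
SELECT * FROM sys.dm_operation_status
WHERE resource_type_desc = 'Database'
ORDER BY start_time DESC;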

Another feature, called Stretch Database, lets your on-prem SQL Server database hold just the core data, with old/cold data that continues to grow sidelined transparently in Azure SQL Database. This feature is only available in SQL Server 2016. See Stretch Database.

More info:

Elastic Database features overview

Video Azure SQL Database Elastic Scale

Video Elastic for SQL – shards, pools, stretch

SQL Azure Performance Benchmarking

Azure SQL Database DTU Calculator

Posted in Azure SQL Database, SQLServerPedia Syndication | 3 Comments