

This is the first module of substance in my new Microsoft Azure Administration mini-series, hosted primarily on the Aditi Technologies blog, that parallels my just-released Pluralsight course entitled “Microsoft Azure Administration New Features (March 2014)”.

There are two main components to the Recovery Services functionality. The first is called Backup Recovery Services and is used to perform automated backups of data, primarily from on-premises servers into the Azure Cloud. The advantage from a security and Disaster Recovery standpoint is that the backups are stored off site from your data center in safe Azure Cloud storage. Recall that Azure storage also gives you automatic, free 3x replication within that same data center. The data is encrypted before it is transmitted, just as it is when stored in Azure storage. Backups are incremental, which allows point-in-time recovery while also improving efficiency, minimizing transfer time, and reducing storage costs.

The second main part of Azure Recovery Services is the Hyper-V Recovery Manager (HVRM). This works in conjunction with System Center Virtual Machine Manager to back up virtual machines to the Azure Cloud. So why would you be interested in the Hyper-V Recovery Manager Azure feature? The Hyper-V Recovery Manager is used for Disaster Recovery (DR). High Availability (HA) is about keeping your app running within a single data center, possibly in a degraded mode or with slower response, but without going down. DR, on the other hand, addresses failure of the data center itself, either in pieces or as a whole. So you need a solution that brings up your app in a secondary data center in a short amount of time so your business is not affected significantly. HVRM helps you manage your DR from one DC to another; it is the go-between that manages this process.

HVRM works with Hyper-V Replica in Windows Server 2012. A host running VMs in a DC can replicate all of its VMs into another DC. The challenge is that most customers have many VMs, so you have to orchestrate the order and timing in which VMs come up in the new DC during DR. You could piece together a DR solution process yourself with PowerShell scripts or System Center Orchestrator, but this is fairly complex, so HVRM provides a simple solution to this problem for both the VM and the data it uses. As long as the data is in a VM (in a VHD) you can replicate it transparently.
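To give a feel for the manual building block that HVRM orchestrates for you, here is a minimal PowerShell sketch of enabling Hyper-V Replica for a single VM on a Windows Server 2012 host (the VM and server names are hypothetical):

# Run on the primary Hyper-V host: replicate one VM to a replica server
Enable-VMReplication -VMName "AppVM01" -ReplicaServerName "replicahost.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos
# Send the initial copy of the VM's VHDs over to the replica server
Start-VMInitialReplication -VMName "AppVM01"

Multiply that by dozens of VMs, plus startup ordering and failover scripts, and you can see why a managed orchestration service is attractive.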

HVRM runs in Azure, and you use the same Azure account to manage your on-premises VMs. HVRM gives you a custom orchestrated way to do recovery. Recovery scripts that normally run in your primary DC can now run in Azure. Azure monitors the two DCs and orchestrates the DR if one of them goes down. You install an HVRM agent on the VMM host machine (not on all the VMs in the group), and it pushes metadata to Azure (your data stays in the DC). This metadata, sent to Azure regularly, is what you see on the VM console in Azure.

Before we can fully understand Recovery Services, a few important concepts need to be understood: Azure storage vaults, their associated certificates, and the backup agent.

An Azure “vault” is a logical abstraction over Azure blob storage. When choosing to back up data into the Azure Cloud (blob storage), either through Backup Recovery Services or Hyper-V Recovery Manager, you must create a backup vault in the geographic region where you want to store the data. A vault is created using Windows Azure PowerShell or through the Azure portal using the “Quick Create” mechanism.

You do not set the size of an Azure vault. Since a vault maps to Azure page blob storage, the limit of the entire vault is set at 1 TB, which is the size limit of an Azure page blob. But the limit on actual stored data in an Azure backup vault is capped at 850 GB. That’s because vaults carry metadata associated with the backup, which would consume around 150 GB of storage if the blob were completely full, leaving about 850 GB of actual storage space. You can have more than one server using an Azure storage vault; it’s up to you how you want to architect your storage layer.

Vaults require you to register an X.509 v3 certificate with the servers that use them. You can obtain one of these certificates by getting a valid SSL certificate issued by a Certificate Authority (CA) that is trusted by Microsoft and whose root certificates are distributed via the Microsoft Root Certificate Program. Alternatively, you can create your own self-signed certificate using the MAKECERT tool. Download the latest Windows SDK to get access to this tool, then run it with a command similar to this one:

makecert.exe -r -pe -n CN=recoverycert -ss my -sr localmachine -eku 1.3.6.1.5.5.7.3.2 -len 2048 "recoverycert.cer"

This creates a certificate and installs it into the Local Computer \ Personal certificate store.

To upload the certificate to the Windows Azure Management Portal, you must export the public key as a .CER formatted file. Whether you purchase a certificate or build your own self-signed one, you end up with a certificate in .CER file format, which means it does not contain the private key. The certificate must live in the Personal certificate store of your Local Computer and have a key length of at least 2048 bits.

In the case of Hyper-V Recovery Services, the certificate installed on the server to be backed up must contain the private key (a .PFX file). So if you will be registering a different server than the one you used to make the certificate, you need to export the .PFX file (which contains the private key), copy it to the other server, and import it into that server’s Personal certificate store.
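On Windows Server 2012 you can script the export and import with the built-in PKI cmdlets. Here is a minimal sketch, assuming the certificate was created with the makecert command above (the file paths and password are hypothetical):

# Find the certificate created above in the Local Computer \ Personal store
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -eq "CN=recoverycert" }
# Export the public key as a .CER file for upload to the Azure portal
Export-Certificate -Cert $cert -FilePath "C:\certs\recoverycert.cer"
# Export the certificate with its private key as a .PFX file
$pwd = ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force   # hypothetical password
Export-PfxCertificate -Cert $cert -FilePath "C:\certs\recoverycert.pfx" -Password $pwd
# On the target server, import the .PFX into its Personal store
Import-PfxCertificate -FilePath "C:\certs\recoverycert.pfx" -CertStoreLocation Cert:\LocalMachine\My -Password $pwd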

The high-level certificate management steps differ a bit if you are using the certificate for a Backup Vault or a Hyper-V Recovery Vault.
Backup Vault certificate management process:
1. Create or obtain a .CER certificate
2. Upload the .CER file to Windows Azure portal Recovery Services vault

Hyper-V Recovery Vault certificate management process:
1. Create or obtain a .CER certificate
2. Export it as a .PFX file
3. Upload the .CER file to Windows Azure portal Hyper-V Recovery vault
4. Import the .PFX file onto the VMM servers to be backed up

Azure Backup Recovery Services requires an agent to be installed on any source machine for a file transfer operation to a backup vault. An agent is a piece of software that runs on the source client machine and manages what is uploaded to the Azure cloud for backup. Note that the Hyper-V Recovery Manager does not require an agent since it copies an entire VM’s metadata, whereas Backup services can back up as little as one file.

The backup agent is downloaded by connecting to the Azure portal from the server to be backed up. Go to a specific vault and click Install Agent under Quick Glance. There are two versions from which to choose; the tool you will use to manage the backup determines which one you install.
• Agent for Windows Server 2012 and System Center 2012 SP1 – Data Protection Manager
• Agent for Windows Server 2012 Essentials
After the installation of the agent is complete, you will configure the specific backup policy for that server using one of the following tools. Again, the tool you use determines which version of the backup agent you install on that server (a scripted example follows the list).
• Microsoft Management Snap-In Console
• System Center Data Protection Manager Console
• Windows Server Essentials Dashboard
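Alternatively, the agent installs a PowerShell module you can script against. Here is a minimal sketch of creating and activating a backup policy with the agent's OB* cmdlets (the folder path, schedule, and retention values are hypothetical, and details vary by agent version):

# Import the module installed by the backup agent
Import-Module MSOnlineBackup
# Create an empty backup policy for this server
$policy = New-OBPolicy
# Add a hypothetical data folder to the policy
$fileSpec = New-OBFileSpec -FileSpec "D:\Data"
Add-OBFileSpec -Policy $policy -FileSpec $fileSpec
# Back up every weeknight at 9 PM
$schedule = New-OBSchedule -DaysOfWeek Monday,Tuesday,Wednesday,Thursday,Friday -TimesOfDay 21:00
Set-OBSchedule -Policy $policy -Schedule $schedule
# Keep recovery points for 15 days
$retention = New-OBRetentionPolicy -RetentionDays 15
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention
# Activate the policy (assumes the server is already registered with a vault)
Set-OBPolicy -Policy $policy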

If you go into my course on Pluralsight (http://pluralsight.com/training/courses/TableOfContents?courseName=microsoft-azure-administration-new-features) you will find a lot more information on Recovery Services, plus videos showing how to use these concepts. You will also learn how to manage Azure Backup Services, see how to take and restore backups, and explore Hyper-V Recovery Manager and its use of Hyper-V vaults. In addition, the course includes a wealth of information and demos on the Azure Scheduler, Azure Traffic Manager, Azure Management Services, Azure BizTalk Services, HDInsight, and improvements to Azure Storage and Web Sites. Hope to see you there!

Aditi, my company, has just published a whitepaper I wrote (originally for MSDN) entitled “Maximizing SQL Server On Windows Azure IaaS”. To access the entire paper, go to http://www.aditi.com/Resources/Whitepaper/Maximizing-SQL-Server-On-Windows-Azure-IaaS/index.html and enter a few brief pieces of info about yourself to download the entire paper.
I have included the Introduction section here to give you an idea of what it includes.

Introduction

I have divided the content in this paper into two distinct parts to cater to both the neophyte and the initiated IT professional in the Windows Azure IaaS VM environment. The purpose of the first section is to get past the initial hurdle of understanding the basics of installing and configuring SQL Server in Azure IaaS. It is targeted at the IT person who has very limited experience, if any, with SQL Server on a Windows Azure VM (Azure IaaS). The end result of this section is a single database up and running in the Azure Cloud. Once this baseline of knowledge is established, the second section provides advanced topics in the form of best practices and recommendations to help optimize the performance and increase the availability of SQL Server on Azure IaaS. So if you already have a basic knowledge of SQL Server on IaaS VMs, you can skim section one or skip it entirely and go straight to the advanced topics of the second section, as there is no dependency between the sections.

The first section opens by discussing how to configure and manage Azure disks and images. Azure disks map to supplemental disks in Azure IaaS VMs and are used for the SQL Server data and log files. There are four primary choices for installing and configuring SQL Server in Azure IaaS to ensure you have a basic working configuration. You can use an existing SQL Server image from the Azure VM Gallery. You can create a base Windows Server image and do a manual installation of SQL Server. You can forklift an existing VHD image with SQL Server already installed up into the Cloud. Or you can capture an existing Azure VM image containing SQL Server and use it as a base for additional SQL Server VMs. I will cover some very basic SQL Server administration done remotely over the public Internet, including using the Azure portal and the third-party AzureWatch tool for monitoring your installation.
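As a taste of the first option, the gallery route can be scripted with the Windows Azure PowerShell cmdlets. This is a minimal sketch with hypothetical service, VM, and credential values; gallery image names change over time, so query Get-AzureVMImage for a current one:

# List the SQL Server images currently in the VM gallery
Get-AzureVMImage | Where-Object { $_.Label -like "*SQL Server*" } | Select-Object ImageName, Label
# Create a VM from a chosen gallery image (names and password are hypothetical)
New-AzureQuickVM -Windows -ServiceName "mysqlcloudsvc" -Name "SQLVM01" -ImageName "<gallery image name from above>" -AdminUsername "sqladmin" -Password "P@ssw0rd!" -InstanceSize "Large" -Location "West US"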

Within the second section we examine some key advanced topics for optimizing SQL Server 2012 performance and availability on Azure IaaS. Topics include properly managing your VMs and disks. We see how to optimize SQL Server disk performance within Azure storage, including optimizing IOPS (Input/Output Operations per Second). We look at how to optimize the use of Azure storage and the provisioning of disks to maximize the performance of SQL Server on Windows Azure IaaS VMs. We discuss Azure geo-replication of data and how it relates to multi-disk layouts and storage IOPS. We look into host caching for data and OS disks, how to format disks optimally for Azure, managing the lifetime of disks, and how to handle disks larger than 1 TB. We will look at how to utilize some Azure IaaS concepts such as affinity groups, virtual networks and their associated subnets, and best practices around VM connectivity to maximize the performance and minimize the latency of SQL Server on Azure IaaS.
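To make the disk provisioning concrete, here is a minimal sketch using the classic Azure cmdlets to attach a new data disk with host caching disabled, a common starting point for SQL Server data and log files (the service and VM names are hypothetical, and the right caching setting depends on your workload):

# Attach a new, empty 1023 GB data disk to an existing VM with host caching off
Get-AzureVM -ServiceName "mysqlcloudsvc" -Name "SQLVM01" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "SqlData" -LUN 0 -HostCaching None |
    Update-AzureVM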

Also within the second section we will look at how to utilize Azure IaaS best practices to increase the availability, performance, and locality of your SQL Server VMs. Specifically, I am referring to ways to better manage your SQL Server VMs and to synergistically provide optimal configuration management for more than one VM hosting SQL Server. For example, availability sets provide specialized VM failover protection in case SQL Server goes down. You can utilize some Azure IaaS concepts such as affinity groups, virtual networks and their associated subnets, and best practices around VM connectivity to maximize the performance of SQL Server on Windows Azure IaaS VMs. We will expand upon the simple Internet SQL Server administration model of the first section to show how best to secure remote administration by leveraging the benefits of a virtual network. More than one SQL Server can share and accept incoming requests via balancing of the client load using shared endpoints. Using a combination of availability sets, load-balanced endpoints, and affinity groups helps ensure that your application is always available and running as fast and efficiently as possible. We will look at ways to replicate data across more than one SQL Server VM. Specifically, we will focus on SQL Server 2012 AlwaysOn Availability Groups and Listeners as the primary data replication choice. I will present some design recommendations that are key to correctly implementing the configuration of SQL Server AlwaysOn Availability Groups. As a part of AlwaysOn, we discuss how to properly configure SQL Server VMs using Azure Availability Sets, Windows Server Failover Clustering (WSFC), SQL Server Availability Groups, and Availability Group Listeners to manage failover between VMs transparently.
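To illustrate the availability-set and shared-endpoint ideas, here is a minimal sketch with the classic Azure cmdlets, assuming two existing SQL Server VMs in the same cloud service (all names and ports are hypothetical; you would repeat both steps for each VM):

# Put a VM into an availability set so Azure spreads the SQL Server VMs
# across separate fault and update domains
Get-AzureVM -ServiceName "mysqlcloudsvc" -Name "SQLVM01" |
    Set-AzureAvailabilitySet -AvailabilitySetName "sqlAvailSet" |
    Update-AzureVM
# Add a load-balanced endpoint so both VMs accept SQL Server traffic on port 1433
Get-AzureVM -ServiceName "mysqlcloudsvc" -Name "SQLVM01" |
    Add-AzureEndpoint -Name "sqlEndpoint" -Protocol tcp -LocalPort 1433 -PublicPort 1433 -LBSetName "sqlLBSet" -ProbePort 59999 -ProbeProtocol tcp |
    Update-AzureVM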

I would like to note that this paper will not painstakingly walk you through every step of setting up SQL Server AlwaysOn Availability Groups or their associated Availability Group Listeners. This is documented fully on MSDN in most cases. Rather, I will tell you how to do this in Azure and show you the end result of the steps needed to set these entities up. At the end I will provide a concise summary of the high-level steps to configure SQL Server AlwaysOn Availability Groups and Availability Group Listeners within the Azure IaaS environment.

Throughout this document I will endeavor to add value via best practices and by sharing experiences I have been through. That means I will provide links to existing content whenever possible to save space and not clutter the paper. This gives you information on the topic we are discussing as well as additional contextual information to enhance your understanding. My goal is to keep this document as concise as possible, giving you the main points and key recommendations rather than documenting processes or content that has already been developed.