Well it’s official!  Even though you probably read the article “Kardashians to Kick Bruce and their Affinity Groups to the Curb” in People magazine, you were not really sure what to believe until you saw the official word from Microsoft.  Check out “About Regional VNets and Affinity Groups” at https://msdn.microsoft.com/library/azure/jj156085.aspx.  Improvements to the Azure virtual network infrastructure have made communication between different parts of an Azure data center very fast. You no longer need to explicitly group resources in close proximity to each other to minimize latency between items like VNETs, VMs, and storage.

Take a look at my new book on the topic of Azure Automation (part of the Azure Essentials series from Microsoft Press): http://blogs.msdn.com/b/microsoft_press/archive/2015/03/06/free-ebook-microsoft-azure-essentials-azure-automation.aspx. This is the PDF format, released ahead of the EPUB and Mobi formats, which will be released later in the month.

Microsoft Azure Essentials: Azure Automation (ISBN 9780735698154), by Michael McKeown. This is the second ebook in Microsoft Press’s free Microsoft Azure Essentials series. Future ebooks will cover specific Azure topics, such as Azure Machine Learning, Azure Websites for Developers, and others.

ScottGu posted new info today on general availability (GA) and new features for the following services. For more info on these, go to https://weblogs.asp.net/scottgu/azure-machine-learning-service-hadoop-storm-cluster-scaling-linux-support-site-recovery-and-more.

  • Machine Learning: General Availability of the Azure Machine Learning Service
  • Hadoop: General Availability of Apache Storm Support, Hadoop 2.6 support, Cluster Scaling, Node Size Selection and preview of next Linux OS support
  • Site Recovery: General Availability of DR capabilities with SAN arrays
  • SQL Database: General Availability of SQL Database (V12)
  • Web Sites: Support for Slot Settings
  • API Management: New Premium Tier
  • DocumentDB: New Asia and US Regions, SQL Parameterization and Increased Account Limits
  • Search: Portal Enhancements, Suggestions & Scoring, New Regions
  • Media: General Availability of Content Protection Service for Azure Media Services
  • Management: General Availability of the Azure Resource Manager

My good buddy and ex-co-worker Michael Collier has co-authored a very well-done book, Microsoft Azure Essentials: Fundamentals of Azure. Michael is a great communicator, and his passion for community and Azure comes out in his content.  You can download it free at http://blogs.msdn.com/b/microsoft_press/archive/2015/02/03/free-ebook-microsoft-azure-essentials-fundamentals-of-azure.aspx. It’s like unexpectedly finding a $100 bill in your wallet.  A great read not only for beginners but for anyone who wants a deeper understanding of the constantly changing Azure platform. Great job, Michael (as always)!

Eventual Consistency Patterns
In the next few blog entries we will examine three popular patterns around the eventual consistency of data – the Compensating Transaction pattern, the Database Sharding pattern, and the Map-Reduce pattern. Each requires an agile partitioning of data into commonly formatted segments to yield benefits in performance, access, scalability, concurrency, and size management. The tradeoff of eventual consistency is that all segments of the data may not show the same values to all consumers at any given time, as they would in a strongly consistent data architecture. However, the strongly consistent data model does not always map well to a Cloud data architecture, where data can be distributed across many geographic locations and multiple types of storage. The transactional requirement that strongly consistent data present the same view of all values at any given time comes with a price: overhead due to locking. When locks need to span large distances to maintain transactional boundaries, the resulting latency and blocking may not be acceptable for your data architecture in the Cloud. Thus eventual consistency is often the better choice in the Cloud.

Eventual vs. Strong Data Consistency
Before we look at patterns, let’s get a good understanding of eventual consistency. To those coming from a traditional ACID enterprise data world, the idea that data does not always have to carry the “C” (consistency) part of ACID may be a totally new concept that is hard to fathom. Let’s start with strong consistency, since that is the paradigm you are likely most familiar with.
Strong data consistency (SC) means that the same set of data values is seen by all instances of the application at any time. This is enforced by transactional locks that prevent multiple instances of an app from concurrently changing those values – only the lock holder can change the value.

All updates to strongly consistent data are done in an atomic, transactional manner. So in a Cloud, where data is often replicated across many locations, strong consistency can be complex and slow, since an atomic update is not complete until all of the replicated data is done updating. Given the networked, distributed nature of Cloud resources and the large number of failures that result in a Cloud environment, the strong consistency model is not very practical.
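
To make the locking concrete, here is a minimal sketch in Python (purely illustrative, using an in-memory value and a thread lock) of how a strongly consistent update is serialized – only the lock holder can change the value, so no update is ever lost:

```python
import threading

# Illustrative in-memory "data store"; the lock enforces strong consistency.
balance = 100
balance_lock = threading.Lock()

def strongly_consistent_update(amount):
    global balance
    # Only the lock holder may read-modify-write the value. Any other
    # instance attempting an update blocks here until the lock is released.
    with balance_lock:
        current = balance           # read the committed value
        balance = current + amount  # write; no one else can interleave

threads = [threading.Thread(target=strongly_consistent_update, args=(10,))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # always 150 – no update was lost
```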

It’s best to implement strong consistency only in apps whose data is not replicated or split over many data stores or many different data centers, such as a traditional enterprise database. If data is stored across many data stores, use eventual consistency.
Just a note: Azure storage does not enforce transactional strong consistency for transactions that span multiple blobs and tables.

Eventual data consistency (EC) is used to improve performance and avoid contention in data update operations. It is not a simple and straightforward model to use. In fact, if it is possible to architect an application to use the native transactional features for update operations – then do that! Only use eventual consistency (and the compensating operations it requires) when necessary to satisfy needs that a strongly consistent data story cannot.

A typical business process consists of a series of autonomous operations. These steps can be performed in sequence or partially in parallel. While they are being completed, the overall data may be in an inconsistent state; only when all operations are complete is the system consistent again. EC operations do not lock the data when it is modified.
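
Here is a minimal sketch of that inconsistent window (assuming two hypothetical regional replicas of the same value): each step commits on its own, with no lock spanning them, so a reader can catch the replicas disagreeing until the final step completes:

```python
# Two hypothetical replicas of the same value, e.g. in different regions.
replica_us = {"price": 10}
replica_eu = {"price": 10}

def update_price(new_price):
    # Each step commits independently; no lock spans the whole operation.
    replica_us["price"] = new_price
    # A reader arriving here sees replica_us at 12 but replica_eu at 10:
    # the system is temporarily inconsistent.
    replica_eu["price"] = new_price
    # Only now are all steps complete and the system consistent again.

update_price(12)
print(replica_us["price"], replica_eu["price"])  # eventually both 12
```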

EC typically makes better sense in a Cloud, since data there is commonly replicated across many data stores and sites in different geographies. These data stores do not have to be databases, either. For instance, data could be stored in the state of a service. Data may be replicated across different servers to help with load balancing, or it could be duplicated and co-located close to the services and users that consume it. Locking/blocking works effectively only when all data is stored in the same data store, which is commonly not the case in a Cloud environment.

When you design your solutions around the notion of eventual consistency, you trade strong consistency for attributes such as increased availability (since there is no locking/blocking to prevent concurrent access). With EC, your data might not be consistent all of the time. But in the end it will be – eventually. Data that does not use transactional semantics becomes eventually consistent only once all replicas have been updated through synchronization.

Compensating Transaction Pattern
So now that you understand a bit about the difference between the two consistency models, you probably want to know how compensation makes the data eventually consistent. The Compensating Transaction pattern is best used where a failure in the work done by an eventually consistent operation needs to be undone by a series of compensating steps. In ACID “all-or-nothing” strongly consistent transactional operations, a rollback is done via all resource managers (RMs) involved in the transaction. In a typical two-phase commit, each of the RMs updates its part of the transaction in the first phase (locking the data in the process). When all of the RMs are complete, they each vote to the main transaction coordinator. If all vote “Commit”, the RMs then commit their individual resources. If one or more votes “Abort”, all the RMs roll back their piece of the transaction to its original state.
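
For contrast, here is a minimal sketch of that two-phase commit voting (a hypothetical ResourceManager class, not a real RM API):

```python
class ResourceManager:
    """Hypothetical resource manager participating in two-phase commit."""
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def prepare(self):
        # Phase 1: do the work, lock the data, then vote.
        return "Commit" if self.healthy else "Abort"

    def commit(self):
        print(f"{self.name}: committed")

    def rollback(self):
        print(f"{self.name}: rolled back to original state")

def two_phase_commit(rms):
    votes = [rm.prepare() for rm in rms]    # phase 1: collect the votes
    if all(v == "Commit" for v in votes):   # unanimous -> phase 2 commit
        for rm in rms:
            rm.commit()
    else:                                   # any "Abort" -> roll back all
        for rm in rms:
            rm.rollback()

two_phase_commit([ResourceManager("db"), ResourceManager("queue", healthy=False)])
```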

But with eventually consistent operations, there is no supervising transaction manager. Eventually consistent operations are a non-transactional sequence of steps, with each step committing fully once it completes. If a failure occurs, all partial changes up to the point of failure should be rolled back. From a high level this rollback looks the same as in a transactional, strongly consistent environment. But at a lower level the process is very individualized and is not coordinated among the different steps automatically via resource managers.
Compensating is not always as simple as overwriting a changed value with its original value, because the value may have been changed a few more times before the rollback needs to occur. Replacing the current value with the original value may not be the right thing to do in that case.
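
Here is a minimal sketch of the pattern (hypothetical business steps; note that each compensation is recorded as an operation to perform, not as a saved original value to restore, for exactly the reason above):

```python
def run_with_compensation(steps):
    """steps is a list of (do, undo) pairs; each 'do' commits on its own."""
    completed = []
    for do, undo in steps:
        try:
            do()
            completed.append(undo)
        except Exception:
            # A step failed: apply the compensating operations for every
            # step that already committed, most recent first.
            for compensate in reversed(completed):
                compensate()
            raise

# Hypothetical business steps: each commits immediately, and each undo is
# an operation (e.g. "release the seat"), not a snapshot overwrite.
try:
    run_with_compensation([
        (lambda: print("reserve seat"), lambda: print("release seat")),
        (lambda: print("charge card"),  lambda: print("refund card")),
        (lambda: 1 / 0,                 lambda: None),  # this step fails
    ])
except ZeroDivisionError:
    pass  # compensations already ran: "refund card", then "release seat"
```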

One of the best solutions for a compensating transaction is to run it all in a workflow. And in some cases the only way to recover from a failed, non-transactional, eventually consistent operation is via manual intervention. Compensating operations are often specific and customized to the user and the environment. In fact, when a failure occurs and the compensation process starts, the compensating steps do not necessarily need to run in the same sequence as the original operations.

The steps in a compensating transaction should also be idempotent, since they can fail and have to be rerun over and over. Part of the reason is that it’s not always easy to tell when an eventually consistent operation has failed. Unlike in an ACID operation, where the individual resource managers check in with the transaction coordinator and notify it when one of them fails (to begin the transactional rollback), no such notification occurs in an eventually consistent operation.
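
A common way to get that idempotency (a sketch, assuming a hypothetical durable ledger of compensations already applied) is to record each compensating step’s ID and skip it on a rerun:

```python
applied = set()  # hypothetical durable record of compensations already run

def compensate_once(step_id, undo):
    # Safe to call repeatedly: the refund (for example) is issued only once,
    # no matter how many times the recovery process is rerun.
    if step_id in applied:
        return
    undo()
    applied.add(step_id)

compensate_once("refund-order-42", lambda: print("refund issued"))
compensate_once("refund-order-42", lambda: print("refund issued"))  # no-op
```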

In subsequent blogs on Eventual Consistency patterns, we will address the Database Sharding and the Map-Reduce patterns.

Scalability in the cloud entails the managed process of increasing capacity when the load increases. This can occur either by increasing the number of nodes, or by preserving the same number of nodes while increasing their physical capacity.

The Vertical Scaling Pattern requires downtime to reconfigure and “scale up” the hardware by increasing the capacity (CPU, memory, disk) of a node. Scaling up also has limits, since a node can only be scaled up so far before it hits its resource capability limit. Vertical scaling is the least common of the scaling options.

Increasing the number of Virtual Machine (VM) nodes when load increases, and reducing the number of VMs when the stress on a system tails off below a certain level, is known as the Horizontal Scaling Compute Pattern. “Scaling out” requires minimal or no downtime to reconfigure the number of nodes. It provides capacity exceeding that of a single node by adding nodes of the same size and configuration (homogeneous) as load grows. It’s used to handle capacity requirements that vary seasonally or with unpredictable load spikes. Care must be exercised not to use sticky sessions or session state unless that state is stored in a central location not bound to a particular VM instance.

The Horizontal Scaling Pattern can be implemented manually or automatically. It’s often preferred to minimize the human intervention required by automating the horizontal scaling process via custom or standard auto-scaling rules. A standard rule can typically be written against CPU utilization, memory usage, average queue length, or average response times to continuously monitor a fluctuating resource. For example, you can set a rule such that if a customer query on a product takes more than 20 seconds to come back, the number of VM instances hosting the database cluster is increased by one.
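
As a sketch of that rule (with hypothetical stand-ins for the monitoring and scaling calls – not a real Azure API), the evaluation might look like this:

```python
import random

RESPONSE_TIME_LIMIT = 20  # seconds, per the example rule above

def average_query_time():
    # Hypothetical stand-in for reading the metric from your monitoring store.
    return random.uniform(5, 30)

def add_vm_instances(count):
    # Hypothetical stand-in for your actual scale-out mechanism.
    print(f"scaling out by {count} instance(s)")

def evaluate_autoscale_rule():
    # Standard rule: if a product query averages over 20 seconds,
    # add one VM instance to the database cluster.
    if average_query_time() > RESPONSE_TIME_LIMIT:
        add_vm_instances(1)

evaluate_autoscale_rule()  # in practice, run on a timer, e.g. every minute
```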

Scaling out can involve more than just compute nodes; other resources, such as storage, queues, and other components, can grow and shrink dynamically as well. For instance, queues can be added or removed as needed, and databases can be sharded or consolidated as demand grows or scales back.

When scaling out VM nodes you may want to pay attention to the “N+1” rule: deploy N+1 nodes when scaling out even though only N nodes are needed at the time. Doing this provides additional ready-to-go computing resources if a sudden spike occurs. It also provides an extra instance in case of hardware failure.
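
In code the rule is trivial (a sketch):

```python
def instances_to_deploy(n_needed):
    # "N+1" rule: keep one ready-to-go spare for sudden load spikes
    # and hardware failure.
    return n_needed + 1
```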

To start off 2015 I am going to step up a level conceptually from the low-level Azure technologies and talk Cloud architecture patterns for the short term on the blog. As a Principal Cloud Solutions Architect, I look at a customer’s business requirements, use cases, and desire to move to the cloud, and from that I try to fit their new architecture into well-known, tried-and-true architectural patterns. I will present a basic knowledge of some of my favorite patterns to give you a good conversational understanding of those that are useful for cloud applications. It will not be an in-depth, exhaustive study of each pattern. Rather, we will talk about the problem each pattern solves and how it works.

Stay tuned for the first pattern very soon!

New! I just released my fourth full-length Pluralsight Azure course, this one on Azure Automation. This new technology allows you to create, test, execute, and schedule PowerShell Workflow runbooks in the Microsoft Azure portal. The course covers the process of runbook management and deployment, script management, and global assets that help centralize common, reusable automation resources. You can find the course at http://pluralsight.com/training/Courses/TableOfContents/microsoft-azure-automation.

On Saturday, September 6, Microsoft Azure MVP Michael McKeown, Principal Cloud Solutions Architect for Aditi Technologies (www.michaelmckeown.com), hosted the first Azure Partner Boot Camp in Charlotte, NC. Over 75 attendees spent a day meeting each other, talking Cloud, eating pizza, and listening to great Azure sessions. There were three main tracks. Understanding Azure was aimed at decision makers and focused on the business reasons why Azure and the Cloud make sense, along with business advantages and common Cloud architectural patterns. Developing Azure focused on building Azure applications, looking at technologies like Azure Media and Mobile Services and Azure Cache, with specific examples of end-to-end Azure applications. The Administering Azure track for IT pros rounded out the remainder of the sessions, covering common topics like Azure Virtual Networks, common IaaS issues, the Azure Resource Manager, Azure Storage, and the recently introduced Azure Automation.

Jeff Nuckolls, VP of Cloud Services at Aditi, kicked off the conference with the keynote speech. Jeff spoke about the Internet of Things and even demoed some sensor hardware live on stage. Then speakers from Microsoft corporate and the Southeast, Aditi Technologies, and RDA Corporation delivered the sessions for the three tracks. RDA provided five speakers at the event and was a main logistical contributor to the success of the conference. In the end, an Xbox was given away (the original winner was not present, so we had to redraw!), along with two $250 Newegg gift certificates and multiple Xbox games and video hardware. The plan is to do this again next year, but a bit later in the year so as not to have to fight the sunny 85-degree Saturday weather!

Here is the link (thanks to sponsor RDA!) to the finalized three tracks, times, sessions, and speakers for the MVP Charlotte Azure Boot Camp at Microsoft in Charlotte, NC on Sept. 6th. Quite an impressive lineup of topics and speakers. This will be an incredible one-day Cloud event that you don’t want to miss!

*** Register now for Charlotte Azure Partner Boot Camp Sept 6 at Microsoft here!

Double-click the schedule image below if you don’t want to go to the link above; it expands to show the speakers and sessions.

[Sessions schedule image]
