Archive for December, 2012

This is the last in a three-part series on multi-tenancy within Windows Azure applications. Here is a breakdown of the topics for these posts.

Post 1 – Tenants and Instances

Post 2 – Combinations of Instances and Tenancy

Post 3 – Tenant Strategy and Business Application

Multi-Tenant Strategies

Azure’s approach to scalability is to scale out as the load on an application dictates. But exactly which parts do you want to scale out? You have different options based upon application-specific requirements, and you don’t have to use the same instance scaling approach for each layer of your application.

For instance, you can have multiple Web roles writing messages into a Windows Azure Service Bus queue that is serviced by a single 1st-level Worker role. That Worker role processes the requests and sends them to multiple 2nd-level Worker roles, which in turn process the requests and write them to a single SQL Azure database. Or the 2nd-level Worker roles can write to multiple SQL Azure databases. Or there could be multiple 1st-level roles that read the queue and call a single 2nd-level Worker role, which then writes to multiple SQL Azure databases. The combinations are many, but you have to make sure they correctly support your application requirements. Here are just a few of the many tenancy combinations possible in your Azure application.

  • A multi-tenant UI (Web role) which calls out to a multi-tenant service (Worker role) which links to a single-tenant data storage layer.
  • A single-tenant UI (Web role) calling into a multi-tenant service (Worker role) calling into a single-tenant data layer.
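The first queued topology above can be sketched in miniature with plain Python queues. The queues, role functions, and tenant names here are illustrative stand-ins for Service Bus queues and Azure roles, not real SDK calls:

```python
import queue

# In-memory stand-ins for the Service Bus queues between tiers (illustrative only).
front_to_tier1 = queue.Queue()
tier1_to_tier2 = queue.Queue()

def web_role(tenant, request):
    """A Web role instance writes a tenant's request into the shared queue."""
    front_to_tier1.put((tenant, request))

def first_level_worker():
    """The single 1st-level Worker role drains the queue and fans work out."""
    while not front_to_tier1.empty():
        tenant, request = front_to_tier1.get()
        tier1_to_tier2.put((tenant, request.upper()))  # pretend processing

def second_level_worker(db):
    """A 2nd-level Worker role persists results to the data layer (a list here)."""
    while not tier1_to_tier2.empty():
        db.append(tier1_to_tier2.get())

db = []  # stands in for the single SQL Azure database
web_role("Company1", "order")
web_role("Company2", "invoice")
first_level_worker()
second_level_worker(db)
print(db)  # [('Company1', 'ORDER'), ('Company2', 'INVOICE')]
```

The same shape extends to the other variations: add more first-level workers reading one queue, or more databases behind the second tier.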

There are many reasons to group the application nodes in different combinations. You have the flexibility within your architecture to choose which services can be shared and how much they are shared. In an SIMT or MIMT application you can logically group customers in your application domain into dedicated instances that support common usage patterns.

Suppose you sell a multi-purpose application that is used by businesses in different industries. Based upon usage patterns you could group customers by vertical market, since they tend to use similar transient state and work with similar schemas for persistent data. For instance, you could put all the medical companies into one SIMT app and all the restaurants that use your app into another SIMT app. Or, based upon different SLA requirements, you can group customers with different availability SLA levels into different SIMT applications.
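This kind of grouping is easy to sketch. The customer records, verticals, and SLA tiers below are hypothetical examples, not data from any real deployment:

```python
from collections import defaultdict

# Hypothetical customer records: (name, vertical, sla_tier)
customers = [
    ("Mercy Clinic", "medical", "gold"),
    ("Joe's Diner", "restaurant", "silver"),
    ("City Hospital", "medical", "gold"),
    ("Taco Stand", "restaurant", "bronze"),
]

def group_tenants(customers, key_index):
    """Group tenants into dedicated SIMT deployments by a shared trait."""
    groups = defaultdict(list)
    for record in customers:
        groups[record[key_index]].append(record[0])
    return dict(groups)

by_vertical = group_tenants(customers, 1)  # one SIMT app per industry
by_sla = group_tenants(customers, 2)       # or one per SLA level
print(by_vertical)
```

Each resulting group maps naturally onto one dedicated SIMT deployment serving tenants with a common usage pattern.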

Based upon customer security requirements there are different ways to separate or share data. Multiple tenants can use different databases or schemas for fully isolated storage. Moving up one level of sharing, tenants can use the same database but have their data stored in different rows of the same table. Or isolation may not be needed at all and data is shared at the field level. Tenants can have different schemas, custom columns, or table-level permissions based upon requirements. However the data is configured, the multi-tenant application must protect each customer’s data from being visible to or accessed by other customers, as per application requirements.
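The shared-table, row-level variant can be illustrated with an in-memory SQLite database standing in for SQL Azure. The table, column names, and tenant ids are made up for the sketch; the point is that every query is scoped by tenant at the application layer:

```python
import sqlite3

# A minimal sketch of shared-table, row-level tenant isolation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("tenant_a", "widget"), ("tenant_b", "gadget"),
                  ("tenant_a", "gizmo")])

def orders_for(tenant_id):
    """Application-layer guard: always filter queries by the caller's tenant
    so one tenant can never see another tenant's rows."""
    rows = conn.execute(
        "SELECT item FROM orders WHERE tenant_id = ?", (tenant_id,))
    return [item for (item,) in rows]

print(orders_for("tenant_a"))  # ['widget', 'gizmo']
print(orders_for("tenant_b"))  # ['gadget']
```

If the guard were ever bypassed, tenants sharing the table would see each other’s data, which is exactly the failure the application must prevent.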

State can be isolated or shared as the application dictates, just like the database. All tenants in a multi-tenant application can share state, but most likely each will have its own. State can be stored for all customers in one instance or spread across multiple customers in multiple instances.

Again it all depends upon the application logic and data/business requirements.

In some situations, within the services layer or the Web UI layer, you may require more than one Worker role or Web role respectively. The work can be divided into different tasks and assigned to these roles in many ways.

Case 1 – You can assign each role a specific task that the other roles do not have. This means you will need at least one instance for each type of role. For instance, you can have a services-layer Worker role A that does number crunching and another Worker role B that caches data. This typically requires more Worker role instances to support the application’s scalability requirements than if both the number crunching and caching were done in a combined instance. If you had only one Worker role service that did all the number crunching, and all the dependent Web roles used that service regularly so it was in high contention, you might need to scale out with multiple number-crunching instances. The good news is that code supporting only one function tends to be simpler than code for a role that handles multiple tasks.

Case 2 – Alternatively, you can have multiple roles that all do the same group of tasks. For instance, Worker role A does number crunching and caching, and Worker role B does the same. This is typically more efficient since you don’t need to host as many nodes. However, the code and logic are more complex because similar operations must be managed across multiple instances. Your code will need to differentiate when it is permissible to carry out the same task in multiple roles at once. There may be times when a task can only be done in one of the roles at any given time, and you will need to implement a synchronization mechanism. If you are using Azure storage you can acquire a lease on a storage entity (e.g. a blob) before doing any work. Once that lease is acquired, no other instance can access that resource.
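The lease pattern can be sketched with a toy in-memory lease manager. This is only an illustration of the exclusion semantics, not the Azure blob lease API (the real API grants a time-limited lease on a blob via the storage service); the resource and worker names are hypothetical:

```python
import threading

class LeaseManager:
    """Toy stand-in for blob leases: at most one owner per resource."""
    def __init__(self):
        self._held = {}
        self._lock = threading.Lock()

    def try_acquire(self, resource, owner):
        with self._lock:
            if resource in self._held:
                return False          # another instance already holds the lease
            self._held[resource] = owner
            return True

    def release(self, resource, owner):
        with self._lock:
            if self._held.get(resource) == owner:
                del self._held[resource]

leases = LeaseManager()
got_a = leases.try_acquire("daily-rollup-blob", "worker-A")  # lease granted
got_b = leases.try_acquire("daily-rollup-blob", "worker-B")  # denied while A holds it
leases.release("daily-rollup-blob", "worker-A")
got_b_retry = leases.try_acquire("daily-rollup-blob", "worker-B")  # succeeds now
print(got_a, got_b, got_b_retry)  # True False True
```

Each worker instance tries to acquire the lease before running the singleton task and simply skips the work if another instance already holds it.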

Business Models and Tenancy

Let’s look at perhaps the most important driving factor of all behind tenancy – how will you charge customers of your application so that you make money? Will you charge customers monthly based upon the actual resources they use, a percentage of resource usage for an instance shared with other tenants, or a flat fee? Can customers of your application run concurrently in a shared instance, or do they require their own dedicated instance? Can they share application code but not the same database? Or can they share the same database tables and co-reside in adjacent rows? And so on.

There are many ways your architecture can evolve out of the answers to those business questions. At a high level you can use these answers as a basis to decide which tenancy model is correct for you.  At a more granular level you can decide upon different topologies from these architectures that further support your business model. Here are a few business points to consider when making tenancy decisions.

  • Are you selling to a large or a small customer base? Or both?
  • Are there strict regulatory data storage requirements, or can data be stored anywhere and viewed by anyone?
  • Do customers have different performance, availability, and scalability requirements?
  • Will you offer a variety of customer subscription pricing options?

Depending upon the answers to these questions, here are a few typical billing options you can use for your application.

Fixed Fee

You could agree up front to charge customers a fixed fee each month regardless of the variable costs they incur. This is like the standard cable TV model: whether you watch ESPN for 30 minutes a month or 10 hours a day, your bill is the same each month.

Actual Usage

For single-tenant applications it’s simpler to measure costs per customer since all the costs incurred belong to that specific customer. You could create a separate Azure account for each customer to simplify this process.  This is like your electric bill where you pay for what you use each month.

Variable Shared Costs

Sharing costs implies multi-tenancy. You can do it at a granular level where you try to charge each customer specifically for what they use. Thus in an app with 200 tenants you would have all sorts of different monthly bills, summing at least to the total costs. Each fall my neighbors and I used to pay approximately $100 each to rent individual pull-behind aerators to attach to our lawn mowers to aerate and seed our lawns.

We finally got smart and, instead of each paying $100, we all went in on one aerator and shared it across the weekend. If three of us went in together it cost us $33 apiece. If only two of us shared, the cost was $50 apiece. Each year we paid a variable fee based upon how many of us shared the cost – a shared cost but at a variable amount.
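The aerator arithmetic is the whole model in miniature: the per-tenant charge is just the shared cost divided by however many tenants join that billing period. A minimal sketch, with hypothetical tenant names:

```python
def variable_share(total_cost, tenants):
    """Split a shared resource cost evenly among however many tenants join.
    The per-tenant amount varies with the number of participants."""
    return round(total_cost / len(tenants), 2)

# Three neighbors sharing the $100 aerator rental, versus only two:
print(variable_share(100, ["me", "bob", "sue"]))  # 33.33
print(variable_share(100, ["me", "bob"]))         # 50.0
```

The same function applies to an Azure instance shared by a fluctuating group of tenants: the monthly divisor changes, so each tenant’s bill changes with it.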

Fixed Shared Costs

Customers could share the costs using a simpler model: take the total Azure costs of the instance serving that group of tenants and divide them evenly. To expand upon my shared-aerator example, suppose three of us purchased a deluxe whiz-bang self-propelled aerator for $2,400 that not only aerated but seeded at the same time. We went on a two-year interest-free payment plan and agreed to share the monthly payment of $100 per month for two years – a shared cost but at a fixed amount.

Whatever model you choose, you will probably want to build in some profit percentage on top of the actual Azure costs. So it is absolutely critical that you do your homework and establish solid estimates of expected usage costs and profit points before you decide upon your billing strategy. This is especially true if you are using a fixed-fee approach, or you will end up eating the overage costs yourself. Note that regardless of how much actual CPU time a customer uses, the compute cost is a fixed cost per month. Other charges like storage, SQL Azure, and bandwidth are variable costs that depend upon how much the customer uses the Azure infrastructure.
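Putting those pieces together, a tenant's bill is a fixed compute charge plus usage-driven variable charges, with a profit margin applied on top. The figures, category names, and 20% margin below are hypothetical, not real Azure prices:

```python
def monthly_bill(instance_cost, variable_costs, margin=0.20):
    """Compute a customer's monthly charge.

    instance_cost:  fixed compute charge for the month (billed per instance,
                    regardless of how busy the CPU actually was).
    variable_costs: dict of usage-driven charges (storage, SQL Azure, bandwidth).
    margin:         profit percentage added on top of actual Azure costs.
    """
    actual = instance_cost + sum(variable_costs.values())
    return round(actual * (1 + margin), 2)

bill = monthly_bill(90.0, {"storage": 12.50, "sql_azure": 49.95, "bandwidth": 7.55})
print(bill)  # 192.0  (actual costs of 160.0 plus a 20% margin)
```

Running this kind of estimate against expected usage before committing to a fixed fee is exactly the homework the paragraph above describes.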

Regardless of the billing model, try to maximize the number of tenants per instance without degrading performance. You should weigh the price of required resources against each customer’s need for isolation. The more tenants share resources, the lower the cost for each tenant, which makes your application attractive to a larger group of customers. You could even have two different deployments of your application. The more expensive deployment could be dedicated to each high-paying customer who needs isolation and higher performance. The other deployment could be shared among customers who don’t need isolation or top performance but want lower pricing.


Provisioning customers for a single-tenant (multi-instance) application is typically a bit more involved than for a multi-tenant application. With a multi-tenant application, adding a customer is probably nothing more than a configuration update. But for a multi-instance application a new Azure instance will need to be configured as each new tenant is added.

A part of provisioning has to do with customizing the UI of the application. For a single-tenant application each customer has their own instance running a customized version of the UI. You can map a custom DNS name (using DNS CNAMEs) to each customer’s instance of the application, so Company1 and Company2 each get their own URL pointing at their own instance. This approach works fine for the HTTP protocol. But for the secure HTTPS protocol a problem occurs: only a single SSL certificate can be associated with the standard HTTPS port 443. To remedy this you can host different Web sites within the same Web instance by adding port numbers onto the URIs, giving each tenant within the same Web instance an address with its own port.

You can also use a custom addressing scheme that keeps the same core part of the URL but changes other parts of it per tenant. For instance, depending on the configuration of your site and app, you could have something like these two URIs for Company1 and Company2.

https://<myAccount> https://<myAccount>
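A per-tenant addressing scheme like this is easy to generate from one core URL. The `cloudapp.net` domain and path layout below are illustrative; the actual pattern depends on how your site and app are configured:

```python
def tenant_url(account, tenant, scheme="https"):
    """Build a per-tenant address from one core URL (pattern is illustrative)."""
    return f"{scheme}://{account}.cloudapp.net/{tenant}"

print(tenant_url("myAccount", "company1"))  # https://myAccount.cloudapp.net/company1
print(tenant_url("myAccount", "company2"))  # https://myAccount.cloudapp.net/company2
```

The application then routes each request to the right tenant context based on the tenant-specific part of the address.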

There are other ways to provision and divide functionality among tenants to allow customization of their UI and processing. You will have to decide just how much liberty to give customers to customize their applications. It could be a simple change to part of a page, or the use of cascading style sheets. Or you can allow them to customize entire pages within their namespace.

You will need to ensure that technically a customer’s data is safe within both Azure storage and SQL Azure within a multi-tenant application. More importantly, you will also need to support the perception that the customer’s data is indeed safe in the shared tenant environment.

We mentioned earlier how the cost of using the compute instance is minimized with more customers. The same applies to SQL Azure and the number of databases used. For customers that need complete, guaranteed data isolation it would make sense to allocate one database each. Other customers may be able to share rows in the same database tables more economically.


In summation, tenancy in Azure applications is largely dependent upon your customers’ business and application requirements. You should carefully examine your business and data storage requirements and weigh the pros and cons of running in a shared vs. isolated environment.

A poor multi-tenant architecture can make the experience of using your application very frustrating for customers. More importantly, it can also result in corrupted data. Conversely, burying your head in the sand and simply relying upon a simplified single-tenant architecture can make your application cost-prohibitive for certain customers. Take your time and ensure you get the correct balance of tenancy and instances so your application can take full advantage of the Windows Azure platform.


A best practice for SQL Server is to store the data/log/backup files on a disk other than the OS disk. This carries over to SQL Server installed on an Azure IaaS VM. Due to storage requirements it may be necessary to have more than one physical disk composing a logical disk for these files. Another best practice to increase IOPS (Input/Output Operations Per Second) for SQL Server running on an Azure VM is to store each of these physical disks in its own Azure blob storage account.

However, the Azure Management Portal does not give us the ability during VHD provisioning to select a preferred storage account. By default the VHD will be created in the storage account where the OS disk was created. Therefore you must take the following steps to attach the disks one by one to the VM. Once that is done you create one spanned disk for data and one for logs/backups, using the Computer Management admin console. Then within SQL Server Management Studio you replace the default C:\xxx locations for SQL Server data, log, and backup files with the spanned disk volumes.

To create and attach the data disks to the VM you must do the following:

  1. From the Azure Portal for the VM select “Attach an empty disk” and create a new VHD. By default it will be created at the default OS disk Azure storage location X for the VM.
  2. Create a new Azure destination storage account Y. As a best practice you will create one account for each VHD.
  3. With ClumsyLeaf Cloud Explorer expand both the destination storage account Y (left-hand pane) and the source storage container with the blob containing the new VHD from step 1 (right-hand pane).
  4. Drag/copy the VHD blob from the original storage location X (right-hand pane) to the new destination storage location Y (left-hand pane).

    An alternative way to do the VHD copy is to use the Azure command line from your local machine, as described in this Aditi blog entry. You will first need to download the NodeJS tool from here to your local machine. A third copy option is to use the PowerShell command Add-AzureDisk with the -ImportFrom parameter.

  5. In the Azure Portal select Detach Disk for the specific VM.
  6. Once the disk is detached, go to the Azure Portal VM screen, click on Disks, then select Attach Created Disk.
  7. In the Browse Cloud Storage dialog, in the VHD URL field locate the storage blob to which you copied the VHD in step 4, then select Open and complete the dialog.
  8. Repeat these steps for as many disks as you need to attach to the VM running SQL Server.