
Transitioning from ASM to ARM

The interface to Azure has transitioned from the Azure Service Management (ASM) model to the Azure Resource Manager (ARM) model over the past year or so.  You can view the predecessor, ASM, as the REST API that allowed you to programmatically (via PowerShell or the CLI) access the functionality of the Azure “Classic” portal.  Using X.509 certificates for authentication, ASM was the alternative, automated way to manage deployments, storage, hosted services, and almost anything else you could do manually in the Classic portal.

In 2015, ARM gradually began replacing the ASM model in incremental service rollouts. ARM backs the current Azure “Ibiza” portal. Using Azure Active Directory for authentication, ARM is the recommended and supported way going forward to manage your Azure resources.  ARM promotes the concept of Resource Groups (RGs). An RG is a declarative way of specifying Azure resources (such as storage, VMs, Web Sites, etc.) in a logical grouping so that they are deployed, or released, as a unit.

RGs use a JSON-based template model to describe the relationship hierarchy of all the resources.   At deployment time, the template’s parameter values can be entered manually inline, or supplied in a parameter file (typically named azuredeploy.parameters.json), which is often pulled at runtime from a GitHub code repository.
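
To make this concrete, here is a minimal sketch of an RG deployment using the AzureRM PowerShell module of this era (the resource group, template, and parameter names are hypothetical):

    # Sign in with Azure Active Directory credentials (ARM-style auth)
    Login-AzureRmAccount

    # Create the resource group that will hold the deployed resources
    New-AzureRmResourceGroup -Name "MyAppRG" -Location "East US"

    # Option 1: pass parameter values inline
    New-AzureRmResourceGroupDeployment -ResourceGroupName "MyAppRG" `
        -TemplateFile .\azuredeploy.json `
        -TemplateParameterObject @{ siteName = "mysite"; skuName = "F1" }

    # Option 2: pass a parameter file such as azuredeploy.parameters.json
    New-AzureRmResourceGroupDeployment -ResourceGroupName "MyAppRG" `
        -TemplateFile .\azuredeploy.json `
        -TemplateParameterFile .\azuredeploy.parameters.json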

I remember in grammar school having to learn the metric system, since they told us that over time the US would no longer use the measurements we had grown up with (inches and pounds and ounces).  The transition to the metric system would have been fraught with hair-pulling for a while, I’m sure, as we cowered in our circa-1970 back yard “bomb shelters” (okay, dating myself here a bit!).  The transition from ASM to ARM has not had as dramatic an influence on society as the shift to the metric system would have had on the US population.  However, it has caused much consternation on the part of the IT folks responsible for the deployment and management of Azure resources for their companies. Here is some current guidance and information on where the process is as of today (Leap Day 2016).

Some services still don’t support ARM: We have re-tooled the majority of Azure services to support the Resource Manager model. For some of the remaining services that don’t yet support this model, we’re planning to complete the work within the next few months.

Connecting ASM and ARM environments: Connecting ASM and ARM environments is possible today by linking VNETs created under these models via their gateways. Both environments can also be connected to on-premises networks via VPN.  If using ExpressRoute, though, an ExpressRoute circuit created in the ASM model can only be used with a VNET in the ASM model, and an ExpressRoute circuit created in the Resource Manager model can only be used with a VNET created in the Resource Manager model. We are working to address this, and by April 2016 just one ExpressRoute circuit will be sufficient to connect Virtual Networks from both environments. In addition, we are also working on a capability that would allow setting up a high-bandwidth ASM VNET to ARM VNET peering connection [in the same region] without a gateway in between them. The peering should be available in the summer timeframe.

Migration from ASM to ARM: Today, there are scripts to help you migrate Virtual Machines from ASM to the ARM model. We are also working on a few more solutions that will help reduce the VM reboots and network downtime when migrating.

When to choose ASM vs. ARM: When building new applications in Azure or starting new projects, we strongly encourage using the ARM model. However, if some of the services you need aren’t yet supported under ARM, or if you have existing deployments under ASM, please continue using ASM until you become comfortable with connecting and managing both environments simultaneously. In the long term, please consider migrating ASM deployments to ARM (using the migration solutions we have planned), but rest assured that we will be supporting the ASM model for a long time to come.

Is there a way to get Azure services in your own datacenter? Yes – take a look at the new Microsoft Azure Stack (just put into technical preview). Azure Stack (AS) is a new hybrid cloud platform product that allows organizations to deliver Azure services from their own datacenters.

The main motivation behind the development of AS is plain and simple: customers have asked for Azure in their own DCs. Beyond that, several more specific motivations have driven its creation.

Business Requirements – If a customer has a set of requirements that the public Azure cloud can’t support, then a hybrid cloud may be the answer. These could be items such as wanting to minimize latency, customization of application architectures, data sovereignty, etc.

Application Flexibility – Make cloud-first innovation possible everywhere. This allows you to make app deployment decisions based on business need rather than technology constraints (on premises vs. in the cloud). In other words, you shouldn’t be pushed into the cloud for the sole reason that a technology is not available on premises. We will talk a little more about the innovation possibilities this opens up later in this post.

Inadequate Alternatives – Finally, customers are finding that the alternatives they currently have don’t meet their needs. For organizations that are looking for speed and innovation of cloud computing in their datacenter, Microsoft Azure Stack offers the only hybrid cloud platform that is truly consistent with a leading public cloud. Only Microsoft can bring proven innovation – including higher level PaaS services – from hyper-scale datacenters to on-premises environments to flexibly meet customers’ business requirements.

So let’s look at the three cases for the hybrid Cloud platform starting with Business and Technical considerations.

  • Latency – Latency is an issue if an app has a response-time requirement that cannot be satisfied from the public cloud.
  • Customization – For example, an organization needing deep integration with internal applications and systems, or customization to use (on premises) a certain type of hardware the company already owns.
  • Data sovereignty – Data cannot be allowed to leave country borders or the enterprise (e.g., EU rules).
  • Regulation – Local laws around how to transact business, public sector requirements, compliance and auditing needs, and regulations that require data to be handled on premises.

The Microsoft Hybrid Cloud with Azure Stack brings the power of Azure into your DC. There are three main investment areas or benefits of the hybrid cloud platform.

  1. Azure Services in your Datacenter – initially a subset of the full Azure set (compute, networking, storage, App Service, Service Fabric). Your IT staff can transform datacenter resources into cloud services. This is a cloud-“inspired” infrastructure, since what is on premises has to be mapped onto the Azure models (on-premises machines to Azure VMs).
  • Transform on-premises datacenter resources into cloud services for maximum agility.
  • Run Azure infrastructure services – including Virtual Machines, Virtual Network, and Blob/Table Storage – for traditional apps like SQL Server or SharePoint.
  • Empower developers to write cloud-first apps using on-premises deployments of Azure App Service (Web Apps) and Docker-integrated containers.
  • Make developers productive with the same self-service experience as Azure.
  • IT gets to control the on-premises Azure experience to best meet business requirements.
  • Differentiated PaaS offerings: Web Apps and Docker-integrated containers.

2. Unified App Development – The same APIs, PowerShell cmdlets, ARM templates, etc. that you use in Azure work with Azure Stack: write once, deploy to both AS and Azure. You get RBAC, PowerShell, the Azure portal, Visual Studio integration, and a choice of open source app platforms, languages, and frameworks (see the sketch after the list below).

  • Identical APIs and application model with Azure Resource Manager
  • Role-based access control with Azure Active Directory and Azure Resource Manager
  • Unified Azure SDK
  • Native Visual Studio integration
  • Support for application platforms, languages, and frameworks, including Linux, Java, node.js, and PHP
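
As a minimal sketch of what “write once, deploy to both” looks like in PowerShell (the environment name and the Azure Stack ARM endpoint below are hypothetical placeholders for your own deployment’s values):

    # Register your on-premises Azure Stack as an ARM environment
    Add-AzureRmEnvironment -Name "MyAzureStack" `
        -ArmEndpoint "https://api.local.azurestack.external"

    # The same template deploys to public Azure...
    Login-AzureRmAccount
    New-AzureRmResourceGroupDeployment -ResourceGroupName "MyAppRG" `
        -TemplateFile .\azuredeploy.json

    # ...and to Azure Stack, just by switching the target environment
    Login-AzureRmAccount -EnvironmentName "MyAzureStack"
    New-AzureRmResourceGroupDeployment -ResourceGroupName "MyAppRG" `
        -TemplateFile .\azuredeploy.json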

3. One Azure Ecosystem – you can get productive in AS quickly since the platform is the same in both locations.

  • Curated Azure Resource Manager templates for SharePoint, SQL, AD
  • Curated gallery images for Windows Server and Linux
  • GitHub-based gallery

This approach and solution builds bridges between all three schools of thought: the developer, the IT pro, and the business.

  • For developers: application developers can maximize their productivity using a ‘write once, deploy to Azure or Azure Stack’ approach. Using APIs that are identical to Microsoft Azure, they can create applications based on open source or .NET technology that can easily run on-premises or in the public cloud. They can also leverage the rich Azure ecosystem to jumpstart their Azure Stack development efforts.
  • For IT professionals: IT can help transform on-premises datacenter resources into Azure-consistent IaaS and PaaS services, thereby maximizing agility and efficiency. End users can quickly provision services using the same self-service experience as Azure, and IT gets to use the same automation tools as Azure to control the service delivery experience.
  • For the business: you can truly take advantage of cloud on your terms.

Stack Architecture

Let’s look at the architecture of the Azure Stack (with Level 1 as the topmost layer and Level 5 on the bottom).

Level 1 – Guest workload resources created by Azure Stack, which end users (devs/IT pros) use to get their work done. Each of those resources is supported by an Azure service or a combination of Azure services.

Level 2 – At the next layer down we get into the actual bits of Azure Stack, starting with the end-user experiences. Each of the guest workload resources is supported by end-user experiences and developer tools, some unique to AS and some common to both AS and the Azure portal.  It’s important to remember that this is “just Azure,” so the experiences you’ve come to know and count on there are the same in Azure Stack. This means the same Azure portal, the same support for a variety of open source technologies, and the same support for development tools, including integration with Visual Studio.

Level 3 – The unified application model from Azure Resource Manager. The model for provisioning and accessing resources in public cloud Azure is the same as for provisioning and accessing them in AS. This piece of technology is central to how Azure and Azure Stack operate as clouds.

Level 4 – The extensible service framework, which breaks down into three service categories:

  • Core Services – Common services across all resource types (RBAC, subscriptions, gallery, usage, metrics, etc.). These services are a core part of AS (e.g., the Azure portal).

 

  • Additional Services – The Azure model is extensible, so you can add services to AS and customize the AS deployment in your DC (e.g., Web Apps in TP1 can be installed into AS if you want). In the future this could also include API Apps, Mobile Apps, and Logic Apps.

 

  • Foundational Services – Some services in Azure are the root or basis for other services. This set of foundational services (VMs, containers, blob, queue, and table storage, Premium Storage, networking (with load balancing and gateways), and platform services) is the main part of what Azure Stack is; they provide resources to customers and serve as the basis for other services.

Level 5 – All of this is supported by the cloud infrastructure, which runs on Windows Server technology on the physical hardware (infrastructure management, compute, storage, networking).

Azure Stack came out at the end of January for Public Preview 1. GA is planned for Q4 CY16.

 

Azure VM Scale Sets

Previously within the Azure PaaS and IaaS environments, the auto-scaling of VM instances was done in a manner that replicates additional instances of a running VM.  If you have ‘N’ instances of a VM in a cloud service, scaling out means you now have N+X instances (where X is the number of VMs you add in a given period of time).  In the IaaS VM world you would have to first pre-allocate VMs into an availability set, then shut them down. Once Azure was triggered (via the metrics and threshold indicators that manage the attributes of a scale-out process), Azure would scale instances out (turn VMs on) or in (turn VMs off). Each of these VMs used the same base image as those running in the availability set.

This scaling only worked at the VM level; other Azure resources, such as storage, were not part of the scaling operation.  With Azure VM Scale Sets you take a VM plus a set of related resources and create a base ‘stamp’ image.  You then create one or more instances of these resource sets while scaling out. Unlike single-instance VMs, you do not need to independently define and correlate network, storage, and extension resources for individual VMs, because the resources scale as a ‘set.’

Scale Sets leverage Azure Resource Manager (ARM), with its efficient and parallel resource deployment model. With ARM templates, you can provision groups of related Azure resources from a JSON template file.   That means you can create a relationship between the VMs that need to scale out and other resources like NICs, VM extensions, storage, and any other Azure resource you want to include in the scaling process.  VM Scale Sets let you create and manage a group of identical VMs rather than correlating the creation of many independent resources yourself.  For instance, the scale set you deploy could have multiple VMs behind an external load balancer with a public IP address, inbound NAT rules that let you connect to a specific instance for troubleshooting, and Azure storage accounts.  Just think how hard that would be to do in a scaling situation without Scale Sets.

Right now you use resource group templates for Scale Sets.  Eventually you will be able to create Scale Sets directly from the Azure portal.
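
As a rough sketch, a scale set is just another ARM resource in a template; the resource type is Microsoft.Compute/virtualMachineScaleSets, and the instance count lives in the resource’s “sku” block (the file and parameter names below are hypothetical):

    # Key fragment of the scale set template:
    #
    #   "type": "Microsoft.Compute/virtualMachineScaleSets",
    #   "sku": { "name": "Standard_A1", "tier": "Standard", "capacity": 5 },
    #
    # Deploy it like any other resource group template. Scaling out is
    # simply a redeployment with a larger capacity/instance count.
    New-AzureRmResourceGroupDeployment -ResourceGroupName "MyScaleSetRG" `
        -TemplateFile .\vmss-azuredeploy.json `
        -TemplateParameterObject @{ vmSku = "Standard_A1"; instanceCount = 5 }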

 

Azure DevTest Labs

Practically speaking… wow! Take a look at the new (public preview) feature called Azure DevTest Labs. It rocks! This will be an incredibly popular feature for our customers.  With Azure DevTest Labs your development and test teams can use a self-service provisioning mechanism to quickly deploy Windows or Linux test environments in the Azure cloud. There is a built-in process to create templates for specific types of resource allocations. Once resources are provisioned, you can use quotas and policies to manage their expense and usage, so that your team does not run up unnecessarily large bills for unused or over-provisioned resources.

With the advent of Azure Resource Manager and resource group templates, a process exists to repeatedly allocate entities in Azure in a fully reproducible and repeatable way.   The same template concept exists in Azure DevTest Labs to let you create an on-demand dev/test environment with a few mouse clicks.  You can leverage these templates via source control across groups in your company or across the enterprise. You can use Visual Studio Online along with Git (a version control system) and GitHub (a site where you can publish your Git repositories and collaborate with others) to add private artifacts. Artifacts can be items like tools you want to install on a VM, actions you want to run on the VM, or applications you want to test. Azure DevTest Labs ships with a number of key artifacts to get you going.  There is a blade in the Azure preview portal that asks for the Git clone URI and the folder path; Azure DevTest Labs will talk to the Git repository and sync the content.
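
A lab itself is also just an ARM resource, so you can stand one up from a template like anything else. A minimal sketch (the resource group, lab name, and apiVersion here are assumptions; check the current schema):

    # Key fragment of the lab template:
    #
    #   "type": "Microsoft.DevTestLab/labs",
    #   "apiVersion": "2015-05-21-preview",   # assumed preview version
    #   "name": "MyTeamLab",
    #   "location": "East US"
    #
    New-AzureRmResourceGroupDeployment -ResourceGroupName "MyLabRG" `
        -TemplateFile .\devtestlab-azuredeploy.json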

DevTest Labs can be worked into your deployment process, where you can set limits to manage the costs and lifetime of your Azure dev/test resources. There are also existing pools of VMs from which you can quickly draw to expedite the setup of your development or test environment.

You can manage who has access by using role-based access control (RBAC), such as the DevTest Labs User role. This lets you manage, at different levels (i.e., subscription, lab, etc.), who can use and provision resources.
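
For example, granting a team member lab-level access with PowerShell might look like this minimal sketch (the sign-in name, subscription ID, and lab name are hypothetical):

    # Grant a user the "DevTest Labs User" role, scoped to a single lab
    New-AzureRmRoleAssignment -SignInName "dev1@contoso.com" `
        -RoleDefinitionName "DevTest Labs User" `
        -Scope "/subscriptions/<subscription-id>/resourceGroups/MyLabRG/providers/Microsoft.DevTestLab/labs/MyTeamLab"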

For more info on Azure Dev/Test Labs go to https://azure.microsoft.com/en-us/services/devtest-lab/

 

As you may know, the Azure cloud platform has been built from the ground up to be an open platform; currently 1 in 4 Azure virtual machines is powered by Linux, and 60% of the solutions in the Azure Marketplace are Linux based. This week we announced another significant step forward in our ongoing openness efforts and support for Linux on Azure, and I am pleased to report that Microsoft has entered into a partnership with Red Hat to provide joint, enterprise-grade support on Azure for a breadth of Red Hat solutions. Key elements of the partnership include:

  • Red Hat solutions available natively to Microsoft Azure customers. In the coming weeks, Microsoft Azure will become a Red Hat Certified Cloud and Service Provider, enabling customers to run their Red Hat Enterprise Linux applications and workloads on Microsoft Azure. At this time, Red Hat Cloud Access subscribers will be able to bring their own virtual machine images to run in Microsoft Azure. Microsoft Azure customers can also take advantage of the full value of Red Hat’s application platform, including Red Hat JBoss Enterprise Application Platform, Red Hat JBoss Web Server, Red Hat Gluster Storage, and OpenShift, Red Hat’s Platform-as-a-Service (PaaS) offering. Also, in the coming months, Microsoft Azure and Red Hat plan to provide Red Hat On-Demand – “pay-as-you-go” Red Hat Enterprise Linux images available in the Azure Marketplace, supported by Red Hat.
  • Integrated enterprise-grade support spanning hybrid environments. Customers will be offered cross-platform, cross-company support spanning the Microsoft Azure and Red Hat offerings in an integrated way, unlike any previous partnership in the public cloud. By co-locating support teams, the experience for customers will be simple and seamless.
  • Unified workload management across hybrid cloud deployments. Red Hat CloudForms will integrate with Microsoft Azure and System Center Virtual Machine Manager, offering Red Hat CloudForms customers the ability to manage Red Hat Enterprise Linux on both Hyper-V and Microsoft Azure. Support for managing Azure workloads from Red Hat CloudForms is expected to be added next year, extending the existing System Center capabilities for managing Red Hat Enterprise Linux.
  • Collaboration on .NET for a new generation of application development capabilities. Expanding on the preview of .NET on Linux announced by Microsoft in April, developers will have access to .NET technologies across Red Hat offerings, including OpenShift and Red Hat Enterprise Linux, jointly backed by Microsoft and Red Hat. Red Hat Enterprise Linux will be the primary development and reference operating system for .NET Core on Linux.

There is significant customer demand for Red Hat support on Azure, and this announcement unlocks a huge opportunity for our partners to engage with customers to help them move existing or new Red Hat solutions to the Azure platform.

For more information:

  • Visit the Microsoft and Red Hat landing page here and view the announcement webcast by Microsoft’s Scott Guthrie and Red Hat’s Paul Cormier.
  • Attend (or watch a recording of) a technical webinar on running Red Hat solutions on Microsoft Azure.
  • Go here to learn more about our overall Microsoft open cloud approach and join our social blog.

I met with a customer today, and the subject of Azure Scale Units (SUs) and the limitations on scaling up Azure VMs came up.  So I want to provide a simple definition of the issue and how to work around it.

Scale Units

Azure VMs that are part of the same cloud service have free access to each other and share the load balancer and a virtual IP (e.g., http://myapp.cloudapp.net).  A cloud service is bound to a single Scale Unit, which Azure uses to scale out any of the VMs in the cloud service. VMs can only be resized to a size supported in the Scale Unit (SU) where the VM is deployed. At times the word “stamp” is used to refer to the same concept as an Azure Scale Unit.  You can view a stamp, or scale unit, as a sectioned-off group of hardware in the DC that works together.

As new hardware becomes available, Microsoft builds it into a new SU. Currently there are five SU types, but keep in mind that these will evolve and change as new hardware and new datacenters are added to the Azure cloud.  Scale Unit 1 is the oldest hardware in the DC, while SU5 is the newest.

  • Scale Unit 1: A0-A4 (the original VM sizes; Basic VMs with no load balancing or autoscale can only scale between A0-A4)
  • Scale Unit 2: A0-A7 (like SU1 but adds A5-A7)
  • Scale Unit 3: A8/A9 (“HPC” VMs, with InfiniBand)
  • Scale Unit 4: A0-A7 and D1-D14 (the D series plus all of A0-A7)
  • Scale Unit 5: G1-G5

How do you know which scale unit you are using?

  • Go to the VM/Configure tab.
  • Click on VM Size and drop down the list of sizes.
  • If you only see A0-A4, you know you are in SU1, so you cannot scale up to anything above A4 in this case.

The Problem

Azure has generations of hardware in its datacenters, and Scale Units are groups of hardware resources in an Azure DC.  Due to the progressive rollout of constantly improving VM types (such as A8/A9), not all stamps support all of the new hardware. So if you are in an older stamp and you try to scale up by increasing the VM type, you may or may not be able to do so. It depends upon the age and hardware functionality of the particular stamp to which your original VM is assigned. For instance, all stamps support A1, but not all support the newer A8 and A9 VMs, or the D- and G-series VMs.

The Solution

There is no portal setting or “scale-unit” PowerShell parameter to control the stamp to which your VM is assigned.  So what should you do?

>> If you want to allocate a new VM and make sure you can move up to bigger Scale Units in the future:

  • To ensure you get an SU that will meet your needs for scaling up, ensure the first VM deployed in that Cloud Service (or legacy Affinity Group) is in the upper range. So if you want SU2, deploy an A5 or above (A6 or A7), and you will be in SU2 from that point on for all subsequent allocations (see the sketch that follows).
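
A minimal sketch of that first allocation with the classic (ASM) cmdlets (the service name, image, and credentials are hypothetical):

    # Deploy the FIRST VM in the cloud service at an A6 size so the
    # service lands in a scale unit that supports A0-A7 (SU2 or above)
    $winImage = "<an OS image name from Get-AzureVMImage>"
    New-AzureQuickVM -Windows `
        -ServiceName "MyCloudService" -Name "AnchorVM" `
        -ImageName $winImage -InstanceSize "A6" `
        -AdminUsername "azureadmin" -Password "<password>" `
        -Location "East US"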

>> If you want to move an existing VM to a new bigger size that is not in your current Scale Unit:

  • If you are in SU1 and need to move to a VM size that is not in SU1 (say A5-A7 in SU2), you can’t change it directly from the UI. First, find the OS disk name in the Usage Overview.
  • Delete the VM, but be sure to choose “Keep the Attached Disks.”
  • Go to VM/Disks and make sure that disk is not attached to any other VMs.
  • Go to the Gallery and create a new VM using that saved OS disk, selecting the larger size to which you want to scale up (a PowerShell sketch follows).
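
In classic PowerShell, that same dance might look like this minimal sketch (the service, VM, and disk names are hypothetical; note that Remove-AzureVM keeps the attached disks unless you add -DeleteVHD):

    # 1. Delete the VM but keep its OS disk
    Remove-AzureVM -ServiceName "MyOldService" -Name "MyVM"

    # 2. Recreate the VM from the saved OS disk at the larger size, in a
    #    new cloud service so it lands on hardware (a scale unit) that
    #    supports that size
    New-AzureVMConfig -Name "MyVM" -InstanceSize "A6" -DiskName "MyVM-os-disk" |
        New-AzureVM -ServiceName "MyNewService" -Location "East US"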

Note that once you allocate a G-series VM you can only change the VM size (scale up) to another G-series VM. If you want an A- or D-series VM you need to delete the VM, save the OS disk, etc. The D series is similar in nature, but its scale unit (SU4) also includes A0-A7.

Closing Thoughts

The key here is planning ahead of time what upsizing could occur and in what tiers; it’s part of procurement design.  “Sizing up” is not a very common process, and typically it is done manually due to undersized planning estimates.  So if you feel there is a good chance you will eventually need a D-series VM, but you are initially planning to allocate an A-series, you should allocate a D-series first (I recommend the lowest of the D series, D1, to get the absolute lowest monthly usage cost). Once you allocate it, drop down to the A series and run from there.  Later, if you need to scale up to a D series, you have the capability to do so in that SU4 scale unit.  You don’t have this option with the G-series scale unit, which does not contain any D- or A-series options.

 

Azure MVP x 2!

Received my 2nd consecutive Azure MVP award today from Microsoft!   I love how Microsoft takes care of its partners… this is an awesome program!  For anyone considering becoming an MVP, it’s well worth the time and effort.  For some of my thoughts on becoming an MVP refer to “How to Become an MVP” at https://michaelmckeownblog.wordpress.com/2014/04/22/how-to-become-an-mvp

And a special thanks to my manager Jeff Nuckolls and Aditi for supporting me in this program, time- and money-wise, over the past year!


Azure Scale Units

When scaling Azure based upon load, you can scale horizontally (add more compute instances) or vertically (increase the size of the VM). Azure auto-scaling functionality works with horizontal scaling.  For more details about these options see my previous blog post on “Cloud Scaling Patterns.”

Azure Scale Units (SUs) apply to vertical scaling, where you want to scale ‘up’ the unit on which your VM is partitioned.  As new hardware becomes available, Microsoft builds it into a new “scale unit.” When scaling UP, there are some sizes you can’t resize to, depending upon the SU on which your VM is currently partitioned.  VMs can only be resized to a size supported in the SU where the VM is deployed. Currently there are five scale unit types:

  • Scale Unit 1: A0-A4 (the original VM sizes; Basic VMs with no load balancing or autoscale can only scale between A0-A4)
  • Scale Unit 2: A0-A7 (like SU1 but adds A5-A7)
  • Scale Unit 3: A8/A9 (“HPC” VMs, optimized networking with InfiniBand)
  • Scale Unit 4: A0-A7 and D1-D14 (the D series with SSDs and better CPUs, plus all of A0-A7)
  • Scale Unit 5: G1-G5 (monster VMs with up to 32-core Xeon CPUs, 448 GB RAM, 6,596 GB of SSD storage, and 64 data disks)

How to check which SU a VM is currently allocated to

  1. Go to the VM/Configure tab.
  2. Click on VM Size and drop down the list of sizes. If you only see A0-A4, you know you are in SU1, so you cannot scale up to anything above A4 in this case.

How to choose a scale unit

  • To ensure you get an SU that will meet your needs for scaling up, ensure the first VM deployed in that Cloud Service (or legacy Affinity Group) is in the upper range. So if you want SU2, deploy an A5 or above (A6 or A7) and you will be in SU2 from that point on for all subsequent allocations.
  • When you deploy a smaller size, like A2, you can be put into any of several scale units. You won’t necessarily be in SU1.

How to move to a bigger VM size that is not in your current SU:

  • If you are in SU1 and need to move to a VM size that is not in SU1 (say A5-A7 in SU2), you can’t change it directly from the UI. Instead:
  1. Note the OS disk name in the usage overview.
  2. Delete the VM, choosing “Keep the Attached Disks.”
  3. Go to VM/Disks and make sure that disk is not attached to any other VMs.
  4. Go to the Gallery, create a new VM from that saved OS disk, and select the size to which you want to upgrade.

Network Security Groups (NSGs) are an improvement upon VM endpoint Access Control List (ACL) rules. They provide network segmentation within an Azure VNET at the infrastructure management level. This is an improvement over the earlier endpoint protection, where you define ACL rules to permit or deny ingress traffic to the public IP port. ACLs only apply to incoming traffic, whereas NSGs apply to both ingress and egress traffic.

  • You must first remove an ACL from any endpoints on a VM to use NSGs. The two (ACL and NSG) cannot co-exist on the same endpoint.
  • Each VM endpoint has both a private and a public port. The private port is used by other VMs in the subnet to communicate with each other and does not use the Azure load balancer. The public port is exposed to the implicit Azure load balancer for ingress Internet traffic. ACLs apply to the public port. For a VM endpoint you’ll need to ensure that the VM firewall allows data to be transferred using the protocol and private port corresponding to the endpoint configuration.
  • Today there is no support in the portal for NSGs; everything needs to be done via the REST API or PowerShell.
  • NSGs offer full control over traffic coming into and going out of a virtual machine in a VNET.
    • NSGs do not work with VNETs that are linked to an affinity group.
    • They filter requests before they hit the VM (ingress) and before they go out to the Internet (egress).
  • They provide a logical DMZ in which database and app servers can be protected.
    • Web proxies (which mask actual incoming and outgoing IP addresses) and DNS servers are usually put in a DMZ since they are exposed to the Internet. NSGs can logically create this type of abstraction for your VNET.
  • NSG provides control over traffic flowing in and out of your Azure services in a subnet or a VNET as a set of ACL rules
    • NSGs are applied to all VMs in a subnet of a VNET in the same region (the VMs must be in a regional VNET).
    • NSGs can be applied to an individual VM as well.
    • To add layers of protection, you can associate one NSG with a VM and another NSG with the VM’s subnet.
      • For ingress traffic, a packet goes through the access rules specified on the subnet first, then through the VM NSG rules.
      • Egress traffic goes through the VM NSG rules first, then the subnet NSG rules.
    • You can now control access coming in and out of the Internet to your VMs for an entire subnet via a single ACL rule.
    • ACL rules on multiple VMs can be quickly changed in one operation vs. having to go into the firewall on each VM individually. You make the changes at the NSG level and don’t even need to go into the VM.
  • Azure provides default NSG rules around the VNET, and access to the internet, that you can modify if needed. It also provides default ‘tags’ to identify the VIRTUAL_NETWORK, AZURE_LOADBALANCER and the INTERNET. These tags can be used in the Source/Destination IP address to simplify the process of defining rules.
  • Defined rules are prioritized in their execution (as in most business-rules engines) so that the highest-priority rules are evaluated first (for NSGs, a lower priority number means higher precedence).
  • You define the rules (in PowerShell or REST) using the following fields (a PowerShell sketch follows this list):
    • Priority, Access Permissions, Source IP/ Source Port, Destination IP/Destination Port, Protocol
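
Putting those fields together, here is a minimal sketch using the classic (ASM) NSG cmdlets (the NSG, VNET, and subnet names are hypothetical):

    # Create an NSG in the same region as your regional VNET
    New-AzureNetworkSecurityGroup -Name "web-nsg" -Location "East US" `
        -Label "NSG for the web tier"

    # Add a rule: allow inbound HTTP from the Internet to the VNET
    # (priority, action, source/destination prefix and port, protocol)
    Get-AzureNetworkSecurityGroup -Name "web-nsg" |
        Set-AzureNetworkSecurityRule -Name "allow-http" -Type Inbound `
            -Priority 100 -Action Permit `
            -SourceAddressPrefix "INTERNET" -SourcePortRange "*" `
            -DestinationAddressPrefix "VIRTUAL_NETWORK" -DestinationPortRange "80" `
            -Protocol TCP

    # Associate the NSG with a subnet so it applies to every VM there
    Set-AzureNetworkSecurityGroupToSubnet -Name "web-nsg" `
        -VirtualNetworkName "MyVNet" -SubnetName "web-subnet"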

For more information on NSGs refer to “About Network Security Groups” at https://msdn.microsoft.com/library/azure/dn848316.aspx.

Well, it’s official!  Even though you have probably read the article “Kardashians to Kick Bruce and their Affinity Groups to the Curb” in People magazine, you were not really sure what to believe until you saw the official word from Microsoft.  Check out “About Regional VNets and Affinity Groups” at https://msdn.microsoft.com/library/azure/jj156085.aspx.   Improvements to the Azure virtual network infrastructure have made communication between different parts of the Azure DC very quick.  No longer do you need to explicitly group resources in close proximity to each other to minimize latency between items like VNETs, VMs, and storage.

 

 
