
Back In The Saddle!

This Azure blog, once very active, has been on vacation for a few years. Now I'm starting back slowly with posts to help others with Azure issues. To begin, I will include some work blogs I've written fairly recently.

Serverless Computing with Azure functions   https://www.accenture.com/us-en/blogs/blogs-michael-mckeown-serveless-computing-azure

Developing for Auto-Scaling in Azure   https://blogs.dxc.technology/2020/05/15/developing-for-autoscaling-in-azure/

Understanding Azure Databricks and Resource Groups  https://blogs.dxc.technology/2020/01/10/understanding-azure-databricks-and-resource-groups/

Unlock Azure Tagging with Policies and Initiatives  https://assets1.dxc.technology/cloud/downloads/DXC-Cloud-Unlock_Azure_tagging_with_policies_and_initiatives_white_paper.pdf

 


I have not seen much official guidance from Microsoft on the number of Resource Groups (RGs) in a deployment. Should I put all my resources in one large group? Or use multiple RGs, and if so, what criteria should I use to segment them? Here are some thoughts on granularity and Resource Groups.

  1. Putting all resources in one large group is akin to writing one large monolithic main program with no functions – it is not good modular design, and changes later can be an issue.
  2. Tagging of tiers for monitoring, billing status, and management. You can tag all resources in one RG at once (vs. tagging each individual resource), but if everything is in one big RG you can't differentiate between tiers.
  3. Being able to update all the resources in an RG together or separately. Only resources that share a life cycle should be in the same RG. A resource group is a unit of provisioning (Application Lifecycle Management – ALM): you can deploy, update, and delete its resources in one action.
  4. You can restrict access to RGs using RBAC at a more granular level. This requires more analysis in this architecture but is an administrative positive in multi-tier environments. If everything is in one RG you lose this capability. For production applications, you can control which users may access those resources. Or for the core infrastructure resources, you can limit access to infrastructure folks only.
  5. With different RGs, you can define different admin access permissions for each tier – Owner, Reader, Contributor. Some admins may be Owner on one tier but Contributor on others. It's not a good security idea to let everyone in the door at the party and let them roam freely into rooms they have no true business need to administer. It's why we have different roles/permissions in SQL Server for admins.
  6. Compartmentalizing templates/RGs helps division of responsibility to enhance security. Each template for each tier can be under different RBAC roles for full separation of duties.  Using one large RG you lose the granular security for different tiers of your solution.
  7. Centralized RGs that contain core components (storage, VNETs/subnets, etc.) are an option with >1 RG. As you scale out, creating centralized Resource Groups for your virtual networking and subnets makes it easier to build cross-premises network connections for hybrid connectivity options. When combined with RBAC, you can protect these common groups to only be administered by the correct admin.
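To make the per-tier idea concrete, here is a minimal sketch of a subscription-level ARM template that creates one tagged resource group per tier. The group names, tag values, and API version are illustrative assumptions, not prescriptive:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Resources/resourceGroups",
      "apiVersion": "2021-04-01",
      "name": "rg-myapp-web",
      "location": "eastus",
      "tags": { "tier": "web", "costCenter": "1234" }
    },
    {
      "type": "Microsoft.Resources/resourceGroups",
      "apiVersion": "2021-04-01",
      "name": "rg-myapp-data",
      "location": "eastus",
      "tags": { "tier": "data", "costCenter": "1234" }
    }
  ]
}
```

Because each tier lives in its own RG, you can then scope RBAC role assignments and tags to the group rather than to each individual resource.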

Here are some key points around Azure VNET to VNET Peering.

REGIONS

  • Virtual network (VNet) peering enables you to connect two VNets in the same region through the Azure backbone network (no Internet). Once peered, the two VNets appear as one for connectivity purposes.
  • If your VNets are in the same region, you do not need to use a Gateway. Rather, connect VNETs in the same region with VNET Peering.
  • If VNETs are in different regions, you need to use a Gateway.
  • Each VNet, regardless of whether it is peered with another VNet, can still have its own gateway and use it to connect to an on-premises network.
  • VNet-to-VNet traffic within the same region is free for both directions.
  • Cross-region VNet-to-VNet egress (outbound) traffic is charged at the outbound inter-VNet data transfer rate, based on the source region.
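As a sketch of what setting this up looks like, here is one side of a peering declared as an ARM template resource (peering must be declared in both directions, and the VNet names here are hypothetical):

```json
{
  "type": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
  "apiVersion": "2020-05-01",
  "name": "vnet-hub/peer-hub-to-spoke",
  "properties": {
    "remoteVirtualNetwork": {
      "id": "[resourceId('Microsoft.Network/virtualNetworks', 'vnet-spoke')]"
    },
    "allowVirtualNetworkAccess": true,
    "allowForwardedTraffic": false,
    "allowGatewayTransit": false,
    "useRemoteGateways": false
  }
}
```

A matching `vnet-spoke/peer-spoke-to-hub` resource would be declared on the other VNet to complete the link.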

BENEFITS OF PEERING

The traffic between VMs in the peered VNets is routed through the Azure infrastructure (Backbone) (not through a gateway) much like traffic is routed between VMs in the same VNet.  This yields a low-latency, high-bandwidth connection between resources in different VNets.  VMs in the peered VNets can communicate with each other directly by using private IP addresses.

REQUIREMENTS

  • The peered VNets must exist in the same Azure region.
  • The peered VNets must have non-overlapping IP address spaces.
  • You can peer VNets that exist in two different subscriptions. A privileged user of both subscriptions authorizes the peering. The subscriptions must be associated with the same Active Directory tenant.
  • The network throughput is based on the bandwidth allowed for the VM, proportionate to its size. Peering itself imposes no additional bandwidth restrictions between VMs, but the maximum network bandwidth of each VM size still applies.
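The non-overlapping address space requirement is easy to check up front. Here is a small sketch (using only the Python standard library; the address ranges are made up for illustration) of the overlap test Azure effectively applies before it will accept a peering:

```python
import ipaddress

def vnets_can_peer(space_a, space_b):
    """Return True if two VNet address spaces do not overlap (a peering prerequisite)."""
    nets_a = [ipaddress.ip_network(p) for p in space_a]
    nets_b = [ipaddress.ip_network(p) for p in space_b]
    return not any(a.overlaps(b) for a in nets_a for b in nets_b)

# Non-overlapping spaces: peering is allowed.
print(vnets_can_peer(["10.0.0.0/16"], ["10.1.0.0/16"]))   # True
# 10.2.0.0/24 sits inside 10.2.0.0/16: peering would be rejected.
print(vnets_can_peer(["10.2.0.0/16"], ["10.2.0.0/24"]))   # False
```

Running this against your planned address plans before deployment saves a failed peering attempt later.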

 

 

Ever struggle with how to properly select the correct VM type/size for your Azure architecture? I do!  More choices than Kim Kardashian has shoes!  It can be really hard to make the right decision since they are all very good VMs to run your apps on.
Here's a simplified cheat sheet, and links that give you clear info for your decisions.
A series – Normal generic VMs

F series – Will eventually replace the generic A-series

D/DS series  – “DISK” with faster caching

G/GS series – “GODZILLA” with very large RAM

N series – “NVIDIA” (Graphics units)

H series – “HPC” Compute Intensive with fast Infiniband RDMA

L/LS series – "LOW LATENCY" storage optimized

Choosing the most appropriate Azure Virtual Machine Specification

 

Don't look in the rear-view mirror of your F-14 Tomcat, but Microsoft Azure is ramping up its Federal compliance story big time!  Microsoft has recently announced three major Azure features to support government entities, partners, and customers using Azure Government.  These announcements formalize the fact that complex and secure government and defense workloads can run on the Azure Government Cloud.  The GovCloud is certified by the US Department of Defense and the US Federal Government.

  1. International Traffic in Arms Regulations (ITAR) compliance – This lets customers and partners who store and process ITAR-regulated data leverage Azure Government to conform with these data requirements.  Azure Government meets the strict location and personnel requirements under ITAR by restricting access to US persons and storing data in the US.
  2. Information Impact Level 4 by the Defense Information Systems Agency – This allows all US Department of Defense (DoD) customers and mission partners to leverage Azure Government for controlled unclassified information—data requiring protection from unauthorized disclosure and other mission-critical data including data subject to export control, privacy, or protected health information, or data designated as For Official Use Only, Law Enforcement Sensitive, or Sensitive Security Information.
  3. FedRAMP High formal award – This formalizes that Azure Government has controls in place to securely process high-impact level data—data that, if leaked or improperly protected, could have a severe adverse effect on organizational operations, assets, or individuals

 

I came across two great posts I'd like to combine and share on how to RDP into a VM from the new Azure Portal (using ARM), then a simple way to copy files from your local environment to that VM via File Explorer.

To RDP into the VM via the (new) Azure Portal, first you need to add a DNS name in the portal.  Then replace the public IP in the RDP file with that DNS name to RDP into the VM successfully.

http://pleasereleaseme.net/remote-desktop-connections-to-new-style-azure-vms-where-has-the-dns-name-gone/

 After you do that, here is a tip on how to configure the downloaded RDP file to use local resources. For instance, you can map your local c:\temp drive to be seen in the File Explorer view on the VM after you RDP into it.

http://windowsitpro.com/azure/tip-how-transfer-files-azure-vms
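Putting both tips together, the lines you end up editing in the downloaded .rdp file look roughly like the following. This is a sketch: the DNS label is hypothetical, and the exact drive-redirection syntax can vary by RDP client version.

```
full address:s:myvmlabel.cloudapp.net:3389
drivestoredirect:s:C:\;
```

The first line swaps the public IP for the DNS name you added in the portal; the second redirects your local C: drive so it shows up in the VM's File Explorer after you connect.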

 

 

Transitioning from ASM to ARM

The interface to Azure has transitioned from the Azure Service Management (ASM) model to the Azure Resource Manager (ARM) model over the past year or so.  You can view the predecessor, ASM, as the REST API that allowed you to programmatically (PowerShell or CLI) access the functionality of the Azure "Classic" Portal.  Using X.509 certificates, ASM was the alternative, automated way to manage deployments, storage, hosted services, and almost anything you could do manually in the Classic portal.

In 2015, ARM gradually began replacing the ASM model in incremental service rollouts. ARM supports the current Azure "Ibiza" Portal. Using Azure Active Directory for authentication, ARM is the recommended and supported way going forward to manage your Azure resources.  ARM promotes the concept of Resource Groups (RGs). An RG is a declarative way of specifying Azure resources (such as storage, VMs, Web Sites, etc.) in a logical grouping so that they are deployed, or released, as a unit.

RGs use a JSON-based template model to describe the relationship hierarchy of all the resources.  At deployment time, parameter values can be entered inline or supplied in a file conventionally named azuredeploy.parameters.json, which is typically obtained at runtime from a GitHub code repository.
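For reference, a minimal parameters file follows the shape below. The parameter names here are hypothetical; they must match whatever parameters the accompanying template declares.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmName": { "value": "myVm01" },
    "adminUsername": { "value": "azureadmin" }
  }
}
```

Keeping the parameters in a separate file like this lets the same template be reused across dev, test, and production with different values.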

I remember in grammar school having to learn the metric system, since they told us that over time the US would no longer use the measurements we had grown up with (inches and pounds and ounces).  The transition to the metric system would have been fraught with hair-pulling, I'm sure, for a while as we cowered in our circa-1970 back yard "bomb shelters" (okay, dating myself here a bit!).  The transition from ASM to ARM has not had as dramatic an influence on society as the shift to the metric system would have had on the US population.  However, it has caused much consternation on the part of IT folks responsible for the deployment and management of Azure resources for their company. Here is some current guidance and information on where the process is as of today (Leap Day 2016).

Some services still don’t support ARM: We have re-tooled the majority of Azure services to support the Resource Manager model. For some of the remaining services that don’t yet support this model, we’re planning to complete the work within the next few months.

Connecting ASM and ARM environments: Connecting ASM and ARM environments is possible today by linking VNETs created under these models via their gateways. Both environments can also be connected to on-premises via VPN.  If using ExpressRoute, though, an ExpressRoute circuit created in the ASM model can only be used with a VNET in the ASM model, and an ExpressRoute circuit created in the Resource Manager model can only be used with a VNET created in the Resource Manager model. We are working to address this, and by April 2016 just one ExpressRoute circuit will be sufficient to connect Virtual Networks from both environments. In addition, we are also working on a capability that would allow setting up a high-bandwidth ASM VNET to ARM VNET Peering connection [in the same region] without a gateway in between them. The Peering should be available in the summer timeframe.

Migration from ASM to ARM: Today, there are scripts to help you migrate Virtual Machines from ASM to the ARM model. We are also working on a few more solutions that will help reduce the VM reboots and network downtime when migrating.

When to choose ASM vs. ARM: When building new applications in Azure or starting new projects, we strongly encourage considering the ARM model. However, if some of the services aren't yet supported under ARM, or if you have existing deployments under ASM, please continue using ASM until you become comfortable with connecting and managing both environments simultaneously. In the long term, please consider migrating ASM deployments to ARM (using the migration solutions we have planned), but rest assured that we will be supporting the ASM model for a long time to come.

Yes – take a look at the new Microsoft Azure Stack (just put into technical preview). Azure Stack (AS) is a new hybrid cloud platform product that allows organizations to deliver Azure services from their own datacenter, thereby helping them achieve more.

The main motivation behind the development of AS is simple: customers have asked for Azure in their own datacenters. Several more specific motivations have driven its creation.

Business requirements – If the customer has a set of requirements that the public Azure Cloud can't support, then the hybrid cloud may be the answer. These could be items such as minimizing latency, customization of application architectures, data sovereignty, etc.

Application Flexibility – Make cloud-first innovation possible everywhere. This allows you to make app deployment decisions based on business need (on premises vs. in the Cloud) rather than on technology constraints. In other words, we don't want to use technologies in the Cloud for the sole reason that they are not available on premises.

Inadequate Alternatives – Finally, customers are finding that the alternatives they currently have don’t meet their needs. For organizations that are looking for speed and innovation of cloud computing in their datacenter, Microsoft Azure Stack offers the only hybrid cloud platform that is truly consistent with a leading public cloud. Only Microsoft can bring proven innovation – including higher level PaaS services – from hyper-scale datacenters to on-premises environments to flexibly meet customers’ business requirements.

So let’s look at the three cases for the hybrid Cloud platform starting with Business and Technical considerations.

  • Latency – Latency is an issue if an app has a requirement that cannot be satisfied by using the public Cloud.
  • Customization – For example, an organization needing deep integration with internal applications and systems, or customization to use a certain type of on-premises hardware that a company already owns.
  • Data sovereignty – Data cannot be allowed to leave country borders or the enterprise – e.g. EU.
  • Regulation – Local laws around how to transact business, public sector organizations, compliance and auditing needs, etc. – regulations that require data to be handled on premises.

The Microsoft Hybrid Cloud with Azure Stack brings the power of Azure into your DC. There are three main investment areas or benefits of the hybrid cloud platform.

  1. Azure Services in your Datacenter – initially a subset of the full Azure set (compute, networking, storage, app services, service fabric). Your IT folks can transform DC resources into Cloud services. This is a Cloud-"inspired" infrastructure, because on-prem machines have to be mapped to the Azure models.
  • Transform on-premises datacenter resources into cloud services for maximum agility.
  • Run Azure infrastructure services – including Virtual Machines, Virtual Network, and Blob/Table Storage – for traditional apps like SQL Server or SharePoint.
  • Empower developers to write cloud-first apps using on-premises deployments of Azure App Service (Web Apps) and Docker-integrated containers.
  • Make developers productive with the same self-service experience as Azure.
  • IT gets to control on-premises Azure experience to best meet business requirements.
  • PaaS differentiation: Web Apps, Docker-integrated containers

2. Unified App Development – The same APIs, PowerShell, ARM templates, etc. used in Azure work with Azure Stack: write once, deploy to both AS and Azure. RBAC, PowerShell, the Azure portal, and Visual Studio all carry over, along with a choice of open-source app platforms, languages, and frameworks.

  • Identical APIs and application model with Azure Resource Manager
  • Role-based access control with Azure Active Directory and Azure Resource Manager
  • Unified Azure SDK
  • Native Visual Studio integration
  • Support for application platforms, languages, and frameworks, including Linux, Java, node.js, and PHP

3. One Azure Ecosystem – you can get productive in AS quickly, since the platform is the same in both locations.

  • Curated Azure Resource Manager templates for SharePoint, SQL, AD
  • Curated gallery images for Windows Server and Linux
  • GitHub-based gallery

This approach and solution builds bridges between all three schools of thought: developer, IT Pro, and business dev person.

  • As a developer, you can maximize productivity with a 'write once, deploy to Azure or Azure Stack' approach. Using APIs identical to Microsoft Azure's, you can create applications based on open source or .NET technology that run on premises or in the public cloud, and leverage the rich Azure ecosystem to jumpstart your Azure Stack development efforts.
  • As an IT professional, you can transform on-premises datacenter resources into Azure-consistent IaaS and PaaS services, maximizing agility and efficiency. End users can quickly provision services using the same self-service experience as Azure, and IT gets to use the same automation tools as Azure to control the service delivery experience.
  • As a business, you can truly take advantage of cloud on your terms.

Stack Architecture

Let’s look at the architecture of the Azure Stack (with Level 1 as the topmost layer and Level 5 on the bottom).

Level 1 – Guest workload resources created by Azure Stack, which end users (devs / IT Pros) use to get their work done. Each of those resources is supported by an Azure service or a combination of Azure services.

Level 2 – At the next layer down we get into the actual bits of Azure Stack, starting with the end-user experiences. Each of these guest workload resources is supported by end-user experiences and developer tools that are either unique to, or common across, AS and Azure.  It's important to remember that this is "just Azure," so the experiences that you've come to know and count on there are the same in Azure Stack. This means the same Azure Portal, the same support for a variety of open-source technologies, and the same support for development tools, including integration with Visual Studio.

Level 3 – Unified Application Model from Azure Resource Manager. This means the model of provisioning/accessing public-cloud Azure is the same as that for AS. This piece of technology is central to how Azure and Azure Stack operate as clouds, and we'll talk more about it.

Level 4 – The extensible service framework, which breaks down into three service categories:

  • Core Services – Common services across all resource types (RBAC, subscriptions, gallery, usage, metrics, etc.); these services are a core part of AS (e.g., the Azure Portal).

 

  • Additional Services – The Azure model is extensible, so you can add services to AS and customize the AS in your DC (e.g., Web Apps in TP1 can be installed into AS if you want). In the future this could also include API Apps, Mobile Apps, and Logic Apps.

 

  • Foundational Services – Some services in Azure are the root/basis for other services. This set of foundational services is the main part of what Azure Stack is – VMs, containers, Blob, Queue, and Table storage, Premium Storage, networking (with load balancing and gateways), and platform services – providing resources to customers as well as serving as the basis for other services.

Level 5 – All of this is supported by the Cloud Infrastructure that runs on Windows Server technology on the physical hardware (infrastructure management, compute, storage, networking).

Azure Stack came out at the end of January for Public Preview 1. GA is planned for Q4 CY16.

 

Previously within the Azure PaaS and IaaS environments, auto-scaling of VM instances was done in a manner that replicates additional instances of a running VM.  If you have 'N' instances of a VM in a Cloud Service, scaling up means you now have N+X instances (where X is the number of VMs you increment by in a period of time). In the IaaS VM world you would first pre-allocate VMs into an availability set, then shut them down. Once Azure was triggered (via metrics and threshold indicators that manage the attributes of a scale-out process), Azure would scale instances up (turn VMs on) or down (turn VMs off). Each of these VMs used the same base image as those running in the availability set.

This scaling only worked at the VM level. Other Azure resources, such as storage, were not part of that scaling operation.  With Azure VM Scale Sets you take a VM plus a set of related resources and create a base 'stamp' image.  You then create one or more instances of these resource sets while scaling out. Unlike single-instance VMs, you do not need to independently define and correlate network, storage, and extension resources for individual VMs; the resources scale as a 'set'.

Scale sets leverage Azure Resource Manager (ARM), with its efficient and parallel resource deployment model. With ARM templates, you are able to provision groups of related Azure resources with a JSON script file.   That means you can create a relationship between the VMs that need to scale out and other resources like NICs, VM extensions, storage, and any other Azure resource you want to include in the scaling process.  Using VM Scale Sets allows you to create and manage a group of identical VMs while correlating the creation of many independent resources.  For instance, the scale set you deploy could have multiple VMs behind an external load balancer with a public IP address, inbound NAT rules that let you connect to a specific instance for troubleshooting, and Azure storage accounts.  Just think how hard that would be to do in a scaling situation without Scale Sets.

Right now you use resource group templates for Scale Sets.  Eventually you will be able to create Scale Sets directly from the Azure portal.
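As a sketch of the 'stamp' idea, the skeleton of a scale set resource inside a resource group template looks roughly like this. The name, SKU, and API version are illustrative, and the elided profiles are where you define the base image, admin credentials, and NIC configuration that every instance shares:

```json
{
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "apiVersion": "2017-03-30",
  "name": "myScaleSet",
  "location": "eastus",
  "sku": {
    "name": "Standard_D1_v2",
    "tier": "Standard",
    "capacity": 3
  },
  "properties": {
    "upgradePolicy": { "mode": "Manual" },
    "virtualMachineProfile": {
      "osProfile": { "comment": "admin credentials and computer-name prefix (elided)" },
      "storageProfile": { "comment": "base 'stamp' image reference (elided)" },
      "networkProfile": { "comment": "NIC configuration shared by all instances (elided)" }
    }
  }
}
```

Note that `sku.capacity` is the single knob the scaling operation turns: changing it from 3 to 5 stamps out two more complete copies of the profile.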

 

Azure Dev Test Labs

Practically speaking… wow! Take a look at the new (Public Preview) feature called Azure DevTest Labs. It rocks! This will be an incredibly popular feature for our customers.  With Azure DevTest Labs your development and test team can dynamically use a self-service provisioning mechanism to quickly deploy Windows or Linux test environments in the Azure cloud. There is a built-in process to template specific types of resource allocations. Once provisioned, there is a way to use quotas and policies to manage the expense and usage of those resources. That way you make sure your team does not ring up unnecessarily large bills for unused or over-provisioned resources.

With the advent of Azure Resource Manager and resource group templates, a process exists to repeatedly allocate entities in Azure using a fully reproducible and repeatable method.   The same concept of a template exists in Azure DevTest Labs to allow you to create an on-demand dev/test environment in a few mouse clicks.  You can leverage these templates via source control across groups in your company or across the enterprise. You can use Visual Studio Online along with Git (a version control system) and GitHub (a site where you can publish your Git repositories and collaborate with others) to add private artifacts. Artifacts can be items like tools that you want to install on a VM, actions that you want to run on the VM, or applications that you want to test. Azure DevTest Labs ships with a lot of key artifacts to get going.  There is a blade on the Azure Preview Portal that asks for the Git clone URI and the folder path. The Azure DevTest Lab will talk to the Git repository and sync the content.

DevTest Labs can be worked into your deployment process, where you can set limits to manage the costs and lifetime of your Azure dev/test resources. There are also existing pools of VMs from which you can quickly draw to expedite the setup of your development or test environment.

You manage who has access by using role-based access control roles such as the DevTest Labs User role. This allows you to manage access at different levels (i.e. subscription, lab level, etc.) for using and provisioning resources.

For more info on Azure Dev/Test Labs go to https://azure.microsoft.com/en-us/services/devtest-lab/