Category: Cloud

Deploy an Azure VM from a custom image using ARM templates

by John Grange

Most of the companies we work with need to be able to build Azure VMs from custom OS images. This makes a lot of sense: most organizations use fairly specific software packages and configurations, and ensuring that every virtual machine is based on a known configuration is good practice. But with Azure moving from the classic deployment model to Resource Manager, it’s not as straightforward as it once was.

The best way to deploy VMs from custom images in Resource Manager is via ARM templates. The ARM template model lets you avoid some of the more esoteric aspects of the Azure CLI and Azure PowerShell and instead use simple, declarative JSON templates to define and execute the deployment. Using ARM templates allows you to deploy or update the resources for an entire solution in a single, coordinated operation. The consistency this practice creates improves security, and the templates end up being very handy documentation for your environment.

The first thing you need to do is create a custom VHD. This can be done inside or outside of Azure; if you do it outside of Azure, I recommend building the image in Hyper-V. Microsoft has some great resources available on capturing custom Windows and Linux images. Additionally, you’ll want to fire up Visual Studio (Community Edition will work, and VS Code supports ARM templates too) and follow these instructions for getting the project set up.

The ARM Template

Now the fun part: building out the ARM template. Make sure that your custom VHD is in the same storage account you plan to use for deploying the VM. You’ll also want to have the URL for your VHD handy. You can find it by going to your storage account > blobs > vhds > your_custom_image.vhd.

The ARM template, in its simplest form, is two files: an azuredeploy.json file that defines everything about the resource deployment, and a parameters file in which you define values for certain objects that then get passed into the deployment file.

Below is an example of the main azuredeploy.json file which defines all of the resources in the deployment:
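A condensed sketch of such a template, loosely modeled on the 101-vm-from-user-image quickstart, gives the general shape. All names, sizes, and API versions here are illustrative, and the virtual network and NIC resources a complete template needs are elided for brevity:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUserName": { "type": "string" },
    "adminPassword": { "type": "securestring" },
    "osType": { "type": "string", "allowedValues": [ "Windows", "Linux" ] },
    "sourceImageVhdUri": { "type": "string" },
    "userStorageAccountName": { "type": "string" },
    "vmName": { "type": "string" },
    "vmSize": { "type": "string" }
  },
  "variables": {
    "nicName": "[concat(parameters('vmName'), '-nic')]",
    "osDiskVhdUri": "[concat('https://', parameters('userStorageAccountName'), '.blob.core.windows.net/vhds/', parameters('vmName'), '-osdisk.vhd')]"
  },
  "resources": [
    {
      "apiVersion": "2015-06-15",
      "type": "Microsoft.Compute/virtualMachines",
      "name": "[parameters('vmName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "hardwareProfile": { "vmSize": "[parameters('vmSize')]" },
        "osProfile": {
          "computerName": "[parameters('vmName')]",
          "adminUsername": "[parameters('adminUserName')]",
          "adminPassword": "[parameters('adminPassword')]"
        },
        "storageProfile": {
          "osDisk": {
            "name": "[concat(parameters('vmName'), '-osdisk')]",
            "osType": "[parameters('osType')]",
            "caching": "ReadWrite",
            "createOption": "FromImage",
            "image": { "uri": "[parameters('sourceImageVhdUri')]" },
            "vhd": { "uri": "[variables('osDiskVhdUri')]" }
          }
        },
        "networkProfile": {
          "networkInterfaces": [
            { "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" }
          ]
        }
      }
    }
  ]
}
```

The important piece is the storageProfile: the image.uri points at your custom VHD, and the vhd.uri tells Azure where to write the new OS disk, which is why both must live in the same storage account.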


And here is an example of a parameters file. In this instance we’re using it to pass some simple values into the deploy file, namely the username and password, storage account name, VHD URI, and VM specifications.
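A minimal parameters file along those lines (every value below is a placeholder, not a real account or credential) might look like:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUserName": { "value": "azureadmin" },
    "adminPassword": { "value": "<your-password>" },
    "osType": { "value": "Windows" },
    "sourceImageVhdUri": { "value": "https://mystorageacct.blob.core.windows.net/vhds/your_custom_image.vhd" },
    "userStorageAccountName": { "value": "mystorageacct" },
    "vmName": { "value": "customimage-vm01" },
    "vmSize": { "value": "Standard_A2" }
  }
}
```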


As you can see, the parameters file is pretty simple, and it’s actually where most of the action happens (i.e., where the values specific to this deployment are defined).

I would recommend starting with a very basic existing template; you can find one in the Azure Quickstart Templates repo on GitHub. Go ahead and browse to the 101-vm-from-user-image template and use that as your baseline. There are cleaner ways to do this, but the way I do it is to copy the contents of the azuredeploy file and the parameters file into the corresponding files in your Visual Studio project.

Once you have an azuredeploy.json and azuredeploy.parameters.json that match up with the quickstart template, you’ll want to adjust the values to match your Azure resources. As you can see above, I added parameters that define the storage account I want to use (this is important because it must be the same storage account the VHD is stored in) and reference the URI of the VHD itself. You can add any values you want passed into the deploy template as part of this operation.


Once you’ve defined all the necessary values for your resources in the template files, it’s time to deploy. In the right pane of your Visual Studio window you’ll see the Solution Explorer; right-click on your project name and select ‘Deploy’. You’ll be prompted to enter your Azure credentials if you haven’t already, and then you’ll see the deploy to resource group window.


Choose your subscription and a resource group, making sure the resource group matches up with the resources you defined in your template. Now you’re ready to go ahead and hit the deploy button! I don’t think I’ve ever done one of these that didn’t error out at least once, so don’t worry if your first one fails. You’ll be able to see the output of the operation as it happens, and if it fails you can scroll through and see what the error was. Most of the time it’s something simple, like a resource not having a valid name or having a duplicate name.


Change controls are key to enterprise cloud success

by John Grange

The conventional wisdom about enterprise cloud adoption has always been that big companies are too big and too security conscious (paranoid?) to use the public cloud. No matter the cheap scale or business agility it provides, their concerns about data privacy, governance, and control will always keep them from making the public cloud leap. Well, today we know that this isn’t exactly the case anymore. Azure, AWS, and now Google have touted growing enterprise adoption of their respective cloud platforms. But the majority of enterprise cloud migrations are still in front of us, and the long-tail opportunity is enormous. This raises the question: what is the key to enterprises finally gaining a level of comfort with the public cloud, and how can they ensure success?

Very simply, the fear of not being able to control change is what kept enterprises away from the public cloud in the first place. So it should come as no surprise that the companies seeing the greatest success in the public cloud today have evolved their legacy change controls to be relevant in an automated cloud environment. Change control is paramount in the cloud because when provisioning is automated, data and computing are distributed, and there is little friction to activating resources at scale, environments have a tendency to become disordered. We like to call this data center entropy; to be fair, it exists in on-premises data centers as well, it’s just magnified by the very nature of the public cloud.

It’s much more challenging to keep a public cloud environment from becoming disorganized and unmanageable than traditional infrastructure. Simple things like naming conventions, which provide great utility in tracking and auditing, are very difficult to maintain when cloud services can be provisioned with ease by anyone, and with default values (that may or may not violate your internal policies). It’s also important to recognize that the automation and speed the cloud provides are the source of its real advantage, so you want to be able to track changes while still allowing your organization to fully leverage those powerful capabilities. The bottom line is that you can’t look at change control through the same lens as before.

To be successful, you’ll want to incrementally look at each part of your infrastructure operation (network, storage, keys, VM images, VM provisioning, OS configuration) and determine the best way to track and alert on change based on factors like frequency of change, risk, and regulatory requirements. All changes should be tracked, to be sure, but you’ll want to be judicious about alerts to cut down on noise. For example, you should track VM creation and de-allocation; however, in a cloud environment VMs go up and down frequently, so maybe you don’t want to be alerted to every single operation. But a change to, say, port rules on a subnet? That change carries serious risk potential for your network and the overall security of the environment, so you’ll want to track it and know about it right away. Looking at the environment holistically and identifying how you interact with each component and service helps you determine the appropriate way to address change within the context of a fast-moving, always-changing cloud platform.

By tailoring change controls to the specific components, you begin to have meaningful visibility into your public cloud data center. Because your public cloud data center is completely programmable and extensible, you gain the ability to have much more control over your environment than you ever did on-premises. There is great power in having such nuanced visibility and control over your environment, and that translates into enhanced security, better uptime, and improved efficiency.

Change control is at the heart of making a public cloud environment production-ready. Being production-ready, to me, is tantamount to being successful, because if you’re only putting non-essential workloads into your public cloud environment, your organization has probably reaped very few cloud advantages. Like everything else cloud-related, good change control in the cloud involves rethinking existing processes to meet new requirements with a different set of capabilities and constraints. Where I see companies fail is in trying to force legacy change controls (and other processes) onto a cloud environment, which slows things down, negates the benefits, and limits the enterprise’s ability to be successful in the cloud.

Is a DevOps strategy a prerequisite for IT transformation?

by John Grange

Earlier this week I was fortunate enough to speak on a panel at AIM Infotec on the topic of IT transformation. My role on the panel was to comment on the security implications of the cloud. It takes some serious effort (if not charisma) to get people engaged at talks like these. But these days, when you start talking more directly about the cloud and how larger organizations can best embrace it, you’ll notice people perking up, since these issues are finally top-of-mind for most businesses. It’s very clear to me that in the enterprise the desire for a real transformation is there; they just need the right strategy. The panel discussion confirmed this and, from my perspective, the line of questioning from the audience was a perfect representation of where IT leaders’ heads are at.

So what are the C-Suite and IT leaders thinking right now? Well, one thing is clear: they want to get out of the hardware business. There are many implications to an organization moving away from owning infrastructure. Decision makers are seeing that the value to the business lies in speed-to-market, customer experience, and business agility. All of these things can be achieved without owning any infrastructure and, in fact, owning infrastructure hinders your ability to achieve those goals effectively. The questions we fielded from the audience were informed and relevant, signaling that these are hot issues at their companies.

I thought I’d jot down a few of the really good questions we received from audience members and address each one. What struck me about the questions was how many times I’ve heard these same things asked by customers and prospects just recently. These topics are top-of-mind in the industry, and how they’re addressed in board rooms is going to determine whether those companies can really transform how they use and deliver IT.

Is a DevOps strategy a prerequisite for IT transformation?

DevOps is a loaded term – not unlike ‘cloud’ – which has come to mean so many things that it no longer means anything specific. In reality, DevOps is a methodology that brings together the ideas of agile software and agile operations to optimize the software development lifecycle. If organizations want to truly move faster and be more competitive, that often means deploying new code faster and more frequently. To make this happen, operations (which includes systems, security, networking, DBAs, etc.) and application development need to work together seamlessly as part of a software development pipeline. When these two formerly disparate entities – operations and development – are brought together, the delivery of technology becomes more streamlined, enabling organizations to be more competitive and react faster to business change. That is the essence of IT transformation.


If our corporate data footprint includes data that falls under HIPAA, can we even look at public cloud?

The myth that the cloud is not secure, or can’t be used to store compliance-regulated data, is strange because it couldn’t be further from the truth. Go take a look at Azure’s or AWS’s compliance certifications for their facilities and platforms and find me the corporate or MSP data centers that are better. I’m sure there are some, but not many. For example, GE Healthcare moved all of their core customer solutions (HIPAA compliant) to Azure.

Compliant workloads can absolutely go to the cloud, but like anything else, you need to plan by identifying the sensitive data and making sure you understand how it will be accessed and how it will flow through the architecture. The most important thing is to understand how the shared responsibility model impacts your data. Make sure you’re capturing the right logs, since the cloud environment is likely different from your on-premises environment. Also, leverage vendor tools to help secure those workloads and make your life easier.

How does security and management change, if at all, in a cloud environment?

The cloud is a paradigm shift. Where infrastructure used to be permanent, with capacity meant to last years, the cloud is ephemeral, with servers and resources going up and down while you pay only for what you use. The flexibility and variability of the cloud, combined with the tools and interfaces those platforms expose, present some challenges around security and management that are unique to the cloud model. Companies shouldn’t try to lift and shift every process into the cloud; instead, they should look at the intent of each process and determine the best way to accomplish it in the cloud. Many times the management process or security procedure will translate to the cloud just fine; in other scenarios you may have to adjust, use a new tool, or tweak the process. Yes, management and security practices will likely change in the cloud, but you ultimately gain more capability, and thus more control, as a result.

When looking at cloud providers, how should we approach portability and avoid provider lock-in?

The good news is that virtualization is mature and fairly standardized, with most workloads capable of moving into and out of a provider with relative ease. Both Azure and AWS have VMware connectors, and most MSPs are using VMware or Hyper-V, which have supported migration paths between the platforms. By and large, provider lock-in is not much of an issue for IaaS. The place to be careful is PaaS, including access management.

As soon as your application relies on platform APIs for authentication, or on unique attributes of a database-as-a-service backend, you cannot just move your application without extra work to remove the dependency on those platform-specific components. If you’re going to leverage PaaS, make sure your team clearly understands the application’s dependencies and that you have a known path off of that platform should the situation ever arise. The key here is to know what you’re getting into and plan for it.


5 reasons why Azure Site Recovery is the best place to start your migration to Azure

by John Grange

For larger companies, the hardest part about looking at using Azure is picking a good place to start. Conventional wisdom has been to start with a dev/test environment or some other non-production application and go from there. However, even with that approach, you must still address the internal cloud-fear-mongers who are terrified of infrastructure they can’t reach out and touch. Sometimes that fear, or maybe it’s just a general lack of comfort, comes from the top which can be the kiss of death for a cloud strategy.

To make a move to Azure a reality, companies need to accomplish two things: 1) show corporate leadership that the tangible benefits of utilizing Azure far exceed the less tangible risks, and 2) establish some data presence in Azure, no matter how non-essential, to begin to set the expectation that data CAN exist outside of your company’s four walls. Once data is in the cloud and leadership sees that their world is not coming to a fiery end, your company will start to feel more comfortable with the cloud.

For many companies, Azure Site Recovery (ASR) is a perfect place to start their Azure strategy. For starters, with pressure on capital budgets, CIOs want to avoid buying more new hardware. Disaster recovery has long been a huge cost center for IT departments, and companies often spend as much or more on DR than they do on their production environments. With ASR you can completely remove the need to buy and support expensive hardware and software for DR, but it also accomplishes something else that’s very important: ASR gets your company’s data up to Azure, which has some beneficial, if partly psychological, implications. Once your company’s data is on Azure, a desire to do more with Azure will likely emerge because the benefits are huge.

ASR has evolved into a very robust DR service that can fully replace many enterprise DR programs. Keep in mind, you can also dip your toes in the water with smaller ASR deployments too – just protect a small subset of machines or a single app to start. The key is to demonstrate the cost savings of ASR as a DR solution and begin to get data into Azure giving you the start of a cloud migration strategy. To help, here are the top 5 reasons ASR is a great DR solution and the best place to start your company’s journey to Azure:

1) Security

Security is generally the cloud concern that keeps executives awake at night. Leadership may voice concerns about ASR not providing the level of security that the on-premises production environment enjoys. The key thing to remember is that Azure is almost certainly more secure and compliant than the vast majority of corporate data centers. With certifications such as HIPAA, PCI-DSS, SOC 1, 2, and 3, FedRAMP, and FISMA, it’s hard to look at Azure as anything but enterprise-ready. With ASR, all transmitted data is encrypted, and it supports live replication of data encrypted at rest. It’s secure and compliant out-of-the-box, and all you have to do is use it.

2) VMware and Hyper-V integrated

One thing that surprises a lot of people is that ASR integrates directly with customers’ VMware vCenter environments. ASR will automatically discover virtual machines and replicate those machines to Azure. Believe it or not, many companies use ASR not for DR, but as a tool to migrate workloads to Azure, using live replication to make it a zero-downtime operation. The technology is powerful, and the fact that it integrates with your enterprise hypervisor makes it seamless.

3) Easy Fail-over Testing

Anyone who has deployed an enterprise DR program knows that testing fail-over often proves elusive. Many companies have never actually tested their DR plan at all, and if they do, it’s a major undertaking. With ASR, fail-over happens at the click of a button, and your ASR deployment can be configured in a way that allows for frequent tests that do not disrupt production at all. One of the major selling points for ASR is that it allows for greater availability because it’s so easy to test and audit the fail-over process and, ultimately, the business continuity plan.

4) Cost

ASR pricing is simple and inexpensive – two characteristics that traditional enterprise DR packages are devoid of. You simply pay by the average number of virtual machines protected over a 30-day period – no software licensing, no hidden fees. ASR is agentless, so as you spin VMs up and down in your data center they’re automatically protected, and you’re only paying for what you use. Each protected VM is $54 per month. After that you merely pay for the storage being consumed, which can be under $0.03 per gigabyte depending on the type of storage you’re using. For example, protecting 10 VMs with roughly 1 TB of replicated data would run about $540 per month plus around $30 in storage. From a cost standpoint, ASR is almost certainly more appealing than alternative DR programs.

5) Fully Customizable

ASR allows you to design recovery plans that define what happens when you fail over. You can control how Active Directory gets handled, the order in which resources come online, and addressing schemes; you can even drop in scripts to perform specific tasks on a machine at a given point in the process. These are all well-documented features and allow IT teams to make ASR meet their organization’s unique DR needs while providing a quick and inexpensive path to a full Azure migration down the road.



Linux on Azure not only isn’t an after-thought, it’s a first-class citizen

by John Grange

Microsoft isn’t joking when they say they love Linux. If you’ve been in technology for a while, then this whole new open Microsoft, one that’s progressive and wants its users to use any technology on all Microsoft platforms, is a little weird. But despite the initial awkwardness – Steve Ballmer did once call Linux a cancer – the new Microsoft is the real deal when it comes to open source and Linux.

If there were any questions about Microsoft’s sincerity when it comes to their open source commitment, look no further than their flagship .NET Framework, which is at the core of every Microsoft technology: they actually open sourced it. Not only did they open source .NET but, in a surprising and welcome move, they put the source code out on GitHub. Outside of their own open source contributions, they’ve invested heavily in making Linux a first-class citizen. And on Azure, Linux is indeed just that: a first-class citizen.

Microsoft’s biggest problem with Linux on Azure is that their reputation precedes them. I constantly talk to CIOs who simply aren’t aware that their Red Hat workloads are well supported on Azure and will run great. Or that PowerShell DSC runs natively on Linux and plays a part in Azure Automation – DevOps, anyone? To them, Azure isn’t even an option. We end up spending a lot of time educating IT leaders on all of these formerly out-of-character things Microsoft does today that make Azure really powerful and transformative for their organizations.

Given the misconceptions of Linux on Azure, I wanted to put together a list of some important things you need to know about Linux on Microsoft Azure.

Support for a wide range of Linux distributions

Microsoft has a wide range of endorsed Linux distributions on Azure. An endorsed Linux distribution is one that Microsoft will formally support and that is available in the Azure Marketplace. With a range of versions of CentOS, Red Hat Enterprise Linux, and Ubuntu right alongside Windows Server, Azure can handle most enterprise technology stacks out-of-the-box.

Well documented and supported process for building custom Linux OS images

The reality is that while many companies can just go out to the Azure Marketplace and build VMs from one of the official images, larger enterprises likely need a custom configuration to meet more complex requirements. Microsoft has detailed documentation on the process of creating custom Linux images for Azure. I’ve found this documentation to be very good and kept up-to-date. Additionally, Hyper-V is great for building custom VM images because it will automatically inject certain key drivers into your image.

According to Microsoft, 60% of the Azure Marketplace images are Linux-based

Yes, you read that right, 60% of the Azure Marketplace images are Linux-based. This means that other vendors such as Oracle, Red Hat, and TrendMicro have Linux-based images with their software preconfigured and ready to go.

You can purchase Red Hat Enterprise Linux directly through Azure

Through Microsoft’s recently announced partnership with Red Hat, you can now purchase your Red Hat subscription through the portal and your operating system will be supported by both Red Hat and Microsoft. Previously, you could bring your Red Hat Enterprise Linux subscription over to Azure but now it’s an integrated part of Azure.

Microsoft’s commitment to Linux is undeniable at this point. With so many enterprise organizations looking to the public cloud for greater speed and efficiency, it’s really important that those companies understand that their Red Hat and Oracle deployments will work on Azure.


Azure Site Recovery for VMware disaster recovery just got a lot easier (and cheaper)

by John Grange

Azure Site Recovery (ASR) is an extremely powerful disaster recovery (DR) service that’s built into Microsoft Azure. Unlike Azure Backup, which is very inexpensive but lacks some key enterprise features, ASR is an enterprise-class solution for replicating protected VMs and physical servers and then orchestrating a pre-designed recovery plan on Azure. Much to the surprise of many, there is native support for VMware environments, which makes it a viable DR solution for the droves of organizations running VMware in their data centers.

ASR has had commercial support for VMware since summer ’15, a result of Microsoft’s acquisition of InMage Systems, Inc., a company that developed industry-leading cloud business continuity technology. Fortunately, this has been a major focus for Microsoft, and VMware support is a first-class citizen within the ASR feature set, rounding out a very strong cloud DR solution for enterprises. Now Microsoft has announced enhanced functionality for VMware DR using ASR, and it’s a huge improvement over the previous iteration, bringing with it a lower total cost and a simpler implementation.

We’ve implemented ASR’s enhanced VMware to Azure scenario and here are the big changes:

  • No more need for live VMs in your Azure environment to enable replication and orchestration
  • Implement ASR on source machines without reboot
  • Less disruptive testing and certification of failover and failback
  • Drastically less complex capacity planning

Our experience with the previous VMware to Azure scenario was that it worked well, but on-boarding, specifically the capacity planning component of it, was confusing to customers. Having to right-size a target server with the correct number of attached disks to correspond with protected source volumes, and then having to work within Azure storage constraints to ensure adequate IOPS availability, was cumbersome and time-consuming. Also, being able to test failover and failback required significant upfront planning to avoid disruption. These challenges are not unique to ASR; these are the types of challenges we’ve all dealt with when implementing any DR solution. The good news is that ASR’s new enhanced scenario resolves these issues and has made the total solution less expensive to implement.

How does ASR’s enhanced VMware to Azure scenario lower the total cost of the solution?

  • You can deploy ASR components on-premises, avoiding the cost of live resources on Azure
  • Protected machines are replicated directly to Azure storage, removing the time consuming exercise of planning capacity on the previous target servers
  • A unified installer cuts down on the time to on-board
  • Testing failover and failback is much easier and reduced to a simple process

In previous iterations of ASR, a great deal of time was spent on-boarding. Getting customers comfortable with IOPS ratios, storage limitations, and learning new workflows all made the implementation take longer and involve more people. The new model reduces the hard costs of having expensive instances running in Azure to enable replication and orchestration, while also reducing the softer costs of the time and people involved in on-boarding and testing. While ASR still represents a new paradigm for many organizations accustomed to more traditional DR approaches, the latest iteration of ASR has even more distinct cost-saving advantages to go with its robust functionality, making it even more difficult for technology leaders to ignore.

FAQ: Achieving compliance with newly released Azure disk encryption for Windows and Linux

by John Grange

Early this year, we wrote about encryption-at-rest as part of our security series. Traditionally, encryption-at-rest, or disk encryption as we’ll refer to it going forward, has been costly and difficult to implement in a public cloud environment. As cloud offerings have matured, they’ve become much more appealing to the enterprise: security is improved, competition has driven costs down, and new features allow for unprecedented speed and efficiency when compared to the traditional data center. Azure is at the forefront of enterprise cloud functionality, APIs, and integration with core enterprise applications like Active Directory, but until very recently there wasn’t a native way to implement disk encryption.

Up until now you could use Microsoft’s BitLocker inside your Windows VM, or for Linux you could use any number of filesystem or disk encryption options, but none of these are integrated in any way into the broader Azure environment. This means management and security of your keys and secrets is an issue, and it drastically complicates deployments. Even if you’re using configuration management platforms like Ansible, Chef, or DSC, you now have to add a complicated new layer to each deployment.

Now, as part of Azure’s preview portal – Azure’s platform for the future, built on Azure Resource Manager (ARM) – you can deploy native disk encryption for existing VM disks as well as disks for new VM builds. This new functionality adds much-needed flexibility for organizations that require disk encryption to meet data security and compliance commitments.

In an effort to provide more definition around this new functionality while remarking on common questions we’ve been asked in the field, I will use an FAQ format to address the details of Azure disk encryption for Windows and Linux.

Isn’t Azure disk encryption for Windows just BitLocker?

For Windows, Azure disk encryption is based on BitLocker technology, but it’s been built into Azure as a native feature that is configured outside of the VM via ARM. Key management is integrated with Azure Key Vault, providing an end-to-end disk encryption solution for your Windows IaaS VMs. Disk encryption for Linux VM disks works the exact same way, but the underlying technology is dm-crypt rather than BitLocker.

Can I implement Azure Disk encryption from the service management portal?

No, Azure disk encryption is only available in the Azure preview portal, which is based on ARM. Currently, disk encryption can be implemented via the Azure CLI or ARM JSON templates. New VMs can be created from the Azure gallery with encrypted disks, or you can encrypt existing disks, but both tasks need to be done via templates or the CLI.
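As a sketch of the template route, enabling encryption on an existing Windows VM comes down to deploying the AzureDiskEncryption VM extension against that machine. The property names below reflect the early preview, and the VM name, AAD application values, and Key Vault URL are placeholders, so verify everything against the current documentation:

```json
{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "myvm/AzureDiskEncryption",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.Azure.Security",
    "type": "AzureDiskEncryption",
    "typeHandlerVersion": "1.1",
    "settings": {
      "AADClientID": "<aad-app-client-id>",
      "KeyVaultURL": "https://mykeyvault.vault.azure.net/",
      "VolumeType": "All",
      "EncryptionOperation": "EnableEncryption"
    },
    "protectedSettings": {
      "AADClientSecret": "<aad-app-client-secret>"
    }
  }
}
```

For a Linux VM the shape is the same, with the extension type swapped for the Linux variant (dm-crypt under the hood).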

How do I manage my keys?

Azure Key Vault is used to manage keys and policies, which means no application has direct access to your keys and you get a central place to audit their use.
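For illustration, a vault usable for disk encryption is an ordinary Key Vault resource with the disk-encryption flag enabled; the vault name, tenant ID, and API version below are placeholders:

```json
{
  "apiVersion": "2015-06-01",
  "type": "Microsoft.KeyVault/vaults",
  "name": "mykeyvault",
  "location": "[resourceGroup().location]",
  "properties": {
    "sku": { "family": "A", "name": "standard" },
    "tenantId": "<aad-tenant-id>",
    "accessPolicies": [],
    "enabledForDiskEncryption": true
  }
}
```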

Can I audit the keys and policies?

Yes, you can add key vault logs to your log pipeline for auditing.

Can I encrypt the disks on existing VMs or just newly created ones?

You can encrypt disks on newly created VMs as well as existing VMs. To create a VM with encrypted disks, it needs to be created from the Azure Gallery. To encrypt the disks of a VM built from a custom image, you would provision that VM with your custom image and then go through the process of encrypting the disks of an existing VM.

Does disk encryption work the same on Linux as it does with Windows?

Yes, the Azure disk encryption process is the same whether you’re using a Linux or Windows OS. You use the same CLI or ARM template configuration to implement disk encryption on Linux as you do on Windows, and keys are managed in the same way. The only difference is the underlying technology used for encryption, but that technology has been abstracted out of the process.

What are the current limitations?

Currently, there are some limitations to this initial implementation of Azure disk encryption. The main limitations today are that all of your disks and keys must be located in the same region, there is currently no integration with on-premises key management systems, and you cannot disable encryption once it has been set up.

Cloud APIs aren't just for developers: IT ops and security may see the greatest gains

by John Grange

IDG (parent company to media brands such as CIO, CSO, Network World, and more) recently released its 2015 Cloud Computing Study, in which it surveyed nearly 1,000 qualified executives from enterprise organizations around the country. Buzzword-laden studies like this can often be abstruse and lack accessibility, but they still provide keen insights into the issues real-world decision makers are facing on the front lines. I find it valuable to go through studies like this and compare them to what I'm hearing from customers and seeing from industry folks on Twitter.

In this particular study, one of the conclusions really stood out to me:

90% of enterprises are relying on APIs in their cloud integration plans for 2016.

To me, this says a lot about the maturity of the cloud market – both the vendor offerings and the customers using them. When we sit down with companies to talk about their strategy for infrastructure and what a cloud transformation looks like to them, there's still a mindset characterized by lifting existing apps off existing virtual machines in an existing data center and then moving them to a new "cloud" where they sit on new virtual machines. A transformation this is not. Most organizations never achieve any real cost savings with this approach, and yet, until recently, it's been the primary mode by which enterprises have adopted cloud.

At the heart of the cloud technology revolution are APIs that provide interfaces into your entire stack – network, compute, storage, and identity. These APIs drive innovative and invaluable functionality such as automation, autoscaling, variable costing, policy enforcement, and security. From my perspective, if companies aren't taking a cloud-native approach, one that utilizes the full feature set and leverages native APIs, they will never see the cost savings, speed increases, or enhanced security (yes, the modern cloud model provides the opportunity to ENHANCE security compared to the on-premises past). To see that 90% of these enterprises will be utilizing APIs in their cloud plans in 2016 is a huge step forward for the market at large. The enterprise is finally starting to get it!

The major misconception about cloud APIs is that they're just about giving your software development teams convenient ways to write code into their apps that interacts with the underlying cloud environment. That's true, and very powerful, but software development skills are not required to leverage many of these services – and organizations are rightly leery of handing tasks like network configuration over to developers. Microsoft's Azure Resource Manager uses declarative JSON templates (no coding required) and PowerShell to interact with the entire Azure environment. These are operations tools! Nowhere is the impact of cloud APIs more transformative than in IT operations and security. From mass automation of deployments, to streamlined access to log data, to enforced consistency, at no time has IT been able to move so quickly while supporting so many different technologies.

From my perspective, these are the top benefits of cloud APIs for IT operations and security:

-Coordinated automated deployments (roll entire environments out with a single action).

-Easier collection of logs across the entire environment for auditing and compliance.

-Enforce configuration policies across network, storage, and servers for better security and uptime.

-Adjust on the fly to new security policies or application architectures. Change becomes easier!

-Power servers up and down on a schedule to maximize cost savings by only paying for resources you’re using.

Companies that don't just move workloads to the cloud, but actually embrace the cloud-native mindset, are able to transform the way they innovate and deliver technology. The others just end up with new virtual machines. Utilizing cloud APIs to achieve your IT operations and security requirements is a game changer when it comes to making an impact.

Red Hat on Azure: A Case Study for Cloud Transformation

by John Grange

On the heels of the recent announcement of Red Hat Linux becoming available on Azure, I feel like it's the perfect time to share our recent experience delivering Red Hat Enterprise Linux (RHEL) solutions to our customers on Azure. If you've been around the industry for a while this may seem counterintuitive, but there is actually robust support for Linux on Microsoft's Azure cloud. Today, Azure provides endorsed VM images for the popular Ubuntu, the nascent CoreOS, Oracle Linux, SUSE/openSUSE, and CentOS via OpenLogic, but, very conspicuously, no Red Hat Enterprise Linux. Microsoft supports these endorsed distributions, and they even have a very capable Linux team which has been helpful in my experience. I've heard rumblings that as much as 40% of VMs on Azure are running Linux – this clearly isn't the Microsoft of old.

Azure has settled firmly into the number-two spot in the cloud wars, with Amazon's venerable AWS still firmly out in front. They've become Amazon's only real competitor by being the cloud that appeals to enterprise sensibilities. Microsoft has significant CIO mindshare, and those CIOs are often quicker to trust it with the cloud than AWS, something Matt Asay highlighted in InfoWorld earlier this year. If Microsoft's software presence in the enterprise is a key cloud growth driver, then Red Hat, whose products sit alongside Microsoft's in nearly all the same organizations, is a natural extension.

This particular customer case involves a mid-sized enterprise with RHEL requirements and a strategic desire to adopt Azure as a cloud platform. The company's IT leadership had been working on a strategic vision to leverage the cloud (they're all doing this at this point). They trusted Azure because they're subject to PCI-DSS compliance and other standards; they use Active Directory, Exchange (some Office 365), and other Microsoft products; and they had identified their QA environments as an optimal jumping-off point. This is a really common scenario, since initial cloud deployments are most successful with lower-risk, non-production workloads. However, in this situation the customer's production environment was running Red Hat 5.x, a version for which the CentOS analog isn't even supported on Azure. Yet RHEL 5.x was a hard requirement for the QA environment on Azure, and it thus dictated the relative success or failure of the cloud initiative.

Luckily, we run a substantial amount of Linux on Azure, in addition to a 50/50 Linux/Windows split in our own data centers. This background gave us confidence that we could get RHEL 5.x running and stable inside Azure. The first step was to construct a custom RHEL 5.11 Azure VM image to meet their specs, and this is best done inside a Hyper-V environment.

1. Creating a Custom Linux VHD in Hyper-V for Upload to Azure

Microsoft has very good documentation as well as blog posts on building custom OS images for Azure, including instructions for some specific Linux distributions. A few things to be mindful of:

-Use standard partitions when installing RHEL and NOT LVM.

-Register the OS with RHN so you can use Yum and update key packages.

-You’ll likely have to install LIS 4.0.7 and not 4.0.11 as the latter doesn’t have an install package for RHEL 5.11.

-Azure requires OpenSSL v1.0+, which is not supported on RHEL 5. However, Azure is really looking for the Heartbleed patch that was part of OpenSSL 1.0 and which Red Hat backported into 5.x. You should be able to run yum update openssl and bring it to 0.9.8 with the correct patch levels that work with the Windows Azure Agent.

-Waagent requires Python 2.6, which isn't available on RHEL 5.x. You'll need to either use the EPEL repository or build your own RPM from source. I recommend the former.
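
As a rough sketch of that last point, the EPEL route looks something like the following on RHEL 5.x. The exact epel-release package URL and version change over time, so treat these commands as illustrative rather than copy-paste ready:

```shell
# Enable the EPEL 5 repository (package URL/version is illustrative)
rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm

# Install Python 2.6 alongside the system Python 2.4
yum install -y python26
```
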

2. Upload deprovisioned VHD to Azure

Once you go through the cleanup steps on the custom image and finalize everything with the 'waagent -deprovision' command (think sysprep for Linux), you're ready to upload your VHD to Azure. You'll want to create a storage account in Azure with a valid DNS name, and then a container inside that storage account. This will form the URI you point your VHD upload at.
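
For reference, the deprovision step is run as root on the VM just before shutting it down for capture. The +user and -force variants shown here are optional; use the plain -deprovision form if you want to keep the provisioning account:

```shell
# Remove provisioning data, SSH host keys, and (with +user) the default account
waagent -deprovision+user -force
```
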

Once you've got your storage account and container set up, you can upload the VHD using Azure PowerShell. Again, Microsoft has some great documentation on performing this action. Once Azure PowerShell is all linked up to your Azure subscription, the command is very simple:

Add-AzureVhd -Destination <BlobStorageURL>/<YourImagesFolder>/<VHDName> -LocalFilePath <PathToVHDFile>

3. Create Image in Azure From VHD and Deploy Initial VM

Once you have your VHD uploaded, it's pretty simple to create a new image from it. Once that image is created, it's available to you when creating a new virtual machine from the Gallery. You'll want to provision a VM with that image right away to make sure that the waagent is interacting appropriately with Azure.
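
In Azure PowerShell (service management mode), registering the uploaded VHD as an image is a one-liner along these lines. The image name and blob URL below are hypothetical:

```powershell
# Register the uploaded VHD as a reusable OS image (values are hypothetical)
Add-AzureVMImage -ImageName "rhel511-custom" `
    -MediaLocation "https://mystorage.blob.core.windows.net/vhds/rhel511.vhd" `
    -OS Linux
```
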

Azure Resource Manager Deployments

For enterprises to truly leverage the cloud's scale and efficiency, and not just 'lift and shift VMs', their environments need to be implemented with a cloud-native mindset. This means thinking about automation, thinking about resources as ephemeral, and right-sizing resources to fit needs, knowing scale comes easy. With the custom RHEL 5.11 image operational on Azure, it was time to fully implement their environment. Working with their internal infrastructure, network, app dev, and security teams, we put together the comprehensive requirements and implementation plan for the QA environment. For enterprises to adopt the cloud, there needs to be a translation of enterprise security policies and standards to the cloud environment. We use Azure Resource Manager (ARM) to accomplish this.

With ARM we can build declarative JSON templates that define every aspect of the environment. Our customer has specific requirements around user access controls, subnet configuration, and VPN configuration, as well as the desire to have workloads spin up and down on a schedule to control costs. Our team took their requirements and built out ARM templates that defined their unique configuration and brought them a level of deployment consistency they don't even have in their own data center. For our customer to deploy an entire multi-tier, AD-integrated, and compliant QA environment, all their infrastructure team needs to do is run a single Azure PowerShell command. Additionally, we have tools that capture logs of these processes and enforce the consistency in policy and configuration.
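
That single command is an ARM resource group deployment along these lines. The resource group and file names here are hypothetical, and the exact cmdlet name depends on your Azure PowerShell version:

```powershell
# Deploy (or update) the entire environment from a template in one operation
New-AzureRmResourceGroupDeployment -ResourceGroupName "qa-environment" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json
```
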

The Business Case

This has been an interesting technical case study in that we worked around dependencies and other technical issues to get a clean and certified version of RHEL 5.11 running and stable on Azure but that obscures how transformative this deployment is to the existing business.

-QA environment spin-up went from days to hours.

-Not only are they not having to buy and maintain hardware but the cloud resources they’re now using are right-sized and they’re using automation to ensure the servers are running only when they need them.

-With ARM and the automation we've built for them, they actually have more consistency in their cloud environment in Azure than they do on-premises.

-The speed and cost savings have funded other innovative projects, which solidifies the cloud business case.

Companies don't need to adopt a micro-services architecture or be running the latest software to reap the benefits of cloud transformation. Viewing the cloud as simply a new, modestly less expensive place to put virtual machines obscures the fact that those cost savings pale in comparison to those arising from automation, agility, consistency, and the ability to coordinate complex infrastructure operations faster than ever. Gaining an orientation around the cloud beyond new virtual machines, a cloud-native orientation, if you will, is a key factor in extending cloud transformation to business success.

Highly secure, highly available virtual machines on Azure

by John Grange

Microsoft's Azure public cloud offers hyper-scale infrastructure and availability around the globe, a feat that's difficult for even large enterprises to achieve. One of the key paradigm shifts of the past half-decade or so is the move to apps that scale horizontally and are "built for failure". Being built for failure sounds dubious; however, as a concept it's been shaped by the fault-tolerance and high availability that tech-driven businesses require today. With that being said, traditional enterprise applications can absolutely leverage hyper-scale infrastructure to achieve continuous operation while maintaining data security and compliance certification.

One of our goals is to be the on-ramp to hyper-scale infrastructure and cloud computing for security conscious organizations by making the initial transition and the day-to-day management painless. Our team configures each customer environment to meet their unique performance and scalability needs while adhering to best-practice security standards to help our clients maintain compliance with regulatory standards like HIPAA and PCI.

The most critical component of a Layeredi managed and protected environment is the composition of the individual virtual machines themselves. These VMs are serving up key applications, so data integrity, security, availability, and performance are all addressed through our platform and services on Microsoft's Azure.

Let's profile Layeredi virtual machines on Azure to get more acquainted with the specific components we use and our overall approach, though we'll focus more on security in this post.

Identity and Access Management

If it hasn't already been done, we start by integrating a client's Active Directory with their Azure services via Active Directory sync and configure two-factor authentication. Many departments bypass IT and set up subscriptions without this in place, which puts the company at risk. Next, we configure any co-administrators that are necessary and proceed to use Role-Based Access Control (RBAC) to control which cloud services employees can access and what they can do with those services, through a least-privilege model.
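
As an illustrative sketch of a least-privilege assignment (the account, role, and resource group names are hypothetical, and cmdlet names vary across Azure PowerShell versions), granting a user VM-management rights on a single resource group looks something like:

```powershell
# Scope "Virtual Machine Contributor" rights to one resource group only
New-AzureRmRoleAssignment -SignInName "ops.engineer@contoso.com" `
    -RoleDefinitionName "Virtual Machine Contributor" `
    -ResourceGroupName "qa-environment"
```
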

Network Security

Network security is one of the most important pieces of your overall security design. For each client, we build out custom virtual networks (VNets) that properly segregate the different tiers of the application's architecture, e.g. web tier, application tier, database tier, or even different availability zones. Other network security configurations include VPN and firewall configuration and policy management.
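
As a simplified example, a tiered VNet defined in an ARM template looks roughly like this. The names, API version, and address ranges are illustrative:

```json
{
  "type": "Microsoft.Network/virtualNetworks",
  "name": "qa-vnet",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
    "subnets": [
      { "name": "web-tier", "properties": { "addressPrefix": "10.0.1.0/24" } },
      { "name": "app-tier", "properties": { "addressPrefix": "10.0.2.0/24" } },
      { "name": "db-tier",  "properties": { "addressPrefix": "10.0.3.0/24" } }
    ]
  }
}
```

Keeping each tier in its own subnet makes it straightforward to apply different firewall rules and routing policies per tier.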

Operating System Hardening

We support most standard operating systems, such as the Windows Server family, Red Hat and CentOS, and Ubuntu. Every VM is provisioned via our automated tools with a hardened configuration that ensures every system meets specific standards. Without going into too much detail, this entails things like disabling ssh for root, renaming the administrator account, and custom iptables and Windows Firewall rules, along with a whole lot more. In addition to the setup, we also install some tools we use to do things like gather and inspect logs, monitor configuration changes in real time, and perform non-intrusive anti-malware activities. It's a comprehensive process, and we ensure each and every VM we deploy into your Azure environment includes these enhancements.


Intrusion Detection and Prevention

We use a host-based IPS/IDS that is deployed on every server. These agents report back to a central interface that is tracked in real time by our support engineers. We also configure alerts that automatically open support tickets for specific scenarios.

Performance Monitoring

There should never be a trade-off between performance and security. We implement New Relic on every server, and alerts come right into our support ticket system.

On-Ramp to Azure and the Public Cloud

Most companies either have some public cloud presence or the desire to dip their toe in, but many don't know where to start. Hassle-free migrations, high security, compliance, patching and management, and just having real people providing support make the first jump much easier for mid- to large-sized organizations.