Category: Community

Change controls are key to enterprise cloud success

by John Grange

The conventional wisdom about enterprise cloud adoption has always been that big companies are too big and too security conscious (paranoid?) to use the public cloud. No matter the cheap scale or business agility it provides, their concerns about data privacy, governance, and control will always keep them from making the public cloud leap. Well, today we know that this isn’t exactly the case anymore. Azure, AWS, and now Google have touted growing enterprise adoption of their respective cloud platforms. But the majority of enterprise cloud migrations are still in front of us and the long tail opportunity is enormous. This raises the question: what is the key to enterprises finally gaining a level of comfort with the public cloud, and how can they ensure success?

Very simply, the fear of not being able to control change is what kept enterprises away from the public cloud in the first place. So it should come as no surprise that the companies seeing the greatest success in the public cloud today have evolved their legacy change controls to be relevant in an automated cloud environment. Change control is paramount in the cloud because when provisioning is automated, data and computing are distributed, and there is little friction to activating resources at scale, environments tend to become disordered. We like to call this data center entropy, and to be fair, it exists in on-premises data centers as well; it’s just magnified by the very nature of the public cloud.

It’s much more challenging to prevent a public cloud environment from becoming disorganized and unmanageable than it is with traditional infrastructure. Simple things like naming conventions, which provide great utility in tracking and auditing, are very difficult to maintain when cloud services can be provisioned with ease by anyone and with default values (that may or may not violate your internal policies). It’s also important to recognize that the automation and speed the cloud provides are the source of its real advantage, so you want to be able to track changes while allowing your organization to fully leverage those powerful capabilities. The bottom line is that you can’t look at change control through the same lens as before.
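To make that concrete, here is a minimal sketch of enforcing a naming convention against a resource inventory. The convention and the resource names below are hypothetical, invented for illustration; they don't come from any particular cloud platform:

```python
import re

# Hypothetical convention: <env>-<app>-<role>-<two-digit number>,
# e.g. "prod-billing-web-01". Adjust the pattern to your own policy.
NAME_PATTERN = re.compile(r"^(dev|test|prod)-[a-z0-9]+-[a-z0-9]+-\d{2}$")

def check_names(resource_names):
    """Return the subset of names that violate the convention."""
    return [name for name in resource_names if not NAME_PATTERN.match(name)]

# A resource provisioned ad hoc with a default or personal name gets flagged.
violations = check_names(["prod-billing-web-01", "JohnsTestVM", "dev-crm-db-02"])
# violations == ["JohnsTestVM"]
```

Running a check like this on a schedule against your cloud inventory catches drift early, instead of discovering it during an audit.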

To be successful you’ll want to incrementally look at each part of your infrastructure operation (network, storage, keys, VM images, VM provisioning, OS configuration) and determine the best way to track and alert on changes based on factors like frequency of change, risk, and regulatory requirements. All changes should be tracked, to be sure, but you’ll want to be judicious about alerts to cut down on noise. For example, you should track VM creation/de-allocation; however, in a cloud environment VMs go up and down frequently, so you may not want to be alerted to every single operation. But a change to, say, port rules on a subnet? That change carries serious risk for your network and the overall security of the environment, so you’ll want to track it and know about it right away. Looking at the environment holistically and identifying how you interact with each component and service helps you determine the appropriate way to address change within the context of a fast-moving, always-changing cloud platform.
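As a rough sketch of that triage (the event names and risk tiers below are illustrative assumptions, not any platform's actual taxonomy), the idea is to log every change but only alert on the high-risk ones:

```python
# Illustrative risk tiers per change type. Every change is logged;
# only high-risk changes generate an alert.
RISK = {
    "vm.create": "low",
    "vm.deallocate": "low",
    "storage.key.regenerate": "high",
    "network.nsg.rule.change": "high",  # e.g. port rules on a subnet
}

audit_log = []
alerts = []

def record_change(event_type, detail):
    audit_log.append((event_type, detail))       # track every change
    if RISK.get(event_type, "high") == "high":   # unknown types alert by default
        alerts.append((event_type, detail))

record_change("vm.create", "web-03")
record_change("network.nsg.rule.change", "opened tcp/3389 on subnet-a")
# Both changes are in the audit log, but only the port-rule change alerts.
```

Treating unknown event types as high risk by default is deliberate: it's better to tune down a noisy alert than to miss a change nobody classified.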

By tailoring change controls to the specific components you begin to have meaningful visibility into your public cloud data center. Because your public cloud data center is completely programmable and extensible, you gain the ability to have much more control over your environment than you ever did on-premises. There is great power in having such nuanced visibility and control over your environment and that translates into enhanced security, better uptime, and improved efficiency.

Change control is at the heart of making a public cloud environment production ready. Being production ready, to me, is tantamount to being successful, because if you’re only putting non-essential workloads into your public cloud environment, your organization has probably reaped very few cloud advantages. Like everything else cloud related, good change control in the cloud involves rethinking existing processes to meet new requirements with a different set of capabilities and constraints. Where I see companies fail is in trying to force legacy change controls (and other processes) onto a cloud environment, which slows things down, negates any benefits, and limits the enterprise’s ability to be successful in the cloud.

Is a DevOps strategy a prerequisite for IT transformation?

by John Grange

Earlier this week I was fortunate enough to speak on a panel at AIM Infotec on the topic of IT transformation. My role on the panel was to comment on the security implications of the cloud. It takes some serious effort (if not charisma) to get people engaged at talks like these. But these days, when you start talking more directly about cloud and how larger organizations can best embrace it, you’ll notice people perking up, since these issues are finally top-of-mind for most businesses. It’s very clear to me that in the enterprise, the desire for a real transformation is there; they just need the right strategy. The panel discussion confirmed this and, from my perspective, the line of questioning from the audience was a perfect representation of where IT leaders’ heads are at.

So what are the C-suite and IT leaders thinking right now? Well, one thing is clear: they want to get out of the hardware business. There are many implications to an organization moving away from owning infrastructure. Decision makers are seeing that the value to the business lies in speed-to-market, customer experience, and business agility. All of these things can be achieved without owning any infrastructure; in fact, owning infrastructure hinders your ability to achieve those goals effectively. The questions we fielded from the audience were informed and relevant, signaling that these are hot issues at their companies.

I thought I’d jot down a few of the really good questions we received from audience members and address each one. What struck me about the questions was how often I’ve heard these same things asked by customers and prospects just recently. These topics are top-of-mind in the industry, and how they’re addressed in board rooms is going to determine whether those companies can really transform how they use and deliver IT.

Is a DevOps strategy a prerequisite for IT transformation?

DevOps is a loaded term – not unlike ‘cloud’ – which has come to mean so many things that it no longer means anything specifically. In reality, DevOps is a methodology that brings together the ideas of agile software development and agile operations to optimize the software development lifecycle. If organizations want to truly move faster and be more competitive, that often means deploying new code faster and more frequently. To make this happen, operations (which includes systems, security, networking, DBAs, etc.) and application development need to work together seamlessly as part of a software delivery pipeline. When these two formerly disparate entities – operations and development – are brought together, the delivery of technology becomes more streamlined, enabling organizations to be more competitive and react faster to business change. That is the essence of IT transformation.

 

If our corporate data footprint includes data that falls under HIPAA, can we even look at public cloud?

The myth that the cloud is not secure or can’t be used to store compliance-regulated data is strange, because it couldn’t be further from the truth. Go take a look at Azure’s or AWS’s compliance certifications for their facilities and platform and find me the corporate or MSP data centers that are better. I’m sure there are some, but not many. For example, GE Healthcare moved its core customer solutions (HIPAA compliant) to Azure.

Compliant workloads can absolutely go to the cloud, but like anything else, you need to plan by identifying the sensitive data and making sure you understand how it will be accessed and how it will flow through the architecture. The most important thing is to understand how the shared responsibility model impacts your data. Make sure you’re capturing the right logs, since the cloud environment is likely different from your on-premises environment. Also, leverage vendor tools to help secure those workloads and make your life easier.

How does security and management change, if at all, in a cloud environment?

The cloud is a paradigm shift. Where infrastructure used to be permanent, with capacity meant to last years, the cloud is ephemeral, with servers and resources going up and down while you only pay for what you use. The flexibility and variability of the cloud, combined with the tools and interfaces those platforms expose, present some challenges around security and management that are unique to the cloud model. Companies shouldn’t try to lift and shift every process into the cloud; instead, look at the intent of each process and determine the best way to accomplish it in the cloud. Many times the management process or security procedure will translate to the cloud just fine; in other scenarios you may have to adjust, use a new tool, or tweak the process. Yes, management and security practices will likely change in the cloud, but you ultimately gain more capability, and thus more control, as a result.

When looking at cloud providers, how should we approach portability and avoid provider lock-in?

The good news is that virtualization is mature and fairly standardized, with most workloads capable of moving into and out of a provider with relative ease. Both Azure and AWS have VMware connectors, and most MSPs are using VMware or Hyper-V, which have supported migration paths between the platforms. By and large, provider lock-in is not much of an issue for IaaS. The place to be careful is PaaS, including access management.

As soon as your application relies on platform APIs for authentication or on unique attributes of a database-as-a-service backend, you cannot just move your application without extra work to remove the dependency on those platform-specific components. If you’re going to leverage PaaS, make sure your team clearly understands the application’s dependencies and that you have a known path off of that platform should the need arise. The key here is to know what you’re getting into and plan for it.
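One common way to keep that exit path open is to hide platform-specific services behind a thin interface of your own, so a later migration only has to swap one adapter. Here is a minimal Python sketch; the class and method names are made up for illustration, not from any vendor SDK:

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """The app codes against this interface, never a vendor SDK directly."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Local/test adapter; a cloud adapter would wrap the vendor SDK instead."""
    def __init__(self):
        self._data = {}
    def put(self, key, data):
        self._data[key] = data
    def get(self, key):
        return self._data[key]

def save_report(store: BlobStore, report: bytes) -> None:
    # Application logic stays provider-agnostic.
    store.put("reports/latest", report)

store = InMemoryStore()
save_report(store, b"q3 numbers")
```

A provider-specific adapter implementing the same interface keeps the platform dependency in one place, so "moving off" means rewriting one class rather than auditing the whole codebase.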

 

Are you encrypting your data-in-flight? If not, you should be.

by admin

We spend a lot of time working with our customers to align their technology platform choices with best-practices security and compliance standards.

Over the past couple of years there has been a rash of high-profile data breaches and hacks that have rocked the business world. At the same time, there’s been a veritable Cambrian explosion of application frameworks, libraries, languages, and database engines that are part of a new era of cloud-native applications – a direct response to mobile solidifying itself as the platform of the internet. With existential security risks for technology at an all-time high, and powerful cloud-native services popping up at a torrid pace, data security has never been more critical than it is today.

Having a fundamental knowledge of basic application environment security should be required for all of the members of your team. Whether dev or ops, everyone should understand what it takes to keep your data protected. We thought it would be valuable to do a blog series of quick tips for securing your server environments and processes. Let’s start with a security fundamental that is a requirement of nearly all of the main compliance standards: in-flight encryption.

All data that goes over your internal network or the internet is potentially vulnerable. Encrypting data in-flight means that you encrypt data while it’s being transmitted over a network.

Here are some tips to ensure all of your data transmissions are encrypted in-flight:

  1. Don't use FTP for file transfer; it's unencrypted and insecure. Instead, use scp or sftp. Additionally, you can use rsync over ssh for secure transfer with rsync's robust feature set. On Windows, you can transfer files over Remote Desktop, which is also encrypted.
  2. On your web servers, whether you're running Windows or Linux, be sure to use TLS (Transport Layer Security) for https on all of your connections.
  3. From time to time, a VPN is necessary to provide private, encrypted access to your network. We use OpenVPN as well as a hardware-based solution through our firewall. OpenVPN is software based, easy-to-use and is a great tool in your ops toolbox.
  4. When implementing encryption, avoid self-signed certificates wherever possible. It's better to use a certificate signed by a Certificate Authority, so that your public key is always verified by a trusted third party.
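For tips 2 and 4, the client-side detail that matters most is leaving certificate verification on. Here's a minimal Python sketch using the standard library's ssl module, whose defaults already require a CA-signed chain and a hostname match:

```python
import ssl

# ssl.create_default_context() gives client settings suitable for
# encrypting data in flight: TLS with certificate verification enabled.
ctx = ssl.create_default_context()

# Verify the defaults rather than loosening them. Disabling these two
# settings is what makes self-signed certificates "work" -- and what
# silently removes the trusted-third-party check from tip 4.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# A client would then wrap its socket before sending anything:
#   with socket.create_connection((host, 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname=host) as tls:
#           ...  # all traffic on `tls` is now encrypted in-flight
```

The same principle applies in any language or HTTP library: the dangerous pattern to grep for in code reviews is verification being turned off "temporarily" to quiet a certificate error.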

These 4 tips are just the basics of encrypting your data in-flight. One challenge in implementing encryption is ensuring that it's implemented consistently and correctly across your entire environment as you grow. In our own infrastructure we've built in-flight encryption into our entire environment through our automation tools, and we do quarterly audits of internal and customer environments. Security is a process that has to be taken seriously.

Next time we'll touch on encryption at-rest and how to secure the data you're actually storing.

Are You Production Ready? Layeredi’s Cloud Hosting Solution is the Answer.

by admin

To us, managed hosting is a craft and not a commodity.

Layeredi cloud-enabled managed hosting is a fresh take on hosting and application management that’s fit for today’s technology environment.

Where companies used to provide hardware with ping monitoring and a nightly backup, we provide compliant data centers, enterprise-cloud hardware, managed backups, deep system and performance monitoring, DevOps tools and automation, full-stack managed security, 24x7x365 support and much more.

Layeredi is not just some servers and a control panel you interact with; we’re a partner you can rely on. Backed by an outstanding team of experienced engineers, we’ll take care of your infrastructure, installations, configuration management, maintenance, security, and backups so you can focus on your business.


Bringing startup culture to small town life

The centerpiece of Comstock’s vision is the Rural Innovation Catalyst Program.

In partnership with Peru State College (about 10 minutes from Auburn), the program will include a high school accelerator program that provides coaching so high school students can start their own business inside a “soft failure” environment.

The Catalyst will also include college support and a post-secondary fellowship for rural community development. In all, the program will offer resources for rural entrepreneurs from their junior year of high school to post-college.

“It’s a heck of a lot easier to keep young people in rural communities than to try to convince them to come back,” said Comstock. Comstock plans to build a resource network that brings the dynamism of startup communities like Lincoln to rural towns across the region.

“There’s a lot of talk and less action in rural communities,” said Comstock. “I’d like to see more small business owners get involved.”

Hey Everyone, Meet LayeredFeed! Layeredi’s new blog.

by admin

To us, managed hosting is a craft and not a commodity.

Layeredi cloud-enabled managed hosting is a fresh take on hosting and application management that’s fit for today’s technology environment.

Where companies used to provide hardware with ping monitoring and a nightly backup, we provide compliant data centers, enterprise-cloud hardware, managed backups, deep system and performance monitoring, DevOps tools and automation, full-stack managed security, 24x7x365 support and much more.


Layeredi is not just some servers and a control panel you interact with; we’re a partner you can rely on. Backed by an outstanding team of experienced engineers, we’ll take care of your infrastructure, installations, configuration management, maintenance, security, and backups so you can focus on your business.

