Why we support the public cloud too

by John Grange

This summer has absolutely flown by. The highlight for us has been finally rolling out our managed public cloud service. It was a long time in the making, and we received great early customer input that helped us shape the offering. Now that we’ve been providing our services on AWS and Azure for a couple of months, I’ve noticed some recurring questions and misunderstandings that I thought I would address.

Given the positive reception to our nascent public cloud services, and in the interest of clarity, I thought I’d provide a quick rundown of what our public cloud services are and why companies need managed public cloud.

Why is there a need?

As companies rebuild and replatform their line-of-business apps to leverage new technologies, the public cloud is a natural choice because of its scale, flexibility, and ever-evolving tool sets. There are definitely reasons to run certain things, like your ERP or other core production workloads, in your on-premises datacenter or a private cloud, but for less critical systems or cloud-native applications the public cloud provides many benefits.

Despite all the fancy interfaces and capabilities, businesses still need to ensure data security, privacy, and governance. In the prevailing shared responsibility model, the burden still falls on the customer to enforce security above the infrastructure layer. With tools, concepts, and capabilities that are vastly different from in-house environments, companies now require new processes and staff with a different set of skills.

A managed public cloud provider alleviates many of these issues by providing the setup and day-to-day maintenance so that internal staff can focus on the application itself.

We enable public cloud adoption in a supported, secure, and enterprise-ready way

Since we provide hands-on support, advanced monitoring, and secure and compliant configurations on our own infrastructure, it wasn’t much of a stretch for us to extend that service onto Azure or AWS. The biggest challenge was in building processes around the PaaS elements and ancillary services such as Azure Backup and Azure Site Recovery.

In the end, we provide our clients with an instant ops team to set up, configure, and secure their environment, along with a support capability to monitor and respond to incidents. We take away a ton of the risk while maximizing the value of the public cloud’s inherent scale and tooling.

Key service attributes (with a quick sketch below of what an automated configuration check can look like):

- Best-practice environment configuration
- 24x7 support
- Health and performance monitoring
- Hardened OS configurations and user access controls
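
To make “best-practice environment configuration” a little more concrete, here’s a minimal sketch of the kind of automated posture check this involves. It assumes boto3 is installed and AWS credentials are configured, and it simply flags S3 buckets that lack default encryption or a public access block – illustrative only, not our actual tooling.

```python
# Minimal posture check: flag S3 buckets without default encryption or a
# public access block. Illustrative example, not our actual tooling.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # Is default (server-side) encryption configured on the bucket?
    try:
        s3.get_bucket_encryption(Bucket=name)
        encrypted = True
    except ClientError:
        encrypted = False

    # Is all public access blocked?
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        public_blocked = all(cfg.values())
    except ClientError:
        public_blocked = False

    print(f"{name}: default encryption={encrypted}, public access blocked={public_blocked}")
```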

Doesn’t this negate the cost advantages of public cloud?

I hear this a lot, but typically not from actual clients. Most organizations exploring the public cloud are doing so because of the operational efficiency associated with the scale and tool sets available on those platforms. The cost of the actual server resources is really only a small part of the equation. If a company can focus its internal resources on directly supporting its users rather than on servers and maintenance, the benefits of the public cloud become substantial.

Adding management services to cover the customer’s day-to-day responsibilities in a public cloud environment allows companies to move faster because the key “boxes” are checked. Public cloud allows you to move faster and can be secure; our goal is to make it easier for companies to get there.

3 tips for making your multi-cloud approach wildly successful

by John Grange

As I speak with customers and partners, it’s striking how many of them are no longer choosing between infrastructure in their own datacenters or going all-in on one of the public clouds. More and more companies are taking hybrid or multi-cloud approaches to their applications and infrastructure – a practice that maximizes the value and utility of the cloud. When you can right-size your infrastructure to be in line with your technical and cost requirements, you run more efficiently and gain flexibility as time goes on and requirements change. In IT, things are always changing, so it’s wise to put a high premium on flexibility.

So why isn’t everybody right-sizing their workloads through a hybrid cloud model? Like everything else in IT, it really comes down to inertia and fear. The inertia stems from the propensity of organizations to keep doing what they’ve always done. It’s an easy route to take because it’s generally harder to get fired for a decision NOT made than for the decision to blaze a new path. The fear component comes not only from change itself, but also from enterprises’ concerns over data security in the cloud. Recent analysis shows that data governance and security are major concerns for companies considering cloud computing.

If the multi-cloud approach, with its efficient, right-sized workloads and variable cost model, is so obviously advantageous, what’s the best way to overcome the organizational inertia and fear and adopt it? We have a lot of experience in this realm, since we offer public, private, and hybrid cloud services. Here are some things that we see successful companies doing to adopt a multi-cloud approach:

1. Find the low-hanging fruit

All workloads aren’t created equal. To make your first foray into the public cloud successful, start with an application that would be relatively easy to move into a new environment. A good example would be a web application that runs on common database software and uses a fairly vanilla configuration. Oftentimes the “low-hanging fruit” are non-essential or internal applications. If your first migration to the public cloud is successful, it will be easier to get organizational support to move other applicable workloads there as well.

2. Leverage vendors and tools

Just as you use a wide range of software tools and vendors to run your datacenter, you should treat managing a multi-cloud environment no differently. As any good engineer will tell you, sometimes it’s about having the right tools for the job. Leveraging vendors can help you ensure security, monitor performance, troubleshoot problems, and increase the general reliability of your applications. The most powerful reason to do this is that it reduces the burden on your team and ultimately allows you to do much more with less.

3. Enforce consistency

Consistency is really important. Whether it’s OS configurations, access methodologies, or deployment processes, consistency increases stability and enhances security. As a matter of security and organization, your public cloud presence should be consistent with your private cloud. This doesn’t have to mean they’re exact replicas; it means that the general processes, guidelines, and procedures you use everywhere else are part of your public cloud environment, regardless of whether you use the exact same tools to achieve that parity. Enforcing consistency will save you headaches by minimizing mistakes and ensuring enterprise security regardless of where the data sits.
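
To give one small, concrete example of what consistency can look like in practice, here’s a minimal sketch that compares a host’s sshd settings against a baseline you could enforce everywhere, public or private cloud. The baseline values are illustrative assumptions, not a complete hardening guide, and the script is meant to run locally on a Linux host.

```python
# Minimal consistency check: compare this host's sshd_config against a baseline.
# The baseline below is an illustrative example, not a complete hardening guide.
BASELINE = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def find_drift(config_text: str) -> dict:
    """Return baseline settings whose configured value differs from the baseline."""
    actual = {}
    for line in config_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            parts = line.split(None, 1)
            if len(parts) == 2:
                actual[parts[0]] = parts[1]
    return {key: actual.get(key, "<missing>")
            for key, wanted in BASELINE.items()
            if actual.get(key) != wanted}

with open("/etc/ssh/sshd_config") as f:
    drift = find_drift(f.read())

print(drift if drift else "host matches baseline")
```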

What to think about when considering data-at-rest encryption

by John Grange

In our previous post in our security best-practices series we addressed data-in-flight encryption, what it means, and offered some tips for implementing it in your environment. Like data-in-flight, many compliance regulations require your data-at-rest to be encrypted as well. Data-at-rest is the inactive data that’s being digitally stored on your servers. While keys, access policies, and audits are also critical, encryption is the front line in protecting your data-at-rest.

Encrypting your data while it’s at rest can be a much more complex and costly operation than encrypting your data-in-flight. Oftentimes, and depending on a number of factors, encrypting the data you’re storing can require changes to physical hardware or adjustments to your application so it can interact with an encrypted file system.

By 2017, two-thirds of all workloads will be processed in the cloud. Protecting that data is challenging because the popular cloud hosting platforms vary in their security practices, customization, and capabilities. Understanding your data footprint and the available encryption options is key to avoiding a costly data breach and meeting compliance regulations. Ensuring your cloud hosting vendor offers compliance options that include encryption of your data-at-rest is a good way to find a vendor with a strong orientation around security and compliance.

Here are a few things to consider when looking to encrypt your data-at-rest:

If possible, use self-encrypting drives (SEDs)

A SED has a built-in ability to encrypt any data written to it and decrypt any data read from it. Self-encrypting disks are easy to implement, and their use is essentially invisible to users. Because the encryption is native to the disk itself, you can achieve very high performance despite data being encrypted and decrypted as it’s written and read. SEDs are more expensive than regular drives but are a sure-fire way to protect your data.

Choose software-based full-disk encryption wisely

There are countless full-disk encryption packages out there, and some are better (much better) than others. Make sure to choose software from a vendor that’s stable and will continue to support its product. Your solution should also use industry-standard encryption algorithms, not proprietary ones, and provide key management. Finally, if you need to encrypt data that’s already in place, be sure to choose an option that doesn’t require re-partitioning the server (something Microsoft’s BitLocker requires). Software-based full-disk encryption is a less expensive proposition than SEDs, but it degrades server performance and introduces complexity.
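
This isn’t full-disk encryption, but as a quick illustration of “industry-standard algorithms plus key management,” here’s a minimal sketch of application-level encryption at rest using Fernet from the Python cryptography package. The file name is a placeholder, and in practice the key belongs in a key management system, not next to the data.

```python
# Minimal sketch: encrypt a file at rest with Fernet (AES-CBC + HMAC) from the
# `cryptography` package. The file name is a placeholder; store the key in a
# key management system, never on the same disk as the ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
fernet = Fernet(key)

with open("customers.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("customers.csv.enc", "wb") as f:
    f.write(ciphertext)

# Later, with the same key:
plaintext = Fernet(key).decrypt(ciphertext)
```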

Native solutions are your best bet

Native solutions are implementations of encryption that are “built in” to a system. A SED is a native solution in that the encryption is actually built into the disk itself, which is why SEDs perform well and are simpler to implement. There are also encryption options that are a component of the file system your operating system is using. While these implementations can require additional software, NTFS (Windows) and ext4 (Linux) are common file systems that have native encryption capabilities. The act of encrypting and decrypting data as it’s being written or read creates performance overhead, and the closer that process happens to the actual disk, the better the performance you get.

Disaster recovery isn’t just about worst-case scenarios; it’s about delivering high availability to your core applications

by admin

Disaster recovery (DR) is a difficult concept in IT. It requires you not only to think about the many implications of a worst-case scenario but also to develop a sound response. You have to think about that crazy thing that may or may not actually happen while deciding how many resources should be committed to that theoretical event – without materially disrupting your business, of course. But what if your DR site served an additional purpose? What if you could make your DR site more than just an expensive store of data that may or may not save your behind in the event of a worst-case scenario? Well, in the modern data center it’s much easier to make your plan B your plan A.

Think about how you’ve traditionally approached DR. It’s been “back up the heck out of our data and get it to some off-site place so we can restore it if we need to.” The obvious question becomes: how does that backup data end up becoming a live application environment? How long will it take to transfer the backup data to new hardware? For that matter, what infrastructure are we really planning to restore to that’s geographically diverse from our production environment? At the core of these questions is data availability, and availability is really what DR is all about.

So many of today’s applications are web-based and may even have native mobile and tablet components associated with them. As an infrastructure professional in charge of the availability of these applications, you’ve probably implemented redundancy, load balancing, and replication in some form. If one of the application servers goes down, it fails over to another node. Well, you know what? This is how your DR site should work too.

At Layeredi we approach DR from the standpoint of making your plan B your plan A. What that means is that to be truly resilient to a prolonged outage and achieve speedy recovery time objectives, your DR site should be a literal extension of your production environment. And this sort of setup doesn’t have to be cost prohibitive. Capacity, hardware specs and recovery time objectives can be tweaked to reflect financial constraints. Here are a few things that our team does to make DR a core application availability strategy, not only for us, but also for our customers:

Store backups in native hypervisor format

In a modern data center you’re probably highly virtualized. You may not be 100% there, but chances are your virtualization footprint is over 60%. Whether you’re running VMware or Hyper-V, there are built-in tools for snapshots and other ways to quickly and easily migrate VMs from host to host.
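
On the VMware side, for example, a quiesced, disk-only snapshot can be taken programmatically. Here’s a minimal sketch using pyVmomi; the vCenter host, credentials, and VM name are placeholders, and error handling is omitted.

```python
# Minimal sketch: take a quiesced, disk-only snapshot of one VM via pyVmomi.
# Host, credentials, and VM name are placeholders; error handling is omitted.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl.create_default_context()
si = SmartConnect(host="vcenter.example.com", user="svc-backup",
                  pwd="********", sslContext=context)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if vm.name == "app-db-01":
        # Quiesce the guest file system; skip the memory state to keep it fast.
        vm.CreateSnapshot_Task(name="pre-replication",
                               description="Disk-only, quiesced snapshot",
                               memory=False, quiesce=True)

Disconnect(si)
```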

Use VM replication

We use tools that allow us to replicate live VMs at our production sites to powered-down VMs at our DR site. The DR VMs aren’t actually consuming resources when they’re powered down, yet they can still be quickly powered on in the event of a disaster scenario. This produces a cost-effective setup whereby a replica of the production environment can be turned on in short order.

Pre-stage VM recovery

In many application environments, there are certain dependencies that need to be in place for the environment to be fully operational, so certain VMs need to go live before others. An example would be a domain controller needing to be up before you can bring up your SQL Server. We use tools that allow us to pre-stage the order of recovery in our DR site. When we say “recovery,” we really mean the order in which powered-down replica VMs get turned on, so this process is quick.
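
Conceptually, the pre-staged order looks something like the sketch below. The VM names are placeholders, and power_on and wait_until_healthy stand in for whatever calls your replication or orchestration tooling actually exposes.

```python
import time

# Hypothetical recovery plan: tiers of replica VMs that must come up in order.
RECOVERY_TIERS = [
    ["dc-01"],                       # domain controllers first
    ["sql-01"],                      # then the database tier
    ["app-01", "app-02", "web-01"],  # application and web tiers last
]

def power_on(vm_name: str) -> None:
    """Placeholder for your hypervisor or replication tool's power-on call."""
    print(f"powering on {vm_name}")

def wait_until_healthy(vm_name: str, timeout: int = 600) -> None:
    """Placeholder: poll a health signal (ping, service port, guest heartbeat)."""
    time.sleep(1)

for tier in RECOVERY_TIERS:
    for vm in tier:
        power_on(vm)
    for vm in tier:
        wait_until_healthy(vm)  # don't start the next tier until this one is up
```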

Tiered storage

In our production and DR environments we use flash-optimized tiered storage. What this means is that for any workload we can define whether it runs on SSDs or 7k spinning disks based on its profile. This allows us to save money by keeping VMs on cheap 7k storage until they need to become production, and we can instantly provide tens of thousands of IOPS to our applications by moving them into the SSD tier.
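
As a tiny illustration of that placement policy (the profile names and IOPS figures here are made up for the example):

```python
# Hypothetical placement policy: replicas stay on cheap 7k disks until they are
# promoted to production, at which point they move to the SSD tier.
PROFILES = {
    "dr-replica": {"tier": "7k-sata", "target_iops": 500},
    "production": {"tier": "ssd", "target_iops": 20_000},
}

def storage_tier(profile: str) -> str:
    """Return the storage tier for a workload based on its profile."""
    return PROFILES.get(profile, PROFILES["dr-replica"])["tier"]

print(storage_tier("dr-replica"))   # 7k-sata
print(storage_tier("production"))   # ssd
```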

This was a quick primer on how we do DR for ourselves and our customers. Technology is so incredibly business-critical today that we feel it’s important to continue to iterate and innovate on DR and application availability, because it’s more important than ever.


Are you encrypting your data-in-flight? If not, you should be.

by admin

We spend a lot of time working with our customers to align their technology platform choices with best-practices security and compliance standards.

Over the past couple of years there has been a rash of high-profile data breaches and hacks that have rocked the business world. At the same time, there’s also been a veritable Cambrian explosion of application frameworks, libraries, languages, and database engines that are part of a new era of cloud-native applications – a direct response to mobile solidifying itself as the platform of the internet. With existential security risks for technology at an all-time high, and powerful cloud-native services popping up at a torrid pace, data security has never been more critical than it is today.

Having a fundamental knowledge of basic application environment security should be required for all of the members of your team. Whether dev or ops, everyone should understand what it takes to keep your data protected. We thought it would be valuable to do a blog series of quick tips for securing your server environments and processes. Let’s start with a security fundamental that is a requirement for nearly all of the main compliance standards: in-flight encryption.

All data that goes over your internal network or the internet is potentially vulnerable. Encrypting data in-flight means that you encrypt data while it’s being transmitted over a network.

Here are some tips for ensuring all of your data transmissions are encrypted in-flight:

  1. Don't use FTP for file transfer; it's unencrypted and insecure. Instead, use scp or sftp. Additionally, you can use rsync over ssh for secure transfers with rsync's robust feature set (see the SFTP sketch after this list). On Windows you can transfer files over Remote Desktop, which is also encrypted.
  2. On your web servers, whether you're running Windows or Linux, be sure to use TLS (Transport Layer Security) for HTTPS on all of your connections.
  3. From time to time, a VPN is necessary to provide private, encrypted access to your network. We use OpenVPN as well as a hardware-based solution through our firewall. OpenVPN is software-based, easy to use, and a great tool in your ops toolbox.
  4. When implementing encryption, try to avoid self-signed certificates wherever possible. It's better to use a certificate that's signed by a Certificate Authority so your public key is always verified by a trusted third party.
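
Here's the SFTP sketch mentioned in tip 1, using the Python paramiko library. It assumes key-based authentication and a host key already in your known_hosts; the hostname and file paths are placeholders.

```python
# Minimal SFTP transfer over SSH with paramiko. Assumes key-based auth and a
# host key already in known_hosts; hostname and file paths are placeholders.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()   # trust only hosts already in known_hosts
client.connect("backup.example.com", username="deploy")

sftp = client.open_sftp()
sftp.put("app-backup.tar.gz", "/backups/app-backup.tar.gz")  # encrypted in flight
sftp.close()
client.close()
```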

These 4 tips are just the basics of encrypting your data in-flight. One challenge in implementing encryption is ensuring that it's implemented correctly and consistently across your entire environment as you grow. In our own infrastructure we've built in-flight encryption into our entire environment through our automation tools, and we do quarterly audits of internal and customer environments. Security is a process that has to be taken seriously.
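
To give a flavor of what those audits check, here's a minimal sketch (standard library only) that confirms an endpoint negotiates TLS with a valid certificate and reports how long the certificate has left. The hostnames are placeholders.

```python
# Minimal TLS audit: verify the certificate chain and hostname, then report the
# negotiated protocol version and days until the certificate expires.
import socket
import ssl
import time

def check_tls(host: str, port: int = 443) -> tuple:
    context = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            days_left = (ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) / 86400
            return tls.version(), int(days_left)

for host in ["www.example.com", "api.example.com"]:  # placeholder hostnames
    try:
        version, days_left = check_tls(host)
        print(f"{host}: {version}, certificate expires in {days_left} days")
    except (ssl.SSLError, OSError) as exc:
        print(f"{host}: TLS check FAILED: {exc}")
```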

Next time we'll touch on encryption at-rest and how to secure the data you're actually storing.