Most people come to the cloud for cost savings and stay for the flexibility it offers, said TeraGo cloud specialist Nabeel Sherif, who recently joined TeraGo Senior Product Manager Mohamed Jivraj and ITWC CIO Jim Love for Four Keys to Smashing Success in the Cloud, a discussion of the cloud's place in protecting assets in case of disaster.
“Moving to the cloud is always a journey, it’s an evolution of how to use this new tool in the toolkit and how to use it right and wrong,” said Sherif. “The cost savings are there, depending on how well versed and how well you execute. Most of the time the cost savings you expect in the dream state of cloud people don’t really get, but they achieve some cost savings and they find the flexibility allows them to focus on real value generation and get away from the plumbing.”
Jivraj agreed, noting that people are getting more comfortable with the cloud; it’s no longer scary.
However, said Sherif, they need to tie results to a workload or a business process to be meaningful. Quick wins come when the cloud gets an organization to a better state than it is in today. He cites four keys to success in the cloud:
- Disaster recovery
- Security
- Compliance
- Cloud mix
A poll of webinar attendees revealed that 36 percent currently use DR in the cloud, and 44 percent rely on cloud for resiliency and uptime.
Resiliency functions in general (DR, backup, and other capabilities used on demand) are ideal applications for the cloud, he said. We've gone from a time when security and regulation were reasons to avoid moving to the cloud to one where we realize we often get better security execution by offloading the task to a cloud provider with the appropriate expertise.
As people move to the cloud, they tend to start with one provider, stay there, then realize they shouldn't have put everything in one place, Sherif said, comparing the choice of cloud to managing a wardrobe: shoes don't belong in the same spot in the closet as coats. Similarly, not all workloads belong in the same cloud. We have to figure out which workloads belong where and how to optimize across heterogeneous systems, rather than shoehorn everything into one place.
DR is a way of moving sure-footedly into cloud, he went on. Ultimately cloud is about making applications work better and more reliably; one way to do that is making them resilient using DR.
“One of the biggest challenges in IT is what I call the janitorial work of running an IT system,” he said. “Work that’s necessary, that’s valuable to be done, but it’s the kind of work where you’re never going to get glory when it’s done right, but you’re certainly going to get a lot of trouble if you get it wrong.” Patching falls into that category, as does network optimization – what Sherif calls the ugly sysadmin work that your sysadmins don’t want to do.
“If you can get away from doing these cumbersome, non-glory activities and concentrate your skill set and your team on actual value generation that’s going to be appreciated even when it doesn’t go wrong, you’re delivering better value to your business, and probably better satisfaction to your staff,” he noted, recommending that the so-called janitorial functions be offloaded to a competent cloud provider.
In the first level of moving to the cloud, the provider handles basic functions like procurement and capacity planning, patching, security, hardware and software upgrades, mobility, accessibility and MDM, as well as simplifying the separation of environments for development, testing, staging, and production. The customer trades the work and time those tasks take for an opex cost.
He said big wins come from DR, security, and compliance.
Jivraj added that a lot of people have adopted cloud for DR or IT resiliency, but while the notion is that the cloud can help with DR, people are not sure how to implement it; there are some fundamentals to establish.
First, it’s important to define objectives, and determine that they match the outcomes defined for the business. For example, decide which workloads should be restored first in case of disaster.
You also need to know how much data you can afford to lose (your recovery point objective, or RPO) versus how much time you can afford to be idle (your recovery time objective, or RTO). Is it more or less than 24 hours?
If you operate in multiple markets, how do you address the multiple locations? Are there regulatory requirements around downtime, or location of the backup site to comply with data sovereignty needs?
The goal is to minimize the impact of the outage.
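The objective-setting steps above can be sketched as a simple workload inventory. This is a hypothetical illustration, not TeraGo's methodology; the workload names, priorities, and RTO/RPO figures are invented, and real values would come from a business impact analysis.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    rto_hours: float  # recovery time objective: max tolerable downtime
    rpo_hours: float  # recovery point objective: max tolerable data loss
    priority: int     # 1 = restore first in a disaster

# Hypothetical inventory; real figures come from a business impact analysis.
workloads = [
    Workload("order-processing", rto_hours=2, rpo_hours=0.25, priority=1),
    Workload("reporting", rto_hours=48, rpo_hours=24, priority=3),
    Workload("email", rto_hours=8, rpo_hours=4, priority=2),
]

# Restore order in a disaster: highest-priority workloads come back first.
restore_order = sorted(workloads, key=lambda w: w.priority)
for w in restore_order:
    print(f"{w.name}: RTO {w.rto_hours}h, RPO {w.rpo_hours}h")
```

Writing objectives down this explicitly makes it easy to check, later, that the deployed DR solution actually matches the outcomes the business defined.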
“There’s a lot more confidence and faith in the cloud,” he observed. “If you look at companies like Microsoft and Amazon, they’ve come a long way in promoting and evangelizing the cloud, and giving credibility.”
But you have to look at the business impact of an outage and do a financial assessment to see if the cost of your solution will be offset. DR comes in different flavours, at different prices.
Also be aware that the public cloud is not one size fits all. A provider may not have multiple datacentres in a single country; if the region where its datacentre resides is struck by a disaster, it may have no unaffected location to fail over to. If data sovereignty is important, you need to assess private and public vendors to see how they would address the issue.
“I haven’t seen a successful deployment without having expertise in-house or by using a service provider,” Jivraj said. “There are a lot of intricacies.”
“When you’re building out a DR solution, there are many key elements to designing a solution that will work,” he added, “and there are different stages.”
You first have to plan, deciding on business priorities, risk tolerance, and other factors. Based on the plan, design a solution, then deploy it, monitor it to ensure it’s performing properly, then maintain and test it.
“People think they can just deploy DR and forget it,” Jivraj said. “You can’t. A lot of changes happen on a daily basis. If you rely on a DR plan that’s a year or even a month old, it risks a lot of your investment.”
Added Love, “You need someone to do it who does it every day, all the time.”
The focus should be on IT resiliency, Jivraj said. It’s not just about natural disasters. It can be about a power outage or human error too. DR is a loose term; businesses should be more thorough about what happens if they can’t access their environments. That can cost them a lot of money. TeraGo has an online tool to help people calculate cost of downtime and help them decide whether to turn to a professional or handle DR in-house.
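A back-of-the-envelope downtime cost estimate, of the kind a calculator like TeraGo's might produce, could look something like the sketch below. The formula and every figure in it are illustrative assumptions, not TeraGo's actual model.

```python
def downtime_cost(hourly_revenue, hours_down, hourly_labour_cost,
                  staff_idled, recovery_cost=0.0):
    """Rough estimate: lost revenue + idled staff wages + one-off recovery spend."""
    lost_revenue = hourly_revenue * hours_down
    lost_productivity = hourly_labour_cost * staff_idled * hours_down
    return lost_revenue + lost_productivity + recovery_cost

# Hypothetical example: $10k/hour revenue, 6-hour outage,
# 40 staff idled at $45/hour, plus $5k of recovery effort.
print(downtime_cost(10_000, 6, 45, 40, 5_000))  # 75800.0
```

An estimate like this is what makes the financial assessment concrete: if the DR solution costs less than the plausible outage losses it prevents, it pays for itself.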
Sherif then picked up on another component: security.
“Security is one of those places where, in the lifespan of cloud, it’s been interesting to watch the about-face,” he said. People once looked on hosted services and cloud with suspicion, whereas now they recognize that a competent provider can do a better job than a company whose core competency isn't running network and system operations can do for itself. One proof point for cloud came in 2010, when the hacker group Anonymous started its distributed denial of service (DDoS) attacks. The only system it couldn't kill was AWS, which had focused on system availability and performance.
Security, even in the cloud, is a shared responsibility. There’s the security baked into its systems by the cloud provider, and then there’s the customer piece. It’s important to understand who is responsible for each security component (it depends on the contracted services; you may buy only infrastructure, which the provider secures, and you handle the rest, for example), and for the customer to properly use the tools offered by its provider to secure its systems. After all, Sherif noted, if you don’t lock your doors and arm the alarm system in your house, you won’t be secure even with the best locks and the best alarm system.
Love added, “If the provider can’t discuss the shared responsibility model, run far and fast! The time when you have a disaster is not the time to say ‘I thought you were …’”
Sherif agreed. Companies need a provider who can walk them through the ‘gotchas’. A good cloud provider has the technology, the resources, and a methodology it practices regularly, so it can improve a business’s security. However, the business needs to consider what it wants to happen to its data: the retention policy, the destruction policy, and how the data will be secured in operation, both at rest and in transit.
Jivraj added that regulation such as GDPR and PIPEDA (including rules on international and inter-provincial data transfers) provides a similar argument for the cloud as security does: do you want to spend your time on it when it's not your core business, or let an expert provider handle the complexity?
“Compliance is vast,” he said. With data sovereignty a critical concern of many regulators, service providers should ensure your data remains where you put it.
“Dealing with breaches is a reality,” Jivraj observed. “But there are nuances.” For example, many think GDPR is an EU regulation with no Canadian impact, but in fact anyone who deals with the EU has to comply, and Canada has its own equivalent regulation. There are regulations at the datacentre level, regulations at the data level, and industry-specific regulations. Many companies can help navigate all of them.
Picking the right cloud mix is important. When people talk about hybrid cloud, they talk about the dream state, Sherif said, where they can move workloads around at will and don’t need to know or care where they are. But the reality is, people tend to use cloud like another tool in their toolbox. Certain workloads perform best in different environments.
A lot of people start by tossing a bunch of stuff into the cloud, he said. Soon they realize they're not generating the savings or achieving the performance they expected, and re-examine the environment. Some of the workloads may not belong in the cloud at all. They evolve to a model where different workloads and IT assets run in different environments. The challenge is deciding what goes where.
In the main, he said, the need for elasticity and redundancy in an environment with demand spikes, volatility, and a lack of internal skillsets makes a workload a good public cloud candidate, while a private cloud is better suited when there is predictable consumption, sufficient scale to justify running an environment, and when the cost and management effort of adding capacity is lower than it would be in the public cloud.
For DR, he said, as a rule of thumb choose public cloud when RTO/RPO is greater than 24 hours, or when you’re dealing with workloads that are usage based. Choose private cloud when RTO/RPO is less than 24 hours, or when data sovereignty is a concern (some public clouds have only one datacentre in Canada), with the caveat that you need the skillset to manage it.
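Sherif's rule of thumb could be sketched as a simple decision helper. This is a deliberate simplification of his guidance for illustration only; real placement decisions weigh many more factors, including the skillset available to manage a private environment.

```python
def suggest_dr_cloud(rto_hours: float, rpo_hours: float,
                     sovereignty_sensitive: bool = False) -> str:
    """Apply the 24-hour rule of thumb for placing DR workloads."""
    # Tight recovery targets or data sovereignty concerns point to private
    # cloud (some public clouds have only one datacentre in Canada),
    # provided the skillset to manage it exists in-house or via a provider.
    if sovereignty_sensitive or min(rto_hours, rpo_hours) < 24:
        return "private"
    # Looser targets (and usage-based workloads) are good public cloud fits.
    return "public"

print(suggest_dr_cloud(48, 48))                              # public
print(suggest_dr_cloud(4, 1))                                # private
print(suggest_dr_cloud(48, 48, sovereignty_sensitive=True))  # private
```

The point of encoding the rule, even informally, is that every workload gets evaluated the same way rather than placed by habit.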
Forrester Research has revealed that 56 percent of organizations surveyed use multiple cloud solutions to improve security and compliance, data management, infrastructure management and flexibility. Smashing success in DR is about building a plan in a thoughtful holistic way and then executing on it. There needs to be proof and validation.
TeraGo provides end-to-end services: it creates a written business and resilience plan, looks at the solution at all levels, matches the runbook and technology with desired outcomes, performs regular testing and validation, and offers a mix of clouds, both private and AWS through its partnership. It uses backup and DR technology from Zerto and Veeam.
Jivraj concluded, “Moving to the cloud is a continuous journey. Its appeal is that it will create opportunities that you won’t have if you have to buy an asset. It provides skill sets and talents, and lets you play with cost and flexibility.”