Cloud outage audit update: The challenges with uptime

Other issues, like autoscaling and load balancing, come into play as service demand rises and falls throughout the day, and they can considerably impact the performance of larger applications, he added. "It's still better than having no data, but it is not sufficient to truly tell you what the customer is going to be experiencing," Sloss said.

Becoming savvier about risks, tradeoffs

Most cloud vendors' service-level agreements (SLAs) include the same 99.95% uptime guarantee. Private data centers aren't immune to outages, but the lack of control and perceived availability risks of public clouds deter some IT shops. That's also partly why most public cloud workloads aren't used for production or mission-critical applications.

CityMD, an urgent care organization with 50 facilities in New York and New Jersey, uses Google for email collaboration and hosts a number of its external websites with AWS, but none of its infrastructure is in the public cloud. "We are almost a 24/7 shop, so uptime is a big priority for us," said Robert Florescu, vice president of IT at CityMD. "It was more cost-efficient to place our infrastructure in the data center."

IT pros are at different maturity stages with the cloud, so there's still some fear of unavailability, said Kyle Hilgendorf, an analyst with Gartner. While most aren't putting their most vital assets in the public cloud, they're confident in vendors' track records and are willing to be proactive in their design. "They so desire the cutting-edge features and services that [public clouds] offer, and that they cannot replicate in their own data centers, that they are willing to take on some of the risk," Hilgendorf said.

Very few customers seek higher-level uptime guarantees than public cloud vendors' typical SLAs, because the benefit would be marginal compared with the extra cost and engineering required, Sloss said. "You really need to make tradeoffs -- and to be fair to Amazon, we're all in the same boat -- with any of us, if you want to go beyond running in a single zone, then you are going to have to do additional work," Sloss said.

Still, cloud vendors go to great lengths to avoid disruption. At the most recent re:Invent, the annual Amazon Web Services user conference, Jerry Hunter, vice president of infrastructure, discussed the lengths to which Amazon has gone to ensure uptime, including steps to improve building design and energy usage. The company also built its own servers, network gear and data center equipment to reduce potential breakdowns, and improved automation to reduce human errors and help increase reliability.

AWS has purpose-built networks in each data center, with two physically separated fiber paths extending out from each facility. Amazon has also built its own Internet backbone to improve Web performance and monitoring capabilities. "We've turned our supply chain from [a] potential liability into a strategic advantage" by using input from the retail side of the business in Amazon fulfillment centers and learning how to modify processes and workflows, Hunter said.

Several years ago, there were considerable discrepancies between vendors, but by now, all the large providers have industrialized and automated their processes to close the gaps, said Melanie Posey, an analyst with IDC in Framingham, Mass. Customers also have put in additional redundancies to redirect users and processes to a different region if there's an outage in their primary data center.
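
One common way to implement the kind of cross-region redirection Posey describes is DNS-based failover. The sketch below is only an illustration under assumed names, IDs and addresses -- it is not a setup described in the article -- using Amazon Route 53 failover records tied to a health check on the primary endpoint:

    import boto3

    route53 = boto3.client("route53")

    # Health check against the primary region's endpoint (placeholder IP and path).
    health_check = route53.create_health_check(
        CallerReference="primary-endpoint-check-001",
        HealthCheckConfig={
            "IPAddress": "203.0.113.10",
            "Port": 443,
            "Type": "HTTPS",
            "ResourcePath": "/health",
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )

    # Primary and secondary failover records: DNS answers shift to the second
    # region's endpoint if the health check on the primary starts failing.
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000EXAMPLE",  # placeholder hosted zone
        ChangeBatch={"Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "primary-region", "Failover": "PRIMARY",
                "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.10"}],
                "HealthCheckId": health_check["HealthCheck"]["Id"],
            }},
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "secondary-region", "Failover": "SECONDARY",
                "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.20"}],
            }},
        ]},
    )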

How customers are getting more savvy

IT pros who use Amazon not only are increasingly savvy about strategies to protect themselves against downtime, they're dismissive of those who decry the cloud based on availability. For example, after an AWS outage last September, IT pros said the most disruption came from people who saw it as evidence that cloud computing is inherently more risky than on-premises deployments. One user compared it to a single accident on a freeway leading drivers to conclude that highways generally are less safe than surface roads.

"Despite the inconvenience and all the press attention, you would be hard-pressed to find corporate customers or consumer end users who are so disenchanted with the AWS outage that they would abandon their cloud services," wrote AWS expert and TechTarget contributor Jeff Kaplan shortly after the September issues.

Proactive users should take advantage of multiple availability zones and multiple regions, if possible, to guard against a natural disaster or a region going down, Hilgendorf said. To take it a step further, the best practice to handle cascading software bugs or expiring certificates is to use multiple vendors. "All of this comes down to how much cost you are willing to absorb for the insurance of perhaps protecting against the low likelihood of an unavailability event," Hilgendorf said.

A more reactive stance involves management and monitoring with tools like CloudWatch, though this helps more with the unavailability of a specific application than with the unavailability of the provider itself, Hilgendorf explained. "Most of the unavailability events we hear from customers are, 'Oh, we screwed our application up, we misconfigured something,' or, 'We had a developer go in and change a setting, and everything went awry,'" he said.
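
As a rough sketch of the reactive monitoring Hilgendorf mentions -- not something prescribed in the article -- the snippet below uses boto3 to create a hypothetical CloudWatch alarm on an application load balancer's 5xx error count; the metric dimensions, thresholds and SNS topic are illustrative assumptions.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Alarm if the application's 5xx responses spike -- a typical "we broke our
    # own app" signal, as opposed to a provider-wide outage. Names are placeholders.
    cloudwatch.put_metric_alarm(
        AlarmName="app-5xx-spike",
        Namespace="AWS/ApplicationELB",
        MetricName="HTTPCode_Target_5XX_Count",
        Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/1234567890abcdef"}],
        Statistic="Sum",
        Period=60,                 # evaluate one-minute buckets
        EvaluationPeriods=5,       # must breach for five consecutive periods
        Threshold=50,              # more than 50 errors per minute
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
    )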

Additionally, there are resources like CloudTrail for logging API requests and other changes in order to do root-cause analysis. Any cloud service could go down at any given moment, but given the number of tools made available to users now, in most situations the fault probably lies with the customer, Hilgendorf said.
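
To illustrate the kind of root-cause digging CloudTrail enables -- an illustrative sketch, with a placeholder resource ID, rather than anything quoted in the article -- a query for recent changes to a misbehaving resource might look like this with boto3:

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

    # Look back over the last 24 hours for API calls that touched a specific
    # resource -- for example, the setting a developer "went in and changed".
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=24)

    paginator = cloudtrail.get_paginator("lookup_events")
    pages = paginator.paginate(
        LookupAttributes=[
            {"AttributeKey": "ResourceName", "AttributeValue": "sg-0123456789abcdef0"}
        ],
        StartTime=start,
        EndTime=end,
    )

    for page in pages:
        for event in page["Events"]:
            # Each record shows who made the call, when, and which API was used.
            print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])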

High-performance cloud storage, not a pipe dream

There's cloud storage, and there's high-performance storage, but is there really such a thing as high-performance cloud storage? For a long time, the answer was no. "Any time you move your infrastructure somewhere outside of your data center, there's going to be latency involved, and you run into the speed of light problem," said Scott Sinclair, an analyst with Enterprise Strategy Group in Milford, Mass. "The speed of light can only go so fast." Those who required high-performance storage from their cloud providers either learned to compromise or stayed home. Increasingly, though, emerging technological approaches suggest that you can have your cloud storage cake and eat it too -- that is, it's possible to run I/O-intensive, latency-sensitive applications with some level of cloud-based infrastructure.

High-performance cloud storage could allow organizations to run demanding database applications in the cloud that have been stymied by cloud storage's limitations. It could also allow you to keep applications on-premises but take advantage of cheap and scalable cloud storage over the wide-area network. And, eventually, it could make it possible to run compute in the cloud that accesses storage infrastructure back in the private data center.

But unlike most storage problems, the trick to achieving high-performance cloud storage is not just to throw more disk drives or flash at the problem, Sinclair said. When solving for the speed of light, new technologies "need to rely on a specific innovation to solve the problem," Sinclair said -- namely, colocating data very close to compute, or introducing some kind of network optimization or caching mechanism. Some solutions combine all three of those approaches. And while it's still early days, early adopters have seen promising returns.

On-prem compute, cloud storage

"We wont to have the mindset that storage is reasonable, and if you would like more storage, just go buy some more," said David Scarpello, COO at Sentinel Benefits & Financial Group, a benefits management firm in Wakefield, Mass. "Then I came to the belief that storage isn't cheap, and whoever told me that was hugely mistaken." Between purchasing extra capacity, support, and maintenance, staff, backup, maintaining a knowledge center and disaster recovery site, Sentinel pays upwards of $250,000 per annum to take care of 40 TB worth of on-premises storage – over $6,000 per TB. "It's tons," he said – and for what? Storage is vital – it keeps us safe -- but it isn't something that you simply want to be spending tons of cash on." Meanwhile, public cloud providers offer raw capacity at rates that rival consumer hard disc drives. Prices for Amazon Web Services (AWS) Simple Storage Service (S3) start at $0.03 per GB per month -- less for greater capacities and infrequent access tiers -- or $240 per annum for a managed, replicated TB.

But that cheap capacity tier is based on object storage, whose performance is adequate in the best of times -- and downright slow when accessed over the wide-area network. So the challenge for many IT organizations is how to tap into the cloud's scalability and low cost while maintaining a modicum of performance.

For Sentinel, one potential fix is a data caching and acceleration tool from Boston-based startup ClearSky Data that combines an on-premises caching appliance with a sister appliance located in a local point of presence (POP) that's directly connected to high-capacity public cloud storage. By caching hot data locally and accessing the cloud over a dedicated, low-latency connection, customers take advantage of cheap cloud-based storage for on-premises compute without a performance hit. In an initial release, ClearSky promises near-local IOPS and latencies of under two milliseconds for customers out of its Boston, Philadelphia and Las Vegas POPs. The plan is to extend its geographic presence and add support for additional cloud storage providers, said ClearSky Data co-founder and CEO Ellen Rubin.
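
At a high level, the hot-data caching pattern described above is a read-through cache sitting in front of slower object storage. The sketch below is a generic illustration of that idea only -- it is not ClearSky's implementation, and the fetch_from_object_store callable is a hypothetical wrapper around a cloud GET:

    from collections import OrderedDict

    class ReadThroughCache:
        """Generic read-through cache: serve hot blocks locally, fall back to
        slower cloud object storage on a miss. Illustrative only."""

        def __init__(self, fetch_from_object_store, capacity_blocks=1024):
            self._fetch = fetch_from_object_store   # e.g. a wrapper around an S3 GET
            self._capacity = capacity_blocks
            self._blocks = OrderedDict()            # block_id -> bytes, in LRU order

        def read(self, block_id):
            if block_id in self._blocks:
                self._blocks.move_to_end(block_id)  # mark as recently used
                return self._blocks[block_id]       # fast path: local hit
            data = self._fetch(block_id)            # slow path: go to the cloud
            self._blocks[block_id] = data
            if len(self._blocks) > self._capacity:
                self._blocks.popitem(last=False)    # evict the least recently used block
            return data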

Sentinel has begun to move about 7 TB of test and development volumes to AWS via ClearSky, with no complaints from developers. Ideally, the company will slowly move over all of its data, thereby eliminating a $5,000 per month maintenance fee to NetApp, as well as the need for backups and offsite disaster recovery.

Cloud computing, and storage, too

If you're running a latency-sensitive database application in the cloud, best practices dictate that you go with the cloud provider's block storage offering, such as AWS Elastic Block Store (EBS). That used to be a death knell for large database workloads, which were stymied by limited IOPS and smaller volume sizes. When Realty Data Company's parent company National Real Estate went bankrupt in 2012, it had to make some quick decisions concerning its three data centers: move into another data center, rent colocation space or go to the cloud.

"As very much like it's hard to abandoning, getting to the cloud made the foremost sense, financially," said Craig Loop, director of technology at the Naperville, Ill., firm. At first, Realty Data scrambled to try to lift-and-shift migrations of its applications but stumbled to migrate its 40-TB image database off of an EMC array and into the cloud. Latency and performance numbers from S3 were unacceptable and meant rewriting its in-house application to support object storage. Even with shims, we couldn't catch on to figure," Loop said. Meanwhile, AWS EBS wasn't a true option either, because, at the time, EBS supported volume sizes of just one TB. "EBS would are a management headache," Loop said. Working with cloud consultancy RightBrain Networks, Realty Data used a Zadara Virtual Private Storage Array (VPSA), dedicated single-tenant storage adjacent to the cloud data center and connected via a fiber link, and purchased employing a pay-as-you-go model.

The Zadara VPSA presents familiar SAN and NAS interfaces, and the storage performance developers expected from an on-premises EMC array. Zadara has since added VPSAs at other cloud providers, as well as an on-premises version that offers cloud-like pay-as-you-go consumption.

Native cloud block storage options have also upped their game. AWS EBS, for example, now supports volume sizes of up to 16 TB, and EBS Provisioned IOPS volumes backed by solid-state drives deliver up to 20,000 IOPS per volume. Still, while that's good enough for a lot of database workloads, it isn't for all of them. Lawter Inc., a specialty chemicals company based in Chicago, recently moved its SAP and SharePoint infrastructure to a public cloud service from Dimension Data and chose the Zadara VPSA because it needed to guarantee at least 20,000 IOPS for its SAP environment. "[Dimension Data's] standard storage couldn't meet our IOPS requirements," said Antony Poppe, global network and virtualization manager with the firm.
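
For comparison, provisioning an EBS volume against a specific IOPS target is straightforward today. As a rough sketch -- the availability zone, size and IOPS figures are illustrative, not taken from Lawter's environment -- it might look like this with boto3:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a Provisioned IOPS (io1) volume sized for a demanding database
    # workload. The 20,000 IOPS figure mirrors the requirement mentioned above;
    # io1 volumes also require a minimum size relative to the IOPS requested.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=1000,           # GiB; placeholder capacity
        VolumeType="io1",    # Provisioned IOPS SSD
        Iops=20000,
    )

    print("Created volume:", volume["VolumeId"], "with", volume["Iops"], "provisioned IOPS")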