




Verizon: the Newest Big Cloud Player

By Christopher Morris, Tech Commentator

The retail firm Amazon has already established itself as the world's biggest player in the cloud market. The company that originally made its name as a bookseller has greatly diversified as the size of the corporation has grown, and its successful entry into electronics with the Kindle e-book reader convinced the strategists behind the company's future to move into cloud computing. This could hardly have been more successful, given that the company Forbes listed as the seventh most innovative in the world is currently the 'go-to' cloud provider.

However, in what is bound to be a trillion-dollar industry in years to come, Amazon would be extremely naive to expect to keep the apex of the mountain to itself for a significant period of time. The nature of cloud computing means that many diverse companies can become involved in this embryonic technology, and it is in the early years of a particular innovation or new phenomenon that very big gains, or losses, can be made in a market.

This is probably doubly true for anything associated with the hi-tech industry or electronics, with many a brand today consigned to history, and many early market leaders having been completely wiped out. Thus, Betamax was wiped out by VHS despite many devotees claiming it to be technically superior; Sega made a series of abysmal decisions that turned it from the world's biggest console manufacturer to completely out of the business in a matter of a few years; and it is only the most committed Internet fanatic who will today remember the name Netscape Navigator.

We can expect to see many familiar names announcing cloud services and trying to get a slice of the action

So while big corporations still vie for supremacy in the cloud space, and indeed to merely establish themselves as viable cloud players, we can expect to see many familiar names announcing cloud services and trying to get a slice of the action.

The latest major corporation to get involved in the cloud in a big way is Verizon, which has recently announced a cloud-based service for businesses, simply called Verizon Cloud. Getting into cloud computing at a serious level would appear to be a natural move for the international telecommunications firm, whose yearly revenue is over $100 billion, and whose assets include a big share in the Italian arm of Vodafone.

In order to sell the new service to potential customers, Verizon has claimed that "the new Verizon Cloud service will offer better end-user control over performance than any other cloud solution." At present, the service remains in the beta phase, but enough is known about Verizon Cloud for it to be stated that the service will feature an IaaS elastic compute system, Verizon Cloud Compute, and an object storage system, Verizon Cloud Storage, once it is fully released.

…enough is known about Verizon Cloud for it to be stated that the service will feature an IaaS elastic compute system, Verizon Cloud Compute, and an object storage system, Verizon Cloud Storage…

The gimmick, as you might call it, that Verizon hopes will make its service stand apart from the others is the claim that Verizon Cloud delivers the performance you actually require from the service, instead of forcing you to pay for resources you don't need. Verizon states that Verizon Cloud will enable users to "set the virtual machine and network performances", which it claims will let users maintain consistent and predictable system performance from their cloud service, even during peak periods of usage. This differs from Amazon Web Services, for instance, which compels users to pick a preset virtual machine size.

Whether or not Verizon can deliver on this claim at this early point in the lifespan of the cloud remains to be seen. What can be said for certain is that its cloud-based offering for businesses is to be based on technology produced by Terremark, a cloud technology company that Verizon acquired a few years ago.

…their cloud-based offering for businesses is to be based on technology produced by Terremark (Verizon Terremark)…

The CTO of Verizon Terremark, John Considine, has been strongly pushing the flexible element of the new Verizon system, stating that, unlike existing offerings, the behavior of your neighbor will have no effect on the way that Verizon Cloud operates. There is a complex technical explanation behind how Verizon says it has achieved this, but essentially Considine claims that Verizon will allocate performance on an individual basis, which can be based on individual elements and user capabilities.

It is very early days for Verizon's offering, given that we have yet to see a final release. But evidently, Verizon believes that it has made a big technological breakthrough as the cloud power struggle begins in earnest.

Excerpt: Christopher Morris reflects on the recent news that Verizon is going head-to-head with Amazon for flexible cloud compute resourcing at scale.

Next-generation IaaS: Multi-cloud via APIs

By Basant Singh, Programmer and Blogger

Demystifying the new buzzword: “Multi-cloud”

When you enquire about the availability of a service with cloud IaaS providers, without fail they mention three nines (99.9%), four nines, or five nines uptime percentages, and design-for-failure ideas. The most notable IaaS service, Amazon EC2, offers a 99.95% service commitment, which translates to 4.38 hours of downtime per year (or 5.04 minutes per week). Rackspace, on the other hand, offers a 100% network uptime guarantee, as do numerous others.

Note: the above-mentioned SLA availability excludes scheduled downtime for maintenance, as well as many other exclusions mentioned in the fine print.
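Those "nines" convert to permitted downtime with simple arithmetic: the downtime fraction is one minus the uptime fraction, scaled to a year or a week. A quick sketch of the conversion:

```python
# Convert an SLA uptime percentage into the downtime it permits.
HOURS_PER_YEAR = 365 * 24
MINUTES_PER_WEEK = 7 * 24 * 60

def downtime(sla_percent):
    """Return (hours of downtime per year, minutes per week) allowed by the SLA."""
    down_fraction = 1 - sla_percent / 100
    return down_fraction * HOURS_PER_YEAR, down_fraction * MINUTES_PER_WEEK

for sla in (99.9, 99.95, 99.99, 99.999):
    hours, minutes = downtime(sla)
    print(f"{sla}% uptime -> {hours:.2f} h/year, {minutes:.2f} min/week")
```

Running it confirms the figures quoted above: 99.95% allows 4.38 hours per year, or 5.04 minutes per week.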

Although on any given day cloud service availability is far better than that of traditional hosting services, cloud IaaS has its own share of hiccups, which become the talk of the town because expectations are very high. Every now and then we hear about outages at Amazon AWS, Microsoft, or Google. As a result, during the last couple of major outages, social media was buzzing with talk of the unavailability of popular cloud-hosted services such as Netflix, Twitter, Zynga, Quora, Heroku, Instagram, Airbnb, Foursquare, and Reddit, which went offline due to their service providers' outages.

Outages are a part of IT and you can't stop them

No matter how well prepared you are to prevent an outage, somehow it is waiting to happen! There are multiple components in hardware and software, apart from multiple parameters, that have to collaborate seamlessly to run a service, and any one of them can fail at any point in time, making the service unavailable. Additionally, there are many external factors (natural disasters, grid failures, etc.) that are out of the control of any single entity. I feel it is not wise to expect a service that is always available and 100% reliable.

But what about your customers?

…you can easily play the tweet-and-blame-your-provider game, as many services are already doing, but eventually it's you who has to bear the revenue loss

For every downtime, you can easily play the tweet-and-blame-your-provider game, as many services are already doing, but eventually it is you who has to bear the revenue loss, the loss of competitive edge, and the damage to your company's brand value. Even a couple of minutes of downtime is blown out of proportion in social media circles. Also, it seems your competition is just waiting to grab this opportunity and turn it into an advantage, something like this:

So how can IaaS minimize the effect of outages?

Existing solutions

Mirroring and Availability Zones: Rackspace and Amazon EC2

IaaS providers do indeed have strategies for this. Rackspace offers multiple geographically separated regions, and it recommends that by mirroring your infrastructure between data centers you can mitigate outage risk. On the other hand, Amazon EC2 offers multiple AWS Availability Zones. As per the FAQs on Amazon's portal:

"Each availability zone runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Common points of failure like generators and cooling equipment are not shared across Availability Zones. Additionally, they are physically separate, such that even extremely uncommon disasters like fires, tornados or flooding would only affect a single Availability Zone."

But it has been observed that during last year's outage of Amazon EC2 (US East data center) on 22 Oct. 2012, even applications configured for multiple availability zones were knocked offline.
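The appeal of mirroring is easy to quantify: if two sites really fail independently, the service is down only when both are down at once, so the combined unavailability is the product of the individual unavailabilities. A minimal sketch with illustrative numbers (and note that the independence assumption is exactly what a correlated outage like the Oct. 2012 one violates):

```python
# Combined availability of N mirrored deployments, assuming independent failures:
# the service is unavailable only if every mirror is down at the same time.
def combined_availability(availabilities):
    p_all_down = 1.0
    for a in availabilities:
        p_all_down *= (1 - a)   # probability this mirror is down
    return 1 - p_all_down

single = 0.9995                                      # one region at 99.95%
mirrored = combined_availability([0.9995, 0.9995])   # mirrored across two regions
print(f"single region: {single:.6%}")
print(f"two regions:   {mirrored:.6%}")
```

On paper, two mirrored 99.95% regions yield roughly 99.99998% availability; in practice, correlated failures keep the real number well below that ideal.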

Multi-hypervisor design: Example: OnApp Cloud

OnApp's server-based multi-hypervisor design monitors different cloud services and supports automatic failover by relocating virtual machines and rerouting application data. This service empowers you to point, click, and manage clouds based on different virtualization platforms. OnApp currently supports the Xen, KVM, and VMware hypervisors.

Federated network of public cloud providers – ComputeNext and 6Fusion

Federation of cloud services (cloud brokerage) is about accessing multiple cloud IaaS offerings from one single-sign-on account and managing them, apart from comparing, measuring, and unifying the billing of compute resources. Based on your requirements, you can change your provider if needed without rewriting your code and API calls. No more vendor lock-in!

It seems the above-mentioned platforms are already serving as the basis of the beginning of the much-talked-about multi-cloud approach by providing the API abstraction (explained later) for multi-cloud deployments. I feel they should take their services to the next level by providing automatic failover and built-in real-time communication between multiple vendors. A formidable challenge!
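That API abstraction can be pictured as a thin provider-agnostic interface: the application codes against one API, and per-vendor adapters translate the calls. All class and method names below are hypothetical, purely to illustrate the idea:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Hypothetical provider-agnostic interface; the application never
    talks to a specific vendor's API directly."""
    @abstractmethod
    def create_server(self, name: str, cpus: int, ram_gb: int) -> str: ...

class VendorA(CloudProvider):
    def create_server(self, name, cpus, ram_gb):
        # ...a real adapter would call vendor A's API here...
        return f"vendorA:{name}"

class VendorB(CloudProvider):
    def create_server(self, name, cpus, ram_gb):
        # ...a real adapter would call vendor B's API here...
        return f"vendorB:{name}"

def deploy(provider: CloudProvider):
    # Application logic is identical regardless of the vendor behind it.
    return provider.create_server("web-1", cpus=2, ram_gb=4)

print(deploy(VendorA()))
print(deploy(VendorB()))
```

Switching vendors then amounts to passing a different adapter, which is the "no code rewrite" promise of brokerage platforms in miniature.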

Nextgen solution

Is the multi-cloud approach a better strategy to achieve the highest possible service availability?

Yes, in fact, it is. A couple of companies have already started experimenting with this (PayPal, for instance). A few have already sensed the upcoming demand for it and are in the process of building the right tools for the multi-cloud approach (RightScale multi-cloud management). Apart from the obvious advantage of high availability, multi-cloud may lead to cost reductions and healthy competition among IaaS providers for a better service. As a customer, you would not have to face the vendor lock-in issue.

Apart from the obvious advantage of high availability, multi-cloud may lead to cost reductions and healthy competition among IaaS providers for a better service.
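The high-availability half of that argument boils down to a failover loop: try the primary provider, and fall through to the next when a call fails. A toy sketch, where the provider functions are stand-ins for real vendor SDK calls (here vendor A is deliberately made to fail):

```python
class ProviderDown(Exception):
    pass

def provision_on_a(spec):
    raise ProviderDown("vendor A region outage")   # simulated outage

def provision_on_b(spec):
    return {"provider": "B", "spec": spec}

def provision_with_failover(spec, providers):
    """Try each provider in order; return the first successful result."""
    errors = []
    for provision in providers:
        try:
            return provision(spec)
        except ProviderDown as exc:
            errors.append(str(exc))
    raise RuntimeError(f"all providers failed: {errors}")

result = provision_with_failover({"cpus": 2}, [provision_on_a, provision_on_b])
print(result)
```

The hard parts the author alludes to — keeping state consistent across vendors and detecting failure quickly — are exactly what this sketch leaves out.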

What are the challenges in multi-cloud implementation?

As a developer, I understand that implementing this is easier said than done unless we address the following cloud standards issues:

Interoperability

Portability etc.

The path to multi-cloud goes via APIs

APIs…? I won't define APIs here, but let me offer you a practical scenario from real life to help you understand the role of an API in a multi-cloud approach.

How do you book (reserve) flight tickets?

You simply log in to your favorite travel portal (or the airline's portal) and enter the journey date and city-pair details, and within seconds you have a list of available airlines on your screen. You choose one of them, and after a few clicks and within a few minutes you've got the booking confirmation message. Sounds so simple.

IT Managers Struggling to See Through All-Flash Storage Myths

By Gavin McLaughlin, Solutions Development Director at X-IO Storage

UK businesses are being bamboozled by overhyped marketing myths about the performance, reliability, and power consumption of all-flash storage arrays, despite practical evidence and common-sense arguments to the contrary.

74% have a rose-tinted view…

According to a recent survey, close to three-quarters (74%) of IT managers have a rose-tinted view of all-flash storage, despite misgivings about the cost, risk, and general lack of need for all-flash arrays.

As many as 76% accepted the myth that all-flash arrays are faster than hybrid arrays. But while flash can undoubtedly assist with lowering latency for random reads, it can often be the same as, or even slower than, plain hard-disk arrays under some workloads, particularly sequential writes.

A similar number accepted that all-flash solutions use less power and cooling compared to hybrid storage. But real-life testing by numerous users has found hybrid arrays use half the power of all-flash arrays in like-for-like evaluations. Flash modules/SSDs draw less power as raw components than HDDs, but enterprise storage arrays are not just raw storage. They use processors, and cache memory is also needed in some designs, both of which require power. On average, all-flash arrays draw double the amount of power and need more cooling than true hybrid arrays.

…real-life testing by numerous users has found hybrid arrays use half the power of all-flash arrays in like-for-like evaluations

Two in five (40%) respondents believed all-flash arrays provided a higher degree of reliability than hybrid arrays, based on marketing messages from all-flash vendors that HDDs are unreliable. But this is simply not the case. Some enterprise storage vendors are guilty of using the wrong tool for the job by relying on consumer-grade SATA drives to cut supply-chain costs, causing hard disk drives to be seen as problematic. When issues with flash media are taken into account, particularly cell failure on NAND silicon, flash arrays usually have shorter duty cycles than hybrid storage.

It's clear IT managers are unaware of the truth behind these all-flash myths. Many vendors try to convince customers and resellers that hard disks are outdated technology and flash is the most appropriate media for all use cases. The truth is that hybrid storage is the practical option: it offers the best of both worlds by combining the benefits of flash with the established benefits of hard drives.

The truth is that hybrid storage is the practical option: it offers the best of both worlds by combining the benefits of flash with the established benefits of hard drives.

The survey was undertaken by independent research firm Vanson Bourne, which interviewed senior IT decision-makers at 100 large enterprises across the United Kingdom. The aim of the survey, commissioned by hybrid storage vendor X-IO Technologies, was to compare the adoption plans for all-flash arrays in the enterprise against the hype being promoted by all-flash array vendors. It also shows where the market is educated about the promise of all-flash arrays and where it is misinformed.

With IT budgets currently under tight restrictions, IT managers need to implement a storage solution that provides the right amount of performance while remaining cost-effective. In time, storage architects and buyers will realize that flash is a tool rather than a solution. But in the meantime, that won't stop some users from getting their fingers burnt in the world of storage sales, an area where people push hard to close deals that often fail to provide the best solution for a customer's needs. It is up to the storage industry to help customers and resellers understand the strengths and weaknesses of each type of storage, helping people see through the hype.

It is up to the storage industry to help customers and resellers understand the strengths and weaknesses of each type of storage…

The survey data was collected via an online survey completed by a nationally stratified sample of 100 IT managers from key industry sectors such as financial services, manufacturing, retail, distribution and transport, and the commercial sector from across the UK. The survey was prepared on behalf of Vanson Bourne and conducted in February 2013. The identity of the 100 respondents will remain confidential, in accordance with the Vanson Bourne Code of Conduct. The X-IO name was not revealed during the interviews to ensure the data remained unbiased.

Gavin McLaughlin is Solutions Development Director at X-IO Storage, based out of X-IO's London office covering the EMEA region. X-IO Technologies is a recognized innovator in the data storage industry thanks to its award-winning, cost-effective, high-performance, self-healing Intelligent Storage systems, including flash-enabled hybrid storage.

Now you must remember that there are thousands of travel portals, offering flight booking services to many direct customers (registered/guest users) and travel agents in almost 200 countries around the globe! Similarly, for train and bus bookings, many of these portals provide you with the ability to view the seat layout and book as per your requirements. As bookings happen simultaneously across the world through thousands of portals:

How do they ensure that the same seat is not booked for more than one customer, or that the total number of bookings does not exceed the available seats? Now, this sounds a bit complex, doesn't it?
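The essential answer, independent of any particular reservation system, is that the inventory owner serializes seat updates: every booking request is an atomic check-and-set against the single authoritative copy of the inventory, so only one of many concurrent requests can win a seat. A toy sketch of that idea, with a lock standing in for whatever concurrency control a real reservation database uses:

```python
import threading

class SeatInventory:
    """Single authoritative copy of the seats; all portals call into this."""
    def __init__(self, seats):
        self._free = set(seats)
        self._lock = threading.Lock()

    def book(self, seat):
        # Atomic check-and-set: at most one caller can claim a given seat.
        with self._lock:
            if seat in self._free:
                self._free.remove(seat)
                return True
            return False

inventory = SeatInventory({"12A", "12B"})
results = []
threads = [threading.Thread(target=lambda: results.append(inventory.book("12A")))
           for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(results.count(True))   # exactly one of the five concurrent requests succeeds
```

Real systems replace the in-process lock with database transactions or conditional updates, but the guarantee is the same: the check and the booking happen as one indivisible step.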

All this is made possible via the magic of APIs running in the background

Airlines and train or bus operators have their own inventory. They (or a third party) maintain a computer reservation system (CRS) where they enter their inventory details in their respective databases. Once they have a CRS, every vendor who wants to be part of a GDS (Global Distribution System) must expose a web API to access its inventory. A GDS will aggregate many such APIs from different vendors and can offer its own web API to multiple channels, such as travel portals throughout the world.

Now, any booking request to the GDS API will be directed to all of the participating vendors' computer reservation systems (CRS), and the responses from them are consolidated and shown to the calling program (i.e. to your favorite travel portal). So, the GDS APIs are simply a layer of abstraction between the actual vendor inventory and the consumer (travel portals). More or less, this is the workflow in any ticket booking system.

The API will receive a few parameters like date, city-pair, etc. from your portal and will respond with the availability and fare details.

To simplify the above, let me say that if you want to develop a travel portal you don't need to deal with multiple operators for inventory. You can contact a GDS provider (Galileo, Amadeus, Sabre, etc.), purchase a license, and integrate their API into your web application. The API will receive a few parameters like date, city-pair, etc. from your portal and will respond with the availability and fare details.
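The fan-out-and-consolidate workflow described above can be sketched as a toy GDS-style aggregator. The vendor functions and data shapes below are invented for illustration; a real GDS would make network calls to each CRS:

```python
# Stub vendor APIs, standing in for each operator's CRS web API.
def airline_a_api(date, city_pair):
    return [{"vendor": "A", "flight": "A101", "fare": 120}]

def airline_b_api(date, city_pair):
    return [{"vendor": "B", "flight": "B330", "fare": 95}]

VENDOR_APIS = [airline_a_api, airline_b_api]

def gds_search(date, city_pair):
    """The single API a travel portal integrates against: fan the query
    out to every vendor and return a consolidated, fare-sorted list."""
    offers = []
    for vendor_api in VENDOR_APIS:
        offers.extend(vendor_api(date, city_pair))
    return sorted(offers, key=lambda o: o["fare"])

for offer in gds_search("2014-03-01", ("DEL", "BOM")):
    print(offer)
```

From the portal's point of view there is only one call, `gds_search`, which is exactly the abstraction the author wants IaaS to offer across cloud vendors.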

So, if you want to build a travel portal for flight booking, it doesn't matter which part of the world you are in; as a prerequisite you have to integrate the GDS APIs into your application. This abstraction, to some extent, lowers the barrier to developing a portal, and that's the reason there are numerous new travel (agent) websites starting operations every other day.

Bottom-line

Maybe the cloud should evolve and mature a bit more in terms of interoperability and standards in order to provide this much-awaited new approach at a reasonable cost.

I think the above analogy says it all for cloud services as well. The IaaS providers have an inventory to share, and most of them already have their own APIs to manage server resources. But before IaaS goes the multi-cloud way on an industrial scale, many rough areas need to be smoothed out. Maybe the cloud should evolve and mature a bit more in terms of interoperability and standards in order to provide this much-awaited new approach at a reasonable cost.

What do you think… are we going to witness many multi-cloud implementations in 2014?

Aside from the lack of dogmatism with regard to cloud computing in Africa, the relative lack of existing IT infrastructure has also been cited as a reason for the growing interest in the cloud in Africa. It makes little sense for companies to invest money that they may well not have in expensive IT equipment if it is not necessary, and which may even become obsolete in the future. The same can be said for the uptake of mobile networks and mobile devices in Africa – why build expensive legacy fixed-line network infrastructure when modern wireless communications will do?

Aside from the African study, Cisco has also recently published its third annual Global Cloud Index, which suggests that global cloud traffic will increase by over 450% by 2017, reaching 5.3 zettabytes four years from now. The same global study predicted that the Middle East and Africa will have the highest growth in cloud traffic over the period.

The Middle East and North Africa are often associated with each other, hence the acronym MENA, partly due to their geographical proximity, but also due to business and economic ties built up thanks to the area's housing of some of the world's richest supplies of petroleum. It goes without saying that the adoption of the cloud in the world's most oil-rich region will be an enormous boost to the future of the technology.