The cloud community faces industrial-scale challenges

No one group has all the expertise needed to create an ideal cloud environment – it requires expertise in application behavior, data center design, network and machine virtualization, wide-area networking, and far more. Getting this all working together takes collaboration, says James Walker, President of the CloudEthernet Forum.

With the arrival of the smartphone, a new kind of thin client appeared that seemed to hold the entire world in its Internet grasp. People didn't need to shift perspective and embrace the SaaS model; they simply found they were already using it, and the word of this new era was "cloud". The result has been a surge in cloud uptake that took even its strongest advocates by surprise.

The total worldwide market for cloud infrastructure and services is predicted to grow to $206B in 2016.

The signs are everywhere, as massive new data centers spring up in the coldest places: Dell’Oro Group predicts that within five years more than 75% of Ethernet ports will be sold into data centers, with similar predictions for computing and storage gear from Gartner and Forrester. So the total worldwide market for cloud infrastructure and services is predicted to grow to $206B in 2016, and the cloud will be the hub for most business investment well into the next decade.

This means that the industry will soon be facing a far steeper sales incline – and this is just when it can least afford to slip. If the cloud fails now, it could send the entire market tumbling back down the slope.

The bad news is that cracks in the cloud structure have already begun to show. The good news is that this has been recognized in time, and the industry has launched the CloudEthernet Forum, which is rallying members to tackle fundamental issues and ensure a reliable, scalable, and secure cloud for the coming generation.


Flash Systems: the Future of Data Storage

By Steve Hollingsworth, Director at Applied Technologies

As more and more businesses and individuals migrate into the Cloud, it is predicted that data storage centers will gradually become a thing of the past. The Cloud believers are expecting the old churning banks of servers tucked away in companies' basements to pack up and gather dust, as we all start trying to figure out how to use Google Docs.

I am a non-believer!

The Cloud is a fantastic innovation in technology, and for some it will prove revolutionary in its results. But not everyone wants to migrate wholesale, especially businesses that are used to working effectively outside the Cloud.

Migration itself, for instance, takes time, and if you choose to use Amazon Web Services (AWS) or Google Apps, staff will be required to learn how to use new software, which can take up valuable time and lower productivity.

This point is illustrated by the fact that many larger organizations are not yet moving fully into the cloud, choosing instead to use a "hybrid" cloud, with core applications hosted by themselves on agile infrastructures that still require data storage centers.

The amount of data businesses store and use is growing at a very fast rate, with an estimated 45-fold growth in data storage by 2020. This means the majority who still store data physically will continue to require extra storage capacity.

With physical space and power becoming a premium, Flash Systems are set to solve these issues. The old-fashioned approach of simply throwing more disks into a data center in order to increase capacity is not only expensive to fund and demanding of space; in terms of efficiency and energy use it will also prove costly. Replacing out-of-date data storage technology with Flash Systems will, on average, result in a drop in environmental impact as well as in energy costs.

For businesses that require high-powered processing, Flash Systems represent a quantum leap in performance. Flash is allowing businesses to mine data quickly for immediate business decisions, increase productivity 'per core' to reduce license costs, and change computing in many other ways.

Good business requires maximum productivity, which can only really be provided by maximum speed and process management.

Although, once installed, Flash Systems are less expensive to run, I do appreciate that they are expensive to set up. However, when a business chooses to invest in a Flash System, it can be introduced gradually; properly sized and implemented with hot data, a limited amount of flash can have exponential benefits. Businesses are able to retain their hard disk drives for data retention, and Flash Systems can be prioritized for high-demand data or high-end processes.

This invites the introduction of hybrid data storage systems, which combine hard disk drives and Flash Systems to cover a wide range of data storage and operations.
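As a rough illustration of how such a hybrid array might decide placement, here is a minimal Python sketch of a hot/cold tiering policy; the access-frequency threshold and the extent names are purely illustrative assumptions, not taken from any particular product.

from dataclasses import dataclass

# Illustrative threshold only; real hybrid arrays use much richer heuristics
# (recency, I/O size, read/write mix, and so on).
HOT_READS_PER_DAY = 50

@dataclass
class Extent:
    name: str
    reads_per_day: int

def place(extent: Extent) -> str:
    """Send frequently read ('hot') extents to flash, the rest to spinning disk."""
    return "flash" if extent.reads_per_day >= HOT_READS_PER_DAY else "hdd"

if __name__ == "__main__":
    workload = [Extent("orders-index", 400), Extent("2009-archive", 1)]
    for e in workload:
        print(f"{e.name} -> {place(e)}")

Only the hot fraction of the data needs to land on flash, which is why a relatively small flash tier can carry a disproportionate share of the I/O.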

Good business requires maximum productivity, which can only really be provided by maximum speed and process management. Flash can deliver this productivity, at perhaps unexpected price points, and can be implemented as part of a private or hybrid cloud, depending on the application requirements.

With 20 years of experience in solution design and sales, Steve has worked with IBM Systems from the System/36 and System/38, through the AS/400 and RS/6000, to the current POWER7 IBM i / AIX and the latest storage technologies. Steve is a Director at Covenco, the parent company of Applied Technologies.

Getting serious about Cloud

Claire Buchanan, Senior Vice President, Global Operations at Bridgeworks, identifies data transfer rates as one of the critical components of successful cloud adoption.

Sitting in a meeting room the other day, our discussion swirling around Big Data and Cloud, as is perhaps common in most technology companies, we decided to get real and answer the question: "What is the barrier stopping enterprises from adopting the cloud?"

We next identified the six pillars, the 'Six Ss', of successful cloud adoption: service, speed, scale, security, sovereignty (of data), and simplicity. We agreed that security could be overcome and, alongside service and sovereignty, came down to the choice of geo and provider; the real barriers are speed and scale – and then keeping it simple.

The barrier stopping enterprises from adopting the cloud? [The] real barriers are speed and scale – and keeping it simple.

Clearly, speed and scale can apply to several disciplines within the technology business. For the purposes of this blog, and as one of the biggest inhibitors, this comes down to the sheer size of the big data challenge and moving data over distance fast, be that data center to data center or customer to host provider. The challenge of getting that data quickly, efficiently, and in a 'performant' manner from source to host is the key. The WAN limitations are easy to identify (a rough transfer-time sketch follows the list):

The size of the pipe (in this case, bigger doesn't necessarily mean faster),

The protocol, and

The amount of data that needs to be moved.
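To put those three factors together, here is a back-of-the-envelope sketch in Python; the 30% efficiency figure stands in for protocol overhead and the effect of latency on sustained throughput, and is an illustrative assumption rather than a measurement.

def transfer_time_hours(data_gb: float, link_gbps: float, efficiency: float = 0.3) -> float:
    """Rough wall-clock estimate for moving data_gb gigabytes over a WAN link.

    efficiency models protocol overhead and the effect of latency on sustained
    throughput; 0.3 is an illustrative assumption, not a measurement.
    """
    effective_gbps = link_gbps * efficiency
    return (data_gb * 8) / effective_gbps / 3600

# Example: 50 TB over a 1 Gbps link at 30% sustained efficiency.
print(f"{transfer_time_hours(50_000, 1.0):.0f} hours")   # roughly 370 hours

Doubling the pipe helps, but if the protocol cannot keep the link full then the efficiency factor dominates – which is exactly why bigger does not necessarily mean faster.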

We have all heard of those projects that very nearly made it to the Cloud provider, but the time and cost of transferring the data made it far too big to contemplate. The movement of data on tapes remains commonplace because it is viewed as a safer and faster route in a non-cloud world. From small amounts of data, a private individual trying to upload a movie to Dropbox, to a huge corporation performing a US coast-to-coast daily batch process, the problem is very real. Even the mighty Amazon allows clients to send their drives in to upload the data to the Cloud, such is the problem of transferring data across the WAN – all day, every day.

So what do CIOs want?

Let's start with the simple stuff – get the cold (legacy) data off the primary storage tier. 80% of that data has no real value in day-to-day business terms, but for regulatory, governance, and compliance reasons the data has to live somewhere. The challenge, therefore, is to migrate data out of the production environment and into storage. The cloud offers safe, compliant storage, driving down costs and bringing simplification to corporate environments, but still the very basic challenge remains – moving it. Most large organizations want to offload their legacy data and have somebody else manage or archive it for them at a fraction of their in-house cost.

Before Cloud we had the philosophy of Write Once Read Many (WORM), but the reality of the cloud is that we want to place data that we may not necessarily need, but cannot destroy, somewhere less expensive. In this cloud world it is no longer WORM; it is very much Write Once Read Possibly (WORP). We are not talking warm data here, but more often than not cold or even glacial.

Ask a simple question, "How many megabytes per second can your WAN Optimisation product achieve on a 10Gb link?" – that should be fun.

All sound simple so far?

For small organizations it is, but for medium to large organizations the elephant in the room is how to move terabytes or even petabytes of data across the WAN. WAN Optimisation, I hear you say; that should fix it. Not in a Write Once Read Possibly world (remember WORP?) – in order to dedupe, the system must learn. I could easily slip down the slippery slope here: these guys don't optimize the WAN, they simply reduce the size of the files you send across the WAN, and this has computational limitations – more Data Optimisation than WAN Optimisation, would you not agree? Ask a simple question, "How many megabytes per second can your WAN Optimisation product achieve on a 10Gb link?" – that should be fun.

What is really required is not what we know as WAN Optimisation; we need something that moves data at the speed of light, because that is the only physical limitation. Something that will move all data at scale (even encrypted or compressed at source – business wants security, and encryption is becoming ever more popular) in an unrestricted manner. Remember simplification. It should be easy to install, say less than an hour, and not require any network or storage management time. It should just run on its own, the only intervention being to pull off the stats.

The news is good: it can be done. You just have to open your minds to the seemingly impossible. Just to whet your appetite, on a 10Gb link how does 1 GB in 1 second grab you? Better yet, 1 TB in 16.2 minutes? (A 10Gbps link carries at most 1.25 GB per second, so 1 GB per second is roughly 80% utilization, and 1 TB in 16.2 minutes works out to about 1 GB per second sustained.) There are corporations that are live and in production on 20Gbps, basking in the simplicity of it all, not to mention the cost, productivity, and continuity savings. Just for good measure, they can throttle back if they need to limit the share of the pipe for other things and still get lightning speed. The barrier to Cloud adoption for the enterprise can be removed.

Which vision of SDN will win?

At the end of last year, Compare the Cloud took a look at what we thought was the future for Cloud users and Cloud service providers, and one of our predictions was for greater awareness of Software Defined Networking (SDN) to develop this year. But if this proves to be the case, which vision of SDN will win through in 2014?

The Case for Software-Defined Networking

The argument for SDN is extremely tempting: by applying the logic of server virtualization to a network, we should be able to achieve the same level of abstraction and the same benefits, right? Well, the jury is out on whether that is really achievable and, in the meantime, two very different visions are emerging about how those benefits can be realized.

By applying the logic of server virtualization to a network, we should be able to achieve the same level of abstraction and the same benefits, right?

Attempts to create a policy-defined, more centrally-managed network are not new. At CohesiveFT they have been developing a software-defined (overlay) network since 2007. However, the success and widespread use of virtualized servers, the growing acceptance of and move towards cloud computing models, and the success of companies like Facebook and Amazon in efficiently managing huge data centers have really ignited interest in the potential of SDN.

Hardly a week goes by without some new SDN-related acquisition, strategy unveiling, or the like. And a schism seems to be emerging between those who see SDN as a software-only solution that decouples the network from the physical infrastructure, and those who argue that network virtualization must be paired with the capability to control the physical infrastructure if SDN is to have real value.

Network Virtualisation Holds All the Answers

Given where they are coming from, it is no surprise that the "decouple, decouple, decouple" argument is being led by the leading hypervisor vendor, VMware. The vision for their network virtualization platform, NSX, is to enable users to deploy a virtual network for an application with the same speed and operational efficiency with which you can deploy a virtual machine.

VMware delivers this through its hypervisor virtual switch, the vSwitch, and its NSX controller. The virtual switches connect to one another across the physical network using an overlay network, handle links between local virtual machines, and provide access to the physical network should a remote resource be required.
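To make the overlay idea concrete, here is a minimal Python sketch of VXLAN-style encapsulation (one of the tunnelling formats NSX has used): the tenant's Ethernet frame is wrapped in an 8-byte VXLAN header and carried between virtual switches inside an ordinary UDP packet. The VNI value, the destination address, and the placeholder frame are illustrative assumptions, not NSX code.

import socket
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN (RFC 7348)

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header: flags (0x08 = VNI present),
    3 reserved bytes, a 24-bit VNI, and 1 reserved byte (RFC 7348)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!B3xI", 0x08, vni << 8)  # VNI sits in the top 24 bits
    return header + inner_frame

# Illustrative only: wrap a placeholder tenant frame for VNI 5001 and send it
# to a hypothetical remote tunnel endpoint over plain UDP.
tenant_frame = b"\x00" * 60
packet = vxlan_encapsulate(tenant_frame, vni=5001)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("198.51.100.7", VXLAN_PORT))

The physical network only ever sees the outer UDP/IP headers – which is exactly the decoupling being claimed, and also why the underlay gains no visibility into the tenant traffic it carries.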

An advantage of this approach is that additional services, like distributed firewalls and load-balancers, can reside within the virtual switch, allowing a more flexible firewalling policy and greater network efficiency. However, the strength of the solution – taking the intelligence out of the physical infrastructure, so that it is responsible only for forwarding overlay packets – is also at the core of the criticism leveled against it.

An advantage of this approach is that additional services, like distributed firewalls and load-balancers, can reside within the virtual switch.

Because it offers no visibility into the physical underlay network on which the solution resides, there can be no insight or assistance with regard to traffic engineering, fault isolation, load distribution, or other essential physical network management activities.

Software is only part of the solution

Those who believe that a virtualized network which does not support communication with network switching hardware cannot represent a real SDN solution should look no further than the words of Padmasree Warrior, Chief Technology and Strategy Officer at Cisco, who argued last year that: "Software network virtualization treats physical and virtual infrastructure as separate entities, and denies customers a common policy framework and common operational model for management, orchestration, and monitoring."

Cisco has been quick to target this perceived weakness in the VMware vision as it promotes its own SDN vision: Application Centric Infrastructure (ACI). Through its APIC controller, Cisco will enable users to create a policy-driven infrastructure, controlling both virtual and (Cisco) hardware switches, built around the needs of the application.

Through its APIC controller, Cisco will enable users to create a policy-driven infrastructure, controlling both virtual and (Cisco) hardware switches.

Cisco argues this approach will deliver both network virtualization and visibility into the physical network, which will enable the fine-tuning of network traffic.

Which Vision Will Win Out?

VMware is quick to dismiss Cisco's vision of Application Centric Infrastructure as 'Hardware Defined Networking', casting it as an attempt to slow a shift in architecture towards virtual networks that can run on low-cost hardware. The commoditization-of-hardware model doesn't work, they argue, when you're tied into specific hardware.

But it isn't as simple as that: the fact remains that hardware will still need to be updated to address changing network needs, and it will still be necessary to monitor and optimize it.

I guess whichever side of the argument ultimately wins hearts and minds, the real winners here will be cloud customers and CSPs. Two major technology firms bringing to market two different solutions and investing huge marketing efforts into promoting their own approaches will certainly raise awareness of the potential of SDN.

As SDN becomes more widely adopted, so will customer organizations' ability to take full advantage of virtualization and Cloud computing.

For CSPs, a key advantage of virtual networks is the provision of multi-tenant isolation on a shared physical network. To this end, many CSPs are already running their own proprietary SDN solutions in their environments (as we saw from Chris Purrington's post earlier), so this argument is really one about raising awareness of SDN and communicating the benefits.

Those benefits include helping organizations manage hybrid and multi-cloud environments in a way that delivers the greatest performance, efficiency, and security. So whichever vision wins out, as SDN becomes more widely adopted, so will customer organizations' ability to take full advantage of virtualization and Cloud computing.