Hardware standardization benefits cloud computing, virtualization

IT administrators at data centers with standardized server hardware typically have fewer headaches than those working in mishmash hardware environments, especially when it comes to virtualization and cloud computing. But standardization can also result in vendor lock-in. When you standardize on one or two kinds of server hardware, you get the advantages of common hardware parts, technical familiarity, common firmware images and upgrade techniques, easier management and special pricing from vendors. "Many customers are very interested in simplifying, consolidating and standardizing their IT wherever possible. With virtualization technology, the focus is increasingly on the operational cost associated with maintaining a complex IT [environment]," said Matt Eastwood, the group vice president of enterprise platform research at Framingham, Mass.-based IDC. San Francisco-based Virgin America Inc. airlines chief information officer Bill Maguire has standardized server hardware for years because it keeps management costs – and aggravation – to a minimum.

Maguire, who also worked in the U.S. Postal Service data center for more than 20 years, said hardware standardization was the norm there as well. "Each of those vendors has its own quirky differences. I standardized on Verari Systems servers here, and before that, I standardized on Compaq because they made enterprise-class servers," Maguire said. "At the Postal Service, we used all IBM mainframes, and in the server environment, we used Sun [Microsystems, Inc.] to run Unix, before the Linux days, and Intel-based systems. Even then, we tried to minimize the number of vendors we used so we could manage patches and upgrades more easily, and keep the overhead of management to a minimum." Another plus of standardizing on best-of-breed hardware is added negotiating power, especially for large data centers, Maguire said. "You can work upgrade deals into your contract with a vendor, so you can get some sweeteners in the deal. Also, the bigger you are, the easier [hardware standardization] is because you do not need to hire people with a wide range of skill sets," he said.

Virtualization, cloud computing drive hardware standardization
In addition to the obvious management and cost benefits, virtualization can also drive hardware standardization. For example, live migration of virtual machines (VMs) works only across the same family of processors from the same vendor (Intel or AMD), so uniformity of CPUs across virtualization host platforms is also necessary. Cloud computing is another technology that lends itself to hardware standardization, said Sam Charrington, the vice president of product management and marketing for St. Louis, Mo.-based Appistry Inc., which makes a grid application platform. "Hardware standardization will have to happen in order for applications and data to move seamlessly from one cloud environment to the next," Charrington said during his session at the LinuxWorld/Next-Generation Data Center conference in San Francisco. The downside of standardizing is vendor lock-in, but that's only a problem if the vendor-customer relationship is sour or the vendor isn't a stable company, Maguire said. "Hardware standardization can be a risk if you do not maintain a good relationship with your vendor," Maguire said.

"Fostering an honest vendor relationship may be a win-win for business. I do not see tons of downside to standardizing unless you do not plan your roadmap and you do not foster an honest relationship. you've got to be smart about your decisions." But standardization isn't possible. "In some cases, Fortune 500 as an example, the range of workloads that are being supported continues to expand and no single infrastructure will ever be capable of handling all the utilization cases," Eastwood said. And OEMs have segmented the market into even small chunks and offering servers that are "optimized" for specific market segments, like branch offices, small and medium-sized businesses, enterprise data centers, virtualization, and high-performance Computing, Eastwood said. "So whilst users work to simplify, the OEM business models are becoming more complex." In fact, the info Center Decisions 2008 Purchasing Intentions Survey of quite 600 management-level IT administrators shows that 36% of users said they are doing not have a typical hardware platform for VMs.

Because complete hardware standardization is not the philosophy of every data center, other standards, like Open Virtualization Format (OVF), are being developed to standardize the means of packaging and distributing VMs across heterogeneous platforms. Charrington likened the OVF standard to the MP3 format, which allowed audio to move between different vendors' sound devices. The economic downturn has made companies large and small eager to spend less, so data center plans are being shelved in favor of virtual servers and the cloud. IT pros have found that they can get off the ground without spending more than they have. Startups have sprung up around cloud infrastructure to deliver that capability in ever-increasing variety and sophistication, and real-world examples are multiplying. Concerns remain around security and reliability, but confidence is growing as major players step into the game and the cloud market continues to improve even in a dire economy.
Choosing an application architecture for the cloud
For most folks, building applications to run in a cloud environment is a new ball of wax.

But to take advantage of the flexibility of a cloud environment, you need to understand which application architectures are properly structured to operate in the cloud, the sorts of applications and data that run well in cloud environments, data backup needs and system workloads. There are three architectural choices for cloud environments: traditional architecture; asynchronous application architecture (i.e., one focused on data processing, not end-user interaction); and synchronous application architecture. This article outlines these architectures and indicates when each is most appropriate based on system needs and cloud provider capabilities.

Cloud environments differ
As part of your system design, you need to analyze and design your application appropriately for the particular environment offered by your cloud provider. Cloud environments aren't all created equal: they provide different mechanisms for implementing applications. Amazon Elastic Compute Cloud (EC2), for instance, delivers "empty" virtual machines into which any sort of software may be installed and run; achieving scalability for individual applications is left up to the application creator.
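
To make that concrete, here is a minimal sketch, using boto3 (Amazon's current Python SDK, which the article itself does not name), of launching one of those "empty" EC2 virtual machines; the AMI ID is a placeholder, and the user-data script stands in for whatever software installation the application creator must handle:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

user_data = """#!/bin/bash
# EC2 hands you a bare VM: installing the software the
# application needs is entirely the creator's job.
yum install -y httpd
systemctl start httpd
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Linux AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # first-boot install script
)
print("Launched:", response["Instances"][0]["InstanceId"])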

By contrast, Google and Microsoft provide programming frameworks (Google Apps, a component-based framework, and Microsoft's Azure Services Platform, a .NET-based framework, respectively) that scale transparently, relieving the app creator of that burden; however, these frameworks limit a system designer's architectural options.

Your environment: The considerations
Data and applications. The first step is to determine which applications and data sources used by the application will run in the cloud. These apps or data probably reside in the company data center, requiring that the cloud-based app be able to reach the data center to access them. In these cases, the best option is for apps or data sources to be made available as services that can be called remotely. Despite service-oriented architecture fervor, though, many applications and data sources haven't been front-ended with service interfaces. And even with workarounds like screen scraping, these approaches are inelegant, make a cloud-based application more complex and tend to be fragile.
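
Where a service interface is feasible, it can be quite thin. The following is a hypothetical sketch, with Flask and a local SQLite database standing in for the on-premises data source, of front-ending data-center data so a cloud-based app can call it remotely:

import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/customers/<int:customer_id>")
def get_customer(customer_id):
    # The database stays in the company data center; only this
    # service interface is reachable from the cloud application.
    conn = sqlite3.connect("corporate.db")  # hypothetical data source
    row = conn.execute(
        "SELECT id, name, region FROM customers WHERE id = ?",
        (customer_id,),
    ).fetchone()
    conn.close()
    if row is None:
        return jsonify(error="not found"), 404
    return jsonify(id=row[0], name=row[1], region=row[2])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)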

Data backup. While Infrastructure as a Service (IaaS) offerings allow traditional SQL Server databases to run, you still need to perform data backups to ensure recoverability in case of crashes. Since the data and applications may reside in different places, established backup mechanisms may not work. Amazon's Simple Storage Service (S3) can store system recovery data created via the backup capabilities of the system database itself. For system recovery, the application itself can also be protected via snapshots into S3. Because of its internal replication of S3 data, Amazon makes a recovery scenario less likely, but it is important not to rely on that capability blindly. Several startups have released products to replicate system data. Amazon has also released Amazon Elastic Block Storage (EBS), which can "persist" system data without needing S3. You can install a database on EBS, which is then protected against system failure thanks to Amazon's redundant internal backup inside its data centers. EBS won't, however, create point-in-time backups, so you must do them manually or use a third-party product.
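
A minimal sketch of both backup paths, again using boto3; the bucket name, dump file and EBS volume ID are placeholders:

import datetime
import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2", region_name="us-east-1")

stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")

# Path 1: ship a database dump to S3, where Amazon's internal
# replication protects the copy. Producing the dump is still your job.
s3.upload_file("/backups/appdb.dump", "my-backup-bucket",
               "appdb/%s.dump" % stamp)

# Path 2: EBS persists data across instance failure, but point-in-time
# backups remain manual: snapshot the volume explicitly.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder EBS volume
    Description="appdb point-in-time backup " + stamp,
)
print("Snapshot started:", snap["SnapshotId"])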

Traditional apps in the cloud
These applications follow an enterprise architecture model and are designed to satisfy roughly stable demand rather than tolerate huge variations in system load. They do not require an architecture that can scale up or down, so an architecture scoped for a steady state works fine. A typical architecture involves one or more Web server systems interacting through a middle-tier software framework, ultimately interacting with a database. The good news is that Infrastructure as a Service cloud providers, like Amazon Web Services (AWS) as well as GoGrid and Rackspace, accommodate this architecture. The bad news is that the so-called Platform as a Service (PaaS) offerings, like those from Microsoft, Google, and Salesforce.com, do not: These offerings have pre-built frameworks within which your application must operate. So unless you design for one of these frameworks, your application won't run. But one reason cloud computing draws such interest is that applications expected to have stable demand often don't: In these cases, hardware and architecture designed with a particular load in mind often prove inadequate in the real world.

There are two kinds of applications that require the scalability of cloud environments: user-facing and, for lack of a better word, batch. Another way to define them is synchronous (i.e., a user interacts with the system and waits for an answer) and asynchronous (i.e., data is input, processed, and results are eventually produced, with nobody sitting around waiting for them).

Synchronous cloud applications
For synchronous apps, end-user interaction is the primary factor, as with Web usage. With these sorts of applications, large numbers of users may hit the system within a short duration and potentially overwhelm the system's capacity or cause poor performance. These usage issues carry several system design implications. Provide enough Web servers to handle total traffic. The key here is to monitor each Web server's load and, when performance passes a given threshold, start another Web server system to share the traffic. This can be done in both directions (i.e., more servers can be started as load increases, and unneeded servers stopped as load decreases). Load-balancing software spreads the traffic across all live Web servers so that capacity can be dynamically increased as required. It is also easy to remove Web servers from a pool and shut them down.
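
A rough sketch of that threshold-driven loop, using boto3 and CloudWatch's CPUUtilization metric; the thresholds, role tag and AMI ID are assumptions, and a production system would also need health checks and load-balancer registration:

import datetime
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cw = boto3.client("cloudwatch", region_name="us-east-1")

SCALE_UP_AT = 75.0    # percent CPU; thresholds are assumptions
SCALE_DOWN_AT = 25.0

def web_pool():
    """Instance IDs of running Web servers, found by an assumed role tag."""
    result = ec2.describe_instances(Filters=[
        {"Name": "tag:role", "Values": ["web"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    return [i["InstanceId"] for r in result["Reservations"]
            for i in r["Instances"]]

def average_cpu(instance_ids):
    """Mean CPUUtilization across the pool over the last five minutes."""
    now = datetime.datetime.utcnow()
    samples = []
    for iid in instance_ids:
        stats = cw.get_metric_statistics(
            Namespace="AWS/EC2", MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": iid}],
            StartTime=now - datetime.timedelta(minutes=5), EndTime=now,
            Period=300, Statistics=["Average"])
        samples += [p["Average"] for p in stats["Datapoints"]]
    return sum(samples) / len(samples) if samples else 0.0

while True:
    pool = web_pool()
    cpu = average_cpu(pool)
    if cpu > SCALE_UP_AT:
        # Load passed the threshold: start another Web server.
        ec2.run_instances(ImageId="ami-0123456789abcdef0",  # placeholder
                          InstanceType="t3.micro", MinCount=1, MaxCount=1)
    elif cpu < SCALE_DOWN_AT and len(pool) > 1:
        # Load dropped: stop an unneeded server.
        ec2.terminate_instances(InstanceIds=[pool[-1]])
    time.sleep(60)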

While many companies use load-balancing appliances within their data centers, cloud providers offer software-based load-balancing capabilities. If these capabilities aren't sophisticated enough, you can add your own load-balancing software in a cloud environment to distribute the load. Provide enough middleware to manage demand. Just as end-user demand can overwhelm the Web server layer, the middle layer can get overloaded. So this layer must be designed to scale as well and enable traffic to be transparently spread across middle-tier servers. It is normal for a load-balancing strategy to be used here too, with dynamic registration and de-registration of middle-tier servers to enable end-user traffic to flow across the appropriate number of servers. Provide a data tier that scales. The data layer often proves to be the Achilles' heel of scalable systems. In many designs, all of these multisystem tiers funnel down to a single database.
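
As a sketch of what dynamic registration and de-registration might look like inside such load-balancing software, here is a small thread-safe round-robin pool; the server addresses are made up:

import itertools
import threading

class BackendPool:
    """Round-robin pool that middle-tier servers join and leave."""

    def __init__(self):
        self._lock = threading.Lock()
        self._servers = []
        self._cycle = None

    def register(self, addr):
        # A middle-tier server announces itself as it comes online.
        with self._lock:
            if addr not in self._servers:
                self._servers.append(addr)
                self._cycle = itertools.cycle(list(self._servers))

    def deregister(self, addr):
        # A server leaving the pool stops receiving traffic.
        with self._lock:
            if addr in self._servers:
                self._servers.remove(addr)
                self._cycle = (itertools.cycle(list(self._servers))
                               if self._servers else None)

    def next_backend(self):
        # Each request is routed to the next live server in turn.
        with self._lock:
            if self._cycle is None:
                raise RuntimeError("no middle-tier servers registered")
            return next(self._cycle)

pool = BackendPool()
pool.register("10.0.1.10:9000")    # hypothetical middle-tier server
pool.register("10.0.1.11:9000")
print(pool.next_backend())         # traffic spread across live servers
pool.deregister("10.0.1.10:9000")  # server removed as load decreases
print(pool.next_backend())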