Virtualization vulnerabilities leave clouds insecure

One of cloud computing's touted advantages -- virtualized environments that users can consume in easy bites -- is fundamentally insecure, according to new research from MIT and UC-San Diego. The study ("Hey, You, Get Off of My Cloud") shows that it's possible to find out where a virtual-machine instance is running. Once that's accomplished, a 'shotgun' technique can be used to start a new virtual machine on the same hardware, opening the door to virtualization 'side-channel' attacks that could compromise those instances. The study was carried out on Amazon Web Services because, the researchers say, it is the leading example of public cloud computing. Amazon isn't unique, however -- all the major cloud providers are susceptible to these kinds of attacks.

"Concurrent virtualization vulnerabilities aren't Amazon's fault. this is often how the technology is," said Dr. Eran Tromer, postdoctoral associate at the MIT computing and AI Laboratory (CSAIL) and one among the lead researchers on the study. "All of those [cloud infrastructure services] create these sorts of inadvertent common channels," he said. Tromer added that the techniques they used were rudimentary and, in some cases, took advantage of the very nature of how cloud computing resources are sold. The 'cloud cartography' research was administered with basic network discovery techniques, like correlating the general public addresses handed bent every machine instance with geographical DNS information and sending HTTP requests to ascertain if Web servers had left this basic security precaution undone. By doing this, Tromer and his fellow researchers were ready to roughly determine where a given server was physically running in Amazon's data centers. Tromer said they didn't plan to exploit the vulnerabilities exposed in Amazon's EC2 service for ethical reasons, nor did they plan to co-locate with machines not started within the course of research.

They only verified that data transmission from one VM to another within the cloud was possible. Tromer said they had demonstrated proof-of-concept exploits against virtual machines in their own networks and saw the potential for similar attacks in public clouds. Tromer said potential exploits in the cloud ranged from denial-of-service attacks, where an attacker would co-locate with a target and monopolize CPU or memory usage, to sophisticated data interception, like recovery of encryption keys or decoding data as it passed from hard drive to CPU.

Private clouds susceptible to the same attacks
Tromer said that private clouds using multi-tenant virtualization were just as likely to suffer from these vulnerabilities as public clouds. "These profiling tools do a good job of illustrating the fragility of the cloud," said security expert Christofer Hoff, director of cloud and virtualization solutions at Cisco. He called the study a "not-so-subtle reminder" that, despite the hype, cloud computing hadn't seriously addressed fundamental security concerns; it had aggravated them.

He said it also called into question claims that providers could necessarily "do security" better than consumers. "What users are given to work with in terms of network control is extremely limited, especially in Amazon's cloud," said Hoff. He noted that cloud providers don't assume any risk for users' security breaches and are not transparent about the measures they do take. "The security controls that they put in place are very, very, very basic," he said. "The entire premise…comes down to them saying, 'trust us,'" said Hoff. He added that this is obviously acceptable for customers who are practically lining up to use AWS and other cloud services, but as private clouds grow in scope and interconnect with public computing resources, security professionals would need to look at problems like this from the ground up.

Amazon responds to security issues
For its part, Amazon is taking the research seriously. Spokeswoman Kay Kinton said in an email that Amazon was coming to grips with the matter quickly. She pointed out that the researchers noted several mitigating factors that helped protect Amazon against potential virtualization exploits. "While it's unclear what specific attacks could be made in this scenario, AWS takes all potential security issues very seriously, and we are in the process of rolling out safeguards that prevent potential attackers from using the cartography techniques described in the paper," she said.

Rackspace did not respond to requests to discuss its awareness of, or efforts to deal with, either cloud cartography or virtualization insecurity. Joyent claimed that its use of an OpenSolaris-virtualized environment, instead of Xen, meant it did not have the vulnerabilities exposed by the research.

Micro Focus pitches COBOL in the cloud
Mark Haynie, CTO of application modernization at Micro Focus, sees the cloud as having come full circle from the earliest days of distributed computing. Micro Focus makes "application modernization" platforms that translate mainframe application dinosaurs, written in COBOL, onto modern server hardware. Now that cloud computing has come of age, he says, moving those old-school applications that underpin many of the world's financial systems into the cloud makes sense: it is a natural fit.

Mark Haynie: What's interesting about the way that mainframes and the applications built upon them have been done over the last 20-something years is that they were designed to be very stateless, RESTful-state type implementations, but the reasons were different.

Back in 1980, the reason why your mainframe applications didn't retain state, and used the data stream protocol between the client device and the mainframe to convey that state back and forth, was that the biggest mainframe ran at 6 MIPS and was limited to six megabytes of RAM, which supported 800 connected users.

So the cloud architecture exactly mirrors the expectations of mainframe programming?

Haynie: Essentially, these applications have always been built in a multi-tenant, multi-user environment. In fact, they've often been provisioned in such a way because mainframes have come in a range of sizes over the years. People have had to dice their applications up into CICS regions and then handle cross-communication between those regions.

That sort of technology can be mapped onto, say, Amazon's EC2 and S3 environments, so [e.g.] VSAM files would map to blobs in S3 buckets. At the end of the day, it's just bits and bytes and communication protocols. The key point we're trying to get across is that you do not have to change or rewrite these applications from scratch to take advantage of cloud computing platforms.

COBOL is hardly state-of-the-art. Why wouldn't customers focus on modernization?

Haynie: There are over 240 billion lines of COBOL code out there, and that install base is growing by five billion lines a year. Just the extra five billion lines of COBOL code surpasses, many times over, the number of lines of Ruby code being written, or Python for that matter.
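As a hypothetical illustration of the VSAM-to-S3 mapping Haynie mentions above, a record-oriented dataset can be stored and read back as a single S3 object. The bucket and key names here are invented, and this is a sketch, not Micro Focus' actual mapping layer.

```python
# Hypothetical sketch: treating a record-oriented dataset (as a VSAM file
# would be on the mainframe) as a blob in an S3 bucket. Names are made up.
import boto3

s3 = boto3.client("s3")

def put_dataset(bucket, dataset_name, records):
    """Concatenate fixed-length records and store them as one S3 object."""
    s3.put_object(Bucket=bucket, Key=f"datasets/{dataset_name}",
                  Body=b"".join(records))

def get_dataset(bucket, dataset_name, record_length):
    """Read the object back and split it into fixed-length records."""
    body = s3.get_object(Bucket=bucket, Key=f"datasets/{dataset_name}")["Body"].read()
    return [body[i:i + record_length] for i in range(0, len(body), record_length)]
```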

And you're saying why bother when you've already got something stateless, distributed and already designed and working?

Haynie: We've had for years the ability to translate [mainframe applications] into a SOAP and WSDL XML-based Web service, and we're clearly using that very same technology in the cloud. This is possible, of course, because these applications that were built 10 or 20 years ago were built with all of these ideas in mind. Like you said, it's an "a-ha" moment, that all of those things will fit right on top of cloud computing infrastructures. For instance, MQSeries was the first message queuing mechanism provided by IBM to move a message between CICS regions. Essentially, when running on Amazon, we map them to the Simple Queue Service (SQS). Now, SQS isn't reliable, so we had to add in some transaction capabilities to make it reliable.
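A rough sketch of the kind of compensation Haynie alludes to, assuming standard SQS semantics (at-least-once delivery and a visibility timeout): the consumer deletes a message only after it has been processed and filters out duplicate deliveries by ID. The queue URL and message format are invented; this is not Micro Focus' code.

```python
# Rough sketch: compensating for SQS's at-least-once delivery when it
# stands in for MQSeries. Queue URL and payload shape are made up.
import json
import uuid
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

def send(payload):
    """Send a message tagged with a unique ID so duplicates can be detected."""
    body = json.dumps({"id": str(uuid.uuid4()), "payload": payload})
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)

def receive_one(seen_ids, handle):
    """Process at most one message; delete it only after handling succeeds."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                               WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        if body["id"] not in seen_ids:      # skip duplicate deliveries
            handle(body["payload"])         # application-specific work
            seen_ids.add(body["id"])
        # Deleting only after success means a crash causes redelivery,
        # which approximates the transactional behavior described above.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```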

How do you convince someone running a COBOL/mainframe environment, who looks at reliability with a totally different eye than someone who's running a Web server, that Amazon is a good idea?

Haynie: We're cloud-agnostic, so technically, you could use our enterprise cloud services layered on top of Azure and our enterprise cloud services layered on top of EC2 to effectively be disaster recovery for each other. You could host two copies of your application, one in production, one for disaster recovery, mirrored in two different cloud infrastructure services. That's why the whole "cloud agnostic" part of the story is important. It gives customers a fast track to cloud computing, and [they] can still get all the benefits. You turn your servers off for the night or weekends, and what's amazing is, people say, "Gee, that's like time-sharing in the '60s, when I used to pay for CPU seconds used, cards punched, pages printed?" and I say, "Yeah." It's exactly the same thing. That model has come back in the form of pay-per-drink cloud computing paradigms.

What does this portend for the future of old-school business applications in the cloud?

Haynie: It just gives customers more flexibility. I do not expect cloud computing to supplant on-premise data centers. But it makes sense to have, if you will, a hybrid sort of model, where you're running some applications and transactions locally and perhaps you're using the cloud for excess capacity or disaster recovery. After familiarity [sets in] with that cloud environment, you might flip those two, because it works the same way as on-premise. Then you might decide to shut down that on-premise data center because you're using cloud services from two different Infrastructure as a Service (IaaS) providers and you're using one to fail over to the other. So we don't know which is going to work, but what we're saying is: These applications are working, they're producing revenue, they're valuable to your IT organization. Just put them out there like you're running them locally and see what works.

Mark Haynie is the CTO of application modernization at Micro Focus, where he concentrates on core IT application and data reuse in today's computing world. He has been a focal point in defining Micro Focus products and services that help IT organizations re-host IBM mainframe applications on Windows or Linux environments and meet the performance and availability requirements those systems must meet.

He has architected products to satisfy customer demand to extend applications to the Web and the Web service environments common in a Service Oriented Architecture.
He joined Micro Focus in 1999, has over 25 years of experience in IT, and has conducted seminars and published papers and books on design automation, database design, and high-performance transaction systems.

Federal CIO redirects infrastructure dollars to cloud
In what may be the start of a bonanza for the evolving cloud computing marketplace, US Federal CIO Vivek Kundra has officially launched Apps.gov, a site for federal agencies to sign up for and use cloud computing resources. Kundra also announced he is streamlining the federal procurement process for IT and committing funds in the Obama administration's 2010 budget toward cloud computing. Kundra placed a heavy emphasis on modernizing infrastructure spending, which he said soaks up $19 billion per year of the approximately $70 billion federal IT budget. Kundra, speaking at NASA's Ames Research Center, home to NASA's Nebula cloud computing program, said that the current IT infrastructure was wasteful.

"When a little business can get online during a matter of minutes, why should the govt be spending billions upon billions" on traditional data centers, he said. Kundra has said before that he wants to pool government computing resources and buy hosted services whenever possible. He called current IT practices "duplicative," saying agencies should not be building data centers for things that the private sector gets for free of charge. He cited a proposed blog for the Transportation Security Administration, saying costs for starting the blog amounted to $600,000, including buying and maintaining infrastructure. He said the Apps.gov site would be how for federal IT procurement to eventually mirror the convenience and speed of using IT services within the private sector. Kundra said the portal currently contains a variety of free tools and services users can experiment with, also like access to resources from a heavyweight list of vendors, including Salesforce.com, Google and Microsoft.In a move which will gladden the hearts