Google enterprise cloud challenge unlikely to be solved soon

The lack of enterprise customers remains a serious perception challenge for Google's cloud platform -- one unlikely to be solved any time soon. Google has made strides to enhance Google Cloud Platform over the past year, changing its leadership team and adding features -- particularly around new machine types, storage and containers. Nevertheless, a scarcity of enterprise customers remains the biggest knock against the platform, and analysts said its lack of maturity likely will hold it back from being a robust public cloud competitor -- at least for now. Google is everywhere in the enterprise, with tools such as search, mobile devices and Gmail, said Carl Brooks, an analyst with 451 Research, based in New York.

Google wants the "brass ring" of cloud computing, too -- but the overwhelming majority of the company's revenue remains tied to advertising, and none of those services help Google Cloud Platform expand into enterprise IT, he added.
"[Google] wants what Amazon is getting, what Microsoft is clearly getting and what Oracle is now clearly a significant contender to [get]," Brooks said. "They want a subsequent generation of cloud computing, but I do not think there is a lot in their corporate DNA that's fitted to this." Google isn't entirely deprived of high-profile customers, with enterprises like Macy's, Sony and Coca-Cola using the platform, 

also as popular startups, like Vimeo and Snapchat. Cloud analysts and resellers, however, said their conversations with enterprise clients tend to specialize in Amazon Web Services (AWS) and, increasingly, Microsoft Azure, with rarely a mention of Google Cloud Platform. No one doubts Google can do advanced computing and find efficiencies in its data centers which will be passed on to its users. However, the capabilities of hyper-scale cloud platforms grow linearly, and Google's platform is fundamentally immature, compared with Amazon and Microsoft, which both had head starts, Brooks said.

"It's a drag of aspiration," Brooks said. "They're, frankly, quite late to the present game, and they are getting to need to claw their high if that is what they need to try to to ."The cloud may be a "natural place for us," proclaimed Sundar Pichai, Google CEO, in its earnings call earlier this month, citing the company's history of operating at scale with products like YouTube and search. Cloud has reached the tipping point with businesses, and it'll be one among Google's major investment areas in 2016, he added. To catch AWS and Azure, Google must address features in response to customer feedback, and Pichai said he expects significant traction this year because the company's cloud is prepared to be used at scale. A lot of it's about ensuring we are very seriously committed to the present space, which we are," he said. While Pichai said Google has surpassed 4 million applications running on its cloud, most of these sleep in the developer world and not the enterprise, noted 451's Brooks.

"That's not Lockheed Martin abandoning its data center for Google -- that's not happening," he said. Catering to the enterprise Google made smart hires to lure in additional enterprise customers, particularly last November, putting Diane Greene, co-founder and former CEO of VMware, responsible for its cloud, said Arun Chandrasekaran, research vice chairman at Gartner.call center technology Greene understands the way to cater to enterprises, and she or he also are going to be tasked with addressing Google's most pressing needs around building a marketplace for channel partners and independent software vendors to tie in with its platform, Chandrasekaran said. From a product standpoint, you'll argue there are some functional gaps where they will improve, but that's a little a part of it," he said. "It's more about the ecosystem. Big data upgrades: SSD boost and a Lambda competitor

Many Google customers highlight the platform's big data capabilities as its biggest draw, with services such as Bigtable, BigQuery, Cloud Dataflow and Cloud Dataproc. Earlier this month, Google doubled the size of its local solid-state drive (SSD) storage capacity to 3 TB and upped persistent disk capacity from 10 TB to 64 TB per VM -- a hike that could be beneficial for customers that require more I/O closer to the nodes for workloads such as Spark and NoSQL applications. Lytics, a Portland, Ore., startup that provides marketing automation services, switched from AWS to Google Cloud Platform after issues with poor disk I/O, especially when using Cassandra, said Aaron Raddon, co-founder and CTO. Google Cloud Platform also provided a considerable improvement in network speed, and the switch resulted in three- to fourfold savings compared with running in AWS.
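To give a flavor of those big data services, the sketch below runs an ad hoc query through the BigQuery Python client library. It is a minimal illustration, not part of any customer setup described here: the project ID is a placeholder, and the public sample dataset is chosen purely as an example.

```python
# Minimal sketch, assuming the google-cloud-bigquery library is installed
# (pip install google-cloud-bigquery) and default credentials are configured.
# "my-analytics-project" is a placeholder project ID.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

# Run an ad hoc SQL query against a public sample dataset and print the top rows.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row.name, row.total)
```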

Lytics has tested the new, larger local SSDs and has seen tremendous performance improvements by coordinating the added capacity with the new custom machine types. "It just provided a lot more flexibility on provisioning to get the right mixture," Raddon said.
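For readers less familiar with Compute Engine provisioning, here is a rough sketch of what pairing a custom machine type with a local SSD looks like through the Compute Engine v1 API. The project, zone, instance name and image family are placeholder assumptions, not Lytics' actual configuration.

```python
# Rough sketch using the google-api-python-client library
# (pip install google-api-python-client) with application default credentials.
# Project, zone, instance name and image family below are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project, zone = "my-project", "us-central1-b"

config = {
    "name": "cassandra-node-1",
    # Custom machine types take the form custom-<vCPUs>-<memoryMB>;
    # this one asks for 8 vCPUs and 30 GB of RAM.
    "machineType": f"zones/{zone}/machineTypes/custom-8-30720",
    "disks": [
        {   # Boot disk created from a public image family.
            "boot": True,
            "autoDelete": True,
            "initializeParams": {
                "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
            },
        },
        {   # Local SSD scratch disk for I/O-heavy workloads such as Cassandra.
            "type": "SCRATCH",
            "autoDelete": True,
            "interface": "NVME",
            "initializeParams": {"diskType": f"zones/{zone}/diskTypes/local-ssd"},
        },
    ],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

operation = compute.instances().insert(project=project, zone=zone, body=config).execute()
print("Started instance insert:", operation["name"])
```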
Google appears to be selective about the areas in which it is trying to catch up to AWS and Azure, observed Mike Matchett, senior analyst with Taneja Group Inc., based in Hopkinton, Mass., and a TechTarget contributor. "Google is picking and choosing where they can get the most leverage, and they feel that with big data, they can take it farther faster," Matchett said.

This week, Google also quietly launched the alpha version of an event-based compute service, Google Cloud Functions. The new feature is seen as the first true competitor to Amazon's Lambda event-driven compute service.
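To make "event-based" concrete: like Lambda, Cloud Functions runs small handlers in response to events, with no servers for the developer to manage. The sketch below shows the general shape of such a handler in Python; note that the alpha release targeted JavaScript, so this is an illustration of the programming model rather than the launch-day API, and the storage-upload trigger and field names follow the later background-function convention.

```python
# Illustrative sketch of the event-driven model shared by Cloud Functions and
# Lambda: a small handler invoked once per event, deployed without managing any
# servers. The trigger (an object landing in a storage bucket) is hypothetical.
def on_file_uploaded(event, context):
    """Handle a single storage event; `event` carries the object metadata."""
    name = event.get("name", "<unknown>")
    size = event.get("size", "0")
    print(f"Processing {name} ({size} bytes), event received at {context.timestamp}")
```

The platform invokes the handler once per event and scales it automatically, which is what makes the service a direct answer to Lambda rather than to traditional VM-based compute.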

Still, enterprises aren't necessarily as interested in top-end performance, and may not be as ensconced in big data as they are in reliability, lowering risks and managing costs, Matchett said. Amazon has shown leadership on integration and data protection, and it has a mature directory service that customers rely on. "It's not that Google doesn't have similar services at some level, I just don't think they're mature yet and proven," Matchett said. "Yet, if I'm setting up a big data app, I'm going to [look] at Google Compute Engine."

Why 2016 is shaping up as the year of multi-cloud computing

What lies ahead for cloud computing and cloud application development? In this exclusive interview, Jim Ganthier, Dell's vice president and head of engineered solutions, said 2016 is the year we move beyond hybrid clouds to the next big thing -- multi-cloud.

What is your view of the current state of the hybrid cloud and where the technology is heading?

Jim Ganthier: We said it long before anyone else did or agreed -- that the world would ultimately go to the hybrid cloud. I think I can say confidently that everybody now agrees this is probably the right end state for our industry. If you believe the world is going to the hybrid cloud, it's not a war between public and private. It's literally, how do you choose the right cloud at the right cost with the right characteristics?

What are those characteristics?

Ganthier: It's combinations of public, private or managed cloud, with the end state being a simple, seamless construct of all of the above. If you believe that definition of hybrid cloud, then, by the associative law, you have to believe the world will go to multi-cloud.

Do you see a particular architecture or approach to multi-cloud computing?

Ganthier: When I say multi-cloud, it should not be just a public or a private cloud. I'd have multiple private clouds, multiple public clouds and multiple managed clouds, including managed services or service providers. Our ability is to have someone truly 'stand up' [deploy] a cloud-like infrastructure -- instantiate server, storage and networking, lay down the workload and templatize that workload.

What is the goal of this approach?

Ganthier: We want IT to be perceived as simple and easy, and as seamless as you can get from any public cloud provider. More importantly, for the CIO, we want to give you back some level of cost, some level of control and some level of compliance.

By control, do you mean security or operations?

Ganthier: The net is you can now arbitrate among the multiple public clouds, so that as prices change, as characteristics change or as capabilities get better, you can move seamlessly back and forth between all of the different types of public clouds.

You see multi-cloud computing as the architecture of the future. What does that mean for developers?

Ganthier: We believe, in 2016, that the next jump is going to be multi-cloud. Start looking at what you can do with multi-cloud. Like everything else in our industry, if you stand still, you get run over. So, make sure you have something that really is future-ready -- something that works not only in today's traditional siloed, virtualized world, but that will also work for you in a future that covers a world of cloud, big data and, to some extent, software-defined.

The challenge is that technology is always changing.

Ganthier: The industry is changing as we speak; new things are springing up. We can talk about server, storage and networking and workloads, but what happens in a world of containers? What happens in a world of CoreOS, or Kubernetes or Mesosphere?

What's a developer to do?

Ganthier: Make sure the software stack has hooks, so that no matter where the industry goes, you not only have a good idea of that roadmap and where it's headed, but you can feel comfortable that you are not being either locked in or forced down a road of eventual rip and replace.

What is the key to success in moving toward multi-cloud computing?

Ganthier: Don't try to be a pioneer. There are a lot of us -- Dell, Microsoft, Red Hat [and] VMware -- that have a lot of experience and passion. Learn from our experience and, frankly, learn from our connective tissue. We can focus on reducing costs, space and power, and make CIOs and developers the heroes again.

CoreOS brings a different approach to container security

The container market continues to heat up as the security-focused rkt reached its first production-ready release last week.

A little over a year after the open source project was first made available, version 1.0 of the rkt container application runtime focuses on security and a stripped-down role in application deployments, marking yet another option for users to deploy Linux containers.
CoreOS is positioning rkt as a far more modular component in the overall application framework than Docker, which has expanded its push beyond just formatting and packaging containers to constructing an entire platform for building and running containerized applications. Rkt still works with the Docker image format, and ecosystem partners have put out add-on features for the 1.0 release around monitoring, networking and a container registry for its runtime images, as well as tools to convert Docker images to rkt images. Through a partnership with Intel, users can also launch rkt containers inside a virtual machine for an extra layer of security, at the cost of some overhead.
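As an informal illustration of that Docker-image compatibility, the snippet below shells out to the rkt command-line tool from Python to run a Docker-format image. It assumes rkt 1.0 is installed on a Linux host and run with root privileges; the nginx image is an arbitrary example, not a CoreOS recommendation.

```python
# Minimal sketch: drive the rkt CLI from Python to run a Docker-format image.
# Docker registries do not carry rkt's signature metadata, hence the
# --insecure-options=image flag. Requires rkt installed on Linux, run as root.
import subprocess

subprocess.run(
    ["rkt", "run", "--insecure-options=image", "docker://nginx"],
    check=True,  # raise CalledProcessError if rkt exits with a non-zero status
)
```

The VM-backed launch mentioned above works the same way, by pointing rkt at its KVM-based stage1 image instead of the default one.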

CoreOS plans to integrate rkt into Tectonic, its commercial Kubernetes platform. Kubernetes and other orchestration tools also compete with services such as Docker Swarm. Deis, a division of Engine Yard and an open source platform as a service provider, has Docker containers in production for large enterprises, but it runs into problems after prolonged usage at scale. The Docker team has been supportive in fixing the issues, but as Docker keeps adding surface area to the Docker client, it gets further away from the simple, rock-solid container engine Deis wants, said Gabriel Monroy, CTO at Engine Yard, based in San Francisco. "We just want something that [does] one thing and does it well," he said. Deis has done scale testing and prototyping with rkt, and plans to eventually swap out Docker for rkt as the runtime, while maintaining the Docker image format, Monroy added.

Project Calico, an open source networking stack sponsored by Metaswitch, supports both Docker and rkt, although it sees the latter as better suited to production at scale, said Christopher Liljenstolpe, director of solutions architecture at Metaswitch Networks, based in London. Docker, he explained, has more mechanisms wrapped around it, while rkt requires fewer running components.