IBM z Systems – Edge 2015 Las Vegas 

Compare the Cloud attended the IBM Edge conference, 11-15th May. IBM held a weeklong conference within the grand environment of the Palazzo Hotel, Las Vegas, demonstrating and educating attendees on a number of new ways of thinking about technical roadmaps. Highlighted, as always, were some major breakthroughs and technical innovations for its current and planned products and services. An audience of 6,000+ CTOs, CIOs, and executive-level representatives of companies from all over the world attended the week.

IBM z Systems highlights at Edge2015

The Executive Edge (a two-day event running at the same time as the main conference) was the highlight of the week. Not that I'm biased in the slightest, but young Andrew McLean and I began the Cloud keynote on day 2 with fresh thinking from an independent perspective! Our session was entitled: A New Way of Thinking.

In the session, we brought together a mixture of IBM products and services that would best serve the retail sector: the IBM z Systems mainframe for the core infrastructure, Bluemix for a consistent omnichannel presence with DevOps as a consideration, Spectrum Accelerate for the growth of data using software-defined storage, and Watson for the analytics and brains behind the processing.

Ron Peri – CEO, Radixx

A great presentation from Ron, who absolutely surprised me with his U-turn from x86 servers to a complete adoption of mainframe infrastructure. Why does this surprise me? Well, if you get the chance to talk to Ron and learn his background, you'd realize that for many years he was responsible for keeping x86 infrastructure in place, running and cost-effective, and was completely against any other technology. He didn't want to know, or even be told, that a mainframe would be a viable option, thinking z Systems were closed, old architecture. Once he eventually had a demonstration from Vicom Infinity, a New York-based expert mainframe consultancy, he purchased one within 30 days, migrating away from 172 x86 cores to a staggering 10 cores on a z Systems mainframe. Simply incredible, and what a pleasure it was to interview him during the week!

So back to the highlights of the event. Some of the bullet-point announcements were:

A mainframe can analyze over 30,000 transactions a second, encrypted and secured, to prevent credit card fraud!

Adding the ability to layer data from IoT devices such as watches over, and cross-reference it with, static data (i.e. clinical data) for a more accurate diagnosis – and at present z Systems is the only player in the marketplace with the processing power and scalability to achieve this securely!

Using the new z13 mainframe system, it's possible to lower the total cost of ownership by 32% compared with an x86 private cloud!

Generation Z program and continued support – assisting the millennial generation (estimated to be half the global workforce by 2020) with the Master the Mainframe program.

x86 systems typically max out at around 70-80% processing utilization, whereas with a mainframe you can effectively use virtually 100% of the resources.

So, in summary for IBM z Systems and the mainframe: it's not old tech and should never be seen as such.

The technology behind the IBM z13 is incredible and scalable. It's also affordable for firms that are serious about security and resiliency, with true cloud benefits designed for big data analytics and mobile services.

The week continued with additional announcements across other IBM systems and product lines, tied into a holistic hybrid cloud standpoint – from Rocket Data Access services on Bluemix for z Systems, which allow the development of mobile applications seamlessly integrated with a mainframe, to the IBM Power System E850 – a four-socket system with up to 70% guaranteed utilization.

IBM Spectrum Control Storage Insights permits, amongst other benefits, on-premise software-defined storage that allows external developers to help with coding and software development on your infrastructure, without the need for additional infrastructure. This, along with IBM's XIV Gen 3 real-time compression – reclaiming existing free storage – and scaling for the hybrid cloud, proved once again that IBM is thinking of the future and has one of the most comprehensive storage management portfolios in the market.

Cattermull’s Take:

Following on from this brief summary, I'd like to add my personal thoughts on the week. IBM is seriously underrated as a cloud player compared to some others I could mention. Their tech breakthroughs come thick and fast monthly and mostly go unnoticed. IBM must shout louder and take more pride in its achievements, as a number of these breakthroughs are groundbreaking. Just one announcement from the week is quite literally life-changing: Watson now has the ability to analyze unstructured medical data at an individual level and plan treatment, based on that data, that is tailored to you personally. This is quite literally life-saving, and it has never been done before. As much as I love my doctor and have known him for many years, this breakthrough will change my health for the better in ways he never could, and even predict potential illnesses that I might develop.

Keep it up, IBM – we can't wait until next year for Edge 2016 and the product news and technology breakthroughs you'll announce.

AWS Chalk and OpenStack Cheese

I've been at the OpenStack Summit in Vancouver this week, and what has struck me is the difference between the AWS ecosystem and the OpenStack community – they're sometimes like chalk and cheese.

So what are the main differences?

Amazon has all but captured the public cloud space. It doesn't have this space all to itself (Microsoft and Google are growing rapidly to challenge it), but it doesn't really have any interest in the private cloud space (unless you've got a budget the size of the CIA's). Conversely, many of the main players in OpenStack only really play in the private cloud space – HP, for instance, let slip that it wasn't really a player in public cloud, before rapidly withdrawing the comment and claiming that it was, when in fact it isn't. So it's getting increasingly easy to picture a future in which a few players dominate the public cloud with their own proprietary stacks, while OpenStack becomes the de facto standard for the private cloud.

Then there are the two different ecosystems:

As we covered in our preview for this month's cloud rankings, AWS's ultimate appeal isn't necessarily its cost (even with continually falling prices, it can still be quite expensive) or its ease of procurement as an elastic hosting provider. Its main appeal is now its massive ecosystem of services and the ability to tap into them fairly quickly. As fast as AWS innovates, its ecosystem adds value faster than AWS could ever do on its own. It's all relatively quick and comparatively easy to implement, and there are many services to choose from.

The OpenStack ecosystem is very different: as we are seeing at the Vancouver OpenStack Summit, OpenStack boasts an enthusiastic following among the developer community, and a very long line of powerful companies are supporting the project and contributing to the code and marketing of OpenStack.

I have also been impressed by the progress that the OpenStack community has recently made to become more organized and address some of its most blatant weaknesses. This week it announced interoperability testing requirements for products to be branded as "OpenStack Powered", including public clouds, hosted private clouds, distributions, and appliances (initially supported by 14 community members). A new federated identity feature will become available in the OpenStack Kilo release later this year (initially supported by over 30 companies), and it has announced a community app catalog. Add to this the work it has done to certify major vendors via its DefCore program, and it's making all the right moves – but is it too little, too late? As I discussed above, it has already lost the public cloud to Amazon et al., and realistically only has the private cloud in its scope – although this is one fairly large niche.

The development ethos within each community is very different as well:

In the Amazon marketplace, you have partners developing standalone services that are designed to plug into each other. Typically they're closed source, well integrated via AWS APIs, and normally as easy to spin up as any Amazon service. In the OpenStack community, much of the focus is on collaborating to develop new functions rather than standalone services. The venture capital firms that were so keen to pour money into OpenStack startups a few years back are only now beginning to realize that much of the development has been contributed to the common good in the form of open source code, and that they are unlikely ever to make a return on it.

OpenStack's Achilles heel is its usability. Speakers wouldn't be joking about the need for a PhD to implement OpenStack if there wasn't at least a modicum of truth to the matter. For large clients with the tech skills to implement and maintain it, this isn't a problem. And the SMB community, which has minimal tech skills, looks to MSPs to supply cloud to them as a service. It's in the middle ground between these where OpenStack risks ceding ground to AWS et al. – although innovative firms like Platform9 are seeking to address this.

And finally the differences between the business models:

One major OpenStack client that I spoke to framed this very well. He said that his company has many apps to support, each of which is used by a limited number of users. Amazon is set up as a volume operation, where you need a service that appeals to many clients, whereas OpenStack caters more to the bespoke needs of each client. He went to Amazon for any off-the-shelf apps, but to OpenStack for everything that needed any tailoring, as Amazon would be either uneconomic or unsuitable for the number of users he had on each app.

Given these differences, you'd think that AWS and OpenStack were indeed like chalk and cheese. However, the underlying technology for each is relatively similar – for every component in AWS there's an equivalent in OpenStack. This is all illustrated very well in a blog by Red Hat listing the equivalent components in each stack, here: http://redhatstackblog.redhat.com/2015/05/13/public-vs-private-amazon-compared-to-openstack/
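To make that equivalence concrete, here is a minimal sketch of launching the same small virtual machine against both stacks – EC2 via boto3 and Nova via the OpenStack SDK. The image, flavor, and network IDs, along with the "mycloud" profile name, are hypothetical placeholders rather than real resources:

```python
# Launching a small VM on AWS EC2 (boto3) and on OpenStack Nova
# (openstacksdk). All IDs and the "mycloud" profile are placeholders.
import boto3
import openstack

# --- AWS EC2: proprietary API, public cloud ---
ec2 = boto3.resource("ec2", region_name="us-east-1")
aws_instances = ec2.create_instances(
    ImageId="ami-00000000",        # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

# --- OpenStack Nova: the open equivalent, typically private cloud ---
conn = openstack.connect(cloud="mycloud")   # profile from clouds.yaml
os_server = conn.compute.create_server(
    name="demo-server",
    image_id="00000000-0000-0000-0000-000000000001",   # hypothetical
    flavor_id="00000000-0000-0000-0000-000000000002",  # hypothetical
    networks=[{"uuid": "00000000-0000-0000-0000-000000000003"}],
)
```

The shape of the two calls is strikingly similar; what really differs is who runs the cloud underneath.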

Suddenly the differences don't look so great. Maybe they're both kinds of cheese after all. AWS is the pre-packaged cheese that you get in the supermarket – well packaged, uniform, and comparatively cheap. OpenStack is the cheese that you get at the deli counter – cut to size, often richer in flavor, and arguably worth paying a touch more for when you really need it and can afford it.

Hope For The Best, Prepare For The Worst

Tips For IT Disaster Recovery

IT disasters come with big costs. As noted by Fortune, experts can't even agree on how much companies will lose if data is compromised; some experts say the average cost is more than $200 per record, while others claim it's just below 60 cents.

Andrew Lerner of research firm Gartner turns this into more actionable insight: the average enterprise loses between $140,000 and $540,000 per hour during a network failure. Fortunately, it's possible to limit the chance of this worst-case scenario by leveraging the cloud; here are five tips to help companies hope for the best but prepare for the worst.
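To see what those per-hour figures mean in practice, here's a quick back-of-the-envelope sketch using Gartner's range; the four-hour outage is just an illustrative scenario:

```python
# Back-of-the-envelope downtime cost using Gartner's per-hour range.
LOW_PER_HOUR, HIGH_PER_HOUR = 140_000, 540_000

def downtime_cost(hours: float) -> tuple:
    """Return the (low, high) estimated cost of an outage in dollars."""
    return hours * LOW_PER_HOUR, hours * HIGH_PER_HOUR

low, high = downtime_cost(4)   # an illustrative four-hour failure
print(f"4h outage: ${low:,.0f} to ${high:,.0f}")   # $560,000 to $2,160,000
```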

Plan to Fail

The first step on the road to better disaster recovery? Create a plan. IT leaders need to sit down with C-suite executives and take a hard look at what it means if records are compromised, networks fail, or servers go completely offline. Once this impact assessment is complete, the next step is devising a disaster recovery (DR) plan. According to InfoSecToday, companies need to identify which systems are most crucial to operations – for instance email, applications, or database access – then design a DR plan that focuses on restoring these systems as quickly as possible.
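One lightweight way to capture the output of that assessment is a simple inventory that ranks each system and records its recovery time objective. This is only an illustrative sketch; the system names, priorities, and RTO figures are hypothetical:

```python
# A hypothetical impact-assessment inventory: each critical system gets
# a restore priority and a recovery time objective (RTO). The names and
# numbers are illustrative only.
DR_PLAN = [
    {"system": "email",    "priority": 1, "rto_minutes": 60},
    {"system": "database", "priority": 1, "rto_minutes": 30},
    {"system": "erp-app",  "priority": 2, "rto_minutes": 120},
    {"system": "intranet", "priority": 3, "rto_minutes": 480},
]

# Restore the most critical, tightest-deadline systems first.
for entry in sorted(DR_PLAN, key=lambda s: (s["priority"], s["rto_minutes"])):
    print(f"{entry['system']}: restore within {entry['rto_minutes']} min")
```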

Fail the Test

Next up? Fail your test. Find a reliable cloud DR provider, then simulate a disaster scenario to see what happens. With data safely in the cloud, there's no real risk to day-to-day operations, but companies get the chance to see their recovery plan in action. This serves two purposes: first, it helps employees and end users not to panic in the event of a real emergency; second, it lets IT experts find any weak spots in a DR plan. Better to correct these during a rehearsal than when the storm really hits.
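A rehearsal like this is, at its core, a timing exercise. Here is a minimal sketch of a drill harness that triggers a simulated failover and times the recovery against a hypothetical two-hour target; the two helper functions are stand-ins for whatever tooling your DR provider actually exposes:

```python
# A minimal drill harness: trigger a simulated failover and time how long
# recovery takes against a hypothetical two-hour target.
import time

RTO_SECONDS = 2 * 60 * 60   # hypothetical two-hour recovery target

def simulate_failover() -> None:
    """Placeholder: switch traffic over to the cloud DR environment."""

def service_is_up() -> bool:
    """Placeholder: health-check the restored service."""
    return True

start = time.monotonic()
simulate_failover()
while not service_is_up():
    time.sleep(30)   # poll until the service responds again
elapsed = time.monotonic() - start

verdict = "within" if elapsed <= RTO_SECONDS else "MISSED"
print(f"Recovered in {elapsed / 60:.1f} min ({verdict} the RTO)")
```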

Test Your Data

According to TechTarget, it's also important to make sure that your data is stored safely and securely in the cloud if a disaster occurs. This means making a full copy of all critical information on cloud servers and testing it for speed of access, security, and regulatory compliance. Your DR provider should be able to offer on-demand access to data, day or night, alongside enforcing strict access requirements. If possible, choose two-factor authentication, which requires users to have not only a password but also a physical token such as a fob or USB key. Finally, make sure that all data stored meets regulatory requirements.

This is especially critical if your company deals with medical records, legal documents, or financial information, since compliance agencies require an auditable trail that shows the continuity of data even in times of disaster. If companies can't provide this information, they could face significant penalties; PCI DSS non-compliance costs up to $25,000 for the first violation by Level 1 and Level 2 merchants.
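Returning to the testing itself: one simple integrity check is to compare cryptographic digests of the source file and the copy restored from the cloud. This is just a sketch of the idea, and the file paths are hypothetical:

```python
# One simple integrity test: compare SHA-256 digests of the source file
# and the copy restored from the cloud. The file paths are hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

local = sha256_of("/data/critical/customers.db")
restored = sha256_of("/restore/customers.db")   # pulled back from the cloud
assert local == restored, "cloud copy does not match the source!"
```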

Data Delivery

How quickly can you get back up and running? This is one key advantage of the cloud; the right provider can either act as a surrogate computer system during your period of disaster recovery or "hot swap" your data onto the local stacks of your choice. The ideal DR solution should allow your company to port any data protected in the cloud or other secure storage facilities to the server of your choice in two hours or less. Ideally, your recovery time objective (RTO) should be quick enough to avoid any costs incurred as a result of system downtime.

Delivering Peace of Mind

As noted above, it's critical to find the right provider to deliver peace of mind in times of disaster. RCP Mag notes that 52 percent of companies surveyed said that they would consider adding some form of DRaaS; as a result, there are now a number of providers for companies to choose from. To ensure your data is well handled and your cloud backups are always secure, it's critical to create a robust, specific service level agreement (SLA) that spells out provider and client responsibilities for disaster recovery. By establishing this solid base from the very beginning, it's possible to build a strong, cloud-based DR solution.

Want better DR? Consider the cloud. Plan for failure, test the impact, secure your data, determine your speed, and make sure your SLA is up to snuff.


Legal Issues To Consider When Moving To The Cloud


Moving to the cloud might seem like a no-brainer for many business owners. You won't need to hand over cash for a server, you save on an on-call IT guy, and you'll receive a more specialist service. Furthermore, with the quality of the internet in 2015, there's little noticeable difference between the upload and download speeds of an on-site server and the cloud.

However, several legal issues need to be considered before moving to the cloud. Some of them include: would your data be safe, and what would happen if your data were stolen?

This article will go through the top four legal issues with cloud computing and give you some insight before you make your final decision.

Privacy and security

Every business has confidential information – ranging from client lists to trade secrets. Hence, the first and foremost concern of a business owner is often: just how safe is my data in the cloud?

The answer to this question really depends on the quality of the hosting company. An experienced cloud host will have extensive security mechanisms in place to make sure your data is safe. Do some thorough research and pick a good host!

Leaked data and liability

But you might still ask: what would happen if my confidential information were leaked anyway?

In this situation, you need to work out whether your cloud provider has acted in a negligent manner. Very generally, this comes down to whether they have taken all reasonable precautions to protect your data and prevent theft.

Another consideration may be the limitation of liability clause in your provider's contract. This will generally state the maximum monetary amount your provider will pay in the event of negligence. It may be in your interest to negotiate this clause before signing a contract if the potential damages your business could incur would vastly outweigh the liability cap.

Service and reliability

If all of your emails, files, and data run through the cloud, the smooth operation of your business will be totally reliant on the cloud service. For instance, were your cloud provider to have a technical problem, your whole business could be down for an indefinite period of time.

There are two ways to mitigate this sort of issue. First, make sure you have a good cloud provider. The Office 365 cloud, for instance, has multiple servers and multiple backups of data around the world. This means that even if one of their servers went down, you'd still have access to your information.

Secondly, make sure your contract stipulates that you'll be credited for every day of interruption you experience. There should also be a clause allowing you to terminate the contract if the interruption extends beyond an agreed timeline. Nothing is worse than having to pay for a service that you're no longer using!

Ownership

The question of data ownership is another quite common legal issue. It's not enough to simply assume that you own all rights to data stored in the cloud. Make sure your initial contract makes it expressly clear that all data your company places in the cloud will be 100% yours and retrievable whenever necessary.

For more information on the cloud, look out for the cloud education hashtag on Twitter. #CloudEducation