





Understanding DevOps in 2016

I recently sat down with Vadym Fedorov, a Solutions Architect from SoftServe, to discuss DevOps. Fedorov specialises in enterprise technologies, cloud solutions and DevOps. As a Solutions Architect, Fedorov manages the complete software development life cycle, including application design, performance analysis, code optimisation, re-design and platform migration, requirements analysis, the use and development of design patterns, and infrastructure planning.
Fedorov is certified by major technology providers including Cloudera (Hadoop), Microsoft and Cisco, so he is well qualified to help me (and you!) understand DevOps.
Q. DevOps seems to be a buzzword – why should we care?
In the not too distant past, enterprises had a small number of servers, and it was standard for one administrator to manage up to 30-40 of them. Today, organisations have many more servers, and their infrastructure has also changed.




Whereas in the past the deployment of infrastructure meant the manual deployment of hardware and software, most Operations and Infrastructure teams now work with a cloud infrastructure provider that gives them programmatic access for infrastructure management. As a result, system administrators had to start programming, or programmers had to start thinking about infrastructure, in order to automate the provisioning and management of infrastructure – and “DevOps” was born.
DevOps lends itself to customer-facing software – an app, a website, firmware updates – but the customer for that software could equally be internal: helping a business try new technologies or drive continuous improvement.
It’s also enabled by advances in technology. The increased automation of infrastructure, combined with collaboration tools, makes it possible for everybody in a team to work together in an agile way, accelerating software delivery just as developer teams have done with their coding for many years.
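To make the "programmatic access for infrastructure management" point concrete, here is a minimal, purely illustrative sketch of provisioning a server through a cloud API rather than racking hardware by hand. It assumes AWS and the boto3 Python SDK only as an example; the image ID, key pair and tags are placeholders, not details from the interview.

    # Illustrative only: launch one virtual server via the provider's API.
    # Assumes AWS credentials are configured; the IDs below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder machine image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="ops-team-key",            # placeholder key pair
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "web"}],
        }],
    )

    print("Launched", response["Instances"][0]["InstanceId"])

Because a call like this can live in version control and run from a pipeline, provisioning becomes repeatable and reviewable – the shift Fedorov describes.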

Q. You mentioned automation, what role does it play in DevOps?

Automation is the backbone of cloud services, helping providers offer customers powerful control panels and hugely flexible, dynamic configurations. Cloud providers are opening up these automation layers through additional services and APIs, which means companies are able to stage, test and deploy new software automatically from continuous delivery tools that orchestrate the software pipeline.
This level of automation is usually harder with on-premise servers, networking and storage that aren’t built for the software-defined era.
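As a rough illustration of what "orchestrating the software pipeline" means, the sketch below strings together stage, test and deploy steps in a few lines of Python. The build, test and deploy commands are invented placeholders; a real continuous delivery tool does the same gating with far more ceremony.

    # Illustrative only: the skeleton of a continuous delivery run that
    # stages, tests and then deploys a build. Commands are placeholders.
    import subprocess
    import sys

    def run(cmd):
        print(">", " ".join(cmd))
        return subprocess.run(cmd).returncode == 0

    if not run(["./build.sh"]):              # stage: produce an artefact
        sys.exit("build failed")
    if not run(["pytest", "tests/"]):        # test: gate the release
        sys.exit("tests failed")
    run(["./deploy.sh", "production"])       # deploy: push via provider APIs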

Q. What opportunities does the cloud offer?

For startups, the cloud is a highly cost-effective and lean infrastructure on which to take an idea from concept to a public beta or trial. For instance, we helped user experience optimisation expert Yottaa employ agile development on Amazon Web Services to deliver its operations intelligence platform into production.
Larger companies can leverage the same infrastructure for R&D or production applications where multi-tenant hardware suits their needs.
As long as your application has a distributed design from the outset, it can scale up on a flexible IaaS as required to meet demand without crippling CAPEX.
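As a rough illustration of that scale-out point, the sketch below (again assuming AWS and boto3, with placeholder names) defines an auto scaling group so that capacity grows and shrinks with demand instead of being bought up front.

    # Illustrative only: let the IaaS add or remove instances between a
    # floor and a ceiling, rather than provisioning for peak load.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-tier",
        LaunchConfigurationName="web-tier-lc",            # placeholder, defined separately
        MinSize=2,                                        # quiet-period baseline
        MaxSize=20,                                       # ceiling for peak demand
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # placeholder subnets
    )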



Tackling a new CIO agenda


IT is changing organisations and their workforces beyond recognition. It’s said that every employee is now a digital employee; we enjoy technology and recognise its relevance to the broader corporate agenda. Our evolving technical knowledge also allows us to adopt shadow IT in the workplace, in an effort to streamline individual workloads. As such, being just ‘the IT guy’ is no longer an option – IT professionals are now expected to step up and take a valued and strategic role within the broader business.
With this renewed focus, IT leaders are expected to juggle day-to-day operations whilst readying business units to respond to the new digital economy. The CIO is considered a lynchpin for digital transformation at all levels, whether consulting on new technologies, shaping cross-departmental initiatives or leading pivotal company-wide projects. In the instance of IT challenges or disasters, they are also the first port of call.
Such pressure might be viewed as a given in this role. After all, the IT function has always carried a high element of risk. But as technology becomes more ingrained in the business, how are these mounting expectations impacting the individual at the helm?
This updated job description should, in theory, equate to a new mind-set. Yet a recent Colt study found that the majority (68%) of CIOs are still making pressured decisions based on their own instinct and knowledge above any other factor. The majority (76%) also admit that their intuition is often at odds with other sources, such as big data reports or advice from third parties. While this is understandable given the fast pace of change and the potentially contradictory data on offer, it may also indicate a more deep-seated issue to be addressed.
What is the explanation for this reliance on tried-and-trusted methods of decision making? CIOs certainly feel greater pressure on their shoulders, so one could attribute this to an increased feeling of personal risk. This sentiment was echoed in the same ‘Moments that Matter’ study, where more than three-quarters (76%) of senior IT leaders said they felt more individual risk when making decisions as IT evolves into a more strategic role. After all, if the stakes are high and the individual is compelled to make the right decision that will result in business (and thus career) success, gut instinct and professional judgement will surely outweigh potentially conflicting insights or advice.
Yet the question remains: are IT leaders making full use of the expertise that already exists within the company? Do they make the most of internal – and to some extent external – expertise to deal with this huge digital transformation?
Whilst CIOs are already adapting their way of thinking – focusing less on operational matters and instead majoring on delivering value to the business – there is still a concerning pattern that emerges from the research. The findings suggest that some IT departments still act in an insular way by making their own IT-based decisions. When handling issues or risks, the IT professional tends to consult others in their own department rather than the broader business.
This approach worked well when IT’s main objective was to prioritise the internal needs and pressures within a business. But to satisfy the demands of today’s digital world, innovation must be led by the requirements of the end-customer and of the market. According to Gartner, more than 21 per cent of IT investment now takes place outside of an official IT budget. Department leaders have their own priorities and budgets, which must keep up with market demands and the increasing pace of change.




The modern-day CIO is therefore expected to be a trusted advisor across the business and to take the time to learn from counterparts in other divisions – their priorities, struggles and customer needs. Fostering these solid relationships enables insights into different functions, but also gives the CIO a sense of the business’s tolerance for risk. This depth of understanding will help them prioritise the projects to focus on – whether prototyping, piloting or rolling out – and identify the ideas that will make the most business impact.
In an age of digital transformation, it is interesting yet understandable that IT decisions are often still based on instinct and knowledge. Looking to the next phase, though, solid communication and collaborative approaches can promote enthusiasm around new kinds of technology adoption. Most importantly, collaboration presents a common goal: to drive innovation and create advantage, regardless of whose idea it was originally. For the CIO, this approach helps to ease the pressure of IT-driven decision making, and helps them grow into a new generation of IT leader.



Virtual HPC Clusters Enable Cancer, Cardiovascular and Rare Diseases Research

eMedLab, a partnership of seven leading bioinformatics research and academic institutions, is employing a new private cloud, HPC environment and big data system to support the efforts of many researchers studying cancers, cardiovascular conditions and rare diseases. Their research focuses on understanding the causes of these diseases and how a person’s genetics may influence their predisposition to disease and potential treatment responses.
The new HPC cloud environment combines the Red Hat Enterprise Linux OpenStack Platform with Lenovo Flex System hardware to enable the creation of virtual HPC clusters bespoke to individual researchers’ requirements. The system has been designed, integrated and configured by OCF, an HPC, big data and predictive analytics provider, working closely with its partners Red Hat, Lenovo and Mellanox Technologies, and together with eMedLab’s research technologists.
The high-performance computing environment is being hosted at a shared data centre for education and research, offered by digital technologies charity Jisc. The data centre has the capacity, technological capability and flexibility to future-proof and support all of eMedLab’s HPC needs, with its ability to accommodate multiple and varied research projects concurrently in a highly collaborative environment.
The ground-breaking facility is focused on the requirements of the biomedical community and will revolutionise the way data sets are shared between leading scientific institutions internationally.
The eMedLab partnership was formed in 2014 with funding from the Medical Research Council. Original members University College London, Queen Mary University of London, the London School of Hygiene & Tropical Medicine, the Crick Institute, the Wellcome Trust Sanger Institute and the EMBL European Bioinformatics Institute have recently been joined by King’s College London.




“Bioinformatics is a very, very data-intensive discipline,” says Jacky Pallas, Director of Research Platforms at University College London. “We want to study lots of de-identified, anonymised human data. It’s not practical – from data transfer and data storage perspectives – to have scientists replicating the same datasets across their own, separate physical HPC resources, so we’re creating a single store for up to six Petabytes of data and a shared HPC environment within which researchers can build their own virtual clusters to support their work.”
The Red Hat Enterprise Linux OpenStack Platform, a highly scalable Infrastructure-as-a-Service (IaaS) solution, enables scientists to create and use virtual clusters bespoke to their needs, allowing them to select compute, memory, processors, networking, storage and archiving policies, all orchestrated by a simple web-based user interface. Researchers will be able to access up to 6,000 cores of processing power.
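The article describes researchers assembling virtual clusters through a web interface; purely as an illustration of what that kind of self-service looks like programmatically, here is a hedged sketch using the OpenStack SDK for Python. The cloud name, image, flavor and network are invented placeholders, not eMedLab's actual configuration.

    # Illustrative only: build a small virtual cluster by booting several
    # identically shaped instances on an OpenStack cloud.
    import openstack

    conn = openstack.connect(cloud="example-cloud")    # clouds.yaml entry (placeholder)

    image = conn.compute.find_image("CentOS-7")        # placeholder OS image
    flavor = conn.compute.find_flavor("m1.xlarge")     # placeholder CPU/memory shape
    network = conn.network.find_network("research-net")

    for i in range(8):                                 # an 8-node virtual cluster
        conn.compute.create_server(
            name=f"vcluster-node-{i}",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )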
“We generate such large quantities of data that it can take weeks to transfer it from one site to another,” says Tim Cutts, Head of Scientific Computing at the Wellcome Trust Sanger Institute. “Data in eMedLab will stay in one secure place and researchers will be able to dynamically create their own virtual HPC cluster to run their software and algorithms to interrogate the data, choosing the number of cores, operating system and other attributes to create the ideal cluster for their research.”
Tim adds: “The Red Hat Enterprise Linux OpenStack Platform enables our researchers to do this rapidly and using open standards which can be shared with the community.”
Arif Ali, Technical Director of OCF, says: “The private cloud HPC environment offers a flexible solution through which virtual clusters can be deployed for specific workloads. The multi-tenancy features of the Red Hat platform enable different institutions and research groups to securely co-exist on the same hardware, and share data when appropriate.”
“This is a tremendous and important win for Red Hat,” says Radhesh Balakrishnan, general manager, OpenStack, Red Hat. “eMedLab’s deployment of the Red Hat Enterprise Linux OpenStack Platform into its HPC environment for this data-intensive project further highlights our leadership in this space and our ability to deliver a fully supported, stable and reliable production-ready OpenStack solution.
Red Hat technology allows consortia like eMedLab to use leading-edge self-service compute, storage, networking and other new services as these are adopted as core OpenStack technologies, while still offering the world-class service and support that Red Hat is renowned for. The use of the Red Hat Enterprise Linux OpenStack Platform provides leading-edge technologies alongside enterprise-grade support and services, leaving researchers to focus on their research and other medical challenges.”
“Mellanox end-to-end Ethernet solutions enable cloud infrastructures to optimise their performance and to accelerate big data analytics,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “Intelligent interconnect with offloading technologies, such as RDMA and cloud accelerations, is vital for building the most efficient private and public cloud environments. The collaboration between the organisations as part of this project demonstrates the power of the ecosystem to drive research and discovery forward.”


The new high-performance computing and big data environment consists of:


Red Hat Enterprise Linux OpenStack Platform
Red Hat Satellite
Lenovo System x Flex system with 252 hypervisor nodes and Mellanox 10Gb network with a 40Gb/56Gb core
Five tiers of storage, managed by IBM Spectrum Scale (formerly GPFS), for cost-effective data storage – scratch, frequently accessed research data, virtual cluster image storage, medium-term storage and previous-version backup.




Consolidation Among Cloud Service Providers


Time to Get Big, Get Local or Get Out!
While much of the recent focus has been on the big players in the cloud market, who are all competing for scale, there remains a competitive play for focused MSPs that is often overlooked.
After an initial growth phase, in which many pioneers create a new market, there typically follows a consolidation phase in which mergers, acquisitions and divestitures herald the start of a level of market maturity. There were once many small motor manufacturers, but now you have the volume players like Volkswagen, Ford or Toyota and the specialists with niche brands like Porsche, Mercedes or Jaguar. Those caught in no-man’s land tend not to survive – just look at Fiat – it lacks the volume to compete on scale and the brand to compete on quality.
The same trend is currently playing out in the cloud market. Signs of this process are as follows:


1) Generalists quit:


HP has been a leader in printers, PCs and corporate servers, but it realised that it couldn’t compete in public cloud, so it pulled out (shortly before splitting into two firms) to focus on what it was really good at. Likewise, Verizon has attempted to span the areas of wired and wireless connectivity as well as cloud, but is now reportedly looking to offload a number of its data centre facilities in a move to refocus on its wireless business.


2) Large players acquire smaller ones:


Before admitting defeat, HP had bought Eucalyptus, ConteXtream, Aruba Networks and Voltage Security. Meanwhile, Microsoft has bought companies like Adallom, while IBM has added Gravitant, Blue Box, and Cleversafe to its earlier big SoftLayer purchase.


3) Specialists emerge:


As markets mature, niche or vertical segments appear that are served by local, specialist or premium providers, as has been the case with the many segments now served by local MSPs. Unlike Verizon, which attempted to do everything from connectivity to cloud, these MSPs focus on their core area of expertise and partner across value networks in which each player – from the MSP itself to the connectivity providers, ISVs and other players they team with – contributes to a solution that meets a client’s needs more closely than the big players ever could.
So what should clients make of this, and what should they understand if they are to get the most from their provider?
For commodity requirements (like long-term storage) consider the big players – for instance, AWS Glacier costs less than a cent per GB per month (a rough cost sketch follows this list).
For business-critical needs consider an MSP with the specialist skills and knowledge that you are looking for – one that understands your business.
Also, try to avoid generalists and instead consider those suppliers that are best at what they do and that focus on driving value and competitive advantage through their core strengths.
Consider value networks where the various players have experience of working together to provide best-of-breed components in a fully integrated overall solution. Avoid the jack-of-all-trades that claims to be able to do it all.
Be aware: firms that get acquired are either keen to sell (cashing out) or forced to sell (fire-sale bargains). If your provider is one of these and does get acquired, then you need to be prepared for the consequences – your applications and data may move, meaning potential changes in terms of service, SLAs, etc.
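As promised above, here is a back-of-the-envelope sketch of why archive storage is treated as a commodity. The per-GB price is an assumption in the region of Glacier-class archive pricing, not a quoted figure, and the archive size is invented for illustration.

    # Rough arithmetic only; the per-GB price is an assumed archive-tier rate.
    price_per_gb_month = 0.007          # USD per GB per month (assumption)
    archive_tb = 50                     # example archive size
    archive_gb = archive_tb * 1024

    annual_cost = archive_gb * price_per_gb_month * 12
    print(f"{archive_tb} TB archived ~= ${annual_cost:,.0f} per year")
    # -> roughly $4,300 per year at this assumed rate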
In any business there are commodity needs (like long-term storage) that are hardly critical, but there are also areas where your data and your applications are critical, not only to the way that you do business but also to the value that you provide to your own clients. If you are a law firm, for example, there are MSPs that serve this segment: they understand how law firms operate, they are familiar with the applications that law firms use, and they have worked regularly with the ISPs and connectivity suppliers that also serve this market segment, so they are probably the best option for you. The same is true for many other segments. Whatever your business, there is likely to be an MSP out there that is right for you – we’d like to think that it would be Zsah, but even if it isn’t, these rules still apply!