





‘Traingate’ shows us that CCTV systems need the cloud


The furore over Jeremy Corbyn and ‘Traingate’ may have died down, but it highlights an unfortunate truth about CCTV that those of us who use it would do well to take note of.

The press, whilst reporting Mr Corbyn’s mis-telling of the events which led to him sitting on the floor rather than a seat on a Virgin train, seem to have missed a crucial point. Given Virgin’s reputation for using the newest technology and providing accurate, rapid communication through every media channel, why could it not produce CCTV footage to counter Mr Corbyn’s claims immediately after the event, rather than leaving it a week and then producing some grainy footage with incorrect time-stamping?

To be kind about it – CCTV generally isn’t great.

In this day and age you’d think that footage would have been quickly identified within a secure central corporate repository with an accurate timestamp, downloaded and sent immediately to Mr Corbyn’s office with a polite note requesting that he withdraw his remarks. We live in an age when anyone can quickly upload video from their phones to Facebook, so why did this not happen?

[easy-tweet tweet=”Why is CCTV bad in an age when we can quickly upload video from phones to Facebook” hashtags=”tech, security, CCTV”]

I have little doubt that if Virgin could have gotten their hands on the CCTV footage of what actually happened as soon as they needed it, they would have countered Mr Corbyn’s claims immediately. The issue is that the CCTV footage is recorded to DVRs in each train carriage and, as trains are on the move, even getting hold of the footage within a week would have been no mean feat.

The Daily Mail reported that some of the footage was incorrectly time-stamped. That’s not so important in this instance, but if the system had been recording a serious crime or terrorist attack, timings would have been vital. CCTV systems are notoriously bad at keeping the right time and need to be checked constantly to make sure they’re correct.

Another small point on time-keeping is that, to meet the requirements of the Data Protection Act (DPA), it’s essential that footage is of evidential quality and is correctly timestamped. It’s not great for Virgin to have to display such an inaccuracy in the public domain – particularly as it makes the footage unlawful.

Of course I’m pointing all this out for a reason – CCTV really doesn’t need to be this way. Footage from one or many corporate CCTV cameras can be simply, securely and inexpensively consolidated onto cloud servers, where it can be quickly interrogated by authorised personnel from any location and on any device (even by a busy CEO on their smartphone on a Caribbean island, provided they have the authorisation).
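As a rough illustration of what that consolidation step can look like in practice – not how any particular CCTV vendor does it – here is a minimal Python sketch that pushes a clip to a cloud object store (AWS S3 via the third-party boto3 package, purely as an example) with the camera ID and UTC capture time attached as searchable metadata. The bucket name and helper function are hypothetical.

```python
# Hypothetical illustration only: push a clip from a camera DVR to an S3
# bucket with timestamp metadata so it can be searched centrally later.
# Assumes the third-party boto3 package and valid AWS credentials.
from datetime import datetime, timezone

import boto3


def upload_clip(local_path: str, bucket: str, camera_id: str) -> str:
    """Upload one CCTV clip, tagged with camera ID and UTC capture time."""
    s3 = boto3.client("s3")
    captured_at = datetime.now(timezone.utc).isoformat()
    key = f"cctv/{camera_id}/{captured_at}.mp4"
    s3.upload_file(
        local_path,
        bucket,
        key,
        ExtraArgs={
            "Metadata": {"camera-id": camera_id, "captured-at": captured_at},
            "ServerSideEncryption": "AES256",  # keep footage encrypted at rest
        },
    )
    return key


# Example: upload_clip("/var/dvr/carriage3.mp4", "my-cctv-archive", "carriage-3")
```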

[easy-tweet tweet=”IoT devices need to tell the right time or they won’t work very well” hashtags=”IoT, tech, CCTV”]

What’s more, IoT devices need to tell the right time or they won’t work very well, so cloud-based systems such as Cloudview constantly poll NTP servers to make sure they’re telling the right time regardless of location.
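Keeping a device honest about time is a small amount of code. The sketch below is an illustration under stated assumptions – it uses the third-party ntplib package and is not Cloudview’s implementation – showing how a recorder could query an NTP server and check how far its local clock has drifted.

```python
# Minimal sketch (not Cloudview's implementation): poll an NTP server and
# report how far the local clock has drifted. Assumes the third-party
# ntplib package (pip install ntplib).
import ntplib


def clock_offset_seconds(server: str = "pool.ntp.org") -> float:
    """Return the offset between the local clock and the NTP server, in seconds."""
    client = ntplib.NTPClient()
    response = client.request(server, version=3)
    return response.offset


if __name__ == "__main__":
    drift = clock_offset_seconds()
    # A recorder could alarm or resync if drift exceeds a tolerance, e.g. one second.
    if abs(drift) > 1.0:
        print(f"Clock is {drift:+.2f}s out - timestamps may not be of evidential quality")
    else:
        print(f"Clock within tolerance ({drift:+.3f}s)")
```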

In this age of instant opinion, CCTV footage must keep up – and it can, as long as it becomes part of the IoT.



Accelerate your Compute at lower cost with innovation!


Imagine your software running faster and your datacentre footprint smaller. Imagine you need fewer servers, software and support licenses. Imagine what your business could achieve with better performance at a lower cost. For many businesses, this dream is already a reality.

[easy-tweet tweet=”Innovation drives IBM Power Systems and collaboration from the OpenPOWER Foundation” user=”IBM” hashtags=”IBM, tech”]

Innovation drives IBM Power Systems and collaboration from the OpenPOWER Foundation to deliver business workloads and the most demanding high-performance compute requirements needed for scientific and genomic research. These servers are designed to be as compute-effective as a mainframe, bringing a formidable platform that revolutionises Linux workloads.

10 reasons why IBM Power and OpenPOWER rock Linux workloads
1. Innovate, innovate, innovate! – The OpenPOWER Foundation

The IBM Power processor is an awesome processor – so great that tech powerhouses IBM, Google, NVIDIA, Tyan and Mellanox formed the OpenPOWER Foundation in 2013. IBM opened up the technology around the Power architecture so members could build out their own customised servers, networking and storage hardware.

Through the OpenPOWER Foundation, new innovative Linux-only Power-based servers have been created by the likes of Wistron, Mellanox, Tyan, Rackspace and Google. There are many new components, like GPUs and field-programmable gate arrays (FPGAs), designed and built specifically for the IBM Power architecture. OpenPOWER is enabling a new, open Linux ecosystem.

2. Linux all the way

Power is no longer just AIX; it’s all flavours of Linux. Linux-based software can utilise the Power platform and realise performance gains. At a minimum, the software only needs a recompile to run on Linux on Power. If you then go on to tune your software to take advantage of innovative enablers like hyper-threading, GPUs and FPGAs, you’ll realise even greater performance. Check out how MariaDB realised 3x the performance of Linux on x86 when tested independently by Quru.

[easy-tweet tweet=”Linux-based software can utilise the Power platform and realise performance gains” user=”IBM” hashtags=”tech, linux, cloud”]

3. Hyper-threaded processor

The POWER8 processor is designed for virtualisation; the cores are multithreaded with eight threads per core. This allows for virtualisation of many workloads at the thread level, enabling increased processing ability with linear performance. The POWER8 multithreading makes it easy to realise over 2.5x the performance, with no tuning! It’s like turning the processor core into an eight-lane highway, rather than a slow single-track road to mosey along. Check out my performance blog for more.
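To make the thread picture concrete, here is a small, hypothetical Python sketch: it simply asks the OS how many logical CPUs it can see (on a POWER8 machine in SMT8 mode that is eight per core) and fans a CPU-bound task out across all of them. The workload and numbers are illustrative, not a benchmark of the 2.5x figure above.

```python
# Illustrative only: spread a CPU-bound task across every logical CPU the OS
# exposes. On a POWER8 box in SMT8 mode, os.cpu_count() reports 8 hardware
# threads per core, so the pool below fans out across all of them.
import os
from concurrent.futures import ProcessPoolExecutor


def burn(n: int) -> int:
    """A deliberately CPU-heavy placeholder workload."""
    total = 0
    for i in range(n):
        total += i * i
    return total


if __name__ == "__main__":
    logical_cpus = os.cpu_count() or 1
    print(f"Logical CPUs visible to the OS: {logical_cpus}")
    with ProcessPoolExecutor(max_workers=logical_cpus) as pool:
        results = list(pool.map(burn, [2_000_000] * logical_cpus))
    print(f"Completed {len(results)} parallel tasks")
```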

4. GPUs

Imagine a card you can add to your computer architecture to massively increase the number of cores you can access. That’s the beauty of GPUs. The GPU virtual cores are ideal for processing compute-intensive workloads.

Thanks to the OpenPOWER Foundation, NVIDIA GPUs enable compute-intensive workloads and are now also able to handle those same loads even when faced with high bandwidth requirements such as big data. With the new NVLink technology from NVIDIA, businesses can redefine how they handle compute and large I/O.

It is so cool when innovation brings components like NVIDIA cards and NVLink to solve and make light work of hungry Hadoop and analytics workloads.
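As a hedged illustration of GPU offload in general – assuming an NVIDIA GPU and the third-party CuPy package, and not specific to any Power system – the sketch below times the same matrix multiply on the CPU with NumPy and on the GPU with CuPy.

```python
# Rough sketch (assumes an NVIDIA GPU and the third-party cupy package):
# time the same matrix multiply on the CPU with NumPy and on the GPU with CuPy.
import time

import numpy as np
import cupy as cp

N = 4096
a_cpu = np.random.random((N, N)).astype(np.float32)

t0 = time.perf_counter()
np.matmul(a_cpu, a_cpu)
cpu_secs = time.perf_counter() - t0

a_gpu = cp.asarray(a_cpu)             # copy the matrix to GPU memory
t0 = time.perf_counter()
cp.matmul(a_gpu, a_gpu)
cp.cuda.Stream.null.synchronize()     # wait for the GPU kernel to finish
gpu_secs = time.perf_counter() - t0

print(f"CPU: {cpu_secs:.2f}s  GPU: {gpu_secs:.2f}s")
```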

5. FPGAs and CAPI

FPGAs are among my favourite innovations. I really like that a programmable card can be configured specifically for your workload. You instruct the FPGA with the most efficient way to process your compute needs, so you can reduce the number of instructions, the movement of data, and so forth. This can increase your workloads’ performance by the thousands. FPGAs are perfect for machine learning algorithms and situations where the processing is compute-heavy. Think visual recognition and surveillance applications. Using FPGAs with IBM’s CAPI technology removes the overhead and complexity of the I/O subsystem, giving you even greater performance.

6. Cache & Flash

Each generation of cache has seen architecture changes to bring innovation and greater performance. Having access to more cache means memory-intensive applications will perform better, as memory latency is reduced and the data within it is available to be processed immediately with no waiting time for retrieval. POWER8 comes with a new Level 4 (L4) cache, boasting 128MB – unachievable by x86. There is also plenty of cache in L1, L2 and L3 – Power Systems significantly beats any other server it competes with.

[easy-tweet tweet=”Each generation of cache has seen architecture changes to bring innovation and greater performance.” hashtags=”tech, compute, cache”]

Cache can be complemented with storage. Let’s look at flash storage, which is almost as fast as cache. Adding 75TB of IBM Flash Storage will absolutely rock your data response times.

7. Bandwidth

Bandwidth, or the I/O subsystem, might not sound like a thrill. But this is what moves data around quickly, and when you have data-hungry applications like data warehousing and analytics, the POWER8 I/O will deliver significantly greater performance and scalability. The actual I/O speed is 230–410GB/sec which, compared to Intel Haswell, is on average 6x faster.
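If you want to put your own number on “how fast can this box move data around”, the crude Python sketch below measures sustained memory-copy bandwidth on whatever host it runs on. It is an illustration only, not a like-for-like reproduction of the POWER8 I/O figures quoted above.

```python
# Crude sketch: estimate sustained memory-copy bandwidth on the current host.
# This is not a reproduction of the POWER8 I/O figures above, just a way to
# put a rough number on how quickly this machine can move data around.
import time

import numpy as np

GIB = 1024 ** 3
src = np.ones(GIB // 8, dtype=np.float64)   # ~1 GiB of data
dst = np.empty_like(src)

repeats = 10
t0 = time.perf_counter()
for _ in range(repeats):
    np.copyto(dst, src)
elapsed = time.perf_counter() - t0

# Each pass reads 1 GiB and writes 1 GiB, so count 2 GiB of traffic per copy.
print(f"~{2 * repeats / elapsed:.1f} GiB/s sustained copy bandwidth")
```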

8. Footprint & Cost

With all the innovation, the ability to run mixed, highly virtualised workloads and all-around awesome performance, Power Systems are a dream come true for the datacentre. These servers easily halve the number of processors required, yet deliver increased performance for more demanding workloads.

The bottom line? You need fewer servers, and in every TCO study I have seen comparing POWER8 with x86, both the initial and three-year costs for both commercial and open source (with enterprise support licenses) are lower for POWER8, including the server price. POWER8 servers still amaze me – the price for compute is far lower, and that’s what matters.

9. Reliability, Availability, Serviceability or RAS

These essential requirements for servers have traditionally been regarded as leading-edge for UNIX on Power. In the same way, these capabilities are available for Linux when running on POWER8. Check out what SUSE say.

10. Ecosystem

There is a lot of software, both commercial and open source, that has been officially ported to Linux on Power, and there is now a waterfall of vendors moving to the platform. Customers are adopting it because it delivers value and enables their businesses on premises and in the cloud. Here is one that realised a 25 times greater performance gain.

“Porting was easier than expected” – Robert Love, Google

If you haven’t checked it out, you’re behind your competition. Now is a great time to get your Linux ported to POWER8 and OpenPOWER.



Hybrid IT is about to get interesting — Introducing Azure Stack

Hybrid cloud, and by extension hybrid IT, is here to stay.
Few companies will use purely public or private cloud computing, and certainly no company should miss the chance to leverage a mixture. Hybrids of private and public cloud, multiple public cloud services and non-cloud services will serve the needs of more companies than any single cloud model, so it’s important that companies stop and consider their future cloud needs and strategy.

[easy-tweet tweet=”Hybrid cloud services will serve the needs of more companies than any single cloud model” user=”PulsantUK” hashtags=”tech, cloud”]

Since so much of IT’s focus in the recent past (and indeed, even now) has been on private cloud, any analytics that show the growth of public cloud give us a sense of how the hybrid idea will progress. The business use of SaaS is increasingly driving a hybrid model by default. Much of hybrid cloud use comes about through initial trials of public cloud services. As business users adopt more public cloud, SaaS in particular, they will need more support from companies, such as Pulsant, to help provide solutions for true integration and governance of their cloud.

The challenge, as always in the cloud arena, is that there is no strict definition of the term ‘hybrid’. There has been, until recently, a definite lack of vendors and service providers able to offer simple solutions to some of the day-to-day challenges faced by most companies trying to develop a cloud strategy. Challenges include governance, security, consistent experiences between private and public services and the ability to simply ‘build once’ and ‘operate everywhere’.

Enter Azure Stack. For the first time you have a service provider (for that is what Microsoft is becoming) that is addressing what hybrid IT really means and how to make it simple and straightforward to use.

[easy-tweet tweet=”Business use of SaaS is increasingly driving a hybrid model by default.” user=”PulsantUK” hashtags=”cloud, hybrid, tech”]

So what's Azure Stack?
Azure Stack is, simply put, Microsoft’s Azure public cloud services brought into an organisation’s own datacentre to provide a private cloud solution. Under the hood, Azure Stack runs Microsoft’s vNext technology ‘stack’: Windows Server 2016, Hyper-V and Microsoft networking and storage. In truth, when it launches later this year it will only have a limited number of Azure public services ready to go – but the exciting bit is that no matter how you look at it, you will be running Microsoft’s Azure on-premises. It’s not just “something that’s functionally similar to Azure”; running Azure Stack is running Microsoft’s public Azure cloud in your datacentre.

The obvious first question on people’s minds is probably ‘why run a cloud service offering in your own datacentre at all’? Isn’t the whole idea of (public) cloud to push workloads out of your own organisation to third-party hosting solutions to minimise IT costs?

Well, yes. Offloading infrastructure and datacentre workloads to the public cloud has been the main (but not the only) use of public cloud so far. The challenge that still faces many CIOs and CTOs is that there are many workloads and services that they cannot or will not put in the public cloud (right now). Common reasons include security, compliance and guaranteed reliability, along with the fact that core business apps aren’t (currently) allowed outside of the datacentre, among other reasons.

One way of addressing this issue has been to build private clouds on-premises. This works, but often these are ‘disconnected’ from the public cloud services an organisation may be using, are poorly integrated (if at all) with other cloud services, and are often bespoke solutions that lack any true ability to make use of public cloud services without a lot of re-engineering, re-development and a lot of ‘know-how’.

What Azure Stack does for the first time is provide a consistent platform that allows companies to, for instance, stage the development and implementation of apps on-premises in a private Azure Stack cloud and then seamlessly move or scale out to the Azure public cloud. Azure Stack provides the exact same environment on-premises as in the public cloud. Not something similar on-premises, but an on-premises dev/test and internal datacentre environment that is identical to the public cloud (Azure) environment.
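To show what ‘build once, operate everywhere’ can look like from a developer’s seat, here is a conceptual Python sketch. It assumes the azure-identity and azure-mgmt-resource packages; the Azure Stack endpoint URL is illustrative, and a real deployment needs the Stack instance’s own ARM endpoint and credential configuration.

```python
# Conceptual sketch (assumes the azure-identity and azure-mgmt-resource
# packages): the same deployment code pointed at either public Azure or an
# Azure Stack instance, simply by swapping the ARM management endpoint.
# The Azure Stack URL below is an example value, not a real endpoint, and
# credential configuration for a Stack instance will differ in practice.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

PUBLIC_AZURE_ARM = "https://management.azure.com"
AZURE_STACK_ARM = "https://management.local.azurestack.external"  # example value


def resource_client(subscription_id: str, arm_endpoint: str) -> ResourceManagementClient:
    """Build a Resource Manager client against the chosen cloud's ARM endpoint."""
    return ResourceManagementClient(
        credential=DefaultAzureCredential(),
        subscription_id=subscription_id,
        base_url=arm_endpoint,
    )


# The same call works against either environment:
# client = resource_client("<subscription-id>", AZURE_STACK_ARM)
# client.resource_groups.create_or_update("demo-rg", {"location": "local"})
```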

How is Azure Stack (or Azure public) better than an existing VM environment?
This is the simple question that completely differentiates Azure (public)/Azure Stack from a standard VM-based environment. Once you understand this, you understand why Azure Stack is a disruptive and game-changing technology.

For a long time now, application scalability has been achieved by simply adding more servers (memory, processors, storage, etc.). If there was a requirement for more capacity, the answer was “add more servers”. Ten years ago, that also meant buying another physical server and putting it in a rack. With virtualisation (VMware, Hyper-V, OpenStack) this has been greatly simplified, with the ability to simply “spin up” another virtual machine just for the asking. Even this is now being superseded by the arrival of cloud technologies. Virtualisation may have freed companies from the need to buy and own hardware (a capital drain and the constant need for upgrades), but with virtualisation companies still have the problem of the overhead of an OS (Windows/Linux), possibly a core application (e.g. Microsoft SQL) and, most annoyingly, a raft of servers and software to patch, maintain and manage. Even with virtualisation there is a lot of overhead required to run applications, as is the case when running dozens of “virtual machines” to host the applications and services being used.

[easy-tweet tweet=”Even with virtualisation there is a lot of overhead required to run applications” user=”PulsantUK” hashtags=”tech, cloud, virtualisation”]

The public cloud takes the next step and allows the aggregation of things like CPUs, storage, networking, database tiers and web tiers, and simply allows an organisation to be allocated the amount of capacity it needs, with applications given the required resources dynamically. More importantly, resources can be added and removed at a moment’s notice without the need to add or remove VMs. This in turn means fewer ‘virtual machines’ to patch and manage, and so less overhead.

The point of Azure Stack is that it takes the benefits of public cloud and takes the next logical step in this journey — bringing the exact same capabilities and services into your (private) datacentre. This will enable a number of new ideas, letting companies develop an entire Azure Stack ecosystem where:

Hosting companies can sell private Azure Services direct from their datacentres
System integrators can design, deploy and operate an Azure solution once but deliver it in both private and public clouds
ISVs can write Azure-compatible software once and deploy in both private and public clouds
Managed service providers can deploy, customise and operate Azure Stack themselves
[easy-tweet tweet=”Azure Stack takes the benefits of public cloud and takes the next logical step” user=”PulsantUK” hashtags=”cloud, hybrid, tech”]

Azure Stack is going to be a disruptive, game-changing technology for cloud service providers and their customers. It will completely change how datacentres manage large-scale applications, and even address dev/test and highly secured and scalable apps. It will be how hosting companies offer true hybrid cloud services in the future.


PCI FAQs and Myths


Payment Card Industry (aka PCI) compliance is a set of guidelines that governs data security across credit and debit card payments. Businesses must comply with these rules, outlined by the PCI Security Standards Council, in order for their merchant account to remain in good standing.

[easy-tweet tweet=”Every business that accepts credit/debit cards must obey these PCI standards” hashtags=”tech, payment, cloud, data”]

Every business that accepts credit/debit cards must obey these PCI standards, no matter what processing method they use. BluePay has put together a slideshow guide answering some FAQs and deflating some myths surrounding PCI compliance.

Don’t get grounded like Delta – challenge your Disaster Recovery plan

You can attempt to plan for it. You can train for it as much as possible. You can create multiple systems designed to prevent it. When a disaster does occur, it’s usually when you least expect it.

[easy-tweet tweet=”Companies can lose up to $5,600 per minute on average during an outage” hashtags=”tech, cloud, security”]

Just ask Delta Airlines.

A huge power failure in the US delayed and cancelled flights worldwide, causing widespread chaos and significant damage to the Delta brand. Enterprises and businesses should view this as a cautionary tale – having a tried and tested disaster recovery plan in place is critical.

I can’t stress enough how imperative it is to have a disaster recovery plan in place that ensures you’re prepared to deal with any disaster that may come along. The aim of any disaster recovery plan should be to get back up and running as soon as possible. There’s the damage to your brand to consider, not to mention the costs involved. According to Gartner, companies can lose up to $5,600 per minute on average during an outage.

To make sure your disaster recovery plan is up to the task, consider these six questions:

1. Has your disaster recovery team set roles and responsibilities?
Each member of your disaster recovery team needs a clearly defined role. When a disaster hits, everyone must know exactly what their job is to ensure the plan is carried out without hesitation or mistakes.

Contact information for every member is a must, along with clarity on their role both for them and for the rest of the team. It sounds minor, but this will make all the difference in the event of a disaster.

2. Will your budget cover the unexpected?
In the same way a disaster can be unexpected, so can the costs involved in getting back online. Do you need cloud support? Usage fees need to be factored in. Will there be a requirement for external consultants to help the disaster recovery team? They will most likely be more expensive than your employees, so there must be the necessary budget in place.

Having a surplus in case of a disaster is vital. Every minute of downtime can result in huge IT expenses, especially for larger enterprises. Considering these additional expenses and building them into the plan is critical to ensure your team doesn’t hit any more snags when trying to get authorisation for these costs.

3. Can you mobilise your data?
[easy-tweet tweet=”Data mobility means immediate and self-service access to data anytime” hashtags=”data, tech, recovery”]

If a disaster strikes, you’ll need to consider the mobility of your data – is it confined to your physical infrastructure, or can your data move freely to different locations?

Data mobility means immediate and self-service access to data anytime, anywhere. It enables accelerated application development, faster testing and business acceptance, more development output, improved productivity and improved time-to-impact business intelligence.

Without data mobility, there’s a significant risk in terms of recovery cost and the time to get back online.

4. What is most likely to go wrong with your data?
Once you have your plan in place, it’s important to analyse it closely and be aware of what is most likely to go wrong when executing it. Home in on those vulnerabilities and ensure they’re corrected. Finding ways to avoid them will bring a much smoother disaster recovery process.

5. Has your plan been put to the test?
Test, test and test again – ensuring your disaster recovery plan is ready to be rolled out successfully at any time is an absolute must.