It partly boils down to the explosion of devices. Joe Root, co-founder of edge computing company Permutive, which provides a purpose-built data management platform, told Information Age that, "over the past 10 to 20 years, we've seen this explosion in the number of devices. So we've gone from 500 million to 15 billion devices, but over the next 10 years we'll go from 15 billion to a trillion devices."
“These devices generate enormous volumes of data — anything from my watch, which knows my heart rate constantly, all day long, all the way through to smart cities and factories, the data is being centralised on the cloud. This explosion of data causes problems for the way in which we currently process data.”
So, edge computing helps solve this by providing computational power locally, on devices such as smartphones, at the edge of the network, rather than sending everything to the cloud.
Root spoke to us about what he calls the five pillars of edge computing.
The first pillar of Edge Computing — latency
The cloud is at a distance. It takes time for data to be transferred from the cloud to the point where you need it. If you make a network request, it takes time to download the information you need. At the moment, says Root, "this is around 100 milliseconds. This will come down with 5G," but no matter how advanced the technology, there will always be a time lag in pulling data from the cloud, set by the speed of light. The closer you are to the data, the lower the latency. Sure, light travels at around 300 million metres a second, or a metre roughly every three nanoseconds, but when you are processing millions of pieces of data, pulling it from sources that may be many miles away, those nanoseconds add up. "If you need to make a decision in milliseconds, actually, physics prevents that from being possible," explained Root. "I think we're seeing companies approach this in different ways — if you look at the CDN (content delivery network) or if you have 5G, then they're trying to move the processing closer to the user to minimise that network latency. The most extreme way to do that, though, is to do it on the device. So that is why Apple Face ID, for example, doesn't rely on the cloud, because milliseconds matter in that example."
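To make the physics concrete, here is a minimal Python sketch (with assumed, illustrative distances, not figures from Root) that computes the round-trip latency floor imposed by the speed of light alone, before any routing, switching or processing overhead is added.

```python
# Illustrative only: the theoretical latency floor set by the speed of light.
# Real networks add routing, switching and processing time on top of this.
# The distances below are assumed examples, not measurements.

SPEED_OF_LIGHT_M_PER_S = 300_000_000  # roughly 3e8 m/s

def round_trip_floor_ms(distance_m: float) -> float:
    """Minimum round-trip time in milliseconds for a given one-way distance."""
    return (2 * distance_m / SPEED_OF_LIGHT_M_PER_S) * 1000

examples = {
    "on-device (1 m)": 1,
    "edge / CDN node (~50 km)": 50_000,
    "regional cloud (~1,000 km)": 1_000_000,
    "distant cloud (~10,000 km)": 10_000_000,
}

for label, metres in examples.items():
    print(f"{label:26} {round_trip_floor_ms(metres):10.4f} ms minimum")
```

Even the most distant case comes out around 67 ms before any real-world overhead, which is consistent with Root's "around 100 milliseconds" figure, and it shows why moving processing onto the device removes the latency floor almost entirely.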
The second pillar of Edge Computing — bandwidth
The second pillar is bandwidth: the limit on how much data you can move to and from the cloud in a given time. You can only push so much data across the network before you hit the bandwidth ceiling, and anything beyond that has to wait. So the second benefit of edge computing is that you aren't constrained by bandwidth: because you're processing the data on the device, you don't need to transfer it.
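A small, illustrative sketch of that trade-off, using assumed sensor rates and link speeds rather than real measurements: shipping raw readings to the cloud versus uploading only a summary computed on the device.

```python
# Illustrative bandwidth arithmetic with assumed numbers: a sensor producing
# raw samples versus an edge device that summarises locally and uploads only
# an aggregate. Not a benchmark, just the shape of the trade-off.

RAW_SAMPLE_BYTES = 64        # assumed size of one sensor reading
SAMPLES_PER_SECOND = 1_000   # assumed sampling rate
UPLINK_MBPS = 10             # assumed available uplink bandwidth

def seconds_to_upload(num_bytes: int, uplink_mbps: float) -> float:
    """Time needed to push num_bytes over a link of the given speed."""
    return (num_bytes * 8) / (uplink_mbps * 1_000_000)

raw_bytes_per_hour = RAW_SAMPLE_BYTES * SAMPLES_PER_SECOND * 3600
summary_bytes_per_hour = 256  # e.g. min/max/mean/count computed on the device

print(f"raw upload per hour:     {seconds_to_upload(raw_bytes_per_hour, UPLINK_MBPS):.1f} s of link time")
print(f"summary upload per hour: {seconds_to_upload(summary_bytes_per_hour, UPLINK_MBPS):.4f} s of link time")
```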
The third pillar of Edge Computing — the explosion in computing power on the Edge
The explosion in the number of devices on the internet comes into play here. There is enormous computing power residing in these devices, which has already been paid for, and much of it is under-utilised. Edge computing takes advantage of that processing power, meaning you don't have to pay for the same computation in the cloud.
One day, quantum computers may change this relationship, with computing power in the cloud far exceeding what exists on the edge. But that is some way off, and even if quantum computing does eventually give the cloud superior processing power, this is only one of the five pillars of edge computing.
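As a back-of-the-envelope illustration of the third pillar, the sketch below estimates the idle compute sitting in a large device fleet; apart from Root's device count, every figure is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope estimate of aggregate idle compute across a device
# fleet. Every figure other than the device count quoted by Root is an
# assumption chosen for illustration only.

DEVICES = 15_000_000_000   # devices on the internet today, per Root
GFLOPS_PER_DEVICE = 10     # assumed sustained capability of a typical device
IDLE_FRACTION = 0.5        # assumed share of that capability sitting unused

idle_gflops = DEVICES * GFLOPS_PER_DEVICE * IDLE_FRACTION
print(f"Assumed idle edge capacity: {idle_gflops / 1e9:.0f} exaFLOPS (illustrative)")
```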
The fourth pillar of Edge Computing — resilience
What happens if the cloud goes down, or you lose your connection? It is something we all intuitively understand: we may download a document to read on our computer or smartphone, so that if we are travelling and the internet connection is intermittent, we can still read it. If the data and the processing live on the device, the application keeps working even when the connection doesn't.
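That "download it so you can still read it" habit generalises to an offline-first pattern: try the network, but always be able to fall back to a local copy. A minimal sketch, using only the Python standard library and a hypothetical endpoint and cache path:

```python
# Minimal offline-first fetch: prefer fresh data from the network, but fall
# back to the last locally cached copy if the connection is unavailable.
# The URL and cache path are hypothetical placeholders.

import json
import urllib.request
from pathlib import Path

CACHE = Path("document_cache.json")
URL = "https://example.com/api/document"  # placeholder endpoint

def fetch_document() -> dict:
    try:
        with urllib.request.urlopen(URL, timeout=3) as resp:
            data = json.loads(resp.read().decode("utf-8"))
        CACHE.write_text(json.dumps(data))  # refresh the local copy
        return data
    except OSError:  # covers network down, DNS failure, timeout
        if CACHE.exists():
            return json.loads(CACHE.read_text())
        raise RuntimeError("no connection and no cached copy available")
```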
The fifth pillar of Edge Computing — privacy
When data is stored on the cloud, we have less control over it. Our wearable health tracker may record all kinds of information about us, but it's our personal information, which we want to keep private. Storing it locally, and processing it on the device rather than letting it seep onto the cloud, is a good way to achieve this.
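In code, this pillar might look something like the following hedged sketch, in which raw heart-rate samples never leave the device and only a coarse, user-permissioned summary is ever shared; all names here are hypothetical.

```python
# Illustrative privacy-preserving pattern: raw heart-rate samples stay on the
# device; only an aggregate summary is shared, and only if the user has
# explicitly opted in. All names are hypothetical.

from statistics import mean

def summarise_heart_rate(samples_bpm: list) -> dict:
    """Reduce raw readings to a coarse summary computed entirely on-device."""
    return {
        "resting_avg_bpm": round(mean(samples_bpm)),
        "max_bpm": max(samples_bpm),
        "sample_count": len(samples_bpm),
    }

def share_with_cloud(samples_bpm: list, user_opted_in: bool):
    if not user_opted_in:
        return None  # nothing leaves the device
    return summarise_heart_rate(samples_bpm)  # summary only, never raw samples
```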
Indeed, Root speculates that the greatest breach of all time is ongoing: OpenRTB (real-time bidding), a protocol used in programmatic advertising, in which advertisers bid for access to data about you. GDPR is shining a light on this, but from the point of view of individuals concerned about their privacy, edge computing could overcome the problem in one fell swoop. It doesn't mean advertisers will no longer be able to target specific customers accurately, but such targeting would be permission-based and transparent.
Root claims that this is a data breach that is happening millions of times a second, right now.
The Edge has got the edge
The advantages of cloud computing are well known: you can turn your usage up and down as you need it, there is no need for startups, for example, to invest in expensive IT infrastructure, and you pay as you go.
But the limitations of the cloud are well known too: latency, bandwidth, paying for processing power that already exists on devices, resilience in the event of a lost connection, and privacy.
Edge computing, by taking advantage of hardware that has already been funded, can overcome many of those disadvantages without necessarily losing the flexibility of the cloud.
It’s not that Edge Computing makes the cloud redundant — but it does make it, well, and sincere apologies for the obvious pun, it makes it more ‘edgy.’