As featured in Acceleration Economy Network
Let’s talk about the first principles of edge computing. Why do we do it today, and why will we keep doing it? To be honest, edge computing is such a broad and deep topic that it is difficult to make the case for it concisely. There are several lenses that can be applied to consider its rationale and benefits. I will give it a shot, and I would appreciate your help if you think I got it wrong. We are all in this together.
In my view, edge computing boils down to two things: lower latency and higher reliability. I will borrow the acronym URLLC (Ultra-Reliable Low-Latency Communications) from the 5G world to dub these axioms of edge computing URLLEC (Ultra-Reliable, Lower-Latency Edge Computing). I might have coined the tech acronym of the year, though it’s still early, of course.
The First Principles
URLLEC might not come as much of a surprise. After all, we frequently hear that edge computing reduces latency compared with central cloud computing. The rationale is simple: you place the edge computing workload closer to the endpoint client device or data source. The idea is that you reduce the number of network hops and the distance that light must travel through the network or Internet, and the savings can be significant. Round trips to a distant central cloud are simply too slow for many industrial applications that require distributed system latencies in the milliseconds range.
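To make that concrete, here is a minimal Python sketch that compares round-trip connection times to two endpoints. The hostnames edge.example.com and cloud.example.com are placeholders of my own invention, not real services; substitute a nearby edge endpoint and a distant cloud region, and the gap between the two printed numbers is the latency argument in miniature.

```python
import socket
import time


def tcp_round_trip_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average time to complete a TCP handshake, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        # create_connection performs the DNS lookup and TCP handshake;
        # the socket closes automatically when the 'with' block exits.
        with socket.create_connection((host, port), timeout=2.0):
            pass
        total += (time.perf_counter() - start) * 1000.0
    return total / samples


if __name__ == "__main__":
    targets = {
        "edge (placeholder)": "edge.example.com",
        "central cloud (placeholder)": "cloud.example.com",
    }
    for label, host in targets.items():
        try:
            print(f"{label}: {tcp_round_trip_ms(host):.1f} ms")
        except OSError as exc:
            print(f"{label}: unreachable ({exc})")
```

Measuring the TCP handshake rather than a full request keeps the comparison about network distance, not server processing time, which is the variable edge placement actually changes.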
Primary Benefits of URLLEC
What does URLLEC mean for business leaders and end-users of edge computing systems? Let’s tackle each first principle one at a time, starting with latency.
Lower Latency
- Better User Experience
- Improved Distributed System Performance
Reliability
- Availability (a brief fallback sketch follows this list)
- Consistency
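As promised above, here is a minimal sketch of the availability pattern that edge deployments enable, again assuming the hypothetical edge.example.com and cloud.example.com endpoints: the client prefers the nearby edge node and reaches back to the central cloud only when the edge does not answer in time.

```python
import urllib.error
import urllib.request

# Placeholder endpoints; neither is a real service.
EDGE_URL = "https://edge.example.com/api/status"
CLOUD_URL = "https://cloud.example.com/api/status"


def fetch_status(timeout_s: float = 0.5) -> bytes:
    """Prefer the nearby edge node; fall back to the central cloud."""
    last_error: Exception | None = None
    for url in (EDGE_URL, CLOUD_URL):
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return resp.read()  # first endpoint to answer wins
        except (urllib.error.URLError, TimeoutError) as exc:
            last_error = exc  # down or too slow; try the next endpoint
    raise RuntimeError(f"no endpoint responded: {last_error}")
```

The inverse ordering, cloud first with the edge acting as a local fallback cache, is also common; which one you choose is essentially a trade between consistency and availability, which is why those two bullets travel together.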
Read the full article by clicking the Acceleration Economy logo below and subscribe to the Cutting Edge column.