Edge computing refers to bringing processing and storage capabilities closer to where they are needed. Moving computation nearer to the data shortens response times and makes data more readily accessible, which enables ultra-low latency applications and reduces backhaul traffic volumes, and therefore costs.
Applications run faster when their processing is located closer to where the data is collected. This is especially true for logistics and large-scale manufacturing applications, as well as for the Internet of Things (IoT), where sensors and data-collecting devices are numerous and highly distributed.
IDC predicts that the edge computing market worldwide will grow to $250.6 billion by 2024. Dave McCarthy, the firm's Research Director of Edge Strategies, thinks edge products and services will power the next wave of digital transformation.
Keith Higgins, Vice President of Digital Transformation for Rockwell Automation, told TechRepublic the edge is the new cloud. He predicts that real-time availability of mission-critical workloads will be vital for companies scaling smart factory initiatives this year. "Edge computing will complement existing cloud infrastructure by enabling real-time data processing where the work takes place: motors, pumps, generators, or other sensors," Higgins said.
One of the biggest drawbacks of cloud computing services is latency: the round trip to a distant data center is too slow to support real-time securities market forecasting, autonomous vehicle piloting, or transportation traffic routing.
However, edge computing offers minimal latency. Where clusters of stand-alone, data-gathering appliances are widely distributed, having processors closer to even subgroups or clusters of those appliances could greatly improve processing time, making real-time analytics feasible on a much more granular level.
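To make the backhaul savings concrete, here is a minimal illustrative sketch (not from the article, and with invented names and numbers): an edge node near a sensor cluster aggregates raw readings locally and forwards only a compact summary upstream, so the cloud receives one record instead of thousands of raw values.

```python
# Hypothetical edge-node aggregation: reduce a window of raw sensor
# readings to a small summary before it ever crosses the backhaul link.
from statistics import mean

def summarize_window(readings):
    """Collapse a window of raw readings into one compact record."""
    return {
        "count": len(readings),
        "mean": mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

# One second of 1 kHz sensor data: 1,000 raw values become one record.
window = [20.0 + (i % 10) * 0.1 for i in range(1000)]
summary = summarize_window(window)
```

The same idea scales to any local analytic, from threshold alarms to rolling anomaly scores, with only the results sent on to the cloud.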
Edge computing also makes it possible to harness AI in enterprise applications such as voice recognition. Voice recognition applications need to work locally for a fast response, even if the algorithm is trained in the cloud.
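The train-in-the-cloud, infer-at-the-edge split can be sketched as follows. This is a hypothetical toy, not a real voice-recognition system: the feature names and weights are invented, standing in for parameters that would be fitted in the cloud and then shipped to the device, where inference runs with no network round trip.

```python
# Hypothetical: parameters trained in the cloud are downloaded once;
# each detection afterward runs entirely on the local device.
import math

# Stand-in for a model artifact fetched from the cloud (invented values).
CLOUD_TRAINED_WEIGHTS = {"energy": 1.2, "pitch": -0.4, "bias": -0.5}

def detect_wake_word(features):
    """Score audio features locally with the downloaded weights."""
    score = CLOUD_TRAINED_WEIGHTS["bias"]
    score += CLOUD_TRAINED_WEIGHTS["energy"] * features["energy"]
    score += CLOUD_TRAINED_WEIGHTS["pitch"] * features["pitch"]
    return 1.0 / (1.0 + math.exp(-score)) > 0.5  # sigmoid + threshold

local_result = detect_wake_word({"energy": 0.9, "pitch": 0.2})
```

The design point is the split itself: the heavy, data-hungry training step stays in the cloud, while the latency-sensitive inference step runs at the edge.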
Ultimately, next-generation applications and services require a new computing infrastructure that delivers low latency networks and high-performance computing at the extreme edge of the network. And it’s not just cutting-edge apps that can benefit from edge computing and 5G networks.
The concept of moving intelligence to the edge didn’t really become prominent until around four years ago, when telecommunications companies began making plans for 5G wireless and realized that 5G’s speeds can help.
When you combine the speed of 5G with edge computing’s processing capabilities, it’s only natural to focus on applications that require low latency. This is why early use cases tend to involve AR/VR, artificial intelligence, and robotics, which require split-second decisions from computing resources. But there’s potential for a variety of business apps to benefit from both edge and 5G.