Fog vs. Edge Computing: What Are the Differences That Matter?
This article focuses on the difference between edge and fog computing. It details how there are, as yet, no set standard definitions for these terms, with different organizations using them differently. It also explores how choosing one over the other could be important when deploying your IoT solution.
The last few decades have seen a massive shift from on-premises software to cloud computing. By storing data and performing computing processes elsewhere, we have freed ourselves to do more on our phones, computers, or IoT devices without needing the corresponding extra memory or computing power. However, we’re about to see things begin to swing back in the other direction.
This is for a variety of reasons, including a need for extremely low latency in certain applications, such as self-driving cars. Shifting computing power nearer to the edge of the network can also reduce costs and improve security.
According to Matt Vasey, who focuses on IoT strategy at Microsoft, “The ideal use cases [for both fog and edge computing] require intelligence near the edge where ultra-low latency is critical, run in geographically dispersed areas where connectivity can be irregular or create terabytes of data that are not practical to stream to the cloud and back.”
The Two Types of Computing Share Many Similarities
The terms edge and fog computing are often used more or less interchangeably, and the two approaches do share several key similarities. Both shift the processing of data towards the source of data generation, and both attempt to reduce the amount of data sent to the cloud. The goals are the same: to decrease latency and thereby improve response time in remote, mission-critical applications; to improve security, since less data has to cross the public internet; and to reduce costs.
Some applications may gather a huge amount of data, which would be costly to send to a central cloud service. However, only a small amount of the data they gather may be useful. If some processing is done at the edge of the network and only the relevant information is sent to the cloud, then this will reduce costs.
Think of a security camera. Sending 24 hours of video to a central server would be hugely expensive, and 23 of those hours may be of nothing more than an empty hallway. If you utilize edge computing, you can choose to only send the one hour where something is actually happening.
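To make that concrete, here is a minimal sketch of what such edge-side filtering could look like in Python. The frame format, the motion_score heuristic, and the threshold are all invented for illustration; a real camera would use its vendor's SDK and a proper motion detector.

```python
# Minimal sketch of edge-side filtering on a security camera.
# Frames are modelled as flat lists of pixel values and "uploading" is a
# callback; both are hypothetical stand-ins for a real camera SDK.

MOTION_THRESHOLD = 10.0  # assumed tuning value, not from the article

def motion_score(frame, previous):
    """Rough movement estimate: mean absolute pixel difference between frames."""
    if previous is None:
        return 0.0
    return sum(abs(a - b) for a, b in zip(frame, previous)) / len(frame)

def filter_stream(frames, upload_segment):
    """Keep quiet footage on the device; upload only segments with activity."""
    previous, segment = None, []
    for frame in frames:
        if motion_score(frame, previous) >= MOTION_THRESHOLD:
            segment.append(frame)        # something is happening: keep it
        elif segment:
            upload_segment(segment)      # activity just ended: send that segment
            segment = []
        # otherwise it's the empty hallway, so the frame is dropped locally
        previous = frame
    if segment:
        upload_segment(segment)
```

The point is simply that the decision about what is worth sending happens on the device itself, so the quiet 23 hours never leave the camera.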
Both fog and edge computing involve processing data closer to where it originates; a key difference is exactly where that processing takes place. In fog computing, processing happens at the local area network (LAN) level of the network architecture, where a centralized system interacts with industrial gateways and embedded computer systems.
Edge computing, by contrast, processes much of the data directly on the IoT devices that generate it.
How Fog and Edge Computing Are Used Differently
As we can see, these two technologies are very similar. To differentiate them, let’s think about the use case of a smart city.
Imagine a smart city, complete with smart traffic management infrastructure. Each traffic light has a connected sensor that detects how many cars are waiting on each side of the junction and prioritizes turning the light green for the side with the greatest number of cars. This is a fairly simple calculation that can be performed in the traffic light itself using edge computing, which reduces the amount of data that needs to be sent over the network and, in turn, operating and storage costs.
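As a rough illustration, the on-device logic could be as simple as the following Python sketch; the queue counts and the "longest queue wins" rule are invented for the example.

```python
# A rough sketch of the on-device decision a traffic light could make.
# The queue counts would come from the light's own sensor; the figures
# below are invented for illustration.

def next_green(queues):
    """Pick the approach with the most waiting cars."""
    return max(queues, key=queues.get)

# Example: north has the longest queue, so it gets the next green phase.
print(next_green({"north": 7, "south": 2, "east": 4, "west": 1}))  # -> north
```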
Now, imagine those traffic lights are part of a network of connected objects that include more traffic lights, pedestrian crossings, pollution monitors, bus GPS trackers, and so on.
The decision about whether to turn that traffic light green in five seconds or ten becomes more sophisticated. Perhaps there’s a bus that’s running late on one side of the junction. Maybe it’s started raining and, in a bid to encourage residents to travel more actively, the city has decided to give priority to pedestrians and cyclists at lights when it rains. Is there a pedestrian crossing or a cycle path nearby? Is anyone using it? Is it raining?
In this more complex scenario, micro-data centers can be deployed locally to analyze data from multiple edge nodes. These data centers act like a small, local cloud within the LAN, and this layer is what is meant by fog computing.
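To show the difference in kind, here is a sketch of the sort of decision that fog layer might make, pulling together data from several edge nodes. The field names and weightings are invented for this example, not taken from any real traffic system.

```python
# A sketch of fog-level logic running in a local micro-data center.
# It combines readings from several edge nodes (traffic lights, a bus
# tracker, a weather feed, a pedestrian crossing); the field names and
# weightings are invented for illustration.

def green_priority(approach):
    """Score one approach to the junction using data from multiple edge nodes."""
    score = approach.get("cars_waiting", 0)
    if approach.get("late_bus"):
        score += 10      # help a delayed bus catch up
    if approach.get("raining") and approach.get("pedestrians_waiting", 0) > 0:
        score += 15      # city policy: favour pedestrians and cyclists in the rain
    return score

def choose_green(junction):
    """junction maps each approach name to the latest readings for that approach."""
    return max(junction, key=lambda name: green_priority(junction[name]))

junction = {
    "north": {"cars_waiting": 6, "raining": True},
    "south": {"cars_waiting": 3, "late_bus": True, "raining": True},
    "east":  {"cars_waiting": 2, "raining": True, "pedestrians_waiting": 4},
}
print(choose_green(junction))  # -> east (scores: north 6, south 13, east 17)
```

Nothing about this logic needs the central cloud; it only needs the data that nearby edge nodes are already producing, which is exactly what the fog layer is for.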
So, Which is “Better”? Fog or Edge Computing?
In summary, as IoT continues to grow and more data is generated, processing that data close to the point of generation will become imperative. Clearly, other people agree with this. According to a recent report by Million Insights, the global edge computing market is projected to reach around $3.24 billion by 2025.
Edge and fog computing will both play vital roles in the future of IoT. Like many IoT considerations, such as which type of connectivity to choose, the answer is not black and white: whether fog or edge computing is “better” depends on the application, its specific requirements, and the outcomes you want.