Edge vs Cloud Computing: A Myth… or Reality?
The pace of change in the technology landscape has accelerated over the years. The average lifespan of a technology has shrunk sharply: the landline took roughly 100 years from its invention to adoption by 90% of the population, while mobile phones reached similar adoption levels in about 15 years. This rapid adoption has also meant that innovations fall out of favor and lose their relevance more quickly. Such obsolescence is visible not only in consumer technology but also in the enterprise domain, where accelerating technology adoption has led cloud service providers to grow by over 20% YoY at an annual run rate of over USD 20Bn. Further, new-age domains like the Internet of Things have opened up new markets for cloud computing and accelerated its growth.
On one hand, technology use cases like IoT have become possible because of innovations like cloud computing; at the same time, the acceleration and growth of IoT depend on further technology evolution. Hence new paradigms like Edge computing and Fog computing are taking shape. But before we judge further, it is important to define them.
Cisco defines Edge computing as the concept of bringing computation and storage close to the data source. This is done to eliminate the latency associated with large-scale data transmission to the central compute hub of the cloud infrastructure, and to enable quick decision making.
Fog computing is defined as a standard for achieving the concept of Edge computing, characterized by the use of a repeatable structure at the edge to reduce dependence on centralized cloud infrastructure.
The above definitions group Edge and Fog computing together, and state that both are conceptualized to reduce dependence on cloud technologies. On the face of it, Edge computing appears to compete with cloud technologies. Thus, we need to differentiate cloud computing from Edge and Fog.
What is Cloud Computing?
Cloud computing is the concept of utilizing scalable compute, storage, and network infrastructure on an on-demand basis through an orchestration layer. It was conceptualized to support the IT infrastructure requirements of enterprise customers and to reduce the time needed to set up that infrastructure. Over the years, however, it has evolved from IaaS to SaaS, and nowadays into serverless computing.
Though cloud computing has many benefits, scalability and flexibility being the most important, paradigms like Edge computing have provided an alternative to depending on the cloud for every compute requirement. Meanwhile, several news articles have already started writing epitaphs for cloud computing. However, with cloud being an underlying technology for many of the innovations we see today, it appears to be in an enviable position, as do cloud services leaders like AWS and Azure. Still, there are instances where the industry is looking to reduce its reliance on cloud computing and to leverage edge computing. Contrary to popular belief, this inclination has more to do with certain use cases than with hostility towards cloud technology.
Why the industry is adopting Edge computing:
Adopting Edge computing essentially means placing additional compute and storage infrastructure at edges close to the data source. This increases the overall cost of deployment; nevertheless, plausible business reasons are driving the adoption of Edge computing, as stated below:
1. Transmission latency: One advantage of IoT adoption is getting real-time data from field devices and reasoning on top of it by processing and analyzing it. The speed at which such reasoning is done is of utmost importance to maintaining production quality. In many scenarios this data is not just sensor readings; it may also include rich media like images and videos. Such use cases include image analytics for quality checks in PCB manufacturing, or image-analysis-based property and asset monitoring. (Refer to my previous article to read more about it.)
If sensor data is collected at a high frequency and the latency of transmitting it is very high, the purpose of real-time data analysis is defeated.
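As a rough illustration of the latency point, a back-of-envelope calculation shows why shipping raw media to the cloud can break a real-time budget. All figures below (image size, uplink bandwidth) are assumptions for the sketch, not numbers from any real deployment:

```python
# Back-of-envelope estimate: time to upload one raw inspection image.
# All numbers are illustrative assumptions, not measurements.

def transmission_seconds(payload_bytes: float, uplink_bps: float) -> float:
    """Time to push a payload over a link of the given bandwidth."""
    return payload_bytes * 8 / uplink_bps

IMAGE_BYTES = 5 * 1024 * 1024   # assumed 5 MB inspection image
UPLINK_BPS = 10 * 1_000_000     # assumed 10 Mbit/s factory uplink

upload = transmission_seconds(IMAGE_BYTES, UPLINK_BPS)
print(f"Upload alone takes ~{upload:.1f} s per image")
# A PCB line inspecting several boards per second cannot wait this long
# for a cloud round trip, which is the latency argument for edge inference.
```

Under these assumed numbers the upload alone costs about four seconds per image, before any cloud-side processing even begins.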
2. Long-term importance of data: Often, a large amount of the data transmitted to the cloud is used for analysis only once. Such data doesn't provide much further insight, and it cannot be leveraged in the future for additional analysis. Hence, processing such data at the edge and transmitting only the derived insights can save the cost of data transmission and storage.
3. Cost of data transmission: The cost of data transmission is sometimes the most critical factor determining the adoption of edge computing. This is especially important for use cases where data is collected at high frequency and transmitting it is very expensive. One such use case is ATM monitoring, which aims to reduce security incidents and provide timely alerts and alarms to the central control room. In such scenarios, the industry is moving towards Edge computing to analyze and decide at the edge, transmitting only the alerts and notifications related to security incidents to the central hub.
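The pattern described in points 2 and 3 — analyze locally, forward only the insight — can be sketched as follows. The threshold value, sensor name, and alert shape are all hypothetical, chosen only to make the sketch concrete:

```python
# Edge-side filtering: analyze raw readings locally and forward only
# alert-worthy insights, instead of streaming every sample to the cloud.
# The threshold and message shapes are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Alert:
    sensor_id: str
    reading: float
    message: str

ALARM_THRESHOLD = 75.0  # assumed limit for this sketch

def process_at_edge(sensor_id: str, readings: list) -> list:
    """Return only the alerts worth transmitting; raw samples stay at the edge."""
    return [
        Alert(sensor_id, r, f"{sensor_id} exceeded {ALARM_THRESHOLD}")
        for r in readings
        if r > ALARM_THRESHOLD
    ]

samples = [70.1, 76.3, 71.0, 80.2]          # raw high-frequency readings
alerts = process_at_edge("atm-cam-07", samples)
print(f"{len(alerts)} alerts to transmit instead of {len(samples)} raw samples")
```

Only the two out-of-range readings generate upstream traffic; the rest never leave the edge device, which is where the transmission-cost saving comes from.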
Is this adoption at the expense of Cloud Computing?
There is no doubt that the above scenarios showcase how Edge computing is finding traction in the industry to solve business problems, but this does not mean that, in the absence of Edge computing, cloud computing would simply be the next best option. Many use cases became feasible only because of the availability of low-cost compute at the edge. For example, in a cold-chain monitoring solution, any adverse change in temperature should trigger corrective action at the edge, as any delay in decision making, due to communication delays to the cloud, can degrade the quality of the goods being transported, thus defeating the purpose of a proactive monitoring solution. Hence, in such scenarios the decision making is done at the edge, and the analysis results, notifications, and actions taken are transmitted to the cloud infrastructure for later analysis.
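The cold-chain example above can be sketched as a simple act-locally, log-to-cloud loop. The temperature limit, the corrective action, and the in-memory list standing in for a cloud store are all assumptions made for illustration:

```python
# Cold-chain sketch: decide and act at the edge, then record the outcome
# upstream for later audit. The limit, the action taken, and the audit
# store are illustrative assumptions.

CLOUD_AUDIT_LOG = []  # stands in for a cloud time-series store

def on_temperature_reading(celsius: float, max_celsius: float = 8.0) -> None:
    """Trigger corrective action locally; forward only the outcome upstream."""
    if celsius > max_celsius:
        action = "compressor boosted"   # local decision, no cloud round trip
    else:
        action = "no action"
    # Forwarded to the cloud for root-cause analysis and audit:
    CLOUD_AUDIT_LOG.append({"temp_c": celsius, "action": action})

for reading in [5.2, 9.1, 6.0]:
    on_temperature_reading(reading)

print(CLOUD_AUDIT_LOG[1])  # the out-of-range reading was handled at the edge
```

The corrective action never waits on the network; the cloud receives only the record of what happened, which is exactly the audit trail the conclusion below argues for.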
Conclusion:
We have discussed a number of scenarios above where edge computing eliminates the need to transmit large quantities of data to central cloud instances, with decision making done close to the data source. Such deployments empower the edge to make decisions, and this architecture has made feasible many use cases that would otherwise not have been. Further, to support these scenarios, it becomes important to keep the results of the edge analysis in the cloud, so that root causes and the corresponding actions can be audited in the future. This further fuels the utilization of cloud computing in scenarios that were not considered feasible earlier, opening up a broader market for cloud adoption.
Though every technology has its own shelf life before it starts to decline, cloud computing is relatively new and still evolving, and it should continue to grow as new markets and use cases are explored. We will continue to read about the myth of competition between cloud and edge computing, even as these technologies keep complementing each other.