There and back again
There is a huge craze nowadays around edge datacenters. From micro-edge to larger edge, from device to cloud, everyone has their own understanding of what and where the edge should be. However, the answer to this question is not clear yet, and everybody is looking for use cases to validate one option or another. One of the key drivers is latency: usage and acceptable latency between users and compute will determine the ideal location. At 10 ms, you can probably accommodate an edge site at 500 km (accounting for switching time and good fibre routes). Below that, there is a balance to find between deploying tens of medium datacenters to give decent national coverage and going denser to get below 2 ms, which then means managing hundreds or thousands of micro-datacenters.
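To make the latency arithmetic concrete, here is a minimal sketch. The numbers are assumptions, not measurements: light in fibre travels at roughly two thirds of c (about 200 km per millisecond), and real fibre routes detour rather than running in straight lines, modelled here with a hypothetical route factor of 1.3.

```python
# Sketch of a propagation-latency budget, under assumed constants:
# light in glass covers ~200 km per millisecond, and real fibre
# paths are ~1.3x longer than the straight-line distance.
FIBRE_KM_PER_MS = 200.0   # assumed: ~2/3 of the speed of light in vacuum
ROUTE_FACTOR = 1.3        # assumed: detour factor of real fibre routes

def round_trip_ms(distance_km: float) -> float:
    """Propagation-only round-trip time to a site at the given distance."""
    return 2 * distance_km * ROUTE_FACTOR / FIBRE_KM_PER_MS

def max_distance_km(budget_ms: float, switching_ms: float = 0.0) -> float:
    """Farthest site reachable within a round-trip budget, after
    subtracting an allowance for switching and processing time."""
    usable_ms = budget_ms - switching_ms
    return usable_ms * FIBRE_KM_PER_MS / (2 * ROUTE_FACTOR)

print(round_trip_ms(500))                    # ~6.5 ms of propagation alone
print(max_distance_km(10, switching_ms=3))   # ~538 km within a 10 ms budget
print(max_distance_km(2, switching_ms=1))    # ~77 km within a 2 ms budget
```

With these assumed numbers, a 10 ms budget comfortably reaches a site 500 km away, while a sub-2 ms budget shrinks the radius to under 100 km, which is why the denser scenario implies so many more, smaller sites.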
The purpose of this post is not to try to give an answer, but rather to reflect on how IT has constantly been swinging between centralised and decentralised over the past decades. As with fashion, innovation sometimes just looks like something old enough for people to have forgotten.
Let us start with micro-edge datacenters. For many years, we have been explaining to businesses that they should move their on-premise infrastructure to third-party datacenters so they can benefit from a better environment, greater network availability and closer proximity to the customers, suppliers and partners they want to connect with, including cloud service providers for high-performance hybrid and multi-cloud deployments. This is totally true, and it has led to building large, centralised campuses where all the parties can exchange traffic in a highly efficient way. Enterprises are transforming their infrastructure to adopt these new models and getting rid of the legacy computer rooms on their premises.
With the explosion in the volume of data generated at the edge by all the connected devices, we are seeing issues in terms of scalability, and we see the need for local compute and storage to perform analysis at the edge and transfer only a summary of the data to the centralised datacenter. For some people, this edge could, or should, be on-premise, which means a computer room again. Azure Stack and AWS Outposts are good examples of this kind of deployment, and they offer a significant improvement with a consistent infrastructure from edge to cloud.
This is not reversing the trend of moving to the cloud, it is complementary, and we will see both centralised/cloud and edge grow in the future.
Talking about cloud: while it is transformative for today's businesses, the concept is not new. Deploying a shared computer and offering compute based on usage is something IBM was doing in the previous century with its Service Bureau. Before the late 70s, few companies could afford their own computer, but many were willing to benefit from the power of these new machines, so renting time on a computer you could not own was a good solution, and that is exactly what was offered. At that time, though, you had to prepare your program on punch cards beforehand and go to the Service Bureau to process them on a shared computer. Certainly not as convenient as coding directly in the cloud over the Internet. When personal computers and servers became affordable, the world underwent massive computerization and this service became obsolete. The cloud concept is similar, though: offering shared resources to many users.
SaaS is not new either. Your browser is certainly more advanced than a computer terminal, but the concept is the same. All the resources sit centrally, and your computer is just used to enter data and display the results. This is equivalent to IBM 3270 or DEC VT50/VT100 dumb terminals connected to a mainframe. The main differences are of course the GUI, the scale and the distance between users and the service.
Even the smartphone apps that everybody loves so much are just a revival of the client/server architecture, where the workload is split between the client (app) and the server (cloud).
Why are we seeing all those old concepts being so successful now? Because of the network!
Moving workloads from on-premises to the cloud is achievable now because the performance and reliability of networks ensure that users can access their applications with an experience similar to when they were connected locally. More and more companies are using SaaS because of the consumption model and the fact that the provider takes care of everything, enabling users to focus on their core activities. Apps are successful because of the quasi-permanent connectivity offered by mobile networks.
The performance and ubiquity of modern networks are the enablers of digitalization, and nobody should underestimate how critical networks and interconnections are in today's world.