OpenRAN, the perfect scenario to test Cloud capabilities
With the deployment of Cloud capabilities in the network, many IT methodologies and technologies have been used to virtualise network functions. The one that seems most challenging to transform is the Radio Access Network (RAN).
A core network service can be deployed in a central Data Center, where it operates much like a traditional appliance. In contrast, the RAN needs to be close to the antennas that serve the Telco's mobile customers. This requires a deployment at the edge that has to be synchronized and monitored from a central site.
To better understand how an OpenRAN deployment works, let's describe its components. In a traditional deployment, the Antenna connects to the Remote Radio Unit (RRU), which transfers the signal to the Baseband Unit (BBU). In the BBU, the signal is transformed into IP packets and passed up to the higher layers of the OSI stack. The OpenRAN architecture is similar to the traditional approach but splits the BBU into two components: the Distributed Unit (DU) and the Central Unit (CU). The DU handles the lower, time-critical layers of the protocol stack, while the CU handles the upper layers.
The goal is to virtualize and consolidate as many components as possible. The RRU has to remain a physical component, with no option to be virtualized (for the moment). The DU needs to be close to the RRU, within a latency budget of the order of a hundred microseconds and with a certain guaranteed bandwidth. This means that if we cannot place the DU in a Data Center no further than a few kilometres from the RRU (roughly 20 km), it will have to be installed at the RRU site. In this scenario the DU can be another physical appliance installed alongside the RRU (scenario 1) or a virtual solution (vDU) running on a commodity IT server (scenario 2). The best option is to consolidate a number of vDUs in a Data Center, for instance a Central Office (CO), and optimize the HW infrastructure (scenario 3). Of course, for this last option we have to verify that the latency and bandwidth requirements are met. In all the scenarios the Central Unit is virtualized (vCU) and deployed on commodity servers in a central Data Center, since its latency requirement is measured in milliseconds, allowing distances of more than 500 km and facilitating consolidation.
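The distance figures above can be sanity-checked with a back-of-the-envelope propagation calculation. This sketch assumes light in fiber travels at roughly 200 km per millisecond (about two thirds of c) and ignores switching delay; the numbers are illustrative, not from the article.

```python
# Back-of-the-envelope check of the fronthaul/midhaul distance budgets.
# Assumption (not from the article): signal speed in fiber ~200 km/ms,
# switching and processing delays ignored.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # roughly c / 1.5

def propagation_delay_us(distance_km: float) -> float:
    """One-way propagation delay in microseconds over fiber."""
    return distance_km / SPEED_IN_FIBER_KM_PER_MS * 1000.0

# RRU -> DU fronthaul: ~20 km costs ~100 microseconds one way, which is
# why the DU must stay within a few kilometres of the RRU.
print(propagation_delay_us(20))   # 100.0

# DU -> CU midhaul: a budget of a few milliseconds allows hundreds of km;
# 500 km is only 2.5 ms, so vCUs can be consolidated in a central DC.
print(propagation_delay_us(500))  # 2500.0
```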
Depending on the final scenario we end up with three different points of deployment (PoDs): the antenna site, where we have the RRU and possibly the DU or vDU; the CO, where we could host the vDU; and the central Data Center, where we consolidate the vCUs. In the deployment project we can design for the same HW and SW solution at the beginning to minimize the operation and maintenance effort, but over time, as components are renewed, we will end up with an inventory of heterogeneous HW (entropy is inevitable): we cannot change all the servers at the same time. Depending on the capabilities of the upgraded HW, we will also be able to deploy new SW components. So we have to work in a scenario of heterogeneous HW and SW, with different releases deployed and the need to manage everything in a coordinated way. From the functional point of view, each PoD will probably have to support a different number of antennas with different requirements, which needs to be monitored and controlled centrally.
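One way to picture the centralized view this implies is a simple inventory of PoDs tracking hardware generation, software release and load. This is a hypothetical data model, with all names invented for illustration.

```python
# Hypothetical model of a heterogeneous PoD inventory kept at the central
# site. PoD kinds, generations and releases are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PoD:
    name: str
    kind: str           # "antenna-site", "central-office" or "central-dc"
    hw_generation: str  # servers are renewed at different times
    sw_release: str     # software releases also drift per site
    antennas_served: int

inventory = [
    PoD("site-001", "antenna-site",   "gen1", "r1.2", antennas_served=3),
    PoD("co-001",   "central-office", "gen2", "r1.3", antennas_served=40),
    PoD("dc-core",  "central-dc",     "gen2", "r1.3", antennas_served=0),
]

# Central operations need a consolidated view, e.g. which SW releases are
# in the field, so upgrades can be coordinated rather than ad hoc.
releases_in_field = sorted({pod.sw_release for pod in inventory})
print(releases_in_field)  # ['r1.2', 'r1.3']
```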
Since all these components run on commodity IT servers and the RAN software can be designed as Cloud Native applications, we can treat the whole context as a cloud environment with distributed PoDs that need to be orchestrated. We will need an orchestration solution resilient enough to react to failures the same way an IT cloud deployment does, with multiple stateless instances of each component that can scale dynamically with service demand. The RAN components need to be designed following the twelve-factor principles for Cloud Native solutions, and chaos-monkey testing methodologies should be applied to guarantee service assurance.
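The point of combining stateless replicas with chaos-style testing can be shown with a toy simulation: kill replicas at random, count the rounds where every replica died at once, and let an orchestrator restore the replica count afterwards. This is an illustrative sketch, not a real chaos-monkey tool.

```python
# Toy chaos-style experiment: each round, every replica of a stateless
# component dies independently with some probability; the service is only
# down if ALL replicas die before the orchestrator restores the count.
import random

def survivors(replicas: int, kill_prob: float, rng: random.Random) -> int:
    """Kill each replica independently; return how many survive."""
    return sum(1 for _ in range(replicas) if rng.random() > kill_prob)

rng = random.Random(42)   # fixed seed so the sketch is reproducible
desired_replicas = 4      # illustrative replica count
total_outages = 0
for _ in range(1000):
    alive = survivors(desired_replicas, kill_prob=0.3, rng=rng)
    if alive == 0:
        total_outages += 1  # total outage: every replica died this round
    # the orchestrator reschedules instances back to the desired count

print(total_outages)  # rare: P(all 4 die) = 0.3**4 ~ 0.8% per round
```

With four replicas a total outage is expected in under 1% of rounds, which is the property a chaos test would verify against the real deployment.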
The best option to package the components is probably to wrap them in containers running in a Kubernetes environment. In fact, vendors of OpenRAN solutions are already moving from Virtual Machines to container architectures. The orchestration solution mentioned before must be capable of automatic deployment driven by policies that capture the full context and the different situations: HW capabilities, geographic constraints, non-functional service requirements, business requirements, etc.
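The kind of policy-driven placement described here is, in Kubernetes terms, what node labels and affinity rules express. The sketch below reduces that idea to a filter over node attributes; all node names, attributes and thresholds are illustrative assumptions, not a real scheduler API.

```python
# Simplified sketch of policy-driven placement, analogous to what a
# Kubernetes scheduler does with node labels and affinity rules.
# All names and thresholds are illustrative assumptions.

nodes = [
    {"name": "edge-01", "site": "antenna-site",   "fronthaul_latency_us": 50,   "sriov": True},
    {"name": "co-01",   "site": "central-office", "fronthaul_latency_us": 90,   "sriov": True},
    {"name": "dc-01",   "site": "central-dc",     "fronthaul_latency_us": 2500, "sriov": False},
]

def place_vdu(nodes, max_latency_us=100, need_sriov=True):
    """vDU policy: tight fronthaul latency plus HW capability (SR-IOV NIC)."""
    return [n["name"] for n in nodes
            if n["fronthaul_latency_us"] <= max_latency_us
            and (n["sriov"] or not need_sriov)]

def place_vcu(nodes, max_latency_us=5000):
    """vCU policy: tolerates milliseconds, so central sites also qualify."""
    return [n["name"] for n in nodes
            if n["fronthaul_latency_us"] <= max_latency_us]

print(place_vdu(nodes))  # ['edge-01', 'co-01']
print(place_vcu(nodes))  # ['edge-01', 'co-01', 'dc-01']
```

In a real deployment these predicates would live in node labels, nodeAffinity terms and scheduler policy rather than application code, but the decision being made is the same.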
The configuration of the internal network that connects all the PoDs and the components running on them is another challenge: it has to be efficient enough to deliver the required throughput, latency and bandwidth without adding too much overhead.
From the cloud point of view, all the details of the OpenRAN deployment described here are a challenge. We have different PoDs that need to be coordinated, with different components deployed, high requirements in terms of latency and bandwidth, and all in a heterogeneous environment. There is no doubt that if we are able to create a proficient cloud architecture to support an OpenRAN deployment, that architecture will be capable of supporting any IT service deployment.