Evolution of WAN and What Business Challenges SD-WAN Solves in Enterprise Networking
What is SD-WAN?
SD-WAN is a software-defined approach to managing the wide area network that allows companies to leverage any combination of transport services, including MPLS, LTE, and broadband, to connect users to applications.
To understand this better, it helps to first study the evolution of the WAN, then the evolution of data centers, and then SDN, to see which problems SD-WAN solves.
Evolution of WAN
In the early 1980s, organizations began opening branch offices in different countries as globalization took hold. To connect these offices, companies required voice, video, and data services. Applications hosted on servers at the central office needed to be accessible from multiple remote branches.
Physical Circuits: In the early 1980s, telecom service providers offered dedicated leased lines for enterprises to connect their branches. These dedicated physical circuits were established, maintained, and terminated through a carrier network for each communication session. Integrated Services Digital Network (ISDN) is one example of a circuit-switched WAN technology. It is a high-capacity service carried on T1 trunk lines between a telco's central office and the branch location. ISDN is a communication protocol offered by telephone companies that carries data, voice, and other types of traffic.
Challenges: Since service providers invested heavily in leased lines, they charged customers a premium. For enterprise customers, dedicated leased lines were costly, and configuration added further expense.
Permanent Virtual Circuits (PVCs): Infrastructure moved from physical circuits to virtual circuits. A virtual circuit is a logical pathway built to provide reliable communication between two network devices. Frame Relay, X.25, and ATM all operate over virtual circuits. Frame Relay is a data link layer protocol that uses High-Level Data Link Control (HDLC) encapsulation to support multiple virtual circuits between connected devices, and it is more efficient than X.25.
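A rough way to picture virtual circuits is as many logical conversations multiplexed over one physical link, each identified by a DLCI. The Python sketch below is purely conceptual; the DLCI numbers and site names are invented for illustration, not taken from any real Frame Relay deployment:

```python
# A minimal sketch of Frame Relay-style virtual circuits: one physical
# link carries many logical circuits, each identified by a DLCI.
# DLCI values and site names here are illustrative, not real.

PVC_TABLE = {
    102: "branch-london",   # DLCI 102 -> PVC to the London branch
    103: "branch-tokyo",    # DLCI 103 -> PVC to the Tokyo branch
}

def receive_frame(dlci: int, payload: bytes) -> None:
    """Demultiplex an incoming frame onto the right virtual circuit."""
    site = PVC_TABLE.get(dlci)
    if site is None:
        print(f"DLCI {dlci}: unknown circuit, frame dropped")
    else:
        print(f"DLCI {dlci}: delivered {len(payload)} bytes to {site}")

receive_frame(102, b"hello london")
receive_frame(999, b"stray frame")
```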
Challenges: For enterprise customers, PVCs reduced link costs drastically, but the cost of configuration rose: Frame Relay parameters such as Data Link Connection Identifiers (DLCIs) and routing protocols like OSPF all had to be set up. The link cost went down, but the configuration cost went up roughly threefold.
MPLS VPN: Multiprotocol Label Switching (MPLS), the successor to Frame Relay, was the true technology breakthrough of the decade. MPLS operates differently than conventional routing: packets are assigned labels, and forwarding decisions are made based on the label headers instead of IP addresses or other Layer 3 information. Virtual Private Network (VPN) services are delivered over MPLS, which forms the provider's core network. ISPs invested in MPLS-based networks, so enterprise customers only had to connect each branch to the nearest ISP Point of Presence (PoP) through a leased line. The ISP's PoP routers took part in intelligent forwarding by learning routes for each enterprise branch.
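Conceptually, each MPLS router forwards on a flat label lookup rather than a longest-prefix IP match. The sketch below, with invented label values and interface names, illustrates the label-swap idea:

```python
# A minimal sketch of MPLS label switching: forwarding is a flat lookup
# on the incoming label, which is swapped for an outgoing label.
# Label numbers and interface names are invented for illustration.

LFIB = {
    # in_label: (out_label, out_interface)
    16: (24, "ge0/1"),
    17: (30, "ge0/2"),
}

def forward(in_label: int, packet: str) -> None:
    entry = LFIB.get(in_label)
    if entry is None:
        print(f"label {in_label}: no entry, packet dropped")
        return
    out_label, interface = entry
    # Swap the label and send the packet out; no IP lookup is needed.
    print(f"label {in_label} -> {out_label}, out {interface}: {packet}")

forward(16, "branch-A to HQ traffic")
```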
Challenges: ISP PoP locations were shared, so other enterprise customers connected through them as well, raising security concerns. Customers needed IPsec VPNs, not plain GRE tunnels, to encrypt their traffic. Configuring IPsec VPNs on routers demanded CCIE-level expertise in routing, switching, and security. IPsec VPN, MPLS, and OSPF together made configuration overhead an even bigger issue.
IP VPN as a Backup: MPLS was costly because SLAs were signed to guarantee continuous connectivity with essentially no downtime. In case of failure, companies needed an alternative to stay connected, so for redundancy enterprises used Internet-based VPNs. Most traffic rode the MPLS network and very little went over the Internet. This is how the traditional WAN was built.
Challenges: MPLS was the primary network, and the Internet was used as a backup. Because the Internet offers only "best effort" service, security and Quality of Service (QoS) configuration was necessary on the Internet-facing routers. Configuration overhead was a problem once again.
Evolution of Data Centers
The 1980s brought the introduction of personal computers, which began a new era of personal computing. PCs were widely adopted, and desktop applications were installed on individual machines, where they could be used by only one user at a time. An application had to be installed locally to run, and there was no easy way to share data. Any difficulty with an application meant getting in touch with experts or the IT department.
In the 1990s, big businesses began filling old computer rooms with servers, and these rooms inside their office premises became known as data centers. As the Internet evolved, client-server applications became commonplace, and applications moved from desktops to centralized data center servers.
IT Team: Enterprises invested heavily in MPLS networks so that every branch could access the applications on data center servers over the WAN. Every organization had an IT team that included Unix/Linux admins, a NOC (Network Operations Center), a SOC (Security Operations Center), and DevOps (Development Operations) teams. When a user needed an application, traffic generally traveled from the branch location over the MPLS network to the headquarters data center. Every new project initiated within the organization required a great deal of time to arrange resources such as compute, storage, and memory, to obtain application licenses, and to handle budgeting and spending.
Planning ahead to decide how much compute and storage would be needed was a challenge, and it required major up-front investment. Provisioning was not an easy task, and it was common to buy servers and licenses that then sat underutilized. Allocating resources and managing time efficiently were two significant challenges of running an on-premises data center.
Public Cloud: According to Google, “A public cloud is an IT model where public cloud service providers make computing services, including compute and storage, develop-and-deploy environments, and applications, available on-demand to organizations and individuals over the public internet.” Using the public cloud offers many benefits: cost savings from shared resources, the freedom to access it from any internet-enabled location, and time savings since the provider takes care of the data centers.
The public cloud is hosted on the Internet and accessed remotely from all over the world. A private cloud, or on-premises data center, is accessible only to the organization's internal users.
Cloud servers became the norm instead of on-premises servers, as application and email servers moved to the cloud. Enterprises saw a gradual rise in Internet usage and a corresponding drop in MPLS VPN utilization. The Internet VPN, once just a backup connection, now carried the majority of the traffic.
Challenges: MPLS costs dropped drastically; what was once 100% dependency fell to 30-40%. Proprietary applications remained in the data center for security reasons. A branch user needed access to email servers in the public cloud over the Internet, and also needed access to specific applications hosted in the local data center over the MPLS network. To make all of this possible, routers at each branch needed many ACL policies written for selective routing, as the sketch below illustrates.
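To get a feel for the per-branch policy logic that had to be encoded as ACLs and route maps, here is a hedged Python sketch of selective routing; the prefixes and link names are hypothetical, chosen only to show the decision (data-center traffic stays on MPLS, everything else exits via the Internet):

```python
# A minimal sketch of the selective-routing policy each branch router
# had to encode in ACLs/route maps. Prefixes and link names are
# hypothetical, chosen only to illustrate the decision.
import ipaddress

POLICY = [
    # (destination prefix, link to use), most specific first
    (ipaddress.ip_network("10.10.0.0/16"), "mpls"),   # on-prem data center apps
    (ipaddress.ip_network("0.0.0.0/0"), "internet"),  # everything else (cloud, email)
]

def pick_link(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    for prefix, link in POLICY:
        if addr in prefix:
            return link
    return "drop"

print(pick_link("10.10.5.20"))   # -> mpls
print(pick_link("142.250.0.1"))  # -> internet
```

Multiply this by every application, every branch, and every change request, and the manual configuration burden becomes clear.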
A network architect would design the traffic flow, and someone would write all the configurations. NOC and SOC staff would then take those configurations and apply them to the routers. Every branch office needed a team on site to get its routers configured. Applying and validating the configurations needed to get the branches functioning optimally took a good amount of time, often about a month.
SDN Introduction
Networking devices such as routers operate with two planes: a data plane and a control plane. The data plane forwards packets/frames from one interface to another. The control plane does not handle end-user data packets; it is the brain of the network. Protocols like OSPF, IS-IS, EIGRP, Spanning Tree, and LDP are all part of the control plane. Traditionally, routers and switches housed both the control plane and the data plane on the device itself.
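The split between the two planes can be sketched in a few lines of Python. This is a conceptual toy, not any vendor's implementation: the control plane computes routes and installs them into a table, and the data plane only does lookups on that table (with hard-coded "learned" routes and non-overlapping prefixes, so no longest-prefix match is needed):

```python
# A conceptual toy showing the two planes inside one traditional router.
import ipaddress

FIB = {}  # forwarding table consulted by the data plane

def control_plane_update() -> None:
    """Control plane: runs protocols (OSPF, etc.) and installs routes.
    Here the 'learned' routes are hard-coded for illustration."""
    FIB[ipaddress.ip_network("192.168.1.0/24")] = "eth0"
    FIB[ipaddress.ip_network("192.168.2.0/24")] = "eth1"

def data_plane_forward(dst: str) -> str:
    """Data plane: no protocol logic, just a lookup on the installed table."""
    addr = ipaddress.ip_address(dst)
    for prefix, interface in FIB.items():
        if addr in prefix:
            return interface
    return "drop"

control_plane_update()
print(data_plane_forward("192.168.2.7"))  # -> eth1
```

SDN's move, in these terms, is to lift control_plane_update() off the device and run it centrally, leaving only the table lookup on the box.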
Software Defined Networking: Software-defined networking (SDN) is an approach in which we take the control plane away from each switch and assign it to a centralized unit called the SDN controller.
To take advantage of the centralized control plane, enterprises adopted SDN, particularly on the WAN side, which became known as SD-WAN. The SDN controller replicates the configuration to every router. But what happens if the controller managing the routers crashes? For greater redundancy, we add a second controller in a High Availability pair.
Each branch has a couple of routers, and there are several branches within a single region, resulting in a lot of controllers. A management plane handles the configuration (templating) and pushes it to all the controllers, and the controllers in turn push the configuration to all the routers. SDN automates this process of pushing configurations, as sketched below. A single pane of glass provides a management console with a unified display to view data from multiple sources.
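The templating idea is simple to sketch: the management plane keeps one template plus per-device variables, renders a configuration for each router, and hands the results to the controllers to push. The Python below is a hedged illustration; the device names, template fields, and addresses are invented, not from any real SDN product:

```python
# A minimal sketch of management-plane templating: one template,
# per-device variables, rendered configs pushed via a controller.
# Device names, fields, and addresses are invented for illustration.

TEMPLATE = "hostname {hostname}\ninterface wan0\n ip address {wan_ip}\n"

DEVICES = [
    {"hostname": "branch1-rtr", "wan_ip": "203.0.113.10/30"},
    {"hostname": "branch2-rtr", "wan_ip": "203.0.113.14/30"},
]

def render_configs() -> dict:
    """Management plane: render one config per device from the template."""
    return {d["hostname"]: TEMPLATE.format(**d) for d in DEVICES}

def controller_push(configs: dict) -> None:
    """Controller: push each rendered config down to its router."""
    for hostname, config in configs.items():
        print(f"--- pushing to {hostname} ---\n{config}")

controller_push(render_configs())
```

One template change can now fan out to every branch automatically, which is exactly the configuration overhead that the manual, per-router workflow could not escape.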
SD-WAN (Software Defined WAN)
At every branch, the router becomes partially "dumb," since the control plane no longer resides on it; the router simply needs SD-WAN support. Configuration overhead is drastically reduced with SD-WAN.
For example, Cisco's SD-WAN-capable vEdge routers receive their complete control and data policies from the vSmart controller.
vSmart controllers connect to vEdge routers over MPLS, the Internet, or 3G/4G/5G networks. vManage is the centralized management portal used to run and operate the software-defined network; it provides its graphical user interface as a single pane of glass. vBond serves as the gatekeeper for each vEdge router and tells it where the vSmart controller and vManage server are located. By default, vEdge routers know about vBond, which enables ZTP (Zero Touch Provisioning).
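The order of operations during onboarding can be sketched as a short simulation. This is a conceptual toy of the ZTP flow described above, not Cisco's actual protocol exchange: the authentication, certificate, and OMP steps are omitted, and all addresses and names are placeholders:

```python
# A conceptual toy of the ZTP flow: a new vEdge knows only vBond, which
# points it at vManage and vSmart. Addresses are placeholders, and the
# real exchange (authentication, certificates, OMP) is omitted.

VBOND = {"vmanage": "198.51.100.10", "vsmart": "198.51.100.20"}

def ztp_bringup(edge_name: str) -> None:
    print(f"{edge_name}: contacting vBond (factory-known address)")
    vmanage, vsmart = VBOND["vmanage"], VBOND["vsmart"]
    print(f"{edge_name}: vBond says vManage={vmanage}, vSmart={vsmart}")
    print(f"{edge_name}: pulling device template from vManage {vmanage}")
    print(f"{edge_name}: receiving control/data policies from vSmart {vsmart}")

ztp_bringup("branch3-vedge")
```

The practical payoff is that a new branch router can be shipped to site, cabled up by anyone, and brought fully into the overlay without an on-site network engineer.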
SD-WAN features:
SD-WAN Challenges:
Key Takeaways