
SD-WAN – or should we say WAN 4.0?

Is the current buzzword “SD-WAN” really new, or is it actually a progression of the WAN technologies we all know and love? Is it going to be a revolution or an evolution? Read on to see how SD-WAN becomes the WAN of the future while building on directions established in the past.

WAN 1.0 – Fixed Line Circuits – No Virtualisation

In the beginning, computers were large monolithic devices that communicated only within the same room. WANs (Wide Area Networks) provided access to these central resources from other sites across larger distances, reducing the cost per user. In making these networks resilient and survivable, the Advanced Research Projects Agency (ARPA) of the US Department of Defense created the ARPAnet, which grew into the Internet of today.

The central computing resource was connected radially to the remote sites, and this WAN topology created hub, or star, networks. This topology is still the most common way of connecting sites to a central data centre.

The challenge with these early WAN deployments was that they were incredibly expensive, especially over international distances. You paid for all of the resources, including the bandwidth you didn’t use. Resilience was expensive to add and difficult to use effectively, as load-sharing was hard to design into the network; the best and most efficient design led to a dual-star topology, with dual hubs deployed in the central location.

WAN 2.0 – Virtualised Bandwidth (e.g. Frame-Relay)

The first step of network virtualisation reduced long-distance bandwidth costs by allowing you to share the expensive portion with others. You now paid for a shorter, cheaper link to a local point of a provider’s network (their Point of Presence, or PoP). From there, you shared resources with other customers to cross the provider’s network and reach a PoP close to your central location. The use of “virtual circuits” meant an organisation could run multiple logical links across the one physical connection to the provider network, allowing mesh topologies. Relaying Layer 2 frames between nodes in the network gave Frame-Relay its name.

This allowed many customers to pay for higher shared bandwidth across longer portions of the provider’s infrastructure, reducing the cost of bandwidth. Capacity was guaranteed at a certain level (the CIR, or Committed Information Rate), and burst capability was provided using the “spare” capacity not used by others in the network. The provider’s network provided the resilience, leaving only the local tails at risk. Frame-Relay also attempted to provide prioritisation, with extensions such as FRF.12 aiming to deliver VoIP at acceptable quality levels.
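
To illustrate the CIR and burst idea, here is a minimal Python sketch of a token-bucket style classifier. The frame sizes and allowances are invented purely for illustration, not taken from any real Frame-Relay service:

    # Illustrative sketch of CIR plus burst behaviour in one measurement interval.
    # Bc (committed) and Be (excess) allowances below are assumed example values.

    def classify_frames(frames_bits, bc_bits, be_bits):
        """Classify each frame against the committed (Bc) and excess (Be) allowances."""
        committed_left = bc_bits      # guaranteed allowance (roughly CIR x interval)
        excess_left = be_bits         # "spare" capacity shared with other customers
        verdicts = []
        for size in frames_bits:
            if size <= committed_left:
                committed_left -= size
                verdicts.append((size, "committed"))
            elif size <= excess_left:
                excess_left -= size
                verdicts.append((size, "burst"))
            else:
                verdicts.append((size, "discard-eligible"))
        return verdicts

    # Example: 200 kbit committed and 100 kbit excess allowance per interval.
    for size, verdict in classify_frames([96_000, 96_000, 96_000, 48_000],
                                         bc_bits=200_000, be_bits=100_000):
        print(f"{size // 1000:3d} kbit -> {verdict}")

The first two frames fit inside the committed allowance, the third rides on spare capacity as burst, and the last is marked discard-eligible once both allowances are exhausted.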

The shared bandwidth model had actually started some time earlier, with X.25 networks. The growth of Frame-Relay made it a viable alternative for many, as it was faster, avoiding the error checking and retransmission on every hop through the network that X.25 required.

WAN 3.0 – Virtualised Routing (e.g. MPLS)

Voice over IP (VoIP) use in the late 1990s raised issues with traffic routing. Frame-Relay networks using virtual circuits relied on data flowing over fixed paths between sites, and on circuits to and from the customer sites to actually pass traffic from one location to another. This routing via intermediate sites added delay, and jitter from other traffic reduced VoIP call quality. The cost of the WAN also increased, as extra bandwidth was consumed to and from the hub locations.

MPLS (Multiprotocol Label Switching) moved the routing into the network. By deciding at the first network PoP where to route the traffic, MPLS avoided routing via geographically distant hubs. This keeps traffic within countries or regions as much as possible, minimising delay. Avoiding queuing in intermediate customer routers reduces packet loss and jitter, improving voice call quality.

MPLS provides Quality of Service, in which VoIP and other critical traffic are prioritised over the rest of the traffic in the network. Managed services now provide Voice, Video, and three or more data classes to prioritise traffic.
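
The effect of those classes can be sketched in a few lines of Python. This is a deliberately simplified strict-priority scheduler, not an MPLS or vendor QoS implementation; the class names and their order are assumptions for illustration:

    # Simplified strict-priority scheduling across traffic classes.
    # Real MPLS QoS adds per-class bandwidth guarantees and policing;
    # this sketch only shows that voice is always served before data.

    from collections import deque

    # Highest priority first: an assumed class model, not a vendor's.
    CLASS_ORDER = ["voice", "video", "data-1", "data-2", "data-3"]

    queues = {cls: deque() for cls in CLASS_ORDER}

    def enqueue(cls, packet):
        queues[cls].append(packet)

    def dequeue_next():
        """Serve the highest-priority non-empty queue."""
        for cls in CLASS_ORDER:
            if queues[cls]:
                return cls, queues[cls].popleft()
        return None, None

    # Example: data arrives first, but a voice packet still goes out first.
    enqueue("data-2", "backup chunk")
    enqueue("voice", "RTP frame")
    print(dequeue_next())   # ('voice', 'RTP frame')
    print(dequeue_next())   # ('data-2', 'backup chunk')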

WAN 4.0 – Virtualised Network using SD-WAN

SD-WAN now improves on the virtualisation of the network that started with Frame-Relay and moved on with MPLS. Virtualisation integrates multiple networks into one combined logical network, and SD-WAN creates the virtual network that links the sites. These links are made using one or more physical, logical or temporary connections, such as MPLS, Internet VPN or LTE.
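
As a rough illustration of “one logical network over several transports”, the sketch below keeps a site-to-site overlay up by picking whichever underlay is currently usable; the link names and figures are invented for the example:

    # Illustrative only: one logical site-to-site link carried over whichever
    # underlay transports are currently usable. Names and figures are assumptions.

    underlays = {
        "mpls":     {"up": True,  "latency_ms": 30},
        "inet-vpn": {"up": True,  "latency_ms": 45},
        "lte":      {"up": False, "latency_ms": 80},   # e.g. a temporary 4G backup
    }

    def active_path():
        """Pick the lowest-latency transport that is currently up."""
        usable = {name: m for name, m in underlays.items() if m["up"]}
        return min(usable, key=lambda name: usable[name]["latency_ms"]) if usable else None

    print(active_path())             # mpls
    underlays["mpls"]["up"] = False  # the MPLS circuit fails...
    print(active_path())             # ...traffic shifts to inet-vpn, the overlay stays up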

Software APIs configure SD-WAN. Business requirements drive application prioritisation and routing via alternate paths. Transfer times for files and transactions are reduced using WAN Optimisation, making effective use of all bandwidth. Firewalls deliver local internet breakouts and security. Bandwidth and queue allocations can be changed quickly. Integrated monitoring and measurement shows events and status in the network, and the reasons for specific path selections. IPv6 is supported as standard for the internal and external networks, alongside IPv4. Configuration across multiple devices is accomplished via a central interface, using just a few clicks. Not all of these are available in every platform at the moment, but across the marketplace, every one of these features exists.
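
To make “configured via software APIs” concrete, here is a hedged sketch of what pushing an application policy to a controller could look like. The controller address, endpoint, token and JSON fields are entirely hypothetical and will differ for every vendor:

    # Hypothetical example of pushing an application policy to an SD-WAN
    # controller over a REST API. The URL, token and JSON schema are invented
    # for illustration; every vendor exposes its own API shape.

    import json
    from urllib import request

    CONTROLLER = "https://sdwan-controller.example.net"   # placeholder address
    TOKEN = "example-api-token"                           # placeholder credential

    policy = {
        "name": "voip-priority",
        "match": {"application": "voip"},
        "action": {"class": "voice", "preferred_paths": ["mpls", "inet-vpn"]},
    }

    req = request.Request(
        f"{CONTROLLER}/api/v1/app-policies",              # hypothetical endpoint
        data=json.dumps(policy).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )

    # One call from a central point, rather than a per-device CLI session:
    # with request.urlopen(req) as resp:
    #     print(resp.status, resp.read().decode())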

Is SD-WAN in your future? Absolutely!

SD-WAN drives down WAN cost in two ways. First, using cheaper Internet bandwidth alongside more expensive MPLS bandwidth reduces the cost per megabit. Second, SD-WAN creates a reactive, programmable network, reducing the time needed to respond to change. This lowers costs both for networks provided as a managed service and for over-the-top, do-it-yourself deployments.
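
As a back-of-the-envelope illustration of the first point, the figures below are invented rather than market prices, and only show the shape of the saving when part of the capacity moves onto Internet bandwidth:

    # Invented per-Mbit monthly prices, purely to illustrate the shape of the
    # saving; real pricing varies enormously by country and provider.

    mpls_price_per_mbit = 20.0      # assumed monthly price per Mbit/s
    inet_price_per_mbit = 4.0       # assumed monthly price per Mbit/s
    site_bandwidth_mbit = 100

    mpls_only = site_bandwidth_mbit * mpls_price_per_mbit

    # Hybrid: keep a smaller MPLS circuit for critical traffic, add Internet,
    # and end up with more total capacity for less money.
    hybrid = 20 * mpls_price_per_mbit + 100 * inet_price_per_mbit

    print(f"MPLS only: {mpls_only:.0f} per site per month")    # 2000
    print(f"Hybrid:    {hybrid:.0f} per site per month")       # 800
    print(f"Saving:    {100 * (1 - hybrid / mpls_only):.0f}%") # 60%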

Whilst SD-WAN might not be ready to meet all of your requirements today, it should provide most of what you need. People will make the switch when the technology reaches an acceptable price; consequently, over the next two years, most people will switch to an SD-WAN. Will you be one of them?
