Software-defined networking (SDN) has been widely reported in the media, so most IT departments are familiar with the term, with why it is billed as the next step in the evolution of networks, and with the promise of a networking future freed from the shackles of labour-intensive manual configuration.
Still, there is some debate over how much IT really understands, what the real impacts of an SDN deployment are on the overall IT infrastructure, and what is just hype.
So what is software-defined networking? Setting aside the intricacies that are difficult to articulate without a thorough grasp of the technology, and looking at it in the simplest terms, a software-defined network is one that is intelligent, programmable and automated. To understand what this means in real-world application, it is best to compare it with how traditional networks work.
The way any LAN, WAN or data centre network manages and directs the data it carries is determined by how each networking device is configured. These settings and rules determine where data goes, how quickly it flows, how it is checked against security policies, which data is allowed through, and which is blocked.
The challenge at present is that network managers are required to configure each device individually, often by physical adjustment. Across a large estate this can mean several hundred routers, switches and ports, so configuring a network optimally becomes a very laborious process.
The reason each device needs manual configuration lies in the way they are constructed. Each router and switch consists of multiple planes or layers. These include the data plane through which data packets are transferred and the control plane, which controls how data is handled and where network applications are embedded.
As such, network intelligence is dispersed across all devices with no central point of control.
With SDN, the opposite is true: the data, control and application planes are separated. The corollary is that intelligence can be kept discrete from the packet-forwarding engine and moved to a central point.
With intelligence centralised, devices become programmable: routers and switches are configured by a central software programme called a controller.
By making the network a programmable entity, each individual device can be instantly and automatically configured to enable its own optimal provisioning. The result is that a software-defined network is easier to implement; simpler to control and manage, with less human interaction; and is far more dynamic in accommodating the constantly changing requirements of today’s ICT environments, particularly in data centres.
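The shift from per-device configuration to central programmability can be sketched in a few lines of code. This is a toy illustration only — the class and field names below are invented, and real controllers speak protocols such as OpenFlow rather than Python method calls — but it shows the essential idea: policy lives in one place, and every device is provisioned from it automatically.

```python
# Illustrative sketch only: a toy "controller" that pushes configuration to
# every device from one central policy, instead of each device being
# configured by hand. All names here are invented for illustration.

class Switch:
    """A mock network device that accepts rules from a controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []          # rules installed by the controller

    def install_rule(self, rule):
        self.flow_table.append(rule)

class Controller:
    """Central point of intelligence: holds policy, programs all devices."""
    def __init__(self, devices):
        self.devices = devices

    def apply_policy(self, rules):
        # One change here reconfigures the whole network automatically.
        for device in self.devices:
            for rule in rules:
                device.install_rule(rule)

switches = [Switch(f"sw{i}") for i in range(3)]
ctrl = Controller(switches)
ctrl.apply_policy([
    {"match": {"dst_port": 80}, "action": "forward"},
    {"match": {"dst_port": 23}, "action": "drop"},
])

for sw in switches:
    print(sw.name, len(sw.flow_table))   # every device carries both rules
```

Contrast this with the traditional model, where the same two rules would be entered by hand on each of the three switches — and on each of several hundred in a large estate.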
Nonetheless, uncertainty remains around how this will play out in the market, with promotional and antagonistic forces at play.
Sweeping technology changes rarely come without their fair share of market politics and SDN is no different.
The vast majority of networking equipment already installed isn’t compatible with SDN, so a ‘rip and replace’ approach would be required by those wishing to make the transition. It goes without saying that this would be an expensive, risky and disruptive process.
The headaches don’t stop there though. There are currently three different approaches to the implementation of a software-defined network, which adds a further layer of complexity:
The first is to use an industry standard protocol such as OpenFlow. This standard was developed by the Open Networking Foundation (ONF), an organisation whose members include Verizon, Deutsche Telekom, NTT, Google, Microsoft, Facebook and Yahoo. As an open standard, any OpenFlow-enabled controller can use the protocol in conjunction with any OpenFlow-enabled networking device, regardless of manufacturer. Due to the greater freedom and flexibility this approach affords with regard to network design, it has been championed by smaller vendors and end-user organisations alike.
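At its core, OpenFlow gives the controller a match-action abstraction: each switch holds a priority-ordered flow table, and a packet is handled by the highest-priority entry whose match fields it satisfies, with a low-priority "table-miss" entry catching everything else. The sketch below models that lookup logic with Python dicts; it is a conceptual simplification, not the binary wire protocol the standard actually defines.

```python
# Simplified model of OpenFlow's match-action abstraction. A flow table is a
# priority-ordered list of entries; a packet is handled by the
# highest-priority entry whose match fields it satisfies. Conceptual only —
# the real protocol defines binary messages, not Python dicts.

flow_table = [
    {"priority": 200, "match": {"ip_dst": "10.0.0.5", "tcp_dst": 80},
     "action": "output:2"},
    {"priority": 100, "match": {"tcp_dst": 23}, "action": "drop"},
    {"priority": 0,   "match": {}, "action": "controller"},  # table-miss entry
]

def lookup(packet):
    """Return the action of the best-matching flow entry for a packet."""
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]

print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))   # output:2
print(lookup({"ip_dst": "10.0.0.9", "tcp_dst": 23}))   # drop
print(lookup({"ip_dst": "10.0.0.9", "tcp_dst": 443}))  # controller
```

The table-miss entry is what lets the controller stay in the loop: traffic that no installed rule covers is sent up to the controller, which can then decide and install a new rule.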
The second is vendor proprietary APIs, which allow external tools, software and applications to communicate with the infrastructure. These APIs expose the full functionality of a vendor’s devices, but are limited to devices from that specific vendor. While flexibility is limited, they represent a potentially advantageous option for organisations operating in a single-vendor environment.
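In practice, vendor controller APIs are typically REST-style: automation tools exchange JSON over HTTP with the controller rather than logging into each device. The sketch below builds (but does not send) such a request. The endpoint path, payload fields and token are all hypothetical, invented for illustration — the real URL scheme and schema come from the specific vendor's API reference.

```python
# Hypothetical sketch of driving a vendor's REST API from an automation
# script. The endpoint, payload fields and token are invented for
# illustration; consult the vendor's API documentation for the real ones.

import json
from urllib.request import Request

def build_vlan_request(controller_url, token, switch_id, vlan_id):
    """Build (but do not send) an HTTP request that creates a VLAN."""
    payload = json.dumps({"switch": switch_id, "vlan": vlan_id}).encode()
    return Request(
        url=f"{controller_url}/api/v1/vlans",   # invented endpoint
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )

req = build_vlan_request("https://controller.example", "TOKEN",
                         "sw-edge-01", 120)
print(req.get_method(), req.full_url)
```

The design point is that the same script can provision a VLAN on any device the controller manages — but only if every device speaks that vendor's API, which is the single-vendor trade-off described above.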
The third is to implement virtual network overlay. This is software that functions on top of the physical network, while also providing programmability and central control. This is of particular use in data centre environments where each network can have different configurations and can be moved anywhere within a data centre, or between data centres, while the virtual view remains the same. Virtual networks function in the same way as virtual server environments, providing flexibility and instant scalability.
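The mechanism behind an overlay is encapsulation: tenant traffic is wrapped in an outer header carrying a virtual network identifier (as VXLAN does with its 24-bit VNI), so many isolated virtual networks can share one physical underlay. The sketch below models this idea with dicts; it is conceptual only, not a wire-format implementation.

```python
# Conceptual sketch of a network overlay: a tenant frame is wrapped in an
# outer header with a virtual network identifier (VNI), travels across the
# physical underlay, and is unwrapped at the far end. Isolation comes from
# the VNI: endpoints only accept frames for their own virtual network.

def encapsulate(frame, vni, underlay_src, underlay_dst):
    """Wrap a tenant frame for transport across the physical underlay."""
    return {"outer_src": underlay_src, "outer_dst": underlay_dst,
            "vni": vni, "inner": frame}

def decapsulate(packet, expected_vni):
    """Unwrap at the far end, rejecting frames from other tenant networks."""
    if packet["vni"] != expected_vni:
        return None          # belongs to a different virtual network
    return packet["inner"]

frame = {"src": "vm-a", "dst": "vm-b", "data": "hello"}
wire = encapsulate(frame, vni=5001,
                   underlay_src="host1", underlay_dst="host2")
assert decapsulate(wire, expected_vni=5001) == frame
```

Because the virtual network is defined entirely by software state (the VNI and its endpoints), a workload can move to another host or another data centre and keep the same virtual view, exactly as the overlay approach promises.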
With this in mind, how should organisations proceed when deploying software-defined networks?
The dynamic nature of technological progress understandably leaves end-user organisations struggling to evaluate the best way forward for them. Perhaps the greatest piece of advice is to remember: every organisation is unique, and so is every network.
An information superhighway for one business may be a cul-de-sac for another. Before all else, it is of utmost importance to understand the technical implications of software-defined networking and the benefits it may hold for the business.
Enterprises need to gain a clear view of their network’s current state of readiness, identify a desired ‘to-be’ state, and then define a clear roadmap for making the transition over time.
Sourced from Gary Middleton, Dimension Data