The network has undergone some significant changes in recent years with trends such as mobility, cloud and consumerisation taking hold and changing the way we think about managing IT infrastructure.
Users today can access applications, data and content from wherever they are, using whatever device they have. While this increases productivity, it also makes the network far more complex and requires more bandwidth to cope with demand.
As such, networks have been upgraded to ensure the responsiveness and availability of business-critical applications and, where 1Gb architectures were once sufficient, it is now common to see 10Gb connections.
However, network traffic is continuing to grow and, as our appetite for bandwidth shows no sign of letting up, increasing demand for high-speed, low-latency applications will necessitate a move to 40Gb networks, and even 100Gb.
To add further complexity, newer technologies such as software-defined networking (SDN) and network functions virtualisation (NFV) are becoming increasingly attractive propositions, thanks to their promise of simplifying operations and enabling a more agile IT organisation. However, they also add layers of network abstraction, which reduces visibility into the traffic crossing the physical layer.
In the midst of all of these changes, the way networking is implemented has remained fundamentally the same, even though network speeds and the protocols that keep the network operational have evolved.
The network today is still run as a set of interconnected devices, each of which operates as an individual entity with its own local control plane and forwarding (data) plane. The control plane determines where traffic is to be sent, and the data plane takes care of forwarding that traffic at very high speeds and progressively lower latency.
Both the control plane and the data plane today are co-resident on each networking device, making the large network a complex distributed computing problem that has to respond and react to dynamic changes at very high speeds and with large traffic volumes.
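To make the distinction concrete, the short sketch below models a single traditional device on which both planes live side by side: the control plane learns routes and compiles them into a forwarding table, and the data plane does nothing but look packets up in that table. It is an illustrative sketch only; the class names and table structure are assumptions made for the example, not any vendor's implementation.

    # Minimal sketch of a traditional router: control plane and data plane
    # co-resident on the same device (illustrative only).
    import ipaddress

    class Router:
        def __init__(self):
            self.routes = {}            # control-plane state: prefix -> next hop
            self.forwarding_table = {}  # data-plane state compiled from routes

        # Control plane: decide where traffic should be sent.
        def learn_route(self, prefix, next_hop):
            self.routes[ipaddress.ip_network(prefix)] = next_hop
            self._program_data_plane()

        def _program_data_plane(self):
            # A real device compiles routes into hardware; here it is just
            # a copy ordered so that the longest (most specific) prefix wins.
            self.forwarding_table = dict(
                sorted(self.routes.items(),
                       key=lambda kv: kv[0].prefixlen, reverse=True))

        # Data plane: per-packet lookup and forwarding at high speed.
        def forward(self, dst_ip):
            addr = ipaddress.ip_address(dst_ip)
            for prefix, next_hop in self.forwarding_table.items():
                if addr in prefix:
                    return next_hop
            return None  # no matching route: the packet is dropped

    r = Router()
    r.learn_route("10.1.0.0/16", "eth0")
    r.learn_route("10.1.2.0/24", "eth1")
    print(r.forward("10.1.2.7"))   # eth1 - the more specific prefix wins
    print(r.forward("10.1.9.9"))   # eth0

Every device in a traditional network repeats this pairing independently, which is exactly what makes the whole a distributed computing problem.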
To compound the challenge further, each networking component or attached device has only a partial view of what is going on in the network and piecing together the whole picture becomes a complex exercise.
Untrustworthy networks
Traditionally, network monitoring has been something of an afterthought when designing network environments. However, as 40Gb and 100Gb networks deliver an eye-watering number of data packets per second, it becomes more important than ever to track all network activity at full line rate.
As such, effective monitoring strategies and real-time troubleshooting will quickly become a growing concern for businesses that need to see what they are missing. While 10Gb monitoring, management and security tools are becoming more readily available, the same cannot be said for their 40Gb and 100Gb counterparts, and it simply isn’t possible to use 1Gb or 10Gb tools at these speeds.
As the flood of packets at 40Gb and 100Gb hits 10Gb and 1Gb outputs, the tools are instantly oversubscribed, and serious packet loss renders them incapable of meaningful analysis.
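A back-of-envelope calculation shows why. With minimum-size 64-byte Ethernet frames (84 bytes on the wire once preamble and inter-frame gap are counted), a link carries roughly 1.5 million packets per second for every gigabit of capacity, and a 40Gb feed squeezed into a 10Gb tool port is oversubscribed four to one. The figures below simply work through that arithmetic.

    # Back-of-envelope: worst-case packet rates, and the oversubscription
    # when a faster link is fed into a slower monitoring tool port.
    WIRE_BYTES = 64 + 20  # minimum Ethernet frame plus preamble and inter-frame gap

    def max_pps(link_gbps):
        """Worst-case packets per second at line rate with 64-byte frames."""
        return link_gbps * 1e9 / (WIRE_BYTES * 8)

    for speed in (1, 10, 40, 100):
        print(f"{speed:>3} Gb/s ~ {max_pps(speed) / 1e6:6.1f} million packets/s")

    # A 40Gb link feeding a 10Gb tool port is 4:1 oversubscribed, so in the
    # worst case 75% of packets are lost before the tool ever sees them.
    link, tool = 40, 10
    print(f"oversubscription {link / tool:.0f}:1, "
          f"worst-case loss {1 - tool / link:.0%}")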
At the same time, SDN changes the fundamental nature of networking by centralising the network's intelligence in a single controller. The controller maintains a centralised view of all the networking switches and programs each switch with the knowledge it needs to correctly forward traffic.
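As a rough illustration of that split, the sketch below separates the two roles: a controller object holds the global view and pushes simple match-action forwarding entries down, while the switches do nothing except apply whatever entries they have been given. The names and the rule format are invented for the example and are not meant to mirror OpenFlow or any particular controller's API.

    # Illustrative SDN-style split: one controller holds the global view and
    # programs otherwise 'dumb' switches with match-action forwarding entries.

    class Switch:
        """Data plane only: applies the rules it has been given."""
        def __init__(self, name):
            self.name = name
            self.flow_table = []  # list of (match_dst, out_port) entries

        def install(self, match_dst, out_port):
            self.flow_table.append((match_dst, out_port))

        def forward(self, dst):
            for match_dst, out_port in self.flow_table:
                if dst == match_dst:
                    return out_port
            return None  # table miss: in practice, punted to the controller

    class Controller:
        """Control plane: a centralised view of every switch in the network."""
        def __init__(self):
            self.switches = {}

        def register(self, switch):
            self.switches[switch.name] = switch

        def program_path(self, dst, hops):
            # hops: [(switch_name, out_port), ...] along the chosen path
            for name, port in hops:
                self.switches[name].install(dst, port)

    ctrl = Controller()
    s1, s2 = Switch("s1"), Switch("s2")
    ctrl.register(s1)
    ctrl.register(s2)
    ctrl.program_path("10.0.0.5", [("s1", 2), ("s2", 1)])
    print(s1.forward("10.0.0.5"), s2.forward("10.0.0.5"))  # 2 1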
While this centralised model simplifies operations in dynamic environments, it also increases the need for tools that monitor security, performance and user experience: separating the data and control planes can, for example, lead to synchronisation issues between the components.
While many may anticipate that visibility would fall under the remit of the SDN controller, relying solely on the controller to provide visibility is a dangerous and incomplete solution. A controller may believe everything is fine, but if any of the components of the underlying infrastructure such as the switches, routers or virtual switches is misbehaving, the controller may be unaware and report a healthy infrastructure.
Essentially, the combination of faster network speeds and SDN environments creates a perfect storm for the network, in which the information it reports cannot be relied upon to be wholly accurate. What is needed is a solution that delivers traffic pervasively, from every segment of the network, to a centralised set of tools responsible for monitoring and correlating performance, security and user experience.
Increasing visibility
The key to creating improved monitoring capabilities is to build in a tool that filters, aggregates, consolidates, and replicates data to the existing monitoring tools. This provides the ability to combine thousands of different rules in a logical order to achieve the desired packet distribution – for instance, sending only packets matching a user-defined pattern or perhaps discarding all traffic from a particular IP address.
Through this approach a user can reduce the amount of data being sent to a tool so that it receives only the data it needs to see, rather than processing vast amounts of unnecessary information. What is more, this traffic-based approach to analysis ensures that nothing is missed as, even in SDN environments, it examines the actual packets – not just the initial flow set-up.
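To make the idea of chaining filter, drop and replicate rules more tangible, here is a minimal sketch of such a rule engine in software. The rule types, field names and tool names are invented purely for illustration; a production packet-handling device would apply comparable logic in hardware at full line rate.

    # Minimal sketch of a visibility rule chain: drop what no tool needs,
    # and replicate the rest only to the tools that need to see it.

    class Packet:
        def __init__(self, src_ip, dst_port, payload=b""):
            self.src_ip, self.dst_port, self.payload = src_ip, dst_port, payload

    tools = {"ids": [], "apm": []}   # each tool sees only what it needs

    def drop_from_ip(bad_ip):
        """Discard all traffic from a particular IP address."""
        return lambda pkt: None if pkt.src_ip == bad_ip else pkt

    def match_pattern(pattern, tool):
        """Send packets matching a user-defined pattern to a given tool."""
        def rule(pkt):
            if pattern in pkt.payload:
                tools[tool].append(pkt)
            return pkt
        return rule

    def send_port(port, tool):
        """Replicate traffic for a given destination port to a tool."""
        def rule(pkt):
            if pkt.dst_port == port:
                tools[tool].append(pkt)
            return pkt
        return rule

    rules = [                          # rules are applied in a logical order
        drop_from_ip("203.0.113.9"),
        match_pattern(b"login", "ids"),
        send_port(443, "apm"),
    ]

    for pkt in (Packet("203.0.113.9", 80), Packet("198.51.100.4", 443, b"login")):
        for rule in rules:
            pkt = rule(pkt)
            if pkt is None:            # dropped: no downstream tool sees it
                break

    print(len(tools["ids"]), len(tools["apm"]))  # 1 1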
The networks of the future will offer organisations many benefits, but they will also open up new security risks, new troubleshooting complexities and more demanding application management.
Unless there is a way to ensure the continuity of securing, monitoring and managing the infrastructure, success will be limited. Pervasive visibility will allow businesses to see what they would otherwise miss, and future-proof their networks for years to come.
Sourced from Trevor Dearing, Gigamon