1310nm.net

State of the Network – a response

So Network Instruments have published their latest State of the Network report, and it makes for some worrying reading for network managers. The report is compiled from questionnaires answered by 592 network professionals, covering geographically diverse locations and organisations of differing sizes. This time Network Instruments have concentrated on three elements: time-consuming troubleshooting, VoIP, and the adoption of 10Gb and MPLS technologies.

“Over the last two years, while IT staffs purchased new tools to optimize applications and traffic, the amount of time spent troubleshooting performance problems increased,” said Charles Thompson, manager of systems engineering for Network Instruments. “It’s clear that relying on new tools or increasing bandwidth doesn’t address the performance problems.” – Source

Network managers need to solve some of the problems created in today's networks, but it's not just about choosing a tool that works on the data side; you also need the reports and the ability to analyse that data to ensure the network keeps on delivering. It's no use having a swanky new network toy that speeds up the data if, when the crunch comes, it has totally obscured your ability to understand where today's particular performance issue lies.

We’ll take each of the elements in turn, starting with time-consuming troubleshooting.

The basic information I’ve distilled out of this so far indicates the following:

Network Instruments’ own interpretation of this data is as follows:

A partial explanation for the large amount of hours spent troubleshooting may be found in the relatively low number of respondents that used tools and technologies to monitor application performance and health.

This is a huge amount of time being sunk into fixing performance issues, which by their very nature are often impacted by things outside the direct control of the network manager; in my experience, attempting to replicate an issue often removes the original source of the problem. This is especially true where tactical optimisation technologies have been deployed, as these often mask the data being transferred across the network as optimised flows. These flows also behave differently from a single end-to-end TCP flow, and providing coherent linkage from client to server across all the involved flows is a major challenge in understanding application performance issues.

Some of the traditional site-centric application performance tools can monitor some of these issues, but only with respect to the location at which they are installed. Other solutions such as NetFlow cannot provide end-to-end management of the total data path, as they report on each of the various legs of the data’s journey rather than the end-to-end client/server route. Solutions are also available that provide total end-to-end application performance monitoring by installing agents at each end of the network, in the actual clients and servers. These provide the most accurate data, but distilling it into information that can actually be used to resolve problems is still, more often than not, quite a challenge.
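To illustrate the per-leg problem, here is a minimal sketch of what stitching per-leg flow records back into a single client-to-server view involves. The device names and the `(src, dst, bytes)` record shape are purely hypothetical, not a real NetFlow export format:

```python
def stitch_legs(legs):
    """Join per-leg records (src, dst, bytes) into end-to-end paths.

    Each leg reports only its own hop, so we chain hops whose
    destination matches another leg's source to recover the route.
    """
    next_hop = {src: (dst, nbytes) for src, dst, nbytes in legs}
    dsts = {dst for _, dst, _ in legs}
    paths = []
    # Path starts are sources that never appear as a destination.
    for start in next_hop:
        if start in dsts:
            continue
        path, node, total = [start], start, 0
        while node in next_hop:
            node, nbytes = next_hop[node]
            path.append(node)
            total += nbytes
        paths.append((path, total))
    return paths
```

Even this toy version assumes each hop's records can be matched unambiguously; in a real network, optimised or proxied flows break exactly that linkage, which is the point being made above.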

On the VoIP question, the respondents answer:

This implies that VoIP as a technology is here to stay, but that deployments haven’t yet matured enough to integrate the reporting necessary to give network professionals peace of mind on the issues their users are most likely to complain about. It also shows that although they may have bought into the technology, they haven’t found a suitable integrated monitoring platform that gives them information about both the voice and data traffic in their network.

On the network deployment side, there appears to be a bias towards the US marketplace, where 56% of respondents came from, and this appears to my mind to bias down the reported figures for MPLS deployment: 27% of organisations have deployed it, with 8% growth expected this year. The mix of respondents also affects the 10Gb figures, since most run smaller networks, with only 25% of respondents having a userbase of more than 2,500 users.

However, those that have deployed MPLS networks (particularly for voice) know that the advantages of “routing directly within the cloud” and any-to-any connectivity also have implications for the control of traffic flows between sites, since this normally means implementing points of control at each site, increasing cost and complexity.

To my mind these can all be solved with a single solution that provides the following things:

To my mind the Ipanema solution does all of these things:

Since the bandwidth given to the applications in use is constantly and dynamically adjusted by the system across the network, business traffic should always get the bandwidth it needs to function effectively, while less critical traffic is still delivered within the requirements of the business-based rules. So not only does this provide the needed visibility, it also provides the control network managers need to solve the problems they have:

All this leads to the Ipanema solution providing a business-optimised network.
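As a rough illustration of business-rule-driven bandwidth control, the sketch below shares a link among application classes by weight. This is a generic weighted-sharing toy, not Ipanema's actual algorithm; the class names, weights and demands are invented for the example:

```python
def allocate(link_kbps, classes):
    """Share a link among application classes by business weight.

    classes: list of (name, weight, demand_kbps) tuples.
    Returns a dict mapping class name to allocated kbps; each class
    gets bandwidth in proportion to its weight, capped at its demand,
    with spare capacity redistributed to still-hungry classes.
    """
    alloc = {name: 0.0 for name, _, _ in classes}
    remaining = float(link_kbps)
    unsatisfied = [c for c in classes if c[2] > 0]
    while remaining > 1e-6 and unsatisfied:
        total_w = sum(w for _, w, _ in unsatisfied)
        given, still_hungry = 0.0, []
        for name, weight, demand in unsatisfied:
            grant = min(remaining * weight / total_w, demand - alloc[name])
            alloc[name] += grant
            given += grant
            if demand - alloc[name] > 1e-6:
                still_hungry.append((name, weight, demand))
        remaining -= given
        unsatisfied = still_hungry
        if given < 1e-9:  # no progress possible; avoid looping forever
            break
    return alloc
```

On a 1,000 kbps link with VoIP weighted 3 (demanding 300 kbps), ERP weighted 2 and web weighted 1, VoIP's full demand is met and the leftover capacity flows to the lower-priority classes in weight proportion, which is the "business traffic first, everything else within the rules" behaviour described above.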
