State of the Network – a response

So Network Instruments have published their latest State of the Network report, and it makes for some worrying reading for network managers. The report is compiled from questionnaires answered by 592 network professionals, covering geographically diverse locations and organisations with differing numbers of users. This time Network Instruments have concentrated on three elements: time-consuming troubleshooting, VoIP, and the adoption of 10Gb and MPLS technologies.

“Over the last two years, while IT staffs purchased new tools to optimize applications and traffic, the amount of time spent troubleshooting performance problems increased,” said Charles Thompson, manager of systems engineering for Network Instruments. “It’s clear that relying on new tools or increasing bandwidth doesn’t address the performance problems.” – Source

Network managers need to solve some of the problems created in today’s networks, but it’s not just about choosing a tool or solution that works on the data side; you also need the reports and the ability to analyse the data to ensure that the network keeps on delivering. It’s no use having a swanky new network toy that speeds up the data if, when the crunch comes, it has totally obscured your ability to understand where today’s particular performance issue lies.

We’ll take each of the elements in turn, starting with time-consuming troubleshooting.

The basic information I’ve distilled out of this so far indicates the following:

  • 75% of respondents claim “identifying the source of the problem” as the key concern in troubleshooting
  • 27% spend between 26 and 50 days a year replicating network issues
  • 41% spend up to 25 days a year doing the same
  • 17% spend more than 75 days a year determining the source of performance issues
  • 33% see bandwidth consumption as their biggest challenge

Network Instruments’ own interpretation of this data is as follows:

“A partial explanation for the large number of hours spent troubleshooting may be found in the relatively low number of respondents that used tools and technologies to monitor application performance and health.”

This is a huge amount of time being sunk into fixing performance issues, which by their very nature are often affected by things outside the direct control of the network manager; in my experience, the act of replicating an issue often removes the original source of the problem. This is particularly true where tactical optimisation technologies have been deployed, as these often mask the data being transferred across the network as optimised flows. These flows also behave differently from a single end-to-end TCP flow, and providing coherent linkage of all the involved flows from client to server is a major challenge in understanding application performance issues.

Some of the traditional site-centric application performance tools can monitor some of the issues being seen, but only with respect to the location at which they are installed. Other solutions such as NetFlow cannot provide end-to-end management of the total data path, as they report on each of the various legs of the data’s journey rather than the end-to-end client/server route. Solutions also exist that provide total end-to-end application performance monitoring by installing agents at each end of the network, in the actual clients and servers. This provides the most accurate data, but distilling it into useful information that can be used to resolve problems is still, more often than not, quite a challenge.

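To make that distinction concrete, here is a minimal sketch (Python is my choice purely for illustration) of what the agent-based approach measures: response time from the client’s own vantage point, the one figure that per-leg flow records can never directly give you. The host and port are placeholders.

```python
import socket
import time

def end_to_end_latency(host: str, port: int, samples: int = 5) -> float:
    """Average TCP connect time from this client to the server, in seconds.

    Because it is measured at the client, it captures the whole data path
    (every hop, queue and middlebox) -- unlike per-leg flow records, which
    must be stitched together and still miss device-internal delays.
    """
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)
```

A real agent would of course time the application transaction itself rather than just the TCP handshake, but the vantage point is the crucial part.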
On the VoIP question, the respondents answer:

  • 55% have already implemented VoIP solutions
  • 48% worry about VoIP call quality.
  • 43% worry about VoIP’s impact on other applications.

This implies that VoIP as a technology is here to stay, but that deployments haven’t yet matured enough to integrate the reporting necessary for network professionals’ peace of mind on the issues their users are most likely to complain about. It also shows that although they may have bought into the technology, they haven’t found a suitable integrated monitoring platform that gives them information about both the voice and data traffic in their network.

On the network deployment side, there appears to be a bias towards the US marketplace, where 56% of respondents came from, and to my mind this skews the reported figures for MPLS deployment downwards: 27% of organisations have deployed it, with 8% growth expected this year. The mix of respondents also affects the 10Gb figures, since most run smaller networks, with only 25% of respondents having a userbase of more than 2,500 users.

However, those that have deployed MPLS networks (particularly for voice) know that the advantages of “routing directly within the cloud” and any-to-any connectivity also have implications for the control of traffic flows between sites, since this normally means implementing points of control at each site, increasing cost and complexity.

To my mind these can all be solved with a single solution that provides the following things:

  • Visibility of all network traffic flows, including performance parameters
  • Understanding of VoIP flows and needs
  • Control and management of traffic flows based on business objectives
  • Reporting application performance simply, on a traffic-light scale: Green, Yellow, Red (or Relax, Worry, Panic)

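That traffic-light idea is simple enough to sketch. The thresholds below are purely illustrative assumptions of mine, not taken from any vendor’s product:

```python
def traffic_light(score: float,
                  worry_below: float = 70.0,
                  panic_below: float = 40.0) -> str:
    """Map a 0-100 application quality score to Green/Yellow/Red.

    Thresholds are illustrative: in practice they would be tuned per
    application and per business objective, not hard-coded globally.
    """
    if score < panic_below:
        return "Red"     # Panic
    if score < worry_below:
        return "Yellow"  # Worry
    return "Green"       # Relax
```

A helpdesk view can then roll the per-application colours up into a single worst-case colour per site, which is about the level of detail a first-line operator actually wants.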
To my mind the Ipanema solution does all of these things:

  • Ensures the delivery of business critical applications via an objective based ruleset
  • Understands VoIP traffic, directly measures voice call quality with MOS scoring, and provides AQS (application quality score) as an equivalent score for application performance.
  • Provides network-wide visibility of traffic flows with instant (helpdesk) and historical (management) reporting

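For context on the MOS scoring mentioned above: passive VoIP monitors typically compute an E-model R-factor from observed loss, delay and jitter, then convert it to an estimated MOS using the standard ITU-T G.107 mapping. The sketch below shows only that public conversion, not Ipanema’s internal implementation:

```python
def r_to_mos(r: float) -> float:
    """Convert an E-model R-factor to an estimated MOS (ITU-T G.107).

    R <= 0 maps to the worst score (1.0); R >= 100 to the best
    achievable estimate (4.5); in between, the standard cubic applies.
    """
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6
```

An R-factor around 93 (a clean G.711 call) maps to a MOS of roughly 4.4, while scores falling towards R = 50 (MOS around 2.6) are the calls that generate the user complaints the survey respondents worry about.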
Since the system constantly and dynamically adjusts the bandwidth given to applications across the network, business traffic should always get the bandwidth it needs to function effectively, while less critical traffic is still delivered within the requirements of the business-based rules. So this provides not only the needed visibility but also the control network managers need to solve the problems they have:

  • factual information on the usage and performance of the applications transiting the network
  • proactive tools for anticipating incidents and easily managing changes
  • accurate real-time information on application performance across the network
  • protection of business critical applications by the system even in congested network states
  • control of network bandwidth based on the business requirements
  • assurance of VoIP or video conference call quality, without impacting business critical applications

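To illustrate the kind of objective-based bandwidth control described above, here is a toy weighted fair-share sketch of my own (not Ipanema’s actual algorithm): each application class gets link capacity in proportion to a business-priority weight, capped at what it actually demands, with any spare capacity recycled to the classes still short.

```python
def weighted_allocate(capacity: float,
                      demands: dict[str, float],
                      weights: dict[str, float]) -> dict[str, float]:
    """Toy weighted fair-share: split `capacity` among application classes
    by priority weight, never granting a class more than its demand, and
    redistributing leftover capacity to classes that still want more.
    """
    alloc = {app: 0.0 for app in demands}
    unsatisfied = set(demands)
    remaining = capacity
    while unsatisfied and remaining > 1e-9:
        total_weight = sum(weights[app] for app in unsatisfied)
        leftover = 0.0
        for app in list(unsatisfied):
            share = remaining * weights[app] / total_weight
            grant = min(share, demands[app] - alloc[app])
            alloc[app] += grant
            leftover += share - grant
            if alloc[app] >= demands[app] - 1e-9:
                unsatisfied.discard(app)
        remaining = leftover
    return alloc
```

On a congested link, a high-weight VoIP class with a modest demand gets everything it asks for, while lower-weight classes split what remains in proportion to their weights, which is the behaviour the bullet points above describe.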
All this leads to the Ipanema solution providing a business optimized network.

John Dixon

John Dixon is the Principal Consultant of thirteen-ten nanometre networks Ltd, based in Wiltshire, United Kingdom. He has a wide range of experience, (including, but not limited to) operating, designing and optimizing systems and networks for customers from global to domestic in scale. He has worked with many international brands to implement both data centres and wide-area networks across a range of industries. He is currently supporting a major SD-WAN vendor on the implementation of an environment supporting a major global fast-food chain.
