As part of an investigation into several other things, my Internet research dredged up some work by Jim Gettys on Bufferbloat, which was related closely enough to my intended target to get stuck into it a little more.
It has previously been believed that adding buffers across a network to mitigate packet loss is a “good thing,” especially if we’re going to be dealing with VoIP and other protocols sensitive to packet loss. But in some cases, it isn’t. Bufferbloat is the phenomenon that occurs when the data source(s) can generate enough data to fill these (often deep) buffers, thereby destroying the linkage between TCP’s congestion-control mechanism and the actual data rate.
The congestion adds latency, increases packet loss, lowers network efficiency (all those retransmitted packets take up space that other data could use), and lowers throughput. These are all situations we experience on the Internet every day, visible in the sporadic performance when downloading large files.
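To see why a deep buffer translates directly into latency, consider the time a full buffer takes to drain over its outbound link: every newly arriving packet waits behind all of it. The figures below are illustrative, not from the article:

```python
# Hypothetical numbers: a 1 MB device buffer draining over a 1 Mbit/s uplink.
buffer_bytes = 1_000_000
link_rate_bps = 1_000_000  # link capacity in bits per second

# Worst-case queuing delay = time for a completely full buffer to drain.
# A packet arriving at the tail of the queue waits this long before it is sent.
delay_s = (buffer_bytes * 8) / link_rate_bps
print(f"worst-case queuing delay: {delay_s:.1f} s")  # → 8.0 s
```

Eight seconds of queuing delay is far beyond TCP's retransmission timers, which is why the sender retransmits data that is still sitting, unsent, in the buffer.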
There is a nice introduction to bufferbloat on the team’s web page, which explains the problem. Every buffer within a network is a potential problem: if data can be sent into the system faster than the buffer can empty, there is a knock-on impact on the performance of the flow (of whatever application or protocol). Whilst this does optimise the use of bandwidth in the network (i.e. it keeps it full), it doesn’t maximise the performance of the flows, and network efficiency drops because the network is carrying many packets that are retransmissions of the original (still buffered) data, which simply hasn’t yet arrived.
The solution in the past has been to implement technologies such as random early discard (RED), which determine whether the buffer is filling and selectively discard packets on flows that occupy more of the buffer than others. But this hasn’t always fixed the problem, and it imposes packet loss on the heavier flows, reducing their efficiency in the network.
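For readers unfamiliar with RED, the core idea is a drop probability that ramps up as the (smoothed) queue occupancy grows between two thresholds. This is a minimal sketch of that ramp; the threshold values and parameter names are illustrative assumptions, not taken from any particular implementation:

```python
import random

def red_drop(avg_q, min_th=5_000, max_th=15_000, max_p=0.1):
    """Classic RED drop decision for one arriving packet.

    avg_q is the smoothed (EWMA) queue occupancy in bytes, tracked
    elsewhere; min_th/max_th/max_p are illustrative tuning values.
    """
    if avg_q < min_th:
        return False                     # queue short: never drop
    if avg_q >= max_th:
        return True                      # queue long: always drop
    # Between the thresholds, drop with probability rising
    # linearly from 0 to max_p.
    p = max_p * (avg_q - min_th) / (max_th - min_th)
    return random.random() < p
```

The probabilistic discard is meant to signal congestion to senders *before* the buffer overflows; the article's point is that heavy flows, which occupy most of the buffer, absorb most of these drops.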
While the Bufferbloat project is looking at ways to minimise buffer size (and potentially self-tune the buffers within devices), the situation exists in every network because buffers are cheap (both in parts and logic) to implement. It may take a long time before router and switch suppliers (and the makers of all the other bits of comms kit) fix their equipment to adjust automatically. The Internet might be a challenge for a while, but it’s certainly possible to improve the enterprise environment.
So how do you manage the performance of a network when the feedback mechanisms are broken? I believe the best mechanism is to manage the performance of each flow against the available bandwidth at the edge of the network. Implementing that requires:
- A system with intelligence at the edge
- A way of understanding the available bandwidth, and how much of it is in use, at each point
- A mechanism to steer packet loss towards protocols that are less ‘important’
- A mechanism to determine whether there is bufferbloat in the network core
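The last requirement can be approximated without any help from the core: compare round-trip times on an idle path with round-trip times while the path is saturated. Latency that appears only under load is a strong hint that buffers are filling. The function name and thresholds below are my own illustrative choices, a sketch rather than a tool:

```python
def bloat_estimate(idle_rtt_ms, loaded_rtt_ms):
    """Rough bufferbloat indicator: the extra latency the path
    exhibits when its buffers fill under load.

    Thresholds (100 ms / 400 ms) are illustrative assumptions.
    """
    inflation = loaded_rtt_ms - idle_rtt_ms
    if inflation > 400:
        return "severe"
    if inflation > 100:
        return "moderate"
    return "ok"

# Example: 20 ms RTT idle, 520 ms RTT while uploading a large file.
print(bloat_estimate(20, 520))  # → severe
```

In practice the same measurement could feed the edge-intelligence system above, throttling flows before the core buffers fill rather than reporting after the fact.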