This is a static archive of our old Q&A Site. Please post any new questions and answers at

Rate limit causes excess retransmissions and duplicate acks


We recently had a customer complaining that performance from their remote site to our home office here was bad. Investigation with Iperf and Wireshark showed a massive number of retransmissions and duplicate ACKs. In chasing the circuit back to the home office we found out that the circuit provider had put a 15 Mbps rate limit on their Foundry router in between us. When they took off the limit the circuit tested clean.

I've heard of some network devices simply dropping packets when the limit is reached, which, I guess, is why we saw this result. I have a hard time believing that all rate-limiting devices are this crude. Are there other methods, or is this pretty much what I'm going to see in terms of symptoms?

asked 01 Apr '11, 08:19

dribniff's gravatar image

accept rate: 0%

Well, most devices do drop packets. How you drop those packets is important. Just tail dropping can cause issues like this. Some type of early detection coupled with dropping would be better. If you limit the throughput to 15 Mbps, do you see packet drops?

(01 Apr '11, 17:51) hansangb
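The tail-drop behavior hansangb is describing can be sketched with a toy model (illustrative only, not any vendor's implementation; the function name and numbers are made up for the example). Once the buffer fills, every extra packet is discarded outright, which is exactly what shows up in Wireshark as retransmissions and duplicate ACKs:

```python
from collections import deque

def simulate_tail_drop(arrivals_per_tick, service_per_tick, queue_limit, ticks):
    """Crude tail-drop model: each tick, `arrivals_per_tick` packets arrive
    and up to `service_per_tick` are forwarded; a full queue drops the rest."""
    q = deque()
    sent = dropped = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(q) < queue_limit:
                q.append(1)
            else:
                dropped += 1          # hard drop: no smoothing, no early signal
        for _ in range(min(service_per_tick, len(q))):
            q.popleft()
            sent += 1
    return sent, dropped

# Offer ~20 units/tick into a 15-unit/tick limit (think 20 Mbps into a
# 15 Mbps cap): the queue fills within a few ticks and then every excess
# packet is simply discarded for the rest of the run.
sent, dropped = simulate_tail_drop(arrivals_per_tick=20, service_per_tick=15,
                                   queue_limit=50, ticks=100)
```

Offering traffic below the limit in the same model produces zero drops, which matches the "circuit tested clean" result once the rate limit was removed.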

More like 8-10 Mbps. We're getting much better Iperf results than we ever did before. By limiting the rate we're able to see a clean capture. This sounds like the old days, when you kept bumping the MTU around until you got a clean, full stream of data.

In this case, the three hops through the provider's layer-2 devices cleanly allowed up to 80 Mbps on the 100 Mbps circuit until we hit the router (we backtracked with Iperf). The problem is we're trying to consolidate the remote servers back here, and all that user traffic easily runs over 50 Mbps aggregate. The provider grinned and said "buy more."

(04 Apr '11, 06:12) dribniff

One Answer:


I can't speak for "real" routers, but FreeBSD's traffic shaper (see the ipfw man page) allows you to limit traffic to a given speed but it also allows you to control the queue size of the "pipe" (see the 'bandwidth' and 'queue' parameters). I would (possibly naively) think that "real" routers would have similar mechanisms.

answered 01 Apr '11, 11:09

JeffMorriss's gravatar image

JeffMorriss ♦
accept rate: 27%
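The interplay of dummynet's `bandwidth` and `queue` parameters (something like `ipfw pipe 1 config bw 15Mbit/s queue 50` — check the ipfw man page for your release) can be illustrated with a toy model. This is not ipfw itself, just the idea: the pipe drains at a fixed rate, and the queue size decides how big a burst survives without drops:

```python
def pipe_burst(burst, queue_slots):
    """A `burst`-packet burst arrives at an idle rate-limited pipe with
    `queue_slots` of buffer; return how many packets are tail-dropped."""
    return max(0, burst - queue_slots)

# A 60-packet burst into a 50-slot queue loses 10 packets; doubling the
# queue absorbs the burst entirely, trading loss for queueing delay.
pipe_burst(60, 50)    # -> 10
pipe_burst(60, 100)   # -> 0
```

That trade-off is the knob a "real" router's shaper gives you too: a deeper queue smooths bursts at the cost of added latency, a shallow one drops earlier.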

So here's the question that came up in our group: if you can't control the provider and you can't buy more bandwidth, are you better off turning the users down to 10 Mbps at their workstations and pushing the throttling back to the computer? Of course this depends on the ratio of local versus remote (to the user) traffic, but when you're trying to move all of the servers back...

In the Cisco IOS when QoS is invoked I see the word "drop" at the end of the command. They mean that literally, don't they?

(04 Apr '11, 06:21) dribniff

You can also rate limit yourself. If the carrier is using a pure tail-drop scenario, you may have better luck smoothing traffic out with your own router. The drop counter is literal, but you can rate limit traffic you don't care about, use WRED to be more intelligent about which packets get dropped, etc. Keep in mind, though, that your carrier may already be doing this. Also, a tool like Iperf may not give you a true real-world picture of packet loss (i.e. it may look worse during testing than it really is). Good luck!

(04 Apr '11, 11:06) hansangb
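The "more intelligent drops" idea behind (W)RED can be sketched in a few lines. This is the classic RED ramp, not Cisco's WRED implementation: between a minimum and maximum average queue depth, the drop probability rises linearly, so flows get trimmed early and gradually instead of all at once when the tail of the queue is hit (the threshold values below are arbitrary examples):

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Linear RED drop probability for a given average queue depth."""
    if avg_queue < min_th:
        return 0.0        # queue healthy: never drop
    if avg_queue >= max_th:
        return 1.0        # queue saturated: drop everything
    # linear ramp between the two thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# Halfway between the thresholds, the drop probability is half of max_p:
red_drop_probability(30, min_th=20, max_th=40, max_p=0.1)   # -> 0.05
```

Because only a small fraction of packets is dropped while the queue is merely building, individual TCP flows back off one at a time rather than the whole link collapsing into synchronized retransmissions.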

I agree with hansangb - controlling traffic yourself at your edge is the best option. What is the latency between the two sites? By dropping packets your provider is likely forcing TCP through slow start. The higher the latency between your sites, the longer the recovery from slow start takes, which greatly degrades performance. If you can keep your traffic slightly below the provider's imposed limit you'll come out ahead in the long run.

(05 Apr '11, 05:17) GeonJay
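GeonJay's point about latency can be put into a back-of-the-envelope calculation. Assuming Reno-style congestion avoidance (a simplification: the window regrows by roughly one MSS per round trip after a loss halves it), recovery time scales with the round-trip time:

```python
def recovery_seconds(link_bps, rtt_s, mss_bytes=1460):
    """Approximate time to climb from half the bandwidth-delay product
    back to a full pipe at one MSS per round trip (Reno-style)."""
    bdp_segments = link_bps * rtt_s / 8 / mss_bytes   # pipe size in segments
    rtts_to_recover = bdp_segments / 2                # regrow the lost half
    return rtts_to_recover * rtt_s

# On the 15 Mbps limited circuit: a 10 ms RTT recovers in ~0.06 s,
# while an 80 ms RTT takes ~4 s -- the same drop hurts far more at distance.
recovery_seconds(15e6, 0.01)
recovery_seconds(15e6, 0.08)
```

Since both the pipe size and the per-step duration grow with RTT, recovery time grows roughly with the square of the latency, which is why dropping packets on a long-haul circuit is so much more painful than on a LAN.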