
Estimating/modeling TCP latency


I'm using TCP to send messages between two Linux hosts on the same subnet at a fixed period, and I can see occasional cases where a "fast retransmission" occurs and the application appears to see significant jitter as a result.

A single segment looks like it may have been lost, and the subsequent ones all back up, queued for delivery to the application (head-of-line blocking) -- a classic case where TCP was not the right tool for the job.

I know I can mitigate the problem with UDP or SCTP, but I'd like to understand just how bad it is for TCP.

EDIT: UDP is appropriate because the application is so latency-sensitive. Late data is invalid data.

In my particular case, is the recovery/retransmission delay a function of the period at which I'm sending the messages? Or are there TCP specifications which guide this (usually specified as a function of RTT)? Or are there stack/implementation-specific timeouts that can be adjusted/tuned?
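
(For reference, since the question asks about stack-specific tunables: Linux does expose a few per-socket options aimed at exactly this kind of low-rate, latency-sensitive stream. A minimal sketch, assuming a Linux sender of that era; TCP_THIN_DUPACK and TCP_THIN_LINEAR_TIMEOUTS are Linux-only, and the #defines below are just fallbacks for older userspace headers.)

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* Linux >= 2.6.34 "thin stream" options; values as in linux/tcp.h. */
    #ifndef TCP_THIN_LINEAR_TIMEOUTS
    #define TCP_THIN_LINEAR_TIMEOUTS 16
    #endif
    #ifndef TCP_THIN_DUPACK
    #define TCP_THIN_DUPACK 17
    #endif

    static int tune_sender(int sock)
    {
        int one = 1;

        /* Send each small message immediately instead of letting Nagle
           coalesce it with the next one. */
        if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
            return -1;

        /* Thin-stream tweaks: retransmit after a single duplicate ACK and
           back off linearly instead of exponentially -- useful when there
           are too few packets in flight to trigger normal fast retransmit. */
        setsockopt(sock, IPPROTO_TCP, TCP_THIN_DUPACK, &one, sizeof(one));
        setsockopt(sock, IPPROTO_TCP, TCP_THIN_LINEAR_TIMEOUTS, &one, sizeof(one));

        return 0;
    }

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        return s < 0 ? 1 : tune_sender(s);
    }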

asked 29 Mar '13, 10:23


bcain

edited 29 Mar '13, 12:56


One Answer:


Usually a lost TCP segment should not prevent the remaining segments from being accepted, because the sender will keep sending until it notices that the segment was lost, and the receiver will keep putting the incoming segments into its receive window. What may kill your performance (if you're close to doing real-time processing) is the fact that the data in the TCP window cannot be forwarded to the application while there is still a gap from the missing segment. You'd probably have a similar problem with UDP, because the same problem would just surface one layer above the stack.
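
To put a number on "until it notices that the segment was lost": when there aren't enough duplicate ACKs to trigger fast retransmit, the sender falls back to its retransmission timeout, which standard stacks derive from the measured RTT roughly as in RFC 6298. A rough sketch of that calculation (the 200 ms floor mirrors Linux's TCP_RTO_MIN and is an assumption here; the RFC itself recommends a 1 second minimum):

    #include <stdio.h>

    /* RFC 6298-style RTO estimation: smooth the RTT samples into srtt and
       rttvar, then rto = srtt + 4 * rttvar, clamped to a minimum. */
    static double srtt = 0.0, rttvar = 0.0;

    static double update_rto(double rtt_sample)
    {
        if (srtt == 0.0) {                    /* first measurement */
            srtt   = rtt_sample;
            rttvar = rtt_sample / 2.0;
        } else {
            double err = rtt_sample - srtt;
            rttvar = 0.75 * rttvar + 0.25 * (err < 0.0 ? -err : err);
            srtt   = 0.875 * srtt + 0.125 * rtt_sample;
        }
        double rto = srtt + 4.0 * rttvar;
        return rto < 0.2 ? 0.2 : rto;         /* assumed 200 ms floor */
    }

    int main(void)
    {
        /* A LAN path with ~0.5 ms RTT samples: the floor dominates. */
        for (int i = 0; i < 5; i++)
            printf("RTO after sample %d: %.3f s\n", i + 1, update_rto(0.0005));
        return 0;
    }

On a LAN where the RTT samples are well under a millisecond, the floor dominates, so a timeout-driven recovery costs on the order of the stack's minimum RTO regardless of the send period. Fast retransmit, by contrast, classically needs three duplicate ACKs, and with one message per period those only arrive as the following messages do -- so that recovery path does scale with the sending period.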

If you notice that your receiving TCP stack requests ALL packets again from the lost segment onward, you have a pretty inefficient stack on the receiver's side. If the sender starts retransmitting packets without them being lost in the first place, your sender TCP stack is not very good at what it does.

What can happen on a pretty fast connection is that it takes a while for a retransmission to get through, because it has to "get in line" behind all the other segments that are already on their way. It might help to force a smaller receive window on the receiving node by calculating the optimum window size. That way the sender cannot blast away with packets like crazy, and retransmissions should get through as fast as possible.
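
A rough sketch of that window calculation, sizing the receive buffer to the bandwidth-delay product and pinning it with SO_RCVBUF before the connection is set up (the link rate and RTT below are placeholders, not values from the question):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        /* Bandwidth-delay product: the amount of data the path can hold.
           Placeholder figures: 1 Gbit/s link, 0.5 ms round-trip time. */
        double link_bps = 1e9;
        double rtt_s    = 0.0005;
        int window = (int)(link_bps / 8.0 * rtt_s);   /* bytes in flight */

        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0)
            return 1;

        /* Setting SO_RCVBUF before connect()/listen() caps the window the
           kernel will advertise (Linux roughly doubles the requested value
           to account for bookkeeping overhead). */
        setsockopt(s, SOL_SOCKET, SO_RCVBUF, &window, sizeof(window));

        printf("requested receive buffer: %d bytes\n", window);
        return 0;
    }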

answered 29 Mar '13, 12:35


Jasper ♦♦

"You'd probably have a similar problem with UDP because it will lead to the same problem just a layer above the stack" -- but any one of these messages is adequate (late data is invalid data). So I don't think it would be a problem.

(29 Mar '13, 12:42) bcain

Okay, you didn't mention that you can afford to lose some of the messages. In that case UDP might have a slight edge, but I still don't see why subsequent TCP messages should be delayed by a lost segment.

(29 Mar '13, 12:44) Jasper ♦♦

Because it's designed to preserve order. I'm talking about the delay perceived by the application.

(29 Mar '13, 12:54) bcain