Hi all, I hope somebody can help me. I'm trying to measure the RTT of TCP packets over LTE. I have a server running with a 1 KB file, and I download this file 10 times with wget on a client while capturing the packets with tcpdump. To get the RTT I analyze these packets by observing both the SYN - SYN/ACK time difference and the FIN/ACK - FIN/ACK time difference. I expected to get values of the same magnitude, but I observe that the SYN - SYN/ACK round trip time is consistently 3-5 times larger than the FIN/ACK RTT. Example:
As you can see, the FIN/ACK RTT is 5 times smaller than the initial SYN - SYN/ACK RTT. I observe this behaviour for most of the packets. But why? I also observe that the RTT is very large the first time (0.156 ms) and gets smaller (0.045 ms) when I download the file again, even though I disable caching (wget --no-cache). Thanks for your help, Dieter

asked 20 Dec '16, 09:43 DieterMeier
edited 20 Dec '16, 10:13 grahamb ♦
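For anyone wanting to reproduce the comparison offline, here is a minimal sketch that extracts the SYN - SYN/ACK delta from tcpdump's text output. It assumes `tcpdump -tt` (epoch timestamps) and the default flag notation (`Flags [S]` for SYN, `Flags [S.]` for SYN/ACK); the sample lines below are hypothetical, not from the actual capture:

```python
import re

def handshake_rtt(lines):
    """Return the SYN -> SYN/ACK time difference in seconds,
    or None if either packet is missing.

    Expects `tcpdump -tt` style lines: an epoch timestamp,
    then the packet summary with the TCP flag field.
    """
    syn_ts = synack_ts = None
    for line in lines:
        m = re.match(r'(\d+\.\d+)\s', line)
        if not m:
            continue
        ts = float(m.group(1))
        if 'Flags [S]' in line and syn_ts is None:
            syn_ts = ts                      # first SYN from the client
        elif 'Flags [S.]' in line and synack_ts is None:
            synack_ts = ts                   # matching SYN/ACK from the server
    if syn_ts is None or synack_ts is None:
        return None
    return synack_ts - syn_ts

# Hypothetical capture excerpt, for illustration only
sample = [
    "1482243780.100000 IP client.54321 > server.80: Flags [S], seq 1, length 0",
    "1482243780.256000 IP server.80 > client.54321: Flags [S.], seq 9, ack 2, length 0",
]
print(round(handshake_rtt(sample), 3))
```

The same pattern (match `Flags [F.]` instead) gives the FIN/ACK side, so both values come from one capture and one script.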
A complete guess, but it could be that the first time you connect, the mobile device has to perform a RACH procedure to get an uplink grant, whereas for subsequent packets the uplink grant arrives more quickly?