
Timestamps and Latency


Hi,

I am running a capture on my PC of an SSL transfer to a server about 14 hops away. ICMP gives me an average round-trip time on the order of 160 ms.

But when looking at this through Wireshark 1.12.9, the displayed time delta averages about 200 microseconds. This seems consistent throughout the capture (I use the IO graph). This value seems impossibly small given the distance between the nodes. In this data set I have max latency values of 100 ms and min values of 10 microseconds. The max, min, and average are calculated through the IO graph, with a display filter in the main window restricting it to the TCP conversation.
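For reference, here is a minimal sketch of recomputing those min/max/average deltas outside the IO graph. It assumes Scapy; the capture file name and both IP addresses are placeholders, not values from the actual capture. It also splits the deltas by direction, since the gap between a received packet and the ACK generated locally by the capturing PC can be a few microseconds, while only an outgoing packet followed by an incoming one spans the network path:

    # Recompute IO-graph style min/max/average time deltas for one
    # TCP conversation. Scapy assumed; names below are placeholders.
    from scapy.all import rdpcap, IP, TCP

    CLIENT = "192.0.2.10"   # placeholder: the capturing PC
    SERVER = "203.0.113.5"  # placeholder: the SSL server

    pkts = [p for p in rdpcap("capture.pcap")
            if IP in p and TCP in p
            and {p[IP].src, p[IP].dst} == {CLIENT, SERVER}]

    # (delta to previous packet, source of previous, source of current)
    deltas = [(float(cur.time) - float(prev.time), prev[IP].src, cur[IP].src)
              for prev, cur in zip(pkts, pkts[1:])]

    if deltas:
        vals = [d for d, _, _ in deltas]
        print(f"min {min(vals)*1e6:.0f} us, max {max(vals)*1e3:.1f} ms, "
              f"avg {sum(vals)/len(vals)*1e3:.3f} ms")
        # Only client->server followed by server->client crosses the network;
        # the reverse pairing mostly measures the local TCP stack.
        rtt_like = [d for d, sp, sc in deltas if sp == CLIENT and sc == SERVER]
        if rtt_like:
            print(f"outgoing->incoming avg: "
                  f"{sum(rtt_like)/len(rtt_like)*1e3:.1f} ms")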

Additionally, the conversation seems to be tit for tat, i.e. one packet sent from me, then one from the server, and the pattern repeats (I was thinking maybe my results were due to consecutive server transmits, but the Seq and Ack values make sense for these packets in sequence with incredibly small latency).
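To double-check that strict one-for-one pattern, a rough sanity check (same assumptions as the sketch above: Scapy, placeholder file name, and a capture already narrowed to the one conversation) is to verify that each packet acknowledges all the data carried by the previous one:

    # Verify the one-packet-each-way pattern: in a strict tit-for-tat
    # exchange, each packet acknowledges the previous packet's data.
    from scapy.all import rdpcap, TCP

    pkts = [p for p in rdpcap("capture.pcap") if TCP in p]
    for prev, cur in zip(pkts, pkts[1:]):
        # SYN/FIN also consume one sequence number; this rough check
        # tolerates that with the +1 case.
        expected = prev[TCP].seq + len(prev[TCP].payload)
        if cur[TCP].ack not in (expected, expected + 1):
            print(f"unexpected ack at t={float(cur.time):.6f}: "
                  f"{cur[TCP].ack} (expected ~{expected})")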

Could this be an issue with my PC mucking up the timestamps for sub-millisecond values?

Thanks

asked 09 Mar '16, 00:10


Edward Teach

edited 09 Mar '16, 00:14

It would be easier to understand the issue if you could post the capture (perhaps after shaving away the payload with TraceWrangler if you are afraid of data leakage). But is there any NAT or proxy device between the session client and server, or are all the intermediate devices just routers? Also, a capture taken simultaneously at the client and the server side could shed more light.

(09 Mar '16, 00:27) sindy