Hello! I want to know how to calculate delay for TCP, DNS, and HTTP, and how much delay is too much. I know that not all delays are important; I'm asking about the ones that really matter.
asked 30 Mar '17, 03:04
edited 31 Mar '17, 09:45
For TCP: in the packet details pane, right-click on the "Transmission Control Protocol" section, select "Protocol Preferences," and check "Calculate conversation timestamps." You can then scroll to the bottom of the TCP section in the packet details pane to the "Timestamps" area, right-click on "Time since previous frame..." and select "Apply as Column." This gives you a column of TCP delta times, which represent the delay between consecutive packets within each TCP conversation.
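To make the TCP delta column concrete, here is a minimal sketch of what "Time since previous frame in this TCP stream" computes: for each packet, the gap back to the previous packet in the *same* conversation, not simply the previous packet in the capture. The stream IDs and timestamps below are made-up sample data, not from a real capture.

```python
# Sketch of Wireshark's per-stream TCP delta: the gap to the previous
# packet in the SAME TCP conversation. Sample data is hypothetical.

def tcp_deltas(packets):
    """packets: list of (stream_id, timestamp_seconds) in capture order."""
    last_seen = {}   # stream_id -> timestamp of that stream's previous packet
    deltas = []
    for stream, ts in packets:
        # First packet of a stream gets a delta of 0.0
        deltas.append(ts - last_seen.get(stream, ts))
        last_seen[stream] = ts
    return deltas

packets = [
    (0, 0.000),  # stream 0: SYN
    (0, 0.045),  # stream 0: SYN/ACK -> delta 0.045
    (1, 0.050),  # stream 1: first packet -> delta 0.0
    (0, 0.046),  # stream 0: ACK -> delta 0.001, not 0.046 - 0.050
]
print(tcp_deltas(packets))
```

Note that the ACK in stream 0 is measured against stream 0's previous packet, even though a stream 1 packet arrived in between; that is what makes the per-conversation delta useful for spotting delays.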
For DNS, you can create a column from the dns.time field for a setup similar to the TCP example above.
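The dns.time field measures something slightly different from the TCP delta: the elapsed time between a DNS query and its matching response. A rough sketch of that matching, using the transaction ID only (a real resolver match also considers addresses and ports), with made-up sample records:

```python
# Sketch of dns.time: time between a query and its matching response,
# paired here by DNS transaction ID. Records are hypothetical samples.

def dns_response_times(records):
    """records: list of (txn_id, is_response, timestamp_seconds) in capture order."""
    pending = {}   # txn_id -> timestamp of the outstanding query
    times = {}     # txn_id -> response time in seconds
    for txn_id, is_response, ts in records:
        if not is_response:
            pending[txn_id] = ts
        elif txn_id in pending:
            times[txn_id] = ts - pending.pop(txn_id)
    return times

records = [
    (0x1a2b, False, 0.000),  # query A
    (0x3c4d, False, 0.010),  # query B
    (0x1a2b, True,  0.025),  # response A -> dns.time = 0.025
    (0x3c4d, True,  0.210),  # response B -> dns.time = 0.200 (slow)
]
print(dns_response_times(records))
```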
For HTTP you can typically use the TCP delta, but there is also an http.time field that works the same way as dns.time (request-to-response time).
These fields can also be fed into advanced I/O graphs for further analysis.
From there, you will need to analyze the capture to determine which delays matter. I typically look for any delay over 0.5 seconds, but even that might be too long depending on your application. Delay during the handshake gives an indication of overall latency, or of problems if it's too high. I generally ignore long delays on RST and FIN packets at the end of a conversation, since an idle gap before teardown is normal. Analyzing delay can be somewhat of an art form at times. Obviously, long delays that line up with complaints or log entries are easy to spot; other times it takes patience, experience, and trial and error.
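The triage rule above (flag deltas over a threshold, but ignore RST/FIN at the end of a conversation) can be sketched like this. The 0.5 s threshold, frame numbers, and flag lists are illustrative assumptions, not output from a real capture:

```python
# Sketch of the triage rule: flag packets whose TCP delta exceeds a
# threshold, skipping RST and FIN packets where a long idle gap before
# teardown is normal. Sample packets are hypothetical.

THRESHOLD = 0.5  # seconds; may still be too long for latency-sensitive apps

def suspicious_delays(packets, threshold=THRESHOLD):
    """packets: list of (frame_no, delta_seconds, tcp_flags) tuples."""
    return [
        frame for frame, delta, flags in packets
        if delta > threshold and not ({"RST", "FIN"} & set(flags))
    ]

packets = [
    (1, 0.002, ["SYN"]),
    (2, 0.800, ["ACK"]),          # flagged: long gap mid-conversation
    (3, 0.030, ["PSH", "ACK"]),
    (4, 2.500, ["FIN", "ACK"]),   # ignored: idle time before close
]
print(suspicious_delays(packets))  # -> [2]
```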
answered 30 Mar '17, 07:27