Dear all, I'm new to Wireshark and I would like to know if it's possible to get the delay and jitter of UDP packets directly in Wireshark. I saw many examples on the Internet, but they are all about the RTP protocol. Thank you in advance for your help. Best regards. asked 18 Jul '12, 13:48 nezha
2 Answers:
As @Jaap said: currently Wireshark can only calculate jitter for RTP, as nothing else is implemented. However, it would be possible to calculate UDP jitter. Explanation: The following link describes it nicely:
Cite: "Jitter: The jitter of a packet stream is defined as the mean deviation of the difference in packet spacing at the receiver compared to the sender, for a pair of packets...." Basically, you could do it with one capture file as well, if you accept the time delta between two frame timestamps (in a capture file) to be the variation of network delay. However that's not really correct.
Delay at the sender and out-of-order arrival would be mixed into that delta as well, and this would void the calculation! If you don't care about this, you can calculate (a kind of) jitter as the variation of the frame timestamp delta. With RTP it's much easier to calculate jitter, as every RTP packet carries a timestamp value. Search for 'Timestamp' in the following page:
With that timestamp it is possible to calculate jitter with only one capture file, as you can calculate the time delta between two RTP packets and use the timestamp within those packets as a reference for "out of order packets" and "delay at the sender" (basically!). See "How jitter is calculated" in the Wireshark RTP wiki and the following link (search for "jitter estimator formula"). There are also tools for UDP jitter measurement.
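For reference, here is a minimal Python sketch of the RFC 3550 interarrival jitter estimator that the "jitter estimator formula" above refers to. The input format is my assumption: pairs of (arrival time, RTP timestamp), with the RTP timestamp already converted to seconds via the codec clock rate (e.g. 8000 Hz), which is not something Wireshark hands you directly.

```python
# Sketch of the RFC 3550 interarrival jitter estimator.
# packets: iterable of (arrival_time_s, rtp_timestamp_s) tuples (assumed format).

def rtp_interarrival_jitter(packets):
    """Return the running jitter estimate after processing all packets."""
    jitter = 0.0
    prev = None
    for arrival, rtp_ts in packets:
        if prev is not None:
            prev_arrival, prev_rtp_ts = prev
            # D(i-1, i) = (R_i - R_{i-1}) - (S_i - S_{i-1})
            d = (arrival - prev_arrival) - (rtp_ts - prev_rtp_ts)
            # J(i) = J(i-1) + (|D(i-1, i)| - J(i-1)) / 16
            jitter += (abs(d) - jitter) / 16.0
        prev = (arrival, rtp_ts)
    return jitter
```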
UPDATE: Thinking again about the possibility to calculate UDP jitter with just one capture file, I come to the conclusion that it's NOT possible to get any usable values. So, you can't just ignore or accept the problem I mentioned above, as the size of the error would be unknown, which makes the data useless. Reason: If you measure only at one side, you are measuring a mix of delay at the sender and delay in the network.
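In symbols (this notation is mine, not from the answer above): if R_i is the arrival timestamp of packet i at the receiver, S_i its nominal send time, d_s(i) the sender-side delay and d_n(i) the network delay, then R_i = S_i + d_s(i) + d_n(i), so a receiver-side inter-arrival delta mixes all three:

```latex
% Receiver-only inter-arrival delta (notation is mine, not from the answer):
%   sender spacing  +  sender delay variation  +  network jitter (what you want)
\[
  R_i - R_{i-1} \;=\; (S_i - S_{i-1})
    \;+\; \bigl(d_s(i) - d_s(i-1)\bigr)
    \;+\; \bigl(d_n(i) - d_n(i-1)\bigr)
\]
```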
So, what you are measuring is actually the sum of the delay at the sender (ds: delay in the stack, the application, etc.) and the variation of the delay in the network (dn(i) - dn(i-1)). Actually you are only interested in the variation of the network delay (the jitter). The delay at the sender (ds) could well be much larger than the variation of the network delay (dn(i) - dn(i-1)), so you can only use that value if the delay at the sender (ds) is zero (or at least constant). However, that is something you cannot assume.

Conclusion: To calculate UDP jitter, you need either two capture files (as I mentioned in the first place) or timestamps in the packets, which is not the case for standard UDP. The other option is to actively measure it with a tool like xjperf. Sorry for my confusion about the "one capture file theory". One should always do the math and not rely on gut feeling ;-)) Regards answered 19 Jul '12, 09:53 Kurt Knochner ♦ edited 20 Jul '12, 00:50
UDP has no delay or jitter, since it has no notion of time or sequence. That's what RTP is about. answered 19 Jul '12, 05:05 Jaap ♦
Thank you so much for your reply. So, if I have understood correctly, I can calculate the jitter by myself (using two .pcap files, one from the sender and one from the receiver); I can also use a shell script that takes the .pcap file as input, or the xjperf tool. Can I ask another question, please: if I use the shell script, how can it calculate the jitter when it is only given the .pcap file from the receiver side? It also needs information about the transmission times to find the delay and then the jitter of each packet. Thank you very much.
Yes, this is the only way to do it right (correct calculation). You need to calculate the delay (transmission time) for each packet, that is: delay = timestamp_receiver - timestamp_sender. HINT: The clocks of both capture machines need to be synchronized to the same time base (e.g. with NTP). Then calculate the variation of that delay.
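As a rough illustration, here is a minimal Python sketch of that two-capture-file calculation. It assumes you have already exported per-packet epoch timestamps from both captures (e.g. with `tshark -r sender.pcap -T fields -e frame.time_epoch`) and that the two lists line up packet-for-packet (no loss or reordering); matching packets by payload or ID is left out for brevity, and the mean absolute difference used for the jitter is just one possible definition.

```python
import statistics

def one_way_delays(sender_ts, receiver_ts):
    """Per-packet one-way delay: receiver timestamp minus sender timestamp.
    Assumes both lists are in order, line up packet-for-packet, and that the
    capture hosts are synchronized to the same time base (e.g. NTP)."""
    return [r - s for s, r in zip(sender_ts, receiver_ts)]

def jitter_from_delays(delays):
    """Jitter as the variation of the one-way delay; here the mean absolute
    difference between consecutive delays (one possible definition)."""
    diffs = [abs(delays[i] - delays[i - 1]) for i in range(1, len(delays))]
    return statistics.mean(diffs) if diffs else 0.0

# Example with made-up epoch timestamps (seconds):
sender   = [100.000, 100.020, 100.040, 100.060]
receiver = [100.012, 100.031, 100.055, 100.071]
d = one_way_delays(sender, receiver)   # approx. [0.012, 0.011, 0.015, 0.011]
print(jitter_from_delays(d))           # approx. 0.003 s
```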
xjperf measures the jitter of its own UDP test packets. So this is an active test of a connection with test data sent over a link.
That’s correct and you get the real transmission times only by using two capture files!
HOWEVER, if you can assume that there is a constant stream of packets, without much delay in the sender application and without "out of order" arrival, then the difference between the arrival time of packet n and the arrival time of packet n+1 can be used as a rough stand-in for the variation in transmission time. That's how the shell script (link above) works. It's not 100% correct, but a good estimation and better than nothing if you have only one capture file ;-)
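This is not the actual shell script from the link, just a sketch of the same idea in Python, assuming you have the receiver-side frame timestamps as a list (e.g. extracted with tshark as above); it is only meaningful under the assumptions just mentioned (steady stream, negligible sender delay, no reordering).

```python
import statistics

def rough_jitter(arrival_ts):
    """arrival_ts: frame timestamps from ONE receiver-side capture, in order.
    Returns the standard deviation of the inter-arrival deltas as a rough
    jitter estimate. Only meaningful for a steady packet stream with
    negligible sender-side delay and no reordering."""
    deltas = [arrival_ts[i] - arrival_ts[i - 1] for i in range(1, len(arrival_ts))]
    return statistics.pstdev(deltas) if deltas else 0.0
```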
see my UPDATE in the answer!