This is a static archive of our old Q&A Site. Please post any new questions and answers at ask.wireshark.org.

UDP packets' jitter and delay

0

Dear all, I'm new to Wireshark and I would like to know if it's possible to get the delay and jitter of UDP packets directly in Wireshark. I saw many examples on the Internet, but they are all about the RTP protocol. Thank you in advance for your help. Best regards.

asked 18 Jul '12, 13:48


nezha
6112
accept rate: 0%


2 Answers:

2

As @Jaap said: Currently Wireshark can only calculate jitter for RTP, as nothing else is implemented. However, it would be possible to calculate UDP jitter.

Explanation
In general you can calculate UDP jitter; however, to do it correctly you would need two capture files: one in front of the client and one in front of the server. That lets you calculate the transmission time (delay) of each packet and then the variation of delay (jitter). This is called Packet Delay Variation.

The following link describes it in a nice way

http://nms.lcs.mit.edu/~hari/papers/CS294/paper/node5.html

Cite: "Jitter: The jitter of a packet stream is defined as the mean deviation of the difference in packet spacing at the receiver compared to the sender, for a pair of packets...."

Basically, you could do it with a single capture file as well, if you accept the time delta between two frame timestamps (in a capture file) as the variation of network delay. However, that's not really correct, for two reasons:

  1. it does not take into account delays at the sender side (within the application)
  2. a packet could take a different route and could arrive in front of another (earlier) packet

Either effect would invalidate the calculation!

If you don't care about this, you can calculate (a kind of) jitter as the variation of the frame timestamp delta (frame.time_delta or frame.time_delta_displayed). See link to shell script below.

With RTP it's much easier to calculate jitter, as every RTP packet carries a timestamp value (rtp.timestamp).

Search for 'Timestamp' in the following page:

http://www.networksorcery.com/enp/protocol/rtp.htm

With that timestamp it is possible to calculate jitter with only one capture file, as you can calculate the time delta between two RTP packets and use the timestamp within those packets as a reference for "out of order packets" and "delay at the sender" (basically!).

See How jitter is calculated in the Wireshark RTP Wiki and the following link.

Search for "jitter estimator formula": http://toncar.cz/Tutorials/VoIP/VoIP_Basics_Jitter.html

There are tools for UDP jitter measurement.

So basically, one could implement "UDP jitter" calculation in Wireshark, if the problem described above (delay at sender, out of order packet arrival) is simply ignored or accepted. This could be done as a TAP, similar to the RTP statistics TAP. Volunteers are welcome ;-)

UPDATE: Thinking again about the possibility of calculating UDP jitter with just one capture file, I have come to the conclusion that it's NOT possible to get any usable values. So, you can't just ignore or accept the problem I mentioned above, as the size of the error would be unknown, which makes the data useless.

Reason: If you measure only at one side, you are measuring a mix of delay at the sender and delay in the network.

      Sender                     Network           Receiver
p(1): ts(1)                       dn(1)            tr(1) = ts(1) + dn(1)
p(2): ts(2) = ts(1) + ds(2)       dn(2)            tr(2) = ts(2) + dn(2)
p(3): ts(3) = ts(2) + ds(3)       dn(3)            tr(3) = ts(3) + dn(3)

p  = packet 1, 2, 3, …
ts = send time at the sender
tr = receive time at the receiver
dn = delay in the network
ds = delay at the sender (stack, application, etc.)

Inter-arrival time at the receiver:

tr(2) - tr(1) = [ts(2) + dn(2)] - [ts(1) + dn(1)]
              = [ts(1) + ds(2) + dn(2)] - [ts(1) + dn(1)]
              = ds(2) + [dn(2) - dn(1)]
              = ds(2) + "variation of network delay"

So, what you are measuring is actually the sum of the delay at the sender (ds: delay in the stack, the application, etc.) and the variation of delay on the network (dn(2) - dn(1)). But you are only interested in the variation of network delay (jitter). The delay at the sender (ds) could well be much higher than the variation of network delay (dn(n+1) - dn(n)), so you can only use that value if the delay at the sender (ds) is zero. However, that is something you cannot assume.

Conclusion: To calculate UDP jitter, you need either two capture files (as I mentioned in the first place) or timestamps in the packet, which is not the case for standard UDP. The other option is to actively measure it with a tool like xjperf.

Sorry for my confusion about the “one capture file theory”. One should always do the math and not rely on gut feeling ;-))

Regards
Kurt

answered 19 Jul ‘12, 09:53


Kurt Knochner ♦
24.8k1039237
accept rate: 15%

edited 20 Jul ‘12, 00:50

Thank you so much for your reply. So, if I have understood correctly, I can calculate the jitter by myself (using two .pcap files, one from the sender and the second from the receiver); I can also use a shell script where I enter the .pcap file, or the xjperf tool. Can I ask another question, please: if I use the shell script, how can it calculate the jitter when we enter only the pcap file from the receiver side? It also needs the information about transmission times to find the delay and then the jitter of each packet. Thank you very much.

(19 Jul ‘12, 14:59) nezha

I can calculate the jitter by myself (using two .pcap files, one from the sender and the second from the receiver);

Yes, this is the only way to do it right (correct calculation). You need to calculate the delay (transmission time) for each packet. That is: delay = timestamp_receiver - timestamp_sender. HINT: The time sources of both capture machines need to be synchronized to the same base (e.g. with NTP). Then calculate the variation of delay.
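As a sketch of that calculation (hypothetical helper names and made-up timestamps; it assumes the packets have already been matched between the two files, e.g. by payload or a sequence number, and that the clocks are NTP-synchronized):

```python
def delays(sender_ts, receiver_ts):
    """Per-packet one-way delay: timestamp_receiver - timestamp_sender."""
    return [r - s for s, r in zip(sender_ts, receiver_ts)]

def delay_variation(sender_ts, receiver_ts):
    """Variation of delay (jitter) between consecutive packets."""
    d = delays(sender_ts, receiver_ts)
    return [d[i] - d[i - 1] for i in range(1, len(d))]

# Packets sent every 20 ms; one-way delays work out to 10, 12 and 9 ms.
sender   = [0.000, 0.020, 0.040]
receiver = [0.010, 0.032, 0.049]
print(delay_variation(sender, receiver))
```

Any residual clock offset between the two machines cancels out of the variation (it shifts every delay by the same amount), but clock drift during the capture does not, which is why synchronization still matters.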

I can also use a shell script where I enter the .pcap file or the xjperf tool.

xjperf measures the jitter of its own UDP test packets. So this is an active test of a connection with test data sent over a link.

if I use the shell script, how can it calculate the jitter when we enter only the pcap file from the receiver side? It also needs the information about transmission times to find the delay and then the jitter of each packet.

That’s correct and you get the real transmission times only by using two capture files!

HOWEVER, if you can assume that there is a constant stream of packets, without much delay in the sender application and without “out of order” arrival, then the difference between the arrival time of packet n and the arrival time of packet n+1 can be used as a rough estimate of the variation in transmission time. That’s how the shell script (link above) works. It’s not 100% correct, but a good estimation and better than nothing, if you have only one capture file ;-)
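That estimation could be sketched like this (illustrative only: the function name and sample deltas are ours, standing in for what the shell script does with frame.time_delta values, and it is only meaningful under the constant-stream, in-order assumptions just stated):

```python
def rough_jitter(arrival_deltas, send_interval):
    """Mean absolute deviation of receiver-side arrival spacing from the
    nominal send interval -- a rough jitter estimate from ONE capture file,
    valid only if sender-side delay is negligible and arrival is in order."""
    devs = [abs(d - send_interval) for d in arrival_deltas]
    return sum(devs) / len(devs)

# Arrival deltas (seconds) for a stream nominally sent every 20 ms:
deltas = [0.021, 0.018, 0.023]
print(rough_jitter(deltas, 0.020))
```

If the sender's application delay is not negligible, the result mixes sender delay into the estimate, as the UPDATE in the answer explains.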

(19 Jul ‘12, 16:00) Kurt Knochner ♦

see my UPDATE in the answer!

(19 Jul ‘12, 23:51) Kurt Knochner ♦

0

UDP has no delay or jitter, since it has no notion of time or sequence. That's what RTP is about.

answered 19 Jul '12, 05:05


Jaap ♦
11.7k16101
accept rate: 14%