This is our old Q&A Site. Please post any new questions and answers at ask.wireshark.org.

Hi,

I hope I don't sound stupid, but I have trouble understanding the "time delta from previous packet" in Wireshark.

I learnt that a G.711 codec in a VoIP session normally generates one RTP packet every 20 ms. But the time delta from the previous packet of consecutive RTP packets often shows something around 0.001 s. How can this be?

I wish I understood this. Maybe someone has a suggestion?

asked 12 Jun '16, 08:37

nel


First of all, there are two time delta columns:

  • "plain" delta is the timestamp difference from the previous captured frame

  • "display" delta is the timestamp difference from the previous frame shown in the packet list while some display filter is applied.

So if your capture contains more than a single RTP stream and you haven't used a display filter such as ip.src == x.x.x.x and udp.srcport == X and ip.dst == y.y.y.y and udp.dstport == Y, neither of these delta values is tied to a particular RTP stream, so a delta of 1 millisecond is not strange.

If you have applied a display filter like the one above and the display delta column still shows values significantly different from the expected packetization time, it is worth deeper analysis.

First, G.711 frames may theoretically be of any length, but multiples of 10 ms are used most often, and among them 20 ms is the most typical choice: a compromise between packet overhead (Ethernet + IP + UDP + RTP headers add 58 bytes to a payload of any size) and the propagation delay from the speaker to the listener.
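For illustration, the arithmetic behind that compromise can be sketched in a few lines of Python. The 58-byte overhead figure is taken from the paragraph above; the helper function name is just for this example:

```python
SAMPLE_RATE = 8000          # G.711 sample rate, Hz
BYTES_PER_SAMPLE = 1        # G.711 encodes each sample in one byte
OVERHEAD = 58               # Ethernet + IP + UDP + RTP headers, as above

def g711_packet(ptime_ms):
    """Payload size, packet rate and wire bandwidth for a packetization time."""
    payload = SAMPLE_RATE * BYTES_PER_SAMPLE * ptime_ms // 1000  # bytes of audio
    pps = 1000 / ptime_ms                                        # packets per second
    bandwidth = (payload + OVERHEAD) * pps * 8                   # bits per second
    return payload, pps, bandwidth

for ptime in (10, 20, 30):
    payload, pps, bw = g711_packet(ptime)
    print(f"ptime {ptime:2d} ms: {payload:3d} B payload, {pps:5.1f} pkt/s, {bw/1000:.1f} kbit/s")
```

Shorter packets waste proportionally more bandwidth on headers; longer ones add delay, which is why 20 ms (160 bytes of audio, 50 packets per second) is the usual middle ground.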

The sending side should normally send each packet as soon as it loads it with the last sample; as the samples come at regular intervals and their count per packet is constant, the packets should leave at regular intervals too. However, the time it takes each individual packet to traverse the network may differ, as there may be priority queues and multiple network paths between the source and destination phones (leaving aside traffic dispatching in wireless networks). So if you capture at the receiving side and the RTP does not arrive just across a single LAN segment, it is quite possible that packet arrivals are sometimes almost simultaneous, and sometimes their order is even reversed. That's why RTP uses sequence numbers and timestamps.

answered 12 Jun '16, 10:19

sindy

Sindy, thanks for clarifying!

I have a simple experimental setup in a local network with no other traffic; the plain delta is usually the same as the display delta, and the frame numbers are consecutive. The same goes for the deltas on the sender and receiver side.

Even if there were only 10 ms in the frames, it wouldn't really correspond to a time delta of 0.0005 or 0.0007 s, would it? I wouldn't be surprised by a higher delta than the codec framing time, but a lower one...? Maybe this isn't really a Wireshark question but more a VoIP question... I will try some other clients and see what deltas I get...

(13 Jun '16, 00:32) nel
As your post was not an Answer to your original Question, I've converted it into a Comment to my Answer; see the site FAQ for other house rules.

Regarding the 10 ms: I discussed the possible values of the PCMA packetization time only in order to correct the information you had been told, i.e. that it is always 20 ms. Of course, 0.01 s and 0.0005 s are quite different.

I wouldn't be surprised by a higher delta than the codec framing time, but a lower one...?

Think one step further. As the fixed-size packets are assembled from constantly incoming individual samples at regular intervals, if just a single one of these packets is received later than it should have been, its own delta from the previous one is higher than usual, but the next one's delta from the delayed packet must then be lower than usual.

In other words, whenever RTP packet N's delta from packet N-1 is lower than expected, it doesn't actually mean that packet N came too early but that packet N-1 came too late.
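This reasoning can be checked numerically with a toy model (made-up numbers, not real capture data): delay a single packet in an otherwise perfectly regular 20 ms stream and look at the resulting deltas.

```python
SEND_INTERVAL = 0.020   # packets leave the sender every 20 ms
BASE_DELAY = 0.005      # constant network delay applied to every packet

send_times = [i * SEND_INTERVAL for i in range(6)]
arrival_times = [t + BASE_DELAY for t in send_times]
arrival_times[3] += 0.015                 # packet 3 alone is delayed by 15 ms

deltas = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
for n, d in enumerate(deltas, start=1):
    print(f"delta packet {n-1} -> {n}: {d * 1000:.1f} ms")
# packet 3's delta comes out higher than 20 ms (35 ms), and packet 4's delta
# correspondingly lower (5 ms): packet 4 did not come early, packet 3 came late.
```

A single 15 ms late arrival thus produces one delta of 35 ms immediately followed by one of 5 ms, even though the sender never deviated from its 20 ms schedule.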

Maybe this isn't really a Wireshark question

It is a Wireshark question in the sense that you should be interested in the metrological parameters of Wireshark; in particular, in to what extent you can trust the timestamp values.

And the answer here is "it depends on a lot of factors". Unless you use dedicated hardware designed with timestamp precision in mind, you depend mostly on the load of the operating system of the capturing machine. An ordinary network card these days saves the incoming bytes of a frame into a buffer using DMA and raises an IRQ once the frame has been completely received. But it has no reason (nor ability) to augment the contents of the frame with any other information. The operating system's kernel finds time to serve that IRQ later, starts processing the frame at different layers of the protocol stack, and at some stage libpcap or WinPcap is given the packet for processing. Only here is the timestamp assigned. While processing by the layers of the network stack takes roughly the same time for each packet, the delay between when the IRQ has been raised and when the kernel picks up the frame may be quite long if the machine is under heavy load. It is not unusual that several frames are then processed in a single burst, so their timestamps differ only slightly.
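The burst effect can be sketched with another toy model (the 50 ms kernel wake-up period and the 0.1 ms per-frame stamping cost are invented numbers, chosen only to make the effect visible): frames hit the wire perfectly every 20 ms, but get timestamped only when the kernel drains its queue.

```python
WIRE_INTERVAL_US = 20_000      # frames really arrive every 20 ms (microseconds)
KERNEL_PERIOD_US = 50_000      # hypothetical: kernel drains the queue every 50 ms
PER_FRAME_COST_US = 100        # frames in one burst stamped 0.1 ms apart

wire_arrivals = [i * WIRE_INTERVAL_US for i in range(6)]

timestamps = []
frames_in_wakeup = {}
for t in wire_arrivals:
    wakeup = -(-t // KERNEL_PERIOD_US) * KERNEL_PERIOD_US   # next wakeup >= t
    n = frames_in_wakeup.get(wakeup, 0)                     # position in the burst
    timestamps.append(wakeup + n * PER_FRAME_COST_US)
    frames_in_wakeup[wakeup] = n + 1

deltas_ms = [(b - a) / 1000 for a, b in zip(timestamps, timestamps[1:])]
print(deltas_ms)   # mix of long gaps and ~0.1 ms deltas instead of steady 20 ms
```

The capture then shows alternating long and sub-millisecond deltas, exactly the pattern of deltas around 0.0005 to 0.001 s observed in the question, even though the wire spacing was a steady 20 ms.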

For sent frames the story is a bit different, but the application generating the frames usually has a lower priority than the kernel.

(13 Jun '16, 02:48) sindy

So to summarize, the irregularity of packet timestamps in the capture may be caused by

  • the load of the sending machine - in this case, the packets are really sent irregularly,

  • the varying travel time of the packets through the network - in this case, the packets really arrive at the receiver irregularly,

  • the load of the capturing machine - in this case, the actual arrival of the packets to the receiver may be regular.

Of course, all the factors above may exist simultaneously.

So if you want to know what is really going on on the wire, the capturing machine should be neither the sender nor the receiver, and it should have as little other load as possible. For methods of capturing foreign traffic on a LAN, see the relevant page on the Wireshark wiki.

(13 Jun '16, 09:21) sindy

Thanks again, sindy! I was trying to relate the time delta directly to the codec packetization time, and this of course doesn't work well. I thought that observing the stream on the sender side would leave aside the network latency issues. But of course there are enough local factors due to packet processing. I didn't take this into account...

The bursts you mention sound very reasonable, and - surprise - when I take the average of all deltas, we are very near the 20 ms framing time.
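That the average recovers the framing time is no coincidence: the sum of consecutive deltas telescopes to (last timestamp - first timestamp), so burst-induced jitter cancels out in the mean. A minimal sketch with made-up timestamps:

```python
# Invented capture timestamps (seconds): a regular 20 ms stream whose frames
# were stamped in bursts by a loaded capturing machine.
timestamps = [0.0000, 0.0501, 0.0502, 0.1000, 0.1001, 0.1002, 0.1200]

deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
mean_delta = sum(deltas) / len(deltas)

# The sum of deltas telescopes to timestamps[-1] - timestamps[0], so the mean
# depends only on the first and last timestamps, not on the jitter in between.
span_based = (timestamps[-1] - timestamps[0]) / (len(timestamps) - 1)

print(f"mean delta: {mean_delta * 1000:.1f} ms")   # close to the 20 ms framing time
```

So as long as no frames are dropped, the average delta is a robust estimate of the packetization time even on a capture with heavy timestamp jitter.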

(14 Jun '16, 00:49) nel
question was seen: 3,222 times

last updated: 14 Jun '16, 00:49

powered by OSQA