
How can I measure throughput, packet loss, delay and jitter for a streaming video over WiFi?


Hi! If anyone can help me and explain in detail: I need to know how to measure parameters such as throughput, packet loss, jitter and delay on a WiFi network over which a web video streaming service runs. Thank you


asked 07 May '16, 08:16 by ndfbn

edited 07 May '16, 08:18

The first step is to define the terms and understand them.

  • throughput seems to be quite straightforward - the amount of data transported per unit of time. However, you have to distinguish between "frames (packets) per second" and "bytes per second", and in the latter case also between "payload bytes per second" and "total bytes per second" (i.e. including the overhead); see the first sketch after this list.

  • packet loss also seems straightforward - it is normally defined as the ratio of the number of packets lost to the total number of packets sent. However, this value becomes less representative for reliable transport protocols such as TCP - if these are used, packet loss still characterizes the network quality, but without taking other parameters into account it doesn't say much about its impact on the payload.

  • delay is normally the travel time of a packet between the source and the destination; I'm not sure whether it is worth measuring in a single WLAN, as in such a case the delay almost equals the frame's on-air duration (which recently became a field of the generated "802.11 radio information" pseudo-layer). In any case, you cannot measure packet travel time using Wireshark unless you have separate captures from both the source and the destination machines, and both of them are precisely synchronised to a common time reference. The issue with short delays is that during capture, the timestamp is not assigned exactly at the moment when the packet has just left the NIC or just arrived (unless you use dedicated capturing hardware providing such functionality) but only once the kernel has time to process the interrupt from the NIC informing it about the arrival of the packet. So for short actual delays, the timestamp allocation error may well be comparable to the delay itself.
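
To illustrate the difference between "frames per second", "payload bytes per second" and "total bytes per second" mentioned above, here is a minimal sketch of reading a capture back with Python and scapy. It assumes the video stream is plain UDP towards a known destination port; the file name and port number are placeholders you would replace with your own values.

    # Rough throughput figures from a capture file (requires scapy).
    # PCAP_FILE and STREAM_PORT are placeholders for your own capture/stream.
    from scapy.all import rdpcap, UDP

    PCAP_FILE = "capture.pcap"
    STREAM_PORT = 5004            # assumed destination port of the video stream

    pkts = [p for p in rdpcap(PCAP_FILE)
            if p.haslayer(UDP) and p[UDP].dport == STREAM_PORT]

    if len(pkts) > 1:
        duration = float(pkts[-1].time) - float(pkts[0].time)    # seconds
        total_bytes = sum(len(p) for p in pkts)                  # headers included
        payload_bytes = sum(len(p[UDP].payload) for p in pkts)   # application data only
        print("packets/s      : %.1f" % (len(pkts) / duration))
        print("total bytes/s  : %.1f" % (total_bytes / duration))
        print("payload bytes/s: %.1f" % (payload_bytes / duration))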

(08 May '16, 06:20) sindy
  • jitter describes the amount of irregularity of the travel time (delay) among different packets; again, there is not much to measure on a single WLAN due to the timestamping precision. Another point is that it only makes sense to look at jitter where packets arrive at a regular rate; this is often true with audio codecs but not so often with video ones. You can determine the value of jitter if you can measure the packet travel time (i.e. you have captures from both source and destination) or if the packets themselves contain a timestamp field. The latter is true for RTP, so Wireshark can calculate jitter for audio streams using RTP and a constant audio frame rate; see the sketch below.
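
If the stream is unencrypted RTP over UDP, both a sequence-number-based loss count and the RFC 3550 interarrival jitter can be derived from a receiver-side capture. The following sketch parses the fixed RTP header by hand; the file name, the port and the 90 kHz clock rate are assumptions about your stream, not something the capture tells you.

    # Sketch: packet loss (sequence gaps) and RFC 3550 interarrival jitter for
    # one unencrypted RTP-over-UDP stream, from a receiver-side capture.
    # PCAP_FILE, STREAM_PORT and CLOCK_RATE are assumptions about your stream.
    import struct
    from scapy.all import rdpcap, UDP

    PCAP_FILE = "capture.pcap"
    STREAM_PORT = 5004
    CLOCK_RATE = 90000            # 90 kHz is the usual RTP clock for video

    jitter = 0.0
    prev_transit = None
    prev_seq = None
    expected = received = 0

    for p in rdpcap(PCAP_FILE):
        if not (p.haslayer(UDP) and p[UDP].dport == STREAM_PORT):
            continue
        raw = bytes(p[UDP].payload)
        if len(raw) < 12:
            continue                                  # too short for an RTP header
        seq, rtp_ts = struct.unpack("!HI", raw[2:8])  # sequence number, RTP timestamp
        received += 1

        # loss estimate from gaps in the sequence numbers (reordering ignored)
        if prev_seq is not None:
            expected += (seq - prev_seq) & 0xFFFF
        prev_seq = seq

        # RFC 3550: transit = arrival time - RTP timestamp (both in clock units);
        # jitter is a smoothed average of the transit time differences
        transit = float(p.time) * CLOCK_RATE - rtp_ts
        if prev_transit is not None:
            jitter += (abs(transit - prev_transit) - jitter) / 16.0
        prev_transit = transit

    lost = max(expected + 1 - received, 0) if received else 0
    print("received packets: %d" % received)
    print("lost (estimate) : %d" % lost)
    print("jitter          : %.2f ms" % (jitter / CLOCK_RATE * 1000))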

Packet loss and jitter can both lead to drop-outs in the audio or video playback on the receiving side. It doesn't matter whether a packet is actually lost or just arrives later than expected; if it isn't available (yet) at the moment when playback of the previous one has ended, the output gets affected.
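
As a toy illustration of that point (all numbers below are invented): whether a frame causes a drop-out depends only on whether it is present when its playout deadline comes, regardless of whether it was lost on the way or merely delayed.

    # Toy example: frames cause glitches whether they are lost or just late.
    # Frame interval, buffer size and arrival times are made up for illustration.
    FRAME_INTERVAL = 0.040        # 25 fps -> one frame every 40 ms
    BUFFER = 0.100                # assumed 100 ms de-jitter buffer at the receiver

    # arrival times (s) relative to the first frame; None = packet never arrived
    arrivals = [0.000, 0.041, 0.078, None, 0.290, 0.201, 0.241]

    playout_start = arrivals[0] + BUFFER
    dropouts = 0
    for i, arrival in enumerate(arrivals):
        deadline = playout_start + i * FRAME_INTERVAL
        if arrival is None or arrival > deadline:
            dropouts += 1         # frame missing or too late -> playback glitch

    print("%d of %d frames missed their playout deadline" % (dropouts, len(arrivals)))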

Depending on what issue you are actually debugging, the importance of these parameters differs, so please state your goal more precisely.

(08 May '16, 06:20) sindy