
IO graph for display filter does not match IO graph for .pcapng created by exporting selected packets


Hello,

I'm trying to build a network profile for some embedded applications on 4 different devices. I'm using a managed switch to mirror the ports connected to each device to my capture PC. During a 2-hour capture, the traffic does not exceed 731 Kbits/s.

I typically use a display filter to isolate the traffic for one device and export the specified packets to a new .pcapng file that is smaller and easier to work with. While trying to find the peak data rates of short bursts of traffic, I noticed a discrepancy between the IO graph from the original capture file and the IO graph from the exported capture file. For each capture I added a new graph and applied the same display filter that was used to export the packets.

Here is an example display filter, obviously the MAC has been changed:

(!(ip.addr==172.31.155.43 or ip.addr==172.16.5.122 or ip.addr==172.16.9.109 or ip.addr==172.31.155.95 or ip.addr==172.31.155.145 or arp)) && (ip.addr==172.31.155.42 or eth.addr == 12:34:56:78:90:12) && (frame.time >= "Feb 28, 2017 09:10:00.000000" && frame.time <= "Feb 28, 2017 11:10:00.000000")

For one device, the difference in data rates for the same burst of traffic is 10031 bits/s. For another device, the difference was 72280 bits/s. Even more confusing is the fact that the "Displayed" statistics in the original capture's file properties, taken with the display filter used to export the traffic for that device, exactly match the "Captured" statistics in the exported capture file. I should mention that this is all UDP traffic.

If I change the Y axis from bit/s to packets/s, these also do not match...

What is causing these discrepancies?

asked 01 Mar '17, 08:52

joeg4go
accept rate: 0%


One Answer:


I think this is the result of a change in start time, which shifts the graphing interval boundaries. In the original capture file, you might have packets split between two intervals, whereas in the filtered file they could all fall within the same interval.

For example, suppose you had this distribution of packets in the original capture file (here X represents where those packets are within the interval):

0         1         2         3
+----+----+----+----+----+----+----> time (s)
|         |       X | X       |
|<-- 0 -->|<-- 5 -->|<-- 5 -->|

In this case you would conclude an average of 5 packets per second over the 2 intervals where those packets occur. But once you filter only those packets, you end up with something like:

0         1         2         3
+----+----+----+----+----+----+----> time (s)
|X   X    |         |         |
|<-- 10-->|<-- 0 -->|<-- 0 -->|

Now you would conclude an average of 10 packets per second within the interval in which these packets occur. Same data.
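To make the binning effect concrete, here is a minimal Python sketch (not part of the original answer; the packet timestamps are invented) that counts packets per 1-second graphing interval measured from two different start times:

def bins(timestamps, start, interval=1.0):
    """Count packets per graphing interval, measured from `start`."""
    counts = {}
    for t in timestamps:
        idx = int((t - start) // interval)
        counts[idx] = counts.get(idx, 0) + 1
    return counts

# Ten packets clustered around the 2.0 s mark of the original capture.
packets = [1.80, 1.84, 1.88, 1.92, 1.96, 2.00, 2.04, 2.08, 2.12, 2.16]

# Original capture starts at t = 0, so the burst straddles two intervals.
print(bins(packets, start=0.0))          # {1: 5, 2: 5} -> peak of 5 packets/s

# Exported capture starts at its first packet, so the burst lands in one interval.
print(bins(packets, start=packets[0]))   # {0: 10}      -> peak of 10 packets/s

Same ten packets, but the peak per-interval value doubles simply because the interval boundaries moved.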

Try reducing your IO Graph time interval from 1 sec to 100ms or 10ms or even 1ms until the values match.

answered 01 Mar '17, 10:08

cmaynard ♦♦
accept rate: 20%

I have defined my sampling interval within the display filter. If what you're suggesting were true, wouldn't the "Displayed" statistics (from original pcap) and "Captured" statistics (from exported pcap) differ? In my case, they match exactly. The # of packets, timespan, everything matches...

                        Displayed (original, filtered)   Captured (exported)
Packets                 79781 (35.1%)                    79781
Time span, s            7199.501                         7199.501
Average pps             11.1                             11.1
Average packet size, B  186.5                            186.5
Bytes                   14860775 (37.1%)                 14860775
Average bytes/s         2064                             2064
Average bits/s          16 k                             16 k

(01 Mar '17, 11:30) joeg4go

"I have defined my sampling interval within the display filter."

You have defined the time interval. I'm referring to the graphing interval, which used to be known as the X Axis Tick interval. Try changing it to 100ms (0.1 sec) or smaller as needed.

(01 Mar '17, 11:55) cmaynard ♦♦