This is a static archive of our old Q&A Site. Please post any new questions and answers at ask.wireshark.org.

wireshark timestamp accuracy


I am trying to measure the packet processing performance of a server. The server process is running on a Linux system, and it is connected to an Ethernet switch. A Windows machine connected to the sniffer port is running Wireshark to capture the packets.

When I capture packets at a rate of 1 packet/sec, using Wireshark running on Windows, the response time obtained is around 400 microseconds. However, when Wireshark is run on the Linux machine itself, the response time obtained is around 200 microseconds.

Why is there such a big difference in the times reported by Wireshark on the two systems? Note that I am calculating the difference between the times at which Wireshark sees the incoming and the outgoing packet. The switch is a Gigabit Ethernet switch and the two systems are connected via Gigabit Ethernet.
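(For concreteness, here is a minimal sketch of that delta calculation in Python with scapy. The capture file name, the use of UDP, and port 5000 are placeholders for illustration, not details of the actual setup.)

    # Sketch only: pair each request with the next response in the capture and
    # print the delta between their capture timestamps.
    # Assumed placeholders: capture file "capture.pcap", requests are UDP packets
    # to port 5000 on the server, responses are UDP packets from port 5000.
    from scapy.all import rdpcap, UDP  # requires scapy (pip install scapy)

    SERVER_PORT = 5000  # hypothetical service port

    request_time = None
    for pkt in rdpcap("capture.pcap"):
        if not pkt.haslayer(UDP):
            continue
        if pkt[UDP].dport == SERVER_PORT:              # request seen by the sniffer
            request_time = float(pkt.time)
        elif pkt[UDP].sport == SERVER_PORT and request_time is not None:
            delta = float(pkt.time) - request_time     # response seen by the sniffer
            print("response time: %.1f microseconds" % (delta * 1e6))
            request_time = None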

Are Wireshark timestamps accurate enough for measuring times in the 100-microsecond range?

Are there any other lightweight tools to collect more accurate timestamps on a system without impacting the performance of the system itself?

Thanks Ramesh

asked 29 Jan '11, 12:13


ramekris
accept rate: 0%


4 Answers:


Does this help you understand? It basically states that you're dependent on the platform you capture on, and the place in the network where you capture.

Lightweight tools? Not really; there's timestamping network hardware for that, like this.

answered 30 Jan '11, 05:18


Jaap ♦
accept rate: 14%

edited 30 Jan '11, 11:09


Guy Harris ♦♦


Here's my 2 cents.

I sometimes work on market trading platforms, and sometimes on micro algorithmic trading platforms. They make money via millions of transactions performed in sub-second times. They (the business folks that run this platform) are always asking us to give them the finest possible resolution when it comes to providing performance data. This allows them to track variable latency in the data coming from the feed source, variable latency as it traverses the network, and finally variability in response time from the trading grid itself. I won't go into the craziness of attempting to archive and analyze microsecond-sampled performance data from hundreds of disparate systems. What I will tell you is that Cisco themselves won't rely on sub-100-microsecond data that comes from their systems. The variability of the bus clock at such fine increments is enough to sway the data. If you're capturing on a high-end router you can possibly trust the ticks to 100µs. If you're capturing on a Cat switch I wouldn't trust the ticks at less than 1ms. If it's a regular ol' PC then I would also not rely on less than 1ms. Virtualized systems are a joke when it comes to bus tick regularity.

answered 01 Feb '11, 12:25


GeonJay
accept rate: 5%


The difference is actually not very big; it's only a couple of hundred microseconds.

If you capture on the server you see a delta time between the incoming and outgoing frames of 200 microseconds, which is the actual local response time of the service.

If you capture with a monitor session on a switch you have to add the forwarding delay of the switch for both directions, which is typically around 100 microseconds for switches operating in Store and Forward mode. So that would explain why you have 200 microseconds plus 2 times 100 microseconds when capturing on the switch instead of on the server itself.
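As a rough back-of-the-envelope check (the 100 microsecond forwarding delay is a typical assumed value, not a measurement from this setup):

    # Sketch of the arithmetic above; both input figures are assumptions.
    service_response_us = 200      # delta seen when capturing on the server itself
    switch_forwarding_us = 100     # typical store-and-forward delay, one direction
    span_port_delta_us = service_response_us + 2 * switch_forwarding_us
    print(span_port_delta_us)      # 400, roughly what the Windows capture reports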

answered 30 Jan '11, 05:25


Jasper ♦♦
accept rate: 18%

The switch delay should be the same for the incoming and outgoing frames, and since I am calculating the difference in timestamps, that should eliminate the switch delay.

How much is the delay from the time a packet is received at the driver until Wireshark timestamps it (for incoming), and the delay from when Wireshark timestamps an outgoing packet until it is put on the wire? Would that be on the order of 100 microseconds?

(30 Jan '11, 06:09) ramekris

I'm not sure how fast Wireshark picks up and timestamps a frame but I guess it is a lot faster than 100 microseconds.

(30 Jan '11, 14:05) Jasper ♦♦

As the reference Jaap gave indicates, Wireshark doesn't timestamp frames; on almost all platforms (the only significant exception I know of is HP-UX), even libpcap doesn't do so; it just uses timestamps supplied by the OS.

The delay between the arrival of the last bit of the packet at the network adapter, and the time stamping in the networking stack, is probably less than 100 microseconds. Time stamps might have a multi-millisecond precision, however.

(30 Jan '11, 16:57) Guy Harris ♦♦


I confess I don't understand the issue. Why do you care about individual packet processing times? Surely you care more about how many packets you process in a time interval. Remember that in any real operation there are likely to be multiple processes reading from and writing to the wire, so there is a queue of stuff waiting to go in or out. Do the measurements with a million packets, and calculate the average and standard deviation.

If you want to find the times taken at different stages, perhaps start with timing a million ICMP echo packets to 127.0.0.1, then to the other server's IP, then to whatever process you have running on the server.

Then repeat the tests with 2, 4, 8, 16 clients all pounding on the server. And repeat with 2, 4, 8, 16 processes on the server. I think you will find that the turnaround for an individual packet is small, and that the standard deviations are large enough that it's no longer safe to assume a normal distribution. Poisson, maybe?
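A minimal sketch of that kind of aggregation, assuming the per-packet response times have already been extracted (the file name "deltas.txt", with one value in microseconds per line, is a hypothetical placeholder):

    # Sketch only: mean and standard deviation over a large sample of response
    # times, as suggested above. Assumes a placeholder file "deltas.txt" with
    # one response time in microseconds per line.
    import statistics

    with open("deltas.txt") as f:
        deltas_us = [float(line) for line in f if line.strip()]

    print("samples: %d" % len(deltas_us))
    print("mean:    %.1f us" % statistics.mean(deltas_us))
    print("stdev:   %.1f us" % statistics.stdev(deltas_us))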

answered 13 Feb '11, 07:44


SGBotsford
accept rate: 0%