
Using dumpcap on several servers and sending the pcaps to a fileserver/LUN


Hey all,

We have run into a fairly large issue that just doesn't seem to go away. We currently have dumpcap set up as a scheduled task on over two dozen media servers, running all the time. It captures on both NICs of each media server: one NIC handles data, the other handles voice. Right now dumpcap writes the pcaps to our "E" drive. What I would like to do is have all of those servers write to a fileserver or LUN instead. I am aware that I could just put that path into the scheduled task. My question is: has anyone done this before? Do you recommend it, or what would you recommend instead? Also, what did it do to your network saturation?
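(For reference, a dumpcap invocation along these lines captures two NICs into a ring buffer; the interface numbers, file sizes and output path below are placeholders rather than values from the post, and the -w target could just as well point at a share on the fileserver.)

    dumpcap -i 1 -i 2 -b filesize:102400 -b files:500 -w "E:\captures\media.pcapng"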

Thank you in advance!

asked 30 Mar '16, 09:31


JonNolan81
accept rate: 0%


One Answer:


My recent practical experience has shown that creating each new file on a remote fileserver (assuming you use circular file buffers to get a "continuous" capture) takes too long for tcpdump's buffering to absorb, even though my frame rate is only about 4000 frames per second. So I ended up writing tcpdump's output to a local ramdisk and then using a script to move the data to its final destination on the "file server" (where another script takes care of compressing it). I admit that NFS is not the fastest transport available, so your environment may be more favourable. I don't expect dumpcap's buffering to behave significantly differently from tcpdump's, and I may have been able to overcome that "file creation lag" by telling tcpdump to use several megabytes of buffer space; it is just something you have to think about and test before you start using remote file storage routinely.
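As an illustration only (the paths, mount points and the 30-second interval are assumptions, not details of my setup), a mover script for a ring buffer on a ramdisk can be as simple as skipping the newest file, which the capture tool is still writing to, and moving the rest:

    #!/bin/sh
    # Minimal sketch: move completed ring-buffer files from a local ramdisk
    # to an NFS mount; the newest file is skipped because it is still open.
    SRC=/mnt/ramdisk/captures
    DST=/mnt/nfs/captures/$(hostname)
    mkdir -p "$DST"
    while true; do
        newest=$(ls -t "$SRC"/*.pcap 2>/dev/null | head -n 1)
        for f in "$SRC"/*.pcap; do
            [ -e "$f" ] || continue            # no files yet
            [ "$f" = "$newest" ] && continue   # still being written
            mv "$f" "$DST"/
        done
        sleep 30
    done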

The other point is whether your media servers' NICs have enough spare bandwidth to more than triple the traffic in the Tx direction while the captured data is being transferred, as each captured byte causes another one to be sent over the network, plus the overhead of the file transfer protocol used. Here I am lucky: I capture on USB, so the Ethernet bandwidth used to copy the files does not compete with the source traffic, and I don't need to throttle the file transfers. In your case, the traffic bursts caused by the file transfers could cause packet loss unless you used traffic policing and gave the signalling priority over the transfer of the capture files, which could mean a significant modification of the "natural" behaviour of the gateways. So think about the peak volume of the source traffic on both NICs and the spare bandwidth available on the signalling NIC (which likely carries much less source traffic than the media one).
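If policing in the network is not an option, you could also throttle the transfers themselves at the sender; as a sketch (the rate limit, paths and hostname here are made up for illustration), rsync's --bwlimit caps the copy rate:

    # Cap the transfer at roughly 100 Mbit/s (--bwlimit takes KiB/s) and
    # delete the local copies once they have been transferred.
    rsync --bwlimit=12500 --remove-source-files \
        /mnt/ramdisk/captures/ fileserver:/captures/$(hostname)/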

The last point is the available bandwidth on the switch interfaces to which the file server is connected: roughly 0.9 Gbit/s of capture data per server (assuming 0.3 Gbit/s per media direction plus 50 Mbit/s per signalling direction, plus some overhead) from each of 10 such sources adds up to around 9 Gbit/s towards the file server, close to saturating a 10 Gbit/s link.
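Spelled out, with the per-direction rates above taken as the assumption, the back-of-the-envelope calculation is:

    # Per server, in Mbit/s:
    #   2 media directions x 300 + 2 signalling directions x 50 = 700,
    #   rounded up to ~900 for pcap and file-transfer overhead.
    # Aggregate for 10 servers:
    echo $(( 10 * 900 )) "Mbit/s towards the file server"   # ~9000 Mbit/s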

answered 30 Mar '16, 10:19


sindy
accept rate: 24%

Thank you Sindy. Those are very good points and ones that I will have to test out. I was thinking of starting with one server and monitoring its behaviour to see if any issues arise. I will update this thread with the results.

(30 Mar '16, 10:24) JonNolan81

A technical remark: I've converted your post into a comment on my answer, as the post itself was not an Answer to your Question. The idea of the site is to build a Q&A knowledge base; see the site FAQ.

(30 Mar '16, 10:28) sindy