This is a static archive of our old Q&A Site. Please post any new questions and answers at ask.wireshark.org.

Bandwidth measurements using the IO Graph


I am running some performance tests on a bunch of servers in a closed network. Right now the only approved software I have is Wireshark for this test.

Part of the test is having Wireshark capture raw data and dump it to a file.

For small tests, I have found that the IO Graph works perfectly for extracting the data into Excel; however, very large files cause Wireshark (both 1.x and 2.x) to crash.

I started off with 100 MB files and was able to extract the data, but a simple 2-hour test generated well over 32 log files. I changed the system to generate one long file (5.5 GB) instead, but this caused Wireshark to die when simply trying to open the file.

I then moved to a beefier computer running Windows 7 with 16 GB of RAM and dropped the file size to 2 GB. Simple loading tests work fine, but collecting data from a more diverse environment still caused Wireshark to die.

My goal is to use the IO Graph to report how many bytes went through each IP address (send/receive) for about 50 addresses.

If I stick to the small files, I will spend too much manual time on the reassembly process. I have tried another machine with 32 GB of RAM, and the files still cause Wireshark to lock up (1.x and 2.x).

My current options are: find the magical maximum file size that Wireshark tolerates and use that as a baseline.

Find a way to drop the payload to reduce the file size; however, I am not sure whether the problem is the file size or the number of packets.

The current box is a Windows machine, so maybe find some sort of command-line option to extract all traffic that references one IP address and feed that into the IO Graph.

Any other ideas?

asked 21 Jan '16, 09:52


Brad M
accept rate: 0%


One Answer:


You could try using some of Wireshark's command-line tools to perform all the functions for capturing, editing, merging, and analyzing, then use external software (such as Excel) to build the graphs. Example command lines for steps 1 through 3 are sketched after the tshark example below.

  1. Capturing = use dumpcap (https://www.wireshark.org/docs/man-pages/dumpcap.html); you can stop all unnecessary services on the computer running the capture to prevent packet drops.
  2. Merging = use mergecap (https://www.wireshark.org/docs/man-pages/mergecap.html); you can use this tool to combine multiple captures into a single file if you decide to break the captures into smaller files in Step 1.
  3. Editing = use editcap (https://www.wireshark.org/docs/man-pages/editcap.html); you can use this tool to make any necessary changes to the files. Maybe you have some duplicate packets after merging, or maybe you want to adjust the timestamps, etc.
  4. Analyzing = use tshark (https://www.wireshark.org/docs/man-pages/tshark.html); this is a very powerful tool with many functions. If you are interested in just finding the throughput, the easiest tshark command to use is the following:

tshark -q -z io,stat,1 -r C:\temp\Network_Capture.pcap > C:\temp\Net-Analysis.csv

In the above command, the drive, directory, and file name will depend on your specific setup.
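
For steps 1 through 3, here is a minimal sketch of the corresponding commands; the interface number, ring-buffer sizes, and paths are placeholders to adjust for your system:

dumpcap -i 1 -b filesize:500000 -b files:32 -w C:\temp\Network_Capture.pcap
mergecap -w C:\temp\Merged.pcap C:\temp\Network_Capture_1.pcap C:\temp\Network_Capture_2.pcap
editcap -d C:\temp\Merged.pcap C:\temp\Deduped.pcap

The dumpcap line writes a ring buffer of up to 32 files of roughly 500 MB each (-b filesize takes kilobytes, and the output names get a counter and timestamp appended); mergecap combines the listed files into one; editcap -d drops duplicate packets.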

Now that you have a CSV file, you can open it in any spreadsheet program (such as Excel) and perform graphing, analysis, etc.

Also, all of the above tools are located in Wireshark's main install directory; on a Windows machine that is typically C:\Program Files\Wireshark.

I have found that using the Wireshark GUI for an extended capture is not the best way to perform these types of captures. I actually prefer to use a Linux machine and perform the capture with tcpdump, just because Linux is more stable. :)

answered 21 Jan '16, 11:33


Amato_C
accept rate: 14%

While the tips from @Amato_C will undoubtedly help, they don't address the fundamental issue, which is likely memory exhaustion. See the wiki page on out-of-memory issues for more info. Because Wireshark and tshark maintain lots of state about connections and relationships between packets, they can run out of memory under certain conditions. Using other tools that don't build and retain that state allows you to work with such captures.

Basically, use dumpcap for high-performance capture, and then look at other tools for analysing the I/O rate, e.g. captcp or tcpstat, or maybe one of the other tools listed on the wiki tools page.

(21 Jan '16, 14:20) grahamb ♦

I'd like to add to @Amato_C's explanation regarding why tcpdump may be helpful in this situation, if you have a chance to use it.

As @grahamb says, tshark as well as Wireshark keeps track of state, so for both of them there is always a finite "critical" count of processed packets, which depends on the amount of available memory. dumpcap doesn't store state, but it cannot read packets from a file.

So if you think it would help to split the files into several ones, using IP addresses to choose the right file (as you've suggested that only some IPs may be interesting to you), you have to do that with a capture filter rather than a display filter. With dumpcap, you can do so only while capturing directly from the interface (but you can run several dumpcaps with different capture filters on the same interface simultaneously); with tcpdump, you can use -r to let tcpdump read the contents of a saved file and apply a "capture" filter ("display" filters do not exist in tcpdump anyway) to split a saved file. So it is not so much a question of OS stability in this particular case.
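
As an illustration, with a placeholder interface number, address, and file names:

dumpcap -i 1 -f "host 10.0.0.5" -w C:\temp\host_10-0-0-5.pcap
tcpdump -r C:\temp\Network_Capture.pcap -w C:\temp\host_10-0-0-5.pcap host 10.0.0.5

The first command applies the capture filter while capturing live from interface 1; the second reads an already-saved file and writes out only the packets matching the filter expression.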

(22 Jan '16, 02:01) sindy

I will need to look at some of the answers a bit closer. I have looked at some of the "how not to be a memory hog" advice, but I am somewhat limited in my options. I would rather not deal with 40 files, stitching the data back together into one file for a single extended test event.

Data collection is not an issue, and I do want to collect 100% of all system activity. I had figured out how to split and/or join the files before posting. This should keep the tester happy, at least in that phase of data collection.
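
For reference, the splitting and joining can be done with editcap and mergecap; a rough sketch with placeholder names and sizes, not the exact commands used here:

editcap -c 1000000 C:\temp\Network_Capture.pcap C:\temp\Chunk.pcap
mergecap -w C:\temp\Rejoined.pcap C:\temp\Chunk_00000.pcap C:\temp\Chunk_00001.pcap

editcap -c splits the input into files of one million packets each (the generated chunk names get a counter and timestamp appended); mergecap -w joins them back into a single file.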

tshark -q -z io,stat,1 -r C:\temp\Network_Capture.pcap > C:\temp\Net-Analysis.csv

This would run tshark against the capture file and generate a CSV file; a good start toward automation. If I filter based on IP address or MAC address, I can reduce the data down to something a bit more manageable.
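
The io,stat statistic also accepts a display filter after the interval, so a per-address run might look like the following (the address is just an example):

tshark -q -z io,stat,1,ip.addr==10.0.0.5 -r C:\temp\Network_Capture.pcap > C:\temp\Net-Analysis-10-0-0-5.csv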

Now to figure out the Windows PATH environment variable so I can get this to run under Windows 7 from somewhere other than the install directory.
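
For the current cmd session, and assuming the default install location, something like this makes the tools callable from any directory:

set PATH=%PATH%;C:\Program Files\Wireshark

For a permanent change, edit the PATH variable under Control Panel > System > Advanced system settings > Environment Variables.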

Added: I was able to parse a smaller 500 MB file; it took about 1 minute. However, the 2 GB file is still running after 5 minutes. I am going to have to play a lot with the command line just to get something out that is useful to feed into Excel.

I am very limited in what software I can run in the lab. Wireshark was approved, but that was about it.

More stuff that makes you go: ahh, what the heck is going on?

The first of the 3 Wireshark files would cause issues with tshark and not process; the 2nd and 3rd seem to run fine. Now I am wondering if the capture file was damaged. I set the record option to generate a new file every 2 GB . . . 1st file toast, 2nd file OK, the 3rd file (500 MB) OK.

Is there a way to run a command-line test to see if the Wireshark capture file is 'good'?

(22 Jan '16, 09:05) Brad M