In the following question, @hoangsonk49 claims that he has been running tshark continuously for 6 months without running into the memory usage problems predicted by the Wiki.
Last comment/answer of @hoangsonk49:
Presumably he is using the following command:
My understanding of the dissection engine is the following:
If he is running tshark on a link without any capture filter (I'm still waiting for a comment from @hoangsonk49 on that point), he should run into a memory problem sooner or later, as almost every dissector creates at least one entry in the conversation hash tables. Some dissectors also attach additional data to a conversation (e.g. HTTP).
So the hash tables will keep growing for as long as tshark is running. Furthermore, certain dissectors maintain other per-session data structures as well, which increase memory usage even more.
At least that’s my understanding of the dissection engine.
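One way to test this understanding empirically is to sample the process's resident set size over time and see whether it keeps climbing. The sketch below uses a `sleep` command as a stand-in for the long-running capture (my choice for illustration); in practice you would background the actual tshark command and watch its PID instead:

```shell
# Sketch: sample a long-running process's RSS to see whether it keeps growing.
# "sleep 30" is a harmless stand-in; replace it with the real capture, e.g.
#   tshark -i eth0 -w /dev/null &
sleep 30 &
pid=$!

for i in 1 2 3; do
    # ps -o rss= prints the resident set size in kB, without a header
    rss_kb=$(ps -o rss= -p "$pid")
    echo "sample $i: rss=${rss_kb} kB"
    sleep 1
done

kill "$pid"
```

For a real 6-month test you would of course sample far less often (say, once an hour) and log the values, but a steadily rising RSS across samples is exactly what the conversation-table argument above predicts.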
So, if he does not see any increase in memory usage after running tshark for 6 months (just my interpretation of his comments), my understanding of the dissection engine might be wrong.
Any idea why tshark does not crash with an out-of-memory error after running continuously for 6 months?
Is there anything in tshark that clears 'old' data structures while it is running in the way described above? I was not able to find anything like that in the code!?!
asked 22 Jun ‘14, 08:25
Kurt Knochner ♦
edited 22 Jun ‘14, 08:47
Either he's modified the source code directly (in a non-trivial way) or he's seeing very little traffic.
Or, potentially, he's running it via a system like upstart or systemd which automatically restarts crashed processes, in which case it does crash and restart every couple of days and he's just never noticed.
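If a supervisor is suspected, one quick check on a systemd machine (the unit name `tshark.service` here is a made-up example) is to ask systemd how often it has respawned the unit:

```shell
# Hypothetical unit name. NRestarts counts automatic restarts
# (available on systemd >= 235).
systemctl show -p NRestarts tshark.service

# Repeated "Started ..." lines in the journal also betray a process
# that keeps dying and being respawned:
journalctl -u tshark.service | grep -c 'Started'
```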
Tangentially: at one point I had a hack of just a couple of lines which would wipe out all state after each packet, letting tshark run in a "stateless" mode, but I've lost it and I don't know if it ever worked all that well in the first place.
answered 22 Jun '14, 09:18
Maybe the traffic he's capturing doesn't trigger the usual memory-growth issues. He said in his post he was looking at the CAMEL protocol, which I guess means he's capturing M3UA/SCTP (right?)... and a quick peek at the SCTP dissector's code doesn't show it creating conversations the way the UDP and TCP dissectors do. So if all he's capturing is traffic between two SS7 systems or something similar, maybe he just doesn't see the type of packet traffic that would cause the problem.
answered 22 Jun '14, 10:23
I've used Tshark numerous times, but never for a period of 6 months.
But I have used the following on a WinTel machine for anywhere from 10 to 20 days without any errors.
tshark -i 3 -w MyCapture.pcap -s 80 -b filesize:1000000
I dump all my files at 1GB in size. Then I use editcap to extract only the portions I want (usually within 5 to 30 minutes before and/or after the problem) to reduce the 1GB file to a more usable size.
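That editcap step can look like the following; the file names and timestamps here are made up for illustration (ring-buffer capture appends a sequence number and timestamp to the base name), and `-A`/`-B` keep only packets whose timestamps fall inside the given window:

```shell
# Keep only packets between 09:55 and 10:30 on the day of the problem
# (hypothetical file names and times).
editcap -A "2015-04-15 09:55:00" -B "2015-04-15 10:30:00" \
    MyCapture_00042_20150415094500.pcap MyCapture_slice.pcap
```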
I've had 192GB of data over 7 days and 233GB of data over 20 days, but never experienced any such error.
answered 15 Apr '15, 06:17