This is a static archive of our old Q&A Site. Please post any new questions and answers at

During reassembly of a protocol running on top of TCP, setting desegment_offset=0 and desegment_len=DESEGMENT_ONE_MORE_SEGMENT seems to deviate from the behavior described in README.dissector. Is this expected?


I am writing a dissector for a custom protocol that runs on top of TCP. A single TCP packet can contain multiple PDUs of this protocol, and a PDU can be split across TCP packets. It is difficult to distinguish the case where the protocol needs more data from the TCP stream to continue dissection from the case where the data is simply bad. I am trying to solve this by collecting up to N packets whenever my dissector can't make sense of the current buffer, setting "desegment_offset = 0; desegment_len = DESEGMENT_ONE_MORE_SEGMENT; return;" to get the next packet each time. Once more than N packets have accumulated, I assume the first packet is bad and drop it from the buffer by setting "desegment_offset = length of first packet; desegment_len = DESEGMENT_ONE_MORE_SEGMENT; return;".

According to README.dissector, this should give me a tvb starting from the second packet and running up to packet N+1. However, I get a tvb starting from the original first packet (the one that was supposed to be dropped from the buffer), up to packet N+1.

Does anyone know why this happens, and what can be done to make it behave as described in README.dissector?

asked 19 Jun '15, 12:17

oleks