
Why was the IP fragment size required to be a multiple of 8 bytes?


In order to calculate the Fragment Offset, we need to divide the data block's byte offset by 8.

How did we land on the number 8?

Is it that the total packet length (2^16) divided by the fragment offset range (2^13) gives us 8?

(or)

Is it that 2^16 (the total packet length) divided by 8 gives 2^13 (the fragment offset range)?

Which one came first?
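As a minimal sketch of the division being asked about (in Python; the function name is just illustrative, not from any RFC or library):

    # Minimal sketch: a fragment's byte offset into the original payload is
    # stored in the Fragment Offset field in units of 8 bytes, so the byte
    # offset of every fragment (except possibly the last) is a multiple of 8.

    def fragment_offset_field(byte_offset):
        """13-bit Fragment Offset field value for a fragment starting at
        `byte_offset` bytes into the original IP payload."""
        assert byte_offset % 8 == 0, "fragment data must start on an 8-byte boundary"
        return byte_offset // 8

    print(fragment_offset_field(1480))  # second 1480-byte fragment -> field value 185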

asked 14 Mar '13, 00:29

krishnayeddula

edited 14 Mar '13, 10:57

Guy Harris ♦♦


One Answer:


I can't look into the minds of the people who wrote the IPv4 RFC in 1981, but I suspect they decided to use 16 bits in the header for IP fragmentation purposes, used 3 of those bits for flags, and used the remaining 13 bits for the offset. That leaves the fragment offset expressed in units of 8 bytes, so each fragment (except the last) has to be a multiple of 8 bytes long.
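A rough sketch of that layout, assuming the RFC 791 bit positions (reserved bit, DF, MF, then the 13-bit offset); the helper names are just for illustration:

    # Sketch of the 3-flag-bit + 13-offset-bit split in the IPv4
    # flags / fragment-offset word (field widths from RFC 791).

    def pack_flags_and_offset(dont_fragment, more_fragments, byte_offset):
        """Build the 16-bit word: reserved bit, DF, MF, then a 13-bit
        offset expressed in 8-byte units."""
        offset_units = byte_offset // 8
        assert byte_offset % 8 == 0 and offset_units < 2 ** 13
        return (dont_fragment << 14) | (more_fragments << 13) | offset_units

    def unpack_flags_and_offset(word):
        """Return (DF, MF, byte_offset) from the 16-bit word."""
        dont_fragment = (word >> 14) & 1
        more_fragments = (word >> 13) & 1
        byte_offset = (word & 0x1FFF) * 8
        return dont_fragment, more_fragments, byte_offset

    # Fragment starting at byte 1480 with more fragments to follow:
    word = pack_flags_and_offset(0, 1, 1480)
    print(unpack_flags_and_offset(word))  # -> (0, 1, 1480)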

I'm wondering why you'd like to know which came first?

answered 14 Mar '13, 05:34

SYN-bit ♦♦

An example: an IP message of 12,000 bytes (where the payload is 11,980) and an MTU of 1500. The data blocks (application data + TCP header), before dividing by 8, are {0-1479; 1480-2959; 2960-4439; 4440-5919; 5920-7399; 7400-8879; 8880-10359; 10360-11839; 11840-11979}. Values up to 8192 fit in 13 bits; beyond that the fragment offset would spill over 13 bits, so the data block offset needs to be condensed. Since the maximum length of an IP packet is 65,536 (2^16), and 65,536 divided by 8 gives 8192, the fragment offset never exceeds 13 bits.

I'm a little confused by this coincidence, hence my question.
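A small sketch of that same example, assuming a 20-byte IP header with no options, so each 1500-byte fragment carries at most 1480 payload bytes:

    # 11,980 payload bytes, 20-byte IP header, 1500-byte MTU, so each
    # fragment carries at most 1480 payload bytes (already a multiple of 8).

    PAYLOAD_LEN = 11980
    MAX_FRAG_DATA = 1500 - 20   # 1480 payload bytes per fragment

    offset = 0
    while offset < PAYLOAD_LEN:
        length = min(MAX_FRAG_DATA, PAYLOAD_LEN - offset)
        more_fragments = 1 if offset + length < PAYLOAD_LEN else 0
        print(f"bytes {offset}-{offset + length - 1}: "
              f"offset field = {offset // 8}, MF = {more_fragments}")
        offset += length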

(14 Mar '13, 14:11) krishnayeddula

It's not a coincidence. If the maximum IP packet size, in bytes, fits in 16 bits (so it's 2^16-1), and the fragment offset field is 13 bits (so its maximum value is 2^13-1), then the unit of the fragment offset needs to be at least (2^16-1)/(2^13-1) bytes. Throwing the -1 away doesn't really matter, so it's 2^16/2^13, which is 2^(16-13), or 2^3, or 8.

That's what SYN-bit was saying - if they picked a 13-bit fragment offset, and had a 16-bit packet size, the fragment offset would have to be in units of 8 bytes.
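The same arithmetic, spelled out:

    max_length = 2 ** 16 - 1        # largest value of the 16-bit Total Length field
    max_offset = 2 ** 13 - 1        # largest value of the 13-bit Fragment Offset field
    print(max_length / max_offset)  # ~8.0009, so 8-byte units are enough
    print(2 ** (16 - 13))           # dropping the -1s gives exactly 8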

(14 Mar '13, 22:17) Guy Harris ♦♦

Nice explanation, thanks man, it helps.

(27 Sep '13, 15:10) Rummy Khan