This is a static archive of our old Q&A Site. Please post any new questions and answers at ask.wireshark.org.

HTTP 408 Timeout from particular IP


I've uploaded my first Wireshark session to CloudShark. I don't have an SSL key log from the client; I guess I can walk them through configuring one if it comes to that (I assume that will work with an imported snoop log, just as it does in 'real time' in Wireshark?).

https://www.cloudshark.org/captures/00f74e9e979d

64.114.102.2 is the client; 10.1.4.61 is the host/receiver, where snoop was running.

The client is ultimately getting an HTTP 408 from Apache when POST'ing a SAML request. Everything works dandy, except from this particular client's block of IPs.

I've read that 'TCP Previous segment not captured' can be the result of packet loss, or of snoop not keeping up. But does the TCP Dup ACK tell a different story?

Is there anything here which would suggest there's an issue with the client and/or a proxy between us?

I'm not really a 'network guy', so I'm still trying to understand some of this, but I was hoping that if there is anything definitive here, someone might share it.

Thanks for any insights, Jeff

asked 03 Aug '15, 09:10


JHuston
accept rate: 0%


One Answer:


The problem is that none of the client's full-MSS (1368-byte) segments make it to your server, even though the server's full-MSS segments reach the client.
Unless you have control over the VPN routers and their ip tcp adjust-mss configuration towards these IP addresses, the workaround will likely be to add a static route in Solaris and reduce the MTU towards that IP subnet to something less than 1420 bytes (1400 may already do).
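The arithmetic behind those numbers can be sketched as follows (assuming IPv4 with no IP options, so 40 bytes of IP+TCP headers; the 12 bytes are the TCP timestamp option discussed below):

```python
# Sketch of the MSS/MTU arithmetic from this capture (IPv4, no IP options assumed).
IP_HDR = 20    # IPv4 header
TCP_HDR = 20   # TCP header without options
TS_OPT = 12    # TCP timestamp option (10 bytes, padded to 12)

def max_payload(mtu, timestamps=True):
    """Largest tcp.len a host will send for a given interface/route MTU."""
    mss = mtu - IP_HDR - TCP_HDR
    return mss - TS_OPT if timestamps else mss

print(max_payload(1420))  # 1368 -- the segment size that is being dropped here
print(max_payload(1400))  # 1348 -- what a 1400-byte route MTU would allow through
```

So lowering the route MTU to 1400 shrinks the largest segment to 1348 bytes, comfortably below whatever the tunnel can actually carry.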

Regards, Matthias

answered 03 Aug '15, 10:43


mrEEde
accept rate: 20%

Okay Matthias, this makes some sense to me. Although, am I mistaken, or shouldn't my server be limiting its MSS to the one in the initial client packet (1380)? I'm reading something to that effect elsewhere, or doesn't it work that way?

Thanks for the quick response, good information.

Jeff

(03 Aug '15, 13:42) JHuston

The server will offer its MSS based on the MTU size of its interface; it is up to intelligent network devices to reduce the MSS if the path MTU is shrinking. The client actually sees 'your' MSS of 1380 bytes when the SYN-ACK reaches it. As TCP timestamps are also negotiated, the usable payload is further reduced by 12 bytes, so the maximum remaining tcp.len IS 1368 bytes.

However, somehow this is still too large to pass the VPN infrastructure along the path without requiring IP fragmentation.

(03 Aug '15, 13:48) mrEEde

The VPN infrastructure should ideally be reducing the client's MSS in the initial SYN as well then, correct? And if so, is this a common issue? Does ip adjust-mss actually set the max MSS, or does it tell the device to make the adjustment as well?

Thanks so much,

Jeff

(03 Aug '15, 13:57) JHuston

"The VPN infrastruture should ideally be reducing the client MSS in the initial SYN as well then, correct?" Yes, and this is what happened: The SYN packet's MSS option is arriving as 1380, certainly not what the linux client initially sent.
So the we can assume that the VPN infrastructure reduced it to adapt to the believed net MTU size of 1420 bytes in the tunnel.
The largest segment the server sent contained 1368 and was acknowledged so we can deduct it made it to the client successfully.

However, in the reverse direction all 1368-byte segments are reported missing by the server.

"is this a common issue?"
This is not really common, as usually both directions have the same MTU size - if the forward and backward routes follow the same path.

"Does ip adjust-mss actually set the max MSS? or does it tell the device to make the adjustment as well?"

The SYN and SYN-ACK packets are intercepted and the MSS option is modified in flight. As a result, the receiving TCP will not send segments larger than the offered MSS (in fact, 12 bytes less than what was offered, if TCP timestamps are negotiated).
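In other words, the device only ever rewrites the MSS option downward; it never raises it. A toy model of that clamping (function names here are illustrative, not a real API):

```python
# Toy model of in-flight MSS clamping, as done by e.g. Cisco's
# 'ip tcp adjust-mss <clamp>' on an interface (names are illustrative).
def clamp_mss(syn_mss, clamp):
    """Rewrite the MSS option in an intercepted SYN/SYN-ACK:
    lower it to the clamp if it is larger, otherwise leave it alone."""
    return min(syn_mss, clamp)

# A typical Ethernet-sized MSS of 1460 gets rewritten to 1380 on the way in;
# with timestamps negotiated, segments then carry at most 1380 - 12 = 1368 bytes.
offered = clamp_mss(1460, 1380)
print(offered)       # 1380
print(offered - 12)  # 1368
```

That 1368-byte payload is exactly the segment size that is failing to traverse the tunnel in this capture.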

(03 Aug '15, 22:53) mrEEde