
Window size always at 65k with .116 sending? Does only the ACK from the destination show the scaled size?


Hello,

The current transfer test over 10 gigabit runs at around 7.3 Mbit/s; not so good.

I see my SYN and SYN/ACK set the window scale factor to 11 at the beginning.

The ACKs from .171 show good window values, but my initial send is always at 65336 (the default for tcp_wmem). Is this normal?

Is there anything indicative of a window scaling problem behind the dismal performance?

asked 23 Aug '13, 18:52

zerobane
accept rate: 0%

edited 24 Aug '13, 10:12


3 Answers:


In this conversation .116 is only sending data, so there is no need for it to grow its window size (the window size in the packets from .116 tells .171 how much data .171 may send without waiting for an ACK, but since .171 is not sending data, there is no need to increase that buffer).

The window size sent by .171 is indeed growing from the initial 64K. This only happens a bit later in the trace file, as it is not needed in the slow-start phase of the TCP connection. During slow start the congestion window grows exponentially; once the congestion window gets larger than the window size, the window size is increased as well. It would be interesting to see the whole transfer, as the window should grow to the configured maximum value.

The real culprit in your capture file is the round-trip time of 75 ms; have a look at the bandwidth-delay product.
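As a back-of-the-envelope illustration (the 75 ms RTT comes from the trace; the link rate is just the nominal figure), the bandwidth-delay product tells you how many bytes must be in flight to keep a path full:

# BDP in bytes = bandwidth (bit/s) * RTT (s) / 8
echo "10 * 10^9 * 0.075 / 8" | bc    # nominal 10 Gbit/s at 75 ms -> 93750000 bytes (~90 MB)

With only ~64 KB in flight, the achievable rate is capped far below the link speed, whatever that speed is.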

answered 24 Aug '13, 00:06

SYN-bit ♦♦
accept rate: 20%

0

"Current transfer test over 10gigabit is around 7.3mbit/s; not so good."

This is a transfer over a WAN connection, as the RTT of ~75 ms and the inbound TTL of 56 indicate. So I doubt that you have 10G of bandwidth all the way, and I think you need to set your expectations a little lower. ;-)
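As a rough sanity check on where the observed number comes from: a single TCP stream is limited to roughly window / RTT, and the measured rate is consistent with an effective window of about 64 KB over this path (the 65,535-byte value below is used purely as an illustrative figure):

# throughput ceiling (Mbit/s) = window (bytes) * 8 / RTT (s) / 10^6
echo "scale=1; 65535 * 8 / 0.075 / 10^6" | bc    # ~6.9 Mbit/s, in the same ballpark as the observed 7.3 Mbit/s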

answered 24 Aug '13, 01:10

mrEEde
accept rate: 20%

Thanks for reply / info guys,

This is on a long-distance private network, but one with fairly heavy congestion / load.

Yes; a 64 KB window size gets me around 7.8 Mbit/s; I'm hoping to get at least 50 Mbit/s (which, ironically, is what it was before the upgrade to 10 Gbit).

Attached is a larger pcap; the entire transfer dump was 500 MB using netcat/dd.

I see the window size from the ACKs increase to about 400,000, then it resets; both the window size and bytes in flight seem to drop down.

My max window size on each side is set to 64MB (in sysctl)
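For reference, these are the knobs I mean (the commands below only read the values back; the 64 MB figure is the configured maximum):

sysctl net.core.rmem_max net.core.wmem_max
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem    # printed as "min default max" in bytes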

(24 Aug '13, 06:21) zerobane

Is it reasonable to have bytes in flight and a BDP of 85 MB on a large-scale WAN?

If not, what's a good expectation / setting?

Could traffic shaping be causing the bytes in flight / RWIN to be limited to 400,000 bytes?

My background in networking is strictly limited to internal appliance/solution environments with 1 to 10 switches that I have complete control over.

(24 Aug '13, 09:59) zerobane

  • "Is it reasonable to have bytes in flight and BDP at 85Megs on a large scale WAN?"

No, this is not reasonable. With a 75 ms RTT and 50 Mbit/s of capacity you need 468,750 bytes in flight to fill the pipe. However, your trace shows that packets start getting lost in the network as we approach 400,000 bytes in flight, and the congestion-avoidance algorithm at the sender then drops the congestion window.
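For reference, that 468,750-byte figure is just the bandwidth-delay product for the target rate:

echo "50 * 10^6 * 0.075 / 8" | bc    # 50 Mbit/s * 75 ms / 8 bits per byte = 468750 bytes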

  • "If not, what's a good expectation / setting?"

That is a good question; ask 3 experts and you will get 9 suggestions! Here is a good read and another one ;-) And here is my suggestion for this scenario: I'd stay below 350,000 bytes to avoid packet loss. Sometimes 'less is more'.
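If you want to enforce that on the receiving host, one knob is the maximum TCP receive buffer, from which the advertised window is derived (how much of the buffer actually shows up as window depends on net.ipv4.tcp_adv_win_scale, so treat the values below as illustrative starting points and verify the result in a new capture):

# illustrative values only: min / default / max receive buffer in bytes
sysctl -w net.ipv4.tcp_rmem="4096 87380 450000"
sysctl -w net.core.rmem_max=450000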

  • "Could traffic shaping be causing the bytes in flight / RWIN to be limited to 400,000 bytes?"

I doubt that traffic shaping is in place here. To me this looks more like the result of segmentation offload, which is in place now, maybe wasn't on your old NIC, and is now generating a higher (too high) packet rate.
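If you want to check that theory, ethtool will show (and let you temporarily switch off) the offload settings on the sending NIC; eth0 below is just a placeholder for your interface name:

ethtool -k eth0                    # list offload settings (tcp-segmentation-offload, generic-segmentation-offload, ...)
ethtool -K eth0 tso off gso off    # disable TSO/GSO for a test transfer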

Good luck in your tuning effort and please post your results!

Ah, and maybe changing your congestion control algorithm might optimize this as well:

sysctl net.ipv4.tcp_available_congestion_control    # algorithms this kernel can use

sysctl net.ipv4.tcp_congestion_control              # algorithm currently in use
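To actually switch, pick one of the algorithms reported as available, for example:

sysctl -w net.ipv4.tcp_congestion_control=cubic    # only if cubic shows up in tcp_available_congestion_control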

answered 24 Aug '13, 22:46

mrEEde
accept rate: 20%

edited 24 Aug '13, 22:51