Current transfer test over a 10 Gigabit link is around 7.3 Mbit/s; not so good.
I see my SYN and SYN/ACK set the window scale factor to 11 at the beginning of the connection.
The ACKs from .171 show good window values, but my initial send window is always 65336 (the default for tcp_wmem). Is this normal?
Is anything here indicative of a window scaling problem that would explain the dismal performance?
asked 23 Aug '13, 18:52
edited 24 Aug '13, 10:12
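To sanity-check the numbers in the question, here is a small sketch (the scale factor of 11, the ~64K window, and the 75 ms RTT are taken from the question and trace; the function names are my own):

```python
def effective_window(advertised: int, scale: int) -> int:
    """Window scaling (RFC 1323/7323): the advertised 16-bit window is
    left-shifted by the scale factor negotiated in the SYN / SYN-ACK."""
    return advertised << scale

def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """A fixed window caps throughput at window / RTT."""
    return window_bytes * 8 / rtt_s

# A scale factor of 11 allows windows up to ~128 MB, so scaling
# itself is not the bottleneck:
print(effective_window(65535, 11))        # 134215680 bytes

# But a window stuck near 64 KB over a 75 ms path caps the rate at
# roughly 7 Mbit/s, which matches the ~7.3 Mbit/s observed:
print(max_throughput_bps(65536, 0.075))   # ~6.99e6 bit/s
```

This suggests the observed rate is consistent with an effective window of about 64 KB, not with a broken scale negotiation.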
In this conversation .116 is only sending data, so there is no need for it to grow its advertised window. The window size in the packets from .116 tells .171 how much data .171 may send without waiting for an ACK; since .171 is not sending data, there is no need to increase that buffer.
The window size sent by .171 is indeed growing from the initial 64K. This only happens a bit later in the trace file, as it is not needed during the slow-start phase of the TCP connection. In slow start the congestion window grows exponentially; once the congestion window becomes larger than the receive window, the receive window is increased as well. It would be interesting to see the whole transfer, as the window should grow to the configured maximum value.
The culprit in your capture file is the round-trip time of 75 ms; have a look at the bandwidth-delay product.
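The bandwidth-delay product tells you how many bytes must be in flight to keep the pipe full. A minimal sketch using the 75 ms RTT from the trace (the bandwidth figures are illustrative):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

# With the 75 ms RTT from the trace:
for mbps in (10, 50, 100):
    print(f"{mbps} Mbit/s -> {round(bdp_bytes(mbps * 1e6, 0.075))} bytes in flight")
```

So at 50 Mbit/s, for example, about 468750 bytes must be in flight, far more than a 64 KB window allows.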
answered 24 Aug '13, 00:06
"Current transfer test over 10gigabit is around 7.3mbit/s; not so good."
This is a transfer over a WAN connection, as the RTT of ~75 ms and the inbound TTL of 56 indicate. So I doubt that you have 10G bandwidth all the way, and I think you need to set your expectations a little lower. ;-)
answered 24 Aug '13, 01:10
No, this is not reasonable. With a 75 ms RTT and 50 Mbit/s of capacity you need 468750 bytes in flight to fill the pipe. However, your trace shows that packets start being lost in the network as bytes_in_flight approaches 400000, at which point the congestion-avoidance algorithm at the sender drops the congestion window.
That is a good question; ask 3 experts and you get 9 suggestions! Here is a good read, and another one. ;-) And here is my suggestion for this scenario: I'd stay below 350000 bytes to avoid packet loss. Sometimes less is more.
I doubt that traffic shaping is in place here. To me this looks more like a result of segmentation offload, which is in place now and maybe wasn't on your old NIC, and is now generating a higher (too high) packet rate.
Good luck in your tuning effort and please post your results!
Ah, and maybe changing your congestion control algorithm might optimize this as well:
answered 24 Aug '13, 22:46
edited 24 Aug '13, 22:51