Wireshark shows a high volume of TCP analysis flags, including “TCP Previous Segment Not Captured”, “TCP Out-of-Order”, “TCP Dup ACK”, and “TCP Retransmission”.
Case 1 (50000 Max Bytes): 39%, 40%
Case 2 (100000 Max Bytes): 29%, 33%
Case 3 (No Max Bytes, local only): 0.2%, 0.2%
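For reference, the percentages above come from tallying Wireshark's expert-analysis flags against the total TCP packet count. A minimal sketch of that tally (the packet list here is hypothetical, not from the actual capture; a real run would export these flags from the pcap, e.g. with tshark display filters such as `tcp.analysis.retransmission`):

```python
# Sketch: count packets carrying a loss-related Wireshark analysis flag
# as a share of all captured TCP packets. The flag names below are the
# real Wireshark display-filter fields for the symptoms listed above.

LOSS_FLAGS = {
    "tcp.analysis.retransmission",
    "tcp.analysis.out_of_order",
    "tcp.analysis.duplicate_ack",
    "tcp.analysis.lost_segment",   # "TCP Previous Segment Not Captured"
}

def flagged_percentage(packet_flags):
    """packet_flags: one set of analysis flags per captured TCP packet."""
    if not packet_flags:
        return 0.0
    flagged = sum(1 for flags in packet_flags if flags & LOSS_FLAGS)
    return 100.0 * flagged / len(packet_flags)

# Hypothetical sample: 2 of 5 packets carry a loss-related flag.
sample = [
    set(),
    {"tcp.analysis.retransmission"},
    set(),
    {"tcp.analysis.duplicate_ack", "tcp.analysis.out_of_order"},
    set(),
]
print(flagged_percentage(sample))  # 40.0
```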
Based on these results, reducing MAX Bytes Broadcasted (every 100 ms) from 100000 bytes to 50000 bytes had no positive effect on the TCP packet loss. It is also obvious, but worth pointing out, that there is virtually no packet loss on the local client side. (I use the term "packet loss" loosely, given the presence of TCP Dup ACKs alongside TCP Out-of-Order segments; TCP Out-of-Order also sometimes appears flagged as TCP Retransmission.)

Previously, I expected that reducing the bursty nature of the packets would reduce these losses, but the trial going from 100000 to 50000 MAX Bytes showed no difference. (Looking at the Wireshark data at the 100 ms interval, it was not immediately clear whether the bandwidth used dropped from 100000 to 50000 MAX Bytes. There was, however, a definite reduction compared with the previous version of the patch, before the new version that supports the MAX Bytes setting.) Once the increase of the send_buffer is implemented, that should help.
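On the send_buffer point: if the sender is an ordinary socket, the buffer can be enlarged with the standard SO_SNDBUF socket option. A minimal sketch, assuming a 1 MiB target (a value I picked for illustration, not one taken from the application; the OS may clamp or adjust the request, so the granted size should be read back):

```python
import socket

# Sketch: request a larger socket send buffer before connecting.
# 1 MiB is an assumed value, not one taken from the patch under test.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)

# The kernel may adjust the request (e.g. Linux doubles it and caps it
# at net.core.wmem_max), so read back what was actually granted.
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(granted)
s.close()
```

Reading the value back is the only reliable way to confirm the setting took effect, which matters here since the test results suggest a configured value may not have been applied.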
I am beginning to wonder:

1. Whether the MAX Bytes setting was really enabled during the testing.
2. Whether there are other issues on the WAN, such as a firewall (FW) module, since it performs packet inspection on the traffic.
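For what it's worth, point 1 can be checked at the sender: a byte cap per 100 ms amounts to a simple pacing loop, so instrumenting the chunk sizes actually handed to the network would show whether the cap is in force. A sketch of that mechanism under my assumptions (the names, the 50000-byte cap, and the structure are illustrative; the actual patch's implementation is unknown to me):

```python
import time

MAX_BYTES = 50000   # assumed cap per interval (the Case 1 setting)
INTERVAL = 0.1      # 100 ms broadcast interval

def paced_chunks(data, max_bytes=MAX_BYTES):
    """Yield at most max_bytes of data per interval; caller sends each chunk."""
    for start in range(0, len(data), max_bytes):
        yield data[start:start + max_bytes]

def broadcast(data, send):
    for chunk in paced_chunks(data):
        send(chunk)            # instrumenting here reveals the real chunk sizes
        time.sleep(INTERVAL)   # wait out the 100 ms window

# Hypothetical check: 120000 bytes should go out as 50000 + 50000 + 20000.
sent = []
broadcast(b"x" * 120000, sent.append)
print([len(c) for c in sent])  # [50000, 50000, 20000]
```

If the logged chunk sizes exceed the configured cap, the setting was not actually applied during the test.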
asked 08 Mar '14, 09:41
edited 10 Mar '14, 09:08