Wireshark shows a high volume of TCP analysis flags, including "TCP Previous Segment Not Captured", "TCP Out-of-Order", "TCP Dup ACK", and "TCP Retransmission".
Here is the data from the spreadsheet showing the tests:

Case 1: 50000 Max Bytes             39%    40%
Case 2: 100000 Max Bytes            29%    33%
Case 3: No Max Bytes (local only)   0.2%   0.2%

Based on these results, reducing MAX Bytes Broadcasted (every 100 ms) from 100000 bytes to 50000 bytes did not have any positive effect on the TCP packet loss. Also, it is perhaps obvious, but I would like to point out that there is virtually no packet loss on the local client side. (I use the term "packet loss" loosely, given the presence of TCP Dup ACKs along with TCP Out-of-Order segments; it also appears that an out-of-order segment is sometimes flagged as a TCP Retransmission.)

Previously, it was expected that reducing the bursty nature of the traffic would reduce these packet losses, but the change from 100000 to 50000 MAX Bytes did not make any difference. (Looking at the Wireshark data at 100 ms intervals, it was not immediately clear whether the bandwidth used dropped when going from 100000 to 50000 MAX Bytes. However, there was a definite reduction from the previous version of the patch to the new version that supports the MAX Bytes setting.) Once the increase of the send buffer is implemented, that should help.

I am beginning to wonder:

1. Whether the MAX Bytes setting was really enabled during the testing.
2. Whether there are any other issues on the WAN network, such as the firewall module, since it performs packet inspection on the traffic.

asked 08 Mar '14, 09:41 M320charles
edited 10 Mar '14, 09:08
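On the send-buffer point: the actual patch is not shown in this thread, so the following is only a generic sketch of how a TCP socket's send buffer is enlarged via the `SO_SNDBUF` socket option (the 256 KiB value is an arbitrary example; the kernel may clamp the request to a system maximum such as `net.core.wmem_max` on Linux, or adjust it internally):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Read the current (default) send-buffer size.
default_size = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)

# Request a larger send buffer. The kernel may clamp or adjust the
# requested value, so read it back to see what was actually granted.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)

print("default:", default_size, "granted:", granted)
sock.close()
```

Reading the value back with `getsockopt` is the only reliable way to confirm the setting took effect, which is also a cheap way to verify point 1 above (whether a buffer/MAX-byte setting was really active during a test).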
What exactly is your question?
It looks like you are performing some tests with different versions of patches and parameters.
Can you be more precise as to how we can be of help?
To add some details: the servers are Solaris 10 with two VM zones, and the client is Windows 7 with four XP VMs. The problem is that the user is seeing a high rate of TCP retransmissions and a lot of out-of-order packets, and a UDP test shows 35% or greater packet loss. I am trying to find a solution to reduce the volume of loss.
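The 35% UDP loss figure presumably came from a dedicated tool (e.g. iperf); the underlying technique is just counting sequence-numbered datagrams that arrive versus those sent. A minimal loopback sketch of that measurement (all names and the packet count are invented for illustration; over loopback the loss should be near 0%, unlike the WAN path in question):

```python
import socket
import threading

NUM_PACKETS = 200
received = set()

def receiver(sock):
    # Collect sequence numbers until no datagram arrives for 0.5 s.
    sock.settimeout(0.5)
    try:
        while True:
            data, _ = sock.recvfrom(64)
            received.add(int(data.decode()))
    except socket.timeout:
        pass

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = rx.getsockname()[1]
t = threading.Thread(target=receiver, args=(rx,))
t.start()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(NUM_PACKETS):
    tx.sendto(str(seq).encode(), ("127.0.0.1", port))
t.join()

loss_pct = 100.0 * (NUM_PACKETS - len(received)) / NUM_PACKETS
print(f"UDP loss: {loss_pct:.1f}%")
tx.close()
rx.close()
```

Running the same idea between the Solaris zones and the Windows client (rather than loopback) would confirm whether the loss is in the network path or in the application.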