This is our old Q&A Site. Please post any new questions and answers at ask.wireshark.org.

Looking for any TCP experts to help explain this!

pcap: https://www.dropbox.com/s/9kk81d59ub1njwe/LB%20nstrace1-6-stream8only.cap?dl=0

We are troubleshooting a slowness issue involving traffic through a load balancer, and we have found a smoking gun: a 9k file is being transferred, hits some packet loss, and then, after what looks like successful retransmission, the load balancer appears to send one packet at a time, waiting about 5 sec before each packet. The result is that it takes 35 sec to transfer a 9k file (7 packets × 5 sec).

In the attached capture, you see:

  1. normal TCP behavior, and the lb attempting to transfer the file to the client (frames 15-22)
  2. some packet loss; the client ACKs for frame 15, about 7 packets back
  3. a pattern of:
    • client ACKs for the lb to retransmit some data
    • lb ACKs that request
    • lb waits 5 sec
    • lb transmits the requested data (this pattern in frames 28-41)

I am confused, because:

  • the window size is fine (66640)
  • yet the load balancer is transmitting only one packet before waiting for an ACK
  • the lb waits 5 sec (an eternity) and never speeds up upon subsequent quick ACKs from the client

I also noted that SACK is actively being used (frames 27-28). Maybe that's confusing something.

Can anyone explain what is going on here? What mechanisms in modern TCP should, after some packet loss, be reducing the number of packets sent and/or increasing the time between them? Could one of these be going haywire? Has anyone seen something like this?

The lb is a NetScaler.

Thanks! :-) Shawn

asked 27 Apr '16, 08:31

shawncarroll

edited 27 Apr '16, 10:29 by Jasper ♦♦

First, a small hint: if you're tracking strange TCP behavior you should not capture on any of the nodes involved. As far as I can tell you captured on the loadbalancer (10.166.28.40), which may not show exactly what was really sent on the wire. Always use an additional PC or a professional capture device to get a fully passive reading via a TAP or SPAN port. It's the only way not to be fooled by a device you suspect of doing things the wrong way (and, I have to add, this is especially true for loadbalancers and traffic shapers, because they sometimes do really crazy stuff with TCP).

Now, for the retransmission in frame 23 - it looks like it's for frame 15, based on the sequence and acknowledgment numbers as well as the TCP payload. My guess is that the retransmission was triggered by a retransmission timeout timer not seeing an ACK for frame 15. In 24 you get the ACK, very likely for the retransmission, not the original. In 26 the server tries to continue with data following frame 22, based on the sequence numbers seen, but the client tells it to send frame 16 again. It looks like the client never got any of the 1466-byte frames at all, and they are now retransmitted one by one.

The problem here is in fact that each of the retransmitted frames takes 5 seconds. This again looks like the retransmission timer having to run out every single time.
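To the question of which mechanisms cut a sender down to one packet after loss: in a Reno-style stack an RTO collapses the congestion window to a single segment, but each returning ACK should grow it back within a few round trips. A toy sketch (illustrative numbers only - this is not NetScaler's implementation):

```python
# Toy model of Reno-style congestion control around a retransmission
# timeout (RTO). Purely illustrative; values are not from the capture.

def reno_window_after_rto(n_acks, cwnd=10.0):
    """Return the cwnd trace (in segments) after an RTO followed by n_acks good ACKs."""
    ssthresh = max(cwnd / 2, 2)   # on timeout: ssthresh = cwnd / 2
    cwnd = 1.0                    # cwnd restarts at a single segment
    trace = [cwnd]
    for _ in range(n_acks):
        if cwnd < ssthresh:
            cwnd += 1             # slow start: one extra segment per ACK
        else:
            cwnd += 1 / cwnd      # congestion avoidance: ~one segment per RTT
        trace.append(cwnd)
    return trace

print(reno_window_after_rto(4))   # [1.0, 2.0, 3.0, 4.0, 5.0]
```

The point: even after a timeout, quick ACKs should restore the window within a few round trips, so a sender that stays at one packet per 5 seconds despite prompt ACKs is not doing normal Reno recovery.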

My advice:

  1. Do simultaneous captures at the client and the server to be able to compare what they see and what they don't. Use additional laptops for this; do not capture on client or server. Check this blog post for reasons why.

  2. Check the loadbalancer for mechanisms that would reject or drop retransmissions from either side if it still has the original in its own buffers. I have seen loadbalancers receiving retransmissions and saying "no, I still have an unacknowledged copy of this packet myself; I'll drop the retransmission and keep waiting for an ACK for the original I already sent", which led to a timely retransmission being blocked by the device in the middle.
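The second mechanism can be sketched as a toy model (purely illustrative - this is not NetScaler code, just the drop-while-unacked behavior described above):

```python
# Toy model of a middlebox that suppresses retransmissions while it
# still holds an unacknowledged copy of the same segment. Illustrative
# only; names and logic are assumptions, not any vendor's implementation.

class Middlebox:
    def __init__(self):
        self.unacked = set()          # sequence numbers sent but not yet ACKed

    def forward_from_server(self, seq):
        if seq in self.unacked:
            return False              # drop: "I already have this in flight"
        self.unacked.add(seq)
        return True                   # forward toward the client

    def ack_from_client(self, seq):
        self.unacked.discard(seq)     # the ACK releases the buffered copy

mb = Middlebox()
assert mb.forward_from_server(1000) is True    # original goes out
assert mb.forward_from_server(1000) is False   # timely retransmission is eaten
mb.ack_from_client(1000)
assert mb.forward_from_server(1000) is True    # passes again only after the ACK
```

In this model the server's fast retransmission never reaches the client; recovery then has to wait for the middlebox's own (much slower) retransmission timer, which matches the 5-second gaps in the trace.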


answered 27 Apr '16, 10:47

Jasper ♦♦

edited 27 Apr '16, 10:53

Thanks Jasper, this is illuminating.

Does the RTO of 4.87 sec make sense to you? Why would it be this high in the first place? I thought it would be a smaller multiple of the round-trip time (about 150 msec).

I think right now the LB is using the default, which is supposed to be Reno or enhanced Reno.

(27 Apr '16, 13:56) shawncarroll

An RTO of more than 3 sec is unusual, yes, but not in a range I'd find highly suspicious. Maybe someone "tuned" something somewhere.

And no, most stacks today use a hard-coded 3 sec RTO, because it almost never fires - except for the odd lost packet at the end of a transmission (which means there's no packet following the lost one by which you can spot that there's a gap). All other situations are recovered via fast retransmission or SACK.
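For reference, a sketch of the standard RTO estimator from RFC 6298 (which specifies a 1 s lower bound; older stacks used 3 s). Values here are illustrative, not taken from the capture - the point is that on a ~150 ms path the minimum dominates:

```python
# Sketch of the RFC 6298 retransmission-timeout estimator.
# Illustrative only; real stacks add clock-granularity terms and
# may use a different minimum (RFC 6298 says 1 s; older stacks used 3 s).

def rto_estimate(rtt_samples, min_rto=1.0, max_rto=60.0):
    """RTO a standards-following stack would compute from these RTT samples."""
    srtt = rttvar = None
    for r in rtt_samples:
        if srtt is None:
            srtt, rttvar = r, r / 2                        # first measurement
        else:
            rttvar = 0.75 * rttvar + 0.25 * abs(srtt - r)  # beta = 1/4
            srtt = 0.875 * srtt + 0.125 * r                # alpha = 1/8
    return min(max(srtt + 4 * rttvar, min_rto), max_rto)

print(rto_estimate([0.15] * 10))   # 1.0 -- the one-second floor dominates
```

Each timeout then doubles the RTO (exponential backoff), so 1 s → 2 s → 4 s reaches the ~5 s range after two consecutive timeouts - consistent with either backoff or something "tuned".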

With behavior like you're seeing I always investigate the "middleboxes", e.g. your loadbalancer. Current client and server TCP stacks just do not behave the way you're seeing in your trace.

(27 Apr '16, 15:44) Jasper ♦♦

Just following up on this in case anyone is following along: after not much motion at all on the part of Citrix, this known bug was finally identified as the probable cause, with the recommendation to update to a certain firmware version to fix it.

"Retransmission Timeout Causes Network Latency on SSL Connections Through NetScaler" http://support.citrix.com/article/CTX205656

Thanks Jasper!! :-) Shawn

(06 May '16, 12:58) shawncarroll