I am running a scanner against my server. I am trying to understand why TCP conversations 12-17 below have a SYN (sent by the scanner) answered with an RST/ACK by my server. Does anyone know why? Thanks in advance. I am sorry that the data below seems to get reformatted; it should be 4 values per line, where the value is null for the column representing the receiver.
asked 07 Nov '13, 08:16 malhenry, edited 07 Nov '13, 09:54 Jasper ♦♦
2 Answers:
When you sort the packets chronologically, it is obvious that your server only allows 5 concurrent connections from the same client. This seems to be caused by the SOMAXCONN limit, as a thread on stackoverflow.com suggests. When streams 12-17 send their SYNs, streams 7-11 are still in an established state. As soon as the server closes stream 7, the next SYN (stream 18) gets in.

answered 08 Nov '13, 22:22 mrEEde, edited 13 Nov '13, 12:36

Just see that the state is CLOSE_WAIT (the FIN/ACK has arrived) when the SYN of stream 12 comes in... (08 Nov '13, 22:27) mrEEde

Looks like a good possibility. I will check the application software. It would be awesome if you are right! Thanks!! Here is the packet capture: http://cloudshark.org/captures/28fee41588d1 (12 Nov '13, 14:52) malhenry

The trace shows the server is being scanned from a Linux host, and the Windows server is rejecting SYN packets whenever there are more than 5 active connections from the client. (13 Nov '13, 07:49) mrEEde

This may not be the best forum for the following post; if there is a more appropriate one, please let me know. Ultimately I have to have development change our server software in order to pass the PCI scan. One band-aid is to increase the backlog in the listen() call to the number of connections the scanner will request on this port, but this seems like a kludge, so I am trying to figure out a better solution. I wonder if timing is an issue. For example, I notice in tcp.stream 7 that:
- our server responds to the SYN with a SYN/ACK in 0.00004 s
- our server responds to the FIN/ACK with an ACK in 0.00003 s
- our server responds to the ACK with a FIN/ACK in 3.5 s
Is this because of the overhead of tearing down a connection, or is something wrong with our server? Thanks... (13 Nov '13, 11:23) malhenry

Your problem was discussed in http://stackoverflow.com/questions/4709756/listen-maximum-queue-size-per-windows-version . Your server is actively closing idle connections after exactly 5 seconds; this is why you see the FIN/ACK going out that late in stream 7. (13 Nov '13, 12:38) mrEEde

Great link! Can I assume you are saying to examine my TCP keepalive parameter in light of the needs of my application? http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html Thanks. (13 Nov '13, 14:32) malhenry

No, TCP keepalive is not involved here. What I was trying to say is that when a client connects to your server and does not send the expected data (or no data at all), your server will close the connection 5 seconds after the accept(). This is a timer in your server application. (13 Nov '13, 22:44) mrEEde

In looking at stream 7, the only 5-second delta I see is between the 3-way handshake and the FIN/ACK sent by my Windows 2012 server in response to the FIN/ACK sent by the scanner. So the original FIN/ACK is sent by the scanner, and unless I am mistaken, I think my server is responding to a FIN/ACK packet rather than a timer expiring. Comments? In any event, are you referring to the FIN_WAIT_2 timer, http://msdn.microsoft.com/en-us/library/windows/hardware/ff550023(v=vs.85).aspx ? My earlier analysis 4 posts ago was slightly wrong: the ACK and FIN/ACK (packets 65 and 78) sent by my server were in response to the FIN/ACK sent by the scanner. The largest delta between any consecutive packets in stream 7 is between packet 65 (ACK) and packet 78 (FIN/ACK), which are both sent by my server. Is a 3.5 s delay between an ACK and a FIN/ACK normal for a Windows 2008 R2 server on a Hyper-V VM? My server is not under heavy load. Lastly, doesn't the accept() call map to the action of pulling a connection out of the socket receive buffer? If so, I don't think there is a packet that corresponds to this action, and therefore there is no way to tell in Wireshark when an accept() occurred. Am I mistaken? Thanks. (14 Nov '13, 07:52) malhenry

"I think my server is responding to a finack packet, rather than a timer expiring. Comments?" While it is true that the FIN/ACK packet acks the scanner's FIN packet, I would think it is too much of a coincidence that the delta time between frame 41 and frame 78 ( http://cloudshark.org/captures/28fee41588d1?filter=frame.number%3D%3D41%20or%20frame.number%3D%3D78 ) is almost exactly 5 seconds. So my feeling is that the server didn't get notified that a FIN had arrived, and the server wakes up after its own 5-second idle timer pops. (16 Nov '13, 06:46) mrEEde

"In any event, are you referring to the Fin_wait_2 timer?" No, the FIN_WAIT_2 timer starts when your server closes the connection and TCP is waiting for the scanner's ACK to your FIN. (16 Nov '13, 06:49) mrEEde

"Lastly doesn't the accept() call map to the action of pulling a connection out of the socket receive buffer?" Not from the TCP receive buffer, but from the listener's backlog queue. (16 Nov '13, 06:51) mrEEde
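To make the mechanism discussed in this thread concrete, here is a minimal BSD-sockets sketch in C. It is not the poster's actual server code: the port number (9150, taken from the capture), the backlog value of 5, and the 5-second receive timeout are assumptions pulled from the observations above. It shows how a small listen() backlog combined with a 5-second application timer could reproduce both the refused SYNs of streams 12-17 and the late FIN/ACK in stream 7.

```c
/* Minimal sketch (not the poster's actual code) of the behaviour
 * described above: a listener with a backlog of 5, and an application
 * timer that closes any accepted connection that sends no data within
 * 5 seconds. BSD-socket style; Winsock differs mainly in headers and
 * closesocket(). All names and values here are illustrative assumptions. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    if (lsock < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(9150);        /* scanned port in the capture */

    if (bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* A backlog of 5: once 5 connections are queued or being handled,
     * a Windows server answers further SYNs with RST/ACK, as seen in
     * the capture (Linux would typically just drop them instead). */
    if (listen(lsock, 5) < 0) { perror("listen"); return 1; }

    for (;;) {
        int csock = accept(lsock, NULL, NULL); /* takes one entry off the backlog */
        if (csock < 0) continue;

        /* The 5-second idle timer suspected in the thread: give the
         * client 5 seconds to send something, then close. */
        struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
        setsockopt(csock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        char buf[512];
        ssize_t n = recv(csock, buf, sizeof(buf), 0);
        if (n <= 0) {
            /* timeout (n < 0) or client FIN (n == 0): tear the connection
             * down, which is when the late FIN/ACK in stream 7 would go out */
            close(csock);
            continue;
        }
        /* ... handle the request, then close ... */
        close(csock);
    }
}
```

Under this reading, increasing the backlog argument of listen() (the "kludge" mentioned above) would indeed make the scanner happy, since extra SYNs would be queued instead of refused; a cleaner fix is for the application to accept() and service or close connections more promptly.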
That's a rather bad idea and unfortunately a (common) result of totally useless and dumb PCI scans: changing your software just to pass the scan.

First: the tool scans the same port over and over again. What kind of security problem do they (Qualys) expect to find with that kind of scan?

Second: the Qualys scanner appears to be broken, as it scans the server with source port 9150 (which seems to be the destination port on the server), resulting in a TCP reset from either the firewall or the server itself. See frame #4 in http://cloudshark.org/captures/28fee41588d1 . How much sense does that make?

{RANT: So, instead of 'fixing' your server software, go back to Qualys and ask them to fix their scanner ;-)) Maybe your server software is not broken, and it has good reasons to allow only a certain number of concurrent connections from one client; that could be a security measure in itself. So, by 'fixing' that behavior, you might pass a dumb PCI scan, but you might also weaken the security of your server software. But let's be honest: PCI scans are not about improving security. It's all about passing a bunch of dumb tests and getting a certificate ;-) }

BTW: is there a firewall between the client and the server? There should/must be one if you want to pass a PCI audit ;-)). Did you check what the firewall does to those scanning tests? Maybe it's the firewall, and not the server software, that limits the number of concurrent connections.

Regards

answered 14 Nov '13, 08:18 Kurt Knochner ♦, edited 14 Nov '13, 08:20

Just curious: how did you know it was Qualys? Any indication in the trace? (16 Nov '13, 06:54) mrEEde

@malhenry mentioned it in a comment. (16 Nov '13, 14:29) Kurt Knochner ♦
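For contrast with the backlog explanation in the other answer, here is a hedged sketch of what a deliberate per-client connection cap, the "security measure" suggested above, might look like. All names and the limit of 5 are illustrative assumptions, not the poster's software; the closing comment notes why such a policy would look different on the wire from what the capture actually shows.

```c
/* Hedged sketch of the alternative interpretation above: a per-client
 * connection limit as deliberate server policy rather than a backlog
 * artifact. Illustrative only; the names and the limit of 5 are
 * assumptions, not the poster's software. */
#include <stdbool.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define MAX_CLIENTS    64
#define MAX_PER_CLIENT  5   /* assumed cap, mirroring the capture */

static struct { in_addr_t ip; int count; } g_clients[MAX_CLIENTS];

/* Return false if this client already holds MAX_PER_CLIENT connections. */
static bool try_admit(in_addr_t ip)
{
    int free_slot = -1;
    for (int i = 0; i < MAX_CLIENTS; i++) {
        if (g_clients[i].count > 0 && g_clients[i].ip == ip) {
            if (g_clients[i].count >= MAX_PER_CLIENT)
                return false;            /* over the per-client limit */
            g_clients[i].count++;
            return true;
        }
        if (g_clients[i].count == 0 && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return false;                    /* table full: refuse */
    g_clients[free_slot].ip = ip;
    g_clients[free_slot].count = 1;
    return true;
}

/* In the accept loop: admit the connection or close it immediately.
 * Note: a close right after accept() shows up on the wire as a FIN
 * (or an RST with SO_LINGER set to 0), not as the RST/ACK answer to
 * the SYN seen in the capture -- that pattern points at the listen()
 * backlog, as discussed in the other answer. */
void handle_new_connection(int csock, struct sockaddr_in *peer)
{
    if (!try_admit(peer->sin_addr.s_addr)) {
        close(csock);
        return;
    }
    /* ... service the connection; decrement the count when it closes ... */
}
```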
I do not believe the socket is closed at the time of the RST/ACK. Also, TCP conversation 18 seems to be successful after 6 failed connections.
Thanks for the edit, Jasper!
Impossible to tell without further information.
The server is Windows Server 2008 R2 running as a VMware VM. The firewall is disabled, and there is no AV running on the server. The scanner is QualysGuard; it is performing a PCI scan, which involves many ports and tests (which Qualys does not describe in detail). Every time this scan is run, Qualys says there were 7 good connections followed by 3 bad, yet there are more than 10 TCP streams in this capture. Qualys says there are two phases to the scan, discovery and vulnerability testing, and that maybe the discovery packets are not included in their logging. I will see if I can at least post the deltas... not sure I can post the capture file.
Thanks.
O.K., as an alternative, can you please post the output of the following command:
If you want, you can replace the real IP addresses with dummy values (search & replace in a text editor).