Usually, when we examine a single TCP flow, we evaluate whether packet loss or a large RTT exists. If both factors exist, how do we judge which one affects the TCP throughput more seriously? Does anybody know a good way to judge this? asked 22 Dec '12, 01:12 chinasan edited 22 Dec '12, 03:03 grahamb ♦
One Answer:
It depends on your application. An application like FTP usually doesn't care too much about the RTT when transferring large files, since there is no request-reply-request-reply communication going back and forth all the time: there is one "give me the file" request, followed by tons of response packets. If you're doing small transfers instead (like requesting thousands of tiny files), the RTT will hurt you a lot more, since you're serializing requests one after the other. So basically what you have to look at is this: how many requests need to be executed one after the other, and how much data is returned each time? The smaller the data and the more requests you serialize, the more a large RTT will hurt you. answered 22 Dec '12, 02:03 Jasper ♦♦
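To make the serialization effect concrete, here is a minimal back-of-the-envelope sketch. The function name and all numbers are illustrative assumptions, not measurements; the point is only that each serialized request pays one RTT, so many small requests shift the cost from bandwidth to RTT:

```python
# Rough model: each serialized request costs one RTT, plus the time
# to push all reply bytes through the link. Illustrative numbers only.

def transfer_time(n_requests, bytes_per_reply, rtt_s, bandwidth_bps):
    rtt_cost = n_requests * rtt_s
    data_cost = (n_requests * bytes_per_reply * 8) / bandwidth_bps
    return rtt_cost + data_cost

# One big FTP-style transfer (100 MB, 100 ms RTT, 10 Mbit/s):
# the RTT is paid once, so bandwidth dominates.
print(transfer_time(1, 100_000_000, 0.1, 10_000_000))   # ~80.1 s

# The same 100 MB as 10,000 tiny serialized requests:
# now the accumulated RTTs dominate the transfer time.
print(transfer_time(10_000, 10_000, 0.1, 10_000_000))   # ~1080 s
```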
Mostly we analyze applications such as FTP or HTTP streaming, without many requests. I have looked at some classical TCP throughput formulas (involving RTT, loss ratio, and MSS), and they are very complicated. Is there a simple way to judge which factor affects throughput most seriously?
I don't think there is a simple way, since too many factors come into play here. You need to look at the number of requests, the minimum bandwidth of the connection, the RTT, the TCP initial window size, window size adjustments made by the receiver (which might lead to window-full issues), plus packet loss and recovery-time penalties. Yikes! ;-)
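That said, two of these factors have well-known throughput ceilings that are easy to compare. The "classical formula" mentioned above is usually the Mathis et al. approximation, throughput ≈ (MSS / RTT) × (C / √p) with C ≈ 1.22, while the receiver window caps throughput at rwnd / RTT. A rough sketch, with purely illustrative input values (plug in what you measure in your capture): whichever ceiling is lower is the factor hurting this flow more.

```python
import math

def window_limit_bps(rwnd_bytes, rtt_s):
    # Receiver-window ceiling: at most one full window per round trip.
    return rwnd_bytes * 8 / rtt_s

def mathis_limit_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    # Mathis et al. loss ceiling: ~ (MSS / RTT) * (C / sqrt(p)).
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

rtt = 0.15       # 150 ms, e.g. measured from the SYN/SYN-ACK pair
rwnd = 65_535    # receiver window without window scaling
mss = 1_460
loss = 0.01      # 1% packet loss, e.g. retransmissions / total segments

w = window_limit_bps(rwnd, rtt)
m = mathis_limit_bps(mss, rtt, loss)
print(f"window limit: {w/1e6:.2f} Mbit/s, loss limit: {m/1e6:.2f} Mbit/s")
print("dominant factor:", "RTT/window" if w < m else "packet loss")
```

Note that this comparison deliberately ignores slow start, the initial window, and recovery-time penalties, so treat it as a first triage step, not a full model.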