Developing a model for estimating throughput

Hi all,

Before I explain my query, I'd just like to put it out there that I'm new to Wireshark and protocol analysis. I'm trying to go about learning this the right way, but I'm not quite there yet, hence my query.

So here goes...

We, a large multinational company, are looking to make use of the Microsoft-hosted OneDrive for Business application. Now, I know not everyone is a fan of Microsoft, but that's a different story. What I'm trying to do is establish a model that estimates the impact this application will have on various WAN circuits around the world. So what do I know? I know there are an awful lot of variables to consider (client, network, Internet, server, etc.), so many that some might suggest estimating is impossible. I've tried explaining this to the business, but they want something, even if it only provides a worst-case scenario. I also know a little about the OneDrive application: it uses TCP (SSL); for every file a user attempts to upload it creates a separate TCP/SSL socket; it won't upload a file larger than 3 MB in a single transfer, so anything bigger is chunked up; and it doesn't permit more than five concurrent uploads...
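To make that concrete, here's a rough sketch of how I currently model the client behaviour. The chunk size and concurrency cap are just what I've observed in my captures, not anything confirmed by Microsoft, so treat them as my assumptions:

```python
import math

# My working assumptions from captures (not official figures):
CHUNK_BYTES = 3 * 1024 * 1024   # files over ~3 MB appear to be chunked
MAX_CONCURRENT = 5              # at most five uploads run at once

def chunks_for_file(size_bytes):
    """Number of upload chunks a single file would need."""
    return max(1, math.ceil(size_bytes / CHUNK_BYTES))

def upload_waves(file_count):
    """How many 'waves' of concurrent TCP/SSL sessions a batch of files
    needs, assuming one session per file and the five-upload cap."""
    return math.ceil(file_count / MAX_CONCURRENT)
```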

So how do you estimate this? I thought about calculating the time it would take to transmit over both wired and wireless LANs, the Internet, etc. I've got some basic captures, and from what I can see the server is responding well (a reasonably large window size); the client, too, seems to perform well.
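Since window size seems relevant, one piece of my model is the classic single-connection TCP ceiling: you can't move more than one receive window per round trip, so throughput is bounded by window / RTT. The window and RTT values below are just example numbers:

```python
def tcp_throughput_ceiling_bps(window_bytes, rtt_seconds):
    """Single-connection TCP bound: throughput <= window / RTT."""
    return (window_bytes * 8) / rtt_seconds

# e.g. a 64 KB window over a 100 ms WAN round trip caps one
# connection at roughly 5.2 Mbps, regardless of circuit size:
print(tcp_throughput_ceiling_bps(65535, 0.100) / 1e6)  # ~5.24
```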

I'm trying to devise a calculation for doing this. I've pulled together many captures but am struggling somewhat. Can anybody suggest a way in Wireshark to pull together an average throughput? I guess a lot of this will depend on the number of users uploading at any given time. I've also recently got a license for the Steelhead Analyser software, and from it I can see that clients generally seem to average about 1.4 Mb (the tool reports Mb, not Mbps), no more.
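For what it's worth, here's the crude way I've been pulling an average out of a capture so far, scripted with pyshark (the filename is a placeholder, and I may well be misusing the library):

```python
import pyshark

# Total bytes in the capture divided by its wall-clock span.
# 'upload.pcapng' is a placeholder for one of my capture files.
cap = pyshark.FileCapture('upload.pcapng', keep_packets=False)

total_bytes = 0
first = last = None
for pkt in cap:
    total_bytes += int(pkt.length)
    if first is None:
        first = pkt.sniff_time
    last = pkt.sniff_time
cap.close()

duration = (last - first).total_seconds()
print(f"avg throughput: {total_bytes * 8 / duration / 1e6:.2f} Mbps")
```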

If I used the example of a 20 Mbps circuit and 10 users, can anybody suggest how you would estimate the time it would take to transfer the data, high level only of course? I just think I might be missing something obvious in my current calculations, as my estimates vs. actuals are way out.
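For reference, this is the naive fair-share estimate I've been attempting. The data volume per user and the efficiency factor are guesses on my part, so I'd welcome corrections:

```python
def estimate_transfer_seconds(data_mbytes, circuit_mbps, users,
                              efficiency=0.75):
    """Naive model: each user gets an equal slice of the circuit,
    derated by `efficiency` for TCP/SSL and chunking overhead.
    The 0.75 figure is my guess, not a measured value."""
    per_user_mbps = (circuit_mbps / users) * efficiency
    return (data_mbytes * 8) / per_user_mbps

# e.g. 10 users each pushing 100 MB over a 20 Mbps circuit:
# 20/10 * 0.75 = 1.5 Mbps each -> 100*8/1.5 ≈ 533 s (~9 minutes)
print(estimate_transfer_seconds(100, 20, 10))
```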

Thanks

asked 18 Sep '14, 15:46

dcarr