This is a static archive of our old Q&A Site. Please post any new questions and answers at ask.wireshark.org.

Web page size

0

One of our internal web pages keeps getting slower and slower, and I believe it's because the page keeps getting bigger and bigger. Does anyone know of an easy way to determine a web page's size? I'm using "tcp.reassembled.length" now but was wondering if there is a better way.
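As a sketch of how that filter can be driven from the command line (assuming tshark is available; `web.pcap` is a hypothetical capture file name), you can have tshark print the field per response and sum the column:

```shell
# Sketch: extract the reassembled length of each HTTP response
# (web.pcap is a hypothetical capture file name):
#   tshark -r web.pcap -Y http.response -T fields -e tcp.reassembled.length
# Summing such a column with awk; sample lengths stand in for real output:
printf '%s\n' 14600 98213 55102 | awk '{s += $1} END {print s}'
```

For the sample numbers the awk step prints the total, 167915.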

thanks,

asked 17 Nov '10, 06:28


JimL
accept rate: 0%


2 Answers:

1

It depends... If your web page is delivered over a single TCP session, the length of the reassembled PDUs works fine. If several TCP sessions are involved, I would try an IP-based filter, provided there is no other conversation between the two hosts besides the requested web page.

Another way to quickly lookup the size would be to filter for the TCP stream (e.g. via Stream Index) and look at

Statistics->Summary

There you see the transmitted bytes for the whole capture plus the statistics for your current display filter - of course including a little overhead from ACKs and headers...

BTW: Under Statistics->Conversations you can directly see lots of statistical info about transmitted bytes/packets etc. for TCP-based as well as for IP-based conversations
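The same conversation statistics are available from the command line via tshark's tap interface (a sketch; `web.pcap` is a hypothetical capture name):

```shell
# Sketch: per-conversation byte counts, like Statistics->Conversations:
#   tshark -r web.pcap -q -z conv,ip "ip.addr == 10.0.0.5"
# Totalling the bytes column of such output with awk; the two lines below
# are sample stand-ins for real conversation rows:
printf '%s\n' "10.0.0.5:49152 <-> 10.0.0.9:80 1200 345678" \
              "10.0.0.5:49153 <-> 10.0.0.9:80  800 123456" |
awk '{sum += $NF} END {print sum}'
```

For the sample rows this prints 469134, the combined byte count of both conversations.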

answered 17 Nov '10, 06:39


Landi
accept rate: 28%

edited 17 Nov '10, 06:42

Thanks, there are several TCP sessions. I'll check out Statistics->Conversations.

(17 Nov '10, 11:50) JimL

0

It's much easier to use an HTTP-specific tool like Fiddler or HTTP Analyzer when troubleshooting a web server's performance. You can gather the data from Wireshark (within HTTP, look for content length etc.), but the above tools make it easier to spot issues.

You should troubleshoot this the same way as any other issue:

1) Remove any issues at the network layer. Look for any retransmissions (lost packets) or TCP window sizes (use "tcp.analysis.flags" display filter to see what's going on at a high level).
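The same filter works from the command line; a sketch of tallying what the analysis flags report (`web.pcap` is a hypothetical capture name):

```shell
# Sketch: list the TCP analysis findings per packet:
#   tshark -r web.pcap -Y "tcp.analysis.flags" -T fields -e _ws.col.Info
# Counting how often each finding occurs; the printf line below is a
# sample stand-in for real tshark output:
printf '%s\n' "Retransmission" "Duplicate ACK" "Retransmission" |
sort | uniq -c | sort -rn | head -n 1
```

For the sample data the top line shows the most frequent finding (2 Retransmission), which tells you where to dig first.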

2) Remove HTTP issues by tracking response codes such as 500 server error, 304 not modified (no cache control in place so you're wasting round trips).
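Tracking those response codes can be scripted too (a sketch; `web.pcap` is a hypothetical capture name):

```shell
# Sketch: extract the status code of every HTTP response:
#   tshark -r web.pcap -Y http.response -T fields -e http.response.code
# Tallying a sample list of codes with sort/uniq; real output would be
# piped straight from tshark instead of printf:
printf '%s\n' 200 304 200 500 304 304 | sort | uniq -c | sort -nr
```

For the sample codes the top line is "3 304", i.e. the most common status; a lot of 304s or 500s would point straight at a cache-control or server-side problem.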

3) HTTP 1.0 vs 1.1: There's a reason why 1.1 was created so quickly after 1.0. Make sure you are using the optimization techniques introduced in 1.1 (keep-alive, for example).

4) Any SSL issues? Are you wasting time by having to renegotiate SSL because of short SSL Keepalive timers?

5) Add the DELTA from PREVIOUS packet column and sort by it. Look for the biggest delays then zoom in to see what's going on.
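The delta-sort in step 5 has a command-line equivalent (a sketch; `web.pcap` is a hypothetical capture name):

```shell
# Sketch: print frame number and delta from the previous displayed packet:
#   tshark -r web.pcap -T fields -e frame.number -e frame.time_delta_displayed
# Sorting such two-column output by the delta surfaces the biggest gap;
# the printf line is a sample stand-in for real tshark output:
printf '%s\n' "12 0.000045" "13 2.104330" "14 0.000210" |
sort -k2 -nr | head -n 1
```

For the sample rows this prints "13 2.104330": frame 13 followed a 2.1-second gap, so that is where to zoom in.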

In most of my troubleshooting experience, the problem typically lies in the backend process (web server to app/db server) or in cache control not being implemented properly. And of course the usual suspect of an unoptimized TCP stack rears its ugly head from time to time (see RFC 1323 for window scaling options).

Good luck.

answered 17 Nov '10, 06:43


hansangb
accept rate: 12%

I haven't heard of Fiddler and will look into it. The network is clean, there are no HTTP errors, and there is a hardware load balancer in front of the server doing SSL offloading. This IS an area that's increasing delay. Thanks for all of your ideas; I'll double-check them all.

(17 Nov '10, 11:53) JimL

Fiddler is pretty nice for finding out what happens while a web page loads. I can also recommend the Firefox addon "FireBug" to see which element takes how long to load from the browser's perspective, or if you don't want to use a proxy tool like Fiddler. It's a little tricky to get Firebug working if you've never done it: 1. Install Firebug. 2. Open it (status bar "bug" icon). 3. Click and "Enable" the Net tab. 4. Load the web page.

You'll get a nice bar chart diagram telling you what element was loaded when and how long it took

(17 Nov '10, 16:29) Jasper ♦♦