Does anybody know (not just "suspect or think") why the IO Graph for a given source-destination IP pair changes its values on the Y-axis when choosing different tick intervals on the X-axis?
I need to measure the maximum bandwidth used by a new application that we want to roll out on our corporate laptops, before we deploy it, so as to analyse the impact on the WAN. The application is DLO from Symantec; it makes automated incremental backups to a NAS (including Outlook PST files) over the SMB2 protocol.
So when I open the IO Graph, I apply a source-destination filter to see only the traffic between the client and the server, but I find different peak values when changing the tick interval on the X-axis. How come? I don't see this explained in the 2nd edition of Laura Chappell's "Wireshark Network Analysis" book. The peak values decrease when I decrease the tick interval; I would have expected the opposite, i.e. that a shorter tick interval means a higher "sampling" frequency, hence statistically missing fewer short peaks and therefore showing higher ones.
I've seen comments saying that the IO Graph averages values between two tick intervals, but has that been confirmed officially? Thx!
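As a side note for anyone wanting to cross-check the graph numbers on the command line: tshark's io,stat statistics print the same per-interval byte counts for any filter and interval. The capture file name and IP addresses below are just placeholders, not values from my setup:

    # per-interval I/O statistics, 1-second buckets, one client/server pair
    tshark -r dlo_backup.pcap -q -z "io,stat,1,ip.addr==192.168.1.10 && ip.addr==192.168.1.20"
    # same data with 0.1-second buckets; per-bucket byte counts come out ~10x smaller
    tshark -r dlo_backup.pcap -q -z "io,stat,0.1,ip.addr==192.168.1.10 && ip.addr==192.168.1.20"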
asked 15 May '14, 03:40
OK, I think I found the answer myself: the amount of traffic is measured over the time interval (the tick interval), so if the time between ticks is shorter, the amount of traffic counted per tick is of course smaller, because bandwidth = amount of traffic per time unit (per second, actually). If the tick interval is e.g. 10x shorter, the amount of traffic per tick will be 10x less, so the "peaks" will be 10x smaller. Laura's book literally says (Chapter 21, p. 500): "... The tick interval indicates how often traffic should be plotted on the graph. If the interval is set to 1 second (the default), data will be examined for one full second and then plotted. ..." Sorry Laura, it just didn't get into my brain at first. Hope you excuse me for quoting a phrase from your book!
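To make the arithmetic concrete, here is a small standalone sketch (plain Python with made-up packet timestamps and sizes, not Wireshark code): a steady 1 Mbit/s stream produces per-tick peaks that shrink 10x each time the tick gets 10x shorter, while dividing by the tick length gives the same bits-per-second figure every time.

    # Sketch of the IO Graph bucketing arithmetic (synthetic data).
    from collections import defaultdict

    # A steady stream: one 1250-byte packet every 10 ms = 1 Mbit/s.
    # Timestamps are kept in integer microseconds to avoid float drift.
    packets = [(i * 10_000, 1250) for i in range(1000)]  # (timestamp_us, bytes)

    for tick_us in (1_000_000, 100_000, 10_000):  # 1 s, 0.1 s, 0.01 s ticks
        buckets = defaultdict(int)
        for ts, size in packets:
            buckets[ts // tick_us] += size  # bytes accumulated per tick
        peak = max(buckets.values())
        tick_s = tick_us / 1e6
        # the per-tick byte count shrinks with the tick; the bit rate does not
        print(f"tick={tick_s:5.2f}s  peak={peak:7d} bytes/tick"
              f"  -> {peak * 8 / tick_s / 1e6:.2f} Mbit/s")

Running it prints a peak of 125000 bytes/tick at 1 s, 12500 at 0.1 s and 1250 at 0.01 s, all of which normalize to the same 1.00 Mbit/s, which is exactly the per-tick scaling described above.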
answered 15 May '14, 06:01