This is our old Q&A Site. Please post any new questions and answers at ask.wireshark.org.

I'm a newbie to this stuff and I was wondering if anybody could tell me how to simply track visited websites on computers and mobile devices using Wireshark. I've kinda figured out how to use the HTTP filter and Destination Addresses to see websites, but I'm really just making lucky guesses on how to do things. I'm on a Windows laptop using a wireless network without a password. I know that Monitor Mode may be required, but I don't know how to "turn it on". Any help is greatly appreciated! Thanks!

asked 24 Jul '16, 21:48

Turpz

edited 25 Jul '16, 08:27


Unless you are really lucky with your wireless hardware and driver, monitor mode on Windows is not very useful. Now that Npcap (and NDIS 6) is here, it does work in principle, but only with some chipsets, and there are some limitations. But let's assume that you are lucky and could set your Wi-Fi adaptor to monitor mode using the WlanHelper utility from the Npcap suite, after installing Npcap with the wireless capturing option.
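
A minimal sketch of those WlanHelper commands (the interface name "Wi-Fi" below is just an example; WlanHelper also accepts the adaptor's GUID, and the exact syntax may differ between Npcap versions):

```
REM query the adaptor's current operation mode
WlanHelper.exe "Wi-Fi" mode

REM switch the adaptor to monitor mode
WlanHelper.exe "Wi-Fi" mode monitor

REM switch back to normal (managed/STA) mode when done
WlanHelper.exe "Wi-Fi" mode managed
```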

Doing so will, first of all, prevent your PC from connecting through that Wi-Fi adaptor (because monitor mode replaces the STA mode). So unless you are even luckier and have an additional Wi-Fi adaptor you could use, you'll only be able to capture the WLAN traffic of other devices.

The next point is how "visiting a site" works. After you type a URL into your browser's address bar, the browser first asks the DNS subsystem to resolve the domain name part of it. If you have visited that page shortly before, the answer is still available in the cache, so no DNS request is sent over the WLAN. But let's say a request is sent; it is the first bit of information you are interested in.
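
In Wireshark, that first bit of information can be isolated with display filters on the DNS dissector's fields, for example:

```
dns.flags.response == 0
```

shows only the queries (not the responses), and

```
dns.qry.name contains "example"
```

restricts the view to lookups whose name contains a given string (replace "example" with whatever you are after).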

Next, the browser sends an HTTP GET to one of the IP addresses from the DNS response. However, if the web server redirects the GET to an encrypted connection (HTTPS), you will only see the initial GET. If the user himself has opted for HTTPS, you won't be able to read even the contents of the initial GET unless you have access to a key dump from the browser. It is possible to decrypt HTTPS, but you need information about the session keys from the browser, which only some browsers can provide.
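
The unencrypted requests can be shown with a filter like:

```
http.request.method == "GET"
```

As for the key dump: Firefox and Chrome can write their TLS session keys to a file named by the SSLKEYLOGFILE environment variable, and Wireshark can read that file (Edit → Preferences → Protocols → SSL → "(Pre)-Master-Secret log filename"). That obviously requires access to the client machine, so it won't help you with other people's devices.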

From the DNS queries, you can see the domain names, but not the rest of the URLs (the paths to files and any query parameters). The complete URL is only available in the payload of the HTTP GET.
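
For unencrypted traffic, Wireshark even reassembles the complete URL into a field of its own, so you can filter on it or add it as a column:

```
http.request.full_uri
```

(`http.request.uri` gives just the path part, without the host.)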

Next, a single HTML page may refer to many other URLs (pictures, advertisements, scripts), many of them hosted on other servers, so for a single web page visited, you may see several DNS requests and HTTP sessions.

On the other hand, if the user clicks through several pages hosted on the same server, there may be just a single TCP session, as the browser doesn't close it immediately after receiving a response.
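
To look at one such session in isolation, right-click any of its packets and choose "Follow TCP Stream", or filter on the stream index that Wireshark assigns to each TCP session:

```
tcp.stream == 0
```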

P.

answered 25 Jul '16, 09:55

sindy

question asked: 24 Jul '16, 21:48

question was seen: 3,714 times

last updated: 25 Jul '16, 09:55

powered by OSQA