Hi, I am trying to filter pcap files so I can analyse them and determine whether there are any fast flux networks using my domain name servers. The filters I am using with tshark right now are:

-e ip.src (shows the source of the request, the client)
-e dns.qry.name (shows the domain being queried)
-e dns.resp.ttl (shows the TTL of the DNS query)
-e dns.resp.ns (shows the name server that is providing the answer)
-e dns.qry.type (shows the DNS record type: A, AAAA, MX, ...)

Please let me know if any of my descriptions are wrong. I am trying to think of other filters I can use to get relevant information; any suggestions? I also need to filter the IP address that is being sent back to the client, the answer. Which field does this, dns.resp.addr? Any help you could give me would be great! Thanks very much.

asked 07 Feb '13, 04:34
One Answer:
Just a clarification: why do you think your name servers are being used? DNS fast flux works by adding a huge number of A records for a FQDN and changing them quickly, as well as changing NS records. Both should be impossible with your name server; otherwise somebody has control over your server, and then you have a bigger problem than just DNS fast flux ;-)

Now for the way to detect DNS fast flux: if you search Google you will find a lot of papers. The common strategy is to count the number of A records for a FQDN and to monitor changes of those A records across different answers for the same query. If both take place, it's most likely DNS fast flux, with some possibility for false positives (e.g. pool.ntp.org and similar). So, what you need is the DNS query name (FQDN), all A records of the answer, and a history of NS answers (A records) to monitor any changes. With tshark this would be:
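A sketch of such a command (the capture file name dns.pcap and the exact field selection are assumptions; the field names are the ones used elsewhere in this thread):

    tshark -r dns.pcap -T fields -E "separator=;" -E "occurrence=a" -e dns.qry.name -e dns.resp.addr -e dns.resp.ns -R "dns.flags.response == 1 && dns.qry.type == 0x0001"

Each output line then carries the queried FQDN plus all A-record addresses and NS names returned in that answer, with the multiple values joined by the aggregator character (a comma by default).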
Sample Output:
The sample divewithsharks.hk is a real-world sample (honeynet.org) of DNS fast flux with a modified last octet. Then you need a script (Perl, Python, whatever) to analyze the output of tshark as described above. Obviously you can 'enrich' the output with the DNS TTL, frame time, etc., but that's not really necessary just to detect fast flux. If you need a history and some time analysis, then the frame time would be helpful (-e frame.time). Good luck!

Regards

answered 07 Feb '13, 05:30 Kurt Knochner ♦
edited 05 Mar '13, 13:58
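A minimal sketch of such a post-processing script in Python, assuming the tshark output format from the command above (";" between fields, "," between aggregated addresses) saved to a file named dns_fields.txt; the file name and the thresholds (10 distinct addresses, 3 changing answers) are illustrative assumptions, not tuned detection limits:

    #!/usr/bin/env python
    # Count the distinct A-record addresses seen per FQDN across all answers
    # and flag names whose address set keeps changing (possible fast flux).
    from collections import defaultdict

    seen = defaultdict(set)      # FQDN -> set of all addresses seen so far
    changes = defaultdict(int)   # FQDN -> number of answers that introduced new addresses

    with open("dns_fields.txt") as f:
        for line in f:
            parts = line.rstrip("\n").split(";")
            if len(parts) < 2 or not parts[1]:
                continue
            fqdn, addrs = parts[0], parts[1].split(",")
            new = set(addrs) - seen[fqdn]
            if new and seen[fqdn]:       # this answer added addresses not seen before
                changes[fqdn] += 1
            seen[fqdn].update(addrs)

    # Report names with many distinct addresses AND frequently changing answers.
    for fqdn in seen:
        if len(seen[fqdn]) >= 10 and changes[fqdn] >= 3:
            print("possible fast flux: %s (%d addresses, %d changing answers)"
                  % (fqdn, len(seen[fqdn]), changes[fqdn]))

The same idea extends to the NS history: track the A records of the name servers in a second dictionary and flag names whose NS addresses also rotate.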
Hi Kurt,
Thank you very much for your response, very informative!
Let me answer your questions. I am a student doing a project on fast flux, but I also work for a large datacentre that provides dedicated servers, colo, internet access etc., so we have a very large number of queries coming through our DNS servers. I have captured over 60 million DNS packets from client networks to analyse.
We don't believe that our servers are being used, but I am interested to see if I can detect whether client machines are being used as part of a botnet which is using fast flux as one of its hiding techniques. This should be possible by watching the A records of the domains being looked up by the client machines, correct?
Thanks!
Edit
When I run the command you wrote I get this error (I'm using Windows):

    tshark: "separator" is not a valid field output option=value pair.
    TShark: The available options for field output "E" are:
         header=y|n                    Print field abbreviations as first line of output (def: N: no)
         separator=/t|/s|<character>   Set the separator to use; "/t" = tab, "/s" = space (def: /t: tab)
         occurrence=f|l|a              Select the occurrence of a field to use; "f" = first, "l" = last, "a" = all (def: a: all)
         aggregator=,|/s|<character>   Set the aggregator to use; "," = comma, "/s" = space (def: ,: comma)
         quote=d|s|n                   Print either d: double-quotes, s: single quotes or n: no quotes around field values (def: n: none)
In a Windows cmd shell you'll need to quote the compound parameters, eg.
-E "occurrence=a" -E "separator=;"
Well, it's a different game if you provide DNS services for your customers. Then DNS fast flux could be a problem with your servers.
You should be able to do a first analysis as described above.
Yes
I have been working on the filtering a bit more, but I need some help with something that I'm not sure is possible; still, I will give it a go!
In tshark my current filter looks like:

    -E separator="/t" -T fields -e ip.src -e ip.dst -e dns.qry.name -e dns.resp.addr -R 'dns.flags.response == 1' -R 'dns.qry.type == 0x0001' -R 'dns.count.answers >= 1'
Now this is giving me the information I need, but what I didn't realize is that dns.resp.addr will return all of the answers, the Authoritative name servers and the Additional records. All I am interested in is the answers from this DNS server and not those from the others. So, how can I filter so that I do not show the Authoritative name servers and Additional records?
You could try to add this filter: -R "dns.flags.authoritative == 1".
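A sketch of how that could be folded into a single combined read filter together with the conditions already used above (hypothetical; not the exact command from this thread):

    -R "dns.flags.response == 1 && dns.qry.type == 0x0001 && dns.count.answers >= 1 && dns.flags.authoritative == 1"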
Thanks again Kurt, that seems to have cut it down somewhat.
I have one more question (I'm sure there will be more), and it concerns exporting to a CSV file using the delimiter ",".
The columns I have are: -e ip.src -e ip.dst -e dns.qry.name -e dns.resp.addr
Now, the data dns.resp.addr returns is the address for that domain, but some domains have a ton of IPs, so a long list of IPs is printed. They are separated with a comma, so I set the rest to be delimited with commas as well, so that when I import it into my SQL database it can detect the separate columns. The issue is that it only separates the first 4 addresses, then doesn't bother separating the rest and lumps them into one column. I think this is probably an SQL DB issue, but I want to make sure that it's not an output issue with tshark?
Your "answer" has been converted to a comment as that's how this site works. Please read the FAQ for more information.
Yes, it's probably not a tshark problem. Can you show an example (tshark output) where it happens? How do you import the data into the DB?
BTW: You can use the following flags to change the aggregator and separator characters (see tshark man page).
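For example (a sketch only; pick whichever characters don't collide with your CSV import):

    -E "separator=," -E "aggregator=;" -E "quote=d"

That keeps the columns comma-separated for the CSV import, joins the multiple addresses from dns.resp.addr with ";" instead of ",", and puts double quotes around each field so the address list stays in one column.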