Hello, I've captured packets of an HTTP request made from a defaced webpage that loads several objects from various sources; one of those objects is a Flash animation. I have the captured HTTP request, but I'm unable to load the URI or interpret it because it contains /.../ in it. This is an example of the packet info:

Request Method: GET
Request URI: /.../My_flash.swf?soundswf=http://192.168.20.55/.../flash.swf&autoplay=1&loops=1

Already tried: pointing at that URI address both with the /.../ in it and without it. Either way I still cannot load the Flash file, which indicates that I'm misinterpreting the URI information. Could someone clarify what I'm misinterpreting or missing about the meaning of the /.../ characters in what should otherwise be the URL?

Thanks for your help, "packethunter", but in my case it's not that simple... My question still stands, if it's even answerable... Thanks for the help!

asked 21 Jun '11, 17:20 ner0, edited 22 Jun '11, 04:36
2 Answers:
What catches my eye is the pattern with three dots. A single dot refers to the current directory; two dots are used for directory traversal. Let's say your webserver is configured to store documents in the directory /www/docs:

The URI /my_flash.swf would deliver the file /www/docs/my_flash.swf.

The URI /../my_flash.swf points to the file /www/my_flash.swf. If your webserver processes that request and delivers the file /www/my_flash.swf, you are prone to directory traversal, which is undesired (to say the least).

The URI /.../my_flash.swf would deliver the file /www/docs/.../my_flash.swf. Note that the three dots refer to a directory. This directory would not show in the output of the regular ls command; use ls -a to find it. If ls -a does not show the files, you might have a rootkit at work that keeps the directory hidden from all users.

Everything that follows the question mark is a parameter to be processed by the script running on the web server. The URI from your question would deliver three parameters to my_flash.swf:

soundswf=http://192.168.20.55/.../flash.swf
autoplay=1
loops=1
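As a quick local illustration of the three-dot directory trick: a directory literally named "..." is perfectly legal, and because its name starts with a dot, plain ls hides it. The paths under /tmp below are invented for this sketch.

```shell
# Create a dot-named directory like the one in the captured URI.
mkdir -p /tmp/wwwdemo/docs/...
touch /tmp/wwwdemo/docs/.../my_flash.swf

ls /tmp/wwwdemo/docs       # prints nothing: names starting with "." are hidden
ls -a /tmp/wwwdemo/docs    # lists ".", ".." and "..."
```

This is why the directory only turns up with ls -a; a rootkit that hooks the directory listing would hide it even from that.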
Good hunting! answered 21 Jun '11, 23:44 packethunter

As it turns out, that's about it. There's nothing really wrong or odd about the URI; it's a plain directory traversal method, which means that the target server is that insecure. At first I had problems loading the URI through a web browser, but then it started working — I guess mainly because I was trying all kinds of web browsers except MS Internet Explorer, which was in fact the preferred targeted agent. I've already contacted the hosting service where the content is located so that they can look into it. Thank you very much for your explanation. (23 Jun '11, 10:16) ner0
Looks like you managed to stream your SWF file to the console window. Here are the steps to save it into a file:

First, extract the HTTP request from the trace file. In the packet details pane, right-click on the top line of the HTTP decode. Make sure that the whole HTTP request is marked in the hex view, then select the function "Export Selected Packet Bytes ..." and save the file to any convenient location, say C:\Temp\get.txt.

Next, obtain a copy of netcat and pipe the GET request to netcat:
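The command itself appears to have been trimmed from this answer. Going by the follow-up comments (the "type" rather than "cat", the C: drive, and the backslashes), it was presumably a Windows one-liner along these lines — the filename and target address here are assumptions pieced together from this thread:

```
type C:\Temp\get.txt | nc 192.168.20.55 80 > result.dat
```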
result.dat now holds the HTTP response header followed by the SWF. If you run Wireshark while making the request, you can export the SWF with the almighty function File -> Export -> Objects -> HTTP. answered 22 Jun '11, 07:34 packethunter, edited 23 Jun '11, 12:44

The command you proposed — is it intended for Linux/Unix? Maybe I should have pointed out that I'm using Windows at the moment. Is it possible to gather the GET information and pass it to netcat in Windows? I'm closer than I was when I asked the first question, thanks much man! I'll have a few more tries later when I get home; you've been of great help. (22 Jun '11, 08:02) ner0

From the "type" (rather than "cat"), "C:", and backslashes, the command he proposed is pretty clearly intended for Windows. Yes, there's a version of netcat for Windows. (23 Jun '11, 10:36) Guy Harris ♦♦
Does the snippet represent the complete HTTP request? It is lacking a number of usual fields; most notably, Accept-Encoding, User-Agent, and a few other items are missing.
Malicious web servers often fingerprint the browser and customize the exploit for each victim. Automatic requests made by a bot (or a download agent, for that matter) look slightly different from a user's request.
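To illustrate the fingerprinting point, here is what a complete browser-issued request might look like. All header values below are invented for illustration; only the request line and host are taken from the captured URI:

```
GET /.../My_flash.swf?soundswf=http://192.168.20.55/.../flash.swf&autoplay=1&loops=1 HTTP/1.1
Host: 192.168.20.55
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
```

A server that keys its payload on headers like User-Agent (the thread suggests Internet Explorer was the targeted agent) can serve the exploit only when these headers match, and something harmless otherwise.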
Try to sling the request that you recorded at the malicious web server using netcat and see what you get.
Thanks for the tip, but I'm a bit of a noob on these matters and can't figure out how to use netcat in a way that will get me the result I want, which is finding the exact URL behind the /.../. The only thing I've achieved so far is getting garbled chars in my command console and an endless PC speaker noise. You're right about the download agent: if I change even the page title, it won't reach the URI anymore. The request snippet is not complete, and the IP isn't the correct one either; the fields you mentioned are:

Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0