The road to CREST

Hey, Dave here. I’ve recently sat the CREST Australia exams, which in turn resulted in Asterisk becoming one of the first Australian CREST member organisations. This has been a long (and difficult) journey, and I wanted to share a few of my experiences, thoughts and comments.

First, a bit of background. CREST is the Council of Registered Ethical Security Testers. The organisation was formed in the UK in 2007/2008 [1,2] with the aim of standardising ethical penetration testing and providing professional qualifications for testers. By most accounts CREST has been a big success in the UK; the accreditation was adopted by the UK government, which now requires that all penetration testing be performed by CREST certified testers from CREST approved organisations.

In 2011, the Australian Attorney-General’s Department provided one-off seed funding to establish CREST in Australia, and in 2012 CREST Australia was created as a non-profit organisation [3]. As in the UK, the Australian government’s goal was to provide Australian businesses and government agencies with a means of assuring that security testing work is performed “with integrity, accountability and to agreed standards”.

In July this year I was invited to become part of the technical establishment team for CREST Australia. This was a real honour for me, but at the same time a bit daunting when I considered the calibre of the other individuals and organisations involved. When I first started hearing about CREST Australia, I suspected it might end up being comprised of organisations at the big end of town. I like to think that Asterisk were invited as a representative presence for the many excellent niche information security providers in the market.

The next few months involved a lot of preparation and planning; licensing for the exam IP was obtained from the UK, and hardware for the testing rig was procured, configured and shipped to Australia. Also, July to September involved a lot of study and preparation for me personally. Although I have been doing pen-testing in one form or another since 1997, the CREST syllabus covers a lot of ground, and unless you’re testing regularly on a wide variety of platforms, these exams are no walk in the park.

At the end of September the technical establishment team descended on the bustling metropolis that is Canberra to sit the three exams that CREST Australia offer: CREST Registered Tester (CRT), CREST Certified Tester – Applications (CCT App) and CREST Certified Tester – Infrastructure (CCT Inf). We all sat these three exams over a period of three days; to be brutally honest, this was a horrendous experience – 15+ hours of the hardest exams I’ve ever experienced, in the space of three days. I can’t remember being so stressed in my entire life. Pro tip: don’t try to do all the exams back-to-back.

In the end somehow I pulled the rabbit from the hat and achieved the CCT certification necessary to go on to become an assessor. We spent the next few days learning the ins and outs of exam invigilation (yup, this is a real word), then closed out the week by running the very first CREST Australia exams for a packed house of candidates.

Who knows what the future holds for CREST Australia. Asterisk are hoping that the various arms of government, regulators and corporations will recognise the value of CREST certification and will incorporate it into their evaluation process for pen-testing providers. While there is absolutely no assertion that a pen-testing company needs to be CREST certified in order to deliver quality results, we believe that CREST certification provides clients with a degree of confidence that pen-testing will be performed to a high, repeatable standard.

Personally, I’d really like to see more niche providers in the game. There are a few of us and I think we do great work. I’d like to break down some of the ingrained corporate mentality that for security testing to be done well, it needs to be done by a Big 4 company / IBM.  Maybe CREST is a way for us to start competing on more of a level footing.

TL;DR: CREST exams are really hard; don’t think you can pass without extensive prep and/or experience. If you’re from a client organisation, CREST certified testers and organisations know what they’re on about.

Nightmare on Incident Response Street

Steve and I will be presenting at next week’s ISACA 2012 Annual Conference Perth, held on the 31st of October. We’re pretty damn excited, and not just because of the presentation (we’re going for a fairly radical presentation format, so apologies to everyone who was really looking forward to PPT slides), but also because it’s one of our favourite days: Halloween! Steve and I will be dressing up as Gomez and Morticia from the Addams Family.

Our synopsis, just in case you wanted to learn a bit more:

Over the last several years we have seen the trend of large data breaches continue. Incident response is critical during these incidents and, done well, can protect the reputation and customer base of the organisation involved. In this presentation we will review select case studies of security incidents in 2012, and ask if these ‘black swans’ are really black any more. We will then pose the concept that organisations should assume that the nightmare has already occurred, and discuss the importance of planning your incident response from end to end, including data acquisition and handling, event detection and triage, containment and response, and of course your communication strategy.

Supporting this foray into incident response we’ll also be covering available maturity models, and incident frameworks. The combination of these will give you a good starting point for reviewing and growing the capability you have within your organisation.

So come on down and say ‘hi!’.

Introducing Prenus .. the Pretty Nessus .. thing.

One of my passions in information security is finding new ways to look at old problems. Big problems and small problems, it doesn’t really matter. I’ve found that a really useful way to look at these problems is to visualise them. Last year this led me to hack together Burpdot, a tool that converts Burp Suite log files into Graphviz “DOT” language formatted files for transformation into a graphic. Over the past few months we’ve been spending quite a bit of time with Nessus, and when you’re dumped with tons of hosts and hundreds and hundreds of findings, what are you meant to do?

I don’t believe many would argue that the default Nessus web UI is ideal for analysing bulk data. And I’m not the only person to construct something to parse and process Nessus files into HTML/XLS files; you can see Jason Oliver’s work here. Hence Prenus, the Pretty Nessus .. thing, combining my love of Nessus, visualisation, and hacking stuff together.

Following the same principles as Burpdot, Prenus simply consumes Nessus version 2 exported XML files and outputs the data in a few different formats. Initially, I was interested in finding a better way to analyse the results (personally, I find the Flash web interface frustrating), so the first output was a collection of static HTML files with Highcharts-generated pie and bar graphs.

For example, the below creates a folder called ‘report’ with a bunch of HTML files:

$ prenus -t html -o report *.nessus
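The aggregation behind those overview pages is straightforward if you want to roll your own. Here’s a rough Ruby sketch (not the actual Prenus code) that tallies findings by severity from a Nessus v2 export, using the `ReportItem` elements and `severity` attribute that format actually provides:

```ruby
require "rexml/document"

# Tally ReportItem findings by Nessus severity (0 = info .. 4 = critical).
# A sketch of the sort of aggregation behind an overview page, not a
# copy of Prenus itself.
def severity_counts(xml)
  counts = Hash.new(0)
  REXML::Document.new(xml).elements.each("//ReportItem") do |item|
    counts[item.attributes["severity"].to_i] += 1
  end
  counts
end
```

Feed it the contents of a .nessus file and you get a hash like `{4 => 12, 3 => 40, ...}`, ready for charting.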

The top of the Prenus index, highlighting unique Nessus criticality ratings, and a split of the top 20 hosts

Vulnerability Overview page

Vulnerability Detail

Host Overview

Not wanting to end there, I thought it’d be useful to also generate a simple two-column CSV output that could be consumed and processed by Afterglow.

For example:

$ prenus -t glow *.nessus
46015 (4),
46015 (4),
46015 (4),
53532 (4),
53532 (4),
53532 (4),

But if piped through Afterglow with our configuration file, and then through Graphviz (in this instance Neato), you get something like this (I had to run this from within the afterglow/src/perl/graph folder):

$ prenus -t glow ~/prenus/*.nessus | ./afterglow.pl -t -c ~/prenus/ | neato -v -Tpng -Gnormalize=true -Goutputorder=edgesfirst -o prenus.png

Prenus Afterglow Example
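For reference, Afterglow’s two-node mode (the -t flag) just wants one “source,target” edge per line. Judging by the sample output above, the glow format pairs each plugin (annotated with its severity) against a host; a minimal sketch of that transformation, with the exact column choice treated as an assumption rather than a copy of Prenus’s internals:

```ruby
# Emit "source,target" CSV edges for Afterglow's two-node (-t) mode.
# Pairing "pluginID (severity)" with the host mirrors the sample glow
# output above, but the exact columns are an assumption.
def glow_edges(findings)
  findings.map { |f| "#{f[:plugin]} (#{f[:severity]}),#{f[:host]}" }
end
```

Hosts that share a plugin end up connected through the same plugin node, which is what produces the clustering in the graph below.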

If you prefer a Circos-style graph, you can do that too. The circos output mode generates a tab-formatted table which can be consumed by the Circos “TableViewer” tool. Wrapping that together is relatively simple (I use the word ‘simple’ lightly: getting Circos working on OS X was a bit of a pain due to GD dependencies, but it’s super simple to do on Linux). The following two commands assume the following directory layout:


Executed from within the “tableviewer” folder, the following should create a file called “prenus.png” in the “img” folder.

$ prenus -t circos ~/prenus/*.nessus | bin/parse-table -conf samples/parse-table-01.conf | bin/make-conf -dir data
$ ../../../circos-0.62-1/bin/circos -conf etc/circos.conf -outputfile prenus

Prenus Circos Graph Example
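If you’re curious what that intermediate table looks like, TableViewer’s parse-table script reads a labelled, tab-delimited matrix. A rough sketch of building one (hosts as rows, severities as columns, finding counts as cells); the header conventions parse-table accepts depend on its .conf, so treat the exact layout here as an approximation rather than Prenus’s real output:

```ruby
# Build a labelled, tab-delimited matrix of finding counts per host and
# severity, roughly the shape Circos's TableViewer parse-table consumes.
# Input: { "host" => { severity => count, ... }, ... }
def circos_table(counts_by_host)
  severities = counts_by_host.values.flat_map(&:keys).uniq.sort
  header = (["data"] + severities.map { |s| "sev#{s}" }).join("\t")
  rows = counts_by_host.map do |host, counts|
    ([host] + severities.map { |s| counts.fetch(s, 0) }).join("\t")
  end
  [header, *rows].join("\n")
end
```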

Potentially useful for your analysis, or maybe just some prettiness to add to your reports. Any methods or tools that can help dig through stacks of data are pretty useful to us. The above diagram has a few layout issues, so if you want to analyse just your critical severity issues you can include the “-s 4” flag to prenus.

I’ve got an ad-hoc list of enhancements, which includes:

  1. Construct an EC2 bootstrap to let you deploy this in a throw-away environment, so you don’t have to fight with the damn dependencies like I did
  2. Look at using d3.js for ALL chart generation instead
  3. Perhaps build a standalone Rails app for your own deployment (either local, Heroku or whatever)

Let me know what you think, or, just grab the code and have a play yourself:

Application Security: Your First Steps

One of the areas of information security that Asterisk has a keen interest and involvement in is Application Security. Whilst security of your infrastructure, in particular the perimeter and end-points, has been a focus for a number of years now, most of the important information stored by your business doesn’t usually reside in those locations. Sure, transient remnants of information are always likely to exist on your end-points, but centralised storage and management of sensitive information has been a central enabler for IT since the concept of client/server architecture began. For most people involved in information security, or even information technology, this is not news at all. In fact, it’s the message that organisations like OWASP have been hammering home for over a decade now. Unfortunately, traditional firewalls and anti-virus don’t really help you when it comes to assuring the security of your applications, especially your web applications. On the contrary, your firewalls are usually configured to explicitly allow access to your web applications; that’s what they’re for.

As part of our involvement in the application security space we participate in conferences and events focused on the security of applications. Unfortunately, these events always struggle to draw in the people who would really benefit from this knowledge. I’m talking about the masses of people who actually run their businesses online, or who rely on the Internet for their commerce, and there are lots of us (yes, us too; we use various online services to manage our business).

So where do they start? Can they talk to their IT guy? Can they talk to their AV vendor? If there were an easy solution to securing applications, we would all have done it already, right? And if you’re in the business of relying on your staff, or contracted staff, to build applications for you, then trust us, this is definitely an issue you should be aware of. If you haven’t had an opportunity to read Verizon’s Data Breach Investigations Report for 2012, you should get your hands on it [pdf] (or a really good high-level summary can be found over on Securosis). One of the takeaways from the report is the number of breaches where the vector was hacking via web applications. (The recent report indicates that overall 10% of external hacking incidents leading to breaches were related to web applications. This statistic increases to 54% when looking at organisations with 1,000 or more employees.)

Honestly, starting is the simplest bit: it’s being aware of the problem. Awareness that a lot of attacks are opportunistic in nature, and that you aren’t necessarily a target beyond the fact that you reside in some form on the Internet. The tools and methods employed by these attackers are not a dark art; they’re relatively simple and widely discussed in the industry and by many ‘above-board’ organisations. One such organisation is OWASP, a not-for-profit worldwide organisation focused on improving the security of application software. And you know how they do it? They publish materials and tools, for free, online. Asterisk is so keen to dedicate itself to this cause that two of our founders are local chapter leaders within OWASP.

If step one is increasing your awareness of just how exposed your applications are online, then step two would be dedicating your morning read to some of OWASP’s materials (If I had to choose a starting point, the latest version of the OWASP Top 10 is as good as any), or better yet, finding out when your next OWASP chapter meeting is and heading on down to say ‘hi’.

Don’t give up hope, and don’t worry, this is going to be the first of many posts on how you can start looking a little closer at the security of your applications.

Website defaced? Check your PCs for Malware!

Asterisk were recently called in to assist a not-for-profit in responding to a website defacement incident. Initially it was suspected that there must be some vulnerable web application code on the website, perhaps an un-patched web forum, or old, archived copies of an out-of-date forum. Of course, when you start looking through the web logs and can’t see anything obvious, that’s when you have to widen your net.

So naturally we checked out the FTP logs, and voila, a number of uploads from an IP address in Monaco, placing numerous .php files in numerous folders. This IP address logged into the FTP account using the primary FTP account holder’s username, which we knew had a really complicated password (read: 12+ characters with a mix of upper case, lower case and symbols). After confirming with the client that they stored these credentials in FTP client software on a few computers, the next avenue of investigation was that one of their PCs had been compromised… and there was the root cause.
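If you’re doing a similar triage, a quick first filter over the FTP transfer log is to pull out script uploads. A rough Ruby sketch, assuming a simplified “timestamp ip verb path” log layout (real FTP daemons each have their own format, so adjust the split accordingly):

```ruby
# Flag uploads (STOR) of script files in an FTP transfer log.
# The "timestamp ip verb path" layout here is a simplified stand-in
# for whatever your FTP daemon actually writes.
def suspicious_uploads(lines, extensions: [".php"])
  lines.select do |line|
    _ts, _ip, verb, path = line.split(/\s+/, 4)
    verb == "STOR" && extensions.any? { |ext| path.to_s.end_with?(ext) }
  end
end
```

From there you can group the hits by source IP, which is exactly what pointed us at the Monaco address.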

Whilst this isn’t that uncommon these days, it’s certainly something to consider when you’re responding to website defacement incidents.

Doing a bit of research on the IP address now, you can also see that this attacker went after quite a few other sites.