Category: Security

  • On APT

    Recently, RSA was attacked by adversaries who targeted their two factor authentication fobs.

    These devices have known MITM issues, but folks still used them because there was so little information out there to say that a better choice is required. RSA liked it that way.

    RSA chose not to discuss the details of the attack, using the old furphy that disclosure will damage their customers (reality: it would damage RSA’s brand). RSA’s silence allowed

    Advanced

    Persistent

    Threats

    to execute the boldest cryptographic information warfare attack since Enigma.

    RSA’s (IMHO) cowardly silence has actually damaged their customers in highly spectacular fashion. RSA told us nothing, so we couldn’t ask our clients to change vendors in a staged way, or to disable access, or put in other controls. We could guess, but business decisions are not made that way.

    Now the brand damage to RSA will truly begin. This is the end of the simple RSA fob. Even if a better algorithm or fob is used, RSA are toast as no one will trust them any more, particularly in the sort of organizations that buy fobs by the pallet.

    APT boosters have said vociferously – “see, it was APT!”. Yep, I agree. It’s one of the few times that truly worthy attacks are out in the open enough for us to get a small glimpse into what’s really going on.

    Unfortunately, due to widespread abuse of the term, APT is the laughing stock of the information security world. The folks who routinely use it with knowledge can’t discuss why APT is any different to the other threats out there today. Everyone else has no clue.

    I’ve seen CSOs give up, thinking that since these attackers are so advanced we surely can’t protect against them, or buy stuff marked “Solves APT TODAY!1!” when in fact hard work is required. Nothing very hard, just simple stuff like input validating every field and no longer tolerating insecure software.
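    To make “input validating every field” concrete, here is a minimal sketch in C of the positive (whitelist) style of validation I mean. The field name and rules are hypothetical; in a real web application this lives in your framework’s validation layer, not in hand-rolled C.

    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    /* Positive validation for a hypothetical "quantity" form field:
     * 1 to 4 digits, nothing else. Anything failing the whitelist is
     * rejected outright rather than "cleaned up". */
    static bool is_valid_quantity(const char *field)
    {
        size_t len = strlen(field);
        if (len == 0 || len > 4)
            return false;
        for (size_t i = 0; i < len; i++)
            if (!isdigit((unsigned char)field[i]))
                return false;
        return true;
    }

    int main(void)
    {
        /* Accepts "42", rejects anything outside the whitelist. */
        return (is_valid_quantity("42") && !is_valid_quantity("42; DROP TABLE")) ? 0 : 1;
    }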

    But for your average CSO, finding out if an application was developed in a secure fashion and that every parameter is validated is impossible. It shouldn’t be. But that’s not the main point of today’s post.

    It’s moderately clear in the fog of active disinformation that the weaknesses used in the RSA, Sony, and PBS hacks are well known and easily exploitable. The solution is like losing weight. There is a simple solution that works – albeit slowly. It’s called eating the right amounts of good food for a year or two and exercising hard every day. Anyone who has tried to lose weight, including myself, knows that we really just want an APT strength diet pill.

    I think most of us in our industry will acknowledge that penetration testing has become “different” over the last few years, from literally shooting fish in a barrel with the most rudimentary or no tools, to requiring a fair bit of work, and moving up the value chain to find interesting and exploitable issues the business cares about.

    In terms of results, I think we’re still finding 10-20 things wrong in every app. Attackers need one. This is the attacker’s advantage. The number of weaknesses, the type of weaknesses, and the severity of the weaknesses are NOT “advanced” in any way shape or form in 95%+ of the code reviews and penetration tests I perform. The other 5% have been working with me for a while, are mature risk managers, and they’re hard to attack as a result.

    But because of the hard-core mystique surrounding the use of the term “APT”, we’re seeing completely inappropriate uses of the term everywhere from anti-virus scanners through to security appliances that promise data loss protection but forget that the information security triangle is people-process-technology. Putting one in place doesn’t solve the other two, nor does it negate your responsibilities to put in appropriate controls that PEOPLE can live with to do their JOBS and make the business MONEY.

    My twitter icon is the famous drive around control image:

    Access controls are only for those with easy access

    This is where folks promoting APT fail. I am not denying that the attackers who have found an end run around a widely known security control are

    Advanced

    Persistent

    Threats

    Anyone who targeted a particular firm, utterly broke a long-standing crypto system, and did everything else required to defeat the hardened controls of at least two military-industrial giants is worthy of the term APT.

    Unfortunately, APT as a term is so brand damaged in the info sec community (try saying it at a public event without being openly laughed at), that we have to choose a better one, one that marketers would never dream of using inappropriately. I don’t know what it is, but surely

    Enemy Combatant

    or

    Soon To Be A Small Pile Of Glowing Ash (STBASPOGA, or the more friendly sounding Strasbourg)

    are right up there.

    Worse still, the fact that these Strasbourgs really are APTs doesn’t mean that we should forget to do the hard work; instead it demonstrates the paucity of protective information security research. Some of you might remember me saying a year or two ago that too much attention is paid to those who hack, and not enough to those who defend. Strasbourgs should mean more dollars in pro-active research. We need to make it difficult to develop insecure software. We should make it easy to determine if Acme’s latest release of their widgets is insecure. We should have metrics that easily demonstrate that insecure software costs more. We should make it legally untenable to ship insecure software, and give redress to consumers when their investments, privacy and intellectual property are violated due to stupid, simple weaknesses that we knew about in 1965.

  • Time for something new

    As many of you have probably noticed by now, my larger than life frame is not at AusCERT 2011. This is a shame as it sounds like one of the best AusCERTs in the history of AusCERT. There are a couple of reasons for my absence – flu and the strange case of the disappearing job.

    My services at Pure Hacking are no longer required, and so I need to get on with the job of getting on with the next phase of my life – and that means finding a great job that allows everyone to win.

    There are a couple of options on the table as I write this. But the most intriguing to me right now is to be the advanced gun for hire for consultancies with schedule overload. If you think your consultancy could use me in that fashion even a few times a year, I definitely want to hear from you. If I can make alliances with even a few of you, this could work for us all. This would allow me to work for anyone in the world from my lab here, and would allow consultancies all over the world to plug their scheduling nightmare with one of the best web app sec minds* out there period.

    I have a strong preference for remote telecommuting jobs as I live in a regional city. This doesn’t mean that a full time job in Melbourne is out of the question, but I will be upfront about my need for flexibility (i.e. allow me to work on the train and a day a week at home), or for full time remote working from Geelong. It being 2011, full time or partial telecommuting should not be a difficult decision.

    I know I have a small but loyal readership in this blog, so if you know someone who knows someone, I’m available. I only have a short window before I have to make a decision, so if you’re able to pick me up, I definitely want to hear from you – vanderaj @ greebo . net.

    * Just in case you didn’t know, I was the Project Leader and primary author of the OWASP Developer Guide 2.0, OWASP Top 10 2007 (the one in PCI DSS), and ESAPI for PHP, and I helped set the exam for the SANS GSSP (Java).

  • Upcoming speaking engagements – AusCERT and iTSMF

    I am scheduled to talk or give tutorials at a couple of places so far this year.

    AusCERT

    I am giving a two day Secure Coding tutorial using OWASP’s Application Security Verification Standard.

    This course is different to most security training courses you’ll ever take. It teaches architects, lead developers and developers how to design and code in a positive fashion. You’ll learn about 80 controls over the two days, and complete four hands-on labs and a bunch of demos. Of course, you’ll see me demonstrate ninja levels of breaking crappy applications, but my primary goal is for you to build secure software.

    Now that you want to come, you should bring a laptop able to run a 64-bit VMware VM. As the VM is Linux, it can be converted to KVM, Xen, Parallels, or VirtualBox. You can take the VM home along with the slides and learn even more later.

    This is the cheapest method of getting instructor-led training from me. Registration here. There are about 10 spots left as far as I’m aware.

    itSMF

    Later in the year, I am giving my well received talk at itSMF, an ITIL aligned operations conference, on how to make your security dollars work harder for you. This talk is aimed at CIOs, CISOs, and those tasked with securing their stuff with ever less budget, or ever more capability demanded (or both).

  • OWASP Podcast 82 – Authorship of OWASP Top 10 2007

    Dave Wichers* appears in the latest OWASP Podcast (go get it!). In the podcast, he goes through the huge number of OWASP projects he’s been involved in. There’s no doubt Dave’s massive investment in time, intellectual property, and money have been instrumental to OWASP’s success. Without Jeff and Dave’s leadership and contributions, OWASP would be a far poorer place.

    But… the problem starts when he goes through attribution for the OWASP Top 10, starting around the 17 minute mark. Dave says “Jeff Williams and I basically wrote it” (17:10 onwards), and had various people in OWASP review it, such as Dinis Cruz and myself. This is exactly what happened for the 2004 version. But the way it was said implies that the OWASP Top 10 2007 was also Dave and Jeff’s and I merely reviewed it. I’m sure Dave didn’t mean to omit appropriate attribution (he’s a straight up and down sort of guy), but just in case anyone thinks like I did when listening to the podcast, I’d like to set the story straight:

    The OWASP Top 10 2004 was Jeff and Dave’s. Absolutely agree with this. I’m pretty sure I reviewed it as I was working on the Developer Guide 2.0 at the time.

    The OWASP Top 10 2010 is primarily Jeff and Dave’s efforts. No problems. I gave up leadership of the project sometime in 2008 when I had to concentrate on personal matters. At that time, I had no draft and had made no effort to update the text. Dave’s effort to restart the project didn’t start until after I’d left Aspect. After the draft PPTX was complete, I reviewed drafts of the release candidates, along with about another 30 or so folks.

    The OWASP Top 10 2007 is primarily mine in methodology (strict adherence to MITRE statistics in 2006), research and development, authorship, editing and leadership. For example, I sat down with Raoul Endres in a pho restaurant on a wintry day in Melbourne, Australia, well before I moved to the USA, and worked out the methodology. I delivered a draft to about 30 folks in early January of 2007. Jeff Williams and Dave re-wrote it and included a few items that I disagreed with (effectively two crypto sections that were not representative in the statistics), and dropped important issues that I felt strongly about. You don’t win them all, but I would have loved for these findings to have made it.

    Some of the sections I wrote up in the draft that missed out in the final version:

    • A7 – Malformed input (dropped – a bad call in my opinion as nearly all flaws are due to insufficient input validation and output encoding)
    • A8 – Broken authorization (dropped – a bad call in my opinion, as most of the easily discovered business logic flaws are authorization related)
    • A9 – Insecure cryptography and communications (became A8 – A9 in the final version)
    • A10 – Privilege escalation (dropped – a bad call in my opinion, as attackers try to do this all the time)

    You can see an early draft here. DO NOT USE THIS VERSION – IT’S NOT OFFICIAL!

    I strongly disagreed with the dropping of RFI as it’s one of the biggest reasons that PHP sites are taken over, and PHP is by far the most prevalent server platform. RFI belongs in the OWASP Top 10 probably as the #1 item in the Security Configuration section. There are still millions of sites with this particular flaw.

    Call me hypersensitive to the way Dave phrased just one sentence in 45 minutes, but I want folks to realize that I didn’t dedicate many nights and weekends to the OWASP Top 10 2007 to have that taken away from me in a glossing over of efforts. I also want to make sure that folks understand that I consider Jeff and Dave friends and utterly respect their long time efforts with OWASP.

    * Full disclosure – I worked for Aspect Security between December 2006 and January 2009. Dave and Jeff are founders of Aspect Security and thus my employer during the latter stages of Top 10 2007 gestation. I had a great time at Aspect, worked with amazing customers on cool projects, and have very fond memories of the USA.

  • Need a secure code review? We have slots available

    I don’t normally pimp my employer, but I’d rather be doing secure code reviews than pen tests any day of the week. 🙂

    We have open slots in our schedule for secure code reviews starting from mid March 2011.

    We perform our code reviews against the OWASP Application Security Verification Standard

    • Level 2B – Automated Review using Fortify 360 coupled with a manual verification of 83 items (Architecture, Authentication, Authorization, Session Management, Data Protection, Cryptography, etc)
    • Level 3 – Includes all of the above, but 110 inspection points. The sweet spot of our reviews in my personal opinion.
    • Level 4 – Includes all of the above, plus manual inspection for trojans, backdoors, etc.

    These reviews help folks wishing to comply with PCI DSS or PCI PA DSS, or who just wish to know that their websites are safe and secure.

    If you’d like to discuss things further, please e-mail avanderstock (at) purehacking.com.

  • Take Two on Top 10 2010 Security Defenses

    A little while ago, I was thoroughly sick of the usual attack attack attack gumpf, and decided to put up a competition for Top 10 defenses.

    Epic fail.

    Looking back at it, attacking the attackers is not a winning strategy. It’s a fact of human nature that it’s better to be a hot firefighter putting out a fire that costs a million bucks to put right than to be the materials engineer who designs cheap fireproof cladding. I’m burying the hatchet as I burnt a fair bit of goodwill in my original announcement, which was not my intention at all. We still need folks to break stuff and disprove snake oil, so there’s a place for the dark side whether I agree with the focus on the dark side or not.

    Just two nominations made Andrew sad despite the worthiness of the submissions.

    1. Rob Lewis nominated Trustifier http://trustifier.com/ryu/features.html
    2. I nominated Josh Zlatin, a colleague, for the work he has done on PureWAF, extensions for the OWASP Core Rule Set + Mod Security. You can see the results of PureWAF on Pure Hacking’s website, which is behind our WAF in the cloud service. That’s not an invitation to attack us, just sayin’

    Please discuss or vote in the comments section for who you think should get the non-existent gong.

    The Sorta Inaugural 2011 Pure Hacking Top Web App Sec Defenses Competition

    There are a couple of changes. Pure Hacking will be sponsoring the competition in 2011. There will be categories, such as Life Time Achievement, Best Security Architecture, Best Left Field Idea, Best Secure Business Idea, Best Quick and Dirty Defense, Best Educator, and of course Best Defense. I will detail more about the categories as time goes on. I will be getting inappropriate statuettes made with engraving and everything. If you feel like you can donate something to boost the booty, contact me.

    As for nominations, I will keep a running tally of awesomeness from my RSS feeds and other sources. You can nominate your favorite folks and defenses by e-mailing me – vanderaj ( at ) owasp.org. Come December 1, 2011, I’ll put them up for voting at which time I will disclose the prizes.

    So far –

    1. OWASP’s XSS roundtable at the OWASP Summit in Portugal is a worthy nominee. Let’s stamp out XSS.

    2. I think Gunnar Peterson should get a Lifetime Prize just for being Gunnar. If more of us thought like Gunnar, the world would be a safer place and folks would be making a LOT more money than they do today.

    Please keep this competition in mind throughout 2011.

  • Force.com secure code review howto Part 1

    For those of you who have to review unusual platforms, here are my notes for reviewing apps coded in Apex and Visual Force. As I learn more, I might add some additional entries, but I’ve been so constrained with time for so long, don’t hold your breath.

    Terminology and Basics

    Force.com is Sales Force’s SaaS API for ISVs and customers to write custom CRM apps atop the Sales Force platform. To provide some serious platform lock-in, they use a new strongly typed language called Apex. Apex is sort of Java based. Java programmers will be somewhat familiar with its capabilities, but it has some surprising differences. As a reviewer, there’s nothing really head hurty when reading the code, but it’s important to realize it’s not grandpa’s Java you’re looking at.

    Some things you’ll come across:

    Meta data. You’ll see code with associated XML files. This XML data has a lot of stuff going on that describes it and allows Force.com to correctly handle it, particularly static resources. You can’t just ignore meta data – you need to inspect it.

    Visual Force is an MVC-based framework. It appears to act like a tag library with the <apex:… prefix, used inside files with a .page extension. These mimic the traditional Model 1 JSP model. I think most of you will be familiar with this model and will not have too many difficulties in reviewing it. However, there are some asynchronous AJAX helpers (timers, future events, etc) that you will need to be aware of, particularly in relation to race conditions.

    Objects. Sales Force have defined an object interface over their CRM data model. This has some interesting gotchas; in particular, the query language across these objects is called SOQL, and it is pretty much a semi-injection-proof sub-dialect of SQL 99. There will be an entire blog post on those issues, primarily as there are several ways code can be written to be unsafe.

    Triggers. Triggers are executed after users undertake actions within the public site / sand box application. I need to learn more about them before I write about them, but they are the start of the flow of execution after the user does things within the application. If you have custom classes, they are generally called by triggers.

    Bulk importer and Batch Apex. ETL support. I need to learn more about this functionality before I comment.

    Flash and Flex support. Just in case some of the options weren’t scary enough, you can implement your presentation and business logic in a client side language. Sweet. I will not document Flash / Flex support as a) I hate Flash and have it disabled, b) I have yet to see such code in action and I hate slamming or praising things I’ve not used, and c) I don’t have any Flash or Flex tools to build test cases, so it’s going to be hard to nail this one down. Feel free to steal my thunder here if you so desire.

    Web Services. These are traditional SOAP web services. Instead of using WS-Security, Sales Force have implemented their own session manager. Probably a good idea since no one besides Gunnar Peterson understands WS-Security. However, we all know that web services can be a minefield, so I will experiment with them and see how things work in a much later article.

    Ajax. The Ajax API is one of the newest, and allows Javascript to make pretty much any call to the web services back end that a traditional SOAP web service can. Without WS-Security. Awesome. I’ll be looking into this issue a bit later as I learn more.

    Some things they did right

    Please don’t take my tone for disparagement, for it is not. There are some cool things Sales Force did right:

    • Everything is escaped by default. You have to add code or an attribute to get this wrong.
    • CSRF protection in every form. You have to do the wrong thing to be CSRFable.
    • The easiest way to do SOQL is sorta magically injection proof. There are injectable ways, but again, you have to work at it.
    • Many defaults chosen by Sales Force are good – SSL by default. Yay. SAML by default for SSO. Yay. GET and POST only. Yay. UTF-8 only. Yay. UCS-2 only. Yay. Illegally encoded Unicode characters are replaced. Yay. Content Type is safe unless you do the wrong thing. Yay.
    • Cookies and headers that you send are escaped. I’m not sure they’re properly escaped yet, but they are escaped.
    • There are encoders for not just HTML and URL, but for JavaScript and others. Yay
    • To promote code into production out of the sand pit requires at least 75% test coverage. O.M.G. YAY! Tests are also not counted towards billing. There are exactly zero reasons not to test your code.

    This is but a part of the overall list of goodness. But that doesn’t help you figure out how to secure code review things yet.

    The trouble with secure code reviews is several-fold:

    • There are no static code review tools to review Apex code. This is a serious deficiency that will only get worse if others try to emulate Sales Force’s success in crafting an entirely new language and API for their SAAS offerings.
    • The security documentation is relatively sparse, and only gives hints as to how to shoot yourself with XSS, CSRF, SOQL, fine grained access control and other issues. This series is an effort to break through that and provide more documentation.
    • There is a tight coupling between the code in your IDE and the sand box / public site. If you break this nexus, you do not have configuration data. With Sales Force’s “No code” logo, they hide some code and configuration from you. So expect to ask for the login and hope it’s not production.
    • Sales Force have given a lot of thought to security, and many common Java issues are “fixed” or safe by default. But as Apex is a serious systems language, it allows you to shoot yourself in the foot. I don’t know the extent of that yet, but I will find out with some luck.

    If you’re from Sales Force, please don’t worry. I’m not about to give away 0days – I am not a weak minded moron who delights in creating grief with no solutions. This series will be primarily about how to review Force.com code, followed by advice on recommendations for “fixing” it. Which is most likely to be “Do it how Force.com told you to do it in the manuals”.

  • In defense of Microsoft’s SDL

    Richard Bejtlich says on Twitter:

    I would like fans of Microsoft’s SDLC to explain how Win 7 can contain 4 critical remote code exec vulns this month

    I am surprised that Richard – an old hand in our circles – can say such things. It assumes defect free commercial code is even possible, let alone produced by everyone else but MS. As much as we’d all like to have defect free code, it’s just not possible. It’s about risk reduction in a reasonable time frame for an acceptable price. The alternative is no software – either cancelled through cost overruns or delayed beyond use. This is true of the finance industry, health, government, mining, militaries, and particularly ISVs, even ISVs as well funded as Microsoft.

    In the real world,

    • We create building codes to reduce fires, damage from water leaks, damage from high winds, and improve earthquake survivability. But houses still burn down, water floods basements all the time, tornadoes destroy entire towns, and unfortunately, many buildings are damaged beyond repair in earthquakes.
    • SOX requires organizations to have good anti-fraud / governance, yet still IT projects fail and still companies go out of business due to senior folks doing the wrong thing or auditors stuffing up
    • PCI requires merchants and processors to handle CC details properly, yet we still have CC fraud (albeit much less than before PCI)
    • We engineer bridges not to fall down, but they still do.
    • The SDL requires certain calls not to be used. This should prevent common classes of buffer overflow. However, you can still write code like this:
    char *MyFastStrcpy(char *dest, const char *src)
    {
       char *save = dest;
       while(*dest++ = *src++)   /* copies until the NUL terminator, with no bounds check at all */
          ;
       return save;
    }

    Is code calling that function likely to have buffer overflows? Sure is. Standards and better design eliminate stupid issues like the above.
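    For contrast, here’s a minimal sketch of the kind of bounds-checked copy a banned-API standard pushes you towards (think strcpy_s or StringCchCopy on Windows). This portable version just leans on snprintf, and the function name is made up:

    #include <stdio.h>
    #include <string.h>

    /* The destination size travels with the call, and overly long input is
     * rejected instead of silently overflowing the buffer. */
    static int SaferStrCopy(char *dest, size_t dest_size, const char *src)
    {
        if (dest == NULL || src == NULL || dest_size == 0)
            return -1;
        int needed = snprintf(dest, dest_size, "%s", src);
        return (needed < 0 || (size_t)needed >= dest_size) ? -1 : 0;  /* -1 = would truncate */
    }

    int main(void)
    {
        char buf[8];
        if (SaferStrCopy(buf, sizeof buf, "this will not fit") != 0)
            fprintf(stderr, "input too long for buffer, rejected\n");
        return 0;
    }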

    It’s not a perfect world.

    Nearly all the code MS works on dates back to before the SDL push in 2001. Windows 2008 has roots in code started in the late 1980s. They literally have a billion-plus lines of code running around with devs of all competencies poking at it. The idea that there should be zero defects is ludicrous.

    Richard, if you’ve completed a non-trivial program (say over 100,000 lines of code) that does not have a security defect from the time you started writing it, you’re a coding god. Everyone else has to use programs like the SDL to get ahead. Those who don’t, and particularly those that do no assurance work, are simply insecure. This is risk management 101 – an unknown risk is considered “HIGH” until it is evaluated and determined.

    Let’s take the argument another way. If the SDL has failed (and I think it is succeeding), what would be the signs?

    We know empirically that LOC ~= # of security defects. However, the number of critical remotely exploitable issues affecting Windows 7 is dramatically less than that of XP at the same point after release. Like 10x less. That’s an amazing achievement no one else in the entire industry has managed, despite everyone knowing how Microsoft achieved it.

    What are the alternatives? Until Oracle saw the light a few years ago, they had the hilarious “Unbreakable” marketing campaign. Sadly for them, they were all too breakable. See D Litchfield for details. Not reviewing, or keeping dirty secrets secret, does not make things secure. Only through policies requiring security, standards that eliminate insecure calls like dynamic SQL calls or strcpy(), careful thought about security in the requirements process, secure design, secure coding, code reviews, and pen tests to validate the previous steps do you have evidence of assurance that you are actually fairly secure. The SDL is a framework that puts that cycle into motion.
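    As a concrete example of the “eliminate dynamic SQL calls” kind of standard, here is a minimal sketch of a parameterised query using SQLite’s C API; the table, column, and function names are made up for illustration:

    #include <sqlite3.h>
    #include <stdio.h>

    /* User-supplied data is bound to a placeholder, so it can never be
     * interpreted as part of the SQL text itself. */
    static int lookup_user_id(sqlite3 *db, const char *username)
    {
        sqlite3_stmt *stmt = NULL;
        int id = -1;

        if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                               -1, &stmt, NULL) != SQLITE_OK)
            return -1;

        sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            id = sqlite3_column_int(stmt, 0);

        sqlite3_finalize(stmt);
        return id;
    }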

    Oracle got it. They’re now pumping out 30-40+ fixes per quarterly CPU, and have done for several years in a row. I’d prefer 4 remotely exploitable issues once or twice a year rather than 40 every 3 months, thanks. But even so, I’m glad Oracle has jumped on the SDL bandwagon – they are fixing the issues in their code. One day, possibly in about 5 to 10 years, they’ll be at the same or similar level that MS has been at for a few years now.

    I agree that monocultures are bad. I use a Mac and I have been unaffected by malware for some time. But do I believe for even one second that my Mac is secure just because it’s written by Apple and not Microsoft? Not in a million years. Apple have a long way to go to get to the same maturity level that Microsoft had even in 2001.

    All code has defects. Some code has far fewer defects than others, and that code is written by Microsoft in the last few years.

  • Risk Management 103 – Choosing Threat Agents

    A key component in deciding a risk is WHO is going to be doing the attack. The Top 10 threat model architecture depicts a risk path. The image is from the excellent OWASP Top 10 2010, and I will be referencing this diagram a great deal.

    We’re talking about the attackers (threat agents) on the left today. So you’re busy doing a secure code review or a penetration test (how I loathe that term – so sophomoric) and found a weakness. You’ve written up a fantastic finding and need to rate it so that your client (whether internal or external, for money or for free) can do something about it. It’s vital that you don’t under- or over-cook the risk. Undercooking the risk looks really, really bad when you get it wrong and the wrong business decision is made to go live with a bad problem. Overcooking the risk erodes trust, and often leads to the wrong fixes being made or none at all, which is worse. You can tell if you’re overcooking a risk if your clients are constantly arguing with you about risk ratings. Let’s get to a more realistic risk rating first time, every time.

    Risk Management 103 – Establishing the correct actor

    I am more likely to be successful than a script kiddy, who is more likely to be successful than my mum. Unfortunately, there’s just one of me, but there are a million script kiddies out there. That doesn’t mean you should use them as your default threat agent. Script kiddies are simply unlikely to find business logic flaws and access control flaws, such as direct object references. So you should reflect this in your thinking about risk – even though it might be simpler to go with what everyone already knows:

    • Skill level – what sort of skill does the threat agent bring to the table? 1 = my mum, 5 = script kiddy (generous), 9 = web app sec master
    • Discovery – how likely is it that this group of attackers will discover this issue?
    • Ease of exploitation – how likely is it that this group of attackers will exploit this issue?
    • Size of attacker pool – 0 = system admins or similar, 9 = the entire Internet (== script kiddies)

    So you need to do the calculation for the weakness you found for these various groups to determine the maximum likelihood. This often leads into impact. Let’s go with a direct object reference, such as the one in the AT&T attack.

    Likelihood – AJV

    • Skill level – 9 web app sec master
    • Motive – 4 possible reward
    • Opportunity – 7 Some access or resources required
    • Size – 9 anonymous internet users (remember, this attack relied upon a User Agent header for authentication)
    • Ease of Discovery – 7 easy
    • Ease of exploit – 5 easy
    • Awareness – 9 public knowledge
    • IDS – Let’s go with Logged without review (8)

    This brings us to a total of 58 out of 72. I put this as a “HIGH” likelihood in my risk charts.

    Likelihood – Script kiddy

    • Skill level – 3 some technical skills (script kiddy)
    • Motive – 4 possible reward
    • Opportunity – 7 Some access or resources required
    • Size – 9 anonymous internet users (remember, this attack relied upon a User Agent header for authentication)
    • Ease of Discovery – 1 Practically impossible
    • Ease of exploit – 1 theoretical
    • Awareness – 1 Unknown
    • IDS – Let’s go with Logged without review (8)

    This brings us to 34. So we shouldn’t consider script kiddies when there might be a motivated web app sec master on the loose. But is that entirely realistic? Honestly, no.
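    Before we answer that, here is a minimal sketch of the comparison so far, using the factor values above. The HIGH/MEDIUM/LOW cut-offs in the sketch are assumptions for illustration, not the ones from my risk charts:

    #include <stdio.h>

    /* The eight OWASP Risk Rating likelihood factors, each 0-9, summed to a
     * maximum of 72 as in the worked examples above. */
    struct likelihood {
        const char *agent;
        int skill, motive, opportunity, size;
        int discovery, exploit, awareness, detection;
    };

    static const char *band(int total)
    {
        /* Assumed cut-offs for illustration only; use your own chart. */
        if (total >= 46) return "HIGH";
        if (total >= 23) return "MEDIUM";
        return "LOW";
    }

    int main(void)
    {
        struct likelihood agents[] = {
            { "web app sec master", 9, 4, 7, 9, 7, 5, 9, 8 },
            { "script kiddy",       3, 4, 7, 9, 1, 1, 1, 8 },
        };

        for (int i = 0; i < 2; i++) {
            struct likelihood *a = &agents[i];
            int total = a->skill + a->motive + a->opportunity + a->size +
                        a->discovery + a->exploit + a->awareness + a->detection;
            printf("%-20s %2d/72 (%s)\n", a->agent, total, band(total));
        }
        return 0;
    }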

    Who is really going to attack this app?

    Think about WHO is likely to attack the system:

    • Foreign governments – check.
    • Web app sec masters – Our careers are worth more than the kudos.
    • Bored researchers trying to make a name for themselves – check even though quite dumb (see previous bullet)
    • Script kiddies – check but fail. Realistically, unless someone else wrote the script, they wouldn’t be able to do this attack.
    • Trojans – check but fail for the same reason as script kiddies.
    • My mum doesn’t know what a direct object reference is. Not going to happen.
    • Terrorists – check, but seriously, remember that dying by winning lotto, buying a private plane with the winnings, having the plane struck by lightning eight times on its four-leaf-clover-encrusted hull, parachuting out, and then having both the main and the secondary chute fail is more likely than dying in a terrorist attack. Don’t use this unless you’re after Department of Homeland Security money, as everyone else will just laugh at you. Especially if you use it more than once.

    So let’s go with #1 as this is an attack that they would be interested in. They have resources and skilled web app sec masters, so this attack likelihood is a HIGH. So let’s work out the impact for this scenario:

    Sample Impact Calculation

    There’s a lot of subjectivity here. You can close that down significantly by talking it over with your client. This doesn’t mean you should go with LOW every time you have the conversation, but instead set out objective parameters that suit their business and this application. Yes, this takes a fair amount of work. You can either do it before you deliver the report, or you can do it after you deliver the report. If you choose the latter path too often, your reputation as a trusted advisor can be found in the client’s trash bin, along with your reports and the client relationship.

    Let’s do the calculations based upon the sketchy information I have from third hand, unreliable sources and vastly more reliable Tweets. i.e. I’m almost certainly making this up, but hopefully, you’ll get the picture.

    • Loss of confidentiality. Check big time. All data disclosed (9)
    • Loss of integrity. In this case, no data was harmed in the making of this exploit 0
    • Loss of availability. If every government tried it at once, I’m sure there’d be a DoS but let’s be generous and say minimal primary services interrupted (5) as the system would have to be taken offline or disabled after it was discovered
    • Loss of accountability. It’s already anonymous, so 9
    • Financial damage. AT&T is big. Really really big. In the grand scheme of things, this probably didn’t hurt them that much. That said, it has to be in the millions. So let’s go with Minor effect on annual profit (3)
    • Reputation damage. AT&T’s reputation is somewhat already tarnished, so let’s go with loss of major accounts (4) as I’m sure RIM will pick up all of those .mil and .gov accounts very soon now.
    • Non-compliance. PII is about names and addresses, but AFAIK, e-mail addresses are not protected at the moment. Happy to hear otherwise – leave comments. Let’s go with clear violation (5)
    • Privacy violation. 114,000 is the minimum number, so let’s go with 7 and it could tip towards 9

    This gives us 42 / 72, which is a MEDIUM impact (just shy of “HIGH” at 46), giving an overall risk of HIGH. That is about right, and thus it should have been caught by a secure code review and fixed before go-live.
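    Putting the two halves together, here is a minimal sketch of the final step: summing the impact factors above and combining the impact band with the likelihood band. The 3x3 table follows the standard OWASP Risk Rating severity matrix as I understand it (its top band is usually labelled Critical; my charts say Extreme), the 46 boundary comes from the chart above, and the 23 boundary is an assumption:

    #include <stdio.h>
    #include <string.h>

    /* Impact factors for the AT&T example, each 0-9, summed to a max of 72. */
    static int impact_total(void)
    {
        int confidentiality = 9, integrity = 0, availability = 5, accountability = 9;
        int financial = 3, reputation = 4, compliance = 5, privacy = 7;
        return confidentiality + integrity + availability + accountability +
               financial + reputation + compliance + privacy;
    }

    static const char *band(int total)
    {
        if (total >= 46) return "HIGH";    /* boundary from the chart above   */
        if (total >= 23) return "MEDIUM";  /* lower boundary is an assumption */
        return "LOW";
    }

    /* Overall severity = likelihood band x impact band. */
    static const char *overall(const char *likelihood, const char *impact)
    {
        static const char *bands[] = { "LOW", "MEDIUM", "HIGH" };
        static const char *matrix[3][3] = {
            /* impact:        LOW       MEDIUM    HIGH      */
            /* L = LOW    */ { "NOTE",   "LOW",    "MEDIUM"  },
            /* L = MEDIUM */ { "LOW",    "MEDIUM", "HIGH"    },
            /* L = HIGH   */ { "MEDIUM", "HIGH",   "EXTREME" },
        };
        int li = 0, ii = 0;
        for (int i = 0; i < 3; i++) {
            if (strcmp(likelihood, bands[i]) == 0) li = i;
            if (strcmp(impact, bands[i]) == 0) ii = i;
        }
        return matrix[li][ii];
    }

    int main(void)
    {
        int impact = impact_total();                       /* 42/72 => MEDIUM */
        printf("impact %d/72 (%s), overall risk %s\n",
               impact, band(impact), overall("HIGH", band(impact)));
        return 0;
    }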

    Next … Risk Management 104 – Learning to judge impacts

  • Risk Management 102 – when is a high a high

    There are a lot of consultants (and clients) who know little to nothing about proper risk management. This is not their fault – it was never taught in computer science or most similar courses. If you get good at it, you’re unlikely to be a developer or a security consultant. That’s a shame, because risk management has a lot to offer both consultancies and their clients if done properly.

    The problem is that most consultants think technical risk, and will happily assign “Extreme” risks to things like server header info disclosures. Many clients actively campaign to reduce risk ratings for whatever reason, some for valid reasons, others not. And they will win if the risk ratings are wishful thinking or outright wrong. This could cost the organization billions of dollars if a HIGH risk becomes a LOW risk and is accepted, when really it’s a sort of a MEDIUM to HIGH risk depending on the situation.

    We as consultants have a responsibility to THINK about the findings we put into reports. Don’t be a chicken little, but also don’t be bullied into reducing bad risks, or you’ll end up being chosen for your outcomes rather than your honesty and integrity. Be open and honest about how you came to that risk decision, talk over the factors, and help the client understand and agree to the choices you’ve made. So don’t just stick “HIGH” in there; you need the entire enchilada. Lastly, be reasonable when you’ve made a mistake, and ensure there are as few mistakes as possible, as they’re a huge reputation risk.

    Clients have a responsibility to talk over the risk ratings so they fully understand the risk. All parties should agree to document the original risk, the discussion about the risk, and any revisions to the rating and / or vulnerability. Maybe there’s a control that’s being missed, or maybe there’s a misunderstanding of how easy it is to perform. Otherwise, there’s no accountability. In the end, consultants should never change a risk without documenting that change.

    How to improve the situation

    I like the OWASP Risk Rating methodology. The primary reason is that two different consultants can come up with the same result independently, removing a lot of the subjectivity and argument from the equation. I like to include the entire calculation as this allows clients to repeat my work and thus understand why it turned out the way it did.

    There are issues with the OWASP Risk Rating methodology:

    • It’s far too easy to generate “Extreme” risks. Extreme risks are really, really rare. They are company ending, life ending, project ending, shareholder value strippers, reputation destroyers. Think BP and the Gulf Coast. SQL injection at TJ Maxx is an extreme risk (despite them still being in business, it did cost a lot).
    • It’s difficult to get the numbers to produce a “Low” risk even when you know it really should be a “Low”. I basically take nine off the top, as I’ve never gotten a value less than nine. This helps a bit, but even then.
    • It’s hard to do it manually. I use Excel spreadsheets, but you may want to automate it more.
    • You must talk to your customers first. Otherwise, you need to take out the business elements (financial, legal, compliance, privacy) as you will not be able to lock these in.
    • Impact values are not the same for the entire review. They change as per the asset value/classification, and you will most likely have more than one asset value / classification in your review. There’s a difference between contexts, help files, PII, and credit cards. Document which one applied.

    That said, the OWASP risk rating methodology is way better than pretty much everything else out there for web apps. CVSS is not suitable as it’s for ISVs who produce software. That doesn’t describe most enterprise, hobby, open source projects, and so on. If you need to do AS4360 risks, CVSS is not going to cut the mustard.

    Risk Management 102.

    We spend a lot of time arguing with some clients because we haven’t thought through our risk carefully enough, or worse, just used the one from the last report. No two clients and no two apps are ever the same. Therefore, the risk ratings for each of your reports MUST be different. Spend the time to do it right the first time, or you’ll spend a lot more time later when your client argues with you. And they may have a point.

    • Try not. Do… or do not. There is no try. The likelihood rating is solely about the likelihood of the MOST SKILLED threat agent SUCCEEDING at the attack / weakness / vuln you’ve described.
    • The impact rating is solely about the WORST impact of the attack / weakness / vuln using the threat agent you’ve described.

    For example, you have a direct object reference in the URL and no other controls – my Mum could do this attack. The IMPACT is off the charts, and the likelihood too. Just because a n00b consultant with an automated tool is unlikely to do more than annoy the web server, doesn’t mean that’s the threat agent you should document.

    If you came so, so close to exploitation and you just know that it could be bad, but you failed miserably after several hours, exploitability has to be set to 0. Seriously. The impact has to be low too, as there’s no impact that you’ve proven. To document anything else is wrong. I’m happy for folks to write up how close they came, and draw attention to it in the executive summary and in the read out, but to put a high likelihood says that you’re lame, and a high impact says you’re a chicken little. Don’t do it.

    If you’re unsure, map out different attackers (n00b consultants with automated tools, script kiddies, organized crime, web app sec masters), work out how likely they are to succeed at the attack, and then work out what the impact is for each of these threat agents. Do the math and use the most likely choice with that most likely choice’s impact. Don’t under- or over-blow it – if a web app sec master could totally rip a copy of the database with both hands tied, the impact is not likely to be low.

    Lastly, don’t go the terrorist route. You are more likely to win lotto, fall out of your new private plane from 30,000 feet and then get killed by lightning than you are ever likely to be a victim of terrorism. Chicken little scenarios work once or twice, but you’re just wasting everyone’s time and scorching the earth for all those who follow you.