Category: OWASP

  • Top 2010 Defenses

    I’d like to announce the inaugural Top 2010 Web App Sec Defenses Compendium. I can’t offer prizes, because defenses are simply not that sexy. (If you do have prizes that could be offered, web app sec researchers will be over the moon. E-mail me)

    Defenses change the world. Defenses make software more secure – permanently, and not just for the week or two until the latest sexy attack is patched. But defenses aren’t sexy and don’t get invites to all the cool conferences, so there are no prizes beyond a grateful planet.

    Yet.

    I’m not very surprised to see that attacks are getting all the pretty girls and invites to sexy parties.

    Researching attacks as a priority MUST stop. It’s wasting incredible talent. We KNOW that input validation and output encoding are the answer to nearly all the attacks in this year’s Top 2010 attacks (seriously – go look). Input validation and output encoding are unfortunately not sexy. They’re hard work.
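
    The unsexy answer is also easy to demonstrate. Here’s a minimal sketch in Python (purely illustrative – the function names are mine) of context-aware output encoding, the boring defense that kills most of those attacks:

```python
import html
import urllib.parse

def render_comment(comment: str) -> str:
    # Encode for the HTML body context before echoing user input back.
    return "<p>" + html.escape(comment) + "</p>"

def build_search_link(term: str) -> str:
    # A different output context (URL query string) needs a different encoder.
    return "/search?q=" + urllib.parse.quote(term, safe="")

payload = "<script>alert(1)</script>"
print(render_comment(payload))
# <p>&lt;script&gt;alert(1)&lt;/script&gt;</p> - renders as text, not markup
```

    The point: one encoder per output context, applied every single time. Boring, permanent, effective.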

    Building is far, far, far harder than breaking. If you have elite security researcher skills, you should show your stuff by putting your research time and resources into making the planet safer for everyone. Not everyone can do it. Building a solid defense is at least two to three orders of magnitude harder than finding a new form of XSS or a defect in some poor Gawker PHP script. Just one novel concept can take thousands of hours of hard graft. You still need to know how to break – a defense is useless unless you’ve tested it. But on top of that, you need to know how to code and know HTML/JavaScript backwards. Building defenses takes a lot of effort, and in my view that’s why we have so few serious defense researchers.

    Nominations

    As I’m starting so late, let’s be fair and allow all of 2010 to count. Nominations can be sent in until Australia Day (January 26, 2011). I’ll put up a vote for folks to say which is their favorite. The winner of our eternal gratitude will be announced on Valentine’s Day.

    Please e-mail me – vanderaj (a.t.) owasp.org with your nominations. I’ll update this post continuously until the cut off date.

    I’d like to start with:

    I know it’s heresy in some ivory tower circles that I nominated WAF modules written by a colleague, but honestly, we need defense in depth measures until coders and frameworks make WAFs somewhat obsolete.

    Please send ’em in.

  • Force.com secure code review howto Part 1

    For those of you who have to review unusual platforms, here are my notes for reviewing apps coded in Apex and Visualforce. As I learn more, I might add some additional entries, but I’ve been so time-constrained for so long that you shouldn’t hold your breath.

    Terminology and Basics

    Force.com is Salesforce’s SaaS API for ISVs and customers to write custom CRM apps atop the Salesforce platform. To provide some serious platform lock-in, they use a new strongly typed language called Apex. Apex is loosely Java-based: Java programmers will be somewhat familiar with its capabilities, but it has some surprising differences. As a reviewer, there’s nothing really head-hurting about reading the code, but it’s important to realize it’s not your grandpa’s Java you’re looking at.

    Some things you’ll come across:

    Metadata. You’ll see code with associated XML files. This XML describes the code and allows Force.com to handle it correctly, particularly static resources. You can’t just ignore metadata – you need to inspect it.

    Visualforce is an MVC-based framework. It appears to act like a tag library with the <apex:… prefix, used inside files with a .page extension. These mimic the traditional JSP Model 1 style. I think most of you will be familiar with this model and will not have too many difficulties reviewing it. However, there are some asynchronous AJAX helpers (timers, future events, etc.) that you will need to be aware of, particularly in relation to race conditions.

    Objects. Salesforce have defined an object interface over their CRM data model. This has some interesting gotchas; in particular, the query language across these objects, SOQL, is a largely injection-resistant sub-dialect of SQL-99. There will be an entire blog post on those issues, primarily because there are several ways code can be written to be unsafe.
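
    The safe/unsafe split in SOQL mirrors parameterized queries everywhere else. As a rough analogy (Python’s sqlite3 standing in here – in Apex, the safe form is a bind variable such as [SELECT Id FROM Account WHERE Name = :userInput], while a hand-concatenated Database.query() string is the injectable form):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER, name TEXT)")
conn.execute("INSERT INTO account VALUES (1, 'Acme')")

user_input = "Acme' OR '1'='1"

# Unsafe: string concatenation lets the payload rewrite the query.
unsafe_sql = "SELECT id FROM account WHERE name = '" + user_input + "'"
print(conn.execute(unsafe_sql).fetchall())  # [(1,)] - injection matches every row

# Safe: the parameter is bound as data, never parsed as query syntax.
rows = conn.execute(
    "SELECT id FROM account WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] - the payload is just a weird account name
```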

    Triggers. Triggers are executed after users undertake actions within the public site / sandbox application. I need to learn more about them before I write about them, but they are the start of the flow of execution after the user does things within the application. If you have custom classes, they are generally called by triggers.

    Bulk importer and Batch Apex. ETL support. I need to learn more about this functionality before I comment.

    Flash and Flex support. Just in case some of the options weren’t scary enough, you can implement your presentation and business logic in a client-side language. Sweet. I will not document Flash / Flex support because a) I hate Flash and have it disabled, b) I have yet to see such code in action and I hate slamming or praising things I’ve not used, and c) I don’t have any Flash or Flex tools to build test cases, so it’s going to be hard to nail this one down. Feel free to steal my thunder here if you so desire.

    Web Services. These are traditional SOAP web services. Instead of using WS-Security, Salesforce have implemented their own session manager. Probably a good idea, since no one besides Gunnar Peterson understands WS-Security. However, we all know that web services can be a minefield, so I will experiment with them and see how things work in a much later article.

    Ajax. The Ajax API is one of the newest, and allows JavaScript to make pretty much any call to the web services back end that a traditional SOAP web service can. Without WS-Security. Awesome. I’ll be looking into this a bit later as I learn more.

    Some things they did right

    Please don’t mistake my tone for disparagement, for it is not. There are some cool things Salesforce did right:

    • Everything is escaped by default. You have to add code or an attribute to get this wrong.
    • CSRF protection in every form. You have to do the wrong thing to be CSRFable.
    • The easiest way to do SOQL is sorta magically injection-proof. There are injectable ways, but again, you have to work at it.
    • Many defaults chosen by Salesforce are good – SSL by default. Yay. SAML by default for SSO. Yay. GET and POST only. Yay. UTF-8 only. Yay. UCS-2 only. Yay. Illegally encoded Unicode characters are replaced. Yay. Content Type is safe unless you do the wrong thing. Yay.
    • Cookies and headers you send are escaped. I’m not sure they’re properly escaped yet, but they are escaped.
    • There are encoders for not just HTML and URL, but for JavaScript and others. Yay.
    • To promote code into production out of the sandbox requires at least 75% test coverage. O.M.G. YAY! Tests are also not counted towards billing. There’s exactly zero reason not to test your code.

    This is but a part of the overall list of goodness. But it doesn’t yet help you figure out how to do a secure code review.

    The trouble for secure code reviews is several fold:

    • There are no static code review tools for Apex. This is a serious deficiency that will only get worse if others try to emulate Salesforce’s success in crafting an entirely new language and API for their SaaS offerings.
    • The security documentation is relatively sparse, and only gives hints as to how to shoot yourself with XSS, CSRF, SOQL, fine grained access control and other issues. This series is an effort to break through that and provide more documentation.
    • There is a tight coupling between the code in your IDE and the sandbox / public site. If you break this nexus, you do not have configuration data. With Salesforce’s “No code” logo, they hide some code and configuration from you. So expect to ask for the login, and hope it’s not production.
    • Salesforce have given a lot of thought to security, and many common Java issues are “fixed” or safe by default. But as Apex is a serious systems language, it allows you to shoot yourself in the foot. I don’t yet know the extent of it, but with some luck I will find out.

    If you’re from Salesforce, please don’t worry. I’m not about to give away 0days – I am not a weak-minded moron who delights in creating grief with no solutions. This series will be primarily about how to review Force.com code, followed by recommendations for “fixing” it. Which will most likely be “Do it how Force.com told you to in the manuals”.

  • Code of Hammurabi – or 4000 years later, we still haven’t got it

    The Code of Hammurabi is one of the earliest known written laws, and possibly pre-dates Moses’ descent from the Mount.

    In it, we get a picture of the Babylonians’ laws and punishments. In particular, there’s this one:

    If a builder builds a house for someone, and does not construct it properly, and the house which he built falls in and kills its owner, then the builder shall be put to death. (Another variant of this is: if the owner’s son dies, then the builder’s son shall be put to death.)

    (Source: Wikipedia)

    So essentially, this is one of the earliest building codes. Pretty harsh, but you know…

    What this means is that only qualified builders prepared to take the risk of death built houses. This obviously focuses the mind.

    In our industry, we have hobbyists and self-taught folks working side by side with software engineers and computer scientists, but they usually share one thing in common: they know nothing of security.

    This is like an accountant graduating without knowledge of auditing principles or GAAP. It’s exactly like a civil engineer being unaware of the load stresses and environmental factors that require safety margins and tolerances to be built into every structure.

    When the average person goes to a builder or architect and asks for a house to be built, we expect them to know how to build a two- or three-story building such that it not only complies with minimum code requirements, but will not collapse. When their buildings do collapse, we strike those builders off the master builders’ register and they can no longer build homes. We can sue them for gross negligence.

    When the average small company does their books, they expect the accountants they hire to know how to do double-entry bookkeeping, and to be aware of local, state and federal tax rules. When they fail to do so, they lose their CPA accreditation and we can sue them for gross negligence.

    When a city or state wants to build a new bridge, they expect the winning tenderer to design the bridge to last for the expected period of time, satisfy all state and federal road and safety laws, and obtain specialist advice for key elements of construction, such as wind tunnel tests. If the bridge falls down, that is usually the end of that building group, and they are sued out of existence.

    Why is it so different in our field? What we do is not art. SQL injection has been so utterly preventable for over 10 years that I truly believe it is gross negligence to have injectable code running today.

    There is a huge difference between using MYOB to run a small business and building a cubby house, yet cubby houses are all that 99.9% of developers are capable of today. They lack the most basic awareness of software security, the one non-functional requirement common to all software – from games through national treasury finance systems.

    Efforts like Rugged Software and OWASP are vital. We must get out to universities and make sure that security is taught, that all IT, CS, and software engineering graduates have done at least one 13-week subject on it, and that majoring in software security is the easiest possible path. We must get out to employers and make sure they require all new hires to know about it and be able to code for it. Moreover, if they buy off-the-shelf software, we must get them to include clauses in contracts, such as the OWASP Secure Software Contract Annex, to protect themselves from gross negligence such as SQL injection or XSS. We must reach out to frameworks and make them utterly aware that what they do affects millions of developers, and that they simply must be better at security than everyone else.

    It’s time for the software industry to grow up, realize that fortunes, privacy and lives really are at risk, and we’re doing a repeatable engineering process, and not some black art. We have to have consequences.

  • Risk Management 103 – Choosing Threat Agents

    A key component in deciding a risk is WHO is going to be doing the attack. The OWASP Top 10 threat model depicts a risk path from threat agents through attack vectors and security weaknesses to technical and business impacts. That diagram is from the excellent OWASP Top 10 2010, and I will be referencing it a great deal.

    We’re talking about the attackers (threat agents) on the left today. So you’re busy doing a secure code review or a penetration test (how I loathe that term – so sophomoric) and have found a weakness. You’ve written up a fantastic finding and need to rate it so that your client (whether internal or external, for money or for free) can do something about it. It’s vital that you don’t under- or over-cook the risk. Undercooking the risk looks really, really bad when you get it wrong and the wrong business decision is made to go live with a bad problem. Overcooking the risk erodes trust, and often leads to the wrong fixes being made, or none at all, which is worse. You can tell you’re overcooking risks if your clients are constantly arguing with you about ratings. Let’s get to a realistic risk rating the first time, every time.

    Risk Management 103 – Establishing the correct actor

    I am more likely to succeed than a script kiddy, who is more likely to succeed than my mum. Unfortunately, there’s just one of me, but there are a million script kiddies out there. That doesn’t mean you should rate every risk against them. Script kiddies are simply unlikely to find business logic flaws and access control flaws, such as direct object references. So you should reflect this in your thinking about risk – even though it might be simpler to go with what everyone already knows:

    • Skill level – what sort of skill does the threat agent bring to the table? 1 = my mum, 5 = script kiddy (generous), 9 = web app sec master
    • Discovery – how likely is it that this group of attackers will discover the issue?
    • Ease of exploitation – how likely is it that this group of attackers will exploit the issue?
    • Size of attacker pool – 0 = system admins or similar, 9 = the entire Internet (== script kiddies)

    So you need to do the calculation for the weakness you found against these various groups to determine the maximum likelihood. This often leads into impact. Let’s go with a direct object reference, such as the AT&T attack.

    Likelihood – AJV

    • Skill level – web app sec master (9)
    • Motive – possible reward (4)
    • Opportunity – some access or resources required (7)
    • Size – anonymous Internet users (9) (remember, this attack relied upon a User-Agent header for authentication)
    • Ease of discovery – easy (7)
    • Ease of exploit – easy (5)
    • Awareness – public knowledge (9)
    • IDS – let’s go with logged without review (8)

    This brings us a total of 58 out of 72. I put this as a “HIGH” likelihood in my risk charts.

    Likelihood – Script kiddy

    • Skill level – some technical skills (3)
    • Motive – possible reward (4)
    • Opportunity – some access or resources required (7)
    • Size – anonymous Internet users (9)
    • Ease of discovery – practically impossible (1)
    • Ease of exploit – theoretical (1)
    • Awareness – unknown (1)
    • IDS – let’s go with logged without review (8)

    This brings us to 34. So we shouldn’t consider script kiddies when there might be a motivated web app sec master on the loose. But is that entirely realistic? Honestly, no.
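
    For the arithmetic-inclined, here are the same likelihood sums as a quick Python sketch (illustrative only; the factor values are the ones listed above):

```python
# Eight OWASP likelihood factors, each scored 0-9, so the maximum total is 72.
FACTORS = ["skill", "motive", "opportunity", "size",
           "discovery", "exploit", "awareness", "ids"]

def likelihood(scores: dict) -> int:
    return sum(scores[f] for f in FACTORS)

ajv = {"skill": 9, "motive": 4, "opportunity": 7, "size": 9,
       "discovery": 7, "exploit": 5, "awareness": 9, "ids": 8}
kiddy = {"skill": 3, "motive": 4, "opportunity": 7, "size": 9,
         "discovery": 1, "exploit": 1, "awareness": 1, "ids": 8}

print(likelihood(ajv))    # 58
print(likelihood(kiddy))  # 34
```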

    Who is really going to attack this app?

    Think about WHO is likely to attack the system:

    • Foreign governments – check.
    • Web app sec masters – Our careers are worth more than the kudos.
    • Bored researchers trying to make a name for themselves – check even though quite dumb (see previous bullet)
    • Script kiddies – check but fail. Realistically, unless someone else wrote the script, they wouldn’t be able to do this attack.
    • Trojans – check but fail for the same reason as script kiddies.
    • My mum doesn’t know what a direct object reference is. Not going to happen.
    • Terrorists – check, but seriously: winning lotto, buying a private plane with the winnings, having lightning strike its four-leaf-clover-encrusted hull eight times, parachuting out, and then having both the main and reserve chutes fail is more likely than a terrorist attack. Don’t use this one unless you’re after Department of Homeland Security money, as everyone else will just laugh at you. Especially if you use it more than once.

    So let’s go with foreign governments, as this is an attack they would be interested in. They have resources and skilled web app sec masters, so the attack likelihood is HIGH. Now let’s work out the impact for this scenario:

    Sample Impact Calculation

    There’s a lot of subjectivity here. You can close that down significantly by talking it over with your client. This doesn’t mean you should go with LOW every time you have the conversation, but instead set out objective parameters that suit their business and this application. Yes, this takes a fair amount of work. You can either do it before you deliver the report, or you can do it after you deliver the report. If you choose the latter path too often, your reputation as a trusted advisor can be found in the client’s trash bin, along with your reports and the client relationship.

    Let’s do the calculations based upon the sketchy information I have from third-hand, unreliable sources and vastly more reliable Tweets. I.e., I’m almost certainly making this up, but hopefully you’ll get the picture.

    • Loss of confidentiality – check, big time. All data disclosed (9)
    • Loss of integrity – in this case, no data was harmed in the making of this exploit (0)
    • Loss of availability – if every government tried it at once, I’m sure there’d be a DoS, but let’s be generous and say minimal primary services interrupted (5), as the system would have to be taken offline or disabled after it was discovered
    • Loss of accountability – it’s already anonymous (9)
    • Financial damage – AT&T is big. Really, really big. In the grand scheme of things, this probably didn’t hurt them that much. That said, it has to be in the millions. So let’s go with minor effect on annual profit (3)
    • Reputation damage – AT&T’s reputation is already somewhat tarnished, so let’s go with loss of major accounts (4), as I’m sure RIM will pick up all of those .mil and .gov accounts very soon now
    • Non-compliance – PII is about names and addresses, but AFAIK e-mail addresses are not protected at the moment. Happy to hear otherwise – leave comments. Let’s go with clear violation (5)
    • Privacy violation – 114,000 is the minimum number affected, so let’s go with (7), and it could tip towards (9)

    This gives us 42 out of 72, which is a MEDIUM impact (just shy of HIGH at 46), giving an overall risk of HIGH. That is about right, and it should have been caught by a secure code review and fixed before go-live.
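
    The impact side works the same way as a sketch (the HIGH boundary of 46 comes from my charts as noted above; the MEDIUM cut-off of 28 is a hypothetical value, purely for illustration):

```python
# Eight OWASP impact factors from the AT&T example, each scored 0-9.
impact_scores = {
    "confidentiality": 9, "integrity": 0, "availability": 5,
    "accountability": 9, "financial": 3, "reputation": 4,
    "compliance": 5, "privacy": 7,
}

total = sum(impact_scores.values())
print(total)  # 42

# Band thresholds are assumptions from my own risk charts, not part of
# the OWASP methodology itself.
band = "HIGH" if total >= 46 else "MEDIUM" if total >= 28 else "LOW"
print(band)  # MEDIUM
```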

    Next … Risk Management 104 – Learning to judge impacts

  • Risk Management 102 – when is a high a high

    There are a lot of consultants (and clients) who know little to nothing about proper risk management. This is not their fault – it was never taught in computer science or most similar courses, and if you do get good at it, you’re unlikely to stay a developer or a security consultant. That’s a shame, because risk management has a lot to offer both consultancies and their clients if done properly.

    The problem is that most consultants think in terms of technical risk, and will happily assign “Extreme” risks to things like server header info disclosures. Many clients actively campaign to reduce risk ratings for whatever reason, some valid, others not. And they will win if the risk ratings are wishful thinking or outright wrong. This could cost the organization billions of dollars if a HIGH risk becomes a LOW risk and is accepted, when really it’s somewhere between MEDIUM and HIGH depending on the situation.

    We as consultants have a responsibility to THINK about the findings we put into reports. Don’t be a Chicken Little, but also don’t be bullied into downgrading bad risks, or you’ll be chosen for your outcomes rather than your honesty and integrity. Be open and honest about how you came to a risk decision, talk over the factors, and help the client understand and agree with the choices you’ve made. Don’t just stick “HIGH” in there – give them the entire enchilada. Lastly, own up when you’ve made a mistake, and ensure there are as few mistakes as possible, as they’re a huge reputation risk.

    Clients have a responsibility to talk over the risk ratings so they fully understand the risk. All parties should agree to document the original risk, the discussion about the risk, and any revisions to the rating and / or vulnerability. Maybe there’s a control that’s being missed, or maybe there’s a misunderstanding of how easy the attack is to perform. Otherwise, there’s no accountability. In the end, consultants should never change a risk without documenting that change.

    How to improve the situation

    I like the OWASP Risk Rating methodology. The primary reason is that two different consultants can come up with the same result independently, removing a lot of the subjectivity and argument from the equation. I like to include the entire calculation as this allows clients to repeat my work and thus understand why it turned out the way it did.

    There are issues with the OWASP Risk Rating methodology:

    • It’s far too easy to generate “Extreme” risks. Extreme risks are really, really rare. They are company ending, life ending, project ending, shareholder value strippers, reputation destroyers. Think BP and the Gulf Coast. SQL injection at TJ Maxx is an extreme risk (despite them still being in business, it did cost a lot).
    • It’s difficult to get the numbers to produce a “Low” risk even when you know it really should be a “Low”. I basically take nine off the top, as I’ve never seen a total of less than nine. This helps a bit, but even then.
    • It’s hard to do it manually. I use Excel spreadsheets, but you may want to automate it more.
    • You must talk to your customers first. Otherwise, you need to take out the business elements (financial, legal, compliance, privacy) as you will not be able to lock these in.
    • Impact values are not the same for the entire review. They change with the asset value / classification, and you will most likely have more than one asset value / classification in your review. There’s a difference between contexts, help files, PII, and credit cards. Document which one applied.

    That said, the OWASP risk rating methodology is way better than pretty much everything else out there for web apps. CVSS is not suitable, as it’s aimed at ISVs who produce software – which doesn’t describe most enterprise, hobby, or open source projects. If you need to do AS/NZS 4360 risks, CVSS is not going to cut the mustard.

    Risk Management 102.

    We spend a lot of time arguing with some clients because we haven’t thought through our risk carefully enough, or worse, just used the one from the last report. No two clients and no two apps are ever the same. Therefore, the risk ratings for each of your reports MUST be different. Spend the time to do it right the first time, or you’ll spend a lot more time later when your client argues with you. And they may have a point.

    • Try not. Do… or do not. There is no try. The likelihood rating is solely about the likelihood of the MOST SKILLED threat agent SUCCEEDING at the attack / weakness / vuln you’ve described.
    • The impact rating is solely about the WORST impact of the attack / weakness / vuln using the threat agent you’ve described.

    For example, say you have a direct object reference in the URL and no other controls – my mum could do this attack. The IMPACT is off the charts, and the likelihood too. Just because a n00b consultant with an automated tool is unlikely to do more than annoy the web server doesn’t mean that’s the threat agent you should document.

    If you came so, so close to exploitation and you just know that it could be bad, but you failed miserably after several hours, exploitability has to be set to 0. Seriously. The impact has to be low too, as there’s no impact that you’ve proven. To document anything else is wrong. I’m happy for folks to write up how close they came and draw attention to it in the executive summary and in the read-out, but a high likelihood says you’re lame, and a high impact says you’re a Chicken Little. Don’t do it.

    If you’re unsure, map out different attackers (n00b consultants with automated tools, script kiddies, organized crime, web app sec masters), work out how likely each is to succeed at the attack, and then work out what the impact is for each of these threat agents. Do the math and use the most likely choice, with that choice’s impact. Don’t under- or over-blow it – if only a web app sec master could rip a copy of the database with both hands tied, the tiny attacker pool keeps the likelihood down, however scary the impact looks.

    Lastly, don’t go the terrorist route. You are more likely to win lotto, fall out of your new private plane from 30,000 feet and then get killed by lightning than you are ever likely to be a victim of terrorism. Chicken little scenarios work once or twice, but you’re just wasting everyone’s time and scorching the earth for all those who follow you.

  • Intelligent Session Manager Architecture

    As security researchers, I think we’ve let users down in our quest to close off questionable and unlikely events. The problem is that even though these events – such as MITM attacks – are unlikely, they work nearly 100% of the time when tried. They make great demos to scare folks who don’t understand what they’re seeing. It’s a shame that they just don’t occur in the real world all that often. So let’s move beyond “expire it after 10 minutes” to a session manager that actually helps the business, makes users love you, and really closes out some of these attacks.

    The reasoning behind 10 minutes is a balance between the business (who’d really prefer no timeouts, and would love a magic “remember me” function that is somehow secure) and tin-foil freaks like me who know how incredibly simple MITM, session fixation, and session hijacking can be. Much of our advice has been based on 1970s standards and thinking, and 1990s attacks that still work, primarily because we’ve been asking for the wrong solutions, like short timeouts and not letting users log on twice.

    As Dr Phil says, “How’s that working out for you?”

    So let’s think about ways to improve session managers to blunt the known attacks. We know that TLS has issues with MITM attacks, but we’re very lucky that this is a local attack (for now). Such attacks are also exceedingly unlikely outside of security conference wireless networks, and motivated attacks on behalf of organized crime (very rare but devastating – see TJ Maxx).

    However, some of the assumptions behind our bad recommendations simply don’t consider the user of the application. My wife does all of our shopping online. The system is awful: it times out within a short period, and it usually takes 4 to 5 attempts to finish an order. I’m sure there’s some poor risk manager going “WTF? PCI is stupid – we have to implement 10-minute timeouts for a process that lasts 30-40 minutes?” Let’s move beyond quick-fire “gimme” penetration test results, and think about HOW the USER is impacted when we make recommendations with our consultancy hats on.

    What goes wrong if it takes 40 minutes to assemble a shopping list? Do we have a financial loss? No. A reputation loss? Yes. A shareholder loss? No. A privacy impact? No. A regulatory impact? Only if you consider PCI DSS a regulation worthy of the name. What can we do to make it better?

    With the online shopping example, losses start when we can order stuff. Easy! Keep everything intact (and allow items to be placed in and removed from the cart), but make the user re-authenticate to purchase or see their profile if it’s been more than 10 minutes. But with 100% of session managers today, that very act is impossible without significant customization, and we all know there’s some B-list pen tester willing to ping you on long timeouts if you do write that secondary, all-singing, all-dancing session manager. THINK BEFORE YOU RECOMMEND RECEIVED WISDOM!

    Realistically, we need to set some baseline parameters for every session manager.

    • Strong. Session tokens should be random enough to resist being brute forced in a reasonable time frame. I still see this although it’s been solved on most platforms since 1996 or so.
    • Controlled. Session managers should only accept their own session tokens.
    • Session hijacking resistant. Session managers should rotate their tokens from time to time automatically. Every five minutes is fine, as is every request as long as there’s a sliding window of acceptable tokens to allow the most used button (Back) to work. All frameworks should possess a regenerate token API – it’s ridiculously hard in all frameworks but PHP today.
    • Session hijacking resistant. Session managers should watch headers carefully and reject requests that don’t perfectly match up with previous requests. There is no reason for a user agent or a bunch of other headers (up to and including REMOTE_ADDR) to change within a session.
    • CSRF proof. Session managers should tie themselves to requests, and check that the session and forms match up. OWASP CSRF Guard can do it, and realistically, this should be standard in every session manager.
    • Cloudy Web Farm support. It’s very hard to do federated session state with most session managers, and the hackiest solutions I’ve seen for getting around this are due primarily to the isolated session manager mentality. There are good last-writer-wins replication mechanisms around, including “deliver at least once” – not everyone needs this functionality, but those who do really need it badly. This can be used as a precursor to…
    • Notifications. Most SSO products use workarounds so that the primary session manager times out before the SSO token does. This means there are active SSO sessions you could reconnect to if you know what you’re doing. Let’s make it easy for folks like Ping to get notified when regenerate, idle, absolute and logout events occur.
    • Adaptive timeouts. Sessions that “expire” should be put into a slush pool, that comes alive again up to an absolute limit. But the instant that a user wants to perform a value transaction, the session manager should require re-authentication.
    • Integration with common SSO protocols. SAML and WS-Federation are the two most popular SSO mechanisms out there. Realistically, all session managers should be aware of how these work, and tie into them strongly, so that if folks use SAML / WS-Federation, it is tied to the session token in use. How many times have we seen these two operate in completely separate worlds and then become targets for replay, session expiry and other attacks?
    • Destroy means destroy. Make it easy for devs to do the right thing when the user clicks logout. Not only clear the session properly, but also all associated copies of that token – headers, cookies, DOM, etc, etc.
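
    The first bullet above – rotation with a sliding window of acceptable tokens – could be sketched as follows. This is a minimal illustration, not any particular framework’s API; the class and parameter names are my own, and real implementations would also need locking and eviction.

    ```python
    import secrets
    import time

    class RotatingSessionManager:
        """Sketch: rotate tokens periodically, keeping a short sliding
        window of superseded tokens valid so Back still works."""

        ROTATE_AFTER = 300   # rotate every five minutes (illustrative)
        WINDOW = 2           # how many superseded tokens remain acceptable

        def __init__(self):
            self._sessions = {}    # current token -> session data
            self._old_tokens = {}  # superseded token -> current token

        def create(self):
            token = secrets.token_urlsafe(32)
            self._sessions[token] = {"created": time.time(), "previous": []}
            return token

        def regenerate(self, token):
            """Issue a fresh token; retire the old one into the window."""
            data = self._sessions.pop(token)
            new_token = secrets.token_urlsafe(32)
            data["created"] = time.time()
            data["previous"] = ([token] + data["previous"])[: self.WINDOW]
            self._sessions[new_token] = data
            # every token still inside the window maps to the current one;
            # tokens that fell out of the window resolve to nothing
            for old in data["previous"]:
                self._old_tokens[old] = new_token
            return new_token

        def resolve(self, token):
            """Return (current_token, data), accepting windowed tokens."""
            token = self._old_tokens.get(token, token)
            data = self._sessions.get(token)
            if data is None:
                return None, None
            if time.time() - data["created"] > self.ROTATE_AFTER:
                token = self.regenerate(token)
                data = self._sessions[token]
            return token, data
    ```

    The point of the window is purely usability: a request carrying the just-rotated token (a cached Back page) still resolves, while anything older is dead.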

    Notice that I didn’t put one of the lazy pen tester’s favorites in the above list – “Logging on more than once”. I REALLY don’t care about that. I care about what VALUE TRANSACTIONS you can do within the assigned sessions. If there’s a problem with value transactions, preventing two sessions at once isn’t going to save your bacon. Transaction signing / SMS authentication / re-authentication will help, or if it’s about resource consumption, then transaction governors like in ESAPI will help. THINK BEFORE YOU PUT STUPID THINGS IN YOUR REPORTS.
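
    For the resource-consumption case, a transaction governor in the spirit of ESAPI’s quota-style controls is not much code. This is a hedged sketch – the class and method names are mine, not ESAPI’s:

    ```python
    import time
    from collections import deque

    class TransactionGovernor:
        """Sketch: cap how many value transactions a principal may perform
        per sliding time window, regardless of how many concurrent logins
        they hold. When the cap is hit, the caller should force re-auth,
        transaction signing, or SMS step-up rather than silently fail."""

        def __init__(self, max_events=5, window_seconds=3600):
            self.max_events = max_events
            self.window = window_seconds
            self._events = {}  # principal -> deque of event timestamps

        def allow(self, principal, now=None):
            now = time.time() if now is None else now
            q = self._events.setdefault(principal, deque())
            # drop events that have slid out of the window
            while q and now - q[0] > self.window:
                q.popleft()
            if len(q) >= self.max_events:
                return False  # over quota: demand step-up authentication
            q.append(now)
            return True
    ```

    Governing the transaction, not the login count, is exactly the point of the paragraph above: two sessions doing nothing are harmless; one session moving money five times an hour is what needs a gate.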

    Many of these items are in ESAPI. That’s awesome, but it would be nice if all session managers dealt with sessions to support users and business uses, rather than obscure and unlikely attacks.

  • Sticking your neck out

    For as long as I can remember, the standard “security” talk has been a negative and destructive talk, where the presenter presents their latest “research” as if it’s going to solve world hunger, totally end the Internet as we know it, cure herpes, or put the spooks out of business because anyone could spy on the whole Internet.

    The reality is that a few hours, weeks, or if it’s someone like Oracle circa 2005, years later, the problem is solved and we go back to giving our identities away for free on Facebook as if nothing had happened.

    Seriously, why do we put up with this?

    I believe it is because negative Chicken Little (“the sky is falling”) talks are much easier to do:

    • Hand-waving talks can be put together on the plane whilst going to the conference, or even later if you don’t hit the bar as soon as you get to the hotel. Talks of this type include “Why the IT Security industry sucks”, “This language is garbage”, “What you know is all wrong”, and my favorite, “PCI sucks”. These talks have zero merit because they offer nothing you can act on. They’re opinion pieces barely better than a script kiddie blog entry, and are typically badly researched opinions rather than game changers.
    • The buffer overflow, CSRF, Ajax, RIA, XSS, SQL injection, or latest attack with a twist talks are easy to do. You might need to start working on these talks at the airport lounge, but you’ll still pump out a talk. Patches for these talks are sometimes delivered before the talk has finished. The world has not ended.
    • The fuzzing talk is a bit harder. You have to run the fuzzer and let it find at least one badness. Probably a good idea to do it the night before you fly. Better yet, run it against a bunch of products in case someone did a good job.
    • Developing new devastating attacks that can be blocked by CS101-level controls, like the magic pixie dust of input validation, is a complete WAFTAM.
    • The pinnacle of negative talks has awesome demos, but realistically still demonstrates a paucity of ideas (such as how to detect if you’re in a VM – I mean really, who cares?). I have respect for these researchers, and really wish they’d apply their talents to good quality positive research instead of wasting their most productive research years on pointless baubles.

    Why are positive talks harder? Because you have to work at them!

    • Firstly, it’s about research, and original research is hard to do properly.
    • Research takes time, and consistent application to an idea that may not even pan out. But if you don’t do it, you’ll never know.
    • You have to find an area that is not yet solved. There’s a reason it’s not solved yet. These issues have made talented brains hurt already.
    • You have to think of a new and novel solution to the issue, and the solution should be effective, simple and cheap. Most of the speakers on the party circuit simply don’t have this capacity, and haven’t had an original idea in years.
    • You have to develop your solution and test it out against lab and real world scenarios to make sure it doesn’t suck. It helps if your solution is repeatable, your solution and code are documented, and it’s usable by others without sacrificing chickens.
    • Many folks write papers and talks as if they succeeded at first go. That’s not science, that’s puffing up Brand Speaker. We learn from the paths not taken more than the eventual solution. Think about CSRF and session fixation for example – there’s heaps of folks who think CSRF is solved by a random nonce. But that’s not the entire story. Same deal with clickjacking. Write up your failures as much as you write up your successes.
    • You have to hand your research and methods around to trusted peers to see what they think and hope they don’t spill the beans or steal your thunder. Once you’ve published, you need to make sure others can repeat your experiments and results.
    • If you want to change the world, you have to give it away. You can’t patent it. You can’t tie it up in trade secrets. You can’t keep it to yourself. This is the hardest of all – think of the IT landscape today if AT&T had kept Unix to themselves. Exactly.
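
    To make the CSRF point above concrete: a random nonce alone is not the entire story – the token also has to be bound to the authenticated session and compared in constant time. A minimal sketch (all names illustrative, not from any framework):

    ```python
    import hashlib
    import hmac
    import secrets

    # Per-deployment server secret (assumption: kept out of source control
    # in real life; generated inline here only so the sketch runs).
    SERVER_KEY = secrets.token_bytes(32)

    def csrf_token(session_id: str) -> str:
        """Derive the anti-CSRF token from the session itself, so a token
        captured from one session is useless against another."""
        return hmac.new(SERVER_KEY, session_id.encode(),
                        hashlib.sha256).hexdigest()

    def verify_csrf(session_id: str, submitted: str) -> bool:
        # constant-time comparison to avoid leaking the token byte by byte
        return hmac.compare_digest(csrf_token(session_id), submitted)
    ```

    Binding the token to the session is what defeats session fixation of the nonce; a free-floating random value in a hidden field does not.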

    Lastly, and probably the most important – positive research and subsequent talks mean sticking your neck out. Your peers evaluate what you’ve said and how your solutions work. If you’re not sure of yourself, this can be a huge risk to your ego. If you’re wrong, it’s real bad and you’ll be a virgin for another year. If you’re right, you will get {girls, boys, furries} and invites to all the sexy parties*

    I will not claim that all of the hundreds of controls I documented in the OWASP Guide 2.0 are right. In fact, I know some of them are wrong. That’s how science works. At least I stuck my neck out and documented what I thought at the time. I’m happy to come back to the controls, do the research to find new controls that do work with minimal cost, and document those.

    For those of you lucky to know me personally, you’ll know that I have no shortage of self, in fact probably enough self for two people, but you need it if you’re going to have a shot at this brave new world of repeatable, scientific progress in the web application security field.

    I hope to see more conferences like OWASP’s AppSec Research conference, to be held in Sweden this year. Make sure you go to it. More importantly, stop wasting time on negative talks, and get moving on doing that research for next year’s conference.

    * This is actually false advertising, as you’ll struggle to be invited to most conferences even though your research and talk will mean more long term than 100 negative talks. On the other hand, I’ve been told that Furries are easy to rub the right way.

  • OWASP ASVS – also good for architecture reviews

    I’ve just finished a job where I used OWASP’s Application Security Verification Standard as a lightweight security architecture template.

    The good news is that it helped us decide a bunch of controls (using ESAPI of course) that will hopefully improve the security of the application. I’ll find out in a few months if any of it helped.

    What worked: pretty much everything.

    What didn’t work: Some controls are not relevant to some classes of application. Do your homework before you go into the meeting so you can skip over ASVS controls that simply can’t work.

    We found that there are controls we discussed that aren’t in the ASVS. The ASVS is an 80/20 (Pareto principle) standard; as pretty much all apps come from such a low base today, any security improvement is a worthwhile improvement even if it’s not milspec. I wasn’t too fussed that we missed a few key items.

    For those of you floundering around trying to figure out how to do Security Architecture reviews, ASVS can be your friend!

  • OWASP Top 10 2010 – Cheat Sheet

    Here is a two page cheat sheet for the OWASP Top 10 2010.

    OWASP Top 10 2010 Cheat Sheet (100 kb PDF)

    Print it double-sided to create a single piece of paper and hand it out to all your developers for free – it’s licensed under a Creative Commons Attribution-ShareAlike license. Once I’ve had a bit of feedback and I’ve tweaked it a bit, I’ll donate it to OWASP.

    This cheat sheet is an unapologetically developer centric list of things to do right.

    I’ve made it as simple as possible by only including things that I personally know will work with the least amount of (re-)work. Therefore, I have purposely left out all the different alternatives. You can (and probably will) have differing views as to how to do it better.

    The cheat sheet assumes the reader knows how to program, use a search engine and thus find OWASP. I might have to change these assumptions.

    I’d love to hear feedback. Comments or e-mail will work fine.

  • Advanced Persistent Threat – risk management by a new name

    I am so sick of APT this and APT that. Advanced Persistent Threats, essentially state-sponsored intelligence gathering, are no different from the age-old espionage between EADS and Boeing – something that CANNOT be prevented by coining yet another new FUD term like APT. Espionage is at least the second oldest profession in the world, and moaning about whatever APT is called this week is not going to change that. If your CFO wants to leak information to a competitor, there is NO information security system ever built that has or can prevent that level of misconduct.

    Look behind who is promoting APT this time around. Companies that have IT security services and products to sell. I have worked in that industry for over 12 years now. We have enough work without ambulance chasing as part of our marketing plan.

    Remember SOX? Lots of FUD then, just like APT today. Lots of “security” (and even non-security) programs designed to bring in so-called SOX compliance – and for what? There were more breaches and losses after SOX compliance than before, and it’s getting worse! Lots of money was wasted on useless programs, and hundreds of millions if not billions of dollars went down the drain for no business return.

    If you ever wondered why business folks are rebelling against PCI DSS (which is actually fairly good), fear factor is to blame. We lose respect every time we yell “fire!” when there’s not even a match’s worth of smoke, and when asked for a solution, we want to bring in a DC-10 water bomber. It’s even worse when we do come with a reasonable, cost-effective, long-term solution and can’t get it adopted, because everyone now expects it’s just another false alarm.

    Stop doing it! We have plenty of good reasons to do security (properly), and APT is simply not one of them. If you’re going to yell “APT APT APT!” have the courage to talk about solutions and make them workable, effective and financially responsible, not just rabbit on about security theatre solutions to sophomoric movie plot threats. I am not diminishing those organizations like the oil and steel industry who are responding properly where they have a real expectation that industrial or state-based espionage will occur or has occurred in the past, but responding to APT for 99% of organizations is just a complete WAFTAM.

    I hate APT and all the FUD surrounding it. Scaring the punters is Chicken Little stuff, or crying wolf. Get with the “do something” program. If you’re a news org, instead of talking about folks who got pwned, let’s talk about folks who, through good management and effective IT Security programs, have survived such “advanced persistent threats”.

    What would I suggest we do about APT? Let’s take it back a step – what would I suggest EVERY firm of more than about 10-20 employees should do? Let’s start at the beginning with:

    IT Security Management 101

    AS/NZS 4360 Standard for Risk Management (1999) and ISO 17799 (now the 27000 family) are a great starting point. This stuff is simply not rocket science; any organization, no matter what business (charity, big oil, health, military, government, financial, etc), can and should look at what they have today, and start implementing these if they have nothing.

    1. ISMS – Create an Information Security Management System. This requires an effective CSO or CIO who is a force for change, with a mandate to take the opportunity cost out of the equation. Spending money on IT security seems like a cost to most orgs, but if you see it as an opportunity to do better, you will succeed. Security is a business enabler and an indicator of growth. CIOs/CSOs who choose the negative “no” speed-hump path simply don’t get it and should be replaced. However, in all cases, it’s important that the CSO or CIO can force business owners to do the right thing, or make the business owners accept the responsibilities and risks of poor security decisions. Most orgs do not have an ISMS, and rarely do CIOs/CSOs sit on the board or prove effective in any fashion. If the CIO/CSO has responsibility and accountability, but no budget and no power to improve things, resign. There’s no way you can effect substantial change when all software is insecure.
    2. Create and maintain IT security policies and procedures, and allocate (and enforce) responsibilities. Someone has to have the power to say “turn that off”. Someone has to know when it’s time to “turn that off”. Someone should have known beforehand that certain systems are more likely to end up in the “turn that off” category, and have the power and responsibility to do something about it. The best IT security policy I ever saw* was 10 pages long, had less than 500 words (none of which were “don’t”) and 20+ images in it. Staff knew what they had to do and they did it, as it worked with human nature rather than just saying “no” or “don’t do this” or “you’ll get the sack”. If your IT Security policy would make Stalin proud, occupies three massive binders, and is gathering dust in a cupboard, you’re doing it wrong.
    3. Create and maintain a global risk register. Start with an Excel spreadsheet if you have to, but most of you should probably go out and acquire one of the many excellent products out there that satisfy the ITIL marketplace.
    4. Create a catalog of all your assets (particularly DATA and the systems that handle that data!) and make sure it’s kept up to date. ITIL related products are your friend here – there’s heaps of asset register products out there, but make sure you register data assets as most are all about physical boxes. Assign all assets a classification and make sure folks know how things with that classification are to be dealt with. I prefer a simple three tiered classification system (public, internal, restricted), but whatever floats your boat. 90%+ of all orgs I deal with do not have any idea of what they are running nor the value of their assets or how they should treat them. I know of one org whose HR system was running on a desktop in a cupboard. Unacceptable. But if you don’t know it, you’re negligent, pure and simple.
    5. Perform a risk assessment of all assets, particularly critical ones. Risk assessments used to be popular, but I haven’t seen any done for a while now. This is a huge mistake. Put the risk assessments and any findings from reviews in there. Track, assign responsibilities and dates, and …
    6. Fix – Assign – Accept. Remediate what you can where it makes sense to do so. This doesn’t mean fix everything, just the things that matter. Insure (risk assign) against the truly catastrophic outcomes. Accept what’s left.
    7. Security is an enabler! Be treated how you’d like to be treated! Train the business folks and developers in secure requirements and coding. Adopt a SDLC and do it. Get and use a defect tracker. Get and use code control. If you’re doing agile, make sure security is a key deliverable of every single user story / sprint / milestone. Make sure your testers test for abuse cases as well as business cases. Think outside the box and think about your customers when you do your security. Security that doesn’t work is wrong. Security theatre is wrong. A multitude of security features doesn’t mean you’re secure. Do security well, and you’ll win because your customers / clients / users will love you and appreciate the efforts you made to make security transparent, easy and effective.
    8. Expect to keep up with the Joneses. You don’t need to be bleeding edge, but anyone running Lotus Notes from 2001 or IE 6 should put money aside to deal with the cleanup of any lame attack from the last X years. Just because you’re not paying out on cap ex this year doesn’t make you a good manager. Long term, you’re gonna pay. Even out the expenses and roll out new stuff all the time and retire old stuff all the time. Don’t be afraid to run XP, Vista, Linux, Windows 7, and Macs all side by side. You shouldn’t require everyone to use the same XP image from 2003 on modern hardware – that’s just stupid. Keeping up is the cost of using IT and those who update regularly pay less than those who wait. And wait. And then get attacked. Plant and equipment is tax deductible in most tax regimes, so there’s no excuse not to depreciate and retire old crap. It does mean you’ll need to cope with patching and scalable roll outs of new hardware and software. You need this anyway for those zero days.
    9. Get rid of crap that costs a lot to operate. Systems that need patching all the time are doing it wrong. Systems that are attacked all the time because they are insecure should be retired. These systems are not worth supporting. Make the ISVs realize that you only pay for secure software that requires little maintenance. Wean off any supplier who refuses to understand this most basic of requirements. They’ll go out of business, and you’ll save money. Ensure when you buy customized software or have it developed for you that the contract states that the ISV has to fix all security bugs for free and they are responsible for paying for the code reviews and penetration tests to prove that they are secure. That’ll keep the ISVs in line.
    10. Monitor and escalate. No system is perfect. Put in procedures to cope with the horse bolting, but try not to have your entire herd and all their tackle gallop out the stables.
    11. Don’t be a cowboy – do it all the time. A good ISMS is not “fire once and you’re done”. You can’t buy a product that does it for you. This is a commitment, just as GAAP is a commitment to use the same financial standards year in, year out. Those who forgot this lesson are now paying for APT. I’m not going to justify why you need to do this stuff; it should be obvious.
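
    Step 3’s risk register really can start as small as this. A hedged sketch – the 1–5 likelihood/impact scales, field names and treatment labels are illustrative assumptions, not lifted from AS/NZS 4360:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Risk:
        """One row of a minimal risk register."""
        asset: str
        description: str
        likelihood: int          # 1 (rare) .. 5 (almost certain)
        impact: int              # 1 (negligible) .. 5 (catastrophic)
        owner: str               # who is accountable for the treatment
        treatment: str = "accept"  # fix / assign (insure) / accept

        @property
        def rating(self) -> int:
            # simple likelihood x impact score, per step 5's assessment
            return self.likelihood * self.impact

    def top_risks(register, threshold=12):
        """Risks needing treatment first, highest rating first."""
        return sorted((r for r in register if r.rating >= threshold),
                      key=lambda r: r.rating, reverse=True)
    ```

    An Excel sheet with these same columns does the job just as well; the point is the shape of the data – asset, rating, owner, treatment, date – not the tool.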

    This stuff is simply not rocket science. It’s not new. Most well-governed orgs already have this in place and have been doing it for a decade or more. The problem is that few orgs are well governed or have any particular driver to do IT Security well. Most CIOs are untrained in security, as they’re often accountants brought in to rein in costs – which is a mistake. Most CSOs lack board presence and have no authority other than to be a speed hump. This has to change. Orgs who grew up overnight (like Google) will get hit – and hard – by APT.

    I don’t want to hear about APT unless you have a solution to whatever you’re bleating about. If you’re going on about how the script kiddies have all grown up and now do exactly what they did before, but are now bankrolled by intelligence agencies, my question to you is “so what?” If you’re doing IT security and governance right, APT is just so much hot air.