Category: Security

  • OWASP Top 10 2007 is done

After a nine month process, starting with a visit to a pho restaurant with Raoul Endres in Melbourne, Australia, and ending with me working in a hotel room in Pennsylvania, USA, the Top 10 2007 is really done. It's 35 pages packed to the rafters with good advice.

    The document will be launched at OWASP EU this week. Look for it on our Wiki shortly in PDF, Word and Wiki format.

    Whilst not quite a 1-1 mapping to MITRE data, this is a succinct update to the 2004 work, and I think a very worthy successor. Hopefully, it will not be three years between this release and the next.

Jeff Williams and Dave Wichers (my co-authors) have put in some excellent work on the back end, as well as playing devil's advocate when necessary. Many thanks to Steve Christey of MITRE for his excellent, careful line-by-line reviews, and indeed to all our peer reviewers.

    Feel free to download it and have a read. I welcome all comments.

  • Automated detection of CSRF

I've been finishing the OWASP Top 10. One of the things I profess to know little about is automated tools. Up until recently, they've created more work (false positives, false negatives) than running the tool is actually worth. However, they are getting better.

    After discussions with Jeremiah Grossman of White Hat Security and WASC fame, I was a little surprised to find that none of the tools can detect CSRF vulnerabilities. This is doubly surprising in that of all the attacks, this one is tractable given a flexible enough engine.

    The basic game plan is this:

    a) Watch an action take place
    b) Determine what changed so you can create a signature of the attack
    c) Create a pre-canned request from (a) but add in your CSRF locator strings
    d) Go.
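To make step (c) concrete, here's a rough sketch in PHP of what the probe half might look like. The function name buildCsrfProbe(), the CSRF_MARKER constant and the example URL are my own inventions for illustration, not part of any real scanner:

<?php
// A sketch only: take a request recorded in step (a) and turn it into a
// self-submitting form seeded with locator strings, as per step (c).
define('CSRF_MARKER', 'OWASP-CSRF-PROBE-');

function buildCsrfProbe($action, array $recordedParams)
{
    $html = '<form id="probe" method="post" action="'
          . htmlspecialchars($action, ENT_QUOTES) . '">' . "\n";
    foreach (array_keys($recordedParams) as $name) {
        // Swap each recorded value for a traceable locator string, so the
        // scanner can later tell which probe caused the observed state change.
        $probe = CSRF_MARKER . md5($name);
        $html .= '  <input type="hidden" name="' . htmlspecialchars($name, ENT_QUOTES)
               . '" value="' . htmlspecialchars($probe, ENT_QUOTES) . '">' . "\n";
    }
    $html .= "</form>\n<script>document.getElementById('probe').submit();</script>";
    return $html;
}

// This page would sit on the "eggshell" site and be pulled in via an XSS,
// or via a bare <img>/<iframe> reference on the victim site.
echo buildCsrfProbe('https://victim.example/transfer.php',
                    array('to' => '12345', 'amount' => '100'));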

Now, how to exploit the CSRF? It will need something like an eggshell site of pre-canned CSRF attack payloads. Most CSRF attacks start with an XSS, so the game plan has to be … find an XSS and insert the eggshell in there. This is not hard; it just requires that the victim's browser has access to the eggshell site, or that the CSRF attack is small enough to be self-contained on the hosting site itself (as in the Samy worm).

However, it is fun to read professionals dismissing the potential of CSRF. This list either shows ignorance about how many apps work, or worse, ignorance of how easily you can submit a form from a GET request, creating a POST request. All it requires is sufficient payload space to include a link to a vulnerable reflected or persistent XSS to start subverting the HTML stream. Once that's done, CSRF is all but in the can.

Interesting. We need to do more, not less, CSRF promotion in our camp. On the other hand, our defenses are woeful, so maybe leaving it in a less than well understood state is a good idea. Hard to say. What do you think?

  • On CSRF

    Many folks are failing to understand CSRF properly, and how to protect against it.

    Let’s do this from the beginning and look at what works, and why.

    CSRF diagram


Cross-site request forgeries are simple at heart: force the victim's browser to use the victim's session and credentials to perform authorized work on the attacker's behalf. This almost always uses XSS as the injection path, and can take the form of a hybrid attack, stored or reflected XSS, or a pure DOM attack (including remote script to take over the page).

    Typically, a simple CSRF might look like this:

<img src="logout.php">

If the application has a logout.php, including such a URL in any page the victim views will force the victim's browser to load logout.php. As per normal, the user's credentials (the session) are sent with the request to logout.php. If logout.php is not CSRF aware, it will log out the user.

This is a simple DoS attack. However, imagine if you could do this for Internet Banking, forcing the user to transfer money from their account to a nominated attack account or via a wire transfer service. Unfortunately, this attack is present in every application with an XSS problem, which means > 90% certainty the apps you use have CSRF issues.
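For example, with a completely made-up bank URL, the attacker only has to get the victim's browser to render something like:

<img src="https://bank.example/transfer.php?to=12345&amount=2000" width="1" height="1">

and the transfer request goes out with the victim's session cookie attached.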

    What can be done, and what doesn’t work and why

The first thing to do is look at the diagram above. Anything in the red box is 0wned by the attacker if they have access to run their script on your users' browsers. This includes:

    a) session (with its implied credentials)
    b) credentials (if any, such as Basic auth or NTLM auth)
    c) random tokens
d) the entire DOM (i.e. all aspects of the page's look and feel, as well as how it interacts with your server).

    Now there are some good ideas to prevent what I call “no click” attacks. These are like the example above – just by viewing a page, the attacker forces the victim to perform their actions. In order of usefulness:

    Move from GET to POST

    This is in the HTTP RFC – any request that *alters* the state of the application (such as transferring money, logging folks out, etc), SHOULD be done via verbs other than GET. In our case, we choose to use POST as it’s simple. This raises the bar … a little. Anyone who can code basic JavaScript can get around this.

    Add a random token to the request

    This scheme is simple: add a nonce hidden field to the form, and check that the nonce is the same on the server upon return. You know what? This will defeat all no click attacks, but will not block advanced hybrid attacks, like the Samy worm.

    This approach is done by most anti-CSRF tools out there today, including the CSRF Guard from OWASP. It works… against script kiddies.
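As a rough illustration only – the function names are mine, and the uniqid()/mt_rand() nonce source is a stand-in for a properly strong random source – the nonce approach boils down to something like this in PHP:

<?php
session_start();

function csrfToken()
{
    if (empty($_SESSION['csrf_token'])) {
        // For a sketch only; a real implementation (e.g. CSRF Guard) should
        // draw the nonce from a cryptographically strong source.
        $_SESSION['csrf_token'] = sha1(uniqid(mt_rand(), true));
    }
    return $_SESSION['csrf_token'];
}

function csrfField()
{
    // Drop this inside every state-changing form.
    return '<input type="hidden" name="csrf_token" value="' . csrfToken() . '">';
}

function csrfCheck()
{
    // Call this before acting on any POST.
    return isset($_POST['csrf_token'], $_SESSION['csrf_token'])
        && $_POST['csrf_token'] === $_SESSION['csrf_token'];
}

A no-click <img> or auto-submitting form from another site cannot read the page, so it cannot know the nonce; a script already running in the page (the Samy case) can simply read it out of the DOM, which is exactly why this stops at the script kiddies.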

    Add the session ID to the request

On the surface, this is even simpler than the previous example: check that the session identifier is sent with the request, thus preventing simple e-mail attacks. However, this misses the point – the victim's browser sends the request, and thus includes the credentials anyway. So this is a false mechanism – you're just repeating what the web server does to associate you with your session, and therefore this is not a valid or viable method of protecting against CSRF.

    Ask the user for a confirmation that they want to do the action

Many CSRF attacks send one request and will fail if there is a second page asking for confirmation. Guess what – this does not prevent scripted CSRF. The Samy worm broke new ground in many different ways – it walked through a multi-page submit process to make a million folks Samy's hero. But this approach is nearing the correct solution.

    Ask the user for their password

In this scenario, the attacker should not know the user's password, so we're moving towards the correct solution for CSRF: the password is out of band and thus not knowable in an automated way.

However, anyone who has been phished or knows what phishing does will realize straight away why this does not work – not only does the attacker have full control of the DOM, they can re-write the page any way they wish, including intercepting forms by changing the submit function, intercepting data sent to the server, and popping up their own dialog to authenticate their request. Something along the lines of "I'm sorry, your password didn't work. Please try again".

    SMS authentication

    In this scenario, for our high value transactions, you may wish to consider using SMS based two factor authentication. What happens is that the user will get a random code with explanatory text, like this:

    “The code is WHYX43 to authorize transferring $2000 to account 23214343 (“My checking account”). If you did not initiate this transfer, please call 1-888-EXAMPLE.”

    This takes the app out of the left hand red box to a second red box. Sure, you don’t control this new red box, but to attack this scenario, the attacker must:

    a) Attack the application successfully and run their script
    b) Be available when the user logs on to the application
c) Attack the telco's SMS infrastructure so as to intercept the token, re-write the message, or redirect the token at the right time
d) Ensure the user cannot reverse the transaction – they would most likely still receive the SMS, and if they didn't, they would be expecting a new code, which may invalidate the attack token.

This confluence of attacks is not easy. It requires too much, and I personally believe that, for its cost, this solution cannot be beaten. It doesn't make the attack impossible, just really, really hard.

    Two factor transaction signing

    Bring on the big boys. This is how two factor authentication should have been done: authenticate the transaction / value, not the user.

In this scenario, the attacker would have to convince the user to work through a sequence of steps on their token, most likely including typing in the value of the transaction, and then return the resulting code to the attacker. Phishers are clever, but not clever enough in this case.

    I really think that for the highest value systems, two factor transaction signing is the way to go.

    Sequencing

This is a counter-measure that I discovered by accident when reviewing a Spring Web Flow app a little while ago. By building on SWF's flow mechanism, we can create applications that are really hard to subvert.

    The approach is this:

The application has a range of functions on a page which perform actions. Each action has a special flow ID, flow step, and random nonce mixed in, calculable by the server only.

    So if you want to create a link to go somewhere, you do it like this:

    myUrl = createURL(FLOWID, FLOWSTEP, someRandomFn());
    or
    myUrl = createFormAction(FLOWID, FLOWSTEP, someRandomFn());

It would create a special link, or a post action. When a user views a page, only those links or form actions are permissible. Therefore, a hostile attacker wishing to go from page x to a goal function g' simply can't unless that goal function is reachable from x. This means that by introducing the concept of landing pages and confirmation pages for special functions like logout or change profile, you can ensure those functions are only reachable whilst in the midst of the right flow.
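To make that a little more concrete, here's a hand-waving PHP sketch of the idea; createURL(), the _flow/_step/_nonce parameters and the session key are my own inventions, not Spring Web Flow's actual API:

<?php
session_start();

function createURL($flowId, $flowStep, $nonce)
{
    // Remember which (flow, step, nonce) tuples were actually rendered on this
    // page; anything else arriving later is not a legitimate transition.
    $_SESSION['allowed_transitions'][$flowId . ':' . $flowStep] = $nonce;
    return 'controller.php?_flow=' . urlencode($flowId)
         . '&_step=' . urlencode($flowStep)
         . '&_nonce=' . urlencode($nonce);
}

function isAllowedTransition($flowId, $flowStep, $nonce)
{
    $key = $flowId . ':' . $flowStep;
    return isset($_SESSION['allowed_transitions'][$key])
        && $_SESSION['allowed_transitions'][$key] === $nonce;
}

// controller.php would refuse to run the "logout" or "change profile" flow
// unless isAllowedTransition() says the page the user is on offered that link.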

    Attackers would have to inspect the current URLs to determine where they are and this is not easy if the location is somewhat randomized or commonalized (typical of MVC apps, which have a single entry point).

    This could be taken to the next level, forcing the client to perform public key crypto to calculate the correct response token by signing where they want to go like this:

rsToken = sign(serverPublicKey, destinationFlowId, flowStep);

    The server could then determine if the response was calculated by one of its clients, rather than one of the hordes of attack zombies. If the server then eliminated all previous steps as a potential flow source, it would immediately block out the user, or the attacker, and thus make the attack detectable.

This makes it much harder for a hostile DOM / attacker to move you directly to their goal function g', and thus makes the attack delayed, diminished, or at the very least detectable. As most attackers are only out for a good time, this may be enough for them to move on to another application which is easier to attack.

    However, as it requires re-jigging all applications, and we can’t eliminate XSS in the current set of applications today, I doubt this approach will work outside those who are prepared to try.

    Administrative attacks

One of the things that has got my goat for a while now is why application authors insist on mixing up user and admin privileges in one application. CSRF just turns a very silly non-compliance issue into a really stupid and foolish mistake.

    Administrators by their very nature use the app a lot more than most users. They have more privileges than your average bear. Attackers using CSRF would be silly not to attack the administrative users of the application.

    So… what does this mean?

    SOX (I’ll get to this), COBIT, ISO 17799 and a host of other compliance regimes all mandate that users are not administrators. Make it so. Get the administrative functions out of your app today and into their own app. Force the admins to use a different credential. If the admins view user created content, they are still at some risk of CSRF attack, so make sure those pages have the highest levels of anti-XSS and CSRF protection.

SOX is simple, and often misused to get unwilling business folks to (at worst) spend big on IT's latest geegaws or (at best) fund chronically underfunded security budgets. In any case, the basics are this: your app, if it pertains to the financial underpinnings of your business, must have anti-fraud controls. This essentially boils down to an initiator / approver model. If one person is allowed to create an order for $100 million, that same person shouldn't also be allowed to authorize it. In a perfect world, neither of those two roles would be allowed to receive the order, either. Fraud thrives when one person can do all three things. So if you have users who can perform all three roles, those users MUST NOT be able to use the application, and they MUST be extremely heavily audited. Such admins are not users … by law. I hate reviewing such cretinous mistakes, so please fix it. This also fixes the CSRF issue, as the admins are unlikely to CSRF attack themselves.

    In the real world

    The problem is that most applications are not high value transaction based systems. They’re forums, blogs, social networking sites, book selling sites, auction sites, etc. What about them?

They should be eliminating XSS in their apps as a matter of priority – XSS is the buffer overflow of the web app world. They need to stop using GET for state-changing actions immediately. They should be using random tokens.

These simple methods stop most simple CSRF. Adding further protection helps – but every application is different. If you need more, add more, but always consider usability. Forcing users to use a two factor authentication device for every page view is impractical and foolish. Choose wisely by protecting only your sensitive functions from abuse.

  • On injections

A fair number of years ago, I had the "pleasure" of reviewing an application written in ASP. Unfortunately, it had over 2000 SQL injections. I do not know what happened to the company, which produced legal case management software, but it would have taken a great deal of work to re-engineer the code to be safe. Why then, some six years later, are injections still all the rage?

Injections are, to put it in the simplest possible terms, the result of intermingling instructions with potentially hostile user-supplied data. This paradigm, although powerful, has failed. As Dr Phil says, "how's that working for ya?"

    So we have to move on. Luckily, this post is not all bad.

    HTML injections

It's becoming increasingly hard to ensure that output is properly encoded, especially as I18N becomes more popular. Will data encoded to be XSS safe still be viewable to non-US readers? Hard to say. I've been working with Benjamin Field (or more precisely, I farmed out to him) re-implementing the Microsoft AntiXSS library API in PHP. This is nearly done. Once it is ready, we'll make it available.

    However, I’m still worried as it’s not the simplest, default way to output. When the simplest way to output is wrong from a security stand point, mistakes will be made.

    SQL injections

    Seriously, we have the technology to stop these today. Both strongly typed parameterized queries (a.k.a bound statements) and most forms of ORMs and Active Record systems are SQL injection resistant. Stored procedures are mostly safe, but are at risk if a certain lack of care is demonstrated.
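For instance, a minimal sketch using PHP's PDO prepared statements – the connection details, table and column names are placeholders of mine:

<?php
$db = new PDO('mysql:host=localhost;dbname=app', 'appuser', 'secret');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// The SQL text and the user-supplied data never intermingle: the statement
// and the bound values travel to the database separately.
$stmt = $db->prepare('SELECT id, name FROM users WHERE email = ? AND active = ?');
$stmt->execute(array($_GET['email'], 1));
$user = $stmt->fetch(PDO::FETCH_ASSOC);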

    LDAP injections

Want to be someone else? It's easy today. This is the great unexploited attack vector for *serious* applications. Toy apps don't use LDAP, so most researchers do not concentrate on it. But you betcha that most large orgs and govt types have signed up for the "SSO magic bullet" and landed themselves with an LDAP-shaped white elephant. They may not even be aware that they are running LDAP. It's certainly not made clear in many of the marketing materials. How do architects who have never coded understand the risks?

Today's LDAP code is eerily reminiscent of the SQL days of yore. Here's a typical LDAP login method (I have seen worse, but for this I've borrowed from php.net's manual page):


$ds = ldap_connect($ldaphost, $ldapport) or die("Could not connect to $ldaphost");

if ($ds) {
    // In a real login page these two values come straight from the form,
    // which is exactly where the injection problem starts.
    $username = "some_user";
    $upasswd  = "secret";

    // $username is dropped into the DN unescaped, so the caller can pick
    // their own location in the LDAP tree (or break the DN entirely).
    $binddn = "uid=$username,ou=people,dc=yourdomain,dc=com";
    $ldapbind = ldap_bind($ds, $binddn, $upasswd);

    if ($ldapbind) {
        // ...and $username is echoed back unencoded, hence the XSS mentioned below.
        print "Congratulations! $username is authenticated.";
    } else {
        print "Nice try, kid. Better luck next time!";
    }
}

    Yes, not only can you choose your location within the ldap tree, you can also do XSS if you’re clever.
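Here's a rough sketch of the kind of escaping the bind code above is missing. escapeDnValue() is my own helper name, and the character list follows RFC 4514 for distinguished names (search filters need their own escaping of ( ) * \ and NUL):

<?php
function escapeDnValue($value)
{
    // Backslash must be escaped first, then the other DN metacharacters.
    $value = str_replace(
        array('\\', ',', '+', '"', '<', '>', ';', '=', "\0"),
        array('\\\\', '\\,', '\\+', '\\"', '\\<', '\\>', '\\;', '\\=', '\\00'),
        $value
    );
    // A leading space or '#', and a trailing space, also need escaping in a DN.
    if (strlen($value) && ($value[0] === ' ' || $value[0] === '#')) {
        $value = '\\' . $value;
    }
    if (strlen($value) && substr($value, -1) === ' ') {
        $value = substr($value, 0, -1) . '\\ ';
    }
    return $value;
}

$binddn = 'uid=' . escapeDnValue($username) . ',ou=people,dc=yourdomain,dc=com';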

    XML

    Don’t even get me started. Creating uninjectable XML functionality is a PhD in itself, and once it’s done, I doubt the resulting XML would be anywhere near as useful as it is today.

Injections will continue to occur for the same reason the USA still has pennies and $1 bills: the old form was never removed from circulation. This is the only solution: ensure the safer solution is available by default (and is the easiest method to use), and remove the old unsafe methods. Make it HARD or IMPOSSIBLE to do it badly.

  • Top 10 2007 is done

The document is a complete re-write from scratch, and is totally up to date. It's 34 pages of goodness wrapped in a shiny new document format. Essentially it's all over bar the shouting… which comes next! 🙂

    The document will be uploaded to our Wiki in the next week (post-board approval). If you want your review points or changes to be included, you will need to be on the Top 10 mail list to make the suggestions or changes. To join the OWASP Top 10 mail list (it’s free!), go here:

OWASP Top 10 Mailman interface

I am particularly interested in hearing from people in the following areas:

    • PCI DSS arena
    • Department of Homeland Security
    • NIST
• Your nation's equivalent of the above two if you are outside the USA
    • If your organization has previously adopted the OWASP Top 10 2004
    • Vendors in the WAF, automated code review, and other automated tool arena (yes, we finally discuss if these automated controls are likely to work, but as we don’t know about every product, the more advice we can get the better)
    • Frameworks, particularly the PHP team, J2EE / Struts / JSF / Hibernate / Sun / BEA, JBoss, etc, and of course Microsoft’s folks in the .NET team

    The last two bullet points are REALLY important as we make some stringent suggestions about how best to code to avoid the Top 10 weaknesses and we want to ensure that it really is the best advice. If you can’t be seen contributing publicly, feel free to e-mail me… vanderaj (at) owasp.org.

    UPDATE >> Here it is!

    http://www.owasp.org/index.php/Top_10_2007

    Andrew

  • The usual suspects and what to do about them

I've been busy on the Top 10 2007 with Dave Wichers and Jeff Williams. I'm very close to finalizing a draft release right now. This process made me think: how can we eliminate these issues? Why should every developer have to learn how to fix the same problem? We know that on some frameworks, some classes of application programming error are history. Obviously, we're not going to be able to fix business process or logic issues, but I'd prefer people working on those than wasting time searching mountains of code looking for every last vulnerability.

    So over the next week or so, whilst I’m traveling and theoretically have more time (i.e. less TiVo!), I’ll pump out what’s wrong with the current model, and propose how it might be fixed. Permanently.

    Some of my recommendations will be hard to swallow, but the alternative (“same old, same old”) has failed, and failed miserably for years. It’s time for something new, or in the case where it works real well on another framework, let’s adopt their ideas and maybe even improve on it a bit. Up first is our old friend XSS.

    XSS

[xss-table001.png – table comparing the difficulty of getting XSS-safe output in PHP, J2EE and .NET]

The table shows just how wrong the old way (PHP) is. I made the numbers up in the difficulty column, arbitrarily setting it to 10 / 10 for the weakest solution, and then thinking carefully about what would be required in the other platforms to come to the same desired point: a safe application. In this round, J2EE using any of the common presentation layers wins hands down. Sure, you can do bad things in J2EE and .NET, but the important thing is that it is not the default. You have to work at being insecure. But when you need to be, those frameworks allow you to be as insecure as the next guy.

    Given the likelihood that a PHP developer is no better or worse than a J2EE or .NET developer, the PHP application requires more care and thought to get right. This means, in the entire universe of webapps (regardless of language), a larger percentage of PHP applications will have XSS issues than other frameworks.

    What’s needed to get it right?

With PHP, there's no real solution other than … a very ballsy decision to make PHP 6 XSS safe by default. PHP 6 is Unicode. Let's make the Unicode output functions XSS safe by default, and output an E_STRICT warning when apps use print or echo. Obviously, there will need to be a way to output HTML dynamically, but this is the corner case, not the default. Let's make the devs who need REAL html do the hard work, rather than make all devs do the hard work, everywhere.
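Until (or unless) that happens, a project can at least make the safe path the short path. A minimal sketch – e() is my own naming, not a PHP builtin:

<?php
function e($string)
{
    // Encode for HTML output; assumes the app works in UTF-8.
    return htmlspecialchars($string, ENT_QUOTES, 'UTF-8');
}

// Wrong (and sadly the shortest thing to type today):
//   echo $_GET['name'];
// Safer, and nearly as short:
echo e($_GET['name']);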

    With .NET, all controls need to be XSS safe by default and have an encoding property, and it should be set to true by default. Enough are properly done right now to protect the usual newbie programmer, but it’s wrong to assume that even advanced devs will remember to encode everything. Where a value is stuck into a field that is likely to be displayed without encoding, a warning should show in the debugger.

With J2EE, the JSP compilation step should issue a warning when the old-style <%= … %> is used un-nested. <%= … %> is still required to put values into message and bean:write tags to do useful work, but if it's just on its lonesome, that's XSS right there.

    Tomorrow: Injections…

  • Research time

    A few weeks ago, the announcement of the PDF hole made it clear that the age of stupid XSS vulnerabilities is still with us. Is it time for me to surf in a read only sandbox? XSS is so old school, and yet so damaging. It is so SIMPLE to prevent, but so HARD to stamp out. I was disheartened.

    But then today rolled around.

    We had a board meeting tonight and I’m excited with what we have planned, and it’s re-invigorated me tremendously. It’s a very exciting time to be in the midst of the OWASP community right now.

    I hereby declare 2007 the year of pro-active webappsec research. Not looking for or researching new vulnerabilities, but researching and developing long term effective methods to close down common holes which plague browsers and common frameworks. It’s time to kick XSS, CSRF, injections of all types in the slats and make it impossible for folks to say “well, I didn’t know” or “that’s too hard / costly / time consuming”.

We have a range of projects we're doing this year, and I will make it my task to ensure that OWASP builds the knowledge, tools, patches, and so on to eliminate wide swathes of webappsec retrobugs. Let's see how I go in 345 days or so.

  • Rebutting MJR’s rant

It was nice to see Marcus Ranum (who has an interesting slant on the security industry) get some press again. This time it's on responsible / full / no disclosure. In a probably unrelated attack, his site was defaced by an SEO blackhat. Irony, eh? If only he had patched, or used software which has learnt the hard lessons.

Here's the anti-rant I wrote to my co-workers a Friday or two ago:

Ranum's argument has four major elephant-sized flaws (at least).

Firstly, he states that security has not gotten better. This is clearly wrong. Security has gotten a great deal better, but so have the attacks and our knowledge. However, the impact of attacks has been steadily decreasing. When I first joined the Internet, there were perhaps 100,000 people on it at a very small number of sites. That year, the Morris worm nearly destroyed the entire Internet. There have been no significant attacks like that for some time. Yes, there are more attacks, but considering there are more than a billion of us on it now, that's to be expected. Attacks require a great deal more skill today than in Morris' time. Old software, particularly in the webappsec arena, is trivial to exploit. Proof – modern stuff which is hardened through the lessons we've learnt is very hard to exploit. Software which does not heed the lessons is trivial to exploit (see MJR's site, natch!). Without some pressure, all software would be trivial to exploit, not just the lesser used stuff.

Secondly, he states that disclosing vulnerabilities is akin to shouting fire when there is barely any smoke. The implication is that you should never shout fire, even if there is the possibility of fire. However, if no one shouted fire, children's pajamas would still be made of highly flammable materials resulting in third degree burns or death, instead of the slow-burning or insulating materials we have today. Only through research, standards and, indeed, advocates (akin to vulnerability researchers) doing shock stories on tabloid TV did we move from obviously deadly dangerous to moderately safe. Fire is a particularly weak analogy as the metaphor breaks down very quickly – fire always occurs and is a natural phenomenon.

Thirdly, Ranum ignores evidence that contradicts his position. Vendors and customers are hurt by rampant full disclosure, and I agree that some folks are only out to get on CNN for a few cycles. However, responsible disclosure is the only proven way to make security-sloppy companies like Oracle pay attention – eventually. It made Microsoft more secure, and I think if you look at NT 4.0 (1996) versus Vista (2006), Vista is a much larger but harder target. Oracle's CSO is (in my view) negligent because she thinks like Ranum, and refuses to protect her customers and, ipso facto, all of us.

Lastly, Ranum HATES – and I mean truly despises – upgrading software. This leads directly to his point of view that if there were no disclosure, there would be no (or much less) patching, and therefore he wouldn't have to upgrade. This is a logical fallacy, as one does not lead to the other. If all of us had his world view, we'd be running the NCSA web server with no firewall on SunOS 4.1, i.e. completely unsafe. How would Microsoft|Apple|Sun have learnt how to secure (as best they are able) their operating systems without the challenges of security researchers and malware creators? It's like MRSA golden staph – damn near unkillable around hospitals today. It didn't get like that because we used soapy water.

He rants against the creation and sale of malware as if we're powerless to stop it. However, it is already illegal to do this in many countries. So if someone writes malware, they are already breaking the law. Why would they stop now, or have stopped in his alternate no-disclosure universe?

I remember a few years ago that CERT sat on a major DNS issue for, oh, 8 years (I'm making this number up, but it was not a few months) until the last root server was upgraded to BIND 8.something. There was an architectural flaw that could have destroyed the Internet with a few packets. And I knew about this in like 1992 or 1993, and at that stage I was not in the security game fully – just a sysadmin. It only required someone with bad intentions and the Internet would have been dead. Why so many years? Because there was no impetus to upgrade the root servers, despite the system being 14 times redundant, simply because CERT sat on the problem. When I met Spaf a few years later at a SAGE-AU conference, I asked him about this, and he was unapologetic. Who gave him the right to decide if the Internet stays alive or not? It should have been fixed, and indeed it was fixed – eventually.

Will we ever be secure? No. Will Ranum's or my site be safe from attack? Doubtful. But Ranum is simply wrong if he thinks that by stopping disclosure we will suddenly become safe.

    Ranum’s alternative is no alternative.

    ps. I am no apologist for unrepentant full disclosure types out for their 15 minutes on CNN. Hint: I will never employ or recommend ANY full disclosure folks.

  • OWASP Top 10 2007 nearly done

    This edition’s headings:

    A1. Cross-site scripting
    A2. Injections
    A3. Insecure Remote File Include
    A4. Insecure direct object reference
    A5. Cross-site request forgeries
    A6. Information leakage and improper error handling
    A7. Malformed input
    A8. Broken authorization
    A9. Insecure cryptography and communication
    A10. Privilege escalation

    Note what’s missing? Note what’s new? 😉

    If you want to review it, please mail me. We are putting it out to at least a month’s peer review, including previous users such as PCI and SANS, as well as folks who had no particular love for the old 2004 edition.

Unlike the 2004 edition, the Top 10 will now be updated yearly. With some luck, we will be releasing it each and every January.

  • WebAppSec Past and Future

All the cool kids get the press for the wrong reasons. It's much easier to destroy than to create. Therefore, my 2006 and 2007 lists will only highlight those things which I think have helped create safer web apps, not the things which have made it harder for us to protect them.

    2006 Highlights

    • IE 7.0 released. Seriously. Prevents many phishing attacks, reduces the damage through low privilege browsing, and stops some forms of XSS (including the recent Adobe PDF problem). Firefox and Apple could learn a few things from Microsoft.
    • Publication of Ajax Security guidelines by many folks (including me)
    • PCI updated their guidelines to encourage vendors to take CC handling seriously, mandating code reviews by 2008
    • Folks who are normally hidden started blogging, such as this PCI DSS blog and this
    • OWASP Testing Guide gets off the ground in a big way. When this is released (soon!), normal folks will have a way to review existing code properly.
    • OWASP Autumn of Code starts, funding approximately nine projects (8 were chosen and we funded another as it is strategic to OWASP’s mission). Many projects are nearly finished! This has been extremely successful and we will be doing it again in 2007
    • Encoding gets a fresh look: OWASP Encoding library and Microsoft’s revamped AntiXSS library which takes the refreshing approach of deny all crap and let through known good.

    2007 Projections

It's going to be a very busy year for vendors in this space, such as my new employer, Aspect Security. With PCI compliance coming through the works, and folks writing PHP apps finally grokking that they need code reviews and pen tests, it's going to be a bumper year.

    Things that I think will make a difference or need more research:

    • Protections against malicious XSS. This will almost certainly focus attention on Javascript implementations
    • Better browser protections for users. All browsers need to look at IE 7.0 and think of that as a starting point. You hearing me Firefox and Safari / webkit devs?
    • Research into safe I18N methods and prevention. This is an almost completely green field today, and needs serious researchers
• Working on safer APIs for free-form protocols such as XML and LDAP, which are essentially utterly injectable today
    • Work with the PHP group to get them to make PHP 6 safe by default. They have an excellent opportunity and a huge responsibility to not screw up
    • Open source web app sec training for open source languages such as PHP and Ruby is direly needed. Lots of information out there, but how to publish to this audience? Extremely challenging
    • Projects utilizing the latest fads (Spring, Ruby, Ajax, etc) MUST catch up with the latest in webappsec trends or they WILL fail. It is not enough to adopt the latest and greatest fad and think it’s secure. It’s not.
    • Folks like Gunnar Petersen are getting the secure SOA message out there. This baby’s time was several years ago, but I think in 2007 large organizations will finally start realizing that hooking up web services to 30+ year old Cobol is an insane proposition without a dose of security
• REST will be put to rest, as it is insecure and cannot be made otherwise… without looking an awful lot like WS-*. At which point you may as well use WS-* and be done with it. SSL != secure.
    • A lot greater focus will have to be paid to business logic security. Code scanners and app scanners CANNOT find this stuff, and yet it is the raison d’etre for the web apps. Securing business logic requires hard graft, and a great deal of focus in the architecture and business requirements phase. Hopefully, OWASP will be working on secure architecture, business requirements and design resources this year.

However, it's going to be an annus horribilis for folks who cannot or will not undergo PCI compliance. PCI compliance is mandatory in 2008, and doing brain dead stuff like storing credit card details will mean many smaller CC gateways and providers will have to shut down, leaving only the big providers. This will mean higher processing fees and less competition. However, the reality is that the financial and identity theft losses from non-compliant places outweigh the benefits of letting them live. I'm happy to pay a little extra and know that my details are reasonably safe from unsavory types.