Category: Security

  • OWASP Guide 2013 – Developers needed!

    The Developer Guide is a huge project; once complete it will run to over 400 pages, written, I hope, by tens of authors from all over the world, and it will hopefully be the last “big bang” update the Guide ever needs.

    The reality is that our field is just too big for big bang projects. We need to update the Guide continuously and keep it watered and fresh. The Guide needs to become like a 400 year old eucalypt: all twists and turns, but kept continuously green and alive by occasional rainfall, constant sunlight, and the odd fire.

    If you are a developer with some spare cycles, you can make a difference to the Developer Guide. I need everyone who can to add at least a paragraph here and there. I will tend to your text, give the whole a single conceptual integrity, and possibly prune a little, but with many hands we can get this thing done.

    Why developers? Many security industry folks are NOT developers and can’t cut code. We need developers because we can teach you security, but it’s difficult to instil three years of postgraduate study and a working life of cutting code. I am not fussed about your platform: great developers know multiple platforms and have mastered at least a couple.

    I am installing Atlassian’s GreenHopper agile project management tool to track the OWASP Developer Guide 2013’s progress.

    Feel free to join the mailing list, come say hi, and join in our next status meeting on Google+.

  • Speaking at Linux.conf.au 2013

    I’m glad to say that I’ve been accepted to speak at linux.conf.au 2013.

    My talk is on applying the OWASP Developer Guide 2013 to your open source project.

    The Open Web Application Security Project (OWASP) Developer Guide 2013 is coming soon. In this presentation, you’ll learn about the major revision to one of the key open source code hardening resources.

    The new version will cover not only web applications (still its primary focus), but also offer general advice for all languages, frameworks, and applications through re-usable architectures, designs, patterns, and practices that you can adopt in your code with a little thought.

    Learn about:

    • The latest research in application security
    • How to apply new patterns to eliminate hundreds of security flaws in your apps, including the bizarre world of race conditions and distributed and parallel artefacts. Few apps can afford to be single-threaded any more, yet these subtle flaws are easily prevented once you know how (see the sketch after this list)
    • Challenges of documenting bleeding edge practices in long lived documents
    • How to pull together a global open source document team whilst holding down a day job
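
    Since race conditions feature heavily above, here is a minimal Java sketch of the classic check-then-act flaw and one standard fix. The Account class and its numbers are hypothetical, made up for this post, not taken from the Guide:

      import java.util.concurrent.atomic.AtomicLong;

      public class Account {
          // BROKEN: two threads can both pass the check before either one
          // debits, so the balance can go negative under load.
          private long balance = 100;
          public void withdrawUnsafe(long amount) {
              if (balance >= amount) {   // check...
                  balance -= amount;     // ...then act: the two steps are not atomic
              }
          }

          // FIXED: compare-and-set makes the check and the update one atomic step.
          private final AtomicLong safeBalance = new AtomicLong(100);
          public boolean withdrawSafe(long amount) {
              long current;
              do {
                  current = safeBalance.get();
                  if (current < amount) return false;   // insufficient funds
              } while (!safeBalance.compareAndSet(current, current - amount));
              return true;
          }
      }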

    If you code web apps, or write apps that need to be secure, this is a must-attend presentation!

    Come see me! Challenge me! Make the Guide better for non-web apps!

  • PCI DSS QSA vs ISA smack down

    In his post “PCI’s Money Making Cash Cow”, Andrew Weidenhamer must have had a bad week of being challenged (or in his words, “bullied”) by a PCI DSS Internal Security Assessor (ISA). That is not acceptable, but QSAs must accept that their advice is there to help the organization become compliant, not to provide a cash cow of their own, nor to go unchallenged.

    Not knowing the specifics of the background that led to this article, I have to assume that the ISA has pushed back on one or more of:

    • Scope – this has traditionally been the QSA’s sole domain, and (uncharitably) they probably don’t want anyone else busting a move in their profitability zone.
    • Interpretation of the meaning of various clauses. I wrote the OWASP Top 10 2007, which was incorporated in the PCI DSS. I find it highly amusing to hear some of the “meanings” attributed to what I wrote.
    • Being forceful about adhering to the “intent” versus the “letter” of the PCI DSS. This is a problem wherever the standard has to be deliberately vague, but the Council should be open and honest about what they meant when they wrote it – do they mean a web app, or something else? The PCI DSS is highly focussed on web apps, not other apps. Trying to extend it is like using a ship’s repair manual to fix a bus. They both have diesel engines, but you know it doesn’t work that way. Don’t force the issue if you don’t know.

    Being in this space right now, I understand the issues here. There are several problems I hope the SSC will pick up and resolve in the next major overhaul of the standard.

    • Make the meaning of “in scope” and “out of scope” a great deal more tightly defined. The biggest problem in my view is that it’s far too easy to drag unrelated systems into scope in cloud / virtualized / management environments. I’m all for a solid ring fence, but to think the only way to build one is with layer two firewalls is farcical at best and destructive of the Council’s reputation at worst. Firewalls have their place, but as part of a wider set of more than adequate controls, such as strong authentication, authorization, auditing, and escalation. Let’s put it this way: I do nearly all my penetration tests over SSL, through firewalls, and in direct view of IDSes, and I still manage to have a very, very good time. If firewalls are all you’ve got, we’ve got it very, very wrong.
    • Leaving the QSA to determine the scope is inherently conflicted. They get a lot more money if they scope it conservatively (i.e. as many of the requirements and as many systems as possible), and there’s a lot of risk if they scope it minimally but to the letter of the standard. I strongly suggest the SSC require tier one merchants to hire two QSAs: one to gather the information and set the scope, and one to assess the agreed scope and systems. Or work just like the internal versus external audit functions in the financial world, where the ISA’s output is treated as trustworthy and evaluated from time to time. Is either method perfect? No, but both are a lot less conflicted than the current situation.
    • The glossary, the prioritized list, fact sheets, PCI DSS for Dummies, what you heard on the community grapevine, and the guidelines ARE NOT the standard. They can be used to support an argument to do something in the spirit of the standard, but they are most certainly NOT the standard. QSAs – please understand: unless you can demonstrate that the reason behind a “not in place” finding is actually required by one of the in-scope requirements, it’s not required to be in place. Is it a good idea? Almost certainly, but that’s a different standard.
    • Many folks need and want an Attestation of Compliance … but at what cost? The process of working through not getting an AoC is almost completely uncharted. Most folks don’t even think about this third way, but it’s actually a fairly likely outcome. If your activities are all about getting an AoC at all costs, PCI DSS has failed to achieve a good balance. There are places for a black and white compliance standard, and there are places for risk based assessments. If it’s going to cost you $25m to fix a $25 a year problem, that’s a terrible, terrible outcome. I hope the SSC addresses this in the future, as many folks going through PCI DSS compliance will need an AoC but can’t get one because their QSA has said no for the most minor of reasons.
    • Make it easy for folks to ask questions directly of the Council. Nearly all of the requirements are vague. One QSA might have been told one thing by the Council, another has never come across the issue before, and suddenly you have two opinions: one right and one somewhat wrong. Too many times, an argument that drags on for weeks could be solved with a simple email to the Council. Channelling it through one side of the argument (the QSA) is inherently conflicted. Let’s be open and transparent in this process.

    In my view, the best way to deal with a QSA is to be friendly, but make it known that you will challenge them in a collegial way from time to time, and that there’s nothing personal in that challenge. The QSA may not understand the business or the technology, and they may have got it completely wrong.

    On the other hand, you as an ISA or as a hiring company may not understand the intent or the learnings of the Council, and may simply need to get your house in order – which is far, far more likely.

    PCI DSS holds you to account in a very blunt, non-risk-assessed way. For the first time ever, someone with a bigger stick is making you do it the way you should have done it in the first place. There is simply NO EXCUSE for SQL injection or XSS in any app, let alone a payment app. However, many of the requirements are so vague and so open-ended as to be nearly impossible to comply with unless you hoodwink the QSA, and that doesn’t serve the real purpose of this exercise.
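
    To make the “no excuse” point concrete, here is a minimal Java sketch contrasting string concatenation with a parameterised query. The table and column names are made up for illustration:

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;

      public class CardLookup {
          // VULNERABLE: attacker-supplied input becomes part of the SQL text.
          //   "SELECT id, amount FROM payments WHERE pan = '" + pan + "'"
          // A single quote in pan rewrites the query.

          // FIXED: the value is bound as data and never parsed as SQL.
          public ResultSet findPayment(Connection conn, String pan) throws SQLException {
              PreparedStatement ps =
                  conn.prepareStatement("SELECT id, amount FROM payments WHERE pan = ?");
              ps.setString(1, pan);
              return ps.executeQuery();
          }
      }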

    QSAs who dread every meeting with you are not going to offer good advice; they won’t offer advice at all. It’s best to walk a very fine line: be friendly, and learn all you can about getting from A to B in the best way possible that achieves credit card security, but don’t be so chummy that you find it hard to say “no” when you need to say “no”.

    My rule of thumb: if you’re having a difficult conversation with your acquirer when you should have been having a difficult conversation with your developers, your marketers, your business, or the QSA, then you’ve done it wrong. PCI DSS is here to save your bacon, not to be a speed bump. However, there is much to improve in the QSA engagement process – mainly, in my view, advancing the true independence of QSAs.

  • On penetration testing – harmful?

    Over at SensePost, there’s a new blog entry discussing Haroon Meer’s talk “Penetration Testing Considered Harmful”. Those who know me know that I’ve held this view for a very long time. I’m sure you could find a few posts on it in this blog.

    Security has to be an intrinsic element of every system, or else the system will be insecure. Penetration testing as the sole activity and sole piece of assurance evidence pushes security to the fringes of development: something you pass or fail, something to be commoditized, a box to be ticked, and ultimately ignored. Penetration testing as done by most of our industry is incredibly harmful. It’s a waste of investment for most organizations, and they know it, so they try to minimize the wastage by minimizing the scope and the time, and by pooh-poohing the outcomes.

    Penetration testing should be part of a wider set of security activities – a verification of all that came before. All too often, we come across clients who want a one or two day test the day before go-live. They’ve done nothing else, and when you completely pwn them, they’re terribly surprised and upset.

    We need to move on and make penetration testing like unit testing – a core part of the overall software engineering of every application.

    Penetration testing should never be ill-informed (zero knowledge tests are harmful and a WAFTAM for all concerned), and it should have access to the source, the project, and all documentation. Otherwise, you’re throwing the client’s money up against the wall and, in my view, acting unethically.

    Tests should come from the risk register maintained by the project (you do have one of those, right?), from the use cases (the little cards on the wall), and from the OWASP ASVS and Testing Guide. More focus must be placed on access control testing and business logic testing.

    Penetration testing has become vulnerability assessment – run a tool, drool, re-write the tool’s results into a report, deliver. No! Write Selenium tests and automate it (see the sketch below). If you’re not automating your pentests, how can your customers repeat your work, or re-test for it themselves? They should be taught how to do it.
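
    As a taste of what an automated, repeatable security test might look like, here is a minimal sketch using Selenium WebDriver in Java. The URL and the redirect-to-login behaviour are hypothetical placeholders:

      import org.openqa.selenium.WebDriver;
      import org.openqa.selenium.firefox.FirefoxDriver;

      public class AccessControlTest {
          public static void main(String[] args) {
              WebDriver driver = new FirefoxDriver();
              try {
                  // With no session cookie, the admin page must bounce us to login.
                  driver.get("https://app.example.com/admin/users");
                  if (!driver.getCurrentUrl().contains("/login")) {
                      throw new AssertionError("Admin page reachable without authentication!");
                  }
                  System.out.println("PASS: unauthenticated access to /admin/users is denied");
              } finally {
                  driver.quit();
              }
          }
      }

    Check a test like this into the project alongside the unit tests, and the client can re-run it on every build.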

    Folks at consultancies will shriek in horror at my suggestion, but getting embedded is actually a good thing. Instead of hearing from a client once in a blue moon, you’re integrated into the birth and growth of their software. This is a huge win for our clients and for the overall security of software.

  • OWASP Development Guide – what do you want in, and what do you want out?

    It’s time to do some curating of the OWASP Developer Guide. This is where my tastes meet the community’s – what do you want in the Guide, and what do you want out of the Guide?

    As much as I want to be comprehensive, there is a real risk that an 800 page book would never be read. There ARE Easter eggs in the Guide that no one has found or bothered to e-mail me about yet, so I know it’s not being read widely.

    I want the Guide to be used daily throughout our industry, the way the OWASP Top 10 and ESAPI are.

    • What would you like to see IN the Guide? Why?
    • What would you like to see OUT of the Guide? Why?

    Let me know by June. I’ll be sure to share your thoughts with the Developer Guide mail list.

  • OWASP Guide 2013 Development

    It’s been nearly seven years since I finished the Herculean effort of holding down a day job while leading, editing or excising the existing material, herding all the collaborator cats, and writing a goodly portion of the OWASP Developer Guide 2.0.

    I finished PDFing 2.0 around 4.30 am and pushed it to the OWASP website. I was rush-packing for BlackHat, as my plane was due to depart at 11 am. I checked my mail as I was shutting down my home lab and got a last minute set of edits from Michael Howard on the crypto chapter (which is definitely not my strong suit). So I fired up Word again, made the changes, and issued 2.0.1 in Word and PDF format pretty much just as I had to walk out the door to catch the plane.

    That was the last time the Guide was formally issued.

    It’s time to pick up the whip, and dip the pen in the inkwell (well, TextMate this time – we are working in Wiki format at Google Code).

    I plan to write at least one blog entry a week to describe how we are going. I am determined this time not to write more than 80% of the content. I simply don’t have the time, and honestly, if we’re going to finish this before 2.0 is old enough to vote, I really need helpers.

    The first steps have been put into place:

    • Put out a mail to the Guide mail list asking if I can take over
    • Got a bunch of public and private e-mails saying yes, plus most pleasing to me of all – offers of help!
    • Got an e-mail from Vishal Garg, the previous leader but one, saying that he had actually stood down last year (!)
    • Got an e-mail from Abraham Kang, the previous leader, saying that he would be happy to co-lead with me (awesome!)
    • Asked the Global Projects Committee to assign the project to me, along with a PM. I’ve not heard back from them, but at this stage I’m happy to act first and apologise later.

    Current status

    I’ve been reading the current materials out of the SVN repo. Oh wow. So much work to do. My plan is to spend a few hours each day writing a précis of what I have in mind for each section, and then farm out the work to all those who volunteered.

    I have to make a few basic executive decisions. These help get the project re-oriented in the right direction, so as to encourage lateral thinking about some of the hardest topics in our industry. I need the Guide to lead the charge against the groupthink that XSS or SQL injection is unsolvable, or that (weak) passwords will be with us forever. Other decisions are simply necessary for logistical reasons. I will try to make as few unilateral decisions as possible.

    First executive decision: We cannot possibly know what will be the new hotness.

    Developers are a creative and fickle bunch. Business would love us to code everything in COBOL or VB … or Java, but that’s not how the game is played. Freaking awesome developers (the tastemakers) pick up things that are new and interesting to them at least once or twice a year. A pool of talent builds behind the cooler / better marketed languages and frameworks. Not knowing what the next new hotness will be is my only real assumption whilst we develop the new version of the Guide.

    During Guide 2.0 development, classic ASP was winning the battle over ASP.NET, PHP was very popular and very insecure, and J2EE was just starting the process of moving from Struts 1.x to Spring, modulo a dead end or two (JSR-168 comes to mind). Ruby on Rails was a brand new plaything with a few fervent supporters. How times have changed.

    What hasn’t changed are the underlying principles of web application security. I don’t care whether you are writing in Ajax, GWT, Ruby on Rails, or Haskell, or you’ve moved to a web-flow style model – we know what works and what doesn’t, and to a large extent, it’s in the existing Guide 2.0.

    So I want to move the Guide up a level to be a hybrid architecture / detailed design guide, rather than an implementation guide: a set of repeatable architectural and design patterns that are easily adapted and applied cross-language and cross-framework, and that stay relevant as fads come and go without our knowing exactly what they will be.

    Second executive decision: Diagrams must not suck

    The Guide has always needed a lot more diagrams than it has. The diagrams I drew back in 2004 and 2005 … suck. I have the originals here, but honestly, I don’t feel we should re-use them.

    I will be approaching the Projects Committee to find us a good graphic designer, either to create a cohesive design language for us to draw the diagrams in, or simply to take our hand-drawn diagrams and redo them all in one style that looks good in the Wiki, Word, iPad, and PDF versions of the Guide.

    In the meantime, I will hand draw and photograph the diagrams I have in mind and include them in the wiki. That way, we’re not spending hours in a diagramming tool when we really need to be writing at this stage.

    Third executive decision: Distributed computing

    In 2005, the problem of race conditions in web apps was really only found in J2EE web apps that did very wrong but very arcane things. I had planned for 2.0 (and then 2.1) to include a distributed computing chapter that discussed race conditions, but it’s now time to include a detailed discussion of asynchronous, distributed computing: i.e. cloud computing.

    We need to take into account the many threads and cores of a typical processor today – meaning any server worth its salt will have multi-threading issues – as well as parallel languages (F# with the parallel extensions to .NET, and Go, to name two), plus Ajax and the multitude of frameworks that support asynchrony. And I don’t want to forget the oldest of them all: batch and background processes, which can still produce surprising results (see the sketch below).
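
    As one small example of that batch-and-background variety, here is a hedged Java sketch of a time-of-check-to-time-of-use flaw and its atomic fix. The report path and the scenario are made up for illustration:

      import java.io.IOException;
      import java.nio.file.FileAlreadyExistsException;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.nio.file.Paths;

      public class ReportJob {
          public static void main(String[] args) throws IOException {
              Path report = Paths.get("/var/reports/daily.csv");

              // BROKEN: another process can create (or symlink) the file between
              // the exists() check and the write:
              //   if (!Files.exists(report)) { /* write it */ }

              // FIXED: ask the OS to create-or-fail in one atomic call.
              try {
                  Files.createFile(report);
                  System.out.println("Created " + report);
              } catch (FileAlreadyExistsException e) {
                  System.out.println("Lost the race; another worker owns " + report);
              }
          }
      }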

    So it’s time to bring this bunch of issues to the forefront, because the cloud genie is out of the bottle, Ajax is well and truly plastered all over the Internet, and if a new single-core CPU running a new single-threaded OS ever ships again, I’ll be immensely surprised.

    Where to from here?

    It’s time to gather the offers of support, start building a road map, and build consensus on where we should be going. In my view, we need to – indeed must – lead the industry by at least two to three years to be relevant on day one of our launch. 2.0 was ahead of its time, but only just, and in the last seven years my lack of foresight / bravery in targeting the absolutely crazy bleeding edge meant irrelevance by 2008 at the latest.

    If you want to help, please join the mail list and please offer your services. It’s time to get OWASP Developer Guide 2013 going again.

  • Safety culture – let’s add it

    Last year, I was at a site which took safety very, very seriously. On the wall in a break room was a poster with several steps that I think we in the security industry could learn from:

    • Eliminate the risk. If you see a risk and it has a known solution, apply the solution. For example, with SQL injection and XSS, we know the solution (see the sketch after this list). There simply is no excuse. If you don’t know about SQL injection, XSS, or even input validation, then you shouldn’t be writing software. It really is that simple.
    • Engineer the risk. If the risk is too hard to eliminate, then workarounds should be created to reduce the risk to acceptable levels. To do this means you are aware of the risk, and that you know how to address the risks in at least one way. If you cannot do this, you should not be in our industry.
    • Operating procedures. Systems languages do useful things, and useful things include shooting yourself in the foot with the safety off. Learning how to write safe, useful code is vital (i.e. don’t create a system that has “Okay” for “Destroy data”). All useful systems must be operated safely, and this means skilled and trained system administrators and highly practiced procedures. You cannot legally outsource responsibility for your risk (otherwise contract killings would be acceptable), and thus you cannot expect low skill, low cost operators to manage something that is vital to your business.
    • Involve everyone in safety. If it’s going to happen to you, at least let folks participate in the process. In this case, consider a security@example.com address, a risk register, and so on.
    • Wear protective equipment (hard hats, etc.). All I know is that we let folks with no experience use computers. If we want to continue doing this, then …
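
    To make “we know the solution” concrete for XSS, here is a minimal, hand-rolled Java sketch of HTML output encoding. It is for illustration only – in production, use a vetted encoder such as the one in OWASP ESAPI:

      public class HtmlEncode {
          // Encode the five HTML-significant characters so untrusted input
          // is rendered as text instead of executing in the browser.
          static String forHtml(String in) {
              StringBuilder out = new StringBuilder(in.length());
              for (char c : in.toCharArray()) {
                  switch (c) {
                      case '<':  out.append("&lt;");   break;
                      case '>':  out.append("&gt;");   break;
                      case '&':  out.append("&amp;");  break;
                      case '"':  out.append("&quot;"); break;
                      case '\'': out.append("&#x27;"); break;
                      default:   out.append(c);
                  }
              }
              return out.toString();
          }

          public static void main(String[] args) {
              // prints: <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
              System.out.println("<p>Hello, " + forHtml("<script>alert(1)</script>") + "</p>");
          }
      }
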
  • I hate being proven right – mass pwnage

    Seriously. When will people (even security pros) ever learn? Below is an IRC log between a few security pros involved in w00w00.org and BlackOps.org, taken from an insanely long tour de force of a brag post that seemingly showed up folks from the big guns like Google, through security ISVs such as Core Security, to several security pros that I truly admire. I am not perfect, and honestly, I feel for these folks as it could happen to me – but weak passwords? OMG! Passwords seem to have cost one of them a great deal of money and time, irreversible data loss, and now the involvement of law enforcement (update – see the comments: this log is from the 1990s; I’m so duh that I missed that bit, but it still proves my point that passwords have sucked for a long time):

      [14:41] <@rkl> shit.
      [14:41] <@rkl> whoever broke into blackops.org
      [14:41] <@rkl> when we caught them
      [14:41] <@rkl> they began rm filesystems
      [14:41] <@rkl> and removed my only copy of some photos i had of me and my
              fiance'
      [14:42] <@rkl> that i had up there for like 2 days while i reinstalled my OS
      [14:42] <@rkl> she's going to be sad about that
      [14:44] <@nobody> ur shitting me
      [14:44] <@nobody> who broke in?
      [14:44] <@rkl> we know.
      [14:44] <@rkl> luckily they were incompetent
      [14:44] <@rkl> however
      [14:44] <@nobody> bunch of savages in this town
      [14:44] <@rkl> because they tried to use blackops as a platform to launch
              attacks against a few corporations
      [14:44] <@rkl> now the FBI is involved
      [14:45] <@nobody> wonderful
      [14:45] <@rkl> me and murray couldnt' give a rat's ass
      [14:45] <@rkl> we back up blackops 1 time a month
      [14:45] <@rkl> to cd, now dvd
      [14:45] <@rkl> they got in through a weak user passwd
      [14:45] <@rkl> cause there were near 100 users
      [14:45] <@rkl> just normal users, so they didn't practice good security with their passwds
      [14:45] <@nobody> typical
      [14:46] <@rkl> we've had to turn over everything to the FBI
      [14:46] <@nobody> a system is only as secure as its users

    In my previous post, my first item stated unequivocally that passwords are crap and will be first against the wall when the revolution comes. That revolution starts today.

    Everyone’s New Year’s resolution has to be to change their crappy computer password (or in the rare case, passwords) to a passphrase (20 characters or more), install a password manager, and change all those crappy passwords into long (20 characters or more) random passwords for every single service. If a service doesn’t let you use passwords longer than 20 characters, STOP USING IT. There’s something very dumb, wrong, and insecure about that service.

    I do not share a single password between any two services on the Internet. Changing a password is extremely simple for me because I DO NOT CARE about any of them. I do not type them, I do not remember them. They are all at least 20 characters long, and occasionally far longer if I care about the system in question.

    Additionally, I have no truthful answers for the weak Q&A security backdoor on any system I use. What is your first pet’s name? Just try to crack fazEha*u@eJAM#!#6DafRatrAm6Q before the universe ends. P.S. I generated that one just for this blog entry. Don’t waste your time trying it out anywhere.
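
    If you want to generate answers (and passwords) like that yourself, a minimal Java sketch follows. The alphabet is an arbitrary choice; the important parts are SecureRandom and the length:

      import java.security.SecureRandom;

      public class PasswordGen {
          private static final String ALPHABET =
              "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*";

          public static String generate(int length) {
              SecureRandom rng = new SecureRandom();   // a CSPRNG, not java.util.Random
              StringBuilder sb = new StringBuilder(length);
              for (int i = 0; i < length; i++) {
                  sb.append(ALPHABET.charAt(rng.nextInt(ALPHABET.length())));
              }
              return sb.toString();
          }

          public static void main(String[] args) {
              // 20 characters over a 70-symbol alphabet is roughly 122 bits of entropy
              System.out.println(generate(20));
          }
      }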

    Passwords are insecure, always have been, and always will be, and that goes double for the horrifically insecure Q&A backdoor insisted upon by many sites that should (and most likely do) know better. Passwords are unsuitable even for this blog. Folks who say passwords are free – or worse, “the norm” – are idiots and should be ignored whilst the rest of us get on with getting rid of them as Priority #1.

    CALL TO ACTION!

    If you are responsible for passwords on your site or service, the very first thing you must do when you get back to work is to call an urgent meeting with all stakeholders. The very first agenda item must be “We’re getting rid of passwords as of right now. How do we do that?” Don’t stop until you succeed. Your users will love you.

    If you are a victim of passwords, you should ask “Why are we still using passwords? When will you get rid of them?”

    Just do it. Do it now. I’m deadly serious.

  • Security trends for 2012

    1. Folks will continue to use abc123 as their password. They will then be surprised when they’re completely pwned.
    2. Folks will continue to not patch their apps and operating systems. They will then be surprised when they’re completely pwned.
    3. Folks will continue to use apps as administrator or god like privileges. They will then be surprised when they’re completely pwned.
    4. Folks will continue to click shit. They will then be surprised when they’re completely pwned.
    5. van der Stock’s immutable law of gullibility: Folks will continue to be sucked in by incredibly basic scams. They will then be surprised when they’re completely pwned.
    6. Folks, despite extensive and continuous evidence to the contrary for over 25 years, will continue to be sucked in by grandiose vendor claims (“buy X now, and you’ll be protected from X…”) in the unfounded belief that technological solutions can fix people problems. They will then be surprised when they’re completely pwned.
    7. Folks will continue to allow mobile and web apps to transmit their sensitive crap without any form of transport layer encryption. They will then be surprised when they’re completely pwned.
    8. Folks will turn on a firewall and think they’re safe. They will then be surprised when they’re completely pwned. It’s not 1995 any more. Never was.
    9. Folks will continue to run old crap, or allow old crap to connect to them. They will then be surprised when they’re completely pwned.
    10. Folks will continue to think that they will be safe if they just virtualize or cloud enable their crappy apps. They will then be surprised when they’re completely pwned.

    If we can’t learn from our most basic of basic mistakes, 2012 will be exactly like 1989–2011. And that’s sad.

    Because I hate solution-free hand-waving posts like the above, here are some basic solutions:

    • Adopt strong authentication TODAY – passwords have NEVER been appropriate.
    • Patch your crap.
    • Implement low privilege users and service accounts.
    • Don’t click shit.
    • Learn about basic phishing and scams.
    • Fire folks who post on Twitter or Facebook all day. You know who they are.
    • Don’t buy any product marked “Protects against APT”. If you do, fire yourself as you’re an idiot.
    • Only use products that use SSL. If you don’t know whether a product does, assume it doesn’t and find something that does (see the sketch after this list).
    • Evaluate your security needs with 2012 in mind – firewalls alone are a few sheep short of a full paddock.
    • Upgrade to the latest OS and apps. Not only will your users love you, it’ll be harder to attack you.
    • Protect data assets no matter where they are. The plumbing is unimportant.
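
    On the SSL point, a minimal Java sketch of the right default posture: HttpsURLConnection validates the certificate chain and hostname out of the box, and when a TLS error appears, the fix is to repair the certificate, never to disable the checks. The URL is a placeholder:

      import java.io.IOException;
      import java.net.URL;
      import javax.net.ssl.HttpsURLConnection;

      public class TlsCheck {
          public static void main(String[] args) throws IOException {
              URL url = new URL("https://example.com/");
              HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
              conn.connect();   // throws SSLHandshakeException on an untrusted cert
              System.out.println("Negotiated cipher suite: " + conn.getCipherSuite());
              conn.disconnect();
          }
      }
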
  • Hope

    One of my favorite TV shows is the Gruen Transfer, a show that deconstructs advertising. Don’t laugh – it’s the ABC’s #1 TV show.

    A few weeks back, one of the panellists revealed that there are two fundamental ways to sell things – fear, as in:

    Late 1980s anti-AIDS advert


    and hope, as in:

    Durex condom ad

    The panellist’s comment is revealing: fear sells well for a short while and then stops working. This was true of the AIDS campaign. It reduced HIV / AIDS infection rates to a low that hasn’t been repeated anywhere else on the planet since. Then the ads stopped, and there has been no replacement campaign for nigh on 25 years. You can guess where the HIV / AIDS infection rates are now: back up.

    We need to change the security industry from selling fear to selling (and delivering) hope. The results will last longer and have better long term outcomes.