There are a lot of consultants (and clients) who know little to nothing about proper risk management. That’s not their fault – it was never taught in computer science or most similar courses, and those who do get good at it tend not to stay developers or security consultants. That’s a shame, because risk management has a lot to offer both consultancies and their clients when done properly.
The problem is that most consultants think only in terms of technical risk, and will happily assign “Extreme” ratings to things like server header information disclosure. Many clients actively campaign to reduce risk ratings – some for valid reasons, others not – and they will win if the ratings are wishful thinking or outright wrong. This could cost the organization billions of dollars if a HIGH risk is talked down to a LOW and accepted, when really it’s a MEDIUM to HIGH risk depending on the situation.
We as consultants have a responsibility to THINK about the findings we put into reports. Don’t be a Chicken Little, but don’t be bullied into downgrading genuinely bad risks either – you should be chosen for your honesty and integrity, not for producing convenient outcomes. Be open and honest about how you came to each risk decision, talk through the factors, and help the client understand and agree with the choices you’ve made. Don’t just stick “HIGH” in there – include the entire enchilada. Lastly, be reasonable when you’ve made a mistake, and make sure there are as few mistakes as possible, because they’re a huge reputation risk.
Clients have a responsibility to talk through the risk ratings so they fully understand each risk. All parties should agree to document the original risk, the discussion about it, and any revisions to the rating and/or the vulnerability. Maybe there’s a control being missed, or maybe there’s a misunderstanding of how easy the attack is to perform. Without that record, there’s no accountability. In the end, consultants should never change a risk rating without documenting the change.
How to improve the situation
I like the OWASP Risk Rating methodology. The primary reason is that two different consultants can come to the same result independently, which removes a lot of the subjectivity and argument from the equation. I like to include the entire calculation, as this lets clients repeat my work and understand why the rating turned out the way it did.
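The core of that calculation can be sketched in a few lines. This is a minimal, illustrative sketch of the OWASP approach – eight likelihood factors and eight impact factors scored 0–9, averaged, banded, then combined in a severity matrix – not a replacement for the official worksheet; the `NOTE`/`CRITICAL` labels at the matrix corners are my naming choice:

```python
def band(score):
    """Map a 0-9 average onto the OWASP LOW/MEDIUM/HIGH bands."""
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

# Overall severity matrix: (likelihood band, impact band) -> severity.
SEVERITY = {
    ("LOW", "LOW"): "NOTE",      ("LOW", "MEDIUM"): "LOW",       ("LOW", "HIGH"): "MEDIUM",
    ("MEDIUM", "LOW"): "LOW",    ("MEDIUM", "MEDIUM"): "MEDIUM", ("MEDIUM", "HIGH"): "HIGH",
    ("HIGH", "LOW"): "MEDIUM",   ("HIGH", "MEDIUM"): "HIGH",     ("HIGH", "HIGH"): "CRITICAL",
}

def owasp_rating(likelihood_factors, impact_factors):
    """likelihood_factors: eight 0-9 scores (threat agent + vulnerability).
    impact_factors: eight 0-9 scores (technical + business impact)."""
    likelihood = sum(likelihood_factors) / len(likelihood_factors)
    impact = sum(impact_factors) / len(impact_factors)
    return likelihood, impact, SEVERITY[(band(likelihood), band(impact))]
```

For example, `owasp_rating([3, 4, 5, 4, 4, 3, 4, 5], [5, 4, 6, 5, 4, 3, 5, 4])` gives a 4.0 likelihood and 4.5 impact, landing on a MEDIUM overall severity.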
There are issues with the OWASP Risk Rating methodology:
- It’s far too easy to generate “Extreme” risks. Extreme risks are really, really rare: they are company-ending, life-ending, project-ending shareholder value strippers and reputation destroyers. Think BP and the Gulf Coast. SQL injection at TJ Maxx was an extreme risk (despite them still being in business, it cost a great deal).
- It’s difficult to drive the numbers down to a “Low” rating even when you know the finding really is a Low. I basically take nine off the top, as I’ve never gotten a value less than nine. This helps a bit, but even then it’s a struggle.
- It’s hard to do manually. I use Excel spreadsheets, but you may want to automate it further.
- You must talk to your customers first. Otherwise you’ll need to leave out the business impact factors (financial, legal, compliance, privacy), as you will not be able to lock these in on your own.
- Impact values are not the same for an entire review. They change with the asset value/classification, and you will most likely have more than one asset value/classification in your review. There’s a difference between contexts, help files, PII, and credit cards. Document which one applied to each finding.
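One way to keep those per-asset impact values honest is to agree them with the client up front and record them as data. The asset classes and scores below are entirely hypothetical examples of the four OWASP business-impact factors (financial damage, reputation damage, non-compliance, privacy violation):

```python
# Hypothetical per-asset presets for the four OWASP business-impact
# factors (financial, reputation, non-compliance, privacy), 0-9 each.
# These values are illustrative only - agree on real ones with the
# client before the review starts.
BUSINESS_IMPACT_BY_ASSET = {
    "help files":   [1, 1, 0, 0],
    "session data": [3, 4, 2, 3],
    "PII":          [5, 6, 7, 7],
    "credit cards": [8, 8, 9, 7],
}

def business_impact(asset):
    """Average the agreed business-impact factors for one asset class."""
    factors = BUSINESS_IMPACT_BY_ASSET[asset]
    return sum(factors) / len(factors)
```

With presets like these, two findings against different assets automatically get different impact scores, and the report can cite which classification applied.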
That said, the OWASP Risk Rating methodology is far better than pretty much everything else out there for web apps. CVSS is not suitable, as it’s designed for ISVs who produce packaged software – and that doesn’t describe most enterprise, hobby, or open source projects. If you need to produce AS/NZS 4360-style risk ratings, CVSS is not going to cut the mustard.
Risk Management 102
We spend a lot of time arguing with some clients because we haven’t thought through our risk ratings carefully enough – or worse, just reused the one from the last report. No two clients and no two apps are ever the same, so the risk ratings in each of your reports MUST be different. Spend the time to do it right the first time, or you’ll spend far more time later when your client argues with you. And they may have a point.
- Try not. Do… or do not. There is no try. The likelihood rating is solely about the likelihood of the MOST SKILLED threat agent SUCCEEDING at the attack / weakness / vuln you’ve described.
- The impact rating is solely about the WORST impact of the attack / weakness / vuln using the threat agent you’ve described.
For example: you have a direct object reference in the URL and no other controls – my Mum could do this attack. The impact is off the charts, and so is the likelihood. Just because a n00b consultant with an automated tool is unlikely to do more than annoy the web server doesn’t mean that’s the threat agent you should document.
If you came so, so close to exploitation and you just know it could be bad, but you failed miserably after several hours, exploitability has to be set to 0. Seriously. The impact has to be low too, as you haven’t proven any impact. To document anything else is wrong. I’m happy for folks to write up how close they came and draw attention to it in the executive summary and the read-out, but a high likelihood says you’re lame, and a high impact says you’re a Chicken Little. Don’t do it.
If you’re unsure, map out different attackers (n00b consultants with automated tools, script kiddies, organized crime, web app sec masters), work out how likely each is to succeed at the attack, and then work out what the impact is for each of these threat agents. Do the math and use the most likely threat agent with that agent’s impact. Don’t under- or over-blow it – if only a web app sec master could rip a copy of the database with both hands tied, the impact of that theft is high, but the likelihood may well be low.
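That mapping exercise can be laid out as a small table and picked over mechanically. The sketch below uses just the four OWASP threat-agent likelihood factors (skill, motive, opportunity, size) for brevity, and every score in it is a made-up illustration, not a recommended value:

```python
# Hypothetical worked example: for each threat agent, the four OWASP
# threat-agent factors (skill, motive, opportunity, size; 0-9 each)
# and the impact you'd expect IF that agent succeeds.
AGENTS = {
    "n00b with a scanner": ([1, 3, 7, 9], 2.0),
    "script kiddie":       ([3, 5, 7, 9], 4.0),
    "organised crime":     ([6, 9, 5, 5], 8.0),
    "web app sec master":  ([9, 3, 4, 2], 9.0),
}

def most_likely_agent(agents):
    """Return the agent MOST LIKELY to succeed, with that agent's
    likelihood score and that agent's impact - not the scariest one."""
    name = max(agents, key=lambda a: sum(agents[a][0]) / len(agents[a][0]))
    factors, impact = agents[name]
    return name, sum(factors) / len(factors), impact
```

With these illustrative numbers, organised crime comes out as the most likely successful attacker (likelihood 6.25), so it’s their impact of 8.0 that goes into the rating – not the sec master’s 9.0, and not the n00b’s 2.0.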
Lastly, don’t go the terrorist route. You are more likely to win lotto, fall out of your new private plane from 30,000 feet and then get killed by lightning than you are ever likely to be a victim of terrorism. Chicken little scenarios work once or twice, but you’re just wasting everyone’s time and scorching the earth for all those who follow you.