In defense of Microsoft’s SDL

Richard Bejtlich says on Twitter:

I would like fans of Microsoft’s SDLC to explain how Win 7 can contain 4 critical remote code exec vulns this month

I am surprised that Richard – an old hand in our circles – can say such things. It assumes defect-free commercial code is even possible, let alone that everyone but Microsoft produces it. As much as we’d all like to have defect-free code, it’s just not possible. Software development is about risk reduction in a reasonable time frame at an acceptable price. The alternative is no software at all – either cancelled through cost overruns or delayed beyond usefulness. This is true of the finance industry, health care, government, mining, militaries, and particularly ISVs, even ISVs as well funded as Microsoft.

In the real world,

  • We create building codes to reduce fires, damage from water leaks, and damage from high winds, and to improve earthquake survivability. But houses still burn down, water floods basements all the time, tornadoes destroy entire towns, and unfortunately, many buildings are damaged beyond repair in earthquakes.
  • SOX requires organizations to have good anti-fraud and governance controls, yet IT projects still fail and companies still go out of business because senior folks did the wrong thing or auditors stuffed up.
  • PCI requires merchants and processors to handle credit card details properly, yet we still have credit card fraud (albeit much less than before PCI).
  • We engineer bridges not to fall down, but they still do.
  • The SDL bans certain calls outright. This should prevent common classes of buffer overflow. However, you can still write code like this:
/* No length parameter: this copies until it finds a NUL,
 * no matter how large src is or how small dest is. */
char *MyFastStrcpy(char *dest, const char *src)
{
   char *save = dest;
   while(*dest++ = *src++);
   return save;
}

Is code that calls this function likely to have buffer overflows? It sure is. Standards and better design eliminate stupid issues like the one above.
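For contrast, here is a bounds-checked sketch in the spirit of the SDL’s banned-API replacements (the real ones are strcpy_s and friends; the function name and exact semantics below are my own illustration, not the SDL’s prescribed API). It copies at most dest_size - 1 characters and always NUL-terminates, so an undersized destination gets a truncated string instead of an overflow:

```c
#include <stddef.h>

/* Illustrative bounds-checked copy (not a real banned-API replacement).
 * Copies at most dest_size - 1 chars from src into dest and always
 * NUL-terminates. Returns the number of characters actually copied. */
size_t MySaferStrcpy(char *dest, size_t dest_size, const char *src)
{
    size_t i = 0;

    /* Refuse to write anywhere if given nothing to write into. */
    if (dest == NULL || src == NULL || dest_size == 0)
        return 0;

    /* Stop at the destination bound or the end of src, whichever first. */
    while (i < dest_size - 1 && src[i] != '\0') {
        dest[i] = src[i];
        i++;
    }
    dest[i] = '\0';
    return i;
}
```

Copying "overflow" into a 4-byte buffer with this version yields the truncated "ove" rather than scribbling past the end of the buffer – exactly the failure mode MyFastStrcpy above leaves wide open.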

It’s not a perfect world.

The code MS works on nearly all dates back to before the SDL push in 2001. Windows Server 2008 has roots in code started in the late 1980s. They literally have a billion-plus lines of code with developers of all competencies poking at it. The idea that there should be zero defects is ludicrous.

Richard, if you’ve completed a non-trivial program (say, over 100,000 lines of code) that has not had a single security defect from the time you started writing it, you’re a coding god. Everyone else has to use processes like the SDL to get ahead. Those who don’t, and particularly those who do no assurance work at all, are simply insecure. This is risk management 101 – an unknown risk is considered “HIGH” until it has been evaluated and rated.

Let’s take the argument another way. If the SDL has failed (and I think it is succeeding), what would be the signs?

We know empirically that the number of security defects scales with lines of code. However, the number of critical remotely exploitable issues affecting Windows 7 is dramatically lower than XP’s at the same point after release – something like 10x lower. That’s an amazing achievement no one else in the entire industry has matched, even though Microsoft has published exactly how it was done.

What are the alternatives? Until Oracle saw the light a few years ago, they ran the hilarious “Unbreakable” marketing campaign. Sadly for them, they were all too breakable – see David Litchfield’s research for details. Not reviewing your code, or keeping dirty secrets secret, does not make things secure. Only through policies requiring security, standards that eliminate insecure constructs like dynamic SQL and strcpy(), careful thought about security during requirements, secure design, secure coding, code reviews, and penetration tests to validate the previous steps do you gain evidence of assurance that you are actually fairly secure. The SDL is a framework that puts that cycle into motion.

Oracle got it. For several years in a row now they’ve been shipping Critical Patch Updates fixing 30-40+ issues every quarter. I’d prefer four remotely exploitable issues once or twice a year to 40 every three months, thanks. But even so, I’m glad Oracle has jumped on the SDL bandwagon – they are fixing the issues in their code. One day, possibly in five to ten years, they’ll be at or near the level MS has been at for a few years now.

I agree that monocultures are bad. I use a Mac and I have been unaffected by malware for some time. But do I believe for even one second that my Mac is secure just because it’s written by Apple and not Microsoft? Not in a million years. Apple have a long way to go to get to the same maturity level that Microsoft had even in 2001.

All code has defects. Some code has far fewer defects than others, and right now the code with the fewest is what Microsoft has written in the last few years.

Comments

2 responses to “In defense of Microsoft’s SDL”

  1. Joshbw

    For fear of starting an entirely different religious war, Richard’s comment is akin to using a single day’s weather to dispute or support global climate change. In every field, in all categories, a trend is not derived from a single data point.

    What is interesting isn’t just that Microsoft has had a consistent downward trend in vulnerabilities since the inception of the SDL – and quantifiably it certainly has – but that the trend holds DESPITE increased sophistication and interest in finding vulnerabilities. In 2001 fuzzing was reasonably rare; since then static binary analysis has improved by leaps and bounds, and so on. In addition, there is now significantly more effort put into finding vulnerabilities, largely because there is now a very robust underground economy to pay for it. We have more people using better techniques, and yet Microsoft still manages a downward trend. That, right there, is evidence that the SDL works. That doesn’t mean it is perfect, but neither is it complacent (just look at the evolution of threat modeling to see MS’s efforts to continually refine the process).

    It is also incredibly transparent, which I don’t think they get credit for. I can’t think of any other company that discusses its security practices as openly. If you want to criticize the SDL, pick an aspect of it and level criticism. What is ineffective or counterproductive? Threat modeling? The banned API list? Fuzzing? Which parts of the SDL don’t reduce vulnerabilities and flaws? You can call them out specifically, rather than simply parading a single month’s critical numbers.

  2. vanderaj

    100% agree with everything you said.

    The other thing MS doesn’t get credit for is the basic research they’ve put in – the papers, articles, how-tos, Patterns & Practices guidance, and books they’ve put out.

    Basic research in defenses (countermeasures) is so incredibly rare. The unsung heroes at Redmond should have their names shouted from the rooftops at every security conference, not the poseurs with the latest 24-hour to 2-week unpatched 0day.
