On APT

Recently, RSA was attacked by adversaries who targeted their two-factor authentication fobs.

These devices have known MITM issues, but folks still used them because there was so little information out there to say that a better choice was required. RSA liked it that way.

RSA chose not to discuss the details of the attack, using the old furphy that disclosure would damage their customers (reality: it would damage RSA’s brand). RSA’s silence allowed

Advanced

Persistent

Threats

to execute the boldest cryptographic information warfare attack since Enigma.

RSA’s (IMHO) cowardly silence has actually damaged their customers in highly spectacular fashion. RSA told us nothing, so we couldn’t ask our clients to change vendors in a staged way, or to disable access, or put in other controls. We could guess, but business decisions are not made that way.

Now the brand damage to RSA will truly begin. This is the end of the simple RSA fob. Even if a better algorithm or fob is used, RSA are toast as no one will trust them any more, particularly in the sort of organizations that buy fobs by the pallet.

APT boosters have said vociferously – “see, it was APT!”. Yep, I agree. It’s one of the few times that truly worthy attacks are out in the open enough for us to get a small glimpse into what’s really going on.

Unfortunately, due to widespread abuse of the term, APT is the laughing stock of the information security world. The folks who use it knowledgeably can’t discuss why APT is any different to the other threats out there today. Everyone else has no clue.

I’ve seen CSOs give up, thinking that since these attackers are so advanced, surely we can’t protect against them, or they buy stuff marked “Solves APT TODAY!1!” when in fact, hard work is required. Nothing very hard, just simple stuff like input validating every field and not tolerating insecure software any more.
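
To be concrete about what I mean by validating every field, here is a minimal sketch (Python, with hypothetical field names and patterns) of whitelisting every inbound parameter instead of trusting any of it:

```python
import re

# One whitelist rule per field the app accepts; anything not listed here is rejected.
# The field names and patterns are hypothetical -- define one per parameter you take.
FIELD_RULES = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,32}$"),
    "email":    re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "amount":   re.compile(r"^\d{1,7}(\.\d{1,2})?$"),
}

def validate(params):
    """Return only the parameters that match their whitelist; reject everything else."""
    clean, errors = {}, []
    for name, value in params.items():
        rule = FIELD_RULES.get(name)
        if rule is None:
            errors.append("unexpected field: " + name)
        elif not rule.match(value):
            errors.append("invalid value for " + name)
        else:
            clean[name] = value
    if errors:
        raise ValueError("; ".join(errors))
    return clean

# validate({"username": "ajv", "email": "a@example.com", "amount": "9.99"})  -> passes
# validate({"username": "ajv", "comment": "<script>"})                       -> raises ValueError
```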

But for your average CSO, finding out if an application was developed in a secure fashion and that every parameter is validated is impossible. It shouldn’t be. But that’s not the main point of today’s post.

It’s moderately clear in the fog of active disinformation that the weaknesses used in the RSA, Sony, and PBS hacks are well known and easily exploitable. The solution is like losing weight: there is a simple approach that works, albeit slowly. It’s called eating the right amounts of good food for a year or two and exercising hard every day. Anyone who has tried to lose weight, including myself, knows that we really just want an APT-strength diet pill.

I think most of us in our industry will acknowledge that penetration testing has become “different” over the last few years, from literally shooting fish in a barrel with the most rudimentary tools or none at all, to requiring a fair bit of work and moving up the value chain to find interesting and exploitable issues the business cares about.

In terms of results, I think we’re still finding 10-20 things wrong in every app. Attackers need one. This is the attacker’s advantage. The number of weaknesses, the type of weaknesses, and the severity of the weaknesses are NOT “advanced” in any way, shape, or form in 95%+ of the code reviews and penetration tests I perform. The other 5% have been working with me for a while, are mature risk managers, and they’re hard to attack as a result.

But because of the hard-core mystique surrounding the use of the term “APT”, we’re seeing completely inappropriate uses of the term everywhere from anti-virus scanners through to security appliances that promise data loss protection but forget that the information security triangle is people-process-technology. Putting one in place doesn’t solve the other two, nor does it negate your responsibilities to put in appropriate controls that PEOPLE can live with to do their JOBS and make the business MONEY.

My Twitter icon is the famous “drive around the control” image:

Access controls are only for those with easy access

This is where folks promoting APT fail. I am not denying that the attackers who have found an end run around a widely known security control are

Advanced

Persistent

Threats

Anyone who targeted a particular firm, utterly broke a long-standing crypto system, and did everything else required to defeat the hardened controls of at least two military-industrial giants is worthy of the term APT.

Unfortunately, APT as a term is so brand-damaged in the infosec community (try saying it at a public event without being openly laughed at) that we have to choose a better one, one that marketers would never dream of using inappropriately. I don’t know what it is, but surely

Enemy Combatant

or

Soon To Be A Small Pile Of Glowing Ash (STBASPOGA, or the friendlier-sounding Strasbourg)

are right up there.

Worse still, the fact that these Strasbourgs really are APTs doesn’t mean that we should forget to do the hard work; instead it demonstrates the paucity of protective information security research. Some of you might remember me saying a year or two ago that too much attention is paid to those who hack, and not enough to those who defend. Strasbourgs should mean more dollars in pro-active research. We need to make it difficult to develop insecure software. We should make it easy to determine whether Acme’s latest release of their widgets is insecure. We should have metrics that easily demonstrate that insecure software costs more. We should make it legally untenable to ship insecure software, and give redress to consumers when their investments, privacy and intellectual property are violated due to stupid, simple weaknesses that we knew about in 1965.

Comments

4 responses to “On APT”

  1. Andre Gironda

    Adversaries are faster than we can deal with. No amount of secure software process is going to stop the influx of damage that is about to occur.

    I hate to say this, and I regret to say this, but Pen-Testing is the only thing that can save us. We need to cleverly combine scanning for vulnerable apps and their backends in order to re-configure them and/or shut them off temporarily. Most CMSes do not buy the business much. We need to shut down Marketing and all of their infrastructure, basically.

    I suggest documenting all of your IP space, public and private. Then you need to grab the SVNDigger lists, convert them to Skipfish wordlist format, sort them by a rough risk analysis — and run it with -c1 or 2 against all of your infrastructure.
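
    A rough sketch of how that first pass might be glued together (Python; it assumes skipfish is on the PATH and that the SVNDigger list has already been converted to skipfish wordlist format, and the target file and paths are hypothetical):

    ```python
    import subprocess
    from pathlib import Path

    # Hypothetical inputs: one target URL per line, plus a converted SVNDigger wordlist.
    TARGETS  = Path("targets.txt").read_text().split()
    WORDLIST = "svndigger-all.wl"          # SVNDigger list converted for skipfish
    OUT_DIR  = Path("skipfish-results")
    OUT_DIR.mkdir(exist_ok=True)

    for target in TARGETS:
        out = OUT_DIR / target.replace("://", "_").replace("/", "_")
        # -c 1 keeps the crawl shallow as suggested above; -W loads the wordlist,
        # -o sets the per-target report directory.
        cmd = ["skipfish", "-W", WORDLIST, "-c", "1", "-o", str(out), target]
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=False)   # keep going even if one scan fails
    ```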

    You also need to run nmap (preferably dogtown604 style), feed it to WhatWeb via tellmeweb, and combine that analysis with SHODAN and SearchDiggity data — looking for vulnerable packaged software such as CMSes and similar. Nmap can be made to be very fast. However, this could generate lots of false positives, so I recommend that you spreadsheet the results and manually verify them using BuiltWith in Chrome (main page or one URL down is usually all that is necessary). Like I mentioned, just shut down or shut off anything that’s vulnerable and scold the department who didn’t keep their CMS updated.
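
    A sketch of that nmap-to-WhatWeb hand-off (Python; a simple parse of nmap’s greppable output stands in for tellmeweb here, and the address range is hypothetical):

    ```python
    import re
    import subprocess

    # Quick sweep of common web ports, greppable output (the range is hypothetical).
    subprocess.run(["nmap", "-p", "80,443,8080,8443", "-oG", "web-sweep.gnmap",
                    "10.0.0.0/24"], check=True)

    # Pull out host:port pairs that came back open.
    targets = []
    for line in open("web-sweep.gnmap"):
        host = re.search(r"Host: (\S+)", line)
        if not host or "Ports:" not in line:
            continue
        for port, state in re.findall(r"(\d+)/(\w+)/tcp", line):
            if state == "open":
                scheme = "https" if port in ("443", "8443") else "http"
                targets.append(scheme + "://" + host.group(1) + ":" + port)

    # Hand the live web services to WhatWeb for fingerprinting (CMS detection etc.).
    if targets:
        subprocess.run(["whatweb"] + targets, check=False)
    ```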

    Finally, much of the problem with the recent waves of attacks is a misconfigured PHP.ini file, or a lightly audited database. Run nmap again, looking for vulnerable RDBMSes — http://bit.ly/jkgvbd — and get them configured in a better state. How many non-standard PHP.ini files could an organization possibly have? Perhaps a lot, but standardization is going to be a huge win, especially if we start using the CIS benchmarks (or at least as much as we can from them) as our standards. Server and application hardening can be done with Group Policy for Windows systems and Chef or Puppet for Unix systems. It’s really that simple — and while you are there, configure OSSEC agents to read ModSecurity audit files with the OWASP Core Ruleset in monitoring-mode-only and send the OSSEC system, app, and file monitoring activities to a SIEM, such as AlienVault, Novell, Q1 Labs, or ArcSight. Staff a SOC that knows your SIEM, OSSEC, and ModSecurity architecture — along with Skipfish, Nmap, and WhatWeb.
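
    To illustrate the PHP.ini standardization point, a throwaway checker might look like this (Python; the directives and expected values are an illustrative shortlist of mine, not the CIS benchmark):

    ```python
    import sys

    # A few php.ini directives and the values a hardened baseline would expect.
    # Illustrative only -- a real standard would come from the CIS benchmark.
    EXPECTED = {
        "allow_url_fopen":   "Off",
        "allow_url_include": "Off",
        "display_errors":    "Off",
        "expose_php":        "Off",
        "register_globals":  "Off",
    }

    def audit_php_ini(path):
        """Compare one php.ini against the expected baseline and list deviations."""
        seen, findings = {}, []
        for raw in open(path):
            line = raw.split(";", 1)[0].strip()            # drop comments
            if "=" not in line:
                continue
            key, value = (part.strip() for part in line.split("=", 1))
            seen[key.lower()] = value
        for directive, wanted in EXPECTED.items():
            actual = seen.get(directive)
            if actual is None:
                findings.append(directive + ": not set (default may be unsafe)")
            elif actual.lower() != wanted.lower():
                findings.append(directive + ": " + actual + " (expected " + wanted + ")")
        return findings

    if __name__ == "__main__":
        for problem in audit_php_ini(sys.argv[1]):
            print(problem)
    ```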

    At the next iteration evolution of InfoSec 101 Six Sigma, you can then concentrate on application security from a standardization perspective. Just make sure that Ops won’t put running code on any production server unless it matches the standard. Appdev teams need to be using Django-Security if they do Python, ESAPI and PhpSecInfo if they do PHP, ESAPI if they do .NET (plus WPL on Codeplex) and Java Enterprise, and similar practices for other language use. There needs to be approval of the use of anything outside of those components and the base class libraries. External components must be standardized so that they can be assessed. Standardize first and assess second.

    However, in order to conduct proper application assessments today, a lot of work and expensive toolchains are necessary. While selecting and standardizing on third-party components — make sure that these are supported in several commercial static analysis source-sink databases before you buy. Otherwise, there is no reason to buy them — have consultants customize their own in-house static analysis tools instead. This will also push the SAST vendors to support your needs before you buy.

    Do the CBA, CEA, BEA up-front work before you invest in SAST. When your app portfolio reaches a state where the cost-benefit, cost-effectiveness, and benefit-effectiveness reaches a certain threshold — and you can afford to buy at least 3 major SAST engines (assuming you mostly deal with dynamic and managed code) — then it is time to make a move. If you rely on unmanaged code such as C/C++, then you will need 5 commercial SAST engines, and thus the standardization and other appsec control efforts will be even more critical.

    I don’t suggest investing in commercial WAF or commercial DAST technology at all. These are wasted dollars. If you really want some of the features that Appscan Standard Edition or HP AMP provide — then look at Arachni, W3AF, ZAP, The Dradis Framework, or Tamper Data XML to provide them. Yes, this may require a bit of glue code — so it is critical to hire pen-testers for your CISO or SOC teams that can develop this glue code. The greatest benefit of open-source DAST is the ability to keep up with adversaries by developing specific needs — as well as to mine past pen-test and DAST data sources. The Dradis Framework is an excellent place to consolidate this data, as it takes XML in and supports XML out — in addition to other output sources such as Mediawiki.
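
    To give a feel for that glue code, here is a schema-agnostic sketch (Python) that walks a directory of XML reports from whichever DAST tools were run and tallies finding-like elements per report, the kind of summary you would then push into Dradis or a wiki. The element names are a heuristic of mine, not any tool’s actual schema:

    ```python
    import xml.etree.ElementTree as ET
    from collections import Counter
    from pathlib import Path

    # Element names that commonly denote a finding; purely a heuristic for this sketch.
    FINDING_TAGS = {"issue", "vulnerability", "alert", "alertitem", "finding"}

    def tally_findings(report_dir):
        """Count finding-like elements in every XML report under report_dir."""
        summary = {}
        for report in Path(report_dir).glob("*.xml"):
            counts = Counter()
            try:
                root = ET.parse(report).getroot()
            except ET.ParseError:
                summary[report.name] = "unparseable"
                continue
            for elem in root.iter():
                tag = elem.tag.rsplit("}", 1)[-1].lower()   # strip any XML namespace
                if tag in FINDING_TAGS:
                    counts[tag] += 1
            summary[report.name] = sum(counts.values())
        return summary

    if __name__ == "__main__":
        for name, total in sorted(tally_findings("dast-reports").items()):
            print(name + ":", total)
    ```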

    I also suggest some light application of training skills for pen-testers and appdev teams. It would be wise to invest in wiki-like platforms such as SD Elements or SI TeamMentor. Additionally, some web-based training from SafeLight Security or a similar service would help, especially when combined with the Offensive-Security online training. However, giving all of your pen-testers and appdev teams full-access to Books24x7 and SafariBooksOnline would be one of the greatest investments you could possibly make. Lead by example with your teams to share Twitter, Reddit, and Google Reader feeds — as well as internal blogs, wikis, and social networks.

  2. Seona

    Would be interested in hearing from you if you are interested in writing guest articles for Frisk.

  3. vanderaj

    @Seona

    Hi Seona,

    I’m extremely flattered, but I don’t think you’ve checked how infrequently I update my own blog! 🙂

    Love to chat of course, but if you’re after an article a week, it’s been a long time since I wrote a monthly column for a major newspaper’s IT section.

    thanks,
    Andrew

  4. AbiusX

    Gironda,
    Your way of security is exactly what has left the information world so full of holes.
    Please tell me: what kind of pen test could detect a crypto system weakness?
    Would you (as a CIO) rather hire two security experts to audit your development and software acquisition process, or spend ten times as much each month doing a full pen test?
