Category: Security

  • Demise of Blue Security

    According to The Register, Blue Security has decided to close up shop.

    http://www.theregister.co.uk/2006/05/17/blue_security_folds/

    The problem with Blue Security’s model is that a single attacker with sufficient resources can bring it down. Blue Security had it nearly right: if enough people took the spammers up on their offer to de-list users from spam registries, the spam issue would start to become manageable – at least until it becomes law, all over the world, that spammers are jailed and refused access to the Internet forever.

    We need to set up a (de-)centralized place for spammers to check the “do not intrude” list without blowing their cover or exposing e-mail addresses, and a totally anonymous decentralized categorization effort without causing any harm to innocent bystanders (such as Tucows or Typepad).

    The primary spammer who took out Blue Security can be considered essentially an organized criminal, and committed criminal acts in taking out Blue Security. In general, fighting organized crime takes a lot of guts, as it can be quite dangerous: they have nothing to lose and live in generally lawless societies. These thugs are like extremely stupid gruff dogs – they must be shown exactly who the boss is, and it’s not them. If they require a good slap on the snout or worse for shitting all over the Internet, well, it’s not for us to do so – it’s for the local police and SWAT teams to do. And in my personal opinion, I’d love to see that on COPS instead of their usual fare of poor drugged-out wackos, who need social workers, not arresting.

    As I do not want any innocent bystanders, developers, moderators, ISPs (who are somewhat guilty), or key infrastructure targeted, I have thought about ways to protect as many stages of the life cycle as possible. I propose the following:

    Server Infrastructure

    Use newsgroups.

    The infrastructure already exists at nearly every ISP and is available read-only in many other places, allowing both the spammers and newsless ISP customers to participate. It is sufficiently de-centralized, replicates relatively well, and the attacks against it are already well known (post flooding, etc).

    Process:

    • The spammer would upload a batch file of e-mail hashes to a particular newsgroup (say alt.evil.spammers.must.die) with a response address to which users’ clients will respond with a lightweight message. Hashing prevents the addresses from being exposed to other spammers.
    • Individuals run a plugin on their mail application, which parses each message posted to this newsgroup.
    • If the hash of one of the plugin’s protected e-mail address(es) is found, the plugin will ping the response address in the batch file.
    • The ping would traverse a peer-to-peer network set up via the plug-ins. All of the plug-ins communicate via a de-centralized model to prevent the sorts of attacks which might take it out (flooding, rubbish pings, etc). After a random number of hops, the last random peer delivers the takedown notice to the properly categorized spammer page.
    • The spammer receives the do-not-intrude ping request and takes the address out of their lists.
    • Problem solved for “less evil” spammers.
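
    The hash-matching step above can be sketched as follows. This is a minimal illustration in Python rather than a real plugin; the addresses, batch contents, and function names are all invented placeholders:

```python
import hashlib

def hash_address(email: str) -> str:
    """Hash an e-mail address so batch files never expose raw addresses."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def addresses_to_unsubscribe(protected, batch_hashes):
    """Return the protected addresses whose hashes appear in a spammer's batch."""
    return [addr for addr in protected if hash_address(addr) in batch_hashes]

# Hypothetical batch a spammer might post to the newsgroup.
batch = {hash_address("victim@example.org"), hash_address("other@example.net")}

hits = addresses_to_unsubscribe(["victim@example.org", "safe@example.com"], batch)
# hits == ["victim@example.org"]: only the listed address triggers a ping.
```

    Only the hash ever travels over the newsgroup, so other spammers scanning the group learn nothing about addresses they do not already hold.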

    What to do about more evil spammers

    Escalate. Spammers who refuse will get 2x … 4x … 8x the number of “unsubscribe me” requests from various anonymized addresses, spread over a few days. In time, they’ll learn: take the e-mails out and the hits go down.
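
    The doubling schedule is trivial to sketch; the base rate and number of rounds here are arbitrary illustrations:

```python
def escalation_schedule(base: int, rounds: int):
    """Double the number of unsubscribe requests each round: 2x, 4x, 8x, ..."""
    return [base * 2 ** r for r in range(1, rounds + 1)]

# One ignored request escalates over three rounds spread across a few days.
escalation_schedule(1, 3)  # [2, 4, 8]
```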

    Categorizing spam

    The plugins will need to know how to deal with spam, and to do that, it must be categorized, its URLs extracted, and regulatory reports filed (i.e., BSA for pirated software, FDA and other drug regulators for meds, etc).

    However, as Blue Security demonstrated, being the centralized categorization source of truth does not work. That’s soooo Web 1.0. Let’s move on to a decentralized, people power version for several reasons:

    • If it’s a small group, they would be in severe danger. I don’t believe we could protect this model.
    • If it’s a moderate-sized group, taking out even one or two could cow the rest. This is how organized crime works today.
    • If it’s the entire group, the risks are spread out over a large population, and taking out even a small number of users will not affect (and indeed will drive) membership.

    Being in a large anonymous group makes it harder for attackers to find or attack anyone. If no one is a permanent moderator / categorizer and can always decline the task, taking out any number of individuals simply won’t work – the service continues and the spammers continue to get hit with unsubscribe requests. This makes it impossible for the most mobile and ruthless of spammers to take effective action against the network, and is a first-hand demonstration of people power.

    Each node is randomly chosen to be a categorizer for a few hours, as per Slashdot’s moderation model. If a user decides to participate, the nearby network will hear about it, and new uncategorized spams will be sent to current categorizers.

    • The hash of each spam is noted to remove dupes, and this is spread everywhere. This helps prevent the same spam being categorized more than once.
    • If the categorizer can’t read the spam (say it’s in another language), it can be categorized as being in a particular language and re-forwarded to peers who accept that language.
    • Let’s make it reliable via voting. Completed categorizations are offered to three other plugin users for peer approval. If at least two of the three peers agree with the categorization, it’s accepted and spread throughout the network.
    • If the spam is not categorized, for safety’s sake it is not acted upon, but instead spread to another node when the node’s time is up. This stops big spams from being lost in the system. However, there should be a maximum age for spams to prevent overload; spammers usually send out more in a few days’ time.
    • At install time, node owners can say they are “advanced” nodes when their turn comes to be a categorizer. Each approved categorization will be looked at by one advanced node to see if it has enough information to detail the source. Let’s get those zombies closed down – find and report each and every zombie to the ISP abuse queue. Do this politely and in batches so they can deal with a bot fleet in a manageable way. ISPs are not our enemies – they need to be helped to stop the net being abused. Hopefully the ISPs will get the idea and close down outbound SMTP from the zombies, or even better take the customers offline until they’re cleaned up.
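
    The dedup-and-vote mechanics might look something like this sketch (the two-of-three threshold matches the voting rule above; everything else is an invented illustration):

```python
import hashlib

seen_hashes = set()

def spam_id(spam_body: str) -> str:
    """Hash the spam so duplicates can be recognized network-wide."""
    return hashlib.sha256(spam_body.encode()).hexdigest()

def is_duplicate(spam_body: str) -> bool:
    """True if this spam has already been seen; otherwise record it."""
    h = spam_id(spam_body)
    if h in seen_hashes:
        return True
    seen_hashes.add(h)
    return False

def vote_accepts(votes, needed=2):
    """Accept a categorization when at least `needed` peers agree."""
    return sum(votes) >= needed

first = is_duplicate("Cheap meds now!!!")     # False: first sighting, recorded
second = is_duplicate("Cheap meds now!!!")    # True: dupe, suppressed
accepted = vote_accepts([True, True, False])  # True: 2 of 3 peers agree
```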

    An alternative I had thought of – a network of resilient web apps allowing anonymous volunteers to contribute categorizations, with voting to ensure that only good categorizations get through – wouldn’t work. Spammers would just DDoS it out of existence.

    Another alternative is to use another newsgroup to distribute categorizations. I like this as Plan B in case the attacker manages to kill the P2P network. However, as more headers are available, the attackers may be able to identify key nodes, particularly categorizers, so I don’t really think this is a safe idea.

    Attack models

    PharmaSpammer basically threatened to take the Internet out. As it’s essentially protected infrastructure these days (with no real SLA, though), doing so would provoke a real law enforcement retaliation, as well as get ISPs to finally take responsibility for their zombie customers and get them the hell off our Internet. So let’s discard this attack – the spammers want to spam, and to do so, they need the Internet to be more or less working.

    Let’s look at more realistic attacks:

    a) Attacking news servers. DDoSing each and every news server in the world is just not likely, especially if ISPs make sure their news servers can only be reached by their own customers (which is typical today).

    b) Attacking news groups. Post flooding can be dealt with via automated moderation of articles; this is a very old attack, and there are known methods to deal with it. Automatic cancellation is the wrong approach as it creates 2x replication traffic. Lastly, attackers could add huge quantities of fake hashes to slow down client plugin processing of the newsgroup, or to force the news server to archive legitimate and reasonably fresh articles to conserve disk space.

    c) Attacking the peer to peer network. The RIAA has yet to make huge inroads into their little P2P problem, so I think with a bit of research, we can come up with a manageable P2P model for our purposes. Things to worry about include rogue clients injecting rubbish, flooding, rogue clients looking for identifying information, and rogue or real clients injecting “unsubscribe” URLs to attack competitors. These issues would need to be looked at.

    d) Attacking the categorization volunteers / moderators. This is definitely a problem, but if there are enough moderators (say 100 or 150 volunteers), attacking one or two of them is that much less likely to make any difference to the spam meisters – they will still be receiving one cancel message for each spam they pump out.

    e) Attacking the plug-in development. I propose that, like the spread of DeCSS or Linux, this could be done in a relatively de-centralized fashion – let’s propose a standard for the P2P protocol, and then allow as many implementations as possible. Individual implementations could be distributed via P2P networks, with known-good hashes published on the more trusted sites to prevent malware being issued. Obviously we need open source implementations, as well as allowing vendors to integrate this feature into their fat apps.

    I’d be really interested in peoples’ thoughts on this one. We can’t let organized crime win this one.

  • Moronic security is a risk in itself

    There must be a special breed of moron common in the physical security world. Much is made of how secure many office buildings are, but this is not my experience as a gifted tailgater.

    Today, after 14 months of waiting, I managed to get a car park in my building. I am chuffed as it is nice to have a fast easy way to get to work. I know I am lucky** as many people would like to park there, but there’s a … 14 month waiting list. That’s not why I write.

    My spot is on level 2. I work on level 3. The benefits of parking so close should include not having to go out in the crappy weather – what with a short lift ride between the two floors. However… moronic “security” comes to the rescue and ensures that this is not to be.

    Upon entering the carpark in my car, I can only exit via the lifts as the emergency exits are alarmed. I enter the lifts, swipe my card and press “3”. Nothing happens. It turns out I have to press “G” (ground in Australia = “1” in the US), and exit the building completely, walk *all* the way around it, re-swipe my access card to re-enter the building … walk to the same lifts, and then press “3”. I am not making this up.

    It makes no sense. I am authorized to be in the car park *and* the building. But I can’t transit one floor.

    [Image: kurios119.jpg]

    (Image from Bruce Schneier’s excellent blog. See links to the right and subscribe to his blog and Cryptogram!)

    This sort of stupidity makes people disrespect actual security measures. Until we can eliminate morons in the “security” industry, real security will always be worked around. We’re all seen as fools until we rid ourselves of fools.

    ** For environmentalists reading this… I have a tiny fuel efficient car (Citroen C3), and I carpool with my girlfriend, so it’s not just a single person clogging the roads. It’s two people clogging the roads and dirtying the air. However, it’s faster and cheaper for us to drive than to take public transport, even when you take into consideration the cost of parking, fuel, depreciation, insurance, and other running costs. Peter Batchelor needs to improve public transport in the west of Melbourne. It should *never* be cheaper or faster to drive in compared to public transport. But whilst it is, I’ll drive and park at work.

  • Service Orientated Architecture (SOA) Security

    Recently, I’ve been doing a fair amount of work in the SOA area. It’s funny how many folks want to expose ancient code directly to untrusted third parties.

    All is not well in the SOA space, and it’s important to understand the risk of web service enabling calls to “trusted” systems. That code is generally not written to handle input from malevolent attackers – it was designed to be called from internal staff who you have a strong legal relationship with and all the motivation in the world to keep their jobs.

    This slide pack was intended for the April Melbourne OWASP chapter meeting, and it’s a basic taster of the stuff I’m going to be including in the forthcoming OWASP Guide 3.0.

    Securing SOA (927 kb, PDF)

  • greebo.net blacklisted by various terrorist organizations

    I am pissed.

    My server has been blacklisted by various spam blacklist sites… because my nameserver (something I do not control) and my netblock are owned by someone the RBLs don’t like.

    I found out today that our hoster, Quantum Tech, is owned by a convicted spammer. But unless you rub shoulders in the dark and dingy vigilante world, it’s actually pretty hard to find out that Quantum Tech and the spammer are related. Global Web have been convicted and so they must have been forced to pay up, or else QT wouldn’t still be here. My view is that once justice has been handed out, life goes on. So like IBM and Microsoft, anti-trust convicts and other nefarious firms, once the punishment is handed out, people continue to buy from them even though their reputation has been sullied. Except that I had no idea that QT were dodgy. Saying that though, QT have provided us pretty good service for the price, and the performance of the server and network has been fine, unlike our previous hosters.

    The RBLs cannot act like some cowboy sheriff from the wild west and continue their jihad against their mortal enemies. The law has had its say. If further crimes are committed, then it’s still the law’s turn, not theirs.

    But that’s all an irrelevant red herring – my problem is not with Quantum Tech. It’s with the RBL vigilantes.

    The terrorists at Spamhaus and SPEWS are blocking my nameserver and my dedicated host’s netblock. This basically means that for ISPs – who like stupid sheep are using these services – password reset e-mails from our site do not work reliably due to the blacklisting. Despite the fact WE DO NOT and NEVER WILL SPAM. If the RBLs had proof that our IP or host spammed, then sure, I could understand that, but to be tarred with the same brush as someone we don’t control and don’t care to know anything about is just stupid. It’s like convicting all the people in a state because one or two people in that state actually did commit a crime. Convicted by people who appointed themselves judge, jury and executioner, with no appeals.

    I’ve had two communications so far, both dismissive of my complaint. It’s harder to get off an RBL than it is to get off a spammer’s mailing list using the “Remove me” link. As these RBL folks act illegally, there’s no natural justice, i.e., no recourse to arbitration, and no mediation or dispute resolution services. Why would there be? They impose their view upon the world, damn the rest. It’s creating a nuclear wasteland. More to the point, their actions are illegal.

    I did some research to see what laws they are breaking in Australia. The one that caught my fancy is the Cybercrime Act 2001, which amends a bunch of criminal laws to make DoS and other attacks illegal. It’s pretty comprehensive and balanced for the most part. I had a hand in getting a few changes in there whilst I was president of SAGE-AU – we responded to the Senate enquiry to get system admins protected whilst doing their job, as we remember what happened to Randal Schwartz, and I personally wanted to make sure that the clauses previously protecting only Commonwealth computers were extended to all computers in Australia.

    The section which I draw your attention to is 476.2:

    476.2 Meaning of unauthorised access, modification or impairment

    (1) In this Part:

    (a) access to data held in a computer; or
    (b) modification of data held in a computer; or
    (c) the impairment of electronic communication to or from a computer; or
    (d) the impairment of the reliability, security or operation of any data held on a computer disk, credit card or other device used to store data by electronic means;

    by a person is unauthorised if the person is not entitled to cause that access, modification or impairment.

    (2) Any such access, modification or impairment caused by the person is not unauthorised merely because he or she has an ulterior purpose for causing it.

    (3) For the purposes of an offence under this Part, a person causes any such unauthorised access, modification or impairment if the person’s conduct substantially contributes to it.

    Therefore, any unauthorized impairment, even for supposedly good purposes like spam prevention, is illegal unless authorized. And for my system, you require my authorization, and I’m not going to give it. So effectively, SPEWS and Spamhaus are acting criminally if they block any Australian IP address or system controlled by Australians.

    But far, far worse than this is the sheer arrogance demonstrated by their faceless peons who are too cowardly to sign their own names to their e-mails.

    I asked reasonably firmly but politely that they remove their blocks:

    Hi there,

    You have placed my sites into an overreaching netblock, affecting aussieveedubbers.com, a site containing 4500 VW car nuts. None of the sites hosted on my dedicated server under my direct control are spam boxes. I detest spam, but you’re not helping … at all.

    Please carve out two IP addresses from this listing:

    69.31.39.108 – aussieveedubbers.com
    69.31.39.109 – greebo.net vanderstock.com codesqa.com

    Our nameservers will also need unblocking.

    ns1.wickedtechnology.net 69.31.33.67
    ns2.wickedtechnology.net 69.31.33.68

    If your aim is to reduce spam, you are not doing it by blocking my site as we don’t spam. All you are doing is making me very angry. For the last few months, I have been hand processing 10 or 15 password resets per day that would have otherwise been handled automatically. That’s right – your useless service is blocking 10 or 15 legitimate e-mails a day. Good work, fellas. That’ll really knock the spam problem on the head.

    If you do not fix this up within 24 hours, further action will be taken.

    Here’s their response:

    “We have placed?” How long have you been hosted on these IP addresses?

    This range was listed on Feb 05, 2004 – almost exactly TWO YEARS AGO.

    We’d suggest your talk to Mike Van Essen and his “Quantum Tech Pty Ltd”, the owner of these IP addresses, why he does not tell people, 1) that they are listed by us and others, and 2) why they are listed.

    One must have due diligence as to where one hosts.


    Regards,

    The Spamhaus Project

    Despite their arrogant imputation that we are clueless noobs (“due diligence as to where one hosts”), we in fact checked out WebHostingTalk (there’s one link to “Quantum Tech” back in 2002), and read over the AUP and conditions carefully. The price was right for a dedicated host for our non-profit car forum.

    But it is completely unreasonable to think that we should perform a criminal background check against the ISP. Could you imagine every customer doing this to AOL, OptusNet, BlackBerry, or Verizon? Don’t make me laugh!

    But it still misses the point – I DO NOT SPAM. Therefore, Spamhaus and friends should get their hands out of their backsides and remove their black list. Spamhaus and friends are causing us financial loss as users can’t register on our site and they can’t recover their passwords if they forget them. Spamhaus and friends are performing criminal and illegal denial of service / impairment of our legitimate service to our Australian users provided by a legitimate site run by Australians.

    If this is not resolved soon, I will be reporting them to the police. I do not take such action lightly, but I have no choice. If you’re an admin, there’s no better time to ditch the awful RBLs and go with something that works. I will also do the ring-around to my mates at various large ISPs and make sure they are not using these services. Nothing would make me happier than making SPEWS and Spamhaus powerless.

    If I were Spamhaus or SPEWS, I’d be looking seriously at why their efforts have failed. I get a bucketload of spam every day, so their approach has obviously failed miserably. As someone who respects the scientific method, you need to evaluate your own methods and results so you can improve them over time. I personally believe that RBLs are ineffective and need to be scrapped. But most of all, they need to respect the rule of law and work within their country’s anti-spam and cybercrime laws. Those laws are effective. RBLs are not – their days are over.

  • Ajax Security Presentation up

    Here’s the quick and dirty preview of the new Ajax chapter of the Guide 2.1. It’s also some of the first real guidance anywhere on Ajax security – period. It was interesting to find so many apps adopting Ajax, but so little information on how to secure it.

    If anyone wants to proof the new chapter, please join the OWASP Guide development.

  • PHP Security Architecture: SABSA approach

    There are only a few acknowledged industry security architectures. SABSA (best documented in Enterprise Security Architecture by Sherwood, Clark and Lynas) is probably the best known.

    The various artifacts from this architecture include:

    Enterprise security layers

    SABSA Security Architecture

    Each of these layers needs to be thought about in a considered way:

    (Business) Drivers

    Why do you want X / How will it be used / Who will use it / Where should it be located / When will it be used?

    Answering these questions helps to understand whether something already exists (with PHP, most likely it does), and to identify the communities we need to talk to in order to understand their needs and desires.

    Then a risk assessment can be carried out to determine the relative risks of each area, and understand likely vulnerabilities based upon existing exploits.

    Occasionally, this process will pick up missing areas of functionality in PHP. These then end up on the roadmap for later versions.

    Conceptual Layer

    Training and awareness – DR / BCP – audit and review – authorization – administration – incidents – standards, etc

    Often these areas are well covered. The trick is to bring them under the one roof and ensure we’re all driving in the same direction. Some of the issues in a standard security architecture simply don’t apply to a language, which is cool as it means less work.

    Logical Layer

    Policy – classification – security services management – interop standards (WS Security et al) – audit trail, etc

    This is usually left to the programs written in PHP, but PHP needs to provide these services. The OWASP Guide provides a great deal of best practices, so the main activity here is to determine if the standard PHP frameworks contain adequate API in each area. For example, PHP is sadly lacking a secure audit class.

    Physical layer

    For PHP, this is mostly about configuration, but also the rawest possible implementation of the security triumvirate: confidentiality, integrity, availability. Again, PHP may need to grow to allow all of the areas here:

    certificate management

    Component layer

    A major win for security architectures in the last few years is the move away from crunchy on the outside, soft on the inside “edge” hardening towards trust boundaries. Identifying data flows from trusted components to other less trusted components is key to understanding the security risk.

    Therefore, this can be used to identify those API which normally perform this transition on behalf of programs, such as echo/printf/ IO in general, and so on. Each of these major trust boundaries needs to be investigated to ensure that there is a safe way to make the transition, whilst a raw / unsafe way remains for those few programs that need the raw / unsafe way.
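
    A Python sketch of the safe-by-default boundary idea (not a proposed PHP API – `emit` and its `raw` flag are my own invention): output crossing the trust boundary is escaped unless the caller explicitly asks for the raw path.

```python
import html

def emit(text: str, raw: bool = False) -> str:
    """Cross the trust boundary to the browser: escaped by default,
    raw only when the caller explicitly opts out."""
    return text if raw else html.escape(text, quote=True)

emit("<script>alert(1)</script>")        # "&lt;script&gt;alert(1)&lt;/script&gt;"
emit("<b>trusted markup</b>", raw=True)  # passes through unchanged
```

    Making the unsafe path the one that requires extra typing is the whole point: auditors can grep for `raw=True` and concentrate their review there.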

    Conclusion for now…

    Well, that about wraps up this explanation. In the next installment, I’ll start enumerating the current risks and identifying business drivers. This is an important first step to creating a security architecture which will be robust.

  • PHP Security Architecture – Contextual Overview

    Overview

    The problem with PHP is that it has no security architecture. What do I mean by security architecture? A single pervasive vision for security, which will last for approximately five years with little or no design maintenance. A robust security architecture creates a balance between functionality and risk, and ensures that by default, simple activities and normal features create as little risk as possible.

    There is no point in a “safe” mode which prohibits most scripts of any consequence. For example, safe mode prevents most gallery applications from running. This wouldn’t be a bad thing if you’re a gallery-hating hoster who wanted to prevent such apps from running, but this is not the case 99 times out of 100. What is worse, though, is that safe mode is trivially worked around if you want to completely 0wn the host using an attack script, but hard to work around if all you want to do is save some images to disk. The new security architecture must make balanced choices which allow apps to do their stuff, but not allow attack scripts to do theirs.

    PHP has had several disjointed goes at implementing “security” features, but unfortunately failed to implement them correctly. All these efforts require modern, safer programs to include code which tests if these options are enabled, and if so, to undo their handiwork. Often such code is buggy and slow. For example, many hosters (incorrectly) have register globals enabled because many customer scripts “need” them. However, safer PHP scripts do not need register globals as they follow the usual OWASP model of validate! validate! validate! from the correct source ($_GET, $_POST, etc). So they have to undo the registered globals, requiring even more work and slowing each and every script down. Worse, they sometimes get it wrong – some programs actually look to see if register globals is off and put the old bad behavior back! All this is wasted work.

    One of the key findings is that major sources of PHP vulnerability relate to the many distinct configuration flavors. Hosters often run insecurely to maximize compatibility, and with “register globals”, “magic quotes” and “safe mode”, apps face at least 8 common combinations (three binary options, so 2 × 2 × 2 = 8) which may or may not work for them. This makes testing significant PHP apps basically impossible. The new security architecture must provide a single correct configuration which programs can rely on, so there is no reason to enable these unsafe features. This also means that it is easy for auditors and reviewers to find code which relies on unsafe features, and even easier to find code which relies on the new security architecture, so they can concentrate on the really dodgy stuff – bad design and silly processes.

    The pervasive security architecture explicitly reduces the attack surface area of any PHP 6.0 script to manageable levels, and controls all security features in a cohesive and orthogonal way. A key goal is to ensure that existing scripts will not need to be modified (but also do not benefit … unless it is easy and safe to do so), while new applications which are aware of PHP 6.0 will automatically get the safest programming experience.

    Of course, it is possible to write insecure programs in any language if you try hard enough. What I want is for the easiest way to also be the safest way.

    Security Architecture Objectives

    The major objectives for the PHP Security Architecture are:

    • By default, the new architecture uses a low attack surface area approach, disabling any features which have a security outcome
    • The easiest way to do something, is also the safest way
    • If we can provide security without coding, then that will happen (think freebie XSS protection)
    • Backward compatibility is not broken, but hosters and admins are free to enable the new mode (yes – old scripts will continue to run unless you ask for the new architecture, under which they will not)
    • Unsafe constructs and patterns, like mysql_query() which cannot be made safe at any price, do not run in the new architecture. At all.
    • Safe constructs, like PDO are unchanged and require no porting
    • The basic architecture pattern is “deny unless permitted” as it applies to operating system and network resources
    • Never introduce another broken security idea into the new security architecture, i.e., no more “register globals” or “magic quotes”. Either the new feature is the lowest-risk way of achieving an outcome, or it is not introduced. This will require significant peer review.
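
    The “deny unless permitted” pattern for operating system and network resources can be sketched as a simple allowlist check (the resource names and manifest shape here are invented for illustration, not a real PHP mechanism):

```python
# Resources an application's manifest explicitly grants; everything else is refused.
ALLOWED = {
    ("filesystem", "/var/www/app/uploads"),
    ("network", "db.internal:3306"),
}

def permitted(kind: str, resource: str) -> bool:
    """Deny by default; permit only what is explicitly listed."""
    return (kind, resource) in ALLOWED

permitted("filesystem", "/var/www/app/uploads")  # True: explicitly granted
permitted("filesystem", "/etc/passwd")           # False: denied by default
```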

    Compliant scripts invoke the new security architecture, but once invoked, the entire script must be compatible with the new architecture. The new security architecture has to be holistic and apply to all functions. But as this is a lot of work, the initial effort has to be to secure the securable, and remove access to the unsecurable.

    Bad security patterns in need of solving

    There is no point in fixing broken API, so I will not delve into straight security “fixes” for existing PHP applications. However, I have reviewed the top five PHP-related issues and worked out their root causes.

    The five things are:

    • File inclusion attacks, usually resulting in remote command injection
    • Remote command injection
    • Validation failures, particularly XSS attacks
    • File system attacks
    • and lastly, configuration related attacks which makes the attack so much worse
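
    As one example of removing the first issue at the root: a file-inclusion fix means never building an include path from user input – the untrusted parameter selects from a fixed table instead. This is a Python sketch of the pattern (the page names are invented for illustration):

```python
# Map untrusted "page" parameters onto known-good template paths.
PAGES = {
    "home": "templates/home.tpl",
    "about": "templates/about.tpl",
}

def resolve_page(requested: str) -> str:
    """Return a known-good template path; anything unknown falls back to home."""
    return PAGES.get(requested, PAGES["home"])

resolve_page("about")             # "templates/about.tpl"
resolve_page("../../etc/passwd")  # attack input falls back to the home template
```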

    Creating secure patterns and fixing APIs to remove these issues permanently will inform the security architecture’s overall look and feel, and it will help us create “safer” PHP-like constructs. There is no point in producing Java-like or .NET-like constructs on top of PHP – PHP developers use PHP for a reason: it’s a pretty simple language to pick up and fast to make things happen.

    That is not to say that there will be no J2EE or .NET influences, particularly if one of those platforms does something very well and PHP currently has no equivalent API. For example, if we need a new API for feature X already present in another platform, there is little reason to create an API with tiny detail differences. All that means is that programmers moving from other platforms to PHP need to learn the PHP nuances; the more likely outcome is that they will get it wrong and there will be subtle bugs. A typical example: in .NET, structured exception handling is pervasive, whereas in PHP we use PHP’s loose typing to return bools, occasionally -1, occasionally throw an exception, and sometimes return a string. We should be careful about return results like this, particularly if we pick up an API wholesale from somewhere else.

    Sandbox

    The sandbox as it stands in the runkit is insufficient. What we need is isolation, so that each application has a level of isolation from the underlying environment and from other applications. Hosters often run hundreds of users on each host, and many applications may exist in each account, such as a CMS, blog and gallery … just as I do here. All are in PHP. The apps can see and change each other’s resources like config and temporary files, which is not an ideal situation.

    Obviously, some apps require less isolation than others, and some require tight integration (a CMS and an integrated forum software, for example), so a model must exist which allows such integration without opening up an entire full trust model as today.

    Bringing old features into the fold

    There is no point in creating a massive new API just for security architecture, but there needs to be a way to identify those areas which are affected. Luckily, a great deal of the API is already in PHP, so it’s “just” a matter of securing the boundary in the core between PHP applications and the underlying operating system.

    However, there are gaps in PHP that require new APIs and ideas to support basic security activities:

    • The idea of an intrinsic authorization model which code can rely upon to be there
    • Privilege levels / trust model, running at the lowest possible privilege and allowing impersonation and elevation of privilege without secret storage
    • Secret storage and other crypto API, particularly as it relates to database connections
    • Implementing encrypted connections to databases and LDAP stores by default unless unencrypted is required
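    To make the first bullet concrete, here is a purely hypothetical sketch of what an intrinsic authorization model might feel like to application code. Nothing like this exists in PHP today; every class and method name is invented.

```php
<?php
// Purely hypothetical sketch of an intrinsic authorization model.
// Nothing like this exists in PHP; every name here is invented.
class Privilege
{
    private $granted = array();

    public function grant(string $capability): void
    {
        $this->granted[$capability] = true;
    }

    public function check(string $capability): bool
    {
        return isset($this->granted[$capability]);
    }

    // Throw instead of returning false, so a forgotten check
    // fails closed rather than silently continuing.
    public function demand(string $capability): void
    {
        if (!$this->check($capability)) {
            throw new RuntimeException("Privilege denied: $capability");
        }
    }
}
```

    An application would start with an empty set and explicitly grant only what it needs – the lowest-possible-privilege default argued for above.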

    Conclusion… for now

    There is much work to do. I will blog my thoughts here regularly and refine the overall approach. I will obviously find some hidden corners of PHP of which I am currently completely unaware, so I will take advice from anyone who has experience in these areas.

  • PHP Insecurity: Failure of Leadership

    About a week or so ago, I wrote to webappsec in response to Yasuo Ohgaki’s (書かない日記) post about some issues with PHP’s security model.

    For some time, I’ve been worried about the direction of PHP. As many of you know, I helped write XMB Forum and now help write UltimaBB, a descendant of XMB. XMB in particular is an old code base. I’ve done a lot to protect that code from attack, and luckily, attackers have largely missed us despite some doozies of security issues. After writing PHP forum software for three years now, I’ve come to the conclusion that it is basically impossible for normal programmers to write secure PHP code. It takes far too much effort.

    PHP needs a proper security architecture, and support for newbie programmers. PHP’s raison d’etre is that it is simple to pick up and make it do something useful. There needs to be a major push by the PHP Development team to take this advantage, and make it safe for the likely level of programmers – newbies. Newbies have zero chance of writing secure software unless their language is safe.

    Think about Logo for a minute. Logo can do some interesting things, but it is not a commercially useful language because it cannot do much. But it is an excellent teaching language. PHP is like Logo – it’s a simple and easy way to get into serious web development. It is possible to write large applications in PHP, so it is useful at that level. But it is inherently unsafe as it can do far, far more than Logo.

    There are so many ways to break PHP that it is impossible for even experienced security professionals like me to code in it securely all the time. There are nearly 4000 function calls, and many of them have unintended consequences or have been inappropriately extended by something else.

    At every turn, the PHP Development Team have made truly terrible “security” choices:

    • register_globals
    • magic_quotes_gpc (and friends)
    • PHP wrappers (see below)
    • safe mode
    • output, XML, LDAP, and SQL interfaces that intermingle data and query elements, which by their very nature are impossible to protect against injection attacks

    All of these are broken. They are disjointed and follow no security model. Some of the features, like PHP wrappers, are poorly documented and are a clear and present danger to PHP scripts; worse, they do not obey even the weak “safe mode” restrictions. I bet few PHP coders are aware of them, let alone of their security impact.

    http://php.net/manual/en/wrappers.php

    PHP coders cannot rely upon their script running in a Unix or Windows environment, so they must code to the lowest common denominator. Hosters rarely upgrade to the latest PHP, even though it is safer. Programs could be ported to safer interfaces like PDO or mysqli’s object-oriented parameterized queries, but they cannot require them, as support is too rare. Even PEAR modules are hard or impossible to install in a shared environment, so helpful favorites like PECL extensions or ADOdb are unavailable, and programs ship with outdated, vulnerable bundled libraries instead.

    So why this whinge?

    PHP must now mature and take on a proper security architecture: an overarching security model that prevents or limits attack surface until the application explicitly asks for more. There can be no other way. Look at Bugtraq: every day, 10-50 PHP applications are broken mercilessly. This cannot continue. Hosters cannot keep paying the price for the PHP development team’s lack of security expertise.

    I wrote back to webappsec that we as security experts should offer our counsel to the PHP Development Team. The only response I received was from Yasuo-san. It included an exploit of the PHP wrappers (as above) that is completely unaffected by any safe mode implementation. He also suggested I contact Rasmus Lerdorf, one of PHP’s creators, who leads the PHP development team.

    I e-mailed Rasmus, and although it’s the new year, I have yet to receive a reply. I get a lot of e-mail, but I make an effort to reply to all of it. I wish others would do the same – it is only polite. [ Edit: 24/1/2006 – I have a reply from Rasmus. Apparently, he saw Chris’s blog and thus this rant, and replied. ]

    It is time to stop complaining. The time for forgiving PHP’s weaknesses is over – it must stop, and stop now. PHP 6.0 is still in development, and it should be so clearly more secure than anything before it that hosters will upgrade to it – unlike PHP 5.0, which they have largely skipped.

    It is time to act.

  • PHP Insecurity: File handling and remote code execution

    For better or worse, there are a lot of novice programmers hammering away at PHP scripts all over the planet. PHP is one of the most common web scripting languages, yet it’s simply too hard for a newbie PHP programmer to write secure PHP code. As I’ll demonstrate, it’s also impossible for even security-minded PHP professionals to keep their applications secure, due to the way PHP manages change to its ever-growing API. The project’s culture of “add stuff, but stuff the security implications” has to stop. Don’t get me wrong, I love change. I just don’t love the way the PHP project goes about it.

    Let’s take a non-hypothetical instance. Some functions are very familiar to Unix folks, like fopen(), fread(), fclose() and so on. In Unix, the semantics of these functions and the security issues surrounding them are well understood. However, in PHP, fopen() and friends are heavily overloaded, and gain new functionality between even minor PHP releases. For example, by default, PHP’s fopen() and several friends can open any file on the Internet. Producing a canonical filename which is safe is basically impossible in PHP.

    Take a typical PHP application with per-language template files. A typical implementation will enumerate a directory to see what files are available (English.lang.php, русский.lang.php, etc.) and then try to “fix up” the user’s selection. The attacker then substitutes ../../../../../etc/passwd or something similar. Nothing new here for our Unix friends. But what about going offsite? The top vulnerability for PHP applications in 2005 was remote file inclusion, and it uses this exact mechanism.
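    For the traversal half of the problem, the usual defence is to canonicalize the candidate path and then verify it stays inside the expected directory. A sketch follows; the function name and directory layout are assumptions for illustration, not any real API.

```php
<?php
// Sketch: canonicalize with realpath() and verify containment.
// Function name and layout are illustrative assumptions.
function resolve_lang_file(string $langDir, string $name): ?string
{
    $base = realpath($langDir);
    $path = realpath($langDir . '/' . $name . '.lang.php');
    if ($base === false || $path === false) {
        return null;                      // directory or file missing
    }
    if (strncmp($path, $base . '/', strlen($base) + 1) !== 0) {
        return null;                      // escaped the language dir
    }
    return $path;
}
```

    Because realpath() resolves ../ sequences and symlinks before the prefix check, a ../../etc/passwd submission either fails to resolve or resolves outside $base and is rejected.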

    The usual type of thing I see all the time:

    $language = $_POST['language'] . '.lang.php';
    include($language);

    Of course, the security people reading this are going “nononononno!”. But to the average PHP programmer, why should it be any harder? PHP just made a basic idea very hard to get right. This is not to say J2EE or ASP.NET are invulnerable to this type of boneheaded programming, but they don’t allow you to include files from over the Internet and then evaluate their contents.
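    The standard fix, sketched here with assumed names, is to never let user input reach include() at all – select from a known list instead:

```php
<?php
// Sketch: choose a language from a fixed whitelist instead of
// trusting request data. All names here are illustrative.
function pick_language(array $post, array $allowed, string $default): string
{
    $lang = isset($post['language']) ? $post['language'] : $default;
    return in_array($lang, $allowed, true) ? $lang : $default;
}

// e.g. include(pick_language($_POST, array('English', 'German'), 'English')
//      . '.lang.php');
```

    Anything not on the list, traversal string or remote URL alike, falls back to the default.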

    What if we move to file_get_contents() instead of including the result? file_get_contents() is rarely used, as it arrived in PHP 4.3.0 and PHP coders are reluctant to use newfangled calls when old ones will do. However, it is no better! It STILL reads the file directly from a URL or via a wrapper, like php://output (which acts like echo… with the usual association of data… XSS city), or php://filter/resource=http://www.example.com … and this is NOT restricted by allow_url_fopen. Who comes up with these settings?
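    One defensive sketch (the helper name is assumed) is to reject any user-supplied path that names a stream wrapper before it ever reaches a file function:

```php
<?php
// Sketch: refuse user-supplied paths that smuggle in a stream
// wrapper (http://, ftp://, php://, data:, ...). Helper name is
// illustrative; a scheme prefix marks the path as untrustworthy.
function has_wrapper(string $path): bool
{
    return (bool) preg_match('#^[a-z][a-z0-9+.\-]*://#i', $path)
        || stripos($path, 'data:') === 0;
}
```

    This only addresses the wrapper vector; it is a complement to, not a substitute for, whitelisting and canonicalization.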

    Programmers are usually surprised by how many formerly local file operations can now reach remote files and stream filters. The job is made harder because PHP keeps changing its mind about what is available: an application that was safe under PHP 4.2.x is no longer safe under PHP 4.3.x or PHP 5 – just because PHP changed.

    Accompanied by extremely fragmented documentation (i.e. “see Appendix L”, or read the usually extensive user comments to see how the functions ACTUALLY work), PHP’s file operations take real experience to program safely. With such a low barrier to entry, PHP should reserve these advanced features for those who know what they’re doing. However, it’s far too late: PHP is used by programmers of many different skill levels, and the average Joe programmer has no hope in hell of writing a safe PHP application.

    In the meantime, let me plug Chris Shiflett’s brand spanking new PHP Security book from O’Reilly:
    Amazon Listing

    If you want to write secure apps in PHP, you need that book.

  • “Enterprise” levels of insecurity

    Why is it that “enterprise” applications have the worst security?

    If VXers researched this area, they could bring corporates all over the world to their knees.

    Typical mistakes include:

    • clear text management protocols
    • clear text authentication, if performed at all
    • excessive privileges required to do their tasks
    • poorly written and poorly tested code – it’s usually trivial to make agents segfault or GPF with simple fuzz-testing tools
    • Default configurations are insecure out of the box
    • Default username and passwords
    • require old software stacks which themselves have security issues
    • secretive and obtuse documentation particularly around security issues
    • Stupid limitations… like BMC Patrol’s requirement that all agents run at a matching security level … or else the console does not work. This makes for Big Bang changes in most environments which means no change.

    I could go on, but my blood is boiling. If you are buying management software, buy *secure* management software. Don’t trust the vendor to tell you about this – evaluate the software in your environment. Use Ethereal and ettercap to check whether it sends clear text or replayable secrets over the wire. Run the trial software against a default installation and see if you can manage your test hosts with default passwords.

    Unbelievable.