Blog

  • WordPress updated

    I had a problem with the 2024 theme making the site look like it had been hacked, when it was simply a configuration issue. So I upgraded to the latest version of everything and deleted the 2024 theme.

    Everything, including the links to previous posts, should work now.

  • Privacy Policy

    I’ve decided to set up a Facebook Public Figure page, so that I can accept more friend requests from those in my industry, and to help keep the cat memes separate from the infosec content. Facebook requires a privacy policy. It’s going to be incredibly simple:

    At this time, I do not intend to collect, process, store, share, redistribute, or in any way do anything with your private information. Don’t post your sekret stuff on my Facebook page.

  • Keeping work papers

    I’d like to hear about folks’ record-keeping practices: when you take a note and when you don’t; whether you use written notes, tools, text files, or Word docs; how long you spend recording things; and whether it has ever saved your bacon.

    For background, I’ve been doing this for 20+ years and I’ve always kept notes. I don’t need help personally. We are having discussions at my $dayjob on the sorts of records we should be keeping and the trade-offs involved in doing so. I would like to understand both leading practice and common industry practice.

    What do you do? Has it helped make you a better tester? Did you learn your technique from a mentor, or did you have to make it up as you went along?

    [ I originally attempted to ask this on r/netsec, but it was rejected as it was a question (!) Something about small people with very small power something something. Ain’t no-one got time for that. ]

  • Porting FreeMiNT to ARM – Retro Challenge RC2017/10

    The Retro Challenge is an interesting idea – pick a project that is over 10 years old, and blog about working on it for a month. Most folks pick older computers that they acquire and fix up, or do something interesting, such as adding network functionality to Apple IIs or running Twitter clients over serial.

    These are amazing hardware projects, but hardware is not really my forte, even if I want to do more of it. Plus, I’m travelling quite extensively during October, so I need a project I can do within a virtual machine, in my spare time, in hotel rooms, with carry-on items only.

    Software is more my thing, so I want to pick a piece of software that is more than 10 years old, and do something useful with it, like make it work on modern hardware to let it live again. I didn’t want to retread where others have been, and as you’ll come to see, I have definitely bitten off more than I can chew.

    Previous Retro Challenges have done a lot with early-’80s computers, usually 8 bit machines, but as I used those when they first came out, I wasn’t really interested in that. I pretty much jumped from 8 bit computers directly to 32 bit computers (Macs), bypassing 16 bit computers like the Amiga and Atari and, for what it’s worth, DOS. That’s right: DOS and Windows 1.0-9x were never my daily driver, apart from the few times I was forced to use them at Uni or while fixing up relatives’ computers. When I bought my first PC in the mid-1990s, it was an HP XU dual Pentium Pro workstation running Windows NT, and mostly Linux. I’ve owned around 20 Macs in my life, and I’m sure I’ll own a few more in the future. I’ve also owned a bunch of weird things: a VT102 terminal (a real one), an Acorn Risc PC that I tried getting RiscIX running on, a Sun Ultra 10 workstation running Solaris, and a DEC Alpha PC164 that I ran NetBSD/alpha and then Linux for Alpha on. I’ve owned an Atari Portfolio (more disappointing than you’d think), a Newton 2100 (better than the critics made out), various PDAs, and an Amiga 500Plus I bought in the UK.

    I currently run a Lenovo T460s, which is my first PC in over a decade, and it’s actually pretty good.

    Project ideas

    Something that has always interested me is how a port to a new processor gives a dying platform a bit of additional life, simply because more people can use it or at least try it out. In particular, AmigaOS was ported to PowerPC to run on various iterations of phoenix Amiga hardware platforms and on various accelerator cards for Commodore Amigas. Similarly, RISC OS was ported to the Raspberry Pi, which is much faster than any real Acorn hardware, and it’s actually pretty decent, if a little lacking in modern software.

    So from a 16 bit point of view, the Amiga is a popular choice, and I don’t know that I would be able to add anything of particular value. Both AmigaOS and RISC OS are actively maintained, and they fail the spirit of the “must be older than 10 years” test unless I involved original hardware, which is out for me as I’m travelling.

    So what other 16 bit platforms were popular that I missed out on? I don’t do consoles. I’ve owned a PS2 and an original Xbox, and I never really used either of them. Consoles aren’t really interesting to me, plus I can’t carry them around during October.

    Atari TOS was never ported off the original m68k platform. There are various 68060 accelerator cards, and a complete clone of the platform called the FireBee, based on the Motorola ColdFire processor architecture running at around 260 MHz, but simply doing something with a 68060 or a 2013-era ColdFire system doesn’t meet the requirements of the RetroChallenge. Plus, I can’t haul around an ST.

    Looking on eBay and Craigslist, there are basically no Atari STs / TTs / Falcons at any price, and FireBees are too new. The scarcity of the Atari ST / TT / Falcon and clones, coupled with the lack of space for a retro computer in my home office and my travel, means I needed to shelve the plan to work on original hardware.

    So what’s portable, or could work within a virtual machine whilst I’m travelling? My Raspberry Pi Model B. However, whilst the specs of the Raspberry Pi are fantastic compared to those of the original ST, there are some limitations to this platform, which I will touch on later.


    I want anything I do to have a chance of revitalizing the Atari platform, or at least giving it a new life, rather than leaving it stuck on the m68k platform forever, and thus stuck within emulators such as Hatari and the amazing Aranym. Aranym is an m68k software emulator for FreeMiNT and other Atari-based operating systems, designed to go REALLY fast on modern hardware: it’s not trying to be 100% compatible, but usefully fast whilst running old Atari apps and even many games. It often runs a great deal faster than even modern hardware like the FireBee.

    Starting

    Before the challenge proper starts, you’re allowed to do some prep, and work out what you want to do. In fact, without this step, I reckon it’s almost impossible to do much other than faff about and write blog entries. So I started to estimate the effort.

    A few months back, I reviewed whether this was even possible. The original Atari STs had their operating system in ROM, and even later machines rely on the TOS ROM to do various things, such as booting. If the source to TOS wasn’t available, this project concept would be dead, as I’m not going to buy an Atari (for now) … and they don’t seem to be particularly available in the US. I understand the Atari ST was much more popular in Europe than in the US, and with most software written for PAL systems, it seems only the most diehard Atari followers in the US committed to the platform. So there aren’t a lot of systems going.

    Luckily, prior to Atari’s demise, the original TOS was “improved” back in the late 1980s by a tinkerer who created MiNT (MiNT is Not TOS), a new Unix-like kernel with GPL utilities to run on the Atari. Atari saw that this was good and hired the developer, and that’s pretty much how MiNT became official (the acronym flipping to “MiNT is Now TOS”), and how TOS (and MiNT) came to be open sourced before they died. Without this history, any chance of new life would be over, as only emulation would be possible.

    What is MiNT? It’s a multitasking kernel that allows more than one TOS program to run at once, but it’s more than that – it’s a Unix-like kernel that allows many POSIX utilities to be compiled and run.
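
    To make “Unix-like” concrete: a plain POSIX program along these lines is the sort of thing the MiNT userland made buildable on an Atari. This is a generic sketch of my own, not MiNT code, and it assumes nothing beyond standard POSIX:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Plain POSIX process handling: single-tasking TOS has no
       equivalent, but a multitasking Unix-like kernel does. */
    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);
            _exit(127);                   /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }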

    Eventually, MiNT begat FreeMiNT. AES, the Atari’s desktop environment, became XaAES, also open sourced and freely available on GitHub. It’s very retro, full of blends and stuff.

    FreeMiNT seemed to fizzle out around 2004. By this time, Atari had been dead for nearly a decade, one of the key contributors had passed away, and there were no PowerPC accelerator cards. Development largely consisted of bringing in more and more of the Unix userland, plus “improvements” to XaAES such as moving it into kernel space (remember that most users were running 8 MHz systems, with a few on 30 MHz 68030s; moving the GUI into the kernel might seem crazy now, but to allow a then-modern look and feel at acceptable speed, I’m betting the kernel was the right place).

    The FreeMiNT project hasn’t seen a whole lot of maintenance since: mainly nips and tucks, work to support the FireBee and its ColdFire processor, various device drivers, and additional support for Aranym. I’m sure I’m understating it, but there’s a lot of technical debt in this code base, and I’m really hoping it will not bite me too hard.

    Which TOS?

    By this point, you’re thinking the job is done in terms of selecting the project, but it’s not that simple. There are:

    • Original TOS – which I’m not sure anyone uses any more except original owners
    • EmuTOS – used by Hatari; basically an improved version of TOS with AES, which will run on Aranym and the FireBee
    • FreeMiNT – used on original hardware, the FireBee, and emulators, which means there are active users of this platform today
    • SpareMint – a FreeMiNT kernel plus a package manager
    • FireTOS for FreeMiNT – used on the FireBee
    • FireTOS full – used on the FireBee

    I could go retro and try to port a smaller amount of software: original TOS / EmuTOS. Unfortunately, these are the original sources, and they make a lot of suppositions about the underlying hardware.

    Folks think that the Amiga had more custom chips and is harder to emulate, but the reality is that AmigaOS is probably an easier port than TOS, as the Atari used software to achieve much the same things the Amiga did in hardware. So in some ways, TOS knows and assumes a lot more about the platform than many operating systems do: things like timing loops, and where IO is located and how it works.
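
    To make that concrete, here’s the shape of the problem – a sketch of my own (not FreeMiNT source) of the fixed-address hardware access that TOS-era code is full of. The addresses are the ST shifter’s video base registers; nothing at those addresses exists on a Raspberry Pi, so every access like this has to be found and abstracted:

    #include <stdint.h>

    /* Illustrative only: on the ST, the video base address is set by
       poking two bytes at fixed physical addresses in the shifter. */
    #define VID_BASE_HI  (*(volatile uint8_t *)0xFFFF8201)
    #define VID_BASE_MID (*(volatile uint8_t *)0xFFFF8203)

    static void st_set_screen_base(uint32_t phys)
    {
        /* Assumes a flat physical address space with no MMU, and that
           the display retargets as soon as these bytes change – true
           on an ST, not true anywhere on ARM. */
        VID_BASE_HI  = (phys >> 16) & 0xFF;
        VID_BASE_MID = (phys >> 8)  & 0xFF;
    }

    Multiply that by every timer, MFP, ACIA, and shifter register, and you get a feel for the porting surface.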

    I didn’t want to get into package management or trying to replicate an entire operating system, and I couldn’t do something with the FireBee as it’s too new, so basically, I’m down to FreeMiNT. A lot of folks do interesting things with FreeMiNT: it runs on real hardware, newer hardware like the FireBee, and emulators like Hatari and Aranym.

    Plus, FreeMiNT has code for newer CPUs, drivers for the FireBee and Aranym, and a more abstracted view of the hardware platform that, whilst not as clean as, say, NetBSD’s, is certainly a good start. And there are “only” 45 assembly language files – though obviously porting those is not everything required to boot on an entirely new platform.

    False starts

    Looking around, I found a great reference for building a new operating system on a Raspberry Pi. My model supports a serial console via a special cable, which will be essential for getting the platform running. However, the process of building a kernel, writing it to flash, booting, and so on would be a slow nightmare.

    The Raspberry Pi 3 has a UART for serial that you can re-enable in config.txt, and more to the point, it can boot an operating system over the network. The workflow will be: compile a kernel, deploy it to a local directory, net boot, bring the system up over serial, and debug. It might be some time before I can remotely debug the kernel the way you can with Linux, and indeed that is not a goal for October.
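
    For flavour, the usual first milestone in those bare-metal guides is a proof-of-life stub: no kernel yet, just evidence that your cross-compiled image was loaded and executed. A hedged sketch, using the original Model B’s BCM2835 addresses (the Pi 3’s peripheral base is 0x3F000000 instead), and nothing to do with FreeMiNT yet:

    #include <stdint.h>

    #define GPIO_BASE 0x20200000u                                /* BCM2835 GPIO */
    #define GPFSEL1   (*(volatile uint32_t *)(GPIO_BASE + 0x04))
    #define GPCLR0    (*(volatile uint32_t *)(GPIO_BASE + 0x28))

    void kernel_main(void)
    {
        GPFSEL1 |= 1u << 18;      /* GPIO16 (the board's ACT LED) as output */
        for (;;)
            GPCLR0 = 1u << 16;    /* the LED is active-low: clearing lights it */
    }

    Once that works over netboot, the serial console comes next, and only then does any FreeMiNT code enter the picture.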

    However, this does bring up a thorny issue: there are NO good emulators for the Raspberry Pi. I don’t know why this is. If I tackled a BeagleBone, I could emulate that. So realistically, the first boots won’t be in an emulator unless I write for the BeagleBone first and then port to the Raspberry Pi, and that seems … more difficult.

    When I looked at the FreeMiNT source, I saw Minix and BSD bits and pieces, as well as GPL tools providing much of the shell and userland. So I thought: instead of porting all of that, why not start with something that already boots on the Raspberry Pi, and port XaAES and a FreeMiNT (TOS) library to it?

    Minix

    I spent some time looking at Minix. Initially, it seemed an excellent fit: FreeMiNT gets its /u file system support from Minix 2. Why not continue down the pathway the original author of FreeMiNT was already on by bringing more Minix to FreeMiNT, bypassing a lot of the porting process: concentrate on creating a FreeMiNT and XaAES Minix server, and rely on Minix to provide the rest of the platform (memory, scheduling, pipes, file systems, network, etc.)?

    Sadly, Minix is a fading platform now that Andy Tanenbaum has retired. It barely runs on BeagleBoards, no longer has SMP support, no longer runs X11, is 32 bit only on Intel processors, and does not support the Raspberry Pi. A few grad students had a summer project in 2015 or so and got it booted on a Raspberry Pi 2, but their work was never merged, and newer versions of Minix have shipped since, which means integrating those patches might be tough. I didn’t have a Raspberry Pi 2 to test that hypothesis, and it doesn’t emulate, so basically I was a bit shut down.

    So even though I think the FreeMiNT project could do worse than re-platforming onto something a lot more modern, Minix 3 is probably not the right choice for this project.


    NetBSD

    Minix itself has, for a few years now, been re-platforming to use the NetBSD userland – basically replacing the monolithic NetBSD kernel with a Minix microkernel, and all that implies. That process is ongoing, so I thought: well, maybe I could use NetBSD to run a container or library containing FreeMiNT (TOS) + XaAES on top of X11.

    Again, I hit roadblocks. The Raspberry Pi 2 and 3 are only a tier 2 port in NetBSD 7.1, and there’s no support for the original Raspberry Pi as far as I can tell. Again, no emulation, so I couldn’t test it out. Lastly, NetBSD has chroot, but no containerization, domains, or Xen on ARM. There is sailor, a container platform, but it’s not designed to run another OS.

    At best, I could port the code to run on top of NetBSD, but even if we got fast recompilation of old ST / TOS software running on top of this platform, it wouldn’t look much like an Atari ST at that point.

    FreeBSD

    As I have a Raspberry Pi 1, I thought about doing the same with FreeBSD. But even though its Raspberry Pi support is better, emulation is still an issue, and containerization or domains are still missing, so it would look and feel like FreeBSD until you ran an Atari app – and that’s like running a KDE app on a Gnome desktop. I wanted to do better.

    Linux

    Linux was always going to be my cross-compiling choice, so I looked at emulation, or whether I could run something like an Atari subsystem with XaAES running natively instead of X11. Realistically, even though this might be possible, it’s too much work for one month. Plus, my concern that it would feel like Linux-that-can-run-AES-programs means I don’t think you’d really get a feel for the Atari – more a modern version of GEM for Linux, which, considering GEM once ran on PCs, isn’t really a win.

    This brings me back to…

    Bare metal

    The obvious choice by now: cross-compile on Linux, and run on bare metal. I think this gives the best shot of FreeMiNT on ARM feeling like it is really FreeMiNT. It will hopefully bring old ST fans out of the closet, and attract new folks to the ST platform for the modest investment of a Raspberry Pi – especially considering the scarcity and insane prices of real vintage hardware, coupled with considerable performance improvements over the original platforms, even the FireBee.

    I will acquire a Raspberry Pi 3, because it can netboot, and a serial console cable, because for a while at least I will be on the road and will not have an HDMI monitor to plug it into. Plus, although I have no plans for a 64 bit port of the operating system (that would be TOO far beyond my goals), at least the option is there for the future if anyone wants to have a shot.

    Goals

    My goal for October is to achieve cross-compilation of the FreeMiNT source – in particular the kernel (and not necessarily XaAES, tools, or shared) – as a proof of concept, and to demonstrate that the FreeMiNT boot loader and boot process look and feel just like an original Atari ST / TT / Falcon once the Raspberry Pi’s own boot loader gets out of the way, starting with the memory test and boot sequence.

    This will require porting a 16 bit / 32 bit operating system without memory protection or virtual memory to a 32 bit platform that has both protected memory and virtual memory, and that in terms of hardware looks nothing like the original platform.

    I bet there’s esoteric bit blasting going on that makes perfect sense on a CPU- and memory-constrained mid-’80s platform trying to get Amiga-like performance out of extremely economical hardware. Remember, the Atari ST platform did not have DMA or a blitter until 1989, so most software doesn’t assume one exists. Although the Amiga was a technical powerhouse, I think ST software actually works the overall platform harder, just because it has to.
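
    By “bit blasting” I mean code like this: with no blitter, moving screen memory is a pure CPU job, so ST software is full of hand-tuned copy loops – usually unrolled movem.l in assembly. This C equivalent of mine just shows the shape:

    #include <stddef.h>
    #include <stdint.h>

    /* The CPU copies video RAM a 16-bit word at a time. Real ST code
       unrolls this with movem.l to move several registers per pass,
       and times it against the display – assumptions that mean
       nothing on ARM. */
    static void copy_raster(uint16_t *dst, const uint16_t *src, size_t words)
    {
        while (words--)
            *dst++ = *src++;
    }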

    Do not think for one second that I’ve chosen an easy project, or even necessarily a possible one. I will give it a shot and try to have fun along the way. To give a sense of scale, here’s the sloccount breakdown of the FreeMiNT tree:

    SLOC Directory SLOC-by-Language (Sorted)
    214746 sys ansic=151941,asm=62672,sh=113,perl=20
    84679 xaaes ansic=83626,asm=972,cs=66,sh=15
    54107 tools ansic=52712,awk=786,asm=286,perl=234,sh=89
    2400 shared ansic=1775,yacc=447,lex=178
    512 doc ansic=233,asm=149,cpp=87,sh=43
    0 fonts (none)
    0 top_dir (none)
    
    
    Totals grouped by language (dominant language first):
    ansic: 290287 (81.44%)
    asm: 64079 (17.98%)
    awk: 786 (0.22%)
    yacc: 447 (0.13%)
    sh: 260 (0.07%)
    perl: 254 (0.07%)
    lex: 178 (0.05%)
    cpp: 87 (0.02%)
    cs: 66 (0.02%)

    So why do this?

    I think it’ll be fun. I’ll learn ARM assembly language by porting an operating system from scratch on bare metal. I will re-learn hardware and device driver programming (I did a couple of drivers back in the 1990s – Matrox Millennium support for XFree86, and HP’s PPA print drivers for gs – both of which are still included in most Linux distros to this day). Lastly, if it does work, I hope I can get the original FreeMiNT project to adopt it, and that it might be the start of a renaissance of the ST platform, even if just within the retro / virtual / vintage community.


  • Training the next generation, or the abolition of the Australian 457 visa

    Without consultation or warning, the Australian Government has decided to abolish the speciality skilled migration 457 visa system.

    There is currently a great deal of confusion, but it seems the current plan is two lists of skills shortages, eligible for different lengths of temporary stay and different migration outcomes:

    • The Short Term Combined Skills Shortage list includes IT security professionals (ANZSCO code 262112). Folks sponsored on this visa are eligible for a two-year stay, and then they need to return home. This visa category leads to neither permanent residency nor eventual citizenship.
    • The Medium and Long-term Strategic Skills List allows 457 visas to be granted for four years, and employment under it can lead to permanent residency and eventual citizenship. As this category stands today (20th April 2017), the list has no IT security professional category, despite our industry lacking tens of thousands of workers.

    As a profession, we have been overlooked. Abandoned, even. IMHO, with this change, Australia is being cut off from the world without adequate notice. We haven’t planned for this, and so it’s going to be chaotic and an extremely tight market for a while as we transition to not being able to hire immigrant workers.

    Where to start? Well, in the olden days, folks with an interest in security sort of fell into it. Back then, it was the wild west, and it could easily end up that way again. I think we can do better, but we need to address the skills pipeline, starting with universities turning out rounded students who have a holistic and deep understanding of many areas of IT security, not just one small element of it.

    It’s difficult to hire juniors today. They exist, but the problem is that clients often expect the “A” team, and will ring you up after a gig if they are unhappy with a consultant for whatever reason and ask that they don’t return. After a few of those calls, we’re in deep scheduling doo-doo. We do take the opportunity to learn from these calls, but it can be risky to hire someone who might have talent but needs more experience. Soon we will have no choice, and clients will have no choice but to accept juniors and fee rises as we increase the use of shadowing.

    In the past, the ever-downward pressure on fees made it difficult to allow shadowing, which is the usual way of imparting knowledge on the job. It’s inappropriate to send in folks who could take out a client’s network or application without realising it. The Dunning-Kruger effect in IT security is particularly harsh, and it can end your career if you’re not protected. It’s a difficult lesson to learn, and the best people learn the most in the hardest possible way. The only way to minimise this risk is adequate early training and shadowing for at least 6-12 months. And even then, there will still be mistakes.

    The lack of juniors is a curse on our industry today – we let it get like this, firstly by allowing clients to choose consultants rather than the team, and secondly, by not taking chances on juniors and understanding that for a couple of years, they will need to be shadowing someone before it’s safe to let them take on a job by themselves.

    We created this skills shortage by not demanding that our universities produce adequately rounded graduates onto whom we could then layer industry needs, like sound consulting behaviours, soft people skills, and writing skills.

    Australian universities do not offer many degrees in IT security. Those that do mostly offer units here and there, and where they offer a major stream (a few do, like UNSW), they concentrate on what I affectionately call “ethical hacking” rather than the full breadth of our profession. I applaud the fact that, at long last, we are seeing some IT security majors, but the subject matter leaves a great deal to be desired. Universities aren’t vocational schools, and yet many are pumping out vocationally trained individuals. As an industry, we need both rounded and vocationally trained graduates, with lifelong learning beaten into them and a healthy start up the ol’ Dunning-Kruger curve.

    Folks from these degrees often need a lot of training to get them client-ready, so again, I think we need to work with higher education to produce fewer hackers and more security professionals – people with a depth of skills across the IT security spectrum that will stand the test of an entire career, rather than, say, good CTF skills or the ability to deep dive into x86 reverse engineering.

    We need to engage with the university sector to get new degrees going in 2018, and to aggressively recruit new students as a matter of priority. Those degrees should cover:

    • Governance, risk and compliance
    • Identity and access management
    • Privacy and data protection
    • Enterprise and Cloud Security Architecture
    • SSDL and Secure coding
    • Application and mobile security
    • Systems and DevOps Security
    • Defences against the dark arts (Blue team)
    • Red teaming (ethical hacking)
    • DFIR, and malware analysis
    • Managing security – BAU IT Security Management and CISO
    • Critical Infrastructure Security (OT security, SCADA security, etc)

    The more esoteric but equally necessary fields, such as IoT security and embedded systems, will have to come later.


    We have less than two years to graduate students through a three year undergraduate program that does not exist today, employ them as juniors, and train them in what we do.

    I’ve said many times I can take a developer and turn them into a security pro in relatively short order (3-12 months max), but I cannot teach three years of programming to a security pro.

    For a while, I fully expect the current 0% unemployment rate in our field to become negative unemployment, with out-of-control wages growth as fewer folks will be around to fulfil an ever-increasing number of FTE requirements. Security consultancies might need to work together to share staff or similar; I’m sure there will be consolidation of consultancies and friendly alliances. I also bet there will be opportunistic recruiters setting up consultant farms to help folks get the best price in a really tight market. Good for them – market forces working for everyone.

    As a global Board member of OWASP (speaking in a personal capacity), I can help bring together experts in application security and drive the syllabuses in SSDL, secure coding, and application and mobile security – critical skills for nearly every firm that produces software, and for security boutiques alike.

    I call on the universities, ACSC, AISA, OWASP, and lastly the ACS to lead this charge and to engage with higher education to start the process. We must work together as an industry and with higher education to build out these many syllabuses, and get the word out to prospective uni students that IT security is a great field with immense prospects.

    The imminent abolition of IT security immigrant visas is both terrifying and exciting because, finally, we don’t have any choice – the entire lifecycle of our industry has to grow up, and REALLY fast.

  • The intelligence kimono

    Some of my IR and forensics friends, whom I highly respect, are getting all bent out of shape about attribution – or the perceived lack of solid evidence for attribution regarding the DNC attacks. In particular, many of them are now publicly doubting on social media (and in mainstream media) that Russia is behind the DNC hacks.

    When the Guccifer 2.0 posts came out, this same set of folks analyzed the dump, and pretty much everyone in Twitter IR land was convinced it came from Russian intelligence services, and that Guccifer 2.0 was a Russian intel persona. Go back and check for yourself; it’s easy to do if you know the usual suspects.

    IR and forensics is not my field, so I didn’t really comment at the time, nor will I really now, except to repeat “attribution is hard, why bother?” (particularly relating to attributing to China, which was the previous most common attribution target).

    Why bother with attribution?

    Because it gets press. Attribution is simply not that useful for the average organisation trying to protect its data … unless it needs to take things to court, or it’s a nation state that wants to know who attacked it. Then it becomes vital that attribution is done properly, and only a few can do it well.

    Realistically, my field is protecting information. I find it frustrating when the cyclical fads in our industry lean towards the fatalistic “you’re already hacked, so let’s only detect and respond”, which has been going for nearly three years already – two years longer than I expected. It must be making money for someone, or at least be making it seem like security is finally doing a good job, when in fact we’re only fighting fires, not constructing fireproof artefacts out of flame-retardant materials.

    If we don’t start work on protecting information BY DEFAULT, we will always be fighting fires, and the world will be constantly on fire. That’s crap. We can and should do better than that.

    I specialize in helping clients, and anyone who consumes my standards work, to protect themselves. Building security in costs far less than bolting it on, and it should be the default choice as it’s the most economical investment.

    When I help clients protect information, I like to learn how folks in their industry sector are attacked, so I am very interested in tools, techniques, and practices, and to some extent “why” they did it – but I simply don’t need to know who did it. It’s just not relevant.

    So I don’t invest in attribution because it is so ridiculously hard to get to a level that would stand up to scrutiny in court. I have colleagues who can do that, but the time and effort taken … well, if your attacker turns out to be a nation state, what are you realistically going to do about it? The same things I am already suggesting you do.

    We’re not behind the intelligence kimono

    The problem is simple: security agencies with more access than mere mortals don’t share what’s behind the intelligence kimono. Folks outside the kimono either have to take intelligence agencies at face value, or state, “I don’t personally know, but my opinion is that the evidence is not strong.”

    It’s perfectly fine – indeed, I would expect it – for my experienced IR and forensic friends to call for a better job of presenting evidence to justify a particular conclusion without compromising state secrets.

    But to state strongly, without any more evidence than has been released, “It’s Country X or Head of State Y”, or “There’s no direct evidence, so it’s not Country X or Head of State Y”, is at the very least over-egging it, and almost certainly wrong. Due to the intelligence kimono, we can’t say for sure either way.

    Intelligence agencies, from my understanding, rarely state things in black and white terms; they present arguments based on analysis of the available (classified) information. So for the person in the street looking for an easy “It was Country X or Head of State Y” – that’s unlikely to ever exist.

    What can we take away from this?

    Please go easy when making public statements, particularly ones that muddy the waters. Understand that there are unknown unknowns, and unless you’re on the inside of the intelligence kimono, those unknown unknowns mean we can’t advocate strongly one way or another.

    I do hope that intelligence agencies trying to brief the public on classified matters realize that the IT security field contains many awesome subject matter experts, who will peer review your work for free – either for you, or in the media.

    Releasing under-cooked or simply wrong reports is counterproductive. It would be worthwhile to bring in those with a strong IR capability to help create public documents that stand the scrutiny of my dedicated and talented peers.

    To my dedicated and talented IR and forensic industry peers: please don’t go “It’s not X” in the media and all over social media. Unless you are inside the intelligence kimono, you have no more information than I do – unless someone is blabbing inappropriately. Please work to help government agencies do a better job, instead of saying something you can’t prove any more than I can, even with your additional expertise.

    In the meantime, let’s not start down the path of distrusting expertise. That way lies failure.


  • Standing for the OWASP Board in 2017 – 2018

    I am standing for the OWASP Board again, representing the Asia Pacific region, which is a huge growth area for OWASP globally. The growth opportunities in Australia, New Zealand, Singapore, Japan, Malaysia, the Philippines, and in particular Indonesia are immense.

    My goal for OWASP is to transition us from a small, fast-growing non-profit to a healthy, sustainable one: a future OWASP where we can directly employ OWASP Leaders to work on their projects, where chapters can use their funds to help employ Foundation staff to help them grow, and where we have four global and at least ten large regional events worldwide.

    My election platform for 2017 – 2018 is:

    Sound financial management, and growing OWASP to $5m per year by 2019. I have been OWASP’s treasurer this last year, and for the first time in a long time, OWASP has had a treasurer with an active interest in finance and in how we can best manage our funds. With sound financial management, OWASP can grow and do all the things that I and other candidates will promise. Without sound management, and without keeping a lid on administrative expenses, we will go backwards. We had some moments this year, which I hope we can avoid as we grow in future years. My goal for 2017 is solid year-on-year revenue growth whilst keeping a lid on expenses, which will then allow us to do amazing things in 2018. We need to do some big ticket items in 2017 – including a web site revamp and going from two to four global conferences – as well as change the model by which we help and fund regional conferences. Conferences are a major profit centre for OWASP, so we have to get this right, as we carry a lot of debt for these conferences until all the bills are paid. But if we get it right, OWASP’s mission is achieved: awareness, training, outreach, chapters, members, and projects all benefit from sound financial management. I have this experience, and I want to continue as Treasurer if re-elected.

    Developers. OWASP was originally a developer-centric initiative, but as we grew, the breaker and defender community materials and projects took front and centre. That focus led us to where we are today, but we have lost a core part of our mission: developers. Too few developers know about us, and yet we are the go-to source in every pen test report delivered worldwide. Too many of our developer-centric materials are woefully out of date. I propose to establish a program of works to ask developers what they need and listen, and then work out what we can re-use, revamp, or retire. This will be a major focus for me in 2017-2018, and I hope we can one day again be first in the minds of all developers.

    Greater education outreach. I recently put up a successful motion to establish an OWASP Education platform, for which I will put a plan in place by the end of the year, with proper funding next year. I see OWASP’s future in having the most up-to-date training and associated developer materials, as this is our members’ #1 request: more training. I want the OWASP Education platform to be a one-stop shop for free and paid training, webinars, and all of our education materials. Exactly how it will work is yet to be determined, but I hope to make it a membership benefit – members get access to all free and paid materials, with a certain number of paid material hours per year. As a positive side note, OWASP members could enrol to be trainers in their local regions, be trained, and give OWASP training in their area. This is a huge win for everyone, and will allow OWASP to go to the next level.

    Tertiary syllabus and outreach. As a logical outgrowth of our Education platform, I have long advocated a “train the lecturer” approach: providing completely free and open teaching materials, and bringing our main materials up to scratch so they can be used in a tertiary setting, either as a semester-long course in application security or as a major in a three-year degree. Eventually, I’d like to establish a masters-by-research program, where OWASP helps provide supervisors and mentors existing PhD supervisors who may struggle to understand what their students are researching.

    Projects. OWASP will soon cross $3m in annual revenue, and I see a day where we will have $5m/year revenues. As long as we keep a lid on expenses, this should be spent entirely on our mission, which should mean at least $1-2m a year on projects. I want to be in a position where OWASP Project Leaders can apply for a grant to work on their project full time for a period (say 3-6 months) to get the next version out or to make their project a Flagship project. Working with our Senior Technical Project Coordinator, successful projects will define a roadmap of agreed deliverables, apply for a grant to work on it, and then take a sabbatical from their day job to complete the agreed piece of work. To fund this more fully, we need better sources of project funding, to put projects front and centre when folks join OWASP and during conference selection, and to go get corporate sponsorship – which may make it easier for individuals to work on the things our corporate sponsors want improved.

    Diversity. OWASP’s job here is not done. I hope that with a renewed Board in 2017, I will be able to resolve OWASP’s embarrassing lack of female keynote speakers, and frankly statistically impossible male:female ratios for things such as conference paper committees. I am extremely disappointed that I haven’t been able to convince a majority of my fellow Board members of this over the last two years; the meritocracy fallacy was brought up more than once as an acceptable status quo. As a Board, we have a responsibility – and must actively change – to reflect our industry’s diversity: in gender, ethnicity, geography, in all diversity aspects. Organisations with a diverse Board do better than those dominated by white men, so I look forward to working with a new Board, hopefully this time getting the needed reform through. OWASP members can help with this goal – please elect women and folks not from the US to the Board. We are a global organisation, and our Board should reflect that.

    Chapter reform. I want OWASP, and awareness of OWASP, to grow in its own right. Many of our chapters draw on OWASP funds but promote themselves as “security meetups”, or indeed as another brand entirely. This is a terrible waste of OWASP’s funds – we are not a piñata to be hit when another group wants money. I will be working shortly to ensure OWASP’s branding and message are front and centre of all that we do, and to re-energise our chapter base.

    Funding the website revamp. We need a new website, and I will be working with Tom Brennan to establish a strong budget to get this done in the first half of 2017. It’s not as easy as reskinning our MediaWiki: we have a LOT of material that a LOT of people and other standards use and link to, so we can’t just retire things.

    If you have any questions relating to my platform, or indeed anything about OWASP or OWASP’s finances, don’t hesitate to ask away in the comments, or on Twitter (@vanderaj) or on Google+ (+Andrew van der Stock).

  • On backdoors and malicious code

    Since ASVS 3.0 retired many of the malicious code requirements, and after actually doing a line-by-line review of ~20 kLOC of dense J2EE authentication code, I’ve been thinking about the various ways backdoors might be created so as not to be findable by either automated or line-by-line searches.

    This obviously ties into the recent Juniper revelation that they found a backdoor in the VPN code of their SOHO device firmware. It also feels like the sort of thing that Apple suffered with GOTO FAIL, and that Linux suffered long ago with the attempted wait4 backdoor:

    https://freedom-to-tinker.com/blog/felten/the-linux-backdoor-attempt-of-2003/

    So basically, I’ve been thinking that there has to be a group somewhere dedicated to obfuscated backdooring: making code that passes the visual and automated muster of specialists like me. There is probably another group, or a portion of the same group, that sets about gaining sufficient privileges to make these changes without being noticed.

    So before anyone goes badBIOS on me, I think it would be useful if we started to learn what malicious coding looks like in every language likely to be backdoored.

    We can help prevent these attacks by improving the agile SDLC process and keeping closer tabs on our repos. We can also make these attacks much harder to slip in if folks stick to an agreed formatting style, enforced by automated indentation and by linting that detects missing block braces and assignment inside conditionals. Yes, this will make some code visually longer, but we cannot tolerate more backdoors.
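
    Apple’s GOTO FAIL is the canonical illustration of why those formatting rules matter. Here’s a self-contained miniature of that shape – my reconstruction, not Apple’s code: one duplicated line under an unbraced conditional, and every later check is silently skipped while the error code still says success:

    #include <stdio.h>

    static int check(int ok) { return ok ? 0 : -1; }    /* stand-in for a real check */

    static int verify(int sig_ok, int hash_ok)
    {
        int err;
        if ((err = check(sig_ok)) != 0)
            goto fail;
            goto fail;                     /* duplicated: always taken, err still 0 */
        if ((err = check(hash_ok)) != 0)   /* never reached */
            goto fail;
    fail:
        return err;                        /* returns 0: "verified" */
    }

    int main(void)
    {
        printf("%d\n", verify(1, 0));      /* prints 0 despite the failed hash check */
        return 0;
    }

    Mandatory braces would have made the duplicated line harmless, and warnings along the lines of GCC’s -Wmisleading-indentation or clang’s -Wunreachable-code flag exactly this shape – which is the class of automated tooling I mean.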

    I’ve been doing a LOT of agile SDLC security work in the last few years, working WITH the developers on creating actually secure development processes and resilient applications, rather than reviewing the finished product and declaring their babies ugly. The latter does not work. You cannot review your way to building secure applications. We need to work with developers.

    This is important as we’re starting to see an explosion in language use. It’s not merely enough to understand how these things are done in C or C++, but in any systems language and any up-and-coming language – many of which have zilch, nada, nothing in the way of automated tools, code quality tools, or specialists familiar with them: Go, Clojure, Haskell, and any number of other languages I see pop up from time to time.

    What I think doesn’t work is line-by-line review on its own. All of these pieces of code must have been looked at by many people (the many-eyeballs fallacy) and run past a bunch of automated source code analysis tools, and it was “all good” – but it wasn’t, really. Who knows how many secure code review specialists like me looked at the code? We need better knowledge and better techniques that development shops can implement. I bet we haven’t seen the last of Juniper-style attacks. Most firms are yet to really look through their unloved bastard repos, full of code from developers past, present and future.

  • Time to start rebuilding GaiaBB

    In a life a long time ago, in early 2002, we had to move Australia’s largest Volkswagen car forum off EzyBoard, which was serving our users malicious ads and hard-to-dismiss pop-ups, and onto our own forum software. After a product selection, I chose XMB, which was (and is) better than the other free forums out there, such as phpBB (which didn’t have attachments until 3.0!).

    XMB was a good choice because it had so many features. What I didn’t know is that XMB was full of security holes. XMB started life as a two-week effort by a then 14-year-old, who had limited capability to write secure software. If you put the OWASP Top 10 against XMB, XMB had at least one of everything. We had XSS, we had SQL injection, we had access control problems, we had authentication bypass, we had … you name it, we had it. Boards were being pwned all over. So I started to help XMB, and soon became their security manager.

    The story of XMB’s rise and fall is long and complicated, with many machinations over its history. There were multiple changes of lead developer, at least one of which I caused – something I’m not proud of, and which realistically covered no one in glory. The original 14-year-old was pushed out before I got there, and various stories floated around; one interesting consequence was that a company came to think it owned a free software project. I pushed to have our software adopt the GPL, which was accepted at the time, and it’s the only reason XMB and all its forks, including GaiaBB, have lasted to this day.

    XMB was forked relentlessly, being the basis of MyBB and OpenBB, as well as UltimaBB and then GaiaBB. The late 2000s were not friendly to XMB: not only was the main development system lost, but a change of “ownership” caused many rifts. Then there was another failed fork called UltimateXMB, which started as a mod central for XMB but turned into a fully customised version of XMB like UltimaBB, though closer in database schema to XMB itself. The last fork of XMB, XMB 2, was a last-ditch effort to stop it dying, but it failed too when the last “owner” of XMB resorted to DMCA takedown requests – illegitimate ones, as I owned the copyright to many of the files in XMB, as did others, particularly John Briggs and Tularis. That last remnant of the one true tree of XMB can be found at XMB Forum 2.

    In 2007, I forked XMB with John Briggs, creating UltimaBB. Life was good for a while: we had momentum, and it worked better than XMB during those years. After the loss of the XMB master SVN tree, XMB 1.9.5 was resurrected as effectively a reverted version of UltimaBB. Then life changed for me – moving to the US and having a child – so we parted ways, which saddened me at the time, as I knew what would happen without a strong and active lead developer. Eventually UltimaBB withered too. I had to fork UltimaBB, creating GaiaBB, as I needed to keep my car forum, Aussieveedubbers, alive and secure.

    GaiaBB got just enough love to keep it secure, but it hasn’t kept up. It barely functions on PHP 5.6, and modern browsers render it funny. Between the technical debt of 14-year-old “modular” code originally written by a 14-year-old (header.php and functions.php are both monolithic and stupidly long) and everything else, it’s time to call time on it.

    So I need to start over or find something new for my forum. As I need to keep my skills up with the modern kids writing software today, I’ve decided to make the investment in rewriting the software, so I can learn about modern Ajax frameworks and have a go at writing back-end code. No small part of this is that I want to understand security measures as a developer: as a code reviewer and penetration tester, you can’t talk to developers unless you know how applications are built and tested, and all the compromises that go into making applications actually run.

    GaiaBB

    So let’s talk about GaiaBB. Compared to most PHP forum software, it’s pretty secure. It’s got all the features you would ever need, and then a lot more on top. But it’s a spaghetti nightmare. It needs a total rewrite. It’s not responsive in any shape or form, so mobile users just can’t use it. There are heaps of bugs that need fixing. There’s no test suite. Database compatibility is not its strong point.

    Frontend decision – Polymer

    After looking around, I’ve decided that the front end shall be Polymer, as it has good anti-XSS controls and is rapidly evolving. It does responsive design like no one’s business. And because Polymer hasn’t got the cruft of some of the alternatives, it will make me think harder about the UI design of the forum software.

    Back in the day, we crammed as many pixels and features into a small space because that was the thing then. Nowadays, it’s more about paring back to the essentials. This is critical for me as I don’t have the time to put back EVERY feature of GaiaBB, but as I know most features are never used, that’s not a big deal.

    Backend considerations

    Now, I need to choose a back end language to do the re-write. My requirements are:

    • Must be workable on as many VPS providers as possible, as many make it difficult to run non-PHP applications
    • Must be fast to develop in, so I am not interested in enterprise-style languages that require hundreds of lines of cruft where one line is actually needed
    • Must support RESTful APIs
    • Must support OAuth authentication: although I can write an identity provider, I am more than willing to let forum owners integrate our identity with Facebook Connect or Google+
    • Must have an entity framework for data access – the days of hand-writing SQL queries are done; I want database transparency
    • Must support writing automated unit and integration tests; this is not optional

    So far, I’ve looked at various languages and frameworks, including:

    • PHP. OMG the pain. There are literally no good choices here. You’d think because I have a lot of the business logic already in PHP that this would be a no-brainer, but the reality is that I have terrible code that is untestable.
    • Go. A very interesting choice, as it’s a systems language that explicitly supports concurrency and all sorts of use cases. However, it does not necessarily follow that writing back-end code in Go is the way to go, as I’ve not found many examples implementing RESTful web services. It’s certainly possible, but I don’t want to be the bunny doing the hard yards.
    • Groovy and Grails. I have clients who use this, so I am interested in learning the ins and outs; it seems pretty self-documenting and fast to write, and it runs on the JVM.
    • Spring. Many clients use this, but I do not like how much glue code Java makes you write to do basic things. Patterns implemented in Spring seem to take forever to provide a level of abstraction that is not required in real life. I want something simpler.

    Frameworks I will not consider

    The few remaining XMB, UltimaBB, and GaiaBB forums need to be migrated to something modern, and that requires support. I don’t have time for support, so I am going to exclude a few things now.

    • Python / Django. I don’t write Python. Few clients use it and I don’t want to be figuring out or supporting a Python web service layer.
    • Node.js. I know this was hot a few years ago, but seriously, I need security, and writing a backend in something that does not protect against multi-threaded race conditions is not okay.
    • Ruby on Rails. I thought about this for a bit, but honestly, I’ve never had to review a Ruby on Rails application, so re-writing my business logic and entities in Rails would not give me more insight than using Groovy/Grails, for which I do have clients.

    At the moment, I’m undecided. I might use Groovy/Grails, as it’s the simplest choice so far and supports exactly what I want to do. That said, Groovy/Grails is starting to lose corporate backing, and I don’t want to bet on a language that might end up on the scrapheap of history.

    What would you do? I’m interested in your point of view if you’ve done something interesting with a RESTful API.

  • Looking back at 2009 and Predictions for 2015

    I looked back at the “predictions” for 2010, a post I wrote five years ago, and found that besides the dramatic increase in mobile assessments this last year or two, the things I was banging on about in 2009 are still issues today:

    Developer education is woeful. I recently did an education piece for a developer crowd at a client, and only two of the 30 knew what XSS was, and only one of them was aware of OWASP. At least at a university event I did later in the year, about 20% of the students were aware of OWASP, XSS, SQL injection, and security. The other 80% – not so much. I hope I reached them!

    Agile security is woeful. When agile first came out, I was enthralled by the chance for an SDLC to be secure by default because they wrote tests. Unfortunately, many modern-day practitioners felt that all non-functional requirements fell into the category of “you ain’t gonna need it”, and so the state of agile SDLC security is just abysmal. There are some shops that get it, but this year I made the acquaintance of an organisation that prides itself on being an agile thought leader, who told our mutual client they don’t need any of that yucky documentation, up-front architecture or design, or indeed any security stuff, such as constraints or filling in the back of the card with non-functional requirements.

    Security conferences are still woeful. Not only is there a distinct lack of diversity at many conferences (zero women speakers at Ruxcon, for example), but very few have “how do we fix this?” talks. Take, for example, the recent CCC event in Berlin. The media latched onto talks about biometric security failing (well, duh!) and SS7 insecurity (well, duh!, if you’ve EVER done any telco stuff). Where are the talks about sending biometrics to the bottom of the sea with concrete shackles, or replacing SS7 with something the ITU hasn’t interfered with?

    Penetration testing still needs to improve. I accept some of the blame here, because I was unable to change the market: pen tests still sell really well. We really need to move on from pen tests, which are a wasted opportunity cost for actual security. We should be selling hybrid application verifications – code reviews with integrated pen tests to properly sort out the exploitability of vulnerabilities. Additionally, there’s a race to the bottom of the barrel, with folks selling 1-2 day automated tests as equivalent to a full security verification for as little money as they can. We need a way of distinguishing weak tests from strong tests so the market can weed out useless checkbox testing. I don’t think red teaming is the answer: it’s a complete rod-length check that can cause considerable harm unless performed with specific rules of engagement – which most red team folks would say invalidates a red team exercise.

    Secure supply chain is still an unsolved problem. No progress at all since 2009. Software enjoys the unique position that – unlike all other goods and services – it is somehow special and needs special liability protection. That might have been true back in the ’70s during the homebrew days; it’s not true today. We are outsourcing and out-tasking more and more every day, and unless suppliers are required to uphold the standard fit-for-purpose rules that all other manufacturers and suppliers must, we are going to see more and more breaches and end-of-company events. Just remember: you can outsource all you like, but you can’t outsource responsibility. If managers are told “it’s insecure” and take no or futile steps to remediate it, I’m sorry, but those managers are accountable and liable.

    At least, due to the rapid adoption of JavaScript frameworks, we are starting to see a decline in traditional XSS. If you don’t know how to attack responsive apps built on JSON or XML APIs, or how to perform DOM injection, you are missing out on new-style XSS. Any time someone tells you that security is hard to adopt because it requires so much refactoring, point them at any responsive app that started out life a long time ago: there’s way more refactoring in moving to responsive design and RESTful APIs than in adding security.

    Again, due to the adoption of frameworks such as Spring MVC and so on, we are starting to see a slight decline in the number of apps with CSRF and SQL injection issues. SQL injection used to turn up in about 20-35% of the apps I reviewed in the late 2000s, and now it’s fairly rare. That said, I had some good times in 2014 with SQL injection.

    The only predictions I will make for 2015 are a continued move to responsive design using JavaScript frameworks for web apps, a concerted movement towards mobile-first apps (again with REST backends), and an even greater shift towards cloud, where there is no perimeter firewall. Given the lack of security architecture and coding knowledge out there, we really must work with the frameworks, particularly those on the backend like node.js and others, to protect front-end web devs from themselves. Otherwise, 2015 will continue to look much like 2009.

    So the best predictions are those you work on to fix. To that end, I was recently elected by the OWASP membership to the Global Board as an At Large member. And if nothing else – I am large!

    • I will work to rebalance our funding. Rather than delivering most of OWASP’s funds back to members who pay a membership fee and then don’t spend the allocated chapter funds, we should focus on building OWASP’s relevance and future: around 30% on building our projects, standards, and training, 30% on outreach, 30% on membership, and 10% or less on admin overheads.
    • I will work towards ensuring that we talk to industry standards bodies and deal OWASP into the conversation. We can’t really complain about ISO, NIST, or ITU standards if they don’t have security SMEs to help draft these standards, can we?
    • I will work towards redressing diversity, both in terms of gender and region, at our conferences, as well as creating a speaker list that developer conferences can look through to invite great local appsec experts. We have so many great speakers with so much to say, but we have to get outside the echo chamber!
    • We have to increase our outreach to universities. We’ve lost the opportunity to train the folks who will become the lead devs and architects in the next 5-10 years, but we can work with the folks coming up behind them. Hopefully we can also invest time and effort into outreach to those already in senior/lead developer, architect, and business analyst roles, but in terms of immediate bang for buck, we really need to take university-level education beyond the few “ethical hacking” courses (which are trade qualifications), and work on building security knowledge into the software engineers and comp sci students of the future. Ethical hacking courses have a place … for security experts, but for coders they are a complete waste of time. Unless you are doing offensive security as your day job, software devs do not need to know how to discover vulnerabilities and code exploits, except in the most general of ways.

    It’s an exciting time, and I hope to keep you informed of any wins in this area.