Porting FreeMiNT to ARM – Retro Challenge RC2017/10

The Retro Challenge is an interesting idea – pick a project that is over 10 years old, and blog about working on it for a month. Most folks pick older computers that they acquire and fix up, or do something interesting, such as adding network functionality to Apple IIs or running Twitter clients over serial.

These are amazing hardware projects, but hardware is not really my forte, even if I want to do more of it. Plus, I’m travelling quite extensively during October, so I need a project I can do within a virtual machine, in my spare time, in hotel rooms, with carry-on items only.

Software is more my thing, so I want to pick a piece of software that is more than 10 years old, and do something useful with it, like make it work on modern hardware to let it live again. I didn’t want to retread where others have been, and as you’ll come to see, I have definitely bitten off more than I can chew.

Previous Retro Challenges have done a lot with early-80s computers, usually 8-bit machines, but as I used these when they first came out, I wasn’t really interested in that. I pretty much jumped from 8-bit computers directly to 32-bit computers (Macs), bypassing 16-bit computers like the Amiga and Atari and, for what it’s worth, DOS. That’s right, DOS and Windows 1.0–9x were never my daily drivers, apart from the few times I was forced to use them at uni or when fixing up relatives’ computers. When I bought my first PC in the mid-1990s, it was an HP XU dual Pentium Pro workstation running Windows NT, and mostly Linux. I’ve owned around 20 Macs in my life, and I’m sure I’ll own a few more in the future. I’ve also owned a bunch of weird things: a VT102 terminal (a real one), an Acorn Risc PC that I tried getting RISC iX running on, a Sun Ultra 10 workstation running Solaris, and a DEC Alpha PC164 that I ran NetBSD/alpha and then Linux for Alpha on. I’ve owned an Atari Portfolio (more disappointing than you think), a Newton 2100 (better than the critics made out), various PDAs, and an Amiga 500 Plus I bought in the UK.

I currently run a Lenovo T460s, which is my first PC in over a decade, and it’s actually pretty good.

Project ideas

Something that always interested me was how a port to a new processor gave a dying platform a bit of additional life, simply because more people could then use it, or at least try it out. In particular, AmigaOS was ported to PowerPC to run on various iterations of phoenix Amiga hardware and on PowerPC accelerator cards for Commodore Amigas. Additionally, RISC OS was ported to the Raspberry Pi, which is much faster than any real Acorn hardware, and it’s actually pretty decent, if a little lacking in modern software.

So from a 16-bit point of view, the Amiga is a popular choice, and I don’t know that I would be able to add anything of particular value. Both AmigaOS and RISC OS are actively maintained, and they fail the spirit of the “must be older than 10 years” test unless I involved original hardware, which is out for me as I’m travelling.

So what other 16-bit platforms were popular that I missed out on? I don’t do consoles. I’ve owned a PS2 and an original Xbox, and I never really used either of them. Consoles aren’t really interesting to me; plus, I can’t carry them around during October.

Atari TOS was never ported off the original m68k platform. There are various 68060 accelerator cards, and a complete clone of the platform called the FireBee, based on the Motorola ColdFire processor architecture running at around 260 MHz, but simply doing something with a 68060 or a 2013-era ColdFire system doesn’t meet the requirements of the RetroChallenge. Plus, I can’t haul around an ST.

Looking on eBay and Craigslist, there are basically no Atari STs / TTs / Falcons at any price, and new FireBees are too new. The scarcity of the Atari ST / TT / Falcon and clones, coupled with the lack of space for a retro computer in my home office and my travel, means I needed to shelve the plan to work on original hardware.

So what’s portable, or could work within a virtual machine whilst I’m travelling? My Raspberry Pi Model B. However, whilst the specs of the Raspberry Pi are fantastic compared to those of the original ST, there are some limitations to this platform which I will touch on later.


I want to make sure that anything I do might end up revitalizing the Atari platform, or at least give it a new life, rather than leave it stuck on the m68k platform forever, and thus stuck within emulators such as Hatari and the amazing Aranym. Aranym is an m68k software emulator for FreeMiNT and other Atari operating systems, but it’s basically designed to go REALLY fast on modern hardware: it isn’t trying to be 100% compatible, just usefully fast whilst running old Atari apps, and even many games. It often runs a great deal faster than even modern hardware like the FireBee.


Before the challenge proper starts, you’re allowed to do some prep, and work out what you want to do. In fact, without this step, I reckon it’s almost impossible to do much other than faff about and write blog entries. So I started to estimate the effort.

A few months back, I reviewed whether this was even possible. The original Atari STs had their operating system in ROM, and even later machines rely on the TOS ROM for various things, such as booting. If the source to TOS weren’t available, this project concept would be dead, as I’m not going to buy an Atari (for now) … and they don’t seem to be particularly available in the US. I understand the Atari ST was much more popular in Europe than in the US, and with most software written for PAL systems, it seems only the most diehard Atari followers in the US committed to the platform. So there aren’t a lot of systems going around.

Luckily, the original TOS was “improved” back in the late 1980s, prior to Atari’s demise, by a tinkerer who created MiNT (MiNT is Not TOS): a new Unix-like kernel with GPL utilities running on the Atari. Atari saw this was good and hired the developer, and that’s pretty much how MiNT officially became part of TOS, and how TOS (and MiNT) came to be open sourced before Atari died. Without this history, any chance of new life would be over, as only emulation would be possible.

What is MiNT? It’s a multitasking kernel that allows more than one TOS program to run at once, but it’s more than that – it’s a Unix-like kernel that allows many POSIX utilities to be compiled and run.

Eventually, MiNT begat FreeMiNT. AES, the Atari’s desktop environment, became XaAES, also open sourced and freely available on GitHub. It’s very retro, full of blends and stuff.

FreeMiNT seemed to fizzle out around 2004. By this time, Atari had been dead for nearly a decade, one of the key contributors had passed away, and there were no PowerPC accelerator cards. So beyond work to bring in more and more of the Unix userland, there were “improvements” to XaAES, such as moving it into kernel space (remember that most users were running 8 MHz systems, with a few on 30 MHz 68030s – this might seem crazy now, but to allow the then-modern look and feel, I’m betting the kernel was the right place for it).

The FreeMiNT project hasn’t seen a whole lot of maintenance since: mainly nips and tucks, work to support the FireBee’s ColdFire processor, various device drivers, and additional support for Aranym. I’m sure I’m understating it, but there’s a lot of technical debt in this code base, and I’m really hoping it will not bite me too hard.

Which TOS?

By this point, you’re thinking the job of selecting the project is done, but it’s not that simple. There are:

  • Original TOS, which I’m not sure anyone uses any more except original owners
  • EmuTOS – used by Hatari; basically an improved version of TOS with AES, which will run on Aranym and the FireBee
  • FreeMiNT – used on the original hardware, the FireBee and emulators, which means there are active users of this platform today
  • SpareMiNT – a FreeMiNT kernel plus a package manager
  • FireTOS for FreeMiNT – used on the FireBee
  • full FireTOS – used on the FireBee

I could go retro and try to port a smaller body of software: the original TOS / EmuTOS. Unfortunately, these are original sources, and they make a lot of suppositions about the underlying hardware.

Folks think that the Amiga, with more custom chips, is harder to emulate, but the reality is that AmigaOS is probably an easier port than Atari’s, as Atari used software to achieve much the same things the Amiga did in hardware. So in some ways, TOS knows and assumes a lot more about the platform than many operating systems do – things like timing loops, where I/O is located, and how it works.

I didn’t want to get into package management or trying to replicate an entire operating system, and I couldn’t do something with the FireBee as it’s too new, so basically I’m down to FreeMiNT. A lot of folks do interesting things with FreeMiNT: it runs on real hardware, on newer hardware like the FireBee, and on emulators like Hatari and Aranym.

Plus, FreeMiNT has code for newer CPUs and drivers for the FireBee and Aranym, and a more abstracted view of the hardware platform that, whilst not as clean as, say, NetBSD’s, is certainly a good start. Plus, there are “only” 45 assembly language files, but obviously that’s not everything required to boot on an entirely new platform.
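Out of curiosity, here’s a sketch of how a figure like that gets counted. Point SRC at a real FreeMiNT checkout; the tiny stand-in tree exists only so the snippet runs anywhere:

```shell
# Sketch: how a figure like "45 assembly files" can be obtained.
# SRC is an assumed path to a FreeMiNT checkout; the stand-in tree below
# is created only so this snippet runs anywhere.
SRC="${SRC:-./demo-src}"
if [ ! -d "$SRC" ]; then
    mkdir -p "$SRC/sys"
    touch "$SRC/sys/entry.S" "$SRC/sys/mmu.S" "$SRC/sys/kernel.c"
fi
count=$(find "$SRC" -name '*.[Ss]' | wc -l | tr -d ' ')
echo "$count assembly files under $SRC"
```

On a real tree, the same pipeline over the sources should land near that 45-file figure.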

False starts

Looking around, I found a great reference for building a new operating system on a Raspberry Pi. I found that my model supports a serial console with a special cable, which will be essential for getting the platform running. However, the process of building a kernel, writing it to flash, booting, and so on would be a slow nightmare.

The Raspberry Pi 3 has a UART for serial that you can re-enable in config.txt, and, more to the point, it can boot an operating system over the network. The workflow will be: compile a kernel, deploy it to a local directory, net boot, bring the system up over serial, and debug. It might be some time before I can remotely debug the kernel, as you can with Linux, and indeed this is not a goal for October.
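A sketch of the deploy half of that loop, where every path is an assumption (the Pi 3’s firmware fetches config.txt and kernel.img from a TFTP server when netbooting, and enable_uart=1 is needed because the Pi 3 gives its good UART to Bluetooth by default):

```shell
# Hypothetical deploy step of the compile / netboot / debug loop.
# BOOT_DIR stands in for the TFTP root the Pi 3 fetches its boot files from.
BOOT_DIR="${BOOT_DIR:-./tftpboot}"
mkdir -p "$BOOT_DIR"

# Re-enable the serial console; the Pi 3 hands the PL011 UART to Bluetooth
# unless config.txt says otherwise.
grep -qs '^enable_uart=1' "$BOOT_DIR/config.txt" \
    || echo 'enable_uart=1' >> "$BOOT_DIR/config.txt"

# Drop the freshly cross-compiled kernel where the firmware expects it.
if [ -f kernel.img ]; then
    cp kernel.img "$BOOT_DIR/kernel.img"
fi

# Then power-cycle the Pi and watch the boot over the serial cable, e.g.:
#   screen /dev/ttyUSB0 115200
```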

However, this does bring up a thorny issue. There are NO good emulators for the Raspberry Pi. I don’t know why this is. If I tackled a BeagleBone, I could emulate that. So realistically, the first boots won’t be in an emulator unless I write for the BeagleBone first and then port to the Raspberry Pi, and that seems … more difficult.

When I looked at the FreeMiNT source, I saw Minix and BSD bits and pieces, as well as GPL tools that provide much of the shell and userland. So I thought: instead of porting all that, why not start with something that already boots on the Raspberry Pi, and port XaAES and a FreeMiNT (TOS) library to it?


I spent some time looking at Minix. Initially, it seemed an excellent fit: FreeMiNT gets its /u file system support from Minix 2. Why not continue the obvious pathway the original author of FreeMiNT was on by bringing more Minix into FreeMiNT, bypassing a lot of the porting process, concentrating on creating a FreeMiNT and XaAES Minix server, and relying on Minix to provide the rest of the platform (memory, scheduling, pipes, file systems, network, etc.)?

It’s a fading platform now that Andy Tanenbaum has retired. It barely runs on BeagleBoards, no longer has SMP support, no longer runs X11, is 32-bit only on Intel processors, and does not support the Raspberry Pi. A few grad students had a summer project in 2015 or so and got it booting on a Raspberry Pi 2, but their work was never merged, and newer versions of Minix have shipped since, which means integrating those patches might be tough. I didn’t have a Raspberry Pi 2 to test that hypothesis, and it doesn’t emulate, so basically I was shut down.

Sadly, even if the FreeMiNT project wanted to re-platform onto something a lot more modern, I think Minix 3 is probably not the right choice for this project.



Minix itself has for a few years now been re-platforming itself to use the NetBSD userland, basically replacing the monolithic NetBSD kernel with a Minix microkernel and all that implies. This process is ongoing, so I thought: well, maybe I could use NetBSD to run a container or library containing FreeMiNT (TOS) + XaAES on top of X11.

Again, I hit roadblocks. The Raspberry Pi 2 and 3 are a tier 2 port for NetBSD 7.1, and there’s no support for the original Raspberry Pi as far as I can tell. Again, no emulation, so I couldn’t test it out. Lastly, NetBSD has chroots, but no containerization, domains, or Xen on this hardware. There is sailor, a container platform, but it’s not designed to run another OS.

At best, I could port the code to run on top of NetBSD, but even if that gave fast recompilation of old ST / TOS software, it wouldn’t look much like an Atari ST at that point.


As I have a Raspberry Pi 1, I thought about doing the same using FreeBSD. But even though FreeBSD has better support for the Raspberry Pi, emulation is still an issue, and containerization and domains are still missing, so it would look and feel like FreeBSD until you ran an Atari app – and that’s like running a KDE app on a GNOME desktop. I wanted to do better.


Linux was always going to be my cross-compiling choice, so I looked at emulation, or whether I could do something like run an Atari subsystem with XaAES running natively instead of X11. Realistically, even though this might be possible, it’s too much work for one month. Plus, my concern that it would feel like Linux that happens to run AES programs means I don’t think you’d really get a feel for the Atari – more a modern version of GEM for Linux, which, considering GEM once ran on PCs, isn’t really a win.

This brings me back to…

Bare metal

The obvious choice by now is to cross-compile on Linux and run on bare metal. I think this gives the best shot at FreeMiNT on ARM feeling like it really is FreeMiNT, and will hopefully bring old ST fans out of the woodwork and attract new folks to the ST platform for a modest investment in a Raspberry Pi – especially considering the scarcity and insane prices of real vintage hardware, coupled with considerable performance improvements over the original platforms, even the FireBee.

I will acquire a Raspberry Pi 3, because it can netboot, and a serial console cable, because for a while at least I will be on the road without an HDMI monitor to plug it into. Plus, although I have no plans for a 64-bit port of the operating system (that would be TOO far beyond my goals), at least the hardware is there for the future if anyone wants to have a shot.


My goal for October is to achieve cross-compilation of the FreeMiNT source – in particular the kernel (not necessarily XaAES, tools, or shared) – as a proof of concept, and to demonstrate that the FreeMiNT boot loader and boot process look and feel just like an original Atari ST / TT / Falcon once the Raspberry Pi’s boot loader hands over, starting with the memory test and boot sequence.
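To make “cross-compilation” concrete, here’s roughly what compiling a bare-metal entry stub looks like. The toolchain name, flags and file names are all assumptions – FreeMiNT’s real build system is m68k-centric and will need proper surgery before anything like this works:

```shell
# Sketch only: arm-none-eabi-gcc is a common bare-metal ARM toolchain; the
# flags mean "no hosted libc, no default start files, target the Pi 3's CPU
# in 32-bit mode". This is NOT FreeMiNT's actual build system.
CROSS="${CROSS:-arm-none-eabi-}"
if command -v "${CROSS}gcc" >/dev/null 2>&1 && [ -f kernel_entry.S ]; then
    "${CROSS}gcc" -ffreestanding -nostdlib -mcpu=cortex-a53 \
        -c kernel_entry.S -o kernel_entry.o
    # Then link with a custom script and strip the ELF down to the raw
    # image the Pi firmware loads:
    #   ${CROSS}ld -T linker.ld kernel_entry.o -o kernel.elf
    #   ${CROSS}objcopy -O binary kernel.elf kernel.img
    echo "built kernel_entry.o"
else
    echo "cross toolchain or sources not present; this is only a sketch"
fi
```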

This will require porting a 16/32-bit operating system with no memory protection or virtual memory to a 32-bit platform that has both, and that in hardware terms looks nothing like the original.

I bet there’s esoteric bit blasting going on that makes perfect sense on a CPU- and memory-constrained mid-80s platform trying to get Amiga-like performance out of extremely economical hardware. Remember, the Atari ST platform did not have DMA or a blitter until 1989, so most software doesn’t assume they exist. Although the Amiga was a technical powerhouse, I think the ST actually uses more of its overall platform, just because it has to.

Do not think for one second that I’ve chosen an easy project, or even a possible one. I will give it a shot and try to have fun along the way.

SLOC Directory SLOC-by-Language (Sorted)
214746 sys ansic=151941,asm=62672,sh=113,perl=20
84679 xaaes ansic=83626,asm=972,cs=66,sh=15
54107 tools ansic=52712,awk=786,asm=286,perl=234,sh=89
2400 shared ansic=1775,yacc=447,lex=178
512 doc ansic=233,asm=149,cpp=87,sh=43
0 fonts (none)
0 top_dir (none)

Totals grouped by language (dominant language first):
ansic: 290287 (81.44%)
asm: 64079 (17.98%)
awk: 786 (0.22%)
yacc: 447 (0.13%)
sh: 260 (0.07%)
perl: 254 (0.07%)
lex: 178 (0.05%)
cpp: 87 (0.02%)
cs: 66 (0.02%)
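The number that matters for this port is the assembly share: code that can’t simply be recompiled for ARM. A quick check of the arithmetic from the totals above:

```shell
# Recompute the total and the asm share from the per-language totals above.
awk 'BEGIN {
    total = 290287 + 64079 + 786 + 447 + 260 + 254 + 178 + 87 + 66
    printf "total=%d asm_share=%.2f%%\n", total, 100 * 64079 / total
}'
```

Roughly 64k lines – about 18% of ~356k – are m68k assembly that has to be rewritten, replaced, or stubbed out.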

So why do this?

I think it’ll be fun. I’ll learn ARM assembly language while porting an operating system to bare metal from scratch. I will re-learn hardware and device driver programming (I wrote a couple of Linux device drivers back in the 1990s – Matrox Millennium support for XFree86, and HP’s PPA print drivers for gs – both of which are still included in most Linux distros to this day). Lastly, if it does work, I hope I can convince the FreeMiNT project to include it, and hopefully it will be the start of a renaissance for the ST platform, even if it’s just within the retro / virtual / vintage community.


Standing for the OWASP Board

I have formally submitted my name for the 2014 Board Elections.
I am standing for:
  • Reforming the Board. We need to improve the independence, ethics and dispute resolution processes. I will be a root-and-branch reformer, encouraging the Board to make a couple of the positions available to truly independent directors. I will encourage all current and future Board members to undertake an Institute of Company Directors course to understand their duties and the way they integrate with the Foundation they are responsible for.
  • Inclusion. I want OWASP to adopt one of the many fabulous inclusion policies for our community and our conferences. Everywhere you look, such as Reddit or Slashdot, it’s all too easy for the odd bad apple to come in and ruin a working community or local group with unnecessary drama. We need to make sure our policies and standards are inclusive of all who want to participate, regardless of merit or standing; but this has an important caveat – not at any price. We need to make sure we are an open and safe community for all of humanity, those from outside the USA, regardless of gender, sexuality, religion, politics, ethnic background, and all the other ones I’ve missed.
  • Projects. We must broaden our church to be truly inclusive of modern web applications, web services, cloud, system, embedded and mobile. I propose the Board create a process for RedBook style short intensive workshops of 1-2 weeks where projects can ask for funding to move their project to completion or a much higher state of quality. This should be backed by industry participation, ensuring our core deliverables are actually useful to developers and architects. The days of funding anyone but the content creators must end. We need to be famous for our developer centric projects, and these projects should be immediately useful to developers and their teams.
  • Standards. We need to be the trusted advisor to PCI, NIST, and ISO. This is not an easy path to take, but if we are not at the table, we become irrelevant. Additionally, we have an opportunity to take our flagship standards products (Application Security Verification Standard and Proactive Controls) and plug a market hole for easily applicable advice to developers. Developers don’t read ISO 27034, they don’t read PCI DSS. They should be reading and using our materials.
  • Education. We need to create university-level courses (100, 200, 300) with the help of university educators. I propose that we ask a range of universities to come to AppSec USA and start formulating a curriculum which, once completed, will become the default standard university curriculum for application security.
I know there are excellent candidates already. I encourage you to ask them their positions on reforming the Board, Projects, Standards, and Education. With your vote, you get to choose the future of OWASP. I want to bring us back to our core mission of being relevant to developers, the literal standard bearer for all application developers, and the thought leader for the next generation of contributors and supporters.
I will expand on these points in future blog posts over the next week or so, as well as providing links to assist you in voting early.

So your Twitter has been hacked. Now what?

So I’m getting a lot of Twitter spam with links to install bad crap on my computer.

More than just occasionally, these DMs are sent by folks in the infosec field, who should know better than to click unknown links without taking precautions.

So what do you need to do?

Simple. Follow these basic NIST approved rules:

Contain – find out how many of your computers are infected. If you don’t know how to do this, assume they’re all suspect, and ask your family’s tech support. I know you all know the geek in the family, as it’s often me.

Eradicate – Clean up the mess. Sometimes you can just use anti-virus to clean it up; other times you need to take drastic action, such as a complete re-install. As I run a Mac household with a single Windows box (the wife’s), I’m moderately safe, as I have very good operational security skills. If you’re running Windows, it’s time for Windows 8 – or, if you don’t like Windows 8, Windows 7 with IE 10.

Recover – If you need to re-install, you had backups, right? Restore them. Get everything back the way you like it.

  • Use the latest operating system. Windows XP has six months left on the clock. Upgrade to Windows 7 or 8. MacOS X 10.8 is a good upgrade if you’re still stuck on an older version. There is no reason not to upgrade. On Linux or your favorite alternative OS, there is zero reason not to use the latest LTS or latest released version. I make sure I live within my home directory, and have a list of packages I like to install on every new Linux install, so I’m productive in Linux about 20-30 minutes after installation.
  • Patch all your systems with all of the latest patches. If you’re not good with this, enable automatic updates so it just happens for you automatically. You may need to reboot occasionally, so do so if your computer is asking you to do that. On Windows 8, it only takes 20 or so seconds. On MacOS X, it even remembers which apps and documents were open.
  • Use a safer browser. Use IE 10. Use the latest Firefox. Use the latest Chrome. Don’t use older browsers or you will get owned.
  • On a trusted device, preferably one that has been completely re-installed, it’s time to change ALL of your passwords as they are ALL compromised unless proven otherwise. I use a password manager. I like KeePass X, 1Password, and a few others. None of my accounts shares a password with any other account, and they’re all ridiculously strong. 
  • Protect your password manager. Make sure you have practiced backing up and restoring your password file. I’ve got it sprinkled around in a few trusted places so that I can recover my life if something bad was to happen to any single or even a few devices.
  • Backups. I know, right? It’s always fun until all your data and life is gone. Backup, backup, backup! There are great tools out there – Time Capsule for Mac, Rebit for Windows, rsync for Unix types.

Learn and improve. It’s important to make sure that your Twitter feed remains your Twitter feed and in fact, all of your other accounts, too.

I never use real data for security questions and answers, such as my mother’s maiden name (that’s a public record) or my birth date (which, like everyone else, I celebrate once per year, so you could work it out if you met me at the right time of the year). These are shared-knowledge questions, and an attacker can use them to bypass Twitter’s, Google’s and Facebook’s security settings. I either make the answer up or just insert a random value. For something low security, like a newspaper login, I don’t track these random values, as my password manager keeps track of the actual password. For high-value sites, I will record the random value for “What’s your favorite sports team?”. It’s always fun reading out 25 characters of gibberish to a call centre in a developing country.
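Minting such an answer is a one-liner; this sketch pulls 25 safe characters out of the kernel’s CSPRNG (any decent password generator does the same job):

```shell
# Generate a 25-character gibberish "security answer" from /dev/urandom.
# tr filters the raw byte stream down to alphanumerics; head takes 25 of them.
answer="$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 25)"
echo "$answer"
```

Store the result in your password manager next to the site’s actual password.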

Last word

I might make a detailed assessment of the DM spam I’m getting, but honestly, it’s so amateur hour I can’t really be bothered. There is no “advanced” persistent threat here – these guys are really “why try harder?” when folks don’t undertake even the most basic of self protection.

Lastly – “don’t click shit”. If you don’t know the person or the URL seems hinky, don’t click it.

That goes double for infosec pros. You know better, or you will just after you click the link in Incognito / private mode. Instead, why not fire up that vulnerable but isolated XP throw away VM with a MITM proxy and do it properly if you really insist on getting pwned. If you don’t have time for that, don’t click shit.

El Reg and the troubling case of climate denialism

This post is a last resort as I’ve had two comments rejected by the moderators at The Register, one of my favorite IT news websites.

Lewis Page is a regular contributor to The Register. For whatever reason, around 50% of his total output there is (willful mis-)reporting on various papers and research in climate science. Considering he’s not a climatologist (and, for what it’s worth, neither am I), it’s very frustrating to see the “science” category tag on these articles. It wouldn’t be so bad if they were marked Opinion or Editorial, and if he weren’t deliberately misrepresenting the observed facts, papers, research and scientists’ own words – but he gives no credence at all to anything that doesn’t fit his worldview.

Just to be utterly clear – among scientists trained in climatology, there is no doubt that we are in a rapidly changing world. For about 15–20 years the question hasn’t been “if” there’s climate change, but “what does it mean to be on this planet in 10, 20, 50, 100 years”. It’s up to us and the politicians to decide what to do about it. Even if climate change were not as bad as predicted (actually, it’s worse than predicted), the actions we must take now are good for us and the planet:

  • less air pollution == longer, healthier lives
  • less water pollution == longer, healthier lives
  • lower energy bills == more money for other things
  • less wasteful consumption of a finite non-renewable resource == richer, more economically healthy future and longer production of things we can’t economically make without oil, like certain materials and medicines and so on

There is literally no downside to acting to curb emissions, but there’s a lot on the line if we don’t do something. Personally, I don’t think an ETS is the correct path: it’s a cheap way for the government to earn money and be seen to be doing something – anything at all – and it’s a derivatives market, which has a colorful history of abuse (such as in Germany, where too many issued credits undermined the market, and California, where traders essentially created artificial spikes in price to maximise profits and caused artificial blackouts). Despite this, we must move on to the next phase of our industrial planet.

I call on the Register to provide the scientific consensus view. Here’s my rejected comment in full.

It’s my long and fervent wish that the Register would stop publishing these opinion pieces, as I rather enjoy the “call a spade a f$&#ing spade” approach to almost all the other articles, reviews and IT news, which is rather let down by Mr Page’s long standing and regular missives on this topic.

In my opinion, these articles are not “science”, nor are they reasonable journalism, where the authors of the paper might be asked for a comment or an interview to get their side first hand. Mr Page can still have his opinion, but at least pay us the respect of writing about the researchers, paper or presentation in an unbiased way to allow us to compare Mr Page’s opinion with what they really wrote, demonstrated, observed or said.

At least pay us the respect of providing balanced coverage either by providing mainstream climate science coverage in the science category along with Mr Page’s opinion pieces and coverage, or by adding in right of reply, interviews and accurate coverage of what was actually written in the papers and research.


I have taken the step of finally splitting the cut-n-paste import from my blog at Advogato into the days they actually occurred. All that content was here previously, but in some cases bunched together over many thousands of lines in single massive multi-month postings.

Some early permalinks are gone, but that’s okay, you can search for the content. The content I’m talking about dates back more than ten years.

Installing Fedora 18 (RTM) to VMWare Fusion 5 or VMWare Workstation 9

I always live in hope that just one day, the folks over at Fedora will actually have a pain free VMWare installation. Not to be. Here’s how to do it with the minimal gnashing of teeth.

Bugs that get you before anything else

On VMWare Fusion 5, currently Fedora 18 x86_64 Live DVD’s graphical installer will boot and then gets stuck at a blue GUI screen if you have 3D acceleration turned on (which is the default if you choose Linux / Fedora 64 bit).

  • Virtual Machine -> Settings -> Display -> disable 3D acceleration.

We’ll come back to this after the installation of VMWare Tools.

Installing Fedora 18 in VMWare Fusion 5 / VMWare Workstation 9

The installation is pretty straight forward … as long as you can see it.

The only non-default choice I’d like you to change is to set your standard user up to be in the administrators group (it’s a checkbox during installation). Being in the administrators group allows sudo to run. If you don’t want to do this, drop sudo from the beginning of all of the commands below, and use “su -” to get a root shell instead. 

The new graphical installer still has a few bugs:

  • Non-fatal – On the text error message screen (Control-Alt-F2) there’s an error message from grub2 (still!): file not found /boot/grub2/locale/en.mo.gz. This will not prevent installation, so just ignore it for now (as the Fedora folks have for a couple of releases!). Go back to the live desktop by pressing Control-Alt-F1
  • PITA – Try not to move the installer window offscreen, as it’s difficult to finish the installation if it’s even a little off screen. If you get stuck, press Tab until you hit the “Next” button – or just reboot and start again
Update Fedora 18

Once you have Fedora installed, login and open a terminal window (Activities -> type in “Terminal”)

sudo yum update
sudo reboot
sudo yum install kernel-devel kernel-headers gcc make
sudo reboot

Fix missing kernel headers

At least for now, VMware Tools 9.2.2 build-893683 will moan about a path not found error for the kernel headers. Let’s go ahead and fix that for you:

sudo cp /usr/include/linux/version.h /lib/modules/`uname -r`/build/include/linux/

NB: The backtick (`) executes the command “uname -r” to make the above work no matter what your kernel version is.

NB: Some highly ranked and well meaning instructions want you to install the x86_64 or PAE versions of kernel devel or kernel headers when trying to locate the correct header files. This is not necessary for the x86_64 kernel on Fedora 18, which I am assuming you’re using as nearly everything released by AMD or Intel for the last six years is 64 bit capable. Those instructions might be relevant to your interests if you are using the 32 bit i686 version or PAE version of Fedora 18.

Mount VMWare Tools

Make sure you have the latest updates installed in VMWare before proceeding!

  • Virtual Machine -> Install VMWare Tools

Fedora 18 mounts removable media in a per-user specific location (/run/media/<username>/<volume name>), so you need to know your username and the volume name

Build VMWare Tools

Click on Activities, and type Terminal

tar zxf /run/media/`whoami`/VMware\ Tools/VMw*.tar.gz
cd vmware-tools-distrib
sudo ./vmware-install.pl

Make sure everything compiled okay, and if so, restart:

sudo reboot

NB: The backtick (`) executes the command “whoami” to make the above work no matter what your username is.

No 3D Acceleration oh noes!1!! Install Cinnamon or Mate

Now, all the normal VMware Tools features will work. Unfortunately, after all the faffing about, I couldn’t get 3D acceleration working. I ended up installing something a bit lighter than Gnome 3.6, which requires hardware 3D acceleration.

  • Activities -> Software -> Packages -> Cinnamon for a more modern desktop appearance or 
  • Activities -> Software -> Packages -> MATE for old school Gnome 2 desktop appearance
  • Apply 
  • Logout 
  • From the session pull down, change across to Cinnamon or Mate and log back in
When VMware updates Tools to support Fedora 18 (or vice versa), I’d still suggest Cinnamon over Gnome 3.6. Gnome 3.6 sucks way less than earlier Gnome 3.x releases, but that’s no great compliment. YMMV and you may really like Gnome 3.6, but without 3D support, it’s going to be painful.

PTV iPhone app – worst public transport app ever, or just pure evil?

I take the train between Marshall and Southern Cross Station, a terminus station with 14 or 15 platforms and hundreds of V/Line country, suburban and bus services daily. I had an app that worked (the old MetLink app). That wasn’t stellar, but it worked well enough that I didn’t need to get a paper timetable.

So imagine my continuing frustration that the most basic of use cases just doesn’t work in the complete re-write of the new app:

I cannot find my station when standing on the station platform (!) using location search, or by searching for the station in the default “Trains” mode the app launches into from the App Store.

It cannot find the terminus of all V/Line services – Southern Cross Station. I’m serious. In “Train” mode, you cannot search for V/Line services or stations. In “V/Line” mode, Southern Cross is not even a station (!!). You cannot find it by clicking on “Find my location” icon whilst in the station (!), and you cannot choose it from the map, and you cannot search for it. Epic fail of all epic fails. It’s like the PTV app designers chose not to walk the 40 m from their office block to the biggest and busiest station in all of Victoria and test it out.

Modality. It’s nearly impossible to work out that you can change the mode of transport you’re looking up by clicking the word “Trains” at the bottom of the screen. I am catching a “train”, but not the default type of “train”. Who knew? The thought that there are multiple types of trains obviously never occurred to PTV’s UX designers. There’s no button shape or indicator, it’s just in a button bar by itself, which usually means that there are no other choices.

Honestly, PTV need to test their apps:

  • You should be able to find all the services within 500 m of where you are standing. Just list them all and let the filter function narrow things down in one or two keytaps.
  • You should be able to find ANY station or service or transport mode via text search. It’s just not that hard. There should be no difference between a regional bus, a metropolitan tram, an intercity V/Line service, or a station or bus stop. List ’em all, and let the filter work its magic in a few keystrokes.
  • Get rid of modes. I don’t think of modes and I use at least two every day. Free up that wasted screen real estate and replace it with a search function that works across all modes, and services.
  • You should be able to view a line’s entire timetable with no more than two or three clicks. Timetables -> scroll to the timetable or tap in enough to narrow things down -> voila. It’s not rocket science. Allow it to be a favorite.
  • Planning a multi-mode trip is not rocket science. This is just not possible with the current PTV app.
  • The old app had notifications for the services / lines you were interested in. Please bring it back. This feature may actually be in the PTV app – I simply don’t know because I have not been able to find my station or the station at which I get off.

This app is terrible. It must be withdrawn.

Resurrecting the wife’s laptop – Asus hates you and you and you

At Christmas last year, I bought a new laptop for the wife, an Asus K52DR with 4 GB of RAM and 500 GB hard drive. I quote from then:

[…Asus should…] supply a real copy of Windows 7 installation media, so you can clean install the OS easily instead of wasting hours and hours and hours getting rid of the circusware. Asking folks to sit there for 2.5 hours to create 45 cents worth of DVDs is morally repugnant and evil.

Although I stand behind every word I said above, I’m begrudgingly glad I spent the extra 2.5 hours creating those DVDs as I’m restoring her computer to factory default after she killed the previous HD by cooking it in the bedding. Obviously, not Asus’ fault, but what happens after replacing the HD is most certainly Asus’ fault. This Asus will be our last PC – my life is just too precious to donate to absurd and evil corporate practices.

When I bought the Asus, it took me about three days to get the PC to a default-ish Windows installation, Office 2010, and iTunes with just enough drivers to run “advanced” technical devices like the display or the wireless network. Don’t get me started on the number of reboots or gigabytes of patches required. Copying Tanya’s data, migrating her PST and recovering her calendar was simple by comparison.

I am dreading wasting yet another two to most likely three days of my personal life YET AGAIN to weed out all the circusware from the factory default build. Asus must start providing a fast circusware free method of complete restoration like Apple do. The time I’m going to spend over the next few nights, and probably the next weekend, is like a working week away from my family. Completely unacceptable.

I tried restoring the repair partition I dd’d off, but due to the new 750 GB drive having different sized clusters and alignment than the old 500 GB drive, I struggled to create a bootable recovery partition without spending yet more time than it would take to restore using the DVDs. So I’m using the restore DVDs.
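For the curious, the original capture was done with plain dd. The sketch below clones a small file-backed image rather than a real device (on the real machine the input would have been something like the recovery partition’s device node, e.g. a hypothetical /dev/sda1), so it’s safe to run:

```shell
# Illustrative dd clone of a "partition" using a 1 MB file-backed image;
# a real recovery partition would use a device node as the input.
dd if=/dev/zero of=/tmp/fake_part.img bs=1M count=1 2>/dev/null
dd if=/tmp/fake_part.img of=/tmp/recovery.img bs=1M 2>/dev/null
# Verify the clone is byte-for-byte identical to the source.
cmp /tmp/fake_part.img /tmp/recovery.img && echo "images identical"
```

Note that dd copies bytes faithfully but knows nothing about partition alignment or cluster sizes, which is exactly why the restored image wouldn’t boot on the differently laid-out 750 GB drive.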

I still don’t have a Time Machine work-a-like that can back up Tanya’s data. This is a serious issue as hers is the most likely computer to die. […]

And die it did. I tried Windows 7 Backup on and off for months after buying a new 2TB external HD, but, as per usual for a Microsoft product, it doesn’t actually work. Too late, I found Rebit, which is just like Time Machine … but expensive. I’ll be trying that after restoring Tanya’s data. Luckily, I was able to get most if not all of her data off under Linux, all the while the HD was making very high pitched death screams. It’s dead now – all the spare sectors are used up and the computer wedges hard if you try to do anything with it in read/write mode.

My newish MacBook Air 11.6″ is significantly faster and cheaper than this Asus, and more so every time I have to fix it up. Once I had recovered Tanya’s data to my 2TB dumping ground on my Mac, she was up and running with one of our AppleTV’s in about two minutes.

Tanya’s next computer will be a Mac when this one dies. I will not tolerate the loss of any more of my life to Asus’ insistence on circusware in the default build and cheaping out by not providing real installation media, or to Microsoft’s insistence on a recovery CD and crappy end user experience.

I stand by my recommendation:

Score so far: 2/5. Do not recommend. PCs are only cheaper if your time is worthless. I just don’t get it.


I’m going to reduce the rating to 1/5, and the 1 is only due to the surprisingly resilient Seagate 500 GB drive that survived just long enough to get nearly all of Tanya’s data off it.

RIP Meebles 1997-2011. Best cat ever

Some blog entries are easy to write. Not this one.

Meebles is no more. In the end, it was peaceful, but his last days must have been hell. At least he had chicken (and lots of it) last night.

I first met Meebles in early 1998 when I was looking for a companion to Greebo. I went to the Lost Dogs’ Home, and picked the most feisty cat there. After 14 years, I now know why his original slaves put him up for adoption, but I didn’t mind the random attacks, the aloof distance he preferred, or his general bat craziness. It was part of his charm, and it’s the reason I picked him. He had three days to go before what I had to do today would have been done to him as a six month old cat back then.

All in all, I got the best of the bargain for all 14 years. He was steadfast in his loyalty. You had to earn that loyalty, something dog owners don’t and never will understand, but once you had it, he was a part of your life.


Meebles watching over me

I miss him already. Catchya round buddy.

Time for something new

As many of you have probably noticed by now, my larger than life frame is not at AusCERT 2011. This is a shame, as it sounds like one of the best AusCERTs in the history of AusCERT. There are a couple of reasons for my absence – flu and the strange case of the disappearing job.

My services at Pure Hacking are no longer required, and so I need to get on with the job of getting on with the next phase of my life – and that means finding a great job that allows everyone to win.

There are a couple of options on the table as I write this. But the most intriguing to me right now is to be the advanced gun for hire for consultancies with schedule overload. If you think your consultancy could use me in that fashion even a few times a year, I definitely want to hear from you. If I can make alliances with even a few of you, this could work for us all. This would allow me to work for anyone in the world from my lab here, and would allow consultancies all over the world to plug their scheduling nightmare with one of the best web app sec minds* out there period.

I have a strong preference for remote telecommuting jobs as I live in a regional city. This doesn’t mean that a full time job in Melbourne is out of the question, but I will be upfront about my need for flexibility (i.e. allowing me to work on the train and a day a week at home), or full time remote working from Geelong. It being 2011, full time or partial telecommuting should not be a difficult decision.

I know I have a small but loyal readership in this blog, so if you know someone who knows someone, I’m available. I only have a short window before I have to make a decision, so if you’re able to pick me up, I definitely want to hear from you – vanderaj @ greebo . net.

* Just in case you didn’t know, I was the Project Leader and primary author of the OWASP Developer Guide 2.0, OWASP Top 10 2007 (the one in PCI DSS), and ESAPI for PHP, and I helped set the exam for the SANS GSSP (Java).