Blog

  • Advogato – 26 March 2000

    (This is a re-post from Advogato, which I no longer use.)

    26 Mar 2000 »

    Decided to have a weekend to myself for once. Had a big Friday night, involving much Guinness. Had breakfast early on Saturday for once, and then drove to Bondi for lunch with Dan and Ange. Bondi was as superficial as ever and had the customary annoying Mor(m)on.
    Drove Dan and Ange to Ange’s place, and since it was part of the way to Wollongong, drove to Wollongong. Didn’t get the fang out of me there, so continued on. Stopped eventually in Batemans Bay, some 200 km south of Sydney and 150 km east of Canberra. Had a cheap Vietnamese meal there and saw Hanging Up. Sad movie – take a tissue or two.

    The alternatives were to drive back the way I came, or via Canberra. Went via Canberra. Excellent fang. I think it’s out of me now. Drove around inner city Canberra for twenty minutes trying to find a petrol station (Echos have to be filled occasionally, and mine was approaching 500 km). None really, so pottered off to the Hume highway. Got petrol at Goulburn, 630 km from my starting point, whilst still having about 10 litres left (about 140 km to spare). I love fuel efficiency.

    Drove home from Goulburn, eventually crawling into my driveway at 3 am to share my bed with two damp and hungry little felines.

    Sunday was a complete waste. I was going to spend some time working with Reiserfs and my secret project for it on my Alpha, but since I slept in until 3 pm old time (2 pm non-DST time), I decided to rip through Cryptonomicon instead. Good choice. Tried seeing The Insider, but it’s not on any more. Bad. Went to Burger King to make up for the loss.

  • SAGE Advice – First Editorial

    [ I took over editing the SAGE-AU journal for a while ]

    Editorial


    Well, this is my first go at being SAGE Advice’s editor, so hopefully I won’t screw up too badly. Thanks to Donna Ashelford for all her efforts these past couple of years, and to Lee Monette, who continues assembling the copy and doing all the hard yards in pre-press.

    Correction Time


    You may be shocked, but occasionally I make mistakes. However, in striving for perfection, … oh yes, I made a mistake. Simon Hancock of Praxa wrote to me after the publication of the last newsletter and pointed out a mistake I’d made in my previous article in SAGE Advice. Instead of each regional office having three class A’s, as stated, it was one class B subnetted from the 10.0.0.0/8 CIDR network. Just the facts, ma’am.


    W. Richard Stevens, 1951–1999


    William Richard Stevens died on September 1, 1999. He wrote or co-authored seven books, three RFCs, and other papers, including the seminal “Unix Network Programming” and of course the “TCP/IP Illustrated” series. He contributed immensely to the ‘net, bootstrapping knowledge of arcane networking to computing professionals everywhere. It’s sad to see the passing of yet another “great” in the Internet world, with Jon Postel passing on last year.


    However, it’s also especially sad to see the deterioration of social grace in online communities, such as Slashdot. It might surprise you to know that Slashdot used to be a firm favorite of mine. But a substantial minority of the Slashdot “community” reacted to Richard Stevens’ death in a way that just shocked me. I think I’ll quote Tom Christiansen, a noted Perl hacker:


    Good bye, Rich. Good Riddance, Slashdot


    In my nearly two decades of habitation upon the Arpanet and its descendants, never before have I ever had the misfortune to witness so distressing a thread of messages as these. This unspeakably sickening invective against so kind a man, a man whom most of you never even knew, can have no other effect than to boggle the mind, wound the heart, and taint the soul with a nauseous stench.


    Rich was always a gentleman: pleasant, helpful, and courteous. Despite his fame and his skill, no prima donna was he. He was never bitter nor spiteful, never arrogant nor condescending. His humor and his insights inspired many of us, and not merely in our programming.


    In the last few years that I came to know Rich a bit better as we shared a meal at random conferences scattered about the globe, I was always impressed by his unrepentantly positive attitude. Whatever the tale he told, whether a personal one relating to his children or his delightful rediscovery of the piano, a professional one related to programming and computers, or simply some incidental anecdote, that tale he presented with a childlike delight and glee. Rich displayed a perpetually positive attitude rare in a man even half his age. He was uplifting merely to be around.


    Never was I so honored as on that day when Rich lamented not bringing his Perl Cookbook with him so he could get my autograph on it. I was deeply touched and completely surprised. Rich is acknowledged in the credits for his indirect help in preparing that book from our discussions of troff and systems programming matters. Despite his good taste and obvious skill, he had been for some time using Perl for various daily jobs. It’s true that Rich had minor issues with Perl’s cleanliness, but these were subsumed by the practical concerns of simply getting a job done easily and quickly. In short, it worked and he used it, and he was thankful it saved him time. The very things that the HTML crowd find hardest with Perl — its Unix roots and proclivities — Rich found immediately familiar and obvious. I am proud that I had ever so small a part in helping out a man who had tremendously helped me and thousands of others.


    It is with nothing less than complete shock and surpassing shame that I have read here what so many insensitive malcontents have cruelly and unjustly scrawled. Doubtless these are the same twisted perverts who torture kittens and kick pregnant mothers, a sickness upon this medium and this planet. I hope these sociopaths find help soon, or at least remove themselves from the company of men and the gene pool.


    Forget not this one inescapable fact: that where Rich has gone, so too inexorably goes each and every one of you walking shadows, and tragically sooner than you dare fathom. May you be remembered in the same measure as have you remembered those who preceded you down that lonesome path to dusty death.


    It does not take a particularly compassionate and sensitive person to be sickened and hurt by these inexpressibly horrible postings. It takes nothing but a decent and caring human being, the sort of which we seem to have so few of these days–and today, to our loss, one fewer.


    –tom


    Finally


    I’m happy to receive feedback on how I’m going, or just to have a whinge at the world in general. Please mail me at editor@sage-au.org.au. The good stuff will be printed in the next SAGE Advice.


    On a personal note, I truly wish you, your friends and family a merry Christmas and a happy new year. For the cynics among you, this can be read as: I hope you can stand your family, and have a good time despite the impending Christmas/New Year’s consumer splurge^W^W holidays. Happy shopping at the sales, and try not to work on New Year’s Eve unless you’re doing it to avoid boring parties elsewhere. May all your toy dreams come true, and may there be a big bag of Lego MindStorms waiting in your Christmas stocking*.


    Happy holidays, stay safe and see you next year!

    Andrew van der Stock

    * Links the Office Cat tells me that it is traditional not to be so cynical about the Christmas holidays. I’ll stop being cynical when the churches are full, shops aren’t open 24 hours a day the week prior to Christmas Eve, and pre-Christmas promotions aren’t available in August.


  • Tweaking Your Infrastructure

    [ A copy of a column I wrote for SAGE-AU newsletter ]

    This month, I’m going to talk about getting the most out of the hardware and software that your organisation has paid for. Even if you don’t use NT, you may find the first bit of my column useful: it contains a few suggestions for performance tweaking networks, and this stuff works regardless of platform.

    Many of you will have noticed that Microsoft funded a Mindcraft white paper purporting to show that NT is considerably faster than Linux on the same hardware for file (2.5x) and web (3.7x) serving purposes. I’m not going to defend that white paper, as I feel that it is massively flawed.

    However, I am going to point out that Mindcraft have not only done a favour to the Linux, Apache and Samba developer communities (by pointing out easily rectified flaws), they’ve done a massive favour to NT administrators as well. How? By documenting in the one place exactly what settings you need to tune NT for the fastest possible speed. This information is spread over TechNet and the resource kits, and to a certain extent third party publications like Windows NT Magazine.


    Infrastructure First

    The tweaks contained in the Mindcraft document will not help at all if you don’t have the appropriate infrastructure to support your network. There’s no point in getting an extra 5% from an individual server/client combo if your network or network services suck. In many cases, paying attention to your network will get you more of a boost than any amount of performance tweaking on your servers.

    First off, try to determine overall network utilisation, particularly in server segments. 0-10% is good, 10-30% is average, 30-72% requires some thought about partitioning traffic or switching, and over 72% requires help from a professional.

    Fix the low-level problems first. Use a network sniffer to see if any of your subnets are suffering from excessive jabber and other technical faults. Don’t daisy chain hubs. Don’t exceed Ethernet cable distances (approximately 100 m for cat 5 10/100 cable). Try to avoid putting more than 24 nodes into the same collision domain if you are still using dumb hubs. Use good punchdown patch panels (like Krone) and shortish good quality patch leads.

    Subnetting is the forgotten friend of network administrators. Don’t be profligate with subnetting. Just because your switch vendor says that you can have a 65,000 node subnet by using switching doesn’t mean you should. A single subnet with this many nodes would have massive broadcast and multicast packet storms, regardless of whether a switch was used or not. At a site I have worked at, they went through the entire 10.0.0.0/8 network just because they had a System™. When they wanted to connect to the outside world, they suddenly found that Telstra had already used part of this address space, and that NAT would be necessary to connect to the Internet via the Telstra managed firewall, limiting their options. Be realistic about expectations for growth. If you’re setting up an outlying office and it has 3 workstations, you don’t need to give the outlying office a 65,000 node network (or three, as this site did: one network for the router, one network for the three workstations, and one network for the printer. They wasted approximately 196,600 IP addresses at each of their 54 regional offices). By being parsimonious with subnetting, you can really reduce traffic and make your network management that much easier. You might need a few more router interfaces, but routers are getting cheaper, particularly routers with multiple 10/100 Mb/s interfaces.
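
    To make that concrete, here’s a quick worked example with made-up numbers: a three workstation office (plus a printer and a router interface) fits in a /29, and even allowing generous growth a /26 covers it. A 10.54.3.0/26 subnet gives you 62 usable host addresses; handing the same office a /16 burns 65,534. You can always route another subnet in later if the office actually grows.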

    Check that you don’t have excessive broadcasts. Configured properly, NetBIOS over TCP/IP (which can be a little noisy when badly configured) should account for no more than 5% of all packets as broadcasts. If you see more than 5%, you need to look at your WINS configuration. Don’t have WINS and have more than one subnet? Shame on you! Make sure that you set the DHCP global scope to specify WINS node type 0x8, which is Hybrid. Hybrid almost completely avoids the broadcast overhead and is far preferable to M and P types. As a bonus, hybrid is almost always faster than no WINS at all.
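
    If you’re wondering where that setting lives, it’s a DHCP scope option. A minimal sketch (the option numbers come from the standard DHCP options; check them against your server): in DHCP Manager, set global option 044 (WINS/NBNS Servers) to your WINS server addresses, and option 046 (WINS/NBT Node Type) to 0x8. On a statically configured client, the equivalent is the NodeType value in the registry:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NetBT\Parameters
        NodeType = REG_DWORD 0x8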

    If you are managing more than 3 nodes, you would be crazy not to use DHCP. It’s easy to configure, and with the MS DNS server you get zero maintenance DNS reverse lookups. To make DHCP work on internetworks, you need RFC1542 compliant routers. For those of you with really big networks, DHCP prior to service pack 4 scaled to about 1200 DHCP scopes per DHCP server; SP4 fixed that. I would suggest that each physical site have two DHCP servers. Configure DHCP server A to look after 50% of the subnets, and reserve 75% of each subnet. Configure DHCP server B to look after the other 50% of the subnets, similarly reserving 75% of each subnet. Then configure DHCP for fault tolerance: on server A, configure 25% of the space from server B’s subnets, and vice versa. This way, if one of the servers goes down (either for maintenance or for more sinister reasons), the few clients that need a new lease during that downtime can still be serviced. I find that 3 day leases are a good compromise for most networks. 7 days or longer is too long – if you need to renumber your network, seven days won’t cut the mustard – and 1 day or less will cause a massive DHCP packet barrage around 9 AM every workday. Service Pack 4’s DHCP and WINS servers cope much better than prior NT releases with these transient loads. Get there as soon as possible if you haven’t already.
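
    As a sketch of that 75/25 arrangement for a single subnet (all addresses made up): say subnet 10.1.1.0/24 has a usable pool of 10.1.1.11 through 10.1.1.250. Server A leases 10.1.1.11-10.1.1.190 (75%, or 180 addresses) and excludes the rest; server B defines the same scope but leases only 10.1.1.191-10.1.1.250 (25%, or 60 addresses). The two ranges never overlap, so there are no conflicts, and either server alone can keep clients limping along while you fix the other.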

    If you use Exchange, always make sure both your servers and your clients have properly configured DNS. Not having DNS will slow Exchange down, and in many cases make it unusable – for example, the X.400 connector over TCP/IP will often fail to work. Since Windows 2000’s directory is based upon DDNS, you should consider some form of DNS server today if you don’t already have DNS installed at your site.

    Do a traceroute from a random sampling of clients, and try to ensure that there are no more than two router hops to servers used commonly by users. For example, if user000 through user999 at site A require access to server349, make sure that all clients can talk to the server through no more than two routers, and preferably just one or none. The latency should be no more than 15 ms to provide users with seemingly fast response to their actions, particularly if they use Active Desktop or IE 4.0 or later. The pipe to their servers should be faster than 2 Mb/s to ensure that they don’t bitch and moan at the server being slow. If you can’t provide this sort of speed locally, figure out some way to get a file synchroniser or backup program to look after a local file server for them.
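
    Checking both figures takes two commands from any client (server349 here is just the example name from above):

    tracert server349
    ping server349

    Tracert shows each router hop on the way; ping reports the round trip time in milliseconds, which you can compare against that 15 ms mark.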

    100 Mb/s Ethernet can be a minefield. Check that you are actually seeing an improvement in performance using the “Auto” setting when using 100 Mb/s. I’ve seen servers set to 100 Mb/s Full Duplex crawl – 39 minutes to copy a 72 MB file instead of 15-20 seconds. I’ve found that by falling back to half-duplex, the speed is almost the same as full duplex – particularly with large packets. If neither duplex setting helps, fall back to 10 Mb/s and again test for maximum speed at half and full duplex. Ensure that all devices on a switch or hub have the same duplex setting.
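
    A crude but effective test, assuming you have a share and a large file to spare (both names below are placeholders): time the copy with your watch at each speed and duplex combination, and keep whichever one wins.

    net use x: \\server349\testshare
    copy c:\temp\bigfile.dat x:\
    net use x: /del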

    Finally, it’s important that servers are able to talk to each other via as big a pipe as you can afford. By doing careful analysis of your network, you’ll quickly come to realise that a $2000 8 port switch, or $4000 24 port switch will massively boost inter-server bandwidth, reduce collisions and reduce router load. With Unix servers, unless they’re heavily NFS cross-mounted, there’s not much point to putting servers on a switch, but putting NT servers on a switch can really help. Some common BackOffice components and poorly designed COM/COM+ objects call the security provider all the time, meaning that a good percentage of your servers will be sending a constant stream of packets to your domain controllers.

    Win32 programs, almost without exception, print to the GDI print model. The NT print processor then ensures that the printer driver is capable of delivering results close to the original intentions using the native PDL, and if not, it fakes it by rasterizing the necessary areas. If you have non-PostScript printers, you’d be surprised at the size of even modest business documents. PostScript is one of the highest forms of PDL, and thus has the smallest need to rasterize a GDI call. In the field, the difference can be quite staggering: a simple PowerPoint job that spools as 100 KB of PostScript can take 5 MB on a PCL printer. If you care about network bandwidth, do not buy non-PostScript printers. If you have printers that can do both PCL and PostScript, you can do your network a favour by choosing the PostScript version of the driver. Your users will get more options, all jobs will print quicker, and the network will not be bogged down so often. It’s extremely worthwhile to place printers and the printer server on the same switch. This will do more for speeding your network up than almost every other trick here, as printer traffic (which can get quite intense) will not traverse user segments, switches and routers.

    Last printing tip: Don’t use any form of AppleTalk print server. They all suck and they all at least double your network traffic. If you need Macs to print, buy EtherTalk capable printers, such as the HP 4000N, and let the printer do the talking.


    Tweaks

    Now that we’ve saved a good percentage of the network traffic, it’s time to look at getting the most out of the hardware you’ve already purchased.

    Windows NT and all the BackOffice components supply a rich set of performance counters, which make it easy to work out what your servers are doing and if they’re coping. It’s a good idea to take PerfMon logs on every major counter for a week or so and work out baseline performance for your servers. Then about once every three months or during known busy times, do it again and compare against previous baselines. By looking at the results of the comparisons, you can determine if your servers are coping with the load, or if they need upgrading. There are heaps of performance tuning references, including the resource kits from Microsoft, so I’m not going to go into much detail here. These baselines make it easy to justify purchasing upgrades or new servers. Without them, you may as well wet your finger, stick it in the air and see which way the wind is blowing.
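
    If you want a starting point for those logs, here’s a minimal counter set I’d suggest (names as they appear in Performance Monitor): Processor: % Processor Time, Memory: Pages/sec, PhysicalDisk: Avg. Disk Queue Length, and Server: Bytes Total/sec. Note that the disk counters return zeroes until you enable them, which costs a reboot:

    diskperf -y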


    Conclusion

    It’s important to set up your environment to best cope with the operating system you’re going to use most. If that happens to be Windows NT, then a few small tweaks and some generic good advice will make the difference between a marginal existence and a trouble free, fast environment.

  • Windows NT 4.0 Manageability

    [ Copies of some of my older work for a SAGE-AU column ]

    A short column this month as I’m pretty pressed for time and working against a tight deadline, which I’ve definitely abused this time (sorry Donna!). This month, I’ll be dealing with the remote management blues. You’ll need a copy of the NT Server Resource Kit, 3rd Edition, and the NT Workstation Resource Kit, and if you’d like to get the full screen stuff happening, I suggest buying one of VNC, Control It! (nee pcAnywhere), or Timbuktu.

    Step Zero

    The biggest mistake I see with many naïve Windows NT installations is that the administrator installs every service and its dog on the off chance that it’ll be needed later. Don’t do this – you can always install it later. As with all production systems, install and run only those services that you actually require. By installing less, there’s more RAM for the real application or service to use, NT loads faster, and there are probably fewer bugs or holes to exploit.

    After any installation of Windows NT, it’s important to sort out any warnings or errors in the Event Log. These warnings are harbingers of doom for your social life if you let them lie.

    Another easy performance boost, and a nice little trick to know, is to set the page file to twice physical RAM for both the minimum and maximum settings just after installation. This stops Windows NT resizing the page file on the fly, which under stress can cause a completely unresponsive server. By setting the page file just after installation, you get a fairly contiguous page file, which can help performance.
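
    For example, a server with 128 MB of RAM gets a page file with both minimum and maximum set to 256 MB. Set it in Control Panel, System, Performance, Virtual Memory, straight after installation while the disk is still largely empty.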

    Event Viewer

    The first stop when diagnosing any problem with Windows NT is the Event Viewer. If you take away only one piece of information from this article, let it be this: the key to successful problem resolution is the Event Viewer.

    If the server has blue screened, take good note of the exception and which driver or application killed itself, and reboot. Then hit the Event Viewer to see if there was something in the lead up to the blue screen that might have triggered the BSOD. Since BSODs are rare (I’ve seen fewer than five in the last twelve months), most times the only entrails of the problem will be in the Event Viewer. Check all three logs, and see if you can replicate the problem. At least you have some event codes to plug into TechNet to see what turns up.

    Always set the log policy that suits your organization. If you’re not interested in the log contents, bump all three logs to 4096 KB, and overwrite as necessary. Leaving the logs at the default settings is asking for sudden unexplained application failure, as NT will simply stop the logs from being used. Always check critical servers every morning, and other servers once per week. If that means you’d have time for nothing but checking logs, it’s time for a log management helper, like NetIQ or similar.

    Freebies

    Don’t ignore the command line processor. The command shell has hidden talents, such as command history, scrollable windows and expanded batch functionality, including conditional operations (&&), command grouping and serialization. Try using the function keys in the command shell. F1 is a character by character version of ye olde F3. F2 allows you to copy part of the command history to a specific character (sort of like yank line with a search in vi). F3 displays the last command. F4 allows you to delete from the insertion point to a specific character in the command history. F7 allows you to browse previous commands. F8 recalls the last command with the insertion point at the beginning of the line. F9 allows the selection of a specific command from the history buffer (equivalent to !5 in tcsh, which repeats command 5). You can make the command processor much easier to cut and paste from by turning on QuickEdit mode. I like to use 43 or 50 lines, and a smaller font with blue background and white text, but that’s just me.
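
    A couple of those operators in action (server349 is a placeholder):

    dir \\server349\d$ && echo The admin share is reachable
    (net statistics server & net statistics workstation) > stats.txt

    The first line runs the echo only if the dir succeeds; the second groups two commands, runs them one after the other, and captures both outputs in a single file.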

    The command line is useful for Windows NT remote troubleshooting because all the good stuff can be done using command line tools, particularly with net.exe. In W2K, the command line becomes even more useful. Microsoft have committed to making everything (and I mean everything) doable from the command line. So far I count more than 400 executables in the Windows 2000 system directory. That’s more than double the count in Windows NT 4.0, even though all the graphical administration utilities are now Microsoft Management Console (MMC) snap-ins (which have a .msc extension). I think this bodes well.

    For bonus points, besides posix.exe, what is the only other POSIX subsystem application that is delivered with Windows NT? Sorry, no prizes for this one.

    Net.exe – Nifty Tool of the Week

    Windows NT and Windows 9x both ship with a program called net.exe. Net allows you to do the vast majority of your remote administration. The first thing you need to know is a little known feature called mailslots. Mailslots are an old OS/2 LanMan RPC holdover, one of six different IPC methods available to Windows NT. Mailslots allow you to impersonate a user by connecting first to the IPC$ share.

    To invoke a new impersonated mailslot from the command line, type the following:

    Net use \\server\ipc$ * /user:domain\account

    Remember to substitute the username, domain name and server to make it work for you. The asterisk means you don’t have to enter the password in the clear on the command line – important if you keep a command history or there are busybodies wafting around your shoulders, say at work or a conference.

    NT 4.0 and later allow you to use DNS names and IP addresses as well as hosts that WINS can find for you. For example, if you have no WINS replication or resolution to a PC (say your PC at home), you can connect to it like this:

    Net use \\192.168.1.1\ipc$ * /user:domain\account

    Obviously, you’d substitute 192.168.1.1 with the necessary IP address, and substitute the domain and account details. You could also connect via a DNS name, like \\hackbox.greebo.net\ipc$. There are bugs in 4.0 prior to SP4 regarding the use of hexadecimal or octal representations for the IP address. Upgrade to SP4 to avoid this.

    Why connect via a mailslot? Well, when you have a valid and active mailslot running, you can browse the machine, administer it using the normal NT utilities, and use net commands against it, like net user or net statistics.

    This is the hidden su-like interface to Windows NT. The cool thing is that you can use any account, and you still get through as long as physical communication is possible (i.e. you can ping the remote machine and ports 137-139 are not blocked in the middle). Make sure you do a

    Net use \\server\ipc$ /del

    when you’re finished.
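
    Putting it all together, a typical remote session looks something like this (the host and account are examples only):

    net use \\hackbox.greebo.net\ipc$ * /user:GREEBO\Administrator
    net view \\hackbox.greebo.net
    net use \\hackbox.greebo.net\ipc$ /del

    Net view lists the shares the remote box offers, which is a quick way to confirm the connection is alive before you point the graphical tools at it.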

    Tip of the week: net stop/net start can avoid some reboots if you know what you’re doing. Many applications will ask for a reboot when all that’s really required is for the service(s) to be stopped and started. Practice before you trust this advice, but it can avoid downtime, so it’s worth a try. Sometimes logging out and back in is all that’s required as well. If availability is important to you, do try this. Otherwise, just reboot. It’s the NT way.
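
    For example, bouncing a wedged print spooler beats rebooting a print server in the middle of the day:

    net stop spooler && net start spooler

    Spooler is the real service name; for anything else, net start with no arguments lists the running services so you know what to feed to net stop.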

    Windows NT Diagnostics

    Run winmsd.exe from the Start menu, or start it from the Administrative Tools, and you get a handy little tool that can connect remotely. WINMSD can tell you what sort of processors you have and how much RAM, what sort of disks, etc., in one handy little utility. I’ve used this with some success when I needed to tell the difference between a PII/400 and a Xeon/400 at a site some 265 km from where I was sitting just this week. It works.

    Resource Kits – Don’t Leave Home Without Them™

    If you support NT for a living or just dabble with NT because you’re the only “computer” person working in your company or department, the resource kits are essential parts of the administrator’s toolkit. Right now, the NT Server 4.0 Resource Kit can be had for less than $300, but it’ll have a very short life span, so it’s not good value.

    The workstation and server resource kit CDs are available via TechNet (aka DogNet). TechNet costs about $800 per year, and is well worth the price. You have to order through Microsoft directly, rather than through a dealer. The resource kit utilities are partially available via Microsoft’s ftp site. You can buy books of the resource kits for about $400, but they typically don’t have the latest versions of the CDs, and the paper text will be out of date within six months.

    The resource kits contain many useful utilities, not the least being a telnet daemon, and the more useful “rconsole” (rconnect.exe). Both utilities give you access to a command prompt running on a remote NT server. Rconsole gives you full command shell functionality, and allows most console programs to run (with the exception of things that change video modes, like Ghost 5.0 or games). Now that you know how to connect using mailslots, you can do this inside an rconnect window as well. Layer upon layer upon layer…

    I treat resource kits like dictionaries – they are deep, and you don’t have to know every nook and cranny, but if you spend a little bit of time every week getting to know new tools, it’ll pay off in the end, or when you have a tight deadline.

    Tip of the week: Check out the password filter in the resource kit. It does a great job of allowing you to define what sort of passwords your users can use. The downside? It needs to be on all workstations.

    Quickies

    NT has some nice functionality for managing remote sites, but sometimes the functionality is hidden somewhat. For example, if you wish to add a printer on a remote server, this used to be a doddle in NT 3.x, but it’s sort of hidden in NT 4.0. The trick? Browse the server and dive into the Printers folder. The Add Printer wizard is then available. You can’t easily create LPR or JetDirect ports, but if the ports already exist, then you can set up and manage printers remotely again.

    To tone down some of the more unnecessary NetBIOS broadcasts, you can turn off the Computer Browser service on NT Workstations and member servers. This stops these machines participating in browser elections. If you have WAN sites with asynchronous or single channel ISDN connections, you might want to have a look at WINS replication intervals (every 30 minutes might be too often), look up the replication governor in TechNet, and possibly revisit your WAN infrastructure to minimize WAN traffic by placing an NT server at the other end.
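
    Turning the browser off is a one-liner; make it stick by setting the service to Disabled in Control Panel, Services:

    net stop browser

    (Browser is the service’s key name; the Services applet shows it as Computer Browser.)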

    NT Services for Unix has been released as an actual product. It has a number of the MKS shell tools and an NFS server and client. It’s not the complete MKS tool kit, but it’s better than nothing. Internet Explorer 5.0 and Office 2000 are due on March 18th. One is free, the other will cost more. :-)

    Conclusion

    Windows NT may not be the most manageable or serviceable operating system without some additional third party helpers, but judicious use of the available tools coupled with a methodical approach can help look after most technical support issues. As with most operating systems, proper production management techniques will boost reliability and availability.

  • Windows NT Serviceability

    A few years ago, I owned a lovely Beetle 1300 that only let me down about twenty or so times in the two years I owned it. As a result, I owned a great owner’s repair guide, written by an old hippie. It was a great read in its own right, and I used the book extensively. One of the things that stuck with me is that the author told of a time that he took apart and serviced a Buick auto transmission using the instructions for a Beetle auto transmission. It worked, and he learnt a lot during the process. In the same way, I am hoping that you’ll stick with me, think outside the square for a few minutes, and see if you can take an idea or two from my article and apply it to your own situation.

    Serviceability

    On October 20, Microsoft released Service Pack 4. This is the service pack that all NT shops have been dying for. After a considerable wait, it’s finally here, and it looks as if Microsoft has finally taken comments from the system administration community seriously. One of the bigger problems with software development, especially on a code base as complex as Windows NT, Solaris or Linux, is that it’s hard to separate new functionality from fixes. Microsoft has provided three different levels of update for SP4, based upon feedback garnered over the last few years.

    The smallest update is just the fixes. In the minimal update, 641 new fixes (plus all the old ones) are provided in a single 260 kb file. That’s fine if you don’t need to tick the y2k box or want any of the new features.

    The intermediate update, 32 MB in size, not only fixes all known NT problems, but provides a lot of extra fixes and some new functionality, asked for specifically by NT Security gurus, like new versions of PPTP and LMv2 security. In many cases, to get really secure you need to ditch Windows 9x from your network. For the dubious still reading, 32 MB is very comparable to the 41.9 MB in recommended patches for Solaris 2.5.1 or 20 MB for Solaris 2.6, which has not been out as long as NT 4.0 has.

    NT 4.0 had two y2k bugs and four or five cosmetic y2k bugs. Microsoft provides the 76 MB y2k fix to get as many customers as possible to the same supportable configuration. This huge meta-update contains IE 4.01 SP1, SP4, a data connector update, and some BackOffice fixes you need to make NT 4.0 y2k compliant.

    SP4 is one of the easiest service packs to apply in a long time. Click a couple of boxes, and it munges away. The downtime window is very small – the time it takes your server to shut down and restart (mostly less than five minutes on the Intel and Alpha servers I’ve updated so far). But as always, prepare for the worst. Make an emergency repair disk (rdisk.exe), do a full backup, ensure that you understand your own disaster recovery plan (DISPLAN), and make sure you have your NT CD (and if you need them, the three boot disks) handy. The best-case downtime window will be the same, but you’ll be in a much better position in case something goes wrong.

    My success ratio with SP4 is good – it fixed one seriously ill server that was cruising for a bruising with the NT install CD; the only thing stopping me from rebuilding it was that it was our primary domain controller. It wouldn’t give up being the primary domain controller, and the bandwidth to the box was approximately 9600 bps over a 100 Mb/s full duplex Ethernet connection. However, NT is a very stable OS, and even though it was very sick, it stayed up for months on end, and it reliably serviced over 200,000 DNS and 142,000 WINS queries per week. Applying SP4 fixed both the promotion/demotion and the bandwidth issues, so it’s back to normal.

    There was one “server” that didn’t take SP4 too well. It comes down to what we class as servers. This unit was an old HP Vectra VL 5/200 with 48 MB of RAM. It was servicing the Cisco 5200’s TACACS+ needs for the place I used to work at. I’m no great fan of using desktop PC’s as servers. My basic requirement for a server is that if it’s important enough to dedicate a machine to, it’s important enough to do it right. This means providing the necessary infrastructure and support for a server level operating system: things like a CD-ROM drive, some way of backing up and restoring the server commensurate with its importance in the enterprise, and a tier one vendor who will support you when you have problems.

    HP, like most tier one vendors (such as Sun, IBM, Compaq, Compaq nee Digital, Apple, and others), has two or more separate product lines – a desktop line and a server line. My personal opinion is that sometimes the distinctions can just be marketing, but HP provide support for NT Server, SCO Unix, Solaris x86, OS/2 Warp Server and NetWare only on their NetServers. They do not support these OS’s on their desktop PC’s. For any corporation, the data or service is of far more worth to the organisation than the hardware. That’s why I baulk at installing server level OS’s on desktop PC’s unless those PC’s are going to be used by a single user under test conditions – and even then a desktop PC is no predictor of success when translated to the real thing. In a bad taste analogy, it’s like clinical tests on mice – some drugs that are benign to humans are fatal to mice, and vice versa.

    If you’re not buying servers from tier one vendors, I’m sorry, but that’s not such a good idea. I know friends who have rolled their own servers, but let me relay to you what happened at my last site with a roll your own. The machine was massively built – it was a full tower with an Asus mainboard, a DPT caching RAID card, heaps of RAM, the works. The problem is that the drive cage was painted with non-conductive paint. After a year of heavy service, the insulation wore through from the vibration, and the drives started to earth their circuitry to the cage and died. First one drive died, and no one noticed because the box didn’t have any monitoring software loaded, nor did the RAID card have a $2 piezoelectric bleeper like the HP NetRAID cards do. So the DPT controller made up the difference using parity. Then the next drive died, and the server stopped. There were no backups of the box for a month because the DDS tape drive could not read its own tapes (which is why you verify). The excrement hit the fan and someone got the arse. The server cost only $2000 less than an equivalent HP server, which also had vendor support (ie if a component dies, they courier out a replacement), and it had true hot swap rather than just the cold swap of the roll your own. Is your job worth $2000? The month’s lost data was worth far more than $2000 (mid-six figures, actually). If you’re wondering, NT was not the NOS running this box, but it’s irrelevant to this recounting.

    Server Availability Tips

    • Do not install any protocols, services or products that are not going to be used as part of the server. For example, do not install IPX/SPX on an Oracle DBMS as clients will not use this protocol to communicate with the server. Never install Simple TCP/IP services.
    • Always have a CD-ROM drive on your servers. They’re only about $100, and can save you hours of repair time. I’m not too fussy about ATAPI vs SCSI CD-ROMs these days, just make sure that your OS can read it without additional drivers. Panasonic 32x SCSI CD-ROMs are less than $300, so if you can afford the SCSI alternative, go for it.
    • Take emergency repair and disk partition disks on a regular basis. I do ERD’s once a week, and disk partition disks about once a month, and I rotate the disks so I have more than one ERD per server. The reason is that floppies are terribly unreliable, and if you’re trusting a six or twelve month old floppy, you’re kidding yourself.
    • Try to avoid using the console at all. Domain Admin users are able to crash the server (just as in Unix, root can cd / ; rm -rf * or kill -9 -1). There are some unavoidable reasons to use the console, so schedule this as part of your regular maintenance window.
    • Make sure you have a regular maintenance window. Never promise 100% uptime, as you’ll be setting unrealistic expectations. The aim is to have 100% availability for core hours. I worked in the hospital system, and we had the aim of 100% availability, but if we needed to, we could take some time from 4 am – 6 am on Sunday morning, or longer if arranged beforehand. As it was, we achieved 99.994% uptime (less than 30 minutes of unscheduled down time per year) for the vast majority of our servers (NT, Novell and Digital Unix). If anyone says that these operating systems are unreliable, I have a bone to pick with them, based upon real life experience in the mission critical, health care enterprise arena.
    • With Windows NT, as in many OS’s, it’s worthwhile to separate the data from system files. This means at least two partitions on production servers. I have my own preference for partitioning, but to cut it short, you need about 1 GB for NT’s system partition (to hold the OS, a copy of the installation files, the page file, and drivers), and the rest can be partitioned for user files. If you’re doing a print server (in my book, a server servicing more than 50 or so printers, or doing PostScript RIP stuff), move the spool to the data partition, as otherwise big print jobs can fill the system partition; see the registry sketch after this list. The Q article in the knowledge base is Q123747.
    • Practice your disaster recovery plans. If you don’t have a test server that’s exactly like your production servers, allocate some budget, and buy it. It’ll pay you off the first time you have a crash. Learn (and document) how to restore your systems as quickly and as reliably as possible. Practice, practice, practice. Don’t have a DISPLAN? Write one today or seek advice on getting one written. They’re living documents, so keep them up to date.
    • If you don’t have a TechNet subscription, get it. It’s about $800 per year, and worth every cent. If you have even one developer in house, get the MSDN Universal subscription (about $3500 per year at today’s prices). It comes with lots of goodies, including MSDN Library (some of the best answers to your problems are in MSDN) and you get pretty much all the MS products including betas.
    • NT Magazine is a must have subscription – don’t waste your time with the emasculated Australian edition – pay the extra fifty bucks and get the US one airmailed to you.
    • There are various NT resources all over the Net. My favourites include http://ntsecurity.ntadvice.com and http://ntbugtraq.ntadvice.com, both run by Russ Cooper, a featured 1997 SAGE-AU conference speaker.
    • Avoid letting staff with a little knowledge administer NT. It’s a recipe for disaster. Teach them a few things every month and bring their knowledge up, rather than letting them just go for it. Management will dislike you because you’re “reducing productivity” or looking like a control freak (management speak: “You are not being a team player”), but the alternative is massive amounts of down time. Make sure that they are interested in boosting their knowledge levels by making them go for the MCSE exams. The exams are $135 a pop and easy to pass as long as you actually use and understand the product (the instructor led courses can help, but they’re not mandatory). Under no circumstances give out Domain Admin privilege to those who do not need it.

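    Here’s the spool relocation sketch promised in the partitioning tip above. Per Q123747, the spool directory lives in the registry; point it at the data partition (the path below is just an example) and then restart the spooler:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Printers
        DefaultSpoolDirectory = REG_SZ d:\spool

    net stop spooler && net start spooler
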
    In the next newsletter, I’ll explain how to use the resource kit utilities to administer NT from a user level account (with access to a domain admin account, of course!).

    Slagging Microsoft

    Like many of you, I read Slashdot, although I am beginning to wonder why. Originally, Slashdot was a fun site that had many cool stories and lots of nifty Linux/Open Source articles. However, more and more often it has descended to outright MS bashing. Now I am not going to defend Microsoft for everything they do, because I personally find their marketing and monopolistic practices loathsome.

    What’s the relevance? The problem is that SAGE-AU’s mailing list has descended to the lowest levels of Slashdot of late. The Executive will be making some announcements soon on measures to curtail the level of vendor bashing on the lists. This is because we are putting people off asking the questions they need answered to get their jobs done. For example, I haven’t seen a NetWare-specific question on the list this year. Is it because we have no NetWare people on the list, or is it because the NetWare people are fearful of being slagged by both the MS and Unix weenies? This is not professional behaviour, dudes!

    Whatever the reason, the SAGE-AU Executive have decided to take some action to curtail advocacy or just plain emotive slagging. There’s no point in voicing the opinion that OS x is not stable or is unsuitable for a particular task, particularly when the admin asking the question might already be using OS x in that situation quite happily. They may only have a small problem that someone else on the list has already solved, and their life would be easier for the answer.

  • Diary of a junket: Microsoft Asia Pacific Developer’s Conference 27th October 1997 – 29th October 1997

    Note to my boss: This is meant to be a factual article on my trip to the Microsoft Asia Pacific Developer’s Conference. Where there is a difference between my reimbursement claims and this article, this article is playing fast and loose with the truth and my claims are completely factual, including the cognac bill.

    Day One (I’m no steenking C programmer)

    I got up, after barely getting to sleep. Being a habitué of the night has the downside of being mean and ugly when a red eye special is necessary to get somewhere. I got a lift with my dad to the airport at the ungodly hour of 4.45 am. Usually I get a cab when travelling to and from the airport, but Dad was working, and it’s not that far from the Ansett maintenance base where he works to the airport terminal.

    The one benefit of being up at that hour, as early risers and system administrators know, is the beautiful sunrises that often accompany bitterly cold mornings. It wasn’t too bad, but it was certainly a cold Monday morning in Melbourne. I got on the plane after picking my ticket up from the counter with all my stuff in a carry on bag. So did everyone else. The plane’s hold must have been empty.

    Little did I know it, but I was seated in zoo class with one of the presenters, Mark Hammond from Skippynet. Both of us being experienced travelers, we didn’t geek out, but instead looked bored when the attendants ran through the safety video. I’m really glad that no one else bothered to watch either, so that in the case of a real emergency, it’ll be the blind (through osmosis of ignoring the flight attendants over the course of a few hundred flights) leading the blind.

    I pack lightly on a meal flight. I had my trusty EMF source to screw the flight systems (my CD player) in case I wanted to be led by the blind in an early morning romp through a nearby hillside. I love airline food. There, I’ve said it. I’ll eat anything but caviar and asparagus, but airline food is fabulous in Australia. I gobbled down what is most likely to have been scrambled eggs, bacon (probably), and tomatoes (it was red). I think the grey fat coated shiny thing was a sausage that would have done Cut Me Own Throat Dibbler proud, but it was yummy anyway. Of course, the orange juice in those tiny foil covered cups is designed to coat you in orange juice when turbulence hits. The conspiracy theorist would say they have it designed into the auto pilot way points … 55 kms north, jiggle up and down, turn 25 degrees right, jiggle some more and fly for 80 kms… I was well on to my second cup of coffee when I got splashed. Those who know me know that this is not unusual in and of itself. At least I had a decent excuse instead of “oh, I missed when I was quaffing the coffee”.

    After landing, I queue-jumped the taxi line by accident. I found an empty cab that was leaving and jumped in whilst he was in traffic. Cabbies never say no to a well dressed fare, so only when I realized how easy getting a cab had been did I ask if it was busy, and he told me that he was about to join the cab queue when I jumped in. Oh well. Saved me about 10 minutes.

    Got to my hotel to find that I was screwed. The travel agents had made a booking, but the hotel wanted a credit card imprint to let me into the room. After spending about 30 minutes leaving voicemail for my boss to fax his authorization through, I was really pissed off, so I let them make an imprint of my Visa card. I really wanted to dump my stuff, but even after that, I couldn’t get into the room until after lunchtime. Lesson 1: arrive the day before, after 4 PM, and you won’t have accommodation worries.

    After having a good coffee in the hotel’s cafe, I descended on the local McDonald’s across the street and had breakfast number two while I waited for the doors to open. I ended up having an argument with an Asian developer about the impending doom of Apple while we drank the crap coffee. The doors finally opened at 10 am for registration, a full three hours after I got there. As it turns out, I could have slept in and gotten some badly needed beauty sleep.

    It was chaos up on the second floor – they were not ready for us, and worst of all, the coffee wasn’t on and the demo machines weren’t ready. As I have the attention span of a small kitten on three dozen catnip tabs, I was bored. I had run out of juice on my CD player, the Asian dude (he was from Singapore) wouldn’t believe that Apple would still be around next year, and I didn’t know anyone. Eventually, we were allowed to register and get our show bags. To keep us around, Microsoft weren’t going to give us the CDs of NT 5.0 and so on until the last day. Good move. There were some mighty boring sessions. The show bags came with a BFB (Big Fat Book™) on Visual Basic, and I bought a BFB on Visual C++ 5.0 because I was a little lost in that huge IDE. Both books have helped me since.

    I was starting to feel a little tired despite the large quantity of oxygenated caffeine in my blood stream. I went to Mark’s session on Preparing your application for VBA Hosting, which was very interesting, and a good introduction to the technical tone of the conference. If you spend a little time exposing your app to COM or COM+ (basically extending your object model to external applications), you can use the VBA DLL’s as your app’s scripting language. This allows even dumb users to use the VBA macro recorder to script your app. Even better, you can use VBA to write significant portions of your application (such as Wizards, etc) extraordinarily quickly. Users can then use a language that they know to extend your application in ways you haven’t yet thought of, especially if you allow your application to embed objects (an OLE container or server). Mark demonstrated this to us, and it is very nifty. The licensing terms for the VBA DLL’s are generous considering that this is Microsoft we’re talking about.

    Next, I went to the first session on Windows CE Technical Overview. Windows CE is pretty cute, but let me tell you, the Libretto 70CT I typed part of this document on runs Office 97 and Linux, whereas CE will never run these two. But CE will be successful despite the lack of technical excellence. We learnt that Microsoft re-implemented Win32 in a light form, one that can run in as little as 128 KB of RAM. Why don’t they shove that under Windows 95’s hood? Because it can’t run diddly squat in 128 KB of RAM besides maybe a toaster. And most toasters are quite comfortably programmed into a less than 400 gate FPGA for much less cost than WinCE and 128 KB of RAM.

    The CE hand held devices that we know need around 2 MB to run, and more to stash your stuff. CE’s goal is to be everywhere in the embedded and consumer market. Maybe they’ll succeed in the high end hand held market, but I doubt that the embedded market is going to fall for a Win95 look alike when they still have very cheap small footprint embedded processors. Currently, WinCE is available on four platforms: x86 (emulator), two variants of a Hitachi Super-H, and I believe a Motorola 68k embedded processor. Windows CE is being ported to a much faster Super-H, the baby PowerPC’s and the StrongARM. Both of the latter will give an average Pentium a good run for its money in terms of performance, and in the StrongARM’s case, the MIPS per watt of power consumed is phenomenal. No wonder Apple has been using it in the Newton all these years.

    Being a foodie, I live for food. Lunch was a disappointment because it was all little finger food items. I felt like popping downstairs for a McFeast. I had lots of orange juice just in case it ran out.

    After lunch was the keynote speaker, Moshe Dunie, the VP of Microsoft. Moshe had one and a half hours to kill, and I was bored pretty much straight away. He has a funny accent, but that’s okay. I waded through the morass of marketing until the marketing turned into geek ambrosia.

    They demonstrated the new NetPC’s, and Scott wosshisname from Sun should be scared. From a system administration perspective, NetPC’s are better than sex. Well, better than #hottub. They showed a NetPC booted and logged in, and the performance was okay (perfectly acceptable for lusers from a BOFH perspective). Then they pulled the plug, and they got another NetPC from the audience (hah! As if they are just lying around), plugged it in, and turned it on. Four minutes later, it had remote booted (and cached) NT 5.0 to the log on screen. Less than 20 seconds later, they had logged in, and were double clicking a document on the desktop they had placed there less than a few minutes before pulling the plug on the previous NetPC. I love these beasties, and I can’t wait to deploy them, as it will reduce our workload when replacing a PC by around a day. The users can’t dick with them, and they don’t have any user serviceable parts.

    Then they demonstrated Hydra. Hydra is Microsoft’s multi user version of NT they bought back from an outside developer (Citrix). True to form, they made a new protocol to communicate with the clients based around H.320 (more particularly, NetMeeting). But they booted an old clunker from DOS, and less than 10 seconds later it was running NT 5.0 as if it were a Pentium Pro (which the server was). Hydra will have automatic load balancing, and some fault tolerance. If a server carks it, the users just have to reconnect to the application cluster and they can resume from the last save.

    The message to developers was to learn how to be good network citizens again, and to use the Class Store to stash registry items. This automatically allows clients to work with Active Directory and means that truly diskless workstations will again be possible. Microsoft asked us to follow the rules, and then promised us that they would follow the rules with their own software like Office. Maybe pigs will fly before this happens, but you never know.

    They demonstrated the power of Active Directory Services, and some nifty tractor applications, like a Web based organization chart, which was pulling all the data from the ADS. The ADS will lock ISV’s into Microsoft’s platform, but it will make corporate life so much easier than it is today. Imagine NDS, but able to store much much more, with two way LDAP exposure and Kerberos authentication. ADS allows administrators to install applications against the directory, and as long as the applications expose COM+ objects and use proper registration methods, they will be self installing and self healing. No longer will users have to put up with dialogs asking what application can open a file with extension such and such. If the association is available by installing an application from the ADS in your container, and the user has permission to install it, it will be installed automatically the way you as the system administrator like it, and the user’s object or file opened. If you write your application properly, this even works for objects, and will install only those portions needed for the object. In other words, this is System Administration nirvana.

    Then after a small break, I attended the Component Services session. I fell asleep many times, and from the number of hard kicks I got, I must have snored pretty loudly. It’s a bit embarrassing because I was in the front row and Tracey Trewin, the presenter, knew her stuff. I was a major pumpkin after the previous night’s sleep that never was, so I needed sleep desperately. From the stuff I saw when I was awake, COM+ will become the premier distributed remote procedure call mechanism by dint of all those Visual Basic programmers out there alone. There are many, many benefits to using COM+, and not the least of them is that your app is fairly easily scriptable in VBScript. If your app needs to be scriptable, you owe it to yourself to check out COM+. All you have to do is establish your object model in COM, and you can do really neat things with the object properties, including writing those wizards in VB in no time flat to give your product that shiny Win32 feel.

    After copious quantities of orange juice during the break, I went to the ZAW Architecture and Desktop Management session. If you’re a system administrator, you owe it to yourself to check out NT 5.0’s ZAW capabilities. Imagine never having to touch client workstations again! It’s not quite zero, like the name implies, but it’s a lot closer than it is now. To make full use of ZAW, applications will have to be 95/NT aware, in that they don’t mind being run without a properly configured local registry (i.e. they can restore or create a non-existent HKLM key for themselves), and they use HKCU for the user’s settings. If they use the Class Store, they get the Active Directory for free, and with a little more work, you can be NT 5.0 friendly – and as a bonus, you get the Designed for NT 5.0 logo. The main benefit from the system administrator’s point of view is that NT 5.0 logo’d programs will self install, self heal, and generally be much better network citizens than before. With some NetPC’s even missing a hard drive (something that Westpac forced on Microsoft – blame Westpac for that), and most NT 5.0 workstations in a corporate environment having no local hard drive storage, most applications will need to be aware of where exactly they can stash stuff.

    After the last session for the day, I went to my hotel room, finally dumped my stuff and freshened up. Then, I returned for the food. Naturally, I was happy that it was a buffet, but I was a little disappointed with the lack of tables. It made it pretty hard to eat. However, being the trouper I am, I managed. I even went as far as two Magnum ice creams. I talked to many of the developers and caught up with the presenter who I had caught the plane with. I talked to some of the Microsoft developers about what it’s like to work for Microsoft. They seemed to like it, but I don’t think it is for me. As some wit has previously said, Microsoft employees enjoy working only half the day – and they get to pick which twelve hours that half is.

    Being absolutely buggered, I went back to my hotel room early, but since I was now in my normal waking hours, I had that disconcerting jet lag that comes from being up too early in your own time zone. So I watched TV and read until 1 AM. Raiding the feeble pleasures of the minibar (why they call these stupid little fridges a minibar I will never know) was always going to be part of the game plan.

    Day Two (the good stuff approaches)

    Awaking unnaturally early again at 6.30 am without the help of the two alarm clocks I had brought, I went and had McDonald’s for breakfast. Egg McMuffins are the best thing to wake up to, unless you are a vegetarian like I am again (this week).

    The first session was Networking and Distributed Services in Windows NT 5.0 – Active Directory. It basically talked about what you need to do to leverage and use the ADS. Applications don’t need to do much, really, beyond acknowledging policy keys (yes, well-behaved apps heed policies… finally) and using the registry the way you were supposed to when Windows NT 3.1 first came out. They talked about smart cards as an authentication method, but this only marginally ties in with the ADS. Applications that store stuff in the class store get a lot of free rides from ADS, and that makes NT 5.0 a very compelling story for any corporate with more than five desktops.
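
    Heeding a policy is nothing more exotic than a registry read under the Policies branch before falling back to your defaults. A small sketch, with a hypothetical vendor key and policy value:

```cpp
// Sketch of heeding an administrator policy: a well-behaved app checks
// the Policies branch first. All key and value names are hypothetical.
#include <windows.h>
#include <stdio.h>

// Returns TRUE if the (hypothetical) DisableFileExport policy is set.
BOOL IsExportDisabled(void)
{
    HKEY hKey;
    DWORD value = 0;
    DWORD size = sizeof(value);
    DWORD type = REG_DWORD;

    // Administrators push policy values under the Policies branch.
    if (RegOpenKeyEx(HKEY_CURRENT_USER,
                     "Software\\Policies\\ExampleCo\\ExampleApp",
                     0, KEY_QUERY_VALUE, &hKey) == ERROR_SUCCESS) {
        RegQueryValueEx(hKey, "DisableFileExport", NULL, &type,
                        (LPBYTE)&value, &size);
        RegCloseKey(hKey);
    }
    return value != 0;   // no policy present means not disabled
}

int main()
{
    printf("File export disabled by policy: %s\n",
           IsExportDisabled() ? "yes" : "no");
    return 0;
}
```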

    By accident, I went to the Class Store Architecture and how to write COM applications to the Class Store session. This session was excellent, but a little dry (there’s no helping it when you’re discussing COM). The session focused on what the class store gives applications free of charge, and what ISVs have to do to leverage it. The major bonus I found was that the class store allows system administrators to leverage policy much more than they can today, and apply it to all the computers that participate in a particular Organizational Unit (OU). It also allows you to stash per-user data easily, and to store per-domain configuration far more easily than before, in a standardized way. If you use the class store and register a proper Globally Unique Identifier (GUID), your application finally does away with the limitations of the three-letter extension for the most part, replacing it with a largish unique identifier.
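
    Minting one of those identifiers is nearly a one-liner. A quick sketch using CoCreateGuid, printing the GUID in the familiar registry string form:

```cpp
// Sketch: minting the GUID that identifies an app or document class,
// instead of relying on a three-letter extension.
#include <windows.h>
#include <objbase.h>
#include <stdio.h>

int main()
{
    CoInitialize(NULL);

    GUID guid;
    if (SUCCEEDED(CoCreateGuid(&guid))) {
        wchar_t text[64];
        // Format as the familiar {xxxxxxxx-....} registry string.
        StringFromGUID2(guid, text, 64);
        wprintf(L"New class identifier: %s\n", text);
    }

    CoUninitialize();
    return 0;
}
```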

    Sunita Shrivastava ran the next session, on Designing Applications to make effective use of Clustering, and unfortunately for her, her audience left gradually (and noisily) throughout the session. I really felt for her: the topic was dry, even though she was trying to go through the MSCS features, which many will need to use, especially in the dry-run phase that exists now (two-node failover, rather than Tandem-tough, zero-context-loss fault tolerance). None of her demos worked, and she didn’t endear herself to the audience when she professed not to know C++ and all her VB demos failed. The “fault tolerant” notepad.exe did work, but that required no programming. She really needs to dry-run her demos in future, and maybe take a public speaking course to improve her confidence in the face of a bored and hostile audience. In the end, about ten of us interested developers were left, out of an initial audience of close to eighty.

    The best session of the entire conference, in my opinion, was the Universal Data Access session. This was a broad introduction to ActiveX Data Objects (ADO). Why would data vendors be interested in ADO? The main reason is that ADO finally fixes the main problems with ODBC and all the descendants that were put out to allow Microsoft applications easy access to your data. The really cool thing is that you can do SQL joins on disparate data very easily, including flat text files. I’m looking at doing an ADO data provider for our PICK system, which will mean that we can finally get the data out and put in a decent, robust intermediate solution whilst our mainframe PICK system takes its last few wheezing breaths towards extinction. And not a day too soon. PICK is first against the wall when the revolution comes, and it’s coming very fast at your nearest health care outlet. ADO is a powerful new tool, and one that promises and delivers much to the average corporate whose data is stuck in proprietary systems.
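
    For the curious, here’s roughly what consuming data through ADO looks like from C++ using the #import extension. The DSN, table, and query are invented; treat it as a sketch, not our actual PICK provider.

```cpp
// Sketch of consuming data through ADO from C++ via #import.
// The DSN and query below are hypothetical.
#include <windows.h>
#include <stdio.h>
#import "msado15.dll" no_namespace rename("EOF", "adoEOF")

int main()
{
    CoInitialize(NULL);
    try {
        _ConnectionPtr conn(__uuidof(Connection));
        // Open a (hypothetical) ODBC DSN through the OLE DB provider
        // for ODBC.
        conn->Open(L"Provider=MSDASQL;DSN=ExampleDSN", L"", L"",
                   adConnectUnspecified);

        _RecordsetPtr rs = conn->Execute(L"SELECT name FROM patients",
                                         NULL, adCmdText);
        while (!rs->adoEOF) {
            _bstr_t name = rs->Fields->GetItem(L"name")->Value;
            printf("%s\n", (const char *)name);
            rs->MoveNext();
        }
        rs->Close();
        conn->Close();
    } catch (_com_error &e) {
        printf("ADO error: %s\n", (const char *)e.ErrorMessage());
    }
    CoUninitialize();
    return 0;
}
```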

    After that, there was one more session, on Internet Client Services, which was trying to get you to write your applications to be IE 4.0 dependent, which isn’t in our game plan. So I took a Microsoft MCSE Windows NT Workstation exam at Sylvan Prometric’s stand. I had talked myself into it since the exams were only half price at $67.50. I felt that even though I hadn’t studied, I had a good chance of passing, and I did. Feeling pretty confident about my skill set, I took the core NT Server exam the next day, and passed that too (again with no study). I was feeling good, and the exams were cheap, so I took the NT Server Enterprise exam. To my surprise, I passed that as well. This saved about $205 and several weeks’ worth of study. Normally it’s moderately hard to pass these exams, so I got a big head, which is good, as I am a complete and total legend. As you can tell, I’m modest too. In actual fact, knowledge of the product is far more important than studying a text; paper MCSEs are not yet common (although they are possible), unlike the derided CNE, which can be passed by study alone.

    That night’s dinner was to be held at the Sega World video arcade in Darling Harbour. I’m sorry, but in my personal opinion, Sega World is crap. It has few redeeming features, and it was good that there were plenty of PCs to play with and lots of free booze. There was finger food again, which made me desire a quick trip to McDonald’s to fill up before I got really pissed. Luckily enough, I got really pissed and spent a good part of the evening not noticing the lack of food and thinking how good these Sega Mega Drive based games really are. NOT! They threw us all out at 11 PM, which sort of annoyed me, not because I wanted to stay, but because the booze hadn’t run out yet. I joined some New Zealanders who took me back to their hotel. We sat in the bar, getting sozzled on our various poisons (I had a triple Cointreau on ice – it was their room service tab, not mine). We had a brush with fame with an ABC broadcaster whom I tried to engage in conversation. Unfortunately, he was into classical music (funnily enough, he is on ABC FM out of Adelaide on Thursday evenings and does the occasional Sunday night special, and no, he’s not Christopher Lawrence). Simon someone. Aw god. Anyway, after figuring out that we really didn’t have much in common (what does one cultured ABC presenter have to do with four drunken NZ yobbo developers and one half-sozzled Australian? Not much. He uses Windows 95 at home, but that’s about it), we left Simon to his own devices, and the rest of us went upstairs to one of the dudes’ bedrooms, where we made whoopee with his room service bill and the porno channel. We all chucked in two bucks to watch some crap American soft porn, and we thought it was good because we were pissed. At around 2 AM, I retired to my hotel. The joy of the junket…

    Day Three (the end is in sight)

    9 AM start? Who are they kidding? I had a slow, long room service breakfast and made it to the Developing for WinCE session on time. I was one of the very few. I don’t recall much of that session, but it must have been okay, because I have notes with questions I asked Sharad Mathur, the development manager for WinCE. The main point was the tool chain and how to actually develop for WinCE. The main tool was the VC++ cross compilers, and they did work. The best way to debug was to develop against the emulator; on most developer PCs the emulator will be considerably faster than an actual WinCE device. And you get color.

    Building Component Applications with Tracey Trewin was as dull as her first session. I slept at the back this time, and no one bothered to kick me. For those of you who are developing component based software, this is a very important topic, but I’m sure you already knew this. The main thrust of this session was using DCOM to get distributed component services for pretty much the same price as not using DCOM at all. The slides I have are very compelling, but I don’t recall much of this session.
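
    The “same price” claim is easy to illustrate: remote activation is one call, CoCreateInstanceEx, instead of CoCreateInstance. A sketch, with a hypothetical CLSID and server name:

```cpp
// Sketch of what DCOM adds: activating the same COM class on a remote
// machine via CoCreateInstanceEx. The machine name and CLSID are
// hypothetical.
#define _WIN32_DCOM
#include <windows.h>

// Hypothetical CLSID of a component installed on the remote server.
static const CLSID CLSID_ExampleService =
    { 0x12345678, 0x1234, 0x1234,
      { 0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x12, 0x34 } };

int main()
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);

    wchar_t host[] = L"APPSERVER01";        // hypothetical remote machine
    COSERVERINFO server = { 0 };
    server.pwszName = host;

    MULTI_QI qi = { &IID_IUnknown, NULL, S_OK };
    HRESULT hr = CoCreateInstanceEx(CLSID_ExampleService, NULL,
                                    CLSCTX_REMOTE_SERVER, &server, 1, &qi);
    if (SUCCEEDED(hr) && SUCCEEDED(qi.hr)) {
        // From here the interface pointer behaves just like a local one.
        qi.pItf->Release();
    }

    CoUninitialize();
    return 0;
}
```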

    To give myself time to sit a couple of exams, I skipped the Exchange Introduction to Collaboration Data Objects session, as we are very unlikely to develop this class of Exchange infrastructure in house.

    At the tea break, they finally gave us our CD packs. The conference was $795, and for that you got three days’ worth of seminars from some of Microsoft’s finest developers, and, best of all, 14 CDs which included lots of beta test bunny software, like Windows 98 beta 1, Windows NT 5.0 beta 1, and so on.

    Martin Duursma from Citrix Systems Australia presented the last session, The Microsoft Hydra Server. It was a very cool session. For those of you who like multi-user boxes, Hydra turns NT into one of these as well. The bonus of following the rules about the class store and ADS is that you pretty much get Hydra compatibility for free, and corporates all over the world will love you. Of course, being from Citrix, he spent a good portion of the session explaining why you should buy Picasso, Citrix’s Hydra implementation, rather than Microsoft’s. The same rules apply for developers, however, whether you target Picasso or Hydra. The main one is to use HKCU for user-specific data. The best bit was when they got an old DOS clunker to log into an NT 5.0 Server. Very cool.

    As he was doing the credits slide, there was a general exodus to the exits, until they mentioned the prizes. Since I had passed three exams, I won a prize. Some f#^$er did four (why he attended the conference I don’t know) and he got a Windows CE hand held. I got Microsoft Golf. Ripped off. :-)

    Then we made for the exits. I jumped in the first cab to the airport and then waited there for about an hour because there was a problem with the plane. I finally got home about 9.30 PM, made peace with my guard kitten (she hadn’t seen me since Sunday) and slept like a baby for ages and ages.