Well that’s embarrassing…

I thought I was being smart. Instead of pulling mail directly into my laptop’s Maildir++ directory via offlineimap, I thought I’d use fetchmail to deliver it to my laptop’s Postfix install instead. That way, I could use IDLE reliably, and also configure my laptop’s MTA to use maildrop to test out new mail filters before fully adopting them on my mail server. All good stuff. I installed fetchmailconf and ran the wizard. It wanted to test an initial import of everything. Fair enough, let’s go…

What I completely forgot was that I had added a .forward file to my laptop home directory some time ago, which forwarded all local mail to the account I was importing from!

As you might imagine, this caused a mail loop. Very quickly, my mail server decided “nope, I’ve seen this before and I’m stuck in a loop – bounce the message”. I caught the problem pretty quickly – I realised mail was importing slowly, and noticed my modem was unexpectedly busy. I quickly tailed the mail logs, saw what was happening, cancelled fetchmail, stopped Postfix and nuked the mail queue… but in that short time, 1663 bounce e-mails had been sent out.
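For reference, a .forward file needs only a single line to do its damage. Mine contained something equivalent to this (the address is hypothetical):

```
remote-account@example.com
```

Every message fetchmail delivered to the laptop’s Postfix was forwarded straight back to the account it had just been fetched from.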

Luckily, things appear to have not been too bad. Most e-mail was sent from forwarding accounts, since I only recently switched over to hosting e-mail myself. The majority of the e-mail was also backup notifications and other server reports that would not have relayed to external servers. Much of my e-mail was also sent to mail lists, which normally discard bounced e-mails. I likely e-mail my spouse the most, and she received under 30 bounces. I also received bounce messages from Google for the bounce messages – Google temporarily blocked my address, which I’m surprisingly glad about. It should also be pretty clear to anyone who received the messages that it was a configuration issue: the e-mails all came through within about 2 minutes of each other, most of the messages were old, and most or all of them had already been replied to at some point.

There was much to be learned from this experience. I usually consider myself someone who pays attention to detail, but that didn’t stop me from tripping up – on a one-liner too! It would have been nice if fetchmailconf had an option to test just a few messages first, as opposed to automatically running across everything in your account. In any case, if you happened to be on the receiving end of my dumb mistake, I apologise for the hassle.

Why I will not back FSF’s guidelines for free software distributions

The FSF publishes a document describing guidelines for free software distributions on gnu.org, as well as a list of distributions known to comply with these guidelines. In light of popular distributions that are increasingly including and recommending non-free software, these guidelines and distributions are a breath of fresh air to many – but they too are not without their problems.

From the guidelines, “any nonfree firmware needs to be removed from a free system”. The purpose of such firmware is to allow the target hardware device to function, so essentially distributions like Trisquel GNU/Linux feel it is fine to disable parts of a computer if it cannot be used in a completely free way. I have no complaint about this per se, but the way this is implemented in practice makes these distribution maintainers come off as hypocrites. These distributions are being reduced to not much more than a marketing ploy to mislead users. To understand why, I need to explain a bit more about what is meant exactly by the FSF when they refer to “firmware”, and why in many cases it’s a non-issue.

When the FSF talks about firmware, they are using the term in a way that is inclusive of “microcode”. This is important, because proprietary microcode is everywhere and difficult to avoid. Even so-called “freedom-compatible” hardware frequently includes it.

If you are running an x86 processor released in the last 10 years or so, your CPU likely supports microcode runtime updates from within the operating system. If you run the Debian Wheezy GNU/Linux distribution on an Intel CPU and have the non-free intel-microcode package installed, this will automatically load the latest proprietary Intel microcode into your CPU at boot (if the packaged version is newer than what is already running).
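If you want to see this for yourself, the kernel exposes the microcode revision currently running – a quick sketch for Linux on x86 (the field is absent on other architectures, and the package check is Debian-specific):

```shell
# Revision of the microcode currently loaded into the CPU:
grep -m1 microcode /proc/cpuinfo || true

# On Debian, check whether the non-free update package is installed:
dpkg -l intel-microcode 2>/dev/null | grep '^ii' || echo "intel-microcode not installed"

# The kernel also logs any runtime microcode update at boot:
dmesg 2>/dev/null | grep -i microcode || true
```

If the package isn’t installed, you are simply left running whatever revision your BIOS injected.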

So what happens if you don’t have this package installed? The answer is that your computer BIOS already includes CPU microcode that it injects into your CPU every time you turn your PC on. This is done before your operating system (or even its bootloader) has started to load. Were you not to load microcode updates in from your operating system, you would need to rely on flashing BIOS updates to deliver your CPU microcode updates. Either way, like it or not, you’re going to run Intel or AMD microcode at boot. It’s just a question of having the latest version with microcode fixes, or running an older version.

Here is the beginning of why the argument for fully free software distributions (for the x86 architecture at least) falls flat on its face. These distributions might be 100% free software, and give you the illusion of having a computer that is fully free, but in practice removing this microcode achieves very little – if anything at all.

CPUs aren’t the only devices you’ll find in modern PCs that require microcode. Enter the subject of graphics cards. This is where my main gripe with these distributions comes into being. Modern AMD graphics cards, like the CPUs discussed above, require microcode to function properly. Unlike CPUs however, AMD graphics cards need drivers to load this microcode into the GPU at boot – the BIOS will not do this.

AMD has helped the free software community create some great free software drivers. They have released all the specifications, and assisted in the development of code. Nvidia, by comparison, seldom plays ball with free software developers and (for x86-based graphics card drivers at least) has basically been no help at all. If you’re in the market for a high-end graphics card from one of these vendors, AMD would seem the logical choice – support the guys who support free software the most, right? No! Not according to the FSF!

Generators for Nvidia microcode have been created, but not for Radeon microcode. This is likely just a result of necessity – Nouveau (the free software project that has reverse engineered Nvidia graphics card drivers) was likely not able to redistribute the existing proprietary microcode due to licensing. AMD, however, has allowed Radeon microcode to be distributed “as is” (basically do whatever you want with it [Edit: Sadly I was mistaken – you can basically redistribute as you like, but “No reverse engineering, decompilation, or disassembly of this Software is permitted.”]), while not releasing the means to recreate the (21K or less in size) microcode file. There was therefore little incentive for developers to replace it – they would rather work on actually getting the drivers working properly than dedicate time to what appears to amount to (in this case at least) a purely philosophical exercise.

Now I admit, I don’t like that I need to run my AMD graphics hardware with proprietary microcode (even if they do have excellent free software drivers). Distribution maintainers have two options:

1. Allow the user to install microcode (possibly provided by the user, so the project need not redistribute it) and end up with a working and otherwise completely free software operating system


2. Don’t make it easy for the user to get his/her hardware working, forcing them to install a different distribution that may respect software freedom far less

Although option one would seem more logical at a glance, we have already established that distribution maintainers wishing to comply with the FSF guidelines for free software distributions will need to elect option two.

Now that all the discussion of firmware and microcode is out of the way, I have paved the way to explain what really makes me mad in all of this.

From the above, we can conclude that free software distributions do not want us to run hardware that requires non-free binary blobs of any kind – no matter how small the blob or how important the hardware may be. Now have a look at, say, the download page for Trisquel. Trisquel apparently supports 32-bit or 64-bit PCs (ie. x86 architecture, ie. AMD and Intel CPUs, ie. CPUs that require proprietary microcode to function). Where are the download links for people that have RISC CPUs that don’t require proprietary microcode (eg. MIPS, like the Loongson processors as used in the Lemote netbook that RMS uses)? No, Trisquel doesn’t really make any effort or seem to care about you running a 100% free software computer. To do so would mean dropping support for one of their main sponsors, ThinkPenguin computers, which only ships Intel x86 PCs!

If the free software guidelines were serious about avoiding non-free blobs, they should be blacklisting hardware known to disrespect user freedom by mandating blobs – regardless of how the blobs get installed – and should probably be dropping x86 architecture support. Alternatively they could go the other way and allow any non-free blobs, provided they are stripped to the absolute minimum required to get hardware actually working, so end users gain the maximum possible free software experience from their hardware. Of course they won’t do either of these things though. Neither a completely free software computing experience nor having things work correctly for end users is their primary goal; it’s all about marketing.

Fun times – upgrading Xen dom0 to Wheezy

I apologise for the downtime Sunday evening. What follows is a description of the problems I ran into which caused this.

It was about 6pm. J- and I were trying to figure out some issues we had been experiencing with XMPP. I run ejabberd in a VM on my server, which I’m reasonably happy with. J- on the other hand was using a Google Talk account, but always appeared invisible on my contact list. Yet, I was clearly visible and online on her roster.

My suspicions were that it was somehow related to Google Talk – it’s been in the news that Google is breaking federation, and they have broken it (partially at least) in the past. J- sought to fix this by signing up for a dukgo.com account. Oddly, this resulted in the same strange issue.

Next, I thought I might want to investigate my own XMPP server. I was only running stock Debian Squeeze, so figured I should probably upgrade to the latest stable before spending any significant amount of time on it. After all, how long could an upgrade take? It was 6:30pm on a Sunday evening, but I also had slides to come up with for a talk at LUV Tuesday night. Surely the upgrade wouldn’t take more than about an hour?

After all the packages had been upgraded, it was time to reboot the instance into a new kernel. That’s when I ran into my first problem – the instance refused to boot. It seemed that pygrub, which is what I was using for a boot-loader, was unable to parse the newly generated grub.cfg file.

Pygrub is part of my dom0, which was also running Squeeze. My thinking was that if I upgraded the dom0 to Wheezy too, it would hopefully support the new Grub configuration format. Worth a shot. And so I began the dom0 upgrade.

After all the packages on the dom0 were upgraded, it was now time to reboot and cross my fingers. Thankfully, the reboot was successful. I was very glad to see the processes of runlevel 2 initiate. Very glad… except one of my instances refused to boot. Not just any instance, but my firewall! No more Internets! Panic started to settle in.

The ADSL modem connected to the server via USB. The entire USB controller was using xen-pciback for device pass-through to the guest. This functionality was no longer working – the dom0 decided that the device was no longer available and could not be passed through. If it could not be passed through, the firewall instance refused to start (and wouldn’t be very useful even if it did). This was starting to be a real annoyance. Now I had to unload kernel modules, play with /sys entries to free up the device, and then boot the firewall again. There was some tinkering with dom0’s Grub kernel parameters along the way, but eventually I got the firewall to boot *and* see the USB device. It took hours, but I finally did it. Sorta.
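For the record, freeing up a device and handing it to xen-pciback at runtime looks roughly like this (the PCI address is hypothetical – find yours with lspci – and this is the gist of the /sys dance rather than a tested recipe):

```shell
# Hypothetical PCI address of the USB controller, taken from lspci:
DEV=0000:00:1d.0

rebind_to_pciback() {
    # Detach the device from whatever dom0 driver currently owns it
    if [ -e "/sys/bus/pci/devices/$DEV/driver" ]; then
        echo "$DEV" > "/sys/bus/pci/devices/$DEV/driver/unbind"
    fi
    # Register the slot with pciback and bind it, making the device
    # available for pass-through to a guest
    echo "$DEV" > /sys/bus/pci/drivers/pciback/new_slot
    echo "$DEV" > /sys/bus/pci/drivers/pciback/bind
}

# Only attempt this on a dom0 with the xen-pciback module loaded:
[ -d /sys/bus/pci/drivers/pciback ] && rebind_to_pciback || true
```

Once pciback owns the device, the guest configuration can reference it for pass-through.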

There were a ton of USB driver error messages in dmesg output of the firewall. The USB stack was failing and was unusable. I tried various pass-through configurations, but ultimately I was not able to get the guest to use any kind of USB device. Seems like some kind of regression.

At this point it was getting quite late, and I wasn’t in the mood for playing around any longer. I just wanted things working again – and preferably without having to undo all my work by restoring from backups. Fine, I thought. If I can’t pass through the USB controller, I’ll just install a spare PCIe NIC and pass through that instead. After all, my modem supports connectivity from either USB or Ethernet, and it doesn’t matter to me which.

Although this seemed like a good approach, and I had the hardware to spare, things once again didn’t work out. The dom0 kernel wanted to load the device drivers for this hardware itself, and I would have to prevent that if the guest was to use it. The kernel driver module was r8169. I started creating entries in /etc/modprobe.d/ and rebuilding the initramfs, which is when it hit me… this is the same kernel module as used by the server’s other integrated network port – which I very much need. If I prevented this from loading, I wouldn’t be able to remotely connect to the server any more via my LAN!

It was somewhere in the early hours of Monday morning, I had no Internet access (except through tethering with my N900), I had to go to work the same day, I had not had much sleep the night before, I had slides for a presentation that needed to be created, and I knew J- would kill me if I left the server in this broken state for too long. Further, I wasn’t sure how to proceed, and (to add insult to injury) my N900 battery just died.

I checked the server, and observed that it had two unused PCI slots. Thankfully my home server runs on an old budget motherboard that still supported them, as I figured I could scrounge up an old PCI NIC or two. After pulling some old boxes out of storage, I did indeed find spare PCI NICs. The first one I tried required yet another r8169 kernel module, but then I found an old PCI NIC that was gigabit and had heatsinks on it! I couldn’t see what it was under the heatsinks, but given that the other chipsets were bare, it seemed it would probably be something different. It turned out to be some kind of National Semiconductor NIC. No idea where I bought it from or how long I have had it for, but it proves that sometimes it really does pay to keep old crap. :)

So, after installing it, messing around a bit with /etc/modprobe.d/, rebuilding the initramfs, tinkering with the dom0 kernel parameters to provide appropriate device-specific xen-pciback parameters (because I’d forget about them if they weren’t in /proc/cmdline), changing the firewall VM configuration profile, etc… my Internets were back.
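For anyone hitting the same problem, the dom0 changes amounted to something along these lines (the PCI address, file name and driver module name here are illustrative – check lspci and lsmod for your own hardware):

```
# /etc/default/grub – hide the NIC from dom0 so xen-pciback claims it at boot:
GRUB_CMDLINE_LINUX="xen-pciback.hide=(0000:04:00.0)"

# /etc/modprobe.d/local-pciback.conf – keep the dom0 driver off the card:
blacklist ns83820
```

These need to be followed by update-grub and update-initramfs -u respectively, and then a reboot.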

Unfortunately, even as I write this I still have not had time to go back and investigate the original issue – J- is still invisible to me in my roster when she should appear as online.

Star Trek: Into Darkness

One of the perks of living in Australia is that occasionally we get to see really great movies well before the theatrical release in the USA. Star Trek: Into Darkness is one such movie, and definitely one I could not pass up.

Overall I enjoyed the experience, so when somebody asked me last week if I thought it was any good, it might come as a surprise to learn that I was hesitant with my answer.

It all depends on what is meant by the term “good”. If we take the meaning to be “was the film entertaining” then the answer is easy – the film is a fun ride. There’s plenty of suspense, action, comedy, good acting, etc. However, if we take the meaning to be “good for the franchise” or “a good step forward for demonstrating a set of idealisms for a younger generation”, the answer is a sound “no”.

I have always considered Star Trek movies very separate to any Star Trek TV series. My reasoning can be understood particularly well when drawing comparisons between The Next Generation series and films. Many of the films seem to pay more attention to action sequences than an actual story. Further, characters have a tendency to act differently. Picard in the movies never seemed to act as rationally as he did in the series, and in at least one instance was even hell-bent on revenge (First Contact)! Perhaps due to this, I find myself never expecting to be able to take the movies too seriously.

One aspect of Star Trek that I love is the abolishment of capitalism – the economy is no longer based on greed. The replacement economic system is never very well explained (we are only really told what it is not), however I feel this is one of the core aspects that makes Star Trek a believable franchise. Instead of focusing on greed, we have turned our attention to bettering humanity. Problems such as poverty and disease have been completely wiped out, and we have achieved amazing medical advances that appear accessible to all.

Many science fiction films paint a bleak picture of our future when capitalism is allowed to continue its course (eg. Blade Runner), but when greed is removed from an economy we get Star Trek – where humanity’s worst problems instead have a tendency to involve the unknown, or diplomatic relations with other races.

Perhaps the most compelling argument I have seen is that the Star Trek universe is based on a participatory economy. Regardless, you don’t see everyone owning personal vehicles, spacecraft, or other significant assets.

Yet, the 2009 film opens with Kirk driving what appears to be somebody’s personal car. It’s an antique, so perhaps not as bold a statement of a capitalist economy as it could have been. Fast forward to the 2013 movie however, and I vaguely recall seeing a number of small spacecraft around the time Scotty was heading towards the coordinates that lead to the USS Vengeance. This is a seemingly very inefficient way of managing transportation, and implies personal transportation is available – a strong sign of capitalism.

(Side note: It’s true that Picard’s family seems to own a vineyard and Sisko’s father seems to own a restaurant. However Picard might also refer to “my ship” without meaning that he owns it. These may be collectively owned by all.)

Into Darkness, like the movie before it, further removes itself from the television series – despite making even more references to it. While I was better prepared for the change this time around, it’s still disappointing to see Star Trek reduced to another action movie. Essentially, that’s what the Star Trek universe is slowly becoming. I half-jokingly mentioned to a colleague that the new Star Trek movie probably has more action sequences than some of the Star Wars films, but actually this might even be true!

Further, a recent Slashdot review of the film noted that it likely fails the Bechdel test, and I think they’re right. This is a guy’s film. Having a film in the Star Trek universe – a universe that is supposed to uphold values such as equality – fail such a basic test suggests that J. J. Abrams still doesn’t get it, or at least doesn’t care.

Why does any of this matter? It’s just a film after all, right? Well, if all I wanted was an action film set in the future, I’d probably prefer a Star Wars movie – pure science fantasy. Instead, I prefer Science Fiction. I like the idea that science fiction can take life today as we know it, change one aspect of it, fast-forward to the future and present a view of what our life might be like. I like Star Trek because it presents a future anyone can aspire for – even if they aren’t interested in astronomy or even space travel.

I shudder to think that this is the new future of Star Trek. If any TV series are made going forward, will they too throw away those core values and base it on what this reboot presents? Into Darkness seriously misrepresents all that Star Trek is, to the point where I wished Abrams used a different or original property to base his action films on.

My letter to Humble Bundle


Honestly, I could not believe you guys did this when I read the news on Slashdot. I thought no way, get outta here, this is some kind of joke…

The Humble Bundle has always had the tag-line “Pay what you want, DRM-free, cross-platform, and support charity”, yet you’ve made the decision to abandon 3 of those 4 core values of your brand.

I don’t game any more under Windows, I do care a lot about DRM, and as if all this wasn’t already bad enough you have also dropped the ability to support the EFF – my preferred charity.

My wife and I have purchased many bundles in the past. I’ve always told my friends and colleagues to check out the awesome bundles you have put together, but this will happen no longer. I will make sure that all the people I have recommended the Humble Bundle to are aware of what has happened today.

Even if you appear to go back to your previous-style bundles, you have lost my trust. I can’t promote or support a brand that isn’t true to the ideals and values that attracted me in the first place.

StatusNet now a part of System Saviour

Last week, the FSF dented about a MediaGoblin fund-raiser. Shortly after, Ben sent an email out to the FSM mail list indicating that he had used the service in the past and found himself donating. A couple of days later, an FSF e-mail hit my inbox, pressuring me some more.

The funny thing is that whilst I’ve heard of the project, I don’t fully understand how it works and why I would use it. After all, if it’s just for sharing images I would either add them within WordPress, or otherwise simply scp them to a directory on my server and link to them as required. This works fine from my N900 as well, although clearly posting images online is not a service I have much demand for. Heck, not a week goes by that I don’t just use elinks for something.

Perhaps I’m not the target audience, but I’m probably also misunderstanding what MediaGoblin is all about. How does it compare to say ownCloud? The best way to understand it is to take it for a spin. Let’s take a look at the documentation… they compare it to Identi.ca and Libre.fm right off the bat. Wait a second… I use Identi.ca a lot but I’m not running it on my own hardware right now. Despite this I’m deploying some Goblin to my server that I don’t really understand? Time to change priorities.

What followed was me spending the rest of the day re-organising my DomU machines, web server configurations and finally installing my own StatusNet micro-blog at http://micro.systemsaviour.com/.

So far I haven’t customised my install too much. I haven’t even replaced the Status.Net heading with the site name, but can do that all in good time. As my usage of Identi.ca was previously almost exclusively limited to other Identi.ca accounts, I had not until now had a good chance to see for myself how well the federation features worked. While not perfect (eg. no direct messaging functionality, documented bugs that sometimes prevent messages to groups from appearing, etc.) I think it will live up to my expectations and be sufficiently useful for me to want to make the switch away from my boltronics@identi.ca account.

As for MediaGoblin, I’ll have to look at that again another weekend to see if I can figure out how it might be useful. As for Libre.FM, I don’t think I’ll be hosting my own GNU FM server any time soon, given it doesn’t currently appear to have federation capabilities, which would pretty much restrict its usefulness to scrobbling (which I don’t really care much for anyway). I have decided that I also want to run my own Gitorious install sooner rather than later. Too much cool tech… arrggh!!

October 28th 2012 update:
As expected, I have since spent some time messing around with MediaGoblin. The results are visible from the Images menu button above. I have yet to create a custom theme, and do not have registrations enabled – with no plans to do so; at least not until the software matures.

Introducing ‘usbraid’ – for efficient USB RAID management.

Those of you who know me well also know that I’ve been doing geeky stuff for a long time, so it shouldn’t come as a surprise to learn that (while I wasn’t the first person to do so) I have been using USB RAID arrays for a few years. Unlike the linked articles however, I have generally had a practical reason for using one.

The first practical USB RAID array I ran was in RAID0 – attached with tape to the back of my Asus EeePC 701 laptop screen. The USB RAID storage was actually considerably faster than the 4GB of internal non-upgradable flash the netbook came with.

Currently however, I use a USB RAID array to store my most confidential files – things like my Bitcoin wallet, password manager databases, important documents and the like. Why would I do that? Security and convenience, primarily. I wanted a backup solution with redundancy in case one of the drives failed, which rules out my spare laptops, as they each house only a single HDD (without reaching for a soldering iron, anyway). I also don’t want to store such confidential information on my home server, which is running 24×7 and always connected to the Internet – it exposes this data to unnecessary risk. No, ideally the storage device used for these specific backups should only be powered up when the data is actively being used.

Most USB HDDs you can buy would fail to meet the ‘redundancy’ requirement, but there are devices such as the Western Digital My Book RAID1 enclosures and the like. Unfortunately these generally house 3.5″ HDDs – overkill for the few small files I need to store securely. There are other non-apparent problems with these too:

  • The sheer bulk and weight of some of those solutions would make them very susceptible to damage if accidentally dropped.
  • They tend to rely on proprietary software and/or HDD controller chipsets which may not be easy to replace if they fail.
  • Generally, such devices are not terribly cheap.
  • In my experience, putting much trust in consumer-grade external hardware devices is just asking for trouble.

So there you have it – a very practical reason why I require a USB RAID array. Running five 1GB sticks in RAID6, permanently duct-taped to a cheap USB hub, solves all of the above problems. It is silent, tougher, smaller, lighter, cheaper and easier to replace (you can just buy any other USB hub off the shelf – or in a pinch not even use a hub, if a desktop has enough USB ports), and would require at least 3 drives (more than half the array in my case) to fail before losing data. As far as the hardware part of the solution goes, it’s perfect!

Of course, the software side of the story is a little more tedious. I actually run LVM to manage my partitions on top of my RAID device, so having to manually start a RAID array by specifying the device nodes of each USB key, setting the LVM volume group to ‘available’, creating mount points and then mounting each filesystem I’m interested in each time I want to use my array is actually quite a lot of work. After a bit of practice you can go from connecting the device to having the filesystems mounted in about a minute, but even that is far too long IMO – especially when you consider that you also need to do a number of steps to reverse all of this when you’re finished with the filesystems later.
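To give an idea of the tedium, the manual sequence that usbraid wraps looks something like these two shell functions (device names, volume group and mount point are illustrative – yours will differ):

```shell
usbraid_mount() {
    # Assemble the array from the five USB sticks
    mdadm --assemble /dev/md0 /dev/sd[b-f]1
    # Activate the LVM volume group living on the array
    vgchange -ay vg_usb
    # Mount each logical volume of interest
    mkdir -p /mnt/secure
    mount /dev/vg_usb/secure /mnt/secure
}

usbraid_unmount() {
    # Reverse all of the above before unplugging anything
    umount /mnt/secure
    vgchange -an vg_usb
    mdadm --stop /dev/md0
}
```

Multiply that by every logical volume you use, plus the reverse steps when finished, and the case for scripting it becomes obvious.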

A few months ago, I bit the bullet and spent a few hours writing my own solution, which I now license to all (under the GPLv3): usbraid. I’ve spent most of this morning updating it to be less specific to my system and adding the included documentation, so hopefully it’s useful to somebody who might be in a similar situation. You need to know a bit about mdadm and LVM2 if you are considering making your own USB RAID setup and using this tool, but hopefully it’s not too difficult. Once set up as described in the included README file, you should simply be able to run:

$ sudo usbraid -m
$ sudo usbraid -u

to mount and unmount your USB RAID filesystems.

Giving up fglrx in Debian Wheezy

The title says it all. A recent update has once again stopped fglrx direct rendering from working with Xorg, so I’ve decided to just switch over to the free software Gallium driver entirely. This means no Amnesia, but I’ve since finished that game. It probably goes without saying that CrossFire won’t work now either, so… I would like to say that three of my GPUs are just doing nothing, but there are still power management issues with the radeon driver, so the fans are sending my wife and me deaf while my cards cook at around 80-90 degrees, and it heats up my apartment noticeably – an annoyance since we’re heading towards the middle of summer here. It also means no OpenCL support, since the AMD APP SDK depends on fglrx, although fortunately I haven’t been using that lately either.

The uninstallation of fglrx did not go smoothly. There have been times since I first performed my current desktop OS install where I manually ran the installer downloaded from AMD’s website, which spread files all over the place. These had to be cleaned up. The following two links were the most useful I came across which deal with this problem:

However, the final issue I had was documented in neither of those. The AMD installer had created a file on my system, /etc/profile.d/ati-fglrx.sh, which set an environment variable that caused direct rendering to fail ($LIBGL_DRIVERS_PATH IIRC). Removing that file and logging out and in again got everything back to normal… well, “normal” as described above. :/
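A quick way to confirm the cleanup worked (glxinfo comes from the mesa-utils package; the check below is skipped if it isn’t installed):

```shell
# The stray profile script exported this, breaking Mesa's driver lookup:
unset LIBGL_DRIVERS_PATH

# From a fresh login, direct rendering should report "Yes" again:
command -v glxinfo >/dev/null && glxinfo | grep "direct rendering" || true
```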

I’m still keeping fglrx on my laptop though (which I haven’t updated in a while)… for now. I don’t want my laptop to run into the same power management issues leading up to Linux.conf.au 2012.

Here’s something I’ll be taking away from this experience. Proprietary software might sometimes be better than free software, but generally there can be no expectation of it becoming any better in the future than it is today. In the future it may become incompatible, may impose new restrictions upon you, may not support new formats, may force you to upgrade (sometimes at cost) to continue functioning properly, etc. The issue described in this post was the first of these. With free software however, I can generally expect that the software I have today will never become worse over time – that is, it only gets better. Even in cases where ‘better’ is debatable (eg. GNOME 3), it can be (and often is) forked by anyone. That’s one of the reasons I love it.

To show my support of free software and software freedom, I have finally done something I feel guilty for not doing a long time ago – and became an associate member of the Free Software Foundation.
[FSF Associate Member]

Tough time for Debian Wheezy users running fglrx (and farewell GNOME)

Do you run the fglrx driver on Debian Wheezy? I sure do, and if you’re like me I feel your pain.

About a month ago, the fglrx packages were added back into Debian Testing so the previous workaround is no longer required. Unfortunately not long after things appeared working, the Debian guys decided it would be a good time to upgrade to GNOME 3, which caused all kinds of graphical corruption and made Debian all but unusable for me. I actually found myself booting into my Windows install for a while.

A newer fglrx driver was then released which fixed most of the graphical glitches, however things were still far from perfect. As an example, the alt tag pop-up text in Firefox was rendered incorrectly and barely readable, but for the most part things were okay… until I tried to play a video. Hello bug #649346 – “fglrx-driver: using xv extension crashes Xorg”. At the time of writing, this still isn’t fixed. Naturally this is all terribly frustrating.

I generally use mplayer whenever I need to watch a video, so the work-around for me was adding “vo=gl” under the [default] section in my ~/.mplayer/config file, and being extra careful that mplayer is the default video player for everything!
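For reference, the relevant part of my ~/.mplayer/config ends up looking like this – vo=gl switches video output to OpenGL, side-stepping the Xv path that triggers the Xorg crash:

```
# ~/.mplayer/config
[default]
# Use the OpenGL video output driver instead of Xv,
# which crashes Xorg under this fglrx release (bug #649346)
vo=gl
```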

There’s one other interesting thing that happened to me over the weekend – I ditched GNOME. I’ve been a GNOME fan-boy since the pre-1.0 releases back around 1998, so you might imagine the significance of this. Certainly some of the lead GNOME guys have previously upset me by encouraging some further development to be in Mono, but my real reason for doing this is simply because modern GNOME 3.x versions just don’t seem to cater to me any more. After using it for a few weeks, I just feel too constrained.

For example, I wanted to find a way to select an appropriate font size. I couldn’t – I could choose “Small”, “Medium”, “Large”, etc. I know I like font size 8, but there was no way to select it – all the options gave something too big or small for my liking.

Another thing I use all the time is virtual desktops. Right now, I’m using 3, but sometimes I find myself using 10 or more depending on my workload. Because GNOME has always defaulted to two horizontal panels along the top and bottom of the screen, my virtual desktops have also always been aligned horizontally. GNOME 3 changes this – you have to get used to managing them vertically. Further, I can’t assign e-mail to virtual desktop 7 – GNOME only creates them as you need them. This may seem like nit-picking, but it’s too difficult to get used to, and it just feels inefficient.

Then there’s the Alt+Tab functionality. How could they screw that up? Well, if you have 10 Terminator (xterm) windows open for example, GNOME considers them all to be a single application. So when you Alt+Tab to switch through them, they all appear as a single item. Instead, you must Alt+Tab to Terminator, and then Alt+` to switch between the individual terminals. I’m sure they were aiming for efficiency here (for a change), but it all feels very tedious and breaks conventions everyone is already used to.

Opening a new program is also annoying in GNOME 3. You need to move the mouse over to the top-left corner of the screen, then click an Applications icon that appears a few centimetres away (probably further away on larger screens). A large (huge?) icon for every application will appear after a few seconds of loading time – which is impractical for most people since the list is so big, so you need to narrow down the results by category. So now move the mouse way over to the right side of the screen to what looks a bit like the traditional GNOME 2.x menu options. These limit the giant application icons to only those that fit within that category.

What happens if the application you have has been installed manually and does not have a GNOME launch icon? Well, you need to create one manually, of course! Fire up your favourite text editor and create one under ~/.local/share/applications/ or some such. What a pain in the ass! Unlike the good old days of creating a custom application launcher through the GUI, in GNOME 3 you need to do it all through text editors.
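As a sketch of what that hand-rolled launcher looks like – note that the filename, application name, paths and category below are all hypothetical placeholders for whatever you actually installed:

```
# ~/.local/share/applications/myapp.desktop – hypothetical example
[Desktop Entry]
Type=Application
Name=My App
Comment=Manually installed application
Exec=/opt/myapp/bin/myapp
Terminal=false
Categories=Utility;
```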

You can add commonly used application launch icons to a dock on the left-hand side of the screen, but if you’re like me and use a bunch of different applications depending on the task at hand, that’s not particularly helpful. In fact, quite frequently I find myself hitting Alt+F2 to just type the name of the application I want to launch. This functionality is still there in GNOME 3, however it’s far less useful than it used to be. Auto-complete functionality seemed to be missing, however it’s still the best option for launching applications when you don’t want to bother with creating launch icons.

Some of the GNOME 3 options simply aren’t even implemented. For example, telling GNOME you want your user to automatically log in doesn’t work – you need to edit configuration files. How a major release ever made it out in such a state I’ll never know.

Another thing I wanted to do was tell GNOME that my default terminal should be Terminator, since it was clearly ignoring my /etc/alternatives/x-terminal-emulator setting. Unfortunately, that’s a matter of firing up gconf-editor and hunting down the option. What used to be a simple drop-down menu in GNOME 2.x no longer exists!
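For what it’s worth, the same setting can apparently also be flipped from the command line via gsettings. I’m treating the exact schema and key names below as an assumption (they may differ between GNOME 3.x point releases), so verify them first:

```shell
# Assumed schema/key names – check what actually exists with:
#   gsettings list-recursively org.gnome.desktop.default-applications.terminal
gsettings set org.gnome.desktop.default-applications.terminal exec terminator
```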

Some of the above issues can be worked around via Shell Extensions and GnomeTweakTool, but it seems stupid to be forced to waste time with hacks just to get basic functionality going. Firefox provides everything needed for efficient web browsing out of the box, and if you want extra uncommon functionality the extensions are there to help you out – but it’s still a perfectly good web browser without them. GNOME 3 on the other hand just feels useless as a desktop without them. It’s a disaster.

So what have I switched to? I wanted something Debian was likely to have good support for, so I started poking around the available packages:

$ for i in $(apt-cache show task-desktop | grep ^Recommends: | cut -d ' ' -f 2- | tr -d ',|') ; do [[ ${i} = *desktop ]] && echo ${i} ; done

I’ve given KDE a number of chances over the years, but have always switched back to GNOME due to KDE’s complexity. When you get frustrated trying to hunt down an option you know should exist, something has to be wrong. However, my N900 does run LXDE in a chroot and it seems okay, so I gave it a spin. Ouch, was it buggy! Trying to configure options would spit out random errors – errors which had been fixed in newer releases that made it into Ubuntu over a month ago, but were still an issue in Debian Testing. It seemed to me like the Debian guys haven’t given LXDE much love, so that left me with Xfce. Linus Torvalds switched to it a while ago… how bad can it be? Well, I did try it years ago, and my memories of it were not good… but given the lack of options I thought I’d give it a try anyway. And boy was I impressed!

GNOME 2.x users will feel right at home. Imagine GNOME 2.x… except with more options for configuration out-of-the-box! I was able to make my Xfce desktop look and behave almost identically to GNOME 2.x, and it feels quicker to boot! I don’t know why the Mate project (aiming to fork GNOME 2) is bothering – Xfce just feels so right. :)

I did have one issue with Xfce sound however. I basically have two sound cards – an Intel HD Audio Controller located on my motherboard, and my Logitech G35 USB headset. Stock Xfce did not seem to provide any option for switching between these on the fly, however audio was one thing that GNOME (both 2 and 3) got right… which gave me an idea. Under Settings -> ‘Session and Startup’ -> ‘Application Autostart’, I added /usr/bin/gnome-sound-applet (which comes from the gnome-control-center package). Now, audio works just as well under Xfce via this applet as it does under GNOME. Beautiful!

There are a few other little things I’ve found in Xfce where I’ve thought “wow, that’s a nice touch”. Eg. I regularly Alt+drag windows around, but with Xfce you can actually drag them to neighbouring virtual desktops! It might not sound that amazing, but it feels nice. Also, when you want to move an applet around on the panels, a square appears that makes it very clear what the panel will look like if you left-click to confirm – as opposed to the GNOME way, where you only see the results once you have already made the change by dragging. Lastly, say I click on a launcher for a program that is already open on another virtual desktop which I forgot about – instead of getting a flashing icon in the task panel and having to click it to jump to a different virtual desktop (as would be the case in GNOME 2.x), the application instantly moves from whichever virtual desktop it was on to the current one. These are all minor details, but they have left me pleasantly surprised.

As for the Xfce panel applets, some are better than those in GNOME 2.x, and some aren’t quite as good. Overall, I didn’t feel any worse off. I did think the Directory Menu applet would be really useful, but I haven’t relied on it much yet (perhaps out of habit of not having it). If you like GNOME 2 and hate GNOME 3, definitely do yourself a favour and give Xfce 4.8 a try for a few days and see what you think.

Slashdot: IEA Warns of Irreversible Climate Change In 5 Years

There have been some very interesting comments in the Slashdot post IEA Warns of Irreversible Climate Change In 5 Years. For example:

One example is the discussion over this image.

Another would be Phleg’s question What are you going to do?:

So what are any of you going to do about it? Continue to point fingers at China? The third world? Oil companies?

How about accepting that you can’t change others, and instead set examples yourself. I moved into the city, leave my A/C and heat off whenever possible, bicycle for 95% of my trips (including commuting), grow as much of my own food as I can, and buy the rest locally and in-season whenever possible.

2 years ago, I was doing none of that. Now my personal energy footprint is a fraction of what it had been. Perhaps not as much as is needed, but it’s something, and none of it has honestly even been hard.

So again I ask: what are you going to do about it? What will you or have you changed about your lifestyle to help avert global disaster?

My answer:

I already am doing something about it. There’s room for improvement, but I know I must be doing better than 99% of Australians. Here’s how:

  1. My wife and I don’t have kids. There can be no greater selfishness. It may be said that the significance of all of our environmental problems is directly related to the now 7 billion people on this planet. It’s been clear for decades that the Earth’s population growth is unsustainable, and yet here we are.

  2. We don’t own a car. Easily achievable. I know lots of people say “but I live in an area where there is no public transportation” or “I live too far away from work to ride” – but that’s because they’re selfish. They were not considering the environmental impact of their decision to live in such a location. My wife and I on the other hand have always expected we will not be relying on a car, and have planned our lifestyle accordingly. As such, it is no problem.

    If more people chose such a lifestyle, maybe councils around the country and the world would better cater for the needs of people like ourselves who do not drive. For example, the detours I need to take to ride to work are ridiculous – just because my local council didn’t pay any significant consideration to cyclists when planning and paving the roads.

  3. Don’t rely on an air-conditioner or heater. Until the summer heat wave of 2009, my wife and I had never owned an air-conditioner. We did buy a portable unit for those few weeks with over 40-degree heat since our apartment tends to get very hot as it is, but I don’t think we’ve ever used it since. Under ordinary circumstances, we have no problem adapting by simply changing to lighter clothing. When it’s cold, we wear a jumper and jacket, or dressing gown for night time. If that’s still not enough, we’ll just get a scarf or even a blanket until we’re comfortable.

    Contrast this to basically any workplace I’ve ever worked at. If somebody just came back from a jog, the air-conditioner gets cranked up. Same deal if the air feels “mucky”. If it’s a few degrees too cold, don’t bother putting something on – with a couple of button presses it’ll magically feel better. It’s a sad thing to watch. I usually just bring in a jacket so I can wear it if I’m cold, but almost every day someone will still turn on an air-conditioner. And worse – leave it on when they leave! Meanwhile, I don’t think there has ever been a time I have turned on the air-conditioner or heater at any of my workplaces – past or present.

  4. We’re vegetarian (and speaking for myself, I’ve been vegetarian for around 8 years). That means we eat a lot of food that isn’t processed – my wife is always buying fresh vegetables to cook dinner from. Further – and more importantly – we are not contributing to the damage caused by extensive cattle farming, the leading cause of greenhouse gas emissions in places like Brazil and the source of about 17 per cent of Australia’s emissions. Our choice to be vegetarian certainly isn’t because we’re religious or too poor – it’s because it’s unethical from a number of viewpoints not to be at least a strict vegetarian. Some would say the same thing about being vegan, although I haven’t taken my diet to that level.

  5. Limit use of shopping bags and plastic bottles. I personally drink about 1 litre of soft drink each day at work – but I make it at home with my Soda Stream kit and bring it in using a reusable bottle – which I carry in using the pannier on my bike or a backpack if walking/jogging. The main waste created by this is the syrup bottle, although this is small and lasts a few weeks, and is always recycled. By contrast, I know other people who buy a bottle of coke each day from a local cafe! I sure hope they recycle all the plastic they throw out. I also take a backpack with me almost everywhere else I go, and when I do shopping I make sure it’s full of reusable shopping bags. Sometimes store clerks give me a plastic bag before I have a chance to tell them that I’d like them to use ones I have specially brought in – in which case I’ll keep the bag for use as a rubbish bin later. I always put any bag I receive to use – but do my best to not get them in the first place.

Having said all of the above, I know there is still room for improvement in our lifestyle.

  1. Our home server that powers this blog is running 24×7. That in itself isn’t necessarily bad, but I suspect the machine isn’t as power-efficient as it could be. Perhaps in a few years I’ll replace it with my current AMD E-350-based laptop, or maybe some other very low power ARM solution. However, I don’t think a perfect low-power replacement is readily available to me at this time.

  2. My desktop is extremely power-hungry. Sometimes I need that power, but ~90% of the time I don’t – which makes me feel like I’m being wasteful. Perhaps to test whether the above two points are valid, I should buy some kind of power draw measurement tool.

  3. Our electricity should ideally come from solar panels we would install on our roof. Unfortunately we are renting and haven’t the funds or authorization to make such a change, but if ever we buy a home that allows for such a setup, it’s my intention to do this.

  4. My wife especially purchases electronic devices that she doesn’t really need to have but just likes to have, eg. the latest model phone. I don’t think I personally fall into this category much these days – every electronic device I have bought in the last two or so years has some tangible practical benefit (well, arguably excluding game consoles I suppose…). At least when I have purchased new electronic devices (eg. video cards), my older ones have been sold off and not directly wasted.

That’s it for me. What about you?