Categories
Linux Ubuntu

The GUI v. CLI Debate

I’ve been helping with online tech support for Ubuntu for over four years now, and every now and then the discussion comes up about whether it’s “better” to offer terminal commands or point-and-click instructions when helping someone.

Inevitably, some die-hard CLI (command-line interface) fans come out and say that the terminal is “more powerful” and that every Linux user should learn to use the terminal, and then some die-hard GUI (graphical user interface) fans come out and say that the terminal is intimidating and that if Linux wants more users, it has to develop more graphical interfaces for things; and then you get the hardcore Linux users who claim they don’t care if Linux gets more users or not, etc., etc., ad nauseam.

The truth is that neither CLI nor GUI is always “better” than the other. There are appropriate situations for both CLI and GUI on a support forum. I hope everyone can agree that all common tasks should be able to be done in the CLI and through the GUI. Choice is ultimately what’s most important, so that those who prefer the CLI can use the CLI, and those who prefer the GUI can use the GUI.

But if I am offering help to new users, do I give GUI instructions or CLI instructions? It depends on what kind of support I’m giving.

When is GUI support appropriate?
If a new user wants to know how to do a basic task that she will probably repeat (or, if not the exact task, then at least something similar) in the future, then I will usually give point-and-click instructions to encourage that user to explore the GUI for that kind of task. For example, if a new user asks “How do I install Audacity?” then I am not going to say “Just use sudo apt-get install audacity.” Instead, I’ll tell her to use Applications > Ubuntu Software Center or Applications > Add/Remove, or just link her to this tutorial on how to install software. There are several reasons I do this:

  • Even though the apt-get command makes perfect sense to me, it is just cryptic gobbledygook to a new user, and it will not help her to install other software in the future unless I bother to explain how the command works; and, more importantly, even if she understands how apt-get works, she’ll still need to know the name of the package she wants to install in order to use the command most efficiently.
  • A lot of new Linux users (myself included, when I first started) have an irrational fear of the terminal, even if you tell them to copy and paste the command with a mouse (no typing necessary). Eventually, as they become more comfortable with the new environment that Gnome or KDE (or Xfce or whatever other user interface they’re exploring) has to offer, they are more likely to be amenable to learning terminal commands and even liking them.
  • Among Windows power users (the most likely group to migrate to an almost-unheard-of operating system that requires download, installation, and configuration from the user and not the OEM), there is already a reputation Linux distros have of being too terminal-dependent. It’s great to advertise to new users just how many things can be done by pointing and clicking, and that will make their transition to Linux that much easier.
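For contrast, here is roughly what the CLI route looks like when the user doesn’t already know the package name (a sketch of a terminal session; search results and package names vary by release):

```shell
# First, hunt for the package name among the (often long) search results
apt-cache search audio editor

# Then, having picked out "audacity" from the list, install it
sudo apt-get install audacity
```

That’s two cryptic commands for something Add/Remove handles with one search box and one checkbox.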

Ah, some veteran forum members would protest, but what if I don’t want to bother making screenshots or typing out long point-and-click instructions that can be summed up in a single command? To that, I say if you’re too lazy to offer appropriate help, don’t offer help at all. Someone else will help. Or, better yet, find a good screenshot-laden tutorial and link to the tutorial instead (that’s actually how I started up my Psychocats Ubuntu tutorials site—I got tired of constantly retyping the same support posts over and over again, so I just made one place I could keep linking new users to).

I would say something similar to those who use Fluxbox or Enlightenment and want to primarily help those who use Gnome or KDE. If you aren’t familiar with the graphical environment the user you’re trying to help is using, don’t offer help in that instance. Save your help for when the CLI is appropriate.

When is CLI support appropriate?
The GUI may be fine for common tasks (installing software, launching applications, managing files and folders), but what if someone runs into a problem? What if what she’s doing is not a common task but a one-time setup or configuration? If a new user says “When I try to launch Firefox, it just disappears,” there’s no way I’m going to offer a point-and-click solution. Problems are best diagnosed with the CLI, and terminal commands (even for GUI applications) are more likely to yield helpful error messages. Likewise, if her wireless card isn’t recognized properly or fixed by System > Administration > Hardware Drivers, it isn’t a crime to walk her through manually editing configuration files to fix the wireless problem, because once it’s fixed, she should never have to do that again.
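For instance, with the disappearing-Firefox example, a first diagnostic step might look like this (firefox is just the example here; substitute whatever program is misbehaving):

```shell
# Launch the GUI program from a terminal so crash messages
# print to the screen instead of vanishing with the window
firefox

# Or capture all output to a file that can be pasted into a forum post
firefox > firefox-output.log 2>&1
```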

If you do offer CLI solutions to problems, though, as much as possible try to explain what these commands mean or do. You don’t have to copy and paste in a whole man page (in fact, that probably won’t be helpful at all—I’ve been using Linux for years and have yet to find a man page I actually understand). Just keep in mind that to many new users, terminal commands are like a foreign language they can’t even say hello or thank you in.

CLI and GUI aren’t going away any time soon. One is a hammer. One is a screwdriver. No one tool will suit everyone best at all times. Use what’s appropriate. Appreciate that what you like or prefer may not be what someone else likes or prefers.

Categories
Linux Ubuntu

My response to Rory Cellan-Jones

Rory Cellan-Jones recently spent 24 hours with Ubuntu:

I installed a few applications – including Skype, and a social networking application called Gwibber.

But when I tried to install a free open-source audio editing program, Audacity, it appeared more complex to get hold of an Ubuntu version than the one I’ve used on a Mac.

So it was simpler than this on Mac?

[a series of screenshots showing Audacity being installed through Ubuntu’s Add/Remove]
What was tripping you up? Not knowing a sound recording and editing program would be in the Sound & Video category? Or not realizing how silly it is to have to open a web browser to install a program? Do you find the iTunes App Store difficult to use? Because that’s pretty much the same thing, isn’t it?

I very much look forward to reading your next article, “24 hours learning to ride a bicycle.” The wheels must just not be worth the effort.

Further Reading
Know why software installation is difficult on Linux? It’s a secret. I can’t tell you.

Categories
Uncategorized

Impressed with Karmic Koala beta

Ubuntu Linux gets released twice a year—once in the spring, once in the fall. The releases are numbered to indicate the year and month of release. Every spring release except Ubuntu 6.06 (Dapper, which slipped to June) has come out in April (5.04, 7.04, 8.04, 9.04), and all fall releases so far have come out in October (4.10, 5.10, 6.10, 7.10, 8.10).

I’ve always been a bigger fan of the April releases than of the October ones. That’s changed with this next release (9.10), nicknamed Karmic Koala. I just installed the beta release (it had gone through six alpha releases previously), and all the standard disclaimers apply, of course: if you install a beta, you do so at your own risk; don’t use it on a production machine; you may lose data; there is no warranty, real or implied; blah blah blah. Nevertheless, I’ve generally found (with few exceptions) that Ubuntu beta releases are more or less stable. I haven’t had anything catastrophic happen with a beta Ubuntu release (your mileage may vary).

And I like this October one. I think Ubuntu is finally heading in the direction Mark Shuttleworth has said for years that it should head in. It’s focusing on usability. It’s focusing on looking better. It’s focusing on hardware compatibility and working out a lot of the little bugs that make a big difference.

Speed
With the last release (Jaunty, 9.04), boot time was a little over a minute from the moment I pressed the power button to being actually able to use the system (that’s what I consider boot time, not when you see your desktop). With Karmic (9.10), boot time is only 37 seconds. It’s not the 10 seconds some people have been touting (and, yes, I have a solid state drive, too). Still very impressive.

More importantly, the interface is more responsive. I don’t know how to do actual timing benchmarks. I’m sure the difference is just a matter of milliseconds, but it feels much snappier. There is no lag switching windows or clicking on a button. In Jaunty, there would be a barely noticeable delay in rendering when simply closing a tab in Firefox and having the next tab appear in focus. In Karmic, no delay at all. It’s nearly instantaneous (not as fast as Chromium, but for all intents and purposes fast enough). I’m using a crappy Intel Atom processor, by the way.

Appearance
Aesthetics is, to a large degree, subjective. Nevertheless, there are certain visual implementations in interfaces that are in vogue in the corporate and consumer computing worlds, and I think Ubuntu is moving in a good direction here. The boot-up is so fast that there isn’t even a loading boot screen (there is in the live CD session, though, and it looks nice). The icons are much cooler-looking. It’s pretty clear, though, that much has been copied from Mac OS X, including the applets for wireless and power management, which now use simple light-gray icons instead of pixelated blue bars and complicated graphics that don’t always render well.

One gripe I have is that there is still text displayed during bootup. Granted, if you want text displayed during bootup (some kind of verbose mode), that’s good. The default should have only graphics, though. The Grub boot menu is all text (white text on black background), and then there are little boot messages that scroll by very quickly (visible for only a couple of seconds). I’m hoping that’ll be fixed in Ubuntu 10.04.

Along with the Macification of icons, there is also the simplification/Macification of the interface. System > Administration > Login Window no longer brings up a multi-tabbed preferences window with lots of options. It now has basically two options (autologin or not, show the screen to log in or not). System > Preferences > Sound shows a sound dialogue that looks like an almost exact carbon copy of the Mac sound preferences dialogue.

Most importantly, the new Ubuntu Software Center is even easier to use than Add/Remove or Synaptic. It just puts all the options in your face and filters things quickly. You don’t have to mark things for installation and then apply. You just click to install each item, and it does it right away. The progress bar is inline instead of a new pop-up window. It just seems fast.

Hardware Recognition
Jaunty was pretty good at recognizing hardware. There was a little regression, though, that made it so that certain Intel sound chips didn’t work and ALSA had to be recompiled from source… oh, and for my set-up anyway, PulseAudio (the default sound management system) always had to be uninstalled to get sound to work. There was also a bug that had wireless take “forever” (between 30 seconds and a minute) to come back after resume from suspend (or “sleep”).

In Karmic, sound worked with PulseAudio (I just had to change the input from Microphone 1 to Line-In), wireless worked after resume within seconds, and everything else worked, too (no regressions for screen resolution, power management, or hotkey recognition, etc.).

One little bug (which I filed) is that the hardware drivers for Broadcom 4312 install fine during the live CD session, but once you install Karmic, the drivers need to be uninstalled and reinstalled to work, and then only after a reboot. Hoping that gets fixed before final release.

Conclusion
Overall, this is a totally awesome Ubuntu release. If my friends would just stop using iPod Touches and iPhones (or if Apple would play nice with other systems or port iTunes to Linux), then I could actually recommend Linux to people.

Categories
Uncategorized

Why I’m not a fan of Google’s cease-and-desist letter to Cyanogen

Those of you who follow my blog or are Ubuntu Forums members may know that I often come to the defense of Google. There is a lot of Google-bashing out there. It seems to now be the cool thing to do. I almost laughed out loud when there were blog posts framing the Apple rejection of the Google Voice app as “David and Goliath” with Google being the Goliath!

I generally like Google because Google generally favors open source and open standards, and even does quite a bit of funding for open source. They have not, in the past, engaged in any of the vendor lock-in practices that Microsoft and Apple have. It is annoying if you have a Hotmail account and can’t use a regular email client like Thunderbird with it. It’s annoying if you can’t install a Google Voice app because Apple tells you what can and cannot be installed on your iPhone (and, unlike in Android, the iPhone doesn’t have an override option to say “I understand the risks of installing this third-party unapproved app but just want to do it anyway”).

I have a rooted Android phone. The term rooted in this case is a bit misleading. It isn’t a regular Android installation that has somehow been modified to allow me root access (so I can install apps like wifi tethering). It actually is a special rooted Android ROM I had to replace my regular Android installation with.

The folks who make these ROMs are volunteers who just want to make the most of what Google has advertised as an open platform. One of the most famous is a developer who goes by the nickname Cyanogen. I tried a few ROMs, and Cyanogen’s was definitely the best.

He thought he was being careful. He thought (I’m paraphrasing here), “Well, I’ve modified the open source components of Android. The Google proprietary binaries (YouTube app, Google Maps app, GMail app, etc.) I haven’t modified. I’m redistributing these only to people who already have Google-branded phones. It shouldn’t be a problem.”

Well, apparently, he was wrong. Google thought it was a big problem, despite the fact that only a few tens of thousands of people were using Cyanogen’s ROM. Google sent him a cease-and-desist letter, claiming he did not have the right to redistribute Google’s proprietary apps in a modified ROM.

Is Google within its legal right to do this? Certainly.

Is this a good idea for Google to do this? Absolutely not. Here are the reasons why:

  • If you look at the billions of people in the world and the millions of Android phone users, only a comparatively small number of people were using Cyanogen’s ROM. This cease-and-desist letter actually brings only more publicity to ROMs (which will continue to exist but now will have to go underground).
  • Google is pissing off the very people who have been the most vocal proponents of Android. These are people who can not only help develop the platform software-wise but can advocate for friends and family to buy Android phones in lieu of iPhones or Blackberries.
  • Even though what Cyanogen was doing may have been legally wrong, it was morally right. He was not stealing money from Google or hurting Google’s business model. Granted, Google does sell those “free” apps to phone manufacturers, but Cyanogen was creating the mod specifically for phones that had regular Google Android on them anyway.
  • The real clincher for me is the fact that Google Android has been touted by Google as open source. Yes, technically the OS itself (which is based on the Linux kernel) is open source, but Cyanogen and some other ROM developers have pointed out that the way Android is set up, it’s basically useless without the core apps (Android Market, Google Contacts syncing, etc.).

My hope is that, for Google PR’s sake, Google undertakes the following follow-up actions:

  • Offer Cyanogen a job working for Google Android
  • Work on releasing a barebones Android framework that is completely open source but also at least basically functional.
  • Provide a way for Android users to actually root their phones without replacing the standard OS with a custom ROM. The wifi tethering app, for example, is hosted by Google. Well, what good is the wifi tethering app from Google if it can’t be used? What good is an “open source” operating system if it requires proprietary components to function?

I haven’t completely turned against Google. I do think they’re still doing a lot of good work, and they’re still more open than Microsoft and Apple. Nevertheless, this incident has left a sour taste in my mouth, and I can’t really enthusiastically recommend Android phones to people now. I like Android still personally. But it no longer has the same open source appeal it used to. So if a friend or family member asks if she should get an iPhone, I’m just going to have to say “Why not?”

Categories
Apple and Mac OS X Computers Linux Music I Like Ubuntu

A professional musician switches from Mac to Ubuntu Linux?

I just read Linux Music Workflow: Switching from Mac OS X to Ubuntu with Kim Cascone, and I have to say I’m shocked, especially after reading Kim Cascone’s Wikipedia entry. Kim is a serious musician, not just some schmoe dinking around in his basement.

I’ve been a full-time Ubuntu user for a little over four years now, having switched from Windows XP. My wife switched around the same time but from Windows to Mac, as she uses Mac for serious graphic design work.

Even though I get annoyed when anti-Linux trolls make it sound as if no one could use Linux just because Linux isn’t great for certain niche commercial applications (AutoCAD, Adobe CS, certain graphics-intensive video games), I have to concede that Linux is not for everyone. And if someone had come up to me yesterday and said, “Hey I’m a professional musician who uses a computer full-time for audio stuff. Should I use Linux?” I would probably laugh in her face and tell her to go with Mac OS X.

Even though I don’t use Linux for serious audio work, I’ve seen enough of the Linux audio mess of PulseAudio, OSS, and ALSA to know it can be an obstacle for someone seeking to use Linux primarily for audio work. After reading that blog post, though, I have to say I’m pleasantly surprised.

And I also think that, even though there is a myth of meritocracy in the software world, arguing about how freedom is important isn’t going to win over the general public. If open source is really a better development model, it will create better software. There shouldn’t be a choice between functionality and ideology. If the ideology of freedom being better is true, then it should produce the best functionality eventually. And maybe it is slowly getting there.

I don’t subscribe to the notion that if Ubuntu (or some other Linux distro) fixes all its usability issues that all of a sudden hundreds of millions of Windows users (and Mac users?) will just download .iso files, burn them to CD, boot from CD, and install and configure a new operating system themselves. But why have extra obstacles?

Keep on bringing the improvements, Linux communities. This is definitely a cool development.

Categories
Computers Linux Writing

Google Chrome OS isn’t Linux?

Add one more to the tech journalism hall of shame.

From PC World‘s “Google’s Chrome OS May Fail Even as It Changes Computing Forever”:

First, Google will compete with another operating system, Linux, that has tried fruitlessly to replace Windows on consumer PCs. The Linux camp will give it another go with a Linux variant called Moblin that has the backing of Intel and is headed for netbooks soon. (No specific partners or dates have been announced.) Dell says it prefers Moblin to Chrome OS.

Hey, Tom Spring—Google Chrome OS is Linux, just as much as Intel’s Moblin is, just as much as Ubuntu is. Linux is a shorthand many people use to designate any operating system that uses the Linux kernel… and Google Chrome OS uses the Linux kernel!

Maybe this mistake is a good thing.

If even tech “journalists” think Google Chrome OS isn’t Linux, then maybe people will give Chrome a chance because of the Google brand and not be afraid that Linux is only for geeks. After all, no one ever said you had to be a geek to use TiVo.

If Chrome OS is successful, Linux’s “year of the desktop” may not even be recognized as such, because most people (apparently not even supposed journalists) won’t realize Chrome OS is Linux. Of course, I don’t buy that Google is directly competing with Microsoft. Yes, Chrome OS is an operating system. Yes, if it’s successful, it will take some marketshare away from Windows. But cloud computing can be only so successful in the near future. Not everyone has broadband internet. Not everyone wants confidential documents on someone else’s servers. Not everyone wants to migrate away from her current platform. Not all applications have “cloud” counterparts.

If Google is successful in taking over the netbook market, it’ll be a huge blow to Microsoft, but people will still be using their Windows desktops and Windows laptops for heavy gaming, for niche business applications, for graphic design (if they aren’t using Macs).

Windows does not need to be totally overthrown, though. Any gain in marketshare for Linux will mean more hardware support for Linux users, which means ultimately more freedom and choice for even those Linux users who use non–Chrome OS distros.

Categories
Computers hp mini Linux Ubuntu

Vanilla Ubuntu on the HP Mini 1120nr

Anyone who read my last post knows I am not a fan at all of the HP Mobile Internet Experience. It was a huge disappointment that made me almost regret buying the HP Mini 1120nr.

Good thing I didn’t give up on it, though, just because of the bad MIE interface. I installed vanilla Ubuntu on it, and it’s great now!

First I had to consider whether to install Ubuntu lpia (lower-powered Intel architecture) or the regular i386 version. Presumably the lpia version is optimized for the Intel Atom processor in my HP Mini, but…

…not to mention the fact that almost all third-party .deb files (TrueCrypt, DropBox, Opera) are compiled for i386. Since the battery life on the HP Mini appears to be between 2 and 2.5 hours (less than the 3 hours I got on my Eee PC 701), an added 12 to 15 minutes of battery life wouldn’t really help anyway. In any case, I don’t travel much, so battery life would be just something to brag about, not necessarily something I would need.

Instead of the hours I spent trying to make the MIE interface usable (to no avail, by the way, and it wasn’t any more responsive even after I switched from 1 GB to 2 GB of RAM), the Ubuntu installation and configuration took me only about 40 minutes and was extremely painless.

I took a vanilla Ubuntu 9.04 (Jaunty Jackalope) image, booted it from USB, clicked the Install button on the desktop, answered the easy questions quickly, resized my MIE partition to make way for regular Ubuntu, waited about 20 minutes for the installation, then rebooted.

Almost everything worked straight away—Compiz, screen resolution, function keys, resume from suspend. Even wireless worked (and it’s a Broadcom card, which is notoriously Linux-unfriendly). The only thing broken was sound. So I did a quick Google search and came across this fix. I pasted those few commands into the terminal, rebooted, checked a couple of boxes in the sound preferences (check to enable speakers, uncheck to disable PC beep), and everything was running quite smoothly—and with no lag at all.

It’s a shame HP didn’t put more usability testing into their preinstalled version of Ubuntu… or just put more thought into sticking with regular Ubuntu.

Edit (26 May, 2009): Actually, the sound settings would reset after each reboot. Usually, I just suspend to RAM, but every now and then I reboot, and it’s annoying to have to mute the PC Beep and unmute the PC Speaker every time.

The fix is:

  1. Get the volume settings exactly the way you want them.
  2. Paste the command sudo alsactl store into the terminal.
  3. Edit the /etc/rc.local file as root (sudo nano -B /etc/rc.local) and add the line alsactl restore before the final exit 0.

Now if you reboot, your sound settings should stay the same.
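In other words, after step 3, the tail end of /etc/rc.local should look something like this (the rest of the file will vary from system to system):

```shell
# /etc/rc.local: run at the end of each multiuser runlevel
# Restore the mixer levels previously saved with "sudo alsactl store"
alsactl restore

exit 0
```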

Categories
Linux Windows

Software installation in Linux is difficult

Linux is for geeks only. Software installation in Linux is difficult. It is not for the faint-hearted. Let’s take, for example, installing a simple game of Hearts.

In Linux, you’ll have to download source code and have to compile it from source, and then you’ll run into dependency hell and have to track down all the individual dependencies yourself.

Here are some screenshots to show you just how difficult it is…

[a few screenshots of installing a Hearts game through Ubuntu’s Add/Remove]
See? That was quite difficult, and I would not recommend that for the average user. People just want to click and go. They don’t want to have to run a lot of obscure commands just to play a game of Hearts.

It’s much easier in Windows. In Windows, all you have to do is search for the software you want, download it, click next-next-next-finish, and you’re done.

Let’s take a look at how much easier it is to install software in Windows…

[a long series of screenshots of searching the web for the software, downloading an installer, and clicking through next-next-next-finish dialogs]
See how easy that was? These Linux geeks have to stop pretending that Linux is ready for the average user. Windows is ready to go out of the box, and it’s just more user-friendly.

Categories
Computers Linux

Know why software installation is difficult on Linux? It’s a secret. I can’t tell you.

I love this line from Preston Gralla’s latest bit of anti-Linux propaganda:

But when you try to install new software [in Linux], or upgrade existing software, you’ll be in for trouble. I won’t get down and dirty with the details here, but believe me, it’s not pretty.

Actually, I don’t believe you. Why should anyone? I find it quite pretty. I find it beautiful and simple.

Since Gralla doesn’t want to spend the time explaining to you the details of software installation in Linux, I will. I will get down and dirty with the details here.

I’ve been using Ubuntu for the past four years straight. When I want to install software, this is what I do:

  • I click with my mouse on the Applications menu.
  • Then I select with my mouse the menu item Add/Remove.
  • I do a search in a little text search field (which I can click in with my mouse) for the software I want (or what the software does) and then some results come up in the search with little pictures and descriptions next to them.
  • I pick the result I want and check with my mouse the little checkbox next to it.
  • Then I click with my mouse the Apply button.

That’s it. Some pretty dirty details there. If you want to see screenshots of this “not pretty” process, you can visit my Ubuntu software installation page.

And the best part is that I don’t even have to worry about upgrading applications. Every six months when I upgrade Ubuntu, all my applications automatically get upgraded. How easy is that?

Gralla, welcome to 2009 (or actually even 2005). I don’t know why you’re still using Linux distributions from ten years ago. Do I make judgments on Windows based on my experiences with Windows ME?

Categories
Computers Linux

Not that hopeful ARM will save Linux on netbooks

With the recent 96% Microsoft netbook fiasco (i.e., poor excuses for tech journalism, as usual), I see a lot of smug comments from the Linux community about the upcoming ARM-powered Linux netbooks.

The argument goes something like this:

Yeah, Windows may dominate the netbooks now, but Linux will come back. Windows doesn’t run on ARM yet, and the ARM-powered netbooks will be cheap and have long battery life. If they sell netbooks for US$200 with a 15-hour battery life, then who would pick a more expensive Windows option with less battery life?

I would love Linux to succeed on netbooks, but look at what has happened already. Let’s review, shall we? First, the One Laptop Per Child project introduced the idea of a very low-cost laptop for children in developing countries. Then the Classmate PC came out as a rival. Both Microsoft and Apple tried to edge their operating systems onto the OLPC laptop. What happened? Well, not only is the Sugar interface on the XO rubbish, but OLPC even started entertaining putting Windows on its laptops, despite its earlier refusals in objection to the use of proprietary software.

Then there were all these rumors about Asus coming out with a $200 very small laptop. People got all excited. $200? Really? Wow! What happened? The Eee PC. It was a big hit! Was it $200? No. It was $400, and it had a 4 GB SSD. Later, Asus came out with a $300 version with a 2 GB SSD. At first people marveled at these small things and even praised the Linux interface (with very large icons) as something anyone could use. Then they realized it was some crippled version of Xandros and promptly started to replace Xandros en masse with Windows or Ubuntu (or some other Linux distribution).

Other vendors started jumping on the bandwagon, because they didn’t want to lose out in this new netbook market, so Acer, MSI, Sylvania, and HP all ended up coming out with their own versions. The prices either got higher or stayed the same (but with better specs).

And then Windows XP started appearing. Unfortunately, if you want people to actually start using Linux, preinstallation is not enough. First of all, that preinstalled version has to be preconfigured, too, and thoroughly tested. Then it has to be properly marketed. It also should be a proper Linux distribution and not a crippled one (no Linpus Lite, no customized Xandros).

Pretty soon, Linux became synonymous with enormous cartoony icons and a lack of easy software installation. That, plus a few FUD stories about return rates being higher for Linux netbooks (even though that was true only for MSI, not Dell or Asus), and Microsoft has basically won the battle.

And my suspicion is that it’ll win the war, too. The real problem is that OEMs are not invested in seeing Linux succeed. If Linux is a cheap option that will get them some revenue, OEMs will use Linux. But if Windows will get them even more revenue, they’ll use Windows. And a lot of Linux users aren’t helping, either. This whole mentality of “Well, the Windows option is better, so I’ll just buy that and install Linux myself on it” will just limit future Linux options, as executives at OEMs will say “We tried to offer a Linux option, but even the Linux users will just buy Windows and install Linux over it themselves. What’s the point?”

Will ARM be $200? We don’t know that. Will ARM have amazing battery life that the Windows netbooks won’t compare to? We also don’t know that. Some of the more recent Windows Eee PCs boast up to 9.5 hours of battery life.

Call me cynical. Call me pessimistic. But I see ARM either falling through the cracks or Android falling through the cracks, or ARM netbooks being marketed badly or overpriced or configured badly. I will be extremely surprised if Ubuntu shows up on an ARM-powered netbook that’s US$189 with a 15-hour battery life, a comfortable keyboard, a large hard drive, a slick look, and no “We recommend Windows for home computing” at the top of the vendor page. The vendors will keep recommending Windows, and Microsoft will keep pushing Windows 7. And it’ll bring out its ARM smears and Android smears. Microsoft will go down fighting or not go down at all. I have to confess, at this point, I’m very tempted to just throw in the towel and get a Windows netbook and install Linux myself on it, even though that’ll just add to Microsoft’s bottom line and its boasts about the demise of the Linux netbook.

You vendors, you’d better come out with some cool Linux netbook soon… and don’t let Apple steal this new market away the way it did with portable audio players and the iPod.