Category Archives: Software Development

General matters of designing and implementing software

Wget Stuff

Yeah, well, my blog posts have dropped off drastically lately. That's probably mostly due to my having recently taken on maintainership of GNU Wget. Here's an update on what I've been doing with that.

Bug tracking

One of the first things I did was to go through the TODO list in the Wget repository, and the issue reports on the mailing lists, and transfer it all into a bug tracker, so we can actually see what needs fixing, what we've fixed, and keep it all semi-organized somehow. In spite of reservations, I decided to move it all into GNU Savannah's bug tracker: Wget already has a presence on Savannah, so it would require a minimum of setup. On the other hand, as I was already well aware, Savannah's interface positively sucks. It's cumbersome to set up bug submission form fields, cumbersome to arrange their order, cumbersome to search for specific kinds of bugs… but at the end of the day, it does the minimum I decided I needed. Maybe someday we'll move to Bugzilla… *shrug*

Moving the repository

Another thing I did pretty soon after taking ownership of Wget was to move hosting of the Wget source code repository from dotsrc (formerly known by another name; they still host our primary mailing list) to my own VPS (the one this blog runs on), under my own domain. I didn't do this just because I'm a control-freak who wants absolute power over everything (though this may be the case 😉 ). After several weeks of trying to get the attention of the dotsrc staff so I could get commit access to the repository (and actually freakin' write code for the project I was supposedly maintaining), I decided enough was enough, and used svnsync to create an identical copy of the Subversion source code repository, so I could give myself access. 🙂

New mailing lists

Another motivation for moving the repository was that I wanted a mailing list for receiving commit notifications, so everyone who's interested can see what development is going on. Mauro Tortonesi, the previous Wget maintainer, related that he'd tried to get the dotsrc staff to put such a thing in place, but to no avail. So I created a list for this purpose, which also receives bug-report change notifications from Savannah; and another, very-low-traffic one for communication between just the developers who have commit access.

The Wget Wgiki

Next was to complete the migration of our web presence from the dotsrc site to the GNU site. The original plan was to host the entire web presence at the GNU site; however, at the same time, I was scheming about putting a wiki in place for collaborative definitions of specifications for future major improvements to Wget. When I finally got around to slapping MoinMoin onto my server (which I chose primarily out of familiarity, due to my involvement with Ubuntu), I began to realize just how much better it would be to host as much of our main informational content as possible on the wiki. So the end result is that the dotsrc site no longer exists (or, more accurately, redirects to the GNU site); the GNU site is a basic informational stub that points to the new wiki site (dubbed The Wget Wgiki), which holds all the real information.

Development schedule planning

Another thing I started doing early on was to draw up a project plan (Gantt chart) to try to target when we would release the next version of GNU Wget, 1.11. Since it was pretty much just me and Mauro doing active development (both of us with day jobs), I tried to be extremely generous with the amount of time it would take us to get things done, and wound up with a target of September 15. I'm confident we would've made it, too: we were on target in terms of development, but there ended up being some legal issues with the new version 3 of the GNU GPL, and the exemption Wget needs to make to allow linking with OpenSSL, an incompatibly-licensed library that handles encryption for things like HTTPS. We're still waiting for the final consensus from the FSF legal team.

At the moment, we're not code-ready anyway; but we would've been, had we not been somewhat demotivated by the fact that our code-readiness (or lack thereof) isn't going to affect when we can release. I chose to work more on the wiki instead of code at that point, and on evaluating decentralized SCMs as potential replacements for Subversion. Now that I do most of my work on a laptop, a DSCM is convenient. So far, Mercurial seems like a good bet, but we're still discussing it on the list. Several folks prefer Git, but Git seems to be heavily Unix-centric, with limited support for other platforms; given that Wget is also used on those other platforms, there seems to be some merit in preferring a more multiplatform solution. But we'll see.

GNU Maintainer Me

This morning I was officially appointed as the maintainer of GNU Wget (one of the tasks on my to-do list is to update that page to the current, more modern GNU look, but for now, I've at least got my name on it 🙂 ). Wget is a very versatile command-line application for fetching web files, and can be used to grab local copies of web sites, or sections of web sites. I've used it many times for a variety of reasons: a quick fetch of a web file to disk, grabbing a portion of a website so I can view it offline, web debugging…

It's a fairly high-profile tool in the GNU and Unix worlds, so I'm proud to be able to be a part of it. It will be a big investment of time, which I never have a whole lot of, but I am very, very motivated. I spent the weekend categorizing and prioritizing the things that need to be done on Wget, so I have a fairly solid idea of what needs to be accomplished.

Here’s the announcement on the wget mailing list.

PostScript is Amazing

A sample maze

Well, I finally completed a little toy project I'd been working on, off and on, for a while.

It's a program written in PostScript that generates random mazes. You can try it out: a random maze will be generated as a PDF file, which you can view in Adobe Acrobat Reader or any other PDF viewer. I have another version which produces six mazes to a page. The original PostScript version (which you can download here) can be configured to print an arbitrary number of mazes per page, and was written using the Test-Driven Development paradigm (so it includes a fairly complete set of regression tests).

I wrote it after reading the source code of the program "Amazing", written in BASIC, which was presented in a book I used to read when I was a kid, trying things out on my TRS-80 or in Microsoft BASIC on my Macintosh Plus.

I dug up the source code again while I was searching for simple, text-based games that I could use as a means to teach the C programming language to newbies. The example from the BASIC Computer Games book was rather illegible, and I spent a weekend deciphering it so I could understand the underlying algorithm. In the process, I ended up finding a couple of bugs in the program, such as the occasional omission of the maze exit point, or the generation of unreachable locations in the maze. I wish now that I'd found the original version of the program, which is much more readable. I think David Ahl's version from the book must have been written to conserve absolutely as much space as possible, at the expense of comprehensibility.
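For the curious, here's a minimal sketch of one standard maze-generation technique, depth-first backtracking (this is not necessarily the exact algorithm "Amazing" uses; the function and cell representation are my own invention for illustration). Because it carves passages as a spanning tree over the grid, it can't produce the "unreachable locations" bug mentioned above:

```python
import random

def generate_maze(width, height, seed=None):
    """Carve a perfect maze via depth-first backtracking.

    Returns a dict mapping each cell (x, y) to the set of neighbouring
    cells it is connected to (i.e. cells with no wall between them).
    """
    rng = random.Random(seed)
    passages = {(x, y): set() for x in range(width) for y in range(height)}
    visited = {(0, 0)}
    stack = [(0, 0)]
    while stack:
        x, y = stack[-1]
        # Unvisited orthogonal neighbours of the current cell
        unvisited = [(nx, ny)
                     for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                     if (nx, ny) in passages and (nx, ny) not in visited]
        if unvisited:
            nxt = rng.choice(unvisited)
            passages[(x, y)].add(nxt)   # knock down the wall, both ways
            passages[nxt].add((x, y))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()                 # dead end: backtrack
    return passages
```

Since every cell is visited exactly once and connected to the cell it was reached from, the result is a spanning tree of the grid: every location is reachable, and there is exactly one path between any two cells.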

What Is a Good Test Case?

Found this brief PDF essay to be an interesting read. I feel that the title is a bit of a misnomer, though, as it doesn’t really answer that question. It does a good job of defining some sometimes-confusing software QA terminology, and identifies common testing paradigms that are used to define test cases.

Clever C Code to Calculate "π"

Someone on IRC (in #ubuntu-offtopic) sent a link to a delightful little 1988 IOCCC winner, written by Brian Westley.

The functionality of the program is to print the value of the constant π ("pi", ≈ 3.14). The beauty of it is that the source code itself is laid out as a circle: the larger you make the circle, the more accurate the value (though you'd need to adjust the precision of the printf() format string).


Did you ever own an Intellivision video game system?

After our Atari 2600 got stolen out of our home, along with a guitar and camera, my uncle Gary bought our family an Intellivision, along with scads of games. That console makes up my earliest childhood memories of videogame-playing (I barely remember anything at all about the 2600; just a dim memory of playing Basketball with my dad).

Night Stalker was cool enough, and my best friend Laban (he still is) and I spent hours discovering neat tricks in Utopia, like how to turn your PT boat invisible, make it travel on land, or sink a harbored fishing boat. But far and away, my favorite game was Tron® Deadly Discs.

Tron® Deadly Discs mini-screenshot

I'd always loved that game…

Five Truths About Code Optimization

I discovered a terrific blog post on code optimization, through my friend Mars’s blog.

To summarize in my own words, the most important parts are:

  1. Start with unit tests, so you can be confident that the code worked in the first place, and that you haven’t broken anything in the optimization process.
  2. Never assume you know what the bottleneck is, even if it’s “obvious”. Profile first, code later.
  3. Reprofile after every code change, so you know whether the remaining efficiency issues are still in the same place, or if a new spot is the current big problem.

There are two more points in the post, but they are either obvious or much less important, IMHO.
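The "profile first, reprofile after every change" advice from points 2 and 3 is cheap to follow in practice. Here's a minimal sketch using Python's built-in cProfile and pstats modules (the slow_sum function is a made-up stand-in for whatever hot spot you're investigating):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """Deliberately naive example function: a hypothetical hot spot."""
    total = 0
    for i in range(n):
        total += i
    return total

def profile(func, *args):
    """Run func under the profiler; return (result, stats report text)."""
    prof = cProfile.Profile()
    result = prof.runcall(func, *args)
    out = io.StringIO()
    # Sort by cumulative time and show only the top few entries
    pstats.Stats(prof, stream=out).sort_stats("cumulative").print_stats(5)
    return result, out.getvalue()

result, report = profile(slow_sum, 100_000)
```

Running the same profile call after each optimization attempt (point 3) shows whether the bottleneck actually moved, rather than leaving you to guess.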