Author Archives: Micah

About Micah

Oldest of 8 children. I'm skilled in piano performance and computer programming (especially C and Perl on Linux), and have a strong interest in typography and in well-made entertainment media such as books, movies, and video games.

Obama Quote on “Doing Green Things”

My new favorite Obama quote, from the highlights from Newsweek’s Special Elections project:

I often find myself trapped by the questions and thinking to myself, ‘You know, this is a stupid question, but let me … answer it.’ So when Brian Williams is asking me about what’s a personal thing that you’ve done [that’s green], and I say, you know, ‘Well, I planted a bunch of trees.’ And he says, ‘I’m talking about personal.’ What I’m thinking in my head is, ‘Well, the truth is, Brian, we can’t solve global warming because I f—ing changed light bulbs in my house. It’s because of something collective’.

Five Steps to Happiness

The Daily Telegraph has an enjoyable, light read which they’ve titled, “Scientists suggest five ways to stay sane.”

From the article:

Connect

Developing relationships with family, friends, colleagues and neighbours will enrich your life and bring you support

Be active

Sports, hobbies such as gardening or dancing, or just a daily stroll will make you feel good and maintain mobility and fitness

Be curious

Noting the beauty of everyday moments as well as the unusual and reflecting on them helps you to appreciate what matters to you

Learn

Fixing a bike, learning an instrument, cooking – the challenge and satisfaction brings fun and confidence

Give

Helping friends and strangers links your happiness to a wider community and is very rewarding

Git vs Mercurial

I just got out of a point-by-point comparison of Git and Mercurial, presented by one developer from each project (simultaneously). Both are DVCSs that are popular among Free and Open Source Software projects.

The presentation pretty much confirmed my existing convictions about the two projects: they are extremely comparable in terms of the features provided, though of course there are differences in the approach that can make a big difference for people’s preferences. But the major difference seems to be that Mercurial has significantly better documentation, is easier to use overall (at least for the most common tasks), and runs better on Windows.

The better performance on Windows was the main reason I chose to migrate GNU Wget to Mercurial rather than Git. Its ease of use and its similarity to Subversion are also why I prefer it for my own personal projects.

When I had to choose a DVCS for GNU Screen, on the other hand, I went with Git. This was mainly because I believed it to be essentially equal in power to Mercurial (a belief I feel this presentation confirmed), because Screen is of no use to Windows users anyway (at least, other than ones running a Unix environment such as Cygwin), and because Git would be the more familiar DVCS to the folks most likely to be interested in developing Screen (several other important GNU projects, such as coreutils, gnulib, autoconf and automake, are managed with Git).

Thoughts on the Future of Human Evolution

Prefacing disclaimer: obviously, I’m not a geneticist, biologist, or any other expert qualified to speak authoritatively about evolution.

My attention was caught by the title of a recent lecture from University College London Professor of Genetics Steve Jones: Human Evolution is Over (here and here), which I first found via Digg. I was disappointed by the lecture, or at any rate by the summaries of it in the news, since I can’t find the text of the actual lecture; but it caught my attention because I’d been thinking for a while about human evolution, and the rather unusual spot we find ourselves in now.

According to the articles, the lecture essentially makes the argument that human evolution is at a near-standstill, because natural selection pressures are low, and because older fathers aren’t as commonplace as they once were (older fathers’ sperm-manufacturing cells being the product of more cell divisions, giving greater opportunity for genetic mutation).

I’m baffled by the argument that the pressures of natural selection from extremes of heat and cold, or famine, are nullified by modern heating and air conditioning, and by plentiful food. Current heating and AC technologies are not so perfect as to effectively nullify the selective effects of environmental extremes (and anyway, not everyone in the world has access to these technologies), and food is far from universally “plentiful”. Even ignoring those, there is plenty of opportunity for competition and natural selection, through the population growth pressures present in many areas, and the epidemics of disease in many corners of the world.

Meanwhile, the argument that younger fathers have eradicated the opportunity for genetic mutation rather ignores how we got where we are, and the fact that a 35-year-old father has already lived longer than the entire lifespan of the bulk of our ancestors, in many cases by a huge margin. It is true, though, that our cells’ mechanisms for guarding against infidelity in genetic copying are more advanced now than they once were.


And yet, despite these flaws in the arguments (at least as represented in news sources), I wonder whether there’s still some (limited) truth to the conclusion?

We’re in a weird place in evolutionary history—really, completely-uncharted territory. We are the first creatures to have become aware of the underlying mechanisms of our evolution, the first to be in a position to actually manipulate genetic material directly—both our own material and that of other plants and creatures. I wonder whether natural selection will soon be made irrelevant through our own increased powers of artificial auto-selection.

Even aside from our fledgling ability to govern our own genetics, our modern medical technology is already starting to turn the tide of natural selection through artificial compensations. I suspect penicillin, which might at first glance seem to be a good example, is actually a poor one: it’s not an example of our ability to conquer natural selection, but only our latest short-term triumph against it in our perpetual battle with disease. After all, it’s only been around for eighty years, and there are already plenty of examples of diseases that have evolved immunities to it. Before too long, we’ll have to invent it all over again. And again.

On the other hand, in a world where prosthetic legs are available, the effects of natural selection on a “clumsiness” gene that makes people more apt to lose a leg are diminished (though not if the “clumsiness” also produces greater risk of direct loss of life 🙂 ). Genes that would otherwise have become extinct due to unpleasant aesthetic effects (lessening sexual desirability) are given a reprieve by cosmetics and (in more extreme cases) cosmetic surgery.

We already screen fetuses for common genetic diseases; how long before we start ensuring their absence through direct manipulation, à la Gattaca? I don’t really think we’ll ever find ourselves in a world where, as in the movie, genetic discrimination ensures that unmanipulated individuals cannot obtain white-collar jobs or decent girlfriends; but I do think it’s likely that at some point, it will become somewhat routine for parents to screen their children’s genes, or the genes that they contribute toward child-bearing.

And that, in itself, disturbs me. Not because it’s unethical or somehow violates the sanctity of natural human reproduction, but because, from an evolutionary standpoint, it’s probably a really bad idea. The genes responsible for sickle-cell anemia and cystic fibrosis, when we are unlucky enough to inherit them from both of our parents, provide protection against malaria and cholera, respectively, when we inherit just one copy. In our zeal to weed out genes that confer confirmed negative effects, we are very likely to strain out genes whose undiscovered positive effects actually outweigh their known negative effects in some environments.

The problem is that artificial selection is guesswork: we presume to be able to deduce when our manipulations are for the best, but we rely on faulty human reason and incomplete understanding to make these decisions. Natural selection, on the other hand, has complete understanding and flawless, unconscious reason, in that it always guarantees that the surviving genes (over sufficient spans of time) are those that confer the greatest advantages.

Of course, natural selection’s perfect ability to decide which genes should be eliminated and which survive, comes at a price. For one thing, it’s awfully slow, and the knowledge that natural selection will provide us with an immunity to such-and-such a disease over the course of a number of generations is small consolation when we’re dying from it now. For another thing, it comes at the cold and calloused cost of human lives. Natural selection depends on death, in combination with reproduction and variation, to achieve its ends. There is no selection, natural or artificial, if some things aren’t dying while others survive.

Human sensibilities, meanwhile, demand that all human life is sacred, and not just the lives that nature would select. So of course we will continue to intervene on individuals’ behalf—we must. To not do so would be unfeeling, uncaring, and more than a little reminiscent of Nazi eugenics.

And yet, interference comes with a price of its own. As we relieve the effects of natural selection on individuals through our efforts to use technology to cure humankind’s ills, we condemn ourselves to evolve dependencies on those same technologies. Preventing natural selection from filtering out weak genes through quicker death to the possessors or, alternatively, taking nature’s responsibilities on ourselves and doing a lesser job of filtering out weak genes before birth, ensures that weak genes will proliferate where they otherwise would not. This in turn forces us to continue to use our medical technologies perpetually, lest we suffer nature’s belated compensation for our weakened genetic resilience.

Meanwhile, natural selection will continue to have its way with those peoples for whom the wonders of modern technology are out of reach. After all, all these arguments from modern medical and genetics technology suffer the same flaws I noted for Jones’ arguments from central heating and plentiful food: they can’t completely eliminate natural selection, and (more importantly) not everyone has them. While the middle and upper classes of the affluent nations of the world develop a dependence on their savior technologies, those who can’t afford these miracles will continue to depend on nature to provide them with the protections they need, the hard way. But interbreeding between the privileged and unprivileged will help reintroduce healthy genes to the “privileged”, who otherwise might find themselves in an ever-escalating battle against their own degrading DNA, and might begin to dwindle in numbers in comparison to the healthier “unprivileged”.


Whether or not human evolution has effectively ceased (cough), our tools have certainly been evolving lately at a much higher rate than we ourselves have been. Perhaps the tools we have created will increasingly exert selective pressures of their own on human survival statistics, resulting in an accidental artificial selection. There are those who suggest that this has already played a large role in guiding our evolution to its current state: we start using a tool that benefits us, and suddenly the people best equipped to use that tool have the best chances for survival.

In our current digital age, more and more professionals are finding themselves having to think about multiple tasks simultaneously (or as nearly so as we are currently capable of), and to deal with multiple channels of information. These are both things that we are currently fairly poor at handling, as a species; but perhaps evolution will produce people with true multitasking capabilities, handling multiple simultaneous conscious thoughts, or at least able to take better advantage of what multitasking capabilities the subconscious mind already possesses. Could the human brain eventually become “multicore”? 🙂

As our tools continue to evolve, I expect we’ll eventually do away with such impediments as keyboards, and be capable of communicating thoughts much closer to the speed at which we actually think them. In that event I imagine that rapid thought might become an evolutionarily favored trait. That might in turn result in an eventual (very eventual) reduction in the complexity of our vocal capabilities, as they cease to be necessary, much in the same way that we lost our (external) tails, or that whales lost their legs.

…I worry that, contrary to popular belief, human powers of reason may not be especially favored by evolution. A lot of people believe that humans are more intelligent because we are “more evolved”; but of course on reflection that’s simply not the case. We are not “more evolved” than chimpanzees, because we did not evolve from chimpanzees; chimpanzees and ourselves both evolved from some common ancestor, which means that we have had precisely as much time to evolve from that point as chimpanzees have had. We are both the result of generations of adaptations to become quite well-suited to our respective, different environments. Ours just so happened to favor greater intelligence. It doesn’t follow that we’re “more evolved”, just because the thing we’re most proud of happened to be favored (after all, it’s what we’re most proud of because it’s what natural selection favored).

And humans are not especially logical. We may be much more so than our evolutionary cousins, but we’re still not especially bright. Evolution appears to have favored quite a few mental aberrations over and above any favor given to logic and reason. Our brains are built to strongly prefer

  • conclusions that align well with beliefs we already hold
  • conclusions that facilitate a positive self-image
  • conclusions that facilitate our desires
  • conclusions that align with our emotional reactions to things
  • conclusions that provide us with hope and a positive outlook, even when such an outlook is unrealistic

All of these, over conclusions that arise logically from the available facts. If the available facts are in conflict with any of these—especially the top couple—it may be a minor miracle when we actually manage to arrive at the truth. Add to these a propensity for finding patterns so strong that we regularly find them where they don’t actually exist (number “patterns” in sequences of random numbers, faces of religious figures in food items, tree knots, and whatnot…), and tendencies to weight some data far too heavily and other data too lightly (“Counting the hits but not the misses”, leading to convictions of prophetic truths, or miraculous answers to prayer, etc.).

The way I see it, the human race has finally become just intelligent enough to begin to realize just how unintelligent we really are.

And many of these impediments to intelligence and reason appear to be the result of natural selection, indicating that they are flaws that tend to increase chances of survival; in which case there’s little hope of reaching higher levels of reason until whatever selective pressures produced these flaws cease (or are overruled by still greater selective pressures, and perhaps that is already the case).

Quotation of the Day

I want you to just let a wave of intolerance wash over you. I want you to let a wave of hatred wash over you. Yes, hate is good…. Our goal is a Christian nation. We have a Biblical duty, we are called by God, to conquer this country. We don’t want equal time. We don’t want pluralism.

—Randall Terry, founder of Operation Rescue, quoted in the Fort Wayne, Indiana News-Sentinel on August 16, 1993 (according to Wikipedia).

Thanks to the Barefoot Bum for bringing this to my attention, and to Five Public Opinions for bringing it to his.

Adventures in Haskell, Part 2: Kewlness

This is a continuation of part one, which was a very basic introduction to functional programming. In this portion I’m pointing out some aspects of Haskell which I find to be fairly kick-ass. 🙂

My apologies if 80-90% of this post is gibberish to you. I try to be clear, but I’m also trying not to spend a copious amount of time writing this post. Think of it as a rather detailed “best things about Haskell” list, rather than anything remotely like a tutorial, or something that otherwise gives you an actual decent understanding of Haskell as a language. The idea is to try to whet your appetite to investigate further.

Death to Parentheses

Well, the first thing that impressed me is going to be old hat to folks whose primary experience with functional programming is with languages other than Lisp or Scheme, or to folks who have no FP experience at all; but Lisp is notorious for its excessive use of parentheses. Every function call gets its own pair, and since invoking functions is a frequent occurrence in functional programming, you get quite a lot of them. Even better, when you’re done with all your nested function calls, you’re left to figure out how to place the dozen or so closing parentheses. Here’s an example of the crappy powers function from part one of this miniseries, in Lisp:
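    ;; roughly -- Emacs Lisp and Common Lisp both look about like this:
    (defun powers (i v)
      (if (>= i 8) '()
        (cons v (powers (+ i 1) (* v 2)))))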

For this example, it’s not too terrible; but it’s a simple function, and already we end up with five successive closing parentheses at the end. Imagine how more complex code might look! Still, you get used to it; and at least there’s never a question of what’s an argument of what.

The parens tend to bunch up when you nest function calls:
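    ;; roughly, in Emacs Lisp (3* and filter are defined elsewhere; oddp comes from the cl library):
    (reverse (mapcar '3* (filter 'oddp (number-sequence 1 10))))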

The example above includes non-standard functions (3* and filter) whose definitions I have not provided (they’re trivial); it takes a list of numbers from 1 to 10, then filters the list so it’s just the odd numbers in that range (1 3 5 7 9), multiplies each element of the list by three, and then reverses the list (largest to smallest). You may note that it’s sometimes easier to read expressions like the above from right to left.

In Lisp there’s no way to avoid these parens in a series of nested function calls. In Haskell, you could also write something similar to the above…
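    reverse (map (3*) (filter odd [1..10]))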

However, Haskell provides a “dot” operator, which applies the function on the left to the output of the function on the right, so you can also write this snippet like:
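    (reverse . map (3*) . filter odd) [1..10]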

The outer set of parentheses is necessary in the above example, because the dot operator takes a function on either end, and the result of filter odd [1..10] (which binds more tightly than the dot operator; function applications always bind more tightly than operators) is not a function but a list. You might wonder how filter odd and map (3*) can be functions; they look more like the result of a function applied to an argument. And they are: one of Haskell’s great features is partial function application: you can pass one argument to a function that takes two, and the result is a function that takes the “second” argument. The (3*) is actually a partial application too: in the Lisp example, I would have had to define a function named 3* to make it work; but in the Haskell example (3*) is a partial application of the * operator.

I like to read the dot operator as “of”. So you can read (reverse . map (3*) . filter odd) as “reverse the order of mapping the multiply-by-three operation across the elements of the result of filtering for just the odd-numbered values of…”

We can even get rid of those outer parentheses, too, if we want:
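    reverse . map (3*) . filter odd $ [1..10]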

The dollar operator takes a function on the left and applies it to everything on the right as a single argument. It’s right-associative, so we actually could also write this expression as:
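    reverse $ map (3*) $ filter odd [1..10]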

The only parentheses we can’t get rid of are the ones in (3*), where they are used in the partial application of the * operator, called a section. Parentheses are part of the syntax required to designate a section. Of course, if we really wanted to, we could just write a function that does the same thing, and name it mulThree or what have you.

Personally, I like this one best:
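    reverse . map (3*) $ filter odd [1..10]    -- (or some such mix of (.) and ($))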

Function Definitions and Pattern Matching

In part one of this overview, we presented this version of our powers function in Haskell:
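    -- powers 0 1 yields [1,2,4,8,16,32,64,128]
    powers i v
      | 0 <= i && i < 8 = v : powers (i + 1) (v * 2)
      | otherwise       = []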

The truth is, though, we don’t need both i and v. One value could be deduced from the other. I just kept both parameters because they’d been needed in C (they could each be deduced from the other in C as well, but at a performance penalty; there are other ways to write the example in C without using i; but the example was written for readability).

A common way to write function definitions in Haskell, is to first write a definition of the function for a concrete value, and then define the function for other values in terms of that first definition. For instance, we could’ve written powers like:
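    powers 0 = 1 : powers 1
    powers 8 = []
    powers n = 2 * head (powers (n - 1)) : powers (n + 1)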

Note that this assumes no one ever calls powers 9; and also that powers now takes one argument instead of two (we dropped v).

This series of definitions means that

  • “powers 0” is a list whose first element has the value 1, and whose remaining elements are defined by “powers 1”
  • “powers 8” is the empty list.
  • “powers n” for any other value n is a list whose first element is twice the value of the first element of “powers (n-1)”, and whose remaining values are defined by “powers (n+1)”.

This winds up giving us the same list result for powers 0 as we previously got for powers 0 1 in our original definition of the function.

Note that there’s just one function powers, but there are three definitions for that function. These definitions combine to provide the complete description of what powers means.

Pattern Matching

We’ve just made a slight use of a very powerful Haskell feature, pattern matching. In this case, we’re matching arguments to powers against various possibilities of 0, 1, or n (where n can be any value). But pattern matching can actually be performed against any sort of value or partial value.

For instance, if we wanted to write a very simplistic function to convert a string into 1337-speak, transforming (say) "hello world" into "h3110 w0r1d", we could write it like:
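    -- a handful of illustrative substitutions (l, e, o, t); extend to taste
    leetize ""       = ""
    leetize ('l':xs) = '1' : leetize xs
    leetize ('e':xs) = '3' : leetize xs
    leetize ('o':xs) = '0' : leetize xs
    leetize ('t':xs) = '7' : leetize xs
    leetize (x:xs)   = x : leetize xs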

The above could have been written somewhat more tersely, but it’s a good demonstration of using pattern-matched function definitions. The (x:xs) patterns describe a list (: is the list-construction operator) whose first element is x and whose remaining elements are the (possibly empty) list xs. The first definition in this example simply says that applying leetize to an empty string will yield an empty string as the result. The following definition means that if the leetize function is applied to a string whose first character is the letter ell, then the result is the digit one, followed by the application of leetize to the remaining characters of the string. The final definition matches (x:xs) for any string beginning with an arbitrary character x that doesn’t match any of the previous patterns; it just leaves that character as-is, and continues the transformation on the remaining characters.

Roll your own operators

Another thing that I think is really cool in Haskell is the ability to invent or redefine your own operators. Languages like C++ let you override the definition of existing operators for new types, but Haskell lets you invent entirely new operators. Want to define a new operator *!? It’s as easy as writing
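    -- a made-up operator, purely for illustration: multiply, then negate
    x *! y = negate (x * y)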

You can also use normal functions as if they were operators, by enclosing them in backticks. This is common with functions like elem, where elem x mylist evaluates to True if x may be found as an element in mylist. While elem x mylist is fine, the meaning of x `elem` mylist seems just a little clearer.

For both operators and functions-applied-as-operators, you can specify associativity and precedence. (You can only define binary operators; all operators in Haskell are binary operators, except for unary -, as used in expressions such as -3.)
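For instance, a fixity declaration for the made-up *! operator above might look like:

    infixl 7 *!    -- left-associative, at the same precedence level as (*)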

Lazy Evaluation

One of the things that many people find really cool about Haskell (not just me!), is that it follows a “lazy” or “non-strict” evaluation scheme. The value of an expression is only calculated when it is required. This has some rather interesting consequences.

One consequence is that Haskell allows you to define lists that don’t end. Among other things, this means I’ve been making our running powers example way more difficult than it needs to be. Instead of defining a function that takes some starting value and keeps doing calculations until it reaches a maximum, I can have it just keep calculating that result forever:
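    powers = 1 : map (2*) powers    -- one way to write it: the list defined in terms of itself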

Note that this latest powers doesn’t need any arguments at all. It’s just bound to the (infinite) list of all 2^n, for n ≥ 0. That’s a list without a limit; but that doesn’t matter, so long as we never try to evaluate the entire list. If I ask Haskell to give me just the first eight elements of that list (with take 8 powers), it works just fine. Haskell performs the necessary calculations to bring me the first eight elements, and doesn’t continue trying to calculate any further elements.

Infinite-length lists are fun, but when you come down to it, they’re really just neat sugar. One area where Haskell’s lazy evaluation really comes in handy is in code like:
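    import Data.Char (toUpper)

    main = do
        slurp <- getContents
        let upperCased = map toUpper slurp
        putStr upperCased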

This is a complete program to convert the standard input to all-uppercase, and print the results to the standard output. It binds the variable slurp to the full contents from standard input as a string, then binds the variable upperCased to a string which holds the result of mapping the toUpper function against each character in slurp. Finally, the string held in upperCased is printed to the standard output using putStr.

Similar code in Perl might look like:
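    # read all of standard input at once, uppercase it, print it
    local $/;                     # disable the input record separator: slurp mode
    my $slurp = <STDIN>;
    my $upperCased = uc $slurp;
    print $upperCased;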

The difference is that the Perl example will happily try to take a 5GB file and shove it all into memory (or, much more likely, make the attempt and choke), whereas the Haskell version, which looks nearly identical, will process input a bit at a time—how much of a bit depends on the buffering scheme, which depends on the implementation—but the point is, it’s not going to try to slurp it all into memory. It’ll process data a bit at a time, converting it to uppercase, and spitting it out. It only grabs input when it’s ready to print out the next few characters. Then it looks at what it’s printing—upperCased, and sees that it’s supposed to hold the uppercasing of slurp. It then sees that slurp is supposed to be a string holding the content from standard input, so it goes ahead and reads the next bit from standard input, uppercases it like it’s supposed to… rinse, lather, repeat.

What’s neat about this is that it frees you from having to think about the details. Your code doesn’t need to “know” that it’s dealing with input piecemeal: as far as it’s concerned, it’s just transforming a string. Naturally, Perl is just as capable as Haskell at processing input a piece at a time; the difference is that the Perl code will be very aware that it’s doing things one bit at a time, while the Haskell code “thinks” it’s just working on the whole data.

Note, the Haskell example is for illustration. A much simpler program that does the same thing is:
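    import Data.Char (toUpper)

    main = interact (map toUpper)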

Monads

You may have noticed that the original version of the uppercasing program looked somewhat procedural-ish. The do keyword introduces a block of code that is essentially procedural; but it never for a moment violates the rules of a pure functional language. The behavior of do is defined in terms of a series of function applications and nested name-bindings. The important thing is that Haskell preserves the principle of non-destructive, immutable data, and do doesn’t let you do anything you can’t do without it; it just looks much cleaner than its equivalent.

What’s cool about the do block, though, is that it actually provides a mechanism that many people refer to as overloading the semicolon. The do block is defined in terms of a couple of operators, >> and >>=, which the user can overload in the types they create. All types that can be used as the types of the expression-statements in a do block, and as a do block’s own type, belong to a class called Monad.

For instance, take the Maybe parameterized type (parameterized means it takes another type as a parameter; for instance, it can be a Maybe String or a Maybe Integer). This type, provided by the Haskell standard library, offers a way to represent a “null”-like value. For instance, an expression that returns a Maybe String can construct return values such as Just "hello" or Just "what is your name?"; it can also use the constructor Nothing, to represent no value (often used to indicate an error condition). For instance, Haskell provides a find function that takes two arguments: a predicate function, and a list. If it finds an element a in the list that matches the predicate, it will return Just a; if it doesn’t find anything, it returns Nothing. So, find (< 2) [1, 5, 3, 9, 8] will return Just 1; but find (> 10) [1, 5, 3, 9, 8] would return Nothing.

So, what happens if you need to use the result from several successive finds? Say, to find the customer ID corresponding to a name, and then use that to find a sales record? You could end up with successive, nested if expressions to check for Nothing before proceeding. Rough pseudo-Haskell:
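    -- rough sketch: lookupCustId and lookupSalesRecord are hypothetical helpers,
    -- each returning a Maybe something
    let custId = lookupCustId name customers
        record = lookupSalesRecord (fromJust custId) sales
    in  if isJust custId
            then if isJust record
                     then Just (fromJust record)
                     else Nothing
            else Nothing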

(There are severe glitches in the above, but it was written for human readability, and not actual compilability.) Note that it’s safe to bind record as above, even though we haven’t yet checked isJust record, because it’s not actually evaluated until we use it, and we don’t use it until after we check.

Imagine if there were even more dependent lookups we needed to perform! The nested if expressions would quickly become unwieldy. However, since the Maybe type is a Monad, we can write it with the do block, like this:
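    -- (same hypothetical lookup helpers as above)
    do custId <- lookupCustId name customers
       record <- lookupSalesRecord custId sales
       return record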

The Maybe type overloads the meaning of the name bindings done with <- (actually, it overloads >>=, but <- is defined in terms of >>=). If the result of the right-hand side is a Just x (for any x), it binds custId to x (stripping off the Just) for the remaining statements. If either of the two <- statements results in a Nothing value, then the remaining statements are not processed, and the result of the entire do block is Nothing.

I was at first dismayed when I discovered that Haskell’s exception mechanisms are not as powerful as one might hope. However, as it turns out, it doesn’t actually pose a problem, because you can just use Maybe and do blocks, as in the example above. The first <- binding or statement that evaluates to Nothing will short-circuit the rest of the code, eliminating the need to manually check for error-status results after each statement!

This “semicolon-overloading” mechanism is really very powerful. The do block provides a mechanism for compactly and abstractly representing a sequence-of-statements, where a statement is either a Monad-returning expression or a name-binding; but the monadic type gets to decide what “sequencing” these things actually means.

A great example of the power of Monads is that simple lists are also Monads. But with list types, the name-binding <- notation is overloaded so that x <- someList will result in x being set to each element of someList in turn, and then all the remaining statements that appear after the <- binding statement in the same do block will be evaluated in turn, for each x. The result of the do block will be the list of each of those results. As an example, here’s a simple function that pairs up every element of one list with every element of another:
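    pairings xs ys = do
        x <- xs
        y <- ys
        return (x, y)

    -- pairings [1, 2, 3] "ab" is [(1,'a'),(1,'b'),(2,'a'),(2,'b'),(3,'a'),(3,'b')]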

As perhaps you’re beginning to see, Monads are an incredibly powerful feature of Haskell. In fact, Monads combined with Haskell’s lazy, evaluate-as-needed behavior are what enable Haskell to deal elegantly with interactive user input, without sacrificing its status as a pure functional programming language. Interactivity (such as might be found in a typical text editor) is not possible in languages that are purely functional (immutable data) and use strict (non-lazy) evaluation, because such languages require all the input to be available before processing can even begin.

String Literals

I haven’t seen this emphasized anywhere, and it’s a small thing, but one more aspect of Haskell that strikes me as particularly elegant is the way it deals with string literals.

One of the things I really like, is that Haskell’s string literals can include spans of whitespace that are ignored, when they are enclosed within backslashes; this whitespace can include newlines. So you can write something like:
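    -- the backslash "gaps" swallow the source newlines and indentation
    main = putStr "**************\n\
                  \Hello, World!\n\
                  \**************\n"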

The large “Hello, World!” will be printed flush-left, but we were able to align the literal rather nicely in the Haskell source. This sort of thing can be a little more difficult in some other languages.

Also, in addition to the usual C-style \r, \n, \b, etc., and hexadecimal and octal notations (which differ slightly from C’s), Haskell provides a \^C notation (where '\^C' represents “Control-C”), and a \ESC notation (representing the escape character, using its ASCII acronym). Thus, the ASCII backspace character can be represented as any of '\b', '\BS', '\^H', '\o10', or '\x8'. Every one of those representations can be useful in different circumstances, especially for a terminal enthusiast and GNU Screen co-maintainer such as myself.


Well, that pretty much sums it all up. This took a lot longer to write than I really wanted it to, so maybe I’m finished talking about Haskell for a while. In presenting all the things that really impress me about Haskell, I’m perhaps painting an unrealistically rosy picture of the language. I do believe that Haskell, and its current best implementations, have some significant shortcomings (but none that actually render Haskell impractical, as is sometimes suggested); if I get around to writing a “part 3”, I’ll probably dwell on those, to counter-balance the wondrous things I covered here. However, that one might be a while in coming, as I think I’ve spent quite enough time writing about Haskell for a limited audience, for now. 😉

Why I’ll Never Buy Another Card From American Greetings

So, my wife’s birthday was yesterday. I had all the gifts wrapped, but then realized that I hadn’t remembered to get any cards to go with them.

So I went online looking for create-and-print cards that I could have done in time, and settled on AmericanGreetings.com’s “Create & Print” section. It requires a $20/year subscription, but offers a 30-day Free Trial period. So great, I’ll sign up for it, and be sure to cancel within 30 days.

After browsing around for a few minutes, I found a card I liked, and clicked to start the process of customizing and printing it. I then got a message about installing a plugin to continue. Fine. Except that the download link is a 404 (that’s for Firefox; I suspect MSIE would have worked fine).

Well, screw that. I’m not going to mess around any more with it, I just wanted it for my wife’s birthday, and dealing with tech support would be too little, too late. So I wanted to cancel the account.

The problem is, there was no obvious way to do this. I looked under “My Account” (or “My AG”), the first most-obvious place, and there was nothing there for cancellation. I had to spend around 20 minutes in the help section before I finally found it: “Currently online cancellation is not available. In order to verify ownership, we require that all cancellations be completed over the phone.”

This, of course, royally pissed me off. This is a really lousy way to do business, and I’m fairly certain that it’s illegal as well: it should be as easy to unsubscribe as it is to subscribe. I tried calling, but of course, being Sunday, I was given a “we’re closed, our business hours are…” message.

So I come around today to cancel the subscription. But their site is acting up, and four out of five page loads fail, so I’m hitting reload three to five times after every click, in order to find the page that gives the phone number. Fortunately, I was only on hold for about 5 minutes. I got “Why do you want to unsubscribe? We notice that you just subscribed yesterday…”, but after reiterating that it was what I wanted, I got the cancellation.

This is a really shitty way to treat your customers, though, and I’ll be damned if I consciously buy something that feeds money to them in the future. Of course, I won’t yell at friends or family for giving me an American Greetings card, though I do hope they read this and join me in my boycott; this kind of business practice is really unpardonable.

I’ll note here that Strawberry Shortcake and Care Bears are among the properties of American Greetings. Subsidiaries include Gibson Greetings and Carlton Cards.

Adventures in Haskell

So, for the last few weeks I’ve been learning the Haskell programming language, and thought I’d share my thoughts on it. Haskell is a pure functional programming language that supports lazy evaluation (I’ll explain all that in a moment), and it has been gaining popularity in some circles (particularly among academics and computational linguists). It has (semi-)famously been used to write an implementation of the Perl 6 programming language, and also a distributed revision control system (Darcs).

I’d heard of the language a couple years ago, and have been wanting to learn it since then, but the freely available resources I’d been able to find for learning about the language were frustratingly poor, and printed books were in the range of $80 and up, which is a bit steep for learning a language as a hobby. I finally found an upcoming O’Reilly book, Real World Haskell, whose full content is available for browsing online. Amazon says the printed version’s going to be $50, which is a marked improvement over other books I’d been considering buying. The book’s not perfect—there are several minor beefs I have with it—but the fact remains that it is the highest-quality resource for learning Haskell that is freely available. It also has the major advantage that it focuses heavily on real-world applications (as the name implies), including coverage of network programming, concurrency (multi-threaded or clustered computing), and writing language parsers; it even guides the reader through writing software to retrieve ISBN numbers from cell-phone photos of book backs! This probably makes it the most practical resource for learning Haskell, too—pricey or free.

I’m still working through the book, but I’ve supplemented my understanding so far by reading through the official language specification, The Haskell 98 Report, so at this point I’ve gained a pretty solid understanding of the core language and minimum libraries.

Intro to Functional Programming

(If you’re fairly familiar with functional programming, you might as well skip this article and wait for part 2.)

I’ve had previous experience with functional programming languages, mainly Lisp and XSLT, which helped in trying to learn Haskell.

Lisp is one of the more popular functional programming languages, and probably the oldest. My experience with it is primarily through the Emacs-Lisp dialect. Emacs is a very powerful text editor that is popular in the Unix world (and particularly on GNU/Linux). It is mostly implemented in Emacs-Lisp, and you can alter the program’s behavior while it’s running by editing the Lisp code. It’s the first Unix editor I learned to use, and though I primarily prefer Vim these days, I’m very comfortable with Emacs (these days I run it in a vi-emulating mode called viper), and have written code in Emacs-Lisp. I’ve also dabbled in Scheme, a Lisp dialect.

XSLT is a language for performing transformations on XML documents, and, like Haskell but unlike Lisp, it is a “pure” functional programming language (more on that in a moment). Unfortunately, it doesn’t really provide a complete set of useful tools (version 1.0, anyway: I’m not very familiar with the XSLT 2.0 spec), so my experience with it was that writing XSLT can be an exercise in frustration.

In imperative programming languages like C, C++, BASIC, Java, and many others, the focus is on performing a series of actions: do A, then do B, etc. In imperative programming, one often deals with data in memory that can be modified through a series of actions (mutable data), and the behavior of bits of code in the program may depend on the current value of some particular piece of data in memory, which may be changed by other bits of code.

Here’s a simple example of imperative programming in C code:
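    int a[8];
    int i, v;

    for (i = 0, v = 1; i < 8; i++) {
        a[i] = v;       /* store the current power of two */
        v = v * 2;      /* double it for the next element */
    }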

This snippet produces an eight-element array a whose elements hold successive powers of 2 (1,2,4,8,16,32,64,128). The code amounts to a series of instructions like:

  1. set the value of v to 1 and the value of i to 0
  2. set the ith element of the array a to the current value of v
  3. set the value of v to twice its previous value.
  4. set the value of i to one more than its previous value.
  5. if the value of i is less than 8, repeat from step 2.

Notice the heavy focus on performing a series of steps, and a dependency on modifying data.

In contrast to imperative programming, functional programming focuses on transforming input data into output data, by defining the function that performs the transformation. In a “pure” functional programming language, it is not possible to change the value of some object of data from one thing to another; in fact, there are no objects—there are only (immutable) values, and functions. This tends to place a higher focus on what the desired result is, rather than how we want to arrive at a given result.

Here’s an example of some code in Haskell that produces a result similar to the imperative snippet above:
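    powers i v
      | 0 <= i && i < 8 = v : powers (i + 1) (v * 2)
      | otherwise       = []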

If you invoke the function above as powers 0 1, its result will be a list of successive powers-of-two from 1 to 128, just like for our C code. However, you may notice that the method by which we’ve arrived at that list is somewhat different. Translated to English, this code says:

  • Define the function powers(i,v) (for any v, and where 0 ≤ i < 8) to be a list whose first element has the value v, and whose remaining elements are the result of powers(i+1, v*2), or, for any i not matching the qualification just given, the empty list.

…and that’s it. When you invoke powers 0 1, it constructs the element with value 1, then invokes itself again as powers 1 2, which constructs an element whose value is 2 and invokes powers 2 4 to construct the third element with a value of 4. When it finally reaches powers 8 256, it sees that i=8 doesn’t meet the criterion that 0 ≤ i < 8, and so it caps it off with an empty list result (which, since it doesn’t invoke itself recursively again, terminates the list, and the evaluation of powers).

Functional code like the above can look cryptic to imperative programmers who aren’t used to seeing it, but it’s learned fairly quickly. It happens to correspond fairly closely to how a mathematician would formulate a function definition, so folks who are comfortable with math will tend to feel right at home.

Note that it’s just as easy to do recursion like this in C:
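    /* fills a[0..7] with successive powers of two, recursively */
    void powers(int a[], int i, int v)
    {
        if (i < 8) {
            a[i] = v;
            powers(a, i + 1, v * 2);
        }
    }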

However, in real-world C implementations, each invocation of powers will consume additional memory. It works fine for our example, where we’re limited to a total of nine calls to powers, but for longer recursions you could run out of stack space, which is not a good thing.

Note that our recursive C example still isn’t pure functional programming, as we’re modifying the values of an existing array, and not producing new values.

Disclaimer: none of the C or Haskell examples are particularly idiomatic; they’re for rough example purposes only. I’d do them differently in real life; but I felt that these examples, as written, may be a little easier to discuss, and more accessible for folks that might not be terribly familiar with either language.

To be continued (very soon)… Part 2 will discuss some of the things in Haskell that I think are really cool. 🙂