can hidden complexity be good?

July 1st, 2008

The intuitive answer would be: no. Complexity is the huge cross we have to bear, the great weight that squashes our systems and makes them unmaintainable. We fight complexity tooth and nail, and hidden complexity is the worst kind, because it breaks things for reasons we don't understand. So the least you can do with a system is to grok its full complexity, even if it's too much of a mess to do anything with.

On the other hand... if you take a step back and think about what programming really *is*, then you might have to rethink that conclusion. It is, plainly, finding ways to solve problems of data processing in one way or another. And to be a coder, that's really all you need to know. It does mean that you risk ending up on thedailywtf, however. So now, how is "good code" different from "just code"? What characterizes code that wins our approval? In a word: simplicity. The smartest way of doing something is the simplest way that doesn't miss any of the requirements. A simple solution is an elegant solution, isn't it? Simplest often means shortest, too. The principle of simplicity also relates directly to the issue of complexity in a technical sense. High performance code is efficient because it's the laziest way to do the job. Inefficient code, conversely, does too much work -- it's for suckers.

What "good programming" is

This ingrained characteristic of programming is reflected very clearly in just about any discussion of code that is "too slow". People critique the code for being awkward, for doing things in a roundabout manner. Eventually you arrive at a solution that is typically both shorter and clearer. On the other end of the spectrum you have concepts like "beautiful code" and "code that stands the test of time". And when you look at the code people are raving about, the same thing stands out: it's simple. It's both blindingly obvious (once you get around to thinking about the problem in that particular way) and impressively simple for something *that* hard. It is the ultimate optimization of problem vs effort.

What we produce smacks somewhat of mathematical proofs. In mathematics you score points for simple solutions, but it's not strictly necessary. All it takes is for someone to read your proof and verify that it all fits together. No one is gonna run the proof on their machine a thousand times per second.

And that is what I'm driving at here. Programming is the activity of solving a problem such that the solution exerts the least amount of effort. It's kind of funny that we of all people are dedicated to this particular discipline. Us, with the expensive silicon that can perform more operations than any other machine.

Set in those terms, programming is the art of doing as little as possible. And the great pieces of code are great not in what they *do* but in what they *don't do*. In other words, if you write good code, it's because you've found a way to take an input and do the least amount of work to produce the output. Baked into that is the secret of choosing your input very carefully in the first place. So if you can gain something from the form that the input is in, then you're achieving something without writing any code for it. This is step 1 toward your brilliant piece of code.

Hidden complexity

But this is also when you start introducing complexity. Even if it's external to your program, it is still an assumption that must hold. Is this good or bad? From the classical software engineering perspective, you badly want to minimize the amount of text someone has to read to make sense of a piece of code. But then again, you need to know everything, because ignorance will bite you.

For example, in spiderfetch I spider a web page for urls. And it can run in a mode that will just output the urls and stop there. Now, if the urls are in the same order that they appear on the page, this is a big advantage, because the average web page can easily yield 50 urls and the user won't be able to easily recognize them if they come up in some random order. But this is also too cosmetic an issue to be an explicit requirement. I certainly didn't think of this particular issue when I started working on it. If you really wanted to document this kind of behavior down to the smallest detail, your documentation would be enormous. Pragmatically speaking, this behavior probably would not be documented.

Why is this an issue? Because giving a unique list of urls *is* a requirement, and it interacts with the ordering. (Hence, list(set(urls)) won't do the trick: a set throws away the order the urls were found in.)
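
For illustration, an order-preserving version can be as simple as this (a sketch; the function name is made up, it's not what spiderfetch actually calls it):

    def unique_in_order(urls):
        """Deduplicate urls while keeping the order they were found in."""
        seen = set()
        unique = []
        for url in urls:
            if url not in seen:
                seen.add(url)
                unique.append(url)
        return unique

    # list(set(urls)) returns the unique urls in arbitrary order;
    # unique_in_order keeps the page order:
    print(unique_in_order(["a", "b", "a", "c", "b"]))   # ['a', 'b', 'c']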

Suppose you find a way to produce the desired behavior without doing any work (or doing very little). Should this be documented in order to make this bit of complexity explicit? If this added bit of complexity doesn't affect the working of the function much at all, then it's quite peripheral anyway. What are the risks? If you break the function itself, then obviously the peripheral complexity doesn't matter anymore. If you refactor, you might break the peripheral bit, because it wasn't written down anywhere. On the other hand, if all such peripheral bits were to be specified, it would take you that much longer to grok the code at all.

The message we send

The question we like to ask ourselves is: what happens to the person who inherits the code? Will he notice the "hidden" (or more precisely: incidental) desirable behavior? If not, is it really important enough to document it? And if yes, will he understand why it works that way? If you lose this behavior you haven't broken the program. You have degraded it in a cosmetic way, but it still works well enough. So does it really need to be explicit?

The so-called clever coder that every middle manager is blogging his heart out about hiring will, obviously, notice the hidden complexity. And know both how and why it's there. The less clever coder might not notice. Or he might notice, but not understand the thought process behind it. What do we want to say to him? It's okay if you mess this up, it's not that important -or- Pay close attention to the detailed documentation or you might break something?

spiderfetch, now in python

June 28th, 2008

Coding at its most fun is exploratory. It's exciting to try your hand at something new and see how it develops, choosing a route as you go along. Some people like to call this "expanding your ignorance", to convey that you cannot decide on things you don't know about, so first you have to become aware of them - aware of your own ignorance - and then you can tackle them. If you want a buzzword for this I suppose you could call it "impulse driven development".

spiderfetch was driven completely by impulse. The original idea was to get rid of awkward, one-time grep/sed/awk parsing to extract urls from web pages. Then came the impulse "hey, it took so much work to get this working well, why not make it recursive at little added effort". And from there on countless more impulses happened, to the point that it would be a challenge to recreate the thought process from there to here.

Eventually it landed on a 400 line ruby script that worked quite nicely, supported recipes to drive the spider and various other gimmicks. Because the process was completely driven by impulse, the code became increasingly dense and monolithic as more impulses were realized. And it got to the point where the code worked, but was pretty much a dead end from a development point of view. Generally speaking, the deeper you go into a project, the smaller an idea has to be for it to be realized without major changes.

Introducing the web

The most disruptive new impulse was that since we're spidering anyway, it might be fun to collect these urls in a graph and be able to do little queries on them. At the very least things like "what page did I find this url on" and "how did I get here from the root url" could be useful.

spiderfetch introduces the web, a local representation of the urls the spider has seen, either visited (spidered) or matched by any of the rules. Webs are stored, quite simply, in .web files. Technically speaking, the web is a graph of url nodes, with a hash table frontend for quick lookup and duplicate detection. Every node carries information about incoming urls (locations where this url was found) and outgoing urls (links to other documents), so the path from the root to any given url can be traced.
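
In rough outline, the structure is something like this (a simplified sketch of the idea, not the actual web module; the class and method names are made up, and the trace assumes the chain of first referrers leads back to the root, which holds when the web is built outward from the root):

    class Node(object):
        def __init__(self, url):
            self.url = url
            self.incoming = []   # urls of the pages this url was found on
            self.outgoing = []   # urls this page links to

    class Web(object):
        def __init__(self, root):
            self.root = root
            self.index = {root: Node(root)}   # hash table frontend: url -> node

        def add_url(self, found_on, url):
            """Record that url was found on the page found_on."""
            if url not in self.index:                # duplicate detection
                self.index[url] = Node(url)
            self.index[url].incoming.append(found_on)
            self.index[found_on].outgoing.append(url)

        def trace(self, url):
            """Follow incoming links back to the root: how did I get here?"""
            path = [url]
            while url != self.root:
                url = self.index[url].incoming[0]    # the first referrer found
                path.append(url)
            return list(reversed(path))

    web = Web("http://example.com/")
    web.add_url("http://example.com/", "http://example.com/about")
    web.add_url("http://example.com/about", "http://example.com/about/team")
    print(web.trace("http://example.com/about/team"))
    # ['http://example.com/', 'http://example.com/about', 'http://example.com/about/team']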

Detecting file types

Aside from the web impulse, the single biggest flaw in spiderfetch was the lack of logic to deal with filetypes. Filetypes on the web work about as well as they do on your local computer, which means if you rename a .jpg to a .gif, suddenly it's not a .jpg anymore. File extensions are a very weak form of metadata and largely useless. It's just the same with spidering: if you find a url on a page, you have no idea what it is. If it ends in .html then it's probably that, but it might also have no extension at all. Or the extension can be misleading, which, taken to perverse lengths (eg. scripts like gallery), does away with .jpg urls altogether and serves everything as .php.

In other words, file extensions tell you nothing that you can actually trust. And that's a crucial distinction: what information do I have vs what can I trust. In Linux we deal with this using magic. The file command opens the file, reads a portion of it, and scans for well known content that would identify the file as a known type.
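
The same idea in a few lines of python (a toy sniffer with just a couple of well-known signatures, nowhere near the file command's full magic database; the filename is hypothetical):

    def sniff(chunk):
        """Guess the file type from the first bytes of content, ignoring the extension."""
        if chunk.startswith(b"\xff\xd8\xff"):
            return "jpg"
        if chunk.startswith(b"GIF87a") or chunk.startswith(b"GIF89a"):
            return "gif"
        if b"<html" in chunk[:1024].lower() or b"<!doctype html" in chunk[:1024].lower():
            return "html"
        return "unknown"

    with open("gallery_item.php", "rb") as f:    # hypothetical file
        print(sniff(f.read(1024)))               # might well say 'jpg'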

For a spider this is a big roadblock, because if you don't know what urls are actual html files that you want to spider, you have to pretty much download everything. Including potentially large files like videos that are a complete waste of time (and bandwidth). So spiderfetch brings the "magic" principle to spidering. We start a download and wait until we have enough of the file to check the type. If it's the wrong type, we abort. Right now we only detect html, but there is a potential for extending this with all the information the file command has (this would involve writing a parser for "magic" files, though).
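
Roughly, the fetcher side of that idea looks like the sketch below. It is only an illustration, not spiderfetch's actual fetcher, and it uses today's urllib.request rather than the urllib/httplib of the day; the html check is a crude stand-in for proper magic detection.

    import urllib.request

    def fetch_if_html(url, chunk_size=4096):
        """Start the download, sniff the first chunk, abort unless it looks like html."""
        resp = urllib.request.urlopen(url)
        head = resp.read(chunk_size)
        if b"<html" not in head.lower() and b"<!doctype html" not in head.lower():
            resp.close()                 # wrong type: abort, don't waste the bandwidth
            return None
        return head + resp.read()        # looks like html: pull down the rest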

A brand new fetcher

To make filetype detection work, we have to be able to do more than just start a download and wait until it's done. spiderfetch has a completely new fetcher in pure python (no more calling wget). The fetcher is actually the whole reason why the switch to python happened in the first place. I was looking through the ruby documentation for what I needed from the library and soon realized it wasn't cutting it. The http stuff was just too puny. I looked up the same topic in the python docs and immediately realized that it would support what I wanted to do. In retrospect, the python urllib/httplib library has covered me very well.

The fetcher has to do a lot of error handling on all the various conditions that can occur, which means it also has a much deeper awareness of the possible errors. It's very useful to know whether a fetch failed on 404 or a dns error. The python library also makes it easy to customize what happens on the various http status codes.
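
To illustrate the distinction (again with today's urllib.request instead of the 2008-era urllib/httplib, just to show the two failure modes side by side):

    import urllib.request
    import urllib.error

    def fetch(url):
        try:
            return urllib.request.urlopen(url).read()
        except urllib.error.HTTPError as e:
            # the server answered, but with an error status (404, 403, 500, ...)
            print("http error:", e.code)
        except urllib.error.URLError as e:
            # we never got an answer at all: dns failure, connection refused, ...
            print("network error:", e.reason)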

A modular approach

The present python code is a far cry from the abandoned ruby codebase. For starters, it's three times larger. Python may be a little more verbose than ruby, but the increase is due to a new modularity and, most of all, new features. While the ruby code had eventually evolved into one big chunk of code, the python codebase is a number of modules, each of which can be extended quite easily. The spider and fetcher can both be used on their own, there is the new web module to deal with webs, and there is spiderfetch itself. dumpstream has also been rewritten from shell script to python and has become more reliable.

Grab it from github:

spiderfetch-0.4.0

emacs that firefox!

June 24th, 2008

So the other day I was thinking about what a pain it is to handle text in input boxes on web pages, especially when you're writing something longer. Since I started using vim for coding I've become aware of how much more efficient editing is when you have keyboard shortcuts to accelerate common input operations.

I discovered a while back that bash has input modes for both vi and emacs and ever since then editing earlier commands is so much easier. And not only does it work in bash, but just as well in anything else, like ipython, irb, whatever. :cap:

So now only Firefox remains of my most used applications that still has the problem of stone-age editing, and I'm stuck using the mouse way too much. It bugs me that I can't do Ctrl+w to kill a word. Thus I went hunting for an emacs extension and what do you know, of course there is one: Firemacs. Turns out it works well, and it also has keyboard shortcuts for navigation. > gets you to the bottom of the page, no more having to hold down <space>. :thumbup:

iphone = sexism

June 24th, 2008

Finger challenged women are complaining about the iphone because their uglyass [fake] long nails prevent them from using the touchscreen comfortably. :howler:

Oh man, this is too much. :D What's next, people who wear their hair down at shoe level will complain that it gets messed up because the street is dirty?

One clever missy has the answer, though.

I wouldn't go as far as to call it misogyny, but it sure is annoying. They should just do what I do, keep one fingernail short for hindrances such as this.

the Swedish Pirate Party

June 17th, 2008

Rick Falkvinge of the Swedish Pirate Party gives a talk at google. It's one of the best talks about free culture and "intellectual property" I've seen. I also learned that the Norwegian Liberal Party (Venstre) has adopted the same stance on free culture, bravo!

If you have reservations about the implications of copyright reform, go watch this talk, he gets all these questions from the audience.

The soundbite from Falkvinge's talk for all you 24hour news media addicts:

Copyright, while written into law that it's supposed to be for the benefit of the author, never was. It was for the benefit of the distributors.