Archive for May, 2009

classes of functions

May 24th, 2009

I decided to finally blog this since I can never frickin remember which is which. Maybe if I write this and draw the diagram it'll finally stick. If not, I'll have something to come back to. To make it more accessible I'll be using a running example: the bike allocation problem. Given a group of people and a set of bikes, who gets which bike?

Partial functions

[diagram: function_properties_partial]

Partial functions are pretty self-explanatory. All you have to remember is which side of the mapping the partiality applies to. In our example a partial function means that not every person has been assigned a bike. Some persons do not have bikes, so lookup_bike(person) will not work on all inputs.

Partial functions are common in code: reading from files that don't exist, and of course the ever-lurking NullPointerException -- following a pointer to an object that is not live. In haskell, this is where the Maybe monad appears.
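To pin down the idea, here is a minimal Python sketch (the bike_of dict and the lookup function are made up for illustration):

bike_of = {"alice": "red", "bob": "blue"}

def lookup_bike(person):
    # Partial: undefined (raises KeyError) for any person without a bike.
    return bike_of[person]

lookup_bike("alice")    # works: "red"
# lookup_bike("carol")  # would raise KeyError -- no value for this input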

Total functions

[diagram: function_properties_total]

Not surprisingly, total functions are the counterpart to partial functions. A total function has a value for every possible input, which means every person has been assigned a bike. But it doesn't tell you anything about how the bikes are distributed over the persons: whether the mapping is one-to-one, whether all persons share the same bike, etc.

Clearly, total functions are more desirable than partial ones -- it means the caller can call the function with any value without having to check it first. Partial functions often masquerade as total ones by returning a value outside the expected range (which explains the existence of a null value in just about every programming language and data format). In python the value 0, None and any empty collection (string, list, tuple, dict) all represent null, which makes this kind of nominally total function easy to write.
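Here is the masquerade in Python (same made-up dict as above): extending the codomain with None makes the function total, and quietly pushes the problem onto the caller:

bike_of = {"alice": "red", "bob": "blue"}

def lookup_bike_total(person):
    # Total: defined for every person, but None sneaks into the codomain.
    return bike_of.get(person)

assert lookup_bike_total("carol") is None   # no exception, just a null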

Bijective/isomorphic functions (one-to-one correspondence)

[diagram: function_properties_bijective]

A bijective function (also called isomorphic) is a one-to-one correspondence between the persons and the bikes (between the domain and codomain). It means that if you find a bike, you can trace it back to exactly one person, and that if you have a person you can trace them to exactly one bike. In other words, the inverse function works: both lookup_bike(person) and lookup_person(bike) work for all inputs.
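This is what invertibility looks like in Python, assuming the assignment is stored in a dict (a sketch for illustration, with made-up names):

bike_of = {"alice": "red", "bob": "blue", "carol": "green"}

# Inverting the dict is only well-defined because the mapping is bijective:
# no bike is shared, and every bike is assigned.
person_of = {bike: person for person, bike in bike_of.items()}

assert person_of[bike_of["bob"]] == "bob"    # person -> bike -> person
assert bike_of[person_of["red"]] == "red"    # bike -> person -> bike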

Isomorphic functions are found in all kinds of translations: storing objects in a database, compressing files, etc. The name literally means "the same shape", so any format that can reproduce the same structure can represent the same data.

Injective functions (one-to-one)

[diagram: function_properties_injective]

An injective function returns a distinct value for every input. That is, no bike is assigned to more than one person. If the function is total, then what prevents it from being bijective is the unequal cardinality of the domain and codomain (ie. more bikes than persons).

Another way to understand it is to think of something small being stored in (embedded in) something big. In order to maintain unique output values, the codomain must be at least as big as the domain. GUIDs are an example of this. A GUID generator guarantees a globally unique identifier by picking values from a sufficiently large space. Given a GUID that has been issued, you can trace it back to exactly one object, but you cannot take just any value in the GUID space, because most of them have never been (and will never be) issued to anyone.
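A quick injectivity test in Python (a sketch; is_injective is a made-up helper):

def is_injective(mapping):
    # Injective iff no two inputs map to the same output.
    values = list(mapping.values())
    return len(values) == len(set(values))

assert is_injective({"alice": "red", "bob": "blue"})
assert not is_injective({"alice": "red", "bob": "red"})   # a shared bike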

Surjective functions (many-to-one)

[diagram: function_properties_surjective]

A surjective function is one where all values in the codomain are used (ie. all bikes are assigned). In a way it is the inverse property of a total function (where all persons have a bike).

Surjective functions are often undesirable in practice: they mean you have too few resources at your disposal, which forces sharing (threads on a cpu) or rejection (a streaming server can only accept so many clients).

The way to think of injections and surjections is not as opposites, but as complementary properties. A function can be both injective (all persons have a unique bike) and surjective (all bikes are used). If so, it is bijective.
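In dict terms (a sketch; note that the codomain has to be passed in explicitly, since the dict only records which bikes are actually used):

def is_surjective(mapping, codomain):
    # Surjective iff every bike in the codomain is assigned to someone.
    return set(mapping.values()) == set(codomain)

def is_bijective(mapping, codomain):
    values = list(mapping.values())
    injective = len(values) == len(set(values))
    return injective and is_surjective(mapping, codomain)

bikes = {"red", "blue"}
assert is_bijective({"alice": "red", "bob": "blue"}, bikes)
assert not is_bijective({"alice": "red", "bob": "red"}, bikes)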

kill obsolete dotfiles

May 23rd, 2009

So my portable dotfiles are working out really well. There is only one fly in the ointment left. When a file has changed, it gets overwritten with the newer version; that's fine. But when a file has been renamed or removed, the old copy will stick around in ~, creating the false impression that it's supposed to be there.

This is not a huge problem, but it does get to be tiresome when you're moving around directory hierarchies. I recently started using emacs more seriously and I expect a lot of stuff will eventually pile up in .emacs.d. Meanwhile, obsolete files and directories will clutter the hierarchy and possibly interfere with the system.

What can we do about it? The archive that holds the dotfiles is a sufficient record of what is actually in the dotfiles at any given moment. We can diff that with the files found on the system locally and pinpoint the ones that have been made obsolete.

For example, I initially had a ~/.emacs file, but then I moved it to ~/.emacs.d/init.el. So ~/.emacs is obsolete. But when I sync my dotfiles on a machine that has an older version of them, ~/.emacs will still be lying around.

Not anymore. Now this happens:

[screenshot: dotfiles_sync]

Files directly in ~ have to be listed explicitly, because we don't know anything about ~ as a whole. But files in known subdirs of ~, like ~/.emacs.d, are detected automatically.

killobsolete() {
	# files directly in ~ formerly used, now obsolete
	# subdirs of ~ checked automatically
	local suspected=".emacs"

	# init tempfiles we need to run diff on
	local found="/tmp/.myshell_found"
	local pulled="/tmp/.myshell_pulled"

	# detect files found locally
	for i in $(gunzip -c cfg.tar.gz | tar -tf -); do
		dirname "$i";
	done | grep -v '^.$' | sort | uniq | xargs find | sort | uniq | \
		grep -v ".myshell/.*_local" > "$found"

	# detect suspected files
	for f in $(find $suspected 2>/dev/null); do
		echo "$f" >> "$found";
	done

	# sort found list
	cat "$found" | sort | uniq > "$found.x" ; mv "$found.x" "$found"

	# list files pulled from upstream
	for i in $(gunzip -c cfg.tar.gz | tar -tf -); do
		echo "$i";
		dirname "$i";
	done | grep -v '^.$' | sed "s|\/$||g" | sort | uniq > "$pulled"

	# list obsolete files
	local num=$(diff "$pulled" "$found" | grep '^>' | cut -b3- | wc -l)
	if [ $num -gt 0 ]; then
		echo -e "${cyellow} ++ files found to be obsolete${creset}";
		diff "$pulled" "$found" | grep '>' | cut -b3-;

		# prompt for deletion
		echo -e "${cyellow} ++ kill obsolete? [yN]${creset}"
		read -r ans
		if [[ "$ans" == "y" ]]; then
			for i in $(diff "$pulled" "$found" | grep '^>' | cut -b3-); do
				if [ -d "$i" ]; then
					rm -rf "$i"
				else
					rm -f "$i"
				fi
			done
		fi
	fi

	# dispose of tempfiles
	rm -f "$pulled" "$found"
}

generalized makefiles

May 18th, 2009

Build systems are probably not the most beloved pieces of machinery in this world, but hey we need them. If your compiler doesn't resolve dependencies, you need a build system. You may also want one for any repeated task that involves generating targets from sources as the sources change over time (building dist packages, xml -> html, latex -> pdf etc).

[diagram: generalized_makefiles_single]

Fittingly, there are quite a few of them. I haven't done an exhaustive review, but I've mentioned ant and scons in the past. They have their strengths, but the biggest problem, as always, is portability. If you're shipping java then having ant is a reasonable assumption. But if not.. The same goes for python, especially if you're using scons as the build system for something that generally gets installed before "luxury packages" like python. Besides, scons isn't popular. I also had a look at cmake, which is disgustingly verbose.

Make is the lowest common denominator and thus the safest option by far. So over the years I've tried my best to cope with it. Fortunately, I tend to have fairly simple builds. There's also autotools, but for a latex document it seems like overkill, to put it politely.

One to one, single instance

So what's the problem here anyway? Let's use a simple example, the randomwalks code. The green file is the source. The orange file is the target. And you have to go through all the yellow ones on the way. The problem is that make only knows about the green one. That's the only one that exists.

So the simplest thing you can do is state these dependencies explicitly, pointing each successive file at the previous one. Then make will say "randomwalks.s? That doesn't exist, but I know how to produce it." And so on.

targets := randomwalks

all: $(targets)

randomwalks : randomwalks.o
	cc -o randomwalks randomwalks.o

randomwalks.o : randomwalks.s
	as -o randomwalks.o randomwalks.s

randomwalks.s : randomwalks.c
	cc -S -o randomwalks.s randomwalks.c

clean:
	rm -f *.o *.s $(targets)

Is this what we want? No, not really. Unfortunately, it's what most make tutorials (yes, I'm looking at you, interwebs) teach you. It sucks for maintainability. Say you rename that file. Have fun renaming every occurrence of it in the makefile! Say you add a second file to be compiled with the same sequence. Copy and paste? Shameful.

One to one, multiple instances

[diagram: generalized_makefiles_multiple]

It's one thing if the dependency graph really is complicated. Then the makefile will be too; that's unavoidable. But if it's dead obvious like here, which it often is, then the build instructions should mirror that. I run into a lot of cases where I have the same build sequence for several files. No interdependencies, no multiple sources, precisely as shown in the picture. In that case I want a makefile that requires no changes as I add/remove files.

I've tried and failed to get this to work several times. The trick is that you can't use variables, you have to use patterns. Otherwise you break the "foreach" logic that runs the same command on one file at a time. But patterns are tricky to combine with other rules. For instance, you can't use a pattern as a prerequisite of all.

At long last, I came up with a working makefile. Use a wildcard and substitution to manufacture a list of the target files. Then use patterns to state the actual dependencies. It's also helpful to unset .SUFFIXES so that the default patterns don't get in the way.

targets := $(patsubst %.c,%,$(wildcard *.c))

all: $(targets)

% : %.o
	cc -o $@ $<

%.o : %.s
	as -o $@ $<

%.s : %.c
	cc -S -o $@ $<

clean:
	rm -f *.o *.s $(targets)

.SUFFIXES:

Many to one

[diagram: generalized_makefiles_manytoone]

What if it gets more complicated? Latex documents are often split up into chapters. You only compile the master document file, but all the imports are dependencies. Well, you can still use patterns if you're willing to use article.tex as the main document and stash all the imports in article/.

This works as expected: $< gets bound to article.tex, while the *.tex files in article/ correctly function as dependencies. Now add another document story.tex with chapters in story/ and watch it scale. :cap:

targets := $(patsubst %.tex,%.pdf,$(wildcard *.tex))
 
all: $(targets)
 
%.pdf : %.tex %/*.tex
	pdflatex $<

clean:
	rm -f *.aux *.log *.pdf

Many to many

Latex documents don't often have interdependencies. Code does. And besides, I doubt you want to force this structure of subdirectories onto your codebase anyway. So I guess you'll have to bite the bullet and put some filenames in your makefile, but you can still abstract away a lot of cruft with patterns. Make also has a filter-out function, so you could state your targets explicitly, then wildcard on all source files and filter out the ones corresponding to targets, and use the resulting list as dependencies. Obviously, you'd have to be willing to make all non-targets dependencies of every target, which yields some unnecessary builds. But at this point the only alternative is to maintain the makefile by hand, so I'd still go for it on a small codebase.

PS. This was the first time I used kivio to draw the diagrams. It works quite well and is decent on functionality, even if the user interface is a bit awkward. The rendering clearly leaves something to be desired, though.

love the Rybak story, don't like Rybak

May 17th, 2009

It has come to pass.

I would say it's hard to believe that it's gone this far, but truly it isn't. Eurovision has always been garbage, so I'm not particularly shocked at this outcome. The problem with Eurovision isn't the music, it's the voting. Every time there are some good acts in it, but the votes fall pretty much as the wind blows; it's by no means meritocratic. (That is, even looking past the blatant political voting and back room deals between neighboring countries.) So if you watch it you'll only be pissed that the best songs never get recognized.

But then they picked Rybak. Two or three months ago I'd never heard of him. I was visiting Norway and was told that the biggest celebrity in the country right now is a guy of Belarusian origin who plays the violin. Sweet! What's not to like about that?

It's a feel-good story; it almost makes me feel like it could have been me. I'd like nothing better than to see someone like this get recognized. Norway is a great place, but as all immigration stories go, there are some people who look down on people from certain geographic origins; it's pure prejudice. And he plays the violin to boot, an eccentric interest to say the least. Which I did too once! And this is the guy the Norwegian public loves to death.

All was rosy. And then I heard his music. :/

Awful stuff. The melody must be derived from some sort of folk music, as evidenced by the lame dancing. I hate folk music. Then there is the violin, which he doesn't really do much with, except play the theme. And even if you liked it up to that point, then he starts singing. Borderline off key. It doesn't even meet the standard of an average Eurovision entry.

Needless to say, pop music is not about music, it's about image. And he certainly has that going for him. Maybe if he gets a good producer behind him he can crank out something actually worth playing on the radio. Having come this far, he can't be completely devoid of talent, can he? Just talent misdirected, that's my guess.

The Rybak story is a great story, but it sorely needs an injection of credibility. Maybe one day he could be celebrated on musical merit?

assembly primer

May 14th, 2009

So, assembly.. :nervous: Not exactly second nature for someone who lives in the dynamic language world. Even c seems like a million miles away. But even though I'm not yearning to work in assembly, it would be nice to know something about it. If for no other reason than the fact that all my [compiled] code eventually ends up in assembly (or bytecode, which is not too distant from assembly).

No doubt one way to learn assembly would be to dive in at the deep end with the right manuals. But I've already read about all the opcodes once, and it didn't tell me anything; it was all too foreign. So I think a much better way is to see how assembly gets to be the way it is, and maybe then it's possible to make sense of it. For this we'll need a super simple example to dissect, because assembly code is much longer than the equivalent c.

Random walks

The idea of a random walk is that you start out at position=0, and then you flip a coin. If it's heads, take one step to the right (position++). If it's tails, take one step to the left (position--). Repeat this for as long as it amuses you. Eventually you terminate and check where you've ended up. It could be position==0, which means you had just as many heads as tails. Or it could be position==-2, which means you had two more tails than heads, etc.

We'll see this algorithm in c, and then compile it to assembly. It's obviously very simple; the only complication is the need for a random number generator. Now, there is a rand() function in stdlib.h, but you still have to seed it with *something* random, so we'll just read bytes from /dev/urandom instead. Each byte is a number in [0,255], so we'll divide the range in two and see which side of 127 the number falls on.
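As a warm-up, here is the same algorithm sketched in Python (my own illustration to pin down the logic; it is not part of the original program):

def walk(steps=100):
    pos = 0
    with open("/dev/urandom", "rb") as rand_stream:
        for _ in range(steps):
            x = rand_stream.read(1)[0]     # one byte, 0..255
            pos += -1 if x < 127 else 1    # same split as the c version
    return pos

print("End position:", walk())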

Here it is in c:

#include <stdio.h>
#include <stdlib.h>

void walk() {
    int pos = 0;
    int steps = 100;

    FILE *rand_stream = fopen("/dev/urandom", "r");

    for (int i=0; i<steps; i++) {
	int x = fgetc(rand_stream);
	pos += x < 127 ? -1 : 1;
    }

    fclose(rand_stream);

    printf("Steps taken: %d\n", steps);
    printf("End position: %d\n", pos);
}

int main() {
    walk();
}

Compile with:

gcc -std=c99 -o walks randomwalks.c

But we also want the assembly code:

gcc -std=c99 -S randomwalks.c

The first WTF in assembly is that, fairly independently of how your c code is laid out, the assembly doesn't come out in the same order. So the first thing to figure out is where the heck execution starts. There should be a function called main somewhere..

main:
    leal    4(%esp), %ecx       /* execution begins */
    andl    $-16, %esp
    pushl   -4(%ecx)
    pushl   %ebp
    movl    %esp, %ebp
    pushl   %ecx
    subl    $4, %esp
    call    walk                /* call walk with no arguments */
    movl    $0, %eax            /* exit code := 0 */
    addl    $4, %esp
    popl    %ecx
    popl    %ebp
    leal    -4(%ecx), %esp
    ret                         /* execution ends */

The only thing that happens here that we can recognize from the c program is the call to walk. The other stuff is all bookkeeping that has to do with how programs start and end. I have no idea what most of it is for, though the andl $-16, %esp is recognizably aligning the stack to a 16-byte boundary.

Should we just go to walk then? Well, first it seems prudent to mention that all our string literals (which we'll be needing soon) are stored separately from code.

.LC0:
    .string "r"
.LC1:
    .string "/dev/urandom"
.LC2:
    .string "Steps taken: %d\n"
.LC3:
    .string "End position: %d\n"

Let's go to walk now.

Again, the first few instructions have to do with bookkeeping and are not commented. But soon we reach statements from the c program. The first interesting event is the literal 0, written $0, stored on the stack at the location -4(%ebp), which is an offset calculated from the base pointer %ebp. The base pointer has something or other to do with where functions keep their local variables, making sure a successive call doesn't clobber the environment of the caller.

But anyway, -4(%ebp) is the address where literal value 0 is. And this represents the integer variable pos. We know that because this is the first thing that happens in the c program, so it's also the first thing that happens here. Until something new gets put into -4(%ebp) we know that pos==-4(%ebp).

The same thing happens with steps, stored one slot over. The addresses increase by multiples of 4 because this is a 32bit machine and 32bits/8bits/byte = 4 bytes.

Next, two string constants (their addresses, actually) are put on the stack. They have to be set up in this order to call a c function, namely fopen. The result of this call is found in %eax, and represents the variable *rand_stream. This value is then stored on the stack as well, at address -12(%ebp).

Finally, we've reached the for loop. The next instruction assigns i=0, and then we jump to the loop condition.

walk:                           /** walk function body **/
    pushl   %ebp
    movl    %esp, %ebp
    subl    $56, %esp
    movl    $0, -4(%ebp)        /* (pos = 0)        -> -4(%ebp) */
    movl    $100, -8(%ebp)      /* (steps = 100)    -> -8(%ebp) */
    movl    $.LC0, 4(%esp)      /* "r"              -> stack[1] */
    movl    $.LC1, (%esp)       /* "/dev/urandom"   -> stack[0] */
    call    fopen               /* Call fopen with 2 args from stack */
    movl    %eax, -12(%ebp)     /* *rand_stream     -> -12(%ebp) */
    movl    $0, -16(%ebp)       /* (i = 0)          -> -16(%ebp) */
    jmp     .L2                 /* goto loop condition */

In case you're wondering, .L2 is not a function declaration. It's a label, ie. an address you can jump to. Assembly makes no distinction between a label that represents a function and one that doesn't.

What we have here is the loop condition check, followed by the rest of the walk function (excluding the loop body). In other words, if the condition were never true, this is how walk would execute.

We load the value of the variable i into %eax and compare it to the value of steps. If i is still less than steps (ie. the body should be executed), we jump to the loop body.

Subsequently, we line up the argument *rand_stream. The detour through %eax is because movl cannot copy directly from one memory location to another, so it takes two instructions. We then call fclose. The same happens with the calls to printf.

The last two instructions again have to do with bookkeeping and don't correspond to any statements in our c program.

.L2:                            /** loop condition check **/
    movl    -16(%ebp), %eax     /* i                -> %eax */
    cmpl    -8(%ebp), %eax      /* compare i with steps (sets flags) */
    jl      .L5                 /* if (i < steps) goto loop body */

                                /** After for loop **/
    movl    -12(%ebp), %eax     /* *rand_stream                 */
    movl    %eax, (%esp)        /*                  -> stack[0] */
    call    fclose              /* call fclose with 1 arg from stack */
    movl    -8(%ebp), %eax      /* steps                        */
    movl    %eax, 4(%esp)       /*                  -> stack[1] */
    movl    $.LC2, (%esp)       /* "Steps taken.."  -> stack[0] */
    call    printf              /* call printf with 2 args from stack */
    movl    -4(%ebp), %eax      /* pos                          */
    movl    %eax, 4(%esp)       /*                  -> stack[1] */
    movl    $.LC3, (%esp)       /* "End position.." -> stack[0] */
    call    printf              /* call printf with 2 args from stack */
    leave
    ret                         /* return from walk function */

What we have left, then, is the loop body. It looks messy because we have some branching here. But it's not too bad.

The first thing done is to get ahold of *rand_stream and call fgetc. The result is found in %eax and represents the variable x. We compare this value to the literal $126 (the c program says 127, but gcc has rewritten x < 127 as x <= 126) to see if we should produce the increment 1 or -1.

If x turns out to be below 127 we store the value -1 in -36(%ebp). This value is part of an expression and does not represent any variable in the c program. We then jump over the next assignment. Alternatively, we run the assignment of 1 to -36(%ebp) and skip the first assignment.

Ending up in .L4, we add the value in -36(%ebp), which is either 1 or -1, to pos. Then we increment the loop counter i.

In the full assembly file, .L2 follows after this, which means we re-evaluate the loop condition and possibly execute the loop again. So everything checks out. :)

.L5:                            /** loop body **/
    movl    -12(%ebp), %eax     /* *rand_stream                 */
    movl    %eax, (%esp)        /*                  -> stack[0] */
    call    fgetc               /* call fgetc with 1 arg from stack */
    movl    %eax, -20(%ebp)     /* x                -> -20(%ebp) */
    cmpl    $126, -20(%ebp)     /* compare x with 126 (sets flags) */
    jg      .L3                 /* if (x > 126) goto assign 1 */

                                /** x is <127  => assign -1 **/
    movl    $-1, -36(%ebp)      /* -1               -> -36(%ebp) */

    jmp     .L4                 /* goto pos += ... */

.L3:                            /** x is >=127 => assign 1 **/
    movl    $1, -36(%ebp)       /* 1                -> -36(%ebp) */

.L4:                            /** pos += ... **/
    movl    -36(%ebp), %eax     /* add(1 or -1,              */
    addl    %eax, -4(%ebp)      /*              pos) -> pos  */
    addl    $1, -16(%ebp)       /* add(1, i)         -> i */

So that's a pretty non-trivial chunk of code for a trivial c program. Needless to say, it really helps to have the source code at hand when you're looking at assembly. It gets worse when programs are optimized, because the instructions get pruned and rearranged, making them even harder to follow.