Archive for the ‘dysfunctional’ Category

monads for the layman

March 11th, 2012

I've done a fair bit of coding in Haskell, yet I could never fully understand monads. And if you don't get monads then coding Haskell is... tense. I could still get things done with some amount of cargo culting, so I was able to use them, but I couldn't understand what they really were.

I tried several times to figure it out, but all the explanations seemed to suck. People are so excited by the universality of monads that they make up all kinds of esoteric metaphors, and the more of them you hear the less you understand. Meanwhile there's no simple explanation to be found. That's when they're not simply too eager to skip ahead to equations that have them weeping like an art critic in front of the Mona Lisa, and to tell you all about the monadic laws as if that helps.

Fortunately, help is at hand, for today I will show you the first truly good explanation of monads I've ever seen, written by the charming Dan Piponi back in 2006 (I rather wish I had found it sooner). What I will do here is use Dan's method to explain it, with some Python examples for easier comprehension, and keep it even more basic.

Rationale

[diagram: primitive funcs]

It's good to get this one straightened out right off the bat. Basically, it's nice to be able to have some piece of data that you can pass through any number of functions, however many times you want, and in whatever order you want. Imagine them lined up one after another like a pipeline, with your data going through it. In other words: function composition. We like that because it makes for clear and concise code.

To achieve this we need functions that can be composed, i.e. ones with the same signature, so the output of one can be fed straight into the next:

def inc(x):
    return x+1

def double(x):
    return x*2

print "Primitive funcs:", double( inc(1) )
# Primitive funcs: 4

Logging functions

[diagram: logging funcs]

Sometimes, however, you find that you want to add something to a function that isn't strictly necessary for the pipeline, something more like metadata. What if you wanted your functions to also log that they had been called?

def inc_log(x):
    return inc(x), "inc_log called."

def double_log(x):
    return double(x), "double_log called."

# This will not work properly:
print "Logging funcs:", double_log( inc_log(1) )
# Logging funcs: ((2, 'inc_log called.', 2, 'inc_log called.'), 'double_log called.')

# What we wanted:
# Logging funcs: (4, 'inc_log called.double_log called.')

Now, instead of taking one input and giving one output, each function gives two outputs. So what happened when we ran it was this:

  1. inc_log received 1
  2. inc_log returned (2, 'inc_log called.')
  3. double_log received (2, 'inc_log called.')
  4. double_log returned ((2, 'inc_log called.', 2, 'inc_log called.'), 'double_log called.')
    Instead of doubling the number it doubled the tuple.

Restoring composability (bind)

[diagram: bind]

So how can we solve this? It's not that hard. If you look at the diagram you see that inc_log produces two outputs, yet double_log should only receive one of them. The other should still be saved, somehow, and then joined with the output of double_log after it has finished executing. So we need a wrapper around double_log that changes the arguments it receives and the values it returns!

def bind(g):
    def new_g(pair):
        f_num, f_str = pair
        g_num, g_str = g(f_num)
        return g_num, f_str + g_str
    return new_g

new_double_log = bind(double_log)

print "Logging funcs:", new_double_log( inc_log(1) )
# Logging funcs: (4, 'inc_log called.double_log called.')

The name "bind" is not the most self explanatory imaginable, but what the wrapper does is just what we said in the description:

  1. Receive a pair of values.
  2. Call double_log with the first of these values.
  3. Receive a new pair of values from double_log.
  4. Return a third pair of values that we construct from the other pairs.

The key thing to notice is this: we have "adapted" double_log to be a function that accepts two inputs and returns two outputs. We could use the wrapper on any number of other functions with the same "shape" as double_log and chain them all together this way, even though their inputs don't match their outputs!
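To see the chaining in action, here's a small illustrative pipeline. triple_log is just one more logging-shaped function, made up for this example, to show that any number of them compose once wrapped with bind.

def triple_log(x):
    return x * 3, "triple_log called."

# Each bind-wrapped function accepts a (value, log) pair, so they chain freely.
print(bind(triple_log)(bind(double_log)(inc_log(1))))
# (12, 'inc_log called.double_log called.triple_log called.')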

Mixing function types

[diagram: dec]

So far so good, but what if we now want to mix logging functions with primitive functions in our pipeline?

def dec(x):
    return x-1

# This will not work:
new_dec = bind(dec)
print "Logging funcs:", new_dec( inc_log(1) )

Granted dec is not a logging function, so we can't expect it to do any logging. Still, it would be nice if we could use it without the logging.

But we can't use bind with dec, because bind expects a function with two outputs. dec simply does not have the shape of a logging function, so we are back to square one. Unless...

Using bind with primitive functions

Unless we could fake it, that is. And make dec look like a logging function. In the diagram we can see that there is a gap between the end point of dec and that of bind. bind is expecting two outputs from dec, but it only receives one. What if we could plug that gap with a function that lets the first output through and just makes up a second one?

def unit(x):
    return x, ""

[diagram: lift]

Yes, just like that! Except that now we have two functions, dec and unit, and we don't want to think of them as such, because we really just care about dec. So let's wrap them up so that they look like one!

def lift(func):
    return lambda x: unit( func(x) )

lifted_dec = lift(dec)
new_dec = bind(lifted_dec)

print "Logging funcs:", new_dec( inc_log(1) )
# Logging funcs: (1, 'inc_log called.')

So lift does nothing more than call dec first, then pass the output to unit, and that's it. dec+unit now has the shape of a logging function, and lift wraps around them both, making the whole thing into a single function.

And with the lifted dec (a logging function, should anyone inquire), we use bind just as we did with logging functions before. And it all works out!

The log says that we've only called inc_log. And yet dec has done its magic too, as we see from the output value.

Conclusions

If you look back at the diagram you might think we've gone to a lot of trouble just to call dec; quite a lot of overhead! But that's also the strength of this technique, namely that we don't have to rewrite functions like dec in order to use them in cases we hadn't anticipated. We can let dec do what it's meant for and handle the needed plumbing independently.

If you look back at the code and diagrams you should see one other thing: if we change the shape of logging functions there are two functions we need to update: bind and unit. These two know how many outputs we're dealing with, whereas lift is blissfully ignorant of that.
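As a final sketch, here is what a longer mixed pipeline can look like when everything above is put together. The pipeline helper is made up for this example, not part of the technique itself: it wraps the starting value with unit, then feeds it through each step with bind, with primitive functions lifted first.

def pipeline(x, steps):
    result = unit(x)                 # wrap the starting value as a (value, log) pair
    for step in steps:
        result = bind(step)(result)  # every step now has the shape of a logging function
    return result

print(pipeline(1, [inc_log, lift(dec), double_log]))
# (2, 'inc_log called.double_log called.')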

what's in a fixed point?

November 9th, 2008

The fixed point is a very intuitive idea that we know from daily life. It's the notion that if something is a certain way, and you do something to change it, it will change. Some things you can go on changing forever by repeating the same action, but other things reach a limit beyond which repeating the action has no effect. Suppose you are going 50km/h in your car and you step on the brake. The car slows down. You step on the brake again, the car slows down again. But eventually the car will have come to a complete stop and stepping on the brake doesn't do anything anymore. The point at which the action has no effect anymore has a scary math name: a fixed point.

It's easy enough to write a function that finds the fixed point of another function f. You apply f to the argument and you get a value, you apply f to that and you get a value, you apply f to that and you get a value, etc. And every time you check whether the new value is different from the old. (Or even whether it's just sufficiently similar; Newton-Raphson says hello.) Here is how it goes:

fixfinder :: (Eq a) => (a -> a) -> a -> a
fixfinder f arg =
    let val = f arg
    in if val == arg
        then arg
        else (fixfinder f val)

{-
*Main> fixfinder sqrt 68
1.0
-}

We can try it on a function whose fixed point we know, like the square root. Back in school, did you ever hit the square root key on your calculator repeatedly out of sheer boredom? Eventually you'd get 1. (You can also take the square root of 0 and get 0, the second fixed point, but that's not particularly exciting.)

So far so good, now comes the mysterious part. The fixed point idea exists in programming, but in a different guise. In Data.Function there is a function called fix. It makes a very odd promise: to find the fixed point of a function without using a value. Que? How can you return the fixed point value (leaving aside for the moment the question of how you can even calculate it) if you don't have a single value to begin with? Heck, without an input value you don't even know what type the output is, if f is a polymorphic function.

fix :: (a -> a) -> a
fix f = f (fix f)

{-
*Main> fix sqrt
*** Exception: stack overflow
-}

The function doesn't look promising at all. It applies f, but there is no termination condition. Once you reach the fixed point (if there is one), how do you stop? It obviously can't figure out the square root function.

It turns out that fix is a trick used to perform recursion in languages that don't have a recursion primitive (i.e. where the compiler reaches an expression containing the name of the function in its own body and doesn't know what to do). The trick is that you write a function expecting, among other things, a function argument. Then, at the point in your function where you need to recurse, you call that argument, and fix is what supplies it: fix calls your function back, so it's a sort of mutual recursion. In pure lambda calculus fix is called the Y combinator. The formula is so ridiculously cryptic that some have taken to tattooing it on their arms, having obviously given up on remembering it.

So what do you get out of it? Solely the fact that you get around calling the function within itself, from what I can see. And clearly that's all it does, too. How do you prevent the seemingly inevitable infinite loop? You put the termination condition in your semantic function. So you write your recursive function exactly the same, but you give it an extra function argument. Then, at the point of recursion, instead of calling the function by name, you call that argument. It's so bizarre that I've written out the evaluation sequence to shed some light on it.

Daredevil that I am, I'm using the factorial function. What's happening here, supposedly, is that fix is computing the fixed point of fac. We can then take that result and apply it to a number, it finds the factorial, it doesn't diverge.
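Here is a minimal sketch of the trick in Python, since a runnable example helps (fac_open and this little fix are made up for the sketch; Python is strict, so fix has to hide the recursive call behind a lambda). The recursive call inside the factorial is replaced by a call to an explicit function argument, and fix is what supplies that argument.

def fac_open(recurse):
    # One step of the factorial; the recursive call goes through 'recurse'.
    def fac(n):
        return 1 if n == 0 else n * recurse(n - 1)
    return fac

def fix(f):
    # Ties the knot: hands f a function that will re-enter f when called.
    return lambda n: f(fix(f))(n)

print(fix(fac_open)(3))
# 6

# Roughly, the evaluation unrolls like this:
# fix(fac_open)(3)
#   = fac_open(fix(fac_open))(3)
#   = 3 * fix(fac_open)(2)
#   = 3 * (2 * fix(fac_open)(1))
#   = 3 * (2 * (1 * fix(fac_open)(0)))
#   = 3 * (2 * (1 * 1))
#   = 6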

What's happening here? It looks like a sort of loop unrolling, which can clearly proceed ad infinitum, were it not for Haskell's lazy nature. And, one can speculate, it's unrolling the recursion exactly the number of times it needs to be expanded (but then again, evaluation with a value already does that), but how could you possibly know that before you have a value to calculate with? It's not actually computing anything, is it? How is this any different from any old recursive function?

What fac really does is take one step toward the factorial. Multiply one factor, then call fix, which calls fac again. Eventually, the termination condition within fac kicks in. But that doesn't mean we've found the fixed point of fac, which is fac(1) = 1 and fac(2) = 2.  In fact, it seems to me we're walking right past these two fixed points, and calling fac one last time with n=0, because how else can fac terminate?

There is a lot more to fixed points; there's even something called a fixed point datatype that my small brain completely fails to comprehend. The sanest description of fix that I found is in the Haskell Book, if you want to take a look. I wish you luck.

dynamic or lexical, that is the scope

October 2nd, 2008

Apologies for the awful pun on a 16th century action movie.

Do you know how in the movies, when someone has to testify, they first pin his hand on a Bible and make him recite the "I swear to tell the truth, the whole truth, and nothing but the truth, so help me God" litany? Presumably, the god they're talking about is the god in the book; that's why the book is there (I bet polytheists find this very helpful). I guess they think it's harder for people to lie after taking a pledge while handling a Bible. (Do we have any statistics on whether that works?)

Anyway, in a dynamic scope, there is a witness called Python. He will make his pledge based on the book that they happened to shove under his hand that day. One day it could be the Bible, a week later it could be The Gospel of the Flying Spaghetti Monster. So the pledge will somehow be relative to the god in that particular book. Uneasy about one god, very comfortable with another one.

In a lexical scope, there is a witness called Perl. He is very emotional about his first time as a witness. And even though they give him a new book every time, he just can't seem to notice. He makes his pledge based on the very first book they slipped him.

And now for a short digression into the world of programming languages. You have two scopes: one is the innermost, the other is the outer scope. There is a variable used in the inner scope, but it's bound in the outer scope. How do you evaluate this variable? There are two answers to this question.

Under dynamic scoping the variable gets the value that it has in the outer scope at the time of evaluation. Under lexical scoping the variable gets the value that it has in the outer scope at the time of declaration.

That didn't explain anything, did it? I know, read on.

Who cares?

This is an important question, and people rarely seem to ask it. Functions care. Named functions and unnamed functions like lambdas, blocks, closures (different languages have different names for the same thing). Anything that has a scope and can be passed around, so that's only functions.

So all the blah about lexical scoping really just boils down to one little detail that has to do with how functions are evaluated. Hardly seems worth the effort, does it?

Dynamic, baby

Dynamic scoping is the more intuitive one. You grab the value of the variable that it has when you need to use it. That's what's dynamic about it, today this value, tomorrow another.

Consider this Python program. It prints the same string in three different colors. output is the function responsible for the actual printing. output has an inner helper function colorize, which performs the markup with shell escapes. Now, since colorize is defined in the scope of output, we can just reuse those bindings. I pass the string explicitly, but I don't bother passing the color index variable. (A value gets interpolated wherever there is a %s.)

def output(color_id, s):

    def colorize(s):
        return "\033[1;3%sm%s\033[0m" % (color_id, s)

    print colorize(s)


for e in range(1, 4):
    output(e, "sudo make me a sandwich")

Lexical ftw

Lexiwhat? If you recall from last time, "lexical" is a pretentious way of saying "where it was written".

What this implies is that the outer binding is evaluated only the first time. After that, no matter what scope the function finds itself being evaluated in, the variable with an outer binding doesn't change value.

Consider this Perl code, which is exactly the same as before.

sub output {
    my ($color_id, $s) = @_;

    sub colorize {
        my ($s) = @_;
        return "\033[1;3${color_id}m${s}\033[0m";
    }

    print colorize("$s\n");
}


for (my $e = 1; $e < 4; $e++) {
    output($e, "sudo make me a sandwich");
}

How do you think it evaluates?

Oh no, it's broken! Why? Because the first time colorize is evaluated, the value of ${color_id} is recorded and stored for all eternity. The term lexical in this example isn't helpful at all, because the function is *always* evaluated where it was declared, it's not passed to some other place where the value of ${color_id} could have been decided by someone other than output. 'pedia says lexical scoping is also called static scoping, which makes more sense here.

Interestingly, in the language of tweaks that is Perl, you can replace my with local on line 2 and you've got yourself dynamic scoping! :-o The code will now run as expected.

Which is better?

I don't know. I don't have any conclusions yet. I got into the habit of writing inner functions in Python without passing in all the arguments, it's useful sometimes when you have a lot of variables in scope. And then I got in trouble for doing the same thing in Perl.

In languages without assignment, they will obviously pick lexical, because it reinforces the rule of referential transparency: a variable, once assigned, always keeps the same value.

You need lexical scoping to have closures. A function being defined has to be able to capture and store the values of its unbound variables. Otherwise you could pass it to some other scope that doesn't have bindings for variables with those names, and then what?

But you know what? Python has closures anyway. Here, colorize is defined inside output, but it's not called there. It's passed back to the loop, and it's called there. But that scope doesn't have a binding for color_id! And yet it still works.

def output(color_id):

    def colorize(s):
        return "\033[1;3%sm%s\033[0m" % (color_id, s)

    return colorize


for e in range(1, 4):
    f = output(e)
    print f("sudo make me a sandwich")

If you try the same thing with Perl and local in place of my, and set $color_id to $e, it works too.

So at least for Python and Perl, you can't reasonably say "dynamic scoping" or "lexical scoping". They do a bit of both. So why is that? Are the concepts dynamic and lexical just too simplistic to use in the "real world"?

what the heck is a closure?

April 20th, 2008

That's a question that's been bugging me for months now. It's so vexing to try to find something out and not get it. All the more so when you look it up in a couple of different places and the answers don't seem to have much to do with each other. Obviously, once you have the big picture, all those answers intersect in a meaningful place, but while you're still hunting for it, that's not helpful at all.

I put this question to a wizard and the answer was (not an exact quote):

A function whose free variables have been bound.

Don't you love getting a definition in terms of other terms you're not particularly comfortable with? Just like a math textbook. This answer confused me, because I couldn't think of any function I had seen for which that wasn't the case, so I thought I must be missing something. The Python answer is very simple:

A nested function.

It's sad, but one good answer is enough. When you can't get that, sometimes you end up stacking up several unclear answers and hoping you can piece it all together. And that can very well fail.

I read a definition today that finally made it clear to me. It's not the simplest and far from the most intuitive description. In fact, it too reads like a math textbook. But it's simply what I needed to hear in words that would speak to me.

A lexical closure, often referred to just as a closure, is a function that can refer to and alter the values of bindings established by binding forms that textually include the function definition.

I read it about 3 times, forwards and backwards, carefully making sure that as I was lining up all the pieces in my mind, they were all in agreement with each other. And once I verified that, and double checked it, I felt so relieved. Finally!

I can't follow the Common Lisp example that follows on that page, but scroll down and you find a piece of code that is much simpler.

(define (foo x)
	(define (bar y)
		(+ x y))
	bar)

((foo 1) 5) => 6
((foo 2) 5) => 7

What's going on here? First there is a function being defined. Its name is foo and it takes a parameter x. Now, once we enter the body of this function foo, straight away we have another function definition - a nested function. This inner function is called bar and takes a parameter y. Then comes the body of the function bar, which says "add variables x and y". And then? Follow the indentation (or the parentheses). We have now exited the function definition of bar and we're back in the body of foo, which says "the value bar", so that's the return value of foo: the function bar.

In this example, bar is the closure. Just for a second, look back at how bar is defined in isolation, don't look at the other code. It adds two variables: y, which is the formal parameter to bar, and x. How does x receive its value? It doesn't. Not inside of bar! But if you look at foo in its entirety, you see that x is the formal parameter to foo. Aha! So the value of x, which is set inside of foo, carries through to the inner function bar.

Can we square this code with the answers quoted earlier? Let's try.

"A function whose free variables have been bound." - A function: in this case bar. Free variables: in this case x. Bound: in this case defined as the formal parameter x to the function foo.

"A nested function." - The function bar.

"A lexical closure, often referred to just as a closure, is a function that can refer to and alter the values of bindings established by binding forms that textually include the function definition." - A function: in this case bar. That can refer to and alter: in this case bar refers to the variable x. Values of bindings: in this case the value of the bound variable x. Established by binding forms: in this case the body of the function foo. That textually include the function definition: in this case foo includes the function definition of bar.

So yes, they all make sense. If you understand what it's all about. :/

Let's return to the code example. We now call the function foo with argument 1. As we enter foo, x is bound to 1. We now define the function bar and return it, because that is the return value of foo. So now we have the function bar, which takes one argument. We give it the argument 5. As we enter bar, y is bound to 5. And x? Is it an undefined argument, since it's not defined inside bar? No, it's bound *from before*, from when foo was called. So now we add x and y.

In the second call, we call foo with a different argument, thus x inside of bar receives a different value, and once the call to bar is made, this is reflected in the return value.

Well, that was easy. And to think I had to wait so long to clarify such a simple idiom. So what is all the noise about anyway? Think of it as a way to split up the assignment of variables. Suppose you don't want to assign x and y at the same time, because y is a "more dynamic" variable whose value will be determined later. Meanwhile, x is a variable you can assign early, because you know it's not going to need to be changed.

So each time you call foo, you get a version of bar that has a value of x already set. In fact, from this point on, for as long as you use this version of bar, you can think of x as a constant that has the value that it was assigned when foo was called. You can now give this version of bar to someone and they can use it by passing in any value for y that they want. But x is already determined and can't be changed.
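To make the "assign x early, y later" idea concrete, here is the Scheme example above transcribed into Python (add_one and add_two are just names picked for the illustration):

def foo(x):
    def bar(y):
        return x + y   # x is captured from the enclosing call to foo
    return bar

add_one = foo(1)   # x is fixed to 1 in this version of bar
add_two = foo(2)

print(add_one(5))
# 6
print(add_two(5))
# 7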