Archive for August, 2013

python bytecode: loops

August 18th, 2013

New to the series? The previous entry was part 4.

We have seen quite a bit of bytecode already - we've even seen the structure of a bytecode module. But we haven't seen loops yet.

A while loop

We'll start with the simplest kind of loop. It's not really a typical loop given that it breaks after a single execution, but it's enough to give us a look at the looping machinery.

def loop(x):
    while x > 3:
        print x
        break

Disassembly of loop:
  2           0 SETUP_LOOP              22 (to 25)
        >>    3 LOAD_FAST                0 (x)
              6 LOAD_CONST               1 (3)
              9 COMPARE_OP               4 (>)
             12 POP_JUMP_IF_FALSE       24

  3          15 LOAD_FAST                0 (x)
             18 PRINT_ITEM          
             19 PRINT_NEWLINE       

  4          20 BREAK_LOOP          
             21 JUMP_ABSOLUTE            3
        >>   24 POP_BLOCK           
        >>   25 LOAD_CONST               0 (None)
             28 RETURN_VALUE

The control flow is a bit convoluted here. We start off with a SETUP_LOOP which pushes a block onto the block stack. It's not really clear to me why that's necessary, given that Python does not use blocks for scopes. But it might be that the interpreter needs to know which level of looping it's at.

We then load the variable x and the constant 3 and run COMPARE_OP. This opcode actually takes a parameter to tell it which operation to perform (greater than in this case). The result of that will be a boolean value on the stack.
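We can check what that parameter means from Python itself: the argument to COMPARE_OP is an index into the `dis.cmp_op` tuple.

```python
import dis

# COMPARE_OP's numeric argument indexes into dis.cmp_op.
# In the disassembly above the argument was 4, which names
# the greater-than operation.
print(dis.cmp_op[4])  # → '>'
```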

Now we need to know whether we're going to execute the loop body or jump past the loop, so that's POP_JUMP_IF_FALSE, which may jump to location 24 where the loop ends.

Assuming we are in the loop body, we simply load the variable x and print it. Interestingly, the print statement requires two opcodes PRINT_ITEM and then PRINT_NEWLINE, which seems a bit over the top.

We now have a BREAK_LOOP instruction. Notice that if we were to skip it and execute the JUMP_ABSOLUTE just behind it, that would return us to the loop predicate, and we might continue looping. But that's not supposed to happen after a break: a break ends the loop even if the loop predicate is still true. So this must mean that we never reach JUMP_ABSOLUTE.

After this we execute POP_BLOCK which will officially end the loop by taking the block off the stack again.

A for loop

A for loop, then, is not very different. The main difference is that we are not looping on a boolean condition - we are looping over an iterable.

def loop(x):
    for i in range(x):
        print(i)

Disassembly of loop:
  2           0 SETUP_LOOP              25 (to 28)
              3 LOAD_GLOBAL              0 (range)
              6 LOAD_FAST                0 (x)
              9 CALL_FUNCTION            1
             12 GET_ITER            
        >>   13 FOR_ITER                11 (to 27)
             16 STORE_FAST               1 (i)

  3          19 LOAD_FAST                1 (i)
             22 PRINT_ITEM          
             23 PRINT_NEWLINE       
             24 JUMP_ABSOLUTE           13
        >>   27 POP_BLOCK           
        >>   28 LOAD_CONST               0 (None)
             31 RETURN_VALUE

To do that we have LOAD_GLOBAL, which loads the range function onto the stack. This is an opcode we haven't seen before, and it simply means this name comes from somewhere outside the local scope - from the module's globals or, as in this case, from the __builtin__ module. We then load x and call the function. This produces a list.

Now, since Python uses iterators so heavily, the loop uses the iterator protocol to move through the list. It means you could also loop over any other iterable object (tuple, dict, string, your own custom iterators etc). In fact, GET_ITER amounts to calling the iter function on the list (which returns an iterator object). And FOR_ITER calls the iterator's next method to get the next item.
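The GET_ITER/FOR_ITER pair can be sketched in plain Python. This is a hand-written approximation, not the interpreter's actual code: GET_ITER is roughly a call to iter, and each FOR_ITER is roughly a call to next that jumps past the loop when StopIteration is raised.

```python
items = range(3)

it = iter(items)          # GET_ITER
collected = []
while True:
    try:
        i = next(it)      # FOR_ITER (followed by STORE_FAST i)
    except StopIteration:
        break             # FOR_ITER jumps past the loop body
    collected.append(i)   # the loop body

print(collected)  # → [0, 1, 2]
```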

We now have the first int in the list, and we bind it to the name i with STORE_FAST. From there on, we may use i in the loop body.

You will notice that there is something odd about the way i is manipulated. At location 16 the int is sitting on the stack, and gets bound to a name with STORE_FAST. This consumes it on the stack. We then immediately push it on the stack again with LOAD_FAST. These two instructions cancel each other out: we could remove them without changing the meaning of the program.

So why do we have to store and load? Well, imagine i were used again in the loop body - it would have to be bound, right? So it could be optimized away in this case, but not in the general case.
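That STORE_FAST also means i is a real local binding that outlives each iteration - it's still readable after the loop finishes, which a pure stack value would not be. A quick check:

```python
def last_seen(n):
    # i is bound with STORE_FAST on every iteration,
    # so it survives the loop and can be read afterwards.
    for i in range(n):
        pass
    return i

print(last_seen(5))  # → 4
```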

python bytecode: object model

August 12th, 2013

New to the series? The previous entry was part 3.

Okay, so we've seen a fair amount of bytecode already. We know how modules, classes and functions work and today we'll study an example where it all comes together. It's more about consolidating our insights rather than learning new material.

A simple module

As soon as you see this code you'll be thinking in bytecode already. We have three top level bindings:

  • pencils is bound to the integer 13
  • give_back is bound to a function object
  • Person is bound to a class object

pencils = 13

def give_back(x):
    return x

class Person(object):
    default_age = 7

    def __init__(self, age):
        self.age = age or self.default_age

    def had_birthday(self):
        self.age += 1

    def how_old(self):
        return self.age

Let's disassemble this module so we can complete the picture. Ned Batchelder was kind enough to provide some example code for this purpose.

We won't go through all of the bytecode, as it's a bit long. But this diagram illustrates some of the key components of this module. On display we have:

  • The module mod.py's code object. Two of its constants are code objects too:
    • The function give_back's code object.
  • The class Person's code object. It has three constants that are code objects, one of which is:
      • The method how_old's code object.

So yes, a .pyc file simply contains a code object (of type types.CodeType), which has a bunch of attributes. Not all of those attributes will be set - it depends on what kind of object (module, class, function, ...) the code belongs to.

A function that uses local variables will store them in .co_varnames.

A class or a function will have a .co_name value.

Many code objects will have .co_filename value that tells you where its source code comes from.

And in many cases a code object will have other code objects in its .co_consts tuple, and those will be code objects representing classes or functions or what have you.
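We can see all of these co_* attributes at once by compiling a tiny module by hand. The source string here is made up for illustration:

```python
import types

# Compile a minimal module the way the import machinery would.
source = """
def give_back(x):
    return x
"""
mod_code = compile(source, "mod.py", "exec")

print(mod_code.co_filename)  # where the source came from
print(mod_code.co_names)     # names bound at module level

# Nested code objects (here, the function body) live in co_consts.
nested = [c for c in mod_code.co_consts if isinstance(c, types.CodeType)]
print(nested[0].co_name)      # 'give_back'
print(nested[0].co_varnames)  # ('x',) - its local variables
```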

python bytecode: classes

August 11th, 2013

New to the series? The previous entry was part 2.

Today we're going to see how classes work. Classes are a bit tricky as they provide the glue that makes functions into methods. They are also created dynamically, as we saw last time, so they must have some compiled bytecode that gets executed when the class actually gets created.

We'll do this in several steps, because unlike functions classes don't come with code objects that we can inspect directly. So we'll use some trickery to see what happens step by step.

Executing a class body

Before we look at a class in constructed form, let's poke around its namespace.

We'll use this simple class as a running example.

import dis

class Person(object):
    default_age = 7

    def __init__(self, age):
        self.age = age or self.default_age

    def had_birthday(self):
        self.age += 1

    def how_old(self):
        return self.age

    # throws NameError if these are not bound
    default_age, __init__, had_birthday, how_old

    # We'll see what this function looks like
    dis.dis(how_old)

# disassemble
 13           0 LOAD_FAST                0 (self)
              3 LOAD_ATTR                0 (age)
              6 RETURN_VALUE

What happens here is that we let all the class members get defined. This is basically like making bindings at the module level. In this scope there is nothing to suggest that we are inside a class body. (Well, except that the name __module__ is also bound to "__main__", which would not be the case at module level.)

We disassemble one of the functions to show that there is nothing special about it - it doesn't contain any special sauce that would reveal it to be a method instead of a function. self is a local variable like any other that just happens to be called self.
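Those bindings made while the class body executes don't vanish - once the class object exists they are exactly the entries you find in its namespace. A minimal check (a trimmed-down Person for brevity):

```python
class Person(object):
    default_age = 7

    def how_old(self):
        return self.default_age

# The names bound while the class body ran are now entries in the
# class namespace, and __module__ was bound automatically.
print('default_age' in Person.__dict__)  # → True
print('how_old' in Person.__dict__)      # → True
print(Person.__dict__['__module__'])
```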

The class code

As mentioned, there is no way to get at the code given a class object. But what we can do is put the class definition in a function body and retrieve the function's code object, which we know contains all the constants and variables used in that function. One of those is bound to be the code object of the class-to-be-constructed.

import dis

def build_class():
    class Person(object):
        default_age = 7

        def __init__(self, age):
            self.age = age or self.default_age

        def had_birthday(self):
            self.age += 1

        def how_old(self):
            return self.age

cls_code = build_class.func_code.co_consts[2]
dis.disassemble(cls_code)

# disassemble
  4           0 LOAD_NAME                0 (__name__)
              3 STORE_NAME               1 (__module__)

  5           6 LOAD_CONST               0 (7)
              9 STORE_NAME               2 (default_age)

  7          12 LOAD_CONST               1 (<code object __init__ at 0xb7533ba8, file "funcs.py", line 7>)
             15 MAKE_FUNCTION            0
             18 STORE_NAME               3 (__init__)

 10          21 LOAD_CONST               2 (<code object had_birthday at 0xb7533c80, file "funcs.py", line 10>)
             24 MAKE_FUNCTION            0
             27 STORE_NAME               4 (had_birthday)

 13          30 LOAD_CONST               3 (<code object how_old at 0xb7533e30, file "funcs.py", line 13>)
             33 MAKE_FUNCTION            0
             36 STORE_NAME               5 (how_old)
             39 LOAD_LOCALS         
             40 RETURN_VALUE

So we reach into the function's code, into its co_consts tuple and grab the code object. It happens to be at index 2, because index 0 is None and index 1 is the string "Person".

So what does the class code do? Just like at module level, it binds all the names in its namespace, and it also binds the name __module__, because a class is supposed to know the module it's defined in.

And then? Once all those bindings have been made, it actually just returns them. So basically the class code builds a dict and returns it.

This helps complete the picture from last time. To recap, at module level the code first a) calls the class body as if it were a function with CALL_FUNCTION (this "function" returns a dict, as we've just seen), and then b) runs BUILD_CLASS on that return value (ie. on the dict), which wires everything together and produces an actual class object.

Methods

Okay, now let's find out something else. We know that functions and methods are not the same type of thing. What about their code? We saw before how a function defined in a class body has no signs of being a method. Has it changed during the construction of the class? A function object replaced by a method object perhaps?

import dis

class Person(object):
    default_age = 7

    def __init__(self, age):
        self.age = age or self.default_age

    def had_birthday(self):
        self.age += 1

    def how_old(self):
        return self.age

dis.disassemble(Person.how_old.func_code)

# disassemble
 13           0 LOAD_FAST                0 (self)
              3 LOAD_ATTR                0 (age)
              6 RETURN_VALUE

The answer to that is no. The object is unchanged. In fact, the object stored in the class is a function. That's right, a function. Try reaching into Person.__dict__ to get it and what you get is a function object. It isn't until you do an attribute access on the class object (Person.how_old) that the method object appears, so the method is like a view on the function, it's not "native".

How does that work? You already know: descriptors.

func = Person.__dict__['how_old']
print func
# <function how_old at 0xb749e994>

print Person.how_old
# <unbound method Person.how_old>
print func.__get__(None, Person)
# <unbound method Person.how_old>

person = Person(4)
print person.how_old
# <bound method Person.how_old of <__main__.Person object at 0xb73f5eec>>
print func.__get__(person, Person)
# <bound method Person.how_old of <__main__.Person object at 0xb73f5eec>>

Function objects implement the descriptor protocol. Getting the function through the class (ie. getting the method) is equivalent to calling the function object's __get__ method with that class object as the type. This returns an unbound method (meaning bound to a class, but not to an instance of that class).

If you also give it an instance of the class you get a bound method.
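Here is that bound case as a runnable check. (Note: later Python versions dropped unbound methods entirely - `__get__(None, Person)` just returns the function there - but binding to an instance works the same way.)

```python
class Person(object):
    def how_old(self):
        return self.age

func = Person.__dict__['how_old']   # a plain function object

person = Person()
person.age = 4

# Invoking the descriptor protocol by hand produces a bound method,
# exactly as attribute access on the instance does.
bound = func.__get__(person, Person)
print(bound())                   # → 4
print(bound == person.how_old)   # → True
```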

So there you have it, classes and methods. Simple, right? Well, ish. One last thing: is the bound/unbound method object created on the fly? As in: does Python perform an object allocation every time you access a method? Because that would be... bad. Well, it doesn't. At least as far as the user can tell, it's always the same object with the same memory address.

things i would want to know about erlang

August 10th, 2013

Earlier this year I spent a few weeks playing with Erlang. I wanted to make something out of it, but despite an encouraging start I found it too frustrating to use.

I got excited about Erlang because a lot of interesting things have been done in Erlang. Like CouchDB, RabbitMQ, Riak and so forth. Besides that, Erlang is a dynamic language and I generally find those quite nice to use.

I won't recap Erlang's selling points here - I assume you've heard them. These are my discoveries.

  • The language feels very static. Dynamic type checking and code reloading are about the only things that seem dynamic about it. There is no introspection. The build system is make. You can't change the structure of a record once the program is running. There is no meta programming. And so on and so forth. It really isn't a dynamic language at all in the sense of Python, Ruby or Javascript.
  • ...some of these static restrictions are plain weird. If you want to call a function in the same module you don't have to do anything special, but if you're going to export the function you need to declare it in the export list as func/2 (where 2 is its arity), and maintaining consistency between the function definition and the export declaration is another source of bugs. The question is: what's the point of having this? The arity is a kind of statically enforced type, but you can still call the function with any arguments you want - you can even give all your functions a single parameter and then call them with a tuple, and Erlang has nothing to say about what that tuple contains.
  • The data abstraction is very weak. There is no object system - all you have is records and they are compile time defined and provided to client code as header files, like in C. This is very rigid.
    If you don't like records you can just use tuples. Tuples don't have to be declared in a header file and compiled in, just start using them freely. But now you have a container with positional arguments and if you want to change the structure of the tuple you have to update all your code, because any code using this tuple has to pattern match against all of its fields. That's even worse.
  • The syntax is Prolog and that's not a good thing. I didn't even realize how much it is Prolog until I read a Prolog tutorial a few weeks ago. It may be alright for Prolog, but it has definitely not been extended in an elegant way by Erlang. The tuple syntax is probably the worst part of it, but control flow structures too are so easy to get wrong with so many different line termination characters in use.
  • Strings are just lists, and there's no way to detect whether a list merely contains integers or whether it's supposed to be a string. If you wanted to be very generous you could call this sloppy.
  • Single assignment may be a nice idea, but it makes code ugly. At first I was naming my variables things like Something = func(OriginalSomething), but then I realized I had to plan very carefully what I would name the variable to always have a descriptive name. So I abandoned that and started using Houses2 = func(Houses) and so on. That is a bit more flexible at least, but now it's like I'm maintaining a list in sorted order and if I discover that I need to add a step in between Houses3 = func(Houses4) I can either introduce Houses7 or I have to update all the indexes. This plain sucks. (And no: Haskell's solution of adding apostrophes is just a different numbering syntax, it doesn't make it any better.)
  • Dynamic typing + pattern matching can make a mess. I was looking at ibrowse, which is a very feature rich and mature library. But some of its function definitions are 3 pages long. Instead of splitting it up into several helper functions, it's simply defining the same function with many different clauses, each of which has a different signature (obviously), but also a different arity and sometimes expecting different types for those positional arguments.
    This is of no concern to the caller, because I just call the top level clause of this function and I don't see what happens behind the scenes. But what you actually end up with in the background is a call graph of essentially different functions (but with the same name), heavily recursive, that is hell to try to make sense of. And because there is no static type information, well good luck.
  • OTP is a straitjacket. OTP kind of assumes that you will be using a supervisor, that you will be using gen_server and that you will be using OTP code patterns, OTP packaging and an OTP directory layout. I found it quite hard to write a program in a single module just to try out an idea first. I would later have migrated it to OTP style, but it was hard to get the benefits of OTP without going all the way in. What is the point of a supervisor that supervises a single process without restart behavior? It's pure overhead. Yet in OTP everything is supervised, whether or not that's useful. Also, the lack of a proper data abstraction is nowhere more painful than in OTP, where you're constantly passing nested tuples around and trying to figure out which part is used by the function you're calling directly, and which part is propagated as part of the message payload. If you happen to know the OTP API like the back of your hand then this is probably a non-issue, but you can say that about any awful API.
  • The tooling is low quality. It makes you think no one has made a real effort to improve it in a good 10-15 years. Like the REPL. There is one, but it sucks. It doesn't support readline. It has tab completion, but it doesn't complete all function names. You think you can explore the libraries this way, but some modules don't show up at all in the completer, or some of their functions are missing. The same goes for all the process introspection tools. They exist, but they've been programmed against Motif or something and they're terrible to use. And stack traces are formatted in a weird way that makes it hard to see exactly what the failing thing is. And if you use sasl then the actual output you care about is drowned out in a bunch of other output that you don't care about.

So yes, several very interesting ideas in Erlang, but a poor development experience. Unfortunately, most of these problems are deep in the language and the culture of Erlang and not likely to ever change. Someone could write a better REPL, but it would take a lot of community work to improve records or the syntax. There are other problems in Erlang that are probably more tractable, like a certain amount of duplication of functionality in the standard library (several competing dictionary implementations), but not even that seems to be worked on.

UPDATE: Erik Ridderby has written a response that includes a fascinating piece of history painting a nice picture of where Erlang came from and what sort of development environments it was used in.

python bytecode: modules

August 8th, 2013

If you're new here I recommend starting with part 1.

Last time we looked at statements inside function bodies. Exciting stuff, but a bit puzzling. I bet you were asking yourself: How did those functions get into that module in the first place?

Statements at module level

When given a module object, dis.dis shows you nothing but the disassembly of the module's functions - not the module's own top-level code. For a more insightful view we will use python -m dis module.py.

Let's try a really simple one first.

a = 7
b = a + a

# disassembly
  1           0 LOAD_CONST               0 (7)
              3 STORE_NAME               0 (a)

  2           6 LOAD_NAME                0 (a)
              9 LOAD_NAME                0 (a)
             12 BINARY_ADD          
             13 STORE_NAME               1 (b)
             16 LOAD_CONST               1 (None)
             19 RETURN_VALUE

The first thing to notice is the use of LOAD_NAME and STORE_NAME. We're at module level here, and LOAD_FAST/STORE_FAST only apply to local variables in a function body. But just like a function object a module stores the names it contains in a tuple which can be indexed into, as shown here.

It's a bit more obscure to get at that storage, because a module does not have a func_code attribute attached to it like a function does. But we can compile a module's code ourselves and see what it contains:

code = compile("a = 7; b = a + a", "module.py", "exec")
print code.co_names
# ('a', 'b')

And there's our storage. The code object has various other co_* attributes which we won't go into right now.

Also worth noting: modules return None like functions do, which seems a bit redundant given that there isn't a way to capture that return value: value = import os is not valid syntax. And module imports feel like statements, not like expressions.

Functions

Think about this first: when you import a module which contains two functions, what do you expect its namespace to contain? Those two names bound to their function objects, of course! See, that was not a trick question!

def plus(a, b):
    return a + b

def main():
    print plus(2, 3)

# disassembly
  1           0 LOAD_CONST               0 (<code object plus at 0xb73f4920, file "funcs.py", line 1>)
              3 MAKE_FUNCTION            0
              6 STORE_NAME               0 (plus)

  4           9 LOAD_CONST               1 (<code object main at 0xb73f47b8, file "funcs.py", line 4>)
             12 MAKE_FUNCTION            0
             15 STORE_NAME               1 (main)
             18 LOAD_CONST               2 (None)
             21 RETURN_VALUE

And so it is. Python will not give any special treatment to functions over other things like integers. A function definition (in source code) becomes a function object, and its body becomes a code object attached to that function object.

We see in this output that the code object is available. It's loaded onto the stack with LOAD_CONST just like an integer would be. MAKE_FUNCTION will wrap that in a function object. And STORE_NAME simply binds the function object to the name we gave the function.
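MAKE_FUNCTION's wrapping step is something we can also do from Python, with types.FunctionType. A small sketch (it uses __code__, which is the modern spelling of func_code; plus_clone is a name invented here):

```python
import types

def plus(a, b):
    return a + b

# MAKE_FUNCTION, by hand: wrap an existing code object
# in a brand new function object.
clone = types.FunctionType(plus.__code__, globals(), 'plus_clone')

print(clone(2, 3))     # → 5
print(clone.__name__)  # → 'plus_clone'
```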

There is so little going on here that it's almost eerie. But what if the function body makes no sense?? What if it uses names that are not defined?!? Recall that Python is a dynamic language, and that dynamism is expressed by the fact that we don't care about the function body until someone calls the function! It could literally be anything (as long as it can be compiled successfully to bytecode).

It's enough that the function object knows what arguments it expects, and that it has a compiled function body ready to execute. That's all the preparation we need for that function call. No, really!
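We can demonstrate that laziness directly: a function whose body refers to a name that doesn't exist anywhere compiles and binds just fine, and only blows up when called.

```python
def nonsense():
    return this_name_is_never_defined  # compiles without complaint

# Defining the function succeeded; the undefined name only
# becomes a problem once the body actually runs.
try:
    nonsense()
    failed = False
except NameError:
    failed = True

print(failed)  # → True
```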

Classes

Classes are much like functions, just a bit more complicated.

class Person(object):
    def how_old(self):
        return 5

# disassembly
  1           0 LOAD_CONST               0 ('Person')
              3 LOAD_NAME                0 (object)
              6 BUILD_TUPLE              1
              9 LOAD_CONST               1 (<code object Person at 0xb74987b8, file "funcs.py", line 1>)
             12 MAKE_FUNCTION            0
             15 CALL_FUNCTION            0
             18 BUILD_CLASS         
             19 STORE_NAME               1 (Person)
             22 LOAD_CONST               2 (None)
             25 RETURN_VALUE

Let's start in the middle this time, at location 9. The code object for the entire class has been compiled, just like the function we saw before. At module level we have no visibility into this class, all we have is this single code object.

With MAKE_FUNCTION we wrap it in a function object and then call that function using CALL_FUNCTION. This will return something, but we can't tell from the bytecode what kind of object that is. Not to worry, we'll figure this out somehow.

What we know is that we have some object at the top of the stack. Just below that we have a tuple of the base classes for our new class. And below that we have the name we want to give to the class. With all those lined up, we call BUILD_CLASS.

If we peek at the documentation we can find out that BUILD_CLASS takes three arguments, the first being a dictionary of methods. So: methods_dict, bases, name. This looks pretty familiar - it's the same inputs needed for the __new__ method in a metaclass! At this point it would not be outlandish to suspect that the __new__ method is being called behind the scenes when this opcode is being executed.
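We can watch those three inputs arrive by intercepting them in a metaclass and calling it directly, the way BUILD_CLASS effectively does. Recorder and captured are names invented for this sketch:

```python
captured = {}

class Recorder(type):
    # __new__ receives the same three inputs BUILD_CLASS lines up
    # on the stack: the class name, the bases tuple, and the
    # namespace dict built by the class body.
    def __new__(meta, name, bases, namespace):
        captured['name'] = name
        captured['bases'] = bases
        captured['names'] = sorted(namespace)
        return type.__new__(meta, name, bases, namespace)

# Calling the metaclass directly with (name, bases, dict):
Person = Recorder('Person', (object,), {'how_old': lambda self: 5})

print(captured['name'])    # → 'Person'
print(Person().how_old())  # → 5
```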