Archive for 2008

Breaking the spell

December 4th, 2008

Daniel Dennett is a professor of philosophy. In this tome his primary call to action, as others have also suggested, is to denounce the protectionism religion enjoys in our culture. And not in some general sense: he makes concrete suggestions. First, to "break the spell" that discourages anyone from discussing or studying religion seriously. Second, perhaps, to break away from religion altogether, pending the results of the first step.

The subtitle is "Religion as a natural phenomenon", and that is really what this book is all about. Unlike other authors who examine and critique religion, Dennett wants to explain what religion is and where it came from. He goes at it by seeing religions and religious ideas and practices as memes. This has the immediate consequence of divorcing a particular religion from its followers; the religion can be studied on its own. By seeing religions as memes we can also understand how most of the world's religions are long since extinct: the memes failed to survive. What we have left are the most robust and resilient memes, those best shaped by cultural evolution to seduce and entrance us. (Incidentally, there is nothing in the theory of evolution that says transmission must be biological; it may just as well be cultural, I am told.)

Defining religion memetically has another benefit, namely that we can ask: how is it that religion survives? Is it to our benefit (a symbiotic relationship), at our expense (parasitic), or does it just co-exist neutrally? Dennett does not answer the question (he says philosophers prefer to ask questions rather than answer them), but just by framing it thus he dispels the intuition that surely it must be beneficial to us. Central to this question is his compulsion to ask: who benefits from this meme? A meme is just an idea, a unit of information. A word is a meme. What is it made of? Where does it live? In our minds, or in a book, or in a recording. But it exists strictly separate from us. We can pronounce it, transmit it to others, but who ultimately benefits from that? Is it ourselves, because we need the word to describe something? Ah, but what if we destroy whatever it is (let's say it's an object) the word describes? Well, the word lives on; it may not mean anything anymore, but we can still pronounce it, write it, pass it on. The word exists separately from the thing it describes.

What about a viral video (eg. rickrolling)? Those can spread in weeks, or days, on our beloved interwebs. Who benefits from that? Every person who passes it on gets a laugh out of it. But isn't it also true that we send it around "because everyone else is doing it"? What does that say about our motive? If we enjoy the video and pass it on, what does it say about the video itself? As a meme, it is one of gazillions of memes, amongst which we have found this one in particular that appeals to us. Why did we pick this one to like and not that one? Why did we come up with the elements of Christianity rather than some other bag of memes?

The birth of religion

Dennett suggests that the roots of religion are found in superstition. And the roots of superstition, in turn, are found in our ability to reason about intentions. Our actions, as humans, are driven by motives, and every human understands this. As such, we tend to mistakenly interpret potentially any action as intentional, backed by a motive. Even when it isn't. If I do something, it is never without a reason. Why am I doing what I'm doing? Why does it rain today? What caused it to rain today? Who caused it to rain? It is just in our nature to be suspicious of intentions, to search for them. Evolutionarily speaking, it must have benefited us to perceive the intentions of humans and animals (even if animals are driven not by motive but merely by instinct).

And here is another important idea: the free floating rationale. Free floating rationales are somewhat like laws of physics. We may discover them. But no person invented a law of physics and then set out to put it into practice. That is not in our power; the best we can do is discover laws that exist. So it is with natural causes too. Why do certain animals exhibit purposefully misleading behavior (eg. the fish that lies flat on the sea bed, waiting to be mistaken for terrain, only to pounce on its prey)? It is not because they reason so; their instincts were just shaped this way in the endless arms race of natural selection. It is a "free floating" explanation, ie. it "makes sense" (in terms of survival and reproduction) for the fish to disguise itself on the seabed.

And so our instinct to detect intention is a free floating rationale, Dennett says. Even if it causes us to detect intention where there is none (after all, evolution is an extremely imperfect process). And so we concoct the idea that it rains because someone, somewhere, is doing it. We have other quirks too. We form bonds with people around us. We become accustomed to their presence. But just as it takes time to bond, it also takes time to "disbond". When a bond is broken, we sometimes still feel as if the person were around. There are times when we think of something to say that the other person would like. Or we remember their presence and the effect it had on us back then. No wonder that so many superstitions arose centered on the idea of talking to our ancestors. If someone died, and I still feel their presence sometimes, it's as if they were still around, kind of. So perhaps they are? If someone is making it rain and it's no one that I can find, perhaps it's an ancestor?

This is how Dennett sees the beginning of religion, as superstitions arising from life in our surroundings, gradually forming a folklore of beliefs we transmit to one another and our progeny. But then, as these primitive religions evolve, we become increasingly reflective about our beliefs, and what starts out as folklore religion transitions into organized religion.


Then there is that other thing about religions that is so mystifying: god(s). When people say today that they believe in god, we really have no idea what that is. The early concept of a god was basically a person with intentions; or not strictly a person, as gods did not exist (or had ceased to, if they were our ancestors), but certainly a being in human form. It is crucial that god be defined in human form, because it is of great interest to us to communicate with this being. A god who is abstract or lifeless cannot be communicated with, and is therefore not a particularly captivating god at all (ie. the meme that went extinct).

But the problem is this. The early gods people believed in were in some sense human beings. The Greek gods, the Roman gods, Nordic mythology, that kind of thing. But today there are lots of people who find such a god simplistic and false. (What was wrong with Zeus exactly?) Instead, our idea of "god" has evolved a lot over the centuries. God is not exactly a person, he is some kind of "thing" that we can't understand. Why? Because such a "thing" is much more resistant to refutation. Some "thing" that is mysterious, inexplicable, that doesn't literally live on Mount Olympus, is a more tantalizing and captivating kind of god (meme). Its survival hinges on being inscrutable. Yet other people believe "in god" not as a being at all, but just as a word to describe the inexplicable. So it boils down to this. If you ask people if they "believe in god", a lot of people will say they do, because they feel a certain moral duty to their upbringing and traditions "to believe", but the actual "gods" people supposedly believe in vary wildly.

Do people believe in the god of the Old Testament, the arrogant, selfish, cruel god who craves constant recognition? The capricious and small minded god who punked Abraham into nearly killing his own son? The loving god who murdered the entire planet except whoever made it onto the Ark? No, that's a backward and primitive notion of god; god is actually a lot more sophisticated than that. Or is he? Remember, god has to remain human in some sense, so that we can speak to him. But our sensibilities are offended by this backward notion of a god, in our civilized society with human rights and gender equality. So who believes in what kind of god we have no idea, but because everyone calls it god, we can maintain the illusion that everyone believes in the same thing. The only thing we know for sure is that it's called "god", whatever it is:

So we have the strange phenomenon, as Kant assures us, of a mind believing with all its strength in the real presence of a set of things of no one of which it can form any notion whatsoever.
-- William James

Further, there are memes in certain religions that serve to obscure the idea of god even more, by forbidding depictions of god.

And so, beside those who do believe in god there are many more who believe that "perhaps there is such a thing as god", or those who believe that it's good for people to believe in god, and "wouldn't it be nice if I believed in god". But for cultural and political reasons, all these people are inclined to say they believe in god, even if what they really believe is merely the idea that "belief in god exists".

Taken: efficient fast paced action

December 3rd, 2008

Liam Neeson's character Bryan Mills has been likened to an older Jason Bourne. That is accurate, to a point. Mills is a retired CIA operative with a family. He bears a certain resemblance to Ludlum's Bourne in "Ultimatum" (not the Bourne of the movies, that is): single minded about his daughter and without a second care in the world. But the story is very different from a Ludlum story; it's as if you grabbed all the action and little else. No tangled web, very movie friendly. Or put it this way: the web is there, but Mills unravels it at superb speed, so the movie audience doesn't have a moment to wonder where this is heading. Mills is also crueler than Bourne, and not as good at interrogation.

The story bears more likeness to "Man on Fire", with Denzel. Both are plots to rescue a kidnapped girl, carried out by an unscrupulous rescuer. Both play out against a high degree of police corruption, making it one man against everyone else.

I am quite impressed with Liam Neeson. I haven't seen him in this kind of role before, but I think he fits it well. Then again, he takes on quite a wide spectrum and fares well wherever he goes. Oskar Schindler will always be in our memories, but he was also a submarine officer in "K-19", and then played the Batman villain in "Batman Begins". There are those actors who just seem to fit in whatever the environment, without making a big splash, almost part of the scenery. And you think they've always just been there. Neeson is one of them.

when adults talk to kids

November 28th, 2008

Have you ever noticed the way in which adults talk to little kids? I say talk to because it's completely one way. It's a conversation that doesn't happen anywhere else. Kids up to a certain age are too young to assert themselves. It's not that they can't talk, they just don't have any comments to make. They haven't watched enough tv, heard enough gossip or seen enough popular culture to be fluent in conversation. Kids do not talk for the sake of talking itself; adults do.

So when an adult talks to a kid of say 2 years of age, or 4, it's a constant stream of inanities. And there seems to be an unwritten rule that says when you meet a kid you have to talk. It's a precious opportunity for the adult to yap away without being judged on what he's saying. "Look at how big you are!" I may only have been alive for four years, but I've figured out that we have such a thing as growth. Adults talk to kids like they're idiots because adults have this urge to play idiots. They think "hey, a kid, what a wonderful opportunity to escape the judgment of my peers". And adults of all ages agree on this, they're just as eager to be around little kids whether they're 30 or 80.

This wouldn't be happening if the kid said "I'll have to stop you there buster, what you just said doesn't make any sense". The adult, after recovering from the stunning blow, would mutter "you're a clever one aren't you" and turn on his heel. Victory! But this is not gonna happen.

See, adults *know* that kids that age don't have the confidence to talk back. There are so many factors in their favor. They're 3 times taller, have a deep voice, have the approval of all the other adults, aren't treated as kids by everyone. So if a kid doesn't have the confidence to engage in small talk, he's definitely not gonna have the courage to criticize. In fact, that's what the liberation of puberty is all about. Finally you have the courage to criticize all the things that have been pissing you off since the start.

But that's not what I do. I don't like the talking. Why should I talk when the other person isn't? Kids have this curious gaze in their eyes which is very conducive to mind games. "Does he think that I think that.."

deferred aliasing

November 27th, 2008

Shell users like to tailor their environment to speed up common tasks. The alias mechanism is very handy for this: you bind a short idiom to a sequence you run a lot. For instance, I have this line in my alias list:

alias lh='ls --color -Flah'

Having done that, you can also rig up a system of pulleys so that you can take the whole thing with you wherever you go.

All is well so far, but now comes the icky part. If you take this bundle of joy to murky neighborhoods like BSD or commercial Unices, you'll find out that the userland isn't the same everywhere, even though you have the same shell. BSD (DesktopBSD), for instance, doesn't support the whimsical --color switch; they like monochrome.

So what to do? What I would like is the same alias bound to something that works on the given platform; let it degrade gracefully. The only way to know what works is to test for it, then bind the alias accordingly. That's what happens here: I try the incantation I want, and if that doesn't work, I use the failsafe one.

setalias() {
	if eval $2 &>/dev/null; then
		eval "alias $1='$2'"
	else
		eval "alias $1='$3'"
	fi
}

setalias "lh" "ls --color -Flah" "ls -Flah"

But this is still a bit lacking. You obviously set up the aliases to run at shell startup, not manually. But there are times when you need a new shell while io latency is high, or io is locked up through a system error. In such a case, you really want to do only what is strictly necessary to start the shell, without a lot of io in the startup files. So running ls and the like just to set up an alias is to be avoided.

Here's where an idea from compilers can help. First, think of the setalias function as code to be executed. This code runs on shell startup. But it doesn't have to. We could just as well bind the alias to the function code itself, making it run the first time the alias is used. That's what happens here: the shell starts and binds the alias to a string. The first time the alias is executed, it runs the test and then does the binding. And then runs the actual command we wanted to run.

setalias() {
	alias "$1"="if eval $2 &>/dev/null; then
		eval \"alias $1='$2'\"
	else
		eval \"alias $1='$3'\"
	fi; eval $1"
}

setalias "lh" "ls --color -Flah" "ls -Flah"

It's like a just-in-time compiler: we do some extra work the first time, but from then on it's all set up.
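If you want to watch the deferral happen, here's a minimal bash sketch of a session: inspect the alias before and after its first use. (Assumptions: bash, and shopt -s expand_aliases, since non-interactive shells don't expand aliases by default; the definition includes the else branch so the failsafe only binds when the test fails.)

```shell
#!/bin/bash
shopt -s expand_aliases   # alias expansion is off in non-interactive bash

# Bind the alias to a test-and-rebind string instead of running the test now.
setalias() {
	alias "$1"="if eval $2 &>/dev/null; then
		eval \"alias $1='$2'\"
	else
		eval \"alias $1='$3'\"
	fi; eval $1"
}

setalias "lh" "ls --color -Flah" "ls -Flah"

before=$(alias lh)   # still the whole test-and-rebind string
lh >/dev/null        # first use: runs the test, rebinds, then runs ls
after=$(alias lh)    # now bound directly to whichever ls variant worked

echo "$before"
echo "$after"
```

On a GNU userland the second line shows the --color variant; on a BSD it degrades to the plain one, which is the whole point.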

There's a lot more you can do with eval, so go nuts!

hierarchical temporal memory

November 26th, 2008

As is often said, we humans (if you are not one of us you can join on the website, membership fees are high but not impossible) are pattern seeking animals. This implies that it is difficult for us to understand a completely "new kind of thing"; we tend to seek something else that we can compare it to. Psychology got a minor win when computers emerged, because it finally had a model for the brain. Psychology professors could point to the computer and say "the brain, it's somewhat like that". It suits psychology well that the computer we know has a distinct memory, a processing/reasoning unit, and input channels that receive transmissions of "sensory perception".

The computer as we know it is the so called Von Neumann architecture; every computer we've ever had has been designed around those basic components. This design is simple enough (and, in fact, dumb enough) to handle just about anything at all: it is the general purpose computer (a way of saying that it doesn't have any purpose).

Now a bunch of neuroscientists have figured out that the memory in our computers is too dumb to do certain things well. Our linear memory, where a memory cell has no relationship to the neighboring cells, is abstract and general enough for anyone's pleasure, but it's not the way human memory works. Our memory is hierarchical, that is to say it's made up of levels, where the bottom levels remember very simple things, like shapes and sounds in time. The levels above do not remember "discrete" things; they remember unifications over the simple things. That is how you understand that a leg is both a discrete thing and a part of a human body, one part in something larger.

Now, if you think about it, this is a crude first model for learning: you are fed a lot of facts in the hope that you will be able to unify them and see "meaning" in them as a whole. This, unfortunately, is necessary, because we don't know how to transmit the meaning itself; we think the only way is to send the facts and let the mind infer the meaning by itself. (It's quite an optimistic strategy, isn't it?) Interestingly, there is a trade off at this point. Apparently, you cannot both remember all the discrete facts *and* be able to unify them. That could explain why some people have a propensity for lots of facts without seeing the bigger picture, while others can't hold on to all the little pieces. In a way it makes sense, doesn't it? Like doing research. Once you've stated your thesis, you don't need all those little notes anymore; they are subsumed in the larger unifying rationale.

But now back to technology. A bunch of people have built this model of memory in software, calling it a "hierarchical temporal memory". It's an absolutely fascinating premise.