math explained

December 2nd, 2012

I've always found it really painful to read math, yet I've never been able to pinpoint why. It feels like using a horrible piece of software where it's ridiculously hard to figure out what is what and how to do what you want.

Then it hit me. Math notation is a dead ringer for the International Obfuscated C Code Contest. I've always had this unarticulated sensation that when I read math, be it a textbook, a proof, anything with formal notation, I constantly have to decrypt the horrible code into its meaning and hold that meaning in memory. Eventually the number of symbols and operators grows to the point where I hit an out-of-memory error.
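
Take a stock example (mine, not quoted from any particular text): the one-line definition of continuity at a point,

```latex
\forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x :\;
|x - a| < \delta \implies |f(x) - f(a)| < \varepsilon
```

Every identifier in it is a single letter, and the decompression is entirely on the reader: you can keep f(x) as close to f(a) as you like by keeping x close enough to a.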

If you've ever looked at what JavaScript looks like when it comes out of a minifier, you've noticed that descriptive function and variable names have all been turned into identifiers like "a", "b", etc. Sure, on the web the size of the payload matters. But are mathematicians, too, embroiled in some misguided quest for Huffman coding?
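
Here's a sketch of my own, in C++ for concreteness (the loan-payment formula, chosen arbitrarily), of what the two ends of that pipeline look like:

```cpp
#include <cmath>
#include <iostream>

// Descriptive version: the standard loan-payment formula, spelled out.
double monthly_payment(double principal, double annual_rate, int months) {
    double monthly_rate = annual_rate / 12.0;
    return principal * monthly_rate
         / (1.0 - std::pow(1.0 + monthly_rate, -months));
}

// "Minified" version: the same formula the way a math text would write it.
double M(double P, double i, int n) {
    double r = i / 12;
    return P * r / (1 - std::pow(1 + r, -n));
}

int main() {
    std::cout << monthly_payment(10000, 0.06, 36) << "\n";  // ~304.22
    std::cout << M(10000, 0.06, 36) << "\n";                // same number
}
```

The second version is what nearly every math text hands you, and it's the *source*, not the build artifact.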

People criticize C++ because you can overload operators to mean anything you want. But math is such a cognitive overload of operators that it's worse than any C++. And people who need math to explain something formally routinely redefine operators to mean whatever they want.
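
For instance, here's a toy of my own (purely illustrative, not from any real codebase) of the thing C++ gets criticized for:

```cpp
#include <iostream>
#include <string>

struct Path {
    std::string value;
};

// '+' redefined to mean "join with a slash": legal, and opaque at the call site.
Path operator+(const Path& a, const Path& b) {
    return Path{a.value + "/" + b.value};
}

int main() {
    Path home{"home"}, user{"alice"};
    std::cout << (home + user).value << "\n";  // prints "home/alice"
}
```

At least here you can search the code for the one definition of the operator. In a paper, the overload is a sentence of prose buried somewhere in section 3.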

You know what else math doesn't have? Scope. It's fabulous reading a paper with that tingle of anxiety because you never know when a symbol defined 10 pages back is about to make a comeback.
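
Compare with how any scoped language fences symbols off; a minimal sketch:

```cpp
#include <iostream>

int main() {
    {
        int k = 42;              // k exists only between these braces
        std::cout << k << "\n";
    }
    // std::cout << k;  // compile error: k is out of scope here
    // A paper has no such fence: the k from section 2 is free to
    // resurface in section 7, and remembering it is the reader's job.
}
```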

So in the end it's all operators and single-letter identifiers in a global namespace, in a word: delightful. Holy cow, I *wish* we had awful Hungarian notation practices; it would improve the code by orders of magnitude. In fact, so many of the tips in the seminal *How to Write Unmaintainable Code* would be a godsend for math writing.
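
Even a crude version would help. The naming below is purely hypothetical, my own invention of what "Hungarian for math" might look like:

```cpp
#include <iostream>

int main() {
    // The prefix encodes what kind of object you're holding,
    // so the reader never has to guess.
    double scalarEpsilon = 1e-3;  // a tolerance, not an index
    int    idxTerm       = 7;     // a position in a sequence
    std::cout << scalarEpsilon * idxTerm << "\n";
    // Math notation would render both as bare single letters and leave
    // the kind to be inferred from context, ten pages later.
}
```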
