Archive for the ‘english’ Category

The timeless Riviera

August 6th, 2009

[image: summer_vacation_2009_flags]

The beauty of the camping vacation is the enduring feeling of being on site. Air travel is very insular that way: I feel like I'm in the same place right up until I get off the plane. On the road it's a whole different feeling, and more authentic in a sense. It's tiresome if you have to go far to get there, but once you cross the border into where you're going there is a real sense of expectation that gradually bears out. And you can stop anywhere you want along the way.

[image: summer_vacation_route_2009]

Locations of interest:

  • Wien (classical concert)
  • Verona (late night outdoor opera)
    Camping right near Lago di Garda - a summer camping hotspot
  • La Spezia (all-day visit to the picturesque villages in Cinque Terre national park)
  • Genova (city walk and a visit to Europe's largest aquarium/oceanography museum)
  • San Remo (walk along the promenade and swimming at the beach)
  • Monaco (city walk, swimming at the beach)
  • Nice (walk along the promenade, swimming)
  • Cannes (walk along the promenade)
  • Ramatuelle by St. Tropez
    Camping at the unbeatable Les Tournels
  • St. Tropez (city walk)
  • Marseille (city walk)
    Camping in Aix-en-Provence - a nice town in itself
  • Les Baux-de-Provence (a splendid ancient castle)
  • Orange (the best preserved Roman theater in Europe)
  • Genève (CERN museum, city walk)
  • Lausanne (city walk)
  • Bern (Einstein museum)
  • Zurich (natural earth museum and a robotics museum)
  • Munich (Deutsches Museum - the world's largest museum of science and technology)

And that's a way to spend three weeks.

networktest: improved network detection

June 13th, 2009

As a follow-up to the network perimeter test I have expanded the code a bit. It now also shows the interface names, to help explain what's what, and it tries to match the gateway to the ip addresses found. The strategy, however, has changed somewhat. At first the goal was to find all the networks and proceed from there. I decided this was not really the best approach, given that a misconfigured network connection could contain, say, a gateway not on any known network. It therefore seems more sensible to display the information read from route and ifconfig as is than to infer too much from it.
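
To make the "report it as is" idea concrete, here is a minimal sketch (hypothetical Python, not the actual networktest code) of reading the default gateway straight from /proc/net/route on Linux:

import socket
import struct

def default_gateways():
    # /proc/net/route has a header line, then one route per line:
    # Iface, Destination, Gateway, ... with addresses hex-encoded
    with open("/proc/net/route") as f:
        next(f)  # skip the header
        for line in f:
            fields = line.split()
            iface, dest, gateway = fields[0], fields[1], fields[2]
            if dest != "00000000":  # keep only default routes
                continue
            # the kernel writes the address in host byte order,
            # little-endian on the common platforms
            ip = socket.inet_ntoa(struct.pack("<L", int(gateway, 16)))
            yield iface, ip

for iface, ip in default_gateways():
    print("%s: default gateway %s" % (iface, ip))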

In the screenshot below, for instance, the loopback ip is on a network that is not known, but it is a working ip nonetheless.

The probing strategy also includes nmap probes (if nmap is available) to add some redundancy to the process (in case, say, outbound icmp is blocked by a firewall). The code has also been made more portable, so on platforms other than linux (where the linux networking tools aren't present) the features degrade gracefully.

[image: havenet1]
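
The graceful degradation boils down to checking which external tools exist before enabling the features that depend on them. A sketch of that idea (again hypothetical, with an illustrative tool list):

import shutil

TOOLS = ["ifconfig", "route", "iwconfig", "iwlist", "nmap"]

def available_tools():
    # map each tool to its path, or None if it isn't installed here
    return dict((tool, shutil.which(tool)) for tool in TOOLS)

for tool, path in available_tools().items():
    print("%-10s %s" % (tool, path or "missing - feature disabled"))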

Naturally, much networking happens over wireless these days, so there's also a wifi command that displays the status of all the wireless interfaces. Again, this is nothing more than what iwconfig reveals, but in a considerably more human-readable form, I would argue.

[image: wifi]
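
The reformatting amounts to parsing iwconfig's output. Something like this sketch would do it (not the actual code; the regexes are deliberately loose because the output format varies between wireless-tools versions):

import re
import subprocess

def wifi_status():
    # iwconfig reports wireless interfaces on stdout; the "no wireless
    # extensions" noise for other interfaces goes to stderr
    out = subprocess.check_output(["iwconfig"], stderr=subprocess.DEVNULL)
    # each interface block starts at column 0, continuations are indented
    for block in re.split(r"\n(?=\S)", out.decode()):
        if "ESSID" not in block:
            continue
        iface = block.split()[0]
        essid = re.search(r'ESSID:"([^"]*)"', block)
        signal = re.search(r"Signal level[=:]\s*(-?\d+)", block)
        print("%-8s essid=%s signal=%s" % (
            iface,
            essid.group(1) if essid else "?",
            signal.group(1) if signal else "?"))

wifi_status()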

Then there is wifiscan, which, not surprisingly, scans for access points. The output is a considerably more space-efficient and usable counterpart to what iwlist prints.

[image: wifiscan]

One thing to keep in mind about these detection commands is that many of the underlying system tools offer less information (or none at all) to unprivileged users, so running the commands as root may produce fuller output.

classes of functions

May 24th, 2009

I decided to finally blog this since I can never frickin remember which is which. Maybe if I write this and draw the diagram it'll finally stick. If not, I'll have something to come back to. To make it more accessible I'll be using a running example: the bike allocation problem. Given a group of people and a set of bikes, who gets which bike?

Partial functions

[image: function_properties_partial]

Partial functions are pretty self-explanatory. All you have to remember is which side the partiality concerns. In our example a partial function means that not every person has been assigned a bike. Some persons do not have bikes, so lookup_bike(person) will not work on all inputs.

Partial functions are common in code: reading from files that don't exist, and of course the ever-lurking NullPointerException -- following a pointer to an object that is not live. In Haskell, this is where the Maybe monad appears.
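
To make the running example concrete, here is a sketch (the dict and the names are made up for illustration):

allocation = {"ada": "red bike", "ben": "blue bike"}  # carol has no bike

def lookup_bike(person):
    # partial: undefined (raises KeyError) for persons without a bike
    return allocation[person]

lookup_bike("ada")    # 'red bike'
lookup_bike("carol")  # KeyError -- no value for this input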

Total functions

[image: function_properties_total]

Not surprisingly, total functions are the counterpart to partial functions. A total function has a value for every possible input, which means every person has been assigned a bike. But it doesn't tell you anything about how the bikes are distributed over the persons: whether it's one-to-one, whether all persons share the same bike, etc.

Clearly, total functions are more desirable than partial ones -- it means the caller can call the function with any value without having to check it first. Partial functions often masquerade as total ones by returning a value outside the expected range (which explains the existence of a null value in just about every programming language and data format). In Python, 0, None and the empty string, list, tuple and dict all count as false, which makes that kind of catch-all value easy to produce.
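
Continuing the sketch, the usual way to make lookup_bike total is exactly that masquerade: extend the codomain with an explicit "no bike" value, in the spirit of Maybe:

def lookup_bike_total(person):
    # total: defined for every person, but the codomain now
    # includes None as an explicit "no bike" answer
    return allocation.get(person)

lookup_bike_total("carol")  # None instead of a KeyError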

Bijective/isomorphic functions (one-to-one correspondence)

[image: function_properties_bijective]

A bijective function (also called isomorphic) is a one-to-one mapping between the persons and the bikes (between the domain and the codomain). It means that if you find a bike, you can trace it back to exactly one person, and that if you have a person you can trace it to exactly one bike. In other words, the inverse function works: both lookup_bike(person) and lookup_person(bike) work for all inputs.

Isomorphic functions are found in all kinds of translations: storing objects in a database, compressing files, etc. The name literally means "the same shape", so any format that can reproduce the same structure can represent the same data.
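
In the sketch, a one-to-one allocation can simply be inverted to obtain the lookup_person direction:

allocation = {"ada": "red bike", "ben": "blue bike"}
owner = dict((bike, person) for person, bike in allocation.items())

# inverting loses nothing precisely when the mapping is one-to-one
assert len(owner) == len(allocation)

owner["red bike"]  # 'ada'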

Injective functions (one-to-one)

[image: function_properties_injective]

An injective function returns a distinct value for every input. That is, no bike is assigned to more than one person. If the function is total, then what prevents it from being bijective is the unequal cardinality of the domain and codomain (i.e. more bikes than persons).

Another way to understand it is to think of something small being stored in (embedded in) something big. In order to keep the output values unique, the codomain must be at least as big as the domain. GUIDs are an example of this. A GUID generator guarantees a globally unique identifier by picking values from a sufficiently large space. Given a GUID that has been issued, you can trace it back to exactly one object, but you cannot take just any value in the GUID space and do the same, because most of those values never have been (and never will be) issued to anyone.
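
For a finite allocation, injectivity is easy to check directly (sketch):

def is_injective(alloc):
    # injective iff no two persons map to the same bike
    bikes = list(alloc.values())
    return len(set(bikes)) == len(bikes)

is_injective({"ada": "red bike", "ben": "blue bike"})  # True
is_injective({"ada": "red bike", "ben": "red bike"})   # False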

Surjective functions (onto)

[image: function_properties_surjective]

A surjective function is one where all values in the codomain are used (i.e. all bikes are assigned). In a way it is the counterpart of totality on the other side: a total function covers all the persons, a surjective one covers all the bikes.

Surjective functions are often undesirable in practice, meaning that you have too few resources at your disposal, which forces sharing (threads on a cpu) or rejection (a streaming server can only accept so many clients).

The way to think of injections and surjections is not as opposites, but as complementary properties. A function can be both injective (all persons have a unique bike) and surjective (all bikes are used). If so, it is bijective.
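
And the remaining two checks for the finite case (sketch; persons and bikes are the domain and codomain as sets):

def is_surjective(alloc, bikes):
    # surjective iff every bike is assigned to someone
    return set(alloc.values()) == set(bikes)

def is_bijective(alloc, persons, bikes):
    # total on persons, injective, and surjective onto bikes
    return (set(alloc) == set(persons)
            and len(set(alloc.values())) == len(alloc)
            and is_surjective(alloc, bikes))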

kill obsolete dotfiles

May 23rd, 2009

So my portable dotfiles are working out really well. There is only one fly in the ointment left. When a file has changed, it gets overwritten with the newer version; that's fine. But when a file has been renamed or removed, it will stick around in ~, creating the false impression that it's supposed to be there.

This is not a huge problem, but it does get to be tiresome when you're moving around directory hierarchies. I recently started using emacs more seriously and I expect a lot of stuff will eventually pile up in .emacs.d. Meanwhile, obsolete files and directories will clutter the hierarchy and possibly interfere with the system.

What can we do about it? The archive that holds the dotfiles is a sufficient record of what is actually in the dotfiles at any given moment. We can diff that with the files found on the system locally and pinpoint the ones that have been made obsolete.

For example, I initially had an ~/.emacs file, but then I moved it to ~/.emacs.d/init.el. So ~/.emacs is obsolete. But when I sync my dotfiles on a machine with an older version of my dotfiles, it will still have ~/.emacs around.

Not anymore. Now this happens:

[image: dotfiles_sync]

Files directly in ~ have to be listed explicitly, because we don't know anything about ~ as a whole. But files in known subdirs of ~, like ~/.emacs.d, are detected automatically.

killobsolete() {
	# files directly in ~ formerly used, now obsolete
	# subdirs of ~ checked automatically
	local suspected=".emacs"

	# init tempfiles we need to run diff on
	local found="/tmp/.myshell_found"
	local pulled="/tmp/.myshell_pulled"

	# detect files found locally
	for i in $(gunzip -c cfg.tar.gz | tar -tf -); do
		dirname "$i";
	done | grep -v '^.$' | sort | uniq | xargs find | sort | uniq | \
		grep -v '\.myshell/.*_local' > "$found"

	# detect suspected files
	for f in $(find $suspected 2>/dev/null); do
		echo "$f" >> "$found";
	done

	# sort found list
	cat "$found" | sort | uniq > "$found.x" ; mv "$found.x" "$found"

	# list files pulled from upstream
	for i in $(gunzip -c cfg.tar.gz | tar -tf -); do
		echo "$i";
		dirname "$i";
	done | grep -v '^.$' | sed 's|/$||' | sort | uniq > "$pulled"

	# list obsolete files
	local num=$(diff "$pulled" "$found" | grep '>' | cut -b3- | wc -l)
	if [ $num -gt 0 ]; then
		echo -e "${cyellow} ++ files found to be obsolete${creset}";
		diff "$pulled" "$found" | grep '>' | cut -b3-;

		# prompt for deletion
		echo -e "${cyellow} ++ kill obsolete? [yN]${creset}"
		read -r ans
		if [[ "$ans" == "y" ]]; then
			for i in $(diff "$pulled" "$found" | grep '>' | cut -b3-); do
				if [ -d "$i" ]; then
					rm -rf "$i"
				else
					rm -f "$i"
				fi
			done
		fi
	fi

	# dispose of tempfiles
	rm -f "$pulled" "$found"
}

generalized makefiles

May 18th, 2009

Build systems are probably not the most beloved pieces of machinery in this world, but hey, we need them. If your compiler doesn't resolve dependencies, you need a build system. You may also want one for any repeated task that involves generating targets from sources as the sources change over time (building dist packages, xml -> html, latex -> pdf, etc.).

[image: generalized_makefiles_single]

Fittingly, there are quite a few of them. I haven't done an exhaustive review, but I've mentioned ant and scons in the past. They have their strengths, but the biggest problem, as always, is portability. If you're shipping java then having ant is a reasonable assumption. But if not... The same goes for python, especially if you're using scons as the build system for something that generally gets installed before "luxury packages" like python. Besides, scons isn't popular. I also had a look at cmake, which is disgustingly verbose.

Make is the lowest common denominator and thus the safest option by far. So over the years I've tried my best to cope with it. Fortunately, I tend to have fairly simple builds. There's also autotools, but for a latex document it seems like overkill, to put it politely.

One to one, single instance

So what's the problem here anyway? Let's use a simple example: the randomwalks code. The green file is the source. The orange file is the target. And you have to go through all the yellow ones on the way. The problem is that make only knows about the green one; that's the only file that exists.

So the simplest thing you can do is state these dependencies explicitly, pointing each successive file at the previous one. Then it will say "randomwalks.s? That doesn't exist, but I know how to produce it." And so on.

targets := randomwalks

all: $(targets)

randomwalks : randomwalks.o
	cc -o randomwalks randomwalks.o

randomwalks.o : randomwalks.s
	as -o randomwalks.o randomwalks.s

randomwalks.s : randomwalks.c
	cc -S -o randomwalks.s randomwalks.c

clean:
	rm -f *.o *.s $(targets)

Is this what we want? No, not really. Unfortunately, it's what most make tutorials (yes, I'm looking at you, interwebs) teach you. It sucks for maintainability. Say you rename that file. Have fun renaming every occurrence of it in the makefile! Say you add a second file to be compiled with the same sequence. Copy and paste? Shameful.

One to one, multiple instances

[image: generalized_makefiles_multiple]

It's one thing if the dependency graph really is complicated. Then the makefile will be too; that's unavoidable. But if it's dead obvious, as it is here (and often is), then the build instructions should mirror that. I run into a lot of cases where I have the same build sequence for several files: no interdependencies, no multiple sources, precisely as shown in the picture. In that case I want a makefile that requires no changes as I add and remove files.

I've tried and failed to get this to work several times. The trick is that you can't use variables, you have to use patterns; otherwise you break the "foreach" logic that runs the same command on one file at a time. But patterns are tricky to combine with other rules. For instance, you can't put a pattern as a dependency of all.

At long last, I came up with a working makefile. Use a wildcard and substitution to manufacture the list of target files, then use patterns to state the actual dependencies. It's also helpful to clear .SUFFIXES so that the built-in suffix rules don't get in the way.

targets := $(patsubst %.c,%,$(wildcard *.c))

all: $(targets)

% : %.o
	cc -o $@ $<

%.o : %.s
	as -o $@ $<

%.s : %.c
	cc -S -o $@ $<

clean:
	rm -f *.o *.s $(targets)

.SUFFIXES:

Many to one

[image: generalized_makefiles_manytoone]

What if it gets more complicated? Latex documents are often split up into chapters. You only compile the master document file, but all the imports are dependencies. Well, you can still use patterns if you're willing to use article.tex as the main document and stash all the imports in article/.

This works as expected: $< gets bound to article.tex, while the *.tex files in article/ correctly function as dependencies. Now add another document story.tex with chapters in story/ and watch it scale.

targets := $(patsubst %.tex,%.pdf,$(wildcard *.tex))
 
all: $(targets)
 
%.pdf : %.tex %/*.tex
	pdflatex $<

clean:
	rm -f *.aux *.log *.pdf

Many to many

Latex documents don't often have interdependencies. Code does. And besides, I doubt you want to force this subdirectory structure onto your codebase anyway. So I guess you'll have to bite the bullet and put some filenames in your makefile, but you can still abstract away a lot of cruft with patterns. Make also has a filter-out function, so you could state your targets explicitly, then wildcard on all the source files and filter out the ones corresponding to targets, and use the resulting list as the dependencies -- something like $(filter-out $(targets:%=%.c),$(wildcard *.c)). Obviously, you'd have to be willing to make every non-target a dependency of every target, which yields some unnecessary rebuilds. But at this point the only alternative is to maintain the makefile by hand, so I'd still go for it on a small codebase.

PS. This was the first time I used kivio to draw the diagrams. It works quite well and is decent on functionality, even if the user interface is a bit awkward. The rendering, though, clearly leaves something to be desired.