Archive for the ‘code’ Category

using your dotfiles on the go

July 27th, 2007

Chances are you use more than one computer in your daily/weekly/monthly routine. Probably not because you want to; it would be more practical for all of us to just have one place to store all our stuff, but alas.

A problem

Anyway, if you do, then you know the pain of sitting down somewhere without your carefully set up and configured environment, dropped into some subpar default world that doesn't have all your personal goodies.

I've endured this for a number of years, until eventually I decided to fix it. Mind you, the reason there isn't a standard solution for this is that the problem has a rather large number of variables. But for this you need:

  • a universally reachable storage location (http, ftp, etc)
  • bash or zsh

The idea came up when I started fiddling around with zsh, trying to get a reasonable prompt working. Once I finally did, obviously I wanted the same environment that I already had in bash. And not knowing whether I'd love zsh enough to use it permanently, I also wanted the option to use bash whenever I might want to.

To complicate things a bit, I also like the option to have my prompt look slightly different depending on the host I'm logged into. One too many reboots when logged into a remote server (thinking I was rebooting the local machine) made me introduce that as policy. :D

So my prompt looks like this, where the hostname always has a different color on every host. I also change the curdir to be red when I'm root to remind me of that fact.

[bash_prompt.png: screenshot of the colored bash prompt]

After a lot of trial and error I eventually got my zsh prompt looking very similar.

Finally, there's the aspect that perhaps on some hosts you need some local settings as well, so one should make allowances for that. For instance, perhaps you want to set PATH or LD_LIBRARY_PATH on just one host.

A solution

All of these concerns produced the following files:

.zshrc
.zlogout
.zsh/functions/
.zsh/functions/prompt_numerodix_setup
.bashrc
.bash_profile
.bash_logout
.myshell/bash_local
.myshell/colors
.myshell/colors_zsh
.myshell/common
.myshell/common_local
.myshell/hostcolor_local
.myshell/zsh_local

The .bash* and .z* files are the usual stuff, of course. But everything common to bash and zsh is contained in .myshell/common, which is imported by both .bashrc and .zshrc. That way we can share stuff between them. Further, everything called *local is local to the specific host, so these files are not distributed as part of the config update.
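The wiring itself is nothing special; the top of .bashrc might look roughly like this (a sketch, the real file may source things in a different order):

# ~/.bashrc (sketch): pull in the shared settings, then anything host-specific
[ -f ~/.myshell/colors ]       && . ~/.myshell/colors
[ -f ~/.myshell/common ]       && . ~/.myshell/common
[ -f ~/.myshell/common_local ] && . ~/.myshell/common_local
[ -f ~/.myshell/bash_local ]   && . ~/.myshell/bash_local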

But the most useful bit is the pair of functions called pushcfg and pullcfg, in .myshell/common, which do the actual work of storing updates of these files in the central location. I use a website for this, but you could use some other easy-to-reach location just as well.

To see what happens, let's look at the pullcfg function.

pullcfg(){
	oldcwd=$(pwd); cd ~
	# fetch and unpack the latest archive, overwriting the shared files
	wget http://www.matusiak.eu/numerodix/configs/cfg.tar.gz
	gunzip cfg.tar.gz
	tar xvf cfg.tar
	rm cfg.tar
	# make sure the host-local files exist, without clobbering them
	touch .myshell/common_local .myshell/bash_local .myshell/zsh_local .myshell/hostcolor_local
	# restrict permissions on the zsh function files
	chmod -R 700 .zsh
	cd "$oldcwd"
}

So the tar file is downloaded and extracted, overwriting the existing files. Then all the local files are created empty (unless they already exist) so you don't have to remember their names.
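pushcfg goes the other way. The exact upload step depends on where you keep the archive; a rough sketch using scp to a made-up location could look like this:

pushcfg(){
	oldcwd=$(pwd); cd ~
	# pack up the shared files (the *local files stay behind)
	tar cf cfg.tar .bashrc .bash_profile .bash_logout .zshrc .zlogout .zsh \
		.myshell/colors .myshell/colors_zsh .myshell/common
	gzip cfg.tar
	# upload to wherever pullcfg downloads from -- an assumed scp target here
	scp cfg.tar.gz user@example.com:www/configs/
	rm cfg.tar.gz
	cd "$oldcwd"
}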

So all you do is log into a host, download the tar file and unpack it the first time around; after that, running pullcfg is all it takes to update. Then log out and back in to see the effects.

But back to the prompt issue. A common problem with bash is that it supports colors, but the raw escape codes are a pain to use directly. The file .myshell/colors defines variable names for them, so everything related to colors becomes much simpler. From there on, the prompt can be set quite simply, like so.

PS1="${cbwhite}[\u@${host}\H${cbwhite}] ${cbgreen}\w \n${cbblue}$ ${creset}"

cbwhite means "color, bold, white", while cwhite means "color, white" and is the darker shade of the same color. Now, since zsh handles colors differently, .myshell/colors_zsh redefines these variables just for zsh, so that they work in both shells.
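To give an idea, the definitions in .myshell/colors might look something like this (standard ANSI escape codes, wrapped in \[ \] so bash can calculate the prompt width correctly):

creset="\[\033[0m\]"
cwhite="\[\033[0;37m\]"
cbwhite="\[\033[1;37m\]"
cbgreen="\[\033[1;32m\]"
cbblue="\[\033[1;34m\]"
cbmagenta="\[\033[1;35m\]"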

But what about changing the color of the hostname with every host? Glad you asked. The first time I extract the tar file on a new host, the prompt is not colored. This is to remind me that I haven't set a color for the hostname on this host yet. The hostname's color is defined in .myshell/hostcolor_local. So in the screenshot above it contains the line "cbmagenta". Once this file exists, and is non-empty, the prompt will become colored.
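Roughly how this might be wired up on the bash side (a sketch; the real files may do it differently, and zsh spells the indirection differently):

# pick up the host color, if one has been set for this host
if [ -s ~/.myshell/hostcolor_local ]; then
	# the file contains a color name like "cbmagenta"; look up its escape code
	hostcolorname=$(cat ~/.myshell/hostcolor_local)
	host="${!hostcolorname}"
else
	# no color chosen yet for this host: leave the hostname uncolored
	host="$creset"
fi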

And so the whole thing works, with just enough organization and flexibility to do the right things. :)

Want to test drive? Download the tar file:

Of course, once the opportunity presents itself to migrate settings, I take advantage of that to also include .vimrc and whatever else is useful to have.

play audiobooks on crappy mp3 players

May 30th, 2007

Portable music players are to me a very welcome addition to our lives. A lot of time that was previously completely wasted can now be exploited. All the waiting is so much better: waiting for the bus, waiting at the post office, waiting to cross the street. Oh sure, we've had portable players for the last ~25 years, but the reason they've become so universal lately is how convenient they've become to use. Yes, whether something is practical matters a great deal; there's hardly a better example.

And yes, mp3 players are great... for music. Just like pens are great for writing letters. But that's not all you can do with them; after all, it's an audio player, not a music-only player. Music is great when you're on the road, but when you're out there a long time, no matter how much music you have, it does get a little boring. So how do you pick up that slack? Why not try an audiobook.

I actually don't play much music on my player anymore; anytime I'm outdoors I use it for spoken audio. Unfortunately, portable players generally suck for this. It's as if no one thought of it: gee, what if someone wanted to listen to something longer than 5 minutes?

Obviously, audiobooks tend to be longer than songs. Depending on how they're divided up into tracks, it's sometimes very inconvenient to play them. If you miss 30 seconds of a song, it doesn't really matter. But if you're listening to prose and you get interrupted, you want to seek back those 30 seconds to hear what you missed. For audiobooks, easy seeking within tracks is pretty important.

My old iRiver ifp series used to choke on tracks longer than about 30 minutes. It would play the track and go past this limit, but the duration on the display would then be out of sync with the audio, and seeking was completely broken past that point. Unless there's been a firmware upgrade in the last 6 months, this is still the case. Not only that, the seeking function was pretty much the weakest part of the interface. It was very impractical, and very often I would accidentally skip to the next track instead of seeking forward (holding the button vs pressing). Incredibly annoying.

This seems to be the trend in general: seeking is treated as a marginal feature, and no one is making it easy to use on their portable player. Another inconvenience is that some players have displays so small that long artist/title/album names are a pain to check; it takes forever to scroll through them.

So what to do? Well, you can hack around it. It's not an elegant solution, but it's a solution. Divide all these long tracks into short tracks, so your player won't choke on them, and so that if seeking isn't reliable you never have to go back more than 5 minutes.

tracksplit.rb will do just that. Run it in the directory where you have your longish tracks (it accepts mp3/ogg) and it will chop each one into pieces for you. It also renames them sequentially (so I assume you know what tracks you have), so the original alphabetical order is preserved. The originals are deleted (after all, this is just a copy for your portable player, right?).

The actual heavy lifting is done with mp3splt, which cuts tracks into pieces without re-encoding.

#!/usr/bin/env ruby
#
# Author: Martin Matusiak <numerodix@gmail.com>
# Licensed under the GNU Public License, version 2.
#
# Note: this script uses mp3splt to split mp3/ogg files at a set length.


# set the length in minutes for each track, eg. 5.0 = 5 minutes
$track_length = 5.0


if ARGV[0].nil?
	$duration = $track_length
	puts "Track length not given, using standard track length of #{$track_length}"
else
	$duration = ARGV[0].to_f
end

# check that the mp3splt tool is available before doing anything
if not system "which mp3splt > /dev/null 2>&1"
	puts "Error: mp3splt not found on system"
	exit 1
end

$pattern = "*.{mp3,ogg}"
files = Dir[$pattern]

if files.empty?
	puts "No files named \"#{$pattern}\" found, exiting"
	exit 0
end

# width of the numeric prefix, so the files sort correctly (e.g. 2 digits for up to 99 files)
w = files.length.to_s.length
files.each_with_index do |file, i|
	# prefix with the original position; @n is mp3splt's placeholder for the piece number
	newfile = "%0#{w}d_@n" % i
	cmd = "mp3splt -t #{$duration} -o \"#{newfile}\" \"#{file}\""
	puts cmd
	# only delete the original if the split succeeded
	if system cmd
		File.delete(file)
	end
end
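Usage is about as simple as it gets; for example, to split everything in the current directory into 7.5 minute pieces (assuming the script is executable and on your PATH, paths made up):

$ cd ~/audiobooks/some_book
$ tracksplit.rb 7.5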

Ps. This happens to be my first adventure with ruby, so report breakage please ;)

painless website backup/synchronization

May 18th, 2007

Why you should care

There are quite a few reasons why you would want to back-up your website. For one thing, in the case of some kind of security breach, you don't want to lose the files on the server. Even if someone broke in, with a backup you could just restore it and you'd be back in a jiff. Otherwise, maybe you just want full control of your files, and knowing that they sit on a server somewhere remote doesn't make you feel as good as knowing they are right on your local disk. Whatever the reason, the following method is well suited to Wordpress sites, but general enough to apply to just about any website.

What's more, the following method transfers files in both directions, so it's equally well suited to deployment. It makes no difference whether you're uploading or downloading; we cover both bases.

How it works

Okay, that was the sales pitch. The script was written to allow for fast deployment of files on a server. Using Wordpress as an example, if you're hacking on your theme and you want to upload that one file you changed and see the result, you can do that quickly and painlessly with rsync. It's really the best way to transfer one file when you know none of the other files have changed. rsync synchronizes two locations, transferring only what has changed.
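To give a taste of what the script automates, the one-file theme example above could be done by hand with something like this (host and paths made up):

$ rsync -av -e "ssh -p 22" \
    public_html/wp-content/themes/mytheme/style.css \
    username@example.com:public_html/wp-content/themes/mytheme/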

The files are transferred with rsync over ssh, so you need shell access on the server for this.

In a typical example where you have an account on a web server, this is roughly what the file structure at the root level (your homedir) looks like:

$ ls ~
.bashrc
.htaccess
.ssh
bin/
etc/
mail/
public_ftp/
public_html/
    cgi-bin/
    images/
        picture.jpg
    index.html
tmp/
Of all these, only a few (in this example, public_html/ and everything under it) are ones you want to synchronize with your local disk and keep up-to-date. But there will generally be a lot of other files you're not interested in, generated in your homedir automatically, like raw web traffic logs, mail spam etc. (If the item is a directory, you want all the files and dirs it contains to be synchronized.)

So the issue is to selectively pick the items you want. But there may also be certain types of files inside these dirs that you don't want; for instance, I ignore cgi-bin. So you want a way to exclude certain files/dirs from being transferred.

How to

Now that you know what's happening, it's time to set it up. You fill in the variables at the top of the script. local_path is where you want the files on disk. remote_path is where they are located on the server (in most cases ~ or /home/username). locations is the list of top level directories/files you want to synchronize. And finally exclusions are patterns you want to exclude (so if it contains cgi-bin, then that directory and all the files in it will be excluded from the synchronization).
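For the homedir shown earlier, the top of the script might be filled in something like this (hostname and paths are just placeholders):

hostname="example.com"
username="myuser"
ssh_port="22"

local_path="$HOME/backup/example.com"

remote_path="~"
locations="public_html"

exclusions="cgi-bin *.swp *~"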

Once that's done, you just run

$ sync.sh down

to download the files on the server to your local dir, and

$ sync.sh up

to transfer your local changes to the website. Finally,

$ sync.sh

alone will log you into your server with ssh.

Time to synchronize the full local/remote tree for matusiak.eu (5470 files) when no changes were made: 4.4 seconds. ;)

A small note about security

Note that this script does not violate or subvert how you access your server. It uses ssh as the underlying security context. You can easily synchronize up/down with public key authentication, in which case you'll never have to type in your password when running sync.sh, and it's actually more secure as well. :)
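Setting that up is a one-time job, roughly along these lines (adjust the host; ssh-copy-id behavior varies a bit between versions):

$ ssh-keygen -t rsa
$ ssh-copy-id username@example.com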

#!/bin/bash
#
# Author: Martin Matusiak <numerodix@gmail.com>
# Licensed under the GNU Public License, version 2.


# server setup
hostname="matusiak.eu"
username=""
ssh_port="22"

# local setup
local_path="/local/path"

# remote setup
remote_path="~"
locations="bin backups public_html"

exclusions="cgi-bin *.swp *~" #.swp are vim swap files


## EDIT BELOW THIS LINE IF YOU KNOW WHAT YOU'RE DOING

# rsync options
rsync_options="--archive --verbose --stats --progress"

# switch priority
nice="nice -n 10"


inc_list=""
# build the rsync filter rules: drop the excluded patterns, include the chosen
# top level items, then shut out everything else at the root
function inclusion_list() {
	for i in $exclusions; do
		inc_list="${inc_list}--filter='- $i' "
	done
	for i in $locations; do
		inc_list="${inc_list}--filter='+ /$i' "
	done
	inc_list="${inc_list} --filter='- /*'"
}

function shell() {
	 ssh -C ${username}@${hostname} -p ${ssh_port}
}

function sync_up() {
	inclusion_list
	cmd="${nice} rsync ${rsync_options} -e \"ssh -p ${ssh_port}\" \
	${inc_list} \
	${local_path}/* \
	${username}@${hostname}:${remote_path} "
	echo "$cmd"
	sh -c "$cmd"
}

function sync_down() {
	inclusion_list
	mkdir -p ${local_path}
	cmd="${nice} rsync ${rsync_options} -e \"ssh -p ${ssh_port}\" \
	${inc_list} \
	${username}@${hostname}:${remote_path}/* \
	${local_path} "
	echo "$cmd"
	sh -c "$cmd"
}


if [ -z "$1" ]; then
	shell
elif [ "$1" = "down" ]; then
	sync_down
elif [ "$1" = "up" ]; then
	sync_up
else
	echo "$0 [down|up]"	
fi

latex: adding pagebreaks at sections

May 11th, 2007

Steven Wright once said something to this effect:

I have a huge collection of sea shells. It's spread out on all the beaches of the world.

That's an exact description of the state of latex documentation. Sure, here is probably the most powerful typesetting language known to man; well, probably just to the one man who actually knows it, the rest of us know bits and pieces. But when you actually need to do something you haven't done before, or have done but can't remember, bon voyage.

Safe trip on that extensive google search, finding ancient web pages describing good old techniques (latex hasn't changed much over the years, decades even), 404 links to packages that once were in use, and a great deal of tips & tricks that seem useful, but are nothing like what you need to do right now.

Sometimes you'll find the answer. Sometimes you'll give up. Sometimes you'll conclude it's not possible (or at least, not unless you're a latex wizard). In general, it is possible. But because latex is used and abused by so many in so many different ways, over so many years, it's naturally hard to keep track of who accomplished what and how.

But there is no centralized documentation at all. Latex is so huge that it needs to be extensively documented, but what you find instead is some professor who wrote a tutorial for his students for that particular assignment, or a list of all the symbols you can use, or all kinds of bits and pieces, but nowhere can you find the whole. Not how the different programs are related to each other, how to write a fairly general Makefile for them, how to actually construct a workflow out of it. For that you'd better hope there is someone willing to guide you through it in the beginning.

One of the things I've wanted to do for some time is enforce a pagebreak before every section, because in some cases it just makes sense. So I was thrilled today to stumble upon one of those ancient pages that has a working recipe for it. When you look at the solution, it's ridiculously simple, but when you don't know it... well.

\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{pagedsections}[2007/05/11 Adding pagebreaks before sections]

\let\oldsection = \section
\renewcommand{\section}[1]{
	\pagebreak
	\oldsection{#1}
}

Then, of course, include it into the document as usual:

\documentclass[12pt]{article}

\usepackage{pagedsections}

\begin{document}
\section{first}
blahdeeblah
\section{second}
blah
\end{document}
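For this to work, the package code above goes in a file called pagedsections.sty in the same directory as the document (the file name has to match the \usepackage line); then compile as usual, e.g. with a hypothetical document.tex:

$ pdflatex document.tex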

This lack of documentation is common for applications that predate the age of the internet, or at least the "modern" internet (not counting usenet and whatever other deprecated forms of communication). For instance, bash suffers from an acute lack of in-depth documentation.

keepalive.sh: restarting flaky applications

February 5th, 2007

Sometimes you just want an application to run in the background for whatever reason. One that tends to crash. Well, if you're not there when it crashes, you can't start it up again. So what to do? The obvious answer is "fix the damn application already!" But maybe you don't have the source code. Or you don't know how. Or you can't be bothered. Or whatever. And you just want a way to automatically restart the application whenever it crashes.

I didn't know how to do that before, so I never had a solution for those rare cases when this was needed. But it's very easy to do.

#!/bin/sh

if [ "x$1" = "x" ]; then echo "usage: $0 <application>"; exit 1; fi


app=$1
echo "Running $app"


# launch the application and remember its pid
$app &
pid=$!
while true; do
	# if the pid is gone, the app has died (or was killed), so start it again
	if ! ps -p $pid > /dev/null; then
		echo "Restarting $app"
		$app &
		pid=$!
	fi
	echo "pid: $pid"
	# block until the app exits, then pause before restarting it
	wait $pid
	sleep 30
done

Here's how keepalive.sh works.

  1. It starts the application.
  2. It captures the pid.
  3. Now it starts an infinite loop.
    1. Check the pid to see if the app is running.
    2. If the app is not running, start it and capture the pid.
    3. Wait for it to finish.
    4. Sleep 30 seconds, then go to 3.1.

It doesn't matter if you stop the application in a standard way, if you kill it, or if it dies on its own. Within 30 seconds it will be restarted. The short delay is included so that an application that dies instantly won't keep restarting and dying all the time, bringing your system to its knees. Until you stop keepalive.sh, it will keep looping forever.
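So to keep some crash-prone program running (name made up), it's just:

$ ./keepalive.sh my_flaky_app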