Archive for the ‘spiderfetch’ Category

spiderfetch, now in python

June 28th, 2008

Coding at its most fun is exploratory. It's exciting to try your hand at something new and see how it develops, choosing a route as you go along. Some people like to call this "expanding your ignorance", to convey that you cannot decide on things you don't know about, so first you have to become aware of them - and of your ignorance of them. Then you can tackle them. If you want a buzzword for it, I suppose you could call this "impulse driven development".

spiderfetch was driven completely by impulse. The original idea was to get rid of awkward, one-time grep/sed/awk parsing to extract urls from web pages. Then came the impulse "hey, it took so much work to get this working well, why not make it recursive at little added effort". And from there on countless more impulses happened, to the point that it would be a challenge to recreate the thought process from there to here.

Eventually it landed on a 400-line ruby script that worked quite nicely, supported recipes to drive the spider, and had various other gimmicks. Because the process was completely driven by impulse, the code became increasingly dense and monolithic as more impulses were realized. It got to the point where the code worked, but was pretty much a dead end from a development point of view. Generally speaking, the deeper you get into a project, the smaller an idea has to be for it to be realized without major changes.

Introducing the web

The most disruptive new impulse was that since we're spidering anyway, it might be fun to collect these urls in a graph and be able to do little queries on them. At the very least things like "what page did I find this url on" and "how did I get here from the root url" could be useful.

spiderfetch introduces the web, a local representation of the urls the spider has seen, either visited (spidered) or matched by any of the rules. Webs are stored, quite simply, in .web files. Technically speaking, the web is a graph of url nodes, with a hash table frontend for quick lookup and duplicate detection. Every node carries information about incoming urls (locations where this url was found) and outgoing urls (links to other documents), so the path from the root to any given url can be traced.
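
To make the structure concrete, here is a minimal sketch in Python of the idea. It is not the actual web module (the class and method names are made up for illustration), just the shape of it: a dict keyed by url for cheap lookup and duplicate detection, with each node recording where it was found and what it links to.

# Illustrative sketch only; not the classes in spiderfetch's actual web module.
from collections import deque

class Node:
    def __init__(self, url):
        self.url = url
        self.incoming = set()   # urls of pages this url was found on
        self.outgoing = set()   # urls found on this page

class Web:
    def __init__(self, root):
        self.root = root
        self.nodes = {root: Node(root)}   # hash table frontend: url -> node

    def add_url(self, found_on, url):
        # Record that url was found on the page found_on.
        page = self.nodes.setdefault(found_on, Node(found_on))
        node = self.nodes.setdefault(url, Node(url))
        page.outgoing.add(url)
        node.incoming.add(found_on)
        # "What page did I find this url on" is then just node.incoming.

    def trace(self, url):
        # "How did I get here from the root url": breadth-first search over
        # outgoing links, returning one path root -> url (or None).
        queue, seen = deque([[self.root]]), {self.root}
        while queue:
            path = queue.popleft()
            if path[-1] == url:
                return path
            for nxt in self.nodes[path[-1]].outgoing:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None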

Detecting file types

Aside from the web impulse, the single biggest flaw in spiderfetch was the lack of logic to deal with filetypes. Filetypes on the web work pretty much as well as they do on your local computer, which means if you rename a .jpg to a .gif, suddenly it's not a .jpg anymore. File extensions are a very weak form of metadata and largely useless. It's just the same with spidering: if you find a url on a page, you have no idea what it is. If it ends in .html then it's probably that, but it can also have no extension at all. Or it can be misleading, which, taken to perverse lengths (eg. scripts like gallery), does away with .jpgs altogether and serves everything as .php.

In other words, file extensions tell you nothing that you can actually trust. And that's a crucial distinction: what information do I have vs what can I trust. In Linux we deal with this using magic. The file command opens the file, reads a portion of it, and scans for well known content that would identify the file as a known type.

For a spider this is a big roadblock, because if you don't know which urls are actual html files that you want to spider, you pretty much have to download everything, including potentially large files like videos that are a complete waste of time (and bandwidth). So spiderfetch brings the "magic" principle to spidering. We start a download and wait until we have enough of the file to check the type. If it's the wrong type, we abort. Right now we only detect html, but there is potential to extend this with all the information the file command has (that would involve writing a parser for "magic" files, though).
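
As a rough sketch of the principle (not spiderfetch's own detection code, and using today's urllib.request where the 2008 code used the python 2 urllib/httplib): read just the first chunk of the response, look for html markers, and only keep downloading if they are there.

# Sketch of the idea only: sniff the first chunk before committing to the download.
import urllib.request

def looks_like_html(chunk):
    # Crude "magic" check: html documents normally announce themselves early on.
    head = chunk[:1024].lstrip().lower()
    return head.startswith(b"<!doctype html") or b"<html" in head

def fetch_if_html(url, chunk_size=8192):
    with urllib.request.urlopen(url) as resp:
        first = resp.read(chunk_size)
        if not looks_like_html(first):
            return None                # wrong type: abort here, saving the bandwidth
        return first + resp.read()     # right type: finish the download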

A brand new fetcher

To make filetype detection work, we have to be able to do more than just start a download and wait until it's done. spiderfetch has a completely new fetcher in pure python (no more calling wget). The fetcher is actually the whole reason the switch to python happened in the first place. I was looking through the ruby documentation for what I needed from the library and soon realized it wasn't cutting it; the http stuff was just too puny. I looked up the same topic in the python docs and immediately realized that it would support what I wanted to do. In retrospect, the python urllib/httplib libraries have covered me very well.

The fetcher has to do a lot of error handling for all the various conditions that can occur, which means it also has a much deeper awareness of the possible errors. It's very useful to know whether a fetch failed on a 404 or on a dns error. The python library also makes it easy to customize what happens on the various http status codes.
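
A sketch of that kind of error discrimination, again written against today's urllib.request rather than the python 2 modules the fetcher actually used:

# Sketch only; not spiderfetch's fetcher, just the flavor of its error handling.
import socket
import urllib.error
import urllib.request

def fetch(url):
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            return resp.read()
    except urllib.error.HTTPError as e:
        # The server answered, but with an error status (404, 403, 500, ...).
        print("http error %d fetching %s" % (e.code, url))
    except urllib.error.URLError as e:
        # No http conversation took place: dns failure, refused connection, timeout.
        if isinstance(e.reason, socket.gaierror):
            print("dns error fetching %s: %s" % (url, e.reason))
        else:
            print("network error fetching %s: %s" % (url, e.reason))
    return None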

A modular approach

The present python code is a far cry from the abandoned ruby codebase. For starters, it's three times larger. Python may be a little more verbose than ruby, but the increase is due to new modularity and, most of all, new features. While the ruby code had eventually evolved into one big chunk, the python codebase is a number of modules, each of which can be extended quite easily. The spider and fetcher can both be used on their own, there is the new web module to deal with webs, and there is spiderfetch itself. dumpstream has also been rewritten from shell script to python and has become more reliable.

Grab it from github:

spiderfetch-0.4.0

spiderfetch, part 2

April 27th, 2008

Note: If you haven't read part 1 you may be a little lost here.

So, the inevitable happened (as it always does, duh). I start out with a simple problem and not too many ambitions about what I want to accomplish. But once I reach that plateau, nay, well before reaching it, I begin to ever so quietly ask myself questions like "wait a second, what if x?" and "this looks specialized, I wonder if I could generalize...". And so before ever even reaching that hilltop I've already, covertly, committed myself to taking it one step further. Not through a conscious decision, but through those lingering peripheral thoughts that you know won't disappear once they've struck. A bell cannot be unrung and all that.

I realized this was happening, but I didn't want to get into too much grubby stuff in the first blog entry, so I decided to keep that one simple and continue the story here. The first incarnation of spiderfetch had a couple of flaws that bugged me.

  1. No way to inspect how urls were being matched on the page, or even reason to believe this was happening correctly, other than giving an input and checking that all the expected urls were found. To make matching evident, I would need to be able to see the matches visually on the page.
    This has been addressed with a new option --dumpcolor, which dumps the index page and highlights the matches. This has made it much easier to verify that matching is done correctly.
  2. Matching wasn't sufficiently effective. The regex I had written would match urls inside tags, as long as they were in quotes. But this would still miss unquoted urls, and it also excluded all other urls on the page, which may or may not be of interest. I also realized that a single regex, no matter how refined, would be unlikely to match simultaneously all the urls that may be of interest.
    The obvious response is to add an option for multiple regexes, which is exactly what happened. This obviously adds another layer of complexity to debugging regexes, so the match highlighting was extended to colorize every match in a different color. Furthermore, where two regexes match the same characters, the highlighting is in bold to indicate the overlap (see the sketch after this list).
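
To illustrate the highlighting idea (a sketch only, in Python rather than the Ruby the tool was written in at this point, and not the tool's own code): remember, for every character, which patterns matched it, then pick a color per pattern and switch to bold wherever more than one pattern claims the same character.

# Sketch of the idea behind --dumpcolor, not spiderfetch's implementation.
import re

COLORS = [31, 32, 33, 34, 35, 36]   # ANSI foreground color codes

def highlight(text, patterns):
    # hits[i] is the set of pattern indexes that matched the character at position i.
    hits = [set() for _ in text]
    for i, pat in enumerate(patterns):
        for m in re.finditer(pat, text):
            for pos in range(m.start(), m.end()):
                hits[pos].add(i)
    out = []
    for ch, matched in zip(text, hits):
        if not matched:
            out.append(ch)
        else:
            color = COLORS[min(matched) % len(COLORS)]
            bold = "1;" if len(matched) > 1 else ""   # overlap -> bold
            out.append("\033[%s%dm%s\033[0m" % (bold, color, ch))
    return "".join(out)

print(highlight('<a href="http://example.com/video.asx">clip</a>',
                [r'href="[^"]+"', r'http://\S+?\.asx']))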

With that, I was far happier with the ability to infer and verify correctness in the matching behavior. Surely now everything is hunky dory?

Or not? (As a classmate of mine likes to say after delivering a convincing argument, while graciously giving you the chance to state your objections anyway.) Well, if you read part 1 of this adventure right to the end, noting the observation that spiderfetch could be run recursively, you may have thought what I thought. Well gosh, Bubba, this is starting to sound like wget --mirror. Since I've set up all this infrastructure already -- to spider a single page -- it wouldn't really take much to generalize it to run recursively.

There are a couple of problems to solve, however. Firstly, the operational model for spiderfetch was very simple: spider a page, then fetch all the urls that match the pattern. In terms of multiplicity: 1 page to spider, 1 pattern to match urls against, n urls to find. If we now take this a step further, in the next pass we have n urls to spider (obtained from the n urls found in the first step), and we may need 1 pattern to filter some of them. Next, we spider these pages, which produces (m1+m2+...) (or roughly, n*m) urls, and so on. This becomes rather convoluted to explain in words, so let's visualize.

Starting at the url to be spidered (the top green node), we spider the page for urls. Each url found ends up in one of three categories:

  1. It matches the spider filter, so it becomes a url to spider in the next round (a black arrow).
  2. It matches the fetch filter, so it becomes a url to fetch (a blue arrow).
  3. It matches neither and is discarded (not shown).

In the next round, we gather up all the urls that are to be spidered (the black arrows starting at the top green node) and do the same thing for each one as we did with just the one page to begin with.

But this complicates matters quite a lot. We now have to deal with a bunch of new issues:

  1. How do we traverse the nodes? wget in mirror/spider modes goes depth-first, which I always thought was eccentric. I don't know why they do it this way, but I'm guessing to minimize memory use. If you go breadth-first then at every step you have to keep track of all the green nodes at the current level, which grows exponentially. Meanwhile, depth-first gives you linear growth, so that choice is well justified. But, on the other hand, the traversal order seems a bit unintuitive, because you "jump" from the deepest corner of your filesystem back to the top level and so on. I wonder if this turns out to be foolish (I don't expect spiderfetch to get the same kind of workout that wget does, obviously), but I've chosen the opposite approach, which I think also makes it easier to track what is happening underway.
  2. How deep do we want to go? Do we want to set an upper bound or (gasp) let it run until it stops?
  3. Until now we've only needed one filter (for the blue arrows at the top green node). Now we suddenly have a lot more arrows that we should be able to filter in some meaningful way. Obviously, we don't want a pair of filters for every single node. Not only would that be madness, but we don't know in advance how many nodes there will be.
    Our old friend wget only has one filter you can set for the whole site. But we want to be more specific than that, so there is a pair of filters (spider, fetch) for every level of the tree. This gives pretty decent granularity.

So how can we represent this cleanly? Well, it would be rather messy to have to input all of this as command line parameters, and besides, a scheme written once for a particular scenario should be reusable. So instead we introduce the idea of a recipe composed of rules. Starting from the top of the tree, each rule applies to the next level of the tree. And once we have no more rules -- or no more urls to spider -- we stop.
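
The core loop might look roughly like this. This is a sketch in Python, not the tool's code (which was Ruby at this point); spider_page and fetch stand in for the functions that extract urls from a page and download a url, and the rule keys are plain strings here where the real recipes use Ruby symbols like :spider and :fetch.

# Sketch only. spider_page(url) -> list of urls on that page, fetch(url) downloads
# (or dumps) a url; both are stand-ins, not spiderfetch's real functions.
import re

def run_recipe(root, rules, spider_page, fetch):
    # Breadth-first: rules[0] applies to the root page, rules[1] to the pages
    # it spawned, and so on. Stop when the rules or the urls run out.
    level = [root]
    for rule in rules:
        to_spider, to_fetch = [], []
        for page_url in level:
            for url in spider_page(page_url):
                if "spider" in rule and re.search(rule["spider"], url):
                    to_spider.append(url)     # black arrow: spider it next round
                if "fetch" in rule and re.search(rule["fetch"], url):
                    to_fetch.append(url)      # blue arrow: fetch (or dump) it now
        for url in to_fetch:
            fetch(url)
        if not to_spider:
            break
        level = to_spider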

Let's take the asx example from part 1, where we had a custom made bash script to do the job. We can now rewrite it like this. First, the recipe is a list of rules, each rule is a hash. So starting from the top green node, we grab the first rule in the list, the one that contains the symbol :spider. This gives us the pattern to match urls on that page for spidering. There are no other patterns in there, so we spider these urls and then move on to the next step. We are now at the level below the top green node in the tree, with a bunch of pages from urls ending in .asx. We now grab the next rule in the recipe. This one gives a pattern for :dump, which means "dump these urls to the screen". So we find all the urls that match this pattern in all of our green nodes and dump them. Since there are no more rules left, this is where we stop.

module Recipe
	RECIPE = [
		{ :spider => '\.asx$' },
		{ :dump => '^mms://' },
	]
end

So you would use it like this:

spiderfetch.rb --recipe asx http://www.something.com/somewhere

The options for patterns are :spider, :fetch, and :dump. If you want to repeat the same rule several times (for example to spider an image gallery with 10 pages, which are linked together with Next and Previous links), you can also set :depth to a positive integer value. This will descend in the tree the given number of times, using the same rule again and again.

And if you're feeling completely mental, you can even set :depth => -1, which will repeat the same rule until it runs out of urls to spider. You should probably combine this with --host, which will make sure you only spider the host (domain, to be exact) you started with, rather than the whole internet. (It will still allow :fetch and :dump to match urls on other hosts, so if you're spidering for images and they live on http://img.host.com rather than http://www.host.com, they will still be found.)

Lastly, as a heavy-handed arbitration measure, if you execute a recipe and pass either --dump or --fetch, this will switch all your :fetch patterns to :dump or vice versa. It might be nice, for instance, to check that the right urls are being found before you start fetching.

Download and go nuts:

spiderfetch-0.3.1.tar.gz

UPDATE: Paul Hawkins wrote to say that wget actually runs in breadth-first mode.

download all media links on a webpage

April 26th, 2008

This has probably happened to you. You come to a web page with links to a bunch of pictures, or videos, or documents that you want to download. Not one or two, but all of them. How do you go about it? Personally, I use wget for anything that will take a while to download. It's wonderful: it accepts http, https, ftp etc, has options to resume and retry, and it never fails. I could just use Firefox, and if it's small files then I do just that: click all the links in one fell swoop and let them all download on their own. But with larger files that's not practical. You don't want to download 20 videos of 200mb each in parallel, that's no good. If Firefox crashes within the next few hours (which it probably will), you'll likely end up with not even one file successfully downloaded. And Firefox doesn't have a resume function (there is a button but it doesn't do anything :rolleyes: ).

So there is a fallback option: copy all the links from Firefox and queue them up for wget: right click in the document, Copy Link Location, right click in the terminal window. This is painful and I last about 4-5 links before I get sick of it, download the web page and start parsing it instead. That always works, but I have to rig up a new chain of grep, sed, tr and xargs wget (or a for loop) for every page; I can never reuse it, so the effort doesn't go a long way.

There is another option: I could use a Firefox extension, and there are several for this purpose. But that too is fraught with pain. Some of them don't work, some only work for certain types of files, some still require a fair amount of manual effort to pick the right urls, some don't support resuming a download after Firefox crashes. Not to mention that every new extension slows down Firefox and adds another upgrade cycle you have to worry about. Want to run Firefox 3? Oh sorry, your download extension isn't compatible. wget, in contrast, never stops working. Most limiting of all, these extensions aren't Unix-y. They assume they know what you want, and they take you from start to end. There's no way to plug in grep somewhere in the chain to filter out things you don't want, for example.

So the problem is eventually reduced to: how can I still use wget? Well, browsers being as lenient as they are, it's difficult to guarantee that you can parse every page, but you can at least try. spiderfetch -- whose name describes its function: spider a page for links and then fetch them -- attacks the common scenario. You find a page that links to a bunch of media files, so you feed the url to spiderfetch. It will download the page and find all the links (as best it can). It will then download the files one by one. Internally, it uses wget, so you still get the desired functionality and the familiar output.

If the urls on the page require additional post-processing -- say they are .asx files you have to download one by one, grab the mms:// url inside, and dump the stream with mplayer -dumpstream -- you at least get the first half of the chain. (Unlikely scenario? If you wanted to download these freely available lectures on compilers from the University of Washington, you have little choice. You could even chain spiderfetch to do both: first spider the index page and download all the .asx files, then spider each .asx file for the mms:// url, print it to the screen and let mplayer take it from there. No more grep or sed. :) )

Features

  • Spiders the page for anything that looks like a url.
  • Ability to filter urls with a regular expression (keep in mind this is still Ruby's regex syntax, so .* to match any run of characters, not * as in file globbing, (true|false) for alternation, and so on).
  • Downloads all the urls serially, or just outputs to screen (with --dump) if you want to filter/sort/etc.
  • Can use an existing index file (with --useindex), but then if there are relative links among the urls, they will need post-processing, because the path of the index page on the server is not known after it has been stored locally.
  • Uses wget internally and relays its output as well. Supports http, https and ftp urls.
  • Semantics consistent with for url in urls; do wget $url... does not re-download completed files, resumes downloads, retries interrupted transfers.

Limitations

  • Not guaranteed to find every last url, although the matching is pretty lenient. If you can't match a certain url you're still stuck with grep and sed.
  • If you have to authenticate yourself somehow in the browser to be able to download your media files, spiderfetch won't be able to download them (as with wget in general). However, all is not lost. If the urls are ftp or the web server uses simple authentication, you can still post-process them to: ftp://username:password@the.rest.of.the.url, same for http.

Download spiderfetch:

Recipes

To make the usage a bit clearer, let's look at some concrete examples.

Recipe: Download the 2008 lectures from Fosdem:

spiderfetch.rb http://www.fosdem.org/2008/media/video "2008.*ogg"

Here we use the pattern 2008.*ogg. If you first run spiderfetch with --dump, you'll see that all the urls for the lectures in 2008 contain the string 2008. Further, all the video files have the extension ogg. And whatever characters come in between those two things, we don't care.

Recipe: Download .asx => mms videos

Like it or not, sometimes you have to deal with ugly proprietary protocols. Video files exposed as .asx files are typically pointers to urls using the mms:// protocol; Microsoft calls them metafiles. This snippet illustrates how you can download them. First we spider for all the .asx urls, using the pattern \.asx$, which means "match strings that end in .asx". Then we spider each of those urls for the actual urls to the video files, which begin with mms. And for each one we use mplayer -dumpstream to actually download the video.

#!/bin/bash

# Spider the page for .asx urls, then spider each .asx file for its mms:// url
# and dump the stream with mplayer.

mypath=$(cd "$(dirname "$0")"; pwd)
webpage="$1"

for url in $("$mypath/spiderfetch.rb" "$webpage" '\.asx$' --dump); do
	video=$("$mypath/spiderfetch.rb" "$url" '^mms' --dump)
	mplayer -dumpstream "$video" -dumpfile "$(basename "$video")"
done