You can resume an HTTP download by setting the Range header, or an FTP download with the REST command. Not all hosts support this, but many do.
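Just to make the mechanism concrete, here is a rough sketch in Python of what "setting the Range header" amounts to. The URL and filename are made up, and a careful client would also check that the server actually answered 206 Partial Content before appending anything:

```python
import os
import shutil
import urllib.request

url = "http://example.com/big.iso"   # hypothetical URL
local = "big.iso"                    # hypothetical local filename

# Ask the server for everything from the end of the local file onwards.
offset = os.path.getsize(local) if os.path.exists(local) else 0
req = urllib.request.Request(url, headers={"Range": f"bytes={offset}-"})

with urllib.request.urlopen(req) as resp, open(local, "ab") as out:
    if resp.status == 206:                 # server honored the Range header
        shutil.copyfileobj(resp, out)      # append only the missing tail
    elif offset == 0:
        shutil.copyfileobj(resp, out)      # fresh download, nothing to resume
    else:
        raise RuntimeError("server ignored Range; refusing to append")
```

The FTP side is analogous: ftplib's retrbinary() takes a rest= offset, which it sends to the server as a REST command before the RETR.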
If you use wget a lot, have you ever asked yourself why --continue isn't on by default? Surely it's better to resume an interrupted download than to restart it? If you look up the man page, it has this healthy reminder for you:
-c, --continue
Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program. For instance:
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
If there is a file named ls-lR.Z in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the server to continue the retrieval from an offset equal to the length of the local file.
Note that you don’t need to specify this option if you just want the current invocation of Wget to retry downloading a file should the connection be lost midway through. This is the default behavior. -c only affects resumption of downloads started prior to this invocation of Wget, and whose local files are still sitting around.
Without -c, the previous example would just download the remote file to ls-lR.Z.1, leaving the truncated ls-lR.Z file alone.
Beginning with Wget 1.7, if you use -c on a non-empty file, and it turns out that the server does not support continued downloading, Wget will refuse to start the download from scratch, which would effectively ruin existing contents. If you really want the download to start from scratch, remove the file.
Also beginning with Wget 1.7, if you use -c on a file which is of equal size as the one on the server, Wget will refuse to download the file and print an explanatory message. The same happens when the file is smaller on the server than locally (presumably because it was changed on the server since your last download attempt)---because "continuing" is not meaningful, no download occurs.
On the other side of the coin, while using -c, any file that’s bigger on the server than locally will be considered an incomplete download and only "(length(remote) - length(local))" bytes will be downloaded and tacked onto the end of the local file. This behavior can be desirable in certain cases---for instance, you can use wget -c to download just the new portion that’s been appended to a data collection or log file.
However, if the file is bigger on the server because it’s been changed, as opposed to just appended to, you’ll end up with a garbled file. Wget has no way of verifying that the local file is really a valid prefix of the remote file. You need to be especially careful of this when using -c in conjunction with -r, since every file will be considered as an "incomplete download" candidate.
Another instance where you’ll get a garbled file if you try to use -c is if you have a lame HTTP proxy that inserts a "transfer interrupted" string into the local file. In the future a "rollback" option may be added to deal with this case.
I pasted the whole thing here because it nicely summarizes the many reasons why "resume by default" is not safe.
As a matter of fact, that's not all. wget doesn't even know if the local file with the same name is the same file that's on the server. And even if it is, the first attempt at downloading it presumably didn't succeed, and that failure, unlikely as it may be, could have corrupted the local file. So even if you download the rest of it, you won't be able to use it anyway.
What can we do about this? To be sure that a) it's the same file and b) it's uncorrupted, we would have to download the whole thing. That is, for obvious reasons, not desirable. Instead, I propose re-downloading the last portion of the file and using it as a checksum. The fetcher in spiderfetch uses the last 10 KB of the local file to decide whether the resume should proceed. If the last 10 KB of the local file doesn't agree with the same 10 KB of the remote file, the fetcher exits with a "checksum" error.
The main benefit of this method is to verify that it's the same file. Clearly, it can still fail, but I imagine that with most file formats 10 KB is enough to detect a divergence between two files.
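For what it's worth, here is a rough sketch of the idea (not spiderfetch's actual code): grab the same byte range as the local file's last 10 KB from the server with a Range request, and compare the two byte for byte before resuming.

```python
import os
import urllib.request

TAIL_SIZE = 10 * 1024  # how much overlap to re-download and compare

def tail_matches(url, local):
    """Return True if the tail of the local file equals the same byte
    range of the remote file, i.e. resuming looks safe."""
    size = os.path.getsize(local)
    if size == 0:
        return True                      # nothing to compare against

    start = max(size - TAIL_SIZE, 0)
    with open(local, "rb") as f:
        f.seek(start)
        local_tail = f.read()

    # Fetch exactly the same byte range from the server.
    req = urllib.request.Request(
        url, headers={"Range": f"bytes={start}-{size - 1}"}
    )
    with urllib.request.urlopen(req) as resp:
        remote_tail = resp.read()

    return remote_tail == local_tail
```

If the comparison fails, the sensible thing to do is what the fetcher does: refuse to resume and report a checksum error, rather than append to a file it cannot trust.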