Add 'spidering' of the links on a web page, to download
all files on a website. To keep this simple, just
download the files present in the main directory and
below, and ignore links to other servers.
Added a recursive download tool in v1.32. You give a
starting URL, choose the file types you want, and select the
depth of the recursion. It does not find JavaScript links
yet, though.
Try http://www.thehun.net/index.html with
"avi-wmv-mpg-mov-rm-ram-mpeg" and a depth of 1 and you'll
get lots of pr0n for example ;)
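The tool's own source isn't shown in this thread, but the behavior described above (follow links from a starting URL, keep only chosen file extensions, stay on the same server, stop at a given recursion depth, and miss JavaScript-generated links) can be sketched roughly like this in Python. All names here (`crawl`, `wanted_links`, `WANTED`) are hypothetical, not the tool's actual code:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

# File extensions to download, matching the "avi-wmv-mpg-..." filter above.
WANTED = {".avi", ".wmv", ".mpg", ".mov", ".rm", ".ram", ".mpeg"}

class LinkParser(HTMLParser):
    """Collect href/src attributes from static HTML. Like the tool,
    this cannot see links that are built by JavaScript."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def wanted_links(base_url, html):
    """Split the links on a page into (media files, pages to recurse into),
    keeping only absolute URLs on the same host as base_url."""
    parser = LinkParser()
    parser.feed(html)
    host = urlparse(base_url).netloc
    media, pages = [], []
    for link in parser.links:
        url = urljoin(base_url, link)
        if urlparse(url).netloc != host:
            continue  # ignore links to other servers
        if any(url.lower().endswith(ext) for ext in WANTED):
            media.append(url)
        else:
            pages.append(url)
    return media, pages

def crawl(url, depth, fetch):
    """Recurse up to `depth` levels below the starting page.
    `fetch(url)` returns the page's HTML; the depth bound is the
    only loop guard in this sketch (no visited-set)."""
    media, pages = wanted_links(url, fetch(url))
    if depth > 0:
        for page in pages:
            media.extend(crawl(page, depth - 1, fetch))
    return media
```

In a real run, `fetch` would wrap something like `urllib.request.urlopen`; passing it in as a parameter just keeps the recursion logic separate from the networking.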