The recursion depth defines how deep ItSucks should crawl through linked web sites. Think about a site structure like this:
                          _ site3.html
                         /
             _ site2.html
            /
   _ site1.html
  /           \_ yellow.png
index.html
  \_ background.png
If you set the recursion depth to 0, you will only get the index.html.
With a value of 1, you will get index.html, site1.html and background.png.
With a value of 2, you will get index.html, site1.html, background.png, site2.html and yellow.png.
With a value of 3, you will get index.html, site1.html, background.png, site2.html, yellow.png and site3.html.
When set to -1, the recursion depth is unlimited.
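
To make the rule concrete, here is a minimal sketch in Java (hypothetical code, not ItSucks' actual implementation; the class and method names are made up): every link found on a page at depth n is queued with depth n + 1, and links are no longer followed once the configured recursion depth is reached.

    import java.util.ArrayDeque;
    import java.util.Collections;
    import java.util.Deque;
    import java.util.List;

    // Hypothetical sketch of a depth-limited crawl; not ItSucks' actual code.
    // A maximum depth of -1 means unlimited.
    public class DepthLimitedCrawl {

        static class Link {
            final String url;
            final int depth;
            Link(String url, int depth) { this.url = url; this.depth = depth; }
        }

        static void crawl(String startUrl, int maxDepth) {
            Deque<Link> open = new ArrayDeque<Link>();
            open.add(new Link(startUrl, 0));

            while (!open.isEmpty()) {
                Link current = open.poll();
                List<String> foundLinks = download(current.url); // fetch the page, collect its links

                // Stop following links once the recursion depth limit is reached.
                if (maxDepth == -1 || current.depth < maxDepth) {
                    for (String url : foundLinks) {
                        open.add(new Link(url, current.depth + 1));
                    }
                }
            }
        }

        // Placeholder: a real crawler would download the URL and parse its links.
        static List<String> download(String url) {
            return Collections.emptyList();
        }

        public static void main(String[] args) {
            // With a recursion depth of 1, the example site above yields
            // index.html, site1.html and background.png.
            crawl("http://www.example.com/index.html", 1);
        }
    }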
Defines a time limit. If the time limit is reached, no more links are added to the "open" list. After all links in the "open" list are finished, the download ends.
Defines a maximum number of links (URLs). When the limit is reached, no more links are added to the "open" list. After all links in the "open" list are finished, the download ends.
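
Both the time limit and the link limit work the same way: they only stop new links from entering the "open" list, while links that are already queued are still downloaded. A minimal sketch of this gating (hypothetical Java, not ItSucks' actual code; names like OpenListLimits and offer are made up):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical sketch: the time limit and the maximum link count only
    // gate the entry of new links into the "open" list.
    public class OpenListLimits {

        private final Deque<String> open = new ArrayDeque<String>();
        private final Instant deadline;
        private final int maxLinks;
        private int acceptedLinks = 0;

        OpenListLimits(Duration timeLimit, int maxLinks) {
            this.deadline = Instant.now().plus(timeLimit);
            this.maxLinks = maxLinks;
        }

        // A newly discovered link is only queued while both limits hold;
        // links already in the "open" list are still finished afterwards.
        void offer(String url) {
            boolean timeLeft = Instant.now().isBefore(deadline);
            boolean linksLeft = acceptedLinks < maxLinks;
            if (timeLeft && linksLeft) {
                open.add(url);
                acceptedLinks++;
            }
        }

        public static void main(String[] args) {
            // Example: stop accepting new links after 30 minutes or 1000 links.
            OpenListLimits limits = new OpenListLimits(Duration.ofMinutes(30), 1000);
            limits.offer("http://www.example.com/site1.html");
        }
    }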
Defines a prefix for the URL. When set, only URLs which begin with the prefix are accepted. This can be handy if only a specific directory should be downloaded. Only a plain string is allowed, no regular expressions.
Example: http://www.example.com/section1/
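
A minimal sketch of the check, assuming a plain string comparison with startsWith (the prefix and URLs below are only examples, not ItSucks' internal code):

    // Hypothetical sketch of the URL prefix check: a plain string comparison,
    // no regular expressions involved.
    public class UrlPrefixFilter {

        static boolean accept(String url, String prefix) {
            return url.startsWith(prefix);
        }

        public static void main(String[] args) {
            String prefix = "http://www.example.com/section1/";

            // Accepted: the URL lies below the prefix directory.
            System.out.println(accept("http://www.example.com/section1/page.html", prefix)); // true

            // Rejected: the URL is outside the prefix directory.
            System.out.println(accept("http://www.example.com/section2/page.html", prefix)); // false
        }
    }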
A host filter can be set if ItSucks should follow only links whose hostname matches a regular expression. To do so, remove the ".*" entry from the "Allowed Hostname" box and add something like ".*google.de". In this case ItSucks will only retrieve files from hosts like "images.google.de", "google.de" or "www.google.de". Be careful not to remove all entries from the filter list; if the list is empty, no hostname is allowed.
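
As an illustration only (hypothetical Java, not ItSucks' internal code), matching each URL's hostname against the list of allowed-hostname patterns could look like this:

    import java.net.URI;
    import java.util.Arrays;
    import java.util.List;
    import java.util.regex.Pattern;

    // Hypothetical sketch of the hostname filter: a URL is only followed if its
    // host matches at least one "Allowed Hostname" regular expression.
    // An empty pattern list matches nothing, so no hostname would be allowed.
    public class HostnameFilter {

        static boolean accept(String url, List<Pattern> allowedHostnames) throws Exception {
            String host = new URI(url).getHost();
            for (Pattern pattern : allowedHostnames) {
                if (pattern.matcher(host).matches()) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) throws Exception {
            List<Pattern> allowedHostnames = Arrays.asList(Pattern.compile(".*google.de"));

            System.out.println(accept("http://images.google.de/images", allowedHostnames));   // true
            System.out.println(accept("http://www.example.com/index.html", allowedHostnames)); // false
        }
    }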
This filter controls which file types are saved to disk. Only files matching one of the regular expressions are saved. For example, to accept only JPEG files, remove the ".*" entry from the list and add ".*jpg$". If all entries are removed from the filter list, no files will be saved to disk.
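
A minimal sketch of this filter (hypothetical Java, not ItSucks' actual code, and it assumes the pattern is matched against the full URL): a downloaded file is written to disk only if it matches at least one configured pattern, and an empty pattern list saves nothing.

    import java.util.Arrays;
    import java.util.List;
    import java.util.regex.Pattern;

    // Hypothetical sketch of the "save to disk" filter: a downloaded file is
    // written to disk only if its URL matches at least one configured
    // regular expression. An empty pattern list saves nothing.
    public class SaveToDiskFilter {

        static boolean saveToDisk(String url, List<Pattern> saveFilters) {
            for (Pattern pattern : saveFilters) {
                if (pattern.matcher(url).matches()) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            // Accept only JPEG files, as in the ".*jpg$" example above.
            List<Pattern> saveFilters = Arrays.asList(Pattern.compile(".*jpg$"));

            System.out.println(saveToDisk("http://www.example.com/images/photo.jpg", saveFilters)); // true
            System.out.println(saveToDisk("http://www.example.com/index.html", saveFilters));       // false
        }
    }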