From: Jeff Breidenbach <jeff@ja...> - 2006-08-09 08:37:53
> Any truth to this [vile blasphemy]?
Hard to imagine how server-side scripting could fool HtDig. Maybe if the
URL is randomly generated gibberish for every page visit, the HtDig
crawler won't know whether it has already visited a page or not. The ISP
might also be trying to save a buck: crawling files directly on the
filesystem uses perhaps less bandwidth and definitely less CPU than
hitting the web server (which a dynamic website requires). Anyway, HtDig
indexes websites that use server-side scripting all the time.
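To illustrate the "random gibberish URL" point, here is a rough sketch
(not HtDig's actual code, and the site/function names are made up): a
crawler typically remembers the exact URLs it has already fetched, so a
page that hands out a fresh random link on every visit always looks new
and the crawler never realizes it is revisiting the same content.

    # Illustrative sketch only -- not HtDig's implementation.
    import random, string

    visited = set()

    def already_seen(url):
        # Deduplicate by exact URL string, as a simple crawler might.
        if url in visited:
            return True
        visited.add(url)
        return False

    def dynamic_link():
        # Hypothetical site that embeds random gibberish in every link.
        token = ''.join(random.choice(string.ascii_lowercase) for _ in range(12))
        return "http://example.com/page?session=" + token

    # Every generated URL looks new, so the check never triggers:
    print(already_seen(dynamic_link()))   # False
    print(already_seen(dynamic_link()))   # False again -- same page, "new" URL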
No way HtDig 3.2 would make a difference in this respect. That said,
HtDig 3.1 just got, um, upgraded out of the most recent Debian GNU/Linux
distribution, so its days may be numbered for other reasons. Older
software tends to accumulate problems over time as the environment it
runs in changes - this is commonly referred to as 'bit rot'.