Johnny L. Wales wrote:
> I was looking around the sourceforge page and noticed that there's an open
> task to write a robots.txt file which will prevent a few pages from being
> indexed. Maybe instead, we should include tags like this on pages we don't
> want indexed:
> <META NAME="ROBOTS" CONTENT="NOINDEX">
> And, if you want the robot to stop following links on this page, you add
> this to it as well:
> <META NAME="ROBOTS" CONTENT="NOFOLLOW">
> That should get everything you need to do done, right?
We already use the robots meta tag. The problem is that some robots
ignore these tags, and ignore robots.txt as well. So the only solution
will be to block those robots directly. Ward's wiki uses a timeout; my
first patch was based on $REMOTE_HOST and $HTTP_USER_AGENT.
I had this:
$badrobots = array('gw01.webtop.com',
                   // '22.214.171.124', // HTTrack 2.0x
                   // '126.96.36.199',  // HTTrack 2.0x
                   'lgdx06atm.lg.ehu.es' // reported falsely as Mozilla
                   );
$badagentsre = '/(WebZIP)|(Teleport)/';
// good robots: FAST-WebCrawler, TridentSpider3
This should be an optional configuration item.
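To make the idea concrete, here is a minimal sketch of how such an optional check could be wired in. The variable names follow the patch above; the helper function name `is_bad_robot` and the 403 response are my assumptions, not the actual patch. ($REMOTE_HOST and $HTTP_USER_AGENT are assumed to be available as plain globals, as was usual in PHP at the time.)

```php
<?php
// Blocklist configuration (hosts taken from the patch above).
$badrobots = array('gw01.webtop.com',
                   'lgdx06atm.lg.ehu.es'); // reported falsely as Mozilla
$badagentsre = '/(WebZIP)|(Teleport)/';

// Hypothetical helper: block by exact remote host name or by a
// regular-expression match against the user-agent string.
function is_bad_robot($remote_host, $user_agent,
                      $badrobots, $badagentsre) {
    if (in_array($remote_host, $badrobots))
        return true;
    if (preg_match($badagentsre, $user_agent))
        return true;
    return false;
}

// Deny the request early if the client matches either list.
if (isset($REMOTE_HOST, $HTTP_USER_AGENT)
    && is_bad_robot($REMOTE_HOST, $HTTP_USER_AGENT,
                    $badrobots, $badagentsre)) {
    header('HTTP/1.0 403 Forbidden');
    exit('Robots are not allowed here.');
}
?>
```

Keeping both lists in the config file (rather than hard-coded) would let each site decide whether to enable the check at all.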