The archive-crawler project is building Heritrix: a flexible, extensible, robust, and scalable web crawler capable of fetching, archiving, and analyzing the full diversity and breadth of internet-accessible content.
Features
- deeply and thoroughly harvests website content
- works on any Java platform (Linux recommended)
- stores content in the ARC or ISO-standard WARC aggregate/transcript formats
- web interface for operator control and monitoring of crawls
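The WARC format mentioned above (ISO 28500) stores each crawled resource as a record: a version line, a block of "Name: value" headers, a blank line, then the payload. As a rough illustration of that layout (this helper is a sketch for this page, not part of the Heritrix API), the header block of a single record can be parsed like so:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch, not Heritrix code: parse the header block of one
// WARC record (ISO 28500) into an ordered name -> value map.
public class WarcHeaderSketch {
    public static Map<String, String> parseHeaders(String record) {
        Map<String, String> headers = new LinkedHashMap<>();
        String[] lines = record.split("\r\n");
        // The first line is the version marker, e.g. "WARC/1.0".
        headers.put("version", lines[0]);
        for (int i = 1; i < lines.length; i++) {
            String line = lines[i];
            if (line.isEmpty()) break;   // blank line ends the header block
            int colon = line.indexOf(':');
            if (colon < 0) continue;     // skip malformed lines in this sketch
            headers.put(line.substring(0, colon).trim(),
                        line.substring(colon + 1).trim());
        }
        return headers;
    }

    public static void main(String[] args) {
        String record = "WARC/1.0\r\n"
                + "WARC-Type: response\r\n"
                + "WARC-Target-URI: http://example.org/\r\n"
                + "Content-Length: 0\r\n"
                + "\r\n";
        Map<String, String> h = parseHeaders(record);
        System.out.println(h.get("WARC-Type") + " " + h.get("WARC-Target-URI"));
    }
}
```

A real reader must also honor `Content-Length` to locate the payload and the next record; this sketch only shows the header layout.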
License
GNU Library or Lesser General Public License version 2.0 (LGPLv2), Apache License V2.0