Extract is a Web Information Management System that allows users to store and search many kinds of structured data in a database (database records, Samba directories, and files), classified into categories as in file system browsers.
InfoCrawler allows you to crawl and index various types of documents, accessing data from various resources: intranets, public websites, and local or remote file systems. For product information, please see our website at http://www.infocrawler.org/
GUI frontend for the full-text search engine namazu (www.namazu.org).
Neko is a GUI for namazu, a full-text search engine (www.namazu.org).
ngetsuite is a collection of Ruby scripts for retrieving binaries from Usenet. It uses nget and yydecode for fetching and decoding, stores the article headers in a MySQL database, and comes with an eruby web interface for remote control.
Photo archiving and categorising software using PHP, MySQL and AJAX. Also functions as a public photo gallery website.
phpScoutCamp is a PHP script to manage a database of scout camps.
A multilingual PHP job search engine using the Indeed API.
Search the web for videos, audio, eBooks, torrents, and much more.
What is WebCrunch? WebCrunch provides a powerful web server indexing and search service that lets you find a file among millions of files located on public servers around the internet. The search engine is powered by a database holding information about the files each web server hosts. This information is gathered by an intelligent web crawler that runs every 2 to 4 days, keeping the database clean and up to date with both existing contents and new entries for each web server address submitted by members.