An open source search engine with a RESTful API and crawlers
OpenSearchServer is a powerful, enterprise-class search engine. Using the web user interface, the crawlers (web, file, database, etc.) and the client libraries (REST API, Ruby, Rails, Node.js, PHP, Perl), you can quickly and easily integrate advanced full-text search capabilities into your application: full-text search with basic semantics, join queries, boolean queries, facets and filters, document (PDF, Office, etc.) indexing, web scraping, etc. OpenSearchServer runs on Windows and Linux/Unix/BSD.
FileSearch is a multi-threaded document searcher. No indexes need to be updated and no background service is required. The more drives you have, the faster searches run, thanks to its multi-threading technique.
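The per-drive threading idea can be sketched as follows (a hypothetical Python illustration, not FileSearch's own code): each search root, such as a drive, gets its own thread, and the results are merged.

```python
# Hypothetical sketch of per-drive parallel filename search (not FileSearch's code).
import os
from concurrent.futures import ThreadPoolExecutor

def search_root(root, needle):
    """Walk one directory tree and collect paths whose file name contains needle."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if needle.lower() in name.lower():
                hits.append(os.path.join(dirpath, name))
    return hits

def parallel_search(roots, needle):
    """Search each root (e.g. each drive) on its own thread and merge the results."""
    with ThreadPoolExecutor(max_workers=len(roots)) as pool:
        results = pool.map(search_root, roots, [needle] * len(roots))
    return [path for hits in results for path in hits]
```

Because each tree walk is I/O-bound, running one walker per drive lets the drives seek concurrently, which is where the claimed speedup comes from.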
Desktop application for proxy searching
Do you want to find free proxies, but it is too hard to do manually? Just try Burd's Proxy Searcher. It looks for proxy lists on the Internet using public search engines, checks whether those proxies work from your network segment, and gathers additional information about them. If you want to be anonymous and don't want to spend much time on manual searching, this program was developed especially for you. Tags: free proxy, proxy list, proxies, proxy for free, proxy providers
Search Google with precision.
This little tool helps you with your Google searches. It uses Google search operators to search precisely for the terms as you enter them in the program. It works with almost everything Google provides in the way of search operators. You can search for files, for sites/domains, in URLs, in titles, in text… To provide the best results, some fields disable others, since some Google operators aren't meant to be mixed with others. It launches the search results in your default browser, so you can keep working in the same browser.
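Composing a query from operator fields can be sketched like this (a hypothetical Python illustration; the field names are mine, not the tool's actual UI):

```python
# Hypothetical sketch: combine plain terms with Google search operators
# such as site:, filetype:, intitle: and inurl: into one query string.
def build_query(terms, site=None, filetype=None, intitle=None, inurl=None):
    """Return a query string that uses Google search operators."""
    parts = [terms]
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    if intitle:
        parts.append(f"intitle:{intitle}")
    if inurl:
        parts.append(f"inurl:{inurl}")
    return " ".join(parts)
```

A tool like this would then URL-encode the query and hand it to the default browser, e.g. via Python's `webbrowser.open()`.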
Added features from the community forums
This is an edited version 1.3.5 of Sphider, released under the GPLv3. Sphider 1.3.5 is required first: this package ONLY updates an original install of Sphider, which can be found at http://www.sphider.eu/download.php. What got me to work on this? http://www.sphider.eu/forum/read.php?3,6156 ("thanks, inmotion"). Sphider-CV allows you to index using ".sh" scripts while retaining the default features. This version does not work on Windows servers because it needs to run .sh files; it would take some modifications to run .bat files instead. I will also upload a Windows version in the next few days.
Some handy mini tools for the Internet
Some mini Internet tools: a web spider to download site pages, images, and SWF files, and a mini downloader to download multiple files from a website.
Perstem is a Persian (Farsi) stemmer, morphological analyzer, transliterator, and partial part-of-speech tagger. Inflectional morphemes are separated or removed from their stems. Perstem can also tokenize and transliterate between various character set encodings and romanizations.
eZimDMS, based on DocMgr, is a Document Management System (DMS) that provides more commercial features, such as an upload progress bar, enhanced security, and more. Our goal is to provide a full and complete document management system.
It reads IPs or links to gather more links.
This will read IPs or links to collect more links, which are stored in a generated MySQL database. Each input file creates a MySQL database with two tables: Ranges and Links.
=====================
Load Input:
1. If input files contain lines of the form 188.8.131.52/1, use Load IP/Mask or IPs.
2. If input files contain lines of the form 184.108.40.206, use Load IP/Mask or IPs.
3. If input files contain lines of the form 220.127.116.11-18.104.22.168, use Load Range. Spaces are allowed before the dash, so a line can be 22.214.171.124 - 126.96.36.199.
Load Link:
1. Lines are all links, which can begin with or without http:// or https://:
http://www.abcdefg012345.com
https://www.abcdefg012345.com
www.abcdefg012345.com
Bad input: combinations of IPs, links, and ranges, e.g.:
188.8.131.52/1
www.abcdefg012345.com
Setting up the Data Source Name is the same as in version 1, which is documented at https://sourceforge.net/p/jiprangescanner/wiki/Home/
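The three input forms above can be sketched as a small line classifier (a hypothetical Python illustration; the tool itself uses its own loaders and MySQL, and these regexes are mine):

```python
# Hypothetical sketch: classify one input line as IP/mask, IP range, or link,
# mirroring the "Load IP/Mask or IPs", "Load Range", and "Load Link" cases.
import re

IP = r"\d{1,3}(?:\.\d{1,3}){3}"

def classify_line(line):
    """Return 'mask', 'range', 'link', or None for one input line."""
    line = line.strip()
    if re.fullmatch(IP + r"/\d{1,2}", line) or re.fullmatch(IP, line):
        return "mask"   # single IPs and IP/mask both go to "Load IP/Mask or IPs"
    if re.fullmatch(IP + r"\s*-\s*" + IP, line):
        return "range"  # "Load Range"; spaces around the dash are allowed
    if re.fullmatch(r"(?:https?://)?[\w.-]+\.\w+(?:/\S*)?", line):
        return "link"   # "Load Link"; the http:// or https:// prefix is optional
    return None         # anything else is bad input
```

A loader could reject a file as "bad input" whenever its lines classify into more than one category.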
This script allows you to highlight, extract and share content from a web page simply by selecting it with the mouse.
LinkDB is a link database: a centralised system to manage links, log who accessed a link, how, and when, and serve links to resources that change often or that are too long to remember.
Centralized syslog-ng monitoring frontend written in PHP
Web interface to monitor many syslog-ng Linux hosts on a central log server. Powered by SphinxSE for ultra-fast full-text search queries. Tested with a huge number of entries (over 80,000,000) with incredibly good performance. Easy to set up.
Multi-function web search tool. It does reverse phone lookups, IP and domain lookups, ZIP and area code lookups, and Google searches. Windows-based program. Use the installer, or run the standalone program without having to install anything.
PhPubli is a tool written in PHP/MySQL intended to organize and advertise bibliographical references, typically the publications of a research institute. Features include advanced search criteria and exporting of the results in a variety of formats.
An application designed for those who want to keep information stored and don't want to use a big app for a basic task like this.
A collection of my software utilities
Referer spam (also known as log spam or referer bombing)
Required: PHP CLI and PHP cURL. Referer spam (also known as log spam or referer bombing) is a kind of spamdexing (spamming aimed at search engines). The technique involves making repeated web site requests using a fake referer URL that points to the site the spammer wishes to advertise. Sites that publish their access logs, including referer statistics, will then inadvertently link back to the spammer's site. These links are indexed by search engines as they crawl the access logs. This benefits the spammer with a free link, which improves the spammer's search engine ranking under the link-counting algorithms that search engines use.
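The mechanism described above rests on one simple fact: the Referer header is entirely client-supplied. A minimal sketch (in Python rather than the project's PHP; the URLs are placeholders) of building a request with an arbitrary Referer:

```python
# Illustration only: the Referer header is set by the client, so a request can
# claim any referring page. This is what makes published access logs spammable.
import urllib.request

def request_with_referer(target_url, referer_url):
    """Build a GET request object that carries an arbitrary Referer header."""
    return urllib.request.Request(target_url, headers={"Referer": referer_url})

req = request_with_referer("http://example.com/page", "http://spam.example.net/")
```

A log-publishing site that records and displays this header verbatim ends up emitting a crawlable link to `http://spam.example.net/`, which is exactly the free backlink the paragraph describes.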
A threaded C application that searches torrent trackers/indexers for .torrent files and sorts the results according to user-defined criteria. Uses GLib 2.0 and libcurl4.
A terminal program for downloading torrents from The Pirate Bay
Torrtux is a terminal-based program, written in Perl, for downloading torrents from The Pirate Bay. If you live in a country where TPB is blocked (UK, Finland, Belgium, etc.), you can set a proxy in the config file. With it you can get the magnet link of a torrent, copy it to the clipboard, and open your torrent manager, all from your terminal! It also lets you see the details of a torrent (author, date, type, size, etc.), just like being on the TPB site, and it retrieves subtitles from www.opensubtitles.org. It fetches the source code of the TPB page and parses it with regular expressions and the HTML-Parser library. In the config file ~/.torrtuxrc, you can choose your display, subtitle, and comment preferences, your torrent manager, and a proxy if needed. Thanks for reporting any bugs you find!
The UindexWeb search engine is an open source web spider; the main program is written in Delphi 7. Lucene.Net is the default full-text index engine. The latest version can be retrieved from http://www.opencpu.com/.
Fetches topics with new posts from ZetaBoards forums and does something with the URLs, like opening them in a browser. Configurations can be stored and manipulated for quicker fetching. Development, translations, bug reports, etc. are handled at Launchpad: https://launchpad.net/zb-fetcher SourceForge is used to host released files.
Lists all URLs present in a requested URL in absolute format
This PHP program extracts all URLs present on the requested page, as absolute paths.
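The core idea, collecting every `href` and resolving it against the page URL, can be sketched like this (a comparable Python illustration; the project itself is PHP):

```python
# Hypothetical sketch: gather every <a href=...> from an HTML page and
# resolve relative links against the page URL to get absolute URLs.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect every <a href=...> as an absolute URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # urljoin leaves already-absolute URLs unchanged
                    self.links.append(urljoin(self.base_url, value))

def extract_absolute_urls(html, base_url):
    collector = LinkCollector(base_url)
    collector.feed(html)
    return collector.links
```

Resolving against the base URL is the step that turns relative paths like `/about` into full absolute URLs a crawler or user can follow directly.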
Indexes biological data (GenBank sheets, UniProt, ...) in a Solr indexer, with index shard support, and provides a query interface. The project goal is to create a virtual image with the indexer and a web interface to query and visualize biological data.