LogCrawler is an ANT task for automatic testing of web applications. Using an HTTP crawler, it visits all pages of a website and checks the server logfiles for errors. Use it as a "smoketest" with a CI system such as CruiseControl.
WebNews Crawler is a specific web crawler (spider, fetcher) designed to acquire and clean news articles from RSS and HTML pages. It can perform site-specific extraction to pull out only the actual news content, filtering out advertising and other cruft.
Crawl-By-Example runs a crawl that classifies the processed pages by subject and finds the best pages according to examples provided by the operator. Crawl-By-Example is a plugin to the Heritrix crawler and was developed as part of the Google Summer of Code 2006 (GSoC06) program.
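For illustration, here is a minimal sketch of the smoketest idea in plain Java: fetch a page, then scan the server log for error lines. The URL, log path, and "ERROR" marker are placeholder assumptions, not LogCrawler's actual configuration.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Illustrative smoketest in the spirit of LogCrawler (not its actual API):
// fetch a page, then scan the server log for error entries.
public class SmokeTest {
    public static void main(String[] args) throws IOException {
        // Hypothetical target URL and log path; adjust for your deployment.
        URL root = new URL("http://localhost:8080/");
        Path serverLog = Path.of("/var/log/myapp/server.log");

        HttpURLConnection conn = (HttpURLConnection) root.openConnection();
        int status = conn.getResponseCode();
        if (status >= 400) {
            throw new IllegalStateException("HTTP error " + status + " for " + root);
        }

        // Fail the build if the server log contains error lines.
        List<String> errors = Files.readAllLines(serverLog).stream()
                .filter(line -> line.contains("ERROR"))
                .toList();
        if (!errors.isEmpty()) {
            throw new IllegalStateException(errors.size() + " error(s) found in server log");
        }
        System.out.println("Smoketest passed: " + root);
    }
}
```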
J-Obey is a Java library/package that gives people writing their own crawlers a stable robots.txt parser. If you are writing a web crawler of any sort, you can use J-Obey to take the hassle out of writing a robots.txt parser/interpreter.
A configurable knowledge management framework. It works out of the box, but it is meant mainly as a framework for building complex information retrieval and analysis systems. Its three major components, the Crawler, Analyzer, and Indexer, can also be used separately, as sketched below.
SmartCrawler is a Java-based, fully configurable, multithreaded, and extensible crawler that can fetch and analyze the contents of a web site using dynamically pluggable filters.
WebLoupe is a Java-based tool for analysis, interactive visualization (sitemaps), and exploration of the information architecture and specific properties of local or publicly accessible websites. It is based on web spider (web crawler) technology.
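To give a sense of the work such a parser saves, here is a bare-bones sketch of the basic robots exclusion protocol in plain Java. It is not J-Obey's API: it handles only "User-agent: *" groups and prefix Disallow rules, whereas a real parser also covers the protocol's many edge cases.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

// A minimal robots.txt check, sketching the job a library like J-Obey
// takes off your hands (this is NOT J-Obey's actual API).
public class RobotsCheck {
    public static boolean isAllowed(String host, String path) throws IOException {
        List<String> disallowed = new ArrayList<>();
        boolean appliesToUs = false;
        URL robots = new URL("http://" + host + "/robots.txt");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(robots.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                line = line.trim();
                // Track which user-agent group the rules belong to.
                if (line.toLowerCase().startsWith("user-agent:")) {
                    appliesToUs = line.substring(11).trim().equals("*");
                } else if (appliesToUs && line.toLowerCase().startsWith("disallow:")) {
                    String rule = line.substring(9).trim();
                    if (!rule.isEmpty()) disallowed.add(rule);
                }
            }
        }
        // In the basic protocol, Disallow rules are path prefixes.
        return disallowed.stream().noneMatch(path::startsWith);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(isAllowed("example.com", "/private/page.html"));
    }
}
```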
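As a sketch of how such a split might look, the interfaces below model a crawl-analyze-index pipeline whose pieces stand alone. The types are illustrative only, not the framework's actual API.

```java
import java.util.List;

// One way to model the "use components separately" idea: three small
// interfaces that compose into a pipeline but have no dependency on each other.
public class PipelineDemo {
    interface Crawler  { List<String> fetch(String seedUrl); }
    interface Analyzer { String analyze(String document); }
    interface Indexer  { void index(String url, String analysis); }

    public static void main(String[] args) {
        Crawler crawler = seed -> List.of("<html>doc from " + seed + "</html>");
        Analyzer analyzer = doc -> doc.replaceAll("<[^>]+>", "").trim();
        Indexer indexer = (url, text) -> System.out.println(url + " -> " + text);

        // Wired together as a pipeline, though each piece stands alone.
        for (String doc : crawler.fetch("http://example.com/")) {
            indexer.index("http://example.com/", analyzer.analyze(doc));
        }
    }
}
```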
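One plausible shape for dynamically pluggable filters, sketched in plain Java (this is not SmartCrawler's actual API): each plugged-in filter votes on whether a discovered URL should be fetched.

```java
import java.net.URI;
import java.util.List;
import java.util.function.Predicate;

// Illustrative filter chain for a crawler: a URL is fetched only if
// every plugged-in filter accepts it.
public class FilterChainDemo {
    interface UrlFilter extends Predicate<URI> {}

    static boolean accept(List<UrlFilter> filters, URI uri) {
        return filters.stream().allMatch(f -> f.test(uri));
    }

    public static void main(String[] args) {
        UrlFilter sameHost = uri -> "example.com".equals(uri.getHost());
        UrlFilter htmlOnly = uri -> uri.getPath().endsWith(".html");
        List<UrlFilter> chain = List.of(sameHost, htmlOnly);

        System.out.println(accept(chain, URI.create("http://example.com/a.html"))); // true
        System.out.println(accept(chain, URI.create("http://other.com/a.html")));   // false
    }
}
```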
WebSPHINX is a web crawler (robot, spider) Java class library, originally developed by Robert Miller of Carnegie Mellon University. It is multithreaded and offers tolerant HTML parsing, URL filtering, page classification, pattern matching, mirroring, and more.
This project aims to be a base for specialized image crawlers. It can download images from a specific website and can be extended to crawl any website. All processing is multithreaded, and it accepts filters.
Spider is a web crawler written in Java. Based on a regular expression string, the spider parses the web for pages matching that expression and stores them in a MySQL database.
studiMaps is a web-based application for the visualization and analysis of social networks. It consists of two software components: a web crawler for collecting data and the web-based application for visualization.
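A minimal subclass in the style of the WebSPHINX class library might look like the sketch below; the method names follow the library's published interface as best recalled, and should be checked against its javadoc.

```java
import websphinx.Crawler;
import websphinx.Link;
import websphinx.Page;

// A minimal WebSPHINX crawler: subclass Crawler, decide which links to
// follow in shouldVisit, and process each fetched page in visit.
public class TitleCrawler extends Crawler {
    @Override
    public boolean shouldVisit(Link link) {
        // Stay on one host; everything else is skipped.
        return "example.com".equals(link.getURL().getHost());
    }

    @Override
    public void visit(Page page) {
        System.out.println(page.getURL());
    }

    public static void main(String[] args) throws Exception {
        TitleCrawler crawler = new TitleCrawler();
        crawler.setRoot(new Link("http://example.com/"));
        crawler.run(); // crawls synchronously from the root
    }
}
```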
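The multithreaded download step might look roughly like the following sketch, built on java.util.concurrent with hypothetical image URLs; it illustrates the pattern rather than the project's actual code.

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative core of a multithreaded image fetcher: a fixed thread pool
// downloads each image URL concurrently and saves it to the working directory.
public class ImageFetcher {
    public static void main(String[] args) {
        List<String> imageUrls = List.of(            // hypothetical URLs
                "http://example.com/images/a.jpg",
                "http://example.com/images/b.jpg");

        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String url : imageUrls) {
            pool.submit(() -> {
                try (InputStream in = new URL(url).openStream()) {
                    Path target = Path.of(url.substring(url.lastIndexOf('/') + 1));
                    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
                    System.out.println("Saved " + target);
                } catch (Exception e) {
                    System.err.println("Failed: " + url + " (" + e.getMessage() + ")");
                }
            });
        }
        pool.shutdown();
    }
}
```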
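The core match-and-store step could look like this sketch; the JDBC URL, credentials, and `pages` table are hypothetical, and a MySQL JDBC driver is assumed to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.regex.Pattern;

// Sketch of the regex-match-then-store step (hypothetical table and
// credentials; not the project's actual schema).
public class MatchAndStore {
    public static void main(String[] args) throws Exception {
        String pageUrl = "http://example.com/";
        String pageBody = "...fetched HTML...";
        Pattern pattern = Pattern.compile("crawler", Pattern.CASE_INSENSITIVE);

        if (pattern.matcher(pageBody).find()) {
            // Store matching pages; table pages(url TEXT, body TEXT) is assumed.
            try (Connection db = DriverManager.getConnection(
                     "jdbc:mysql://localhost/spider", "user", "password");
                 PreparedStatement stmt = db.prepareStatement(
                     "INSERT INTO pages (url, body) VALUES (?, ?)")) {
                stmt.setString(1, pageUrl);
                stmt.setString(2, pageBody);
                stmt.executeUpdate();
            }
        }
    }
}
```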