Nomad is a tiny but efficient search engine and web crawler. It works well for searching within a set of corporate websites on the internet, or within an intranet's HTML documents or knowledge repositories.
Web Textual eXtraction Tools: a C++ parallel web crawler, noun phrase identification, multi-lingual part-of-speech tagging, Tarjan's algorithm, co-relationship mappings...
A VB web crawler, currently under construction, with the goal of crawling and indexing the net, most likely via distributed computing over a network.
Wadsworth is a Java-based web scripting engine. It uses user-defined XML scripts to define its actions. It can be used as a web testing tool, as a web scraper, or to automate any web actions you wish. It can also be invoked and controlled by another...
A Java implementation of a flexible and extensible web spider engine.
Optional modules allow functionality to be added (searching for dead links, testing the performance and scalability of a site, creating a sitemap, etc.).
Arachnid is a Java-based web spider framework. It includes a simple HTML parser object that parses an input stream containing HTML content. Simple web spiders can be created by subclassing Arachnid and adding a few lines of code that are called after each page is parsed.
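That subclass-and-callback style is easy to picture. Below is a minimal, hypothetical Java sketch of the pattern; the class and method names (PageSpider, handlePage) are invented for illustration and are not Arachnid's actual API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of the subclass-and-callback style described above;
// Arachnid's real class and method names may differ.
abstract class PageSpider {
    private final HttpClient http = HttpClient.newHttpClient();
    private static final Pattern LINK = Pattern.compile("href=\"(https?://[^\"]+)\"");

    // The "few lines of code" a subclass adds: called after each page is fetched.
    protected abstract void handlePage(URI url, String html);

    public void crawl(URI start, int maxPages) throws Exception {
        Deque<URI> queue = new ArrayDeque<>();
        Set<URI> seen = new HashSet<>();
        queue.add(start);
        seen.add(start);
        int fetched = 0;
        while (!queue.isEmpty() && fetched++ < maxPages) {
            URI url = queue.poll();
            HttpResponse<String> res = http.send(
                    HttpRequest.newBuilder(url).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            handlePage(url, res.body());
            Matcher m = LINK.matcher(res.body());
            while (m.find()) {
                try {
                    URI link = URI.create(m.group(1));
                    if (seen.add(link)) queue.add(link);
                } catch (IllegalArgumentException ignored) {
                    // Skip hrefs that are not well-formed URIs.
                }
            }
        }
    }
}

public class TitleSpider extends PageSpider {
    private static final Pattern TITLE = Pattern.compile("<title>(.*?)</title>");

    @Override
    protected void handlePage(URI url, String html) {
        Matcher t = TITLE.matcher(html);
        System.out.println(url + " : " + (t.find() ? t.group(1) : "(no title)"));
    }

    public static void main(String[] args) throws Exception {
        new TitleSpider().crawl(URI.create("https://example.com/"), 10);
    }
}
```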
WebSPHINX is a web crawler (robot, spider) Java class library, originally developed by Robert Miller of Carnegie Mellon University. It is multithreaded and offers tolerant HTML parsing, URL filtering and page classification, pattern matching, mirroring, and more.
It is basically a program that can make you a search engine. It includes a web crawler, all of the web site source code (in ASP, soon to be PHP as well), and a MySQL database.
Spider is a web crawler written in Java. Based on a regular expression string, the spider parses the internet for web pages matching the expression and stores them in a MySQL database.
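In outline, that approach looks like the following single-page sketch: fetch a page, test it against a user-supplied regular expression, and insert matches into MySQL. The table name, connection string, and credentials here are illustrative, not taken from the project.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.regex.Pattern;

public class RegexSpiderStep {
    public static void main(String[] args) throws Exception {
        URI url = URI.create("https://example.com/");
        Pattern pattern = Pattern.compile("web\\s+crawler", Pattern.CASE_INSENSITIVE);

        // Fetch one page (a full spider would loop over a frontier of URLs).
        HttpClient http = HttpClient.newHttpClient();
        String html = http.send(HttpRequest.newBuilder(url).GET().build(),
                HttpResponse.BodyHandlers.ofString()).body();

        // Keep the page only if it matches the user-supplied expression.
        if (pattern.matcher(html).find()) {
            // Illustrative JDBC details (requires a MySQL driver on the classpath);
            // the real project's schema and credentials are its own.
            try (Connection db = DriverManager.getConnection(
                         "jdbc:mysql://localhost/spider", "user", "secret");
                 PreparedStatement ins = db.prepareStatement(
                         "INSERT INTO pages (url, body) VALUES (?, ?)")) {
                ins.setString(1, url.toString());
                ins.setString(2, html);
                ins.executeUpdate();
            }
        }
    }
}
```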
ApeSmit is a very simple Python module to create XML sitemaps as defined at http://www.sitemaps.org. ApeSmit doesn't contain any web spider or anything like that; it just writes the data you provide to a file using the proper syntax.
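The sitemaps.org syntax such a module emits is small enough to show. Here is a minimal sketch of the same idea in Java (ApeSmit itself is Python; Java is used for consistency with the other sketches here): write the URLs you provide to a file in the standard format.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class SitemapWriter {
    public static void main(String[] args) throws IOException {
        // The data you provide; no crawling happens here, just like ApeSmit.
        List<String> urls = List.of("https://example.com/", "https://example.com/about");

        StringBuilder xml = new StringBuilder()
                .append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n")
                .append("<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n");
        for (String u : urls) {
            xml.append("  <url>\n")
               .append("    <loc>").append(u).append("</loc>\n")
               .append("    <changefreq>weekly</changefreq>\n")
               .append("  </url>\n");
        }
        xml.append("</urlset>\n");

        Files.writeString(Path.of("sitemap.xml"), xml);
    }
}
```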
studiMaps is a web-based application for visualization and analysis of social networks. It consists of two software components: a web crawler for gathering data and the web-based application for visualization.
We are integrating existing communication systems including Wiki, IRC, Instant Messaging, e-mail, and even static web sites. We write web scrapers and servers for managing events, IRC bots, logs, local names, templates, and groups.
FWebSpider is a web crawler application written in Perl. It performs crawls of chosen sites, featuring a response cache, URL storage, URL exclusion rules, and more. It is developed to function as a local/global site search engine core.
The purpose of SAWS is to facilitate the process of web scraping by 1) providing a pattern specification mechanism on top of normal regular expressions and 2) implementing a common matching algorithm that runs a specified pattern against a given source and reports any matches.
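That pattern-over-regex idea might look like the sketch below, where named placeholders expand to regex fragments and everything else matches literally. The placeholder vocabulary ({number}, {word}, {url}) is invented for illustration and is not SAWS's actual syntax.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PatternLayer {
    // Invented placeholder vocabulary for illustration; SAWS's own syntax may differ.
    private static final Map<String, String> FRAGMENTS = Map.of(
            "{number}", "\\d+",
            "{word}", "\\w+",
            "{url}", "https?://\\S+");

    // Expand placeholders to regex fragments; quote everything else as literal text.
    static Pattern compile(String pat) {
        StringBuilder regex = new StringBuilder();
        int i = 0;
        while (i < pat.length()) {
            int open = pat.indexOf('{', i);
            int close = open < 0 ? -1 : pat.indexOf('}', open);
            if (open < 0 || close < 0) {
                regex.append(Pattern.quote(pat.substring(i)));
                break;
            }
            String token = pat.substring(open, close + 1);
            regex.append(Pattern.quote(pat.substring(i, open)))
                 .append(FRAGMENTS.getOrDefault(token, Pattern.quote(token)));
            i = close + 1;
        }
        return Pattern.compile(regex.toString());
    }

    public static void main(String[] args) {
        Matcher m = compile("Price: ${number} ({word})").matcher("Price: $42 (USD)");
        System.out.println(m.find()); // prints: true
    }
}
```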
Web Crawling, Web Testing, and Web Scraping application
Blackfire Player is a powerful Web Crawling, Web Testing, and Web Scraping application. It provides a nice DSL to crawl HTTP services, assert responses, and extract data from HTML/XML/JSON responses.
Some Blackfire Player use cases:
Crawl a website/API and check expectations -- aka Acceptance Tests;
Scrape a website/API and extract values;
Monitor a website;
Test code with unit test integration (PHPUnit, Behat, Codeception, ...);
Test code behavior from the outside thanks to the...