Web Scrapers for Windows

  • 1

    Mowglee

    Mowglee - The Geo Crawler!

    Mowglee is a distributed, multi-threaded web crawler in Java, built around asynchronous task execution. It is designed for geographic affinity and is highly modular.
  • 2
    MusicalGalaxy

    Shows the complex connection between musicians and their pupils

    A galaxy of musicians, connected from pupil to teacher in a complex family tree, displayed in an elegant, dynamic, interactive way. A demonstration can be found here: https://v.redd.it/wc5dhs12m7a51/DASH_720?source=fallback The size and colour of the stars depend upon the number of connections the musician made over their lifetime. Those who taught few reside around the edges, whilst the greatest teachers cluster around the centre. Everything is connected. The brightest star at the moment is Nadia Boulanger, who single-handedly changed the face of modern music, teaching musicians such as Philip Glass, Daniel Barenboim, and Aaron Copland. Credit to OrigamiDrag0n, 2020. All code is released under the MIT license. UPDATE: clicking the bubbles now opens the webpage of the chosen composer. Error messages thrown by this update are currently being investigated.
  • 3
    NightCrawler is a multithreaded web spider that uses MIME types to decide which files to download.
  • 4
    Nomad is a tiny but efficient search engine and web crawler. It works well for searching within a set of corporate websites on the internet, or within an intranet's HTML documents and knowledge repositories.
  • 5
    We are integrating existing communication systems, including wikis, IRC, instant messaging, e-mail, and even static web sites. We write web scrapers and servers for managing events, IRC bots, logs, local names, templates, and groups.
  • 6

    PAMIE

    A Python class to allow the user to automate Internet Explorer

    Python Automation Module (class) for Internet Explorer (PAM.py). Originally written as a simple Python module, this new Python class, starting with version 2.0, lets the user automate the Internet Explorer browser for QA testing, development testing, or web scraping. The class runs on Windows only and drives Internet Explorer through its COM object; there is no support for Firefox, Chrome, Safari, or Flex at this time. This is not an application; a sketch of the underlying COM mechanism follows. Also check out the original "SAMIE" Perl module written by Henry Wasserman.
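    A minimal sketch of that COM mechanism, driving Internet Explorer directly through pywin32 (this illustrates the approach PAMIE wraps, not PAMIE's own method names):

        # Drive Internet Explorer via its COM automation object (pywin32).
        import time
        import win32com.client

        ie = win32com.client.Dispatch("InternetExplorer.Application")
        ie.Visible = True
        ie.Navigate("https://example.com")    # example target URL
        while ie.Busy or ie.ReadyState != 4:  # 4 == READYSTATE_COMPLETE
            time.sleep(0.5)
        print(ie.Document.body.innerHTML)     # rendered page, ready to scrape
        ie.Quit()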
  • 7

    PGBuild

    Compile your mobile web pages into mobile apps via build.phonegap.com

    PGBuild is a PhoneGap development system that automates the development process by connecting your CMS/web server with the online service [Phonegap Build](http://build.phonegap.com). PGBuild is essentially a web spider that makes offline versions of web pages. The offline version is zipped and sent to the PhoneGap Build service. The spider is controlled by a project file that sets the rules for the spider and the options for the PhoneGap Build service. You may create and manage your PhoneGap project source files manually on your web server, or use PGBuild to connect to a CMS system and extract content. PGBuild is managed from a small widget that you may use yourself or integrate into a CMS system.
  • 8
    Perl Web Scraping Project

    Web scraping (web harvesting or web data extraction) is data scraping used for extracting data from websites.[1] Web scraping software may access the World Wide Web directly using the Hypertext Transfer Protocol, or through a web browser. While web scraping can be done manually by a software user, the term typically refers to automated processes implemented using a bot or web crawler. It is a form of copying in which specific data is gathered and copied from the web, typically into a central local database or spreadsheet, for later retrieval or analysis. Scraping a web page involves fetching it and then extracting data from it.[1][2] Fetching is the downloading of a page (which a browser does when you view the page); web crawling is therefore a main component of web scraping, used to fetch pages for later processing. Once a page is fetched, extraction can take place: its content may be parsed, searched, reformatted, its data copied into a spreadsheet, and so on. A minimal fetch-and-extract sketch follows.
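    The project itself is written in Perl; purely as an illustration of the fetch-then-extract flow described above, here is the same idea in Python's standard library:

        # Fetch a page, then extract data from it.
        import re
        from urllib.request import urlopen

        html = urlopen("https://example.com").read().decode("utf-8")  # fetch
        titles = re.findall(r"<title>(.*?)</title>", html, re.S)      # extract
        print(titles)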
  • 9
    Raise

    A simple (and unofficial) GitHub Trending client

    A simple (and unofficial) GitHub Trending client that lives in your menubar. Raise is a simple extension for GitHub Trending you can use to browse trending repositories and developers at any time. Raise is an open-source project so if you have any problems, feel free to submit an issue on GitHub! Raise is developed on Node.js v16. Other Node.js versions have not been tested.
  • 10

    Rapid Reference

    An extension that allows for hassle-free website citation/referencing.

    Please do not distribute with the goal of selling my program.

    How to attach to your Chrome/Edge/Brave etc. browser:
    1. Download the extension (rapidreference.zip)
    2. Extract it
    3. Go to chrome://extensions if on Chrome, or navigate to your browser's extension management settings
    4. Enable developer mode (usually top right)
    5. Add an unpacked extension
    6. Choose the extracted extension's folder
    7. There you go!

    How to use:
    1. Start a session in the panel of the extension
    2. Do the required research/website navigation
    3. Check the preview of the cited references in the panel
    4. Copy the citation list to the clipboard by simply clicking the "Copy Citation" button
    5. You're done! Thanks for using my software 😊
  • 11
    Yet another web crawler? Yes, but this one uses the full power of regular expressions to accept or reject, examine or ignore, and save or refuse pages. You can also use MIME types for all of this. Powerful and flexible. A hypothetical sketch of such rules follows.
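    A hypothetical sketch of combined regex and MIME-type rules (the names below are illustrative, not this project's actual API):

        import re

        ACCEPT_URL = re.compile(r"^https?://example\.com/")  # crawl only this site
        REJECT_URL = re.compile(r"\.(jpg|gif|zip)$")         # skip binary assets
        ACCEPT_MIME = re.compile(r"^text/html")              # save only HTML

        def should_save(url, mime_type):
            return bool(ACCEPT_URL.search(url)
                        and not REJECT_URL.search(url)
                        and ACCEPT_MIME.match(mime_type))

        print(should_save("https://example.com/page.html", "text/html"))  # True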
  • 12
    Requests-HTML

    Pythonic HTML Parsing for Humans

    This library intends to make parsing HTML (e.g. scraping the web) as simple and intuitive as possible. When using this library you automatically get full JavaScript support (using Chromium, thanks to puppeteer), CSS selectors (a.k.a. jQuery-style, thanks to PyQuery), XPath selectors for the faint of heart, a mocked user-agent (like a real web browser), automatic following of redirects, and connection pooling and cookie persistence: the Requests experience you know and love, with magical parsing abilities and async support. The async API operates the same way as the synchronous version, except that the result is a list containing multiple response objects; the same basic steps can then be applied to extract the data you want. A minimal usage sketch follows.
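    A minimal usage sketch, following the library's documented API:

        from requests_html import HTMLSession  # pip install requests-html

        session = HTMLSession()
        r = session.get("https://example.com")
        # CSS selectors, jQuery-style; call r.html.render() first if the page
        # needs JavaScript executed (uses the bundled Chromium).
        for link in r.html.find("a"):
            print(link.text, link.attrs.get("href"))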
  • 13
    Roach

    The complete web scraping toolkit for PHP

    Roach is a complete web scraping toolkit for PHP. It is a shameless clone, heavily inspired by the popular Scrapy package for Python. Roach allows us to define spiders that crawl and scrape web documents. But wait, there's more: Roach isn't just a simple crawler; it includes an entire pipeline to clean, persist, and otherwise process extracted data as well. It's your all-in-one resource for web scraping in PHP. Roach doesn't depend on a specific framework. Instead, you can use the core package on its own or install one of the framework-specific adapters. Currently, a first-party adapter is available for using Roach in your Laravel projects, with more coming. Roach is built from the ground up with extensibility in mind; in fact, most of Roach's built-in behavior works the exact same way as any custom extensions or middleware.
  • 14
    RoboBrowser

    On-the-fly web scraper

    RoboBrowser is a WebKit-powered browser built for web scraping. It loads the requested webpage, saves the page source to disk, and passes the file's path to a PHP script as the first parameter.
  • 15
    The purpose of SAWS is to facilitate web scraping by (1) providing a pattern-specification mechanism on top of normal regular expressions and (2) implementing a common matching algorithm to run a specified pattern against a given source and report any matches. A hypothetical illustration of the idea follows.
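    A hypothetical illustration of a pattern layer over raw regular expressions (SAWS's actual pattern syntax may differ):

        import re

        def pattern_to_regex(pattern):
            # "{name}" captures a field; everything else matches literally.
            out = ""
            for part in re.split(r"(\{\w+\})", pattern):
                if part.startswith("{") and part.endswith("}"):
                    out += "(?P<%s>.+)" % part[1:-1]
                else:
                    out += re.escape(part)
            return re.compile(out)

        rx = pattern_to_regex("<b>{title}</b> by {author}")
        m = rx.search("<b>Dune</b> by Frank Herbert")
        print(m.groupdict())  # {'title': 'Dune', 'author': 'Frank Herbert'}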
  • 16
    SEO MACROSCOPE

    SEO Macroscope is a website scanning tool for checking your website

    The website broken link scanner and technical SEO toolbox. SEO Macroscope for Microsoft Windows is a free and open-source website broken link checking and scanning tool, with some technical SEO functionality for common website problems. Find broken links on your website, both internal and external. Report robots.txt statuses. Check and report canonical, hreflang, and other metadata problems. Perform simple, configurable Technical SEO checks on titles and descriptions. Report fastest/slowest pages. Export reports to Excel and CSV formats. Generate and export text and XML sitemaps from the crawled pages. Analyze redirect chains. Use custom filters to verify the presence/absence of tracking tags. Use CSS Selectors, XPath Queries, and Regular Expressions to scrape website data.
  • 17

    Scra.php

    Scrape anything!

    The ultimate customisable YAML-ised web scraper for PHP
  • 18
    ScrapBot 1.40 64bits

    Task automation software for accessing and manipulating website data.

    ScrapBot is task automation software that allows you to access, authenticate against, extract, and insert data on any website. The software uses JavaScript to execute tasks, eliminating the need for a server or additional software installations. The system can control the accessed webpage through JavaScript, and the entire navigation can be viewed in the program window. The main.js script runs in a separate frame from the navigation frame but can access all page content without any restrictions.
  • 19
    Scrapy-Redis

    Redis-based components for Scrapy

    You can start multiple spider instances that share a single Redis queue, which is best suited for broad multi-domain crawls. Scraped items get pushed into a Redis queue, meaning that you can start as many post-processing processes as needed, all sharing the items queue. It provides a scheduler + duplication filter, item pipeline, and base spiders. The default request serializer is pickle, but it can be changed to any module that provides loads and dumps functions. Note that pickle is not compatible between Python versions. Version 0.3 changed the request serialization from marshal to cPickle, so requests persisted with version 0.2 will not work on 0.3. The class scrapy_redis.spiders.RedisSpider enables a spider to read URLs from Redis. The URLs in the Redis queue are processed one after another; if the first request yields more requests, the spider processes those before fetching another URL from Redis. A typical wiring sketch follows.
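    Typical wiring, following the project's README (project and spider names are examples):

        # settings.py -- share scheduling and deduplication state through Redis.
        SCHEDULER = "scrapy_redis.scheduler.Scheduler"
        DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
        REDIS_URL = "redis://localhost:6379"

        # spider -- reads start URLs from the "myspider:start_urls" Redis key.
        from scrapy_redis.spiders import RedisSpider

        class MySpider(RedisSpider):
            name = "myspider"

            def parse(self, response):
                yield {"url": response.url,
                       "title": response.css("title::text").get()}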
  • 20
    Scrapyd

    A service daemon to run Scrapy spiders

    Scrapyd can manage multiple projects, and each project can have multiple versions uploaded, but only the latest one will be used for launching new spiders. A common (and useful) convention for the version name is the revision number of the version-control tool you're using to track your Scrapy project code, for example: r23. Versions are not compared alphabetically but using a smarter algorithm (the same one packaging uses), so r10 compares greater than r9, for example. Scrapyd is an application (typically run as a daemon) that listens for requests to run spiders and spawns a process for each one. Scrapyd also runs multiple processes in parallel, allocating them in a fixed number of slots given by the max_proc and max_proc_per_cpu options, starting as many processes as possible to handle the load. Jobs are scheduled through its HTTP JSON API, as sketched below.
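    Scheduling a run through the documented schedule.json endpoint (host, project, and spider names here are examples):

        import requests

        resp = requests.post(
            "http://localhost:6800/schedule.json",  # Scrapyd's default port
            data={"project": "myproject", "spider": "myspider"},
        )
        print(resp.json())  # e.g. {"status": "ok", "jobid": "..."}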
  • 21
    ScrapydWeb

    Web app for Scrapyd cluster management

    Web app for Scrapyd cluster management, with support for Scrapy log analysis & visualization. Make sure that Scrapyd has been installed and started on all of your hosts, then start ScrapydWeb via the command scrapydweb (a config file is generated on first startup for customizing settings). Add your Scrapyd servers; both string and tuple formats are supported, and you can attach basic auth for accessing the Scrapyd server, as well as a string for grouping or labeling. You can select any number of Scrapyd servers by grouping and filtering, and then invoke the HTTP JSON API of Scrapyd on the cluster with just a few clicks. A sketch of the server list follows.
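    A sketch of the SCRAPYD_SERVERS setting from the generated config file (server addresses and credentials are placeholders):

        SCRAPYD_SERVERS = [
            "127.0.0.1:6800",
            # string form, with basic auth and a group label:
            "username:password@192.168.0.2:6800#group1",
            # equivalent tuple form:
            ("username", "password", "192.168.0.3", "6800", "group1"),
        ]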
  • 22
    Selectolax

    Python binding to Modest and Lexbor engines

    A fast HTML5 parser with CSS selectors, using the Modest and Lexbor engines. Selectolax supports two backends: Modest and Lexbor. By default, all examples use the Modest backend. Most features are almost identical between backends, but there are still some differences. Currently, the Lexbor backend is in beta and missing some features. To use Lexbor, just import its parser and use it in a similar way to the HTMLParser, as sketched below.
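    A minimal sketch of both backends, per the project's documentation:

        from selectolax.parser import HTMLParser        # Modest backend (default)
        from selectolax.lexbor import LexborHTMLParser  # Lexbor backend (beta)

        html = "<div><p id='x'>Hello</p></div>"
        print(HTMLParser(html).css_first("p#x").text())        # -> Hello
        print(LexborHTMLParser(html).css_first("p#x").text())  # -> Hello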
  • 23
    Simple-Scrape is a simple web-scraping library that allows for programmatic access to HTML code. No further techniques are needed, and the library is very compact and thus easy to use.
  • 24
    This project will provide a tool for users to get a better understanding of the content and structure of an existing website. It will do this by providing a customised web spider as well as extensions to the GUESS graph visualisation application.
  • 25
    Sphider is a lightweight web spider and search engine written in PHP, using MySQL as its back-end database. It is a great tool for adding search functionality to your web site or building your custom search engine. Sphider is small, easy to set up and...