Web Scrapers for Windows

  • 1

    WebScrapa - Web Scraping Program

    Pulls information from the web undetected.

    This program pulls information from the web and displays it in a text box, doing all of it with a smart bot. You can connect bots to this program to make it more practical.
    Downloads: 0 This Week
    See Project
  • 2
    A simple-to-set-up web scraper written in Java. It uses modified regex to let you quickly write complex patterns for parsing data out of a website. It includes a GUI tool for testing your configuration scripts and is fully automatable from the command line.
    Downloads: 0 This Week
    See Project
  • 3
    A C#-coded web crawler that analyzes PageRank, total links, and nofollow links, and ranks a site's best-positioned pages in Google.
    Downloads: 0 This Week
    See Project
  • 4
    Larbin is a Web crawler intended to fetch a large number of Web pages; it should be able to fetch more than 100 million pages on a standard PC with ample up/down bandwidth. This set of PHP and Perl scripts, called webtools4larbin, can handle the output of Larbin and p
    Downloads: 0 This Week
    See Project
  • 5
    A basic Perl web spider with grandiose aspirations. Supports XML log file output and resumable spidering sessions.
    Downloads: 0 This Week
    See Project
  • 6
    arachnode.net is an open source Web crawler for downloading, indexing, and storing Internet content, including e-mail addresses, files, hyperlinks, images, and Web pages. It is written in C# using SQL Server 2008. See http://arachnode.net for the latest.
    Downloads: 0 This Week
    See Project
  • 7
    bee-rain is a web crawler that harvests and indexes files over the network. You can see results on the bee-rain website: http://bee-rain.internetcollaboratif.info/
    Downloads: 0 This Week
    See Project
  • 8

    cavespider

    Guile web spider software

    A Guile Scheme web spider with database and keyword-searching software, plus extras.
    Downloads: 0 This Week
    See Project
  • 9
    crawler4j

    Open source web crawler for Java

    crawler4j is an open source web crawler for Java that provides a simple interface for crawling the Web. Using it, you can set up a multi-threaded web crawler in a few minutes. You create a crawler class that extends WebCrawler; this class decides which URLs should be crawled and handles the downloaded pages. The shouldVisit function decides whether a given URL should be crawled; for example, a crawler might disallow .css, .js, and media files and only allow pages within one domain. The visit function is called after the content of a URL is downloaded successfully, and from it you can easily get the URL, text, links, HTML, and unique ID of the downloaded page. You also implement a controller class that specifies the seeds of the crawl, the folder in which intermediate crawl data should be stored, and the number of concurrent threads.
    Downloads: 0 This Week
    See Project
  • 10
    crwlr

    Library for Rapid (Web) Crawler and Scraper Development

    This library provides a kind of framework plus a lot of ready-to-use, so-called steps that you can use as building blocks to compose your own crawlers and scrapers. Before diving into the library, let's look at the terms crawling and scraping. For most real-world use cases the two go hand in hand, which is why this library helps with and combines both. A (web) crawler is a program that (down)loads documents and follows the links in them to load those as well. A crawler could load every link it finds (and is allowed to load according to the robots.txt file); it would then load the whole internet, provided the URLs it starts with are not a dead end. Alternatively, it can be restricted to links matching certain criteria (same domain/host, URL path starting with "/foo", ...) or to a certain depth. A depth of 3 means three levels deep: links found on the initial URLs provided to the crawler are level 1, and so on. A generic sketch of this depth-limited idea follows this entry.
    Downloads: 0 This Week
    See Project
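    To make the depth rule above concrete: crwlr itself is a PHP library, so the rough sketch below is not its API but a language-neutral illustration in Python. It assumes the third-party requests and beautifulsoup4 packages; the crawl helper is a hypothetical name.

        # Generic depth-limited, robots.txt-respecting crawler sketch (not crwlr's API).
        from collections import deque
        from urllib import robotparser
        from urllib.parse import urljoin, urlparse

        import requests
        from bs4 import BeautifulSoup

        def crawl(start_url, max_depth=3):
            robots = robotparser.RobotFileParser()
            robots.set_url(urljoin(start_url, "/robots.txt"))
            robots.read()

            seen = {start_url}
            queue = deque([(start_url, 0)])  # links found on the start page are level 1
            while queue:
                url, depth = queue.popleft()
                if not robots.can_fetch("*", url):
                    continue  # honor robots.txt, as the description requires
                html = requests.get(url, timeout=10).text
                print("loaded", url, "at depth", depth)
                if depth == max_depth:
                    continue
                for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                    link = urljoin(url, a["href"])
                    # restrict to the same host, one of the criteria mentioned above
                    if urlparse(link).netloc == urlparse(start_url).netloc and link not in seen:
                        seen.add(link)
                        queue.append((link, depth + 1))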
  • 11

    dataflowkit

    Golang framework for scraping data from web pages

    A Golang web scraper library for extracting data from web pages; saves results as CSV, JSON, or XML.
    Downloads: 0 This Week
    See Project
  • 12
    datalus
    A PHP web API designed to simplify object handling (loading, saving, querying, displaying, and editing), abstract the data from its display structure and layout, and allow the target data to be delivered to any supported format without special logic.
    Downloads: 0 This Week
    See Project
  • 13
    django-dynamic-scraper

    Creating Scrapy scrapers via the Django admin interface

    Django Dynamic Scraper (DDS) is an app for Django built on top of the scraping framework Scrapy. While preserving many of Scrapy's features, it lets you dynamically create and manage spiders via the Django admin interface and save your scraped items in the database defined for your Django project. Because it simplifies things, DDS is not usable for every kind of scraper, but it is well suited to the relatively common case of regularly scraping a website for a list of updated items (e.g. news, events, etc.) and then digging into the detail page to scrape more information for each item. DDS keeps its data structure in the database as separate as possible from the models in your app, so it comes with its own Django model classes for defining scrapers and runtime information related to your scraper runs. A model sketch follows this entry.
    Downloads: 0 This Week
    See Project
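    As a sketch of how a project hooks into DDS, the pattern below (modeled on the DDS documentation) gives an app model references to a DDS Scraper and its SchedulerRuntime. The NewsWebsite model and its field names are illustrative assumptions, not part of DDS itself.

        # Hypothetical app model wired to Django Dynamic Scraper.
        from django.db import models
        from dynamic_scraper.models import Scraper, SchedulerRuntime

        class NewsWebsite(models.Model):
            name = models.CharField(max_length=200)
            url = models.URLField()
            # the scraper defined via the Django admin interface
            scraper = models.ForeignKey(
                Scraper, blank=True, null=True, on_delete=models.SET_NULL)
            # runtime information DDS keeps about this scraper's runs
            scraper_runtime = models.ForeignKey(
                SchedulerRuntime, blank=True, null=True, on_delete=models.SET_NULL)

            def __str__(self):
                return self.name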
  • 14
    dude uncomplicated data extraction

    dude uncomplicated data extraction: A simple framework

    Dude is a very simple framework for writing web scrapers using Python decorators. The design, inspired by Flask, aims to let you build a web scraper in just a few lines of code. Dude has an easy-to-learn syntax. Dude is currently in Pre-Alpha, so please expect breaking changes. You can run your scraper from the terminal/shell/command line by supplying URLs, an output filename of your choice, and the paths to your Python scripts to the dude scrape command. A decorator sketch follows this entry.
    Downloads: 0 This Week
    See Project
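    A minimal sketch of the decorator style described above. The CSS selectors and function names are placeholders, and the element API depends on the backend (Playwright by default), so treat the attribute calls as assumptions.

        # Each decorated function maps matched elements to entries in the output.
        from dude import select

        @select(css="a.result")  # placeholder selector
        def result_url(element):
            return {"url": element.get_attribute("href")}

        @select(css=".title")  # placeholder selector
        def result_title(element):
            return {"title": element.text_content()}

    Saved as scraper.py, this could be run with something like dude scrape --url "https://example.com" --output data.json scraper.py; treat the exact flags as an assumption.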
  • 15
    elk is a powerful open-source, Python-based command-line web crawler that can recursively search for files and text on websites.
    Downloads: 0 This Week
    See Project
  • 16
    This plug-in for Google Desktop is a simple web spider (Könguló is Icelandic for spider) that crawls websites you specify, e.g. intranet websites, and dumps them into Google Desktop. You must install Google Desktop prior to installing the plug-in.
    Downloads: 0 This Week
    See Project
  • 17
    html-metadata

    MetaData HTML scraper and parser for Node.js (supports Promises)

    The aim of this library is to be a comprehensive source for extracting all HTML-embedded metadata. Currently it supports Schema.org microdata (using a third-party library); a native implementation of BEPress, Dublin Core, Highwire Press, JSON-LD, Open Graph, Twitter, EPrints, PRISM, and COinS; and some general metadata that doesn't belong to a particular standard (for instance, the content of the title tag or meta description tags). Support for RDFa, AGLS, and other as-yet-unheard-of metadata types is planned; contributions and requests for other metadata types are welcome! You can also pass an options object as the first argument containing extra parameters; some websites require the user-agent or cookies to be set in order to get a response.
    Downloads: 0 This Week
    See Project
  • 18
    img2dataset

    Easily turn large sets of image URLs into an image dataset

    Easily turn large sets of image URLs into an image dataset. It can download, resize, and package 100M URLs in 20h on one machine, and it also supports saving captions for url+caption datasets. Opt-out directives: websites can pass the HTTP headers X-Robots-Tag: noai, X-Robots-Tag: noindex, X-Robots-Tag: noimageai, and X-Robots-Tag: noimageindex; by default img2dataset will ignore images with such headers. A usage sketch follows this entry.
    Downloads: 0 This Week
    See Project
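    A minimal sketch of programmatic use, assuming an input file urls.txt with one image URL per line; the parameter values are illustrative.

        # Download and resize every image listed in urls.txt into ./images.
        from img2dataset import download

        download(
            url_list="urls.txt",    # assumed input file, one image URL per line
            input_format="txt",
            output_folder="images",
            output_format="files",  # plain image files; webdataset is also supported
            image_size=256,         # resize target
            thread_count=64,        # download threads
        )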
  • 19

    newsscrape

    Collects news headlines for analysis to determine their category

    newsscrape is a web scraper for news headlines that analyzes how each headline relates to a news category. It extracts RSS feeds from Google News, and each headline is matched against a Google News category such as Entertainment, Sports, etc. It is called from a scheduler to collect this data at five-minute intervals and accumulate it in a database. It contains R statistical-computing scripts that learn the patterns of words in a headline that result in a particular category. To test its accuracy at predicting the category of a news headline, select a news title from another source - e.g. http://rss.news.yahoo.com/rss/entertainment - and incorporate it into the R script, which outputs the category it assumes for that title. A generic headline-collection sketch follows this entry.
    Downloads: 0 This Week
    See Project
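    newsscrape's own code is not shown here, but the headline-collection step it describes can be sketched generically in Python with the feedparser package; the feed URL is an assumption.

        # Generic sketch of collecting Google News headlines from RSS
        # (not newsscrape's actual code).
        import feedparser

        feed = feedparser.parse("https://news.google.com/rss")  # assumed feed URL
        for entry in feed.entries:
            # in newsscrape, each headline would be matched against a category
            # and accumulated in a database every five minutes by a scheduler
            print(entry.title, entry.link)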
  • 20
    phoneutria
    A Java Web crawler: multi-threaded, scalable, high-performance, extensible, and polite. It can be used to crawl and index any web or enterprise domain and is configurable through an XML configuration file.
    Downloads: 0 This Week
    See Project
  • 21
    pyspider

    A powerful Spider (Web Crawler) system in Python

    pyspider is a powerful Spider (Web Crawler) system in Python. Components are connected by a message queue, and every component, including the message queue, runs in its own process/thread and is replaceable. That means that when one stage is slow, you can run many instances of the processor and make full use of multiple CPUs, or deploy to multiple machines. This architecture makes pyspider really fast. Since pyspider's components are separable, you can just run pyspider to start a standalone instance free of third-party services, or use MySQL or MongoDB and RabbitMQ to deploy a distributed crawl cluster. To deploy pyspider in a production environment, running each component in its own process and storing data in a database service is more reliable and flexible; for that you need at least one database service. pyspider currently supports MySQL, MongoDB, and PostgreSQL, and you can choose any one of them. A minimal handler script follows this entry.
    Downloads: 0 This Week
    See Project
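    A minimal handler sketch in pyspider's documented style: on_start seeds the crawl, index_page queues outgoing links, and detail_page returns the scraped record. The seed URL and timing values are placeholders.

        from pyspider.libs.base_handler import *

        class Handler(BaseHandler):
            crawl_config = {}

            @every(minutes=24 * 60)          # re-seed the crawl once a day
            def on_start(self):
                self.crawl("http://example.com/", callback=self.index_page)

            @config(age=10 * 24 * 60 * 60)   # treat fetched pages as fresh for 10 days
            def index_page(self, response):
                for each in response.doc('a[href^="http"]').items():
                    self.crawl(each.attr.href, callback=self.detail_page)

            def detail_page(self, response):
                return {"url": response.url, "title": response.doc("title").text()}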
  • 22

    rubywebcrawler

    Web spider software written in Ruby

    Downloads: 0 This Week
    See Project
  • 23
    scraper-with-chatgpt
    A powerful data scraping tool that helps you extract information from various online sources. Easily collect data from Google SERP, Maps, Shopify, Zillow, and more. With a user-friendly interface, you can scrape and save data in JSON or Excel formats. Unlock insights from the web effortlessly with the scrape-it.cloud API.
    Downloads: 0 This Week
    See Project
  • 24
    soup

    Web Scraper in Go, similar to BeautifulSoup

    soup is a small web scraper package for Go, with an interface highly similar to that of BeautifulSoup. Each node exposes Pointer, containing the pointer to the current HTML node; NodeValue, containing the current HTML node's value, i.e. the tag name for an ElementNode or the text in the case of a TextNode; and Error, containing an error in a struct if one occurs (else nil is returned). A detailed text explanation of the error can be accessed using the Error() function, and a field Type of type ErrorType in this struct denotes the kind of error that took place.
    Downloads: 0 This Week
    See Project
  • 25
    studiMaps is a web-based application for the visualization and analysis of social networks. It consists of two software components: a web crawler for collecting data and the web-based application for visualization.
    Downloads: 0 This Week
    See Project