Showing 24 open source projects for "make"

  • 1
    rvest

    Simple web scraping for R

    rvest helps you scrape (or harvest) data from web pages. It is designed to work with magrittr to make it easy to express common web scraping tasks, inspired by libraries like Beautiful Soup and RoboBrowser. If you’re scraping multiple pages, I highly recommend using rvest in concert with polite. The polite package ensures that you’re respecting the robots.txt and not hammering the site with too many requests.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    rnet

    Python HTTP client with TLS and HTTP/2 fingerprint emulation support

    ...It is powered by the underlying wreq engine and is built with performance and modularity in mind. rnet also supports advanced networking capabilities such as proxy rotation, connection pooling, and streaming transfers, which make it suitable for automation, scraping, and high-performance networking tasks, as sketched below.
    Downloads: 6 This Week
    Last Update:
    See Project
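    A minimal async usage sketch, assuming rnet exposes a Client type with an impersonate option as its README suggests; the class name, enum member, and URL below are unverified assumptions, so check the project documentation before relying on them.

```python
# Hedged sketch of rnet usage; Client, Impersonate, and the enum member
# are assumptions based on the project's README, not verified API.
import asyncio

import rnet
from rnet import Impersonate


async def main():
    # Emulate a specific browser's TLS/HTTP2 fingerprint (assumed API).
    client = rnet.Client(impersonate=Impersonate.Chrome131)
    resp = await client.get("https://example.com")  # placeholder URL
    print(await resp.text())


asyncio.run(main())
```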
  • 3
    Spider

    High-performance Rust web crawler and scraper for large-scale data

    ...It supports advanced capabilities such as headless browser rendering, background crawling tasks, and configurable rules that control crawl depth or ignored paths. These capabilities make the project suitable for building search indexers, data extraction pipelines, and SEO analysis tools.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 4
    Web Scraping for Laravel

    Laravel adapter for Roach, the complete web scraping toolkit for PHP

    ...The Laravel adapter mostly provides the necessary container bindings for the various services Roach uses, as well as making certain configuration options available via a config file. It also registers a few Artisan commands to make the development experience as pleasant as possible. Roach ships with an interactive shell (often called a read-eval-print loop, or REPL for short) which makes prototyping spiders a breeze. We can use the provided roach:shell command to launch a new REPL session.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 5
    jsoup

    Java library for working with real-world HTML

    ...It provides a very convenient API for fetching URLs and extracting and manipulating data, using the best of HTML5 DOM methods and CSS selectors. jsoup implements the WHATWG HTML5 specification, and parses HTML to the same DOM as modern browsers do. jsoup is designed to deal with all varieties of HTML found in the wild: from pristine and validating to invalid tag-soup, jsoup will create a sensible parse tree. The parser will make every attempt to create a clean parse from the HTML you provide, regardless of whether the HTML is well-formed or not. You may have HTML in a Java String that you want to parse to get at its contents, to make sure it's well-formed, or to modify; the String may have come from user input, a file, or the web.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    Ulixee Hero

    The web browser built for scraping

    ...We've recreated a fully compliant DOM directly in NodeJS, allowing you to bypass the headaches of previous scraper tools. The powerful Chrome engine sits under the hood, allowing for lightning-fast rendering. Emulators make it easy to disguise your script as practically any browser.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 7
    crawlee

    A web scraping and browser automation library for Node.js

    ...When a website adds JavaScript rendering, you don't have to rewrite everything, only switch to one of the browser crawlers. When you later find a great API to speed up your crawls, flip the switch back. It keeps your proxies healthy by rotating them smartly with good fingerprints that make your crawlers look human-like. It's not unblockable, but it will save you money in the long run. Crawlee is built by people who scrape for a living and use it every day to scrape millions of pages. Meet our community on Discord. We believe websites are best scraped in the language they're written in. Crawlee runs on Node.js and it's built in TypeScript to improve code completion in your IDE, even if you don't use TypeScript yourself.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 8
    CyberScraper 2077

    A Powerful web scraper powered by LLM | OpenAI, Gemini & Ollama

    CyberScraper 2077 is not just another web scraping tool – it's a glimpse into the future of data extraction. Born from the neon-lit streets of a cyberpunk world, this AI-powered scraper uses OpenAI, Gemini and LocalLLM Models to slice through the web's defenses, extracting the data you need with unparalleled precision and style.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 9
    dxy-covid-19-crawler

    Realtime crawler for COVID-19 outbreak statistics from DXY data

    ...The crawler periodically retrieves pandemic statistics and stores them in a database so that historical changes in the outbreak can be preserved and analyzed later. It was created to make up-to-date infection data more accessible for developers, researchers, and analysts who wanted to build visualizations or conduct data analysis during the early stages of the pandemic. DXY-COVID-19-Crawler automatically crawls data at regular intervals, typically every minute, ensuring that newly published statistics are captured as quickly as possible. ...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 10
    ScrapydWeb

    Web app for Scrapyd cluster management

    Web app for Scrapyd cluster management, with support for Scrapy log analysis and visualization. Make sure that Scrapyd has been installed and started on all of your hosts, then start ScrapydWeb via the scrapydweb command (a config file is generated for customizing settings on first startup). Add your Scrapyd servers; both string and tuple formats are supported, and you can attach basic auth for accessing a Scrapyd server, as well as a string for grouping or labeling, as sketched below.
    Downloads: 1 This Week
    Last Update:
    See Project
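    A hedged sketch of the SCRAPYD_SERVERS setting that illustrates the string and tuple formats the description mentions; the config file name and the tuple field order are assumptions from memory of ScrapydWeb's README, so verify them against the config file ScrapydWeb actually generates.

```python
# In the config file ScrapydWeb generates on first startup (assumed
# name: scrapydweb_settings_vN.py). Both entry formats below are
# assumptions based on the project's README; verify the order locally.
SCRAPYD_SERVERS = [
    "127.0.0.1:6800",
    # String form with basic auth and a group label (assumed syntax):
    # "username:password@localhost:6801#group",
    # Tuple form, assumed order: (username, password, host, port, group)
    ("username", "password", "localhost", "6801", "group"),
]
```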
  • 11
    spider_collection

    Collection of Python web scraping scripts for data extraction tasks

    spider_collection is a collection of Python web crawler scripts created primarily for experimentation, learning, and practical scraping tasks. It gathers multiple independent spiders designed to collect data from different platforms and services, demonstrating a variety of scraping techniques and workflows. These crawlers make use of common Python scraping tools such as requests, parsel, BeautifulSoup, and the Scrapy framework to extract structured information from web pages (a generic example of this pattern is sketched below). Several scripts also incorporate multi-threading and proxy usage to improve scraping efficiency and help avoid common anti-scraping limitations. In addition to raw data collection, some spiders include basic data processing and analysis using tools such as pandas and simple visualization with matplotlib. ...
    Downloads: 2 This Week
    Last Update:
    See Project
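    A generic sketch of the requests + BeautifulSoup pattern these spiders use; the URL and CSS selector are placeholders for illustration, not code taken from the repository.

```python
# Generic requests + BeautifulSoup scraping pattern; the URL and
# selector are placeholders, not taken from spider_collection itself.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/articles", timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
# Collect the text of every matching headline element.
titles = [h2.get_text(strip=True) for h2 in soup.select("h2.title")]
print(titles)
```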
  • 12
    sperm

    Collection of reverse engineering articles curated for learning

    ...These articles originate from multiple technical communities and platforms and are exported into Markdown format to maintain a consistent and readable structure. sperm focuses on educational material that explores reverse engineering techniques, mobile analysis, and related technical discussions. Its goal is to preserve useful learning resources and make them easier for researchers, students, and security enthusiasts to browse in a single location. sperm emphasizes that the materials are intended strictly for study and technical exchange rather than misuse. By maintaining these articles in a version-controlled repository, the content remains accessible and easier to navigate.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 13
    Trafilatura

    Python & command-line tool to gather text on the Web

    ...Going from raw HTML to the essential parts can alleviate many problems related to text quality, first by avoiding the noise caused by recurring elements (headers, footers, links/blogroll, etc.) and second by including information such as author and date in order to make sense of the data. The extractor tries to strike a balance between limiting noise (precision) and including all valid parts (recall). It also has to be robust and reasonably fast; it runs in production on millions of documents. A minimal usage sketch follows below.
    Downloads: 0 This Week
    Last Update:
    See Project
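    A minimal sketch using trafilatura's two core helpers, fetch_url() and extract(); the target URL is a placeholder.

```python
# Download a page and extract its main text with trafilatura.
import trafilatura

downloaded = trafilatura.fetch_url("https://example.com/")  # placeholder URL
if downloaded is not None:
    # extract() strips recurring boilerplate (headers, footers, link
    # lists) and returns the main text, or None if nothing usable remains.
    text = trafilatura.extract(downloaded)
    print(text)
```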
  • 14
    wombat

    Lightweight Ruby DSL for scraping structured data from web pages

    ...Developers can declare the data fields they want and specify selectors or rules for retrieving them, allowing Wombat to parse and return structured results. The DSL approach helps make scraping definitions more readable and maintainable, especially when dealing with multiple fields or nested data structures. Because it is implemented as a Ruby library, it integrates easily into Ruby applications and scripts that need to gather information from web pages. Wombat also includes examples and tests that demonstrate how scraping definitions can be written and executed within Ruby environments.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    PHPScraper

    A universal web-util for PHP

    PHPScraper is a universal web-scraping util for PHP, built with simplicity in mind. The goal is to make XPath selectors optional and avoid the commonly needed boilerplate code. Just create an instance of PHPScraper, go to a website, and start collecting data. All scraping functionality can be accessed either as a function call or a property call. For example, the title can be accessed in two ways. Many common use cases are covered already.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    Goutte

    Goutte, a simple PHP Web Scraper

    ...Goutte depends on PHP 7.1+. Add fabpot/goutte as a require dependency in your composer.json file. Create a Goutte Client instance (which extends Symfony\Component\BrowserKit\HttpBrowser). Make requests with the request() method. The method returns a Crawler object (Symfony\Component\DomCrawler\Crawler). To use your own HTTP settings, you may create and pass an HttpClient instance to Goutte. For example, to add a 60 second request timeout. Read the documentation of the BrowserKit, DomCrawler, and HttpClient Symfony Components for more information about what you can do with Goutte. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17

    webotron

    Using industrial automation techniques for creating web scraping tools

    Industry uses machines that can easily maim or kill their operators, often in very adverse environments. In spite of this, production quality must be close to perfect without reliance on operator skill or attentiveness. Control programs must be robust, yet simple enough to be understood and maintained by non-programmer skilled trades like electricians. The main programming model is the PLC, which implements double buffering and an event loop. The most advanced production model...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 18
    AutoScraper

    AutoScraper

    A Smart, Automatic, Fast and Lightweight Web Scraper for Python

    This project is made for automatic web scraping, to make scraping easy. It takes a URL or the HTML content of a web page plus a list of sample data that we want to scrape from that page; this data can be text, a URL, or any HTML tag value of that page. It learns the scraping rules and returns similar elements. You can then use this learned object with new URLs to get similar content or the exact same elements from those new pages, as sketched below.
    Downloads: 0 This Week
    Last Update:
    See Project
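    A minimal sketch following the usage shown in AutoScraper's README: build rules from one sample value, then reuse the learned scraper on a structurally similar page.

```python
from autoscraper import AutoScraper

url = "https://stackoverflow.com/questions/2081586/web-scraping-with-python"
# Sample data we expect on the page; AutoScraper learns rules that
# locate these values and anything structurally similar.
wanted_list = ["What are metaclasses in Python?"]

scraper = AutoScraper()
result = scraper.build(url, wanted_list)
print(result)

# Apply the learned rules to a structurally similar page.
print(scraper.get_result_similar(
    "https://stackoverflow.com/questions/606191/convert-bytes-to-a-string"
))
```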
  • 19

    WebScrapa - Web Scraping Program

    Pulls information from the web undetected.

    This program will pull information from the web and display it in a textbox, and it does all of this with a smart bot. You can connect bots to this program to make it more practical.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    pyspider

    A powerful Spider(Web Crawler) system in Python

    ...Every component, including the message queue, runs in its own process/thread and is replaceable. That means that when one stage is slow, you can run many instances of the processor to make full use of multiple CPUs, or deploy to multiple machines. This architecture makes pyspider really fast (see its benchmarking results). Since pyspider has various components, you can simply run pyspider to start a standalone instance that needs no third-party services, or use MySQL or MongoDB and RabbitMQ to deploy a distributed crawl cluster. To deploy pyspider in a production environment, running each component in its own process and storing data in a database service is more reliable and flexible. A minimal handler sketch follows below. ...
    Downloads: 0 This Week
    Last Update:
    See Project
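    A minimal handler sketch adapted from pyspider's README: on_start() seeds the crawl, index_page() follows links, and detail_page() returns the extracted record.

```python
from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {}

    @every(minutes=24 * 60)
    def on_start(self):
        # Seed URL; re-queued once a day by the @every decorator.
        self.crawl('http://scrapy.org/', callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        # Follow every absolute link found on the page.
        for each in response.doc('a[href^="http"]').items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        # Return the record that pyspider stores in its result database.
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }
```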
  • 21
    Gecco

    Lightweight Java web crawler framework with jQuery-style extraction

    Gecco is a lightweight web crawler framework written in Java that simplifies the process of building web scraping applications. It is designed to make crawler development straightforward by allowing developers to extract page elements using jQuery-style selectors rather than complex parsing logic. It integrates several well-known Java libraries and frameworks, including tools for HTTP requests, HTML parsing, JSON processing, and application development. Through its annotation-based design, developers can define crawling rules and data extraction logic directly within Java classes, reducing boilerplate code and improving readability. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 22
    xpider

    An extensible web spider (crawler) for Joomla!

    The extensible web spider (Xpider) is a Joomla! component that tries to make crawling external web pages possible for you. You can create a Spider and give it some Tasks (data to find) and some Seeds (web addresses) to search. The Spider's Findings (the results of the tasks) can be linked to a database.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23

    PGBuild

    Compile your mobile web pages into mobile apps via build.phonegap.com

    PGBuild is a PhoneGap development system that automates the development process by connecting your CMS/web server with the online service [Phonegap Build](http://build.phonegap.com). PGBuild is essentially a web spider that makes off-line versions of web pages. The off-line version is zipped and sent to the PhoneGap Build service. The spider is controlled by a project file that sets the rules for the spider and the options for the PhoneGap Build service. You may create and manage your PhoneGap project source files manually on your web server or use PGBuild to connect to a CMS system to extract content. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    Funnel is a project for use on intranets, or on selected sites on the Internet, to gather together and index information from several different sources and make it available through a sane, usable interface.
    Downloads: 0 This Week
    Last Update:
    See Project