1523 projects for "data.6bin" with 2 filters applied:

  • 1. Scrapy

    A fast, high-level web crawling and web scraping framework.

    ...It can be used for data mining, monitoring, and automated testing.
    Downloads: 19 This Week
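Frameworks like Scrapy automate a fetch, parse, and follow-links loop over pages. A stdlib-only sketch of the parse step (this is not Scrapy's own API; the `LinkExtractor` class name and the sample HTML are illustrative):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags: the parse step a crawler
    runs on every fetched page before deciding which links to follow."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<html><body><a href="/docs">Docs</a> <a href="/blog">Blog</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/docs', '/blog']
```

A real framework layers scheduling, deduplication, and concurrent fetching on top of this step; Scrapy in particular wraps it in spider classes with callback methods.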
  • 2. syslog-ng

    Log management solution that improves the performance of SIEM.

    ...Instead of deploying multiple agents on hosts, organizations can unify their log data collection and management. syslog-ng Store Box provides automated archiving, tamper-proof encrypted storage, and granular access controls to protect log data. The largest appliance can store up to 10TB of raw logs.
    Downloads: 9 This Week
  • 3. SkyCrypt

    A Hypixel SkyBlock stats website.

    SkyCrypt is a web-based application that allows players of Hypixel SkyBlock to view and share detailed information about their in-game profiles through a visually rich interface. It aggregates data from the Hypixel API and presents it in an organized format, including player statistics, skills, equipment, and inventory details. The project is built on a Node.js stack and integrates technologies such as MongoDB and Redis for data storage and caching. SkyCrypt provides clear visualizations and quick overviews of complex in-game metrics, making it easier for players to analyze their progress. ...
    Downloads: 4 This Week
  • 4. DB Browser for SQLite

    A visual tool to create, design, and edit SQLite database files.

    ...Import and export records as text; import and export tables from/to CSV files; import and export databases from/to SQL dump files; issue SQL queries and inspect the results; examine a log of all SQL commands issued by the application; and plot simple graphs based on table or query data.
    Downloads: 190 This Week
  • 5. Curl

    Command line tool and library for transferring data with URLs.

    Curl is a command line tool and library for transferring data specified with URL syntax. It supports HTTP, HTTPS, FTP, FTPS, GOPHER, TFTP, SCP, SFTP, SMB, TELNET, DICT, SSL certificates, cookies, user+password authentication, and much more. Curl is used in command lines and scripts for transferring data, and it ships in just about every device you can think of: mobile phones and tablets, television sets, printers, routers, media players, and other audio equipment. ...
    Downloads: 26 This Week
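Curl's `-u user:password` option, for example, performs HTTP Basic authentication by sending a base64-encoded `Authorization` header. A stdlib sketch of the same header (credentials and URL are placeholder values, and no request is actually sent):

```python
import base64
from urllib.request import Request

# Equivalent of: curl -u alice:secret https://example.com/
# HTTP Basic auth is just base64("user:password") in a request header.
creds = base64.b64encode(b"alice:secret").decode("ascii")
req = Request("https://example.com/",
              headers={"Authorization": f"Basic {creds}"})
print(req.get_header("Authorization"))  # Basic YWxpY2U6c2VjcmV0
```

This is why Basic auth is only safe over HTTPS: the encoding is trivially reversible, not encryption.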
  • 6. dxy-covid-19-crawler

    Realtime crawler for COVID-19 outbreak statistics from DXY data.

    ...DXY-COVID-19-Crawler automatically crawls data at regular intervals, typically every minute, ensuring that newly published statistics are captured as quickly as possible. Retrieved data is stored in MongoDB and archived so that the entire progression of the outbreak can be traced over time. It also provides an API that allows developers to easily access the collected data for building dashboards, visualizations, and other analytical tools.
    Downloads: 1 This Week
  • 7. fluentbit

    Fast and lightweight logs and metrics processor for Linux, BSD, and OSX.

    ...No more OOM errors! Integrates with all your technology: cloud-native services, containers, streaming processors, and data backends. Its fully event-driven design leverages the operating system API for performance and reliability. All operations to collect and deliver data are asynchronous.
    Downloads: 1 This Week
  • 8. fess

    Open source enterprise search server for websites, files, and data.

    Fess is an open source enterprise search server designed to provide powerful full-text search capabilities across multiple data sources. It enables organizations to quickly deploy a scalable search environment without requiring deep knowledge of the underlying search technologies. Built on top of OpenSearch, it offers an integrated solution for crawling, indexing, and searching documents from websites, file systems, and various data stores. Its built-in crawler can collect content from sources such as databases, CSV files, and shared storage, making it suitable for centralized knowledge discovery. ...
    Downloads: 6 This Week
  • 9. skycaiji

    Open source web scraping system for automated data collection tasks.

    SkyCaiji is an open source web scraping and data collection system designed to gather information from websites through configurable extraction rules. It focuses on simplifying the process of building crawlers by allowing users to visually define scraping rules rather than writing complex code. It can collect structured or unstructured data from many types of webpages and automate the extraction process for large datasets.
    Downloads: 1 This Week
  • 10. Weibo Crawler

    Python crawler for collecting and downloading Sina Weibo user data.

    ...It also captures detailed data about each post, including the content, publishing time, topics, mentions, likes, reposts, and comments. In addition to textual data, the project can download original media from posts, such as images, videos, and Live Photo content. Collected data can be exported to structured formats such as CSV or JSON, or stored in databases for further analysis and research.
    Downloads: 1 This Week
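The export step such crawlers describe is straightforward with the standard library. A sketch with hypothetical post records (the field names are illustrative, not the crawler's actual schema):

```python
import csv
import io
import json

# Hypothetical records shaped like the per-post fields a crawler collects:
posts = [
    {"content": "hello", "likes": 3, "reposts": 1},
    {"content": "world", "likes": 7, "reposts": 0},
]

# JSON export: keeps nesting and types intact.
json_blob = json.dumps(posts, ensure_ascii=False)

# CSV export: flat rows, convenient for spreadsheets and pandas.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["content", "likes", "reposts"])
writer.writeheader()
writer.writerows(posts)
print(buf.getvalue().splitlines()[0])  # content,likes,reposts
```

`ensure_ascii=False` matters for Chinese-language content like Weibo posts; without it, non-ASCII text is escaped into `\uXXXX` sequences.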
  • 11. douyin

    Open source Douyin crawler for collecting and downloading public data.

    DouyinCrawler is an open source data collection tool designed to gather publicly available information from the Douyin platform. It demonstrates how to build a Python-based web crawler combined with a graphical interface and command line functionality. It allows users to collect data from various types of Douyin content, including user profiles, videos, hashtags, and music pages.
    Downloads: 2 This Week
  • 12. Matomo

    Alternative to Google Analytics that gives you full control over your data.

    Matomo is a Google Analytics alternative that protects your data and your customers' privacy. Take back control with a powerful web analytics platform that gives you 100% data ownership. You risk losing your customers' trust and damaging your reputation if people learn their data is used for Google's "own purposes". By choosing the ethical alternative, you won't make privacy sacrifices or compromise your site.
    Downloads: 1 This Week
  • 13. dude (uncomplicated data extraction)

    A simple framework for uncomplicated data extraction.

    Dude is a very simple framework for writing web scrapers using Python decorators. Its design, inspired by Flask, aims to let you build a web scraper in just a few lines of code with an easy-to-learn syntax. Dude is currently in pre-alpha, so expect breaking changes. You can run your scraper from the terminal by supplying URLs, an output filename of your choice, and the paths to your Python scripts to the dude scrape command.
    Downloads: 0 This Week
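The Flask-inspired, decorator-driven design can be sketched as a tiny rule registry. This is a toy illustration of the pattern, not dude's actual implementation; the `select` decorator here only mimics the style of its documented API:

```python
# Registry of (css_selector, handler) pairs, populated by the decorator.
RULES = []

def select(css):
    """Register a handler function for elements matching a CSS selector."""
    def register(func):
        RULES.append((css, func))
        return func
    return register

@select(css="a.title")
def title(element):
    # Handlers return dicts that the framework merges into result rows.
    return {"title": element}

# A real framework would fetch pages and match `css` against parsed HTML;
# here we just invoke every registered handler on a stand-in element.
results = [func("Example") for _, func in RULES]
print(results)  # [{'title': 'Example'}]
```

The appeal of the pattern is that user code stays declarative: you list what to extract, and the framework owns fetching, matching, and output.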
  • 14. Spider

    High-performance Rust web crawler and scraper for large-scale data.

    ...Spider can operate concurrently across many pages, allowing it to gather large datasets in a short period of time. It also provides mechanisms for subscribing to crawl events, so developers can process page data such as URLs, status codes, or HTML content as it is discovered. It supports advanced capabilities such as headless browser rendering, background crawling tasks, and configurable rules that control crawl depth or ignored paths. These capabilities make the project suitable for building search indexers, data extraction pipelines, and SEO analysis tools.
    Downloads: 3 This Week
  • 15. spider_collection

    Collection of Python web scraping scripts for data extraction tasks.

    ...In addition to raw data collection, some spiders include basic data processing and analysis using tools such as pandas, and simple visualization with matplotlib. It also contains examples of proxy pool integration to support more reliable crawling when working with sites that enforce request limits.
    Downloads: 1 This Week
  • 16. DotnetSpider

    Lightweight .NET framework for fast web crawling and data scraping.

    DotnetSpider is a web crawling and data extraction framework built on the .NET Standard platform. It is designed to help developers create efficient and scalable crawlers for collecting structured data from websites. It provides a high-level API that simplifies the process of defining spiders, managing requests, and extracting content from web pages. Developers can create custom spiders by extending base classes and configuring pipelines that handle downloading, parsing, and storing collected data. ...
    Downloads: 0 This Week
  • 17. YourInfo

    Real-time browser fingerprinting demo with cross-browser tracking.

    YourInfo is a personal information management tool designed to let users securely store, structure, and retrieve their key data, such as contacts, credentials, personal notes, and preferences, while also enabling AI-assisted queries or reminders using that data. The platform prioritizes privacy by focusing on local storage or user-controlled databases, ensuring sensitive data stays under the user's control rather than on third-party servers. Users can define types of information, tag entries for quick categorization, and perform intuitive searches when they need to recall something like a phone number, address, or secret detail. ...
    Downloads: 0 This Week
  • 18. wombat

    Lightweight Ruby DSL for scraping structured data from web pages.

    Wombat is a lightweight web crawling and scraping library written in Ruby that focuses on extracting structured data from web pages using a concise domain-specific language (DSL). It is designed to simplify the process of defining how information should be collected from HTML documents without requiring large amounts of scraping boilerplate code. Developers declare the data fields they want and specify selectors or rules for retrieving them, and Wombat parses the pages and returns structured results. ...
    Downloads: 0 This Week
  • 19. watercrawl

    AI-ready web crawler that extracts and structures website content.

    ...WaterCrawl supports customizable extraction rules, so users can focus only on relevant elements while ignoring unnecessary page components. It also offers real-time monitoring, allowing users to track crawling progress, performance metrics, and errors during large data collection jobs. Developers can integrate the tool into applications through a REST API and multiple client SDKs, enabling automated data pipelines and AI data preparation workflows.
    Downloads: 0 This Week
  • 20. Geziyor

    Blazing fast Go framework for web crawling and data scraping tasks.

    ...It is designed to help developers crawl websites and extract structured information from web pages efficiently. It focuses on speed and scalability, allowing large numbers of requests to be processed concurrently. Geziyor supports use cases such as data mining, monitoring web content, and automated testing workflows. It provides a flexible architecture where developers define parsing functions that process responses and extract the desired data. Geziyor includes features for managing requests, handling cookies, respecting robots.txt rules, and exporting collected data in multiple formats. ...
    Downloads: 0 This Week
  • 21. Mini QR

    Create and scan cute QR codes easily.

    ...It emphasizes customization, so the QR code you generate can match a brand, event theme, or personal style, including color and styling controls, framed layouts with labels, and the ability to add a logo image. Because QR reliability matters as much as looks, it exposes practical settings like error correction levels, so you can balance data density with scannability, especially when adding a logo or encoding larger payloads. The scanning side supports camera-based scanning and image uploads, and it recognizes common QR content types such as URLs, emails, phone numbers, SMS messages, Wi-Fi credentials, and other structured payloads so the next action is obvious. It also supports producing many codes at once by importing CSV data and exporting batches.
    Downloads: 9 This Week
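The error-correction trade-off is concrete: stronger levels consume payload capacity. A sketch using the commonly cited byte-mode capacities of a Version 1 (21x21) QR symbol from the QR specification (the helper function name is illustrative):

```python
from typing import Optional

# Byte-mode capacities (in bytes) for a Version 1 QR code at each
# error-correction level, per the QR code specification: L tolerates
# ~7% damage, M ~15%, Q ~25%, H ~30%.
CAPACITY_V1 = {"L": 17, "M": 14, "Q": 11, "H": 7}

def highest_level_that_fits(payload: bytes) -> Optional[str]:
    """Pick the strongest error-correction level that still fits the payload."""
    for level in ("H", "Q", "M", "L"):  # strongest first
        if len(payload) <= CAPACITY_V1[level]:
            return level
    return None  # payload needs a larger QR version

print(highest_level_that_fits(b"hi"))                 # H
print(highest_level_that_fits(b"https://a.example"))  # L
```

This is why adding a logo (which deliberately occludes modules) pairs best with level H, and why long URLs may force a lower level or a larger symbol version.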
  • 22. Laravel Sharp

    Laravel 10+ content management framework.

    ...The public website should not have any knowledge of the CMS; the CMS is a part of the system, not its center. In fact, removing the CMS should not have any effect on the project. Content administrators should work with their own data and terminology, not CMS terms: if the project is about spaceships, space travel, and pilots, why would the CMS talk about articles, categories, and tags? Developers should not have to do front-end development for the CMS, so Sharp takes care of all the responsive/CSS/JS work.
    Downloads: 1 This Week
  • 23. FEAPDER

    Powerful Python crawler framework for scalable web scraping tasks.

    ...It also integrates monitoring and alerting capabilities to help developers track crawler performance and detect issues during execution. feapder includes browser rendering support for handling dynamic web pages and provides mechanisms for large-scale data deduplication during crawling.
    Downloads: 0 This Week
  • 24. Faved

    Free open-source bookmark manager with customisable nested tags.

    ...Instead of just listing URLs, Faved supports customisable nested tags, helping users categorize their bookmarks in hierarchical structures that reflect personal workflows or project needs. All data is stored locally by default, which enhances privacy and eliminates dependency on external servers or vendor lock-in. The application is lightweight and built to launch quickly, with minimal setup complexity, making it suitable for developers and non-technical users alike. Its emphasis on fast performance and low overhead means it can be deployed on small servers and used reliably for long-term link management without slowing down over time.
    Downloads: 4 This Week
  • 25. QueryList

    Progressive PHP web crawler framework with jQuery-like DOM parsing.

    QueryList is an extensible PHP web scraping and crawling framework designed to extract and process data from web pages. It provides a simple and expressive API that allows developers to collect structured information from HTML documents using familiar DOM traversal techniques. It is built on top of phpQuery and uses CSS3 selectors similar to those found in jQuery, making it easy to query and manipulate page elements during scraping tasks.
    Downloads: 0 This Week