Open Source ChromeOS Internet Software - Page 5

Internet Software for ChromeOS

  • 1
    Our mission is to develop open source solutions and provide professional support that helps small and medium-sized companies meet the challenges of developing professional Arabic websites in the PHP/MySQL environment. Drawing on our experience in Arabic language processing, the library we develop helps companies save time and increase productivity.
    Downloads: 21 This Week
    Last Update:
    See Project
  • 2
    FCKeditor

    FCKeditor (retired)

    FCKeditor is the previous version of CKEditor and was discontinued after version 2. The new CKEditor is redesigned from the ground up, offering more WYSIWYG text editing features, enhanced security, and better integration. Don’t hold yourself back with the retired FCKeditor; switch to the new CKEditor at ckeditor.com.
    Downloads: 14 This Week
    Last Update:
    See Project
  • 3
    Fast Artificial Neural Network Library is a free open source neural network library that implements multilayer artificial neural networks in C, with support for both fully connected and sparsely connected networks. Cross-platform execution in both fixed and floating point is supported. It includes a framework for easy handling of training data sets. It is easy to use, versatile, well documented, and fast. Bindings to more than 15 programming languages are available. An easy-to-read introductory article and a reference manual accompany the library, with examples and recommendations on how to use it. Several graphical user interfaces are also available for the library.
    Downloads: 21 This Week
    Last Update:
    See Project
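As a rough sketch of how the library is driven from one of its many bindings, the fann2 Python binding can train a network from a FANN-format data file roughly as follows; the module and method names are recalled from that binding's documentation and may differ between versions, and "xor.data" is a placeholder training file.

```python
# Minimal FANN sketch via the fann2 Python binding (names recalled from its
# docs; verify against the version you install). "xor.data" is a placeholder
# training file in FANN's plain-text dataset format.
from fann2 import libfann

ann = libfann.neural_net()
ann.create_standard_array((2, 3, 1))                  # 2 inputs, 3 hidden, 1 output
ann.set_activation_function_output(libfann.SIGMOID_SYMMETRIC)

ann.train_on_file("xor.data", 1000, 100, 0.001)       # max epochs, report interval, target error
print(ann.run([1.0, -1.0]))                           # run the trained network on one input
ann.save("xor.net")
```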
  • 4
    CSSBox

    Pure Java HTML / CSS rendering engine

    CSSBox is an (X)HTML/CSS rendering engine written in pure Java. Its primary purpose is to provide complete information about the rendered page, suitable for further processing. However, it can also display the rendered document.
    Downloads: 23 This Week
    Last Update:
    See Project
  • 5
    SSHTOOLS

    Java SSH API

    This project now hosts the third generation of the Java SSH API, Maverick Synergy. This API builds on the Maverick Legacy commercial APIs and delivers a new API in a unified client/server framework, available to the community under the LGPL open source license. This update includes ed25519 support, support for the new OpenSSH private key file format, and stronger key exchange algorithms. The project continues to host both the original API and the legacy applications created around it; however, these are now considered deprecated and we do not recommend their use in any way.
    Downloads: 14 This Week
    Last Update:
    See Project
  • 6
    Bits UI

    The headless components for Svelte

    Bits UI is an open-source headless component library designed specifically for the Svelte ecosystem, providing developers with flexible and accessible primitives for building custom user interface components. Instead of shipping with predefined styles, the library offers unstyled components that focus on behavior and accessibility, allowing developers to fully control the appearance of their UI through their own CSS or design systems. This headless architecture makes Bits UI particularly useful for teams that need reusable UI logic while maintaining consistent branding and visual customization. The project builds on concepts inspired by libraries such as Radix UI and React Spectrum and integrates builder patterns influenced by Melt UI to deliver powerful component abstractions. Developers can use Bits UI to implement complex interface patterns such as dropdowns, modals, and calendars while preserving accessibility standards and predictable interaction behaviors.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 7
    Browserless

    The headless Chrome/Chromium driver on top of Puppeteer

    Browserless is an open-source headless browser automation library and service built on top of Puppeteer that simplifies the process of running and scaling Chromium-based browser tasks in production environments. It provides a high-level API for interacting with headless Chrome, allowing developers to perform operations such as generating PDFs, capturing screenshots, extracting text or HTML, and automating web navigation. The project is designed to act as a production-ready abstraction layer over Puppeteer, offering improved reliability, error handling, and scalability for real-world applications. Browserless includes built-in optimizations such as request blocking, automatic retries, and sensible defaults that improve performance when processing web pages. It can be used as a standalone library, a command-line tool, or a hosted API service that scales browser instances on demand.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 8
    MDCx

    Movie metadata scraper and organizer for media libraries and NFO

    MDCx is an open source media metadata scraping and organization tool designed to automate the process of collecting detailed information for movie files. It retrieves metadata from multiple online sources and applies it to local media collections, helping users maintain structured and well-organized libraries. MDCx can download information such as titles, cast data, artwork, and other metadata, then generate standardized NFO files compatible with media management systems. It also supports image processing tasks such as downloading and cropping artwork used by media centers. It includes several interfaces, allowing users to operate it through a graphical desktop application, a browser-based web interface, or command-line utilities depending on their workflow. Its architecture separates core scraping logic from the user interfaces, allowing the same metadata processing system to be reused across different modes.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 9
    Matomo

    Alternative to Google Analytics that gives you full control over data

    Google Analytics alternative that protects your data and your customers' privacy. Take back control with Matomo – a powerful web analytics platform that gives you 100% data ownership. You could lose your customers’ trust and risk damaging your reputation if people learn their data is used for Google’s “own purposes”. By choosing the ethical alternative, Matomo, you won’t make privacy sacrifices or compromise your site. You can even use Matomo without needing to ask for consent. With 100% data ownership you get the power to protect your users’ privacy: you know where your data is stored and what’s happening to it, without external influence. We’re serious about privacy at Matomo, and about keeping your business GDPR and CCPA compliant. The Google Analytics Importer plugin imports Google Analytics reports into a Matomo instance; when you run an import, your Google Analytics (GA) property is automatically created as a website in Matomo.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 10
    Python API for JMComic

    Python crawler and API for downloading JMComic albums and images

    JMComic-Crawler-Python is a Python library and crawler framework designed to programmatically access and download comic content from the JMComic platform. It provides a structured API that allows developers to retrieve albums, chapters, and images using simple Python code while handling the necessary network requests and data processing behind the scenes. It supports both web-based and mobile API interfaces, enabling flexible interaction with the platform depending on the available endpoints. Its architecture includes components for configuration management, download orchestration, and client communication, allowing users to automate the retrieval of manga chapters or entire albums. It includes command-line functionality and configuration files so users can customize download behavior, directory structures, and performance settings without modifying code. It also supports plugin-based extensions that allow additional processing.
    Downloads: 3 This Week
    Last Update:
    See Project
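A minimal usage sketch of the API described above; the top-level download_album helper and the YAML option file are recalled from the project's README and may differ between releases, and the album ID is a placeholder.

```python
# Minimal JMComic-Crawler-Python sketch. The helper names are recalled from
# the project's README and may differ between releases; "123456" is a
# placeholder album ID.
import jmcomic

# Download an entire album with default settings.
jmcomic.download_album("123456")

# Optionally, load a YAML option file to customize directories, concurrency,
# and plugins, then pass it to the download call.
option = jmcomic.create_option_by_file("option.yml")
jmcomic.download_album("123456", option)
```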
  • 11
    Scweet

    Scrape tweets, profiles, followers and following from Twitter/X

    Scweet is a Python-based Twitter/X scraping library and CLI designed to collect tweets, profile timelines, followers, following lists, and user profile data without requiring the official Twitter/X API or a developer account. Instead of depending on deprecated unauthenticated scraping methods, it works by using X’s web GraphQL API together with authenticated browser cookies, which gives it a more current and practical approach for data extraction. The project supports a broad set of collection patterns, including searches by keyword, hashtag, user, date range, engagement thresholds, language, and location, making it useful for research, monitoring, and data gathering workflows. It is built for both local use and higher-volume runs, with support for proxies, dedicated accounts, and multi-account cookie handling to improve reliability at scale. Scweet also includes asynchronous method variants, a command-line interface, automatic credential persistence in a local database, etc.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 12
    autocrawler

    Multiprocess Selenium crawler for downloading images by keywords

    AutoCrawler is a Python-based image crawling tool designed to automatically download large numbers of images from search engines using automated browser interaction. It uses Selenium and a Chrome browser driver to navigate image search pages and collect image sources based on keywords provided by the user. AutoCrawler supports multiprocess and multithreaded downloading, which allows it to retrieve images faster by running several tasks simultaneously. Users provide search terms through a simple keyword file, and the crawler organizes downloaded images into directories for each keyword. It can download either thumbnails or full resolution images and supports multiple image formats such as JPG, GIF, and PNG. It also includes configuration options such as headless mode, download limits, proxy usage, and thread count to customize crawling behavior.
    Downloads: 3 This Week
    Last Update:
    See Project
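AutoCrawler itself is driven from the command line, but the mechanism it automates (Selenium steering Chrome to collect image sources for a keyword) looks roughly like the following illustrative sketch; the search URL and CSS selector are placeholder assumptions, not the tool's own code.

```python
# Illustrative sketch of the Selenium pattern AutoCrawler automates: open an
# image-search results page for a keyword and collect candidate image URLs.
# The search URL and selector are placeholders, not AutoCrawler's own code.
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")        # run Chrome without a visible window
driver = webdriver.Chrome(options=options)

keyword = "sunflower"
driver.get(f"https://www.google.com/search?q={keyword}&tbm=isch")  # assumed search URL

urls = []
for img in driver.find_elements(By.CSS_SELECTOR, "img"):
    src = img.get_attribute("src")
    if src and src.startswith("http"):
        urls.append(src)

print(f"collected {len(urls)} candidate image URLs for {keyword!r}")
driver.quit()
```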
  • 13
    grab-site

    Web crawler for archiving and backing up sites into WARC archives

    grab-site is an open source web crawling tool designed to archive and back up websites by recursively downloading their content. It works by taking a starting URL and systematically following links across the site, capturing pages and resources and saving them into WARC archive files for long-term preservation. Internally, the crawler uses a fork of the wpull engine to fetch and process web pages efficiently during large-scale crawls. grab-site includes a built-in dashboard that displays real-time crawl activity, including which URLs are currently being processed and how many remain in the queue. Users can dynamically apply ignore patterns during an active crawl, allowing them to skip problematic or unnecessary URLs that could slow down or block the archiving process. grab-site also provides predefined ignore sets for common site structures such as forums and other complex web platforms. Additional mechanisms like duplicate page detection help avoid re-crawling identical content.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 14
    jd-autobuy

    Python tool that automates JD.com login and product purchase tasks

    jd-autobuy is an open source Python-based automation tool designed to simulate the purchasing process on the JD e-commerce platform. It uses web scraping and HTTP request techniques to log into an account, check product availability, and attempt to purchase specified items automatically. It supports login through methods such as QR code authentication, allowing users to sign in through the platform’s mobile application. Once authenticated, the script can retrieve product details including price, stock status, and item information. It can automatically add items to the shopping cart and prepare an order submission workflow for faster purchasing during high-demand sales or limited stock releases. Users can configure parameters such as the product ID, quantity, refresh interval, and purchase behavior using command-line options. jd-autobuy is intended primarily for learning purposes and demonstrates how automated scripts can interact with web services and online shopping systems.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 15
    mlscraper

    ML-based HTML scraper that learns extraction rules from examples

    mlscraper is a Python library designed to automatically extract structured data from HTML pages without requiring developers to manually write CSS selectors or XPath rules. Instead of defining extraction logic by hand, users provide a few examples of the data they want to retrieve from a webpage. It analyzes those examples within the HTML document and determines patterns or rules that can be used to extract the same type of information from similar pages. Once trained, the generated scraper can process new pages and return the extracted data in structured formats such as dictionaries or lists. This approach simplifies web scraping tasks by shifting the focus from rule-writing to example-based training. Internally, the project processes HTML documents, identifies relevant elements in the DOM, and builds extraction logic based on statistical or heuristic analysis of the training samples. The result is a developer-oriented tool that aims to automate common scraping workflows.
    Downloads: 3 This Week
    Last Update:
    See Project
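A sketch of the example-based workflow described above, following the project's documented quotes.toscrape.com demo; the module layout (mlscraper.html, mlscraper.samples, mlscraper.training) is recalled from the 1.x pre-release documentation and may differ in the version you install.

```python
# Example-based scraping sketch following mlscraper's documented workflow.
# Module and class names are recalled from the 1.x pre-release docs and may
# differ between versions; the pages are from the public quotes.toscrape.com
# demo site used in the project's examples.
import requests
from mlscraper.html import Page
from mlscraper.samples import Sample, TrainingSet
from mlscraper.training import train_scraper

# 1) Fetch a page and state the values we expect to extract from it.
resp = requests.get("http://quotes.toscrape.com/author/Albert-Einstein/")
sample = Sample(Page(resp.content), {"name": "Albert Einstein", "born": "March 14, 1879"})

training_set = TrainingSet()
training_set.add_sample(sample)

# 2) Train a scraper that infers extraction rules from the example ...
scraper = train_scraper(training_set)

# 3) ... and apply it to a structurally similar page.
other = requests.get("http://quotes.toscrape.com/author/J-K-Rowling/")
print(scraper.get(Page(other.content)))
```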
  • 16
    news-please

    Python tool for crawling and extracting structured data from news sites

    news-please is an open source news crawler and information extraction tool designed to collect and structure articles from online news websites. It provides an integrated pipeline that crawls news sites, retrieves article pages, and extracts structured information such as headlines, authors, publication dates, and article text. news-please can recursively follow internal links and read RSS feeds to gather both recent and archived articles from a news outlet when given only the root URL of a site. It combines several established technologies and libraries to perform web crawling and content extraction, enabling reliable processing across a wide range of news sources. Developers can use the software either as a standalone command line application or integrate it into their own Python applications through its library interface. Extracted article data can be stored in different formats and systems, including JSON files or database-backed storage solutions.
    Downloads: 3 This Week
    Last Update:
    See Project
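A minimal sketch of the library interface mentioned above, assuming the package is installed as newsplease; NewsPlease.from_url is the documented single-article entry point, and the URL below is a placeholder.

```python
# Minimal news-please library usage: fetch one article page and read the
# structured fields the extractor fills in. The URL is a placeholder.
from newsplease import NewsPlease

article = NewsPlease.from_url("https://www.example.com/some-news-article")
print(article.title)
print(article.authors)
print(article.date_publish)
print((article.maintext or "")[:200])  # first 200 characters of the extracted body text
```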
  • 17
    python-fxxk-spider

    Collection of 100+ Python web scraping projects and crawler examples

    python-fxxk-spider is a curated collection of Python web scraping and crawler projects gathered in a single repository for reference and learning. It aggregates many independent scraping examples that target a wide range of websites, online services, and public data sources. Instead of being a single crawler tool, it functions as a catalog of ready-made Python spider implementations that demonstrate different scraping techniques. python-fxxk-spider includes scrapers for social media, e-commerce platforms, job listings, music services, video platforms, and various content sites. Because websites frequently change their structure, some included projects may require adjustments before they can run successfully. It is designed as a long-term, continuously updated list of practical crawler implementations that developers can study, modify, and adapt to their own scraping tasks. It also highlights the importance of legal and responsible use of web scraping technologies.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 18
    spider_collection

    Collection of Python web scraping scripts for data extraction tasks

    spider_collection is a collection of Python web crawler scripts created primarily for experimentation, learning, and practical scraping tasks. spider_collection gathers multiple independent spiders designed to collect data from different platforms and services, demonstrating a variety of scraping techniques and workflows. These crawlers make use of common Python scraping tools such as requests, parsel, BeautifulSoup, and the Scrapy framework to extract structured information from web pages. Several scripts also incorporate multi-threading and proxy usage to improve scraping efficiency and help avoid common anti-scraping limitations. In addition to raw data collection, some spiders include basic data processing and analysis using tools such as pandas and simple visualization with matplotlib. It also contains examples of proxy pool integration and encapsulation to support more reliable crawling when working with sites that enforce request limits.
    Downloads: 3 This Week
    Last Update:
    See Project
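The scripts in the collection follow a common requests-plus-parser pattern; as a generic illustration (not a specific script from the repository), a minimal spider of that kind might look like the following, with the URL, headers, and selector as placeholder assumptions.

```python
# Generic illustration of the requests + BeautifulSoup pattern used by the
# spiders in this collection. The URL, User-Agent, and CSS selector are
# placeholders, not taken from any particular script in the repository.
import csv
import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "Mozilla/5.0"}          # many sites reject the default UA
resp = requests.get("https://example.com/list", headers=headers, timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = [
    {"title": a.get_text(strip=True), "url": a["href"]}
    for a in soup.select("a.item-link")          # placeholder selector
]

# Persist the scraped records, mirroring the simple CSV/pandas outputs used
# by several scripts in the collection.
with open("items.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "url"])
    writer.writeheader()
    writer.writerows(rows)
```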
  • 19
    sqliv

    Massive SQL injection vulnerability scanner for automated web testing

    SQLiv is a command-line security tool designed to identify SQL injection vulnerabilities in web applications through automated scanning techniques. Written primarily in Python, the project focuses on discovering potentially vulnerable web pages by analyzing URLs that contain database query parameters. It can perform large-scale scanning by using search engine queries known as SQL injection dorks to collect candidate websites and then test them for vulnerabilities. In addition to bulk scanning, SQLiv supports targeted analysis of specific domains or individual URLs, allowing security researchers to focus on particular web applications. When a domain is supplied, the scanner can crawl the site to gather URLs with parameters and evaluate them for potential SQL injection weaknesses. SQLiv also supports reverse domain scanning to locate other websites hosted on the same server, which can then be examined for similar vulnerabilities.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 20
    tumblr-crawler

    Python crawler to download photos and videos from Tumblr blogs

    tumblr-crawler is an open source Python-based utility designed to download media content from Tumblr blogs. It provides a script that automatically retrieves photos and videos from specified Tumblr sites and saves them locally for offline access. Users can specify one or multiple blogs to crawl by editing a configuration file or by passing parameters through the command line. Once executed, the script fetches media from the Tumblr API and stores the downloaded files in folders named after each blog. tumblr-crawler avoids re-downloading files that have already been saved, making repeated runs safe and useful for recovering missing media. It also supports optional proxy configuration, which can help when access to Tumblr content requires routing requests through a proxy server. With simple dependencies and straightforward configuration, the project offers a practical way to archive media content from Tumblr blogs.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 21
    whoami.filippo.io

    An SSH server that knows who you are. $ ssh whoami.filippo.io

    whoami.filippo.io powers a small demonstration SSH server that shows how much an SSH client reveals the moment it connects. When you run ssh whoami.filippo.io, the server inspects the public keys your client offers during authentication and, where it can, matches them against publicly available keys (such as those users publish on GitHub) to greet you by name; it also reports connection details such as your client version. Nothing is installed and no password is requested, so it doubles as an educational resource for understanding what the SSH handshake exposes and as a reminder to be deliberate about which keys your agent offers to untrusted hosts.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 22
    TYPO3

    This page has been abandoned since 2018; find TYPO3 at https://get.typo3.org

    Fetch the latest version from https://get.typo3.org. TYPO3 is an enterprise-class web CMS written in PHP/MySQL. It's designed to be extended with custom backend modules and frontend libraries for special functionality, and it has very powerful built-in image manipulation.
    Downloads: 14 This Week
    Last Update:
    See Project
  • 23
    Ada Class Library

    Ada Class Library - an object-oriented library for Ada.

    Text search and replace; scripting (small tool programs); CGI scripts; execution of external programs (incl. I/O redirection); garbage collection; extended Booch Components; CD recorder support.
    Downloads: 71 This Week
    Last Update:
    See Project
  • 24
    Tracks usage of TCP/IP network subnets and builds HTML files with graphs to display utilization. Charts are built per individual IP. Color-codes HTTP, TCP, UDP, ICMP, VPN, P2P, etc. Click the release notes icon on the latest release for more info.
    Downloads: 29 This Week
    Last Update:
    See Project
  • 25
    In this project we aim to develop Scheme libraries for building various web applications (especially servlets and XML-based web services). Our approach is to use JScheme (an open source implementation of Scheme in Java) as the core language.
    Downloads: 67 This Week
    Last Update:
    See Project