142 projects for "cloud" with 2 filters applied:

  • 1
    COLMAP

    Structure-from-Motion and Multi-View Stereo

    COLMAP is a general-purpose Structure-from-Motion (SfM) and Multi-View Stereo (MVS) pipeline with a graphical and command-line interface. It offers a wide range of features for the reconstruction of ordered and unordered image collections. The software is licensed under the new BSD license.
    Downloads: 62 This Week
    Last Update:
    See Project
  • 2
    Agent Starter Pack

    Ship AI Agents to Google Cloud in minutes, not months

    ...The framework supports multiple agent architectures, including ReAct, retrieval-augmented generation, and multi-agent systems, allowing flexibility across use cases. It integrates tightly with Google Cloud services like Vertex AI, Cloud Run, and Terraform-based infrastructure provisioning, enabling scalable and reliable deployments.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 3
    PentestGPT

    Automated Penetration Testing Agentic Framework Powered by LLMs

    ...It offers real-time feedback and live walkthroughs, allowing users to observe each step of the testing process as it unfolds. Built with a modular and extensible architecture, PentestGPT supports cloud and local LLMs, making it suitable for research, education, and authorized security testing.
    Downloads: 424 This Week
    Last Update:
    See Project
  • 4
    Generative AI

    Sample code and notebooks for Generative AI on Google Cloud

    ...It is licensed under Apache-2.0, open-sourced, and maintained by Google, meaning it is designed with enterprise-grade practices in mind. Overall, it serves as a practical entry point and reference library for building real-world generative AI systems on Google Cloud.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 5
    TuyaOpen

    Next-gen AI+IoT framework for T2, T3, T5AI, ESP32, and more

    ...The platform provides a cross-platform C and C++ software development kit that supports a wide range of hardware platforms including Tuya microcontrollers, ESP32 boards, Raspberry Pi devices, and other embedded systems. It offers a unified development environment where developers can build devices capable of communicating with IoT cloud services while integrating AI capabilities and intelligent automation features. The system includes built-in networking support for communication protocols such as Wi-Fi, Bluetooth, and Ethernet, allowing devices to connect securely to remote services and applications. TuyaOpen also integrates with Tuya’s broader cloud ecosystem, enabling developers to manage device authentication, firmware updates, device activation, and remote monitoring from centralized services.
    Downloads: 9 This Week
    Last Update:
    See Project
  • 6
    Osaurus

    AI edge infrastructure for macOS. Run local or cloud models

    ...The project provides a native runtime that allows applications to access large language models and AI tools directly on the user’s machine without relying entirely on cloud services. Osaurus supports running both local and remote models, enabling developers to build AI-powered applications that can operate offline or leverage external APIs when needed. The platform acts as an always-on runtime that coordinates AI tasks, tools, and workflows while enabling applications to communicate with models through standardized interfaces. ...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 7
    Clippy

    Clippy, now with some AI

    ...The project serves as both a playful homage to the early days of personal computing and a practical demonstration of local AI inference. Clippy integrates with the llama.cpp runtime to run models directly on a user’s computer without requiring cloud-based AI services. It supports models in the GGUF format, which allows it to run many publicly available open-source LLMs efficiently on consumer hardware. Users interact with the system through a simple animated assistant interface that can answer questions, generate text, and perform conversational tasks. The application includes one-click installation support for several popular models such as Meta’s Llama, Google’s Gemma, and other open models.
    Downloads: 34 This Week
    Last Update:
    See Project
  • 8
    NativeMind Extension

    Your fully private, open-source, on-device AI assistant

    ...The extension is aimed at everyday browser workflows, offering features like multi-tab context awareness, webpage summarization, document understanding, contextual toolbars, and AI-assisted rewriting directly inside the browsing experience. Because it runs locally after setup, it is also positioned as an always-available assistant that avoids API quotas, network latency, and service outages common in cloud-based AI tools.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 9
    HolmesGPT

    CNCF Sandbox Project

    ...Rather than requiring engineers to manually correlate large volumes of monitoring data, HolmesGPT automatically synthesizes evidence and presents explanations in natural language. The project is developed by Robusta and has been accepted as a Cloud Native Computing Foundation Sandbox project, highlighting its relevance to the cloud-native ecosystem. It is designed to operate as an automated troubleshooting assistant that can analyze incidents continuously and support on-call engineers during outages.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 10
    ClawHost

    Deploy OpenClaw with one click

    ClawHost is an open-source, self-hostable cloud hosting platform designed to simplify the deployment of OpenClaw onto a dedicated VPS in minutes, giving users full control over their AI infrastructure without relying on shared or managed services. It automates server provisioning, DNS configuration, SSL certificates, and firewall setup, so developers can focus on running their AI workloads rather than configuring infrastructure manually.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 11
    openclaw-kapso-whatsapp

    Give your OpenClaw AI agent a WhatsApp number

    openclaw-kapso-whatsapp is a plugin repository designed to extend the OpenClaw AI agent by giving it a dedicated WhatsApp phone number using the official Meta Cloud API via Kapso, enabling direct interaction through one of the most widely used messaging platforms. This integration allows the autonomous AI assistant to send and receive messages on WhatsApp, turning the agent into a real-world task performer accessible through text conversations. The plugin is built in Go and handles communication entirely through cloud APIs, avoiding the risk of bans that come with unofficial or reverse-engineered interfaces. ...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 12
    MemPalace

    The highest-scoring AI memory system ever benchmarked

    ...The system is inspired by the classical “memory palace” mnemonic technique, organizing information into hierarchical spaces such as wings, rooms, and halls, which allows AI agents to navigate past knowledge in a more contextual and intuitive way. It operates fully locally using tools like ChromaDB, meaning it requires no API keys, cloud services, or external dependencies once installed. MemPalace emphasizes fidelity over compression, preserving full conversational history to maintain reasoning, nuance, and decision-making context that is typically lost in other systems.
    Downloads: 19 This Week
    Last Update:
    See Project
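    The "memory palace" hierarchy described above can be pictured as a simple nested structure. The layout and names below are purely illustrative assumptions, not MemPalace's actual schema:

    ```javascript
    // Illustrative nested "memory palace": wings contain rooms, and rooms
    // hold memories. Hypothetical layout, not MemPalace's real data model.
    const palace = {
      wings: {
        projects: {
          rooms: {
            "api-redesign": ["chose REST over gRPC", "deadline moved to Q3"],
          },
        },
      },
    };

    // Navigate the hierarchy by path (wing -> room); unknown paths
    // resolve to an empty list rather than throwing.
    function recall(palace, wing, room) {
      return palace.wings[wing]?.rooms[room] ?? [];
    }
    ```

    The point of the hierarchy is that an agent can narrow its search by descending the structure instead of scanning a flat log of every past conversation.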
  • 13
    Ollama JavaScript Library

    Ollama JavaScript library

    ...Streaming responses are built in, returning an async generator so applications can render output progressively instead of waiting for a full response. It also supports cloud-hosted usage by pointing the client at Ollama’s cloud endpoint with an API key, while preserving a familiar local-first workflow for developers who want to move between local and remote execution.
    Downloads: 1 This Week
    Last Update:
    See Project
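    The progressive-rendering pattern this entry describes can be sketched with a plain async generator. The identifiers below are hypothetical stand-ins, not the library's actual API:

    ```javascript
    // Minimal sketch of streaming output via an async generator.
    // fakeStream is a hypothetical stand-in for a streaming model client.
    async function* fakeStream(text) {
      // Yield the response one word at a time, as a streaming client would.
      for (const word of text.split(" ")) {
        yield word + " ";
      }
    }

    // Consume the stream with for await...of, handling each chunk as it
    // arrives instead of waiting for the full response.
    async function renderProgressively(stream) {
      let output = "";
      for await (const chunk of stream) {
        output += chunk; // a real UI would render this chunk immediately
      }
      return output.trim();
    }

    renderProgressively(fakeStream("hello from a streamed response"))
      .then((full) => console.log(full));
    ```

    Because the consumer is just a `for await...of` loop, the same rendering code works whether the chunks come from a local runtime or a remote endpoint.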
  • 14
    Jina-Serve

    Build multimodal AI applications with cloud-native stack

    ...The framework allows developers to create microservices that expose machine learning models through APIs that communicate using protocols such as HTTP, gRPC, and WebSockets. It is built with a cloud-native architecture that supports deployment on local machines, containerized environments, or large orchestration platforms such as Kubernetes. Jina Serve focuses on making it easier to turn machine learning models into production-ready services without forcing developers to manage complex infrastructure manually. The framework supports many major machine learning libraries and data types, making it suitable for multimodal AI systems that process text, images, audio, and other inputs.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Pathway AI Pipelines

    Ready-to-run cloud templates for RAG

    ...The templates include built-in indexing, vector search, hybrid search, and caching capabilities that remove the need to assemble separate infrastructure components. Developers can run the applications locally or deploy them to cloud platforms using Docker with minimal setup. Overall, llm-app functions as a practical accelerator for teams building real-time, production-ready AI knowledge systems.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    Cube Studio

    Open-source, cloud-native, one-stop machine learning platform

    Cube Studio is an open-source, cloud-native end-to-end machine learning and AI platform designed to support the full lifecycle of AI development — from data preparation and interactive notebook coding to distributed training, model tuning, and deployment in production-ready environments. It provides a unified interface where teams can manage data sources, track datasets, and build pipelines using drag-and-drop workflow orchestration, making it accessible for both engineers and data scientists working at scale. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    DeployStack

    Centralized credential vault, governance, and token optimization

    ...The project emphasizes repeatability and clarity, enabling teams to follow best practices for scalability, security, and operational reliability without hand-crafting deployment scripts for every new service. It supports integration with popular cloud providers and infrastructure tooling, streamlining workflows that span local development through staging and production environments.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    MAI-UI

    Real-World Centric Foundation GUI Agents

    ...Unlike traditional UI frameworks, MAI-UI emphasizes realistic deployment by supporting agent–user interaction (clarifying ambiguous instructions), integration with external tool APIs using MCP calls, and a device–cloud collaboration mechanism that dynamically routes computation to on-device or cloud models based on task state and privacy constraints.
    Downloads: 1 This Week
    Last Update:
    See Project
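    The device–cloud routing idea mentioned above can be sketched as a simple policy function. The function name, fields, and thresholds here are assumptions for illustration, not MAI-UI's actual mechanism:

    ```javascript
    // Illustrative routing policy: decide whether a task runs on an
    // on-device model or a larger cloud model. All names and criteria
    // are hypothetical, not MAI-UI's real interface.
    function routeModel(task) {
      // Privacy-sensitive tasks never leave the device.
      if (task.containsPrivateData) return "on-device";
      // Longer, more complex tasks go to the larger cloud model.
      return task.estimatedSteps > 5 ? "cloud" : "on-device";
    }
    ```

    The key property is that the privacy check dominates: a task flagged as private stays on-device regardless of its complexity.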
  • 19
    Scriberr

    Self-hosted AI audio transcription

    Scriberr is a self-hosted AI-powered transcription platform designed to convert audio and video into highly accurate text while prioritizing privacy and local processing. Unlike cloud-based transcription services, Scriberr runs entirely on the user’s machine, ensuring that sensitive recordings are never sent to third-party servers and remain fully under user control. It leverages modern speech recognition models such as Whisper and other advanced architectures to deliver precise transcripts with word-level timing and speaker identification. ...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 20
    GPT4All

    Run Local LLMs on Any Device. Open-source

    GPT4All is an open-source project that allows users to run large language models (LLMs) locally on their desktops or laptops, eliminating the need for API calls or GPUs. The software provides a simple, user-friendly application that can be downloaded and run on various platforms, including Windows, macOS, and Ubuntu, without requiring specialized hardware. It integrates with the llama.cpp implementation and supports multiple LLMs, allowing users to interact with AI models privately. This...
    Downloads: 127 This Week
    Last Update:
    See Project
  • 21
    SmythOS

    Cloud-native runtime for agentic AI

    ...It provides a foundational infrastructure layer that functions similarly to an operating system for agentic AI systems, managing resources such as language models, storage, vector databases, and caching through a unified interface. Developers can use the runtime to create, deploy, and orchestrate intelligent agents across local machines, cloud environments, or hybrid infrastructures without rewriting their application logic. The platform includes a software development kit and command-line interface that allow developers to define agent workflows, manage execution environments, and automate deployment processes. SRE is designed with modular architecture so that connectors to external services or infrastructure providers can be swapped or extended without changing the agent’s core logic.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    Open Responses

    Specification for multi-provider, interoperable LLM interfaces

    ...It enables you to run a local or private server that speaks the standard Responses API, so tools, applications, and agents built against that API can operate without contacting OpenAI’s cloud and can instead route calls to any large language model provider you choose, such as Claude, Qwen, Ollama, or others. This makes it a powerful option for teams or individuals who want full control over their AI infrastructure, prioritize privacy, or need to standardize inference calls across multiple backends without rewriting their code.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 23
    Multica

    The open-source managed agents platform

    ...The system integrates with multiple AI coding tools and provides a unified interface for managing tasks, compute environments, and agent execution pipelines. It includes both a web interface and a CLI that connects local or cloud-based runtimes to the platform, enabling flexible deployment and scaling. Multica emphasizes collaboration between humans and AI by allowing agents to operate alongside developers in shared workspaces. It also supports reusable skill accumulation, meaning that solutions generated by agents can be reused across projects to improve efficiency over time.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 24
    Farfalle

    AI search engine - self-host with local or cloud LLMs

    ...The project integrates large language models with multiple search APIs so that the system can gather information from external sources and synthesize responses into concise answers. It can run either with local language models or with cloud-based providers, allowing developers to deploy it privately or integrate with hosted AI services. The architecture separates the frontend and backend, using modern web technologies such as Next.js and FastAPI to deliver an interactive interface and scalable server logic. Farfalle also includes an agent-based search workflow that plans queries and executes multiple search steps to produce more accurate results than traditional keyword searches. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    MimiClaw

    Run OpenClaw on a $5 chip

    ...Even though it’s running on minimal hardware, MimiClaw maintains local memory that persists across power cycles, enabling context continuity over time without relying on cloud services. Its architecture emphasizes privacy, low power, and portability, ideal for personal or hobbyist use cases where privacy and local control matter.
    Downloads: 0 This Week
    Last Update:
    See Project