Showing 398 open source projects for "cloud"

  • 1
Dagger

    Containerized automation engine for programmable CI/CD workflows

    ...It enables developers to define software delivery workflows using code instead of complex shell scripts or configuration files. Dagger executes tasks inside containers, ensuring that automation runs in identical environments across local machines, CI servers, or cloud infrastructure. Dagger provides a core execution engine and system API that orchestrates containers, filesystems, secrets, repositories, and other resources needed during development pipelines. Developers can write pipelines using SDKs available for multiple programming languages, enabling integration with existing development stacks and tools. ...
    Downloads: 0 This Week
    Last Update:
    See Project
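Dagger's "pipelines as code instead of shell scripts" idea can be illustrated with a toy DAG executor. This is a hypothetical Python sketch (`run_pipeline`, `steps`, and the step names are invented for illustration); real Dagger runs each step inside a container through one of its language SDKs.

```python
def run_pipeline(steps):
    """Run named steps in dependency order (toy DAG executor; in Dagger each
    step would execute inside a container for reproducibility)."""
    done, order = set(), []

    def visit(name):
        if name in done:
            return
        for dep in steps[name]["needs"]:   # satisfy dependencies first
            visit(dep)
        steps[name]["run"]()
        done.add(name)
        order.append(name)

    for name in steps:
        visit(name)
    return order

log = []
steps = {
    "test":  {"needs": ["build"], "run": lambda: log.append("pytest")},
    "build": {"needs": [],        "run": lambda: log.append("pip install")},
}
print(run_pipeline(steps))  # ['build', 'test']
```

Because the workflow is ordinary code, it runs identically on a laptop or a CI server, which is the property the description emphasizes.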
  • 2
Text-to-image Playground

A playground to generate images from any text prompt using Stable Diffusion

    ...The system combines a backend machine learning service with a browser-based frontend interface that lets users experiment interactively with prompt engineering and generative AI. Developers can run the application locally or deploy it using cloud infrastructure, making it accessible both for experimentation and educational use. The platform demonstrates how large generative models can be integrated into user-friendly tools for creative exploration and rapid prototyping. It also serves as a reference architecture for building full-stack generative AI applications that connect model inference pipelines with web interfaces.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
AutoTrain Advanced

    Faster and easier training and deployments

    ...The system integrates closely with the Hugging Face ecosystem and allows developers to train models using datasets hosted on the Hugging Face Hub. AutoTrain Advanced can run locally or in cloud environments, making it adaptable to different computational setups. By automating tasks such as model configuration, hyperparameter selection, and training pipelines, the project significantly reduces the technical barrier to building AI systems.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
PicoLM

    Run a 1-billion parameter LLM on a $10 board with 256MB RAM

    ...The runtime is capable of running language models with billions of parameters on devices with only a few hundred megabytes of memory, which is significantly lower than typical LLM infrastructure requirements. This makes PicoLM particularly suitable for edge computing, offline AI applications, and embedded AI devices that cannot rely on cloud resources.
    Downloads: 0 This Week
    Last Update:
    See Project
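The tagline above comes down to arithmetic: a billion weights at 16-bit precision need roughly 2 GB, so a 256 MB board forces aggressive quantization and weight streaming. A back-of-envelope sketch (the function and quantization levels are illustrative, not PicoLM's code):

```python
def model_bytes(n_params, bits_per_weight):
    """Approximate weight storage for a model at a given quantization level."""
    return n_params * bits_per_weight // 8

N = 1_000_000_000            # 1B parameters
fp16 = model_bytes(N, 16)    # ~2 GB: far beyond a 256 MB board
q2 = model_bytes(N, 2)       # ~250 MB: close to the board's total RAM,
                             # so weights must be streamed, not fully resident

print(fp16 // 2**20, "MiB at fp16")
print(q2 // 2**20, "MiB at 2-bit")
```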
  • 5
MaxText

    A simple, performant and scalable Jax LLM

    ...The project acts as both a reference implementation and a practical training library that demonstrates best practices for building and scaling transformer-based language models on modern accelerator hardware. It is optimized to run efficiently on Google Cloud TPUs and GPUs, enabling researchers and engineers to train models ranging from small experiments to extremely large distributed workloads. The framework focuses on simplicity while still supporting advanced techniques such as model sharding, distributed computation, and high-throughput training pipelines. MaxText includes ready-to-use configurations and reproducible training examples that help developers understand how to deploy large-scale AI workloads with modern machine learning infrastructure.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
ClaraVerse

ClaraVerse is an open-source, privacy-focused ecosystem to replace ChatGPT

    ClaraVerse is an open-source private AI workspace designed to give users a unified environment for interacting with large language models, building automations, and managing AI-driven tasks in a self-hosted environment. The platform combines chat interfaces, workflow automation, and long-running task management into a single application that can connect to both local and cloud-based AI models. Users can integrate models from multiple providers such as OpenAI, Anthropic, Google, or locally hosted systems like Ollama and LM Studio, enabling flexibility in how AI capabilities are deployed and managed. The system includes a visual workflow builder that allows users to create automation pipelines where AI tools interact with external services, APIs, or datasets. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
RunAnywhere

Production-ready toolkit to run AI locally

    RunAnywhere SDKs are a set of cross-platform development tools that enable applications to run artificial intelligence models directly on user devices instead of relying on cloud infrastructure. The toolkit allows developers to integrate language models, speech recognition, and voice synthesis capabilities into mobile or desktop applications while keeping all computation local. By running models entirely on device, the platform eliminates network latency and protects user data because information does not leave the device. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
MetaMCP

MCP Aggregator, Orchestrator, Middleware, Gateway in one Docker container

    ...It ships Dockerized for quick deployment and emphasizes dynamic aggregation so teams can register or remove servers without restarting clients. The org maintains related repos and a GUI app for cloud and self-hosted setups, with a note that the cloud demo is outdated while the open-source v2 evolves. Overall, MetaMCP aims to simplify multi-server MCP operations for individuals and organizations.
    Downloads: 0 This Week
    Last Update:
    See Project
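The "dynamic aggregation" claim above — registering or removing servers without restarting clients — can be sketched as a registry that namespaces each server's tools under one endpoint. All names here (`MCPAggregator`, `register`, `call`) are hypothetical; this is not MetaMCP's actual API.

```python
class MCPAggregator:
    """Toy aggregator: exposes tools from many servers under one namespace.
    Hypothetical sketch, not MetaMCP's real interface."""

    def __init__(self):
        self.servers = {}  # server name -> {tool name: callable}

    def register(self, name, tools):
        self.servers[name] = tools      # dynamic: no client restart needed

    def remove(self, name):
        self.servers.pop(name, None)

    def list_tools(self):
        return [f"{s}/{t}" for s, tools in self.servers.items() for t in tools]

    def call(self, qualified, *args):
        server, tool = qualified.split("/", 1)
        return self.servers[server][tool](*args)

agg = MCPAggregator()
agg.register("files", {"read": lambda p: f"contents of {p}"})
agg.register("search", {"query": lambda q: [q.upper()]})
print(agg.list_tools())              # ['files/read', 'search/query']
print(agg.call("files/read", "a.txt"))
```

Clients see one stable tool list while servers come and go behind the gateway.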
  • 9
zclaw

Your personal AI assistant, all in 888 KiB

    ...The architecture is optimized for efficiency, allowing the full assistant stack to run in under one megabyte of space. By targeting low-power hardware, zclaw explores the future of edge AI assistants that operate independently of large cloud systems. Overall, the project showcases how lightweight autonomous assistants can be embedded directly into IoT devices.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 10
Moltis

    A Rust-native claw you can trust

    ...It compiles the entire assistant stack, including the web interface, model routing, memory, and tools, into a single self-contained binary with no external runtime dependencies. The system supports multiple large language model providers alongside local models, enabling users to maintain privacy while still accessing cloud capabilities when needed. Moltis emphasizes security through sandboxed execution environments, where commands and browsing tasks run in isolated containers and require explicit approval. The platform also includes long-term memory powered by hybrid vector and full-text search, allowing the assistant to retain context across sessions. ...
    Downloads: 19 This Week
    Last Update:
    See Project
  • 11
Letta

    Letta (formerly MemGPT) is a framework for creating LLM services

    Letta is an AI-powered task automation framework designed to handle workflow automation, natural language commands, and AI-driven decision-making.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 12
Basic Memory

    Persistent AI memory using local Markdown knowledge graphs

    ...Basic Memory creates a semantic knowledge graph by linking related ideas, making it easier to retrieve, expand, and connect information over time. With a local-first design, your data stays private and portable, while optional cloud sync enables cross-device access. It combines simplicity with powerful indexing and search, giving you a flexible way to build long-term memory for projects, research, and workflows.
    Downloads: 18 This Week
    Last Update:
    See Project
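The "semantic knowledge graph from linked notes" idea above can be sketched by following `[[wikilinks]]` between Markdown files to build an adjacency map. This is an illustration of the local-first concept, not Basic Memory's actual storage format.

```python
import re

def build_graph(notes):
    """Build a knowledge graph from Markdown notes by following [[wikilinks]].
    Illustrative sketch; real tools also track backlinks and metadata."""
    link = re.compile(r"\[\[([^\]]+)\]\]")
    return {title: link.findall(body) for title, body in notes.items()}

notes = {
    "rust": "Systems language; see [[memory safety]] and [[tooling]].",
    "memory safety": "Ownership model, related to [[rust]].",
}
graph = build_graph(notes)
print(graph["rust"])  # ['memory safety', 'tooling']
```

Because everything is plain Markdown on disk, the graph stays portable and private.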
  • 13
QMD

Mini CLI search engine for your docs, knowledge bases, etc.

    QMD is a powerful and lightweight command-line tool that acts as an on-device search engine for your personal knowledge base, allowing you to index and search files like Markdown notes, meeting transcripts, technical documentation, and other text collections without depending on cloud services. Designed to keep all search activity local, it combines classic full-text search techniques with modern semantic features such as vector similarity and hybrid ranking so that queries return not just literal matches but conceptually relevant results. Users can organize content into named collections, embed documents for semantic retrieval, and then perform keyword searches, semantic searches, or hybrid natural-language queries to quickly surface the most useful information across all indexed sources. ...
    Downloads: 10 This Week
    Last Update:
    See Project
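The hybrid ranking described above — blending literal keyword matches with vector similarity — can be sketched as a weighted combination of two scores. The term-overlap function below is a crude stand-in for BM25, and none of these names come from QMD itself.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    """Fraction of query terms present in the document (stand-in for BM25)."""
    terms = query.lower().split()
    words = Counter(doc.lower().split())
    return sum(1 for t in terms if t in words) / len(terms)

def hybrid_rank(query, q_vec, docs, alpha=0.5):
    """Blend keyword and vector scores; docs = [(text, embedding), ...]."""
    scored = [(alpha * keyword_score(query, text)
               + (1 - alpha) * cosine(q_vec, vec), text)
              for text, vec in docs]
    return [text for _, text in sorted(scored, reverse=True)]

docs = [("notes on rust ownership", [1.0, 0.0]),
        ("meeting transcript", [0.0, 1.0])]
print(hybrid_rank("rust ownership", [0.9, 0.1], docs)[0])
# → notes on rust ownership
```

The `alpha` weight trades off literal matching against conceptual relevance, which is why hybrid queries can surface results that neither method finds alone.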
  • 14
Unsloth Studio

    Unified web UI for training and running open models locally

    Unsloth Studio is a web-based interface for running and training AI models locally with a unified and user-friendly experience. It allows users to work with a wide range of models for text, audio, vision, embeddings, and more without relying heavily on cloud infrastructure. Built on top of the Unsloth framework, it focuses on high-performance training with reduced VRAM usage and faster speeds compared to traditional methods. The platform supports fine-tuning, pretraining, and reinforcement learning workflows, making it suitable for both experimentation and production use. Users can interact with models through chat, upload files like PDFs or images, and execute code within the environment to improve outputs. ...
    Downloads: 18 This Week
    Last Update:
    See Project
  • 15
lightning AI

    The most intuitive, flexible, way for researchers to build models

Build in days not months with the most intuitive, flexible framework for building models and Lightning Apps (i.e., ML workflow templates) that "glue" together your favorite ML lifecycle tools. Models are “easy”; the “glue” work is hard. Lightning Apps are community-built templates that stitch together your favorite ML lifecycle tools into cohesive ML workflows that can run on your laptop or any...
    Downloads: 17 This Week
    Last Update:
    See Project
  • 16
Open Notebook

    An Open Source implementation of Notebook LM with more flexibility

    Open Notebook is an open-source, privacy-focused alternative to Google’s Notebook LM that gives users full control over their research and AI workflows. Designed to be self-hosted, it ensures complete data sovereignty by keeping your content local or within your own infrastructure. The platform supports 16+ AI providers—including OpenAI, Anthropic, Ollama, Google, and LM Studio—allowing flexible model choice and cost optimization. Open Notebook enables users to organize and analyze...
    Downloads: 27 This Week
    Last Update:
    See Project
  • 17
PasteGuard

    Masks sensitive data and secrets before they reach AI

    ...PasteGuard supports two primary modes: mask mode, which anonymizes data and still uses external APIs; and route mode, which forwards sensitive requests to a local LLM inference engine while sending the rest to the cloud. It can be self-hosted via Docker, works with a wide range of SDKs and tools, and includes a browser extension for automatic protection in everyday AI chats.
    Downloads: 9 This Week
    Last Update:
    See Project
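The two modes described above can be sketched with pattern-based detection: mask mode rewrites sensitive spans before a prompt leaves the machine, and route mode uses the same detection to decide between a local model and a cloud API. The patterns and function names below are illustrative only; real detectors cover far more secret formats.

```python
import re

# Illustrative patterns only; not PasteGuard's actual rule set.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"sk-[A-Za-z0-9]{16,}"),
}

def mask(text):
    """Mask mode: anonymize sensitive spans, then the prompt may use external APIs."""
    found = False
    for label, pat in PATTERNS.items():
        text, n = pat.subn(f"<{label}>", text)
        found = found or n > 0
    return text, found

def route(text):
    """Route mode: sensitive prompts go to a local model, the rest to the cloud."""
    _, sensitive = mask(text)
    return "local" if sensitive else "cloud"

masked, _ = mask("Contact alice@example.com with key sk-abcdef1234567890")
print(masked)  # Contact <EMAIL> with key <API_KEY>
print(route("What is the capital of France?"))  # cloud
```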
  • 18
CocoIndex

    ETL framework to index data for AI, such as RAG

    CocoIndex is an open-source framework designed for building powerful, local-first semantic search systems. It lets users index and retrieve content based on meaning rather than keywords, making it ideal for modern AI-based search applications. CocoIndex leverages vector embeddings and integrates with various models and frameworks, including OpenAI and Hugging Face, to provide high-quality semantic understanding. It’s built for transparency, ease of use, and local control over your search...
    Downloads: 9 This Week
    Last Update:
    See Project
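The ETL angle in the description — indexing data for retrieval rather than serving queries — typically means re-embedding a document only when its content changes, since embedding is the expensive step. A minimal sketch of that incremental idea (`SemanticIndex` and `upsert` are hypothetical names, not CocoIndex's API):

```python
import hashlib

class SemanticIndex:
    """Minimal incremental index: re-embed a document only when its content
    changes. A sketch of the ETL idea, not CocoIndex's actual interface."""

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn
        self.rows = {}    # doc_id -> (content hash, embedding)
        self.embeds = 0   # how many times the (expensive) embedder ran

    def upsert(self, doc_id, text):
        h = hashlib.sha256(text.encode()).hexdigest()
        if self.rows.get(doc_id, (None,))[0] != h:
            self.rows[doc_id] = (h, self.embed_fn(text))
            self.embeds += 1

idx = SemanticIndex(lambda t: [len(t)])  # toy embedder; real ones are ML models
idx.upsert("a", "hello world")
idx.upsert("a", "hello world")   # unchanged: embedding skipped
idx.upsert("a", "hello there!")  # changed: re-embedded
print(idx.embeds)  # 2
```

Content hashing is what lets an indexing pipeline re-run cheaply over a mostly unchanged corpus.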
  • 19
Speech Note

    Speech Note Linux app. Note taking, reading and translating

    ...It combines speech-to-text, text-to-speech, and machine translation in a single interface, allowing users to dictate notes, listen back to them, and translate them without ever sending data to the cloud. All processing is done locally, which means audio, text, and translations never leave the device, emphasizing strong privacy guarantees. The application supports multiple STT engines such as Coqui STT (DeepSpeech fork), Vosk, whisper.cpp, Faster Whisper, and april-asr, giving users flexibility in accuracy, speed, and hardware requirements. ...
    Downloads: 16 This Week
    Last Update:
    See Project
  • 20
GPUStack

    Performance-optimized AI inference on your GPUs

    ...The platform supports GPUs from a wide range of vendors and can run on laptops, workstations, and servers across operating systems such as macOS, Windows, and Linux. It also enables developers to deploy models from common repositories like Hugging Face and access them through APIs similar to cloud-based AI services.
    Downloads: 12 This Week
    Last Update:
    See Project
  • 21
MemMachine

    Universal memory layer for AI Agents

    MemMachine is a universal memory layer designed for AI agents that provides persistent, rich memory storage and retrieval capabilities so autonomous agent systems can recall context, personal preferences, and long-term interaction history across sessions, models, and use cases. Unlike ephemeral LLM prompt state, MemMachine supports distinct memory types—short-term conversational context, long-term persistent knowledge, and profile memory for personalized facts—persisted in optimized stores...
    Downloads: 6 This Week
    Last Update:
    See Project
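The three memory types named above — short-term conversational context, long-term persistent knowledge, and profile memory — can be sketched as separate stores with different retention rules. All class and method names here are invented for illustration; this is not MemMachine's API.

```python
from collections import deque

class AgentMemory:
    """Toy layered memory: rolling short-term window, append-only long-term
    store, and keyed profile facts. Hypothetical sketch, not MemMachine."""

    def __init__(self, window=3):
        self.short_term = deque(maxlen=window)  # rolling conversational context
        self.long_term = []                     # persists across sessions
        self.profile = {}                       # stable per-user facts

    def observe(self, utterance):
        self.short_term.append(utterance)       # old items fall off the window
        self.long_term.append(utterance)        # nothing falls off here

    def remember_fact(self, key, value):
        self.profile[key] = value

mem = AgentMemory(window=2)
for msg in ["hi", "I use vim", "schedule a meeting"]:
    mem.observe(msg)
mem.remember_fact("editor", "vim")
print(list(mem.short_term))   # ['I use vim', 'schedule a meeting']
print(len(mem.long_term))     # 3
print(mem.profile["editor"])  # vim
```

Separating the stores is what lets an agent keep a small prompt window while still recalling preferences from past sessions.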
  • 22
DeepCamera

    Open-Source AI Camera. Empower any camera/CCTV

...It provides open-source facial recognition-based intrusion detection, fall detection, and parking lot monitoring, with the inference engine running on your local device. SharpAI-hub is the cloud hosting service for AI applications that helps you deploy AI applications with your CCTV camera on your edge device in minutes. SharpAI yolov7_reid is an open-source Python application that leverages AI technologies to detect intruders with traditional surveillance cameras. It leverages Yolov7 as a person detector, FastReID for person feature extraction, Milvus as the local vector database for self-supervised learning to identify unseen persons, and Label Studio to host images locally for further usage such as labeling data and training your own classifier. ...
    Downloads: 12 This Week
    Last Update:
    See Project
  • 23
MLRun

    Machine Learning automation and tracking

    MLRun is an open MLOps framework for quickly building and managing continuous ML and generative AI applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications, significantly reducing engineering efforts, time to production, and computation resources. MLRun breaks the silos between data, ML, software, and DevOps/MLOps teams, enabling collaboration and fast continuous...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 24
Edgee

    AI gateway with token compression for Claude Code, Codex, and more

    Edgee is an edge-native execution platform designed to run AI-driven logic and data processing directly at the network edge, reducing latency and improving responsiveness for modern applications. It enables developers to deploy functions and workflows closer to users, allowing real-time processing without relying heavily on centralized cloud infrastructure. The platform is built to support event-driven architectures, where actions are triggered by incoming requests, user behavior, or external signals. It integrates AI capabilities into edge environments, making it possible to perform inference, personalization, and decision-making at the point of interaction. Edgee is optimized for performance and scalability, leveraging distributed execution to handle high volumes of requests efficiently. ...
    Downloads: 8 This Week
    Last Update:
    See Project
  • 25
llama.vscode

    VS Code extension for LLM-assisted code/text completion

    ...The extension is designed to be lightweight and efficient, enabling developers to use AI tools even on consumer-grade hardware. It integrates with the llama.cpp runtime to run language models locally, eliminating the need to rely entirely on external APIs or cloud providers. The extension supports common AI development features such as code completion, conversational chat assistance, and AI-assisted code editing directly within the IDE. Developers can select and manage models through a configuration interface that automatically downloads and runs the required models locally. The extension also supports agent-style coding workflows, where AI tools can perform more complex tasks such as analyzing project context or editing multiple files.
    Downloads: 11 This Week
    Last Update:
    See Project