40 projects for "cloud" with 2 filters applied:

  • 1
    Osaurus

    AI edge infrastructure for macOS. Run local or cloud models

    ...The project provides a native runtime that allows applications to access large language models and AI tools directly on the user’s machine without relying entirely on cloud services. Osaurus supports running both local and remote models, enabling developers to build AI-powered applications that can operate offline or leverage external APIs when needed. The platform acts as an always-on runtime that coordinates AI tasks, tools, and workflows while enabling applications to communicate with models through standardized interfaces. ...
    Downloads: 7 This Week
  • 2
    Clippy

    Clippy, now with some AI

    ...The project serves as both a playful homage to the early days of personal computing and a practical demonstration of local AI inference. Clippy integrates with the llama.cpp runtime to run models directly on a user’s computer without requiring cloud-based AI services. It supports models in the GGUF format, which allows it to run many publicly available open-source LLMs efficiently on consumer hardware. Users interact with the system through a simple animated assistant interface that can answer questions, generate text, and perform conversational tasks. The application includes one-click installation support for several popular models such as Meta’s Llama, Google’s Gemma, and other open models.
    Downloads: 40 This Week
  • 3
    TuyaOpen

    Next-gen AI+IoT framework for T2, T3, T5AI, ESP32, and more

    ...The platform provides a cross-platform C and C++ software development kit that supports a wide range of hardware platforms including Tuya microcontrollers, ESP32 boards, Raspberry Pi devices, and other embedded systems. It offers a unified development environment where developers can build devices capable of communicating with IoT cloud services while integrating AI capabilities and intelligent automation features. The system includes built-in networking support for communication protocols such as Wi-Fi, Bluetooth, and Ethernet, allowing devices to connect securely to remote services and applications. TuyaOpen also integrates with Tuya’s broader cloud ecosystem, enabling developers to manage device authentication, firmware updates, device activation, and remote monitoring from centralized services.
    Downloads: 4 This Week
  • 4
    Casibase

    Open-source enterprise-level AI knowledge base and MCP

    Casibase is an open-source AI cloud platform designed to function as an enterprise knowledge base, container management system, and collaboration environment for AI-driven applications. The project combines knowledge management, messaging, and forum features with large language model integration to create an interactive platform for storing and querying domain-specific knowledge.
    Downloads: 6 This Week
  • 5
    HolmesGPT

    CNCF Sandbox Project

    ...Rather than requiring engineers to manually correlate large volumes of monitoring data, HolmesGPT automatically synthesizes evidence and presents explanations in natural language. The project is developed by Robusta and has been accepted as a Cloud Native Computing Foundation Sandbox project, highlighting its relevance to the cloud-native ecosystem. It is designed to operate as an automated troubleshooting assistant that can analyze incidents continuously and support on-call engineers during outages.
    Downloads: 0 This Week
  • 6
    SmythOS

    Cloud-native runtime for agentic AI

    ...It provides a foundational infrastructure layer that functions similarly to an operating system for agentic AI systems, managing resources such as language models, storage, vector databases, and caching through a unified interface. Developers can use the runtime to create, deploy, and orchestrate intelligent agents across local machines, cloud environments, or hybrid infrastructures without rewriting their application logic. The platform includes a software development kit and command-line interface that allow developers to define agent workflows, manage execution environments, and automate deployment processes. The runtime (SRE) has a modular architecture, so connectors to external services or infrastructure providers can be swapped or extended without changing an agent's core logic.
    Downloads: 0 This Week
  • 7
    GPT4All

    Run Local LLMs on Any Device. Open-source

    GPT4All is an open-source project that allows users to run large language models (LLMs) locally on their desktops or laptops, eliminating the need for API calls or GPUs. The software provides a simple, user-friendly application that can be downloaded and run on various platforms, including Windows, macOS, and Ubuntu, without requiring specialized hardware. It integrates with the llama.cpp implementation and supports multiple LLMs, allowing users to interact with AI models privately. This...
    Downloads: 125 This Week
  • 8
    NativeMind Extension

    Your fully private, open-source, on-device AI assistant

    ...The extension is aimed at everyday browser workflows, offering features like multi-tab context awareness, webpage summarization, document understanding, contextual toolbars, and AI-assisted rewriting directly inside the browsing experience. Because it runs locally after setup, it is also positioned as an always-available assistant that avoids API quotas, network latency, and service outages common in cloud-based AI tools.
    Downloads: 0 This Week
  • 9
    Farfalle

    AI search engine - self-host with local or cloud LLMs

    ...The project integrates large language models with multiple search APIs so that the system can gather information from external sources and synthesize responses into concise answers. It can run either with local language models or with cloud-based providers, allowing developers to deploy it privately or integrate with hosted AI services. The architecture separates the frontend and backend, using modern web technologies such as Next.js and FastAPI to deliver an interactive interface and scalable server logic. Farfalle also includes an agent-based search workflow that plans queries and executes multiple search steps to produce more accurate results than traditional keyword searches. ...
    Downloads: 0 This Week
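The agent-based workflow described above (plan sub-queries, run each search step, then synthesize the results) can be sketched in a few lines. All names here are illustrative stand-ins, not Farfalle's actual API:

```python
# Toy sketch of an agent-style search workflow: plan sub-queries,
# run each one against a (stubbed) search backend, collect results.

def plan(question: str) -> list[str]:
    # A real planner would ask an LLM to decompose the question;
    # here we just derive two fixed sub-queries.
    return [f"{question} overview", f"{question} examples"]

def search(query: str) -> list[str]:
    # Stub standing in for a real search API (e.g. SearXNG or Tavily).
    return [f"result for '{query}'"]

def answer(question: str) -> list[str]:
    results: list[str] = []
    for sub_query in plan(question):
        results.extend(search(sub_query))
    return results  # a real system would now summarize these with an LLM

print(answer("vector databases"))
```

A production version would replace both stubs with real calls and feed the collected results back into a language model for synthesis.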
  • 10
    python-whatsapp-bot

    Build AI WhatsApp Bots with Pure Python

    python-whatsapp-bot is an open-source framework that demonstrates how to build AI-powered WhatsApp bots using pure Python and the official WhatsApp Cloud API. The project provides a practical implementation of a messaging automation system using the Flask web framework to handle webhook events and process incoming messages in real time. Developers can configure the bot to receive user messages through the WhatsApp API, route them through application logic, and generate automated responses powered by AI services such as large language models. ...
    Downloads: 3 This Week
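The webhook flow such a bot implements has two halves: echoing Meta's verification challenge, and extracting text from incoming message events. The field names below follow the WhatsApp Cloud API's webhook payload shape (verify against Meta's docs); the project itself wires this logic into Flask routes, which are omitted here:

```python
# Minimal sketch of the WhatsApp Cloud API webhook pattern, framework-free.

def verify_webhook(params: dict, expected_token: str):
    """Echo the challenge when Meta verifies the webhook endpoint (GET)."""
    if (params.get("hub.mode") == "subscribe"
            and params.get("hub.verify_token") == expected_token):
        return params.get("hub.challenge")
    return None

def extract_message(payload: dict):
    """Pull sender and text out of an incoming-message event (POST body)."""
    try:
        msg = payload["entry"][0]["changes"][0]["value"]["messages"][0]
        return msg["from"], msg["text"]["body"]
    except (KeyError, IndexError):
        return None  # status updates and other events carry no message

# Abbreviated example event:
event = {"entry": [{"changes": [{"value": {"messages": [
    {"from": "15551234567", "text": {"body": "hello"}}]}}]}]}
print(extract_message(event))  # ('15551234567', 'hello')
```

The extracted text would then be routed through application logic (e.g. an LLM call) and the reply posted back via the Cloud API's messages endpoint.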
  • 11
    text-extract-api

    Document (PDF, Word, PPTX ...) extraction and parsing API

    ...It can be integrated into document analysis systems, knowledge retrieval tools, and AI pipelines that rely on clean textual data. The architecture is designed to be lightweight and easily deployable, making it suitable for both local installations and cloud environments.
    Downloads: 2 This Week
  • 12
    RunAnywhere

    Production-ready toolkit to run AI locally

    RunAnywhere SDKs are a set of cross-platform development tools that enable applications to run artificial intelligence models directly on user devices instead of relying on cloud infrastructure. The toolkit allows developers to integrate language models, speech recognition, and voice synthesis capabilities into mobile or desktop applications while keeping all computation local. By running models entirely on device, the platform eliminates network latency and protects user data because information does not leave the device. ...
    Downloads: 2 This Week
  • 13
    DocStrange

    Extract and convert data from any document: images, PDFs, Word docs

    ...It is built for developers who need high-quality parsing from scans, photos, PDFs, office files, and other document sources while preserving privacy and control over the processing flow. One of its key differentiators is deployment flexibility: it offers a cloud API for managed usage as well as a fully private offline mode that runs locally on a GPU. The platform also supports synchronous extraction, streaming responses, and asynchronous processing for larger documents, which makes it adaptable to both interactive workflows and heavier back-end pipelines.
    Downloads: 3 This Week
  • 14
    Ollamac

    Mac app for Ollama

    ...The application focuses on delivering a lightweight and responsive experience that integrates seamlessly with the macOS ecosystem. Because the models run locally, the system enables private AI workflows without sending data to external APIs or cloud services. Ollamac supports different Ollama models and provides features designed to improve usability such as syntax highlighting and configurable settings.
    Downloads: 0 This Week
  • 15
    SwanLab

    An open-source, modern-design AI training tracking and visualization tool

    ...It provides a modern user interface for visualizing results, enabling teams to compare runs, track model performance trends, and collaborate on machine learning research. SwanLab supports both cloud and self-hosted deployments, allowing organizations to run the system privately or integrate it into shared development environments. The platform integrates with a wide range of machine learning frameworks including PyTorch, Transformers, Keras, and other widely used training ecosystems.
    Downloads: 0 This Week
  • 16
    local-llm

    Run LLMs locally on Cloud Workstations

    ...This approach improves data privacy and control, as all inference can be performed locally without sending sensitive information to external APIs. It also integrates seamlessly with Google Cloud services, allowing developers to build and test AI-powered applications within the broader cloud ecosystem.
    Downloads: 1 This Week
  • 17
    PicoLM

    Run a 1-billion parameter LLM on a $10 board with 256MB RAM

    ...The runtime is capable of running language models with billions of parameters on devices with only a few hundred megabytes of memory, which is significantly lower than typical LLM infrastructure requirements. This makes PicoLM particularly suitable for edge computing, offline AI applications, and embedded AI devices that cannot rely on cloud resources.
    Downloads: 0 This Week
  • 18
    Generative AI Use Cases (GenU)

    Application implementation with business use cases

    AWS Generative AI Use Cases is an open-source repository developed by AWS that provides practical examples and reference implementations for building applications powered by generative artificial intelligence. The project collects a wide range of real-world scenarios that demonstrate how organizations can use large language models and generative AI services within cloud-based architectures. Each example typically includes infrastructure templates, backend services, and application code that show how to integrate generative AI capabilities with other AWS services. These examples cover tasks such as document analysis, conversational assistants, content generation, and knowledge retrieval systems. ...
    Downloads: 0 This Week
  • 19
    MaxText

    A simple, performant and scalable Jax LLM

    ...The project acts as both a reference implementation and a practical training library that demonstrates best practices for building and scaling transformer-based language models on modern accelerator hardware. It is optimized to run efficiently on Google Cloud TPUs and GPUs, enabling researchers and engineers to train models ranging from small experiments to extremely large distributed workloads. The framework focuses on simplicity while still supporting advanced techniques such as model sharding, distributed computation, and high-throughput training pipelines. MaxText includes ready-to-use configurations and reproducible training examples that help developers understand how to deploy large-scale AI workloads with modern machine learning infrastructure.
    Downloads: 0 This Week
  • 20
    vLLM Semantic Router

    System-level intelligent router for mixture-of-models in the cloud

    Semantic Router is an open-source system designed to intelligently route requests across multiple large language models based on the semantic meaning and complexity of user queries. Instead of sending every prompt to the same model, the system analyzes the intent and reasoning requirements of the request and dynamically selects the most appropriate model to process it. This approach allows developers to combine multiple models with different strengths, such as lightweight models for simple...
    Downloads: 0 This Week
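The routing idea (inspect a query, pick a model tier) can be illustrated with a toy heuristic. The real project classifies queries semantically; this keyword-and-length score and the model names are placeholders, not part of vLLM Semantic Router:

```python
# Toy router: estimate query complexity, then pick a model tier.
# A real semantic router would use embeddings or a classifier instead.

REASONING_HINTS = {"prove", "derive", "step", "why", "explain", "optimize"}

def route(query: str) -> str:
    words = query.lower().split()
    # Length plus a bonus for reasoning-flavored keywords.
    score = len(words) + 5 * sum(w.strip("?.,") in REASONING_HINTS for w in words)
    return "heavy-reasoning-model" if score > 12 else "light-chat-model"

print(route("hi"))  # light-chat-model
print(route("explain why this proof fails and derive a fix step by step"))
```

The payoff is cost and latency: cheap models absorb simple prompts, while only queries that look hard are escalated to the expensive tier.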
  • 21
    DeepSeek R1

    Open-source, high-performance AI model with advanced reasoning

    DeepSeek-R1 is an open-source large language model developed by DeepSeek, designed to excel in complex reasoning tasks across domains such as mathematics, coding, and language. DeepSeek R1 offers unrestricted access for both commercial and academic use. The model employs a Mixture of Experts (MoE) architecture, comprising 671 billion total parameters with 37 billion active parameters per token, and supports a context length of up to 128,000 tokens. DeepSeek-R1's training regimen uniquely...
    Downloads: 63 This Week
  • 22
    AI as Workspace

    An elegant AI chat client. Full-featured, lightweight

    AI as Workspace is an open-source AI client application that provides a unified interface for interacting with multiple large language models and AI tools within a single workspace environment. The platform is designed as a lightweight yet powerful desktop or web application that organizes AI interactions through structured workspaces. Instead of managing individual chat sessions separately, users can group conversations, artifacts, and tasks within customizable...
    Downloads: 3 This Week
  • 23
    chatd

    Chat with your documents using local AI

    ...The software focuses on privacy and security by ensuring that all document processing and inference occur entirely on the user’s computer without sending data to external cloud services. It includes a built-in integration with the Ollama runtime, which provides a cross-platform environment for running large language models locally. The application typically runs models such as Mistral-7B and allows users to load and analyze documents while asking questions in natural language. Unlike many document-chat tools that require manual installation of model servers, chatd packages the model runner with the application so that users can start interacting with documents immediately after launching the program.
    Downloads: 2 This Week
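A tool that bundles Ollama typically talks to it over its local HTTP API. As a rough sketch (this is generic Ollama usage, not chatd's internal code), building a request against Ollama's default endpoint might look like this; no request is actually sent:

```python
import json
from urllib.request import Request

# Construct (but do not send) a request to Ollama's /api/generate
# endpoint on its default local port. Model name is illustrative.

def build_generate_request(prompt: str, model: str = "mistral") -> Request:
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return Request(
        "http://localhost:11434/api/generate",  # Ollama's default port
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("Summarize the attached document.")
print(req.full_url, json.loads(req.data)["model"])
```

In a document-chat flow, the prompt would also carry retrieved passages from the user's files, so the model answers grounded in local content.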
  • 24
    HyperAgent

    AI Browser Automation

    HyperAgent is an open-source browser automation framework that combines large language models with modern browser scripting tools to create intelligent web automation agents. Built on top of Playwright, the framework allows developers to automate complex browser interactions using natural language commands rather than fragile selectors or hard-coded scripts. Instead of manually writing logic for clicking elements, extracting data, or navigating web pages, developers can instruct the agent in...
    Downloads: 1 This Week
  • 25
    LLM-Finetuning

    LLM Finetuning with peft

    ...The project focuses on parameter-efficient fine-tuning methods such as LoRA and QLoRA, which allow large models to be adapted to new tasks without requiring full retraining. Instead of requiring specialized hardware or complex training pipelines, many examples are designed to run in cloud notebook environments such as Google Colab. The repository includes step-by-step notebooks demonstrating how to fine-tune models such as LLaMA, Falcon, OPT, Vicuna, and GPT-NeoX. These tutorials show how developers can adapt pretrained models for tasks such as chatbots, classification, and instruction following. The project also illustrates how low-precision training techniques and adapter-based methods reduce memory requirements while maintaining strong model performance.
    Downloads: 1 This Week
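The memory savings behind LoRA are simple arithmetic: instead of updating a full d_out by d_in weight matrix, it trains two low-rank factors of shapes (d_out, r) and (r, d_in). A quick back-of-envelope sketch (hypothetical helper; a 4096-wide layer at rank 8 is a typical configuration):

```python
# Compare trainable-parameter counts: full fine-tuning of one linear
# layer vs. a LoRA adapter of rank r on the same layer.

def lora_params(d_out: int, d_in: int, r: int) -> tuple[int, int]:
    full = d_out * d_in        # params updated by full fine-tuning
    lora = r * (d_out + d_in)  # params in the B (d_out x r) and A (r x d_in) factors
    return full, lora

full, lora = lora_params(4096, 4096, 8)
print(full, lora, f"{100 * lora / full:.2f}%")  # 16777216 65536 0.39%
```

At under half a percent of the layer's parameters, the adapter fits easily in the memory budget of a free-tier Colab GPU, which is why the repository's notebooks can run there; QLoRA shrinks the footprint further by quantizing the frozen base weights.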