  • 1
    Farfalle

    AI search engine - self-host with local or cloud LLMs

    Farfalle is an open-source AI-powered search engine designed to provide an answer-centric search experience similar to modern conversational search systems. The project integrates large language models with multiple search APIs so that the system can gather information from external sources and synthesize responses into concise answers. It can run either with local language models or with cloud-based providers, allowing developers to deploy it privately or integrate with hosted AI services. ...
    Downloads: 0 This Week
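The answer-centric flow described above can be sketched as a minimal pipeline: fan out a query to several search backends, then fold the snippets into a single prompt an LLM could synthesize into a concise answer. This is a conceptual sketch, not Farfalle's actual code; the backends here are stubs standing in for real search APIs.

```javascript
// Stubbed search backends; a real deployment would call search APIs.
const backends = [
  async (q) => [{ title: "Doc A", snippet: `Notes on ${q} from source A` }],
  async (q) => [{ title: "Doc B", snippet: `Notes on ${q} from source B` }],
];

// Query every backend in parallel and flatten the results.
async function gather(query) {
  const results = await Promise.all(backends.map((b) => b(query)));
  return results.flat();
}

// Fold numbered snippets into one synthesis prompt for an LLM.
function buildPrompt(query, snippets) {
  const context = snippets
    .map((s, i) => `[${i + 1}] ${s.title}: ${s.snippet}`)
    .join("\n");
  return `Answer the question using only the sources below.\n${context}\nQuestion: ${query}`;
}

gather("local LLM inference").then((snippets) => {
  console.log(buildPrompt("local LLM inference", snippets));
});
```

The numbered sources let the model cite which snippet supports each part of the answer, which is how conversational search UIs typically attribute claims.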
  • 2
    node-llama-cpp

    Run AI models locally on your machine with node.js bindings for llama

    node-llama-cpp is a JavaScript/Node.js binding for llama.cpp that lets developers run large language models locally on its high-performance inference engine. The library enables Node.js applications to interact directly with local models without a remote API or external service. With native bindings and optimized model execution, developers can integrate advanced language model capabilities into desktop applications, server software, and command-line tools. ...
    Downloads: 7 This Week
  • 3
    PasteGuard

    Masks sensitive data and secrets before they reach AI

    ...PasteGuard supports two primary modes: mask mode, which anonymizes data and still uses external APIs; and route mode, which forwards sensitive requests to a local LLM inference engine while sending the rest to the cloud. It can be self-hosted via Docker, works with a wide range of SDKs and tools, and includes a browser extension for automatic protection in everyday AI chats.
    Downloads: 3 This Week
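The two modes described above can be illustrated with a toy implementation. This is a conceptual sketch, not PasteGuard's actual API: "mask" replaces detected secrets with typed placeholders before a request leaves for an external API, while "route" flags sensitive requests so they can be sent to a local model instead of the cloud. The two detectors are illustrative only; real tools ship far broader rule sets.

```javascript
// Illustrative detectors only (hypothetical rule set, not PasteGuard's).
const detectors = [
  { name: "email", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { name: "api_key", pattern: /\bsk-[A-Za-z0-9]{8,}\b/g },
];

// Mask mode: anonymize secrets, then the request can still use a cloud API.
function maskMode(text) {
  let masked = text;
  for (const d of detectors) {
    masked = masked.replace(d.pattern, `<${d.name}>`);
  }
  return masked;
}

// Route mode: pick the destination based on whether anything sensitive appears.
function routeMode(text) {
  // Re-create the regex without the sticky lastIndex of the /g originals.
  const sensitive = detectors.some((d) => new RegExp(d.pattern.source).test(text));
  return sensitive ? "local" : "cloud";
}

console.log(maskMode("Contact alice@example.com with key sk-abcdef123456"));
console.log(routeMode("Summarize this public blog post"));
```

The design trade-off between the modes: masking keeps cloud-model quality but alters the text the model sees, while routing preserves the text verbatim at the cost of running a local engine for sensitive traffic.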
  • 4
    wllama

    WebAssembly binding for llama.cpp - Enabling on-browser LLM inference

    wllama is a WebAssembly-based library that enables large language model inference directly inside a web browser. Built as a binding for the llama.cpp inference engine, the project lets developers run models locally without a server backend or dedicated GPU hardware. The library leverages WebAssembly SIMD capabilities for efficient execution in modern browsers while maintaining compatibility across platforms. By running models on the user’s device, wllama enables privacy-preserving AI applications that do not send data to remote servers. ...
    Downloads: 2 This Week
  • 5
    Secret Llama

    Fully private LLM chatbot that runs entirely in the browser

    ...The interface mirrors the modern chat UX you’d expect—streaming responses, markdown, and a clean layout—so there’s no usability tradeoff to gain privacy. Under the hood it uses a web-native inference engine to accelerate model execution with GPU/WebGPU when available, keeping responses responsive even without a backend. It’s a great option for developers and teams who want to prototype assistants or handle sensitive text without sending prompts to external APIs.
    Downloads: 2 This Week
  • 6
    dataline

    AI data analysis and visualization on CSV, Postgres, MySQL, Snowflake

    ...It supports connections to multiple structured data sources such as PostgreSQL, MySQL, Snowflake, SQLite, Excel files, CSV datasets, and other database systems. Once connected, users can generate tables, charts, and reports automatically based on queries produced by the AI engine. The platform is designed with a privacy-first architecture that stores data locally on the user’s device rather than sending it to external cloud services by default. It can also hide sensitive data from language models during processing, ensuring that only necessary metadata is used for query generation.
    Downloads: 1 This Week
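The privacy pattern described above — hiding sensitive data from the language model and passing only the metadata needed for query generation — can be sketched in a few lines. This is a conceptual illustration, not dataline's actual code: the model that writes the SQL only ever sees column names and types, never the row values.

```javascript
// Local rows with sensitive values; these never leave the machine.
const rows = [
  { patient: "Jane Doe", age: 52, diagnosis: "A12" },
  { patient: "John Roe", age: 47, diagnosis: "B07" },
];

// Derive column names and types from one sample row; values stay local.
function schemaOnly(sample) {
  return Object.entries(sample).map(([column, value]) => ({
    column,
    type: typeof value,
  }));
}

// Build the prompt for the query-writing model from metadata alone.
function buildQueryPrompt(table, schema) {
  const cols = schema.map((c) => `${c.column} (${c.type})`).join(", ");
  return `Table ${table} has columns: ${cols}. Write SQL for: average age.`;
}

const prompt = buildQueryPrompt("patients", schemaOnly(rows[0]));
console.log(prompt); // No patient names or diagnoses appear in the prompt.
```

The generated SQL then runs locally against the real data, so the model's output is useful without the model ever observing the data it queries.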
  • 7
    LangGraph.js

    Framework to build resilient language agents as graphs

    LangGraph.js is a JavaScript framework for building stateful AI applications and autonomous agents using graph-based execution models. Developed as part of the LangChain ecosystem, it lets developers represent complex AI workflows as graphs in which nodes represent tasks and edges define the flow of execution. This structure makes it easier to implement long-running agents, multi-step reasoning pipelines, and workflows that require persistent state. LangGraph.js supports...
    Downloads: 0 This Week
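The graph model described above can be shown with a minimal executor. This is a sketch of the idea, not the LangGraph.js API: nodes are functions that transform a shared state object, and edges name which node runs next until the graph ends.

```javascript
// A tiny two-node graph: "draft" produces text, "review" checks it.
const graph = {
  start: "draft",
  nodes: {
    draft: (state) => ({ ...state, text: `draft of ${state.topic}` }),
    review: (state) => ({ ...state, approved: state.text.length > 0 }),
  },
  edges: { draft: "review", review: null }, // null marks the end of the graph
};

// Walk the graph: each node transforms the state, each edge picks the next node.
function runGraph(graph, state) {
  let current = graph.start;
  while (current !== null) {
    state = graph.nodes[current](state);
    current = graph.edges[current];
  }
  return state;
}

console.log(runGraph(graph, { topic: "agents" }));
```

Because the state object persists across every node, this shape extends naturally to the long-running, multi-step workflows the description mentions: conditional edges, retries, or a checkpointing layer can be added without changing the node functions themselves.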