  • 1
    PicoLM

    Run a 1-billion parameter LLM on a $10 board with 256MB RAM

    ...The project focuses on enabling efficient local inference by optimizing memory usage, computation, and system dependencies so that relatively large models can operate on devices with minimal RAM. It is written primarily in C and designed with a minimalist architecture that removes unnecessary dependencies and external libraries. The runtime is capable of running language models with billions of parameters on devices with only a few hundred megabytes of memory, which is significantly lower than typical LLM infrastructure requirements. ...
    Downloads: 4 This Week
  • 2
    NullClaw

    Fastest, smallest, and fully autonomous AI assistant infrastructure

    NullClaw is the smallest fully autonomous AI assistant infrastructure, built entirely in Zig as a single static binary with zero runtime dependencies. At just 678 KB with ~1 MB peak RAM usage, it boots in under 2 milliseconds and runs on virtually any hardware, including low-cost ARM boards. Despite its size, it delivers a complete AI stack with 22+ model providers, 18+ communication channels, integrated tools, hybrid memory, and sandboxed runtime support. Its architecture is fully modular, using vtable interfaces that allow providers, channels, tools, memory backends, and runtimes to be swapped without code changes. ...
    Downloads: 15 This Week
  • 3
    MESHROOM

    3D reconstruction software

    ...Automatically estimate the fisheye circle or edit it manually. Take advantage of motorized-head files. Easy to integrate into your render farm system. Add specific rules to select the most suitable machines based on the CPU, RAM, and GPU requirements of each node.
    Downloads: 113 This Week
  • 4
    llmfit

    157 models, 30 providers, one command to find what runs on your hardware

    llmfit is a terminal-based utility that helps developers determine which large language models can realistically run on their local hardware by analyzing system resources and model requirements. The tool automatically detects CPU, RAM, GPU, and VRAM specifications, then ranks available models based on performance factors such as speed, quality, and memory fit. It provides both an interactive terminal user interface and a traditional CLI mode, enabling flexible workflows for different user preferences. llmfit also supports advanced configurations including multi-GPU setups, mixture-of-experts architectures, and dynamic quantization recommendations. ...
    Downloads: 46 This Week
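
    The core of the "memory fit" check is simple arithmetic: weight size is roughly parameter count times bytes per parameter, plus runtime overhead. A toy sketch of that calculation (made-up helper and overhead factor, not llmfit's actual code):

      # Toy estimate of whether a quantized model fits in available memory.
      # Not llmfit's implementation; the 1.2x overhead factor is an assumption.
      def fits_in_memory(params_billion, bits_per_weight, available_gib, overhead=1.2):
          weight_gib = params_billion * 1e9 * bits_per_weight / 8 / 2**30
          return weight_gib * overhead <= available_gib

      print(fits_in_memory(7, 4, 8))    # 7B model at 4-bit in 8 GiB  -> True
      print(fits_in_memory(70, 4, 24))  # 70B model at 4-bit in 24 GiB -> False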
  • 5
    gensim

    Topic Modelling for Humans

    Gensim is a Python library for topic modeling, document indexing, and similarity retrieval with large corpora. The target audience is the natural language processing (NLP) and information retrieval (IR) community.
    Downloads: 1 This Week
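
    A minimal usage sketch of the library's documented workflow, with toy documents: build a token dictionary, convert documents to bag-of-words vectors, and fit an LDA topic model.

      # Minimal gensim topic-modeling example (toy corpus; topic count is arbitrary).
      from gensim import corpora, models

      texts = [["cpu", "ram", "gpu"], ["ram", "memory", "swap"], ["topic", "model", "corpus"]]
      dictionary = corpora.Dictionary(texts)            # map tokens to integer ids
      corpus = [dictionary.doc2bow(t) for t in texts]   # sparse bag-of-words vectors
      lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary)
      print(lda.print_topics())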
  • 6
    InvokeAI

    InvokeAI is a leading creative engine for Stable Diffusion models

    ...It provides a streamlined process with various new features and options to aid the image generation process. It runs on Windows, Mac, and Linux machines, and on GPU cards with as little as 4 GB of RAM. InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and an interactive command-line interface, and also serves as the foundation for multiple commercial products. This fork is supported across Linux, Windows, and Macintosh. ...
    Downloads: 15 This Week
  • 7
    Datasets

    Hub of ready-to-use datasets for ML models

    ...We also feature a deep integration with the Hugging Face Hub, allowing you to easily load and share a dataset with the wider NLP community. There are currently over 2,658 datasets and more than 34 metrics available. Datasets naturally frees the user from RAM limitations: all datasets are memory-mapped using an efficient zero-serialization-cost backend (Apache Arrow). Smart caching: never wait for your data to process several times.
    Downloads: 1 This Week
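
    A short sketch of the memory-mapped loading described above; the dataset name is just an example from the Hub.

      # Load a dataset from the Hugging Face Hub; splits are memory-mapped via
      # Apache Arrow, so they do not need to fit entirely in RAM.
      from datasets import load_dataset

      ds = load_dataset("imdb", split="train")   # example dataset
      print(ds.features)
      print(ds[0]["text"][:80])                  # rows are read lazily from disk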
  • 8
    GPU Hot

    Real-time NVIDIA GPU dashboard

    GPU Hot is an open-source, lightweight monitoring dashboard designed to provide real-time visibility into NVIDIA GPU performance across single machines or entire clusters. The project offers a self-hosted web interface that streams hardware metrics directly from GPU servers, enabling developers, ML engineers, and system administrators to observe GPU utilization and system behavior in real time through a browser. The dashboard collects and displays a wide range of performance metrics...
    Downloads: 0 This Week
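
    GPU Hot's own collector is not shown here, but the kind of metrics it streams can be read with NVIDIA's NVML bindings; a rough sketch using the pynvml package (illustrative only, not GPU Hot's code):

      # Poll basic NVIDIA GPU metrics via NVML.
      import pynvml

      pynvml.nvmlInit()
      handle = pynvml.nvmlDeviceGetHandleByIndex(0)
      util = pynvml.nvmlDeviceGetUtilizationRates(handle)   # .gpu / .memory in percent
      mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # .total / .used in bytes
      print(f"gpu={util.gpu}% mem_used={mem.used / 2**20:.0f} MiB")
      pynvml.nvmlShutdown()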
  • 9
    OnnxStream

    Lightweight inference library for ONNX files, written in C++

    The challenge is to run Stable Diffusion 1.5, which includes a large transformer model with almost 1 billion parameters, on a Raspberry Pi Zero 2, a microcomputer with 512 MB of RAM, without adding more swap space and without offloading intermediate results to disk. The recommended minimum RAM/VRAM for Stable Diffusion 1.5 is typically 8 GB. Generally, major machine learning frameworks and libraries focus on minimizing inference latency and/or maximizing throughput, all at the cost of RAM usage. ...
    Downloads: 14 This Week
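
    Rough arithmetic shows why streaming weights is necessary here: even at FP16, a model near one billion parameters is several times larger than the board's total memory.

      # ~1B parameters at FP16 vs. the 512 MB of a Raspberry Pi Zero 2.
      params = 1e9
      fp16_mib = params * 2 / 2**20   # 2 bytes per weight
      print(f"{fp16_mib:.0f} MiB of weights vs 512 MiB of total RAM")   # ~1907 MiB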
  • 10
    FastChat

    Open platform for training, serving, and evaluating language models

    FastChat is an open platform for training, serving, and evaluating large language model-based chatbots. If you do not have enough memory, you can enable 8-bit compression by adding --load-8bit to the commands above. This can reduce memory usage by around half with slightly degraded model quality. It is compatible with CPU, GPU, and Metal backends. Vicuna-13B with 8-bit compression can run on a single NVIDIA 3090/4080/T4/V100 (16GB) GPU. In addition to that, you can add --cpu-offloading to...
    Downloads: 0 This Week
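
    A small sketch of launching the CLI with the 8-bit flag quoted above; the model path is a placeholder and should point at whatever weights you actually use.

      # Launch FastChat's CLI with 8-bit compression (see description above).
      import subprocess

      subprocess.run([
          "python3", "-m", "fastchat.serve.cli",
          "--model-path", "lmsys/vicuna-13b-v1.5",   # placeholder model path
          "--load-8bit",                             # roughly halves memory use
      ])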
  • 11
    TurboPilot

    Open source large-language-model based code completion engine

    TurboPilot is a self-hosted copilot clone that uses the library behind llama.cpp to run the 6-billion-parameter Salesforce CodeGen model in 4 GiB of RAM. It is heavily based on and inspired by the fauxpilot project. This is a proof of concept right now rather than a stable tool. Autocompletion is quite slow in this version of the project. Feel free to play with it, but your mileage may vary.
    Downloads: 0 This Week
  • 12
    Language Models

    Explore large language models in 512MB of RAM

    languagemodels is a lightweight Python library designed to simplify experimentation with large language models while maintaining extremely low hardware requirements. The project focuses on enabling developers and students to explore language model capabilities without needing expensive GPUs or large cloud infrastructures. By using small and optimized models, the library allows LLM inference to run in environments with limited resources, sometimes requiring only a few hundred megabytes of...
    Downloads: 0 This Week
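
    A minimal sketch, assuming the do() helper shown in the project's README; on first use the library downloads a small model sized to the limited RAM budget.

      # Run a small instruction-following model locally with languagemodels.
      import languagemodels as lm

      print(lm.do("What is the capital of France?"))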
  • 13
    LLaMA.go

    llama.go is like llama.cpp in pure Golang

    ...The project's code is based on the legendary ggml.cpp framework by Georgi Gerganov, written in C++ with the same attitude toward performance and elegance. Both models store FP32 weights, so you'll need at least 32 GB of RAM (not VRAM or GPU RAM) for LLaMA-7B, and double that, 64 GB, for LLaMA-13B.
    Downloads: 1 This Week
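
    The RAM figures follow directly from FP32 storage, four bytes per weight, so the weights of LLaMA-7B alone are on the order of 26 GiB before activations or runtime overhead.

      # Why FP32 LLaMA-7B calls for ~32 GB of system RAM.
      params = 7e9
      weight_gib = params * 4 / 2**30   # 4 bytes per FP32 weight
      print(f"{weight_gib:.1f} GiB of weights")   # ~26.1 GiB, before overhead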
  • 14
    Alpaca.cpp

    Locally run an Instruction-Tuned Chat-Style LLM

    Run a fast ChatGPT-like model locally on your device. This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca, a fine-tuning of the base model to obey instructions (akin to the RLHF used to train ChatGPT), and a set of modifications to llama.cpp to add a chat interface. Download the zip file corresponding to your operating system from the latest release. The weights are based on the published fine-tunes from alpaca-lora, converted back into a PyTorch checkpoint...
    Downloads: 4 This Week
  • 15
    Darknet YOLO

    Real-Time Object Detection for Windows and Linux

    This is YOLO-v3 and v2 for Windows and Linux. YOLO (You Only Look Once) is a state-of-the-art, real-time object detection system built on Darknet, an open source neural network framework written in C. YOLO is extremely fast and accurate. It uses a single neural network to divide a full image into regions, and then predicts bounding boxes and probabilities for each region. This project is a fork of the original Darknet project.
    Downloads: 44 This Week
  • 16
    Turi Create

    Simplifies the development of custom machine learning models

    ...Turi Create supports macOS 10.12+, Linux (with glibc 2.10+), and Windows 10 (via WSL). Turi Create requires Python 2.7, 3.5, 3.6, 3.7, or 3.8, as well as an x86_64 architecture and at least 4 GB of RAM. We recommend using virtualenv to use, install, or build Turi Create. The package User Guide and API Docs contain more details on how to use Turi Create. If you want to build Turi Create from source, see BUILD.md. Turi Create does not require a GPU, but certain models can be accelerated 9-13x by utilizing a GPU.
    Downloads: 0 This Week
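
    A rough usage sketch with toy data: build an in-memory SFrame and fit a simple regression model.

      # Small Turi Create example (toy data; assumes a supported Python version).
      import turicreate as tc

      data = tc.SFrame({"x": [1.0, 2.0, 3.0, 4.0], "y": [2.1, 4.2, 5.9, 8.1]})
      model = tc.linear_regression.create(data, target="y", features=["x"])
      print(model.predict(data))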
  • 17
    Snips NLU

    Snips Python library to extract meaning from text

    Snips NLU is a Natural Language Understanding Python library that parses sentences written in natural language and extracts structured information. It's the library that powers the NLU engine used in the Snips Console, which you can use to create awesome and private-by-design voice assistants. The exact output is a bit richer; the point here is to give a glimpse of what kind of information can be extracted. Behind every chatbot and voice assistant lies a common piece of technology:...
    Downloads: 0 This Week
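
    A sketch of the train-then-parse flow; "dataset.json" is a placeholder for a dataset in Snips' training format.

      # Train a Snips NLU engine on a dataset, then parse a sentence.
      import json
      from snips_nlu import SnipsNLUEngine

      with open("dataset.json") as f:        # placeholder dataset file
          dataset = json.load(f)

      engine = SnipsNLUEngine()
      engine.fit(dataset)
      print(engine.parse("Turn on the lights in the kitchen"))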
  • 18
    PyTorch-BigGraph

    Generate embeddings from large-scale graph-structured data

    PyTorch-BigGraph (PBG) is a system for learning embeddings on massive graphs—think billions of nodes and edges—using partitioning and distributed training to keep memory and compute tractable. It shards entities into partitions and buckets edges so that each training pass only touches a small slice of parameters, which drastically reduces peak RAM and enables horizontal scaling across machines. PBG supports multi-relation graphs (knowledge graphs) with relation-specific scoring functions, negative sampling strategies, and typed entities, making it suitable for link prediction and retrieval. Its training loop is built for throughput: asynchronous I/O, memory-mapped tensors, and lock-free updates keep GPUs and CPUs fed even at extreme scale. ...
    Downloads: 1 This Week
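
    A toy illustration of the bucketing idea described above, not PBG's actual API: once nodes are split into partitions, every edge falls into a (source-partition, destination-partition) bucket, and training on one bucket only needs those two partitions' embeddings in memory.

      # Toy edge-bucketing sketch; illustrative only, not PyTorch-BigGraph code.
      from collections import defaultdict

      num_partitions = 4
      part = lambda node_id: node_id % num_partitions    # toy partition assignment

      edges = [(0, 5), (1, 2), (8, 3), (6, 7), (4, 4)]
      buckets = defaultdict(list)
      for src, dst in edges:
          buckets[(part(src), part(dst))].append((src, dst))

      for (ps, pd), bucket in buckets.items():
          # only partitions ps and pd must be resident while training this bucket
          print(f"bucket ({ps}, {pd}): {bucket}")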
  • 19
    Darkbot

    The IRC's Talking Robot

    [ Please read https://sourceforge.net/p/darkbot/news/2014/01/darkbots-revitalization/ ] Darkbot is a portable IRC chat robot written in the C language that can be taught responses to user inquiries and can even hold conversations with users. Darkbot was originally created by Jason Hamilton as an aid for help channels on Internet Relay Chat.
    Downloads: 2 This Week
  • 20
    Ministral 3 3B Reasoning 2512

    Compact 3B-param multimodal model for efficient on-device reasoning

    ...This reasoning-tuned variant is optimized for tasks like math, coding, and other STEM-related problem solving, making it suitable for applications that require logical reasoning, analysis, or structured thinking. Despite its modest size, the model is designed for edge deployment and can run locally, fitting in ~16 GB of VRAM in BF16 or under 8 GB of RAM/VRAM when quantized. It supports dozens of languages, allowing it to function across global and multilingual contexts. The model retains strong system-prompt adherence, supports function-calling with structured JSON output, and offers a large 256k token context window for extended context reasoning.
    Downloads: 0 This Week
  • 21
    Devstral Small 2

    Lightweight 24B agentic coding model with vision and long context

    Devstral Small 2 is a compact agentic language model designed for software engineering workflows, excelling at tool usage, codebase exploration, and multi-file editing. With 24B parameters and FP8 instruct tuning, it delivers strong instruction following while remaining lightweight enough for local and on-device deployment. The model achieves competitive performance on SWE-bench, validating its effectiveness for real-world coding and automation tasks. It introduces vision capabilities,...
    Downloads: 0 This Week