Showing 436 open source projects for "llama-cpp-python.whl"

  • 1
    llama.cpp Python Bindings

    Python bindings for llama.cpp

    llama-cpp-python provides Python bindings for llama.cpp, enabling the integration of LLaMA (Large Language Model Meta AI) language models into Python applications. This facilitates the use of LLaMA's capabilities in natural language processing tasks within Python environments.
    Downloads: 4 This Week
    Last Update:
    See Project
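    A minimal usage sketch, assuming the llama_cpp package is installed (for example from a llama-cpp-python wheel) and a GGUF model file is available locally; the model path below is illustrative:

      # Sketch: load a local GGUF model and run a completion via llama-cpp-python.
      from llama_cpp import Llama

      llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)  # illustrative path
      out = llm("Q: What does llama.cpp do? A:", max_tokens=64, stop=["Q:"])
      print(out["choices"][0]["text"])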
  • 2
    LLaMA-Factory

    Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

    LLaMA-Factory is a unified fine-tuning and training framework covering LLaMA and more than 100 other LLMs and VLMs. It enables researchers and developers to train and customize models efficiently, supporting parameter-efficient methods such as LoRA and QLoRA alongside full fine-tuning.
    Downloads: 5 This Week
    Last Update:
    See Project
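    As a rough sketch of how a training run is typically launched, assuming the llamafactory-cli entry point and an example LoRA config; the command and config path are assumptions based on the project's documented workflow, not a guaranteed interface:

      # Sketch: kick off a LoRA SFT run through LLaMA-Factory's CLI from Python.
      import subprocess

      subprocess.run(
          ["llamafactory-cli", "train", "examples/train_lora/llama3_lora_sft.yaml"],  # assumed config path
          check=True,
      )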
  • 3
    Llama Stack

    Composable building blocks to build Llama Apps

    Llama Stack is an open-source framework of composable building blocks for developing Llama-based applications. It standardizes core services such as inference, safety, retrieval, and agents behind a unified API, so applications can swap providers and deployment targets without changing code.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 4
    Secret Llama

    Fully private LLM chatbot that runs entirely with a browser

    Secret Llama is a privacy-first large-language-model chatbot that runs entirely inside your web browser, meaning no server is required and your conversation data never leaves your device. It focuses on open-source model support, letting you load families like Llama and Mistral directly in the client for fully local inference. Because everything happens in-browser, it can work offline once models are cached, which is helpful for air-gapped environments or travel.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 5
    Purple Llama

    Set of tools to assess and improve LLM security

    Purple Llama is an umbrella safety initiative that aggregates tools, benchmarks, and mitigations to help developers build responsibly with open generative AI. Its scope spans input and output safeguards, cybersecurity-focused evaluations, and reference shields that can be inserted at inference time. The project evolves as a hub for safety research artifacts like Llama Guard and Code Shield, along with dataset specs and how-to guides for integrating checks into applications. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 6
    Llama Coder

    Open source Claude Artifacts – built with Llama 3.1 405B

    Llama Coder is an open-source tool that lets you generate small applications (often React or web apps) from a single natural-language prompt using the Llama 3 family of models. It’s framed as an open-source “Claude Artifacts”-style experience: you describe the app you want, the tool calls an LLM hosted on Together.ai, and you get back a runnable code artifact.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 7
    LLaMA 3

    The official Meta Llama 3 GitHub site

    ...Even as a deprecated repo, it documents the transition path and preserves references that clarify how Llama 3 releases map into the current ecosystem. Practically, it functioned as a bridge between Llama 2 and later Llama releases by standardizing distribution and starter code for inference and fine-tuning. Teams still treat it as historical reference material for version lineage and migration notes.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 8
    llama.cpp

    Port of Facebook's LLaMA model in C/C++

    The llama.cpp project enables the inference of Meta's LLaMA model (and other models) in pure C/C++ without requiring a Python runtime. It is designed for efficient and fast model execution, offering easy integration for applications needing LLM-based capabilities. The repository focuses on providing a highly optimized and portable implementation for running large language models directly within C/C++ environments.
    Downloads: 153 This Week
    Last Update:
    See Project
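    One common integration path is the project's bundled HTTP server with its OpenAI-compatible endpoint; the sketch below assumes a server already running locally on the default port (address and route are assumptions):

      # Sketch: query a locally running llama.cpp server over HTTP.
      import requests

      resp = requests.post(
          "http://127.0.0.1:8080/v1/chat/completions",  # assumed default llama.cpp server address
          json={
              "messages": [{"role": "user", "content": "Summarize llama.cpp in one sentence."}],
              "max_tokens": 64,
          },
          timeout=60,
      )
      print(resp.json()["choices"][0]["message"]["content"])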
  • 9
    LLaMA Models

    Utilities intended for use with Llama models

    ...It complements separate repos that carry code and demos (for example inference kernels or cookbook content) by keeping authoritative metadata and specs here. Model lineages and size variants are documented externally (e.g., Llama 3.x and beyond), with this repo providing the “single source of truth” links and utilities. In practice, teams use llama-models as a reference when selecting variants, aligning licenses, and wiring in helper scripts for deployment.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    Llama Cookbook

    Solve end to end problems using Llama model family

    The Llama Cookbook is the official Meta Llama guide for inference, fine-tuning, RAG, and multi-step use cases. It offers recipes, code samples, and integration examples spanning provider platforms and end-to-end workflows (for example WhatsApp integration, text-to-SQL, and long-context use), enabling developers to quickly put Llama models to work.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    Distributed Llama

    Connect home devices into a powerful cluster to accelerate LLM

    Distributed Llama is an open-source project that enables users to connect multiple home devices into a powerful cluster to accelerate Large Language Model (LLM) inference. By leveraging tensor parallelism and high-speed synchronization over Ethernet, it allows for faster performance as more devices are added to the cluster. The system supports various operating systems, including Linux, macOS, and Windows, and is optimized for both ARM and x86_64 AVX2 CPUs.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    Llama Recipes

    Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT method

    The 'llama-recipes' repository is a companion to the Meta Llama models. We support the latest version, Llama 3.1, in this repository. The goal is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Llama and other tools in the LLM ecosystem. ...
    Downloads: 0 This Week
    Last Update:
    See Project
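    A condensed sketch of the PEFT side of such a setup, using the Hugging Face transformers and peft libraries; the model id and target modules are illustrative, and FSDP wrapping, data loading, and the training loop are omitted:

      # Sketch: attach LoRA adapters to a Llama checkpoint before fine-tuning.
      from transformers import AutoModelForCausalLM
      from peft import LoraConfig, get_peft_model

      base = "meta-llama/Meta-Llama-3.1-8B"  # illustrative id; gated models need access approval
      model = AutoModelForCausalLM.from_pretrained(base)

      lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
      model = get_peft_model(model, lora)
      model.print_trainable_parameters()  # only the small adapter weights are trainable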
  • 13
    cpp-httplib

    A C++ header-only HTTP/HTTPS server and client library

    ...If your program needs to avoid being terminated on SIGPIPE, the only fully general way might be to set up a signal handler for SIGPIPE to handle or ignore it yourself. cpp-httplib officially supports only the latest Visual Studio. It might work with former versions of Visual Studio, but I can no longer verify it.
    Downloads: 19 This Week
    Last Update:
    See Project
  • 14
    tgbot-cpp

    C++ library for Telegram bot API

    C++ library for Telegram bot API.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 15
    pytorch-cpp

    C++ Implementation of PyTorch Tutorials for Everyone

    C++ implementation of PyTorch tutorials for everyone. This repository provides tutorial code in C++ for deep learning researchers learning PyTorch (covering Sections 1 to 3); the interactive tutorials currently run on the LibTorch nightly version. On Windows, LibTorch supports only 64-bit builds, so an x64 generator needs to be specified. The build creates all required script module files for pretrained models/weights and requires Python 3 with PyTorch and torchvision installed. You can choose to only...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 16
    Skiplist-CPP

    A tiny KV storage based on skiplist written in C++ language

    Skiplist-CPP is a lightweight key-value storage engine implemented in C++ using a skip list as its core data structure. It showcases how an ordered, probabilistic multi-level index can deliver fast inserts, lookups, and deletes while remaining simple to implement and reason about. The project supplies a compact codebase with a clear separation between the skip list implementation and the storage operations that use it.
    Downloads: 0 This Week
    Last Update:
    See Project
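    To illustrate the underlying data structure (a language-agnostic sketch in Python, not Skiplist-CPP's C++ API): keys sit in an ordered, multi-level linked structure, and every search descends from the sparsest level down.

      # Sketch: a tiny skip-list-backed key-value map.
      import random

      MAX_LEVEL = 8

      class Node:
          def __init__(self, key, value, level):
              self.key, self.value = key, value
              self.forward = [None] * (level + 1)

      class SkipListKV:
          def __init__(self):
              self.head = Node(None, None, MAX_LEVEL)
              self.level = 0

          def _random_level(self):
              lvl = 0
              while random.random() < 0.5 and lvl < MAX_LEVEL:
                  lvl += 1
              return lvl

          def search(self, key):
              node = self.head
              for i in range(self.level, -1, -1):      # descend level by level
                  while node.forward[i] and node.forward[i].key < key:
                      node = node.forward[i]
              node = node.forward[0]
              return node.value if node and node.key == key else None

          def insert(self, key, value):
              update = [self.head] * (MAX_LEVEL + 1)
              node = self.head
              for i in range(self.level, -1, -1):
                  while node.forward[i] and node.forward[i].key < key:
                      node = node.forward[i]
                  update[i] = node
              node = node.forward[0]
              if node and node.key == key:             # overwrite an existing key
                  node.value = value
                  return
              lvl = self._random_level()
              self.level = max(self.level, lvl)
              new = Node(key, value, lvl)
              for i in range(lvl + 1):                 # splice into each level up to lvl
                  new.forward[i] = update[i].forward[i]
                  update[i].forward[i] = new

      kv = SkipListKV()
      kv.insert("alpha", 1)
      kv.insert("beta", 2)
      print(kv.search("beta"))  # -> 2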
  • 17
    Cpp-Peglib

    A single file C++ header-only PEG (Parsing Expression Grammars)

    cpp-peglib is a single-file, header-only C++17 library for Parsing Expression Grammars (PEG). It enables developers to define grammars and build parsers directly within C++ code without external dependencies.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Llama Cloud Services

    Knowledge Agents and Management in the Cloud

    Llama Cloud Services is a suite of tools designed to facilitate the integration of large language models (LLMs) into applications. It offers components for parsing, extracting, and reporting on complex documents, streamlining the process of preparing data for LLM consumption.
    Downloads: 1 This Week
    Last Update:
    See Project
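    A rough sketch of the document-parsing piece, assuming the llama-parse client package and an API key; the class name and parameters reflect commonly documented usage and should be treated as assumptions:

      # Sketch: parse a PDF into LLM-ready markdown via LlamaParse (part of Llama Cloud Services).
      from llama_parse import LlamaParse  # assumed client package

      parser = LlamaParse(api_key="llx-...", result_type="markdown")  # placeholder key
      docs = parser.load_data("./reports/q3_financials.pdf")          # illustrative file path
      print(docs[0].text[:500])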
  • 19
    LLaMA Efficient Tuning

    Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)

    Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    Ollama

    Get up and running with Llama 2 and other large language models

    Run, create, and share large language models (LLMs) locally. Ollama gets you up and running with Llama 2 and other models on macOS, and lets you customize existing models and create your own.
    Downloads: 220 This Week
    Last Update:
    See Project
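    A minimal sketch of talking to a locally running Ollama daemon over its HTTP API on the default port; it assumes the llama2 model has already been pulled:

      # Sketch: request a completion from a local Ollama instance.
      import requests

      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
          timeout=120,
      )
      print(resp.json()["response"])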
  • 21
    Chinese-LLaMA-Alpaca 2

    Chinese LLaMA-2 & Alpaca-2 Large Model Phase II Project

    This project builds on Llama-2, the commercially usable large model released by Meta, and is the second phase of the Chinese LLaMA & Alpaca large model project. It open-sources the Chinese LLaMA-2 base model and the Alpaca-2 instruction fine-tuned model. These models expand and optimize the Chinese vocabulary of the original Llama-2, use large-scale Chinese data for incremental pre-training, and further improve fundamental Chinese semantics and instruction understanding. ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 23
    LLaMA

    Inference code for Llama models

    “Llama” is the repository from Meta (formerly Facebook/Meta Research) containing the inference code for LLaMA (Large Language Model Meta AI) models. It provides utilities to load pre-trained LLaMA model weights, run inference (text generation, chat, completions), and work with tokenizers, along with download scripts and shell helpers to fetch model weights under the correct licensing and permissions.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    kotaemon

    An open-source RAG-based tool for chatting with your documents

    An open-source, clean, and customizable RAG UI for chatting with your documents, built with both end users and developers in mind. It serves as a ready-to-use QA interface for end users and as a foundation for developers who want to build their own RAG pipeline.
    Downloads: 3 This Week
    Last Update:
    See Project
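    As a concept illustration of the retrieve-then-generate loop that a RAG UI like this sits on top of (not kotaemon's API; the toy embedding stands in for a real embedding model):

      # Concept sketch: embed documents, retrieve the closest one, and build an augmented prompt.
      from math import sqrt

      def embed(text):                       # toy stand-in for a real embedding model
          vocab = "abcdefghijklmnopqrstuvwxyz"
          return [text.lower().count(c) for c in vocab]

      def cosine(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
          return dot / norm if norm else 0.0

      docs = ["Invoices are due within 30 days.", "Refunds require a receipt."]
      index = [(d, embed(d)) for d in docs]

      question = "When are invoices due?"
      context = max(index, key=lambda pair: cosine(pair[1], embed(question)))[0]
      prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
      print(prompt)  # this prompt would then go to an LLM for the final answer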
  • 25
    cpp-ipc

    C++ IPC Library: A high-performance inter-process communication

    C++ IPC Library: A high-performance inter-process communication using shared memory on Linux/Windows.
    Downloads: 0 This Week
    Last Update:
    See Project