Search Results for "llama-cpp-python.whl" - Page 2

Showing 438 open source projects for "llama-cpp-python.whl"

  • 1
    llama2.c

    llama2.c

    Inference Llama 2 in one file of pure C

    llama2.c is a minimalist implementation of the Llama 2 language model architecture designed to run entirely in pure C. Created by Andrej Karpathy, this project offers an educational and lightweight framework for performing inference on small Llama 2 models without external dependencies. It provides a full training and inference pipeline: models can be trained in PyTorch and later executed using a concise 700-line C program (run.c).
    Downloads: 2 This Week
    Last Update:
    See Project
  • 2
    restc-cpp C++ library

    restc-cpp C++ library

    Modern C++ REST Client library

    The magic that takes the pain out of accessing JSON APIs from C++. It formulates an HTTP request to a REST API server. Then, it transforms the JSON-formatted payload in the reply into a native C++ object (GET). It serializes a native C++ object or a container of C++ objects into a JSON payload and sends it to the REST API server (POST, PUT). It formulates an HTTP request to the REST API without serializing any data in either direction (typically DELETE). It uploads a stream of data, like a...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    CodeLlama

    CodeLlama

    Inference code for CodeLlama models

    Code Llama is a family of Llama-based code models optimized for programming tasks such as code generation, completion, and repair, with variants specialized for base coding, Python, and instruction following. The repo documents the sizes and capabilities (e.g., 7B, 13B, 34B) and highlights features like infilling and large input context to support real IDE workflows.
    Downloads: 1 This Week
    Last Update:
    See Project
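    A minimal sketch of running one of these models for code completion through Hugging Face transformers rather than the repo's own inference scripts; the checkpoint id and generation settings below are illustrative assumptions, not something the entry specifies.

        # Hypothetical example: greedy code completion with a Code Llama checkpoint.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "codellama/CodeLlama-7b-Python-hf"  # assumed Hub checkpoint
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

        prompt = "def fibonacci(n: int) -> int:\n    "
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
        print(tokenizer.decode(out[0], skip_special_tokens=True))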
  • 4
    Unsloth

    Unsloth

    Finetune Llama 3.2, Mistral, Phi & Gemma LLMs 2-5x faster

    Unsloth is a framework designed to significantly improve the performance of Llama 3.3, DeepSeek-R1, and other reasoning large language models (LLMs). It optimizes these models to run up to 2x faster while using 70% less memory. Unsloth aims to make finetuning large models more efficient, offering users a simple, resource-efficient solution for customizing LLMs with their datasets. It provides a user-friendly experience through free notebooks and the ability to export finetuned models to various formats.
    Downloads: 6 This Week
    Last Update:
    See Project
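    A minimal sketch of the finetuning entry point the entry alludes to: load a 4-bit model with Unsloth's FastLanguageModel and attach LoRA adapters. The model name and LoRA hyperparameters are illustrative assumptions.

        # Hypothetical example: prepare a model for parameter-efficient finetuning.
        from unsloth import FastLanguageModel

        model, tokenizer = FastLanguageModel.from_pretrained(
            model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed 4-bit checkpoint
            max_seq_length=2048,
            load_in_4bit=True,  # the memory savings described above
        )

        # Only the small LoRA adapter weights get trained.
        model = FastLanguageModel.get_peft_model(
            model,
            r=16,
            lora_alpha=16,
            target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        )
        # The (model, tokenizer) pair then plugs into a standard trainer
        # (e.g. TRL's SFTTrainer) over your own dataset.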
  • 5
    LLamaSharp

    LLamaSharp

    C#/.NET binding of llama.cpp, including LLaMa/GPT model inference

    The C#/.NET binding of llama.cpp. It provides APIs to run inference with LLaMA models and deploy them in a local environment. It works on Windows, Linux, and macOS without requiring you to compile llama.cpp yourself. Its performance is close to llama.cpp. Furthermore, it provides integrations with other projects such as BotSharp to provide higher-level applications and UI.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    Text Generation Web UI

    Text Generation Web UI

    A gradio web UI for running Large Language Models like LLaMA

    ...Very efficient text streaming. Parameter presets, 8-bit mode. Layer splitting across GPU(s), CPU, and disk. CPU mode, FlexGen, DeepSpeed ZeRO-3, API with streaming and without streaming. LLaMA model, including 4-bit GPTQ. RWKV model, LoRA (loading and training), Softprompts, and extensions.
    Downloads: 7 This Week
    Last Update:
    See Project
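    A minimal sketch of the "API with streaming and without streaming" mentioned above, assuming the web UI is running locally with its API enabled and exposing an OpenAI-compatible route on port 5000; the address, route, and payload are assumptions about a default setup.

        # Hypothetical example: one non-streaming completion request to a local instance.
        import requests

        resp = requests.post(
            "http://127.0.0.1:5000/v1/chat/completions",  # assumed local endpoint
            json={
                "messages": [{"role": "user", "content": "Summarize what LoRA is."}],
                "max_tokens": 128,
                "stream": False,  # set True to stream tokens instead
            },
            timeout=120,
        )
        print(resp.json()["choices"][0]["message"]["content"])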
  • 7
    LLM Foundry

    LLM Foundry

    LLM training code for MosaicML foundation models

    Introducing MPT-7B, the first entry in our MosaicML Foundation Series. MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k. Large language models (LLMs) are changing the world, but for those outside well-resourced industry labs, it can be extremely difficult to train and deploy these models. This has led to a flurry of activity centered on open-source LLMs, such as the LLaMA series from Meta, the Pythia series from EleutherAI, the StableLM series from StabilityAI, and the OpenLLaMA model from Berkeley AI Research.
    Downloads: 0 This Week
    Last Update:
    See Project
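    A minimal sketch of loading the MPT-7B checkpoint described above with Hugging Face transformers; the Hub id and the trust_remote_code flag reflect how MosaicML publishes MPT models, but treat both as assumptions here.

        # Hypothetical example: text generation with the MPT-7B base model.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        name = "mosaicml/mpt-7b"  # assumed Hub id
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(
            name,
            trust_remote_code=True,  # MPT ships custom modeling code
        )

        inputs = tokenizer("Large language models are", return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=32)
        print(tokenizer.decode(out[0], skip_special_tokens=True))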
  • 8
    bert4torch

    bert4torch

    An elegant PyTorch implementation of transformers

    An elegant PyTorch implementation of transformers.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    Jan.ai

    Jan.ai

    Open source alternative to ChatGPT that runs 100% offline

    ...It allows you to download and run local large language models (LLMs) offline while also offering optional integration with cloud-based model providers, giving you full control over your data and AI interactions. Download and run LLMs (Llama, Gemma, Qwen, GPT-oss etc.) from HuggingFace. Connect to GPT models via OpenAI, Claude models via Anthropic, Mistral, Groq, and others. Create specialized AI assistants for your tasks. MCP integration for agentic capabilities.
    Downloads: 37 This Week
    Last Update:
    See Project
  • 10
    Eliza

    Eliza

    Autonomous agents for everyone

    ...Full support for voice, text, and media interactions. Built-in RAG memory system, document processing, media analysis, and autonomous trading capabilities. Supports multiple AI models including Llama, GPT-4, and Claude. Create custom actions, add new platform integrations, and extend functionality through a modular plugin system. Full TypeScript support.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 11
    DeepSeek R1

    DeepSeek R1

    Open-source, high-performance AI model with advanced reasoning

    ...This approach has resulted in performance comparable to leading models like OpenAI's o1, while maintaining cost-efficiency. To further support the research community, DeepSeek has released distilled versions of the model based on architectures such as LLaMA and Qwen.
    Downloads: 61 This Week
    Last Update:
    See Project
  • 12
    fullmoon

    fullmoon

    Chat with private and local large language models

    ...Users can personalize the app by adjusting themes, fonts, and system prompts, and it integrates with Apple's Shortcuts for enhanced functionality. Fullmoon supports models like Llama-3.2-1B-Instruct-4bit and Llama-3.2-3B-Instruct-4bit, facilitating efficient on-device AI interactions without the need for an internet connection.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    RedPanda C++

    RedPanda C++

    A powerful, lightweight, and cross-platform C/C++ IDE

    Red Panda C++ is a lightweight yet powerful C/C++/GNU Assembly IDE. It provides a coding experience similar to VS Code and CLion, but is much more lightweight than either. Highlights of its new and enhanced features: high-DPI support, code IntelliSense (code completion suggestions while editing), syntax checking while editing, a greatly improved debugger (local variables, call stack, memory view), themes and color schemes, UTF-8 encoding support, greatly improved search/replace...
    Downloads: 1,437 This Week
    Last Update:
    See Project
  • 14
    Orpheus TTS

    Orpheus TTS

    Towards Human-Sounding Speech

    Orpheus TTS is a state-of-the-art open-source text-to-speech system built on a Llama-3B backbone, treating speech synthesis as a large language model problem instead of a traditional TTS pipeline. It is designed to produce human-like speech with natural intonation, emotion, and rhythm, targeting quality comparable to or better than many closed-source systems. The project ships both pretrained and finetuned English models, as well as a family of multilingual models released as a research preview, and includes data-processing scripts so users can train or finetune their own variants. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 15
    GPT4All

    GPT4All

    Run Local LLMs on Any Device. Open-source

    GPT4All is an open-source project that allows users to run large language models (LLMs) locally on their desktops or laptops, eliminating the need for API calls or GPUs. The software provides a simple, user-friendly application that can be downloaded and run on various platforms, including Windows, macOS, and Ubuntu, without requiring specialized hardware. It integrates with the llama.cpp implementation and supports multiple LLMs, allowing users to interact with AI models privately. This...
    Downloads: 115 This Week
    Last Update:
    See Project
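    A minimal sketch of running a model locally through the GPT4All Python bindings, with no API key or GPU; the model filename is an assumption, and the bindings download it on first use.

        # Hypothetical example: local chat with the gpt4all package.
        from gpt4all import GPT4All

        model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # assumed model file
        with model.chat_session():
            reply = model.generate("Explain what a GGUF file is.", max_tokens=200)
            print(reply)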
  • 16
    tinygrad

    tinygrad

    Deep learning framework

    This may not be the best deep learning framework, but it is a deep learning framework. Due to its extreme simplicity, it aims to be the easiest framework to add new accelerators to, with support for both inference and training. If XLA is CISC, tinygrad is RISC.
    Downloads: 0 This Week
    Last Update:
    See Project
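    A minimal sketch of tinygrad's Tensor API covering both inference and training (a tiny forward pass plus autograd); the import path follows recent releases and the values are illustrative.

        # Hypothetical example: forward pass, backward pass, gradient readout.
        from tinygrad import Tensor

        x = Tensor([[1.0, 2.0, 3.0]])
        w = Tensor([[0.5, -0.5], [0.1, 0.2], [0.3, 0.0]], requires_grad=True)

        loss = x.matmul(w).relu().sum()  # forward pass
        loss.backward()                  # autograd populates w.grad

        print(loss.numpy(), w.grad.numpy())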
  • 17
    llamafile

    llamafile

    Distribute and run LLMs with a single file

    ...We're doing that by combining llama.cpp with Cosmopolitan Libc into one framework that collapses all the complexity of LLMs down to a single-file executable (called a "llamafile") that runs locally on most computers, with no installation. The easiest way to try it for yourself is to download our example llamafile for the LLaVA model (license: LLaMA 2, OpenAI). LLaVA is a new LLM that can do more than just chat; you can also upload images and ask it questions about them. With llamafile, this all happens locally; no data ever leaves your computer.
    Downloads: 14 This Week
    Last Update:
    See Project
  • 18
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    gradle-cpp

    gradle-cpp

    Intended to make working with C++ in Gradle more comfortable.

    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    Axolotl

    Axolotl

    Go ahead and axolotl questions

    Axolotl is a powerful and flexible framework for fine-tuning large language models on custom datasets. Built for researchers and developers, Axolotl simplifies the process of adapting LLMs for specific tasks, including chat, code generation, and instruction following. It supports a wide variety of model architectures and offers out-of-the-box optimization strategies for efficient training.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 21
    LocalAI

    LocalAI

    Self-hosted, community-driven, local OpenAI compatible API

    ...Drop-in replacement for OpenAI running LLMs on consumer-grade hardware. Free, open source OpenAI alternative. No GPU is required. Runs ggml, GPTQ, ONNX, and TF-compatible models: llama, gpt4all, rwkv, whisper, vicuna, koala, gpt4all-j, cerebras, falcon, dolly, starcoder, and many others. LocalAI is a drop-in replacement REST API that's compatible with the OpenAI API specification for local inferencing. It allows you to run LLMs (and more) locally or on-prem with consumer-grade hardware, supporting multiple model families that are compatible with the ggml format. ...
    Downloads: 11 This Week
    Last Update:
    See Project
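    A minimal sketch of the drop-in OpenAI compatibility described above: point the standard openai client at a LocalAI instance instead of api.openai.com. The port and model name are assumptions for a local setup.

        # Hypothetical example: chat completion against a local LocalAI server.
        from openai import OpenAI

        client = OpenAI(
            base_url="http://localhost:8080/v1",  # assumed LocalAI address
            api_key="not-needed",                 # the key is ignored locally
        )

        resp = client.chat.completions.create(
            model="llama-3.2-1b-instruct",  # whichever model LocalAI has loaded
            messages=[{"role": "user", "content": "Say hello from a local model."}],
        )
        print(resp.choices[0].message.content)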
  • 22
    SillyTavern

    SillyTavern

    LLM Frontend for Power Users

    Mobile-friendly, Multi-API (KoboldAI/CPP, Horde, NovelAI, Ooba, OpenAI, OpenRouter, Claude, Scale), VN-like Waifu Mode, Horde SD, System TTS, WorldInfo (lorebooks), customizable UI, auto-translate, and more prompt options than you'd ever want or need. Optional Extras server for more SD/TTS options + ChromaDB/Summarize. SillyTavern is a user interface you can install on your computer (and Android phones) that allows you to interact with text generation AIs and chat/roleplay with characters you or the community create. ...
    Downloads: 127 This Week
    Last Update:
    See Project
  • 23
    Lepton AI

    Lepton AI

    A Pythonic framework to simplify AI service building

    A Pythonic framework to simplify AI service building. Cutting-edge AI inference and training, unmatched cloud-native experience, and top-tier GPU infrastructure. Ensure 99.9% uptime with comprehensive health checks and automatic repairs.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 24
    QuivrHQ

    QuivrHQ

    Opinionated RAG for integrating GenAI into your apps

    Quivr is an open-source platform that leverages Retrieval-Augmented Generation (RAG) to integrate Generative AI into applications. It serves as a "second brain," enabling users to build powerful AI-driven assistants that can process and retrieve information efficiently. Quivr supports various large language models and vector stores, providing flexibility and customization for developers.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    VoxelCore

    VoxelCore

    Voxel game engine in C++ with OpenGL

    VoxelEngine-Cpp is a minimal voxel engine written in modern C++ using OpenGL, GLFW, and GLM, inspired by Minecraft-style block worlds. It offers a clean foundation for learning and experimenting with voxel-based rendering and world generation. With features like chunk loading, Perlin noise terrain generation, and basic lighting, the engine is a perfect starting point for developers who want to create sandbox games or explore the technical aspects of 3D voxel environments.
    Downloads: 5 This Week
    Last Update:
    See Project