Open Source Python Large Language Models (LLM) - Page 5

Python Large Language Models (LLM)


Browse free open source Python Large Language Models (LLM) and projects below. Use the toggles on the left to filter open source Python Large Language Models (LLM) by OS, license, language, programming language, and project status.

  • 1
    Unstructured.IO

    Open source libraries and APIs to build custom preprocessing pipelines

    The unstructured library provides open-source components for ingesting and pre-processing images and text documents such as PDFs, HTML, Word docs, and many more. Its use cases revolve around streamlining and optimizing the data-processing workflow for LLMs. unstructured's modular bricks and connectors form a cohesive system that simplifies data ingestion and pre-processing, making it adaptable to different platforms and efficient at transforming unstructured data into structured outputs.
    Downloads: 2 This Week
    Last Update:
    See Project
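    A minimal usage sketch for the unstructured library described above. The file name is a placeholder, and the exact element types returned depend on which unstructured extras are installed.

        # Partition a document into typed elements (Title, NarrativeText, Table, ...).
        # Assumes something like `pip install "unstructured[pdf]"`; "report.pdf" is illustrative.
        from unstructured.partition.auto import partition

        elements = partition(filename="report.pdf")  # auto-detects the file type
        for element in elements:
            print(type(element).__name__, element.text[:80])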
  • 2
    hCaptcha Challenger

    Gracefully face hCaptcha challenges with multimodal LLMs

    hCaptcha Challenger is an open-source automation framework designed to solve hCaptcha verification challenges using computer vision models and multimodal reasoning techniques. The project integrates machine learning models capable of analyzing visual captcha tasks and identifying the correct responses required to pass the verification process. Instead of relying on third-party captcha-solving services or browser scripts, the system operates independently by using pretrained neural networks that can classify images, detect objects, and interpret spatial relationships. The framework includes support for multiple types of captcha challenges such as object selection, drag-and-drop puzzles, and image labeling tasks. It implements an agent-style workflow where the system interprets the challenge prompt, selects the appropriate vision model, and generates the required interaction automatically.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 3
    how-to-optim-algorithm-in-cuda

    How to optimize common algorithms in CUDA

    how-to-optim-algorithm-in-cuda is an open educational repository focused on teaching developers how to optimize algorithms for high-performance execution on GPUs using CUDA. The project combines technical notes, code examples, and practical experiments that demonstrate how common computational kernels can be optimized to improve speed and memory efficiency. Instead of presenting only theoretical explanations, the repository includes hand-written CUDA implementations of fundamental operations such as reductions, element-wise computations, softmax, and attention mechanisms. These examples show how different optimization techniques influence performance on modern GPU hardware and allow readers to experiment with real implementations. The repository also contains extensive learning notes that summarize CUDA programming concepts, GPU architecture details, and performance engineering strategies.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 4
    text-extract-api

    Document (PDF, Word, PPTX, ...) extraction and parsing API

    text-extract-api is an open-source service designed to extract readable text from a wide variety of document formats through a simple API interface. The project focuses on converting complex files such as PDFs, images, scanned documents, and office files into structured plain text that can be processed by downstream applications or language models. Instead of requiring developers to integrate multiple document parsing libraries individually, the system centralizes text extraction capabilities into a unified API that standardizes the output. The platform supports automated processing pipelines that detect file types and apply the appropriate extraction method to obtain the most accurate text representation possible. It can be integrated into document analysis systems, knowledge retrieval tools, and AI pipelines that rely on clean textual data. The architecture is designed to be lightweight and easily deployable, making it suitable for both local installations and cloud environments.
    Downloads: 2 This Week
    Last Update:
    See Project
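    A hypothetical client-side sketch of calling such an extraction service over HTTP. The URL, route, and response fields below are assumptions for illustration, not the project's documented interface.

        # Assumed endpoint and response shape; consult the project's docs for the real API.
        import requests

        with open("invoice.pdf", "rb") as f:  # placeholder file name
            resp = requests.post("http://localhost:8000/extract", files={"file": f})
        resp.raise_for_status()
        print(resp.json().get("text", ""))  # assumed response shape: {"text": "..."}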
  • 5
    tiny-llm

    A course of learning LLM inference serving on Apple Silicon

    tiny-llm is an educational open-source project designed to teach system engineers how large language model inference and serving systems work by building them from scratch. The project is structured as a guided course that walks developers through the process of implementing the core components required to run a modern language model, including attention mechanisms, token generation, and optimization techniques. Rather than relying on high-level machine learning frameworks, the codebase uses mostly low-level array and matrix manipulation APIs so that developers can understand exactly how model inference works internally. The project demonstrates how to load and run models such as Qwen-style architectures while progressively implementing performance improvements like KV caching, request batching, and optimized attention mechanisms. It also introduces concepts behind modern LLM serving systems that resemble simplified versions of production inference engines such as vLLM.
    Downloads: 2 This Week
    Last Update:
    See Project
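    A conceptual sketch (not tiny-llm's own code) of why the KV cache mentioned above speeds up decoding: at each step only the newest token's key and value are computed, while earlier ones are reused from the cache.

        import numpy as np

        d = 64                      # hidden size of the toy example
        k_cache, v_cache = [], []   # grows by one entry per generated token

        def decode_step(x):
            # x stands in for the Q/K/V projections of the newest token's hidden state
            q, k, v = x, x, x
            k_cache.append(k)
            v_cache.append(v)
            K, V = np.stack(k_cache), np.stack(v_cache)   # (t, d), reused every step
            scores = K @ q / np.sqrt(d)
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            return weights @ V       # attention output for the newest token only

        out = decode_step(np.random.randn(d))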
  • 6
    Qwen2.5-Coder

    Qwen2.5-Coder is the code-specialized version of the Qwen2.5 large language model

    Qwen2.5-Coder, developed by QwenLM, is an advanced open-source code generation model designed for developers seeking powerful and diverse coding capabilities. It includes multiple model sizes—ranging from 0.5B to 32B parameters—providing solutions for a wide array of coding needs. The model supports over 92 programming languages and offers exceptional performance in generating code, debugging, and mathematical problem-solving. Qwen2.5-Coder, with its long context length of 128K tokens, is ideal for a variety of use cases, from simple code assistants to complex programming scenarios, matching the capabilities of models like GPT-4o.
    Downloads: 26 This Week
    Last Update:
    See Project
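    A sketch of loading a Qwen2.5-Coder checkpoint with Hugging Face transformers; the repository id is one of several published sizes and the generation settings are illustrative.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # other sizes range from 0.5B to 32B
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

        prompt = "Write a Python function that checks whether a string is a palindrome."
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=256)
        print(tokenizer.decode(output[0], skip_special_tokens=True))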
  • 7
    AI Engineering Hub

    In-depth tutorials on LLMs, RAG, and real-world AI agent applications

    The AI Engineering Hub repository is a large open-source collection of hands-on projects, tutorials, and real-world AI engineering resources designed to help developers learn and build with modern AI technologies, especially large language models (LLMs), retrieval-augmented generation (RAG), and agent-based systems. It includes more than 90 production-ready projects across skill levels, organized into beginner, intermediate, and advanced categories to guide users progressively from simple experiments to complex AI workflows. Projects range from OCR applications and local chatbot UIs to multimodal RAG systems and multi-agent automation pipelines, making the hub valuable both as a learning resource and as a practical reference. The repository provides in-depth notebooks, example code, and integration patterns that illustrate how to implement, adapt, and scale AI features in real applications.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 8
    Agentic RAG for Dummies

    A modular agentic RAG system built with LangGraph

    Agentic RAG for Dummies is an educational repository that demonstrates how to build retrieval-augmented generation systems combined with autonomous AI agents. The project explains the principles behind agentic retrieval pipelines where language models can dynamically decide when to retrieve information, analyze results, and plan further actions. Instead of relying on static retrieval pipelines, the system shows how agents can orchestrate retrieval, reasoning, and tool usage in a more flexible decision loop. The repository provides practical examples and tutorials that guide developers through building agentic RAG systems using modern AI frameworks. These examples illustrate how agents can access knowledge bases, retrieve documents, analyze them, and refine their queries during multi-step reasoning processes. The repository focuses on simplifying complex architectural concepts so that beginners can understand how agentic retrieval systems are constructed.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 9
    AirLLM

    AirLLM: 70B model inference on a single 4GB GPU

    AirLLM is an open source Python library that enables extremely large language models to run on consumer hardware with very limited GPU memory. The project addresses one of the main barriers to local LLM experimentation by introducing a memory-efficient inference technique that loads model layers sequentially rather than storing the entire model in GPU memory. This layer-wise inference approach allows models with tens of billions of parameters to run on devices with only a few gigabytes of VRAM. AirLLM preprocesses model weights so that each transformer layer can be loaded independently during computation, reducing the memory footprint while still performing full inference. As a result, developers can experiment with models that previously required specialized high-end GPUs.
    Downloads: 1 This Week
    Last Update:
    See Project
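    A sketch following AirLLM's layer-wise loading pattern; the class name, model id, and generation arguments follow its README-style examples and may differ between versions.

        from airllm import AutoModel  # assumed entry point; older releases used model-specific classes

        model = AutoModel.from_pretrained("garage-bAInd/Platypus2-70B-instruct")  # illustrative 70B checkpoint
        tokens = model.tokenizer(["What is the capital of France?"],
                                 return_tensors="pt", return_attention_mask=False)
        output = model.generate(tokens["input_ids"], max_new_tokens=20, use_cache=True)
        print(model.tokenizer.decode(output[0]))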
  • 10
    All-in-RAG

    Large Model Application Development Practice 1

    All-in-RAG is an open-source educational project designed to teach developers how to build applications using retrieval-augmented generation techniques. The repository provides a structured learning path that covers both theoretical foundations and practical implementation steps for RAG systems. It explains the full development pipeline required to create knowledge-aware AI assistants, including data preparation, document indexing, vector embedding generation, and retrieval strategies. The project also explores advanced topics such as hybrid retrieval methods, query optimization, and evaluation techniques for improving system accuracy. Alongside theoretical explanations, the repository includes hands-on exercises and example projects that demonstrate how to build production-ready RAG systems. These projects guide developers through the process of integrating vector databases, embedding models, and large language models into a unified application.
    Downloads: 1 This Week
    Last Update:
    See Project
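    An illustrative RAG skeleton in the spirit of the pipeline described above, not code from the repository: embed is a placeholder embedding model, generate a placeholder LLM call, and retrieval is brute-force cosine similarity.

        import numpy as np

        def embed(text: str) -> np.ndarray:           # placeholder embedding model
            rng = np.random.default_rng(abs(hash(text)) % (2**32))
            return rng.standard_normal(128)

        def generate(prompt: str) -> str:             # placeholder LLM call
            return f"[answer grounded in prompt of {len(prompt)} chars]"

        docs = ["RAG retrieves relevant documents before generation.",
                "Vector databases store embeddings for similarity search."]
        index = np.stack([embed(d) for d in docs])

        def answer(question: str, k: int = 1) -> str:
            q = embed(question)
            sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
            context = "\n".join(docs[i] for i in np.argsort(-sims)[:k])
            return generate(f"Context:\n{context}\n\nQuestion: {question}")

        print(answer("What does RAG do before generating?"))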
  • 11
    Autolabel

    Label, clean and enrich text datasets with LLMs

    Autolabel is a Python library to label, clean, and enrich datasets with Large Language Models (LLMs). It can label data for NLP tasks such as classification, question answering, named entity recognition, entity matching, and more, and it works seamlessly with commercial and open-source LLMs from providers such as OpenAI, Anthropic, Hugging Face, Google, and others.
    Downloads: 1 This Week
    Last Update:
    See Project
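    A sketch of Autolabel's config-driven workflow; the configuration file and dataset path are placeholders, and the exact plan/run signatures may differ between versions.

        from autolabel import LabelingAgent

        # The config describes the task (e.g. classification), the label set,
        # and which LLM provider to use; see the project's docs for the schema.
        agent = LabelingAgent(config="classification_config.json")
        agent.plan("movie_reviews.csv")           # dry run: cost estimate and example prompts
        labeled = agent.run("movie_reviews.csv")  # labels the dataset with the configured LLM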
  • 12
    Claude Code Tools

    Practical productivity tools for Claude Code and Codex-CLI

    Claude Code Tools is an open-source collection of command-line utilities and productivity plugins designed to enhance developer workflows when using AI coding agents such as Claude Code and Codex-CLI. The project focuses on solving common problems encountered in AI-assisted development environments, including managing session history, automating terminal interactions, and maintaining context across multiple coding sessions. It includes tools that allow developers to search conversation logs quickly, manage environment variables securely, and execute interactive terminal workflows that AI agents can control. Some components enable Claude Code to interact with terminal multiplexers such as tmux so that it can run programs, debug applications, and interact with scripts that require user input. The toolkit also provides safety mechanisms that prevent potentially dangerous shell commands from being executed automatically by AI agents.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 13
    CodeLlama

    Inference code for CodeLlama models

    Code Llama is a family of Llama-based code models optimized for programming tasks such as code generation, completion, and repair, with variants specialized for base coding, Python, and instruction following. The repo documents the sizes and capabilities (e.g., 7B, 13B, 34B) and highlights features like infilling and large input context to support real IDE workflows. It targets both general software synthesis and language-specific productivity, offering strong performance among open models at release time. Typical usage includes prompt-driven generation, function or class completion, and zero-shot adherence to natural-language instructions about code changes. The ecosystem provides multiple distributions (e.g., HF format) so developers can integrate with standard toolchains and serving stacks. As part of the broader Llama effort, Code Llama complements instruction-tuned chat models by focusing on code-centric tasks and editor integrations.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 14
    Context Engineering

    A frontier, first-principles handbook

    Context Engineering is a comprehensive, open-source project serving as a first-principles handbook for the emerging discipline of context design and optimization in AI. Moving beyond traditional prompt engineering, this repository defines and explores how to craft and provide complete context payloads — not just single prompts — to large language models so they can perform tasks more reliably and intelligently. It takes inspiration from thought leaders like Andrej Karpathy and bridges theory with practical examples, offering structured guidance on context orchestration, memory, retrieval, and state control within AI workflows. With extensive materials drawn from research, surveys, and visual explanations, the project acts as both a learning resource and a reference for practitioners looking to improve model behavior by engineering richer inputs.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 15
    DATAGEN

    AI-driven multi-agent research assistant automating hypothesis generation

    DATAGEN is an AI-driven multi-agent research and data analysis platform designed to automate complex analytical workflows. The system coordinates multiple specialized AI agents that collaborate to perform tasks such as hypothesis generation, data collection, analysis, visualization, and report creation. Instead of requiring users to manually orchestrate each stage of a research process, the platform allows these agents to coordinate automatically and handle the workflow end-to-end. The project integrates several modern AI frameworks including LangChain, LangGraph, and large language models to manage reasoning and data processing tasks. Through this architecture, the system can combine structured data analysis with natural language reasoning to generate insights and research outputs. The platform is designed for researchers, analysts, and developers who want to accelerate data exploration and automate parts of the research lifecycle.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 16
    DriveLM

    Driving with Graph Visual Question Answering

    DriveLM is a research-oriented framework and dataset designed to explore how vision-language models can be integrated into autonomous driving systems. The project introduces a new paradigm called graph visual question answering that structures reasoning about driving scenes through interconnected tasks such as perception, prediction, planning, and motion control. Instead of treating autonomous driving as a purely sensor-driven pipeline, DriveLM frames it as a reasoning problem where models answer structured questions about the environment to guide decision making. The system includes DriveLM-Data, a dataset built on driving environments such as nuScenes and CARLA, where human-written reasoning steps connect different layers of driving tasks. This design allows models to learn relationships between objects, behaviors, and navigation decisions through graph-structured logic.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 17
    Fun Audio Chat

    Large Audio Language Model built for natural interactions

    Fun Audio Chat is an interactive voice-first conversational AI platform designed to let users engage in natural spoken dialogue with large language models in real time, turning speech into context-aware responses while maintaining a smooth back-and-forth experience. It combines speech recognition, audio processing, and AI generation so users can speak simply and receive spoken replies, enabling applications such as virtual assistants, voice bots, and hands-free chat interfaces. The system supports dynamic audio input and output, meaning it can handle different voices, tones, and conversational contexts without forcing users into typed interactions. With real-time streaming, it minimizes latency and delivers responses quickly, making it suitable for applications where responsiveness matters, such as interactive demos, accessibility tools, and conversational games.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 18
    Guidance

    A guidance language for controlling large language models

    Guidance is an efficient programming paradigm for steering language models. With Guidance, you can control how output is structured and get high-quality output for your use case—while reducing latency and cost vs. conventional prompting or fine-tuning. It allows users to constrain generation (e.g. with regex and CFGs) as well as to interleave control (conditionals, loops, tool use) and generation seamlessly.
    Downloads: 1 This Week
    Last Update:
    See Project
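    A small sketch of the constrained-generation style described above; the model id is illustrative and the exact guidance API surface has evolved across releases.

        from guidance import models, gen

        lm = models.Transformers("microsoft/Phi-3-mini-4k-instruct")  # any local HF model
        # Constrain the answer to digits only via a regex, and capture it by name.
        lm += "Name a prime number between 10 and 20: " + gen(regex=r"\d+", name="answer")
        print(lm["answer"])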
  • 19
    Huatuo-Llama-Med-Chinese

    Instruction-tuning LLM with Chinese Medical Knowledge

    Huatuo-Llama-Med-Chinese is an open-source project that develops medical-domain large language models by instruction-tuning existing models using Chinese medical knowledge. The project builds specialized models by fine-tuning architectures such as LLaMA, Alpaca-Chinese, and Bloom with curated medical datasets. These datasets are constructed from medical knowledge graphs, academic literature, and question-answer pairs designed to teach models how to respond accurately to healthcare-related queries. The goal of the project is to improve the reliability and domain expertise of language models when answering medical questions or assisting with healthcare-related tasks. By combining domain-specific training data with instruction-tuning techniques, the project produces models capable of generating more accurate medical responses than general-purpose models.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 20
    Integuru v0

    The first AI agent that builds permissionless integrations

    Integuru is an open-source AI agent designed to automatically create integrations between software platforms by reverse-engineering their internal APIs. Instead of relying on official developer documentation or publicly available APIs, the system analyzes network traffic generated by user interactions within a web application. Developers capture browser requests and authentication data, which the agent then uses to infer the structure of the platform’s internal API endpoints. Based on this information, the system generates executable code that can replicate the original action programmatically. This approach allows developers to automate workflows and build integrations with services that do not provide official APIs or developer tools. The project is designed as a research platform for exploring AI-driven automation and integration generation.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 21
    JSON_REPAIR

    A Python module to repair invalid JSON from LLMs

    json_repair is an open-source Python library designed to automatically fix malformed JSON data and convert it into valid, parseable structures. The tool is particularly useful in scenarios where JSON output is generated by large language models or external services that may produce syntactically invalid responses. Instead of failing when encountering errors such as missing quotes, trailing commas, or incomplete objects, the library analyzes the malformed data and reconstructs it into valid JSON. The repair process can also be combined with optional JSON Schema validation to enforce structural constraints and ensure the output conforms to expected data types and formats. Developers can integrate the library into applications as a drop-in replacement for standard JSON parsing functions, allowing systems to tolerate imperfect structured data without crashing.
    Downloads: 1 This Week
    Last Update:
    See Project
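    A minimal sketch of repairing malformed LLM output with json_repair; the return_objects flag reflects the documented usage at the time of writing and may vary by version.

        from json_repair import repair_json

        broken = '{"name": "Ada", "skills": ["python", "math",], }'   # trailing commas
        print(repair_json(broken))                                    # valid JSON string
        data = repair_json(broken, return_objects=True)               # or a parsed Python object
        print(data["skills"])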
  • 22
    LLM CLI

    Access large language models from the command-line

    A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.
    Downloads: 1 This Week
    Last Update:
    See Project
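    A sketch of the Python API that accompanies the CLI; which models are available depends on the plugins installed and the API keys configured, so the model name below is illustrative.

        import llm

        model = llm.get_model("gpt-4o-mini")   # any installed model alias works here
        response = model.prompt("Suggest three short names for a pet pelican")
        print(response.text())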
  • 23
    LLaMA Efficient Tuning

    Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)

    Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)
    Downloads: 1 This Week
    Last Update:
    See Project
  • 24
    LLaMA-Mesh

    Unifying 3D Mesh Generation with Language Models

    LLaMA-Mesh is a research framework that extends large language models so they can understand and generate 3D mesh data alongside text. The system introduces a method for representing 3D meshes in a textual format by encoding vertex coordinates and face definitions as sequences that can be processed by a language model. By serializing 3D geometry into text tokens, the approach allows existing transformer architectures to generate and interpret 3D models without requiring specialized visual tokenizers. The project includes a supervised fine-tuning dataset composed of interleaved text and mesh data, allowing the model to learn relationships between textual descriptions and 3D structures. As a result, the model can generate mesh models directly from text prompts, explain mesh structures in natural language, or output mixed text-and-mesh sequences. This unified representation enables a single model to operate across both textual and spatial domains.
    Downloads: 1 This Week
    Last Update:
    See Project
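    A toy illustration of the core idea behind LLaMA-Mesh: a mesh is serialized as plain text that a language model can read and emit. The OBJ-like format below is illustrative; the project defines its own coordinate quantization.

        vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
        faces = [(1, 2, 3)]   # 1-indexed vertex references, as in OBJ files

        lines = [f"v {x:.2f} {y:.2f} {z:.2f}" for x, y, z in vertices]
        lines += [f"f {a} {b} {c}" for a, b, c in faces]
        mesh_as_text = "\n".join(lines)
        print(mesh_as_text)   # this string can be embedded directly in an LLM prompt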
  • 25
    Language Models

    Explore large language models in 512MB of RAM

    languagemodels is a lightweight Python library designed to simplify experimentation with large language models while maintaining extremely low hardware requirements. The project focuses on enabling developers and students to explore language model capabilities without needing expensive GPUs or large cloud infrastructures. By using small and optimized models, the library allows LLM inference to run in environments with limited resources, sometimes requiring only a few hundred megabytes of memory. The package provides simple APIs that allow developers to generate text, perform semantic search, classify text, and answer questions using local models. It is particularly useful for educational purposes, as it demonstrates the fundamental mechanics of language model inference and prompt-based applications. The repository includes multiple example applications such as chatbots, document question answering systems, and information retrieval tools.
    Downloads: 1 This Week
    Last Update:
    See Project
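    A sketch of the languagemodels package's high-level helpers; the function names follow its README-style examples and may differ between versions, and small local models are downloaded on first use.

        import languagemodels as lm

        print(lm.do("Translate 'good morning' to French"))      # small local model, CPU-friendly
        lm.store_doc("The Eiffel Tower is in Paris.")            # tiny built-in semantic index
        print(lm.get_doc_context("Where is the Eiffel Tower?"))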