Showing 1194 open source projects for "llm"

  • 1
    MLC LLM

    Universal LLM Deployment Engine with ML Compilation

    MLC LLM is a machine learning compiler and deployment framework designed to enable efficient execution of large language models across a wide range of hardware platforms. The project focuses on compiling models into optimized runtimes that run natively on GPUs, mobile processors, web browsers, and edge hardware. By leveraging machine learning compilation techniques, MLC LLM produces high-performance inference engines that maintain consistent APIs across platforms. ...
    Downloads: 25 This Week
  • 2
    LLM Tornado

    The .NET library to build AI agents with 30+ built-in connectors

    LLM Tornado is a provider-agnostic .NET SDK designed to build, orchestrate, and deploy AI agents and workflows with a strong focus on flexibility and integration. It provides a unified interface that connects to more than 30 AI providers and vector databases, allowing developers to switch between models and services without rewriting application logic.
    Downloads: 12 This Week
  • 3
    LLM Wiki

    Open Source Implementation of Karpathy's LLM Wiki

    LLM Wiki is a knowledge management and documentation system designed to organize, generate, and maintain structured information using large language models. It allows users to create interconnected knowledge bases that function similarly to a wiki but are enhanced with AI-driven content generation and summarization. The system emphasizes linking and context, enabling information to be connected across pages and topics for better navigation and understanding.
    Downloads: 12 This Week
  • 4
    LLM Council

    LLM Council works together to answer your hardest questions

    LLM Council is a creative open-source web application by Andrej Karpathy that lets you consult multiple large language models together to answer questions more reliably than querying a single model. Instead of relying on one provider, this application sends your query simultaneously to several LLMs supported via OpenRouter, collects each model’s independent response, and then orchestrates a multi-stage evaluation where the models critique and rank each other’s outputs anonymously. ...
    Downloads: 12 This Week
  • 5
    Superset LLM

    Run an army of Claude Code, Codex, etc. on your machine

    Superset is a development environment and terminal-based platform designed to orchestrate multiple AI coding agents simultaneously within a single workspace. The tool enables developers to run many autonomous coding agents in parallel without the typical overhead of manually managing multiple terminals, repositories, or branches. Each agent task is isolated in its own Git worktree, ensuring that code changes from different agents do not interfere with each other while allowing developers to...
    Downloads: 8 This Week
  • 6
    LLM Datasets

    Curated list of datasets and tools for post-training

    LLM Datasets curates and standardizes datasets commonly used to train and fine-tune large language models, reducing the overhead of hunting down sources and normalizing formats. The repository aims to make datasets easy to inspect and transform, with scripts for downloading, deduping, cleaning, and converting to formats like JSONL that slot into training pipelines.
    Downloads: 4 This Week
  • 7
    LLM Scraper

    Extract structured data from webpages using LLM-powered scraping

    LLM Scraper extracts structured data from webpages using LLM-powered scraping, and provides streaming output and code generation capabilities that help developers build reusable scraping workflows.
    Downloads: 3 This Week
  • 8
    Harbor LLM

    Run a full local LLM stack with one command using Docker

    ...It is intended for local development and experimentation rather than production deployment, giving developers a flexible way to explore AI systems, test configurations, and manage complex LLM stacks without manual wiring or setup overhead.
    Downloads: 2 This Week
  • 9
    Happy-LLM

    Large Language Model Principles and Practice Tutorial from Scratch

    Happy-LLM is an open-source educational project created by the Datawhale AI community that provides a structured and comprehensive tutorial for understanding and building large language models from scratch. The project guides learners through the entire conceptual and practical pipeline of modern LLM development, starting with foundational natural language processing concepts and gradually progressing to advanced architectures and training techniques.
    Downloads: 2 This Week
  • 10
    TensorRT LLM

    TensorRT-LLM provides an easy-to-use Python API

    TensorRT-LLM is an open-source high-performance inference library specifically designed to optimize and accelerate large language model deployment on NVIDIA GPUs. It provides a Python-based API built on top of PyTorch that allows developers to define, customize, and deploy LLMs efficiently across a variety of hardware configurations, from single GPUs to large multi-node clusters.
    Downloads: 1 This Week
  • 11
    RTP-LLM

    Alibaba's high-performance LLM inference engine for diverse apps

    RTP-LLM is an open-source large language model inference acceleration engine developed by Alibaba to provide high-performance serving infrastructure for modern LLM deployments. The system focuses on improving throughput, latency, and resource utilization when running large models in production environments. It achieves this by implementing optimized GPU kernels, batching strategies, and memory management techniques tailored for transformer inference workloads.
    Downloads: 1 This Week
  • 12
    LLM Vision

    Visual intelligence for your home.

    LLM Vision is an open-source integration for Home Assistant that adds multimodal large language model capabilities to smart home environments. The project enables Home Assistant to analyze images, video files, and live camera feeds using vision-capable AI models. Instead of relying only on traditional object detection pipelines, it allows users to send prompts about visual content and receive contextual descriptions or answers about what is happening in camera footage.
    Downloads: 1 This Week
  • 13
    tiny-llm

    A course on LLM inference serving on Apple Silicon

    ...The project demonstrates how to load and run models such as Qwen-style architectures while progressively implementing performance improvements like KV caching, request batching, and optimized attention mechanisms. It also introduces concepts behind modern LLM serving systems that resemble simplified versions of production inference engines such as vLLM.
    Downloads: 1 This Week
  • 14
    LLM Action

    Technical principles related to large models

    LLM-Action is a knowledge and tutorial repository that shares principles, techniques, and real-world experience with large language models (LLMs), focusing on LLM engineering: deployment, optimization, inference, compression, and tooling. It organizes content into domains such as training, inference, compression, alignment, evaluation, pipelines, and applications.
    Downloads: 1 This Week
  • 15
    LLM Gateway

    Route, manage, and analyze your LLM requests across multiple providers

    ...With optional UI, telemetry, and Docker deployment, it's ideal for teams aiming to centralize LLM orchestration and gain visibility into AI usage.
    Downloads: 1 This Week
  • 16
    LLM CLI

    Access large language models from the command-line

    A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.
    Downloads: 1 This Week
  • 17
    LLM Foundry

    LLM training code for MosaicML foundation models

    Introducing MPT-7B, the first entry in our MosaicML Foundation Series. MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k. Large language models (LLMs) are changing the world, but for those outside well-resourced industry labs, it can be extremely difficult to train and deploy...
    Downloads: 1 This Week
  • 18
    LLM X

    LLMX: the easiest third-party local LLM UI for the web

    LLM X is a progressive web application designed to provide a flexible and accessible interface for interacting with large language models directly from the browser. It focuses on delivering a lightweight, installable experience that behaves like a native application while remaining web-based. The platform allows users to connect to various model providers, including local setups such as Ollama, enabling a unified interface for different AI backends.
    Downloads: 0 This Week
  • 19
    llm-ollama

    LLM plugin providing access to models running on an Ollama server

    llm-ollama is a plugin for the LLM CLI ecosystem that enables seamless access to models hosted on an Ollama server through a unified command-line interface. It automatically discovers available models from the connected Ollama instance and registers them for use within the CLI, making it easy to run prompts, chat sessions, and embedding operations without manual configuration.
    Downloads: 0 This Week
  • 20
    any-llm

    Communicate with an LLM provider using a single interface

    any-llm is a unified SDK and platform developed by Mozilla AI that allows developers to interact with multiple large language model providers through a single, consistent interface. Instead of rewriting code for each provider, developers can switch between services like OpenAI, Anthropic, Mistral, and Ollama simply by changing configuration parameters.
    Downloads: 0 This Week
  • 21
    LLM-Pruner

    On the Structural Pruning of Large Language Models

    LLM-Pruner is an open-source framework designed to compress large language models through structured pruning techniques while maintaining their general capabilities. Large language models often require enormous computational resources, making them expensive to deploy and inefficient for many practical applications. LLM-Pruner addresses this issue by identifying and removing non-essential components within transformer architectures, such as redundant attention heads or feed-forward structures. ...
    Downloads: 0 This Week
  • 22
    TAME LLM

    Traditional Mandarin LLMs for Taiwan

    ...The training pipeline leverages high-performance computing infrastructure and frameworks such as NVIDIA NeMo and Megatron to enable large-scale model training. Taiwan-LLM aims to improve language understanding and generation for Traditional Mandarin users by incorporating region-specific datasets and evaluation benchmarks.
    Downloads: 0 This Week
  • 23
    LLM Colosseum

    Benchmark LLMs by fighting in Street Fighter III

    LLM-Colosseum is an experimental benchmarking framework designed to evaluate the capabilities of large language models through gameplay interactions rather than traditional text-based benchmarks. The system places language models inside the environment of the classic video game Street Fighter III, where they must interpret the game state and decide which actions to perform during combat.
    Downloads: 0 This Week
  • 24
    LLM Guard

    The Security Toolkit for LLM Interactions

    ...The toolkit also helps prevent sensitive information leaks by identifying secrets such as API keys or credentials before they are processed by the model. LLM Guard supports both input and output filtering pipelines, allowing developers to sanitize prompts and validate generated responses in real time. The library integrates easily with existing AI frameworks and can be deployed in production environments to enhance the security posture of LLM-based applications.
    Downloads: 0 This Week
  • 25
    LLM-Finetuning

    LLM fine-tuning with PEFT

    LLM-Finetuning is an open educational repository that provides practical notebooks and tutorials for fine-tuning large language models using modern machine learning frameworks. The project focuses on parameter-efficient fine-tuning methods such as LoRA and QLoRA, which allow large models to be adapted to new tasks without requiring full retraining.
    Downloads: 0 This Week