Structured outputs for LLMs
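A minimal sketch of what constrained decoding can look like, assuming this entry refers to the Outlines library and its 0.x-style API (the model name and label set are illustrative, not from the original):

    import outlines

    # Load a Hugging Face model and constrain generation to a fixed label set.
    model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")
    classify = outlines.generate.choice(model, ["Positive", "Negative"])
    print(classify("Review: The battery life is fantastic. Sentiment:"))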
Python bindings for llama.cpp
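Assuming this entry is llama-cpp-python, a basic completion call looks roughly like this; the GGUF path is a placeholder for any locally downloaded model:

    from llama_cpp import Llama

    # Load a local GGUF model; the path below is only a placeholder.
    llm = Llama(model_path="models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)
    out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])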
Run local LLMs on any device; open-source
Advanced AI model for language and coding tasks
Port of Facebook's LLaMA model in C/C++
Agentic, Reasoning, and Coding (ARC) foundation models
A high-throughput and memory-efficient inference and serving engine
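If this is vLLM, offline batched generation follows its quickstart pattern; the model name here is only an example:

    from vllm import LLM, SamplingParams

    # Offline batched inference; swap in any Hugging Face model you have access to.
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
    for out in llm.generate(["The capital of France is"], params):
        print(out.outputs[0].text)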
PandasAI is a Python library that integrates generative AI into pandas, enabling conversational data analysis
Low-code app builder for RAG and multi-agent AI applications
Qwen3 is the large language model series developed by the Qwen team
Powerful mixture-of-experts (MoE) AI language model optimized for efficiency and performance
Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, and others)
GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Framework to easily create LLM-powered bots over any dataset
User toolkit for analyzing and interfacing with Large Language Models
A high-performance ML model serving framework that offers dynamic batching
A modular graph-based Retrieval-Augmented Generation (RAG) system
Interact with your documents using the power of GPT
Operating LLMs in production
Access large language models from the command-line
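Assuming this refers to the `llm` command-line tool, it also exposes a small Python API; a rough sketch (the model name is illustrative, and an API key must already be configured):

    import llm

    # Requires a key configured beforehand, e.g. via `llm keys set openai`.
    model = llm.get_model("gpt-4o-mini")
    response = model.prompt("Summarize what a vector database is in one sentence.")
    print(response.text())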
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Building applications with LLMs through composability
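Composability here typically means chaining a prompt template into a model; assuming this entry is LangChain, a minimal sketch using its expression-language pipe operator (the model name and key handling are assumptions):

    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    # Pipe a prompt template into a chat model to form a runnable chain.
    prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
    chain = prompt | ChatOpenAI(model="gpt-4o-mini")  # needs OPENAI_API_KEY set
    print(chain.invoke({"topic": "retrieval-augmented generation"}).content)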
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
A guidance language for controlling large language models
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
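To illustrate the kind of building block such a from-scratch walkthrough covers (a generic sketch, not code from the project), here is single-head scaled dot-product self-attention in PyTorch:

    import torch
    import torch.nn as nn

    class SelfAttention(nn.Module):
        """Single-head scaled dot-product self-attention."""
        def __init__(self, d_in, d_out):
            super().__init__()
            self.W_q = nn.Linear(d_in, d_out, bias=False)
            self.W_k = nn.Linear(d_in, d_out, bias=False)
            self.W_v = nn.Linear(d_in, d_out, bias=False)

        def forward(self, x):                      # x: (batch, seq_len, d_in)
            q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
            scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
            weights = torch.softmax(scores, dim=-1)
            return weights @ v                     # (batch, seq_len, d_out)

    print(SelfAttention(8, 16)(torch.randn(1, 4, 8)).shape)  # torch.Size([1, 4, 16])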