Structured outputs for LLMs (usage sketch below)
Python bindings for llama.cpp (usage sketch below)
Run Local LLMs on Any Device. Open-source and available for commercial use
Agentic, Reasoning, and Coding (ARC) foundation models
A high-throughput and memory-efficient inference and serving engine for LLMs (usage sketch below)
Qwen3 is the large language model series developed by the Qwen team
Lightweight package to simplify LLM API calls (usage sketch below)
PandasAI is a Python library that integrates generative AI capabilities into pandas dataframes
GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Interact with your documents using the power of GPT
Powerful mixture-of-experts (MoE) language model optimized for efficiency and performance
CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Operating LLMs in production
Open-source, high-performance AI model with advanced reasoning
Adding guardrails to large language models
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
ChatGLM-6B: An Open Bilingual Dialogue Language Model
Simple, Pythonic building blocks to evaluate LLM applications
File parser optimised for LLM ingestion with no information loss
Open-source observability for your LLM application
Inference code for CodeLlama models
Access large language models from the command-line
Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, etc.)
A high-performance ML model serving framework that offers dynamic batching
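
The sketches below illustrate a few of the entries above. First, structured output extraction: assuming the "Structured outputs for LLMs" entry refers to the instructor library, a minimal sketch might look like the following; the model name and the UserInfo schema are illustrative assumptions, not part of the original entry.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel


class UserInfo(BaseModel):
    # Hypothetical schema used only for illustration.
    name: str
    age: int


# Patch the OpenAI client so completions are validated against the schema.
client = instructor.from_openai(OpenAI())

user = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)
print(user.name, user.age)
```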
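For the llama.cpp Python bindings (llama-cpp-python), a minimal local-inference sketch, assuming a GGUF model file is available on disk; the file path and prompt are placeholders.

```python
from llama_cpp import Llama

# Path is a placeholder; any local GGUF model works.
llm = Llama(model_path="./models/example.gguf", n_ctx=2048)

out = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```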
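For the vLLM inference and serving engine, a minimal offline batch-generation sketch; the Hugging Face model id and sampling settings are illustrative assumptions.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # assumed Hugging Face model id
params = SamplingParams(temperature=0.8, max_tokens=64)

# Batch generation: prompts go in, completed outputs come back together.
outputs = llm.generate(["Write a haiku about GPUs."], params)
for output in outputs:
    print(output.outputs[0].text)
```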
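For the lightweight LLM API-call package (assumed here to be LiteLLM), a minimal provider-agnostic call; the model name is illustrative and the matching API key is assumed to be configured in the environment.

```python
from litellm import completion

messages = [{"role": "user", "content": "Summarize vLLM in one sentence."}]

# The same call shape works across providers; only the model string changes.
response = completion(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```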