Open-source observability for your LLM application
Collect, organize, use, and share, all in OmniBox
Performance-optimized AI inference on your GPUs
⚡ Building applications with LLMs through composability ⚡
Phi-3.5 for Mac: Locally-run Vision and Language Models
An AI personal assistant for your digital brain
A powerful tool for automated LLM fuzzing
The first AI agent that builds permissionless integrations
Unified framework for building enterprise RAG pipelines
A high-performance ML model-serving framework offering dynamic batching
A Python module to repair invalid JSON from LLMs
AirLLM: 70B inference with a single 4GB GPU
Scalable data pre-processing and curation toolkit for LLMs
The terminal client for Ollama
SDG is a specialized framework
A list of free LLM inference resources accessible via API
Open-Source Financial Large Language Models
Low-latency REST API for serving text-embeddings
BISHENG is an open LLM DevOps platform for next-generation apps
Low-code framework for building custom LLMs, neural networks
The Multi-Agent Framework
Use PEFT or full-parameter fine-tuning for CPT/SFT/DPO/GRPO on 600+ LLMs
Inference Llama 2 in one file of pure C
Replace OpenAI GPT with another LLM in your app
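One of the entries above mentions repairing invalid JSON emitted by LLMs. As a minimal illustration of what such a tool has to handle, here is a stdlib-only sketch (a hypothetical `repair_json` helper, not any listed project's actual API) that strips markdown code fences, closes unbalanced braces and brackets, and drops trailing commas before parsing:

```python
import json
import re

def repair_json(text: str):
    """Best-effort repair of common LLM JSON glitches (hypothetical helper):
    strips markdown fences, closes unbalanced braces/brackets, and removes
    trailing commas before parsing with the standard json module."""
    text = text.strip()
    # 1. Drop markdown code fences the model may have wrapped the JSON in.
    if text.startswith("```"):
        text = "\n".join(
            line for line in text.splitlines()
            if not line.strip().startswith("```")
        )
    # 2. Close any braces/brackets left open, in reverse nesting order,
    #    ignoring bracket characters that appear inside string literals.
    stack, in_string = [], False
    for i, ch in enumerate(text):
        if ch == '"' and (i == 0 or text[i - 1] != "\\"):
            in_string = not in_string
        elif not in_string:
            if ch in "{[":
                stack.append("}" if ch == "{" else "]")
            elif ch in "}]" and stack:
                stack.pop()
    text += "".join(reversed(stack))
    # 3. Remove trailing commas that precede a closing brace/bracket.
    text = re.sub(r",\s*([}\]])", r"\1", text)
    return json.loads(text)
```

For example, `repair_json('```json\n{"items": [1, 2,\n```')` recovers `{"items": [1, 2]}` from a fenced, truncated response. Real repair libraries go further (single quotes, unquoted keys, control characters); this sketch only covers the three failure modes named above.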