Python bindings for llama.cpp
Structured outputs for LLMs
Run Local LLMs on Any Device. Open-source
Port of Facebook's LLaMA model in C/C++
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
Open-source, high-performance AI model with advanced reasoning
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Powerful AI language model (MoE) optimized for efficiency/performance
A high-throughput and memory-efficient inference and serving engine for LLMs
Low-code app builder for RAG and multi-agent AI applications
Phi-3.5 for Mac: Locally-run Vision and Language Models
Framework and no-code GUI for fine-tuning LLMs
Revolutionizing Database Interactions with Private LLM Technology
A guidance language for controlling large language models
Adding guardrails to large language models
Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon)
MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT Training Pipeline
⚡ Building applications with LLMs through composability ⚡
PyTorch library of curated Transformer models and their components
Open source libraries and APIs to build custom preprocessing pipelines
Operating LLMs in production
Open-source observability for your LLM application
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Zep: A long-term memory store for LLM / Chatbot applications
Integrate cutting-edge LLM technology quickly and easily into your app