Python bindings for llama.cpp
Structured outputs for LLMs
Run Local LLMs on Any Device. Open-source
Port of Facebook's LLaMA model in C/C++
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
Powerful AI language model (MoE) optimized for efficiency/performance
Open-source, high-performance AI model with advanced reasoning
A high-throughput and memory-efficient inference and serving engine
Phi-3.5 for Mac: Locally-run Vision and Language Models
Low-code app builder for RAG and multi-agent AI applications
Framework and no-code GUI for fine-tuning LLMs
A guidance language for controlling large language models
Adding guardrails to large language models
Revolutionizing Database Interactions with Private LLM Technology
Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon)
Operating LLMs in production
⚡ Building applications with LLMs through composability ⚡
PyTorch library of curated Transformer models and their components
Open source libraries and APIs to build custom preprocessing pipelines
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training
Open-source observability for your LLM application
Access large language models from the command-line
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Run 100B+ language models at home, BitTorrent-style