Get up and running with Llama 2 and other large language models
Port of Facebook's LLaMA model in C/C++
Run local LLMs on any device; open source
Distribute and run LLMs with a single file
Self-hosted, community-driven, local OpenAI-compatible API (see the request sketch after this list)
Integrate cutting-edge LLM technology quickly and easily into your app
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
Ongoing research on training transformer models at scale
A guidance language for controlling large language models
Vector database plugin for Postgres, written in Rust
Tools such as a web browser, computer access, and a code runner for LLMs
Leveraging BERT and c-TF-IDF to create easily interpretable topics
Python bindings for the Transformer models implemented in C/C++
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Database system for building simpler and faster AI-powered applications
C#/.NET bindings for llama.cpp, including LLaMA/GPT model inference
An ecosystem of Rust libraries for working with large language models
llama.go is like llama.cpp in pure Golang
Locally run an Instruction-Tuned Chat-Style LLM
Qwen2.5-Coder is the code-focused version of the Qwen2.5 large language model
Implements a reference architecture for creating information systems
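Several of the projects above, notably the self-hosted OpenAI-compatible API, expose the standard /v1/chat/completions endpoint, so any OpenAI-style client can talk to them. The sketch below shows such a request; the server address http://localhost:8080 and the model alias gpt-3.5-turbo are assumptions for illustration, not values taken from any project listed here.

```python
# Minimal sketch: calling a local OpenAI-compatible chat endpoint.
# The base URL and model name are assumptions; substitute whatever
# your local server actually exposes.
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1/chat/completions"  # assumed local server
payload = {
    "model": "gpt-3.5-turbo",  # assumed model alias configured on the server
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}

req = urllib.request.Request(
    BASE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The response follows the OpenAI chat-completions schema:
# {"choices": [{"message": {"content": "..."}}], ...}
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```

Because the request shape matches the hosted OpenAI API, the same payload should work against any of the listed servers that advertise OpenAI compatibility, with only the base URL and model name changed.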