Get up and running with Llama 2 and other large language models
LLM Frontend for Power Users
Port of Facebook's LLaMA model in C/C++
Run Local LLMs on Any Device. Open-source
Powerful AI language model (MoE) optimized for efficiency/performance
The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities
A high-throughput and memory-efficient inference and serving engine
Low-code app builder for RAG and multi-agent AI applications
Open-source, high-performance AI model with advanced reasoning
Distribute and run LLMs with a single file
Drag & drop UI to build your customized LLM flow
One API for plugins and datasets, one interface for prompt engineering
Self-hosted, community-driven, local OpenAI-compatible API (see the example after this list)
Python bindings for llama.cpp
Toolkit for conversational AI
Desktop app for prototyping and debugging LangGraph applications
Lightweight package to simplify LLM API calls
Revolutionizing Database Interactions with Private LLM Technology
⚡ Building applications with LLMs through composability ⚡
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
An elegant PyTorch implementation of transformers
Application that simplifies the installation of AI-related projects
Integrate cutting-edge LLM technology quickly and easily into your app
An RWKV management and startup tool, full automation, only 8MB
Text generator is a handy plugin for Obsidian
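Several entries above describe local, OpenAI-compatible serving layers. As a minimal sketch of how a client typically talks to such an endpoint, assuming the official `openai` Python client and a server already running locally; the base URL, port, and model name below are placeholder assumptions, not values taken from any specific project listed here:

```python
# Minimal sketch: querying a local, OpenAI-compatible endpoint.
# The base_url, port, and model name are assumptions -- adjust them
# to whatever your local server actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local server address
    api_key="not-needed",                 # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="llama-3-8b-instruct",  # hypothetical model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

Because the request/response shapes follow the OpenAI chat-completions format, the same client code can usually be pointed at different local backends by changing only the base URL and model name.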