Run Local LLMs on Any Device. Open-source and available for commercial use
Port of Facebook's LLaMA model in C/C++
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
A library for accelerating Transformer models on NVIDIA GPUs
A Pythonic framework to simplify AI service building
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Easy-to-use deep learning framework with 3 key features
Fast inference engine for Transformer models
OpenMLDB is an open-source machine learning database that provides a feature platform computing consistent features for training and inference
Easy-to-use Speech Toolkit including Self-Supervised Learning models
Trainable models and neural-network optimization tools
A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning training and inference
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
Lightweight anchor-free object detection model
Training & implementation of chatbots leveraging a GPT-like architecture
Guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson