Run any Llama 2 model locally with a Gradio UI on GPU or CPU from anywhere
Framework for Accelerating LLM Generation with Multiple Decoding Heads
Run 100B+ language models at home, BitTorrent-style
Implementation of "Tree of Thoughts
Toolbox of models, callbacks, and datasets for AI/ML researchers
A graphical interface for managing your LLMs with ollama
A computer vision framework to create and deploy apps in minutes
Implementation of model parallel autoregressive transformers on GPUs
Sequence-to-sequence framework, focused on Neural Machine Translation
OpenMMLab Video Perception Toolbox
Training and implementation of chatbots leveraging a GPT-like architecture
Toolkit enabling inference and serving with MXNet in SageMaker
CPU/GPU inference server for Hugging Face transformer models
Deploy an ML inference service on a budget in 10 lines of code
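As a rough illustration of the last entry, here is a minimal sketch of such a budget inference service: a Hugging Face pipeline exposed through FastAPI. The endpoint path, model choice, and port are assumptions for the example, not the repository's actual code.

```python
# Hypothetical sketch (not the repository's code): a tiny inference service
# built from a Hugging Face pipeline and served with FastAPI.
from fastapi import FastAPI
from transformers import pipeline

app = FastAPI()
classifier = pipeline("sentiment-analysis")  # downloads a small default model

@app.post("/predict")
def predict(text: str):
    # Returns e.g. [{"label": "POSITIVE", "score": 0.99}]
    return classifier(text)

# Run locally with: uvicorn main:app --port 8000
```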