Chinese LLaMA-2 & Alpaca-2 large language models (phase II project)
Chinese LLaMA & Alpaca large language model + local CPU/GPU training
Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon)
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Operating LLMs in production
An unofficial Python package that returns responses from Google Bard
Train a 26M-parameter GPT from scratch in just 2h
Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere
Implementation of "Tree of Thoughts"
An implementation of model parallel GPT-2 and GPT-3-style models