Running Llama 2 with a Gradio web UI on GPU or CPU from anywhere (Linux/Windows/Mac).
Features
- Supports all Llama 2 models (7B, 13B, 70B, GPTQ, GGML) in 8-bit and 4-bit modes
- Use llama2-wrapper as your local Llama 2 backend for generative agents and apps; see the Colab example
- Run an OpenAI-compatible API on Llama 2 models
- Supported models: Llama-2-7b/13b/70b, all Llama-2-GPTQ, all Llama-2-GGML
- Supported model backends: transformers, bitsandbytes (8-bit inference), AutoGPTQ (4-bit inference), llama.cpp
- Demos: Run Llama 2 on a MacBook Air; run Llama 2 on a free Colab T4 GPU
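For the OpenAI-compatible API feature, clients send the standard chat-completions request shape. A sketch of such a request body, assuming the conventional `/v1/chat/completions` route; the model name and parameter values are placeholders, not values confirmed by this project:

```json
{
  "model": "Llama-2-7b-chat-hf",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi, do you know PyTorch?"}
  ],
  "temperature": 0.7,
  "max_tokens": 256
}
```

Because the shape matches the OpenAI API, existing OpenAI client libraries can typically be pointed at the local server by overriding the base URL.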
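Whichever backend is used (transformers, AutoGPTQ, or llama.cpp), Llama 2 chat models expect their input wrapped in the documented `[INST]`/`<<SYS>>` chat template. A minimal sketch of assembling that prompt; the helper name and default system prompt here are illustrative assumptions, while the special markers follow Llama 2's published chat format:

```python
# Sketch: build a Llama 2 chat prompt from a message plus optional history.
# The [INST] / <<SYS>> markers are Llama 2's documented chat format;
# the function name and default system prompt are assumptions for this example.

DEFAULT_SYSTEM_PROMPT = "You are a helpful assistant."

def build_llama2_prompt(message, history=(), system_prompt=DEFAULT_SYSTEM_PROMPT):
    """Format a conversation into the Llama 2 chat template string."""
    # The system prompt is embedded in the first [INST] block.
    parts = [f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"]
    # Each earlier turn is closed with </s> and a new <s>[INST] is opened.
    for user_msg, assistant_msg in history:
        parts.append(f"{user_msg} [/INST] {assistant_msg} </s><s>[INST] ")
    # The current user message ends the prompt, awaiting the model's reply.
    parts.append(f"{message} [/INST]")
    return "".join(parts)

print(build_llama2_prompt("Hi, do you know PyTorch?"))
```

A backend wrapper would pass the returned string directly to the model's generate call.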
License
MIT License