| Name | Modified | Size | Downloads / Week |
|---|---|---|---|
| README.md | 2024-01-12 | 1.4 kB | |
| v0.0.14 source code.tar.gz | 2024-01-12 | 3.3 MB | |
| v0.0.14 source code.zip | 2024-01-12 | 3.7 MB | |
| Totals: 3 items | | 7.0 MB | 1 |
# :sparkles: SuperAGI v0.0.14 :sparkles:

## :rocket: Enhanced Local LLM Support with Multi-GPU :tada:

### New Feature Highlights :star2:
- ⚙️ **Local Large Language Model (LLM) Integration**
  - SuperAGI now supports local large language models, allowing users to run their own models seamlessly within the SuperAGI framework.
  - Easily configure and integrate your preferred LLMs for greater customization and control over your AI agents.
- ⚡️ **Multi-GPU Support**
  - SuperAGI can now run local models across multiple GPUs for improved performance and scalability.
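Multi-GPU access is granted at the container level. As a point of reference, the block below is a minimal sketch of the standard Docker Compose syntax for exposing all host GPUs to a service; the service name `backend` is illustrative, and the shipped `docker-compose-gpu.yml` may already contain an equivalent block:

```yaml
# Minimal sketch (not the shipped file): standard Docker Compose syntax
# for exposing NVIDIA GPUs to a container. Requires the NVIDIA Container
# Toolkit on the host. The service name "backend" is illustrative.
services:
  backend:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # expose every GPU for multi-GPU runs
              capabilities: [gpu]
```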
## How to Use
To enable local LLM support with multi-GPU, follow these steps:
- **LLM Integration:**
  - Add your model path to the `celery` and `backend` volumes in the `docker-compose-gpu.yml` file (see the volume sketch after this list).
  - Run the command:
    ```bash
    docker compose -f docker-compose-gpu.yml up --build
    ```
  - Open `localhost:3000` in your browser.
  - Add a local LLM model from the model section.
  - Use the added model to run your agents.
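The volume change in the first step might look like the following sketch. Both the host path `/path/to/your/models` and the container mount point `/app/local_model_path` are placeholders; use the paths your `docker-compose-gpu.yml` actually defines:

```yaml
# Minimal sketch: mount a local model directory into the backend and
# celery services. All paths here are placeholders, not real defaults.
services:
  backend:
    volumes:
      - /path/to/your/models:/app/local_model_path
  celery:
    volumes:
      - /path/to/your/models:/app/local_model_path
```

After the containers rebuild, the mounted directory should correspond to the model path you select when adding the local model at `localhost:3000`.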