| Name | Modified | Size |
|---|---|---|
| textgen-portable-3.17-windows-cuda12.4.zip | 2025-11-06 | 907.6 MB |
| textgen-portable-3.17-windows-vulkan.zip | 2025-11-06 | 211.8 MB |
| textgen-portable-3.17-windows-cpu.zip | 2025-11-06 | 197.6 MB |
| textgen-portable-3.17-linux-cuda12.4.zip | 2025-11-06 | 917.1 MB |
| textgen-portable-3.17-macos-x86_64.zip | 2025-11-06 | 199.0 MB |
| textgen-portable-3.17-linux-vulkan.zip | 2025-11-06 | 254.4 MB |
| textgen-portable-3.17-macos-arm64.zip | 2025-11-06 | 183.1 MB |
| textgen-portable-3.17-linux-cpu.zip | 2025-11-06 | 240.1 MB |
| README.md | 2025-11-06 | 1.1 kB |
| v3.17 source code.tar.gz | 2025-11-06 | 24.9 MB |
| v3.17 source code.zip | 2025-11-06 | 25.0 MB |
| Totals: 11 items | | 3.2 GB |
Changes
- Add `weights_only=True` to `torch.load` in Training_PRO for better security.
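For context, a minimal sketch of what that safer load looks like; the checkpoint filename below is only a placeholder:

```python
import torch

# With weights_only=True, torch.load only unpickles tensors and plain data,
# so a tampered checkpoint cannot execute arbitrary code during loading.
state_dict = torch.load("adapter_checkpoint.pt", weights_only=True)
```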
Bug fixes
- Pin huggingface-hub to 0.36.0 to fix manual venv installs.
- fix: Rename 'evaluation_strategy' to 'eval_strategy' in training. Thanks, @inyourface34456.
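For anyone adapting their own training scripts, a minimal sketch of the renamed keyword, assuming a recent transformers release; the output directory and step count are placeholders:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lora-out",   # placeholder output path
    eval_strategy="steps",   # formerly: evaluation_strategy="steps"
    eval_steps=100,          # placeholder evaluation interval
)
```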
Backend updates
- Update llama.cpp to https://github.com/ggml-org/llama.cpp/tree/230d1169e5bfe04a013b2e20f4662ee56c2454b0 (adds Qwen3-VL support)
- Update exllamav3 to 0.0.12
Portable builds
Below you can find self-contained packages that work with GGUF models (llama.cpp) and require no installation! Just download the right version for your system, unzip, and run.
Which version to download:
- Windows/Linux:
  - NVIDIA GPU: Use `cuda12.4`.
  - AMD/Intel GPU: Use `vulkan` builds.
  - CPU only: Use `cpu` builds.
- Mac:
  - Apple Silicon: Use `macos-arm64`.
  - Intel CPU: Use `macos-x86_64`.
Updating a portable install:
- Download and unzip the latest version.
- Replace the `user_data` folder with the one from your existing install. All your settings and models will be carried over.
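If you prefer to script that step, here is a minimal sketch in Python; the two folder names are placeholders for wherever your old and freshly unzipped installs live:

```python
import shutil
from pathlib import Path

old_install = Path("textgen-portable-old")   # placeholder: your existing install
new_install = Path("textgen-portable-3.17")  # placeholder: the freshly unzipped build

# Discard the empty user_data shipped with the new build, then copy your
# existing settings, characters, and models into its place.
shutil.rmtree(new_install / "user_data")
shutil.copytree(old_install / "user_data", new_install / "user_data")
```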