Download latest version: llama-b7486-bin-910b-openEuler-x86.tar.gz (48.0 MB)

| Name | Modified | Size |
| --- | --- | --- |
| llama-b7481-xcframework.zip | < 19 hours ago | 149.8 MB |
| llama-b7481-xcframework.tar.gz | < 19 hours ago | 149.9 MB |
| llama-b7481-bin-win-vulkan-x64.zip | < 19 hours ago | 35.0 MB |
| llama-b7481-bin-win-sycl-x64.zip | < 19 hours ago | 109.2 MB |
| llama-b7481-bin-win-opencl-adreno-arm64.zip | < 19 hours ago | 16.9 MB |
| llama-b7481-bin-win-hip-radeon-x64.zip | < 19 hours ago | 347.7 MB |
| llama-b7481-bin-win-cuda-13.1-x64.zip | < 19 hours ago | 92.7 MB |
| llama-b7481-bin-win-cuda-12.4-x64.zip | < 19 hours ago | 204.0 MB |
| llama-b7481-bin-win-cpu-x64.zip | < 19 hours ago | 20.0 MB |
| llama-b7481-bin-win-cpu-arm64.zip | < 19 hours ago | 16.3 MB |
| llama-b7481-bin-ubuntu-x64.zip | < 19 hours ago | 19.1 MB |
| llama-b7481-bin-ubuntu-x64.tar.gz | < 19 hours ago | 19.1 MB |
| llama-b7481-bin-ubuntu-vulkan-x64.zip | < 19 hours ago | 34.6 MB |
| llama-b7481-bin-ubuntu-vulkan-x64.tar.gz | < 19 hours ago | 34.6 MB |
| llama-b7481-bin-ubuntu-s390x.zip | < 19 hours ago | 19.1 MB |
| llama-b7481-bin-ubuntu-s390x.tar.gz | < 19 hours ago | 22.3 MB |
| llama-b7481-bin-macos-x64.zip | < 19 hours ago | 42.9 MB |
| llama-b7481-bin-macos-x64.tar.gz | < 19 hours ago | 42.9 MB |
| llama-b7481-bin-macos-arm64.zip | < 19 hours ago | 16.7 MB |
| llama-b7481-bin-macos-arm64.tar.gz | < 19 hours ago | 16.7 MB |
| llama-b7481-bin-910b-openEuler-x86.tar.gz | < 19 hours ago | 48.0 MB |
| llama-b7481-bin-910b-openEuler-aarch64.tar.gz | < 19 hours ago | 43.9 MB |
| llama-b7481-bin-310p-openEuler-x86.tar.gz | < 19 hours ago | 48.0 MB |
| llama-b7481-bin-310p-openEuler-aarch64.tar.gz | < 19 hours ago | 43.9 MB |
| cudart-llama-bin-win-cuda-13.1-x64.zip | < 19 hours ago | 402.6 MB |
| cudart-llama-bin-win-cuda-12.4-x64.zip | < 19 hours ago | 391.4 MB |
| b7481 source code.tar.gz | < 21 hours ago | 28.6 MB |
| b7481 source code.zip | < 21 hours ago | 29.5 MB |
| README.md | < 21 hours ago | 2.6 kB |

Totals: 29 items, 2.4 GB

> [!WARNING]
> **Release Format Update:** Linux releases will soon use `.tar.gz` archives instead of `.zip`. Please make the necessary changes to your deployment scripts.
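For scripts that fetch and unpack the Linux builds, the change above amounts to swapping `unzip` for `tar`. A minimal sketch of the new extraction step; the archive here is a placeholder created locally for demonstration, not a real release artifact:

```shell
set -e
# Simulate a downloaded release archive (placeholder contents).
mkdir -p build
echo "llama-server" > build/llama-server
tar -czf llama-release.tar.gz -C build .

# Old workflow (for .zip archives) was roughly: unzip llama-release.zip -d llama
# New workflow for .tar.gz archives:
mkdir -p llama
tar -xzf llama-release.tar.gz -C llama
ls llama
```

The `-C` flag extracts into the target directory without needing to `cd` first, mirroring `unzip`'s `-d` option.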

server: friendlier error msg when ctx < input (#18174)

* llama-server: friendlier error msg when ctx < input. This PR adds formatted strings to the server's `send_error` function
* llama-server: use `string_format` inline
* fix test

macOS/iOS:
- macOS Apple Silicon (arm64)
- macOS Intel (x64)
- iOS XCFramework

Linux:
- Ubuntu x64 (CPU)
- Ubuntu x64 (Vulkan)
- Ubuntu s390x (CPU)

Windows:
- Windows x64 (CPU)
- Windows arm64 (CPU)
- Windows x64 (CUDA 12)
- Windows x64 (CUDA 13)
- Windows x64 (Vulkan)
- Windows x64 (SYCL)
- Windows x64 (HIP)

openEuler:
- openEuler x86 (310p)
- openEuler x86 (910b)
- openEuler aarch64 (310p)
- openEuler aarch64 (910b)

Source: README.md, updated 2025-12-19