Download Latest Version llama-b8304-bin-910b-openEuler-x86-aclgraph.tar.gz (64.8 MB)

b8299
Name    Modified    Size    Downloads / Week
llama-b8299-xcframework.zip < 16 hours ago 173.1 MB
llama-b8299-bin-win-vulkan-x64.zip < 16 hours ago 52.1 MB
llama-b8299-bin-win-sycl-x64.zip < 16 hours ago 130.9 MB
llama-b8299-bin-win-opencl-adreno-arm64.zip < 16 hours ago 29.2 MB
llama-b8299-bin-win-hip-radeon-x64.zip < 16 hours ago 349.0 MB
llama-b8299-bin-win-cuda-13.1-x64.zip < 16 hours ago 152.8 MB
llama-b8299-bin-win-cuda-12.4-x64.zip < 16 hours ago 224.5 MB
llama-b8299-bin-win-cpu-x64.zip < 16 hours ago 35.1 MB
llama-b8299-bin-win-cpu-arm64.zip < 16 hours ago 28.2 MB
llama-b8299-bin-ubuntu-x64.tar.gz < 16 hours ago 28.6 MB
llama-b8299-bin-ubuntu-vulkan-x64.tar.gz < 16 hours ago 45.8 MB
llama-b8299-bin-ubuntu-s390x.tar.gz < 16 hours ago 30.5 MB
llama-b8299-bin-ubuntu-rocm-7.2-x64.tar.gz < 16 hours ago 148.9 MB
llama-b8299-bin-macos-x64.tar.gz < 16 hours ago 92.8 MB
llama-b8299-bin-macos-arm64.tar.gz < 16 hours ago 35.9 MB
llama-b8299-bin-910b-openEuler-x86-aclgraph.tar.gz < 16 hours ago 64.7 MB
llama-b8299-bin-910b-openEuler-aarch64-aclgraph.tar.gz < 16 hours ago 58.1 MB
llama-b8299-bin-310p-openEuler-x86.tar.gz < 16 hours ago 64.7 MB
llama-b8299-bin-310p-openEuler-aarch64.tar.gz < 16 hours ago 58.1 MB
cudart-llama-bin-win-cuda-13.1-x64.zip < 16 hours ago 402.6 MB
cudart-llama-bin-win-cuda-12.4-x64.zip < 16 hours ago 391.4 MB
b8299 source code.tar.gz 2026-03-11 29.5 MB
b8299 source code.zip 2026-03-11 30.6 MB
README.md 2026-03-11 5.1 kB
Totals: 24 Items, 2.7 GB
llama : enable chunked fused GDN path (#20340)

* llama : enable chunked fused GDN path
* models : avoid Q and K repeats when using fused GDA
* cont : fix comment
* cont : fix the fix
* cont : fix
* metal : add GDN kernel (#20361)
  * metal : add Metal backend for GGML_OP_GATED_DELTA_NET. Adds a fused Metal kernel for the gated delta net recurrence op (#19504), enabling GPU-accelerated inference for DeltaNet-based models (Qwen3.5, etc.) on Apple Silicon. Supports both GDA (scalar gate) and KDA (per-row gate) modes with head_size 64 and 128. Unsupported configurations (head_size 32, non-contiguous tensors) gracefully fall back to the CPU. Performance: Qwen3.5-0.8B Q4_K_M on M4 Max, tg128: 170 -> 213 t/s (+25%)
  * metal : validate contiguity of all input tensors in supports_op
  * metal : add algorithm equivalence comment for GDA decay path
  * cont : unslop + optimize
  * cont : clean-up
* CUDA: AR gated delta net improvements (#20391)
  * Add FastDiv to gated_delta_net_cuda
  * Shard columns across warps. This reduces register pressure (avoids spills for S_v = 128) and gives the warp scheduler more CTAs to schedule, thus hiding data-access latencies.
  * Remove unneeded include in gated_delta_net.cu
  * Improve comments
  * Apply code formatting
  * Make sharding HIP-compatible: use ggml_cuda_get_physical_warp_size() to determine the warp size flexibly, and add a test with a partial warp to exercise sum reduction on CUDA
  * Remove fastdiv_s64, as we can treat neqk1 and rq3 as uint32_t
  * Rename variables
  * Enable GDN also for prefill; move the TODO for chunked GDN
  * Actually remove the TODO from [206890]
  * Get warp size at runtime (warp_size is not known at compile time in HIP host code); don't expose ggml_cuda_get_physical_warp_size on the host
* llama : refactor the llm_build_delta_net_base API

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
Co-authored-by: Paul Flynn <paul@arkavo.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Oliver Simons <osimons@nvidia.com>
Co-authored-by: uvos <devnull@uvos.xyz>
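For readers unfamiliar with the op behind these kernels: the gated delta net recurrence can be sketched in a few lines of NumPy. This is a loose, unfused reference for the GDA (scalar-gate) mode, following the published Gated DeltaNet formulation for a single head; the function name and shapes here are illustrative and are not llama.cpp's API, and the real Metal and CUDA kernels fuse and chunk this loop rather than running it step by step.

```python
import numpy as np

def gdn_recurrence(q, k, v, alpha, beta):
    """Naive gated delta rule. q, k: (T, d_k); v: (T, d_v);
    alpha (decay) and beta (write strength): (T,) scalars per step."""
    T, d_k = k.shape
    d_v = v.shape[1]
    S = np.zeros((d_k, d_v))          # associative state: keys -> values
    out = np.zeros((T, d_v))
    for t in range(T):
        kt = k[t][:, None]            # (d_k, 1)
        vt = v[t][None, :]            # (1, d_v)
        # decay the state, erase the old value stored under kt, write vt
        S = alpha[t] * (S - beta[t] * kt @ (kt.T @ S)) + beta[t] * (kt @ vt)
        out[t] = S.T @ q[t]           # read out with the query
    return out
```

With alpha = 1 and beta = 1, writing (k, v) and then querying with q = k (unit norm) returns v exactly, and a later write under the same key replaces the stored value instead of accumulating onto it; that replacement is the "delta" part of the rule.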
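The "FastDiv" item above refers to a standard GPU trick: integer division by a divisor that is constant at kernel-launch time can be replaced by a precomputed multiply and shift (the round-up method described in Hacker's Delight and in Granlund and Montgomery's work on division by invariant integers). Below is a Python sketch of the idea for unsigned 32-bit dividends; the names are illustrative, and ggml's actual CUDA helper differs in detail (on the device the high multiply would use a hardware intrinsic rather than big-integer arithmetic).

```python
def fastdiv_prepare(d: int):
    """Precompute a magic multiplier and shift for dividing 32-bit unsigned
    integers by the runtime-constant d."""
    assert 0 < d < 2**32
    s = (d - 1).bit_length()          # ceil(log2(d)); s == 0 for d == 1
    m = (1 << (32 + s)) // d + 1      # round-up magic number (fits in 33 bits)
    return m, s

def fastdiv(n: int, magic) -> int:
    """Compute n // d with one widening multiply and one shift, no divide."""
    m, s = magic
    return (n * m) >> (32 + s)
```

The precompute runs once per divisor; every division in the hot loop then costs a multiply and a shift, which is much cheaper than a hardware integer divide on a GPU. The round-up choice of m guarantees an exact quotient for every 32-bit n.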

macOS/iOS: llama-b8299-xcframework.zip (XCFramework for Apple platforms), llama-b8299-bin-macos-arm64.tar.gz, llama-b8299-bin-macos-x64.tar.gz

Linux: llama-b8299-bin-ubuntu-x64.tar.gz, llama-b8299-bin-ubuntu-vulkan-x64.tar.gz, llama-b8299-bin-ubuntu-rocm-7.2-x64.tar.gz, llama-b8299-bin-ubuntu-s390x.tar.gz

Windows: the llama-b8299-bin-win-*.zip archives (CPU x64/arm64, CUDA 12.4 and 13.1, Vulkan, SYCL, HIP for Radeon, OpenCL for Adreno); the cudart-llama-bin-win-cuda-*.zip archives bundle the matching CUDA runtime

openEuler: llama-b8299-bin-910b-openEuler-*-aclgraph.tar.gz and llama-b8299-bin-310p-openEuler-*.tar.gz (builds for Ascend 910B/310P NPUs)
Source: README.md, updated 2026-03-11