Axolotl v0.10.0

Highlights

Sparse Finetuning using LLMCompressor

The LLMCompressor integration lets users efficiently fine-tune models with structured or unstructured sparsity, recovering 99% or better of baseline accuracy for sparse models and delivering 3x faster inference.
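
The core idea behind sparse fine-tuning is that updates must not destroy the sparsity pattern produced by pruning. The sketch below illustrates that idea only; it is not the LLMCompressor API, and the helper names and the HF-style model interface are assumptions for illustration.

```python
import torch
import torch.nn as nn

def capture_sparsity_masks(model: nn.Module) -> dict:
    """Record which weights are currently zero so the pruned pattern can be preserved."""
    return {
        name: (param != 0).to(param.dtype)
        for name, param in model.named_parameters()
        if param.dim() >= 2  # mask only weight matrices, not biases/norms
    }

@torch.no_grad()
def reapply_masks(model: nn.Module, masks: dict) -> None:
    """Zero out pruned weights again, since optimizer updates can reintroduce nonzeros."""
    for name, param in model.named_parameters():
        if name in masks:
            param.mul_(masks[name])

def train_step(model, batch, optimizer, masks):
    # Hypothetical training step: compute loss, update, then restore sparsity.
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    reapply_masks(model, masks)
    return loss.item()
```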

Quantization-Aware Training (QAT)

QAT simulates quantization during training, producing higher-quality post-training quantized (PTQ) models than applying PTQ to models trained without it.
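
As a rough illustration of what "simulating quantization during training" means, the following is a minimal sketch of fake quantization with a straight-through estimator. It is not Axolotl's integration code; the class names and the simple per-tensor int8 scheme are illustrative assumptions.

```python
import torch

class FakeQuantize(torch.autograd.Function):
    """Simulate int8 quantization in the forward pass; pass gradients straight through."""

    @staticmethod
    def forward(ctx, x, num_bits: int = 8):
        qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        # Quantize-dequantize so the rest of the network sees quantization error.
        return (x / scale).round().clamp(qmin, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradients flow as if no rounding happened.
        return grad_output, None

class QATLinear(torch.nn.Linear):
    """Linear layer whose weights are fake-quantized during training."""

    def forward(self, x):
        w_q = FakeQuantize.apply(self.weight)
        return torch.nn.functional.linear(x, w_q, self.bias)
```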

Mistral tokenizer support via mistral-common

Use Mistral's mistral-common library directly so chat messages are tokenized exactly as Mistral models expect.
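
For context, this is roughly how mistral-common tokenizes a chat message on its own; it is a standalone usage sketch, not Axolotl's internal code path, and the tokenizer version chosen is illustrative.

```python
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

# Pick the tokenizer version matching your model (v3 here as an example).
tokenizer = MistralTokenizer.v3()

tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content="Why is the sky blue?")])
)
print(tokenized.tokens[:10])  # token ids as the model expects them
print(tokenized.text)         # the rendered prompt string
```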

Efficient chunked KD and online distillation

Use liger-style chunking to compute the KD loss efficiently, and use online distillation via logprobs returned from vLLM/SGLang teachers.
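
A minimal sketch of the chunking idea follows: the student's lm_head projection and the KL term are computed per chunk of tokens, so the full [num_tokens, vocab] logit tensor is never materialized at once. This only illustrates the forward computation (the actual fused kernels also handle the backward pass chunk by chunk), and the function signature is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def chunked_kd_loss(
    hidden_states: torch.Tensor,     # [num_tokens, hidden] student hidden states
    lm_head_weight: torch.Tensor,    # [vocab, hidden] student output projection
    teacher_logprobs: torch.Tensor,  # [num_tokens, vocab] e.g. logprobs from vLLM/SGLang
    temperature: float = 1.0,
    chunk_size: int = 2048,
) -> torch.Tensor:
    """Forward-KL distillation loss computed chunk by chunk over the sequence."""
    total = hidden_states.new_zeros(())
    num_tokens = hidden_states.size(0)
    for start in range(0, num_tokens, chunk_size):
        end = min(start + chunk_size, num_tokens)
        # Project only this chunk of tokens onto the vocabulary.
        logits = hidden_states[start:end] @ lm_head_weight.T
        s_logprobs = F.log_softmax(logits / temperature, dim=-1)
        t_logprobs = teacher_logprobs[start:end]
        # KL(teacher || student), summed over the vocabulary.
        kl = torch.sum(t_logprobs.exp() * (t_logprobs - s_logprobs), dim=-1)
        total = total + kl.sum()
    return total / num_tokens
```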

Miscellaneous

  • Improved tool calling support
  • Support for torch==2.5.1 will be deprecated in a future release. We recommend using torch 2.6.0 or 2.7.1.

What's Changed

New Contributors

Full Changelog: https://github.com/axolotl-ai-cloud/axolotl/compare/v0.9.2...v0.10.0
