Tunix v0.1.1 -- Improved Stability, New Features, and TPU Optimizations

This release focuses on improving performance and stability across TPU and Kaggle environments, introducing new utilities for agentic RL workflows, and adding broader model and configuration support. It also includes several important bug fixes and developer experience improvements.

Run Tunix on Kaggle TPU

We’re excited to announce that Tunix can now be launched directly in Kaggle notebooks with TPU acceleration — making it easier than ever to experiment, prototype, and run reinforcement learning workflows without complex setup.

Key highlights

First-class TPU support on Kaggle – run GRPO and other RL pipelines end-to-end in a Kaggle notebook.

Pre-configured runtime – no manual dependency juggling needed; version compatibility and performance tuning are handled automatically.
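As a quick sanity check before running a pipeline on a Kaggle TPU session, you can confirm that the runtime actually exposes TPU devices to JAX. The snippet below is a minimal, illustrative check using standard JAX calls; it is not part of Tunix itself.

```python
import jax

# On a Kaggle TPU runtime this should list TPU devices;
# on a CPU-only session it falls back to CPU devices.
print(jax.devices())
print("Default backend:", jax.default_backend())
```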

Launch the notebooks here: Knowledge Distillation Demo, QLoRA Demo, DPO Demo, GRPO Demo.

New Features & Improvements

  • Model & Training Options
  • Added support for Gemma-3-270M model configuration.
  • Enabled setting default parameter dtype for Gemma-3 models.
  • Added remat options to models to improve memory efficiency.
  • Created a new list container type to support both Flax ≤0.11.2 and ≥0.12.0 versions.
  • Pathways & TPU Performance
  • Introduced experimental pre-sharding (experimental_reshard) for Pathways on Cloud TPU.
  • Improved weight synchronization logic to handle KV head duplication.
  • Disabled certain profiler options by default to improve stability on Pathways backend.
  • Configuration & CLI Improvements
  • Enabled generic creation of optax.optimizer and optax.learning_rate_schedule directly from CLI.
  • Relaxed JAX version constraints to ensure compatibility with Kaggle images.
  • Added minimum resource requirements for launch scripts in the README.
  • Documentation
  • Added ReadTheDocs link in README.
  • Expanded external notebooks with step-by-step guidance for long-running tasks.
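For context on the remat option mentioned above: rematerialization in JAX is typically done with jax.checkpoint (also exposed as jax.remat), which recomputes intermediate activations during the backward pass instead of keeping them in memory. The sketch below is a generic illustration of that trade-off, not Tunix's actual model code.

```python
import jax
import jax.numpy as jnp

# A stand-in for a transformer block (hypothetical, for illustration only).
def block(x, w):
    return jnp.tanh(x @ w)

# jax.checkpoint (a.k.a. jax.remat) recomputes block's activations on the
# backward pass, trading extra compute for lower peak memory.
block_remat = jax.checkpoint(block)

def loss(x, w):
    return jnp.sum(block_remat(x, w) ** 2)

grads = jax.grad(loss, argnums=1)(jnp.ones((4, 8)), jnp.ones((8, 8)))
```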
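Generic creation of optimizers and learning-rate schedules from the CLI comes down to resolving optax constructors by name. The snippet below sketches that pattern with plain optax calls; the names and keyword arguments are illustrative stand-ins for CLI input, not Tunix's actual flags.

```python
import optax

# Strings as they might arrive from a CLI or config file (hypothetical values).
schedule_name = "cosine_decay_schedule"
schedule_kwargs = {"init_value": 3e-4, "decay_steps": 10_000}
optimizer_name = "adamw"

# Resolve the optax constructors by name and wire the schedule into the optimizer.
schedule = getattr(optax, schedule_name)(**schedule_kwargs)
optimizer = getattr(optax, optimizer_name)(learning_rate=schedule)
```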

Bug Fixes

  • Fixed a bug in reward function logic causing incorrect training signals.
  • Fixed a checkpoint-handling issue where Colab could not locate the final checkpoint; intermediate checkpoint directories are now cleaned up.
  • Fixed Kaggle image performance issues.
  • Fixed type errors in agents/ modules.
  • Optimized masked index lookups using jnp.where for better runtime efficiency (see the sketch after this list).
  • Resharded prompt and completion tokens to the REFERENCE mesh when rollout and reference models are distributed.
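As a generic illustration of the jnp.where pattern mentioned above (shapes and values here are made up, not Tunix's actual code): selecting masked entries with jnp.where keeps output shapes static, so the lookup stays vectorized and jit-friendly, unlike boolean indexing, which produces dynamic shapes.

```python
import jax
import jax.numpy as jnp

@jax.jit
def masked_lookup(values, mask, fill=0.0):
    # jnp.where keeps the output shape static, so the function remains jit-compatible;
    # masked-out positions are filled with a neutral value instead of being dropped.
    return jnp.where(mask, values, fill)

values = jnp.arange(6, dtype=jnp.float32)
mask = jnp.array([True, False, True, True, False, True])
print(masked_lookup(values, mask))  # [0. 0. 2. 3. 0. 5.]
```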

Dependency & Version Updates

  • JAX pinned to 0.7.1 and libtpu downgraded to resolve Cloud TPU performance regressions.
  • Relaxed JAX version requirement for Kaggle compatibility.

New Contributors

  • @chethanuk made their first contribution in https://github.com/google/tunix/pull/501

Full Changelog: https://github.com/google/tunix/compare/v0.1.0...v0.1.1
