TL;DR
tokenizers 0.23.1 is the first proper stable release in the 0.23 line: 0.23.0 only ever shipped as rc0 because the release pipeline itself was broken (the Node side hadn't shipped multi-platform binaries since 2023, and the Python side was on pyo3 0.27 without free-threaded support). 0.23.1 is the version where everything actually goes out the door together: full Node multi-platform binaries for the first time in years, Python 3.14 wheels (regular and free-threaded 3.14t), full type hints for every Python class, and a stack of measurable perf wins on the BPE / added-vocab hot paths.
There is no functional 0.23.0 published; we tag 0.23.1 directly so users don't accidentally pull a never-shipped version.
🚨 Breaking changes
- Drop Python 3.9 (#1952): `requires-python = ">=3.10"`; 3.9 users stay on `0.22.x`.
- `add_tokens` normalizes `content` at insertion (#1995): a re-saved `tokenizer.json` may differ in the `added_tokens` block. Existing files load unchanged. See the sketch after this list.
- Type stubs are precise (#1928, #1997): methods that returned `Any` now return real types; `mypy --strict` may surface previously-hidden errors. Stub layout also moved from `tokenizers/<sub>/__init__.pyi` to `tokenizers/<sub>.pyi`. This changes the visible signatures of some processors, e.g. `RobertaProcessing`'s `__init__`.
- 3.14t only: setters/getters return `PyResult<T>` because of `Arc<RwLock<Tokenizer>>`; a poisoned lock surfaces as a `PyException` instead of a panic.
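A quick way to see the `add_tokens` change in practice; a minimal sketch, assuming an on-disk `tokenizer.json` and using an illustrative lowercasing normalizer:
:::python
from tokenizers import AddedToken, Tokenizer, normalizers

tok = Tokenizer.from_file("tokenizer.json")   # any existing tokenizer file
tok.normalizer = normalizers.Lowercase()

# On 0.23 the content is normalized at insertion time, so the re-saved
# added_tokens block stores "newtoken" rather than "NewToken".
tok.add_tokens([AddedToken("NewToken", normalized=True)])
tok.save("tokenizer-resaved.json")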
⚡ Performance: measured locally on this Mac, not lifted from PRs
Run `cargo bench --bench <name> -- --save-baseline v0_22_2` on v0.22.2, then `cargo bench --bench <name> -- --baseline v0_22_2` on v0.23.1. Numbers are point-in-time wall clock on a single laptop; the relative deltas are what matter, and absolute numbers will differ on CI hardware.
Added-vocabulary deserialize: the headline win (#1995, #1999)
bench: improve added_vocab_deserialize to reflect real-world workloads (#2000) is now representative of how `transformers` actually loads `tokenizer.json` files. The combined effect of `daachorse` for the matching automaton plus the normalize-on-insert refactor is enormous on this workload:
| benchmark | v0.22.2 | v0.23.1 | change |
|---|---|---|---|
| 100k tokens, special, no norm | ~410 ms | 248 ms | −40% |
| 100k tokens, non-special, no norm | ~7.1 s | 273 ms | −96% |
| 100k tokens, special, NFKC | ~395 ms | 235 ms | −40% |
| 100k tokens, non-special, NFKC | ~7.4 s | 290 ms | −96% |
| 400k tokens, special, no norm | ~15 s | 980 ms | −94% |
Real-world impact: loading a Llama-3-style tokenizer with a large set of added tokens dropped from "noticeable pause" to "instant".
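To reproduce the real-world number on your own files, a minimal timing harness (the path is illustrative):
:::python
import time
from tokenizers import Tokenizer

t0 = time.perf_counter()
# e.g. a Llama-3-style file with a large added_tokens block
tok = Tokenizer.from_file("tokenizer.json")
print(f"deserialized in {time.perf_counter() - t0:.3f}s")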
BPE encode
| benchmark | v0.22.2 | v0.23.1 | change |
|---|---|---|---|
| BPE GPT2 encode batch, no cache | 530 ms | 446 ms | −16% |
| BPE GPT2 encode batch (cached) | 690 ms | 685 ms | noise |
| BPE GPT2 encode (single) | 1.95 s | 1.94 s | noise |
| BPE Train (small) | 32.6 ms | 31.5 ms | −3% |
| BPE Train (big) | 1.01 s | 988 ms | −2% |
The BPE per-thread cache PR (#2028) shows much larger wins on highly-parallel workloads (+47–62% at 88+ threads on a server box, per the PR's own measurements on Vera). Single-thread batch numbers above are flat or slightly improved because cache-hit overhead was already low without contention.
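For context, the contended path is `encode_batch`, which fans out across threads inside the Rust core; a rough sketch of the kind of workload where the per-thread cache matters (the tokenizer and corpus are placeholders):
:::python
from tokenizers import Tokenizer

tok = Tokenizer.from_pretrained("gpt2")   # any BPE tokenizer
batch = ["some example text"] * 100_000   # placeholder workload

# encode_batch parallelizes internally; on many-core machines the
# 0.23.1 per-thread cache removes cross-thread contention on this path.
encodings = tok.encode_batch(batch)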
Llama-3 encode
| benchmark | v0.22.2 | v0.23.1 | change |
|---|---|---|---|
| llama3-encode (single) | 2.10 s | 2.02 s | −4% |
| llama3-batch | 438 ms | 408 ms | −7% |
| llama3-offsets | 410 ms | 395 ms | −4% |
Truncation early exit (#1990)
Right-direction truncation no longer pre-tokenizes past `max_length`. The new `truncation_benchmark` doesn't exist on v0.22.2, so there's no apples-to-apples comparison here, but the PR's own measurements on the same machine showed −20–28% across a range of `max_length` values for right-truncation; left-truncation is unchanged.
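In Python the early exit kicks in through the usual truncation API; a minimal sketch (right is already the default direction, shown explicitly here):
:::python
from tokenizers import Tokenizer

tok = Tokenizer.from_pretrained("gpt2")
# Right-truncation now stops pre-tokenizing once max_length is reached,
# instead of processing the full input and discarding the tail.
tok.enable_truncation(max_length=512, direction="right")
enc = tok.encode("some very long document " * 10_000)
assert len(enc.ids) <= 512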
Other perf improvements (no directly comparable bench)
- `BPE::Builder::build` no longer formats strings in a hot loop (#2010): ~45% faster `Tokenizer::from_file` on Llama-3 in the PR's profile.
- BPE per-thread cache (#2028): see the Vera numbers in the PR description for parallel scale-out.
💾 Serialization / deserialization
The `tokenizer.json` format is forward-compatible: existing files load on 0.23 unchanged. Two things to know if you re-save:
- `added_tokens` entries created via `add_tokens(..., normalized=True)` will have their `content` normalized at save time; see the breaking-change note above.
- `tokenizer.train(...)` no longer keeps a redundant `added_tokens`/`special_tokens` `Vec` separate from the `added_tokens_map_r`. The public API surface is unchanged; only the internal struct shape moved.
bench: improve added_vocab_deserialize to reflect real-world workloads (#2000) lands a more realistic micro-benchmark for this surface; if you're tracking deserialize perf in your own CI, the new bench is the one to compare against.
🐍 Python: free-threaded 3.14t support
Dedicated wheels for `python3.14t` (the free-threaded build introduced in PEP 703). The wheel:
- Declares `Py_MOD_GIL_NOT_USED`, so importing `tokenizers` does not force the GIL back on.
- Builds without the `abi3` cargo feature (free-threaded Python doesn't expose the limited API).
- Goes through `Arc<RwLock<Tokenizer>>` for the inner state, so concurrent setters and encoders don't race PyO3's per-pyclass borrow check.
A new stress-test module `tests/test_freethreaded.py` exercises N-encoder × M-setter races on a single `Tokenizer` and asserts no `RuntimeError: Already borrowed`, no `RwLock` poisoning, and that `sys._is_gil_enabled()` is `False` post-import.
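Roughly the shape of that test, condensed into a sketch (thread and iteration counts are illustrative, not the actual test code):
:::python
import threading
from tokenizers import Tokenizer

tok = Tokenizer.from_pretrained("gpt2")

def encode_loop():
    for _ in range(1_000):
        tok.encode("hello world")   # readers share the RwLock

def mutate_loop():
    for _ in range(1_000):
        tok.enable_padding()        # writers take it exclusively

threads = [threading.Thread(target=encode_loop) for _ in range(4)]
threads += [threading.Thread(target=mutate_loop) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # expectation: no "Already borrowed", no lock poisoning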
For the regular CPython wheel everything is unchanged.
📦 Node.js bindings: first proper multi-platform release since 2023
The npm package now ships 13 platforms (macOS x64/arm64/universal, Windows x64/i686/arm64, Linux x64/arm64/armv7 in both glibc and musl flavors, Android arm64/armv7); previous workflows only built 3 of those, leaving Apple Silicon / Linux ARM / Alpine users with package-not-found errors since 2023 (#1365, #1703, #1922). Fixed via #1970 + #2034, which also bumps `@napi-rs/cli` to v3 and switches cross-builds to `cargo-zigbuild`.
🧷 Type hints & typing for all classes (#1928, #1997)
Every class in the Python bindings now ships proper `.pyi` stubs: `Tokenizer`, `AddedToken`, `Encoding`, and every decoder / model / normalizer / pre-tokenizer / processor / trainer. Editors and type checkers (mypy, pyright, ty) see real signatures with types and docstrings instead of falling back to `Any`.
The stubs are generated automatically from the compiled extension via `tools/stub-gen` (a Rust binary using `pyo3-introspection`). Re-running `make style` regenerates them; CI guards against regenerated-vs-checked-in drift. If the generator ever returns 0 docstrings (e.g. because the `[patch.crates-io]` pin in `.cargo/config.toml` falls out of sync with the pyo3 dep version), it now hard-aborts with a precise diagnostic instead of silently emitting bare-bones stubs.
:::python
>>> from tokenizers import Tokenizer
>>> # IDEs now resolve every method, every kwarg, every return type
>>> Tokenizer.from_pretrained("bert-base-cased")
⚠️ As called out in breaking changes: stricter type info means previously-hidden type errors in user code may now surface under `mypy --strict`.
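For instance, `token_to_id` returns `None` for out-of-vocab tokens, so once the stubs expose a real return type instead of `Any`, unguarded arithmetic like this stops type-checking (a sketch; the token is deliberately bogus):
:::python
from tokenizers import Tokenizer

tok = Tokenizer.from_pretrained("bert-base-cased")
# token_to_id returns None for unknown tokens; with precise stubs,
# mypy --strict flags the unguarded arithmetic below.
next_id = tok.token_to_id("never-seen-token") + 1   # error: "int | None" + "int"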
✨ Other features
- Unigram sampling: `models.Unigram` now exposes `alpha` and `nbest_size` for subword regularization (parity with Google's implementation, #1994). Closes long-standing requests #730 and #849; see the sketch after this list.
- Weakref support on `Tokenizer` (#1958): useful for long-lived caches that don't want to keep tokenizers alive.
- CI benchmark regression detection on PRs (#2013): every PR runs `ci_benchmark` against the stored baseline and posts a comparison chart to the PR.
- Longer-context Llama-3 benchmarks (#1971) for tracking headroom on multi-thousand-token inputs.
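A sketch of the new Unigram knobs, assuming `alpha` and `nbest_size` are exposed as constructor kwargs (check #1994 for the exact surface); the vocab here is a toy:
:::python
from tokenizers import Tokenizer
from tokenizers.models import Unigram

# toy (piece, log-prob) vocab
vocab = [("<unk>", 0.0), ("h", -2.0), ("e", -2.0), ("l", -2.0),
         ("o", -2.0), ("he", -2.5), ("llo", -2.5), ("hello", -3.0)]

# With nbest_size > 1 and alpha set, segmentation is sampled from the
# n-best lattice, so repeated encodes of the same text can differ.
tok = Tokenizer(Unigram(vocab, unk_id=0, alpha=0.1, nbest_size=64))
print(tok.encode("hello").tokens)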
🐛 Other fixes
- `EncodingVisualizer`: unclosed annotation span fixed (#1911), HTML escape applied to output (#1937).
- `DecodeStream`: `__copy__`/`__deepcopy__` support (#1930); see the sketch after this list.
- Pre-tokenize: removed an unnecessary `to_vec()` from `slice` (#1964).
- Replaced the `wget` / norvig URL with HF Hub downloads in the test-data fetch (#2018).
- `uv` support in the Python Makefile (#1977).
- Several security-pin bumps on workflow SHAs (#2004, #2005, #2006, #2016, #2017).
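For the `DecodeStream` fix, a small sketch of what #1930 enables; it assumes the streaming API (`DecodeStream(skip_special_tokens=...)` plus `step(tokenizer, id)`) and a hub-hosted tokenizer:
:::python
import copy
from tokenizers import Tokenizer
from tokenizers.decoders import DecodeStream

tok = Tokenizer.from_pretrained("gpt2")
ids = tok.encode("hello world").ids

stream = DecodeStream(skip_special_tokens=True)
stream.step(tok, ids[0])                  # consume part of the stream

fork = copy.deepcopy(stream)              # #1930: mid-stream state is duplicated
assert stream.step(tok, ids[1]) == fork.step(tok, ids[1])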
👥 Contributors
Thanks to everyone who shipped commits between v0.22.2 and v0.23.1:
@ArthurZucker, @finnagin, @gordonmessmer, @jberg5, @kennethsible, @llukito, @MayCXC, @McPatate, @michaelfeil, @mrkm4ntr, @musicinmybrain, @ngoldbaum, @OhashiReon, @paulinebm, @podarok, @rtrompier, @sebpop, @Shivam-Bhardwaj, @threexc, @wheynelau, @xanderlent, plus @dependabot and @hf-security-analysis for keeping pins fresh.
Full Changelog: https://github.com/huggingface/tokenizers/compare/v0.22.2...v0.23.1