| Name | Modified | Size |
|---|---|---|
| README.md | 2026-03-17 | 729 Bytes |
| v2.7.3 source code.tar.gz | 2026-03-17 | 6.8 MB |
| v2.7.3 source code.zip | 2026-03-17 | 7.0 MB |
| Totals: 3 items | | 13.8 MB |
## What's Changed

### Bug Fix
- fix: use `max_input_tokens` for context window size (#216)
  - `TokenCounter.get_max_tokens()` was using `litellm.get_max_tokens()`, which returns the output token limit (`max_tokens`), not the input context window (`max_input_tokens`)
  - Switched to `litellm.get_model_info()` and reading `max_input_tokens`, with fallback to `max_tokens` if unavailable
  - Uses an explicit `None` check instead of the `or` operator to correctly handle falsy values like `0`
  - This fixes utilization being overstated by up to 8x for models where output limits are much smaller than context windows
  - Closes #215
**Full Changelog**: https://github.com/BrainBlend-AI/atomic-agents/compare/v2.7.1...v2.7.3