Name | Modified | Size
---|---|---
load_test.html | 2025-09-28 | 1.7 MB
load_test_stats.csv | 2025-09-28 | 548 B
README.md | 2025-09-28 | 9.4 kB
v1.77.5.rc.1 source code.tar.gz | 2025-09-28 | 196.4 MB
v1.77.5.rc.1 source code.zip | 2025-09-28 | 198.6 MB
Totals: 5 items | | 396.6 MB
What's Changed
- Add supported text field to anthropic citation response by @TomeHirata in https://github.com/BerriAI/litellm/pull/14164
- fix: SSO "Clear" button writes empty values instead of removing SSO config by @Jetemple in https://github.com/BerriAI/litellm/pull/14826
- Fix: make `fastuuid` an optional dependency for `proxy`, graceful fallback to stdlib `uuid` by @akshoop in https://github.com/BerriAI/litellm/pull/14818
- Fix: support claude code auth via subscription (anthropic) by @hazyone in https://github.com/BerriAI/litellm/pull/14821
- Fix concurrency/scaling when many Python threads do streaming using sync completions by @leventov in https://github.com/BerriAI/litellm/pull/14816
- Fix load credentials in token counter proxy by @eycjur in https://github.com/BerriAI/litellm/pull/14808
- Adding langfuse usage details for cached tokens by @fabriciojoc in https://github.com/BerriAI/litellm/pull/10955
- doc: make the README document clearer by @mrFranklin in https://github.com/BerriAI/litellm/pull/14860
- feat: New model - Add support for Qwen models family & Deepseek 3.1 to Amazon Bedrock by @onlylonly in https://github.com/BerriAI/litellm/pull/14845
- Added vertex_ai/qwen models and azure/gpt-5-codex by @mubashir1osmani in https://github.com/BerriAI/litellm/pull/14844
- Add sambanova deepseek v3.1 and gpt-oss-120b by @luisfucros in https://github.com/BerriAI/litellm/pull/14866
- [Fix] LakeraAI v2 Guardrail - Ensure exception is raised correctly by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14867
- Add /user/list to management routes by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14868
- [Memory Leak Fix] Fix InMemoryCache unbounded growth when TTLs are set by @Copilot in https://github.com/BerriAI/litellm/pull/14869
- Bump tar-fs from 3.0.10 to 3.1.1 in /docs/my-website by @dependabot[bot] in https://github.com/BerriAI/litellm/pull/14872
- Fix: make `pond` an optional dependency for `proxy` extras, disable object pooling gracefully by @akshoop in https://github.com/BerriAI/litellm/pull/14863
- Revert "Fix: make `pond` an optional dependency for `proxy` extras, disable object pooling gracefully" by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14880
- [Feat] Initial support for scheduled key rotations by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14877
- Fix: Support custom entity types in Presidio guardrail with Union[PiiEntityType, str] by @arsh72 in https://github.com/BerriAI/litellm/pull/14899
- Remove useful links from admin settings by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14918
- Update docs for session management availability by @berri-teddy in https://github.com/BerriAI/litellm/pull/14914
- [Feat] Logging - `datadog` callback: log message content w/o sending to datadog by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14909
- fix: use fastuuid helper by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/14903
- [Feat] - Cost Tracking - show input, output, tool call cost breakdown in StandardLoggingPayload by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14921
- Corrected docs updates sept 2025 by @berri-teddy in https://github.com/BerriAI/litellm/pull/14916
- (Feat) Add BitBucket Integration for Prompt Management by @Sameerlite in https://github.com/BerriAI/litellm/pull/14882
- Remove aiohttp_ prefix from config by @berri-teddy in https://github.com/BerriAI/litellm/pull/14920
- add noma guardrail provider to ui by @vpbill in https://github.com/BerriAI/litellm/pull/14415
- fix: prisma client state retries by @mubashir1osmani in https://github.com/BerriAI/litellm/pull/14925
- 🐛 Fix a bug where openai image edit silently ignores multiple images by @kgritesh in https://github.com/BerriAI/litellm/pull/14893
- feat: improve opik integration code by @mrFranklin in https://github.com/BerriAI/litellm/pull/14888
- Add gpt-5 and gpt-5-codex to OpenRouter cost map by @huangyafei in https://github.com/BerriAI/litellm/pull/14879
- GPT-3.5-Turbo price updated. by @oytunkutrup1 in https://github.com/BerriAI/litellm/pull/14858
- Fix: revert `fastuuid` optional dependency, always use `fastuuid` in `.__uid` helper by @akshoop in https://github.com/BerriAI/litellm/pull/14941
- [Feat] Add support for Gemini 2.5 Flash and Flash-lite preview models (09-2025 release) by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14948
- docs: add documentation for additional cost-related keys in custom pricing by @thiswillbeyourgithub in https://github.com/BerriAI/litellm/pull/14949
- [Feat] Add new anthropic web fetch tool support by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14951
- Revert incorrect changes to sonnet-4 max output tokens by @nherment in https://github.com/BerriAI/litellm/pull/14933
- fix: remove slow string operation by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/14955
- Azure Managed Batches - api key error by @mubashir1osmani in https://github.com/BerriAI/litellm/pull/14932
- fix mypy errors from bitbucket integration by @Sameerlite in https://github.com/BerriAI/litellm/pull/14959
- [Feat] UI - Allow scheduling key rotations when creating virtual keys by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14960
- fix: support documented params by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/14969
- [Fix] Parallel Request Limiter v3 - ensure Lua scripts can execute on redis cluster by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14968
- Error logging in SQS by @deepanshululla in https://github.com/BerriAI/litellm/pull/14974
- chore: setting devcontainer for develop by @22mSqRi in https://github.com/BerriAI/litellm/pull/14972
- Fix Anthropic streaming IDs (#14962) by @Maximgitman in https://github.com/BerriAI/litellm/pull/14965
- Fix inconsistent token configs for gpt-5 models by @danielmklein in https://github.com/BerriAI/litellm/pull/14942
- [Bug]: vLLM provider's rerank endpoint from /v1/rerank to /rerank by @subnet-dev in https://github.com/BerriAI/litellm/pull/14938
- Revert "Azure Managed Batches - api key error" by @krrishdholakia in https://github.com/BerriAI/litellm/pull/14978
- Add azure passthrough documentation by @berri-teddy in https://github.com/BerriAI/litellm/pull/14958
- stabilize main github tests by @krrishdholakia in https://github.com/BerriAI/litellm/pull/14979
- feat: initial commit for v2 oauth flow by @krrishdholakia in https://github.com/BerriAI/litellm/pull/14964
- test - fix mypy linting on ci/cd by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14981
- fix: reduce get_deployment cost to O(1) by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/14967
- fix: remove blocking create_task by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/14980
- Litellm ci cd linting fixes 09 29 2025 p2 by @krrishdholakia in https://github.com/BerriAI/litellm/pull/14982
- Revert "fix: remove blocking create_task" by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14984
- [Draft] v1.77.5 Release note by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/14985
New Contributors
- @Jetemple made their first contribution in https://github.com/BerriAI/litellm/pull/14826
- @akshoop made their first contribution in https://github.com/BerriAI/litellm/pull/14818
- @hazyone made their first contribution in https://github.com/BerriAI/litellm/pull/14821
- @leventov made their first contribution in https://github.com/BerriAI/litellm/pull/14816
- @fabriciojoc made their first contribution in https://github.com/BerriAI/litellm/pull/10955
- @onlylonly made their first contribution in https://github.com/BerriAI/litellm/pull/14845
- @Copilot made their first contribution in https://github.com/BerriAI/litellm/pull/14869
- @arsh72 made their first contribution in https://github.com/BerriAI/litellm/pull/14899
- @berri-teddy made their first contribution in https://github.com/BerriAI/litellm/pull/14914
- @vpbill made their first contribution in https://github.com/BerriAI/litellm/pull/14415
- @kgritesh made their first contribution in https://github.com/BerriAI/litellm/pull/14893
- @oytunkutrup1 made their first contribution in https://github.com/BerriAI/litellm/pull/14858
- @nherment made their first contribution in https://github.com/BerriAI/litellm/pull/14933
- @deepanshululla made their first contribution in https://github.com/BerriAI/litellm/pull/14974
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.77.4-nightly...v1.77.5.rc.1
Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.77.5.rc.1
```
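Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. A minimal sketch of a request against it, using only the standard library (the model name `gpt-3.5-turbo` and the virtual key `sk-1234` are placeholder assumptions; substitute a model and key configured on your deployment):

```python
import json
import urllib.request

# Build an OpenAI-style /chat/completions request against the local proxy.
payload = {
    "model": "gpt-3.5-turbo",  # placeholder: use a model configured on your proxy
    "messages": [{"role": "user", "content": "Hello from LiteLLM"}],
}
req = urllib.request.Request(
    "http://localhost:4000/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-1234",  # placeholder virtual key
    },
)

# Uncomment once the proxy container is running:
# with urllib.request.urlopen(req) as resp:
#     body = json.loads(resp.read())
#     print(body["choices"][0]["message"]["content"])
```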
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
---|---|---|---|---|---|---|---|---|---
/chat/completions | Failed ❌ | 67 | 84.85 | 6.57 | 6.57 | 1964 | 1964 | 55.68 | 2992.91
Aggregated | Failed ❌ | 67 | 84.85 | 6.57 | 6.57 | 1964 | 1964 | 55.68 | 2992.91