| Name | Modified | Size | Downloads / Week |
|---|---|---|---|
| README.md | 2025-12-17 | 10.2 kB | |
| v1.80.10.rc.3 source code.tar.gz | 2025-12-17 | 255.5 MB | |
| v1.80.10.rc.3 source code.zip | 2025-12-17 | 258.8 MB | |
| Totals: 3 Items | | 514.3 MB | 0 |
## What's Changed
- fix gemini web search requests count by @KeremTurgutlu in https://github.com/BerriAI/litellm/pull/17921
- fix(perplexity): use API-provided cost instead of manual calculation by @Chesars in https://github.com/BerriAI/litellm/pull/17887
- feat(stability): add Stability AI image generation support by @Chesars in https://github.com/BerriAI/litellm/pull/17894
- fix(anthropic): use dynamic max_tokens based on model by @Chesars in https://github.com/BerriAI/litellm/pull/17900
- fix: pass credentials to PredictionServiceClient for Vertex AI custom endpoints by @dongbin-lunark in https://github.com/BerriAI/litellm/pull/17757
- Add Azure Cohere 4 reranking models by @emerzon in https://github.com/BerriAI/litellm/pull/17961
- add MCP auth header propagation by @uc4w6c in https://github.com/BerriAI/litellm/pull/17963
- Fix: add OpenAI-compatible API for Anthropic with modify_params=True by @Chesars in https://github.com/BerriAI/litellm/pull/17106
- fix(openai/responses/guardrail_translation): fix basemodel import by @krrishdholakia in https://github.com/BerriAI/litellm/pull/17977
- Guardrails API - support LLM tool call response checks on `/chat/completions`, `/v1/responses`, `/v1/messages` on regular + streaming calls by @krrishdholakia in https://github.com/BerriAI/litellm/pull/17619
- OpenRouter GPT 5.2, Mistral 3, and Devstral 2 by @SamAcctX in https://github.com/BerriAI/litellm/pull/17844
- add: litellm team PR template by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/17983
- Add: CI/CD rules to default PR template for LiteLLM team. by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/17985
- fix: cost calculation of gpt-image-1 model by @Sameerlite in https://github.com/BerriAI/litellm/pull/17966
- Add support for reasoning param for fireworks AI models by @Sameerlite in https://github.com/BerriAI/litellm/pull/17967
- Add provider specific tools support in responses api by @Sameerlite in https://github.com/BerriAI/litellm/pull/17980
- [Refactor] lazy imports: Use per-attribute lazy imports and extract shared constants by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/17994
- [Refactor] litellm/__init__.py: lazy load http handlers by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/17997
- [Refactor] litellm/__init__.py: lazy load caches by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18001
- [Refactor] litellm/__init__.py: lazy load get_modified_max_tokens by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18002
- [docs] update SAP docs by @vasilisazayka in https://github.com/BerriAI/litellm/pull/17974
- [Feat] Guardrails - litellm content filter by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/18007
- feat(custom_llm): add image_edit and aimage_edit support by @Chesars in https://github.com/BerriAI/litellm/pull/17999
- fix: mcp deepcopy error by @uc4w6c in https://github.com/BerriAI/litellm/pull/18010
- [Feat] New provider - Agent Gateway, add pydantic ai agents by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/18013
- fix(anthropic): claude-3-7-sonnet max_tokens to 64K default by @Chesars in https://github.com/BerriAI/litellm/pull/17979
- [fix] add qwen3-embedding-8b input per token price by @shivamrawat1 in https://github.com/BerriAI/litellm/pull/18018
- fix(gemini): use JSON instead of form-data for image edit requests by @Chesars in https://github.com/BerriAI/litellm/pull/18012
- Daily litellm staging branch by @krrishdholakia in https://github.com/BerriAI/litellm/pull/18020
- feat(gemini): support extra_headers in batch embeddings by @qdrddr in https://github.com/BerriAI/litellm/pull/18004
- Propagate token usage when generating images with Gemini by @komarovd95 in https://github.com/BerriAI/litellm/pull/17987
- feat(venice.ai): add support for Venice.ai API via providers.json by @donicrosby in https://github.com/BerriAI/litellm/pull/17962
- Litellm bedrock guardrails block precedence over masking by @kothamah in https://github.com/BerriAI/litellm/pull/17968
- Revert "Litellm bedrock guardrails block precedence over masking" by @krrishdholakia in https://github.com/BerriAI/litellm/pull/18022
- Revert "Revert "Litellm bedrock guardrails block precedence over masking"" by @krrishdholakia in https://github.com/BerriAI/litellm/pull/18023
- Fix get_model_from_request() to extract model ID from Vertex AI passthrough URLs by @krisxia0506 in https://github.com/BerriAI/litellm/pull/17970
- [Feat] New Provider - VertexAI Agent Engine by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/18014
- feat(pillar): add masking support and MCP call support by @eagle-p in https://github.com/BerriAI/litellm/pull/17959
- fix: Support Signed URLs with Query Parameters in Image Processing by @OlivverX in https://github.com/BerriAI/litellm/pull/17976
- [Docs] Add docs on using pydantic ai agents with LiteLLM A2a gateway by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/18026
- chore: improve issue labeling with component dropdown and more provider keywords by @Chesars in https://github.com/BerriAI/litellm/pull/17957
- Cleanup PR template: remove redundant fields by @Chesars in https://github.com/BerriAI/litellm/pull/17956
- Added new step into rotate master key function for processing credentials table by @Eric84626 in https://github.com/BerriAI/litellm/pull/17952
- [Docs] Litellm add docs vertex ai engine by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/18027
- Litellm dev 12 15 2025 p1 by @krrishdholakia in https://github.com/BerriAI/litellm/pull/18028
- [Feature] UI - Milvus Vector Store by @yuneng-jiang in https://github.com/BerriAI/litellm/pull/18030
- fix: add headers to metadata for guardrails on pass-through endpoints by @NicolaivdSmagt in https://github.com/BerriAI/litellm/pull/17992
- Router order parameter documentation by @krrishdholakia in https://github.com/BerriAI/litellm/pull/18045
- [Refactor] litellm/__init__.py: lazy load LLMClientCache by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18008
- [Refactor] litellm/__init__.py: lazy load bedrock types by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18053
- [Refactor] litellm/__init__.py: lazy load .types.utils by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18054
- [Refactor] litellm/__init__.py: lazy load dotprompt integration by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18056
- [Refactor] litellm/__init__.py: lazy load default encoding from client decorator by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18059
- [Feature] Download Prisma binaries at build time instead of at runtime for Security Restricted environments by @mdiloreto in https://github.com/BerriAI/litellm/pull/17695
- Add custom headers in responses API by @Sameerlite in https://github.com/BerriAI/litellm/pull/18036
- fix: skip adding beta headers for vertex ai as it is not supported by @Sameerlite in https://github.com/BerriAI/litellm/pull/18037
- Remove ttl field when routing to bedrock by @Sameerlite in https://github.com/BerriAI/litellm/pull/18049
- fix: Add none to encoding_format instead of omitting it by @Sameerlite in https://github.com/BerriAI/litellm/pull/18042
- Add support for agent skills in chat completion by @Sameerlite in https://github.com/BerriAI/litellm/pull/18031
- Fix managed files endpoint by @Sameerlite in https://github.com/BerriAI/litellm/pull/18046
- Revert "Fix get_model_from_request() to extract model ID from Vertex AI passthrough URLs" by @Sameerlite in https://github.com/BerriAI/litellm/pull/18063
- [Refactor] litellm/__init__.py: lazy-load heavy client decorator imports by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18064
- Litellm staging 12 16 2025 by @krrishdholakia in https://github.com/BerriAI/litellm/pull/18025
- [Refactor] litellm/__init__.py: lazy-load heavy imports from litellm.main by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18066
- [Refactor] litellm/__init__.py: lazy-load AmazonConverseConfig by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18069
- [Refactor] litellm/__init__.py: lazy load encoding from main.py by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18070
- [Feature] UI - Add Models Conditional Rendering by @yuneng-jiang in https://github.com/BerriAI/litellm/pull/18071
- [Refactor] litellm/__init__.py: lazy load GuardrailItem by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18072
- [Refactor] images/main.py: lazy load ImageEditRequestUtils by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18074
- Lazy load OpenAILikeChatConfig to avoid heavy import by @AlexsanderHamir in https://github.com/BerriAI/litellm/pull/18075
- [Feat] LiteLLM Content Filter - Add Support for Brazil PII field by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/18076
- Add Azure DeepSeek V3.2 versions by @emerzon in https://github.com/BerriAI/litellm/pull/18019
- feat: add github_copilot model info by @codgician in https://github.com/BerriAI/litellm/pull/17858
- [Feat] New Endpoint - Google Interactions API - added on litellm SDK by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/18079
- [Feat] Add New Google Interactions API on AI Gateway by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/18081
- Allow base_model for non Azure providers in proxy by @jyeros in https://github.com/BerriAI/litellm/pull/18038
- docs: add documentation describing configurable Hashicorp Vault mount… by @uc4w6c in https://github.com/BerriAI/litellm/pull/18082
- [Feature] Add LiteLLM Overhead to Logs by @yuneng-jiang in https://github.com/BerriAI/litellm/pull/18033
- [Feature] UI - Show LiteLLM Overhead in Logs by @yuneng-jiang in https://github.com/BerriAI/litellm/pull/18034
## New Contributors
- @dongbin-lunark made their first contribution in https://github.com/BerriAI/litellm/pull/17757
- @qdrddr made their first contribution in https://github.com/BerriAI/litellm/pull/18004
- @donicrosby made their first contribution in https://github.com/BerriAI/litellm/pull/17962
- @NicolaivdSmagt made their first contribution in https://github.com/BerriAI/litellm/pull/17992
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.80.10-nightly...v1.80.10.rc.3