## 1.18.0 (2025-11-05)

### Features

- [ADK Visual Agent Builder]
  - Core Features
    - Visual workflow designer for agent creation
    - Support for multiple agent types (LLM, Sequential, Parallel, Loop, Workflow)
    - Agent tool support with nested agent tools
    - Built-in and custom tool integration
    - Callback management for all ADK callback types (before/after agent, model, tool)
    - Assistant that helps you build your agents with natural language
    - Assistant proposes and writes agent configuration YAML files for you
    - Save your agent and test it with the chat interface as normal
    - Build and debug at the same time in adk web!
- [Core]
  - Add support for extracting cache-related token counts from LiteLLM usage (4f85e86)
  - Expose the Python code run by the code interpreter in the logs (a2c6a8a)
  - Add run_debug() helper method for quick agent experimentation (0487eea)
  - Allow injecting a custom Runner into agent_to_a2a (156d235)
  - Support MCP prompts via the McpInstructionProvider class (88032cf)
- [Models]
  - Add model tracking to LiteLlm and introduce a LiteLLM with fallbacks demo (d4c63fc) (see the LiteLLM sketch after this feature list)
  - Add ApigeeLlm as a model that lets ADK agent developers connect to an Apigee proxy (87dcb3f)
- [Integrations]
  - Add example and fix for loading and upgrading old ADK session databases (338c3c8)
  - Add support for specifying the logging level for the adk eval CLI command (b1ff85f)
  - Propagate LiteLLM finish_reason to LlmResponse for use in callbacks (71aa564)
  - Allow the LLM request to override the model used in the generate_content_async method in LiteLLM (ce8f674)
  - Add api key argument to Vertex Session and Memory services for Express Mode support (9014a84) (see the Express Mode sketch after this feature list)
  - Add support for enums as arguments for function tools (240ef5b) (see the enum tool sketch after this feature list)
  - Implement artifact_version-related methods in GcsArtifactService (e194ebb)
- [Services]
  - Add support for Vertex AI Express Mode when deploying to Agent Engine (d4b2a8b)
  - Remove custom polling logic for Vertex AI Session Service since LRO polling is supported in Express Mode (546c2a6)
  - Make VertexAiSessionService fully asynchronous (f7e2a7a)
- [Tools]
  - Add BigQuery detect_anomalies tool (9851340)
  - Extend BigQuery detect_anomalies tool to support future data anomaly detection (38ea749)
  - Add get_job_info tool to the BigQuery toolset (6429457)
- [Evals]
  - Add "final_session_state" to the EvalCase data model (2274c4f)
  - Mark expected_invocation as an optional field on the evaluator interface (b17c8f1)
  - Add an LLM-backed user simulator (54c4ecc)
- [Observability]
  - Add BigQueryLoggingPlugin for event logging to BigQuery (b7dbfed)
- [Live]
  - Add token usage to live events for bidi streaming (6e5c0eb)
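
Several LiteLLM-related entries above (model tracking, the fallbacks demo, finish_reason propagation) revolve around the LiteLlm model wrapper. A minimal sketch of attaching a LiteLlm model to an agent follows; the model string is illustrative, and whether extra LiteLLM options (such as fallbacks) pass through the wrapper is an assumption here, not a documented guarantee.

```python
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

# Minimal sketch: route an ADK agent through LiteLLM.
# The model identifier is illustrative; any provider string LiteLLM accepts
# should work here.
agent = Agent(
    name="litellm_agent",
    model=LiteLlm(model="openai/gpt-4o-mini"),
    instruction="Answer questions concisely.",
)
```
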
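The Express Mode entries above add an API-key path to the Vertex session and memory services. A minimal sketch, assuming the api_key keyword named in the changelog is sufficient on its own in Express Mode (other constructor arguments may still apply in your setup):

```python
from google.adk.sessions import VertexAiSessionService

# Express Mode sketch: the api_key keyword is taken from this release's notes.
# Whether project, location, or an agent engine ID are also required in
# Express Mode is not verified here; treat this as an illustrative call.
session_service = VertexAiSessionService(api_key="YOUR_EXPRESS_MODE_API_KEY")
```
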
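For the new enum-argument support in function tools, here is a minimal sketch using the standard ADK pattern of passing a plain Python function as a tool. The Priority enum and file_ticket tool are made up for illustration.

```python
from enum import Enum

from google.adk.agents import Agent


class Priority(Enum):
    # Illustrative enum; the new feature is that enum-typed parameters are
    # accepted as function tool arguments.
    LOW = "low"
    HIGH = "high"


def file_ticket(summary: str, priority: Priority) -> dict:
    """Hypothetical function tool with an enum-typed argument."""
    return {"status": "filed", "summary": summary, "priority": priority.value}


# Plain Python functions are registered directly as tools on an agent.
agent = Agent(
    name="ticket_agent",
    model="gemini-2.0-flash",
    instruction="File tickets for the user and pick a sensible priority.",
    tools=[file_ticket],
)
```
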
### Bug Fixes

- Reduce logging spam for MCP tools without authentication (11571c3)
- Fix typo in several files (d2888a3)
- Disable SetModelResponseTool workaround for Vertex AI Gemini 2+ models (6a94af2)
- Fix a bug when callback_context_invocation_context is missing in GlobalInstructionPlugin (f81ebdb)
- Support the models/ prefix in model name extraction (8dff850)
- Do not consider events with state delta and no content as final response (1ee93c8)
- Fix parameter filtering for CrewAI functions with **kwargs (74a3500)
- Do not treat FinishReason.STOP as error case for LLM responses containing candidates with empty contents (2f72ceb)
- Fix null check for reflect_retry plugin sample (86f0155)
- Create the evalset directory on evalset create (6c3882f)
- Add ADK_DISABLE_LOAD_DOTENV environment variable that, when set to true or 1, disables automatic loading of .env when running the ADK CLI (15afbcd)
- Allow tenacity 9.0.0 (ee8acc5)
- Handle both base64-encoded and raw bytes when uploading output files to the artifact service (496f8cd)
- Correct message part ordering in A2A history (5eca72f)
- Change instruction insertion to respect tool call/response pairs (1e6a9da)
- Make DynamicPickleType support the MySQL dialect (fc15c9a) (see the session database sketch after this list)
- Enable usage metadata in LiteLLM streaming (f9569bb)
- Fix issue with MCP tools throwing an error (1a4261a)
- Remove redundant format field from LiteLLM content objects (489c39d)
- Update the contribution analysis tool to use original write mode (54db3d4)
- Fix row wrapping issue in the detailed output of agent evaluations (4284c61)
- Update dependency version constraints to be based on PyPI versions (0b1784e)
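
The DynamicPickleType fix above is what makes a MySQL-backed session database work. A minimal sketch, assuming the usual DatabaseSessionService setup; the connection URL is illustrative only.

```python
from google.adk.sessions import DatabaseSessionService

# With the DynamicPickleType fix, a MySQL-backed session store is a supported
# configuration. The connection URL below is a placeholder; it assumes a
# MySQL driver such as pymysql is installed.
session_service = DatabaseSessionService(
    db_url="mysql+pymysql://adk_user:adk_pass@localhost:3306/adk_sessions",
)
```
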
### Improvements

- Add Community Repo section to README (432d30a)
- Undo adding MCP tools output schema to FunctionDeclaration (92a7d19)
- Refactor ADK README for clarity and consistency (b0017ae)
- Add support for reverse proxy in adk web (a0df75b)
- Avoid rendering empty columns in the detailed results view of eval results (5cb35db)
- Clarify the behavior of disallow_transfer_to_parent (48ddd07)
- Disable the scheduled execution for issue triage workflow (a02f321)
- Include delimiter when matching events from parent nodes in content processor (b8a2b6c)
- Improve Tau-bench ADK colab stability (04dbc42)
- Implement ADK-based agent factory for Tau-bench (c0c67c8)
- Add util to run ADK LLM Agent with simulation environment (87f415a)
- Demonstrate CodeExecutor customization for environment setup (8eeff35)
- Add sample agent for VertexAiCodeExecutor (edfe553) (see the code executor sketch after this list)
- Add a new sample agent that demonstrates how to integrate PostgreSQL databases using the Model Context Protocol (MCP) (45a2168)
- Add example for using ADK with Fast MCP sampling (d3796f9)
- Refactor GEPA sample code and clean up the user demo colab (63353b2)
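
For the VertexAiCodeExecutor sample noted above, here is a minimal sketch of attaching a code executor to an agent. The import path and agent fields follow the current ADK docs, but treat the exact configuration as an assumption rather than the contents of the shipped sample.

```python
from google.adk.agents import Agent
from google.adk.code_executors import VertexAiCodeExecutor

# Minimal sketch: give an agent a Vertex AI-backed code executor so the model
# can write and run Python. Illustrative only, not the shipped sample agent.
agent = Agent(
    name="data_analyst",
    model="gemini-2.0-flash",
    instruction="Write and run Python code to answer data analysis questions.",
    code_executor=VertexAiCodeExecutor(),
)
```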