Published 27 Aug 2025
## Major Features
- Integration with Observability Tools:
  - Langfuse Integration: Span adapters for the Langfuse client, including OpenTelemetry and graph visualization (KG-217, KG-223).
  - W&B Weave Integration: Span adapters for W&B Weave OpenTelemetry and observability (KG-217, KG-218).
- Ktor Integration: First-class Ktor support via the "Koog" Ktor plugin to register and run agents in Ktor applications (#422).
- iOS Target Support: Multiplatform expanded with native iOS targets, enabling agents to run on Apple platforms (#512).
- Upgraded Structured Output: Refactored the structured output API to be more flexible and added built-in/native provider support for OpenAI and Google, reducing prompt boilerplate and improving validation (#443).
- GPT-5 and Custom LLM Parameters Support: GPT-5 is now available, along with custom additional LLM parameters for OpenAI-compatible clients (#631, #517).
- Resilience and Retries:
  - Retryable LLM Clients: Introduce retry logic for LLM clients with sensible defaults to reduce transient failures (#592).
  - Retry Anything with LLM Feedback: Add a feedback mechanism to the retry component (`subgraphWithRetry`) to observe and tune behavior (#459).
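The idea behind retryable LLM clients can be pictured with a minimal, hypothetical retry-with-backoff wrapper in plain Kotlin. This is a sketch of the general technique only; the function name, parameters, and defaults below are assumptions, not Koog's actual API:

```kotlin
// Hypothetical sketch: retry a block with exponential backoff on
// transient failures (the pattern behind retryable LLM clients).
fun <T> retrying(
    maxAttempts: Int = 3,
    baseDelayMs: Long = 500,
    isTransient: (Throwable) -> Boolean = { true },
    block: () -> T,
): T {
    var lastError: Throwable? = null
    repeat(maxAttempts) { attempt ->
        try {
            return block()
        } catch (e: Throwable) {
            if (!isTransient(e)) throw e  // give up on permanent errors
            lastError = e
            // back off exponentially before the next attempt
            if (attempt < maxAttempts - 1) Thread.sleep(baseDelayMs * (1L shl attempt))
        }
    }
    throw lastError ?: IllegalStateException("retry failed")
}
```

In real code the `isTransient` predicate would match rate-limit or timeout errors from the LLM provider, and the actual Koog clients ship with sensible defaults so no wrapper is needed.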
## Improvements
- OpenTelemetry and Observability:
  - Finish reason and unified attributes for inference/tool/message spans and events; extract event body fields to attributes for better querying (KG-218).
  - Mask sensitive data in events/attributes and introduce a “hidden-by-default” string type to keep secrets safe in logs (KG-259).
  - Include all messages in the inference span and add an index for ChoiceEvent to simplify analysis (KG-172).
  - Add tool arguments to `gen_ai.choice` and `gen_ai.assistant.message` events (#462).
  - Allow setting a custom OpenTelemetry SDK instance in Koog (KG-169).
- LLM and Providers:
  - Support Google’s “thinking” mode in generation config to improve reasoning quality (#414).
  - Add Responses API support for OpenAI (#645).
  - AWS Bedrock: support Inference Profiles for simpler, consistent configuration (#506) and accept `AWS_SESSION_TOKEN` (#456).
  - Add `maxTokens` as a prompt parameter for finer control over generation length (#579).
  - Add `contextLength` and `maxOutputTokens` to `LLModel` (#438, KG-134).
- Agent Engine:
  - Add AIAgentPipeline interceptors to uniformly handle node errors; propagate `NodeExecutionError` across features (KG-170).
  - Include finish node processing in the pipeline to ensure finalizers run reliably (#598).
- File Tools and RAG:
  - Reworked FileSystemProvider with API cleanups and better ergonomics; moved blocking/suspendable operations to `Dispatchers.IO` for improved performance and responsiveness (#557).
  - Introduce `filterByRoot` helpers and allow custom path filters in `FilteredFileSystemProvider` for safer agent sandboxes (#494, #508).
  - Rename `PathFilter` to `TraversalFilter` and make its methods suspendable to support async checks.
  - Rename `fromAbsoluteString` to `fromAbsolutePathString` for clarity (#567).
  - Add `ReadFileTool` for reading local file contents where appropriate (#628).
- Update the kotlin-mcp dependency to v0.6.0 (#523).
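The “hidden-by-default” string type mentioned under KG-259 can be illustrated with a small sketch. This is a hypothetical stand-in, not Koog's actual type; the point is that the secret never leaks through `toString()`, so accidental logging or telemetry export prints a mask:

```kotlin
// Hypothetical "hidden-by-default" string: masked in logs and string
// templates, readable only through an explicit reveal() call.
@JvmInline
value class HiddenString(private val value: String) {
    override fun toString(): String = "***"  // what logs/telemetry see
    fun reveal(): String = value             // explicit opt-in to read
}
```

With this shape, a logger or span exporter that stringifies attributes can never pick up the raw secret by accident.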
## Bug Fixes
- Make the `parts` field nullable in Google responses to handle missing content from Gemini models (#652).
- Fix enum parsing in MCP when the type is not mentioned (#601, KG-49).
- Fix function calling for `gemini-2.5-flash` models to correctly route tool invocations (#586).
- Restore OpenAI `responseFormat` option support in requests (#643).
- Correct the `o4-mini` vs `gpt-4o-mini` model mix-up in configuration (#573).
- Ensure the event body for function calls is valid JSON for telemetry ingestion (KG-268).
- Fix duplicated tool names resolution in `AIAgentSubgraphExt` to prevent conflicts (#493).
- Fix Azure OpenAI client settings to generate valid endpoint URLs (#478).
- Restore `llama3.2:latest` as the default for LLAMA_3_2 to match provider expectations (#522).
- Update missing `Document` capabilities for LLModel (#543).
- Fix an Anthropic JSON schema validation error (#457).
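The duplicated-tool-names fix (#493) concerns resolving name collisions when several tools register under the same name. As a hypothetical sketch of one common resolution strategy (suffix-based deduplication; the function below is illustrative, not Koog's code):

```kotlin
// Hypothetical sketch: make tool names unique by appending a numeric
// suffix to every repeat, so registrations never silently collide.
fun resolveToolNames(names: List<String>): List<String> {
    val seen = mutableMapOf<String, Int>()
    return names.map { name ->
        val count = seen.getOrDefault(name, 0)
        seen[name] = count + 1
        if (count == 0) name else "${name}_$count"
    }
}
```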
## Removals / Breaking Changes
- Remove Google Gemini 1.5 Flash/Pro variants from the catalog (KG-216, #574).
- Drop `execute` extensions for `PromptExecutor` in favor of the unified API (#591).
- File system API cleanup: removed deprecated FSProvider interfaces and methods; `PathFilter` renamed to `TraversalFilter` with suspendable operations; `fromAbsoluteString` renamed to `fromAbsolutePathString`.
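Code affected by the `PathFilter` rename can often be migrated gradually via a deprecated type alias. The sketch below is hypothetical (the real `TraversalFilter` methods are also suspendable, which is omitted here for brevity):

```kotlin
// Hypothetical migration shim for the PathFilter -> TraversalFilter rename.
fun interface TraversalFilter {
    fun accept(path: String): Boolean
}

// Old call sites referencing PathFilter keep compiling, with a
// deprecation warning pointing at the new name.
@Deprecated("Renamed to TraversalFilter", ReplaceWith("TraversalFilter"))
typealias PathFilter = TraversalFilter

// Example filter: skip dot-files during traversal.
val hideDotFiles: TraversalFilter = TraversalFilter { path ->
    !path.substringAfterLast('/').startsWith(".")
}
```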
## Examples
- Add a web search agent (from Koog live stream 1) showcasing retrieval + summarization (#575).
- Add a trip planning agent example (from Koog live stream 2) demonstrating tools + planning + composite strategy (#595).
- Improve the BestJokeAgent sample and fix the NumberGuessingAgent example (#503, #445).