Published 5 Jun 2025
Features
- Add media types (image/audio/document) support to prompt API and models (#195)
- Add token count and timestamp support to Message.Response; add the Tokenizer and MessageTokenizer feature (#184)
- Add an LLM caching capability, supported in Anthropic mode (#208)
- Add new LLM configurations for Groq, Meta, and Alibaba (#155)
- Extend OpenAIClientSettings with configurable chat completions and embeddings API paths (#182)
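
The configurable API paths can be pictured as follows. This is a hypothetical sketch, not the actual Koog declaration of OpenAIClientSettings; the field and function names are illustrative only:

```kotlin
// Hypothetical sketch of configurable endpoint paths (in the spirit of #182).
// Field names are illustrative, not the real OpenAIClientSettings API.
data class OpenAIClientSettings(
    val baseUrl: String = "https://api.openai.com",
    val chatCompletionsPath: String = "/v1/chat/completions",
    val embeddingsPath: String = "/v1/embeddings",
) {
    fun chatCompletionsUrl(): String = baseUrl + chatCompletionsPath
    fun embeddingsUrl(): String = baseUrl + embeddingsPath
}

fun main() {
    // Point the client at an OpenAI-compatible proxy with a custom path.
    val settings = OpenAIClientSettings(
        baseUrl = "https://my-proxy.internal",
        chatCompletionsPath = "/openai/chat",
    )
    println(settings.chatCompletionsUrl()) // https://my-proxy.internal/openai/chat
}
```

Keeping the paths separate from the base URL lets one settings object target OpenAI-compatible servers that expose the same APIs under different routes.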
Improvements
- Mark prompt builders with PromptDSL (#200)
- Make LLM provider non-sealed to allow extending it (#204)
- Rework the Ollama model management API (#161)
- Unify PromptExecutor and AIAgentPipeline API for LLMCall events (#186)
- Update Gemini 2.5 Pro capabilities for tool support
- Add dynamic model discovery and fix tool call IDs for Ollama client (#144)
- Enhance the Ollama model definitions (#149)
- Enhance event handlers with more available information (#212)
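
What un-sealing the provider type enables can be sketched in plain Kotlin. The class shape and names below are hypothetical, not Koog's actual LLMProvider declaration:

```kotlin
// Hypothetical sketch of #204: once the provider class is no longer sealed,
// code outside the library can declare its own providers.
// The constructor parameters here are illustrative, not the Koog API.
open class LLMProvider(val id: String, val display: String)

// A third-party provider defined outside the library's own module:
object MyCustomProvider : LLMProvider("my-llm", "My Custom LLM")

fun main() {
    println(MyCustomProvider.display) // My Custom LLM
}
```

A sealed hierarchy restricts subclasses to the library's own compilation unit, so removing `sealed` is what makes user-defined providers like this possible.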
Bug Fixes
- Fix LLM requests with disabled tools, fix prompt messages update (#192)
- Fix structured output JSON descriptions missing after serialization (#191)
- Fix Ollama not calling tools (#151)
- Pass format and options parameters in Ollama request DTO (#153)
- Support Long, Double, List, and data classes as tool arguments for tools created from callable functions (#210)
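
The last item can be illustrated with a plain Kotlin function. This is a hypothetical example of the kind of signature #210 allows as a tool; the function and type names are made up and are not part of Koog:

```kotlin
// Hypothetical sketch of what #210 enables: a tool backed by an ordinary
// Kotlin function whose parameters mix a data class, a List, Long, and Double.
data class City(val name: String, val population: Long)

fun averageTemperature(city: City, readings: List<Double>): Double =
    readings.sum() / readings.size

fun main() {
    // A framework supporting #210 can bind tool-call arguments of these
    // types directly to the function's parameters.
    val result = averageTemperature(City("Oslo", 700_000L), listOf(3.5, 4.5))
    println(result) // 4.0
}
```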
Examples
- Add demo Android app to examples (#132)
- Add a media types example: generating an Instagram post description from images (#195)
Removals
- Remove simpleChatAgent (#127)