9 Integrations with Kimi K2.5
Below is a list of software that integrates with Kimi K2.5. Compare the best Kimi K2.5 integrations by features, ratings, user reviews, and pricing. Here are the current Kimi K2.5 integrations in 2026:
1
AiAssistWorks
PT Visi Cerdas Digital
AiAssistWorks is the smartest way to use AI in Google Sheets™, Docs™, and Slides™. In Sheets™, just type a simple instruction — and Smart Command uses AI to do the task for you. Instantly generate product descriptions, create formulas, build charts and pivot tables, format data, create tables, validate entries, and more. No formulas. No scripts. No copy-paste. In Docs™, generate, rewrite, translate, create images, and summarize content — all directly inside your document. In Slides™, generate entire presentations or create AI-powered images in just a few clicks. Powered by 100+ AI models including GPT, Claude, Gemini, Llama, Groq, and more — giving you unmatched flexibility.
✅ Free Forever – 100 executions/month with your own API key
✅ Unlimited usage with a paid plan (API key required)
✅ No formulas needed – Fill 1,000+ rows with AI
✅ Automate SEO content, product listings, ad copy, and data labeling in Sheets™, Docs™, and Slides™.
Starting Price: $5/month
2
Kimi
Moonshot AI
Kimi is an intelligent assistant with a large "memory": it can read a 200,000-word novel in one go and surf the Internet. Kimi can understand and process long documents, helping you quickly summarize analysis reports, financial reports, and more, saving time on reading and organizing. When preparing for exams or researching new fields, Kimi can help you understand and summarize large volumes of textbooks or professional papers. If you work in programming or technology, Kimi can help you reproduce code or propose technical solutions based on the code or pseudocode in a paper. Kimi is particularly strong in Chinese and can handle multi-language documents, helping you communicate and understand more efficiently in international work. Kimi Chat can also role-play your favorite game characters, hold entertaining conversations with you, and provide entertainment and relaxation.
Starting Price: Free
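Beyond the chat interface, Kimi can be called programmatically. A minimal sketch, assuming Moonshot AI's OpenAI-compatible API; the base URL and model name below are assumptions for illustration, so check Moonshot AI's documentation for current values:

```python
# Minimal sketch of calling Kimi via an OpenAI-compatible client.
# The base URL and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_MOONSHOT_API_KEY",        # placeholder key
    base_url="https://api.moonshot.cn/v1",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="moonshot-v1-128k",               # assumed long-context model ID
    messages=[{"role": "user", "content": "Summarize the main idea of this report in three sentences."}],
)
print(response.choices[0].message.content)
```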
3
NVIDIA TensorRT
NVIDIA
NVIDIA TensorRT is an ecosystem of APIs for high-performance deep learning inference, encompassing an inference runtime and model optimizations that deliver low latency and high throughput for production applications. Built on the CUDA parallel programming model, TensorRT optimizes neural network models trained on all major frameworks, calibrating them for lower precision with high accuracy, and deploying them across hyperscale data centers, workstations, laptops, and edge devices. It employs techniques such as quantization, layer and tensor fusion, and kernel tuning on all types of NVIDIA GPUs, from edge devices to PCs to data centers. The ecosystem includes TensorRT-LLM, an open source library that accelerates and optimizes inference performance of recent large language models on the NVIDIA AI platform, enabling developers to experiment with new LLMs for high performance and quick customization through a simplified Python API.
Starting Price: Free
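As a rough illustration of the simplified Python API mentioned above, a minimal sketch using TensorRT-LLM's high-level LLM interface might look like the following; it assumes tensorrt_llm is installed with a supported NVIDIA GPU, and the Hugging Face checkpoint name is illustrative only:

```python
# Minimal sketch of TensorRT-LLM's high-level Python LLM API.
# Assumes tensorrt_llm is installed and an NVIDIA GPU is available;
# the checkpoint below is an illustrative example, not a recommendation.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # builds or loads an optimized engine
sampling = SamplingParams(temperature=0.8, top_p=0.95)

outputs = llm.generate(["What does an inference runtime do?"], sampling)
for output in outputs:
    print(output.outputs[0].text)
```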
4
SiliconFlow
SiliconFlow
SiliconFlow is a high-performance, developer-focused AI infrastructure platform offering a unified and scalable solution for running, fine-tuning, and deploying both language and multimodal models. It provides fast, reliable inference across open source and commercial models, with low latency and high throughput, and flexible options such as serverless endpoints, dedicated compute, or private cloud deployments. Platform capabilities include one-stop inference, fine-tuning pipelines, and reserved GPU access, all delivered via an OpenAI-compatible API and complete with built-in observability, monitoring, and cost-efficient smart scaling. For diffusion-based tasks, SiliconFlow offers the open source OneDiff acceleration library, while its BizyAir runtime supports scalable multimodal workloads. Designed for enterprise-grade stability, it includes features like BYOC (Bring Your Own Cloud), robust security, and real-time metrics.
Starting Price: $0.04 per image
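Because the platform exposes an OpenAI-compatible API, a hosted model can be called with the standard OpenAI client. A minimal sketch, where the base URL and model identifier are assumptions for illustration and should be replaced with the values from your SiliconFlow account:

```python
# Minimal sketch of calling a SiliconFlow-hosted model through its
# OpenAI-compatible API. Base URL and model ID are assumed placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_SILICONFLOW_API_KEY",        # placeholder key
    base_url="https://api.siliconflow.cn/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",       # illustrative model identifier
    messages=[{"role": "user", "content": "Explain what a serverless inference endpoint is."}],
)
print(response.choices[0].message.content)
```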
5
EaseMate AI
EaseMate AI
EaseMate AI is an all-in-one assistant platform built for study, work, and creative output, integrating multiple advanced large language models (including GPT, Gemini, DeepSeek, Claude, and Meta Llama) to assist users in a variety of tasks. Core features include AI Chat tools for answering questions, translating files, writing documents, and summarizing uploaded content. There is a strong PDF capability: users can chat with PDFs, ask questions about their contents, get summaries, and use OCR to extract text from PDF images and screenshots. For study, it offers solvers for math, physics, and chemistry problems, plus quiz and flashcard generation, video summarization (including YouTube content), mind-map creation, and tools for essay generation, paraphrasing, grammar checking, and AI text detection. The creative side includes AI image filters, stylized photo transformations (cartoon, Ghibli, watercolour, etc.), image-to-video and video-to-video conversion, story generation, and more.
Starting Price: $8.90 per month
6
Brokk
Brokk
Brokk is an AI-native code assistant built to handle large, complex codebases by giving language models compiler-grade understanding of code structure, semantics, and dependencies. It enables context management by selectively loading summaries, diffs, or full files into a workspace so that the AI sees just the relevant portions of a million-line codebase rather than everything. Brokk supports actions such as Quick Context, which suggests files to include based on embeddings and structural relevance; Deep Scan, which uses more powerful models to recommend which files to edit or summarize further; and Agentic Search, allowing multi-step exploration of symbols, call graphs, or usages across the project. The architecture is grounded in static analysis via Joern (offering type inference beyond simple ASTs) and uses JLama for fast embedding inference to guide context changes. Brokk is offered as a standalone Java application (not an IDE plugin) to let users supervise AI workflows clearly.
Starting Price: $20 per month
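To make the idea of suggesting files "based on embeddings" concrete, here is an illustrative sketch of embedding-similarity ranking in general terms. This is not Brokk's implementation; the file vectors are assumed to come from whatever embedding model you have available:

```python
# Illustrative sketch of embedding-based context selection: rank candidate
# files by cosine similarity between an instruction embedding and per-file
# embeddings. Not Brokk's actual code; vectors come from any embedding model.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_files(instruction_vec, file_vecs, top_k=5):
    """Return the top_k (path, score) pairs most similar to the instruction."""
    scored = [(path, cosine(instruction_vec, vec)) for path, vec in file_vecs.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]
```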
7
Nebius Token Factory
Nebius
Nebius Token Factory is a scalable AI inference platform designed to run open source and custom AI models in production without manual infrastructure management. It offers enterprise-ready inference endpoints with predictable performance, autoscaling throughput, and sub-second latency — even at very high request volumes. It delivers 99.9% uptime and supports unlimited or tailored traffic profiles based on workload needs, simplifying the transition from experimentation to global deployment. Nebius Token Factory supports a broad set of open source models such as Llama, Qwen, DeepSeek, GPT-OSS, Flux, and many others, and lets teams host and fine-tune models through an API or dashboard. Users can upload LoRA adapters or full fine-tuned variants directly, with the same enterprise performance guarantees applied to custom models.
Starting Price: $0.02
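As a rough illustration of using such an inference endpoint over plain HTTP, the sketch below assumes a chat-completions-style endpoint; the URL and model ID are assumptions for illustration, so use the endpoint and model names shown in your Nebius console:

```python
# Minimal sketch of calling a hosted inference endpoint with plain HTTP.
# URL and model ID are assumed placeholders, not confirmed values.
import requests

response = requests.post(
    "https://api.studio.nebius.com/v1/chat/completions",  # assumed endpoint
    headers={"Authorization": "Bearer YOUR_NEBIUS_API_KEY"},
    json={
        "model": "meta-llama/Llama-3.3-70B-Instruct",      # illustrative open source model
        "messages": [{"role": "user", "content": "Hello from an autoscaled endpoint."}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```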
8
Okara
Okara
Okara is a privacy-first AI workspace and private chat platform that lets professionals interact with 20+ powerful open source AI language and image models in one unified environment, without losing context as they switch between models, conduct research, generate content, or analyze documents. All conversations, uploads (PDF, DOCX, spreadsheets, images), and workspace memory are encrypted at rest, processed on privately hosted open source models, and never used for AI training or shared with third parties, giving users full data control with client-side key generation and true deletion. Okara combines secure, encrypted AI chat with integrated real-time web, Reddit, X/Twitter, and YouTube search tools, unified memory across models, and image generation, letting users weave live information and visuals into workflows while protecting sensitive or confidential data. It also supports shared team workspaces, enabling collaborative AI threads and shared context for groups such as startups.
Starting Price: $20 per month
9
OpenCode
Anomaly Innovations
OpenCode is the AI coding agent purpose-built for the terminal. It delivers a responsive, themeable terminal UI that feels native while streamlining your workflow. With LSP auto-loading, it ensures the right language servers are always available for accurate, context-aware coding support. Developers can spin up multiple AI agents in parallel sessions on the same project, maximizing productivity. Shareable links make it easy to reference, debug, or collaborate across sessions. Supporting Claude Pro and 75+ LLM providers via Models.dev, OpenCode gives you full freedom to choose your coding companion.
Starting Price: Free