Business Software for Rust - Page 8

Top Software that integrates with Rust as of August 2025 - Page 8

  • 1
    Kodezi
    Let Kodezi auto-summarize your code in seconds. Kodezi is Grammarly for programmers. Generate, ask, search, and code anything in your codebase with KodeziChat, your personal AI coding assistant. Kodezi doesn't just fix your code for you; it tells you why it's wrong and how to prevent future bugs. Reduce unnecessary lines of code and syntax to ensure clean results, and optimize your code for peak efficiency. Debug code with detailed explanations. Swap from one framework or language to another in an instant, without losing context. Because comments and explanations are crucial for future maintenance, Kodezi can generate your code documentation for you. Generate code from text, ask a project question, or create an entire function in seconds. Translate code to another language. Use the extension within your own IDE so you never have to open new tabs again.
  • 2
    Polars
    Built around familiar data wrangling habits, Polars exposes a complete Python API, including the full set of features to manipulate DataFrames using an expression language that empowers you to write readable and performant code. Polars is written in Rust and is uncompromising in its choice to provide a feature-complete DataFrame API to the Rust ecosystem. Use it as a DataFrame library or as a query engine backend for your data models.
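    The Rust crate follows the same expression model. A minimal sketch, assuming a recent `polars` crate with the `lazy` feature enabled (method names such as `group_by` differ slightly between releases):

    ```rust
    // Cargo.toml (assumed): polars = { version = "0.41", features = ["lazy"] }
    use polars::prelude::*;

    fn main() -> PolarsResult<()> {
        // Build a small in-memory DataFrame.
        let df = df![
            "city"   => ["Oslo", "Oslo", "Lima"],
            "temp_c" => [10.0, 12.0, 25.0],
        ]?;

        // Describe the query lazily with expressions, then collect the result.
        let out = df
            .lazy()
            .group_by([col("city")])
            .agg([col("temp_c").mean().alias("mean_temp_c")])
            .collect()?;

        println!("{out}");
        Ok(())
    }
    ```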
  • 3
    Wasmer
    Create apps that run everywhere, publish and share them with the community, and deploy to the edge, globally. Serve sandboxed WebAssembly apps anywhere through a single runtime and do in days what others do in months. Shipping a separate binary for each platform and chip is a thing of the past; rise above with lightweight containerized apps that simply run everywhere. Wasmer supports almost every programming language, is truly universal, runs everywhere, and is nearly as fast as native. Packages are no longer limited by their languages, so you can collaborate across stacks, leverage the ecosystem, and contribute your own packages. Get the scalability of serverless and the reusability of the cloud. Deploy to the edge to save your users time and yourself money. Faster, affordable, and indefinitely scalable. All languages are fully containerized and collaborative. Plug in your own backend, compiler, or runner. Run apps at close to native speed and outperform the competition.
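    The runtime can also be embedded directly in a Rust program. A minimal sketch, assuming the `wasmer` crate with its 4.x-style API (exact signatures differ between major versions):

    ```rust
    // Cargo.toml (assumed): wasmer = "4"
    use wasmer::{imports, Instance, Module, Store, Value};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // A tiny WebAssembly module in text format that exports an `add` function.
        let wat = r#"
            (module
              (func $add (param i32 i32) (result i32)
                local.get 0
                local.get 1
                i32.add)
              (export "add" (func $add)))
        "#;

        let mut store = Store::default();
        let module = Module::new(&store, wat)?;                           // compile
        let instance = Instance::new(&mut store, &module, &imports! {})?; // instantiate (no imports)

        let add = instance.exports.get_function("add")?;
        let result = add.call(&mut store, &[Value::I32(2), Value::I32(3)])?;
        println!("2 + 3 = {:?}", result[0]); // Value::I32(5)
        Ok(())
    }
    ```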
  • 4
    Cosine Genie
    Whether the question is high-level or nuanced, Cosine can understand it and provide superhuman-level answers. It is not just an LLM wrapper; it combines multiple heuristics, including static analysis and semantic search. Simply ask Cosine how to add a new feature or modify existing code and it will generate a step-by-step guide. Cosine indexes and understands your codebase on multiple levels, from a graph of relationships between files and functions to a deep semantic understanding of the code, so it can answer any question you have about your codebase. Genie is the best AI software engineer in the world by far, achieving a 30% eval score on the industry-standard SWE-Bench benchmark. Genie can solve bugs, build features, refactor code, and everything in between, either fully autonomously or paired with the user, like working with a colleague, not just a copilot.
  • 5
    Oxide Cloud Computer
    Vertically integrated and scale-ready. Bringing hyper-scaler agility to the mainstream enterprise. Software that empowers developers and operators alike. Launch projects within minutes of powering on. Per-tenant isolation gives you full control of networking, routing, and firewalls through VPC and network virtualization capabilities. Network services scale with your deployment, eliminating traditional bottlenecks. Elastic compute capacity can be provisioned against a single infrastructure pool, with support for the tools developers already use. High-performance, persistent block storage service with configurable capacity and IOPS per volume. Go from rack install to developer availability in a matter of hours, compared to weeks or months today. Takes up two-thirds as much space as traditional on-premises infrastructure. Manage with technologies you already know and use with our Kubernetes and Terraform integrations.
  • 6
    Apache SkyWalking
    Application performance monitoring tool for distributed systems, specially designed for microservices, cloud-native, and container-based (Kubernetes) architectures. More than 100 billion telemetry records can be collected and analyzed by a single SkyWalking cluster. Supports log formatting, metric extraction, and various sampling policies through a high-performance script pipeline. Supports service-centric, deployment-centric, and API-centric alarm rules, and can forward alarms and all telemetry data to third-party systems. Metrics, traces, and logs from mature ecosystems are supported, e.g. Zipkin, OpenTelemetry, Prometheus, Zabbix, and Fluentd.
  • 7
    Gemma (Google)
    Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is inspired by Gemini, and the name reflects the Latin gemma, meaning “precious stone.” Accompanying our model weights, we’re also releasing tools to support developer innovation, foster collaboration, and guide the responsible use of Gemma models. Gemma models share technical and infrastructure components with Gemini, our largest and most capable AI model widely available today. This enables Gemma 2B and 7B to achieve best-in-class performance for their sizes compared to other open models. And Gemma models are capable of running directly on a developer laptop or desktop computer. Notably, Gemma surpasses significantly larger models on key benchmarks while adhering to our rigorous standards for safe and responsible outputs.
  • 8
    Celestia
    Tap into the abundant throughput enabled by data availability sampling (DAS), the first architecture that scales while maintaining verifiability for any user. Anyone can directly verify and contribute to Celestia by running a light node. Launch a blockchain with leading Ethereum rollup frameworks or transform nearly any VM into your sovereign chain. With Celestia underneath, a customizable blockchain becomes as easy to deploy as a smart contract. Access the dynamic scaling unlocked by data availability sampling, where scale increases with the number of users. Create applications using your favorite VM or define your own. Build sovereign rollups, a new type of self-governing blockchain with minimal platform risk.
  • 9
    Ellipsis Labs Phoenix
    Building the liquidity backbone of DeFi. High-throughput blockchains have enabled the creation of new financial primitives. Ellipsis Labs is building Phoenix, a decentralized limit order book on Solana that is fully on-chain, non-custodial, and crankless. A composable liquidity hub is a public good for all of DeFi; developers can build other on-chain applications that either post liquidity to or draw liquidity from the canonical liquidity source. AMMs either rely on unsustainable liquidity incentives or leave retail LPs consistently losing money. Because Solana has high throughput, fast blocks, and low fees, Solana DEXs can support active liquidity provisioning. This enables professional market makers to provide tighter and deeper liquidity while still being profitable and sustainable. Phoenix offers instant settlement; unlike existing order books on Solana, it doesn't require an asynchronous crank to settle trades.
  • 10
    GaiaNet
    GaiaNet is a decentralized computing infrastructure that enables everyone to create, deploy, scale, and monetize their own AI agents that reflect their styles, values, knowledge, and expertise. The API approach allows any agent application in the OpenAI ecosystem, which is 100% of AI agents today, to use GaiaNet as an alternative to OpenAI. Furthermore, while the OpenAI API is backed by a handful of models that give generic responses, each GaiaNet node can be heavily customized with a fine-tuned model supplemented by domain knowledge. The platform comprises a distributed, decentralized network of GaiaNodes; fine-tuned large language models combined with private data; proprietary knowledge bases that individuals or enterprises contribute to improve model performance; and decentralized AI apps that use the API of the distributed GaiaNet infrastructure. It also offers personal AI teaching assistants, ready to enlighten at any place and time.
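    Because a node speaks the OpenAI-compatible chat-completions wire format, pointing an existing client at it is mostly a base-URL change. A minimal Rust sketch, assuming `reqwest` and `serde_json`; the node URL and model name below are placeholders, not real endpoints:

    ```rust
    // Cargo.toml (assumed): reqwest = { version = "0.12", features = ["blocking", "json"] }, serde_json = "1"
    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Hypothetical node URL; a real GaiaNet node publishes its own base URL and model name.
        let base_url = "https://your-node.gaianet.example";

        let body = json!({
            "model": "your-finetuned-model", // placeholder model name
            "messages": [
                { "role": "user", "content": "Summarize what a GaiaNet node is." }
            ]
        });

        // Same request shape as the OpenAI chat completions API, just aimed at the node.
        let resp: serde_json::Value = reqwest::blocking::Client::new()
            .post(format!("{base_url}/v1/chat/completions"))
            .json(&body)
            .send()?
            .json()?;

        println!("{}", resp["choices"][0]["message"]["content"]);
        Ok(())
    }
    ```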
  • 11
    Gemma 2 (Google)
    A family of state-of-the-art, lightweight open models created from the same research and technology used to build the Gemini models. These models incorporate comprehensive security measures and help ensure responsible and reliable AI solutions through curated data sets and rigorous tuning. Gemma models achieve exceptional benchmark results at their 2B, 7B, 9B, and 27B sizes, even outperforming some larger open models. With Keras 3.0, enjoy seamless compatibility with JAX, TensorFlow, and PyTorch, allowing you to effortlessly choose and switch frameworks based on the task. Redesigned to deliver outstanding performance and unmatched efficiency, Gemma 2 is optimized for incredibly fast inference on a variety of hardware. The Gemma family offers different models that are optimized for specific use cases and adapt to your needs. Gemma models are lightweight, decoder-only, text-to-text large language models trained on a huge corpus of text, code, and mathematical content.
  • 12
    Arroyo
    Scale from zero to millions of events per second. Arroyo ships as a single, compact binary. Run locally on MacOS or Linux for development, and deploy to production with Docker or Kubernetes. Arroyo is a new kind of stream processing engine, built from the ground up to make real-time easier than batch. Arroyo was designed from the start so that anyone with SQL experience can build reliable, efficient, and correct streaming pipelines. Data scientists and engineers can build end-to-end real-time applications, models, and dashboards without a separate team of streaming experts. Transform, filter, aggregate, and join data streams by writing SQL, with sub-second results. Your streaming pipelines shouldn't page someone just because Kubernetes decided to reschedule your pods. Arroyo is built to run in modern, elastic cloud environments, from simple container runtimes like Fargate to large, distributed deployments on Kubernetes.
  • 13
    Edera
    Introducing secure-by-design AI and Kubernetes, no matter where you run your infrastructure. Eliminate container escapes and put a security boundary around Kubernetes workloads. Simplify running AI/ML workloads through enhanced GPU device virtualization, driver isolation, and vGPUs. Edera Krata begins a new paradigm of isolation technology, ushering in a new era of security. Edera brings a new era of AI and GPU security and performance while integrating seamlessly with Kubernetes. Each container receives its own Linux kernel, eliminating shared kernel state between containers. That means goodbye to container escapes, costly security tool layering, and long days doomscrolling logs. Run Edera Protect with just a couple of lines of YAML and you're off to the races. It's written in Rust for enhanced memory safety and has no performance impact. A secure-by-design Kubernetes solution that stops attackers in their tracks.
  • 14
    The CodeGround
    The CodeGround is an online integrated development environment that offers a suite of tools for real-time coding practice and collaboration. It supports multiple programming languages, including Rust, GoLang, Node.js, Python, Java, HTML, CSS, and JavaScript. Users can engage in live code sharing, conduct code interviews, and access insightful articles through the Reads section. The platform features an interface similar to Visual Studio Code, complete with autocomplete functionality, JSON differentiation, and a JWT decoder, enhancing the coding experience. The CodeGround is accessible via web browsers and can also be installed as a desktop application on Mac, Windows, and Linux. With The CodeGround, you can code from any device without the hassle of setup. The cloud-based platform provides instant execution, rich tools, and a smooth coding experience, ensuring you have everything you need for efficient development and accurate data handling.
  • 15
    Biome
    Biome is a comprehensive toolchain for web projects, offering high-performance formatting and linting capabilities for languages such as JavaScript, TypeScript, JSX, TSX, JSON, CSS, and GraphQL. Its formatter achieves 97% compatibility with Prettier, enabling rapid code formatting that can handle malformed code in real time within various editors. The linter incorporates over 270 rules from ESLint, TypeScript ESLint, and other sources, providing detailed, contextual diagnostics to assist developers in enhancing code quality and adhering to best practices. Built with Rust, Biome ensures exceptional speed and efficiency, capable of formatting extensive codebases significantly faster than comparable tools. It is designed for seamless integration into development environments, offering a unified solution for code formatting and linting without the need for extensive configuration. Designed to handle codebases of any size. Focus on growing products instead of your tools.
  • 16
    Gemini 2.0 Flash-Lite
    Gemini 2.0 Flash-Lite is Google DeepMind's lighter AI model, designed to offer a cost-effective solution without compromising performance. As the most economical model in the Gemini 2.0 lineup, Flash-Lite is tailored for developers and businesses seeking efficient AI capabilities at a lower cost. It supports multimodal inputs and features a context window of one million tokens, making it suitable for a variety of applications. Flash-Lite is currently available in public preview, allowing users to explore its potential in enhancing their AI-driven projects.
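    A minimal Rust sketch of calling the model over Google's Generative Language REST API, assuming `reqwest`, `serde_json`, an API key in `GEMINI_API_KEY`, and the `gemini-2.0-flash-lite` model ID (check the current documentation for the exact endpoint and model name):

    ```rust
    // Cargo.toml (assumed): reqwest = { version = "0.12", features = ["blocking", "json"] }, serde_json = "1"
    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Assumes the key is exported as GEMINI_API_KEY; model ID below is an assumption.
        let api_key = std::env::var("GEMINI_API_KEY")?;
        let url = format!(
            "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-lite:generateContent?key={api_key}"
        );

        let body = json!({
            "contents": [{ "parts": [{ "text": "Write a haiku about Rust." }] }]
        });

        let resp: serde_json::Value = reqwest::blocking::Client::new()
            .post(url)
            .json(&body)
            .send()?
            .json()?;

        // Generated text typically lives under candidates[0].content.parts[0].text.
        println!("{}", resp["candidates"][0]["content"]["parts"][0]["text"]);
        Ok(())
    }
    ```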
  • 17
    Gemini 2.0 Pro
    Gemini 2.0 Pro is Google DeepMind's most advanced AI model, designed to excel in complex tasks such as coding and intricate problem-solving. Currently in its experimental phase, it features an extensive context window of two million tokens, enabling it to process and analyze vast amounts of information efficiently. A standout feature of Gemini 2.0 Pro is its seamless integration with external tools like Google Search and code execution environments, enhancing its ability to provide accurate and comprehensive responses. This model represents a significant advancement in AI capabilities, offering developers and users a powerful resource for tackling sophisticated challenges.
  • 18
    ERNIE X1
    ERNIE X1 is an advanced conversational AI model developed by Baidu as part of their ERNIE (Enhanced Representation through Knowledge Integration) series. Unlike previous versions, ERNIE X1 is designed to be more efficient in understanding and generating human-like responses. It incorporates cutting-edge machine learning techniques to handle complex queries, making it capable of not only processing text but also generating images and engaging in multimodal communication. ERNIE X1 is often used in natural language processing applications such as chatbots, virtual assistants, and enterprise automation, offering significant improvements in accuracy, contextual understanding, and response quality.
    Starting Price: $0.28 per 1M tokens
  • 19
    JSON Formatter
    JSON Formatter's JSON Editor is a user-friendly tool designed for editing, viewing, and analyzing JSON data. It offers features such as formatting, beautifying, and validating JSON, as well as converting JSON to XML, CSV, and YAML formats. Users can load JSON data via file upload or URL and share edited JSON through generated links. Data is not sent to external servers, enhancing security and performance.
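    For context, the same validate-and-beautify steps the editor performs in the browser can be reproduced programmatically; this is a generic Rust sketch using `serde_json`, not the product's own code:

    ```rust
    // Cargo.toml (assumed): serde_json = "1"
    use serde_json::Value;

    fn main() {
        let raw = r#"{"name":"JSON Formatter","features":["format","validate","convert"]}"#;

        // Validate: parsing fails with a precise line/column if the input is malformed.
        match serde_json::from_str::<Value>(raw) {
            Ok(value) => {
                // Beautify: re-serialize with indentation.
                let pretty = serde_json::to_string_pretty(&value).unwrap();
                println!("{pretty}");
            }
            Err(e) => eprintln!("invalid JSON: {e}"),
        }
    }
    ```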
  • 20
    Gemini 2.5 Flash
    Gemini 2.5 Flash is a powerful, low-latency AI model introduced by Google on Vertex AI, designed for high-volume applications where speed and cost-efficiency are key. It delivers optimized performance for use cases like customer service, virtual assistants, and real-time data processing. With its dynamic reasoning capabilities, Gemini 2.5 Flash automatically adjusts processing time based on query complexity, offering granular control over the balance between speed, accuracy, and cost. It is ideal for businesses needing scalable AI solutions that maintain quality and efficiency.
  • 21
    SDF
    SDF is a developer platform for data that enhances SQL comprehension across organizations, enabling data teams to unlock the full potential of their data. It provides a transformation layer to streamline query writing and management, an analytical database engine for local execution, and an accelerator for improved transformation processes. SDF also offers proactive quality and governance features, including reports, contracts, and impact analysis, to ensure data integrity and compliance. By representing business logic as code, SDF facilitates the classification and management of data types, enhancing the clarity and maintainability of data models. It integrates seamlessly with existing data workflows, supporting various SQL dialects and cloud environments, and is designed to scale with the growing needs of data teams. SDF's open-core architecture, built on Apache DataFusion, allows for customization and extension, fostering a collaborative ecosystem for data development.
  • 22
    DeepSeek-Coder-V2
    DeepSeek-Coder-V2 is an open source code language model designed to excel in programming and mathematical reasoning tasks. It features a Mixture-of-Experts (MoE) architecture with 236 billion total parameters and 21 billion activated parameters per token, enabling efficient processing and high performance. The model was trained on an extensive dataset of 6 trillion tokens, enhancing its capabilities in code generation and mathematical problem-solving. DeepSeek-Coder-V2 supports over 300 programming languages and has demonstrated superior performance on coding and math benchmarks, surpassing other models. It is available in multiple variants, including DeepSeek-Coder-V2-Instruct, optimized for instruction-based tasks; DeepSeek-Coder-V2-Base, suitable for general text generation; and the lightweight DeepSeek-Coder-V2-Lite-Base and DeepSeek-Coder-V2-Lite-Instruct, designed for environments with limited computational resources.
  • 23
    Mistral Code (Mistral AI)
    Mistral Code is an AI-powered coding assistant designed to enhance software engineering productivity in enterprise environments by integrating powerful coding models, in-IDE assistance, local deployment options, and comprehensive enterprise tooling. Built on the open-source Continue project, Mistral Code offers secure, customizable AI coding capabilities while maintaining full control and visibility inside the customer’s IT environment. It supports over 80 programming languages and advanced functionalities such as multi-step refactoring, code search, and chat assistance, enabling developers to complete entire tickets, not just code completions. The platform addresses common enterprise challenges like proprietary repo connectivity, model customization, broad task coverage, and unified service-level agreements (SLAs). Major enterprises such as Abanca, SNCF, and Capgemini have adopted Mistral Code, using hybrid cloud and on-premises deployments.
  • 24
    Gemini 2.5 Flash-Lite
    Gemini 2.5 is Google DeepMind’s latest generation AI model family, designed to deliver advanced reasoning and native multimodality with a long context window. It improves performance and accuracy by reasoning through its thoughts before responding. The model offers different versions tailored for complex coding tasks, fast everyday performance, and cost-efficient high-volume workloads. Gemini 2.5 supports multiple data types including text, images, video, audio, and PDFs, enabling versatile AI applications. It features adaptive thinking budgets and fine-grained control for developers to balance cost and output quality. Available via Google AI Studio and Gemini API, Gemini 2.5 powers next-generation AI experiences.
  • 25
    Grok 4 Heavy
    Grok 4 Heavy is the most powerful AI model offered by xAI, designed as a multi-agent system to deliver cutting-edge reasoning and intelligence. Built on the Colossus supercomputer, it achieves a 50% score on the challenging HLE benchmark, outperforming many competitors. This advanced model supports multimodal inputs including text and images, with plans to add video capabilities. Grok 4 Heavy targets power users such as developers, researchers, and technical enthusiasts who require top-tier AI performance. Access is provided through the premium “SuperGrok Heavy” subscription priced at $300 per month. xAI has enhanced moderation and removed problematic system prompts to ensure responsible and ethical AI use.
  • 26
    GPT-5 pro
    GPT-5 Pro is OpenAI’s most advanced AI model, designed to tackle the most complex and challenging tasks with extended reasoning capabilities. It builds on GPT-5’s unified architecture, using scaled, efficient parallel compute to provide highly comprehensive and accurate responses. GPT-5 Pro achieves state-of-the-art performance on difficult benchmarks like GPQA, excelling in areas such as health, science, math, and coding. It makes significantly fewer errors than earlier models and delivers responses that experts find more relevant and useful. The model automatically balances quick answers and deep thinking, allowing users to get expert-level insights efficiently. GPT-5 Pro is available to Pro subscribers and powers some of the most demanding applications requiring advanced intelligence.
  • 27
    GPT-5 thinking
    GPT-5 Thinking is the deeper reasoning mode within the GPT-5 unified AI system, designed to tackle complex, open-ended problems that require extended cognitive effort. It works alongside the faster GPT-5 model, dynamically engaging when queries demand more detailed analysis and thoughtful responses. This mode significantly reduces hallucinations and improves factual accuracy, producing more reliable answers on challenging topics like science, math, coding, and health. GPT-5 Thinking is also better at recognizing its own limitations, communicating clearly when tasks are impossible or underspecified. It incorporates advanced safety features to minimize harmful outputs and provide nuanced, helpful answers even in ambiguous or sensitive contexts. Available to all users, it helps bring expert-level intelligence to everyday and advanced use cases alike.
  • 28
    CodeSonar (CodeSecure)
    CodeSonar employs a unified dataflow and symbolic execution analysis that examines the computation of the complete application. By not relying on pattern matching or similar approximations, CodeSonar's static analysis engine is extraordinarily deep, finding 3-5 times more defects on average than other static analysis tools. Unlike many software development tools, such as testing tools, compilers, configuration management, etc., SAST tools can be integrated into a team's development process at any time with ease. SAST technologies like CodeSonar simply attach to your existing build environments to add analysis information to your verification process. Like a compiler, CodeSonar does a build of your code using your existing build environment, but instead of creating object code, CodeSonar creates an abstract model of your entire program. From the derived model, CodeSonar’s symbolic execution engine explores program paths, reasoning about program variables and how they relate.
  • 29
    AtomicJar
    Shift testing to the left and find issues earlier, when they are easier and cheaper to fix. Enable developers to do better integration testing, shorten dev cycles, and increase productivity. Shorter and more thorough integration feedback cycles mean more reliable products. Testcontainers Cloud makes it easy for developers to run reliable integration tests, with real dependencies defined in code, from their laptops to their team's CI. Testcontainers is an open-source framework for providing throwaway, lightweight instances of databases, message brokers, web browsers, or just about anything that can run in a Docker container. There is no more need for mocks or complicated environment configurations. Define your test dependencies as code, then simply run your tests; containers will be created and then deleted automatically.
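    Testcontainers also has a community-maintained Rust implementation. A rough sketch of a throwaway PostgreSQL dependency, assuming the `testcontainers` and `testcontainers-modules` crates with the older synchronous `Cli` client (the API has been reorganized across releases, so treat the exact calls as illustrative):

    ```rust
    // Cargo.toml (assumed): testcontainers = "0.15",
    // testcontainers-modules = { version = "0.3", features = ["postgres"] }
    use testcontainers::clients::Cli;
    use testcontainers_modules::postgres::Postgres;

    #[test]
    fn talks_to_a_real_postgres() {
        // Spin up a throwaway PostgreSQL container; it is removed when `node` is dropped.
        let docker = Cli::default();
        let node = docker.run(Postgres::default());

        // The container's port 5432 is mapped to a random host port; ask which one.
        let port = node.get_host_port_ipv4(5432);
        let url = format!("postgres://postgres:postgres@127.0.0.1:{port}/postgres");

        // Point your database client at `url` here and run real queries against it.
        assert!(url.starts_with("postgres://"));
    }
    ```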
  • 30
    Unremot
    Unremot is a go-to place for anyone aspiring to build an AI product. With 120+ pre-built APIs, you can build and launch AI products 2x faster, at a third of the cost. Even some of the most complicated AI product APIs take only minutes to deploy and launch, with minimal code or even no code. Choose the AI API you want to integrate into your product from the 120+ APIs on Unremot, provide your API private key to authenticate Unremot to access it, and use your unique Unremot URL to connect the product API; the whole process takes only minutes instead of days or weeks.