Business Software for CUDA

Top Software that integrates with CUDA as of July 2025


Compare business software, products, and services to find the best solution for your business or organization. Use the filters on the left to drill down by category, pricing, features, organization size, organization type, region, user reviews, integrations, and more. View and sort the products and solutions that match your needs in the results below.

  • 1
    Cody

    Sourcegraph

Cody, Sourcegraph’s AI code assistant, goes beyond individual developer productivity, helping enterprises achieve consistency and quality at scale with AI. Unlike traditional coding assistants, Cody understands the entire codebase, enabling deeper contextual awareness for smarter autocompletions, refactoring, and AI-driven code suggestions. It integrates with IDEs like VS Code, Visual Studio, Eclipse, and JetBrains, providing inline editing and chat without disrupting workflows. Cody also connects with tools like Notion, Linear, and Prometheus to enhance development context. Powered by advanced LLMs such as Claude Sonnet 4 and GPT-4o, it optimizes speed and performance for enterprise needs, and the latest AI models are added regularly. Developers report significant efficiency gains, with some saving up to six hours per week and doubling their coding speed.
    Starting Price: $59
  • 2
    Dataoorts GPU Cloud
Dataoorts is a cutting-edge GPU cloud platform designed to meet the demands of the modern computational landscape. Launched in August 2024 after extensive beta testing, it provides GPU virtualization technology that gives researchers, developers, and businesses flexibility, scalability, and performance. At the core of Dataoorts is its proprietary Dynamic Distributed Resource Allocation (DDRA) technology, which virtualizes GPU resources in real time to keep performance optimal across diverse workloads. Whether you're training complex machine learning models, running high-performance simulations, or processing large datasets, Dataoorts delivers that computational power efficiently.
    Starting Price: $0.20/hour
  • 3
    MATLAB

    The MathWorks

    MATLAB® combines a desktop environment tuned for iterative analysis and design processes with a programming language that expresses matrix and array mathematics directly. It includes the Live Editor for creating scripts that combine code, output, and formatted text in an executable notebook. MATLAB toolboxes are professionally developed, rigorously tested, and fully documented. MATLAB apps let you see how different algorithms work with your data. Iterate until you’ve got the results you want, then automatically generate a MATLAB program to reproduce or automate your work. Scale your analyses to run on clusters, GPUs, and clouds with only minor code changes. There’s no need to rewrite your code or learn big data programming and out-of-memory techniques. Automatically convert MATLAB algorithms to C/C++, HDL, and CUDA code to run on your embedded processor or FPGA/ASIC. MATLAB works with Simulink to support Model-Based Design.
  • 4
    Python

The core of extensible programming is defining functions. Python allows mandatory and optional arguments, keyword arguments, and even arbitrary argument lists (a minimal sketch of these argument styles appears after this listing). Whether you're a first-time programmer or experienced with other languages, Python is easy to learn and use, and the official documentation is a useful first step toward writing your own programs. The community hosts conferences and meetups to collaborate on code, and the mailing lists will keep you in touch. The Python Package Index (PyPI) hosts thousands of third-party modules for Python. Both Python's standard library and the community-contributed modules allow for endless possibilities.
    Starting Price: Free
  • 5
    Fortran

    Fortran has been designed from the ground up for computationally intensive applications in science and engineering. Mature and battle-tested compilers and libraries allow you to write code that runs close to the metal, fast. Fortran is statically and strongly typed, which allows the compiler to catch many programming errors early on for you. This also allows the compiler to generate efficient binary code. Fortran is a relatively small language that is surprisingly easy to learn and use. Expressing most mathematical and arithmetic operations over large arrays is as simple as writing them as equations on a whiteboard. Fortran is a natively parallel programming language with intuitive array-like syntax to communicate data between CPUs. You can run almost the same code on a single CPU, on a shared-memory multicore system, or on a distributed-memory HPC or cloud-based system.
    Starting Price: Free
  • 6
    Copernicus Computing

Render more with us and get additional discounts. Our experienced team of 3D professionals will make sure that your rendering experience is flawless. Our goal is to give you the best render farm service at a very low price, just one click away. You’ll have a hassle-free experience with our automatic rendering process and a convenient dashboard to easily control and manage your projects. Our 24/7 support team, composed of rendering professionals, will be there for you at any time of the day, every day. If you want to know whether your 3D app is supported, feel free to contact us, and we’ll be glad to take your rendering project. Easy plug-and-play integration with your 3D software. Support for all major 3D software and plugins. Choose from CPU and GPU nodes. Volume discounts of up to 50%, and easy cost estimation before rendering.
    Starting Price: $0.04 per GHz per hour
  • 7
    NVIDIA TensorRT
NVIDIA TensorRT is an ecosystem of APIs for high-performance deep learning inference, encompassing an inference runtime and model optimizations that deliver low latency and high throughput for production applications. Built on the CUDA parallel programming model, TensorRT optimizes neural network models trained on all major frameworks, calibrating them for lower precision with high accuracy, and deploying them across hyperscale data centers, workstations, laptops, and edge devices. It employs techniques such as quantization, layer and tensor fusion, and kernel tuning on all types of NVIDIA GPUs, from edge devices to PCs to data centers (a hedged Python sketch of a typical engine-build workflow appears after this listing). The ecosystem includes TensorRT-LLM, an open source library that accelerates and optimizes inference performance of recent large language models on the NVIDIA AI platform, enabling developers to experiment with new LLMs for high performance and quick customization through a simplified Python API.
    Starting Price: Free
  • 8
    Pruna AI

    Pruna uses generative AI to enable companies to produce professional-grade visual content quickly and affordably. By eliminating the traditional need for studios and manual editing, it empowers brands to create consistent, customized images for advertising, product displays, and digital campaigns with minimal effort.
    Starting Price: $0.40 per runtime hour
  • 9
    RightNow AI

RightNow AI is an AI-powered platform designed to automatically profile CUDA kernels, detect bottlenecks, and optimize them for peak performance. It supports all major NVIDIA architectures, including Ampere, Hopper, Ada Lovelace, and Blackwell GPUs. It enables users to generate optimized CUDA kernels instantly using natural language prompts, eliminating the need for deep GPU expertise (a toy example of the kind of kernel such tools analyze appears after this listing). With serverless GPU profiling, users can identify performance issues without relying on local hardware. RightNow AI replaces complex legacy optimization tools with a streamlined solution, offering features such as inference-time scaling and performance benchmarking. Trusted by leading AI and HPC teams worldwide, including NVIDIA, Adobe, and Samsung, RightNow AI has demonstrated performance improvements ranging from 2x to 20x over standard implementations.
    Starting Price: $20 per month
  • 10
    JarvisLabs.ai

We have set up all the infrastructure, computing, and software (CUDA, frameworks) required for you to train and deploy your favorite deep-learning models. You can spin up GPU/CPU-powered instances directly from your browser or automate this through our Python API.
    Starting Price: $1,440 per month
  • 11
    Brev.dev

    NVIDIA

Find, provision, and configure AI-ready cloud instances for development, training, and deployment. Brev.dev automatically installs CUDA and Python, loads the model, and lets you SSH in. Use Brev.dev to find a GPU and get it configured to fine-tune or train your model, with a single interface across AWS, GCP, and Lambda GPU cloud. Use credits when you have them, and pick an instance based on cost and availability. A CLI automatically updates your SSH config and ensures it's done securely. Build faster with a better dev environment: Brev connects to cloud providers to find you a GPU at the best price, configures it, and wraps SSH to connect your code editor to the remote machine. Change your instance, add or remove a GPU, add GB to your hard drive, etc. Set up your environment to make sure your code always runs, and make it easy to share or clone. You can create your own instance from scratch or use a template; the console gives you a couple of template options.
    Starting Price: $0.04 per hour
  • 12
    NodeShift

We help you slash cloud costs so you can focus on building amazing solutions. Spin the globe and point at the map; NodeShift is available there too. Regardless of where you deploy, you benefit from increased privacy, and your data stays up and running even if an entire country’s electricity grid goes down. It is the ideal way for organizations young and old to ease into the distributed and affordable cloud at their own pace, with the most affordable compute and GPU virtual machines at scale. The NodeShift platform aggregates multiple independent data centers across the world and a wide range of existing decentralized solutions, such as Akash, Filecoin, and ThreeFold, under one hood, with an emphasis on affordable prices and a friendly UX. Payment for its cloud services is simple and straightforward, giving every business access to the same interfaces as the traditional cloud but with several key added benefits of decentralization, such as affordability, privacy, and resilience.
    Starting Price: $19.98 per month
  • 13
    AWS Marketplace
    AWS Marketplace is a curated digital catalog that enables customers to discover, purchase, deploy, and manage third-party software, data products, and services directly within the AWS ecosystem. It provides access to thousands of listings across categories like security, machine learning, business applications, and DevOps tools. With flexible pricing models such as pay-as-you-go, annual subscriptions, and free trials, AWS Marketplace simplifies procurement and billing by integrating costs into a single AWS invoice. It also supports rapid deployment with pre-configured software that can be launched on AWS infrastructure. This streamlined approach allows businesses to accelerate innovation, reduce time-to-market, and maintain better control over software usage and costs.
  • 14
    NeevCloud

NeevCloud delivers cutting-edge GPU cloud solutions powered by NVIDIA GPUs like the H200, H100, GB200 NVL72, and many more, offering unmatched performance for AI, HPC, and data-intensive workloads. Scale dynamically with flexible pricing and energy-efficient GPUs that reduce costs while maximizing output. Ideal for AI model training, scientific research, media production, and real-time analytics, NeevCloud ensures seamless integration and global accessibility. Experience unparalleled speed, scalability, and sustainability with NeevCloud GPU cloud solutions.
    Starting Price: $1.69/GPU/hour
  • 15
    NVIDIA Jetson
    NVIDIA's Jetson platform is a leading solution for embedded AI computing, utilized by professional developers to create breakthrough AI products across various industries, as well as by students and enthusiasts for hands-on AI learning and innovative projects. The platform comprises small, power-efficient production modules and developer kits, offering a comprehensive AI software stack for high-performance acceleration. This enables the deployment of generative AI at the edge, supporting applications like NVIDIA Metropolis and the Isaac platform. The Jetson family includes a range of modules tailored to different performance and power efficiency needs, such as the Jetson Nano, Jetson TX2, Jetson Xavier NX, and the Jetson Orin series. Each module is designed to meet specific AI computing requirements, from entry-level projects to advanced robotics and industrial applications.
  • 16
    NVIDIA Isaac
    NVIDIA Isaac is an AI robot development platform that comprises NVIDIA CUDA-accelerated libraries, application frameworks, and AI models to expedite the creation of AI robots, including autonomous mobile robots, robotic arms, and humanoids. The platform features NVIDIA Isaac ROS, a collection of CUDA-accelerated computing packages and AI models built on the open source ROS 2 framework, designed to streamline the development of advanced AI robotics applications. Isaac Manipulator, built on Isaac ROS, enables the development of AI-powered robotic arms that can seamlessly perceive, understand, and interact with their environments. Isaac Perceptor facilitates the rapid development of advanced AMRs capable of operating in unstructured environments like warehouses or factories. For humanoid robotics, NVIDIA Isaac GR00T serves as a research initiative and development platform for general-purpose robot foundation models and data pipelines.
  • 17
    Skyportal

Skyportal is a GPU cloud platform built for AI engineers, offering 50% lower cloud costs and 100% GPU performance. It provides a cost-effective GPU infrastructure for machine learning workloads, eliminating unpredictable cloud bills and hidden fees. Skyportal has seamlessly integrated Kubernetes, Slurm, PyTorch, TensorFlow, CUDA, cuDNN, and NVIDIA drivers, fully optimized for Ubuntu 22.04 LTS and 24.04 LTS, allowing users to focus on innovating and scaling with ease. It offers high-performance NVIDIA H100 and H200 GPUs optimized specifically for ML/AI workloads, with instant scalability and 24/7 expert support from a team that understands ML workflows and optimization. Skyportal's transparent pricing and zero egress fees provide predictable costs for AI infrastructure. Users can share their AI/ML project requirements and goals, deploy models within the infrastructure using familiar tools and frameworks (a generic PyTorch sanity check for such an instance appears after this listing), and scale their infrastructure as needed.
    Starting Price: $2.40 per hour
  • 18
    Coverity Static Analysis
    Coverity Static Analysis is a comprehensive code scanning solution that enables developers and security teams to deliver high-quality software in compliance with security, functional safety, and industry standards. It effectively uncovers complex defects across extensive codebases, identifying and resolving code quality and security issues that span multiple files and libraries. Coverity supports compliance with a wide range of standards, including OWASP Top 10, CWE Top 25, MISRA, and CERT C/C++/Java, providing built-in reports to track and prioritize issues. With the Code Sight™ IDE plugin, developers receive real-time results, including CWE information and remediation guidance, directly within their development environment, facilitating the integration of security into the software development life cycle without compromising developer velocity.
  • 19
    C++

C++ is a simple and clear language in its expressions. It is true that a piece of code written in C++ may look a bit more cryptic to a newcomer than some other languages, due to the intensive use of special characters ({}[]*&!|...), but once one knows the meaning of these characters it can be even more schematic and clear than languages that rely more on English words. Also, the simplified input/output interface of C++ compared to C, and the incorporation of the standard template library into the language, make communicating and manipulating data in a C++ program as simple as in other languages, without losing the power it offers. C++ also supports object-oriented programming, a model that treats each component as an object with its own properties and methods, replacing or complementing the structured programming paradigm, where the focus was on procedures and parameters.
    Starting Price: Free
  • 20
    Vast.ai

Vast.ai is the market leader in low-cost cloud GPU rental. Use one simple interface to save 5-6X on GPU compute. Use on-demand rentals for convenience and consistent pricing, or save 50% or more with interruptible instances using spot auction-based pricing. Vast has an array of providers that offer different levels of security, from hobbyists up to Tier-4 data centers, and Vast.ai helps you find the best pricing for the level of security and reliability you need. Use the command line interface to search the entire marketplace for offers with scriptable filters and sort options, launch instances quickly right from the CLI, and easily automate your deployment. With interruptible instances and auction pricing, the highest-bidding instances run while conflicting instances are stopped.
    Starting Price: $0.20 per hour
  • 21
    Azure Marketplace
    Azure Marketplace is a comprehensive online store that provides access to thousands of certified, ready-to-use software applications, services, and solutions from Microsoft and third-party vendors. It enables businesses to discover, purchase, and deploy software directly within the Azure cloud environment. The marketplace offers a wide range of products, including virtual machine images, AI and machine learning models, developer tools, security solutions, and industry-specific applications. With flexible pricing options like pay-as-you-go, free trials, and subscription models, Azure Marketplace simplifies the procurement process and centralizes billing through a single Azure invoice. It supports seamless integration with Azure services, enabling organizations to enhance their cloud infrastructure, streamline workflows, and accelerate digital transformation initiatives.
  • 22
    Unicorn Render

    Unicorn Render is a professional rendering software that enables users to produce stunning realistic pictures and achieve high-end rendering levels without any prior skills. It offers a user-friendly interface designed to provide everything needed to obtain amazing results with minimal controls. Available as a standalone application or as a plugin, Unicorn Render integrates advanced AI technology and professional visualization tools. The software supports GPU+CPU acceleration through deep learning photorealistic rendering technology and NVIDIA CUDA technology, allowing joint support for CUDA GPUs and multicore CPUs. It features real-time progressive physics illumination, a Metropolis Light Transport sampler (MLT), a caustic sampler, and native NVIDIA MDL material support. Unicorn Render's WYSIWYG editing mode ensures that 100% of editing can be done in final image quality, eliminating surprises in the production of the final image.
  • 23
    Clore.ai

Clore.ai is a decentralized platform that revolutionizes GPU leasing by connecting server owners with renters through a peer-to-peer marketplace. It offers flexible, cost-effective access to high-performance GPUs for tasks such as AI development, scientific research, and cryptocurrency mining. Users can choose between on-demand leasing, which ensures uninterrupted computing power, and spot leasing, which allows for potential interruptions at a lower cost. It utilizes Clore Coin (CLORE), an L1 Proof of Work cryptocurrency, to facilitate transactions and reward participants, with 40% of block rewards directed to GPU hosts. This structure enables hosts to earn additional income beyond rental fees, enhancing the platform's appeal. Clore.ai's Proof of Holding (PoH) system incentivizes users to hold CLORE coins, offering benefits like reduced fees and increased earnings. It supports a wide range of applications, including AI model training, scientific simulations, etc.
  • 24
    HunyuanCustom
    HunyuanCustom is a multi-modal customized video generation framework that emphasizes subject consistency while supporting image, audio, video, and text conditions. Built upon HunyuanVideo, it introduces a text-image fusion module based on LLaVA for enhanced multi-modal understanding, along with an image ID enhancement module that leverages temporal concatenation to reinforce identity features across frames. To enable audio- and video-conditioned generation, it further proposes modality-specific condition injection mechanisms, an AudioNet module that achieves hierarchical alignment via spatial cross-attention, and a video-driven injection module that integrates latent-compressed conditional video through a patchify-based feature-alignment network. Extensive experiments on single- and multi-subject scenarios demonstrate that HunyuanCustom significantly outperforms state-of-the-art open and closed source methods in terms of ID consistency, realism, and text-video alignment.
  • 25
    Amazon EC2 G4 Instances
Amazon EC2 G4 instances are optimized for machine learning inference and graphics-intensive applications, offering a choice between NVIDIA T4 GPUs (G4dn) and AMD Radeon Pro V520 GPUs (G4ad). G4dn instances combine NVIDIA T4 GPUs with custom Intel Cascade Lake CPUs, providing a balance of compute, memory, and networking resources. These instances are ideal for deploying machine learning models, video transcoding, game streaming, and graphics rendering. G4ad instances, featuring AMD Radeon Pro V520 GPUs and 2nd-generation AMD EPYC processors, deliver cost-effective solutions for graphics workloads. Both G4dn and G4ad instances support Amazon Elastic Inference, allowing users to attach low-cost GPU-powered inference acceleration to Amazon EC2 and reduce deep learning inference costs. They are available in various sizes to accommodate different performance needs and are integrated with AWS services such as Amazon SageMaker, Amazon ECS, and Amazon EKS.
  • 26
    NVIDIA Magnum IO
NVIDIA Magnum IO is the architecture for parallel, intelligent data center I/O. It maximizes storage, network, and multi-node, multi-GPU communications for the world’s most important applications, spanning large language models, recommender systems, imaging, simulation, and scientific research. Magnum IO utilizes storage I/O, network I/O, in-network compute, and I/O management to simplify and speed up data movement, access, and management for multi-GPU, multi-node systems. It supports NVIDIA CUDA-X libraries and makes the best use of a range of NVIDIA GPU and networking hardware topologies to achieve optimal throughput and low latency. In multi-GPU, multi-node systems, slow single-thread CPU performance sits in the critical path of data access from local or remote storage devices. With storage I/O acceleration, the GPU bypasses the CPU and system memory and accesses remote storage via 8x 200 Gb/s NICs, achieving up to 1.6 TB/s of raw storage bandwidth.
  • 27
    VMware Private AI Foundation
    VMware Private AI Foundation is a joint, on‑premises generative AI platform built on VMware Cloud Foundation (VCF) that enables enterprises to run retrieval‑augmented generation workflows, fine‑tune and customize large language models, and perform inference in their own data centers, addressing privacy, choice, cost, performance, and compliance requirements. It integrates the Private AI Package (including vector databases, deep learning VMs, data indexing and retrieval services, and AI agent‑builder tools) with NVIDIA AI Enterprise (comprising NVIDIA microservices like NIM, NVIDIA’s own LLMs, and third‑party/open source models from places like Hugging Face). It supports full GPU virtualization, monitoring, live migration, and efficient resource pooling on NVIDIA‑certified HGX servers with NVLink/NVSwitch acceleration. Deployable via GUI, CLI, and API, it offers unified management through self‑service provisioning, model store governance, and more.
  • 28
    C

C is a programming language created in 1972 that remains very important and widely used today. C is a general-purpose, imperative, procedural language that can be used to develop a wide variety of software and applications, including operating systems, software applications, code compilers, databases, and more.
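
Example code sketches

The Python entry above mentions mandatory and optional arguments, keyword arguments, and arbitrary argument lists. The snippet below is a minimal, generic sketch of those argument styles; the deploy function and its parameters are made up for illustration and are not tied to any product listed here.

    def deploy(model, *runs, device="cuda", **options):
        # 'model' is mandatory; *runs collects any extra positional arguments;
        # 'device' is optional (keyword-only) with a default value; **options
        # collects arbitrary keyword arguments.
        print(model, runs, device, options)

    deploy("resnet50")                            # mandatory argument only
    deploy("resnet50", 1, 2, 3, device="cpu")     # arbitrary positionals plus a keyword
    deploy("resnet50", batch_size=32, fp16=True)  # arbitrary keyword arguments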
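
The NVIDIA TensorRT entry describes optimizing trained models into low-latency inference engines. The sketch below outlines a typical engine-build flow using the TensorRT 8.x-style Python API; it assumes an ONNX model file named model.onnx (a placeholder) and FP16-capable hardware, and it is an illustrative outline rather than an official recipe.

    import tensorrt as trt  # assumes the TensorRT Python bindings are installed

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    # Parse a trained model that was exported to ONNX.
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # allow reduced precision where supported

    # Build and save a serialized engine ("plan") for deployment.
    engine = builder.build_serialized_network(network, config)
    with open("model.plan", "wb") as f:
        f.write(engine)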
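
The RightNow AI entry is about profiling and optimizing CUDA kernels. For readers unfamiliar with what a CUDA kernel looks like, below is a toy SAXPY kernel written with Numba's CUDA support; Numba is used here purely for illustration and is not part of RightNow AI, and the kernel is deliberately unoptimized.

    import numpy as np
    from numba import cuda  # assumes Numba and a CUDA-capable GPU are available

    @cuda.jit
    def saxpy(a, x, y, out):
        # One thread per element: out[i] = a * x[i] + y[i]
        i = cuda.grid(1)
        if i < x.size:
            out[i] = a * x[i] + y[i]

    n = 1 << 20
    x = cuda.to_device(np.random.rand(n).astype(np.float32))
    y = cuda.to_device(np.random.rand(n).astype(np.float32))
    out = cuda.device_array(n, dtype=np.float32)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)
    print(out.copy_to_host()[:5])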
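
Several GPU cloud entries (Skyportal, JarvisLabs.ai, Brev.dev) note that CUDA, cuDNN, and frameworks such as PyTorch come preinstalled on their instances. The snippet below is a generic PyTorch sanity check for such an instance, selecting the GPU when CUDA is available; it is provider-agnostic and not tied to any vendor's API.

    import torch

    # Prefer the GPU when the CUDA runtime and a device are available.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    if device.type == "cuda":
        print("Using GPU:", torch.cuda.get_device_name(0))
    else:
        print("CUDA not available, falling back to CPU")

    # A tiny model and batch, just to confirm tensors run on the chosen device.
    model = torch.nn.Linear(1024, 10).to(device)
    x = torch.randn(8, 1024, device=device)
    print(model(x).shape)  # torch.Size([8, 10])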