Best AI Models for NVIDIA DGX Cloud Serverless Inference

Compare the Top AI Models that integrate with NVIDIA DGX Cloud Serverless Inference as of October 2025

This is a list of AI Models that integrate with NVIDIA DGX Cloud Serverless Inference. Use the filters on the left to narrow the results to products that integrate with NVIDIA DGX Cloud Serverless Inference, and view the matching products in the table below.

What are AI Models for NVIDIA DGX Cloud Serverless Inference?

AI models are systems designed to simulate aspects of human intelligence by learning from data to perform complex tasks. They include specialized types such as Large Language Models (LLMs) for text generation, image models for visual recognition and editing, and video models for processing and analyzing dynamic content. These models power applications such as chatbots, facial recognition, video summarization, and personalized recommendations. Their capabilities depend on advanced algorithms, extensive training datasets, and substantial computational resources. AI models are transforming industries by automating processes, enhancing decision-making, and enabling creative innovations. Compare and read user reviews of the best AI Models for NVIDIA DGX Cloud Serverless Inference currently available using the table below. This list is updated regularly.

  • 1
    Llama

    Meta

    Llama (Large Language Model Meta AI) is a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as Llama enable researchers who don't have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field. Training smaller foundation models like Llama is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others' work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them well suited to fine-tuning for a variety of tasks. Meta makes Llama available in several sizes (7B, 13B, 33B, and 65B parameters) and also shares a Llama model card that details how the model was built in keeping with Meta's approach to Responsible AI practices.
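To illustrate how a hosted model such as Llama is typically consumed from a serverless inference service, below is a minimal request sketch. It assumes an OpenAI-compatible chat-completions HTTP API; the base URL, model identifier, and INFERENCE_API_KEY environment variable are placeholders, not the actual DGX Cloud Serverless Inference endpoint or schema, so consult NVIDIA's documentation for the real values.

```python
# Minimal sketch: send a chat-completions request to a hosted model
# behind a serverless inference endpoint. All endpoint details below
# are assumptions/placeholders, not NVIDIA's documented API.
import os

import requests

BASE_URL = "https://example.invalid/v1"        # placeholder endpoint URL
API_KEY = os.environ["INFERENCE_API_KEY"]      # hypothetical env variable

payload = {
    "model": "llama-7b",                       # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarize what a foundation model is."}
    ],
    "max_tokens": 128,
    "temperature": 0.2,
}

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()

# Assumes an OpenAI-style response body.
print(resp.json()["choices"][0]["message"]["content"])
```

Because the request is a plain HTTPS call with bearer-token authentication, the same pattern works from any language or framework; only the endpoint URL and model name change per deployment.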