Compare the Top Artificial Intelligence Software that integrates with vLLM as of June 2025

This is a list of Artificial Intelligence software that integrates with vLLM. Use the filters on the left to narrow the results to products that integrate with vLLM, and view the products that work with vLLM in the table below.

What is Artificial Intelligence Software for vLLM?

Artificial Intelligence (AI) software is computer technology designed to simulate human intelligence. It can be used to perform tasks that require cognitive abilities, such as problem-solving, data analysis, visual perception, and language translation. AI applications range from voice recognition and virtual assistants to autonomous vehicles and medical diagnostics. vLLM itself is an open source engine for high-throughput inference and serving of large language models, and the products below integrate with it at different points of that workflow. Compare and read user reviews of the best Artificial Intelligence software for vLLM currently available using the table below. This list is updated regularly.

  • 1
    Docker

    Docker takes away repetitive, mundane configuration tasks and is used throughout the development lifecycle for fast, easy, and portable application development, on the desktop and in the cloud. Docker’s comprehensive end-to-end platform includes UIs, CLIs, APIs, and security that are engineered to work together across the entire application delivery lifecycle. Get a head start on your coding by leveraging Docker images to efficiently develop your own unique applications on Windows and Mac. Create your multi-container application using Docker Compose. Integrate with your favorite tools throughout your development pipeline; Docker works with the development tools you already use, including VS Code, CircleCI, and GitHub. Package applications as portable container images to run consistently in any environment, from on-premises Kubernetes to AWS ECS, Azure ACI, Google GKE, and more. Leverage Docker Trusted Content, including Docker Official Images and images from Docker Verified Publishers. A brief sketch of running vLLM in a container follows this entry.
    Starting Price: $7 per month
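As a rough illustration of how Docker can fit into a vLLM workflow, the sketch below uses the Docker SDK for Python to start a containerized vLLM OpenAI-compatible server. The image tag, model name, port mapping, and GPU request are assumptions for illustration, not details taken from the listing above.

```python
# Sketch: launch a vLLM OpenAI-compatible server in a container via the
# Docker SDK for Python (docker-py). Image tag, model, and port are assumptions.
import docker

client = docker.from_env()

container = client.containers.run(
    "vllm/vllm-openai:latest",                 # assumed public image name
    command=["--model", "facebook/opt-125m"],  # small model for illustration
    ports={"8000/tcp": 8000},                  # expose the API on localhost:8000
    detach=True,
    # GPU access via the NVIDIA container runtime; the standard vLLM image expects a GPU.
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
)
print("Started container:", container.short_id)
```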
  • 2
    OpenAI

    OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. Apply our API to any language task — semantic search, summarization, sentiment analysis, content generation, translation, and more — with only a few examples or by specifying your task in English. One simple integration gives you access to our constantly improving AI technology. Explore how to integrate with the API using these sample completions. A brief client sketch follows this entry.
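Because vLLM exposes an OpenAI-compatible server, the same openai Python client can be pointed either at OpenAI's hosted API or at a local vLLM endpoint. The sketch below assumes a locally running server; the base URL, placeholder API key, and model name are assumptions for illustration.

```python
# Sketch: the openai Python client (v1+) pointed at a local vLLM server that
# exposes an OpenAI-compatible endpoint. Base URL, key, and model are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local vLLM endpoint
    api_key="EMPTY",                      # a local vLLM server typically ignores the key
)

response = client.completions.create(
    model="facebook/opt-125m",            # whichever model the server was started with
    prompt="vLLM is",
    max_tokens=32,
)
print(response.choices[0].text)
```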
  • 3
    PyTorch

    Transition seamlessly between eager and graph modes with TorchScript, and accelerate the path to production with TorchServe. Scalable distributed training and performance optimization in research and production are enabled by the torch.distributed backend. A rich ecosystem of tools and libraries extends PyTorch and supports development in computer vision, NLP, and more. PyTorch is well supported on major cloud platforms, providing frictionless development and easy scaling. Select your preferences and run the install command. Stable represents the most recently tested and supported version of PyTorch and should be suitable for most users. Preview is available if you want the latest 1.10 builds, which are generated nightly and are not fully tested or supported. Please ensure that you have met the prerequisites (e.g., numpy), depending on your package manager. Anaconda is our recommended package manager since it installs all dependencies. A brief TorchScript sketch follows this entry.
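As a minimal sketch of the eager-to-graph transition mentioned in the listing, the example below scripts a tiny PyTorch module with TorchScript and checks CUDA availability. The module, shapes, and values are illustrative only.

```python
# Sketch: moving a small eager-mode module to graph mode with TorchScript.
import torch

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyNet()                   # eager mode
scripted = torch.jit.script(model)  # graph mode via TorchScript

x = torch.randn(1, 4)
print(scripted(x))
print("CUDA available:", torch.cuda.is_available())
```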
  • 4
    KServe

    Highly scalable and standards-based model inference platform on Kubernetes for trusted AI. KServe is a standard model inference platform on Kubernetes, built for highly scalable use cases. It provides a performant, standardized inference protocol across ML frameworks and supports modern serverless inference workloads with autoscaling, including scale-to-zero on GPU. It delivers high scalability, density packing, and intelligent routing using ModelMesh, along with simple, pluggable serving for production ML, including prediction, pre/post-processing, monitoring, and explainability. Advanced deployments are supported with canary rollouts, experiments, ensembles, and transformers. ModelMesh is designed for high-scale, high-density, and frequently changing model use cases; it intelligently loads and unloads AI models to and from memory to strike a trade-off between responsiveness to users and computational footprint. A brief inference-request sketch follows this entry.
    Starting Price: Free
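To illustrate the standardized inference protocol mentioned above, the sketch below sends a request to a deployed KServe model over the Open Inference Protocol (v2) REST API using the requests library. The host, model name, and tensor payload are assumptions for illustration.

```python
# Sketch: calling KServe's standardized (Open Inference Protocol v2) REST API.
# Host, model name, and tensor payload are assumptions.
import requests

host = "http://localhost:8080"   # assumed InferenceService address
model_name = "sklearn-iris"      # hypothetical deployed model

payload = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [[6.8, 2.8, 4.8, 1.4]],
        }
    ]
}

resp = requests.post(f"{host}/v2/models/{model_name}/infer", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["outputs"])
```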
  • 5
    Hugging Face

    Hugging Face is a leading platform for AI and machine learning, offering a vast hub of models, datasets, and tools for natural language processing (NLP) and beyond. The platform supports a wide range of applications, from text, image, and audio to 3D data analysis. Hugging Face fosters collaboration among researchers, developers, and companies by providing open-source tools like Transformers, Diffusers, and Tokenizers. It enables users to build, share, and access pre-trained models, accelerating AI development across a variety of industries. A brief sketch of loading a Hub model with vLLM follows this entry.
    Starting Price: $9 per month
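As a hedged sketch of how Hugging Face Hub models and vLLM fit together, the example below loads a Hub-hosted model through vLLM's offline Python API. The model name, sampling settings, and prompt are assumptions for illustration.

```python
# Sketch: serving a Hugging Face Hub model with vLLM's offline Python API.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # weights are pulled from the Hugging Face Hub

params = SamplingParams(temperature=0.8, max_tokens=32)
outputs = llm.generate(["Hugging Face and vLLM work together by"], params)

for out in outputs:
    print(out.outputs[0].text)
```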
  • 6
    NVIDIA DRIVE
    Software is what turns a vehicle into an intelligent machine. The NVIDIA DRIVE™ software stack is open, empowering developers to efficiently build and deploy a variety of state-of-the-art AV applications, including perception, localization and mapping, planning and control, driver monitoring, and natural language processing. The foundation of the DRIVE software stack, DRIVE OS is the first safe operating system for accelerated computing. It includes NvMedia for sensor input processing, NVIDIA CUDA® libraries for efficient parallel computing implementations, NVIDIA TensorRT™ for real-time AI inference, and other developer tools and modules for accessing hardware engines. The NVIDIA DriveWorks® SDK provides middleware functions on top of DRIVE OS that are fundamental to autonomous vehicle development. These consist of the sensor abstraction layer (SAL) and sensor plugins, a data recorder, vehicle I/O support, and a deep neural network (DNN) framework. A brief TensorRT loading sketch follows this entry.
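The DRIVE stack itself targets in-vehicle hardware, but the TensorRT inference piece it mentions can be sketched in Python. The example below only illustrates loading a prebuilt TensorRT engine; the engine file path is hypothetical, and DRIVE OS / DriveWorks specifics are out of scope here.

```python
# Sketch: loading a prebuilt TensorRT engine with the TensorRT Python API.
# The engine file path is hypothetical.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(TRT_LOGGER)

with open("model.engine", "rb") as f:  # hypothetical serialized engine
    engine = runtime.deserialize_cuda_engine(f.read())

# An execution context holds the per-inference state used to run the engine.
context = engine.create_execution_context()
print("TensorRT engine loaded:", engine is not None)
```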