
Related Products

  • RunPod (133 Ratings)
  • Vertex AI (713 Ratings)
  • Google AI Studio (4 Ratings)
  • LM-Kit.NET (16 Ratings)
  • Google Cloud BigQuery (1,731 Ratings)
  • Fraud.net (56 Ratings)
  • BytePlus Recommend (1 Rating)
  • Google Cloud Speech-to-Text (374 Ratings)
  • StarTree (25 Ratings)
  • Qloo (23 Ratings)

About (SquareFactory)

End-to-end project, model, and hosting management platform that allows companies to convert data and algorithms into holistic, execution-ready AI strategies. Build, train, and manage models securely and with ease. Create products that consume AI models from anywhere, at any time. Minimize the risks of AI investments while increasing strategic flexibility. Fully automated model testing, evaluation, deployment, scaling, and hardware load balancing. From real-time, low-latency, high-throughput inference to batch, long-running inference. A pay-per-second-of-use model with an SLA and full governance, monitoring, and auditing tools. An intuitive interface acts as a unified hub for managing projects, creating and visualizing datasets, and training models via collaborative and reproducible workflows.

About (vLLM)

vLLM is a high-performance library designed to facilitate efficient inference and serving of Large Language Models (LLMs). Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry. It offers state-of-the-art serving throughput by efficiently managing attention key and value memory through its PagedAttention mechanism. It supports continuous batching of incoming requests and utilizes optimized CUDA kernels, including integration with FlashAttention and FlashInfer, to enhance model execution speed. Additionally, vLLM provides quantization support for GPTQ, AWQ, INT4, INT8, and FP8, as well as speculative decoding capabilities. Users benefit from seamless integration with popular Hugging Face models, support for various decoding algorithms such as parallel sampling and beam search, and compatibility with NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs, and more.
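
As a brief illustration of the workflow described above, here is a minimal sketch of offline inference with vLLM's Python API, assuming vLLM is installed and that the model identifier (a placeholder here) points to a Hugging Face checkpoint you can download:

    from vllm import LLM, SamplingParams

    prompts = ["Explain what PagedAttention does in one sentence."]
    # Sampling settings; beam search and other decoding algorithms are also
    # configured through SamplingParams.
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

    # The model ID is an example; any supported Hugging Face model works.
    # For pre-quantized checkpoints, a quantization method (e.g. "awq")
    # can be passed via the quantization argument.
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        print(output.outputs[0].text)

Prompts submitted this way are batched by the engine itself, which is where the throughput benefits of PagedAttention and continuous batching show up in practice.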

Platforms Supported (SquareFactory)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported (vLLM)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (SquareFactory)

Businesses looking for an end-to-end automated MLOps platform

Audience (vLLM)

AI infrastructure engineers looking for a solution to optimize the deployment and serving of large-scale language models in production environments

Support (SquareFactory)

Phone Support
24/7 Live Support
Online

Support (vLLM)

Phone Support
24/7 Live Support
Online

API (SquareFactory)

Offers API

API (vLLM)

Offers API

Pricing (SquareFactory)

No information available.
Free Version
Free Trial

Pricing (vLLM)

No information available.
Free Version
Free Trial

Reviews/Ratings (SquareFactory)

Overall 0.0 / 5
Ease 0.0 / 5
Features 0.0 / 5
Design 0.0 / 5
Support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings (vLLM)

Overall 0.0 / 5
Ease 0.0 / 5
Features 0.0 / 5
Design 0.0 / 5
Support 0.0 / 5

This software hasn't been reviewed yet.

Training (SquareFactory)

Documentation
Webinars
Live Online
In Person

Training (vLLM)

Documentation
Webinars
Live Online
In Person

Company Information (SquareFactory)

SquareFactory
Founded: 2022
Switzerland
www.squarefactory.io/product/

Company Information (vLLM)

vLLM
United States
docs.vllm.ai/en/latest/

Alternatives

OpenVINO (Intel)
AWS Neuron (Amazon Web Services)

Integrations (SquareFactory)

Amazon Web Services (AWS)
Database Mart
Docker
Google Cloud Platform
Hugging Face
KServe
Kubernetes
Microsoft Azure
NGINX
NVIDIA DRIVE
OpenAI
PyTorch

Integrations (vLLM)

Amazon Web Services (AWS)
Database Mart
Docker
Google Cloud Platform
Hugging Face
KServe
Kubernetes
Microsoft Azure
NGINX
NVIDIA DRIVE
OpenAI (see the serving sketch after this list)
PyTorch
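
The OpenAI entry above refers to vLLM's OpenAI-compatible HTTP server. The sketch below assumes a server has already been started locally (for example with the command vllm serve <model> on the default port 8000) and that the openai Python package is installed; the model name and API key are placeholders:

    from openai import OpenAI

    # Point the standard OpenAI client at the local vLLM server. The API key is
    # ignored unless the server was started with one, so a placeholder suffices.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",  # must match the model the server loaded
        messages=[{"role": "user", "content": "Summarize what continuous batching does."}],
        max_tokens=128,
    )
    print(response.choices[0].message.content)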