Alibaba Cloud Model Studio
Model Studio is Alibaba Cloud’s one-stop generative AI platform for building intelligent, business-aware applications on industry-leading foundation models such as Qwen-Max, Qwen-Plus, Qwen-Turbo, the Qwen2/Qwen3 series, vision-language models (Qwen-VL/Qwen-Omni), and the video-focused Wan series. Developers can access these models through familiar OpenAI-compatible APIs or purpose-built SDKs, with no infrastructure setup required. The platform supports the full development workflow: experiment with models in the playground; run real-time and batch inference; fine-tune with techniques such as SFT and LoRA; then evaluate, compress, accelerate deployment, and monitor performance, all within an isolated Virtual Private Cloud (VPC) for enterprise-grade security. Customization is simplified via one-click Retrieval-Augmented Generation (RAG), which grounds model outputs in your business data. Visual, template-driven interfaces facilitate prompt engineering and application design.
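Because the platform exposes OpenAI-compatible APIs, calling a Qwen model looks like any standard chat-completion request. The sketch below builds such a request with only the standard library; the base URL, model name, and environment variable are assumptions for illustration, so check the official documentation for the values valid in your account and region.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint for Model Studio; verify in the docs.
BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-style chat-completion request (not yet sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Hypothetical env var name for the API key.
            "Authorization": f"Bearer {os.environ.get('DASHSCOPE_API_KEY', '')}",
        },
        method="POST",
    )

req = build_chat_request("qwen-plus", "Hello!")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` (or pointing any OpenAI SDK at the same base URL) would return a standard chat-completion response.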
Learn more
ModelScope
This model is a multi-stage text-to-video diffusion model: given a text description, it generates a video that matches the description. Only English input is supported.
The text-to-video diffusion model consists of three sub-networks: a text feature extractor, a diffusion model that maps text features into the video latent space, and a decoder that maps the video latent space back to the visual space. The model has roughly 1.7 billion parameters in total and supports English input only. The diffusion component uses a UNet3D architecture and generates a video by iteratively denoising a pure Gaussian-noise video.
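The iterative denoising process described above can be illustrated with a toy sampling loop. This is a deliberately simplified sketch: a real model denoises a latent video tensor using a UNet3D conditioned on text features, whereas here the "latent" is a short list of floats and the denoiser is a hand-written stand-in.

```python
import random

def fake_denoiser(x, t, text_features):
    """Stand-in for the UNet3D noise predictor (not a real network)."""
    # Pretend the model predicts the residual noise as 10% of the current value.
    return [v * 0.1 for v in x]

def sample(steps=50, latent_size=8, text_features=None):
    # Start from pure Gaussian noise, as in the model description.
    x = [random.gauss(0.0, 1.0) for _ in range(latent_size)]
    for t in range(steps, 0, -1):
        eps = fake_denoiser(x, t, text_features)
        # Subtract a little predicted noise each step (simplified update rule).
        x = [v - e for v, e in zip(x, eps)]
    return x

latents = sample()
print(len(latents))
```

After enough steps the noise is progressively removed; in the real pipeline the resulting latent video is then decoded into pixel space by the third sub-network.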
Learn more
Nebius Token Factory
Nebius Token Factory is a scalable AI inference platform designed to run open-source and custom AI models in production without manual infrastructure management. It offers enterprise-ready inference endpoints with predictable performance, autoscaling throughput, and sub-second latency, even at very high request volumes. It delivers 99.9% uptime and supports unlimited or tailored traffic profiles based on workload needs, simplifying the transition from experimentation to global deployment. Nebius Token Factory supports a broad set of open-source models such as Llama, Qwen, DeepSeek, GPT-OSS, Flux, and many others, and lets teams host and fine-tune models through an API or dashboard. Users can upload LoRA adapters or full fine-tuned variants directly, with the same enterprise performance guarantees applied to custom models.
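A consequence of the description above is that an uploaded LoRA variant is called exactly like a base model, with only the model identifier changing. The sketch below shows that request shape; both model names are illustrative assumptions, not documented identifiers.

```python
import json

def chat_payload(model_id: str, prompt: str, max_tokens: int = 128) -> str:
    """Build the JSON body of an OpenAI-style chat-completion request."""
    request = {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(request)

# Same request shape for a public base model and a hypothetical uploaded fine-tune:
base = chat_payload("meta-llama/Llama-3.3-70B-Instruct", "Summarize our Q3 report.")
custom = chat_payload("my-org/llama-70b-support-lora", "Summarize our Q3 report.")
```

Only the `model` field differs between the two payloads, which is what lets custom models inherit the same endpoint, tooling, and performance guarantees as the catalog models.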
Learn more
Baseten
Baseten is a high-performance platform designed for mission-critical AI inference workloads. It serves open-source, custom, and fine-tuned AI models on infrastructure built specifically for production scale. Users can deploy models on Baseten’s cloud, their own cloud, or in a hybrid setup, ensuring flexibility and scalability. The platform offers inference-optimized infrastructure that enables fast training and seamless developer workflows. Baseten also provides specialized performance optimizations tailored for generative AI applications such as image generation, transcription, text-to-speech, and large language models. With 99.99% uptime, low latency, and support from forward-deployed engineers, Baseten aims to help teams bring AI products to market quickly and reliably.
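Custom models on Baseten are commonly packaged with its open-source Truss library, whose `model.py` defines a class exposing `load()` for one-time setup and `predict()` for each request. The sketch below follows that convention with a trivial stand-in "model"; treat the exact signatures as illustrative and consult the Truss documentation for a real deployment.

```python
class Model:
    """Minimal Truss-style model wrapper (toy stand-in, not a real deployment)."""

    def __init__(self, **kwargs):
        self._model = None

    def load(self):
        # One-time setup at deployment: load weights, warm caches, etc.
        # Here we just install a trivial echo "model".
        self._model = lambda text: text.upper()

    def predict(self, model_input: dict) -> dict:
        # Called per request with the deserialized JSON body.
        return {"output": self._model(model_input["text"])}

m = Model()
m.load()
print(m.predict({"text": "hello"}))
```

Separating `load()` from `predict()` lets the platform pay model-loading cost once per replica while autoscaling the cheap per-request path.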
Learn more