Parallax is a decentralized inference framework for running large language models across distributed computing resources. Instead of relying on centralized GPU clusters in data centers, it lets multiple heterogeneous machines collaborate to serve AI inference workloads. Parallax divides model layers across different nodes and dynamically coordinates them into a complete inference pipeline.

A two-stage scheduling architecture determines how model layers are allocated to the available hardware and how requests are routed across nodes during execution, optimizing latency, throughput, and hardware utilization even when nodes differ in computational capability. Through model sharding and pipeline parallelism, very large models can run across resources no single machine could host alone.
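To make the two-stage idea concrete, here is a minimal sketch (not Parallax's actual API; the capacity metric, node names, and function names are hypothetical): stage one assigns contiguous layer ranges to nodes in proportion to their capacity, and stage two routes each request to the least-loaded entry node.

```python
# Illustrative two-stage scheduler sketch -- all names here are
# hypothetical, not part of Parallax's real interface.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float          # relative compute capability (assumed metric)
    active_requests: int = 0

def allocate_layers(nodes, total_layers):
    """Stage 1: split model layers across nodes proportional to capacity."""
    total_cap = sum(n.capacity for n in nodes)
    plan, start = {}, 0
    for i, n in enumerate(nodes):
        if i == len(nodes) - 1:
            count = total_layers - start   # last node takes the remainder
        else:
            count = round(total_layers * n.capacity / total_cap)
        plan[n.name] = (start, start + count)
        start += count
    return plan

def route_request(entry_nodes):
    """Stage 2: send an incoming request to the least-loaded entry node."""
    target = min(entry_nodes, key=lambda n: n.active_requests)
    target.active_requests += 1
    return target.name

nodes = [Node("a100", 4.0), Node("rtx4090", 2.0), Node("m2-ultra", 2.0)]
print(allocate_layers(nodes, 32))
# → {'a100': (0, 16), 'rtx4090': (16, 24), 'm2-ultra': (24, 32)}
```

A real scheduler would also account for memory limits, network topology, and nodes joining or leaving, but the split between a placement phase and a routing phase is the core of the design described above.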
## Features
- Decentralized inference engine for distributed LLM serving
- Pipeline parallel model sharding across heterogeneous nodes
- Dynamic scheduling system for request routing and load balancing
- Support for collaborative GPU pools and distributed clusters
- Optimizations for latency, throughput, and resource utilization
- Cross-platform deployment across diverse hardware environments
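The pipeline-parallel sharding listed above can be sketched in a few lines (a toy model, assuming nothing about Parallax's internals; the class and helper names are invented for illustration): each node holds only its own slice of layers and forwards the activation to the next stage, so no single machine needs the full model.

```python
# Toy pipeline-parallelism sketch -- hypothetical names, not Parallax code.
def make_layer(scale):
    # Stand-in for a transformer layer: just scales the activation.
    return lambda x: [v * scale for v in x]

class PipelineNode:
    def __init__(self, layers, next_node=None):
        self.layers = layers          # this node's shard of the model
        self.next_node = next_node    # next stage in the pipeline, if any

    def forward(self, activation):
        for layer in self.layers:
            activation = layer(activation)
        # Hand off to the next pipeline stage, or return the final output.
        return self.next_node.forward(activation) if self.next_node else activation

# A 4-"layer" model sharded across two nodes.
tail = PipelineNode([make_layer(2), make_layer(2)])
head = PipelineNode([make_layer(2), make_layer(2)], next_node=tail)
print(head.forward([1.0]))  # → [16.0]
```

In a real deployment the hand-off between stages crosses the network rather than a method call, which is why the scheduler's placement and routing decisions matter for end-to-end latency.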