A sparsity-aware enterprise inference runtime for AI models on CPUs. Maximize your CPU infrastructure with DeepSparse to run performant computer vision (CV), natural language processing (NLP), and large language model (LLM) workloads.
Features
- Optimized for sparse deep learning models
- Enables high-speed inference on CPUs
- Supports ONNX model format for broad compatibility
- Works with sparsified versions of popular deep learning models
- Scales from edge devices to cloud deployments
- Integrates with PyTorch and TensorFlow models
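The first two features rest on a simple observation: in a pruned (sparsified) model, multiply-adds against zero-valued weights contribute nothing and can be skipped entirely. The toy sketch below illustrates the idea in plain Python by storing only the nonzero weights as (index, value) pairs; it is illustrative only and not DeepSparse's actual kernel code, which is optimized native code.

```python
def compress(weights):
    """Keep only (index, value) pairs for nonzero weights."""
    return [(i, w) for i, w in enumerate(weights) if w != 0.0]

def sparse_dot(compressed, x):
    """Dot product that touches only the stored nonzeros."""
    return sum(w * x[i] for i, w in compressed)

def dense_dot(weights, x):
    """Reference dense dot product over every weight, zeros included."""
    return sum(w * xi for w, xi in zip(weights, x))

# A 90%-sparse weight vector: only 2 of its 10 entries are nonzero,
# so the sparse path performs 2 multiply-adds instead of 10.
weights = [0.0, 0.0, 0.5, 0.0, 0.0, 0.0, -1.25, 0.0, 0.0, 0.0]
x = [float(i) for i in range(10)]

compressed = compress(weights)
assert len(compressed) == 2
assert sparse_dot(compressed, x) == dense_dot(weights, x)  # -6.5 either way
```

Production sparse kernels add blocking, vectorization, and cache-aware layouts on top of this skip-the-zeros principle, but the arithmetic savings shown here are the core of why sparsified models run faster on CPUs.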
License
MIT License