A sparsity-aware enterprise inference runtime for AI models on CPUs. Maximize your existing CPU infrastructure with DeepSparse to run performant computer vision (CV), natural language processing (NLP), and large language model (LLM) workloads.
Features
- Optimized for sparse deep learning models
- Enables high-speed inference on CPUs
- Supports ONNX model format for broad compatibility
- Works with sparsified versions of popular deep learning models
- Scales from edge devices to cloud deployments
- Integrates with PyTorch and TensorFlow models
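To illustrate the idea behind the feature list above, here is a minimal, self-contained sketch of why sparsity-aware execution speeds up CPU inference. This is not DeepSparse's actual API or kernel code, just a toy compressed-sparse-row (CSR) matrix-vector product showing that a pruned weight matrix lets the kernel skip multiply-adds for zero weights:

```python
# Toy illustration of sparsity-aware execution (NOT the DeepSparse API).
# A pruned weight matrix stored in compressed form lets the kernel
# perform work only for the non-zero weights.

def to_csr(dense):
    """Compress a dense 2-D weight matrix, keeping only non-zero entries."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, w in enumerate(row):
            if w != 0.0:
                values.append(w)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def sparse_matvec(values, col_idx, row_ptr, x):
    """Compute y = W @ x, touching only the stored non-zeros."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# A 2x3 weight matrix that is ~66% sparse after pruning.
W = [[0.0, 2.0, 0.0],
     [1.0, 0.0, 0.0]]
vals, cols, ptr = to_csr(W)
print(sparse_matvec(vals, cols, ptr, [1.0, 1.0, 1.0]))  # → [2.0, 1.0]
```

Here only 2 of 6 multiply-adds are executed; real sparsity-aware runtimes apply the same principle with vectorized CPU kernels over structured sparsity patterns.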
License
MIT License