A sparsity-aware inference runtime for AI models on CPUs. Maximize your CPU infrastructure with DeepSparse to run performant computer vision (CV), natural language processing (NLP), and large language model (LLM) workloads.
Features
- Optimized for sparse deep learning models
- Enables high-speed inference on CPUs
- Supports ONNX model format for broad compatibility
- Works with sparsified versions of popular deep learning models
- Scales from edge devices to cloud deployments
- Integrates with PyTorch and TensorFlow models
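To make "sparse" concrete: sparsity here means that a large fraction of a model's weights are exactly zero (typically via pruning), which a sparsity-aware runtime can exploit to skip work. A minimal sketch of measuring a layer's weight sparsity with NumPy; the function name and toy matrix are illustrative, not part of the DeepSparse API:

```python
import numpy as np

def sparsity(weights: np.ndarray) -> float:
    """Fraction of exactly-zero entries in a weight tensor."""
    return float(np.count_nonzero(weights == 0)) / weights.size

# Toy 4x4 layer pruned so 12 of 16 weights are zero (75% sparse).
w = np.array([
    [0.0, 0.0, 0.0, 1.5],
    [0.0, 0.0, -2.0, 0.0],
    [0.0, 0.3, 0.0, 0.0],
    [0.7, 0.0, 0.0, 0.0],
])

print(sparsity(w))  # 0.75
```

A dense CPU kernel still multiplies every one of those zeros; a sparsity-aware engine can skip them, which is the source of the speedups claimed above.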
License
MIT License