NVIDIA Triton Inference Server vs. Prodigy (Explosion)

About — NVIDIA Triton Inference Server
NVIDIA Triton™ Inference Server delivers fast and scalable AI in production. As open-source inference serving software, Triton streamlines AI inference by enabling teams to deploy trained AI models from any framework (TensorFlow, NVIDIA TensorRT®, PyTorch, ONNX, XGBoost, Python, custom backends, and more) on any GPU- or CPU-based infrastructure (cloud, data center, or edge). Triton runs models concurrently on GPUs to maximize throughput and utilization, supports x86 and Arm CPU-based inferencing, and offers features like dynamic batching, a model analyzer, model ensembles, and audio streaming. Triton integrates with Kubernetes for orchestration and scaling, exports Prometheus metrics for monitoring, supports live model updates, and can be used on all major public cloud machine learning (ML) and managed Kubernetes platforms. Triton helps developers deliver high-performance inference and standardize model deployment in production.
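The model-repository and dynamic-batching mechanism described above can be sketched as a minimal Triton model configuration. The model name, tensor shapes, and batch sizes below are illustrative assumptions, not details from this page:

```protobuf
# config.pbtxt for a hypothetical ONNX model in a Triton model repository
# (assumed layout: models/my_model/1/model.onnx + models/my_model/config.pbtxt)
name: "my_model"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  { name: "input__0", data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }
]
output [
  { name: "output__0", data_type: TYPE_FP32, dims: [ 1000 ] }
]
# Dynamic batching: Triton groups individual inference requests into
# server-side batches to raise GPU throughput and utilization.
dynamic_batching {
  preferred_batch_size: [ 4, 8 ]
  max_queue_delay_microseconds: 100
}
```

With this in place, `tritonserver --model-repository=models` would serve the model and batch concurrent requests automatically.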
About — Prodigy
Radically efficient machine teaching. An annotation tool powered by active learning. Prodigy is a scriptable annotation tool so efficient that data scientists can do the annotation themselves, enabling a new level of rapid iteration. Today’s transfer learning technologies mean you can train production-quality models with very few examples. With Prodigy you can take full advantage of modern machine learning by adopting a more agile approach to data collection. You'll move faster, be more independent and ship far more successful projects. Prodigy brings together state-of-the-art insights from machine learning and user experience. With its continuous active learning system, you're only asked to annotate examples the model does not already know the answer to. The web application is powerful, extensible and follows modern UX principles. The secret is very simple: it's designed to help you focus on one decision at a time and keep you clicking – like Tinder for data.
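The continuous active learning loop described above — asking the annotator only about examples the model is least sure of — can be sketched in plain Python. The scoring function and example texts are hypothetical, and this is a sketch of the idea (uncertainty sampling), not Prodigy's actual implementation:

```python
# Minimal sketch of uncertainty sampling, the idea behind continuous
# active learning: surface the examples whose model score is closest
# to 0.5 (most uncertain) for human annotation first.

def prefer_uncertain(scored_examples):
    """Sort (example, score) pairs so the most uncertain come first."""
    return sorted(scored_examples, key=lambda pair: abs(pair[1] - 0.5))

# Hypothetical model scores for unlabeled texts (probability of a label).
scored = [
    ("order a pizza", 0.97),    # confident positive -> annotate last
    ("book a flight", 0.52),    # model is unsure    -> annotate first
    ("hello there", 0.08),      # confident negative
    ("reserve a table", 0.41),  # fairly unsure
]

queue = prefer_uncertain(scored)
print([text for text, _ in queue])
# → ['book a flight', 'reserve a table', 'hello there', 'order a pizza']
```

The annotator only sees the front of this queue, so human effort goes to the decisions that teach the model the most.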
Platforms Supported — NVIDIA Triton Inference Server
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook
Platforms Supported — Prodigy
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook
Audience — NVIDIA Triton Inference Server
Developers and companies looking for an inference-serving solution to run AI models in production
Audience — Prodigy
Data scientists, AI developers, data labelers
Support — NVIDIA Triton Inference Server
Phone Support
24/7 Live Support
Online
Support — Prodigy
Phone Support
24/7 Live Support
Online
API — NVIDIA Triton Inference Server
Offers API
API — Prodigy
Offers API
Pricing — NVIDIA Triton Inference Server
Free
Free Version
Free Trial
Pricing — Prodigy
$490 one-time fee
Free Version
Free Trial
Training — NVIDIA Triton Inference Server
Documentation
Webinars
Live Online
In Person
Training — Prodigy
Documentation
Webinars
Live Online
In Person
Company Information — NVIDIA Triton Inference Server
NVIDIA
United States
developer.nvidia.com/nvidia-triton-inference-server
Company Information — Prodigy
Explosion
Founded: 2016
Germany
prodi.gy/
Data Labeling Features — Prodigy
Human-in-the-loop
Labeling Automation
Labeling Quality
Performance Tracking
Polygon, Rectangle, Line, Point
SDK
Supports Audio Files
Task Management
Team Collaboration
Training Data Management
Integrations — NVIDIA Triton Inference Server
Alibaba Cloud
Amazon EKS
Amazon Elastic Container Service (Amazon ECS)
Amazon SageMaker
Azure Kubernetes Service (AKS)
Azure Machine Learning
FauxPilot
Google Kubernetes Engine (GKE)
HPE Ezmeral
Kubernetes
Integrations — Prodigy
Alibaba Cloud
Amazon EKS
Amazon Elastic Container Service (Amazon ECS)
Amazon SageMaker
Azure Kubernetes Service (AKS)
Azure Machine Learning
FauxPilot
Google Kubernetes Engine (GKE)
HPE Ezmeral
Kubernetes