4 Integrations with ONNX

Below is a list of software that integrates with ONNX. Compare the best ONNX integrations by features, ratings, user reviews, and pricing. Here are the current ONNX integrations in 2024:

  • 1
    Flyte
    Union.ai

    The workflow automation platform for complex, mission-critical data and ML processes at scale. Flyte makes it easy to create concurrent, scalable, and maintainable workflows for machine learning and data processing. Flyte is used in production at Lyft, Spotify, Freenome, and others. At Lyft, Flyte has served production model training and data processing for over four years, becoming the de facto platform for teams such as pricing, locations, ETA, mapping, and autonomous. Flyte manages over 10,000 unique workflows at Lyft, totaling over 1,000,000 executions, 20 million tasks, and 40 million containers every month. It is entirely open source under the Apache 2.0 license, hosted by the Linux Foundation with a cross-industry overseeing committee. Configuring machine learning and data workflows in YAML can get complex and error-prone; Flyte instead lets you define them as code, as in the sketch after this entry.
    Starting Price: Free
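
As an illustration of the workflows-as-code point above, here is a minimal sketch using flytekit, Flyte's open source Python SDK. The task names, types, and logic are invented for the example, not taken from the listing.

```python
# A minimal sketch using flytekit, Flyte's Python SDK. The task names
# and logic below are illustrative, not taken from the listing.
from typing import List

from flytekit import task, workflow


@task
def normalize(raw: List[float]) -> List[float]:
    # Scale values into [0, 1]; guard against a constant input list.
    lo, hi = min(raw), max(raw)
    span = (hi - lo) or 1.0
    return [(x - lo) / span for x in raw]


@task
def mean(values: List[float]) -> float:
    return sum(values) / len(values)


@workflow
def pipeline(raw: List[float]) -> float:
    # On a Flyte cluster each task runs in its own container; the
    # dependency graph comes from these calls, not from YAML files.
    return mean(values=normalize(raw=raw))


if __name__ == "__main__":
    # Flyte workflows are plain Python callables locally, which makes
    # iteration quick before registering them to a cluster.
    print(pipeline(raw=[2.0, 4.0, 6.0]))
```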
  • 2
    Azure SQL Edge
    Microsoft
    Small-footprint, edge-optimized SQL database engine with built-in AI. Azure SQL Edge, a robust Internet of Things (IoT) database for edge computing, combines capabilities such as data streaming and time series with built-in machine learning and graph features. It extends the industry-leading Microsoft SQL engine to edge devices for consistent performance and security across your entire data estate, from cloud to edge. Develop your applications once and deploy them anywhere across the edge, your on-premises data center, or Azure. Built-in data streaming, time series, in-database machine learning, and graph features enable low-latency analytics, and data processing at the edge supports online, offline, or hybrid environments despite latency and bandwidth constraints. Deploy and update from the Azure portal or your enterprise portal for consistent security and turnkey management. Detect anomalies and apply business logic at the edge using the built-in machine learning capabilities, which score ONNX models, as in the sketch after this entry.
    Starting Price: $60 per year
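
Azure SQL Edge's in-database machine learning scores ONNX models through the T-SQL PREDICT function. The sketch below shows the shape of that interaction from Python over pyodbc; the connection string, the `models` table, the `sensor_readings` table, and the model's output name are all assumptions made for illustration.

```python
# A hedged sketch of scoring an ONNX model inside Azure SQL Edge via the
# T-SQL PREDICT function. The connection string, `models` table,
# `sensor_readings` table, and output column name are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost,1433;DATABASE=edge;UID=sa;PWD=<password>;"
    "TrustServerCertificate=yes"
)
cursor = conn.cursor()

# Store the serialized ONNX model in a table; the engine reads it
# as VARBINARY when scoring.
with open("anomaly_model.onnx", "rb") as f:
    cursor.execute(
        "INSERT INTO models (name, data) VALUES (?, ?)",
        "anomaly_model", pyodbc.Binary(f.read()),
    )
conn.commit()

# PREDICT(... RUNTIME = ONNX) runs the model in-engine; the WITH clause
# must name the model's actual output (here assumed to be `score`).
cursor.execute("""
    DECLARE @model VARBINARY(MAX) =
        (SELECT data FROM models WHERE name = 'anomaly_model');
    SELECT r.sensor_id, p.score
    FROM PREDICT(MODEL = @model, DATA = dbo.sensor_readings AS r,
                 RUNTIME = ONNX) WITH (score FLOAT) AS p;
""")
for sensor_id, score in cursor.fetchall():
    print(sensor_id, score)
```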
  • 3
    Cirrascale
    Our high-throughput storage systems can serve millions of small, random files to GPU-based training servers, accelerating overall training times. We offer high-bandwidth, low-latency networks for connecting distributed training servers and for transporting data between storage and servers. Other cloud providers squeeze you with extra fees and charges to get your data out of their storage clouds, and those can add up fast. We consider ourselves an extension of your team: we work with you to set up scheduling services, help with best practices, and provide superior support. Workflows vary from company to company, so Cirrascale works to ensure you get the right solution for your needs and the best results. Cirrascale is the only provider that works with you to tailor your cloud instances to increase performance, remove bottlenecks, and optimize your workflow. Cloud-based solutions accelerate your training, simulation, and re-simulation time.
    Starting Price: $2.49 per hour
  • 4
    Groq
    Groq is on a mission to set the standard for GenAI inference speed, helping real-time AI applications come to life today. The LPU inference engine, with LPU standing for Language Processing Unit, is a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as large language models (LLMs). The LPU is designed to overcome the two LLM bottlenecks: compute density and memory bandwidth. An LPU has greater compute capacity than a GPU or CPU with respect to LLMs, which reduces the amount of time per generated word and allows sequences of text to be produced much faster. Additionally, eliminating external memory bottlenecks enables the LPU inference engine to deliver orders-of-magnitude better performance on LLMs compared to GPUs. Groq supports standard machine learning frameworks such as PyTorch, TensorFlow, and ONNX for inference; a sketch of producing an ONNX model from PyTorch follows this entry.
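
Since the listing only says Groq accepts models from standard frameworks, the sketch below sticks to generic tooling: exporting a small PyTorch model to ONNX and sanity-checking it with onnxruntime. The model, file name, and tensor names are invented; nothing here is Groq-specific API.

```python
# A minimal, framework-generic sketch: export a PyTorch model to ONNX
# and sanity-check it with onnxruntime. Model, file, and tensor names
# are invented; no Groq-specific tooling is shown or implied.
import torch
import torch.nn as nn


class TinyClassifier(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = TinyClassifier().eval()
dummy = torch.randn(1, 16)  # example input pins down the traced shapes

# torch.onnx.export traces the model and writes a standard .onnx file.
torch.onnx.export(
    model,
    dummy,
    "tiny_classifier.onnx",
    input_names=["features"],
    output_names=["logits"],
    dynamic_axes={"features": {0: "batch"}, "logits": {0: "batch"}},
)

# Verify the exported file loads and runs under a generic ONNX runtime.
import onnxruntime as ort

session = ort.InferenceSession(
    "tiny_classifier.onnx", providers=["CPUExecutionProvider"]
)
(logits,) = session.run(None, {"features": dummy.numpy()})
print(logits.shape)  # (1, 4)
```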