Alternatives to Towhee

Compare Towhee alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Towhee in 2024. Compare features, ratings, user reviews, pricing, and more from Towhee competitors and alternatives in order to make an informed decision for your business.

  • 1
    Dataloop AI

    Manage unstructured data and pipelines to develop AI solutions at amazing speed. Enterprise-grade data platform for vision AI. Dataloop is a one-stop shop for building and deploying powerful computer vision pipelines: data labeling, automating data ops, customizing production pipelines, and weaving the human-in-the-loop for data validation. Our vision is to make machine learning-based systems accessible, affordable, and scalable for all. Explore and analyze vast quantities of unstructured data from diverse sources. Rely on automated preprocessing and embeddings to identify similarities and find the data you need. Curate, version, clean, and route your data to wherever it’s needed to create exceptional AI applications.
  • 2
    Labelbox

    The training data platform for AI teams. A machine learning model is only as good as its training data. Labelbox is an end-to-end platform to create and manage high-quality training data all in one place, while supporting your production pipeline with powerful APIs. Powerful image labeling tool for image classification, object detection, and segmentation. When every pixel matters, you need accurate and intuitive image segmentation tools. Customize the tools to support your specific use case, including instances, custom attributes, and much more. Performant video labeling editor for cutting-edge computer vision. Label directly on the video at up to 30 FPS, at the frame level. Additionally, Labelbox provides per-frame label feature analytics, enabling you to create better models faster. Creating training data for natural language intelligence has never been easier. Label text strings, conversations, paragraphs, and documents with fast, customizable classification.
  • 3
    Pathway

    Pathway is a Python ETL framework for stream processing, real-time analytics, LLM pipelines, and RAG. Pathway comes with an easy-to-use Python API, allowing you to seamlessly integrate your favorite Python ML libraries. Pathway code is versatile and robust: you can use it in both development and production environments, handling both batch and streaming data effectively. The same code can be used for local development, CI/CD tests, running batch jobs, handling stream replays, and processing data streams. Pathway is powered by a scalable Rust engine based on Differential Dataflow and performs incremental computation. Your Pathway code, despite being written in Python, is run by the Rust engine, enabling multithreading, multiprocessing, and distributed computations. The entire pipeline is kept in memory and can be easily deployed with Docker and Kubernetes.
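    As a rough illustration of the Python API described above, the sketch below reads a directory of CSV files, applies a declarative transformation, and writes the result; it assumes a recent pathway release, and the ./data/ path and the single-column schema are hypothetical placeholders.
      import pathway as pw

      # Hypothetical schema for CSV rows with a single integer column named "value".
      class InputSchema(pw.Schema):
          value: int

      # Read a directory of CSV files as a streaming table (batch mode also works).
      table = pw.io.csv.read("./data/", schema=InputSchema, mode="streaming")

      # Declarative transformation; the Rust engine executes it incrementally.
      doubled = table.select(value=pw.this.value * 2)

      # Write the results and start the engine.
      pw.io.csv.write(doubled, "./output.csv")
      pw.run()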
  • 4
    scikit-learn

    Scikit-learn provides simple and efficient tools for predictive data analysis. Scikit-learn is a robust, open source machine learning library for the Python programming language, designed to provide simple and efficient tools for data analysis and modeling. Built on the foundations of popular scientific libraries like NumPy, SciPy, and Matplotlib, scikit-learn offers a wide range of supervised and unsupervised learning algorithms, making it an essential toolkit for data scientists, machine learning engineers, and researchers. The library is organized into a consistent and flexible framework, where various components can be combined and customized to suit specific needs. This modularity makes it easy for users to build complex pipelines, automate repetitive tasks, and integrate scikit-learn into larger machine-learning workflows. Additionally, the library’s emphasis on interoperability ensures that it works seamlessly with other Python libraries, facilitating smooth data processing.
    Starting Price: Free
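    The modular, composable design mentioned above is easiest to see in scikit-learn's Pipeline API; a minimal sketch with synthetic data standing in for a real dataset:
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.linear_model import LogisticRegression

      # Synthetic data stands in for a real dataset.
      X, y = make_classification(n_samples=500, n_features=10, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # Chain preprocessing and a model into a single estimator.
      pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
      pipe.fit(X_train, y_train)
      print("test accuracy:", pipe.score(X_test, y_test))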
  • 5
    Feast (Tecton)

    Make your offline data available for real-time predictions without having to build custom pipelines. Ensure data consistency between offline training and online inference, eliminating train-serve skew. Standardize data engineering workflows under one consistent framework. Teams use Feast as the foundation of their internal ML platforms. Feast doesn’t require the deployment and management of dedicated infrastructure; instead, it reuses existing infrastructure and spins up new resources when needed. Feast is a good fit if you are not looking for a managed solution and are willing to manage and maintain your own implementation, you have engineers able to support the implementation and management of Feast, you want to run pipelines that transform raw data into features in a separate system and integrate with it, or you have unique requirements and want to build on top of an open source solution.
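    A minimal sketch of reading features from Feast at serving time, assuming a feature repository has already been configured and applied; the feature view, feature, and entity names below are hypothetical placeholders borrowed from the Feast quickstart.
      from feast import FeatureStore

      # Assumes a feature_store.yaml and applied feature definitions in the current directory.
      store = FeatureStore(repo_path=".")

      # Fetch online features for a single entity; names are illustrative only.
      features = store.get_online_features(
          features=[
              "driver_hourly_stats:conv_rate",
              "driver_hourly_stats:avg_daily_trips",
          ],
          entity_rows=[{"driver_id": 1001}],
      ).to_dict()
      print(features)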
  • 6
    FinetuneFast

    FinetuneFast is your ultimate solution for finetuning AI models and deploying them quickly to start making money online with ease. Key features that make FinetuneFast stand out:
    - Finetune your ML models in days, not weeks
    - The ultimate ML boilerplate for text-to-image, LLMs, and more
    - Build your first AI app and start earning online fast
    - Pre-configured training scripts for efficient model training
    - Efficient data loading pipelines for streamlined data processing
    - Hyperparameter optimization tools for improved model performance
    - Multi-GPU support out of the box for enhanced processing power
    - No-code AI model finetuning for easy customization
    - One-click model deployment for quick and hassle-free deployment
    - Auto-scaling infrastructure for seamless scaling as your models grow
    - API endpoint generation for easy integration with other systems
    - Monitoring and logging setup for real-time performance tracking
  • 7
    IBM Watson Machine Learning
    IBM Watson Machine Learning is a full-service IBM Cloud offering that makes it easy for developers and data scientists to work together to integrate predictive capabilities with their applications. The Machine Learning service is a set of REST APIs that you can call from any programming language to develop applications that make smarter decisions, solve tough problems, and improve user outcomes. Take advantage of machine learning model management (continuous learning system) and deployment (online, batch, streaming). Select any of the widely supported machine learning frameworks: TensorFlow, Keras, Caffe, PyTorch, Spark MLlib, scikit-learn, XGBoost, and SPSS. Use the command-line interface and Python client to manage your artifacts. Extend your application with artificial intelligence through the Watson Machine Learning REST API.
    Starting Price: $0.575 per hour
  • 8
    NVIDIA Triton Inference Server
    NVIDIA Triton™ Inference Server delivers fast and scalable AI in production. Open-source inference serving software, Triton Inference Server streamlines AI inference by enabling teams to deploy trained AI models from any framework (TensorFlow, NVIDIA TensorRT®, PyTorch, ONNX, XGBoost, Python, custom, and more) on any GPU- or CPU-based infrastructure (cloud, data center, or edge). Triton runs models concurrently on GPUs to maximize throughput and utilization, supports x86 and ARM CPU-based inferencing, and offers features like dynamic batching, model analyzer, model ensembles, and audio streaming. Triton helps developers deliver high-performance inference. It integrates with Kubernetes for orchestration and scaling, exports Prometheus metrics for monitoring, supports live model updates, and can be used in all major public cloud machine learning (ML) and managed Kubernetes platforms. Triton helps standardize model deployment in production.
    Starting Price: Free
  • 9
    Xilinx

    Xilinx’s AI development platform for AI inference on Xilinx hardware platforms consists of optimized IP, tools, libraries, models, and example designs. It is designed with high efficiency and ease of use in mind, unleashing the full potential of AI acceleration on Xilinx FPGAs and ACAPs. It supports mainstream frameworks and the latest models capable of diverse deep learning tasks, and provides a comprehensive set of pre-optimized models that are ready to deploy on Xilinx devices. You can find the closest model and start re-training it for your application. It provides a powerful open source quantizer that supports pruned and unpruned model quantization, calibration, and fine-tuning. The AI profiler provides layer-by-layer analysis to help find bottlenecks. The AI library offers open source high-level C++ and Python APIs for maximum portability from edge to cloud. Efficient and scalable IP cores can be customized to meet the needs of many different applications.
  • 10
    IBM Watson Studio
    Build, run and manage AI models, and optimize decisions at scale across any cloud. IBM Watson Studio empowers you to operationalize AI anywhere as part of IBM Cloud Pak® for Data, the IBM data and AI platform. Unite teams, simplify AI lifecycle management and accelerate time to value with an open, flexible multicloud architecture. Automate AI lifecycles with ModelOps pipelines. Speed data science development with AutoAI. Prepare and build models visually and programmatically. Deploy and run models through one-click integration. Promote AI governance with fair, explainable AI. Drive better business outcomes by optimizing decisions. Use open source frameworks like PyTorch, TensorFlow and scikit-learn. Bring together the development tools, including popular IDEs, Jupyter notebooks, JupyterLab and CLIs, or languages such as Python, R and Scala. IBM Watson Studio helps you build and scale AI with trust and transparency by automating AI lifecycle management.
  • 11
    Weights & Biases

    Experiment tracking, hyperparameter optimization, model and dataset versioning with Weights & Biases (WandB). Track, compare, and visualize ML experiments with 5 lines of code. Add a few lines to your script, and each time you train a new version of your model, you'll see a new experiment stream live to your dashboard. Optimize models with our massively scalable hyperparameter search tool. Sweeps are lightweight, fast to set up, and plug in to your existing infrastructure for running models. Save every detail of your end-to-end machine learning pipeline — data preparation, data versioning, training, and evaluation. It's never been easier to share project updates. Quickly and easily implement experiment logging by adding just a few lines to your script and start logging results. Our lightweight integration works with any Python script. W&B Weave is here to help developers build and iterate on their AI applications with confidence.
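    The "few lines to your script" claim above corresponds roughly to the following sketch; the project name and the placeholder loss are illustrative stand-ins for a real training loop.
      import wandb

      # Start a run; config values are versioned alongside the logged metrics.
      run = wandb.init(project="demo-project", config={"learning_rate": 1e-3, "epochs": 5})

      for epoch in range(run.config["epochs"]):
          loss = 1.0 / (epoch + 1)  # placeholder metric standing in for a real training loss
          wandb.log({"epoch": epoch, "loss": loss})

      run.finish()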
  • 12
    Azure Machine Learning
    Accelerate the end-to-end machine learning lifecycle. Empower developers and data scientists with a wide range of productive experiences for building, training, and deploying machine learning models faster. Accelerate time to market and foster team collaboration with industry-leading MLOps (DevOps for machine learning). Innovate on a secure, trusted platform, designed for responsible ML. Productivity for all skill levels, with a code-first and drag-and-drop designer, and automated machine learning. Robust MLOps capabilities that integrate with existing DevOps processes and help manage the complete ML lifecycle. Responsible ML capabilities: understand models with interpretability and fairness, protect data with differential privacy and confidential computing, and control the ML lifecycle with audit trails and datasheets. Best-in-class support for open-source frameworks and languages including MLflow, Kubeflow, ONNX, PyTorch, TensorFlow, Python, and R.
  • 13
    Saturn Cloud

    Saturn Cloud

    Saturn Cloud

    Saturn Cloud is an award-winning ML platform for any cloud with 100,000+ users, including NVIDIA, CFA Institute, Snowflake, Flatiron School, Nestle, and more. It is an all-in-one solution for data science & ML development, deployment, and data pipelines in the cloud. Users can spin up a notebook with 4TB of RAM, add a GPU, connect to a distributed cluster of workers, build large language models, and more in a completely hosted environment. Data professionals can use their preferred languages, IDEs, and machine-learning libraries in Saturn Cloud. We offer full Git integration, shared custom images, and secure credential storage, making scaling and building your team in the cloud easy. We support the entire machine learning lifecycle from experimentation to production with features like jobs and deployments. These features and built-in tools are easily shareable within teams, so time is saved and work is reproducible.
    Starting Price: $0.005 per GB per hour
  • 14
    Wallaroo.AI

    Wallaroo facilitates the last mile of your machine learning journey, getting ML into your production environment to impact the bottom line with incredible speed and efficiency. Wallaroo is purpose-built from the ground up to be the easy way to deploy and manage ML in production, unlike Apache Spark or heavy-weight containers. Run ML with up to 80% lower cost and easily scale to more data, more models, and more complex models. Wallaroo is designed to enable data scientists to quickly and easily deploy their ML models against live data, whether to testing environments, staging, or production. Wallaroo supports the largest set of machine learning training frameworks possible. You’re free to focus on developing and iterating on your models while letting the platform take care of deployment and inference at speed and scale.
  • 15
    AWS Neuron (Amazon Web Services)

    AWS Neuron supports high-performance training on AWS Trainium-based Amazon Elastic Compute Cloud (Amazon EC2) Trn1 instances. For model deployment, it supports high-performance and low-latency inference on AWS Inferentia-based Amazon EC2 Inf1 instances and AWS Inferentia2-based Amazon EC2 Inf2 instances. With Neuron, you can use popular frameworks, such as TensorFlow and PyTorch, to optimally train and deploy machine learning (ML) models on Amazon EC2 Trn1, Inf1, and Inf2 instances with minimal code changes and without being tied to vendor-specific solutions. The AWS Neuron SDK, which supports the Inferentia and Trainium accelerators, is natively integrated with PyTorch and TensorFlow. This integration ensures that you can continue using your existing workflows in these popular frameworks and get started with only a few lines of code changes. For distributed model training, the Neuron SDK supports libraries such as Megatron-LM and PyTorch Fully Sharded Data Parallel (FSDP).
  • 16
    Google Deep Learning Containers
    Build your deep learning project quickly on Google Cloud: Quickly prototype with a portable and consistent environment for developing, testing, and deploying your AI applications with Deep Learning Containers. These Docker images use popular frameworks and are performance optimized, compatibility tested, and ready to deploy. Deep Learning Containers provide a consistent environment across Google Cloud services, making it easy to scale in the cloud or shift from on-premises. You have the flexibility to deploy on Google Kubernetes Engine (GKE), AI Platform, Cloud Run, Compute Engine, Kubernetes, and Docker Swarm.
  • 17
    Automaton AI

    With Automaton AI’s ADVIT, create, manage, and develop high-quality training data and DNN models all in one place. Optimize the data automatically and prepare it for each phase of the computer vision pipeline. Automate the data labeling processes and streamline data pipelines in-house. Manage structured and unstructured video/image/text datasets at runtime and perform automatic functions that refine your data in preparation for each step of the deep learning pipeline. After accurate data labeling and QA, you can train your own model. DNN training needs hyperparameter tuning, such as batch size, learning rate, etc. Apply optimization and transfer learning to trained models to increase accuracy. Post-training, take the model to production. ADVIT also does model versioning. Model development and accuracy parameters can be tracked at runtime. Increase model accuracy with a pre-trained DNN model for auto-labeling.
  • 18
    Hopsworks (Logical Clocks)

    Hopsworks is an open-source enterprise platform for the development and operation of machine learning (ML) pipelines at scale, built around the industry’s first feature store for ML. You can easily progress from data exploration and model development in Python, using Jupyter notebooks and conda, to running production-quality end-to-end ML pipelines, without having to learn how to manage a Kubernetes cluster. Hopsworks can ingest data from the data sources you use, whether they are in the cloud, on-premises, in IoT networks, or from your Industry 4.0 solution. Deploy on-premises on your own hardware or at your preferred cloud provider. Hopsworks provides the same user experience in the cloud or in the most secure of air-gapped deployments. Learn how to set up customized alerts in Hopsworks for different events that are triggered as part of the ingestion pipeline.
    Starting Price: $1 per month
  • 19
    Baidu AI Cloud Machine Learning (BML)
    Baidu AI Cloud Machine Learning (BML) is an end-to-end machine learning platform designed for enterprises and AI developers. Based on BML, users can accomplish one-stop data pre-processing, model training and evaluation, service deployment, and other work. The platform provides a high-performance cluster training environment, massive algorithm frameworks and model cases, as well as easy-to-operate prediction service tools, allowing users to focus on the model and algorithm and obtain excellent model and prediction results. The fully hosted interactive programming environment supports data processing and code debugging. The CPU instance lets users install third-party software libraries and customize the environment, ensuring flexibility.
  • 20
    Domino Enterprise MLOps Platform
    The Domino platform helps data science teams improve the speed, quality, and impact of data science at scale. Domino is open and flexible, empowering professional data scientists to use their preferred tools and infrastructure. Data science models get into production fast and are kept operating at peak performance with integrated workflows. Domino also delivers the security, governance and compliance that enterprises expect. The Self-Service Infrastructure Portal makes data science teams become more productive with easy access to their preferred tools, scalable compute, and diverse data sets. The Integrated Model Factory includes a workbench, model and app deployment, and integrated monitoring to rapidly experiment, deploy the best models in production, ensure optimal performance, and collaborate across the end-to-end data science lifecycle. The System of Record allows teams to easily find, reuse, reproduce, and build on any data science work to amplify innovation.
  • 21
    Datatron

    Datatron offers tools and features built from scratch, specifically to make machine learning in production work for you. Most teams discover that there’s more to it than just deploying models, which is already a very manual and time-consuming task. Datatron offers a single model governance and management platform for all of your ML, AI, and data science models in production. We help you automate, optimize, and accelerate your ML models to ensure that they are running smoothly and efficiently in production. Data scientists use a variety of frameworks to build the best models; we support anything you’d build a model with (e.g., TensorFlow, H2O, Scikit-Learn, and SAS). Explore models built and uploaded by your data science team, all from one centralized repository. Create a scalable model deployment in just a few clicks. Deploy models built using any language or framework. Make better decisions based on your model performance.
  • 22
    Roboflow

    Roboflow has everything you need to build and deploy computer vision models. Connect Roboflow at any step in your pipeline with APIs and SDKs, or use the end-to-end interface to automate the entire process from image to inference. Whether you’re in need of data labeling, model training, or model deployment, Roboflow gives you building blocks to bring custom computer vision solutions to your business.
    Starting Price: $250/month
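    A sketch of connecting to Roboflow from Python via its SDK, assuming the roboflow package and an existing workspace, project, and trained model version; the API key, workspace, project name, and image path below are placeholders.
      from roboflow import Roboflow

      # Placeholder credentials and identifiers.
      rf = Roboflow(api_key="YOUR_API_KEY")
      project = rf.workspace("your-workspace").project("your-project")
      model = project.version(1).model

      # Run hosted inference on a local image and inspect the predictions.
      result = model.predict("example.jpg", confidence=40, overlap=30).json()
      print(result["predictions"])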
  • 23
    Torch

    Torch is a scientific computing framework with wide support for machine learning algorithms that puts GPUs first. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation. The goal of Torch is to have maximum flexibility and speed in building your scientific algorithms while making the process extremely simple. Torch comes with a large ecosystem of community-driven packages in machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking among others, and builds on top of the Lua community. At the heart of Torch are the popular neural network and optimization libraries which are simple to use, while having maximum flexibility in implementing complex neural network topologies. You can build arbitrary graphs of neural networks, and parallelize them over CPUs and GPUs in an efficient manner.
  • 24
    Valohai

    Models are temporary, pipelines are forever. Train, evaluate, deploy, repeat. Valohai is the only MLOps platform that automates everything from data extraction to model deployment. Store every single model, experiment, and artifact automatically. Deploy and monitor models in a managed Kubernetes cluster. Point to your code and data and hit run; Valohai launches workers, runs your experiments, and shuts down the instances for you. Develop through notebooks, scripts, or shared Git projects in any language or framework. Expand endlessly through our open API. Automatically track each experiment and trace back from inference to the original training data, with everything fully auditable and shareable.
    Starting Price: $560 per month
  • 25
    Chalk

    Powerful data engineering workflows, without the infrastructure headaches. Complex streaming, scheduling, and data backfill pipelines are all defined in simple, composable Python. Make ETL a thing of the past; fetch all of your data in real-time, no matter how complex. Incorporate deep learning and LLMs into decisions alongside structured business data. Make better predictions with fresher data, don’t pay vendors to pre-fetch data you don’t use, and query data just in time for online predictions. Experiment in Jupyter, then deploy to production. Prevent train-serve skew and create new data workflows in milliseconds. Instantly monitor all of your data workflows in real-time; track usage and data quality effortlessly. Know everything you computed and replay any data. Integrate with the tools you already use and deploy to your own infrastructure. Decide and enforce withdrawal limits with custom hold times.
    Starting Price: Free
  • 26
    Dataiku DSS
    Bring data analysts, engineers, and scientists together. Enable self-service analytics and operationalize machine learning. Get results today and build for tomorrow. Dataiku DSS is the collaborative data science software platform for teams of data scientists, data analysts, and engineers to explore, prototype, build, and deliver their own data products more efficiently. Use notebooks (Python, R, Spark, Scala, Hive, etc.) or a customizable drag-and-drop visual interface at any step of the predictive dataflow prototyping process – from wrangling to analysis to modeling. Profile the data visually at every step of the analysis. Interactively explore and chart your data using 25+ built-in charts. Prepare, enrich, blend, and clean data using 80+ built-in functions. Leverage Machine Learning technologies (Scikit-Learn, MLlib, TensorFlow, Keras, etc.) in a visual UI. Build & optimize models in Python or R and integrate any external ML library through code APIs.
  • 27
    Dagster+ (Dagster Labs)

    Dagster is a next-generation orchestration platform for the development, production, and observation of data assets. Unlike other data orchestration solutions, Dagster provides you with an end-to-end development lifecycle. Dagster gives you control over your disparate data tools and empowers you to build, test, deploy, run, and iterate on your data pipelines. It makes you and your data teams more productive, your operations more robust, and puts you in complete control of your data processes as you scale. Dagster brings a declarative approach to the engineering of data pipelines. Your team defines the data assets required, quickly assessing their status and resolving any discrepancies. An assets-based model is clearer than a tasks-based one and becomes a unifying abstraction across the whole workflow.
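    A minimal sketch of the declarative, asset-based model described above, using Dagster's @asset decorator; the asset names are illustrative and the in-process materialization is only for local experimentation.
      from dagster import asset, materialize

      @asset
      def raw_numbers():
          # An asset declares a data artifact rather than an imperative task.
          return [1, 2, 3, 4]

      @asset
      def summed_numbers(raw_numbers):
          # The dependency on raw_numbers is inferred from the parameter name.
          return sum(raw_numbers)

      if __name__ == "__main__":
          # Materialize both assets in-process; a Dagster deployment would schedule this.
          result = materialize([raw_numbers, summed_numbers])
          print("success:", result.success)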
  • 28
    Vaex

    At Vaex.io we aim to democratize big data and make it available to anyone, on any machine, at any scale. Cut development time by 80%; your prototype is your solution. Create automatic pipelines for any model. Empower your data scientists. Turn any laptop into a big data powerhouse, no clusters, no engineers. We provide reliable and fast data-driven solutions. With our state-of-the-art technology we build and deploy machine learning models faster than anyone on the market. Turn your data scientists into big data engineers. We provide comprehensive training of your employees, enabling you to take full advantage of our technology. Vaex combines memory mapping, a sophisticated expression system, and fast out-of-core algorithms. Efficiently visualize and explore big datasets, and build machine learning models on a single machine.
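    A small sketch of the memory-mapped, out-of-core style of work described above, using the sample dataset that ships with the vaex package; the derived column name is arbitrary.
      import vaex

      # Open the bundled example dataset; data is memory-mapped, not loaded into RAM.
      df = vaex.example()

      # Virtual column: an expression evaluated lazily, no copy of the data is made.
      df["r"] = (df.x**2 + df.y**2) ** 0.5

      # Out-of-core aggregation over the full dataset.
      print(len(df), df.mean(df.r))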
  • 29
    Scale GenAI Platform
    Build, test, and optimize Generative AI applications that unlock the value of your data. Optimize LLM performance for your domain-specific use cases with our advanced retrieval augmented generation (RAG) pipelines, state-of-the-art test and evaluation platform, and our industry-leading ML expertise. We help deliver value from AI investments faster with better data by providing an end-to-end solution to manage the entire ML lifecycle. Combining cutting edge technology with operational excellence, we help teams develop the highest-quality datasets because better data leads to better AI.
  • 30
    Keepsake (Replicate)

    Keepsake is an open-source Python library designed to provide version control for machine learning experiments and models. It enables users to automatically track code, hyperparameters, training data, model weights, metrics, and Python dependencies, ensuring that all aspects of the machine learning workflow are recorded and reproducible. Keepsake integrates seamlessly with existing workflows by requiring minimal code additions, allowing users to continue training as usual while Keepsake saves code and weights to Amazon S3 or Google Cloud Storage. This facilitates the retrieval of code and weights from any checkpoint, aiding in re-training or model deployment. Keepsake supports various machine learning frameworks, including TensorFlow, PyTorch, scikit-learn, and XGBoost, by saving files and dictionaries in a straightforward manner. It also offers features such as experiment comparison, enabling users to analyze differences in parameters, metrics, and dependencies across experiments.
    Starting Price: Free
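    A minimal sketch of how Keepsake might track an experiment, assuming the keepsake Python package as documented; the parameters and the placeholder loss are illustrative rather than a real training loop.
      import keepsake

      def train():
          # Records code, params, and Python dependencies for this experiment.
          experiment = keepsake.init(path=".", params={"learning_rate": 0.01, "epochs": 3})
          for epoch in range(3):
              loss = 1.0 / (epoch + 1)  # placeholder for a real training loss
              # Each checkpoint snapshots metrics (and optionally files) to the configured storage.
              experiment.checkpoint(step=epoch, metrics={"loss": loss},
                                    primary_metric=("loss", "minimize"))

      if __name__ == "__main__":
          train()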
  • 31
    DeepNLP (SparkCognition)

    SparkCognition, a leading industrial AI company, has developed a natural language processing solution that automates workflows of unstructured data within organizations so humans can focus on high-value business decisions. The DeepNLP product uses advanced machine learning techniques to automate the retrieval of information, the classification of documents, and content analytics. The DeepNLP product integrates into existing workflows to enable organizations to better respond to changes in their business and quickly get answers to specific queries or analytics that support decision-making.
  • 32
    UnionML (Union)

    Creating ML apps should be simple and frictionless. UnionML is an open-source Python framework built on top of Flyte™, unifying the complex ecosystem of ML tools into a single interface. Combine the tools that you love using a simple, standardized API so you can stop writing so much boilerplate and focus on what matters: the data and the models that learn from them. Fit the rich ecosystem of tools and frameworks into a common protocol for machine learning. Using industry-standard machine learning methods, implement endpoints for fetching data, training models, serving predictions (and much more) to write a complete ML stack in one place. Data science, ML engineering, and MLOps practitioners can all gather around UnionML apps as a way of defining a single source of truth about your ML system’s behavior.
  • 33
    ONNX

    ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers. Develop in your preferred framework without worrying about downstream inferencing implications. ONNX enables you to use your preferred framework with your chosen inference engine. ONNX makes it easier to access hardware optimizations. Use ONNX-compatible runtimes and libraries designed to maximize performance across hardware. Our active community thrives under our open governance structure, which provides transparency and inclusion. We encourage you to engage and contribute.
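    To make the interchange idea concrete, here is a sketch that exports a small PyTorch model to ONNX and runs it with ONNX Runtime (one of several ONNX-compatible runtimes); the model architecture and file name are arbitrary.
      import numpy as np
      import torch
      import torch.nn as nn
      import onnxruntime as ort

      # Define (or train) a model in the framework of your choice; PyTorch here.
      model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
      dummy_input = torch.randn(1, 4)

      # Export the model to the common ONNX format.
      torch.onnx.export(model, dummy_input, "model.onnx",
                        input_names=["input"], output_names=["output"])

      # Run the exported graph with an ONNX-compatible inference engine.
      session = ort.InferenceSession("model.onnx")
      outputs = session.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})
      print(outputs[0].shape)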
  • 34
    Luminoso (Luminoso Technologies Inc.)

    Luminoso turns unstructured text data into business-critical insights. Using common-sense artificial intelligence to understand language, we empower organizations to discover, interpret, and act on what people are telling them. Requiring little setup, maintenance, training, or data input, Luminoso combines world-leading natural language understanding technology with a vast knowledge base to learn words from context – like humans do – and accurately analyze text in minutes, not months. Our software provides native support in over a dozen languages, so leaders can explore relationships in data, make sense of feedback, and triage inquiries to drive value, fast. Luminoso is privately held and headquartered in Boston, MA.
    Starting Price: $1250/month
  • 35
    Metacoder (Wazoo Mobile Technologies LLC)

    Metacoder makes processing data faster and easier. Metacoder gives analysts the flexibility and tools they need to facilitate data analysis. Data preparation steps such as cleaning are managed, reducing the manual inspection time required before you are up and running. Compared to alternatives, Metacoder is in good company; Metacoder beats similar companies on price, and our management proactively develops based on our customers' valuable feedback. Metacoder is used primarily to assist predictive analytics professionals in their job. We offer interfaces for database integrations, data cleaning, preprocessing, modeling, and display/interpretation of results. We help organizations distribute their work transparently by enabling model sharing, and we make the machine learning pipeline easy to manage and tweak. Soon we will be including code-free solutions for image, audio, video, and biomedical data.
    Starting Price: $89 per user/month
  • 36
    Pachyderm

    Pachyderm’s Data Versioning gives teams an automated and performant way to keep track of all data changes. File-based versioning provides a complete audit trail for all data and artifacts across pipeline stages, including intermediate results. Stored as native objects (not metadata pointers) so that versioning is automated and guaranteed. Autoscale with parallel processing of data without writing additional code. Incremental processing saves compute by only processing differences and automatically skipping duplicate data. Pachyderm’s Global IDs make it easy for teams to track any result all the way back to its raw input, including all analysis, parameters, code, and intermediate results. The Pachyderm Console provides an intuitive visualization of your DAG (directed acyclic graph), and aids in reproducibility with Global IDs.
  • 37
    ML Kit (Google)

    ML Kit brings Google’s machine learning expertise to mobile developers in a powerful and easy-to-use package. Make your iOS and Android apps more engaging, personalized, and helpful with solutions that are optimized to run on device. ML Kit’s processing happens on-device. This makes it fast and unlocks real-time use cases like processing of camera input. It also works while offline and can be used for processing images and text that need to remain on the device. Take advantage of the machine learning technologies that power Google's own experiences on mobile. We combine best-in-class machine learning models with advanced processing pipelines and offer these through easy-to-use APIs to enable powerful use cases in your apps. Recognizes handwritten text and hand-drawn shapes on a digital surface, such as a touch screen. Recognizes 300+ languages, emojis, and basic shapes.
  • 38
    Google Colab
    Colaboratory, or "Colab" for short, allows you to write and execute Python in your browser, with:
    - Zero configuration required
    - Free access to GPUs
    - Easy sharing
    Whether you're a student, a data scientist, or an AI researcher, Colab can make your work easier. Colab notebooks allow you to combine executable code and rich text in a single document, along with images, HTML, LaTeX, and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them.
  • 39
    Synthesis AI

    A synthetic data platform for ML engineers to enable the development of more capable AI models. Simple APIs provide on-demand generation of perfectly-labeled, diverse, and photoreal images. Highly-scalable cloud-based generation platform delivers millions of perfectly labeled images. On-demand data enables new data-centric approaches to develop more performant models. An expanded set of pixel-perfect labels including segmentation maps, dense 2D/3D landmarks, depth maps, surface normals, and much more. Rapidly design, test, and refine your products before building hardware. Prototype different imaging modalities, camera placements, and lens types to optimize your system. Reduce bias in your models associated with misbalanced data sets while preserving privacy. Ensure equal representation across identities, facial attributes, pose, camera, lighting, and much more. We have worked with world-class customers across many use cases.
  • 40
    Deep Talk

    Deep Talk is the fastest way to transform text from chats, emails, surveys, reviews, and social networks into real business intelligence. Understand what's inside communications with customers with our easy-to-use AI platform. Unsupervised deep learning models analyze your unstructured text data. Deepers are pre-trained deep learning models for custom detections inside your data. Use the "Deepers" API to analyze text in real time and tag text or conversations. Reach the people who need a product, request a new feature, or express a complaint. Deep Talk offers cloud-based deep learning models as a service. You just need to upload your data or integrate one of the supported services to extract all the insights and information from WhatsApp, chat conversations, emails, surveys, or social networks.
    Starting Price: $90 per month
  • 41
    Google Cloud Inference API
    Time-series analysis is essential for the day-to-day operation of many companies. The most popular use cases include analyzing foot traffic and conversion for retailers, detecting data anomalies, identifying correlations in real-time over sensor data, and generating high-quality recommendations. With Cloud Inference API Alpha, you can gather insights in real-time from your typed time-series datasets. Get everything you need to understand your API query results, such as the groups of events that were examined, the number of groups of events, and the background probability of each returned event. Stream data in real-time, making it possible to compute correlations for real-time events. Rely on Google Cloud’s end-to-end infrastructure and defense-in-depth approach to security that has been innovated on for over 15 years through consumer apps. At its core, Cloud Inference API is fully integrated with other Google Cloud Storage services.
  • 42
    Seldon (Seldon Technologies)

    Deploy machine learning models at scale with more accuracy. Turn R&D into ROI with more models in production at scale, faster, and with increased accuracy. Seldon reduces time-to-value so models can get to work faster. Scale with confidence and minimize risk through interpretable results and transparent model performance. Seldon Deploy reduces the time to production by providing production-grade inference servers optimized for popular ML frameworks, or custom language wrappers to fit your use cases. Seldon Core Enterprise provides access to cutting-edge, globally tested and trusted open source MLOps software with the reassurance of enterprise-level support. Seldon Core Enterprise is for organizations requiring:
    - Coverage across any number of ML models deployed, plus unlimited users
    - Additional assurances for models in staging and production
    - Confidence that their ML model deployments are supported and protected
  • 43
    FortressIQ (Automation Anywhere)

    FortressIQ enables enterprises to decode work, transform experiences, and enhance workflows with the industry’s most advanced process intelligence platform. Using innovative computer vision and artificial intelligence, FortressIQ delivers unprecedented process insights, extremely fast, and with detail and accuracy unattainable with traditional methods. The platform autonomously acquires process data at scale even as processes extend across systems, empowering enterprises to understand, monitor, and improve operations, employee and customer experiences, and every business process. FortressIQ was founded in 2017, and is backed by Lightspeed Venture Partners, Boldstart Ventures, Comcast Ventures, Eniac Ventures, M12 and Tiger Global. Pinpoint inefficiencies and process variations continuously and automatically to reveal optimal process paths and reduce time to automation.
  • 44
    Butler

    Butler is a platform that helps developers turn AI into easy-to-use APIs. Create, train, and deploy AI models in minutes, no AI experience required. Use Butler’s easy-to-use interface to build a comprehensive labeled data set. Forget about painful labeling exercises. Butler automatically chooses and trains the correct ML model for your use case, so there is no need to spend hours analyzing which models perform best. With a library of features to customize, Butler enables you to tune your model to your exact requirements. Stop spending time wrestling with rigid predefined models or building homegrown custom solutions. Parse key data fields and tables from any unstructured document or image. Free your users from manual data entry with lightning-fast document parsing APIs. Extract information from free-form text like names, places, terms, and any other custom data. Make your product understand your users the same way you do.
  • 45
    Launchable

    You can have the best developers in the world, but every test is making them slower. 80% of your software tests are pointless. The problem is you don't know which 80%. We find the right 20% using your data so that you can ship faster. We have shrink-wrapped predictive test selection, a machine learning-based approach being used at companies like Facebook so that it can be used by any company. We support multiple languages, test runners, and CI systems. Just bring Git to the table. Launchable uses machine learning to analyze your test failures and source code. It doesn't rely on code syntax analysis. This means it's trivial for Launchable to add support for almost any file-based programming language. It also means we can scale across teams and projects with different languages and tools. Out of the box, we currently support Python, Ruby, Java, JavaScript, Go, C, and C++, and we regularly add support for new languages.
  • 46
    Orange (University of Ljubljana)

    Open source machine learning and data visualization. Build data analysis workflows visually, with a large, diverse toolbox. Perform simple data analysis with clever data visualization. Explore statistical distributions, box plots and scatter plots, or dive deeper with decision trees, hierarchical clustering, heatmaps, MDS and linear projections. Even your multidimensional data can become sensible in 2D, especially with clever attribute ranking and selections. Interactive data exploration for rapid qualitative analysis with clean visualizations. Graphic user interface allows you to focus on exploratory data analysis instead of coding, while clever defaults make fast prototyping of a data analysis workflow extremely easy. Place widgets on the canvas, connect them, load your datasets and harvest the insight! When teaching data mining, we like to illustrate rather than only explain. And Orange is great at that.
  • 47
    Oracle Data Science
    A data science platform that improves productivity with unparalleled abilities. Build and evaluate higher-quality machine learning (ML) models. Increase business flexibility by putting enterprise-trusted data to work quickly and support data-driven business objectives with easier deployment of ML models. Use cloud-based platforms to discover new business insights. Building a machine learning model is an iterative process; in this ebook, we break down the process and describe how machine learning models are built. Explore notebooks and build or test machine learning algorithms. Try AutoML and see data science results. Build high-quality models faster and easier. Automated machine learning capabilities rapidly examine the data and recommend the optimal data features and best algorithms. Additionally, automated machine learning tunes the model and explains the model’s results.
  • 48
    Graviti

    Unstructured data is the future of AI. Unlock this future now and build an ML/AI pipeline that scales all of your unstructured data in one place. Use better data to deliver better models, only with Graviti. Get to know the data platform that enables AI developers with management, query, and version control features that are designed for unstructured data. Quality data is no longer a pricey dream. Manage your metadata, annotation, and predictions in one place. Customize filters and visualize filtering results to get you straight to the data that best match your needs. Utilize a Git-like structure to manage data versions and collaborate with your teammates. Role-based access control and visualization of version differences allows your team to work together safely and flexibly. Automate your data pipeline with Graviti’s built-in marketplace and workflow builder. Level-up to fast model iterations with no more grinding.
  • 49
    Kraken (Big Squid)

    Kraken is for everyone from analysts to data scientists. Built to be the easiest-to-use, no-code automated machine learning platform. The Kraken no-code automated machine learning (AutoML) platform simplifies and automates data science tasks like data prep, data cleaning, algorithm selection, model training, and model deployment. Kraken was built with analysts and engineers in mind. If you've done data analysis before, you're ready! Kraken's no-code, easy-to-use interface and integrated SONAR© training make it easy to become a citizen data scientist. Advanced features allow data scientists to work faster and more efficiently. Whether you use Excel or flat files for day-to-day reporting or just ad-hoc analysis and exports, drag-and-drop CSV upload and the Amazon S3 connector in Kraken make it easy to start building models with a few clicks. Data Connectors in Kraken allow you to connect to your favorite data warehouse, business intelligence tools, and cloud storage.
    Starting Price: $100 per month
  • 50
    Scribble Data

    Scribble Data empowers organizations to enrich their raw data and easily transform it to enable reliable and fast decision-making for persistent business problems. Data-driven decision support for your business. A data-to-decision platform that helps you generate high-fidelity insights to automate decision-making. Solve your persistent business decision-making problems instantly with advanced analytics powered by machine learning. Rest easy and focus your energy on critical tasks, while Enrich does the heavy lifting to ensure the availability of reliable and trustworthy data for decision-making. Leverage customized data-driven workflows for easy consumption of data, and reduce your dependence on data science and machine learning engineering teams. Go from concept to operational data product in a few weeks, not months with feature engineering capabilities that can prepare high volume and high complexity data at scale.