Alternatives to Nextflow

Compare Nextflow alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Nextflow in 2024. Compare features, ratings, user reviews, pricing, and more from Nextflow competitors and alternatives in order to make an informed decision for your business.

  • 1
    Portainer Business
    Portainer is an intuitive container management platform for Docker, Kubernetes, and Edge-based environments. With a smart UI, Portainer enables you to build, deploy, manage, and secure your containerized environments with ease. It makes container adoption easier for the whole team and reduces time-to-value on Kubernetes and Docker/Swarm. With a simple GUI and a comprehensive API, the product makes it easy for engineers to deploy and manage container-based apps, triage issues, automate CI/CD workflows and set up CaaS (container-as-a-service) environments regardless of hosting environment or K8s distro. Portainer Business is designed to be used in a team environment with multiple users and clusters. The product includes a range of security features, including RBAC, OAuth integration, and logging - making it suitable for use in complex production environments. Portainer also allows you to set up GitOps automation for deployment of your apps to Docker and K8s based on Git repos.
  • 2
    Rivery
    Rivery’s SaaS ETL platform provides a fully-managed solution for data ingestion, transformation, orchestration, reverse ETL and more, with built-in support for your development and deployment lifecycles. Key Features: Data Workflow Templates: Extensive library of pre-built templates that enable teams to instantly create powerful data pipelines with the click of a button. Fully managed: No-code, auto-scalable, and hassle-free platform. Rivery takes care of the back end, allowing teams to spend time on priorities rather than maintenance. Multiple Environments: Construct and clone custom environments for specific teams or projects. Reverse ETL: Automatically send data from cloud warehouses to business applications, marketing clouds, CDPs, and more.
    Starting Price: $0.75 Per Credit
  • 3
    Amazon ECS
    Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. Customers such as Duolingo, Samsung, GE, and Cookpad use ECS to run their most sensitive and mission-critical applications because of its security, reliability, and scalability. ECS is a great choice to run containers for several reasons. First, you can choose to run your ECS clusters using AWS Fargate, which is serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Second, ECS is used extensively within Amazon to power services such as Amazon SageMaker, AWS Batch, Amazon Lex, and Amazon.com’s recommendation engine, ensuring ECS is tested extensively for security, reliability, and availability.
  • 4
    Google Cloud Run
    Cloud Run is a fully managed compute platform that lets you run your code in a container directly on top of Google's scalable infrastructure. We’ve intentionally designed Cloud Run to make developers more productive: you get to focus on writing your code, using your favorite language, and Cloud Run takes care of operating your service. Deploy and scale containerized applications quickly and securely. Write code your way using your favorite languages (Go, Python, Java, Ruby, Node.js, and more), with your favorite dependencies and tools, and deploy in seconds. Cloud Run abstracts away all infrastructure management by automatically scaling up and down from zero almost instantaneously, depending on traffic, and only charges you for the exact resources you use. Cloud Run makes app development and deployment simpler.
  • 5
    Amazon EKS
    Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service. Customers such as Intel, Snap, Intuit, GoDaddy, and Autodesk trust EKS to run their most sensitive and mission-critical applications because of its security, reliability, and scalability. EKS is the best place to run Kubernetes for several reasons. First, you can choose to run your EKS clusters using AWS Fargate, which is serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Second, EKS is deeply integrated with services such as Amazon CloudWatch, Auto Scaling Groups, AWS Identity and Access Management (IAM), and Amazon Virtual Private Cloud (VPC), providing you a seamless experience to monitor, scale, and load-balance your applications.
  • 6
    Kubernetes
    Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community. Designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your ops team. Whether testing locally or running a global enterprise, Kubernetes' flexibility grows with you to deliver your applications consistently and easily no matter how complex your needs are. Kubernetes is open source, giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure and letting you effortlessly move workloads to where it matters to you.
  • 7
    Apache Airflow (The Apache Software Foundation)
    Airflow is a platform created by the community to programmatically author, schedule and monitor workflows. Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers. Airflow is ready to scale to infinity. Airflow pipelines are defined in Python, allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically. Easily define your own operators and extend libraries to fit the level of abstraction that suits your environment. Airflow pipelines are lean and explicit. Parametrization is built into its core using the powerful Jinja templating engine. No more command-line or XML black-magic! Use standard Python features to create your workflows, including date time formats for scheduling and loops to dynamically generate tasks. This allows you to maintain full flexibility when building your workflows.
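    The dynamic pipeline generation described above can be sketched in a minimal DAG file. The DAG id, table names, and schedule below are hypothetical, and the imports assume Airflow 2.x; this is an illustration, not an official example:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def load(table: str) -> None:
    # Placeholder task body; a real pipeline would extract/load here.
    print(f"loading {table}")


with DAG(
    dag_id="example_etl",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # A standard Python loop generates one task per table dynamically,
    # as the description above mentions.
    for table in ["users", "orders", "events"]:
        PythonOperator(
            task_id=f"load_{table}",
            python_callable=load,
            op_kwargs={"table": table},
        )
```

    Dropping a file like this into the DAGs folder is enough for the scheduler to pick it up; no XML or command-line registration is involved.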
  • 8
    harpoon
    harpoon is a drag-and-drop Kubernetes tool for deploying any software in seconds. Whether you're new to Kubernetes or are looking for the best way to learn, harpoon has all the features you need to be successful in deploying and configuring your software using the industry-leading container orchestrator, all with no code. Our visual Kubernetes interface enables anyone to deploy production-grade software with no code. Easily accomplish simple or complex enterprise-grade cloud deployments to deploy and configure software and autoscale Kubernetes without writing any code or configuration scripts. Instantly search for and find any piece of commercial or open source software on the planet and deploy it to the cloud with one click. Before running any applications or services, harpoon will run automated scripts that will secure your cloud provider account. Connect harpoon to your source code repository anywhere and set up an automated deployment pipeline.
    Starting Price: $50 per month
  • 9
    Conductor
    Conductor is a workflow orchestration engine that runs in the cloud. Conductor was built to help Netflix orchestrate microservices-based process flows with the following features: a distributed server ecosystem, which stores workflow state information efficiently; creation of process/business flows in which each individual task can be implemented by the same or different microservices; DAG (Directed Acyclic Graph) based workflow definitions, decoupled from the service implementations; visibility and traceability into these process flows; a simple interface to connect workers, which execute the tasks in workflows; language-agnostic workers, allowing each microservice to be written in the language most suited for the service; full operational control over workflows, with the ability to pause, resume, restart, retry and terminate; and greater reuse of existing microservices, providing an easier path for onboarding.
  • 10
    StreamScape
    Make use of Reactive Programming on the back-end without the need for specialized languages or cumbersome frameworks. Triggers, Actors and Event Collections make it easy to build data pipelines and work with data streams using simple SQL-like syntax, shielding users from the complexities of distributed system development. Extensible Data Modeling is a key feature that supports rich semantics and schema definition for representing real-world things. On-the-fly validation and data shaping rules support a variety of formats like XML and JSON, allowing you to easily describe and evolve your schema, keeping pace with changing business requirements. If you can describe it, we can query it. Know SQL and Javascript? Then you already know how to use the data engine. Whatever the format, a powerful query language lets you instantly test logic expressions and functions, speeding up development and simplifying deployment for unmatched data agility.
  • 11
    JFrog Pipelines
    JFrog Pipelines empowers software teams to ship updates faster by automating DevOps processes in a continuously streamlined and secure way across all their teams and tools. Encompassing continuous integration (CI), continuous delivery (CD), infrastructure and more, it automates everything from code to production. Pipelines is natively integrated with the JFrog Platform and is available with both cloud (software-as-a-service) and on-prem subscriptions. Scales horizontally, allowing you to have a centrally managed solution that supports thousands of users and pipelines in a high-availability (HA) environment. Pre-packaged declarative steps with no scripting required, making it easy to create complex pipelines, including cross-team “pipelines of pipelines.” Integrates with most DevOps tools. The steps in a single pipeline can run on multi-OS, multi-architecture nodes, reducing the need to have multiple CI/CD tools.
    Starting Price: $98/month
  • 12
    Apache Mesos (Apache Software Foundation)
    Mesos is built using the same principles as the Linux kernel, only at a different level of abstraction. The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenter and cloud environments. Native support for launching containers with Docker and AppC images. Support for running cloud native and legacy applications in the same cluster with pluggable scheduling policies. HTTP APIs for developing new distributed applications, for operating the cluster, and for monitoring. Built-in Web UI for viewing cluster state and navigating container sandboxes.
  • 13
    Nebula Container Orchestrator
    Nebula container orchestrator aims to help devs and ops treat IoT devices just like distributed Dockerized apps. Its aim is to act as a Docker orchestrator for IoT devices, as well as for distributed services such as CDNs or edge computing, that can span thousands (possibly even millions) of devices worldwide, and it does it all while being open source and completely free. Nebula is an open source project created for Docker orchestration and designed to manage massive clusters at scale; it achieves this by scaling each project component out as far as required. Nebula is capable of simultaneously updating tens of thousands of IoT devices worldwide with a single API call.
  • 14
    Strong Network
    Strong Network allows the provisioning and management of containers for DevOps online (as opposed to locally on developers' laptops, using a solution like Docker Desktop, for example) and enables accessing them through a cloud IDE or an SSH connection (in the case of a local IDE). These containers provide complete management of access keys and credentials for multiple types of resources, in addition to providing data loss prevention (DLP). We also combine the IDE with a secure Chrome browser (remote browser isolation) so that any third-party DevOps applications can be accessed with DLP. The platform is a complete replacement for VDI/DaaS for code development.
  • 15
    HashiCorp Nomad
    A simple and flexible workload orchestrator to deploy and manage containers and non-containerized applications across on-prem and clouds at scale. Single 35MB binary that integrates into existing infrastructure. Easy to operate on-prem or in the cloud with minimal overhead. Orchestrate applications of any type - not just containers. First class support for Docker, Windows, Java, VMs, and more. Bring orchestration benefits to existing services. Achieve zero downtime deployments, improved resilience, higher resource utilization, and more without containerization. Single command for multi-region, multi-cloud federation. Deploy applications globally to any region using Nomad as a single unified control plane. One single unified workflow for deploying to bare metal or cloud environments. Enable multi-cloud applications with ease. Nomad integrates seamlessly with Terraform, Consul and Vault for provisioning, service networking, and secrets management.
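    To make the "orchestrate applications of any type" claim concrete, a minimal Nomad job specification might look like the following sketch (the job, group, and task names and the container image are hypothetical, and the layout assumes the HCL job-spec format of recent Nomad versions):

```hcl
job "web" {
  datacenters = ["dc1"]

  group "frontend" {
    # Run two instances of the task for resilience.
    count = 2

    network {
      # Map a dynamic host port to container port 80.
      port "http" {
        to = 80
      }
    }

    task "nginx" {
      # Nomad also supports exec, java, and other drivers;
      # docker is used here for illustration.
      driver = "docker"

      config {
        image = "nginx:1.27"
        ports = ["http"]
      }
    }
  }
}
```

    A file like this would typically be submitted with `nomad job run`, after which Nomad schedules the allocations across the cluster.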
  • 16
    Nextflow Tower (Seqera Labs)
    Nextflow Tower is an intuitive centralized command post that enables large-scale collaborative data analysis. With Tower, users can easily launch, manage, and monitor scalable Nextflow data analysis pipelines and compute environments on-premises or on most clouds. Researchers can focus on the science that matters rather than worrying about infrastructure engineering. Compliance is simplified with predictable, auditable pipeline execution and the ability to reliably reproduce results obtained with specific data sets and pipeline versions on demand. Nextflow Tower is developed and supported by Seqera Labs, the creators and maintainers of the open-source Nextflow project. This means that users get high-quality support directly from the source. Unlike third-party frameworks that incorporate Nextflow, Tower is deeply integrated and can help users benefit from Nextflow's complete set of capabilities.
  • 17
    Canonical Juju
    Better operators for enterprise apps with a full application graph and declarative integration for both Kubernetes and legacy estate. Juju operator integration allows us to keep each operator as simple as possible, then compose them to create rich application graph topologies that support complex scenarios with a simple, consistent experience and much less YAML. The UNIX philosophy of ‘doing one thing well’ applies to large-scale operations code too, and the benefits of clarity and reuse are exactly the same. Small is beautiful. Juju allows you to adopt the operator pattern for your entire estate, including legacy apps. Model-driven operations dramatically reduce maintenance and operations costs for traditional workloads without re-platforming to K8s. Once charmed, legacy apps become multi-cloud ready, too. The Juju Operator Lifecycle Manager (OLM) uniquely supports both container and machine-based apps, with seamless integration between them.
  • 18
    Test Kitchen (KitchenCI)
    Test Kitchen provides a test harness to execute infrastructure code on one or more platforms in isolation. A driver plugin architecture is used to run code on various cloud providers and virtualization technologies such as Vagrant, Amazon EC2, Microsoft Azure, Google Compute Engine, Docker, and more. Many testing frameworks are supported out of the box, including Chef InSpec, Serverspec, and Bats. For Chef Infra workflows, cookbook dependency resolution via Berkshelf or Policyfiles is supported, or include a cookbooks/ directory and Kitchen will know what to do. Test Kitchen is used by all Chef-managed community cookbooks and is the integration testing tool of choice for cookbooks.
  • 19
    Pliant (Pliant.io)
    Pliant’s solution for IT Process Automation simplifies, streamlines, and secures how teams build and deploy automation. Reduce human error, ensure compliance, and elevate your efficiency with Pliant. Ingest existing automation and write new automation with single-pane orchestration. Ensure compliance using consistent, practical built-in governance. Pliant has abstracted thousands of vendor APIs to create intelligent action blocks, allowing users to drag and drop blocks rather than writing and rewriting lines of code. From a single platform, citizen developers are able to build consistent and meaningful automation across platforms, services, and applications in minutes — maximizing value across the entire technology stack in one place. With our ability to add new APIs in 15 business days, anything that is not already out of the box will be in an industry-leading timeframe.
  • 20
    GlassFlow
    GlassFlow is a serverless, event-driven data pipeline platform designed for Python developers. It enables users to build real-time data pipelines without the need for complex infrastructure like Kafka or Flink. By writing Python functions, developers can define data transformations, and GlassFlow manages the underlying infrastructure, offering auto-scaling, low latency, and optimal data retention. The platform supports integration with various data sources and destinations, including Google Pub/Sub, AWS Kinesis, and OpenAI, through its Python SDK and managed connectors. GlassFlow provides a low-code interface for quick pipeline setup, allowing users to create and deploy pipelines within minutes. It also offers features such as serverless function execution, real-time API connections, and alerting and reprocessing capabilities. The platform is designed to simplify the creation and management of event-driven data pipelines, making it accessible for Python developers.
    Starting Price: $350 per month
  • 21
    AWS Data Pipeline
    AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR. AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. You don’t have to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos.
    Starting Price: $1 per month
  • 22
    Dagster+ (Dagster Labs)
    Dagster is a next-generation orchestration platform for the development, production, and observation of data assets. Unlike other data orchestration solutions, Dagster provides you with an end-to-end development lifecycle. Dagster gives you control over your disparate data tools and empowers you to build, test, deploy, run, and iterate on your data pipelines. It makes you and your data teams more productive, your operations more robust, and puts you in complete control of your data processes as you scale. Dagster brings a declarative approach to the engineering of data pipelines. Your team defines the data assets required, quickly assessing their status and resolving any discrepancies. An assets-based model is clearer than a tasks-based one and becomes a unifying abstraction across the whole workflow.
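    The asset-based model described above can be sketched with Dagster's `@asset` decorator; the asset names and data below are illustrative only, and the snippet assumes a recent Dagster release is installed:

```python
from dagster import asset, materialize


@asset
def raw_orders() -> list[dict]:
    # In a real pipeline this would load from a source system.
    return [{"id": 1, "amount": 10}, {"id": 2, "amount": 25}]


@asset
def order_totals(raw_orders: list[dict]) -> int:
    # Dagster infers the dependency on raw_orders from the
    # parameter name, building the asset graph declaratively.
    return sum(o["amount"] for o in raw_orders)


if __name__ == "__main__":
    result = materialize([raw_orders, order_totals])
    assert result.success
```

    Because the graph is declared in terms of assets rather than tasks, tools can show which data is stale and rebuild only what is needed.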
  • 23
    Kestra
    Kestra is an open-source, event-driven orchestrator that simplifies data operations and improves collaboration between engineers and business users. By bringing Infrastructure as Code best practices to data pipelines, Kestra allows you to build reliable workflows and manage them with confidence. Thanks to the declarative YAML interface for defining orchestration logic, everyone who benefits from analytics can participate in the data pipeline creation process. The UI automatically adjusts the YAML definition any time you make changes to a workflow from the UI or via an API call. Therefore, the orchestration logic is defined declaratively in code, even if some workflow components are modified in other ways.
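    As an illustration of the declarative YAML interface described above, a minimal Kestra flow might look like the sketch below. The flow id, namespace, and task type are hypothetical examples (plugin class names vary across Kestra versions), not taken from the listing:

```yaml
id: hello_pipeline
namespace: company.team

tasks:
  # A single logging task; real flows would chain extract,
  # transform, and load tasks in the same declarative style.
  - id: say_hello
    type: io.kestra.plugin.core.log.Log
    message: Hello from Kestra
```

    Because the whole flow is plain YAML, it can be reviewed, versioned, and deployed like any other Infrastructure as Code artifact.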
  • 24
    Yandex Data Proc
    You select the size of the cluster, node capacity, and a set of services, and Yandex Data Proc automatically creates and configures Spark and Hadoop clusters and other components. Collaborate by using Zeppelin notebooks and other web apps via a UI proxy. You get full control of your cluster with root permissions for each VM. Install your own applications and libraries on running clusters without having to restart them. Yandex Data Proc uses instance groups to automatically increase or decrease computing resources of compute subclusters based on CPU usage indicators. Data Proc allows you to create managed Hive clusters, which can reduce the probability of failures and losses caused by metadata unavailability. Save time on building ETL pipelines and pipelines for training and developing models, as well as describing other iterative tasks. The Data Proc operator is already built into Apache Airflow.
    Starting Price: $0.19 per hour
  • 25
    Google Cloud Composer
    Cloud Composer's managed nature and Apache Airflow compatibility allows you to focus on authoring, scheduling, and monitoring your workflows as opposed to provisioning resources. End-to-end integration with Google Cloud products including BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub, and AI Platform gives users the freedom to fully orchestrate their pipeline. Author, schedule, and monitor your workflows through a single orchestration tool—whether your pipeline lives on-premises, in multiple clouds, or fully within Google Cloud. Ease your transition to the cloud or maintain a hybrid data environment by orchestrating workflows that cross between on-premises and the public cloud. Create workflows that connect data, processing, and services across clouds to give you a unified data environment.
    Starting Price: $0.074 per vCPU hour
  • 26
    Metrolink (Metrolink.ai)
    A high-performance unified platform that layers on any existing infrastructure for seamless onboarding. Metrolink’s intuitive design empowers any organization to govern its data integration, arming it with advanced manipulations aimed at maximizing diverse and complex data, refocusing human resources, and eliminating overhead. Organizations face diverse, complex, multi-source, streaming data with rapidly changing use cases, and spend much of their talent on data utilities, losing focus on the core business. Metrolink is a unified platform that allows organizations to design and manage their data pipelines according to their business requirements, providing an intuitive UI and advanced, high-performance manipulations on diverse and complex data, in a way that amplifies data value while supporting all data functions and data privacy in the organization.
  • 27
    Chalk
    Powerful data engineering workflows, without the infrastructure headaches. Complex streaming, scheduling, and data backfill pipelines are all defined in simple, composable Python. Make ETL a thing of the past, fetch all of your data in real-time, no matter how complex. Incorporate deep learning and LLMs into decisions alongside structured business data. Make better predictions with fresher data, don’t pay vendors to pre-fetch data you don’t use, and query data just in time for online predictions. Experiment in Jupyter, then deploy to production. Prevent train-serve skew and create new data workflows in milliseconds. Instantly monitor all of your data workflows in real-time; track usage and data quality effortlessly. Know everything you computed, and replay any data. Integrate with the tools you already use and deploy to your own infrastructure. Decide and enforce withdrawal limits with custom hold times.
    Starting Price: Free
  • 28
    Dropbase
    Centralize offline data, import files, process and clean up data. Export to a live database with 1 click. Streamline data workflows. Centralize offline data and make it accessible to your team. Bring offline files to Dropbase. Multiple formats. Any way you like. Process and format data. Add, edit, re-order, and delete processing steps. 1-click exports. Export to database, endpoints, or download code with 1 click. Instant REST API access. Query Dropbase data securely with REST API access keys. Onboard data where you need it. Combine and process datasets to fit the desired format or data model. No code. Process your data pipelines using a spreadsheet interface. Track every step. Flexible. Use a library of pre-built processing functions. Or write your own. 1-click exports. Export to database or generate endpoints with 1 click. Manage databases and credentials.
    Starting Price: $19.97 per user per month
  • 29
    TrueFoundry
    TrueFoundry is a cloud-native machine learning training and deployment PaaS on top of Kubernetes that enables machine learning teams to train and deploy models at the speed of Big Tech with 100% reliability and scalability, allowing them to save cost and release models to production faster. We abstract out Kubernetes for data scientists and enable them to operate in a way they are comfortable with. It also allows teams to deploy and fine-tune large language models seamlessly with full security and cost optimization. TrueFoundry is open-ended and API-driven, integrates with internal systems, deploys on a company's internal infrastructure, and ensures complete data privacy and DevSecOps practices.
    Starting Price: $5 per month
  • 30
    Centurion (New Relic)
    A deployment tool for Docker. Takes containers from a Docker registry and runs them on a fleet of hosts with the correct environment variables, host volume mappings, and port mappings. Supports rolling deployments out of the box, and makes it easy to ship applications to Docker servers. We're using it in our production infrastructure. Centurion works in a two part deployment process where the build process ships a container to the registry, and Centurion ships containers from the registry to the Docker fleet. Registry support is handled by the Docker command line tools directly so you can use anything they currently support via the normal registry mechanism. If you haven't been using a registry, you should read up on how to do that before trying to deploy anything with Centurion. This code is developed in the open with input from the community through issues and PRs. There is an active maintainer team within New Relic.
  • 31
    Apache Brooklyn (Apache Software Foundation)
    Your applications, any clouds, any containers, anywhere. Apache Brooklyn is software for managing cloud applications. Use it to: describe your application in blueprints stored as text files in version control; configure and integrate components across multiple machines automatically; deploy to 20+ public clouds, your private cloud, or bare servers, as well as Docker containers; monitor key application metrics; scale to meet demand; and restart and replace failed components. View and modify using the web console, or automate using the REST API.
  • 32
    HPE Ezmeral (Hewlett Packard Enterprise)
    Run, manage, control and secure the apps, data and IT that run your business, from edge to cloud. HPE Ezmeral advances digital transformation initiatives by shifting time and resources from IT operations to innovations. Modernize your apps. Simplify your Ops. And harness data to go from insights to impact. Accelerate time-to-value by deploying Kubernetes at scale with integrated persistent data storage for app modernization on bare metal or VMs, in your data center, on any cloud or at the edge. Harness data and get insights faster by operationalizing the end-to-end process to build data pipelines. Bring DevOps agility to the machine learning lifecycle, and deliver a unified data fabric. Boost efficiency and agility in IT Ops with automation and advanced artificial intelligence. And provide security and control to eliminate risk and reduce costs. HPE Ezmeral Container Platform provides an enterprise-grade platform to deploy Kubernetes at scale for a wide range of use cases.
  • 33
    Azure Container Instances
    Develop apps fast without managing virtual machines or having to learn new tools—it's just your application, in a container, running in the cloud. By running your workloads in Azure Container Instances (ACI), you can focus on designing and building your applications instead of managing the infrastructure that runs them. Deploy containers to the cloud with unprecedented simplicity and speed—with a single command. Use ACI to provision additional compute for demanding workloads whenever you need. For example, with the Virtual Kubelet, use ACI to elastically burst from your Azure Kubernetes Service (AKS) cluster when traffic comes in spikes. Gain the security of virtual machines for your container workloads, while preserving the efficiency of lightweight containers. ACI provides hypervisor isolation for each container group to ensure containers run in isolation without sharing a kernel.
  • 34
    Mirantis Kubernetes Engine
    Mirantis Kubernetes Engine (formerly Docker Enterprise) provides simple, flexible, and scalable container orchestration and enterprise container management. Use Kubernetes, Swarm, or both, and experience the fastest time to production for modern applications across any environment. Avoid lock-in: run Mirantis Kubernetes Engine on bare metal, or on private or public clouds, and on a range of popular Linux distributions. Reduce time-to-value: hit the ground running with out-of-the-box dependencies, including Calico for Kubernetes networking and NGINX for Ingress support. Leverage open source: save money and maintain control by using a full stack of open source-based technologies that are production-proven, scalable, and extensible. Focus on apps, not infrastructure: enable your IT team to focus on building business-differentiating applications when you couple Mirantis Kubernetes Engine with OpsCare Plus for a fully-managed K8s experience.
  • 35
    azk

    Azuki

    What’s so great about azk? azk is open source software (Apache 2.0) and will always be. azk is agnostic and has a very soft learning curve. Keep using the exact same development tools you already use. It only takes a few commands. Minutes instead of hours or days. azk does its magic by executing very short and simple recipe files (Azkfile.js) that describe the environments to be installed and configured. azk is fast and your machine will barely feel it. It uses containers instead of virtual machines. Containers are like virtual machines, only with better performance and lower consumption of physical resources. azk is built with Docker, the best open source engine for managing containers. Sharing an Azkfile.js assures total parity among development environments in different programmers' machines and reduces the chances of bugs during deployment. Not sure if all the programmers in your team are using the updated version of the development environment?
  • 36
    Oracle Container Engine for Kubernetes
    Container Engine for Kubernetes (OKE) is an Oracle-managed container orchestration service that can reduce the time and cost to build modern cloud native applications. Unlike most other vendors, Oracle Cloud Infrastructure provides Container Engine for Kubernetes as a free service that runs on higher-performance, lower-cost compute shapes. DevOps engineers can use unmodified, open source Kubernetes for application workload portability and to simplify operations with automatic updates and patching. Deploy Kubernetes clusters, including the underlying virtual cloud networks, internet gateways, and NAT gateways, with a single click. Automate Kubernetes operations with a web-based REST API and CLI for all actions, including Kubernetes cluster creation, scaling, and operations. Oracle Container Engine for Kubernetes does not charge for cluster management. Easily and quickly upgrade container clusters, with zero downtime, to keep them up to date with the latest stable version of Kubernetes.
  • 37
    Marathon
    Marathon is a production-grade container orchestration platform for Mesosphere’s Datacenter Operating System (DC/OS) and Apache Mesos. High Availability. Marathon runs as an active/passive cluster with leader election for 100% uptime. Multiple container runtimes. Marathon has first-class support for both Mesos containers (using cgroups) and Docker. Stateful apps. Marathon can bind persistent storage volumes to your application. You can run databases like MySQL and Postgres, and have storage accounted for by Mesos. Beautiful and powerful UI. Service Discovery & Load Balancing. Several methods available. Health Checks. Evaluate your application’s health using HTTP or TCP checks. Event Subscription. Supply an HTTP endpoint to receive notifications - for example to integrate with an external load balancer. Metrics. Query them at /metrics in JSON format, push them to systems like Graphite, StatsD and DataDog, or scrape them using Prometheus.
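    The features above come together in Marathon's application definition. A minimal sketch (the app id, image, and health-check path are invented for illustration) bundles the Docker image, resource requests, instance count, and an HTTP health check:

```json
{
  "id": "/webapp",
  "instances": 2,
  "cpus": 0.5,
  "mem": 128,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx:1.25" }
  },
  "healthChecks": [
    {
      "protocol": "HTTP",
      "path": "/",
      "gracePeriodSeconds": 30,
      "intervalSeconds": 10
    }
  ]
}
```

    A definition like this is submitted by POSTing the JSON to Marathon's `/v2/apps` REST endpoint; Marathon then schedules the requested instances on Mesos and restarts them if health checks fail.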
  • 38
    Helios

    Spotify

    Helios is a Docker orchestration platform for deploying and managing containers across an entire fleet of servers. Helios provides an HTTP API as well as a command-line client to interact with servers running your containers. It also keeps a history of events in your cluster, including information such as deploys, restarts and version changes. The binary release of Helios is built for Ubuntu 14.04.1 LTS, but Helios should be buildable on any platform with at least Java 8 and a recent Maven 3 available. Use helios-solo to launch a local environment with a Helios master and agent. Helios is pragmatic. We're not trying to solve everything today, but what we have, we try hard to ensure is rock-solid. So we don't have things like resource limits or dynamic scheduling yet. Today, for us, it has been more important to get the CI/CD use cases and the surrounding tooling solid first. That said, we eventually want to do dynamic scheduling, composite jobs, etc.
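    The job-based workflow described above can be sketched with the command-line client (the job name, version, image, and host are placeholders, and a reachable Helios master is assumed, e.g. one launched with helios-solo):

```shell
# Register a job as name:version, pointing at a Docker image
helios create myservice:v1 nginx:1.25

# Deploy the registered job to a specific agent host
helios deploy myservice:v1 agent-host-1

# Inspect the deployment state of the job across the fleet
helios status --job myservice:v1
```

    The same actions are available through the HTTP API, which is what the CLI drives under the hood.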
  • 39
    IBM Cloud Kubernetes Service
    IBM Cloud® Kubernetes Service is a certified, managed Kubernetes solution, built for creating a cluster of compute hosts to deploy and manage containerized apps on IBM Cloud®. It provides intelligent scheduling, self-healing, horizontal scaling and securely manages the resources that you need to quickly deploy, update and scale applications. IBM Cloud Kubernetes Service manages the master, freeing you from having to manage the host OS, container runtime and Kubernetes version-update process.
    Starting Price: $0.11 per hour
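    A cluster-creation sketch with the `ibmcloud` CLI might look like the following (the cluster name is invented, the zone and flavor are example values, and exact flags depend on whether you target classic or VPC infrastructure):

```shell
# Log in and target a region
ibmcloud login
ibmcloud target -r us-south

# Create a classic-infrastructure cluster with three worker nodes
ibmcloud ks cluster create classic --name my-cluster --zone dal10 \
  --flavor b3c.4x16 --workers 3

# Download the kubeconfig so kubectl can talk to the cluster
ibmcloud ks cluster config --cluster my-cluster
```

    Because the service manages the master, day-2 tasks such as Kubernetes version updates are triggered through the same CLI rather than by operating the control plane yourself.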
  • 40
    Ondat

    Accelerate your development by using a storage layer that works natively with your Kubernetes environment. Focus on running your application, while we make sure you have the persistent volumes that give you the scale and stability you need. Reduce complexity and increase efficiency in your app modernization journey by truly integrating stateful storage into Kubernetes. Run your database or any persistent workload in a Kubernetes environment without having to worry about managing the storage layer. Ondat gives you the ability to deliver a consistent storage layer across any platform. We give you the persistent volumes to allow you to run your own databases without paying for expensive hosted options. Take back control of your data layer in Kubernetes. Kubernetes-native storage with dynamic provisioning that works as it should. Fully API-driven, tight integration with your containerized applications.
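    The dynamic provisioning described above means a stateful workload only declares a PersistentVolumeClaim; a minimal sketch follows, where the storage class name is an assumption and depends on how the Ondat operator is installed in your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce          # single-node read/write, typical for a database volume
  resources:
    requests:
      storage: 10Gi
  storageClassName: ondat    # assumed class name; check the classes your installation registers
```

    A database pod then references `postgres-data` as a volume, and the storage layer provisions and attaches the persistent volume automatically.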
  • 41
    Container Service for Kubernetes (ACK)
    Container Service for Kubernetes (ACK) from Alibaba Cloud is a fully managed service. ACK is integrated with services such as virtualization, storage, network, and security, providing users with high-performance, scalable Kubernetes environments for containerized applications. Alibaba Cloud is a Kubernetes Certified Service Provider (KCSP), and ACK is certified by the Certified Kubernetes Conformance Program, which ensures a consistent Kubernetes experience and workload portability. Provides deep and rich enterprise-class cloud-native capabilities. Ensures end-to-end application security and provides fine-grained access control. Allows you to quickly create Kubernetes clusters. Provides container-based management of applications throughout the application lifecycle.
  • 42
    DataFactory

    RightData

    DataFactory is everything you need for data integration and building efficient data pipelines. Transform raw data into information and insights in far less time than other tools require. The days of writing pages of code to move and transform data are over: drag data operations from a tool palette onto your pipeline canvas to build even the most complex pipelines. With a palette of data transformations you can drag to a pipeline canvas, build pipelines in minutes that would take hours in code. Automate and operationalize pipelines using built-in approval and version control mechanisms. It used to be that data wrangling was one tool, pipeline building was another, and machine learning was yet another; DataFactory brings these functions together, where they belong. Perform operations easily using drag-and-drop transformations. Wrangle datasets to prepare for advanced analytics. Add and operationalize ML functions like segmentation and categorization without code.
  • 43
    Apache Hadoop YARN

    Apache Software Foundation

    The fundamental idea of YARN is to split up the functionalities of resource management and job scheduling/monitoring into separate daemons. The idea is to have a global ResourceManager (RM) and a per-application ApplicationMaster (AM). An application is either a single job or a DAG of jobs. The ResourceManager and the NodeManager form the data-computation framework. The ResourceManager is the ultimate authority that arbitrates resources among all the applications in the system. The NodeManager is the per-machine framework agent that is responsible for containers, monitoring their resource usage (CPU, memory, disk, network) and reporting the same to the ResourceManager/Scheduler. The per-application ApplicationMaster is, in effect, a framework-specific library and is tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.
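    The RM/NM split above is reflected directly in the yarn-site.xml configuration. A minimal sketch, pointing NodeManagers and clients at a ResourceManager (the hostname is a placeholder) and declaring the resources each node offers, might look like this:

```xml
<configuration>
  <!-- Where NodeManagers and clients find the global ResourceManager -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>rm.example.com</value>
  </property>
  <!-- Memory each NodeManager offers to containers on this machine -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
  <!-- Virtual cores each NodeManager offers to containers -->
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>4</value>
  </property>
</configuration>
```

    The Scheduler inside the ResourceManager then hands out containers against these per-node capacities, and each ApplicationMaster negotiates for them at runtime.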
  • 44
    Apache ODE

    Apache Software Foundation

    Apache ODE (Orchestration Director Engine) software executes business processes written following the WS-BPEL standard. It talks to web services, sending and receiving messages, handling data manipulation and error recovery as described by your process definition. It supports both long- and short-lived process executions to orchestrate all the services that are part of your application. WS-BPEL (Business Process Execution Language) is an XML-based language defining several constructs to write business processes. It defines a set of basic control structures like conditions or loops, as well as elements to invoke web services and receive messages from services. It relies on WSDL to express web service interfaces. Message structures can be manipulated, assigning parts or the whole of them to variables that can in turn be used to send other messages. Side-by-side support is provided for both the WS-BPEL 2.0 OASIS standard and the legacy BPEL4WS 1.1 vendor specification.
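    The receive/manipulate/invoke pattern described above looks roughly like the following abbreviated WS-BPEL 2.0 sketch (partner link, operation, and variable names are invented, and the partnerLinks and variables declarations are omitted for brevity):

```xml
<process name="QuoteProcess"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <sequence>
    <!-- Instantiate the process on an incoming request message -->
    <receive partnerLink="client" operation="requestQuote"
             variable="request" createInstance="yes"/>
    <!-- Copy part of the request into the outgoing message -->
    <assign>
      <copy>
        <from variable="request" part="item"/>
        <to variable="supplierRequest" part="item"/>
      </copy>
    </assign>
    <!-- Invoke a partner web service described by its WSDL -->
    <invoke partnerLink="supplier" operation="getPrice"
            inputVariable="supplierRequest" outputVariable="price"/>
    <!-- Reply to the original caller -->
    <reply partnerLink="client" operation="requestQuote" variable="price"/>
  </sequence>
</process>
```

    ODE executes definitions like this, correlating messages to running instances and persisting state for long-lived processes.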
  • 45
    k0s

    Mirantis

    k0s is the simple, solid & certified Kubernetes distribution that works on any infrastructure: bare-metal, on-premises, edge, IoT, public & private clouds. It's 100% open source & free. Zero Friction - k0s drastically reduces the complexity of installing and running a fully conformant Kubernetes distribution. New kube clusters can be bootstrapped in minutes, and developer friction is reduced to zero, allowing anyone with no special skills or expertise in Kubernetes to get started easily. Zero Deps - k0s is distributed as a single binary with no dependencies besides the host OS kernel. It works with any operating system without additional software packages or configuration. Any security vulnerabilities or performance issues can be fixed directly in the k0s distribution. Zero Cost - k0s is completely free for personal or commercial use, and it always will be. The source code is available on GitHub under the Apache 2 license.
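    The single-binary bootstrap described above can be sketched as follows for a single-node cluster (based on the project's quick-start; run on a Linux host with root access):

```shell
# Download the k0s single binary (no other host dependencies)
curl -sSLf https://get.k0s.sh | sudo sh

# Install a single-node controller that also schedules workloads
sudo k0s install controller --single

# Start the k0s service and wait for the node to come up
sudo k0s start

# Interact with the cluster via the bundled kubectl
sudo k0s kubectl get nodes
```

    Multi-node clusters follow the same pattern, with worker nodes joining the controller using a join token instead of the `--single` flag.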
  • 46
    Kubestack

    There is no longer any need to compromise between the convenience of a graphical user interface and the power of infrastructure as code. Kubestack allows you to design your Kubernetes platform in an intuitive, graphical user interface, and then export your custom stack to Terraform code for reliable provisioning and sustainable long-term operations. Platforms designed using Kubestack Cloud are exported to a Terraform root module that is based on the Kubestack framework. All framework modules are open source, lowering the long-term maintenance effort and allowing easy access to continued improvements. Adapt the tried-and-tested pull-request and peer-review based workflow to efficiently manage changes with your team. Reduce long-term effort by minimizing the bespoke infrastructure code you have to maintain yourself.
  • 47
    Lightbend

    Lightbend provides technology that enables developers to easily build data-centric applications that bring the most demanding, globally distributed applications and streaming data pipelines to life. Companies worldwide turn to Lightbend to solve the challenges of real-time, distributed data in support of their most business-critical initiatives. Akka Platform provides the building blocks that make it easy for businesses to build, deploy, and run large-scale applications that support digitally transformative initiatives. Accelerate time-to-value and reduce infrastructure and cloud costs with reactive microservices that take full advantage of the distributed nature of the cloud and are resilient to failure, highly efficient, and operable at any scale. Native support for encryption, data shredding, TLS enforcement, and continued compliance with GDPR. A framework for quick construction, deployment, and management of streaming data pipelines.
  • 48
    Dataplane

    The concept behind Dataplane is to make it quicker and easier to construct a data mesh with robust data pipelines and automated workflows for businesses and teams of all sizes. In addition to being more user-friendly, Dataplane places an emphasis on scaling, resilience, performance, and security.
    Starting Price: Free
  • 49
    Data Taps

    Build your data pipelines like Lego blocks with Data Taps. Add new metrics layers, zoom in, and investigate with real-time streaming SQL. Build with others, share and consume data, globally. Refine and update without hassle. Use multiple models/schemas during schema evolution. Built to scale with AWS Lambda and S3.
  • 50
    Arcion

    Arcion Labs

    Deploy production-ready change data capture pipelines for high-volume, real-time data replication, without a single line of code. Supercharged change data capture: enjoy automatic schema conversion, end-to-end replication, flexible deployment, and more with Arcion's distributed Change Data Capture (CDC). Leverage Arcion's zero-data-loss architecture for guaranteed end-to-end data consistency, built-in checkpointing, and more without any custom code. Leave scalability and performance concerns behind with a highly distributed, highly parallel architecture supporting 10x faster data replication. Reduce DevOps overhead with Arcion Cloud, the only fully managed CDC offering. Enjoy autoscaling, built-in high availability, a monitoring console, and more. Simplify and standardize your data pipeline architecture, with zero-downtime workload migration from on-premises to cloud.
    Starting Price: $2,894.76 per month