Alternatives to rkt

Compare rkt alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to rkt in 2024. Compare features, ratings, user reviews, pricing, and more from rkt competitors and alternatives in order to make an informed decision for your business.

  • 1
    Google Cloud Run
    Cloud Run is a fully managed compute platform that lets you run your code in a container directly on top of Google's scalable infrastructure. We’ve intentionally designed Cloud Run to make developers more productive: you focus on writing your code, using your favorite language, and Cloud Run takes care of operating your service. Write code your way using your favorite languages (Go, Python, Java, Ruby, Node.js, and more), with your favorite dependencies and tools, and deploy in seconds. Cloud Run abstracts away all infrastructure management for a simple developer experience, automatically scaling up and down from zero almost instantaneously depending on traffic, and it only charges you for the exact resources you use. Cloud Run makes app development and deployment simpler.
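    For orientation, here is a minimal deployment sketch that drives the gcloud CLI from Python; the service name, image path, and region are placeholders rather than values from this listing, and an authenticated gcloud installation is assumed.

    ```python
    import subprocess

    # Deploy a prebuilt container image to Cloud Run (illustrative values only).
    # Assumes the gcloud CLI is installed and authenticated against your project.
    subprocess.run(
        [
            "gcloud", "run", "deploy", "hello-service",   # hypothetical service name
            "--image", "gcr.io/my-project/hello:latest",  # hypothetical image
            "--region", "us-central1",
            "--allow-unauthenticated",                    # expose the service publicly
        ],
        check=True,
    )
    ```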
  • 2
    Ambassador
    Ambassador Labs

    Ambassador Edge Stack is a Kubernetes-native API Gateway that delivers the scalability, security, and simplicity for some of the world's largest Kubernetes installations. Edge Stack makes securing microservices easy with a comprehensive set of security functionality, including automatic TLS, authentication, rate limiting, WAF integration, and fine-grained access control. The API Gateway contains a modern Kubernetes ingress controller that supports a broad range of protocols including gRPC and gRPC-Web, supports TLS termination, and provides traffic management controls for resource availability. Why use Ambassador Edge Stack API Gateway? - Accelerate Scalability: Manage high traffic volumes and distribute incoming requests across multiple backend services, ensuring reliable application performance. - Enhanced Security: Protect your APIs from unauthorized access and malicious attacks with robust security features. - Improve Productivity & Developer Experience
  • 3
    AWS Fargate
    Amazon

    AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. Fargate runs each task or pod in its own kernel providing the tasks and pods their own isolated compute environment. This enables your application to have workload isolation and improved security by design.
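    As a rough illustration of the "no servers to manage" model, the sketch below launches a task on Fargate with boto3; the cluster name, task definition, and subnet ID are placeholders, not values from this listing.

    ```python
    import boto3

    # Launch a single task on Fargate; ECS allocates the compute, with no EC2 instances to manage.
    ecs = boto3.client("ecs", region_name="us-east-1")

    response = ecs.run_task(
        cluster="demo-cluster",            # hypothetical cluster name
        launchType="FARGATE",
        taskDefinition="web-task:1",       # hypothetical task definition
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
                "assignPublicIp": "ENABLED",
            }
        },
    )
    print(response["tasks"][0]["taskArn"])
    ```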
  • 4
    Docker

    Docker takes away repetitive, mundane configuration tasks and is used throughout the development lifecycle for fast, easy, and portable application development, on the desktop and in the cloud. Docker’s comprehensive end-to-end platform includes UIs, CLIs, APIs, and security that are engineered to work together across the entire application delivery lifecycle. Get a head start on your coding by leveraging Docker images to efficiently develop your own unique applications on Windows and Mac. Create your multi-container application using Docker Compose. Integrate with your favorite tools throughout your development pipeline; Docker works with the development tools you already use, including VS Code, CircleCI, and GitHub. Package applications as portable container images to run in any environment consistently, from on-premises Kubernetes to AWS ECS, Azure ACI, Google GKE, and more. Leverage Docker Trusted Content, including Docker Official Images and images from Docker Verified Publishers.
    Starting Price: $7 per month
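    As a small illustration of the Compose workflow mentioned above, the sketch below writes a two-service compose file and brings it up; the image names and port mapping are arbitrary examples.

    ```python
    import subprocess
    from pathlib import Path

    # A minimal multi-container application defined for Docker Compose.
    compose = """\
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
      cache:
        image: redis:alpine
    """

    Path("docker-compose.yml").write_text(compose)

    # Requires Docker Engine / Docker Desktop with the compose plugin installed.
    subprocess.run(["docker", "compose", "up", "-d"], check=True)
    ```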
  • 5
    Red Hat OpenShift
    The Kubernetes platform for big ideas. Empower developers to innovate and ship faster with the leading hybrid cloud, enterprise container platform. Red Hat OpenShift offers automated installation, upgrades, and lifecycle management throughout the container stack—the operating system, Kubernetes and cluster services, and applications—on any cloud. Red Hat OpenShift helps teams build with speed, agility, confidence, and choice. Code in production mode anywhere you choose to build. Get back to doing work that matters. Red Hat OpenShift is focused on security at every level of the container stack and throughout the application lifecycle. It includes long-term, enterprise support from one of the leading Kubernetes contributors and open source software companies. Support the most demanding workloads including AI/ML, Java, data analytics, databases, and more. Automate deployment and life-cycle management with our vast ecosystem of technology partners.
    Starting Price: $50.00/month
  • 6
    Apache Mesos
    Apache Software Foundation

    Mesos is built using the same principles as the Linux kernel, only at a different level of abstraction. The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenter and cloud environments. Native support for launching containers with Docker and AppC images. Support for running cloud-native and legacy applications in the same cluster with pluggable scheduling policies. HTTP APIs for developing new distributed applications, for operating the cluster, and for monitoring. Built-in Web UI for viewing cluster state and navigating container sandboxes.
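    The snippet below is a sketch of querying those HTTP APIs: it reads the master's state endpoint (default port 5050) and summarizes the cluster. The master hostname is a placeholder.

    ```python
    import requests

    MASTER = "http://mesos-master.example.com:5050"   # placeholder master address

    # Query the Mesos master's state endpoint and print a short summary.
    state = requests.get(f"{MASTER}/state", timeout=10).json()

    print("Mesos version:", state.get("version"))
    print("Agents:", len(state.get("slaves", [])))
    for framework in state.get("frameworks", []):
        print("Framework:", framework.get("name"))
    ```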
  • 7
    Mirantis Kubernetes Engine
    Mirantis Kubernetes Engine (formerly Docker Enterprise) provides simple, flexible, and scalable container orchestration and enterprise container management. Use Kubernetes, Swarm, or both, and experience the fastest time to production for modern applications across any environment. Enterprise container orchestration: Avoid lock-in. Run Mirantis Kubernetes Engine on bare metal, or on private or public clouds, and on a range of popular Linux distributions. Reduce time-to-value. Hit the ground running with out-of-the-box dependencies including Calico for Kubernetes networking and NGINX for Ingress support. Leverage open source. Save money and maintain control by using a full stack of open source-based technologies that are production-proven, scalable, and extensible. Focus on apps, not infrastructure. Enable your IT team to focus on building business-differentiating applications when you couple Mirantis Kubernetes Engine with OpsCare Plus for a fully managed K8s experience.
  • 8
    Podman
    Containers

    What is Podman? Podman is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. Containers can be run as root or in rootless mode. Simply put: alias docker=podman. Manage pods, containers, and container images. We believe that Kubernetes is the de facto standard for composing Pods and for orchestrating containers, making Kubernetes YAML a de facto standard file format. Hence, Podman allows the creation and execution of Pods from a Kubernetes YAML file (see podman-play-kube). Podman can also generate Kubernetes YAML based on a container or Pod (see podman-generate-kube), which allows for an easy transition from a local development environment to a production Kubernetes cluster.
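    A minimal sketch of that round trip (generate Kubernetes YAML from an existing container, then replay it); the container name is hypothetical and a local Podman installation is assumed.

    ```python
    import subprocess

    CONTAINER = "web"   # hypothetical container, e.g. created with `podman run -d --name web nginx`

    # Export the container as Kubernetes YAML (podman generate kube).
    manifest = subprocess.run(
        ["podman", "generate", "kube", CONTAINER],
        check=True, capture_output=True, text=True,
    ).stdout

    with open("web.yaml", "w") as f:
        f.write(manifest)

    # Recreate the workload from that manifest, here or on another machine (podman play kube).
    subprocess.run(["podman", "play", "kube", "web.yaml"], check=True)
    ```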
  • 9
    Oracle Container Engine for Kubernetes
    Container Engine for Kubernetes (OKE) is an Oracle-managed container orchestration service that can reduce the time and cost to build modern cloud native applications. Unlike most other vendors, Oracle Cloud Infrastructure provides Container Engine for Kubernetes as a free service that runs on higher-performance, lower-cost compute shapes. DevOps engineers can use unmodified, open source Kubernetes for application workload portability and to simplify operations with automatic updates and patching. Deploy Kubernetes clusters including the underlying virtual cloud networks, internet gateways, and NAT gateways with a single click. Automate Kubernetes operations with web-based REST API and CLI for all actions including Kubernetes cluster creation, scaling, and operations. Oracle Container Engine for Kubernetes does not charge for cluster management. Easily and quickly upgrade container clusters, with zero downtime, to keep them up to date with the latest stable version of Kubernetes.
  • 10
    KubeSphere

    KubeSphere is a distributed operating system for cloud-native application management, using Kubernetes as its kernel. It provides a plug-and-play architecture, allowing third-party applications to be seamlessly integrated into its ecosystem. KubeSphere is also a multi-tenant, enterprise-grade, open-source Kubernetes container platform with full-stack automated IT operations and streamlined DevOps workflows. It provides a developer-friendly wizard web UI, helping enterprises build a more robust and feature-rich Kubernetes platform that includes the most common functionalities needed for enterprise Kubernetes strategies. A CNCF-certified Kubernetes platform, 100% open source, built and improved by the community. It can be deployed on an existing Kubernetes cluster or on Linux machines, and supports online and air-gapped installation. It delivers DevOps, service mesh, observability, application management, multi-tenancy, storage, and networking management in a unified platform.
  • 11
    IBM WebSphere Hybrid Edition
    WebSphere Hybrid Edition is a flexible, all-in-one solution for WebSphere application server deployments that can enable organizations to meet current and future requirements. It will enable you to optimize your existing WebSphere entitlements, modernize your applications, and build new cloud-native Java EE applications. An all-in-one solution to help you run, modernize and create new Java applications. Use IBM Cloud® Transformation Advisor and IBM Mono2Micro to help assess the cloud readiness of your applications, explore options for containerization and microservices, and get assistance in adapting code. Explore and unlock the benefits of the all-in-one IBM WebSphere Hybrid Edition solution for your application run time and modernization features. Identify which WebSphere applications can easily move to containers for immediate savings. Manage costs, enhancements, and security proactively throughout the application lifecycle.
  • 12
    Ondat

    Accelerate your development by using a storage layer that works natively with your Kubernetes environment. Focus on running your application, while we make sure you have the persistent volumes that give you the scale and stability you need. Reduce complexity and increase efficiency in your app modernization journey by truly integrating stateful storage into Kubernetes. Run your database or any persistent workload in a Kubernetes environment without having to worry about managing the storage layer. Ondat gives you the ability to deliver a consistent storage layer across any platform. We give you the persistent volumes to allow you to run your own databases without paying for expensive hosted options. Take back control of your data layer in Kubernetes. Kubernetes-native storage with dynamic provisioning that works as it should. Fully API-driven, tight integration with your containerized applications.
  • 13
    OpenVZ
    Virtuozzo

    Open source container-based virtualization for Linux. Multiple secure, isolated Linux containers (otherwise known as VEs or VPSs) on a single physical server enabling better server utilization and ensuring that applications do not conflict. Each container performs and executes exactly like a stand-alone server; a container can be rebooted independently and have root access, users, IP addresses, memory, processes, files, applications, system libraries and configuration files.
  • 14
    Open Container Initiative (OCI)

    The Open Container Initiative (OCI) is a lightweight, open governance structure (project), formed under the auspices of the Linux Foundation, for the express purpose of creating open industry standards around container formats and runtimes. Launched on June 22, 2015 by Docker, CoreOS, and other leaders in the container industry, the OCI currently contains two specifications: the runtime specification (runtime-spec) and the image specification (image-spec). The runtime specification outlines how to run a “filesystem bundle” that is unpacked on disk. At a high level, an OCI implementation would download an OCI image, then unpack that image into an OCI runtime filesystem bundle; that bundle would then be run by an OCI runtime.
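    To make the "filesystem bundle" idea concrete, here is a sketch that emits a pared-down runtime-spec config.json; real bundles (for example those produced by runc spec) contain many more fields, so this only illustrates the overall shape.

    ```python
    import json

    # A deliberately minimal OCI runtime-spec config for a bundle whose root
    # filesystem lives in ./rootfs. Real configs also define mounts, namespaces,
    # capabilities, and more.
    config = {
        "ociVersion": "1.0.2",
        "process": {
            "terminal": True,
            "user": {"uid": 0, "gid": 0},
            "args": ["/bin/sh"],
            "cwd": "/",
            "env": ["PATH=/usr/sbin:/usr/bin:/sbin:/bin"],
        },
        "root": {"path": "rootfs", "readonly": True},
        "hostname": "oci-demo",
    }

    with open("config.json", "w") as f:
        json.dump(config, f, indent=2)
    ```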
  • 15
    Oracle Cloud Infrastructure Compute
    Oracle Cloud Infrastructure provides fast, flexible, and affordable compute capacity to fit any workload need from performant bare metal servers and VMs to lightweight containers. OCI Compute provides uniquely flexible VM and bare metal instances for optimal price-performance. Select exactly the number of cores and the memory your applications need. Delivering high performance for enterprise workloads. Simplify application development with serverless computing. Your choice of technologies includes Kubernetes and containers. NVIDIA GPUs for machine learning, scientific visualization, and other graphics processing. Capabilities such as RDMA, high-performance storage, and network traffic isolation. Oracle Cloud Infrastructure consistently delivers better price performance than other cloud providers. Virtual machine-based (VM) shapes offer customizable core and memory combinations. Customers can optimize costs by choosing a specific number of cores.
    Starting Price: $0.007 per hour
  • 16
    Sandboxie

    Sandboxie is sandbox-based isolation software for 32- and 64-bit Windows NT-based operating systems. It has been developed by David Xanatos since it became open source; before that it was developed by Sophos (which acquired it from Invincea, which had acquired it earlier from the original author, Ronen Tzur). It creates a sandbox-like isolated operating environment in which applications can be run or installed without permanently modifying the local or mapped drive. An isolated virtual environment allows controlled testing of untrusted programs and web surfing. Since being open-sourced, Sandboxie has been released in two flavors: the classic build with an MFC-based UI, and a Plus build that incorporates new features and an entirely new Qt-based UI. All newly added features target the Plus branch, but they can often be used in the classic edition by manually editing the sandboxie.ini file.
  • 17
    LXD
    Canonical

    LXD is a next-generation system container manager. It offers a user experience similar to virtual machines but uses Linux containers instead. It is image-based, with pre-made images available for a wide number of Linux distributions, and is built around a very powerful, yet pretty simple, REST API. To get a better idea of what LXD is and what it does, you can try it online! Then, if you want to run it locally, take a look at our getting started guide. The LXD project was founded and is currently led by Canonical Ltd with contributions from a range of other companies and individual contributors. The core of LXD is a privileged daemon which exposes a REST API over a local Unix socket as well as over the network (if enabled). Clients, such as the command-line tool provided with LXD itself, then do everything through that REST API. This means that whether you're talking to your local host or a remote server, everything works the same way.
  • 18
    balenaEngine
    An engine purpose-built for embedded and IoT use cases, based on Moby Project technology from Docker and compatible with Docker containers. 3.5x smaller than Docker CE, packaged as a single binary. Available for a wide variety of chipset architectures, supporting everything from tiny IoT devices to large industrial gateways. Bandwidth-efficient updates with binary diffs (container deltas), 10-70x smaller than pulling layers in common scenarios. Extracts layers as they arrive to prevent excessive writing to disk, protecting your storage from eventual corruption. Atomic and durable image pulls defend against partial container pulls in the event of power failure. Prevents page cache thrashing during image pull, so your application runs undisturbed in low-memory situations.
  • 19
    runc
    Open Container Initiative (OCI)

    CLI tool for spawning and running containers according to the OCI specification. runc only supports Linux and must be built with Go version 1.17 or higher. In order to enable seccomp support, you will need to install libseccomp on your platform. runc supports optional build tags for compiling support of various features, with some of them enabled by default. runc currently supports running its test suite via Docker; to run the suite, just type make test. There are additional make targets for running the tests outside of a container, but this is not recommended, as the tests are written with the expectation that they can write and remove anywhere. You can run a specific test case by setting the TESTFLAGS variable, a specific integration test by setting the TESTPATH variable, and a specific rootless integration test by setting the ROOTLESS_TESTPATH variable. Please note that runc is a low-level tool not designed with an end user in mind.
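    A sketch of the basic bundle workflow with the runc CLI (runc spec to generate a default config.json, then runc run); the bundle path and container ID are arbitrary, and populating the rootfs is indicated only as a comment.

    ```python
    import os
    import subprocess

    BUNDLE = "/tmp/mybundle"          # arbitrary bundle directory
    os.makedirs(f"{BUNDLE}/rootfs", exist_ok=True)

    # The rootfs must be populated separately, e.g. by exporting an image
    # (`podman export` or `docker export` piped into `tar -C rootfs -x`).

    # Generate a default config.json in the bundle directory.
    subprocess.run(["runc", "spec"], cwd=BUNDLE, check=True)

    # Run the container described by the bundle (requires root or a rootless setup).
    subprocess.run(["runc", "run", "demo-container"], cwd=BUNDLE, check=True)
    ```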
  • 20
    LXC
    Canonical

    LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers. LXC containers are often considered something in the middle between a chroot and a full-fledged virtual machine. The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel. LXC is free software; most of the code is released under the terms of the GNU LGPLv2.1+ license, some Android compatibility bits are released under a standard 2-clause BSD license, and some binaries and templates are released under the GNU GPLv2 license. LXC's stable release support relies on the Linux distributions and their own commitment to pushing stable fixes and security updates.
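    For flavor, a sketch of the classic lxc-* command-line tools driven from Python; the container name, distribution, release, and architecture are placeholder values.

    ```python
    import subprocess

    NAME = "demo"   # arbitrary container name

    # Create a container from the "download" template (placeholder dist/release/arch).
    subprocess.run(
        ["lxc-create", "-n", NAME, "-t", "download", "--",
         "-d", "ubuntu", "-r", "jammy", "-a", "amd64"],
        check=True,
    )

    subprocess.run(["lxc-start", "-n", NAME], check=True)

    # Run a command inside the container, then clean up.
    subprocess.run(["lxc-attach", "-n", NAME, "--", "uname", "-a"], check=True)
    subprocess.run(["lxc-stop", "-n", NAME], check=True)
    subprocess.run(["lxc-destroy", "-n", NAME], check=True)
    ```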
  • 21
    Turbo
    Turbo.net

    Turbo lets you publish and manage all of your enterprise applications from a single point to every platform and device. Book a demo with our team to see Turbo in action. Deploy custom containerized applications on desktops, on-premises servers, and public and private clouds. The student digital workspace brings applications to every campus and personal device. Deliver applications everywhere from a single, configurable container environment. Freely migrate between devices and platforms with rich APIs and connectors. Deploy to managed and BYOD PCs with no installs. Stream to HTML5, Mac, and mobile with Turbo Application Server. Publish to existing Citrix and VMware VDI environments. Dynamically image applications onto non-persistent WVD instances. Bring course applications directly inside Canvas, Blackboard, and other major LMS systems. Authoring environment for creating your own containerized applications and components.
    Starting Price: $19 per month
  • 22
    Cloud Foundry

    Cloud Foundry makes it faster and easier to build, test, deploy and scale applications, providing a choice of clouds, developer frameworks, and application services. It is an open source project and is available through a variety of private cloud distributions and public cloud instances. Cloud Foundry has a container-based architecture that runs apps in any programming language. Deploy apps to CF using your existing tools and with zero modification to the code. Instantiate, deploy, and manage high-availability Kubernetes clusters with CF BOSH on any cloud. By decoupling applications from infrastructure, you can make individual decisions about where to host workloads – on premise, in public clouds, or in managed infrastructures – and move those workloads as necessary in minutes, with no changes to the app.
  • 23
    Oracle Solaris
    We’ve been designing the OS for more than two decades, always ensuring that we’ve engineered in features to meet the latest market trends while maintaining backward compatibility. Our Application Binary Guarantee gives you the ability to run your newest and legacy applications on modern infrastructure. Integrated lifecycle management technologies allow you to issue a single command to update your entire cloud installation—clear down to the firmware and including all virtualized environments. One large financial services company saw a 16x efficiency gain by managing its virtual machines (VMs) using Oracle Solaris, compared to a third-party open-source platform. New additions to the Oracle Solaris Observability tools allow you to troubleshoot system and application problems in real time, giving you real-time and historical insight and allowing for unprecedented power to diagnose and resolve issues quickly and easily.
  • 24
    MicroK8s
    Canonical

    Low-ops, minimal production Kubernetes, for devs, cloud, clusters, workstations, Edge and IoT. MicroK8s automatically chooses the best nodes for the Kubernetes datastore. When you lose a cluster database node, another node is promoted. No admin needed for your bulletproof edge. MicroK8s is small, with sensible defaults that ‘just work’. A quick install, easy upgrades, and great security make it perfect for micro clouds and edge computing. Full enterprise support available, with no subscription needed. Optional 24/7 support with 10-year security maintenance. Under the cell tower. On the racecar. On satellites or everyday appliances, MicroK8s delivers the full Kubernetes experience on IoT and micro clouds. Fully containerized deployment with compressed over-the-air updates for ultra-reliable operations. MicroK8s applies security updates automatically by default, or defers them if you want. Upgrade to a newer version of Kubernetes with a single command. It’s really that easy.
  • 25
    FreeBSD Jails
    Since system administration is a difficult task, many tools have been developed to make life easier for the administrator. These tools often enhance the way systems are installed, configured, and maintained. One of the tools which can be used to enhance the security of a FreeBSD system is jails. Jails have been available since FreeBSD 4.X and continue to be enhanced in their usefulness, performance, reliability, and security. Jails build upon the chroot(2) concept, which is used to change the root directory of a set of processes. This creates a safe environment, separate from the rest of the system. Jails improve on the concept of the traditional chroot environment in several ways. In a traditional chroot environment, processes are only limited in the part of the file system they can access. The rest of the system resources, system users, running processes, and the networking subsystem are shared by the chrooted processes and the processes of the host system.
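    A rough sketch of creating a transient jail with the jail(8) command from Python; the path, hostname, and address are placeholders, and a populated FreeBSD userland under the jail root is assumed.

    ```python
    import subprocess

    # Start a transient jail and drop into a shell inside it (FreeBSD only).
    # Assumes /usr/jails/demo already contains a FreeBSD userland.
    subprocess.run(
        [
            "jail", "-c",
            "name=demo",
            "path=/usr/jails/demo",            # placeholder jail root
            "host.hostname=demo.example.org",
            "ip4.addr=192.0.2.10",             # documentation-range address
            "command=/bin/sh",
        ],
        check=True,
    )
    ```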
  • 26
    Flockport

    One-click migration from your existing VM workloads. Get instant mobility of your applications across on-prem and clouds. Why settle for one-way cloud migration when you can have continuous mobility? Migrate from on-prem to the cloud, across clouds, or back. Embrace the cloud your way. Business continuity needs application mobility and a multi-cloud approach. Leave behind long, drawn-out, and expensive VM migration projects. Instashift gives you single-click automation; no need to adopt complex approaches. Migrate your VMs complete with applications, databases, and state. Benefit from continuous mobility for your instashifted applications. Move to the cloud or back to on-prem in a click. Need to move thousands of VMs? Instashift gives you an automated solution that works seamlessly. A new innovation platform for sovereign and emerging cloud providers to deliver the same capabilities and flexibility users have come to expect from the public cloud.
  • 27
    Red Hat Integration
    Red Hat® Integration is a comprehensive set of integration and messaging technologies to connect applications and data across hybrid infrastructures. It is an agile, distributed, containerized, and API-centric solution. It provides service composition and orchestration, application connectivity and data transformation, real-time message streaming, change data capture, and API management, all combined with a cloud-native platform and toolchain to support the full spectrum of modern application development. Deploy enterprise integration patterns (EIPs) based integrations using 200+ pluggable connectors to connect new and existing data across the hybrid cloud. Create, deploy, monitor, and control APIs throughout their entire lifecycle. With an API-first approach, extend your integrations across hybrid and multi-cloud environments. Develop and manage services in popular container standards, as well as package and deploy lightweight containers in distributed environments.
  • 28
    Azure Container Apps
    Azure Container Apps is a fully managed Kubernetes-based application platform that helps you deploy apps from code or containers without orchestrating complex infrastructure. Build heterogeneous modern apps or microservices with unified centralized networking, observability, dynamic scaling, and configuration for higher productivity. Design resilient microservices with full support for Dapr and dynamic scaling powered by KEDA. Advanced identity and access management to monitor container governance at scale and secure your environment. Scalable, portable platform with low management costs for improved velocity to production. Achieve high developer velocity and app-centric productivity while using open standards on a cloud-native foundation with no programming model requirement.
    Starting Price: $0.000024 per second
  • 29
    DxEnterprise
    DxEnterprise is multi-platform Smart Availability software built on patented technology for Windows Server, Linux and Docker. It can be used to manage a variety of workloads at the instance level—as well as Docker containers. DxEnterprise (DxE) is particularly optimized for native or containerized Microsoft SQL Server deployments on any platform. It is also adept at management of Oracle on Windows. In addition to Windows file shares and services, DxE supports any Docker container on Windows or Linux, including Oracle, MySQL, PostgreSQL, MariaDB, MongoDB, and other relational database management systems. It also supports cloud-native SQL Server availability groups (AGs) in containers, including support for Kubernetes clusters, across mixed environments and any type of infrastructure. DxE integrates seamlessly with Azure shared disks, enabling optimal high availability for clustered SQL Server instances in the cloud.
  • 30
    Apache Hadoop YARN
    Apache Software Foundation

    The fundamental idea of YARN is to split up the functionalities of resource management and job scheduling/monitoring into separate daemons: a global ResourceManager (RM) and a per-application ApplicationMaster (AM). An application is either a single job or a DAG of jobs. The ResourceManager and the NodeManager form the data-computation framework. The ResourceManager is the ultimate authority that arbitrates resources among all the applications in the system. The NodeManager is the per-machine framework agent that is responsible for containers, monitoring their resource usage (CPU, memory, disk, network) and reporting the same to the ResourceManager/Scheduler. The per-application ApplicationMaster is, in effect, a framework-specific library and is tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.
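    To illustrate the ResourceManager's role as the central authority, the sketch below polls its REST API for cluster metrics and running applications; the ResourceManager hostname is a placeholder.

    ```python
    import requests

    RM = "http://resourcemanager.example.com:8088"   # placeholder ResourceManager address

    # Cluster-wide resource metrics from the ResourceManager.
    metrics = requests.get(f"{RM}/ws/v1/cluster/metrics", timeout=10).json()["clusterMetrics"]
    print("Active NodeManagers:", metrics["activeNodes"])

    # Applications currently running (each is driven by its own ApplicationMaster).
    apps = requests.get(f"{RM}/ws/v1/cluster/apps", params={"states": "RUNNING"}, timeout=10).json()
    for app in (apps.get("apps") or {}).get("app", []):
        print(app["id"], app["name"], app["state"])
    ```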
  • 31
    Anthos
    Google

    Anthos lets you build, deploy, and manage applications anywhere in a secure, consistent manner. You can modernize existing applications running on virtual machines while deploying cloud-native apps on containers in an increasingly hybrid and multi-cloud world. Our application platform provides a consistent development and operations experience across all your deployments while reducing operational overhead and improving developer productivity. Anthos GKE: Enterprise-grade container orchestration and management service for running Kubernetes clusters anywhere, in both cloud and on-premises environments. Anthos Config Management: Define, automate, and enforce policies across environments in order to meet your organization’s unique security and compliance requirements. Anthos Service Mesh: Anthos unburdens operations and development teams by empowering them to manage and secure traffic between services while monitoring, troubleshooting, and improving application performance.
  • 32
    F5 Container Ingress Services
    Organizations are adopting containerized environments to speed app development. But these apps still need services, such as routing, SSL offload, scale, and security. F5 Container Ingress Services makes it easy to deliver advanced application services to your container deployments, enabling Ingress control, HTTP routing, load balancing, and application delivery performance, as well as robust security services. Container Ingress Services easily integrates BIG-IP solutions with native container environments, such as Kubernetes, and PaaS container orchestration and management systems, such as Red Hat OpenShift. Scale apps to meet container workloads and enable security services to protect container data. Container Ingress Services enables self-service app performance and security services within your orchestration by integrating BIG-IP platforms with your container environment.
  • 33
    Juniper Cloud-Native Router
    The Cloud-Native Router takes full advantage of container economics and operational efficiencies, giving service providers the flexibility they need to deploy 5G. The performant, software-based router combines Juniper’s proven routing technology, the Junos OS containerized routing protocol daemon (cRPD), and the Contrail vRouter DPDK forwarding plane for x86 processors. It integrates seamlessly with the Kubernetes Container Network Interface (CNI) framework. The router complements Juniper’s physical routers with advanced networking features for cloud-native environments where space, power, and cooling are limited. Based on the same Junos OS routing technology, hybrid physical and virtual networks provide a single experience end to end. The Cloud-Native Router is a key component in the 5G Distributed Radio Access Network (D-RAN) and in 5G Core data centers hosted in hyperscaler cloud environments.
  • 34
    Oracle Container Cloud Service
    Oracle Container Cloud Service (also known as Oracle Cloud Infrastructure Container Service Classic) offers Development and Operations teams the benefits of easy and secure Docker containerization when building and deploying applications. Provides an easy-to-use interface to manage the Docker environment. Provides out-of-the-box examples of containerized services and application stacks that can be deployed in one click. Enables developers to easily connect to their private Docker registries (so they can ‘bring their own containers’). Enables developers to focus on building containerized application images and Continuous Integration/Continuous Delivery (CI/CD) pipelines, not on learning complex orchestration technologies.
  • 35
    Salad
    Salad Technologies

    Salad allows gamers to mine crypto in their downtime. Turn your GPU power into credits that you can spend on things you love. Our Store features subscriptions, games, gift cards, and more. Download our free mining app and run it while you're AFK to earn Salad Balance. Support a democratized web by providing decentralized infrastructure for distributing compute power. To cut down on the buzzwords: your PC does a lot more than just make you money. At Salad, our chefs will help support not only blockchain, but other distributed projects and workloads like machine learning and data processing. Take surveys, answer quizzes, and test apps through AdGate, AdGem, and OfferToro. Once you have enough balance, you can redeem items from the Salad Storefront. Your Salad Balance can be used to buy items like Discord Nitro, prepaid Visa cards, Amazon credit, or game codes.
  • 36
    Apprenda

    Apprenda Cloud Platform empowers enterprise IT to create a Kubernetes-enabled shared service on the infrastructures of their choice and offer it to developers across business units. ACP supports your entire custom application portfolio. Rapidly build, deploy, run, and manage cloud-native, microservices, and container-based .NET and Java applications or modernize traditional workloads. ACP gives your developers self-service access to the tools they need to rapidly build applications, while IT operators can very easily orchestrate the environments and workflows. Enterprise IT becomes a true service provider. ACP is a single platform spanning your multiple data centers and clouds. Run ACP on-premise or consume it as a managed service on the public cloud; both with the assurance of complete infrastructure independence. ACP enables policy-driven control over all of your application workloads' infrastructure utilization and DevOps processes.
  • 37
    Quarkus

    Quarkus tailors your application for GraalVM and HotSpot. Amazingly fast boot time and incredibly low RSS memory (not just heap size!) offer near-instant scale-up and high-density memory utilization in container orchestration platforms like Kubernetes. We use a technique we call compile-time boot. Quarkus provides a cohesive, fun-to-use, full-stack framework by leveraging a growing list of over fifty best-of-breed libraries that you love and use. A cohesive platform for optimized developer joy with unified configuration and no-hassle native executable generation. Zero config, live reload in the blink of an eye, and streamlined code for the 80% common usages, flexible for the remaining 20%. The combination of Quarkus and Kubernetes provides an ideal environment for creating scalable, fast, and lightweight applications. Quarkus significantly increases developer productivity with tooling, pre-built integrations, application services, and more.
  • 38
    Tigera

    Kubernetes-native security and observability. Security and observability as code for cloud-native applications. Cloud-native security as code for hosts, VMs, containers, Kubernetes components, workloads, and services to secure north-south and east-west traffic, enable enterprise security controls, and ensure continuous compliance. Kubernetes-native observability as code to collect real-time telemetry, enriched with Kubernetes context, for a live topographical view of interactions between components from hosts to services. Rapid troubleshooting with machine-learning powered anomaly and performance hotspot detection. Single framework to centrally secure, observe, and troubleshoot multi-cluster, multi-cloud, and hybrid-cloud environments running Linux or Windows containers. Update and deploy policies in seconds to enforce security and compliance or resolve issues.
  • 39
    Authentic8 Silo
    Authentic8

    Silo delivers secure anywhere, anytime web access, managed by policy and protected by rigorous controls. By shifting the exploit surface away from potential points of risk, Silo establishes trusted access to the web. Silo shifts your risk to an isolated cloud-native environment that you control. Silo can be configured specifically to meet your most demanding requirements. The Silo Web Isolation Platform is a secure, cloud-native execution environment for all web-based activity. Silo is built on the principles that all web code and critical data should be isolated from the endpoint, and that browsing capabilities should be configurable and auditable — like any other enterprise workflow. A cloud-based solution that deploys in seconds — whether it’s for a single user or thousands. Silo doesn’t require infrastructure investment; its ability to easily scale lets IT focus on solving business problems, not managing procurement.
  • 40
    Falcon Cloud Workload Protection
    Falcon Cloud Workload Protection provides complete visibility into workload and container events and instance metadata, enabling faster and more accurate detection, response, threat hunting, and investigation, to ensure that nothing goes unseen in your cloud environment. Falcon Cloud Workload Protection secures your entire cloud-native stack, on any cloud, across all workloads, containers, and Kubernetes applications. Automate security and detect and stop suspicious activity, zero-day attacks, and risky behavior to stay ahead of threats and reduce the attack surface. Falcon Cloud Workload Protection key integrations support continuous integration/continuous delivery (CI/CD) workflows, allowing you to secure workloads at the speed of DevOps without sacrificing performance.
  • 41
    Red Hat CodeReady Workspaces
    Red Hat® CodeReady Workspaces is a developer tool that makes cloud-native development practical for teams, using Kubernetes and containers to provide any member of the development or IT team with a consistent, preconfigured development environment. Developers can create code, build, and test in containers running on Red Hat OpenShift®. The user experience is as fast and familiar as an integrated developer environment (IDE) on their laptop. Workspace configurations are centrally managed and easy to share using a devfile that specifies what developers on your team need to start working. Includes an in-browser IDE, providing a desktop-like coding experience and allowing team members to update code from anywhere. Red Hat CodeReady Workspaces can be downloaded and moved to run behind your firewall, isolated from the public internet.
  • 42
    NeuVector
    NeuVector covers the entire CI/CD pipeline with complete vulnerability management and attack blocking in production with our patented container firewall. NeuVector has you covered with PCI-ready container security. Meet requirements with less time and less work. NeuVector protects your data and IP in public and private cloud environments. Continuously scan throughout the container lifecycle. Remove security roadblocks. Bake in security policies at the start. Comprehensive vulnerability management to establish your risk profile, and the only patented container firewall for immediate protection from zero-day, known, and unknown threats. Essential for PCI and other mandates, NeuVector creates a virtual wall to keep personal and private information securely isolated on your network. NeuVector is the only Kubernetes-native container security platform that delivers complete container security.
    Starting Price: 1200/node/yr
  • 43
    Gloo Mesh
    solo.io

    Today's Kubernetes environments need help in scaling, securing, and observing modern cloud-native applications. Gloo Mesh, based on the industry's leading Istio service mesh, simplifies multi-cloud and multi-cluster management of service mesh for containers and virtual machines. Gloo Mesh helps platform engineering teams to reduce costs, reduce risks, and improve application agility. Gloo Mesh is a modular component of Gloo Platform. The service mesh allows application-aware network tasks to be managed independently from the application, adding observability, security, and reliability to distributed applications. By introducing the service mesh to your applications, you can simplify the application layer, gain more insight into your traffic, and increase the security of your applications.
  • 44
    StackRox

    Only StackRox provides comprehensive visibility into your cloud-native infrastructure, including all images, container registries, Kubernetes deployment configurations, container runtime behavior, and more. StackRox’s deep integration with Kubernetes delivers visibility focused on deployments, giving security and DevOps teams a comprehensive understanding of their cloud-native infrastructure, including images, containers, pods, namespaces, clusters, and their configurations. You get at-a-glance views of risk across your environment, compliance status, and active suspicious traffic. Each summary view enables you to drill into more detail. Using StackRox, you can easily identify and analyze container images in your environment with native integrations and support for nearly every image registry.
  • 45
    Google Deep Learning Containers
    Build your deep learning project quickly on Google Cloud: Quickly prototype with a portable and consistent environment for developing, testing, and deploying your AI applications with Deep Learning Containers. These Docker images use popular frameworks and are performance optimized, compatibility tested, and ready to deploy. Deep Learning Containers provide a consistent environment across Google Cloud services, making it easy to scale in the cloud or shift from on-premises. You have the flexibility to deploy on Google Kubernetes Engine (GKE), AI Platform, Cloud Run, Compute Engine, Kubernetes, and Docker Swarm.
  • 46
    Alibaba Cloud Server Load Balancer (SLB)
    Server Load Balancer (SLB) provides disaster recovery at four levels for high availability. CLB and ALB support built-in Anti-DDoS services to ensure business security. In addition, you can integrate ALB with WAF in the console to ensure security at the application layer. ALB and CLB support cloud-native networks. ALB is integrated with other cloud-native services, such as Container Service for Kubernetes (ACK), Serverless App Engine (SAE), and Kubernetes, and functions as a cloud-native gateway to distribute inbound network traffic. Monitors the condition of backend servers regularly. SLB does not distribute network traffic to unhealthy backend servers to ensure availability. Server Load Balancer (SLB) supports cluster deployment and session synchronization. You can perform hot upgrades and monitor the health and performance of machines in real-time. Supports multi-zone deployment in specific regions to provide zone-disaster recovery.
  • 47
    Argo

    Open-source tools for Kubernetes to run workflows, manage clusters and do GitOps right. Kubernetes-native workflow engine supporting DAG and step-based workflows. Declarative continuous delivery with a fully-loaded UI. Advanced Kubernetes deployment strategies such as Canary and Blue-Green made easy. Argo Workflows is an open-source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD. Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a graph (DAG). Easily run compute-intensive jobs for machine learning or data processing in a fraction of the time using Argo Workflows on Kubernetes. Run CI/CD pipelines natively on Kubernetes without configuring complex software development products. Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments.
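    A small sketch of a DAG workflow manifest built in Python and written out as YAML for argo or kubectl to submit; the image, task names, and generateName prefix are arbitrary examples, and PyYAML is assumed to be installed.

    ```python
    import yaml  # PyYAML, assumed available

    # A two-step DAG: task "b" runs after task "a". Submit with `argo submit dag.yaml`
    # (or `kubectl create -f dag.yaml`) in a cluster where Argo Workflows is installed.
    workflow = {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Workflow",
        "metadata": {"generateName": "dag-demo-"},
        "spec": {
            "entrypoint": "main",
            "templates": [
                {
                    "name": "main",
                    "dag": {
                        "tasks": [
                            {"name": "a", "template": "echo"},
                            {"name": "b", "template": "echo", "dependencies": ["a"]},
                        ]
                    },
                },
                {
                    "name": "echo",
                    "container": {"image": "alpine:3.19", "command": ["echo", "hello"]},
                },
            ],
        },
    }

    with open("dag.yaml", "w") as f:
        yaml.safe_dump(workflow, f, sort_keys=False)
    ```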
  • 48
    KubeMQ

    Innovative and modern message queue and message broker in a lightweight container, developed to run in Kubernetes, certified in the CNCF landscape, and connecting natively to the cloud-native ecosystem. A message broker and message queue ideal for developers, providing all messaging patterns, scalable, highly available, and secure. Connect microservices instantly using a rich set of connectors without writing any code. Easy-to-use SDKs and elimination of predefined topics, channels, brokers, and routes. Build & Deploy allows configurations of KubeMQ components to be built with a few clicks and deployed with the kubectl command line. Simple deployment in Kubernetes in less than 1 minute. Developer friendly, with simple-to-use SDKs and the elimination of many developer- and DevOps-centered challenges.
  • 49
    Drone
    Harness

    Configuration as code. Pipelines are configured with a simple, easy-to-read file that you commit to your git repository. Each pipeline step is executed inside an isolated Docker container that is automatically downloaded at runtime. Any source code manager: Drone integrates seamlessly with multiple source code management systems, including GitHub, GitHub Enterprise, Bitbucket, and GitLab. Any platform: Drone natively supports multiple operating systems and architectures, including Linux x64, ARM, ARM64, and Windows x64. Any language: Drone works with any language, database, or service that runs inside a Docker container. Choose from thousands of public Docker images or provide your own. Create and share plugins: Drone uses containers to drop pre-configured steps into your pipeline. Choose from hundreds of existing plugins, or create your own. Drone makes advanced customization easy. Implement custom access controls, approval workflows, secrets management, YAML syntax extensions, and more.
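    As a sketch of the "configuration as code" idea, the snippet below emits a minimal .drone.yml pipeline; the step name, image, and commands are arbitrary examples, and PyYAML is assumed to be installed.

    ```python
    import yaml  # PyYAML, assumed available

    # A minimal Drone pipeline: one step that runs tests inside a Go container.
    # Commit the resulting .drone.yml to the repository root.
    pipeline = {
        "kind": "pipeline",
        "type": "docker",
        "name": "default",
        "steps": [
            {
                "name": "test",
                "image": "golang:1.22",        # arbitrary example image
                "commands": ["go test ./..."],
            }
        ],
    }

    with open(".drone.yml", "w") as f:
        yaml.safe_dump(pipeline, f, sort_keys=False)
    ```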
  • 50
    DBOS

    A simpler, more secure way to build fault-tolerant cloud applications, powered by the revolutionary cloud-native DBOS operating system. Based on 3 years of joint MIT-Stanford open source R&D, DBOS revolutionizes cloud-native architecture. DBOS is a cloud-native OS that builds on a relational database to radically simplify today's complex cloud application stacks. DBOS powers DBOS Cloud, a transactional serverless platform that provides fault-tolerance, observability, cyber-resilience, and easy cloud deployment to stateful TypeScript applications. OS services are implemented on top of a distributed DBMS. Built-in transactional, fault-tolerant state management that simplifies the stack, with no need for containers, cluster management, or workflow orchestration. Seamless scaling, high performance, and high availability. Metrics, logs, and traces are stored in SQL-accessible tables. Smaller cyber attack surface, cyberattack self-detection, and cyber-resilience.