Alternatives to Google Cloud Traffic Director

Compare Google Cloud Traffic Director alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Google Cloud Traffic Director in 2024. Compare features, ratings, user reviews, pricing, and more from Google Cloud Traffic Director competitors and alternatives in order to make an informed decision for your business.

  • 1
    Total Uptime Cloud Load Balancer

    Total Uptime Technologies

    Why choose a load balancer that locks you into one cloud platform when you could choose a solution that works with everyone? Multi-Cloud / Hybrid-Cloud / Data Center / On-Prem - It works with everything everywhere. Total Uptime gives you control over all inbound application traffic. Route traffic around network outages, ISP issues, and cloud failures. Secure applications against malicious activity and attacks. Integrate devices on-prem, at colo facilities, or in the cloud. Accelerate them and boost performance. It doesn't matter where they are because you have complete visibility AND control.
  • 2
    Kuma

    The open-source control plane for service mesh, delivering security, observability, routing, and more. Built on top of Envoy, Kuma is a modern control plane for microservices and service mesh for both K8s and VMs, with support for multiple meshes in one cluster. Its out-of-the-box L4 + L7 policy architecture enables zero-trust security, observability, discovery, routing, and traffic reliability in one click. Getting up and running with Kuma only requires three easy steps. Natively embedded with the Envoy proxy, Kuma delivers easy-to-use policies that can secure, observe, connect, route, and enhance service connectivity for every application and service, databases included. Build modern service and application connectivity across every platform, cloud, and architecture. Kuma supports modern Kubernetes environments and virtual machine workloads in the same cluster, with native multi-cloud and multi-cluster connectivity to support the entire organization.
  • 3
    F5 NGINX Gateway Fabric
    The always-free NGINX Service Mesh scales from open source projects to a fully supported, secure, and scalable enterprise‑grade solution. Take control of Kubernetes with NGINX Service Mesh, featuring a unified data plane for ingress and egress management in a single configuration. The real star of NGINX Service Mesh is the fully integrated, high-performance data plane. Leveraging the power of NGINX Plus to operate highly available and scalable containerized environments, our data plane brings a level of enterprise traffic management, performance, and scalability to the market that no other sidecars can offer. It provides the seamless and transparent load balancing, reverse proxy, traffic routing, identity, and encryption features needed for production-grade service mesh deployments. When paired with the NGINX Plus-based version of NGINX Ingress Controller, it provides a unified data plane that can be managed with a single configuration.
  • 4
    Tetrate

    Connect and manage applications across clusters, clouds, and data centers. Coordinate app connectivity across heterogeneous infrastructure from a single management plane. Integrate traditional workloads into your cloud-native application infrastructure. Create tenants within your business to define fine-grained access control and editing rights for teams on shared infrastructure. Audit the history of changes to services and shared resources from day zero. Automate traffic shifting across failure domains before your customers notice. TSB sits at the application edge, at cluster ingress, and between workloads in your Kubernetes and traditional compute clusters. Edge and ingress gateways route and load balance application traffic across clusters and clouds while the mesh controls connectivity between services. A single management plane configures connectivity, security, and observability for your entire application network.
  • 5
    Kong Mesh
    Enterprise service mesh based on Kuma for multi-cloud and multi-cluster on both Kubernetes and VMs. Deploy with a single command. Connect to other services automatically with built-in service discovery, including an Ingress resource and remote CPs. Support across any environment, including multi-cluster, multi-cloud and multi-platform on both Kubernetes and VMs. Accelerate initiatives like zero-trust and GDPR with native mesh policies, improving the speed and efficiency of every application team. Deploy a single control plane that can scale horizontally to many data planes, or support multiple clusters or even hybrid service meshes running on both Kubernetes and VMs combined. Simplify cross-zone communication using an Envoy-based ingress deployment on both Kubernetes and VMs, as well as the built-in DNS resolver for service-to-service communication. Built on top of Envoy with 50+ observability charts out of the box, you can collect metrics, traces, and logs of all L4-L7 traffic.
  • 6
    Buoyant Cloud
    Fully managed Linkerd, right on your cluster. Running a service mesh shouldn’t require a team of engineers. Buoyant Cloud manages Linkerd so that you don’t have to. Automate away the toil. Buoyant Cloud automatically keeps your Linkerd control plane and data plane up to date with the latest versions and handles installs, trust anchor rotation, and more. Automate upgrades, installs, and more. Keep data plane proxy versions always in sync. Rotate TLS trust anchors without breaking a sweat. Never get taken unaware. Buoyant Cloud continuously monitors the health of your Linkerd deployments and proactively alerts you of potential issues before they escalate. Automatically track service mesh health. Get a global, cross-cluster view of Linkerd's behavior. Monitor and report Linkerd best practices. Forget overly-complicated solutions that pile one layer of complexity on top of another. Linkerd just works, and Buoyant Cloud makes Linkerd easier than ever.
  • 7
    Linkerd

    Buoyant

    Linkerd adds critical security, observability, and reliability features to your Kubernetes stack—no code change required. Linkerd is 100% Apache-licensed, with an incredibly fast-growing, active, and friendly community. Built in Rust, Linkerd's data plane proxies are incredibly small (<10 MB) and blazing fast (p99 < 1ms). No complex APIs or configuration. For most applications, Linkerd will “just work” out of the box. Linkerd's control plane installs into a single namespace, and services can be safely added to the mesh, one at a time. Get a comprehensive suite of diagnostic tools, including automatic service dependency maps and live traffic samples. Best-in-class observability allows you to monitor golden metrics—success rate, request volume, and latency—for every service.
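    The golden metrics named above (success rate, request volume, latency) can be computed from a batch of request samples in a few lines. The sample data and function below are a hypothetical illustration of the concept, not Linkerd's API.

```python
# Hypothetical request samples: (success flag, latency in ms).
requests = [(True, 2.1), (True, 3.4), (False, 120.0), (True, 1.8), (True, 4.9)]

def golden_metrics(samples):
    """Compute the three 'golden metrics': success rate, volume, latency p99."""
    volume = len(samples)
    success_rate = sum(ok for ok, _ in samples) / volume
    latencies = sorted(ms for _, ms in samples)
    # p99 via nearest-rank percentile over the sorted latencies
    p99 = latencies[min(volume - 1, int(0.99 * volume))]
    return success_rate, volume, p99

rate, volume, p99 = golden_metrics(requests)
print(f"success={rate:.0%} volume={volume} p99={p99}ms")  # success=80% volume=5 p99=120.0ms
```

    A mesh computes these continuously from proxied traffic rather than from a static list, but the arithmetic is the same.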
  • 8
    Traefik Mesh

    Traefik Labs

    Traefik Mesh is a straightforward, easy-to-configure, and non-invasive service mesh that allows visibility and management of the traffic flows inside any Kubernetes cluster. By improving monitoring, logging, and visibility, as well as implementing access controls, it allows administrators to increase the security of their clusters easily and quickly. By monitoring and tracing how applications communicate in your Kubernetes cluster, administrators can optimize internal communications and improve application performance. Reducing the time to learn, install, and configure makes Traefik Mesh easier to implement and lets administrators focus on their business applications. Being open source means that there is no vendor lock-in, as Traefik Mesh is opt-in by design.
  • 9
    VMware Avi Load Balancer
    Simplify application delivery with software-defined load balancers, web application firewall, and container ingress services for any application in any data center and cloud. Simplify administration with centralized policies and operational consistency across on-premises data centers, and hybrid and public clouds, including VMware Cloud (VMC on AWS, OCVS, AVS, GCVE), AWS, Azure, Google, and Oracle Cloud. Free infrastructure teams from manual tasks and enable DevOps teams with self-service. Application delivery automation toolkits include Python SDK, RESTful APIs, Ansible and Terraform integrations. Gain unprecedented insights, including network, end users and security, with real-time application performance monitoring, closed-loop analytics and deep machine learning.
  • 10
    Meshery

    Describe all of your cloud native infrastructure and manage it as a pattern. Design your service mesh configuration and workload deployments. Apply intelligent canary strategies and performance profiles with service mesh pattern management. Assess your service mesh configuration against deployment and operational best practices with Meshery's configuration validator. Validate your service mesh's conformance to Service Mesh Interface (SMI) specifications. Dynamically load and manage your own WebAssembly filters in Envoy-based service meshes. Service mesh adapters provision, configure, and manage their respective service meshes.
  • 11
    Envoy

    Envoy Proxy

    As on-the-ground microservice practitioners quickly realize, the majority of operational problems that arise when moving to a distributed architecture are ultimately grounded in two areas: networking and observability. It is simply an orders-of-magnitude larger problem to network and debug a set of intertwined distributed services versus a single monolithic application. Envoy is a self-contained, high-performance server with a small memory footprint. It runs alongside any application language or framework. Envoy supports advanced load balancing features including automatic retries, circuit breaking, global rate limiting, request shadowing, and zone-local load balancing. Envoy provides robust APIs for dynamically managing its configuration.
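    Circuit breaking, one of the features listed above, can be sketched in a few lines: after a run of consecutive failures the circuit opens and requests are rejected until a reset window elapses. This is an illustrative model of the pattern only; Envoy configures its thresholds declaratively rather than in code, and the class and parameter names here are assumptions.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker in the spirit of the pattern Envoy implements."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def allow(self):
        # Closed circuit: requests flow. Open circuit: reject until the
        # reset window elapses, then permit a trial request again.
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record(self, success):
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()

cb = CircuitBreaker(max_failures=2, reset_after=60)
for ok in (False, False):
    cb.record(ok)
print(cb.allow())  # circuit is now open, so the next request is rejected: False
```

    In a real deployment the breaker sits in the proxy, so applications get this protection without code changes.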
  • 12
    IBM Cloud Managed Istio
    Istio is an open technology that provides a way for developers to seamlessly connect, manage, and secure networks of different microservices — regardless of platform, source, or vendor. Istio is currently one of the fastest-growing open source projects based on GitHub contributors, and its strength is its community. IBM is proud to be a founder and contributor of the Istio project and a leader of Istio working groups. Istio on IBM Cloud Kubernetes Service is offered as a managed add-on that integrates Istio directly with your Kubernetes cluster. A single click deploys a tuned, production-ready Istio instance on your IBM Cloud Kubernetes Service cluster. A single click runs Istio core components and tracing, monitoring, and visualization tools. IBM Cloud updates all Istio components and manages the control-plane component's lifecycle.
  • 13
    Anthos Service Mesh
    Designing your applications as microservices provides many benefits. However, your workloads can become complex and fragmented as they scale. Anthos Service Mesh is Google's implementation of the powerful Istio open source project, which allows you to manage, observe, and secure services without having to change your application code. Anthos Service Mesh simplifies service delivery, from managing mesh telemetry and traffic to protecting communications between services, significantly reducing the burden on development and operations teams. Anthos Service Mesh is Google's fully managed service mesh, allowing you to easily manage these complex environments and reap all the benefits they offer. As a fully managed offering, Anthos Service Mesh takes the guesswork and effort out of purchasing and managing your service mesh solution. Focus on building great apps and let us take care of the mesh.
  • 14
    Istio

    Connect, secure, control, and observe services. Istio’s traffic routing rules let you easily control the flow of traffic and API calls between services. Istio simplifies configuration of service-level properties like circuit breakers, timeouts, and retries, and makes it easy to set up important tasks like A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits. It also provides out-of-box failure recovery features that help make your application more robust against failures of dependent services or the network. Istio Security provides a comprehensive security solution to solve these issues. This page gives an overview on how you can use Istio security features to secure your services, wherever you run them. In particular, Istio security mitigates both insider and external threats against your data, endpoints, communication, and platform. Istio generates detailed telemetry for all service communications within a mesh.
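    The percentage-based traffic splits described above amount to a weighted random choice between service subsets. The sketch below illustrates the idea with a hypothetical 90/10 canary; the names and weights are assumptions, not Istio configuration syntax.

```python
import random

def pick_subset(weights, rng=random):
    """Choose a service subset by percentage weight, as in a
    VirtualService-style traffic split (illustrative only)."""
    subsets = list(weights)
    return rng.choices(subsets, weights=[weights[s] for s in subsets])[0]

# 90/10 canary rollout between two versions of a service.
split = {"v1": 90, "v2": 10}
counts = {"v1": 0, "v2": 0}
rng = random.Random(42)  # seeded for reproducibility
for _ in range(10_000):
    counts[pick_subset(split, rng)] += 1
print(counts)  # roughly 9000 requests to v1 and 1000 to v2
```

    Raising the canary's weight in steps (10 → 50 → 100) is the staged-rollout workflow the mesh automates.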
  • 15
    Gloo Mesh

    solo.io

    Today's Kubernetes environments need help in scaling, securing, and observing modern cloud-native applications. Gloo Mesh, based on the industry's leading Istio service mesh, simplifies multi-cloud and multi-cluster management of service mesh for containers and virtual machines. Gloo Mesh helps platform engineering teams to reduce costs, reduce risks, and improve application agility. Gloo Mesh is a modular component of Gloo Platform. The service mesh allows for application-aware network tasks to be managed independently from the application, adding observability, security, and reliability to distributed applications. By introducing the service mesh to your applications, you can simplify the application layer, gain more insight into your traffic, and increase the security of your applications.
  • 16
    Kiali

    Kiali is a management console for the Istio service mesh. Kiali can be quickly installed as an Istio add-on or trusted as a part of your production environment. Use Kiali wizards to generate application and request routing configuration. Kiali provides actions to create, update, and delete Istio configuration, driven by wizards. Kiali offers a robust set of service actions, with accompanying wizards. Kiali provides list and detailed views for your mesh components, and filtered list views of all your service mesh definitions. Each view provides health, details, YAML definitions, and links to help you visualize your mesh. Overview is the default tab for any detail page. The overview tab provides detailed information, including health status and a detailed mini-graph of the current traffic involving the component. The full set of tabs, as well as the detailed information, varies based on the component type.
  • 17
    Huawei Elastic Load Balance (ELB)
    Elastic Load Balance (ELB) automatically distributes incoming traffic across multiple servers to balance their workloads, increasing service capabilities and fault tolerance of your applications. ELB can establish up to 100 million concurrent connections and meet your requirements for handling huge numbers of concurrent requests. ELB is deployed in cluster mode and ensures that your services are uninterrupted. If servers in an AZ are unhealthy, ELB automatically routes traffic to healthy servers in other AZs. ELB makes sure that your applications always have enough capacity for varying levels of workloads. It works with Auto Scaling to flexibly adjust the number of servers and intelligently distribute incoming traffic across servers. A diverse set of protocols and algorithms enable you to configure traffic routing policies to suit your needs while keeping deployments simple.
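    The health-check behavior described above (skip backends that fail their check, keep the rest in rotation) can be sketched as a round-robin generator. The names and addresses are illustrative, not Huawei's API.

```python
import itertools

def healthy_round_robin(servers, health):
    """Round-robin over servers, skipping any that fail their health check."""
    pool = itertools.cycle(servers)
    while True:
        for _ in range(len(servers)):
            server = next(pool)
            if health(server):
                yield server
                break
        else:
            raise RuntimeError("no healthy backends")

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
down = {"10.0.0.2"}  # simulated failed health check
lb = healthy_round_robin(backends, lambda s: s not in down)
print([next(lb) for _ in range(4)])  # ['10.0.0.1', '10.0.0.3', '10.0.0.1', '10.0.0.3']
```

    When `10.0.0.2` passes its checks again it simply rejoins the rotation, which is the recovery behavior the entry describes.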
  • 18
    AWS Elastic Load Balancing
    Elastic Load Balancing automatically routes incoming application traffic across multiple destinations, such as Amazon EC2 instances, containers, IP addresses, Lambda functions, and virtual appliances. You can control the variable load of your application traffic in a single zone or in multiple Availability Zones. Elastic Load Balancing offers four types of load balancers that have the necessary level of high availability, automatic scalability, and security to make your applications fault tolerant. Elastic Load Balancing is part of the AWS network, with native knowledge of fault limits like AZ to keep your applications available in one region, without requiring Global Server Load Balancing (GSLB). ELB is also a fully managed service, which means you can focus on delivering applications and not installing fleets of load balancers. Capacity is automatically added and removed based on the utilization of the underlying application servers.
    Starting Price: $0.027 USD per Load Balancer per hour
  • 19
    Calisti

    Cisco

    Calisti enables security, observability, and traffic management for microservices and cloud native applications, and allows admins to switch between live and historical views. Configuring Service Level Objectives (SLOs), burn rate, error budget, and compliance monitoring, Calisti sends a GraphQL alert to automatically scale based on SLO burn rate. Calisti manages microservices running on containers and virtual machines, allowing for application migration from VMs to containers in a phased manner. It reduces management overhead by applying policies consistently and meeting application Service Level Objectives across both K8s and VMs. Istio has new releases every three months. Calisti includes our Istio Operator that automates lifecycle management, and even enables canary deployment of the platform itself.
  • 20
    AWS App Mesh

    Amazon Web Services

    AWS App Mesh is a service mesh that provides application-level networking to facilitate communication between your services across various types of computing infrastructure. App Mesh offers comprehensive visibility and high availability for your applications. Modern applications are generally made up of multiple services. Each service can be developed using various types of compute infrastructure, such as Amazon EC2, Amazon ECS, Amazon EKS, and AWS Fargate. As the number of services within an application grows, it becomes difficult to pinpoint the exact location of errors, redirect traffic after errors, and safely implement code changes. Previously, this required creating monitoring and control logic directly in your code and redeploying your services every time there were changes.
  • 21
    greymatter.io

    Maximize your resources. Ensure optimal use of your clouds, platforms, and software. This is application and API network operations management redefined. The same governance rules, observability, auditing, and policy control for every application, API, and network across your multi-cloud, data center, and edge environments, all in one place. Zero-trust micro-segmentation, omni-directional traffic splitting, infrastructure-agnostic attestation, and traffic management to secure your resources. IT-informed decision-making is real. Application, API, and network monitoring and control generate massive IT operations data. Use it in real time through AI. Logging, metrics, tracing, and audits through Grey Matter simplify integration and standardize aggregation for all IT operations data. Fully leverage your mesh telemetry and securely and flexibly future-proof your hybrid infrastructure.
  • 22
    AppScaler

    XPoint Network

    What does AppScaler CMS do? Managing, monitoring, and reporting on growing distributed networks is increasingly complex and costly. AppScaler CMS lets you manage one or more AppScaler devices from a single management server, giving organizations, distributed enterprises, and service providers a powerful and intuitive solution to centrally manage and rapidly deploy AppScaler devices, with centralized, real-time monitoring and comprehensive application performance reporting. Central AppScaler policy management ensures governance and compliance with centrally managed configuration: import the configuration from an AppScaler device in one click, comprehensive policy management of load balancing on each AppScaler device, configuration backup and restore, and AppScaler firmware upgrades. AppScaler CMS also provides fine-grained, role-based access control with which you can grant access permissions.
  • 23
    Netmaker

    Netmaker is an open source tool based on the groundbreaking WireGuard protocol. Netmaker unifies distributed environments with ease, from multi-cloud to Kubernetes. Netmaker enhances Kubernetes clusters by providing flexible and secure networking for cross-environment scenarios. Netmaker uses WireGuard for modern, secure encryption. It is built with zero trust in mind, utilizes access control lists, and follows leading industry standards for secure networking. Netmaker enables you to create relays, gateways, full VPN meshes, and even zero trust networks. Netmaker is fully configurable to let you maximize the power of WireGuard.
  • 24
    HAProxy ALOHA

    HAProxy Technologies

    A plug-and-play hardware or virtual load balancer based on HAProxy Enterprise that supports proxying at Layer 4 and Layer 7. Its simple graphical interface, easy installation, and no limit on backend servers make it ideal for companies looking for a dedicated system to ensure high-performance load distribution for critical services. The ALOHA Hardware Load Balancer adds patented PacketShield technology, providing protocol-level DDoS protection that filters illegitimate traffic at line rate, outperforming other types of firewalls. The modern enterprise demands reliable performance, ease of integration, advanced security, and extensible features. The HAProxy ALOHA Hardware Load Balancer gives enterprises an incredibly powerful, plug-and-play appliance that can be deployed in any environment. HAProxy ALOHA’s simple graphical interface coupled with an advanced templating system makes it painless to deploy and configure.
  • 25
    A10 Thunder ADC

    A10 Networks

    High-performance advanced load balancing solution that enables your applications to be highly available, accelerated, and secure. Ensure efficient and reliable application delivery across multiple datacenters and cloud. Minimize latency and downtime, and enhance end-user experience. Increase application security with advanced SSL/TLS offload, single sign-on (SSO), DDoS protection and Web Application Firewall (WAF) capabilities. Integrate with the Harmony™ Controller to gain deep per-application visibility and comprehensive controls for secure application delivery across on-premises datacenters, public, private and hybrid clouds. Complete full-proxy Layer 4 load balancer and Layer 7 load balancer with flexible aFleX® scripting and customizable server health checks. High performance SSL Offload with up-to-date SSL/TLS ciphers enabling optimized and secure application service. Global Server Load Balancing (GSLB) extends load balancing on a global basis.
  • 26
    AVANU WebMux
    AVANU’s WebMux Network Traffic Manager (“WebMux”) is a cost-effective, full-featured, enterprise-class load balancing solution. WebMux integrates application delivery network (ADN) and global server load balancing (GSLB) with its built-in FireEdge™ for Apps Web Application Firewall (WAF). In development since 1987, WebMux is built on intensive algorithms for sophisticated network designs that require load balancing flexibility to meet and manage the most stringent network traffic demands. It manages, controls, and secures local network traffic for high availability of applications, assuring reliable peak performance with geographic disaster recovery, affinity services, and enhanced application security firewall features. The user-friendly menu-driven interface makes WebMux fast to deploy and easy to manage.
  • 27
    Alibaba Cloud Server Load Balancer (SLB)
    Server Load Balancer (SLB) provides disaster recovery at four levels for high availability. CLB and ALB support built-in Anti-DDoS services to ensure business security. In addition, you can integrate ALB with WAF in the console to ensure security at the application layer. ALB and CLB support cloud-native networks. ALB is integrated with other cloud-native services, such as Container Service for Kubernetes (ACK), Serverless App Engine (SAE), and Kubernetes, and functions as a cloud-native gateway to distribute inbound network traffic. SLB regularly monitors the condition of backend servers and does not distribute network traffic to unhealthy backend servers, ensuring availability. Server Load Balancer (SLB) supports cluster deployment and session synchronization. You can perform hot upgrades and monitor the health and performance of machines in real time. It supports multi-zone deployment in specific regions to provide zone-disaster recovery.
  • 28
    F5 Aspen Mesh
    F5 Aspen Mesh empowers companies to drive more performance from their modern app environment by leveraging the power of their service mesh. As part of F5, Aspen Mesh is focused on delivering enterprise-class products that enhance companies’ modern app environments. Deliver new and differentiating features faster with microservices. Aspen Mesh lets you do that at scale, with confidence. Reduce the risk of downtime and improve your customers’ experience. If you’re scaling microservices to production on Kubernetes, Aspen Mesh will help you get the most out of your distributed systems. Alerts based on data and machine learning models decrease the risk of application failure or performance degradation. Secure Ingress safely exposes enterprise apps to customers and the web.
  • 29
    Azure Application Gateway
    Protect your applications from common web vulnerabilities such as SQL injection and cross-site scripting. Monitor your web applications using custom rules and rule groups to suit your requirements and eliminate false positives. Get application-level load-balancing services and routing to build a scalable and highly available web front end in Azure. Autoscaling offers elasticity by automatically scaling Application Gateway instances based on your web application traffic load. Application Gateway is integrated with several Azure services. Azure Traffic Manager supports multiple-region redirection, automatic failover, and zero-downtime maintenance. Use Azure Virtual Machines, virtual machine scale sets, or the Web Apps feature of Azure App Service in your back-end pools. Azure Monitor and Azure Security Center provide centralized monitoring and alerting, and an application health dashboard. Key Vault offers central management and automatic renewal of SSL certificates.
    Starting Price: $18.25 per month
  • 30
    Traefik

    Traefik Labs

    What is Traefik Enterprise Edition? TraefikEE is a cloud-native load balancer and Kubernetes ingress controller that eases networking complexity for application teams. Built on top of open source Traefik, TraefikEE brings exclusive distributed and high-availability features combined with premium bundled support for production-grade deployments. Split into proxies and controllers, TraefikEE supports clustered deployments to increase security, scalability, and high availability. Deploy applications anywhere, on-premises or in the cloud, and natively integrate with top-notch infrastructure tooling. Save time and gain better consistency while deploying, managing, and scaling applications by leveraging dynamic and automatic TraefikEE features. Improve the application development and delivery cycle by giving developers visibility into, and ownership of, their services.
  • 31
    Reblaze

    Reblaze is the leading provider of cloud-native web application and API protection, providing a fully managed security platform. Reblaze’s all-in-one solution supports flexible deployment options (cloud, multi-cloud, hybrid, data center and service mesh), deployed in minutes and includes state-of-the-art Bot Management, API Security, next-gen WAF, DDoS protection, advanced rate limiting, session profiling, and more. Unprecedented real time traffic visibility as well as highly granular policies enables full control of your web traffic. Machine learning provides accurate, adaptive threat detection, while dedicated VPC deployment ensures maximum privacy, performance and protection while minimizing overhead costs. Reblaze customers include Fortune 500 companies and innovative organizations across the globe.
  • 32
    OVH Load Balancer
    All our Cloud products can be scaled up or out with no constraints, in all our data centers. The OVH Load Balancer distributes the workload among your various services across our data centers. It ensures the scaling of your infrastructure in the event of heavy traffic, with optimized fault tolerance and response time. All this with a service level aiming for zero downtime. Configure and monitor your infrastructures from A to Z with our control panel. Let's Encrypt DV SSL certificates are now included in all of our Load Balancer solutions – completely free – and will activate HTTPS protocol by default. The Anycast DNS system means that the server nearest to your user’s location will load your website, improving load times. Use metrics to monitor your Load Balancer's load and outgoing requests to your servers. This information can then be used to maximise your system’s performance!
    Starting Price: $22.99 per month
  • 33
    Yandex Network Load Balancer
    Load Balancer uses technologies running on Layer 4 of the OSI model. This lets you process network packets with minimum delay. You set rules for TCP or HTTP checks and load balancers monitor the status of cloud resources. Resources that fail the check aren’t used. You pay for the number of load balancers and the amount of incoming traffic. Outgoing traffic is charged the same as other Yandex Cloud services. Load balancers distribute load based on the client address and port, resource availability, and network protocol. If the instance group parameters or members change, the load balancer adjusts automatically. When incoming traffic changes abruptly, you don’t need to reconfigure the load balancers.
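    Distributing load "based on the client address and port" is typically a flow hash: the same (address, port) pair deterministically maps to the same backend, so packets of one connection stay together. A minimal sketch, assuming a SHA-256 hash over the pair (the backend names are illustrative, and this is not Yandex's published algorithm):

```python
import hashlib

def pick_backend(client_ip, client_port, backends):
    """Hash the client address and port onto a backend, so every packet
    of the same flow lands on the same server."""
    key = f"{client_ip}:{client_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]

backends = ["vm-a", "vm-b", "vm-c"]
first = pick_backend("198.51.100.7", 51334, backends)
again = pick_backend("198.51.100.7", 51334, backends)
print(first == again)  # the same flow always maps to the same backend: True
```

    Note that with plain modulo hashing a change in the backend list remaps many flows; production balancers combine this idea with health checks and smoother rebalancing.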
  • 34
    Azure Load Balancer
    Load-balance internet and private network traffic with high performance and low latency. Instantly add scale to your applications and enable high availability. Load Balancer works across virtual machines, virtual machine scale sets, and IP addresses. Equipped for load-balancing network layer traffic when high performance and super-low latency are needed. Standard Load Balancer routes traffic within and across regions, and to availability zones for high resiliency. Create highly available and scalable apps in minutes with built-in application load balancing for cloud services and virtual machines. Load Balancer supports TCP/UDP-based protocols such as HTTP, HTTPS, and SMTP, and protocols used for real-time voice and video messaging applications. Manage traffic between virtual machines inside your private virtual networks, or use it to create multiple-tiered hybrid applications.
  • 35
    Oracle Cloud Infrastructure Load Balancing
    Oracle Cloud Infrastructure (OCI) Flexible Load Balancing enables customers to distribute web requests across a fleet of servers or automatically route traffic across fault domains, availability domains, or regions—yielding high availability and fault tolerance for any application or data source. The portfolio comprises two services—Oracle Cloud Infrastructure Flexible Load Balancer (OCI Load Balancer) and Oracle Cloud Infrastructure Flexible Network Load Balancer (OCI Network Load Balancer). OCI Flexible Load Balancer primarily manages HTTP/HTTPS traffic and provides advanced routing features that distribute the requests based on the requests’ contents. In contrast, OCI Flexible Network Load Balancer performs at low latency, offering extreme performance. OCI Flexible Load Balancer offers a public IP address to front-end internet traffic within a single availability domain or across regions, ensuring applications are always available during peak demand.
    Starting Price: $0.0243 per hour
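The content-based routing that OCI Flexible Load Balancer is described as providing (distributing requests based on the requests' contents) boils down to matching request attributes against ordered rules. A minimal, hypothetical Python sketch of that idea, not OCI's actual rule syntax:

```python
def route_request(path, headers, rules, default_pool):
    """Return the backend pool whose rule first matches the request.
    Rules are (path_prefix, required_header, pool) tuples; a required
    header of None means 'any headers'."""
    for prefix, header, pool in rules:
        if not path.startswith(prefix):
            continue
        if header is not None and header not in headers:
            continue
        return pool
    return default_pool
```

Rule order matters: the first match wins, and unmatched requests fall through to the default pool.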
  • 36
    F5 Distributed Cloud DNS Load Balancer
Leverage an expertly engineered global load balancing platform on infrastructure that ensures fast performance. The DNS is fully configurable via APIs, with DDoS protection and no appliances to manage. Direct traffic to the nearest application instance and/or route traffic for GDPR compliance. Split loads across compute instances. Detect failed or degraded resource instances and reroute clients. Maintain high availability with disaster recovery. Automatically detect primary site failures, get zero-touch failover, and dynamically fail applications over to designated or available instances. Simplify cloud-based DNS management and load balancing and get disaster recovery to ease the burden on your operations and development teams. F5’s cloud-based, intelligent DNS with global server load balancing (GSLB) efficiently directs application traffic across environments globally, performs health checks, and automates responses to activities and events to maintain high performance among apps.
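The GSLB behavior described here (answer DNS queries with the nearest application instance and fail over when a site goes down) can be sketched in a few lines of Python. This is a conceptual illustration under assumed data structures, not F5's API.

```python
def gslb_answer(regions, healthy):
    """DNS-style GSLB decision: answer with the closest healthy region's VIP.
    `regions` maps region name -> (distance_from_client, vip), where the
    distance is assumed to be precomputed for the querying client."""
    candidates = [(dist, name, vip) for name, (dist, vip) in regions.items()
                  if healthy.get(name, False)]
    if not candidates:
        raise RuntimeError("all regions down")
    candidates.sort()  # nearest healthy region first
    _, name, vip = candidates[0]
    return name, vip
```

When the nearest region fails its health check, the next-closest healthy region's VIP is returned automatically, which is the essence of zero-touch failover.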
  • 37
    Google Cloud Load Balancer
    Scale your applications on Compute Engine from zero to full throttle with Cloud Load Balancing, with no pre-warming needed. Distribute your load-balanced compute resources in single or multiple regions—close to your users—and to meet your high availability requirements. Cloud Load Balancing can put your resources behind a single anycast IP and scale your resources up or down with intelligent autoscaling. Cloud Load Balancing comes in a variety of flavors and is integrated with Cloud CDN for optimal application and content delivery. With Cloud Load Balancing, a single anycast IP front-ends all your backend instances in regions around the world. It provides cross-region load balancing, including automatic multi-region failover, which gently moves traffic in fractions if backends become unhealthy. In contrast to DNS-based global load balancing solutions, Cloud Load Balancing reacts instantaneously to changes in users, traffic, network, backend health, and other related conditions.
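The "gently moves traffic in fractions" behavior mentioned above, shifting load away from unhealthy backends gradually rather than all at once, can be illustrated with a small Python sketch. The step size and structure here are hypothetical, not Google's actual algorithm.

```python
def shift_weights(weights, unhealthy, step=0.25):
    """Move a fraction of the unhealthy region's traffic weight to the
    remaining regions each evaluation interval, instead of a hard cutover."""
    moved = weights[unhealthy] * step
    weights = dict(weights)  # work on a copy
    weights[unhealthy] -= moved
    others = [r for r in weights if r != unhealthy]
    for r in others:
        weights[r] += moved / len(others)
    return weights
```

Repeated applications drain the unhealthy region smoothly while the total traffic weight stays constant.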
  • 38
    PowerVille LB
The Dialogic® PowerVille™ LB is a software-based, high-performance, cloud-ready, purpose-built, and fully optimized network traffic load balancer designed to meet the challenges of today’s demanding real-time communication infrastructure in both carrier and enterprise applications. Automatic load balancing for a variety of services including database, SIP, Web, and generic TCP traffic across a cluster of applications. High availability, intelligent failover, contextual awareness, and call state awareness features increase uptime. Efficient load balancing, resource assignment, and failover allow for full utilization of available network resources, reducing costs without sacrificing reliability. Software agility and a powerful management interface reduce the effort and costs of operations and maintenance.
  • 39
PAS-K

PIOLINK

PAS-K, the ADC from PIOLINK, is ideal for organizations in finance, education, the public sector, and telecommunications, with the high performance to accelerate application delivery and reinforce security. PAS-K provides load balancing to distribute traffic across servers, firewalls, and VPNs, maintaining server resources and service stability. GSLB (Global Server Load Balancing) in PAS-K is especially suited to building a disaster recovery center or a cloud data center. Guarantee your business continuity with flexible high-availability modes. PAS-K provides advanced acceleration features such as memory caching, compression, FEO, and SSL offloading to enhance service quality and reduce server overload. The PAS-K series secures your data and systems against various DDoS threats such as HTTP DDoS and SYN flood attacks, with SYN cookie mitigation. It also supports basic network firewall features, such as filtering, to secure your network.
  • 40
    Yandex Application Load Balancer
Application Load Balancer runs on OSI Layer 7, and helps use HTTP request attributes to distribute traffic and form or modify HTTP responses. All requests to your apps are recorded, and you can analyze events in the load balancer’s access logs. Distribute your cloud resources to multiple geographically distributed availability zones, and maintain your applications' availability even if one of the zones becomes unavailable. Use different load balancers for different applications: if you use the Yandex Cloud infrastructure to deploy multiple applications, configure L4 and L7 load balancers to service them. Create backends for new app versions and shift the load between them in the HTTP router, changing the weight of the old and new backends.
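The weighted old/new backend scheme described above (shifting load by changing backend weights) amounts to a weighted random pick. A minimal Python sketch of the idea, with hypothetical version names and weights, not Yandex's actual HTTP router configuration:

```python
import random

def choose_backend(weighted_backends, rng=random):
    """Pick a backend with probability proportional to its weight,
    e.g. the old version at weight 90 and the new version at weight 10."""
    names = list(weighted_backends)
    weights = [weighted_backends[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]
```

Raising the new backend's weight step by step performs a gradual canary rollout; setting the old weight to zero completes the cutover.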
  • 41
ServiceStage

Huawei Cloud

Deploys your applications using containers, VMs, or serverless, and easily implements auto scaling, performance analysis, and fault diagnosis. Supports native Spring Cloud and Dubbo frameworks and Service Mesh, provides all-scenario capabilities, and supports mainstream languages such as Java, Go, PHP, Node.js, and Python. Supports cloud-native transformation of Huawei core services, meeting strict performance, usability, and security compliance requirements. Development frameworks, running environments, and common components are available for web, microservice, mobile, and AI applications. Full management of applications throughout the entire process, including deployment and upgrade. Monitoring, events, alarms, logs, tracing diagnosis, and built-in AI capabilities make O&M easy. Creates a flexibly customizable application delivery pipeline with only a few clicks.
    Starting Price: $0.03 per hour-instance
  • 42
Network Service Mesh

A common flat vL3 domain allowing DBs running in multiple clusters/clouds/hybrid to communicate with each other for DB replication. Workloads from multiple companies connecting to a single ‘collaborative’ Service Mesh for cross-company interactions. Each workload has a single option of which connectivity domain to be connected to, and only workloads in a given runtime domain can be part of its connectivity domain. In short: Connectivity Domains are Strongly Coupled to Runtime Domains. A central tenet of Cloud Native is Loose Coupling. In a Loosely Coupled system, each workload preserves the ability to receive service from alternative providers. What Runtime Domain a workload is running in is irrelevant to its communication needs. Workloads that are part of the same App need Connectivity between each other no matter where they are running.
  • 43
    Barracuda Load Balancer ADC
The Barracuda Load Balancer ADC is ideal for organizations looking for a high-performance, yet cost-effective application delivery and security solution. Highly demanding enterprise networks require a full-featured application delivery controller that optimizes application load balancing and performance while providing protection from an ever-expanding list of intrusions and attacks. The Barracuda Load Balancer ADC is a Secure Application Delivery Controller that enables Application Availability, Acceleration and Control, while providing Application Security Capabilities. Available in hardware, virtual and cloud instances, the Barracuda Load Balancer ADC provides advanced Layer 4 and Layer 7 load balancing with SSL Offloading and Application Acceleration. The built-in Global Server Load Balancing (GSLB) module allows you to deploy your applications across multiple geo-dispersed locations. The Application Security module ensures comprehensive web application protection.
    Starting Price: $1499.00/one-time
  • 44
KubeSphere

KubeSphere is a distributed operating system for cloud-native application management, using Kubernetes as its kernel. It provides a plug-and-play architecture, allowing third-party applications to be seamlessly integrated into its ecosystem. KubeSphere is also a multi-tenant enterprise-grade open-source Kubernetes container platform with full-stack automated IT operations and streamlined DevOps workflows. It provides a developer-friendly wizard web UI, helping enterprises to build out a more robust and feature-rich Kubernetes platform, which includes the most common functionalities needed for enterprise Kubernetes strategies. A CNCF-certified Kubernetes platform, 100% open-source, built and improved by the community. It can be deployed on an existing Kubernetes cluster or on Linux machines, and supports both online and air-gapped installation. Delivers DevOps, service mesh, observability, application management, multi-tenancy, storage, and networking management in a unified platform.
  • 45
Eddie

Eddie is a high availability clustering tool. It is an open source, 100% software solution written primarily in the functional programming language Erlang (www.erlang.org) and is available for Solaris, Linux and *BSD. At each site, certain servers are designated as Front End Servers. These servers are responsible for controlling and distributing incoming traffic across designated Back End Servers, and tracking the availability of Back End Web Servers within the site. Back End Servers may support a range of Web servers, including Apache. The Enhanced DNS server provides load balancing and monitoring of site accessibility for geographically distributed web sites. This gives round-the-clock access to the entire available capacity of the web site, no matter where it is located. The Eddie white papers describe the need for products such as Eddie and outline the Eddie approach.
  • 46
    F5 NGINX Ingress Controller
    Streamline and simplify Kubernetes (north-south) network traffic management, delivering consistent, predictable performance at scale without slowing down your apps. Advanced app‑centric configuration – Use role‑based access control (RBAC) and self‑service to set up security guardrails (not gates), so your teams can manage their apps securely and with agility. Enable multi‑tenancy, reusability, simpler configs, and more. A native, type‑safe, and indented configuration style to simplify capabilities like circuit breaking, sophisticated routing, header manipulation, mTLS authentication, and WAF. Plus if you’re already using NGINX, NGINX Ingress resources make it easy to adapt existing configuration from your other environments.
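Among the capabilities listed above, circuit breaking is worth unpacking: after repeated upstream failures, the proxy stops sending traffic for a cooldown period instead of hammering a dead service. A minimal Python sketch of the pattern (an illustration of the general technique, not NGINX's implementation; the thresholds are hypothetical):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `threshold` consecutive failures the
    circuit opens and calls are rejected until `cooldown` seconds pass."""
    def __init__(self, threshold=3, cooldown=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.cooldown = cooldown
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        if self.clock() - self.opened_at >= self.cooldown:
            # Half-open: let one probe request through after the cooldown.
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record(self, success):
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()
```

The injectable `clock` keeps the sketch testable; in a real proxy the open/half-open/closed transitions are driven by live request outcomes.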
  • 47
    Cisco Service Mesh Manager
With the accelerating demand for digital transformation, businesses are increasingly adopting cloud-native architectures. Microservice-based applications are created with software functionality spread across multiple services that are independently deployable, easier to maintain and test, and can be more rapidly updated.
  • 48
Openmix

Citrix Systems

Citrix Intelligent Traffic Management builds tools for large websites to efficiently use multi-vendor sourcing of data centers, cloud providers, and content delivery networks. Intelligent Traffic Management has combined real-user performance monitoring (or RUM) and data-driven DNS or API-based global load balancing into a unified service. The Intelligent Traffic Management platform is unique in that it employs end-user-based probes to collect real-time information from clients, along with a programmable decision engine (called Openmix) that can make use of this data. The Openmix service allows highly flexible load-balancing decisions to be made dynamically, based on a wide variety of real-time data feeds including the data you provide.
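The RUM-driven decision described above, picking a delivery provider based on what real users actually measured, reduces to scoring providers on collected samples. A minimal, hypothetical Python sketch of an Openmix-style decision (the scoring rule and data shape are assumptions, not the product's actual logic):

```python
from statistics import median

def pick_provider(rum_samples):
    """Choose the delivery provider with the lowest median latency (ms)
    as reported by real-user measurements."""
    scored = {provider: median(samples)
              for provider, samples in rum_samples.items() if samples}
    if not scored:
        raise RuntimeError("no measurements available")
    # Median is robust to the occasional outlier measurement.
    return min(scored, key=scored.get)
```

Using the median rather than the mean keeps one slow outlier sample from flipping the decision.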
  • 49
    F5 NGINX Plus
The software load balancer, reverse proxy, web server, & content cache with the enterprise features and support you expect. Modern app infrastructure and dev teams love NGINX Plus. More than just the fastest web server around, NGINX Plus brings you everything you love about NGINX Open Source, adding enterprise‑grade features like high availability, active health checks, DNS service discovery, session persistence, and a RESTful API. NGINX Plus is a cloud‑native, easy-to-use reverse proxy, load balancer, and API gateway. Whether you need to integrate advanced monitoring, strengthen security controls, or orchestrate Kubernetes containers, NGINX Plus delivers the five‑star support you expect from NGINX. NGINX Plus provides scalable and reliable high availability along with monitoring to support debugging and diagnosing complex application architectures. Active health checks proactively poll upstream server status to get ahead of issues.
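Active health checks, as described in the last sentence, proactively probe upstreams instead of waiting for client requests to fail. A minimal Python sketch of a TCP-connect probe loop (a generic illustration of the technique, not NGINX's health-check module; the injectable connector is for testability):

```python
import socket

def check_upstreams(upstreams, timeout=1.0, connector=socket.create_connection):
    """Probe each (host, port) upstream with a TCP connect and return a dict
    marking which servers are currently reachable."""
    status = {}
    for host, port in upstreams:
        try:
            conn = connector((host, port), timeout=timeout)
            conn.close()
            status[(host, port)] = True
        except OSError:
            status[(host, port)] = False
    return status
```

A real implementation would run this on a timer and require several consecutive failures before marking a server down, to avoid flapping.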
  • 50
    NFWare Virtual Load Balancer
NFWare Virtual Load Balancer is a next-gen software load balancer that significantly decreases network loads and prevents harmful DDoS attacks. It demonstrates high performance while running on standard x86 servers. The speed ensures optimal hardware utilization, critical for high-load deployments. The NFWare Load Balancer can be easily integrated into any virtual or cloud infrastructure, or run on bare metal. Starting from scratch, we redefined the architecture of the NFWare solution to design a high-performance software product that can handle the world's busiest websites. Meet the fastest software network load balancer, which provides reliable and sophisticated load balancing capabilities, can process a huge amount of traffic, and enables ultimate economy thanks to its software nature.