Release v2.9.11
Important: Review the Install/Upgrade Notes before upgrading to any Rancher version.
Rancher v2.9.11 is the latest patch release of Rancher v2.9. This is a Prime version release that introduces maintenance updates and bug fixes. To learn more about Rancher Prime, see our page on the Rancher Prime Platform.
For more information on new features in the general minor release, see the v2.9.0 release notes.
Changes Since v2.9.10
See the full list of changes.
Install/Upgrade Notes
- If you're installing Rancher for the first time, your environment must fulfill the installation requirements.
Upgrade Requirements
- Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
- CNI requirements:
- For Kubernetes v1.19 and later, disable firewalld, as it's incompatible with various CNI plugins (a disable command appears in the sketch after this list). See #28840.
- When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0. Flannel v0.13.0 supports nf_tables. See Flannel #1317.
- Requirements for air-gapped environments:
  - When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to `NO_PROXY`. See the documentation and issue #2725.
  - When installing Rancher with Docker in an air-gapped environment, you must supply a custom `registries.yaml` file to the `docker run` command, as shown in the K3s documentation (see the sketch after this list). If the registry has certificates, then you'll also need to supply those. See #28969.
- Requirements for general Docker installs:
  - When starting the Rancher Docker container, you must use the `privileged` flag. See documentation.
  - When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #33685.
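As a rough sketch of the notes above, the commands below disable firewalld on a systemd-based node and start an air-gapped Rancher Docker install with a custom `registries.yaml`. The registry hostname, file paths, and CA locations are illustrative assumptions, not values from this release.

```shell
# Disable firewalld (incompatible with various CNI plugins; see #28840).
sudo systemctl disable --now firewalld

# Hypothetical private registry mirror, in the K3s registries.yaml format.
cat > /opt/rancher/registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"      # assumed internal mirror
configs:
  "registry.example.com:5000":
    tls:
      ca_file: /etc/rancher/ssl/registry-ca.pem  # assumed CA path in the container
EOF

# Start Rancher, mounting the registries.yaml (and the registry CA) into the container.
docker run -d --restart=unless-stopped --privileged \
  -p 80:80 -p 443:443 \
  -v /opt/rancher/registries.yaml:/etc/rancher/k3s/registries.yaml \
  -v /opt/rancher/registry-ca.pem:/etc/rancher/ssl/registry-ca.pem \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=registry.example.com:5000 \
  registry.example.com:5000/rancher/rancher:v2.9.11
```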
Versions
Please refer to the README for the latest and stable Rancher versions.
Please review our version documentation for more details on versioning and tagging conventions.
Important: With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.
Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s.
RKE2 and K3s provide stronger security, and move away from the upstream-deprecated Docker Machine. Learn more about replatforming here.
Images
- `rancher/rancher:v2.9.11`
Tools
Kubernetes Versions for RKE
- v1.30.14 (Default)
- v1.29.15
- v1.28.15
- v1.27.16
Kubernetes Versions for RKE2/K3s
- v1.30.14 (Default)
- v1.29.15
- v1.28.15
- v1.27.16
Rancher Helm Chart Versions
In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This avoids simultaneous upstream changes and Rancher changes from causing conflicting version increments. It also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: `100.0.0+up2.1.0`. See #32294.
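For example, assuming the `rancher-charts` Helm repository has already been added locally (the repository name is an assumption), you can list chart versions and read the upstream version out of the SemVer build metadata:

```shell
# List all versions of a chart; the +upX.Y.Z suffix is the upstream chart version.
helm search repo rancher-charts/rancher-monitoring --versions
# Example output line (illustrative):
#   rancher-charts/rancher-monitoring  104.1.0+up57.0.3  ...
```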
Other Notes
Experimental Features
Rancher now supports the ability to use an OCI Helm chart registry for Apps & Marketplace. View the documentation on using OCI-based Helm chart repositories, and note that this feature is in an experimental stage. See #29105 and #45062.
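Outside of the Rancher UI, you can sanity-check that a chart is reachable over OCI with the Helm CLI; the registry URL, chart path, and version below are placeholders:

```shell
# Inspect a chart stored in an OCI registry (hypothetical registry and path).
helm show chart oci://registry.example.com/charts/my-app --version 1.2.3
```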
Deprecated Upstream Projects
In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #29306 for details.
Removed Legacy Features
Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.
Also, `rancher-external-dns` and `rancher-global-dns` have been deprecated as of the Rancher v2.7 line.
The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features was announced in previous releases. See #6864.
UI and Backend
- CIS Scans v1 (Cluster)
- Pipelines (Project)
- Istio v1 (Project)
- Logging v1 (Project)
- RancherD
UI
- Multiclusterapps (Global): Apps within the Multicluster Apps section of the Rancher UI.
Previous Rancher Behavior Changes
Previous Rancher Behavior Changes - Rancher General
- Rancher v2.9.0:
- Kubernetes v1.25 and v1.26 are no longer supported. Before you upgrade to Rancher v2.9.0, make sure that all clusters are running Kubernetes v1.27 or later. See #45882.
- The `external-rules` feature flag functionality is removed in Rancher v2.9.0, as the behavior is enabled by default. The feature flag is still present when upgrading from v2.8.5; however, enabling or disabling the feature won't have any effect. For more information, see CVE-2023-32196 and #45863.
- Rancher now validates the Container Default Resource Limit on Projects. Validation mimics the upstream behavior of the Kubernetes API server when it validates LimitRanges. The container default resource configuration must have properly formatted quantities for all requests and limits. Limits for any resource must not be less than requests. See #39700.
- Rancher v2.8.4:
- The controller now cleans up instances of `ClusterUserAttribute` that have no corresponding `UserAttribute`. See #44985.
- Rancher v2.8.3:
- When Rancher starts, it now identifies all deprecated and unrecognized setting resources and adds a `cattle.io/unknown` label. You can list these settings with the command `kubectl get settings -l 'cattle.io/unknown==true'` (see the sketch after this list). In Rancher v2.9 and later, these settings will be removed instead. See #43992.
- Rancher v2.8.0:
- Rancher Compose is no longer supported, and all parts of it are being removed in the v2.8 release line. See #43341.
- Kubernetes v1.23 and v1.24 are no longer supported. Before you upgrade to Rancher v2.8.0, make sure that all clusters are running Kubernetes v1.25 or later. See #42828.
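A minimal sketch of auditing those flagged settings before an upgrade; the `custom-columns` output assumes the `Setting` resource exposes a top-level `value` field, which is how Rancher settings are commonly structured:

```shell
# List deprecated/unrecognized settings flagged by Rancher at startup.
kubectl get settings -l 'cattle.io/unknown==true'

# Show each flagged setting alongside its current value for review.
kubectl get settings -l 'cattle.io/unknown==true' \
  -o custom-columns='NAME:.metadata.name,VALUE:.value'
```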
Previous Rancher Behavior Changes - Cluster Provisioning
- Rancher v2.8.4:
- Docker CLI 20.x is at end-of-life and no longer supported in Rancher. Please update your local Docker CLI versions to 23.0.x or later. Earlier versions may not recognize OCI compliant Rancher image manifests. See #45424.
- Rancher v2.8.0:
- Kontainer Engine v1 (KEv1) provisioning and the respective cluster drivers are now deprecated. KEv1 provided plug-ins for different targets using cluster drivers. The Rancher-maintained cluster drivers for EKS, GKE and AKS have been replaced by the hosted provider drivers, EKS-Operator, GKE-Operator and AKS-Operator. Node drivers are now available for self-managed Kubernetes.
- Rancher v2.7.2:
- When you provision a downstream cluster, the cluster's name must conform to RFC-1123. Previously, characters that did not follow the specification, such as `.`, were permitted and would result in clusters being provisioned without the necessary Fleet components. See #39248.
- Privilege escalation is disabled by default when creating deployments from the Rancher API. See #7165.
Previous Rancher Behavior Changes - RKE Provisioning
- Rancher v2.9.4:
- With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy. Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s. RKE2 and K3s provide stronger security, and move away from the upstream-deprecated Docker Machine. Learn more about replatforming here.
- Rancher v2.9.0:
- With the release of Rancher Kubernetes Engine (RKE) v1.6.0, RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy. Please note, EOL for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s. RKE2 and K3s provide stronger security, and move away from the upstream-deprecated Docker Machine. Learn more about replatforming at the official SUSE blog.
- Rancher has added support for external Azure cloud providers in downstream RKE clusters. Note that migration to an external Azure cloud provider is required when running Kubernetes v1.30 and recommended when running Kubernetes v1.29. See #44857.
- Weave CNI support for RKE clusters is removed, as Weave CNI is not supported by upstream Kubernetes v1.30 and later. See #45954.
- Rancher v2.8.0:
- Rancher no longer supports the Amazon Web Services (AWS) in-tree cloud provider for RKE clusters. This is in response to upstream Kubernetes removing the in-tree AWS provider in Kubernetes v1.27. You should instead use the out-of-tree AWS cloud provider for any Rancher-managed clusters running Kubernetes v1.27 or later. See #43175.
- The Weave CNI plugin for RKE v1.27 and later is now deprecated. Weave will be removed in RKE v1.30. See #42730.
Previous Rancher Behavior Changes - RKE2 Provisioning
- Rancher v2.9.2:
- Fixed an issue where downstream RKE2 clusters may become corrupted if KDM data (from the `rke-metadata-config` setting) is invalid. Note that, per the fix, these clusters' status may change to "Updating", with a message indicating that KDM data is missing, instead of the cluster status stating "Active". See #46855.
- Rancher v2.9.0:
- Rancher has added support for external Azure cloud providers in downstream RKE2 clusters. Note that migration to an external Azure cloud provider is required when running Kubernetes v1.30 and recommended when running Kubernetes v1.29. See #44856.
- Added a new annotation, `provisioning.cattle.io/allow-dynamic-schema-drop`. When set to `true`, it drops the `dynamicSchemaSpec` field from machine pool definitions. This prevents cluster nodes from re-provisioning unintentionally when the cluster object is updated from an external source such as Terraform or Fleet (see the sketch after this list). See #44618.
- Rancher v2.8.0:
- Rancher no longer supports the Amazon Web Services (AWS) in-tree cloud provider for RKE2 clusters. This is in response to upstream Kubernetes removing the in-tree AWS provider in Kubernetes v1.27. You should instead use the out-of-tree AWS cloud provider for any Rancher-managed clusters running Kubernetes v1.27 or later. See #42749.
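One way to set that annotation from the CLI; the cluster name is a placeholder, and `fleet-default` is the namespace Rancher typically uses for provisioned cluster objects:

```shell
# Drop dynamicSchemaSpec from machine pool definitions for a provisioned cluster.
kubectl annotate clusters.provisioning.cattle.io my-cluster \
  -n fleet-default \
  provisioning.cattle.io/allow-dynamic-schema-drop=true
```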
Previous Rancher Behavior Changes - Cluster API
- Rancher v2.7.7:
- The `cluster-api` core provider controllers run in a pod in the `cattle-provisioning-capi-system` namespace, within the local cluster. These controllers are installed with a Helm chart. Previously, Rancher ran `cluster-api` controllers in an embedded fashion. This change makes it easier to maintain `cluster-api` versioning. See #41094.
- The token hashing algorithm generates new tokens using SHA3. Existing tokens that don't use SHA3 won't be re-hashed. This change affects ClusterAuthTokens (the downstream synced version of tokens for ACE) and Tokens (only when token hashing is enabled). SHA3 tokens should work with ACE and token hashing. Tokens that don't use SHA3 may not work when ACE and token hashing are used in combination. If, after upgrading to Rancher v2.7.7, you experience issues with ACE while token hashing is enabled, re-generate any applicable tokens. See #42062.
Previous Rancher Behavior Changes - Rancher App (Global UI)
- Rancher v2.9.3:
- The performance of the Clusters lists on the Home page and in the Side Menu has greatly improved when there are hundreds of clusters. See #12050.
- Rancher v2.8.0:
- The built-in `restricted-admin` role is being deprecated in favor of a more flexible global role configuration, which is now available for different use cases other than only the `restricted-admin`. If you want to replicate the permissions given through this role, use the new `inheritedClusterRoles` feature to create a custom global role. A custom global role, like the `restricted-admin` role, grants permissions on all downstream clusters. See #42462. Given its deprecation, the `restricted-admin` role will continue to be included in future builds of Rancher through the v2.8.x and v2.9.x release lines. However, in accordance with the CVSS standard, only security issues scored as critical will be backported and fixed in the `restricted-admin` role until it is completely removed from Rancher.
- Reverse DNS server functionality has been removed. The associated `rancher/rdns-server` repository is now archived. Reverse DNS is already disabled by default.
- The Rancher CLI configuration file `~/.rancher/cli2.json` previously had permissions set to `0644`. Although `0644` would usually indicate that all users have read access to the file, the parent directory would block users' access. New Rancher CLI configuration files will only be readable by the owner (`0600`). Invoking the CLI will trigger a warning in case old configuration files are world-readable or group-readable (see the sketch after this list). See #42838.
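To silence that warning for a pre-existing configuration file, tightening the mode by hand is enough; a minimal sketch:

```shell
# Restrict the Rancher CLI config so only the owner can read or write it.
chmod 0600 ~/.rancher/cli2.json
```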
Previous Rancher Behavior Changes - Rancher App (Helm Chart)
- Rancher v2.7.0:
- When installing or upgrading an official Rancher Helm chart app in a RKE2/K3s cluster, if a private registry exists in the cluster configuration, that registry will be used for pulling images. If no cluster-scoped registry is found, the global container registry will be used. A custom default registry can be specified during the Helm chart install and upgrade workflows. Previously, only the global container registry was used when installing or upgrading an official Rancher Helm chart app for RKE2/K3s node driver clusters.
Previous Rancher Behavior Changes - Continuous Delivery (Fleet)
- Rancher v2.9.0:
- Rancher now supports monitoring of continuous delivery. Starting with version `v104.0.1` of the Fleet chart (`v0.10.1` of Fleet) and the corresponding `rancher-monitoring` chart, continuous delivery provides metrics about the state of its resources, and the `rancher-monitoring` chart contains dashboards to visualize those metrics. Installing the `rancher-monitoring` chart to the local/upstream cluster automatically configures Prometheus to scrape metrics from the continuous delivery controllers and installs Grafana dashboards. These dashboards are accessible via Grafana but are not yet integrated into the Rancher UI. You can open Grafana from the Rancher UI by navigating to the Cluster > Monitoring > Grafana view. See [rancher/fleet#1408](https://github.com/rancher/fleet/issues/1408) for implementation details.
- Continuous delivery in Rancher also introduces sharding with node selectors. See [rancher/fleet#1740](https://github.com/rancher/fleet/issues/1740) for implementation details and the Fleet documentation for instructions on how to use it.
- We have reduced image size and complexity by integrating the former external gitjob repository and by merging various controller codebases. This also means that the gitjob container image (`rancher/gitjob`) is no longer needed, as the required functionality is embedded into the `rancher/fleet` container image. The gitjob deployment will still be created, but it points to the `rancher/fleet` container image instead. Please also note that a complete list of necessary container images for air-gapped deployments is released alongside Rancher releases. You can find this list as `rancher-images.txt` in the assets of the release on GitHub (see the sketch after this list). See [rancher/fleet#2342](https://github.com/rancher/fleet/issues/2342) for more details.
- Continuous delivery also adds experimental OCI content storage. See [rancher/fleet#2561](https://github.com/rancher/fleet/issues/2561) for implementation details and [rancher/fleet-docs#179](https://github.com/rancher/fleet-docs/issues/179) for documentation.
- Continuous delivery now splits components into containers and has switched to the controller-runtime framework. The rewritten controllers switch to structured logging.
- Leader election can now be configured (see [rancher/fleet#1981](https://github.com/rancher/fleet/issues/1981)), as well as the worker count for the fleet-controller (see [rancher/fleet#2430](https://github.com/rancher/fleet/issues/2430)).
- The release deprecates the `fleet test` command in favor of `target` and `deploy` with a dry-run option (see [rancher/fleet#2102](https://github.com/rancher/fleet/issues/2102)).
- Bug fixes enhance drift detection, cluster status reporting, and various operational aspects.
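For air-gapped mirroring, the image list ships as a release asset; assuming the standard GitHub release-asset URL layout, fetching it looks like this:

```shell
# Download the list of container images needed for air-gapped deployments.
curl -fLO https://github.com/rancher/rancher/releases/download/v2.9.11/rancher-images.txt
wc -l rancher-images.txt   # one image reference per line
```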
Previous Rancher Behavior Changes - Pod Security Standard (PSS) & Pod Security Admission (PSA)
- Rancher v2.7.2:
- You must manually change the `psp.enabled` value in the chart install YAML when you install or upgrade v102.x.y charts on hardened RKE2 clusters (see the sketch after this list). Instructions for updating the value are available. See #41018.
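As an illustrative sketch (the chart and namespace are assumptions, not from this release), the `psp.enabled` value can also be set from the Helm CLI instead of editing the YAML by hand:

```shell
# Hypothetical example: set psp.enabled for a v102.x.y chart on a hardened RKE2 cluster.
helm upgrade --install rancher-monitoring rancher-charts/rancher-monitoring \
  -n cattle-monitoring-system \
  --set psp.enabled=false
```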
Previous Rancher Behavior Changes - Authentication
- Rancher v2.8.3:
- Rancher uses additional trusted CAs when establishing a secure connection to the Keycloak OIDC authentication provider. See #43217.
- Rancher v2.8.0:
- The `kubeconfig-token-ttl-minutes` setting has been replaced by the `kubeconfig-default-token-ttl-minutes` setting, and is no longer available in the UI. See #38535.
- API tokens now have default time periods after which they expire. Authentication tokens expire after 90 days, while kubeconfig tokens expire after 30 days. See #41919.
- Rancher v2.7.2:
- Rancher might retain resources from a disabled auth provider configuration in the local cluster, even after you configure another auth provider. To manually trigger cleanup for a disabled auth provider, add the `management.cattle.io/auth-provider-cleanup` annotation with the `unlocked` value to its auth config (see the sketch after this list). See #40378.
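A minimal sketch of triggering that cleanup from the CLI; `azuread` stands in for whichever auth config you disabled:

```shell
# Mark a disabled auth provider's config for cleanup.
kubectl annotate authconfigs.management.cattle.io azuread \
  management.cattle.io/auth-provider-cleanup=unlocked --overwrite
```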
Previous Rancher Behavior Changes - Rancher Webhook
- Rancher v2.8.3:
- The embedded Cluster API webhook is removed from the Rancher webhook and can no longer be installed from the webhook chart. It has not been used as of Rancher v2.7.7, where it was migrated to a separate Pod. See #44619.
- Rancher v2.8.0:
- Rancher's webhook now honors the `bind` and `escalate` verbs for GlobalRoles. Users who have `*` set on GlobalRoles will now have both of these verbs, and could potentially use them to escalate privileges in Rancher v2.8.0 and later. You should review current custom GlobalRoles, especially cases where `bind`, `escalate`, or `*` are granted, before you upgrade.
- Rancher v2.7.5:
- Rancher installs the same pinned version of the `rancher-webhook` chart not only in the local cluster but also in all downstream clusters. Restoring Rancher from v2.7.5 to an earlier version will result in downstream clusters' webhooks being at the version set by Rancher v2.7.5, which might cause incompatibility issues. Local and downstream webhook versions need to be in sync (see the sketch after this list). See #41730 and #41917.
- The mutating webhook configuration for secrets is no longer active in downstream clusters. See #41613.
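To compare webhook versions across clusters, you can read the deployed image tag in each cluster; the namespace and deployment name below reflect the usual `rancher-webhook` install:

```shell
# Run against the local cluster and each downstream cluster; the tags should match.
kubectl -n cattle-system get deployment rancher-webhook \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```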
Previous Rancher Behavior Changes - Apps & Marketplace
- Rancher v2.8.0:
- Legacy code for the following v1 charts is no longer available in the `rancher/system-charts` repository:
  - `rancher-cis-benchmark`
  - `rancher-gatekeeper-operator`
  - `rancher-istio`
  - `rancher-logging`
  - `rancher-monitoring`
  The code for these charts will remain available for previous versions of Rancher.
- Helm v2 support is deprecated as of the Rancher v2.7 line and will be removed in Rancher v2.9.
- Rancher v2.7.0:
- Rancher no longer validates an app registration's permissions to use Microsoft Graph on endpoint updates or initial setup. You should add `Directory.Read.All` permissions of type `Application`. If you configure a different set of permissions, Rancher may not have sufficient privileges to perform some necessary actions within Azure AD, causing errors.
- The multi-cluster app legacy feature is no longer available. See #39525.
Previous Rancher Behavior Changes - OPA Gatekeeper
- Rancher v2.8.0:
- OPA Gatekeeper is now deprecated and will be removed in a future release. As a replacement for OPA Gatekeeper, consider switching to Kubewarden. See #42627.
Previous Rancher Behavior Changes - Feature Charts
- Rancher v2.7.0:
- A configurable `priorityClass` is available in the Rancher pod and its feature charts. Previously, pods critical to running Rancher didn't use a priority class. This could cause a cluster with limited resources to evict Rancher pods before other noncritical pods (see the sketch after this list). See #37927.
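To check what is currently in effect, you can read the priority class off the running Rancher pod; the `app=rancher` label is the one the Rancher chart normally applies:

```shell
# Print the priority class assigned to the Rancher pod, if any.
kubectl -n cattle-system get pods -l app=rancher \
  -o jsonpath='{.items[0].spec.priorityClassName}{"\n"}'
```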
Previous Rancher Behavior Changes - Backup/Restore
- Rancher v2.7.7:
- If you use a version of backup-restore older than v102.0.2+up3.1.2 to take a backup of Rancher v2.7.7, the migration will encounter a `capi-webhook` error. Make sure that the chart version used for backups is v102.0.2+up3.1.2, which has `cluster.x-k8s.io/v1alpha4` resources removed from the resourceSet. If you can't use v102.0.2+up3.1.2 for backups, delete all `cluster.x-k8s.io/v1alpha4` resources from the backup tar before using it. See #382.
Previous Rancher Behavior Changes - Logging
- Rancher v2.7.0:
- Rancher defaults to using the bci-micro image for sidecar audit logging. Previously, the default image was Busybox. See #35587.
Previous Rancher Behavior Changes - Monitoring
- Rancher v2.7.2:
- Rancher maintains a `/v1/counts` endpoint that the UI uses to display resource counts. The UI subscribes to changes to the counts for all resources through a websocket, to receive the new counts for resources (see the sketch after this list).
- Rancher aggregates the changed counts and only sends a message every 5 seconds. This, in turn, requires the UI to update the counts at most once every 5 seconds, improving UI performance. Previously, Rancher would send a message each time the resource counts changed for a resource type. This led to the UI needing to constantly stop other areas of processing to update the resource counts. See #36682.
- Rancher now only sends back a count for a resource type if the count has changed from the previously known number, improving UI performance. Previously, each message from this socket would include all counts for every resource type in the cluster, even if the counts only changed for one specific resource type. This would cause the UI to need to re-update resource counts for every resource type at a high frequency, with a significant performance impact. See #36681.
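For reference, the counts endpoint can be queried directly with an API token; the server URL and token are placeholders:

```shell
# Fetch the aggregated resource counts the UI consumes.
curl -sk -H "Authorization: Bearer $RANCHER_TOKEN" \
  https://rancher.example.com/v1/counts
```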
Previous Rancher Behavior Changes - Project Monitoring
- Rancher v2.7.2:
- The Helm Controller in RKE2/K3s respects the `managedBy` annotation. In its initial release, Project Monitoring V2 required a workaround to set `helmProjectOperator.helmController.enabled: false`, since the Helm Controller operated on a cluster-wide level and ignored the `managedBy` annotation. See #39724.
Previous Rancher Behavior Changes - Security
- Rancher v2.9.0:
- When `agent-tls-mode` is set to `strict`, users must provide the certificate authority to Rancher, or downstream clusters will disconnect from Rancher and require manual intervention to fix. This applies to several setup types, including:
  - Let's Encrypt - When set to `strict`, users must upload the Let's Encrypt Certificate Authority and provide `privateCA=true` when installing the chart.
  - Bring Your Own Cert - When set to `strict`, users must upload the Certificate Authority used to generate the cert and provide `privateCA=true` when installing the chart.
  - Proxy/External - When set to `strict`, users must upload the Certificate Authority used by the proxy and provide `privateCA=true` when installing the chart.
  See #45628 and #45655. A sketch of providing the CA at install time follows this list.
- Rancher v2.8.0:
- TLS v1.0 and v1.1 are no longer supported for Rancher app ingresses. See #42027.
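A minimal sketch of supplying a private CA during a Helm install with strict agent TLS checking; the repository name and hostname are assumptions, while `privateCA=true` and the `tls-ca` secret are the standard private-CA wiring:

```shell
# Provide the CA that signed Rancher's serving certificate.
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem=./my-ca.pem

# Install with strict agent TLS verification against that CA.
helm upgrade --install rancher rancher-prime/rancher \
  -n cattle-system \
  --set hostname=rancher.example.com \
  --set agentTLSMode=strict \
  --set privateCA=true
```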
Previous Rancher Behavior Changes - Extensions
- Rancher v2.9.0:
- UI extension owners must update and publish a new version of their extensions to be compatible with Rancher v2.9.0 and later. For more information see the Rancher v2.9 extension support page.
- A new feature flag, `uiextensions`, has been added for enabling and disabling the UI extension feature (this replaces the need to install the `ui-plugin-operator`). The first time it's set to `true` (the default value is `true`), it will create the CRD and enable the controllers and endpoints necessary for the feature to work. If set to `false`, it won't create the CRD if it doesn't already exist, but it won't delete it if it does; it will also disable the controllers and endpoints used by the feature. Enabling or disabling the feature flag will cause Rancher to restart (see the sketch after this list). See #44230 and #43089.
- Rancher v2.8.4:
- The Rancher dashboard fails to load an extension that utilizes backported Vue 3 features, displaying an error in the console: `object(...) is not a function`. New extensions that utilize `defineComponent` will not be backwards compatible with older versions of the dashboard. Existing extensions should continue to work moving forward. See #10568.
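Feature flags like `uiextensions` surface as `Feature` resources in the management API, so the current state can be inspected from the CLI:

```shell
# Check whether the UI extensions feature flag is enabled.
kubectl get features.management.cattle.io uiextensions
```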
Previous Rancher Behavior Changes - Virtualization Management (Harvester)
- Rancher v2.9.3:
- A warning banner has been added when provisioning a multi-node Harvester RKE2 cluster in Rancher, noting that you need to allocate one more vGPU than the number of nodes you have, to avoid the "un-schedulable" errors seen after cluster updates. See #10989.
- On the Cloud Credential list, you can now easily see if a Harvester Credential is about to expire or has expired and choose to renew it. You will also be notified on the Cluster Management Clusters list when an associated Harvester Cloud Credential is about to expire or has expired. When upgrading, an existing expired Harvester Credential will not contain a warning. You can still renew the token on the resources menu. See #11270.
Previous Rancher Behavior Changes - Windows
- Rancher v2.9.4:
- A change was made starting with RKE2 versions `v1.28.15`, `v1.29.10`, `v1.30.6`, and `v1.31.2` on Windows which allows the user to configure `*_PROXY` environment variables on the `rke2` service after the node has already been provisioned. Previously, any attempt to do so would be a no-op. With this change, if the `*_PROXY` environment variables are set on the cluster after a Windows node is provisioned, they can be automatically removed from the `rke2` service. However, if the variables are set before the node is provisioned, they cannot be removed. More information can be found here. A workaround is to remove the environment variables from the `rancher-wins` service and restart the service or node, at which point `*_PROXY` environment variables will no longer be set on either service.

  ```powershell
  Remove-ItemProperty HKLM:SYSTEM\CurrentControlSet\Services\rancher-wins -Name Environment
  Restart-Service rancher-wins
  ```

  See #47720.
Long-standing Known Issues
Long-standing Known Issues - Cluster Provisioning
- Not all cluster tools can be installed on a hardened cluster.
- Rancher v2.8.1:
- When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message `[ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again`. As a workaround, you can unpause the cluster by running `kubectl edit clusters.cluster clustername -n fleet-default` and setting `spec.unpaused` to `false`. See #43735.
- Rancher v2.7.2:
- If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #8524.
Long-standing Known Issues - RKE Provisioning
- Rancher v2.9.0:
- The Weave CNI plugin for RKE v1.27 and later is now deprecated, because the plugin is deprecated for upstream Kubernetes v1.27 and later. RKE cluster creation with Weave will not go through, as it raises a validation warning. See #11322.
Long-standing Known Issues - RKE2 Provisioning
- Rancher v2.9.0:
- When adding the `provisioning.cattle.io/allow-dynamic-schema-drop` annotation through the cluster config UI, the annotation disappears before adding the value field. When viewing the YAML, the respective value field is not updated and is displayed as an empty string. As a workaround, when creating the cluster, set the annotation by using the Edit Yaml option located in the dropdown ⋮ attached to your respective cluster in the Cluster Management view. See #13655.
- Rancher v2.7.7:
- Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve `Active` status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
- Rancher v2.7.6:
- Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #37074.
- Rancher v2.7.2:
- When viewing or editing the YAML configuration of downstream RKE2 clusters through the UI, `spec.rkeConfig.machineGlobalConfig.profile` is set to `null`, which is an invalid configuration. See #8480.
Long-standing Known Issues - K3s Provisioning
- Rancher v2.7.6:
- Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #37074.
- Rancher v2.7.2:
- Clusters remain in an `Updating` state even when they contain nodes in an `Error` state. See #39164.
Long-standing Known Issues - Rancher App (Global UI)
- Rancher v2.9.5:
- Users are able to create or edit clusters even when using an invalid Add-on YAML configuration. See #12324.
- Rancher v2.9.2:
- Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error from occurring, enter a minimum node count at least equal to the node count. See #11922.
- Rancher v2.7.7:
- When creating a cluster in the Rancher UI, the `Cluster Name` field does not allow the use of an underscore (`_`). See #9416.
- Rancher v2.7.2:
- When creating a GKE cluster in the Rancher UI, you will see provisioning failures as the `clusterIpv4CidrBlock` and `clusterSecondaryRangeName` fields conflict. See #8749.
Long-standing Known Issues - Hosted Rancher
- Rancher v2.7.5:
- The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #8524.
Long-standing Known Issues - EKS
- Rancher v2.7.0:
- EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #39392.
Long-standing Known Issues - Authentication
- Rancher v2.9.0:
- There are some known issues with the OpenID Connect provider support:
- When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior as the OIDC auth provider alone is not searchable. See #46104.
- When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #46105.
- When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: `projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg"`. However, the project is still created. See #46106.
Long-standing Known Issues - Rancher Webhook
- Rancher v2.7.2:
- A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
- If you rollback from a version of Rancher v2.7.2 or later, to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors to occur downstream. The Rancher team has developed a script which should be used after rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). This removes the webhook from affected downstream clusters. See #40816.
Long-standing Known Issues - Virtualization Management (Harvester)
- Rancher v2.7.2:
- If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #3750.
Long-standing Known Issues - Continuous Delivery (Fleet)
- Rancher v2.9.10:
- In some cases, updating Fleet may result in a pre-existing `GitRepo` getting stuck in a `GitUpdating` state, and a force update does not resolve the issue. An error message similar to the following would appear in the git job's logs: `level=fatal msg="secrets \"<secret-name>\" is forbidden: User \"system:serviceaccount:fleet-default:git-<name>\" cannot delete resource \"secrets\" in API group \"\" in the namespace \"fleet-default\""`. In Fleet v0.11.8, a workaround is available. See #2898.
Long-standing Known Issues - Backup/Restore
- When migrating to a cluster with the Rancher Backup feature, the `server-url` cannot be changed to a different location. It must continue to use the same URL.
- Rancher v2.7.7:
- Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve `Active` status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.