
#1409 [BUG] MetalLB speakers do not announce VIP if there is no node-group `md0` in tenant clusters

Status: open
Owner: nobody
Labels: None
Updated: 2025-09-23
Created: 2025-09-11
Creator: Anonymous
Private: No

Originally created by: kinseii

MetalLB speakers do not announce the VIP if there is no node group md0 in tenant clusters.
Even if another node group with the ingress-nginx role is used, the speakers do not announce the VIP.

If the MetalLB pods are restarted, the following error appears:

    {"level":"error","ts":"2025-09-10T01:37:46Z","msg":"Reconciler error","controller":"PoolStatusController","controllerGroup":"metallb.io","controllerKind":"IPAddressPool","IPAddressPool":{"name":"cozystack","namespace":"cozy-metallb"},"namespace":"cozy-metallb","name":"cozystack","reconcileID":"d28b5f43-4564-4469-af38-a98ab804b2f7","error":"Operation cannot be fulfilled on ipaddresspools.metallb.io \"cozystack\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.4/pkg/internal/controller/controller.go:347\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.4/pkg/internal/controller/controller.go:294\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.4/pkg/internal/controller/controller.go:255"}

An example of a non-working configuration: node group md0 has minReplicas and maxReplicas set to zero, while another node group carries the ingress-nginx role:

    nodeGroups:
      gnrlng0:
        ephemeralStorage: 50Gi
        instanceType: u1.xlarge # 4CPU/16Mem
        minReplicas: 1
        maxReplicas: 3
        resources:
          cpu: ""
          memory: ""
        gpus: []
        roles:
          - ingress-nginx
          - worker
      md0:
        ephemeralStorage: 5Gi
        instanceType: u1.small # 1CPU/2Mem
        minReplicas: 0
        maxReplicas: 0
        resources:
          cpu: ""
          memory: ""
        gpus: []
        roles:
          - ingress-nginx
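
For reference, the reporter's follow-up comment below notes that scaling md0 back above zero makes the speakers announce the VIP again. A minimal sketch of that working variant, assuming the same instance type and roles as above and an arbitrary maxReplicas of 1:

    nodeGroups:
      # gnrlng0 unchanged from the example above
      md0:
        ephemeralStorage: 5Gi
        instanceType: u1.small # 1CPU/2Mem
        minReplicas: 1 # scaled up from 0; with an md0 node present, the VIP is announced
        maxReplicas: 1
        resources:
          cpu: ""
          memory: ""
        gpus: []
        roles:
          - ingress-nginx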

Discussion

  • Anonymous - 2025-09-16

    Originally posted by: kinseii

    Clarification: When I scale node group md0 from zero to 1, MetalLB speakers successfully announce VIP. But if I scale it back to 0 and assign the ingress-nginx role to another node group, the speakers stop announcing and return the error above. The reason I scale the md0 node group to zero is that it cannot be removed; it is hardcoded in the Helm template. I know there is a PR to remove it, but it is still preserved in the release versions.

     
  • Anonymous - 2025-09-18

    Originally posted by: kvaps

    Yeah, this is intended. When you use a service with externalTrafficPolicy: Local, it will wait for endpoints to appear.

     
  • Anonymous - 2025-09-22

    Originally posted by: kinseii

    > Yeah, this is intended. When you use a service with externalTrafficPolicy: Local, it will wait for endpoints to appear.

    I understand, but why does it only work when the node group md0 exists? With other node groups that have the ingress-nginx role, VIPs are not assigned. Moreover, it is not only the ingress-nginx LoadBalancer services that are left without a VIP, but other LoadBalancer services as well.

     
  • Anonymous - 2025-09-23

    Originally posted by: kinseii

    @kvaps, as I understand it, this is because the ingress-nginx service has externalTrafficPolicy: Local and its pods are located on another node. I will check with externalTrafficPolicy: Cluster.
    Doc here: https://metallb.universe.tf/usage/#local-traffic-policy

     
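The behaviour described in the last two comments matches MetalLB's documented local traffic policy: with externalTrafficPolicy: Local, the speakers only announce a LoadBalancer IP from nodes that hold a ready endpoint for that service. Below is a minimal sketch of switching a service to the cluster-wide policy for testing; the name, namespace, and selector are hypothetical and not taken from the issue:

    # Hypothetical Service manifest; metadata and selector are illustrative only.
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller # assumed name
      namespace: cozy-ingress-nginx  # assumed namespace
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Cluster # Local restricts announcement to nodes with a ready local endpoint
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: http
        - name: https
          port: 443
          targetPort: https

Note that the Cluster policy adds an extra hop and does not preserve the client source IP, which is the usual reason ingress-nginx is deployed with the Local policy.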
