Report
Hi,
We are encountering a strange issue when trying to expose our MongoDB cluster outside of Kubernetes: when using the `external-dns.alpha.kubernetes.io/hostname` annotation, the Route53 record that gets created points to the internal IP of the pod instead of the IP/DNS name of the (externally reachable) NLB.
More about the problem
Relevant part of our cluster configuration:
```yaml
replsets:
- name: foo-cluster
  size: 1
  expose:
    enabled: true
    type: LoadBalancer
    externalTrafficPolicy: Local
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-xxxxxxxx"
      external-dns.alpha.kubernetes.io/hostname: "mon-hardcoded-hostname.foo.io"
```
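For reference, this is how we verify that the annotations propagate to the Service the operator creates and what its load balancer status reports. A minimal sketch: the namespace `mongodb` and the Service name `foo-cluster-foo-cluster-0` are assumptions based on the record name below.

```shell
# Inspect the per-pod Service created by the operator
# (namespace and Service name are assumptions):
kubectl get svc foo-cluster-foo-cluster-0 -n mongodb -o yaml

# For a type: LoadBalancer Service, the NLB DNS name should show up in the status:
kubectl get svc foo-cluster-foo-cluster-0 -n mongodb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```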
Applying this, the load balancer for the pod is created correctly and the pod is registered as a target; that is, we can use the auto-generated load balancer DNS name to reach the pod just fine.
However, the DNS record subsequently created in Route53 looks like this:
`foo-cluster-foo-cluster-0.mon-hardcoded-hostname.foo.io -> 10.204.37.174`
which is the internal IP of the pod, not the IP/DNS name of the service's load balancer.
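For what it's worth, here is how we compared the two addresses. A minimal sketch: the hosted zone ID is a placeholder, and the Service name/namespace are the same assumptions as above.

```shell
# Resolving the auto-generated NLB DNS name (from the Service status) works fine:
dig +short "$(kubectl get svc foo-cluster-foo-cluster-0 -n mongodb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"

# The record external-dns created, however, holds the pod IP
# (ZXXXXXXXXXXXXX is a placeholder for the hosted zone ID):
aws route53 list-resource-record-sets \
  --hosted-zone-id ZXXXXXXXXXXXXX \
  --query "ResourceRecordSets[?Name=='foo-cluster-foo-cluster-0.mon-hardcoded-hostname.foo.io.']"
```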
Steps to reproduce
1. Create a new cluster.
2. Configure the cluster to be exposed via load balancers.
3. Set the `external-dns.alpha.kubernetes.io/hostname` annotation.
4. Check the value of the created Route53 record (see the sketch below).
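A quick way to perform the last step from outside the cluster (the hostname is the one from the example above):

```shell
# If the bug reproduces, this returns the cluster-internal pod IP
# (10.204.x.x in our case) instead of the NLB's addresses:
dig +short foo-cluster-foo-cluster-0.mon-hardcoded-hostname.foo.io
```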
Versions
Kubernetes version: v1.29.6
Helm chart version: 1.20.1
Operator version: 1.20.1
Cluster MongoDB version: 7.0
Anything else?
No response