Description
In our environment, we deploy one ClickHouse Pod per node.
In this setup, using `hostPort` for networking seems like a reasonable option when neither `hostNetwork` nor a LoadBalancer is suitable.
However, the operator's `ContainerEnsurePortByName()` function always resets `hostPort` to 0, which makes it impossible to configure.

Here is a simplified CHI spec we tried:
```yaml
apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  name: chi-hostport-test
spec:
  templates:
    podTemplates:
      - name: default
        spec:
          containers:
            - name: clickhouse
              ports:
                - containerPort: 8123
                  hostPort: 8123
                  name: http
                - containerPort: 9000
                  hostPort: 9000
                  name: tcp
```
But in the StatefulSets generated by the operator, these entries are rewritten with `hostPort: 0`.

As a workaround, we noticed that adding duplicate port entries seems to preserve the `hostPort` values, for example:
```yaml
templates:
  podTemplates:
    - name: default
      spec:
        containers:
          - name: clickhouse
            ports:
              - name: http
                containerPort: 8123
              - name: tcp
                containerPort: 9000
              - name: httphost
                hostPort: 8123
                containerPort: 8123
              - name: tcphost
                hostPort: 9000
                containerPort: 9000
---
templates:
  podTemplates:
    - name: default
      spec:
        containers:
          - name: clickhouse
            ports:
              - name: http
                containerPort: 8123
              - name: tcp
                containerPort: 9000
              - name: http
                hostPort: 8123
                containerPort: 8123
              - name: tcp
                hostPort: 9000
                containerPort: 9000
```
This technically works, but it results in duplicate port definitions.
Since Kubernetes uses `containerPort` + `protocol` as the strategic-merge key, this pattern is discouraged and may cause conflicts with Services or probes that rely on unique port names.
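For instance, in the second workaround variant, any manifest that resolves a port by name becomes ambiguous. The Service below is a hypothetical illustration (not from our actual deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: chi-hostport-test
spec:
  ports:
    - name: http
      port: 8123
      # targetPort by name: with two container ports both named "http",
      # it is ambiguous which entry this resolves to.
      targetPort: http
```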
I assume there was a deliberate reason for forcing `hostPort: 0`, perhaps to prevent scheduling or safety issues?
I'm mainly curious about the background: was this a policy decision, or just legacy behavior?
Understanding the reasoning would help us decide whether to look for an alternative pattern or to propose a change.
Thanks!