Description
Observed Behavior:
When using amiFamily: Bottlerocket and specifying spec.blockDeviceMappings with only the data volume (/dev/xvdb) to increase container storage, newly provisioned nodes get a root/control volume (/dev/xvda) of 2GiB gp2.
This happens even though the Bottlerocket docs list the default root/control device as 4GiB (and we expect gp3 in our environment, per our configuration).
In other words: overriding only /dev/xvdb results in an unexpected default for /dev/xvda, whose size and type differ from the Bottlerocket defaults shown in the docs.
Expected Behavior:
If blockDeviceMappings includes only /dev/xvdb, Karpenter should either:
1. Preserve the Bottlerocket defaults for /dev/xvda (4GiB and the expected volume type), or
2. Clearly document that specifying any blockDeviceMappings replaces the entire default mapping, so that /dev/xvda is derived from the underlying AMI/template defaults (which in our case becomes 2GiB gp2). A workaround sketch follows below.
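As a workaround, declaring both devices explicitly restores the expected root volume. A minimal sketch (the 4Gi root size mirrors the documented Bottlerocket default; gp3 there reflects our own preference, not a confirmed default):

blockDeviceMappings:
  - deviceName: /dev/xvda       # Bottlerocket root/control volume, pinned explicitly
    ebs:
      volumeSize: 4Gi           # documented Bottlerocket default size
      volumeType: gp3           # assumption: our preferred type, not a confirmed default
      encrypted: true
      deleteOnTermination: true
  - deviceName: /dev/xvdb       # data volume for container storage
    ebs:
      volumeSize: 100Gi
      volumeType: gp3
      encrypted: true
      deleteOnTermination: true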
Reproduction Steps (Please include YAML):
1. Apply an EC2NodeClass for Bottlerocket that defines only /dev/xvdb:
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: bottlerocket-arm64
spec:
  amiFamily: Bottlerocket
  role: <KarpenterNodeRole>
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: <cluster-name>
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: <cluster-name>
  blockDeviceMappings:
    - deviceName: /dev/xvdb
      ebs:
        volumeSize: 100Gi
        volumeType: gp3
        encrypted: true
        deleteOnTermination: true
  tags:
    architecture: arm64
2. Create (or ensure) a NodePool that references this EC2NodeClass:
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: apps-arm64-spot
spec:
  template:
    metadata:
      labels:
        "purpose": "apps-arm64-spot"
        "arm64": "true"
        "spot": "true"
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["arm64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["t", "c", "m", "r"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["4"]
      taints:
        - key: arm64_ready
          value: "true"
          effect: NoSchedule
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: bottlerocket-arm64
      expireAfter: "720h" # 30 * 24h = 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 0s # set to 0s for the same behavior as in v1beta1
3. Trigger provisioning (scale a deployment or create a pod that targets this NodePool); a minimal example pod follows.
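A minimal test pod for this, matching the NodePool's labels and taints above (the pod name is illustrative; any small multi-arch image works):

apiVersion: v1
kind: Pod
metadata:
  name: bdm-repro  # hypothetical name, for illustration only
spec:
  nodeSelector:
    purpose: apps-arm64-spot
  tolerations:
    - key: arm64_ready
      operator: Equal
      value: "true"
      effect: NoSchedule
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9  # small multi-arch image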
4. Verify the attached volumes on the created EC2 instance.
Result: /dev/xvda shows up as 2GiB gp2 (unexpected), while /dev/xvdb is correctly created as 100GiB gp3.
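For reference, one way to inspect the mappings with the AWS CLI (the instance and volume IDs are placeholders):

aws ec2 describe-instances --instance-ids <instance-id> \
  --query 'Reservations[].Instances[].BlockDeviceMappings[].[DeviceName,Ebs.VolumeId]' \
  --output text
# then look up size (GiB) and type of each reported volume
aws ec2 describe-volumes --volume-ids <volume-id> \
  --query 'Volumes[].[VolumeId,Size,VolumeType]' --output text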
Versions:
- Karpenter Version: v1.2.0 (controller image tag)
- Chart Version: v1.2.0
- Kubernetes Version (kubectl version): v1.32 (kubectl v1.31.1)
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment