Part of [#6740][#6740]. [TEP-0135][tep-0135] introduces a feature that allows a cluster operator
to ensure that all of a PipelineRun's pods are scheduled to the same node.
Before this commit, the PipelineRun reconciler created a PVC for each `VolumeClaimTemplate`-backed workspace,
and mounted those PVCs to the Affinity Assistant to avoid PV availability zone conflicts.
This implementation works for `AffinityAssistantPerWorkspace`, but introduces an availability zone conflict
issue in the `AffinityAssistantPerPipelineRun` mode, since we cannot enforce that all the PVCs are created in the same availability zone.
Instead of directly creating a PVC for each PipelineRun workspace backed by a `VolumeClaimTemplate`,
this commit sets one `VolumeClaimTemplate` per PVC workspace in the Affinity Assistant StatefulSet spec,
which ensures that all PVCs created from the StatefulSet's `VolumeClaimTemplates` are provisioned on the same node and availability zone.
This commit only refactors the current implementation in favor of the `AffinityAssistantPerPipelineRun` feature;
there is no functionality change. The `AffinityAssistantPerPipelineRun` feature will be added in follow-up PRs.
[#6740]: #6740
[tep-0135]: https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md
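
To make the approach concrete, here is a minimal sketch of the StatefulSet shape this implies. It is not the actual Tekton reconciler code: the helper name `affinityAssistantStatefulSet`, the labels, and the container image are illustrative assumptions. The point is that the workspace claim templates go into `Spec.VolumeClaimTemplates`, so the StatefulSet controller provisions every PVC together with the single Affinity Assistant replica.

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// affinityAssistantStatefulSet is a hypothetical helper (not the actual
// Tekton code) sketching the change: one VolumeClaimTemplate per
// VolumeClaimTemplate-backed workspace, instead of PVCs created up front
// by the PipelineRun reconciler. The StatefulSet controller then provisions
// all PVCs on the same node/availability zone as the Affinity Assistant pod.
func affinityAssistantStatefulSet(name string, claimTemplates []corev1.PersistentVolumeClaim) *appsv1.StatefulSet {
	replicas := int32(1)
	labels := map[string]string{"app.kubernetes.io/instance": name}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.StatefulSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// One template per PVC workspace; the resulting PVCs are
			// co-scheduled with the single replica below.
			VolumeClaimTemplates: claimTemplates,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "affinity-assistant",
						Image: "affinity-assistant-image", // placeholder image
					}},
				},
			},
		},
	}
}
```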
```go
// It may require a StorageClass with volumeBindingMode: WaitForFirstConsumer when a
// workspace specifies PersistentVolumeClaims, to avoid Availability Zone conflicts
// (discussed in https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md#co-locating-volumes).
// We need to update the TEP and documentation for this limitation if that is the case.
var volumes []corev1.Volume
for i, claim := range claims {
	volumes = append(volumes, corev1.Volume{
		Name: fmt.Sprintf("workspace-%d", i),
		VolumeSource: corev1.VolumeSource{
			// When a Pod mounts a PersistentVolumeClaim whose StorageClass has
			// volumeBindingMode: Immediate, the PV is allocated on a Node first,
			// and then the Pod needs to be scheduled to that Node.
			// To support those PVCs, the Affinity Assistant must also mount the
			// same PersistentVolumeClaim - to be sure that the Affinity Assistant
			// pod is scheduled to the same Availability Zone as the PV, when using
			// a regional cluster. This is called VolumeScheduling.
			PersistentVolumeClaim: claim,
		},
	})
}
```
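
As a companion to the comment above, here is a minimal sketch of a StorageClass with `volumeBindingMode: WaitForFirstConsumer`; the class name and provisioner are illustrative assumptions, not values from this repo. With this mode, PV binding is delayed until a pod consuming the claim is scheduled, which is what avoids the Availability Zone conflict described in the comments.

```go
package sketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// waitForFirstConsumerClass returns a StorageClass whose volumes are bound
// only after a pod using the claim is scheduled, so the scheduler can pick a
// node/zone compatible with the Affinity Assistant pod.
func waitForFirstConsumerClass() *storagev1.StorageClass {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-for-first-consumer"}, // illustrative name
		Provisioner:       "example.com/provisioner",                          // illustrative provisioner
		VolumeBindingMode: &mode,
	}
}
```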