feat: enable HCP deployment on external clusters #989

apedriza merged 3 commits into k0sproject:main
Conversation
Force-pushed from 79f7d5d to 2098a74
The PR is marked as stale since no activity has been recorded in 30 days
Force-pushed from b312e39 to 652f9b4
Force-pushed from 82b025d to 7809516
```go
// the k0smotron.Cluster. This is because the owner references cannot be used in this case.
if !kmcScope.inClusterHCP {
	var errors []error
	for _, descendant := range kmcScope.getDescendants(ctx, kmc) {
```
This is what I like the least since ownerreferences cannot be used for a background deletion. Happy to hear other options 😃
Like we discussed, creating a "dummy" configmap or something acting as the "root" owner for everything might work. Requires some further testing and exploring so could be a separate PR, if that even works. 😂
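For what it's worth, a rough sketch of that root-owner idea, reusing the `kutil.DefaultK0smotronClusterLabels` helper seen elsewhere in this diff; the function name, ConfigMap naming scheme and overall flow are assumptions, not anything this PR implements:

```go
// Sketch only: assumes the usual controller-runtime / apimachinery imports
// (corev1, metav1, apierrors, runtime, client, controllerutil).
func ensureRootOwner(ctx context.Context, c client.Client, kmc *km.Cluster, scheme *runtime.Scheme, descendants ...client.Object) error {
	// One ConfigMap per k0smotron cluster acts as the single parent of every
	// resource created on the hosting cluster.
	root := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      kmc.Name + "-root-owner", // hypothetical naming scheme
			Namespace: kmc.Namespace,
			Labels:    kutil.DefaultK0smotronClusterLabels(kmc),
		},
	}
	if err := c.Create(ctx, root); err != nil {
		if !apierrors.IsAlreadyExists(err) {
			return err
		}
		// Fetch the existing root so its UID is available for owner references.
		if err := c.Get(ctx, client.ObjectKeyFromObject(root), root); err != nil {
			return err
		}
	}
	// Each descendant points back at the root, so deleting the root lets the
	// hosting cluster's garbage collector remove everything in the background.
	for _, d := range descendants {
		if err := controllerutil.SetOwnerReference(root, d, scheme); err != nil {
			return err
		}
	}
	return nil
}
```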
```go
if foundStatefulSet.Status.ReadyReplicas == kmc.Spec.Replicas {
	kmc.Status.Ready = true
}
return err
}
detectAndSetCurrentClusterVersion(foundStatefulSet, kmc)

return nil
if !isStatefulSetsEqual(&statefulSet, foundStatefulSet) {
	return scope.client.Patch(ctx, &statefulSet, client.Apply, patchOpts...)
}

return err
if foundStatefulSet.Status.ReadyReplicas == 0 {
	return fmt.Errorf("%w: no replicas ready yet for statefulset '%s' (%d/%d)", ErrNotReady, foundStatefulSet.GetName(), foundStatefulSet.Status.ReadyReplicas, kmc.Spec.Replicas)
}
```
I think we can set the cluster status to ready if there is one replica already running instead of waiting for all replicas to be ready. This way it decreases the time for deploying worker nodes.
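For illustration, a minimal sketch of that relaxed check, reusing the names from the diff above (not the code this PR ships):

```go
// Report an error only while nothing is up at all...
if foundStatefulSet.Status.ReadyReplicas == 0 {
	return fmt.Errorf("%w: no replicas ready yet for statefulset '%s' (%d/%d)",
		ErrNotReady, foundStatefulSet.GetName(), foundStatefulSet.Status.ReadyReplicas, kmc.Spec.Replicas)
}
// ...and consider the control plane ready as soon as one replica is serving,
// so worker nodes can start joining without waiting for the full replica count.
kmc.Status.Ready = true
```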
```go
// HostingClusterKubeconfigRef is the reference to the kubeconfig of the hosting cluster.
// This kubeconfig will be used to deploy the k0s control plane.
//+kubebuilder:validation:Optional
HostingClusterKubeconfigRef *HostingClusterKubeconfigRef `json:"hostingClusterKubeconfigRef,omitempty"`
```
I'd call it just kubeconfigRef or kubeconfigSecretRef
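For context, a hedged example of how the field could be used from a k0smotron Cluster manifest; the nested fields (name, namespace, key) and their values are assumptions, not taken from this diff:

```yaml
apiVersion: k0smotron.io/v1beta1
kind: Cluster
metadata:
  name: my-hcp
spec:
  replicas: 1
  # Hypothetical shape of the reference to the Secret holding the hosting
  # cluster's kubeconfig.
  hostingClusterKubeconfigRef:
    name: hosting-cluster-kubeconfig
    namespace: default
    key: kubeconfig
```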
Force-pushed from 1e2a463 to a100a29
Force-pushed from 2966e5f to a9eca86
```yaml
linters:
  default: none
  enable:
    - depguard
```
After the recent changes in the golangci-lint configuration, some noisy errors appeared. I think it’s safe to remove the depguard linter for now. It deserves some investigation on how to configure it properly in follow-up changes.
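If depguard is re-enabled later, a possible (untested) starting point would be to give it explicit rules instead of relying on the defaults, for example:

```yaml
linters:
  settings:
    depguard:
      rules:
        main:
          deny:
            - pkg: io/ioutil
              desc: "deprecated; use the io and os packages instead"
```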
Signed-off-by: apedriza <adripedriza@gmail.com>
Force-pushed from a9eca86 to 01a2e8a
This is the contract fulfillment for k0smotron integrating the ClusterProfile API: https://github.com/kubernetes/enhancements/blob/master/keps/sig-multicluster/4322-cluster-inventory/README.md#secret-format. It is the responsibility of the user to locate, label, and store the kubeconfig secret with the proper key/value according to the ClusterProfile API. That way we allow users to use the ClusterProfile API together with k0smotron, but we are not strongly coupled to that API.
jnummelin left a comment
LGTM in big picture
Added a few nit-pick type things, feel free to address in a follow-up PR
```go
	Cluster *clusterv1.Cluster
}

type hostingClusterScope struct {
```
dunno if this could've been "merged" with the existing scope object? This way we'd be able to pass a single scope object around instead of multiple. Might simplify the code a bit?
yes, I think using scope struct simplifies the change. I'll change it in this PR, thanks!
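A minimal sketch of what a merged scope could look like; the field names here are assumptions, not necessarily the shape the PR ends up with:

```go
// kmcScope carries everything a reconcile pass needs: a client for the
// management cluster, a client for the (possibly external) hosting cluster,
// and a flag telling whether the HCP runs in-cluster.
type kmcScope struct {
	client        client.Client // management cluster
	hostingClient client.Client // hosting cluster (same as client when inClusterHCP is true)
	inClusterHCP  bool
	cluster       *clusterv1.Cluster
}
```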
```go
// getDescendants returns the list of resources that are associated with the k0smotron.Cluster
// and that must be deleted when the k0smotron.Cluster is deleted.
func (scope *kmcScope) getDescendants(ctx context.Context, kmc *km.Cluster) []client.Object {
```
This could maybe be enhanced to a more generic approach, something like:
```go
// Assumes apimeta = k8s.io/apimachinery/pkg/api/meta for extracting items from
// the typed list objects.
var descendantTypes = []client.ObjectList{&v1.ConfigMapList{}, &v1.ServiceList{}}

func (scope *kmcScope) getDescendants(ctx context.Context, kmc *km.Cluster) ([]client.Object, error) {
	descendants := make([]client.Object, 0)
	lOpt := client.MatchingLabels(kutil.DefaultK0smotronClusterLabels(kmc))
	for _, dt := range descendantTypes {
		// List into a fresh copy so the shared prototypes are never mutated.
		list := dt.DeepCopyObject().(client.ObjectList)
		if err := scope.client.List(ctx, list, lOpt); err != nil {
			return nil, fmt.Errorf("failed to list descendants of type %T for cluster %s/%s: %w", dt, kmc.Namespace, kmc.Name, err)
		}
		items, err := apimeta.ExtractList(list)
		if err != nil {
			return nil, err
		}
		for _, item := range items {
			descendants = append(descendants, item.(client.Object))
		}
	}
	return descendants, nil
}
```
Could be a follow-up PR too
Since this method is only used for the removal of resources belonging to the cluster, and the proposed dummy object for centralizing the deletion is a different approach, I will test both solutions in a separate PR.
@apedriza any plans for docs on this? In a separate PR?
@jnummelin I can add docs in this PR. Any ideas about what form it could take? A simple mention in the standalone documentation that this possibility exists?
Yeah, plus some docs around the expected format of the secret for the access. Also, I think we should add some notes on the RBAC expected for the "user" of the external client kubeconfig?
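To make that concrete, a hedged sketch of such a kubeconfig Secret; the name and data key are placeholders, and the labels/keys required by the ClusterProfile contract are intentionally not reproduced here since the KEP section linked above is the authoritative format:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hosting-cluster-kubeconfig   # hypothetical; referenced from the k0smotron Cluster spec
  namespace: default
stringData:
  kubeconfig: |                      # hypothetical key name
    apiVersion: v1
    kind: Config
    # kubeconfig for the hosting cluster; its credentials need RBAC that allows
    # k0smotron to manage the resources it creates there (StatefulSets, Services,
    # ConfigMaps, Secrets) in the target namespace
```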
Signed-off-by: apedriza <adripedriza@gmail.com>
Added docs and some refactoring for simplicity, as suggested. Other improvements will be addressed in a subsequent PR.
This is fantastic—exactly what I hoped for. Thank you so much!
Extension of the k0smotron API to support using an external cluster for the HCP. The only thing needed is a kubeconfig secret for the cluster where the pods will be placed.
fix #969