feat: enable HCP deployment on external clusters#989

Merged
apedriza merged 3 commits intok0sproject:mainfrom
apedriza:poc-clusterprofile-for-hcp
Aug 20, 2025
Conversation

@apedriza
Contributor

@apedriza apedriza commented Apr 7, 2025

Extension of the k0smotron API to support using an external cluster for HCP. The only thing needed is a kubeconfig Secret for the cluster where the control plane pods will be placed.

fix #969
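As a rough illustration of the feature (the field name and Secret data key below are hypothetical placeholders, not the final API; check the merged API types for the real contract), targeting an external hosting cluster amounts to referencing a kubeconfig Secret from the k0smotron Cluster spec:

```yaml
# Sketch only: field and key names are placeholders.
apiVersion: k0smotron.io/v1beta1
kind: Cluster
metadata:
  name: my-hcp
spec:
  kubeconfigRef:                       # hypothetical field name
    name: hosting-cluster-kubeconfig
---
apiVersion: v1
kind: Secret
metadata:
  name: hosting-cluster-kubeconfig
type: Opaque
stringData:
  value: |                             # data key is an assumption
    # kubeconfig for the cluster where the control plane pods will run
```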

@apedriza apedriza force-pushed the poc-clusterprofile-for-hcp branch 6 times, most recently from 79f7d5d to 2098a74 Compare April 9, 2025 11:01
@github-actions

github-actions bot commented May 9, 2025

The PR is marked as stale since no activity has been recorded in 30 days

@github-actions github-actions bot added Stale and removed Stale labels May 9, 2025
@github-actions

github-actions bot commented Jun 9, 2025

The PR is marked as stale since no activity has been recorded in 30 days

@github-actions github-actions bot added the Stale label Jun 9, 2025
@apedriza apedriza added keep Exempts issues and pull requests from stale workflow and removed Stale labels Jun 10, 2025
@apedriza apedriza force-pushed the poc-clusterprofile-for-hcp branch 4 times, most recently from b312e39 to 652f9b4 Compare June 20, 2025 10:20
@apedriza apedriza changed the title Add clusterprofileref in order to target a cluster for HCP deployment feat: enable HCP deployment on external clusters Jun 20, 2025
@apedriza apedriza force-pushed the poc-clusterprofile-for-hcp branch 9 times, most recently from 82b025d to 7809516 Compare June 27, 2025 09:31
// the k0smotron.Cluster. This is because the owner references cannot be used in this case.
if !kmcScope.inClusterHCP {
var errors []error
for _, descendant := range kmcScope.getDescendants(ctx, kmc) {
Contributor Author

This is what I like the least, since owner references cannot be used for background deletion. Happy to hear other options 😃

Member

Like we discussed, creating a "dummy" configmap or something acting as the "root" owner for everything might work. Requires some further testing and exploring so could be a separate PR, if that even works. 😂
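For context, the "root owner" idea would rely on standard Kubernetes garbage collection: every descendant carries an ownerReference to a single object living in the same cluster as the pods, so deleting that one object cascades. A minimal sketch (all names and the UID are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-hcp-root-owner          # the "dummy" root object
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  name: my-hcp-api
  namespace: default
  ownerReferences:
    - apiVersion: v1
      kind: ConfigMap
      name: my-hcp-root-owner
      uid: <uid-of-root-object>    # must match the root object's actual UID
```

Garbage collection only works within a single cluster and namespace, which is part of why this needs further testing for the external-cluster case.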

Comment on lines 559 to 574
if foundStatefulSet.Status.ReadyReplicas == kmc.Spec.Replicas {
	kmc.Status.Ready = true
}
return err
}
detectAndSetCurrentClusterVersion(foundStatefulSet, kmc)

return nil
if !isStatefulSetsEqual(&statefulSet, foundStatefulSet) {
	return scope.client.Patch(ctx, &statefulSet, client.Apply, patchOpts...)
}

return err
if foundStatefulSet.Status.ReadyReplicas == 0 {
	return fmt.Errorf("%w: no replicas ready yet for statefulset '%s' (%d/%d)", ErrNotReady, foundStatefulSet.GetName(), foundStatefulSet.Status.ReadyReplicas, kmc.Spec.Replicas)
}
Contributor Author

I think we can set the cluster status to ready once one replica is already running, instead of waiting for all replicas to be ready. This decreases the time needed to deploy worker nodes.
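The relaxed readiness rule can be sketched as a small helper (a simplification for illustration, not the actual controller code):

```go
package main

import "fmt"

// isReady reports whether the control plane can be considered ready.
// Instead of requiring readyReplicas == desiredReplicas, a single ready
// replica is enough, which lets worker nodes start joining sooner.
func isReady(readyReplicas, desiredReplicas int32) (bool, error) {
	if readyReplicas == 0 {
		return false, fmt.Errorf("not ready: no replicas ready yet (%d/%d)",
			readyReplicas, desiredReplicas)
	}
	return true, nil
}

func main() {
	ok, err := isReady(1, 3) // one ready replica out of three is enough
	fmt.Println(ok, err)
}
```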

@apedriza apedriza marked this pull request as ready for review June 27, 2025 10:14
@apedriza apedriza requested a review from a team as a code owner June 27, 2025 10:14
@apedriza apedriza added the enhancement New feature or request label Jun 27, 2025
// HostingClusterKubeconfigRef is the reference to the kubeconfig of the hosting cluster.
// This kubeconfig will be used to deploy the k0s control plane.
//+kubebuilder:validation:Optional
HostingClusterKubeconfigRef *HostingClusterKubeconfigRef `json:"hostingClusterKubeconfigRef,omitempty"`
Contributor

I'd call it just kubeconfigRef or kubeconfigSecretRef

Contributor Author

done!

@apedriza apedriza force-pushed the poc-clusterprofile-for-hcp branch 4 times, most recently from 1e2a463 to a100a29 Compare July 4, 2025 09:25
@apedriza apedriza force-pushed the poc-clusterprofile-for-hcp branch 2 times, most recently from 2966e5f to a9eca86 Compare August 18, 2025 07:31
linters:
  default: none
  enable:
    - depguard
Contributor Author

After the recent changes in the golangci-lint configuration, some noisy errors appeared. I think it's safe to remove the depguard linter for now; it deserves some investigation into how to configure it properly in follow-up changes.

Signed-off-by: apedriza <adripedriza@gmail.com>
@apedriza apedriza force-pushed the poc-clusterprofile-for-hcp branch from a9eca86 to 01a2e8a Compare August 18, 2025 07:44
@apedriza
Contributor Author

apedriza commented Aug 18, 2025

This is the contract fulfillment for k0smotron integrating the ClusterProfile API: https://github.com/kubernetes/enhancements/blob/master/keps/sig-multicluster/4322-cluster-inventory/README.md#secret-format. It is the user's responsibility to create, label, and key the kubeconfig Secret according to the ClusterProfile API. That way users can use the ClusterProfile API together with k0smotron, but we are not strongly coupled to that API.
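In practice this means the user creates the kubeconfig Secret themselves, labeled and keyed as the ClusterProfile KEP's secret format prescribes. The linked README is the authoritative reference; the label and key names below are deliberate placeholders, not copied from the KEP:

```yaml
# Sketch only: consult the linked KEP section for the exact label names and
# data key required by the ClusterProfile secret contract.
apiVersion: v1
kind: Secret
metadata:
  name: hosting-cluster-kubeconfig
  labels:
    <cluster-profile-consumer-label>: k0smotron   # placeholder label
type: Opaque
data:
  <kubeconfig-key>: <base64-encoded kubeconfig>   # placeholder key
```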

Member

@jnummelin jnummelin left a comment


LGTM in the big picture.

Added a few nit-pick type things; feel free to address them in a follow-up PR.

Cluster *clusterv1.Cluster
}

type hostingClusterScope struct {
Member

dunno if this could've been "merged" with the existing scope object? This way we'd be able to pass a single scope object around instead of multiple. Might simplify the code a bit?

Contributor Author

yes, I think using the scope struct simplifies the change. I'll change it in this PR, thanks!


// getDescendants returns the list of resources that are associated with the k0smotron.Cluster
// and that must be deleted when the k0smotron.Cluster is deleted.
func (scope *kmcScope) getDescendants(ctx context.Context, kmc *km.Cluster) []client.Object {
Member

This could maybe be enhanced into a more generic approach, something like:

var descendantTypes = []client.ObjectList{&v1.ConfigMapList{}, &v1.ServiceList{}}

func (scope *kmcScope) getDescendants(ctx context.Context, kmc *km.Cluster) ([]client.Object, error) {
	descendants := make([]client.Object, 0)
	lOpt := client.MatchingLabels(kutil.DefaultK0smotronClusterLabels(kmc))

	for _, dt := range descendantTypes {
		if err := scope.client.List(ctx, dt, lOpt); err != nil {
			return nil, fmt.Errorf("failed to list descendants of type %s for k0smotron.Cluster %s/%s: %w", reflect.TypeOf(dt).String(), kmc.Namespace, kmc.Name, err)
		}
		// extract the typed .Items into the flat []client.Object result
		items, err := apimeta.ExtractList(dt)
		if err != nil {
			return nil, err
		}
		for _, item := range items {
			descendants = append(descendants, item.(client.Object))
		}
	}

	return descendants, nil
}

Could be a follow-up PR too

Contributor Author

Since this method is only used for the removal of resources belonging to the cluster, and the proposed dummy object for centralizing deletion is a different approach, I will test both solutions in a separate PR.

@jnummelin
Member

@apedriza any plans for docs on this? In a separate PR?

@apedriza
Contributor Author

@apedriza any plans for docs on this? In a separate PR?

@jnummelin I can add docs in this PR. Any ideas about what form it could take? A simple mention in the standalone documentation that this possibility exists?

@jnummelin
Member

jnummelin commented Aug 19, 2025

A simple mention in the standalone documentation that this possibility exists?

yeah. plus some docs around the expected format of the secret for the access

Also I think we should add some notes on the RBAC expected for the "user" of the external client kubeconfig?
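As a starting point for those notes, the identity in the external kubeconfig needs permissions in the hosting cluster for whatever the controller creates there. A hedged sketch (the resource list and verbs are assumptions and must be matched against what k0smotron actually manages):

```yaml
# Illustrative only: align resources/verbs with the controller's real needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: k0smotron-hcp-deployer
  namespace: default
rules:
  - apiGroups: ["", "apps"]
    resources: ["statefulsets", "services", "configmaps", "secrets", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k0smotron-hcp-deployer
  namespace: default
subjects:
  - kind: User
    name: <kubeconfig-user>   # the identity embedded in the provided kubeconfig
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: k0smotron-hcp-deployer
  apiGroup: rbac.authorization.k8s.io
```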

Signed-off-by: apedriza <adripedriza@gmail.com>
Signed-off-by: apedriza <adripedriza@gmail.com>
@apedriza
Contributor Author

Added docs and some refactoring for simplicity, as suggested. Other improvements will be addressed in a subsequent PR.

@apedriza apedriza merged commit 353ea0f into k0sproject:main Aug 20, 2025
147 of 148 checks passed
@kahirokunn
Contributor

This is fantastic—exactly what I hoped for. Thank you so much!


Labels

enhancement New feature or request keep Exempts issues and pull requests from stale workflow

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Feature Request: Extend K0smotronControlPlane to Support Dedicated Host Clusters via ClusterProfile API

4 participants