Error When Using Private Endpoint for Kubernetes Cluster in Terraform Provider #43

@brokedba

Description

When using a private endpoint for the OKE cluster in the Terraform kubernetes provider configuration, I encounter the following error during the terraform plan phase:

export TF_VAR_cluster_endpoint_visibility="Private"

Error: Provider configuration: cannot load Kubernetes client config

  with provider["registry.terraform.io/hashicorp/kubernetes"],
  on providers.tf line 24, in provider "kubernetes":
  24: provider "kubernetes" {

invalid configuration: default cluster has no server defined
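
In my experience, this error from the hashicorp/kubernetes provider usually indicates that it ended up with no server/host to talk to (for example, because host evaluated to an empty string) and fell back to loading a default kubeconfig that has no cluster entry. A quick way to confirm this, assuming local.cluster_endpoint is the value fed to host as in the provider block below, is a hypothetical debug output inspected during plan:

# Hypothetical debug output (illustrative name) to inspect the value that
# ends up in the kubernetes provider's host argument.
output "debug_cluster_endpoint" {
  value = local.cluster_endpoint
}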

Steps to Reproduce

  1. Configure a private endpoint for the Kubernetes cluster in the Terraform setup.

  2. Use the following provider configuration for kubernetes (the locals it references are sketched just after this list):

    provider "kubernetes" {
      host                   = local.cluster_endpoint
      cluster_ca_certificate = local.cluster_ca_certificate
      insecure               = local.external_private_endpoint
      exec {
        api_version = "client.authentication.k8s.io/v1beta1"
        args        = ["ce", "cluster", "generate-token", "--cluster-id", local.cluster_id, "--region", local.cluster_region]
        command     = "oci"
      }
    }
  3. Run terraform plan.
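
For reference, here is a minimal sketch of how I assume the locals behind the provider block are wired; the quickstart's actual expressions may differ, and names such as oke_endpoint are illustrative:

# Sketch (assumption): pick the private or public endpoint based on visibility.
# For a private cluster, endpoints[0].private_endpoint is an address inside the
# VCN, so it is only reachable from a machine with network access to that
# subnet (Bastion, CloudShell private endpoint, or Resource Manager).
locals {
  oke_endpoint = (
    var.cluster_endpoint_visibility == "Private"
    ? oci_containerengine_cluster.oke_cluster[0].endpoints[0].private_endpoint
    : oci_containerengine_cluster.oke_cluster[0].endpoints[0].public_endpoint
  )

  cluster_endpoint = "https://${local.oke_endpoint}"
}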

Expected Behavior

Terraform should successfully connect to the private endpoint and validate the configuration.

Actual Behavior

The following error is raised:

Error: Provider configuration: cannot load Kubernetes client config

invalid configuration: default cluster has no server defined

Additional Information

The following comment in the Terraform configuration file highlights important considerations when using private endpoints:

### Important Notice ###
# OCI Resource Manager Private Endpoint is only available when using Resource Manager.
# If you use local Terraform, you will need to setup an OCI Bastion for connectivity to the Private OKE.
# If using OCI CloudShell, you need to activate the OCI Private Endpoint for OCI CloudShell.
---
resource "oci_resourcemanager_private_endpoint" "private_kubernetes_endpoint" { .. }
---
# Resolves the private IP of the customer's private endpoint to a NAT IP.
data "oci_resourcemanager_private_endpoint_reachable_ip" "private_kubernetes_endpoint" {
  private_endpoint_id = var.create_new_oke_cluster ? oci_resourcemanager_private_endpoint.private_kubernetes_endpoint[0].id : var.existent_oke_cluster_private_endpoint
  private_ip          = trimsuffix(oci_containerengine_cluster.oke_cluster[0].endpoints.0.private_endpoint, ":6443") # TODO: Pending rule when has existent cluster

  count = (var.cluster_endpoint_visibility == "Private") ? 1 : 0
}
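
If this is indeed the intended flow when running under Resource Manager, one possible wiring would be to feed the NAT-resolved IP back into the provider host. This is only a sketch: I am assuming the reachable-IP data source exposes an ip_address attribute and that the API server listens on 6443, as the trimsuffix above suggests.

locals {
  # Sketch/assumption: when the endpoint is private, build the provider host
  # from the Resource Manager NAT-resolved IP; otherwise fall back to the
  # cluster's public endpoint.
  cluster_endpoint = (
    var.cluster_endpoint_visibility == "Private"
    ? "https://${data.oci_resourcemanager_private_endpoint_reachable_ip.private_kubernetes_endpoint[0].ip_address}:6443"
    : "https://${oci_containerengine_cluster.oke_cluster[0].endpoints[0].public_endpoint}"
  )
}

With plain local Terraform the Resource Manager private endpoint does not help (as the notice above says), so connectivity still has to come from an OCI Bastion or similar, which is what my questions below are getting at.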

Possible Causes

  1. Misalignment between the provider's expected configuration and the way OCI OKE private endpoints are handled.
  2. Lack of documentation or automation for scenarios involving private endpoints with Terraform outside OCI Resource Manager.

Request

  1. Could you confirm whether the current terraform-oci-oke-quickstart supports private endpoints for the Kubernetes provider when using local Terraform?
  2. If supported, could you provide guidance on properly configuring Terraform with a private endpoint, considering the above comments?
  3. If this is a bug, could you suggest a workaround or plan for resolution?
