diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 2ca434ce1..284aa1a3e 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -43,28 +43,28 @@ you can follow the steps below to test your changes: 1. Create the cluster: -```sh -kind create cluster operator-controller -``` + ```sh + kind create cluster --name operator-controller + ``` 2. Build your changes: -```sh -make build docker-build -``` + ```sh + make build docker-build + ``` 3. Load the image locally and Deploy to Kind -```sh -make kind-load kind-deploy -``` + ```sh + make kind-load kind-deploy + ``` ### Communication Channels - Email: [operator-framework-olm-dev](mailto:operator-framework-olm-dev@googlegroups.com) - Slack: [#olm-dev](https://kubernetes.slack.com/archives/C0181L6JYQ2) - Google Group: [olm-gg](https://groups.google.com/g/operator-framework-olm-dev) -- Weekly in Person Working Group Meeting: [olm-wg](https://github.com/operator-framework/community#operator-lifecycle-manager-working-group) +- Weekly in Person Working Group Meeting: [olm-wg](https://github.com/operator-framework/community#operator-lifecycle-manager-working-group) ## How are Milestones Designed? @@ -91,7 +91,7 @@ As discussed earlier, the operator-controller adheres to a microservice architec ## Submitting Issues -Unsure where to submit an issue? +Unsure where to submit an issue? - [The Operator-Controller project](https://github.com/operator-framework/operator-controller/), which is the top level component allowing users to specify operators they'd like to install. - [The Catalogd project](https://github.com/operator-framework/catalogd/), which hosts operator content and helps users discover installable content. 
diff --git a/docs/concepts/controlling-catalog-selection.md b/docs/concepts/controlling-catalog-selection.md index a6659f77e..dc2f90ab8 100644 --- a/docs/concepts/controlling-catalog-selection.md +++ b/docs/concepts/controlling-catalog-selection.md @@ -159,70 +159,71 @@ If the system cannot resolve to a single bundle due to ambiguity, it will genera 1. **Create or Update `ClusterCatalogs` with Appropriate Labels and Priority** - ```yaml - apiVersion: olm.operatorframework.io/v1 - kind: ClusterCatalog - metadata: - name: catalog-a - labels: - example.com/support: "true" - spec: - priority: 1000 - source: - type: Image - image: - ref: quay.io/example/content-management-a:latest - ``` - - ```yaml - apiVersion: olm.operatorframework.io/v1 - kind: ClusterCatalog - metadata: - name: catalog-b - labels: - example.com/support: "false" - spec: - priority: 500 - source: - type: Image - image: - ref: quay.io/example/content-management-b:latest - ``` - NB: an `olm.operatorframework.io/metadata.name` label will be added automatically to ClusterCatalogs when applied + ```yaml + apiVersion: olm.operatorframework.io/v1 + kind: ClusterCatalog + metadata: + name: catalog-a + labels: + example.com/support: "true" + spec: + priority: 1000 + source: + type: Image + image: + ref: quay.io/example/content-management-a:latest + ``` + + ```yaml + apiVersion: olm.operatorframework.io/v1 + kind: ClusterCatalog + metadata: + name: catalog-b + labels: + example.com/support: "false" + spec: + priority: 500 + source: + type: Image + image: + ref: quay.io/example/content-management-b:latest + ``` + !!! note + An `olm.operatorframework.io/metadata.name` label will be added automatically to ClusterCatalogs when applied 2. 
**Create a `ClusterExtension` with Catalog Selection** - ```yaml - apiVersion: olm.operatorframework.io/v1 - kind: ClusterExtension - metadata: - name: install-my-operator - spec: - packageName: my-operator - catalog: - selector: - matchLabels: - example.com/support: "true" - ``` + ```yaml + apiVersion: olm.operatorframework.io/v1 + kind: ClusterExtension + metadata: + name: install-my-operator + spec: + packageName: my-operator + catalog: + selector: + matchLabels: + example.com/support: "true" + ``` 3. **Apply the Resources** - ```shell - kubectl apply -f content-management-a.yaml - kubectl apply -f content-management-b.yaml - kubectl apply -f install-my-operator.yaml - ``` + ```shell + kubectl apply -f content-management-a.yaml + kubectl apply -f content-management-b.yaml + kubectl apply -f install-my-operator.yaml + ``` 4. **Verify the Installation** - Check the status of the `ClusterExtension`: + Check the status of the `ClusterExtension`: - ```shell - kubectl get clusterextension install-my-operator -o yaml - ``` + ```shell + kubectl get clusterextension install-my-operator -o yaml + ``` - The status should indicate that the bundle was resolved from `catalog-a` due to the higher priority and matching label. + The status should indicate that the bundle was resolved from `catalog-a` due to the higher priority and matching label. ## Important Notes diff --git a/docs/concepts/upgrade-support.md b/docs/concepts/upgrade-support.md index 5abd579f1..8287ff2b3 100644 --- a/docs/concepts/upgrade-support.md +++ b/docs/concepts/upgrade-support.md @@ -17,10 +17,12 @@ When determining upgrade edges, also known as upgrade paths or upgrade constrain By supporting legacy OLM semantics, OLM v1 now honors the upgrade graph from catalogs accurately. -* If there are multiple possible successors, OLM v1 behavior differs in the following ways: - * In legacy OLM, the successor closest to the channel head is chosen. 
- * In OLM v1, the successor with the highest semantic version (semver) is chosen. -* Consider the following set of file-based catalog (FBC) channel entries: +If there are multiple possible successors, OLM v1 behavior differs in the following ways: + +* In legacy OLM, the successor closest to the channel head is chosen. +* In OLM v1, the successor with the highest semantic version (semver) is chosen. + +Consider the following set of file-based catalog (FBC) channel entries: ```yaml # ... @@ -51,7 +53,7 @@ spec: version: "" ``` -where setting the `upgradeConstraintPolicy` to: +Setting the `upgradeConstraintPolicy` to: `SelfCertified` : Does not limit the next version to the set of successors, and instead allows for any downgrade, sidegrade, or upgrade. @@ -63,8 +65,8 @@ where setting the `upgradeConstraintPolicy` to: OLM supports Semver to provide a simplified way for package authors to define compatible upgrades. According to the Semver standard, releases within a major version (e.g. `>=1.0.0 <2.0.0`) must be compatible. As a result, package authors can publish a new package version following the Semver specification, and OLM assumes compatibility. Package authors do not have to explicitly define upgrade edges in the catalog. -> [!NOTE] -> Currently, OLM 1.0 does not support automatic upgrades to the next major version. You must manually verify and perform major version upgrades. For more information about major version upgrades, see [Manually verified upgrades and downgrades](#manually-verified-upgrades-and-downgrades). +!!! note + Currently, OLM 1.0 does not support automatic upgrades to the next major version. You must manually verify and perform major version upgrades. For more information about major version upgrades, see [Manually verified upgrades and downgrades](#manually-verified-upgrades-and-downgrades). 
### Upgrades within the major version zero @@ -77,7 +79,8 @@ You must verify and perform upgrades manually in cases where automatic upgrades ## Manually verified upgrades and downgrades -**Warning:** If you want to force an upgrade manually, you must thoroughly verify the outcome before applying any changes to production workloads. Failure to test and verify the upgrade might lead to catastrophic consequences such as data loss. +!!! warning + If you want to force an upgrade manually, you must thoroughly verify the outcome before applying any changes to production workloads. Failure to test and verify the upgrade might lead to catastrophic consequences such as data loss. As a package admin, if you must upgrade or downgrade to a version that might be incompatible with the currently installed version, you can set the `.spec.upgradeConstraintPolicy` field to `SelfCertified` on the relevant `ClusterExtension` resource. diff --git a/docs/contribute/developer.md b/docs/contribute/developer.md index b97c9d693..8a63f0d7c 100644 --- a/docs/contribute/developer.md +++ b/docs/contribute/developer.md @@ -3,10 +3,10 @@ The following `make run` starts a [KIND](https://sigs.k8s.io/kind) cluster for you to get a local cluster for testing; see the manual install steps below for how to run against a remote cluster. -> [!NOTE] -> You will need a container runtime environment, like Docker, or experimentally, Podman, installed, to run Kind. -> -> If you are on MacOS, see [Special Setup for MacOS](#special-setup-for-macos). +!!! note + You will need a container runtime environment, such as Docker or, experimentally, Podman, installed in order to run Kind. + + If you are on MacOS, see [Special Setup for MacOS](#special-setup-for-macos). ### Quickstart Installation @@ -20,9 +20,9 @@ This will build a local container image of the operator-controller, create a new ### To Install Any Given Release -> [!CAUTION] -> Operator-Controller depends on [cert-manager](https://cert-manager.io/). 
Running the following command -> may affect an existing installation of cert-manager and cause cluster instability. +!!! warning + Operator-Controller depends on [cert-manager](https://cert-manager.io/). Running the following command + may affect an existing installation of cert-manager and cause cluster instability. The latest version of Operator Controller can be installed with the following command: @@ -33,21 +33,21 @@ curl -L -s https://github.com/operator-framework/operator-controller/releases/la ### Manual Step-by-Step Installation 1. Install Instances of Custom Resources: -```sh -kubectl apply -f config/samples/ -``` + ```sh + kubectl apply -f config/samples/ + ``` 2. Build and push your image to the location specified by `IMG`: -```sh -make docker-build docker-push IMG=<some-registry>/operator-controller:tag -``` + ```sh + make docker-build docker-push IMG=<some-registry>/operator-controller:tag + ``` 3. Deploy the controller to the cluster with the image specified by `IMG`: -```sh -make deploy IMG=<some-registry>/operator-controller:tag -``` + ```sh + make deploy IMG=<some-registry>/operator-controller:tag + ``` ### Uninstall CRDs To delete the CRDs from the cluster: @@ -72,7 +72,8 @@ make manifests --- -**NOTE:** Run `make help` for more information on all potential `make` targets. +!!! note + Run `make help` for more information on all potential `make` targets. ### Rapid Iterative Development with Tilt @@ -124,17 +125,18 @@ This is typically as short as: tilt up ``` -**NOTE:** if you are using Podman, at least as of v4.5.1, you need to do this: +!!! 
note + If you are using Podman, at least as of v4.5.1, you need to do this: -```shell -DOCKER_BUILDKIT=0 tilt up -``` + ```shell + DOCKER_BUILDKIT=0 tilt up + ``` -Otherwise, you'll see an error when Tilt tries to build your image that looks similar to: + Otherwise, you'll see an error when Tilt tries to build your image that looks similar to: -```text -Build Failed: ImageBuild: stat /var/tmp/libpod_builder2384046170/build/Dockerfile: no such file or directory -``` + ```text + Build Failed: ImageBuild: stat /var/tmp/libpod_builder2384046170/build/Dockerfile: no such file or directory + ``` When Tilt starts, you'll see something like this in your terminal: diff --git a/docs/getting-started/olmv1_getting_started.md b/docs/getting-started/olmv1_getting_started.md index 49ec4a137..043b1de23 100644 --- a/docs/getting-started/olmv1_getting_started.md +++ b/docs/getting-started/olmv1_getting_started.md @@ -2,9 +2,9 @@ The following script will install OLMv1 on a Kubernetes cluster. If you don't have one, you can deploy a Kubernetes cluster with [KIND](https://sigs.k8s.io/kind). -> [!CAUTION] -> Operator-Controller depends on [cert-manager](https://cert-manager.io/). Running the following command -> may affect an existing installation of cert-manager and cause cluster instability. +!!! warning + Operator-Controller depends on [cert-manager](https://cert-manager.io/). Running the following command + may affect an existing installation of cert-manager and cause cluster instability. The latest version of Operator Controller can be installed with the following command: diff --git a/docs/howto/catalog-queries.md b/docs/howto/catalog-queries.md index 7eadb8501..c58a7bff1 100644 --- a/docs/howto/catalog-queries.md +++ b/docs/howto/catalog-queries.md @@ -1,7 +1,8 @@ # Catalog queries -**Note:** By default, Catalogd is installed with TLS enabled for the catalog webserver. 
-The following examples will show this default behavior, but for simplicity's sake will ignore TLS verification in the curl commands using the `-k` flag. +!!! note + By default, Catalogd is installed with TLS enabled for the catalog webserver. + The following examples will show this default behavior, but for simplicity's sake will ignore TLS verification in the curl commands using the `-k` flag. You can use the `curl` command with `jq` to query catalogs that are installed on your cluster. diff --git a/docs/howto/derive-service-account.md b/docs/howto/derive-service-account.md index ac0481ffc..4242aa89f 100644 --- a/docs/howto/derive-service-account.md +++ b/docs/howto/derive-service-account.md @@ -1,11 +1,11 @@ # Derive minimal ServiceAccount required for ClusterExtension Installation and Management -OLM v1 does not have permission to install extensions on a cluster by default. In order to install a [supported bundle](../project/olmv1_limitations.md), +OLM v1 does not have permission to install extensions on a cluster by default. In order to install a [supported bundle](../project/olmv1_limitations.md), OLM must be provided a ServiceAccount configured with the appropriate permissions. This document serves as a guide for how to derive the RBAC necessary to install a bundle. -### Required RBAC +## Required RBAC The required permissions for the installation and management of a cluster extension can be determined by examining the contents of its bundle image. This bundle image contains all the manifests that make up the extension (e.g. `CustomResourceDefinition`s, `Service`s, `Secret`s, `ConfigMap`s, `Deployment`s etc.) @@ -28,7 +28,7 @@ Keep in mind, that it is not possible to scope `create`, `list`, and `watch` per Depending on the scope, each permission will need to be added to either a `ClusterRole` or a `Role` and then bound to the service account with a `ClusterRoleBinding` or a `RoleBinding`. 
-### Example +## Example The following example illustrates the process of deriving the minimal RBAC required to install the [ArgoCD Operator](https://operatorhub.io/operator/argocd-operator) [v0.6.0](https://operatorhub.io/operator/argocd-operator/alpha/argocd-operator.v0.6.0) provided by [OperatorHub.io](https://operatorhub.io/). The final permission set can be found in the [ClusterExtension sample manifest](https://github.com/operator-framework/operator-controller/blob/main/config/samples/olm_v1alpha1_clusterextension.yaml) in the [samples](https://github.com/operator-framework/operator-controller/blob/main/config/samples/olm_v1alpha1_clusterextension.yaml) directory. @@ -51,9 +51,9 @@ The bundle includes the following manifests, which can be found [here](https://g The `ClusterServiceVersion` defines a single `Deployment` in `spec.install.deployments` named `argocd-operator-controller-manager` with a `ServiceAccount` of the same name. It declares the following cluster-scoped permissions in `spec.install.clusterPermissions`, and its namespace-scoped permissions in `spec.install.permissions`. -#### Derive permissions for the installer service account `ClusterRole` +### Derive permissions for the installer service account `ClusterRole` -##### Step 1. RBAC creation and management permissions +#### Step 1. RBAC creation and management permissions The installer service account must create and manage the `ClusterRole`s and `ClusterRoleBinding`s for the extension controller(s). Therefore, it must have the following permissions: @@ -75,10 +75,11 @@ Therefore, it must have the following permissions: resourceNames: [, ...] ``` -Note: The `resourceNames` field should be populated with the names of the `ClusterRole`s and `ClusterRoleBinding`s created by OLM v1. -These names are generated with the following format: `.`. 
Since it is not a trivial task -to generate these names ahead of time, it is recommended to use a wildcard `*` in the `resourceNames` field for the installation. -Then, update the `resourceNames` fields by inspecting the cluster for the generated resource names. For instance, for `ClusterRole`s: +!!! note + The `resourceNames` field should be populated with the names of the `ClusterRole`s and `ClusterRoleBinding`s created by OLM v1. + These names are generated with the following format: `.`. Since it is not a trivial task + to generate these names ahead of time, it is recommended to use a wildcard `*` in the `resourceNames` field for the installation. + Then, update the `resourceNames` fields by inspecting the cluster for the generated resource names. For instance, for `ClusterRole`s: ```terminal kubectl get clusterroles | grep argocd @@ -97,9 +98,9 @@ argocd-operator.v0-22gmilmgp91wu25is5i2ec598hni8owq3l71bbkl7iz3 2024-09-3 The same can be done for `ClusterRoleBindings`. -##### Step 2. `CustomResourceDefinition` permissions +#### Step 2. `CustomResourceDefinition` permissions -The installer service account must be able to create and manage the `CustomResourceDefinition`s for the extension, as well +The installer service account must be able to create and manage the `CustomResourceDefinition`s for the extension, as well as grant the extension controller's service account the permissions it needs to manage its CRDs. ```yaml @@ -113,7 +114,7 @@ as grant the extension controller's service account the permissions it needs to resourceNames: [applications.argoproj.io, appprojects.argoproj.io, argocds.argoproj.io, argocdexports.argoproj.io, applicationsets.argoproj.io] ``` -##### Step 3. `OwnerReferencesPermissionEnforcement` permissions +#### Step 3. 
`OwnerReferencesPermissionEnforcement` permissions For clusters that use `OwnerReferencesPermissionEnforcement`, the installer service account must be able to update finalizers on the ClusterExtension to be able to set blockOwnerDeletion and ownerReferences for clusters that use `OwnerReferencesPermissionEnforcement`. This is only a requirement for clusters that use the [OwnerReferencesPermissionEnforcement](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement) admission plug-in. @@ -126,7 +127,7 @@ This is only a requirement for clusters that use the [OwnerReferencesPermissionE resourceNames: [argocd-operator.v0.6.0] ``` -##### Step 4. Bundled cluster-scoped resource permissions +#### Step 4. Bundled cluster-scoped resource permissions Permissions must be added for the creation and management of any cluster-scoped resources included in the bundle. In this example, the ArgoCD bundle contains a `ClusterRole` called `argocd-operator-metrics-reader`. Given that @@ -140,12 +141,13 @@ is sufficient to add the `argocd-operator-metrics-reader`resource name to the `r resourceNames: [, ..., argocd-operator-metrics-reader] ``` -##### Step 5. Operator permissions declared in the ClusterServiceVersion +#### Step 5. Operator permissions declared in the ClusterServiceVersion Include all permissions defined in the `.spec.install.permissions` ([reference](https://github.com/argoproj-labs/argocd-operator/blob/da6b8a7e68f71920de9545152714b9066990fc4b/deploy/olm-catalog/argocd-operator/0.6.0/argocd-operator.v0.6.0.clusterserviceversion.yaml#L1091)) and `.spec.install.clusterPermissions` ([reference](https://github.com/argoproj-labs/argocd-operator/blob/da6b8a7e68f71920de9545152714b9066990fc4b/deploy/olm-catalog/argocd-operator/0.6.0/argocd-operator.v0.6.0.clusterserviceversion.yaml#L872)) stanzas in the bundle's `ClusterServiceVersion`. 
These permissions are required by the extension controller, and therefore the installer service account must be able to grant them. -Note: there may be overlap between the rules defined in each stanza. Overlapping rules needn't be added twice. +!!! note + There may be overlap between the rules defined in each stanza. Overlapping rules needn't be added twice. ```yaml # from .spec.install.clusterPermissions @@ -224,12 +226,12 @@ Note: there may be overlap between the rules defined in each stanza. Overlapping # verbs: ["create", "patch"] ``` -#### Derive permissions for the installer service account `Role` +### Derive permissions for the installer service account `Role` The following steps detail how to define the namespace-scoped permissions needed by the installer service account's `Role`. The installer service account must create and manage the `RoleBinding`s for the extension controller(s). -##### Step 1. `Deployment` permissions +#### Step 1. `Deployment` permissions The installer service account must be able to create and manage the `Deployment`s for the extension controller(s). The `Deployment` name(s) can be found in the `ClusterServiceVersion` resource packed in the bundle under `.spec.install.deployments` ([reference](https://github.com/argoproj-labs/argocd-operator/blob/da6b8a7e68f71920de9545152714b9066990fc4b/deploy/olm-catalog/argocd-operator/0.6.0/argocd-operator.v0.6.0.clusterserviceversion.yaml#L1029)). @@ -246,7 +248,7 @@ This example's `ClusterServiceVersion` can be found [here](https://github.com/ar resourceNames: [argocd-operator-controller-manager] ``` -##### Step 2: `ServiceAccount` permissions +#### Step 2: `ServiceAccount` permissions The installer service account must be able to create and manage the `ServiceAccount`(s) for the extension controller(s). 
The `ServiceAccount` name(s) can be found in the deployment template in the `ClusterServiceVersion` resource packed in the bundle under `.spec.install.deployments`. @@ -263,7 +265,7 @@ This example's `ClusterServiceVersion` can be found [here](https://github.com/ar resourceNames: [argocd-operator-controller-manager] -##### Step 3. Bundled namespace-scoped resource permissions +#### Step 3. Bundled namespace-scoped resource permissions The installer service account must also create and manage other namespace-scoped resources included in the bundle. In this example, the bundle also includes two additional namespace-scoped resources: @@ -291,9 +293,10 @@ Therefore, the following permissions must be given to the installer service acco resourceNames: [argocd-operator-manager-config] -#### Putting it all together +### Putting it all together Once the installer service account's required cluster-scoped and namespace-scoped permissions have been collected: + 1. Create the installation namespace 2. Create the installer `ServiceAccount` 3. Create the installer `ClusterRole` @@ -304,13 +307,13 @@ Once the installer service account required cluster-scoped and namespace-scoped A manifest with the full set of resources can be found [here](https://github.com/operator-framework/operator-controller/blob/main/config/samples/olm_v1alpha1_clusterextension.yaml). -### Alternatives +## Alternatives We understand that manually determining the minimum RBAC required for installation/upgrade of a `ClusterExtension` is quite complex and protracted. In the near future, OLM v1 will provide tools and automation in order to simplify this process while maintaining our security posture. 
For users wishing to test out OLM v1 in non-production settings, we offer the following alternatives: -#### Give the installer service account admin privileges +### Give the installer service account admin privileges The `cluster-admin` `ClusterRole` can be bound to the installer service account giving it full permissions to the cluster. While this obviates the need to determine the minimal RBAC required for installation, it is also dangerous. It is highly recommended @@ -344,9 +347,9 @@ kubectl create clusterrolebinding my-cluster-extension-installer-role-binding \ --serviceaccount=my-cluster-extension-namespace:my-cluster-installer-service-account ``` -#### hack/tools/catalog +### hack/tools/catalog In the spirit of making this process more tenable until the proper tools are in place, the scripts in [hack/tools/catalogs](https://github.com/operator-framework/operator-controller/blob/main/hack/tools/catalogs) were created to help the user navigate and search catalogs as well -as to generate the minimal RBAC requirements. These tools are offered as is, with no guarantees on their correctness, +as to generate the minimal RBAC requirements. These tools are offered as is, with no guarantees on their correctness, support, or maintenance. For more information, see [Hack Catalog Tools](https://github.com/operator-framework/operator-controller/blob/main/hack/tools/catalogs/README.md). diff --git a/docs/tutorials/downgrade-extension.md b/docs/tutorials/downgrade-extension.md index e400600fa..ee25a5136 100644 --- a/docs/tutorials/downgrade-extension.md +++ b/docs/tutorials/downgrade-extension.md @@ -51,14 +51,14 @@ spec: upgradeConstraintPolicy: SelfCertified ``` -** Disable CRD Upgrade Safety Check:** +**Command Example:** -**Patch the ClusterExtension Resource:** +If you prefer using the command line, you can use `kubectl` to modify the CRD upgrade safety check configuration. 
- ```bash - kubectl patch clusterextension --patch '{"spec":{"install":{"preflight":{"crdUpgradeSafety":{"policy":"Disabled"}}}}}' --type=merge - ``` - Kubernetes will apply the updated configuration, disabling CRD safety checks during the downgrade process. +```bash +kubectl patch clusterextension <extension_name> --patch '{"spec":{"install":{"preflight":{"crdUpgradeSafety":{"policy":"Disabled"}}}}}' --type=merge +``` +Kubernetes will apply the updated configuration, disabling CRD safety checks during the downgrade process. ### 2. Ignoring Catalog Provided Upgrade Constraints @@ -102,38 +102,39 @@ Once the CRD safety checks are disabled and upgrade constraints are set, you can 1. **Edit the ClusterExtension Resource:** - Modify the `ClusterExtension` custom resource to specify the target version and adjust the upgrade constraints. + Modify the `ClusterExtension` custom resource to specify the target version and adjust the upgrade constraints. - ```bash - kubectl edit clusterextension - ``` + ```bash + kubectl edit clusterextension <extension_name> + ``` 2. **Update the Version:** - Within the YAML editor, update the `spec` section as follows: - - ```yaml - apiVersion: olm.operatorframework.io/v1 - kind: ClusterExtension - metadata: - name: - spec: - source: - sourceType: Catalog - catalog: - packageName: - version: - install: - namespace: - serviceAccount: - name: - ``` - - - **`version`:** Specify the target version you wish to downgrade to. + Within the YAML editor, update the `spec` section as follows: + + ```yaml + apiVersion: olm.operatorframework.io/v1 + kind: ClusterExtension + metadata: + name: <extension_name> + spec: + source: + sourceType: Catalog + catalog: + packageName: <package_name> + version: <target_version> + install: + namespace: <namespace_name> + serviceAccount: + name: <service_account_name> + ``` + + `target_version` + : Specify the target version you wish to downgrade to. 3. **Apply the Changes:** - Save and exit the editor. 
Kubernetes will apply the changes and initiate the downgrade process. ### 4. Post-Downgrade Verification @@ -143,31 +144,31 @@ After completing the downgrade, verify that the `ClusterExtension` is functionin 1. **Check the Status of the ClusterExtension:** - ```bash - kubectl get clusterextension -o yaml - ``` + ```bash + kubectl get clusterextension <extension_name> -o yaml + ``` - Ensure that the `status` reflects the target version and that there are no error messages. + Ensure that the `status` reflects the target version and that there are no error messages. 2. **Validate CRD Integrity:** - Confirm that all CRDs associated with the `ClusterExtension` are correctly installed and compatible with the downgraded version. + Confirm that all CRDs associated with the `ClusterExtension` are correctly installed and compatible with the downgraded version. - ```bash - kubectl get crd | grep - ``` + ```bash + kubectl get crd | grep <extension_name> + ``` 3. **Test Extension Functionality:** - Perform functional tests to ensure that the extension operates correctly in its downgraded state. + Perform functional tests to ensure that the extension operates correctly in its downgraded state. 4. **Monitor Logs:** - Check the logs of the operator managing the `ClusterExtension` for any warnings or errors. + Check the logs of the operator managing the `ClusterExtension` for any warnings or errors. - ```bash - kubectl logs deployment/ -n - ``` + ```bash + kubectl logs deployment/<operator_deployment_name> -n <namespace_name> + ``` ## Troubleshooting diff --git a/docs/tutorials/explore-available-content.md b/docs/tutorials/explore-available-content.md index 76bae2b6f..ada0855ef 100644 --- a/docs/tutorials/explore-available-content.md +++ b/docs/tutorials/explore-available-content.md @@ -13,8 +13,9 @@ Then you can query the catalog by using `curl` commands and the `jq` CLI tool to * You have added a ClusterCatalog of extensions, such as [OperatorHub.io](https://operatorhub.io), to your cluster. * You have installed the `jq` CLI tool. 
-**Note:** By default, Catalogd is installed with TLS enabled for the catalog webserver. -The following examples will show this default behavior, but for simplicity's sake will ignore TLS verification in the curl commands using the `-k` flag. +!!! note + By default, Catalogd is installed with TLS enabled for the catalog webserver. + The following examples will show this default behavior, but for simplicity's sake will ignore TLS verification in the curl commands using the `-k` flag. ## Procedure @@ -93,38 +94,38 @@ The following examples will show this default behavior, but for simplicity's sak !!! important Currently, OLM 1.0 does not support the installation of extensions that use webhooks or that target a single or specified set of namespaces. - * Return list of packages that support `AllNamespaces` install mode and do not use webhooks: +3. Return list of packages that support `AllNamespaces` install mode and do not use webhooks: - ``` terminal - curl -k https://localhost:8443/catalogs/operatorhubio/api/v1/all | jq -c 'select(.schema == "olm.bundle") | {"package":.package, "version":.properties[] | select(.type == "olm.bundle.object").value.data | @base64d | fromjson | select(.kind == "ClusterServiceVersion" and (.spec.installModes[] | select(.type == "AllNamespaces" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' + ``` terminal + curl -k https://localhost:8443/catalogs/operatorhubio/api/v1/all | jq -c 'select(.schema == "olm.bundle") | {"package":.package, "version":.properties[] | select(.type == "olm.bundle.object").value.data | @base64d | fromjson | select(.kind == "ClusterServiceVersion" and (.spec.installModes[] | select(.type == "AllNamespaces" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' + ``` + + ??? 
success + ``` text title="Example output" + {"package":"ack-acm-controller","version":"0.0.12"} + {"package":"ack-acmpca-controller","version":"0.0.5"} + {"package":"ack-apigatewayv2-controller","version":"1.0.7"} + {"package":"ack-applicationautoscaling-controller","version":"1.0.11"} + {"package":"ack-cloudfront-controller","version":"0.0.9"} + {"package":"ack-cloudtrail-controller","version":"1.0.8"} + {"package":"ack-cloudwatch-controller","version":"0.0.3"} + {"package":"ack-cloudwatchlogs-controller","version":"0.0.4"} + {"package":"ack-dynamodb-controller","version":"1.2.9"} + {"package":"ack-ec2-controller","version":"1.2.4"} + {"package":"ack-ecr-controller","version":"1.0.12"} + {"package":"ack-ecs-controller","version":"0.0.4"} + {"package":"ack-efs-controller","version":"0.0.5"} + {"package":"ack-eks-controller","version":"1.3.3"} + {"package":"ack-elasticache-controller","version":"0.0.29"} + {"package":"ack-emrcontainers-controller","version":"1.0.8"} + {"package":"ack-eventbridge-controller","version":"1.0.6"} + {"package":"ack-iam-controller","version":"1.3.6"} + {"package":"ack-kafka-controller","version":"0.0.4"} + {"package":"ack-keyspaces-controller","version":"0.0.11"} + ... ``` - ??? 
success - ``` text title="Example output" - {"package":"ack-acm-controller","version":"0.0.12"} - {"package":"ack-acmpca-controller","version":"0.0.5"} - {"package":"ack-apigatewayv2-controller","version":"1.0.7"} - {"package":"ack-applicationautoscaling-controller","version":"1.0.11"} - {"package":"ack-cloudfront-controller","version":"0.0.9"} - {"package":"ack-cloudtrail-controller","version":"1.0.8"} - {"package":"ack-cloudwatch-controller","version":"0.0.3"} - {"package":"ack-cloudwatchlogs-controller","version":"0.0.4"} - {"package":"ack-dynamodb-controller","version":"1.2.9"} - {"package":"ack-ec2-controller","version":"1.2.4"} - {"package":"ack-ecr-controller","version":"1.0.12"} - {"package":"ack-ecs-controller","version":"0.0.4"} - {"package":"ack-efs-controller","version":"0.0.5"} - {"package":"ack-eks-controller","version":"1.3.3"} - {"package":"ack-elasticache-controller","version":"0.0.29"} - {"package":"ack-emrcontainers-controller","version":"1.0.8"} - {"package":"ack-eventbridge-controller","version":"1.0.6"} - {"package":"ack-iam-controller","version":"1.3.6"} - {"package":"ack-kafka-controller","version":"0.0.4"} - {"package":"ack-keyspaces-controller","version":"0.0.11"} - ... - ``` - -3. Inspect the contents of an extension's metadata: +4. Inspect the contents of an extension's metadata: ``` terminal curl -k https://localhost:8443/catalogs/operatorhubio/api/v1/all | jq -s '.[] | select( .schema == "olm.package") | select( .name == "")' diff --git a/docs/tutorials/upgrade-extension.md b/docs/tutorials/upgrade-extension.md index 1c0e8b061..86ecaeb75 100644 --- a/docs/tutorials/upgrade-extension.md +++ b/docs/tutorials/upgrade-extension.md @@ -60,10 +60,10 @@ spec: EOF ``` - ??? success - ``` text title="Example output" - clusterextension.olm.operatorframework.io/argocd-operator configured - ``` + !!! 
success + ``` text title="Example output" + clusterextension.olm.operatorframework.io/argocd-operator configured + ``` Alternatively, you can use `kubectl patch` to update the version field: @@ -73,14 +73,14 @@ spec: `extension_name` : Specifies the name defined in the `metadata.name` field of the extension's CR. - + `target_version` : Specifies the version to upgrade or downgrade to. - ??? success - ``` text title="Example output" - clusterextension.olm.operatorframework.io/argocd-operator patched - ``` + !!! success + ``` text title="Example output" + clusterextension.olm.operatorframework.io/argocd-operator patched + ``` ### Verification @@ -91,74 +91,74 @@ spec: ``` ??? success - ``` text title="Example output" - apiVersion: olm.operatorframework.io/v1 - kind: ClusterExtension - metadata: - annotations: - kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"argocd"},"spec":{"install":{"namespace":"argocd","serviceAccount":{"name":"argocd-installer"}},"source":{"catalog":{"packageName":"argocd-operator","version":"0.6.0"},"sourceType":"Catalog"}}} - creationTimestamp: "2024-10-03T16:02:40Z" - finalizers: - - olm.operatorframework.io/cleanup-unpack-cache - - olm.operatorframework.io/cleanup-contentmanager-cache - generation: 2 - name: argocd - resourceVersion: "1174" - uid: 0fcaf3f5-d142-4c7e-8d88-c88a549f7764 - spec: - install: - namespace: argocd - serviceAccount: - name: argocd-installer - source: - catalog: - packageName: argocd-operator - selector: {} - upgradeConstraintPolicy: CatalogProvided - version: 0.6.0 - sourceType: Catalog - status: - conditions: - - lastTransitionTime: "2024-10-03T16:02:41Z" - message: "" - observedGeneration: 2 - reason: Deprecated - status: "False" - type: Deprecated - - lastTransitionTime: "2024-10-03T16:02:41Z" - message: "" - observedGeneration: 2 - reason: Deprecated - status: "False" - type: PackageDeprecated - - 
lastTransitionTime: "2024-10-03T16:02:41Z" - message: "" - observedGeneration: 2 - reason: Deprecated - status: "False" - type: ChannelDeprecated - - lastTransitionTime: "2024-10-03T16:02:41Z" - message: "" - observedGeneration: 2 - reason: Deprecated - status: "False" - type: BundleDeprecated - - lastTransitionTime: "2024-10-03T16:02:43Z" - message: Installed bundle quay.io/operatorhubio/argocd-operator@sha256:d538c45a813b38ef0e44f40d279dc2653f97ca901fb660da5d7fe499d51ad3b3 - successfully - observedGeneration: 2 - reason: Succeeded - status: "True" - type: Installed - - lastTransitionTime: "2024-10-03T16:02:43Z" - message: desired state reached - observedGeneration: 2 - reason: Succeeded - status: "False" - type: Progressing - install: - bundle: - name: argocd-operator.v0.6.0 - version: 0.6.0 - ``` + ``` text title="Example output" + apiVersion: olm.operatorframework.io/v1 + kind: ClusterExtension + metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"argocd"},"spec":{"install":{"namespace":"argocd","serviceAccount":{"name":"argocd-installer"}},"source":{"catalog":{"packageName":"argocd-operator","version":"0.6.0"},"sourceType":"Catalog"}}} + creationTimestamp: "2024-10-03T16:02:40Z" + finalizers: + - olm.operatorframework.io/cleanup-unpack-cache + - olm.operatorframework.io/cleanup-contentmanager-cache + generation: 2 + name: argocd + resourceVersion: "1174" + uid: 0fcaf3f5-d142-4c7e-8d88-c88a549f7764 + spec: + install: + namespace: argocd + serviceAccount: + name: argocd-installer + source: + catalog: + packageName: argocd-operator + selector: {} + upgradeConstraintPolicy: CatalogProvided + version: 0.6.0 + sourceType: Catalog + status: + conditions: + - lastTransitionTime: "2024-10-03T16:02:41Z" + message: "" + observedGeneration: 2 + reason: Deprecated + status: "False" + type: Deprecated + - lastTransitionTime: 
"2024-10-03T16:02:41Z" + message: "" + observedGeneration: 2 + reason: Deprecated + status: "False" + type: PackageDeprecated + - lastTransitionTime: "2024-10-03T16:02:41Z" + message: "" + observedGeneration: 2 + reason: Deprecated + status: "False" + type: ChannelDeprecated + - lastTransitionTime: "2024-10-03T16:02:41Z" + message: "" + observedGeneration: 2 + reason: Deprecated + status: "False" + type: BundleDeprecated + - lastTransitionTime: "2024-10-03T16:02:43Z" + message: Installed bundle quay.io/operatorhubio/argocd-operator@sha256:d538c45a813b38ef0e44f40d279dc2653f97ca901fb660da5d7fe499d51ad3b3 + successfully + observedGeneration: 2 + reason: Succeeded + status: "True" + type: Installed + - lastTransitionTime: "2024-10-03T16:02:43Z" + message: desired state reached + observedGeneration: 2 + reason: Succeeded + status: "False" + type: Progressing + install: + bundle: + name: argocd-operator.v0.6.0 + version: 0.6.0 + ```