
Upgrade Turtles import controller

Context

When Turtles imports a CAPI cluster into Rancher, a number of Kubernetes resources must be created to represent the CAPI cluster in Rancher. The main resource is the one representing the CAPI cluster as a Rancher cluster, so it can be viewed and managed via Rancher:

clusters.provisioning.cattle.io

We also call this resource a v1 cluster. If you were using Turtles before v0.9.0, the default import controller generated this resource for every CAPI cluster imported into Rancher. The controller we are migrating to was disabled behind the managementv3_cluster feature gate.

Starting with Turtles v0.9.0, the v3 import controller is no longer behind a feature gate and is the default import logic. This means we are moving to a different strategy for importing clusters and switching to creating a different type of Rancher cluster resource:

clusters.management.cattle.io

You may also see this resource referred to as a v3 cluster.
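
If you want to inspect both representations on the Rancher management cluster, you can list each resource type with kubectl (clusters.provisioning.cattle.io is namespaced, while clusters.management.cattle.io is cluster-scoped; the names in your environment will differ):

kubectl get clusters.provisioning.cattle.io -A
kubectl get clusters.management.cattle.io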

Migrating from one controller to the other should be transparent, but you will need to prepare your already imported clusters for migration to prevent the new controller from duplicating resources. To make the migration even easier, you can use the provided migration script, which lets you specify the namespaces where your imported clusters exist and migrates them automatically. If you prefer, you can apply the changes manually per cluster via kubectl.

note

Both cluster resources described above are required to represent any Rancher cluster. This controller update only means that Turtles will create one of them, and Rancher will take care of provisioning the other.

Using the migration script - per namespace

This method provides an automated process to migrate clusters in batches, per namespace. To prepare your imported CAPI clusters for migration via the migration script, follow the steps below:

  1. Select the namespace where your imported clusters are deployed.
  2. Pass the namespace as an argument to the migration script: this will prepare ALL CAPI clusters in the namespace for migration. If you want to migrate each cluster independently, we recommend following the manual procedure.
./import_controller_migration.sh <cluster-ns>
  3. You can pass as many namespaces as you need as arguments (see the example after this list).
  4. Before proceeding, you will be prompted to confirm the namespaces you provided.
  5. For each namespace, the script automatically fetches v1 CAPI clusters that have not already been migrated (an annotation is set on the resource after it is successfully prepared for migration).
    • v1 clusters include a reference to v3 clusters via the status.clusterName field.
    • The labels cluster-api.cattle.io/capi-cluster-owner and cluster-api.cattle.io/capi-cluster-owner-ns are added to the v3 cluster.
    • After the labels are successfully added to the v3 cluster resource, the v1 cluster is annotated as migrated via cluster-api.cattle.io/migrated=true.
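
For example, to migrate all imported clusters in two namespaces at once (capi-ns-1 and capi-ns-2 are placeholder names, substitute your own):

./import_controller_migration.sh capi-ns-1 capi-ns-2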

Manually editing resources - per cluster

caution

Note that manually editing Kubernetes resources may cause unexpected failures due to incorrect or missing values. If you are not confident using kubectl and interacting with the Kubernetes API and its resources, we recommend opting for the scripted migration.

Migrating an imported cluster requires setting the labels used by the new controller on the clusters.management.cattle.io (v3) cluster resource. It is important to mark the v1 cluster as migrated after adding the labels, as this is what prevents the new controller from duplicating Rancher cluster resources.

  1. Select the cluster(s) you would like to prepare for migration. We'll use cluster1 as an example for this guide.
    • We can list the CAPI cluster resource clusters.cluster.x-k8s.io. We need to provide the corresponding namespace as this is a namespaced resource.
    kubectl get clusters.cluster.x-k8s.io -n capi-ns
    In our example we can see cluster1:
    NAMESPACE         NAME             PHASE         AGE    VERSION
    capi-ns           cluster1         Provisioned   6d1h
    • This CAPI cluster, when imported into Rancher, is linked to a Rancher cluster resource, represented by both clusters.provisioning.cattle.io and clusters.management.cattle.io. For now we're interested in clusters.provisioning.cattle.io, the v1 cluster.
    kubectl get clusters.provisioning.cattle.io -n capi-ns
    And we get the Rancher cluster that Turtles generated to represent the CAPI cluster. You can see that the -capi suffix is added to the resource name, which effectively allows us to distinguish CAPI clusters from all other Rancher clusters (which may be provisioned using other mechanisms).
    NAME               READY   KUBECONFIG
    cluster1-capi
  2. v3 cluster resources are assigned a randomly generated name, so we need to find the one linked to the v1 cluster we retrieved in the previous step. We do this by inspecting the values of the v1 resource, specifically the status.clusterName field.
kubectl get clusters.provisioning.cattle.io -n capi-ns cluster1-capi -o jsonpath='{.status.clusterName}'

The resource we get is c-m-xyz1a2b3, which is the one we'll be labelling.
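
If you prefer to script this lookup, you can capture the v3 cluster name in a shell variable (a minimal sketch, reusing the example names from this guide):

V3_CLUSTER_NAME=$(kubectl get clusters.provisioning.cattle.io -n capi-ns cluster1-capi -o jsonpath='{.status.clusterName}')
echo "${V3_CLUSTER_NAME}"   # c-m-xyz1a2b3 in our example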

  3. Now that we know which resource to label, we can focus on the labels that are required. These labels are used by the new controller to watch cluster resources.
cluster-api.cattle.io/capi-cluster-owner      # name of the cluster
cluster-api.cattle.io/capi-cluster-owner-ns   # name of the namespace where the cluster lives
caution

Note that the -capi suffix must be stripped when assigning the cluster name to cluster-api.cattle.io/capi-cluster-owner.
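
If you are scripting the migration, the suffix can be stripped with shell parameter expansion (a sketch, assuming the example v1 cluster name from this guide):

V1_CLUSTER_NAME=cluster1-capi
CAPI_CLUSTER_NAME="${V1_CLUSTER_NAME%-capi}"   # yields cluster1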

a. Edit the resource and apply the labels.

kubectl label clusters.management.cattle.io c-m-xyz1a2b3 cluster-api.cattle.io/capi-cluster-owner=cluster1
kubectl label clusters.management.cattle.io c-m-xyz1a2b3 cluster-api.cattle.io/capi-cluster-owner-ns=capi-ns

b. You can now validate that the changes have been successfully applied.

kubectl get clusters.management.cattle.io c-m-xyz1a2b3 -o jsonpath='{.metadata.labels}'
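
The output should include the two labels you just applied (any other labels on the resource are omitted here):

{"cluster-api.cattle.io/capi-cluster-owner":"cluster1","cluster-api.cattle.io/capi-cluster-owner-ns":"capi-ns"}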
  4. After the clusters.management.cattle.io v3 resource has been updated, it is very important to mark the clusters.provisioning.cattle.io v1 resource as migrated, so the new controller behaves as expected. We do this by adding the cluster-api.cattle.io/migrated annotation:
kubectl annotate clusters.provisioning.cattle.io -n capi-ns cluster1-capi cluster-api.cattle.io/migrated=true
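
Optionally, you can confirm the annotation is present by printing all annotations on the v1 resource (the annotation key contains dots, so we simply print the whole map here):

kubectl get clusters.provisioning.cattle.io -n capi-ns cluster1-capi -o jsonpath='{.metadata.annotations}'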