Create & import a cluster using CAPI providers

This guide goes over the process of creating and importing CAPI clusters with a selection of the officially certified providers.

Keep in mind that most Cluster API Providers are upstream projects maintained by the Kubernetes open-source community.

Prerequisites

  • AWS EKS/RKE2/Kubeadm

  • Azure AKS

  • GCP GKE

  • Docker RKE2/Kubeadm

  • vSphere RKE2/Kubeadm

  • Rancher Manager cluster with Rancher Turtles installed

  • Cluster API Providers: you can find a guide on how to install a provider using the CAPIProvider resource here

    • Infrastructure provider for AWS. This is an example of an AWS provider installation; follow the provider documentation if some options need to be customized (see the credential-generation sketch after the manifests):

      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: capa-system
      ---
      apiVersion: v1
      kind: Secret
      metadata:
        name: aws
        namespace: capa-system
      type: Opaque
      stringData:
        AWS_B64ENCODED_CREDENTIALS: xxx
      ---
      apiVersion: turtles-capi.cattle.io/v1alpha1
      kind: CAPIProvider
      metadata:
        name: aws
        namespace: capa-system
      spec:
        type: infrastructure
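
      One way to produce the AWS_B64ENCODED_CREDENTIALS value is with the clusterawsadm tool from the Cluster API Provider AWS project. A minimal sketch, assuming clusterawsadm is installed, your AWS credentials and region are exported in the environment, and the manifests above are saved as capa-provider.yaml (an illustrative file name):

        # Encode the AWS credentials from the environment into the format the CAPA controller expects.
        export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)

        # Replace the xxx placeholder in the Secret with this value, then apply the manifests.
        kubectl apply -f capa-provider.yaml
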
    • If using RKE2 or Kubeadm, you also need the Bootstrap/Control Plane provider for RKE2 (installed by default) or the Bootstrap/Control Plane provider for Kubeadm. Example of a Kubeadm installation:

      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: capi-kubeadm-bootstrap-system
      ---
      apiVersion: turtles-capi.cattle.io/v1alpha1
      kind: CAPIProvider
      metadata:
        name: kubeadm-bootstrap
        namespace: capi-kubeadm-bootstrap-system
      spec:
        name: kubeadm
        type: bootstrap
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: capi-kubeadm-control-plane-system
      ---
      apiVersion: turtles-capi.cattle.io/v1alpha1
      kind: CAPIProvider
      metadata:
        name: kubeadm-control-plane
        namespace: capi-kubeadm-control-plane-system
      spec:
        name: kubeadm
        type: controlPlane
  • Rancher Manager cluster with Rancher Turtles installed

  • Cluster API Providers: you can find a guide on how to install a provider using the CAPIProvider resource here

    • Infrastructure provider for Azure. This is an example of an Azure provider installation; follow the provider documentation if some options need to be customized (see the note on applying the manifests after them):

      export AZURE_CLIENT_SECRET="xxx"
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: capz-system
      ---
      apiVersion: turtles-capi.cattle.io/v1alpha1
      kind: CAPIProvider
      metadata:
        name: azure
        namespace: capz-system
      spec:
        type: infrastructure
      ---
      apiVersion: v1
      stringData:
        clientSecret: "${AZURE_CLIENT_SECRET}"
      kind: Secret
      metadata:
        name: cluster-identity-secret
        namespace: default
      type: Opaque
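
      Because the Secret above references ${AZURE_CLIENT_SECRET}, the exported value needs to be substituted before the manifests are applied. A minimal sketch, assuming the manifests are saved as capz-provider.yaml (an illustrative file name):

        # Substitute ${AZURE_CLIENT_SECRET} from the environment and apply the result.
        envsubst < capz-provider.yaml | kubectl apply -f -
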
  • Rancher Manager cluster with Rancher Turtles installed

  • Cluster API Providers: you can find a guide on how to install a provider using the CAPIProvider resource here

    • Infrastructure provider for GCP. This is an example of a GCP provider installation; follow the provider documentation if some options need to be customized:

      export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: capg-system
      ---
      apiVersion: v1
      kind: Secret
      metadata:
        name: gcp
        namespace: capg-system
      type: Opaque
      stringData:
        GCP_B64ENCODED_CREDENTIALS: "${GCP_B64ENCODED_CREDENTIALS}"
      ---
      apiVersion: turtles-capi.cattle.io/v1alpha1
      kind: CAPIProvider
      metadata:
        name: gcp
        namespace: capg-system
      spec:
        type: infrastructure
  • Rancher Manager cluster with Rancher Turtles installed

  • Cluster API Providers: you can find a guide on how to install a provider using the CAPIProvider resource here

    • Infrastructure provider for Docker. Example of a Docker provider installation:

      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: capd-system
      ---
      apiVersion: turtles-capi.cattle.io/v1alpha1
      kind: CAPIProvider
      metadata:
        name: docker
        namespace: capd-system
      spec:
        type: infrastructure
    • Bootstrap/Control Plane provider for RKE2 (installed by default) or Bootstrap/Control Plane provider for Kubeadm. Example of a Kubeadm installation:

      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: capi-kubeadm-bootstrap-system
      ---
      apiVersion: turtles-capi.cattle.io/v1alpha1
      kind: CAPIProvider
      metadata:
        name: kubeadm-bootstrap
        namespace: capi-kubeadm-bootstrap-system
      spec:
        name: kubeadm
        type: bootstrap
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: capi-kubeadm-control-plane-system
      ---
      apiVersion: turtles-capi.cattle.io/v1alpha1
      kind: CAPIProvider
      metadata:
        name: kubeadm-control-plane
        namespace: capi-kubeadm-control-plane-system
      spec:
        name: kubeadm
        type: controlPlane
  • Rancher Manager cluster with Rancher Turtles installed

  • Cluster API Providers: you can find a guide on how to install a provider using the CAPIProvider resource here

    • Infrastructure provider for vSphere. This is an example of a vSphere provider installation; follow the provider documentation if some options need to be customized:

      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: capv-system
      ---
      apiVersion: v1
      kind: Secret
      metadata:
        name: vsphere
        namespace: capv-system
      type: Opaque
      stringData:
        VSPHERE_USERNAME: xxx
        VSPHERE_PASSWORD: xxx
      ---
      apiVersion: turtles-capi.cattle.io/v1alpha1
      kind: CAPIProvider
      metadata:
        name: vsphere
        namespace: capv-system
      spec:
        type: infrastructure
    • Bootstrap/Control Plane provider for RKE2 (installed by default) or Bootstrap/Control Plane provider for Kubeadm. Example of a Kubeadm installation:

      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: capi-kubeadm-bootstrap-system
      ---
      apiVersion: turtles-capi.cattle.io/v1alpha1
      kind: CAPIProvider
      metadata:
        name: kubeadm-bootstrap
        namespace: capi-kubeadm-bootstrap-system
      spec:
        name: kubeadm
        type: bootstrap
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: capi-kubeadm-control-plane-system
      ---
      apiVersion: turtles-capi.cattle.io/v1alpha1
      kind: CAPIProvider
      metadata:
        name: kubeadm-control-plane
        namespace: capi-kubeadm-control-plane-system
      spec:
        name: kubeadm
        type: controlPlane
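
Whichever providers you install, verify that they are ready before creating any clusters. A quick check, assuming the CAPIProvider resources are served under the plural name capiproviders and using the namespaces from the examples above:

# CAPIProvider resources should report a ready/installed state.
kubectl get capiproviders -A

# The provider controller pods should be running, e.g. for the AWS provider:
kubectl get pods -n capa-system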

Create Your Cluster Definition

  • AWS EC2 RKE2

  • AWS EC2 Kubeadm

  • Docker RKE2

  • Docker Kubeadm

  • vSphere RKE2

  • vSphere Kubeadm

  • Azure AKS

  • AWS EKS

  • GCP GKE
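
All of the examples below create the cluster objects in the capi-clusters namespace (set via the NAMESPACE variable). If that namespace does not yet exist on the Rancher Manager cluster, create it before applying a cluster definition:

kubectl create namespace capi-clusters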

Before creating an AWS+RKE2 workload cluster, it is required to build an AMI for the RKE2 version that is going to be installed on the cluster. You can follow the steps in the RKE2 image-builder README to build the AMI.

We recommend you refer to the CAPRKE2 repository where you can find a samples folder with different CAPA+CAPRKE2 cluster configurations that can be used to provision downstream clusters. The internal folder contains cluster templates to deploy an RKE2 cluster on AWS using the internal cloud provider, and the external folder contains the cluster templates to deploy a cluster with the external cloud provider.

We will use the internal one for this guide; however, the same steps apply to the external one.

To generate the YAML for the cluster, do the following:

  1. Open a terminal and run the following:

    export CLUSTER_NAME=cluster1
    export NAMESPACE=capi-clusters
    export CONTROL_PLANE_MACHINE_COUNT=1
    export WORKER_MACHINE_COUNT=1
    export RKE2_VERSION=v1.30.3+rke2r1
    export AWS_NODE_MACHINE_TYPE=t3a.large
    export AWS_CONTROL_PLANE_MACHINE_TYPE=t3a.large
    export AWS_SSH_KEY_NAME="aws-ssh-key"
    export AWS_REGION="aws-region"
    export AWS_AMI_ID="ami-id"
    
    curl -s https://raw.githubusercontent.com/rancher/cluster-api-provider-rke2/refs/heads/main/examples/aws/cluster-template.yaml | envsubst > cluster1.yaml
  2. View cluster1.yaml and examine the resulting YAML file. You can make any changes you want as well.

    The Cluster API quickstart guide contains more detail. Read the steps related to this section here.

  3. Create the cluster using kubectl

    kubectl create -f cluster1.yaml

To generate the YAML for the cluster, do the following:

  1. Open a terminal and run the following:

    export CLUSTER_NAME=cluster1
    export NAMESPACE=capi-clusters
    export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.large
    export AWS_NODE_MACHINE_TYPE=t3.large
    export AWS_SSH_KEY_NAME="aws-ssh-key"
    export AWS_REGION="aws-region"
    export KUBERNETES_VERSION=v1.29.9
    export CONTROL_PLANE_MACHINE_COUNT=1
    export WORKER_MACHINE_COUNT=1
    
    curl -s https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template.yaml | envsubst > cluster1.yaml
  2. View cluster1.yaml to ensure there are no tokens (e.g. SSH keys or cloud credentials). You can make any changes you want as well.

    The Cluster API quickstart guide contains more detail. Read the steps related to this section here.

  3. Create the cluster using kubectl

     kubectl create -f cluster1.yaml
  4. Deploy CNI

    Once the cluster is created, a CNI is required for nodes to become Ready. You can refer to the Cluster API documentation for an example CNI installation here.
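
    A minimal sketch of one approach, assuming clusterctl is available and using Calico as the CNI (the manifest URL and version below are illustrative; pick the CNI and version appropriate for your cluster):

     # Fetch the workload cluster kubeconfig from the management cluster.
     clusterctl get kubeconfig cluster1 -n capi-clusters > cluster1.kubeconfig

     # Install a CNI, for example Calico, into the workload cluster.
     kubectl --kubeconfig=./cluster1.kubeconfig apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml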

To generate the YAML for the cluster, do the following:

  1. Open a terminal and run the following:

    export CLUSTER_NAME=cluster1
    export NAMESPACE=capi-clusters
    export CONTROL_PLANE_MACHINE_COUNT=1
    export WORKER_MACHINE_COUNT=1
    export RKE2_VERSION=v1.30.2+rke2r1
    export KUBERNETES_VERSION=v1.30.4 # needed for the CAPI Docker provider to use the proper node image
    
    curl -s https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/test/e2e/data/cluster-templates/docker-rke2.yaml | envsubst > cluster1.yaml
  2. View cluster1.yaml to ensure there are no tokens. You can make any changes you want as well.

    The Cluster API quickstart guide contains more detail. Read the steps related to this section here.

  3. Create the cluster using kubectl

    kubectl create -f cluster1.yaml
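
    With the Docker provider, the resulting machines are containers on the host where CAPD runs, so you can watch them appear with docker. This assumes CAPD prefixes the container names with the cluster name:

    # Control plane, worker and load balancer nodes show up as containers.
    docker ps --filter "name=cluster1"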

To generate the YAML for the cluster, do the following:

  1. Open a terminal and run the following:

    export CLUSTER_NAME=cluster1
    export NAMESPACE=capi-clusters
    export CONTROL_PLANE_MACHINE_COUNT=1
    export WORKER_MACHINE_COUNT=1
    export KUBERNETES_VERSION=v1.30.4
    
    curl -s https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/test/e2e/data/cluster-templates/docker-kubeadm.yaml | envsubst > cluster1.yaml
  2. View cluster1.yaml to ensure there are no tokens. You can make any changes you want as well.

    The Cluster API quickstart guide contains more detail. Read the steps related to this section here.

  3. Create the cluster using kubectl

    kubectl create -f cluster1.yaml
  4. Deploy CNI

    Once the cluster is created, a CNI is required for nodes to become Ready. You can refer to the Cluster API documentation for an example CNI installation here.

Before creating a vSphere+RKE2 workload cluster, it is required to have a VM template with the necessary RKE2 binaries and dependencies. The template should already include RKE2 binaries if operating in an air-gapped environment, following the tarball method. You can find additional configuration details in the CAPRKE2 repository.

To generate the YAML for the cluster, do the following:

export CLUSTER_NAME=cluster1
export NAMESPACE=capi-clusters
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
export VSPHERE_USERNAME="<username>"
export VSPHERE_PASSWORD="<password>"
export VSPHERE_SERVER="10.0.0.1"
export VSPHERE_DATACENTER="SDDC-Datacenter"
export VSPHERE_DATASTORE="DefaultDatastore"
export VSPHERE_NETWORK="VM Network"
export VSPHERE_RESOURCE_POOL="*/Resources"
export VSPHERE_FOLDER="vm"
export VSPHERE_TEMPLATE="ubuntu-1804-kube-v1.17.3"
export CONTROL_PLANE_ENDPOINT_IP="192.168.9.230"
export VSPHERE_TLS_THUMBPRINT="..."
export EXP_CLUSTER_RESOURCE_SET="true"
export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
export CPI_IMAGE_K8S_VERSION="v1.30.0"
export KUBERNETES_VERSION=v1.30.0
  1. Open a terminal and run the following:

    curl -s https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/test/e2e/data/cluster-templates/vsphere-rke2.yaml | envsubst > cluster1.yaml
  2. View cluster1.yaml and examine the resulting YAML file. You can make any changes you want as well.

    The Cluster API quickstart guide contains more detail. Read the steps related to this section here.

  3. Create the cluster using kubectl

    kubectl apply -f cluster1.yaml

Before creating a vSphere+kubeadm workload cluster, it is required to have a VM template with the necessary kubeadm binaries and dependencies. The template should already include kubeadm, kubelet, and kubectl if operating in an air-gapped environment, following the image-builder project. You can find additional configuration details in the CAPV repository.

A list of published machine images (OVAs) is available here.

To generate the YAML for the cluster, do the following:

export CLUSTER_NAME=cluster1
export NAMESPACE=capi-clusters
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
export VSPHERE_USERNAME="<username>"
export VSPHERE_PASSWORD="<password>"
export VSPHERE_SERVER="10.0.0.1"
export VSPHERE_DATACENTER="SDDC-Datacenter"
export VSPHERE_DATASTORE="DefaultDatastore"
export VSPHERE_NETWORK="VM Network"
export VSPHERE_RESOURCE_POOL="*/Resources"
export VSPHERE_FOLDER="vm"
export VSPHERE_TEMPLATE="ubuntu-1804-kube-vxxx"
export CONTROL_PLANE_ENDPOINT_IP="192.168.9.230"
export VSPHERE_TLS_THUMBPRINT="..."
export EXP_CLUSTER_RESOURCE_SET="true"
export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
export CPI_IMAGE_K8S_VERSION="v1.30.0"
export KUBERNETES_VERSION=v1.30.0
  1. Open a terminal and run the following:

    curl -s https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/test/e2e/data/cluster-templates/vsphere-kubeadm.yaml | envsubst > cluster1.yaml
  2. View cluster1.yaml and examine the resulting YAML file. You can make any changes you want as well.

    The Cluster API quickstart guide contains more detail. Read the steps related to this section here.

  3. Create the cluster using kubectl

    kubectl apply -f cluster1.yaml

To generate the YAML for the cluster, do the following:

export CLUSTERCLASS_NAME=clusterclass1
export CLUSTER_NAME=cluster1
export NAMESPACE=capi-clusters
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
export KUBERNETES_VERSION=v1.30.4
export AZURE_SUBSCRIPTION_ID="xxx"
export AZURE_CLIENT_ID="xxx"
export AZURE_TENANT_ID="xxx"
  1. Open a terminal and run the following:

    curl -s https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/test/e2e/data/cluster-templates/azure-aks-topology.yaml | envsubst > cluster1.yaml
  2. View cluster1.yaml and examine the resulting YAML file. You can make any changes you want as well.

    The Cluster API quickstart guide contains more detail. Read the steps related to this section here.

  3. Create the cluster using kubectl

    kubectl apply -f cluster1.yaml

To generate the YAML for the cluster, do the following:

export CLUSTER_NAME=cluster1
export NAMESPACE=capi-clusters
export WORKER_MACHINE_COUNT=1
export KUBERNETES_VERSION=v1.30.4
  1. Open a terminal and run the following:

    curl -s https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/test/e2e/data/cluster-templates/aws-eks-mmp.yaml | envsubst > cluster1.yaml
  2. View cluster1.yaml and examine the resulting YAML file. You can make any changes you want as well.

    The Cluster API quickstart guide contains more detail. Read the steps related to this section here.

  3. Create the cluster using kubectl

    kubectl apply -f cluster1.yaml

To generate the YAML for the cluster, do the following:

export CLUSTER_NAME=cluster1
export NAMESPACE=capi-clusters
export GCP_PROJECT=cluster-api-gcp-project
export GCP_REGION=us-east4
export GCP_NETWORK_NAME=default
export WORKER_MACHINE_COUNT=1
  1. Open a terminal and run the following:

    curl -s https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/test/e2e/data/cluster-templates/gcp-gke.yaml | envsubst > cluster1.yaml
  2. View cluster1.yaml and examine the resulting YAML file. You can make any changes you want as well.

    The Cluster API quickstart guide contains more detail. Read the steps related to this section here.

  3. Create the cluster using kubectl

    kubectl apply -f cluster1.yaml

After your cluster is provisioned, you can check that the workload cluster is functional using kubectl:

kubectl describe cluster cluster1 -n capi-clusters

Remember that clusters are namespaced resources. These examples provision clusters in the capi-clusters namespace, but you will need to provide yours if using a different one.
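
For a more detailed view of provisioning progress, you can also inspect the CAPI machine objects, or use clusterctl if it is installed. A sketch using the capi-clusters namespace from the examples above:

# Machines should eventually reach the Running phase.
kubectl get machines -n capi-clusters

# Optional: an at-a-glance condition tree for the cluster and its resources.
clusterctl describe cluster cluster1 -n capi-clusters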

Mark Namespace or Cluster for Auto-Import

To automatically import a CAPI cluster into Rancher Manager, there are two options:

  1. Label a namespace so all clusters contained in it are imported.

  2. Label an individual cluster definition so that it’s imported.

Labeling a namespace:

kubectl label namespace capi-clusters cluster-api.cattle.io/rancher-auto-import=true

Labeling an individual cluster definition:

kubectl label cluster.cluster.x-k8s.io -n capi-clusters cluster1 cluster-api.cattle.io/rancher-auto-import=true
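
After labeling, Rancher Turtles should detect the cluster and create the corresponding Rancher cluster object for it. One way to confirm the import, assuming the provisioning.cattle.io API is served by the Rancher Manager cluster:

# The imported cluster should appear among Rancher's provisioning clusters.
kubectl get clusters.provisioning.cattle.io -A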