ClusterClass
In this section we cover using ClusterClass with Rancher Turtles.
Setup
- Azure
- AWS
- Docker
- vSphere
To prepare the management Cluster, we are going to install the Cluster API Provider Azure, and create a ServicePrincipal identity to provision a new Cluster on Azure.
Before we start, a ServicePrincipal needs to be created, with at least Contributor access to an Azure subscription.
Refer to the CAPZ documentation for more details.
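For reference, a minimal sketch of creating such a ServicePrincipal with the Azure CLI is shown below; the application name is an arbitrary example and the subscription ID is a placeholder to replace with your own.

```bash
# Example only: create a Service Principal with Contributor access scoped to one subscription.
# "capz-provisioner" is an arbitrary name; <AZURE_SUBSCRIPTION_ID> is a placeholder.
az ad sp create-for-rbac --name "capz-provisioner" \
  --role Contributor \
  --scopes "/subscriptions/<AZURE_SUBSCRIPTION_ID>"

# The appId, password and tenant values in the output map to the clientID, client secret
# and tenantID used by the AzureClusterIdentity later in this section.
```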
- Provider installation

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: capz-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: azure
  namespace: capz-system
spec:
  type: infrastructure
  name: azure
```
- Identity setup

A Secret containing the AAD Service Principal password needs to be created first.

```bash
# Settings needed for AzureClusterIdentity used by the AzureCluster
export AZURE_CLUSTER_IDENTITY_SECRET_NAME="cluster-identity-secret"
export AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE="default"
export AZURE_CLIENT_SECRET="<Password>"

# Create a secret to include the password of the Service Principal identity created in Azure
# This secret will be referenced by the AzureClusterIdentity used by the AzureCluster
kubectl create secret generic "${AZURE_CLUSTER_IDENTITY_SECRET_NAME}" --from-literal=clientSecret="${AZURE_CLIENT_SECRET}" --namespace "${AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE}"
```
The AzureClusterIdentity can now be created to use the Service Principal identity.
Note that the AzureClusterIdentity is a namespaced resource and it needs to be created in the same namespace as the Cluster.
For more information on best practices when using Azure identities, please refer to the official documentation.

Note that some variables are left to the user to substitute.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureClusterIdentity
metadata:
  labels:
    clusterctl.cluster.x-k8s.io/move-hierarchy: "true"
  name: cluster-identity
spec:
  allowedNamespaces: {}
  clientID: <AZURE_APP_ID>
  clientSecret:
    name: <AZURE_CLUSTER_IDENTITY_SECRET_NAME>
    namespace: <AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE>
  tenantID: <AZURE_TENANT_ID>
  type: ServicePrincipal
```
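Before provisioning a Cluster, it can be worth confirming that the provider and the identity are in place. A minimal check, assuming the resources were created as above (adjust the identity namespace if you created it elsewhere):

```bash
# The CAPZ controller should be running in the provider namespace.
kubectl get pods -n capz-system

# The AzureClusterIdentity must live in the same namespace as the Cluster ("default" in this example).
kubectl get azureclusteridentity cluster-identity -n default
```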
To prepare the management Cluster, we are going to install the Cluster API Provider AWS, and create a secret with the required credentials to provision a new Cluster on AWS.
- Credentials setup

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: capa-system
---
apiVersion: v1
kind: Secret
metadata:
  name: aws
  namespace: capa-system
type: Opaque
stringData:
  AWS_B64ENCODED_CREDENTIALS: xxx
```
The content of AWS_B64ENCODED_CREDENTIALS is a base64-encoded string containing the AWS credentials. You will need to use an AWS IAM user with administrative permissions so you can create the cloud resources to host the cluster. We recommend you use clusterawsadm to encode credentials for use with Cluster API Provider AWS; refer to the CAPA book for more information. To authenticate with AWS and generate the encoded string with clusterawsadm, export the following variables, which are linked to your IAM user (see the sketch below):

- AWS_REGION
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
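A minimal sketch of generating the encoded credentials with clusterawsadm; the values shown are placeholders for your own IAM user and region.

```bash
# Placeholders: substitute the credentials of your IAM user and the region you intend to use.
export AWS_REGION="<AWS_REGION>"
export AWS_ACCESS_KEY_ID="<AWS_ACCESS_KEY_ID>"
export AWS_SECRET_ACCESS_KEY="<AWS_SECRET_ACCESS_KEY>"

# Prints the base64-encoded string to use as AWS_B64ENCODED_CREDENTIALS in the Secret above.
clusterawsadm bootstrap credentials encode-as-profile
```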
- Provider installation

```yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: aws
  namespace: capa-system
spec:
  type: infrastructure
  configSecret:
    name: aws
```
- Bootstrap/Control Plane provider for RKE2 (installed by default) or Bootstrap/Control Plane provider for Kubeadm. Example of a Kubeadm installation:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: capi-kubeadm-bootstrap-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: kubeadm-bootstrap
  namespace: capi-kubeadm-bootstrap-system
spec:
  name: kubeadm
  type: bootstrap
---
apiVersion: v1
kind: Namespace
metadata:
  name: capi-kubeadm-control-plane-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: kubeadm-control-plane
  namespace: capi-kubeadm-control-plane-system
spec:
  name: kubeadm
  type: controlPlane
```
To prepare the management Cluster, we are going to install the Docker Cluster API Provider.
- Infrastructure Docker provider installation

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: capd-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: docker
  namespace: capd-system
spec:
  type: infrastructure
```
- Bootstrap/Control Plane provider for RKE2 (installed by default) or Bootstrap/Control Plane provider for Kubeadm. Example of a Kubeadm installation:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: capi-kubeadm-bootstrap-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: kubeadm-bootstrap
  namespace: capi-kubeadm-bootstrap-system
spec:
  name: kubeadm
  type: bootstrap
---
apiVersion: v1
kind: Namespace
metadata:
  name: capi-kubeadm-control-plane-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: kubeadm-control-plane
  namespace: capi-kubeadm-control-plane-system
spec:
  name: kubeadm
  type: controlPlane
```
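After applying the manifests, you can check that the providers were picked up by Rancher Turtles; a quick sketch (the exact output columns depend on the Turtles version):

```bash
# List all CAPIProvider resources across namespaces.
kubectl get capiproviders -A

# The provider controllers run as Deployments in their own namespaces, e.g. the Docker provider:
kubectl get deployments -n capd-system
```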
To prepare the management Cluster, we are going to install the Cluster API Provider vSphere. The global credentials are left blank, as we are going to use a VSphereClusterIdentity instead.
- Provider installation

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: capv-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: vsphere
  namespace: capv-system
spec:
  type: infrastructure
  variables:
    VSPHERE_USERNAME: ""
    VSPHERE_PASSWORD: ""
```
- Bootstrap/Control Plane provider for RKE2 (installed by default) or Bootstrap/Control Plane provider for Kubeadm. Example of a Kubeadm installation:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: capi-kubeadm-bootstrap-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: kubeadm-bootstrap
  namespace: capi-kubeadm-bootstrap-system
spec:
  name: kubeadm
  type: bootstrap
---
apiVersion: v1
kind: Namespace
metadata:
  name: capi-kubeadm-control-plane-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: kubeadm-control-plane
  namespace: capi-kubeadm-control-plane-system
spec:
  name: kubeadm
  type: controlPlane
```
- Identity setup

In this example we are going to use a VSphereClusterIdentity to provision vSphere Clusters.
A Secret containing the credentials needs to be created in the namespace where the vSphere provider is installed.
The VSphereClusterIdentity can reference this Secret to allow Cluster provisioning. For this example we are allowing usage of the identity across all namespaces, so that it can be easily reused.
You can refer to the official documentation to learn more about identity management.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cluster-identity
  namespace: capv-system
type: Opaque
stringData:
  username: xxx
  password: xxx
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereClusterIdentity
metadata:
  name: cluster-identity
spec:
  secretName: cluster-identity
  allowedNamespaces:
    selector:
      matchLabels: {}
```
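As an alternative to keeping credentials in a manifest, the Secret can also be created directly with kubectl; a sketch using the same names as above (the VSphereClusterIdentity itself is still applied from the manifest):

```bash
# Create the credentials Secret in the provider namespace without writing it to a YAML file.
kubectl create secret generic cluster-identity \
  --namespace capv-system \
  --from-literal=username='<VSPHERE_USER>' \
  --from-literal=password='<VSPHERE_PASSWORD>'

# VSphereClusterIdentity is cluster-scoped, so no namespace is needed to check it.
kubectl get vsphereclusteridentity cluster-identity
```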
Create a Cluster from a ClusterClass
Examples are provided for the following combinations:

- Azure RKE2
- Azure AKS
- AWS Kubeadm
- Docker Kubeadm
- Docker RKE2
- vSphere Kubeadm
- vSphere RKE2
- An Azure ClusterClass can be found among the Turtles examples.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/clusterclasses/azure/rke2/clusterclass-rke2-example.yaml
```
- Additionally, the Azure Cloud Provider will need to be installed on each downstream Cluster, for the nodes to be initialized correctly.

For this example we are also going to install Calico as the default CNI. We can do this automatically at Cluster creation using the Cluster API Add-on Provider Fleet. This Add-on provider is installed by default with Rancher Turtles. Two HelmApps need to be created first, to be applied on the new Cluster via label selectors.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/ccm/azure/helm-chart.yaml
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/cni/calico/helm-chart.yaml
```
- Create the Azure Cluster from the example ClusterClass

Note that some variables are left to the user to substitute.

Also beware that the internal-first registrationMethod variable is used as a workaround for correct provisioning. This variable is immutable, however, and will lead to issues when scaling or rolling out control plane nodes. A patch will support this case in a future release of CAPZ, but the Cluster will need to be reprovisioned to change the registrationMethod.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cluster-api.cattle.io/rancher-auto-import: "true"
    cloud-provider: azure
    cni: calico
  name: azure-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
  topology:
    class: azure-rke2-example
    controlPlane:
      replicas: 3
    variables:
      - name: subscriptionID
        value: <AZURE_SUBSCRIPTION_ID>
      - name: location
        value: <AZURE_LOCATION>
      - name: resourceGroup
        value: <AZURE_RESOURCE_GROUP>
      - name: azureClusterIdentityName
        value: cluster-identity
      - name: registrationMethod
        value: internal-first
    version: v1.31.1+rke2r1
    workers:
      machineDeployments:
        - class: rke2-default-worker
          name: md-0
          replicas: 3
```
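Once the Cluster manifest is applied, provisioning can be followed from the management Cluster. A minimal sketch, assuming clusterctl is available locally:

```bash
# Watch the Cluster object and its machines as Azure resources are provisioned.
kubectl get cluster azure-quickstart -w
kubectl get machines -l cluster.x-k8s.io/cluster-name=azure-quickstart

# Optional: a condition-by-condition view of the provisioning status.
clusterctl describe cluster azure-quickstart
```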
- An Azure AKS ClusterClass can be found among the Turtles examples.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/clusterclasses/azure/aks/clusterclass-aks-example.yaml
```
- Create the Azure AKS Cluster from the example ClusterClass

Note that some variables are left to the user to substitute.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cluster-api.cattle.io/rancher-auto-import: "true"
  name: azure-aks-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
  topology:
    class: azure-aks-example
    variables:
      - name: subscriptionID
        value: <AZURE_SUBSCRIPTION_ID>
      - name: location
        value: <AZURE_LOCATION>
      - name: resourceGroup
        value: <AZURE_RESOURCE_GROUP>
      - name: azureClusterIdentityName
        value: cluster-identity
    version: v1.31.1
    workers:
      machinePools:
        - class: default-system
          name: system-1
          replicas: 1
        - class: default-worker
          name: worker-1
          replicas: 1
```
- An AWS ClusterClass can be found among the Turtles examples.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/clusterclasses/aws/kubeadm/clusterclass-kubeadm-example.yaml
```
- For this example we are also going to install Calico as the default CNI.
- The Cloud Controller Manager AWS will need to be installed on each downstream Cluster for the nodes to be functional.
- Additionally, we will also enable the AWS EBS CSI Driver.

We can do this automatically at Cluster creation using the Cluster API Add-on Provider Fleet. This Add-on provider is installed by default with Rancher Turtles. Two HelmApps need to be created first, to be applied on the new Cluster via label selectors. This will take care of deploying Calico and the EBS CSI Driver in the workload cluster.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/csi/aws/helm-chart.yaml
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/cni/aws/calico/helm-chart.yaml
```

We will need to create a Fleet Bundle to deploy the AWS Cloud Controller Manager, as the upstream Helm chart has limitations that restrict us from applying the desired configuration via CAPI Add-on Provider Fleet. We expect this to be a temporary solution until the official chart is capable of supporting our requirements.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/ccm/aws/fleet-bundle.yaml
```
- Create the AWS Cluster from the example ClusterClass

Note that some variables are left to the user to substitute.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cluster-api.cattle.io/rancher-auto-import: "true"
    cni: calico
    cloud-provider: aws
    csi: aws-ebs-csi-driver
  name: aws-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
  topology:
    class: aws-kubeadm-example
    controlPlane:
      replicas: 1
    variables:
      - name: region
        value: eu-west-2
      - name: sshKeyName
        value: <AWS_SSH_KEY_NAME>
      - name: controlPlaneMachineType
        value: <AWS_CONTROL_PLANE_MACHINE_TYPE>
      - name: workerMachineType
        value: <AWS_NODE_MACHINE_TYPE>
    version: v1.31.0
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          replicas: 1
```
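After the control plane is up, the workload cluster can be inspected directly. A sketch using clusterctl to fetch the kubeconfig (the name matches the Cluster manifest above):

```bash
# Retrieve the kubeconfig generated by Cluster API for the new cluster.
clusterctl get kubeconfig aws-quickstart > aws-quickstart.kubeconfig

# Nodes become Ready once the AWS Cloud Controller Manager and Calico are deployed by Fleet.
kubectl --kubeconfig aws-quickstart.kubeconfig get nodes -o wide
```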
- A Docker Kubeadm ClusterClass can be found among the Turtles examples.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/clusterclasses/docker/kubeadm/clusterclass-docker-kubeadm.yaml
```
- For this example we are also going to install Calico as the default CNI. We can do this automatically at Cluster creation using the Cluster API Add-on Provider Fleet. This Add-on provider is installed by default with Rancher Turtles. A HelmApp needs to be created first, to be applied on the new Cluster via label selectors.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/cni/calico/helm-chart.yaml
```
- Create the Docker Kubeadm Cluster from the example ClusterClass

Note that some variables are left to the user to substitute.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: docker-kubeadm-quickstart
  labels:
    cni: calico
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
        - 10.96.0.0/24
  topology:
    class: docker-kubeadm-example
    controlPlane:
      replicas: 3
    version: v1.31.6
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          replicas: 3
```
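Since CAPD runs the cluster as containers on the management host, progress can be followed through the CAPI objects and, if you are curious, the containers themselves. A small sketch (container names are assumed to be prefixed with the cluster name):

```bash
# Follow the control plane and worker machines for the Docker cluster.
kubectl get kubeadmcontrolplane,machinedeployments,machines

# The Docker provider creates one container per machine plus a load balancer container.
docker ps --filter "name=docker-kubeadm-quickstart"
```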
- A Docker RKE2 ClusterClass can be found among the Turtles examples.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/clusterclasses/docker/rke2/clusterclass-docker-rke2.yaml
```
- For this example we are also going to install Calico as the default CNI. We can do this automatically at Cluster creation using the Cluster API Add-on Provider Fleet. This Add-on provider is installed by default with Rancher Turtles. A HelmApp needs to be created first, to be applied on the new Cluster via label selectors.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/cni/calico/helm-chart.yaml
```
- Create the LoadBalancer ConfigMap for the Docker RKE2 Cluster

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: docker-rke2-lb-config
  annotations:
    "helm.sh/resource-policy": keep
data:
  value: |-
    # generated by kind
    global
      log /dev/log local0
      log /dev/log local1 notice
      daemon
      # limit memory usage to approximately 18 MB
      # (see https://github.com/kubernetes-sigs/kind/pull/3115)
      maxconn 100000

    resolvers docker
      nameserver dns 127.0.0.11:53

    defaults
      log global
      mode tcp
      option dontlognull
      # TODO: tune these
      timeout connect 5000
      timeout client 50000
      timeout server 50000
      # allow to boot despite dns don't resolve backends
      default-server init-addr none

    frontend stats
      mode http
      bind *:8404
      stats enable
      stats uri /stats
      stats refresh 1s
      stats admin if TRUE

    frontend control-plane
      bind *:{{ .FrontendControlPlanePort }}
      {{ if .IPv6 -}}
      bind :::{{ .FrontendControlPlanePort }};
      {{- end }}
      default_backend kube-apiservers

    backend kube-apiservers
      option httpchk GET /healthz
      {{range $server, $backend := .BackendServers }}
      server {{ $server }} {{ JoinHostPort $backend.Address $.BackendControlPlanePort }} check check-ssl verify none resolvers docker resolve-prefer {{ if $.IPv6 -}} ipv6 {{- else -}} ipv4 {{- end }}
      {{- end}}

    frontend rke2-join
      bind *:9345
      {{ if .IPv6 -}}
      bind :::9345;
      {{- end }}
      default_backend rke2-servers

    backend rke2-servers
      option httpchk GET /v1-rke2/readyz
      http-check expect status 403
      {{range $server, $backend := .BackendServers }}
      server {{ $server }} {{ $backend.Address }}:9345 check check-ssl verify none
      {{- end}}
```
- Create the Docker RKE2 Cluster from the example ClusterClass

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: docker-rke2-example
  labels:
    cni: calico
  annotations:
    cluster-api.cattle.io/upstream-system-agent: "true"
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/24
    serviceDomain: cluster.local
  topology:
    class: docker-rke2-example
    controlPlane:
      replicas: 3
    variables:
      - name: rke2CNI
        value: none
      - name: dockerImage
        value: kindest/node:v1.31.6
    version: v1.31.6+rke2r1
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          replicas: 3
```
- A vSphere Kubeadm ClusterClass can be found among the Turtles examples.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/clusterclasses/vsphere/kubeadm/clusterclass-kubeadm-example.yaml
```
- Additionally, the vSphere Cloud Provider will need to be installed on each downstream Cluster, for the nodes to be initialized correctly. The Container Storage Interface (CSI) driver for vSphere will be used as the storage solution. Finally, for this example we are going to install Calico as the default CNI.

We can install all applications automatically at Cluster creation using the Cluster API Add-on Provider Fleet. This Add-on provider is installed by default with Rancher Turtles. Two HelmApps need to be created first, to be applied on the new Cluster via label selectors.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/ccm/vsphere/helm-chart.yaml
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/cni/calico/helm-chart.yaml
```

Since the vSphere CSI driver is not packaged in Helm, we are going to include its entire manifest in a Fleet Bundle that will be applied to the downstream Cluster.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/csi/vsphere/bundle.yaml
```
- Cluster configuration

The vSphere Cloud Provider and the vSphere CSI controller need additional configuration to be applied on the downstream Cluster. Similarly to the steps above, we can create two additional Fleet Bundles that will be applied to the downstream Cluster.
Please note that these Bundles are configured to target the downstream Cluster by name: vsphere-kubeadm-quickstart. If you use a different name for your Cluster, change the Bundle targets accordingly.

```yaml
kind: Bundle
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: vsphere-csi-config
spec:
  resources:
    - content: |-
        apiVersion: v1
        kind: Secret
        type: Opaque
        metadata:
          name: vsphere-config-secret
          namespace: vmware-system-csi
        stringData:
          csi-vsphere.conf: |+
            [Global]
            thumbprint = "<VSPHERE_THUMBPRINT>"
            [VirtualCenter "<VSPHERE_SERVER>"]
            user = "<VSPHERE_USER>"
            password = "<VSPHERE_PASSWORD>"
            datacenters = "<VSPHERE_DATACENTER>"
            [Network]
            public-network = "<VSPHERE_NETWORK>"
            [Labels]
            zone = ""
            region = ""
  targets:
    - clusterSelector:
        matchLabels:
          csi: vsphere
          cluster.x-k8s.io/cluster-name: 'vsphere-kubeadm-quickstart'
---
kind: Bundle
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: vsphere-cloud-credentials
spec:
  resources:
    - content: |-
        apiVersion: v1
        kind: Secret
        type: Opaque
        metadata:
          name: vsphere-cloud-secret
          namespace: kube-system
        stringData:
          <VSPHERE_SERVER>.password: "<VSPHERE_PASSWORD>"
          <VSPHERE_SERVER>.username: "<VSPHERE_USER>"
  targets:
    - clusterSelector:
        matchLabels:
          cloud-provider: vsphere
          cluster.x-k8s.io/cluster-name: 'vsphere-kubeadm-quickstart'
```
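Fleet reports the rollout state of each Bundle, so you can confirm that the configuration Bundles above (and the CSI Bundle) were picked up. A quick sketch; adjust the namespace to wherever you created the Bundles:

```bash
# List Bundles and their rollout status across namespaces.
kubectl get bundles -A

# Inspect a specific Bundle if it does not reach the Ready state.
kubectl describe bundle vsphere-csi-config -n <BUNDLE_NAMESPACE>
```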
- Create the vSphere Cluster from the example ClusterClass

Note that for this example we are using kube-vip as a Control Plane load balancer. The KUBE_VIP_INTERFACE will be used to bind the CONTROL_PLANE_IP in ARP mode. Depending on your operating system and network device configuration, you need to configure this value accordingly, for example to eth0.
The kube-vip static manifest is embedded in the ClusterClass definition. For more information on how to generate a static kube-vip manifest for your own ClusterClasses, please consult the official documentation.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cni: calico
    cloud-provider: vsphere
    csi: vsphere
    cluster-api.cattle.io/rancher-auto-import: "true"
  name: 'vsphere-kubeadm-quickstart'
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
  topology:
    class: vsphere-kubeadm-example
    version: v1.31.4
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
        - class: vsphere-kubeadm-example-worker
          name: md-0
          replicas: 1
    variables:
      - name: vSphereClusterIdentityName
        value: cluster-identity
      - name: vSphereTLSThumbprint
        value: <VSPHERE_THUMBPRINT>
      - name: vSphereDataCenter
        value: <VSPHERE_DATACENTER>
      - name: vSphereDataStore
        value: <VSPHERE_DATASTORE>
      - name: vSphereFolder
        value: <VSPHERE_FOLDER>
      - name: vSphereNetwork
        value: <VSPHERE_NETWORK>
      - name: vSphereResourcePool
        value: <VSPHERE_RESOURCE_POOL>
      - name: vSphereServer
        value: <VSPHERE_SERVER>
      - name: vSphereTemplate
        value: <VSPHERE_TEMPLATE>
      - name: controlPlaneIpAddr
        value: <CONTROL_PLANE_IP>
      - name: controlPlanePort
        value: 6443
      - name: sshKey
        value: <SSH_KEY>
      - name: kubeVIPInterface
        value: <KUBE_VIP_INTERFACE>
```
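Once the Cluster is provisioned, you can verify on the downstream Cluster that the add-ons deployed by Fleet are running. A sketch, assuming clusterctl is available; the namespaces follow the Bundles and charts referenced above:

```bash
# Fetch the kubeconfig of the new vSphere cluster.
clusterctl get kubeconfig vsphere-kubeadm-quickstart > vsphere-kubeadm-quickstart.kubeconfig

# The vSphere Cloud Provider typically runs in kube-system; the CSI controller in vmware-system-csi.
kubectl --kubeconfig vsphere-kubeadm-quickstart.kubeconfig get pods -n kube-system
kubectl --kubeconfig vsphere-kubeadm-quickstart.kubeconfig get pods -n vmware-system-csi
```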
- A vSphere RKE2 ClusterClass can be found among the Turtles examples.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/clusterclasses/vsphere/rke2/clusterclass-rke2-example.yaml
```
- Additionally, the vSphere Cloud Provider will need to be installed on each downstream Cluster, for the nodes to be initialized correctly. The Container Storage Interface (CSI) driver for vSphere will be used as the storage solution. Finally, for this example we are going to install Calico as the default CNI.

We can install all applications automatically at Cluster creation using the Cluster API Add-on Provider Fleet. This Add-on provider is installed by default with Rancher Turtles. Two HelmApps need to be created first, to be applied on the new Cluster via label selectors.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/ccm/vsphere/helm-chart.yaml
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/cni/calico/helm-chart.yaml
```

Since the vSphere CSI driver is not packaged in Helm, we are going to include its entire manifest in a Fleet Bundle that will be applied to the downstream Cluster.

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/turtles/refs/heads/main/examples/applications/csi/vsphere/bundle.yaml
```
- Cluster configuration

The vSphere Cloud Provider and the vSphere CSI controller need additional configuration to be applied on the downstream Cluster. Similarly to the steps above, we can create two additional Fleet Bundles that will be applied to the downstream Cluster.
Please note that these Bundles are configured to target the downstream Cluster by name: vsphere-rke2-quickstart. If you use a different name for your Cluster, change the Bundle targets accordingly.

```yaml
kind: Bundle
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: vsphere-csi-config
spec:
  resources:
    - content: |-
        apiVersion: v1
        kind: Secret
        type: Opaque
        metadata:
          name: vsphere-config-secret
          namespace: vmware-system-csi
        stringData:
          csi-vsphere.conf: |+
            [Global]
            thumbprint = "<VSPHERE_THUMBPRINT>"
            [VirtualCenter "<VSPHERE_SERVER>"]
            user = "<VSPHERE_USER>"
            password = "<VSPHERE_PASSWORD>"
            datacenters = "<VSPHERE_DATACENTER>"
            [Network]
            public-network = "<VSPHERE_NETWORK>"
            [Labels]
            zone = ""
            region = ""
  targets:
    - clusterSelector:
        matchLabels:
          csi: vsphere
          cluster.x-k8s.io/cluster-name: 'vsphere-rke2-quickstart'
---
kind: Bundle
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: vsphere-cloud-credentials
spec:
  resources:
    - content: |-
        apiVersion: v1
        kind: Secret
        type: Opaque
        metadata:
          name: vsphere-cloud-secret
          namespace: kube-system
        stringData:
          <VSPHERE_SERVER>.password: "<VSPHERE_PASSWORD>"
          <VSPHERE_SERVER>.username: "<VSPHERE_USER>"
  targets:
    - clusterSelector:
        matchLabels:
          cloud-provider: vsphere
          cluster.x-k8s.io/cluster-name: 'vsphere-rke2-quickstart'
```
- Create the vSphere Cluster from the example ClusterClass

Note that for this example we are using kube-vip as a Control Plane load balancer. The KUBE_VIP_INTERFACE will be used to bind the CONTROL_PLANE_IP in ARP mode. Depending on your operating system and network device configuration, you need to configure this value accordingly, for example to eth0.
The kube-vip static manifest is embedded in the ClusterClass definition. For more information on how to generate a static kube-vip manifest for your own ClusterClasses, please consult the official documentation.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cni: calico
    cloud-provider: vsphere
    csi: vsphere
    cluster-api.cattle.io/rancher-auto-import: "true"
  name: 'vsphere-rke2-quickstart'
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
  topology:
    class: vsphere-rke2-example
    version: v1.31.4+rke2r1
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
        - class: vsphere-rke2-example-worker
          name: md-0
          replicas: 1
    variables:
      - name: vSphereClusterIdentityName
        value: cluster-identity
      - name: vSphereTLSThumbprint
        value: <VSPHERE_THUMBPRINT>
      - name: vSphereDataCenter
        value: <VSPHERE_DATACENTER>
      - name: vSphereDataStore
        value: <VSPHERE_DATASTORE>
      - name: vSphereFolder
        value: <VSPHERE_FOLDER>
      - name: vSphereNetwork
        value: <VSPHERE_NETWORK>
      - name: vSphereResourcePool
        value: <VSPHERE_RESOURCE_POOL>
      - name: vSphereServer
        value: <VSPHERE_SERVER>
      - name: vSphereTemplate
        value: <VSPHERE_TEMPLATE>
      - name: controlPlaneIpAddr
        value: <CONTROL_PLANE_IP>
      - name: controlPlanePort
        value: 6443
      - name: sshKey
        value: <SSH_KEY>
      - name: kubeVIPInterface
        value: <KUBE_VIP_INTERFACE>
```