Air-gapped Environment

Rancher Turtles provides support for an air-gapped environment out-of-the-box by leveraging features of the Cluster API Operator, the required dependency for installing Rancher Turtles.

To provision and configure Cluster API providers, Turtles uses the CAPIProvider resource, which allows managing Cluster API Operator manifests in a declarative way. Every field provided by the upstream CAPI Operator resource for the desired spec.type is also available in the spec of the CAPIProvider resource.

A new installation of Turtles only includes the core CAPI provider and its CRDs. The default installation mechanism for this provider does not require fetching the manifest from a remote source, so it is fully functional in an air-gapped environment: it retrieves the YAML definition from a local ConfigMap embedded in the application chart.

The version of core CAPI shipped with Turtles is actively tested and validated, and the chart is pre-configured to select this version by default via the CAPI Operator. However, if you have specific requirements about versioning or which repository to use, you can still customize the behavior of the CAPI Operator with your own fetchConfig.
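For illustration, here is a minimal sketch of a core provider pinned to a specific version and fetched from an OCI artifact in your own registry, following the same fetchConfig.oci pattern used later in this page; the registry path, repository name, and version below are placeholders, not validated values:

apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: cluster-api
  namespace: capi-system
spec:
  type: core
  name: cluster-api
  version: <VERSION>
  fetchConfig:
    oci: <YOUR_PRIVATE_REGISTRY>/cluster-api-core-components:<VERSION>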

Community vs Prime Users

Rancher Turtles supports two types of air-gapped configurations depending on your deployment:

Prime Users: Rancher Prime users benefit from pre-mirrored CAPI provider OCI artifacts available in the Rancher Prime Registry (registry.rancher.com). These providers are automatically validated, tested, and maintained for your Turtles release. To see which providers and versions are available for your Turtles version, refer to the config-prime.yaml configuration file.

Community Users: Community users have access to the core CAPI provider but do not have pre-mirrored provider OCI artifacts. They are free to manually mirror any CAPI provider or use fetched manifests according to their requirements. The config-community.yaml configuration shows the minimal community setup without provider version pinning.

This section provides guidance on how to use CAPIProvider and the CAPI Operator functionality in different air-gapped scenarios.

CAPI Provider installation with OCI artifact

As an administrator working in an air-gapped environment, you need to fetch CAPI Provider components from within your cluster since external internet access is restricted. This section demonstrates how to deploy CAPI providers using OCI artifacts.

Prime Users

As a Rancher Prime user, you can directly mirror pre-validated provider OCI artifacts from the Rancher Prime Registry to your private registry.

Set your private registry URL:

export REGISTRY=<YOUR_PRIVATE_REGISTRY>

Mirror the provider from Rancher Prime Registry to your private registry. Refer to config-prime.yaml for the exact provider versions available for your Turtles release.

For example, to use Azure provider:

# Set the version from config-prime.yaml
export PROVIDER_VERSION=<VERSION_FROM_CONFIG_PRIME>

# Pull from Rancher Prime Registry
oras pull registry.rancher.com/rancher/cluster-api-azure-controller-components:${PROVIDER_VERSION}

# Push to your private registry
oras push ${REGISTRY}/cluster-api-azure-controller-components:${PROVIDER_VERSION} infrastructure-components.yaml:application/vnd.test.file metadata.yaml:application/vnd.test.file

Create and apply the CAPIProvider resource pointing to your private registry:

capz-provider-oci.yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: azure
  namespace: capz-system
spec:
  type: infrastructure
  name: azure
  version: ${PROVIDER_VERSION}
  fetchConfig:
    oci: ${REGISTRY}/cluster-api-azure-controller-components:${PROVIDER_VERSION}
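Note that the ${...} references in the manifest are shell variables, so they must be substituted before the manifest reaches the cluster. One way to do this, assuming the envsubst tool (from GNU gettext) is available:

kubectl create namespace capz-system
envsubst < capz-provider-oci.yaml | kubectl apply -f -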

Community Users

As a community user, you need to manually build provider OCI artifacts and push them to your private registry. Community users do not have access to pre-mirrored artifacts from registry.rancher.com.

Set your private registry URL and provider version:

export REGISTRY=<YOUR_PRIVATE_REGISTRY>
export RELEASE_TAG=<CAPI_PROVIDER_VERSION>

You’ll need to build the provider manifests from source by following the upstream provider’s documentation. See the "CAPI Provider fetch and push OCI artifact" section below for detailed instructions on building from source and packaging as OCI artifacts.

Once you have built and pushed the OCI artifact to your private registry, create and apply the CAPIProvider resource pointing to your private registry:

capz-provider-oci.yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: azure
  namespace: capz-system
spec:
  type: infrastructure
  name: azure
  version: ${RELEASE_TAG}
  fetchConfig:
    oci: ${REGISTRY}:${RELEASE_TAG}
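As in the Prime flow, the ${...} placeholders are shell variables and need to be substituted before applying; a sketch, assuming envsubst is available:

kubectl create namespace capz-system
envsubst < capz-provider-oci.yaml | kubectl apply -f -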

How to mirror CAPI Provider OCI artifact

This section shows how to mirror CAPI Provider OCI artifacts to your private registry for use in an air-gapped environment.

Prime Users

As a Rancher Prime user, you can mirror pre-validated provider OCI artifacts from the Rancher Prime Registry. Check config-prime.yaml for available providers and their versions for your Turtles release.

Install the ORAS (OCI Registry As Storage) CLI tool to manage OCI artifacts. Follow the installation instructions at: https://oras.land/docs/installation

Set your private registry URL and provider version:

export REGISTRY=<YOUR_PRIVATE_REGISTRY>
export PROVIDER_VERSION=<VERSION_FROM_CONFIG_PRIME>

Create a working directory:

mkdir capi-oci-artifacts
cd capi-oci-artifacts

Pull the OCI artifact from Rancher Prime Registry. For example, for Azure provider:

oras pull registry.rancher.com/rancher/cluster-api-azure-controller-components:${PROVIDER_VERSION}

Publish the OCI artifact to your private registry:

oras push ${REGISTRY}/cluster-api-azure-controller-components:${PROVIDER_VERSION} infrastructure-components.yaml:application/vnd.test.file metadata.yaml:application/vnd.test.file
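Optionally verify that the artifact is now present in your private registry. ORAS can list the tags of a repository, assuming your registry exposes the standard tag listing API:

oras repo tags ${REGISTRY}/cluster-api-azure-controller-components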

Create and apply the CAPIProvider resource:

capz-provider-oci.yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: azure
  namespace: capz-system
spec:
  type: infrastructure
  name: azure
  version: ${PROVIDER_VERSION}
  fetchConfig:
    oci: ${REGISTRY}/cluster-api-azure-controller-components:${PROVIDER_VERSION}

Use kubectl to create the namespace and apply the configuration (substituting the ${...} shell variables first, for example with envsubst):

kubectl create namespace capz-system
kubectl apply -f capz-provider-oci.yaml
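Once applied, the operator should fetch the provider components from your private registry. A quick way to check progress, using the resource names from the manifest above:

kubectl get capiprovider azure -n capz-system
kubectl get pods -n capz-system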

Community Users

As a community user, you need to build CAPI Provider OCI artifacts from source, following the upstream provider documentation to produce the release manifests.

To build and package provider manifests:

  1. Follow the upstream provider’s documentation to clone the repository and build release artifacts (infrastructure-components.yaml and metadata.yaml). For example, for the Azure provider, refer to the Azure CAPI Provider repository.

  2. Install the ORAS (OCI Registry As Storage) CLI tool to manage OCI artifacts. Follow the installation instructions at: https://oras.land/docs/installation

  3. Set your private registry URL and provider version:

    export REGISTRY=<YOUR_PRIVATE_REGISTRY>
    export RELEASE_TAG=<CAPI_PROVIDER_VERSION>
  4. Once you have built the infrastructure-components.yaml and metadata.yaml files, publish the OCI artifact to your private registry:

    oras push ${REGISTRY}:${RELEASE_TAG} infrastructure-components.yaml:application/vnd.test.file metadata.yaml:application/vnd.test.file
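Optionally list the tags now present in the repository. This assumes <YOUR_PRIVATE_REGISTRY> already includes the repository path, as in the push command above:

oras repo tags ${REGISTRY}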

Create and apply the CAPIProvider resource:

capz-provider-oci.yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: azure
  namespace: capz-system
spec:
  type: infrastructure
  name: azure
  version: ${RELEASE_TAG}
  fetchConfig:
    oci: ${REGISTRY}:${RELEASE_TAG}

Use kubectl to create the namespace and apply the configuration (substituting the ${...} shell variables first, for example with envsubst):

kubectl create namespace capz-system
kubectl apply -f capz-provider-oci.yaml

CAPI Provider fetch and push OCI artifact

This section demonstrates how to build and publish OCI artifacts from source. This is primarily useful for Community users who want to build providers themselves or customize provider versions. Prime users typically don’t need this workflow as they can use pre-mirrored artifacts from the Rancher Prime Registry.

Using kubectl operator

You can build OCI artifacts for any CAPI Provider listed in the Rancher Turtles configuration: https://github.com/rancher/turtles/blob/main/internal/controllers/clusterctl/config-community.yaml

Install the cluster-api-operator plugin for kubectl.

Clone the official Azure CAPI Provider repository and navigate to the project directory:

git clone https://github.com/kubernetes-sigs/cluster-api-provider-azure/
cd cluster-api-provider-azure

Choose the specific version you want to deploy. Note that there is no latest version in the OCI scenario, so the version needs to be set at all times. You can either list all available tags:

git tag

or automatically select the latest release:

export RELEASE_TAG=`git describe --abbrev=0`

Set your private registry URL, replacing the placeholder with your actual registry:

export PROD_REGISTRY=<YOUR_PRIVATE_REGISTRY>

Build the release artifacts infrastructure-components.yaml and metadata.yaml:

make release

Go to the output directory containing the artifacts:

cd out

Create and publish an OCI artifact containing the Azure CAPI Provider manifests to your private registry:

kubectl operator publish -u ${PROD_REGISTRY}:${RELEASE_TAG} infrastructure-components.yaml metadata.yaml

Using Oras

You can build OCI artifacts for any CAPI Provider listed in the Rancher Turtles configuration: https://github.com/rancher/turtles/blob/main/internal/controllers/clusterctl/config-community.yaml

Clone the official Azure CAPI Provider repository and navigate to the project directory:

git clone https://github.com/kubernetes-sigs/cluster-api-provider-azure/
cd cluster-api-provider-azure

Choose the specific version you want to deploy. Note that there is no latest version in the OCI scenario, so the version needs to be set at all times. You can either list all available tags:

git tag

or automatically select the latest release:

export RELEASE_TAG=`git describe --abbrev=0`

Set your private registry URL, replacing the placeholder with your actual registry:

export PROD_REGISTRY=<YOUR_PRIVATE_REGISTRY>

Build the release artifacts infrastructure-components.yaml and metadata.yaml:

make release

Go to the output directory containing the artifacts:

cd out

Install the ORAS (OCI Registry As Storage) CLI tool to manage OCI artifacts. Follow the installation instructions at: https://oras.land/docs/installation

Create and publish an OCI artifact containing the Azure CAPI Provider manifests to your private registry:

oras push ${PROD_REGISTRY}:${RELEASE_TAG} infrastructure-components.yaml:application/vnd.test.file metadata.yaml:application/vnd.test.file
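Optionally confirm the artifact is retrievable before referencing it from a CAPIProvider; this assumes your workstation can reach the private registry:

oras manifest fetch ${PROD_REGISTRY}:${RELEASE_TAG}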

Create and apply an Azure CAPIProvider resource that instructs Rancher Turtles to fetch the Azure provider from your private OCI registry:

capz-provider-oci.yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: azure
  namespace: capz-system
spec:
  type: infrastructure
  name: azure
  version: ${RELEASE_TAG}
  fetchConfig:
    oci: ${PROD_REGISTRY}:${RELEASE_TAG}

Use kubectl to create the capz-system namespace and apply the capz-provider-oci.yaml file to the cluster (substituting the ${...} shell variables first):

kubectl create namespace capz-system
kubectl apply -f capz-provider-oci.yaml

CAPI Provider installation with fetched manifest

This section demonstrates an alternative approach for air-gapped installations using ConfigMaps instead of OCI artifacts. This method works for both Community and Prime users and can be useful when OCI registry access is limited or when you prefer to manage provider manifests directly.

As an admin, you need to fetch the vSphere provider (CAPV) components from within the cluster because you are working in an air-gapped environment.

In this example, there is a ConfigMap in the capv-system namespace that defines the components and metadata of the provider. It can be created manually or by running the following commands:

# Get the file contents from the GitHub release
curl -L https://github.com/rancher-sandbox/cluster-api-provider-vsphere/releases/download/v1.12.0/infrastructure-components.yaml -o components.yaml
curl -L https://github.com/rancher-sandbox/cluster-api-provider-vsphere/releases/download/v1.12.0/metadata.yaml -o metadata.yaml

# Create the configmap from the files
kubectl create configmap v1.12.0 --namespace=capv-system --from-file=components=components.yaml --from-file=metadata=metadata.yaml --dry-run=client -o yaml > configmap.yaml
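Note that the operator locates this ConfigMap through a label selector (see the fetchConfig in the CAPIProvider below), so the generated manifest must carry the matching label. One way to add it and create the ConfigMap, assuming the yq tool is available:

yq eval -i '.metadata.labels += {"provider-components": "vsphere"}' configmap.yaml
kubectl apply -f configmap.yaml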

This command example would need to be adapted to the provider and version you want to use. The resulting ConfigMap will look similar to the example below:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    provider-components: vsphere
  name: v1.12.0
  namespace: capv-system
data:
  components: |
    # Components for v1.12.0 YAML go here
  metadata: |
    # Metadata information goes here

A CAPIProvider resource then needs to be created to represent the vSphere infrastructure provider, configured with a fetchConfig. The label selector allows the operator to determine the available versions of the vSphere provider and the Kubernetes resources that need to be deployed (i.e. those contained within ConfigMaps matching the label selector).

Since the provider’s version is marked as v1.12.0, the operator uses the components information from the ConfigMap with the matching label to install the vSphere provider.

apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: vsphere
  namespace: capv-system
spec:
  name: vsphere
  type: infrastructure
  version: v1.12.0
  configSecret:
    name: vsphere-variables
  fetchConfig:
    selector:
      matchLabels:
        provider-components: vsphere
  deployment:
    containers:
    - name: manager
      imageUrl: "registry.suse.com/rancher/cluster-api-vsphere-controller:v1.12.0"
  variables:
    CLUSTER_TOPOLOGY: "true"
    EXP_CLUSTER_RESOURCE_SET: "true"
    EXP_MACHINE_POOL: "true"

Additionally, the CAPIProvider resource overrides the container image used by the provider via the deployment.containers[].imageUrl field. This allows the operator to pull the image from a registry within the air-gapped environment.
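The manifest above also references a configSecret named vsphere-variables, which holds the provider’s credentials and variables. A minimal sketch of such a Secret follows; the keys shown are illustrative, so use the variables your provider version actually expects:

apiVersion: v1
kind: Secret
metadata:
  name: vsphere-variables
  namespace: capv-system
type: Opaque
stringData:
  VSPHERE_USERNAME: "<USERNAME>"
  VSPHERE_PASSWORD: "<PASSWORD>"
  VSPHERE_SERVER: "<VSPHERE_SERVER_ADDRESS>"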

ConfigMap size limitations

There is a limit on the maximum size of a ConfigMap: 1MiB. If the manifests do not fit into this size, Kubernetes will generate an error and provider installation will fail. To avoid this, you can compress the manifests and store the archived data in the ConfigMap instead.

For example, suppose you have two files: components.yaml and metadata.yaml. To create a working ConfigMap:

  1. Archive components.yaml using the gzip CLI tool:

    gzip -c components.yaml > components.gz
  2. Create a ConfigMap manifest from the archived data:

    kubectl create configmap v1.12.0 --namespace=capv-system --from-file=components=components.gz --from-file=metadata=metadata.yaml --dry-run=client -o yaml > configmap.yaml
  3. Edit the file by adding the "provider.cluster.x-k8s.io/compressed: true" annotation:

    yq eval -i '.metadata.annotations += {"provider.cluster.x-k8s.io/compressed": "true"}' configmap.yaml
    Without this annotation, the operator won’t be able to determine if the data is compressed or not.
  4. Add labels that will be used to match the ConfigMap in the fetchConfig section of the provider:

    yq eval -i '.metadata.labels += {"my-label": "label-value"}' configmap.yaml
  5. Create the ConfigMap in your Kubernetes cluster using kubectl:

    kubectl create -f configmap.yaml
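For the operator to pick up this compressed ConfigMap, the provider’s fetchConfig selector must match the label added in step 4. A sketch of the relevant part of the CAPIProvider spec, reusing the my-label example:

spec:
  fetchConfig:
    selector:
      matchLabels:
        my-label: "label-value"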