Air-gapped Environment
Rancher Turtles provides support for air-gapped environments out-of-the-box by leveraging features of the Cluster API Operator, the required dependency for installing Rancher Turtles.

To provision and configure Cluster API providers, Turtles uses the CAPIProvider resource, which allows managing Cluster API Operator manifests in a declarative way. Every field provided by the upstream CAPI Operator resource for the desired `spec.type` is also available in the `spec` of the CAPIProvider resource.
To install Cluster API providers in an air-gapped environment, the following needs to be done:

- Configure the Cluster API Operator for an air-gapped environment:
  - The operator chart will be fetched and stored as a part of the Turtles chart.
  - Provide image overrides for the operator from an accessible image repository.
- Configure Cluster API providers for an air-gapped environment:
  - Provide fetch configuration for each provider from an accessible location (e.g., an internal GitHub/GitLab server) or from pre-created ConfigMaps within the cluster.
  - Provide image overrides for each provider to pull images from an accessible image repository.
- Configure Rancher Turtles for an air-gapped environment:
  - Collect the Rancher Turtles images and publish them to the private registry. See the cert-manager installation example for reference.
  - Provide fetch configuration and image values for the `core` and `caprke2` providers in `values.yaml`, as shown in the sketch after this list.
  - Provide image values for the Cluster API Operator Helm chart dependency in `values.yaml`. Image values specified with the `cluster-api-operator` key will be passed along to the Cluster API Operator.
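The following is a minimal sketch of what those `values.yaml` overrides can look like. The registry host, repository paths, tags, and selector labels are placeholders, and the exact key layout should be verified against the `values.yaml` shipped with your Rancher Turtles chart version:

```yaml
# Hypothetical overrides -- adapt registry, paths, and versions to your environment
cluster-api-operator:
  image:
    manager:
      repository: registry.example.com/rancher/cluster-api-operator
      tag: v0.18.1
  cluster-api:
    core:
      fetchConfig:
        # Match ConfigMaps holding the pre-loaded core provider manifests
        selector: "{\"matchLabels\": {\"provider-components\": \"core\"}}"
    rke2:
      bootstrap:
        fetchConfig:
          selector: "{\"matchLabels\": {\"provider-components\": \"rke2-bootstrap\"}}"
      controlPlane:
        fetchConfig:
          selector: "{\"matchLabels\": {\"provider-components\": \"rke2-control-plane\"}}"
```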
CAPI Provider installation with OCI artifact
As an administrator working in an air-gapped environment, you need to fetch the Azure Cluster API Provider (CAPZ) components from within your cluster since external internet access is restricted. This section demonstrates how to deploy CAPZ using OCI artifacts stored in your private registry.
Set your private registry URL and release tag, replacing the placeholders with your actual values:

```bash
export REGISTRY=<YOUR_PRIVATE_REGISTRY>
export RELEASE_TAG=<CAPI_PROVIDER_VERSION>
```
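For example, assuming the artifact lives at a hypothetical `registry.example.com` repository and uses the CAPZ version referenced later in this guide:

```bash
export REGISTRY=registry.example.com/rancher/cluster-api-azure-controller-components
export RELEASE_TAG=v1.19.4
```

Note that `REGISTRY` must include the full repository path, since the `fetchConfig.oci` reference below is built as `${REGISTRY}:${RELEASE_TAG}`.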
Create an Azure CAPIProvider resource that instructs Rancher Turtles to fetch the Azure provider from your private OCI registry, and save it as `capz-provider-oci.yaml`:

```yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: azure
  namespace: capz-system
spec:
  type: infrastructure
  name: azure
  version: ${RELEASE_TAG}
  fetchConfig:
    oci: ${REGISTRY}:${RELEASE_TAG}
```
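The `${REGISTRY}` and `${RELEASE_TAG}` placeholders are not expanded by kubectl itself. One way to render and apply the manifest with the variables exported above (a suggested approach, assuming the `envsubst` tool from GNU gettext is installed):

```bash
# Create the target namespace, then substitute the variables and apply
kubectl create namespace capz-system
envsubst < capz-provider-oci.yaml | kubectl apply -f -
```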
How to mirror CAPI Provider OCI artifact
You can mirror OCI artifacts for any CAPI Provider listed in the Rancher Turtles configuration: https://github.com/rancher/turtles/blob/main/internal/controllers/clusterctl/config.yaml
Install the ORAS (OCI Registry As Storage) CLI tool to manage OCI artifacts. Follow the installation instructions at: https://oras.land/docs/installation
Set your private registry URL and release tag, replacing the placeholders with your actual values:

```bash
export REGISTRY=<YOUR_PRIVATE_REGISTRY>
export RELEASE_TAG=<CAPI_PROVIDER_VERSION>
```
Create a working directory:

```bash
mkdir capi-oci-artifacts
cd capi-oci-artifacts
```
Pull the OCI artifact for the CAPI Provider you want to use, for example the Azure provider OCI artifact in version 1.19.4:

```bash
oras pull registry.rancher.com/rancher/cluster-api-azure-controller-components:v1.19.4
```
Publish an OCI artifact containing the Azure CAPI Provider manifests to your private registry:

```bash
oras push ${REGISTRY}:${RELEASE_TAG} infrastructure-components.yaml:application/vnd.test.file metadata.yaml:application/vnd.test.file
```
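To sanity-check the mirror (a suggested verification, not part of the original steps), confirm the pulled files are present and that the pushed tag is visible in your registry:

```bash
ls                         # should list infrastructure-components.yaml and metadata.yaml
oras repo tags ${REGISTRY} # the pushed ${RELEASE_TAG} should appear here
```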
Create an Azure CAPIProvider resource that instructs Rancher Turtles to fetch the Azure provider from your private OCI registry, and save it as `capz-provider-oci.yaml`:

```yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: azure
  namespace: capz-system
spec:
  type: infrastructure
  name: azure
  version: ${RELEASE_TAG}
  fetchConfig:
    oci: ${REGISTRY}:${RELEASE_TAG}
```
Use kubectl to create the `capz-system` namespace and apply the `capz-provider-oci.yaml` file to the cluster:

```bash
kubectl create namespace capz-system
kubectl apply -f capz-provider-oci.yaml
```
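As a quick check (a suggested verification, not part of the original steps), you can inspect the provider resource and the deployment the operator creates once the installation succeeds:

```bash
kubectl get capiproviders.turtles-capi.cattle.io -n capz-system
kubectl get deployments -n capz-system
```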
CAPI Provider fetch and push OCI artifact
You can build OCI artifacts for any CAPI Provider listed in the Rancher Turtles configuration: https://github.com/rancher/turtles/blob/main/internal/controllers/clusterctl/config.yaml

You can publish the artifact either with the cluster-api-operator kubectl plugin or with ORAS; both variants are shown below.

Using kubectl operator
Install the cluster-api-operator plugin for kubectl.

Clone the official Azure CAPI Provider repository and navigate to the project directory:

```bash
git clone https://github.com/kubernetes-sigs/cluster-api-provider-azure/
cd cluster-api-provider-azure
```
Choose the specific version you want to deploy. There is no `latest` version in the OCI scenario, so a version needs to be set at all times. You can either list all available tags:

```bash
git tag
```

or automatically select the latest release:

```bash
export RELEASE_TAG=`git describe --abbrev=0`
```
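If you picked a specific tag from the list instead of the latest release, consider checking it out before building so the generated manifests match that version (a suggested step, not part of the upstream instructions):

```bash
git checkout ${RELEASE_TAG}
```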
Set your private registry URL, replacing the placeholder with your actual registry:

```bash
export PROD_REGISTRY=<YOUR_PRIVATE_REGISTRY>
```
Build the release artifacts `infrastructure-components.yaml` and `metadata.yaml`:

```bash
make release
```
Go to the output directory containing the artifacts:

```bash
cd out
```
Create and publish an OCI artifact containing the Azure CAPI Provider manifests to your private registry:

```bash
kubectl operator publish -u ${PROD_REGISTRY}:${RELEASE_TAG} infrastructure-components.yaml metadata.yaml
```
Using Oras
Clone the official Azure CAPI Provider repository and navigate to the project directory:

```bash
git clone https://github.com/kubernetes-sigs/cluster-api-provider-azure/
cd cluster-api-provider-azure
```
Choose the specific version you want to deploy. There is no `latest` version in the OCI scenario, so a version needs to be set at all times. You can either list all available tags:

```bash
git tag
```

or automatically select the latest release:

```bash
export RELEASE_TAG=`git describe --abbrev=0`
```
Set your private registry URL, replacing the placeholder with your actual registry:

```bash
export PROD_REGISTRY=<YOUR_PRIVATE_REGISTRY>
```
Build the release artifacts `infrastructure-components.yaml` and `metadata.yaml`:

```bash
make release
```
Go to the output directory containing the artifacts:

```bash
cd out
```
Install the ORAS (OCI Registry As Storage) CLI tool to manage OCI artifacts. Follow the installation instructions at: https://oras.land/docs/installation
Create and publish an OCI artifact containing the Azure CAPI Provider manifests to your private registry:

```bash
oras push ${PROD_REGISTRY}:${RELEASE_TAG} infrastructure-components.yaml:application/vnd.test.file metadata.yaml:application/vnd.test.file
```
Create an Azure CAPIProvider resource that instructs Rancher Turtles to fetch the Azure provider from your private OCI registry, and save it as `capz-provider-oci.yaml`:

```yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: azure
  namespace: capz-system
spec:
  type: infrastructure
  name: azure
  version: ${RELEASE_TAG}
  fetchConfig:
    oci: ${PROD_REGISTRY}:${RELEASE_TAG}
```
Use kubectl to create the `capz-system` namespace and apply the `capz-provider-oci.yaml` file to the cluster:

```bash
kubectl create namespace capz-system
kubectl apply -f capz-provider-oci.yaml
```
CAPI Provider installation with fetched manifest
As an administrator working in an air-gapped environment, you need to fetch the vSphere provider (CAPV) components from within the cluster because external internet access is restricted.
In this example, there is a ConfigMap in the `capv-system` namespace that defines the components and metadata of the provider. It can be created manually or by running the following commands:

```bash
# Get the file contents from the GitHub release
curl -L https://github.com/rancher-sandbox/cluster-api-provider-vsphere/releases/download/v1.12.0/infrastructure-components.yaml -o components.yaml
curl -L https://github.com/rancher-sandbox/cluster-api-provider-vsphere/releases/download/v1.12.0/metadata.yaml -o metadata.yaml

# Create the ConfigMap manifest from the files
kubectl create configmap v1.12.0 --namespace=capv-system --from-file=components=components.yaml --from-file=metadata=metadata.yaml --dry-run=client -o yaml > configmap.yaml
```
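The generated manifest does not yet carry the `provider-components: vsphere` label that the provider's `fetchConfig` selector below relies on. One way to add it and create the resources in the cluster (a suggested follow-up, using the same `yq` tool as the compression steps later in this section):

```bash
# Label the ConfigMap so the fetchConfig selector can find it
yq eval -i '.metadata.labels += {"provider-components": "vsphere"}' configmap.yaml

# Create the namespace and the ConfigMap
kubectl create namespace capv-system
kubectl apply -f configmap.yaml
```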
This command example would need to be adapted to the provider and version you want to use. The resulting ConfigMap will look similar to the example below:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    provider-components: vsphere
  name: v1.12.0
  namespace: capv-system
data:
  components: |
    # Components for v1.12.0 YAML go here
  metadata: |
    # Metadata information goes here
```
A CAPIProvider resource will need to be created to represent the vSphere infrastructure provider. It will need to be configured with a `fetchConfig`. The label selector allows the operator to determine the available versions of the vSphere provider and the Kubernetes resources that need to be deployed (i.e. contained within ConfigMaps which match the label selector). Since the provider's version is marked as `v1.12.0`, the operator uses the components information from the ConfigMap with the matching label to install the vSphere provider.
```yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: vsphere
  namespace: capv-system
spec:
  name: vsphere
  type: infrastructure
  version: v1.12.0
  configSecret:
    name: vsphere-variables
  fetchConfig:
    selector:
      matchLabels:
        provider-components: vsphere
  deployment:
    containers:
      - name: manager
        imageUrl: "registry.suse.com/rancher/cluster-api-vsphere-controller:v1.12.0"
  variables:
    CLUSTER_TOPOLOGY: "true"
    EXP_CLUSTER_RESOURCE_SET: "true"
    EXP_MACHINE_POOL: "true"
```
Additionally, the CAPIProvider overrides the container image to use for the provider via the `deployment.containers[].imageUrl` field. This allows the operator to pull the image from a registry within the air-gapped environment.
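The `configSecret` above references a `vsphere-variables` Secret that must exist in the same namespace. A minimal sketch of what it can contain, assuming the standard CAPV credential variables (`VSPHERE_USERNAME`, `VSPHERE_PASSWORD`); consult the vSphere provider documentation for the full set your environment needs:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-variables
  namespace: capv-system
type: Opaque
stringData:
  # Hypothetical values -- replace with your vCenter credentials
  VSPHERE_USERNAME: "administrator@vsphere.local"
  VSPHERE_PASSWORD: "changeme"
```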
ConfigMap size limitations
There is a limit on the maximum size of a ConfigMap: 1MiB. If the manifests do not fit into this size, Kubernetes will generate an error and the provider installation will fail. To avoid this, you can archive the manifests before adding them to the ConfigMap.
For example, you have two files: `components.yaml` and `metadata.yaml`. To create a working ConfigMap you need to:

- Archive `components.yaml` using the `gzip` CLI tool:

  ```bash
  gzip -c components.yaml > components.gz
  ```

- Create a ConfigMap manifest from the archived data:

  ```bash
  kubectl create configmap v1.12.0 --namespace=capv-system --from-file=components=components.gz --from-file=metadata=metadata.yaml --dry-run=client -o yaml > configmap.yaml
  ```

- Edit the file by adding the "provider.cluster.x-k8s.io/compressed: true" annotation:

  ```bash
  yq eval -i '.metadata.annotations += {"provider.cluster.x-k8s.io/compressed": "true"}' configmap.yaml
  ```

  Without this annotation, the operator won't be able to determine if the data is compressed or not.

- Add labels that will be used to match the ConfigMap in the `fetchConfig` section of the provider:

  ```bash
  yq eval -i '.metadata.labels += {"my-label": "label-value"}' configmap.yaml
  ```

- Create the ConfigMap in your Kubernetes cluster using kubectl:

  ```bash
  kubectl create -f configmap.yaml
  ```
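For the operator to pick this ConfigMap up, the label added above has to match the provider's `fetchConfig` selector. A minimal fragment of the corresponding CAPIProvider spec, reusing the illustrative `my-label: label-value` pair from the previous step:

```yaml
spec:
  fetchConfig:
    selector:
      matchLabels:
        my-label: label-value
```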