GitOps Best Practices for Turtles and CAPI
This section provides best practices for implementing GitOps with Turtles and Cluster API (CAPI), focusing on using Fleet in combination with Cluster API Addon Provider Fleet (CAAPF) for automated agent provisioning and integration into the CAPI ecosystem.
These tips are transferable and can be applied even when using a different GitOps solution.
Common Tips
- Separate CAPI Clusters and ClusterClass Configurations: Keep CAPI cluster configurations (`clusters/`) distinct from CAPI `ClusterClass` definitions (`templates/`). This separation allows multiple clusters to be provisioned from shared templates while reducing the risk of unintended infrastructure changes.
- Isolate Addon Configurations: Store application manifests in `applications/`, ensuring they are independent of `Clusters` and `ClusterClasses`. This enables selective or grouped provisioning based on user demand or matching selectors, preventing interference with cluster infrastructure. Be aware that Helm chart dependencies need to be refreshed prior to addon provisioning.
- Utilize Nested GitRepo Resources: Leverage nested `GitRepo` resources in Fleet to structure repository content efficiently. This approach allows for modular provisioning, combining `ClusterClass` definitions, required addons/applications, and cluster configurations while maintaining separation of concerns.
- Use GitOps Integration Tools When Available: When configuring a GitOps setup, take the available tooling and the features it provides into consideration. An example of this is CAAPF, which automates Fleet GitOps installation on CAPI clusters and provides convenient automation.
- Apply Kustomize Overlays for Environment-Specific Customization: Use `overlays/` to store Kustomize overlays for different environments. This ensures that configuration changes, such as scaling adjustments, networking modifications, or component upgrades, can be applied dynamically without altering base manifests.
- Utilize Compare Patches: Fleet can selectively ignore resource differences when the bundle contains proper rules for `diff` detection. This prevents bundles from getting stuck in a modified state.
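To illustrate the nested `GitRepo` approach, the sketch below shows a parent `GitRepo` applied to the management cluster (via the `fleet-local` namespace) that points at a directory containing further `GitRepo` manifests. The repository URL and path here are hypothetical:

```yaml
# Hypothetical parent GitRepo: the /fleet/gitrepos directory in the
# referenced repository holds child GitRepo manifests, which Fleet applies
# to the management cluster, producing the nested structure.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: parent
  namespace: fleet-local
spec:
  branch: main
  repo: https://github.com/example/gitops-repo.git
  paths:
    - /fleet/gitrepos
```

Each child `GitRepo` can then target its own paths and cluster selectors, keeping templates, addons, and cluster configurations in separate bundles.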
Folder Separation Strategy
A well-structured repository simplifies management and improves clarity. The following folder separation strategy is recommended:
```
repo-root/
│── fleet/
│   ├── providers/      # CAPIProvider files to use with templates
│   │   ├── core/           # Core provider configuration
│   │   ├── infra-docker/   # Docker provider configuration
│   │   ├── infra-aws/      # AWS provider configuration
│   ├── clusters/       # Cluster-specific configurations
│   │   ├── prod/           # Production clusters
│   │   ├── staging/        # Staging clusters
│   │   ├── dev/            # Development clusters
│   ├── addons/         # Application-specific manifests focused on cluster bootstrapping, like CNI, CCM, CPI configurations
│   │   ├── cni/            # Group applications based on purpose. Provide fleet.yaml per sub-directory to maintain bundle separation.
│   │   ├── ccm/
│   ├── templates/      # CAPI ClusterClass templates for cluster provisioning
│   ├── fleet.yaml      # Fleet bundle configuration
│── overlays/           # Kustomize overlays for different environments
│   ├── prod/               # Kustomize overlays for production environments
│   ├── staging/            # Kustomize overlays for staging environments
│   ├── dev/                # Kustomize overlays for development environments
```
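As a sketch of what an overlay under `overlays/` might contain, the following hypothetical `overlays/dev/kustomization.yaml` scales down worker replicas for development clusters without modifying the base manifests. The resource path and patch target are assumptions about the repository layout:

```yaml
# overlays/dev/kustomization.yaml (hypothetical example)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../fleet/clusters/dev   # base cluster manifests (assumed path)
patches:
  - target:
      kind: Cluster            # apply to Cluster resources in the base
    patch: |-
      - op: replace
        path: /spec/topology/workers/machineDeployments/0/replicas
        value: 1
```

Equivalent overlays for `prod/` and `staging/` would carry their own replica counts, networking tweaks, or version pins.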
Providers Configuration
CAPI providers define the infrastructure components necessary for cluster provisioning. A simple setup may look like:
```
repo-root/
│── fleet/
│   ├── providers/
│   │   ├── core/
│   │   │   ├── core.yaml    # Core `CAPIProvider` definition
│   │   │   ├── fleet.yaml   # Fleet bundle configuration for the provider setup
```
where the core provider contains:
```yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: cluster-api
spec:
  type: core
```
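Infrastructure providers follow the same pattern. As a sketch, the `infra-aws/` directory could contain a `CAPIProvider` like the one below; the provider name is an assumption based on the common AWS provider naming:

```yaml
# Hypothetical infra-aws/aws.yaml — infrastructure provider definition
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: aws
spec:
  type: infrastructure
```

Keeping each provider in its own sub-directory with its own `fleet.yaml` yields one bundle per provider, which later sections reference via `dependsOn`.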
Templates Configuration
Specify `dependsOn` in `fleet.yaml` for specific `CAPIProvider` bundles or external `GitRepo` components. This ensures templates roll out in the correct order and prevents issues where component controllers are missing or not ready to accept `ClusterClass` definitions.
```yaml
namespace: capa-clusters
dependsOn:
  - name: core      # Ensure the core bundle is installed first
  - name: infra-aws # Wait for AWS provider installation
```
Clusters Configuration
Use labels to dynamically match clusters with required addons, allowing deployment of only necessary workloads.
Example label usage:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: aws-cluster
  labels:
    cni: calico
    ccm: aws
    env: dev
```
It is recommended to keep cluster definitions separate from the templates; that way, removing a cluster bundle allows the CAPI controller to process deletion in the correct order.
Addon Configuration
Addon configuration is essential for streamlining the provisioning of clusters, ensuring they reach the `Ready` state without manual intervention or third-party tooling.
Cluster addons should define precise selection rules to match against appropriately labeled clusters.
The example below showcases a cluster that requires the installation of the Calico CNI and the AWS Cloud Controller Manager (CCM):
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: aws-cluster
  labels:
    cni: calico
    ccm: aws
```
This setup requires two `GitRepo` resources, each responsible for provisioning addons based on specific labels: one for `cni: calico` and another for `ccm: aws`.
Here is an example of a `GitRepo` resource for deploying the Calico CNI:
```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: calico
spec:
  branch: main
  paths:
    - /fleet/applications/calico
  repo: https://github.com/rancher-sandbox/cluster-api-addon-provider-fleet.git
  targets:
    - clusterSelector:
        matchLabels:
          cni: calico
          env: dev
```
This ensures that the Calico CNI is deployed only on clusters labeled with `cni: calico` and `env: dev`, allowing for selective environment provisioning.
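The second `GitRepo`, matching `ccm: aws`, follows the same shape. The resource name and repository path below are illustrative assumptions:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: aws-ccm
spec:
  branch: main
  paths:
    - /fleet/applications/aws-ccm   # assumed path for the AWS CCM bundle
  repo: https://github.com/rancher-sandbox/cluster-api-addon-provider-fleet.git
  targets:
    - clusterSelector:
        matchLabels:
          ccm: aws
          env: dev
```

Because each addon has its own `GitRepo` and selector, a cluster receives exactly the addons its labels request.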
Helm Chart Values Templating
Addon configuration needs to dynamically adjust the Calico workload based on the matching cluster definition and specific cluster state. This can be achieved using a `fleet.yaml` setup similar to the following structure:
```
repo-root/
│── fleet/
│   ├── applications/
│   │   ├── calico/
│   │   │   ├── fleet.yaml   # Fleet configuration for the Calico setup
```
The `fleet.yaml` file defines templating rules and `comparePatches` to ensure a smooth rollout, independent of cluster configuration. To learn more about the CAAPF templating leveraged here, refer to the official documentation.
```yaml
helm:
  releaseName: projectcalico
  repo: https://docs.tigera.io/calico/charts
  chart: tigera-operator
  templateValues:
    installation: |-
      cni:
        type: Calico
        ipam:
          type: HostLocal
      calicoNetwork:
        bgp: Disabled
        mtu: 1350
        ipPools:
        ${- range $cidr := .ClusterValues.Cluster.spec.clusterNetwork.pods.cidrBlocks }
        - cidr: "${ $cidr }"
          encapsulation: None
          natOutgoing: Enabled
          nodeSelector: all()${- end}
diff:
  comparePatches:
    - apiVersion: operator.tigera.io/v1
      kind: Installation
      name: default
      operations:
        - {"op": "remove", "path": "/spec/kubernetesProvider"}
```