Building a GitOps pipeline pt. 2
After what we accomplished in the previous post, the next step is to start building our infrastructure. For that I will be using a few tools: k3s and kustomize to build the Kubernetes cluster, both of which greatly simplify its creation and the configuration of the different environments; and Argo CD, a Kubernetes-native tool, as our continuous delivery solution. All of this will be backed by our already existing Git forge, and all the configuration files used are available on the gitops-demo-config repository - this wouldn’t be a GitOps pipeline otherwise.
k3s and cilium#
Spinning up the cluster with k3s is as easy as issuing a single command. But before that, I recommend exporting a simple variable that’s bound to save you some headaches: export KUBECONFIG=~/.kube/config. Ideally this will live in your .bashrc (or equivalent), and its purpose is to tell all Kubernetes-native applications where the configuration is kept.
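For reference, k3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml (once the server has started), owned by root. A minimal sketch of wiring things up on a single-user machine - an assumption about your setup, so adjust as needed:
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config   # copy the k3s-generated kubeconfig
sudo chown "$USER" ~/.kube/config                  # so kubectl can read it without sudo
echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
Once that’s done, we’re set for the creation of the cluster: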
k3s server --flannel-backend=none --disable-network-policy --disable-kube-proxy
Of note here: these args tell k3s that we don’t want it to manage the CNI or run kube-proxy, as we’re going to install Cilium for both purposes. That’s also relatively simple, with the help of the Helm chart made available by the project:
helm repo add cilium https://helm.cilium.io
helm install cilium cilium/cilium --namespace kube-system --set kubeProxyReplacement=true
With the help of kubectl cluster-info and kubectl get pods -n kube-system, we should eventually see that both our cluster and Cilium are up and ready. Great! Finally, feel free to make the configuration of k3s permanent by adding the following options to the /etc/rancher/k3s/config.yaml file:
flannel-backend: "none"
disable-kube-proxy: true
disable-network-policy: true
cluster-init: true
disable:
  - servicelb
  - traefik
We also need to pass a few more options to Cilium, so as to guarantee that its ingress controller will work:
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --set cluster.id=1 \
  --set cluster.name=<your-hostname> \
  --set k8sServiceHost=<your-ip> \
  --set k8sServicePort=6443 \
  --set ipam.operator.clusterPoolIPv4PodCIDRList="10.42.0.0/16" \
  --set kubeProxyReplacement=true \
  --set operator.replicas=1 \
  --set ingressController.enabled=true \
  --set ingressController.loadbalancerMode=shared
With those options in /etc/rancher/k3s/config.yaml, the configuration is now permanent, and you can manage the k3s cluster with ease through systemd, using systemctl start k3s.service and systemctl stop k3s.service.
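Before moving on, it’s worth a quick sanity check of the networking layer. A small sketch, assuming you have the Cilium CLI installed (the kubectl line works without it):
# with the Cilium CLI, wait until every component reports OK
cilium status --wait
# or with plain kubectl: the agent pods should all be Running
kubectl get pods -n kube-system -l k8s-app=cilium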
Setting up the repository#
We can now set up the git repository where our configuration files will live. We start by laying out a simple structure, like so:
gitops-demo-config/
├── base/
│ ├── deployment.yaml
│ ├── service.yaml
│ ├── configmap.yaml
│ └── kustomization.yaml
└── environments/
├── dev/
│ └── kustomization.yaml
├── staging/
│ └── kustomization.yaml
└── prod/
└── kustomization.yaml
The base layer is where the common information lives. The configmap holds the configuration for the app we want to deploy; the service maps the networking information; the deployment ties them all together and sets defaults for the resources used by each container. Finally, there are the kustomization files: the one at the base layer gives us the defaults for each environment, while the per-environment ones can be patched and changed for more fine-grained control (compare the ones for the dev, staging and prod environments).
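To make this concrete, here’s a minimal sketch of what the two kustomization layers can look like. The resource and app names (gitops-demo) are assumptions inferred from the deployment names used further down, so treat the repository as the source of truth:
# base/kustomization.yaml - the shared manifests every environment inherits
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml

# environments/dev/kustomization.yaml - inherits base, overrides per-env details
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
nameSuffix: -dev
resources:
  - ../../base
replicas:
  - name: gitops-demo    # matches the name declared in base, before the suffix
    count: 1
This shape is the core idea of kustomize: a single shared base, plus thin overlays that each environment is free to tweak.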
With the manifests in place, we can start by validating the files. In the gitops-demo-config directory, the command kubectl kustomize base | yq is bound to be a great help here. The second step is a simple bash script:
for env in dev staging prod; do
  echo "=== $env environment ==="
  kubectl kustomize environments/$env | kubectl apply --dry-run=client -f -
done
If all went well, this is when we start to create and deploy our infra:
kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace prod
kubectl apply -k environments/dev
kubectl rollout status deployment/gitops-demo-dev -n dev --timeout=300s
kubectl get pods -n dev
And we test all of this by probing our app:
kubectl port-forward -n dev svc/gitops-demo-dev 8180:80 &
curl http://localhost:8180/health
curl http://localhost:8180/version
If everything is up, we should see the following output:
$ ~ curl http://localhost:8180/health
{"status":"healthy","timestamp":1763626390,"version":"latest"}
$ ~ curl http://localhost:8180/version
{"feature_enabled":true,"version":"latest"}
With this, we can go ahead and push all of the changes we made in gitops-demo-config, so that all of our configuration starts to live in git:
git add .
git commit -m "feat: complete K8s manifests with Kustomize"
git push origin main
And so it lives!
Argo CD#
Now it’s time to start assembling the Continuous Deployment part of our GitOps pipeline. Argo CD is my choice for this part, since it fits perfectly into the philosophy of self-hosting and integrates with the architecture we have assembled so far. We start by creating a namespace for it, and installing it there:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=argocd-server -n argocd --timeout=600s
Let’s configure access to Argo CD, so that we can apply changes to it and watch the deployments take place from the argocd CLI:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
kubectl port-forward svc/argocd-server -n argocd 8081:443 &
argocd login localhost:8081 --insecure --username admin --password <password>
argocd account update-password --current-password <password> --new-password <new_password>
And finally, we add our Gitea repository with the configuration files to Argo CD, so that it knows where to fetch the information for managing the infrastructure. One important note here: the best practice is to create an API token in Gitea and use that, rather than your password, to give Argo CD access. Create and execute the script below; afterwards, you can check the status of the repo with the argocd repo list command.
#!/bin/bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: gitea-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  url: https://git.assilvestrar.club/gitops/gitops-demo-config.git
  username: gitops
  password: YOUR_GITEA_PASSWORD_OR_TOKEN
EOF
We can now create the project, where we specify which environments we want Argo CD to manage. This time around, the command to check the results of this setup is argocd proj list:
argocd proj create gitops-demo \
  --description "GitOps Demo Project" \
  --dest https://kubernetes.default.svc,dev \
  --dest https://kubernetes.default.svc,staging \
  --dest https://kubernetes.default.svc,prod \
  --src https://git.assilvestrar.club/gitops/gitops-demo-config.git
And with that out of the way, we can enter the last step of setting up Argo CD: creating a directory in our gitops-demo-config repository where the Application manifest for each managed environment can live under proper version control. This is the new directory structure:
gitops-demo-config/
├── base/
│ ├── deployment.yaml
│ ├── service.yaml
│ ├── configmap.yaml
│ └── kustomization.yaml
├── environments/
│ ├── dev/
│ │ └── kustomization.yaml
│ ├── staging/
│ │ └── kustomization.yaml
│ └── prod/
│ └── kustomization.yaml
└── argocd-apps/
├── dev.yaml
├── staging.yaml
└── prod.yaml
Feel free to check the respective manifest files for each environment: dev, staging, and prod. As you can see, they’re all pretty similar, but the level of control you have over them is considerable, which is a big boon if, say, you want your prod environment to not self-heal (not that you should, but you get the idea).
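For reference, here’s a rough sketch of what one of these Application manifests can look like; the metadata name and sync options are assumptions on my part, so check the files linked above for the real thing:
# argocd-apps/dev.yaml - illustrative only
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitops-demo-dev
  namespace: argocd
spec:
  project: gitops-demo
  source:
    repoURL: https://git.assilvestrar.club/gitops/gitops-demo-config.git
    targetRevision: main
    path: environments/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true    # the flag you might flip off for prod
    syncOptions:
      - CreateNamespace=true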
We then apply the manifests we just created with kubectl apply -f argocd-apps/, and watch them come up with watch -n 2 'argocd app list'. If everything is as it should be, it’s time to push them to your git repo.
Now, for the best part: let’s actually change something in our configuration and check that everything works in tandem. Say you want to increase the replicas in your dev environment from 1 to 2: in the environments/dev/kustomization.yaml file, the replicas[0].count field can be changed to reflect that.
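Assuming the dev overlay sketched earlier, the edit is a one-liner:
# environments/dev/kustomization.yaml
replicas:
  - name: gitops-demo
    count: 2    # bumped from 1
With the change saved, commit it to git: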
git add .
git commit -m "test: scalling dev to 2 replicas"
git push
If you check the output of kubectl get pods -n dev, you should see that the namespace now has two replicas of your app living side by side, instead of just one:
$ kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
gitops-demo-dev-b64df69d9-l89g9 1/1 Running 0 13h
gitops-demo-dev-b64df69d9-z9l7z 1/1 Running 1 (44m ago) 17h
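You can also confirm that the loop closed from Argo CD’s side; assuming the Application name from the sketch above, the app should report itself Synced and Healthy against the new commit:
argocd app get gitops-demo-dev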
Congratulations, you now have a version-controlled, declarative continuous deployment pipeline!
What’s next#
We now have both a CI and a CD pipeline fully managed by configuration files that live in our git forge. Next up, we will be adding some observability and reliability tools to our infrastructure. Stay tuned!