After installing and setting up all the different pieces of our CI/CD pipeline, we still need to do a bit of work to make sure that the process is as automated as possible and working in tandem to our benefit. This post is mostly about that.

Ingress and SSL

Even though Cilium was installed in our system, we still need to configure the different environments to use it as their ingress, as well as configure a load balancer and a certificate manager. Thankfully, these are all relatively straightforward tasks. Let us start with the load balancer, which is going to be MetalLB. This is achieved by installing the following manifest: kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml. Feel free to download it beforehand to double-check its contents. After that is done and all the pods and services are up and running, we create the file metallb-config.yaml with the following contents:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - ---------  # Your server's IP
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advert
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool

We then add this to our cluster by running kubectl apply -f metallb-config.yaml. With this out of the way, we can configure cert-manager, which is again as simple as installing it and applying a short configuration file:

  helm repo add jetstack https://charts.jetstack.io
  helm repo update
  
  helm install cert-manager jetstack/cert-manager \
    -n cert-manager \
    --create-namespace \
    --set installCRDs=true
  
  kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=cert-manager -n cert-manager --timeout=300s

We then create the file cluster-issuer.yaml which will instruct cert-manager to issue self-signed certificates:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}

We apply it with kubectl apply -f cluster-issuer.yaml. At this point, we can create the certificate.yaml files for each of our environments. You can check the exact files for dev, staging, and prod, but they’re all of the following form:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: gitops-demo-prod-tls
spec:
  secretName: gitops-demo-prod-tls
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
  dnsNames:
  - prod.gitops-demo.local

Finally, it’s time to configure the ingress files for each environment. Again, each environment is slightly different (check: dev, staging, prod), but they’re all similar:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitops-demo
  namespace: prod
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-issuer
spec:
  ingressClassName: cilium
  tls:
  - hosts:
    - prod.gitops-demo.local
    secretName: gitops-demo-prod-tls
  rules:
  - host: prod.gitops-demo.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: gitops-demo-prod
            port:
              number: 80

One last part: you need to add the following line to your /etc/hosts so that DNS resolution works:

<your-ip> dev.gitops-demo.local staging.gitops-demo.local prod.gitops-demo.local
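If you prefer to script this step, here is a small sketch; the IP below is a placeholder assumption, so substitute your server's actual address:

```shell
# Build the /etc/hosts entry; 192.168.1.100 is a placeholder IP.
SERVER_IP="192.168.1.100"
HOSTS_LINE="${SERVER_IP} dev.gitops-demo.local staging.gitops-demo.local prod.gitops-demo.local"
echo "${HOSTS_LINE}"
# To append it for real (requires root):
#   echo "${HOSTS_LINE}" | sudo tee -a /etc/hosts
```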

You can now apply all the changes to the environments using kubectl. Note that kubectl apply -k accepts a single directory, so we loop over the environments:

for env in dev staging prod; do kubectl apply -k "environments/${env}"; done

And that’s it! With a simple curl -k https://dev.gitops-demo.local/version (note the -k argument, needed because the certificates are self-signed), you should see the expected results! That was easy, no?

Automating the CI/CD pipeline

It’s true that with Argo CD we have a pipeline for continuous deployment. But as it stands, our CI system isn’t able to actually deploy anything. For this, we need to change our workflow by adding new steps to it. We will adopt a conservative strategy: for dev, we deploy our changes straight away, as that’s where we wish to see our changes reflected the soonest. For staging and prod, we will have another strategy: push the changes to a new branch, and create a pull request for merging into the main branch, which is the one that Argo CD watches. Let’s see:

Pushing to dev#

First, and most importantly, we need a small change to the kustomization.yaml in each of the environments. We will use a patch strategy to guarantee that Argo CD picks up the changes that we push to the environments. For that, we add the following section under the patches spec that was already present in the files:

- target:
    kind: Deployment
    name: gitops-demo
  patch: |
    - op: replace
      path: /spec/template/spec/containers/0/image
      value: "git.assilvestrar.club/gitops/gitops-demo-app:latest"
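For reference, after this addition the dev kustomization.yaml would look roughly as follows. This is a sketch: the resources list and the first patch (replica count) are illustrative assumptions, and only the image patch is the one we just added:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
resources:
- ../../base
patches:
# Pre-existing patch (illustrative placeholder)
- target:
    kind: Deployment
    name: gitops-demo
  patch: |
    - op: replace
      path: /spec/replicas
      value: 1
# The image patch we just added; CI will later rewrite the :latest tag
- target:
    kind: Deployment
    name: gitops-demo
  patch: |
    - op: replace
      path: /spec/template/spec/containers/0/image
      value: "git.assilvestrar.club/gitops/gitops-demo-app:latest"
```

This layout also explains why the CI job edits .patches[1]: the image patch is the second entry in the patches list.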

With this change, we can now add the first part of our workflow change, responsible for pushing changes to the git repository. In the .gitea/workflows/ci.yaml file in our gitops-demo-app repo, we add the first part of the automation job:

  update-config:
    needs: build-and-push
    runs-on: self-hosted
    if: github.ref == 'refs/heads/main'
    steps:
    - uses: actions/checkout@v4
    
    - name: Update dev environment
      run: |
        git clone https://${{ secrets.REGISTRY_USER }}:${{ secrets.REGISTRY_PASS }}@${REGISTRY_HOST}/${{ secrets.REGISTRY_USER }}/gitops-demo-config.git config-repo
        cd config-repo
        yq e '.patches[1].patch = "- op: replace\n  path: /spec/template/spec/containers/0/image\n  value: \"git.assilvestrar.club/gitops/gitops-demo-app:'${{ github.sha }}'\""' -i environments/dev/kustomization.yaml
        git config --global user.email "ci@gitops.local"
        git config --global user.name "CI Bot"
        git add environments/dev/kustomization.yaml
        git commit -m "ci: update dev image to ${{ github.sha }}"
        git push https://${{ secrets.REGISTRY_USER }}:${{ secrets.REGISTRY_PASS }}@${REGISTRY_HOST}/${{ secrets.REGISTRY_USER }}/gitops-demo-config.git main

Once the build succeeds and the new image is pushed to the Docker registry we are hosting, this job patches the kustomization.yaml file for the dev environment (hence the yq tool being used there: it’s a lifesaver for making sure the YAML stays properly formatted), and then pushes that change straight to the git repository’s main branch. Argo CD then picks up the change and applies it to the containers in your cluster. Pretty cool: we now have automatic deployments to the dev environment.
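To make the yq edit concrete, here is the patch body it materializes, sketched in plain shell with a placeholder SHA:

```shell
# Reconstruct the patch string the CI job writes via yq; the SHA is a placeholder.
SHA="0123abc"
IMAGE="git.assilvestrar.club/gitops/gitops-demo-app:${SHA}"
PATCH="- op: replace
  path: /spec/template/spec/containers/0/image
  value: \"${IMAGE}\""
echo "${PATCH}"
```

This is exactly the string that replaces the :latest value in the image patch, so each dev deployment is pinned to the commit that produced it.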

Managing staging and prod#

As mentioned before, the strategy for merging to staging and prod ought to be a bit different, as we don’t want automatic deployment of a new image; we wish to vet the new image in the dev environment, and only then promote it to staging, and eventually prod. With this in mind, we adopt a different strategy: first, we create a separate branch for each of staging and prod, and there we commit the changes that would bring the environment to the new image. At the same time, we create a pull request to make it easy to merge those changes once the time is right. The code that manages this workflow is as follows:

  create-staging-pr:
    needs: update-config
    runs-on: self-hosted
    if: github.ref == 'refs/heads/main'
    steps:
    - uses: actions/checkout@v4

    - name: Create staging promotion PR
      run: |
        git clone https://${{ secrets.REGISTRY_USER }}:${{ secrets.REGISTRY_PASS }}@${REGISTRY_HOST}/${{ secrets.REGISTRY_USER }}/gitops-demo-config.git config-repo
        cd config-repo
        git switch -c promote/${{ github.sha }}-to-staging
        yq e '.patches[1].patch = "- op: replace\n  path: /spec/template/spec/containers/0/image\n  value: \"git.assilvestrar.club/gitops/gitops-demo-app:'${{ github.sha }}'\""' -i environments/staging/kustomization.yaml
        git config --global user.email "ci@gitops.local"
        git config --global user.name "CI Bot"
        git add environments/staging/kustomization.yaml
        git commit -m "ci: promote ${{ github.sha }} to staging"
        git push https://${{ secrets.REGISTRY_USER }}:${{ secrets.REGISTRY_PASS }}@${REGISTRY_HOST}/${{ secrets.REGISTRY_USER }}/gitops-demo-config.git promote/${{ github.sha }}-to-staging
        curl -v -X POST https://${REGISTRY_HOST}/api/v1/repos/${{ secrets.REGISTRY_USER }}/gitops-demo-config/pulls -H "Authorization: token ${{ secrets.REGISTRY_PASS }}" -H "Content-Type: application/json" -d '{"title":"Promote ${{ github.sha }} to staging","head":"promote/${{ github.sha }}-to-staging","base":"main"}'

  create-prod-pr:
    needs: update-config
    runs-on: self-hosted
    if: github.ref == 'refs/heads/main'
    steps:
    - uses: actions/checkout@v4

    - name: Create prod promotion PR
      run: |
        git clone https://${{ secrets.REGISTRY_USER }}:${{ secrets.REGISTRY_PASS }}@${REGISTRY_HOST}/${{ secrets.REGISTRY_USER }}/gitops-demo-config.git config-repo
        cd config-repo
        git switch -c promote/${{ github.sha }}-to-prod
        yq e '.patches[1].patch = "- op: replace\n  path: /spec/template/spec/containers/0/image\n  value: \"git.assilvestrar.club/gitops/gitops-demo-app:'${{ github.sha }}'\""' -i environments/prod/kustomization.yaml
        git config --global user.email "ci@gitops.local"
        git config --global user.name "CI Bot"
        git add environments/prod/kustomization.yaml
        git commit -m "ci: promote ${{ github.sha }} to prod"
        git push https://${{ secrets.REGISTRY_USER }}:${{ secrets.REGISTRY_PASS }}@${REGISTRY_HOST}/${{ secrets.REGISTRY_USER }}/gitops-demo-config.git promote/${{ github.sha }}-to-prod
        curl -v -X POST https://${REGISTRY_HOST}/api/v1/repos/${{ secrets.REGISTRY_USER }}/gitops-demo-config/pulls -H "Authorization: token ${{ secrets.REGISTRY_PASS }}" -H "Content-Type: application/json" -d '{"title":"Promote ${{ github.sha }} to prod","head":"promote/${{ github.sha }}-to-prod","base":"main"}'

You can see that create-staging-pr and create-prod-pr are nearly identical, the only differences being the branch they push to and the environment file they patch. We finish everything off by creating a pull request against main, and you can see how that looks here: pr for staging and pr for prod. And now we have a pretty much fully automated CI/CD pipeline!
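Once an image has been vetted in dev, the promotion PR can be merged from the Gitea UI or through its API. Below is a sketch with placeholder host, repository, PR index, and token; the actual curl call is left commented since it needs a live server:

```shell
# Hypothetical sketch: merge a promotion PR via Gitea's pull-request merge endpoint.
# Host, owner/repo, PR index, and token are placeholder assumptions.
GITEA_HOST="git.assilvestrar.club"
PR_INDEX=1
BODY='{"Do": "merge"}'
echo "POST https://${GITEA_HOST}/api/v1/repos/gitops/gitops-demo-config/pulls/${PR_INDEX}/merge"
echo "${BODY}"
# For real (requires an API token):
#   curl -X POST "https://${GITEA_HOST}/api/v1/repos/gitops/gitops-demo-config/pulls/${PR_INDEX}/merge" \
#     -H "Authorization: token ${GITEA_TOKEN}" -H "Content-Type: application/json" -d "${BODY}"
```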

What’s next

In what’s bound to be the last part of this series, we shall add some niceties to this pipeline, mostly related to reliability and observability. Stay tuned for more!