Tekton Integration Testing & Kubernetes Operators

Today, I tried to implement some integration tests for a Kubernetes controller, in the context of Tekton Pipelines.

Docker-in-Docker

My tests would run on my own production cluster, and I do not want to impact existing operations. As such, I want to run them in some isolated environment.

The Tekton Catalog offers a sample Task building a Docker container image, using a Docker-in-Docker sidecar container that provides a Docker runtime.

This is typically used on Kubernetes clusters that don’t rely on the Docker container runtime (e.g. CRI-O), or whenever we do not want to share the Kubernetes node’s Docker socket with its containers – which is good security practice.

In our case, we could re-use such a sidecar to execute arbitrary containers, which would help run our tests in isolation from the underlying Kubernetes cluster.
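
Here is an illustrative step talking to such a sidecar; a sketch only, where the step name is made up, and the DOCKER_* values and volume mount match those used in the full Task further down:

  - name: docker-smoke-test
    image: docker.io/library/docker:stable
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2376
    - name: DOCKER_TLS_VERIFY
      value: '1'
    - name: DOCKER_CERT_PATH
      value: /certs/client
    command:
    - /bin/sh
    args:
    - -c
    - |
        # runs against the sidecar daemon, not the node's container runtime
        docker run --rm docker.io/library/hello-world
    volumeMounts:
    - mountPath: /certs/client
      name: dind-certs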

Kubernetes-in-Docker

“Kubernetes-in-Docker”, or “kind”, is part of the Kubernetes SIGs project. It allows you to easily deploy a Kubernetes cluster on top of Docker. While you would not use it to deploy a production cluster, it is a perfect solution for running tests.

Cluster topology can be customized, and the Kubernetes runtime version can be chosen, making it ideal for running integration tests of Kubernetes controllers.
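
For instance, a minimal kind configuration file can describe a multi-node topology (a sketch, the file name being arbitrary):

# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

The Kubernetes version is picked at cluster creation time, via the node image tag:

kind create cluster --config kind-config.yaml --image docker.io/kindest/node:v1.21.1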

Tekton

All we need is to write a Task integrating Kubernetes-in-Docker and Docker-in-Docker with the deployment and tests of our controller.

Here is one way to do it:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kind-test-operator
spec:
  params:
  - default: docker.io/curlimages/curl:7.72.0
    description: Image providing curl, used to pull binaries required by this Task
    name: init_image
    type: string
  - default: 1.21.1
    description: Kubernetes cluster version
    name: k8s_version
    type: string
  - default: 0.11.1
    description: KIND version
    name: kind_version
    type: string
  - default: docker.io/kindest/node
    description: KinD Node Image repository, including Kubernetes in Docker runtime
    name: kind_node_image
    type: string
  - default: docker.io/library/docker:stable
    description: The location of the docker builder image.
    name: builderimage
    type: string
  - default: docker.io/library/docker:dind
    description: The location of the docker-in-docker image.
    name: dindimage
    type: string
  steps:

  # first, download the kubectl client and kind binaries,
  # as the following steps' images won't include curl
  - args:
    - -c
    - |
        set -x;

        # install kubectl
        curl -o /ci-bin/kubectl -fsL \
              https://dl.k8s.io/release/v$(params.k8s_version)/bin/linux/amd64/kubectl;
        chmod +x /ci-bin/kubectl;

        # install kind
        curl -o /ci-bin/kind -fsL \
            https://github.com/kubernetes-sigs/kind/releases/download/v$(params.kind_version)/kind-linux-amd64;
        chmod +x /ci-bin/kind;

        test -x /ci-bin/kind -a -x /ci-bin/kubectl;
        exit $?;
    command:
    - /bin/sh
    image: $(params.init_image)
    name: setup
    securityContext:
      runAsUser: 1000
    volumeMounts:
    - mountPath: /ci-bin
      name: temp-bin

  # next, using the Docker Builder Image, connecting to the Docker-in-Docker sidecar
  # create a Kubernetes cluster, using kind
  # deploy your operator, using kubectl
  # and proceed with testing your controller
  - args:
    - -c
    - |
        export PATH=/ci-bin:$PATH;

        # start kube cluster, waiting for the node to be Ready
        kind create cluster --wait=120s --image=$(params.kind_node_image):v$(params.k8s_version);

        # test cluster OK
        kubectl get nodes
        if ! kubectl get nodes 2>&1 | grep Ready >/dev/null; then
            echo K8S KO - bailing out;
            exit 1;
        fi;

        # deploy controller / adapt to fit your own use case
        kubectl create ns opsperator;
        kubectl create -f $(workspaces.source.path)/deploy/kubernetes/crd.yaml;
        kubectl create -f $(workspaces.source.path)/deploy/kubernetes/rbac.yaml;
        kubectl create -f $(workspaces.source.path)/deploy/kubernetes/namespace.yaml;
        grep -vE ' (resources|limits|memory|cpu|nodeSelector|node-role.kubernetes.io/.*):( |$)' \
            $(workspaces.source.path)/deploy/kubernetes/run-ephemeral.yaml | kubectl apply -f-;
        echo Waiting for operator to start ...;
        while true;
        do
            kubectl get pods -n opsperator;
            kubectl get pods -n opsperator | grep 1/1 >/dev/null && break;
            sleep 10;
        done;

        # dummy test for controller
        echo Creating test resource ...;
        kubectl create ns collab-demo;
        sed -e 's|do_network_policy.*|do_network_policy: false|' \
            -e 's|do_exporters.*|do_exporters: false|' \
            $(workspaces.source.path)/deploy/kubernetes/cr/draw.yaml \
            | kubectl apply -f-;
        echo Waiting for draw to start ...;
        while true;
        do
            kubectl get draw -n collab-demo;
            kubectl get draw -n collab-demo -o yaml | grep -A20 'status:' \
                | grep 'ready: true' >/dev/null && break;
            sleep 10;
        done;

        # check assets created by controller
        echo Checking pods:;
        kubectl get pods -n collab-demo -o wide;
        echo Checking ingress:;
        kubectl get ingress,svc -n collab-demo;
        # if needed: include additional steps, with the proper runtime, testing your components

        echo Done;
        exit 0;
    command:
    - /bin/sh
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2376
    - name: DOCKER_TLS_VERIFY
      value: '1'
    - name: DOCKER_CERT_PATH
      value: /certs/client
    image: $(params.builderimage)
    name: kind
    securityContext:
      runAsUser: 1000
    volumeMounts:
    - mountPath: /ci-bin
      name: temp-bin
    - mountPath: /certs/client
      name: dind-certs

  # the Docker-in-Docker Sidecar Container
  # where your Kubernetes-in-Docker cluster is being executed
  sidecars:
  - args:
    - --storage-driver=vfs
    - --userland-proxy=false
    env:
    - name: DOCKER_TLS_CERTDIR
      value: /certs
    image: $(params.dindimage)
    name: dind
    readinessProbe:
      periodSeconds: 1
      exec:
        command:
        - ls
        - /certs/client/ca.pem
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /certs/client
      name: dind-certs
  volumes:
  - name: temp-bin
    emptyDir: {}
  - name: dind-certs
    emptyDir: {}
  workspaces:
  - name: source

The steps deploying your controller and testing that it functions properly will vary. The example above includes hardcoded commands for simplicity. Scaling out, you may want to settle on a more generic way of proceeding: repositories following some naming convention, providing sample deployment configurations and unit-testing scripts.
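
To execute this Task, bind its source workspace to a volume holding your repository checkout. A sketch of a TaskRun follows, where the PVC name is a placeholder:

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: kind-test-operator-
spec:
  taskRef:
    name: kind-test-operator
  params:
  - name: k8s_version
    value: 1.21.1
  workspaces:
  - name: source
    persistentVolumeClaim:
      claimName: my-operator-sources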

Conclusion

This may not be the best way to proceed. If you can afford to run your tests on an actual cluster, without affecting its operations, then it would be easier to do so. You may query the Kubernetes cluster API hosting your Tekton installation, rather than bootstrapping Kubernetes in Kubernetes.
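
As a sketch of that simpler route, assuming the TaskRun’s ServiceAccount was granted the necessary RBAC (the step name, namespace and kubectl image below are examples):

  - name: test-in-place
    image: docker.io/bitnami/kubectl:1.21
    command:
    - /bin/sh
    args:
    - -c
    - |
        # kubectl authenticates with the Pod's ServiceAccount token,
        # targeting the very cluster running this Task
        kubectl create ns opsperator-ci;
        kubectl apply -f $(workspaces.source.path)/deploy/kubernetes/crd.yaml;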

Still, this was fun to look at. Kubernetes running in Docker-in-Docker. In a Kubernetes cluster. That doesn’t use Docker.
