{"id":1021,"date":"2022-01-23T16:22:20","date_gmt":"2022-01-23T15:22:20","guid":{"rendered":"https:\/\/blog.unetresgrossebite.com\/?p=1021"},"modified":"2022-01-23T16:41:47","modified_gmt":"2022-01-23T15:41:47","slug":"tekton-integration-testing-kubernetes-operators","status":"publish","type":"post","link":"https:\/\/blog.unetresgrossebite.com\/?p=1021","title":{"rendered":"Tekton Integration Testing &#038; Kubernetes Operators"},"content":{"rendered":"\n<p>Today, I tried to implement some integration tests for a Kubernetes controller, in the context of Tekton Pipelines.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2>Docker-in-Docker<\/h2>\n\n\n\n<p>I would run my tests on my own production cluster. I do not want to impact existing operations. As such, I want to run my tests in some isolated environment.<\/p>\n\n\n\n<p>The Tekton Catalog gives a sample building a Docker container image, using a Docker-in-Docker sidecar container, offering with some Docker runtime.<\/p>\n\n\n\n<p>This is typically used on Kubernetes clusters that don&#8217;t rely on the Docker container runtime (eg: Cri-O), or whenever we do not want to share the Kubernetes node&#8217;s Docker socket file to its containers &#8211; which is good security practice.<\/p>\n\n\n\n<p>In our case, we could re-use such a sidecar, executing arbitrary containers, which would help running our tests isolated from the underlying Kubernetes cluster.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2>Kubernetes-in-Docker<\/h2>\n\n\n\n<p>&#8220;Kubernetes-in-Docker&#8221;, or &#8220;KIND&#8221;, is part of the Kubernetes SIGs project. It allows to easily deploy a Kubernetes cluster on top of Docker. While you would not use this deploying a production cluster, it&#8217;s a perfect solution running some tests.<\/p>\n\n\n\n<p>Cluster topology can be customized. Runtime version can be chosen. 
This makes it ideal for running integration tests of Kubernetes controllers.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2>Tekton<\/h2>\n\n\n\n<p>All we need is to write a Task that integrates Kubernetes-in-Docker and Docker-in-Docker with the deployment and testing of our controller.<\/p>\n\n\n\n<p>Here is one way to do it:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">apiVersion: tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: kind-test-operator\nspec:\n  params:\n  - default: docker.io\/curlimages\/curl:7.72.0\n    description: Image providing curl, used to pull binaries required by this Task\n    name: init_image\n    type: string\n  - default: 1.21.1\n    description: Kubernetes cluster version\n    name: k8s_version\n    type: string\n  - default: 0.11.1\n    description: KIND version\n    name: kind_version\n    type: string\n  - default: docker.io\/kindest\/node\n    description: KinD Node Image repository, including Kubernetes in Docker runtime\n    name: kind_node_image\n    type: string\n  - default: docker.io\/library\/docker:stable\n    description: The location of the docker builder image.\n    name: builderimage\n    type: string\n  - default: docker.io\/library\/docker:dind\n    description: The location of the docker-in-docker image.\n    name: dindimage\n    type: string\n  steps:\n\n  # first, download the kubectl client and kind binary\n  # the next containers won't have curl\n  - args:\n    - -c\n    - |\n        set -x;\n\n        # install kubectl\n        curl -o \/ci-bin\/kubectl -fsL \\\n              https:\/\/dl.k8s.io\/release\/v$(params.k8s_version)\/bin\/linux\/amd64\/kubectl;\n        chmod +x \/ci-bin\/kubectl;\n\n        # install kind\n        curl -o \/ci-bin\/kind -fsL \\\n            https:\/\/github.com\/kubernetes-sigs\/kind\/releases\/download\/v$(params.kind_version)\/kind-linux-amd64;\n        chmod +x \/ci-bin\/kind;\n\n        test -x \/ci-bin\/kind -a -x \/ci-bin\/kubectl;\n        exit $?;\n    command:\n    - \/bin\/sh\n    image: $(params.init_image)\n    
name: setup\n    securityContext:\n      runAsUser: 1000\n    volumeMounts:\n    - mountPath: \/ci-bin\n      name: temp-bin\n\n  # next, using the Docker Builder Image, connecting to the Docker-in-Docker sidecar\n  # create a Kubernetes cluster, using kind\n  # deploy your operator, using kubectl\n  # and proceed with testing your controller\n  - args:\n    - -c\n    - |\n        export PATH=\/ci-bin:$PATH;\n\n        # start kube cluster\n        kind create cluster --image=$(params.kind_node_image):v$(params.k8s_version);\n\n        # check the cluster is ready\n        kubectl get nodes;\n        if ! kubectl get nodes 2>&amp;1 | grep Ready >\/dev\/null; then\n            echo K8S KO - bailing out;\n            exit 1;\n        fi;\n\n        # deploy controller \/ adapt to fit your own use case\n        kubectl create ns opsperator;\n        kubectl create -f $(workspaces.source.path)\/deploy\/kubernetes\/crd.yaml;\n        kubectl create -f $(workspaces.source.path)\/deploy\/kubernetes\/rbac.yaml;\n        kubectl create -f $(workspaces.source.path)\/deploy\/kubernetes\/namespace.yaml;\n        grep -vE ' (resources|limits|memory|cpu|nodeSelector|node-role.kubernetes.io\/.*):( |$)' \\\n            $(workspaces.source.path)\/deploy\/kubernetes\/run-ephemeral.yaml | kubectl apply -f-;\n        echo Waiting for operator to start ...;\n        while true;\n        do\n            kubectl get pods -n opsperator;\n            kubectl get pods -n opsperator | grep 1\/1 >\/dev\/null &amp;&amp; break;\n            sleep 10;\n        done;\n\n        # dummy test for controller\n        echo Creating test resource ...;\n        kubectl create ns collab-demo;\n        sed -e 's|do_network_policy.*|do_network_policy: false|' \\\n            -e 's|do_exporters.*|do_exporters: false|' \\\n            $(workspaces.source.path)\/deploy\/kubernetes\/cr\/draw.yaml \\\n            | kubectl apply -f-;\n        echo Waiting for draw to start ...;\n        while true;\n        do\n            
kubectl get draw -n collab-demo;\n            kubectl get draw -n collab-demo -o yaml | grep -A20 'status:' \\\n                | grep 'ready: true' >\/dev\/null &amp;&amp; break;\n            sleep 10;\n        done;\n\n        # check assets created by controller\n        echo Checking pods:;\n        kubectl get pods -n collab-demo -o wide;\n        echo Checking ingress:;\n        kubectl get ingress,svc -n collab-demo;\n        # if needed: include some additional steps, with proper runtime, testing your components\n\n        echo Done;\n        exit 0;\n    command:\n    - \/bin\/sh\n    env:\n    - name: DOCKER_HOST\n      value: tcp:\/\/localhost:2376\n    - name: DOCKER_TLS_VERIFY\n      value: '1'\n    - name: DOCKER_CERT_PATH\n      value: \/certs\/client\n    image: $(params.builderimage)\n    name: kind\n    securityContext:\n      runAsUser: 1000\n    volumeMounts:\n    - mountPath: \/ci-bin\n      name: temp-bin\n    - mountPath: \/certs\/client\n      name: dind-certs\n\n  # the Docker-in-Docker Sidecar Container\n  # where your Kubernetes-in-Docker cluster is being executed\n  sidecars:\n  - args:\n    - --storage-driver=vfs\n    - --userland-proxy=false\n    env:\n    - name: DOCKER_TLS_CERTDIR\n      value: \/certs\n    image: $(params.dindimage)\n    name: dind\n    readinessProbe:\n      periodSeconds: 1\n      exec:\n        command:\n        - ls\n        - \/certs\/client\/ca.pem\n    securityContext:\n      privileged: true\n    volumeMounts:\n    - mountPath: \/certs\/client\n      name: dind-certs\n  volumes:\n  - name: temp-bin\n    emptyDir: {}\n  - name: dind-certs\n    emptyDir: {}\n  workspaces:\n  - name: source\n<\/pre>\n\n\n\n<p>The steps deploying your controller and verifying it functions properly will vary. The example above includes some hardcoded commands for simplicity. 
Scaling out, you may want to settle on some generic way of proceeding: repositories respecting a naming convention, providing sample deployment configurations and unit testing scripts.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2>Conclusion<\/h2>\n\n\n\n<p>This may not be the best way to proceed. If you can afford to run your tests on an actual cluster without affecting its operations, that would be easier. You may query the Kubernetes cluster API hosting your Tekton installation, rather than bootstrapping Kubernetes in Kubernetes.<\/p>\n\n\n\n<p>Still, this was fun to look at. Kubernetes, running in Docker-in-Docker. In a Kubernetes cluster. That doesn&#8217;t use Docker.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Today, I tried to implement some integration tests for a Kubernetes controller, in the context of Tekton Pipelines. Docker-in-Docker I would run my tests on my own production cluster. I do not want to impact existing operations. As such, I want to run my tests in some isolated environment. 
The Tekton Catalog gives a sample [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/blog.unetresgrossebite.com\/index.php?rest_route=\/wp\/v2\/posts\/1021"}],"collection":[{"href":"https:\/\/blog.unetresgrossebite.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.unetresgrossebite.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.unetresgrossebite.com\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.unetresgrossebite.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1021"}],"version-history":[{"count":3,"href":"https:\/\/blog.unetresgrossebite.com\/index.php?rest_route=\/wp\/v2\/posts\/1021\/revisions"}],"predecessor-version":[{"id":1025,"href":"https:\/\/blog.unetresgrossebite.com\/index.php?rest_route=\/wp\/v2\/posts\/1021\/revisions\/1025"}],"wp:attachment":[{"href":"https:\/\/blog.unetresgrossebite.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1021"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.unetresgrossebite.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1021"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.unetresgrossebite.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1021"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}