{"id":911,"date":"2020-05-30T20:58:51","date_gmt":"2020-05-30T18:58:51","guid":{"rendered":"https:\/\/blog.unetresgrossebite.com\/?p=911"},"modified":"2020-11-21T20:10:13","modified_gmt":"2020-11-21T19:10:13","slug":"deploying-kubernetes-with-kube-spray","status":"publish","type":"post","link":"https:\/\/blog.unetresgrossebite.com\/?p=911","title":{"rendered":"Deploying Kubernetes with KubeSpray"},"content":{"rendered":"<p>I should first admit OpenShift 4 is slowly recovering from its architectural do-over. I&#8217;m still missing something that would be production ready, and quite disappointed by the waste of resources, violent upgrades, broken CSI, somewhat unstable RH-CoreOS, a complicated deployment scheme when dealing with bare-metal, &#8230; among lesser critical bugs.<\/p>\n<p>OpenShift 3 is still an interesting platform hosting production workloads, although its being based on Kubernetes 1.11 makes it quite an old version already.<\/p>\n<p>After some experimentation on a Raspberry-Pi lab, I figured I would give Kubernetes a try on x86. Doing so, I would be looking at <a href=\"https:\/\/github.com\/kubernetes-sigs\/kubespray\">KubeSpray<\/a>.<\/p>\n<p>\u00a0<\/p>\n<p>If you&#8217;re familiar with OpenShift 3 cluster deployments, you may have been using <a href=\"https:\/\/github.com\/openshift\/openshift-ansible\">openshift-ansible<\/a> already. Kube-spray is a similar solution, focused on Kubernetes, simplifying the process of bootstrapping, scaling and upgrading highly available clusters.<\/p>\n<p>Currently, kube-spray allows for deploying Kubernetes with container runtimes such as docker, cri-o, containerd, SDN based on flannel, weave, calico, &#8230; as well as a registry, some nginx based ingress controller, certs manager controller, integrated metrics, or the localvolumes, rbd and cephfs provisioner plugins.<\/p>\n<p>Comparing with OpenShift 4, the main missing components would be the cluster and developer consoles, RBAC integrating with users and groups from some third-party authentication provider. Arguably, the OLM, though I never really liked that one &#8212; makes your operators deployment quite abstract, and complicated to troubleshoot, as it involves several namespaces and containers, &#8230; The Prometheus Operator, that could still be deployed manually.<br \/>I can confirm everything works perfectly deploying on Debian Buster nodes, with <i>containerd<\/i> and <i>calico<\/i>. Keeping pretty much all defaults in place and activating all addons.<\/p>\n<p>\u00a0<\/p>\n<p>The sample variables shipping with kube-spray are pretty much on point. 
We would create an inventory file, such as the following:

```yaml
all:
  hosts:
    master1:
      access_ip: 10.42.253.10
      ansible_host: 10.42.253.10
      ip: 10.42.253.10
      node_labels:
        infra.utgb/zone: momos-adm
    master2:
      access_ip: 10.42.253.11
      ansible_host: 10.42.253.11
      ip: 10.42.253.11
      node_labels:
        infra.utgb/zone: thanatos-adm
    master3:
      access_ip: 10.42.253.12
      ansible_host: 10.42.253.12
      ip: 10.42.253.12
      node_labels:
        infra.utgb/zone: moros-adm
    infra1:
      access_ip: 10.42.253.13
      ansible_host: 10.42.253.13
      ip: 10.42.253.13
      node_labels:
        node-role.kubernetes.io/infra: "true"
        infra.utgb/zone: momos-adm
    infra2:
      access_ip: 10.42.253.14
      ansible_host: 10.42.253.14
      ip: 10.42.253.14
      node_labels:
        node-role.kubernetes.io/infra: "true"
        infra.utgb/zone: thanatos-adm
    infra3:
      access_ip: 10.42.253.15
      ansible_host: 10.42.253.15
      ip: 10.42.253.15
      node_labels:
        node-role.kubernetes.io/infra: "true"
        infra.utgb/zone: moros-adm
    compute1:
      access_ip: 10.42.253.20
      ansible_host: 10.42.253.20
      ip: 10.42.253.20
      node_labels:
        node-role.kubernetes.io/worker: "true"
        infra.utgb/zone: momos-adm
    compute2:
      access_ip: 10.42.253.21
      ansible_host: 10.42.253.21
      ip: 10.42.253.21
      node_labels:
        node-role.kubernetes.io/worker: "true"
        infra.utgb/zone: moros-adm
    compute3:
      access_ip: 10.42.253.22
      ansible_host: 10.42.253.22
      ip: 10.42.253.22
      node_labels:
        node-role.kubernetes.io/worker: "true"
        infra.utgb/zone: momos-adm
    compute4:
      access_ip: 10.42.253.23
      ansible_host: 10.42.253.23
      ip: 10.42.253.23
      node_labels:
        node-role.kubernetes.io/worker: "true"
        infra.utgb/zone: moros-adm
  children:
    kube-master:
      hosts:
        master1:
        master2:
        master3:
    kube-infra:
      hosts:
        infra1:
        infra2:
        infra3:
    kube-worker:
      hosts:
        compute1:
        compute2:
        compute3:
        compute4:
    kube-node:
      children:
        kube-master:
        kube-infra:
        kube-worker:
    etcd:
      hosts:
        master1:
        master2:
        master3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
```
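With the inventory in place (say, saved as `inventory/mycluster/hosts.yml`, a name of our choosing), a quick connectivity check confirms Ansible can reach every node before we start tweaking variables:

```sh
# All nodes should answer with "pong"
ansible -i inventory/mycluster/hosts.yml all -m ping
```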
Then, we'll edit the sample `group_vars/etcd.yml`:

```yaml
etcd_compaction_retention: "8"
etcd_metrics: basic
etcd_memory_limit: 5GB
etcd_quota_backend_bytes: 21474836480
# ^ WARNING: the sample var suggests "2G", which results in etcd not
# starting (with etcd_deployment_type=host); journalctl shows errors such as:
# > invalid value "2G" for ETCD_QUOTA_BACKEND_BYTES: strconv.ParseInt: parsing "2G": invalid syntax
# Also note: here, I'm setting 20G, not 2.
etcd_deployment_type: host
```

Next, common variables in `group_vars/all/all.yml`:

```yaml
etcd_data_dir: /var/lib/etcd
bin_dir: /usr/local/bin
kubelet_load_modules: true
upstream_dns_servers:
- 10.255.255.255
searchdomains:
- intra.unetresgrossebite.com
- unetresgrossebite.com
additional_no_proxy: "*.intra.unetresgrossebite.com,10.42.0.0/15"
http_proxy: "http://netserv.vms.intra.unetresgrossebite.com:3128/"
https_proxy: "{{ http_proxy }}"
download_validate_certs: False
cert_management: script
download_container: true
deploy_container_engine: true
apiserver_loadbalancer_domain_name: api-k8s.intra.unetresgrossebite.com
loadbalancer_apiserver:
  address: 10.42.253.152
  port: 6443
loadbalancer_apiserver_localhost: false
loadbalancer_apiserver_port: 6443
```
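Since the API will sit behind a fixed, load-balanced address, a quick sanity check of that endpoint can save some debugging later; a minimal sketch matching the names and VIP above (the curl will only answer once haproxy and the control plane are up, and relies on anonymous auth being enabled, as configured below):

```sh
# The record should resolve to the keepalived VIP configured further down
getent hosts api-k8s.intra.unetresgrossebite.com

# Once the control plane is deployed, the apiserver answers on 6443
curl -k https://api-k8s.intra.unetresgrossebite.com:6443/version
```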
We would also want to customize the variables in `group_vars/k8s-cluster/k8s-cluster.yml`:

```yaml
kube_config_dir: /etc/kubernetes
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
kube_cert_dir: "{{ kube_config_dir }}/ssl"
kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_users_dir: "{{ kube_config_dir }}/users"
kube_api_anonymous_auth: true
kube_version: v1.18.3
kube_image_repo: "k8s.gcr.io"
local_release_dir: "/tmp/releases"
retry_stagger: 5
kube_cert_group: kube-cert
kube_log_level: 2
credentials_dir: "{{ inventory_dir }}/credentials"
kube_api_pwd: "{{ lookup('password', credentials_dir + '/kube_user.creds length=15 chars=ascii_letters,digits') }}"
kube_users:
  kube:
    pass: "{{ kube_api_pwd }}"
    role: admin
    groups:
    - system:masters
kube_oidc_auth: false
kube_basic_auth: true
kube_token_auth: true
kube_network_plugin: calico
kube_network_plugin_multus: false
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
kube_network_node_prefix: 24
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443
kube_apiserver_insecure_port: 0
kube_proxy_mode: ipvs
# when using metallb, set to true
kube_proxy_strict_arp: false
kube_proxy_nodeport_addresses: []
kube_encrypt_secret_data: false
cluster_name: cluster.local
ndots: 2
kubeconfig_localhost: true
kubectl_localhost: true
dns_mode: coredns
enable_nodelocaldns: true
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254
enable_coredns_k8s_external: false
coredns_k8s_external_zone: k8s_external.local
enable_coredns_k8s_endpoint_pod_names: false
system_reserved: true
system_memory_reserved: 512M
system_cpu_reserved: 500m
system_master_memory_reserved: 256M
system_master_cpu_reserved: 250m
deploy_netchecker: false
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
kubelet_deployment_type: host
helm_deployment_type: host
kubeadm_control_plane: false
kubeadm_certificate_key: "{{ lookup('password', credentials_dir + '/kubeadm_certificate_key.creds length=64 chars=hexdigits') | lower }}"
k8s_image_pull_policy: IfNotPresent
kubernetes_audit: false
dynamic_kubelet_configuration: false
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
dynamic_kubelet_configuration_dir: "{{ kubelet_config_dir | default(default_kubelet_config_dir) }}"
authorization_modes:
- Node
- RBAC
podsecuritypolicy_enabled: true
container_manager: containerd
resolvconf_mode: none
etcd_deployment_type: host
```
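Note that with `kubeconfig_localhost` and `kubectl_localhost` enabled, kube-spray drops an admin kubeconfig into an `artifacts` directory next to the inventory at the end of the deployment. Assuming the `inventory/mycluster` layout from earlier, we would then use it as follows:

```sh
# Admin credentials generated by kube-spray for this cluster
export KUBECONFIG=$PWD/inventory/mycluster/artifacts/admin.conf
kubectl get nodes -o wide
```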
Finally, we may enable additional components in `group_vars/k8s-cluster/addons.yml`:

```yaml
dashboard_enabled: true
helm_enabled: false

registry_enabled: false
registry_namespace: kube-system
registry_storage_class: rwx-storage
registry_disk_size: 500Gi

metrics_server_enabled: true
metrics_server_kubelet_insecure_tls: true
metrics_server_metric_resolution: 60s
metrics_server_kubelet_preferred_address_types: InternalIP

cephfs_provisioner_enabled: true
cephfs_provisioner_namespace: cephfs-provisioner
cephfs_provisioner_cluster: ceph
cephfs_provisioner_monitors: "10.42.253.110:6789,10.42.253.111:6789,10.42.253.112:6789"
cephfs_provisioner_admin_id: admin
cephfs_provisioner_secret: <key returned by 'ceph auth get client.admin'>
cephfs_provisioner_storage_class: rwx-storage
cephfs_provisioner_reclaim_policy: Delete
cephfs_provisioner_claim_root: /volumes
cephfs_provisioner_deterministic_names: true

rbd_provisioner_enabled: true
rbd_provisioner_namespace: rbd-provisioner
rbd_provisioner_replicas: 2
rbd_provisioner_monitors: "10.42.253.110:6789,10.42.253.111:6789,10.42.253.112:6789"
rbd_provisioner_pool: kube
rbd_provisioner_admin_id: admin
rbd_provisioner_secret_name: ceph-secret-admin
rbd_provisioner_secret: <key returned by 'ceph auth get client.admin'>
rbd_provisioner_user_id: kube
rbd_provisioner_user_secret_name: ceph-secret-user
rbd_provisioner_user_secret: <key returned by 'ceph auth get client.kube'>
rbd_provisioner_user_secret_namespace: "{{ rbd_provisioner_namespace }}"
rbd_provisioner_fs_type: ext4
rbd_provisioner_image_format: "2"
rbd_provisioner_image_features: layering
rbd_provisioner_storage_class: rwo-storage
rbd_provisioner_reclaim_policy: Delete

ingress_nginx_enabled: true
ingress_nginx_host_network: true
ingress_publish_status_address: ""
ingress_nginx_nodeselector:
  node-role.kubernetes.io/infra: "true"
ingress_nginx_namespace: ingress-nginx
ingress_nginx_insecure_port: 80
ingress_nginx_secure_port: 443
ingress_nginx_configmap:
  map-hash-bucket-size: "512"

cert_manager_enabled: true
cert_manager_namespace: cert-manager
```

We now have pretty much everything ready. Last, we would deploy some haproxy node proxying requests to the Kubernetes API. To do so, I would use a pair of VMs with keepalived and haproxy. On both, install the necessary packages and configuration:

```sh
apt-get update; apt-get install keepalived haproxy hatop

cat <<EOF >/etc/keepalived/keepalived.conf
global_defs {
  notification_email {
    contact@example.com
  }
  notification_email_from keepalive@$(hostname -f)
  smtp_server smtp.example.com
  smtp_connect_timeout 30
}

vrrp_instance VI_1 {
  state MASTER
  interface ens3
  virtual_router_id 101
  priority 10
  advert_int 101
  authentication {
    auth_type PASS
    auth_pass your_secret
  }
  virtual_ipaddress {
    10.42.253.152
  }
}
EOF
# hint: use distinct priorities on nodes
echo net.ipv4.conf.all.forwarding=1 >>/etc/sysctl.conf
sysctl -w net.ipv4.conf.all.forwarding=1
systemctl restart keepalived && systemctl enable keepalived

cat <<EOF >/etc/haproxy/haproxy.cfg
global
  log /dev/log local0
  log /dev/log local1 notice
  chroot /var/lib/haproxy
  stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
  stats timeout 30s
  user haproxy
  group haproxy
  daemon
  ca-base /etc/ssl/certs
  crt-base /etc/ssl/private
  ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
  ssl-default-bind-options no-sslv3

defaults
  log global
  option dontlognull
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
  errorfile 408 /etc/haproxy/errors/408.http
  errorfile 500 /etc/haproxy/errors/500.http
  errorfile 502 /etc/haproxy/errors/502.http
  errorfile 503 /etc/haproxy/errors/503.http
  errorfile 504 /etc/haproxy/errors/504.http

listen kubernetes-apiserver-https
  bind 0.0.0.0:6443
  mode tcp
  option log-health-checks
  server master1 10.42.253.10:6443 check check-ssl verify none inter 10s
  server master2 10.42.253.11:6443 check check-ssl verify none inter 10s
  server master3 10.42.253.12:6443 check check-ssl verify none inter 10s
  balance roundrobin
EOF
systemctl restart haproxy && systemctl enable haproxy

cat <<EOF >/etc/profile.d/hatop.sh
alias hatop='hatop -s /run/haproxy/admin.sock'
EOF
```
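At this point we can already confirm haproxy sees its three backends (they will report DOWN until the control plane is actually deployed). The hatop alias above offers an interactive view; alternatively, assuming socat is installed, a one-shot dump of backend states could look like this:

```sh
# Print pxname, svname and status for each frontend/backend/server
echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d, -f1,2,18
```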
We may now deploy our cluster:

```sh
ansible-playbook -i path/to/inventory cluster.yml
```

For a 10-node cluster, it shouldn't take more than an hour. It is quite nice to see you can get a reliable Kubernetes deployment with fewer than 60 infra Pods.

I'm also noticing that while the CSI provisioner is being used to create Ceph RBD and CephFS volumes, the host is still in charge of mounting those volumes, which is, in a way, a workaround to the CSI attacher plugins. On that note, I've heard the issues with volumes staying blocked during node failures were on their way to being solved, involving a fix to the CSI spec. Sooner or later, we should be able to use the full CSI stack.

All in all, kube-spray is quite a satisfying solution. Having struggled quite a lot with openshift-ansible, and not being quite satisfied yet with their latest installer, kube-spray definitely feels like a reliable piece of software: the code is well organized, and it goes straight to the point. Besides, I need a break from CentOS. I'm amazed I did not try it earlier.