k8s_workernode

This is a module that installs and configures kubelet and kube-proxy. Inspired by https://github.com/kelseyhightower/kubernetes-the-hard-way.


Version information

  • 0.3.0 (latest)
  • 0.2.0
  • 0.1.0 (released Feb 19th 2021)
This version is compatible with:
  • Puppet Enterprise 2019.8.x, 2019.7.x, 2019.5.x, 2019.4.x, 2019.3.x, 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x, 2017.3.x, 2017.2.x, 2016.4.x
  • Puppet >= 4.10.0 < 7.0.0

Start using this module

  • r10k or Code Manager
  • Bolt
  • Manual installation
  • Direct download

Add this module to your Puppetfile:

mod 'arrnorets-k8s_workernode', '0.1.0'
Learn more about managing modules with a Puppetfile

Add this module to your Bolt project:

bolt module add arrnorets-k8s_workernode
Learn more about using this module with an existing project

Manually install this module globally with the Puppet module tool:

puppet module install arrnorets-k8s_workernode --version 0.1.0

Direct download is not typically how you would use a Puppet module to manage your infrastructure, but you may want to download the module in order to inspect the code.


Documentation

arrnorets/k8s_workernode — version 0.1.0 Feb 19th 2021

Table of contents

  1. Common purpose
  2. Compatibility
  3. Installation
  4. Config example in Hiera and result files

1. Common purpose

This is a module that installs and configures kubelet and kube-proxy, the Kubernetes components required on a worker node in a cluster. In the end you will have a server that is part of a Kubernetes cluster and ready for further network configuration. See https://github.com/kelseyhightower/kubernetes-the-hard-way, chapter 09, for a detailed explanation.

Inspired by https://github.com/kelseyhightower/kubernetes-the-hard-way.

2. Compatibility

This module was tested on CentOS 7.

3. Installation

mod 'k8s_workernode',
    :git => 'https://github.com/arrnorets/puppet-k8s_workernode.git',
    :ref => 'main'

4. Config example in Hiera and result files

This module follows the concept of so-called "XaaH in Puppet". The principles are described here and here.

Here is the example of config in Hiera:


# First of all you have to generate at least the CA and Kubernetes key-cert pairs in order to configure authentication against the API server.
# The Kubernetes key-cert pair will be used as K8s API TLS credentials. See more details on https://github.com/kelseyhightower/kubernetes-the-hard-way, chapters 04, 05 and 06.

---
k8s_tls_certs:
  entities:
    ca:
      key: |
        <Insert your CA key here!>
      cert: |
        <Insert your CA certificate here!>

k8s_workernode:
  # Create root directories for configs and kubeconfigs on a workernode
  rootdirs:
    - '/etc/k8s'
    - '/etc/k8s/conf'
    - '/etc/k8s/kubeconfig'

  # Backend description. We are using containerd.
  container_backend:
    name: 'containerd'
    pkg_name: 'containerd.io'
    pkg_version: '1.4.3-3.1.el7'

  # Install basic set of CNI plugins
  cni:
    pkg_version: '0.8.2-1.el7'

  # Crictl for containers handling and debugging on workernodes.
  crictl:
    pkg_name: 'kubernetes-crictl'
    pkg_version: '1.18.14-1.el7'

  # /* Common kube-proxy and kubelet settings */
  kube-proxy:
    pkg_name: 'kubernetes-kube-proxy'
    pkg_version: '1.18.14-1.el7'
    enable: true

    parameters:
      binarypath: '/opt/k8s/kube-proxy'
      common:
        config: '/etc/k8s/conf/kube-proxy.yaml'
      conf:
        '/etc/k8s/conf/kube-proxy.yaml':
          kind: KubeProxyConfiguration
          apiVersion: kubeproxy.config.k8s.io/v1alpha1
          clientConnection:
            kubeconfig: "/etc/k8s/kubeconfig/kube-proxy.kubeconfig"
          mode: "iptables"
          clusterCIDR: "10.200.0.0/16"

  kubelet:
    pkg_name: 'kubernetes-kubelet'
    pkg_version: '1.18.14-1.el7'
    enable: true

    parameters:
      # /* Settings for systemd unit file */
      binarypath: '/opt/k8s/kubelet'
      common:
        config: '/etc/k8s/conf/kubelet-config.yaml'
        container-runtime: 'remote'
        container-runtime-endpoint: 'unix:///var/run/containerd/containerd.sock'
        image-pull-progress-deadline: '2m'
        network-plugin: 'cni'
        register-node: true
        v: 2
      # /* END BLOCK */
      
      # /* Kubelet configuration files */
      conf:
        '/etc/k8s/conf/kubelet-config.yaml':
          kind: KubeletConfiguration
          apiVersion: kubelet.config.k8s.io/v1beta1
          authentication:
            anonymous:
              enabled: false
            webhook:
              enabled: true
            x509:
              clientCAFile: "/var/lib/kubelet/own_ca.crt"
          authorization:
            mode: Webhook
          clusterDomain: "cluster.local"
          clusterDNS:
            - "10.32.0.10"
          podCIDR: "10.240.0.0/16"
          runtimeRequestTimeout: "15m"
      # /* END BLOCK */

  # /* END BLOCK */

# /* Kube-proxy kubeconfig is a common one for all workernodes. */

k8s_kubeconfigs:
  '/etc/k8s/kubeconfig/kube-proxy.kubeconfig':
    <Your YAML-formatted kubeconfig for kube-proxy starts here!>

  # /* END BLOCK */
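The CA key-cert pair referenced at the top of the example can be produced with any PKI tooling. kubernetes-the-hard-way uses cfssl; as a minimal sketch, plain openssl works too (the file names ca.key/ca.crt and the CN are illustrative, not required by the module):

```shell
# Sketch only: generate a self-signed CA to paste into the
# k8s_tls_certs hash above. The CN and validity period are examples.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 3650 \
    -subj "/CN=kubernetes-ca" -out ca.crt
```

The contents of ca.key and ca.crt then go into the `key:` and `cert:` fields of the `k8s_tls_certs` example.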

Additionally, we have to configure Hiera for each worker node in the cluster with the following content.

per_node_kubelet_conf:
  # IP address of worker node
  node-ip: '<worker node ip>'

  # Root directory for kubelet var files and certs
  vardir: "/var/lib/kubelet"
  tlsCertFile:
    name: "/var/lib/kubelet/k8s-node1.asgardahost.ru-kubelet.crt"
    value: |
      <Insert your node's private cert for kubelet here!>
  tlsPrivateKeyFile:
    name: "/var/lib/kubelet/k8s-node1.asgardahost.ru-kubelet.key"
    value: |
      <Insert your node's private key for kubelet here!>

  # Kubelet kubeconfig description.
  kubeconfig:
    '/etc/k8s/kubeconfig/k8s-node1.asgardahost.ru.kubeconfig':
      <Your YAML-formatted kubeconfig for kubelet on node k8s-node1 starts here!>
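With the Hiera data above in place, classifying a worker node can be as simple as including the module in a node definition. This is a sketch: it assumes the module's entry class is named `k8s_workernode` (matching the module name) and reuses the example hostname from this README:

```puppet
# Hypothetical node definition; assumes the entry class is
# `k8s_workernode` and that Hiera supplies all parameters shown above.
node 'k8s-node1.asgardahost.ru' {
  include k8s_workernode
}
```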

It will install the kubelet and kube-proxy packages, put keys, certs, configs and kubeconfigs under the specified directories, and generate systemd unit files for kube-proxy and kubelet. Here is a list of generated files:

/etc/k8s
/etc/k8s/conf
/etc/k8s/conf/kubelet-config.yaml
/etc/k8s/conf/kube-proxy.yaml
/etc/k8s/kubeconfig
/etc/k8s/kubeconfig/kube-proxy.kubeconfig
/etc/k8s/kubeconfig/k8s-node1.asgardahost.ru.kubeconfig
/etc/systemd/system/k8s-kubelet.service
/etc/systemd/system/k8s-kubeproxy.service
/var/lib/kubelet
/var/lib/kubelet/own_ca.crt
/var/lib/kubelet/k8s-node1.asgardahost.ru-kubelet.key
/var/lib/kubelet/k8s-node1.asgardahost.ru-kubelet.crt