Install and manage Kubernetes cluster

Alexander Ursu

1,874 downloads of latest version

5.0 quality score
Version information

  • 0.3.1 (latest), released Dec 28th 2020
  • 0.2.3
  • 0.2.2
  • 0.2.1
  • 0.2.0
This version is compatible with:
  • Puppet Enterprise 2021.0.x, 2019.8.x, 2019.7.x, 2019.5.x, 2019.4.x, 2019.3.x, 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x
  • Puppet >= 5.5.0 < 8.0.0
  • CentOS


aursu/kubeinstall — version 0.3.1 Dec 28th 2020


Kubernetes cluster installation following the kubernetes.io installation guide

Table of Contents

  1. Description
  2. Setup - The basics of getting started with kubeinstall
  3. Usage - Configuration options and additional functionality
  4. Limitations - OS compatibility, etc.
  5. Development - Guide for contributing to the module


Description

The module can set up a Kubernetes control plane host (via the base profile kubeinstall::profile::controller) and a Kubernetes worker host (via the base profile kubeinstall::install::worker).

It supports automatic Kubernetes cluster setup using Puppet exported resources via PuppetDB.


Setup

What kubeinstall affects

The module installs Kubernetes components, including kubeadm and its configuration, for proper node bootstrap.

By default it also:

  • disables swap (see kubeinstall::system::swap),
  • disables firewalld (see kubeinstall::system::firewall::noop),
  • disables SELinux (see kubeinstall::system::selinux::noop),
  • sets kernel settings for iptables (see kubeinstall::system::sysctl::net_bridge),
  • installs Docker as the CRI (see kubeinstall::runtime::docker),
  • installs Calico as the CNI (see kubeinstall::install::calico),
  • installs the Kubernetes Dashboard UI on the controller (see kubeinstall::install::dashboard).
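
For reference, the iptables-related kernel settings are the standard prerequisites from the kubernetes.io installation guide; kubeinstall::system::sysctl::net_bridge is presumably managing the equivalent of:

net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1

These ensure that traffic passing through a Linux bridge is visible to iptables, which kube-proxy relies on.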

Setup Requirements

It requires the aursu/dockerinstall module (not published on Puppet Forge), which provides a set of Docker-related features.

Puppetfile setup:

mod 'dockerinstall',
  :git => 'https://github.com/aursu/puppet-dockerinstall.git',
  :tag => 'v0.9.1'

Beginning with kubeinstall


To use kubeinstall and set up your controller node, it is enough to create a Puppet profile like this:

class profile::kubernetes::controller {
  class { 'kubeinstall::profile::kubernetes': }
  class { 'kubeinstall::profile::controller': }
}

and for worker node:

class profile::kubernetes::worker {
  class { 'kubeinstall::profile::kubernetes': }
  class { 'kubeinstall::profile::worker': }
}

Settings can be adjusted via Hiera:

kubeinstall::cluster_name: projectname
kubeinstall::control_plane_endpoint: kube.intern.domain.tld

Cluster features

The class kubeinstall::cluster is responsible for the bootstrap token exchange between controller and worker nodes (for worker bootstrap). This requires PuppetDB, because an exported resource (kubeinstall::token_discovery) and an exported resources collector (implemented via the custom function kubeinstall::discovery_hosts) are in use.
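
The pattern in use is standard Puppet exported resources: one node declares a resource with the @@ prefix so it is stored in PuppetDB, and other nodes collect it with the <<| |>> collector. As a generic sketch (the resource title and the token parameter here are illustrative, not the module's actual interface — see REFERENCE.md):

# on the exporting node: store the resource in PuppetDB
@@kubeinstall::token_discovery { $facts['networking']['fqdn']:
  token => $bootstrap_token,
}

# on the collecting node: realize all exported resources of that type
Kubeinstall::Token_discovery <<| |>>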

There is also a feature for exporting local PersistentVolume resources from worker nodes into the controller directory /etc/kubernetes/manifests/persistentvolumes. To activate it, set the kubeinstall::cluster::cluster_role flag properly on both worker and controller hosts, and meet all the requirements for exporting PVs on the worker node.
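
Assuming kubeinstall::cluster::cluster_role takes the host's role as a string (an assumption — check REFERENCE.md for the accepted values), the per-role Hiera data might look like:

# hieradata for controller hosts
kubeinstall::cluster::cluster_role: controller

# hieradata for worker hosts
kubeinstall::cluster::cluster_role: worker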


See REFERENCE.md for the full reference.



Release Notes/Contributors/Etc.