

Install and manage Kubernetes clusters



5.0 quality score

We run a couple of automated scans to help you assess a module's quality. Each module is given a score based on how well the author has formatted their code and documentation, and modules are also checked for malware using VirusTotal.

Please note, the information below
is for guidance only and neither of
these methods should be considered
an endorsement by Puppet.

Version information

  • 0.41.1 (latest)
  • 0.40.1
  • 0.39.4
  • 0.26.0
  • 0.25.1
  • 0.25.0
  • 0.24.0
  • 0.18.1
  • 0.17.3
  • 0.16.8
  • 0.14.0
  • 0.13.0
  • 0.12.1
  • 0.12.0
  • 0.11.0
  • 0.10.3
  • 0.10.2 (deleted)
  • 0.10.1 (deleted)
  • 0.10.0
  • 0.8.0
  • 0.7.0
  • 0.3.1
  • 0.2.3
  • 0.2.2
  • 0.2.1
  • 0.2.0 (deleted)
released May 8th 2024
This version is compatible with:
  • Puppet Enterprise 2023.7.x, 2023.6.x, 2023.5.x, 2023.4.x, 2023.3.x, 2023.2.x, 2023.1.x, 2023.0.x, 2021.7.x, 2021.6.x, 2021.5.x, 2021.4.x, 2021.3.x, 2021.2.x, 2021.1.x, 2021.0.x, 2019.8.x, 2019.7.x, 2019.5.x, 2019.4.x, 2019.3.x, 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x
  • Puppet >= 5.5.0 < 9.0.0
  • cgroup2
  • containerd
  • setup

Start using this module

  • r10k or Code Manager
  • Bolt
  • Manual installation
  • Direct download

Add this module to your Puppetfile:

mod 'aursu-kubeinstall', '0.41.1'
Learn more about managing modules with a Puppetfile

Add this module to your Bolt project:

bolt module add aursu-kubeinstall
Learn more about using this module with an existing project

Manually install this module globally with Puppet module tool:

puppet module install aursu-kubeinstall --version 0.41.1

Direct download is not typically how you would use a Puppet module to manage your infrastructure, but you may want to download the module in order to inspect the code.



aursu/kubeinstall — version 0.41.1 May 8th 2024


Kubernetes cluster installation following the official installation guide

Table of Contents

  1. Description
  2. Setup - The basics of getting started with kubeinstall
  3. Usage - Configuration options and additional functionality
  4. Limitations - OS compatibility, etc.
  5. Development - Guide for contributing to the module


The module can set up a Kubernetes control plane host (using the base profile kubeinstall::profile::controller) or a Kubernetes worker host (using the base profile kubeinstall::profile::worker).

It supports automatic Kubernetes cluster setup using Puppet exported resources via PuppetDB.


What kubeinstall affects

The module installs Kubernetes components, including kubeadm and its configuration, for proper node bootstrap.

By default it also:

  • disables swap (see kubeinstall::system::swap)
  • disables firewalld (see kubeinstall::system::firewall::noop)
  • disables SELinux (see kubeinstall::system::selinux::noop)
  • sets kernel settings for iptables (see kubeinstall::system::sysctl::net_bridge)
  • installs CRI-O as the CRI (see kubeinstall::runtime::crio); the Docker CRI is also available via kubeinstall::runtime::docker
  • installs Calico as the CNI (see kubeinstall::install::calico)
  • installs the Kubernetes Dashboard UI on the controller (see kubeinstall::install::dashboard)
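For example, if Docker is preferred over the default CRI-O runtime, a profile could include the Docker runtime class mentioned above. This is a minimal sketch: the class name kubeinstall::runtime::docker comes from this README, but any additional wiring needed to suppress the CRI-O default is an assumption and not shown.

```puppet
# Hypothetical profile selecting the Docker CRI instead of the default CRI-O.
# Extra parameters that may be required to disable CRI-O are not shown here.
class profile::kubernetes::docker_runtime {
  class { 'kubeinstall::runtime::docker': }
}
```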

Setup Requirements

CentOS 7 operating system or similar.

Beginning with kubeinstall


To use kubeinstall and set up your controller node, it is enough to create a Puppet profile like this:

class profile::kubernetes::controller {
  class { 'kubeinstall::profile::kubernetes': }
  class { 'kubeinstall::profile::controller': }
}

and for a worker node:

class profile::kubernetes::worker {
  class { 'kubeinstall::profile::kubernetes': }
  class { 'kubeinstall::profile::worker': }
}

Settings can be configured via Hiera:

kubeinstall::cluster_name: projectname
kubeinstall::control_plane_endpoint: kube.intern.domain.tld

Cluster features

The class kubeinstall::cluster is responsible for bootstrap token exchange between controller and worker nodes (for worker bootstrap). PuppetDB is required for this, because an exported resource (kubeinstall::token_discovery) and an exported-resources collector (implemented via the custom function kubeinstall::discovery_hosts) are in use.
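Under the hood this relies on Puppet's standard exported-resource pattern: one node declares a resource with `@@` so it is stored in PuppetDB, and other nodes collect it with the `<<| |>>` collector. A schematic sketch follows; the actual parameters of kubeinstall::token_discovery are internal to the module, so the `cluster_name` parameter below is illustrative only.

```puppet
# On the controller: export a discovery record for this cluster
# (parameter names are illustrative, not the module's real API).
@@kubeinstall::token_discovery { $facts['networking']['fqdn']:
  cluster_name => 'projectname',
}

# On the workers: collect records exported for the same cluster.
Kubeinstall::Token_discovery <<| cluster_name == 'projectname' |>>
```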

There is also a feature for exporting local PersistentVolume resources from worker nodes into the controller directory /etc/kubectl/manifests/persistentvolumes. To activate it, set the flag kubeinstall::cluster::cluster_role appropriately on both worker and controller hosts, and provide everything required to export PVs on the worker node.





Release Notes/Contributors/Etc.