Deploy a multi-node MicroK8s cluster on LXD VMs on a dedicated server



Version information

  • 1.2.0 (latest), released Dec 24th 2023
  • 1.1.0
  • 1.0.0
  • 0.1.0

This version is compatible with:
  • Puppet Enterprise 2023.7.x, 2023.6.x, 2023.5.x, 2023.4.x, 2023.3.x, 2023.2.x, 2023.1.x, 2021.7.x
  • Puppet >= 7.24 < 9.0.0

Start using this module

  • r10k or Code Manager
  • Bolt
  • Manual installation
  • Direct download

Add this module to your Puppetfile:

mod 'bahsardlaleh-microk8s', '1.2.0'

Learn more about managing modules with a Puppetfile

Add this module to your Bolt project:

bolt module add bahsardlaleh-microk8s

Learn more about using this module with an existing project

Manually install this module globally with the Puppet module tool:

puppet module install bahsardlaleh-microk8s --version 1.2.0

Direct download is not typically how you would use a Puppet module to manage your infrastructure, but you may want to download the module in order to inspect the code.



bahsardlaleh/microk8s — version 1.2.0 Dec 24th 2023


Welcome to my new module.

Table of Contents

  1. Description
  2. Setup - The basics of getting started with microk8s
  3. Usage - Configuration and customization options
  4. Limitations - OS compatibility, etc.
  5. Development - Guide for contributing to the module


Description

The microk8s module deploys a production-ready, multi-node MicroK8s cluster on LXD VMs on a dedicated server. This approach is helpful if you want a small, affordable Kubernetes cluster to take advantage of the automation and observability Kubernetes provides, but not if you need high availability, since all the nodes still run on a single physical machine.


Setup Requirements

  • This module was tested on an Ubuntu 20.04 dedicated server.
  • At least 32 GB of RAM and 4 CPU cores.
  • A public IPv4 address for your server.

Beginning with microk8s

In the following sections you will see how to define your cluster nodes' specifications and provision the cluster with minimal effort.


Usage

You can use this module either with the default values or by defining your own values.


This module already has default values defined for everything, as shown below. You can go with these if you don't wish to make any changes:

ipv4 address range:
number of nodes:           one master and 2 workers
memory per node:           8 GB
cpu per node:              2
disk per node:             60 GB
local NFS storage on host: enabled
master ipv4:
NFS shared folder:         /mnt/k8s_nfs_share

So you just have to include the module in your Puppet manifest:

include microk8s

Note: by default the module configures local NFS storage on the host at the path /mnt/k8s_nfs_share, which is better than using Kubernetes hostPath storage, but you can disable that as you will see in the section below.
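If you only want to change one or two scalar defaults, the class parameters can also be set through Puppet's automatic Hiera data binding instead of a resource-like class declaration. A minimal sketch, assuming a standard control-repo Hiera layout (the file path is illustrative; the parameter names are the ones used by this module):

```yaml
# data/common.yaml (hypothetical Hiera layer in your control repo)
microk8s::local_nfs_storage: false
microk8s::enable_host_ufw: true
```

With these keys in place, a plain `include microk8s` picks up the overrides. Note this does not help with the nodes parameter: as described below, it is an array of hashes, so changing any node value still means supplying the whole array.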


If you wish to customize the defaults, you can pass them to the main class microk8s:

class { 'microk8s':
  ipv4_address_cidr => '',
  ipv6_address_cidr => 'fd42:81d2:b869:f61c::1/64',
  nodes             => [
    {
      'vm_name'      => 'master',
      'ipv4_address' => '',
      'memory'       => '8GB',
      'cpu'          => '2',
      'disk'         => '60GiB',
      'passwd'       => '$1$SaltSalt$YhgRYajLPrYevs14poKBQ0',
      'master'       => true,
    },
    {
      'vm_name'      => 'worker1',
      'ipv4_address' => '',
      'memory'       => '8GB',
      'cpu'          => '2',
      'disk'         => '60GiB',
      'passwd'       => '$1$SaltSalt$YhgRYajLPrYevs14poKBQ0',
    },
    {
      'vm_name'      => 'worker2',
      'ipv4_address' => '',
      'memory'       => '8GB',
      'cpu'          => '2',
      'disk'         => '60GiB',
      'passwd'       => '$1$SaltSalt$YhgRYajLPrYevs14poKBQ0',
    },
  ],
  local_nfs_storage => true,
  master_ip         => '',
  master_name       => 'master',
  nfs_shared_folder => '/mnt/k8s_nfs_share',
  enable_host_ufw   => false,
  kubectl_user      => 'ubuntu',
  kubectl_user_home => '/home/ubuntu',
}

Note: the node specifications are passed as an array of hashes, so if you wish to change even one value you'll have to pass all the other values inside the nodes array along with it.

Note: the 'passwd' parameter is the hashed password for the ubuntu user created inside the LXD VM, which you probably won't need since you can exec into the VM from the host without it.
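The hash in the example above is an MD5-crypt string (the `$1$...` format). One way to generate your own (a sketch, assuming openssl is available on the host; 'SaltSalt' and 'mysecret' are placeholders to replace with your own salt and password):

```shell
# Produce an MD5-crypt hash suitable for the 'passwd' parameter.
# The same salt and password always yield the same hash.
openssl passwd -1 -salt SaltSalt mysecret
```

The output starts with `$1$SaltSalt$` followed by the hash; paste the whole string into the 'passwd' value.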


Limitations

For a list of supported operating systems, see metadata.json.


Development

If you wish to contribute to this project, you can submit a pull request to the repo.

Important Notes

  1. You must run Puppet commands on agents as root; a sudo user is not enough.
  2. If you have multiple environments on the Puppet server, you might need to run puppet generate types on the server so you don't get errors with the 'archive' package.
  3. You can ignore any warnings related to iptables rules persistence.
Some ideas for contribution:

  1. Currently the LXD profiles used only work for VMs; you could make them work for containers, which is better for local development use.
  2. Add parameters for enabling the most popular Kubernetes addons like Prometheus, ELK, etc.