rke2

RKE2, also known as RKE Government, is Rancher's next-generation Kubernetes distribution.


Version information

  • 2.0.0 (latest)
released Jul 3rd 2024
This version is compatible with:
  • Puppet Enterprise 2025.4.x, 2025.3.x, 2025.2.x, 2025.1.x, 2023.8.x, 2023.7.x, 2023.6.x, 2023.5.x, 2023.4.x, 2023.3.x, 2023.2.x, 2023.1.x, 2023.0.x, 2021.7.x, 2021.6.x, 2021.5.x, 2021.4.x, 2021.3.x, 2021.2.x, 2021.1.x, 2021.0.x
  • Puppet >= 7.0.0 < 9.0.0
  • AlmaLinux

Start using this module

  • r10k or Code Manager
  • Bolt
  • Manual installation
  • Direct download

Add this module to your Puppetfile:

mod 'lsst-rke2', '2.0.0'

Add this module to your Bolt project:

bolt module add lsst-rke2

Manually install this module globally with Puppet module tool:

puppet module install lsst-rke2 --version 2.0.0

Direct download is not typically how you would use a Puppet module to manage your infrastructure, but you may want to download the module in order to inspect the code.


rke2

Table of Contents

  1. Overview
  2. Description
  3. Usage - Configuration options and additional functionality
  4. Reference - An under-the-hood peek at what the module is doing and how

Overview

RKE2, also known as RKE Government, is Rancher's next-generation Kubernetes distribution.

Description

This module installs rke2 from packages (e.g., a yum repo) and configures the installation via config.yaml.
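The keys under rke2::config in hiera are written out to /etc/rancher/rke2/config.yaml. A minimal sketch of the resulting file is shown below; the field names (server, token, tls-san) are standard RKE2 configuration options, and all values are illustrative placeholders, not values from this module:

```yaml
# /etc/rancher/rke2/config.yaml -- illustrative sketch, not module output
server: https://cluster1.site1.example.com:9345  # join address of an existing server node
token: "<shared-cluster-secret>"                 # shared secret used by joining nodes
tls-san:
  - cluster1.site1.example.com                   # extra SAN on the server certificate
```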

[!IMPORTANT] The rspec-beaker tests time out / fail under GitHub Actions and are not part of an active workflow. The acceptance tests need to be run manually before merging PRs.

Usage

Example role defined via hiera.

---
lookup_options:
  rke2::config:
    merge:
      strategy: "deep"
      knockout_prefix: "--"
classes:
  - "rke2"
rke2::config:
  server: "https://%{::cluster}.%{::site}.example.com:9345"
  token: "ENC[PKCS7,...]"
  node-name: "%{facts.hostname}"
  tls-san:
    - "%{::cluster}.%{::site}.example.com"
  node-label:
    - "role=storage-node"
  disable:
    - "rke2-ingress-nginx"
  disable-cloud-controller: true

In this example, a DNS A/AAAA record for %{::cluster}.%{::site}.example.com is required.

If the cluster is being provisioned from scratch (in other words, when there are no pre-existing etcd instances), the server key will need to be manually deleted from /etc/rancher/rke2/config.yaml on one (and only one) node and the rke2-server service restarted. While this key could be knocked out on a single node via hiera, if the node without the server key is ever re-provisioned, it would create a new standalone cluster instance detached from the existing etcd instances.
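During a from-scratch bootstrap, the only difference between the bootstrap node and every other node is the presence of the server key. A sketch of the two resulting config.yaml files (the path comes from this module's description; hostnames and values are illustrative):

```yaml
# /etc/rancher/rke2/config.yaml on the single bootstrap node,
# after the server key has been deleted by hand and
# the rke2-server service restarted:
token: "<shared-cluster-secret>"
node-name: node01

# On every other node the server key stays in place:
# server: https://cluster1.site1.example.com:9345
```

Because Puppet continues to manage the file, a subsequent agent run will restore the server key on the bootstrap node; by then it simply points at the now-established cluster, which is the desired end state.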

Reference

See REFERENCE