clone_army
Version information
This version is compatible with:
- Puppet Enterprise 2019.8.x, 2019.7.x, 2019.5.x, 2019.4.x, 2019.3.x, 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x
- Puppet >= 5.4.0 < 7.0.0
Start using this module
Add this module to your Puppetfile:
mod 'sharpie-clone_army', '0.2.1'
Learn more about managing modules with a Puppetfile.
Puppet Clone Army
This module configures a Linux node running SystemD to host a fleet of puppet-agent clones running inside containers. This sort of clone deployment is useful for applying load to Puppet infrastructure servers during testing and development.
This module currently only supports PE running on RedHat 7. Support for other operating systems and Open Source Puppet may be added in a future release.
The clones run inside systemd-nspawn containers that use OverlayFS to share a common puppet-agent install with container-local modifications. This containerization strategy allows experimental patches to be applied to the fleet as a whole, or to individual agents, by modifying files on the host filesystem. The use of SystemD containers also allows both the puppet and pxp-agent services to run, and to run "as root" with full functionality.
Setup
Although not required, you may want to configure your Puppet Server with a liberal autosigning policy:
# cat /etc/puppetlabs/puppet/autosign.conf
*
This is insecure, but can save time in a development environment by eliminating the need to approve certificates created by the clones.
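If autosigning is not appropriate, pending certificate requests can instead be signed in bulk after the clones check in. For example, on masters that ship the puppetserver ca CLI (Puppet 6 / PE 2019):
puppetserver ca sign --all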
Usage
To use the module, classify an agent node with the clone_army class. bolt apply may also be used, and requires a value to be set for the master parameter of the clone_army class:
bolt apply -e 'class {"clone_army": master => "<certname of your master>"}'
The num_clones parameter may be used to specify the number of clones to create on the agent. By default, the module takes the total RAM, subtracts an allotment for the host puppet-agent and OS, and then divides the remainder by 150 MB to determine the number of clones to create.
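For example, a hypothetical invocation that pins the fleet at 10 clones instead of relying on the RAM-based default:
bolt apply -e 'class {"clone_army": master => "<certname of your master>", num_clones => 10}'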
Interacting with Clones
Individual clones can be controlled using the puppet-clone-army@<clone name> service template:
systemctl start puppet-clone-army@clone1
systemctl stop puppet-clone-army@clone1
The entire fleet of clones hosted by a particular node can be controlled using the puppet-clone-army.target unit:
systemctl start puppet-clone-army.target
systemctl stop puppet-clone-army.target
machinectl can be used to list running clones, as well as to gather the status of services in an individual clone:
machinectl list
machinectl status clone1
machinectl can also be used to open a shell on a clone:
machinectl login clone1
The password for the root user is set to puppetlabs, and typing Ctrl-] three times will close shells created by machinectl login. SELinux may have to be set to permissive mode to prevent it from denying access to machinectl login.
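For example, permissive mode can be enabled until the next reboot with:
setenforce 0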
Editing Filesystems Used by Clones
The module provisions a base OS image under /var/lib/puppet-clone-army/<base name> and then creates an OverlayFS mount for each clone under /var/lib/machines/<clone name> that consists of the base image as a lower layer followed by /var/lib/puppet-clone-army/<clone name>-overlay as an upper layer. Edits to the base image will affect the entire fleet of clones, while edits to <clone name>-overlay will affect only one specific clone.
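Conceptually, each clone's mount is equivalent to an OverlayFS mount of roughly the following shape (the module manages these through SystemD mount units; the workdir path is an assumption shown only for illustration, since OverlayFS requires one):
mount -t overlay overlay \
  -o lowerdir=/var/lib/puppet-clone-army/<base name>,upperdir=/var/lib/puppet-clone-army/<clone name>-overlay,workdir=<scratch dir> \
  /var/lib/machines/<clone name>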
The <base name> and <clone name>-overlay filesystems should not be edited while clones are running, as this is undefined behavior for OverlayFS. At best, the edits will not be visible to the clones.
Stopping a clone by using puppet-clone-army@<clone name>, or all clones by using puppet-clone-army.target, will automatically unmount the overlay filesystems and allow for edits to be done safely. Starting the units will remount the overlays.
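As a sketch, a safe edit to a single clone might look like the following (the file being edited is only an illustration):
systemctl stop puppet-clone-army@clone1
vi /var/lib/puppet-clone-army/clone1-overlay/etc/puppetlabs/puppet/puppet.conf
systemctl start puppet-clone-army@clone1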
Notes
This module is based on some prior art:
- The puppetlabs/clamps module, which creates a similar setup, but with user accounts and cron instead of containers and running services.
- Julia Evans' amazing blog posts: https://jvns.ca/blog/2019/11/18/how-containers-work--overlayfs/
Changelog
All notable changes to this project will be documented in this file.
0.2.1 - 2020-01-06
Fixed
- puppet-agent is installed to base images after networking has been configured.
- Partial fix for the selinux-policy-targeted package causing SELinux to block container operations.
0.2.0 - 2020-01-04
Changed
- The OverlayFS mounts used by clones are now controlled by SystemD mount units instead of Puppet mount resources. This ensures the filesystems are automatically mounted when clones start and ejected when clones are stopped, which allows for safe edits to the layers in the overlay.
- Clones are no longer enforced to be in a running state.
Added
- A puppet-clone-army.target unit that can be used to start, stop, or restart all clones at once.
- DNS behavior in clones is synced with the host by copying /etc/resolv.conf.
- Hostnames of the clones are set to be <clone name>.<host certname>.
Fixed
- A symlink is no longer used for /etc/hosts.
0.1.0 - 2020-01-03
Added
- A clone_army profile with support for running puppet-agent clones in systemd-nspawn containers on RedHat 7.
Dependencies
- puppetlabs-stdlib (>= 4.19.0 < 7.0.0)