This version is compatible with:
- Puppet Enterprise 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x, 2017.3.x, 2017.2.x, 2017.1.x, 2016.5.x, 2016.4.x
- Puppet >= 4.7.0 < 6.12.0
- Setup - Start using this module
- Setup Requirements
- Usage - Configuration options and additional functionality
Before using this module, please make sure you are unable to use the built-in `puppet-backup` command to back up and restore your PE installation. An example scenario where you would be unable to use that command is moving to a new host OS that does not support the currently installed version of Puppet Enterprise.
If you are unable to use the `puppet-backup` command, you can use the `pe_migrate` module. This module provides a number of tasks that can be used to manually migrate a Puppet Enterprise installation.
With the steps provided in this module you'll prepare and move certs, database, classifier, configuration files, Puppet code, and Hiera data while keeping your current infrastructure up and running.
This module should be installed on the current Puppet Primary Server.
The tasks provided in this module require the `rsync` package. You can install it manually, or you can apply the `pe_migrate::prep` class to the Puppet Primary Server and the PE-PostgreSQL node (if you are utilizing an external PE-PostgreSQL node). This class sets up the Puppet Tools repository and installs `rsync`. Note that you will still need to manually install `rsync` on the target host.
Requirement: Use of this module also requires SSH access as `root` to the new host via an SSH key.
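A minimal sketch of preparing such a key, assuming a dedicated key pair; the key path, key type, and target FQDN are illustrative, not prescribed by the module:

```shell
# Generate a dedicated key pair for the migration (path and type are illustrative).
ssh-keygen -t ed25519 -N '' -f /tmp/pe_migrate_key -q
ls /tmp/pe_migrate_key.pub

# Then install the public key on the new host and verify non-interactive
# root access (placeholders left as-is, so these lines are commented out):
# ssh-copy-id -i /tmp/pe_migrate_key.pub root@<NEW PRIMARY SERVER FQDN>
# ssh -i /tmp/pe_migrate_key -o BatchMode=yes root@<NEW PRIMARY SERVER FQDN> true
```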
Please note: These tasks should not be used with PE 2019.4 or newer, as changes to PE-PuppetDB prevent the successful use of these tasks. We suggest migrating to PE 2019.2.1 and then performing an in-place upgrade to the latest version, double-checking for any upgrade cautions that apply to the current version of your PE installation.
Use of this module assumes that you're using the instance of PostgreSQL provided by PE and a default Hiera configuration with `hiera.yaml` in the default location.
If you are migrating from PE 2018.1 or older, you may need to remove the `mcollective_middleware_hosts` parameter from the PE Infrastructure node group before moving to PE 2019.x.
- PE 2019.0 and later no longer include MCollective. To prepare, migrate your MCollective work to Puppet orchestrator to automate tasks and create consistent, repeatable administrative processes.
While most of the migration can be automated with these tasks, there are a few manual steps you will need to perform. Please follow the guide below when migrating your PE installation. The tasks can be run from the command line as shown, or from the PE Console.
Step One: Back up the SSL directory.
```
puppet task run pe_migrate::ssl_backup backupdir=</BACKUP/DIR> --nodes <PRIMARY SERVER FQDN>
```
Step Two: Back up the databases.
```
puppet task run pe_migrate::db_backup backupdir=</BACKUP/DIR> --nodes <POSTGRESQL NODE FQDN>
```
Note: If you see the following error, you need to temporarily comment out `Defaults requiretty` in the sudoers file:

```
ERROR: sudo: sorry, you must have a tty to run sudo
```
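The edit itself is a one-line change; here is a demonstration on a sample file (on the real system, always edit sudoers through `visudo` rather than with `sed` in place):

```shell
# Sample file standing in for the real sudoers file.
printf 'Defaults    requiretty\n' > /tmp/sudoers.sample
# Comment out the requiretty directive; `&` re-inserts the matched text.
sed -i 's/^Defaults[[:space:]]*requiretty/# &/' /tmp/sudoers.sample
cat /tmp/sudoers.sample
```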
Step Three: Transfer the backups.
```
puppet task run pe_migrate::backup_transfer environment=<MODULE ENVIRONMENT> backupdir=</BACKUP/DIR> targetdir=</DIR/ON/TARGETHOST> targethost=<NEW PRIMARY SERVER FQDN> --nodes <PRIMARY SERVER FQDN>
```
Step Four: Restore the SSL directory.
```
puppet task run pe_migrate::restore_ssldir privatekey=</PATH/TO/PRIVATEKEY> targetdir=</DIR/ON/TARGETHOST> targethost=<NEW PRIMARY SERVER FQDN> --targets <PRIMARY SERVER FQDN>
```
Step Five: Install PE
- On the new primary server, install PE 2019.2.
Step Six: Restore the databases and classifier data.
On the new primary server, manually run the following bash script:
```
cd </path/to/backupdir>
bash restore_databases.sh
```
Step Seven: (Optional) Deactivate and clear certificates on your old infrastructure nodes.
This step is not required for the migration, but completing it deactivates infrastructure nodes in PuppetDB, deletes the old primary server's information cache, frees up licenses, and allows you to reuse hostnames on new nodes.
Warning: If your old primary server and the new primary server have the same certificate name, do not complete this step; it will delete your new primary server.
On the new primary server, run the following command:
```
puppet node purge <OLD PRIMARY SERVER CERTNAME> ; find /etc/puppetlabs/puppet/ssl -name <OLD PRIMARY SERVER CERTNAME>.pem -delete
```
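The `find ... -delete` half of that command can be previewed safely on a scratch directory before running it against the real SSL directory; the certname and layout below are illustrative:

```shell
# Scratch directory standing in for /etc/puppetlabs/puppet/ssl.
mkdir -p /tmp/ssl_demo/certs
touch /tmp/ssl_demo/certs/old-primary.example.com.pem \
      /tmp/ssl_demo/certs/keep.example.com.pem
# Delete only the old primary's cert, leaving everything else intact.
find /tmp/ssl_demo -name 'old-primary.example.com.pem' -delete
ls /tmp/ssl_demo/certs
```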
Step Eight: Manually migrate configuration files, Puppet code, and Hiera data.
Your deployment determines the specifics of how to migrate your configuration files, Puppet code, and Hiera data.
Common steps include:
- Edit `puppet.conf` on the new primary server to add customizations from your old deployment.
- If you use Code Manager or r10k, configure Code Manager or r10k to deploy code on the new primary server.
- If you don't use Code Manager or r10k, copy the contents of the code directory `/etc/puppetlabs/code/` to the new primary server.
- Move your Hiera data and copy your old `/etc/puppetlabs/puppet/hiera.yaml` to the new primary server.
- Copy classification customizations to the new installation.
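One common way to copy the code directory and Hiera data is `rsync` over SSH. A self-contained local sketch follows; the paths are stand-ins, and on the real systems the destination would be `root@<NEW PRIMARY SERVER FQDN>:/etc/puppetlabs/code/`:

```shell
# Local stand-ins for the code directory on the old and new primary.
mkdir -p /tmp/code_src/environments/production /tmp/code_dst
echo 'notify { "hello": }' > /tmp/code_src/environments/production/site.pp
# rsync -a preserves ownership, permissions, and symlinks; fall back to
# cp -a if rsync is not installed locally.
if command -v rsync >/dev/null; then
  rsync -a /tmp/code_src/ /tmp/code_dst/
else
  cp -a /tmp/code_src/. /tmp/code_dst/
fi
ls /tmp/code_dst/environments/production
```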
Step Nine: Configure your agents and regenerate compiler certificates.
Configure your new agents and compilers:
Point the agents at the new primary server. On each agent, update the `server` setting:

```
puppet config set server <NEW PRIMARY SERVER FQDN>
```
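`puppet config set` writes the setting into the agent's `puppet.conf`. The equivalent manual edit, sketched on a sample file (the real file is `/etc/puppetlabs/puppet/puppet.conf`; the hostnames are placeholders):

```shell
# Sample file standing in for /etc/puppetlabs/puppet/puppet.conf.
cat > /tmp/puppet.conf.sample <<'EOF'
[main]
server = old-primary.example.com
EOF
# Point the agent at the new primary server.
sed -i 's/^server = .*/server = new-primary.example.com/' /tmp/puppet.conf.sample
grep '^server' /tmp/puppet.conf.sample
```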
Regenerate certs for all compilers using our documentation for PE 2019.2, making sure to include `--allow-dns-alt-names` when signing the compiler's certificate request.
Upgrade your compilers to the same version as your new primary server. SSH into each compiler and run:

```
/opt/puppetlabs/puppet/bin/curl --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem https://<PRIMARY SERVER FQDN>:8140/packages/current/upgrade.bash | sudo bash
```
If you migrated to a newer version of PE, upgrade the agent nodes.
What are tasks?
Modules can contain tasks that take action outside of a desired state managed by Puppet. Tasks are perfect for troubleshooting or deploying one-off changes, distributing scripts to run across your infrastructure, or automating changes that need to happen in a particular order as part of an application deployment.
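For example, a shell task is simply an executable file in a module's `tasks/` directory, and task parameters are passed to shell tasks as `PT_<name>` environment variables. A hypothetical minimal task (the module and task names here are illustrative, not part of `pe_migrate`):

```shell
# Create a throwaway module layout containing one shell task.
mkdir -p /tmp/mymodule/tasks
cat > /tmp/mymodule/tasks/echo.sh <<'EOF'
#!/bin/bash
# Task parameters arrive as PT_<name> environment variables.
echo "message: $PT_message"
EOF
chmod +x /tmp/mymodule/tasks/echo.sh
# Simulate how the task runner would invoke it with message=hello:
PT_message=hello /tmp/mymodule/tasks/echo.sh
```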