patching
Version information
This version is compatible with:
- Puppet Enterprise 2019.8.x, 2019.7.x, 2019.5.x, 2019.4.x, 2019.3.x, 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x, 2017.3.x, 2017.2.x, 2016.4.x
- Puppet >= 4.10.0 < 7.0.0
Tasks:
- available_updates
- cache_remove
- cache_update
- history
Plans:
- patching
- available_updates
- check_online
- check_puppet
Start using this module
Add this module to your Puppetfile:
mod 'encore-patching', '0.4.0'
Learn more about managing modules with a Puppetfile
patching
Table of Contents
- Description
- Setup
- Architecture
- Design
- Patching Workflow
- Usage
- Configuration Options
- Reference
- Limitations
- Development
- Contributors
Description
A framework for building patching workflows. This module is designed to be used as building blocks for complex patching environments of Windows and Linux (RHEL, Ubuntu) systems.
No Puppet agent is required on the end nodes. The node executing the patching will need to have bolt installed.
Setup
Setup Requirements
This module makes heavy use of bolt; you'll need to install it to get started. Install instructions are here.
If you want to use the patching::snapshot_vmware plan/function, you'll need the rbvmomi gem installed in the bolt ruby environment:
/opt/puppetlabs/bolt/bin/gem install --user-install rbvmomi
Quick Start
cat << EOF >> ~/.puppetlabs/bolt/Puppetfile
mod 'puppetlabs/stdlib'
mod 'encore/patching'
EOF
bolt puppetfile install
bolt plan run patching::available_updates --nodes group_a
# install rbvmomi for VMware snapshot support
/opt/puppetlabs/bolt/bin/gem install --user-install rbvmomi
Architecture
This module is designed to work in enterprise patching environments.
Assumptions:
- RHEL nodes are registered to Satellite / Foreman or the internet
- Ubuntu nodes are registered to Landscape or the internet
- Windows nodes are registered to WSUS and Chocolatey (optional)
Registration to a central patching server is preferred for speed of software downloads and control of phased patching promotions.
At some point in the future we will include tasks and plans to promote patches through these central patching server tools.
Design
patching is designed around bolt tasks and plans.
Individual tasks have been written to accomplish targeted steps in the patching process.
Example: patching::available_updates is used to check for available updates on target nodes.
Plans are then used to pretty up output and tie tasks together.
This way end users can use the tasks and plans as building blocks to create their own custom patching workflows (we all know there is no such thing as one size fits all).
For more info on tasks and plans, see the Usage and Reference sections.
Going further, many of the settings for the plans are configurable by setting vars on your groups in the bolt inventory file.
For more info on customizing settings using vars, see the Configuration Options section.
Patching Workflow
Our default patching workflow is implemented in the patching plan patching/init.pp.
This workflow consists of the following phases:
- Organize inventory into groups, in the proper order required for patching
- For each group...
- Check for available updates
- Disable monitoring
- Snapshot the VMs
- Pre-patch custom tasks
- Update the host (patch)
- Post-patch custom tasks
- Reboot hosts that require a reboot
- Delete snapshots
- Enable monitoring
Usage
Check for available updates
This will reach out to all nodes in group_a
in your inventory and check for any available
updates through the system's package manager:
- RHEL = yum
- Ubuntu = apt
- Windows = Windows Update + Chocolatey (if installed)
bolt plan run patching::available_updates --nodes group_a
Disable monitoring
Prior to performing the snapshotting and patching steps, the plan will disable monitoring alerts in SolarWinds (by default).
This plan/task utilizes the remote transport.
bolt plan run patching::monitoring_solarwinds --nodes group_a action='disable' monitoring_target=solarwinds
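The SolarWinds endpoint itself is typically described as a remote-transport target in the bolt inventory file. A minimal sketch (the group name, URI, and credential option names here are illustrative assumptions, not confirmed by the module docs):

```yaml
groups:
  - name: solarwinds
    targets:
      - name: solarwinds
        # Hypothetical SolarWinds API endpoint
        uri: https://solarwinds.domain.tld:17778
    config:
      transport: remote
      remote:
        # Option names passed through to the task; check the task metadata
        username: 'svc_patching'
        password: 'CHANGEME'
```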
Create snapshots
This plan will snapshot all of the hosts in VMware. The name of the VM in VMware is assumed to be the uri of the node in the inventory file.
/opt/puppetlabs/bolt/bin/gem install rbvmomi
bolt plan run patching::snapshot_vmware --nodes group_a action='create' vsphere_host='vsphere.domain.tld' vsphere_username='xyz' vsphere_password='abc123' vsphere_datacenter='dctr1'
Perform pre-patching checks and actions
This plan is designed to perform custom service checks and shutdown actions before applying patches to a node.
If you have custom actions that need to be performed prior to patching, place them in the pre_update scripts and this plan will execute them.
Best practice is to define and distribute these scripts as part of your normal Puppet code, as part of the role for that node.
bolt plan run patching::pre_update --nodes group_a
By default this executes the following scripts (nodes where the script doesn't exist are ignored):
- Linux =
/opt/patching/bin/pre_update.sh
- Windows =
C:\ProgramData\patching\pre_update.ps1
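As an illustration, a pre-update script might stop an application service before patches are applied. The sketch below is hypothetical (the service name custom_app is made up); a real script would call systemctl or its Windows equivalent:

```shell
#!/usr/bin/env bash
# Hypothetical /opt/patching/bin/pre_update.sh sketch.
# "custom_app" is an illustrative service name, not part of the module.
set -euo pipefail

stop_app() {
  # A real script would run something like: systemctl stop custom_app
  echo "custom_app stopped"
}

stop_app
```

Because the plan ignores nodes where the script is missing, you only need to deploy it to the hosts that actually require a custom shutdown step.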
Deploying pre/post patching scripts
An easy way to deploy pre/post patching scripts is via the patching Puppet manifest or the patching::script resource.
Using the patching class:
class {'patching':
  scripts => {
    'pre_patch.sh' => {
      content => template('mymodule/patching/custom_app_pre_patch.sh'),
    },
    'post_patch.sh' => {
      source => 'puppet:///mymodule/patching/custom_app_post_patch.sh',
    },
  },
}
Via patching::script resources:
patching::script { 'custom_app_pre_patch.sh':
content => template('mymodule/patching/custom_app_pre_patch.sh'),
}
patching::script { 'custom_app_post_patch.sh':
source => 'puppet:///mymodule/patching/custom_app_post_patch.sh',
}
Or via Hiera:
patching::scripts:
custom_app_pre_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_pre_patch.sh'
custom_app_post_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_post_patch.sh'
Run the full patching workflow end-to-end
Organize the inventory into groups:
patching::ordered_groups
Then, for each group:
patching::cache_updates
patching::available_updates
patching::snapshot_vmware action='create'
patching::pre_update
patching::update
patching::post_update
patching::reboot_required
patching::snapshot_vmware action='delete'
bolt plan run patching --nodes group_a
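The group ordering comes from the patching_order var set on each inventory group. A sketch of what that might look like (group names and targets are illustrative):

```yaml
groups:
  - name: patch_first
    vars:
      patching_order: 1
    targets:
      - db01.domain.tld
  - name: patch_second
    vars:
      patching_order: 2
    targets:
      - app01.domain.tld
```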
Configuration Options
This module allows many aspects of its runtime to be customized using configuration options in the inventory file.
For details on all of the available configuration options, see REFERENCE_CONFIGURATION.md.
Example: Let's say we want to prevent some nodes from rebooting during patching.
This can be customized with the patching_reboot_strategy variable in inventory:
groups:
- name: no_reboot_nodes
vars:
patching_reboot_strategy: 'never'
targets:
- abc123.domain.tld
- def4556.domain.tld
Reference
See REFERENCE.md
Limitations
This module has been tested on the following operating systems:
- Windows
- 2008
- 2012
- 2016
- RHEL
- 6
- 7
- 8
- Ubuntu
- 16.04
- 18.04
Development
See DEVELOPMENT.md
Contributors
Reference
Table of Contents
Classes
patching
: allows global customization of the patching resources
patching::params
: params for the patching module resources
Defined types
patching::script
: manages a script for custom patching actions
Functions
patching::snapshot_vmware
: Creates/deletes snapshots on VMs using the VMware vSphere API.
patching::target_names
: Returns an array of names, one for each target, based on the $name_property
Tasks
available_updates
: Collects information about available updates on a target system
available_updates_linux
: Collects information about available updates on a target system
available_updates_windows
: Collects information about available updates on a target system
cache_remove
: Removes/clears the target's update cache. For RHEL/CentOS this means a yum clean all. For Debian this means an apt update. For Windows this means a Windows Update refresh.
cache_remove_linux
: Removes/clears the target's update cache. For RHEL/CentOS this means a yum clean all. For Debian this means an apt update. For Windows this means a Windows Update refresh.
cache_remove_windows
: Removes/clears the target's update cache. For RHEL/CentOS this means a yum clean all. For Debian this means an apt update. For Windows this means a Windows Update refresh.
cache_update
: Updates the target's update cache. For RHEL/CentOS this means a yum makecache fast. For Debian this means an apt update. For Windows this means a Windows Update refresh.
cache_update_linux
: Updates the target's update cache. For RHEL/CentOS this means a yum makecache fast. For Debian this means an apt update. For Windows this means a Windows Update refresh.
cache_update_windows
: Updates the target's update cache. For RHEL/CentOS this means a yum makecache fast. For Debian this means an apt update. For Windows this means a Windows Update refresh.
history
: Reads the update history from the JSON 'result_file'.
monitoring_solarwinds
: Enable or disable monitoring alerts on hosts in SolarWinds.
post_update
: Run post-update script on target host(s), only if it exists. If the script doesn't exist or isn't executable, then this task succeeds (this allows us to run this task on all hosts, even if they don't have a post-update script).
pre_post_update_linux
: Pre-post-update definition to make bolt not throw a warning. Best to use pre_update or post_update directly.
pre_post_update_windows
: Pre-post-update definition to make bolt not throw a warning. Best to use pre_update or post_update directly.
pre_update
: Run pre-update script on target host(s), only if it exists. If the script doesn't exist or isn't executable, then this task succeeds (this allows us to run this task on all hosts, even if they don't have a pre-update script).
puppet_facts
: Gather system facts using 'puppet facts'. Puppet agent MUST be installed for this to work.
reboot_required
: Checks if a reboot is pending
reboot_required_linux
: Checks if a reboot is pending
reboot_required_windows
: Checks if a reboot is pending
update
: Execute OS updates on the target. For RedHat/CentOS this runs yum update. For Debian/Ubuntu this runs apt upgrade. For Windows this runs Windows Update and choco update.
update_history
: Reads the update history from the JSON 'result_file'.
update_history_linux
: Reads the update history from the JSON 'result_file'.
update_history_windows
: Reads the update history from the JSON 'result_file'.
update_linux
: Execute OS updates on the target. For RedHat/CentOS this runs yum update. For Debian/Ubuntu this runs apt upgrade. For Windows this runs Windows Update and choco update.
update_windows
: Execute OS updates on the target. For RedHat/CentOS this runs yum update. For Debian/Ubuntu this runs apt upgrade. For Windows this runs Windows Update and choco update.
Plans
patching
: Our generic and semi-opinionated workflow.
patching::available_updates
: Checks all nodes for available updates reported by their Operating System.
patching::check_online
: Checks each node to see if they're online.
patching::check_puppet
: Checks each node to see if Puppet is installed, then gathers facts on all nodes.
patching::deploy_scripts
:
patching::get_targets
: get_targets() except it also performs online checks and gathers facts in one step.
patching::monitoring_solarwinds
: Enable or disable monitoring alerts on hosts in SolarWinds.
patching::ordered_groups
: Takes a set of targets then groups and sorts them by the patching_order var set on the target.
patching::post_update
: Executes a custom post-update script on each node.
patching::pre_post_update
: Common entry point for executing the pre/post update custom scripts
patching::pre_update
: Executes a custom pre-update script on each node.
patching::puppet_facts
: Plan that runs 'puppet facts' on the nodes and sets them as facts on the Target objects.
patching::reboot_required
: Queries a node's operating system to determine if a reboot is required and then reboots the nodes that require rebooting.
patching::snapshot_vmware
: Creates or deletes VM snapshots on nodes in VMware.
patching::update_history
: Collect update history from the results JSON file on the targets
Classes
patching
allows global customization of the patching resources
Examples
Basic usage
include patching
Customizing script location
class {'patching':
bin_dir => '/my/custom/patching/scripts',
}
Customizing the owner/group/mode of the scripts
class {'patching':
owner => 'svc_patching',
group => 'svc_patching',
mode => '0700',
}
Customizing from hiera
patching::bin_dir: '/my/custom/app/patching/dir'
patching::owner: 'svc_patching'
patching::group: 'svc_patching'
patching::mode: '0700'
Deploying scripts from hiera
patching::scripts:
custom_app_pre_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_pre_patch.sh'
custom_app_post_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_post_patch.sh'
Parameters
The following parameters are available in the patching
class.
patching_dir
Data type: Any
Global directory as the base for bin_dir
and log_dir
Default value: $patching::params::patching_dir
bin_dir
Data type: Any
Global directory where the scripts will be installed
Default value: $patching::params::bin_dir
log_dir
Data type: Any
Directory where log files will be written during patching
Default value: $patching::params::log_dir
owner
Data type: Any
Default owner of installed scripts
Default value: $patching::params::owner
group
Data type: Any
Default group of installed scripts
Default value: $patching::params::group
mode
Data type: Any
Default file mode of installed scripts
Default value: $patching::params::mode
scripts
Data type: Optional[Hash]
Hash of script resources to instantiate. Useful for declaring script installs from hiera.
Default value: undef
patching::params
params for the patching module resources
Defined types
patching::script
manages a script for custom patching actions
Examples
Basic usage from static file
include patching
patching::script { 'pre_patch.sh':
source => 'puppet:///mymodule/patching/custom_app_pre_patch.sh',
}
Basic usage from template
include patching
patching::script { 'pre_patch.sh':
content => template('mymodule/patching/custom_app_pre_patch.sh'),
}
Installing the script into a different path with a different name
include patching
patching::script { 'custom_app_pre_patch.sh':
content => template('mymodule/patching/custom_app_pre_patch.sh'),
bin_dir => '/my/custom/app/patching/dir',
}
Installing multiple scripts into a different path
class {'patching':
bin_dir => '/my/custom/app/patching/dir',
}
# we don't have to override bin_dir on each of these because
# we configured it globally in the patching class above
patching::script { 'custom_app_pre_patch.sh':
content => template('mymodule/patching/custom_app_pre_patch.sh'),
}
patching::script { 'custom_app_post_patch.sh':
content => template('mymodule/patching/custom_app_post_patch.sh'),
}
From hiera
patching::bin_dir: '/my/custom/app/patching/dir'
patching::scripts:
custom_app_pre_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_pre_patch.sh'
custom_app_post_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_post_patch.sh'
Parameters
The following parameters are available in the patching::script
defined type.
source
Data type: Any
Source (puppet path) for the file
resource of the script.
Either source or content must be specified. If neither is specified, an error will be thrown.
Default value: undef
content
Data type: Any
Content (raw string, result of template()
, etc) for the file
resource of the script.
Either source or content must be specified. If neither is specified, an error will be thrown.
Default value: undef
bin_dir
Data type: Any
Directory where the script will be installed
Default value: $patching::bin_dir
owner
Data type: Any
Owner of the script file
Default value: $patching::owner
group
Data type: Any
Group of the script file
Default value: $patching::group
mode
Data type: Any
File mode to set on the script
Default value: $patching::mode
Functions
patching::snapshot_vmware
Type: Ruby 4.x API
Creates/deletes snapshots on VMs using the VMware vSphere API.
patching::snapshot_vmware(Array $vm_names, String $snapshot_name, String $vsphere_host, String $vsphere_username, String $vsphere_password, String $vsphere_datacenter, Optional[Boolean] $vsphere_insecure, Optional[String] $snapshot_description, Optional[Boolean] $snapshot_memory, Optional[Boolean] $snapshot_quiesce, Optional[String] $action)
Creates/deletes snapshots on VMs using the VMware vSphere API.
Returns: Array
Results from the snapshot create/delete tasks
vm_names
Data type: Array
Array of VM names to create/delete snapshots on
snapshot_name
Data type: String
Name of the snapshot to create/delete
vsphere_host
Data type: String
Hostname/IP of the vSphere server
vsphere_username
Data type: String
Username to use for authenticating to vSphere
vsphere_password
Data type: String
Password to use for authenticating to vSphere
vsphere_datacenter
Data type: String
Datacenter in vSphere to use when searching for VMs
vsphere_insecure
Data type: Optional[Boolean]
Flag to enable HTTPS without SSL verification
snapshot_description
Data type: Optional[String]
Description of the snapshot, when creating.
snapshot_memory
Data type: Optional[Boolean]
Snapshot the VMs memory, when creating.
snapshot_quiesce
Data type: Optional[Boolean]
Quiesce/flush the VMs filesystem when creating the snapshot
action
Data type: Optional[String]
Action to perform on the snapshot, 'create' or 'delete'
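Pieced together from the signature above, a call from within a plan might look like the following sketch (all connection values are illustrative, not real defaults):

```puppet
# Hypothetical invocation; positional arguments follow the signature above.
$results = patching::snapshot_vmware(
  ['vm1.domain.tld', 'vm2.domain.tld'], # vm_names
  'pre-patching',                       # snapshot_name
  'vsphere.domain.tld',                 # vsphere_host
  'svc_patching',                       # vsphere_username
  'CHANGEME',                           # vsphere_password
  'dctr1',                              # vsphere_datacenter
  true,                                 # vsphere_insecure
  'Snapshot before patching',           # snapshot_description
  false,                                # snapshot_memory
  true,                                 # snapshot_quiesce
  'create',                             # action
)
```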
patching::target_names
Type: Puppet Language
Returns an array of names, one for each target, based on the $name_property
patching::target_names(TargetSpec $targets, Enum['name', 'uri'] $name_property)
The patching::target_names function.
Returns: Array[String]
Array of names, one for each target
targets
Data type: TargetSpec
List of targets to extract the name from
name_property
Data type: Enum['name', 'uri']
Property in the Target to use as the name
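A sketch of how this function might be used inside a custom plan (the plan name mymodule::report_names is hypothetical):

```puppet
plan mymodule::report_names (
  TargetSpec $nodes,
) {
  $targets = get_targets($nodes)
  # Extract the 'name' property from each target
  $names = patching::target_names($targets, 'name')
  out::message("Patching targets: ${names.join(', ')}")
}
```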
Tasks
available_updates
Collects information about available updates on a target system
Supports noop? true
Parameters
provider
Data type: Optional[String[1]]
What update provider to use. For Linux (RHEL, Debian, etc) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
available_updates_linux
Collects information about available updates on a target system
Supports noop? true
Parameters
provider
Data type: Optional[String[1]]
What update provider to use. For Linux (RHEL, Debian, etc) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
available_updates_windows
Collects information about available updates on a target system
Supports noop? true
Parameters
provider
Data type: Optional[String[1]]
What update provider to use. For Linux (RHEL, Debian, etc) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
cache_remove
Removes/clears the target's update cache. For RHEL/CentOS this means a yum clean all. For Debian this means an apt update. For Windows this means a Windows Update refresh.
Supports noop? false
cache_remove_linux
Removes/clears the target's update cache. For RHEL/CentOS this means a yum clean all. For Debian this means an apt update. For Windows this means a Windows Update refresh.
Supports noop? false
cache_remove_windows
Removes/clears the target's update cache. For RHEL/CentOS this means a yum clean all. For Debian this means an apt update. For Windows this means a Windows Update refresh.
Supports noop? false
cache_update
Updates the target's update cache. For RHEL/CentOS this means a yum clean expire-cache. For Debian this means an apt update. For Windows this means a Windows Update refresh.
Supports noop? true
cache_update_linux
Updates the target's update cache. For RHEL/CentOS this means a yum makecache fast. For Debian this means an apt update. For Windows this means a Windows Update refresh.
Supports noop? true
cache_update_windows
Updates the target's update cache. For RHEL/CentOS this means a yum makecache fast. For Debian this means an apt update. For Windows this means a Windows Update refresh.
Supports noop? true
history
Reads the update history from the JSON 'result_file'.
Supports noop? false
Parameters
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. This is data that was written by patching::update. If no script name is passed on Linux hosts a default is used: /var/log/patching.json. If no script name is passed on Windows hosts a default is used: C:/ProgramData/PuppetLabs/patching/patching.json
monitoring_solarwinds
Enable or disable monitoring alerts on hosts in SolarWinds.
Supports noop? true
Parameters
nodes
Data type: Array[String[1]]
List of hostnames or IP addresses for nodes in SolarWinds that will have monitoring alerts either enabled or disabled.
action
Data type: Enum['enable', 'disable']
Action to perform on monitored nodes. 'enable' will enable monitoring alerts. 'disable' will disable monitoring alerts on nodes.
post_update
Run post-update script on target host(s), only if it exists. If the script doesn't exist or isn't executable, then this task succeeds (this allows us to run this task on all hosts, even if they don't have a post-update script).
Supports noop? true
Parameters
script
Data type: Optional[String[1]]
Absolute path of the script to execute. If no script name is passed on Linux hosts a default is used: /opt/patching/bin/post_update.sh. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/bin/post_update.ps1.
pre_post_update_linux
Pre-post-update definition to make bolt not throw a warning. Best to use pre_update or post_update directly.
Supports noop? true
Parameters
script
Data type: Optional[String[1]]
Absolute path of the script to execute. If no script name is passed on Linux hosts a default is used: /opt/patching/bin/pre_update.sh. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/bin/pre_update.ps1.
pre_post_update_windows
Pre-post-update definition to make bolt not throw a warning. Best to use pre_update or post_update directly.
Supports noop? true
Parameters
script
Data type: Optional[String[1]]
Absolute path of the script to execute. If no script name is passed on Linux hosts a default is used: /opt/patching/bin/pre_update.sh. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/bin/pre_update.ps1.
pre_update
Run pre-update script on target host(s), only if it exists. If the script doesn't exist or isn't executable, then this task succeeds (this allows us to run this task on all hosts, even if they don't have a pre-update script).
Supports noop? true
Parameters
script
Data type: Optional[String[1]]
Absolute path of the script to execute. If no script name is passed on Linux hosts a default is used: /opt/patching/bin/pre_update.sh. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/bin/pre_update.ps1.
puppet_facts
Gather system facts using 'puppet facts'. Puppet agent MUST be installed for this to work.
Supports noop? false
reboot_required
Checks if a reboot is pending
Supports noop? false
reboot_required_linux
Checks if a reboot is pending
Supports noop? false
reboot_required_windows
Checks if a reboot is pending
Supports noop? false
update
Execute OS updates on the target. For RedHat/CentOS this runs yum update. For Debian/Ubuntu this runs apt upgrade. For Windows this runs Windows Update and choco update.
Supports noop? false
Parameters
provider
Data type: Optional[String[1]]
What update provider to use. For Linux (RHEL, Debian, etc) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
names
Data type: Optional[Array[String]]
Name of the package(s) to update. If nothing is passed then all packages will be updated. Note: this currently only works for Linux, Windows support will be added in the future for both Windows Update and Chocolatey (TODO)
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. The data is written to a log file so that you can collect it later by running patching::history. If no script name is passed on Linux hosts a default is used: /var/log/patching.json. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/patching.json
log_file
Data type: Optional[String[1]]
Log file for OS specific output during the patching process. This file will contain OS specific (RHEL/CentOS = yum history, Debian/Ubuntu = /var/log/apt/history.log, Windows = ??) data that this task used to generate its output. If no script name is passed on Linux hosts a default is used: /var/log/patching.log. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/patching.log
update_history
Reads the update history from the JSON 'result_file'.
Supports noop? false
Parameters
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. This is data that was written by patching::update. If no script name is passed on Linux hosts a default is used: /var/log/patching.json. If no script name is passed on Windows hosts a default is used: C:/ProgramData/PuppetLabs/patching/patching.json
update_history_linux
Reads the update history from the JSON 'result_file'.
Supports noop? false
Parameters
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. This is data that was written by patching::update. If no script name is passed on Linux hosts a default is used: /var/log/patching.json. If no script name is passed on Windows hosts a default is used: C:/ProgramData/PuppetLabs/patching/patching.json
update_history_windows
Reads the update history from the JSON 'result_file'.
Supports noop? false
Parameters
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. This is data that was written by patching::update. If no script name is passed on Linux hosts a default is used: /var/log/patching.json. If no script name is passed on Windows hosts a default is used: C:/ProgramData/PuppetLabs/patching/patching.json
update_linux
Execute OS updates on the target. For RedHat/CentOS this runs yum update. For Debian/Ubuntu this runs apt upgrade. For Windows this runs Windows Update and choco update.
Supports noop? false
Parameters
provider
Data type: Optional[String[1]]
What update provider to use. For Linux (RHEL, Debian, etc) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
names
Data type: Optional[Array[String]]
Name of the package(s) to update. If nothing is passed then all packages will be updated. Note: this currently only works for Linux, Windows support will be added in the future for both Windows Update and Chocolatey (TODO)
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. The data is written to a log file so that you can collect it later by running patching::history. If no script name is passed on Linux hosts a default is used: /var/log/patching.json. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/patching.json
log_file
Data type: Optional[String[1]]
Log file for OS specific output during the patching process. This file will contain OS specific (RHEL/CentOS = yum history, Debian/Ubuntu = /var/log/apt/history.log, Windows = ??) data that this task used to generate its output. If no script name is passed on Linux hosts a default is used: /var/log/patching.log. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/patching.log
update_windows
Execute OS updates on the target. For RedHat/CentOS this runs yum update. For Debian/Ubuntu this runs apt upgrade. For Windows this runs Windows Update and choco update.
Supports noop? false
Parameters
provider
Data type: Optional[String[1]]
What update provider to use. For Linux (RHEL, Debian, etc) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
names
Data type: Optional[Array[String]]
Name of the package(s) to update. If nothing is passed then all packages will be updated. Note: this currently only works for Linux, Windows support will be added in the future for both Windows Update and Chocolatey (TODO)
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. The data is written to a log file so that you can collect it later by running patching::history. If no script name is passed on Linux hosts a default is used: /var/log/patching.json. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/patching.json
log_file
Data type: Optional[String[1]]
Log file for OS specific output during the patching process. This file will contain OS specific (RHEL/CentOS = yum history, Debian/Ubuntu = /var/log/apt/history.log, Windows = ??) data that this task used to generate its output. If no script name is passed on Linux hosts a default is used: /var/log/patching.log. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/patching.log
Plans
patching
It serves as a showcase of how all of the building blocks in this module can be tied together to create a full blown patching workflow. This is a great initial workflow to patch servers. We fully expect others to take this workflow as a building block and customize it to meet their needs.
Examples
CLI - Basic usage
bolt plan run patching --nodes linux_patching,windows_patching
CLI - Disable snapshot creation, because an old patching run failed and we have an old snapshot to rely on
bolt plan run patching --nodes linux_patching,windows_patching snapshot_create=false
CLI - Disable snapshot deletion, because we want to wait for app teams to test.
bolt plan run patching --nodes linux_patching,windows_patching snapshot_delete=false
# sometime in the future, delete the snapshots
bolt plan run patching::snapshot_vmware --nodes linux_patching,windows_patching action='delete'
CLI - Customize the pre/post update plans to use your own module's version
bolt plan run patching --nodes linux_patching pre_update_plan='mymodule::pre_update' post_update_plan='mymodule::post_update'
Parameters
The following parameters are available in the patching
plan.
nodes
Data type: TargetSpec
Set of targets to run against.
filter_offline_nodes
Data type: Boolean
Flag to determine if offline nodes should be filtered out of the list of targets returned by this plan. If true, when running the puppet_agent::version check, any nodes that return an error will be filtered out and ignored. Those targets will not be returned in any of the data structures in the result of this plan. If false, then any nodes that are offline will cause this plan to error immediately when performing the online check. This will result in a halt of the patching process.
Default value: false
monitoring_plan
Data type: Optional[String]
Name of the plan to use for disabling/enabling monitoring steps of the workflow.
Default value: 'patching::monitoring_solarwinds'
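The plan name does not have to come from the CLI; the Changelog for 0.4.0 documents a `patching_monitoring_plan` configuration option that can be set as an inventory var. A minimal sketch (the group name and target below are hypothetical):

```yaml
# inventory.yaml - sketch; group name and target are placeholders
groups:
  - name: windows_patching
    vars:
      # plan invoked for the disable/enable monitoring steps of the workflow
      patching_monitoring_plan: 'patching::monitoring_solarwinds'
    targets:
      - winapp01.domain.tld
```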
pre_update_plan
Data type: String
Name of the plan to use for executing the pre-update step of the workflow.
Default value: 'patching::pre_update'
post_update_plan
Data type: String
Name of the plan to use for executing the post-update step of the workflow.
Default value: 'patching::post_update'
reboot_strategy
Data type: Enum['only_required', 'never', 'always']
Determines the reboot strategy for the run.
- 'only_required' only reboots hosts that require it based on info reported from the OS
- 'never' never reboots the hosts
- 'always' will reboot the host no matter what
Default value: 'only_required'
reboot_message
Data type: String
Message displayed to the user prior to the system rebooting
Default value: 'NOTICE: This system is currently being updated.'
snapshot_plan
Data type: Optional[String]
Name of the plan to use for executing the snapshot creation and deletion steps of the workflow.
You can also pass '' or undef as an easy way to disable both creation and deletion.
Default value: 'patching::snapshot_vmware'
snapshot_create
Data type: Boolean
Flag to enable/disable creating snapshots before patching groups.
A common use case for disabling snapshot creation: say you run patching with `snapshot_create` enabled and something goes wrong during patching and the run fails. The snapshot still exists and you want to retry patching, but you don't want to create ANOTHER snapshot on top of the one you already have. In this case, pass `snapshot_create=false` when running the second time.
Default value: true
snapshot_delete
Data type: Boolean
Flag to enable/disable deleting snapshots after patching groups.
A common use case for disabling snapshot deletion: say you want to patch your hosts and wait a few hours for application teams to test after you're done patching. In this case you can run with `snapshot_delete=false`, and then a few hours later run the `patching::snapshot_vmware` plan with `action=delete` to clean up the snapshots.
Default value: true
noop
Data type: Boolean
Flag to enable noop mode for the underlying plans and tasks.
Default value: false
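The same parameters can be passed when invoking the workflow from a larger orchestration. A minimal sketch using only parameters documented above (the wrapper plan name `mymodule::patch_window` is hypothetical):

```puppet
# sketch: call the full patching workflow from a wrapper plan
plan mymodule::patch_window (
  TargetSpec $targets,
) {
  run_plan('patching',
    nodes           => $targets,
    reboot_strategy => 'only_required',
    snapshot_create => true,
    snapshot_delete => false,  # keep snapshots around for app-team testing
    noop            => false,
  )
}
```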
patching::available_updates
This uses the patching::available_updates task to query each Target's Operating System for available updates. The results from the OS are parsed and formatted into easy-to-consume JSON data, such that further code can be written against the output.
- RHEL: This ultimately performs a `yum check-update`.
- Ubuntu: This ultimately performs an `apt upgrade --simulate`.
- Windows:
- Windows Update API: Queries the WUA for updates. This is the standard update mechanism for Windows.
- Chocolatey: If installed, runs choco outdated. If not installed, Chocolatey is ignored.
Examples
CLI - Basic Usage
bolt plan run patching::available_updates --nodes linux_hosts
CLI - Get available update information in CSV format for creating reports
bolt plan run patching::available_updates --nodes linux_hosts format=csv
Plan - Basic Usage
run_plan('patching::available_updates',
nodes => $linux_hosts)
Plan - Get available update information in CSV format for creating reports
run_plan('patching::available_updates',
nodes => $linux_hosts,
format => 'csv')
Parameters
The following parameters are available in the `patching::available_updates` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
format
Data type: Enum['none', 'pretty', 'csv']
Output format for printing user-friendly information during the plan run. This also determines the format of the information returned from this plan.
- 'none' : Prints no data to the screen. Returns the raw ResultSet from the patching::available_updates task
- 'pretty' : Prints the data out in an easy-to-consume format, one line per host, showing the number of available updates per host. Returns a Hash containing two keys: 'has_updates' - an array of TargetSpec that have updates available, 'no_updates' - an array of hosts that have no updates available.
- 'csv' : Prints and returns CSV formatted data, one row for each update of each host.
Default value: 'pretty'
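When called from another plan, the 'pretty' format's return value can be consumed directly. A sketch based on the two hash keys documented above (the `$linux_hosts` variable is assumed):

```puppet
# sketch: act only on hosts that actually have updates available
$results = run_plan('patching::available_updates',
  nodes  => $linux_hosts,
  format => 'pretty',
)
$targets_with_updates = $results['has_updates']
$targets_up_to_date   = $results['no_updates']
```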
noop
Data type: Boolean
Run this plan in noop mode, meaning no changes will be made to end systems. Because this plan only queries for available updates and makes no changes, noop mode has no effect here.
Default value: false
patching::check_online
Online checks are done by querying for the node's Puppet version using the puppet_agent::version task. This plan is designed to be used ad hoc as a quick health check of your inventory. The intention is for this plan to be used as a "first pass" when onboarding new nodes into a Bolt rotation. One would build an inventory file of all nodes from trusted data sources, then run this plan against that inventory to isolate problem nodes and remediate them. Once this plan runs successfully on your inventory, you know that Bolt can connect and you can begin the patching process.
There are no results returned by this plan, instead data is pretty-printed to the screen in two lists:
- List of targets that failed to connect. This list is a YAML list where each line is the name of a Target that failed to connect. The intention here is that you can use this YAML list to modify your inventory and remove these problem hosts from your groups.
- Details for each failed target. This provides details about the error that occurred when connecting. Failures can occur for many reasons: host being offline, host not listening on the right port, firewall blocking, invalid credentials, etc. The idea here is to give the end user an easily digestible summary so that action can be taken to remediate these hosts.
Examples
CLI - Basic usage
bolt plan run patching::check_online
Parameters
The following parameters are available in the `patching::check_online` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
patching::check_puppet
Executes the puppet_agent::version task to check if Puppet is installed on all of the nodes. Once finished, the result is split into two groups:
- Nodes with puppet
- Nodes with no puppet
The nodes with puppet are queried for facts using the patching::puppet_facts plan. Nodes without puppet are queried for facts using the simpler facts plan.
This plan is designed to be the first plan executed in a patching workflow. It can be used to stop the patching process if any hosts are offline by setting filter_offline_nodes=false (default). It can also be used to patch any hosts that are currently available and ignoring any offline nodes by setting filter_offline_nodes=true.
Examples
CLI - Basic usage (error if any nodes are offline)
bolt plan run patching::check_puppet --nodes linux_hosts
CLI - Filter offline nodes (only return online nodes)
bolt plan run patching::check_puppet --nodes linux_hosts filter_offline_nodes=true
Plan - Basic usage (error if any nodes are offline)
$results = run_plan('patching::check_puppet',
nodes => $linux_hosts)
$targets_has_puppet = $results['has_puppet']
$targets_no_puppet = $results['no_puppet']
$targets_all = $results['all']
Plan - Filter offline nodes (only return online nodes)
$results = run_plan('patching::check_puppet',
nodes => $linux_hosts,
filter_offline_nodes => true)
$targets_online_has_puppet = $results['has_puppet']
$targets_online_no_puppet = $results['no_puppet']
$targets_online = $results['all']
Parameters
The following parameters are available in the `patching::check_puppet` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
filter_offline_nodes
Data type: Boolean
Flag to determine if offline nodes should be filtered out of the list of targets returned by this plan. If true, when running the puppet_agent::version check, any nodes that return an error will be filtered out and ignored. Those targets will not be returned in any of the data structures in the result of this plan. If false, then any nodes that are offline will cause this plan to error immediately when performing the online check. This will result in a halt of the patching process.
Default value: false
patching::deploy_scripts
The `patching::deploy_scripts` plan.
Examples
CLI deploy a pre patching script
bolt plan run patching::deploy_scripts scripts='{"pre_patch.sh": {"source": "puppet:///modules/test/patching/pre_patch.sh"}}'
CLI deploy a pre and post patching script
bolt plan run patching::deploy_scripts scripts='{"pre_patch.sh": {"source": "puppet:///modules/test/patching/pre_patch.sh"}, "post_patch.sh": {"source": "puppet:///modules/test/patching/post_patch.sh"}}'
Parameters
The following parameters are available in the `patching::deploy_scripts` plan.
owner
Data type: Optional[String]
Default owner of installed scripts
Default value: undef
group
Data type: Optional[String]
Default group of installed scripts
Default value: undef
mode
Data type: Optional[String]
Default file mode of installed scripts
Default value: undef
nodes
Data type: TargetSpec
scripts
Data type: Hash
patching_dir
Data type: Optional[String]
Default value: undef
bin_dir
Data type: Optional[String]
Default value: undef
log_dir
Data type: Optional[String]
Default value: undef
patching::get_targets
A very common requirement when running individual plans from the command line is that each plan would need to perform the following steps:
- Convert the TargetSpec from a string into an Array[Target] using get_targets($nodes)
- Check for nodes that are online (calls the `patching::check_puppet` plan)
- Gather facts about the nodes
This plan combines all of that into one so that it can be reused in all of the other plans within this module. It also adds some smart checking: if multiple plans invoke each other, each of which calls this plan, the online check and facts gathering only happens once.
Examples
Plan - Basic usage
plan mymodule::myplan (
TargetSpec $nodes
) {
$targets = run_plan('patching::get_targets', nodes => $nodes)
# do normal stuff with your $targets
}
Parameters
The following parameters are available in the `patching::get_targets` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
patching::monitoring_solarwinds
Communicates with the SolarWinds API from the local Bolt control node, using the remote transport target defined by the monitoring_target parameter below.
TODO config variables
Examples
Remote target definition for $monitoring_target
vars:
patching_monitoring_plan: 'patching::monitoring_solarwinds'
patching_monitoring_target: 'solarwinds'
groups:
- name: solarwinds
config:
transport: remote
remote:
port: 17778
username: 'domain\svc_bolt_sw'
password:
_plugin: pkcs7
encrypted_value: >
ENC[PKCS7,xxx]
targets:
- solarwinds.domain.tld
Parameters
The following parameters are available in the `patching::monitoring_solarwinds` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
action
Data type: Enum['enable', 'disable']
What action to perform on the monitored nodes:
- 'enable' Resumes monitoring alerts
- 'disable' Suppresses monitoring alerts
target_name_property
Data type: Optional[Enum['name', 'uri']]
Determines what property on the Target object will be used as the name when mapping the Target to a Node in SolarWinds.
- 'uri' : use the `uri` property on the Target. This is preferred because, if you specify a list of Targets in the inventory file, the value shown in that list is set as the `uri` and not the `name`; in this case `name` will be `undef`.
- 'name' : use the `name` property on the Target. This is not preferred because `name` is usually a short name or nickname.
Default value: undef
monitoring_target
Data type: TargetSpec
Name or reference to the remote transport target of the Monitoring server. This will be used to determine how to communicate with the SolarWinds API. The remote transport should have the following properties:
- [Integer] port Port to use when communicating with SolarWinds API (default: 17778)
- [String] username Username for authenticating with the SolarWinds API
- [Password] password Password for authenticating with the SolarWinds API
Default value: .vars['patching_monitoring_target']
noop
Data type: Boolean
Flag to enable noop mode. When noop mode is enabled no snapshots will be created or deleted.
Default value: false
patching::ordered_groups
When patching hosts it is common that you don't want to patch them all at the same time, for obvious reasons. To facilitate this we devised the concept of a "patching order": a mechanism that allows nodes to be organized into groups and then sorted so that a custom order can be defined for your specific use case.
The way one assigns a patching order to a target or group is using vars in the Bolt inventory file.
Example:
---
groups:
- name: primary_nodes
vars:
patching_order: 1
targets:
- sql01.domain.tld
- name: backup_nodes
vars:
patching_order: 2
targets:
- sql02.domain.tld
When the patching_order is defined at the group level, it is inherited by all nodes within that group.
The reason this plan exists is that there is no concept of a "group" in the Bolt runtime, so we need to artificially recreate them using our patching_order vars paradigm.
An added benefit to this paradigm is that you may have grouped your nodes logically on a different dimension, say by application. If it's OK that multiple applications be patched at the same time, we can assign the same patching order to multiple groups in the inventory. Then, when run through this plan, they will be aggregated together into one large group of nodes that will all be patched concurrently.
For example, app_xxx and app_zzz can both be patched at the same time, but app_yyy needs to go later in the process:
---
groups:
- name: app_xxx
vars:
patching_order: 1
targets:
- xxx
- name: app_yyy
vars:
patching_order: 2
targets:
- yyy
- name: app_zzz
vars:
patching_order: 1
targets:
- zzz
This is returned as an Array, because an Array has a defined order when you iterate over it using .each. Ordering is important in patching so we wanted this to be very concrete.
Examples
Basic usage
$ordered_groups = run_plan('patching::ordered_groups', nodes => $targets)
$ordered_groups.each |$group_hash| {
$group_order = $group_hash['order']
$group_nodes = $group_hash['nodes']
# run your patching process for the group
}
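Building on the loop above, a typical pattern is to run the per-group patching steps inside the iteration so that groups are patched strictly in order. A sketch using other plans from this module (the update/reboot steps in the middle are elided):

```puppet
# sketch: patch each ordered group sequentially
$ordered_groups = run_plan('patching::ordered_groups', nodes => $targets)
$ordered_groups.each |$group_hash| {
  $group_nodes = $group_hash['nodes']
  run_plan('patching::pre_update',  nodes => $group_nodes)
  # ... apply updates and reboot this group here ...
  run_plan('patching::post_update', nodes => $group_nodes)
}
```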
Parameters
The following parameters are available in the `patching::ordered_groups` plan.
nodes
Data type: TargetSpec
Set of targets to create ordered groups of.
patching::post_update
Often in patching it is necessary to run custom commands before/after updates are applied to a host. This plan allows for that customization to occur.
By default it executes a Shell script on Linux and a PowerShell script on Windows hosts. The default script paths are:
- Linux:
/opt/patching/bin/post_update.sh
- Windows:
C:\ProgramData\patching\bin\post_update.ps1
One can customize the script paths by overriding them on the CLI or when calling the plan, using the `script_linux` and `script_windows` parameters.
The script paths can also be customized in the inventory configuration `vars`:
Example:
vars:
patching_post_update_script_windows: C:\scripts\patching.ps1
patching_post_update_script_linux: /usr/local/bin/mysweetpatchingscript.sh
groups:
# these nodes will use the pre patching script defined in the vars above
- name: regular_nodes
targets:
- tomcat01.domain.tld
# these nodes will use the customized patching script set for this group
- name: sql_nodes
vars:
patching_post_update_script_linux: /bin/sqlpatching.sh
targets:
- sql01.domain.tld
Examples
CLI - Basic usage
bolt plan run patching::post_update --nodes all_hosts
CLI - Custom scripts
bolt plan run patching::post_update --nodes all_hosts script_linux='/my/sweet/script.sh' script_windows='C:\my\sweet\script.ps1'
Plan - Basic usage
run_plan('patching::post_update',
nodes => $all_hosts)
Plan - Custom scripts
run_plan('patching::post_update',
nodes => $all_hosts,
script_linux => '/my/sweet/script.sh',
script_windows => 'C:\my\sweet\script.ps1')
Parameters
The following parameters are available in the `patching::post_update` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
script_linux
Data type: String[1]
Path to the script that will be executed on Linux nodes.
Default value: '/opt/patching/bin/post_update.sh'
script_windows
Data type: String[1]
Path to the script that will be executed on Windows nodes.
Default value: 'C:\ProgramData\patching\bin\post_update.ps1'
noop
Data type: Boolean
Flag to enable noop mode for the underlying plans and tasks.
Default value: false
patching::pre_post_update
Common entry point for executing the pre/post update custom scripts
- See also patching::pre_update patching::post_update
Parameters
The following parameters are available in the `patching::pre_post_update` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
task
Data type: String[1]
Name of the pre/post update task to execute.
script_linux
Data type: Optional[String[1]]
Path to the script that will be executed on Linux nodes.
Default value: undef
script_windows
Data type: Optional[String[1]]
Path to the script that will be executed on Windows nodes.
Default value: undef
noop
Data type: Boolean
Flag to enable noop mode for the underlying plans and tasks.
Default value: false
patching::pre_update
Often in patching it is necessary to run custom commands before/after updates are applied to a host. This plan allows for that customization to occur.
By default it executes a Shell script on Linux and a PowerShell script on Windows hosts. The default script paths are:
- Linux:
/opt/patching/bin/pre_update.sh
- Windows:
C:\ProgramData\patching\bin\pre_update.ps1
One can customize the script paths by overriding them on the CLI or when calling the plan, using the `script_linux` and `script_windows` parameters.
The script paths can also be customized in the inventory configuration `vars`:
Example:
vars:
patching_pre_update_script_windows: C:\scripts\patching.ps1
patching_pre_update_script_linux: /usr/local/bin/mysweetpatchingscript.sh
groups:
# these nodes will use the pre patching script defined in the vars above
- name: regular_nodes
targets:
- tomcat01.domain.tld
# these nodes will use the customized patching script set for this group
- name: sql_nodes
vars:
patching_pre_update_script_linux: /bin/sqlpatching.sh
targets:
- sql01.domain.tld
Examples
CLI - Basic usage
bolt plan run patching::pre_update --nodes all_hosts
CLI - Custom scripts
bolt plan run patching::pre_update --nodes all_hosts script_linux='/my/sweet/script.sh' script_windows='C:\my\sweet\script.ps1'
Plan - Basic usage
run_plan('patching::pre_update',
nodes => $all_hosts)
Plan - Custom scripts
run_plan('patching::pre_update',
nodes => $all_hosts,
script_linux => '/my/sweet/script.sh',
script_windows => 'C:\my\sweet\script.ps1')
Parameters
The following parameters are available in the `patching::pre_update` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
script_linux
Data type: String[1]
Path to the script that will be executed on Linux nodes.
Default value: '/opt/patching/bin/pre_update.sh'
script_windows
Data type: String[1]
Path to the script that will be executed on Windows nodes.
Default value: 'C:\ProgramData\patching\bin\pre_update.ps1'
noop
Data type: Boolean
Flag to enable noop mode for the underlying plans and tasks.
Default value: false
patching::puppet_facts
This is inspired by: https://github.com/puppetlabs/puppetlabs-facts/blob/master/plans/init.pp
Except instead of just running `facter`, it runs `puppet facts` to set additional facts that are only present in the context of Puppet.
Under the hood it is executing the `patching::puppet_facts` task.
Parameters
The following parameters are available in the `patching::puppet_facts` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
patching::reboot_required
Patching in different environments comes with various unique requirements; one of those is rebooting hosts. Sometimes hosts always need to be rebooted, other times never.
To provide this flexibility we created this plan, which wraps the `reboot` plan with a strategy that is controllable as a parameter. This provides flexibility in rebooting specific nodes in certain ways (by group), along with the power to expand our strategy offerings in the future.
Parameters
The following parameters are available in the `patching::reboot_required` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
strategy
Data type: Enum['only_required', 'never', 'always']
Determines the reboot strategy for the run.
- 'only_required' only reboots hosts that require it based on info reported from the OS
- 'never' never reboots the hosts
- 'always' will reboot the host no matter what
Default value: 'only_required'
message
Data type: String
Message displayed to the user prior to the system rebooting
Default value: 'NOTICE: This system is currently being updated.'
noop
Data type: Boolean
Flag to determine if this should be a noop operation or not. If this is a noop, no hosts will ever be rebooted, however the "reboot required" information will still be queried and returned.
Default value: false
patching::snapshot_vmware
Communicates to the vSphere API from the local Bolt control node using the rbvmomi Ruby gem.
To install the rbvmomi gem on the bolt control node:
/opt/puppetlabs/bolt/bin/gem install --user-install rbvmomi
TODO config variables
Parameters
The following parameters are available in the `patching::snapshot_vmware` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
action
Data type: Enum['create', 'delete']
What action to perform on the snapshots:
- 'create' creates a new snapshot
- 'delete' deletes snapshots by matching the `snapshot_name` passed in
target_name_property
Data type: Optional[Enum['name', 'uri']]
Determines what property on the Target object will be used as the VM name when mapping the Target to a VM in vSphere.
- 'uri' : use the `uri` property on the Target. This is preferred because, if you specify a list of Targets in the inventory file, the value shown in that list is set as the `uri` and not the `name`; in this case `name` will be `undef`.
- 'name' : use the `name` property on the Target. This is not preferred because `name` is usually a short name or nickname.
Default value: undef
vsphere_host
Data type: String[1]
Hostname of the vSphere server that we're going to use to create snapshots via the API.
Default value: .vars['vsphere_host']
vsphere_username
Data type: String[1]
Username to use when authenticating with the vSphere API.
Default value: .vars['vsphere_username']
vsphere_password
Data type: String[1]
Password to use when authenticating with the vSphere API.
Default value: .vars['vsphere_password']
vsphere_datacenter
Data type: String[1]
Name of the vSphere datacenter to search for VMs under.
Default value: .vars['vsphere_datacenter']
vsphere_insecure
Data type: Boolean
Flag to enable insecure HTTPS connections by disabling SSL server certificate verification.
Default value: .vars['vsphere_insecure']
snapshot_name
Data type: String[1]
Name of the snapshot
Default value: 'Bolt Patching Snapshot'
snapshot_description
Data type: String
Description of the snapshot
Default value: ''
snapshot_memory
Data type: Boolean
Capture the VMs memory in the snapshot
Default value: false
snapshot_quiesce
Data type: Boolean
Quiesce/flush the filesystem when snapshotting the VM. This requires VMware tools be installed in the guest OS to work properly.
Default value: true
noop
Data type: Boolean
Flag to enable noop mode. When noop mode is enabled no snapshots will be created or deleted.
Default value: false
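Since every vsphere_* parameter above defaults to a lookup from the target's vars, the usual pattern is to define them once in the inventory. A sketch (the hostname, username, and datacenter values are placeholders):

```yaml
# inventory.yaml - sketch; values are placeholders
vars:
  vsphere_host: 'vcenter.domain.tld'
  vsphere_username: 'domain\svc_bolt_vmware'
  vsphere_password:
    _plugin: pkcs7
    encrypted_value: >
      ENC[PKCS7,xxx]
  vsphere_datacenter: 'dc1'
  vsphere_insecure: false
```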
patching::update_history
When executing the `patching::update` task, the data that is returned to Bolt is also written into a "results" file. This plan reads the last JSON document from that results file, then formats the results in various ways. This is useful for gathering patching report data on a fleet of servers.
If you're using this in a larger workflow and you've run `patching::update` inline, you can pass the ResultSet from that task into the `history` parameter of this plan and we will skip retrieving the history from the targets and simply use that data.
Parameters
The following parameters are available in the `patching::update_history` plan.
nodes
Data type: TargetSpec
Set of targets to run against.
history
Data type: Optional[ResultSet]
Optional ResultSet from the `patching::update` or `patching::update_history` tasks that contains update result data to be formatted.
Default value: undef
report_file
Data type: Optional[String]
Optional filename to save the formatted report into.
If `undef` is passed, then no report file will be written.
Default value: 'patching_report.csv'
format
Data type: Enum['none', 'pretty', 'csv']
The method of formatting to use for the data.
Default value: 'pretty'
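When `patching::update` has already been run inline, its ResultSet can be handed to this plan via the `history` parameter as described above. A sketch (the `$targets` variable is assumed):

```puppet
# sketch: reuse an in-memory ResultSet instead of re-reading history files
# from the targets
$update_results = run_task('patching::update', $targets)
run_plan('patching::update_history',
  nodes       => $targets,
  history     => $update_results,
  format      => 'csv',
  report_file => 'patching_report.csv',
)
```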
What are tasks?
Modules can contain tasks that take action outside of a desired state managed by Puppet. It’s perfect for troubleshooting or deploying one-off changes, distributing scripts to run across your infrastructure, or automating changes that need to happen in a particular order as part of an application deployment.
Tasks in this module release
cache_remove
Removes/clears the target's update cache. For RHEL/CentOS this means a `yum clean all`. For Debian this means an `apt update`. For Windows this means a Windows Update refresh.
cache_update
Updates the target's update cache. For RHEL/CentOS this means a `yum clean expire-cache`. For Debian this means an `apt update`. For Windows this means a Windows Update refresh.
puppet_facts
Gather system facts using 'puppet facts'. Puppet agent MUST be installed for this to work.
reboot_required
Checks if a reboot is pending
What are plans?
Modules can contain plans that take action outside of a desired state managed by Puppet. It’s perfect for troubleshooting or deploying one-off changes, distributing scripts to run across your infrastructure, or automating changes that need to happen in a particular order as part of an application deployment.
Changelog
All notable changes to this project will be documented in this file.
Development
Release 0.4.0 (2020-01-06)
- Add support for SUSE Linux Enterprise. (Enhancement)
  Contributed by Michael Surato (@msurato)
- Modify the scripts to use /etc/os-release. This will fall back to older methods in the absence of /etc/os-release. (Enhancement)
  Contributed by Michael Surato (@msurato)
- Re-establish all targets' availability after reboot.
  Contributed by Vadym Chepkov (@vchepkov)
- Fixed a bug in `patching::puppet_facts` where the sub command would fail to run on installations with custom `GEM_PATH` settings. (Bug Fix)
  Contributed by Nick Maludy (@nmaludy)
- Changed the property we use to look up SolarWinds nodes from 'Caption' to 'DNS' by default. Also made the property configurable using the `patching_monitoring_name_property` option. There are now new parameters on the `patching::monitoring_solarwinds` task and plans to allow specifying what property we are matching on the SolarWinds side. (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
Release 0.3.0 (2019-10-30)
- Add support for RHEL 8 based distributions. (Enhancement)
  Contributed by Vadym Chepkov (@vchepkov)
- Added shields/badges to the README. (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
- Added the ability to enable/disable monitoring during patching. The first implementation is to do this in the SolarWinds monitoring tool:
  - Task - `patching::monitoring_solarwinds`: This task enables/disables monitoring for a list of node names.
  - Plan - `patching::monitoring_solarwinds`: Wraps the `patching::monitoring_solarwinds` task in an easier-to-consume fashion, along with configuration option parsing and pretty printing. (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
- Changed the name of the configuration option `patching_vm_name_property` to `patching_snapshot_target_name_property`. This correlates to the new property that was just added (below). (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
- Added new configuration options:
  - `patching_monitoring_plan`: Name of the plan to execute for monitoring alerts control. (default: `patching::monitoring_solarwinds`)
  - `patching_monitoring_enabled`: Enable/disable the monitoring phases of patching. (default: `true`)
  - `patching_monitoring_target_name_property`: Determines what property on the target maps to the node's name in the monitoring tool (SolarWinds). This was intentionally made distinct from `patching_snapshot_target_name_property` in case the tools use different names for the same node/target.
  Contributed by Nick Maludy (@nmaludy)
- Empty strings ('') for plan names no longer disable the execution of plans (the `pick()` function removes these, so they get ignored). Instead pass in the string 'disabled' to disable the use of a pluggable plan. (Bug fix)
  Contributed by Nick Maludy (@nmaludy)
Release 0.2.0
- Renamed task implementations to `_linux` and `_windows` to work around a Forge bug where it didn't support that Bolt feature and was denying module submission. Due to this I also had to create matching task metadata for `_linux` and `_windows` and mark them as `"private": true` so that they are not visible in `bolt task show`. (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
Release 0.1.0
Features
Bugfixes
Known Issues
Dependencies
- puppetlabs/puppet_agent (>= 2.2.0 < 3.0.0)
- puppetlabs/stdlib (>= 4.13.1 < 7.0.0)
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.