patching
Version information
This version is compatible with:
- Puppet Enterprise 2023.2.x, 2023.1.x, 2023.0.x, 2021.7.x, 2021.6.x, 2021.5.x, 2021.4.x, 2021.3.x, 2021.2.x, 2021.1.x, 2021.0.x, 2019.8.x, 2019.7.x, 2019.5.x, 2019.4.x, 2019.3.x, 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x, 2017.3.x, 2017.2.x, 2016.4.x
- Puppet >= 4.10.0 < 8.0.0
Tasks:
- available_updates
- cache_remove
- cache_update
- history
Plans:
- patching
- available_updates
- check_online
- check_puppet
Start using this module
Add this module to your Puppetfile:
mod 'encore-patching', '1.7.0'
Learn more about managing modules with a Puppetfile in the documentation.
patching
Table of Contents
- Module description
- Setup
- Architecture
- Design
- Patching Workflow
- Usage
- Configuration Options
- Reference
- Limitations
- Development
- Contributors
Module description
A framework for building patching workflows. This module is designed to be used as building blocks for complex patching environments of Windows and Linux (RHEL, Ubuntu) systems.
No Puppet agent is required on the end targets. The node executing the patching will need to have bolt installed.
Setup
Setup Requirements
This module makes heavy use of bolt; you'll need to install it to get started. Install instructions are here.
If you want to use the patching::snapshot_vmware plan/function, you'll need the rbvmomi gem installed in the bolt ruby environment:
/opt/puppetlabs/bolt/bin/gem install --user-install rbvmomi
Quick Start
cat << EOF >> ~/.puppetlabs/bolt/Puppetfile
mod 'puppetlabs/stdlib'
mod 'encore/patching'
EOF
bolt puppetfile install
bolt plan run patching::available_updates --targets group_a
# install rbvmomi for VMware snapshot support
/opt/puppetlabs/bolt/bin/gem install --user-install rbvmomi
Architecture
This module is designed to work in enterprise patching environments.
Assumptions:
- RHEL targets are registered to Satellite / Foreman or the internet
- Ubuntu targets are registered to Landscape or the internet
- Windows targets are registered to WSUS and Chocolatey (optional)
Registration to a central patching server is preferred for speed of software downloads and control of phased patching promotions.
At some point in the future we will include tasks and plans to promote patches through these central patching server tools.
Design
patching is designed around bolt tasks and plans.
Individual tasks have been written to accomplish targeted steps in the patching process. For example, patching::available_updates is used to check for available updates on targets.
Plans are then used to pretty up output and tie tasks together.
This way end users can use the tasks and plans as building blocks to create their own custom patching workflows (we all know there is no such thing as one size fits all).
For more info on tasks and plans, see the Usage and Reference sections.
Going further, many of the settings for the plans are configurable by setting vars on your groups in the bolt inventory file. For more info on customizing settings using vars, see the Configuration Options section.
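For example, a minimal inventory sketch that sets patching vars on a group (the group name and targets here are illustrative; both vars are described later in this document):
groups:
  - name: app_servers
    vars:
      patching_order: 1
      patching_reboot_strategy: 'only_required'
    targets:
      - app01.domain.tld
      - app02.domain.tld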
Patching Workflow
Our default patching workflow is implemented in the patching plan (patching/init.pp).
This workflow consists of the following phases:
- Organize inventory into groups, in the proper order required for patching
- For each group...
- Check for available updates
- Disable monitoring
- Snapshot the VMs
- Pre-patch custom tasks
- Update the host (patch)
- Post-patch custom tasks
- Reboot targets that require a reboot
- Delete snapshots
- Enable monitoring
Usage
Check for available updates
This will reach out to all targets in group_a
in your inventory and check for any available
updates through the system's package manager:
- RHEL = yum
- Ubuntu = apt
- Windows = Windows Update + Chocolatey (if installed)
bolt plan run patching::available_updates --targets group_a
Disable monitoring
Prior to performing the snapshotting and patching steps, the plan will disable monitoring alerts in SolarWinds (by default).
This plan/task utilizes the remote transport.
bolt plan run patching::monitoring_solarwinds --targets group_a action='disable' monitoring_target='solarwinds'
Create snapshots
This plan will snapshot all of the hosts in VMware. The name of the VM in VMware is assumed to be the uri of the node in the inventory file.
/opt/puppetlabs/bolt/bin/gem install rbvmomi
bolt plan run patching::snapshot_vmware --targets group_a action='create' vsphere_host='vsphere.domain.tld' vsphere_username='xyz' vsphere_password='abc123' vsphere_datacenter='dctr1'
Perform pre-patching checks and actions
This plan is designed to perform custom service checks and shutdown actions before applying patches to a node.
If you have custom actions that need to be performed prior to patching, place them in the pre_update scripts and this plan will execute them.
Best practice is to define and distribute these scripts as part of your normal Puppet code, as part of the role for that node.
bolt plan run patching::pre_update --targets group_a
By default this executes the following scripts (targets where the script doesn't exist are ignored):
- Linux = /opt/patching/bin/pre_update.sh
- Windows = C:\ProgramData\patching\bin\pre_update.ps1
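As an illustration, a minimal pre-update script might stop an application service before packages are applied (a hypothetical sketch; the service name is a placeholder):
#!/bin/bash
# /opt/patching/bin/pre_update.sh (example)
# Stop the application service so packages can be updated safely.
# A non-zero exit code will be reported as a pre-update failure.
systemctl stop myapp.service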
Deploying pre/post patching scripts
An easy way to deploy pre/post patching scripts is via the patching
Puppet manifest or the patching::script
resource.
Using the patching
class:
class {'patching':
  scripts => {
    'pre_patch.sh' => {
      content => template('mymodule/patching/custom_app_pre_patch.sh'),
    },
    'post_patch.sh' => {
      source => 'puppet:///mymodule/patching/custom_app_post_patch.sh',
    },
  },
}
Via patching::script
resources:
patching::script { 'custom_app_pre_patch.sh':
  content => template('mymodule/patching/custom_app_pre_patch.sh'),
}
patching::script { 'custom_app_post_patch.sh':
  source => 'puppet:///mymodule/patching/custom_app_post_patch.sh',
}
Or via Hiera:
patching::scripts:
custom_app_pre_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_pre_patch.sh'
custom_app_post_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_post_patch.sh'
Run the full patching workflow end-to-end
Organize the inventory into groups:
patching::ordered_groups
Then, for each group:
patching::cache_update
patching::available_updates
patching::snapshot_vmware action='create'
patching::pre_update
patching::update
patching::post_update
patching::reboot_required
patching::snapshot_vmware action='delete'
bolt plan run patching --targets group_a
Patching with Puppet Enterprise (PE)
When executing patching with Puppet Enterprise, Bolt will use the pcp transport.
This transport has a default timeout of 1000
seconds. Windows patching is MUCH
slower than this and the timeouts will need to be increased.
If you do not modify this default timeout, you may experience the following error
in the patching::update
task or any other long running task:
Starting: task patching::update on windowshost.company.com
Finished: task patching::update with 1 failure in 1044.63 sec
The following hosts failed during update:
[{"target":"windowshost.company.com","action":"task","object":"patching::update","status":"failure","result":{"_output":"null","_error":{"kind":"puppetlabs.tasks/task-error","issue_code":"TASK_ERROR","msg":"The task failed with exit code unknown","details":{"exit_code":"unknown"}}},"node":"windowshost.company.com"}]
Below is an example bolt.yaml
with the settings modified:
---
pcp:
# 2 hours = 120 minutes = 7,200 seconds
job-poll-timeout: 7200
For a complete reference of the available settings for the pcp transport, see the bolt configuration reference documentation.
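If you manage transport settings in your inventory instead of bolt.yaml, the same option can be set under a group's config. A sketch, assuming Bolt's inventory v2 format and a hypothetical pe_nodes group:
groups:
  - name: pe_nodes
    config:
      transport: pcp
      pcp:
        # 2 hours = 7,200 seconds
        job-poll-timeout: 7200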
Configuration Options
This module allows many aspects of its runtime to be customized using configuration options in the inventory file.
For details on all of the available configuration options, see REFERENCE_CONFIGURATION.md
Example: Let's say we want to prevent some targets from rebooting during patching.
This can be customized with the patching_reboot_strategy
variable in inventory:
groups:
- name: no_reboot_nodes
vars:
patching_reboot_strategy: 'never'
targets:
- abc123.domain.tld
- def4556.domain.tld
Reference
See REFERENCE.md
Limitations
This module has been tested on the following operating systems:
- Windows
- 2008
- 2012
- 2016
- RHEL
- 6
- 7
- 8
- Ubuntu
- 16.04
- 18.04
Development
See DEVELOPMENT.md
Contributors
Reference
Table of Contents
Classes
- patching : allows global customization of the patching resources
- patching::params : params for the patching module resources
Defined types
- patching::script : manages a script for custom patching actions
Functions
- patching::snapshot_vmware : Creates/deletes snapshots on VMs using the VMware vSphere API.
- patching::target_names : Returns an array of names, one for each target, based on the $name_property
Tasks
available_updates
: Collects information about available updates on a target systemavailable_updates_linux
: Collects information about available updates on a target systemavailable_updates_windows
: Collects information about available updates on a target systemcache_remove
: Removes/clears the target's update cache. For RHEL/CentOS this means ayum clean all
. For Debian this means aapt update
. For Windows thicache_remove_linux
: Removes/clears the target's update cache. For RHEL/CentOS this means ayum clean all
. For Debian this means aapt update
. For Windows thicache_remove_windows
: Removes/clears the target's update cache. For RHEL/CentOS this means ayum clean all
. For Debian this means aapt update
. For Windows thicache_update
: Updates the targets update cache. For RHEL/CentOS this means ayum clean expire-cache
. For Debian this means aapt update
. For Windows thcache_update_linux
: Updates the targets update cache. For RHEL/CentOS this means ayum clean expire-cache
. For Debian this means aapt update
.cache_update_windows
: Updates the targets update cache. For Windows this means a Windows Update refresh.history
: Reads the update history from the JSON 'result_file'.monitoring_prometheus
: Create or remove alert silences for hosts in Prometheus.monitoring_solarwinds
: Enable or disable monitoring alerts on hosts in SolarWinds.post_update
: Run post-update script on target host(s), only if it exists. If the script doesn't exist or isn't executable, then this task succeeds (this apre_post_update_linux
: Pre-post-update definition to make bolt not throw a warning. Best to use pre_update or post_update directly.pre_post_update_windows
: Pre-post-update definition to make bolt not throw a warning. Best to use pre_update or post_update directly.pre_update
: Run pre-update script on target host(s), only if it exists. If the script doesn't exist or isn't executable, then this task succeeds (this alpuppet_facts
: Gather system facts using 'puppet facts'. Puppet agent MUST be installed for this to work.reboot_required
: Checks if a reboot is pendingreboot_required_linux
: Checks if a reboot is pendingreboot_required_windows
: Checks if a reboot is pendingsnapshot_kvm
: Creates or deletes snapshots on a set of KVM/Libvirt hypervisorsupdate
: Execute OS updates on the target. For RedHat/CentOS this runsyum update
. For Debian/Ubuntu runsapt upgrade
. For Windows this runs Windoupdate_history
: Reads the update history from the JSON 'result_file'.update_history_linux
: Reads the update history from the JSON 'result_file'.update_history_windows
: Reads the update history from the JSON 'result_file'.update_linux
: Execute OS updates on the target. For RedHat/CentOS this runsyum update
. For Debian/Ubuntu runsapt upgrade
. For SLES this runs `zypperupdate_windows
: Execute OS updates on the target. For RedHat/CentOS this runsyum update
. For Debian/Ubuntu runsapt upgrade
. For Windows this runs Windo
Plans
- patching : Our generic and semi-opinionated workflow.
- patching::available_updates : Checks all targets for available updates reported by their Operating System.
- patching::check_online : Checks each node to see if they're online.
- patching::check_puppet : Checks each node to see if Puppet is installed, then gathers facts on all targets.
- patching::deploy_scripts : Deploys custom patching scripts to targets.
- patching::get_facts : Sets patching facts on targets
- patching::get_targets : get_targets() except it also performs online checks and gathers facts in one step.
- patching::monitoring_multiple : Disable monitoring for targets in multiple services
- patching::monitoring_prometheus : Create or remove alert silences for hosts in Prometheus.
- patching::monitoring_solarwinds : Enable or disable monitoring alerts on hosts in SolarWinds.
- patching::ordered_groups : Takes a set of targets then groups and sorts them by the patching_order var set on the target.
- patching::post_update : Executes a custom post-update script on each node.
- patching::pre_post_update : Common entry point for executing the pre/post update custom scripts
- patching::pre_update : Executes a custom pre-update script on each node.
- patching::puppet_facts : Plan that runs 'puppet facts' on the targets and sets them as facts on the Target objects.
- patching::reboot_required : Queries a target's operating system to determine if a reboot is required and then reboots the targets that require rebooting.
- patching::set_facts : Sets patching facts on targets
- patching::snapshot_kvm : Creates or deletes VM snapshots on targets in KVM/Libvirt.
- patching::snapshot_vmware : Creates or deletes VM snapshots on targets in VMware.
- patching::update_history : Collect update history from the results JSON file on the targets
Classes
patching
allows global customization of the patching resources
Examples
Basic usage
include patching
Customizing script location
class {'patching':
bin_dir => '/my/custom/patching/scripts',
}
Customizing the owner/group/mode of the scripts
class {'patching':
owner => 'svc_patching',
group => 'svc_patching',
mode => '0700',
}
Customizing from hiera
patching::bin_dir: '/my/custom/app/patching/dir'
patching::owner: 'svc_patching'
patching::group: 'svc_patching'
patching::mode: '0700'
Deploying scripts from hiera
patching::scripts:
custom_app_pre_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_pre_patch.sh'
custom_app_post_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_post_patch.sh'
Parameters
The following parameters are available in the patching
class:
patching_dir
Data type: Any
Global directory as the base for bin_dir
and log_dir
Default value: $patching::params::patching_dir
bin_dir
Data type: Any
Global directory where the scripts will be installed
Default value: $patching::params::bin_dir
log_dir
Data type: Any
Directory where log files will be written during patching
Default value: $patching::params::log_dir
owner
Data type: Any
Default owner of installed scripts
Default value: $patching::params::owner
group
Data type: Any
Default group of installed scripts
Default value: $patching::params::group
mode
Data type: Any
Default file mode of installed scripts
Default value: $patching::params::mode
scripts
Data type: Optional[Hash]
Hash of script resources to instantiate. Useful for declaring script installs from hiera.
Default value: undef
patching::params
params for the patching module resources
Defined types
patching::script
manages a script for custom patching actions
Examples
Basic usage from static file
include patching
patching::script { 'pre_patch.sh':
source => 'puppet:///mymodule/patching/custom_app_pre_patch.sh',
}
Basic usage from template
include patching
patching::script { 'pre_patch.sh':
content => template('mymodule/patching/custom_app_pre_patch.sh'),
}
Installing the script into a different path with a different name
include patching
patching::script { 'custom_app_pre_patch.sh':
content => template('mymodule/patching/custom_app_pre_patch.sh'),
bin_dir => '/my/custom/app/patching/dir',
}
Installing multiple scripts into a different path
class {'patching':
bin_dir => '/my/custom/app/patching/dir',
}
# we don't have to override bin_dir on each of these because
# we configured it globally in the patching class above
patching::script { 'custom_app_pre_patch.sh':
content => template('mymodule/patching/custom_app_pre_patch.sh'),
}
patching::script { 'custom_app_post_patch.sh':
content => template('mymodule/patching/custom_app_post_patch.sh'),
}
From hiera
patching::bin_dir: '/my/custom/app/patching/dir'
patching::scripts:
custom_app_pre_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_pre_patch.sh'
custom_app_post_patch.sh:
source: 'puppet:///mymodule/patching/custom_app_post_patch.sh'
Parameters
The following parameters are available in the patching::script
defined type:
source
Data type: Any
Source (puppet path) for the file resource of the script.
Either source or content must be specified. If neither is specified, an error will be thrown.
Default value: undef
content
Data type: Any
Content (raw string, result of template(), etc) for the file resource of the script.
Either source or content must be specified. If neither is specified, an error will be thrown.
Default value: undef
bin_dir
Data type: Any
Directory where the script will be installed
Default value: $patching::bin_dir
owner
Data type: Any
Owner of the script file
Default value: $patching::owner
group
Data type: Any
Group of the script file
Default value: $patching::group
mode
Data type: Any
File mode to set on the script
Default value: $patching::mode
Functions
patching::snapshot_vmware
Type: Ruby 4.x API
Creates/deletes snapshots on VMs using the VMware vSphere API.
patching::snapshot_vmware(Array $vm_names, String $snapshot_name, String $vsphere_host, String $vsphere_username, String $vsphere_password, String $vsphere_datacenter, Optional[Boolean] $vsphere_insecure, Optional[String] $snapshot_description, Optional[Boolean] $snapshot_memory, Optional[Boolean] $snapshot_quiesce, Optional[String] $action)
Creates/deletes snapshots on VMs using the VMware vSphere API.
Returns: Array
Results from the snapshot create/delete tasks
vm_names
Data type: Array
Array of VM names to create/delete snapshots on
snapshot_name
Data type: String
Name of the snapshot to create/delete
vsphere_host
Data type: String
Hostname/IP of the vSphere server
vsphere_username
Data type: String
Username to use for authenticating to vSphere
vsphere_password
Data type: String
Password to use for authenticating to vSphere
vsphere_datacenter
Data type: String
Datacenter in vSphere to use when searching for VMs
vsphere_insecure
Data type: Optional[Boolean]
Flag to enable HTTPS without SSL verification
snapshot_description
Data type: Optional[String]
Description of the snapshot, when creating.
snapshot_memory
Data type: Optional[Boolean]
Snapshot the VM's memory, when creating.
snapshot_quiesce
Data type: Optional[Boolean]
Quiesce/flush the VM's filesystem when creating the snapshot
action
Data type: Optional[String]
Action to perform on the snapshot, 'create' or 'delete'
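As an illustration, the function might be called from your own plan like so (a sketch; the plan name and credential values are hypothetical):
plan mymodule::snap (TargetSpec $targets) {
  # resolve VM names from the inventory, then snapshot them before patching
  $vm_names = patching::target_names(get_targets($targets), 'uri')
  return patching::snapshot_vmware($vm_names,
                                   'pre_patching',
                                   'vsphere.domain.tld',
                                   'svc_user',
                                   'secret',
                                   'dctr1',
                                   true,
                                   'Snapshot before patching',
                                   false,
                                   true,
                                   'create')
}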
patching::target_names
Type: Puppet Language
Returns an array of names, one for each target, based on the $name_property
patching::target_names(TargetSpec $targets, Enum['hostname', 'name', 'uri'] $name_property)
The patching::target_names function.
Returns: Array[String]
Array of names, one for each target
targets
Data type: TargetSpec
List of targets to extract the name from
name_property
Data type: Enum['hostname', 'name', 'uri']
Property in the Target to use as the name
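For example, to print the resolved names inside a plan (a minimal sketch):
$names = patching::target_names(get_targets($targets), 'name')
out::message("Patching targets: ${names.join(', ')}")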
Tasks
available_updates
Collects information about available updates on a target system
Supports noop? true
Parameters
provider
Data type: Optional[String]
What update provider to use. For Linux (RHEL, Debian, SUSE, etc.) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
available_updates_linux
Collects information about available updates on a target system
Supports noop? true
Parameters
provider
Data type: Optional[String[1]]
What update provider to use. For Linux (RHEL, Debian, etc) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
available_updates_windows
Collects information about available updates on a target system
Supports noop? true
Parameters
provider
Data type: Optional[String[1]]
What update provider to use. For Linux (RHEL, Debian, etc) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
cache_remove
Removes/clears the target's update cache. For RHEL/CentOS this means a yum clean all. For Debian this means an apt update. For Windows this means a Windows Update refresh.
Supports noop? false
cache_remove_linux
Removes/clears the target's update cache. For RHEL/CentOS this means a yum clean all. For Debian this means an apt update. For Windows this means a Windows Update refresh.
Supports noop? false
cache_remove_windows
Removes/clears the target's update cache. For RHEL/CentOS this means a yum clean all. For Debian this means an apt update. For Windows this means a Windows Update refresh.
Supports noop? false
cache_update
Updates the target's update cache. For RHEL/CentOS this means a yum clean expire-cache. For Debian this means an apt update. For Windows this means a Windows Update refresh.
Supports noop? true
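For example, to refresh the update cache across a group before checking for updates (the group name is illustrative):
bolt task run patching::cache_update --targets group_a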
cache_update_linux
Updates the target's update cache. For RHEL/CentOS this means a yum clean expire-cache. For Debian this means an apt update.
Supports noop? true
cache_update_windows
Updates the targets update cache. For Windows this means a Windows Update refresh.
Supports noop? true
history
Reads the update history from the JSON 'result_file'.
Supports noop? false
Parameters
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. This is data that was written by patching::update. If no file name is passed, the default on Linux hosts is /var/log/patching.json and on Windows hosts is C:/ProgramData/PuppetLabs/patching/patching.json.
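For example, to read back the results recorded by a previous patching run (the result_file override is optional and shown only for illustration):
bolt task run patching::history --targets group_a result_file='/var/log/patching.json'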
monitoring_prometheus
Create or remove alert silences for hosts in Prometheus.
Supports noop? true
Parameters
targets
Data type: Variant[String[1], Array[String[1]]]
List of hostnames for targets in Prometheus that will have monitoring alerts either enabled or disabled.
action
Data type: Enum['enable', 'disable']
Action to perform on monitored targets. 'enable' will enable monitoring alerts. 'disable' will disable monitoring alerts on targets.
prometheus_server
Data type: String[1]
FQDN of the Prometheus server to create an alert silence for
silence_duration
Data type: Optional[Integer]
How long the alert silence will be alive for
silence_units
Data type: Optional[Enum['minutes', 'hours', 'days', 'weeks']]
Goes with the silence duration to determine how long the alert silence will be alive for
monitoring_solarwinds
Enable or disable monitoring alerts on hosts in SolarWinds.
Supports noop? true
Parameters
targets
Data type: Variant[String[1], Array[String[1]]]
List of hostnames or IP addresses for targets in SolarWinds that will have monitoring alerts either enabled or disabled.
name_property
Data type: Optional[String[1]]
Property to use when looking up an Orion.Node in SolarWinds from a Bolt::Target. By default we check to see if the node is an IP address, if it is then we use the 'IPAddress' property, otherwise we use 'DNS'. If you want to change what the 'other' property is when the node name isn't an IP address, then specify this property.
action
Data type: Enum['enable', 'disable']
Action to perform on monitored targets. 'enable' will enable monitoring alerts. 'disable' will disable monitoring alerts on targets.
post_update
Run post-update script on target host(s), only if it exists. If the script doesn't exist or isn't executable, then this task succeeds (this allows us to run this task on all hosts, even if they don't have a post-update script).
Supports noop? true
Parameters
script
Data type: Optional[String[1]]
Absolute path of the script to execute. If no script name is passed on Linux hosts a default is used: /opt/patching/bin/post_update.sh. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/bin/post_update.ps1.
pre_post_update_linux
Pre-post-update definition to make bolt not throw a warning. Best to use pre_update or post_update directly.
Supports noop? true
Parameters
script
Data type: Optional[String[1]]
Absolute path of the script to execute. If no script name is passed on Linux hosts a default is used: /opt/patching/bin/pre_update.sh. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/bin/pre_update.ps1.
pre_post_update_windows
Pre-post-update definition to make bolt not throw a warning. Best to use pre_update or post_update directly.
Supports noop? true
Parameters
script
Data type: Optional[String[1]]
Absolute path of the script to execute. If no script name is passed on Linux hosts a default is used: /opt/patching/bin/pre_update.sh. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/bin/pre_update.ps1.
pre_update
Run pre-update script on target host(s), only if it exists. If the script doesn't exist or isn't executable, then this task succeeds (this allows us to run this task on all hosts, even if they don't have a pre-update script).
Supports noop? true
Parameters
script
Data type: Optional[String[1]]
Absolute path of the script to execute. If no script name is passed on Linux hosts a default is used: /opt/patching/bin/pre_update.sh. If no script name is passed on Windows hosts a default is used: C:/ProgramData/patching/bin/pre_update.ps1.
puppet_facts
Gather system facts using 'puppet facts'. Puppet agent MUST be installed for this to work.
Supports noop? false
reboot_required
Checks if a reboot is pending
Supports noop? false
reboot_required_linux
Checks if a reboot is pending
Supports noop? false
reboot_required_windows
Checks if a reboot is pending
Supports noop? false
snapshot_kvm
Creates or deletes snapshots on a set of KVM/Libvirt hypervisors
Supports noop? true
Parameters
vm_names
Data type: Variant[String[1], Array[String[1]]]
List of VM names, in KVM/Libvirt these are called domains.
snapshot_name
Data type: Optional[String[1]]
Name of the snapshot
snapshot_description
Data type: Optional[String[1]]
Description of the snapshot
snapshot_memory
Data type: Optional[Boolean]
Snapshot the VMs memory
snapshot_quiesce
Data type: Optional[Boolean]
Quiesce the filesystem during the snapshot, can be a PITA.
action
Data type: Enum['create', 'delete']
Action to perform on the snapshots. 'create' will create new snapshots on the VMs. 'delete' will delete snapshots on the VMs.
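As an illustration, the task might be invoked directly against a hypervisor (the hypervisor target and VM names are hypothetical):
bolt task run patching::snapshot_kvm --targets kvm01.domain.tld vm_names='["vm01","vm02"]' snapshot_name='pre_patching' action='create'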
update
Execute OS updates on the target. For RedHat/CentOS this runs yum update. For Debian/Ubuntu this runs apt upgrade. For Windows this runs Windows Update and choco update.
Supports noop? false
Parameters
provider
Data type: Optional[String]
What update provider to use. For Linux (RHEL, Debian, etc) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
names
Data type: Optional[Array[String]]
Name of the package(s) to update. If nothing is passed then all packages will be updated. Note: this currently only works for Linux; Windows support will be added in the future for both Windows Update and Chocolatey (TODO)
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. The data is written to a log file so that you can collect it later by running patching::history. If no file name is passed, the default on Linux hosts is /var/log/patching.json and on Windows hosts is C:/ProgramData/patching/log/patching.json.
log_file
Data type: Optional[String[1]]
Log file for OS specific output during the patching process. This file will contain OS specific (RHEL/CentOS = yum history, Debian/Ubuntu = /var/log/apt/history.log, Windows = ??) data that this task used to generate its output. If no file name is passed, the default on Linux hosts is /var/log/patching.log and on Windows hosts is C:/ProgramData/patching/log/patching.log.
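For example, to patch all packages on a group, or only selected packages on Linux (group and package names are illustrative):
# update everything
bolt task run patching::update --targets group_a
# update only specific packages (Linux only, per the note above)
bolt task run patching::update --targets linux_hosts names='["openssl","kernel"]'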
update_history
Reads the update history from the JSON 'result_file'.
Supports noop? false
Parameters
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. This is data that was written by patching::update. If no file name is passed, the default on Linux hosts is /var/log/patching.json and on Windows hosts is C:/ProgramData/patching/log/patching.json.
update_history_linux
Reads the update history from the JSON 'result_file'.
Supports noop? false
Parameters
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. This is data that was written by patching::update. If no file name is passed, the default on Linux hosts is /var/log/patching.json and on Windows hosts is C:/ProgramData/patching/log/patching.json.
update_history_windows
Reads the update history from the JSON 'result_file'.
Supports noop? false
Parameters
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. This is data that was written by patching::update. If no file name is passed, the default on Linux hosts is /var/log/patching.json and on Windows hosts is C:/ProgramData/patching/log/patching.json.
update_linux
Execute OS updates on the target. For RedHat/CentOS this runs yum update. For Debian/Ubuntu this runs apt upgrade. For SLES this runs zypper up. For Windows this runs Windows Update and choco update.
Supports noop? false
Parameters
provider
Data type: Optional[String[1]]
What update provider to use. For Linux (RHEL, Debian, SUSE, etc.) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
names
Data type: Optional[Array[String]]
Name of the package(s) to update. If nothing is passed then all packages will be updated. Note: this currently only works for Linux; Windows support will be added in the future for both Windows Update and Chocolatey (TODO)
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. The data is written to a log file so that you can collect it later by running patching::history. If no file name is passed, the default on Linux hosts is /var/log/patching.json and on Windows hosts is C:/ProgramData/patching/log/patching.json.
log_file
Data type: Optional[String[1]]
Log file for OS specific output during the patching process. This file will contain OS specific (RHEL/CentOS = yum history, Debian/Ubuntu = /var/log/apt/history.log, SLES = /var/log/zypp/history, Windows = ??) data that this task used to generate its output. If no file name is passed, the default on Linux hosts is /var/log/patching.log and on Windows hosts is C:/ProgramData/patching/log/patching.log.
update_windows
Execute OS updates on the target. For RedHat/CentOS this runs yum update. For Debian/Ubuntu this runs apt upgrade. For Windows this runs Windows Update and choco update.
Supports noop? false
Parameters
provider
Data type: Optional[String[1]]
What update provider to use. For Linux (RHEL, Debian, etc) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
names
Data type: Optional[Array[String]]
Name of the package(s) to update. If nothing is passed then all packages will be updated. Note: this currently only works for Linux; Windows support will be added in the future for both Windows Update and Chocolatey (TODO)
result_file
Data type: Optional[String[1]]
Log file for patching results. This file will contain the JSON output that is returned from these tasks. The data is written to a log file so that you can collect it later by running patching::history. If no file name is passed, the default on Linux hosts is /var/log/patching.json and on Windows hosts is C:/ProgramData/patching/log/patching.json.
log_file
Data type: Optional[String[1]]
Log file for OS specific output during the patching process. This file will contain OS specific (RHEL/CentOS = yum history, Debian/Ubuntu = /var/log/apt/history.log, Windows = ??) data that this task used to generate its output. If no file name is passed, the default on Linux hosts is /var/log/patching.log and on Windows hosts is C:/ProgramData/patching/log/patching.log.
Plans
patching
Our generic and semi-opinionated workflow. It serves as a showcase of how all of the building blocks in this module can be tied together to create a full-blown patching workflow. This is a great initial workflow to patch servers. We fully expect others to take this workflow as a building block and customize it to meet their needs.
Examples
CLI - Basic usage
bolt plan run patching --targets linux_patching,windows_patching
CLI - Disable snapshot creation, because an old patching run failed and we have an old snapshot to rely on
bolt plan run patching --targets linux_patching,windows_patching snapshot_create=false
CLI - Disable snapshot deletion, because we want to wait for app teams to test.
bolt plan run patching --targets linux_patching,windows_patching snapshot_delete=false
# sometime in the future, delete the snapshots
bolt plan run patching::snapshot_vmware --targets linux_patching,windows_patching action='delete'
CLI - Customize the pre/post update plans to use your own module's version
bolt plan run patching --targets linux_patching pre_update_plan='mymodule::pre_update' post_update_plan='mymodule::post_update'
CLI - Wait 10 minutes for systems to become available as some systems take longer to reboot.
bolt plan run patching --targets linux_patching,windows_patching reboot_wait=600
Parameters
The following parameters are available in the patching
plan:
targets
filter_offline_targets
monitoring_enabled
monitoring_plan
update_provider
pre_update_plan
post_update_plan
reboot_message
reboot_strategy
reboot_wait
snapshot_plan
snapshot_create
snapshot_delete
report_format
report_file
noop
targets
Data type: TargetSpec
Set of targets to run against.
filter_offline_targets
Data type: Boolean
Flag to determine if offline targets should be filtered out of the list of targets returned by this plan. If true, when running the puppet_agent::version check, any targets that return an error will be filtered out and ignored. Those targets will not be returned in any of the data structures in the result of this plan. If false, then any targets that are offline will cause this plan to error immediately when performing the online check. This will result in a halt of the patching process.
Default value: false
monitoring_enabled
Data type: Optional[Boolean]
Flag to enable/disable the execution of the monitoring_plan.
This is useful if you don't want to call out to a monitoring system during provisioning.
To configure this globally, use the patching_monitoring_enabled
var.
Default value: undef
monitoring_plan
Data type: Optional[String]
Name of the plan to use for disabling/enabling monitoring steps of the workflow.
To configure this globally, use the patching_monitoring_plan
var.
Default value: undef
update_provider
Data type: Optional[String]
What update provider to use. For Linux (RHEL, Debian, SUSE, etc.) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
Default value: undef
pre_update_plan
Data type: Optional[String]
Name of the plan to use for executing the pre-update step of the workflow.
Default value: undef
post_update_plan
Data type: Optional[String]
Name of the plan to use for executing the post-update step of the workflow.
Default value: undef
reboot_message
Data type: Optional[String]
Message displayed to the user prior to the system rebooting
Default value: undef
reboot_strategy
Data type: Optional[Enum['only_required', 'never', 'always']]
Determines the reboot strategy for the run.
- 'only_required' only reboots hosts that require it based on info reported from the OS
- 'never' never reboots the hosts
- 'always' will reboot the host no matter what
Default value: undef
reboot_wait
Data type: Optional[Integer]
Time in seconds that the plan waits before continuing after a reboot. This is necessary in case one of the groups affects the availability of a previous group. Two use cases here:
- A later group is a hypervisor. In this instance the hypervisor will reboot causing the VMs to go offline and we need to wait for those child VMs to come back up before collecting history metrics.
- A later group is a linux router. In this instance maybe the patching of the linux router affects the reachability of previous hosts.
Default value: undef
snapshot_plan
Data type: Optional[String]
Name of the plan to use for executing snapshot creation and deletion steps of the workflow.
You can also pass 'disabled' or undef as an easy way to disable both creation and deletion.
Default value: undef
snapshot_create
Data type: Optional[Boolean]
Flag to enable/disable creating snapshots before patching groups.
A common use case for disabling snapshot creation: say you run patching with snapshot_create enabled, something goes wrong during patching, and the run fails. The snapshot still exists and you want to retry patching, but you don't want to create ANOTHER snapshot on top of the one you already have. In this case, pass snapshot_create=false when running the second time.
Default value: undef
snapshot_delete
Data type: Optional[Boolean]
Flag to enable/disable deleting snapshots after patching groups.
A common use case for disabling snapshot deletion: say you want to patch your hosts and wait a few hours for application teams to test after you're done patching. In this case you can run with snapshot_delete=false and then, a few hours later, run patching::snapshot_vmware action=delete.
Default value: undef
report_format
Data type: Optional[Enum['none', 'pretty', 'csv']]
The method of formatting the report data.
Default value: undef
report_file
Data type: Optional[String]
Path of the filename where the report should be written. Default = 'patching_report.csv'. If you would like to disable writing the report file, specify a value of 'disabled'. NOTE: If you're running PE, then you'll need to disable writing reports because it will fail when running from the console.
Default value: undef
noop
Data type: Boolean
Flag to enable noop mode for the underlying plans and tasks.
Default value: false
patching::available_updates
This uses the patching::available_updates task to query each Target's Operating System for available updates. The results from the OS are parsed and formatted into easy to consume JSON data, such that further code can be written against the output.
- RHEL: This ultimately performs a yum check-update.
- Ubuntu: This ultimately performs an apt upgrade --simulate.
- Windows:
- Windows Update API: Queries the WUA for updates. This is the standard update mechanism for Windows.
- Chocolatey: If installed, runs choco outdated. If not installed, Chocolatey is ignored.
Examples
CLI - Basic Usage
bolt plan run patching::available_updates --targets linux_hosts
CLI - Get available update information in CSV format for creating reports
bolt plan run patching::available_updates --targets linux_hosts format=csv
Plan - Basic Usage
run_plan('patching::available_updates', $linux_hosts)
Plan - Get available update information in CSV format for creating reports
run_plan('patching::available_updates', $linux_hosts,
format => 'csv')
Parameters
The following parameters are available in the patching::available_updates
plan:
targets
Data type: TargetSpec
Set of targets to run against.
format
Data type: Enum['none', 'pretty', 'csv']
Output format for printing user-friendly information during the plan run. This also determines the format of the information returned from this plan.
- 'none' : Prints no data to the screen. Returns the raw ResultSet from the patching::available_updates task
- 'pretty' : Prints the data out in an easy-to-consume format, one line per host, showing the number of available updates per host. Returns a Hash containing two keys: 'has_updates' - an array of TargetSpec that have updates available, 'no_updates' - an array of hosts that have no updates available.
- 'csv' : Prints and returns CSV formatted data, one row for each update of each host.
Default value: 'pretty'
noop
Data type: Boolean
Run this plan in noop mode, meaning no changes will be made to end systems. In this case, noop mode has no effect.
Default value: false
provider
Data type: Optional[String]
What update provider to use. For Linux (RHEL, Debian, SUSE, etc.) this parameter is not used. For Windows the available values are: 'windows', 'chocolatey', 'all' (both 'windows' and 'chocolatey'). The default value for Windows is 'all'. If 'all' is passed and Chocolatey isn't installed then Chocolatey will simply be skipped. If 'chocolatey' is passed and Chocolatey isn't installed, then this will error.
Default value: undef
patching::check_online
Online checks are done by querying for the node's Puppet version using the puppet_agent::version task. This plan is designed to be used ad-hoc as a quick health check of your inventory. It is the intention of this plan to be used as a "first pass" when onboarding new targets into a Bolt rotation. One would build their inventory file of all targets from their trusted data sources, then take the inventory files and run this plan against them to isolate problem targets and remediate them. Once this plan runs successfully on your inventory, you know that Bolt can connect and you can begin the patching process.
There are no results returned by this plan, instead data is pretty-printed to the screen in two lists:
- List of targets that failed to connect. This list is a YAML list where each line is the name of a Target that failed to connect. The intention here is that you can use this YAML list to modify your inventory and remove these problem hosts from your groups.
- Details for each failed target. This provides details about the error that occurred when connecting. Failures can occur for many reasons: host being offline, host not listening on the right port, firewall blocking, invalid credentials, etc. The idea here is to give the end-user an easily digestible summary so that action can be taken to remediate these hosts.
Examples
CLI - Basic usage
bolt plan run patching::check_online
Parameters
The following parameters are available in the patching::check_online
plan:
targets
Data type: TargetSpec
Set of targets to run against.
patching::check_puppet
Executes the puppet_agent::version task to check if Puppet is installed on all of the targets. Once finished, the result is split into two groups:
- Targets with puppet
- Targets with no puppet
The targets with puppet are queried for facts using the patching::puppet_facts plan. Targets without puppet are queried for facts using the simpler facts plan.
This plan is designed to be the first plan executed in a patching workflow. It can be used to stop the patching process if any hosts are offline by setting filter_offline_targets=false (default). It can also be used to patch any hosts that are currently available and ignoring any offline targets by setting filter_offline_targets=true.
Examples
CLI - Basic usage (error if any targets are offline)
bolt plan run patching::check_puppet --targets linux_hosts
CLI - Filter offline targets (only return online targets)
bolt plan run patching::check_puppet --targets linux_hosts filter_offline_targets=true
Plan - Basic usage (error if any targets are offline)
$results = run_plan('patching::check_puppet', $linux_hosts)
$targets_has_puppet = $results['has_puppet']
$targets_no_puppet = $results['no_puppet']
$targets_all = $results['all']
Plan - Filter offline targets (only return online targets)
$results = run_plan('patching::check_puppet', $linux_hosts,
filter_offline_targets => true)
$targets_online_has_puppet = $results['has_puppet']
$targets_online_no_puppet = $results['no_puppet']
$targets_online = $results['all']
Parameters
The following parameters are available in the patching::check_puppet
plan:
targets
Data type: TargetSpec
Set of targets to run against.
filter_offline_targets
Data type: Boolean
Flag to determine if offline targets should be filtered out of the list of targets returned by this plan. If true, when running the puppet_agent::version check, any targets that return an error will be filtered out and ignored. Those targets will not be returned in any of the data structures in the result of this plan. If false, then any targets that are offline will cause this plan to error immediately when performing the online check. This will result in a halt of the patching process.
Default value: false
patching::deploy_scripts
The patching::deploy_scripts plan deploys custom patching scripts to targets.
Examples
CLI deploy a pre patching script
bolt plan run patching::deploy_scripts scripts='{"pre_patch.sh": {"source": "puppet:///modules/test/patching/pre_patch.sh"}}'
CLI deploy a pre and post patching script
bolt plan run patching::deploy_scripts scripts='{"pre_patch.sh": {"source": "puppet:///modules/test/patching/pre_patch.sh"}, "post_patch.sh": {"source": "puppet:///modules/test/patching/post_patch.sh"}}'
Parameters
The following parameters are available in the patching::deploy_scripts
plan:
owner
Data type: Optional[String]
Default owner of installed scripts
Default value: undef
group
Data type: Optional[String]
Default group of installed scripts
Default value: undef
mode
Data type: Optional[String]
Default file mode of installed scripts
Default value: undef
targets
Data type: TargetSpec
scripts
Data type: Hash
patching_dir
Data type: Optional[String]
Default value: undef
bin_dir
Data type: Optional[String]
Default value: undef
log_dir
Data type: Optional[String]
Default value: undef
patching::get_facts
Sets patching facts on targets
Examples
Get the patching_group fact (default)
bolt plan run patching::get_facts --targets xxx
Get different facts
bolt plan run patching::get_facts --targets xxx names='["fact1", "fact2"]'
Parameters
The following parameters are available in the patching::get_facts
plan:
targets
Data type: TargetSpec
Set of targets to run against.
names
Data type: Variant[String, Array[String]]
Name or list of fact names to retrieve from the targets
Default value: ['patching_group']
patching::get_targets
A very common requirement when running individual plans from the commandline is that each plan would need to perform the following steps:
- Convert the TargetSpec from a string into an Array[Target] using get_targets($targets)
- Check for targets that are online (calls plan patching::check_puppet)
- Gather facts about the targets
This plan combines all of that into one so that it can be reused in all of the other plans within this module. It also adds some smart checking so that, if multiple plans invoke each other (each of which calls this plan), the online check and facts gathering only happen once.
Examples
Plan - Basic usage
plan mymodule::myplan (
TargetSpec $targets
) {
$targets = run_plan('patching::get_targets', $targets)
# do normal stuff with your $targets
}
Parameters
The following parameters are available in the patching::get_targets
plan:
targets
Data type: TargetSpec
Set of targets to run against.
patching::monitoring_multiple
Disable monitoring for targets in multiple services
Examples
Remote target definition for $monitoring_target
vars:
patching_monitoring_plan: 'patching::monitoring_multiple'
patching_monitoring_plan_multiple:
- plan: 'patching::monitoring_solarwinds'
target: 'solarwinds'
- plan: 'patching::monitoring_prometheus'
target: 'prometheus'
groups:
- name: solarwinds
config:
transport: remote
remote:
port: 17778
username: 'domain\svc_bolt_sw'
password:
_plugin: pkcs7
encrypted_value: >
ENC[PKCS7,xxx]
targets:
- solarwinds.domain.tld
- name: prometheus
config:
transport: remote
remote:
username: 'domain\prom_user'
password:
_plugin: pkcs7
encrypted_value: >
ENC[PKCS7,xxx]
targets:
- prometheus.domain.tld
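With an inventory like the above in place, the plan could be invoked like this (a sketch; the group name is illustrative):
bolt plan run patching::monitoring_multiple --targets group_a action='disable'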
Parameters
The following parameters are available in the patching::monitoring_multiple
plan:
targets
Data type: TargetSpec
Set of targets to run against.
action
Data type: Enum['enable', 'disable']
What action to perform on the monitored targets:
- 'enable' Resumes monitoring alerts
- 'disable' Suppresses monitoring alerts
noop
Data type: Boolean
Flag to enable noop mode.
Default value: false
monitoring_plans
Data type: Array[Hash]
Default value: .vars['patching_monitoring_plan_multiple']
patching::monitoring_prometheus
Create or remove alert silences for hosts in Prometheus.
Examples
Remote target definition for $monitoring_target
vars:
patching_monitoring_target: 'prometheus'
patching_monitoring_silence_duration: 24
patching_monitoring_silence_units: 'hours'
groups:
- name: prometheus
config:
transport: remote
remote:
username: 'domain\prom_user'
password:
_plugin: pkcs7
encrypted_value: >
ENC[PKCS7,xxx]
targets:
- prometheus.domain.tld
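With that inventory, a disable run might look like the following (a sketch; the group name is illustrative):
bolt plan run patching::monitoring_prometheus --targets group_a action='disable' monitoring_target='prometheus'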
Parameters
The following parameters are available in the patching::monitoring_prometheus
plan:
targets
Data type: TargetSpec
Set of targets to run against.
action
Data type: Enum['enable', 'disable']
What action to perform on the monitored targets:
- 'enable' Resumes monitoring alerts
- 'disable' Suppresses monitoring alerts
monitoring_silence_duration
Data type: Optional[Integer]
How long the alert silence will be alive for
Default value: undef
monitoring_silence_units
Data type: Optional[Enum['minutes', 'hours', 'days', 'weeks']]
Goes with the silence duration to determine how long the alert silence will be alive for
Default value: undef
monitoring_target
Data type: Optional[TargetSpec]
Name or reference to the remote transport target of the Prometheus server. The remote transport should have the following properties:
- [String] username Username for authenticating with Prometheus
- [Password] password Password for authenticating with Prometheus
Default value: undef
noop
Data type: Boolean
Flag to enable noop mode. When noop mode is enabled, no monitoring changes will be made.
Default value: false
patching::monitoring_solarwinds
TODO config variables
Examples
Remote target definition for $monitoring_target
vars:
patching_monitoring_plan: 'patching::monitoring_solarwinds'
patching_monitoring_target: 'solarwinds'
groups:
- name: solarwinds
config:
transport: remote
remote:
port: 17778
username: 'domain\svc_bolt_sw'
password:
_plugin: pkcs7
encrypted_value: >
ENC[PKCS7,xxx]
targets:
- solarwinds.domain.tld
Parameters
The following parameters are available in the patching::monitoring_solarwinds
plan:
targets
Data type: TargetSpec
Set of targets to run against.
action
Data type: Enum['enable', 'disable']
What action to perform on the monitored targets:
- 'enable' Resumes monitoring alerts
- 'disable' Suppresses monitoring alerts
target_name_property
Data type: Optional[Enum['name', 'uri']]
Determines what property on the Target object will be used as the name when mapping the Target to a Node in SolarWinds.
- uri : use the uri property on the Target. This is preferred because if you specify a list of targets in the inventory file, the value shown in that list is set as the uri and not the name; in this case name will be undef.
- name : use the name property on the Target. This is not preferred because name is usually a short name or nickname.
Default value: undef
monitoring_target
Data type: Optional[TargetSpec]
Name or reference to the remote transport target of the Monitoring server. This will be used to determine how to communicate with the SolarWinds API. The remote transport should have the following properties:
- [Integer] port Port to use when communicating with SolarWinds API (default: 17778)
- [String] username Username for authenticating with the SolarWinds API
- [Password] password Password for authenticating with the SolarWinds API
Default value: undef
monitoring_name_property
Data type: Optional[String[1]]
Determines what property to match in SolarWinds when looking up targets. By default we determine if the target's name is an IP address, if it is then we use the 'IPAddress' property, otherwise we use whatever property this is set to. Available options that we've seen used are 'DNS' if the target's name is a DNS FQDN, or 'Caption' if you're looking up by a nick-name for the target. This can really be any field on the Orion.Nodes table.
Default value: undef
noop
Data type: Boolean
Flag to enable noop mode. When noop mode is enabled, no monitoring changes will be made.
Default value: false
patching::ordered_groups
When patching hosts it is common that you don't want to patch them all at the same time, for obvious reasons. To facilitate this we devised the concept of a "patching order". Patching order is a mechanism to allow targets to be organized into groups and then sorted so that a custom order can be defined for your specific usecase.
The way one assigns a patching order to a target or group is using vars in the Bolt inventory file.
Example:
---
groups:
- name: primary_nodes
vars:
patching_order: 1
targets:
- sql01.domain.tld
- name: backup_nodes
vars:
patching_order: 2
targets:
- sql02.domain.tld
When the patching_order is defined at the group level, it is inherited by all targets within that group.
The reason this plan exists is that there is no concept of a "group" in the bolt runtime, so we need to artificially recreate them using our patching_order vars paradigm.
An added benefit to this paradigm is that you may have grouped your targets logically on a different dimension, say by application. If it's OK that multiple applications be patched at the same time, we can assign the same patching order to multiple groups in the inventory. Then, when run through this plan, they will be aggregated together into one large group of targets that will all be patched concurrently.
Example, app_xxx and app_zzz both can be patched at the same time, but app_yyy needs to go later in the process:
---
groups:
- name: app_xxx
vars:
patching_order: 1
targets:
- xxx
- name: app_yyy
vars:
patching_order: 2
targets:
- yyy
- name: app_zzz
vars:
patching_order: 1
targets:
- zzz
This is returned as an Array, because an Array has a defined order when you iterate over it using .each. Ordering is important in patching so we wanted this to be very concrete.
Examples
Basic usage
$ordered_groups = run_plan('patching::ordered_groups', $targets)
$ordered_groups.each |$group_hash| {
$group_order = $group_hash['order']
$group_targets = $group_hash['targets']
# run your patching process for the group
}
Parameters
The following parameters are available in the patching::ordered_groups
plan:
targets
Data type: TargetSpec
Set of targets to create ordered groups of.
patching::post_update
Often in patching it is necessary to run custom commands before/after updates are applied to a host. This plan allows for that customization to occur.
By default it executes a Shell script on Linux and a PowerShell script on Windows hosts. The default script paths are:
- Linux:
/opt/patching/bin/post_update.sh
- Windows:
C:\ProgramData\patching\bin\post_update.ps1
One can customize the script paths by overriding them on the CLI or when calling the plan, using the script_linux and script_windows parameters.
The script paths can also be customized in the inventory configuration vars:
Example:
vars:
  patching_post_update_script_windows: C:\scripts\patching.ps1
  patching_post_update_script_linux: /usr/local/bin/mysweetpatchingscript.sh

groups:
  # these targets will use the post patching script defined in the vars above
  - name: regular_nodes
    targets:
      - tomcat01.domain.tld
  # these targets will use the customized patching script set for this group
  - name: sql_nodes
    vars:
      patching_post_update_script_linux: /bin/sqlpatching.sh
    targets:
      - sql01.domain.tld
Examples
CLI - Basic usage
bolt plan run patching::post_update --targets all_hosts
CLI - Custom scripts
bolt plan run patching::post_update --targets all_hosts script_linux='/my/sweet/script.sh' script_windows='C:\my\sweet\script.ps1'
Plan - Basic usage
run_plan('patching::post_update', $all_hosts)
Plan - Custom scripts
run_plan('patching::post_update', $all_hosts,
  script_linux   => '/my/sweet/script.sh',
  script_windows => 'C:\my\sweet\script.ps1')
Parameters
The following parameters are available in the patching::post_update
plan:
targets
Data type: TargetSpec
Set of targets to run against.
script_linux
Data type: String[1]
Path to the script that will be executed on Linux targets.
Default value: '/opt/patching/bin/post_update.sh'
script_windows
Data type: String[1]
Path to the script that will be executed on Windows targets.
Default value: 'C:\ProgramData\patching\bin\post_update.ps1'
noop
Data type: Boolean
Flag to enable noop mode for the underlying plans and tasks.
Default value: false
patching::pre_post_update
Common entry point for executing the pre/post update custom scripts
- See also
- patching::pre_update
- patching::post_update
Parameters
The following parameters are available in the patching::pre_post_update
plan:
targets
Data type: TargetSpec
Set of targets to run against.
task
Data type: String[1]
Name of the pre/post update task to execute.
script_linux
Data type: Optional[String[1]]
Path to the script that will be executed on Linux targets.
Default value: undef
script_windows
Data type: Optional[String[1]]
Path to the script that will be executed on Windows targets.
Default value: undef
noop
Data type: Boolean
Flag to enable noop mode for the underlying plans and tasks.
Default value: false
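A minimal sketch of calling this plan directly; treat the exact task value as an assumption, since this plan is normally invoked for you by the pre_update and post_update plans:
# run the pre-update scripts against the targets without making changes
run_plan('patching::pre_post_update', $targets,
  task         => 'patching::pre_update',
  script_linux => '/opt/patching/bin/pre_update.sh',
  noop         => true)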
patching::pre_update
Often in patching it is necessary to run custom commands before/after updates are applied to a host. This plan allows for that customization to occur.
By default it executes a Shell script on Linux and a PowerShell script on Windows hosts. The default script paths are:
- Linux:
/opt/patching/bin/pre_update.sh
- Windows:
C:\ProgramData\patching\bin\pre_update.ps1
One can customize the script paths by overriding them on the CLI or when calling the plan, using the script_linux and script_windows parameters.
The script paths can also be customized in the inventory configuration vars:
Example:
vars:
  patching_pre_update_script_windows: C:\scripts\patching.ps1
  patching_pre_update_script_linux: /usr/local/bin/mysweetpatchingscript.sh

groups:
  # these targets will use the pre patching script defined in the vars above
  - name: regular_nodes
    targets:
      - tomcat01.domain.tld
  # these targets will use the customized patching script set for this group
  - name: sql_nodes
    vars:
      patching_pre_update_script_linux: /bin/sqlpatching.sh
    targets:
      - sql01.domain.tld
Examples
CLI - Basic usage
bolt plan run patching::pre_update --targets all_hosts
CLI - Custom scripts
bolt plan run patching::pre_update --targets all_hosts script_linux='/my/sweet/script.sh' script_windows='C:\my\sweet\script.ps1'
Plan - Basic usage
run_plan('patching::pre_update', $all_hosts)
Plan - Custom scripts
run_plan('patching::pre_update', $all_hosts,
  script_linux   => '/my/sweet/script.sh',
  script_windows => 'C:\my\sweet\script.ps1')
Parameters
The following parameters are available in the patching::pre_update
plan:
targets
Data type: TargetSpec
Set of targets to run against.
script_linux
Data type: String[1]
Path to the script that will be executed on Linux targets.
Default value: '/opt/patching/bin/pre_update.sh'
script_windows
Data type: String[1]
Path to the script that will be executed on Windows targets.
Default value: 'C:\ProgramData\patching\bin\pre_update.ps1'
noop
Data type: Boolean
Flag to enable noop mode for the underlying plans and tasks.
Default value: false
patching::puppet_facts
This is inspired by https://github.com/puppetlabs/puppetlabs-facts/blob/master/plans/init.pp, except instead of just running facter it runs puppet facts to set additional facts that are only present in the context of puppet.
Under the hood it executes the patching::puppet_facts task.
Parameters
The following parameters are available in the patching::puppet_facts
plan:
targets
Data type: TargetSpec
Set of targets to run against.
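A minimal usage sketch, assuming the plan caches the gathered facts on each Target the way the puppetlabs-facts plan it is inspired by does (the 'os' fact path is a standard facter fact, shown only for illustration):
run_plan('patching::puppet_facts', $targets)
get_targets($targets).each |$target| {
  # facts() reads the facts that were cached on the Target by the plan
  $os_family = facts($target)['os']['family']
  out::message("${target.name} os family: ${os_family}")
}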
patching::reboot_required
Patching in different environments comes with various unique requirements; one of those is rebooting hosts. Sometimes hosts always need to be rebooted, other times never rebooted.
To provide this flexibility we created this plan, which wraps the reboot plan with a strategy that is controllable as a parameter. This provides flexibility in rebooting specific targets in certain ways (by group), along with the power to expand our strategy offerings in the future.
Parameters
The following parameters are available in the patching::reboot_required
plan:
targets
Data type: TargetSpec
Set of targets to run against.
strategy
Data type: Enum['only_required', 'never', 'always']
Determines the reboot strategy for the run.
- 'only_required' only reboots hosts that require it based on info reported from the OS
- 'never' never reboots the hosts
- 'always' will reboot the host no matter what
Default value: undef
message
Data type: String
Message displayed to the user prior to the system rebooting
Default value: undef
wait
Data type: Integer
Time in seconds that the plan waits before continuing after a reboot. This is necessary in case one of the groups affects the availability of a previous group. Two use cases here:
- A later group is a hypervisor. In this instance the hypervisor will reboot, causing the VMs to go offline, and we need to wait for those child VMs to come back up before collecting history metrics.
- A later group is a Linux router. In this instance the patching of the Linux router may affect the reachability of previous hosts.
Default value: undef
noop
Data type: Boolean
Flag to determine if this should be a noop operation or not. If this is a noop, no hosts will ever be rebooted, however the "reboot required" information will still be queried and returned.
Default value: false
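For example, a reboot step using the parameters above might look like this sketch (the message and wait values are illustrative):
# only reboot hosts that report a pending reboot, then pause
# for 5 minutes so dependent hosts can settle
run_plan('patching::reboot_required', $targets,
  strategy => 'only_required',
  message  => 'Rebooting for patching',
  wait     => 300)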
patching::set_facts
For Linux targets the facts will be written to /etc/facter/facts.d/patching.yaml. For Windows targets the facts will be written to C:/ProgramData/PuppetLabs/facter/facts.d/patching.yaml.
The contents of the patching.yaml file will be overwritten by this plan. TODO: Provide an option to merge with existing facts.
Once the facts are written, by default, the facts will be gathered and uploaded to PuppetDB. If you wish to disable this, simply set upload=false.
Examples
Set the patching_group fact
bolt plan run patching::set_facts --targets xxx patching_group=tuesday_night
Set the custom facts
bolt plan run patching::set_facts --targets xxx custom_facts='{"fact1": "blah"}'
Don't upload facts to PuppetDB
bolt plan run patching::set_facts --targets xxx patching_group=tuesday_night upload=false
Parameters
The following parameters are available in the patching::set_facts
plan:
targets
Data type: TargetSpec
Set of targets to run against.
patching_group
Data type: Optional[String]
Name of the patching group that the targets are a member of. This will be the value for the patching_group fact.
Default value: undef
custom_facts
Data type: Hash
Hash of custom facts that will be set on these targets. This can be anything you like and will be merged with the other facts above.
Default value: {}
upload
Data type: Boolean
After setting the facts, perform a puppet facts upload so the new facts are stored in PuppetDB.
Default value: true
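The same operations from within a plan might look like this sketch (the custom fact name and value are arbitrary examples):
run_plan('patching::set_facts', $targets,
  patching_group => 'tuesday_night',
  custom_facts   => {'patching_owner' => 'dba_team'},
  upload         => true)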
patching::snapshot_kvm
Runs commands on the CLI of the KVM/Libvirt hypervisor host.
Parameters
The following parameters are available in the patching::snapshot_kvm
plan:
targets
action
target_name_property
snapshot_name
snapshot_description
snapshot_memory
snapshot_quiesce
hypervisor_targets
noop
targets
Data type: TargetSpec
Set of targets to run against.
action
Data type: Enum['create', 'delete']
What action to perform on the snapshots:
- 'create' creates a new snapshot
- 'delete' deletes snapshots by matching the snapshot_name passed in
target_name_property
Data type: Optional[Enum['hostname', 'name', 'uri']]
Determines what property on the Target object will be used as the VM name when mapping the Target to a VM on the hypervisor.
- 'uri': use the uri property on the Target. This is preferred because, if you specify a list of Targets in the inventory file, the value shown in that list is set as the uri and not the name; in this case name will be undef.
- 'name': use the name property on the Target. This is not preferred because name is usually a short name or nickname.
- 'hostname': use the host component of the uri property on the Target. This can be useful if the VM name doesn't include the domain name.
Default value: undef
snapshot_name
Data type: Optional[String[1]]
Name of the snapshot
Default value: undef
snapshot_description
Data type: Optional[String]
Description of the snapshot
Default value: undef
snapshot_memory
Data type: Optional[Boolean]
Capture the VMs memory in the snapshot
Default value: undef
snapshot_quiesce
Data type: Optional[Boolean]
Quiesce/flush the filesystem when snapshotting the VM. For KVM/Libvirt this generally requires a guest agent (such as the QEMU guest agent) to be installed in the guest OS to work properly.
Default value: undef
hypervisor_targets
Data type: Optional[TargetSpec]
Name or reference to the targets of the KVM hypervisors. We will log in to these hosts and run the snapshot tasks so that the local CLI can be used. The default target name is "kvm_hypervisors"; this can be a group of targets too!
Default value: undef
noop
Data type: Boolean
Flag to enable noop mode. When noop mode is enabled no snapshots will be created or deleted.
Default value: false
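A hedged sketch of the create/delete lifecycle around patching, assuming a 'kvm_hypervisors' group exists in your inventory (the default noted above):
# create snapshots before patching the group
run_plan('patching::snapshot_kvm', $targets,
  action             => 'create',
  snapshot_name      => 'pre_patching',
  hypervisor_targets => 'kvm_hypervisors')
# ... patch and verify ...
# then delete the matching snapshots once everything is healthy
run_plan('patching::snapshot_kvm', $targets,
  action             => 'delete',
  snapshot_name      => 'pre_patching',
  hypervisor_targets => 'kvm_hypervisors')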
patching::snapshot_vmware
Communicates with the vSphere API from the local Bolt control node using the rbvmomi Ruby gem.
To install the rbvmomi gem on the bolt control node:
/opt/puppetlabs/bolt/bin/gem install --user-install rbvmomi
TODO config variables
Parameters
The following parameters are available in the patching::snapshot_vmware
plan:
targets
action
target_name_property
vsphere_host
vsphere_username
vsphere_password
vsphere_datacenter
vsphere_insecure
snapshot_name
snapshot_description
snapshot_memory
snapshot_quiesce
noop
targets
Data type: TargetSpec
Set of targets to run against.
action
Data type: Enum['create', 'delete']
What action to perform on the snapshots:
- 'create' creates a new snapshot
- 'delete' deletes snapshots by matching the snapshot_name passed in
target_name_property
Data type: Optional[Enum['hostname', 'name', 'uri']]
Determines what property on the Target object will be used as the VM name when mapping the Target to a VM in vSphere.
- 'uri': use the uri property on the Target. This is preferred because, if you specify a list of Targets in the inventory file, the value shown in that list is set as the uri and not the name; in this case name will be undef.
- 'name': use the name property on the Target. This is not preferred because name is usually a short name or nickname.
- 'hostname': use the host component of the uri property on the Target. This can be useful if the VM name doesn't include the domain name.
Default value: undef
vsphere_host
Data type: String[1]
Hostname of the vSphere server that we're going to use to create snapshots via the API.
Default value: .vars['vsphere_host']
vsphere_username
Data type: String[1]
Username to use when authenticating with the vSphere API.
Default value: .vars['vsphere_username']
vsphere_password
Data type: String[1]
Password to use when authenticating with the vSphere API.
Default value: .vars['vsphere_password']
vsphere_datacenter
Data type: String[1]
Name of the vSphere datacenter to search for VMs under.
Default value: .vars['vsphere_datacenter']
vsphere_insecure
Data type: Boolean
Flag to enable insecure HTTPS connections by disabling SSL server certificate verification.
Default value: .vars['vsphere_insecure']
snapshot_name
Data type: Optional[String[1]]
Name of the snapshot
Default value: undef
snapshot_description
Data type: Optional[String]
Description of the snapshot
Default value: undef
snapshot_memory
Data type: Optional[Boolean]
Capture the VMs memory in the snapshot
Default value: undef
snapshot_quiesce
Data type: Optional[Boolean]
Quiesce/flush the filesystem when snapshotting the VM. This requires VMware tools be installed in the guest OS to work properly.
Default value: undef
noop
Data type: Boolean
Flag to enable noop mode. When noop mode is enabled no snapshots will be created or deleted.
Default value: false
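As a sketch, creating snapshots with explicit connection parameters; in practice these usually come from the vsphere_* vars on your targets, and the host, credential, and datacenter values below are placeholders:
run_plan('patching::snapshot_vmware', $targets,
  action             => 'create',
  snapshot_name      => 'pre_patching',
  vsphere_host       => 'vcenter.domain.tld',
  vsphere_username   => 'patching_svc',
  vsphere_password   => 'CHANGEME',  # placeholder; source this from a secret store
  vsphere_datacenter => 'dc1',
  vsphere_insecure   => false)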
patching::update_history
When executing the patching::update task, the data that is returned to Bolt is also written into a "results" file. This plan reads the last JSON document from that results file, then formats the results in various ways. This is useful for gathering patching report data on a fleet of servers.
If you're using this in a larger workflow and you've run patching::update inline, you can pass the ResultSet from that task into the history parameter of this plan and we will skip retrieving the history from the targets and simply use that data.
By default the report is also written to a file patching_report.csv. If you would like to disable this you can pass undef or 'disabled' to the report_file parameter. You can also customize the filename by specifying the patching_report_file var on the target or group.
The report format can also be customized using the inventory var patching_report_format on the target or group.
Parameters
The following parameters are available in the patching::update_history
plan:
targets
Data type: TargetSpec
Set of targets to run against.
history
Data type: Optional[ResultSet]
Optional ResultSet from the patching::update or patching::update_history tasks that contains update result data to be formatted.
Default value: undef
report_file
Data type: Optional[String]
Optional filename to save the formatted report into. If undef or 'disabled' is passed, then no report file will be written.
NOTE: If you're running PE, then you'll need to disable writing reports because it will fail when running from the console.
Default value: 'patching_report.csv'
format
Data type: Enum['none', 'pretty', 'csv']
The method of formatting to use for the data.
Default value: 'pretty'
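For instance, a sketch of the inline-workflow case described above; run_task returns the ResultSet that the history parameter expects (the bare patching::update invocation is an assumption, your workflow may pass task parameters):
$update_results = run_task('patching::update', $targets)
run_plan('patching::update_history', $targets,
  history => $update_results,
  format  => 'pretty')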
What are tasks?
Modules can contain tasks that take action outside of a desired state managed by Puppet. It’s perfect for troubleshooting or deploying one-off changes, distributing scripts to run across your infrastructure, or automating changes that need to happen in a particular order as part of an application deployment.
Tasks in this module release
cache_remove
Removes/clears the target's update cache. For RHEL/CentOS this means a `yum clean all`. For Debian this means an `apt update`. For Windows this means a Windows Update refresh.
cache_update
Updates the target's update cache. For RHEL/CentOS this means a `yum clean expire-cache`. For Debian this means an `apt update`. For Windows this means a Windows Update refresh.
puppet_facts
Gather system facts using 'puppet facts'. Puppet agent MUST be installed for this to work.
reboot_required
Checks if a reboot is pending
What are plans?
Modules can contain plans that take action outside of a desired state managed by Puppet. It’s perfect for troubleshooting or deploying one-off changes, distributing scripts to run across your infrastructure, or automating changes that need to happen in a particular order as part of an application deployment.
Changelog
All notable changes to this project will be documented in this file.
Development
Release 1.7.0 (2022-04-18)
- Added support for AlmaLinux. (Enhancement)
  Contributed by Vadym Chepkov (@vchepkov)
- Fixed an issue where the update history was not being reported back on RHEL 8 systems.
  Contributed by Bradley Bishop (@bishopbm1)
Release 1.6.0 (2021-07-08)
- Added support for Oracle Linux.
  Contributed by Sean Millichamp (@seanmil)
Release 1.5.1 (2021-07-08)
- Removed unused hiera configuration.
  Contributed by Vadym Chepkov (@vchepkov)
Release 1.5.0 (2021-07-07)
- Added a disconnect_wait input that is passed to the reboot plan so that there can be controls around when the plan checks whether the server has rebooted.
  Contributed by Bradley Bishop (@bishopbm1)
- Added support for Rocky Linux. (Enhancement)
  Contributed by Vadym Chepkov (@vchepkov)
Release 1.4.0 (2021-04-30)
- Added a new plan and task patching::snapshot_kvm for creating/deleting snapshots on KVM/libvirt.
  Contributed by Nick Maludy (@nmaludy)
Release 1.3.0 (2021-03-05)
- Fixed an issue where puppet facts were not in the expected spot, causing the puppet_facts plan to fail. We added a conditional to check for the facts in both places.
  Contributed by Bradley Bishop (@bishopbm1)
- Bumped module puppetlabs/puppet_agent to < 5.0.0.
  Contributed by @fetzerms
- Added module puppetlabs/reboot at >= 3.0.0 < 5.0.0.
  Contributed by @fetzerms
- Bumped the puppet requirement to < 8.0.0 to support Puppet 7.
  Contributed by Nick Maludy (@nmaludy)
- PDK update to 2.0.0.
  Contributed by Nick Maludy (@nmaludy)
- Removed tests for Puppet 5. NOTICE: Puppet 5 support will be removed in the next major version.
  Contributed by Nick Maludy (@nmaludy)
- Added tests for Puppet 7.
  Contributed by Nick Maludy (@nmaludy)
Release 1.2.1 (2021-02-02)
- Fixed an issue where arguments for the reboot strategy were being overridden by the inventory file.
  Contributed by Bradley Bishop (@bishopbm1)
- Switched from Travis to GitHub Actions.
  Contributed by Nick Maludy (@nmaludy)
Release 1.2.0 (2020-12-02)
- Added the monitoring_prometheus bolt plan and task to optionally create/delete silences in Prometheus to suppress alerts for the given targets.
- Added the monitoring_multiple bolt plan to enable/disable monitoring for multiple different services at once.
  Contributed by John Schoewe (@jschoewe)
Release 1.1.1 (2020-06-09)
- Fixed the header line for CSV.
  Contributed by Haroon Rafique
- Fixed a trivial bug with a useless use of cat.
  Contributed by Haroon Rafique
- Added a new configuration option, patching_update_provider, which sets the provider in the update tasks.
  Contributed by Bill Sirinek (@sirinek)
- Fixed a bug in patching::available_updates_windows where, if choco outdated printed an error but returned a 0 exit status, our output-parsing code threw an exception, causing an unhelpful error to be printed. Now we check for this condition and, if we can't successfully parse the output of choco outdated, we explicitly fail the task and return the raw output from the command.
  Contributed by Nick Maludy (@nmaludy)
Release 1.1.0 (2020-04-15)
- Added new plans: patching::get_facts to retrieve a set of facts from a list of targets, and patching::set_facts to set facts on a list of targets. This is used to assign the patching_group fact so that we can query PuppetDB for group information in dynamic Bolt inventories.
  Contributed by Nick Maludy (@nmaludy)
- Fixed a bug with a hard-coded wait for reboot. (Bug Fix)
  Contributed by Michael Surato (@msurato)
- Added hostname as a choice for patching::snapshot_vmware::target_name_property. It can be used in cases where target discovery uses fully qualified domain names and VM names don't have a domain name component.
  Contributed by Vadym Chepkov (@vchepkov)
- Fixed a bug in the patching::monitoring_solarwinds plan where the patching_monitoring_name_property config value wasn't being honored. (Bug Fix)
  Contributed by Nick Maludy (@nmaludy)
- Fixed a bug in the patching::update task on RHEL where errors in the yum command were being reported due to the use of a |. Now we check $PIPESTATUS[0] instead of $?. (Bug Fix)
  Contributed by Nick Maludy (@nmaludy)
- Added new configuration options:
  - patching_reboot_wait: controls the reboot_wait option for the number of seconds to wait between reboots. Default = 300
  - patching_report_file: customizes the name of the report file to write to disk. You can disable writing the report files by specifying this as 'disabled'. NOTE: for PE users writing files to disk throws an error, so you'll be happy you can now disable writing these files! Default = patching_report.csv
  - patching_report_format: customizes the format of the reports written to the report file. Default = pretty
  (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
- To support the new configuration options above, the patching::reboot_required plan had its parameter reboot_wait renamed to wait. (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
Release 1.0.1 (2020-03-04)
- Ensured the patching.json file exists on Windows by creating a blank file if it was previously missing.
  Contributed by Bill Sirinek (@sirinek)
- Used name instead of host to better represent targets in the inventory.
  Contributed by Vadym Chepkov (@vchepkov)
- Fixed a bug where, if the patching::update_history task was called and no results were returned, the patching::update_history plan would fail. Now we default to an empty array so a 0 is displayed.
  Contributed by Nick Maludy (@nmaludy)
Release 1.0.0 (2020-02-28)
- BREAKING CHANGE: Converted from nodes to targets for all plans and tasks. This is in support of Bolt 2.0. Any calling plans or CLI will need to use the targets parameter to pass in the hosts to be patched. (Feature)
  Contributed by Nick Maludy (@nmaludy)
- Fixed inconsistent documentation for the result file location; the proper location is C:/ProgramData/patching/log/patching.json. (Bug Fix) #28
  Contributed by Nick Maludy (@nmaludy)
- Added documentation for patching with PE and pcp timeouts. (Documentation) #28
  Contributed by Nick Maludy (@nmaludy)
- PDK sync to 1.17.0 template. (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
Release 0.5.0 (2020-02-20)
- Made the timeout after reboot a configurable parameter. (Enhancement)
  Contributed by Michael Surato (@msurato)
- Fixed a bug in patching::snapshot_vmware where the wrong snapshot name was printed to the user. (Bug Fix)
  Contributed by Nick Maludy (@nmaludy)
- Fixed a bug in patching::available_updates_windows where using provider=windows threw an error. (Bug Fix)
  Contributed by Nick Maludy (@nmaludy)
- Added support for Fedora Linux. (Enhancement)
  Contributed by Vadym Chepkov (@vchepkov)
- Modified the location of the puppet executable on Linux to use the supported wrapper. This sets library paths to solve consistency issues. (Bug Fix)
  Contributed by Michael Surato (@msurato)
Release 0.4.0 (2020-01-06)
- Added support for SUSE Linux Enterprise. (Enhancement)
  Contributed by Michael Surato (@msurato)
- Modified the scripts to use /etc/os-release. This will fall back to older methods in the absence of /etc/os-release. (Enhancement)
  Contributed by Michael Surato (@msurato)
- Re-establish all targets' availability after reboot.
  Contributed by Vadym Chepkov (@vchepkov)
- Fixed a bug in patching::puppet_facts where the sub command would fail to run on installations with custom GEM_PATH settings. (Bug Fix)
  Contributed by Nick Maludy (@nmaludy)
- Changed the property we use to look up SolarWinds nodes from 'Caption' to 'DNS' by default. Also made the property configurable using patching_monitoring_name_property. There are now new parameters on the patching::monitoring_solarwinds task and plans to allow specifying what property we are matching on the SolarWinds side. (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
Release 0.3.0 (2019-10-30)
- Added support for RHEL 8 based distributions. (Enhancement)
  Contributed by Vadym Chepkov (@vchepkov)
- Added shields/badges to the README. (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
- Added the ability to enable/disable monitoring during patching. The first implementation does this in the SolarWinds monitoring tool:
  - Task - patching::monitoring_solarwinds: enables/disables monitoring for a list of node names.
  - Plan - patching::monitoring_solarwinds: wraps the patching::monitoring_solarwinds task in an easier-to-consume fashion, along with configuration option parsing and pretty printing. (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
- Changed the name of the configuration option patching_vm_name_property to patching_snapshot_target_name_property. This correlates to the new property that was just added (below). (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
- Added new configs:
  - patching_monitoring_plan: name of the plan to execute for monitoring alerts control. (default: patching::monitoring_solarwinds)
  - patching_monitoring_enabled: enable/disable the monitoring phases of patching. (default: true)
  - patching_monitoring_target_name_property: determines what property on the target maps to the node's name in the monitoring tool (SolarWinds). This was intentionally made distinct from patching_snapshot_target_name_property in case the tools use different names for the same node/target.
  Contributed by Nick Maludy (@nmaludy)
- Empty strings '' for plan names no longer disable the execution of plans (the pick() function removes these, so they get ignored). Instead pass in the string 'disabled' to disable the use of a pluggable plan. (Bug fix)
  Contributed by Nick Maludy (@nmaludy)
Release 0.2.0
- Renamed task implementations to _linux and _windows to work around a Forge bug where it didn't support that Bolt feature and was denying module submission. Due to this I also had to create matching task metadata for _linux and _windows and mark them as "private": true so that they are not visible in bolt task show. (Enhancement)
  Contributed by Nick Maludy (@nmaludy)
Release 0.1.0
Features
Bugfixes
Known Issues
Dependencies
- puppetlabs/puppet_agent (>= 2.2.0 < 5.0.0)
- puppetlabs/reboot (>= 3.0.0 < 5.0.0)
- puppetlabs/stdlib (>= 4.13.1 < 7.0.0)
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.